[mythtv-users] Tips for user jobs on NFS mounts over Wi-Fi?

Dave MythTV dave.mythtv at gmail.com
Wed May 6 03:11:44 UTC 2015


Thanks everyone.
I can see that there are a lot of different aspects to this, so I'll try to
work through your suggestions one at a time.


f-myth-users:  I do have other things on the wifi network that need some
bandwidth, but luckily nothing that needs a substantial amount as my
primary frontend machine is also the master backend.  I think I could
mitigate most of my other-use bandwidth issues by restricting the times
that the job queue is allowed to run... and it would be safe to saturate
the wifi in the off-hours.
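
Something like this at the top of the user-job wrapper script could enforce
an off-hours window (the 01:00-07:00 window here is just an assumption, and
MythTV's own job-queue start/end time settings may make it unnecessary):

```shell
#!/bin/sh
# Sketch: gate a user-job wrapper so it only runs in off-hours, when
# saturating the wifi is safe.  The 01:00-07:00 window is an assumption.
in_window() {
  hour=${1#0}                  # hour of day, 00-23; strip a leading zero
  [ "$hour" -ge 1 ] && [ "$hour" -lt 7 ]
}

# in_window "$(date +%H)" || exit 0   # outside the window: defer quietly
```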

I'll definitely do some testing, both of bandwidth and of transcoding speed
on the various machines, and see where the numbers are at.  I haven't done
any tuning at all of the NFS mount... so I'll have to check what the
default options are, and whether anything can be improved.  It's an NFSv4
share, so it will be using TCP; in fact NFSv4 requires TCP, so switching to
UDP would mean falling back to NFSv3.  Even so, I might gain a bit of
performance and be OK with the occasional retries, since it is a
non-realtime transfer?
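
For reference, a tuned mount might look something like this in /etc/fstab
(the host, paths, and rsize/wsize values are all assumptions to check
against my actual setup):

```
# Hypothetical fstab entry; host, paths, and buffer sizes are assumptions.
mbe:/recordings  /mnt/recordings  nfs4  rw,hard,proto=tcp,rsize=65536,wsize=65536,noatime  0  0
```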

No idea if the transcoder is having issues with the I/O stalls, or if it is
fine now but would have problems later when the network is more congested.
Time for some logging!
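
On Linux, nfsiostat (from nfs-utils) and nfsstat -m are probably the first
things to look at; a crude per-read timer like this sketch could also make
stalls show up as outliers in a log (the file argument is a placeholder for
a recording on the NFS mount):

```shell
#!/bin/sh
# Sketch: time a fixed-size read so I/O stalls show up as outliers.
# The file argument is a placeholder for a recording on the NFS mount.
log_read_time() {
  start=$(date +%s%N)                        # nanoseconds since epoch (GNU date)
  dd if="$1" of=/dev/null bs=1M count=16 2>/dev/null
  end=$(date +%s%N)
  echo "$(date -u) read took $(( (end - start) / 1000000 )) ms"
}
```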


Great idea, Michael, locking the files/directory.  With a modification of
your proposal, you could lock just the file-transfer stages and unlock
during the transcode... which, when coupled with the copy-transcode-copy
method, could permit any number of backends to crunch through the video in
parallel without overwhelming the network (or my master backend) by having
them all transferring data simultaneously.    Hmmmm.....
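
A quick sketch of that idea with flock(1): only the network copies take the
exclusive lock, while the transcode runs unlocked (the lock path, file
names, and the transcode command are all placeholders):

```shell
#!/bin/sh
# Sketch: serialize only the network copies across backends with an
# exclusive lock, so several backends can transcode in parallel but
# only one transfers at a time.  Lock path is an assumption.
LOCK=/tmp/myth-copy.lock

locked_copy() {
  (
    flock -x 9              # block until we hold the exclusive lock
    cp "$1" "$2"            # the actual transfer over the NFS mount
  ) 9>"$LOCK"
}

# locked_copy /mnt/recordings/in.mpg /scratch/in.mpg    # copy in (locked)
# transcode /scratch/in.mpg /scratch/out.mpg            # transcode (unlocked)
# locked_copy /scratch/out.mpg /mnt/recordings/out.mpg  # copy back (locked)
```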



Gary, sneakernet of large batches of recordings is another very good idea.
I'm sure the overall efficiency would go WAY up, which is a huge benefit.
The only issue I can think of is how to tell MythTV which recordings I have
moved to which backend when it processes the user job.   There's the
setting to only run jobs on the backend that recorded the program, but
wouldn't that fail, since the database says that all of the recordings were
made on the master backend?   Another possible issue is that the remote
backends are not completely dedicated machines, so they won't have 100%
availability from the perspective of the frontend or the master backend...
so things like auto-expiration for an episode limit might fail?
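
If it came to it, maybe the moved recordings could be retagged in the
database: as I understand the mythconverg schema, the recorded table has a
hostname column, and the "run on recording host" setting keys off it.  A
sketch that just emits the UPDATE statement (table and column names should
be verified against the actual schema; the basename and host are
placeholders):

```shell
#!/bin/sh
# Sketch: emit the SQL to retag one recording onto a new backend.
# Table/column names are from the mythconverg schema as I understand
# it; verify before running anything against a live database.
retag_sql() {
  basename=$1
  newhost=$2
  printf "UPDATE recorded SET hostname='%s' WHERE basename='%s';\n" \
    "$newhost" "$basename"
}

# retag_sql 1234_20150505030000.mpg backend2 | mysql mythconverg
```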


Great ideas, everyone.   Thanks again!
- Dave