[mythtv-users] random livetv stalls

Michael Watson michael at thewatsonfamily.id.au
Thu Feb 6 14:03:40 UTC 2014


On 6/02/2014 6:43 PM, Jean-Yves Avenard wrote:
> On 6 February 2014 14:39, Kevin Johnson <iitywygms at gmail.com> wrote:
>> [  4] local 192.168.2.6 port 5001 connected with 192.168.2.4 port 52745
>> [ ID] Interval       Transfer     Bandwidth
>> [  4]  0.0-10.1 sec   112 MBytes  93.4 Mbits/sec
> if that's a 100 Mbit/s link it's okay (not great); if it's gigabit it's pretty poor :)
>
>> That seems to be good numbers.  Every frontend is similar in results.
>>
>> I also checked the drives using palimpsest
>>
>> /dev/sdb1:
>>   Timing cached reads:   15616 MB in  2.00 seconds = 7813.59 MB/sec
>>   Timing buffered disk reads: 378 MB in  3.01 seconds = 125.57 MB/sec
>>
>> /dev/sdc1:
>>   Timing cached reads:   16480 MB in  2.00 seconds = 8246.36 MB/sec
>>   Timing buffered disk reads: 232 MB in  3.02 seconds =  76.73 MB/sec
>>
>> /dev/sda1:
>>   Timing cached reads:   12612 MB in  2.00 seconds = 6309.82 MB/sec
>>   Timing buffered disk reads: 320 MB in  3.00 seconds = 106.55 MB/sec
>>
>> Those numbers seem okay too?
> they are fairly meaningless really; the cached-read value only shows
> how fast your RAM cache is, not the actual drive. You need to test
> with a much bigger file size.
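One common way to get numbers the cache can't inflate is a plain dd write/read pass (dd isn't mentioned in the thread; this is only a sketch, and the size shown is a placeholder — in practice pick a file larger than installed RAM):

```shell
# Sketch of a throughput test that defeats the cache (GNU dd assumed).
# Pick a size larger than installed RAM so the page cache cannot serve
# the reads back; 64 MiB is used here only so the example runs quickly.
SIZE_MB=64
TESTFILE="$(mktemp /tmp/disktest.XXXXXX)"

# Write test: conv=fdatasync forces the data to disk before dd reports a rate
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync

# Read test: for honest numbers, drop the page cache first (as root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```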
>
>
>
>> I do use nfs on the backend.  But mythtv does not use that directory at all.
>> All my frontends can access that shared nfs directory on the backend.
>> However I only use nfs to share random files and other minor things.
>> Could the nfs mounts still be causing this?
> if the file is available locally, the frontend will access it directly
> instead of streaming it from the backend.
>
> by "locally" I mean that there's a direct path to the file, be it via
> a locally mounted file system or a networked one (NFS, SMB, you name it)
>
> example:
> the backend stores the recordings to:
> /data/recordings/recording1.mpg
>
> the frontend has mounted /data/recordings via NFS
> the frontend sees that the file /data/recordings/recording1.mpg exists
> so it will read the file directly.
> Here it goes via NFS and will *not* stream it from the backend.
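The decision described above boils down to a simple readability check; this is only a sketch, not MythTV's actual code, using the example path from the thread:

```shell
# Sketch of the frontend's decision: if the backend's recording path is
# readable locally (local disk or an NFS/SMB mount), open it directly;
# otherwise stream the file from the backend over the MythTV protocol.
RECORDING=/data/recordings/recording1.mpg   # path from the example above
if [ -r "$RECORDING" ]; then
    echo "direct read (local disk or NFS/SMB mount)"
else
    echo "streaming from the backend"
fi
```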
>
> I've personally experienced issues with NFS, not so much the speed of
> transfer, but the lag between the time one client writes to the file
> and the time another client sees the new size.
>
> I was mounting NFS with the option: rsize=8192,wsize=8192,timeo=14,intr
I ended up with the mount options
"proto=tcp,retry=10,rsize=32768,wsize=32768,hard,intr". Shares are
mounted by automount, using NFSv4. Using NFSv4 and tuning wsize and
rsize solved all my problems. automount is used so the drives can
enter powersave.
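For reference, those options would sit in an autofs map roughly like this (a sketch only: the map file name, mount key and export path are assumptions; "mbe" is the backend host from this thread):

```
# /etc/auto.myth -- hypothetical autofs map entry; adjust key, server and path
recordings  -fstype=nfs4,proto=tcp,retry=10,rsize=32768,wsize=32768,hard,intr  mbe:/data/recordings
```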

I have 2x SBE/FE and 1x FE reading and writing via NFS to storage
located on mbe. I run a gigabit network, with 1x FE on wireless.

>
> Simply because that's what I had been using for years, and many
> tutorials give those options.
> intr is a deprecated option and does nothing with kernels >= 2.6.26
>
> removing rsize=8192,wsize=8192 fixed the issue for me.
> So now I only have timeo=14 as an NFS mount option. That resolved the
> hang issue I was having in LiveTV...
>
> Having said that, as a trial I would unmount the NFS filesystem, let
> the frontend access the recordings by streaming them from the
> backend, and see how it goes.
>
>
>
>> Any other idea for fixing the long delays between the backend and the
>> frontend?
> at this stage no
>
>> I assume this long delay is the reason my frontend computers randomly lock
>> up?
> yes, when there's a long delay, the frontend gives up and it doesn't
> always recover gracefully, often taking minutes before it's available
> again.
> It's even worse when streaming from the backend. I usually have to
> kill the frontend (I have a button on my remote that does just that;
> a very unfortunate requirement)
>


