[mythtv-users] random livetv stalls
Kevin Johnson
iitywygms at gmail.com
Thu Feb 6 03:39:51 UTC 2014
Hi all,
I am trying to chase down a stall I am getting. It happens at random
times: sometimes once a day, sometimes once a week.
I am on 12.04 LTS with the latest Mythbuntu installed.
The backend does not show anything unusual in the logs, just lines and lines
of this when it happens, which from what I understand is normal.
Feb 4 17:32:20 mythbackend mythbackend: mythbackend[20022]: I
ProcessRequest ringbuffer.cpp:1098 (WaitForAvail)
RingBuf(/var/lib/mythtv/livet 702_2$
The frontend shows this.
Feb 4 17:32:20 g430 mythfrontend.real: mythfrontend[1915]: W Decoder
ringbuffer.cpp:1035 (WaitForReadsAllowed) RingBuf(myth://
192.168.2.4:6543/1702_2$
Feb 4 17:32:20 mythfrontend.real: last message repeated 5 times
Feb 4 17:32:20 g430 mythfrontend.real: mythfrontend[1915]: N CoreContext
mythplayer.cpp:2130 (PrebufferEnoughFrames) Player(j): Waited 6730ms for
vid$
Feb 4 17:32:20 g430 mythfrontend.real: mythfrontend[1915]: W Decoder
ringbuffer.cpp:1035 (WaitForReadsAllowed) RingBuf(myth://
192.168.2.4:6543/1702_2$
Feb 4 17:32:20 mythfrontend.real: last message repeated 6 times
While the frontend is locked, all I see is a frozen image of whatever was
on LiveTV at the time. Hitting exit clears the screen, and I can start
LiveTV again; it then works fine.
It seems like the frontend is waiting for data from the backend, but I
really can't figure out what to look for, and a Google search for the above
errors turns up very little.
Could this be an I/O error on one of the drives? If so, how do I test for it?
If not, any ideas where I should start looking?
Thanks to all.

On 5 February 2014 13:48, Kevin Johnson <iitywygms [at] gmail> wrote:
>
> One other request. If someone could point me to a how-to on correctly
> replying to this list using gmail I will be eternally grateful.
Simply reply at the bottom of the message, not the top.
> On Tue, Feb 4, 2014 at 5:55 PM, Kevin Johnson <iitywygms [at] gmail>
> wrote:
>> It seems like the frontend is waiting for info from the backend. But I
>> really cant figure out what to look for. And google search of the above
>> errors shows very little.
>>
>> Could this be a i/o error with one of the drives? If so, how do I test?
>> If not, any ideas where I should start looking?
>> Thanks to all.
Having been through this myself, I can guarantee you that it is always
disk-related...
When you are watching LiveTV, two things are happening simultaneously
on the backend.
First, the recorder is writing to the disk: that's thread #1.
Then you have thread #2 handling requests from the frontend and serving
data; that thread reads the data back from the disk.
So one thread of the backend writes to disk while another reads from it.
Your log shows a long delay occurring there: it takes over 6s between the
time the backend writes data to the disk and the time the reader reads it
back. That's way too much. For reference, you get a warning that it's too
slow if it takes more than 0.2s.
After 10s of waiting for sufficient data, the reader on the backend will abort.
If it takes more than 16s to read a block of data, it will also abort.
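The wait-and-abort behaviour described above can be sketched roughly like this. This is a minimal Python illustration, not MythTV's actual code; the constants mirror the 0.2s warning and 10s abort thresholds mentioned here, and the injectable clock/sleep parameters exist only so the logic can be exercised without real delays:

```python
import time

# Thresholds as described in this thread (assumed values, not from MythTV source):
WARN_AFTER_S = 0.2    # reader logs a warning if new data takes longer than this
ABORT_AFTER_S = 10.0  # reader gives up entirely after waiting this long

def wait_for_avail(data_available, now=time.monotonic, sleep=time.sleep):
    """Poll until data_available() returns True.

    Returns 'ok' (data arrived quickly), 'ok-slow' (a warning would have
    been logged), or 'abort' (the reader timed out, which is when the
    frontend appears to freeze).
    """
    start = now()
    warned = False
    while not data_available():
        waited = now() - start
        if waited > ABORT_AFTER_S:
            return "abort"
        if waited > WARN_AFTER_S:
            warned = True
        sleep(0.05)  # poll interval (illustrative)
    return "ok-slow" if warned else "ok"
```

On a healthy disk the writer stays well ahead and the reader returns 'ok'; the 6.7s wait in the log above corresponds to the 'ok-slow' path, and past 10s the backend reader aborts and the frontend locks up.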
Now, how the frontend handles a backend abort is very poor: the frontend
will usually just appear to be locked. After a little while you can get
out of it by pressing the exit key.
I suppose an optimisation could be made so that, when a frontend is watching
LiveTV, we don't wait for the backend to finish writing the data to disk
before reading it back, and instead transfer it to the client right away.
But at this stage, the design doesn't allow for it.
In the meantime, you need to identify where the bottleneck is. If you
are using NFS, check the mount options (that's the issue I was having).
If you are using a RAID array, check the SMART status to make sure you
don't have a dying disk slowing everything down.
Make sure your database isn't on the same disk where you write your
recordings, and that it isn't on a RAID array; that has often proven to
be the reason for massive disk slowdowns.
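For the NFS case, the mount options worth checking usually live in /etc/fstab and look something like the line below. The server address, export path, and values are illustrative only; the point is to confirm you are using sane read/write block sizes, a `hard` mount, and TCP rather than whatever the defaults gave you:

```
# /etc/fstab -- example NFS mount for MythTV storage (all values illustrative)
192.168.2.4:/var/lib/mythtv  /mnt/mythtv  nfs  rw,hard,proto=tcp,rsize=32768,wsize=32768  0  0
```

After editing, `mount -o remount /mnt/mythtv` (or a reboot) applies the options, and `nfsstat -m` shows what is actually in effect.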
I checked the network using iperf
[ 4] local 192.168.2.6 port 5001 connected with 192.168.2.4 port 52745
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.1 sec 112 MBytes 93.4 Mbits/sec
Those seem like good numbers, and every frontend gives similar results.
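As a quick sanity check on that iperf result: the bitrate below is the ATSC MPEG-2 maximum, used here as an assumed worst case; substitute your actual stream bitrate if you know it.

```python
# Back-of-the-envelope check on the iperf measurement above.
iperf_mbits = 93.4        # measured throughput from the iperf run
hd_stream_mbits = 19.4    # ATSC MPEG-2 maximum bitrate (assumed worst case)

headroom = iperf_mbits / hd_stream_mbits  # ~4.8 worst-case HD streams
```

Roughly four to five simultaneous worst-case HD streams fit on that link, so the network is unlikely to be the bottleneck for a single LiveTV session.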
I also checked the drives using palimpsest
/dev/sdb1:
Timing cached reads: 15616 MB in 2.00 seconds = 7813.59 MB/sec
Timing buffered disk reads: 378 MB in 3.01 seconds = 125.57 MB/sec
/dev/sdc1:
Timing cached reads: 16480 MB in 2.00 seconds = 8246.36 MB/sec
Timing buffered disk reads: 232 MB in 3.02 seconds = 76.73 MB/sec
/dev/sda1:
Timing cached reads: 12612 MB in 2.00 seconds = 6309.82 MB/sec
Timing buffered disk reads: 320 MB in 3.00 seconds = 106.55 MB/sec
Those numbers seem okay too?
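They do look fine for raw throughput, with one caveat: those timing tests measure sequential reads only, while LiveTV writes and reads the same stream at once. A rough check, again assuming an ATSC-style worst-case bitrate:

```python
# The buffered-read numbers above are sequential throughput only.
slowest_read_mb_s = 76.73        # slowest buffered read measured above (sdc1)
hd_stream_mb_s = 19.4 / 8        # ~2.4 MB/s for a worst-case ATSC HD stream

# LiveTV needs one write plus one read of the stream at the same time:
required_mb_s = 2 * hd_stream_mb_s
```

Even the slowest drive has well over ten times the required throughput, so raw sequential speed is probably not the problem; stalls like these more often come from seek latency or contention under mixed read/write load, which these timing tests do not measure.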
I tried removing the recordings directory from the drive that has the
database, but I still get really long wait times between frontend and
backend. I do not use RAID.
I do use NFS on the backend, but MythTV does not use that directory at
all. All my frontends can access the shared NFS directory on the
backend; however, I only use NFS to share random files and other minor
things.
Could the NFS mounts still be causing this?
Any other ideas for fixing the long delays between the backend and the
frontend?
I assume this long delay is the reason my frontend computers randomly lock
up?
And lastly: I finally disabled the digest option, so now, hopefully, I am
using this list properly.