[mythtv] [mythtv-commits] Ticket #2708: patch: Allow remote FE/BE to prefer to use the myth protocol

Mark Buechler mark.buechler at gmail.com
Wed Nov 22 01:56:30 UTC 2006


When viewing HDTV over an NFS mount, I get high IO wait times on the
frontend for some reason. The backend, which has the recordings stored
locally, has one of two CPUs averaging 50% and at times pegged, usually
due to EIT. If I lower the NFS read buffer to 1 meg I get much better
performance than with, say, an 8 meg read buffer. Why this is, I don't
know.

I tried switching to Samba for my frontend recordings mount and that
helps a great deal. I don't get high IO wait times anymore, but HD
playback still isn't perfect. If anything on the frontend or backend
takes extra CPU, like EIT on the backend or MySQL, I get stuttering.

I'm starting to think my problem with HD viewing is smbd or nfsd
context switching on the backend causing the stutter, whereas using
myth:// eliminates that, since the process causing the context
switching is mythbackend itself.

- Mark.

On 11/21/06, Daniel Kristjansson <danielk at cuymedia.net> wrote:
>
> On Tue, 2006-11-21 at 15:38 -0800, Bruce Markey wrote:
> > Chris Pinkham wrote:
>
> > "FWIW: My in experience streaming works far better...
> >   ...several posts in the dev & users lists that suggest to
> >   use streaming rather than nfs mounts."
> >
> > "I am one that finds streaming via protocol better than
> >   nfs mount."
> >
> > They are both saying that serving the files from the backend
> > seems to work better than NFS. This is what I'd expect, as the
> > backend method can be application-aware of the read-ahead and
> > block-size needs. If we can't out-perform NFS then we're doing
> > something wrong, because nothing in NFS was designed with the
> > needs of mythfrontend specifically in mind =).
>
> In theory, Myth streaming should perform almost as well as,
> or in some cases better than, NFS when the file storage is
> local to the backend, but in practice I've found NFS to be
> much better at the task. Perhaps it's because NFS is in the
> kernel and so doesn't add as much latency as Myth streaming.
> Or it could be because it doesn't use TCP by default and so
> can transmit the data with much less overhead, and the
> Nagling doesn't stop us cold when there is a dropped packet.
> Maybe there is a bug or there are some really large timeouts
> in the Myth streaming implementation. Whatever the cause,
> most of the times I've had performance problems with MythTV
> it was because Myth streaming was getting in the way. I
> haven't felt the motivation to make Myth streaming actually
> work, since the workaround is so simple: don't use it.
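
(A quick aside on the "Nagling" bit: that's Nagle's algorithm on the
TCP sockets the myth protocol uses. Turning it off is a one-line
setsockopt() per socket; I haven't checked whether the current data
sockets already set it, so treat this snippet as an illustration
only.)

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

// Disable Nagle's algorithm so small writes are sent right away
// instead of being held back behind unacknowledged data.
static bool disableNagle(int sockfd)
{
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
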
>
> > The block size used to be adjusted based on an estimated bitrate
> > but that didn't turn out to work so well. What I'm thinking
> > right now is that maybe the block size should be adjustable
> > based on the demand for the read-ahead buffer. Start small, then
> > grow the block size until the read-ahead buffer stays above
> > some level without having to request too often. This could make
> > it use small blocks right after seeking, then build in size after
> > a few iterations during continuous playback.
>
> Wouldn't this cause HDTV streams to always stutter at startup?
> Or are you considering something that preserves state across
> streaming sessions?
>
> -- Daniel
>
> _______________________________________________
> mythtv-dev mailing list
> mythtv-dev at mythtv.org
> http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-dev
>
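
To make Bruce's adaptive block size idea quoted above a bit more
concrete, here's a rough sketch of how I read it. The class name,
thresholds and limits are all made up for illustration -- this isn't
the actual RingBuffer code:

#include <cstddef>

// Purely illustrative: grow the request size while the read-ahead
// buffer stays comfortably full, and drop back to small requests
// after a seek so skips stay responsive.
class AdaptiveBlockSize
{
  public:
    static constexpr size_t kMinBlock = 32 * 1024;    // 32 KB
    static constexpr size_t kMaxBlock = 1024 * 1024;  // 1 MB

    explicit AdaptiveBlockSize(size_t readAheadSize)
        : m_readAheadSize(readAheadSize) {}

    // Call before each fill request with the number of bytes
    // currently buffered ahead of the play position.
    size_t NextBlockSize(size_t bytesBuffered)
    {
        // Buffer staying above half full: the link is keeping up,
        // so double the block size and issue fewer requests.
        if (bytesBuffered > m_readAheadSize / 2 &&
            m_blockSize < kMaxBlock)
            m_blockSize *= 2;
        // Buffer draining toward empty: back off so a single slow
        // request can't stall playback for long.
        else if (bytesBuffered < m_readAheadSize / 8 &&
                 m_blockSize > kMinBlock)
            m_blockSize /= 2;
        return m_blockSize;
    }

    // Reset after a seek so the first reads after a jump are small
    // and come back quickly.
    void Seeked(void) { m_blockSize = kMinBlock; }

  private:
    size_t m_readAheadSize;
    size_t m_blockSize = kMinBlock;
};

The read-ahead loop would ask NextBlockSize() before each request and
call Seeked() from the seek path. To Daniel's point about HDTV
stuttering at startup, the starting size could also be primed from the
last value used for that recording rather than always beginning at the
minimum, which is the "preserve state across sessions" idea.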