[mythtv-users] Combined FE/BE using USB for all I/O?

Jean-Yves Avenard jyavenard at gmail.com
Mon Aug 18 02:17:58 UTC 2014


Hi

On 16 August 2014 23:59, Simon Hobson <linux at thehobsons.co.uk> wrote:
> Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:

> The OS default really doesn't work well - apparently. It lets large dirty buffers build up, then flushes them causing a period of high disk I/O which interferes with interactive performance (in particular, it causes momentary pauses in playback which rely on timely reading of data from disk).
>
> MythTV uses an internal circular buffer per record stream. The record process drops its data in the buffer, and a separate process copies any data in the buffer to the file and does an fsync to ensure it gets written to disk. The size of the buffer and write/fsync period are fixed at compile time (in-code constants). In the past I have thought that perhaps larger buffers and longer periods might help - but I never got round to trying it.

It's close, but not quite, and the differences are fundamental.

Each writer has an 8MB buffer (it used to be 64MB) and is made up of
three threads (see the sketch after the list):

* The first thread is the recorder itself, which feeds the writer
typically 188 bytes of data at a time (most streams are MPEG-TS, whose
packets are 188 bytes wide).

* The second thread writes to disk in blocks of at least 64kB, and
never more than 1MB per write. It writes as data comes in or at a
regular interval, and so may in fact write less than 64kB (typical
with a low-bandwidth stream like IPTV).

* The third thread is the disk flush/sync thread.
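
A rough sketch of how the first two threads cooperate around the ring
buffer (hypothetical names, not MythTV's actual code; shutdown
handling and the flush thread are omitted here):

#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <vector>

class RingWriter {
  public:
    explicit RingWriter(FILE *out) : m_buf(8 * 1024 * 1024), m_out(out) {}

    // Recorder side: append data (typically one 188-byte TS packet),
    // waiting only while the buffer is full.
    void Write(const unsigned char *data, size_t len)
    {
        std::unique_lock<std::mutex> lk(m_lock);
        m_notFull.wait(lk, [&] { return m_buf.size() - m_used >= len; });
        for (size_t i = 0; i < len; ++i)
            m_buf[(m_head + i) % m_buf.size()] = data[i];
        m_head = (m_head + len) % m_buf.size();
        m_used += len;
        m_hasData.notify_one();
    }

    // Writer side: drain in chunks of at least 64kB, never more than
    // 1MB per write. The timed wait is what lets a low-bitrate stream
    // still get written out in sub-64kB chunks at a regular interval.
    void WriterLoop()
    {
        const size_t kMinChunk = 64 * 1024, kMaxChunk = 1024 * 1024;
        for (;;) {
            std::unique_lock<std::mutex> lk(m_lock);
            m_hasData.wait_for(lk, std::chrono::milliseconds(250),
                               [&] { return m_used >= kMinChunk; });
            size_t n = std::min(m_used, kMaxChunk);
            if (n == 0)
                continue;
            std::vector<unsigned char> chunk(n);
            for (size_t i = 0; i < n; ++i)
                chunk[i] = m_buf[(m_tail + i) % m_buf.size()];
            m_tail = (m_tail + n) % m_buf.size();
            m_used -= n;
            m_notFull.notify_one();
            lk.unlock();                       // the actual disk write
            fwrite(chunk.data(), 1, n, m_out); // happens outside the lock
        }
    }

  private:
    std::vector<unsigned char> m_buf;
    size_t m_head = 0, m_tail = 0, m_used = 0;
    std::mutex m_lock;
    std::condition_variable m_hasData, m_notFull;
    FILE *m_out;
};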

From the documentation:
/** \brief Flush data written to the file descriptor to disk.
 *
 *  This prevents freezing up Linux disk access on a running
 *  CFQ, AS, or Deadline as the disk write schedulers. It does
 *  this via two mechanisms. One is a data sync using the best
 *  mechanism available (fdatasync then fsync). The second is
 *  by telling the kernel we do not intend to use the data just
 *  written anytime soon so other processes time-slices will
 *  not be used to deal with our excess dirty pages.
 *
 *  \note We used to also use sync_file_range on Linux, however
 *  this is incompatible with newer filesystems such as BTRFS and
 *  does not actually sync any blocks that have not been allocated
 *  yet so it was never really appropriate for ThreadedFileWriter.
 */
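
In other words, the flush thread does roughly this (a sketch assuming
a POSIX system; HAVE_FDATASYNC stands in for whatever build-time check
is actually used):

#include <fcntl.h>
#include <unistd.h>

void SyncAndDrop(int fd)
{
#if defined(HAVE_FDATASYNC)
    fdatasync(fd);  // sync file data only, skipping metadata where we can
#else
    fsync(fd);      // fall back to a full sync
#endif
    // Tell the kernel we will not read these pages back soon, so our
    // dirty pages are not kept cached at other processes' expense.
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}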

A write from the recorder, as such, is totally asynchronous. The 8MB
buffer allows for over 5s of data for a typical HD stream, not 1s, and
much more for SD content.
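
For instance, assuming an HD stream of roughly 12 Mbit/s (1.5 MB/s)
and an SD stream of roughly 4 Mbit/s (0.5 MB/s):

    8 MB / 1.5 MB/s ≈ 5.3 s of HD buffered
    8 MB / 0.5 MB/s = 16 s of SD buffered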

> As it is, I don't think it takes long to fill the buffer - at which point the older part gets overwritten and data is lost.

No, that's not the way it works: there is never loss of data due to
the buffer being overwritten. While it's a ring buffer, new data only
gets written once there's space.

A write call will wait a very short time if there's no space, and will
also cause the writer thread to force a write and flush some data from
the buffer. Due to the nature of Video for Linux you can't wait long:
data from the capture card will not be buffered, and if there's no one
to read it, it will be lost.

So the system does rely on the OS to do the right thing and not block
for too long when issuing a write.

If it takes more than 10s for the writer to write data (to write, not
to flush to disk), then the system enters an error condition and the
recording is aborted.
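
As a sketch, that bounded wait looks something like this (hypothetical
names, not the actual code; freeBytes is updated by the writer thread
under the same lock):

#include <chrono>
#include <condition_variable>
#include <mutex>

bool WaitForSpace(std::mutex &lock, std::condition_variable &notFull,
                  std::condition_variable &kickWriter,
                  const size_t &freeBytes, size_t need)
{
    using namespace std::chrono;
    const auto deadline = steady_clock::now() + seconds(10);
    std::unique_lock<std::mutex> lk(lock);
    while (freeBytes < need) {
        kickWriter.notify_one();  // force the writer thread to drain
        if (notFull.wait_until(lk, deadline) == std::cv_status::timeout
            && freeBytes < need)
            return false;         // error condition: abort the recording
    }
    return true;  // space found; the write itself stays asynchronous
}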

> The default time is 1 second, hence my comment earlier in the thread about the "tick from the disk" about once a second. It's not a fixed 1 second - the code does a 'write - fsync - "wait 1 second"' loop, so if the fsync is slow returning then the period will stretch. Given this, I suspect under heavy load all the loops would simply slow down to whatever seek rate the disk could manage - if the buffers were large enough not to cause any overflows.

It doesn't work that way. The write thread never waits for a sync to
complete; it's all done in parallel. As such, it doesn't matter how
long fsync takes to complete, as it has no impact on the writer
itself.

A typical failure isn't due to the TFW (ThreadedFileWriter) taking too
long to write; recordings pretty much always succeed unless you have a
major hardware fault.

A typical issue is the combination of frontend + backend when
attempting to watch an in-progress recording or LiveTV. The frontend
requests data from the backend. While the backend is happy to write
data in a non-blocking fashion, the read, on the other hand, is
blocking, and that is where timeouts will occur if you have a
bottleneck somewhere.
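
For example, the frontend side of that read ends up looking something
like this sketch (poll() shown as one common way to bound a blocking
read; not MythTV's actual protocol code):

#include <poll.h>
#include <unistd.h>

ssize_t ReadWithTimeout(int fd, void *buf, size_t len, int timeoutMs)
{
    struct pollfd pfd = { fd, POLLIN, 0 };
    int ready = poll(&pfd, 1, timeoutMs);
    if (ready <= 0)
        return -1;  // timed out (or error): this is where a bottleneck
                    // anywhere in the chain becomes visible
    return read(fd, buf, len);
}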

Hope that clears things up.

