[mythtv] scheduling, starttime accuracy

f-myth-users at media.mit.edu f-myth-users at media.mit.edu
Fri Feb 13 07:59:01 UTC 2009


    > Date: Fri, 13 Feb 2009 09:06:15 +1100
    > From: Nigel Pearson <nigel at ind.tansu.com.au>

    > > 4) Could you use the ctime of the recording file?

    > Brilliant workaround! Won't break anything,
    > and much more reliable than what I was planning
    > (looking for timestamp in backend log:
    > 2009-02-12 12:00:03.246 Started recording: blah)

I can't quite see how -any- time-based metric can help your problem,
which (as I understand it) was sharing frame-accurate cutlists.

Even if both machines are NTP-synchronized and even -if- we recorded
the exact time (down to the frame) at which a recording started (which
a ctime won't give you, if you're using a filesystem with 1-second
granularity---and some are), you're still screwed, even if both of you
are recording from exactly the same source (same cable channel on same
cable system, same OTA broadcast, whatever):

If you're using STBs, differences in model number or even firmware
could lead to temporal offsets of -at least- a frame, and probably
much more.

If you're using different capture hardware, ditto.

Even if everything else is the same, I can easily see different
software on the host CPU leading to differences in exact frame timing
(perhaps different kernel versions? different downloaded firmware to
an ivtv-based card?).

It all seems fraught with timing races and general fragility.

Rather than mess around with recording accurate starting times (there
may well be reasons to do this---but not, I think, for the reason you
gave initially), you might be able to get frame-accurate synchronization
with something more computationally intensive but much more reliable,
such as analysis of the recorded video itself.

For example, even if you don't want to use data directly from the
commflagger, how about scanning the first minute after the point where
the recording "should" have started (e.g., after compensating for
prerolls that might differ between the two systems recording) for
sudden changes in
frame brightness?  That's an easy cut-detector.  Take the ratio of the
before & after brightnesses at the cut, divide that into some small
number of buckets (5?) as a thresholding scheme (can't depend on the
same absolute brightness across capture hardware), and then use some
simple sliding-window heuristic to find the same sequence of cuts in
the other system's recording.
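
A minimal sketch of what I mean, in Python (the function name,
threshold, and bucket count are placeholders I made up; it assumes
you've already extracted a per-frame mean-brightness value somehow):

    # Hypothetical sketch: detect scene cuts from a list of per-frame
    # mean brightness values and quantize each cut's before/after
    # brightness ratio into a small number of buckets.  The threshold
    # and bucket count are arbitrary choices, not anything MythTV uses.

    def detect_cuts(brightness, threshold=1.5, buckets=5):
        """Return a list of (frame_index, bucket) for abrupt brightness jumps."""
        cuts = []
        for i in range(1, len(brightness)):
            before = max(brightness[i - 1], 1e-6)   # avoid division by zero
            after = max(brightness[i], 1e-6)
            ratio = max(before, after) / min(before, after)
            if ratio >= threshold:                  # sudden change -> candidate cut
                # Quantize the ratio so the signature survives different
                # capture hardware with different absolute brightness levels.
                bucket = min(buckets - 1, int(ratio - threshold))
                cuts.append((i, bucket))
        return cuts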

The analogy here is recovering the chirp pattern on a spread-spectrum
transmission system---here, we're synchronizing the receivers by the
hop pattern of the scene-cut boundaries.  (Or you could think of it
as disciplining a phase-locked loop, but since scene boundaries are
irregular, it's not as obvious an analogy.)

Presumably each system would scan the input for a given number of cuts
(rather than just scanning a fixed number of minutes in), and then
whichever system is supposed to be the slave can sync to the master
by reading the master's cut-detector framecounts and sliding its
window into synchronization.  At that point, assuming no frames are
being dropped by either system, they should stay synchronized.  (And
if we think frames -might- be dropped, e.g., by poor OTA reception,
they can periodically resynchronize, something that a system based on
time-of-recording-start cannot possibly achieve.)
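
Again, only a sketch of the slave-side matching, with made-up
parameter values: compare the gaps between successive cuts plus the
brightness-ratio buckets, slide a window over our own cut list, and
take the best-scoring alignment as the frame offset:

    # Hypothetical sketch: match the master's cut signature against the
    # slave's.  Both arguments are lists of (frame_index, bucket) as
    # produced by detect_cuts() above.  Returns the frame offset of the
    # slave relative to the master, or None if no alignment looks good.

    def find_offset(master_cuts, slave_cuts, window=8, gap_tolerance=2):
        if len(master_cuts) < window or len(slave_cuts) < window:
            return None
        pattern = master_cuts[:window]
        best_offset, best_score = None, -1
        for start in range(len(slave_cuts) - window + 1):
            cand = slave_cuts[start:start + window]
            score = 0
            for j in range(1, window):
                gap_m = pattern[j][0] - pattern[j - 1][0]   # frames between cuts
                gap_s = cand[j][0] - cand[j - 1][0]
                if abs(gap_m - gap_s) <= gap_tolerance and pattern[j][1] == cand[j][1]:
                    score += 1
            if score > best_score:
                best_offset, best_score = cand[0][0] - pattern[0][0], score
        # Require nearly the whole window to agree before trusting the match.
        return best_offset if best_score >= window - 2 else None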

Sure, it's some programming, but a really rough kluge could be mocked
up using mplayer/ffmpeg to dump successive frames and jp2a to turn
them into ASCII; then use diff to look for giant interframe changes.
This sort of mockup could be tested just by writing a shell script,
and half the logic might already be available in the check-stb script
that went around a year or so ago.
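
For instance (completely hypothetical command line and filenames; the
jp2a/diff version would be even cruder), you could let ffmpeg dump a
minute of tiny grayscale frames and feed their mean brightness into
the cut-detector sketched above:

    # Rough mock-up, assuming ffmpeg is available.  First dump one
    # minute of small grayscale frames as raw bytes, e.g.:
    #
    #   ffmpeg -i recording.mpg -t 60 -vf "fps=5,scale=32:18,format=gray" \
    #          -f rawvideo frames.gray
    #
    # Then compute per-frame mean brightness and hand the list to the
    # detect_cuts() sketch from earlier in this message.

    FRAME_BYTES = 32 * 18    # must match the scale= filter above

    def mean_brightness(path):
        means = []
        with open(path, "rb") as f:
            while True:
                frame = f.read(FRAME_BYTES)
                if len(frame) < FRAME_BYTES:
                    break
                means.append(sum(frame) / FRAME_BYTES)
        return means

    print(detect_cuts(mean_brightness("frames.gray")))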

