[mythtv-users] Way out idea on watching same thing in multiple rooms

Simon Hobson linux at thehobsons.co.uk
Wed Jan 6 23:36:06 UTC 2010


Gareth Glaccum wrote:

>Chris Pinkham's idea/concept is probably the way that things need to 
>happen. A primary frontend sends commands to the slave frontends. I 
>don't know how far he got, but I think that some way of testing the 
>synchronisation of the streams at the point of output is required 
>(which gets fed back to a stream position fudge factor).

Well, I thought the idea of a 'master' frontend logging timestamps of 
some key points (either keyframes, or timestamps in the program 
material) and passing that on seemed like a good idea - sorry, I 
forget who suggested it. NTP will keep clocks synchronised to better 
accuracy than we need, so if there were a way to deterministically 
take key timing info from a program stream, it would be relatively 
easy for a 'slave' frontend to compare its own list of 
keypoint-timestamp pairs with those obtained from the master. This 
doesn't have to happen in 'real realtime', as long as it happens soon 
after the key event.
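
To make that concrete, here's a very rough sketch (in C++, but all 
the names are made up - nothing to do with the actual MythTV code) of 
the comparison a slave could run once it has both logs:

#include <cstdint>
#include <map>

// One entry per key point: stream position mapped to the NTP-synced
// wall-clock time at which that position was actually displayed.
using KeyPointLog = std::map<int64_t /*streamOffsetMs*/,
                             int64_t /*wallClockUs*/>;

// Returns how far behind (positive) or ahead (negative) the slave
// was, averaged over the key points both sides have logged so far.
double computeDriftUs(const KeyPointLog &masterLog,
                      const KeyPointLog &slaveLog)
{
    int64_t total = 0;
    int count = 0;
    for (const auto &[offset, masterTime] : masterLog) {
        auto it = slaveLog.find(offset);
        if (it == slaveLog.end())
            continue;                     // slave hasn't reached this point
        total += it->second - masterTime; // >0: slave displayed it late
        ++count;
    }
    return count ? static_cast<double>(total) / count : 0.0;
}

The point being that none of this is time-critical - the comparison 
can run whenever the next batch of master timestamps arrives.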

It wouldn't put any constraints on implementation other than the 
ability to a) deterministically select key points, and b) accurately 
determine the time they were displayed (or played, in the case of 
sound). The slave frontends can then, after the fact, determine 
whether they were on time, and adjust their playback accordingly.

One idea that comes to mind for key events would simply be to take 
the time offset into the recording and select every n whole seconds 
(whether that's every second, every 10 seconds, every 60, or 
whatever). It would be media-type agnostic, since I imagine there 
will be material types (sound, for example) that don't have keyframes 
to pick on. So the master frontend would simply timestamp (for 
example) every 10th whole second into the stream and broadcast (or 
multicast) that information to the slaves. The slaves, without there 
needing to be a low-latency link from the master, can then adjust 
their playback until their internally logged timestamps match those 
received (after the fact) from the master. Whether the adjustment is 
by stepping, by adjusting playback speed, or something else would not 
need to be defined - that could be implementation dependent on each 
frontend.
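
The master side could look something like the following - again just 
a sketch, with an illustrative multicast group, port and wire format 
rather than anything that exists in MythTV (and no byte-order 
handling, which a real wire format would need):

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdint>

struct KeyPointMsg {
    int64_t streamOffsetMs; // whole-second boundary in the recording
    int64_t wallClockUs;    // when this master actually displayed it
};

void broadcastKeyPoint(int sock, const sockaddr_in &group,
                       int64_t offsetMs)
{
    using namespace std::chrono;
    KeyPointMsg msg;
    msg.streamOffsetMs = offsetMs;
    msg.wallClockUs = duration_cast<microseconds>(
        system_clock::now().time_since_epoch()).count();
    sendto(sock, &msg, sizeof(msg), 0,
           reinterpret_cast<const sockaddr *>(&group), sizeof(group));
}

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in group{};
    group.sin_family = AF_INET;
    group.sin_port = htons(5555);                // illustrative port
    inet_pton(AF_INET, "239.255.42.42", &group.sin_addr); // illustrative

    // In a real frontend the player would drive this; here we just
    // pretend a key point fires every 10 s of stream position.
    for (int64_t offset = 0; offset < 60000; offset += 10000)
        broadcastKeyPoint(sock, group, offset);

    close(sock);
    return 0;
}

Each slave keeps its own (offset, wall-clock) log as it plays, and 
feeds the received messages into a comparison like the one above.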

Also, it would not constrain the method of distribution - each 
frontend could separately stream the material from the backend, so it 
would be a separate problem from the multicast-distribution one. 
Technically, though in practice unlikely, different frontends could 
even stream a recording from different backends, as long as the 
recordings were identical!



There could be a requirement for the master frontend to delay its 
output in some cases. If a slave determined that it could not advance 
its playback any further, yet was still playing late, then you'd 
either have to accept that in some situations you would not achieve 
perfect sync, or require the master frontend to have a minimum 
buffering delay, or have a dynamic process where a slave frontend 
could ask the master to back off a bit.
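
The slave's decision could be as simple as this (the thresholds are 
plucked out of the air, and the 'ask the master to delay' message is 
purely hypothetical):

#include <cstdint>

constexpr double kMaxSpeedUp = 1.05;          // ~5% is barely noticeable
constexpr int64_t kStepThresholdUs = 500000;  // beyond this, don't smooth

// driftUs > 0 means this slave is running late relative to the master.
// Returns the playback speed to use; sets askMasterToDelay when the
// slave can no longer catch up by itself.
double chooseCorrection(int64_t driftUs, bool &askMasterToDelay)
{
    askMasterToDelay = false;
    if (driftUs > kStepThresholdUs) {
        // Too far behind to smooth out: step the stream position
        // forward, or - if we can't advance any further - ask the
        // master to add some buffering delay.
        askMasterToDelay = true;
        return 1.0;
    }
    if (driftUs > 0)
        return kMaxSpeedUp;        // play slightly fast until caught up
    if (driftUs < 0)
        return 1.0 / kMaxSpeedUp;  // play slightly slow if we're ahead
    return 1.0;
}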


As an aside, I've seen something similar used for passive monitoring 
of network performance. Two monitoring boxes (acting as transparent 
bridges) were inserted into the network at different sites. They 
would pick out packets and timestamp them, then compare the 
timestamps to passively determine latency and log the results.

-- 
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.

