[mythtv-users] MythTV Is Now Forked As Torc

Raymond Wagner raymond at wagnerrp.com
Tue Feb 21 21:24:27 UTC 2012


On 2/21/2012 15:04, Eric Sharkey wrote:
> On Tue, Feb 21, 2012 at 12:17 PM, jedi <jedi at mishnet.org> wrote:
>>> advanced much in reality.  It's still doing the same thing it did 9
>>> years ago generally except better (VDPAU and storage pools for
>>> example).  I'm OK with that though.  I am however disappointed that areas
>>> like streaming to phones, laptops, transcoding, etc. are still woefully
>>      "streaming" to other devices is primarily hung up on the fact that
>> many of them aren't well equipped to deal with any TV recording or most
>> other forms of video for that matter.
> That's what content negotiation is for:
>
> http://en.wikipedia.org/wiki/Content_negotiation
>
> Ideally the MythTV protocol used to communicate between the backend
> and frontend would include some form of content negotiation, and
> perhaps the backend could generate transcoded content on the fly to
> best fit the frontend's needs.  It can be done, it just hasn't been.

Ideally, content negotiation would be trivial, except it's not.  The 
FFmpeg libraries mean we should have access to just about any codec we 
might want to use, once someone codes up the interface to use those 
libraries (see HLS in 0.25).  The problem becomes power.  Video encoding 
is damn hard to do.  Mobile devices seem to be standardizing on H.264 as 
the codec of choice, and something like the iPhone/iPad can only manage 
H.264 and MJPEG.
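
To make that concrete, here is a rough sketch of what backend-side 
negotiation might look like -- not MythTV code, just an illustration.  
The frontend advertises which codecs it can decode, and the backend 
either streams the recording untouched or falls back to an on-the-fly 
transcode.  The device table, codec names, and ffmpeg flags are all 
assumptions made up for the example:

    # Illustrative sketch only -- not MythTV's actual protocol or API.
    # The frontend reports the codecs it can decode; the backend decides
    # whether to stream the file as-is or transcode it on the fly.

    # Hypothetical capability table -- in a real protocol the frontend
    # would report this itself during negotiation.
    DEVICE_CODECS = {
        "iphone":  {"h264", "mjpeg"},
        "htpc":    {"h264", "mpeg2", "vc1"},
        "old_stb": {"mpeg2"},
    }

    def negotiate(recording_codec, device):
        """Return None to stream as-is, or a transcode command."""
        supported = DEVICE_CODECS[device]
        if recording_codec in supported:
            return None  # direct streaming: no encode cost at all
        if "h264" not in supported:
            raise ValueError("no common codec with %s" % device)
        # Plain ffmpeg invocation; bitrate and size picked arbitrarily.
        return ("ffmpeg -i recording.ts -c:v libx264 -b:v 1500k "
                "-s 1280x720 -c:a aac -f mpegts -")

    cmd = negotiate("mpeg2", "iphone")
    print(cmd or "stream original file")

Note the cheap path: when the recording is already in a codec the 
device can decode, negotiation costs nothing.  The expensive path is 
the transcode, which is what the rest of this mail is about.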

Now the HD-PVR can output 13.5 Mbps H.264 in real time with very modest 
power consumption.  For software playback, you're looking at a modern 
CPU architecture at around 2.5GHz for decoding on a single core.  Since 
DCT-based codecs are asymmetric (encoding is far more expensive than 
decoding), to encode that same quality in real time with x264 is going 
to require that same speed on 6-8 cores.  You're going to need a high 
end PC to manage one playback device in HD, and a multi-socket server 
if you want multiple frontends.  Most people with decent computers as a 
backend are still going to have to use downscaled content with HLS, 
while others running their backends on Atoms, ARMs, or old P4s and 
Athlon XPs will simply be out of luck.
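
To put rough numbers on that (using the ballpark figures above, which 
are estimates, not benchmarks):

    # Back-of-the-envelope math from the figures above (assumptions,
    # not measurements).
    DECODE_CORES = 1         # ~one 2.5GHz core decodes 13.5 Mbps H.264
    ENCODE_FACTOR = 7        # encode is roughly 6-8x the work of decode

    frontends = 3            # simultaneous HD streams to transcode
    cores = frontends * DECODE_CORES * ENCODE_FACTOR

    print("~%d cores at 2.5GHz" % cores)  # ~21 cores: server territory

One full-rate HD transcode already saturates a high end desktop; three 
of them puts you into multi-socket territory, which is why downscaled 
HLS content is the practical fallback.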

Now we have these big GPUs in our systems that we could use, only no 
one has yet written open source video encoding routines for them.  
There are some bits of experimental code that allow partial offload of 
certain analyses, but nothing comprehensive.  The last time I looked at 
the commercial, closed-source Badaboom, they claimed significant 
improvements over x264 and other encoders.  On the other hand, 
third-party comparisons using x264 quality settings comparable to what 
Badaboom was outputting put x264 on a high end CPU comparable in speed 
to Badaboom on a high end GPU.

Then there are things like Intel Quick Sync.  That's great and all, but 
completely useless to us until Intel releases Linux driver support for 
it.  Depending on licensing restrictions, that may never happen.  There 
was a library to support the Maxim encoder chips, such as the one in 
Elgato's Turbo.264 stick, but it has long since been abandoned.  There 
is a project called LeopardBoard building open development boards 
around an ARM SoC capable of 1080p encoding, but I have no idea how 
usable the development systems currently are.

Simple takeaway: live transcoding and "content negotiation" are not the 
magical solution to all your content format woes.

