[mythtv-users] Share your transcode settings!

Raymond Wagner raymond at wagnerrp.com
Wed Jan 11 20:34:10 UTC 2012


On 1/11/2012 14:25, Jeremy Jones wrote:
>
>     What Paul Gardiner is using and what I described is called a
>     frameserver.  It is intended to allow video processing to be
>     pipelined through multiple applications, without having to waste
>     disk space and disk IO on one or more intermediary files. 
>     Nuvexport uses this mode internally, and a number of Windows tools
>     such as MeGUI use AviSynth for similar purposes, but it is beyond
>     the scope of most tasks that just wrap 'ffmpeg' or 'handbrakecli'.
>
>
> That helps my understanding of what Paul is doing.  Is this 
> something that you would recommend for custom transcoding?  What are 
> the pros and cons of doing it that way?  Or, asked another way, when 
> would one want to do that as opposed to using the wrapper with ffmpeg 
> (or another CLI program)?  Can you elaborate on the 'beyond the 
> scope' comment above?

If you can do everything you want to do to the video within a single 
instance of ffmpeg or handbrakecli, then a frameserver is beyond the 
scope of your task.  Say you have some special filter you want to apply 
that is not built into the application you're using.  Or the 
application has the filters, but you want to apply them in an order 
different from its default.  Or you have video in a format your 
application of
choice does not support.  In the case of nuvexport, you want 
mythtranscode to apply your cuts rather than trying to do your own 
segmenting.
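
To make "within a single instance" concrete, here is a minimal sketch 
(filenames and settings are just placeholders) of the sort of one-shot 
job that needs no frameserver:

    # deinterlace, scale to 720p, and encode H.264/AAC in one process
    ffmpeg -i recording.mpg \
        -vf yadif,scale=1280:720 \
        -c:v libx264 -crf 20 \
        -c:a aac -b:a 128k \
        output.mp4

Everything happens inside one ffmpeg process, so there is nothing to 
pipe between applications.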

You can run each stage individually, writing out to temporary files.  
You don't want to lose quality on each stage, so you write losslessly.  
An hour of CD-quality audio is... the size of a CD.  More channels, or 
a higher sampling rate and bit depth, will result in larger files.  
Raw YUV 1080p24 is massive, at some 75MB/s.  For a 2-hour Blu-ray, 
you're talking about 16GB of audio and a whopping 540GB of video.  
Lossless compression like FLAC (audio) and HuffYUV (video) will bring 
that down to maybe 1/2 to
1/3, but that's still a ton of data.
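
Staged through temporary files, that workflow looks something like the 
following sketch (the filter stage is a hypothetical stand-in for 
whatever your encoder lacks):

    # stage 1: split into lossless intermediates
    ffmpeg -i recording.mpg -an -c:v huffyuv temp-video.avi
    ffmpeg -i recording.mpg -vn -c:a flac temp-audio.flac
    # stage 2: apply the special filter (hypothetical tool)
    some-filter temp-video.avi filtered.avi
    # stage 3: final encode, remuxing the audio back in
    ffmpeg -i filtered.avi -i temp-audio.flac \
        -c:v libx264 -c:a aac output.mp4

Every stage round-trips through the disk, which is exactly the cost 
the frameserver avoids.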

Enter the frameserver.  Instead of using files, you run each stage in 
parallel, feeding the data from one application to the next down the 
pipeline.  Typically, you would "pipe" (|) this stuff using standard in 
and standard out.  In the specific case of mythtranscode, since you're 
applying cuts, you feed audio and video out simultaneously through 
named pipes, rather than risk synchronization issues by handling them 
in separate passes.
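
In its simplest form, a stdout/stdin frameserver is just a pipe 
carrying raw frames in the y4m container, something like:

    # decode to raw y4m on stdout, encode on stdin
    ffmpeg -i recording.mpg -an -f yuv4mpegpipe - | \
        x264 --demuxer y4m -o video.264 -

For the mythtranscode case, the shape of it is roughly as follows (a 
sketch from memory: verify the option names against your version, the 
chanid/starttime are placeholders, and the raw stream parameters must 
match your recording):

    mkdir /tmp/fifos
    mythtranscode --chanid 1021 --starttime 20120111203000 \
        --honorcutlist --fifodir /tmp/fifos &
    # mythtranscode writes raw video to vidout and raw audio to
    # audout; the encoder reads both named pipes in lockstep
    ffmpeg -f rawvideo -pix_fmt yuv420p -s 1280x720 -r 29.97 \
        -i /tmp/fifos/vidout \
        -f s16le -ar 48000 -ac 2 -i /tmp/fifos/audout \
        -c:v libx264 -c:a aac output.mp4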

For something very similar, but easier to mess around with, check out 
Netpbm.  It is an image editing toolset that follows the old UNIX 
mantra, "do one thing and do it well": a bunch of little tools with 
singular purposes, such as loading, writing, scaling, cropping, 
inverting, flipping, and so on.  A long time back, I wrote a simple 
image gallery for my web server that would generate thumbnails by 
importing to pnm (internal data format), doing a very rapid closest 
color scaling to twice the desired thumbnail size, followed by an 
interpolated scaling to the desired size, and writing out to a JPEG.  
Staging it that way meant it efficiently discarded the bulk of the 
image information, but still had enough left to blend into a smooth 
thumbnail.
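
Roughly reconstructed with today's tool names (pamscale has since 
replaced pnmscale; the sizes are placeholders), that pipeline looked 
like:

    # decode, fast closest-color scale to 2x the target, smooth
    # interpolated scale down to the target, encode
    jpegtopnm photo.jpg | \
        pamscale -nomix -width 320 -height 240 | \
        pamscale -width 160 -height 120 | \
        pnmtojpeg > thumb.jpg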

