[mythtv-users] Transcoding and deinterlacing, and other transcode issues

Brad Templeton brad+myth at templetons.com
Fri Jun 16 06:12:11 UTC 2006


On Fri, Jun 16, 2006 at 12:34:31PM +1200, Steve Hodge wrote:
> On 6/16/06, Steven Adeff <adeffs.mythtv at gmail.com> wrote:
> > 1080i is still a frame resolution of 1920x1080 pixels, not 1920x540.
> > So you can't convert 1080i to 1280x540 without losing a lot of
> > information.
> 
> 1080i is two interlaced fields, each one 1920x540 @ a field rate of 50
> or 60. 1280x540p50/p60 preserves the vertical resolution and loses
> none of the temporal resolution, but does lose 33% of the horizontal
> resolution. But in Brad's case the TV can only do a horizontal
> resolution of 1280 anyway so this is irrelevant.

Strictly, 1280x540p at 60fps would not preserve all the information,
since each field is supposed to be offset by one scan line.  However,
it's not a big drop from the 720 the TV can do.    (And to be strict,
due to overscan the TV probably only shows about 1150 x 760 if I aim
things carefully.)


When I display a 1080i program (with bob) the playback program does what
an interlaced TV would do, as I understand it: it displays the first field
as 540 lines spaced two apart, with the gaps filled in by the lines from
the previous field (which on a TV would still have been there thanks to
the persistence of the phosphor).  Other deinterlacers try various
interpolation algorithms at half the frame rate.   At least I think I
have this right; it's pretty messy.
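
To make that concrete, here is a toy sketch in Python of the simplest
kind of bob (one line-doubled output frame per field, at the full field
rate).  This is not Myth's actual code, and the real deinterlacers
interpolate rather than just repeating lines:

import numpy as np

def bob_deinterlace(frame):
    """frame: a (1080, 1920) array holding one interlaced frame.
    Returns two (1080, 1920) progressive frames, one per field."""
    top = frame[0::2]      # lines 0, 2, 4, ... (the 540-line top field)
    bottom = frame[1::2]   # lines 1, 3, 5, ... (the 540-line bottom field)
    # Naive line doubling: repeat each field line to fill the missing lines.
    return [np.repeat(field, 2, axis=0) for field in (top, bottom)]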

Anyway, after myth has built this image into a 1920x1080 frame buffer,
xvideo hardware scales it down to 1280x720, and this is then sent out
as rasters to the TV, which displays it.

(I could, alternatively, drive the display at 1080i and let the TV
do the deinterlace and down-res before going to the DLP.)

Anyway, the point of all this is that it seems to make sense to
compress the end result of that pipeline (a 720p image) rather than the
original double-the-pixels image.    Doing that requires the same
fancy deinterlace that happens at display time, since the TV and
the linux frame buffer are progressive scan devices.
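
To give a concrete (and hedged) example of what I mean, something along
these lines should do the deinterlace-and-downscale in one pass.  This
is only a sketch, it assumes a reasonably recent ffmpeg built with
libx264, and the file names are just placeholders:

import subprocess

# yadif=1 deinterlaces to one output frame per field (60 progressive
# frames a second from 1080i60), scale drops the result to 1280x720,
# and x264 then compresses the progressive 720p stream.
subprocess.run([
    "ffmpeg", "-i", "recording-1080i.mpg",
    "-vf", "yadif=1,scale=1280:720",
    "-c:v", "libx264", "-crf", "20",
    "-c:a", "copy",                    # leave the audio track alone
    "recording-720p.mp4",
], check=True)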

Done properly, I would get files that were perhaps a third of the
size of the 1080i MPEG-2s (8 GB/hour down to about 3 GB/hour) but look
the same on my 720p TV.   Since 95% of the HD market has 720p TVs, this
seems like a good thing to do.   Some of that reduction to 3 GB/hour
comes from MPEG-4's better compression, some from throwing away info
that will never be displayed.
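
A quick back-of-the-envelope check on those figures (rough numbers
only, in Python):

GB = 1000**3
for label, gb_per_hour in (("1080i MPEG-2", 8), ("720p MPEG-4", 3)):
    mbps = gb_per_hour * GB * 8 / 3600 / 1e6
    print(f"{label}: {gb_per_hour} GB/hour is about {mbps:.0f} Mbit/s")

That puts the 8 GB/hour recordings at roughly 18 Mbit/s average and the
3 GB/hour target at roughly 7 Mbit/s.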

I found it simpler to always keep my TV in its native 720p mode and
have Myth and xvideo scale things down.  The alternative is to switch
video modes when playing 1080i and perhaps even 480i, but that's more
jarring.   Do the people who do it this way find it worth the effort?

(Another reason I don't switch to 1080i mode is that I use the VGA
connector on my TV, since the DVI is poorer quality -- go figure -- and
component could be done if I pulled out my converter, but I like the
direct cable.)

