[mythtv] Perfect sync to interlaced TVs

Paul Gardiner lists at glidos.net
Thu Sep 25 07:03:21 UTC 2008


Mark Kendall wrote:
> 2008/9/24 Paul Gardiner <lists at glidos.net>:
>> I have an idea for an alternative form of interlacing to achieve
>> perfect sync for interlaced content played on interlaced TVs. In
>> theory the best picture is achieved if the interlaces are displayed
>> as is, with no processing, but there are problems in doing so
>> in MythTV.
> 
>> Is this of any use?
> 
> Firstly, I'll just say that I don't believe there's a mechanism for
> doing this properly in linux. I've searched wide and far and browsed a
> lot of code and I've found nothing to suggest that there is a way to
> determine when the graphics card is displaying which field when using
> an interlaced mode.

Is that something lacking in the driver API, or does it originate
from a problem with the cards themselves?

> Secondly, given point one, we need a workaround (see below) but I'm
> not sure that a single workaround will give consistent results on all
> combinations of drivers/GPU vendors/TV models. I've experimented using
> nvidia cards (hdmi hd and sd, vga to rgb-scart sd) and my ps3 (hdmi
> and rgb-scart). Most approaches worked on the ps3 but getting
> consistent results out of nvidia is trickier.
> 
> What I think you're describing above is what I've previously called
> 'field order' - whereby on each refresh we display the two most recent
> fields. This is a deinterlacer option in both trunk and 0.21 fixes
> when using the opengl renderer (it's called the 'Interlaced'
> deinterlacer!). Other opengl rendering issues aside,

Ah right. I thought it had to be something someone had considered
before. "Field order" is a good name for it.
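
For anyone following along, my understanding of "field order" can be sketched in a few lines (Python pseudocode, names my own, assuming one list entry per scan line):

```python
def weave(top, bottom):
    """Interleave a top and a bottom field into one full-height frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

def field_order(fields):
    """'Field order': at each display refresh, show a frame woven from
    the two most recent fields.  fields[0] is a top field, fields[1]
    a bottom field, and so on alternately."""
    frames = []
    for n in range(1, len(fields)):
        if n % 2 == 1:        # newest field is a bottom field
            frames.append(weave(fields[n - 1], fields[n]))
        else:                 # newest field is a top field
            frames.append(weave(fields[n], fields[n - 1]))
    return frames
```

If the display's field phase happens to match, each field then reaches the screen at exactly the refresh it belongs to, with no filtering at all.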

> I later realised that that 'deinterlacer' doesn't give consistent results.

How can that be? So somewhere there's a fault in the reasoning of
my previous post. When you say "realised", do you know how it can
go wrong, or was it from trial?

> A lot more testing later (and various discussions on the -users list),
> I realised that bobdeint does actually work consistently though
> picture quality can be degraded slightly due to the 'bobbing'. So I
> implemented a bobdeint without the bobbing.

Interesting. Is the algorithm easily describable?
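
For what it's worth, the usual description of bobdeint is simple enough to sketch (Python, my own naming, one list entry per scan line):

```python
def bob(field):
    """Bob deinterlace: each field is shown on its own, scaled to full
    frame height -- here by naive line doubling.  Because top fields
    sample the even scan lines and bottom fields the odd ones, doubling
    both the same way places the picture half a line too high or too
    low on alternate refreshes: the visible 'bobbing'."""
    frame = []
    for line in field:
        frame.extend([line, line])
    return frame
```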

> What is now in the mythtv-vid branch for the 'Interlaced' deinterlacer
> is effectively bobdeint without the bob. It shows each field
> consecutively, scaled to full frame size.

How is that different? Is it that the scaling method is more
complicated than duplicating the lines? I would still have
thought that field order was theoretically better because
of not softening the image.
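
I haven't read the branch, but my guess at "bobdeint without the bob" is scaling that compensates for field parity: instead of duplicating lines, interpolate each field to full height with a small vertical offset that depends on whether it is a top or bottom field, so the image stays put between refreshes. A sketch of that guess, using per-line brightness values for illustration:

```python
def scale_field(field, top_field):
    """Scale one field to full frame height by linear interpolation,
    offsetting the sample positions by a quarter of a field line in
    opposite directions for top and bottom fields, so both land in the
    same place on screen.  (My guess at 'bobdeint without the bob';
    list entries stand in for scan-line brightness.)"""
    n_out = 2 * len(field)
    # a top field covers source lines 0, 2, 4...; a bottom field 1, 3, 5...
    offset = -0.25 if top_field else 0.25
    frame = []
    for i in range(n_out):
        pos = i / 2.0 + offset                    # position in field coordinates
        pos = min(max(pos, 0.0), len(field) - 1)  # clamp at the edges
        lo = int(pos)
        hi = min(lo + 1, len(field) - 1)
        frac = pos - lo
        frame.append(field[lo] * (1 - frac) + field[hi] * frac)
    return frame
```

That would explain the slight softening relative to field order: every output line except the edges is a blend of two source lines.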

> This appears to work consistently - both in my own experience and
> feedback I've received from others - and picture quality (including
> OSD) seems not to be affected.
> 
> It should be easy enough to implement as a software deinterlacer as well.

I wish I could try it. I'm using a little VIA ITX board, and it
can't even consistently handle the CPU requirements of bobdeint,
although people tell me that bobdeint shouldn't put any extra
strain on the CPU. What I do at the moment is use no deinterlacing,
and mess with pause and rewind until I get a sync on the correct
VSync pulse. :-)

Cheers,
	Paul.



More information about the mythtv-dev mailing list