[mythtv] ffmpeg SWSCALE!

Daniel Kristjansson danielk at cuymedia.net
Thu Aug 31 13:32:22 UTC 2006

On Thu, 2006-08-31 at 03:08 -0400, Yeasah Pell wrote:
> Daniel Kristjansson wrote:
> >
> > Right, I understand the artifact you are talking about, it
> > is simply not called aliasing.
> >   
> Hee hee, ok, ok, I'll stop calling it that!

> > Umm, I would not consider that ideal at all. I would consider point
> > samples convolved with an infinite sinc function to be ideal. And
> > I would consider the sampled Gaussian blur of a CRT to be more ideal
> > than the box filter.
> "Ideal" there means "mathematically pure model", not "best suited". Like 
> an "ideal diode" -- it's ideal in the sense that it is a perfect 
> representation of a particular model. The model in that case is an 
> abstract array of pixels. I guess that was pretty ambiguous though.
Yeah, the LCD contains all the data in the original signal, unlike
the blurred output on the CRT. So in that sense it is better, but
only in the sense that if you trained a perfectly aligned camera
on the screen you could use the data to create a better reconstruction
than you could from a similarly aligned camera on the CRT. As a final
image it is less ideal.

> > Nope, only a small portion of the residual is due to the high frequency
> > components, you would need to filter that with the sinc function in
> > order for the image to contain just the high frequencies. And that
> > still wouldn't be called aliasing.
> Oops, you're right -- I forgot to filter the comparison image to limit 
> its bandwidth before taking the difference.
> http://schwide.com/aliasing/difference2.jpg
This is much closer. If you want to see just the high frequencies,
you can take the FFT using something like FFTW, zero out the low
frequency components, then invert the transform and display the
result. The image will have slightly less energy, and the diagonal
lines won't have those jaggies (they come from the pixel grid you
are doing the math on; if you enlarge both images 10x smoothly,
do the subtraction, and then shrink 10x smoothly, you won't see
the jaggies anymore).
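To illustrate the idea, here is the same high-pass trick on a 1-D
signal (a minimal sketch, nothing FFTW-specific: the naive DFT, the
test signal, and the cutoff bin are all invented for demonstration;
a real image would use a 2-D FFT):

```python
import math
import cmath

def dft(x):
    """Naive forward DFT (O(N^2)); stands in for an FFT library call."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Naive inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

N = 64
low  = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]   # bin 2
high = [math.cos(2 * math.pi * 20 * n / N) for n in range(N)]  # bin 20
x = [l + h for l, h in zip(low, high)]

X = dft(x)
cutoff = 8
# Chop out the low-frequency bins (and their mirror images above N/2),
# keeping only the high-frequency content.
for k in range(N):
    if min(k, N - k) < cutoff:
        X[k] = 0

y = idft(X)
# What remains is just the high-frequency component of the input.
err = max(abs(a - b) for a, b in zip(y, high))
```

After the inverse transform, `y` matches the high-frequency component
to within rounding error, which is exactly the "just the high
frequencies" image described above.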

> Wow, that's a really neat idea! I notice you dodged the question of 
> whether you can diddle the frame rate of the graphics card though, I was 
> hoping you'd have some idea about that -- I think I've seen programs 
> that let you adjust the X video mode timings in realtime, haven't I?
You can diddle with the display timing by adding more lines to the
off-screen portion of the scan. LCDs just cut these off, and with
CRTs you can do it if you are very careful with your timing. This
is controlled in the mode-setting portion of the X server. It would
need graphics-card-specific code, X server hacking, and a new X API,
and you could probably only do it with open source drivers. But the
bigger problem might be modern monitors, which usually shut off the
display of pixels for several seconds while they resync with the
signal if it changes too much from what they expect. On modern
CRTs this is a safety feature, and on LCDs/plasmas it is often
needed for the buffering circuits.
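The arithmetic behind this: the refresh rate is the pixel clock
divided by the total scan size (visible plus blanking), so each extra
off-screen line slows the frame rate slightly. A sketch with
hypothetical numbers loosely based on a 720p modeline (illustrative
only, not taken from any real EDID):

```python
pixel_clock = 74_250_000  # Hz (assumed; typical for 720p timings)
htotal = 1650             # pixels per scanline, including blanking

def refresh(vtotal):
    """Refresh rate in Hz for a given total line count per frame."""
    return pixel_clock / (htotal * vtotal)

print(refresh(750))  # 60.0 Hz with 750 total lines
print(refresh(751))  # one extra blank line -> roughly 59.92 Hz
```

So nudging the frame rate means nudging `vtotal` (or the pixel
clock), which is exactly the mode-setting data the X server owns.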

> Still, if it's possible, it'd be a lot easier than essentially
> creating alternate-universe renderings of MPEG data. :-)
For any one monitor & video card combination, maybe. We do have a
_very_crude_ approximation of this with the "Separate video
modes for GUI and TV playback" functionality in MythTV: you
can map up to three video resolutions to display resolution
and frame rate combinations (more with DB hacking). You
could have one 60 fps and one 59.97 fps modeline, though
you hit X server limitations even with this; it doesn't
work with Xinerama. The plan is to integrate this into the
video-resolution-and-frame-rate-to-display matching algorithm
in the mythtv-vid branch, so that you can match the
ffmpeg-reported video resolution and frame rate to the display
resolution and frame rate. But we will still need to adjust
the audio or accept judder for perfect A/V sync. At that point,
though, if the monitor + video card driver supported switching
between 59.94 fps, 59.97 fps, and 60.0 fps without blanking,
it would be trivial to add those modelines and use this for
A/V sync in place of audio resampling.
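The matching itself is simple once the mode list exists. A
hypothetical sketch in the spirit of that feature (the mode table,
function name, and scoring rule are all invented for illustration,
not MythTV's actual code):

```python
# Candidate display modes: (width, height, refresh_hz). Invented data.
modes = [
    (1920, 1080, 60.00),
    (1920, 1080, 59.94),
    (1280,  720, 50.00),
]

def best_mode(width, height, fps):
    """Pick a mode that fits the video, minimizing frame-rate mismatch."""
    candidates = [m for m in modes if m[0] >= width and m[1] >= height]
    return min(candidates, key=lambda m: abs(m[2] - fps))

# A 59.94 fps 1080 stream (as ffmpeg would report it) picks the
# 59.94 Hz modeline rather than the 60.00 Hz one.
print(best_mode(1920, 1080, 59.94))  # -> (1920, 1080, 59.94)
```

With modelines that close together, choosing the right one eliminates
the residual drift that audio resampling would otherwise absorb.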

Resampling the audio correctly should be much cheaper, in coding
effort and at run time, than resampling the video. Not because of
theory, but because of how ffmpeg is implemented and because of how
much less data you have to deal with. Problems with fixed-block
data like AC3/DTS can be overcome with a decode/encode step as in
Mark Spieth's audio patch. For audiophiles, decoding in the ffmpeg
library and resampling (properly) in MythTV, and NOT re-encoding
to AC3/DTS, should avoid unneeded codec generation loss. We have
at least one library in MythTV capable of resampling the audio
properly; we simply don't use it in the NVP yet. Maybe in 0.23 :)
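To show the scale of the rate change involved, here is a crude
linear-interpolation sketch (invented for illustration; "properly"
means a windowed-sinc/polyphase filter, which this deliberately is
not) stretching one second of 48 kHz audio by the 60/59.94 ratio:

```python
def resample_linear(samples, ratio):
    """Crude linear-interpolation resampler: stretch samples by ratio.
    Illustrates the rate change only; a production resampler would
    band-limit with a polyphase or windowed-sinc filter first."""
    out_len = int(len(samples) * ratio)
    out = []
    for i in range(out_len):
        pos = i / ratio          # position in the input stream
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]  # clamp at the end
        out.append(a * (1 - frac) + b * frac)
    return out

one_second = [float(i) for i in range(48000)]  # ramp, 1 s at 48 kHz
stretched = resample_linear(one_second, 60 / 59.94)
print(len(stretched))  # 48048: only 48 extra samples per second
```

The correction is about 0.1% of the data rate, tiny next to the cost
of resampling every video frame.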

-- Daniel

More information about the mythtv-dev mailing list