[mythtv] ffmpeg SWSCALE!

Daniel Kristjansson danielk at cuymedia.net
Thu Aug 31 03:13:22 UTC 2006


On Wed, 2006-08-30 at 22:25 -0400, Yeasah Pell wrote:
> Daniel Kristjansson wrote:

> The term "aliasing" is pretty overloaded, it can mean virtually anything 
> depending on the context.
Ok, maybe I'm just being overly picky because I'm an
electrical engineer who did his graduate work in
computer graphics, but as far as the signal processing
and computer graphics worlds are concerned, aliasing has
only one meaning. Colloquially it may mean whatever
someone wants it to mean, but none of those meanings
are as good as the Nyquist-Shannon meaning.
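
To make that concrete, here is a tiny stand-alone sketch (mine,
purely illustrative, not anything from MythTV): sample a 7 kHz
tone at 8 kHz and the samples you get are exactly the samples
of a 1 kHz tone. That fold-back is the only thing "aliasing"
means in the Nyquist-Shannon sense.

/* Sampling above Nyquist: a 7 kHz sine at fs = 8 kHz produces
 * the same sample values as a (phase-inverted) 1 kHz sine. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double fs = 8000.0;          /* sample rate, Hz */
    const double f_in = 7000.0;        /* input tone, above fs/2 */
    const double f_alias = fs - f_in;  /* where it folds to: 1 kHz */
    for (int n = 0; n < 8; n++) {
        double t = n / fs;
        printf("n=%d  7kHz: % .4f   1kHz (inverted): % .4f\n",
               n, sin(2 * M_PI * f_in * t),
               -sin(2 * M_PI * f_alias * t));
    }
    return 0;
}

(Compile with -lm; the two columns come out identical.)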

> The answer I came up with is "if you have a properly set up viewing 
> distance given your eyes and the size of the display device, you do not."
This is not true of signal aliasing. It is true of two of
the other artifacts of an LCD: 1/ the "pixel" is actually
three (or four) shutters with different color filters at
three different locations, and 2/ the "pixel" is square.
Viewing distance will blur these artifacts away. A good
CRT already blurs most of these away at the display
surface because it ALWAYS re-samples the image with a
very high resolution shadow mask, but if you ever saw the
Moiré patterns that X's default background induces on
poor CRTs, that is aliasing. Depending on its frequency,
it stays visible for as long as at least two cones in your
eye can still resolve the screen, however far away you stand.
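
The spatial version is just as easy to see. This throwaway
sketch (again not MythTV code) point-samples a one-pixel
checkerboard while shrinking it by a non-integer factor with
no filtering at all, which is roughly what a cheap scaler
does; the fine pattern picks up a much coarser beat pattern,
and that beat is the Moiré you see on a CRT that can't resolve
the stipple.

/* Nearest-neighbour "scaling" of a fine checkerboard: the
 * output shows a coarse beat pattern instead of a uniform
 * gray -- spatial aliasing. */
#include <stdio.h>

int main(void)
{
    const double scale = 0.8;             /* shrink by 20% */
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 48; x++) {
            int sx = (int)(x / scale);    /* point sample, no filter */
            int sy = (int)(y / scale);
            putchar(((sx + sy) & 1) ? '#' : ' ');
        }
        putchar('\n');
    }
    return 0;
}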

You are unlikely to ever see aliasing of the image in
MythTV with any halfway decent video card, but you will
see the blurring that Xv does to avoid aliasing when it
scales your image up to the display resolution. The audio
is a different matter: there MythTV can create aliasing
because of how we adjust A/V sync.
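
I won't paste our audio code here, but the mechanism is the
same as in this deliberately crude stand-in (an illustration,
not what MythTV actually does verbatim): shortening a buffer
by simply throwing samples away is a resample with no low-pass
filter in front of it, so frequencies above the new, slightly
lower Nyquist limit fold back into the band instead of being
removed.

/* Shorten audio by discarding one sample in every `nth`.
 * There is no anti-alias filter, so high-frequency content
 * folds back (aliases) instead of being removed. */
static int drop_every_nth(const short *in, int n_in,
                          short *out, int nth)
{
    int n_out = 0;
    for (int i = 0; i < n_in; i++)
        if ((i + 1) % nth != 0)   /* keep all but every nth sample */
            out[n_out++] = in[i];
    return n_out;
}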

The reason a software scaler can do better than the XVideo
scaler is that the latter is usually optimized for speed.
The XVideo scaler in your video card may be using something
as crude as linear scaling, while the software scaler may be
using a filter that samples 10 or 20 pixels in the input image
for each pixel in the output image.
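
For reference, the difference is just a flag when you set the
scaler up with libswscale (the header path and enumerator
spellings below are the current FFmpeg ones; older trees spell
some of them differently, so treat this as a sketch):

/* Create a YV12 -> YV12 scaler. SWS_LANCZOS samples many more
 * input pixels per output pixel than SWS_FAST_BILINEAR, which
 * is the speed/quality trade-off described above. */
#include <libswscale/swscale.h>

struct SwsContext *make_scaler(int src_w, int src_h,
                               int dst_w, int dst_h,
                               int high_quality)
{
    int flags = high_quality ? SWS_LANCZOS : SWS_FAST_BILINEAR;
    return sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                          dst_w, dst_h, AV_PIX_FMT_YUV420P,
                          flags, NULL, NULL, NULL);
}

sws_scale() is then called once per frame with the source and
destination plane pointers and strides.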

I think there are better ways to improve DVD playback though.
For one you could run an MPEG-4 type de-blocking filter (but
only after the frame has been used for prediction.) If you
wanted to be really
ambitious you could modify ffmpeg to decode two frames for
each image in the stream, one regular resolution for prediction
and one at display resolution. You could also decode the
display frames at the display framerate, rather than the
encoded frame rate. That would also eliminate "judder".
That is really ambitious though; de-blocking is pretty simple
and really needs to be done before applying an unsharp filter,
so the sharpening doesn't make the blockiness worse.
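
To see why the order matters, here is a minimal 1-D sketch of
an unsharp mask (illustrative only, not any real filter from
our tree): it amplifies whatever edges it is handed, and the
artificial steps at 8x8 block boundaries are edges too, so
they have to be removed by the de-blocker first.

/* 1-D unsharp mask: out = in + amount * (in - blur(in)), with
 * a simple 3-tap blur. Any step at a block boundary gets
 * sharpened right along with the real detail. */
static void unsharp_1d(const unsigned char *in, unsigned char *out,
                       int n, float amount)
{
    for (int i = 1; i < n - 1; i++) {
        float blur = (in[i - 1] + 2 * in[i] + in[i + 1]) / 4.0f;
        float v = in[i] + amount * (in[i] - blur);
        out[i] = v < 0 ? 0 : v > 255 ? 255 : (unsigned char)v;
    }
    out[0] = in[0];
    out[n - 1] = in[n - 1];
}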

-- Daniel


