[mythtv] ffmpeg SWSCALE!

Yeasah Pell yeasah at schwide.com
Thu Aug 31 03:51:57 UTC 2006


Daniel Kristjansson wrote:
>> The answer I came up with is "if you have a properly set up viewing 
>> distance given your eyes and the size of the display device, you do not."
>>     
> This is not true of signal aliasing. This is true of two
> of the other artifacts of an LCD: 1/ that the "pixel" is
> actually three (or four) shutters with different color
> filters at three different pixel locations, and 2/ that
> the "pixel" is square. The viewing distance will blur these
> artifacts away. A good CRT already blurs most of these away
> at the display surface because it ALWAYS re-samples the
> image with a very high resolution shadow mask, but if you
> ever saw those Moiré patterns that X's default background
> induces on poor CRTs, that is aliasing. Depending on the
> frequency, it can be seen so long as two cones in your eye
> register the screen in the far distance.
>   
Forget about the subpixels for a second. Assume the pixel on a panel is 
simply a colored square. That is what I mean by "ideal discrete output 
device". In their visible character, LCDs and their ilk come quite 
close to this in my opinion, but I'm just talking theoretically here 
-- a grid of colored squares.

A grid of colored squares is an ideal representation of raw discrete 
output before any reconstruction filtering; it's the two-dimensional 
analog of the stair-stepped output of an audio DAC that hasn't been 
filtered at all. In essence, it contains the original signal, plus all 
the output sample aliasing at frequencies above the Nyquist frequency 
of the image being displayed. In audio, this aliasing would ideally be 
stripped off with a low pass filter. In video, it is ideally stripped 
off with a convolution that is a two-dimensional low pass filter 
("blur").

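To make the audio analogy concrete, here's a minimal sketch in plain C 
of what that ideal "blur" would do (not MythTV or ffmpeg code -- the 
helper names are mine, it works on a bare grayscale buffer, and a 
crude box filter stands in for a proper low pass kernel): hold each 
low-res sample over a block, i.e. the two-dimensional stair-step, then 
convolve to strip everything above the Nyquist frequency of the 
low-res image.

#include <stdlib.h>

/* Box-blur 'src' (w x h) with a (2r+1)x(2r+1) window into 'dst'.
 * A crude stand-in for a proper 2D low pass kernel. */
void box_blur(const unsigned char *src, unsigned char *dst,
              int w, int h, int r)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0, n = 0;
            for (int dy = -r; dy <= r; dy++)
                for (int dx = -r; dx <= r; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                        sum += src[sy * w + sx];
                        n++;
                    }
                }
            dst[y * w + x] = (unsigned char)(sum / n);
        }
    }
}

/* What an idealized panel shows: each low-res sample held constant
 * over an f x f block (the 2D stair-step), followed by the "ideal"
 * blur.  Assumes w and h are multiples of f. */
void reconstruct(const unsigned char *lowres,  /* (w/f) x (h/f) */
                 unsigned char *smooth,        /* w x h         */
                 int w, int h, int f)
{
    unsigned char *blocks = malloc((size_t)w * h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            blocks[y * w + x] = lowres[(y / f) * (w / f) + x / f];
    box_blur(blocks, smooth, w, h, f / 2);  /* strip the block edges */
    free(blocks);
}
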
Ok? Understand what I mean by aliasing here? Some pictures might help.

Say this picture is representative of the continuous 2d analog original 
image (of course it isn't, but it's higher resolution, so it will suffice):

http://schwide.com/aliasing/original.jpg

And this picture is representative of a lower resolution picture which 
is displayed on an idealized discrete output device:

http://schwide.com/aliasing/lowres.jpg

The difference between those two pictures shows the high frequency 
components that should have been filtered out, i.e. the Nyquist 
aliasing:

http://schwide.com/aliasing/difference.jpg
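
For the curious, a difference picture like that can be produced with 
the same sort of plain C sketch as above (again purely illustrative, 
not anything MythTV does): hold each low-res sample over its block, 
subtract from the original, and re-center around mid-gray so it's 
viewable.

/* diff = original minus the block-replicated low-res picture,
 * re-centered around mid-gray for viewing.  What survives is the
 * content above the low-res Nyquist limit, i.e. the aliasing. */
void difference(const unsigned char *orig,    /* w x h         */
                const unsigned char *lowres,  /* (w/f) x (h/f) */
                unsigned char *diff,          /* w x h         */
                int w, int h, int f)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int block = lowres[(y / f) * (w / f) + x / f];
            int d = orig[y * w + x] - block;
            diff[y * w + x] = (unsigned char)(d / 2 + 128);
        }
}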

You can't filter this out -- it's inherent in displaying a grid of 
colored blocks. The only way you can possibly "filter" it is to a) 
position yourself so you can't see details of that fineness, or b) 
feed your display content of a lower resolution, which leaves you room 
to reduce the aliasing by filtering the image. Option b) is what I 
believe Mike has been talking about.

Yes, actual display devices aren't idealized discrete output devices, 
especially CRTs. But I believe LCDs et al. are close enough that they 
can be viably thought of as such. Certainly the aliasing described 
above will be present. Some differences will be present on a CRT as 
well, but they will probably be more like errors -- lower frequency 
deviations in the signal that are below the Nyquist limit of the 
original signal.
> In MythTV you are unlikely to ever see aliasing of the image
> when using MythTV with any halfway decent video card, but you
> will see the blurring that Xv does to avoid aliasing when it
> scales your image up to the display resolution. The audio is
> a different matter, there MythTV can create aliasing because
> of how we adjust A/V sync.
>
> The reason a software scaler can do better than the XVideo
> scaler is that the latter is usually optimized for speed.
> The XVideo scaler in your video card may be using something
> as crude as linear scaling, while the software scaler may be
> using a filter that samples 10 or 20 pixels in the input image
> for each pixel in the output image.
>   
Of course that's all true -- but it also has nothing to do with what I 
was talking about. Xv will not scale a signal that is the same 
resolution as the output device, which as I've said is explicitly the 
case I'm talking about. I'm off on a tangent here; you should probably 
be replying to somebody else's less-tangential post. :-)
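
For what it's worth, given the subject line, the quality gap being 
described maps directly onto libswscale's scaler flags. A sketch 
(make_scaler is just an illustrative helper of mine, the pixel format 
names are current ffmpeg spellings, and this is not how MythTV drives 
its scaler internally):

#include <libavutil/pixfmt.h>
#include <libswscale/swscale.h>

struct SwsContext *make_scaler(int src_w, int src_h,
                               int dst_w, int dst_h, int high_quality)
{
    /* SWS_FAST_BILINEAR is roughly the crude linear scaling mentioned
     * above; SWS_LANCZOS taps many input pixels per output pixel. */
    int flags = high_quality ? SWS_LANCZOS : SWS_FAST_BILINEAR;
    return sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                          dst_w, dst_h, AV_PIX_FMT_YUV420P,
                          flags, NULL, NULL, NULL);
}

/* Per frame: sws_scale(ctx, src_planes, src_strides, 0, src_h,
 *                      dst_planes, dst_strides);
 * and sws_freeContext(ctx) when the size changes or playback stops. */
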
> I think there are better ways to improve DVD playback though.
> For one you could run an MPEG-4 type de-blocking filter (but
> only after the frame is used for prediction.) If you wanted to be
> really ambitious you could modify ffmpeg to decode two frames for
> each image in the stream, one at regular resolution for prediction
> and one at display resolution. You could also decode the
> display frames at the display framerate, rather than the
> encoded frame rate. That would also eliminate "judder".
> That is really ambitious though; de-blocking is pretty simple
> and really needs to be done before applying an unsharp filter,
> so it doesn't make the blockiness worse.
>
>   
I don't disagree with any of that either, though the judder issue is 
more complicated than that -- you can't always just change the 
framerate, since a very common setup is to have the multichannel audio 
decoded in an external receiver, so you don't have direct control over 
the audio clock (and can't play bending tricks with the audio; the 
only thing you can do is drop or add audio frames, which is noticeable 
-- unless you want to decode and re-encode, which is 'orrible.) I'm 
not sure if modern graphics cards could be made to make small 
adjustments to the output frame rate to basically PLL the audio clock, 
but if they could it would be a wonderful improvement -- no more 
judder.
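
To make the PLL idea concrete: the control loop itself would be 
trivial, the missing piece is purely a knob to turn. A toy sketch in 
the same vein as the other snippets (adjust_refresh and the loop gain 
are invented for illustration; nothing here actually programs the 
video card):

#include <stdio.h>

/* Toy proportional loop: measure how far video presentation has
 * drifted against the (uncontrollable) audio clock and nudge the
 * nominal refresh by a few parts per million to chase it, instead of
 * dropping or repeating frames.  There is no real API call here to
 * apply the result; that is exactly the missing piece. */
double adjust_refresh(double nominal_hz, double drift_sec_per_sec)
{
    const double gain = 0.05;  /* invented loop gain */
    /* Positive drift: video is falling behind the audio clock, so ask
     * for a slightly faster refresh; negative: slightly slower. */
    double new_hz = nominal_hz * (1.0 + gain * drift_sec_per_sec);
    printf("refresh %.6f Hz -> %.6f Hz\n", nominal_hz, new_hz);
    return new_hz;
}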
