Michael T. Dean
mtdean at thirdcontact.com
Fri Sep 16 23:56:23 UTC 2005
Tim McClarren wrote:
> Michael T. Dean wrote:
>> Tim McClarren wrote:
>>> I had posted a while back about a problem I was having with video
>>> capture. Basically, I get about 10 black pixels on the left edge of
>>> the captured stream, and about 20 pixels at the bottom, which are
>>> mostly black but with some grey blocks mixed in. I'm pretty sure it's
>>> entirely an issue with my very cheap SAA7130-based capture card.
>> No. It's an issue with NTSC (or PAL). These analog formats have the
>> unfortunate problem of having "ragged edges," sometimes with "rainbow
>> stripes." Therefore, when designing the specifications, the solution
>> that was chosen was to factor in an amount of "overscan"--extra image
>> size that would be shown outside the visible area of the screen.
>> Because of this designed-in overscan--which had a bit of an
>> engineering safety factor built in--some of the extra lines of
>> information were used to carry non-visible data (e.g., closed captions).
>> Therefore, you may see gray lines on the top of your video, too, that
>> aren't a problem with your capture card.
> Yes, I know... I just assumed that a GOOD capture card would
> understand that this part of the NTSC signal is not part of the
> visible image, and would "auto calibrate". I don't really have a lot
> of experience with a lot of different capture cards, but I do know
> that if I plop this card in a Windows box and use the regular Windows
> driver, I don't need to mess with cropping... the driver seems to
> handle it "out of the box", as it were.
Hmmm. Windows making a decision to do something whether you want it to
or not. That's hard to believe. ;)
>> That's how the specification was designed--to "correct" these ragged
>> edges by covering part of the picture tube with a bezel. Therefore,
>> correcting it on a computer outputting to a non-overscanned screen
>> should be done by the player.
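To make the player-side correction concrete, here's a minimal sketch of the cropping a player would do. The margin values are only illustrative (roughly the ~10-pixel left edge and ~20-pixel bottom edge you reported); real margins vary by card and broadcast.

```python
# Player-side overscan cropping: given a captured frame size and per-edge
# "ragged" margins, compute the rectangle the player should actually show.
# Margins are assumptions taken from the reported artifacts, not a standard.

def crop_rect(width, height, left=10, right=0, top=0, bottom=20):
    """Return (x, y, w, h) of the region that survives the crop."""
    w = width - left - right
    h = height - top - bottom
    if w <= 0 or h <= 0:
        raise ValueError("margins exceed frame size")
    return (left, top, w, h)

# A 720x480 NTSC capture with the margins above:
print(crop_rect(720, 480))  # -> (10, 0, 710, 460)
```

The point is that the full capture stays intact on disk; only the display step discards the edges, so the same recording still works on an overscanned TV.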
>>> which means I have to configure each front end separately, and I
>>> have to set up the transcoder to crop out those annoying bits if I
>>> want to watch my stuff in some other format).
>> But each frontend typically has different playback characteristics
>> requiring different settings, anyway. If you crop the video at
>> capture, it looks great when displayed on a digital projector or
>> computer monitor without overscan, but you lose part of the image
>> when displayed on a TV with overscan...
> I'm not sure I understand this. With Myth, you're no longer viewing
> the raw NTSC signal, so overscan/underscan isn't an issue. Are you
> saying that people make their X desktop start some ways above and to
> the left of the TV's top left corner, and have the bottom right corner
> not visible because it's beyond the display area? I have my MythTV
> front end connected to a plasma using WXGA on an RGB cable, so I'm not
> sure what happens when you use S-Video or composite -- I thought the
> video out of a graphics card would emit an NTSC signal that made all
> of the desktop visible, which means that MythTV's front end output
> would all be visible. But, it sounds like you have to mess with image
> size and offsets if you're displaying S-Video or composite out to make
> the image look right on a TV.
> Oh, and by the by, if anyone has a modeline to make the Radeon driver
> do 138x by 768, I'd love to have it :)
What I'm saying is that every TV in existence (including CRT/PDP/LCD/DLP
TVs) has some overscan. Therefore, although it says it has a
resolution of 1280x720 or 1920x1080, not all of those pixels are
actually visible. Therefore, you have a choice to make. You can
configure your system to output such that your 1280x720 pixels are
delivered to the TV within the visible range--in which case they're
scaled to fit into maybe a 1200x690 area on screen (this is what most
people do by "fixing" their modelines to fit the picture in the visible
area of the screen)--or you can configure your system to output the
1280x720 pixels at a 1:1 pixel mapping and tell your software to only
use a portion of the display (specifying a GUI size and telling Myth to
use the GUI size for the video), or you can set up your TV to use a 1:1
pixel mapping to deliver 1280x720 pixels and allow for overscan
(specifying a GUI size, but not using the GUI size for the video).
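The first option--shrinking the frame to fit the visible area--is just arithmetic on the overscan percentages. A small sketch, with per-edge overscan amounts that are assumptions (real sets vary, very roughly 2-5% per edge):

```python
# Option 1 above: scale the full frame down so it lands inside the TV's
# visible area. The overscan fractions are illustrative assumptions.

def visible_area(width, height, overscan_h=0.03, overscan_v=0.02):
    """Pixels left visible after the TV hides overscan_h/_v on each edge."""
    vis_w = round(width * (1 - 2 * overscan_h))
    vis_h = round(height * (1 - 2 * overscan_v))
    return vis_w, vis_h

# A 1280x720 mode with ~3% horizontal and ~2% vertical overscan per edge
# comes out close to the "maybe 1200x690" figure above:
print(visible_area(1280, 720))  # -> (1203, 691)
```

This is effectively what people do when they "fix" their modelines: they trade 1:1 pixel mapping for a picture that fits entirely inside the bezel.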
The example I'm making here is using digital TV resolutions, primarily
because it's digital (and therefore has a real digital resolution), but
the same applies for SDTV. Keeping the overscan in the recording is
good if displaying on an overscanned TV. And, since the player can be
configured to get rid of overscan, it also works well for
non-overscanned computer monitors/projectors.
When capturing HDTV streams, however, they're typically "cut" (no ragged
edges), so the design of the TV--not allowing a 1:1 pixel mapping, full
resolution, and no overscan (instead, allowing only two of the
three)--means that with HDTV, you don't get the choice you have with
SDTV. You can't keep the "ragged edges" so that they--and not the
image--are cut off by overscan. That's why I want the overscan area in
my SDTV recordings--so I can use 1:1 pixel mapping and full resolution
and not miss out on a portion of the picture. Unfortunately, with the
HDTV recordings, I'll be missing a portion of the picture (granted, it's
one that I'm supposed to miss, but still...) because I'd rather have a
1:1 pixel mapping and full resolution (i.e. no scaling) than have no
overscan.
Note, though, if you use a computer monitor--and not a TV or HDTV
"monitor"--you can get 1:1 pixel mapping with no overscan, but typically
computer monitors have resolutions different from HDTV's native pixel
resolutions (e.g., 1920x1200), so you still don't get to use the full
resolution supported by your display (you'll get black bars above and
below the picture).
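The black-bar case is easy to quantify. A sketch of centering a 1:1-mapped frame on a larger panel, using the sizes from the discussion (1920x1080 video on a 1920x1200 monitor) purely as an example:

```python
# Centering an unscaled video frame on a larger monitor: the leftover
# monitor pixels become pillarbox/letterbox borders.

def borders(mon_w, mon_h, vid_w, vid_h):
    """Return (side, top_bottom) border thickness when centered unscaled."""
    return ((mon_w - vid_w) // 2, (mon_h - vid_h) // 2)

# 1920x1080 video on a 1920x1200 panel: no side bars, 60-pixel bars
# above and below.
print(borders(1920, 1200, 1920, 1080))  # -> (0, 60)
```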