[mythtv] G400-TV questions
4 Dec 2002 15:27:01 -0000
I'm just jumping in here with my limited understanding :-)
On Wed, 04 Dec 2002 12:02:53 +0100, Erik Arendse
<email@example.com> wrote :
> Hi All,
> Am I too simple, or are the following thoughts valid:
> 1) If I have a G400, I want to grab maximum quality, which means resolution
> equal to TV standard used.
> 2) If I put my TV-out on the same resolution as the TV standard used no
> scaling is needed.
> 3) If I decode a recorded G400 grab and put it on the screen WITHOUT
> RESIZING (I'm not talking xv here, just a hypothetical put) then the
> resolution will be by definition OK for the TV
see below, but sounds good
> 4) So if I want a standalone box which is only grabbing full resolution
> interlaced TV and sending out full resolution interlaced TV there is no
> need for any scaling.
> 5) Ergo a G400-TV should be sufficient, if I replace the XV output with an
> optimized 1:1 put.
sounds reasonable; you just have to find a setup for your video card
that does "Real TV Output Sync (TM)".
I think the problem is that "Real TV Output" is pretty ugly.
You need to generate sync timings that lead to hidden areas around all 4
edges of your "data area". And these areas are not all the same size.
Since TV is decades old, and analog of course, they had to have these
"hidden areas" so the hardware could lock onto sync before the part
you want to see becomes visible. So you need to display data in what we
call "overscan mode" and expect that stuff at the edges is not going to
be visible. (You can of course put black or some other color there.)
Also, some parts of the top of the TV picture have digital data encoded
nowadays for teletext and closed captioning.
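To make the geometry concrete, here is a minimal sketch of the "safe area" arithmetic. The overscan fraction is an assumption (real sets vary, roughly 5-15%); the active resolutions are the usual full-frame capture sizes, not anything specific to the G400:

```python
# Rough "safe area" arithmetic for TV overscan. The 10% overscan figure
# below is an illustrative assumption, not a measured value for any set.
def safe_area(active_w, active_h, overscan_frac):
    """Return the resolution still visible if overscan_frac of the
    picture is hidden per axis (split between the two opposite edges)."""
    return (int(active_w * (1 - overscan_frac)),
            int(active_h * (1 - overscan_frac)))

# NTSC: 525 total scanlines per frame, ~480 carry picture; the rest is
# vertical blanking (sync, plus lines like 21 used for closed captions).
print(safe_area(720, 480, 0.10))  # -> (648, 432)

# PAL: 625 total scanlines, ~576 active.
print(safe_area(720, 576, 0.10))  # -> (648, 518)
```

With a somewhat larger assumed overscan, you land in the neighborhood of the 580x400-ish visible NTSC desktop mentioned below.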
Now, I'm not saying that generating these signals is impossible for a
video card! But, most people use a TV to show their desktop in winblows,
linux, whatever, or games, or word processing, or a presentation, etc.
These people almost always want to see everything that they see on their
monitor. So they want what we call "underscan mode". They don't want
their data expanded so it looks great but has its edges chopped off on
all four sides.
In fact, for some settop boxes I helped create, we display VGA data on TVs
in overscan mode, and I adjusted the "desktop" size to look "great" but
with a visible resolution of, say, 580x400 in NTSC mode (approximate, it's
been years and I don't have the exact numbers handy). This was a closed system
and worked great, but how many people have that kind of mode/desktop size
available under windblows/linux/etc.?
I know I'm rambling, but the answer is "yes, what you propose would lead
to the best quality image with least artifacts, etc" but you will
probably have to tweak the video timings on your video chip to get it
to look right, since the drivers for winblows/linux are most likely set up
to make the other 99.1% of the world happy and display data underscanned.
Just look around for "overscan" settings for your card.
> Another question: Why the YUV<-->RGB problems, can't I just grab the
> configuration in the format I need for putting it on the output device?
Not sure what your question is here; maybe I missed an earlier question
in your thread?
Basically there are different formats for capturing video and displaying
video. I believe YUV is typically used for TV because of the nature
of encoded TV signals: it leads to smaller data captures that are
basically still "right". RGB is usually used for computer desktops because it
gives exactly the same video color depth to each pixel and each color
in a pixel (this is not at all what TV does).
If you have a combined device like a computer with TV in or out then
you often have to choose to do things in RGB or YUV and often end
up converting between them (there are other choices as well).
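As a rough illustration of both points, here is a sketch of the standard BT.601 YCbCr-to-RGB conversion for one pixel, plus the storage saving of 4:2:2 chroma subsampling. The coefficients are the standard full-range BT.601 ones; any real capture card or driver may use a slightly different variant:

```python
# BT.601 full-range YCbCr -> RGB for a single 8-bit pixel. Coefficients
# are the standard ones; real hardware may use studio-range variants.
def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(128, 128, 128))   # mid grey -> (128, 128, 128)

# Why YUV captures are smaller: with 4:2:2 subsampling, each pixel gets
# its own Y (brightness) but two neighboring pixels share one Cb/Cr pair.
w, h = 720, 576                      # full PAL frame
rgb_bytes = w * h * 3                # RGB24: 3 bytes per pixel
yuv422_bytes = w * h * 2             # YUV 4:2:2: 2 bytes per pixel
print(rgb_bytes, yuv422_bytes)       # 4:2:2 is 2/3 the size of RGB24
```

The eye is much less sensitive to chroma resolution than to luma, which is why throwing away half the color samples still looks "right" on TV material.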
> Enlighten me....
Hope some of that helps :-)
I'm certainly not an expert on these things.