[mythtv-users] Capture Card Advice

GARY GENDEL ggendel at sarnoff.com
Thu Jul 24 23:43:25 EDT 2003


Jeff Williams wrote:
>>Thanks Jeff,
>>
>>I guess my question comes down to this: obviously, a software encoder
>>can potentially produce a better image given enough computing power,
>>with little to no artifact, smearing, or ghosting problems on the
>>encode side, and good de-interlacing of the source image.  Since the
>>hardware encoders are fixed silicon (unless there's an FPGA in there),
>>are they at a fixed compression mode, or do they allow different
>>compression levels of the MPEG encoding?
> 
> 
> Well, just to clarify the answer someone else gave - hardware encoders will let you choose different levels of compression, but you are locked into whatever codec is programmed into the chip.  I am not sure whether this is upgradeable via firmware, but in any case you will be at the mercy of the card manufacturer.  So the answer to your question seems to be yes, they are at a fixed compression mode - though the compression quality can be adjusted.
> 
> I would personally rather be able to use MPEG-4 encoding, so if I were to do it over again I'd probably still buy a fast CPU and a plain capture card that only grabs the raw signal and lets the software determine the codec used.
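
To make that "adjustable quality, fixed codec" point concrete: on cards
driven by a Linux V4L2 driver that exposes the MPEG control class (ivtv
is one example), the bitrate is just a runtime knob, while the codec
itself is baked into the silicon.  A minimal sketch follows, assuming
such a driver; the control names come from linux/videodev2.h, the
device path is only an example, and your driver may not support these
controls at all:

/* sketch.c -- nudge a hardware MPEG-2 encoder's bitrate through the
 * V4L2 extended-control API.  The codec is fixed; parameters are not. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* example device path */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_ext_control ctrl[2];
    memset(ctrl, 0, sizeof(ctrl));
    ctrl[0].id    = V4L2_CID_MPEG_VIDEO_BITRATE_MODE;
    ctrl[0].value = V4L2_MPEG_VIDEO_BITRATE_MODE_VBR;
    ctrl[1].id    = V4L2_CID_MPEG_VIDEO_BITRATE;
    ctrl[1].value = 6000000;                /* 6 Mbit/s average */

    struct v4l2_ext_controls ctrls;
    memset(&ctrls, 0, sizeof(ctrls));
    ctrls.ctrl_class = V4L2_CTRL_CLASS_MPEG;
    ctrls.count      = 2;
    ctrls.controls   = ctrl;

    if (ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls) < 0)
        perror("VIDIOC_S_EXT_CTRLS");       /* driver may lack support */

    close(fd);
    return 0;
}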

To clarify the clarification.  An MPEG-2 hardware solution only does
MPEG-2.  MPEG-2 was developed originally for the broadcast market.  The
development grew out of work originally done here at the David Sarnoff
Research Center (formerly RCA Laboratories, now Sarnoff Corporation),
among others.  DirecTV was the first commercial product that used a
preliminary (and slightly incompatible) MPEG-2 stream, but once MPEG-2
was ratified, they switched their systems over to it.  Almost all
current hardware MPEG-2 encoders are roughly equivalent.  The main
differences are how they do motion estimation (which can take > 60% of
the overall cycles), the rate control system, and some of the more
advanced tools (such as dual-prime vectors, scene-cut detection, and
inverse 3:2 pulldown).  Bottom line: for CCIR 601 (720x480), once the
bitrate exceeds 8 Mbit/s or so, there is plenty of headroom to absorb
the errors introduced by a mediocre encoder.  It's only when the
bitrate drops below 4 Mbit/s that better encoders start showing their
true colors (pun intended).
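
To put rough numbers on that threshold, here is a back-of-the-envelope
sketch.  It averages over frame types and ignores GOP structure, so
treat it as illustration only:

/* budget.c -- bit budget per frame and per macroblock for CCIR 601
 * MPEG-2 at the two bitrates under discussion. */
#include <stdio.h>

int main(void)
{
    const double fps  = 29.97;                    /* NTSC frame rate  */
    const int    mbs  = (720 / 16) * (480 / 16);  /* 1350 macroblocks */
    const double rates[] = { 8e6, 4e6 };          /* bits per second  */

    for (int i = 0; i < 2; i++) {
        double per_frame = rates[i] / fps;
        printf("%.0f Mbit/s -> %6.0f bits/frame, %5.1f bits/macroblock\n",
               rates[i] / 1e6, per_frame, per_frame / mbs);
    }
    return 0;
}

At 8 Mbit/s there are roughly 200 bits per macroblock to play with; at
4 Mbit/s that budget is halved, and sloppy motion estimation starts to
cost visible quality.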

MPEG-4 started out as a replacement for H.263 (video conferencing), but
it grew to encompass MPEG-2's domain as new compression techniques were
introduced.  The current state of MPEG-4 seems to be a mishmash of
profiles and levels.  I haven't seen any MPEG-4 decoder or encoder (sans
reference software) that covers any complete profile/level.  The real
advantage of MPEG-4 comes at smaller resolutions, because of the smaller
block sizes, and at lower bitrates (thanks to tools like global motion
compensation).  It has some really nifty capabilities, such as sprites
and the ability to send multiple resolutions at the same time.  In all
its glory, a streaming MPEG-4 encoder can negotiate with the decoder to
select the tool set and encoding parameters on the fly.  Any hardware
solution that implements the full spec will be one hot sucker.  MPEG-4
starts showing off as the bitrate goes down, mainly due to its ability
to drop frames.  That isn't bad for video conferencing, but I surely
don't want to watch a movie at 6 fps or less.  Even the standard 24 fps
of film looks a bit choppy under the higher illumination of normal home
viewing conditions (that's one reason they darken a movie theatre).
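
Here is a toy rate-control loop showing why a starved encoder ends up
skipping frames.  Every number in it is invented for illustration; real
encoders use far more elaborate buffer models:

/* drop.c -- when the per-frame cost exceeds the bits earned per frame
 * interval, the only way to stay on budget is to skip frames. */
#include <stdio.h>

int main(void)
{
    const double fps        = 24.0;     /* input frame rate             */
    const double bitrate    = 128e3;    /* starved 128 kbit/s target    */
    const double frame_cost = 12e3;     /* pretend cost per coded frame */
    double budget = 0.0;
    int coded = 0;

    for (int f = 0; f < (int)fps; f++) {   /* one second of input       */
        budget += bitrate / fps;           /* bits earned this tick     */
        if (budget >= frame_cost) {        /* can we afford this frame? */
            budget -= frame_cost;
            coded++;
        }                                  /* otherwise: skip it        */
    }
    printf("coded %d of %d frames (about %d fps delivered)\n",
           coded, (int)fps, coded);
    return 0;
}

With these made-up numbers, 24 fps in becomes about 10 fps out; squeeze
the bitrate further and you land in the single-digit territory described
above.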

However, if you really want to look at a spec that is the culmination of
everyone's pipe dreams, take a look at the new JVT spec (H.264, a.k.a.
MPEG-4 Part 10).  Yeehaw!

Gary



