[mythtv-users] Does VDPAU decoding require dimensions that are a particular multiple of pixels?

f-myth-users at media.mit.edu f-myth-users at media.mit.edu
Thu Nov 19 04:20:24 UTC 2015


I'm trying to figure out what the minimum granularity of image
dimensions is that VDPAU is supposed to be able to decode.
[And do other likely decoders have different limits?]

For either dimension, must it be a multiple of 2 pixels? 4? 8? 16?

[I'm thinking of stuff that started out in NTSC standard definition
and has been transcoded to x264 in an MKV container by Handbrake.
If the answer is different for interlaced vs progressive, that
would be useful to know, too---my intent is to let VDPAU handle
the deinterlacing if possible, on the assumption that its best
deinterlacers are better than Handbrake's best deinterlacers.
(If this isn't true, I'd like to know!)]

I've seen this bug report from November 3

  Ticket #12531: vdpau segfaults when video dimensions not multiple of 16

and assume that it -is- a bug, and that the actual constraint on
dimensions is looser than a multiple of 16, but how tight a limit
does the spec actually impose?  I haven't been able to figure this
out.  (I know that myth != spec, but assume that, modulo bugs,
adhering to the spec is good enough.)
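[For what it's worth, H.264 itself codes frames in 16x16 macroblocks,
so the coded dimensions are always rounded up to the next multiple of
16, with the excess signalled as cropping in the SPS---which may be
why a multiple-of-16 assumption crept in somewhere.  A sketch of that
arithmetic (the sample dimensions below are just illustrative):

```python
def coded_size(width, height, mb=16):
    """Round display dimensions up to the 16-pixel macroblock grid
    that H.264 actually codes; the remainder becomes SPS cropping."""
    coded_w = (width + mb - 1) // mb * mb
    coded_h = (height + mb - 1) // mb * mb
    return coded_w, coded_h, coded_w - width, coded_h - height

# A 704x480 NTSC crop is already macroblock-aligned:
print(coded_size(704, 480))   # (704, 480, 0, 0)
# A 708x478 crop forces padding the decoder must crop back off:
print(coded_size(708, 478))   # (720, 480, 12, 2)
```

So odd dimensions are legal at the bitstream level; the question is
whether VDPAU honors the cropping correctly.]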

I don't [yet] have a VDPAU implementation to test against, but I'd
like not to be unpleasantly surprised later if I happen to pick bad
crop values.  I'd also prefer not to crop far more loosely than I
otherwise would.
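[In case it helps anyone else picking crop values, one conservative
workaround I'm considering is to snap the crop so the remaining
dimensions land on a chosen alignment---2, 4, 8, or 16; which one is
actually required is exactly my question.  A sketch, assuming
"aligned" means the cropped dimension is divisible by N:

```python
def safe_crop(width, height, crop_w, crop_h, align=16):
    """Adjust the requested crop so the remaining frame dimensions
    are a multiple of `align`.  We only ever crop slightly MORE,
    never less, since loosening the crop can't restore alignment."""
    new_w = (width - crop_w) // align * align
    new_h = (height - crop_h) // align * align
    return width - new_w, height - new_h   # adjusted crop amounts

# e.g. wanting to crop 6x4 from 720x480 with 16-pixel alignment:
print(safe_crop(720, 480, 6, 4))   # (16, 16) -> frame becomes 704x464
```

Obviously this throws away a few extra rows/columns, which is exactly
the over-cropping I'd like to avoid if the real limit is smaller.]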

Thanks.


More information about the mythtv-users mailing list