support 16-bit input pixels

Issue #23 resolved
Steve Borho created an issue

If a user app wanted to prepare pixels for input to the encoder without knowing the bit depth of the encode (8, 10, or 12), it would pack the pixels into 16-bit samples and place all the padding in the least significant bits (and perhaps add dithering). The encoder would then be responsible for down-shifting to remove the bits it did not want.
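
The intended packing and the matching down-shift could look like the sketch below; the helper names and the 10-bit example are illustrative, not part of the x265 API.

```c
#include <stddef.h>
#include <stdint.h>

/* Caller side: place an 8/10/12-bit sample in the top bits of a 16-bit word,
 * leaving the padding in the least significant bits. */
static void pack_msb_aligned(const uint16_t *src, uint16_t *dst,
                             size_t count, int srcBitDepth)
{
    int shift = 16 - srcBitDepth;            /* e.g. 6 for 10-bit samples */
    for (size_t i = 0; i < count; i++)
        dst[i] = (uint16_t)(src[i] << shift);
}

/* Encoder side: recover samples at the internal bit depth by discarding
 * the low padding bits. */
static void downshift_to_internal(const uint16_t *src, uint16_t *dst,
                                  size_t count, int internalBitDepth)
{
    int shift = 16 - internalBitDepth;       /* padding bits to remove */
    for (size_t i = 0; i < count; i++)
        dst[i] = (uint16_t)(src[i] >> shift);
}
```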

To support this, we need to add an X265_CSP_HIGH_DEPTH flag that can be set on the x265_picture.colorSpace field. When present, our TComPicYuv picture import function will know that the input picture has 16-bit samples that need to be down-shifted to the internal bit depth.
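
A minimal sketch of how a caller might use the proposed flag follows; X265_CSP_HIGH_DEPTH is the flag proposed in this issue, and the import behaviour described is an assumption about the eventual implementation, not shipped API.

```c
#include <stdint.h>
#include "x265.h"

/* Submit one frame whose planes hold 16-bit, MSB-aligned samples. */
void submit_high_depth_frame(x265_encoder *enc, x265_param *param,
                             void *planes[3], int strides[3])
{
    x265_picture pic;
    x265_picture_init(param, &pic);

    /* Proposed flag: pixels are packed in 16-bit words regardless of the
     * encoder's internal bit depth. */
    pic.colorSpace = param->internalCsp | X265_CSP_HIGH_DEPTH;

    for (int i = 0; i < 3; i++)
    {
        pic.planes[i] = planes[i];
        pic.stride[i] = strides[i];
    }

    x265_nal *nal;
    uint32_t nalCount;
    x265_encoder_encode(enc, &nal, &nalCount, &pic, NULL);
}
```

On the import side, TComPicYuv would test `pic.colorSpace & X265_CSP_HIGH_DEPTH` and right-shift every sample by `16 - internalBitDepth` while copying the planes.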

The flag will not be supported in x265_param.internalCsp, where it would have no meaning. This needs to be clearly documented in x265.h.
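
A possible x265.h note covering both points (the wording is only a suggestion):

```c
/* X265_CSP_HIGH_DEPTH - may be OR'd into x265_picture.colorSpace to signal
 * that the picture planes hold 16-bit samples which the encoder will
 * down-shift to its internal bit depth. The flag has no meaning in
 * x265_param.internalCsp and must not be set there. */
```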

Required for 0.8
