Many TVs become unhappy with unusual/unexpected aspect ratios, which was the reason for the padding tricks. Note that the example above doesn’t use npl - I’m 90% certain you need to tweak the npl option to get the desired results, as I believe the default is 100 - i.e. it maps the peak brightness in the TIF to 100 nits, which is NOT what you want. I haven’t had time to revisit these efforts beyond pulling up some really old command examples so far.
The input TIF was, at the time, a linear TIFF saved from darktable in the Rec.2020 colorspace. I’ll be revisiting this using RT instead sometime soon (maybe this weekend?). I can’t remember whether I scaled the image to be 2160 pixels high prior to saving it out.
Also note that ffmpeg in most distributions is not built with zscale enabled, as the ZIMG library is not packaged by many distributions; you’ll need to build from source.
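A quick way to check whether your build has it (assuming a reasonably recent ffmpeg):

```bash
# Prints a line for the zscale filter if ffmpeg was configured with --enable-libzimg
ffmpeg -hide_banner -filters | grep zscale
```

If nothing is printed, you’ll need a build configured with --enable-libzimg.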
Tables 1 and 2 show the expected signal levels for reference items (18% grey card, greyscale charts, etc.) and non-reference items (various skin tones, grass, etc.).
They should allow you to adjust the levels in your image to the levels expected by the standard.
Yep. My -1.36 EV compensation was just one “dumb” way to target 21.4% (linearized sRGB 0.5) for a quick demo; there are many other options depending on one’s needs and preferences.
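For anyone checking the arithmetic, an sRGB code value of 0.5 pushed through the sRGB decoding function lands at about 21.4% linear:

$$\left(\frac{0.5 + 0.055}{1.055}\right)^{2.4} \approx 0.214$$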
Only if you crop to get to a 16:9 aspect ratio. If your camera shoots a typical 3:2 aspect ratio, you need to pad out the sides. In this case, I already downscaled to 2160 pixels high in my converter, but that means the input image was only 3236 pixels wide, not 3840. So I pad it to 3840 wide with a black background, offsetting it by 302 pixels from the left to center it. (302 = (3840-3236)/2)
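For reference, a sketch of that padding step with ffmpeg’s pad filter (the filenames are placeholders; the (ow-iw)/2 expression computes the centering offset so you don’t have to hardcode it):

```bash
# Pad a 3236x2160 frame out to 3840x2160, centered on a black background
ffmpeg -i input.tif -vf "pad=3840:2160:(ow-iw)/2:0:black" padded.tif
```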
zscale defaults to npl=100 - which maps the brightest parts of the image to only 100 nits and makes the whole image dark.
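To make that concrete, here’s a hedged sketch of a linear-Rec.2020-TIFF-to-HLG encode with npl raised to 1000 - the filenames and encoder settings are placeholders, not a tested recipe:

```bash
# tin=linear: the input TIFF is linear light; t=arib-std-b67: output HLG
# npl=1000: map 1.0 in the linear input to 1000 nits instead of the 100-nit default
ffmpeg -i input_linear_rec2020.tif \
  -vf "zscale=tin=linear:t=arib-std-b67:pin=2020:p=2020:m=2020_ncl:npl=1000,format=yuv420p10le" \
  -c:v libx265 -color_primaries bt2020 -color_trc arib-std-b67 -colorspace bt2020nc \
  output_hlg.mp4
```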
Don’t know; my concern is that lossless H.265 might make some display devices very unhappy.
Or, to ensure that the exported data is linear, create a linear output profile with the Rec.2020 color space. Since RT doesn’t have support for HDR displays (and neither does Linux, last time I checked), HDR preview is a lost cause: your preview just plain will NOT be accurate compared to what is shown on your TV via the ffmpeg export process, since RT’s preview output will be SDR.
And let’s not forget you also have the soft-proofing tool in darktable, which lets you set the histogram display to HLG/PQ to help you place the level of your subject of interest where you want. (This can be useful even if you end up exporting to linear 16-bit/float TIFF for later encoding…)
“The curve and histogram is always displayed with sRGB gamma, regardless of working or output profile. This means that the shadow range is expanded and highlights compressed to better match human perception” https://rawpedia.rawtherapee.com/Exposure#Tone_Curves
Yes, but that refers to the “creative” tone curve; it has nothing to do with the TRC of the output profile. If you set it to linear, it will be linear (and in fact, even in the “tone curve tool”, if you use an identity curve, the gamma has no effect).
HLG is a relative system with black at 0 and peak at 1.0. For different screen brightnesses, a transfer curve adjustment is specified - see Note 5f in ITU-R BT.2100.
The percentage values for the reference levels are constant, as the signal doesn’t change; only the transfer curve in the monitor does.
Note 5f – For displays with nominal peak luminance (LW) greater than 1000 cd/m², or where the effective nominal peak luminance is reduced through the use of a contrast control, the system gamma value should be adjusted according to the formula below, and may be rounded to three significant digits:
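That formula, from BT.2100 Note 5f:

$$\gamma = 1.2 + 0.42\,\log_{10}\!\left(\frac{L_W}{1000}\right)$$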
AFAIK the HLG adjustments only apply to the display (decode) side. Content should still be “mastered” and encoded for the reference 1000 nits, as one doesn’t know in advance what display will be used to view it.
No, HLG is scene referred. The levels represent the illumination of the sensor, not a master display.
You can master on any HLG display and it will appear perceptually similar on any other HLG display (provided that they correctly implement the system gamma listed above). So diffuse white appears at 75% signal level, which will be displayed at different brightnesses depending on the monitor’s peak brightness.
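To put a number on that: for a 1000-nit display (using the HLG inverse-OETF constants from BT.2100 and the BT.2408 convention of diffuse white at a 75% signal level), the inverse OETF maps 0.75 to a normalized scene value of about 0.265, and applying the reference system gamma of 1.2 gives

$$1000 \times 0.265^{1.2} \approx 203\ \text{cd/m}^2$$

which is the commonly quoted ~203-nit figure for HLG reference white on a 1000-nit display.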
You can be pedantic all you want, but here are the facts:
The standard specifies a reference peak level of 1000 nits for a typical/reference display
The standard specifies that it is the display’s job to alter its tone curve in a specific manner if it has a different peak
The standard specifies no absolute references for scene luminance, only luminance relative to an arbitrary number that has no units or any connection to reality
Claim it’s scene referred all you want - for any practical real-world purpose, it is not, because your only real-world reference point is an assumption of 1000 nits for a reference/standard/typical display. Myself, I live in the real world. If you have a problem with how zscale operates, raise an issue on its GitHub project and take it up with sekrit-twc.
The European spec for reference monitors lists multiple minimum peak brightnesses for different locations and uses, and it only gives a minimum, not a fixed value. There’s no one fixed monitor brightness for HLG. https://tech.ebu.ch/docs/tech/tech3320.pdf