How to edit for HLG Rec.2020 output on an SDR screen?

I would like to view photos on iOS with EDR. But do I need to compensate by altering the AgX module, or is purely changing the output color space OK? I want the tone mapping to look right, e.g. a properly bright sun. And what's the difference between BT and HLG (and why does BT look weird)?

The simplest way is to export in Rec.2020 with gamma 2.4, and then pretend that it's Rec.2100 HLG.

I don’t understand. AgX, as it’s implemented in darktable, has a max. linear output of 1 (about 2.5 EV above mid grey). It’s SDR.


An HDR picture also has a peak, e.g. 1000 nits, which can be expressed as 1.0 if you want to. In this perspective, Rec.2100 HDR content is simply pictures with lower average levels relative to the peak.
Due to the way HLG was designed, pretending Rec.2020 2.4 is Rec.2100 HLG works surprisingly well.
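There is actual math behind why the pretending works: below half signal, BT.2100's HLG inverse OETF is E'²/3, and applying the OOTF's 1.2 system gamma to that gives display light proportional to E'^2.4, i.e. exactly a gamma-2.4 curve. A minimal sketch (my own illustration, for a gray R=G=B pixel so the OOTF reduces to a plain power):

```python
import math

def hlg_inverse_oetf(v):
    """HLG inverse OETF: signal [0, 1] -> scene-linear [0, 1] (BT.2100)."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    if v <= 0.5:
        return (v * v) / 3.0
    return (math.exp((v - c) / a) + b) / 12.0

def hlg_display_light(v, gamma=1.2):
    """Relative display light for a gray pixel: inverse OETF,
    then the OOTF's system gamma (1.2 at the 1000-nit reference)."""
    return hlg_inverse_oetf(v) ** gamma

# Below half signal, E = v^2 / 3, so display light is
# (v^2 / 3)^1.2, i.e. proportional to v^2.4:
for v in (0.1, 0.3, 0.5):
    print(v, hlg_display_light(v) / (v ** 2.4))  # same constant each time
```

Above 50% signal the two curves diverge (HLG switches to its log segment), which is where the extra HDR headroom lives.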

So it will artificially boost the compressed highlights and create a ‘properly bright sun’?

There’s no “compressed highlight”, the reading of highlight in a picture is visual grammar, the “attenuation to white” thing also contributes to the visual grammar of highlights.

In Rec.2100 HDR, the "pivot" will be significantly lower than 18% of the actual peak level, and the peak will be displayed at 1000 nits or so. That's what Rec.2100 HDR really is, at the end of the day.

For example, the middle-gray pivot might be displayed at 18% of 100 nits while the peak is at 1000 nits. That puts the picture's pivot at 1.8% of the actual peak.
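Spelled out, the arithmetic of that example is:

```python
# Middle gray shown at 18% of a 100-nit SDR range,
# on a display whose actual peak is 1000 nits:
midgray_nits = 0.18 * 100.0          # 18 nits
peak_nits = 1000.0
pivot_fraction = midgray_nits / peak_nits
print(pivot_fraction)                # 0.018, i.e. 1.8% of the actual peak
```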

It’s never about compressing any highlight. The whole thing is just about exaggerating the distance between “average content level” and “peak level”.

I don’t think that’s entirely true. You definitely want to compress it less/differently as you have all that range now available between ~150nits and 1000nits+. The HLG transfer curve is specifically designed to leverage that range.

What I mean by "there is no compressed highlight" is that there's no highlight in the raw file. The raw file only contains exposure data, i.e. how much light hit the sensor when you took the shot. At this stage there are no highlights; all of the values are arbitrary. There are only highlights after the data gets processed by filmic, sigmoid or AgX, which designs the visual grammar of how the highlights should look.

Therefore, there's no compression of highlights in picture formation, only an interpretation of exposure data and a decision about how to form the highlights in the picture. Highlights only exist after the picture-formation algorithm.

And an SDR picture has always been about percentages only. You can definitely display an SDR picture with its peak at 1000 nits, and it would still be the same picture as always. Middle gray would still be relative to the actual peak of 100%; all the percentage values remain the same.

HDR simply defines an arbitrary, imaginary ceiling below the actual peak, and says that the average content values cannot be higher than this imaginary ceiling. By doing so, it exaggerates the difference between the average content level and the actual peak.

What that means is that you can simply make the middle-gray pivot of an SDR image lower, display that image with the peak at 1000 nits, and it would be a proper "HDR" picture. In fact, that's how Blender 5.0's AgX HDR was made.

But this method is still more complex than just pretending Rec.2020 to be HLG.

HLG is designed to look like a well-defined camera knee, with a curve designed to show no banding at 10 or 12 bits and to be backwards compatible with Rec.2020 displays.

It can be displayed at any brightness, as the display adaptation is applied in the display (unlike PQ, where it's baked into the signal).

As the signal is backwards compatible, you can view it on an SDR Rec.2020 monitor, or you can use a LUT to convert the HLG to SDR and monitor it that way. (The latter is how broadcasters currently monitor HDR in live production.)
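A toy sketch of such a monitoring LUT (my own illustration, not a broadcast-grade conversion: it simply clips everything above SDR white rather than tone-mapping it, and assumes BT.2408's 75% HLG signal level as reference white and a 1.2 system gamma):

```python
import math

A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_to_scene_linear(v):
    """HLG inverse OETF (BT.2100): signal [0, 1] -> scene-linear [0, 1]."""
    if v <= 0.5:
        return v * v / 3.0
    return (math.exp((v - C) / A) + B) / 12.0

def build_monitoring_lut(size=1024, sdr_gamma=2.4):
    """1D LUT mapping HLG code values to SDR gamma-2.4 code values."""
    # Display light at assumed reference white (75% HLG per BT.2408):
    white = hlg_to_scene_linear(0.75) ** 1.2
    lut = []
    for i in range(size):
        v = i / (size - 1)                     # HLG code value in [0, 1]
        light = hlg_to_scene_linear(v) ** 1.2  # apply 1.2 system gamma
        sdr = min(light / white, 1.0)          # clip above SDR white
        lut.append(sdr ** (1.0 / sdr_gamma))   # re-encode as gamma 2.4
    return lut

lut = build_monitoring_lut()
```

A real monitoring LUT would also handle the Rec.2020-to-Rec.709 gamut conversion and roll off the clipped range more gracefully.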