Processing RAWs for HDR displays in Darktable

Great to see that the library is open-source! I’ll try to build it and play with it.

However, it is not fully clear to me whether it supports anything other than JPEG as input or output. I saw some mention of TIFF in the issues, but I would have loved to see AVIF or JXL too.

Interesting thread. Following a tea-and-laptop accident in 2021 I bought a MacBook with an XDR display (P3 gamut and 1600-nit max), so I have tracked the HDR evolution with interest.

It seems to me that the various approaches reflect the nature of the businesses involved.

Dolby has historically provided new high-end standards, e.g. PQ with a 10,000-nit max.

Adobe, with a huge base of legacy users/images, has pursued a bridging approach: JPEG + gain map.

I expect Google has their own business interests in minimizing image transport/retention and the number of formats to efficiently support.

The hardware vendors will push out new display hardware as they can differentiate it, produce it, and sell it. Apple with XDR. VESA (a PC trade group) has multiple tiers of max luminance (400, 500, 600, 1000, 1400 nits): https://displayhdr.org

My cursory look at OLED suggests that while there is wide adoption for larger-area, lower pixel-per-inch televisions and smaller-area, higher pixel-per-inch phones/tablets, it will still be a few years before it is commonplace for medium-area, medium pixel-per-inch computer displays.

I expect there will need to be some bridging solution given the large base of displays and images, and maybe a separate, final next-generation standard based on technology that may not yet exist. So it seems to me a big part of a solution is how to navigate the next decade while all this is sorted out; specifically, how to produce images that can be rendered optimally for a target but sufficiently well elsewhere.

For some odd reason, this is the case with all PC and laptop monitors. It’s basically an “anti-sweet-spot”.

HDR TVs can provide excellent picture quality at EXTREMELY affordable price per square inch. Especially stuff like an LG Cn (where n is either the last digit of the year or X, not sure what happens when they roll around to previously used years…), where a 42" C3 (smallest they make) will outperform most monitors in the 27-32 inch range at a much lower price.

Similarly, there's lots of OLED in phones; hell, Samsung has been doing AMOLED for well over a decade in the vast majority of their phones. Of course, back then they were AMOLED panels with zero color management, so they were kinda notorious for garish oversaturated colors (Samsung marketed this as a feature…)

I think you may have misunderstood what I mean. All of what you say here is correct. I'm from the ICC world, and most colorspace ICC profiles are matrix profiles, which in theory can only do gamut clipping; some CMMs cheat a little. Here is a perfect example of what I mean: on the left is perceptual gamut mapping, and on the right, plain gamut clipping. The flower photos I take look almost as bad as the right photo when converted from the camera colorspace to the Adobe RGB ICC profile of my monitor. I have created ICC profile versions of sRGB, Adobe RGB, ProPhoto, and soon Rec2020, made with 3D LUTs, which allow gamut mapping, unlike matrix profiles. I can also create ICC device link profiles that do an even better job and can be translated into LUTs.
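
To make that concrete, here's a minimal numpy sketch of what a matrix-only conversion amounts to: one 3×3 transform, then a plain clip of whatever lands outside the destination gamut. The pixel value is made up for illustration; the matrices are the standard D65 ones.

```python
import numpy as np

# Standard linear RGB -> XYZ matrices (D65).
REC2020_TO_XYZ = np.array([[0.636958, 0.144617, 0.168881],
                           [0.262700, 0.677998, 0.059302],
                           [0.000000, 0.028073, 1.060985]])
SRGB_TO_XYZ = np.array([[0.412391, 0.357584, 0.180481],
                        [0.212639, 0.715169, 0.072192],
                        [0.019331, 0.119195, 0.950532]])

# A matrix ICC profile can only do this: one linear transform per pixel.
M = np.linalg.inv(SRGB_TO_XYZ) @ REC2020_TO_XYZ

pixel = np.array([0.1, 0.9, 0.1])   # saturated green, linear Rec2020 (made up)
srgb = M @ pixel
print(srgb)                          # ~[-0.37, 1.01, 0.02]: out of gamut
print(np.clip(srgb, 0.0, 1.0))       # plain gamut clipping discards the excess
```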

I'm trying to use my DSLR stills, which in my case means 14-bit depth and 14 stops, either alone or merged into EXR, without tone mapping, to create HDR slideshows that I can upload to YouTube.

If your images' or even your video clips' colors fit with minimal or no clipping into, say, Rec709 (sRGB), you'll never see that much clipping, if any, of course. Trust me, my shots are almost always way out of the Rec709 gamut, and quite often out of the Rec2020 gamut too.

I'll most likely do the following: first a perceptual gamut mapping from ProPhoto RGB to DCI-P3 or Display P3, and then to linear Rec2020 or HLG Rec2020. That way, for any display capable of 100% P3, it will never require clipping. Of course, on an SDR Rec709 display it's another story, but it seems we can provide a LUT for YouTube to use when rendering Rec709, and I'll make sure there is no clipping for SDR viewing as well.
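
As a rough sketch of that chain, assuming the colour-science Python package and its colourspace names, with a plain clip standing in for the perceptual mapping step I'd actually use:

```python
import numpy as np
import colour  # pip install colour-science

prophoto = colour.RGB_COLOURSPACES["ProPhoto RGB"]
p3 = colour.RGB_COLOURSPACES["Display P3"]
rec2020 = colour.RGB_COLOURSPACES["ITU-R BT.2020"]

pixels = np.array([[0.8, 0.2, 0.1]])  # linear ProPhoto RGB, made-up pixel

# ProPhoto -> P3 (a real perceptual gamut mapping would replace this clip)
in_p3 = np.clip(colour.RGB_to_RGB(pixels, prophoto, p3), 0.0, 1.0)
# P3 -> Rec2020: P3 fits inside Rec2020, so this step never needs clipping
in_rec2020 = colour.RGB_to_RGB(in_p3, p3, rec2020)
print(in_p3, in_rec2020)
```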

I really want to view these images but it seems I don’t have a single HDR display in my house. I hoped my Pixel 6 supported it, but I think only the Pixel 6 Pro does.

I have made a website showing the PQ HDR AVIF images processed using darktable. https://andrewkeyanzhe.github.io/posts/Starship_flight_5/

The images on the website will display correctly on Chrome + an HDR monitor (e.g. a MacBook Pro with XDR display).

The development process involves disabling the filmic module and setting the output colour space to PQ BT2020.
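
For anyone curious what the PQ BT2020 target means on the encoding side, here is a minimal sketch of the ST 2084 / BT.2100 PQ inverse EOTF (absolute luminance in nits in, 0–1 signal out), using the standard constants:

```python
import numpy as np

# SMPTE ST 2084 / BT.2100 PQ constants.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(nits):
    """Absolute luminance in nits -> PQ signal in [0, 1]."""
    Y = np.asarray(nits, dtype=float) / 10000.0  # PQ references 10,000 nits
    Ym = Y ** m1
    return ((c1 + c2 * Ym) / (1.0 + c3 * Ym)) ** m2

print(pq_inverse_eotf([100, 1000, 10000]))  # ≈ [0.508, 0.752, 1.0]
```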


On my cellphone with the Chrome browser it looks really cool; with Firefox on the cellphone it does not.

The flames in the 4th image show the distribution pattern.

Just spotted this post. Whilst you're right about not achieving a wider gamut with 3 primaries, there are moves afoot from the likes of Baylor University to bring in 6-primary displays. https://news.web.baylor.edu/news/story/2021/baylor-researchers-introduce-6p-color-imaging-system-could-revolutionize-tv-cinema


Thanks for sharing this article. Very interesting to read and to see where this may go. Wondering when affordable displays will hit the consumer market :sunglasses: that will heavily promote GAS :grin:

While a 6P display sounds very interesting, we're still bound by the input. As long as cameras use three color primaries, you're stuck with a triangular space, and any color in the scene outside that triangle cannot be captured correctly.

And how important is it in practice to cover more of the horseshoe? HDR needs a large luminosity range and (related to that) more bits per channel; I'm not sure it needs more colours for a given luminosity (which is what those colour space diagrams show!)

Sounds like a six-layer version of this sensor should be able to do it:

Or some kind of Bayer array sensor. But I guess it will take a while for any such technology to become widespread. Until then, we’re still bound by the three-channel input.

The triangular colour gamut is only valid for displays - cameras have odd-shaped colour-capture capabilities - this example is for a Nikon D200 under D65 lighting.


The blue area shows the colours that can be captured by the camera. It will only be triangular if there's a linear relationship between the filter outputs and the colour matching functions being used (in the case of this diagram, the CIE 1931 CMFs).

see: Color conversion matrices in digital cameras: a tutorial

https://horizon-lab.org/colorvis/camcolor.html


I am experimenting with an HDR workflow based on manipulating the sigmoid module.

I have only recently started doing HDR processing. I have a decent SDR display that works well with darktable. I knew darktable could export to HDR, but I didn’t have a reliable HDR display for reference until I got a MacBook Pro.

The problems I have encountered, and I'm sure others have too, in making HDR exports:

  • Module defaults and presets do not work; a lot of tweaking is needed to get the correct exposure and tone.
  • Likewise with sliders. Big adjustments often need to go outside the standard range.

I could get around the above using multiple modules, parametric masks, etc. But it is a complicated workflow that is hard to replicate across different files, and a lot of tedious work before getting to the fun (creative) edits.

My aim was to simplify and do the technical processing with the fewest modules, i.e. mapping the mid-tones, shadows, and highlights to where they should be on the output, or at least within the ballpark. I wanted to see if I could shape the sigmoid to approximate the HLG curve.

It's a similar logic to the filmic module (or normal use of the sigmoid), which provides a consistent (parametric) way of mapping an HDR input to an SDR output. I like filmic, and as it matured I found the less I played with it the better the result. I have moved to sigmoid because I pretty much only use it with the default preset when processing for SDR.

Setting up Sigmoid

Using the app by @jandren:

I eyeballed the sigmoid against the ACES HLG. The curves have similar dy/dx through the range. They don't converge at either end, but I figured the important part was having the same slope and position in the middle. The sigmoid has a more aggressive roll-off in the highlights, which isn't necessarily a bad thing for HDR photos.

I arrived at the following values to use in the sigmoid module:

  • Target black: -13 (default)
  • Target white: 12
  • Contrast: 1.45
  • Skew: -0.55

I left the default target black as-is in darktable, as I am not sure how the value translates from the web app to darktable. Colour processing is set to RGB ratio.
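
If you want to poke at the shape numerically, here is a rough stand-in for the kind of skewed log-logistic curve the module is built on; it is not darktable's actual implementation, and the module's slider values (contrast 1.45, skew -0.55) do not translate 1:1 to these arguments:

```python
import numpy as np

def loglogistic(x, contrast=1.45, skew=-0.55, white=4.0,
                grey_in=0.1845, grey_out=0.1845):
    """Illustrative skewed log-logistic tone curve (NOT darktable's code)."""
    x = np.maximum(np.asarray(x, dtype=float), 1e-9)
    s = np.exp(-skew)  # skew < 0 -> softer highlight roll-off
    # Place the pivot so that middle grey maps to itself:
    k = grey_in * ((white / grey_out) ** (1.0 / s) - 1.0) ** (1.0 / contrast)
    return white / (1.0 + (k / x) ** contrast) ** s

# Numerical dy/dx around middle grey, the part I tried to match to HLG:
eps = 1e-4
print((loglogistic(0.1845 + eps) - loglogistic(0.1845 - eps)) / (2 * eps))
```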

Output colour profile

I have been using HLG P3 as the HDR profile in darktable. This appears to be the profile used by the iPhone, and the built-in apps in macOS deal with it quite well (as expected).

I find darktable behaves strangely when targeting PQ: the black point is raised and the histogram distribution is skewed towards the highlights.

Processing

The processing experience so far has been good. There may still be aggressive use of adjustments in exposure, tone equaliser, or perceptual brilliance (colour balance rgb), but I haven't felt the urge to stack modules or use masks. And the adjustments in these modules feel more progressive and smooth. Responsive without being twitchy.

My approach is similar to my SDR edits: get the mid-tones where I want them, then adjust the shadows and highlights for the appropriate amount of detail. Since the intent is HDR export, I have to constantly remind myself not to edit for contrast.

While darktable's working display is limited to sRGB and SDR on the Mac (and I understand this is a GTK3 backend limitation, so it won't change anytime soon), I find I can gauge the output to a decent degree by what I see in the photo rather than by looking at the histogram.

The shadows will drop slightly in level but still retain their detail. The mid-tones stay as mid-tones but gain contrast and punch.

The highlights retain their detail and push into HDR (but not too much). If they look too pushed in the working display, they will probably be too pushed and overbright in the HDR export.

Examples (work on Chrome-based browsers)

I do not have many raw files that I feel would look nice in HDR, and I really liked these from the community. The JPEG is what the darktable working display looks like.

Using @gigaturbo - Sunset reflection
https://discuss.pixls.us/uploads/short-url/qdgcI3PREDNX9q6XGWoqiWZgPQj.RAF



DSCF6376_01.RAF.xmp (13.2 KB)

Using @streetfighter - Evening Commute
https://discuss.pixls.us/uploads/short-url/tvaZEDKZ4wYO5piRzSepVBYV7Dm.RAF



20241017_0191.RAF.xmp (15.6 KB)

Using Kristoffer Tolle



DSCF4207.RAF.xmp (13.5 KB)

It's quite possible the internally constructed ICC profile has a limitation in representing the PQ transfer curve?

What actually happens in the pixel pipe is beyond me.

I have just skimmed this thread, and I don't pretend to have any deeper understanding of the subject.

However, I have spent the two-plus hours needed to listen to Steve Yedlin's lecture on “Debunking the HDR myth” from May this year:

It is addressed to cinematographers, but I would nonetheless be surprised if many participants in this thread wouldn't gain a lot from listening to Yedlin.

He points out that new HDR hardware with more contrast etc. is good, but that as far as image rendering goes, hardly anything is really gained from “HDR” and 10-bit compared to following current 8-bit SDR standards on HDR hardware, except in some marginal cases the expanded gamut of Rec2020. Rather, HDR in its current form has several disadvantages and entails unnecessary cost.

I believe Yedlin has saved me quite some time and money.


Updated Parameters

I took the functions from the sigmoid module source code and plugged them into a spreadsheet against the HLG OETF from BT.2100, while also doing more reading on HLG and PQ HDR capture and grading (though currently the majority of the discussion is about video, not photography).

The sigmoid changes shape depending on the scene dynamic range. I used the 1000-nit peak baseline for “HDR” as the input (and output) peak. This puts SDR white at 83.3 nits (1000 / 12, since the HLG scene-linear range spans 12× SDR white).

The other issue, which I already knew about, is that apart from HLG being a hybrid and not a pure log curve, the HLG reference point is also “SDR white”, not middle grey; and middle grey is hard-coded in the current sigmoid module as 0.1845. Darktable modules also use middle grey as a reference (the ability to adjust its value notwithstanding), which was one of the original motivations for this experiment, as using auto pickers or defaults inside modules when processing for HDR export gives poor results.
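
For reference, this is the BT.2100 HLG OETF I compared against (scene-linear 0–1 in, 0–1 signal out), with the SDR-white arithmetic for the 1000-nit baseline:

```python
import numpy as np

# HLG OETF constants from ITU-R BT.2100.
A = 0.17883277
B = 1.0 - 4.0 * A              # 0.28466892
C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

def hlg_oetf(E):
    """Scene-linear E in [0, 1] -> HLG signal E' in [0, 1]."""
    E = np.asarray(E, dtype=float)
    # The maximum() guard only avoids log-of-negative warnings in the
    # branch that np.where discards for small E.
    return np.where(E <= 1.0 / 12.0,
                    np.sqrt(3.0 * E),
                    A * np.log(np.maximum(12.0 * E - B, 1e-12)) + C)

# SDR white sits at scene-linear 1/12 (signal 0.5), so with a 1000-nit peak:
print(1000 / 12)                       # ≈ 83.3 nits
print(hlg_oetf([1 / 12, 0.265, 1.0]))  # ≈ [0.5, 0.75, 1.0]
```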

The previous settings I obtained using the web app unfortunately did not plot correctly at all.

I found that:

  • A curve bearing reasonable resemblance to HLG in the highlights will significantly lift the shadows and midtones.
  • Attempting to compensate for the above would flatten the contrast too much.

I ended up testing two approaches. This site doesn’t accept uploading of spreadsheets for some reason (/s):

Contrast 0.981, skew 1, target black 0.0152% (default), target white 1600%

The curves meet at 1. It provides nice, contrasty, HDR-looking exports, but I need to adjust the shadows down a lot, and the midtones (and/or default exposure) down a moderate amount.

Contrast 0.965, skew 1, target black 0.0152% (default), target white 1600%

The curves meet at 0.75. I tried other intersects, but this appeared to be a good compromise. The sigmoid reaches 1 at an input of around 1.16, so the difference isn't too great. Several sites state a signal value of 0.75 (diffuse white) is the recommended point at which pixels should go into HDR for displays rendering HLG. This curve still requires large adjustments to pull down the shadows, but I find on most edits the default exposure falls pretty much where I want it, or I may even pull up the midtones a little. The exports are not as punchy as the “1.0” curve, and there is a need to put back contrast (global and/or local).

Most recommendations for HDR capture, grading, and rendering are to take some of the headroom from the highlights, e.g. middle grey should be exposed to 0.38 rather than 0.22, and white at 0.75 rather than 0.5. So the shadows and midtones lifted by the sigmoid aren't as bad as they first appear.
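
To put numbers on those signal levels, here is the inverse of the same BT.2100 HLG OETF, mapping signal back to scene-linear (1.0 = nominal peak, so multiply by 1000 for nits at the 1000-nit baseline):

```python
import numpy as np

# Inverse HLG OETF, constants from ITU-R BT.2100.
A = 0.17883277
B = 1.0 - 4.0 * A
C = 0.5 - A * np.log(4.0 * A)

def hlg_oetf_inverse(Ep):
    """HLG signal E' in [0, 1] -> scene-linear E in [0, 1]."""
    Ep = np.asarray(Ep, dtype=float)
    return np.where(Ep <= 0.5,
                    Ep ** 2 / 3.0,
                    (np.exp((Ep - C) / A) + B) / 12.0)

for signal in (0.38, 0.5, 0.75, 1.0):
    print(signal, float(hlg_oetf_inverse(signal)))
# 0.38 -> ~0.048 (middle grey), 0.5 -> ~0.083 (legacy SDR white),
# 0.75 -> ~0.265 (diffuse white), 1.0 -> 1.0 (peak)
```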

For now, I am settled on using the second curve. I personally don't like too much contrast in my edits, so others may find this curve too flat. The majority of the exports actually look acceptable on non-HDR displays too (obviously the contrast is flatter and there is no detail in the HDR parts of the highlights).

The HLG and PQ curves might be perceptually correct in capturing and recreating a scene's relative luminance, but realistic doesn't necessarily look good in video, and even less so in photography. I have read critiques that some HDR (video) edits have so little display dynamic range they could be recreated by turning up the brightness on an SDR monitor. I don't know how true that is of my edits, but for now I am happy I have a process in darktable to make the sort of photos with my “proper camera” that my phone puts out.
