Mastering workflow for linear images for HDR displays

I primarily use darktable to edit, and I’m wondering if there is an end-to-end way to create an image that is suitable for display in an actual HDR monitor device, such as high end monitors and modern phones.

I believe the standard darktable workflow assumes that the output will be an SDR image, hence the tonemapping, and I have a feeling that for a truly HDR-standards-compliant image it needs to be in linear light space (which is already violated by many dt modules, such as filmic), and I believe there needs to be extra ICC profile metadata (not sure, not really an expert).

I don’t have an HDR monitor yet, and I intend to get one soon. I would like to create image files that can take advantage of high luminance, wide color gamut, wide dynamic range displays such as on certain devices (Macbook, certain phones like my Galaxy S10e, iPad Pro, and of course the monitor itself).

My guess as to how it’s done is to use a working profile of something appropriate like linear Rec 2020 in darktable, and then set the monitor display profile to something appropriate (not sure how to do this in Linux), and hopefully then I can edit in an HDR environment. Please let me know if this is wrong. What I definitely don’t understand is how to create an output file: what format do I use (surely JPEG won’t work)? How do I indicate that this is an HDR image such that it appropriately triggers all the HDR codepaths in OS, GPU drivers, monitor, etc, just as HDR youtube videos do that?

Note that I am absolutely not talking about the “fake” tonemapped HDR that compresses a wide dynamic range into a narrow range. Please don’t comment unless you have understood what I said in this post… HDR is such a shitty term because it’s been overloaded.

HDR is a complex subject. See Wayland color management to appreciate that.

To begin your quest, I would encourage you not to be caught up in the particulars or terminology and concentrate on what scene-referred processing means. The so-called linear processing is a part of the equation. Scene-referred means that what you see is what you get. That is, the output matches the ground truth scene perfectly. Of course, that is rather idealistic and not currently achievable, nor is it usually desirable because your goals and viewing circumstances are different (e.g. you don’t view the world in the same way as computers do; and your room, screen and eyes may change).

True HDR images cannot be displayed on any screen. Compression of tones and colours is a necessary evil to prevent clipping, artifacts and other discontinuities. Surround requires adaptation too. Therefore, don’t be so quick to dismiss filmic and transfer functions. The devs have been hard at work making dt more amenable to HDR workflows; but as I said at the beginning, getting the CMS, firmware, hardware, apps, etc., to cooperate is another problem altogether.

No certain answers because I am not a dt user or qualified enough, in my mind, to speak of the particulars. Good luck.

To take full advantage of a HDR display, software that renders an image on it needs to be able to do two things: 1) convert the image colorspace and tone to the calibrated display colorspace and tone, and 2) display the values in the bit depth acceptable to the display. #1 is just standard color management. #2 is the more challenging one, as most rendering software toolkits of which I’m aware default to 8-bit, and maybe don’t offer anything more. Actually, depending on the image, I’d surmise that #2 may not be a constraint, as a bit of gamut greater than sRGB can probably be resolved in 8-bit. Oh, overloaded use of ‘bit’… :smiley:

HDR displays don’t need ‘linear’, they need the tone curve that corresponds to their perceptual range. That should be baked into the calibrated display profile, as well as the colorspace accommodated by the display. So, just color management in a software should take you a long way toward using the capability of your display.

Disclaimer: I’m still a neophyte in this, so I welcome corrections and clarifications by the ‘big-heads’ in color science… :smile:

Thank you for the replies. I think I understand the color science itself, I just don’t understand how to wrangle the software stack to make it work. So from what @afre said, it sounds like I am to use filmic rgb with an appropriate (non-sRGB, such as PQ DCI-P3) output color gamut and it will “just work” on my display?

I’m currently using Linux, but I have access to Windows and Mac. I understand that HDR support on Linux is absolutely terrible. I’m wondering if everything works as expected on Win/Mac or if I’m just SOL entirely…

One thing I thought of after writing the post is that one should consider the work of getting from one stage to the next. What I mean by that is exactly what the OP is asking: going from cradle to maturation.

The cradle stage is capturing the image and consequently producing a raw file. The camera can be a black box with lots of non-linear stuff going on in it. The goal is to characterize the result. @ggbutcher recently took up the challenge and wrote about his journey to generate a spectral profile of his own camera. This step is vital to good processing because it is an early one. If you interpret your colours wrong at the outset, there is no point in maintaining anything, since your colours will only get worse the farther along your workflow you are. Then, as @ggbutcher noted, you have to characterize your screen, and as I wrote, your surroundings as well. There are of course other smaller matters, such as doing the characterization under working versus ideal conditions, and the whole process of characterization itself. Usually people use characterization, calibration and profiling interchangeably; I won’t get into the nuance of the terms. Anyway, more to think about. :stuck_out_tongue_closed_eyes:

To your latest post, there are several transformations. In camera (internal profile), out of camera (input), intermediary (working), screen (OETF) and file (output). Depending on your setup, you may have to not only deal with the CMS (colour management system) but also other hardware (video card LUT, profile, independent monitor software or controls, etc.; the list can be long and confusing). Macs are pretty good but Linux can be better at colour management because you have full control. Full control means more work to get it to work though. Windows is a black box: more misses than hits.

I’m not even sure what you’re trying to say. I think I understand how all the color science works. I just need to know how to execute it using the tools I have.

A metaphor: I know how mechanical engineering works, explain to me how I can use Solidworks

Don’t HDR monitors just have greater brightness? I don’t see how this would affect your workflow. You might see into the shadows a bit more, which could change how you adjust the sliders, but not your entire workflow.

All images should have an ICC profile embedded. It primarily gives information about the gamut, so programs understand how to display it properly. Wide gamut is not unique to HDR. In order to take advantage of a wider gamut you simply ensure your ‘working profile’ is something wide gamut - like pro photo, aces or rec. 2020.

Linear gamma is not unique to HDR either. It is simply how the sensor records light, and a preferred gamma for mathematical operations, often giving more pleasing results than non-linear. At some point the information will have to be made non-linear so it looks correct to our eyes - which is why we use tonal curves, like base curve or filmic. (OOC JPEGs automatically apply their own.) I’ve never used an HDR monitor, but I imagine you will have to do this whether the monitor is SDR or HDR. (I don’t know, I could be wrong here; the extra screen brightness may negate the need for a tonal curve. However, you still have to consider the final output of your image: will it be web display, with most people viewing on SDR monitors? If so, you’re going to want that tonal curve. Applying tonal curves has been standard for print work too. So even if your HDR screen negates the need for a tonal curve, I don’t see the practical use of that in the current environment.)
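
The linear-to-display step described above can be sketched in a few lines of Python. The sRGB curve below is the standard one; the S-curve is my own toy stand-in for illustration, not darktable’s actual filmic:

```python
import numpy as np

def srgb_oetf(linear):
    """Standard sRGB transfer function: linear light -> display signal."""
    linear = np.asarray(linear, dtype=float)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

def toy_s_curve(linear, contrast=1.5):
    """A toy filmic-style S-curve (illustrative only): compresses
    highlights smoothly instead of clipping them, pivoting on middle grey."""
    x = np.asarray(linear, dtype=float)
    return x**contrast / (x**contrast + 0.18**contrast)  # 0.18 = middle grey

print(srgb_oetf(0.18))   # middle grey lands near half signal (~0.46)
print(toy_s_curve(4.0))  # a scene value well above 1.0 still maps below 1.0
```

This is why untransformed linear data looks dark on a conventional screen: linear middle grey (0.18) only reaches mid-signal after the non-linear encoding.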

I don’t think there is such a thing as an HDR codepath?
If you have a 10-bit monitor, then you also need a graphics card that supports 10 bits, and a connection that supports 10 bits, else your screen will only display 8 bits. But bit depth does not = HDR.

JPEGs are only 8 bit images. But they can still be displayed on 10 bit screens, and will look exactly the same. The main advantage of a 10 bit screen is that if you see banding, you know it is in the image, not the display. If you work on an 8 bit screen, it could be either/or (though the image is most likely). In terms of image editing, more bits gives you more room to play, more accurately, with fewer artifacts. In terms of display, more bits can show smoother gradients.
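
The banding point is easy to make concrete: quantize the same smooth ramp at 8 and at 10 bits and count how many distinct levels survive (a quick numpy sketch):

```python
import numpy as np

# A smooth gradient over a narrow tonal range (where banding shows up first).
gradient = np.linspace(0.40, 0.45, 10_000)

def quantize(signal, bits):
    """Round a [0, 1] signal to the nearest representable code value."""
    levels = 2**bits - 1
    return np.round(signal * levels) / levels

steps_8 = np.unique(quantize(gradient, 8)).size    # distinct 8-bit levels
steps_10 = np.unique(quantize(gradient, 10)).size  # distinct 10-bit levels

print(steps_8, steps_10)  # the 10-bit ramp has ~4x as many steps
```

Fewer distinct steps across the same tonal span is exactly what you perceive as banding in a sky or studio backdrop.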

Don’t HDR monitors just have greater brightness?

“HDR” as used in monitor lingo usually means:

  • Wider color gamut
  • Higher contrast ratio
  • Higher max luminance
  • Wider bit depth

don’t know, I could be wrong here, the extra screen brightness may negate the need for a tonal curve, however you still have to consider the final output of your image: Will it be web display, with most people viewing on SDR monitors?

I wanna experiment with making HDR-ready content, for instance for DCI-P3, which many newer phones with OLED displays support, as well as a handful of other devices. I was amazed after coming across an HDR video on my phone–the expanded gamut is /incredible/, you can’t even imagine the eye-searing colors. It’s beautiful. I was wondering if I could make photos that could take advantage of that as well.

As stated above, wide gamut is not unique to HDR. Nor is bit depth. You have to understand those terms independently. Gamuts may have a limit on brightness, and wide gamuts are no exception (see link).

That leaves brightness/luminance and contrast ratio.

It sounds like the gamut was responsible for the beautiful colours here, not the brightness or contrast ratio. (And even if it was the brightness and contrast ratio, those things would be due to your screen, they would not be embedded in the video. And if they were due to your screen, it would be the same for all content viewed on that screen, not unique to that video.) As stated above, to take advantage of this, set ‘working profile’ to something wide. Rec 2020, aces and pro photo are all wider than DCI P3. (But remember, just having a wide gamut isn’t enough to give you beautiful colours - it gives you greater range of saturation - it is up to the artist to make it beautiful).

If you are specifically outputting something to the DCI P3 space, then you can either set that as working profile, or set one of the others mentioned, with DCI P3 set as export/output profile. And for accuracy it would of course be preferable to edit and view on a monitor that covers (as close as possible to) 100% DCI P3.

Which is why I said HDR in monitor lingo. When you buy an “HDR” monitor, it’s a premium product with some industry certifications (namely DisplayHDR from VESA) with a minimum luminance, gamut, etc etc.

Also, to get wider color gamut, you generally need more bits to prevent posterization. Adobe RGB you can get away with, but for DCI P3 and especially Rec 2020 you’re gonna have a bad time.

It sounds like the gamut was responsible for the beautiful colours here, not the brightness or contrast ratio. (And even if it was the brightness and contrast ratio, those things would be due to your screen

Brightness /does/ impact contrast ratio. Generally when HDR content plays, your display goes into full brightness to increase contrast ratio. I hope I don’t have to explain why that is the case. They also might activate things like local dimming to further improve contrast ratio.
When HDR TVs and such play SDR content, they usually set up with a lower luminance value than the maximum (unless you specifically override it somehow) so things don’t look too bright, because SDR content was mastered assuming the white point luminance is at 100-300 nits or thereabouts.
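
To put rough numbers on that (a sketch assuming a plain gamma-2.2 decode, the ~203-nit SDR reference white suggested by ITU-R BT.2408, and a hypothetical 1000-nit panel):

```python
import numpy as np

SDR_REFERENCE_WHITE = 203.0  # nits; ITU-R BT.2408's suggested SDR white in HDR
HDR_PEAK = 1000.0            # nits; assumed peak of a typical consumer panel

def sdr_code_to_nits(code_8bit, gamma=2.2):
    """Map an 8-bit SDR code value to absolute luminance when the display
    pins SDR reference white well below its HDR peak."""
    return (code_8bit / 255.0) ** gamma * SDR_REFERENCE_WHITE

white_nits = sdr_code_to_nits(255)          # SDR white stays at ~203 nits
headroom_stops = np.log2(HDR_PEAK / white_nits)
print(white_nits, headroom_stops)           # ~2.3 stops left for highlights
```

That leftover headroom above SDR white is what the display reserves for speculars in true HDR content, which is why SDR material is deliberately shown dimmer than the panel’s maximum.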

they would not be embedded in the video.

They literally are. This is where my knowledge falls short, but there’s a lot of content out there that says “alright, we want this to be displayed at 600 nits” or whatever (and a bunch of other metadata as well, such as a built-in tonemap in case you can’t hit 600 nits, and even a LUT to move between color spaces). This is called HDR metadata. I believe even pictures can be embedded with this kind of info. Check out colorist, which aims to be a CLI app to get that done.
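
Two of the static metadata fields in the HDR10 scheme, MaxCLL and MaxFALL, are simple enough to compute yourself once pixel values are in absolute nits; a minimal sketch with made-up toy frames:

```python
import numpy as np

def hdr10_static_metadata(frames_nits):
    """Compute HDR10's two static light-level fields from frames whose pixel
    values are absolute luminance in nits (cd/m^2).
    MaxCLL:  brightest single pixel anywhere in the content.
    MaxFALL: highest frame-average light level."""
    max_cll = max(frame.max() for frame in frames_nits)
    max_fall = max(frame.mean() for frame in frames_nits)
    return max_cll, max_fall

# Toy content: a dim frame and a frame containing one bright specular pixel.
frame_a = np.full((4, 4), 80.0)
frame_b = np.full((4, 4), 120.0)
frame_b[0, 0] = 600.0

print(hdr10_static_metadata([frame_a, frame_b]))  # (600.0, 150.0)
```

The display uses these numbers to decide how hard it must tone-map: content mastered at 600 nits needs no compression on a 1000-nit panel, but plenty on a 400-nit one.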

Often, the HDR metadata needs to be sent from the app to the OS through the GPU driver to the HDR-compatible display for everything to click together and work. Most HDR displays work in sRGB compatibility mode (AFAIK) if you don’t pass this information, in fact. Which is my concern. How the heck do I create an output raster that will trigger all the good stuff and make it look nice on my OLED phone?

edit: to add, there were plenty of indications that the HDR content kicked in a bunch of switches on my phone to go into a special mode. The brightness became locked at 100% even though I tried to pull it down, and the color gamut definitely looked expanded because there were some regions where when viewed on an sRGB screen like my laptop, it looked deepfried and had no separation between medium bright saturated vs. full luminance saturated. All automatic. I’m sure it’s because the video had embedded HDR metadata, which I know is a thing (look it up). HDR seems to be well-built for video and the ecosystem is way more mature.

And if they were due to your screen, it would be the same for all content viewed on that screen, not unique to that video.) As stated above, to take advantage of this, set ‘working profile’ to something wide. Rec 2020, aces and pro photo are all wider than DCI P3.

If the output profile is sRGB, darktable will do a color space conversion from the working profile to the output profile, using a rendering intent like “perceptual” or “relative colorimetric” (I’m not even sure which intent it uses, because there is no way to set it). Oh, by the way, the default working profile in dt is linear Rec 2020. So yeah, almost everyone works in wide gamut mode.
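
That gamut-reduction step can be sketched: the 3×3 below is the standard linear BT.2020 → BT.709/sRGB primaries conversion, followed by a naive clip, which is roughly what “relative colorimetric plus clipping” amounts to; a real CMM’s perceptual intent would remap colours smoothly instead.

```python
import numpy as np

# Standard linear BT.2020 -> BT.709 primaries matrix (D65 white on both sides).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def rec2020_to_srgb_clipped(rgb):
    """Naive conversion: change primaries, then clip whatever the smaller
    gamut cannot represent (negative or >1 components)."""
    out = np.asarray(rgb) @ BT2020_TO_BT709.T
    return np.clip(out, 0.0, 1.0)

saturated_red = np.array([1.0, 0.0, 0.0])  # pure Rec 2020 red
print(rec2020_to_srgb_clipped(saturated_red))
# All three channels needed clipping: the colour was out of sRGB gamut,
# and it collapses onto plain sRGB red [1, 0, 0].
```

This collapse of many distinct wide-gamut colours onto the same sRGB value is the “deep-fried, no separation” look described later in the thread when HDR content is viewed on an sRGB screen.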

I am hoping that setting the output profile to DCI P3 is enough to allow me to display with better capabilities, but considering your other inaccuracies, I’m a bit worried that this won’t actually happen. This seems to be a path that not many have gone down. Semi-related tangent: remember the Android wallpaper brick incident? It was caused by a software bug that was triggered by a DCI P3 image: after multiplying by a luminance matrix to convert to grayscale, floating point rounding errors caused an index to overflow. Whoops. Looks like these HDR images are a new thing.

If you are specifically outputting something to the DCI P3 space, then you can either set that as working profile, or set one of the others mentioned, with DCI P3 set as export/output profile. And for accuracy it would of course be preferable to edit and view on a monitor that covers (as close as possible to) 100% DCI P3.

I don’t have my DCI P3 monitor yet, and I am hoping that everything works well and I get something with a wide dynamic range, but I’m a little worried about whether 10-bit support and the appropriate HDR metadata passing are actually in place.

I was hoping that someone with a bit more knowledge on the ecosystem or someone with prior experience could share a bit more insight. Again, repeating my original question: what file format do I use to get 10 bit and also the appropriate HDR metadata to get passed through the monitor with good compatibility? Does it even exist?

Maybe this helps (disclaimer: I have zero experience with this, I don’t even have an HDR display)

press F :frowning:

But you’re the first guy in this thread that understood my question fully. Thank you!!

Anyone know what to even search to find good info on this? If you search HDR imaging you get a lot of idiotic posts on how to deep fry an image into something that looks like it came straight out of /r/shittyHDR.

I had no idea this was a thing. HDR has come to mean so many different things.

I don’t know you and it seemed that you were getting some things mixed up. Just wanted to be helpful. To keep it simple:

1 Make custom profiles of your camera, screen and printer, keeping in mind the editing and target surround and media. Other profiles to consider would be noise, white level and lens profiles. Approach could be simple or advanced; however, doing it vs not doing it makes a big difference.

2 Do colour management. Your OS, CMS, apps, screen and video card must work in tandem; otherwise, it fails. In a simple workflow, you have a raw processor, raster editor and image viewer. All of these must do colour management internally and constantly communicate up and down the system the proper dynamic range, bit depth and gamut for accurate display.

3 In dt, choose a suitable working profile. This one is up for debate. I would choose a well-behaved profile. See: The Quest for Good Color - 1. Spectral Sensitivity Functions (SSFs) and Camera Profiles. And I would choose a colour space that is slightly larger than your output profile, to give room for processing in the pipeline, but not so much as to make the gamut compression difficult. dt 3.0.2 has the following:


4 Choose an appropriate output profile for your output file. The list is almost the same as the working profile list. Notice you have some interesting options such as PQ Rec2020, which I am guessing is linear Rec2020 with a PQ OETF. You would have to check what kind of PQ it is. HLG is simpler because it is designed to work with SD and HD displays; it is backwards compatible.
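
For reference, the PQ encoding (the SMPTE ST 2084 inverse EOTF) is a fixed published formula, so it is easy to sanity-check what a “PQ Rec2020” output profile does to luminance values:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, exactly as specified.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """ST 2084 inverse EOTF: absolute luminance (0..10000 nits) -> PQ signal [0, 1]."""
    y = np.asarray(nits, dtype=float) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

print(pq_encode(100))    # SDR-ish white already sits near mid-signal (~0.51)
print(pq_encode(10000))  # the full 10,000 nits maps to exactly 1.0
```

Note how PQ, unlike a relative gamma curve, is absolute: a code value means a specific number of nits, which is why it needs metadata and display cooperation. HLG, by contrast, is relative and degrades gracefully on SDR screens.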

Anyway, have had lots of insomnia, so maybe I am getting it wrong or thinking one thing and writing something entirely different. Smarter and clearer minded people are bound to join the conversation. In fact, I see one of them typing already. :slight_smile:

do you know if this alone will activate all the right switches in the HDR display such that “HDR mode” is enabled?

I’m not a hardware specialist, so my understanding of HDR screens is incomplete.

SDR displays rely on the assumption that display peak luminance (100% of the 8 bits range = 255) equates scene white reflective luminance, which is a white patch at 20% reflectance lit by the illuminant in the scene.

HDR displays encode specular highlights at display peak luminance (100% of the 10-bit range = 1023) and, usually (but that is dependent on the actual hardware decoding), keep the reflective white at 255, for direct legacy compatibility.

All in all, the difference between SDR and HDR displays is what peak luminance means: white, or brighter than white.

Filmic is compatible with HDR output; you just need to set the “display white luminance” (in the display tab) to more than 100% (for 10-bit output, that would be 400%, since you get 2 EV more). However, the output color profile and gamma/display encoding modules, later in the pipe, clip the signal to 8-bit unsigned integer, i.e. [0; 255], so darktable as an app is not HDR-ready yet.
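
The 2 EV figure and the 8-bit clipping problem are easy to see numerically (a small sketch; 400% display white means two stops above SDR white):

```python
import numpy as np

# Filmic's "display white luminance" at 400% corresponds to 2 EV of headroom:
headroom_ev = np.log2(400 / 100)  # -> 2.0 stops

# Why an 8-bit unsigned-integer output stage forecloses HDR: everything
# above display white (1.0) collapses onto the same code value.
scene = np.array([0.5, 1.0, 2.0, 4.0])  # linear; 4.0 = white + 2 EV
codes = np.clip(np.round(scene * 255), 0, 255).astype(np.uint8)
print(headroom_ev, codes)  # white, white+1EV and white+2EV all become 255
```

In other words, even if filmic produces values above 1.0 for an HDR target, an 8-bit export stage flattens them all to 255, which is why a 10-bit-capable output path is a prerequisite.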

specular highlights

I assume this is really up to the colorist (in video parlance) to determine, right? Like if I wanted, I could assign things like emissive sources in a scene to be in the last 2 EV region.

Also :frowning: for the clipping. I might try to do a PR.

That is because the term HDR is overloaded and can mean different things in different contexts. In photography, for years “HDR” meant tonemapping an HDR source (often created by bracketing exposures and merging them) down to SDR, and that is what you are finding here. In the video world it means content that is suitable for an HDR monitor. So for what you want, your best bet is to look at how video is mastered for HDR.

A couple of things to note:

  • Consumer HDR monitors potentially do really bad things™ to your image (it is normal to send Rec.2020-PQ or -HLG plus the metadata to the screen, which is then gamut- and tone-mapped by the screen itself; the specs leave a lot of leeway “to make it look good” for the manufacturers, so we don’t know what most of these screens are actually doing)[1]
  • You’ll encounter the term scene-referred: this is linearly encoded imagery where 0 is black, 1.0 doesn’t necessarily mean anything in particular (usually mapped to diffuse white, but not standardized), and values theoretically go to +inf. This means it always needs to be tonemapped, even for an HDR monitor![2]
  • The only standard for working with HDR currently is ACES[3], and the current version assumes a high-end work environment, requiring a decent amount of knowledge to transfer it to less high-end (consumer) spaces
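
The scene-referred point in the list above can be sketched with the simplest S-ish view transform there is (Reinhard; real pipelines such as the ACES RRT/ODT are far more elaborate):

```python
import numpy as np

def reinhard(x, peak=1.0):
    """A minimal view transform (Reinhard): maps scene-linear [0, inf)
    smoothly into [0, peak), with no hard clip anywhere."""
    x = np.asarray(x, dtype=float)
    return peak * x / (1.0 + x)

scene_linear = np.array([0.0, 0.18, 1.0, 100.0])  # 100.0: a bright specular
print(reinhard(scene_linear))           # SDR target: everything stays below 1.0
print(reinhard(scene_linear, peak=4.0)) # HDR target: same shape, more headroom
```

Even the HDR target still needs the transform: scene-linear values are unbounded, and a display, HDR or not, has a finite peak, so some compression always happens at the top end.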

So, all in all, my conclusion is that it is currently not really possible to master HDR content for consumers, semi-pros and even small-time professionals. That doesn’t mean it can’t be created, but it will either be stuff directly from the camera (where the camera manufacturer does all the work for us) or a monumental amount of effort for not much gain. You’re probably better off getting a wide-gamut non-HDR monitor, since that is about 70–80% of the benefit of current HDR tech.

[1] See also this twitter thread about where consumer HDR currently is (TLDR: it is a mess)
[2] “Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform” - from cinematic color by Jeremy Selan


I guess. I don’t know. As far as filmic is concerned, it’s a 3-points mapping (black/middle grey/peak), so what display peak means in the scene is an artistic decision already in SDR.

Good luck with OpenCL :-/

Thank you (and AP) for your nuggets of wisdom, it is exactly what I needed.

AFAIK HDR mastering for video seems to be fairly easy to do even for amateurs, but is the software support for still photos just simply not here yet? That’s a big bummer.

And as for the PR, if the case is that the software to actually view the damn thing is simply not there, then I might as well just give up :man_shrugging:

I might look online to see if there are (real) HDR photos online and try to view them on my phone and see if they trigger the same behavior as a HDR youtube video. Maybe drilling down the metadata on that file might yield some insight.