Mastering workflow for linear images for HDR displays

As stated above, wide gamut is not unique to HDR. Nor is bit depth. You have to understand those terms independently. A gamut may place limits on brightness, and wide gamuts are no exception (see: When Nits Are Not Enough: Quantifying Brightness in Wide Gamut Displays).

That leaves brightness/luminance and contrast ratio.

It sounds like the gamut was responsible for the beautiful colours here, not the brightness or contrast ratio. (And even if it was the brightness and contrast ratio, those things would be due to your screen; they would not be embedded in the video. And if they were due to your screen, it would be the same for all content viewed on that screen, not unique to that video.) As stated above, to take advantage of this, set ‘working profile’ to something wide. Rec 2020, ACES and ProPhoto are all wider than DCI P3. (But remember, just having a wide gamut isn’t enough to give you beautiful colours - it gives you a greater range of saturation - it is up to the artist to make it beautiful.)

If you are specifically outputting something to the DCI P3 space, then you can either set that as working profile, or set one of the others mentioned, with DCI P3 set as export/output profile. And for accuracy it would of course be preferable to edit and view on a monitor that covers (as close as possible to) 100% DCI P3.

Which is why I said HDR in monitor lingo. When you buy an “HDR” monitor, it’s a premium product with some industry certifications (namely DisplayHDR from VESA) that specify a minimum luminance, gamut, etc.

Also, to get a wider color gamut, you generally need more bits to prevent posterization. With Adobe RGB you can get away with 8 bits, but for DCI P3 and especially Rec 2020 you’re gonna have a bad time.
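Here’s a quick toy sketch (my own, plain numpy, not from any of the tools mentioned in this thread) of why bit depth matters more as the gamut widens: quantize the same smooth ramp to 8 and 10 bits. Each 8-bit step is four times coarser, and in a wider gamut each step also spans a bigger visible colour difference, which is where the banding comes from.

```python
import numpy as np

# Same smooth ramp quantized to 8 and 10 bits. The 8-bit steps are 4x coarser;
# in a wider gamut each of those steps covers a larger colour difference,
# so posterization becomes visible sooner.
ramp = np.linspace(0.0, 1.0, 4096)

q8  = np.round(ramp * 255)  / 255    # 256 code values
q10 = np.round(ramp * 1023) / 1023   # 1024 code values

print("max 8-bit quantization error :", np.abs(ramp - q8).max())
print("max 10-bit quantization error:", np.abs(ramp - q10).max())
```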

It sounds like the gamut was responsible for the beautiful colours here, not the brightness or contrast ratio. (And even if it was the brightness and contrast ratio, those things would be due to your screen

Brightness /does/ impact contrast ratio. Generally when HDR content plays, your display goes to full brightness to increase the contrast ratio. I hope I don’t have to explain why that is the case. The display might also activate things like local dimming to further improve the contrast ratio.
When HDR TVs and such play SDR content, they usually run at a lower luminance than the maximum (unless you specifically override it somehow) so things don’t look too bright, because SDR content was mastered assuming the white point luminance is at 100-300 nits or thereabouts.

they would not be embedded in the video.

They literally are. This is where my knowledge falls short, but there’s a lot of content out there that says “alright, we want this to be displayed at 600 nits” or whatever (and a bunch of other metadata as well, such as a built-in tonemap in case you can’t hit 600 nits, and even a LUT to move color spaces). This is called HDR metadata. I believe even pictures can be embedded with this kind of info. Check out colorist: GitHub - joedrago/colorist: Absolute luminance or bust!, which aims to be a CLI app to get that done.
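For a rough idea of what that metadata looks like, here is an illustrative sketch of the static metadata HDR10 carries alongside the pixels (the SMPTE ST 2086 mastering display colour volume plus MaxCLL/MaxFALL). The field names and the 600/180-nit numbers are made up for the example; this is not the schema of colorist or any other real library.

```python
# Illustrative only: HDR10 static metadata as a plain dict.
hdr10_static_metadata = {
    "mastering_display": {
        "primaries_red":   (0.708, 0.292),    # Rec. 2020 red (x, y)
        "primaries_green": (0.170, 0.797),
        "primaries_blue":  (0.131, 0.046),
        "white_point":     (0.3127, 0.3290),  # D65
        "max_luminance_nits": 1000.0,
        "min_luminance_nits": 0.005,
    },
    "max_cll_nits":  600.0,   # brightest single pixel anywhere in the content
    "max_fall_nits": 180.0,   # brightest frame-average light level
}

print(hdr10_static_metadata["max_cll_nits"])
```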

Often, the HDR metadata needs to be sent from the app to the OS, through the GPU driver, to the HDR-compatible display for everything to click together and work. In fact, most HDR displays work in sRGB compatibility mode (AFAIK) if you don’t pass this information. Which is my concern: how the heck do I create an output raster that will trigger all the good stuff and make it look nice on my OLED phone?

edit: to add, there were plenty of indications that the HDR content flipped a bunch of switches on my phone to put it into a special mode. The brightness became locked at 100% even though I tried to pull it down, and the color gamut definitely looked expanded: there were regions that, when viewed on an sRGB screen like my laptop, looked deep-fried, with no separation between medium-bright saturated and full-luminance saturated areas. All automatic. I’m sure it’s because the video had embedded HDR metadata, which I know is a thing (look it up). HDR seems to be well-built for video, and the ecosystem is way more mature.

And if they were due to your screen, it would be the same for all content viewed on that screen, not unique to that video.) As stated above, to take advantage of this, set ‘working profile’ to something wide. Rec 2020, ACES and ProPhoto are all wider than DCI P3.

If the output profile is sRGB, darktable will do a color space conversion from the working profile to the output profile, using a rendering intent like “perceptual” or “relative colorimetric” (I’m not even sure which one they use, because there is no way to set it). Oh, by the way, the default working profile in dt is linear Rec 2020. So yeah, almost everyone works in wide gamut mode.
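For intuition, here is a minimal sketch of what a relative-colorimetric-style conversion from a linear Rec 2020 working space to linear sRGB boils down to (this is not darktable’s actual code, just the standard D65 3x3 matrices through XYZ, with crude clipping of whatever falls outside sRGB):

```python
import numpy as np

# Standard D65 matrices: linear RGB -> XYZ.
REC2020_TO_XYZ = np.array([
    [0.636958, 0.144617, 0.168881],
    [0.262700, 0.677998, 0.059302],
    [0.000000, 0.028073, 1.060985],
])
SRGB_TO_XYZ = np.array([
    [0.412391, 0.357584, 0.180481],
    [0.212639, 0.715169, 0.072192],
    [0.019331, 0.119195, 0.950532],
])

def rec2020_to_srgb(rgb_linear):
    xyz = REC2020_TO_XYZ @ rgb_linear
    srgb = np.linalg.solve(SRGB_TO_XYZ, xyz)   # XYZ -> linear sRGB
    return np.clip(srgb, 0.0, 1.0)             # crude gamut clipping

# A fully saturated Rec 2020 green lands outside sRGB and gets clipped:
print(rec2020_to_srgb(np.array([0.0, 1.0, 0.0])))
```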

I am hoping that setting the output profile to DCI P3 is enough to let me display with better capabilities, but considering your other inaccuracies I’m a bit worried about whether that will actually happen. This seems to be a path that not many have gone down. Semi-related tangent: remember the Android wallpaper brick incident? It was caused by a software bug that was triggered by a DCI P3 image: after multiplying with a luminance matrix to convert to grayscale, floating point rounding errors caused an index to overflow. Whoops. Looks like these HDR images are a new thing.

If you are specifically outputting something to the DCI P3 space, then you can either set that as working profile, or set one of the others mentioned, with DCI P3 set as export/output profile. And for accuracy it would of course be preferable to edit and view on a monitor that covers (as close as possible to) 100% DCI P3.

I don’t have my DCI P3 monitor yet, but I am hoping that everything works well and I get something with a wide dynamic range; I’m a little worried about whether 10-bit support and the appropriate HDR metadata passing are all in place.

I was hoping that someone with a bit more knowledge of the ecosystem, or someone with prior experience, could share a bit more insight. Again, repeating my original question: what file format do I use to get 10 bit and also the appropriate HDR metadata passed through to the monitor with good compatibility? Does it even exist?

Maybe this helps (disclaimer: I have zero experience with this, I don’t even have an HDR display)

press F :frowning:

But you’re the first guy in this thread that understood my question fully. Thank you!!

Anyone know what to even search to find good info on this? If you search HDR imaging you get a lot of idiotic posts on how to deep fry an image into something that looks like it came straight out of /r/shittyHDR.

I had no idea this was a thing. HDR has come to mean so many different things.

I don’t know you and it seemed that you were getting some things mixed up. Just wanted to be helpful. To keep it simple:

1 Make custom profiles of your camera, screen and printer, keeping in mind the editing and target surround and media. Other profiles to consider would be noise, white level and lens profiles. The approach could be simple or advanced; either way, doing it vs not doing it makes a big difference.

2 Do colour management. Your OS, CMS, apps, screen and video card must work in tandem; otherwise, it fails. In a simple workflow, you have a raw processor, raster editor and image viewer. All of these must do colour management internally and constantly communicate up and down the system the proper dynamic range, bit depth and gamut for accurate display.

3 In dt, choose a suitable working profile. This one is up for debate. I would choose a well-behaved profile. See: The Quest for Good Color - 1. Spectral Sensitivity Functions (SSFs) and Camera Profiles and Elle Stone's well-behaved ICC profiles and code. And I would choose a colour space that is slightly larger than your output profile, to give room for processing in the pipeline but not so much as to make the gamut compression difficult. dt 3.0.2 has the following:

[screenshot: working profile options in dt 3.0.2]

4 Choose an appropriate output profile for your output file. The list is almost the same as the working profile list. Notice you have some interesting options such as PQ Rec2020, which I am guessing is linear Rec2020 with a PQ OETF. You would have to check what kind of PQ it is. HLG is simpler because it is designed to work with both SDR and HDR displays; it is backwards compatible.
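If it helps, this is what the PQ (SMPTE ST 2084) encoding curve looks like. I am assuming “PQ Rec2020” means linear Rec 2020 light run through this function; I have not checked dt’s source, so treat this as a sketch of the standard curve, not of the module.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(y):
    """Map normalised linear luminance (1.0 == 10,000 nits) to a PQ signal in [0, 1]."""
    y = np.clip(y, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

# A 100-nit diffuse white sits at roughly half of the PQ code range:
print(pq_encode(100 / 10000))
```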

Anyway, have had lots of insomnia, so maybe I am getting it wrong or thinking one thing and writing something entirely different. Smarter and clearer minded people are bound to join the conversation. In fact, I see one of them typing already. :slight_smile:

do you know if this alone will activate all the right switches in the HDR display such that “HDR mode” is enabled?

I’m not a hardware specialist, so my understanding of HDR screens is incomplete.

SDR displays rely on the assumption that the display peak luminance (100% of the 8-bit range = 255) equates to the scene’s reflective white luminance, which is a white patch at 20% reflectance lit by the illuminant in the scene.

HDR displays encode specular highlights at display peak luminance (100% of the 10-bit range = 1023) and, usually (but that depends on the actual hardware decoding), keep the reflective white at 255, for direct legacy compatibility.

All in all, the difference between SDR and HDR displays is what peak luminance means: white, or brighter than white.

Filmic is compatible with HDR output; you just need to set the “display white luminance” (in the display tab) to more than 100% (for 10-bit output, that would be 400%, since you get 2 EV more). However, the output color profile and gamma/display encoding modules, later in the pipe, clip the signal to 8-bit unsigned integer, i.e. [0; 255], so darktable as an app is not HDR-ready yet.
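To spell out the arithmetic behind that 400% (just my reading of the 2 EV remark, nothing taken from filmic’s code):

```python
# 10-bit output has 2^(10-8) = 4x the code values of 8-bit output,
# i.e. 2 EV of extra headroom, hence 4 x 100% = 400% display white luminance.
extra_bits = 10 - 8
headroom = 2 ** extra_bits
print(f"{extra_bits} extra bits -> {headroom}x -> {headroom * 100}% display white")
```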

specular highlights

I assume this is really up to the colorist (in video parlance) to determine, right? Like if I wanted, I could assign things like emissive sources in a scene to be in the last 2 EV region.

Also :frowning: for the clipping. I might try to do a PR.

That is because the term HDR is overloaded and can mean different things in different contexts. For example, in photography HDR for years meant tonemapping an HDR source (often created by bracketing exposures and merging those together) down to SDR, and that is what you are finding here. In the video world it means something that is suitable for an HDR monitor. So for what you want, your best bet is to look at how video is mastered for HDR.

A couple of things to note:

  • Consumer HDR monitors potentially do really bad things™ to your image (it is normal to send Rec. 2020 PQ or HLG plus the metadata to the screen, which then does the gamut- and tone-mapping itself; the specs leave a lot of leeway “to make it look good” for the manufacturers, so we don’t know what most of these screens are actually doing)[1]
  • You’ll encounter the term scene referred: this is linearly encoded image data where 0 is black, 1.0 doesn’t necessarily mean anything in particular (usually mapped to diffuse white, but not standardized), and values theoretically go to +inf. This means it always needs to be tonemapped, even for an HDR monitor![2] (A toy view transform is sketched after this list.)
  • The only standard for working with HDR currently is ACES[3], and the current version assumes a high-end work environment, requiring a decent amount of knowledge to transfer it to less high-end (consumer) setups
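To make the “always needs a view transform” point concrete, here is a toy example: a simple Reinhard curve standing in for the “S-shaped” transform of [2]. It is only to show why scene-linear data can’t be pushed to a display as-is; real view transforms (ACES, filmic) are considerably more involved.

```python
import numpy as np

def reinhard(scene_linear):
    """Squash scene-linear values (0..+inf) into [0, 1) for display."""
    return scene_linear / (1.0 + scene_linear)

# Middle grey, diffuse white, and values well above white all end up on screen:
scene = np.array([0.0, 0.18, 1.0, 4.0, 16.0])
print(reinhard(scene))
```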

So all in all my conclusion is that currently it is not really possible to master HDR content for consumers, semi-pros and even small-time professionals. That doesn’t mean it can’t be created, but it will either be stuff straight from the camera (where the camera manufacturer does all the work for us) or a monumental amount of effort for not much gain. Probably just better off getting a wide-gamut non-HDR monitor, since that is about 70~80% of the benefit of current HDR tech.


[1] See also this twitter thread about where consumer HDR currently is https://twitter.com/FoldableHuman/status/1286773414657536000 (TLDR: it is a mess)
[2] “Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform” - from cinematic color by Jeremy Selan
[3] Academy Color Encoding System - Wikipedia


I guess. I don’t know. As far as filmic is concerned, it’s a 3-points mapping (black/middle grey/peak), so what display peak means in the scene is an artistic decision already in SDR.

Good luck with OpenCL :-/

Thank you (and AP) for your nuggets of wisdom, it is exactly what I needed.

AFAIK HDR mastering for video seems to be fairly easy to do even for amateurs, but is the software support for still photos simply not here yet? That’s a big bummer.

And as for the PR, if the case is that the software to actually view the damn thing is simply not there, then I might as well just give up :man_shrugging:

I might look online to see if there are (real) HDR photos out there and try to view them on my phone, to see if they trigger the same behavior as an HDR YouTube video. Maybe drilling down into the metadata of such a file might yield some insight.

This GitHub repo gave me some insights into that world.

All in all, the thrust of my posts has been to promote the idea that, while waiting for HDR to become common, we can develop good habits in photo processing and management.

PS Krita, not a raw processor but a raster editor, does support OCIO, so it isn’t a stretch that stills editors could adopt stuff from the video world.

speaking of ACES: HDR, ACES and the Digital Photographer 2.0

And yes, Krita is like the only thing that supports “real” scene-referred pixels well. Unfortunately I think using it for photography is not a great idea, just because the tools aren’t there.

GitHub - ampas/rawtoaces: RAW to ACES Utility. I wonder what this is?

Scene Linear Painting — Krita Manual 4.4.0 documentation Here’s some Krita documentation on scene-referred (they call it scene-linear, and used to call it HDR, lol) painting.

Indeed, though I need to stress that while the process is simple (for video at least), the hardware requirements to pull it off are anything but. Remember that $5k screen from Apple? That is cheap for a proper reference monitor; usually those start at $10k. Sure, not everyone in the production will need one of those, only the colorist for the final grade, but the other people are probably still using expensive calibrated monitors (in the $1k to $2k range per monitor).

I fully agree with this, which is why I have been hammering on the point that color management in Wayland needs to be top notch and preferably designed from the ground up to be able to include HDR imagery.

Krita is the furthest along here, and on Windows it does support HDR output, but then you run into the fact that a) Windows and HDR is still a bit of a crapshoot (it is designed for consumption, not creation, for one thing) and b) consumer HDR is a bit of a mess right now.


Yeah, I am actually thinking of getting that if I can find a good discount for it, haha
I wanted to try one out at an Apple store but well, the world’s gone to shit at the moment

But one can get decent monitors with great HDR for <$1k, not to mention a LOT of people have devices with HDR functionality (like, pretty sure all the new iPhones, Galaxy S9 and above(?), iPads, MacBooks, LG OLED TVs, Samsung QLEDs, …) that I kind of wanted to tap into, just to see how things are. And according to rumors, the new MacBook might have a microLED display, which will definitely hit crazy nits and have a ridiculous color gamut, possibly wider than the Pro Display XDR…

Not saying I’m gonna run darktable on my phone or something, but still, it would be awesome to see crazy cool photos on these devices.

and from what I hear, apparently upstream X.org and Wayland have no idea how color management works and it’s an absolute shitshow right now. Not a good sign :frowning:


Yes, but almost all of those are consumer focused and do internal tone- and gamut-mapping, which for creation is something you don’t want, since it means loss of control. (The current exceptions in the above list are probably the Apple displays, though we don’t know exactly what tech Apple is using, and any screen that supports FreeSync 2, but only in specific display modes.)

For modern display techniques X.org is a lost cause. We are trying to work with Wayland, but run into a chicken-and-egg problem there: no apps are interested in moving to Wayland yet since there is no color management (in X it worked because we could sort of bypass X, which is not possible in Wayland), and there is no interest in implementing a color management protocol since there are no applications. For HDR we also run into the problem that all current HDR standards are consumer ones that assume you just push Rec 2020 PQ/HLG to the screen and let the screen handle the rest (as a black box), so currently there is no way to properly profile or calibrate such screens (even if there were a protocol), with the exception, again, of the high-end monitors (which often use built-in LUTs or LUT boxes, but do have support for changing those LUTs, making calibration possible).


Lovely, looks like I just need to check out and come back to see what’s happening 5 years from now…
Hopefully Apple pushes the envelope by being first to market with a non-stupid microLED display then…

Indeed, there doesn’t seem to be an easy way to switch an HDR TV into HDR mode when watching stills. E.g. the new R5 captures HEIF that can only be displayed from the camera over HDMI: https://www.google.com/amp/s/www.dpreview.com/reviews/canon-eos-r5-initial-review.amp

The stills ecosystem is behind the video workflow when it comes to HDR, unfortunately; hopefully it’ll catch up soon, with more and more cameras capturing HDR directly and a push from Apple and Adobe…


That’s an endeavor to develop camera characterizations for ACES Input Device Transforms (IDTs). In this thread, you’re picking at the other end, Output Device Transforms (ODTs), in ACES terms…

I’m not so sure HDR for video is even that well-corralled. It has the same fundamental problem as still media: the rendition of a file on all the possible screens cannot be controlled by the media creator in all its incarnations. To avoid all that, still photography came to rely on sRGB as a device “assumption”; now that we have better displays, we don’t have the intermediate exchange of information needed to handle both them and the old technology.

In FOSS software, there’s movement toward keeping data scene-referred until the act of display or export, darktable being at the forefront (rawproc is further along, but nobody but me uses that… :smiley: ). It’s that “export” thing that will vex taking advantage of better displays, until such time as display color management becomes transparent to all, including my mom…