Mastering workflow for linear images for HDR displays

That’s an endeavor to develop camera characterizations for ACES Input Device Transforms (IDTs). In this thread, you’re picking at the other end, Output Device Transforms (ODTs), in ACES terms…

I’m not so sure HDR for video is even well-corralled. It has the same fundamental problem as still media, in that the rendition of a file on all the possible screens cannot be controlled by the media creator in all its incarnations. To avoid all that, still photography came to rely on sRGB as a device “assumption”; now that we have better displays, we don’t have the intermediate exchange of information needed to handle both them and the old technology.

In FOSS software, there’s movement toward keeping data scene-referred until the act of display or export, darktable being at the forefront (rawproc is further along, but nobody but me uses that… :smiley: ). It’s that “export” thing that’ll vex taking advantage of better displays, until such time as display color management becomes transparent to all, including my mom…

Not sRGB exactly, though that is currently the right output for the internet (hopefully that will change in the future).

History lenses time:
Digital photography as we currently know it has its origins in desktop publishing, which invented ICC profiles to deal with a wide variety of (potentially cheap!) output/input devices (printers (including offset), scanners, displays, etc.). In contrast, the film industry, having only a few input/output devices to deal with, and those being a drop in the bucket compared to the real costs of making a movie, just went for the expensive option, set them up once, and called it a day. Later, with digital compositing/editing and VFX, they realized they needed a standardized workflow, so they developed ACES, which still assumes you are using a screen that is not only properly set up but can display the selected output gamut 100% or nearly so (though that is probably going to change when OCIO v2 comes out, hopefully later this year, since that should have ICC support for the output display).

The biggest problem in HDR production on the cheap currently is that there is no standard way to describe the output (like an ICC profile); there are some potential candidates for this though (like iccMAX). (The other big problem is that consumer screens have carte blanche in converting the input rec2020-pq/-hlg to the actual display, with a mandate to “make it look good”.)


The so-called HDR devices have a good wow factor but I would rather have a boring yet cooperative and accurate display. (Coming from someone who doesn’t do CMS because the screens are ugly and defective anyway. :roll_of_toilet_paper::roll_eyes:)

PS That is how sales works. Every TV and monitor has a showroom mode where everything is turned to max and then some, and the space and lighting around the product is fabulous too. :sparkles:

All of this is bad news, but at least using a wider color space will actually work decently well… right?
As opposed to triggering the wide dynamic range mode in monitors.

Yes, from what I have seen many HDR monitors (at least the ones for PCs) also offer a wide-gamut mode (non-HDR). Do note that in that mode anything that is not color corrected (e.g. user interface elements) will look off (at least on Windows and Linux; Mac is a lot better in this regard) since there is no full-screen color management. That said, all serious image editors/viewers (Krita, GIMP, darktable, RawTherapee, etc.) are color managed, so the photos/images you work on should look correct!
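
To make “color managed” concrete: a viewer or editor converts the image’s pixels from the profile embedded in the file to the monitor’s ICC profile before display. Here is a minimal sketch with Pillow/LittleCMS, assuming a hypothetical monitor profile filename (use whatever your OS or calibration tool produced):

```python
# Minimal sketch of what a color-managed viewer does on a wide-gamut
# display: convert from the image's embedded profile to the monitor's
# ICC profile before showing the pixels. Uses Pillow's ImageCms (LittleCMS).
import io
from PIL import Image, ImageCms

img = Image.open("photo.jpg").convert("RGB")

# Source profile: the one embedded in the file, falling back to sRGB.
icc_bytes = img.info.get("icc_profile")
src_profile = (ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
               if icc_bytes else ImageCms.createProfile("sRGB"))

# Destination profile: the monitor's calibration profile (hypothetical path).
dst_profile = ImageCms.ImageCmsProfile("my-wide-gamut-monitor.icc")

transform = ImageCms.buildTransform(src_profile, dst_profile, "RGB", "RGB")
display_ready = ImageCms.applyTransform(img, transform)
display_ready.show()
```

Anything that skips this step (most UI elements, non-managed apps) sends its sRGB-ish values straight to the wide-gamut panel, which is exactly the oversaturated “off” look described above.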

Excellent. Ironically this means I don’t have to buy $1000 FALD HDR monitors; I just need something that offers DCI-P3 coverage :smiley: (for other people reading the thread: DisplayHDR 500 and above, instead of DisplayHDR 1000, essentially)

I’d like to have both, actually: a calibrated sRGB or AdobeRGB display AND the manufacturer’s best attempt at an HDR rec2020-pq/hlg rendition for the wow factor. Of course a calibrated HDR display would be better, no doubt about that, but until then give me the other one.
The video HDR demos at the electronics retailers look mind-blowingly better than sRGB displays. The prices are dropping, and photographers can’t even display a static image? Aren’t even games starting to ship in rec2020-pq?

Shotcut can talk to Blackmagic SDI/HDMI cards… since it is based on the MLT framework, it probably only writes 8-bit data into some memory which gets displayed. If color management in Wayland is so broken, or so much of a chicken-and-egg situation, how about talking to a DeckLink for displaying static images?

a) What is the difference in that Windows case between consumption (displaying?) and creation? As I alluded to above, displaying is not the same as color-critical editing/decision-making.
b) Do you mean the VESA-HDR vs. HDR10 vs. DolbyVision vs. HDR10+ multitude of ‘formats’?

Yes, and apparently not, and yes. There are plenty of games that support HDR. Destiny 2 is free to play and supports HDR, and ROTR looks great in HDR.

The problem is that implementing this requires cooperation from so many parties it’s not even funny. The viewing of HDR images needs to work on Android and iOS to actually get the ball rolling. What’s the point in making HDR images (which ATM you can’t even do) if nobody can see them because the software doesn’t support it?


a) Note this is from my understanding and from reading some reviews, but it seems that Windows HDR is focused on full-screen applications (games and videos mostly), sometimes apparently doesn’t turn on properly, and non-HDR content apparently looks awful (again, I got this mostly from reviews, and most of those were gaming focused). Also, though I’m not sure if it is still true, it needed to be turned on in 2 or even 3 places to work (IIRC).
b) That is part of it, yes; there are also the VESA standards that require a table to figure out which is which, unclear marketing, and content that is supposedly HDR but when actually analyzed is anything but (and the content is hard to analyze since most of it is behind DRM).

When it works it is amazing (though personally I think most of that comes from the wider gamut, not from the wider dynamic range), but there are currently so many hoops to jump through that I am still in wait-and-see mode myself.


So Windows 10 is not properly color-managing HDR then. Interesting that Krita seems to have implemented something nonetheless.

I strongly disagree that this is only down to gamut. If you can, try to find two different sRGB displays with different on-off contrast ratios (say 700:1 and 1100:1) and calibrate them. Then go through a few hundred photos on flickr. If you can’t get the deltaE close between both displays, just look at black-and-white pictures. Even with just 8-bit JPEGs, the clear winner will be the display with more contrast.
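
For reference (my own back-of-the-envelope numbers, not from the post above): the on-off contrast ratio translates directly into stops of display dynamic range via a base-2 logarithm:

```python
import math

# On-off contrast ratio -> stops of dynamic range (base-2 log).
for contrast in (700, 1100):
    print(f"{contrast}:1  ≈ {math.log2(contrast):.1f} stops")

# 700:1  ≈ 9.5 stops
# 1100:1 ≈ 10.1 stops
```

So the two example displays differ by roughly two-thirds of a stop of usable dynamic range.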

We have a long way to go. :nerd_face:

Never had a fancy TV or monitor. I have always considered contrast ratios, wide gamut and HDR-speak to be mostly marketing material. E.g. claimed contrast ratios are often dynamic, not absolute, and certainly not per pixel. When you add gamut into the equation: if you wanted to display a series of HaldCLUTs at varying brightnesses spread across the screen, how many displays could actually reproduce a fraction of that? Now add motion and transform them into ever-changing fractals, etc. Then add windows of apps with different colour management settings on that one display. Change the lighting conditions of your room, turn off the AC. :stuck_out_tongue:

Maybe, if you had one of these…

That is because we humans generally like a more contrasty look, but most of those images won’t have been mastered for the high-contrast display, so they might look good but not as the artist intended.


I suspect that is because reality has more than 9.5 stops of dynamic range.

My argument was along the lines of: if you don’t have an HDR display to see the difference to an SDR display, take two SDR displays to see the differing impact of dynamic range within the same small gamut. It seems you agree that contrast/DR is a key component of making something ‘look better’, no?

I am a bit baffled by this. This is easy to measure, and it is measured by people who test displays. For the subjective quality… a.k.a. what you think of it, any electronics store can be enlightening.

True, but again, this can be measured: static on-off contrast, simultaneous contrast (ANSI contrast), and I wouldn’t be surprised if people now measure short-term peak contrast for HDR displays as well. It exists for a reason, not as a marketing gimmick.

You lost me there a bit. :slight_smile: All those things happen already. It seems photographers just aren’t aware of this. HDR10 displays, the bog-standard ones, not the ones with the fancy dynamic metadata for real-time gamut remapping (!) (a.k.a. HDR10+ and Dolby Vision), look mind-blowingly good. I actually want to know how much better an HDR10 mastering display looks… but even a consumer HDR10 display is simply amazing.

That is probably not the full story, since AFAIK people also prefer pictures with higher contrast even when shown on the same display (i.e. when the contrast slider is pushed just a bit further; of course it can be pushed too far), so it is not only due to higher dynamic range.

But you won’t see the impact of dynamic range, since the source image can’t have a wider dynamic range; you just stretch the image’s dynamic range over a larger contrast range, probably ruining the artist’s intent. Sure, this might look good to some people, but it isn’t what you should be looking for when color accuracy is important.

Without knowing exactly what the consumer screens do (it is still a black box, color-management wise), I wouldn’t be surprised to learn that those screens actually increase contrast, and thus a good mastering screen will actually look “worse” due to being much more color accurate. Also see the Twitter thread I linked to earlier.

Honestly, me too… I didn’t think that having more contrast (all else being equal) could be considered anything but a good thing. But maybe I misunderstood the other posts?


Yes. Let’s say, for example, that I have a poor-contrast (SDR) screen on which I edit my pictures while you have a nice high-contrast (SDR) screen; any pictures I send you will look different and in some cases can be totally ruined (vice versa is also true and might even be worse). This is because in the SDR world there is no encoding for how bright something actually should be. This is one theoretical benefit of HDR encoding, since the Electro-Optical Transfer Function (EOTF for short) maps the digital code value to an absolute light output (e.g. with the PQ EOTF, 1023 (10-bit full range), 1.0 (float) or 940 (10-bit tv legal range) is 10,000 nits per definition).
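
As a concrete illustration (my own sketch, not part of the post above): the SMPTE ST 2084 PQ EOTF maps a normalized code value to an absolute luminance in nits, which is why two compliant displays are supposed to emit the same light for the same signal:

```python
# SMPTE ST 2084 (PQ) EOTF: normalized signal in [0, 1] -> absolute
# luminance in cd/m^2 (nits). The constants are taken from the spec.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """PQ non-linear signal (0..1) -> luminance in nits (0..10000)."""
    e = signal ** (1 / m2)
    return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

print(pq_eotf(1.0))     # 10000.0 nits: the defined peak
print(pq_eotf(0.7513))  # ≈1000 nits, the DisplayHDR 1000 peak level
```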


This is a huge and not-so-theoretical benefit over e.g. sRGB, actually. It allows vastly different display tech to be much, much closer together in appearance than with sRGB, even when the display has to do internal black-box shenanigans once out-of-gamut or out-of-luminance content is reached. So right there, ‘HDR’, to throw that term around as loosely as possible, is better.

(I hope I am not being misunderstood: out-of-gamut and out-of-luminance content is still a problem, and it could look horrible if the implementation is wrong. Everything else is better defined from the get-go, though.)

I agree it is an amazing benefit; I call it theoretical since it assumes that people will actually follow the spec, and we all know how companies (especially in the consumer space) like to skirt those lines a bit (understatement). There are a lot of in-theory-amazing specs out there that, due to being completely abused, are not worth the paper they are written on; only time will tell if the HDR specs will be among those or not.


But that contrast slider manipulates screen-space luminance. I am not sure how this is not creating a higher-dynamic-range presentation… it is.

Well, as a photographer I sometimes have an intent for how things have to look. If my display has too little contrast compared to whatever standard, color accuracy is not helping me. I want to know, and decide, that the luminances I assign to certain elements in a scene look how I want them to look. The contrast slider is a poor man’s mastering tool for that. My camera easily shoots 3 or 4 stops more than I can display. I have to employ DR compression to get everything into what I want to show. Now there is better display tech solving this to a certain extent. Gaming and movies are already using that tech; photographers are very late to the game.
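
To illustrate what I mean by DR compression (a minimal sketch of my own, not any particular tool’s curve): a global Reinhard-style roll-off can squeeze a few extra stops of linear scene data into SDR output instead of clipping them:

```python
import numpy as np

def compress_dr(linear: np.ndarray, white: float = 16.0) -> np.ndarray:
    """Extended Reinhard curve: linear scene-referred values in
    [0, white] -> display-referred [0, 1].  `white` is the scene value
    mapped to display white; 16.0 is 4 stops above 1.0."""
    x = np.clip(linear, 0.0, None)
    return x * (1.0 + x / (white * white)) / (1.0 + x)

# Middle grey, diffuse white, and a highlight 3 stops above diffuse white:
# the highlight keeps detail instead of clipping to 1.0.
print(compress_dr(np.array([0.18, 1.0, 8.0])))  # ~[0.15, 0.50, 0.92]
```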

I get the point that you’re making. But as you wrote above, the standard already allows for much less of this black-box fudging in the implementation. I read Dan Olson’s thread and I partly agree. What his thread boils down to is this wonderfully snarky comment:

So standards are being developed for a display tech that isn’t ready, and thus compensation for the not-ready display tech is being baked into the standards, and I’m sure that will have zero downsides in the future

The HDR standards (unfortunately there’s more than one) are the first display standards that allow display tech to improve without producing garbage on the way to the ‘perfect’ display. Yes, there will be bogus implementations from companies slapping badges on stuff. This was way worse with SDR displays, though: orange reds, yellowish greens, 6-bit display drivers… If the standard forces everyone to up their game, I am all for it.

I have the feeling that once smartphone photography implements HDR editing of photos, real photographers will quickly want this too. Another death blow for traditional camera makers is waiting.

(Also I think it’s obvious that I want this technology to work, maybe a bit too much actually :smile: )