HDR, ACES and the Digital Photographer 2.0

Someday someone has to write a tutorial on this or a similar workflow. :wink: Maybe it already exists. I'm just lazy or ignorant. :blush:

If I have this right, you:

  1. img.Raw → rawtoaces → img.EXR
  2. img.EXR → Natron with ACES OCIO config, graph to apply OCIO tools → img.jpg

Yes, that is right. Of course you don't have to stick to OCIO nodes; as long as a node keeps the scene-referred data intact it should work (so, for example, no invert on color data; alpha maybe, depending on whether it is premultiplied or not; masks are fine to invert).
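If you want to script that second step instead of building a Natron graph, a minimal sketch with the OpenImageIO Python bindings could look like this (my own substitution for the Natron graph, not anyone's actual setup; the color space names are the ones from the ACES 1.0.3 OCIO config, and the `OCIO` environment variable must point at that config):

```python
import OpenImageIO as oiio

# Scene-referred ACES EXR as produced by rawtoaces
src = oiio.ImageBuf("img.exr")
dst = oiio.ImageBuf()

# One OCIO conversion from the scene-referred space to the display space;
# everything before this point stays scene-referred, as noted above.
oiio.ImageBufAlgo.colorconvert(dst, src, "ACES - ACES2065-1", "Output - sRGB")
dst.write("img.jpg")
```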

Very interesting exercise!

@dutch_wolf @Elle @gwgill, and others, I have a very basic question concerning how to prepare images for HDR displays. In our "normal" workflow we adapt the output such that "0" is mapped to black and "1" is mapped to the display white point, with middle gray mapped to 0.18 in linear encoding. The display has a maximum brightness of 100 nits or so.

What happens when the display is capable of generating a brightness level 10x bigger? I suppose that the black level does not increase by 10x as well, otherwise it would not be HDR but simply brighter, right?

To which brightness levels should one map the “0.18” and “1” values in this case?

Sorry, I don't have the answer to that; you should probably look at the encoding specs (maybe take a look at the HDR10 standard?).


Anyway, this is how I think a photography workflow should look:

Undecided about the exact color spaces to use, although either ACES2065-1 or ACEScg should be workable.

This hypothetical raw editor would also be usable for an ACES workflow: just disable the user-adjustable tonemap operator and load an ACES OCIO config. That would look like this:


(EDIT: this assumes that the user is working on providing photos for use as mattes)


I faintly recall someone (@age?) briefly talking about this or something like it in one of the many threads; maybe:

Rendering also has a specific meaning in color management, in relation to color appearance/viewing environment adjustment, and/or adjusting for device gamut limitation.
A (typically input referred) image is rendered to an output device space.


To 0.18 and 1.0 in scene-referred space.

The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme.

Technical Introduction to OpenEXR (PDF): www.openexr.com › documentation › Te…

ACES is scaled such that a perfect reflecting diffuser under a particular illuminant produces ACES values of 1.0.

https://acescentral.com/t/supported-compression-for-aces-exr-container/391/5

@age - I looked at both of those links. I don't think either really has the correct equation for the TRC, only information for putting the equation together. Either way, the step from "here's stuff that can be used to make the equation to put into a spreadsheet" to "here's the actual equation to put into the spreadsheet" is a step I'm not prepared to take on my own. :slight_smile: I'm going to send out a couple of emails to ask for some guidance, and in the meantime maybe someone on this list might have or can find the actual equation? It probably starts with "Y=" and probably has a value that indicates how to modify the PQ equation to incorporate the nits value. Please don't assume the previous sentence means I know what I'm talking about! Or maybe the previous sentence makes it obvious that I don't know what I'm talking about. :slight_smile:

@dutch_wolf - thanks for the Natron files and for the explanations of various terms! I downloaded Natron from git - which branch should be compiled? Also, I found a link on GitHub - probably already given earlier in this long thread - for the ACES OCIO configurations; is this also needed? Because of other time commitments it will be maybe a week before I can find the time to work through your examples, but I'm looking forward to seeing other people's responses to/results from working with Natron/OCIO/ACES.

As an aside, I have a very high dynamic range EXR file if anyone might find it useful - it's not very pretty! But it might be nice to see what the results are for low, "middle/extended" and very high dynamic range images, so I can post the file if anyone is interested.


Currently I use the Flatpak Natron package myself, which AFAICT is reasonably up to date. For the OCIO configs I just cloned https://github.com/ampas/OpenColorIO-Configs and used v1.0.3.

Do note that the Reader and Writer nodes are fully OCIO color managed but the Viewer isn't, so before the Viewer there needs to be an OCIO display node, and the viewer itself needs to be set to linear.

(Viewer setting highlighted in red; the example is actually set up to use Filmic Blender, but the general principle stays the same.)

I’m no expert but from what I’ve read so far, it seems that:

  • ACES2065-1 is linear and is a format for file archival and interchange.
  • ACEScg is linear and is used for CGI and compositing.
  • ACEScc is logarithmic and is for colour correction and grading.
  • ACEScct is logarithmic and is also for colour correction and grading, but has a toe like traditional log curves.

There's mention in the primer that ACES2065-1 can cause problems with software, leading to distortion, when used as an internal colour space.

I think the normalization of the exposure for the scene-referred space is very important.
For example, I've taken two shots: the one on the left is exposed for mid-gray and the other one for the highlights.
The overall brightness on the left is similar to what my eyes saw in real life.


Anchor the mid gray to 0.18 for both pictures; values above 1.0 could be seen only on an HDR monitor.

Convert to log gamma from scene-referred.
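The post doesn't say which log encoding is used; as one concrete example, here is a sketch of the ACEScc curve (constants from the ACES S-2014-003 spec, to the best of my recollection) taking scene-linear values to log:

```python
import math

def lin_to_acescc(x):
    """Scene-linear -> ACEScc log, per S-2014-003."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x / 2.0) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

print(lin_to_acescc(0.18))   # middle gray lands around 0.41
```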

Now it's possible to color grade for an SDR sRGB monitor (same s-curve "tonemapper" for both).

Or convert to standard HDR10; it is true HDR from a single image.

The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme. Other pixel values can be easily determined from there (a stop brighter is 0.36, another stop is 0.72).
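A minimal sketch of that anchoring step (my own illustration, not from the post): measure the linear value an 18% gray reference actually has in the shot and scale the whole image so it lands on 0.18, leaving values above 1.0 intact for HDR output:

```python
import numpy as np

def anchor_mid_gray(img, measured_gray):
    """img: float32 scene-linear RGB array; measured_gray: the linear
    value a mid-gray patch actually has in this exposure."""
    return img * (0.18 / measured_gray)

# Once anchored, each stop above middle gray doubles the value:
print(0.18 * 2.0 ** np.arange(4))   # [0.18 0.36 0.72 1.44]
```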


You are right, so for the internal workspace I am thinking ACEScg, with digital intermediates in ACES2065-1[1]. This is similar to full ACES; the biggest change would be to not use the ACES ODTs, since as a photographer I want to be more flexible in my Render Transform (so not using the RRT all the time).

For communicating the LUT generated by the processors to downstream programs I would just use environment variables as described in the OCIO documentation (see: http://opencolorio.org/userguide/looks.html#userguide-looks).


[1] I keep typing ACES2065-4 for some reason, but that is the ACES EXR container (SMPTE ST 2065-4), which uses ACES2065-1.
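A hedged sketch of what that environment variable hand-off could look like (the variable name GRADE_LUT and the .csp file name are my own examples; the mechanism, a context variable resolved inside a FileTransform, is the one described in the OCIO looks guide):

```python
import os

# Point every OCIO-aware app at the shared ACES config
os.environ["OCIO"] = "/path/to/aces/config.ocio"

# Hand the per-image grade over as a context variable; the config would
# reference it as $GRADE_LUT inside a FileTransform for the look.
os.environ["GRADE_LUT"] = "img_0042_grade.csp"

# Any OCIO-aware program (Natron, oiiotool, ...) started from this
# environment now resolves the look to the per-image LUT.
```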


From here:

https://www.smpte.org/sites/default/files/section-files/HDR.pdf

With HDR PQ, there is no agreed upon diffuse white point level. Many are using 100-200 nits as the diffuse white point level, the old 90% reflectance point (100 IRE). The camera operator or colorist/editor must also know what reference monitor will be used for grading the content. For example, if a 1000 nit monitor is used for grading, with a diffuse white point of 100 nits, white is set at 51% for SMPTE ST 2084 (1K). If a 2000 nit monitor is used, diffuse white is set at 68%.

Which I’m assuming leaves room for the specular highlights and “brighter than diffuse white”. But again, HDR displays are not something I know anything at all about. When I asked for a multiplier for the equation for the TRC for the profile for an HDR10 monitor, I’m guessing that the multiplier has to do with where diffuse white is set. But this is just a guess.
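For what it's worth, the 51% figure falls straight out of the ST 2084 curve; here is a quick sanity check (my own, not from the PDF) using the standard SMPTE ST 2084 constants:

```python
m1, m2 = 0.1593017578125, 78.84375
c1, c2, c3 = 0.8359375, 18.8515625, 18.6875

def pq(x):                         # x: linear, 1.0 = 10000 nits
    xp = x ** m1
    return ((c1 + c2 * xp) / (1 + c3 * xp)) ** m2

print(pq(100 / 10000))             # ~0.509: diffuse white at 100 nits sits
                                   # at about 51% of the PQ signal range
```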

@Carmelo_DrRaw @Elle I read through lots of PDFs today. I didn’t keep track of them but here is one that seems to address much of it: https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2390-1-2016-PDF-E.pdf, which I only skimmed just now, sorry. However, the obsessive reading spree was on my small-screened and half-touch-broken phone, while busy doing something else. :joy_cat: I might have gotten things mixed up because of that and the fact that there is so much info out there, but here are some points that might be relevant. Again, I am speaking in non-technical possibly vague terms. My purpose in threads like these is to brainstorm and / or provide sanity to a very complex subject. I will leave the technical efforts and battles to the rest of you. :innocent:

1. There are two standard transfer functions, called Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG). Each has its strengths and weaknesses. Briefly, PQ is an absolute display-referred signal; HLG is a relative scene-referred signal. The former needs metadata and isn't backwards compatible with SDR displays; the latter is. Depending on the rest of the specs, especially for PQ, there is a preferable type of HDR display and surround. A common measure is the amount of nits.
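To make the PQ/HLG contrast concrete, here is a sketch of the BT.2100 HLG OETF, the relative scene-referred one of the two (constants from the spec as I recall them; treat them as an assumption to verify):

```python
import math

def hlg_oetf(e):
    """BT.2100 HLG OETF: relative scene-linear light e in [0, 1] -> signal."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return a * math.log(12.0 * e - b) + c

print(hlg_oetf(1.0 / 12.0))   # 0.5: the square-root and log segments meet here
```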

Looking into this would probably answer @Carmelo_DrRaw’s question. Many documents show what happens on various displays and surrounds combinations. Pretty graphs and descriptions. Makes me want to root for one or the other as if it were a competition. :stuck_out_tongue: (I am leaning toward HLG :racing_car: :horse_racing: :soccer:).

2. Next we have @Elle’s link and comments.

Stupid GitHub now won't let me search from its front page without logging in. MS's handiwork? :angry:

These commits and their comments show how variable the standards can be. There were PDFs talking about the choices made by various entities, workflows and devices. The discussion varies depending on the perspective of the document or slideshow publisher, but you kind of get a gist of what the common themes are among the infographics, tables and figures.

@Elle's particular linked document HDR.pdf gives examples in the form of waveforms, which is very helpful from our perspective. Photographers tend to use the histogram (JPG); videographers use waveforms (and other scopes) to quickly gauge where the DR, among other things, is. As you look at the images, to me at least, it is easy to understand why "there is no agreed upon diffuse white point level". It has to do with a lot of things, a few of which I will briefly list in the next paragraph.

Just as we need to make decisions when we look at the camera’s histogram (generally generated by the preview JPG, not the raw!), the videographer has to look at the scopes to determine and decide on the DR and the distribution of tones, among other things. Choices need to be made (edit: and we need to consider leaving some data and perceptual headroom too). Hopefully consistent ones per batch or project. These decisions are based on a number of factors including personal experience and tastes; client and product expectations; workflow; and ultimate output and viewing conditions. There is a lot to be said about point #2 but I have to rest after a tough day!


It's definitely the second link: http://www.streamingmedia.com/Downloads/NetflixP32020.pdf

Linear to ST 2084 (10000 nits):

y=\big({c1 + c2*x^{m1} \over 1 + c3*x^{m1}}\big)^{m2}

Linear to ST 2084 (1000 nits):

y=\big({c1 + c2*(x/10)^{m1} \over 1 + c3*(x/10)^{m1}}\big)^{m2}

Tested in VapourSynth (https://github.com/vapoursynth/vapoursynth), where

c=core.resize.Bicubic(clip=c, format=vs.RGBS, transfer_in_s="linear", transfer_s="st2084", nominal_luminance=1000)

is equivalent to this in reverse Polish notation:

c = core.std.Expr(c, expr=" 0.8359375 x 10 / 0.1593017578125 pow 18.8515625 * + 1 18.6875 x 10 / 0.1593017578125 pow * + / 78.84375 pow ",format=vs.RGBS)

y=\big({0.8359375 + 18.8515625*(x/10)^{0.1593017578125} \over 1 + 18.6875*(x/10)^{0.1593017578125}}\big)^{78.84375}
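In other words, the two variants differ only by a pre-scale of 10000/peak before the fixed curve. A sketch wrapping that up (mirroring what the nominal_luminance=1000 call above does; my own helper, not part of VapourSynth):

```python
def st2084_encode(x, peak_nits=10000.0):
    """Linear x (1.0 = peak_nits) -> ST 2084 signal."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    x = x * (peak_nits / 10000.0)   # rescale so 1.0 maps to peak_nits
    xp = x ** m1
    return ((c1 + c2 * xp) / (1 + c3 * xp)) ** m2

print(st2084_encode(1.0, peak_nits=1000.0))   # ~0.75: 1000 nits on the PQ scale
```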

OK, that’s an equation that I can put into a spreadsheet - thanks! I’ll try to post the resulting ICC profile later today.

I don't quite get it… this means that the same y value is obtained for an input value that is 10x bigger at 1000 nits than at 10000 nits. Shouldn't it be the other way around?

EDIT: I have changed the formulas in @afre's post to take advantage of the new math formatting, for better readability…


See the equation on page 20 here:

https://www.smpte.org/sites/default/files/23-1615-TS7-2-IProc02-Miller.pdf

The PDF is very readable.

I think the equation @age gave perhaps “goes the other way”.

The equation on page 20 is for 10,000 nits, fwiw.

Edit: Ok, here’s the equation on page 20 in a spreadsheet:

pq-luminance-equation.ods (29.2 KB)

@Carmelo_DrRaw or anyone - could you check my equations?

My spreadsheet just replicates the equation on page 20 and spits out the values for a point TRC with 101 points from 0 to 100, where 0 maps to 0 and 100 maps to 65535. Assuming my equations are correct, more TRC points would be needed for an actual ICC profile, and I'm absolutely sure the current equations in the spreadsheet aren't right for a 1000-nit monitor; for one thing, the nits value is wrong.
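If it helps anyone double-check, here is what I believe the spreadsheet computes, sketched in Python: sample the page-20 direction of the curve (signal to luminance, i.e. the EOTF) at 101 points and scale to 16-bit. This is my own reconstruction from the description above, not the actual spreadsheet:

```python
m1, m2 = 0.1593017578125, 78.84375
c1, c2, c3 = 0.8359375, 18.8515625, 18.6875

def st2084_eotf(v):
    """ST 2084 signal v in [0, 1] -> luminance as a fraction of 10000 nits."""
    vp = v ** (1.0 / m2)
    return (max(vp - c1, 0.0) / (c2 - c3 * vp)) ** (1.0 / m1)

# 101-point TRC: point 0 maps to 0, point 100 maps to 65535
points = [round(65535 * st2084_eotf(i / 100.0)) for i in range(101)]
```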

I have converted this HDR image https://hdrihaven.com/hdri/?h=aerodynamics_workshop to linear RGB with Rec.2020 primaries. It's scene-referred, so it goes from 0 to infinity (actually from 0 to +15.0 = 1500 nits).

Scaled down to the 0-1 range:

y = x/15

Converted to ST 2084 (1500 nits):

y=\big({c1 + c2*(x/6.666666)^{m1} \over 1 + c3*(x/6.666666)^{m1}}\big)^{m2}

10000 nits / 1500 nits ≈ 6.666666

While for ST 2084 at 1000 nits we have to scale down from scene-referred in this way (some highlights will be clipped):

y = x/10

and this formula:

y=\big({c1 + c2* (x/10)^{m1} \over 1 + c3*(x/10)^{m1}}\big)^{m2}

10000 nits / 1000 nits = 10
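Putting the two steps together as a worked sketch (the image array here is just a stand-in for the converted HDRI; the helper is the same ST 2084 encode sketched a few posts up):

```python
import numpy as np

def st2084_encode(x, peak_nits=10000.0):           # same helper as above
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    x = x * (peak_nits / 10000.0)
    xp = x ** m1
    return ((c1 + c2 * xp) / (1 + c3 * xp)) ** m2

# Stand-in for the scene-referred Rec.2020 linear HDRI, range 0..15
img = np.linspace(0.0, 15.0, 16, dtype=np.float32)

img_norm = img / 15.0                              # 1.0 now means 1500 nits
pq = st2084_encode(img_norm, peak_nits=1500.0)     # the 1500/10000 pre-scale is
                                                   # exactly the x/6.666666 above
```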