HDR, ACES and the Digital Photographer 2.0

I was going to refrain from further posting until the topics were sorted out, but I ran across this, which I think is pertinent to all aspects of this thread:

(License: CC BY-NC 2.5)

3 Likes

@ggbutcher To me, it is more about what I hinted at in another thread. People doing stuff with other people but not talking about it in their report. I have linked to at least one doc where there were blanket or vague statements like "we talked about it". That, and industry isn't willing to share; e.g., Adobe and others are involved. They would share if you gave them money, or if you used their patents and paid them for it. I guess that is part of their job and business. Or they are just people who are above or too busy to engage with us, the common folk.

1 Like

actually, I thought you were going to post this… :stuck_out_tongue:

2 Likes

I played around with Natron a bit and the rawtoaces utility. The biggest issue I ran into is that rawtoaces is a command-line tool that spits out EXR files, so it is hard to check whether the inputs are actually correct/useful. Since it doesn't really throw any data away it is possible to recover from this, but it does make the whole process quite a bit harder.
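One way to at least sanity-check the EXRs coming out of rawtoaces is to look at the channel statistics; here is a minimal sketch using the Python OpenEXR bindings (the file name is just a placeholder):

```python
import OpenEXR
import Imath
import numpy as np

# Print per-channel statistics of a rawtoaces output EXR as a sanity
# check; "output.exr" is a placeholder file name.
exr = OpenEXR.InputFile("output.exr")
dw = exr.header()["dataWindow"]
print("size:", dw.max.x - dw.min.x + 1, "x", dw.max.y - dw.min.y + 1)
pt = Imath.PixelType(Imath.PixelType.FLOAT)
for ch in ("R", "G", "B"):
    data = np.frombuffer(exr.channel(ch, pt), dtype=np.float32)
    print(ch, "min:", data.min(), "max:", data.max(), "mean:", data.mean())
```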

Anyway, some results:

Scene graph of above

The loop in the scene graph is used to isolate the eyes and give them a bit more emphasis

(Note: from the OCIO CDL node you can export the CDL, which can then be used as a look in other OCIO-compliant software, if configured correctly.)

Second example

With the scene graph

This time the sky is isolated

All needed files to reproduce: https://drive.google.com/open?id=1dHmcxXVTIPdvEjwUZjHAeG8oMiNJ6kCQ

To reproduce this, install Natron, download the ACES 1.0.3 OCIO config, and configure Natron to use it; for the DNG-to-EXR conversion, download the rawtoaces tool.
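For reference, a minimal sketch of the setup and conversion step in Python (the paths and file name are placeholders; rawtoaces is run with its default white-balance and matrix options):

```python
import os
import subprocess

# Point OCIO-aware applications (Natron included) at the ACES 1.0.3 config;
# the OCIO environment variable is the standard discovery mechanism
# (Natron can also be pointed at the config in its preferences).
os.environ["OCIO"] = "/path/to/OpenColorIO-Configs/aces_1.0.3/config.ocio"

# Convert a DNG to an ACES EXR with rawtoaces' default options.
subprocess.run(["rawtoaces", "IMG_0001.dng"], check=True)
```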

Above is licensed CC BY-NC 2.5

6 Likes

Someday someone has to write a tutorial on this or a similar workflow. :wink: Maybe it already exists. Just lazy or ignorant. :blush:

If I have this right, you:

  1. img.Raw → rawtoaces → img.EXR
  2. img.EXR → Natron with ACES OCIO config, graph to apply OCIO tools → img.jpg

Yes, that is right. Of course you don't have to stick to OCIO nodes; as long as a node keeps the scene-referred data intact it should work (so, for example, no invert on color data; alpha maybe, depending on whether it is premultiplied or not; masks are OK to invert).

Very interesting exercise!

@dutch_wolf @Elle @gwgill, and others, I have a very basic question concerning how to prepare images for HDR displays. In our "normal" workflow we adapt the output such that "0" is mapped to black and "1" is mapped to the display white point, with middle gray mapped to 0.18 in linear encoding. The display has a maximum brightness of 100 nits or so.

What happens when the display is capable of generating brightness levels 10× higher? I suppose that the black level does not also increase by 10×, otherwise it would not be HDR but simply brighter, right?

To which brightness levels should one map the "0.18" and "1" values in this case?

Sorry, I don't have the answer to that; it would probably require looking at the encoding specs (maybe take a look at the HDR10 standard?).


Anyway, I think this is how I want a photography workflow to look:

Undecided about the exact color spaces to use, although either ACES2065-1 or ACEScg should be workable.

Also, this hypothetical RAW editor would be usable for an ACES workflow: just disable the user-adjustable tonemap operator and load an ACES OCIO config. That would look like this:


(EDIT: this assumes that the user is working on providing photos for use as mattes)

1 Like

I faintly recall someone (@age?) briefly talking about this or something similar in one of the many threads; maybe:

Rendering also has a specific meaning in color management, in relation to color appearance/viewing environment adjustment, and/or adjusting for device gamut limitation.
A (typically input referred) image is rendered to an output device space.

1 Like

To 0.18 and 1.0 in scene-referred space

The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme.

Technical Introduction to OpenEXR (PDF), www.openexr.com › documentation › Te…

ACES is scaled such that a perfect reflecting diffuser under a particular illuminant produces ACES values of 1.0

https://acescentral.com/t/supported-compression-for-aces-exr-container/391/5

@age - I looked at both of those links. I don't think either really has the correct equation for the TRC, only information for putting the equation together. Either way, the step from "here's stuff that can be used to make the equation to put into a spreadsheet" to "here's the actual equation to put into the spreadsheet" is a step I'm not prepared to take on my own :slight_smile: . I'm going to send out a couple of emails to ask for some guidance, and in the meantime maybe someone on this list might have or can find the actual equation? It probably starts with "Y=" and probably has a value that indicates how to modify the PQ equation to incorporate the nits value. Please don't assume the previous sentence means I know what I'm talking about! Or maybe the previous sentence makes it obvious that I don't know what I'm talking about :slight_smile:

@dutch_wolf - thanks! for the Natron files and for the explanations of various terms. I downloaded Natron from git - which branch should be compiled? Also, I found a link on GitHub - probably already given earlier in this long thread - for the ACES OCIO configurations; is this also needed? Because of other time commitments it will be maybe a week before I can find the time to work through your examples, but I'm looking forward to seeing other people's responses to/results from working with Natron/OCIO/ACES.

As an aside, I have a very high dynamic range EXR file if anyone might find it useful - it's not very pretty! But it might be nice to see what the results are for low, "middle/extended", and very high dynamic range images, so I can post the file if anyone is interested.

1 Like

I currently use the Natron flatpak package myself, which AFAICT is reasonably up to date. For the OCIO configs I just cloned https://github.com/ampas/OpenColorIO-Configs and used v1.0.3.

Do note that the Reader and Writer nodes are fully OCIO color managed but the view node isn't, so before the view node there needs to be an OCIO display node, and the viewer needs to be set to linear.

(Viewer setting highlighted in red; the example is actually set up to use Filmic Blender, but the general principle stays the same)

I'm no expert, but from what I've read so far, it seems that:

  • ACES2065-1 is linear and is a format for file archival and interchange.
  • ACEScg is linear and is used for CGI and compositing.
  • ACEScc is logarithmic and is for colour correction and grading.
  • ACEScct is logarithmic and is also for colour correction and grading, but has a toe like traditional log curves.

There's mention in the primer that ACES2065-1 can cause problems with software, leading to distortion when used as an internal colour space.
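To give a concrete sense of what "logarithmic" means here, below is a sketch of the ACEScc encoding for normal-range values (the special segment for very small and negative values is omitted for brevity):

```python
import numpy as np

# ACEScc log encoding for values >= 2^-15: mid-gray 0.18 lands near
# 0.4135, which is what makes it comfortable for grading controls.
def lin_to_acescc(x):
    x = np.maximum(np.asarray(x, dtype=np.float64), 2.0 ** -15)
    return (np.log2(x) + 9.72) / 17.52

print(lin_to_acescc(0.18))  # ~0.4135
```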

I think the normalization of the exposure for the scene-referred space is very important.
For example, I've taken two shots; the one on the left is exposed for mid-gray and the other one for the highlights.
The overall brightness on the left is similar to what my eyes saw in real life.


Anchoring the mid-gray to 0.18 for both pictures; values above 1.0 could be seen only on an HDR monitor.
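In code terms that anchoring is just a global multiplier; a minimal sketch with made-up sample values and a hypothetical gray-card reading:

```python
import numpy as np

# Scale linear scene-referred data so a measured mid-gray patch lands on
# 0.18; values above 1.0 are deliberately kept (no clipping).
img = np.array([0.045, 0.36, 1.8], dtype=np.float32)  # sample linear values
measured_gray = 0.045  # hypothetical gray-card reading, ~2 stops under
img *= 0.18 / measured_gray
print(img)  # [0.18 1.44 7.2]
```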

Convert to log gamma from scene-referred

Now it's possible to color grade for an SDR sRGB monitor (same s-curve "tonemapper" for both),

or convert to standard HDR10; it is true HDR from a single image.
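Neither grade is shown in detail; as a rough illustration of the SDR path, a generic s-curve applied to a log encoding of the scene data could look like this (the log range and the smoothstep curve are arbitrary stand-ins, not the actual tonemapper used here):

```python
import numpy as np

def log_encode(x, lo=-6.0, hi=6.0):
    # Map scene-linear values into [0, 1] in log2 space around mid-gray 0.18.
    stops = np.log2(np.maximum(x, 1e-6) / 0.18)
    return np.clip((stops - lo) / (hi - lo), 0.0, 1.0)

def s_curve(t):
    # Generic smoothstep contrast curve, a stand-in for the actual grade.
    return t * t * (3.0 - 2.0 * t)

x = np.array([0.01, 0.18, 1.0, 4.0])  # linear scene-referred samples
print(s_curve(log_encode(x)))         # display-referred values in [0, 1]
```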

The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme. Other pixel values can be easily determined from there (a stop brighter is 0.36, another stop is 0.72)
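In other words, stops are just powers of two around 0.18:

```python
# Stops relative to middle gray under the OpenEXR 0.18 convention.
for stop in range(-2, 3):
    print(f"{stop:+d} stops: {0.18 * 2.0 ** stop:g}")
# -2: 0.045, -1: 0.09, +0: 0.18, +1: 0.36, +2: 0.72
```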

1 Like

You are right. So for the internal workspace I am thinking ACEScg, with digital intermediates in ACES2065-1[1]. This is similar to full ACES; the biggest change would be to not use the ACES ODTs, since as a photographer I want to be more flexible in my Render Transform (so not using the RRT all the time).

For communicating the LUT generated by the processors to downstream programs I would just use environment variables, as described in the OCIO documentation (see: http://opencolorio.org/userguide/looks.html#userguide-looks).
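A minimal sketch of that hand-off, assuming the config defines a look whose source path contains a context variable, per the pattern in the linked user guide (the paths and the SHOT value are placeholders):

```python
import os

# Select the active config; any OCIO-aware downstream app will pick it up.
os.environ["OCIO"] = "/path/to/config.ocio"
# A look defined with, e.g., src: "${SHOT}.cc" in the config would then
# resolve to this shot's exported CDL file.
os.environ["SHOT"] = "dsc_0123"
```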


[1] I keep typing ACES2065-4 for some reason, but that is the ACES EXR container that uses ACES2065-1.

1 Like

From here:

https://www.smpte.org/sites/default/files/section-files/HDR.pdf

With HDR PQ, there is no agreed upon diffuse white point level. Many are using 100-200 nits as the diffuse white point level, the old 90% reflectance point (100 IRE). The camera operator or colorist/editor must also know what reference monitor will be used for grading the content. For example, if a 1000 nit monitor is used for grading, with a diffuse white point of 100 nits, white is set at 51% for SMPTE ST 2084 (1K). If a 2000 nit monitor is used, diffuse white is set at 68%.

Which I'm assuming leaves room for the specular highlights and "brighter than diffuse white". But again, HDR displays are not something I know anything at all about. When I asked for a multiplier for the equation for the TRC for the profile for an HDR10 monitor, I'm guessing that the multiplier has to do with where diffuse white is set. But this is just a guess.

@Carmelo_DrRaw @Elle I read through lots of PDFs today. I didn't keep track of them, but here is one that seems to address much of it: https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2390-1-2016-PDF-E.pdf, which I only skimmed just now, sorry. However, the obsessive reading spree was on my small-screened and half-touch-broken phone, while busy doing something else. :joy_cat: I might have gotten things mixed up because of that and the fact that there is so much info out there, but here are some points that might be relevant. Again, I am speaking in non-technical, possibly vague terms. My purpose in threads like these is to brainstorm and / or provide sanity to a very complex subject. I will leave the technical efforts and battles to the rest of you. :innocent:

1. There are two standard transfer functions, called Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG). Each has its strengths and weaknesses. Briefly, PQ is an absolute display-referred signal; HLG is a relative scene-referred signal. The former needs metadata and isn't backwards compatible with SDR displays; the latter is. Depending on the rest of the specs, esp. for PQ, there is a preferable type of HDR display and surround. A common measure is luminance in nits.

Looking into this would probably answer @Carmelo_DrRaw's question. Many documents show what happens with various display and surround combinations. Pretty graphs and descriptions. Makes me want to root for one or the other as if it were a competition. :stuck_out_tongue: (I am leaning toward HLG :racing_car: :horse_racing: :soccer:).

2. Next we have @Elle's link and comments.

Stupid GitHub now won't let me search from its front page without logging in. MS's handiwork? :angry:

These commits and their comments show how variable the standards can be. There were PDFs discussing the choices made by various entities, workflows and devices. The discussion varies depending on the perspective of the document or slideshow publisher, but you kind of get a gist of what the common themes are among the infographics, tables and figures.

@Elle's particular linked document HDR.pdf gives examples in the form of waveforms, which is very helpful from our perspective. Photographers tend to use the histogram (JPG); videographers use waveforms (and other scopes) to quickly gauge where the DR, among other things, is. As you look at the images, to me at least, it is easy to understand why "there is no agreed upon diffuse white point level". It has to do with a lot of things, a few of which I will briefly list in the next paragraph.

Just as we need to make decisions when we look at the camera's histogram (generally generated from the preview JPG, not the raw!), the videographer has to look at the scopes to determine and decide on the DR and the distribution of tones, among other things. Choices need to be made (edit: and we need to consider leaving some data and perceptual headroom too). Hopefully consistent ones per batch or project. These decisions are based on a number of factors, including personal experience and tastes; client and product expectations; workflow; and ultimate output and viewing conditions. There is a lot to be said about point #2, but I have to rest after a tough day!

2 Likes

It's definitely the second link: http://www.streamingmedia.com/Downloads/NetflixP32020.pdf

Linear to ST 2084 (10000 nits):

y=\left(\frac{c_1 + c_2\,x^{m_1}}{1 + c_3\,x^{m_1}}\right)^{m_2}

Linear to ST 2084 (1000 nits):

y=\left(\frac{c_1 + c_2\,(x/10)^{m_1}}{1 + c_3\,(x/10)^{m_1}}\right)^{m_2}

Tested in VapourSynth (https://github.com/vapoursynth/vapoursynth), where

c=core.resize.Bicubic(clip=c, format=vs.RGBS, transfer_in_s="linear", transfer_s="st2084", nominal_luminance=1000)

is equivalent to this in reverse Polish notation:

c = core.std.Expr(c, expr=" 0.8359375 x 10 / 0.1593017578125 pow 18.8515625 * + 1 18.6875 x 10 / 0.1593017578125 pow * + / 78.84375 pow ",format=vs.RGBS)

y=\left(\frac{0.8359375 + 18.8515625\,(x/10)^{0.1593017578125}}{1 + 18.6875\,(x/10)^{0.1593017578125}}\right)^{78.84375}
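For what it's worth, a direct Python transcription of the 1000-nit form above, with a sanity check against the "diffuse white at 51%" figure quoted earlier from HDR.pdf:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
m1, m2 = 0.1593017578125, 78.84375
c1, c2, c3 = 0.8359375, 18.8515625, 18.6875

def linear_to_st2084(x, nominal_luminance=1000.0):
    # x is linear light with 1.0 = nominal_luminance nits; PQ itself is
    # normalized to 10000 nits, hence the rescale (the x/10 term above).
    y = (np.maximum(x, 0.0) * nominal_luminance / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# Diffuse white at 100 nits lands at ~51% of the PQ signal range.
print(linear_to_st2084(0.1))  # 0.1 * 1000 nits = 100 nits -> ~0.508
```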