HDR, ACES and the Digital Photographer 2.0

Normally I’d agree with you. But the long discussion between @gwgill and @aurelienpierre - in which @gwgill very patiently tries to point out where @aurelienpierre’s understanding of how ICC profile color management actually works is fundamentally flawed, with other people occasionally chiming in to help - is quite apart from the main topics being discussed.

For example, the posts by @dutch_wolf, @nosle, @Tobias, etc. starting with post 75 are about possibly setting up a test implementation of an actual ACES workflow using Blender, Krita, or Natron, plus a discussion of which parts of the ACES workflow (however it might be implemented) could be useful for photographers with various goals.

But this handful of totally relevant posts is interrupted by 15 very long posts that mostly or entirely address or consist of @aurelienpierre’s complaints about and misunderstandings of ICC.


I see over in one of the Natron threads something about 2D and 3D for Natron and Blender. I think maybe Natron is easier for complete newbies to start using, compared to Blender? This is just a guess! As the Natron people are on this forum, could they help with setting up whatever is needed to implement enough of an ACES workflow (edited - previously said “transform”) so that people could see what it’s like to use?

Blender would actually be a bit easier (after some setup, which could be stored in a .blend file), since its output is properly color-managed; Natron needs an extra node to pick up the OCIO config before the view node. Do note I am talking about compositing here (Blender’s renderer still uses Rec.709 primaries).

Going to see if I can do anything with a default ACES config later this evening.


Addendum/Edit

Going to ignore aurelien’s previous reply to me, since he seems to have completely misread what I was talking about: namely, a photography-centered alternative (or actually any alternative) to ACES, not anything related to ICC.

For those of us like me who might not know the terminology:

What is the difference between 2D and 3D?

What is rendering?

What is the difference between compositing and rendering? Is compositing 2D and rendering 3D?

Is/how is compositing in Natron/Blender different from using layer blend modes in GIMP?

Will try to answer these to the best of my understanding; do note I am not an expert at this, so take these answers with a pinch of salt :wink:

2D is two-dimensional: think of your typical image.
3D can mean a couple of different things, from a 3D scene created in Blender (or other 3D editing software) to a 3D LUT (used to do a color transform; it effectively tells how an RGB triplet gets converted to an R’G’B’ triplet - see the sketch just below). The first meaning of 3D is not very relevant here (the 3D workspace people were asking for in Natron is to make it easier to composite a render of a 3D scene with other elements).
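
To make the 3D LUT idea concrete, here is a minimal hand-rolled Python sketch of what a lookup does: a trilinear lookup over an identity lattice. Real implementations compute the same thing, just heavily optimized; the 17-point lattice size is only for illustration.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Map one RGB triplet (values in [0, 1]) through a 3D LUT.

    lut is an (N, N, N, 3) array: lut[r, g, b] holds the output triplet
    for that lattice point. This sketch interpolates trilinearly between
    the 8 lattice points surrounding the input.
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo  # fractional position inside the lattice cell

    out = np.zeros(3)
    # blend the 8 surrounding lattice points
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += w * lut[idx]
    return out

# identity LUT on a 17-point lattice: output == input
n = 17
grid = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], identity))  # ~[0.25, 0.5, 0.75]
```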

At its barest, rendering is creating an image from a description of a scene (rather than from pixel data); in this particular instance it means turning a 3D scene into a 2D image (the view from a virtual camera), or rendering out a composite to a final image.

Compositing is generally turning multiple 2D images into a single output (theoretically you could have 3D compositing of scenes, but that hasn’t really been done). A 3D scene first needs to be rendered out before it can be composited (although for complex operations it can be rendered out in layers).

Yes, although you do have the blend modes available; they are found in a mix node where 2 (or more!) images are “mixed” together. This is mostly useful to avoid duplicating any inputs.
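
As a rough illustration of what such a mix/merge node computes, here is a small Python sketch of the classic “over” merge plus a multiply blend with a mix factor - just the math, not Natron’s or GIMP’s actual code:

```python
import numpy as np

def over(fg, fg_alpha, bg):
    """Porter-Duff "over": merge a foreground onto a background.

    fg, bg: float arrays of shape (H, W, 3); fg_alpha: (H, W, 1) in [0, 1].
    Assumes fg is NOT premultiplied by its alpha.
    """
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

def multiply_blend(a, b, mix=1.0):
    """A "multiply" blend mode with a mix factor, like a mix node's slider."""
    return (1.0 - mix) * a + mix * (a * b)

# tiny 1x1 "images"
fg = np.array([[[1.0, 0.0, 0.0]]])   # red foreground
bg = np.array([[[0.0, 0.0, 1.0]]])   # blue background
alpha = np.array([[[0.5]]])
print(over(fg, alpha, bg))           # [[[0.5, 0.0, 0.5]]]
```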

Sample below (EDIT: this is a sample of node-based editing, NOT ACES or scene-referred)

Hope this makes sense!


I’ve done an HDR experiment from a raw file: [PlayRaw] Everglades.

I’ve exported the raw as a 16-bit TIFF with Rec.2020 primaries, using the Neutral profile in RawTherapee; the file was cropped.

Without an IDT or color chart, I’ve raised the exposure by eye so that middle gray is anchored at 0.18.
Scene-referred to sRGB for display purposes:

Converted to the ST.2084 EOTF, 1000 nits:
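
The middle-gray anchoring boils down to a single linear scale of the data; a minimal sketch, where the measured midtone value is a made-up stand-in for whatever you sample from the image:

```python
import numpy as np

# linear scene-referred data (stand-in for the 16-bit Rec.2020 TIFF)
img = np.array([[[0.05, 0.09, 0.20]]])

measured_gray = 0.09                          # midtone value sampled "by eye" (made-up number)
img_anchored = img * (0.18 / measured_gray)   # one exposure scale: the midtone now sits at 0.18
print(img_anchored)                           # [[[0.1, 0.18, 0.4]]]
```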


And finally, here’s the video file (a single image for 5 sec) that should work on HDR TVs :slightly_smiling_face:

https://filebin.net/tu3uu2ru016xt9zc
(just put the file on a USB key and connect it to an HDR TV)

P.S.
Is there an .icc profile for HDR10 (Rec.2020, ST.2084, 1000 nits)?

Just an off-topic comment … I fear that if I give you the MathML plugin for Discourse, this will get even worse, with all the formulas :stuck_out_tongue:

The primaries are Rec.2020 primaries. If you have an equation for the TRC, and you put that into a spreadsheet to generate however many points you want for the TRC, I can make you an ICC profile - it’s super easy to do using iccToXml and iccFromXml.

If you have a link to the documentation that provides the equation that you want for the TRC I might be able to set up the spreadsheet myself.

I’m guessing you want at least 4096 points for the TRC. But more are possible.
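
For instance, here’s a throwaway Python sketch of the spreadsheet step: tabulate whatever TRC equation you settle on at 4096 points, one value per line, ready to paste into the profile XML that iccFromXml consumes. The 2.4 power curve is only a placeholder equation, not the PQ curve:

```python
# tabulate a TRC equation at N points for an ICC profile curve table
N = 4096

def trc(x):
    # placeholder: swap in the real transfer-function equation here
    return x ** 2.4

with open("trc_points.txt", "w") as f:
    for i in range(N):
        x = i / (N - 1)          # input runs from 0.0 to 1.0 inclusive
        f.write(f"{trc(x):.8f}\n")
```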

Edit: From the point of view of ICC profiles, we speak of TRCs. In the realm of building and calibrating electronic display devices to various standards, the term “EOTF” is used, such as the EOTF specified in BT.1886:

https://imagingscience.com/2017/06/08/goodbye-gamma-hello-e-o-t-f/
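
For reference, the BT.1886 EOTF itself is simple enough to sketch in a few lines of Python; the 100 and 0.1 cd/m² white and black levels here are just example settings, not something the spec fixes:

```python
# the BT.1886 EOTF: screen luminance L (cd/m^2) as a function of the
# video signal V in [0, 1], for a display with white level Lw and
# black level Lb
def bt1886_eotf(V, Lw=100.0, Lb=0.1, gamma=2.4):
    a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
    b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma

print(bt1886_eotf(1.0))  # ~100 cd/m^2 at full signal
print(bt1886_eotf(0.0))  # ~0.1 cd/m^2 at zero signal
```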

Also see:

“EOTF” specifies the desired calibration state of the display device. How well an actual display device conforms to the intended calibration specification is something that must be measured and perhaps periodically corrected by recalibration. In studio environments I’m assuming there is staff that ensures that the display devices all actually do conform to their nominal specifications.

This term “EOTF” is not directly interchangeable with the term “ICC profile TRC”. The first is a specification that includes a transfer function that a given display should conform to. The second describes the actually measured transfer function for use in an ICC profile color-managed application. If one assumes a given display conforms to the spec, then one can make an ICC profile that describes the display. This might be splitting hairs, but I think it’s an important distinction.

In an OCIO workflow, there are chains of transforms, the last one of which is a LUT that maps the channel values of the image to the corresponding RGB signals that are sent to the display. Making this last LUT requires knowing the actual state of calibration of the display and also requires knowing the RGB color space of the image that’s being sent to the screen.
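
A minimal sketch of such a chain, assuming OCIO v2’s Python bindings, an ACES config active via the $OCIO environment variable, and that config’s color space names (the names below are the ones used in the ACES 1.0.3 config):

```python
import PyOpenColorIO as OCIO

# load whatever config the $OCIO environment variable points at
config = OCIO.GetCurrentConfig()

# build the chain of transforms from the image's space to a display space
proc = config.getProcessor("ACES - ACES2065-1", "Output - sRGB")
cpu = proc.getDefaultCPUProcessor()

# middle gray pushed through the full chain
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```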

In an ICC profile workflow, the image is in a user-chosen ICC RGB working space and the user has supplied a monitor profile. ICC profile color management takes the image RGB channel values to the PCS (“profile connection space”, either XYZ or LAB), and then from PCS to the monitor profile color space, and the resulting RGB intensities are sent to the display.

As an aside, in GIMP and I think in darktable, when the source and destination profiles are both RGB matrix profiles, the conversion from RGB(image) to XYZ to RGB(monitor) is just done using matrix math, and the resulting RGB intensities are sent to the screen without actually invoking LCMS.
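
A sketch of that matrix-only path, using the standard D65 matrices for linear Rec.2020 and linear sRGB (the display’s TRC encode step is left out for brevity):

```python
import numpy as np

# standard D65 matrices: linear Rec.2020 -> XYZ, and XYZ -> linear sRGB
REC2020_TO_XYZ = np.array([[0.6369580, 0.1446169, 0.1688810],
                           [0.2627002, 0.6779981, 0.0593017],
                           [0.0000000, 0.0280727, 1.0609851]])
XYZ_TO_SRGB = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                        [-0.9692660,  1.8760108,  0.0415560],
                        [ 0.0556434, -0.2040259,  1.0572252]])

# one combined matrix does image RGB -> XYZ (the PCS) -> monitor RGB
M = XYZ_TO_SRGB @ REC2020_TO_XYZ

rgb_2020 = np.array([0.18, 0.18, 0.18])   # neutral gray stays neutral
print(M @ rgb_2020)                       # ~[0.18, 0.18, 0.18]
```

Folding the two matrices into one is exactly the shortcut that lets such conversions skip LCMS entirely.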

@gwgill, anyone who knows, please correct any errors in what I just posted!


I’m not sure if the formula is the one at page 13 of https://www.smpte.org/sites/default/files/2014-05-06-EOTF-Miller-1-2-handout.pdf
or the one at page 2 of
http://www.streamingmedia.com/Downloads/NetflixP32020.pdf
But first you need to divide by 10 (10000 nits / 1000 nits).
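
Putting the ST.2084 constants from those documents into a small Python sketch of the encode direction (the inverse EOTF), including the 10000-nit normalization that the divide-by-10 remark refers to:

```python
# SMPTE ST.2084 (PQ) constants, as given in the spec
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    Y = nits / 10000.0           # PQ is defined over 0-10000 nits, hence the division
    Ym = Y ** m1
    return ((c1 + c2 * Ym) / (1 + c3 * Ym)) ** m2

# scene value 1.0 graded to a 1000-nit peak is 1000/10000 = 0.1 on the
# normalized axis - that's the "divide by 10"
print(pq_encode(1000.0))         # ~0.75 (roughly code 769 in 10-bit)
```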

I was going to refrain from further posting until after the topics were sorted out, but I ran across this, which I think is pertinent to all aspects of this thread:

(License: CC BY-NC 2.5)


@ggbutcher To me, it is more about what I hinted at in another thread: people doing stuff with other people but not talking about it in their reports. I have linked to at least one doc with blanket or vague statements like “we talked about it”. That, and industry isn’t willing to share; e.g., Adobe and others are involved. They would share if you gave them money, or used their patents and paid them for that. I guess that is part of their job and business. Or they are just people who are above, or too busy for, engaging with us common folk.


actually, I thought you were going to post this… :stuck_out_tongue:


Played around with Natron a bit and with the rawtoaces utility. The biggest issue I ran into is that since rawtoaces is a command-line tool that spits out EXR files, it is hard to check whether the inputs are actually correct/useful. Since it doesn’t really throw any data away it is possible to recover from this, but it does make the whole process quite a bit harder.

Anyway, some results:

Scene graph of above

The loop in the scene graph is used to isolate the eyes and give them a bit more emphasis

(note: from the OCIO CDL node you can export the CDL, which can then be used as a look in other OCIO-compliant software, if configured correctly)

Second example

With the scene graph

This time I isolated the sky.

All needed files to reproduce: https://drive.google.com/open?id=1dHmcxXVTIPdvEjwUZjHAeG8oMiNJ6kCQ

To reproduce, install Natron, download the ACES 1.0.3 OCIO config, and configure Natron to use that config; for the DNG-to-EXR conversion, download the rawtoaces tool.
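
A sketch of that setup from Python, with assumed paths; rawtoaces’ exact flags and output naming vary by version, so check rawtoaces --help:

```python
import os
import subprocess

# point OCIO-aware apps at the ACES 1.0.3 config (Natron can also be
# set to a custom config in its preferences); the path is an assumption
os.environ["OCIO"] = "/path/to/OpenColorIO-Configs/aces_1.0.3/config.ocio"

# DNG -> ACES2065-1 EXR; rawtoaces writes the EXR alongside the input
# (file naming depends on the rawtoaces version)
subprocess.run(["rawtoaces", "IMG_0001.dng"], check=True)
```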

Above is licensed CC BY-NC 2.5


Someday someone has to write a tutorial on this or a similar workflow. :wink: Maybe it already exists. Just lazy or ignorant. :blush:

If I have this right, you:

  1. img.Raw → rawtoaces → img.EXR
  2. img.EXR → Natron with ACES OCIO config, graph to apply OCIO tools → img.jpg

Yes, that is right. Of course you don’t have to stick to OCIO nodes; as long as a node keeps the scene-referred data intact it should work (so, for example, no invert on color data; alpha maybe, depending on whether it is premultiplied; masks are OK to invert).

Very interesting exercise!

@dutch_wolf @Elle @gwgill, and others, I have a very basic question concerning how to prepare images for HDR displays. In our “normal” workflow we adapt the output such that “0” is mapped to black and “1” is mapped to the display white point, with middle gray mapped to 0.18 in linear encoding. The display has a maximum brightness of 100 nits or so.

What happens when the display is capable of generating a brightness level 10x bigger? I suppose that the black level does not increase by 10x as well, otherwise it would not be HDR but simply brighter, right?

To which brightness levels should one map the “0.18” and “1” values in this case?

Sorry, I don’t have the answer to that; you should probably look at the encoding specs (maybe take a look at the HDR10 standard?).


Anyway, I think this is how I want a photography workflow to look:

Undecided about the exact color spaces to use, although either ACES2065-1 or ACEScg should be workable.

This hypothetical RAW editor would also be usable for an ACES workflow: just disable the user-adjustable tone-map operator and load an ACES OCIO config. That would look like this:


(EDIT: this assumes the user is working on providing photos for use as mattes)


I faintly recall someone (@age?) briefly talking about this or the like in one of the many threads; maybe:

Rendering also has a specific meaning in color management, in relation to color appearance/viewing-environment adjustment, and/or adjusting for device gamut limitations.
A (typically input-referred) image is rendered to an output device space.
