HDR, ACES and the Digital Photographer 2.0

This is probably the original “dissection” of the problem when editing in ACES:

The source of the problem is that chromaticities matter when using operations that involve multiplying by a color: different chromaticities produce different results, as I had previously discussed in my explanation of the many reasons why unbounded linear sRGB is not a good “one size fits all” editing color space, which Mansencal very kindly linked to in his article:

The specific problems for sRGB and ACES are somewhat different, as ACES can encode all real colors without using negative channel values. Quoting Mansencal:

The slopes are obviously very different now, ACES RGB is steeper than sRGB. This is the result of the respective colourspaces basis vectors not being orthogonal. In this case it leads to colours approaching the spectral locus boundaries in ACES RGB faster than sRGB. It is the reason why in some instances secondary bounces quickly lead to very saturated colours.

Unlike in ACES, in small color spaces like sRGB you very quickly reach the edge of the gamut, which causes unpleasant colors when doing things like modifying the gamma of a single channel (gamma operations being a type of multiply). Not to mention that if you try to avoid the gamut limitations of sRGB by using unbounded sRGB and you multiply by a color, multiplying negative RGB channel values produces nonsense:

And again, multiplying negative RGB channel values produces nonsense in any RGB color space, not just sRGB.
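To make that concrete, here is a small Python sketch (with made-up illustrative values, not measured colors) of what goes wrong: two out-of-gamut colors with negative red channels multiply to a positive red value, and a per-channel gamma operation is simply undefined for negative channel values.

```python
import math

# A saturated real-world cyan, encoded in unbounded linear sRGB, lands
# outside the sRGB gamut: its red channel goes negative.
# (Values are illustrative, not measured.)
cyan = (-0.10, 0.80, 0.90)
tint = (-0.05, 0.70, 0.85)  # another out-of-gamut cyan-ish color

# Multiplying the two colors channel by channel: the two negative red
# channels multiply to a POSITIVE red value -- "light" appears out of
# nowhere, which is physically meaningless.
product = tuple(a * b for a, b in zip(cyan, tint))
print(product[0] > 0)  # positive red out of two negatives: nonsense

# A per-channel gamma adjustment is a kind of multiply (in log space),
# and it is simply undefined for negative channel values:
try:
    math.pow(cyan[0], 1 / 2.2)
except ValueError as e:
    print("gamma on a negative channel:", e)
```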

This can be accommodated by a suitable Curves interface for working on linear RGB, as @Carmelo_DrRaw has done for PhotoFlow and as I think Lightroom also does for their “MelissaRGB” or whatever they are calling linear ProPhoto these days.

2 Likes

Oh, now I see what you are talking about. You want to make it possible for users to take advantage of tone-mapping as done by hopefully some of the most advanced and “sensitive to aesthetics” tone mapping algorithms available today.

This is somewhat like using Picture Styles, which more or less emulate various film stocks, to get quickly from scene-referred to a nice output rendering, yes?

The question is whether “ACES precoded transforms” allow users to more easily and quickly get the image closer to where they might want it to be, and especially when dealing with actual high dynamic range images, as opposed to a low-dynamic-range but still scene-referred image straight from the raw processor. Yes? Did I understand properly this time :slight_smile: ?

Right on target!!! :+1:

And also, to properly deal with new display technologies that are just around the corner, and prepare images for print, SDR, or HDR displays from the same “master edit” in a consistent way, without having to scratch their heads too much.

Again if I understand correctly, ACES provides a set of recommended tone mappings, one for each type of display standard. And they all require input data in the ACES colorspace.

This is not a must, nor something that should be hard-coded as the only possible approach, but it still might be the optimal approach for the “average user”…

I’m wondering how this can be captured in a JPEG output, for example. Right now, the only metadata container in JPEG that I know of that captures sufficient information to characterize the image colorspace is the ICC marker…

I glanced at the text of one of these “patents”. It looks like they are patenting image editing using ICC profile transforms. Patenting math transforms. I’d advise developers not to look at these patents. Clean-room implement whatever seems reasonable for the various tasks at hand.

1 Like

I have a patent. I have a hard time translating what the lawyer wrote into what I demonstrated… :smile:

To find the unique thing, you really have to read through the whole thing. The descriptions are kind of a “history of oil” treatise, they start with the prehistoric days and describe how things were done before the Illuminating Revelation, in order to characterize how it’s different from the status quo.

I understand not wanting to lay eyes on it, but that doesn’t keep the owner from coming after you if they feel infringed. I was chased down in such a manner, because I dared to write a TCP server for model railroad control. Scoundrel that I am…

I’ll take a look at them and maybe synopsize what I believe to be the unique thing. They’ll have a hard time with anything that was captured in the original ICC specifications, if the timing of their release proves “prior art”.

But “the unique thing” is exactly what we don’t want to know, isn’t it? And maybe it’s just a design patent.

“The Clean Mind” is a copyright infringement protection. Patents are made public to inform others about how to not infringe on the claim, among other things. Not reading it doesn’t exculpate you…

Here’s the one I’ve dissected so far: “Rendering and Un-Rendering Using Profile Replacement”, 2008-08-06. Essentially, applying a gamma transform to an image by replacing its ‘correct’ ICC color profile with another, I think with similar primaries but a different TRC. I think that’s easy enough to avoid…

Oh please stop summarizing and posting other people’s patents - that stuff has no end.

Of course “not reading” doesn’t exculpate you from the possibility of being sued. That’s not my point.

If a coder goes and locates every possible patent that might be somewhat the same as what the coder wants to do, there would be no end at all; the coder would be stymied from the get-go.

1 Like

I am afraid that the “assign ICC profile” function in GIMP and PhotoFlow basically corresponds to this definition. That’s total nonsense…

rawproc has an ‘assign’ function as well. I use it to associate my camera profile with recently-opened raw images. But I Promise Not To Use It for Weird Gamma Transformations (said for the benefit of anyone listening… :smiley: )

My last post on anything with the p-word, @Elle

Back to regular programming…

So, I’ve become an ardent fan of shipping image data with metadata describing its encoded color and tone characteristics. For most of the containers we use, JPEG, TIFF, PNG, the ICC profile is the primary format. There are other places to capture pieces of it scattered in EXIF and MakerNote tags, but the ICC profile has all that’s needed to “know” the image data with respect to color and tone, and that container has a proper place in every one of the big-three formats.
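As a concrete illustration of that last point, here is a minimal Python sketch of how an ICC profile is carried in JPEG: the profile is split into chunks stored in APP2 segments whose payload starts with the marker string ICC_PROFILE. This is a simplified demonstration reader, not production code (it ignores chunk ordering, restart markers, and malformed files):

```python
import struct

def find_icc_chunks(jpeg_bytes):
    """Return the ICC profile chunks found in a JPEG's APP2 segments."""
    assert jpeg_bytes[:2] == b"\xff\xd8"  # SOI marker
    chunks = []
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:        # SOS: entropy-coded data follows, stop
            break
        if marker in (0xD8, 0xD9):  # SOI/EOI carry no length field
            i += 2
            continue
        # Segment length is big-endian and includes the two length bytes.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xE2 and payload.startswith(b"ICC_PROFILE\x00"):
            # bytes 12 and 13 are the chunk sequence number and chunk count
            chunks.append(payload[14:])
        i += 2 + length
    return chunks
```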

I perused the OpenEXR format specs yesterday, and it provides tags for storing the individual components, but an application has to extract and convert them to something its color library will use (e.g., LittleCMS…). @Elle has a good discussion of all this at: Embedded color space information

I don’t think it’s enough to stick “sRGB” in an EXIF DCF tag. Too many distinct notions of that specification floating around. The primaries, white point, and energy relationship need to follow the data around in a concise way most applications can use.

So, how would ACES help us there?

Me [innocently]: the p-word?

Linguistically Yours,
Claes in Lund, Sweden

1 Like

This got a lot more replies since I last posted, so let’s see if I can at least make a start on how I think a photographic scene-referred workflow would look. Since I do think the ACES standard[1] is probably a good reference to start from, let’s first look at what ACES is and then try to compare its workflow to a photographic workflow.

The ACES standard is in its bare essence a workflow specification for managing color in a cinema production pipeline. To do this it introduces a set of color spaces:

  • ACES2065-1 - Main linear color space using AP0 primaries
  • ACEScg - Linear color space using AP1 primaries (used for compositing and digital renders)
  • ACEScc - Log based color space using AP1 primaries (used for color grading)
  • ACEScct - Same as ACEScc but includes a toe in the transfer curve to make it more similar to older log-based formats

Added to this is a set of well-defined transfer functions for both input[2] and output, although for output there is an extra wrinkle: instead of going directly through an output transform, the image might first go through a Look transform and then through what is called a “Reference Rendering Transform” (or RRT for short). In practice the RRT is combined directly with the output transform, so it isn’t seen on its own.

On top of this, ACES provides a reference implementation in an OpenColorIO (OCIO) configuration.
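For the curious, the practical difference between these gamuts comes straight from the published chromaticities. The sketch below (plain Python, no color library) derives the RGB→XYZ matrix of a color space from its primary and white-point chromaticities, using the standard construction of scaling the primary columns so that RGB (1,1,1) lands on the white point; the AP0 and sRGB chromaticities are the published values:

```python
def xy_to_XYZ(x, y):
    """Chromaticity (x, y) to XYZ with Y normalized to 1."""
    return (x / y, 1.0, (1.0 - x - y) / y)

def solve3(m, v):
    """Solve the 3x3 linear system m @ s = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    def repl(col):
        return [[v[r] if c == col else m[r][c] for c in range(3)] for r in range(3)]
    return [det(repl(c)) / d for c in range(3)]

def rgb_to_xyz_matrix(r, g, b, white):
    """Columns are the primaries' XYZs, scaled so RGB (1,1,1) maps to white."""
    cols = [xy_to_XYZ(*r), xy_to_XYZ(*g), xy_to_XYZ(*b)]
    m = [[cols[c][row] for c in range(3)] for row in range(3)]
    s = solve3(m, xy_to_XYZ(*white))
    return [[m[row][c] * s[c] for c in range(3)] for row in range(3)]

# sRGB: Rec.709 primaries, D65 white point
srgb = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                         (0.3127, 0.3290))
# ACES2065-1: AP0 primaries, ~D60 white point (note the imaginary blue
# primary with negative y -- this is how AP0 encloses the whole locus)
ap0 = rgb_to_xyz_matrix((0.7347, 0.2653), (0.0, 1.0), (0.0001, -0.0770),
                        (0.32168, 0.33767))
```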

Chapter 1: Ingress
At this stage there is already a huge difference between photography and cinema, not least because most digital cinema cameras don’t use RAW files but high-bit-depth video files with a log-encoded color space[3]. Also, due to the almost ubiquitous use of stage lighting (even for outdoor shoots) with well-defined properties, a single common transform[4] can be used. It is at this stage as well that an initial look is decided upon, which is then immediately used for any monitoring and screenings. This look is later communicated to the editing/VFX/DI departments so that everyone agrees on what they are seeing.

Contrast this with photography, where with a few exceptions (studio and product photography come to mind) there is a lot less control over the lighting conditions. Add to that that the details of RAW files are often kept secret[5], and I know of no cameras that offer anything but 8-bit JPEG/TIFF besides RAW. So before a photograph can be brought into a scene-referred color space it will first need to be demosaiced, color corrected, and transformed into the linear scene-referred space (not necessarily in that order). Hopefully this can be mostly automated.
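A minimal runnable sketch of those ingress steps, with demosaicing omitted and with made-up white-balance multipliers and a made-up camera matrix (real values come from the camera metadata and its profiling, not from anything here):

```python
def apply_white_balance(pixel, wb):
    """Per-channel multiply: the color-correction step."""
    return tuple(c * m for c, m in zip(pixel, wb))

def apply_matrix(pixel, m):
    """3x3 matrix multiply: camera RGB -> scene-referred linear RGB."""
    return tuple(sum(m[r][c] * pixel[c] for c in range(3)) for r in range(3))

# Hypothetical camera -> scene-linear matrix (illustrative numbers only).
CAMERA_TO_SCENE = [
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
]
# Made-up daylight-ish raw white-balance multipliers.
WB_MULTIPLIERS = (2.0, 1.0, 1.5)

def ingest(pixel):
    """Linear camera RGB -> scene-referred linear RGB (demosaic omitted)."""
    return apply_matrix(apply_white_balance(pixel, WB_MULTIPLIERS),
                        CAMERA_TO_SCENE)
```

In a real pipeline each of these stands in for a much bigger step, but the order and the all-linear math are the point of the sketch.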

Chapter 2: Editing
In the cinema workflow the footage is sent out to the different post-processing departments, which use the look transform from the previous step in combination with the correct output transform to edit (select which footage to use in what order), add visual effects (compositing), and finally apply the final color grade (the final look of the film).

In scene-referred photography this wouldn’t be too different from the cinema workflow; of course, selection will look a bit different, so in practice only the color-grade aspect would remain (unless doing a photo composite, of course). In photography it will generally be at this stage that a look is developed.

Chapter 3: Egress
In cinema, at this stage the output transform is applied for each of the outputs needed; sometimes some extra grading is added here to adjust for output-specific quirks.

Except for the somewhat different output formats (film will hardly be printed on paper, for example) there is not much of a difference here for photography either. The only problem might be that most printers only provide ICC profiles and not LUTs compatible with an OCIO configuration[6].

Chapter 4: What does this mean aka Conclusion
The biggest issue with scene-referred editing in photography seems to me to be what I would like to call the “Ingress problem”: a photographer will need a bit more control over the input transform than is currently given in a standard ACES workflow, even if it is mostly automated. Another thing that we as photographers might want to tinker with a bit more is the RRT (which is effectively the tone mapping operator).

In effect, for a photography workflow I would propose combining the RRT with the look transform instead of with the output transform. Of course such a workflow would need to include a default look, which could be based on the ACES RRT. In this case global adjustments would be done by changing this look transform to suit our needs on a per-photograph basis; local edits will of course also be needed, but those would be done by adjusting values directly in the scene-referred color space.
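A toy sketch of that ordering, scene-linear → look → tone map → display encoding, in plain Python. The Reinhard curve and the 1/2.2 gamma here are deliberately simple stand-ins for the real RRT and the real sRGB transfer curve, which are both considerably more elaborate:

```python
def look(pixel, exposure=1.0):
    """Global grade applied in scene-linear space (here: just exposure)."""
    return tuple(c * exposure for c in pixel)

def tone_map(pixel):
    """Reinhard curve as a stand-in for the RRT's filmic S-curve."""
    return tuple(c / (1.0 + c) for c in pixel)

def encode_display(pixel):
    """Simple 1/2.2 gamma as an approximation of the sRGB transfer curve."""
    return tuple(max(c, 0.0) ** (1.0 / 2.2) for c in pixel)

def render(pixel, exposure=1.0):
    """Scene-referred pixel -> look -> tone map -> display encoding."""
    return encode_display(tone_map(look(pixel, exposure)))
```

The point of the structure is that the look edits live before the tone map, so retargeting the same master edit to a different display only means swapping the last two stages.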


[1] Surprisingly, the last S in ACES doesn’t stand for Standard but for System
[2] Currently, besides sRGB, mostly defined for the log-based formats many professional recorders/cameras work in
[3] This is effectively a sort of intermediate form between RAW and something like JPEG output: enough color and dynamic range data that further editing is possible, without all that pesky demosaicing
[4] Although since each manufacturer has their own log-based color space, you would need one transform per manufacturer in practice
[5] With the sole exception of cameras that can shoot DNG (like my own Pentax K-1)
[6] Although many modern printers want standard RGB files anyway and only provide the ICC file for soft proofing

3 Likes

Patents! Nothing to do with our private conversation :smile:

There is an openexr chromaticities tag. If it’s empty, assume Rec.709, and if that doesn’t work you are on your own.

ICC profile software needs to use the chromaticities tag information to make an appropriate ICC profile to assign, and also needs to embed the proper chromaticities information on export. GIMP code is in the file “gimp/plug-ins/file-exr/openexr-wrapper.cc”. I’m sure PhotoFlow has similar code, as does darktable.
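The fallback logic is simple enough to sketch in a few lines of Python; the header is modeled as a plain dict here rather than a real OpenEXR header object:

```python
# Rec.709 primaries and D65 white point: the assumed chromaticities for
# an EXR file whose header carries no `chromaticities` attribute.
REC709_CHROMATICITIES = {
    "red":   (0.6400, 0.3300),
    "green": (0.3000, 0.6000),
    "blue":  (0.1500, 0.0600),
    "white": (0.3127, 0.3290),
}

def chromaticities_of(header):
    """Return the chromaticities to build an ICC profile from,
    falling back to Rec.709 when the attribute is absent."""
    return header.get("chromaticities", REC709_CHROMATICITIES)
```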

Also peruse this thread on this page:

[Openexr-devel] OpenEXR files with nonlinearly encoded RGB
http://lists.nongnu.org/archive/html/openexr-devel/2015-02/msg00010.html

which here and there in the thread touches on why openexr doesn’t support ICC profiles and what workarounds people have devised - it’s an interesting read.

The typical ACES workflow of course uses openexr. But openexr is used quite apart from ACES. They aren’t - to use the metaphor I used before - “married”. So ACES isn’t going to fix the problem of carrying along ICC profile information, unless someone has added some sort of metadata or sidecar files to do this, though I’ve never heard of any such thing.

Really nice overview!

As far as coding goes, the initial transforms in the ACES workflow are in the ctl folder, with a nice README for “look” transforms:

and the code for the rrt is here:

I have no clue how to even begin thinking about how this sort of code might be implemented for a photographic workflow using ICC profile color management.

1 Like

Currently I think this would be easier to implement using OCIO, partly since it was made to do this kind of stuff and partly because, as @gwgill has pointed out, most ICC-based software currently doesn’t implement the extensions needed (even though the standard does define them, if I read his post correctly).

I might see if I can cobble something together over the weekend.

I do fully believe something like this is theoretically possible in ICC, just as it is quite possible to have a display-referred configuration for OCIO (not that you would want that, though)

2 Likes

I was thinking the same thing and wondering what software could be used just to run through some sample scene-referred interpolated raw files. I’m very much looking forward to what you might come up with.