HDR, ACES and the Digital Photographer 2.0

+1 to that.

In my early explorations of scene-referred editing I worked from a false assumption: that LittleCMS, given a profile transform to a destination with a “linear gamma” TRC, would leave the tonal data alone and just do the gamut translation. Yeow, that was a bad assumption; thanks @Elle for setting me straight. If the source and destination profiles are both gamma=1.0 the result just looks like an identity, but the TRC gonkulator still did the math. I still believe that gamma=1.0 working space profiles preserve the radiometric relationship, but I probably need to dig into the LittleCMS code to make sure of that.
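
A minimal numeric sketch (plain numpy, not LittleCMS code) of why that round trip looks like an identity even though the TRC math still runs:

```python
import numpy as np

def trc_decode(v, gamma):
    """Linearize encoded values through a pure power-law TRC."""
    return np.power(v, gamma)

def trc_encode(v, gamma):
    """Encode linear values through the inverse power-law TRC."""
    return np.power(v, 1.0 / gamma)

rgb = np.array([0.001, 0.18, 0.5, 1.0])

# Source gamma=1.0 -> linear -> destination gamma=1.0: both power
# functions really are evaluated, but x**1.0 == x, so the radiometric
# (linear) relationship comes through untouched.
out = trc_encode(trc_decode(rgb, gamma=1.0), gamma=1.0)
print(np.allclose(out, rgb))  # True
```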

I believe it is very important to understand how ICC profiles are specifically used when we discuss this. Most of the image file formats we use depend primarily on ICC profiles to describe the color and tone space of the data they contain, and I know of no other way to specify their calibration to displays. After extensive use of @Elle’s g10 profiles I still believe one can maintain a processing pipeline that respects the radiometric relationship until the display transform, but that is based on my “black box” observation of behavior, and I now know how badly that can go…

As far as I am aware this is not, strictly speaking, necessary to be considered scene-referred, even though it is commonly done in the film industry and recommended by the ACES standard. (Matching everything like that makes compositing a lot easier.)

ACES is about coordinating the work of many and is currently the best-known and most widely accepted workflow for working with scene-referred imagery. (As a counter-example: although Blender can be integrated into an ACES workflow, it currently uses its own by default.)

Scene-referred describes any color space that isn’t bounded by what color science calls a display, but only by the primaries used to describe it.

It should be possible to define a scene-referred workflow that is better suited to the photographic world; ACES may or may not be a good starting point for developing it, though.

So now that we know ACES is a workflow for working with scene-referred data, it is time to see if we can develop a workflow for photography. I would like to note that such a workflow is most useful for making photo-composites; if you just want to edit a single photo and prepare it for print (or another LDR output format), then using some form of tonemapping and adjusting the LDR image (as we have traditionally done in photography) is probably the quicker and more useful way of working (especially if you are on your own).

Yes, this is important! :slight_smile:

@Carmelo_DrRaw and other gurus

As I am trying to grasp some information from your posts, I stumbled upon the question of camera ICC and DCP profiles.
One source of information about this topic is dcamprof from Anders Torger.
Quoting him: using a camera profile, you convert camera raw data into a colorimetric “scene-referred” representation.
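
As a rough sketch of what the matrix part of such a profile does (the matrix below is made up for illustration; real coefficients come out of profiling software such as dcamprof):

```python
import numpy as np

# Hypothetical 3x3 matrix from white-balanced linear camera RGB to
# CIE XYZ (D50); real coefficients come from profiling a target shot.
CAM_TO_XYZ = np.array([
    [0.70, 0.23, 0.03],
    [0.28, 0.70, 0.02],
    [0.01, 0.10, 0.75],
])

def camera_to_xyz(rgb):
    """Matrix part of a camera input profile: linear camera RGB
    to a colorimetric, scene-referred XYZ representation."""
    return CAM_TO_XYZ @ np.asarray(rgb, dtype=float)

print(camera_to_xyz([0.18, 0.18, 0.18]))
```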

His paragraph about tone curves and tone reproduction operators also seems interesting, though it goes over my head.
I think that can close the controversy about ICC profiles.

hi,

finding out whether/to what extent this is actually the case is what interests me most in this context. that’s why I’ve started playing with log encoding, CDL, and working with linear rgb data as much as possible. I have implemented some basic support for this in RT, and so far I must say that I like the “feeling” of it (for lack of a better word). there’s not a night-and-day difference wrt the usual way, but I seem to be struggling less to obtain a result I like, especially with more difficult pictures (eg portraits in artificial or harsh lighting). but I need more time to determine if this is actually just a placebo effect :slight_smile:
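
For reference, the CDL mentioned above is the ASC CDL, whose per-channel transfer is simple enough to sketch. This follows the standard slope/offset/power/saturation form; clamping negatives to zero before the power step is a simplification to keep the math real-valued:

```python
import numpy as np

def asc_cdl(rgb, slope=1.0, offset=0.0, power=1.0, sat=1.0):
    """ASC CDL: per channel out = (in*slope + offset)**power,
    then a saturation blend against Rec.709 luma."""
    rgb = np.asarray(rgb, dtype=float)
    out = np.maximum(rgb * slope + offset, 0.0) ** power  # clamp keeps power real-valued
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + sat * (out - luma[..., None])

# Lift shadows slightly and desaturate a touch, on linear data.
print(asc_cdl([[0.02, 0.04, 0.08]], slope=1.1, offset=0.01, sat=0.9))
```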

You (and others of course) can add an “at least for now” to the end of that paragraph :wink:

I do think that working in a linear space that doesn’t throw away the high dynamic range of the camera is the way forward (so in effect the working space will be scene-referred), but depending on the final output[1] needed, the exact way of working will differ slightly. Of course these uses are similar enough that, with the right design, a single application should be able to cover several of them.

I might make a post (or maybe even a topic) later about how I think a photographic scene-referred workflow should look.


[1] be that print, a matte for use in film production, a jpeg for viewing online, or source material for compositing

Yes, I’m feeling just that about moving white balance into my camera profile. I think I have measured a benefit for daylight scenes, but what this implies is having to take a colorchecker shot at each session where I want to do this, so I can make a custom profile for the scene. Beeeeecause… anything other than daylight still requires some white balance correction.
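
In code terms, “moving the white balance into the profile” just means these per-channel gains get baked into the profile’s matrix; a sketch with made-up multipliers:

```python
import numpy as np

# Made-up daylight multipliers for some hypothetical camera; real values
# come from the raw metadata or from measuring a gray patch in the scene.
WB_DAYLIGHT = np.array([2.1, 1.0, 1.5])  # gains for R, G, B

def white_balance(raw_rgb, multipliers):
    """Per-channel gains on linear raw data. A profile with the white
    balance 'built in' effectively bakes these gains into its matrix."""
    return np.asarray(raw_rgb, dtype=float) * multipliers

print(white_balance([0.10, 0.21, 0.14], WB_DAYLIGHT))
```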

Intuitively, working the linear rgb appeals to me also, mainly because I’ve experienced the downsides of treating assertions like measurements in my other endeavors. Philosophically, any measurement is still an assertion, but if you go down that rabbit hole you’ll end up in the asylum sure enough:

In our endeavor, I consider what is delivered in the raw file to be measurement. Anything else is assertion. Not perfect, but it makes me happy…

Ok devil’s advocate mode:

All the advantages you present here sound awesome, really. So we fix all our favorite programs to use linear RGB everywhere and proper display profiles to display all the things.

Now someone asks you … “I really liked that shot of you, can you give it to me in a slightly different crop”

You are in a hurry and open your old raw file with the old pp3/xmp file to just change the crop.

I am sure we can all agree at this moment we want the output to be exactly like it was before, we just want to change the crop.

  1. How hard would it be to preserve the old rendering?
  2. Can we use the new pipeline and maybe just apply some math to the input params to get the old look on the new pipeline?
  3. If 2 is not possible … how long would you be willing to keep both pipelines working? I can understand the dev side saying “just fix your images with the new pipeline”, but I also know none of us would accept that answer if we were on the user side.

I’ve been wondering: I’ve read the primer, and it seems to me that the ACES colourspace is for archiving while ACEScg is a working space for digital editing/manipulation. Is that correct?

So am I correct in thinking that ACEScg would be the one to use in editing software?

I think that in this business there are facts, feelings, and what I’d call “subtle facts”.

First, the facts (already mentioned by @Elle):

  • image resizing is more accurate when done in linear RGB (see the sketch after this list)
  • colored gradients, and more generally operations that mix RGB channels, are better applied to linear data
  • blurs also work better with linear data
  • noise reduction is probably more effective when applied on perceptually uniform data (to be confirmed)
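
As a concrete illustration of the first point (the same reasoning applies to channel mixing and blurs), here is a minimal sketch comparing a naive average of two sRGB-encoded pixels with the linear-light average; the averaging stands in for what resizing and blurring do internally:

```python
import numpy as np

def srgb_to_linear(v):
    """Standard sRGB decode (EOTF)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """Standard sRGB encode (inverse EOTF)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

a, b = 0.1, 0.9  # two sRGB-encoded gray pixels

naive = (a + b) / 2  # mixing the encoded values directly
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)

print(naive, correct)  # ~0.50 vs ~0.66: the linear-light mix is lighter
```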

Now let’s see an example of a “subtle fact”. This is the result of applying a linear curve to an image encoded with the same TRC as the Lab L channel:

This is a linear curve applied to the same image encoded in linear RGB:


In this case, the curve adjustment is equivalent to a lift of exposure.

As one can see, in the linear case the shadows are lifted more than in the perceptual case and, to my taste, more naturally. The difference is rather subtle, but visible.

You will notice that the curve in the second screenshot looks strange: this is because the linear curve in linear encoding is “translated” back to perceptual encoding for visualisation. This shows why a “linear curve in linear space” lifts the shadows more than “a linear curve in perceptual space”.
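
A minimal numeric sketch of the mechanics (the 1.5× straight-line curve and the sample values are assumptions for illustration; the L* formulas are the standard CIE ones):

```python
import numpy as np

D = 6.0 / 29.0  # CIE constant for the L* encoding

def y_to_lstar(y):
    """Encode linear luminance to normalized CIE L* (the Lab L channel TRC)."""
    f = np.where(y > D**3, np.cbrt(y), y / (3 * D**2) + 4.0 / 29.0)
    return (116.0 * f - 16.0) / 100.0

def lstar_to_y(l):
    """Decode normalized L* back to linear luminance."""
    f = (100.0 * l + 16.0) / 116.0
    return np.where(f > D, f**3, 3 * D**2 * (f - 4.0 / 29.0))

y = np.array([0.01, 0.05, 0.18])  # deep shadow, shadow, midtone (linear)
slope = 1.5                       # the same straight-line curve in both cases

on_linear = slope * y                              # curve applied to linear data
on_perceptual = lstar_to_y(slope * y_to_lstar(y))  # curve applied to L* data

print(on_linear / y)      # uniform 1.5x at every level: a pure exposure lift
print(on_perceptual / y)  # gain grows with level: shadows gain relatively less
```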

Which one do you consider a better starting point for further editing?

The comparison above might be at least part of the explanation…

That is also my understanding…

I think this is one of the main drawbacks of introducing hard-coded decisions in the processing pipeline. In my opinion, the choice of a linear RGB working colorspace should be a “suggested default option”, but not enforced as the “only possible choice”.

Hi,
and thanks for the little demo!

Indeed, that might explain why I dislike messing with the “L” channel in Lab space – no matter what I do, the results always feel unnatural to my eye (in fact, sometimes I tweak contrast in Lab precisely to get this unnatural feeling, but that’s a different point…).

However, that’s not the point I wanted to make before. RawTherapee already operates on linear RGB data for the first half of the pipeline. This includes exposure adjustment, dynamic range compression, channel mixer, and tone curves (among others).

However, currently the tone curves are applied very early in the pipeline, before any colour manipulation. Also, the contrast and brightness sliders work in “display-referred mode”: internally, the sliders are translated into curves that only work in [0,1]. This is not the ACES / “scene-referred” way (in quotes because I’m never sure I’m using the term as intended), which would be to operate in linear space as much as possible, with data in [0,+oo) (i.e. no assumption that it is always in [0,1]) until the very end of the pipeline, where the [0,+oo) data is remapped to [0,1] for display (using e.g. a log encoding) and after that a “film-like” S-curve is applied. This is what I’m currently playing with: operate in linear [0,+oo) as much as possible, and move the tone curves to the end, preceded by a log encoding formula similar to the one that was posted here by Troy (taken from ACES). So far I seem to be happy with it, although in most cases the difference is minimal. But I seem to see a difference nonetheless, so I’ll keep experimenting :slight_smile:
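
For concreteness, here is a minimal sketch of that end-of-pipe idea (the stop range, the middle-gray anchor, and the smoothstep S-curve are all assumptions for illustration, not RT’s or ACES’s actual formulas):

```python
import numpy as np

def log_encode(x, mid_gray=0.18, stops_below=6.5, stops_above=6.5):
    """Map linear [0, +inf) scene data into [0, 1]: middle gray lands
    at 0.5, and the chosen stop range spans the unit interval."""
    stops = np.log2(np.maximum(x, 1e-6) / mid_gray)
    return np.clip((stops + stops_below) / (stops_below + stops_above), 0.0, 1.0)

def s_curve(v):
    """Placeholder film-like contrast curve (smoothstep); a real
    pipeline would use a proper filmic tone curve here."""
    return v * v * (3.0 - 2.0 * v)

x = np.array([0.0, 0.01, 0.18, 1.0, 4.0, 16.0])  # scene-linear values, some > 1
print(s_curve(log_encode(x)))
```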

This 2008 article has a nice and very readable overview of the benefits of starting image editing with a scene-referred rendition:

The Scene-referred Workflow - a proposal

From the ACEScg spec:

The set of valid ACEScg RGB values also includes members whose projection onto the CIE chromaticity diagram falls outside the region of the AP1 primaries. These ACEScg RGB values include those with one or more negative ACEScg color component values; ideally these values would be preserved through any compositing operations done in ACEScg space, but it is recognized that keeping negative values is not always practical, in which case it will be acceptable to replace negative values with zero.

Values well above 1.0 are expected and should not be clamped except as part of the color correction needed to produce a desired artistic intent.
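
In code terms, that guidance amounts to something like the following (a sketch of the stated policy, not code from any ACES reference implementation):

```python
import numpy as np

def sanitize_acescg(rgb, drop_negatives=True):
    """Per the ACEScg guidance quoted above: negative components may be
    zeroed when keeping them is impractical, but values above 1.0 must
    be left alone unless clipping is an artistic choice."""
    rgb = np.asarray(rgb, dtype=float)
    return np.maximum(rgb, 0.0) if drop_negatives else rgb

print(sanitize_acescg([-0.02, 0.5, 3.7]))  # -> [0.0, 0.5, 3.7]
```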

@gaaned92 Well, there is a note in the conversion file saying that it would be ideal to keep the negative values, but at the moment there isn’t a method to preserve them for the round trip. I am paraphrasing here.

Thanks much for the link to the dcamprof documentation - it really does cover a lot of the stuff discussed in this and other threads; well worth reading.

Sometimes we talk about “scene-referred” as if it’s easy to actually get colorimetrically accurate colors when shooting raw. Torger’s documentation very nicely explains some of the ways camera-captured colors fail to be “the same as the scene colors” because of issues in creating camera input profiles.

Other factors also limit how “scene-referred” an image can actually be, some of which are discussed in the Tindemans article I linked to. Then there’s the Luther-Ives condition, and on and on. Even so, “as scene-referred as possible” is a good starting place for image editing.

As two asides to the topics in this thread, but covered in the dcamprof documentation:

@ggbutcher - Torger’s profile-making does start with a “not white balanced” file and calculates the white balance, I’m guessing similar to this 2012 post: [argyllcms] Re: Profile input white not mapping to output white - argyllcms - FreeLists - I’m also guessing the calculated white balance in dcamprof is then applied to the “uniwb” ti3 values before the profile is made.

Torger also talks about dealing with dark saturated blues - neon lights, backlit blue glass, etc. Sometimes after applying a linear gamma camera input profile, blue lights and such can end up with negative Y values. dcamprof has an option to artificially desaturate the problem patches in the target chart shot, somewhat compromising linearity but alleviating the “extreme colors” problem (my own more complicated approach has been to output raw color, use a linear matrix profile for the main parts of the image, and blend in a second copy to which a LAB LUT profile was applied, to recover the blue colors).

My thought was to move to a LUT camera profile, created with an IT8 target shot. I’ve demonstrated the linearity compromise using Anders’ approach; it’s not bad, but definitely a departure from the original characterization.

I just skimmed this; a very good discussion. We do need some perspective on how to consider this concept, and Tindemans’ article helps.

Maybe mathematically, but not always true perceptually. It is a matter of trade-offs and perspective. (E.g., see the ImageMagick docs and discussions, and elsewhere.)

I have noticed that too. The latter is definitely more aesthetically pleasing but how would this tiny boost in the shadows affect subsequent operations?

It only looks like a tiny boost in the shadows because all the dark colors are lifted proportionately along with the lighter colors. The curve in the user interface has a funny “uplift” at the toe, but that’s only so the user can comfortably place points on the curve - there is no “extra” lifting of the darker colors. The same result is obtained using Exposure, though without the highlight clipping that comes from using Curves to add exposure.
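
A minimal sketch of that last point (hypothetical values; the clip stands in for what a curve tool bounded to [0,1] necessarily does):

```python
import numpy as np

x = np.array([0.02, 0.4, 0.9])  # linear pixel values
gain = 2.0

exposure = gain * x                  # unbounded: 0.9 becomes 1.8 and is kept
curve = np.clip(gain * x, 0.0, 1.0)  # a [0,1] curve tool clips it to 1.0

print(exposure)  # [0.04 0.8  1.8 ]
print(curve)     # [0.04 0.8  1.0 ]  highlight information is lost
```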

Sometimes (often, actually) after using Exposure or linear Curves (linear in the shadows) I find myself deliberately pushing the darkest shadows back down. This breaks the scene-referred nature of the camera-captured data, unless of course one assumes the original data already departed from scene-referred by having too light a black point, perhaps from flare and glare.