HDR, ACES and the Digital Photographer 2.0

I’ve been wondering: I’ve read the primer, and it seems to me that the ACES colourspace is meant for archiving, while ACEScg is a working space for digital editing/manipulation. Is that correct?

So am I correct in thinking that ACEScg would be the one to use in editing software?

I think that in this business there are facts, feelings, and what I’d call “subtle facts”.

First, the facts (already mentioned by @Elle):

  • image resizing is more accurate when done in linear RGB
  • colored gradients, and more generally operations that mix RGB channels, are better applied to linear data
  • blurs also work better with linear data (see the sketch after this list)
  • noise reduction is probably more effective when applied to perceptually uniform data (to be confirmed)
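To illustrate why channel-mixing operations such as resizing and blurring belong in linear RGB, here is a minimal sketch of what happens when two intensities are averaged in linear versus sRGB-encoded space (the sample values are arbitrary):

```python
import numpy as np

def srgb_encode(x):
    # sRGB OETF: linear toe plus power section
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_decode(y):
    # exact inverse of srgb_encode
    return np.where(y <= 0.04045, y / 12.92, ((y + 0.055) / 1.055) ** 2.4)

a, b = 0.05, 0.8           # two linear scene intensities
linear_avg = (a + b) / 2   # physically correct 50/50 mix: 0.425

# averaging the gamma-encoded values instead gives a visibly darker mix
encoded_avg = srgb_decode((srgb_encode(a) + srgb_encode(b)) / 2)

print(linear_avg, float(encoded_avg))  # 0.425 vs ~0.29
```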

Now let’s see an example of “subtle fact”. This is the result of applying a linear curve to an image encoded with the same TRC as the Lab L channel:

This is a linear curve applied to the same image encoded in linear RGB:


In this case, the curve adjustment is equivalent to a lift of exposure.

As one can see, in the linear case the shadows are lifted more than in the perceptual case and, to my taste, more naturally. The difference is rather subtle, but visible.

You will notice that the curve in the second screenshot looks strange: this is because the linear curve in linear encoding is “translated” back to perceptual encoding for visualisation. This shows why a “linear curve in linear space” lifts the shadows more than “a linear curve in perceptual space”.
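To put numbers on this, here is a small sketch of my own (not RawTherapee’s actual code): both “linear curves” are straight lines through the origin, normalized so that a linear value of 0.5 maps to 1.0; one is applied to linear data, the other to data encoded with the Lab L* TRC:

```python
import numpy as np

# CIE L* encoding (the Lab "L" TRC), normalized to [0, 1]
def lab_L(x):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.008856, 116 * np.cbrt(x) - 16, 903.3 * x) / 100

def lab_L_inv(L):
    L = np.asarray(L, dtype=float) * 100
    return np.where(L > 8.0, ((L + 16) / 116) ** 3, L / 903.3)

shadow = 0.02    # a deep-shadow linear value
anchor = 0.5     # both curves map this value to 1.0

# "linear curve in linear space": a plain gain
lifted_linear = shadow / anchor                       # 0.04

# "linear curve in perceptual space": the same line, applied to L* data
lifted_perceptual = lab_L_inv(lab_L(shadow) / lab_L(anchor))

# compare the two results on the perceptual (L*) axis
print(float(lab_L(lifted_linear)), float(lab_L(lifted_perceptual)))
# ~0.237 vs ~0.204: the linear-space curve lifts the shadow higher
```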

Which one do you consider a better starting point for further editing?

The comparison above might be at least part of the explanation…

That is also my understanding…

I think this is one of the main drawbacks of introducing hard-coded decisions in the processing pipeline. In my opinion, the choice of a linear RGB working colorspace should be a “suggested default option”, but not enforced as the “only possible choice”.

Hi,
and thanks for the little demo!

Indeed, that might explain why I dislike messing with the “L” channel in Lab space – no matter what I do, the results always feel unnatural to my eye (in fact, sometimes I tweak contrast in Lab precisely to get this unnatural feeling, but that’s a different point…).

However, that’s not the point I wanted to make before. RawTherapee already operates on linear RGB data for the first half of the pipeline. This includes exposure adjustment, dynamic range compression, channel mixer, and tone curves (among others).

However, currently the tone curves are applied very early in the pipeline, before any colour manipulations. Also, the contrast and brightness sliders work in “display-referred mode”: internally, the sliders are translated into curves that only work in [0,1]. This is not the ACES / “scene-referred” way (in quotes because I’m never sure that I’m using the term as intended), which would be to operate in linear space as much as possible, with data in [0,+oo) (i.e. no assumption that it is always in [0,1]) until the very end of the pipeline, where the [0,+oo) data is remapped to [0,1] for display (using e.g. a log encoding), and after that a “film-like” S-curve is applied.

This is what I’m currently playing with: operate in linear [0,+oo) as much as possible, and move the tone curves to the end, preceded by a log encoding formula similar to the one that was posted here by Troy (taken from ACES). So far I’m happy with it, although in most cases the difference is minimal. But I seem to see a difference nonetheless, so I’ll keep experimenting :slight_smile:
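For illustration, here is a minimal sketch of that final remapping step; the constants and the smoothstep S-curve are placeholders of mine, not the actual ACES or RawTherapee formulas:

```python
import numpy as np

def log_encode(x, middle_grey=0.18, ev_below=10.0, ev_above=6.5):
    """Remap scene-linear [0, +oo) to [0, 1] around middle grey (ACEScc-style)."""
    x = np.maximum(x, 1e-10)                # guard against log2(0)
    ev = np.log2(x / middle_grey)           # exposure in stops relative to grey
    return np.clip((ev + ev_below) / (ev_below + ev_above), 0.0, 1.0)

def s_curve(y):
    """A simple film-like S-curve (smoothstep) applied after the log encoding."""
    return y * y * (3.0 - 2.0 * y)

scene = np.array([0.001, 0.18, 1.0, 8.0])   # unbounded scene-linear values
display = s_curve(log_encode(scene))        # display-referred [0, 1] output
print(display)
```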


This 2008 article has a nice and very readable overview of the benefits of starting image editing with a scene-referred rendition:

The Scene-referred Workflow - a proposal


From the ACEScg spec:

The set of valid ACEScg RGB values also includes members whose projection onto the CIE chromaticity diagram falls outside the region of the AP1 primaries. These ACEScg RGB values include those with one or more negative ACEScg color component values; ideally these values would be preserved through any compositing operations done in ACEScg space, but it is recognized that keeping negative values is not always practical, in which case it will be acceptable to replace negative values with zero.

Values well above 1.0 are expected and should not be clamped except as part of the color correction needed to produce a desired artistic intent.

@gaaned92 Well, there is a note in the conversion file saying that it would be ideal to keep the negative values but at the moment there isn’t a method to preserve them for the round trip. I am paraphrasing here.

Thanks much for the link to the dcamprof documentation – it really does cover a lot of the stuff discussed in this and other threads, and is well worth reading.

Sometimes we talk about “scene-referred” as if it’s easy to actually get colorimetrically accurate colors when shooting raw. Torger’s documentation very nicely explains some of the ways camera-captured colors fail to be “the same as the scene colors” because of issues in creating camera input profiles.

Other factors also limit how “scene-referred” an image can actually be, some of which are discussed in the Tindemans article I linked to. Then there’s the Luther-Ives condition, and so on. Even so, “as scene-referred as possible” is a good starting place for image editing.

As two asides to the topics in this thread, but covered in the dcamprof documentation:

@ggbutcher - Torger’s profile-making does start with a “not white balanced” file and calculates the white balance, I’m guessing in a way similar to this 2012 post: [argyllcms] Re: Profile input white not mapping to output white - argyllcms - FreeLists. I’m also guessing the calculated white balance in dcamprof is then applied to the “uniwb” ti3 values before the profile is made.

Torger also talks about dealing with dark saturated blues - neon lights, backlit blue glass, etc. Sometimes, after applying a linear gamma camera input profile, blue lights and such can end up with negative Y values. dcamprof has an option to artificially desaturate the problem patches in the target chart shot, somewhat compromising linearity but alleviating the “extreme colors” problem. (My own more complicated approach has been to output raw color, use a linear matrix profile for the main parts of the image, and blend in a second copy to which a LAB LUT profile was applied, to recover the blue colors.)

My thought was to move to a LUT camera profile, created with an IT8 target shot. I’ve demonstrated the linearity compromise using Anders’ approach; it’s not bad, but definitely a departure from the original characterization.

I just skimmed this; a very good discussion. We do need some perspective on how to consider this concept, and Tindemans’ article helps.

Maybe mathematically, but not always true perceptually. It is a matter of trade-offs and perspective. (E.g., see the ImageMagick docs and discussions, and elsewhere.)

I have noticed that too. The latter is definitely more aesthetically pleasing but how would this tiny boost in the shadows affect subsequent operations?


It only looks like a tiny boost in the shadows because all the dark colors are lifted proportionately along with the lighter colors. The curve in the user interface has a funny “uplift” at the toe, but that’s only so the user can comfortably place points on the curve - there is no “extra” lifting of the darker colors. The same results are obtained using Exposure, though without the highlight clipping that comes from using Curves to add exposure.

Sometimes, often, after using Exposure or linear Curves (linear in the shadows) I find myself deliberately pushing the darkest shadows back down. This breaks the scene-referred nature of the camera-captured data, unless of course one assumes the original data departed from scene-referred by having too light of a black point perhaps from flare and glare.

I meant, comparatively, image #2 looks like its shadows have been boosted, at least that is what I see on my terrible laptop monitor.

Umm - what are you talking about??

ICC profiles use the transforms that they need to use to encode conversions between colorspaces. I’ve no idea what you mean by “don’t keep ratios”. ICC profiles are accurate to the degree that they encode accurate transforms, irrespective of the details of how that is achieved. Their basic function is to preserve color values as faithfully as possible, which implies preserving ratios between values.

They are used quite successfully for inputs, and there is no reason not to do so, because they are a standard way of encoding colorspace definitions.

You’ll have to explain what you mean by this, because it doesn’t make any sense to me. ICC profiles encode color space transforms. It’s up to you which color spaces you choose to use in your workflow - the ICC profiles just provide a mechanism to be able to use them.

Don’t blame the ICC profile format for bad usage - get the application authors to fix their code! (And by all means convince them to make their applications more transparent in how they handle color. One of the biggest obstacles to getting things to work, is an application obscuring or hiding what it is doing in regard to color.)


f(x) = x^(1/gamma) => f(n * x) = (n * x)^(1/gamma) = n^(1/gamma) * f(x) ≠ n * f(x)

Thus the gamma function does not preserve ratios, and therefore it does not preserve chromaticity. And this flaw is no longer needed, since screens are mostly linear. So maybe drop it?
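A quick numeric check of this (the values are arbitrary):

```python
gamma = 2.2
f = lambda x: x ** (1 / gamma)

x, n = 0.2, 4.0
print(f(n * x) / f(x))  # ~1.88 = n**(1/gamma), not n = 4: the ratio shrinks
```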

Because when you save a file from software A in Adobe RGB 16 bits to further edit it in software B, if the first thing B does is not f(x) = x^gamma, you have lost your colors.

Not all transforms are mathematically invertible. No matter their accuracy, this means that some ICC encodings can be destructive and non-reversible, in the sense that the original ratios can’t be recovered. So you don’t want to push pixels on anything that has been touched by an ICC, because it’s display material, not working material.

A lot of inaccurate things have worked fairly well with 8-bit/8 EV files on 8-bit sRGB screens for people doing mono-medium work (paper prints). Now people work on 12-16 bit/15 EV files on 10-bit screens and counting, HDR screens are on their way, and everyone works for several output media. Besides, you have one standard for print work (D50, 90-120 Cd/m², 285:1) and one standard for web work (D65, 160 Cd/m²), as if picking one were an option. That means you would have to change the workflow depending on the output.

The modern way to do that, with OpenColorIO and ACES, is to adapt the workflow to the input and let the outputs take care of themselves with view transforms. Never mind the fact that ICCs assume a D50 white point when 99 % of RGB color spaces and displays are natively D65, just so you get the pleasure of messing up your RGB ratios even more with two unnecessary Bradford transforms.

ICCs treat color spaces and display calibration profiles exactly the same, using the same terms and file format, leading to massive confusion amongst users. Color spaces are symbolic vector spaces. Calibration profiles are LUTs or LUT interpolations (curves) linking a reference color space to the display output. They are not the same thing, so give them different names.

Sure, but the pleasure of ICCs is the ability to have black boxes doing magic for you, isn’t it?

Not just a white balance correction, a full profiling, because the Sun is a black body, and artificial lights are not.

White balance is a spectrum compensation that assumes the same emissive black body at a different temperature. LEDs and the like are far from a black-body spectrum, and white balance adjustments won’t suffice.

  1. Nothing about ICC (Device) profiles makes them always perform a gamma function. They perform whatever function best models the relationship between the device colorspace and the (device independent) PCS.

  2. By definition, the input and output spaces of a color space transform are different color space encodings. So you can’t apply the same maths to different encodings and expect the same result, i.e. to check the linear-light ratio of two values in a gamma-encoded space, you first have to convert to a linear-light space to check the ratio.

[ And none of this is specific to the ICC profile format - the same logic applies to whatever format you want to define color spaces in. ]

If you mix color-managed applications with non-color-managed applications, or lose the colorspace tag, or manually disregard the colorspace tag, then yes, of course your colors are mis-interpreted.

“Some ICC transforms are not exactly invertible” → “All ICC transforms are destructive” simply doesn’t follow. Most simple transforms, particularly those used for working space definitions, are exactly invertible (i.e. matrix & power or shaper). Even cLUT-based profiles can be exactly inverted if you go to enough trouble (witness the xicclu -fif option). Some particular transforms are deliberately not so invertible (e.g. gamut mapping intents).

ICC profiles have been 16 bit for a very long time, and the format has provision for floating point accuracy for those that need it. Given the typical repeatability of real world transducers (i.e. cameras and displays), it’s actually hard to justify more than 16 bits at input and output. (Image space representation, HDR or working space transforms, yes - extra headroom simplifies things.)

I’ve no idea why you think that. ICC by default (i.e. relative colorimetric) automatically takes the white point into account, and there is the flexibility of dealing more explicitly with white point considerations (absolute intent). Nothing about the format ties you to particular device white points, light intensities or contrast ratios.
[ i.e. don’t be fooled by the PCS. It’s just the common interchange space, and is invisible in a device to device space transformation. ]

And ICC is no different. The fundamentals of color management apply - conversion from and to device dependent spaces, via a defined device independent space. Details may differ (Input referred vs. output referred CIE based space, D60 vs. D50 white point etc.), but it’s the same idea, because it has to be. Bradford transforms are exactly invertible, and I’ve yet to hear of any issues related to this with regard to the extensive use of ICC profiles for display profiling.

There’s nothing in ICC called display calibration profiles, so I’m not sure what you are talking about. Calibration != Characterization, and user confusion about color management is hardly new, and not something specific to ICC.

Yep, they do. One is a color profile, the other is calibration information. The latter is not something ICC deals with, although of course users do, since changing the calibration state of a device can invalidate a profile.

Sorry, I’m not sure what you are saying here. There’s nothing particularly “black box” about ICC profiles. Applications on the other hand often go out of their way to hide what’s happening with regard to color management, thereby making it very difficult to know if it’s right, or debug if it’s not.


That’s still invertible without destroying ratios; addition is probably a better example.

I agree with all you said. I just want to point out a peculiar feature of power functions, which makes them mathematically invertible but numerically inaccurate in the vicinity of zero: their derivative around zero is either zero or infinite (depending on whether the exponent is >1 or <1). As such, a round-trip linear → gamma → linear conversion becomes numerically inaccurate for values very close to zero.
AFAIK this is one of the reasons why the sRGB and Lab TRCs have a linear section near zero…
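A small illustration of that effect, using a hypothetical pure power TRC quantized to 12 bits (the exact numbers are not meaningful, only the trend):

```python
import numpy as np

def roundtrip(x, gamma=2.4, bits=12):
    """Encode with a pure power TRC, quantize, decode back to linear."""
    q = 2 ** bits - 1
    enc = np.round(x ** (1 / gamma) * q) / q
    return enc ** gamma

for x in [1e-2, 1e-5, 1e-8]:
    back = roundtrip(x)
    print(x, back, abs(back - x) / x)  # relative error grows toward zero
```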

Let’s take a practical example, and I propose you try to explain what can go wrong in it: I start from a RAW file, for which I have an input ICC profile that I obtained from an IT8 target shot. I convert from this profile to a linear ACEScg working colorspace, which is described by a matrix-type ICC profile with a linear TRC, and therefore I use an ICC conversion. I do my processing in linear ACEScg, and then I apply a final ICC conversion from linear ACEScg to the ICC profile of my calibrated monitor.
As you can see, there is a lot of ICC stuff involved, but nowhere is a gamma TRC being used. In this pipeline, the ICC profiles are used to conveniently describe the input device (the camera), the working colorspace (linear ACEScg) and the output device (the monitor). From direct experience, I can tell you that there is no risk of inaccuracies, or of “ratios that cannot be recovered”.
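Conceptually, each conversion in that pipeline boils down to a 3×3 matrix multiply, so every step is exactly invertible. The matrices below are made-up placeholders (the real ones come from the camera profile and the published AP1 primaries), but the point holds for any matrix-type profile with linear TRCs:

```python
import numpy as np

# Hypothetical camera RGB -> XYZ matrix standing in for the IT8-derived
# input profile (real values come from the profile itself)
M_cam = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.1, 0.9]])

# Placeholder XYZ -> ACEScg matrix; the real one is fixed by the AP1 primaries
M_acescg = np.array([[ 1.64, -0.32, -0.24],
                     [-0.66,  1.62,  0.02],
                     [ 0.01, -0.01,  1.00]])

rgb_cam = np.array([0.2, 0.5, 0.1])      # some camera-referred pixel
acescg = M_acescg @ (M_cam @ rgb_cam)    # the two ICC conversions, chained

# Both steps are plain matrix multiplies with linear TRCs, hence exactly
# invertible: no ratios are lost anywhere in the working pipeline
recovered = np.linalg.inv(M_cam) @ (np.linalg.inv(M_acescg) @ acescg)
print(np.allclose(recovered, rgb_cam))   # True
```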

Again, ICC profiles are a tool, and if used properly they can be as accurate as any other computation. Of course they have limitations (no log encoding in the current standard, and no “sophisticated” view transforms like those proposed by the ACES workflow), but if they are used for their intended purpose of “keeping color consistency between devices” they are perfectly OK (and they spare you a lot of boring math).

Bradford transforms are simple 3x3 matrices, therefore they preserve ratios between RGB channels:

RGB' = M * RGB
c * RGB' = c * (M * RGB) = M * (c * RGB)

where c is a scalar number, M is a 3x3 matrix and RGB is a vector of RGB values.


Exactly.

I’m confident that a workflow exactly equivalent to ACES could be constructed using ICC as the profile format, if one were prepared to do the work to implement it. It would almost certainly mean implementing some of the more advanced tags not common in current tools, perhaps up to and including IccMAX, and adapting a lot of tools etc. Note that things like log encoding are supported in IccMAX, and “looks” have long been supported by ICC using abstract profiles.

Of course there is not a lot of motivation to do this, since ACES is perfectly adequate for what it was intended for, but nonetheless ICC is not excluded from the same domain by any technical limitations that I’m aware of.