HDR, ACES and the Digital Photographer 2.0

I meant, comparatively, image #2 looks like its shadows have been boosted, at least that is what I see on my terrible laptop monitor.

Umm - what are you talking about ??

ICC profiles use whatever transforms they need to encode conversions between colorspaces. I've no idea what you mean by "don't keep ratios". ICC profiles are accurate to the degree that they encode accurate transforms, irrespective of the details of how that is achieved. Their basic function is to preserve color values as faithfully as possible, which implies preserving ratios between values.

They are used quite successfully for inputs, and there is no reason not to do so, because they are a standard way of encoding colorspace definitions.

You'll have to explain what you mean by this, because it doesn't make any sense to me. ICC profiles encode color space transforms. It's up to you which color spaces you choose to use in your workflow - the ICC profiles just provide a mechanism to be able to use them.

Don't blame the ICC profile format for bad usage - get the application authors to fix their code! (And by all means convince them to make their applications more transparent in how they handle color. One of the biggest obstacles to getting things to work is an application obscuring or hiding what it is doing in regard to color.)


f(x) = x^(1/gamma) => f(n * x) = (n * x)^(1/gamma) ≠ n * f(x)

Thus the gamma function does not preserve ratios, and therefore does not preserve chromaticity. And this flaw is no longer needed, since modern screens are mostly linear. So maybe drop it?
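A quick numerical sketch of the inequality above (plain Python, with an illustrative gamma of 2.2):

```python
# Demonstration that a pure gamma (power) function does not commute with
# scaling, so channel ratios are not preserved after encoding.
gamma = 2.2

def encode(x):
    """Gamma-encode a linear value (pure power law, no linear segment)."""
    return x ** (1.0 / gamma)

x, n = 0.2, 2.0
lhs = encode(n * x)          # encode the scaled value
rhs = n * encode(x)          # scale the encoded value
print(lhs, rhs)              # the two differ: f(n*x) != n*f(x)

# Ratios between two channels change after encoding:
r, g = 0.1, 0.4              # linear ratio g/r = 4.0
print(encode(g) / encode(r)) # encoded ratio is ~1.88, not 4.0
```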

Because when you save a file from software A in 16-bit Adobe RGB to edit it further in software B, if the first thing B does is not f(x) = x^gamma (decoding the gamma), your colors are lost.

Not all transforms are mathematically invertible. No matter their accuracy, this means that some ICC encodings can be destructive and non-reversible, in the sense that the original ratios can't be recovered. So you don't want to push pixels on anything that has been touched by an ICC, because it's display material, not working material.

A lot of inaccurate things have worked fairly well with 8-bit/8 EV files on 8-bit sRGB screens for people doing single-medium work (paper prints). Now people work on 12-16-bit/15 EV files on 10-bit screens and counting, HDR screens are on their way, and everyone works for several output media. Besides, you have one standard for print work (D50, 90-120 cd/m², 285:1) and one standard for web work (D65, 160 cd/m²), as if picking just one were an option. That means you would have to change the workflow depending on the output.

The modern way to do that, with OpenColorIO and ACES, is to adapt the workflow on the input and let the outputs take care of themselves with view transforms. That's without even mentioning that ICC assumes a D50 white point when 99% of RGB color spaces and displays are natively D65, just so you get the pleasure of messing up your RGB ratios even more with two unnecessary Bradford transforms.

ICCs treat color spaces and display calibration profiles exactly the same, using the same terms and file format, leading to massive confusion amongst users. Color spaces are symbolic vector spaces. Calibration profiles are LUTs or LUT interpolations (curves) linking a reference color space to the display output. Not the same thing, so give them different names.

Sure, but the pleasure of ICCs is the ability to have black boxes doing magic for you, isn't it?

Not just a white balance correction, a full profiling, because the Sun is a black body, and artificial lights are not.

White balance is the spectrum compensation achieved assuming the same emissive black body at a different temperature. LEDs and the like are far from a black-body spectrum, and white balance adjustments won't suffice.

  1. Nothing about ICC (Device) profiles makes them always perform a gamma function. They perform whatever function best models the relationship between the device colorspace and the (device independent) PCS.

  2. By definition, the input and output spaces of a color space transform are different color space encodings. So you can't apply the same maths to different encodings and expect the same result; i.e. to check the linear-light ratio of two values in a gamma-encoded space, you first have to convert to a linear-light space.
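A minimal sketch of that point, assuming an illustrative pure power-law TRC with gamma 2.2:

```python
# Ratios must be checked in linear light: decode the gamma first.
gamma = 2.2

def encode(x):  # linear -> gamma-encoded (pure power law, for illustration)
    return x ** (1.0 / gamma)

def decode(y):  # gamma-encoded -> linear
    return y ** gamma

a_lin, b_lin = 0.1, 0.4
a_enc, b_enc = encode(a_lin), encode(b_lin)

ratio_encoded = b_enc / a_enc                  # wrong space: ~1.88
ratio_linear = decode(b_enc) / decode(a_enc)   # back in linear light: 4.0
print(ratio_encoded, ratio_linear)
```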

[ And none of this is specific to the ICC profile format - the same logic applies to whatever format you want to define color spaces in. ]

If you mix color-managed applications with non-color-managed applications, or lose the colorspace tag, or manually disregard the colorspace tag, then yes, of course your colors are misinterpreted.

"Some ICC transforms are not exactly invertible" → "All ICC transforms are destructive" simply doesn't follow. Most simple transforms, particularly those used for working space definitions, are exactly invertible (i.e. matrix & power or shaper). Even cLUT-based profiles can be exactly inverted if you go to enough trouble (witness the xicclu -fif option). Some particular transforms are deliberately not invertible (i.e. gamut mapping intents).
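A small numerical illustration of that invertibility, using the well-known sRGB RGB→XYZ matrix together with a simple power TRC as a stand-in for a matrix/shaper profile (not actual ICC machinery, just the same maths):

```python
# Round trip through a matrix + power ("shaper") transform, the structure
# of a typical ICC working-space profile: it inverts exactly to within
# floating-point noise.
import numpy as np

gamma = 2.2
# The familiar sRGB RGB->XYZ (D65) matrix; any invertible matrix works here.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
M_inv = np.linalg.inv(M)

rgb_encoded = np.array([0.25, 0.5, 0.75])

# Forward: decode the TRC, then apply the matrix into a PCS-like space.
xyz = M @ (rgb_encoded ** gamma)
# Inverse: apply the inverse matrix, then re-encode with the TRC.
rgb_back = (M_inv @ xyz) ** (1.0 / gamma)

print(np.max(np.abs(rgb_back - rgb_encoded)))  # down at floating-point noise
```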

ICC profiles have been 16 bit for a very long time, and the format has provision for floating-point accuracy for those that need it. Given the typical repeatability of real-world transducers (i.e. cameras and displays), it's actually hard to justify more than 16 bits at input and output. (Image space representation, HDR or working space transforms, yes - extra headroom simplifies things.)

I've no idea why you think that. ICC by default (i.e. relative colorimetric) automatically takes the white point into account, and there is the flexibility of dealing more explicitly with white point considerations (absolute intent). Nothing about the format ties you to particular device white points, light intensities or contrast ratios.
[ i.e. don't be fooled by the PCS. It's just the common interchange space, and is invisible in a device-to-device space transformation. ]

And ICC is no different. The fundamentals of color management apply - conversion from and to device-dependent spaces, via a defined device-independent space. Details may differ (input-referred vs. output-referred CIE based space, D60 vs. D50 white point etc.), but it's the same idea, because it has to be. Bradford transforms are exactly invertible, and I've yet to hear of any issues related to this with regard to the extensive use of ICC profiles for display profiling.

There's nothing in ICC called a display calibration profile, so I'm not sure what you are talking about. Calibration != characterization, and user confusion about color management is hardly new, and not something specific to ICC.

Yep, they do. One is a color profile, the other is calibration information. The latter is not something ICC deals with, although of course users do, since changing the calibration state of a device can invalidate a profile.

Sorry, I'm not sure what you are saying here. There's nothing particularly "black box" about ICC profiles. Applications, on the other hand, often go out of their way to hide what's happening with regard to color management, thereby making it very difficult to know if it's right, or to debug if it's not.


That's still invertible without destroying ratios; addition is probably a better example.

I agree with all you said. I just want to point out a peculiar feature of power functions, which makes them mathematically invertible but numerically inaccurate in the vicinity of zero: their derivative around zero is either zero or infinite (depending on whether the exponent is >1 or <1). As such, a round-trip linear → gamma → linear conversion becomes numerically inaccurate for values very close to zero.
AFAIK this is one of the reasons why the sRGB and Lab TRCs have a linear section near zero…
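A rough numerical illustration of the derivative blow-up (a pure power law with exponent 1/2.4, as in the upper segment of the sRGB curve; 12.92 is the slope of sRGB's linear segment near zero):

```python
# The derivative of a pure power law blows up at zero: tiny perturbations
# in linear values near black are hugely amplified on encoding, which is
# one motivation for sRGB's linear segment capping the gain at 12.92.
def pure_power(x):
    return x ** (1.0 / 2.4)

eps = 1e-6
# Finite-difference slope of the pure power law just above zero:
slope_pure = (pure_power(2 * eps) - pure_power(eps)) / eps  # ~1000x gain
slope_srgb_segment = 12.92                                  # sRGB's capped gain
print(slope_pure, slope_srgb_segment)
```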

Let's take a practical example, and I propose you try to explain what can go wrong in it: I start from a RAW file, for which I have an input ICC profile that I obtained from an IT8 target shot. I convert from this profile to the linear ACEScg working colorspace, which is described by a matrix-type ICC profile with a linear TRC, and therefore I use an ICC conversion. I do my processing in linear ACEScg, and then I apply a final ICC conversion from linear ACEScg to the ICC profile of my calibrated monitor.
As you can see, there is a lot of ICC stuff involved, but nowhere is a gamma TRC being used. In this pipeline, the ICC profiles are used to conveniently describe the input device (the camera), the working colorspace (linear ACEScg) and the output device (the monitor). From direct experience, I can tell you that there is no risk of inaccuracies, or of "ratios that cannot be recovered".
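A toy numerical sketch of such a pipeline, with made-up 3x3 matrices standing in for the ICC profiles (not real camera or monitor data), showing that an edit made in the linear working space scales all channels uniformly:

```python
import numpy as np

# Toy pipeline: camera RGB -> linear working space (matrix) -> edit in
# linear -> display (matrix + TRC). Matrices are hypothetical stand-ins
# for the matrix-type ICC profiles described above.
M_cam_to_work = np.array([[0.90, 0.10, 0.00],
                          [0.05, 0.90, 0.05],
                          [0.00, 0.10, 0.90]])
M_work_to_disp = np.linalg.inv(M_cam_to_work)  # stand-in display matrix

camera_rgb = np.array([0.2, 0.4, 0.1])

working = M_cam_to_work @ camera_rgb       # linear working space (e.g. ACEScg)
edited = 1.5 * working                     # an exposure gain, done in linear
display_linear = M_work_to_disp @ edited   # into display space, still linear
display_encoded = display_linear ** (1 / 2.2)  # display TRC, applied last

# Ratios in linear light survive every matrix + scalar stage:
print(display_linear / camera_rgb)         # uniform 1.5 on every channel
```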

Again, ICC profiles are a tool, and if used properly they can be as accurate as any other computation. Of course they have limitations (no log encoding in the current standard, and no "sophisticated" view transforms like those proposed by the ACES workflow), but if they are used for their intended purpose of "keeping color consistency between devices" they are perfectly OK (and they spare you a lot of boring math).

Bradford transforms are simple 3x3 matrices, therefore they preserve ratios between RGB channels:

RGB' = M * RGB
c * RGB' = c * (M * RGB) = M * (c * RGB)

where c is a scalar number, M is a 3x3 matrix and RGB is a vector of RGB values.
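The same can be checked numerically. The matrix below is the commonly tabulated Bradford D65→D50 adaptation matrix (values from standard references; any 3x3 matrix would make the same point):

```python
import numpy as np

# The Bradford chromatic adaptation matrix (D65 -> D50) is a plain 3x3
# linear map, so it commutes with any scalar exposure factor and
# therefore preserves ratios between channels.
M_bradford = np.array([[ 1.0478112, 0.0228866, -0.0501270],
                       [ 0.0295424, 0.9904844, -0.0170491],
                       [-0.0092345, 0.0150436,  0.7521316]])

c = 1.7                          # an arbitrary scalar gain
v = np.array([0.2, 0.5, 0.3])    # an arbitrary XYZ-like vector

lhs = M_bradford @ (c * v)       # scale first, then adapt
rhs = c * (M_bradford @ v)       # adapt first, then scale
print(np.allclose(lhs, rhs))     # True: the two orders agree
```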


Exactly.

I'm confident that a workflow exactly equivalent to ACES could be constructed using ICC as the profile format, if one were prepared to do the work to implement it. It would almost certainly mean implementing some of the more advanced tags not common in current tools, perhaps up to and including IccMAX, and adapting a lot of tools etc. Note that things like log encoding are supported in IccMAX, and "looks" have long been supported by ICC using abstract profiles.

Of course there is not a lot of motivation to do this, since ACES is perfectly adequate for what it was intended for, but nonetheless, ICC is not excluded from the same domain by any technical limitations that I'm aware of.

Pretty sure Graeme is right. ACES and ICC seem to have a lot in common, actually.


In fact at one point I believe the ICC committee set up a working group for Motion Pictures and made a few unbounded ICC profiles. http://www.color.org/ICC_Chiba_07-06-19_PM_DMP_Float.pdf Some were in After Effects.

Currently, ICC profiles generated with OpenColorIO are used as a workaround with ACES in applications like Photoshop.

A lot more info here:

http://cinematiccolor.org/

As an alternative to ACES, apps like Baselight/Truelight have decided to take their own color management approach, using Lab for the underlying math. https://lowepost.com/colorgrading/insights/base-grade-and-the-evolution-of-grading-tools-r15/ & https://www.filmlight.ltd.uk/pdf/whitepapers/FL-TL-TN-0416-SettingTLProfile.pdf


My current understanding is the following:

  • ACES provides sophisticated and standardised transforms from the "universal" ACES colorspace to a number of standardised output devices (sRGB, Rec.2020, P3-D60, etc., including different options for HDR-capable displays). However, it does not seem to provide simple handling of specific output devices, which are usually characterised by ICC profiles
  • ICC provides simpler transforms, and I have no idea how it incorporates HDR-capable output devices. I guess it does not make any distinction between SDR and HDR, and only considers black and white levels

My impression is that we should try to incorporate both: ACES to go from the linear working colorspace to the "standardised" output device, and ICC to go from there to the actual calibrated monitor (or printer) we are using.

Does this make sense?

My understanding is that not every studio that makes movies - not even every well-known studio - uses, or always uses, ACES. Some have their own in-house color management. But perhaps none of these "not ACES" pipelines are being used by studios when making HDR movies.

What I know for a fact is that however the images/frames might be produced for use with HDR displays, the images can be viewed with their proper colors by assigning an ICC profile such as profiles built using the various permutations of Rec.2020 in these articles:

I'm not just speaking theory here - a couple of years ago I made one of those PQ ST-2084 Rec.2020 profiles for someone to use with images/frames to be displayed on an HDR display.

A device color space for standardized display devices, including HDR monitors, is still a color space described by a specification, regardless of what type of color management was used to produce the images to be shown on the device.

Why would there be any reason to do a double conversion, first using ACES (do you mean the ACES RGB color space? the ACES workflow? presumably with OCIO?) to get to some nearest standardized device color space, then assigning the standardized ICC profile to the output, and then using ICC to convert to the actual monitor profile?

Assume your particular output device (monitor) is well calibrated to a standard given in a spec:

  • In an OCIO workflow, use an appropriate OCIO LUT to link the RGB working space (including ACES RGB) to the standardized output device color space.

  • In an ICC profile workflow, as your monitor profile use an ICC profile made to the standardized specification to which the monitor is well-calibrated, and convert from the RGB working space (including ACES working space) to the monitor profile.

When using monitors that arenā€™t well-calibrated to specific standards:

  • In an OCIO workflow, make an ICC profile for your monitor, and then use that ICC profile and the selected RGB working space to make an OCIO LUT - one LUT for each selected RGB working space, including ACES RGB.

  • In an ICC profile workflow, calibrate and profile your monitor and use the monitor ICC profile. There's no need to make a new monitor profile for each different RGB working space you might want to use, as the PCS is used to go from the RGB working space profile to the monitor profile.
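For illustration, the OCIO side of that setup might look roughly like the config sketch below. All names and the LUT filename are hypothetical, and the exact schema should be checked against the OpenColorIO documentation:

```yaml
# Sketch of a minimal OCIO v1 config: one display LUT per working space,
# baked from the monitor's ICC profile (hypothetical names and paths).
ocio_profile_version: 1
search_path: luts

roles:
  scene_linear: ACEScg

displays:
  my_monitor:
    - !<View> {name: Standard, colorspace: my_monitor_from_acescg}

colorspaces:
  - !<ColorSpace>
    name: ACEScg
    bitdepth: 32f
    isdata: false

  - !<ColorSpace>
    name: my_monitor_from_acescg
    bitdepth: 32f
    from_reference: !<FileTransform> {src: acescg_to_monitor.cube, interpolation: tetrahedral}
```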

My apologies - the more I look at the above sentence, the odder it seems, so I'm guessing I'm not properly interpreting what you meant to say. Why is the phrase "linear working colorspace" linked with ACES? Do you mean ACES RGB, which is linear? Why would converting an image from the ACES RGB color space to an output device color space be treated any differently than any other color space conversion?

Thank you! I've been looking and looking for that pdf, and just couldn't find it.

I'm guessing there's confusion about how the RRT+ODT works?

No problem! There's this one as well: http://www.color.org/ICCSpecRevision_02_11_06_Float.pdf

LittleCMS's unbounded ICC PDF: http://www.littlecms.com/CIC18_UnboundedCMM.pdf

And a few Adobe patents on the topic that describe the system 🙂

I found a clear article about ACES
https://www.lightillusion.com/aces_overview.html

If a decent 'colour chart' is used for each shot captured, with the same 'lighting exposure' used every time, performing an input grade to 'normalise' the colour chart will also perform the role of an IDT, pushing each shot into a user-defined 'Grading' space.

Scene Referred simply means the image data is maintained in a format that as closely as possible represents the original scene, without effective restriction on colour or dynamic range. This is not necessarily the same as the 'raw' image data as exported from the camera (after any necessary debayering, etc.), but attempts to 'correct' the image to better match the scene the camera was originally pointing at, which may include white point correction, gamut correction, etc. These processes are often referred to as 'Scene Reconstruction' processes.

This is not so different from almost every raw editor.

The simplest workflow by far is to use a Display Referred workflow, with a suitable viewing LUT to maintain the timeline images in a colour space that is greater than the display colour space. Such simple workflows can often be far easier, with far less complexity and fewer issues to overcome, and with fewer image manipulations being performed - and so potentially the best possible final image quality.

First point 🙂

The first point to be made is that, compared to other workflows, ACES will not improve the final image quality, or enable improved/better colours, or provide any other image-related benefit. It is not a 'magic bullet' that somehow guarantees better end results.

I think so

ACES is a Linear Light colour space, which is not suitable for colour grading work

This may well be, and in my head this part is the main motivation for this discussion… my understanding is that the ACES workflow proposes a standardised way to map the unbounded linear values in the ACES working colorspace to different output devices. And this mapping is more complex ("sophisticated") than a simple ICC conversion (at least according to the current ICC standard).
This is the role of the "RRT+ODT", right?
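For intuition only: a view transform is a non-linear mapping from unbounded scene-linear values into display range. The toy tone curve below (a simple Reinhard-style curve, which is *not* the actual ACES RRT) shows the general shape of the idea:

```python
# A toy "view transform": a tone-mapping curve that compresses unbounded
# scene-linear values into [0, 1), unlike a plain gamma/matrix conversion.
# (Reinhard-style curve for illustration; the real ACES RRT is far more
# elaborate.)
def toy_view_transform(x):
    return x / (1.0 + x)   # maps [0, inf) into [0, 1), monotonically

for v in (0.18, 1.0, 4.0, 100.0):
    print(v, toy_view_transform(v))   # highlights roll off smoothly toward 1
```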

Whether the result also "looks better" in the digital photography case is still an open question, and will probably require some experimenting.

While this is technically OK, to me it already looks "too complicated" for an average user. This is where ICC profiles provide a much simpler solution, IMHO.


Absolutely. However, my hope is that it can provide the "average user" with a set of tools/transforms that gives good-looking results when sending unbounded image data to actual output devices - better than what could be achieved with straight ICC conversions. The crux of this thread is whether that is true or not…

Experienced users should always have the possibility to follow their own workflow, so any choice we discuss here should be a "suggested default" and not an "imposed choice".

Is this because it is linear, or because of the extremely large gamut?


I tried searching for an answer myself, but I didn't see anything suggesting that linear is inadequate for colour grading, except that sliders/wheels/controls in popular grading software were/are designed to operate in a log space, and so would respond in a "strange" manner when applied in linear. If I'm wrong, I'd be happy to see references that show otherwise.

this I can see (and examples are easy to find online)


Because drawing an s-curve or similar is more natural and intuitive when middle gray sits roughly at 0.4-0.5, as it does in the sRGB gamma or a log encoding.
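A quick check of that number, using the standard sRGB encoding function (this is the official sRGB TRC; the ~0.46 result is why mid-gray sits near the middle of the encoded range):

```python
# Scene-linear middle gray (~0.18) lands near the middle of the encoded
# range under the sRGB TRC, which is why curve controls feel natural there.
def srgb_encode(x):
    """Official sRGB TRC: linear segment below 0.0031308, power law above."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

print(srgb_encode(0.18))   # ~0.46, close to mid-range
```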