HDR, ACES and the Digital Photographer 2.0

For software we would currently be looking at Natron, Blender (only the compositor, though) and/or Krita. I'm not sure how it would currently work to develop new looks in (semi) real time (although it should be possible, since this is done in the official ACES workflow during recording).

That's precisely why no pixel pusher wants to mess with ICCs/color transforms early in the chain.

There are 2 problems here:

  1. even the so-called "color managed" apps don't say what sort of transforms they perform. Their docs are scarce and you have to reverse-engineer their source code to get a hint. For example, Gnome is supposed to be 100% color-managed, but even its dev docs say nothing about how it's done, what input is expected from the software windows, etc. In Photoshop, I don't know if the software linearizes the TRC first or if it keeps the 2.2 Adobe RGB gamma in the working space. That completely changes the meaning of a multiply blending mode.
  2. in my experience of open-source software, devs grab whatever CMS, put it on top of their code and suddenly their app is color-managed, even if nobody knows what's really happening in the transforms. As if everything would be alright because we handle ICCs in and out.

"Some ICC transforms are not exactly invertible" → "All ICC transforms are suspicious" → "Use them as the last step if needed"

Well, I have been trained in metrology in French, so I go by the VIM definitions. For me, characterization is quality control (gamut coverage, Delta E, etc.) and calibration is error correction through the reversed transfer function (definition 2.39 of the VIM). The calibration results in a TRC and/or 3D LUT that corrects the RGB values so the transducer outputs "true" colors.

The ICC spec says you need to clamp to [0;1]. When/how do you do that?

I missed that.

ACES is intended to keep masters untouched as long as possible, and to convert to display spaces as late as possible, in one direction only.

I just tested that with rawproc, which uses LittleCMS2, and @Elle's profile set. First, I opened a JPEG I'd saved previously, which is output-referred sRGB, gamma 2.2, per the embedded sRGB-elle-V2-g22.icc. I then applied a colorspace transform to the same .icc, relative colorimetric. The histogram didn't budge in either scale or range (min-max).

Second, I opened a raw image (dcraw raw, linear, camera white balance, no scaling) with my camera profile assigned, then converted it to Rec2020-elle-g10.icc, which Iā€™d normally do for editing. The color distributions changed slightly, but the data range remained intact.

If I understand the cmsTransform routine correctly, it does two TRC transforms: 1) input TRC → linear XYZ, then 2) linear XYZ → output TRC. So right now I'd assert that, if the TRCs of the input and output profiles are the same, the radiometric relationship survives the round-trip in LittleCMS2.
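The claim above can be sketched numerically. This is a hypothetical illustration with a pure power-law TRC, not actual LittleCMS2 code: if the input and output TRCs match, decoding to linear and re-encoding is the identity up to floating-point error.

```python
# Sketch: matching input/output TRCs make the round-trip an identity.
# A pure gamma-2.2 TRC is assumed for brevity.
import numpy as np

def trc_decode(v, gamma=2.2):
    """Encoded RGB -> linear light (the 'input TRC -> linear' leg)."""
    return np.power(v, gamma)

def trc_encode(v, gamma=2.2):
    """Linear light -> encoded RGB (the 'linear -> output TRC' leg)."""
    return np.power(v, 1.0 / gamma)

rgb = np.array([0.05, 0.18, 0.95])
round_trip = trc_encode(trc_decode(rgb))
assert np.allclose(round_trip, rgb)  # radiometric relationship survives
```

Of course the real transform also passes through a matrix to XYZ and back, but with identical source and destination matrices that leg cancels out the same way.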

As always, comment and criticism welcome.

Working in native camera space is hardly what I would call color managed!

That's up to the application writers. If they know their stuff, then they know the implications of doing maths like blends in certain spaces. Color management is just a means of preserving the meaning of colors independently of the color representation. It's what allows you to maintain color fidelity while dealing with different input and output formats, and to choose working color spaces that make it easy to do the transformations you want.

I can only speak for my own code.

There's nothing magically good or bad about ICC transforms. All other color systems, such as ACES etc., use similar mechanisms such as equations and 1D and nD lookup tables.

Applying definitions that are not at all related to ICC profiles and common color management nomenclature is sure to lead to confusion.

Those aren't the color management meanings. Characterization/profiling is what it says it is: measuring and recording the responses of a color device.

All calibration uses characterization/profiling as its input, since you always need two such profiles to create a correction/transformation. A color management system just formalizes it, and typically provides a means of saving the profiles so they can be mixed and matched, rather than thrown away, as happens in typical calibration systems.

This is the type of statement I often hear from people who don't really understand color management, and who think in process terms of tweaking things to make them "right".

Color management is different to that idea: it's not an open-loop adjustment process, it's a closed-loop process end to end. There is no such thing as "true" colors, just profiles. The profiles define the start and end, and the CMM automates creating the transformation between them.

The ICC spec. is for a file format, with information on how to interpret it. Implementations certainly aren't bound by the need to clamp when it doesn't make sense to. My implementation certainly doesn't, and Marti's makes it optional.

3 Likes

What are those?

In the specific case of my PhotoFlow code, the clamping is optional and left as a user's choice. The default is to clamp only negative values, because they can lead to really wrong results in some circumstances.
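The difference between the two clamping choices can be shown in a couple of lines (illustrative values, not PhotoFlow code):

```python
import numpy as np

x = np.array([-0.2, 0.5, 1.3])        # out-of-bound scene-referred values
negatives_only = np.maximum(x, 0.0)   # the negatives-only default
full_clamp = np.clip(x, 0.0, 1.0)     # the full [0;1] clamp
# negatives_only keeps the >1.0 highlight intact; full_clamp discards it.
```

The negatives-only default protects against physically meaningless values while leaving highlight data above 1.0 recoverable later in the pipeline.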

Concerning the clamping in the ICC specs, it is unavoidable when using LUT profiles or V2 matrix profiles, in the latter case because the TRC is defined point-wise in the [0;1] range and cannot be analytically extrapolated (I am unsure about V2 profiles with linear TRC).
On the other hand, when using V4 matrix profiles with an analytical TRC, the conversions can be reliably applied (and are invertible) also for out-of-bound values.

Again, in the PhotoFlow case all working colorspaces are implemented as V4 matrix profiles with analytical TRC, so out-of-bound values can be preserved up to the point where the data is converted and saved to a lossy output format (here lossy also includes TIFF files encoded with integer values, because of the range limitations of the integer representation).

When you open a file from disk, it can be in any colorspace. The first step you want for precise and predictable editing is to promote the values to floating point and then convert them to a well-defined working colorspace (I am using linear Rec.2020 by default).
That's the very important first step in the chain, and fortunately we have ICC profiles and CMS libraries that simplify this task…

ACES keeps the masters untouched. Only the outputs are corrected.

Metrology is the science of measurement and of error characterization in measurement devices; it relies on statistics and physics. It's nice to see that random dudes chose to discard it in image processing as if they knew better. (Fact: they don't.)

Calibration is calibration. I have done calibration curves for chemical and physical instrumentation; it's no tweaking, it's how metrology works: find out the transfer function of the measurement device against reference values (standards), find out the error function (usually it's a linearity error), subtract the error function from the readings to get the real value, and you have a working transducer.

You input RGB primaries, you record the output wavelength with a spectrometer or a colorimeter, compute the error between the expected wavelength and the recorded one, then compute the numeric transform on RGB values that reverts the error, so that you get closer to the expected "true" wavelength. There has to be some true color (standard), otherwise it's pointless to even try to calibrate anything (you calibrate against a standard). Unless metrology has changed since I graduated. (I'm pretty sure it has not.) This is what calibration is amongst physicists. If it's something else for color gurus, they need to get fired right now because they are stupid.
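The metrology recipe described here can be sketched in a few lines. This is a hedged illustration with made-up measurements (a pretend gamma-2.4 device), not ICC machinery: characterize the device's transfer function against known reference inputs, then invert it to build a 1D correction curve.

```python
# Calibration via the reversed transfer function, 1D case.
import numpy as np

drive = np.linspace(0.0, 1.0, 256)   # reference values sent to the device
measured = drive ** 2.4              # pretend measured response (gamma 2.4)

def correct(target):
    """Invert the measured transfer function by table lookup (swapped axes)."""
    return np.interp(target, measured, drive)

# Pre-distort the targets, let the "device" apply its transfer function,
# and the output lands back on the targets: a working transducer.
target = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
out = correct(target) ** 2.4
assert np.allclose(out, target, atol=1e-3)
```

A display calibration would do this per channel (and with a 3D LUT for cross-channel errors), but the inversion-against-a-standard idea is the same.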

Use ACES IDTs and get out of that nonsense. I'm pondering a way to produce IDTs from IT8 charts.

Please use the ImageMagick diff tools to compare the float-32 PFM files and output the RMS. GUI histograms are merely indications, not metrics.

This you can do with ICC profiles as well (and actually we do this). The differences between the two systems are mostly technical, not conceptual.

For digital cameras we can have standard color matrices, DCP profiles that "mimic" the vendor's look of in-camera JPEGs, and custom-made ICC profiles from IT8 targets. All this is based on well-established color science, and the math behind it is known and understood. You can write the formulas yourself and implement them in the code, or rely on a CMS like LittleCMS.

I'd really like to get to the point where in our discussions sentences like get out of that nonsense are either avoided (because they do not explain anything) or followed by a clear explanation of why this is nonsense. The fact that ICC profiles allow for gamma-like TRCs is not nonsense. Instead, it is for example essential for saving the image to low bit depth formats like JPEG, and having the file properly interpreted by other color-managed software.

1 Like

Hi,

let me quote this, and add something more. @anon41087856, from your posts on the topic I understood that:

  1. you have a strong enough scientific background to know what you are talking about; and

  2. you have reasons to believe that some of (most? all?) the image processing apps that get commonly mentioned on pixls are doing something fundamentally wrong regarding colour management.

So far, you have mentioned some problem "categories", in very general terms, which has made people point out that perhaps you were overgeneralising. I think it would be much more productive to focus on specific issues that you have identified. I obviously cannot speak for everyone, but if there is some concrete evidence that things are done in the wrong (or even just sub-optimal) way, I am totally interested in learning.

4 Likes

That's not correct. ACES establishes an input-referred working space, and inputs (i.e. from cameras, CGI, film scans, etc.) need to be converted to that space using an input profile (IDT) to participate in the workflow.

I generally don't refer to the whole of the Color Science, Graphic Arts, Television, Film and Computer Graphics community, with their long and sophisticated history, as "random dudes".
Fact: different areas of endeavor develop their own nomenclature. If you want to have an intelligent conversation about an area you aren't familiar with, then at least be aware of this possibility, if not learn some of it!

Color measurement of course is rooted firmly in metrology. The application of those measurements is not.

Calibration has a few similar but different meanings when applied to color.
While the above meaning can be applied to the instruments, it can't be applied to color spaces or color capture and reproduction devices. In your above context, it's simply assumed that there is "one true value" that an instrument should be calibrated to, and this is not the case for color spaces. There are an infinite number of possible color spaces, so a color output device (for instance) can (with suitable color transform machinery) be made to emulate any one of them. To achieve this requires two profiles: that of the device's native behavior and that of the colorspace to be emulated. Combine them together and you have an automated means of achieving emulation (i.e. "calibration"). Of course there may be lots of other means, with greater limitations, of achieving adjustment or emulation, such as per-channel lookup curves or device-specific adjustments. Generally, then, the term "calibration" is used for making such built-in device adjustments, while "profiling" is done as a way of linking the device into a color management framework.
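The "two profiles give you a transformation" idea can be sketched with matrix profiles. The 3x3 matrices below are illustrative placeholders, not real device measurements:

```python
# Combine a device profile (device RGB -> XYZ) with the profile of the
# space to be emulated (emulated RGB -> XYZ) into one device-link matrix.
import numpy as np

device_to_xyz = np.array([[0.45, 0.31, 0.20],
                          [0.22, 0.69, 0.09],
                          [0.02, 0.12, 0.87]])
emulated_to_xyz = np.array([[0.41, 0.36, 0.18],
                            [0.21, 0.72, 0.07],
                            [0.02, 0.12, 0.95]])

# emulated RGB -> XYZ -> device RGB: drive the device with these values and
# it reproduces the emulated space's colors (within gamut limits).
link = np.linalg.inv(device_to_xyz) @ emulated_to_xyz

rgb_emulated = np.array([0.2, 0.5, 0.8])
rgb_device = link @ rgb_emulated
# Sanity check: both drive values land on the same XYZ.
assert np.allclose(device_to_xyz @ rgb_device, emulated_to_xyz @ rgb_emulated)
```

Real profiles also carry TRCs and possibly LUTs, but the composition-of-two-profiles structure is the same, and because the profiles are kept separate they can be mixed and matched with any other pair.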

So you've just demonstrated that you really know very little about color. I seriously suggest learning something about it before wading in with pronouncements about ICC profiles or any other color management system.

While the output wavelength of a primary of a display may be of interest to the manufacturer of the display in ensuring the consistency of their manufacturing, it is of much less interest in terms of color. What is of interest is the appearance of the display to the observer. That's where Color Science comes in.

Well, I won't call you stupid, but I will call you ignorant. It really doesn't matter that much whether all the displays of one model are identical or not (although it is desirable, of course), because displays of different models, or from different manufacturers, or with different technologies, or with their controls set differently, will not look the same when the same RGB values are fed into them. There is no "one true value" to set things to, to make them all the same; they aren't designed to be the same. The whole point of color management is to deal with this fact.

How do you think IDTs are defined? They use the Color Transformation Language, which allows a variety of possible transformation elements, including equations, functions and LUTs, just like ICC profiles!

4 Likes

:+1:

Getting an IDT from IT8 charts would be trivial with ArgyllCMS[1] (see the manual of collink, especially the -3 c option for cube LUT output), although this will only work for shots taken under the same conditions as the IT8 target was shot under. Also remember the IT8 target can only cover Pointer's gamut at most, since it is a printed target (so no specular highlights covered).

Anyway, I think I found at least a temporary solution for the ingest problem with the rawtoaces tool; sadly, it seems the last update was a year ago, so it is probably a bit out of date.


EDIT to add
[1] I would like to thank @gwgill for this awesome software

2 Likes

@anon41087856 clearly has a strong scientific background. But that doesn't mean he's taken the time to learn how ICC profile color management really does work. From what he's said, which @gwgill has kindly taken the time to help unravel, it doesn't seem that @anon41087856 is talking about actual ICC profile color management at all.

Adding to @agriggio's request for a discussion focusing on @anon41087856's specific issues with ICC profile color management, it would be nice if @anon41087856 could start a separate thread on the topic.

When using LCMS2 to convert from a source ICC profile to a destination ICC profile, and considering only what happens regarding the RGB channel values in the source ICC profile:

  • If the source ICC profile is an RGB matrix color space profile with a true "gamma=1.0" TRC, no RGB channel values (whether <0.0 or >1.0) are clipped upon conversion to another color space. This applies to V2 and V4 profiles.

  • If the source ICC profile is an RGB matrix color space with a true "gamma=x" TRC, where x doesn't equal 1.0, then all negative RGB channel values are clipped, but channel values >1.0 are not clipped. This applies to V2 and V4 profiles.

    The reason specified is that there is no unambiguous way to extrapolate below 0.0 for true gamma TRCs that aren't linear gamma TRCs.

  • If the source ICC profile is an RGB matrix color space with a point TRC (even if the point TRC is a LUT version of a true gamma TRC), channel values outside the range 0.0f to 1.0f are clipped. This applies to V2 and V4 profiles.

  • If the source ICC profile is an RGB matrix color space with a parametric curve (which requires a V4 profile), then nothing is clipped, assuming the parametric curve itself allows unambiguous extension outside the range 0.0f-1.0f.

So V2 and V4 sRGB profiles that use a point TRC will clip. But if the sRGB TRC is specified in a V4 profile by using a parametric curve, then there's no clipping, because of the straight-line portion near zero, which allows unambiguous extrapolation below 0.0, and the gamma curve elsewhere, which allows unambiguous extrapolation above 1.0.
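A sketch of why the parametric sRGB curve extends unambiguously outside [0;1]: the linear toe continues below zero (mirrored through the origin), and the power segment continues above one. This mirrors the behavior described above, though exact out-of-range handling is up to the CMM; the formula here is illustrative, not a CMM implementation.

```python
# Parametric sRGB decode, extended outside [0;1] without clipping.
import numpy as np

def srgb_decode(v):
    """Encoded sRGB -> linear, sign-symmetric extension outside [0;1]."""
    v = np.asarray(v, dtype=float)
    linear_part = v / 12.92                                   # toe, continues below 0
    power_part = np.sign(v) * ((np.abs(v) + 0.055) / 1.055) ** 2.4
    return np.where(np.abs(v) <= 0.04045, linear_part, power_part)

x = np.array([-0.1, 0.0, 0.5, 1.2])
y = srgb_decode(x)   # no values are clipped: y[0] < 0 and y[3] > 1
```

A point (LUT) TRC has no formula to evaluate outside its table, which is exactly why it must clip where the parametric curve need not.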

Please note that the destination color space might limit the colors that can be encoded regardless of the source color space; for example, LUT printer profiles can't encode colors that would require RGB channel values outside the range 0.0f to 1.0f.

Has everybody's eyes glazed over yet? :slight_smile:

1 Like

Actually I did just that, back in February of 2013:

LCMS2 Unbounded ICC Profile Conversions

When Marti Maria rewrote LCMS (LittleCMS) and released it as LCMS2, his motivation wasn't just to accommodate V4 ICC profiles: he designed LCMS2 to work in "unbounded mode". Unbounded-mode ICC profile conversions eliminate interim clipping from color space encoding limitations. And when using linear gamma profiles at 32-bit floating point image precision, in unbounded mode there is no gamut clipping even when the image color gamut exceeds the color gamut of the destination color space.

I used floating-point TIFFs instead of PFM files. Both containers hold floating-point channel values, but TIFFs allow embedding ICC profiles.

Anyway, I think I found at least a temporary solution for the ingest problem with the rawtoaces tool; sadly, it seems the last update was a year ago, so it is probably a bit out of date.

There are some newer forks, what is needed is someone who merges this all together.

The article linked to in this post has a nice explanation:

My old (and increasingly outdated) review of free/libre raw processors was done from the point of view of how easy it was to output a scene-referred raw file that doesn't have any channels clipped by the raw processor that weren't already clipped in the raw file:

A Review of FLOSS Raw Processors, Part 1

By default, raw processors are set up to produce enhanced output (i.e., already sharpened, with increased color saturation and increased mid-tone contrast, not unlike in-camera-produced JPEGs). But I don't want "enhanced" output. I want radiometrically correct, scene-referred output as the first step in a linear gamma image editing workflow.

In that review I defined "scene-referred" as

"Radiometrically correct, scene-referred output" means the channel values in the interpolated image are proportional to the amount of light that was recorded in the raw file.

But really this is a definition of "camera-referred" or "focal-plane-referred": the lens adds its own flavor to the resulting raw file, as does the sensor, and also whatever processing the camera did before saving the raw file.

The accuracy with which scene colors are captured depends on a long list of things, starting with the lens and ending with the camera input profile applied to the interpolated raw file. But people use "scene-referred" as a shorthand way to say "as scene-referred as possible" given the camera, lens, lighting, input profile, etc.

OK, how do you get "as scene-referred as possible" when you interpolate a raw file? You just do the basic interpolation + white balance + exposure compensation, plus image repair options such as flat-fielding, lens aberration, hot pixels, chromatic aberration, perhaps denoising. You don't do any "make pretty" algorithms such as Curves or Channel Mixer that would destroy proportionality with respect to the ratios captured by the raw file.
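The distinction above comes down to which operations preserve the raw file's channel ratios. A minimal sketch (illustrative values): white balance and exposure compensation are per-channel scalar multiplies, so pixel-to-pixel ratios survive, while a tone curve breaks them.

```python
# Proportionality-preserving vs. "make pretty" operations.
import numpy as np

raw = np.array([[0.10, 0.20, 0.40],
                [0.05, 0.10, 0.20]])   # two pixels; pixel 1 is 2x pixel 2

wb = np.array([2.0, 1.0, 1.5])         # illustrative WB multipliers
exposure = 2.0 ** 0.5                  # +0.5 EV exposure compensation

scene_referred = raw * wb * exposure
# Pixel-to-pixel ratios are unchanged: still exactly 2x per channel.
assert np.allclose(scene_referred[0] / scene_referred[1], raw[0] / raw[1])

tone_curved = scene_referred ** (1 / 2.2)   # a "make pretty" curve
# The ratios no longer survive (2x becomes about 1.37x):
assert not np.allclose(tone_curved[0] / tone_curved[1], raw[0] / raw[1])
```

This is why the repair operations listed above are allowed (they are linear or per-pixel corrections) while Curves and Channel Mixer are not.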

"Closer to scene-referred" requires making a custom camera input profile for each lighting condition and lens, and possibly each ISO, etc.

More stringent definitions of scene-referred require that the image intensities be not just proportional to scene intensities, but instead equal to scene intensities.

In-camera options that modify shadows and highlights before saving the raw file (what happens to any JPEG is irrelevant) by design don't save raw files with intensities proportional to the intensities reaching the sensor.

1 Like

Now, I won't be speaking with the clarity of @dutch_wolf. :slight_smile: I don't have to write concisely on a daily basis, so I have grown lazy. :stuck_out_tongue: @anon41087856, I don't think people here are disputing the relevance of metrology. Over-emphasizing it, and using it to point out the (in your opinion) invalidity of the ICC system, trivializes both metrology and ICC.

The first thing that I would like to point out is that consumer and professional cameras aren't scientific instruments. Scientific instruments are much more robust, single-purposed and calibrated (and expensive $$$$) than cameras (even state-of-the-art ones). The range of measurements allowed by the device is carefully characterized and profiled by people who do it for a living, and who do it with equipment that you couldn't possibly have in your business or FOSS garage, so the user doesn't have to do it. Transfer curves, accuracy, precision, operating conditions, etc., are all made transparent per characterization and profiling, and are expected to be followed stringently if the scientist ever wants to measure good data. Even then, there would be limitations to the research, which would have to be noted, hopefully in full, in the paper. Moreover, the instruments have to be sent back (very carefully and expensively transported) to the manufacturer or company on a regular basis to re-calibrate, re-characterize and re-profile them. Very expensive to do and have it all. Every lab has to contend with the good-enough-data problem.

I will speak less about ICC because we already have knowledgeable and experienced people to address it here. Based on what I know, the ICC and CMM system is able to model colour accurately enough. It is as others have said: they are tools that do that. Does it come with limitations? Sure it does, like everything else; but what it addresses doesn't replace metrology. To me, it just makes sense of the numbers that metrology produces. Of course, there is overlap. A good model or system overlaps with other ones. And I think the parts that overlap, with metrology, ACES, OCIO, etc., are exactly what this forum is trying to figure out one step at a time.

So, let's work together. Yes, let's consider the pros and cons; but also how a pro of one system or model could be used to understand or improve a con of another system or model.

2 Likes

A naive question: isn't the "look" part of the ACES workflow where most of the raw developing work takes place? It sounds a bit like an off-the-shelf style when discussed in these threads. An off-the-shelf style is only of peripheral interest to photographers? So the tools to flexibly and rapidly create looks would be the critical part of such a workflow.

Having tested the filmic module I quite like it for tonemapping, even if I haven't mastered it, so I'm curious how workflows would change with a full ACES workflow.

*I use the acronyms as shorthand and not as precise terms like my fellow pixlrs above.

As I described above, the initial look will be set by the director, cinematographer and DIT (digital imaging technician) during recording (this can be done per shot). This initial look will then be transferred to all the post-process departments, which will use it to do their thing (be that 3D, VFX, compositing or editing).
This look will also be the starting point for the final grade, which is done after all the other post-process work is finished.

The looks are transported as part of a set of metadata files (ACESclip) and can be stored either as an ASC CDL (Color Decision List) or an LMT (Look Modification Transform; note an LMT might be built from an ASC CDL).

Currently I don't know of any open-source tools that can do this, but it is part of the ACES workflow, and in this spot I do believe the biggest difference between ACES and a scene-referred photography workflow will be found. Do note that even in ACES it would still be possible to edit the scene-referred data, so depending on the goal an off-the-shelf look might not be nearly as limiting as it first seems (also depending on the look, of course).

1 Like