Deliberately difficult gamut colors in macro shot of hex nut

Linear. They have to be, to be an accurate representation of the raw data to which they are assigned.

Edit: And, strictly speaking, the matrix doesn’t represent the data’s linearity. The tone curve, or absence thereof in this case, is what represents that.


Thanks @ggbutcher and @JackH for your clear explanations and for your patience. So I'll take the opportunity to ask a few questions that I hope are not completely stupid.

First question: it seems to be assumed that sensors, and therefore raw data, are linear. Color vision, on the contrary, is certainly a highly non-linear process. Yet one of the first steps in processing is to transform the data into color information encoded in a working space.
So either we use a matrix, in which case the transform is a rough approximation but the processing stays scene-referred, or we use a complementary LUT that takes the specificities of color vision into account, in which case the transform can be accurate but the processing is no longer scene-referred; it is vision-referred.
What is better for processing raw: staying in camera space, applying an approximate matrix, or using a matrix+LUT?
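
For illustration, here is a minimal numpy sketch of the two options; the matrix values and the correction LUT are made up for the example, not taken from any real camera profile:

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ matrix (made-up numbers, not a real profile).
# Applying it is a purely linear operation, so the data stay scene-referred.
CAM_TO_XYZ = np.array([
    [0.70, 0.20, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.15, 0.80],
])

def matrix_only(cam_rgb):
    """Matrix-only input transform: linear and scene-referred, but approximate."""
    return cam_rgb @ CAM_TO_XYZ.T

def matrix_plus_lut(cam_rgb, lut):
    """Matrix followed by a correction LUT. The LUT can encode non-linear,
    per-color corrections the matrix cannot, so the result can be more
    accurate, but it is no longer a linear map of the scene."""
    xyz = matrix_only(cam_rgb)
    n = lut.shape[0]
    # Toy correction: nearest-neighbour lookup of an offset in a small 3D LUT.
    idx = np.clip(np.round(xyz * (n - 1)).astype(int), 0, n - 1)
    return xyz + lut[idx[..., 0], idx[..., 1], idx[..., 2]]

identity_lut = np.zeros((5, 5, 5, 3))       # a do-nothing correction LUT
pixel = np.array([0.10, 0.20, 0.90])        # a "difficult" bluish camera value
print(matrix_only(pixel), matrix_plus_lut(pixel, identity_lut))
```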

Second question, related to the output transform:
During processing, for display you do working > display using the display profile ("processing display").
When you export an image you do working > output using an output profile (sRGB, Rec 2020).
When you display the exported photo, working > output > display is applied ("output display").

  • First, suppose the output profile gamut is smaller than the display profile gamut; for me that is the case with sRGB. The processing display will have richer colors than the output display. In ART and RT we can observe that directly with the soft-proofing function.
  • Second, suppose the output profile gamut is larger than the display profile gamut (ProPhoto, ACEScg…). There could be small differences between the processing display and the output display due to different shrinking of colors. But viewed on different displays with different gamuts you will see different colors; the colors will not be reproducible.

Hope the questions are clear enough and make sense.

Don’t @ggbutcher’s forays into SSF measurements clearly show that it isn’t linear when it comes to color? The data might be given as a triplet for RGB, but the sensitivities measured clearly show that these do not linearly match the scene, but something derived from it through the Bayer filter and sensor sensitivity, which together aren’t linear. So I’d say the “transform” (measurement) going from the scene to camera/raw space already isn’t linear. Or am I misinterpreting/-attributing those results?

Right now, I’d personally say it depends. Before this play raw, I hadn’t had problems editing in camera space, then transforming for display or output (export). But I found with this image that I couldn’t apply any non-linear tone transform, that is, do anything to lift the image data away from linear, without mucking with the blue hue. My surmise at this point is that it’s quite fine to work images with “normal” colors, that is, colors that are pretty much already within whatever output gamut, in camera space. Otherwise, knocking down the extreme colors to a smaller working gamut is necessary to retain their hues in the non-linear tone transform campaign…

Now, my thinking in implementing all this in rawproc is that one needs to be able to see the image transformed to the capabilities of the display. And, when one saves the image to a file, it should be embedded with a profile that describes its gamut and tone with respect to linear. Indeed, if someone else wants to view that image in their color-managed viewer, the viewer needs to know the gamut and tone state in order to do the transform to their particular display gamut and tone state. That respects the essence of how ICC-based display “calibration” works.

In rawproc, a toolchain of operations is built that step-by-step processes the image away from its raw state to a rendition state. I put a checkbox beside each tool in the chain, which selects the image at that step for display. If you want to see the image in the state in which it would be exported, you put the check on the last tool (bottom of the list) in the chain. Oh, and one of the available tools is a colorspace tool; it'll either assign a profile to the image or convert it. If no colorspace tool is inserted, then no color transform is done for display or export, just the unconverted pixels processed up to the tool that is checked. This has the following implications related to your question:

  1. If there is not at least one colorspace tool assigning a profile to the image in the chain, the display and output transforms will just show the image processed to the point of the checked tool, or save the image as of the last tool in the chain; in neither case will the image be color- or tone-transformed.

  2. If there is a single colorspace tool in the chain that assigns a profile, then that profile will be used by the downstream tool selected for display, and by the last tool for export, converting to a profile identified in the rawproc properties for that image type (I can specify an sRGB export profile for JPEGs, a ProPhoto profile for TIFFs, etc.)

  3. If, in addition to #2, there is another colorspace tool in the chain that converts from whatever profile is presented to that tool to another profile, say, a working profile, then the downstream display and export will both convert from that colorspace to their appropriate profiles.

Yes, this sounds convoluted when written down, but it really allows one to consider the sort of thing about which you are inquiring. To your specific question, the one thing I don’t do is working > display > output, as that doesn’t meet the intent of the ICC architecture. Display and output should be thought of as two different renditions, and their color transforms support their own capabilities and peculiarities. Otherwise, I usually keep the last tool checked so I can see both the amalgam of my processing and what it will look like in someone else’s color-managed viewer…
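
For the curious, a minimal sketch of that kind of toolchain, just to illustrate the idea; this is not rawproc's actual code, and the tool names and operations are made up:

```python
import numpy as np

class Tool:
    """One step in the chain: transforms an image and can be 'checked' for display."""
    def __init__(self, name, fn):
        self.name, self.fn, self.checked = name, fn, False

def run_chain(raw_img, tools):
    """Run every tool in order; return the image at the checked tool (what the
    display shows) and the image after the last tool (what gets exported)."""
    img, display_img = raw_img, None
    for tool in tools:
        img = tool.fn(img)
        if tool.checked:
            display_img = img
    return (display_img if display_img is not None else img), img

# Made-up chain: demosaic placeholder, a colorspace step, a simple tone curve.
chain = [
    Tool("demosaic",   lambda im: im),
    Tool("colorspace", lambda im: im),                        # assign/convert here
    Tool("tone",       lambda im: np.clip(im, 0, 1) ** 0.45), # non-linear lift
]
chain[-1].checked = True   # check the last tool to preview the export rendition
display_img, export_img = run_chain(np.random.rand(4, 4, 3), chain)
```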

Not at all. What we’re seeing on the screen is the sum of our attempts to pull colors our rendition media can’t display nicely into some niceness. The input data to that is definitely energy-linear, that is, doubling a measurement expresses a doubling of the encoded light power. Film had some non-linearity at its lower and upper ends, but digital is largely linear in this sense.

Keep in mind, what we’re trying to control are things we can’t effectively see through the “narrow pipes” that are our rendition media…

It may depend on the extent of operations you wish to perform. Are you familiar with Elle’s examples of linear vs non-linear operations:

Easy to do your own tests. Duplicate a TIFF. Convert v1 to linear Rec 2020, and v2 to (gamma-encoded) Rec 2020. Open them in your favourite editor and use your favourite modules on both. When I’ve done this, linear operations looked better 95%+ of the time, to my taste. However, I have not compared editing in camera LUT space against the typical workflow of matrix > working profile. Again, it would be simple enough to test. Duplicate your raw. In v1 apply the LUT as both input profile and working space. In v2, apply the matrix as input profile and some linear wide gamut as your working space. Same output profile for both. Compare results.
What I’ve learnt from this discussion is that the ideal solution, particularly in problematic cases such as this one, may be to use the LUT as input and a linear wide gamut as working space. In fact, in problematic cases it may even be better to use linear sRGB as the working space. That way the LUT can scale the gradients down to output size nicely, whereas if you went LUT > wide gamut > output, you might risk gradient problems resurfacing in the second transform.
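
If you want to script that first comparison rather than click through it, something like the following sketch could work; it uses Pillow's ImageCms, and the file and profile names (for example Elle Stone's ICC profiles) are assumptions you would replace with your own:

```python
from PIL import Image, ImageCms

src = Image.open("test.tif")                        # the TIFF to duplicate
src_profile = ImageCms.getOpenProfile("sRGB.icc")   # whatever profile it carries

# v1: linear Rec 2020, v2: gamma-encoded Rec 2020 (profile file names assumed).
for out_name, icc_path in [("v1_linear_rec2020.tif", "Rec2020-linear.icc"),
                           ("v2_gamma_rec2020.tif",  "Rec2020-g24.icc")]:
    converted = ImageCms.profileToProfile(
        src, src_profile, ImageCms.getOpenProfile(icc_path),
        renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
    converted.save(out_name)
# Then edit both copies with the same modules and compare the results.
```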

Everyone: If I don’t make any sense or am getting way too off-topic, please say so and I’ll stop :slight_smile:

I am aware it’s energy-linear; that’s not my point. I am talking about linearity in color, i.e. relationships between “reality” (wavelengths) and RGB/XYZ/whatever color spaces that try to represent that reality in some way (through a perception model or a Bayer filter or anything else). I am not concerned with output media; I do know that limitations on those require all kinds of provisions, linear or not. Let me try to put it differently:
The scene would be perfectly described by a wavelength and luminosity at each point. Of course in imaging there’s tons of complex stuff on top of that due to human perception and limited media, but in principle those are just mappings to that physical description (in reality maybe imperfect ones, but the principle still holds). So we have reality on one side of the camera, and some color space on the other end. The channels of raw data in between depend, through complex sensitivities, on the wavelengths/colors, as shown in e.g. one of your measurements: https://pixls-discuss.s3.dualstack.us-east-1.amazonaws.com/original/3X/6/e/6e1c39ed67a60565679ea77cbf1df261cf776d35.png I don’t see how mapping these three raw channel values back to wavelengths/colors can be linear. To my understanding that’s why a 3x3 matrix, converting to XYZ or any other color space, is always an approximation/optimisation. Multiplying by a 3x3 matrix is a linear transform, and the real transform is much more complex and not linear. My probably incorrect conclusion is that to get a linear relationship between reality and some color space with a camera in between, you need a non-linear transform from camera channel values to the color space.
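
That is essentially what building a matrix input profile amounts to: finding the 3x3 matrix that best approximates the camera-to-XYZ mapping in a least-squares sense, knowing it can never be exact. A rough numpy sketch of the idea, with random numbers standing in for measured patch data:

```python
import numpy as np

# Stand-in training data: camera raw RGB for a set of patches and the XYZ
# values those patches "really" have (in practice from SSFs or a target shot).
cam = np.random.rand(24, 3)
xyz = np.random.rand(24, 3)

# Find the 3x3 matrix M minimising ||cam @ M.T - xyz|| over all patches.
M_t, residuals, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
M = M_t.T

# The leftover error per patch is exactly what the linear model cannot reach;
# a correction LUT on top of the matrix is one way to mop it up.
print(np.linalg.norm(cam @ M.T - xyz, axis=1))
```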


The Human Visual System processes photons arriving at the retina non-linearly - but until they get there it expects them to behave linearly :slight_smile:

We therefore try to keep the workflow as linear as possible. Alas, that is not possible for a number of reasons, some of which were mentioned upthread, but we try our best until we can’t, or until we decide to switch from ‘accurate’ to ‘pleasing’ mode. The LUTs that were described in this thread are not there to take into account specificities of color vision; they are there because, while trying to keep the system linear (by using a 3x3 matrix to exit raw camera space), we introduce errors. For most applications, therefore, matrix+LUT is the way to go, and that’s what I believe all current commercial raw converters do.

As long as you move around the various spaces using a matrix and floating point notation the process is linear and thus fully reversible. Therefore in theory it does not make any difference what space you work in. In practice it is best to stick to standardized ones, chosen for good reasons to be a step in the right direction towards the output space. Ideally all input and output devices would have the same color space, but that’s typically not the case. You want to process data in a color space that is the same size or a little larger than your output’s. But not much larger if you don’t want to be surprised at the very end.
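
A quick way to convince yourself of that reversibility, at least numerically; the matrix here is the commonly published linear sRGB-to-XYZ one (D65):

```python
import numpy as np

SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

rgb = np.array([0.10, 0.20, 0.90])          # some linear sRGB value
xyz = SRGB_TO_XYZ @ rgb                     # forward
back = np.linalg.inv(SRGB_TO_XYZ) @ xyz     # and back again
print(np.allclose(rgb, back))               # True, up to floating-point error
```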

Jack


If the only colors we could perceive were the spectral colors, those that are unambiguously tied to a single wavelength, then I think there’d be room for a linear model of color. But, when the head wants to mix received wavelengths into single notions, that expectation goes out the window…

You don’t have to map them back to the original spectral power distribution, because our model of the HVS is based on daylight receptors that not coincidentally come in just 3 cone flavors (rho, gamma, beta or LMS). So the input to the HVS is just another three dimensional color space and you can project off to any of its peers simply by applying the appropriate 3x3 transform matrix, which you can easily find online (for instance XYZ ↔ LMS).
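
For example, one commonly quoted XYZ-to-LMS matrix is the Hunt-Pointer-Estevez one (the equal-energy-normalised variant; Bradford, CAT02 and friends differ slightly):

```python
import numpy as np

# Hunt-Pointer-Estevez XYZ -> LMS matrix (equal-energy-normalised variant).
XYZ_TO_LMS = np.array([
    [ 0.38971, 0.68898, -0.07868],
    [-0.22981, 1.18340,  0.04641],
    [ 0.00000, 0.00000,  1.00000],
])

xyz = np.array([0.9505, 1.0000, 1.0890])     # D65 white, for instance
lms = XYZ_TO_LMS @ xyz                       # approximate cone responses
xyz_back = np.linalg.inv(XYZ_TO_LMS) @ lms   # and back, since it's just a 3x3
```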

Jack

PS You may have seen this.


@paperdigits Consider splitting this thread. There is more discussion on colour theory than Play Raw submissions.

That was kind of the point of this play raw, wasn’t it? The image was specifically made with a set of conditions under which we know the technology stack doesn’t cope well at the defaults.

I think of threads as living creatures, they go where they need to go, rather than as a rigid box where we try to group things together.

  • My first question (input profiles):

@JackH I can agree with and understand what you write. And, if I understand correctly, the choice of the type of input profile is a compromise between the energy-linearity that is kept by a matrix profile and the correctness of color rendering obtained with a matrix+LUT profile.
@rasimo I think, as you do, that at some point some non-linear color-correcting function (a 3D LUT for instance) has to be applied. When one exists, the correcting LUT is generally applied by raw processors at the beginning of the processing pipeline.

@ggbutcher

I didn’t really catch it. I think I understand that you prefer to stay in camera space for processing, but I am not sure. Could you clarify, please?

@Soupy I am not speaking about a non-linear space, gamma, or a fancy artistic LUT, but about the color-correction LUT that can come in addition to the matrix in an input profile defining the transformation from camera space to the connection space. I always use a linear working space in ART and RT.

  • My second question (output):

When going from a large-gamut camera space to a small-gamut output space, you cannot avoid shrinking colors and getting gradient and hue problems. Here the blue is going to violet.

The problem is that my display gamut is larger than sRGB, especially in the blues.
And yes, it can be a surprise when you compare the image viewed during processing (working space > display space) with the exported image (working space > output space) viewed on the same display (output space > display space).
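
You can see that shrinking numerically with the commonly published linear Rec 2020 and sRGB matrices; the Rec 2020 blue primary lands outside sRGB, and whatever you then do with the negative channels (clip them, compress them) changes the color:

```python
import numpy as np

REC2020_TO_XYZ = np.array([   # linear Rec 2020 RGB -> XYZ (D65)
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
])
XYZ_TO_SRGB = np.array([      # XYZ -> linear sRGB (D65)
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

blue = np.array([0.0, 0.0, 1.0])                # Rec 2020 blue primary, linear
srgb = XYZ_TO_SRGB @ (REC2020_TO_XYZ @ blue)
print(srgb)                                     # ~[-0.07, -0.01, 1.12]: out of sRGB gamut
print(np.clip(srgb, 0.0, 1.0))                  # a naive clip gives a different color
```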

Thanks @waveluke for this intriguing image.

Another surprise: the blue is not displayed correctly. Click on the image.

@paperdigits No problem. Your response will help those worried about being off-topic (@rasimo).

Viewed in Firefox, my colour picker reads an average of (0, 35, 255); averaged to eliminate JPEG artifacts.

I suppose you did not click on the image.

Well, I didn’t exactly prefer it, but I tried it and it worked fine. My batch proof processing produces small JPEGs with a simple matrix profile this way, and the colors come out fine in most cases. And, the same workflow seemed to work okay for the theater image I used in the SSF threads, but I think I was misled by the matrix renditions to think the cyan was normal. Wasn’t 'till I got to the “nut image” that I observed the hue shift when I applied a non-linear tone transform (filmic), and found that first converting to a working space helped that.

I viewed it at 100%. Something might be wrong with my setup.

Picking with the Firefox color picker directly on this page I get a #3f00ff color (63, 0, 255), but a true blue (0, 0, 255) after clicking on the image.

My conclusion: deep blue is a nightmare on photos.


According to GIMP (linearly scaled to 1x1)


I never said anything about a fancy artistic LUT. I’m talking about the same thing you are. You were asking about working in camera space; well, in that case, working space = input profile.

My understanding from this thread on the use of LUTs instead of a matrix as the input profile is that they can be non-linear, contain gamut compression and the like, which helps with difficult gradients as in this picture. Perhaps I have misunderstood and it is not just a LUT, but matrix+LUT, whereby we still have the linear matrix, combined with a non-linear LUT to handle the transform?

However, if that’s the case and we went matrix+LUT > wide-gamut working > output, shouldn’t the working space also have a LUT, to handle that transform?

Yes, of course, but one of the fundamental aspects of this topic has been how to reduce those problems, with non-linear (gamut-compressing?) LUTs being a good answer, because they can better handle the gradients, as seen in ggbutcher’s play raw, instead of just lumping everything at the border of the next space, which I was told is what creates the fringing when using a standard matrix and relative intent.