“Aurelien said: basecurve is bad”

Yes, but as long as you develop a low number of images (so you don’t need styles or to copy/paste history stacks), or only use one camera, that’s not a problem. As Glenn affectionately calls rawproc my (his) hack software, I think he’s not really worried about supporting huge user bases and offering a tool for people who have to crank out dozens or hundreds of images per day.

On the other hand, for C1, this may not be the case. In fact, I suspect it is not.


For sure, I suspect you are correct. I was (and am again, likely) wrong in thinking that the raw RGB numbers don’t mean anything except to that device and need to be mapped with tables or matrices to a PCS. From this reference, edits and color space transforms can then happen with predictable results…

@ggbutcher I know you have a few threads where you have explained your adventures into spectral camera data and your work on rawproc… I would be interested to go back and read them all, but time dictates otherwise. If you did elaborate on your decision to use the camera space and forgo a working profile, I would love to read that over… TIA

And that’s what happens in darktable for all raw images: the module input color profile is added for all raw files and cannot be disabled. And it’s pretty early in the pipeline as well, before any operation that deals with colour.


And? Of course, if you multiply the input by alpha, it will be multiplied by alpha at the end, after linear transformations.

But if you use two different cameras with two different filters, the signals you get are not related by a linear function, as they get different signal strengths when you illuminate them with the same spectrum.
If you change the light stimulus, the spectrum, each of them gets a different signal, and those signals do not preserve the previous relation between them.

So to convert from one camera to another you cannot use a linear function; that is what I was saying.
The same applies to colors perceived by an observer.
If you want to get accurate color results from different cameras, matching the colors a standard observer can see, you cannot use a linear transform.

Convolutions are linear in the sense that if the spectrum energy is multiplied by a factor, the signal is of course multiplied by that factor.

I did not say that.
What I said is that if you want to convert the signal you get in R, G, B through the filters of a camera into what a standard observer would have seen in the same situation, you cannot use a linear transform if you need accuracy, nor if you want to transform the results from one camera to another camera, as the transfer functions of the filters are different and respond differently to different wavelengths.

So the multipliers that work under, say, a green light to get the green color for the observer won’t work under a bluish light, where you would need other coefficients.

A linear transform is just an approximation; the LUT is just the measured coefficients needed to get the appropriate results for the different lights received by the sensor, so they work for that color.
If you don’t get good results using the LUT, either it has been incorrectly measured or it doesn’t have enough samples and the interpolation is too coarse.
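To make the matrix-vs-LUT point concrete, here is a minimal sketch in Python/NumPy with made-up patch values (not from any real camera or chart): it fits the best 3x3 matrix to a handful of camera-RGB → XYZ patch measurements by least squares and reports the residual error that no single matrix can remove.

```python
import numpy as np

# Hypothetical patch measurements: camera RGB values and the XYZ values
# the same patches should have (e.g. measured with a spectrophotometer).
rgb = np.array([[0.80, 0.10, 0.05],
                [0.20, 0.60, 0.10],
                [0.05, 0.15, 0.70],
                [0.40, 0.40, 0.35],
                [0.65, 0.50, 0.20]])
xyz = np.array([[0.35, 0.20, 0.02],
                [0.30, 0.55, 0.10],
                [0.18, 0.12, 0.75],
                [0.38, 0.40, 0.35],
                [0.50, 0.48, 0.18]])

# Best 3x3 matrix M in the least-squares sense: xyz ≈ rgb @ M
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

# Residual error of the linear approximation, per patch and in total.
residual = xyz - rgb @ M
print("per-patch squared error:", np.sum(residual**2, axis=1))
print("total squared error:", np.sum(residual**2))
```

A LUT-based profile instead stores the measured patch values themselves and interpolates between them, which is why its accuracy depends on how many patches were measured and how coarse the interpolation is.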

Nothing to do with basecurve.
The topic came up when talking about what C1 does and how it works.

I just said that it uses ICC profiles and does its processing in the camera space and gamut instead of transforming to a working space.

It was said that a linear transform is better for color accuracy, and I just said it is not, as the relation between the colors humans perceive and the colors captured by a sensor is not a linear function; it behaves differently under different light colors.

I am not sure the proposed exercise makes a lot of sense. Assuming your “standard observer” is human, are you comparing sensor readings to signals from photoreceptor cells? Of course the two are very different, and the brain does a lot of post-processing.

But I think this whole issue is a red herring: even trivial editing involves some color calibration, so this is not a step you can skip or automate. So maybe some software does some kind of initial correction, which I will have to tweak anyway… I am not convinced this is practically important (if you disagree, please show images).

This is of course understood. Using a linear representation in the pipeline as long as you can is useful for the purposes of processing photos (avoiding artifacts, etc). I think the claim is that it makes life easier if you edit photos, or write software to do this. That’s all.

Incidentally, “accuracy” is not really defined in this context, all photos are “processed” to a certain extent to evoke some kind of response from the viewer. Some of this response depends on properties of the eye, but a lot of it is cultural conditioning (ie adults currently alive see photos all the time, and internalized various visual cues that date back to film and evolve constantly).

Yes, it seems (from what I have read about C1, as they do not explain exactly how they do the development) that they use the ICC profile to get accurate color across the whole gamut, and transform to a color space with the gamut the camera can capture.

Then they process the image there, and only at the end of the workflow convert to the display, the output device, or a working space.
They do not use an intermediate space.

I don’t think that working directly in RGB interpolated from raw data with a conversion to a “human” color is a good idea.

Other software uses a linear matrix transform to convert to an RGB working space (with a different gamut than the camera’s) and from there to the output (three transforms).
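As an illustration of that chained-matrix pipeline, here is a minimal sketch with hypothetical 3x3 matrices (real ones would come from the camera profile, the chosen working space and the output profile):

```python
import numpy as np

# Hypothetical 3x3 matrices, purely illustrative values.
cam_to_xyz     = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.0, 0.1, 0.9]])
xyz_to_working = np.array([[ 1.7, -0.5, -0.2],
                           [-0.3,  1.4, -0.1],
                           [ 0.0, -0.1,  1.2]])
working_to_out = np.array([[ 1.1, -0.1,  0.0],
                           [-0.1,  1.1,  0.0],
                           [ 0.0,  0.0,  1.0]])

pixel_cam = np.array([0.42, 0.35, 0.12])   # demosaiced camera RGB for one pixel
# camera RGB -> XYZ -> working RGB -> output RGB, one 3x3 multiply per step
pixel_out = working_to_out @ xyz_to_working @ cam_to_xyz @ pixel_cam
print(pixel_out)
```

Because every step is linear, the whole chain collapses into a single matrix; what actually differs between pipelines is the gamut of the intermediate space and where any clipping occurs.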

I am not comparing.

When you use an ICC profile, it is to get the best approximation, in XYZ space (so human, as color is a human concept; we don’t know how colors are perceived by an insect), of the light spectrum the sensor received.
The sensor gets an RGB value after filtering that light spectrum (a convolution) with its filter characteristics.

The ICC profile is constructed by measuring known color patches in camera space and calculating the transform needed to get the values they should have.

Indeed, you are sampling a vector function where the output vector (XYZ) is a non-linear function of the input vector (RGB): (x, y, z) = f(r, g, b).

It is not linear across the gamut; it is linear only in the sense that if you multiply the input vector by a scalar, the output XYZ will be multiplied by the same scalar.

But the function f itself is not linear and cannot be expressed by a matrix multiplication, and the relations among r/g/b won’t be preserved in x/y/z when you change the input value.
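Restating that distinction in symbols (this is just the standard definition of linearity, not a new claim):

```latex
% Homogeneity (scaling with overall light intensity), which does hold:
f\big(\alpha \, (r,g,b)\big) = \alpha \, f(r,g,b), \qquad \alpha > 0
% Additivity, which the camera-to-XYZ mapping need not satisfy in general:
f\big((r_1,g_1,b_1) + (r_2,g_2,b_2)\big) \ne f(r_1,g_1,b_1) + f(r_2,g_2,b_2)
% Only when both hold can f be written as a single 3x3 matrix M:
(x,y,z)^\top = M \, (r,g,b)^\top
```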

Accuracy is perfectly measurable; usually a quadratic distance is used, the sum of squared errors between what you get using the transform and the values it should have.

Of course, if you do later transforms then you are making artistic interpretations, and it is up to you what you want to get or how you want your image to look.

What you need to be linear is the color space where you work to process the color, not the function used to transform from camera RGB to that color space (the nonlinear function still has to be proportional to the intensity of the light received, the total energy in the visible spectrum).

We expect that if I have a pure red and a pure green and mix them at 50%, we will get an orange.

Quite a few of the working spaces are ‘not human’, in that they contain colours outside the range of human vision. See Welcome to Bruce Lindbloom's Web Site

darktable supports non-linear LUT profiles (such as those from Glenn); RawTherapee supports DCP with curves, for example, and I assume also LUT profiles. Neither of those is linear.


I am still wondering how you extract the numbers from a human though.

Hopefully the “standard observer” is not harmed in the process :wink:

And they lack colors humans can see.
And they lack colors that humans can see and the camera can capture, and they contain colors your camera is not able to capture.

That is why using an intermediate working space may not be optimal, and three transforms can produce more clipping than using the input gamut of the device; that is probably the idea behind C1.

You will lose some colors anyway, as your camera does not capture all the colors humans can see, and can capture colors or wavelengths humans don’t see.
But if your device is not able to capture that data, there is no benefit in having a space with colors your camera is not going to generate.

That is the basic idea of the first image-processing strategy, which seems to be the one followed by C1.

Ask the people who developed XYZ and CIELAB and all those spaces.
That is what all color science is about.

It was measured around the middle of the last century, with experiments where people compared their responses to light. I don’t know the details.

The transfer functions of the cones, the cells that generate the cerebral stimulus, are known too; you can look on Wikipedia to see the red, green and blue sensitivities to the light spectrum.

But measuring accuracy is simple:

You present a patch with a known XYZ value to the sensor under a standard illuminant, measure the RGB values, apply the transform you are using to get X’Y’Z’, then compute the distance (X-X’)^2 + (Y-Y’)^2 + (Z-Z’)^2, and you get the squared error.

Repeat the process with many patches, sum all the squared errors, and you get the total squared error measure.
The most accurate transform (by that norm) is then the one with the smallest value.
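A minimal sketch of that measurement in Python/NumPy, with made-up patch values purely for illustration:

```python
import numpy as np

def total_squared_error(xyz_reference, xyz_estimated):
    """Sum of (X-X')^2 + (Y-Y')^2 + (Z-Z')^2 over all patches."""
    diff = np.asarray(xyz_reference) - np.asarray(xyz_estimated)
    return float(np.sum(diff ** 2))

# Hypothetical reference XYZ values for a few patches, and the XYZ values
# obtained by applying some camera->XYZ transform to the measured RGB.
xyz_ref = [[0.35, 0.20, 0.02],
           [0.30, 0.55, 0.10],
           [0.18, 0.12, 0.75]]
xyz_est = [[0.34, 0.21, 0.03],
           [0.32, 0.53, 0.09],
           [0.20, 0.13, 0.72]]

print(total_squared_error(xyz_ref, xyz_est))
```

In practice color errors are more often reported as ΔE in a perceptual space such as CIELAB rather than as a raw XYZ distance, but the sum-of-squares idea is the same.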

And that’s one of the problems with this discussion: this is all based on assumptions about what another (commercial, closed source) program does. In addition, @anon41087856 has already explained, and shown, why editing in anything but a linear color space (linear in light energy, to be precise) is asking for problems.

I see no reason to continue with this discussion


I’m doing this over a wifi hotspot to my cell phone, a bit dodgy…

@priort , @kofa, it’s not a long story about how I got to editing in camera space.

First off, I need to point out that the first tool in my default toolchain is colorspace:camera,assign, which basically takes the camera profile data and loads it into the internal image metadata, so it follows the image through the toolchain. In rawproc, to make color management work, you need an ‘assign’ colorspace tool somewhere before the software starts converting from one profile to the next…

I’ve read all the exhortations about using a working profile, so my toolchain at one time had a colorspace:[profile],convert right after the demosaic tool, where profile = some large-gamut linear working profile. Well and good, until one day I forgot to put it in. I never even noticed; the display and export output continued to look fine. So, to be clear, what was now in the toolchain was just the original colorspace:camera,assign, and that was the profile used for display and eventual export.

I have a theory about why it works, but I’m not presently at the computer with the data to assess the reasoning. It has to do with Elle Stone’s assertions about “well-behaved” profiles. Next week…


I’d be rather surprised if any of the authors of XYZ were still alive, given that it was created in 1931 (CIE 1931 color space - Wikipedia). Some of those for CIELAB are probably still around, but retired (it’s from 1976 - CIELAB color space - Wikipedia).


I have some articles at home that describe the process; essentially, they presented a split display of 1) a reference spectral color, and 2) a RGB mix color that they controlled with knobs. The observer essentially had to dial the knobs until the two sides of the split matched, to their perception. Thus, the term, “color-matching experiments”


@Tamas_Papp Lots of good stuff here…page 43 I think refers to the process used…


Pleasure to read your replies… as always frank, honest and informative… thanks for taking the time to elaborate… you’re a gentleman…


Yes, and me.

I just meant that I don’t know the details of how they developed it, just some general ideas.

I think there has been more work recently and some modifications, but not sure.

Thanks @ggbutcher, I had read about it some time ago and did not remember the details.

What I also don’t know is how they separated the physical perception (the response of the cones) from the perceptual part (the interpretation by the mind).

I don’t remember exactly, but there is something like an XYZ space that comes just after the response of the cones: the values of the spectrum convolved with the response of the retina.

Those transfer functions are published on Wikipedia, but I don’t know how they were obtained; maybe by measuring activity in the optic nerves?