"Aurelien said : basecurve is bad"

We are talking about correcting the raw signal, right? So we have no spectral information, just points we’re told represent Red, Green, or Blue. We have no way (from the information the camera gives us) to know why those pixels are designated as red, green, or blue, and we don’t need it.

And it’s easy enough to have a non-linear LUT:
A simple 3-point LUT for a monochrome sensor: (0,0) - (8000, 16000) - (16000, 0)
Non-linear, and with a discontinuous slope at the middle point (if you use linear interpolation). A linear LUT would be
(0,100)-(8000, 5100)-(16000, 10100) (again, monochrome for simplicity).
Even (0,100)-(8000, 5000)-(16000, 10100) is already non-linear (i.e. not of the form y = ax + b).
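A minimal sketch (Python/NumPy, nothing camera-specific) that checks the three monochrome LUTs above against the y = ax + b test, assuming linear interpolation between the points:

```python
import numpy as np

# The three monochrome LUTs from the examples above, as (inputs, outputs)
luts = {
    "first (non-linear)": ([0, 8000, 16000], [0, 16000, 0]),
    "second (linear)":    ([0, 8000, 16000], [100, 5100, 10100]),
    "third (non-linear)": ([0, 8000, 16000], [100, 5000, 10100]),
}

x = np.linspace(0, 16000, 9)
for name, (xs, ys) in luts.items():
    y = np.interp(x, xs, ys)                 # piecewise-linear interpolation between LUT points
    a = (ys[-1] - ys[0]) / (xs[-1] - xs[0])  # the line through the end points: y = a*x + b
    b = ys[0]
    print(name, "is affine:", bool(np.allclose(y, a * x + b)))
```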

And if your LUT is perfectly linear, its output can be represented by a simple linear equation for monochrome, or a 3×3 matrix for color.

Yes, at some point you will have to do that transform. That’s done with filmic and the output color profile.

And that’s not what’s used when dealing with scene-referred. It’s been said numerous times that there, “linear” means “linear in light energy”. The perceptual response is logarithmic (i.e. perceived intensity is proportional to the logarithm of the light energy).
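As a tiny illustration of that vocabulary (a sketch, just to make the two scales concrete): in a scene-referred, linear encoding a doubling of light energy doubles the value, while the logarithmic, perceptual-like scale advances by one constant step (one stop, or EV) per doubling.

```python
import math

mid_grey = 0.18                            # scene-referred value, linear in light energy
for i in range(5):
    energy = mid_grey * 2 ** i             # linear scale: doubles at every step
    ev = math.log2(energy / mid_grey)      # logarithmic scale: +1 EV per doubling
    print(f"linear value {energy:6.3f} -> {ev:+.0f} EV from mid-grey")
```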


So they are not linear at all, as the signal is a convolution of the filter transfer function with the spectrum of the received light.

Signals obtained with different transfer functions are not proportional to each other.

Hence the need to make non-linear corrections to get a result as similar as possible to what a human could have seen from what the camera has captured.

That is what an ICC profile corrects: differences in response to light sources with different spectra, not the scaling of all the channels by a factor, which, as you say, is linear, at least at the camera level, not at a perceptual level.

Although the filters in the camera try to emulate the retina’s response to light.

What you need to be linear and additive is the signal you are processing: the colors from a human perspective.

So using an ICC profile seems adequate if you are looking for color accuracy.

Oh? AFAIK, such transfer functions are (for each wavelength) linear (w.r.t. the incoming energy). So, if I multiply the input by a factor α, I expect my output to be multiplied by that same factor α. So as long as I don’t change the colour of the incoming light, I’d expect the output to be linear with the input, even with different transfer functions for the different colours.

No, they try (at best) to emulate the spectral sensitivity, i.e. the relative responses to different wavelengths at a given energy. The filters do not (and cannot) take into account the response as a function of total light energy. That response is the job of the sensor (which is linear, for most purposes).

This discussion is exactly why contracts and scientific texts spend so much space on definitions of concepts and terms. That hopefully ensures everyone knows what is meant by each term used.


What we are doing here is not convolution, it is a weighted sum of all the wavelengths in the original signal, where the weighting is based on the spectral response of your filter. This is absolutely a linear operation.

Even if it were convolution, you should know that convolution is also a linear operation, which comes about due to the way its definition is based on the (linear) integration operator.
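A quick numerical check of both points (a sketch with made-up spectra and an arbitrary filter curve, nothing measured): the channel response is just a weighted sum over wavelengths, and both that sum and a convolution satisfy f(a·E1 + b·E2) = a·f(E1) + b·f(E2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 33                                     # number of wavelength samples (arbitrary)
filt = rng.random(n)                       # made-up spectral response of one colour filter
E1, E2 = rng.random(n), rng.random(n)      # two made-up incoming spectra
a, b = 0.3, 1.7                            # arbitrary scalars

weighted_sum = lambda E: np.sum(filt * E)  # what a sensor channel effectively computes
print(np.isclose(weighted_sum(a * E1 + b * E2),
                 a * weighted_sum(E1) + b * weighted_sum(E2)))   # True: linear

convolve = lambda E: np.convolve(filt, E)  # convolution is linear in the same sense
print(np.allclose(convolve(a * E1 + b * E2),
                  a * convolve(E1) + b * convolve(E2)))          # True: linear
```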

I really don’t know what you are trying to say here. Do you have a concrete example to illustrate what you are talking about?


I am not sure I understand what you are trying to say here. Convolutions are of course linear, as explained above by @Matt_Maguire.

Are you talking about input color profiles? What does this have to do with filmic/basecurve?

But the ICC (or any other profile) is used for colour space conversion, isn’t it?
@ggbutcher has mentioned a few times that he edits in camera space, without any transformation, and then converts straight to the output (I assume mostly sRGB) at the end of the chain (see, for example, Color transforms in Raw Processing).


My understanding, which seems to be a moving target when these color discussions pop up, is that the ICC files take the RGB data from a given device and convert it to known values, XYZ or Lab, i.e. the PCS, so that the data can be predictably converted and edited as needed. The further use of a working space, as I have heard it described, is a conversion to a space as large as possible, to maintain as much of the device gamut as possible but allow the tools to work in a common, predictable color space. Using camera space would vary from camera to camera and device to device… but that is my very limited understanding and likely misguided…

Yes, but as long as you develop a small number of images (so you don’t need styles or to copy/paste history stacks), or only use one camera, that’s not a problem. As Glenn affectionately calls rawproc “my (his) hack software”, I think he’s not really worried about supporting huge user bases or offering a tool for people who have to crank out dozens or hundreds of images per day.

On the other hand, for C1, this may not be the case. In fact, I suspect it is not.


For sure, I suspect you are correct. I was, and am again likely, wrong in thinking that the raw RGB numbers don’t mean anything except to that device and need to be mapped with tables or matrices to a PCS. From this reference, edits and color space transforms can then happen with predictable results…

@ggbutcher I know you have a few threads where you have explained your adventures into spectral camera data and your work on rawproc… I would be interested to go back and read them all, but time dictates otherwise. If you did elaborate somewhere on your decision to use the camera space and forgo a working profile, I would love to read that over… TIA

And that’s what happens in darktable for all raw images: the module input color profile is added for all raw files and cannot be disabled. And it’s pretty early in the pipeline as well, before any operation that deals with colour.


AND? Of course, if you multiply the input by alpha it will be multiplied by alpha at the end, after linear transformations.

But if you use two different cameras with two different filters, the signals you get are not related by a linear function, as they get different signal strengths when you illuminate them with the same spectrum.
If you change the light stimulus, the spectrum, each of them gets a different signal, and the signals do not conserve the previous relation between them.

So to convert from one camera to another you cannot use a linear function; that is what I was saying.
The same applies to colors perceived by an observer.
If you want to get accurate color results, from different cameras to the colors a standard observer can see, you cannot use a linear transform.
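One way to probe this claim numerically, as a rough sketch with invented Gaussian filter curves rather than anything measured: expose two “cameras” with different filters to many random spectra and fit the best 3×3 matrix from one set of responses to the other. The fit is only approximate, because the second camera’s filters are not linear combinations of the first camera’s filters.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(380, 700, 65)                         # wavelength samples, nm

def filters(centers, width):
    # Invented Gaussian R/G/B sensitivities, shape (3, n_wavelengths)
    c = np.asarray(centers)[:, None]
    return np.exp(-0.5 * ((wl[None, :] - c) / width) ** 2)

cam_a = filters([600, 540, 460], 35.0)
cam_b = filters([620, 530, 470], 50.0)                 # different centres and width

spectra = rng.random((1000, wl.size))                  # random test spectra
rgb_a = spectra @ cam_a.T                              # camera A responses, (1000, 3)
rgb_b = spectra @ cam_b.T                              # camera B responses, (1000, 3)

# Best 3x3 matrix M (least squares) such that rgb_a @ M approximates rgb_b:
M, *_ = np.linalg.lstsq(rgb_a, rgb_b, rcond=None)
err = rgb_a @ M - rgb_b
print("relative RMS error of the best linear A->B fit:",
      np.sqrt(np.mean(err ** 2) / np.mean(rgb_b ** 2)))
```

The residual depends on the spectra you feed in, which is the same reason per-channel multipliers have to change when the illuminant changes.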

Convolutions are linear in the sense that if the spectrum energy is multiplied by a factor, the signal is multiplied by that factor, of course.

I did not say that.
What I said is that if you want to convert the signal you get in R, G, B using the filters of a camera to what a standard observer would have seen in the same situation, you cannot use a linear transform if you need accuracy, or if you want to transform the results from one camera to another camera, as the transfer functions of the filters are different and respond differently to different wavelengths.

So the multipliers that work under, say, a green light to get the green color for the observer won’t work under a bluish light, where you would need other coefficients.

A linear transform is just an approximation; the LUT is just the measured coefficients needed to get the appropriate results under the different lights received by the sensor, so they work for that color.
If you don’t get good results using the LUT, either it has been incorrectly measured or you don’t have enough samples in the LUT and are doing too coarse an interpolation.

Nothing to do with basecurve.
The topic came up when talking about what C1 does and how it works.

I just said that it uses ICC profiles and does its processing in the camera space and gamut instead of transforming to a working space.

It was said that a linear transform is better for color accuracy, and I just said it is not, as the relation between the colors humans perceive and the colors captured by a sensor is not a linear function; it works differently under different light colors.

I am not sure the proposed exercise makes a lot of sense. Assuming your “standard observer” is human, are you comparing sensor readings to signals from photoreceptor cells? Of course the two are very different, and the brain does a lot of post-processing.

But I think this whole issue is a red herring: even trivial editing involves some color calibration, so this is not a step you can skip or automate. So maybe some software does some kind of initial correction, which I will have to tweak anyway… I am not convinced this is practically important (if you disagree, please show images).

This is of course understood. Using a linear representation in the pipeline as long as you can is useful for the purposes of processing photos (avoiding artifacts, etc). I think the claim is that it makes life easier if you edit photos, or write software to do this. That’s all.

Incidentally, “accuracy” is not really defined in this context; all photos are “processed” to a certain extent to evoke some kind of response from the viewer. Some of this response depends on properties of the eye, but a lot of it is cultural conditioning (i.e. adults currently alive see photos all the time, and have internalized various visual cues that date back to film and evolve constantly).

Yes, it seems (from what I have read about C1, as they do not explain exactly how they do the development) that they use the ICC to get accurate color across the whole gamut, and transform to a color space with the gamut the camera can capture.

Then they process the image there, and only at the end of the workflow convert to the display or output device or working space.
They do not use an intermediate space.

I don’t think that working directly in the RGB interpolated from the raw data, with a conversion to a “human” color, is a good idea.

Other software uses a linear matrix transform to convert to the RGB working space (with a different gamut than the camera) and from there to the output (three transforms).

I am not comparing.

When you use an ICC, it is to get the best approximation of the color in XYZ space (so human, as color is a human concept; we don’t know how colors are perceived by an insect) for the light spectrum the sensor received.
It gets an RGB value after filtering that light spectrum (convolution) with its filter characteristics.

The ICC is constructed by measuring known color patches in camera space and calculating the transform needed to get the values they should have.
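For the simplest, matrix-only case, that construction amounts to a least-squares fit; here is a sketch with synthetic stand-in numbers (a real profile is built from measured patch values, e.g. from a colour target, and often adds curves or a LUT on top of the matrix).

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: in practice these would be the measured camera RGB of the
# patches and the reference XYZ values those patches should have.
camera_rgb = rng.random((24, 3))
some_true_mapping = np.array([[0.41, 0.36, 0.18],
                              [0.21, 0.72, 0.07],
                              [0.02, 0.12, 0.95]])
reference_xyz = camera_rgb @ some_true_mapping.T + rng.normal(0, 0.01, (24, 3))

# Least-squares 3x3 matrix so that camera_rgb @ M approximates reference_xyz:
M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
print(M)
```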

Indeed, you are sampling a vector function where the output vector (XYZ) is a non-linear function of the input vector (RGB): (x, y, z) = f(r, g, b).

It is not linear across the gamut; it is linear only in the sense that if you multiply the input vector by a scalar, the output XYZ will be multiplied by the same scalar.

But the function f is not linear and cannot be expressed by a matrix multiplication, and the relations among r/g/b won’t be conserved in x/y/z when you change the input value.
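A toy example of that distinction (assumed purely for illustration, not a real camera-to-XYZ mapping): a function can scale correctly with exposure, i.e. f(a·v) = a·f(v), and still not be expressible as a 3×3 matrix, because it is not additive.

```python
import numpy as np

def f(rgb):
    # Degree-1 homogeneous, but NOT linear
    r, g, b = rgb
    return np.array([np.sqrt(r * g), np.sqrt(g * b), np.sqrt(b * r)])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.5, 1.0])
a = 2.5

print(np.allclose(f(a * u), a * f(u)))     # True: scaling the exposure scales the output
print(np.allclose(f(u + v), f(u) + f(v)))  # False: not additive, so no matrix represents f
```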

Accuracy is perfectly measurable: usually a quadratic distance is used, the sum of the squared errors between what you get using the transform and the values it should have.

Of course, if you do later transforms then you are making artistic interpretations, and it is up to you what you want to get or how you want your image to look.

What you need to be linear is the color space where you work to process the color, not the function used to transform from the RGB to that color space (the non-linear function has to be proportional to the intensity of the light received, the total energy in the visible spectrum).

We expect that if we have a pure red and a pure green and mix them at 50%, we will get an orange.

Quite a few of the working spaces are ‘not human’, in that they contain colours outside the range of human vision. See Welcome to Bruce Lindbloom's Web Site

darktable supports non-linear LUT profiles (such as those from Glenn); RawTherapee supports DCP with curves, for example, and I assume also LUT profiles. Neither of those are linear.


I am still wondering how you extract the numbers from a human though.

Hopefully the “standard observer” is not harmed in the process :wink:

And they lack colors humans can see.
And they lack colors humans can see and the camera can capture, and have colors your camera is not able to capture.

That is why using an intermediate working space may not be optimal, and three transforms can produce more clipping than using the input gamut of the device; that is probably the idea behind C1.

You will lose some colors anyway, as your camera does not capture all the colors humans can see, and can capture colors or wavelengths humans don’t see.
But if your device is not able to capture that data, there is no benefit in having a space with colors your camera is not going to generate.

That is the basic idea of the first image-processing strategy, which seems to be the one followed by C1.

Ask the people who developed XYZ and CIELAB and all those spaces.
That is what all of color science is about.

It was measured in the middle of the last century, with experiments in which people compared their responses to light. I don’t know the details.

The transfer functions of the cones, the cells that generate the stimulus sent to the brain, are known too; you can look on Wikipedia to see the red, green and blue sensitivities to the light spectrum.

But measuring accuracy is simple:

You present a patch with a known XYZ value to the sensor under a standard illuminant, measure the RGB values, apply the transform you are using to get an X’Y’Z’, then measure the distance (X-X’)^2 + (Y-Y’)^2 + (Z-Z’)^2 and you get the squared error.

Repeat the process with many patches, sum all the squared errors, and you get the total squared-error measure.
The most accurate transform (by that norm) will then be the one with the smallest value.
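The procedure above, written out as a minimal sketch (the patch values here are placeholders; real ones would come from a measured target and the transform under test):

```python
import numpy as np

def squared_error(reference_xyz, transformed_xyz):
    # (X - X')^2 + (Y - Y')^2 + (Z - Z')^2 for one patch
    d = np.asarray(reference_xyz) - np.asarray(transformed_xyz)
    return float(np.dot(d, d))

# One patch: reference XYZ vs. the X'Y'Z' obtained by applying the candidate
# transform to the measured camera RGB (placeholder numbers).
print(squared_error([41.2, 35.8, 22.3], [40.9, 36.1, 22.0]))

# Repeat for many patches and sum the results; the most accurate transform
# (by this norm) is the one with the smallest total.
```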