"Aurelien said : basecurve is bad"

It is not voodoo; it just seems to work using the first of those strategies, while most other programs use the second.

And it seems they use ICC profiles, while most programs use a linear matrix multiplication.

I don’t know if it is the most appropriate approach for preserving the colors captured by the camera, but for people interested in preserving the most color accuracy, such as those working in product photography, it seems to make sense. I suppose there are drawbacks to that too.

I preferred their results to LR’s, and the interface too, even though I have been using LR for much longer than C1.
They have a quick workflow too, and I suppose that is why some professionals prefer it.

But it is quite expensive and, in many processing aspects, less advanced than DT.

I think they are likely LUT-based ICC profiles, whereas I believe I once saw someone say that simple matrix profiles were better than LUT-based profiles for scene-referred editing?? No idea… DT can use the additional color calibration settings to tweak the ICC/input profile to create a preset with quite a low delta E for those shooting conditions… if you have a color card…

…simple matrix profiles were better than LUT-based profiles for scene-referred editing??

There are two reasons for that:

  1. A lookup table (LUT) is only defined for a specific range of values, e.g. [0, 1]. In a scene-referred workflow, some of the samples may end up falling outside this interval, and there is little choice but to clip the values before applying the LUT. Simple matrix multiplication is defined for any input value.
  2. A LUT is generally going to apply a non-linear transform. This breaks the linear relationship between sample values and luminance in the scene, which means you can no longer model physical processes like blurs. Simple matrix multiplication is a linear operation (see the sketch after this list).
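To make point 2 concrete, here is a minimal Python/NumPy sketch with made-up LUT and pixel values (nothing from any real profile): averaging two scene-linear samples and then applying a non-linear LUT gives a different result than applying the LUT first and then averaging, while a purely linear operation commutes with the average.

```python
import numpy as np

# Made-up non-linear 1D LUT (a gamma-like tone curve), only defined on [0, 1].
lut_in = np.linspace(0.0, 1.0, 16)
lut_out = lut_in ** (1 / 2.2)

def apply_lut(x):
    # Piecewise-linear interpolation of the LUT.
    return np.interp(x, lut_in, lut_out)

a, b = 0.04, 0.64                       # two scene-linear samples (e.g. pixels being blurred together)
mean_then_lut = apply_lut((a + b) / 2)
lut_then_mean = (apply_lut(a) + apply_lut(b)) / 2
print(mean_then_lut, lut_then_mean)     # the two results differ: the LUT broke linearity

gain = 1.8                              # a linear operation (a single gain here) commutes with averaging
print(gain * (a + b) / 2, (gain * a + gain * b) / 2)   # identical
```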

Thanks for the refresher… I figured as much, but not with enough certainty to suggest it…

I don’t see how a linear matrix can be better than a LUT; it all depends on the number of samples in the LUT, but if there are enough and a good interpolation… with six samples and linear interpolation you already have what a linear matrix does.
If there are non-linearities in the transform, the LUT would give more accurate results.

If there are non-linear transforms, it is due to non-linearity in the correspondence from sensor colors to RGB, so it gives you more accurate colors.

The LUT has been obtained from real measurements and patches; if a linear matrix were better, no calibrators would be sold and just a few patches would be enough to calibrate displays or devices.

Maybe there are problems, as has been said, with colors extrapolated out of gamut, but if you are searching for color accuracy…

A lookup table will have a maximum and minimum input value. For any input value that falls between this maximum and minimum, you can perform interpolation. However, if the input value falls outside these min and max values, then you would need to perform extrapolation, which is problematic for a lookup table. Because values in a scene-referred pipe are not bounded, there is no guarantee they will fall within the range of values for which the LUT is valid.
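As a small illustration, again with made-up numbers: a 1D LUT interpolated with np.interp simply clamps anything above its last input value, whereas a linear gain has no such boundary.

```python
import numpy as np

# Made-up LUT, defined only for inputs in [0, 1].
lut_in = np.linspace(0.0, 1.0, 16)
lut_out = lut_in ** (1 / 2.2)

x = np.array([0.18, 1.0, 4.0])        # 4.0: a scene-referred value well above the LUT's range
print(np.interp(x, lut_in, lut_out))  # np.interp clamps 4.0 to the output for 1.0

gain = 0.9                            # a linear operation has no such boundary
print(gain * x)                       # 4.0 is simply scaled like every other value
```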

LUTs expect a range of exposures… this doesn’t hold with scene-referred edits… the LUT is going to have a hard-coded black and white, I would think… but I am no expert, only surmising.

Yes, that can be a problem outside the extremes.
But when you use a linear segment you are extrapolating the data from two points to the whole plane, so the picture is no better.

When you say that a LUT is not linear… linear with respect to what?

With respect to energy in a narrow band around red, green and blue?

The filters in the camera do not provide a signal proportional to that, as they include much of the adjacent bands: green includes a lot of red and even blue, red includes some green and blue, and blue has a narrower spectrum.
So the signal you get is not proportional to the “pure” colors.

Our retina is not linear either and has different characteristics than the sensor filters.

But if you want color accuracy, it is related to the standard observer: our retina’s response, so you want a signal proportional to that.

So you need non-linear conversions from the signal captured by the camera to what an observer would have perceived in that scene.

That is what I understand by linear response.


I do not doubt you are correct and there are problems implementing it, problems that C1 seems to have solved quite well.

The DT path has its own problems implementing the unbounded linear model, problems that it has also solved with excellent results.

The problems with the C1 approach don’t seem different from those of any other developer that uses integer processing; using the camera space instead of an intermediate working space does not imply using LUTs, they could have used a linear transformation.

They are not using an unbounded scene model; they are using integer (and therefore bounded) models.

If they are using LUTs, it would be for better color accuracy.
They have many clients that demand that color accuracy.
Of course there is always room for improvement.

There might be situations with problems as you describe, but C1 is recognized as a program that provides good colors and color accuracy.

I am still not sure that C1 is doing anything that you could not do in DT. I could not find any reference to their mythical camera space. They apply their curve a little differently, but they are using ICC profiles through standard connection spaces… you can use LUT-based profiles in DT, or even use DT chart to create a tone curve + LUT combo based on a color checker or a matching JPG. C1 is a solid product, no doubt, with some tricks up its sleeve, but nothing, I think, that you could not do in DT. It’s like Camera Raw having access to DCP files… C1 has access to its custom profiles… Maybe someone who knows the real ins and outs of C1 might know far better, or confirm whether there is really some major advantage in the starting point… I don’t think so, other than money and resources to generate the profiles…


I am not saying that you cannot do it in DT.
I am just commenting on what I have read about C1’s way of processing: they use ICC profiles to calibrate camera color and process the image in the resulting camera space, instead of converting to an intermediate RGB working space.
It is supposed to be color accurate (at least as accurate as a display calibration or other device calibration can be).
Really, to get that accuracy you need to construct an ICC profile for the illumination conditions and your camera, but they provide a canned ICC and different adjustments for each camera.

I have never done it in DT, but you can use calibration patches there; now you have it in the color calibration module.

But there is a difference: if there are colors that your camera captures and the Rec.2020 working space does not cover, you lose those colors, which get clipped or transformed to be in gamut.

In C1 all the colors the camera can capture remain in gamut (at least in theory, if the profile works well).

If they have the money, it is because they have professionals paying for their expensive product, and they came after LR and PS, so they must do something well, or at least adequately for those people.

I don’t know to what extent that way of processing could be better or even noticed; maybe in some cameras and some extreme cases, and maybe in other situations it works worse (all options usually have advantages and disadvantages).

I just read here about C1 and wanted to comment on what I have read about how it does its processing, the potential advantage of preserving all captured colors, and maybe the drawbacks in implementing that option.

Okay, I think I found the resources-and-money part of the Capture One sauce… :slight_smile:

Access to camera spectral data will give the best camera specific profiles.

It would appear that this and a whole lot more go into their profiling process…

It looks like there exists a cultural heritage version of Capture One (Product Catalog | DT Heritage), which I think it says is based on Capture One Pro… the in-depth process used is outlined in this document… Maybe this is more involved than what you get with Capture One Pro, but it is likely still revealing about what the process might look like when they create their profiles.

@ggbutcher I suspect you know about this software, but just in case you don’t, and it could be of any use in your investigations… only OSX and Windows, but not expensive…
Robin Myers Imaging: SpectraShop™. It was a link in the above document on color reproduction.

We are talking about the linearity of the response of the R, G & B channels from the sensor with respect to the intensity of the light falling on the sensor. If the input signal from the scene has a spectral distribution of f(\lambda), and the colour filter array over the photosites has spectral responses of r(\lambda), g(\lambda), b(\lambda) for the R, G & B channels respectively, then the outputs of the channels will be:
R = \int_0^\infty f(\lambda)r(\lambda)d\lambda
G = \int_0^\infty f(\lambda)g(\lambda)d\lambda
B = \int_0^\infty f(\lambda)b(\lambda)d\lambda

(assuming the response of the photosites without the CFA is linear with respect to the number of photons collected, which is a pretty good approximation if you look at the characteristic curves of sensors typically used in cameras).

In fact, due to the linearity of the integration operator, if we apply an input signal scaled by a factor of a, the input signal will be af(\lambda), and the response of the R channel will be:

\int_0^\infty af(\lambda)r(\lambda)d\lambda = a\int_0^\infty f(\lambda)r(\lambda)d\lambda = aR

which means that increasing the input signal by a factor of a will cause the output of the R channel also to be scaled by the factor a. That is to say, the output of the R channel is exactly proportional to the intensity of the input signal. Similar logic applies for the other channels.
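A quick numerical check of that proportionality, using an arbitrary made-up spectrum and filter response (any shapes would do):

```python
import numpy as np

lam = np.linspace(380, 780, 401)             # wavelength grid in nm
dlam = lam[1] - lam[0]
f = np.exp(-((lam - 550) / 80) ** 2)          # made-up scene spectrum f(lambda)
r = np.exp(-((lam - 600) / 50) ** 2)          # made-up red-filter response r(lambda)

R = np.sum(f * r) * dlam                      # R ~ integral of f(lambda) r(lambda) d(lambda)
R_scaled = np.sum((2.5 * f) * r) * dlam       # same scene, 2.5x the light

print(R_scaled / R)                           # -> 2.5, exactly as the linearity argument predicts
```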


We are talking about correcting the raw signal, right? So we have no spectral information, just points we’re told represent red, green, or blue. We have no way (from the information the camera gives us) to know why those pixels are designated as red, green, or blue, and we don’t need it.

And it’s easy enough to have a non-linear LUT:
A simple 3-point LUT for a monochrome sensor: (0, 0) - (8000, 16000) - (16000, 0).
Non-linear, with a discontinuous slope (if you use linear interpolation). A linear LUT would be
(0, 100) - (8000, 5100) - (16000, 10100) (again, monochrome for simplicity).
Even (0, 100) - (8000, 5000) - (16000, 10100) is already non-linear (i.e. not of the form y = ax + b).

And if your LUT is perfectly linear, its output can be represented by a simple linear equation for monochrome, or a 3×3 matrix for color.
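A tiny check of those three example LUTs, comparing the slopes of the linear-interpolation segments (equal slopes means the LUT reduces to the form y = ax + b):

```python
import numpy as np

def segment_slopes(points):
    # Slopes of each linear-interpolation segment of a small LUT.
    x, y = np.array(points, dtype=float).T
    return np.diff(y) / np.diff(x)

print(segment_slopes([(0, 0), (8000, 16000), (16000, 0)]))        # [ 2. -2.]        -> non-linear
print(segment_slopes([(0, 100), (8000, 5100), (16000, 10100)]))   # [0.625 0.625]    -> y = 0.625x + 100
print(segment_slopes([(0, 100), (8000, 5000), (16000, 10100)]))   # [0.6125 0.6375]  -> non-linear
```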

Yes, at one point you will have to do that transform. That’s done with filmic and the output color profile.

And that’s not what’s used when dealing with scene-referred. It’s been said numerous times that there, “linear” means “linear in light energy”. The perceptual response is logarithmic (i.e. perceived intensity is proportional to the logarithm of the light energy).


So they are not linear at all, as the signal is a convolution of the filter transfer function with the spectrum of the received light.

Signals obtained with different transfer functions are not proportional to each other.

Hence the need to make non-linear corrections to get, from what the camera has captured, the result closest to what a human could have seen.

That is what an ICC profile corrects: differences in response to different light sources with different spectra, not the scaling of all the channels by a factor, which, as you say, is linear, at least at the camera level, not at a perceptual level.

Although the filters in the camera try to emulate the retina’s response to light.

What you need to be linear and additive is the signal you are processing: the colors from a human perspective.

So using an ICC profile seems adequate if you are looking for color accuracy.

Oh? Afaik, such transfer functions are (for each wavelength) linear with respect to the incoming energy. So, if I multiply the input by a factor α, I expect my output to be multiplied by that same factor α. So as long as I don’t change the colour of the incoming light, I’d expect the output to be linear with the input, even with different transfer functions for the different colours.

No, they try (at best) to emulate the spectral sensitivity, i.e. relative responses to different wavelengths at a given energy. The filters do not (and cannot) take into account the response as a function of total light energy. That response is the job of the sensor (which is linear, for most purposes).

This discussion is exactly why contracts and scientific texts spend so much space on definitions of concepts and terms. That hopefully ensures everyone knows what is meant by each term used.


What we are doing here is not a convolution; it is a weighted sum of all the wavelengths in the original signal, where the weighting is based on the spectral response of your filter. This is absolutely a linear operation.

Even if it were a convolution, convolution is also a linear operation, which follows from the way its definition is based on the (linear) integration operator.

I really don’t know what you are trying to say here. Do you have a concrete example to illustrate what you are talking about?


I am not sure I understand what you are trying to say here. Convolutions are of course linear, as explained above by @Matt_Maguire.

Are you talking about input color profiles? What does this have to do with filmic/basecurve?

But the ICC (or any other profile) is used for colour space conversion, isn’t it?
@ggbutcher has mentioned a few times that he edits in camera space, without any transformation, and then converts straight to the output (I assume mostly sRGB) at the end of the chain (see, for example, Color transforms in Raw Processing).


My understanding, which seems to be a moving target when these color discussions pop up, is that the ICC files take the RGB data from a given device and convert it to known values, XYZ or Lab, i.e. the PCS, so that the data can be predictably converted and edited as needed. The further use of a working space, as I have heard it described, is a conversion to a space as large as possible, to maintain as much of the device gamut as possible but allow tools to work in a common, predictable color space. Using camera space would vary from camera to camera and device to device… but that is my very limited understanding and likely misguided…
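For what it’s worth, a rough sketch of that chain in Python/NumPy: the camera matrix here is entirely invented, and the XYZ → Rec.2020 matrix uses approximate published values, so this only shows the shape of the device → PCS → working-space pipeline, not real profile data.

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ (PCS) matrix: a stand-in for what an input ICC profile encodes.
CAM_TO_XYZ = np.array([[0.60, 0.28, 0.10],
                       [0.25, 0.68, 0.07],
                       [0.02, 0.10, 0.72]])

# Approximate XYZ (D65) -> linear Rec.2020 matrix (the conversion into a common working space).
XYZ_TO_REC2020 = np.array([[ 1.7167, -0.3557, -0.2534],
                           [-0.6667,  1.6165,  0.0158],
                           [ 0.0176, -0.0428,  0.9421]])

camera_rgb = np.array([0.32, 0.25, 0.12])   # one demosaiced, white-balanced pixel in camera space
xyz = CAM_TO_XYZ @ camera_rgb               # device space -> profile connection space
working_rgb = XYZ_TO_REC2020 @ xyz          # PCS -> working space shared by all devices
print(xyz, working_rgb)
```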