new camera input luts available

why do you want input lut transforms?

consider the example in this thread:

computing colour using just the dng ForwardMatrix results in good colour near white, but sacrifices extreme values. this is especially apparent for the blue here (out the window; see the cie scatterplot in the top right):


this is a lut profile created from spectral sensitivity functions (ssf):

all else being equal, at least now you have a starting point with values inside the spectral locus, so no negative energies are required to reproduce this image!

good news everyone!

@ggbutcher created a collection of colour luts (cluts) from a variety of ssf (from different sources and measured by himself), and they are now available in the vkdt camconst github.

what’s a clut look like?

here are visualisations for two random cameras:


the chart here shows the camera chromaticity coordinates. the points represent some sample spectra converted to camera rgb coordinates (by integrating them against the ssf); their colour is the rec2020 coordinate of the output. the named circles show the cc24 spectra (plus a more saturated set X??) evaluated through the actual ssf, and coloured by error (deltaE between the spectrum integrated against the cie colour matching functions and the spectrum integrated against the ssf and processed through the clut).

the profile itself is essentially similar to a dcamprof lut profile. the differences are the domain (vkdt: the full spectral locus; dcamprof: its intersection with prophotorgb) and the interpolation (vkdt: dense; dcamprof: spline).

how to apply a clut

grab the one corresponding to your camera model from the above-linked repo, place it in your data/ directory, and then apply the clut.pst preset (ctrl-p in darkroom mode or the node editor, then pick clut.pst).


Hi Hanatos, interesting work! So these are 2D luts? Can you explain a bit better what you mean by ‘camera chromaticity coordinates’? What are the axes and what are the lines?

Jack

heya,

these are 2.5D luts (a 2D lookup domain with 3D values stored there). the 2D domain is just camera rgb chromaticity (r/(r+g+b), b/(r+g+b)), in analogy to the CIE xy chromaticity coordinates x = X/(X+Y+Z). these live in a triangle, but i want to store a 2D texture, so the triangle is stretched to a quad domain to make better use of the space. nothing to be proud of. just for completeness, the conversion is

#include <math.h>

// std C2 mapping with sqrt:
// unit square -> chromaticity triangle
void quad2tri(double *x, double *y)
{
  const double sx = sqrt(*x);
  *x = 1.0-sx;
  *y = *y * sx;
}
// chromaticity triangle -> unit square (inverse of the above);
// note the division blows up as *x approaches 1 (the triangle tip)
void tri2quad(double *x, double *y)
{
  const double t0 = *x, t1 = *y;
  *x = (1.0-t0)*(1.0-t0);
  *y = t1 / (1.0-t0);
}

(this is how you would sample points on a triangular light source in Monte Carlo path tracing, if you’re into that kind of thing, and it’s not the best way to do so)

by “lines” you mean the structure in the colourful points in the background? these are just a grid of badly chosen sample spectra, sometimes they are closer together in one direction than the other, so they appear to form lines.

i want to prepare some better comparisons to dcamprof’s lut profiles, maybe create the same visualisations for their luts and for the straight matrix too. it should be possible to convert a dcp lut to this texture format (i don’t want to query a scattered-data point set at runtime if i can avoid it). as far as i understand, dcamprof does some extra regularisation that may or may not make it less accurate…


Cool, this reminds me of rg chromaticity


As I don’t follow the whole profiling / lut thing completely …

… does this clip the values to the range of the lut?
I don’t know about vkdt, but darktable applies the input profile after exposure, so you can get values outside 0.0 – 1.0, and every time I try an icc as a camera profile, it clips the values to 100%.

Or is there a different method for how I should use files like this?

From what I understand this doesn’t clamp anything, because:

  1. camera rgb doesn’t have negative values (cameras can’t record negative light)

  2. with this “rb chromaticity colour space” it’s impossible to get values higher than 1:
    with the formula above, r = r/(r+g+b), the result is always < 1


Is the assumption correct that the outermost locus is the most spectrally pure? And that the closer one gets to the center of those loci, the closer one gets to illuminant E? In the D700 clut visualization, are certain green values overlapping with less spectrally pure green values (upwards, in the direction of the blues)?

EDIT: I’m so sorry for my manners! This looks fantastic and just reinforces my assumption that SSFs are a must for color-sensitive stuff (so all things photography). Great great work from @ggbutcher and @hanatos . I need to get a GPU to test vkdt.


right. it’s pretty much that, only with blue instead of green (which flips the triangle; i forgot why i did it this way, something with white balancing where i kept green constant or so).

the range of the lut is all the physically possible values. it covers the whole spectral locus and scales with the brightness of the input (because r+g+b is divided out, the global scale introduced by exposure does not matter; what @age said).

the outermost coordinates in the quad mean something random depending on camera response function. mostly unphysical stuff that never happens (except for noise below black point maybe). the pure spectral colours are close to the outermost ring in the amoeba shaped pointcloud (i think this ring is at 95% distance or so).

hm. the rings scale towards illuminant E, so you can see a bit of structure in the pointcloud around that. the colouration is rec2020, so it renders (1, 1, 1) white where it means D65.

hehe yeah the D700 shows the metameric ambiguity between camera rgb and cie/rec2020: a certain shade of green camera rgb coordinate could potentially have been another spectral stimulus, which, to the eye, would appear as a way more saturated shade of green. given just the camera, there’s no way you will be able to tell them apart. browsing through the different models this happens quite a bit.


But that means they are supposed to be used before exposure changes , right ? Because you can’t apply it to rgb data that has values > 1.0.

Just looking to see whether I can use this in DT, or whether I can learn something from it about making better use of other ICC files as camera profiles in DT.

no. think of a spectrum between 380 and 830 nm. if you scale it up, it still has the same shape. you can do stuff with it in a normalised space and then multiply the overall scale back in at the end. this is also explained in the amazing dcamprof docs. in general, the apparent brightness of a given spectrum is different in camera rgb and CIE XYZ, which is why it’s a 2D->3D table.

(re:icc and such… maybe i should put together a list of things wrong with the dt processing pipeline. but then again maybe not).


Do it :wink: I say this not as someone wishing to piss on dt, but a user of the software who wants to better understand.

(also probably most of the more grave things are my fault in the first place)


That’s what I was trying to get at. Thanks!
I assume you generated the “rings” by sampling the camera SSFs with spectrally broader and broader virtual lights? Maybe even gaussian shaped? If so, would it be possible to plot lines of constant illuminant center-wavelength within that clut space?

I’m looking forward to seeing what effect this has on real-world gradients. Specifically, how gradients from spectrally pure light sources are affected. Even more so compared to matrix solutions. And which CFAs are better and which are worse. Super interesting stuff.


(writing this on my cell phone, with dodgy Internet in the woods…)

+1 to the dcamprof docs! Even if you never use the software, Anders has compiled a treasure trove of information on camera profiling. I’ve just spent the morning with a local copy, trying to better understand the mechanics of luts in this application. Every time I do, I learn something new…


I would welcome your comments on this, and if it promotes any discussion that could lead to improvements, everybody wins…


well, integrating the same ssf against sampled spectra, yes.

only spectral colours (and from there towards white) can be gaussian. the purple line is more like 1-gaussian (it needs two peaks, or a dip). dcamprof has some xy → gaussian spectra upsampling routine that i haven’t looked into in detail yet. i’m using this method just because it’s so dead simple. i think real spectra would be better, but those are harder to sample densely in xy space. certainly room for improvement here.

reflectance spectra with mode/peak at constant wavelength? yes:


(all the way to the right would be the spectral colour, but it stops displaying stuff at the boundary of adobergb)
this is what the colour module uses for hue-preserving gamut mapping and for changing chroma (following mizokami and webster 2012, if that is close to the idea you are proposing).