What does linear RGB mean?

OK, this is getting REALLY out of topic now :laughing:

250 USD for the power supply, plus 50-100 USD per discharge tube. Not cheap, but certainly not prohibitively expensive if you’re really into it (prices from Spectrum Tubes, Magnetizers and Coils, there should be other suppliers).

The cheap versions are mercury lines from a fluorescent tube, sodium from street lighting or from burning table salt, xenon from a high-power car headlight, and neon and other noble gases from colored signs (all sources that are quickly disappearing, so act fast! :wink:)

The problem with diode-based lasers is that, unless you buy them calibrated from a reputable source (expensive), the wavelength is only known to plus/minus 10 nm or worse. The exceptions are 532 nm and a couple of others that are derived from non-diode sources (like Nd:YAG crystals), so the wavelength is always spot-on.

Good idea! (although breaking a DVD player to remove the laser is not cheap :laughing:)
Those lasers should have a pretty reliable wavelength.

CD-type lasers with a wavelength of 780 nm (within the infrared) were used. For DVDs, the wavelength was reduced to 650 nm (red color), and for Blu-ray Disc this was reduced even further to 405 nm (violet color).
Source: Wikipedia

Neither did I (and I should have the equipment to do it). Sounds like a good reason to borrow the lab spectrometer for a sunny afternoon :wink:


Wikipedia says that automotive “Xenon” bulbs are actually metal-halide, so not suitable as a calibration source? I can provide a spectrum if anyone is interested, as I’ve repurposed two 6000K bulbs for vestibule lighting :slight_smile:

Okay, that is less expensive than I thought. And I agree that it is perfectly fine for calibration.

10 nm off!!! Okay, yeah, that might be too much. The 405 nm ones are probably repurposed Blu-ray sources… maybe the ones which didn’t make it into the drives? Those 532 nm ones are frequency-doubled 1064 nm lasers, right?

I think I’d get killed if I took the spectrometer out of the lab!

People are actually building their own DIY spectrometers, by the way:
Youtube DIY spectrometer

Every camera flash should be a xenon bulb… hmm.

I’ll doubt it until I see a spectrum. The way they ignite when turned on and how quickly they reach maximum light output look a lot like xenon to me.

Please beware of the UV! A cataract in your eyes is no fun at all.

Short summary: a homebuilt spectrometer with decent calibration is within reach. So there is nothing really stopping one from actually spectrally characterizing the CFAs of cameras, which is needed to build an AtoB0 LUT. Someone correct me if I missed something.
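As a concrete illustration of the calibration step, here is a minimal sketch of how one might map spectrometer pixel index to wavelength using known mercury emission lines. The pixel positions below are invented for the example; real values would be read off your own device.

```python
import numpy as np

# Hypothetical pixel positions at which three well-known mercury
# emission lines land on the sensor of a homebuilt spectrometer.
pixel_positions = np.array([312.0, 705.0, 814.0])
known_wavelengths_nm = np.array([435.8, 546.1, 577.0])  # Hg lines

# Fit a polynomial mapping pixel index -> wavelength.
# With only three reference lines, a quadratic is the most we can justify.
coeffs = np.polyfit(pixel_positions, known_wavelengths_nm, deg=2)
pixel_to_nm = np.poly1d(coeffs)

# Any other pixel column can now be assigned a wavelength:
unknown_line_nm = pixel_to_nm(500.0)
```

With more reference lines (a fluorescent tube provides several), you would use a higher-order fit and check the residuals before trusting it.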

Worst case scenario for a cheap pointer, yes. Most are probably closer to specification, but you can’t know it without calibrating them :wink:

Yes, they’re frequency-doubled Nd:YAG lasers.

We have a couple of small portable ones. Besides, it would make for a nice demonstration for students :blush:

The Wikipedia article says that

they are actually metal-halide lamps that contain xenon gas. The xenon gas allows the lamps to produce minimally adequate light immediately upon start, and shortens the run-up time.

so, you’re both correct. I doubt the xenon in there is enough to give a meaningful spectrum.

Yeah! All my descriptions of calibration lamps should have started/ended with a :warning: “Be careful! Many non-thermal lamps emit strongly in the UV (mercury, hydrogen) or in the near infrared (xenon), some of them more than in the visible. They’re dangerous to look at for extended periods of time without adequate protection” :warning:

(specifically calibration lamps, the ones sold for illumination should filter those wavelengths in the glass)


Exactly, that would be a catch-22: trying to calibrate with something that needs calibration in the first place. I thought they could be 2 nm off, which, if you have several for calibration, is meh but not the end of days. We never used them in the lab because of mode instabilities and such.

but they are the most trouble if they do not do this.

Off topic:

I had one in 2018, now fixed with a brand-new lens. During the procedure, when I temporarily had no lens in my right eye, I could see a lilac / light-violet glow around the ends of fluorescent lights. They said it was because the retina is sensitive to UV but this is normally filtered out by the lens.

The new lens is wonderful, but now I have a slight colour imbalance: a minor cataract in the left eye gives a yellower image compared to a bluer image in the right.


DCamProf is a solid piece of software, a few years old, and probably the best free tool out there for generating robust input camera profiles with widely available targets. Most people don’t need anything more than a DCamProf CC24-based profile. In fact, sampling the gamut solid more finely than that can often result in less stable profiles.

Lumariver Profile Designer is based on the same engine, has a decent GUI but costs a few pennies.

Jack


Yes, that’s why they call it a ‘compromise’ color matrix. Dense LUTs are a compromise too, because they can introduce discontinuities in what is in fact typically best rendered as a smoothly varying solid. Some disciplines call this overfitting. So, as often, the truth is in the middle.

I don’t quite understand what folks here mean when they say that SSFs and pixels are not linear, though; they surely have a linear response as defined upthread: twice the input, twice the output, all else equal. It may not be the response of perfect color matching functions, but that’s a different issue.

Jack


I don’t quite understand what folks here mean when they say that SSFs and pixels are not linear, though; they surely have a linear response as defined upthread: twice the input, twice the output, all else equal.

Sure, with respect to changes in intensity, as long as you stay away from the noise floor and the saturation limit, the response is more or less linear; I don’t think anyone is debating that.

It may not be the response of perfect color matching functions, but that’s a different issue.

But this is why people are saying the sensor response is not linear: stepping 3 units along the spectral axis will not cause 3 times the change of stepping only one unit along the spectral axis.

Right. But isn’t that true of any SSFs, including perfect LMS? They are all linear under a given illuminant: double the input, double the output. The fact that they don’t behave like CMFs is a different issue, no?

Jack

They are all linear under a given illuminant: double the input, double the output. The fact that they don’t behave like CMFs is a different issue, no?

Yes, you are right; they are two separate issues. One is that we talk about linearity with respect to input intensity, holding everything else equal. The other is whether there is a linear mapping from the camera “RGB” values into a standardised colour space. In the former case we have linearity (in some sense); in the latter we do not.
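A toy numerical sketch of the distinction. All curves below are made-up Gaussians standing in for real SSFs and CMFs, purely for illustration:

```python
import numpy as np

wl = np.linspace(400, 720, 33)  # 400-720 nm in 10 nm steps

def bump(center, width):
    # Made-up Gaussian stand-in for a filter/observer response.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Invented camera SSFs and CMF-like curves, 3 x 33 each.
ssf = np.stack([bump(600, 50), bump(540, 45), bump(460, 40)])
cmf = np.stack([bump(595, 40), bump(555, 40), bump(450, 30)])

rng = np.random.default_rng(0)
spectrum = rng.random(33)

# Sense 1: linear in intensity -- doubling the light doubles each channel.
assert np.allclose(ssf @ (2 * spectrum), 2 * (ssf @ spectrum))

# Sense 2: no exact linear map camera "RGB" -> "XYZ" for all spectra.
# Fit the best 3x3 matrix over many random spectra; the residual stays nonzero.
S = rng.random((33, 200))
cam, xyz = ssf @ S, cmf @ S
M, *_ = np.linalg.lstsq(cam.T, xyz.T, rcond=None)
residual = np.abs(M.T @ cam - xyz).max()
```

The first assertion always holds; the residual in the second part does not vanish because the stand-in CMFs are not linear combinations of the stand-in SSFs.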

Right, and color is perceptual, thus subjective. I would argue that, if one is being cantankerous about it, colorimetric color spaces are themselves compromises and non-linear with respect to my visual system (or yours): how many people were used in 1931 to determine the CMFs, and what was their variability? Surprisingly few and surprisingly high.

So the real question is: how close to linear is close enough? I don’t have answers, just questions. All I know is that in normal outdoor conditions, with my Nikon digital cameras, I typically do not have a preference for color produced via an appropriate 3x3 (or 3x4) matrix vs. something more involved.

Jack

That is a bit surprising. Saturated colors in flowers will surely benefit, at least in how they gently go into what I would loosely call the ‘gamut clipping’ region. The comparison from @ggbutcher of the matrix profile vs. the AtoB0 LUT really sold the more involved process to me. I am not saying it looks more like the real world, but it is more believable in how it starts to ‘clip’.

Fair enough; out-of-gamut mapping is a perceptual game that forces non-linear compromises by definition. We have now entered the output-referred, perceptual world. My earlier comments were aimed more at the linearity of the input-referred hardware world.

Jack


Well, here is the spectral plot from the i1Pro+ArgyllCMS. Change .png to .sp for the data.

It is nice to see how everyone was concerned about the possible risk in retrofitting :slight_smile: I can weld, and it takes only a few unprotected arc flashes to remember that forever.


Have we, though? What @ggbutcher did, as far as I understood, was not done with the goal of being perceptually pleasing. He implemented a different, more accurate(?) way to deal with highly saturated colors (a LUT for the device-space-to-PCS conversion). As a side effect, that turned out to be more perceptually pleasing (only imho). That ‘perceptually pleasing’ part may well come from the PCS-to-display conversion, which definitely tries to be perceptually pleasing, but might fail when the device-to-PCS conversion is inaccurate (or at least not precise for saturated colors).

Honest question: Did I misunderstand this?

That’s not just Xenon. Thanks and I stand corrected.

Also: That i1Pro seems to be nice!

I did not see the example, but I assume that the flowers in question were out of gamut and Mr. Butcher brought them in. What does that mean?

Given your moniker, I trust you understand space projections. The sensor sees the spectrum arriving from the scene and converts it, say, from a 33-dimensional space to 3 (400-720 nm every 10 nm, down to the 3 raw RGB channels). The result at this stage is bounded because only positive values up to clipping are allowed; let’s call this bounded set of values the input ‘camera’ space.

Since ‘camera’ is a linear 3-dimensional space, we can visualize it as a parallelepiped and project it accurately to any other linear 3D space by matrix multiplication, where the result will also look like a parallelepiped of a different size and shape. In floating point we can do back-to-back projections ad infinitum without losing tones, but for this discussion let’s assume that we are going straight from camera space to an output colorimetric color space like sRGB. sRGB also cannot have negative values or values beyond clipping.

In such a case, linear matrix multiplication may result in out-of-bound values, that is, negative or clipped values in the destination space. These tones do not exist in the landing color space because they fall outside of the allowed ‘cube’. They are deemed out-of-gamut. What to do with them? Gamut mapping to the rescue.

One way to deal with them is to simply clip/block them to the nearest boundary; there are many ways to do that. Another way is to say, ‘I know this color does not exist in sRGB, but the closest one geometrically, or perceptually, or pleasingly, is this other one’. You can do gamut mapping with a plethora of formulas or with LUTs; the results will be least bad according to your choices (your ‘intent’). Of course, such results are by definition ‘incorrect’ because they are subjective and not linearly related to the original tones. A few simplifications, but you get the idea.
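The clip-to-boundary option can be sketched in a few lines. The matrix below is invented just to have a plausible shape (off-diagonal negatives, rows roughly summing to 1); real matrices come from profiling:

```python
import numpy as np

# Invented camera -> linear sRGB matrix, for illustration only.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])

# A highly saturated camera tone (strong green channel).
cam = np.array([0.05, 0.90, 0.10])
srgb = M @ cam          # linear transform: may leave the [0, 1] cube
out_of_gamut = (srgb < 0.0) | (srgb > 1.0)

# Crudest possible gamut mapping: clip to the nearest boundary.
mapped = np.clip(srgb, 0.0, 1.0)
```

Here the saturated green lands outside the cube in all three channels, and clipping pulls it back onto the boundary, discarding whatever made it distinct from its neighbors.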

Jack

PS Perhaps the best way to understand gamut mapping is to play with this brilliant Marc Levoy applet.


Oh, I am sorry, it was post 33 in this very thread: What does linear RGB mean ? - #33 by ggbutcher

Everything you wrote I at least think I understood; so far we are on the same page. The only difference I see is that there are actually two transforms taking place: one from camera to XYZ, and then one from there to display (often sRGB). I understand that there are many ways to go from XYZ to sRGB depending on the rendering intent. My specific question (or struggle, maybe! :smiley: ) is about the camera-to-XYZ step, though. I understand how to do that with a 3x3 matrix. My question is: what errors do you get for colors which are close to monochromatic in the 33-dimensional sense, in combination with a color filter array whose response is not a simple bandpass or Gaussian but a rather complex shape?

In the results of post 33 above, a monochromatic, or close to monochromatic, source illuminates a room (but is not the only light source) and creates a gradient on the wall. The only thing that changed between the two pictures was going from a 3x3 matrix to a LUT for the device-to-XYZ conversion (I assume it’s XYZ), and suddenly the gradient is handled much better. Is the assumption of a parallelepiped in a CIE plot justified for complex CFA filter shapes? You do have three scalars now, yes, but each still multiplies a 33-dimensional vector. I have the feeling that for low saturations this is not a problem, but for high saturations, i.e. more monochromatic light sources, it becomes more and more sensitive to the actual color filter spectrum.

Wasn’t this the reason why Anders Torger, in https://torger.se/anders/dcamprof.html, used an additional target with high saturation to get rid of residual errors in highly saturated colors?
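Torger's point can also be illustrated numerically. Everything below is invented (Gaussian stand-ins for SSFs and CMFs, random broadband "CC24-like" patches), but it shows the mechanism: a 3x3 matrix fitted only on desaturated patches tends to show much larger relative errors on near-monochromatic spectra.

```python
import numpy as np

wl = np.linspace(400, 720, 33)

def bump(center, width):
    # Made-up Gaussian stand-in for a filter/observer response.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

ssf = np.stack([bump(600, 50), bump(540, 45), bump(460, 40)])  # "camera"
cmf = np.stack([bump(595, 40), bump(555, 40), bump(450, 30)])  # "XYZ" stand-in

rng = np.random.default_rng(1)
# Training patches: broadband, desaturated reflectances (CC24-like).
broad = 0.5 + 0.5 * rng.random((33, 24))
# Test patches: narrowband, near-monochromatic spectra.
narrow = np.stack([bump(c, 8) for c in range(420, 700, 20)], axis=1)

# Fit the compromise 3x3 matrix on the broadband patches only.
M, *_ = np.linalg.lstsq((ssf @ broad).T, (cmf @ broad).T, rcond=None)

def rel_err(S):
    # Worst-case prediction error relative to the largest response.
    xyz = cmf @ S
    pred = M.T @ (ssf @ S)
    return np.abs(pred - xyz).max() / np.abs(xyz).max()

err_broad, err_narrow = rel_err(broad), rel_err(narrow)
```

In this toy setup the matrix interpolates well among the desaturated training patches but extrapolates poorly to the narrowband test spectra, which is exactly why a high-saturation target helps.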

I am so sorry, everyone, if I am derailing this topic. I am happy to take this into DMs or a separate topic.

EDIT: I’ll try to simulate a bit over the weekend. I think I know where I am wrong :smiley:

Yeah, many ways to skin a cat, but to keep things simple, pretend that the matrix projects you directly from ‘camera’ space to sRGB. They are both 3D spaces, so it is just a reversible linear transformation if we stick with floating-point math. But sRGB is not based on floating point; it is based on capped, positive integers only. So now you may have some tones that land outside the ‘cube’, thus out of gamut. How do you bring them into the fold? A LUT is one way to do it. Of course, you could take a detour through XYZ and apply the LUT there, but the end effect is exactly the same.

If your output color space is big enough and all tones fit within it, the process is entirely linear and no gamut-squeezing LUT is needed.

But, you say, the CFA’s SSFs do not look like LMS cone fundamentals, so the best we can do while staying totally linear is a compromise color matrix, which results in some ‘errors’. And you are right. Errors compared to what? Usually compared to the 1931 CIE standard observer, etc. You can try to correct those ‘errors’ via another LUT or by other means, with the limitations mentioned earlier.

If you have looked at Levoy’s applet, it is easy to see that any highly saturated color is guaranteed to be out of gamut in sRGB (and Adobe RGB). If the color is the result of a linear transformation, it may also be off because of the compromise color matrix. LUTs are interpolated, usually in HSV space, so if you capture a lot of saturated colors it behooves you to have some saturated samples in your test target; otherwise you are extrapolating blind.

If you are curious about the kinds of errors that can be expected from treating a modern CFA linearly off a simple CC24 target, take a look at this article.

Jack