For an upper bound, we could restrict our \bar X, \bar Y, \bar Z functions to a narrower spectral interval, so that the minimum of \bar X + \bar Y + \bar Z over that interval is significantly above zero.
For example, on your graph, we could consider the interval [420,650] instead of [370,750].
Of course, this introduces an error, and I don’t know how problematic that is.
We can actually consider any linear combination of \bar X, \bar Y, \bar Z with strictly positive coefficients: such a combination is strictly positive on the whole interval as long as at least one of \bar X, \bar Y, \bar Z is strictly positive on each part of it.
\bar f = \alpha_x \bar X + \alpha_y \bar Y + \alpha_z \bar Z
Let’s call \bar f_{min} the minimum of \bar f over the considered interval.
We have: \alpha_x X + \alpha_y Y + \alpha_z Z = \int_\Omega I(\lambda) \times ( \alpha_x \bar X(\lambda) + \alpha_y \bar Y(\lambda) + \alpha_z \bar Z(\lambda) ) d\lambda
Which gives: \alpha_x X + \alpha_y Y + \alpha_z Z \geq \bar f_{min} \times \int_\Omega I(\lambda) d\lambda
Thus: E \leq \dfrac{\alpha_x X + \alpha_y Y + \alpha_z Z }{ \bar f_{min} }
We can try different \alpha values to get the tightest (lowest) upper bound possible.
However, the whole difficulty with this upper bound is how to choose the interval so as to get a significant bound without introducing too much error.
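To make this concrete, here is a rough numerical sketch of the bound. The single/double-Gaussian stand-ins for the colour-matching functions are purely illustrative (not the real CIE tables), and the interval and \alpha values are just the defaults discussed above:

```python
import numpy as np

# Crude Gaussian stand-ins for the CIE 1931 colour-matching functions.
# The numbers are rough fits for illustration only, NOT the real tables.
def gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def xbar(lam):
    return 1.06 * gauss(lam, 599.8, 37.9) + 0.36 * gauss(lam, 442.0, 16.0)

def ybar(lam):
    return 1.01 * gauss(lam, 556.4, 46.3)

def zbar(lam):
    return 1.78 * gauss(lam, 449.9, 19.4)

def energy_upper_bound(X, Y, Z, alphas=(1.0, 1.0, 1.0),
                       interval=(420.0, 650.0), n=2048):
    """E <= (ax*X + ay*Y + az*Z) / fbar_min, valid when I(lambda)
    vanishes outside `interval` and all alphas are > 0."""
    ax, ay, az = alphas
    lam = np.linspace(interval[0], interval[1], n)
    fbar = ax * xbar(lam) + ay * ybar(lam) + az * zbar(lam)
    return (ax * X + ay * Y + az * Z) / fbar.min()

# Sanity check with a synthetic spectrum supported inside [420, 650]:
lam = np.linspace(420.0, 650.0, 2048)
dlam = lam[1] - lam[0]
I = gauss(lam, 550.0, 30.0)
X = np.sum(I * xbar(lam)) * dlam
Y = np.sum(I * ybar(lam)) * dlam
Z = np.sum(I * zbar(lam)) * dlam
E = np.sum(I) * dlam                 # true energy
bound = energy_upper_bound(X, Y, Z)  # must be >= E by the derivation above
```

With real CMF tables only the `xbar`/`ybar`/`zbar` definitions would change; the bound itself is just the one-liner at the end of `energy_upper_bound`.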
Anyway, a more general comment: thanks for starting this discussion, I find it quite interesting. Unfortunately I don’t have the necessary physics background to contribute to your analysis, but I’m certainly interested in the outcomes.
I can only contribute my empirical observations, which match more or less what you wrote in your initial post. Using max as a norm gives not-so-pleasing results (in particular, it tends to compress blue skies too much). The Euclidean norm works better in my experiments, and so does luminance. But I don’t have much physical justification, I’m afraid…
Ah. Found this: https://jo.dreggn.org/home/2015_spectrum.pdf by @hanatos (it’s a small world), so maybe we can compute an approximation of I(\lambda) from XYZ after all, and then numerically integrate it, so the energy computation is direct (but expensive).
… if you’re on the search for fast spectral upsampling, may i suggest you also look into this one: https://jo.dreggn.org/home/2019_sigmoid.pdf ? caveat may be that colours in images will encode emission and not reflectances, so you’d need to scale before and after upsampling.
and yes, if i may add: i find this topic super interesting. the distortion in chroma that a tonecurve brings has always annoyed me and i think we can do better by doing things in spectral. the upsampling is really only using a couple fma + one reciprocal sqrt instruction and can be executed in SIMD on many wavelengths quite efficiently. i’ll be excited to hear about any more ideas or results in this direction.
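For reference, the sigmoid model from that 2019 paper evaluates S(\lambda) = 1/2 + x / (2 \sqrt{1 + x^2}) with x a quadratic in \lambda, which is where the "couple fma + one reciprocal sqrt" comes from. A minimal sketch (the coefficients below are placeholders; the real ones come from the paper’s precomputed per-RGB coefficient tables):

```python
import numpy as np

# Sigmoid spectral shape from the 2019 paper:
#   S(lambda) = 1/2 + x / (2 * sqrt(1 + x^2)),  x = c0*l^2 + c1*l + c2
# The coefficients are placeholders for illustration; in practice they
# come from the paper's fitted coefficient tables for a given RGB triple.
def sigmoid_spectrum(lam, c0, c1, c2):
    x = (c0 * lam + c1) * lam + c2                 # two fma-style steps
    return 0.5 + x / (2.0 * np.sqrt(1.0 + x * x))  # one (reciprocal) sqrt

lam = np.linspace(360.0, 830.0, 471)
s = sigmoid_spectrum(lam, -1.0e-4, 0.11, -30.0)  # placeholder coefficients
# By construction 0 < S(lambda) < 1, i.e. a valid reflectance spectrum,
# which is why emissive image data needs scaling before/after upsampling.
```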
Would it be reasonable to estimate the spectral response of each sensor by taking a Gaussian around, let’s say, 454 nm, 540 nm and 610 nm? See e.g. https://www.dxomark.com/About/In-depth-measurements/Measurements/Color-sensitivity for a sketch.
That could give you a starting point to try out if the energy-oriented method actually works.
@snibgo in my experience, Y works reasonably well (e.g. it’s what is used in the dynamic range compression tool of RT), whereas L of Lab produces quite unnatural results. I’m sorry I can’t provide a more meaningful/quantitative answer, but my knowledge in these topics is too limited for that.
Yes, I have long been wrestling with Lab. When I change L values, I need to adjust C ones to compensate in order to get more natural results and vice versa. There are other Lab-like spaces but I haven’t tested those because they aren’t readily available.
Did you ever try it? In darktable, you can use the tone curve in XYZ, Lab or RGB. What happens then? XYZ shifts colours toward yellow/green, and Lab creates blueish-greyish shifts when you raise the shadows’ luminance. So they don’t work. Besides, Lab is a colour adaptation model more than a working colour model (for pushing pixels).
Luminosity (Y or L) is perceptually defined; it has nothing to do with how light/photons actually behave. Sticking to physically defined spaces and reproducing in software what would happen to photons IRL has been found to often give the best and most intuitive results, so the retouching process is faster and more reliable.
It’s a blind guess, really. How do you know the mean and standard deviation of your Gaussian? If it’s just to find yet another nasty workaround, the project becomes pointless. Let’s be rigorous.
Lab is known to not shift hues linearly. After all, it was one of the first attempts to mimic human perception (1976). Also, I vaguely recall some weird kinks happening around blue in Lab. TL;DR: Lab belongs in museums.
Provided you use xyY (not XYZ), and even then, I find it brightens reds too much.
These Lab-like spaces (JzAzBz, IPT, etc.) are good mostly to perform gamut mapping when you want to reduce the visual difference (Delta E) between source and destination. They are still not intended to do general purpose color grading (except maybe rotating selective hues).