White point adjustment

lol. If only it were that simple…

There are many levels between those extremes. I don’t fully understand the molecular structure of the gases in a light bulb… but I know when to turn the light on.

then please do elaborate. feel free to go as complex/detailed as you think is needed, I’ll do my best to follow…


I did that in my first comment.

I see. thanks for clarifying


“linear” in this case refers to a transfer function that progresses at a constant rate. Simply, this means the transfer function graphs as a straight line. Exposure and the two-point “curve” both have this characteristic, which is why exposure can equivalently be applied by a poorly-named “linear curve”.

This characteristic of a two-point curve is very useful in image processing. So much so that I wrote a separate curve routine just for that case, because computing x->y with a slope is so much more efficient than looking up y for each x in a complicated spline algorithm. It forms the basis for my blackwhitepoint tool in rawproc, where I can set black and white on a two-button slider and this “linear curve” is applied between the two points.
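For illustration, here’s a minimal sketch of such a two-point “linear curve” in C++. This is not rawproc’s actual code; the function name and the assumption of float data in [0,1] are mine:

```cpp
#include <algorithm>

// Map x so that `black` lands on 0.0 and `white` lands on 1.0,
// clamping everything outside. This is just a line through the two
// points, y = (x - black) * slope, which is why it's so much cheaper
// per pixel than evaluating a spline.
float linear_curve(float x, float black, float white)
{
    const float slope = 1.0f / (white - black);
    return std::clamp((x - black) * slope, 0.0f, 1.0f);
}
```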

The reason for scaling your image thusly has one of its roots in the difference between your camera’s tone-measurement range and most displays’ range between black and white. A 14-bit raw file would seem to go most of the way toward filling the 16-bit integers available on modern computers, but really it’s only a quarter of the 16-bit range, 0-16383. And then, most consumer displays are still just 8-bit, so eventually a scaling has to be done in the “bad” direction, the direction that loses precision. And so, setting a point in the data range as “white” tells the software where to scale your data to meet that expectation.

A good “white” isn’t always the max value of all three channels; if you over-exposed your image, all the light past the sensor’s capability will just get glommed at the saturation point. Well and good until white balance is applied; now each of those channels is shifted in various ways, left or right of green (the common reference for white balance) and those saturation spikes separate from each other in the histogram. If you set the white point at the highest spike, your whites will take on a (usually) magenta cast, describing the residual color contributed by the spikes that aren’t at the white point. In that case you have to set the white point close to or at the lowest spike. This is stuff most raw processors do before they present an image to you for further mangling.
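As a sketch of that last step (my illustration, not any particular raw processor’s code): take the single white point as the lowest of the per-channel maxima, so the other spikes clip to a neutral white instead of leaving a cast:

```cpp
#include <algorithm>
#include <vector>

// After white balance the per-channel saturation "spikes" separate.
// Scaling to the lowest of the channel maxima pushes the other spikes
// past 1.0, where they clip to a neutral white rather than leaving a
// (usually magenta) residual cast.
float safe_white_point(const std::vector<float>& r,
                       const std::vector<float>& g,
                       const std::vector<float>& b)
{
    auto chmax = [](const std::vector<float>& c) {
        return *std::max_element(c.begin(), c.end());
    };
    return std::min({chmax(r), chmax(g), chmax(b)});
}
```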

That same “linear curve” can be used to correct a lot of color casts. Look at the RGB histogram of a color negative sometime; you’ll notice that each channel has about the same shape, but shifted left-right from the others. If your software lets you set separate black and white points for each of the R, G, and B channels, you can set them per channel, and they scale the channels to the same limits, which removes the color cast. You can use white balance multipliers to do this, but you can’t set separate black points with them, so the channel-shifting can’t be fully equivalent.
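A per-channel sketch of that idea, reusing linear_curve() from the sketch above (again my illustration, with a hypothetical RGB struct, not rawproc’s code):

```cpp
// Separate black and white points per channel stretch all three
// channels to the same 0..1 limits, which removes a uniform cast
// such as a color negative's orange mask.
struct RGB { float r, g, b; };

RGB stretch_channels(RGB p, RGB black, RGB white)
{
    return { linear_curve(p.r, black.r, white.r),
             linear_curve(p.g, black.g, white.g),
             linear_curve(p.b, black.b, white.b) };
}
```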

More than you probably wanted to know. Sorry, just got into a writing mood…

Thank you @ggbutcher for taking the time to explain this. As you mentioned above:

“RT is doing things for you that you need to tease from what we’re talking about to ensure you’re understanding the specific effect of these transforms, on the appropriate input data.”

and it was this aspect that I was trying to get to the bottom of. There is definitely an observable difference between the two methods in RT, and the conclusion I have come to from playing around with both is that it’s best to get the mid-tones about right with the exposure slider and use the ‘linear curve’ to set the white point. Your blackwhitepoint tool implementation in rawproc sounds similar to what I remember from Lr, and although the implementation may be different, I found that it was a quick and easy way of getting a good starting point for the rest of the editing.

Early on, I discovered the concept of “contrast-stretch” in the raw processing tutorial at gmic.eu:

(just revisited the page; oh @David_Tschumperle, that is surely an “attention-getting” profile picture…) Over the following years, I came to learn a lot more about data formats, device capabilities, and general image processing, all of which act in the context of the “scale” of the image data. In particular, I discovered the inexactitude of the thing we know as exposure; it’s really about putting the parts of the image you care about in the range between the sensor’s noise threshold and its saturation point.

So, for my proof processing, a simple black-white point scaling is usually all I have to do to get an acceptable image, indeed most times one that looks better than the in-camera-produced JPEG. From there, I have the data basis to consider custom curves at my whim; currently, I’m playing a bit with log-gamma and filmic, but I still prefer my own devices in shaping a curve for a particular image.

Sometimes, I find that the auto black-white point operation applied in the proof script clips highlights that I want to see in their own glory. So, I re-open the raw from the proof JPEG, which re-applies the proof processing, and I adjust the blackwhite tool to pull the highlights back into play. Then I usually need some sort of curve to pull the mid-tones back up. I guess dt’s filmic tool would do that for me, but I’m a “manual transmission” sort of driver, and prefer to shape my images to my immediate whim, and not through trying to figure out slider side-effects. YMWV.

I have a bigger reason to be discussing this at length; I started early departing from the mainstream applications because tools like G’MIC showed me the value and power of a toolbox of operations you could apply in any order you like. And, to do so from the first input of the image array from the raw file, in order to consider the full effect of every single subsequent transform imposed on the image. I’ve learned a lot in doing so, more than if I’d continued to rely on pre-processing chains, base curves, and all the other abstractions that keep the details from confounding folks who just want to photograph things. Don’t get me wrong; dt and RT are well-engineered products that do a great service supporting photographers in getting the job done, but really learning about the basis and effects of image transforms requires a “de-constructive” approach: knowing what the data looks like to start, and what it takes to make it a finished image, step-by-step. I’m in a career transition right now, but after that bow is tied I’m probably going to make a video tutorial along the lines of “from-scratch” raw processing, using rawproc.


I agree completely about needing to learn the basics. My move away from Lr was partly due to the fact that I had absolutely no idea of what was going on behind the sliders. Sure, you can learn what to tweak to get a particular effect but the day Adobe (or any other supplier for that matter) pulls the plug on the product, you have to start all over again. I look forward to seeing the video :slight_smile:

I’m back from travel and vacation :slight_smile:

First of all, excuse my bad English, which will probably hurt the understanding.

I have already spoken on this subject, but it does not hurt to explain again.

I will not speak about the mathematics literally (what Alberto explains is correct), but about the interaction with colorimetry.

The “exposure” slider acts in the same way on the 3 RGB channels, which for the mid-tones leads to a faithful representation of the change of exposure.
But for colors near the gamut limit (highlights or deep shadows), each channel will be “calculated” separately, which in this case brings a deviation from the true luminance (but what is true luminance?).

For me (and many universities), “true luminance” is the least bad representation, which is Lab (L* a* b*).

For the curves the problem is more complex.
Each “RGB” curve model takes into account a calculation of the luminance, for example: l = (r + g + b) / 3, or l = r * 0.2126729 + g * 0.7151521 + b * 0.0721750, etc.
If you compare this result with “L* a* b*”, the differences are important, often huge, and this has consequences on the overall rendering, the white point, and the black point.
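To make the size of those differences concrete, a small self-contained sketch (my illustration; it assumes linear sRGB/Rec.709 values in 0..1, with a red chosen to roughly match the example later in this thread):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // A saturated red, as linear Rec.709/sRGB values in 0..1
    // (roughly the L* = 60 red used later in this thread).
    const float r = 0.755f, g = 0.165f, b = 0.040f;

    // Two common "RGB" luminance estimates:
    const float l_avg = (r + g + b) / 3.0f;
    const float l_709 = 0.2126729f * r + 0.7151521f * g + 0.0721750f * b;

    // CIE L* computed from relative luminance Y (here Y = l_709):
    const float Y = l_709;
    const float Lstar = (Y > 0.008856f) ? 116.0f * std::cbrt(Y) - 16.0f
                                        : 903.3f * Y;

    // Prints roughly: avg 0.320, rec709 0.281, L* 60.0 (on 0..100 scale;
    // on a 0..1 scale L* would be about 0.60, a long way from 0.281).
    std::printf("avg=%.3f  rec709=%.3f  L*=%.1f\n", l_avg, l_709, Lstar);
    return 0;
}
```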

But it is a choice, and you cannot have the butter and the money for the butter, as we say in French (you can’t have your cake and eat it too)!!

Recall: the entire color chain is complex; each of these points alone deserves several university theses.

  1. white balance and its almost mandatory correlate, color appearance adaptation, as soon as we move away from the D50 reference.
  2. ICC or DCP profile: what to think of elaborate profiles built from a 24-color target (close to sRGB), while the user chooses a “ProPhoto” working profile… What to do with the “lost” colors??
  3. the same question for this profile when the illuminant is not D50…??
  4. the majority of software, including RT, has been designed with the RGB model, which has its advantages and disadvantages…: the various RGB models have brought concepts with pleasant results, often judged better than the Lab model, which must be associated with the “Munsell” correction for saturation (the case in RT).

jacques


Thank you @jdc for the clarification. If I have understood you correctly, this would explain why I didn’t see any mid-tone difference between the curve and exposure-slider methods for setting the white point when I was using the grey-scale wedge, but saw a noticeable difference when using a colour image?

@Wayne_Sutton
Yes, partially, because there are also differences between the “exposure slider” and the various “curves”.
For each “curve”, there is an interpretation of luminance and/or saturation, with formulas coming from “Adobe” or elsewhere.
For some of them, the “mix” of RGB channels acts only in theory on luminance, because it depends on the working profile. When you mix R, G, B, the values are different if you use sRGB, Prophoto, ACES, Rec2020, etc.

In “Local adjustments” (the newlocallab branch), “Exposure” is made entirely in L* a* b* mode, and at the end of the “Local adjustments” module you have the choice to correct local saturation gaps with a “Munsell” correction (this correction also exists in the main menu).

For example, with the choice “luminance”:
if the color is a red with
L* = 60, a* = 40, b* = 50

sRGB [0…255]: R = 220, G = 113, B = 56
Prophoto [0…255]: R = 162, G = 109, B = 57

With the formula Luminance = r * 0.2126729 + g * 0.7151521 + b * 0.0721750:

Luminance sRGB [0…255] = 132
Luminance Prophoto [0…255] = 116
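A quick sketch that reproduces this arithmetic (the 0…255 encoded values are taken straight from the post above; applying Rec.709 weights to Prophoto-encoded values is precisely the kind of mismatch being described):

```cpp
#include <cstdio>

int main()
{
    // Rec.709-style luminance weights
    const float wr = 0.2126729f, wg = 0.7151521f, wb = 0.0721750f;

    // The same L*a*b* red (60, 40, 50) encoded in two working profiles
    const float srgb[3]     = { 220.0f, 113.0f, 56.0f };
    const float prophoto[3] = { 162.0f, 109.0f, 57.0f };

    const float lum_srgb = wr * srgb[0] + wg * srgb[1] + wb * srgb[2];
    const float lum_pro  = wr * prophoto[0] + wg * prophoto[1] + wb * prophoto[2];

    // Prints about 131.6 and 116.5: the same color, two different
    // "luminances", because the weights assume one particular set of
    // RGB primaries.
    std::printf("sRGB: %.1f   Prophoto: %.1f\n", lum_srgb, lum_pro);
    return 0;
}
```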

jacques


Thank you very much Jacques, I think I am beginning to understand what is going on. As you say in French, “Je comprends vite mais il faut m’expliquer longtemps…” (“I understand quickly, but you have to explain it to me at length”). :slight_smile:
Best regards,
Wayne

@Wayne_Sutton
:wink: