White point adjustment

I don’t know the math, but in simple terms:
When you change the exposure, you change the white point, the black point and everything in between.
When you change the white point in the tone curve, the black point doesn’t change. Everything in between changes in proportion to its position between the two.
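
In symbols (my own notation, not anyone’s actual code, just to pin down that proportionality): with the black point b held fixed and the curve’s white point at w, a two-point curve maps an input x to

y = b + (x - b) \cdot (1 - b) / (w - b)

and with b = 0 this collapses to y = x / w, a plain multiplication, which is where the comparison with exposure compensation below comes in.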

Well, of course it does matter. The formula should be something like this:

y = T(x^{\gamma})^{1/\gamma}

where T is the tone curve, and x \in [0,1] is the input value
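
A toy check of that formula (my own sketch, not RT’s implementation): if the tone curve T is itself linear, say T(v) = v / w for a white point w, the gamma round-trip still yields a multiplication in linear space, just by w^{-1/\gamma} instead of 1/w. Here \gamma = 1/2.2 is an assumed encoding exponent, for illustration only.

```python
gamma = 1 / 2.2   # assumed encoding exponent, illustration only
w = 0.8           # white point set on the curve

def curve_in_gamma_space(x):
    # y = T(x^gamma)^(1/gamma), with a linear curve T(v) = v / w
    return ((x ** gamma) / w) ** (1 / gamma)

for x in (0.1, 0.18, 0.5):
    # both columns match: the gamma round-trip turns 1/w into w^(-1/gamma)
    print(x, curve_in_gamma_space(x), x / w ** (1 / gamma))
```

So on midtones both methods scale linearly, but for the same white point the effective factor differs, which is one reason the two adjustments don’t feel identical.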


Exposure compensation is a multiplication operator; zero times any exposure factor is still zero.

For what it’s worth, Guillermo Luijk has a dcraw tutorial that discusses this:

http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm

Scroll down to “EXPOSURE CORRECTION USING CURVES”…

Yep.

But the black point still changes when you change the exposure.

Then, something else is being done besides the exposure transform. You can’t move a value that is RGB=0,0,0 off that with multiplication.

When it is determined that a particular camera has a non-zero value for its black point, that compensation occurs as a subtraction of that value from all measurements in the image. I’m having fun with just that right now; my trusty D7000 had a blackpoint of zero, but my new Z6 has a blackpoint of 1008, so that has to be subtracted from the image early on to make it look right.
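
For illustration, a minimal sketch of that subtraction (my code; the integer handling and clipping choice are assumptions, not any particular program’s implementation):

```python
import numpy as np

BLACK_LEVEL = 1008  # the Z6 value from the post; the D7000 would use 0

def subtract_black(raw, black=BLACK_LEVEL):
    """Shift raw values so the camera's black level sits at 0."""
    # widen first, then clip so noise below the black level doesn't go negative
    return np.clip(raw.astype(np.int32) - black, 0, None)
```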

Ok, so black is black. And the rest of the image moves in relation to that. Is that a better way of saying the black point changes?

My comment was supposed to be a simple comparison of the two techniques in question, not a technical legalese definition of exposure.

I’m afraid not, sorry. Or at least I don’t understand what you mean by that.

My comment was supposed to be a simple comparison of the two techniques in question

applying a linear curve is the same as multiplying

@Marctwo, the two techniques in question have a technical basis. If one doesn’t understand that basis, they’re just moving sliders around and hoping for the best. If that’s your objective, have fun.

@Wayne_Sutton, I do the curve technique to adjust my white point all the time, in the manner described in the tutorial I linked earlier. Both it and exposure compensation are “linear scaling”, or multiplicative, operations; the adjusted variable just has different meanings based on the supporting equation. I like the curve method because I can stare at the histogram, figure out a white point I like, and scooch the control point over to it. I find this especially important if there is clipped data in the raw image, as that gets shifted arbitrarily by white balance; if you don’t either white-point to the lowest clipped channel or do a “reconstruction pet trick”, you get the dreaded magenta cast.

Now, this is based on my software, not RT. RT is doing things for you that you need to tease apart from what we’re talking about, to ensure you’re understanding the specific effect of these transforms on the appropriate input data.

Thanks everyone for the replies. I’m still not sure I understand completely what is going on. However, if you push the exposure slider hard over to the right, the black point does indeed appear to stay put on the histogram while everything above black is shifted to the right. RawPedia may be a bit misleading here, @Morgan_Hardwood, because it states: “Moving it to the right shifts the whole histogram to the right. This means this slider changes the black point (on the very left of the histogram) and the white point (on the very right).”

The formula indicated by @agriggio would presumably explain the difference between the two methods as far as the effect on the midtones is concerned, although if I understand you correctly @ggbutcher, both methods are strictly linear operations, so I’m still a bit confused on that point. For what it’s worth, I found a similar discussion regarding the use of the two methods in Lr, but I’m not sure it adds anything to our discussion. The 4th post in that thread, by MFRYE, describes his observations using the two methods, and these seem to stack up with what I have observed: two different ways to set black- and whitepoint
I agree with you @ggbutcher regarding the curve technique.

lol. If only it were that simple…

There are many levels between those extremes. I don’t fully understand the molecular structure of the gases in a light bulb… but I know when to turn the light on.

Then please do elaborate. Feel free to go as complex/detailed as you think is needed; I’ll do my best to follow…


I did that in my first comment.

I see. thanks for clarifying


“Linear” in this case refers to a transfer function with a constant slope; simply, it graphs as a straight line. Exposure and the two-point “curve” both have this characteristic, which is why exposure can equivalently be applied by a poorly-named “linear curve”.

This characteristic of a two-point curve is very useful in image processing. So much so that I wrote a separate curve routine just for that case, because computing x->y with a slope is so much more efficient than looking up y for each x in a complicated spline algorithm. It forms the basis for my blackwhitepoint tool in rawproc, where I can set black and white on a two-button slider and this “linear curve” is applied between the two points.
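
As a concrete sketch of that two-point scaling (my toy version; rawproc’s actual blackwhitepoint tool is C++ and certainly differs in detail):

```python
def black_white_scale(x, black, white):
    """Map [black, white] onto [0, 1] along a straight line."""
    y = (x - black) / (white - black)  # constant slope: the "linear curve"
    return min(max(y, 0.0), 1.0)       # clip outside the chosen points

# with black = 0 this is plain multiplication by 1/white,
# i.e. exactly an exposure adjustment
print(black_white_scale(0.45, black=0.02, white=0.9))  # ~0.489
```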

The reason for scaling your image thusly has one of its roots in the difference between your camera’s tone measurement range and most displays’ range between black and white. A 14-bit raw file would seem to go most of the way toward filling the 16-bit integers available on modern computers, but really it only spans a quarter of the 16-bit range, 0-16383. And then, most consumer displays are still just 8-bit, so eventually a scaling has to be done in the “bad” direction, the direction that loses precision. And so, setting a point in the data range as “white” tells the software where to scale your data to meet that expectation.
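
To make those numbers concrete (simple arithmetic, my framing):

```python
print(2**14 - 1)                  # 16383: the top of a 14-bit raw file
print((2**14 - 1) / (2**16 - 1))  # ~0.25: a quarter of the 16-bit range
print(2**14 // 2**8)              # 64: raw levels collapsed into each 8-bit step
```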

A good “white” isn’t always the max value of all three channels; if you over-exposed your image, all the light past the sensor’s capability will just get glommed at the saturation point. Well and good until white balance is applied; now each of those channels is shifted in various ways, left or right of green (the common reference for white balance) and those saturation spikes separate from each other in the histogram. If you set the white point at the highest spike, your whites will take on a (usually) magenta cast, describing the residual color contributed by the spikes that aren’t at the white point. In that case you have to set the white point close to or at the lowest spike. This is stuff most raw processors do before they present an image to you for further mangling.
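
A sketch of that “lowest spike” rule (my code and names; real raw processors are more careful about this):

```python
import numpy as np

def safe_white_point(saturation, wb_mults):
    """After white balance, each channel's clipped data lands at
    saturation * multiplier; white-pointing at the lowest of those
    spikes avoids the magenta cast described above."""
    return saturation * np.min(wb_mults)

# e.g. typical daylight multipliers with green as the reference:
print(safe_white_point(1.0, [2.0, 1.0, 1.5]))  # -> 1.0, green's spike
```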

That same “linear curve” can be used to correct a lot of color casts. Look at the RGB histogram of a color negative sometime; you’ll notice that each channel has about the same shape, but shifted left-right from the others. If your software lets you set separate black and white points for each R, G, and B channel, you can set them per channel and scale all three to the same limits, which removes the color cast. You can use white balance multipliers to do this, but you can’t set separate black points with them, so the channel shifting can’t be made fully equivalent.
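
And a sketch of that per-channel stretch (my code; it assumes a float HxWx3 image and takes the channel extremes as the black and white points, which a real negative inversion would refine):

```python
import numpy as np

def per_channel_stretch(img):
    """Scale each channel's [min, max] to [0, 1] independently,
    lining up the three shifted histograms."""
    flat = img.reshape(-1, 3)
    lo, hi = flat.min(axis=0), flat.max(axis=0)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```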

More than you probably wanted to know. Sorry, just got into a writing mood…

Thank you @ggbutcher for taking the time to explain this. As you mentioned above

RT is doing things for you that you need to tease apart from what we’re talking about, to ensure you’re understanding the specific effect of these transforms on the appropriate input data.

and it was this aspect that I was trying to get to the bottom of. There is definitely an observable difference between the two methods in RT, and the conclusion I have come to from playing around with both is that it’s best to get the mid-tones about right with the exposure slider and use the ‘linear curve’ to set the white point. Your blackwhitepoint tool implementation in rawproc sounds similar to what I remember from Lr, and although the implementation may be different, I found it was a quick and easy way of getting a good starting point for the rest of the editing.

Early on, I discovered the concept of “contrast-stretch” in the raw processing tutorial at gmic.eu:

(just revisited the page, oh @David_Tschumperle, that is surely an “attention-getting” profile picture…) Over the next years, I came to learn a lot more about data formats, device capabilities, and general image processing, all of which act in the context of the “scale” of the image data. Particularly, I discovered the inexactitude of the thing we know as exposure; it’s really about putting the parts of the image you care about in the range between the sensor’s noise threshold and its saturation point.

So, for my proof processing, a simple black-white point scaling is usually all I have to do to get an acceptable image, indeed most times one that looks better than the in-camera-produced JPEG. From there, I have the data basis to consider custom curves at my whim; currently, I’m playing a bit with log-gamma and filmic, but I still prefer my own devices in shaping a curve for a particular image.

Sometimes, I find that the auto black-white point operation applied in the proof script clips highlights that I want to see in their own glory. So, I re-open the raw from the proof JPEG, which re-applies the proof processing, and I adjust the blackwhitepoint tool to pull the highlights back into play. Then I usually need some sort of curve to pull the mid-tones back up. I guess dt’s filmic tool would do that for me, but I’m a “manual transmission” sort of driver, and prefer to shape my images to my immediate whim, not through trying to figure out slider side-effects. YMMV.

I have a bigger reason to be discussing this at length; I started departing from the mainstream applications early because tools like G’MIC showed me the value and power of a toolbox of operations you can apply in any order you like, and to do so from the first input of the image array from the raw file, in order to consider the full effect of every single subsequent transform imposed on the image. I’ve learned a lot in doing so, more than if I had continued to rely on pre-processing chains, base curves, and all the other abstractions that keep the details from confounding folks who just want to photograph things. Don’t get me wrong; dt and RT are well-engineered products that do a great service supporting photographers in getting the job done, but really learning about the basis and effects of image transforms requires a “de-constructive” approach: knowing what the data looks like to start, and what it takes to make it a finished image, step by step. I’m in a career transition right now, but after that bow is tied I’m probably going to make a video tutorial along the lines of “from-scratch” raw processing, using rawproc.


I agree completely about needing to learn the basics. My move away from Lr was partly due to the fact that I had absolutely no idea what was going on behind the sliders. Sure, you can learn what to tweak to get a particular effect, but the day Adobe (or any other supplier for that matter) pulls the plug on the product, you have to start all over again. I look forward to seeing the video :slight_smile:

I’m back from travel and vacation :slight_smile:

First of all, please excuse my bad English, which will probably get in the way of understanding.

I have already spoken on this subject, but it does not hurt to explain again.

I will not speak about the mathematics literally (what Alberto explains is correct), but about its interaction with colorimetry.

The “exposure” slider acts in the same way on the 3 RGB channels, which for mid-tones leads to a faithful representation of the change of exposure.
But for colors at the gamut limit (highlights or deep shadows), each channel is “calculated” separately, which in this case brings a deviation from the true luminance (but what is true luminance?).
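
A tiny illustration of that gamut-limit deviation (my numbers, not jacques’):

```python
def expose(rgb, factor):
    """Multiply all three channels equally, clipping at the gamut limit 1.0."""
    return [min(c * factor, 1.0) for c in rgb]

print(expose([0.9, 0.5, 0.3], 1.5))  # -> [1.0, 0.75, 0.45]
# the red channel clipped, so the ratios between channels changed:
# hue and luminance no longer track the true exposure change
```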

For me (and for many universities), the least bad representation of “true luminance” is Lab (L* a* b*).

For the curves the problem is more complex.
Each “RGB” curve model takes into account a calculation of luminance, for example l = (r + g + b) / 3, or l = r * 0.2126729 + g * 0.7151521 + b * 0.0721750, etc.
If you compare these results with L* from “L* a* b*”, the differences are significant, often huge, and this has consequences for the overall rendering, the white point and the black point.
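
For the curious, a quick numerical comparison of those luminance estimates against L* (my sketch; it assumes linear r, g, b in [0, 1] and glosses over the white-point questions raised in the list below):

```python
def cie_L(Y):
    """CIE 1976 L* from relative luminance Y (white point Yn = 1)."""
    eps = (6 / 29) ** 3
    f = Y ** (1 / 3) if Y > eps else Y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

r, g, b = 0.9, 0.1, 0.1                               # a saturated red
mean_l = (r + g + b) / 3                              # -> ~0.367
y709 = 0.2126729 * r + 0.7151521 * g + 0.0721750 * b  # -> ~0.270
print(mean_l, y709, cie_L(y709))                      # L* is ~59 on its 0-100 scale
```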

But it is a choice, and you cannot have the butter and the money for the butter (you can’t have your cake and eat it too)!

Recall: the entire color chain is complex; each of these points alone deserves several university theses.

  1. White balance and its almost mandatory companion, color appearance adaptation, as soon as we move away from the D50 reference.
  2. ICC or DCP profiles: what to think of elaborate profiles built from a 24-color target (close to sRGB) while the user chooses a “ProPhoto” working profile… What to do with the “lost” colors?
  3. The same question for this profile when the illuminant is not D50…?
  4. The majority of software, including RT, has been designed around the RGB model, which has its advantages and disadvantages: the various RGB models have brought concepts with pleasant results, often judged better than the Lab model, which must be associated with the “Munsell” correction for saturation (as in RT).

jacques


Thank you @jdc for the clarification. If I have understood you correctly, this would explain why I didn’t see any mid-tone difference between the curve and exposure-slider methods for setting the white point when I was using the grey-scale wedge, but saw a noticeable difference when using a colour image?