For what it's worth, Guillermo Luijk has a dcraw tutorial that discusses this:
http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm
Scroll down to "EXPOSURE CORRECTION USING CURVES"…
Yep.
But the black point still changes when you change the exposure.
Then, something else is being done besides the exposure transform. You can't move a value that is RGB=0,0,0 off that with multiplication.
When it is determined that a particular camera has a non-zero value for its black point, that compensation occurs as a subtraction of that value from all measurements in the image. I'm having fun with just that right now; my trusty D7000 had a black point of zero, but my new Z6 has a black point of 1008, so that has to be subtracted from the image early on to make it look right.
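In code terms, that order of operations looks something like this (a minimal sketch; the function name and sample values are illustrative, not any particular program's code):

```python
def apply_black_and_exposure(raw_value, black_point, ev):
    """Subtract the camera's black point, then apply exposure as a
    multiplication. Black subtraction happens first, so black stays black."""
    linear = max(raw_value - black_point, 0)  # black point compensation
    return linear * (2.0 ** ev)               # +1 EV doubles every value

# A Z6-style black point of 1008 (the value mentioned above):
print(apply_black_and_exposure(1008, 1008, 1.0))  # 0.0 - black stays put
print(apply_black_and_exposure(3024, 1008, 1.0))  # 4032.0 - the rest scales
```

This is why black doesn't move under exposure: multiplication can't shift a zero.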
Ok, so black is black. And the rest of the image moves in relation to that. Is that a better way of saying the black point changes?
My comment was supposed to be a simple comparison of the two techniques in question, not a technical legalese definition of exposure.
I'm afraid not, sorry. Or at least I don't understand what you mean by that.
My comment was supposed to be a simple comparison of the two techniques in question
applying a linear curve is the same as multiplying
@Marctwo, the two techniques in question have a technical basis. If one doesn't understand that basis, they're just moving sliders around and hoping for the best. If that's your objective, have fun.
@Wayne_Sutton, I do the curve technique to adjust my white point all the time, in the manner described in the article I posted earlier. Both it and exposure compensation are "linear scaling", or multiplicative, operations; the adjusted variable just has a different meaning in each supporting equation. I like the curve method because I can stare at the histogram, figure out a white point I like, and scooch the control point over to it. I find this to be especially important if there is clipped data in the raw image, as that gets shifted arbitrarily by white balance; if you don't either white-point to the lowest clipped channel or do a "reconstruction pet trick", you get the dreaded magenta cast.
Now, this is based on my software, not RT. RT is doing things for you that you need to tease apart from what we're talking about, to ensure you're understanding the specific effect of these transforms on the appropriate input data.
Thanks everyone for the replies. I'm still not sure I completely understand what is going on, however. If you push the exposure slider hard over to the right, the black point does indeed appear to stay put on the histogram while everything above black is shifted to the right. RawPedia may be a bit misleading here, @Morgan_Hardwood, because it states: "Moving it to the right shifts the whole histogram to the right. This means this slider changes the black point (on the very left of the histogram) and the white point (on the very right)." The formula indicated by @agriggio would presumably explain the difference between the two methods as far as the effect on the midtones is concerned, although if I understand you correctly @ggbutcher, the two methods are both strictly linear operations, so I'm still a bit confused on that point. For what it's worth, I found a similar discussion regarding the use of the two methods in Lr, but I'm not sure it adds anything to our discussion. The 4th post in the thread by MFRYE describes his observations using the two methods, and these seem to stack up with what I have observed: two different ways to set black- and whitepoint
I agree with you @ggbutcher regarding the curve technique.
lol. If only it were that simple…
There are many levels between those extremes. I don't fully understand the molecular structure of the gases in a light bulb… but I know when to turn the light on.
then please do elaborate. Feel free to go as complex/detailed as you think is needed; I'll do my best to follow…
I did that in my first comment.
I see. Thanks for clarifying.
"Linear" in this case refers to a transfer function that progresses by a constant. Simply, this means the transfer function graphs as a line. Exposure and the two-point "curve" both have this characteristic, which is why exposure can equivalently be applied by a poorly-named "linear curve".
This characteristic of a two-point curve is very useful in image processing. So much so that I wrote a separate curve routine just for that case, because computing x->y with a slope is so much more efficient than looking up y for each x in a complicated spline algorithm. It forms the basis for my blackwhitepoint tool in rawproc, where I can set black and white on a two-button slider and this "linear curve" is applied between the two points.
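That slope-based computation can be sketched in a few lines, assuming data normalized to [0,1] (the function name is illustrative, not rawproc's actual code):

```python
def linear_curve(x, black, white):
    """Map the [black, white] interval onto [0, 1] with a straight line,
    clamping values outside the two control points."""
    slope = 1.0 / (white - black)      # constant of the 'linear' transfer
    y = (x - black) * slope
    return min(max(y, 0.0), 1.0)       # clamp below black, above white

# Setting black=0.05 and white=0.9 stretches the data between them:
print(linear_curve(0.05, 0.05, 0.9))  # 0.0 - the chosen black point
print(linear_curve(0.9,  0.05, 0.9))  # 1.0 - the chosen white point
```

No spline evaluation, no lookup table: one subtraction, one multiply, one clamp per value.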
The reason for scaling your image thusly has one of its roots in the difference between your camera's tone measurement range and most displays' range between black and white. A 14-bit raw file would seem to go most of the way toward filling the 16-bit integers available on modern computers, but really it's only a quarter of the 16-bit range, 0-16383. And then, most consumer displays are still just 8-bit, so eventually a scaling has to be done in the "bad" direction, the direction that loses precision. And so, setting a point in the data range as "white" tells the software where to scale your data to meet that expectation.
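The arithmetic behind that "quarter of the range" remark, plus the scaling a white point implies (the 12000 white point below is purely illustrative):

```python
# 14-bit raw data occupies only a quarter of the 16-bit integer range:
raw_max = 2**14 - 1          # 16383
int16_max = 2**16 - 1        # 65535
print(raw_max / int16_max)   # ~0.25: the data sits in the bottom quarter

# Declaring a value as "white" tells the software what to scale to full
# range; everything above it will clip:
white_point = 12000
scale = int16_max / white_point
print(min(round(white_point * scale), int16_max))  # 65535: white at the top
```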
A good "white" isn't always the max value of all three channels; if you over-exposed your image, all the light past the sensor's capability will just get glommed at the saturation point. Well and good until white balance is applied; now each of those channels is shifted in various ways, left or right of green (the common reference for white balance), and those saturation spikes separate from each other in the histogram. If you set the white point at the highest spike, your whites will take on a (usually) magenta cast, describing the residual color contributed by the spikes that aren't at the white point. In that case you have to set the white point close to or at the lowest spike. This is stuff most raw processors do before they present an image to you for further mangling.
That same "linear curve" can be used to correct a lot of color casts. Look at the RGB histogram of a color negative sometime; you'll notice that each channel has about the same shape, but shifted left-right from the others. If your software lets you set separate black and white points for each R, G, and B channel, you can set them per channel, and they scale the channels to the same limits, which removes the color cast. You can use white balance multipliers to do this, but you can't set separate black points with them, so the channel shifting can't be fully equivalent.
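Here is that per-channel scaling in miniature; the pixel and channel limits are made-up numbers standing in for a negative's shifted histograms:

```python
def scale_channel(v, black, white):
    """Two-point linear scaling of a single channel, clamped to [0, 1]."""
    return min(max((v - black) / (white - black), 0.0), 1.0)

# Channels with the same histogram shape but shifted limits
# (illustrative values, as from an RGB histogram of a color negative):
pixel = (0.55, 0.40, 0.30)
limits = {"R": (0.30, 0.80), "G": (0.15, 0.65), "B": (0.05, 0.55)}

# Scaling each channel by its own black/white points removes the cast:
balanced = tuple(scale_channel(v, *limits[c]) for v, c in zip(pixel, "RGB"))
print(balanced)  # all three channels land at the same value, ~0.5
```

A white-balance multiplier could equalize the upper ends, but with no per-channel black point the lower ends would stay misaligned.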
More than you probably wanted to know. Sorry, just got into a writing mood…
Thank you @ggbutcher for taking the time to explain this. As you mentioned above
"RT is doing things for you that you need to tease apart from what we're talking about, to ensure you're understanding the specific effect of these transforms on the appropriate input data."
and it was this aspect that I was trying to get to the bottom of. There is definitely an observable difference between the two methods in RT, and the conclusion I have come to from playing around with both is that it's best to get the mid-tones about right with the exposure slider and use the "linear curve" to set the white point. Your blackwhitepoint tool implementation in rawproc sounds similar to what I remember from Lr, and although the implementation may be different, I found it was a quick and easy way of getting a good starting point for the rest of the editing.
Early on, I discovered the concept of "contrast-stretch" in the raw processing tutorial at gmic.eu:
(just revisited the page; oh @David_Tschumperle, that is surely an "attention-getting" profile picture…) Over the next years, I came to learn a lot more about data formats, device capabilities, and general image processing that all act in the context of the "scale" of the image data. Particularly, I discovered the inexactitude of the thing we know as exposure; it's really about putting the parts of the image you care about in the range between the sensor's noise threshold and its saturation point.
So, for my proof processing, a simple black-white point scaling is usually all I have to do to get an acceptable image, indeed most times one that looks better than the in-camera-produced JPEG. From there, I have the data basis to consider custom curves at my whim; currently, I'm playing a bit with log-gamma and filmic, but I still prefer my own devices in shaping a curve for a particular image.
Sometimes, I find that the auto black-white point operation applied in the proof script clips highlights that I want to see in their own glory. So, I re-open the raw from the proof JPEG, which re-applies the proof processing, and I adjust the blackwhitepoint tool to pull the highlights back into play. Then I usually need some sort of curve to pull the mid-tones back up. I guess dt's filmic tool would do that for me, but I'm a "manual transmission" sort of driver, and prefer to shape my images to my immediate whim rather than trying to figure out slider side-effects. YMMV.
I have a bigger reason to be discussing this at length; I departed early from the mainstream applications because tools like G'MIC showed me the value and power of a toolbox of operations you can apply in any order you like. And, to do so from the first input of the image array from the raw file, in order to consider the full effect of every single subsequent transform imposed on the image. I've learned a lot in doing so, more than if I'd continued to rely on pre-processing chains, base curves, and all the other abstractions that keep the details from confounding folks who just want to photograph things. Don't get me wrong; dt and RT are well-engineered products that do a great service supporting photographers in getting the job done, but really learning about the basis and effects of image transforms requires a "de-constructive" approach: knowing what the data looks like to start, and what it takes to make it a finished image, step by step. I'm in a career transition right now, but after that bow is tied I'm probably going to make a video tutorial along the lines of "from-scratch" raw processing, using rawproc.
I agree completely about needing to learn the basics. My move away from Lr was partly due to the fact that I had absolutely no idea what was going on behind the sliders. Sure, you can learn what to tweak to get a particular effect, but the day Adobe (or any other supplier, for that matter) pulls the plug on the product, you have to start all over again. I look forward to seeing the video.
I'm back from travel and vacation.
First of all, excuse my bad English, which will probably hinder understanding.
I have already spoken on this subject, but it does not hurt to explain again.
I will not speak about the mathematics as such (what Alberto explains is correct), but about the interaction with colorimetry.
The "exposure" slider acts in the same way on all 3 RGB channels, which, for the mid-tones, leads to a faithful representation of the change of exposure.
But for colors at the gamut limit (highlights or deep shadows), each channel will be "calculated" separately, which in this case brings a deviation from the true luminance (but what is true luminance?).
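A small sketch of that effect (illustrative values): the same gain is applied to all three channels, but once one channel hits the gamut limit and clips, the channel ratios, and so the perceived hue and luminance, no longer track the exposure change:

```python
def expose(rgb, gain, clip=1.0):
    """Multiply all three channels by the same gain, then clip at the
    gamut limit. Clipping breaks the ratios for near-limit colors."""
    return tuple(min(c * gain, clip) for c in rgb)

mid = (0.20, 0.10, 0.05)     # mid-tone: well inside the gamut
bright = (0.60, 0.30, 0.15)  # same hue, near the upper limit

print(expose(mid, 2.0))      # (0.4, 0.2, 0.1): ratios preserved, faithful
print(expose(bright, 2.0))   # (1.0, 0.6, 0.3): red clipped, hue deviates
```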
For me (and many universities), the least bad representation of "true luminance" is Lab (L* a* b*).
For the curves the problem is more complex.
Each "RGB" curve model takes into account a calculation of the luminance, for example: l = (r + g + b) / 3, or l = r * 0.2126729 + g * 0.7151521 + b * 0.0721750, etc.
If you compare this result with L* a* b*, the differences are important… often huge, and this has consequences for the overall rendering, the white point, and the black point.
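To see how much the choice of luminance formula alone matters, compare the two RGB mixes quoted above on a saturated red (values are illustrative):

```python
def lum_average(r, g, b):
    """Simple channel average: l = (r + g + b) / 3."""
    return (r + g + b) / 3

def lum_weighted(r, g, b):
    """The weighted mix quoted above (Rec.709-style coefficients)."""
    return r * 0.2126729 + g * 0.7151521 + b * 0.0721750

r, g, b = 0.9, 0.1, 0.1       # a saturated red
print(lum_average(r, g, b))   # ~0.367
print(lum_weighted(r, g, b))  # ~0.270 - a large spread for the same pixel
```

And neither of these is L* from Lab, which adds yet another disagreement on top.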
But it is a choice, and you cannot have the butter and the money for the butter!
Recall: the entire color chain is complex; each of these points alone deserves several university theses.
jacques
Thank you @jdc for the clarification. If I have understood you correctly, this would explain why I didn't see any mid-tone difference between the curves and exposure slider methods for setting the white point when I was using the grey-scale wedge, but saw a noticeable difference when using a colour image?
@Wayne_Sutton
Yes, partially, because there are also differences between the "exposure slider" and "all curves".
For each of the "curves", there is an interpretation of luminance and/or saturation, with formulas coming from Adobe or elsewhere.
For some of them, the "mix" of RGB channels acts, in theory, only on luminance, but it depends on the working profile. When you mix R, G, B, the values are different if you use sRGB, ProPhoto, ACES, Rec2020, etc.
In "Local adjustments" (newlocallab branch), "Exposure" is made entirely in L* a* b* mode, and at the end of the "Local adjustments" module you have the choice to correct local saturation gaps with a "Munsell" correction (this correction also exists in the main menu).
For example, with the choice "luminance", if the color is a red:
L* = 60, a* = 40, b* = 50
sRGB [0…255]: R=220 G=113 B=56
ProPhoto [0…255]: R=162 G=109 B=57
With the formula Luminance = r * 0.2126729 + g * 0.7151521 + b * 0.0721750:
Luminance sRGB [0…255] = 132
Luminance ProPhoto [0…255] = 116
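These figures can be checked directly from the RGB values quoted above, using only the formula given (a quick sketch):

```python
def luminance(r, g, b):
    """The weighted RGB mix quoted above (Rec.709-style coefficients)."""
    return r * 0.2126729 + g * 0.7151521 + b * 0.0721750

# The same Lab red (L*=60, a*=40, b*=50), expressed in two working profiles:
print(luminance(220, 113, 56))  # sRGB values     -> ~131.6, i.e. about 132
print(luminance(162, 109, 57))  # ProPhoto values -> ~116.5, i.e. about 116
```

The same color, the same formula, yet the "luminance" differs by about 15 levels just from the choice of working profile.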
jacques
Thank you very much, Jacques, I think I am beginning to understand what is going on. As you say in French, "Je comprends vite mais il faut m'expliquer longtemps…" ("I understand quickly, but you have to explain it to me for a long time").
Best regards,
Wayne