With your help, I would like to run a little experiment to compare the results of linear vs. gamma-encoded RGB editing.
Below I’m showing two versions of the same image, in which I have stretched the contrast by setting the black and white points with a linear curve.
In both cases, I have set the perceived 70% brightness to 100%, and the perceived 7% brightness to 0%, with a straight line connecting the two points.
However, in one case the straight line is applied to RGB values encoded with a perceptually uniform tone response curve (like the one for the CIELab lightness channel), while in the other case the straight line is applied to linear RGB values (i.e. no gamma encoding). In this latter case, the white and black points have been converted from perceptual to linear encoding before computing the straight line.
The initial RAW image is converted to the Rec.2020 colorspace using the standard Adobe color matrix, and no exposure compensation.
In one case, I’ve used a version of the Rec.2020 colorspace with a perceptually uniform Tone Response Curve (TRC), while in the other case the Rec.2020 colorspace was encoded with a linear TRC.
After applying the contrast stretching, the two images have been converted to standard sRGB and then saved to disk.
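For anyone who wants to reproduce the idea, here is a minimal sketch of just the black/white point step, in Python with numpy. It assumes the CIE L* curve as the perceptually uniform TRC and linear values normalized to [0, 1]; the actual profiles and the Rec.2020/sRGB conversions are left out, and the function names are mine:

```python
import numpy as np

def linear_to_Lstar(y):
    """CIE L* lightness (scaled to 0..1) from linear light in 0..1."""
    y = np.clip(y, 0.0, 1.0)
    return np.where(y > 216 / 24389, 1.16 * np.cbrt(y) - 0.16, y * 24389 / 2700)

def Lstar_to_linear(l):
    """Inverse of linear_to_Lstar."""
    l = np.clip(l, 0.0, 1.0)
    return np.where(l > 0.08, ((l + 0.16) / 1.16) ** 3, l * 2700 / 24389)

black, white = 0.07, 0.70   # perceived 7% black point and 70% white point

def stretch_perceptual(rgb_linear):
    """Straight line applied to perceptually encoded values."""
    p = linear_to_Lstar(rgb_linear)
    p = np.clip((p - black) / (white - black), 0.0, 1.0)
    return Lstar_to_linear(p)

def stretch_linear(rgb_linear):
    """Straight line applied to linear values, with the black and white
    points converted from perceptual to linear encoding first."""
    lo, hi = Lstar_to_linear(np.array([black, white]))
    return np.clip((rgb_linear - lo) / (hi - lo), 0.0, 1.0)
```

In both variants the same two scene tones end up at 0 and 1; only the shape of the transition between them differs.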
All RGB profiles are taken from the set kindly provided by Elle Stone, while the original RAW file is from Andreas Katifes.
I think visually I like Edit #2 for the deeper blues and slightly increased contrast? I feel like there’s nothing interesting happening in the foreground, so the bright clouds in the distance draw my eye more.
Edit #1 might look more “natural” if I’m looking for something that would visually fit what I might expect if I were standing at the scene?
No idea which one was processed in linear RGB. What does one even look for visually to know?
I like the image of Edit #2, also. As stated by Pat, the deepness of the colors and increased contrast were noticeable and more pleasing to the eye. My eye was strongly directed to the bright white cloud in the background in Edit #2. I would say Edit #2 is the winner for me. There was one thing that I liked better in Edit #1, and that was the increased contrast between the light post on the right and the background.
In my initial post I should have stressed that the two images were not intended to be “final edits”, but just the result of the very initial step of setting the black and white points…
In fact, the key point here is: “which one looks closer to the scene you could have seen with your own eyes?”
I am happy to see that the difference is insignificant. I don’t know which was processed in linear space and which gamma-encoded, and am pleased that I could produce either version with small adjustments to the curve, thereby taking the great burden and stress of figuring out which processing technique is better off my chest.
Significant or not, in one case moving the white point to the left on the curve is practically equivalent to a change of the RAW exposure; in the other case it is not.
If you want to brighten a TIFF or JPEG image in the correct way (i.e. equivalent to an exposure compensation), then you have to work in linear RGB… or add a custom curve that will only be valid for a given gamma encoding.
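To make that concrete, here is a rough sketch (Python/numpy, assuming an sRGB-encoded image with values in [0, 1]; the helper names are mine) of what an “exposure-style” brightening looks like when the data is gamma-encoded: you decode to linear, multiply, and re-encode, and the equivalent curve on encoded values is tied to that particular encoding:

```python
import numpy as np

def srgb_decode(v):
    """sRGB encoded -> linear light."""
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(y):
    """Linear light -> sRGB encoded."""
    y = np.clip(y, 0.0, 1.0)
    return np.where(y <= 0.0031308, 12.92 * y, 1.055 * y ** (1 / 2.4) - 0.055)

def expose(rgb_srgb, ev):
    """Brighten an sRGB-encoded image by 'ev' stops, the exposure way:
    decode to linear, multiply by 2**ev, re-encode. The equivalent curve
    on encoded values, x -> srgb_encode(2**ev * srgb_decode(x)), only
    holds for this specific encoding."""
    return srgb_encode(srgb_decode(rgb_srgb) * 2.0 ** ev)
```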
As I already stressed, my point is not how to obtain a beautiful image that “interprets” reality for artistic purposes, but how to obtain a “natural” result from a very basic adjustment like changing the black and white points. The creative part, if needed, starts after that…
I’m maybe taking a “too scientific” approach here, but I still believe that one of the two images is a better starting point for further editing than the other, and this can simplify the work and/or improve the final result.
I would say that image #1 is more natural in regards to detail, in that I think the increased detail in the lamp on the right would be seen by the eye in image #1. The brightest point in the image seems to be the bright white cloud to the right of the lamp on the left. The brightness of this point looks the same to me in both images. Image #2 seems to be more natural in regards to the brightness of the image, especially in the shadow areas. My conclusion is that I can’t decide which image looks more natural. If I were given both images and asked to decide which I would be able to get the best PP results from, I would pick the first image. I probably would give the second image a try, also, but would edit image #1 first. I hope this helps more than confuses.
However, is it valid to apply the same transformation to both?
In either case, I’d base the curve (or levels adjustment, in this case) on the histogram behind the curve. 70% is not the same source brightness on both images, is it?
In the linear case, the 70% point is converted to linear encoding before applying the adjustment, and likewise the 7% black point. That’s why the brightest clouds and the darkest part of the lamp look the same in both images. It is the transition between the two points that is different.
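A quick numeric check, assuming the CIE L* curve as the perceptual TRC (the exact TRC of the profile used may differ slightly): the 7% and 70% perceptual points correspond to roughly 0.008 and 0.41 in linear light, and both pipelines pin exactly those two input values to 0 and 1, so the extremes match and only the tones in between are rendered differently:

```python
import numpy as np

def Lstar_to_linear(l):
    """L*-style perceptual value (0..1) -> linear light (0..1)."""
    l = np.asarray(l, dtype=float)
    return np.where(l > 0.08, ((l + 0.16) / 1.16) ** 3, l * 2700 / 24389)

print(Lstar_to_linear([0.07, 0.70]))   # ~[0.0077, 0.4075]
```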
In that case, I guess the question is whether the results of a linear adjustment work better in linear or perceptual space. Which one needs a third control point to achieve the desired result, and hence would be slower in use?
I prefer #1, mostly because it seems to have more detail on the hills. But the deep blue sky in #2 clashes a bit with the lack of shadows and contrast on the platform.
Thanks for all your replies! It seems that the verdict is roughly 50/50 for the two images, but it has been correctly guessed that #1 is the one processed in linear RGB.
The lesson I take from this discussion is that an image editor should provide the user with both options, and give a clear explanation of the advantages and drawbacks (if any) of each method.
If the editing goal is to match reality as closely as possible, then linear RGB processing is the recommended choice.
I immediately picked edit #2 as incorrect, probably because I have experience in how important it is to use a linear space for any blending/interpolation operation. I’m used to citing the importance of this and people going … “I didn’t know that was even a thing.”