I have seen cases where a very bright area was handled by shifting the histogram far to the left, so that the left tail went off the scale, to work on the bright area, and then a second instance was used with the histogram in the middle to work on the darker areas. I just wondered whether there is a downside to doing that?
One instance gets the same input data… with two, the second is working with the results from the first… it really shouldn't be an issue IMO
Yes, that would be a possibility. But I wanted to demonstrate the "contrast compensation function" of the mask in TE in the video, because the example was very suitable; this function is rarely used, and for some it is not clear what it does.
Do you have any opinions or use cases for the different norms when creating and adjusting your TE mask??
Yes, this can also immensely improve masking!
Maybe it would be really worth making an episode just for Tone Equalizer…
Any video that you choose to make is well worth it for everyone… :). I try the RGB sum and geometric mean, but I really have no idea why; they just seem to offer a nicer result sometimes…
That depends on the color.
You can see it best in this example when you turn off preservation of details (I only lightened the highlights and set the mask exposure compensation automatically with the color picker for all three algorithms):
Power norm - yellow is brightest
Euclidean norm - yellow is a bit darker
Geometric mean - yellow is darkest, with much more detail (local contrast)
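To make the difference more concrete, here is a rough numpy sketch of the three estimators as I understand their formulas; darktable's own implementation may normalise or weight them differently, so treat the numbers only as an illustration of why a saturated yellow comes out darker in the geometric-mean mask:

```python
import numpy as np

def euclidean_norm(r, g, b):
    # Root of the mean of squares, scaled here so pure white (1,1,1) gives 1
    # (my normalisation, not necessarily darktable's).
    return np.sqrt((r*r + g*g + b*b) / 3.0)

def power_norm(r, g, b):
    # (R^3 + G^3 + B^3) / (R^2 + G^2 + B^2): dominated by the brightest channel.
    return (r**3 + g**3 + b**3) / (r*r + g*g + b*b)

def geometric_mean(r, g, b):
    # Cube root of R*G*B: pulled down hard by the weakest channel.
    return (r * g * b) ** (1.0 / 3.0)

# A saturated yellow in scene-linear RGB: strong red and green, very little blue.
r, g, b = 0.8, 0.7, 0.05
print(power_norm(r, g, b))      # ~0.76 -> the mask sees yellow as fairly bright
print(euclidean_norm(r, g, b))  # ~0.61
print(geometric_mean(r, g, b))  # ~0.30 -> the mask sees yellow as much darker
```

Because the geometric mean collapses towards the weakest channel, saturated colors end up lower in the mask, which changes how much of a highlight or shadow adjustment they receive.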
I have noticed this, but sometimes I then go back and redo the auto exposure compensation and there is less of a difference. I was never sure of the correct intended sequence if you change norms… thanks for the example.
Thank you for interesting videos and examples.
Trying to use the same approach as in your first example, it seems to me that it's easier and better just to use two tone equalizer modules on top of each other to lighten the shadows.
In the following example the highlights are OK but the shadows are way too dark. Exposure is raised by 0.5 EV. No other adjustments; every setting is at its default.
Now I raise the exposure by 4 EV, to +4 as in your video. The shadows are much better, but it's not possible for me to get good detail in the highlights by adjusting filmic.
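As a side note on why such a push is hard to recover from: exposure compensation in EV is a base-2 logarithm of the linear multiplier, so the step from +0.5 EV to +4 EV is much larger than it sounds. A tiny sketch:

```python
def ev_to_multiplier(ev):
    # Raising exposure by `ev` stops multiplies scene-linear values by 2**ev,
    # so every extra stop doubles what filmic has to compress back into range.
    return 2.0 ** ev

print(ev_to_multiplier(0.5))  # ~1.41x
print(ev_to_multiplier(4.0))  # 16x
```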
Default settings with two instances of tone equalizer on top of each other give the following: a much better result, which can be improved further by adjusting filmic and local contrast.
Very well, but…
…I hope you do not blindly follow what I do.
For this reason I have chosen several examples that need different treatment. In no case should you take over any fixed values, including the ones I used in the video - this always depends on the photo!
I would have liked to see what your filmic values were.
Did you increase the compression with white relative and black relative exposure?
Did you adjust the contrast in the "look" tab, or the shadows/highlights balance?
Maybe you can provide the raw file and sidecar file so that one can better judge what you have done?
Ah, gracious Boris, of course we do!!!
Kind regards,
Claes in Lund, Sweden
Thank you for the fast response.
I'm not following your steps blindly, but I failed to get a decent result by raising the exposure to a high level and then trying several adjustments in filmic (and other modules) to get the detail back in the highlights.
The screenshot only demonstrates how much work is needed in the highlights after raising the exposure (filmic settings are still just the defaults). It seems to me that it is much more straightforward to raise the shadows with the tone equalizer, since the highlights are reasonably OK from the start (and that's also the case in your first example).
I have uploaded the raw file and am curious to see whether you can get a better result in a simpler way…
DSC_6662.NEF (27.2 MB)
Hi Olaf,
I will have a look at your file soon. The new episode I just made took up my free time.
New episode: Portrait retouching
All Raw files from this episode are from:
https://www.signatureedits.com/free-raw-photos/
Sidecar files:
26.CR2.xmp (104,0 KB)
_DSC6411.NEF.xmp (58,3 KB)
IMG_4457.cr2.xmp (107,5 KB) (tag @signatureeditsco)
Thank you for all your efforts, I'm looking forward to seeing your result.
I'm especially interested in seeing how much you can raise the exposure and still bring back the detail and colors with filmic adjustments. 4 EV, as demonstrated in your video, surprises me, and I failed to get a good result for the image I uploaded.
Hi Boris,
Always a pleasure to watch your videos. I do have a question, though…
I've noticed in this video and in prior ones with people in them that you do not use, or at least try, the white of one of the eyes to set an initial white balance. You do try the background as a method, which, in my opinion, might be the bigger guess (there is no way of knowing whether it is really r = g = b).
I know that this method isn't fail-safe, but it often works to set a nice starting point that can then be refined, among other ways, with the channel mixer. Especially for people who aren't (yet) all that proficient with the channel mixer, this might be a nice intermediate step, leaving the changes that still have to be made there relatively minor and a bit more approachable.
So I'm just curious why I don't see you use the white of the eye to set the white balance.
Yes, the issue of the white balance is a very interesting one.
Here you can see exactly that: in both eyes there are a lot of red veins. Sometimes there is hardly any space between the red corner of the eye and the iris that could be chosen as a reference. Also, the transition to the iris is not sharp but - depending on the person - a gradual one.
The eye is also round; it has no flat surface that is uniformly illuminated unless you have direct frontal light. If you don't have frontal light, you may pick up different colors of light depending on the angle from which you sample the color.
Nevertheless, as you mentioned, this can be a good starting point.
But if you want to be accurate, you can't avoid using color pickers to take the values and then trying to get a neutral gray with different tools.
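As an aside for readers wondering what "getting a neutral gray" means in numbers: you sample a patch that should be achromatic and scale the channels until r = g = b. A minimal sketch of that idea only (not darktable's actual white balance or color calibration code), with a made-up sample value:

```python
import numpy as np

def neutralize(patch_rgb):
    # Channel multipliers that make a picked reference patch neutral (r = g = b),
    # using green as the reference channel. patch_rgb is the mean linear RGB of
    # the picked area, e.g. the white of an eye. Hypothetical helper, for
    # illustration only.
    r, g, b = patch_rgb
    return np.array([g / r, 1.0, g / b])

patch = (0.82, 0.78, 0.70)         # slightly warm cast on the eye white (made up)
mults = neutralize(patch)
print(mults)                       # ~[0.95, 1.00, 1.11]
print(mults * np.array(patch))     # all three channels now equal ~0.78
```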
The problem with this approach is that you focus too much on the technical side of things and forget - unless you shoot under controlled lighting conditions with a gray card - that you can't get a perfect white balance, especially under difficult lighting conditions.
So you have to learn, if necessary, to be creative and to arrive at a result that is satisfactory to you and corresponds to your idea of good color composition.
In my opinion, this is more important than correct color reproduction. When photographing a beautiful snowy idyll in the evening, which part of the snowy landscape do you take as a reference: the warm-colored area directly lit by the sun, or the bluish shadow? No matter which side you take, you spoil the beautiful color mood, which lives precisely on these color differences.
The same applies to the color of skin. I don't think much of any "standard values" that are supposed to correctly represent the color of human skin. It is - like all other colors - always dependent on the environment and the color composition, which is either given or has to be created.
This is perhaps the reason why I am more inclined to encourage people to be more concerned with color composition and less focused on color accuracy.
I know about, and agree with, most of what you write. And although I know an eye is round, I had not realized the implication of that (obvious, once it is pointed out).
The need for and usefulness of correct colour reproduction is a topic all of its own, and one with very distinct opinions… My approach is often to go for the technically correct as a base and then add a creative/artistic part on top of that. When I do, I also use a colour checker and a white card for in-camera white balancing. But, as you point out, the light of the scene is at times leading and more important than a perfect white to start from. I'm not sure I agree with this when it comes to humans and their skin, unless it is an obviously artistic approach.
I think this is, in general and for the crowd you're trying to reach, the correct way.
Thanks for your clear answer!
Thank you @s7habo - very good video.
I had to have a second look at the darktable 3.8 user manual - retouch - specifically "merge from". It is very interesting how the blur applied to a lower level is carried over to the upper scales.
Initially I thought the geometric mean was new (I couldn't find it in the darktable 3.8 user manual under blend modes), then I realized the blending modes are "display" and "scene", and "scene" is selected automatically only if the auto-apply pixel workflow defaults preference is set to "scene-referred". I had that setting at "none" and was applying filmic with a style. But I have changed my settings now - it looks like there is more to what they do than I expected.
Questions: when you are using "merge from" in the retouch module, what is the main goal? My understanding is that you are extending the effect of the blur to multiple levels of the separation. Am I understanding it correctly?
On the first two images, when you were applying the colorization of the skin with the retouch module (the second instance), you separated the image into 5 levels, but for the third image you separated it into only 2 levels. What was the logic behind that?
Yes. The effect is applied to all selected scales.
This has to do with the reflectivity of the skin. The first two subjects had matte skin and I didn't want to lose the skin detail in those places. With the third model, the skin was very shiny because of skin cream, which means the local contrast was also very strong in the areas of reflection, and I wanted to reduce this effect.
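To make the idea of scales a bit more concrete for other readers, here is a rough numpy/scipy sketch of the principle behind such a separation: the image is split into detail layers of increasing coarseness plus a residual, an adjustment can be applied to selected layers only, and everything is summed back together. This is only an illustration of the concept, not darktable's actual retouch/wavelet code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, n_scales=5):
    # Split an image into n_scales detail layers plus a coarse residual.
    # Layer 0 holds the finest detail (skin texture); higher layers hold
    # coarser structure such as broad reflections.
    layers = []
    current = img.astype(np.float64)
    for i in range(n_scales):
        blurred = gaussian_filter(current, sigma=2.0 ** i)
        layers.append(current - blurred)   # detail removed by this blur step
        current = blurred
    layers.append(current)                 # residual: overall tones and colour
    return layers

def recompose(layers, gains=None):
    # Sum the layers back, optionally scaling selected detail layers.
    if gains is None:
        gains = [1.0] * (len(layers) - 1)
    out = layers[-1].copy()
    for layer, gain in zip(layers[:-1], gains):
        out += gain * layer
    return out

img = np.random.rand(64, 64)               # stand-in for one image channel
layers = decompose(img, n_scales=5)
# Tame the two coarsest detail layers (strong local contrast from shiny skin)
# while leaving the fine texture untouched.
result = recompose(layers, gains=[1.0, 1.0, 1.0, 0.5, 0.5])
print(np.allclose(recompose(layers), img))  # True: unmodified layers rebuild the image
```

Working on only a few coarse layers is, in spirit, what reducing the strong local contrast of shiny skin amounts to, while the finest layer with the pore-level texture stays intact.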