Highlight recovery

Thanks @andrayverysame.
I couldn’t go further on this.
So, to recap: recovering highlights in DT seems to lose some detail compared to ACR, best seen in the gaps on the wall, as pointed out by the red circles.

ACR:
(screenshot)

DT (as per my edits):
(screenshot)

I also use shadows and highlights, with even lower settings; I try not to go past 35. I’m looking for a way to change the default setting to 30 as a base and then adjust to taste. The default of 50 is always too high. Is there any simple way (if possible) to change it from 50 to 30? I don’t mean presets; I’ve already made those. I want the shadows and highlights module to start with 30 when I activate it.
Same with “haze removal”: how do I change the initial value from 0,5 to 0,3?

I store a new preset whenever I want different ‘defaults’ for a module. Just set the values you want, then click the presets menu and save the current state as a preset. After that, you can enable the module by simply applying the wanted preset. You can keep several there: a preset for shadows 20 and highlights 0, another for shadows 0 and highlights -20, etc.

@Gobo You could change this in the settings, under presets.

I’m trying not to use Lightroom and to learn DT, but this time I opened the file and changed:
exposure -0,48
highlights -2
shadows +64
whites -100
blacks +100
clarity +21
vibrance +21
saturation +17
crop


There are visible lines between the tiles on the left and in the middle of the photo, brick texture on the balcony to the right, and all the bars are visible on the balcony to the left. All done in 20 seconds. There must be a way to do as well (or get close to this) in DT. :slight_smile:
I’m trying base curve with exposure fusion (two/three exposures), a nice feature I found today. If I get something worth showing I’ll upload it.

Darktable preferences → Settings → presets:


I have defined a preset “haze 0,20”, but every time I turn the module on it starts with the default 0,5.
On the preset list there is no “shadows and highlights”, although I have a 30/30 preset. It also starts with the default 50/50, so each time I have to click the presets button and choose my 30/30.
Maybe making such a preset permanent still needs to be implemented.

Details

The concrete joints are visible in my minimal example. This means that, at some point in the dt processing stack, you did something to smooth them out. I have the feeling that @Gobo’s application of the haze module might have contributed to that. As I have been saying all along, I would strongly suggest that you turn off as much as possible until you know how to use each module. Otherwise, you won’t know which one is causing the issue, and out of ignorance you might be fooled into thinking that dt isn’t as good as ACR, LR, etc.


ACR, LR, etc.

As mentioned above, commercial apps tend to abstract their tools. For those who are transitioning to dt and its buddies, it is a culture shock.

@Gobo listed his LR slider values. Which one contributes to detail? Clarity does. What is Clarity? It enhances local contrast, among other things (abstraction, remember). So the next step is to find the local contrast module in dt. Equalizer falls in that category too. @paperdigits’ suggestion of using the :dragon_face: (tone mapping) is valid too, though it is not a local contrast algorithm per se.

There is also sharpening, smoothing and thresholding, which LR does behind the scenes, and the base curve as well. That is what I mean by abstraction. There might be 10 sliders, but each might do more than its title says, and even more happens even if you don’t touch a single one.

Adobe assumes that you aren’t smart enough to make the fine grained decisions, whereas dt gives you the tools to choose your own destiny.


Presets

Sometimes the solution is simple. Have you tried restarting dt, deleting the .xmp or resetting the preferences? You might want to make a backup before attempting the latter two. Or ask smart people like @houz. :space_invader:


@afre I’m not sure who you’re replying to; I assume it’s me?
In that case, about the details, here’s my thought.
I did indeed turn off every module that wasn’t needed and started from scratch. In the example you posted, the highlight recovery is not as pronounced as in the ACR example, which was the challenge. I can actually retain more detail in DT if I don’t work on the highlights too much.

All of my comments have been for the general you, addressed to no one in particular, since the questions and problems being brought up have been quite similar. The main thrust since my original post was that

1. Highlight recovery is hard when some patches are clipped or close to clipping.
2. Tool comparison only makes sense when you take the image as a whole, while viewing the tool as a variable and keeping everything else constant.
– As you said just now, stronger HL recovery interferes with the details in the mid-tones, which is a strike against the particular tool that you are using. But maybe that is what comes with that type of correction… If that is true and ACR does a “good job”, then this means that Adobe is doing something else to recover the details, which is what I discussed above.

Both of my examples are minimalist in nature and are there to illustrate a specific thing. The first one is to show the actual detail present in the raw; the other is to show that two simple curves can maintain some of the highlights while lifting the mid-tones. They are not meant to look good.

I was comparing my LR result with the results of @pass712’s process. I took the file and xmp from @pass712 to analyze them in darktable.
@afre I agree that understanding how modules work is important, but simplicity is also very important for laypeople like me. You used PhotoFlow and G’MIC in this particular example, and I’m unfamiliar with those tools. The highlight-recovery result you got is the best to me, but it uses tools I don’t know.

Actually, I’m spending most of my time learning / analyzing / testing / watching tutorials on YT instead of processing my own files. Discussions like the one above give a lot of information, but I would like to stick to darktable and master it before trying G’MIC, GIMP and other tools. I hope that darktable will fulfill my expectations to some point. I also hope that this approach is correct and that learning other tools should be the next step. Unless this is the wrong way, and users like me, moving from Adobe, should also learn other tools and the correct workflow between them. Let us know.
The major problem I find in learning darktable: there is a lot of material, but it is scattered all over. Also, similar results are achieved with completely different workflows. That’s the power of darktable, but it makes learning difficult. Here (on pixls) there are a lot of examples with xmp files to analyze and people willing to help, and I’m saying thanks to all of you :slight_smile:


The discourse software is reminding me that I am talking too much in this thread. :slight_smile: Sorry for being a blabbermouth. :blush:

1. I used PF and G’MIC (and dcraw) because I am more familiar with them than dt. However, if you take in the concepts, I am sure that you can transfer them to any app.

2. I tried using dt to make an example and have experienced many of the problems detailed in this thread. As I said previously, you need to use the “workarounds” (quotes because they aren’t workarounds) like parametric and drawn masks for the edit to be successful.


Here is my edit. Following the footsteps of my previous examples, I tried to make the history short and concise. You could definitely do more. I didn’t use any “workarounds” but I did turn off everything that I found irrelevant. Edit: here is the XMP, _DSF0498.RAF.xmp (23.2 KB). Oops, I messed up the history stack. Please compress it yourself and ignore the (off) entries.




Remarks (What I didn’t use but you could.)

A. Base curve is useful, but I left it out because you need a drawn mask to protect the mid-tones and shadows. Here is how I would have used it (see image below). Tip: In dt, there are various tools of which you can apply multiple instances, e.g. base curve and tone curve. Edit: If you examine the XMP, you can see that I did that with global tonemap.

(image)

B. I did the resize for web sharing in G’MIC, with post-sharpening. Resize in dt adds halos to the edges and swallows up the faint concrete joints.

C. Highlight reconstruction yields more detail, esp. in the highlights, when set to reconstruct colour, but the detail is kind of wacky. (In the XMP, I used LCh reconstruction.) I don’t know whether it is due to the X-Trans pattern, or whether this is a bug or limitation in dt. I am sure that it could be tamed…


Interesting how turning down the RGB levels in white balance provides better highlight recovery than turning down the exposure.

Yes, I learned that from dcraw, the grandfather of raw processors.

This should be made more accessible to Darktable users, I think. This is a pretty big feature that is a side effect of an unrelated module.

Thank you for sharing your process, afre!
I second @CriticalConundrum, that rgb levels in the white balance is a nice trick indeed :slight_smile:

I downloaded the xmp from @afre, renamed it to afre.RAF.xmp and made a copy of the original file as afre.RAF. (Yes, I know the RAF file name is inside the xmp file.)
Opening the file in dt, the history showed about 63 changes… I clicked somewhere in the middle of the list and modified something in Equalizer, so I lost all modifications above it. I decided to download the xmp file again and overwrite the one I manipulated. I should get exactly what @afre made, but this doesn’t work. I tried restarting dt, removing the file and copying the freshly downloaded xmp, making a new copy of the RAF file and copying the xmp next to it… Each time I open dt with the RAF file I don’t see the last modifications. (image)
I looked into the xmp file, and the last few changes are what I remember (from opening it the first time):
highlights
global tone map
global tone map
global tone map
base curve
But I don’t have them in the history (it shortened from about 63 to 58 modifications).
I checked this on another computer (without renaming the files) and the same thing happened: I highlighted line 42, modified sharpening (losing everything in the history above line 42), closed dt, downloaded and overwrote the xmp file, and when I open dt again I have a history with 42 lines only.
If somebody can confirm the problem, I’ll file a copy of the above on redmine.darktable. I have dt 2.4.4 on W10.

Sorry @Gobo, I uploaded the .xmp with a long history stack that shows me trying a bunch of stuff. What you should do is press the compress history stack button. Even then, there would be 12 steps remaining. Notice how some of those steps are (off)? They aren’t active steps either. In short, concentrate on the tools that have been activated here:

(image)

Only 5 (actually 4 if you consider global tonemap as one step) are turned on, making it a total of 9 processing steps in the dt stack.

If you want dt to reread the xmp files, you need to check the option in Preferences for “look for new xmp on start up”, or something like that.

I thought a lot about this last night, and I think it bears pointing out that white balance and exposure compensation are really two separate categories of tool. WB is an early and fundamental modification to the raw data; in some processors it’s done even before demosaic. EC is a discretionary tool, usually done in conjunction with some notion of the displayable image. Given that order, it makes good sense to me that WB has more influence on highlight ‘recovery’: since EC is done after WB, if WB lost highlights, EC starts from that handicap.

As a set of multipliers, WB by definition will push one channel’s data to the right. With G anchored at 1.0, one of either R or B is a number > 1.0, which will increase the original channel values. If that value happens to be at or near the saturation limit, the WB multiplier will likely push it into oblivion. An especially poignant consideration for ETTR.

So, to avoid losing data to the right in a WB operation, consider this: after the R and B multipliers are determined, scale all three numbers by a factor of 1.0/highest_multiplier. I think that’ll transform all three in synchronization and make the highest multiplier equal to 1.0, so WB won’t push any data higher than it already is. Recover highlights by not losing them in the first place… does this make sense, @afre?
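To make the idea concrete, here is a minimal Python sketch of the two approaches. This is illustrative only, not darktable’s or dcraw’s actual code; the function names, multiplier values and pixel values are made up, and raw values are assumed to be normalized to [0.0, 1.0] with 1.0 meaning sensor clip:

```python
def apply_wb(rgb, mults):
    """Multiply each channel by its WB multiplier; anything that lands
    above 1.0 is clipped to 1.0 (that data is lost)."""
    return [min(v * m, 1.0) for v, m in zip(rgb, mults)]

def normalize_mults(mults):
    """Scale all multipliers by 1.0 / highest multiplier so that the
    largest becomes exactly 1.0 and none can push data past clipping."""
    peak = max(mults)
    return [m / peak for m in mults]

# A warm-ish illuminant: with G anchored at 1.0, R and B get boosted.
mults = [2.0, 1.0, 1.5]

# A near-saturated pixel, e.g. from an ETTR exposure.
pixel = [0.75, 0.875, 0.5]

naive = apply_wb(pixel, mults)
# R clips: 0.75 * 2.0 = 1.5 -> 1.0, so the highlight detail is gone.
print(naive)   # [1.0, 0.875, 0.75]

safe = apply_wb(pixel, normalize_mults(mults))
# Multipliers become [1.0, 0.5, 0.75]: same channel ratios, no clipping.
print(safe)    # [0.75, 0.4375, 0.375]
```

The normalized image comes out darker overall, of course; the idea is that you compensate with EC afterwards, once the data is safely below the clipping point, which matches the WB-before-EC ordering described above.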

Edit: Ever write something, post it, then read it and realize you weren’t thinking of all the considerations? So, regarding my second paragraph above: it’s maybe not so likely, as the >1.0 multiplier works on values that are lower relative to G, so it’s not that likely to push them over the top. I should have my coffee before I post… :smiley:


That is a part of the equation. There are downsides to normalizing the multipliers to <1. (Re)read the RT AMaZE thread and others like it for more insight.

PS Speaking of RT, you may have to dial back certain controls before you can use the recovery tools, so we have new users remarking that recovery isn’t working, etc. I say this to point out that each raw processor may take a different approach.