Highlights recovery

Thank you for sharing your process, afre!
I second @CriticalConundrum: RGB levels in the white balance is a nice trick indeed :slight_smile:

I downloaded the xmp from @afre, renamed it to afre.RAF.xmp and made a copy of the original file as afre.RAF. (Yes, I know the RAF file name is stored inside the xmp file.)
I opened dt with the file and the history showed about 63 changes… I took a look somewhere in the middle of the list and modified something in the equalizer, so I lost all modifications above it. I decided to download the xmp file again and overwrite the one I had manipulated. I should get exactly what @afre made, but this doesn’t work. I tried restarting dt, removing the file and copying the freshly downloaded xmp, making a new copy of the RAF file plus the xmp file… Each time I open dt with the RAF file, I don’t see the last modifications.
I looked at the xmp file and the last few changes are what I remember (from opening it the first time):
highlights
global tone map
global tone map
global tone map
base curve
But I don’t have them in the history (shortened from about 63 to 58 modifications).
I checked this on another computer (without renaming files) and got the same thing: highlighted line 42, modified sharpening (lost everything in the history above line 42), closed dt, downloaded and overwrote the xmp file, and when I open dt again I have a history with 42 lines only.
If somebody can confirm the problem, I’ll file a copy of the above on redmine.darktable. I have dt 2.4.4 on W10.

Sorry @Gobo, I uploaded the .xmp with a long history stack that shows me trying a bunch of stuff. What you should do is press the compress history stack button. Even then, there would be 12 steps remaining. Notice how some of those steps are (off)? They aren’t steps either. In short, concentrate on which tools have been activated here:

image

Only 5 (actually 4 if you consider global tonemap as one step) are turned on, making it a total of 9 processing steps in the dt stack.

If you want dt to reread the xmp files, you need to check the option in Preferences called “look for updated xmp files on startup” or something like that.

I thought a lot about this last night, and I think it bears pointing out that white balance and exposure compensation are really in two separate categories of tool. WB is an early and fundamental modification to the raw data; in some processors it’s applied even before demosaicing. EC is a discretionary tool, usually done with some notion of the displayable image in mind. Given that order, it makes good sense to me that WB has more influence on highlight ‘recovery’: EC is done after WB, so if WB lost highlights, EC starts from that handicap.

As a set of multipliers, WB by definition will push one channel’s data to the right. With G anchored at 1.0, one of either R or B is a number > 1.0, which will increase the original channel values. If a value happens to be at or near the saturation limit, the WB multiplier will likely push it into oblivion. An especially pertinent consideration for ETTR.

So, to avoid losing data to the right in a WB operation, consider this: after the R and B multipliers are determined, scale all three numbers by 1.0/highest_multiplier. I think that’ll scale all three in proportion and make the highest multiplier equal to 1.0, so WB won’t push any data higher than it already is. Recover highlights by not losing them in the first place… does this make sense, @afre?
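The normalization idea above can be sketched in a few lines. This is my own toy illustration, not code from any raw processor; the example multiplier values are made up.

```python
def normalize_wb(multipliers):
    """Scale (R, G, B) white-balance multipliers by 1/max so that the
    largest becomes 1.0. Channel ratios (the actual white balance) are
    preserved, but no channel is pushed above the saturation point."""
    m = max(multipliers)
    return tuple(x / m for x in multipliers)

# Hypothetical daylight-ish multipliers with G anchored at 1.0:
r, g, b = normalize_wb((2.1, 1.0, 1.5))
# All multipliers are now <= 1.0, so WB only attenuates channels
# instead of pushing the hottest one past clipping.
```

The trade-off, as noted below, is that normalizing to <1 darkens the image overall, which has its own downsides.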

Edit: Ever write something, post it, then read it and realize you weren’t thinking of all the considerations? So, regarding my 2nd paragraph above: maybe not so likely, as the >1.0 multiplier will be working on values that are lower relative to G, so it’s not so likely to push them over the top. I should have my coffee before I post… :smiley:

1 Like

That is a part of the equation. There are downsides to normalizing the multipliers to <1. (Re)read the RT AMaZE thread and others like it for more insight.

PS Speaking of RT, you may have to dial back certain controls before you can use the recovery tools, so we have new users remarking that recovery isn’t working, etc. I say this to point out that each raw processor may take a different approach.

This time I did a full round of edits and completely forgot about Photoshop.
1 - RT

  • Exposure and Dynamic Range Compression to recover both shadows and highlights
  • Sharpening and Impulse Noise reduction
  • Wavelets (essentially, noise reduction and highlights compression)

2 - Gimp

  • Applied a chroma/tonality split edit according to Elle Stone’s tutorial (NOTE: I think this step is not necessary and the same results could be achieved either in RT or DT, but I’m practising the tutorial)

NOTE: The intermediate results from steps 1 and 2 are rather dull and flat, but they carry more information from the shadows and highlights (and chroma, from Gimp). The last step below is where I adjust all that information according to my taste.

3 - DT

  • Final tone mapping

Here’s the result:

This is a nice result!

1 Like

Thanks, I liked it too, it was fun playing with this photo.

I hope @andrayverysame didn’t give up his migration to foss :stuck_out_tongue:

Hope so. The steps you list sound complicated though. :stuck_out_tongue:

Remarks
1. Could you please share your pp3, xcf and xmp?
2. By tutorial, I am guessing the autumn one. Note that her tutorials try to be scene-referred. Therefore, doing these steps in step 2 (GIMP), e.g., after DR compression, defeats this objective.

Agreed. Every time I end my edits I get that impression…

Here they go:
1- _DSF0498.RAF.pp3 (10.5 KB)
2 - Gimp’s xcf is huge (1.4 GB!). Uploading to filebin.net; when it finishes I’ll put the url here (but it’s nothing more than the steps outlined by Elle in her tutorial, a bit simplified in the Lightness group)
EDIT: the file is here: https://filebin.net/10vg3nighjreg383
3 - _DSF0498-1.tif.xmp (10.7 KB)

Correct, this one

Why?

Once a workflow is no longer scene-referred, there is no point in doing scene-referred editing.

@afre Just to check my understanding of it.
When you say it’s no longer scene-referred you mean that by doing any kind of tone mapping edit I change the original information about color and light that is inside the raw file, right?

I am not well-versed in scene-referred. And, if you read some of the threads, it is kind of a controversial subject.

It is not about changing things per se because you do that when you calibrate and profile the colours and interpolate the raw pattern, etc. Scene-referred, if I understand correctly, is being as accurate to the scene as possible. In scientific papers, it is referred to as ground-truth. Scene-referred differs from display- or print-referred, etc. Once you do things like compressing the dynamic range, it is no longer scene-referred because the only reason you are doing that is to appease apps that cannot handle HDR or the so-called unbounded values.
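A toy numeric illustration of the point above, which is my own sketch and not any application’s actual pipeline: scene-referred data can hold values above 1.0, and a display-referred mapping throws that extra range away, so edits done afterwards no longer operate on scene data.

```python
# Linear scene-referred samples; the last two are "unbounded" highlights
# above the display maximum of 1.0.
scene = [0.2, 0.9, 1.8, 4.0]

def to_display(v):
    # Crude display-referred mapping: hard clip into the 0..1 range.
    # (Real tone mappers compress rather than clip, but the effect on
    # scene integrity is the same: the original values are gone.)
    return min(max(v, 0.0), 1.0)

display = [to_display(v) for v in scene]
# The two distinct highlight samples (1.8 and 4.0) collapse to the same
# value, so no later step can tell them apart.
```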

1 Like

Even if the first step result was saved in a wider color gamut (REC2020), like I did?

Yes, a wider gamut only lessens out-of-gamut results and does not account for scene integrity or colour deviations. Of course, throughout this thread, I have tried to make things simple. For more info (and debates), see: Unbounded Floating Point Pipelines.

@afre thanks, I’ll digest that thread sooner.

btw, the xcf file is there. I updated the other post where I linked the files.

Sorry for not having been active here; quite busy with work.

I haven’t given up, but in all truth, it’s not that I’m actually migrating.
The way I see it, it’s a matter of what suits my needs better, and if it’s commercial software it’s of course also a matter of whether I can afford it or not.
I know many people choose FOSS also because (and maybe sometimes only because) it follows a different approach to the ecosystem of software and development.
I use commercial software for my job and I couldn’t change even if I wanted to. Photography is for me different, it’s more a passion, therefore I have more choice. Ironically, more choice means more time to spend figuring things out, which is not always ideal.
All that said, it’s really a matter of balance and personal habits. I used to spend a lot of time learning software and techniques, mainly because I found it fun and it served my job, but also because I believe starting from scratch can sometimes be a healthy thing. When I learned Blender there was no one-click solution, and that made me learn things I had avoided or overlooked even after years of doing 3D professionally. It definitely paid off.

But now I do photography as a form of liberation from the technicalities; it makes me go out and move and feel more connected to what I feel, rather than spending time at a desk. So, with the spare time left, right now I’m testing software to develop my raw files, hoping to find the right balance between a good understanding of the technical aspects and having fun on the creative side.

I’m testing RawTherapee again after having played with it a few times in the past. The feeling hasn’t changed in terms of usability: I find the overall experience sluggish, at times confusing in its UI design choices, and a little overwhelming. Nonetheless, I understand there’s a lot going on with RT and much power under the hood.
As usually happens with this type of thing (and in life in general, I would add), there isn’t a perfect solution. DT feels faster and smoother; I like the group idea, the masking tools, the option to duplicate modules, the overall speed. With RT I know I can push the development when I need it, but I miss those good things from DT. Adobe is something I have been using for 20 years now; it feels like being at home, but I don’t like their business model, their habit of adding eye candy for the masses while leaving the programs buggy for professional users. The other thing, automatic processes happening without the user being aware of them, doesn’t bug me. It’s simply a different approach; some may need that environment, some don’t. The ideal solution? Something powerful (RT + DT, I guess), with a simple UI and the option to go deeper when needed, while keeping the experience simple and smooth.

And then I want to read. We talk a lot about software, but any discipline, photography included, needs a solid understanding of the fundamentals and creative aspects. Good books can add a lot to our understanding: studying art, how to work with black & white, the social implications of the medium… There’s a ton to digest, but that’s the beauty, at least for me, as I enjoy the feeling of learning.

Sorry, too much typing on the keyboard tonight, long post :slight_smile:

You should indeed use the tools that work best for you. I don’t think anyone here will tell you differently.

What bothers me about Adobe is that (1) they don’t seem to take the money from their cloud cash cow and make their software better; (2) the future of Lightroom is a webapp; (3) in a sense, you pay them to rent your own work. When you cancel that subscription, your assets are sort of useless until you resume it.

1 Like

Totally.
Like I said, at this point I use Adobe (PS, AI, AE) only because I have to, but honestly I don’t find it fun anymore.
Unfortunately, when you work in a specific industry, the market dictates what software to use: e.g., Nuke vs. Fusion. I think I would go with Fusion, but then again, there’s no work for it in my industry.

Luckily, when it comes to photography, I can finally pick what I want.
So far I’m enjoying DT, and it seems I’m getting some interesting results with RT, although I have only played with it, moving by experimentation rather than actually knowing what its features do and the concepts behind them. Nonetheless, it’s fun.
I guess the decision on whether I’ll keep using RT will arrive after I realize how much I really need to know about the technical theory of its tools, or whether I can get away with the general theory of color and raw development I’ve already gathered by using other software. I wish there were more tutorials for RT; it seems to me there aren’t many yet (and many are not in English). But all in all, it’s a fun experience.

1 Like