I’m getting more and more excited to try out the new SH tool as well as the tool you just mentioned. If I only had the time…
@agriggio I have started to play with the new S/H tool, comparing it with my preferred (but slow and complicated) method based on enfuse.
Here is an example of my method applied to the image from this PlayRAW:
To obtain this result, I did the following:
- saved several linear-gamma TIFF images at 1EV exposure increments, from 0EV to +9EV
- combined the TIFF files with enfuse using default options
- opened the original RAW, applied a +1EV exposure compensation, then loaded the enfuse output on top of it
- merged the two images with a luminosity mask built like this:
  - inverted L channel
  - reduced the weight of the mid-tones with this curve:
  - applied a bilateral blur (from G'MIC) to the curve output (a Gaussian blur produces halos around sharp edges)
- finally, slightly increased the mid-tone contrast with an RGB curve applied to the merged output
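For the curious, the masking steps above can be sketched roughly in Python/numpy. This is only an illustration of the idea: the curve shape and constants are my guesses rather than the exact ones I used, and the bilateral blur step is omitted.

```python
import numpy as np

def luminosity_mask(lum, sigma=0.15):
    """Blend mask from an inverted L channel (values in [0, 1]), with the
    mid-tone weight reduced by a smooth dip around 0.5 (a rough stand-in
    for the hand-drawn curve)."""
    m = 1.0 - lum                                    # dark areas get high weight
    dip = np.exp(-((m - 0.5) ** 2) / (2 * sigma ** 2))
    m = m * (1.0 - 0.5 * dip)                        # halve the weight near mid-tones
    return np.clip(m, 0.0, 1.0)

def blend(base, fused, lum):
    """Merge the +1EV base with the enfuse output through the mask.
    A real implementation would blur the mask edge-aware (bilateral)
    before blending, which is skipped here."""
    m = luminosity_mask(lum)[..., None]              # broadcast over RGB channels
    return fused * m + base * (1.0 - m)
```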
With RT I was not able to get a similar result, particularly in preserving the smoothness of the sky tonality in the top-right part, where it gets dark…
Since you know the S/H tool much better, I’d be really curious to see what your best result on this image would be. Also, maybe the steps above can give you some nice ideas… my dream would be to optimize enfuse to the point where it could be used in real-time.
@Carmelo_DrRaw enfuse uses the “best-exposed” pixels for fusion, and unlike when generating an HDR, it is not recommended to take evenly-spaced brackets; instead, use the images where the areas of interest are best exposed. In this case it might be one shot for the land/sea and one for the sky.
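As I understand it, the “best-exposed” selection works roughly like this Mertens-style sketch (a simplification on my part: the real algorithm also weights by saturation and contrast, and blends the weighted frames through Laplacian pyramids rather than averaging directly):

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Mertens-style weight: pixels near mid-gray (0.5) score highest,
    so each region is taken mostly from the frame where it is best exposed."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def naive_fuse(frames):
    """Per-pixel weighted average by well-exposedness (no pyramid blending,
    so real seams would show; enfuse avoids them with multiresolution blending)."""
    w = np.stack([well_exposedness(f) for f in frames])
    w /= w.sum(axis=0) + 1e-12          # normalize weights per pixel
    return (w * np.stack(frames)).sum(axis=0)
```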
The documentation sheds more light on this (or it did the last time I read it, which was several years ago, with a version older than 4.2): http://enblend.sourceforge.net/enfuse.doc/enfuse_4.2.pdf
Hi @Carmelo_DrRaw, here’s what I could do:
IMG_0080.CR2.pp3 (10.4 KB)
I don’t know if the output matches what you got above, but it’s definitely quicker…
I have tried the new Shadows/Highlights tool and would be very happy to have it replace the old one.
I don’t have any problems with the issue of legacy compatibility. But if it turns out that enough people consider it significant, perhaps the problem could be handled by adding legacy-compatibility settings, such as word-processing programs often have. The cost is the continuing maintenance of little-used code.
Since Lightroom was mentioned, here is Adobe’s blog post from when they introduced their new Highlight and Shadow sliders in V4. They are using Laplacian pyramids. I’ve been really interested in seeing whether any open-source products would implement the same approach. The blog post also references the research paper that inspired them.
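For anyone curious what a Laplacian pyramid actually is, here is a minimal numpy sketch. This is my own simplification (the decimation skips the low-pass filter a real pyramid applies first, and this is the plain pyramid, not the local-Laplacian filter from the paper), but it shows the structure: each level stores the detail lost at one resolution step, and the image reconstructs exactly from the stack.

```python
import numpy as np

def downsample(img):
    # Plain 2x decimation to keep the sketch short; a real pyramid
    # low-pass filters first (e.g. with a 5-tap Gaussian kernel).
    return img[::2, ::2]

def upsample(img, shape):
    # Nearest-neighbor expansion, cropped to the target shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Each entry holds the detail lost going down one level;
    the last entry is the low-resolution residual."""
    pyr, cur = [], img
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample the residual and add back each detail level."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur, detail.shape) + detail
    return cur
```

Tone adjustments like Adobe’s sliders work by modifying the detail levels before reconstructing, which is what lets them manipulate local contrast without halos.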
Adobe has also said that the highlight tool does recovery with channel mixing, grabbing info from non-saturated channels. And their exposure slider plays into the mix by actively protecting highlights and applying some of its own recovery.
I like tools that operate on the conservative side. When a tool does too much at the same time, it can be difficult to do something in isolation without getting an undesirable effect. When I first started using FLOSS software, the sheer number of options overwhelmed me. Now I welcome them. Gone are the days of mediocre 1-to-3-click apps! That said, Lightroom is still okay. I just prefer the open source alternatives.
Just FYI, this is not an argument, but a simple question.
If darktable already implements them, how come Lightroom’s tools are so much more effective, with a greater potential for a natural look, than any of the FOSS software I’ve tried?
Disclosure: I am a very experienced Lightroom user. I have been using it since V B.5, and I spent a few years teaching it at the college level. I can do things with it that most people still believe you need Photoshop to achieve.
That said, I greatly dislike Adobe. The only reason I even started to use Lightroom was because they bought my raw converter of choice at the time, Raw Shooter. But for those who truly understand how to use Lightroom, I haven’t found any other software that comes close to it as a complete package.
As for shadow and highlight recovery, the closest I have gotten with FOSS is in RawTherapee (I haven’t yet tried this new tool). But getting there took quite a dance around the tools.
As for enfuse, I used to use it for stacking realistic HDR images. But I want good S&H tools for single-frame images in my raw converter, as opposed to another piece of software.
Apologies for meandering a bit here.
I have never tried Lightroom, but from what I’ve seen and read I have no difficulty believing that it is the state of the art regarding highlight recovery and shadow lifting. That’s why I am very interested in your experience. So, would you be willing to share some “challenging” raw files together with what you consider a good rendering done in Lightroom? Just to get some ideas about what our target should be… what do you think?
I haven’t used LR either, but I’m sure Adobe has spent crazy amounts of money to make their tools give good results with just a few sliders; the focus seems to be on achieving good results while putting in as little effort as possible.
darktable, IMO, exposes more sliders and more technical tools than will ever be available in LR, giving the user more power (at the cost of speed) to manipulate the image as they please.
As for the “greater potential for a natural look,” that is purely subjective. It is pretty difficult to nail down exactly what a “natural look” is; what is natural to one person isn’t to another. And as for each application’s potential, that lies squarely in the hands and fingers of the operators of the applications. As you’re well versed in Lightroom, I’d expect you to produce much better results with it than with darktable or RawTherapee. But if you had spent as much time in either of those two applications as you did in Lightroom over the years, what would be the result?
This article describes the use of local Laplacians in DT. I have not yet played with the DT implementation myself, but I plan to do so in the next few days.
My wild guess is that Adobe has put a lot of software engineers on the task of optimizing the local Laplacian technique to the point where it can be used in real-time on large images. The scientific background is probably largely based on the papers cited in the blog post you mentioned, but I am pretty sure they pushed the optimizations much further than what is described in the papers…
Can we, FLOSS developers, do the same? I hope so…
As for enfuse, in my understanding the compression of the dynamic range of HDR images, and the S&H recovery for single frame images, are very closely related.
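To illustrate what I mean by the two being related, here is a toy sketch (my own simplification, not the algorithm of any tool discussed here): a single-image shadows/highlights adjustment is just a tone-compression curve with smooth weights at each end, the same kind of operation an HDR tone-mapper applies over a larger input range.

```python
import numpy as np

def shadows_highlights(lum, shadows=0.4, highlights=0.4):
    """Toy S&H recovery on a [0, 1] luminance channel: lift shadows and
    roll off highlights, each gated by a smooth weight so mid-tones are
    left mostly untouched."""
    shadow_w = np.exp(-lum / 0.25)              # strongest near black
    highlight_w = np.exp(-(1.0 - lum) / 0.25)   # strongest near white
    out = lum + shadows * shadow_w * (0.5 - lum)       # pull dark values up
    out = out - highlights * highlight_w * (lum - 0.5)  # pull bright values down
    return np.clip(out, 0.0, 1.0)
```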
Would you have some challenging image to propose, together with your state-of-the-art result obtained with LR? While the “quality” of the processing is subject to personal taste, I think this could provide a good reference for comparing the results of the various FLOSS tools.
I think we really need the help and guidance of experienced users, so you are very welcome!
EDIT: I realize that I’ve been writing in parallel with @agriggio, and that we came up with rather similar arguments and requests… nice!
Thanks for all of the feedback and interaction. Unless it goes against community etiquette, I will answer everything in this one message.
Regarding more tools exposed to the user, I LOVE this about FOSS. I am very technical; it is one of the reasons I love photography. I want more tools and options to interact with my images. Lightroom does a very good job of hiding the math and science away from the user. But even with its “simplified” tools (relative to DT or RT), many people still don’t understand how to use it correctly, or to the best of its abilities.
As for providing a sample raw file and my edited version… yes! Let’s do this. One of the exercises I put my students through is extreme sensor testing (over- and under-exposures, and then finding the limits of their cameras). I have a great example image that always has students saying aloud, “I didn’t know I could do that!”
In the next day or so, I’ll pull out that image and share the file and my converted JPG rendition. In it, I not only do highlight and shadow recovery, but also noise/detail management. Basically, rescuing an awful image from oblivion.
@paperdigits, I agree, all in the world of perception is subjective and relative. But when I say “natural”, I mean editing an image to resemble what the scene may have looked like to the average observer’s visual system, without looking like a crunchy, mid-2008, HDR cesspool. As for using RT or DT extensively, granted I haven’t, and I am open to learning more, and admitting that I don’t know either as well as Lightroom. That said, I have spent a good amount of time with both of their current releases. I’m not as enamored with DT as I wish I was. But in RT, with enough time, I can get pretty damn good results. And it is clear that overall, its pipeline has the potential to be so much more than Lightroom’s. And in many ways, it already is.
@agriggio I do have one complaint about Lightroom’s highlight recovery: it never reaches the point of automatically switching from channel blending to “neighborhood” color blending (color propagation in RT). This is one of the areas where I have found FOSS tools (especially RawTherapee) to have an edge.
Ok, so where/how should I share this raw file and its converted JPG?
Oh, and @Carmelo_DrRaw, thanks for that link. I’m going to go read it. And yeah, Adobe has more money than God; they do throw a lot of resources behind it. But like any behemoth, they get things drastically wrong from time to time! And I agree with you about the tonal-compression link between HDR and single-image recovery.
I think you should be able to upload both here… and thanks in advance!
You can drag and drop them right in the edit window of this forum when replying. We’d actually prefer this, as it means we host the file and it won’t suddenly disappear.
All you have to do is want to learn more! I think we’ve got a great group of people here, we all share knowledge and look to improve our toolchain. Someone with as much experience as you is certainly welcome and I look forward to what you can bring to our community.
Thanks. I’ve been poking my head into some of these groups for a while now, and I can tell it’s a good community! I’m looking forward to learning more and hopefully being of some help, too.
I agree that theoretically it should be like that, but my experience tells me that the enfuse algorithm needs some intermediate values to properly blend the good exposures.
Here are two examples from the same picture. The first uses all bracketed images at 1EV steps; the second one only uses 3 images (the darkest one, an intermediate exposure, and a bright one):
In the second image, there are clearly regions (like the sky in the top-left part) where no optimal pixels could be found, and overall the result is less pleasing…