Here is a somewhat conservative approach. Aimed at being realistic.
DSZ_0619.jpg.out.pp3 (12.6 KB)
Image was fun to process, allowing me to revisit and slightly update old tricks.
PhotoFlow vignette, ca correction; linear no-clip float.
gmic filter pixels; HLG.
pnmclahe local contrast.
gmic local contrast; inverse HLG; brightness, contrast; chroma, sharpen, resize, sharpen.
Zoom and enjoy!
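For reference, the HLG and inverse-HLG steps above can be sketched in a few lines of NumPy. The constants are the published BT.2100 values; the exact scaling used by the G'MIC filter may differ, so treat this as an illustration of the curve, not a reproduction of the pipeline.

```python
import numpy as np

# BT.2100 HLG constants (published spec values)
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    """Scene-linear [0, 1] -> HLG signal: square root below 1/12, log above."""
    e = np.asarray(e, dtype=np.float64)
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(3.0 * e),
                    A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C)

def hlg_inverse(ep):
    """HLG signal -> scene-linear, the exact inverse of hlg_oetf."""
    ep = np.asarray(ep, dtype=np.float64)
    return np.where(ep <= 0.5,
                    ep * ep / 3.0,
                    (np.exp((ep - C) / A) + B) / 12.0)
```

The appeal for this kind of edit is that HLG compresses highlights smoothly while staying near-linear in the shadows, so local-contrast tools can work in a perceptual-ish space and the inverse brings the image back afterwards.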
Nice challenge @ggbutcher!
Thank you for interesting task!
Here is my attempt:
For me the biggest difficulty was not so much the extreme dynamic differences, but rather the task to make the picture more aesthetically pleasing after solving the high dynamic range problem. Accordingly, the list of modules used in darktable is quite long. This approach is not recommended if you have to process 100 such photos.
After editing in darktable:
DSZ_0619_01.NEF.xmp (20.0 KB)
I used the Fuji Velvia 100 film simulation in G’MIC as the last step.
I found it difficult to maintain good contrast in both the shadows and highlights. CLAHE was instrumental in achieving it but at the cost of uneven illumination along the pillars to the left and ceiling to the right.
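The contrast-limiting step that distinguishes CLAHE from plain adaptive equalization, and that drives the uneven-illumination trade-off described above, is simple to sketch. This is an illustration of the idea, not pnmclahe's actual code; the bin counts and clip limit are illustrative.

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """The contrast-limiting step at the heart of CLAHE.

    Counts above clip_limit are cut off and spread evenly over all bins,
    which caps the slope of the resulting equalization curve and keeps
    noise in flat regions from being over-amplified. A higher clip limit
    means stronger local contrast but more tile-to-tile unevenness.
    """
    hist = hist.astype(np.float64)
    excess = np.maximum(hist - clip_limit, 0.0).sum()  # total clipped mass
    hist = np.minimum(hist, clip_limit)
    return hist + excess / hist.size  # redistribute uniformly

def equalization_lut(hist):
    """Cumulative histogram -> lookup table mapping bin index to [0, 1]."""
    cdf = np.cumsum(hist)
    return cdf / cdf[-1]
```

In full CLAHE this is done per tile, with the per-tile lookup tables bilinearly interpolated across the image; the tile boundaries are where illumination along large smooth surfaces (pillars, ceilings) can drift.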
Another weakness, endemic to my processing in general, is resizing. I have never been able to retain as much detail as other people. I used r2dx 1920,2, which selects average interpolation. It sounds weak, but it performs no better or worse than the other options.
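For what it's worth, average interpolation at an integer factor is just a block mean, and a post-resize sharpen usually matters more for perceived detail than the choice of downscaling kernel. A minimal NumPy sketch, where the 3x3 unsharp mask is illustrative and not what r2dx does internally:

```python
import numpy as np

def box_downscale(img, factor):
    """Average-interpolation downscale for an integer factor: each output
    pixel is the mean of a factor x factor block. Crops any remainder so
    the shape divides evenly."""
    h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def unsharp(img, amount=0.5):
    """Cheap cross-shaped unsharp mask on the interior pixels; a
    post-resize sharpen like this often recovers more apparent detail
    than switching interpolation kernels."""
    blur = img.copy()
    blur[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:] + img[1:-1, 1:-1]) / 5.0
    return img + amount * (img - blur)
```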
Would like the thoughts of G’MIC and other devs.
Thanks for this nice shot, and congratulations for the new toy, it seems to be a great piece of hardware.
I found it very difficult to handle this image in darktable: I couldn’t handle the highlights as I wished, and the color shift was hard to deal with, both issues stemming from my filmic usage. The alternative, masking, was difficult for this particular image mainly due to the hanging mobile.
So I tried the Photoflow way and it was a breeze (except for the huge amount of time I had to wait until caching was done, before exporting to jpg). Keep up the good work @Carmelo_DrRaw!
Thanks for a GREAT testcase image to use when working on trying to improve the behavior of darktable’s exposure fusion module.
I’ll be posting some examples/comparisons in a separate thread later today (my primary intent there is to compare the results of changing one step in a minimal pipeline against some other single-module approaches). The outside highlights definitely present a challenge as far as preserving them without washing them out. The tone equalizer someone else is working on provides much better results in those regions, but at the cost of the shadow areas having highly unnatural colors.
More later today in a separate thread, I’ll reference this one when I put it together.
If you’re processing the image for this PlayRaw, post your findings here; the intent of my image selection was to talk about tone management, so we’re good here.
Thanks to all who have posted so far; my expectations are wildly exceeded by your efforts. I actually cloned and compiled darktable/master (easy-peasy, by the way; you just need to ensure all the prerequisite packages are installed). So, I’m going to open all your .xmp files in the next few days and see what sort of “sauce” was considered and applied.
All the tone curves I’ve looked at so far followed the same pattern as the one I did in rawproc, before I switched to filmic: anchor the high end, lift the low. I found dt’s manual tone curve nice to use in that it provided more control in the low end than rawproc’s; but I love the sweeping curves of Tino Kluge’s spline algorithm…
The color challenge surprised me. In rawproc, I’d just used the camconst.json primaries, worked the image in them, and converted to sRGB for output, and the skies stayed nice and blue; in darktable, I couldn’t get the magenta hint out of the leftmost windows. That may have had something to do with white balance, but I haven’t studied dt enough to mess with it yet.
Again, thanks to all who are participating!
In my case, the challenge is that I’m working on a whole bunch of possible code changes… I’ve realized that I MUST expose, as sliders, a whole pile of parameters that are currently hardcoded in darktable, otherwise I’ll lose my sanity producing example cases. I think I have 5-6 example images for my WIP, which seems excessive for anything short of its own thread.
Current darktable master (undesirable for various reasons)
Attempting to alter the exposure weighting approach while maintaining linear blending (severe haloing)
Various approaches similar to what you get when exporting +0, +2, and +4EV JPEGs and feeding them to enfuse (with various weights)
Aurélien Pierre’s tone equalizer algorithm (currently handles highlights the best, but results in unnatural-looking colors in the shadows)
Right now, I’m seeing that most approaches to exposure fusion need a better way to roll off highlights… Most weighting algorithms assign such low weights in the highlights that, relatively speaking, the highlights of the +2EV and +4EV exposures still aren’t weighted low enough to keep them from contributing more than they should. They contribute less to the highlights than under the current darktable exposure fusion algorithm, but still far too much, leading to blown highlights.
Since all of the relevant parameters (target brightness, brightness variance) are hardcoded in DT, it’s hard to generate comparison cases without recompiling. Which means I need to venture into the realm of stuff I suck at (UI/UX design) to expose those parameters as additional sliders.
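The two hardcoded parameters mentioned above (target brightness and brightness variance) can be sketched as a Gaussian well-exposedness weight plus a naive weighted blend. This is a simplification for illustration: a real implementation blends per pyramid level, and this per-pixel blend will halo exactly as described earlier in the thread.

```python
import numpy as np

def exposure_weight(v, target=0.5, variance=0.25):
    """Gaussian exposure weight: pixels near `target` brightness get a
    weight near 1, pixels far from it near 0. `target` and `variance`
    are the parameters described above as hardcoded in darktable."""
    return np.exp(-((v - target) ** 2) / (2.0 * variance ** 2))

def fuse(exposures, target=0.5, variance=0.25, eps=1e-6):
    """Naive per-pixel weighted blend of N aligned exposures. Note that
    a +4EV exposure's blown regions (values far above 1.0) still get a
    small but nonzero weight, which is the roll-off problem above."""
    exposures = np.stack(exposures)                    # (N, H, W)
    w = exposure_weight(exposures, target, variance)   # same shape
    return (w * exposures).sum(0) / (w.sum(0) + eps)
```

Exposing `target` and `variance` as sliders is exactly the kind of change being described: both directly reshape where each exposure contributes.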
Take 2. Add enfuse after the “filter pixels” step from my first attempt, then tweak subsequent arguments to compensate for this additional step. This outputs a result with more depth and a less blown-out outside view. Zoom and enjoy!
Here’s my attempt with Photoflow:
DSZ_0619.pfi (31.4 KB)
I don’t think this image is suited for a global tone mapping approach like filmic - the problem is that the tonal ranges for the interior and exterior parts of the scene end up overlapping. Whenever I try to use a single curve on this image, the view through the windows loses too much contrast and ends up looking misty (or the interior ends up looking too dark).
I used a shadows/highlights layer to bring the shadows range up and drop the highlights a bit. Setting the anchor to 75 (as it is in the relight layer) seemed to give the best results.
The out of camera white balance looked a bit too blue on the chairs, so I adjusted the colour temperature manually.
Once I had done this, the chairs look too flat so I added some local contrast. This messes up the view through the windows, so I moved the local contrast to a second instance of shadows/highlights with a mask calculated from the L channel to restrict local contrast to the darker tones.
Finally, I added a little bit more contrast and a blue/yellow shift using a Colour Adjustment layer.
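The mask-restricted local-contrast step can be approximated outside PhotoFlow too. Here is a NumPy sketch in which an unsharp-mask boost is faded out above a luminance threshold; the parameter names and the 3x3 blur are illustrative, not PhotoFlow's actual math.

```python
import numpy as np

def blur3(img):
    """3x3 box blur with edge replication, just enough for the sketch."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def shadow_local_contrast(lum, amount=1.0, threshold=0.5, soft=0.1):
    """Unsharp-mask local contrast, faded out above `threshold` so the
    bright view through the windows is left alone -- a rough stand-in
    for an L-channel mask on a second shadows/highlights instance."""
    boosted = lum + amount * (lum - blur3(lum))
    # t is 0 in the shadows (full effect), 1 in the highlights (no effect)
    t = np.clip((lum - threshold) / soft, 0.0, 1.0)
    return boosted * (1.0 - t) + lum * t
```

The key design point matches the narrative above: applying local contrast globally would re-crush the window view, so the mask confines the boost to the darker interior tones.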
Thanks for the narrative, very instructive.
I think your rendition comes closest to what I recall of the scene. And yes, I couldn’t shape any single curve that comes close to your rendition.
Regarding white balance, I think your adjustment keeps enough blue in the interior to convey its chromatic relationship to the exterior - after all, it is effectively incident daylight.
Very nice edit!
I doubt that it was that dark inside. Maybe energy-wise but not perceptually. BTW, what is the context of this photo? Where was this taken and why were you there?
This is the main lobby of the Ent Center for the Performing Arts, University of Colorado - Colorado Springs. We were there one recent Saturday morning to hear a rehearsal of a choral group our neighbor conducts. This was a ‘grab-shot’ walking in, headed to the rehearsal hall.
It’s about half a mile from our house; I may have to walk down there one morning, see if they’ll let me sit in the lobby for a bit, and take in the morning light…
Edit: Oh, the exterior view is to the west; if one were outside looking the same direction, the sun would be back over your left shoulder…
@afre - I’m assuming one of the lua-script approaches or manually exporting then feeding to enfuse?
Not what I’m used to seeing from darktable (although I suspect I know where things are breaking now - the contrast weight in DT’s implementation appears to potentially be WAY off. enfuse default for contrast weight is 0.0!)
I suspect that in the flow darktable uses, contrast weight will simply act to increase the weighting of positive exposures improperly. A “traditional” enfuse workflow (where the highlights of the “bright” images will be clipped severely) that has contrast/saturation weighting turned on will deweight these clipped highlights due to lack of saturation and lack of contrast. In this particular scenario, tracking brightnesses above 1.0 becomes a liability. (However I think that tracking brightnesses above 1.0 eliminates the need for contrast weight in darktable’s exposure fusion use case - they’ll simply get severely penalized in the exposure weight calculation step.)
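The deweighting mechanism described here (clipped highlights losing both contrast weight and saturation weight) can be sketched Mertens-style. This illustrates the idea only; it is neither enfuse's nor darktable's actual code, and the Gaussian width is illustrative.

```python
import numpy as np

def mertens_weights(rgb, w_c=1.0, w_s=1.0, w_e=1.0):
    """Per-pixel Mertens-style fusion weights for one exposure.

    rgb: (H, W, 3) float image. Returns (H, W) weights. Clipped regions
    are flat (zero Laplacian) and gray/white (zero channel spread), so
    contrast and saturation weighting both drive their weight to zero.
    """
    gray = rgb.mean(axis=2)
    # contrast: |Laplacian|, zero on clipped (flat) regions
    lap = np.abs(4 * gray
                 - np.roll(gray, 1, 0) - np.roll(gray, -1, 0)
                 - np.roll(gray, 1, 1) - np.roll(gray, -1, 1))
    # saturation: per-pixel std across channels, also zero when clipped
    sat = rgb.std(axis=2)
    # well-exposedness: Gaussian around middle gray, per channel
    exposed = np.exp(-((rgb - 0.5) ** 2) / (2 * 0.2 ** 2)).prod(axis=2)
    return (lap ** w_c) * (sat ** w_s) * (exposed ** w_e) + 1e-12
```

This also shows the point about unbounded brightness: if values above 1.0 are tracked, the exposedness term alone already crushes the weight of blown +2EV/+4EV regions, without needing contrast weight.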
Need to try when I get home.
Elaboration of my approach.
1. Start with linear no-clip float.
2. Filter pixels, esp. negative values.
3. Process a copy to be in a perceptual range.
– Step 3 flattens both the shadows and highlights while adding contrast to the mid-tones.
4. Restore both extremes using enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --entropy-weight=0 --gray-projector=l-star -l auto -o a2.tif a1.tif a1_.tif
Caution: post-processing is required, as enfuse introduces negative values and an alpha channel.
Hint: adding grey-scale masks to your input images would help you control which pixels contribute to the fusion and by how much.
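The post-processing caution above amounts to a couple of lines. This sketch assumes a float RGBA array with straight (unassociated) alpha; a given enfuse TIFF may carry associated alpha or be fully opaque, in which case simply dropping the channel is enough.

```python
import numpy as np

def clean_enfuse_output(rgba, background=0.0):
    """Post-process a fused RGBA float image: composite over a flat
    background to drop the alpha channel, then clip the negative values
    the fusion can introduce. Assumes straight (unassociated) alpha."""
    rgb, a = rgba[..., :3], rgba[..., 3:4]
    rgb = rgb * a + background * (1.0 - a)
    return np.clip(rgb, 0.0, None)  # keep values above 1.0 intact
```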
OK, BOOM - contrast weighting definitely WAS the culprit in many of the issues I’ve been having with highlights and exposure fusion.
After all those code changes, the only settings in the UI are:
Turn on exposure fusion with three exposures
Leave bias to +1.0
Set exposure shift per exposure to +2.0ev
No other changes were made; I also stuck with the camera white balance. Once I’ve exposed a bunch of the important control variables as sliders, I’ll push the code as a WIP.
Have others done better? Definitely. Once this code is cleaned up, is the work to obtain the image SIGNIFICANTLY less? Yup.
For comparison, here’s Pierre’s tone equalizer, +4.0 EV for the deepest shadows, gradually dropping to 0.0 EV for the -2EV highlight band. The highlight region looks much nicer, but the indoor areas look highly unnatural, with that “aggressively tonemapped HDR” look that drove so many people towards enfuse (one of enfuse’s primary claims to fame is that it’s much more natural-looking than most preceding HDR tonemapping approaches).
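The band-wise gains described here (+4 EV in the deepest shadows, falling to 0 EV by the -2 EV band) can be sketched as a piecewise-linear gain curve over log2 exposure. This is an illustration of the masking idea, not Aurélien Pierre's actual code; in particular, a real tone equalizer derives the exposure map from a blurred or guided-filtered luminance, and how well that map is smoothed is largely what separates a natural result from the HDR look.

```python
import numpy as np

def tone_eq(lin, bands_ev, gains_ev):
    """Apply a per-pixel gain looked up from log2 exposure.

    bands_ev: control-point positions in EV (e.g. [-8, -2, 0])
    gains_ev: gain at each control point in EV (e.g. [4, 0, 0])
    Here the lookup uses raw per-pixel exposure for simplicity; a real
    implementation smooths it first so neighboring pixels in the same
    region move together.
    """
    ev = np.log2(np.maximum(lin, 1e-6))
    gain = np.interp(ev, bands_ev, gains_ev)  # piecewise-linear curve
    return lin * (2.0 ** gain)
```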
As shown in my command, I gave all of the weighting to contrast-weight because it makes the image sharper. Does it have an adverse effect on take 2? I can’t tell because I didn’t pixel peep.
Where does this gamma come from? How about the working colour space? The latter likely has more of an influence on the colour balance.
How did you do that? If there is clipping in the input images, I usually mask them out. Otherwise, if there is detail, I keep it. I haven’t had any luck with
Yes, looks terrible. (Coming from someone with an ancient, low-end, colour-unmanaged SDR screen.)
Hey @ggbutcher, could you have used a polarizer here to help cut some of the glare off the floor and table? (Or did you use a polarizer?)