To be precise, it should be sRGB gamut with the sRGB TRC, that is, “standard sRGB”, embedded in the output image. Any color-managed application will interpret the RGB values correctly and convert them to the display colorspace through the display ICC profile. If the user's display is sRGB-like, the image will also look more or less OK in non-color-managed applications (more or less depending on how far the display deviates from the sRGB standard).
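For reference, the “standard sRGB” TRC is the piecewise function from IEC 61966-2-1: a short linear segment near black plus a 2.4-power segment. A minimal sketch (not taken from any particular application's code):

```python
# Standard sRGB transfer functions per IEC 61966-2-1.
# A color-managed app decodes with srgb_decode before converting
# the linear values to the display profile's colorspace.

def srgb_encode(u: float) -> float:
    """Linear light [0,1] -> sRGB-encoded value [0,1]."""
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def srgb_decode(v: float) -> float:
    """sRGB-encoded value [0,1] -> linear light [0,1]."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# 18% middle grey in linear light encodes to roughly 46% in sRGB
print(round(srgb_encode(0.18), 3))  # → 0.461
```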
Not the correct way, but I did this with two filmic instances, one for the darks and one for the highlights, with the help of a mask, plus tone curves and lowpass to give a final boost.
DSC_8528_01.NEF.xmp (7.4 KB) (DT 2.7)
@gadolf looks pretty good!
Thank you for your response.
I’m not able to load the xmp in my Darktable 2.6.2 windows version. Darktable displays an error message and stops working.
Could you explain how to recreate your result in an average-user-friendly way?
Sure (actually, user-friendly is the only way I’m capable of going )
The history stack:
The history stack doesn’t reflect the order in which I applied the modules. The actual order was as follows.
white balance (I think I spot-sampled the back of the lady on the right, but I'm not 100% sure…)
The mask applied in the next step (filmic “darks”):
filmic “highlights” (used the same mask as above, but inverted):
tone curve (same mask) -
tone curve (global) - This was to eliminate the reddish tone. I certainly could have used other color mapping tools, like color balance or color correction:
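The two-instances-plus-mask idea boils down to a per-pixel blend between two tone curves. A minimal sketch of the concept (the curve and mask functions here are simplified stand-ins of my own, not darktable's actual filmic or parametric masks):

```python
import math

def darks_instance(x: float) -> float:
    # stand-in for the "darks" filmic instance: lifts shadows
    return x ** 0.6

def highlights_instance(x: float) -> float:
    # stand-in for the "highlights" instance: compresses the top end
    return x ** 1.4

def luminance_mask(x: float, pivot: float = 0.5, feather: float = 0.1) -> float:
    # smooth mask: ~1 in the shadows, ~0 in the highlights, with feathering
    return 1.0 / (1.0 + math.exp((x - pivot) / feather))

def blend(x: float) -> float:
    # the second instance uses "the same mask, but inverted": 1 - m
    m = luminance_mask(x)
    return m * darks_instance(x) + (1.0 - m) * highlights_instance(x)

print(blend(0.05) > 0.05)   # shadows get lifted
print(blend(0.95) < 0.95)   # highlights get tamed
```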
In photoflow I would use only tonemapping and curves for a “neutral” starting point. DSC_8528tm.pfi (40.4 KB) DSC_0856.pfi (32.0 KB)
Thank you for an excellent explanation.
Apparently you are not using 2.6.2. Maybe this is the reason for the error when I tried to load your xmp-file.
I didn’t think of using filmic twice (with a mask), since I have been thinking of filmic as a tool to handle an image with big differences between highlights and shadows. When you separate the highlights and the shadows with a mask, could you perhaps use many other tools and get an even better result?
I will give it a try when I have a little more time…
Sure it is. I’m using the development version 2.7, which produces xmp files that 2.6 can’t read, but I guess this workflow will give the same results in 2.6.
That’s why I stated that my edit is not the correct way to do it, which is crystal clear after @ggbutcher’s explanations. However, I must confess that with images such as these, I still haven’t found a way to handle them with a single filmic instance. The closest I can get is something like @age’s edit (btw, very nice edit). But in this case, I’m not able to tame the highlights as precisely as I do with masks and double instances. In the end, I think what matters is whether the result is pleasant or not, and if I have to suspend the rules a bit for that, why not?
Good point, haven’t thought about that…
That’s too many modules for one simple task. I think I should be able to improve that for dt 2.8.
I still need to check how you do your curve interpolation, though, because if you remove the display gamma, remapping the grey log from 67-75% to 18% makes the cubic spline fail big time. The benefit of the gamma, strictly from a numerical-stability point of view, is that the spline then maps from 67-75% to 45-50%, so you avoid oscillations. I have put the maths down for a custom filmic spline; I still need to check whether it behaves better.
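The oscillation problem is easy to reproduce. Here is a toy natural cubic spline of my own through just three knots (not darktable's actual interpolation): pinning middle grey at ~70% input down to 18% output makes the spline undershoot well below zero, while a ~47% target (i.e. with the gamma kept) stays monotone.

```python
# Natural cubic spline through (0,0), (x1,y1), (1,1), pure Python.
# Illustrates why remapping log grey ~70% straight to 18% is unstable.

def natural_spline_3pt(x1, y1):
    """Return a callable natural cubic spline through (0,0), (x1,y1), (1,1)."""
    h0, h1 = x1, 1.0 - x1
    # Natural boundary conditions: second derivatives M0 = M2 = 0,
    # so only one interior equation to solve for M1:
    #   h0*M0 + 2*(h0+h1)*M1 + h1*M2 = 6*((1-y1)/h1 - y1/h0)
    M1 = 6.0 * ((1.0 - y1) / h1 - y1 / h0) / (2.0 * (h0 + h1))
    def s(x):
        if x <= x1:                       # first segment, expanded about 0
            b = y1 / h0 - h0 * M1 / 6.0
            d = M1 / (6.0 * h0)
            return b * x + d * x ** 3
        t = x - x1                        # second segment, expanded about x1
        b = (1.0 - y1) / h1 - h1 * 2.0 * M1 / 6.0
        c = M1 / 2.0
        d = -M1 / (6.0 * h1)
        return y1 + b * t + c * t ** 2 + d * t ** 3
    return s

def is_monotone(s, n=1000):
    ys = [s(i / n) for i in range(n + 1)]
    return all(b >= a for a, b in zip(ys, ys[1:]))

bad  = natural_spline_3pt(0.70, 0.18)   # grey pulled straight to 18%
good = natural_spline_3pt(0.70, 0.475)  # grey kept near 45-50% (gamma on)

print(is_monotone(bad))    # False: the spline dips below zero near x ≈ 0.34
print(is_monotone(good))   # True: stays monotone
```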
I wouldn’t ascribe my explanations to “canon”; I’m just trying to show the fundamentals of the curve so one can understand how to intelligently misuse it to good effect…
Interior shots with such windows are just challenging. My current thinking is, if one wants to get it in one exposure, you ETTR for the window and yank the shadows out of the depths. Thing is, with most cameras you’ll get a noisy room doing that. So, you either 1) give in and shoot two exposures, one for each “scene”, and combine the two with HDR software, or 2) get a camera with a better dynamic range so that mitigating the low-end noise is a reasonable task. I’ve played with both alternatives now, #1 works really well but I’m really warming to #2 with the new camera.
Which brings me back to filmic, which I think is the tool in most implementations that gives the most control in pulling up the shadows in a highlight-weighted exposure. And, that’s my response to the thread title…
Ah, home again, with all my little tools…
Here’s a screenshot of what i’ve been messing with in filmic:
I hope the .png renders well on your monitors…
First, this scene is not quite as challenging as @obe’s dining room image, but it does separate into two distinct “scenes” for exposure consideration. I pulled the parameters pane out of the dock and resized it to show all of the tone tool. Starting from the top, the commands stack has:
- all of the regular raw processing: camera colorspace assignment, blackpoint subtraction, as-shot whitebalance, and AHD demosaic;
- the blackwhitepoint tool normalizes the raw data to the top and bottom of the display range;
- the tone tool, which is a small “zoo” of tonemapping curves, has the filmic curve selected, which is the Duiker equation shown in a previous post,
- and the “resize-sharpen” group is for file saving, but I keep it for this messing-around because the display profile transform is faster than with the full-sized image.
Note that the A, B, and D coefficients aren’t the Duiker defaults. In particular, B is well below its default, which specifically controls the curve segment applied to the lowest parts of the image. I can scroll through B values and watch the shadows go bright and dark while the upper part of the image just stays nice and balanced - the “toe” at the bottom of the curve does this nicely. A and D were messed with to deal with the upper part of the image under the curve shoulder; I don’t have a particular heuristic for them yet.
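For readers who want to play with this family of curves: one widely published parameterization is John Hable's generalization of the Duiker film curve (from his Uncharted 2 talk), with A = shoulder strength, B = linear strength, C = linear angle, D = toe strength, and E/F shaping the toe. The coefficient names may not map one-to-one onto the tool in the screenshot, and the values below are Hable's published defaults, not the ones used above:

```python
def hable(x, A=0.22, B=0.30, C=0.10, D=0.20, E=0.01, F=0.30):
    """Hable's generalized filmic curve; defaults from his Uncharted 2 talk."""
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def filmic(x, white=11.2, **coef):
    """Normalize so linear `white` maps to display 1.0 and 0 maps to 0."""
    return hable(x, **coef) / hable(white, **coef)

# Raising D (toe strength) pulls the deepest shadows down while the
# upper part of the curve barely moves - the "toe" effect described above.
print(round(filmic(0.05, D=0.20), 4), round(filmic(0.05, D=0.50), 4))
```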
Really though, I’m pretty sure the piecewise filmic curve @Carmelo_DrRaw is implementing will be much easier to control, so keep his endeavor on your radar…
And your third option is of course to use a flash…
But anyway, you are bound to shoot some photos with a high dynamic range that need special treatment. So it is interesting to study, develop and optimise tools to do that.
I don’t understand all the maths, but I see that there are different algorithms to handle this problem. In the past I have used RT’s dynamic range compression tool, which is superior to DT’s tone mapping, in my opinion. But filmic can produce even better results, and from the discussion I understand that filmic will be improved further in 2.8. I’m looking forward to that.
In the meantime thank you for all your input, explanations and clarifications….
Oh, gee, yes, thanks for completing the consideration. I’m actually in the middle of a “source selection” for a flash; new camera doesn’t have one, and most of my family snapshots occur in a tungsten-lit room with rather large windows…
I’m not really a math person either. What I’ve found, however, is that all of these tone mapping functions (well, except for LUTs, maybe) express their behavior in terms of a curve, and that curve is the basis for intuitively understanding what’s going on. If you understand the X -> Y, goes-in -> comes-out dynamic of a transfer function depicted as a curve, you can easily start to understand the outcome of applying it to all the pixels of an image.
In my tone tool, seen in the screenshot above, I spent a couple of hours making the ability to plot the curve of the selected operator, and that has been quite instructive to my consideration of filmic. The “money-maker” in the filmic curve, that oh-so-little “toe” at the left end, is hard to depict in context of the full curve, but its little manipulations make a large difference in the transform of a linear scene.
Know the ways of the curve, and the effect of the maths becomes clear…
Yes, that’s what I meant by not perfect :-). This was a quick and dirty edit. I did not spend a lot of time on tweaking the masks and added quite liberal blur and feathering. Probably fixable with a lot of tweaking of the masks. I also did not spend any time on noise. But the point here is that this is an HDR where you either fix the grey point of the interior or of the exterior, or both, using some kind of masking. Using filmic for the whole scene will result in either a completely blown-out exterior or a very dark interior.
Agree on the dark side, but the original filmic equation doesn’t reach display white unless you normalize it to the 0.0-1.0 range (or whatever black-white range the particular software works with). If the raw image doesn’t have saturated pixels and something is blown in processing, I think it’s more likely due to white balance or an exposure multiplication than to the filmic curve.
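The “doesn’t reach display white” point is easy to see numerically with the fixed-coefficient filmic approximation (the Hejl/Burgess-Dawson fit of the Duiker curve, slightly simplified here by dropping its small black offset):

```python
def filmic_fixed(x: float) -> float:
    """Fixed-coefficient filmic approximation (Hejl/Burgess-Dawson style)."""
    x = max(0.0, x)
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)

white = filmic_fixed(1.0)
print(round(white, 3))   # ≈ 0.842: linear white falls short of display white

# Normalizing by the curve's value at the chosen white point fixes it:
normalized = lambda x: filmic_fixed(x) / white
print(normalized(1.0))   # exactly 1.0
```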
@jillesvangurp, I hope I’m not irritating you in my responses, but you’re really getting me to think about filmic in context with all the prior stuff in exposing and processing. For high DR scenes, a given camera will only have so much tone space between an acceptable noise floor and saturation, and any tone curve can only go so far to redistribute tones to accommodate it. I think the essential question then with any tone mapping curve is, given an image that isn’t highlight-saturated, how much “lift” can it give to the shadow regions before it compromises mid-tone contrast? Any curve lifting shadows has to flatten out somewhere, and that’s where contrast will be killed.
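That trade-off can be put in numbers with even the simplest shadow-lifting curve, a plain gamma x**g (a toy stand-in for any tone curve, not filmic itself): the more the shadows come up, the less slope, i.e. local contrast, is left for the midtones.

```python
def slope(f, x: float, eps: float = 1e-5) -> float:
    """Numerical derivative: local contrast of curve f at level x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Stronger shadow lift (smaller g) raises the 5% shadows more,
# but the slope at middle grey drops with it: flattened midtones.
for g in (1.0, 0.8, 0.6, 0.4):
    lift = lambda x, g=g: x ** g
    print(f"g={g}: shadow 0.05 -> {lift(0.05):.3f}, "
          f"midtone slope {slope(lift, 0.5):.3f}")
```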
The next step, masks, IMHO gives only a limited ability to go farther than a tone curve, because the mask boundary forms a discontinuity region for tone gradation. Some scenes give you a clear line on which to place the discontinuity, such as a horizon or the window in the dining room; others not so much. Even with clear delineation, overcompensating tone in the two regions can start to look “processed”.
And so, some scenes just require multiple-exposure HDR, depending on the camera. This approach to my thinking is just masking with a bit more latitude, but it still can suffer from looking “processed”.
After all this discourse, to my mind, a filmic tone curve has two compelling considerations: 1) that little “toe” at the bottom keeps some tonality in the near-blacks, and 2) programmers are working hard on the equation to provide shaping controls that are more usable than with other tone curves.