To view a raw image in darktable, a minimal amount of processing is unavoidable. To keep things simple for this question, I turn off all the modules I can, including WB to avoid it causing clipping. What I am left with is a demosaiced image piped through the input | working | output colourspaces and ultimately displayed on the monitor.
If I don’t use a tone mapper (in my case I normally use filmic rgb), are shadows and highlights simply clipped?
Pretty sure they are, so the real question is: when I do use filmic rgb and adjust the relative white and black points, am I shifting the clipping point (so I clip at a different value), or am I moving the black point and white point (so nothing is actually clipped, but shadows and highlights are compressed/expanded)?
Without a raw histogram, if the histogram of my image (minimally processed as above, without tone mapping) shows no clipping of shadows or highlights, does that mean my raw image isn’t clipped?
Normally, highlights can be clipped, shadows can be rounded to zero, but that depends on the output format (e.g. screen output will clip values >1)
Filmic and sigmoid map their input values to a range 0…1. In filmic, the relative white and black points give the values that are mapped to 1 and 0, resp. (black point is not mapped to zero, as that’s not possible with a log scale).
Again, for filmic, values above the relative white point will end up clipped.
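To make the clipping behaviour concrete, here is a toy Python sketch of a log-scale remap of the kind filmic’s first stage performs. The 18.45% middle gray and the EV bounds are illustrative defaults only; this is not darktable’s actual code, which adds an S-shaped spline on top of the log stage.

```python
import math

def filmic_log_remap(value, middle_gray=0.1845,
                     white_ev=4.0, black_ev=-8.0):
    """Toy log-scale remap of a scene-linear value to [0, 1].

    The EV distance from middle gray is computed, then normalised to
    the [black_ev, white_ev] range. Anything outside that range is
    clipped, which is the behaviour discussed above. Simplified
    sketch, not darktable's actual filmic curve.
    """
    if value <= 0.0:
        return 0.0
    ev = math.log2(value / middle_gray)          # EV relative to middle gray
    t = (ev - black_ev) / (white_ev - black_ev)  # normalise to 0..1
    return min(max(t, 0.0), 1.0)                 # clip outside the range

# Middle gray lands somewhere inside the output range:
print(filmic_log_remap(0.1845))   # 0.666... with the defaults above
# A value brighter than the relative white point is clipped:
print(filmic_log_remap(10.0))     # 1.0
```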
I’d say, normally yes. But that assumes that the raw white and black point are set correctly (usually that’s the case, but we’ve seen some camera models where the white point was off).
That said, if all you want to check is raw clipping, dt has a tool for that (the matrix icon). That shows you where the clipping occurs, and in which channel.
Hi
If I have it correct, tone mapping is always required. Your eye can discern a much wider range of exposures (brights/darks) than your camera, and your camera can record a wider range of exposures than your monitor can display. With camera controls you get to choose which exposures are recorded, and with tone mapping you get to choose which exposures are displayed in gamut on your monitor/paper.
If you export to floating point TIFF, for example, no, they won’t be. But any output device, whether display or printer, will be limited in dynamic range. You have to deal with it somewhere, somehow.
I do use the raw clipping indicator icon. I am really trying to get my head around relative white and black point.
So for filmic, the darkest input value is mapped to 0 and the brightest input to 1, meaning: when I decrease the relative white/black points, am I clipping the input to filmic, or just compressing it?
Have you looked at the DNR map display in filmic? Basically you are working around the exposure that you set to determine middle gray… and then the relative white and black are the EV range into which the data are mapped around that… simplistically put… you can see this in that interactive display…
Anything that’s beyond the set white/black point will be ‘white’ or ‘black’ (I’m adding quotation marks because if the original pixel was a very bright, but also very saturated, colourful one, filmic may not map it to white). So that would be clipping. Compression means gradually reaching a point, but once you are there, and cannot go beyond it, it’s the same as clipping. Maybe there’s a confusion about the terminology here?
The filmic curve is not applied to each RGB channel separately, since that would cause hue shifts. It tries to avoid such colour shifts, but that sometimes results in unnatural-looking images, like fire or glowing embers turning pink instead of orange or yellow.
I do look at the dynamic range map, yes. I understand what it’s showing but it isn’t fully clear to me if I am clipping the input image at -6.3 and +5.2 or if I am remapping the input image darkest value to -6.3 and the brightest value to +5.2 (and compressing or expanding the input to filmic as the case may be) and then remapping (again) that new input range to the display.
Does the relative white/black point clip the input to filmic or compress/stretch (a.k.a remap) the input to filmic?
If I understand your answers, it clips the input. Does that mean that the auto pickers are not a good idea if there are only midtones?
On a side note then, if I expose for a grey card, the relative white point would be set to 2.5 wouldn’t it? White being about 2.5 stops above middle grey?
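A quick back-of-the-envelope check of that figure, assuming an 18% grey card and roughly 90% reflectance for diffuse white (both common textbook values, not darktable specifics):

```python
import math

# Assumed reflectances: 18% grey card, ~90% for diffuse white.
middle_gray = 0.18
diffuse_white = 0.90

ev_above_gray = math.log2(diffuse_white / middle_gray)
print(round(ev_above_gray, 2))  # ~2.32 EV, close to the 2.5 quoted above
```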
For the display, but I think the scene could be much higher, right? So you exposed for gray, and your screen white or max brightness might be ~2.5 EV over that, but the scene could be much brighter… with filmic you would then add as many EV for mapping that extra back to the display as you want, and if you don’t use enough, values above that will be clipped to white…
This is how I understand it, anyway…
Manual entry…
white relative exposure
The number of stops (EV) between the scene middle-gray luminance and the scene luminance to be remapped to display white (peak-white). This is the right bound of the scene dynamic range that will be represented on the display – everything brighter than this value on the scene will be clipped (pure white) on the display. The color picker tool reads the maximum luminance in RGB space over the drawn area, assumes it is pure white, and sets the white exposure parameter to remap the maximum to 100% luminance.
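The picker logic described in that manual entry can be sketched roughly like this (a hypothetical helper, not darktable’s implementation; 18.45% is the middle-gray value filmic assumes):

```python
import math

def pick_white_relative_exposure(region_rgb, middle_gray=0.1845):
    """Mimic the picker described above: take the maximum value over
    the sampled region, assume it is pure white, and return its
    distance from middle gray in EV. Hypothetical sketch only."""
    max_lum = max(max(px) for px in region_rgb)
    return math.log2(max_lum / middle_gray)

# A region whose brightest value is 4x middle gray -> +2 EV:
region = [(0.1, 0.2, 0.15), (0.738, 0.5, 0.3)]
print(pick_white_relative_exposure(region))  # ~2.0
```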
It’s a curve. It remaps. Whatever is outside the range will become black or white. Whatever is inside is compressed or stretched to fit inside / fill that 0% - 100% range.
You are correct, if your original scene consists of midtones only, you may not need/want such tone mappers.
The scene could be much higher but limited by the dynamic range of my camera sensor, so if I know what that is, I can just set the black and white points to match the DR of the camera, can’t I? Then do my best to set exposure correctly, which should give a good mapping to the display, shouldn’t it?
Or is it the case that I can actually stretch that initial image in the scene-referred workflow, before filmic, and what I really have to do is set the filmic white and black points to be wide enough for this?
I would read the section on filmic. It should be enough to clarify this. The raw black and white point module sets the DNR values for your camera from the metadata or camera support in dt. The key thing is for you to set your middle gray. Even more relaxed than that: just adjust exposure in post so that your main subject is well exposed… this can initially drive highlights to be really blown… then you add and tweak filmic or sigmoid. Using the sliders for relative white and black, you now bring the data to your monitor’s white and black points by deciding to compress it all or let it clip at some point. There are also settings in the options tab to set the profile of that roll-off, i.e. hard, soft and safe, so you might want to experiment with those… safe is more like sigmoid.
I read it (again). I’ve read it more than once in the past, and there it is, as plain as day. The relative white and black points clip the input. No idea how I managed to miss that!
Interestingly, it also notes at the bottom the relationship between sensor dynamic range and scene dynamic range within the darktable pixel pipe.
I still enjoyed this conversation, it has been enlightening. Many thanks.
You are right, but OTOH if you only have midtones, mappers like sigmoid or filmic will be almost linear on that part, so they are innocuous. So for the majority of use cases, enabling a tone mapper is sensible.
With your degree of interest, I strongly recommend that you invest in RawDigger https://www.rawdigger.com/, which gives you a true raw histogram for all or part of any raw file.
With a true raw histogram, it is very easy to see when sensor clipping occurs - it appears as a brick wall, especially in the log-log view mode. Not free but money very well spent.
It can also put out an RGB conversion with none of the innumerable hidden twists and turns of other converters, including proprietary ones. For example, I would never have known that the Sigma DP2 Merrill has a horrible green cast at each side, has only half-sensitivity in the red channel and enough high-gain pixels in the blue channel to deliver a starry, starry night. All invisible when converted by Sigma Photo Pro. For me, that’s like a Ferrari paint job with scratches underneath.
I looked at RawDigger after I saw it being used on here in answer to another question I posed about white balance (in fact I think it was one of your answers). I run Linux on everything (and Android), and unfortunately it is not available for Linux. I might look to see if it will run under WINE when I get a chance.
I am stuck without a true raw histogram for now but I am hoping I can get pretty close to finding out the sensor dynamic range with some of the methods I have seen online. To be honest, I will be happy to just get quite close.
I know it has been advised that filmic rgb should be adjusted to taste, with a strong emphasis on being happy with the visual results, but I can’t help but feel that I would benefit from setting the white and black points to something close to the real sensor dynamic range.
It’s not so much to taste but to that middle gray anchor (which is often set to taste), which is also used by several other modules as a reference. And image to image, you are not likely setting every image to a gray card. In fact you might edit to expose the face of your main subject, for example, to have good light. Then you set those points… if you fix them, then this breaks down…
First, sensor dynamic range is an ill-defined concept. Sure, you have the number of bits recorded from the sensor, and you can simply take \log_2, but that does not map to anything useful since the last few bits are surely going to be noise, which is a big problem in the shadows because relative to the signal, the noise will be overwhelming.
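A small illustration of that point, with made-up example figures for full-well capacity and read noise: the naive bit-depth figure and the engineering dynamic range (full well over noise floor) can differ by more than a stop.

```python
import math

# Toy comparison: nominal vs usable dynamic range of a sensor.
# full_well and read_noise are invented example figures, not any
# specific camera's data.
bits = 14
nominal_dr = bits            # naive figure: log2(2**14) = 14 EV

full_well = 60000.0          # electrons at clipping (assumed)
read_noise = 10.0            # electrons RMS (assumed)
engineering_dr = math.log2(full_well / read_noise)
print(round(engineering_dr, 1))  # ~12.6 EV, and the bottom stops
                                 # are still dominated by noise
```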
Second, what you are deciding in practice is where the contrast should be distributed globally. You are choosing a mapping f: \mathbb{R}^+ \to [0,1], the slope of which (or of f^{-1} if you want to think in display terms) is the contrast. The convention is that middle gray is aligned with the most interesting part of the image, so it gets the highest slope (contrast), which worked nicely with chemical film, but that’s just a convention; you can change that in digital photography.
filmic rgb has a bounded f^{-1}([0,1]), while sigmoid maps the whole interval, but away from middle gray both have a tiny amount of slope anyway. In some applications it makes a subtle difference, but you have to be careful: since f is bounded and increasing, you only have so much slope to go around. In other words, if you want contrast in the highlights and/or shadows, you have to give up contrast in the middle.
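The “only so much slope to go around” point can be checked numerically: for any bounded, increasing curve, the slope integrated over the whole input range can never exceed the size of the output range. A sketch using a plain logistic curve (illustrative only; darktable’s sigmoid module uses different math):

```python
import math

def sigmoid(ev, skew=0.0):
    """Simple logistic tone curve over EV relative to middle gray.
    Illustrative only, not darktable's sigmoid module."""
    return 1.0 / (1.0 + math.exp(-(ev - skew)))

# Numerically accumulate the slope over a wide EV range; the sum
# telescopes to f(max) - f(min), which is bounded by 1:
total = 0.0
step = 0.001
ev = -20.0
while ev < 20.0:
    total += sigmoid(ev + step) - sigmoid(ev)
    ev += step
print(round(total, 3))  # ~1.0: the slope "budget" equals the output range
```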
It is really an artistic choice. The best practical way may be deciding where in the tonal range your subjects are, and working back from there. E.g. for a particular image, it may be that you want contrast in the midtones and a bit in the highlights, and less in the shadows. In sigmoid you can do this with the skew parameter, in filmic rgb with the white/black points.
You can of course change where you put the highest contrast, but it’s not just a convention, as our eyes are most sensitive to contrast around middle gray (and that’s where our eyes put the “exposure”). And of course, setting “middle gray” for the most interesting part of the image doesn’t mean that that part is middle gray (e.g. a black cat: you don’t want the cat to be forced to middle gray).
That’s true for the global contrast.
But you can “cheat” and somewhat modify local contrast to give the image a bit more “pop” than expected from the global contrast curves as shown by filmic/sigmoid. In filmic, the different “contrast” settings in the “options” tab also play a role wrt. contrast in shadows and highlights.
This is basically what “HDR tonemappers” do: map a large input range to a ‘standard’ output range and strongly increase the local contrast to compensate. But doing this for an overly large input range, and in an Lab-like colour space is not ideal… Used with moderation you can get good results.
And don’t forget the contrast settings under filmic’s options tab, or the tone equaliser, in combination with filmic’s black and white references: the latter give you “cut-offs”, the others allow you to change the contrast in those areas (with, of course, trade-offs in the rest of the image, you only have so much tonal range to play with…)
@thehatterman : If you set filmic’s references to cover the full dynamic range of your sensor, you force every image to be treated as if it covers 14EV, which then has to be compressed to ~10 EV (screen) or less (paper…). In practice, that means you get no whites, and/or no blacks in many images (anything in shade will show ~7EV between black and white, iirc), or you will have to increase the global contrast quite a bit. When you do that, you have to keep an eye on the tonemapping in filmic, as the curve can overshoot. (“overshooting” is not possible with sigmoid, due to the different math used in that module).
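To put rough numbers on that (the 14 EV and 7 EV figures below are the assumed values from the post above, not measurements): if the references are fixed at the full sensor range while a typical scene in shade spans far less, the image fills only part of the mapped tonal range, hence no true blacks or whites without raising global contrast.

```python
# Rough numbers for the scenario above. Both ranges are assumed,
# not measured figures for any particular camera or display.
sensor_range_ev = 14.0   # filmic references fixed at the full sensor range
scene_range_ev = 7.0     # e.g. a subject entirely in shade

fraction_of_output = scene_range_ev / sensor_range_ev
print(fraction_of_output)  # 0.5: the scene fills only half the tonal
                           # range, so neither true black nor true white
```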