Filmic RGB's blackpoint / whitepoint measurement

Hi! I would like to discuss some points about Filmic RGB’s white/blackpoint measurement that I think still need to be raised even after Aurélien Pierre seems to have left DT development.
I hope that this is the right place for it. (My version is DT 4.8.0. Please note that I am only an end user, and definitely not a programmer.)

As we know, Filmic RGB attempts to imitate the behavior of film. In this sense, the calculated dynamic range of the source image (called “EV scene” in the graphic display) should not be dictated by the given scene but (originally) by the film material or the digital sensor’s potential dynamic range. Otherwise, you re-map the “internal” luminance range of a “flat”, non-full-spectrum scene (within the sensor’s source dynamic range) to the full luminance range (blackpoint to whitepoint) of the display. However, in the “scene” tab, the color pickers for white/black relative exposure perform a black/whitepoint measurement from the scene.

This has several effects:

  1. Contrast stretching
    By default, any scene is turned into a high-contrast scene.
    Aurélien has declared in a forum comment that Filmic RGB is designed for scenes with a high dynamic range, where there are “real” whites and blacks.
    In scenes where this is not the case, the problem is not that the color pickers may pick up some inaccurate value, as hinted at in the user manual (“you will need to adjust it manually”). Nor is the problem, as the pickers’ tooltips suggest, that you should adjust the values (even manually) “so […] clipping is avoided.” In any scene without real whites and blacks (relative to the film material or the sensor data), and barring overexposure, there actually is no risk of clipping that would need to be avoided: after exposure correction (e.g. due to ETTR), we sit mostly in the moderate outer ranges of the histogram (the outer upper/lower midtones).
    Rather, the problem is the basic assumption that, according to the user manual, in the “scene” tab, the module measures min/max luminance within the scene and "assumes it is pure white [and] pure black." Granted, the module does not “know” where the ‘real’ white (and black) points are - but it is a conceptual error to assume that we can just infer them from the given mathematical data!
    This is particularly problematic as in one of his introduction videos, Aurélien advises us to learn about Ansel Adams’ zone system. Yet, based on the color pickers, any attempt at staying true to the zones (apart from middle grey) is confounded - Note that this point does not criticize the additional “filmic” compression at both ends of the S-curve!
    By the way, a similar effect of contrast maximization happens in the Negadoctor module, within the “paper properties” tab, when you use the color pickers for “paper black” and “paper gloss”.

Specific effects are:

  2. Highlight clipping to maximum luminance
    2a) In the highlights, the dynamic range is clipped (and re-mapped) based on the brightest spot.
    Thus, when the brightest spot is a white wall “originally” at 70% luminance, it is pushed up to 100%.
    2b) Overly aggressive highlight reconstruction
    When enabled (and in its default settings), Filmic RGB’s built-in “highlight reconstruction” tab starts whitening the highlights artificially created by the white relative exposure color picker (as a consequence of point “2a”). While this function is useful for its intended purpose, the highlight clipping caused by the color picker means that skin portions or bright cloth tissues etc. that are now artificially pushed into white are additionally covered in a field of white dots preparing the transition to pure white.
    Before I understood the “symbiosis” between the “highlight reconstruction” module and the “highlight reconstruction” tab within Filmic RGB, this side effect of a faulty dynamic range measurement led me to completely shun this function of Filmic RGB, thinking it was incorrectly programmed.

  3. Shadow clipping to zero luminance
    Especially in low-dynamic-range situations (e.g. “zoomed-in” long-distance landscape shots that are flattened by natural haze), the shadows can be washed out - which is acceptable with regard to the given situation. Filmic RGB pulls these flat shadows down to the black point - often with a lack of pixel information regarding smooth luminance transitions, making such scenes appear extremely grainy or overly “dramatic” in the shadows.
    There is one ironic example in which all these effects come together: I have two photos with the same exposure (module) setting imposed on the main subject. One of the two includes a brighter area or highlight: here, the strong black relative exposure is “countered” by the comparatively strong white relative exposure, and we get a comparatively balanced image. Conversely, I compose (zoom or crop) the “darker” scene so as to exclude the brighter areas or highlights: here, Filmic RGB gives much more room to “uncompressed” shadows, letting the image become much darker overall, while areas that (due to the absence of the first image’s highlights) are now calculated to be highlights themselves get covered in white dots from Filmic RGB’s highlight reconstruction.
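To make the stretching effect concrete, here is a toy calculation (my own simplification in Python, not darktable’s actual code; the 10% and 70% luminances are hypothetical): once the pickers declare the scene’s min/max to be black and white, a flat scene is remapped to full display contrast.

```python
import math

def ev_relative_to_gray(luminance, middle_gray=0.1845):
    """Exposure value of a luminance relative to middle gray."""
    return math.log2(luminance / middle_gray)

def stretch_to_picked_range(luminance, scene_min, scene_max):
    """Toy model of 'assume scene min is black, scene max is white':
    linearly remap the picked EV range onto the full 0..1 display range."""
    ev = ev_relative_to_gray(luminance)
    ev_black = ev_relative_to_gray(scene_min)
    ev_white = ev_relative_to_gray(scene_max)
    return (ev - ev_black) / (ev_white - ev_black)

# A hazy, flat scene whose luminances only span 10%..70%:
# the brightest patch (a 70% wall) is pushed to display white (1.0),
# the dimmest (10%) down to display black (0.0) - maximum contrast.
print(stretch_to_picked_range(0.70, 0.10, 0.70))  # -> 1.0
print(stretch_to_picked_range(0.10, 0.10, 0.70))  # -> 0.0
```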

There have been several solutions proposed with regard to this “problem:”

  • A) Freehand use of “dynamic range scaling”
    As a default solution, you are advised to move the “dynamic range scaling” slider - but towards which target point?! The additional info when you hover over the slider is “to give a safety margin to extreme luminances.” Again, it is assumed that the maximum detected luminance from the given scene is (close to) pure white and should be moderated just to avoid clipping. The manual explicitly admits that “when no true white and black are available on the scene, the maximum and minimum RGB values read on the image are not valid assumptions any more.” But then where do we get those target points from, apart from a manually adjusted estimate?! In any case, dynamic range scaling creates new problems in the shadows because it “symmetrically shrinks or enlarges the detected dynamic range.” If, e.g., we scale down the highlights from the whitepoint to their “more realistic” zone position, the blacks get washed out - and vice versa regarding a freehand recovery of shadows.

  • B) Freehand correction of black/white relative exposure
    In certain videos, Aurélien Pierre, Bruce Williams and Gus from Studio Petrikas all make free-hand corrections with the sliders of black/white relative exposure.
    Regarding non-HDR images, Aurélien seems to start with a default position and only adjusts one of the two sliders - even with a pre-defined number!
    When Bruce uses Filmic RGB, he goes for the full manual adaption.
    At one point, he tries the color picker but finds the result “too extreme,” explicitly wondering about the “cringe” factor of manual adjustments (in the case of the blackpoint; the whitepoint of that high-dynamic-range scene was obtained successfully).
    Gus from Studio Petrikas uses the color pickers and then makes a freehand black level correction by taste, by “a few stops.”
    In all cases, it is an arbitrary step to assume where pure white and black are and whether the scene is “interpreted” as one that should use the full luminance spectrum.

  • C) Working with pre-measured dynamic range values
    a) Subtraction of the sensor’s overall dynamic range from scene-measured white relative exposure
    An old 2018 introduction to “Filmic” edited by Aurélien explicitly tells you to insert the EV of your sensor’s known dynamic range into the “scene” tab, subtracting it from the measured maximum white relative exposure.
    Filmic module for darktable - HackMD
    It may be noted that this approach starts with the assumption that the measured white relative exposure actually is the whitepoint.
    I am fully aware of the remark in the more recent, updated manual, according to which “there is no direct relationship between your camera sensor’s dynamic range (to be found in … measurements) and the dynamic range in filmic (scene white EV – scene black EV).” This is most unfortunate, and I am not sure why this aid is ruled out (the linked measurement site even tells you precisely the relative value of shadows and highlights!).
    Is it, as the manual implies, that the specifics of the RGB space and the earlier modules in the pipeline may void the reassuring numbers of measured values? In one video, Aurélien explains that the measured values are irrelevant simply because in studio environments it is the scene itself that has a dynamic range lower than what you can capture.
    But this explanation would simply refer to the assumption that scene-measured highlights and shadows should be pulled up close to pure black and white just to reach maximum contrast.
    b) Imposing the sensor’s relative highlight/shadows dynamic range (based on the given ISO) via individual Filmic RGB presets
    In a (German-language) tutorial, Boris Hajdukovic proposes using camera-related dynamic range presets on the black/white relative exposure sliders, whose positions he has to adapt to the prior exposure correction.
    Apart from the cumbersome need to adapt these numbers for every image, jumping forward and backward to calculate the “right” positions based on the exposure module’s EV-shift, this method appears refreshingly intuitive, since such presets could be easily copied to any picture with the same ISO.
    Nevertheless, it has two problems: Firstly, there is currently no automatic function to transfer the luminance shift from the exposure module to Filmic RGB’s “scene” tab. More importantly, at least with the current DT version, it does not work correctly with my numbers from the linked source: for example, it clips the shadows in “higher dynamic range” images. So I guess I am misunderstanding something about the applicability of such numbers, due either to the scene characteristics or to the complexity of the RGB color space.
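For clarity, here is the arithmetic behind both variants of approach C as I understand it (a Python sketch with made-up numbers; the 12 EV sensor range, the picker result and the 1.5 EV exposure correction are all hypothetical):

```python
# Variant (a): subtract the sensor's known dynamic range from the
# picked white relative exposure to locate the black point.
picked_white_ev = 3.5       # hypothetical picker result, EV above middle gray
sensor_range_ev = 12.0      # hypothetical sensor dynamic range from measurements
derived_black_ev = picked_white_ev - sensor_range_ev
print(derived_black_ev)     # -> -8.5

# Variant (b): camera/ISO-based presets for the black/white sliders
# must be shifted by whatever EV correction the exposure module applied,
# since filmic sits after exposure in the pipeline.
preset_white_ev = 4.0       # hypothetical ISO-based preset
preset_black_ev = -8.0
exposure_shift_ev = 1.5     # hypothetical exposure-module correction
white_ev = preset_white_ev - exposure_shift_ev
black_ev = preset_black_ev - exposure_shift_ev
print(white_ev, black_ev)   # -> 2.5 -9.5
```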

What is currently missing is some “expectable” relationship between middle gray and min/max luminance in less-than-HDR conditions. I am dissatisfied with certain remarks (Aurélien) that you have to make artistic choices here since there is no accurate representation of reality: in the case of “high dynamic range” images with given pure whites and blacks, the goalposts are quite “scientifically” exact indeed! What about inserting a scene-measured known luminance value as a substitute, placing it into a specific zone or at a specific luminance value? After all, we already have to “know” (or choose) some value for Ansel’s middle gray (or, roughly, the mid-tones). Such a step (currently, of course, not implemented in DT) could mathematically fix the curve at some intermediate point instead of the end-points.

Please don’t tell me just to move to Sigmoid!


I always thought that “mimic film” was meant in the sense that the highlights and shadows roll off instead of clipping, not as a literal mimicry of the whole medium of film. Maybe I’m wrong there.

Aurélien always said to trust your eyes and not edit by numbers; I can only assume this is the reason why. You don’t have to use the auto pickers for the black and white sliders; I’ve never used them and have been satisfied.

The highlight reconstruction is off by default in darktable. There is a checkbox for it.

Again, trust your eye. Move it until it looks good. My photos often contain less dynamic range than my sensor can handle, thus I increase contrast, move white slider to the right and black slider to the left until I get the desired overall contrast.

once you’ve established the overall contrast for your photo, you can use the tone equalizer to affect specific tonal ranges.

it is arbitrary and depends on your artistic vision

we don’t really have this in filmic v7, the current version.

I’d go a step further: what was there in “reality” doesn’t matter at all, all that matters is your artistic interpretation of the scene, a journey on which you’ve already started when you captured the scene.

If you must go down this road, you can look at your histogram and mentally draw a few more lines in it, and you’ll have ten zones. Then you can edit from there.

It’d be much more useful, in my opinion, if you spent some time reflecting on what you want your photo to look like (“previsualization,” if you will) and then executed that vision. Editing by the numbers is going to leave you with a soulless and joyless process.


Thanks for your well-thought-out post. I may not have any real answers to your questions, but for my part, when I use filmic I click on the white relative exposure eyedropper and feel it often overshoots to the left, so I pull it back manually to get a pleasing-looking white/highlight. I then take the same approach with the black relative exposure slider and find that most times the black is well set, presuming there is something approaching black in the original scene.

For my part I don’t expect filmic to make correct presumptions and I adjust by eye for the look I like. Certainly the automatic adjustment of sliders is a problem in a hazy scene without true whites or blacks.

I often use sigmoid because I like the colors and results that come straight out of the box with an ‘average’ scene. However, I have come to appreciate the ability of filmic to tackle very high dynamic range scenes. For instance, where I have a blue sky with white clouds and parts of buildings in very deep shade. I experienced this recently in medinas in Morocco. Filmic was the easiest tool to tackle these images and retained details in the highlights of the sky that were challenging to preserve in the sigmoid module.

For what it is worth, when I process an image in DT, or even photograph an image in the camera, I set the exposure to give the best rendition of the highlights. Then and only then do I tackle the recovery of the shadows using a combination of modules, including color balance rgb, shadows and highlights, and the tone equalizer.

I avoid painting by numbers, which is how I interpret the desire expressed in your post. Please forgive me if this is a misunderstanding on my part.

I often wonder what Aurélien is up to and whether we will ever see more great modules from him. However, I am not interested in going down the Ansel editing pathway, as AP has removed some modules, such as shadows and highlights, which I really like to use.

BTW, @Neutral_Gray welcome to the forum and I hope you can enjoy using DT as much as I do.


I think this is not a correct starting point:

  • the “film behaviour” referred to is the S-shape of the response curve, with the slope of the curve tending towards zero at the end points;
  • given the modules coming before filmic in darktable’s pixel pipe, the relation between image dynamic range and sensor dynamic range is not simple: exposure, tone equaliser and other modules will change the original dynamic range;
  • automatic picking of reference values will always imply an assumption of what that reference corresponds to;
  • the important dynamic range here is not the input, but the **output**; compare screen and (paper) print…

The initial assumptions being questionable, the resulting reasoning is suspect.


I agree with @rvietor here — I think this assumption is false.

There was a recent discussion in another topic, I will just link what I wrote instead of repeating it here:


  1. all mapping is about distributing (global) contrast, of which there is a finite amount.

  2. bring everything to the relevant tonal range before global tone mapping. trying to fix things at that stage is a futile exercise.

  3. consider sigmoid.


First of all, thanks for your kind reactions!
It is hard to reply to all these well-meaning answers at the same time without omitting some of your good points.
I assume it is better to categorize them a bit. I hope I do not misrepresent you too strongly.
To be sure, I really do see the risk that I completely misunderstand something, so please bear with me.


  1. Sigmoid is too simplistic.
  2. There is an artistic aspect to image processing, and Filmic in particular, but in the case of my problem, this is less relevant.
  3. The technical obstacles to a correct calculation of Filmic’s black/whitepoint should not be regarded as insurmountable.

Here come the more detailed points:

  1. Endorsements of Sigmoid
    a) “If Filmic RGB gives you too much freedom (insecurity), switch to Sigmoid.”
    What I am looking for is not a simpler solution but a more precise one for certain picture scenes.
    b) “Sigmoid does what you seem to expect from Filmic RGB.”
    I don’t think so. I admire most considerations behind Filmic RGB and the combination of its sub-modules. What I find lacking is only a partial function. With Sigmoid I would lose all the parts of Filmic RGB that I do enjoy.

  2. Artistic opinions
    a) “What you want to do via ‘painting by numbers’ is actually an arbitrary artistic decision.”
    I agree in three general ways:
    - Concerning limited dynamic ranges of media, as Aurélien says in one video, “things suck,” and decisions have to be made as to what to compress.
    - Often, no precise luminance measurements (light meter, reference cards) are used, and in post-processing you have to refer to other objects within the scene that can be assigned a certain luminance value.
    - In high-contrast scenes, you often have to decide what is the “key” object to be positioned in the midtones, shifting (and compressing) the rest of the luminance spectrum.
    There is also a point supporting the “artistic” argument within “Filmic RGB”: Filmic does have an artistic side - in the shape of the “look” tab, especially by setting the contrast of the linear slope.
    The “scene” tab, however, is about tone mapping. Without denying that you can use it for artistic contrast settings, I think that reserving its use to this function distracts our view from the problem I want to address. There has to be some “reliable” element in Filmic’s behavior. Comparing digital to analog photography, we seem to have forgotten how meticulously mathematical the analog process was. Especially for all those artistic effects such as dodging/burning or pushing/pulling, the film/paper material had to react in predictable semi-linear ways. Also, in the case of images with full blacks/whites, we do expect the color pickers to give us correct results - it is just with duller images that we need an “everything is art anyway” excuse.
    b) “What matters is the output, not the input.”
    Of course, both in analog and digital, we are limited by our output media. When photographing in analog, you had to think backwards from those limits to properly expose and process your image. Hopefully, then, you could capture high-contrast elements within the boundaries of a limited dynamic range. What matters is that the “predictability” of the input was a basic necessity. Under certain “normal” conditions, the development happened in very linear ways. It was never the case that by using the standard method (like with Filmic RGB), all images were automatically stretched to maximum contrast!
    c) “Mapping is about the distribution of (global) contrast.”
    There seems to be a conflation of two terms that also affects the “problem” built into the “scene” tab: the dynamic range of different media is conflated with the internal contrast of a specific scene, within a specific medium. In one early introduction video by Aurélien, aptly titled “remap any dynamic range,” the term explicitly refers to the potential maximum range of different media. Just look at the video’s preview image!
    It is not about re-mapping global contrast, but about compressing the scene’s (potential) dynamic range and making the cutoff less dramatic by “reasonably” compressing the edges - hence the S-curve. However, what the color pickers do is maximize contrast by assuming the scene’s global contrast to be the dynamic range that has to be re-mapped to display space.
    d) “The film behavior is simply an S-shape.”
    Compared to “simpler” tone mappers like a basic tone curve or Sigmoid, that is true. However, what matters is how/where the S-curve is applied. The extreme shoulder/toe ends should contain only re-mapped whites and blacks (either from the source or via intermediate processing). I understand that with all the processing in open RGB space, it takes some “lasso” to anchor these points in the display space. My problem is that the color pickers in the “scene” tab only do this correctly in scenes with full whites and blacks. In all other scenes, any lighter grey is mapped to the maximum end of the shoulder, while any darker grey is mapped to the minimum end of the toe. Both points should still be on the linear slope - or a dull image should remain dull unless you have processed it before Filmic RGB. When we manually change the maximum/minimum sliders “a bit” or arbitrarily move the “dynamic range scaling” slider, we may “recover” some details, but we only move the scene-measured values “a bit” away from the extremes of the display target range (the end of the shoulder/toe area). In general, then, we still follow the assumption that any scene should be stretched to maximum contrast.

  3. Technical obstacles
    a) “Sensor dynamic range isn’t a very precise value in the first place.”
    It is imprecise, but at least an indication as to where the real clipping points should be. I understand that even within pre-defined standard spaces of cameras, certain edges will only contain noise. However, that should not keep us from searching for some indicators of this kind. I originally proposed some alternative intermediate anchor point whose luminance is known. I don’t know…
    b) “Too much happens in the pipeline before Filmic RGB is applied.”
    How come we have all these digital, precisely calculated steps but cannot trace back the changes?! It is unacceptable to simply stack modules on top of each other to a point where we deem the original reference unidentifiable. Is this because logarithmic or RGB space is so complex, or because we cannot / don’t want to somehow connect the effects of otherwise solitary modules? After all, “color calibration” is successfully connected to “white balance” in order to relay the camera data. There must be some way of knowing how, with relation to the original scene, individual modules changed the image! Besides, in most cases, the color pickers in “Filmic” do not give me dramatically different measured black/whitepoint values when I turn off certain intermediate modules (e.g. “color balance rgb”). I acknowledge, though, that certain effects that strongly change luminance (exposure, tone equaliser) have to be connected to Filmic’s input numbers somehow - and as I mentioned, Boris Hajdukovic proposes one tedious method to do this manually.
    c) “You should first process the image so that Filmic RGB correctly picks the values as a finishing tool.”
    If you want a “flat” image to remain flat, it should not be necessary to process it to maximum contrast before applying Filmic RGB, just so that there are high/low luminance extremes that the color pickers in the “scene” tab can identify correctly. Sorry if I’m getting this point wrong.
    d) “The automatic picking is only an assumption, which you can’t blame for false readings.”
    I do not question this point at all. To clarify, what I am looking for is some sort of anchoring of the curve, no matter how far-reaching artistic changes may be. The “lack” here seems to be that you can only shape (the image’s position on) the S-curve based on its extreme ends, not via an intermediate anchor point.
    Of course, you can currently calculate an approximation manually: identify the brightest spot measured by the color picker and assign it to a position in Ansel Adams’ zone system. By calculating a multiplication factor taking us from this zone’s luminance percentage to that of Zone X, you should be able to scale up the “white relative exposure” EV value to that of “virtual” white. Not only is this currently far too arduous to do, but I also don’t know if the idea holds with regard to the earlier processing steps you mentioned. But it would avoid the current behavior of artificially maximizing contrast.
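The approximation I have in mind can be written out in a few lines (a Python sketch of my own proposal, not existing darktable functionality; the zone reflectances are the usual approximate values, one stop apart and anchored at 18% for Zone V):

```python
import math

def zone_luminance(zone):
    """Approximate relative luminance of an Ansel Adams zone
    (one stop per zone, Zone V anchored at 18% middle gray;
    values above 1.0 are notional)."""
    return 0.18 * 2 ** (zone - 5)

def virtual_white_ev(picked_white_ev, assigned_zone, white_zone=10):
    """Scale the picked white relative exposure so that the picked spot
    lands in its assigned zone rather than at display white.
    The EV shift is log2 of the luminance ratio between the zones."""
    shift = math.log2(zone_luminance(white_zone) / zone_luminance(assigned_zone))
    return picked_white_ev + shift

# Brightest spot picked at +2.0 EV, but judged to be Zone VII
# (a light wall), not Zone X white: the white reference should
# sit 3 EV higher, at "virtual" white.
print(virtual_white_ev(2.0, assigned_zone=7))  # -> 5.0
```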

I think you still misunderstand. filmic’s B&W points are not “calculated”, they are an artistic choice you have to make when processing the image.

Not at all, sensor DR has very little to do with the characteristics of your global tone map (other than the fact that going into regions where your sensor does not provide information is usually nonsensical).

It is still your choice, within the domain that makes sense. Sensors have huge dynamic range these days. Trying to map all of that to a finite range of values will result in very flat images, even if you are targeting a HDR display.

Then just keep using filmic rgb, but keep in mind that the automatic suggestions for b&w points are just suggestions based on rules of thumb (very simple algorithms), not some objective truth derived from your sensor or whatever. Use your eyeballs to tweak the result to taste.


@Neutral_Gray discussions tend to flow better with shorter posts. I’m genuinely interested in feedback on darktable functionality but a wall of text can be quite off-putting and hard to respond to.


The filmic module has quite a few “anchoring points”:

  • input luminosity of 0.182 is invariant (the famous “middle gray”; this value is set earlier in the pipeline, typically with the exposure module)
  • white reference sets the EV value that will be mapped to 1.0 output;
  • black reference sets the EV value that will be mapped to 0.0 output;
  • contrast (“look” tab) determines the slope at middle gray;
  • latitude (also “look” tab) sets a zone where the contrast is kept constant;
  • shadows/highlights balance (again, “look” tab) shifts the zone set with latitude towards highlights or shadows.

Then there are a few choices determining the fall-off at the ends (in the “options” tab).
(Of course, if you only want to use the “scene” tab, your options are limited…)
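For what it’s worth, the first stage implied by those anchor points (before the S-curve) can be sketched roughly like this (a simplification in Python, not filmic’s actual code; it only shows how the white/black references normalize the log-encoded value, using the 0.182 middle gray quoted above):

```python
import math

MIDDLE_GRAY = 0.182  # input luminance kept invariant (set earlier via exposure)

def log_shaper(luminance, white_ev, black_ev):
    """Rough sketch of filmic's first stage: express the pixel as EV
    relative to middle gray, then normalize the [black_ev, white_ev]
    interval to [0, 1]; the S-curve is applied to this value afterwards."""
    ev = math.log2(max(luminance, 1e-9) / MIDDLE_GRAY)
    t = (ev - black_ev) / (white_ev - black_ev)
    return min(max(t, 0.0), 1.0)

# With white at +4 EV and black at -8 EV, middle gray lands at 8/12:
print(log_shaper(0.182, 4.0, -8.0))       # about 0.667
print(log_shaper(0.182 * 16, 4.0, -8.0))  # +4 EV -> 1.0
```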


  • if you want an image with light gray as brightest tone, set the white reference high enough;
  • if you want an image with dark gray as darkest tone, set the black reference low enough (you may have to adjust your middle gray in certain cases);
  • if you want a linear “curve” between two points, increase latitude.

(Strictly speaking, you can also use the controls in the “display” tab to change the output range, but this is not recommended for normal use.)

But keep in mind that you only have a limited working space: if you set your darkest and lightest tones and you want a linear curve between them, that forces your contrast setting; vice versa, if you want a given contrast, you limit where you can put your brightest and darkest tones. That’s why the contrast diminishes towards white and black. And in any case, the resulting tone curve should be monotonically increasing (or you may get strange effects which you probably don’t want to achieve through filmic…).

That’s not a limitation of filmic, but of the mathematics involved (output limited to 0…1, slope defined in one point or over a range). Try to pin too many points, and the math no longer has a real solution (and I don’t know how to use imaginary values in my editing).

What the pickers do is set the “curve end points” to what you tell them is black or white. They do not modify the contrast setting. So if you don’t want extreme black or white, you’ll have to set those two reference values manually.

A misquote. That remark was made about using sensor dynamic range as a base for filmic’s settings, and I pointed out that that range could have changed due to the pixel pipe, and thus wasn’t that good a base.

Of course the changes are traceable. It’s just not done in the current pipeline (and I don’t think there would be any use in doing so).

Darktable is designed for editing images. Part of that is mostly technical (basic white balance adjustments, demosaicing, distortion corrections, noise reduction, …), part is artistic/esthetic. Darktable is clearly not meant for highly technical work, where exact pixel values are needed for measurements. There are other programs for that, and you’ll need to spend a lot of time (and money) on calibrating and controlling all steps of your process.


I understand, and it is really nice that you all still put in the time, despite the confusion.


I think filmic needs the reference points to determine the final mapping, but for me filmic is a midtone tool. It really operates on the premise that you set the intention of the image with your exposure and anchor the midtones… by not being constrained initially by hard upper and lower limits, you can establish the core element of most photos… so effectively, once you establish your midtones, you map the rest of the tonal range of the image to blend into that base adjustment of tone. Given the nature of most raw images, this requires a boost of the midtones and then the need to map the highlights back down. I don’t turn it on at first for any image. I know DT by default uses a tone mapper, and having that pop on straight away might be convenient, but I like to see the image without it… I also very often develop an image without one… Using CB, local contrast and diffuse or sharpen, I can often get a very nice image with no compression necessary, and so less work to recover details… I think because it has many controls, people tend to try to do more with filmic than was intended by Aurélien. Do a basic global tone map and then move on and do color and tone locally with the other tools…

There are many kinds of photography, so there is no one-size-fits-all recipe, and thankfully DT does a pretty good job of providing lots of options… Given the prescribed workflow framework of establishing the midtones from the scene to a desired location on the display or output, I think the tool supports that process as desired, and I am not sure a different black/white point determination would be needed - but I also am no expert…



I am not a darktable user, so all this talk about Black and White levels and various mappings goes right over my head.

To me, the actual scene capture is represented only by the raw histogram, nothing else. For that, I use RawDigger with “subtract black” disabled.


It’s mostly about how you package that onto your current display, which for most people is still SDR.


If you hover your mouse over the “Black relative exposure” slider, the info says “increase to get more contrast.”
I just meant that the sliders do have an effect on global contrast.
This has also been described in the main Filmic discussion:

Also, sorry for the misunderstanding.

Concerning the core issue:

I understand the point, also due to your deeper explanation in the post you linked. Also, I don’t reject artistic decisions.

Still, I feel like something is missing here. In the old days, each film type had a fixed dynamic range. Of course, you could dodge and burn etc., but this was based on the “reliable” behavior of the film. Now, the whole concept of mapping is focused on “holding on to” and compressing luminance values above display range, but it is clueless about luminance values that are within/below display range (low-contrast) even before applying Filmic.

This is exactly what I mean: When we are not trying to rein in extreme luminances, the “enough” should have a “normal” reference.
Something like the camera dynamic range or the film dynamic range is simply missing from the concept. Neither turning Filmic off nor using it with its default settings is the right answer: without Filmic there is no minimum compression (and de-saturation) when approaching the edges, while the default settings may be nowhere near the camera's range (or anything of the sort).
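As a toy illustration of the stretching I mean (hypothetical helper name, for illustration only, not darktable code): deriving the black/white relative exposures from the image's own min/max, which is effectively what the scene-tab colour pickers do, turns a flat three-stop scene into one that spans the entire mapped range.

```python
import numpy as np

def picker_relative_exposures(img, grey=0.1845):
    """Derive black/white relative exposure from the image's own
    min/max, as the scene-tab colour pickers effectively do
    (hypothetical helper, for illustration only)."""
    ev = np.log2(img / grey)
    return float(ev.min()), float(ev.max())

# A "flat" scene spanning only -1.5 to +1.5 EV around middle grey:
flat = 0.1845 * 2.0 ** np.array([-1.5, 0.0, 1.5])
lo, hi = picker_relative_exposures(flat)
# These 3 EV become the whole black-to-white range of the mapping,
# with nothing left in reserve for the sensor's actual headroom.
```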

I get that. Theoretically, the old film process would imply that in our digital workflow a first compression (including some way of fixing the measured values) would come before the modification steps in the pipeline, followed by another one in the sense of paper printing.

(Image source: Death of the Zone System (Part V) — Gordon Arkenberg)
But I guess that is stupid because you wouldn’t want to compress your data unnecessarily.

Don’t worry, that is really not what I have in mind.

Could it be satisfactory to just settle on some reasonable values for the white and black and avoid the temptation to fiddle with them?

This should be doable, and it is in fact how things work in e.g. Blender - the picture formation transform there has a fixed range of inputs, beyond which the values are clipped.


Is this true? I admit this topic is getting long, with a lot of back and forth, but is that not sort of what the latitude is for? Maybe I don’t understand its role. With exposure you take your whole set of data and bump it up or down; then with the latitude you determine what range is “protected”, and outside that range you map with the sliders to try to align display black and white with the image data. If the entire range of well-exposed data is not clipping, then why is filmic needed, or could it not be made more or less neutral and just apply the gamma? I admit I feel like we know the black and white of the display, and we have the raw black and white points specified as reference points in the data for highlight recovery etc. I would have to read the code, but I suspect filmic respects/uses these values, so I am just trying to see what we are missing here.


Aren’t the defaults we have already doing this?


Of course, just a gamma could be sufficient.

I’d say that for unclipped images, Filmic can still be useful - just to simulate the normal compression effects of a slightly flat film photo.
Yes, the “latitude” slider sets the width of the protected linear part, but that does not mean the data remains unchanged. Firstly, where you set the latitude is an artistic decision, so an unclipped but “wider” image may or may not land on the strongly compressed toe/shoulder area. Secondly, the “contrast” slider regulating the slope of this linear part is just as artistic and film-typical. Also, the de-saturation towards the edges does not only start outside the linear part. Finally, if you use it, the transition part of the “highlight reconstruction” tab may (correctly) start whitening the areas at least close to white.
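For intuition about how the latitude and contrast interact, here is a toy S-curve (emphatically not darktable's actual spline): a linear segment of width `latitude` around the centre with slope `contrast`, joined to cubic Hermite toe and shoulder segments that roll off to 0 and 1. Even values inside the "protected" part have their output dictated by where that segment sits and how steep it is.

```python
def hermite(t, y0, y1, m0, m1, span):
    """Cubic Hermite interpolation over a segment of width `span`,
    matching end values y0/y1 and end slopes m0/m1."""
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * y0 + (t3 - 2*t2 + t) * span * m0
            + (-2*t3 + 3*t2) * y1 + (t3 - t2) * span * m1)

def tone_curve(v, latitude=0.33, contrast=1.5):
    """Toy S-curve on [0, 1]: linear "latitude" around the centre,
    smooth toe/shoulder outside it (not darktable's spline)."""
    h = latitude / 2.0
    if abs(v - 0.5) <= h:                      # protected linear part
        return 0.5 + contrast * (v - 0.5)
    if v > 0.5:                                # shoulder: roll off to white
        a, b = 0.5 + h, 1.0
        y0 = 0.5 + contrast * h
        return hermite((v - a) / (b - a), y0, 1.0, contrast, 0.0, b - a)
    a, b = 0.0, 0.5 - h                        # toe: roll up from black
    y1 = 0.5 - contrast * h
    return hermite((v - a) / (b - a), 0.0, y1, 0.0, contrast, b - a)
```

Widening the latitude or raising the contrast pushes the toe and shoulder into ever smaller, more aggressively compressed spans, which is exactly why an unclipped but "wide" image may still land in the heavily compressed regions.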

The raw black/white points are not necessarily the scene’s min/max values. Maybe you underexposed, (wrongly) expecting a higher-dynamic-range scene. Also, as discussed above, for most sensors there are (imprecise, old) measured values of dynamic range above/below middle gray. These could serve as such points in low-dynamic-range scenes, including a shift based on the exposure correction, if only the earlier modules in the pipeline didn’t mess up these values.
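A sketch of what that could look like (the sensor EV figures below are made-up placeholders, not measurements of any real camera): fixed per-sensor bounds around middle grey, shifted by whatever the exposure module added, instead of a per-image min/max measurement.

```python
def fixed_relative_exposures(sensor_ev_below=-9.5, sensor_ev_above=3.5,
                             exposure_correction_ev=1.0):
    """Hypothetical alternative to the pickers: take the sensor's
    measured dynamic range around middle grey (placeholder numbers)
    and shift it by the exposure-module correction, rather than
    measuring the scene's own min/max."""
    # A +1 EV boost in the pipeline moves both bounds up by +1 EV
    # relative to the (unchanged) middle-grey anchor.
    return (sensor_ev_below + exposure_correction_ev,
            sensor_ev_above + exposure_correction_ev)
```

The point of such a scheme would be that a flat scene keeps its flatness: the mapped range is dictated by the capture medium, not by whatever the scene happens to contain.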

Yes, but they are arbitrary values that impose an arbitrary stretching factor on the given scene, just like the color pickers stretching the measured edges to min/max.
Also, a sensor’s dynamic range is not just an overall number, but every camera sensor has a different distribution of this range above and below middle gray that is, to make things worse, different depending on the ISO used.

To understand the problem, consider this video:
Boris Hajdukovic demonstrates Filmic’s highlight compression abilities by using a reference image with ten greyscale luminance zones which he shifts via the exposure module and explains Filmic’s effect using the histogram (in order to compare it with Sigmoid). Works just great!
Now imagine he had deleted the outer zone fields of that image and just applied the white/black detection color pickers - no exposure shift! We’re dealing with precise digital data, but neither with Filmic’s default values nor with the “dynamic range scaling” slider could we safely pinpoint where in the spectrum these fields should be put.

Yes, unfortunately DT does not have a simple interface for just visualizing the dynamic range of your image. You find out indirectly, as in “those highlights just got compressed when I moved that slider”, or you can use the mouse hover in the tone equalizer to explore, but you cannot see the whole image at once.

Maybe a simple overlay that encodes luminance with false colors could help.
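A minimal version of such an overlay could look like this (a sketch; `grey` and the EV bounds are assumptions): per-pixel exposure relative to middle grey is normalised and mapped onto a cold-to-warm ramp, so deep shadows render blue, midtones green, and highlights red.

```python
import numpy as np

def false_colour_overlay(lum, grey=0.1845, ev_min=-8.0, ev_max=4.0):
    """Encode per-pixel exposure (EV relative to middle grey) as RGB
    false colours: shadows blue, midtones green, highlights red
    (a sketch, not a darktable feature)."""
    ev = np.log2(np.maximum(lum, 1e-10) / grey)
    t = np.clip((ev - ev_min) / (ev_max - ev_min), 0.0, 1.0)
    # simple blue -> green -> red ramp over the normalised EV position
    return np.stack([t, 1.0 - np.abs(2.0 * t - 1.0), 1.0 - t], axis=-1)
```

Rendered over the image, this would let you read the occupied EV range at a glance instead of probing pixel by pixel with the tone equalizer hover.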
