Where is white point in the scene referenced workflow?

That's bullshit - sorry for sounding rude. If you're able to get how to use filmic, everyone else is able to as well, because the basic workflow is well documented. You don't really need a deep understanding of the math behind it - you just need to get that the filmic sliders aren't the white/black sliders in Lightroom's GUI. And that's no rocket science.


Or let me try a different way and describe it as simply as I can.

You have captured an infinite amount of data. Infinite.
With exposure you set where the mid point of the data you want to use sits.

Then the filmic black and white sliders determine how much signal below or above that mid point you actually want to use. And the signal you choose will be squeezed into the space available for it. This will be different if you are mapping a file for sRGB SDR, or if you are mapping for HDR1000, or for… You still choose the amount of signal you want to use, but the amount of squeezing filmic has to do is determined by your output format. This is what the ‘display’ tab in filmic is for. If you ever find yourself ‘mastering’ your file for HDR displays, these are the values to change.
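If it helps, the exposure/black/white interplay can be sketched in a few lines of plain Python. This is my own simplification for illustration only - the grey anchor, EV defaults, and clamping are assumptions, not darktable's actual filmic code:

```python
import math

def log_shaper(x, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Map a scene-referred linear value x into [0, 1].

    grey is where the exposure module anchored your mid point;
    black_ev / white_ev play the role of filmic's relative-exposure
    sliders: how many EV below/above mid grey you choose to keep.
    (Toy sketch, not the real filmic implementation.)
    """
    ev = math.log2(max(x, 1e-9) / grey)          # signal in EV relative to grey
    t = (ev - black_ev) / (white_ev - black_ev)  # squeeze chosen range into [0, 1]
    return min(max(t, 0.0), 1.0)

print(log_shaper(0.18))           # mid grey lands at 8/12 of the range
print(log_shaper(0.18 * 2 ** 4))  # the chosen white point lands at 1.0
```

Changing the output target (the ‘display’ tab) would change what that [0, 1] range is mapped onto, not which signal you selected.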

Anything before filmic in the pipeline will translate well; everything after filmic in the pipeline needs to be re-evaluated.

Anyway, filmic takes the data you defined and squeezes it into the available space. With a curve. And that curve is not easily adjusted.

So if filmic crushes your blacks, you can mess with the black slider, but that alters the data you selected. What you really want to do is use the tone equalizer before filmic to make those crushed shadows brighter.

The same goes for the highlights. If you select a lot of dynamic range, filmic has to cram a lot of data into the available space. This will make it look flat. The data is still there, it is just so squeezed together that you don't see the differences in the clouds anymore. Reduce the brightness of certain highlights to get them to be more visible (tone equalizer), or start messing with local contrast to get certain bits to stand out more.
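To see that flatness numerically, here is a toy sketch in plain Python. The log mapping is my own simplification for illustration, not filmic's actual spline:

```python
import math

def mapped(x, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Position of x (in EV relative to middle grey) within the selected
    [black_ev, white_ev] range, as a 0..1 fraction of the display range.
    Toy assumption, not darktable's real filmic curve."""
    ev = math.log2(x / grey)
    return (ev - black_ev) / (white_ev - black_ev)

# Two cloud tones 1/3 EV apart, sitting around +3 EV above mid grey:
a, b = 0.18 * 2 ** 3.0, 0.18 * 2 ** (3.0 + 1 / 3)

# Select 10 EV of scene range vs 16 EV:
narrow = mapped(b, black_ev=-6, white_ev=4) - mapped(a, black_ev=-6, white_ev=4)
wide = mapped(b, black_ev=-10, white_ev=6) - mapped(a, black_ev=-10, white_ev=6)

# wide < narrow: the same 1/3 EV of cloud detail occupies less of the
# display range when more total range is squeezed in - hence the flat look.
```

That is why brightening those highlights with the tone equalizer (before filmic) helps: it moves the detail into a part of the range where it gets more room.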

So filmic is NOT a highlight recovery tool, or a shadow booster. It is an HDR tool. It maps an infinite amount of data to a certain range of data to be displayed. That is also why you must stop thinking about the Lightroom whites and highlights sliders. Different beasts (and I'm not calling one worse than the other).

I think the highlight slider of Lightroom / ACR does a lot of things at the same time. It doesn't ‘recover’ anything; it adjusts a curve and/or white point while increasing local contrast in the highlight area that is left. That makes it a magical slider in some cases, but in other cases I find it does stuff I don't want, which I then can't correct or undo.

Sorry, that is a poor argument defending a workflow that is IMHO conceptually flawed. The filmic module is just the "rescue operation" to make the mess that the scene-referred modules have created beforehand more palatable. It does too much, and in an opaque and unpredictable way.

The whole scene-referred workflow may preserve the captured data, but many of the applied edits get canceled out by the convoluted mess that is called the filmic module.

Black and white point are just one of the problem areas. But what happens to the out-of-gamut colors that our cameras captured? You carry those colors around until you convert to a well-defined, bounded colorspace. If you sharpen in linear space you'll end up creating even more out-of-gamut pixels, some of which will not even be in the camera's capture range; the same applies to any other saturation or contrast changes the user applies before making the step to the display-referred part of the edit. That means that the transformation from linear unbounded space to a regular color space will be worse, no matter whether you use a perceptual or an absolute colorimetric transformation.
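As a concrete illustration of that gamut concern, a toy single-pixel unsharp mask in unbounded linear space can produce values no bounded colorspace can hold. The numbers and the `amount` parameter are invented for the example:

```python
def unsharp(center, blurred, amount=2.0):
    """Toy unsharp mask on a single pixel: push the value away from its
    local (blurred) neighbourhood.  Illustration only, not any
    particular module's sharpening algorithm."""
    return center + amount * (center - blurred)

# A dark edge pixel next to a bright neighbourhood:
v = unsharp(0.02, 0.30)   # 0.02 + 2 * (0.02 - 0.30) = -0.54

# The result is negative: no longer a physically meaningful linear
# intensity, and outside any RGB gamut.  Sharpening, saturation and
# contrast moves in unbounded linear space can all create such values,
# which the later display transform then has to deal with somehow.
```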

IMHO darktable needs to drop the scene-referred mess; it is solving the wrong problem (or the right problem with the wrong means). Nobody editing cares about data they can't visualize - your eyes can't even perceive some of that captured data, even if there were displays capable of showing those colors. That is why editing - which is always a visually bound operation - can't be performed at that stage, as there is no well-defined way to convey the results. Or can you tell me (in the form of mathematically correct transformations) how a sharpening of out-of-gamut areas will be transformed? You won't be able to do so (not even if I were to specify that I want a perceptual transformation to the display colorspace).


You're right, you have to turn down the sun's intensity, for example, if there's much more dynamic range in the field than your target media can deal with. Then there's no demand for a scene-referred workflow.
But since nature is as it is, you can ignore this or deal with it. darktable provides the toolchain for this - if you don't want to use it, that's not darktable's problem, it's just yours.
If the tools don't match, simply use others. A goldsmith also uses different pliers than a blacksmith.


That sounds as if you activate/use the different modules in pipeline order. But the order in which you edit the image doesn't have to reflect the pipeline order! I usually start with exposure and white balance (WB + color balance), then filmic. That usually gives me a decent starting point for other edits. So I see what the final effect of each edit is: no surprises, no unexpected “cancellations”.


If that were the case it would make things even worse (and would explain why editing with that workflow is completely impossible for me, as I never do HDR, I don't care about HDR, and I meticulously avoid the need to ever use HDR).

That's basically the display-referred workflow then - but with problems down the road, as many of the modules you can use after filmic are either deprecated or rely on unbounded data, which the data no longer is after applying filmic. So that's a dead horse being flogged…

That's the thing. You do. It's a digital camera. It is HDR. You are always managing its dynamic range, which is pretty much always much higher than your output target (things might change soon, but for now images on screen and in print have lower dynamic range than what you shoot).

You think of HDR as merging multiple shots… or the grungy, extreme-high-local-contrast look. Neither has anything to do with the definition of HDR.
You are basically always mapping a high range into a low range.


Meanwhile 95% of DT users are super happy. Blender has had it for a while, RawTherapee now has something like it. Rawdocter has something like it. Filmulator is a tool built around something like it.

Clearly more people seem to see a need for it. So maybe think about what your problem with it is. Or at least rephrase it so you don't claim that your problem is one the whole world agrees with.

I'm not offended by the language, but I think your comment is unnecessary and uncalled for. As I've said in my previous comments, I understand the scene-referred workflow and don't have a problem with it. But spend any time on social media platforms where darktable is discussed and you will find many users struggling with filmic, and I can understand why. Even though I now have a good grip on it and understand the theory behind it, does that mean I should just be a cheerleader for it? Trying to understand where these users are coming from and engaging in discussion on this forum is what I thought pixls.us was for. I think filmic is good, but not perfect, and maybe it can be improved. Maybe not, but this is why I joined the discussion.


I'm familiar with the RawTherapee, Blender, dt and PhotoFlow implementations (in that order). I'm not very well read on the technicalities of the implementations, but it does strike me that dt stands out in its implementation?

In Blender it's been a simple “look” filter with very little control. In RT the log aspect has been separated out from the curve aspect, making the results more predictable and the module a clean DR-management module. PhotoFlow gives, if I remember correctly, more control over the curve, so that you can place contrast where you need it.


You assume wrongly. The relevant part of my scene usually has 5-8 EV of dynamic range - which the raw histogram will show as well! I will ETTR to have as little photon shot noise as possible. I don't need to tone map, I don't need to compress dynamic range; more often - as was common with Fuji Velvia 50 - I am stretching the dynamic range of the scene to the display dynamic range to increase saturation and vibrance. I also don't care to use my camera as a technical colorimeter; the results just need to be visually pleasing!

The filmic module is a major hindrance when editing; the transition from a scene-referred to a display-referred mode IMHO must happen as early as possible to have any sort of reproducibility and avoid problems with color space limits. The filmic module is ill suited because it tries to do too much based on false assumptions like yours!

Sharpening in scene-referred mode must account for the gamma curve applied later; it must account for contrast stretching or compression applied later - else you will get either not enough sharpening or too many artefacts. That's probably the reason why the sharpening module creates such bad results for me, and I have to go back and forth between sharpening and filmic to find a balance between the two - never finding a result I am pleased with.


Wrong assumption…

It's a scene-referred workflow, in that the modules I use all work in scene-referred linear space and are applied before filmic (exception: local contrast). The order in which modules are adjusted has nothing to do with the order in which they are applied in the pipeline to generate the final image.
And I explicitly do NOT use any of the deprecated modules.

To repeat: the order in which the user activates/adjusts a module has nothing to do with the order in which modules are applied to generate the final image.

Which is why you do it at the end of your workflow, so you see the result after filmic is applied while adjusting the amount of sharpening. Which means that the contrast stretching and gamma curves are already taken into account. You might have to leave a little bit of headroom in filmic before sharpening, but that’s easy enough to estimate after a few images.

This is good advice, but I would want to add a couple of things. First, filmic isn't always necessary, especially for low dynamic range images. Filmic can really compress highlights when you don't necessarily want more compression. I find this a lot with clouds, where subtle contrast can be lost with filmic turned on. Secondly, the local contrast module comes after filmic in the pixelpipe and will often push your white point into clipping after you have previously set it, so you will often need to go back and adjust the white point again.

I avoid the readjustment by turning on local contrast right away and then doing my filmic adjustment.

The local contrast module has sliders to control this :stuck_out_tongue:.
Or like others say, edit into it.

Personally I find myself using the bilateral mode more if all I want is some punch, or to get details to be more visible in a certain range. Up the details but keep contrast (really) low to get an effect like the ACR clarity slider. Use a mask to have it only work on the highlights, if that is what you want. Also, overdoing the effect and then tweaking the opacity in the blend settings can make it easy to apply it in a more subtle manner.

But… I'll confess that it sometimes feels a bit like saying ‘I will fix this in post’. I set filmic parameters in a way I'm not happy with yet, but I know (ahem, hope) that the next step will bring it to where I like it.

An edit workflow that works more in steps like ‘see a problem, fix it… if a change does something you don't like, don't make the change’ is easier to grasp or come to terms with.

Doesn't make the method wrong, and it doesn't deserve the things that have been said in this thread, but it does mean that it requires a way of thinking, or editing, or workflow that can be quite different from what people expect. And people can say that they don't understand it, or don't want to change their habits.

This is all good! But don't then say that the tool is flawed. Say it's not for you or that you can't be bothered. It yields more respect that way.

I stand by my point that you are using a device that yields a different range of data to what you want as output, so you need to map it.

But… if you do not capture a large range of data (or are OK with losing a lot of it), like your 5-8 EV, then it should make things easier, since you have less or almost no mapping to do.

But then filmic will do nothing more than an S-curve, and if you set filmic's contrast to 1 it should be almost a no-op. That's not causing issues, right?
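A toy curve makes the ‘contrast = 1 is almost a no-op’ point concrete. This is a made-up linear-through-the-pivot curve, not filmic's actual spline:

```python
def toy_contrast_curve(x, contrast=1.0, pivot=0.5):
    """Toy tone curve: a line of slope 'contrast' through the pivot,
    clamped to [0, 1].  Invented for illustration - only meant to show
    that with contrast = 1 such a curve degenerates to a no-op."""
    y = pivot + contrast * (x - pivot)
    return min(max(y, 0.0), 1.0)

print(toy_contrast_curve(0.3, contrast=1.0))  # unchanged: 0.3
print(toy_contrast_curve(0.9, contrast=2.0))  # pushed into clipping: 1.0
```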

Most people are annoyed by the flatness it produces, or think it should also enhance visibility of details in the selected range.
But in your case you should have no issue with it? And you can even leave it off in your case!

You don't have to use it. Use exposure and things like color balance (which can also work on shadows/mids/highs) and tone equalizer to use your sensor output ‘as is’ with some tweaks.

If you really shoot a lower range, don’t use a tool to map high range into low range :). Set a black point and an exposure so that nothing clips (that you don’t want to clip) and use tone equalizer to change parts of your image you want to be brighter or darker.
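That ‘black point plus exposure, no mapping’ approach for a low-DR scene is just a linear rescale, sketched here in plain Python (the numbers are invented for the example):

```python
def simple_remap(x, black, white):
    """Linear 'set black point and exposure' remap: scale so that
    [black, white] fills [0, 1].  No tone-mapping curve involved."""
    return (x - black) / (white - black)

# A roughly 5 EV scene whose useful data sits between 0.01 and 0.32:
lo, hi = 0.01, 0.32
print(simple_remap(lo, lo, hi))   # 0.0 -> display black
print(simple_remap(hi, lo, hi))   # 1.0 -> display white
```

Anything you then want brighter or darker locally is a tone equalizer move, not a range-mapping one.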

About your sharpening comment: maybe you have a point there. Maybe not; I think my workflow is too different for me to make a valid remark.
But:

  1. output sharpening should be done on the output, I even use a different tool than darktable for it.

  2. input sharpening (raw sharpening) is done very early in the pipeline in basically every program I know (including ACR). So it's working on scene-referred data. And it seems just fine.

My files are always already denoised and sharpened before I load them into darktable, so I can't speak to what darktable does or doesn't do there. But basically I sharpen in scene-referred space (although I call it more ‘lens-correcting an unsharp lens’ :)) and sharpen according to the output after darktable (and after all resizing and color space conversions are done; I sharpen differently for web, full-screen viewing, printing, etc.).

From reading your tone and comments in this thread, I still get the feeling you had different expectations for filmic as a module, or hoped it did something different, or just don't need it (and maybe don't understand it 100%, but there are a lot of us out there like that). So don't use it, but don't go around saying the tool is a mess.

If I don't like a particular wine, I don't go around arguing that the wine is wrong and the farmer has no clue what he is doing… I go drink another wine, and maybe share the opinion that I didn't like it.

I actually do this too because I almost always want some local contrast added. Or I leave some extra room when setting filmic because I know local contrast will push my whites and blacks further. But these techniques are almost never mentioned in guides or tutorials, so I imagine it’s the kind of thing that will trip beginners up at first until they find out for themselves.

Was this directed at me? I’m a happy user of darktable so I hope my comments aren’t being taken as big criticism of the software.

No, it wasn't; it was aimed at the general thread (and other threads where people say they don't like something in a way that sounds like they are stating facts the whole world should agree with).

No problem! Your comment was a reply to one of mine, so I wasn’t sure. The subject of filmic and scene-referred does tend to stir up some strong opinions.

Watching Boris edit, he almost always used a parametric mask with local contrast to direct it to the midtones, so as not to distort highlights or shadows… I recently found nice control using color balance rgb: if I auto-pick white and gray in the master tab and slide the midtone mask slider back and forth, I can really direct contrast and saturation, especially in the highlights.