Two ways of using Filmic RGB

In post-processing, “exposure” means nothing. No post-processing can change the exposure. Exposure is fixed when the camera took the photograph.

Worse, labeling something “exposure” in PP is misleading. The actual exposure by the camera determines what data we have, and what we have lost due to highlight clipping or noise in the shadows. No amount of tweaking something called “exposure” in PP can get that data back.

There is some use for “zones” as described by Ansel Adams. But these are not “exposure stops” after the camera has exposed the film. And digital has so much more flexibility than photochemical emulsions that we shouldn’t get stuck in old paradigms.

2 Likes

Well, maybe…

Let’s remember there is a reference to Ansel Adams’s zone system, and there is that word “film” not-exactly-hiding in the title. What do you do with film negatives? You print them. What do you do with over-exposed negatives? You change the exposure of the print to compensate. Admittedly, you increase the exposure of the print to compensate for an over-exposed negative, but never mind.

In fact filmic is very “enlarger oriented”… you map the mid-grey (select exposure time) and you change the slope (select paper contrast).

Still, in post, “exposure” really means “pretend you could go back and change it”. It gets the idea across.

Thing is, the two situations have different implications. Increasing exposure at capture can push highlights into sensor saturation, where the data piles up at the saturation point. Increasing exposure in post, with a floating point data representation, lets the highlights grow past the value one wants to call “white”, preserving the opportunity to use that data to reconstruct some kind of definition below “white”. For shadows, increasing exposure at capture moves the light measurements out of the noise floor; increasing exposure in post just moves the noise…
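To make the difference concrete, here is a toy sketch (not darktable code, just NumPy with made-up scene values): the capture path clips at sensor saturation, while the floating-point post path keeps the data above “white”.

```python
import numpy as np

# Toy linear-light scene values; the last two regions are brighter than "white" (1.0)
# once pushed one stop.
scene = np.array([0.02, 0.18, 0.90, 1.60])

# Increasing exposure at capture: the sensor saturates, data piles up at 1.0.
captured = np.clip(scene * 2.0, 0.0, 1.0)

# Increasing exposure in post on float data: values grow past 1.0 but survive,
# leaving room to roll them back below "white" later.
pushed = scene * 2.0

print(captured)  # [0.04 0.36 1.   1.  ] -- the two highlight regions are now identical
print(pushed)    # [0.04 0.36 1.8  3.2 ] -- still distinguishable
```

The two highlight regions are irrecoverably merged in the clipped version, but remain distinct in the float version.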

When I read @MarcoNex’s post I was initially considering the former, when he was really talking about the latter. Graham, I know you get the difference, but overloading the term here opens the door to confusion…

About the term “exposure”: I thought that, in the context of post, it could be short for “exposure correction”. That would at least not pretend to change the exposure itself, only its result… Sorry, just my 2c.

4 Likes

Confession: I call it “Exposure Compensation” in rawproc… :smile:

2 Likes

How about “xxx Intensity”? Luminance is otherwise well defined.

I’ve struggled with finding a semantically-appropriate term without using more than three words for this. In my hack software, I still call it “Exposure Compensation”, which almost takes the whole width of the toolchain pane. If “stops” weren’t involved, I’d be calling it “Multiplication”, which is fundamentally what the operator does…

If you don’t like “exposure compensation”, why not “gain”?

4 Likes

!!!

Sometimes the best answer is so simple… Thanks!

I think I’ll make ‘gain’ an alias for ‘exposure’ in rawproc, so’s I don’t disturb anyone’s prior processing, if there’s anyone else using it…

Because this is the tongue of the dark side … pro video people … ugh! :crazy_face:

Yes, “gain” is good. Perhaps “brightness” or “lightness” for less technical users.

“Exposure compensation” or (worse) “exposure correction” can mislead users into thinking the camera exposure was wrong and needs correcting in post. In fact, the camera exposure might be perfect ETTR, causing neither clipping nor noise, and the only problem is that neutral processing results in an image that is subjectively too dark or too light.

2 Likes

Brightness and lightness are super confusing too, since multiple models use those terms.

True. Maybe “brighten” or “lighten”?

Those are the terms that I use for my own commands. In any case, as long as the tool is clear on what it does and honours the tradition it has been adapted from, it should be fine.

Brightness and lightness are already taken and have a different meaning.

Besides the actual process of keeping a shutter open, exposure means the relative luminance of a light emission, in log2 encoding. It has just as much meaning in post-processing as on camera. To add 1 EV on camera, you double the exposure time; in post-processing, you double the linear RGB code values.
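As a minimal sketch of that equivalence (the function name `push_ev` is made up for illustration): one stop is a doubling of the linear code values, and the log2 ratio recovers the EV change exactly.

```python
import numpy as np

# A linear, scene-referred RGB pixel (mid-grey at 0.18).
rgb = np.array([0.18, 0.18, 0.18])

def push_ev(rgb, ev):
    """Add `ev` stops of exposure: one stop doubles the linear code values."""
    return rgb * 2.0 ** ev

print(push_ev(rgb, 1.0))                 # [0.36 0.36 0.36] -- same as doubling shutter time
print(np.log2(push_ev(rgb, 1.0) / rgb))  # [1. 1. 1.] -- exactly +1 EV
```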

But pushing a slider on a computer screen doesn’t change the relative luminance of anything.

(Unless of course the computer is hooked up to physical lights or a camera, and changes the settings on those lights or camera. For example, darktable can control a tethered camera, and can control actual exposures made by that camera.)

So why bother pushing it, then?

It’s as if you said “paying for goods with a credit card is like not spending money at all”. No actual bills left your pocket, but your bank account sure depleted.

I think you are mistaking a symbolic effect connected to a real-life phenomenon for no effect at all. When I push the exposure slider in software, my screen becomes brighter. So it does change the luminance.

2 Likes

Ha, yes, good catch. Pushing the slider brightens the screen, increasing the relative luminance of that part of the screen, thus exposing our eyeballs to more light. So, yes, it increases an exposure.

Hmm.

1 Like

I think you are mistaken.

In camera, you expose to avoid highlight clipping, which usually leads to under-exposing, compared to what your lightmeter/auto-exposure wants you to do.

In software, the easiest way to recover midtones is to push the whole picture in the exposure module (thus making highlights look temporarily clipped in the display output, although the data is still there in the pipeline), then use filmic to recover the highlights with the shoulder part of its curve.

But the other way to use filmic is to push the highlights in the exposure module just to the verge of clipping, so at filmic time you know your theoretical standard grey is at 18% and the white is at +2.45 EV. Then you fix the midtones by decreasing the reference grey value.

While both methods are 100% mathematically equivalent (adding +1 EV in the exposure module is the same as dividing the grey reference by 2), the GUI might be easier to control with the first method.
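The equivalence is easy to check numerically, since filmic maps pixels by their log2 ratio to the grey reference. This is a toy sketch of just that ratio, not the actual darktable code:

```python
import math

x = 0.09     # a linear pixel value
grey = 0.18  # reference grey

# Method 1: +1 EV in the exposure module, grey reference unchanged.
ev1 = math.log2((x * 2.0) / grey)

# Method 2: pixel unchanged, grey reference halved in filmic.
ev2 = math.log2(x / (grey / 2.0))

print(ev1, ev2)  # both 0.0 -- the pixel lands on the same spot of the curve
```

Either way, the pixel sits the same number of stops from grey, so filmic’s curve treats it identically.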

But keep in mind that the first method may push the top end of the RGB data far above 100%, and since most of the UI controls in darktable stop at 100% (for parametric masking and such), some of your bandwidth will no longer be controllable (for example in levels and tone curve). darktable doesn’t clip at 100%, so the data will still be there, but you won’t have dials to bend it.

That’s why tone equalizer and colour balance let you control parameters pretty much between 0 and +infinity, and that’s why I don’t like levels and tone curves anymore.

4 Likes

Thanks for the (re-)explanation @anon41087856. I think I’ve been mixing up your two methods - pushing exposure to recover the mid-tones and then starting filmic with 18% grey. I understand now :grinning:.

1 Like