There’s nothing wrong with taking the author’s word for how things work under the hood. Like you said, he reads the documentation, and that gives him enough to start working with the tool.
Don’t forget that filmic and the whole scene-referred workflow are new tools as far as dt is concerned, and like every new tool, we’ll have to learn how to use them, just as we have to learn how to (best) use a new camera.
I can’t really understand that. When your father started with photography, he had to acquire a lot of expertise before he could get reasonable results. He had to learn the relationship between aperture, exposure time and ISO. That was necessary knowledge that could not simply be ignored in those days.
I don’t know why today he should expect some tool to do the job for him. Digital image processing also requires knowledge to be able to edit images adequately. Even back then there were simple cameras and Polaroids with which one could quickly take a few holiday photos without much effort. But if you wanted to take your photos more seriously, you needed to know how to do it.
Today it is no different. With your mobile phone you can quickly take photos, make the basic corrections with a simple app and that’s it. But if you want more, you need to know how to do it. In my opinion, darktable is not a simple app for easily getting quick results, but a tool whose primary purpose is to give you the possibility of getting high-quality results if you make the effort to acquire the necessary knowledge.
I think that that’s actually a common problem (not limited to dt/photography):
someone has learned to make images in a particular way, and changing those habits is hard.
Whether that is because learning new things becomes more difficult with age, or because it’s difficult to accept that the new methods initially give worse results than the old methods (for that person!), I don’t know.
Or another reason might be a lack of patience for studying a new tool (“I know how to take pictures, why should I learn that again?”).
That said, when you just open dt to start editing, it is rather opaque, more so than some other programs (I only switched to dt on my third try…). Then again, rawtherapee scared the heck out of me when I had a look at that a few months ago (just out of curiosity, not thinking of switching, so no effort to understand the interface, let alone the individual tools).
Yes, this is exactly what I was concerned about. After using the auto adjustment for black/white relative exposure I can still see the under/overexposure warnings in jpg, while there are no warnings in raw, even though I set the thresholds at 0% / 100%. For that reason I thought that the auto buttons used a different method.
Yes, I know that. I have my own styles and presets already. I absolutely love the scene-referred workflow. It has been an ongoing learning curve of months, but that’s no complaint - any piece of complex software takes time. I’m just pointing out that darktable’s complexity is a symptom of its innovation and progress. It neither stands still, nor completely discards the past.
“raw overexposure” shows clipped channels in the raw data, and this is independent of any operations by dt (except raw black/white point?). The ‘jpeg over/under exposure’ works on the image as displayed, i.e. after applying all the operations in dt’s pipeline. So it’s entirely possible to have a non-clipped raw and still get warnings for the jpeg.
In the case of exposure+filmic: in my limited experience, it’s quite normal to have rather large areas shown as over/under exposed after setting the exposure. This is (usually) corrected in filmic.
Otoh, if the image looks good, I tend to ignore the ‘jpeg over/under exposure’.
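To put some numbers on the exposure+filmic point (a toy sketch of the principle in Python, not darktable’s actual filmic math; the curve below is made up):

```python
import math

def push_exposure(value, ev):
    """Scene-referred exposure: multiply linear scene values by 2**ev."""
    return value * 2 ** ev

def toy_filmic(value, white_ev=4.0):
    """Toy log shoulder: map values up to white_ev EV above mid-grey into [0, 1]."""
    grey = 0.18
    ev_above_grey = math.log2(max(value, 1e-9) / grey)
    return min(1.0, max(0.0, 0.5 + 0.5 * ev_above_grey / white_ev))

bright_cloud = 0.8                          # linear scene value, not clipped in the raw
pushed = push_exposure(bright_cloud, 1.5)   # ~2.26 -> the display clipping warning fires
print(round(toy_filmic(pushed), 2))         # ~0.96 -> back under display white after the curve
```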
Do non-technical users really need to care about the maths?
If they are only introduced to the new, scene-referred development path, they don’t have to know the difference between base curve, Lab-based tone curve and filmic, not for 95% of cases, I think.
I do agree that some kind of ‘facade’ module could wrap exposure + filmic, with fewer sliders:
‘overall brightness’ (=> exposure)
‘blacks’ (=> exposure black + filmic black)
‘whites’ (=> filmic white)
If a more robust way to auto-tune the tone EQ mask could be developed (reliably stretching the mask histogram using a robust luminance estimator that works ‘OK’ in most cases), there could be two more sliders (‘lift shadows’, ‘lift highlights’). If message-passing between modules were available, this module would not need to duplicate anything; it could simply drive the others. If a user then wanted more control, they could use the individual modules instead. Or one could use the ‘facade’ module first, then switch to tweaking the individual modules to fine-tune settings; the ‘facade’ module could display a warning that the ‘real’ modules’ settings have been changed since leaving the ‘facade’, and any new changes made there would overwrite those in the real modules.
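Purely as a sketch of the idea (nothing here exists in darktable; all parameter names below are made up), the ‘facade’ mapping could be as simple as:

```python
def facade_to_modules(brightness_ev, blacks_ev, whites_ev):
    """Hypothetical mapping from three facade sliders onto exposure + filmic settings."""
    return {
        "exposure": {
            "exposure_ev": brightness_ev,      # 'overall brightness'
            "black_level": 0.1 * blacks_ev,    # small share of 'blacks' (arbitrary split)
        },
        "filmic": {
            "black_relative_ev": blacks_ev,    # the rest of 'blacks'
            "white_relative_ev": whites_ev,    # 'whites'
        },
    }

# Example: +0.8 EV brighter, blacks at -7.5 EV, whites at +4.2 EV
print(facade_to_modules(0.8, -7.5, 4.2))
```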
That’s weird since the scene-referred way is much closer to analog darkroom in the way it behaves and with the concepts it uses (EV zones everywhere, exposure adjustments…). Usually, people with old school experience get it faster than the digital kids.
But then… scene-referred is a change over something people already did not understand. The legacy display-referred workflow is full of assumptions that nobody is aware of (but that got hard-coded in software). So, changing something that people never really got (conceptually) is like trying to learn a new language when you don’t know the difference between a verb and a noun in your native one. You need some grammar in your own language to be able to learn a new one (unless you learn them as a young kid).
50% Lab lightness is 18% in linear RGB or XYZ, so simply converting to Lab applies the correct cube root on grey.
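A quick numeric check of that 18% ↔ 50% relationship, using the standard CIE Lab lightness formula (nothing darktable-specific here):

```python
def lab_lightness(Y):
    """CIE L* from relative luminance Y (white = 1.0)."""
    delta = 6 / 29
    f = Y ** (1 / 3) if Y > delta ** 3 else Y / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16

print(round(lab_lightness(0.18), 1))  # ~49.5 -> 18% linear grey lands at ~50% Lab lightness
print(round(lab_lightness(1.00), 1))  # 100.0 -> diffuse white
```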
Your overexposure warning is crap. A JPEG is saved with anything above 100% clipped. Over-exposure warnings are set such that they show you everything above 98% or 99%… But the 98-100% range is still valid, and there is nothing wrong with having clouds or bulbs ending there. A proper over-exposure warning would display anything above 100%, because that is not valid. But in a JPEG file, there is nothing above 100%, because there are no 256 or 257 values in 8 bits, so there is no overexposure diagnostic to be made there. People should really relax about over-exposure alerts. Trust your eyes. There is no software replacement for the eyes of a visual artist.
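To illustrate with a toy example (my own sketch, not darktable’s warning code): with a 98% threshold, perfectly valid near-white pixels get flagged, while an 8-bit file cannot even encode anything above 100%.

```python
import numpy as np

# Toy 8-bit "JPEG" values: a bright but valid cloud around 252-254, plus pure white.
pixels = np.array([120, 200, 252, 253, 254, 255], dtype=np.uint8)
norm = pixels / 255.0                 # normalize so that 255 == 100%

print(int(np.sum(norm > 0.98)))       # 4 pixels flagged at a 98% threshold, all of them valid
print(int(np.sum(norm > 1.00)))       # 0 -- an 8-bit file simply cannot exceed 100%
```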
The difference between clipping in jpg and raw is clear to me.
Still, my doubts are about setting the black/white relative exposure automatically. My expectation was that if the auto mode aligns them with the darkest/brightest points of the image, then I should have no under/overexposure indication in the jpg.
Looking at some other exchanges here, it seems to me that I do not quite understand what the jpg under/overexposure indicator is doing.
Yes, that is very true. That’s exactly why Filmic is such an important step forward (and thank you for that). I’m just talking about how to present this feature to users who are struggling to understand it conceptually and how much they actually need to understand it. Filmic has an amazing amount of thoughtfulness behind it that makes it seem almost like magic. Most of that will be beyond most users to ever fully understand but they can still appreciate what it does for them. IMHO such users could benefit from a UX that focuses more on the basics and clearly separates the more advanced stuff into separate modules.
To be clear, my father is quite hopeless when it comes to handling anything to do with computers. So, an advanced post processing workflow is a steep learning curve for him regardless.
If you want to explain filmic to your analog dad, here is what I propose:
filmic applies the sensitometry curve of a virtual film we create in software, on top of digital data,
the scene white exposure is the Dmin (but in log2 instead of log10 – slightly different unit, same meaning),
the scene black exposure is the Dmax (but in log2 instead of log10 – slightly different unit, same meaning; see the conversion sketch after this list),
the look contrast is the gamma of the film,
the look latitude is… well the latitude of the film,
everything in the display tab is linked to ICC displays, so no need to touch that if you have an ICC profile,
the scene-referred workflow is like having a virtual camera in your computer, in which you can redo the exposure (in exposure module) and build your custom-made film emulsion from scratch.
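On the log2 vs log10 remark in the list above: both measure the same light ratio, just in different units, so converting a film density figure into EV is only a constant factor (my own illustration, not something filmic computes):

```python
import math

def density_to_ev(density):
    """Optical density uses log10 of a ratio; EV uses log2, so EV = D / log10(2) ~ 3.32 * D."""
    return density / math.log10(2)

print(round(density_to_ev(1.0), 2))  # 3.32 EV per unit of density
print(round(density_to_ev(3.0), 2))  # 9.97 -> a density range of 3.0 is roughly 10 EV
```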
When I saw that the darktable 3.0 changelog listed user-defined pipeline order I was thrilled, because that is one of the basic omissions in most raw processing software that eventually led me to write my own. So it is interesting that a group of folks are now wrestling with the implications of that…
I don’t think there’s anything wrong with ‘recipes’, that is, defined pipeline orders upon which folks just rely. And I don’t think there’s anything wrong with keeping and using old orders, as long as you’re satisfied with the result. Indeed, for non-technical folks, comparing the results of various pipelines will be the insight they need to viscerally appreciate the benefits of something called a “scene-referred” pipeline ordering. Being able to do just that in my hack software has provided absolutely the most instructive experiences in my transition from film to digital.
Maybe darktable needs some sort of construct to name and store various pipelines, and to let the user select whichever one as a default when starting anew on an image. Then one can readily identify a result with a named set of operations in a certain order, and be able to talk about that order in comparative terms that aren’t too confusing to others. Just a thought…
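To make the idea concrete, here is roughly the shape of the construct I have in mind (purely hypothetical; none of this exists in darktable and the names are made up):

```python
from dataclasses import dataclass, field

@dataclass
class NamedPipeline:
    """A named, ordered list of modules that could be selected as a default for new edits."""
    name: str
    module_order: list[str] = field(default_factory=list)

# Two pipelines one could store, pick as a default, and compare by name.
scene_referred = NamedPipeline(
    "scene-referred v3.0",
    ["exposure", "tone equalizer", "filmic rgb", "color balance"],
)
legacy = NamedPipeline(
    "legacy display-referred",
    ["exposure", "base curve", "shadows and highlights", "tone curve"],
)

print(scene_referred.name, "->", " > ".join(scene_referred.module_order))
```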