I think as a general matter, it’s far more important for software to be internally consistent than for it to be consistent with other software. And the latter can at times, or maybe even always, conflict with the former.
Yes, but exposure and filmic sit at different points of the pipeline. I understood that pushing exposure a lot and recovering highlights with filmic might have strange effects on the modules in between; is that still true?
No, yes, depends on what you do and with which modules… There is no escaping checking and understanding the pipeline at this point. In a truly scene-referred workflow, it shouldn’t matter. But because of input profile approximations and such (remember camera input profiles are rough approximations tweaked for skin/mid-tones), it might be a good idea to remap middle grey ASAP in the pipeline, and deal with white and black mapping at the last step before going display-referred (along with gamut mapping and colour space conversion).
The only drawback: if you push exposure by +3 EV, your RGB values might range from 0 to 800%, while you only get masking controls between 0 and 100%. I have yet to see this be a real limitation, but it could theoretically be an issue (although I personally don’t use luma masking in dt anymore, and hue/saturation controls should be fine).
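Since the thread keeps coming back to “exposure is just a multiplication”, here is a minimal Python sketch (not darktable code) of why a +3 EV push overflows a 0–100% mask range:

```python
def apply_exposure(value, ev):
    """Exposure compensation is a plain multiplication by 2**EV."""
    return value * 2.0 ** ev

# A scene-referred pixel at 100% pushed by +3 EV lands at 800%,
# outside a mask range limited to 0-100%:
print(f"{apply_exposure(1.0, 3.0):.0%}")  # 800%
```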
Thanks, that closes the loop. I think I completely understand at this point.
My focus IS on how the tool works. That’s why I asked the question in the first place. FWIW, I have a genuine desire to understand the underpinnings of the DT application, which is one reason why I am drawn to it. My other comments on the levels tools, which equally apply to tone curves and other legacy tools were only intended as helpful suggestions for you to consider or reject. I’m sorry my thoughts have been blown out of proportion by others.
I’ve been attempting to respond to a few of the replies here, but abandoned them as I can only see from the kind of responses that much of what I’m trying to say is being misunderstood, or as Dave Goldberg says, ‘blown out of proportion’.
I too am simply trying to understand how DT works, but without success.
Mica, chill out man. I’m not saying that DT is failing, I know it’s me that’s failing, and I’m trying to understand why. I’m not at the point of worrying about individual images; this is a general problem for me so far. I’m not criticising DT at all, but I’m finding it frustrating to get to grips with, and the inconsistent advice I’m reading isn’t helping.
Aurelien, you’re reading more into this than intended, and you’re misquoting me. For one, I didn’t mention my favourite software as a comparison; I’m referring to any other software at the moment. You seem to be missing the point that many are coming from that other software, so will inevitably be comparing to some extent. We can’t just undo 20 years of learning, even though you seem critical of that as a concept.
I’ve also been careful (I think) not to say GUI, but only UI, as I wanted to keep a distinct difference there. We have to start somewhere with understanding how the software works, and the UI is the first point in that process. Whether it looks pretty, or not, is not important.
If you’re saying that I’m never going to get how DT works from the UI, and I have to learn the science behind it to get anywhere, then that’s fine, I understand, and I know that DT probably isn’t going to be for me. It certainly seems that trying to get a basic workflow to process a large number of images quickly is not going to be possible.
It’s not absolutely necessary to know all the physics and maths behind any module you want to use. There is the documentation, tutorials and dozens of (partially excellent) videos which can help you get started.
But darktable has some extremely powerful features, and the more you know about what is going on behind the scenes, the more benefit you can get. It helps to have some basic understanding of the physics and maths inside the module you want to use; most of it is school knowledge, it’s not rocket science. For some newer modules you can watch Aurelien’s videos as an entry point.
I totally disagree. Watching (and understanding) some of Bruce Williams’ videos helps a lot in finding a workflow that matches your personal requirements. But what are your personal requirements?
- just doing some tweaks to the images you took with your digital camera ? The simplest case in my experience. My personal workflow works for more than 95% of the images coming from my digital camera.
- developing digital copies of diapositives (scanner or camera) ? More challenging…
- developing digital copies of negatives ? Way more challenging…
- developing digital copies of reproductions of historical photographs ? A real challenge…
Every specific application I mentioned above needs a different workflow. And here this comes full circle: to exploit the full power of darktable you have to look behind the scenes. You have to spend some time learning and understanding what darktable’s modules are doing. If you are not ready to do so, then darktable is possibly not the right choice for you.
All the resources you need to get started are already available; just use your search engine. And in this forum there are a lot of people ready to help. Just formulate the problem or issue clearly. But general criticism (or program bashing) is of no help to anyone.
Keep at it! It’ll come to you with experience
I had the same trouble just a couple of months ago, especially around the time DT 3.0 came out and the whole new filmic rgb & linear workflow came about. I “solved” that by relying on @aurelienpierre’s and @Bruce_Williams’ materials. There are others out there, but those two give (I think) the best info, and Aurelien’s articles explain things in great detail.
Problem is, it’s up to you to try them out and work things out. The recent tone equalizer video from Aurelien had a quote about it. I can’t quote it verbatim, but it went like: “with practice, and depending on the image you’re working with, certain modules like tone equalizer make sense or not. Tone equalizer, for one, isn’t a ‘universal’ module to be used everywhere.” That resonated with me and actually helped me realize that with the linear workflow I need FEWER modules to do a BETTER job.
It should be UX
That’s gonna scare some people. Aurelien’s videos are very in-depth, and many people think “2h video just to get an intro to a module?? I think I’ll try a 5-min Lightroom video from a person who shows how to slap their preset on an image (and tries to sell it to me thrice) and be done with it”. Honestly, if one combined watching Bruce’s videos as an “entry/explainer” and Aurelien’s as an “in-depth explainer”, then I think it wouldn’t scare people as much.
Bruce’s, Aurelien’s, @s7habo’s and others. Watching those really helps.
I fear we are piling misunderstandings on top of misunderstandings here. I said I’m ok with criticism, I’m ok with people not liking DT, and I sometimes work with the devs from RawTherapee, PhotoFlow, etc. on specific matters, so ultimately the goal is to have a tool for each user (but not a tool for every user).
I would rather say 20 years of learning avoidance, because that’s what most software today is about: do “amazing” stuff without having to bother with technique, provided you only do standard/basic/usual stuff. My major concern about this approach (which is fine for some people) is what happens when you do peculiar things that push the semi-auto software out of its comfort zone? Then you get no plan B, no backup, no clutch to switch to manual mode; you are screwed. And programming a manual path around software built around automation can be tricky, because, internally, making things easier in appearance usually increases the complexity of the code.
“Science” is a big word. It’s only general-culture-level science, like “light is made of photons” or “photons are described by their wavelength”, which doesn’t imply you need a deep understanding of anything scientific at all. You don’t even need to read an equation. I’m not sure why people get stuck on that so much (maybe bad school memories?). I sometimes write equations when they are simple, because I think it makes it easier to understand, for example, that an exposure compensation is a stupid multiplication by some factor, and so it gives a sense of how powerful those very simple operations are. But, as I already said, it’s no more complicated than what you do for basic home accounting.
As a matter of fact, processing a large number of pictures has never been so fast and reliable. I spend an average of 2 min per picture now, even less if I’m doing batch editing. Just follow my “darktable 3.0 for dummies” tutorials, stick to the 4 basic modules I gave, and you are done.
That comes exactly to the point. Today many people just want to hear: “click here”, “click there”, “use this slider”, “use that switch”. And the whole procedure may not last longer than a few seconds. There is a lot of software out there supporting this approach, and that is totally ok. But people following this approach should know that they are living with dangerously superficial knowledge. They will fail quickly if processing is not straightforward.
I think most darktable users want something else: a toolbox offering the tools to do ambitious tasks. They like to have a plan B, as you say, and possibly a plan C. This takes time to learn and comprehend, and to carry out experiments.
Ah hahaha, yeah. I recently watched a video from some actual pro photographer (or rather “guy making money off of selling his own photos”) whose title suggested it was a list of very cool tricks for speeding up processing in Lightroom. One of the tips: brightening a photo with the wrong sliders, maxing them out at 100%, and then adding several masks on top doing the same thing. Instead of, you know, raising the exposure. The fact that darktable allows me to enter values outside of the slider range made that one tip hilarious. Several other “tips” also took the approach of “use this slider, dunno why, but the effect’s ok”.
One shouldn’t be making generalizations about what “most” want. In my case, after learning why and how things work, my edits (especially group ones) are now very fast with far more predictable results, and that’s what I want: knowing why and how things work helps me choose the right tools and have plans. And yes, it does take time.
- And how do you expect it to happen? Do you need help? So help us help you and start from an objective point, like:
- “I have this image, but I don’t know where to start”, or
- “I’m trying to apply the basic linear workflow to this image by increasing exposure until I get the mid-tones correctly exposed, but when I apply filmic, the image looks horrible”, or
- Do you have any issues sharing some image and the sidecar file that represents your editing attempt? This would be a play raw thread, where others would share their own workflows and where you start learning the basic steps.
- If the answer to 2) is yes, for whatever reason, then, do you have any issues downloading any image file from the Play Raw section and then try to edit it to your taste?
- If the answer to 3) is yes… well, then I don’t see any other way of helping you (but this is just my opinion)
- if the answer to 3) is no, then, do you have any problem creating a thread asking for help in developing the image you downloaded, and sharing the sidecar file? This would be a thread where all the issues you have would be addressed, and you may be sure many people would join you in your efforts to learn.
Just for the record, when you were actively trying to get help on another thread a month ago, I pm’ed you and even put myself available to do a live session, but you refused it like this:
Thank you Gustavo, I appreciate your efforts.
However, I think I’m better understanding the kind of tool Darktable is, and for now it’s not looking like something I can work with for what I need/want from a photo editor. I am understanding more of what it is capable of, and I do get what it does, but it’s just so much more than I want at the moment.
Reading some of the threads here, it is clear that there is no quick and easy way to edit large quantities of images, which is really what I need right now.
I’m not actually new to darktable, I have been trying it out pretty much since it became available on the Mac platform. For some reason, the latest version has given me a step backwards, as in the past I have been able to get very near to the kind of images I wanted, but general stability meant I decided not to switch.
I might wait to see what happens with the development of v3, and try again if I see anything that looks like it might make things work for me.
Do you still feel darktable isn’t for your needs?
EDIT: I’m sorry for disclosing a message from a private thread, but I don’t think it exposes anything personal, and in my view it may help us understand you better.
So you’re free to write the same lengthy post over and over expressing your frustrations, but when I express mine I need to “chill out?”
And when others try to help they’re blowing it out of proportion?
My apologies, I hadn’t meant for it to get silly. I have been genuinely trying to get DT to work for me, perhaps I’ve gone about it the wrong way. It’ll not happen again.
I did too, but only recently found my solution. Just today I managed to edit 130 photos and export them with very satisfactory results! All it took was ~5 mins to edit a “representative” image, then copy the history to the others, adjust the outliers and odd-balls, and click export.
My start was Aurelien’s video with samples and the quote about non-universal modules (an “a-ha!” moment).
So as others suggested - why not share an example photo you’re having a bit of a problem with and see if/how others can help you? I’m pondering doing just that, since I’ve got fireshow photos to do, and that is a totally different beast.
EDIT: I just did that:
Here’s an argument for not locking Filmic at 18.45% – the camera doesn’t store mid-gray as 18.45% of sensor saturation in the raw file. I’ve just verified that my Nikon D7200 meters such that mid-gray yields 6.25% (4.0 stops below sensor saturation). So every D7200 photo would need +1.6 EV compensation in the exposure module and 300% range on parametric masks. Add in any extra compensation to deal with HDR situations or backlighting (I’ll often dial in -1.3 to -2.0), and you’re quickly in the 800-1200% range.
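The +1.6 EV and ~300% figures above follow from a one-line log calculation. A hedged Python sketch, using only the 6.25% and 18.45% values quoted in this post:

```python
import math

metered = 0.0625   # D7200 mid-gray, 4 stops below sensor saturation
target = 0.1845    # filmic's proposed locked mid-gray
ev = math.log2(target / metered)       # exposure push needed
mask_range = (target / metered) * 100  # parametric mask headroom in %
print(f"+{ev:.2f} EV, masks up to ~{mask_range:.0f}%")  # +1.56 EV, masks up to ~295%
```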
Also, how well does Tone Equalizer deal with luminances > 100%?
So, the raw green values in the gray patch are ~1024 (16383 * 0.0625 for 14-bit raw data)? That just doesn’t sound right; if we weren’t meeting friends in a bit, I’d be digging out the camera, tripod, and ColorChecker… I also recently found my old Kodak Color Dataguide, which I believe has a decent gray page…
Oh, by the way, welcome to the forum!
Slightly above 1024/16384, due to the black offset which needs to be subtracted out. No need for tripod or Color Checker, just shoot a sheet of white paper on auto-WB with no exposure compensation. Running dcraw -4 followed by pnmhistmap -lval 4096 -rval 5120 (it works on a 65535 scale) shows a green peak at right around 4600, and the D7200 has a black offset of 600.
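For anyone wanting to replicate the arithmetic, here is a small Python sketch. It assumes, as the post above seems to, that the peak and the black offset are on the same scale, and it takes 65535 as an approximate saturation point (the real clipping level varies per camera):

```python
import math

green_peak = 4600  # histogram peak for the white sheet (dcraw -4 output)
black = 600        # D7200 black offset, assumed on the same scale
white = 65535      # approximate saturation on the same scale

signal = green_peak - black
full_scale = white - black
print(f"{signal / full_scale:.1%} of saturation, "
      f"{math.log2(full_scale / signal):.1f} stops below")
# -> 6.2% of saturation, 4.0 stops below
```

Which lands right around the 6.25% / 4-stop figure claimed earlier in the thread.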
Building on @aurelienpierre’s thoughts of removing the mid-gray adjustment from filmic and fixing it at 18%: as an idea, would it make sense to have a picker tool in the Exposure module that would calculate the exposure correction so that the selected area matches 18% gray? My eyes cannot detect the correct correction in one go, and I end up jumping between Exposure and Filmic, tuning the exposure.
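A sketch of what such a picker could compute; `exposure_correction_to_grey` is a hypothetical helper for illustration, not an existing darktable function:

```python
import math

def exposure_correction_to_grey(picked_luminance, grey=0.18):
    """EV correction that maps the picked area's mean luminance to 18% grey."""
    return math.log2(grey / picked_luminance)

# A patch currently sitting at 9% needs exactly +1 EV:
print(exposure_correction_to_grey(0.09))  # 1.0
```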
I do that too and actually don’t have any issue with it; rather, it’s less unwieldy that way.
I’ve now had a chance to check D7200 mid-gray levels at various ISOs.