A dt - basic - level 101 workflow - for version 3.5+

I will admit the scene-referred workflow has not been easy to adapt to, but over the last week I think I’ve got the hang of it. The images I have edited in the last day or two have been most satisfying to work on, with results that encourage me to go out and take better pictures, and that have prompted me to re-edit older raw images with this revised approach, with much better results.

The workflow described here is based 100% on the scene-referred approach, using filmic - no base curves or LUTs used here.

Recent modules need a proper read through the manual

A thorough read of the manual (or the relevant sections thereof) is unavoidable.

But how do you know what is relevant, when there are hidden features in the user interface which only the manual can tell you about? So with dt, one has to read the entire manual. There is no other way. And read it several times, until the new scene-referred ethos sinks in.

And as I have experienced recently, some conventions exist in dt which may not yet be captured in the manual; it was only after raising my “expectations” on the dt GitHub that certain conventions were revealed to me.

It takes a level of persistence and constant enquiry, and sometimes accepting or discovering the conventions yourself, because the manual cannot be assured to cover every single bit of dt that has been developed. It is written by humans who make a damn good effort, but we are human.

The darktable of a few years ago, where you could skip much of the manual and still use it, is gone. The versions from about the time filmic was introduced require a compulsory read of the manual, and preferably a read of the blog articles and YouTube videos related to these new modules, to reinforce the info in the manual.

One module per function type - an approach - a mindset

What I deduce from developments in dt over the last two years is an attempt, as much as possible, to avoid the overlap where multiple tools exist to fulfil the same purpose. Of course this is not always possible, but that broad trend of separating functions into different modules is something one may also adopt as a mindset when developing a workflow.

So when one looks at an image and thinks of a modification required, a specific module immediately comes to mind as the primary tool to use. Of course this approach is not set in stone, and nothing stops us from breaking the good practice.

dt version and settings

I’m using a 3.5 dev build on Windows - 3.5.0+2525~gd1f02a42e

Via preferences, I’m using the modern scene-referred workflow, which enables modules like color calibration and filmic.

Basic - Level 101 Workflow steps
  1. Before I make any changes, I turn off all other indicators and use the over-exposed indicator to check whether there are any such areas in the raw image. Then I turn the over-exposed indicator off again.

  2. Take a look at the histogram, to get an idea of how much leeway I have to edit the image. I’ve marked up an image of the histogram with three white lines that broadly represent:
    - on the left of the histogram: how much room I have to make the darkest parts darker, if I want that; going overboard in this direction will push parts of the image into underexposure.
    - on the right of the histogram: how much room I have to make things brighter, without pushing parts of the image into overexposure.
    - towards the top of the histogram: how much room I have to add saturation, or chroma, without making portions of the image oversaturated.

  3. Then I turn on the clipping indicator, which gives me further details of which areas of the image have issues like oversaturation after all the processing in dt, i.e. whether there are issues at the output. Until editing is complete, I turn on the clipping indicator occasionally, to see if there are areas of the image I need to pay attention to. (A rough code sketch of this headroom/clipping idea follows this list.)

  4. Steps 2 and 3 are only guides. We have creative licence to break the rules if the image, or what we have in mind, calls for it; e.g. there are valid reasons to leave portions of an image overexposed, or to oversaturate, or to crush parts into black, if that’s what the image needs.
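For those who like to see the idea in code, here is a minimal numpy sketch of the “headroom” check from steps 2 and 3. This has nothing to do with darktable’s internals; the thresholds and the `headroom_report` helper are made up for illustration.

```python
# A rough stand-in for reading the histogram: given a linear image
# normalised to [0, 1], estimate how much of it already sits near the
# clipping points at either end.
import numpy as np

def headroom_report(img, low=0.01, high=0.99):
    """img: float array in [0, 1]. Thresholds are purely illustrative."""
    near_black = (img < low).mean()   # fraction of values piled up near black
    near_white = (img > high).mean()  # fraction of values piled up near white
    print(f"{near_black:.1%} near black, {near_white:.1%} near white")
    # Lots of values at one end means little room to push exposure in
    # that direction before visible clipping appears.

headroom_report(np.random.rand(100, 100, 3) ** 2.2)  # demo on fake "image" data
```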

The next steps can be taken in any order, and typically do not form a one-way sequence, as one comes back to modules to make further minor adjustments based on the changes occurring in other modules. But the thought process is to go from broad to narrow refinements.

  1. Color calibration - to set what I would call the “white balance”, if needed (or, if one is more comfortable with this, turn off color calibration and use the white balance module, starting in as-shot mode, and make adjustments). Color calibration is what I use exclusively, but it does have a bit of a learning curve, and I am still learning how to use it better.

  2. Exposure - how bright, broadly speaking, should this image be? I see this as a global tool, across the whole image.

  3. Filmic rgb, to adjust how much is pushed into the bright and dark regions of the image, and how much contrast is needed. Filmic is one more level of precision after exposure has been applied.

Filmic helps to refine the darkest and lightest regions. Of course it does more than this; I’m trying to keep things simple. (A loose conceptual sketch of its white/black relative exposure bounds follows.)
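To make the white/black relative exposure idea a little more concrete, here is a loose conceptual sketch of the log encoding that sits behind filmic. This is not darktable’s actual filmic code (which adds an S-shaped curve and much more); the `log_encode` helper and its default values are just my illustration.

```python
# Conceptual only: scene values are expressed in EV relative to middle
# grey, and the white/black relative exposures set how many EV above and
# below grey get squeezed into the output range.
import numpy as np

def log_encode(scene, grey=0.18, white_ev=4.0, black_ev=-8.0):
    """Map linear scene-referred values to [0, 1] via a log ramp around grey."""
    ev = np.log2(np.maximum(scene, 1e-9) / grey)          # EV relative to grey
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)

# Narrowing the span (say white_ev=3, black_ev=-6) raises contrast;
# widening it compresses more dynamic range into the same output.
print(log_encode(np.array([0.01, 0.18, 1.0, 3.0])))
```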

  4. The next level of adjustment, for broad changes finer than filmic is suited for, would be color balance rgb, which enables both global and regional changes for highlights, mid-tones, and shadows - for chroma (colour), luminance, etc. Typically this replaces the old Shadows and Highlights module.

  5. If a more specific, broad-but-fine brightness adjustment is needed, then the tone equalizer module is a good place to go.

Then back to making small changes to all of the above as needed.

  6. Finally, some raw denoise and sharpen, using a magnified view as well as a fit-to-window view, to see the results at micro and full-image scale. Optional: for images intended for casual viewing or sharing on social media, one may skip this step, because the effort may be wasted. I’m picky, so I spend a minute or two here, so that I never have to come back to this step in the event that I wish to print at a later date.

Beyond Level 101

Those who wish to take things further can get into masks and multiple instances of modules, to fine-tune things even more, as their time and effort justify, and of course use any other modules they wish.

More editing freedom using Duplicates

One more thing, and this has nothing to do with the raw processing per se, but it addresses my human judgement and lets me make changes with a lot more flexibility, and experiment a lot more: after any significant changes, I create a duplicate and continue any edits on the duplicate, so that I can always go back to view the progress of my edits, see where I may have taken things too far, and recover from any excesses.

In a simple example, I may end up with something like this, with each duplicate being a copy of an earlier one.

Duplicate 0 - Basic Workflow with all steps above, except tone-eq and color-balance rgb, completed.
Duplicate 1 - Tone-eq changes and Colour balance are made here
Duplicate 2 - Refinements to module settings are made here + Crop + Sharpen.
Duplicate 3 - A refinement of Duplicate 2, where I try out certain options or refine further.
Duplicate 4 - An alternative refinement of Duplicate 2.

So at the end of the day, I can look at Duplicates 3 and 4, decide which is closer to what I want, and based on this create a Duplicate 5 from the one I prefer, to make any further edits.

While the dt history is a wonderful tool, and I do use it, I like having these “fallback” duplicates, which I can compare in the lighttable view to see if I’m improving the image or, as I sometimes shockingly discover, have gone too far. Because I have these safety nets, I can be far more creative in exploring options. It’s a bit difficult to use only the history to backtrack, because the history has no “branching”: it preserves edits, but you cannot go back to a previous edit that has been replaced by a new “branch” in your edits.

The duplicates help me preserve the progress of my edits much better than using only the history feature, and allow me to have multiple branches from one point in the edit, so I can explore alternatives, as outlined above. (A sketch of how duplicates live on disk follows.)
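A side note on why duplicates are so cheap: each one is just another sidecar file next to the raw. The sketch below shows the naming convention as I understand it (for IMG_0001.ARW, duplicates appear as IMG_0001_01.ARW.xmp, IMG_0001_02.ARW.xmp, and so on). The `duplicate_edit` helper is my own illustration, so verify against your own files, and note that, as far as I know, darktable only notices new sidecars on import or when told to look for updated xmp files.

```python
# Hypothetical helper: snapshot the current edit of a raw file as the
# next free duplicate slot by copying its .xmp sidecar.
from pathlib import Path
import shutil

def duplicate_edit(raw_path):
    raw = Path(raw_path)                    # e.g. IMG_0001.ARW
    src = raw.with_name(raw.name + ".xmp")  # IMG_0001.ARW.xmp
    for n in range(1, 100):                 # find the next free _NN slot
        dst = raw.with_name(f"{raw.stem}_{n:02d}{raw.suffix}.xmp")
        if not dst.exists():
            shutil.copy2(src, dst)
            return dst

# duplicate_edit("IMG_0001.ARW")
```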


Is this different from what is written in the first part of the manual?


You can also take snapshots at various stages of the history and then compare them with the current state of the pipe… one more tool for comparison.

I use duplicates much more than snapshots, and work with them habitually - enough to suggest they become a part of the workflow.

Where they add value I do use snapshots, but a lot less than duplicates.

Why I prefer duplicates to snapshots

Long read below.

  1. Snapshots are tied to the darkroom window size and magnification. If you change either of these, you have to go back and recapture all snapshots at an identical window size and magnification. With duplicates, the view magnification and window size apply equally to all duplicates, so the comparison is always apples to apples.

  2. If you have indicators turned on, they are included in the snapshot, so many a time you have to go back, find the point in history where the snapshot was taken, turn off the indicator, delete the snapshot, and retake it. Lots of work if you have a few snapshots.

  3. Snapshots cannot be compared with other snapshots side by side. Duplicates can, in the lighttable views.

  4. Snapshots are not retained if dt crashes or you exit dt; if you are in the middle of a long edit, there is no recalling the snapshots. Duplicates, however, are saved with all their history (if you do not compress it), at the cost of a tiny file on disc: the .xmp. A pretty small price to pay for fully recallable/editable checkpoints in your image development. And there is no way you can remember what snapshots you were comparing, or what the edit point of each snapshot was, in a complex long edit. So snapshots are impossible to recover as a comparison tool.

And duplicates are so great for when you are not sure how far to go with a decision.

So, not a problem: make a few, with different values for an edit - e.g. lifting shadows, or anything - and at leisure, with rested eyes, after taking a break from the screen or the next day, compare them in lighttable.

I tend to use duplicates the way most people use exposure bracketing in camera. Not sure about an edit choice? Bracket your decision in a few duplicates, with different values of the edit change, and compare in lighttable.

  5. With snapshots, when you have all the tools on the right, the history/duplicate/info tab on the left, and thumbnails viewable at the bottom, you are left with a smaller view in the middle to compare across. With duplicates, you can also use the lighttable and switch between them in full-screen view.

  6. Snapshots can be a nightmare: they give you a history number, but unless you have no other duplicates you are comparing with, and no other similar images, you can be left with the question - which duplicate does this snapshot number refer to? The same history number exists across duplicates, but the edits may be different. A right nightmare… I ran into this issue once, and it led me to use duplicates instead of snapshots for most comparisons.

@OK1 Can I tell you a secret?

Set DT to use filmic and colour calibration by default first.
Then, for editing, first adjust the exposure, then go into filmic’s “look” tab and add a lot of contrast. Then go into color balance rgb and use the “add basic colorfulness” preset.

And that’s it. You have your image. From then on you can mess around with shadows, highlights and denoising, but you don’t have to if everything looks fine.


I have actually started applying default filmic first, then adjusting the exposure (usually up), and then adjusting the dynamic range up or down to correct. Then the color balance preset that AP added - and you are right, add some contrast. I often just add 10-15% when warranted in CB and then slide the fulcrum until I hit the sweet spot… then it’s just the clean-up - maybe some local contrast with bilateral for details as well, if needed.


Yeah, that’s basically what I do too. It’s to get a nice starting point.
I think with this, DT can give you a starting point comparable to, and even better than, any other commercial software.

The issue that I still have with DT is manipulating the highlights and shadows. I still haven’t figured out how to do that properly. Tone eq is nice but I can’t really push it. Color balance rgb luminance works OK on some images, but often I get a desaturated, hazy result, like I’ve just put a white layer with some transparency over the image. I also often lose a lot of detail.

I can kinda fix it with local contrast and contrast eq, but it’s still hit-or-miss and pretty cumbersome.

Granted, I still need to figure out the color balance rgb workflow in depth, so for now I’m assuming that I just lack the skill.

I can get a great result with Lightroom though. But I presume Lightroom’s highlights, shadows, whites and blacks sliders do a lot more than just raise or lower the tone. I’d have to figure out what it does exactly and try to replicate it with DT. But it must be something similar to color balance rgb, in that it must use some sort of masking, though it probably adds some local contrast with the same mask. On some images where I have harsh shadows and highlights, I can just pull two sliders, one up and one down, and I get the perfect look. I can kinda get a similar result in DT, but with a lot more messing around, and never quite the same quality. And I’m pretty sure it might just be my lacking skills, but that’s still to be determined :slight_smile:

To give you an example, look at this image:

And I know this isn’t a great edit; I pushed it as far as I could. But it was so fast, and all the detail is still there (some washed-out leaves on the right, but an easy fix). Now, I’ve done a similar edit with DT 3.6, but I either lose the detail in the grass or the detail on the roof of the shack.

Another image would be this (I haven’t tried color balance rgb on this one yet, but I remember struggling very much with this image with tone eq, exposure, masking, etc.):

Thank you. An interesting approach. I will try this out at the next opportunity.

Another method is with the channel mixer (color calibration).
For instance, in the image you linked, DSC_2481, the shadows are all green. So go into the ‘brightness’ tab and set R 0, G 1, B 0, with ‘normalise’ checked. Now use a parametric mask so it only affects the shadows. I find 0, 0, 2, 9 on the ‘g’ slider is a good default, or 0, 0, 9, 18 if you want it to affect some mid-tones as well. This will lighten greens, and thus the shadows.

For a highlights parametric mask you can use 18, 59, 100, 100. Typically for highlights you will want to darken or lighten blue skies. For the former, use whatever combination of R & G sums to 1, with B = 0; for the latter, B = 1 and R & G = 0.

To make it easier, I have set up presets for highlights, mids and shadows, so each is neutral when I turn it on (no effect): all values in the brightness tab at 0.333, normalise ticked, CAT set to none (bypass), and gamut compression at 0. So the only thing my presets contain are the different parametric masks. Then all you have to do is adjust the sliders to taste.

However, in extreme cases like that image, it will need to be combined with the tone equaliser, where you just lift the hell out of the shadows and pull down the highlights. (A toy sketch of the green-channel trick follows.)
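If it helps to see the mechanics of the green-channel trick above, here is a toy numpy illustration: a trapezoid parametric mask built from the green channel using the suggested 0/0/2/9 breakpoints (read here as fractions of 1), driving a simple brightness lift. This is a crude stand-in for what color calibration actually computes, not darktable code; `trapezoid_mask` and `lift_green_shadows` are invented names.

```python
import numpy as np

def trapezoid_mask(g, a=0.00, b=0.00, c=0.02, d=0.09):
    """1 inside [b, c], linear ramps over [a, b] and [c, d], 0 outside."""
    up = np.clip((g - a) / max(b - a, 1e-9), 0.0, 1.0)
    down = np.clip((d - g) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(up, down)

def lift_green_shadows(img, gain=1.3):
    """img: float array (H, W, 3); lighten only where green is very low."""
    mask = trapezoid_mask(img[..., 1])                   # mask from green channel
    return img * (1.0 + (gain - 1.0) * mask[..., None])  # brightness lift inside mask
```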

You captured excellent ideas in your post, and I love the image of the cottage with the thatched roof. I spent a few minutes just looking at it. So in that respect it caught my eye, which is ultimately the purpose of a great image.

Suggestions for Highlights and Shadows (this may also apply to Mid-Tones)

I tend, as mentioned earlier, to think top-down, so for any change to be made, the big question is the scope of the required modification.

All of these “light”-related changes will, in one way or another, also contribute to contrast, at a larger or more limited scope - scope being a region of the image, such as shadows or highlights.

The other thing to add is that I try at first to make small changes to an image: small changes in different modules may give a more natural result than a big change in one module, because big changes on some sliders/controls will likely have undesired side effects.

  1. Global changes, if needed, across the entire image: I’ll look at the exposure module as the main governor for the dark points and highlights. Exposure determines what gets passed on to the other modules, and is first in the chain for adjusting “light”.

  2. Then filmic, using the default setting; I can adjust white relative exposure and/or black relative exposure to refine the extremes, which is also a way to modify the contrast. The alternative would be to use the contrast slider to achieve something similar.

  3. If a specific region such as shadows or highlights needs to be modified, I have found the new color balance rgb module to be effective if one adjusts the luminosity for the specific region, i.e. luminosity for shadows and luminosity for highlights. These two sliders are quite effective for broad changes aimed at just shadows or highlights, and of course there are “power” sliders for adjusting mid-tones.

  4. If a more specific adjustment is needed, then the tone equaliser is applied. But I must be honest: the tone equaliser has its own very specific workflow, and using it requires a lot of reading - a lot - and a lot of experimentation, to decide which kind of mask best suits your image. One has to “calibrate” the mask exposure compensation and mask contrast compensation first, before making any changes to the image. I.e. the tone equaliser is not a module where we just start adjusting sliders; there’s a bit of setup first, with every image, so that it will work better. I can imagine that some who use the tone equaliser are not aware of the importance of first adjusting and selecting the right mask.

But if it is well understood, and the tone equaliser workflow is adhered to, then any further refinements to contrast in specific areas can be achieved with it, for more detailed changes than color balance rgb can deliver.

  5. For even more detailed changes of contrast, the rgb curve module enables even more complex adjustments, because one can plot many points on it. But the finer and more precise the changes to contrast one wishes to make, the greater the risk of inadvertently destroying the image: rgb curve has almost no safety nets or air bags. With cautious use, it can be a lifesaver.

  6. Anything more will most likely veer into more advanced edits, using additional instances and/or masks to refine the scope of the change. Apologies - I cannot remember which image I used for the example below, so I cannot attach it, and making up examples takes time.

An example: on one image, I had tried all of the above to lift the shadows, but I was not getting the result I wanted. As a last resort, I added a second instance of the exposure module with a parametric mask by grey level, therefore affecting only the region I was interested in. And that did the job. Some feathering and mask blur in the parametric mask helped to make this change less abrupt and more natural. (A rough sketch of this idea in code follows.)
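Here is a rough numpy sketch of that last-resort move, conceptually rather than as darktable implements it: a second exposure boost applied only where a grey-level parametric mask selects the shadows, with a soft ramp standing in for feathering and mask blur. The `lift_shadows` helper and its thresholds are my own illustration.

```python
import numpy as np

def lift_shadows(img, ev=1.0, threshold=0.15, feather=0.10):
    """img: float array (H, W, 3), linear. Boost shadows by up to `ev` stops."""
    luma = img.mean(axis=-1)                                          # crude grey level
    mask = np.clip((threshold + feather - luma) / feather, 0.0, 1.0)  # soft shadow mask
    gain = 2.0 ** (ev * mask)                                         # full boost only in shadows
    return img * gain[..., None]
```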

From info I gleaned from another forum member, I’ve also set up three presets for color balance rgb which have parametric masks on highlights, mid-tones, and shadows. So occasionally I might call up three instances set to these presets, to give me even more control over the use of color balance rgb.

Tool choices - More Control OR More "Image Assistance" built into the Tool?

This debate over which approach is best will never end. It depends, and sometimes different images may profit from different tools.

When I want a “vintage” digital camera look - like digital cameras from sometime before 2010, kind of like the Canon 5D version 1 image - I just use Lightzone. It has far fewer controls than darktable, but I accept that the tool is doing a little bit more to contribute to the image than darktable does, because it has its own method of tone mapping, which lends itself to a different kind of look. Its editing tools are basic, so one may then import the image into darktable for further editing, having acquired the “look” in Lightzone. Not a recommended workflow for most images, because it adds time to the whole process and needs more organisation.

Over time, darktable becomes like a set of surgeon’s tools: we establish our own habits of which tool to use for what, and which alternatives to try or complement with if our first-choice tool is not as effective as anticipated.

What I like about darktable, compared to any other tool I have used, is that you are 100% in control, if that’s what you want. If we like decisions being taken for us, without the tools letting us know or explaining the “intelligence” and colour science behind the scenes, then Adobe tools, Capture One, Luminar and those kinds of editors are the way to go.

Comparing image editors fairly, and compensating in darktable

Unfortunately there is a paradox: increased dynamic range, at first glance, actually looks worse than reduced dynamic range.

The raw file is an example - it has plenty of dynamic range, but it looks a bit greyed out and muddy. So all raw processing apps actually reduce the dynamic range to make images look more appealing. Some do this a bit more than others, as part of their “look”.

It took me a while - many months, maybe a year - to understand this. Until recently I was comparing darktable and other tools, at a time when I was getting pretty frustrated with darktable because of the steeper learning curve of new modules like filmic and color calibration.

One observation was that tools like Capture One, and the OOC JPEG algorithms in my Sony camera (which I can also emulate accurately via the Sony Imaging Edge software on the computer), tend by default to add local contrast and reduce the dynamic range - and it’s almost impossible to disable this.

At first, compared to a typically processed image in darktable, the results of other tools may look more immediately appealing, but over time it became more apparent that these other tools actually “distort” the image more, without asking for your permission. They look OK, but not as “clean” and “transparent” as darktable - unless, of course, you “command” darktable to make similarly aggressive changes, which it is also quite capable of.

There is a certain “clarity” I find with darktable, which is especially good at preserving dynamic range, but the downside is that the result does not immediately look as pleasing when compared side by side, until one adds some local contrast and a bit more sharpening, or more global contrast, to more closely approximate the “look” of some of these other editors. This is what I call the Instagram look - instant appeal. But you may not want to print such an image on a large piece of paper, because enlarged it looks harsh and two-dimensional.

When one compares with other commercial raw processors, the purity of darktable’s processing definitely needs a full complement of its modules, like local contrast, to match up.

Without this, one can be convinced that the commercial apps achieve a better result, but the truth is that they are doing some extra, undisclosed “intelligence”, which we now have to add in darktable manually.

An example of this is discussed here

What about trying to add a blend mode to assist with lifting the shadows, maybe?