Is there a "technically sound" learning resource for darktable?

Others will chip in with more knowledge than me, but I believe these modules are the older display-referred kind, rather than part of the newer and preferred scene-referred workflow. That means they're more likely to cause artefacts, though they can still be used fine as long as they aren't pushed too far. If your computer can cope with the demands of the processor-intensive "diffuse or sharpen" module, it has good presets for various local contrast and dehaze settings. If it slows down your computer, you can try applying it last, and/or check whether darktable is utilising OpenCL in the processing settings.


Could you process an image and post a Play Raw with the JPEG and XMP sidecar file, so people can see what you're doing?

You don’t need a monitor with perfect color accuracy to edit photos. Just make a reasonable effort, and use the white border (Ctrl+B).

The problem with scopes, histograms, etc. is that

  1. They compress the information in a 2D image (a mapping \mathbb{R}^2 \to \mathbb{R}^3 from pixel position to colour) into a 1D plot. Inevitably, something is lost.
  2. They do not let you assess many things, e.g. anything that depends on colour context or local contrast.
  3. They are of course unusable for assessing your image from an artistic perspective.

I would go as far as turning all histograms and scopes off while learning darktable; they are a distraction.

Most tutorials I have seen use them to demonstrate what is going on, as a pedagogical tool. But I have not seen anyone purposefully editing the histogram instead of the image.
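Point 1 is easy to demonstrate numerically: two images with completely different spatial structure can produce identical histograms. A toy sketch (NumPy assumed, single grayscale channel for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth horizontal gradient: obvious spatial structure to the eye.
gradient = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))

# The exact same pixel values, randomly shuffled: pure noise.
shuffled = rng.permutation(gradient.ravel()).reshape(gradient.shape)

# Both images produce identical histograms, because a histogram
# discards all spatial information and keeps only the value distribution.
h1, _ = np.histogram(gradient, bins=64, range=(0, 1))
h2, _ = np.histogram(shuffled, bins=64, range=(0, 1))
print(np.array_equal(h1, h2))  # True
```

The scope tells you the two frames are "the same", while your eyes tell you one is a gradient and the other is static.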


I think with DT this is not a good approach. For high-dynamic-range images, for example, I think it is best to pick the area of the image that is the main focus (e.g. a human face) where you want good exposure. Set the exposure for that, and then use the tone-mapper modules to deal with the rest of the dynamic range.


It seems to work well for me. DT has more tools for brightening the darker zones (shadows) than recovering highlights.

I would say it is not abnormal; most of the time, raising the contrast is the only change I make from the defaults for my Canon R7 images. Sometimes the skew slider can be moved to put more contrast in the shadows or highlights, but it depends on the look you are chasing.


@Dmitry I would say that even blindly adding filmic or sigmoid is something to reconsider. I start every image with it off, and if the image doesn't need a tone mapper for dynamic-range management, I leave it off. I use the tone equalizer to manage what is necessary, and then, using local contrast, color balance rgb, and the diffuse or sharpen presets, I can often get an image with more natural-looking details, without having to chase things lost to the tone mapper's compression. So there are several approaches.

The key is to take some of the tools, play around with them on some colour sweeps, and see how the sliders affect contrast and colour. Then you can simply apply the tools to introduce the changes you need to visually shape your images.

There is no technical or fixed recipe to follow every time…the closest thing to that is likely the camera jpg.


This is an option, but note that you can make sigmoid almost linear too, from midtones to highlights (using skewness).

Yep, filmic can be set up similarly too. I just find that it's often not necessary for many of the photos I edit, but others might need it all the time or prefer that approach.

YouTube just suggested the perfect video to answer this question:


I think I’ve seen his videos pop up in my feed before, but since he’s talking about video I just moved on, assuming it wasn't applicable. But this video is a great look at this topic. At the end he suggests another video, about exposure, that is equally useful.

No, I think Haze Removal is scene-referred and Local Contrast is display-referred, but both are part of the scene-referred default workflow, and LC sits above Filmic/Sigmoid, so that’s OK.

Yes, Haze Removal is old, but my computer cannot handle “diffuse or sharpen”. I actually raised a bug several weeks ago, because D&S was roasting my 8 GB MacBook at 98 °C during export, without ever managing to finish the job.


Haze removal is great, I use it all the time. I tend to use D&S very little these days, it is computationally expensive and frequently overkill for my purposes.

It’s not an objective task

In all typical cases, the task of development is highly creative and subjective. There is no true reference picture of reality that can be achieved. Everybody's perception is unique to them; two people will look at the same scene and see different things.

When editing, you map scene data (say 14 bits) to a display (say 8 bits). In this very subjective process you basically must discard most (90%+) of the information that was captured, and shape what is left according to what you saw and what you think it should look like to other people. If that means you must nearly clip, no problem; clipping is The Enemy when you capture raw data, but in your final deliverable your entire creative task is to make it as saturated or light or dark as your eyes think it should be. You want to make the best use of the available tiny display space[0].
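To put numbers on the 14-bit to 8-bit squeeze, and to sketch the kind of S-shaped compression a tone mapper performs: the curve below is a generic log-logistic sigmoid for illustration only, not darktable's actual formula, and the `contrast` and `middle_grey` values are assumptions of mine.

```python
import numpy as np

def sigmoid_curve(x, contrast=2.0, middle_grey=0.1845):
    """Generic log-logistic tone curve mapping scene-linear [0, inf)
    into display [0, 1). Illustrative only; not darktable's formula."""
    t = (x / middle_grey) ** contrast
    return t / (1.0 + t)

# 14-bit capture holds 16384 levels; an 8-bit display holds 256.
print(256 / 16384)  # 0.015625 -> under 2% of the levels survive

scene = np.array([0.0, 0.1845, 1.0, 10.0])        # scene-linear exposures
display = np.round(sigmoid_curve(scene) * 255).astype(int)
print(display)  # black, ~middle grey, and two bright values compressed near 255
```

Note how a tenfold jump in scene brightness (1.0 to 10.0) gets squeezed into a handful of display levels near white: that is the highlight compression the rest of the thread is arguing about.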

Example

As an exaggerated example (but from real life), imagine a book set on pink paper; you are taking a photo without any specific brief to reproduce it faithfully, you just want it to look like you saw it. If you take a photo of it with a phone, it might come out nearly white, because the dumb AI thinks "oh, this looks like a book, so surely it's nearly grey and it's just the light that's pink", and as a "bonus" the shade of pink will vary from frame to frame depending on how the dice fall. As a pro shooter, you'll obviously never rely on auto processing; you go fully by the numbers as I described above, and you end up with pink pages.

But wait! If you were looking at that book in real life, would you really see pages that pink? Actually, your eyes would quickly adjust, because the dumb AI was not totally wrong: it looks like a book, so our brain does try to ignore the pink. Furthermore, when you look at the book, your mind might try to ignore some distracting things in the background (which you could take care of with framing and FoV, and additionally convey with some vignetting), etc.

Plus, like with music, where you will publish the result can also affect your editing choices. In what situation will people see it? Will it be Instagram or some social media? Then your shot will be competing for attention with random stuff while the user is on the toilet, and you can’t know whether it will appear on white or on black (dark mode). Is it going to be your own photo book? Then your shot will be seen among your other shots, so you can be strategic and for example set a dim moody baseline and then break it in select shots for greater impact.

It’s always a subjective game. There are no right or wrong answers.

Theory + reference > tutorial

Now, there may be some cases where you want to rely on numbers in addition to your eyes: for example, the clipping view, to see where you accidentally clipped in the raw (lost data), or where you clip in the deliverable. I think these come to you if you know theory, and theory is generally independent of software. In other words, you may not need a "darktable learning resource", just a theory resource[1] to tell you what to do, plus the darktable reference manual to tell you how to do it in darktable.
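The clipping check itself is numerically trivial; darktable's over/underexposure indicators do essentially this per channel, with configurable thresholds. A minimal sketch with made-up values, where 1.0 marks the sensor's clipping point:

```python
import numpy as np

# Hypothetical raw values, scene-linear, normalised so 1.0 is the
# sensor's clipping point (the numbers are invented for illustration).
raw = np.array([[0.20, 0.70, 1.00],
                [0.40, 0.95, 1.00]])

clipped = raw >= 1.0          # boolean mask of blown pixels
frac = clipped.mean()         # fraction of the frame that is clipped
print(clipped.sum(), "clipped pixels,", round(100 * frac, 1), "% of frame")
```

A real raw file would need this done per Bayer channel before demosaicing, which is why the software's built-in indicator is more trustworthy than anything hand-rolled.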

As long as you understand the fundamental subjectiveness of the task, knowing theory can help you achieve better results.

Exception

One notable exception is, e.g., when you are digitizing an art piece. Somebody did their creative thing, and now you want to convey the result as neutrally as you can. Then you are facing a challenge that is probably 80% technical “editing by numbers” and only 20% creative. You want the final result to never clip, etc. Note that your editing will begin before you shoot: you start with setting up light, calibration shots, dark frames, flat fields, colour cards.

If this is your task, you must know theory and how to apply it in whatever software you use.

Footnotes

[0] I don’t mean “make everybody’s eyes bleed”. If you are into music, think of it as a mastering challenge. You don’t want to obliterate all detail by compressing it into a wall of sound; but, you do want to strategically make it pop when you want it to pop, and where you want it to pop you want to go as far as you can.

[1] One great resource on theory is RawPedia. A lot of it is general subject area knowledge about light, colour spaces, calibration, etc., that is applicable to any well-behaved software.


Thanks. Happy to be corrected.

I use the contrast setting in Sigmoid. I might have got this quite wrong, it is only an impression, but even after doing most of my corrections in Color Balance RGB, I prefer Sigmoid’s contrast slider. It seems to give an overall pop to the picture, whereas CB-RGB’s contrast slider feels more like local contrast. Am I right, or is this nonsense?

Then I sometimes tweak the skew: more often than not I return it to its default.

I adjust Sigmoid’s target black to recover a little detail in, eg, black hair.

I don’t think so, it is a global mapping.


Nothing wrong with that. I’m pretty sure AP even recommended using Filmic’s contrast slider.

It is a broad-strokes global adjustment. That said, while I don’t remember the reasoning, CB RGB’s contrast slider was only really meant to be used with masks, not as a global adjustment.


Thanks gentlemen.

Oddly, I do use CB-RGB’s contrast slider when it is an instance controlling a masked area. Simply because it seems the obvious thing to do.

/aside :slight_smile: