darktable 3.0

@Farrukh I see… I can’t verify ATM but if that is true then this is kind of confusing:

[image]

Would be an honor.
Glad I can contribute to this outstanding project :slightly_smiling_face:

I see: that is to lure us into reading the manual.

OK, did a git fetch and tried to build. Outcome:

~/work/darktable/src/common/dlopencl.h:27:10: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
      ^~~~~~~~~

Whoa! darktable fails to build on Linux?!

Kindly add the steps required to prepare the build environment to the build instructions.

Executing in a terminal

# apt-get install ocl-icd-dev ocl-icd-opencl-dev opencl-c-headers opencl-clhpp-headers opencl-headers

did the trick for me (that is, for Debian/Ubuntu distros). Though I had also installed CUDA headers prior to the step above, which is advised on quite a few forums that are illiterate in the programming sense (some overflow bunch). CUDA on its own didn't bring me any joy.

I’ve just updated the package in the OBS to version 3.0.0. Will create a PR as soon as my build succeeds.

Hi,

You can find some instructions here (https://redmine.darktable.org/projects/darktable/wiki/Building_darktable_20).

They are not up to date, but - with some minor changes - they can still work!

Maurizio

I forgot where I saw it. At least I got the PDF, here is a link to MY storage:

darktable-usermanual.pdf (~15 MB)

I will not update the link and will delete the PDF in a few days; I guess there will be better links soon.

Better link (a more recent one): https://redmine.darktable.org/projects/darktable/wiki/Building_darktable_26 (this works for 3.0 for sure).

Hi Maurizio,

Thanks. As you noted, it is not really applicable to 3.X versions. Something is missing.

Regards
Filip

No, that doesn't work for 3.X; you are missing the OpenCL headers => the build will fail.

I said that works for 3.0, not for master. The OpenCL headers have been moved on the current 3.1.0/3.0.1 branches, so you're right, but only for these new versions. I had always compiled master (i.e. what is now 3.0.0) since the beginning of the year with the 2.6 link I posted. But yes, yesterday I did get the OpenCL header error with the 3.1 master branch. So you're right for releases after 3.0: you need to add the OpenCL headers (a 'git submodule init' followed by 'git submodule update' in the source folder pulls them in, so not much changes).

I have never seen sharpen change colors, I mean nothing that can be observed without zooming to 100%. In which cases does sharpen produce unnatural results?

It halos badly when you sharpen too much

That’s fine, I just need a slight sharpening to compensate for the AA filter of my camera

There is a lua add-on to do RL deconvolution and another lua add-on to use gmic, which has Octave Sharpening (wavelet based) and its own Richardson Lucy sharpening.

Outstanding work. I completely get the advantage of applying scene-referred transforms prior to the perceptual mapping.
However, let me be contrarian re the advantages of filmic-RGB vs base curves: nothing obliges the base (or tone) curve to be S-shaped. One is free to add all the little wriggles that one wishes in order to map tones closer or further apart, with the aid of the colour/tone-picker.

The curves are naturally pseudo-infinite dimensional interfaces… you just need to be careful to keep them monotonic.

Replacing them by a cascade of fixed parametric transforms is backward, to my mind. It could be argued that the base curve, as a “set once and mostly leave alone” tool, could be well handled by a parametric transform (i.e. filmic-RGB). Would it be better though? To me, a curve is one of the most intuitive representations of a mapping… maybe that just comes from 30 years of teaching mathematics, but I don't see the use of tuned fringes and optimal separations being more saleable to beginners than “look, you want to separate these two tones, so you make the curve more vertical in between”. If the shadows/mids/highlights slider system worked for some people in Lightroom, I'd suggest it was because of the intuitive notion of shadows/mids/highlights, not the inherent superiority of parametric filtering at specific EV levels.

Ok, soapbox mode off now. Thanks once again for the amazing work, and happy holidays and all that… :slight_smile:

First, there are not many curve shapes that make sense from a psychophysics point of view. That S curve is inherited from film densitometric curves, and it is very close to visual tone mapping too.

Second, having a drawn curve with free-hand nodes makes the setting practically non-transferable from one picture to another: you need to readjust every node, etc. The beauty of filmic is that you can transfer the look (contrast, latitude, desaturation) and only adjust the bounds (white/black/grey).

Third, the whole curve UI assumes a signal bounded between 0 and 1, which is more and more wrong as HDR becomes the standard. Moreover, users will expect the middle grey to be in the middle of the graph, but raw signals don't have their middle grey at 0.5; in fact, nobody but the photographer knows what middle grey is in the scene.

So curves are bad on several levels. I agree that filmic doesn't do everything; that's why I made the tone equalizer.

Yes, because Lightroom converts to the display colour space at the beginning of the pipe, and that's fine as long as you keep working for paper prints (a “one input/one output” workflow). So they can hard-code transformations that assume highlights are in [0.75, 1.0], midtones in [0.25, 0.75], and blacks in [0, 0.25]. The problem arises if you want to enable “one input/multiple outputs” workflows, for example to take advantage of HDR displays but still be able to print without redoing the whole edit. Well, you can't, in Lightroom. To enable multiple outputs, you need to make your processing pipe output-agnostic, which means no value has a special meaning, so you can't use hardcoded constants like that anymore. Which means you don't have shadows or highlights, but light emissions that lie at some energy levels, and you don't know their values beforehand.

Filtering at specific EV levels was intuitive in Ansel Adams' zone system because people had no other choice. And it's also very intuitive if you think of your pixel RGB vectors as light emissions whose spectrum got discretized to 3 intensities. But people want to see RGB vectors as colour codes, so here we are.

To be clear, I don't care about intuitiveness. That's all a lie. What is intuitive is what resembles something you already know, so you can spare yourself the learning process. If people are used to bad tools working in bad spaces with hardcoded constants everywhere, tools that make their life easier 80% of the time but completely let them down in the remaining 20%, I would probably try to make something just as bad if I sold the software. But I'm not selling anything, and I don't believe tools should work only 80% of the time.

So I design algos that do the right thing first. Then I try to make the right thing easier for the user. But it's an iterative design process, and it has just begun.

I think putting ease of use first is bad priority management. The thing is, simple tools doing simple things are not always intuitive. In darktable, you can literally replace every curve and tone-mapping thing by an exposure module using the multiply or divide blending modes, and masks. The exposure compensation is a straight multiplication of the RGB values, so it's very colour-safe. Use parametric masking to isolate the highlights, another instance for the shadows, the awesome contour feathering and blurring, and you get a better shadows and highlights correction, with no saturation issues or halos. And that falls back to old-school dodging and burning under the enlarger, aka selective and masked exposure, which was more organic and intuitive since the printer manually adjusted the exposure time on the enlarger.
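
To make that concrete, here is a minimal sketch of what a masked exposure boost boils down to (my illustration, not darktable's actual code: the buffer layout, the mask source and the function name are made up, and the blend shown is a plain opacity blend rather than darktable's multiply/divide modes):

```c
#include <math.h>
#include <stddef.h>

/* Illustrative sketch only: dodging/burning as a masked exposure
 * compensation. Scale each RGB channel by 2^EV, then blend with the
 * original using a mask in [0, 1] (think parametric mask + feathering
 * + blurring). Buffer layout, names and the plain opacity blend are
 * assumptions, not darktable's actual code. */
static void masked_exposure(float *rgb, const float *mask,
                            size_t npixels, float ev)
{
  const float gain = exp2f(ev); /* straight multiplication of RGB: colour-safe */
  for(size_t i = 0; i < npixels; i++)
  {
    const float m = mask[i];    /* 0 = untouched, 1 = fully affected */
    for(int c = 0; c < 3; c++)
    {
      const float in = rgb[3 * i + c];
      rgb[3 * i + c] = in * (1.0f - m) + in * gain * m;
    }
  }
}
```

One instance with a positive EV masked on the shadows and another with a negative EV masked on the highlights gives the shadows/highlights correction described above.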

How many people here know how to use this very simple module to its full potential? Nope, you need to give them a slider labeled “shadows” and another one labeled “highlights”, even if it's just a masked exposure compensation. Digital processing is guilty of having made intuitive light concepts completely exotic by burying them under piles of UI bullshit and colour nonsense.

Happy holidays to you too :slight_smile:

A question about two paragraphs from the article (it came up as a result of the German translation).

Warning: Do not use basic adjustments with the modules exposure and contrast brightness saturation. Indeed, this would be like creating a second instance of the module without a mask: preserve color uses the same values as in base curve (options also added to this module in darktable 3.0). It is therefore preferable not to use these two options at the same time, and even not to use these two modules at the same time.

That's OK and understandable, since the basic adjustments sliders are a subset of the sliders of the mentioned modules.

But the question relates to the following paragraphs:

Using this module allows you to quickly correct a photo that does not require too much processing. It is therefore particularly suitable for simple and fast processing, or to begin in RAW processing (as shown in this example). The main limitation of this module is that these different parameters usually occur (via the different modules offering them outside this one) at different levels of image processing, so they do not provide quite the same rendering.

=> **It must not be used with the new RGB workflow offered by the trilogy: filmic rgb, color balance and tone equalizer to which is added according to needs: rgb curve and rgb levels.**

The bold sentence is what causes confusion - and to be honest: I could not explain this to the reader who asked the question.
My guess is that it is related to some pixelpipe issues (?).
So could anyone please shed some light on the matter?

Should the bold text at the end of the quote below be rephrased into something like “…to use color preservation options similar to [OR inspired by] the chrominance preservation algorithms of the filmic rgb module”?

Reasons:

  1. filmic kind of refers to the old implementation, which is now deprecated; the up-to-date module is named filmic rgb.
  2. The option in rgb curves and rgb levels is named preserve color, while in filmic rgb the option is named preserve chrominance (not sure whether this is intentional or not).
  3. There are three (3) preserve chrominance algorithms and six (6) algorithms under preserve color; their names do not fully match.
  4. The “attached channels” reference seems kind of obsolete, given that we are already speaking of color/chrominance preservation, which is all about maintaining RGB ratios. Or is it an intentional reference to the tone curve module options?

Suggestions:

  1. Use lower-case “rgb” when referring to the filmic rgb module, the way it appears in the user interface. Upper-case RGB has its use cases when referring to a color space, tristimulus values, a workflow, etc.
  2. Rephrase “…of filmic (2.6 one)…” into “…of filmic module (initially introduced in darktable 2.6 and deprecated as of darktable 3.0)…”
  3. Maybe it would also make sense to call filmic rgb an evolution [or new implementation] of filmic, as speaking of versions implies the module name should not change, while it obviously did change.

It could be useful to come up with a markup convention for module parameter names and their list-based values.
For example, the following seems a bit more readable to me:

New *luminance Y* and *RGB power norm* chrominance preservation modes have been added on top of the *max RGB* mode (the only one available in *filmic*). They provide additional flexibility needed for challenging real-world scenarios, for example, when *max RGB* darkens the blue skies too much.
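
For readers wondering what those modes actually compute, here is a rough sketch of ratio-preserving norms in C (my approximation: the Rec. 709 luminance weights and the power-norm formula are illustrative and may not match darktable's exact implementation):

```c
#include <math.h>

/* Illustrative sketch only: ratio-preserving tone mapping.
 * A norm is computed from the RGB triplet, the tone curve is applied
 * to that norm, and all three channels are rescaled by the same
 * factor so the RGB ratios (chrominance) are preserved.
 * Formulas and coefficients are approximations, not necessarily
 * darktable's exact implementation. */

static float norm_max_rgb(const float rgb[3])
{
  return fmaxf(rgb[0], fmaxf(rgb[1], rgb[2]));
}

static float norm_luminance_y(const float rgb[3])
{
  /* Rec. 709-style luminance weights (illustrative) */
  return 0.2126f * rgb[0] + 0.7152f * rgb[1] + 0.0722f * rgb[2];
}

static float norm_rgb_power(const float rgb[3])
{
  /* power norm: sum(x^3) / sum(x^2), a compromise between the
   * average and the max that is gentler on saturated pixels
   * (e.g. blue skies) than max RGB */
  float num = 0.0f, den = 0.0f;
  for(int c = 0; c < 3; c++)
  {
    const float x = fabsf(rgb[c]);
    num += x * x * x;
    den += x * x;
  }
  return (den > 0.0f) ? num / den : 0.0f;
}

/* Apply a tone curve to the norm only and rescale the channels. */
static void tone_map_preserving_chrominance(float rgb[3],
                                            float (*norm)(const float[3]),
                                            float (*curve)(float))
{
  const float n = norm(rgb);
  if(n <= 0.0f) return;
  const float scale = curve(n) / n;
  for(int c = 0; c < 3; c++) rgb[c] *= scale;
}
```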