darktable 3.0

No, that doesn’t work for 3.x: you are missing the OpenCL headers, so the build will fail.

I said that works for 3.0, not for master. The OpenCL headers have been moved on the current 3.1.0/3.0.1 branch, so you’re right, but only for these new versions. I have always compiled master (so the current 3.0.0) since the beginning of the year, with the 2.6 link I posted. But yes, yesterday I did get the OpenCL header error with the 3.1 master branch. So you’re right for releases after 3.0: you need to add the OpenCL headers (a ‘git submodule init’ in the source code folder adds them, so not much of a change).

1 Like

I have never seen sharpen change colours, I mean nothing that can be observed without zooming to 100%. In which cases does sharpen produce unnatural results?

It halos badly when you sharpen too much

That’s fine, I just need a slight sharpening to compensate for the AA filter of my camera

There is a Lua add-on to do RL deconvolution, and another Lua add-on to use G'MIC, which has Octave Sharpening (wavelet-based) and its own Richardson-Lucy sharpening.

Outstanding work. I completely get the advantage of applying scene-related transforms prior to the perceptual mapping.
However, let me be contrarian regarding the advantages of filmic RGB vs. base curves: nothing obliges the base (or tone) curve to be S-shaped. One is free to add all the little wriggles one wishes in order to map tones closer or further apart, with the aid of the colour/tone picker.

The curves are naturally pseudo-infinite dimensional interfaces… you just need to be careful to keep them monotonic.

Replacing them with a cascade of fixed parametric transforms is a step backward, to my mind. It could be argued that the base curve, as a “set once and mostly leave alone” tool, could be well handled by a parametric transform (i.e. filmic RGB). Would it be better, though? To me, a curve is one of the most intuitive representations of a mapping… maybe that just comes from 30 years of teaching mathematics, but I don’t see the use of tuned fringes and optimal separations being more saleable to beginners than “look, you want to separate these two tones, so you make the curve more vertical in between”. If the shadows/mids/highlights slider system worked for some people in Lightroom, I’d suggest it was because of the intuitive notion of shadows/mids/highlights, not the inherent superiority of parametric filtering at specific EV levels.

Ok, soapbox mode off now. Thanks once again for the amazing work, and happy holidays and all that… :slight_smile:

2 Likes

First, there are not many curve shapes that make sense from a psychophysics point of view. That S curve is inherited from film densitometric curves, and it is very close to visual tone mapping too.

Second, having a drawn curve with free-hand nodes makes the setting practically non-transferable from one picture to another: you need to readjust every node, etc. The beauty of filmic is that you can transfer the look (contrast, latitude, desaturation) and only adjust the bounds (white/black/grey).

Third, the whole curve UI assumes a signal bounded between 0 and 1, which is more and more wrong as HDR becomes the standard. Moreover, users will expect middle grey to sit in the middle of the graph, but raw signals don’t have their middle grey at 0.5; in fact, nobody but the photographer knows where middle grey lies in the scene.

So curves are bad on several levels. I agree that filmic doesn’t do everything; that’s why I made the tone equalizer.

Yes, because Lightroom converts to the display colour space at the beginning of the pipe, and that’s fine as long as you keep working for paper prints (a “one input/one output” workflow). So they can hard-code transformations that assume highlights are in [0.75, 1.0], midtones in [0.25, 0.75], and blacks in [0, 0.25]. The problem arises if you want to enable a “one input/multiple outputs” workflow, for example to take advantage of HDR displays but still be able to print without redoing the whole edit. Well, you can’t, in Lightroom. To enable multiple outputs, you need to make your processing pipe output-agnostic, which means no special value has a special meaning, so you can’t use hardcoded constants like that anymore. Which means you don’t have shadows or highlights, but light emissions that lie at some energy levels, and you don’t know their values beforehand.

Filtering at specific EV levels was intuitive in Ansel Adams’s zone system, because people had no other choice. And it’s also very intuitive if you think of your pixel RGB vectors as light emissions whose spectrum got discretized into 3 intensities. But people want to see RGB vectors as colour codes, so here we are.

To be clear, I don’t care about intuitiveness. That’s all a lie. What is intuitive is whatever resembles something you already know, so you can spare yourself the learning process. If people are used to bad tools working in bad spaces, with hardcoded constants everywhere, that make their life easier 80% of the time but completely let them down in the remaining 20%, I would probably try to make something just as bad if I were selling the software. But I’m not selling anything, and I don’t believe tools should work only 80% of the time.

So I design algorithms that do the right thing first. Then I try to make the right thing easier for the user. But it’s an iterative design process, and it has only just begun.

I think putting ease of use first is bad priority management. The thing is, simple tools doing simple things are not always intuitive. In darktable, you can literally replace every curve and tone-mapping thing with an exposure module using the multiply or divide blending modes, and masks. The exposure compensation is a straight multiplication of the RGB values, so it’s very colour-safe. Use the parametric masking to isolate highlights, and another instance for shadows, plus the awesome contour feathering and blurring, and you get a better shadows and highlights correction, with no saturation issues or halos. And that falls back to old-school dodging and burning under the enlarger, aka selective, masked exposure, which was more organic and intuitive since the printer manually adjusted the exposure time on the enlarger.
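
For the record, here is a minimal sketch of what that boils down to on linear RGB. This is not darktable’s actual code; the function name, the mask and the numpy layout are assumptions for illustration only:

```python
import numpy as np

def masked_exposure(rgb, ev, mask):
    """Masked exposure compensation on linear, scene-referred RGB.

    Exposure is a straight multiplication by 2**ev, applied equally to all
    three channels, so it is colour-safe.  `mask` holds per-pixel weights in
    [0, 1] (e.g. a feathered, blurred parametric mask selecting shadows or
    highlights) and blends between the original and the pushed pixel.
    """
    gain = 2.0 ** ev                                   # +1 EV -> x2, -1 EV -> x0.5
    return rgb * (1.0 + mask[..., None] * (gain - 1.0))

# e.g. lift the masked shadows by 1.5 EV: masked_exposure(img, 1.5, shadow_mask)
```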

How many people here know how to use this very simple module to its full potential? Not many; you need to give them a slider labeled “shadows” and another one labeled “highlights”, even if it’s just a masked exposure compensation. Digital processing is guilty of having made intuitive light concepts completely exotic by burying them under piles of UI bullshit and colour nonsense.

Happy holidays to you too :slight_smile:

8 Likes

A question about two paragraphs from the article (it came up as a result of the German translation).

Warning: Do not use basic adjustments with the modules exposure and contrast brightness saturation. Indeed, this would be like creating a second instance of the module without a mask: preserve color uses the same values as in base curve (options also added to that module in darktable 3.0). It is therefore preferable not to use these two options at the same time, or even not to use these two modules at the same time.

That’s OK and understandable, since the basic adjustments sliders are a subset of the sliders of the mentioned modules.

But the question relates to the following paragraphs:

Using this module allows you to quickly correct a photo that does not require too much processing. It is therefore particularly suitable for simple and fast processing, or for getting started with raw processing (as shown in this example). The main limitation of this module is that its different parameters usually operate (via the different modules offering them outside this one) at different stages of image processing, so they do not provide quite the same rendering.

=> It must not be used with the new RGB workflow offered by the trilogy filmic rgb, color balance and tone equalizer, to which rgb curve and rgb levels are added as needed.

The bold sentence is what causes confusion, and to be honest, I could not explain it to the reader who asked the question.
My guess is that it is related to some pixelpipe issues (?).
So could anyone please shed some light on the matter?

Should the bold text at the end of the quote below be rephrased into something like “…to use color preservation options similar to [or inspired by] the chrominance preservation algorithms of the filmic rgb module”?

Reasons:

  1. filmic kind of refers to the old implementation, which is now deprecated; the up-to-date module is named filmic rgb.
  2. The option name in rgb curve and rgb levels is preserve color, while in filmic rgb the option is named preserve chrominance (not sure whether this is intentional or not).
  3. There are three (3) preserve chrominance algorithms and six (6) algorithms under preserve color, and their names do not fully match.
  4. The “attached channels” reference seems kind of obsolete, given that we are already speaking of color/chrominance preservation, which is all about maintaining RGB ratios. Or is it an intentional reference to the tone curve module options?

Suggestions:

  1. Use lower-case “rgb” when referring to the filmic rgb module, the way it appears in the user interface. Upper-case RGB has its use cases when referring to the colour space, tristimulus values, workflow, etc.
  2. Rephrase “…of filmic (2.6 one)…” into “…of the filmic module (initially introduced in darktable 2.6 and deprecated as of darktable 3.0)…”
  3. Maybe it would also make sense to call filmic rgb an evolution [or new implementation] of filmic, as speaking of versions implies the module name should not change, while it obviously did change.

It could be useful to come up with a markup convention for module parameter names and their list-based values.
For example, the following seems a bit more readable to me:

New luminance Y and RGB power norm chrominance preservation modes have been added on top of max RGB mode (the only one available in filmic). They provide additional flexibility needed for challenging real-world scenarios, for example, when max RGB darkens the blue skies too much.
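
(For readers wondering what these modes actually do: they all follow the same pattern, sketched below in Python. This is only an illustration, not darktable’s code, and the luminance weights are an assumption; the option merely changes which norm is used.)

```python
import numpy as np

def tonemap_preserving_rgb_ratios(rgb, curve, norm="max RGB"):
    """Apply a scalar tone curve to one norm of the pixel, then rescale all
    three channels by the same ratio, so the RGB ratios (the chrominance)
    are left untouched.  `curve` is any scalar tone-mapping function."""
    if norm == "max RGB":
        n = rgb.max(axis=-1, keepdims=True)
    elif norm == "luminance Y":
        # Rec. 709 luminance weights, purely for illustration
        n = (rgb @ np.array([0.2126, 0.7152, 0.0722]))[..., None]
    else:
        raise ValueError("other norms (e.g. a power norm) plug in the same way")
    ratio = curve(n) / np.maximum(n, 1e-9)
    return rgb * ratio
```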

I think that you certainly should pursue your vision… my concern is raised by suggestions of deprecating the base (and tone?) curves (or simply leaving them to rot with their underbrush of useless presets). If some of the fixes for issues like red saturation could be shared with the curve UI, and a little bit of affection shown by allowing user deletion of useless presets, I’d be a happy camper.

But let me continue the conversation… in case it is of interest to someone :slight_smile:

Egotist that I am, I’m not actually interested in multiple outputs, so I’m defending a personal preference. Nor do I much care about some people’s idea of 3-band intuition; I just wanted to bat that away as a justification.

I don’t understand your comment about “hard-coded constants”. The curves are general continuous mapping functions [0,1] -> [0,1]. I can put the point of inflexion at 0.18, or at 0.82, or both (to pick out contrasts at both ends of the tonal range). They posterise and do other weird stuff if you make them non-monotone… but if someone wants to do that, why not? Certainly much easier than the old way of doing it with multiple developments and blah blah, which was popular in the ’70s because it was hard :smiley: Nor do we have to listen to people who insist that all colour science is about creating paper images that are optimised under a standard D50 illuminant.

It’s certainly true that most of these curves are not psycho-physiologically “correct”… but so be it; neither is reconstructing an emotional response from combinations of tones on a plane… Les Nabis remarked in 1895 (?) that, in the end, a painting is just that: a means of communicating many things, most of which are not “true” in the sense of a faithful representation of reality. I don’t care if my photos are lies: if all they could be was “true”, photography would be very boring for me. Especially since I produce 95% B&W…

Thing is, intuitive control of a UI is important. I need a direct feel for what result R will be produced by action A… which implies that my brain has to automatically solve the inverse problem R->A. Inverse problems are hard once you get far from local linearity. I’m willing to work to obtain that feel, but I need to be persuaded that there is something to be won: “efficiency” doesn’t do it for me, I’m not a wedding photographer.

Finally, you say that “the whole curve UI assumes a bounded signal between 0 and 1, which is more and more wrong as HDR becomes the standard”.
Not so much, unless you suppose that we are all to do multiple-exposure captures on one end, and have unbounded screen brightness on the other. Maybe that works for landscape photography in the f/64 tradition… but even trees move, and people move much faster. At the other end, maybe one day we’ll be showing our images on 12-bit 8K displays… but then there are people retro-pedalling to do gum bichromate, and the great majority of images actually sold as such are in paper books. The history of display devices can’t be assumed to always be making things uniformly “better”… for some things yes, for others no. The reflected-light image on paper already does pretty well in making use of our ability to perceive simultaneous tonal range… which is far less than 20 EV. According to Wikipedia:
“The retina has a static [contrast ratio] of around 100:1 (about 6.5 [f-stops]).”

Which is a Dmax of 2.0, which is within the capability of glossy or baryta inkjet paper… and supported by the (anecdotal) fact that no one seems to be able to see banding in 8-bit printed images. Some authors are choosing to publish on matte paper, with typically Dmax <= 1.7 (aka 50:1, aka <6 stops). The print is a version of reality repackaged so that we can look at it comfortably without iris gymnastics, just as it is repackaged to present a certain authorial vision.
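
(For the conversions: a 100:1 contrast ratio is a density range of log10(100) = 2.0, hence Dmax 2.0, and log2(100) ≈ 6.6 stops; a matte Dmax of 1.7 corresponds to 10^1.7 ≈ 50:1, i.e. log2(50) ≈ 5.6 stops.)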

Cheers, Graham

I think what Aurelien wants to say is that the dynamic range of cameras is getting bigger and bigger, way more than the 6.5 stops you mentioned.
So we don’t have to take multiple shots to get a dynamic range that then has to be remapped to the much smaller range of your output, be it retina or baryta or whatever.

Maybe… but my point was that the purpose of creating an image is for a human to look at it. So at the top of the pyramid of scales, there needs to be an eye looking at the output, and for that eye to be able to look at the image comfortably, it needs to be limited to around 6.5 stops.

There is also the game & cinema world, where the purpose of the image is not just to be looked at, but to shock and stun… but is that within the ambitions of dt?

I think my work coming out of darktable is stunning… So yes!

The UI of most curve tools I know is restricted to the notion of black and white, but that doesn’t mean the transfer function has to clamp at those points. In my software it will continue to transfer values well beyond 0.0 - 1.0.

In my experience, a curve tool makes it hard to control the toe where the darkest tones reside, particularly in the linear data where they stack up. A filmic curve and its UI should allow one to manipulate the toe to effectively “crisp” the shadows, as Duiker (the fellow who first posited a filmic equation) intended.
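
(As an aside, for anyone curious what such a toe looks like in code: below is the well-known Hejl/Burgess-Dawson fit to Duiker’s film curve, popularized by John Hable. It is not anything darktable ships; the small subtraction before the rational function is what crushes and “crisps” the deepest shadows.)

```python
import numpy as np

def filmic_hejl_burgess_dawson(c):
    """Hejl/Burgess-Dawson approximation of the Duiker film curve.

    Input: linear scene-referred values.  The output is display-referred and
    already includes an sRGB-like gamma, so no extra gamma should be applied.
    The `c - 0.004` term creates the toe that flattens the darkest tones.
    """
    x = np.maximum(0.0, c - 0.004)
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)
```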

With regard to parameterizing “one curve to rule them all” for all shots, ETTR vexes that. For all of its complexity in some modes, most camera metering supplies an exposure that’s anchored on some notion of “middle gray”, putting the high-key parts of the scene where they may fall. But the majority of the scene should respond well to the camera sensor. ETTR’s anchor is at the high end, and now the shadows fall where they may on the sensor. If one’s light is changing throughout the session, there won’t be one ETTR-based curve that makes everything good…

To be honest, I always hated the shadows/midtones/highlights terminology in all the commercial software I’ve used, because it isn’t really clear which is which; it sounds arbitrary.
But why do you use the same terms in color balance, then? Is it only because it’s more intuitive than lift/gamma/gain, or are there hardcoded assumptions there as well?

And how is it done in dt?

Yes, but putting the inflexion at 0.18 will make the low-lights very difficult to control, since we have increased sensitivity in these parts, so what you need for a good UX is a log interface that scales up this region for better control. Which is not what typical curves do.
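
(To make that concrete, here is a rough sketch of the kind of log remapping meant here; the bounds and grey value are made-up defaults, and this is not darktable’s actual code.)

```python
import numpy as np

def log_axis(x, grey=0.18, black_ev=-8.0, white_ev=4.0):
    """Remap linear values onto a log scale centred on middle grey, so the
    low-lights get a usable share of the horizontal axis of a curve UI."""
    ev = np.log2(np.maximum(x, 1e-9) / grey)        # exposure relative to grey
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)
```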

The thing is, nobody wants to do that, and using free-hand nodes generally ends up wasting time manually micro-tuning a smooth monotonic curve… So you only get the illusion of power, plus all the overhead. 99% of the time, people just want an S curve, so why not deliver it in a more robust way?

I think you are mixing things up. Photography is a technically determined art: it was made possible only after optics, micromechanics and chemistry went far enough to fix an image on a substrate. Making feeling-enabled pictures is not incompatible with using state-of-the-art techniques based on the best understanding we have so far of light emission and colour perception. The art is in the using, but to make reliable and sensible tools, you have to care about science. It’s all about getting robust tools that give the best results the quickest. I hate computers; I want the path of least effort and most efficiency to get the results I’m after. To achieve that, treating RGB vectors as arbitrary numbers and messing around with them as if they represented no physical reality is like shooting myself in the foot. Unrolling the physics and psychophysics where needed is the only way to get digital to behave like analog, and therefore predictably.

Intuitive for whom? For someone who did analog photography in the past (like serious printing work, not just sending negatives away to the 1-hour lab), the digital display-referred approach makes no sense.

I agree. But most non-linear transforms inherited from the display-referred workflow are non-invertible, and can get very unpredictable (if you are able to answer the question “how much will I oversaturate the shadows if I increase the contrast by that much”, you are better than me).

You don’t get it. Your screen has 8 EV of dynamic range, your paper has 5 to 6.5 EV, yet today your average DSLR has 12 EV (up to 14 EV and counting) at ISO 100. This is single-frame HDR, and it’s standard now. These files need to be handled rigorously to keep all the data and blend it gracefully into SDR, or to recover details in backlit situations as advertised by the camera manufacturers.

Hence me saying a sensible image processing pipeline should be 100% output-agnostic, which is possible only if you work on linear light.

Static contrast is beside the point: your brain is doing focus stacking and exposure stacking in real time, so the retina is just the first part of a complex process, and the actual dynamic range of human vision is around 18-20 EV depending on the surround lighting.

You are missing the point. My ambition for dt is to have a set of physically accurate tools to push pixels in a fast and robust way, so I can perform very dramatic edits without nasty side-effects. As of darktable 2.4, all the contrast and dynamic-range compression tools gave halos or colour shifts when you pushed them far. The usual answer was “don’t push them that much, they work only for small adjustments”. Right, but what good is a tool that fails me when I need it the most? If I need to push shadows by +5 EV and the tone-mapping tool can’t do it… well, it doesn’t work. That implies bad colour models and bad algorithms.

So I studied the problem and tested solutions, and came up with the answer that image processing needs to be physically accurate, work as much as possible in linear light, and stop mixing colour and display concepts too soon in the pipe. In this case, you can use the software as a virtual camera and redo the shot in post-processing. Editing then becomes analogous to designing your own film emulsion, and many things are simpler even though the UI might get more crowded. Dealing with linear light is very easy: it’s like adding a colour filter on top of your lens.

I just think that people who have never pushed darktable very far can’t see the problem. It sure works fine for gentle editing, so I get why all my changes just seem like a big pile of habit-changing trouble to many people.

Even if you don’t clip values, having a UI from 0 to 1 still sucks, because these special values are only conventions and nobody knows what data you are actually manipulating. I think good algorithms should work in the most general way. But then, sure, you need to expose some scaling parameter in the UI, and users will start to complain about it, even though its default value will usually not need to be changed for the majority of them.

I tried to be user-friendly, because offset/lift mostly affects the shadows, gamma/power the midtones, and gain/slope the highlights. But, of course, there is no threshold in there, let alone a hardcoded one. The algo is simply RGB_{out} = (slope * RGB_{in} + offset)^{power}.
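
In code, the quoted formula is just this (a minimal numpy sketch of the formula itself, not of the actual module):

```python
import numpy as np

def slope_offset_power(rgb, slope=1.0, offset=0.0, power=1.0):
    """RGB_out = (slope * RGB_in + offset) ** power.

    Slope mostly moves the highlights, offset the shadows and power the
    midtones, but there is no shadows/midtones/highlights threshold:
    each parameter acts on the whole tonal range.
    """
    # clip negatives before the power to avoid NaNs (a sketch detail, not part of the quoted formula)
    return np.maximum(rgb * slope + offset, 0.0) ** power
```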

It’s been wired progressively. If you look at the pipe now, everything coming before filmic is output-agnostic. Filmic is the HDR->SDR mapping, and everything coming after expects SDR data.

8 Likes

Love the new scene-referred rgb workflow. I’ve revisited a load of my old edits and the improvements are significant. Keep up the good work.

1 Like