Colorspace clarification…

For tone manipulation the best color spaces are:

1) RGB per-channel and perceptual color spaces, with the perceptual one slightly behind
2) CIE Lab
3) RGB ratio
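To make the difference between these methods concrete, here is a rough numpy sketch (the curve and the luminance weights are illustrative placeholders, not any editor's actual code):

```python
import numpy as np

def tone_curve(x):
    """A stand-in tone curve (a simple power lift, purely for illustration)."""
    return x ** 0.6

def per_channel(rgb):
    # Apply the curve to R, G and B independently: channel ratios change,
    # so hue and saturation can shift.
    return tone_curve(rgb)

def rgb_ratio(rgb):
    # Apply the curve to a luminance estimate, then scale all three channels
    # by the same factor, so the R:G:B ratios are preserved.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance weights
    ratio = tone_curve(lum) / np.maximum(lum, 1e-6)
    return rgb * ratio[..., None]

pixel = np.array([[0.8, 0.2, 0.1]])  # a saturated red
print(per_channel(pixel))  # R:G ratio drops from 4:1 -> desaturated/shifted
print(rgb_ratio(pixel))    # R:G ratio stays exactly 4:1
```

The point of contention in this thread is visible in the output: the per-channel version compresses the channel ratios (which is often perceptually pleasing), while the ratio version behaves like a local exposure change.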

For sharpening (not deblurring) we need to work on the luminance channel and in a log/gamma color space.
Linear RGB in this case is just an artifact factory.

For deblurring I can't say, but using deblurring alone as the main raw sharpener is wrong.

How have you arrived at these conclusions?

Are you serious? Show me a commercial raw editor that uses RGB ratio as the main method for tone manipulation. Filmic is pretty much what Reinhard described in his paper in 1997, and only darktable uses it.

All the proprietary LUTs (Sony, Canon, ARRI…) for the log/HDR to SDR conversion are RGB per-channel curves (plus gamut compression).
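For reference, the ratio-preserving operator from Reinhard's paper that gets mentioned here can be sketched like this (a simplified illustration, not darktable's actual filmic code):

```python
import numpy as np

def reinhard_ratio(rgb):
    """Reinhard's simple global operator L/(1+L), applied through the
    luminance ratio so channel ratios (hue) are preserved."""
    weights = np.array([0.2126, 0.7152, 0.0722])   # Rec.709 luminance
    lum = rgb @ weights
    tonemapped = lum / (1.0 + lum)
    return rgb * (tonemapped / np.maximum(lum, 1e-6))[..., None]

hdr_pixel = np.array([[4.0, 1.0, 0.5]])  # scene-referred values above 1.0
out = reinhard_ratio(hdr_pixel)
print(out)  # luminance is now below 1.0, but a single channel can still
            # exceed it, which is why a gamut compression step is needed
```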

There aren't papers showing how bad linear RGB is for sharpening an image, because it is so obvious, but you can find a lot of discussions on the internet and everyone comes to the same conclusion in the end.

Yes, I'm serious. You have asserted that @anon41087856 is wrong in stating that Lab is a bad colour space to use (and, by implication, that the entire direction darktable is taking is wrong). When one makes assertions about objective reality (especially one with such big implications for the future of darktable), it is normal to support one's case with evidence.

I (and @priort) have asked for objective evidence, and instead you have just provided additional assertions without further justification. Now we're on to the "but everyone says so" argument and "it's so obvious nobody needs to prove it".

Anyway pixls is telling me off for too many replies so I’ll just sit here quietly and await some argument to convince me one way or the other.


There are a lot of discussions in this forum too; find them yourself.

Try it yourself.

You can import an unsharpened image into GIMP and try a linear or gamma unsharp mask; the linear USM will produce a lot more artifacts.
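The GIMP experiment suggested above can be approximated numerically. This is a toy 1-D sketch (gamma 2.2 as a stand-in for sRGB, and a tiny box blur instead of a Gaussian):

```python
import numpy as np

# A 1-D edge in gamma-encoded values.
edge = np.array([0.2] * 5 + [0.9] * 5)

def usm(signal, amount=0.5):
    """Unsharp mask with a minimal 3-tap box blur."""
    blur = np.convolve(signal, [1/3, 1/3, 1/3], mode="same")
    return signal + amount * (signal - blur)

# USM in gamma space: the dark side of the edge keeps a plausible dark grey.
gamma_sharp = np.clip(usm(edge), 0.0, 1.0)

# USM in linear space: decode, sharpen, clip, re-encode.
# The undershoot is much larger relative to the tiny linear dark value,
# so the dark halo clips to pure black.
linear_sharp = np.clip(usm(edge ** 2.2), 0.0, 1.0) ** (1 / 2.2)

print(gamma_sharp[4], linear_sharp[4])  # dark side of the edge
```

With these made-up values the linear-light version crushes the dark halo to 0 while the gamma version keeps it around 0.08, which matches the kind of halo artifacts being described.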


I am grateful for the exchange and the opportunity to learn something and to hear other opinions. It would be great if we don't degrade into a Republican-vs-Democrat mentality… we can see how well that is working for the US…

I would agree with Chris that making any statement without a reference or clear cut example that can be used for discussion point might lead to a hollow debate rather than a fruitful discussion…

One example for me was this one…

When I look at the images I am not immediately aware of how they demonstrate the points we are talking about (I acknowledge part of this could be me being thick). The PhotoFlow one looks like it has a lot more contrast; beyond that, I was not sure how it was comparable to DT in a way that supports a theory or assertion. (Again, likely I am slow on the uptake.)

In the original thread for this photo there was a nice DT edit that, subjectively, I thought was on par with or actually nicer than these in this case, and IMO that means really nothing. But the point is that one edit in DT and one edit in PhotoFlow don't necessarily provide a clear demonstration of the point, just like my assertion that the mentioned edit is "nicer".

EDIT: @age thanks for your time; I see you have some suggestions in your post that came in as I was writing mine…

Sadly yes. I like the denoise and local contrast modules in darktable, but in my opinion it has become a slow, overcomplicated raw editor, and the worst quality-wise.
The recommended scene-referred workflow is good only for very unnatural JPEG exports.

The real point is that there isn't the offending blue-to-purple shift in the Lab tone-mapped image (but there is in darktable).


Please don’t quote me as if I said that “the entire direction that darktable is taking is wrong”. I’m far from convinced that this is true - I was paraphrasing you.


Sorry for my outburst. The new darktable philosophy, "everything should be done in linear RGB or using the ratio-preserving RGB method; everything else sucks", is something that I really don't understand.
There are definitely some benefits sometimes, and disadvantages at other times.

These are still very general statements you're making, @age, but we are asking for some reference material to back up your statements. "It's obvious" or "go find it yourself" aren't acceptable replies here.

If you don't have the time to find supporting evidence, just say so. We are all busy. But continually doubling down on vague statements does not advance the discourse.


It was my impression that as long as you want to perform a digital version of a physical operation (e.g. blending), you must obey the physics. Since physics works in 'linear space', your operation must be performed in 'linear space' (there may be more accurate terminology here…). If you're happy to throw physics out of the window for something that feels more intuitive or gets you to eye-pleasing results faster, then this is not wrong, it's just different.
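As a concrete illustration of blending in linear versus encoded space (using the standard sRGB transfer function; the two colors are arbitrary):

```python
import numpy as np

def srgb_to_linear(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

a = np.array([1.0, 0.0, 0.0])  # sRGB red
b = np.array([0.0, 1.0, 0.0])  # sRGB green

# Averaging the encoded values: a dull olive.
naive = (a + b) / 2

# Decoding to linear light, averaging, re-encoding: a brighter yellow,
# which is how two overlapping lights would actually mix.
physical = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)

print(naive, physical)
```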

Edit: oh, and I don't think I need to remind anyone on the forum that color science can be a tricky subject. There's a lot of research going on to improve processing methods, accurate color rendition under all sorts of circumstances, etc. It seems a little foolhardy to stay with the CIELAB 1976 recommendation when, e.g., we have a pretty well-established CIECAM16 (2016) color appearance model to work with as well.


I should clarify that @anon41087856’s original article on the subject (here – disclaimer, I helped edit the English translation of this article) was pretty convincing to me and judging by my own results, every edit I have done using the scene-referred modules has been a vast improvement on similar edits done using the old Lab/display-referred modules. Either scene-referred is better, or @anon41087856’s modules are better despite being scene-referred. (I will of course admit there’s a possibility I’ve just improved my process over the years)

I need a pretty strong demonstration if I’m to believe that all of this was a load of rubbish and I’ve not seen anything like such a demonstration.


So you are proposing to perform USM/local contrast in the CIECAM16 color space? I would say that nobody will see any difference compared to CIE Lab, and as such CIE Lab could still be used because it is faster.

Close it, I don't care. Still, USM and local contrast in darktable can be used with the scene-referred workflow because they work in the range 0–inf.
This thread was started from that assertion.

Sorry, where is your portfolio again?

Here is mine: Portfolio - Aurélien PIERRE, Photographe. 100% practice since 2011. Theory came later, to understand why I couldn't get the advertised results out of the DR beast that was the Nikon D810 at the time.

  1. Blur comes from lenses,
  2. Lenses process light,
  3. Linear RGB is proportional to light emission.

How does perceptual fit into that? Light doesn't have a gamma or a cubic root, I'm sorry.
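The three steps above amount to saying that blur is a convolution of linear light energy with the lens's point-spread function, which is why simulating or inverting it belongs in linear RGB. A toy sketch (the PSF and signal are made up for illustration):

```python
import numpy as np

# A toy point-spread function: the lens spreads a point of light
# over its neighbours, but neither creates nor destroys energy.
psf = np.array([0.25, 0.5, 0.25])

# A 1-D strip of linear radiance containing one bright point source.
scene = np.array([0.0, 0.0, 4.0, 0.0, 0.0])

blurred = np.convolve(scene, psf, mode="same")

# Total light energy is conserved by the convolution, which only holds
# when the values are proportional to light (i.e. linear RGB).
print(blurred.sum(), scene.sum())
```

Applying the same convolution to gamma-encoded values would not conserve energy, which is the physical argument for doing deblurring in linear light.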

Proprietary LUTs are tuned for FPS, not for color consistency.


You know, a shitload of papers actually come from an era where RGB had to be stored as 8-bit integers, which does not leave much choice other than to use gamma RGB. Which says strictly nothing about linear RGB.

And again… Back to lens model and blur formation.

So perhaps the USM is the problem here? Care to try diffuse and sharpen?

Because RGB ratios fall back to exposure changes, which preserve hue and chroma, while independent RGB is a clusterfuck of unpredictable color shifts, all the more so if RGB is not linear.
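A minimal sketch of that distinction (the values are arbitrary): scaling all channels by one factor is an exposure change and leaves the chromaticity alone, while a per-channel curve applies a different effective gain to each channel:

```python
import numpy as np

rgb = np.array([0.6, 0.3, 0.1])  # a linear RGB pixel

# An exposure change is a single multiplier on all three channels,
# so the normalized R:G:B proportions (chromaticity) are untouched.
exposed = rgb * 1.5
print(exposed / exposed.sum(), rgb / rgb.sum())  # identical

# A per-channel curve boosts the small channels proportionally more
# than the large one, shifting the chromaticity.
curved = rgb ** 0.45
print(curved / curved.sum())  # no longer matches the original
```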


If you don't want to participate in this thread then don't. It's that easy. Nobody said anything about closing it; we are just trying to understand your point. I personally don't understand what you're getting at, and it seems quite a few others don't either. We are trying to gain clarity.

Great. The truth is that it doesn't preserve hue, and after filmic the user has to restore the saturation with no fewer than six saturation sliders plus masks (vibrance, global chrominance, shadow saturation, midtone saturation, highlights saturation and highlights brilliance); without this long step every image looks like a post-apocalypse movie.
The new sharpening module just adds two minutes to every render, and 90% of the time it isn't better than the old unsharp mask.
Good work.

Let us take a break from the discussion and refrain from the "you"s and "I"s. It is okay to disagree, but using disparaging words or expressions is not helpful.