Messed up colors by the numbers - darktable's new unbreak and filmic modules

I’m assuming that the colors in a scene-referred interpolated raw file (no “make it pretty” algorithms) are as close to the actual scene colors as most of us can get, given that when we take pictures we don’t also measure scene colors using specialized equipment.

If the colors in the scene-referred interpolated image file are taken as the “not messed up” colors, then the degree to which post-processing produces “messed up” colors can be quantified, given suitable metrics. I used two metrics:

  • How many degrees did the LCh hue change?
  • By what percent did the LCh “Saturation” change?

For the above metrics, LCh “Saturation” is defined as the ratio of Chroma to Lightness, per Mark Fairchild, page 25 - as Fairchild notes, the Lab color space doesn’t have an officially sanctioned measure of Saturation:
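For concreteness, here’s a minimal Python sketch of how I’d compute these metrics - the function names are mine, and I’m reading the “% Sat change” as relative to the base-line value (applied to the rounded table values below, these won’t exactly reproduce the tabulated percentages, which were presumably computed from unrounded L and C):

```python
def lch_saturation(lightness, chroma):
    """Fairchild-style 'Saturation': the ratio of LCh Chroma to Lightness."""
    return chroma / lightness if lightness != 0 else 0.0

def hue_change_degrees(h1, h2):
    """Absolute LCh hue difference in degrees, wrapped to the shorter arc."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def pct_sat_change(sat_base, sat_new):
    """Percent 'Saturation' change, relative to the base-line value."""
    return abs(sat_base - sat_new) / sat_base * 100.0
```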

Here are screenshots showing the results of processing a sample raw file several different ways using darktable. The sample raw file was posted at the top of a “play raw” thread: [PlayRaw] Departing Storm - please note that in the screenshots below I used a better white balance than I did for the files I posted in the “play raw” thread.

To measure “how messed up” colors might be as a result of using various darktable algorithms to make the image “pretty”, I pulled the darktable output into GIMP and placed four sample points, one each as follows:

  1. Blue sky
  2. Orange/brown field behind utility building
  3. Grass in front of utility building
  4. Dark area in the clouds

Above: scene-referred (no “make it pretty” processing) - these colors will be considered “base line” and “not messed up at all”, other than the Lightness being kept dark enough to avoid clipping the highlights.

Above: Scene-referred + base curve to stretch midtones and compress highlights (no toe to compress the shadows)

Above: Scene-referred + Exposure +Unbreak profile (tone curve module not yet applied), following this tutorial: Solving dynamic range problems in a linear way - #62 by anon41087856

Above: Scene-referred + Exposure + Unbreak profile + Tone Curve with automatic RGB to restore chroma, following this tutorial: Solving dynamic range problems in a linear way - #62 by anon41087856

Above: Scene-referred + Exposure + filmic, following this tutorial: New module : Filmic by aurelienpierre · Pull Request #1811 · darktable-org/darktable · GitHub

OK, I’m going to hit the “Create Topic” button and then add the metric calculations in a separate post.


Here are the scene-referred “base line” LCh hue and Sat values:

Scene-referred:  1        2        3        4
C                30.3      8.8     13.4     15.0
h               265.4     73.9    137.3    266.4
Sat              0.56     0.27     0.44     0.25

Here are the Saturation values, plus the % Sat changes and absolute hue changes, resulting from using various “make it pretty” darktable algorithms:

Base Curve:       1        2        3        4
Sat               0.53     0.26     0.42     0.23
% Sat change      5.66     4.72     5.22     5.52
abs. hue change   1.00     0.10     0.00     0.60

Exp+Unbreak:      1        2        3        4
Sat               0.15     0.11     0.19     0.07
% Sat change     73.44    58.59    56.27    73.68
abs. hue change  13.10     4.50     0.10     5.20

Exp+Unbreak+ToneCurve:
                  1        2        3        4
Sat               0.36     0.22     0.38     0.16
% Sat change     35.53    19.01    13.84    35.29
abs. hue change  10.00     3.70     0.50     4.50

Exp+Filmic:       1        2        3        4
Sat               0.36     0.27     0.47     0.16
% Sat change     36.41     2.11     5.51    36.28
abs. hue change   1.90     0.90     0.70     0.80

So in terms of “messed up colors” using LCh hue and “Saturation” changes as the metrics for “messed up colors”, here’s a verbal summary:

  • The particular Base Curve that I used didn’t mess the colors up very much at all. Other Base Curves no doubt can make really messed up colors, or only slightly messed up colors, etc., depending on the specific Base Curve.

  • Exposure + Unbreak, without adding an “S” curve via the tone curve module and without adding Chroma back in using RGB, of course produces really, really, really messed-up colors. The only reason I included this output in my metrics is to show that the colors - the hue and Saturation values - are already “messed up” before using the Lab-based Tone Curve.

  • Exposure + Unbreak + the Lab Tone Curve and using RGB to add chroma still produces fairly messed up colors. This doesn’t mean the resulting image isn’t useful as a starting point for further editing - that’s surely image-dependent and also a matter of aesthetics. However, personally I wouldn’t use this darktable module/process precisely because of the resulting “messed up colors”, and specifically the large changes in hue. I prefer to mess up my image colors deliberately in post :slight_smile: rather than accidentally as a result of an automatic tone-mapping algorithm.

  • Exposure + Filmic has minimal LCh hue changes, and for the two “ground” sample points the “Saturation” changes are also minimal. But for the “cloud and sky” sample points, the “Saturation” changes are pretty high.

Personally I think the Filmic module is an awesomely nice addition to darktable:

  • I’m not fond of the flattening of contrast in the clouds but the goal of the module is to produce flattened output for further processing, and other darktable modules can be used to add back in some controlled local contrast.

  • The flattened ground tonality suggests an approach that I think will greatly improve my previous attempts to tone-map this and similar “sky-ground” images. Without examining the filmic output I doubt I ever would have pinpointed “what” was wrong with my previous attempts, even though I knew beyond any doubt that “something” was wrong.

  • For this particular raw file, the “filmic” code in the module that keeps the highlights from oversaturating - measured using LCh “Saturation” - produces undersaturation in the darker portions of the clouds. Imho the sky also looks undersaturated. But of course one can add additional saturation to the output of the filmic module using other darktable modules. And maybe there’s a better set of parameters than the ones I used for the filmic module. Here’s the XMP file (has the unflattened history):
    080724-1822-100-1479.cr2.xmp (15.8 KB)

Now here is a plea for people to please, please, please stop throwing accusations like “messed up colors”, “silly”, and “nonsense” around when they are talking about code written by other people.

Developers are the resource without which we wouldn’t have free/libre software. Calling someone’s code “silly” is just not conducive to keeping developers happy.

The claim has been made that without insults nobody changes their code. This is just not true; I know this from personal experience working with GIMP. Examples and submitted code go much, much farther than insults.

Instead of casting aspersions on code written by other developers, please give examples showing shortcomings. Spend some time evaluating the strengths and weaknesses of new modules against the strengths and weaknesses of existing modules. Maybe add some cautionary notes to documentation along the lines of “This algorithm used this way will result in such and such changes in colors, so maybe use this other algorithm instead.”


hi @Elle, thanks for your experiment. a couple of questions:

  1. can you post the base curve you used?

  2. do you consider both metrics equally important? I know almost nothing about all this, so I was wondering whether hue differences could be more noticeable/disturbing than saturation ones… what do you say?



Thanks! I like this post a lot. I just have a couple of questions. Hope they are not off-topic:

  1. As far as I understand, every kind of non-linear transformation in RGB color space will cause (non-linear) hue shift. Is this correct?

  2. What if the base curve is applied in lightness or HSV-lightness blend mode? It should not change the color information of the photo but I have noticed that the outcome is not a pleasant and natural looking photo even if I try to “fix” its saturation using other iops (at least for portraits). Any thoughts?

I couldn’t agree more. And this holds not only for software developers.
Even Linus seems to have changed his mind about this recently :smile:

It’s in the XMP file at the end of my comments above on the filmic module. If you open the file and search for “basecurve” there are two entries, one for applying the base curve, and a later one for disabling it. I don’t know how to ask darktable to output separate XMP files, but it does allow saving the entire history through successive “open, export, close” edits. So the base curve module is deactivated, but the curve is still there. Here’s a screenshot:


Notice the base curve is as linear as I could make it before curving over to meet the top right corner - as @msd notes, non-linear transforms in RGB cause hue and saturation shifts.
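This is easy to verify for yourself in a few lines of Python - here a toy per-channel power curve stands in for a base curve, with HSV hue and saturation from the standard library:

```python
import colorsys

def per_channel_curve(rgb, exponent=2.2):
    """A toy non-linear per-channel curve, standing in for a base curve."""
    return tuple(v ** exponent for v in rgb)

before = (0.2, 0.4, 0.8)            # an arbitrary bluish color
after = per_channel_curve(before)

hue_before, sat_before, _ = colorsys.rgb_to_hsv(*before)
hue_after, sat_after, _ = colorsys.rgb_to_hsv(*after)
# The curve changes the ratios between channels, so both hue and
# saturation drift away from the original color.
```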

I think the problem is actually that the hue shifts are less immediately noticeable than the saturation shifts, simply because people will tolerate a broad range of hues for any given object as believable. So unless the hue shifts are such that, for example, a person’s face turns magenta, or the sky turns really purple or all the way to cyan, one is not so likely to notice that the originally captured image colors have been altered. Whereas we readily recognize saturation shifts and easily make judgments such as “more saturation needed here” and “less saturation needed there”. Also, adding/reducing saturation is relatively easy.

That we don’t immediately “know” there have been hue shifts doesn’t mean that random hue shifts don’t matter, but rather that one might “feel” something is wrong with an image without being able to pinpoint “what” is wrong. But once you see the image side by side with and without the hue shifts, the difference is often very obvious.

One way to deal with hue shifts is to use a “hue lock” layer - I think even the various HSX color spaces do a pretty good job as “hue lock” layers, but this is just an “I think” - I have not actually tested it.

Another approach is to use GIMP’s “Luminance” blend mode, which is easy to port to other image editing software. Luminance blend applies tonality changes but keeps hue constant and raises/lowers Chroma proportionately with Lightness, thus keeping LCh “Saturation” constant. PhotoFlow has this Luminance blend mode, and for coders who are averse to Lab/LCh, this blend mode doesn’t actually use Lab/LCh.

As you’ve already noted, HSV “V” often produces not so pretty results when used as a way to keep the colors unchanged. And likewise LCh/Lab “Lightness” blend mode also has drawbacks when the image tonality has been modified substantially. This is because LCh Chroma is not the same as LCh “Saturation”. When Lightness is raised and Chroma kept constant, “Saturation” falls, and vice versa. So extreme tonality changes blended with the original image using “Lightness” blend mode can look pretty funny, requiring further editing to bring Chroma up or down proportionately to the change in Lightness. Which is why Luminance blend mode is a very nice thing to have.
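A toy numeric example of the difference, with made-up LCh values, just for illustration:

```python
# LCh "Saturation" = Chroma / Lightness
L0, C0 = 40.0, 20.0          # made-up original Lightness and Chroma
sat_original = C0 / L0       # 0.5

L1 = 70.0                    # a tone edit raises Lightness substantially

# "Lightness" blend: Chroma held constant, so Saturation drops
# (the washed-out look described above).
sat_lightness_blend = C0 / L1

# "Luminance"-style blend: Chroma scaled in proportion to Lightness,
# so Saturation is preserved.
C1 = C0 * (L1 / L0)
sat_luminance_blend = C1 / L1
```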

As a cautionary note, a huge advantage (edit: “huge” if it’s a problem with your specific image, it’s not always a problem of course!) of the way the darktable Filmic and “unbreak profile” modules are currently coded, is that avoiding Luminance blends and LCh Hue blends avoids the problem of RGB channel values being driven out of the range 0.0f to 1.0f. This problem affects some hues more than others, depending on the image and the RGB working space’s color gamut, and of course affects shadows and highlights more than midtones. I’ve been wishing for a simple “one click” way to deal with this problem. I think @briend has worked on some possibilities.

Yes, and not just hue shifts but also saturation shifts. It’s easy to demonstrate this for yourself. Open an image in GIMP, add some sample points set to LCh, then open the Curves dialog and keep an eye on the LCh hue, and also on whether L and C are changing proportionately. There is an old (2007) and in places rather funny discussion over on the Luminous Landscape forums about whether this really did happen and, if so, whether users actually like it or not.

Of course in the LL forum they were talking about Photoshop’s HSL blend mode (did I get the initials right? Is it HSL?), which isn’t the same blend mode as what is normally meant by HSL. I can’t find the calculations for Photoshop HSL(?) online, though I’ve looked - anyone know/have a link to the actual calculations?


@Elle This is likely suitable for a new thread. I recall us discussing luminance and lightness in the LCh thread way back. I know what LCh lightness is, but I still have trouble understanding how to use, manipulate, and blend luminance; e.g., with G’MIC, which is what I primarily use. I.e., I know the why but not the how.

Here’s hopefully a clearer explanation of why random, unintended hue shifts should be avoided during processing:

Given scene colors such as green grass, red stop signs, magenta desert sands, yellow buses, etc., the precise hue of each object in the scene depends on the sum total of all the wavelengths the various surfaces can reflect, and also on the sum total of all the wavelengths in the light source(s) illuminating the surface in the first place - the surfaces can’t reflect what’s not there to be reflected.

If your post-interpolation processing algorithms randomly shift hues around, it’s “as if” the objects in the original scene were suddenly illuminated through little color filters of varying colors and strengths - everything in the scene has its own added individual “color cast” - the connection between the light source(s) illuminating the scene and the colors in the scene has been damaged.

Which is not to say that the original digital capture was 100% colorimetrically accurate - of course it’s not. But it’s the best we can do as photographers. Making custom camera input profiles for various lighting conditions helps. Using a camera with better color capture capabilities helps.

But whatever the accuracy of the originally captured colors might be, randomly stirring all the hues around during processing is something best avoided as whatever degree of color accuracy there was in the original capture is thereby rendered worse than it was.

Similar considerations apply to algorithms that randomly shift saturation. But often (not always - it depends on the image) saturation shifts are easier to see and fix.

Edit: I don’t know how or why this post was in reply to @elle :slight_smile: - it was supposed to be in reply to @agriggio.


Hi @afre - I don’t use G’MIC - does it have a luminance blend that corresponds to the luminance blend in GIMP?

“Luminance” is one of those words that gets tossed around fairly randomly in image editing software.

In GIMP and PhotoFlow, the Luminance blend mode blends the luminance information - the “Y” in XYZ - of the top layer with the resulting colors of all the underlying layers, and does this in a way that preserves the hue and saturation (ratio of Chroma to Lightness) of the underlying layer colors.

Thanks for the explanation. Good to see that this matches with my gut feeling :slight_smile:

Like this?

Top image: (image)
Bottom image: (image)
Result: (image)

A lot of people don’t realize that “Y” of XYZ includes information about “how green” as well as “how light”. So if you take X and Z from the original color and Y from a second color, you get a new color that’s either greener and brighter or less green and less bright than the original color. Spend some time playing with ArgyllCMS xicclu and you’ll see what I mean.
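If you don’t have xicclu handy, the standard D65 XYZ-to-linear-sRGB matrix (from the sRGB spec, IEC 61966-2-1) shows the same thing: hold X and Z fixed, raise Y, and the green channel climbs while red falls:

```python
def xyz_to_linear_srgb(X, Y, Z):
    """CIE XYZ (D65) -> linear sRGB, standard matrix."""
    r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

r1, g1, b1 = xyz_to_linear_srgb(0.3, 0.3, 0.3)
r2, g2, b2 = xyz_to_linear_srgb(0.3, 0.5, 0.3)  # same X and Z, bigger Y
# Raising Y alone makes green rise and red fall: greener and brighter.
```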

That’s interesting, given that the Y (luminance) of the xyY space is simply the Y from XYZ - or is my math so bad I can’t see the decoupling?? :smile:

From your article:

x = X / (X+Y+Z)

y = Y / (X+Y+Z)

Y = Y

Edit: and so, if I consider the whole transform, the Y is being used in the x and y equations, so I’d guess Y is just taking on a new meaning, even if its value doesn’t change…
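The transform and its inverse, as a quick Python sketch (undefined at X = Y = Z = 0, and for y = 0 on the way back):

```python
def xyz_to_xyY(X, Y, Z):
    """CIE XYZ -> xyY: chromaticity coordinates plus luminance."""
    s = X + Y + Z
    return X / s, Y / s, Y   # (x, y, Y)

def xyY_to_xyz(x, y, Y):
    """Inverse transform: xyY -> CIE XYZ."""
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z
```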

So how does luminance blending work? I don’t think G’MIC has it, at least not that I know of. I would like to know how to do it. Hopefully it isn’t too hard, because my math and coding are entry level.

I’m curious also - I went digging through the GIMP and PhotoFlow git repos looking for it, no joy.

The PhotoFlow code can be found here: PhotoFlow/src/base/blend_luminance.hh at stable · aferrero2707/PhotoFlow · GitHub


@Carmelo_DrRaw - thanks for posting the link to your code! I haven’t looked at it yet, but I bet it’s easier to read than the corresponding GIMP code.

Here’s a link to the original bug report, with some Python code and several patches for various versions of GIMP. The bug report was filed shortly after the LCh layer blend code was added to GIMP, and the specific complaint was precisely what @msd noted above - Lightness blend mode produces desaturated results if the blended image is considerably lighter than the original image:

Even though I turned the submitted code into a patch that fit in with the rest of GIMP’s layer blend code, and I know what it does in practice as a layer blend mode, I never did understand how the code does what it does, so if anyone can put this into words, that would be great!

@ggbutcher - regarding XYZ and xyY, apparently as soon as the color scientists had set up XYZ they said “Oh, we need a way to separate out Luminance” so they devised a transform and came up with xyY:

Douglas Kerr’s website is a treasure trove of information on “optics and photography (especially digital photography)”, and also other stuff - he’s a retired telecommunications engineer with wide-ranging interests. The table of contents is well worth periodically perusing - a bunch of new articles have been added since I last checked, which was several years ago:


+1 to that. I’ve encountered Mr. Kerr’s writings in a number of my “google journeys” - well-written with respect to both technical conciseness and ‘explainability’. Thank you @Elle, I now have 14 PDFs queued up in my Acrobat Reader for casual perusal, starting with the CIE_XYZ article you referenced, and ending with this one:

My code should do exactly what was proposed in the bugzilla discussion, i.e.:

  • linearize the RGB values if needed
  • compute the Y_bottom and Y_top luminance values of the bottom and top layers that are being blended
  • compute the ratio R = Y_top / Y_bottom
  • scale the bottom linear RGB values by R (what is called “exposure scaling” in the bugzilla discussion), and assign the resulting RGB values to the blend output
  • re-introduce the gamma encoding in the blend output, if needed

(Notice that the Y values are always computed using the actual primaries of the colorspace that is assigned to the RGB values of the bottom and top layers.)
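A minimal Python sketch of those steps - my paraphrase, not the actual PhotoFlow code - assuming sRGB-encoded inputs in [0, 1], so the Rec. 709/sRGB luminance weights apply (PhotoFlow, per the note above, uses the actual primaries of the assigned colorspace instead):

```python
def srgb_to_linear(v):
    """Undo the sRGB transfer curve ('linearize the RGB values')."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Re-apply the sRGB transfer curve ('re-introduce the gamma encoding')."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055

def luminance(rgb):
    """Y for linear RGB with sRGB/Rec. 709 primaries, D65 white."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_blend(bottom, top):
    """Give the bottom layer the top layer's luminance by 'exposure scaling':
    multiply every bottom channel by Y_top / Y_bottom, which preserves the
    bottom layer's hue and its Chroma-to-Lightness ratio ('Saturation')."""
    b_lin = tuple(srgb_to_linear(v) for v in bottom)
    t_lin = tuple(srgb_to_linear(v) for v in top)
    y_b, y_t = luminance(b_lin), luminance(t_lin)
    ratio = y_t / y_b if y_b > 0.0 else 0.0
    # Scaled values may exceed 1.0 (out of gamut) for large ratios -
    # the out-of-range issue mentioned earlier in the thread.
    return tuple(linear_to_srgb(v * ratio) for v in b_lin)
```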

@Carmelo_DrRaw - Thanks! That’s such a nice, clear explanation! I could see that the ratio of Y values was calculated, but as for what was done with that ratio - well, I never did claim to be very good (or patient) at reading code :slight_smile: .

@ggbutcher - I likewise saw a bunch of new articles, mostly on cameras and sensors, that are now on my “to read” list. Also that “Principles of Steam Locomotive Valve Systems” article looks pretty interesting!
