Applying WB to jpg/tiff in darktable and RawTherapee

RawTherapee and darktable are primarily raw converters, with their toolchain pipeline (RT) or pixelpipe (dt) suited to the conversion and development of raw files. One consequence of that is an early application of White Balance, which obviously makes sense for raw files – but does it also make sense when working with already-converted files (jpegs, tiffs, etc.)?

I came across this issue while playing around with a swapped-channels ICC profile, as a result of this discussion on DxO PhotoLab’s forum. To illustrate my point, let me show you a regular sRGB file:


I converted this file to a swapped-channels sBRG profile I made with DisplayCAL (powered by ArgyllCMS and @gwgill):


The converted file with the embedded sBRG profile can be found here. Both files should look almost identical in any colour-managed application.

darktable doesn’t render the preview thumbnail correctly for me in the lighttable view, but it looks OK in the darkroom. RT is fine as well, but the problem starts when I try to modify the white balance: the Temp and Tint sliders work on the wrong channels, because WB is applied before the input profile. So when I try to warm the image up and move the Temp slider right, it cools the image down instead:

So here’s my question: is it appropriate to apply WB before the input profile to a regular RGB file? Would it be a problem with e.g. ProPhoto RGB files? I know that Lightroom / Adobe Camera Raw applies WB to RGB files after the input profile. DxO PhotoLab and Capture One behave like darktable and RawTherapee – they follow the raw pixelpipe / toolchain pipeline.
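For intuition about why the sliders act on the wrong channels: a diagonal matrix of WB multipliers and a channel-swap matrix don’t commute, so applying WB before a swapped-channels input profile lands the gains on the wrong channels. A minimal numpy sketch, with hypothetical multipliers and a pure red/blue swap standing in for the sBRG profile:

```python
import numpy as np

# Hypothetical "warm up" multipliers: boost red, cut blue.
wb = np.diag([1.3, 1.0, 0.7])

# A red<->blue swap, standing in for the swapped-channels input profile.
swap = np.array([[0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])

pixel = np.array([0.5, 0.5, 0.5])

# WB applied before the input profile: the red boost ends up on blue
# after the swap, so the image gets cooler instead of warmer.
print(swap @ wb @ pixel)  # [0.35, 0.5, 0.65]

# WB applied after the input profile: the boost lands where intended.
print(wb @ swap @ pixel)  # [0.65, 0.5, 0.35]
```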

Edited to add: I know that theoretically one can re-order the pixelpipe in the latest darktable, but for some reason I couldn’t move the WB module after the input profile.

I thought about this at one point too. I hope I am not a target of some righteous indignation as a result of the following statements, as I am probably not being concise enough, or am plain wrong.

My thought on this matter is to offer the option of an intermediary input profile after white balance. As can be seen in this wiki post, there are at least 2 points where a profile is necessary in raw processing. There can always be more, as a colour profile is just a shortcut for a series of matrix transforms meant to adapt colours into another working environment, correct colour balance and distribution, or compress them into a colour gamut.

So I did some messing around in rawproc with this, because I built it with such experiments in mind. What I came to is this: you can correct white balance just about anywhere in the processing chain, but the “as-shot” white balance multipliers need to be applied before the image is scaled to display white, while there’s still room for the multiplication.

I opened my test raw, applied the default raw tools, and deleted the white balance operator:


A couple of explanations are in order:

  • Yes this image is green, please don’t talk about that. :smile:
  • The first tool, colorspace:camera, simply assigns the color primaries from dcraw’s adobe_coeff. The actual transform to a display space doesn’t occur until the end of the processing chain, where the tone:filmic tool is checked (the checked tool is the one displayed), and that transform is to the calibrated monitor profile.
  • The image stays in its original numerical scale until blackwhitepoint:data, where it is scaled to fit the 0.0 - 1.0 range, with 1.0 being display white (see the sketch after this list).
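For readers who haven’t seen that step, here is roughly what a black/white-point scale does – an illustrative numpy sketch with made-up 14-bit levels, not rawproc’s actual code:

```python
import numpy as np

# Made-up 14-bit sensor values with a black level of 512 and a
# saturation (white) level of 16383; not any real camera's calibration.
raw = np.array([512, 2048, 9000, 16383], dtype=np.float64)
black, white = 512.0, 16383.0

# Scale so black maps to 0.0 and sensor saturation maps to 1.0,
# i.e. 1.0 becomes "display white" for the rest of the chain.
scaled = (raw - black) / (white - black)
print(scaled)  # [0.0, ~0.097, ~0.535, 1.0]
```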

Now, let’s stick the whitebalance tool with the as-shot multipliers in a few places and see what happens. First, after subtract, before demosaic:


This is where I usually put it, and it looks okay. Note that this requires a rawdata-specific version of whitebalance, in order to walk the mosaic properly. Now, let’s move it to after demosaic, where the image is now RGB:


Looks the same to me – although, in the screenshot, the after-demosaic application appears slightly less color-saturated. Now, let’s move it to the end, after the scale to display white:

Some of the channels in the highlights are pushed past 1.0, and they aren’t clipping well…
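That matches what the arithmetic predicts. A small sketch with made-up multipliers, to illustrate the headroom point from above:

```python
import numpy as np

# Hypothetical as-shot multipliers for a warmly lit scene.
wb = np.array([2.0, 1.0, 1.5])

# A bright highlight, already scaled so that 1.0 is display white.
highlight = np.array([0.9, 0.95, 0.7])

# Multiplying after the scale pushes channels past 1.0, and clipping
# them back changes the channel ratios, i.e. shifts the hue.
pushed = highlight * wb           # [1.8, 0.95, 1.05]
clipped = np.clip(pushed, 0, 1)   # [1.0, 0.95, 1.0]

# Multiplying while the data still has headroom, and scaling afterwards,
# keeps the ratios intact.
pre_scale = np.array([0.45, 0.475, 0.35])  # same pixel before the scale
balanced = pre_scale * wb                  # [0.9, 0.475, 0.525]
scaled = balanced / balanced.max()         # fits in 0..1, hue preserved
print(clipped, scaled)
```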

Just read @afre’s post, and I feel compelled to relate that I’ve come to think differently about the working profile since I wrote that. In my raw processing, I’ve stopped converting to a working profile; I just do all my raw work in the camera space, with a conversion to an appropriate output profile for display or saving to a file. Now, if I were to save a high-bitdepth TIFF for opening in GIMP, I’d convert to a large-gamut working profile for the TIFF.

There’s one reason I’d return to an early conversion to a working profile, and that would be if I could perfect the workflow of using a camera profile built from a non-white-balanced target shot, so that the white balance is done in the profile’s chromatic conversion. Right now, I’m just too lazy to shoot the target in my sessions… :smile:

There is a reason for a working profile and colour space. What constitutes a good profile and space is debatable, as seen in many a fiery thread. Camera space is commonly not well behaved and unfit for processing. In particular, some tips of the triangle are way out there, leading to gamut and data issues. I guess you know how to map where your camera’s primaries lie on the horseshoe thingy (sorry, dead tired).

When you work with a raw file in floating point, these conversions don’t matter that much, but the OP is about RGB files such as jpegs and tiffs, where repeated conversions can lead to banding, posterization, etc.
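A quick way to see the difference – a numpy sketch of the same round trip kept in float versus forced through an 8-bit intermediate (the 3×3 matrix is an arbitrary invertible stand-in for a profile conversion):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))  # float image, values in 0..1

# An arbitrary invertible 3x3 transform standing in for a conversion.
m = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.1, 0.0, 0.9]])
m_inv = np.linalg.inv(m)

def convert(image, matrix):
    return np.einsum('ij,hwj->hwi', matrix, image)

# Round trip kept in float: essentially lossless.
float_rt = convert(convert(img, m), m_inv)
print(np.abs(float_rt - img).max())  # on the order of 1e-16

# Round trip through an 8-bit intermediate: values snap to 1/255 steps,
# and in smooth gradients that shows up as banding/posterization.
eight_bit = np.round(convert(img, m) * 255) / 255
lossy_rt = convert(eight_bit, m_inv)
print(np.abs(lossy_rt - img).max())  # on the order of 1e-3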

It’s interesting that e.g. Capture One uses “something close to the camera space” – see their description. They also make some interesting points about how colour readouts are calculated and which colour space they reference.

That’s an interesting experiment with moving the WB in the raw processing chain.

I wonder if having a separate processing chain for tiffs/jpegs would be a better solution for programs like darktable, which are designed to operate on both raw and non-raw files. Then you wouldn’t have to worry about an exact order of operations that has to accommodate both the raw and the non-raw workflow.

My 2cts on this topic: strictly speaking, WB is only valid when applied to linear RGB values in the camera colorspace, and prior to any non-linear beautifying transform or RGB colour mixing (like a film emulation LUT).

Since in most cases it is not possible to know, and therefore “undo”, the exact processing that produced the jpeg or tiff file, the best approximation is probably to convert the RGB data to a reference linear colorspace before applying some linear multipliers to the RGB channels to alter the WB.

Moreover, in order to still be able to express the WB multipliers in terms of temperature/tint, the original multipliers must be known and undone. Otherwise only relative multipliers can be applied, and no absolute conversion to temperature/tint is possible, AFAIK.
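A minimal sketch of that approximation for an sRGB jpeg – decode to linear, apply multipliers, re-encode. The gain values are placeholders, and as noted above they are only relative gains, not an absolute temperature/tint:

```python
import numpy as np

def srgb_to_linear(v):
    # Standard sRGB decoding (IEC 61966-2-1).
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def relative_wb(img_srgb, gains):
    """Apply relative WB gains to a nonlinear sRGB image.

    `gains` are relative channel multipliers (e.g. [1.1, 1.0, 0.9] to
    warm the image up); an absolute temperature/tint would additionally
    require the original camera multipliers, which a jpeg no longer carries.
    """
    linear = srgb_to_linear(np.asarray(img_srgb, dtype=np.float64))
    linear *= np.asarray(gains)
    return linear_to_srgb(np.clip(linear, 0.0, 1.0))

warmed = relative_wb(np.full((2, 2, 3), 0.5), [1.1, 1.0, 0.9])
```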

Yes, neutralizing the non-raw file with the WB module is probably not practical from the image quality point of view (it’s better done when capturing the image, whether for an SOOC jpeg or in raw), so that leaves its use for artistic effect, which can be achieved with several other colour manipulation tools further down the line.

When it comes to raw files and WB, the one thing I miss in darktable is being able to apply WB corrections locally (with a drawn/parametric mask), when there are various sources of light in the frame.

Again, many of you are likely more informed than I, so take my comments with a sprinkle of :salt:.

What I would say in terms of artifacting is that as long as the profile conversions happen one after the other, without additional processing in between, I think it would be fine. This is also where being well behaved comes in, because some transformations and/or colour spaces have discontinuities and mathematical oddities, which would affect the outcome.
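One way to read “one after the other”: consecutive matrix conversions collapse into a single matrix, so in float nothing extra is lost between them. A generic sketch, not any particular program’s pipeline:

```python
import numpy as np

# Two stand-in 3x3 colour transforms, e.g. input profile -> working
# space, then working space -> output. The values are illustrative.
a = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
b = np.array([[1.1, -0.1, 0.0],
              [0.0, 1.0, 0.0],
              [-0.1, 0.0, 1.1]])

pixel = np.array([0.2, 0.5, 0.7])

# Applying the transforms back to back in float...
step_by_step = b @ (a @ pixel)

# ...is the same as applying the pre-multiplied matrix once, so nothing
# is lost between the two conversions.
combined = (b @ a) @ pixel
print(np.allclose(step_by_step, combined))  # True
```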

These links don’t appear to have any technical data. My guess is that they have specific working/output spaces for each supported camera, where the white point is slightly off and the chromaticities are rotated to fit real colours better than sRGB does.

To this, I have 2 objections:

1. We won’t have a general-use working / output profile.

2. It would require a converter that would need to be FLOSS-compatible, and now you have another set of profiles to deal with.

Right now, we rely on coefficients and black and white points that can be adjusted when they are off. If you introduce this adaptive, colour-twisting approach, I think it would unnecessarily complicate our processing workflow. I would rather use a larger (or at least large enough) working colour space, like Rec2020.
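For reference, getting from linear sRGB into a linear Rec2020 working space is just two of those matrix hops. A sketch using rounded matrices from IEC 61966-2-1 and ITU-R BT.2020, with the inverse computed numerically:

```python
import numpy as np

# Linear sRGB (D65) -> XYZ, rounded values from IEC 61966-2-1.
srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# Linear Rec.2020 (D65) -> XYZ, rounded values from ITU-R BT.2020.
rec2020_to_xyz = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])

# Linear sRGB -> linear Rec.2020: up through XYZ, then back down.
m = np.linalg.inv(rec2020_to_xyz) @ srgb_to_xyz

linear_rec2020 = m @ np.array([0.8, 0.3, 0.1])
# Every sRGB colour lands inside Rec.2020, so no clipping is needed here.
```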

If you really want the perfect characterization, then handcraft your profiles. Heck, characterize to your heart’s content the camera space and the OOC JPG space and the spectral sensitivities, under every camera and scene setting and condition. That is a lot of ands, and of course I am joking: who has the time and skill for that?


In the end, the OP’s question is based on a channel-swap edge case. Perhaps this will become more of an issue as more fancy and odd mobile devices come out. If it is a simple swap, then perhaps it might be prudent for the user to extract the Bayer data from the raw file, swap the intensities manually and then repackage it into a DNG file. Or, in the case of channels, just swap them. If I recall correctly, OpenCV operates in BGR, and when I played with it long ago I had to remember to swap channels or pixels. Very annoying!
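For what it’s worth, the swap itself is a one-liner once the image is a numpy array (OpenCV’s own cv2.cvtColor with COLOR_BGR2RGB does the same thing):

```python
import numpy as np

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255  # fill the first channel (blue, in OpenCV's convention)

# Reversing the last axis turns BGR into RGB (and vice versa).
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # [0, 0, 255]: the blue value now sits in the RGB blue slot
```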

I think this is prudent when dealing with some extreme colors, because the rendering intents can only do so much to pull those colors into a decent sRGB-grade interpretation. And by ‘hand-craft’ I’m referring to building LUT color transforms with discontinuities appropriate to this specific task.

Matrix profiles, by definition, don’t have discontinuities; the rendering intent algorithm just follows a straight line from the original color toward the white point, to some point within the destination gamut. It’s the decision of where to stop along that journey that sometimes doesn’t look right…
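A toy version of that straight-line move, assuming the destination white point is (1, 1, 1) in the destination’s own RGB and stopping exactly at the gamut boundary – an illustration of the idea, not any actual CMM’s rendering-intent code:

```python
import numpy as np

def desaturate_into_gamut(rgb, white=(1.0, 1.0, 1.0)):
    """Move a colour along the straight line toward the white point just
    far enough that no channel is negative, i.e. until it enters the
    destination gamut."""
    rgb, white = np.asarray(rgb, float), np.asarray(white, float)
    t = 0.0
    for c, w in zip(rgb, white):
        if c < 0.0:
            # Solve (1 - t) * c + t * w == 0 for t.
            t = max(t, -c / (w - c))
    return (1.0 - t) * rgb + t * white

# A wide-gamut green that fell outside the destination after a matrix
# conversion (illustrative numbers): the negative channel marks it as
# out of gamut.
print(desaturate_into_gamut([0.1, 0.9, -0.2]))  # ~[0.25, 0.9167, 0.0]
```

The only free parameter is where along the line to stop – which is exactly the judgment call mentioned above.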

Edit: Oh, and I still think incorporating white balance in the camera profile can help retain colors, as standalone WB multipliers seem to be more destructive in that regard than doing the correction along with the camera-to-whatever colorspace transform… ??

We are a little off topic; I will finish my thoughts with one more post (then we may start another thread if need be). As I said, I am quite loose with my discussion and might be wrong. My take is this: the transforms themselves are not the problem; the decision process is, not just where the journey ends. Try doing a round trip to see what I mean: data and relationships are lost, and we hope to minimize that where possible.

As for sticking with the camera colour space, I think this would only work if you are careful with your processing, but you would still have to face the final gamut compression problem. Let me break down what I mean.

The first part of the statement refers to the fact that image processing of any sort pushes and pulls tones all over the map, and also outside of gamut. If your camera space isn’t well behaved, or is too wide, colours become even wilder and more difficult to tame with each step, which is the second part of my statement.

In terms of gamut compression, the more of it that happens at the end, the more detail and tones end up squashed and unnatural. @jdc is a superstar, so I won’t say anything about RT’s special colour tools. As for @Carmelo_DrRaw’s gamut compression, though great, the modules tend to make the images too colourful or odd in some manner, at least in my opinion. If you have a less typical profile, these problems are bound to be greater.

Your decision to keep the camera profile is just like my decision to develop my raw files into a linear Rec2020 working space but not convert the result to sRGB or append a colour profile. Viewing the custom nonlinear Rec2020 in standard sRGB is fine and looks good to my eyes, but that isn’t the standard, encouraged approach. Perhaps your camera has a good profile to begin with. :slight_smile: