darktable master gets a JPEG default pipeline order

During the dt 3.6 release party, I think it was @betazoid who reported that many Fuji users post-processed OOC JPEG rather than RAW.

The default v3.0 darktable pipeline assumes linear input, that is, linear RGB both before and after the input color profile. If you feed it non-linear inputs (JPEG, TIFF, PNG, etc.), everything before the input color profile module (the module that removes the non-linear encoding) operates on non-linear data, while pretty much all the modules placed there expect linear input.
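To see why the order matters, here is a quick Python sketch using the standard sRGB piecewise transfer function (this is an illustration, not darktable's actual code). Doubling a pixel value before the encoding has been removed behaves very differently from doubling it after:

```python
def srgb_to_linear(v):
    """Standard sRGB decode (IEC 61966-2-1 piecewise curve)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Standard sRGB encode, inverse of the above."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

encoded = 0.5                      # a mid-grey pixel as stored in a JPEG

# Wrong: +1 EV applied directly to the encoded value, i.e. what happens
# when a module expecting linear data runs before the input color profile.
wrong = 2.0 * encoded              # -> 1.0, already clipped to white

# Right: decode to linear first, apply the exposure, re-encode.
right = linear_to_srgb(2.0 * srgb_to_linear(encoded))
print(wrong, right)
```

Applied to the encoded value, the +1 EV push sends mid-grey straight to clipped white; applied to the decoded linear value, the same correction lands around 0.69 after re-encoding.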

An additional pipeline order preset was added to master today, adapting the default v3.0 order to non-linear input: basically, it moves every module that expects linear input from before the input color profile to after it.

The default pipe order gets renamed “v3.0 RAW”, the new one is called “v3.0 JPEG”. When opening new files, the RAW order will still be the default and it needs to be changed manually.


That’s really helpful, Aurélien. I’m sure Fuji users will be extremely grateful!

Not only Fuji users: recently I have been editing a lot of slide and negative scans, which are not linear either. In general I edit a lot of non-raw images in darktable, as its capabilities are unmatched. So thank you very much, @aurelienpierre!

One question though:

As the original transfer function of these images is not always known, is there some safe general assumption, or does one have to change settings for each particular input material?

Edit: I thought I should describe better what I mean: the input colour profile will only reverse the nonlinear mapping of the file format, but what about a potential nonlinear mapping of the content itself? The most prominent would be a black offset, especially for scans that expose to the right. Furthermore, there might be some inherent tone mapping, as the dynamic range of the medium is probably lower than the scanner’s. Do we have to account for these when using “linear” tools to avoid issues, e.g. in the compressed highlights of a slide scan? I used to use legacy tools with scans to circumvent such issues, but it’s probably time to retire such half-baked solutions …

And another one: How does this change affect negadoctor?

Thanks for the mention and for picking up my feedback. But I am sure it’s not only Fuji users. Recently I was just surprised that more and more photographers are editing JPEGs; what a crazy thing to do! Not sure how to interpret or explain this.


The original transfer function of the images is known: it’s written in the ICC TRC tag in any picture properly tagged with an ICC profile. Of course, I’m talking about the transfer function of the color space in which the image is saved.

There is no way to undo the tone response of films, unless you create a special linearization LUT for your film emulsion, in which case you still have to undo the color space TRC first. There is also no way to undo tone curves and similar edits done on the digital image before saving it.
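For the curious, such a film linearization LUT could be sketched like this in Python. The sample points below are entirely invented for illustration, and the input values are assumed to have already had the color space TRC removed (as noted above, that order matters):

```python
import bisect

# Hypothetical measured film response, as (input, output) sample pairs.
# In a real workflow these would come from scanning a calibrated step
# wedge of the same emulsion; the numbers here are made up.
lut_in  = [0.0, 0.1, 0.3, 0.6, 1.0]     # TRC-decoded scan values
lut_out = [0.0, 0.02, 0.12, 0.45, 1.0]  # linearized output values

def apply_lut(x):
    """Piecewise-linear interpolation through the LUT sample points."""
    i = bisect.bisect_right(lut_in, x) - 1
    i = max(0, min(i, len(lut_in) - 2))          # clamp to valid segment
    t = (x - lut_in[i]) / (lut_in[i + 1] - lut_in[i])
    return lut_out[i] + t * (lut_out[i + 1] - lut_out[i])
```

This only linearizes the emulsion's density response; as said above, it cannot recover tone curves or other edits baked into a digital image before saving.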

It does not. Negadoctor has always been inserted after input color profile for this reason.

They either like to suffer or have no idea what they are doing.


Should the new JPEG pipeline order be used for scanning negatives with a digital camera because the negative is non-linear? Or should we use the default RAW order because you’re “scanning” with a digital camera?
I’m assuming the latter because negadoctor handles the negative and the default pipeline is handling the digital camera part of the workflow. Is that right?

If you’re scanning with a camera producing raw files, then the negative and its backlight are your scene. So you can use the scene-referred workflow to get a proper representation of your scene, and that’s the input for negadoctor.


Yes, but what does that mean for the choice of tools: is it better to use the legacy algorithms, since they originate from a time when scanned material was the usual source? Or is a linear assumption as good as any? And what about the black offset: where is it best corrected? I would assume as early as possible, but what does that mean in practice?

Oh God, I thought it was clear enough.

  • If your input image has been saved in an integer file format like JPEG/PNG/TIFF, use the JPEG order.
  • If your input image is a linear RAW (or a 32-bit TIFF/EXR saved with no TRC/OETF/gamma), use the RAW order.

What is in the image is irrelevant. What matters is how it is encoded.

Non-linear spaces are:

  • sRGB (composite gamma),
  • Adobe RGB (gamma 2.2),
  • ProPhoto RGB (gamma 1.8),
  • any display RGB.

Linear spaces are:

  • sensor RGB (unprofiled),
  • linear Rec2020,
  • linear Rec709,
  • linear ProPhoto RGB,
  • linear XYZ.

For obvious reasons, it’s impossible to save pictures in 8 bits with linear spaces, so anything below 12 bits/channel is most definitely non-linearly encoded. But files at 16 bits/channel can be either, so you need to check the ICC tag to know.

  • If you are using a DSLR to scan film, then your input is RAW.
  • If you are using a high-end scanner that outputs 16-bit “RAW” TIFF (like SilverFast), then your input is RAW.
  • If you are using a low-end scanner that outputs 8- or 16-bit JPEG/TIFF/DNG, then your input is most likely non-RAW (though there is a doubt for TIFF and DNG, so check).

Remember that no scene-referred concept applies when you work on digitized film, because the relationship to the scene is already long gone, so there is no reason to overthink it. The Cineon model used in negadoctor expects linearly-encoded scans, but of course the color content is non-linear (still, one non-linearity is better than two). Scanning film is working on an image of an image of the scene, so in this context we only care about the encoding of the image of the image, not about the direct image of the scene.

TL;DR : pipeline order depends solely on the input file type, linear/RAW or non-linear/non-RAW.


If this was an answer to my question, I guess you entirely misunderstood what I was asking. Never mind, I should not have hijacked this thread to ask for practical advice but rather started a new one. Sorry for the noise. But working with an

is not trivial for me, at least from a theoretical perspective, and

that unfortunately seems to be my super power :expressionless:


In my less educated days a year ago*, I saved files in 8-bit linear. It seems technically possible to me. By “impossible”, do you mean inadvisable?

*A year ago I got interested in photography, and a year later (almost to the day when I got a camera), both my skills in taking photos and editing, have grown a lot. In regards to editing, much of what I have learned is through this forum and using darktable. So, hats off to you and all others involved in the development of darktable, and this community!

Anyway, the JPEG pipeline sounds good.

Which almost never actually happens for any SOOC JPEG.

Basically all cameras I’ve ever seen use some sort of tone curve in addition to the sRGB transfer function, and do NOT indicate that in ICC tags.

A most extreme example would be Sony S-Log picture profiles. Gamut and transfer function are NEVER tagged in an ICC profile. The closest you’ll get is a Picture Profile EXIF tag that just says “this was shot in S-Log2 mode”, leaving it up to the user to make a matching ICC profile and embed it.

I’m 95% certain Fuji doesn’t embed an ICC tag that matches the actual camera behavior either.

We know for certain from the thread on Panasonic V-Log that Panasonic cameras don’t embed an ICC profile that matches the actual gamut and transfer function the camera used.

If the camera does something fairly consistent/predictable that maps well to an ICC profile (a gamut transform on linear data followed by a per-channel tone curve, which IS common), one can use an approach such as Debevec’s algorithm to reverse engineer the TRC, followed by a ColorChecker to reverse engineer the gamut. Or, if the camera manufacturer published the actual mathematical transfer functions (S-Log2/3, V-Log, etc.) and gamuts (S-Gamut, etc.), then you can easily make a matching ICC profile.
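The full Debevec approach is beyond a forum post, but if the camera's curve really were a plain power function, recovering its exponent from a few patch measurements is a one-line fit. All numbers below are synthetic, with the "camera" simulated as a pure gamma 2.2 encode; real cameras add an S-curve on top, which is exactly why this toy version would not be enough:

```python
import math

# Synthetic "measurements": linear patch values (e.g. from a raw
# reference) and the values the simulated camera wrote to the JPEG.
linear  = [0.05, 0.1, 0.2, 0.4, 0.8]
encoded = [v ** (1 / 2.2) for v in linear]

# If encoded = linear^(1/g), then g = log(linear) / log(encoded) for
# each sample; average the per-sample estimates.
estimates = [math.log(l) / math.log(e) for l, e in zip(linear, encoded)]
gamma = sum(estimates) / len(estimates)
print(round(gamma, 3))  # -> 2.2
```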

But the majority of the time you’re going to have a JPEG that has had an unknown S-curve applied to it and can, at best, be assumed to be sRGB transfer function and gamut as an extremely rough approximation without some pretty serious reverse engineering. You definitely can NOT assume the camera tagged the JPEG correctly.

I’m not sure about Canon, but given that people are trying to reverse engineer their picture styles format (which they do apparently embed? see Reverse Engineering Picture Styles ), it’s unlikely they’re embedding an appropriate ICC that actually matches camera behavior.

The artistic tone curves and other vendor-log tricks applied by camera manufacturers to maximize bit-depth usage are none of our business here. Nobody spoke about linearizing JPEGs to scene-referred; we talked only about undoing the sRGB or Adobe RGB TRC. That, for example, makes exposure corrections behave a bit more predictably. OOC JPEGs still have brightness and contrast corrections applied anyway.

JPEG should still be considered finished products, and not working material. If people choose to ignore that, the rest is on them.

Shoot RAW, guys.


With 8 bits in linear encoding, you can encode no more than 8 EV of dynamic range, and your lowest EV is pretty much ON/OFF (1 or 0). Expect bad shit.
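To put numbers on that (pure back-of-the-envelope Python, not darktable code):

```python
import math

bits = 8
codes = 2 ** bits - 1  # 255 usable non-zero levels in a linear encoding

# Total dynamic range between the smallest and largest non-zero codes:
dr_ev = math.log2(codes / 1)        # ~ 8 EV
# The darkest stop is covered by a single step: code 1 -> code 2 is +1 EV,
# so the deepest shadows are effectively on/off.
darkest_step_ev = math.log2(2 / 1)  # exactly 1 EV between the two codes
# For comparison, 16-bit linear spans about log2(65535) ~ 16 EV.

print(round(dr_ev, 2), darkest_step_ev)
```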