RAW developer and other modules

I have been meaning to ask questions about the settings in RAW developer. You may have noticed that my experience and feedback generally revolves around it and other simple modules. So, to start:

RAW developer → Color tab

  1. Where does the standard profile come from?
  2. What are standard, perceptual and linear encodings?
  3. To clip or not to clip: the when and why.

This is the standard “Adobe color matrix” profile derived from DCRAW (AFAIK). It should correspond to the “standard color matrix” option in Darktable.

In the version of photoflow you are using, the color management is implemented in a way that “decouples” the gamut and gamma encoding of the colorspace in use. The gamut is defined by the RGB primaries, which specify the reddest, greenest and bluest color that a colorspace can represent. The gamma encoding is specified by the tone response curve used to represent the RGB values. Typical TRCs are the gamma 2.2 used by the AdobeRGB1998 colorspace, the gamma 1.8 used by ProPhotoRGB, and the piecewise curve used by the sRGB colorspace.

A special case is the linear TRC, which does not apply any gamma-like encoding.
The L channel in the CIELab colorspace uses yet another gamma encoding, which is called perceptual because it tries to represent the values in equally-spaced intervals of perceived lightness. For example, mid-gray is represented in this case exactly by 50% lightness, while it corresponds to 18% lightness in a linear scale.
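To make the mid-gray example above concrete, here is a small sketch of the CIE L* lightness formula, which is the “perceptual” encoding being described. The function name is mine, purely for illustration:

```python
# Sketch of the CIE L* "perceptual" encoding, compared with a linear
# scale: mid-gray (18% linear luminance) lands near 50% lightness.

def cie_lstar(y):
    """CIE L* lightness (0-100) for a linear luminance y in [0, 1]."""
    delta = 6 / 29
    if y > delta ** 3:
        return 116 * y ** (1 / 3) - 16
    return y * (29 / 3) ** 3  # linear segment near black

print(cie_lstar(0.18))  # ~49.5, i.e. close to 50% perceived lightness
print(cie_lstar(1.0))   # 100.0
```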

Although each colorspace specifies the associated TRC, there is nothing that prevents the use of a different one. For example, it is perfectly legitimate to edit an image using the sRGB primaries and a linear or perceptual encoding, and photoflow gives you the freedom to do so in an easy way.

So, to answer your question:

  • standard means the TRC defined by the colorspace (for ex., gamma=2.2 in the AdobeRGB case)
  • perceptual means the TRC associated to the CIELab L channel
  • linear means no gamma encoding

I personally suggest doing most of the editing in a linear colorspace, and only switching to perceptual encoding if you need to work with overlay-like blend modes. You can do this by adding a color profile conversion layer and setting the new encoding there.

Of course, you also need to add such a conversion layer at the top of your layer stack, to finally convert the image to the output colorspace. Here, choose sRGB with standard encoding if you are saving to Jpeg.
I am currently improving the interface for this, by implementing an “image export” dialog which will allow setting the size, post-resize sharpening and output colorspace for images being exported to disk as Jpeg or TIFF.

I would recommend always clipping negative values, because otherwise they might cause a lot of trouble in subsequent edits. Negative values are due to the presence of colors outside of the destination gamut, so it is better to choose a wide-gamut colorspace like Rec.2020 or ACEScg and clip.
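A minimal sketch of that recommendation: clip only the negative (out-of-gamut) values while leaving overflow values above 1.0 untouched. The array values are made up for illustration:

```python
# Clip negatives, keep HDR highlights above 1.0 intact.
import numpy as np

rgb = np.array([[-0.01, 0.40, 1.30],
                [ 0.20, -0.05, 0.90]])

clipped_neg = np.clip(rgb, 0.0, None)  # lower bound only, no upper bound
print(clipped_neg)
```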

Overflow values are a different story. The version of photoflow you are using is designed to correctly handle high dynamic range images, in which the upper value is basically undefined (from the physical point of view, there is no theoretical limit on the intensity of light of the scene being recorded…). For example, if you push the exposure compensation upward, the brightest pixels are not lost if you do not clip, they are simply represented by values larger than 1. When editing the image further, you can compress the highlights back below 1 and recover the details for the final output image.
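This unbounded-float behaviour can be illustrated numerically. Pushing exposure drives pixels past 1.0, but the data is not lost, and a later highlight-compression step brings it back below 1. The Reinhard-style curve below is purely an illustration, not PhotoFlow's actual operator:

```python
# Exposure push followed by highlight compression in unbounded floats.
import numpy as np

pixels = np.array([0.2, 0.6, 0.9])
exposed = pixels * 2.0                  # +1 EV: 0.9 becomes 1.8, above 1.0
compressed = exposed / (1.0 + exposed)  # simple Reinhard-style compression

print(exposed)     # values above 1.0 are kept, not clipped
print(compressed)  # all values brought back below 1.0, detail preserved
```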

So my recommendation is to set the highlights reconstruction in the exposure tab to either clip or blend (in this case, the clipping happens before the exposure compensation is applied), and un-check the clip overflow values box in the Color tab.

I hope this answers all of your questions…


Thanks @Carmelo_DrRaw. And I thought you lacked the motivation :wink:


  1. Good to know that perceptual means L*. I am way too used to the perceptual vs linear paradigm, perceptual being gamma corrected, but that would not work for spaces that are already linear.

  2. At the moment, I am not clipping anything outside of the 0-1 range. Instead, I am scaling everything to fit within range. I doubt that this is the right thing to do, but I do not know what else I could do. (Before the edit, I started speculating about discontinuity and spurious data, but I will leave it to someone more qualified to help me work through this out-of-range business in another thread.)

You can give the Color → Tone mapping tool a try, using the filmic curve:


This tool should compress the highlights in a way that “looks natural”. The filmic curve is taken from here.
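The link referenced above is not reproduced here, so I cannot confirm which exact formulation PhotoFlow uses. As an illustration of the general shape, here is John Hable's widely cited “Uncharted 2” filmic curve, which has the same S-shaped, highlight-compressing character; treat it as a stand-in, not PhotoFlow's actual code:

```python
# Hable "Uncharted 2" filmic tone-mapping curve (illustrative only).

def hable(x):
    # Standard Hable constants: shoulder, linear, toe parameters.
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

W = 11.2  # linear white point: inputs at W map to 1.0

def tonemap(x):
    return hable(x) / hable(W)

print(tonemap(2.0))   # an HDR value above 1.0, compressed back below 1.0
print(tonemap(11.2))  # the white point maps to exactly 1.0
```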


I will give this module a try.

Actually, it is not the remapping that I am concerned about but whether merely shifting and scaling 0- and 1+ values back into range is appropriate; e.g., if the range is -1.534 to 1.245, adding 1.534 then dividing by 2.779 would get rid of those pesky outliers.
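The shift-and-scale idea above, with the example numbers from the post, looks like this (the sample array is made up except for the endpoints):

```python
# Remap the range -1.534 .. 1.245 into 0 .. 1: add 1.534, divide by 2.779.
import numpy as np

data = np.array([-1.534, 0.0, 0.18, 1.245])
lo, hi = data.min(), data.max()
normalised = (data - lo) / (hi - lo)

print(normalised)  # endpoints land exactly on 0.0 and 1.0
```

Note that this rescales every pixel, including in-gamut midtones, which is one reason it behaves differently from clipping or selectively compressing only the out-of-range values.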

Edit: I gave filmic a try on one image and it seemed to perform well.

  • Do you know if the algorithm clips anything?
  • Is the exposure slider different from the one in Raw developer? If so, how so?

True, but it would also introduce a hue shift, effectively changing the colours in the image… The only appropriate way to avoid negative values is to choose a working colorspace large enough to include all the colors in the image. The default Rec.2020 proposed in the linear_gamma code should be a good compromise.

When saving to Jpeg it is usually recommended to convert the image to standard sRGB. In this case, there are two options for dealing with out-of-gamut colors:

  • simply clip them during the conversion, by enabling the clip options in the colorspace conversion tool. This will happen in any case as soon as the image is converted from floating-point to 8-bit integers in the Jpeg export process
  • edit the image BEFORE the conversion, selectively reducing the saturation of areas that are outside of the sRGB gamut. The gamut warning function of the colorspace conversion tool gives a precise and quick feedback about which areas are affected by the gamut clipping.

No, it doesn’t. It actually uses an analytical formula, so it works for any floating point value. Values above 1 are brought back below 1 by the filmic tone mapping curve.

The exposure adjustment in the RAW developer module acts on the pixel values in the CAMERA colorspace, before any colorspace conversion takes place. However, as long as the working colorspace is linear, adjusting the exposure in the camera or working colorspaces is equivalent, and therefore the two exposure sliders have practically the same effect.
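The equivalence of the two exposure sliders follows from linearity: a colorspace conversion is a matrix multiply, and scaling (exposure) commutes with it. The matrix below is an arbitrary stand-in, not a real camera matrix:

```python
# Scaling before or after a linear colorspace conversion gives the same result.
import numpy as np

M = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.6, 0.1],
              [0.1, 0.1, 0.8]])      # hypothetical camera-to-working matrix
pixel = np.array([0.2, 0.5, 0.1])
k = 2.0                               # +1 EV exposure factor

before = M @ (k * pixel)  # exposure applied in the camera colorspace
after = k * (M @ pixel)   # exposure applied in the working colorspace

print(np.allclose(before, after))  # True: the order does not matter
```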

Sorry, I can only discuss color in more technical terms because I am terrible at identifying them. I guess the problem with shifting and scaling is that it affects the hue, or at least the ΔE, of every color, instead of select colors near saturation when I compress the highlights. (Hue changes can still occur in post but I suppose it is advisable to keep it at a minimum where possible, except for artistic purposes.)

The part that I need to reconcile is why positive values but not negative ones. Maybe you have already answered the question but it still is not clear in my mind.

I think in this case an example can be worth 1000 words, so I prepared a preset for you to play a bit with numerical pixel values and color transforms.

Please open any image with an embedded color profile, download this preset and load it above the image you opened.
The preset does the following:

  • converts the image to Lab colorspace
  • transforms the image into a uniform color, a saturated yellow which is barely within the Rec.2020 gamut and definitely outside of sRGB.
  • transforms the image back to RGB. Here you can choose the RGB colorspace. With sRGB, you will get a negative value in the blue channel, unless you clip the output of the colorspace conversion
  • above the RGB conversion, there is a basic adjustments layer where you can change the exposure (RGB values multiplied by a common factor) and brightness (the same value added to each of the RGB channels)
  • finally, the image data is converted back to Lab, so that you can compare the initial and final Lab values

To see how pixel values evolve, simply put a sampler anywhere in the image.

The uniform yellow color corresponds to L=50 a=0 b=64. With the initial preset, the output Lab values are identical, even if the processing goes through an intermediate sRGB colorspace. This is because the conversion to sRGB is performed in unbounded floating point mode. Out-of-gamut colors (the saturated yellow in this case) are represented with negative channel values, but are still “valid” and correctly treated when converting back to Lab.
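The preset's key observation can be reproduced numerically: the saturated yellow L=50, a=0, b=64 falls outside the sRGB gamut, so the unbounded conversion yields a negative blue channel. The matrix and white point below are the standard D65 sRGB values:

```python
# Lab (D65) -> XYZ -> linear sRGB for the preset's saturated yellow.
import numpy as np

def lab_to_xyz(L, a, b, white=(0.95047, 1.0, 1.08883)):
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def f_inv(t):
        d = 6 / 29
        return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)
    return np.array([w * f_inv(t) for w, t in zip(white, (fx, fy, fz))])

XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

rgb_linear = XYZ_TO_SRGB @ lab_to_xyz(50, 0, 64)
print(rgb_linear)  # the blue channel comes out slightly negative
```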

The first interesting thing happens when the output of the sRGB conversion is clipped, like shown in the screenshot below:

The final Lab values are now different, and the color has shifted a bit toward magenta (positive a value).

We can also see what happens when changing the exposure or brightness of the RGB data.
An increase in exposure produces an increase in saturation (higher b value), but no hue shift (the a channel stays at 0):

An increase in brightness produces a decrease in saturation (lower b value), but no hue shift (the a channel stays at 0):
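A simple numerical illustration of these two behaviours: in a linear space, channel ratios relate to hue and saturation, and exposure (a multiply) preserves them while brightness (an add) pulls them toward equality, i.e. toward a less saturated colour. The pixel value is made up:

```python
# Multiply preserves channel ratios; add flattens them.
import numpy as np

rgb = np.array([0.40, 0.40, 0.10])  # a yellowish pixel, ratios 4:4:1

exposure = rgb * 1.5                # ratios unchanged: still 4:4:1
brightness = rgb + 0.20             # ratios become 0.6:0.6:0.3 = 2:2:1

print(exposure / exposure.min())    # [4. 4. 1.]
print(brightness / brightness.min())  # [2. 2. 1.] - less saturated
```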

This is just a small part of what can be checked with this set of layers… also, you can simply disable part of the layers if you want to inspect the output of one of the intermediate ones.

Don’t hesitate to ask more questions, I understand that this is one of the hard subjects in image editing!

EDIT: please install the latest version, as I fixed few issues in the colorspace conversion code and in the values shown by the samplers…


  1. photoflow-w64-20171007-git-unstable loads raws much slower than photoflow-w64-20170914-git-linear_gamma. (I see lots of shiftmats in the console.)

  2. PF crashes when I enable assign profile in Color profile conversion.

  3. Why compare in Lab? Can the space, operations in it, and conversions between it and other spaces preserve all possible colors?

I am still debugging the new auto-CA correction code, and there are quite a lot of output messages… however, only the loading phase should be slower, while the processing should in principle be faster.

Thanks, I will have a look

Not really… Lab has a finite gamut like other colorspaces. I am using Lab because it is easier to interpret the values, as lightness is decoupled from color and the chroma is simply proportional to the (a,b) values.

Please have a look. There was a bug.

  1. Should loading pfis be as slow as raws? Would it be a good idea to store CA info in the pfi?
  2. Should auto-ca happen on load? By default, settings in Corr → lens corrections are disabled.
  1. Is it large enough to cover all of the RGB spaces that come with PF?
  2. How are out-of-range and -gamut values represented in Lab?

I just posted in the GitHub issue… I do not see the artefacts in PhF :open_mouth:

Loading a PFI that processes a RAW file results in loading the RAW itself, so the same slowness applies.
I opted for not storing the CA info because this way it will be possible to automatically take advantage of future improvements in auto-CA detection. Also, there is still room for improvement in the RAW loading speed and the auto-CA analysis phase.

The auto-CA works in two steps: an “analysis” phase that computes the correction factors, and which is always executed, and a “correction” phase which modifies the image data, and which is controlled by the checkbox in the Corr tab.

I think that at least the ACES colorspace is wider than Lab, and it might even be that some of the very saturated colors that can be recorded by modern digital cameras are also outside of the Lab gamut. However, AFAIK such colors cannot be generated by any existing output device, therefore I would consider them irrelevant in practical terms.

The normal range of Lab values is L=[0…100] and a,b=[-127…128]. When doing conversions in floating-point precision and unbounded mode, out-of-gamut colors will be represented by Lab values outside of such ranges. Also very bright colors in HDR images can be represented by L values above 100.
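A quick sketch of that last point: in floating point, an HDR luminance above 1.0 simply produces an L value above 100 (using the CIE L* formula, above its linear segment near black):

```python
# Unbounded-mode Lab: HDR luminances map to L values above 100.

def lightness(y):
    return 116 * y ** (1 / 3) - 16  # CIE L*, valid above the linear segment

print(lightness(1.0))  # 100.0: diffuse white
print(lightness(2.0))  # ~130: a valid HDR lightness, outside [0..100]
```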

Good to know the thought process behind auto-ca. Curiosity satisfied.

Interesting tidbit about ACES. Actually, I have been using ACES in recent PlayRaw workflows.

Maybe it is just a typo but shouldn’t a,b=[-128…127]? Proof:

From the first link:

In theory there are no maximum values of a* and b*, but in practice they are usually numbered from -128 to +127 (256 levels).

So [-128…127] is just a convention to match [0…255], but it should not matter in PF where everything is done in float.

The CIE Lab colour model encompasses the entire spectrum, including colours outside of human vision.

So it covers everything shown in the chromaticity diagram.

Found this along the way:

I’ve never really liked results of tonemapping algorithms (over the years I’ve tried quite a few), and so have always tonemapped by hand using masks and layers. But this filmic tonemapping is very different and really excellent, actually very film-like.

It’s been probably years since I added any new editing tools to my small arsenal of “go to” editing algorithms, well, apart from GIMP-2.9’s LCH-based tools. But the filmic algorithm is something that I anticipate using quite a lot. It really does allow you to add exposure and compress the highlights in a way that looks natural.

@Carmelo_DrRaw - thanks! for the link to the page that explains the filmic tonemapping. And many, many thanks for the filmic tonemapping algorithm.

If you read this page from Bruce, you will see that the integer representation of Lab excludes some real colors (particularly in the green hue range, see the bottom of the page). This limitation does not hold if one uses floating-point values and allows for (a,b) values outside of the [-128…127] range. By the way, thanks for fixing the typo in the (a,b) range definition!

However, I am pretty sure that no monitor or printer will be able to actually produce such colors, so for me this limitation is “practically irrelevant”. Experts can correct me if I am wrong.

I agree, I am starting to apply it by default more and more to my images… however, there is really nothing very fancy in it, it is just a non-linear curve with a roughly S-shape in perceptual scale. Nevertheless, it really gives a “natural” boost to mid-tone contrast.

You should also try to modify the “preserve colors” slider, to see its effect. Sometimes moving it all the way to “1” produces even more natural results…

Good to know that a,b=[-128…127] does not cover all possible colors and that the range is not a scaled representation of the entire spectrum.


This happens when I use ACES:

tone mapping > preserve colors = 1
RAW > Color > working profile = ACES linear 

No clipping
name: [tm-00]
min : -5633875116032
max : 1299298123776
mean: -3475430.2141832756
std : 2461316913.8421001
rang: 6933173239808

Clip overflow
name: [tm-01]
min : -5633875116032
max : 1299298123776
mean: -3475430.2141898782
std : 2461316913.8421001
rang: 6933173239808

Clip negative
name: [tm-10]
min : -2528173096960
max : 0.83970385789871216
mean: -1127271.6365228517
std : 777916169.15958655
rang: 2528173096960.8398

Clip both
name: [tm-11]
min : -2528173096960
max : 0.80282509326934814
mean: -1127271.636529814
std : 777916169.15958655
rang: 2528173096960.8027

Load time

Observations from loading my orchid photo from [PlayRaw] Flowers Flowers Flowers!

  1. Raw and pfi load time from opening to the completion of updating is 29-38s.
  2. Closing image tab using [x] while loading makes PF crash.

GIMP’s LCH-based tools (blending modes and hue shift) are literally the only reason I’m keeping GIMP 2.9.5 and higher. I have not found a free solution besides having to program to get LCH-based tools, but that path sounds tedious. I don’t know when they’ll come to PhotoFlow, but if they do, I’ll probably be using PhotoFlow over GIMP for my needs.