RawTherapee Website release post (aka: we're not dead yet)

@patdavid another change in the same line as mentioned above, I mean within Dual-Demosaic with Bilinear:

“options in RawTherapee that blends the results from”

I think it should be “blend” instead of “blends”.


@patdavid in the Wavelet Improvements section,

“Main developers: jacques Desmis, Ingo Weyrich”

A capital J for Jacques.

Thanks again for your time and effort!


Ok, you all rock!

Thank you for taking the time to proofread and get me fixes/suggestions. I’ve incorporated all of the changes you’ve mentioned.

I do still have a question - mention was made of improvements to wavelets but I cannot find a link or explanation about what was done. I apologize if I’m being dense but is there a good reference for the improvements to wavelets? Is it just for local adjustments or as part of a larger module?

Could I please get a link to some material to review?

Also - I’ve added the section on Abstract profiles - if someone could take a quick look and make sure it makes sense that would be awesome.

If there are no further changes to be made I can see about publishing this weekend possibly (I still need to test pushing to the website host with the credentials Gabor passed on to me).


The post is looking good, Pat! There are a few things I wanted to say, but I haven’t had the time in the past few days. I’ll get them to you tomorrow. One quick thing, though: AFO is an abbreviation of the Spanish for “waveform analyzer”. In English, it’s simply referred to as a “waveform”.



I will try, from memory, to note the main changes in Wavelets. Note the excellent quality of the RawPedia Wavelets page (graphics, images and general presentation by @XavAL ; content by Jacques Desmis, @Wayne_Sutton , @Andy_Astbury1 and @XavAL ).

For “Abstract profile”, this has been the subject of many heated debates (that’s an understatement), mostly related to a lack of understanding of what I had done (“what is this thing, it’s not usual”, etc.). Others can give their opinion; it was validated by myself (obviously), @Wayne_Sutton and @Andy_Astbury1 (with a video).


Some explanations about “Wavelets”. Excuse my very bad English.

The “Wavelets” module in the “main” menu has also been greatly improved. It is on this occasion that I introduced the notion of “Attenuation response”, which I will explain as simply as possible, a little later.

Several modules have undergone profound changes, notably through the addition of this function and its corollaries (offset…): for example the “Contrast” and “Chroma” modules, among others.

The " Toning " module has been enriched with a graphic possibility of visualizing the changes.

The " Denoise " module (which I have separated in " Local Adjstments ") is clearly improved, but it limits certain possibilities of wavelet, by its limitation in number of levels of decomposition (memory occupation, processing time). It includes (in “Advanced” mode) complex optimization functions to improve the results, which I will not detail here. Moreover, it is now equipped with a “Chroma” module.

Edge sharpness" (an extremely complex module), which increases the contrast on the edges, is also improved.

A “Blur levels” module is also new. Its usefulness is obvious…

“Residual image” has also been improved, in particular with the “Blur” function and a new “Clarity” module.

“Final Touchup” has also been significantly improved, in particular by the “Directional contrast” function, which differentiates the response of the 3 wavelet decompositions (horizontal, vertical, diagonal), and by “Final smoothing”, which allows you to soften images that are too “hard”.

A few words about “Attenuation Response”. This addition has been the subject of very long debates; you can find the explanation in the excellent diagrams made by @XavAL in the “Wavelet” documentation.
To simplify: without this addition, the wavelet decomposition often risks introducing artifacts. Why? Simply because when you move a contrast slider, for example, it acts on the whole signal (which is not the luminance itself, but its positive and negative variations, for each level and each direction: horizontal, vertical, diagonal). The basic idea is simple, but difficult to grasp. We look for the maximum of the signal (for each level and direction) and calculate a standard deviation (on a distribution which is anything but Gaussian, but which serves as a reference). From these two values, we concentrate the action on the most pronounced “central” part of the signal; “Attenuation response” allows us to narrow or widen this “central” part, and “Offset” to shift it.
This addition makes wavelets usable in most cases, without adding artifacts.
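For readers who think in code, the idea above can be sketched roughly as follows. This is a minimal Python illustration, not RawTherapee’s actual implementation: the function name, its parameters and the Gaussian-shaped weighting are all my own invention, chosen only to show the principle of concentrating a contrast boost on a “central” band of coefficient magnitudes derived from the signal’s maximum and standard deviation.

```python
import numpy as np

def boost_with_attenuation(coeffs, strength=1.5, attenuation=1.0, offset=0.0):
    """Illustrative sketch (not RawTherapee's code): scale wavelet detail
    coefficients, concentrating the boost on a "central" part of the signal
    instead of scaling everything uniformly."""
    mag = np.abs(coeffs)
    peak = mag.max()                      # maximum of the signal for this level
    sigma = coeffs.std()                  # reference std-dev (distribution is far from Gaussian)
    center = sigma + offset * peak        # hypothetical "central part" of the response
    width = max(attenuation * sigma, 1e-9)
    # Gaussian-shaped weight: full boost near `center`, fading toward 0 and the peak
    weight = np.exp(-0.5 * ((mag - center) / width) ** 2)
    gain = 1.0 + (strength - 1.0) * weight
    return coeffs * gain

rng = np.random.default_rng(0)
detail = rng.laplace(scale=0.1, size=1000)   # stand-in for one wavelet sub-band
boosted = boost_with_attenuation(detail, strength=1.6, attenuation=0.8)
```

Because the weight stays in [0, 1], coefficients near the “central” band get the full boost while extreme coefficients (the ones most likely to produce artifacts) are left nearly untouched.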

All this work required the precious collaboration of @Wayne_Sutton , @Andy_Astbury1 and @XavAL , and of course the code has been improved by @heckflosse .

Compared with “Local adjustments”, many modules seem similar, but because of the possible number of levels and the deltaE, the performance of “LA” is in my opinion superior. Moreover, functions have been added in “LA” such as “Graduated filter”, which acts on the contrast (in the wavelet sense), and “Tone mapping”, a different approach from the other “usual” algorithms, which with negative values allows you, for example, to simulate “dodge and burn”; and of course the masks and mask-related functions such as “Recovery based on luminance mask”.



For your convenience: Wavelet Levels - RawPedia. It may still be a little abstract for some readers… In general, when an image is decomposed into wavelets, simply increasing or decreasing each wavelet level by a factor would increase or decrease all the frequencies in it, including undesirable artifacts such as FFT-related patterns and image noise at the frequencies in question. Attenuation, in our case, narrows the sweet spot per wavelet level so as to reduce them.

Note the method used isn’t perfect. Hence, Jacques saying

Even though I haven’t tried the implementation, I can see it is user friendly: the user manipulates two sliders and sees the changes happen in a direct manner.



Thanks for the link to Rawpedia

The debate was long, because we had to make it clear that:

  • we do not work on the contrast itself, but on its 6 positive and negative variations in the 3 directions (for each level)
  • this distribution is very far from Gaussian; the high frequencies are much more spread out than the low ones
  • the standard deviation I use is in fact only a point of comparison, not an end in itself; it is in fact a kind of modeling

Once this was understood, despite the language difficulties, we had to find a term; it was @Wayne_Sutton who proposed “Attenuation response”, which seems to me to convey what it does.

Yes, it is approximate, but it works.

Thank you again


Completing the list of contributors for each feature (from memory and scanning the PRs and issues):

  • Waveform/Vectorscopes
    • Developers: Lawrence Lee, Ingo Weyrich
    • Contributors: Javier Bartol, Paco Lorés (both for documentation, correct me if I’m wrong), Andy Astbury
  • Camera-based perspective
    • Developers: Lawrence Lee, Flössie
    • Contributors: Roel Baars, Ingo Weyrich, Maciek Dworak
  • Inspector
    • Developers: Rüdiger Franke, Lawrence Lee, Ingo Weyrich
    • Contributors: Roel Baars, Javier Bartol
  • Improved film negative
    • Developers: Alberto Romei, Flössie
    • Contributors: Roel Baars
  • Spot removal
    • Developers: Jean-Christophe Frisch, Ingo Weyrich, Lawrence Lee
    • Contributors: Roel Baars, Andy Astbury

Edit: Added Andy for his video on waveform and vectorscopes


These are the minor comments I have.

Note: Many of these features are available from the Automated Nightly builds

If “these features” refers to the ones highlighted in the post, then all of them are available in the nightly builds.

@Desmis has explained a summary…

Should the @ be a link?

accounting for the light response of properties of film

There’s an extra “of” (the first one).

The spot removal screenshot could use a caption explaining that the bird is being removed.

From (Jacques) @Desmis and Ingo (@heckflosse)…

The parentheses are flip-flopped.


The color-correlation AWB section is very technical. That’s fine because there’s not much to say from a user’s perspective other than “choose the ITCWB option and that’s it”. The potentially confusing part is regarding the 3 phases and the 2 step process. It should be more clear that the numbers are not a mistake because they refer to two different ways of describing the algorithm.

Would it be better if we say what ITCWB is good for and what problem it is trying to overcome?

My final comments are about the camera-based perspective tool.

The focal length and crop factor (and any additional cropping such as a digital zoom) are combined automatically inferred from the image metadata

This makes it sound like any crop is retrieved from the metadata, but in reality only the focal length and crop factor are used. The user is responsible for accounting for any cropping not already factored into the focal length and crop factor metadata, e.g. by increasing the crop factor and using the horizontal/vertical shift to re-align the image center with the optical center.

We should elaborate on the other aspects of the tool.

  • There are three buttons for automatically detecting lines in the image and correcting the perspective in the vertical direction, horizontal direction, or both. Automatic correction works well in most cases where the image has visible horizontal and/or vertical lines.
  • In case the automatic option fails to find lines or gets confused by irrelevant lines, the user can opt for the control lines option. The user draws lines over the image. When complete, RawTherapee will use those lines to calculate the correction. As long as there are at least two lines in the same direction as the correction direction, correction will be applied. This means it is possible to control which direction(s) get automatically adjusted by drawing the appropriate number of lines in the corresponding direction.
  • After correcting the perspective, users can make some final adjustments to the rotation, shift, and perspective recovery. The recovery option is particularly useful if a perfect correction is not desirable. For example, an image of a building may look strange if the building does not “lean back” slightly. One possible remedy is to reduce the amount of correction. This technique leads to a problem if perspective correction is applied in both directions or if post-correction rotation/shift are used: the lean will be tilted to one side. The solution is to use recovery, which ensures the lean is always centered.

The two images I supplied should have captions. The first demonstrates control lines being drawn to correct the perspective in both directions. The second shows the result of applying the correction with some recovery.

That’s all I have to say for now. Thank you @patdavid for writing this wonderful piece!


@Lawrence37 @patdavid

I too am in favor of simplifying the presentation of Color-Correlation Automatic White Balance.

I can propose the following text (although I don’t mind the detailed explanation, it is just out of step with the other presentations).

Excuse my bad English:

This “temperature correlation” algorithm is based on a comparison between the raw data of the image (the dominant colors) and a base of 200 colors, representative of visible (not virtual) colors, in spectral data.

The system proceeds in several iterative steps, using a data correlation (Student) to:

  • search for the best Temperature;
  • search for the best green point (tint).

The system gives good or very good results when the scene illuminant at shooting time is either “Daylight” (between 4100K and 12000K) or “Blackbody” (between 2000K and 4100K).
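The iterative structure can be sketched as a toy in Python. Everything here is an invented stand-in: a 5-entry reference base instead of the real 200-color spectral base, a hard-coded temperature-to-gain table instead of a Planckian model, and made-up function names. Only the shape of the loop — correct the image under each candidate temperature and keep the one whose result correlates best with the reference colors — reflects the description above.

```python
import numpy as np

# Hypothetical reference base: chromaticities (R/G, B/G) of representative colors.
REFERENCE = np.array([[1.2, 0.6], [0.9, 0.9], [0.7, 1.3], [1.0, 1.0], [1.4, 0.5]])

def correlation(a, b):
    """Pearson correlation between two flattened data sets."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def best_temperature(image_chroma, candidates):
    """For each candidate temperature (modeled here as a simple R/B gain pair,
    a stand-in for a real Planckian model), correct the image chromaticities
    and keep the temperature whose result correlates best with the reference."""
    best_t, best_score = None, -np.inf
    for temp, (r_gain, b_gain) in candidates.items():
        corrected = image_chroma * np.array([r_gain, b_gain])
        score = correlation(np.sort(corrected, axis=0), np.sort(REFERENCE, axis=0))
        if score > best_score:
            best_t, best_score = temp, score
    return best_t

# Toy scene: the reference colors seen under a warm illuminant (R boosted, B cut).
scene = REFERENCE * np.array([1.25, 0.8])
candidates = {3000: (1 / 1.25, 1 / 0.8), 5000: (1.0, 1.0), 8000: (1.25, 0.8)}
print(best_temperature(scene, candidates))  # the warm candidate (3000) wins
```

The real algorithm, as Jacques describes it, then repeats a comparable search for the best green point (tint).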




First, excuse my bad English…

Some new things about RawPedia and the HDR-SDR and CAM functions in RawTherapee:

In association with @Wayne_Sutton , and to make things clearer and more educational, we have agreed to:

  • remove from “Getting Started” almost everything related to HDR or SDR functions using specific concepts and tools such as “Log encoding”, “Cam16”, “JzCzHz” and “Sigmoid”
  • group these tools in paragraph 3: HDR to SDR: A First Approach (Log Encoding – Cam16 – JzCzHz – Sigmoid)
  • remove the most conceptual parts from this last section and incorporate them into the general module CIE Color Appearance Model 2002/16 - Cat02/Cat16 - Log Encoding – Jzazbz, especially the whole part concerning Jzazbz: Jzazbz – a new experimental CAM? (Cam16 & JzCzHz)
  • note that the English version is a bit different from the French one (paragraphs 4, 5 and 6), but the translation should be done this fall (it is a lot of work)
  • RawPedia links:

New link


The whole is clearer (at least from our point of view) and allows a better understanding and easier reading.

Some explanations of these HDR-SDR type functions and their implementation in RawTherapee. Of course, you may or may not take this into account when describing the changes compared to 5.8:

« Color Appearance & Lighting (Ciecam02/16) » (main)

  • it has been enriched with Cam16, much more powerful than Cam02
  • it is possible to choose a level of complexity: “Standard / Advanced”
  • some modifications requested by users make this module more user-friendly

In summary, this module, which seems complex, sits at the end of the processing pipeline and, in “Symmetric” mode in particular, brings a high-quality “chromatic adaptation” when the shooting conditions and the real illuminant are different.

"HDR to SDR : A first Approach (Log Encoding – Cam16 – JzCzHz - Sigmoid)" (Local Adjustments)

Introduction :

When we look at Darktable, we can see that the software has been entirely rewritten around three principles and tools: remove the Lab* mode, introduce a “Filmic” module (which is a logarithmic encoding of the data), redo all the colorimetry in RGB mode, and, in a current branch, develop a “Sigmoid” module.
I make no judgement, but I can see that these changes have caused a lot of debate and communication.

What about RawTherapee?

Use of Lab* mode (especially in Local adjustments):

  • Certainly it is true that Lab* only passes 7 Ev, but only at the level of the output (monitor) profile, whose PCS is in Lab mode and 8-bit. From the various tests that I have been able to make with high-dynamic-range images at my disposal, when Lab is used in “float”, the RGB⇒Lab and Lab⇒RGB conversions are done without any loss, at least on images with a dynamic range of 15 Ev. Certainly, in the long run, if we assume that output profiles can use HDR functions, it may be necessary to use HDR-Lab… But note that today most exchange is done either on paper (which has a very low dynamic range) or on the web. As for buying a real HDR monitor that reaches 4000 cd/m², prices are around €30,000.
  • It is true that Lab* does not maintain color consistency, especially in red-orange and blue-purple. But in RawTherapee, if you check the two boxes “Avoid Color Shift” and “Munsell correction only” in “Local adjustments: Settings”, a series of 200 LUTs will correct these drifts almost perfectly, without correcting the gamut.
  • see in Rawpedia :
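The float round-trip claim is easy to check with the standard CIE formulas. Here is a self-contained sketch using D65 and linear sRGB primaries; RawTherapee’s own pipeline differs in details such as the working profile and white point, so this only illustrates the general point that RGB⇒Lab⇒RGB in float loses essentially nothing even far above the nominal range.

```python
import numpy as np

# Linear sRGB -> XYZ (D65) matrix and the D65 white point of this space.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
M_INV = np.linalg.inv(M)
WHITE = M @ np.ones(3)          # XYZ of linear RGB (1, 1, 1)

DELTA = 6 / 29

def _f(t):
    return np.where(t > DELTA**3, np.cbrt(t), t / (3 * DELTA**2) + 4 / 29)

def _f_inv(t):
    return np.where(t > DELTA, t**3, 3 * DELTA**2 * (t - 4 / 29))

def rgb_to_lab(rgb):
    x, y, z = (M @ rgb) / WHITE
    fx, fy, fz = _f(x), _f(y), _f(z)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def lab_to_rgb(lab):
    L, a, b = lab
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    xyz = _f_inv(np.array([fx, fy, fz])) * WHITE
    return M_INV @ xyz

# Round-trip a pixel 15 Ev above a deep shadow: float Lab loses nothing here.
pixel = np.array([0.001, 0.002, 0.0015]) * 2**15
back = lab_to_rgb(rgb_to_lab(pixel))
print(np.max(np.abs(back - pixel) / pixel))  # tiny relative error
```

The 7 Ev limitation Jacques mentions comes from 8-bit Lab encodings at the output-profile stage, not from the Lab model itself.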


The “Log Encoding” module :

  • I used the excellent work of Alberto Griggio ( @agriggio ) in ART (with some adaptations) to work in RGB mode. Then, to solve lighting and color problems (and according to the chosen level of complexity: Basic, Standard, Advanced), you have at your disposal the Cam16 tools, which allow you to work on the components J = Lightness and contrast, Q = Brightness and contrast using “Absolute luminance”, and the 3 color components (s: saturation, C: chroma, M: colorfulness).
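A minimal sketch of what a Black Ev / White Ev log encoding can look like. The parameter names and defaults here are assumptions for illustration, not ART’s or RawTherapee’s actual code: exposure relative to mid-gray is mapped linearly in log2 space onto [0, 1].

```python
import numpy as np

def log_encode(x, gray=0.18, black_ev=-8.0, white_ev=4.0):
    """Sketch of scene-linear log encoding (assumed parameter names):
    map exposure values relative to mid-gray onto [0, 1]."""
    ev = np.log2(np.maximum(x, 1e-10) / gray)      # Ev relative to mid-gray
    return np.clip((ev - black_ev) / (white_ev - black_ev), 0.0, 1.0)

# Mid-gray lands at |black_ev| / (white_ev - black_ev); the extremes clip to 0 and 1.
print(log_encode(np.array([0.18, 0.18 * 2**4, 0.18 * 2**-8])))
# → approximately [0.6667, 1.0, 0.0]
```

With these assumed defaults, mid-gray maps to 8/12 ≈ 0.67, which is why such an encoding lifts shadows so strongly compared with a straight linear view.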

The « Cam16 » module :

  • it is a very simplified version of the “Ciecam & Lighting” module present in “main”, and therefore much more intuitive; it benefits from the advantages of “Local adjustments” (deltaE, working on the full image, etc.)
  • it has a “Sigmoid Q and Log Encoding Q” module, which can of course use the “Black Ev” and “White Ev” settings. In both cases RGB is not used; instead, Brightness (Q), which uses “Absolute luminance”, serves as the reference. Of course, depending on the level of complexity (Basic, Standard, Advanced) you have different settings.
  • I have experimentally equipped it (by modifying the conversion matrices) with an “HDR PQ (Peak luminance)” slider, which allows a first HDR approach by letting you set this function between 100 cd/m² and 10000 cd/m².
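The PQ curve referred to here is the SMPTE ST 2084 transfer function, which maps absolute luminance to a signal value. Below is a sketch of the standard formulas with an adjustable peak, analogous to (but not copied from) the slider described; the function names are mine.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(L, peak=10000.0):
    """PQ inverse EOTF: absolute luminance (cd/m²) -> [0, 1] signal.
    `peak` mirrors the idea of an adjustable luminance peak."""
    y = np.clip(L / peak, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_decode(V, peak=10000.0):
    """PQ EOTF: [0, 1] signal -> absolute luminance in cd/m²."""
    v = np.maximum(V, 0.0) ** (1 / M2)
    return peak * (np.maximum(v - C1, 0.0) / (C2 - C3 * v)) ** (1 / M1)

L = np.array([0.1, 100.0, 1000.0, 10000.0])
print(pq_decode(pq_encode(L)))  # round-trips back to the input luminances
```

Lowering `peak` from 10000 to, say, 100 cd/m² spends the whole signal range on an SDR-like luminance span, which is one way to read the “first HDR approach” Jacques describes.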

The « JzCzHz » module :

  • is only accessible in “Advanced” mode
  • I encountered many difficulties during the first use (with the original matrix, and PQ set to 10000 cd/m2): lack of saturation, artifacts, etc.
  • I tried to get around this in various ways: for example the “Remapping” function, or the slider to adjust the PQ luminance peak (which specialists or researchers may dispute), but now it works more than correctly.
  • JzCzHz (which is a simple transform of Jzazbz) is not a CAM (ZCAM, which we tried, does not work): here again I made some workarounds by giving it some CAM functions for “Scene conditions”: Mean Luminance (Yb%), Absolute Luminance, Surround.
  • It has (like Cam16) two functions, “Log encoding Jz” and “Sigmoid Jz”; there again we do not use RGB mode, but Jz, which takes “Absolute luminance” into account.
  • the code is not optimized (it is in “double” precision), hence a certain slowness.

These 2 modules (Cam16 and JzCzHz) were developed at the end of 2021 thanks to the precious help of @Wayne_Sutton and @Jade_NL . Thanks to them.

Of course, you can help yourself with the documentation in Rawpedia.

Your opinion is important : what do you think ?



So is the release coming or not?


Judging from the stream of posts above your post it seems it’s being prepared.

That was already about 2 weeks ago. I guess @Morgan_Hardwood 's free days are over? And nothing happened?

No. @jdc’s post was written one day ago.

But I think he is not responsible for releases.