'Game changer' and innovation: what it is, and the rules of the game.

RawTherapee was born about 20 years ago… Its creator, Gábor Horváth, started from scratch and innovated. What he achieved reflects his knowledge and skills, built on the technology of the time and on the methods and tools available in the early days of digital imaging. The same analysis could be made of other systems (word processors, the Internet, Darktable, etc.). The processing architecture (the four pipelines) and the interface are essentially the same as they were 20 years ago. The tools are presented in what was supposedly the usual processing order at the time: the ‘Exposure’ tab comes first because that is where the powerful tools of the era were originally located; the ‘Raw’ and ‘Metadata’ tabs come last because the user is not expected to interact with them (or only rarely).

Innovators play a key role in our societies, whether in our software, our organizations, our communities, our environment, Artificial Intelligence, and more. Flexibility, adaptability, critical thinking, curiosity, the ability to model systems and to regulate their use, empathy, and attention to human factors are essential for survival. These innovators, these creators, are often misunderstood and undervalued; they are sometimes perceived as discourteous and somewhat outside the system. But the essential thing is to understand the system and to evolve its goals, its stakes, its values, its identities, and its cultures.

After this brief introduction (which is the crux of the matter), let’s return to software like RawTherapee, ART, Darktable, etc. I won’t make a direct comparison, as that would be pointless, but I will try to highlight some key points. It’s not about being for or against, or saying one is better or worse, but about understanding:

  • Compared to RawTherapee, ART has slimmed down while incorporating user-focused innovations (CTL, etc.). I admire its creator. Broadly speaking, ART is still similar to RawTherapee, with some advantages and disadvantages in terms of features.

  • Darktable, thanks to the initiative of a few innovators whom I respect and admire, has introduced and adapted significant innovations. This leads to a new way of seeing things, and therefore of doing them. In particular, the concepts of ‘scene-referred’ and ‘display-referred’, and innovative tools like ‘Filmic’ or ‘Sigmoid’ (and soon ‘AgX’), have been added and explained… and a strong communication strategy has been implemented. The general idea is to perform most of the processing on linear data (demosaicing, working profile, etc.) and then, at the end of the pipeline, to map the data to what the user perceives on the available media (screens, printers, etc.), using tone-mapping tools such as ‘Filmic’ or ‘Sigmoid’ combined with ‘Color Balance’, etc. (a minimal sketch of such a curve follows this list).

  • RawTherapee is a system equipped with tools found (virtually) nowhere else (innovations), apart from a few recent adaptations. I’ve heard comments and criticisms on the web and in the specialized press (people don’t understand anything, what’s the point, it’s engineering stuff…). A good part of this stems from a lack of understanding, an impression of complexity, or simply a lack of knowledge, which I think is mainly due to a lack of communication and exchange (support materials, forums, videos, etc.) and to the habits of users and designers. What about CIECAM, Wavelets (yes, it’s simpler in GIMP, but that’s comparing apples and oranges), Auto White Balance, Selective Editing (yes, it’s different from masks, and on closer inspection more intuitive; I stand by that), Capture Sharpening, Dual Demosaic, Color Propagation, Abstract Profiles, etc.? Several highly competent people have contributed to what RT is today.
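To make the ‘linear first, map at the end’ idea concrete, here is a minimal sketch of a sigmoid tone curve that maps unbounded scene-referred linear values into the display-referred range [0, 1]. It is not Darktable’s actual ‘Sigmoid’ code; the middle-gray pivot and the contrast value are illustrative assumptions.

```python
import numpy as np

def sigmoid_tonemap(linear_rgb, contrast=1.5, pivot=0.18):
    """Map scene-referred linear values (>= 0, possibly >> 1) into [0, 1].

    A toy S-curve, not Darktable's implementation: middle gray
    (assumed 0.18) lands at 0.5; deep shadows approach 0 and
    extreme highlights approach 1 without ever clipping.
    """
    x = np.asarray(linear_rgb, dtype=np.float64)
    # exposure in stops relative to middle gray, scaled by the contrast
    t = contrast * np.log2(np.maximum(x, 1e-6) / pivot)
    # logistic curve: -inf -> 0, 0 -> 0.5, +inf -> 1
    return 1.0 / (1.0 + np.exp(-t))
```

Everything before this step (demosaicing, white balance, working-profile conversion…) operates on the linear values; the curve is applied only once, at the display end of the pipeline.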

‘Game changer’ - in French, the term ‘bouleverseur’ suits me well as a translation: the aim is to change the usual way of thinking and acting in image processing. Before changing the way we do things, we must first agree on the way we see things. This is partly the purpose of these presentations and discussions.

I will open separate threads (if my health permits), step by step, covering the new features and integrating them into the current RawTherapee system (GHS, Capture Sharpening, Gamut Compression, Selective Editing, Abstract Profiles, etc.):

  • Perhaps the RawTherapee GUI will need to be updated; this is the purpose of the ‘rtreview’ pull request. Your suggestions are welcome. Not everything is feasible.

  • It would probably be necessary to change the order of operations in the pipeline, but this is extremely complex (the order of ‘events’ that places one action before another has nothing to do with the GUI; for example, GHS should sit further upstream, and other processes later). This answers in advance the question of the differences between Preview and Output, and of the Preview malfunctions… I don’t know how to fix them.

I’m going to focus primarily on the ‘why?’ rather than the ‘how?’. Why GHS rather than something else? Why integrate noise reduction into Capture Sharpening? What is the advantage of linear White Points and Black Points over values expressed in Ev? Etc.

The settings provided with the examples (.pp3 processing profiles) don’t represent ideal, perfect processing, but rather a path to understanding how and why to do it. Of course, they are indicative, simplified as much as possible for educational purposes, and can (and should) be expanded upon.

Pixls.us only allows me to post twice in a row if there are no other comments, so each post is likely to be rather long.

The lack of up-to-date, accessible documentation will be a significant obstacle. I’ll have to copy paragraphs to Pixls.us, which will make things somewhat cumbersome. Conversely, these exchanges will allow me to put the examples online.

If a user of another program (ART, Darktable, etc.) wishes to explain how this type of processing is done in their environment, they may do so, but only in a very short post of one or two lines (no more), linking, for example, to a Play Raw. Otherwise the original discussion will be derailed; the moderator will be notified and the post deleted.

I hope the translator won’t distort my words.

Thank you.

Jacques

It’s a pity that I can only give one like for this post!

> It’s a pity that I can only give one like for this post!

Yes. I totally agree!

@jdc thanks a lot again for your work on RawTherapee

For my part, I am really glad that such great free and open-source options exist for editing raw files, and that we don’t have to pay a subscription. I feel too many photographers are sheep who believe they have to use specific commercial software or they are not serious photographers.

I will refrain from making direct comparisons between DT, RT, ART, and any other open-source software, or from trying to say which is best. However, I will say that what initially attracted me to RT was the intuitive workflow of the UI; at least, it was intuitive to me. I also liked the sharpening and denoising I achieved with RT.

What attracted me to DT was the ability to make localised adjustments in every module using drawn and parametric masks. When I used RT, I found I would make two or more versions of an image and combine them in GIMP through layers and masks. With DT I didn’t need this approach. It appears ART has expanded its capacity for localised adjustments.

I have said in the past that in an alternative universe I would love to see a program called Darktherapee that combined all the great features of RT and DT in a single program. I realise this is not practical, due to coding differences, let alone the philosophical differences that may exist between the developers of these two great programs.

Thank you for all these positive reviews; they make me extremely happy. :grin:

Comparing software to say that one is better and another is worse is not the goal of Game Changer; it is meaningless. Of course, I make comparisons individually to see the advantages and disadvantages and to learn from them. And of course, creating a product that combines the best of all products is a dream… The obstacles:

  • The processing architectures are different; the way white balance is handled, for example, reveals the profound differences.

  • The development and communication teams: large for Darktable, now smaller for RawTherapee for various reasons. The fact that the RawTherapee developer who will try to present the new algorithms is 78 years old and, moreover, ill is worrying for the future. Of course, I enjoy developing algorithms and challenging myself (and trying to meet those challenges), and I am very happy to try to present them to you, but I’m not going to live forever.

Perhaps, nevertheless, cooperation or sharing is possible (I’m dreaming).

A more significant obstacle looms, from my point of view: the arrival of AI. Because our source code is copyleft, AI can (and will) copy it without telling us; all it takes is a review by Copilot… and presto, Microsoft collects it, integrating it into products with resources we don’t have (data, algorithms, parallel computing…). Ultimately, if nothing is done, our open-source software is doomed to disappear. Perhaps I’m a harbinger of doom… but we must act.

Before presenting any tutorials: I don’t yet know how or when they will proceed, or how long our discussions will last (sometimes several weeks, even months). Here are a few points to consider:

a) I will reuse some of the images from the RawTherapee Processing Challenge feedback session that took place in 2024, and I will add other images (noisy or not), including some shot under LED lighting.
Challenge Rawpedia

b) The issue of out-of-gamut or atypical colors caused, for example, by LEDs:

  • To provide some background and help you understand, here are two links (which I already shared during the Gamut Compression presentation in September 2024; to say the least, it didn’t attract a large audience):
    Gamut compress Github
    Documentation ACES

  • What we observe is that two fundamental factors, the illuminant and the observer, set apart a shot taken with a DSLR in an LED environment: the illuminant is very different from daylight, and the camera’s ‘observer’ is very different from the human observer.
    To simplify drastically: the perceived color of each pixel is a function of the colors in the image, the illuminant, and the observer, combined in a matrix calculation. It is inconceivable that a photographer would analyze every part of the scene with a spectrograph, request the spectral data of the LEDs from the relevant organizations, and obtain the camera’s observer data from the manufacturer (Nikon, Canon, Sony, etc.). Including a ColorChecker24 is only a stopgap; the calculation required would be virtually impossible.
    Moreover, when the lights are directly in the frame, the ‘Pointer’s gamut’ (which maps reflected colors into the CIE xy diagram) is partially obsolete: the entire CIE xy diagram comes into play. This leads to camera gamuts (used as examples in the diagram below) that lie completely outside the realm of human vision, yet are embedded in the camera’s data. Manufacturers provide conversion matrices for D65 and StdA, but obviously not for these cases. The question is: what do we do with this data? The same applies to ICC or DCP profiles.
    GHS and gamut compression make such data processable (see the sketch after this list), but what limits should we set? Certainly, we can ‘fit’ data lying 5 to 10 times outside the gamut (including ACES AP0)… but what do we do with it? What are the true colors, especially those of LED spotlights? Our vision always has the same characteristics, yet the input data is theoretically virtual and invisible to our eye. I’m not a magician. It will be up to each of you to adopt the values that seem right to you.
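To illustrate the mechanics, here is a minimal sketch of the power-curve distance compression described in the gamut-compress repository linked above. The threshold, limit, and power values are illustrative assumptions, not RawTherapee’s settings.

```python
import numpy as np

def compress_distance(d, thr=0.8, lim=1.2, pwr=1.2):
    """Power-curve compression: distances below thr pass through unchanged;
    a distance of lim is mapped exactly onto 1.0 (the gamut boundary)."""
    s = (lim - thr) / (((1.0 - thr) / (lim - thr)) ** -pwr - 1.0) ** (1.0 / pwr)
    dn = np.maximum(d - thr, 0.0) / s
    return np.where(d < thr, d, thr + s * dn / (1.0 + dn ** pwr) ** (1.0 / pwr))

def gamut_compress(rgb, thr=0.8, lim=1.2, pwr=1.2):
    """rgb: (..., 3) scene-linear values. Components far from the achromatic
    axis (max of R, G, B), including negative ones, are pulled back toward it."""
    ach = np.max(rgb, axis=-1, keepdims=True)
    safe = np.where(ach == 0.0, 1.0, np.abs(ach))   # avoid division by zero
    # normalized distance of each component from the achromatic axis
    d = np.where(ach == 0.0, 0.0, (ach - rgb) / safe)
    return ach - compress_distance(d, thr, lim, pwr) * np.abs(ach)
```

The point is not the exact numbers but the behavior: in-gamut pixels pass through unchanged, while wildly out-of-gamut components are brought just inside the boundary instead of being clipped.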

c) Overexposed images.

The third problem is overexposure in daylight, for example at sunset, which saturates the sensor data. Highlight-reconstruction algorithms such as Color Propagation or Inpaint Opposed attempt to make this data usable, to reconstruct it. We have no idea how accurate they are (except perhaps the person who took the picture). Again, GHS can recover the data, but is it necessary to recover everything? A toy sketch of the general idea follows.
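For intuition only, here is a deliberately naive sketch of the channel-ratio idea behind highlight reconstruction. It is not RawTherapee’s Color Propagation or Inpaint Opposed code; the 0.8 reference threshold and the simple averaging are assumptions made for illustration.

```python
import numpy as np

def naive_highlight_reconstruct(rgb, clip=1.0):
    """Toy channel-ratio reconstruction (NOT RawTherapee's algorithms):
    where exactly one channel is clipped, re-estimate it from the two
    surviving channels, scaled by ratios measured on bright but
    unclipped pixels."""
    out = rgb.copy()
    clipped = rgb >= clip
    for c in range(3):
        o1, o2 = [i for i in range(3) if i != c]
        # pixels where only channel c is clipped
        mask = clipped[..., c] & ~clipped[..., o1] & ~clipped[..., o2]
        # reference pixels: bright in channel c, but nothing clipped
        ref = (rgb[..., c] > 0.8 * clip) & ~clipped.any(axis=-1)
        if not mask.any() or not ref.any():
            continue
        est = np.zeros(mask.sum())
        for o in (o1, o2):
            ratio = np.mean(rgb[ref, c] / np.maximum(rgb[ref, o], 1e-6))
            est += ratio * rgb[mask, o]
        # average the two estimates and never go below the clipped value
        out[mask, c] = np.maximum(rgb[mask, c], est / 2.0)
    return out
```

Real algorithms propagate color spatially and handle the cases where two or all three channels clip; this sketch only shows why unclipped neighbors carry usable information.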

Certainly, it’s possible to make corrections locally (Selective Editing) or on the entire image (Abstract Profiles) by modifying Primaries, Illuminants, and Dominant Color. This will be explained in the tutorials. But nothing tells us what the “right” colors are. Ultimately, the most important thing is to be satisfied with the result.

Thank you

And excuse my bad English.

Jacques

I feel like you have at least another 20-30 years; obviously your mind is sharp!

@paperdigits

My brain is still functioning very well… perhaps a little less well than a few years ago, but my physical health isn’t good.
But thank you for your kindness. :grin:

Jacques

Hello all

I just made an update in ‘dev’ via a pull request (verified by Copilot), so executables have been generated. Commit 827acff

When you use GHS, it calculates the ‘WP linear’ and ‘BP linear’ values, as I mentioned in the tutorials.

If the ‘WP linear’ is 10.699 - which is enormous (the data are normally between 0 and 1), and which I think only occurs with images shot under LED lighting with spotlights in the frame - it will be necessary to combine a treatment using ‘Gamut compression’ and ‘GHS’.

To help the user better understand where the luminance peaks are, I perform additional calculations to determine the level reached by each RGB channel.

You will only see an additional line of information if ‘Auto Black Point & White Point’ is enabled.

For example, you will see:
RGB values - R: 5.01 G: 3.15 B: 10.70
So the most prominent component is B = 10.70 (the rounded value of 10.699).
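What such a readout amounts to, in sketch form (my assumption about the shape of the calculation, not RT’s actual code), is the per-channel maximum over the scene-linear image, the overall linear white point being the largest of the three:

```python
import numpy as np

def channel_peaks(rgb):
    """Per-channel maxima of a scene-linear image of shape (H, W, 3).
    Returns the (R, G, B) peaks and the overall linear white point."""
    peaks = rgb.reshape(-1, 3).max(axis=0)
    return peaks, peaks.max()

# e.g. peaks = [5.01, 3.15, 10.70] -> WP linear = 10.70, dominated by blue
```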

This should provide some help when using ‘Gamut compression’. I am about to start the tutorial on an image with LED lighting.

Jacques

I’ve just added “Dynamic Range GHS (Ev)” (DR Ev) to the information provided in the GUI for GHS.

This value is provided for informational purposes and also for comparison with other algorithms.

It is not used for calculations at all, unlike, for example, ‘Log Encoding’.

It is calculated not from luminance (or anything equivalent), but from the absolute minimum and maximum values across the three RGB channels; the minimum and maximum may come from different channels. Obviously, this differs from the DR values displayed elsewhere. To avoid division by zero in the ‘DR Ev’ calculation, I assigned a very, very low floor value to ‘Black point (BP linear)’. (A sketch follows.)
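In other words, a minimal sketch of the description above (the floor value here is my assumption, not the one used in RT):

```python
import numpy as np

def dynamic_range_ev(rgb, bp_floor=1e-5):
    """DR in Ev from the absolute per-channel extremes of a scene-linear
    image; the min and max may come from different channels. bp_floor
    guards the division (and the log2) against a zero black point."""
    wp_linear = rgb.max()                  # absolute maximum, any channel
    bp_linear = max(rgb.min(), bp_floor)   # absolute minimum, floored
    return float(np.log2(wp_linear / bp_linear))
```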

The goal, I remind you, is not to calculate Black Ev or White Ev, which would be irrelevant, but to fit all the data within the interval [0, 1]. The two sliders, ‘Stretch factor (D)’ and ‘Local intensity (b)’, aim to balance the image within this interval [0, 1]… with one or more RT-spots.

Jacques
