Bug with Working Space Tone Response Curve and bundled profiles

When “Local adjustments” is merged into dev (soon), you will have a similar function in “tone equalizer TRC”, which does not use the “Working profile” to work.

This function allows you to change gamma and slope in an RT-spot.

jacques

Just throwing it out there, but I think one needs to take into account that the output profile, whether it is the calibrated display profile or the profile exported with a file, applies its own TRC, built to accommodate the needs of the rendering medium. Anything you do before that is for “s…s and giggles”, or maybe more graciously put, “artistic intent”.

I may be unclear, as English is not my native language.

It is a bug
I think that the working profile is linear in this case, but the values and histogram are displayed using a ProPhoto TRC.

When choosing a linear TRC with the bundled ProPhoto working profile, processing is destroyed ==> another bug.
Furthermore, I think this option is of very little use, or even counterproductive, as asserted by @anon41087856.
So my suggestion was to delete the working profile / tone response curve option. But this has to be validated by the devs, and backward compatibility has to be taken into account.

If it is done at the end of the pipeline, it is an artistic function like the ASC CDL. But changing the TRC of the working space has an impact on the whole pipeline, with all processing performed in a non-linear space.

@jdc Just to be very clear, myself I am not calling for this function to be disabled. Rather I am having some issues with how it works. It also is causing problems with highlight reconstruction as reported in the other thread mentioned earlier.

I’m not entirely sure of the purpose of this function either. Perhaps a redesign is in order, rather than the total elimination of this control.

Well, in that case, when the Navigator histogram and readouts are switched to use the working profile for display, will they show readouts for linear data, or with this TRC of gamma 2.4 and slope 12.92 applied?

What is an “RT-spot”?

I assume it can be avoided by leaving the TRC setting at “none” for now.

I’m not familiar with this assertion you refer to?

@samuelchia

In all cases, RawTherapee does all its work with gamma = 1.

It is at the end of processing that a TRC is applied:

  • either when an output profile is used,
  • or when the image is sent to your screen (in all cases).

If a “screen profile” is installed, the TRC of this profile is used (generally g=2.4, s=12.92);
if not, a TRC with g=2.4, s=12.92 is applied.

The possibility to use a “free TRC” with the working profile assumes that, by default, a gamma of 2.4 and slope of 12.92 is applied… at the end.
In this case a correction is applied (after the working profile is applied) for the RGB and Lab processing.
For example, there is no action on white balance and no action on the input profile…

I understand it seems complex, but in fact, it is very simple…
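For anyone who wants to see concretely what such a gamma/slope pair does to linear values, here is a minimal Python sketch of an sRGB-style two-segment curve with g = 2.4 and s = 12.92 (a linear toe near black, then a power segment). This is only my own illustration of the general scheme; RawTherapee’s exact parametrization and break-point constants may differ.

```python
# Sketch of an sRGB-style TRC with gamma 2.4 and slope 12.92 (linear toe + power
# segment). The constants are the IEC sRGB ones; RT's gamma/slope tool may compute
# its own break point, so treat this as an approximation.

def encode_trc(x, gamma=2.4, slope=12.92, threshold=0.0031308, offset=0.055):
    """Linear value in [0, 1] -> non-linear (display) encoding."""
    if x <= threshold:
        return slope * x                                  # linear toe near black
    return (1.0 + offset) * x ** (1.0 / gamma) - offset   # power segment

def decode_trc(y, gamma=2.4, slope=12.92, threshold=0.0031308, offset=0.055):
    """Inverse: non-linear encoding -> linear light."""
    if y <= slope * threshold:
        return y / slope
    return ((y + offset) / (1.0 + offset)) ** gamma

print(encode_trc(0.18))              # mid-grey 0.18 encodes to ~0.46
print(decode_trc(encode_trc(0.18)))  # round trip back to ~0.18
```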

An RT-spot is the basic concept of local retouching (“Local adjustments”), currently in the branch “newlocallab2”, which will be merged very soon.

@jdc Thank you for the explanation Jacques! This is consistent with everything I knew previously about the function of this TRC setting, but it is very good to have confirmation once again.

Unfortunately, the “free TRC” function seems to not be working properly. When I set it to gamma 1.0, slope 0, it gives unexpected results (it isn’t showing gamma 1.0), and it also affects the highlight reconstruction algorithm when it should not. I have filed a bug report on GitHub highlighting these issues.

Not so sure I understand this correctly. So if I set the TRC to something other than “none”, let’s say I use gamma=1.0, is a gamma of 2.4 and slope 12.92 still applied at the end? Why does that happen? Are you referring to the screen display when you say this? Otherwise shouldn’t it only use my settings of gamma 1.0 and then apply the TRC of the output profile (I’m referring only to what’s happening to the file being processed and saved out of RT, not what I’m seeing on my monitor)?

Good to know!

Is there any practical purpose to this function? For normal photographs, there doesn’t seem to be a need but I imagine it can be useful for some types of analysis work, and if so, it should not be removed. In my case, I want to have linear gamma=1.0 RGB readouts in the Navigation panel. This function should be able to give me that. Unfortunately, it is broken in its current state.

Maybe there is a better way to design the UI to be more intuitive but I haven’t put any thought into that yet.

In countless posts, @anon41087856 mightily asserts that processing must be done in a linear RGB space (surely with the exception of artistic functions).

In Toolchain Pipeline - RawPedia, I suppose that step 7, “convert color space”, refers to the conversion from camera space to working space, and that step 12, “Tone response curve”, refers to the application of the TRC defined in Working profile / TRC custom.
So if a non-linear TRC is defined there, the RawPedia pipeline page shows that the whole RGB pipeline is performed in a non-linear space.
@jdc, could you clarify, and if required correct RawPedia accordingly? Thanks in advance.

I don’t really understand.
A TRC is applied in step 12 (see above), so at the beginning of processing and not at the end.

If you refer here to the conversion from Working space to display profile or output profile, I really hope that the embedded TRC of the profile is used.

@jdc, could you please make clear: used for what, and applied to what?
If there is no screen profile installed, other information is also needed.
If no display profile is defined in RawTherapee, what color space does RawTherapee use to display the preview, thumbnails… images?

Hmm, you make it sound like some kind of gospel truth from Aurelien. Processing of raw images isn’t done in linear RGB because Aurelien says so; it is that way because it is optimal, and it was certainly known before Aurelien asserted it.

Correct me if I’m wrong, RawTherapee has been processing raw images in linear space for almost a decade already.

I think he already answered this here:

Thanks

So the answer is: RawTherapee does all its work with gamma = 1, except if a “Color” / “Color management” / “Tone response curve” with gamma and slope has been chosen.
Thus

could be wrong

And that is definitely a wrong statement, as well as an old one.
https://community.acescentral.com/t/why-grading-in-log-instead-of-linear/2228/7
http://help.autodesk.com/view/FLAME/2020/ENU/?guid=GUID-F864483A-EF62-444D-B82E-842C31BA70F5

Operations that work best in linear:
  • white balance/exposure
  • blur
  • sometimes resizing

Operations that work best in log or gamma:
  • many tone manipulation operations
  • unsharp mask
  • grain and noise operations
  • sometimes resizing

Modern colour correctors tend to develop in the direction of each operator working in the space appropriate for its design goals.
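To make the linear-vs-gamma question from the list above concrete, here is a toy sketch of the blur case (my own example, not from the links); a pure power gamma of 2.2 stands in for the full sRGB curve to keep it short.

```python
# Averaging the same two pixels in linear light vs. on gamma-encoded values gives
# very different results, which is why blur and similar averaging operations are
# usually run on linear data. A pure gamma of 2.2 is assumed for simplicity.

gamma = 2.2
black_enc, white_enc = 0.0, 1.0   # encoded (display-referred) pixel values

# Decode to linear, average, re-encode for display:
linear_avg = ((black_enc ** gamma + white_enc ** gamma) / 2) ** (1 / gamma)

# Average the encoded values directly:
encoded_avg = (black_enc + white_enc) / 2

print(linear_avg)    # ~0.73 -> what a physical 50/50 mix of the two lights looks like
print(encoded_avg)   # 0.50  -> noticeably darker, the classic dark-halo blur error
```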


Excellent points @age, not all image processing should be done in linear space. I agree very much with that. As usual, there are no hard and fast rules or gospel words to apply to something as complex as digital imaging.

I note that I discovered haloing issues with resampling in RawTherapee but never reported it since I don’t use its resampling function, and fairly recently Ingo discovered the issue, reported it and fixed it promptly. Now we have the option of using log for resampling. So even some processes in RawTherapee are not in linear space, and rightly so!


It is debatable when to use which, or whether to at all. At the end of the day, we are no longer artisans and tradespeople where everything is handcrafted to “perfection” or is unique. I think much of the argumentation on methodology stems from people coming at the problem from various vantage points.

Take capture sharpening for instance. It was done in that way because it suits the situation and where it is in the pipeline. Sure, there is a possibility that not everyone would like the implementation. No problem, just don’t use it. Still, devs have to make the decision for the rest of us, and so some things can’t be customized, but FLOSS can offer more knobs and dials to tweak the parameters giving more power to the user.

So it comes down to the greatest good so to speak. Anyway, this is why I like to play with G’MIC. Although it has its limits such as the lack of colour management, I can really roll my sleeves up and try unconventional things and get instant feedback. I suppose devs have been doing the same with their hack versions of our raw processors, which has given rise to @agriggio’s ART and other interesting branches and forks.


I completely agree with that (for instance, color grading is better done in log space).
If you read what I wrote, I made an exception for artistic functions (color grading, tone mapping, adding grain…). I am more doubtful about denoising, sharpening and resizing done in gamma-encoded space.
That is why I don’t understand the benefit of using a non-linear TRC for the RGB pipeline when processing raw files.

Yes, I agree, the global working profile TRC is useless. Where needed, each function could use a specific space. RT denoising has long been able to run in a gamma-encoded space even if the pipeline is linear, and a lot of processing is done in Lab space, which is not linear.

So, I never said that all processing had to be done in linear space, and I don’t understand why you are making a polemic out of that; I was just recalling the numerous threads by @anon41087856 about linear RGB processing.

plus alpha blending (occlusion), which is the core of any masking/blending operation, and also uses blur/edge refinements.

All wrong except grain, but the reason lies in chemical diffusion kinetics.

Also unsharp mask is an end-of-life nasty trick from a time where computers were less powerful than lower-market-tier phones. Try deconvolution, you will love it.

The rest is fake news backed by absolutely nothing, except some Dan Margulis guy who pretended otherwise 20 years ago.

Tone manipulation can be done in full linear pipe with log controls, you don’t need a log pipe (see tone equalizer). But since people don’t differentiate view/model/controller in software development, they don’t quite get the subtle difference between applying image operations in some space, and controlling the parameters of said image operations in some space. You can have log controls, even HSL/HSV, and convert to valid RGB before running the algo in linear RGB space. That way, you have better ergonomics while preserving the relationships between ratios and gradients in your image.
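A rough sketch of that “log controls, linear processing” idea, as I understand it (my own toy code, not darktable’s actual tone equalizer): the user parameter is expressed in EV, a log quantity, but the pixel operation remains a plain multiplicative gain on linear RGB, so channel ratios are preserved.

```python
import math

def apply_tone_adjustment(rgb_linear, ev_curve):
    """Exposure-like adjustment controlled in EV but applied on linear RGB."""
    # Crude luminance estimate; a real implementation would use proper Y weights.
    lum = sum(rgb_linear) / 3.0
    ev = math.log2(max(lum, 1e-6))         # where the pixel sits, in log2 stops
    gain = 2.0 ** ev_curve(ev)             # user's EV correction -> linear factor
    return [c * gain for c in rgb_linear]  # linear multiplication keeps hue ratios

# Hypothetical control: lift everything below -4 EV by one stop, leave the rest alone.
curve = lambda ev: 1.0 if ev < -4 else 0.0
print(apply_tone_adjustment([0.01, 0.02, 0.03], curve))  # shadows doubled, ratios intact
```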

I don’t quite get why people still fight over that. Ground truth is physics. Painting is physically accurate. Natural blur is physically accurate. Natural layering/occlusion is physically accurate. There is only one space that is physically accurate: it’s linear. Everything that is not physics is bullshit. Sure, we have perceptual spaces derived from psychophysics: CIELab 1976 sucks because it’s not hue-linear, CIECAM02 sucks because it will push valid colors out of gamut, CIECAM16 sucks because it still doesn’t do HDR, JzAzBz or IPT-HDR seem ok-ish (still not 100% hue-linear) so far, but they are too recent to be sure and they only do mild HDR (200 nits instead of 100).

Doing color work is no excuse for using non-linear spaces. First of all, because painters have been mixing pigments in scene-referred spaces for the past 25,000 years with no issue with color. Second, because simply putting a log or a gamma on your RGB doesn’t make your space “perceptual”. Worse, you will skew hues and colorfulness in unpredictable ways. You need a proper color adaptation model for that, which might end up with pixels encoded in 4-6D instead of 3D (check out CIECAM02…).
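A tiny illustration of the hue/colorfulness skew mentioned above (mine, not from the post): applying a gamma per channel changes the ratios between R, G and B, so the colour shifts even though no colour edit was intended.

```python
# Per-channel gamma encoding changes channel ratios, and therefore apparent
# hue and saturation.

gamma = 2.2
rgb_linear = [0.50, 0.25, 0.10]                     # an orange-ish colour, linear RGB

rgb_gamma = [c ** (1 / gamma) for c in rgb_linear]  # per-channel gamma encoding

print(rgb_linear[0] / rgb_linear[1])   # R/G ratio in linear: 2.00
print(rgb_gamma[0] / rgb_gamma[1])     # R/G ratio after gamma: ~1.37 -> desaturated shift
```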

The conversion light → color is perfect in your brain, so leave it to your brain. Don’t use broken color models in software to try to simulate that. Cameras record light. Displays emit light. So light transport it is, all along. Path of fewest assumptions.

The only operations that work better in non-linear are the ones that have been specifically designed to work in non-linear. Re-open your notebooks, design some new ones for linear, and you will see that they are more computationally efficient and more realistic-looking than their old counterparts.


Firstly, I get what you’re saying, after direct experience experimenting with various toolchains in my hack software. What I’m wondering about is why the movie folk think getting a log image from the camera, then color grading that, is a good thing… ??


Because they are used to it, because they know how it behaves, because they use stupid software so it’s a workaround for better UI control, because it feels like home even though home has a leaky roof and rats in the basement.

But, first and foremost, because people are abstraction-disabled and don’t get that the GUI space and the pipeline spaces can be 2 totally different beasts, living in parallel worlds and yet in complete harmony, without degrading UX.

Mapping between spaces is a lot simpler when you think about it in terms of abstract vector algebra relationships and not in terms of WYSIWYG nonsense. It’s also consistent with basic epistemology:

  • you get the reality, the thing you want to manipulate, but it’s hidden forever if you are not a god,
  • the closest you can get to reality is perceptions and sensor readings, which are all inaccurate but are the best you have, so your base material is some signal = reality ± error,
  • you build some shitty way of describing the relationships between the bits of reality you gathered, shitty because it should be stupid enough for human beings to understand it, so that’s your model. The best model always has the fewest assumptions built-in.
  • using your model and the original signal, you can decide a priori what part of the signal is considered data and what part doesn’t make sense, so you discard it,
  • you use a computer to run this simulation of the reality you have built from the model, and play god by poking this small universe through model parameters,
  • since model parameters rarely make sense, you simply rewrite them to make simple known concepts appear (speed, distance, energy, you name it),
  • sometimes, you stop poking tiny universes to check whether reality still sort of behaves like your model when you poke it the same way. If it does, then publish a paper or something.

But since even a lot of scientists suck at epistemology (philosophy is huuh… girl stuff, right?), you see them trying to find meaning in the code values produced by the model rather than in the model itself (numbers == truth, right? someone clever should have checked the maths, right? they look complicated and serious). And then, since most people suck at abstraction, anything not WYSIWYG will be crucified on the altar of intuitive UX.

So here we are, swimming in a sea of idiocy, mediocrity and laziness, with people propagating fake news because they heard it from someone famous, even though that someone is a mere user of something carefully hidden beneath the UI.

The fallacy of software is that it lets you play with concepts beyond your understanding until you fool yourself into thinking that you understand them, whereas you have only developed a superficial hands-on survival instinct.

Color is so bloody difficult to grasp.


Can we not do this, please.

And this.


Do euphemisms without me then. Bye.

@aurelienpierre Could you please tell me then what is the best approach to working with digital images, what models should be used, the processing pipeline, etc.? I confess I can’t quite follow your earlier longer post.

Does RawTherapee also do all these stupid things you refer to?