darktable TCA correction raw vs not raw

Hi,

Besides lens correction, we have “chromatic aberrations” and “raw chromatic aberrations”. The main disadvantage of the raw one is that you cannot see the result in close-up, because it is calculated for the visible area only, so the user has to export the whole image to check the result, which is a pain and makes it virtually unusable. The question is whether the raw version has any advantage that justifies the struggle. The manual doesn’t say why you should use the raw one instead of the non-raw one, so… why bother if the non-raw one is much easier to use?

Welcome Wiktor, while I can’t answer your question, someone here might be able to…

@rawfiner might have some insights as he worked on the new CA module added recently to DT.

@ggbutcher might also as he has some software that builds edits from the ground up…

That’s true for dt. In RT it is always applied to the full raw data, and in more than 95% of cases using it is a no-brainer :wink:

In practice I find TCA override (in lens correction) followed by chromatic aberrations (adjust correction mode) the most useful, and I usually ignore the raw CA module. Also, since it operates before demosaicing, you cannot use masks with it, and it offers less fine-tuning.

See these tutorials:

1 Like

The dt raw CA correction works very well too, as long as the visible area is large enough (there will be a message in the module header if the data are no longer valid). Unfortunately, some of its power is lost in dt because of the “region of interest” concept used for performance.
In general, fixing CA as early as possible in the pipeline is a very good idea, and the maths (Ingo’s work) is solid.

Some lenses I use (my 17mm Olympus, for example) have strong CA and love this module, so I have it applied automatically together with a lens correction that doesn’t do CA; in the vast majority of cases this is just perfect.

Not my work :wink:

@wiktor_bajdero Welcome to the forum!

What do you mean by close-up? Anything other than a 100% zoom is inter- or extrapolation and therefore won’t be representative of the final product.

I don’t know the internals of darktable.

In general, correcting chromatic aberration is far better done before demosaicing: apply the geometric distortion to two of the channels, and the job is done. But if demosaicing is done first, it spreads the badness caused by CA into the interpolated pixel values, and the task of correction becomes much more difficult.
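To make “apply the geometric distortion to two of the channels” concrete, here is a minimal sketch (in NumPy, not darktable’s actual code) of lateral CA correction: the red and blue channels are radially rescaled relative to the green channel, with nearest-neighbour sampling for brevity where a real implementation would interpolate:

```python
import numpy as np

def correct_lateral_ca(img, scale_r, scale_b):
    """Minimal sketch of lateral CA correction: radially rescale the
    red and blue channels relative to green.
    img: float array of shape (H, W, 3); scale_r/scale_b close to 1.0."""
    h, w, _ = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0          # optical centre
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    out = img.copy()
    for ch, s in ((0, scale_r), (2, scale_b)):
        # Sample each output pixel from a radially scaled source position
        # (nearest-neighbour here; a real implementation interpolates).
        sy = np.clip(np.round(cy + (yy - cy) * s), 0, h - 1).astype(int)
        sx = np.clip(np.round(cx + (xx - cx) * s), 0, w - 1).astype(int)
        out[..., ch] = img[sy, sx, ch]
    return out
```

Applied before demosaicing, a warp like this realigns the channels so the demosaicer sees consistent edges; applied afterwards, the fringes have already been smeared into interpolated neighbours and a purely geometric warp can no longer fully undo that.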

:smile:

Ha, yeah.

It’s a bit like hot pixels or noise: best fixed before the infection spreads, but worse, because some demosaicing algorithms take account of edges, and CA shifts edges into the wrong place, so the edge processing happens in the wrong place and general badness ensues.

Using raw chromatic aberrations alone can be better than using chromatic aberrations alone for quite small lateral chromatic aberrations, since it runs before demosaic.

Yet if you use lens correction for profiled lateral chromatic aberration, using raw chromatic aberrations is problematic, because the lens correction module, which sits later in the pipeline, will then introduce new chromatic aberrations.

My preferred workflow is to use lens correction for profiled lateral chromatic aberration correction, and to use chromatic aberrations on top of that to correct both the few remaining lateral aberration artefacts and the longitudinal chromatic aberration, which are hard to correct with other modules.

5 Likes

Same here. I also find TCA lens correction profiles made for my own lenses to work better than TCA profiles from other users. For a zoom lens I make a profile every 5 mm of focal length.

What do you mean by close-up? Anything other than a 100% zoom is inter- or extrapolation and therefore won’t be representative of the final product.

Past 67% zoom the raw TCA module is already bypassed in the preview. So in that respect I am working blind: to work on TCA in a range of maybe 5 pixels, I like to zoom in to at least 200–1600% to actually see the enlarged pixel blotches affected by the correction and evaluate the result. That’s why I wonder whether using only non-raw TCA (alone or with lens correction) and avoiding raw TCA makes me lose something.

So as I understand it now, demosaic propagates the TCA mess, making it a more devastating issue, so it would be beneficial to make the raw correction more user friendly. I don’t understand why there is a link between preview zooming and the module’s input. Why doesn’t the module get the full data to process, with the preview showing whatever portion the user wants? It seems ridiculous, doesn’t it? I’m considering filing a feature request with the devs, but maybe I’m missing something obvious about why it’s done this way.

It doesn’t. It’s a speed thing. If you had to reprocess the whole image every time instead of just the region of interest, it would be quite slow. Then you’d be here complaining about slowness.
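As a rough illustration of the trade-off (hypothetical numbers and a stand-in operation, nothing to do with darktable’s internals): processing only the on-screen crop touches a small fraction of the pixels, so it finishes proportionally faster:

```python
import time
import numpy as np

def expensive_module(tile):
    # stand-in for a costly per-pixel operation in the pipeline
    return np.sqrt(np.abs(tile)) * 0.5

full = np.random.rand(3000, 2000)            # a ~6 MP frame
roi = (slice(1000, 1600), slice(800, 1600))  # the on-screen crop, 600x800

t0 = time.perf_counter(); expensive_module(full);      t_full = time.perf_counter() - t0
t0 = time.perf_counter(); expensive_module(full[roi]); t_roi = time.perf_counter() - t0
# the crop holds 8% of the pixels, so t_roi is typically a small fraction of t_full
```

Now multiply that by a whole stack of modules re-run on every slider move, and the motivation for the region-of-interest design becomes clear.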

Or you’d need a zillion layers of caching, which would impact memory use.

(Filmulator takes this route)

Could you use a screen magnifier? That happens at the OS level, so just apply the correction at normal zoom in dt and then zoom in with the screen magnifier. It might be worth a try, to see whether you can make a visual assessment.

Yes, I was about to give that suggestion. I am an avid user of OS-level magnification and spot colour identification (because I am not good with colours).

1 Like

rawproc maintains a copy of the image for each tool, the result of that tool. How’s that for memory-hogging? :crazy_face:

1 Like

Need cash for cache.
— a raw professor

All puns intended.

1 Like

If you had to reprocess the whole image every time instead of the region of interest, it would be quite slow. Then you’d be here complaining about slowness.

I would not complain if the full-data processing kicked in at high zoom settings. It would still be faster than exporting the photo, opening it, zooming in, and then making changes. I would like to test how much TCA correction adds to the processing time, but I assume it wouldn’t be more than a second on my old PC. Taking this philosophy further: why bother with a preview at all? Let the user make all the settings without displaying the result; that would be even faster. Sorry for being sarcastic, but that’s how I see it: for the sake of processing speed, the user cannot assess what the module does.

Could you use a screen magnifier

If only it allowed at least 100% zoom, but the module is bypassed above 67%, so with external zoom I wouldn’t see the actual pixels, only an interpolation of them.