If you had to reprocess the whole image every time instead of the region of interest, it would be quite slow. Then you’d be here complaining about slowness.
I would not complain if full-data processing kicked in at high zoom levels. It would still be faster than exporting the photo, opening it, zooming in, and then making changes. I would like to test how much TCA correction adds to processing time, but I assume it wouldn’t take longer than a second on my old PC. Taking this philosophy further: why bother with a preview at all? Let the user make all the settings without displaying the result - that would be even faster. Sorry for being sarcastic, but that’s how I see it. The user cannot assess what the module does, all for the sake of processing speed.
Could you use a screen magnifier?
If only it allowed at least 100% zoom, but the module bypasses itself above 67%, so with an external magnifier I couldn’t see the actual pixels, only some interpolation of the actual pixels.
@wiktor_bajdero I agree with you: the effects of CA correction are typically best inspected when zooming in more than you normally would. The fact that this is not possible at the moment is rather inconvenient. Though it does not make the tool less effective: let’s not forget that CA is almost always removed correctly; you just cannot inspect it closely from within dt.
I understand this, but if you know the ROI and the raw CA tool requires more pixels, would it be possible to calculate using only the minimum number of required pixels? You wouldn’t need to cache the entire image.
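To illustrate what I mean (just a toy sketch, not darktable’s actual pipeline code): the ROI could be expanded by whatever extra border the CA filter needs, clamped to the image bounds, so only that padded region gets processed instead of the whole image:

```python
# Hypothetical sketch: expand a region of interest by the border a
# filter needs, without ever touching pixels outside the image.
def padded_roi(roi, border, image_w, image_h):
    """roi = (x, y, w, h); border = extra pixels the filter requires."""
    x, y, w, h = roi
    x0 = max(0, x - border)
    y0 = max(0, y - border)
    x1 = min(image_w, x + w + border)
    y1 = min(image_h, y + h + border)
    return (x0, y0, x1 - x0, y1 - y0)

# e.g. a 50x50 view with a 16-pixel filter border in a 4000x3000 image
print(padded_roi((100, 100, 50, 50), 16, 4000, 3000))  # (84, 84, 82, 82)
```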
Almost. I have now tested some photos from my Sigma 105 1.4 Art, and raw correction works flawlessly, so users of quality glass could just toggle raw correction on and forget it.
I also tested some images from the Canon 55-250 IS STM, which is obviously far from premium glass, and raw TCA correction maybe did some of the job, but far from perfect. The only option is to use non-raw correction, because using the two in tandem is impractical: it would take a dozen trial-and-error exports to nail the best setting.
I’m writing this without any knowledge of the other approaches, but the lensfun option does have the distinct advantage that CA and distortion corrections can be combined into one operation, both being non-linear interpolations. The application programmer has to explicitly take advantage of that in how they organize the lensfun correction chain, but it would be one small gain in overall processing efficiency…
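Roughly what I mean, as a toy sketch (nothing like lensfun’s real API, and the mappings here are made up): if the distortion and TCA coordinate mappings are composed into a single transform first, the image only needs to be resampled once, instead of paying for two interpolation passes that each soften the result:

```python
# Sketch: compose two coordinate mappings so one interpolation suffices.
def compose(f, g):
    """Return the mapping xy -> f(g(xy))."""
    return lambda xy: f(g(xy))

# Toy mappings: distortion scales radially (here, a uniform scale for
# simplicity); TCA shifts one color channel's sampling position.
distort = lambda xy: (xy[0] * 1.01, xy[1] * 1.01)
tca_red = lambda xy: (xy[0] + 0.5, xy[1])

# One combined lookup for the red channel: sample once at remap_red(xy).
remap_red = compose(distort, tca_red)
print(remap_red((100.0, 100.0)))
```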
It’s interesting to me the lengths raw processors go to hide the fact that raw processing is a serial chain of operations. If you want to continually see the cumulative effect of that chain on the entire image, changes to earlier operations will simply take longer to show up than changes to later ones.

I see this every day in rawproc. In that tool, there are two selections one can make in the toolchain: 1) which tool is presented for modification, by clicking on the tool name, and 2) which tool is presented in the display, by clicking on the checkbox. I usually keep the last tool checked for display, but if I’m working on white balance, which is before demosaic in my default tool order, it takes a bit of time to work through the rest of the chain to update the display. No sweat on my 12-core desktop box, a bit more noticeable on the 4-core tablet.

Sometimes I’ll change the display tool to an earlier one to mitigate this; sometimes, if I’m just playing, I’ll add a second white balance tool later in the chain, then take it out and apply the final what-I-want-to-do in the original tool. rawproc is smart enough to only re-process starting with the changed tool, a benefit of storing a result image for each tool… Oh, and fast_float in LittleCMS has been a big boon to the display pipe!
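To sketch what I mean by re-processing only from the changed tool (a toy model of the idea, not rawproc’s actual code): each tool caches its result, so editing tool *i* only re-runs tools *i* through *n*, starting from the cached output of tool *i−1*:

```python
# Toy model of incremental toolchain re-processing: each tool's output
# is cached, so a change to one tool only re-runs it and its successors.
def reprocess(tools, cache, changed_idx, source):
    """tools: list of image->image functions; cache: per-tool results."""
    img = source if changed_idx == 0 else cache[changed_idx - 1]
    for i in range(changed_idx, len(tools)):
        img = tools[i](img)
        cache[i] = img  # store this tool's result for later edits
    return img

# Toy chain on an integer "image": +1, *2, -3.
tools = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
cache = [None] * len(tools)
print(reprocess(tools, cache, 0, 10))   # full run: 19

# Now "edit" only the last tool; the first two results come from cache.
tools[2] = lambda x: x + 100
print(reprocess(tools, cache, 2, 10))   # partial run: 122
```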
Thing is, I work efficiently with explicit knowledge of the toolchain and its processing burden to display. I also then work with explicit knowledge of the cumulative effect of the toolchain, selectable for display at any tool. A lot of insight gained, for a bit of latency in the display.
I don’t mean to criticize the development of any of the other raw processors, but they have had to incorporate tricks of selection and abstraction to keep up the illusion of “instantaneous processing”, e.g., some tools don’t show their result until you’re at 100% display scale, only the displayed segment is modified, etc. And it seems those tricks have implications of their own. Me, I’d rather just know the situation and have tools that work as efficiently as they can, and I’ll accommodate…
Oh, I know it’s not a devious act, but it is indeed hiding the vagaries of the processing chain behind some display contrivance. And I really don’t expect it to change, as folk who just want to process their images want to spend a minimal amount of time on ancillary stuff like latency.
It’s also a shame that if you try to set up a lens correction preset for distortion only (so that you can use it in conjunction with raw CA correction), it stores a fixed value for the focal length rather than reading the correct value. This stops you from setting up automatic presets that correct for distortion and raw CA that you could just set and forget. I think it has been mentioned on GitHub, but apparently it is not so simple to sort out…
I am offering no comment on the well-thought-out comments above. However, I have on very rare occasions found that the defringe module tackled specific images best, and I hope that this module is never removed as an option from DT.
Have you seen rafiner’s video? I believe he suggests always starting with raw CA, then manually tweaking TCA, and then using the new CA module… usually increasing the strength and radius in the new CA module works fairly well. In any case I am going from memory, and it’s a good video if you have not seen it…
Thanks, I have seen it; it’s a good approach if you want to spend time adjusting to get the best out of each photo. For the majority of photos, though, I would like to use raw CA plus distortion-only lens corrections.
I am referring to a specific bug: if you create a preset with ‘distortion only’ in lens corrections, it applies fixed values for distortion rather than reading the correct values for the lens focal length. @Peter has suggested a workaround of removing the TCA data from the lensfun file, but I am not sure how to go about this…