Raw decoding - non-Bayer

In theory, the colors and overall brightness in PhF should match those from RT if in RT you choose the “neutral” profile and you turn auto-exposure OFF.

Just to mention this “en passant”: I have been working for a while on a new PhF version that allows fully unbounded editing (i.e. highlights are not clipped), and I’m adding the possibility to perform HDR merging of bracketed RAW shots as well as to apply tone mapping directly within PhF.

For bright highlights, I suggest you try the “blend” highlights reconstruction mode, which you can select in the exposure tab…

Have fun with PhF!


I have a request for Mac and Windows versions with X-trans support; may I suggest that, if you are still working on X-trans, you wait until another version is available :wink:

RawTherapee also has no raw CA correction for X-trans. It’s been on my todo list for a long time, but now I hope I can copy it from PhotoFlow soon :slight_smile:

Ingo

I’m afraid we’ll end up running in circles on that :wink:

Raw CA correction is a part of RT which is used not only in RT, but also in dt and PhF (which I support, because they are open source). The basic algorithm is not mine, but I spent a lot of time optimizing it to its current speed and fixing the races in the original implementation. I suggest we work together on raw CA correction (you, me, and maybe also a dt developer). I do have at least a bit of insight into the current raw CA correction. IMHO the first step would be to make a non-auto raw CA correction for X-trans (for the simple reason that it’s easier to check for coding errors). When that’s done, we can take care of the auto raw CA correction for X-trans.

Ingo

I just asked on IRC in #darktable for developers willing to participate.

I’m interested because I want to get to the bottom of how exactly it works.

Great! I’ll try to share with you the tiny bit of knowledge I have about raw CA correction!

@heckflosse Actually, I would like to take advantage of this conversation to better understand the concept behind CA correction at the CFA-pattern level. In fact, I am not even sure this is the right solution, although I have implemented it in PhF as well.

What is the advantage in terms of quality with respect to a CA correction applied to the RGB data after demosaicing?

A demosaicing algorithm should work best when fed with unmodified CFA pixel values, and indeed CA is a physical effect which is recorded by the camera sensor. So why do we modify the CFA pattern before demosaicing? In addition, if I understand the code correctly, the modified R and B values are derived with some sort of basic demosaicing, so I have the feeling that the cat is chasing its tail…

I’m asking mostly because doing the CA correction at the RGB level (after demosaicing) would basically work for any CFA pattern, and therefore would be a much more general solution :smiley:

It’s hard to find now, but the AMaZE author found that CA needs to be corrected prior to demosaicing for the algorithm to perform best.

Think about this: the assumption that a demosaicing algorithm makes is that color doesn’t change as rapidly as luminance does. Chromatic aberration violates that assumption.


You have a good point here, but I would like to put a counter-argument on the table. In my humble opinion, this is valid almost only for old low-resolution cameras coupled with cheap lenses.

With the current trend of increasing sensor resolutions, I think that the emphasis on demosaicing accuracy is becoming less and less relevant. At some point, other factors become predominant: micro camera shake in handheld shots, slight focussing errors, lens sharpness, etc…

The full potential of a 50Mpx sensor can only be exploited by shooting on a tripod with a razor-sharp professional lens and very accurate focussing. And I am ready to bet that CA will not be an issue in this case.

Moreover, the rapidity of the color variations introduced by CA depends on the sensor resolution: the higher the resolution, the smoother the color variations will look at the scale of the CFA pattern, and at some point they will no longer be an issue for the demosaicing.

So personally I would still be in favour of developing a more general CA correction tool that works at the RGB level, and for the moment only put dedicated efforts on the initial CA analysis step for the X-trans case. Here “initial CA analysis” means for me the pre-processing step where the CA correction parameters are derived from the RAW data.

For the correction of the RGB data, one could re-use the code which already exists for the LensFun or LCP cases, and simply apply a different correction model…
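For what it’s worth, the kind of RGB-level correction I have in mind could be sketched roughly like this. This is a toy model, not PhF or RT code: the simple radial magnification, the per-channel scale factor, and the bilinear resampler are all illustrative assumptions, and a real scale factor would come from the CA analysis step (or from a LensFun/LCP profile).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One color plane of a demosaiced image (hypothetical layout).
struct Plane {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const { return px[y * w + x]; }
};

// Bilinear sample with clamping at the image borders.
static float sample(const Plane& p, float x, float y) {
    x = std::fmin(std::fmax(x, 0.0f), p.w - 1.001f);
    y = std::fmin(std::fmax(y, 0.0f), p.h - 1.001f);
    int x0 = (int)x, y0 = (int)y;
    float fx = x - x0, fy = y - y0;
    return p.at(x0, y0)         * (1 - fx) * (1 - fy)
         + p.at(x0 + 1, y0)     * fx       * (1 - fy)
         + p.at(x0, y0 + 1)     * (1 - fx) * fy
         + p.at(x0 + 1, y0 + 1) * fx       * fy;
}

// Undo a radial magnification s of one channel (R or B) relative to G:
// each output pixel reads from (cx + (x-cx)/s, cy + (y-cy)/s).
Plane correct_channel(const Plane& in, float s) {
    Plane out{in.w, in.h, std::vector<float>(in.px.size())};
    float cx = (in.w - 1) * 0.5f, cy = (in.h - 1) * 0.5f;
    for (int y = 0; y < in.h; ++y)
        for (int x = 0; x < in.w; ++x)
            out.px[y * in.w + x] =
                sample(in, cx + (x - cx) / s, cy + (y - cy) / s);
    return out;
}
```

The point is that nothing here cares about the CFA pattern: the same routine would serve Bayer and X-trans files alike, which is exactly the generality argument above.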

Although top-end lenses are indeed relatively free of chromatic aberrations, there remain mirrorless lenses that rely on software correction, and pretty much all zooms still have lateral CA at some zoom setting or another. Ultrawide zooms, even the best of the best, still have fairly strong lateral CA, and because the scale of subject features is so small relative to the CA, they benefit greatly from having CA corrected prior to the main demosaic.

On the other hand, I think there’s not much that actually prevents this from working with Fuji raws. Maybe you can’t use the existing code, but there’s nothing about the algorithm, in my understanding, that prevents you from doing the same thing with the X-trans array.

I agree that there is nothing fundamentally complex in doing pre-demosaicing CA correction for X-trans; it is mostly a question of having time to devote to the task…

I’m afraid I will not be of much help on that during the next few weeks, but I’ll do my best to at least integrate and test whatever experimental code is made available.

I have just finished implementing the “slow 3-pass” X-trans demosaicing method from RT, and pushed the changes to github.

That’s indeed not the fastest demosaicing method out there, and in the PhF version all SSE2 optimisations are still turned off, but it should be acceptable as a starting point.

I propose to prepare Win and OSX packages from this version. CA correction will certainly be the next step, but it will not come tomorrow, I’m afraid :wink:


Heh, the “slow 3-pass” X-trans demosaicing method from RT demosaics a 24 MP X-trans file in less than 1 second on my machine. Compared to the original dcraw code, that’s quite fast…
Hint: Speed of 3-pass X-trans demosaicing highly depends on tile size!
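One side of that trade-off is easy to quantify: with a border of B pixels around each T x T tile, the fraction of redundantly processed border pixels grows quickly as tiles shrink. The toy model below only counts redundant pixels; it deliberately ignores the cache effects that make very large tiles slow again, and B = 12 is just an assumed border width.

```cpp
#include <cassert>
#include <cmath>

// Ratio of pixels read to pixels produced for a T x T tile that must be
// extended by a B-pixel border on every side before demosaicing.
double overhead(int T, int B) {
    double win = double(T + 2 * B);
    return (win * win) / (double(T) * double(T));
}
```

For example, with B = 12, a 114-pixel tile reads about 1.47x the pixels it produces, a 57-pixel tile about 2.02x; doubling the tile size shrinks the overhead but eventually trades it for cache misses, which is presumably why there is a sweet spot.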

Ingo


Sounds pretty exciting :slight_smile:

Please let me know here when the Mac & Windows versions are ready; I agree not to wait for the CA correction.

Btw, I have not heard back from the French translator.

Thanks again for your hard work.

For the moment I kept the original tile size of the RT version… I’m quite confident that the choice is optimal for the PhF case as well.

I think that the main limitation of the PhF version remains the fact that SSE2 optimisations are turned off. They might work out-of-the-box, but I first want to double-check the basic implementation before debugging the SSE2 code.

The good news is that I’ve been able to use the RT code almost as-is. I’m basically telling the RT demosaicing routine to process a 114x114-pixel X-trans image instead of a 24 MPx one, repeating that for each 114x114 tile (with an overlap region of 12 pixels for each tile).

There are a few things that are re-computed for each tile and that could be pre-computed to save some CPU cycles, but for the moment I am giving priority to simplicity and ease of debugging over speed…
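The tiling scheme described above (walk the image tile by tile, hand the kernel a window padded by the overlap, keep only the interior) can be illustrated with a toy kernel. This is a sketch, not the actual PhF/RT code: a 3x3 box filter with a 1-pixel overlap stands in for the real demosaicer, which needs the 12-pixel overlap mentioned above.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in for the demosaicing kernel: a 3x3 box filter that needs a
// 1-pixel neighborhood around each output pixel.
static float kernel3x3(const std::vector<float>& img, int w, int h, int x, int y) {
    float sum = 0; int n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int xx = x + dx, yy = y + dy;
            if (xx >= 0 && xx < w && yy >= 0 && yy < h) { sum += img[yy * w + xx]; ++n; }
        }
    return sum / n;
}

// Process the image in tile x tile blocks; each block is extended by
// `overlap` pixels so the kernel sees the same neighborhood it would see
// on the full image, then only the tile interior is copied to the output.
std::vector<float> process_tiled(const std::vector<float>& img, int w, int h,
                                 int tile, int overlap) {
    std::vector<float> out(img.size());
    for (int ty = 0; ty < h; ty += tile)
        for (int tx = 0; tx < w; tx += tile) {
            // Padded window [x0,x1) x [y0,y1), clamped to the image bounds.
            int x0 = std::max(tx - overlap, 0), y0 = std::max(ty - overlap, 0);
            int x1 = std::min(tx + tile + overlap, w);
            int y1 = std::min(ty + tile + overlap, h);
            int bw = x1 - x0, bh = y1 - y0;
            std::vector<float> buf(bw * bh);
            for (int y = y0; y < y1; ++y)
                for (int x = x0; x < x1; ++x)
                    buf[(y - y0) * bw + (x - x0)] = img[y * w + x];
            // Run the kernel on the window only; keep the tile interior.
            for (int y = ty; y < std::min(ty + tile, h); ++y)
                for (int x = tx; x < std::min(tx + tile, w); ++x)
                    out[y * w + x] = kernel3x3(buf, bw, bh, x - x0, y - y0);
        }
    return out;
}
```

As long as the overlap covers the kernel’s support, the tiled result matches a whole-image pass exactly, which is what makes it safe to reuse the RT routine unmodified on each padded tile.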

May I ask about the processing time for xtrans files on your machine? Maybe with information about your machine?

I will do some benchmarks with X-trans and Bayer images of similar resolution and let you know.

I’m using a MacBook Pro laptop with a 1.9 GHz i5 CPU (4 cores).

Ok

I’m using an 8-core AMD at 4 GHz…