Manufacturer lens correction module for Darktable

Camera manufacturers normally crop more at the borders than darktable and RawTherapee do. In RawTherapee's demosaicing menu you can choose whether to crop more.

2 Likes

Thank you Peter. I will try that.

Which command in Exiv2 or ExifTool should I use to get all maker notes like this?

It must be upscaling. I’ve noticed quite a few cameras do that with wide-angle lenses. Presumably the corrections eat up pixels, but they know people would be confused if some images came out a fair bit smaller. So they just hide the crop by upscaling.

In addition, RT uses more of the sensor in the first place, which accounts for those couple of extra pixels.

1 Like

The camera crops slightly to get rid of any demosaic artifacts near the edges; RT does not do this.

2 Likes

Yes, but if you look at the framing, way more pixels are missing from the SOOC image than can be accounted for by the 8 extra px RT uses. Check the post at the mid-lower edge or the trees at either horizon. There are a lot of pixels missing. So to crop like that and still end up with 4000x3000 pixels, you have to scale it back up.

1 Like

A crop may also be due to in-camera lens distortion correction.

This is for Canon:
I took some pictures to check vignetting. I took from widest aperture to f/11 in two sets. One with Peripheral correction off and one with Peripheral correction on.

No lens correction data for the 85/1.4L IS on my 6D, so just one set. I will need to download lens correction data for that lens to that camera later.
For Canon EOS M5 the tags 0x0014 to 0x0018 are changing with the aperture.
For Canon EOS 6D the tags 0x000d to 0x0012 are changing with the aperture.

For M5 + 50/1.2L at f/1.2 I got:
6898 4939 2757 2041 1481
or if I also include values that aren’t changing or not changing much:
8191 6898 4939 2757 2041 1481 0

For M5 + 50/1.2L at f/11 I got:
8093 7811 7246 6954 6653
or
8190 8093 7811 7246 6954 6653 0

For 6D + 50/1.2L at f/1.2 I got:
6898 4939 2757 2041 1481
or if I also include values that aren’t changing or not changing much:
8191 6898 4939 2757 2041 1481 0

For 6D + 50/1.2L at f/11 I got:
8093 7811 7246 6954 6653
or
8190 8093 7811 7246 6954 6653 0

Exiftool command:
exiftool -csv -u -U -H "-Model" "-lensID" "-ApertureValue" "-*Peripheral*" "-*Vignetting*" *CR2 > M5-6D.csv
M5-6D-7D.ods (113.6 KB)

1 Like

Panasonic Lumix S lenses are not supported by lensfun at the moment, so they are not supported in darktable either, which is a pity.
Every RW2 raw file from a Lumix camera does contain lens correction data in a proprietary format. With exiftool one can extract 7 distortion parameters, see
https://exiftool.org/TagNames/PanasonicRaw.html#DistortionInfo
$ exiftool P1000275.RW2 | grep -i Distortion
Distortion Param 02 : 0.002044677734375
Distortion Param 04 : 0.032867431640625
Distortion Scale : 1.00030526894194
Distortion Correction : On
Distortion Param 08 : 0.132843017578125
Distortion Param 09 : 0.01715087890625
Distortion Param 11 : -0.0714111328125
What I have to find out is what they mean. My guess is that they are parameters according to the Adobe camera model, which would need 5 parameters. See the work at:
https://syscall.eu/#pana
I’m very excited about your module for darktable. If we figure out which model these parameters fit, there is a chance to support every Lumix S lens out there out of the box, including those yet to be released.

5 Likes

And I’m the driver of the car that has no idea what makes it go!

Still, I read these threads in hopes of learning something. I’m very thankful for all the work that goes into darktable by all concerned! :pray:

Help!
I forked exiv2 and coded the first time in my life in C++:

This is how far I got:
$:~/dev/exiv2/build/bin> ./exiv2 -g "Exif.PanasonicRaw.DistortionInfo" p.rw2
Exif.PanasonicRaw.DistortionInfo Undefined 32 216
I can search the Panasonic RAW file for Distortion Information in the Makernotes, Tag 0x119, it is found and 32 bytes long.

Now I have to decode the 32 Byte according to:
https://exiftool.org/TagNames/PanasonicRaw.html#DistortionInfo

My one-day knowledge of C++ is not sufficient, I’m afraid, to get these integers out the right way.
Could anyone show me how to extract, for example, a signed integer from bytes 4 and 5 (= DistortionParameter02)?
This then has to be divided by 32768.0 to get the correction parameter.
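A minimal sketch of that extraction (the function name is mine, not exiv2 API, and little-endian byte order is an assumption; swap the two bytes if the values come out wrong):

```cpp
#include <cstdint>

// Sketch: decode one signed 16-bit parameter from the 32-byte
// DistortionInfo payload and scale by 32768.0. Little-endian byte
// order is an assumption here.
float decode_param(const unsigned char *buf, int offset)
{
    const uint16_t u = (uint16_t)(buf[offset] | (buf[offset + 1] << 8));
    const int16_t raw = (int16_t)u;   // reinterpret the bit pattern as signed
    return raw / 32768.0f;
}
```

For example, bytes 0x43 0x00 at offset 4 decode to 67, and 67 / 32768 = 0.002044677734375, the Distortion Param 02 value exiftool printed above. The same pattern reads the other parameters at their documented offsets.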

Well, according to the information you provide, DistortionParameter02 is a (signed) 16-bit integer, not a rational or a floating point number. You’d need a second number to define a rational.

But, given that you showed Exiftool can read that info, isn’t it much easier to check first if those parameters vary between images? If they don’t for a given lens, and the model is supported by lensfun, it would be much easier to extract the parameters once with exiftool, and then add the data to lensfun.

Alternatively, check how Exiftool calculates the distortion parameters as floats?

Yes, you are right. They are 16-bit signed integers. For the correction to work, they are divided by 32768 afterwards, according to the research at:
https://syscall.eu/#pana
We are speaking of zoom lenses here, and the correction data varies from image to image. What lensfun does is interpolate between the focal lengths given. To be exact you need lots of them, and you will never be as accurate as the data from the manufacturer.
AND: this quest for lensfun correction data begins anew with every new lens. If we could read the lens correction data out of the raw file, we would have support from day 1.

2 Likes

I corrected my pull request for exiv2.
If it is included, darktable will be able to at least read the manufacturer data on distortion correction and chromatic aberration from Panasonic files. It is up to a manufacturer lens correction module to make sense of the parameters. Even if their meaning is not exactly known yet, this will allow experimenting with different interpretations to reverse engineer the manufacturer's model.

4 Likes

https://syscall.eu/#pana

https://www.andrewj.com//mft/mftproject.php

Yes, I know. Thanks.

But
https://www.andrewj.com/mft/algorithms.php
mentions 4 different models working with parameters 2,4,5,8,9,11 in different formulas.
To get it right we need a raw converter where we can try it out with immediate feedback.

1 Like

Just a quick note – I have a Sony RX100M2 and looked at the chromatic aberration data in the EXIF. There are only 22 values (11 per channel). I interpret each value as a distortion of the radius from the sensor center by a factor of 1 + value/2^21.

Applying that to the Bayer matrix data before demosaicing gives the best result by far (I’ve used libraw and libtiff to generate my own TIFFs directly for stitching panos). The current DT raw TCA algorithm appears to be eliminating some saturated pixels that are in reality saturated, and to be missing R or B layer shifts on high-contrast edges. The TCA applied by lensfun after demosaicing is also objectively worse than using this manufacturer correction. So this is very exciting.
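As a sketch of that interpretation (the function name is mine, and equal spacing of the 11 knots in normalized radius is an assumption):

```cpp
// Sketch of the 1 + value/2^21 interpretation: given 11 per-channel
// CA values, return the radial scale factor at normalized radius
// r_norm in [0, 1]. Equal knot spacing is an assumption.
float ca_radial_scale(const int values[11], float r_norm)
{
    const float f = r_norm * 10.0f;          // 11 knots -> 10 segments
    int i = (int)f;
    if (i >= 10) i = 9;                      // clamp to the last segment
    const float t = f - (float)i;            // blend within the segment
    const float v = values[i] * (1.0f - t) + values[i + 1] * t;
    return 1.0f + v / 2097152.0f;            // 2^21 = 2097152
}
```

A pixel at distance r from the center would then be resampled from radius r * ca_radial_scale(values, r / r_max) in the R or B plane.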

An efficiency gain can be had by utilizing the radial symmetry of the problem – for every pixel in the first quadrant of the image, there are 3 matching pixels in the other 3 quadrants with the same radius, only different by the signs of the distance (row or column) from the optical image center. I realize the DT module may not be able to take advantage of that symmetry in all situations, but it can save some calls to sqrt() .

Another efficiency gain could be had by recomputing the current values. I just use a segmented linear function with radius as the input (I believe that is what I see in the DT module being built). It would be trivial to convert that to a form that uses radius^2 as the input, which would eliminate all calls to sqrt(). You would probably want to increase the number of values in the new form to maintain good resolution of the distortion factor.

Many thanks for discovering the secret of the built-in CA correction for Sony raw files. I wish there were a similar discovery for Canon CR2 raw files!

Cheers,
Jeff Welty

3 Likes

Square roots are really not that expensive these days: maybe a 12-cycle latency (single precision, scalar) on a recent CPU. By comparison, an FMA is about five cycles of latency. Given that a single square root will be reused three times (once for each channel), it really is negligible compared to everything else (spline evaluation and image interpolation).

If one really wanted to improve performance, the thing to do would be to pre-compute the factors of 1 / (xi[i] - xi[i - 1]) which appear in the interpolate function. Divisions are somewhat more expensive than square roots, and typically each call to interpolate will evaluate several of them. Combine this with the fact that we make three interpolate calls for each sqrtf call, and one can see it is much more important.
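A sketch of that pre-computation (xi, yi, and the function names are placeholders, not the actual darktable code):

```cpp
#include <vector>

// Pre-compute the reciprocal knot spacings once per frame, so each
// interpolation evaluates a multiply instead of a divide.
std::vector<float> precompute_inv_spacing(const float *xi, int n)
{
    std::vector<float> inv(n);
    inv[0] = 0.0f;                              // unused slot, keeps indices aligned
    for (int i = 1; i < n; ++i)
        inv[i] = 1.0f / (xi[i] - xi[i - 1]);
    return inv;
}

// Linear interpolation on segment [xi[i-1], xi[i]] using the cache.
float interp(const float *xi, const float *yi, const float *inv,
             int i, float x)
{
    const float t = (x - xi[i - 1]) * inv[i];   // multiply, no divide
    return yi[i - 1] * (1.0f - t) + yi[i] * t;
}
```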

Regards, Freddie.

1 Like

Okay dokay. I’m still stuck in the old days (don’t ask :wink: ) when we worried about that stuff. FWIW, here’s my version of the interpolate function. It depends on equally spaced knots but has no divisions. That doesn’t mean it would be faster; just an FYI.

float raw_tca_distort_amt_sq_basis(float x_sq, const float *knots, int Nminus1)
{
// x_sq is (squared radius from image center) / (maximum squared radius)

if(x_sq > 0.9999f) return knots[Nminus1] ; // in case of rounding error computing x_sq

float fi = x_sq * (float)Nminus1 ;
int i = (int)fi ;          // lower bounding knot is i
float p1 = fi - (float)i ; // proportion of knot i+1 to use
return knots[i]*(1.f - p1) + knots[i+1]*p1 ;

}

Cheers,
Jeff Welty

1 Like

Thank you for making this function first. I’m writing a small script that merges the JPG generated by the Fuji camera (carrying its color preset) with the JPG generated by DxO PureRaw denoising, so users can transfer the color preset to the denoised raw-derived JPG. I have a problem now: Fuji cameras apply lens correction to the JPG automatically, so the pixels of the two JPGs (one from the Fuji camera, one generated by DxO from the RAF) no longer correspond one-to-one. I have tried the lens correction you provided for darktable, but the output image seems bigger than the JPG from my camera, and the correction result is slightly different. I wonder whether you could improve it so this function generates exactly the same result as the Fuji camera does. Then I could apply the same correction to the DxO-generated JPG and make my merging script work. I believe such a script would save Fuji users a lot of time.