Manufacturer lens correction module for Darktable

I’ve done some googling into the mysterious NEF tag encryption and landed on Laurent’s website, NEF : Nikon Electronic File file format, in which he says that tag 0x0097 (ColorBalance) is encrypted. The TagInfo record in Exiv2 is not configured to decrypt that data. NEF images from my D5300 do not contain tag 0x0097.

@KristijanZic I’m an Apple and Windows guy, and don’t know much about debugging on Linux.

On macOS, I’d use Apple’s file system event monitoring to snoop on the app’s behaviour with your image. On Windows: the Sysinternals utilities. Here’s an article about something similar for Linux: https://www.linuxjournal.com/content/linux-filesystem-events-inotify

I haven’t used this. However, it’s worth a shot. It’s often staggering to discover what goes on “under the hood”.

So the general procedure is to first identify the Makernote region in the CR2 file; Exiftool and Exiv2 can help with this. Next, we want to exclude any part of the Makernote which is understood by either Exiftool or Exiv2. This is a tad trickier, but shouldn’t be too difficult. What we will be left with is some number of bytes in the Makernote whose purpose is unknown. To establish which of those correspond to correction data we can proceed as follows.

At a high level, the plan is to twiddle the bits of the unknown bytes, one byte at a time, and see if that has any impact on the output. This is something we will want to script (and will probably need to run overnight). The general idea is to flip the bits of unknown byte i and then check whether the resulting DNG data is different so far as the pixels are concerned. If so, that byte is interesting. Once we have a list of interesting offsets we can eyeball the results and get a good idea which ones correspond to distortion and vignetting.
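Here is a minimal sketch of what such a script could look like, in Python. The UNKNOWN_OFFSETS list is a placeholder for whatever offsets the previous step leaves over, the convert() command stands in for whichever CR2-to-DNG converter is being probed, and using rawpy to hash the decoded pixels is just one way of ignoring metadata differences:

#!/usr/bin/env python3
# Sketch: flip each unknown Makernote byte in turn and record whether the
# converted DNG's pixel data changes. Offsets and converter are placeholders.
import hashlib
import shutil
import subprocess

SRC = "sample.cr2"
UNKNOWN_OFFSETS = [0x1234, 0x1235]  # hypothetical: bytes unclaimed by exiftool/exiv2

def pixel_hash(dng_path):
    # Hash the decoded pixel data only, so metadata changes do not count
    import rawpy
    with rawpy.imread(dng_path) as raw:
        return hashlib.sha256(raw.raw_image.tobytes()).hexdigest()

def convert(cr2_path, dng_path):
    # Placeholder: invoke whichever CR2 -> DNG converter is under test
    subprocess.run(["converter", cr2_path, dng_path], check=True)

convert(SRC, "baseline.dng")
baseline = pixel_hash("baseline.dng")

interesting = []
for off in UNKNOWN_OFFSETS:
    shutil.copyfile(SRC, "probe.cr2")
    with open("probe.cr2", "r+b") as f:
        f.seek(off)
        byte = f.read(1)[0]
        f.seek(off)
        f.write(bytes([byte ^ 0xFF]))  # invert every bit of this byte
    convert("probe.cr2", "probe.dng")
    if pixel_hash("probe.dng") != baseline:
        interesting.append(off)

print("Offsets that affect the output:", [hex(o) for o in interesting])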

@fdw I don’t really know how to use exiftool and exiv2, and am not familiar with the Exif or TIFF specs beyond the basics, so I’ll have to learn that first.

In the meantime, I did some more tests with the 5Dmk4, the 70-200mm 2.8 III, and Iridient.
Iridient definitely corrects vignetting and lens distortion for images shot with that camera/lens combo!

Any ideas on how to provoke visible chromatic aberrations with this lens? I can’t test the correction as I can’t produce any.

Alright, here are the sample images from the 5Dmk2 and Canon 70-200mm f2.8 IS L III at multiple apertures.

The sequence of images goes like this:

  1. CR2 from camera
  2. Iridient converted CR2 to DNG without any corrections applied
  3. Iridient converted CR2 to DNG with all corrections applied to the DNG image
  4. Iridient converted CR2 to DNG with all corrections not applied to the image but appended in the DNG metadata in the OpCodeList

At this link:
https://drive.google.com/drive/folders/1A0FyP-wbdwwXxAU1KCuCNGokMPA2m_xB?usp=sharing

I’ve tested this lens with the 5Dmk4 and 77D. In both cases Iridient seems to read the lens corrections from the Exif of the CR2. (I cannot prove that, but that’s what they say on their page.)
None of the other lenses I own get corrected. Or at least I can’t perceive any corrections with my eyes.

Note: The 70-200 f2.8 III is the latest EF lens that Canon made. Unfortunately I don’t own any other recent EF lenses.

That’s great, thanks!

Until we figure out how to read the information from other raw formats, would it make sense to implement support for DNG’s opcodes 1 (WarpRectilinear), 2 (WarpFisheye) and 3 (FixVignetteRadial)? (DNG specification, page 89 onward)
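Of the three, opcode 3 (FixVignetteRadial) is the simplest: a radial gain polynomial with five coefficients and an optical centre given as fractions of the image. Here is a minimal sketch in Python based on my reading of the DNG 1.3 spec; the function name, array layout, and normalisation by the distance to the farthest corner are my assumptions:

import numpy as np

def fix_vignette_radial(img, k, cx, cy):
    # img: float array (H, W) or (H, W, C); k: coefficients (k0..k4);
    # (cx, cy): optical centre as fractions of the image in [0, 1].
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    xc, yc = cx * (w - 1), cy * (h - 1)
    dx, dy = x - xc, y - yc
    # Normalise so that r = 1 at the corner farthest from the optical
    # centre (my reading of the spec's normalisation).
    m2 = max(xc, w - 1 - xc) ** 2 + max(yc, h - 1 - yc) ** 2
    r2 = (dx * dx + dy * dy) / m2
    # Gain polynomial per the spec: g = 1 + k0 r^2 + k1 r^4 + ... + k4 r^10
    g = 1 + r2 * (k[0] + r2 * (k[1] + r2 * (k[2] + r2 * (k[3] + r2 * k[4]))))
    return img * (g[..., None] if img.ndim == 3 else g)

WarpRectilinear (opcode 1) is, as I understand it, the usual radial-plus-tangential polynomial applied per plane, so the structure would be similar, just with a coordinate remapping instead of a gain.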

I have verified that while exiv2 doesn’t appear to see the data from Olympus ORF files, it does see it when they are converted by Adobe DNG Converter:

$ exiv2 -p t _6130082.dng
[…]
Exif.SubImage1.OpcodeList3        Undefined 184  0 0 0 1 0 0 0 1 1 3 0 0 0 0 0 0 0 0 0 164 0 0 0 3 63 238 236 96 236 197 62 181 63 162 18 152 195 4 1 ...
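The opcode stream itself is easy to walk: a big-endian count followed by, for each opcode, its ID, the DNG version it requires, flags, and a length-prefixed parameter block. A small sketch (the names are mine):

import struct

OPCODE_NAMES = {1: "WarpRectilinear", 2: "WarpFisheye", 3: "FixVignetteRadial"}

def parse_opcode_list(blob):
    # OpcodeList layout: uint32 count, then per opcode a 16-byte header
    # (uint32 id, 4-byte DNG version, uint32 flags, uint32 param length)
    # followed by the parameter bytes. All values are big-endian.
    (count,) = struct.unpack_from(">I", blob, 0)
    pos = 4
    for _ in range(count):
        oid, v0, v1, v2, v3, flags, nbytes = struct.unpack_from(">I4BII", blob, pos)
        pos += 16
        params = blob[pos:pos + nbytes]
        pos += nbytes
        yield OPCODE_NAMES.get(oid, f"Unknown({oid})"), (v0, v1, v2, v3), flags, params

Fed the 184 bytes above, this should report a single opcode with ID 1 (WarpRectilinear), DNG version 1.3.0.0, flags 0, and 164 parameter bytes, which is consistent with opcode 1 carrying three planes of six doubles plus a two-double optical centre (4 + 3 × 6 × 8 + 2 × 8 = 164).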

I am just curious …

  1. Let’s say I turn on this new module, and then also turn on the lensfun-based lens correction module. Wouldn’t that require care not to apply the same correction twice? Also, does this module cover all the fixes that lensfun does?
  2. Speaking of lensfun: wouldn’t this feature be better implemented within lensfun?

Yes, you could have the same (or similar) correction applied twice. Thus a degree of care is required when deciding which module should correct which aberration. However, this is something users already need to be aware of: Darktable contains several modules which can correct CA, and several modules which can be used to denoise a photo. Turn them all on and the result is not going to be pretty. That said, there are reasons why you might want to mix and match, for example taking the vignetting from the lens and the distortion from lensfun.

For a regular lens this module covers all of the same fixes as lensfun, although many lensfun profiles lack the ability to correct vignetting.

I did look at integrating these concepts into lensfun. However, to put it bluntly, lensfun is a poorly designed library. One of many issues is that the lens correction structure is exposed in a header file and pre-defined as having space for six correction coefficients. I need more than this, and changing it would break the ABI. Further, the lensfun interface is very much geared towards it providing you with the corrections given data about the lens. Here, we want to tell it what the corrections are, rather than have it read them from a database. (Really, lensfun should be split into two libraries, one handling the database side of things and the other handling the corrections, but I digress.)


We need the split of data and libraries for other cases too … just look at how many libraries need changes just to support a single lens.

Nikon most definitely did so back in the day, and that was a major factor (around 2008-ish?) in why I didn’t even consider Nikon when I bought my first DSLR. This was back when the DMCA was fairly recent and there was a lot of concern that Nikon might use it to go after open source developers for “breaking encryption”. (Unlike the ISOBMFF patent mess, there were a lot of examples at the time of companies abusing the DMCA’s “breaking encryption” clauses to go after open source developers, so the fear was legitimate.) Good to hear they stopped doing that.


The model described in the attached document seems very similar to the well-known and well-understood Adobe LCM. It should therefore not be too difficult to support (although perhaps without the tangential corrections, which seem somewhat esoteric).

However, it is my opinion that this model should be the last to be supported. I consider DNG files, in the main, to be a Trojan horse. Having open standards is nice, but only if camera manufacturers take advantage of them. This is broadly not the case with DNG. Rather, it moves the problem of understanding proprietary RAW files one level upstream, from the application to the RAW-to-DNG converter. But in order to do this conversion you still need to be able to understand the original RAW files, and if you can do that, you might as well put that understanding in a library and incorporate the library into your application, thus enabling it to work with camera RAWs directly, without conversion.

The Trojan horse aspect comes in by Adobe providing a converter free of charge. The existence of this converter (even though it does not run natively on all platforms and has some license issues) disincentivizes the community from trying to solve the fundamental problems. After all, while converting to DNG may be clunky, it does work, and it works well enough that people lose interest in solving the real problem. As a wise man once quipped, “[There] is nothing more permanent than a temporary fix.”


Yup. The only case where DNG has been beneficial is with niche/smaller manufacturers that know they don’t have enough market share to survive with their own RAW format - most VR camera manufacturers, UAV camera manufacturers (DJI, Yuneec, etc.) - and even then, 90% of them are incompetent and can’t be arsed to read the DNG spec, leading to people having to put in camera-specific workarounds anyway! (Fortunately, the majority of the screwups are with embedded color profile information, so the workaround is generating a DCP file.)

That reminds me, I still need to power on my Phantom 2 long enough to take ColorChecker shots so I can properly profile it, because DJI botched the ColorMatrix data.

Currently the module understands Sony and Fuji (APS-C) correction data.

Doesn’t want to work on my RAF file produced by X-T30. Says the file type is unsupported.

Do you have the exact error message for the file in question? The Exiftool output for it would be helpful, too.

Do you have the exact error message for the file in question?

Of course not. The module is unavailable for this file format and thus can’t be toggled on/off to generate error output.

The relevant bit of the exiftool output is this:

Geometric Distortion Params     : 416.5555556 0.3535211268 0.5 0.6126760563 0.7070422535 0.7908450704 0.8661971831 0.9352112676 1 1.06056338 0.4979095459 0.9861907959 1.484115601 1.991668701 2.537765503 3.095123291 3.671737671 4.266021729 4.860321045
WB GRB Levels Standard          : 302 372 797 17 302 628 464 21
WB GRB Levels Auto              : 302 518 606
WB GRB Levels                   : 302 552 526
Chromatic Aberration Params     : 416.5555556 0.3535211268 0.5 0.6126760563 0.7070422535 0.7908450704 0.8661971831 0.9352112676 1 1.06056338 3.051757812e-05 0 -3.051757812e-05 -9.155273438e-05 -0.0001525878906 -0.0001831054688 -0.0002136230469 -0.0002746582031 -0.0002746582031 0.0004272460938 0.0004577636719 0.0005493164062 0.0006103515625 0.0007019042969 0.0007629394531 0.0008239746094 0.0009155273438 0.0009155273438 416.5555556
Vignetting Params               : 416.5555556 0.3535211268 0.5 0.6126760563 0.7070422535 0.7908450704 0.8661971831 0.9352112676 1 1.06056338 92.58544922 87.85888672 83.55761719 80 76.87646484 73.74853516 71.38330078 69.02587891 66.81884766

Am I understanding correctly? You seem to be saying that since the lens correction data is in the raw file, DT should get it from there, and we should not rely on Adobe data. I wonder if this is being a little too purist? If the camera makers won’t tell us what their data looks like, but are happy to tell Adobe, then why not swallow some pride and get the data from the Adobe LCPs etc.?

Also, could DT user priorities shape what you build? Should you try a survey? Perhaps most people value the geometric correction most? Perhaps the vignetting correction is important? Or not?

How about a solution where DT goes online? DT often knows your camera and lens, and if it doesn’t know the latter, the user can specify it. DT then goes to a “backend” over the net where the parameters are retrieved, and the correction is made on the user’s machine. The backend would need a one-off population exercise to extract the relevant data from LCP(?) files. The backend would be controlled by DT/FOSS people, presumably.


That output looks good to me. The lens in question has ~4.27% distortion at the edge of the frame. Further, brightness at the edge of the frame is ~69% of that in the centre.
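For anyone wanting to sanity-check such output themselves, here is a small Python snippet reading off the edge values from the data above, on the assumption that the first number in each list is a scale factor, the next nine are knot positions in normalised radius, and the remaining values are sampled at those knots:

# Knot positions (normalised radius) and values copied from the
# exiftool output above.
radii = [0.3535211268, 0.5, 0.6126760563, 0.7070422535, 0.7908450704,
         0.8661971831, 0.9352112676, 1.0, 1.06056338]
distortion = [0.4979095459, 0.9861907959, 1.484115601, 1.991668701,
              2.537765503, 3.095123291, 3.671737671, 4.266021729, 4.860321045]
vignetting = [92.58544922, 87.85888672, 83.55761719, 80.0, 76.87646484,
              73.74853516, 71.38330078, 69.02587891, 66.81884766]

edge = radii.index(1.0)  # r = 1 is the edge of the frame
print(f"Distortion at the edge:  {distortion[edge]:.2f}%")  # ~4.27%
print(f"Brightness at the edge:  {vignetting[edge]:.1f}%")  # ~69.0%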

Can you confirm you are running Darktable against the latest version of Exiv2 (branch 0.27-maintenance)?

$ apt-cache show libexiv2-27 | grep Version
Version: 0.27.3-3

As per “Add a built-in lens correction module” by FreddieWitherden (Pull Request #7092, darktable-org/darktable on GitHub), the module requires Exiv2 0.27.4 (currently unreleased) or newer.

the module requires Exiv2 0.27.4

Ah, that’s it. Thanks!