Just returned from a US East Coast cruise, where I found time to code a rudimentary camera-informed lens correction capability in rawproc. It's in the master branch on GitHub (yeah, wish I'd done a separate branch); it presently only handles the Nikon Z series (and only distortion so far, still need to code up vignetting), and I'm also going to make a few changes so its use can be configured optionally.
Currently it uses exiftool (versions >= 12.72) to extract the parameters, which significantly adds to the time required to load the raw file. One important task to instigate is an exiv2 issue to incorporate the Nikon tags; I don't plan to retain the exiftool method for long.
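For anyone who wants to poke at the same data, here's a minimal sketch of the extraction step, assuming exiftool is on your PATH; the tag-name filter is a guess, so check `exiftool -j -G <file>` to see the actual names your version emits:

```python
import json
import subprocess

# Minimal sketch: shell out to exiftool (>= 12.72) and pull anything that
# looks like a lens-correction tag out of its JSON output. The substring
# filter below is a placeholder, not the actual rawproc logic.
def read_nikon_correction_tags(nef_path):
    out = subprocess.run(
        ["exiftool", "-j", "-G", nef_path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out)[0]
    return {k: v for k, v in tags.items()
            if "Distortion" in k or "Vignette" in k}

print(read_nikon_correction_tags("DSC_0001.NEF"))
```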
The capability is a mode in the lensfun-based lenscorrection tool; you can toggle between the two to compare corrections. Right now for Nikon it's using the Adobe WarpRectilinear algorithm, which seems to work for most images versus the corresponding lensfun correction, but there are some images that have rendered quite different corrections. Need to open some of the grid images to compare…
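For reference, this is my reading of the WarpRectilinear mapping from the DNG spec, sketched in Python; it's an illustration of the model, not rawproc's actual code, and it assumes coordinates already normalized by the distance from the optical center to the farthest corner:

```python
# DNG WarpRectilinear (per the DNG spec): map a destination pixel (x, y)
# back to the source pixel to sample. (cx, cy) is the optical center,
# kr = (kr0..kr3) the radial terms, kt = (kt0, kt1) the tangential terms.
def warp_rectilinear(x, y, cx, cy, kr, kt):
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    # Radial polynomial: f(r) = kr0 + kr1*r^2 + kr2*r^4 + kr3*r^6
    f = kr[0] + kr[1] * r2 + kr[2] * r2 ** 2 + kr[3] * r2 ** 3
    dx_r, dy_r = f * dx, f * dy
    # Tangential (decentering) terms
    dx_t = kt[0] * 2 * dx * dy + kt[1] * (r2 + 2 * dx * dx)
    dy_t = kt[1] * 2 * dx * dy + kt[0] * (r2 + 2 * dy * dy)
    return cx + dx_r + dx_t, cy + dy_r + dy_t
```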
An interesting thing I found is that correction data appears for my AF-P 70-300mm lens, an F-mount lens I use with the FTZ adapter. Makes me think that Nikon has included a table of legacy lenses in their camera firmware, but is getting Z-lens data from the lens itself, since they don't update camera firmware for new Z lenses.
Anyway, a bit-dodgy implementation, but more is revealed…
Nothing new; right now, I’m in the middle of presenting a 2-part CAD workflow that model railroaders can digest. OCD is a bitch…
Actually, lens-correction-wise, I'm probably going to take on building lensfun correction data for my recently acquired 24-200mm lens; what's in lensfun right now isn't good…
You can test this by running the converter under a debugger and having it break whenever data is read from the file system. If it opens a large file that looks like a database during processing, then it is probably using its own coefficients rather than the ones embedded in the raw file.
Otherwise, a generic approach to find out what it is doing is to use hardware breakpoints. Once the RAW file has been read into memory, scan for where the coefficients are (exiftool will tell you what they are for the raw file in question). Then stick in a hardware breakpoint so you'll trigger whenever this data is read from memory. With this you should quickly be able to figure out which function is responsible for processing the coefficients. This is my recommended approach and what I did for the Sony coefficients.
Beyond that, you can also try doing a parametric study: get exiftool to tell you where the coefficients are in the file (byte offsets) and then change them. Do this en masse (say, making 10,000 changes in total to the various coefficients) and run them all through the converter. With that you'll be able to approximate the Jacobian matrix for d[Adobe coeff]/d[NEF coeff]. If you can get a good fit, it should not be hard to integrate up and figure out what the precise model is; a sketch of the bookkeeping is below.
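A rough sketch of that bookkeeping, where patch_nef, run_converter, and read_adobe_coeffs are hypothetical stand-ins for however you patch the bytes, invoke the converter, and scrape the resulting Adobe coefficients:

```python
import numpy as np

# Finite-difference estimate of d[Adobe coeff]/d[NEF coeff]. The three
# callables are placeholders: patch_nef(coeffs) writes a NEF with the given
# coefficient bytes and returns its path, run_converter(path) produces the
# converter's output, and read_adobe_coeffs(output) extracts the Adobe
# coefficients from it.
def estimate_jacobian(nef_coeffs, patch_nef, run_converter,
                      read_adobe_coeffs, eps=1e-3):
    base = np.asarray(read_adobe_coeffs(run_converter(patch_nef(nef_coeffs))))
    J = np.zeros((base.size, len(nef_coeffs)))
    for j in range(len(nef_coeffs)):
        perturbed = np.array(nef_coeffs, dtype=float)
        perturbed[j] += eps
        out = np.asarray(read_adobe_coeffs(run_converter(patch_nef(perturbed))))
        J[:, j] = (out - base) / eps
    return J
```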
I would focus on: (i) distortion; (ii) vignetting; (iii) CA in that order.
I observed that the floating-point compensation value was converted into an integer and then processed to generate the corresponding shadow-compensation value; to some extent, the two show a positive correlation.