Reverse engineering Nikon Z-series lens correction

I’m thinking of this, but maybe I’m wrong: Nikon NIKKOR Z 85mm f/1.8 S - DXOMARK
Chromatic aberration and profiles.

I’m not sure about Nikon, but I had been looking into the distortion / TCA correction for Olympus, and it definitely does change with focus distance. Lensfun currently can’t support this, but the manufacturer’s lens correction tags can.

I found a bit of time to pick apart the alleged distortion and vignetting tags in Z NEFs. First, the assertion:

# extra info found in IFD0 of NEF files (ref PH, Z6/Z7)
%Image::ExifTool::Nikon::NEFInfo = (
    GROUPS => { 0 => 'MakerNotes', 2 => 'Camera' },
    NOTES => q{
        As-yet unknown information found in SubIFD1 tag 0xc7d5 of NEF images from
        cameras such as the Z6 and Z7, and NRW images from some Coolpix cameras.
    },
    # 0x01 - undef[12]
    # 0x02 - undef[148]
    # 0x03 - undef[284]
    # 0x04 - undef[148,212]
    # 0x05 - undef[84] (barrel distortion params at offsets 0x14,0x1c,0x24, ref 28)
    # 0x06 - undef[116] (vignette correction params at offsets 0x24,0x34,0x44, ref 28)
    # 0x07 - undef[104]
    # 0x08 - undef[24]
    # 0x09 - undef[36]
);

ref: https://github.com/exiftool/exiftool/blob/master/lib/Image/ExifTool/Nikon.pm#L11352

So, I took one of my Z 6 NEFs and extracted the relevant tags with:

exiftool -v4 DSZ_4168.NEF

and retrieved the relevant tag information:

 | | + [MakerNotes directory with 5 entries]
  | | | 0)  Nikon_NEFInfo_0x0005 = 0100...Y............L..........@-
  | | |     - Tag 0x0005 (84 bytes, undef[84]):
  | | |        45764: 30 31 30 30 03 01 00 00 9a 59 01 00 00 10 00 00 [0100.....Y......]
  | | |        45774: 04 00 00 00 97 a0 00 00 00 00 10 00 1a c7 ff ff [................]
  | | |        45784: 00 00 10 00 d9 4c ff ff 00 00 10 00 00 00 00 00 [.....L..........]
  | | |        45794: 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 [................]
  | | |        457a4: f8 e4 ff ff 00 00 10 00 00 00 00 00 00 00 00 00 [................]
  | | |        457b4: 00 00 40 2d                                     [..@-]
  | | | 1)  Nikon_NEFInfo_0x0006 = 0100...Y...x...........................
  | | |     - Tag 0x0006 (116 bytes, undef[116]):
  | | |        457b8: 30 31 30 30 01 01 00 00 9a 59 01 00 00 10 00 00 [0100.....Y......]
  | | |        457c8: 08 00 00 00 78 1c 07 00 00 00 10 00 00 00 00 00 [....x...........]
  | | |        457d8: 01 00 00 00 0f a4 fa ff 00 00 10 00 00 00 00 00 [................]
  | | |        457e8: 01 00 00 00 e5 0d 03 00 00 00 10 00 00 00 00 00 [................]
  | | |        457f8: 01 00 00 00 8c b1 08 00 00 00 10 00 00 00 00 00 [................]
  | | |        45808: 01 00 00 00 01 00 00 00 01 00 00 00 00 00 00 00 [................]
  | | |        45818: 1d 02 00 00 00 04 00 00 00 00 00 00 00 00 00 00 [................]
  | | |        45828: 00 00 b1 13                                     [....]

Taking tag 0x0005, the alleged distortion correction information, I extracted the bytes at the 0x14, 0x1c, and 0x24 offsets, 8 bytes each; each group appears to encode a 64-bit rational number, with the numerator in the first four bytes and the denominator in the remaining four (little-endian). Here’s the decoding:

0x14:

byte string    base-10 integer
a3 3c 00 00    15523
00 00 10 00    1048576
15523 / 1048576 = 0.014803886

So far, so good, looks like it’s in the range of a distortion parameter. Now for the next offset:

0x1C:

byte string    base-10 integer
0d e2 ff ff    4294959629
00 00 10 00    1048576
4294959629 / 1048576 = 4095.992688

Hmmm, not sure about this one. Now, the last offset:

0x24:

byte string    base-10 integer
6c 78 00 00    30828
00 00 10 00    1048576
30828 / 1048576 = 0.029399871

Okay, that one looks plausible.
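
For anyone who wants to double-check the arithmetic, here’s a minimal sketch of that decode in C++. It assumes the little-endian byte order used above and hard-codes the eight bytes from the 0x14 example:

#include <cstdint>
#include <iostream>

// Assemble a little-endian 32-bit value from four raw bytes.
uint32_t le32(const unsigned char b[4])
{
	return (uint32_t) b[0] | ((uint32_t) b[1] << 8) | ((uint32_t) b[2] << 16) | ((uint32_t) b[3] << 24);
}

int main()
{
	// The eight bytes at offset 0x14: numerator a3 3c 00 00, denominator 00 00 10 00.
	unsigned char num[4] = { 0xa3, 0x3c, 0x00, 0x00 };
	unsigned char den[4] = { 0x00, 0x00, 0x10, 0x00 };

	std::cout << le32(num) << " / " << le32(den) << " = "
	          << (double) le32(num) / (double) le32(den) << std::endl;
	// prints: 15523 / 1048576 = 0.0148039
}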

I’ve done no research into what algorithm may be involved; I’m just reporting on a possible decoding of the metadata. It may be that the parameters line up with the Adobe DNG Opcode 3 specification; when I get a chance I’ll convert that NEF to a DNG and take a look at those parameters.

Edit (2023-09-09): On closer inspection, one might notice that the exiftool extract above has different values from those used in the subsequent dissection. The dissection was developed with numbers from a different NEF; that’s what I get for doing this work piecemeal across two different computers. The subsequent posts are all based on one NEF, so that is good. Mea culpa…


Do you treat those numbers as unsigned? If you use signed integers, you’d get -3571 as the numerator, or -0.00340557 for the second parameter (offset 0x1C). Not out of bounds for a 2nd-order model.


Thanks, that’s what I suspected, but I haven’t previously worked enough with binary data to map that out.


Okay, a little more fun: I wrote a short C++ program, nefinfo.cpp, that extracts the float values from the NEFInfo tag data that exiftool pipes into it. Compile it and run it with exiftool like this:

exiftool -b -U -s -Nikon_NEFInfo_0x0005  DSZ_4168.NEF | ./nefinfo
0x14: 0.0392065
0x1c: -0.0138912
0x24: -0.0437384
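
(For what it’s worth, the compile step should just be something like the following, assuming g++ or any other C++11-capable compiler:)

g++ -std=c++11 -o nefinfo nefinfo.cpp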

Here’s the entire program:

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
#include <iostream>
#include <fstream>


// Slurp the raw tag bytes from a stream (file or stdin) into a byte vector.
std::vector<unsigned char> readData(std::istream& in)
{
	std::vector<unsigned char> fdata;
	// Checking the return value of get() avoids appending a spurious byte,
	// since eof() only turns true after the failed read.
	for (int c = in.get(); c != std::istream::traits_type::eof(); c = in.get())
		fdata.push_back((unsigned char) c);
	return fdata;
}

// Read a little-endian signed 32-bit integer at the given offset.
// (memcpy avoids unaligned/aliasing problems; like the original pointer
// casts, this assumes a little-endian host.)
int32_t int32AtOffset(const unsigned char * data, unsigned offset)
{
	int32_t v;
	std::memcpy(&v, data + offset, sizeof(v));
	return v;
}

// Signed rational stored as two consecutive 32-bit integers:
// numerator at offset, denominator at offset+4.
float rational32AtOffset(const unsigned char * data, unsigned offset)
{
	return (float) int32AtOffset(data, offset) / (float) int32AtOffset(data, offset + 4);
}

// Same idea, but with the denominator 8 bytes past the numerator, a guess
// at the layout of the asserted vignetting parameters.
float rational64AtOffset(const unsigned char * data, unsigned offset)
{
	return (float) int32AtOffset(data, offset) / (float) int32AtOffset(data, offset + 8);
}


int main(int argc, char **argv)
{
	std::vector<unsigned char> fdata;

	if (argc >= 2) {
		// Open in binary mode so nothing gets translated on the way in.
		std::ifstream f(std::string(argv[1]), std::ifstream::in | std::ifstream::binary);
		if (f) {
			fdata = readData(f);
		} else {
			std::cout << "File read error" << std::endl;
			return 1;
		}
	}
	else
		// Piped from exiftool; stdin is assumed binary-clean (fine on Linux).
		fdata = readData(std::cin);

	// Tag 0x0005 is 84 bytes; anything shorter can't hold the offsets we want.
	if (fdata.size() < 84) {
		std::cout << "data chunk too small: " << fdata.size() << std::endl;
		return 1;
	}

//for debugging the byte array:
//	for (std::vector<unsigned char>::iterator it = fdata.begin(); it != fdata.end(); ++it)
//		std::cout << std::hex << (unsigned int) *it << " ";
//	std::cout << std::endl;

	const unsigned char* p = fdata.data();

	// Alleged distortion parameters (tag 0x0005).
	std::cout << "0x14: " << rational32AtOffset(p, 0x14) << std::endl;
	std::cout << "0x1c: " << rational32AtOffset(p, 0x1c) << std::endl;
	std::cout << "0x24: " << rational32AtOffset(p, 0x24) << std::endl;

//use these for the asserted vignetting parameters (tag 0x0006):
//	std::cout << "0x24: " << rational64AtOffset(p, 0x24) << std::endl;
//	std::cout << "0x34: " << rational64AtOffset(p, 0x34) << std::endl;
//	std::cout << "0x44: " << rational64AtOffset(p, 0x44) << std::endl;
}

Corrections and criticisms welcome…


…and even more fun, I took some lenscap-dark shots at different focal lengths (Z 24-70 f/4) to see how the numbers progressed. Two series across the marked focal lengths, then three shots in quick succession at 24. Manual mode, 1/100 sec, f/8, ISO 100:

24: 0x14: 0.0351858  0x1c: 0.0260153  0x24: -0.129545
28: 0x14: 0.0428581  0x1c: -0.0105991  0x24: -0.0644388
35: 0x14: 0.0345201  0x1c: -0.0237732  0x24: 0.00626278
50: 0x14: 0.0168142  0x1c: -0.00927734  0x24: 0.0295153
70: 0x14: 0.0134401  0x1c: -0.00747871  0x24: 0.0436573

24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
28: 0x14: 0.0351114  0x1c: 0.000872612  0x24: -0.0719633
35: 0x14: 0.0291576  0x1c: -0.015481  0x24: -0.00498867
50: 0x14: 0.0193682  0x1c: -0.0125675  0x24: 0.038579
70: 0x14: 0.0131931  0x1c: -0.00721836  0x24: 0.0424976

24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
24: 0x14: 0.0360813  0x1c: 0.0247498  0x24: -0.129151
24: 0x14: 0.0312672  0x1c: 0.0314407  0x24: -0.131095

The message I’m getting here is that these numbers are changing with some input besides focal length. I’m either missing something here, or these are not the right numbers…

Check that the camera is set to manual focus for this test; if it’s autofocusing and changing the focus distance for each consecutive shot at the same focal length, then the distortion / TCA coefficients would probably change.


Cripes, autofocus… here’s another set, same pattern/settings, AND autofocus=OFF:

24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
28: 0x14: 0.0428581  0x1c: -0.0105991  0x24: -0.0644388
35: 0x14: 0.0350637  0x1c: -0.0238142  0x24: 0.00396729
50: 0x14: 0.022233  0x1c: -0.0158081  0x24: 0.0446692
70: 0x14: 0.0170689  0x1c: -0.0107899  0x24: 0.0574942

24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
28: 0x14: 0.0428791  0x1c: -0.00968075  0x24: -0.0669641
35: 0x14: 0.0345201  0x1c: -0.0237732  0x24: 0.00626278
50: 0x14: 0.0216465  0x1c: -0.0152063  0x24: 0.0460634
70: 0x14: 0.0170689  0x1c: -0.0107899  0x24: 0.0574942

24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664
24: 0x14: 0.0405416  0x1c: 0.0181055  0x24: -0.126664

Okay, much better. There’s minor variation in the focal lengths between 24 and 70, but those two are nailed in every collection, as I made sure I was up against the collar stop for those.

That’s why I posted my half-assed work; more sets of eyes to point out the half-assery :laughing:


Well, the numbers are definitely not lensfun ptlens parameters. I took a 24mm image with a pronounced straight structure in the periphery and swapped the embedded numbers into the 24mm entry of the lensfun database for the 24-70 f/4 so I could compare. First, the uncorrected image:

Definitely needs correction. Here’s a rendition with the lensfun numbers:

There, nice and straight handrail. Now, with the embedded numbers:

Eow, ski-jump…

Here is a copy-paste of the two sets of 24mm numbers from the .xml file:

<distortion model="ptlens" focal="24" a="0.032" b="-0.114" c="0.073"/>
<!-- <distortion model="ptlens" focal="24" a="0.0405416" b="0.0181055" c="-0.126664"/> --> <!-- embedded distortion correction -->

I’m going to have to hand off at this point, as I’m not really familiar with the particulars of the various warp algorithms used to do distortion correction.
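
For reference, and going only from the Panotools / lensfun documentation rather than anything Nikon has published: the ptlens model maps a radius r in the corrected image to a radius in the source image via a cubic polynomial, r_src = (a*r^3 + b*r^2 + c*r + d)*r, with d = 1 - a - b - c so the nominal scale is preserved. Here’s a tiny helper that evaluates it, with the usual caveat that whether the embedded Nikon numbers are meant for this form and this radius normalization is exactly the open question:

// Panotools/ptlens radial model: given a radius r in the corrected image
// (normalized, if I'm reading the wiki right, so r = 1 at half the shorter
// image dimension), return the radius in the source image to sample from.
double ptlensSourceRadius(double r, double a, double b, double c)
{
	double d = 1.0 - a - b - c;
	return r * (a * r * r * r + b * r * r + c * r + d);
}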


Link time

https://wiki.panotools.org/Lens_correction_model

https://www.imagemagick.org/Usage/lens/

Hello, I am a digital photography enthusiast, and I am currently trying to reverse-engineer the distortion correction algorithm of the Nikon Z7. Is there a discussion room here to facilitate real-time sharing of results, or should we just communicate in this thread?

I think it would be best to post here; I don’t have the time right now for a real-time exchange.

How about using Iridient to generate DNG-1 (linear) and the Adobe converter to generate DNG-2 (undemosaicked), then moving the optics correction tags from DNG-1 to DNG-2, deleting DNG-1, and feeding DNG-2 to your raw converter (assuming it supports DNG fully and applies what the tags tell it to do)? No need for stinky LensFun… That is what I am doing in a script from FRV for my RF, EF, and XF mount cameras; maybe it works for Z in your case (Iridient Digital - Iridient N-Transformer, Iridient Digital - Iridient N-Transformer Supported Cameras).

eef… The idea is to do it with free software.


Hi @gogobenjiboy welcome! I’m excited to hear ideas and see results!


How about being able to just open it in your preferred raw developer, without having to jump through ridiculous (and non-free) hoops.


Regarding the Nikon distortion and vignetting information, I struck up a conversation at DPReview with Warren Hatch, the person who did the decoding work to date in exiftool. With his permission, here’s his response to my query:

I spent a little time today looking at 0xc7d5. Enough to convince me that my comments in ExifTool are probably on point.

I agree with you that the numbers at those locations don’t seem to fall into any standard format.

Looking just at the Vignette data (the distortion data is in the same format)….

There are 4 (maybe 5) sets of 32 bytes of potential interest.

I would expect that 3 of these are the coefficients for a polynomial. These would likely be the ones at the positions I referenced in my code comments. The 4th (at offset 0x08) might be the constant for the polynomial. The 5th (at offset 0x70) is a mystery to me. The first 4 seem to follow the same storage convention.

If I recall my math correctly, we would expect to see the middle polynomial coefficients to have an opposing sign compared to the 1st and 3rd. Am I right? (my math days were long ago).

And if you give me a feel for the magnitude of the coefficients to expect for correcting the vignette on a lens wide open I might be able to crack this.

Didn’t have a chance to look at OpCodes. Maybe next weekend.

I’m posting it here to see if any of you with better chops in distortion and vignetting math can offer some insight into decoding 0xc7d5…
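
In case it helps anyone poking at the vignette block: a common shape for radial vignetting correction (the DNG FixVignetteRadial opcode works this way, for example) is a gain that is an even polynomial in the normalized distance r from the optical center. Whether Nikon’s three coefficients map onto something like this, and how r is normalized, is pure conjecture on my part; this is just a sketch of the kind of curve being discussed:

// Hypothetical radial vignetting gain: 1 + k1*r^2 + k2*r^4 + k3*r^6,
// evaluated with Horner's scheme. Multiply each pixel by gain(r) to
// brighten the corners.
double vignetteGain(double r, double k1, double k2, double k3)
{
	double r2 = r * r;
	return 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3));
}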


After reading through this thread, Warren had enough information to decode the Nikon Z distortion and vignetting tags. Version 12.71 of exiftool, released today, has the tags:

glenn@bena:~/Photography/Lens Correction/Image-ExifTool-12.71$ ./exiftool -G -*Distortion* -*Vignette* DSZ_0445.NEF 
[MakerNotes]    Distortion Correction Version   : 0100
[MakerNotes]    Distortion Correction           : On (Required)
[MakerNotes]    Radial Distortion Coefficient 1 : 0.01480
[MakerNotes]    Radial Distortion Coefficient 2 : -0.00731
[MakerNotes]    Radial Distortion Coefficient 3 : 0.02940
[MakerNotes]    Auto Distortion Control         : On
[MakerNotes]    Vignette Correction Version     : 0100
[MakerNotes]    Vignette Coefficient 1          : 0.49827
[MakerNotes]    Vignette Coefficient 2          : -0.21944
[MakerNotes]    Vignette Coefficient 3          : 0.44422
[MakerNotes]    Vignette Control                : Normal
glenn@bena:~/Photography/Lens Correction/Image-ExifTool-12.71$ 

Now, to figure out the algorithms. Warren tried the distortion coefficients with the cubic polynomial algorithm at the Panotools wiki, and per my coarse observation the result looks pretty close to the preview JPEG.
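
To make that a bit more concrete, here’s a rough sketch (an illustration, not a definitive implementation) of how one might apply that Panotools-style cubic polynomial to a single-channel image: inverse mapping with nearest-neighbor sampling, radius normalized to half the shorter dimension as in the Panotools convention. Whether that normalization is exactly what Nikon intends is still an assumption:

#include <algorithm>
#include <cmath>
#include <vector>

// One gray channel, row-major.
struct GrayImage {
	int width = 0, height = 0;
	std::vector<float> pixels;   // width * height values
	float at(int x, int y) const { return pixels[y * width + x]; }
};

// Correct radial distortion with the Panotools/ptlens cubic model:
// for every pixel of the corrected output, compute its normalized radius r,
// evaluate r_src = (a*r^3 + b*r^2 + c*r + d)*r with d = 1 - a - b - c,
// and fetch the nearest source pixel at that radius.
GrayImage correctDistortion(const GrayImage& src, double a, double b, double c)
{
	GrayImage dst;
	dst.width = src.width;
	dst.height = src.height;
	dst.pixels.assign(src.pixels.size(), 0.0f);

	const double cx = 0.5 * (src.width - 1);
	const double cy = 0.5 * (src.height - 1);
	const double norm = 0.5 * std::min(src.width, src.height);  // r = 1 at the middle of the shorter edge
	const double d = 1.0 - a - b - c;

	for (int y = 0; y < dst.height; ++y) {
		for (int x = 0; x < dst.width; ++x) {
			const double dx = (x - cx) / norm;
			const double dy = (y - cy) / norm;
			const double r = std::sqrt(dx * dx + dy * dy);
			const double scale = a * r * r * r + b * r * r + c * r + d;
			const int sx = (int) std::lround(cx + dx * scale * norm);
			const int sy = (int) std::lround(cy + dy * scale * norm);
			if (sx >= 0 && sx < src.width && sy >= 0 && sy < src.height)
				dst.pixels[y * dst.width + x] = src.at(sx, sy);
		}
	}
	return dst;
}

In real use you’d interpolate instead of snapping to the nearest pixel and handle all the color channels, but it should be enough to eyeball the result. For instance, feeding it the three Radial Distortion Coefficients that exiftool now reports, in that order (something like correctDistortion(img, 0.01480, -0.00731, 0.02940); whether that ordering and normalization are right is part of what needs verifying) should show whether the decoded numbers straighten things the way the in-camera preview JPEG does.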

@paperdigits, 'ere y’go… :laughing:

Oh, and thanks, Warren Hatch. You-Da-Man…


Amazing. For my next stupid question, how can we apply the algorithm to an image to see if it works?

It’s actually cloudy here; I should shoot some frames with my plexi to test when it stops raining.