Wider field of view in darktable than Digital Photo Professional

I opened a CR2 file in darktable and in Digital Photo Professional 4 to compare the processing. I discovered that the field of view in the photo in darktable was larger than in DPP. I opened the jpg my camera produced when I took the shot in GIMP, and it had the same field of view as DPP. I exported the file from darktable as a jpg and opened it in GIMP. It still had the wider field of view.

I opened another CR2 file in darktable and DPP and found the same thing. Would someone explain why darktable is able to show a wider field of view?

In general, DPP and Adobe apps tend to crop more of the sensor data than FLOSS apps do.

I presume in the opinion of the developers of those apps, the discarded sensor data is inferior to the retained data. Is that true? Some of the time? All of the time?

I can think of three reasons (which may or may not be right):

1. To follow standard aspect ratios.
2. Manufacturing convenience and reliability.
3. Need edge pixels for certain types of processing.

Welcome to the forum BTW. :slight_smile:

Sensors are usually slightly larger than the camera's published specs. The difference is small, perhaps 20 pixels in each dimension. Sometimes some of these pixels are unusable, because some lenses (at certain apertures and focus distances) cause shadowing at the edges. Some software always crops to the advertised dimensions, which can discard entirely usable data. FLOSS software typically doesn't.
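To illustrate what "cropping to the advertised dimensions" amounts to, here is a minimal sketch in Python/numpy. The sensor and advertised sizes are made-up round numbers in the same ballpark as the examples below; real software would also account for masked border pixels, not just a symmetric center crop.

```python
import numpy as np

# Hypothetical sensor readout: 6016 x 4014 pixels, while the camera's
# advertised output size is 6000 x 4000 (numbers chosen for illustration).
raw = np.zeros((4014, 6016))
adv_h, adv_w = 4000, 6000

# Center-crop to the advertised dimensions, discarding the extra border.
top = (raw.shape[0] - adv_h) // 2
left = (raw.shape[1] - adv_w) // 2
cropped = raw[top:top + adv_h, left:left + adv_w]
print(cropped.shape)  # (4000, 6000)
```

The handful of rows and columns thrown away here is exactly the "extra" field of view that software skipping this crop will show.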

Just looked at a random EOS 80D shot with a boring lens: the output is 6000 × 4000 from DPP, 6014 × 4012 from LightZone, 6014 × 4010 from darktable, and 6016 × 4014 from RawTherapee. Not really dramatic.


@Underexposed, IMHO, the idea is that lens performance is worst at the edges, so it is easier to correct vignetting etc. when you remove the extreme edges.

However, sometimes these extra pixels can make or break a picture, so it is better to have the option to see the whole photo. That is why FOSS tools, which do not have to protect the reputation of a camera or lens, crop the least.

In extreme cases, the (Elvis) guy, who shot JFK, could have been hidden in that 5 pixels on the left.


Jacal, I followed your approach and opened a photo in DPP, which reported a size of 3648 × 5472. RawTherapee reported the same size, but the field of view was larger. darktable reported 3710 × 5632 and had the largest field of view. The shot was of the inside of a church, and the difference between DPP and darktable was roughly two feet on all four sides. As to quality, and I am just a beginner at this, I couldn't see anything in the expanded area that I would not want in a print.

The camera was a Canon PowerShot G9 X Mark II.

With a Nikon D800, I’ve never seen any problems at the left, top or bottom edges. The right edge sometimes has a few columns of black. These can be automatically detected and trimmed.
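A sketch of how such automatic detection might work, using a toy numpy array as a stand-in for raw data (the shape, values, and threshold are invented for illustration, not taken from any particular tool):

```python
import numpy as np

# Toy "sensor" frame: 8 x 10 pixels, with the two rightmost columns black,
# mimicking the few dark columns sometimes seen at one edge.
img = np.full((8, 10), 500, dtype=np.uint16)
img[:, -2:] = 0

# Call a column "black" if every pixel in it falls below a small threshold.
threshold = 16
black_cols = np.all(img < threshold, axis=0)

# Trim trailing black columns from the right edge.
right = img.shape[1]
while right > 0 and black_cols[right - 1]:
    right -= 1
trimmed = img[:, :right]
print(trimmed.shape)  # (8, 8)
```

A shadowed gradation, as in the next example, is harder: the pixels are darkened rather than uniformly black, so a simple threshold test like this one would miss them.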

The recent [Play Raw] My Right Foot - #8 by Jacal , taken with a Panasonic DMC-TZ70, had a shadowed gradation at the right edge; darkened but not black. This would be harder to detect automatically.

Just some thoughts, real reasons may vary.

The camera manufacturers have to write an image size into the data sheet and have to deliver that size under all circumstances. There are several image-processing steps that may eat some border pixels, so I guess camera manufacturers crop each image to the worst case, i.e., the smallest possible image, even allowing for firmware updates that might improve quality at the cost of fewer border pixels. It may simply be a very conservative crop.

Free software tools, however, have the freedom to show whatever pixels their developers want shown. That may even lead to a changing image size depending on which processing steps you activate. At the very least, they can change the resulting image size after software updates, e.g. if the algorithms are tweaked and therefore eat more border pixels. I don't know if they actually do any of this, or if they are just less conservative in their assumptions, but it is at least a possibility.

Edit: Another aspect

Instead of eating pixels, the image can be virtually enlarged for spatially extended operations, e.g. by mirroring the image edges or taking pixels from the other side[TM]. Camera manufacturers may avoid this for one reason or another (quality, memory, …), but it may well be implemented in the tools mentioned.
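The mirroring idea above can be sketched with numpy's padding modes; the image and the 3×3 box blur are arbitrary examples, not any tool's actual pipeline:

```python
import numpy as np

# A 3x3 blur needs one pixel of context beyond each edge. Instead of
# shrinking the output, pad the image by mirroring its edges first.
img = np.arange(16, dtype=float).reshape(4, 4)
padded = np.pad(img, 1, mode="reflect")  # mirror the edges
# mode="wrap" would instead take pixels "from the other side"

# Simple 3x3 box blur over the padded image; output keeps the 4x4 size.
out = np.zeros_like(img)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        out += padded[1 + dy:5 + dy, 1 + dx:5 + dx]
out /= 9.0
print(out.shape)  # (4, 4)
```

Without the padding, the same blur would only be defined on the 2×2 interior, which is exactly the "eaten border pixels" trade-off described above.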

Is it possible that what you’re seeing is DPP applying a lens correction profile - correcting for barrel distortion automatically? This is pretty common in point-and-shoot cameras.

For reference see Canon G9X Mark II Review - Optics

That was a very informative article. With what Chris and others have said, I think the cause is explained. I've learned a lot from this discussion. Thanks to everyone who replied.

The small differences I mentioned occurred after exporting JPEGs with default settings, without lens corrections or any kind of cropping. No Elvis there, actually. In some - or most - Canon cameras, lens correction can be applied by default if set so in-camera; it is then also applied in DPP and visible in the JPEG preview image.

In RawTherapee, automatic lens correction (LCP or Lensfun) can be set via a modified Default profile or with Dynamic Profile Rules. I don't have it activated by default; the lens "distortion" is often more likeable to my eyes.