Do Crop Sensor Lenses Need to Have More Resolving Power Than Full Frame?

For the final image, I care about neither lp/mm nor lp/px, but only lp/ph (picture height). I actually find it misleading and annoying that most image editors zoom to 100% by default. I’d prefer a “2x” zoom control that is relative to the image width, not pixel size. What matters in photography is the image as a whole, not the constituent pixels.
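To be concrete about those units, here is a minimal sketch of the conversions as I think of them (the sensor height and pixel count below are illustrative, not from any particular camera):

```python
# Converting between lp/mm, lp/px and lp/ph (line pairs per picture height).
# All numbers are illustrative only.

def lp_ph_from_lp_mm(lp_per_mm: float, sensor_height_mm: float) -> float:
    """lp/ph from lp/mm: multiply by the physical sensor height."""
    return lp_per_mm * sensor_height_mm

def lp_ph_from_lp_px(lp_per_px: float, image_height_px: int) -> float:
    """lp/ph from lp/px: multiply by the pixel height of the image."""
    return lp_per_px * image_height_px

print(lp_ph_from_lp_mm(50, 13.0))   # 50 lp/mm on a ~13 mm tall sensor -> 650 lp/ph
print(lp_ph_from_lp_px(0.5, 3000))  # Nyquist (0.5 lp/px) on a 3000 px tall image -> 1500 lp/ph
```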

Funny enough, this is the opposite of my day job in machine vision microscopy, where we measure nanoscale subpixel structures far beyond Nyquist, so “pixel peeping” is explicitly the name of the game.

4 Likes

Slightly tangential, but it seems there may be dynamic range advantages to using even the handheld high-res mode of the LUMIX G9II, which apparently handles movement between frames pretty well (for landscapes, say). The Photons to Photos chart is for the Olympus E-M1X, as that's all they have, but I guess the G9II should be similar or better because of its lower base ISO.

So of course I was nerd-sniped and could not resist reading up on this and its practical implications for hours. This is the conclusion I came to for my own purposes (you could come to a different conclusion, not because it is subjective, but because you would be comparing different kinds of lenses and mounts; each combination is different).

First, the math is indisputable: for a lens with the same resolving power, projecting onto a smaller sensor degrades the resolution relative to image size in proportion to the crop factor. Practically, if you are eyeballing the MTF resolution at a particular image height that exists on both sensors, the figure on the smaller sensor has to be crop factor × the lines/mm to be "effectively" equivalent.
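A minimal sketch of that proportionality, with made-up numbers (sensor heights are approximate, and the lp/mm figure is hypothetical):

```python
# The same lens (same lp/mm) projected onto two sensor heights: the smaller
# sensor yields proportionally fewer line pairs per picture height.
# All numbers are illustrative.

FF_HEIGHT_MM = 24.0    # full-frame sensor height
M43_HEIGHT_MM = 13.0   # Micro Four Thirds sensor height

lens_lp_per_mm = 60.0  # hypothetical resolving power, identical on both

ff_lp_ph = lens_lp_per_mm * FF_HEIGHT_MM    # 1440 lp/ph
m43_lp_ph = lens_lp_per_mm * M43_HEIGHT_MM  # 780 lp/ph

print(ff_lp_ph / m43_lp_ph)  # ~1.85, i.e. roughly the crop factor
```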

However, keep in mind that resolution falls off as we deviate from the optical center. So, if you are adapting the same lens to a crop sensor (eg with an adapter), effectively you will be in the “better” region of the lens.

But of course in practice you would be comparing lenses of the same angle of view, eg a micro 4/3 lens at 42.5mm to a full frame lens at 85mm; they may be entirely different optical designs. Nevertheless, I eyeballed the MTF charts I could find for primes at the typical focal lengths I care about (I also tried zooms, but since the charts differ by focal length, I found that a hopelessly complicated undertaking).

So I did something non-scientific but practical: suppose I spend roughly the same amount of money on a lens, comparing equivalent focal lengths (ie taking crop factor into account): how much would I gain going from micro 4/3 to FF?

It is difficult to match prices exactly, but micro 4/3 lenses are relatively cheap compared to full frame (I looked at L-mount). I also found that, for the lenses I looked at, resolution falls off on the MTF charts at about the same rate (PanaLeica and premium Oly lenses are better), but of course the graphs stop at roughly half the image height for micro 4/3 native lenses.

I came to the conclusion that, depending on the lens, I would gain about 15–40% in pure equivalent resolution over the whole frame. Most of the offset from the theoretical 100% gain comes from the edges, where micro 4/3 lenses hold up better; I am guessing that, for similar flange distances, covering a smaller sensor makes the life of lens designers much easier.

(Caveat: aperture. I tried correcting for that via the crop factor, eg comparing the MTF chart of a micro 4/3 lens at focal length X and aperture Y to a FF lens at 2X and 2Y, but MTF charts by aperture are not always available, so I had to find the "closest". Roger Cicala's blog observes that stopping down does not always significantly improve lenses that are already good to start with. Nevertheless, this could matter a bit.)
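For what it's worth, here is the equivalence correction I was applying, as a tiny sketch (the 42.5mm f/1.7 input is just the example focal length from above, not a specific lens recommendation):

```python
# Scale both focal length and f-number by the crop factor to get the
# full-frame combination with the same angle of view and (roughly) the
# same depth of field. Crop factor 2.0 is the micro 4/3 case.

def full_frame_equivalent(focal_mm: float, f_number: float, crop: float = 2.0):
    return focal_mm * crop, f_number * crop

# Compare the m4/3 MTF chart at 42.5mm f/1.7 to the FF chart at 85mm f/3.4
print(full_frame_equivalent(42.5, 1.7))  # (85.0, 3.4)
```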

So… what did I learn that improves my photography in practice? Not much. Consider one of the lenses I like least, the Panasonic 14mm f/2.5. Surprisingly, it is a pretty decent lens when it comes to resolution. The PanaLeica 12mm f/1.4 would buy me a teeny bit of extra resolution in the zone 6–8mm from the image center. And of course more light, which is much more important. But the thing that would improve my photos the most is investing in learning more about landscape composition.

(sorry for the wall of text)

3 Likes

According to this guy, not necessarily:

P.S. Sorry…

1 Like

Again, this was for the lenses I looked at. All primes. But some plain vanilla Panasonic and non-premium Olympus are surprisingly good too.

And this (sharpness as measured by MTF charts) is an aspect that is very hard to judge precisely from images alone. A lot of other problems (eg CA after correction) show up in a similar way.

But I agree that eyeballing images is the most useful way to make a purchase decision.

2 Likes

Agreed. That’s even before taking into account manufacturing tolerances for individual lenses…

“This is your last chance. After this, there is no turning back. You take the blue pill - the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill - you stay in Wonderland and I show you how deep the rabbit hole goes”


2 Likes

mm as on the sensor.

1 Like

Sorry about that. “Snipe” deleted.

So we have drifted a little from the original topic, which was resolution. But the question has been implied in several posts: what makes a good lens?

Personally, I have come to the conclusion that resolution is pretty low on my list of priorities. What matters most, to me, is practicality: size, weight, haptics, price, focal length, max aperture. Then comes rendering: how the bokeh looks, how it flares, how the rendering changes with distance and aperture. Only when all of these things are to my satisfaction would I even consider resolution, and even then only insofar as to check for a defective lens.

I have yet to see a single image of mine that was ruined by lack of resolution. But there were a few that were elevated by a beautiful rendering, and plenty that were made possible by having the appropriate lens available. (Talking strictly about modern lenses here. Some SLR lenses from decades past were truly heinous, but modern designs are almost all high resolution.)

2 Likes

What is your definition of resolution and how do you check it personally?

1 Like

I have done brick wall tests, I have a color target and resolution checker target, and I have used them to compare lenses and cameras.

But that’s just for play.

When it comes to photography, I take pictures and see if they look good. If the pictures look good, the lens is good. If pictures look good but the lens measures poorly, then it’s still a good lens (for photography).

3 Likes

I see … thank you.

I disagree with your understanding and with CinC's tutorial claim that "the cropped sensor gets enlarged more when being made into the same size print".

A digital camera produces an image that is sized in pixels, not millimeters. Furthermore, digital cameras can produce different-sized images from the same sensor. So the film-days concept of enlargement goes out the window, I reckon.

For example, my DC-G9 (crop 2) can produce a 4000x3000px image at full size and my Sigma SD9 (crop 1.7) can produce a 2268x1512px image at full size. To make the same size print, the crop 2 image needs less "enlargement" than the larger-sensor crop 1.7 image. In other words, taking ppi as the digital definition of enlargement and a 10" print width, we get 400 ppi vs. 227 ppi for that print dimension.
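To spell out that arithmetic (print width fixed at 10 inches, as above):

```python
# ppi for a given pixel width and print width; figures from the post above.

def ppi(pixels_across: int, print_width_inches: float) -> float:
    return pixels_across / print_width_inches

print(ppi(4000, 10))  # 400.0 ppi for the DC-G9 full-size image
print(ppi(2268, 10))  # 226.8 ppi for the Sigma SD9 full-size image
```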

Going further, the SD9 can produce a half-size raw image by pixel-binning and the G9 can produce a double-size image by pixel-shifting.

Based on the above, I would say that sensor physical size and the degree of physical enlargement are both irrelevant to this discussion, sorry.

I see what you’re saying, and by using the word "enlarged" I was being unclear. But is it not the case that the image captured, as resolved by the lens, necessarily covers the entire sensor, whatever its size in mm? The image on the print is therefore, among other things, a product of what the lens is able to project onto a particular sized sensor. In a manner of speaking, the specifics of the sensor other than its size are irrelevant to the point being made (though obviously they’re not irrelevant in the real world). This is just isolating how much you need to blow up what the lens resolves, on different sized sensors, to make an image of a given dimension. But yes, it’s true the electrical and computational qualities of the sensor will also play a role in the final image. That’s my current understanding, though I’m often wrong, as I’ve already demonstrated.

1 Like

Looks like I’ve misunderstood the meaning of “resolving power” …

1 Like

I wasn’t clear, and I may be wrong, but I think the point being made above (which I believe to be correct) is about optics, not sensor technology.

Understood. So now all that matters is what’s in the image plane, i.e. in the so-called image circle, but not in the sensor photosites. That makes consideration of the title question a lot easier for me.

Yep

Jfyi

I’m thinking that a useful metric could be DxO’s “blur units” which can be seen in graphical form here for a rather poor Sigma 24mm lens on both an APS-C and a full frame camera:

https://www.imaging-resource.com/lenses/sigma/24mm-f1.8-ex-dg-aspherical-macro/review/

Some discussion about blur units at Imaging Resource here:

https://www.imaging-resource.com/articles/focus-fallibility-lens-testing-fallacies

and the full definition here:

https://corp.dxomark.com/wp-content/uploads/2022/11/2005_blur-long.pdf

Likening the blur plots for the Sigma 24mm to "resolving power", it seems to me that a full frame lens has to be mo' better than an APS-C one, so the answer to the question on that basis is "no".

Other opinions based on blur units are invited …