A different perspective, help check my math?

I’m not sure what category this belongs to, so I just went with “Capturing”.

To cut to the short version, a number I like to find for camera/lens combinations is “At what distance does one pixel equate to one millimeter on the subject?”

Unless I’ve made a gross math error, I believe this is simply the focal length (in millimeters) divided by the pixel pitch (in micrometers), and the result comes out directly in meters. (Similar triangles and the helpful 1000× relationships between metric units.)
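Spelling out the similar-triangles step (p = pixel pitch, f = focal length, D = subject distance):

$$
\frac{\text{subject width per pixel}}{D} = \frac{p}{f}
\quad\Rightarrow\quad
D_{1\,\mathrm{mm/px}} = \frac{1\,\mathrm{mm}\cdot f}{p} = \frac{f\ [\mathrm{mm}]}{p\ [\mu\mathrm{m}]}\ \mathrm{meters}
$$

The 1000 between micrometers and millimeters cancels against the 1000 between millimeters and meters, which is why the answer lands directly in meters.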

So, my dad’s 36mp full frame Nikon D800 (4.89 micrometers) with his 300mm lens would be about 61 meters, and my 25mp micro four thirds Lumix G9ii (3.00 micrometers) with a zoom at 300mm would be 100 meters.

With some effort, you could figure out how many pixels will be on any size subject at any given distance.
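Here is a rough sketch of that in Python (the function names are just mine, and it assumes the subject is far away compared to the focal length):

```python
# Rough sketch: "1 px = 1 mm" distance, and pixels on a subject at a given distance.
# Assumes subject distance >> focal length; function names are just illustrative.

def one_mm_per_pixel_distance_m(focal_length_mm, pixel_pitch_um):
    """Distance (in meters) at which one pixel covers 1 mm on the subject."""
    return focal_length_mm / pixel_pitch_um

def pixels_on_subject(subject_size_mm, distance_m, focal_length_mm, pixel_pitch_um):
    """Approximate number of pixels spanning a subject of the given size."""
    mm_per_pixel = distance_m / one_mm_per_pixel_distance_m(focal_length_mm, pixel_pitch_um)
    return subject_size_mm / mm_per_pixel

print(one_mm_per_pixel_distance_m(300, 4.89))  # D800 + 300 mm  -> ~61.3 m
print(one_mm_per_pixel_distance_m(300, 3.00))  # G9ii + 300 mm  -> 100.0 m

# A roughly 28 cm subject (about blue-jay length) at 30 m:
print(pixels_on_subject(280, 30, 300, 4.89))   # ~573 px
print(pixels_on_subject(280, 30, 300, 3.00))   # ~933 px
```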

I like this method because it gives you a specific way to calculate “reach” between completely different lenses, sensor formats, and camera models, maybe helping to cut short some unrealistic claims.

Larger lenses will gather more light, and no paper calculation will compare the resolving power of different lenses without real world data, but I find this simple number to be quite helpful. Any thoughts?

Well, that topic landed with a thud, lol.

I do like being able to say how many pixels will cover a subject at any distance, focal length, and sensor. Maybe I need to figure out a better way to present the idea.

Anyway, thank you for at least taking a look.

1 Like

Perhaps because your measure is mostly irrelevant for “normal” photography?

Well, what I’m normally interested in is what area is covered by my lens at a given distance. Pixel pitch is irrelevant for that. And differences in sensor size are covered by the “crop factor”.

The only situation I can see where your measure would be useful is when you have to determine the size of an object (*). But in that case, you need to know how far away your “target” is, and that’s not something that’s reliably stored in the metadata. And the focal length of an objective can change when you focus on the subject…

For most other uses, if the subject is large enough in the view finder, it covers enough pixels. If you need to know the number of pixels covered by your subject, you are probably too far away.

Also, you might want to give an example of such “unrealistic claims” when proposing a solution. Otherwise it gives the impression you are looking for a problem to go with your solution.

There you have two completely different (and unrelated) issues. And what do you mean by “larger lenses”? Longer focal lengths will gather as much light as shorter ones at the same f-stop (overlooking possible absorption differences); that’s one reason why we use f-stops. And “resolving power” has no influence on the size of the projected image…


*: there are use cases for that, like microphotography. But there you don’t use a camera objective, and you are supposed to calibrate the system with a special microscope slide, showing a precise scale. Any change in the system means you have to recalibrate, again basically because the effective distance to the subject can change. Other cases where you need to determine the size of the subject usually require a reference in the image (like a scale).

I appreciate your response, and it’s exactly the kind of feedback I was hoping for when I wondered whether anyone would even find this useful.

I had a hard time finding the balance between explaining how it can be useful and avoiding the kind of “this system vs that system” awfulness that this forum has stayed pretty clear of. The basic version is a common attack against pricey telephoto lenses on micro four thirds: that a high-resolution photo from a full frame sensor can simply be cropped instead. The people who say this do not seem to care that no such 100mp camera is on the market that could actually do this, and that even 40mp-class cameras with high frame rates and good focus tracking are in a completely different price bracket than even flagship crop-sensor cameras.

Distilling things down to a single number for a given combo gives you a way to compare. A “100 meter” combo will put twice the linear resolution on the subject as a “50 meter” one at any distance. It’s really only applicable to situations where photos will absolutely be cropped to get the framing, which seems like a safe assumption in, say, bird photography.

It’s an abstraction. I fear it may be too much of one.

Sorry, really I was trying to conclude that the idea had limited scope, and I worded it pretty awkwardly.

Wait, no. I completely missed that you said this. At the same f-stop, they will have the same brightness, not the same amount of light. If you double the focal length to cover the same angle of view on a sensor with four times the area, you have to double the diameter of the entrance pupil to keep the focal ratio the same, which keeps the same brightness over that larger area. That’s four times the amount of light. It’s where the two-stop noise difference comes from when people shoot the two formats at the same f-stop instead of the same depth of field. The larger full frame lens at the same framing and f-stop is gathering four times as much light.

That’s literally what I meant by “larger lenses.” If you shoot them at the same depth of field as the smaller lenses on smaller sensors, you gather the same amount of light, spread over a larger area, and are two stops dimmer in f-stop and brightness. It’s why people see their full frame cameras having the same level of noise as a micro four thirds camera when the full frame has the ISO set two stops higher. It’s not a sensor advantage; the lens is literally bigger.
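To put rough numbers on that, here is a minimal sketch; the only assumptions are the 2× crop factor between the formats and an arbitrary pair of f/4 lenses framed the same way:

```python
# Entrance pupil comparison at the same framing and the same f-number.
# Assumes a 2x crop factor between micro four thirds and full frame; f/4 chosen arbitrarily.
import math

def entrance_pupil_area_mm2(focal_length_mm, f_number):
    diameter_mm = focal_length_mm / f_number   # entrance pupil diameter
    return math.pi * (diameter_mm / 2) ** 2

mft = entrance_pupil_area_mm2(300, 4.0)  # 300 mm f/4 on micro four thirds
ff = entrance_pupil_area_mm2(600, 4.0)   # 600 mm f/4 on full frame, same angle of view

print(ff / mft)  # 4.0 -> four times the light from the subject, i.e. two stops
```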

I added the comment at the bottom to acknowledge that my stupid number trick does not address this particular advantage to using physically larger, identical focal ratio lenses.

The other part, the comment about resolving power, is to acknowledge that simply making pixels smaller will not make a lens resolve finer detail unless it was already out-resolving the sensor. I’m not pulling rabbits out of a hat with my original suggestion; I’m trying to come up with an easy way to analyze one facet of a complicated comparison, and maybe, for the love of everything holy, move beyond using the frame size of 135 film as some kind of fundamental standard.

At this point, I’m repeatedly posting to a thread no one seems to want continued, and for that I apologize. Without further feedback, I’ll let it go away.

I’ve used a similar metric in another context. The inverse of yours is angular resolution in radians.

To do with measurement of “resolution limiting” for a given sensor/lens combination.

Also when playing with the idea of a “virtual infinity” when focusing in a scene.
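Numerically, the per-pixel angle is just pixel pitch over focal length, the reciprocal of the distance figure above; e.g. for the D800 + 300 mm case mentioned earlier:

$$
\theta_{\mathrm{px}} = \frac{p}{f} = \frac{4.89\,\mu\mathrm{m}}{300\,\mathrm{mm}} \approx 16.3\,\mu\mathrm{rad} \approx \frac{1\,\mathrm{mm}}{61.3\,\mathrm{m}}
$$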

1 Like

For the most part, this forum has been a blessing in being equivalence-free … :wink:
And I think that’s good, because my honest opinion, from watching such discussions from the sidelines over some years, is that they lead nowhere in terms of any practical consequences.

I don’t have enough competence in this field myself to contribute anything new, but here is something I have gathered from previous discussions, put forward for you to chew on as you move away from the subject:

You seemed to want to use the number of pixels as a quality scale, but pixels correspond to photo wells (photosites) on the sensor, and there are arguments that the size of those wells needs to be factored in when it comes to noise levels.

(Don’t answer me - just think about it for yourself or look it up …)

1 Like

May I mention that the “one pixel” that you’re projecting outside the box is a CFA element, e.g. red or green or blue or white, so perhaps your external 1 mm should be, e.g., a full Bayer pattern?

Unless you’re shooting a Foveon-based camera?

1 Like

You’re essentially interested in angular resolution, i.e. field-of-view (an angle) divided by image resolution. This gives you the angle covered by a pixel.

However, this raises the question: what is resolution? Does it matter if the pixels are noisy? Does it matter if the pixels are blurry?

Resolution needs to be determined for the whole system. A smartphone camera with a 200 MP sensor has a lot of pixels, but diffraction limits how much actual detail can be captured. A super-zoom like the Nikon P900 has a looong focal length, but its optics can’t resolve a lot of detail at the long end. And that’s not even talking about motion blur or atmospheric effects or shot noise or demosaicing artifacts.
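To put a very rough number on the diffraction point (assuming green light, an f/1.8 lens, and pixels around 0.6 µm; these are ballpark assumptions, not specs for any particular phone):

```python
# Back-of-the-envelope diffraction check with assumed, ballpark values.
wavelength_um = 0.55   # green light
f_number = 1.8         # typical fast phone lens (assumed)
airy_disk_um = 2.44 * wavelength_um * f_number
print(airy_disk_um)    # ~2.4 um: several pixels wide on a ~0.6 um pitch sensor
```

So the Airy disk alone is already a few pixels across before any of the other effects kick in.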

I’ve been down this rabbit hole myself. We probably all have. But there’s no easy answer. Point your camera system at a resolution chart at the desired distance, and see what you get. Or better yet, shoot the birds or critters you like to shoot, and see if you’re satisfied.

If your goal is to put as many pixels as possible on a faraway target, a MFT system is probably a good choice. And it’s relatively affordable, too. But there’s no avoiding that good long lenses are expensive.

But from personal experience, there’s no joy in endless comparisons. Everything is a trade-off. There’s always a different system one step sideways that optimizes one parameter better than another. There’s always another photographer who uses “lesser” equipment to take better pictures than us. Our goal must be to find joy in what we can do, not despair over what we can’t.

3 Likes

Essentially yes. I chose linear resolution instead of angular to avoid the complication that a rectilinear projection doesn’t cover the same angle per pixel away from the center. If I’m not using angles, the problem becomes distance dependent, so I tried to figure out how that could be most easily and transferably communicated. A specific question like “How close do I have to get to a blue jay to be satisfied with the result?” is really the end goal, and that only comes with real-world experience with your own setup, plus maybe a little curiosity and some hypothetical rulers to help guide your way.
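For reference, that off-axis behaviour is just the rectilinear mapping (ignoring distortion):

$$
x = f\tan\theta \;\Rightarrow\; \Delta\theta \approx \frac{p\cos^{2}\theta}{f}
$$

so the angle covered by one pixel shrinks toward the edges of the frame, while a flat subject plane parallel to the sensor is covered uniformly at roughly pD/f per pixel across the whole field.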

I agree with everything you said, along with comments made above about noise and pixel size (there’s obviously a limit to how far you can crop no matter how small your pixels are).

I’ve been curious if my idea was a useful abstraction. F-number is an example of a very useful abstraction. I think any abstraction can be stretched and misused, which certainly happens with f-number.

Don’t worry about me joining the battle between cameras, waving the flag of some company that in the end only wants my money. :slight_smile: I long ago figured out that all cameras are compromises, and a lot of money can be saved if you know which compromises are right for your own unique needs.

1 Like

I’ve been there many times, and I’ve found there are a lot of other factors, such as lighting, subject movement, equipment and camera settings, etc., that are going to dictate your max distance for getting that shot. Heat haze in particular is a real killer when it comes to resolution at a distance.

I think the advice to get out and shoot to develop your own sense of distances and the quality you’re looking for is your best bet rather than attempting to calculate a figure.

My rule of thumb is that I can usually get good quality and detail if I can crop no more than 50 percent to frame the shot. I might still get decent shots if I have to crop more aggressively, but it’s hit or miss. And of course my best shots have always been when I can fill the frame when I hit the shutter button.

3 Likes

OK, now we’re getting somewhere…

In your original post, it sounded like you were wanting to compare lenses based on spec sheets and calculate a sort of image quality measurement for different distances.

Now we are talking about how close you need to be to a particular subject to get a good quality image with a particular body/lens combination. For this situation, I think that your calculation idea will let you down, because while you were busy calculating pixel counts, the blue jay flew away!

Coincidentally, a pair of blue jays built a nest in our neighbours’ tree this year, and I thought it would be fun to capture them daily as they brooded and hatched their eggs. But I didn’t have a long enough lens at the time to get any quality shots of them. As Dave mentioned, you can only crop so far before the quality of the resulting image is inadequate.

Sticking with the blue jay as an example, here is an approach that might help:

  1. Find or make an object that is a similar size to a blue jay. This is handy because you don’t need to worry about it flying away.
  2. For each lens you have, take a shot[*] where the object nearly fills the frame, and a shot where it fills the central rectangle in the rule of thirds overlay on your viewfinder or LCD. Record the distance to the object for each shot. For zoom lenses, do this for the minimum and maximum focal lengths and a few in between. The shots nearly filling the frame simulate portraits and the shots in the middle ninth of the frame simulate shots where you leave some room for a story.
  3. Unless you have a way better memory than I do, compile the measurements into a document that you can refer to quickly in the field.

If you do the same thing for other subjects, and combine that information with your knowledge of the strengths and weaknesses of each of your lenses, then you just need to have or develop a decent ability to estimate distances. Then you can think “moose at 250 meters, and I want to leave some space in front of him so it looks like he has somewhere to go” and go directly to the best lens for the situation instead of guessing. This may not work so well for “grizzly bear at 3 meters”.

*: You don’t actually need to take a shot. You just need to compose the size and location of the object and record the distance between the camera and the object.
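If you want a rough spec-sheet starting point before doing the measurements, here is a sketch (thin-lens approximation, ballpark sensor heights of ~13 mm for micro four thirds and ~24 mm for full frame, and a ~28 cm blue jay; all numbers are just assumptions for illustration):

```python
# Rough fill-the-frame distance from specs (subject distance >> focal length assumed).

def fill_frame_distance_m(subject_height_mm, focal_length_mm, sensor_height_mm):
    """Approximate distance at which the subject fills the frame vertically."""
    return subject_height_mm * focal_length_mm / sensor_height_mm / 1000

# A ~280 mm blue jay with a 300 mm lens:
print(fill_frame_distance_m(280, 300, 13))  # ~6.5 m on micro four thirds
print(fill_frame_distance_m(280, 300, 24))  # ~3.5 m on full frame
# For the middle-ninth framing from step 2, multiply by about 3.
```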

1 Like

The Leica Geovid 8x56 R shows distance to subject up to 1100 meters with an accuracy of ~0.5%.

1 Like

Cool! I check myself after the fact by doing measurements on Google Maps (I think OsmAnd can do this too, but I haven’t started playing with it yet).

You can get distance measurers at a fraction of the price of the Leica in golf and builders’ shops.

Right. Leica is probably the most expensive way to do it.