My camera, a Canon PowerShot SX60 HS, is an interesting bridge camera from the 2010s.
Its lens is decent but not top quality either. Canon designed it to cover 21mm to 1365mm equivalent, an absurd range, especially for a small body, so some engineering sacrifices had to be made. For one, it is not a fast lens.
The sensor size is 1/2.3", which leads to a lot of noise: the photosites are tiny, and dynamic range is low.
On the vertical axis, you see dynamic range, measured in stops. Both cameras start out at the top left with just shy of 12 stops of dynamic range.
The horizontal axis shows ISO. As you can see, increasing ISO generally decreases dynamic range: for each doubling of ISO, dynamic range decreases by one stop. That's what we call an "ISO invariant" sensor. ISO makes images brighter, but it doesn't change the maximum possible brightness, so for every doubling in brightness (ISO × 2), you lose one stop of dynamic range (EV − 1).
That is, except for the step in dynamic range at ISO 400/800, where dynamic range suddenly improves when raising ISO. These sensors are not perfectly ISO invariant! They are in fact "dual gain" sensors, with two ISO invariant regions, which switch over at ISO 400/800.
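The shape of those curves is easy to sketch numerically. This is a toy model, not measured data: the base dynamic range, the switch-over ISO, and the size of the recovered step are all illustrative numbers I picked for the example.

```python
import math

def dynamic_range_stops(iso, base_dr=12.0, base_iso=100,
                        second_gain_iso=800, recovered_stops=1.5):
    """Toy model of a dual-gain, otherwise ISO-invariant sensor.

    Dynamic range drops one stop per ISO doubling, except that from
    `second_gain_iso` onward the second gain stage kicks in and
    recovers `recovered_stops` stops. All parameters are illustrative.
    """
    dr = base_dr - math.log2(iso / base_iso)
    if iso >= second_gain_iso:
        dr += recovered_stops
    return dr

for iso in (100, 200, 400, 800, 1600):
    print(iso, round(dynamic_range_stops(iso), 1))
```

With these numbers, ISO 800 actually comes out *above* ISO 400, reproducing the step in the plot.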
There are two dynamic ranges: one of the sensor as a whole, and one of each individual sensor element ("sensel", analogous to picture element/pixel). It is a common fallacy that bigger sensels increase sensor dynamic range, but they don't.
To illustrate why, let's imagine two sensors of the same size, but one has four times as many sensels. Each of the quarter-size sensels will catch less light, and therefore have less sensel dynamic range. But take the sum of each cluster of four, and you get exactly the same dynamic range as the big-sensel sensor. Thus on a whole-sensor level, there's no dynamic range difference. This exact scheme is called a "Quad-Bayer" sensor, and is commonly used in smartphones, for unrelated reasons.
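One can check this with a quick shot-noise simulation. Assuming purely Poisson photon noise (no read noise, and a made-up photon count), a big sensel and a summed cluster of four quarter-size sensels end up with essentially the same signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000        # simulated exposures
photons = 4000     # photons falling on one big-sensel area (illustrative)

# One big sensel catches all the light; shot noise is Poisson.
big = rng.poisson(photons, n)

# Four quarter-size sensels each catch a quarter of the light...
quads = rng.poisson(photons / 4, (n, 4))
# ...but summing each cluster of four recovers the full signal.
binned = quads.sum(axis=1)

snr_big = big.mean() / big.std()
snr_binned = binned.mean() / binned.std()
print(round(snr_big, 1), round(snr_binned, 1))  # nearly identical
```

This works because a sum of independent Poisson variables is itself Poisson with the summed mean, so the binned cluster is statistically the same as the big sensel.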
Therefore, when talking about dynamic range (in photographic terms), sensor size matters, but sensor resolution doesn't. Sensor technology also matters: newer sensors are better. More than that, though, light matters. If you collect enough light (by exposing longer, or by using a bright lens), then even a tiny sensor can produce beautiful, high-resolution, noise-free images.
As such, I don't understand Glenn's quote from earlier:
Thank you for the explanation. What about noise? I imagine that noise follows the same principles? Can you elaborate on the relationship between noise and dynamic range?
Dynamic range is the difference between the sensor clipping point and the sensor noise floor. So a "twelve stop dynamic range" is just a different way of saying "there is 2¹² times more signal than noise" (if exposed perfectly).
However, that "noise" is nowadays no longer electric disturbances caused by the sensor ("read noise"), but the real, physical graininess of the light itself ("shot noise"). What we see as noise in pictures is an accurate measurement of the light; it's just that the light itself is a bit noisy.
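A small simulation makes the graininess concrete: photon arrivals follow a Poisson distribution, so relative noise falls as one over the square root of the photon count. The photon counts below are just illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Photon arrival is a Poisson process: for a mean of N photons,
# the standard deviation is sqrt(N), so relative noise is 1/sqrt(N).
rel = {}
for mean_photons in (10, 100, 10_000):
    samples = rng.poisson(mean_photons, 100_000)
    rel[mean_photons] = samples.std() / samples.mean()
    print(mean_photons, round(rel[mean_photons], 3))
```

At 10 photons the signal wobbles by roughly 30%; at 10,000 photons it is down to about 1%. That is exactly why shadows (few photons) look grainy while highlights look clean.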
In fact, modern sensors can measure light down to a couple of photons. It's truly astounding. The corollary is that sensitivity likely can't improve all that much further. It's already pretty close to ideal. (Read speed and full well capacity can still improve, though.)
It is indeed truly astounding. But I find discussions about dynamic range a bit of a red herring.
Historically, the main problem of photography had been too little light, which for most scenes meant longer-than-desired exposure times, leading to subject movement and so on. This was solved by film sensitivity creeping up, e.g. ISO 400 films becoming quite commonplace before the digital revolution. Modern digital sensors took this much further.
Digital cameras introduced the opposite problem: too much light. Because once the electron well is full, they just cap the recorded number. Film fails much more gracefully.
The quest for large dynamic ranges is related to that. You need a lot of dynamic range because overexposing a scene is really, really bad: you clip hard and cannot recover that information. So we expose to the right (ETTR), but then pull the shadows in post by extreme amounts. And then the shadows will be noisy if we are down to counting electrons; there is no way around that.
This would have seemed crazy to someone trained in film photography.
They would expose for the subject, accept that highlights retain less detail (but still some) and that shadows can be pulled a few stops but no more, and arrange their composition with the limitations of film in mind.
My photography goals in 2026 are closely related to the above. Instead of relying on post-processing, I want to arrange light in my compositions so that I can do something meaningful with 8 stops of dynamic range. I got a GND8 filter, and I will be carrying it around for landscapes. I also got a flash so I can start learning how to use it.
Sure. When I print, I don't want to make up pixels for the final rendition if I don't have to. Better situation with a "larger" sensor for that. If even that resolution isn't big enough, the extrapolation isn't so great.
Different dynamic, but really the same principles. It boils down to the sensitivity of the sensing agent, how well it can definitively resolve weak energy intensities.
Film had a specific "sensitivity", communicated to the operator as "ASA". When I went to shoot football games, I used Kodak's ASA 400 Tri-X, because it was more sensitive than Pan-X or Plus-X. I would also push the effective ASA to 4000 by developing in HC-110 developer replenisher, in a mix passed to me by a couple of AP stringers who shot Saints games. THAT was akin to the dynamic of digital ISO, where gain is applied post-capture to go beyond the base number. The sensor's sensitivity to light does not change. And what that gain does is push the uncertainty of measurement that is ground-floor noise well into the normally visible range.
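That "gain applied post-capture" point can be sketched in a few lines. Every number here is made up for illustration (full-well depth, read noise, shadow signal), but it shows how amplification scales the noise floor right along with the signal:

```python
import numpy as np

rng = np.random.default_rng(2)

full_well = 4096    # clip point in electrons (illustrative)
read_noise = 2.0    # read noise in electrons RMS (illustrative)
signal = 4.0        # a deep-shadow signal of just a few electrons

stds = {}
for gain in (1, 8, 64):
    # Capture: Poisson shot noise plus Gaussian read noise.
    electrons = rng.poisson(signal, 100_000) + rng.normal(0, read_noise, 100_000)
    # Gain is applied after capture; it amplifies signal and noise alike.
    out = np.clip(gain * electrons, 0, full_well)
    stds[gain] = out.std()
    print(gain, round(out.mean(), 1), round(out.std(), 1))
```

The sensor caught the same few electrons in every case; the higher "ISO" just multiplies what was already there, noise floor included, until it sits well inside the visible range.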
That Nikon calls it "ISO sensitivity" in their camera manuals is, IMHO, a crap attempt to make things seem the same for film photographers grappling with digital. It just hides the essential dynamics instead of helping people to understand them…
It all starts with the available light and the ability of a particular camera/lens to resolve it.
Some fill flash in more intimate/smaller landscapes can work wonders. I cut my teeth on this with slide film back in the 90s, and while I don't use the technique that often, when I do use it, it's amazing.
Isn't it more to do with resolution than sensor size? So, a 40MP APS-C sensor will retain detail better than a 24MP full frame sensor when printed at the same size…
Or have I misunderstood the points being made?
Having acquired a flash recently, this is something I'm interested in exploring too. Does it work with on-camera flash, and/or do you need a certain power?
Sorry, I think of "size" in terms of number of pixels in the sensor grid, and that determines how well the sensor can "resolve" the light painted on it by the lens. Terminology…