Anyone know if they ever came up with a commercial modulo camera?

for ref: MIT proposes new approach to HDR with ‘Modulo’ camera: Digital Photography Review

Just posing the question. This would be a camera worth considering, imo. :slight_smile:

1 Like

Not that I am aware of.

But do we really need that much bit depth for highlights? Recently I have been wondering if the electron well of a pixel could just bleed its charge C at a rate that depends on the number of electrons it holds, so that

\frac{dC}{dt} = \text{influx rate} - f(C)

where f could be linear, or basically anything monotonic with f(0) \approx 0. Once C is read out, the camera could just assume that the influx rate was constant and back out the total light intensity from a calibrated curve.

Note that I don’t know enough about semiconductors to assess whether the above is feasible.
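
If it helps, here is a rough numerical sketch of the idea, assuming a linear bleed f(C) = kC and made-up constants: integrate the rate equation for a constant influx, build a calibration curve, and invert it to recover the influx from the read-out charge.

```python
# Hypothetical sketch of the "bleeding well" idea: integrate
# dC/dt = influx - k*C for a constant influx, then invert the
# resulting calibration curve to recover the influx from the
# final charge. The linear bleed f(C) = k*C and all constants
# are my own assumptions, not from any real sensor design.

import numpy as np

def final_charge(influx, k=0.5, t_exposure=1.0, steps=1000):
    """Integrate dC/dt = influx - k*C with C(0) = 0 (forward Euler)."""
    dt = t_exposure / steps
    c = 0.0
    for _ in range(steps):
        c += (influx - k * c) * dt
    return c

# Calibration curve: final charge as a function of (constant) influx.
influxes = np.linspace(0.0, 100.0, 201)
charges = np.array([final_charge(i) for i in influxes])

def recover_influx(measured_charge):
    """Back out the assumed-constant influx from the read-out charge."""
    return np.interp(measured_charge, charges, influxes)

print(recover_influx(final_charge(42.0)))  # ~42.0
```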

You could just as well take a series of shorter exposures and add them afterwards. So long as none of the shorter exposures are overexposed, and there is no gap between exposures, the result will be identical to a single longer exposure, but without any highlight clipping.

That seems much more straightforward than wrapping with a modulus and unwrapping afterwards. (And it’s already implemented and proven to work in smartphone cameras; we’ll just need to get our hands on a zero-blackout, global-shutter big-boy sensor.)
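
Toy arithmetic to illustrate, with made-up photon counts and full-well capacity (nothing here is specific to any real sensor): one long exposure clips bright pixels, while the sum of gap-free short exposures recovers the same totals unclipped.

```python
# One long exposure clips at the full-well capacity; the sum of
# N short, gap-free exposures does not. All numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 3000.0, size=6)   # photons per pixel over the full exposure
full_well = 1000.0                         # clipping level of one readout (made up)
n_frames = 20

single = np.clip(scene, 0, full_well)                  # one long exposure
stacked = sum(np.clip(scene / n_frames, 0, full_well)  # 20 short exposures, summed
              for _ in range(n_frames))

print("true totals: ", np.round(scene))
print("single shot: ", np.round(single))   # bright pixels clipped at 1000
print("stacked sum: ", np.round(stacked))  # matches the true totals
```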

2 Likes

I’m not understanding the concept very well based on the video in the article.

What’s the difference between dumping the charges out and taking multiple captures (shutter closes) and then stacking them in PP? Wouldn’t dumping the charges in a single shutter opening also create the exact same temporal problems as taking multiple shots?

1 Like

One shot vs 20, hatsnp. The pixel charge itself gets dumped and a per-pixel counter is kept so the result can be weighted afterwards. Basically the sensor itself does a lot of the processing, as I understand this concept. :slight_smile:
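
Here is how I picture that dump-and-count bookkeeping, as a rough sketch with invented numbers (the real on-sensor logic is surely more involved):

```python
# Sketch of the "dump the charge and keep a counter" reading of the idea:
# whenever a pixel's well fills, it is reset and a per-pixel counter is
# incremented, so the total exposure can be reconstructed afterwards as
# count * full_well + residual. The full-well value is made up.

full_well = 1000  # electrons before the well "dumps"

def accumulate(photon_arrivals):
    """Simulate one pixel over an exposure, dumping the well whenever it fills."""
    charge, dumps = 0, 0
    for photons in photon_arrivals:   # photon counts per time slice
        charge += photons
        while charge >= full_well:
            charge -= full_well       # dump the charge...
            dumps += 1                # ...and count that it happened
    return dumps, charge

dumps, residual = accumulate([300, 900, 750, 1200, 80])
print(dumps, residual, dumps * full_well + residual)  # 3 230 3230
```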

1 Like

I get that part; what I mean is that the camera itself could also take 20 quick exposures and stack them itself. At least nowadays, newer Fujis have extremely fast electronic shutters.

1 Like

So long as you have a fast enough processor and storage, then maybe. Better to just dump the charge and count how many times you did so per pixel than to take 20 complete images at different f-stops, so long as you have a system that can actually do so. Again, just my opinion. :slight_smile:

No need for a different aperture I’d say, just a fast enough shutter speed to protect the highlights and then a few more captures with the exact same settings to expose the shadows.

No need for storage, only enough RAM or built-in camera storage to hold the exposures before combining them into one. Sort of like how Olympus and other systems do long exposures with multiple exposures. :slight_smile:

1 Like

That would amount to just measuring the whole charge directly — which is what sensors do now.

I think the design in the OP just keeps the modulo remainder, and reconstructs the leading digits from a continuity assumption.
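
Something like this toy 1D unwrapping is how I picture the reconstruction (my guess, not MIT’s actual algorithm); it assumes neighbouring pixels rarely differ by more than half a well:

```python
# Keep only the remainder modulo the full-well capacity, then unwrap it
# under a smoothness assumption, much like phase unwrapping. This is an
# illustration of the general principle, not the paper's algorithm.

import numpy as np

full_well = 256.0
true_signal = np.linspace(0.0, 2000.0, 50)  # smooth ramp far beyond one well
wrapped = np.mod(true_signal, full_well)    # what the modulo sensor would store

def unwrap(wrapped, well):
    out = wrapped.copy()
    for i in range(1, len(out)):
        diff = out[i] - out[i - 1]
        # A jump bigger than half a well is assumed to be a wrap-around.
        out[i] -= well * np.round(diff / well)
    return out

recovered = unwrap(wrapped, full_well)
print(np.allclose(recovered, true_signal))  # True for this smooth ramp
```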

Clever, but I don’t think that DR is the limiting factor for contemporary sensors [see edit below]. We have so much of it that manufacturers are now willing to sacrifice a stop or two for other features, mainly readout speed.

Sure, the more DR the better if other things are equal, but if this design requires compromises along other dimensions (cost, complexity, etc), it may not be viable commercially as it is solving a marginal problem.

EDIT: I totally missed the fact that the article is more than 9 years old. Apparently the concept was not taken up as a practical solution in the meantime. But I can imagine that before the current generation of high-DR sensors, it may have looked like an interesting solution.

Yup; nearly a decade and nothing after that. I guess it was way too complex or expensive to pursue, sadly. Maybe one day we can actually get a seamless HDR camera that doesn’t require taking multiple captures and blending them. Until then, doing so remains a viable workaround, for sure. :slight_smile:

Perhaps I am biased because I have become uninterested in dynamic range above a (practical, not theoretical) 9–10 stops.

I recognize that it is nice in some niche cases (e.g. filming outdoor scenes), but 99% of the time I consider the “let the sensor capture everything and I will pick what is relevant in post-processing” mindset a failure of the photographer. There is almost always a way to compose better, wait a bit, change the angle, arrange the light somehow, etc., so that one can make do with the dynamic range sensors were capable of 10 years ago.

I would be happy if they could create a sensor that has the actual perception of the human eye. I know Foveon had a sensor that didn’t require a Bayer filter to separate colors, but it had other issues that kept it from being widely adopted. :slight_smile:

You mean, a sensor with separate receptors for red, green, blue, and monochrome that uses post-processing to stitch everything together into a coherent picture?

Or do you mean a sensor that sees color images in good resolution in a tiny spot in the center, with a big blind spot directly next to it, and terrible resolution and mostly monochrome capture everywhere else?

Or do you mean a sensor where sensitivity is adapted on each individual pixel, so it can capture different brightnesses across the image? That’s neat! But with the major downside that this adaptation takes multiple minutes, and high gain leaves ungainly color splotches on the image?

The eye is actually kind of terrible as a sensor, is what I’m saying. It’s just that the post-processing in the brain corrects for its many mistakes. Or more correctly, the visual input is never seen directly, but merely feeds a world model. And it is this virtual model that is actually perceived.

1 Like

Indeed. And the lens is even worse :wink:

But the CPU is OK, just a bit large and power hungry.

And for some reason, it has a bipod attached.

Despite the unusual design, it remains in production — more than 100 million units are made each year worldwide.

1 Like

Somehow, the eye can isolate bright areas without blowing out the whole image (e.g., the sun vs. the blue sky). No sensor that I know of can do this (i.e., the whole sky gets blown out to white). Yes, the eye has blind spots, but the brain seems to be able to stitch things together to such an extent that it completes the scene. Maybe that’s the secret sauce that can never be replicated in a digital sensor. :slight_smile:

Eyes + brain work dynamically:

https://www.psychologytoday.com/us/blog/neuro-behavioral-betterment/202207/how-we-scan-the-visual-scene-using-top-down-brain-control

2 Likes

I’d say it just has more dynamic range. You can expose the sky with a camera to look almost exactly as you see it with your own eyes; you will just crush the shadows.

Our eyes will also blow out the sky if we go from a very dark place to sunlight very quickly, without giving our pupils time to adjust.

I remember Aurélien posting a study about how our eyes sort of apply a sigmoid function to the light they gather, but I don’t remember which topic it was in :smiley: It might have some useful material about this topic.
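
For illustration, here is a toy sigmoid response curve (my own parameters, not the model from that study): scene luminance spanning a dozen stops gets compressed into a bounded 0–1 response, which is roughly why very bright areas don’t simply read as pure white.

```python
# Toy sigmoid "tone response": luminance measured in stops around
# mid-grey is squashed into a 0..1 range. Parameters are made up
# purely for illustration.

import numpy as np

def sigmoid_response(luminance, mid_grey=0.18, slope=1.5):
    """Map linear scene luminance to a bounded 0..1 response."""
    stops = np.log2(luminance / mid_grey)    # stops above/below mid grey
    return 1.0 / (1.0 + np.exp(-slope * stops))

for stops in (-6, -3, 0, 3, 6):              # a 12-stop scene range
    l = 0.18 * 2.0 ** stops
    print(f"{stops:+d} stops -> response {sigmoid_response(l):.3f}")
```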

1 Like

Yeah; it might be too much to ask for a dynamically adaptive sensor. I figured the modulo camera, using bucket dumps and counters to work out how to treat various areas of the sensor, would come close. Regardless, the brain is extremely powerful to process such scenes the way it does, that is for sure. :slight_smile:

1 Like

Nice read, tankist02. :slight_smile:

1 Like