The eye (and brain) do not work the same way as a camera. The eye’s retina consists mostly of rod cells, which are highly sensitive to luminance, plus three types of cone cells, each sensitive to a different range of wavelengths, which is how we distinguish color. The most common type of camera sensor, by contrast, consists of identical photosites with a color filter array laid over them. The human eye needs time to adapt to different lighting situations, while a digital sensor is arguably more affected by temperature. The camera has an analog-to-digital converter between the sensor and its processor; the eye uses chemical reactions to send a signal to the brain. So it doesn’t seem like a 1:1 dynamic range comparison to me.
This website shows that the best digital cameras can capture up to 12 stops of dynamic range in their raw files (the human eye has around 20 stops, says the internet):
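For a quick sanity check on what those numbers mean: a stop is a doubling of light, so N stops of dynamic range corresponds to a 2^N : 1 contrast ratio between the brightest and darkest distinguishable luminance. A minimal sketch:

```python
import math

def stops(l_max, l_min):
    """Dynamic range in stops (powers of two) between the brightest
    and darkest distinguishable luminance values."""
    return math.log2(l_max / l_min)

# 12 stops corresponds to a 2**12 = 4096:1 contrast ratio
print(stops(4096, 1))                  # 12.0

# ~20 stops for the eye is roughly a 1,000,000:1 ratio
print(round(stops(1_000_000, 1), 1))   # 19.9
```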
This website lists which cameras those are:
And this guy sheds some light on the subject:
I’m into panography, which almost always means dealing with very high dynamic ranges. While photographers would normally bracket exposures with a typical camera, people in well-funded fields like forensics would use something like this:
Question unclear. A high-dynamic-range scene could be stored in an 8-bit JPEG, but it would be posterized because of the low bit depth.
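To illustrate the posterization point, here’s a toy sketch. It is deliberately simplified: real JPEG pipelines apply a gamma curve before quantizing, which spreads codes more evenly through the shadows, but 256 levels still can’t cover 12 stops smoothly.

```python
import numpy as np

# Hypothetical linear scene spanning 12 stops of luminance: 1 .. 4096
scene = np.logspace(0, 12, num=1000, base=2.0)

# Naive 8-bit quantization (256 levels, like a JPEG channel)
quantized = np.round(scene / scene.max() * 255).astype(np.uint8)

# The bottom 4 stops of the scene (values 1..16 out of 4096)
# all collapse onto just two output codes: posterization
dark = quantized[scene <= 16]
print(np.unique(dark))   # -> [0 1]
```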
Vastly. The raw data is close to what the sensor recorded, like a raw ball of dough, while the JPEG is like a baked bread roll. You could turn the ball of dough into a bread roll, but you could also turn it into a pizza; once you have a bread roll, there’s no going back. It’s hamburger time.
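To make the dough metaphor concrete: once a tone curve has been baked in and the result quantized to 8 bits, inverting the curve can’t recover the discarded tonal levels. A toy sketch, where the 0.45 gamma and 12-bit depth are illustrative assumptions rather than any specific camera’s pipeline:

```python
import numpy as np

# Stand-in for 12-bit linear raw data: 4096 distinct tonal levels
linear = np.linspace(0, 1, 4096)

# "Bake" it: apply a gamma curve and store as 8-bit (256 levels)
baked = np.round((linear ** 0.45) * 255) / 255

# Try to "un-bake" by inverting the curve
unbaked = baked ** (1 / 0.45)

# Far fewer distinct tonal levels survive the round trip
print(len(np.unique(linear)))    # 4096
print(len(np.unique(unbaked)))   # at most 256
```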