Why do we mimic perception?

Damn, tired eyes. I notice a lot of red-dot artifacting that I literally didn’t see until looking at my result again. Like I said: not pretty (and I’m pooped). lol

:slight_smile:


R0003463_01.DNG.xmp (17.4 KB)

5 Likes

There are now phones that can do 3.2k nits (Xiaomi Ultra), and the Google Pixel 9 does 2.6k nits. Yes, it is crazy; I don’t know what the point is. Watching movies on the beach in August?

My understanding is that gamma should be set up properly for the environment in which you are editing photos/videos, you save in a standardized format that either includes gamma as a parameter or fixes it to one value (e.g. sRGB ≈ 2.2), and then gamma correction is automatically applied on the display. This should take care of it automatically if, again, everything is properly calibrated. So most people should not worry about gamma if they are editing photos in a sane environment. Is that correct?
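If it helps to see that round trip concretely, here is a minimal Python sketch (NumPy assumed) of the encode-on-save / decode-on-display step described above. Note that sRGB is not a pure 2.2 power curve but a piecewise function with a short linear toe; 2.2 is only an approximation.

```python
import numpy as np

def srgb_encode(linear):
    """Linear light (0..1) -> sRGB-encoded values, as done when saving."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def srgb_decode(encoded):
    """sRGB-encoded values -> linear light, as done by a calibrated display."""
    encoded = np.asarray(encoded, dtype=np.float64)
    return np.where(encoded <= 0.04045,
                    encoded / 12.92,
                    np.power((encoded + 0.055) / 1.055, 2.4))

# Round trip: if both ends follow the standard, the linear values come back unchanged.
x = np.linspace(0.0, 1.0, 5)
print(np.allclose(srgb_decode(srgb_encode(x)), x))  # True
```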

Each color space comes with a room lighting standard. The intended perception will only be achieved in appropriate lighting and screen brightness!

Cinema viewing is intended for dark rooms (0 nit) and uses gamma 2.6 (e.g. DCI-P3). Video mastering is done in a standardized dim room (5 nit) and uses gamma 2.4 (Rec.709). Desktop PCs are intended for a lit environment (20 nit) and use gamma 2.2 (sRGB, Display P3).

Rec.709 uses the same color primaries as sRGB, but ups the gamma to compensate for the darker viewing environment. DCI-P3 likewise differs from Display P3 mainly in gamma (plus the white point), because of the viewing environment.

The purpose of gamma is to correct for the Stevens and Hunt effects: the loss of contrast perception in dim light. This depends on both the screen and the surround. Higher gamma boosts contrast, which compensates for that loss. The darker the environment and display, the more you need to boost contrast, and the higher the gamma. Conversely, if you had a full-reality 30k-nit display, no gamma would be needed (gamma = 1).
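To make that concrete, here is a hedged sketch (plain power laws, ignoring sRGB’s linear toe): content encoded for gamma 2.2 but decoded by a 2.4 or 2.6 display ends up with an end-to-end gamma above 1.0, i.e. the extra contrast needed for a dim or dark surround.

```python
import numpy as np

encode_gamma = 2.2                       # content encoded for a desktop/sRGB-like workflow
for display_gamma in (2.2, 2.4, 2.6):    # lit room, dim video suite, dark cinema
    system_gamma = display_gamma / encode_gamma   # net power applied to scene light
    x = np.array([0.2, 0.5])                      # two linear shadow/midtone values
    out = x ** system_gamma
    print(f"display gamma {display_gamma}: system gamma {system_gamma:.2f}, "
          f"0.2 -> {out[0]:.3f}, 0.5 -> {out[1]:.3f}")
```

With system gamma above 1, shadows and midtones are pushed down relative to the highlights, which is exactly the contrast boost being described.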

The desktop standard of 2.2 is intended for 100-nit displays in 20-nit rooms. 2.2 is probably too high for our often much brighter displays, and 2.0 would be more appropriate. But that battle is lost: 2.2 is the de facto standard, and since office lighting is supposed to be >200 nits, office screens will usually be in the 200-300 nit range. Still, a brighter display will look more contrasty. (HDR tone curves partly compensate for this.)

If you are editing in a dark or dim room, I would strongly suggest increasing your gamma to 2.4 or 2.6. In a lit room, 2.2 is appropriate. In an office-bright room, it really isn’t, but you don’t have a choice, and all viewers will be equally miscalibrated, so you might as well follow.

3 Likes

I was under the impression that the purpose of gamma was originally to correct for the non-linear emission of CRT screens; is that incorrect?

Yes, but the CRT was tuned that way because of perception.

If you’re interested, check the work of Charles Poynton.

1 Like

Yes, it’s a big “you did this, so I’ll do that” ping-pong game, confusing the central issues of scene vs. rendition.

I like your summary post, btw…

I mean, try looking at a phone screen outside in Portugal during summer when it’s 35-40 °C, sometimes more: it’s impossible. The more nits a screen can output, the better, in my opinion. I know it produces heat etc. when used for prolonged periods, which is counterproductive in summer, but sometimes you just want to reply to a text and it’s impossible in bright sunlight.

1 Like

Yes, I get it, it is nice to be able to use a phone in all sorts of conditions, including direct bright light, but in general I have always been able to work around this, e.g. by finding a spot of shade or blocking the sun with my body. For this, I find that 1000 nits is plenty.

That said, according to its spec sheet, my cheapo 150-euro phone can do a peak of 2k nits, which is, again, crazy. I wish this technology would trickle down to cameras; then I would not insist on a viewfinder.

(Incidentally, if I happened to find myself in Portugal during the summer, you would not see me looking at my phone. I would be in a seaside cafe, enjoying a plate of bacalhau à brás, sipping some local red wine.)

3 Likes

Given that our monitors decode the gamma-encoded data, I understand “gamma” in RAW processing mainly as a method of compression, where the highlights get compressed due to the inherent non-linearity in how we perceive lightness vs. luminance.

We could store that data without “gamma”, remove the decoding step on our monitors, and would perceive it just fine, but we would spend lots of bits on a range of highlights that our perception compresses anyway.
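A small sketch of that bit-allocation argument, assuming 8-bit storage and a plain 2.2 power law for simplicity: count how many of the 256 code values land in the darkest tenth of linear light.

```python
import numpy as np

codes = np.arange(256) / 255.0             # all 8-bit code values, normalized to 0..1

# Linear storage: the code value *is* the linear light value.
linear_shadow_codes = np.sum(codes < 0.1)

# Gamma-encoded storage: decode with a 2.2 power law to get linear light.
gamma_shadow_codes = np.sum(codes ** 2.2 < 0.1)

print(linear_shadow_codes)  # 26 codes cover the darkest 10% of linear light
print(gamma_shadow_codes)   # 90 codes cover that same range
```

With gamma encoding, roughly a third of the codes go to the shadows, where visible banding would appear first, and correspondingly fewer are spent on the highlights that our perception compresses anyway.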

2 Likes

R0003463.DNG.xmp (13.4 KB)

3 Likes

My edit with AgX and a few local adjustments:


R0003463.DNG.xmp (53.8 KB)

3 Likes

RT 5.12


R0003463_rt3.jpg.out.pp3 (16.4 KB)


R0003463_rt4.jpg.out.pp3 (16.4 KB)

2 Likes

I prefer the top one.