What criteria for a good monitor? Any usable monitor database?

This is a good resource… https://www.youtube.com/c/ArtIsRight/videos

Oh yes, indeed… :thinking:

It acts like a real 10-bit monitor; this is FRC. And it’s not that important. Regarding black, I’m dreaming of an OLED display.
Happy editing!

As I understood color science (I didn’t!), the difference between sRGB and AdobeRGB is noticeable, and AdobeRGB is a 10-bit color space.
Please correct me…

You can see the posterization effect on many photos which (should) have a smooth color gradient (like a sky). It can be reproduced by creating shallow gradients of one color in a bitmap editor.
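A minimal sketch of such a shallow gradient (plain Python; the 64..96 range is just an arbitrary dark-to-mid ramp):

```python
# A shallow gradient: a narrow tonal range (64..96) stretched across
# 1024 pixels, so each 8-bit quantization step becomes a ~32-pixel-wide
# band -- repeat this row as the rows of an image and the bands show.
width = 1024
ramp = [round(64 + (96 - 64) * x / (width - 1)) for x in range(width)]

distinct = len(set(ramp))
print(distinct)  # only 33 distinct shades across 1024 pixels
```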

If I understand correctly, this source (point #15) suggests 460 shades (non-linear) or 255 shades (linear) are needed to see perfect gradients. (Note: this is shades, thus 2^8 = 256 and 2^9 = 512. If we were talking about total colours, I’ve read the human eye can see approximately 10 million colours. Given 2^24 = 16.8 million, 8 bit would be more than enough, but based on this link at least, it’s not as simple as looking at the total.) As monitors have a gamma, and thus a non-linearity (of approx. 2.2, depending on the make), 9 bit would be the minimum required: Gamma FAQ - Frequently Asked Questions about Gamma
However, most real-world scenes are not perfect gradients, so 8 bit is fine in the vast majority of circumstances.
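The ~460 figure can be rederived from the 1% ratio argument in that FAQ (a sketch; the 100:1 display contrast is the assumption used there):

```python
import math

# Weber-style argument: adjacent shades should differ by no more than
# ~1% in luminance to appear smooth. Spanning a 100:1 display contrast
# in 1.01-ratio steps therefore needs log(100)/log(1.01) steps:
steps = math.log(100) / math.log(1.01)
print(round(steps))  # ~463 shades, i.e. just over 8 bits (2^9 = 512)
```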

The primary benefit of 10 bit is knowing that any banding you see is in the image, not due to monitor limitations. If your camera raw is more than 8 bits (probably all modern cameras are), then there shouldn’t be banding in the image, and if you introduce banding in editing, it should be easy to spot by turning the module on/off. If you are scanning, scan at 16 bit. So a 10-bit monitor is a luxury you probably don’t need.

Perhaps mathematically, 10 bit will introduce fewer rounding errors than 8 bit, but the visual difference would probably be minor - I say probably, because I’ve never actually seen a true 10-bit monitor.

Yes, there is a clear difference between the two spaces; however, no space is defined by a particular bit depth. Instead, however many bits you have get spread across the whole space. Editing 8 bits in a large space (like ProPhoto RGB) gives you less wiggle room, and is thus more likely to introduce artifacts like banding than editing in 16 bit, because there aren’t as many bits to spread around. Nowadays, unless you are editing JPEGs (which should be avoided), there aren’t many reasons to edit in 8 bit - only things like saving space, using old filters that only work in 8 bit, or using intensive processes that run faster in 8 bit.
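The "spread across the whole space" point can be put in numbers (a toy sketch; the 0.6 share of the larger space is illustrative, not a real gamut ratio):

```python
# Same number of codes over a wider range = coarser steps inside sRGB.
codes_8, codes_16 = 2 ** 8, 2 ** 16
srgb_share = 0.6   # hypothetical fraction of the big space that sRGB covers

step_srgb_8 = 1 / codes_8                   # 8 bit in an sRGB-sized space
step_big_8 = 1 / (codes_8 * srgb_share)     # 8 bit in the larger space
step_big_16 = 1 / (codes_16 * srgb_share)   # 16 bit in the larger space

print(step_big_8 > step_srgb_8)    # True: coarser steps, more banding risk
print(step_big_16 < step_srgb_8)   # True: 16 bit has room to spare
```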


That’s a good point.


Sorry, I was a little busy these last few days. Thanks to everybody who posted additional criteria to look out for. I’ll update the initial post again. Edit: apparently I cannot edit anymore, so I will add the complete criteria list at the bottom of this post.

So I’ll add hardware calibration to the list. As far as I understand, there are two ways to do hardware calibration:

  1. calibration device and software are built into the monitor
  2. external calibration device and software run on the computer

The second solution will probably not work under Linux, as the software is most probably only available for Windows and Mac.

Thanks, will add that to the list.

I’d hope that for the high-quality monitors we’re looking at here, this won’t play such a big role, but I’ll add it to the list. :slight_smile: What would be a reasonable threshold?

Okay, will add that as well.

If I find the function to change my initial post to a wiki-post, I’ll do so. If not, maybe someone else can help out. :slight_smile:

Sounds logical to me.

Okay, will note that down.

I’m still looking for a good searchable/filterable database where I can input all the relevant criteria and get a list of relevant monitors to compare. For example, https://www.newegg.com offers nearly no relevant criteria to filter for, https://www.bhphotovideo.com allows filtering by color gamut but not by color uniformity, and https://www.prad.de/ seems to have no filter function at all.

Any additional relevant information to choose a good monitor is welcome. :slight_smile:


List of criteria (as I cannot edit my initial post anymore)

  • screen size / monitor resolution: 24"-27" with 2K or >=32" with 4K
    (27" with 4K needs fractional scaling, that doesn’t work well under Linux yet)
  • panel type: IPS panel
  • color depth: 8bit is enough, 10bit is luxury
    (if you use a camera with >8 bit and can turn image processing modules on/off to inspect the reason for banding) (10 bits might still cause problems under Linux)
  • color gamut: >= 100% sRGB and >= 100% AdobeRGB
  • color uniformity: < 2 Delta-E
    (<=1 → not perceptible by the human eye; 1-2 → perceptible through close observation)
  • luminance uniformity: <5% max. deviation
  • monitor pixel policy: ?
  • PWM for backlight: No
  • calibration: hardware calibration
    (software running on monitor, or available for Linux)

You might also want to look at the NEC Spectraview line. I had a 27" in daily use for 10 years, and it served me well. The calibrator died, but I was able to use a supported third party with no problem.

I’ve since moved on to a Dell UltraSharp 31" 2560x1600, which I calibrate on Linux using DisplayCAL and the ColorSpyder 5 I bought when the NEC calibrator died. I’m happy with the size and resolution upgrade. I bought the Dell used locally from a designer for $100 on Craigslist. I’ve had it for a year and a half, and it still works fine.

Luckily, it does work under Linux :slight_smile: Look for the DisplayCAL software.

Unfortunately, even “high quality” manufacturers have problems making their monitors 100% free from defects. Threshold? To me, only 0 defects would be acceptable. Luckily, most manufacturers publish their policies (and most of these policies are, to me, not acceptable).

Have fun!
Claes in Lund, Sweden

DisplayCAL does software calibration. For hardware calibration, I think Eizo has software for Linux; BenQ is Windows-only…


10 bits is supported under Linux/NVIDIA

AFAIK only Photoshop, Krita, and vkdt can do 10 bits

On Linux, go to NVIDIA X Server Settings → X Server Display Configuration; in the layout, Ctrl+left-click on the screen. If it’s 10-bit capable, underneath you can choose the color depth. Works with dt, GIMP, Hugin, XnView, etc. Doesn’t work with CUPS, Gutenprint, TurboPrint, VueScan, etc.

The fact that you are able to see the image does not mean that it is in 10 bit. Sorry, dt, GIMP etc. do not output 10 bits. XnView?!?! You do know that that’s an image viewer, don’t you?
I tried 10 bits some time ago. Most desktop environments/GUIs look strange.

Make an ICC profile in 8 bits and another in 10 bits, and test it. The graphics output comes from Wayland/Xorg. XnView… why did you write this?! If you think I’m stupid, think twice! I use it on Xfce (Ubuntu Studio); the desktop/GUI is nice. Video editing is perfect.

sorry, I did not mean to insult anybody

My Linux workflow has been in 10 bits for 3 years. I work with photo retouchers, professional printing businesses and, finally, the clients (art directors…). You can test it with a 10-bit ICC profile made on Windows and one made on Linux (DisplayCAL).

Also, you need a DisplayPort connection; even with HDMI it’s sometimes not supported.

Good and cheap: Asus

I have the PA278QV - 27" - can be as low as $299

It’s sort of in between a normal and a real photo screen (which cost at least three times that price). It’s generally a good monitor. It’s my first 27", and after using it for a while my only regret is not having bought the 32" one.

Mine is an earlier model; the current equivalent is the PA278CV

Wow, we really need to fix the bit depth misunderstanding.

sRGB is a color space, as are DCI-P3 and Adobe RGB. sRGB is the least common denominator of all screens, as it is the smallest. But it contains pretty much all reflective colors available (the colors of non-self-luminous objects). If you need more, it’s usually for emissive colors (neons, lasers, anything that makes its own colored light).

8 or 10 bits are the intermediate steps into which you split the range between 0 and 100% of the display’s peak emission in the digital file. That’s encoding. But for us image processors, that’s also an output encoding, aka a final product. So further processing is not on the table; it’s a finished product, and any processing happens before. If bit depth defines the number of steps to reach the same floor (100%), you get that the more steps you have, the tinier they get. But encoding has other tricks, like the transfer functions (improperly called gamma) that help make the steps a bit more perceptually uniform.
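What the transfer function buys you can be made concrete (a sketch using a pure 2.2 power law; real sRGB adds a small linear toe near black):

```python
# Count how many of the 256 codes land in the darkest 10% of linear
# light, where the eye is most sensitive, for plain linear coding vs
# a pure 2.2 power-law ("gamma") encoding.
gamma = 2.2

linear_codes = sum(1 for c in range(256) if c / 255 < 0.10)
gamma_codes = sum(1 for c in range(256) if (c / 255) ** gamma < 0.10)

print(linear_codes, gamma_codes)  # 26 vs 90: gamma spends far more codes on shadows
```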

Reading that Adobe RGB is a 10-bit space scares me. It means nothing; the two have no direct link. Adobe RGB gives coordinates to light emissions; bit depth deals only with file encoding.

Seeing banding in 8 bits is rare. There are ways to prevent it (dithering), and it’s usually a low JPEG quality that is responsible, or an 8-bit file that got pushed (brightened) too hard. If you push a 12-bit file (or more) and then output it to 8 bits, it should be smooth. Since the display is the output, it shouldn’t be an issue.
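A sketch of the "pushed too hard" case (numbers are illustrative: a 3-stop shadow push, with 12 bit holding 16x the codes of 8 bit):

```python
push = 8  # +3 stops

# Push the deep shadows of an 8-bit file: the surviving values land
# 8 codes apart, leaving visible bands.
pushed_8 = sorted({v * push for v in range(16)})

# Push the same shadows in 12 bit (16x the codes), then quantize to
# 8 bit for output: every output code gets filled, so the ramp is smooth.
pushed_12 = sorted({round(v * push / 16) for v in range(256)})

print(pushed_8[:4])   # [0, 8, 16, 24]
print(pushed_12[:4])  # [0, 1, 2, 3]
```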

TL;DR: you need high bit depth to edit pictures and be able to brighten them without posterization. High bit depth is also mandatory for HDR output displays. But for SDR output, I’m not convinced. It doesn’t hurt, but if it’s only an excuse to raise prices, avoid it.
