Wide gamut monitor suitability for photo editing

monitor
wide-gamut
color

(George Pop) #1

Hello,

I came upon a page on Gary Ballard’s website that quotes someone writing several years ago on an Apple forum about the suitability of wide gamut monitors for photo editing:

A wide gamut LCD display is not a good thing for most (95%) of high end users. The data that leaves your graphic card and travels over the DVI cable is 8 bit per component.

You can’t change this.

The OS, ICC CMMs, the graphic card, the DVI spec, and Photoshop will all have to be upgraded before this will change and that’s going to take a while.

What does this mean to you?

It means that when you send RGB data to a wide gamut display the colorimetric distance between any two colors is much larger.

As an example, let’s say you have two adjacent color patches: one is 230,240,200 and the patch next to it is 230,241,200.

On a standard LCD or CRT those two colors may be around 0.8 Delta E apart. On an Adobe RGB display those colors might be 2 Delta E apart; on an ECI RGB display this could be as high as 4 Delta E.

It’s very nice to be able to display all kinds of saturated colors you may never use in your photographs, however, if the smallest visible adjustment you can make to a skin tone is 4 delta E you will become very frustrated very quickly.

The argument sounds reasonable to me, but I was wondering if it still applies to current hardware.
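For what it’s worth, the thought experiment in the quote is easy to re-run. Below is a minimal sketch in plain Python using the published D65 matrices, ΔE76, and an approximate 2.2 gamma for Adobe RGB (the spec value is 563/256); it interprets the same two 8-bit patches first as sRGB, then as Adobe RGB:

```python
import math

# Sketch only: ΔE76 for a one-step (8-bit) change in green, interpreting the
# same patch as sRGB vs Adobe RGB. Matrices/white point are the published
# D65 values; Adobe RGB's gamma is approximated as 2.2 (spec: 563/256).
SRGB_TO_XYZ = [(0.4124, 0.3576, 0.1805),
               (0.2126, 0.7152, 0.0722),
               (0.0193, 0.1192, 0.9505)]
ADOBE_TO_XYZ = [(0.5767, 0.1856, 0.1882),
                (0.2974, 0.6273, 0.0753),
                (0.0270, 0.0707, 0.9911)]
D65 = (0.95047, 1.0, 1.08883)  # reference white

def srgb_decode(c):
    """sRGB transfer function (input already scaled to 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_lab(rgb8, matrix, decode):
    lin = [decode(v / 255.0) for v in rgb8]
    x, y, z = (sum(row[i] * lin[i] for i in range(3)) for row in matrix)
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / n) for v, n in zip((x, y, z), D65))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

p1, p2 = (230, 240, 200), (230, 241, 200)
de_srgb = math.dist(to_lab(p1, SRGB_TO_XYZ, srgb_decode),
                    to_lab(p2, SRGB_TO_XYZ, srgb_decode))
de_adobe = math.dist(to_lab(p1, ADOBE_TO_XYZ, lambda c: c ** 2.2),
                     to_lab(p2, ADOBE_TO_XYZ, lambda c: c ** 2.2))
print(f"one green step as sRGB:     ΔE76 ≈ {de_srgb:.2f}")
print(f"one green step as AdobeRGB: ΔE76 ≈ {de_adobe:.2f}")
```

The wider space does give a larger per-step ΔE, though for this particular greenish patch the gap comes out modest rather than the 2–5× the quote suggests; the effect is strongest for saturated colors near the gamut boundary.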


#2

My wide-gamut screen has a built-in sRGB gamut setting, which I use.


(Andrew) #3

I think there are 10-bit graphics cards and 10-bit panels; there’s a thread touching on this here on pixls. When I looked into this, I couldn’t find any evidence that you can see any difference.


#4

I hear the difference is mainly important if you do lots of black-and-white stuff.


#5

More current hardware can transfer 10, 12 or 16 bits per channel (30/36/48 bits per pixel), according to http://www.hdmi.org/learningcenter/kb.aspx?c=3

HDMI 1.3:

Higher speed: HDMI 1.3 increases its single-link bandwidth to 340 MHz (10.2 Gbps) to support the demands of future HD display devices, such as higher resolutions, Deep Color and high frame rates. In addition, built into the HDMI 1.3 specification is the technical foundation that will let future versions of the HDMI Specification reach significantly higher speeds.

Deep Color: HDMI 1.3 supports 10-bit, 12-bit and 16-bit (RGB or YCbCr) color depths, up from the 8-bit depths in previous versions of the HDMI specification, for stunning rendering of over one billion colors in unprecedented detail.

Broader color space: HDMI 1.3 adds support for “x.v.Color™” (which is the consumer name describing the IEC 61966-2-4 xvYCC color standard), which removes current color space limitations and enables the display of any color viewable by the human eye.

And I mention HDMI because it’s one of your alternatives to DVI cables. The hexachromatic monitor I’ve used had HDMI 1.3 (even though I was only driving it with on-board graphics).
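The “over one billion colors” figure in the Deep Color blurb is just two raised to the total bits per pixel. A quick sketch of the arithmetic:

```python
# "Over one billion colors" in the Deep Color blurb is simply
# 2 ** (bits per channel * 3 RGB channels).
for bits in (6, 8, 10, 12, 16):
    total = 2 ** (bits * 3)
    print(f"{bits} bits/channel = {bits * 3}-bit pixels: {total:,} colors")
```

At 10 bits per channel that is 2^30 ≈ 1.07 billion, which is where the marketing number comes from.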


(George Pop) #6

Thanks for the replies.


(Morgan Hardwood) #7

No.

The post was written by Karl Lang in 2004. Here is the full version: http://www.fredmiranda.com/forum/topic/674065

Just two paragraphs before the paragraph you quoted,


#8

While others have answered already, I don’t want to miss an opportunity to boast about my experience either. So here we go. :wink:

I have been using a wide gamut monitor (a Dell U2413, covering almost all of AdobeRGB) for a few years now. Even with the 8-bit output of my graphics card and software (the monitor itself can handle 10 bits) I have never noticed any banding whatsoever. Given that most people are happily using cheaper monitors which are in fact only 6-bit internally, 8 bits are probably good enough for most things. I know there are special cases, like finely detailed monochrome images on X-ray viewing terminals, but that’s not an everyday thing. This is even in line with your quote, since a ΔE of 2 or less is considered not noticeable.
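To make the 6-bit point concrete: banding is just quantization, since a smooth gradient can only land on 2^bits distinct values per channel. A tiny illustration (pure Python; it says nothing about the FRC/dithering many 6-bit panels use to fake more levels):

```python
# Banding is quantization: a smooth 0..1 ramp can only land on 2**bits
# distinct output levels per channel.
def distinct_levels(bits, samples=4096):
    top = (1 << bits) - 1                    # highest code value, e.g. 255
    return len({round(i / (samples - 1) * top) for i in range(samples)})

for bits in (6, 8, 10):
    print(f"{bits}-bit panel: {distinct_levels(bits)} distinct gray levels")
```

Whether 64, 256 or 1024 steps are enough to hide the bands then depends on the gamut being spread over them, which is the quoted post’s point.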


(Andrew) #9

** FEATURE IDEA **
** FEATURE IDEA **
I’ve also been using a wide gamut screen for a few months now, a Dell UP2516D; I like it and haven’t noticed any issues. However, it has complicated processing a little. I keep the panel in AdobeRGB mode, at least for processing photos, and output the final result (usually from RT) to the RT Medium… profile (i.e. the AdobeRGB equivalent), adding “adobeRGB” to the file name to remind me. So far so good.

If I want anyone else to see the photo, I produce an sRGB version: I change the output profile, usually resize and sharpen, switch off the “main” sharpening in the Detail toolbox, and save at 95% quality. This usually gives a JPEG of around 2–3 MB.

So here is a wish-list suggestion for RT development: how about a new setting which, when enabled, makes RT automatically output a second file in addition to the “primary” one? There would be a new “second file” PP3 living in the user’s folder. RT would output the “main” file, then apply the settings from the second-file PP3, then output that too.

The idea is that the second file PP3 has a user-defined set of standard changes, just the changes needed beyond what you had for the main output. E.g. in my case, change profile, switch off main sharpening, resize to scale x.

This might be useful for people who want a second smaller jpeg for social media?
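Until something like that exists, a rough approximation can be scripted outside RT with rawtherapee-cli, which applies multiple -p profiles in the order given. A hedged sketch, where the file names and the partial social.pp3 (sRGB output profile, resize, main sharpening off) are made up for illustration:

```shell
# Sketch, not an existing RT feature: render the "main" file, then a second
# run that layers a partial social.pp3 on top of the same settings.
rawtherapee-cli -Y -j95 -o main.jpg   -p photo.raw.pp3 -c photo.raw
rawtherapee-cli -Y -j95 -o social.jpg -p photo.raw.pp3 -p social.pp3 -c photo.raw
```

The second invocation produces the social-media version without touching the sidecar used for the main output.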

And how about a facility to put some text (e.g. your name) in a corner of the picture where not much is happening, as determined by some clever algorithm! Colour/brightness automatically derived so it is legible but not in your face.

I did say “wish”…


(Morgan Hardwood) #10

You can already do that: simply create a partial PP3 with those social-media settings, then batch-apply it when needed. No need to complicate things. The only catch is that you would need to undo the changes applied by the partial profile after sending the photos to the queue; multi-PP3 support would remove that obstacle.


#11

This is possible with ImageMagick (and probably G’MIC). I have done this before but don’t remember how. I am sure that you will figure it out.
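For the corner-text part, ImageMagick can at least do the manual version. A hedged sketch, with no clever placement algorithm: the gravity, offsets, size and colour are all hand-picked here, and photo.jpg is a stand-in name:

```shell
# Manual version only: draw small white text in the bottom-right corner.
convert photo.jpg -gravity SouthEast -pointsize 24 -fill white \
        -annotate +20+20 'Your Name' photo-signed.jpg
```

Picking a quiet corner and a legible colour automatically would need extra scripting on top of this.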