about printing and "color science"

I have been a bit lost since Bill and Jeffery ended their On Taking Pictures podcast for good last April, so I have been hunting around for some good conversation on photography to listen to while walking my dog, running, etc.

But then I found Iain and his Prime Lenses podcast: an enthusiastic Leicaphile (is that a word??) with a captivating voice who interviews people with the excuse of asking for their three favourite lenses (so a bit of gear talk, which never hurts). His interviewees (again, is that a word??) are a mixture of friends, gear nerds, etc., and not a lot of self-styled “artists”, which is refreshing because after a while they all sound the same (unless you’re Trent Parke or Martin Parr).

Anyway, in one of his latest episodes he interviewed Kaj O’Connel – only a kid, he must be 15 or 20 years old – but so refreshing and interesting that I even looked up the connection between Iain and Kaj and realized that Iain knew him from a previous video interview published here:

I’d recommend it, as long as you can skip past some annoying filler (I’m afraid to say the annoying bits all come from the interviewer, Ali… I’ll come back to her).

Lots of inspiration on printing and making books, and on doing it all with rather low-profile tech (like print-on-demand etc.). I started making prints again with my old but still trustworthy Canon Pro-100 (I have stacks of Red River paper bought years ago!), and I need to get my Blurb bookmaking going again, damn!

Back to Ali now… the host of that YouTube channel above… she has lots of videos on old cameras, so I watched a couple on the Canon 5D and Nikon D700. I’m not recommending them… especially because one of the two starts with an absolute downer for me (“let’s ask ChatGPT what the key points of the Nikon D700 are, and then I’ll confirm what ChatGPT says”!… like you couldn’t come up with your own list of things to say about a camera?? come on!!!).

But my main point is this: in the 5D review she mentions the “color science” of Canon… how wonderful the colors are straight out of the camera etc. etc.… which is something I’ve already heard and read in so many places (she’s not original, but how could she be if she puts out so many videos on cameras?).

So what about color science? In my view it can only be discussed if we’re talking JPEGs and the baked-in profiles. If we’re talking raw files, what is the “Canon” color science, or Fuji’s… or Nikon’s… there isn’t one, right? It’s all in the hands of the raw processor! Maybe we should (or could) talk about the inherent differences between, for example, CCD and CMOS sensors… or Panasonic-made sensors vs Sony-made sensors… insert here the other sensor manufacturers… nothing to do with Fuji or Sony or Leica if we talk about raws!

Now this ^^^^ is what I was thinking but let me know if I’m somehow mistaken here.

2 Likes

Well, here’s what posting here does… I went into the other room, made a short video about this little photobook I made a while ago, and posted it on YouTube.

I’ll share it here just because maybe others need the same kind of inspiration to do something only for yourself (this little book has been in my library – my real, physical library, I mean – for years, and I was happy with that… its existence known only to me and maybe my wife).

1 Like

No, it is a medical condition. Only symptomatic treatment is available, and it is very expensive.

Yes, exactly. But a lot of people shoot JPGs and care about their colors straight out of the camera.

In theory, there could be minuscule differences for different dyes, but an appropriate color correction matrix should take care of that so that differences would be very hard to spot except in some extreme cases.

Also, cameras calibrate WB differently, so if one is looking at a “RAW” file as rendered with the default WB, they might perceive colors to be different between different manufacturers.
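
To make that concrete, here is a rough sketch of those two steps in a typical raw pipeline; the white-balance gains and matrix values below are invented purely for illustration, not any manufacturer's real numbers:

```python
# Rough sketch (not any manufacturer's actual pipeline): how a raw processor
# typically gets from demosaiced camera RGB to a standard colorspace.
# All numbers below are made up for illustration.
import numpy as np

camera_rgb = np.array([0.18, 0.21, 0.15])   # hypothetical linear camera RGB pixel

# 1) White balance: per-channel gains chosen by the camera or the user
wb_gains = np.array([2.1, 1.0, 1.6])        # hypothetical "as shot" multipliers
balanced = camera_rgb * wb_gains

# 2) Color correction matrix: camera RGB -> linear sRGB
#    (rows sum to 1.0 so a neutral pixel stays neutral)
ccm = np.array([[ 1.80, -0.65, -0.15],
                [-0.25,  1.60, -0.35],
                [ 0.05, -0.55,  1.50]])
srgb_linear = ccm @ balanced

print(srgb_linear)
```

Change the default WB gains and the "same" raw file already renders differently, before any branded "color science" enters the picture.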

There are a lot of terms clueless people bandy about, including “color science”, “paradigm”, “law of large numbers” (almost always referring to a version of the central limit theorem). It is best not to worry about it and keep on pressing that shutter button.

3 Likes

This is great; it inspires me to do something similar instead of full-blown A3 or A4 prints. Now I just need the courage to spend 140€ on more ink, as I’m out :smiley:

3 Likes

Except I have seen, more than once, someone on YouTube comparing “color science” by opening the raws in Lightroom… :roll_eyes: And I don’t think I have seen even once someone specifically say it only applies to camera JPEGs.

I have also had someone on Facebook very seriously try to convince me that manufacturers massage the raw data to give the colors that specific look.

But there’s hope. The Snappiness guy mentioned in his most recent video, while talking about CCD and CMOS sensors, that he’s coming to realise that “color science” may have nothing to do with the sensor: https://www.youtube.com/watch?v=GUsuK2KYO44
Feel free to give my comment a :+1: to increase the chance it will be seen by others.

2 Likes

Geesh, convergence is annoying… :crazy_face:

I’ve been thinking retrospectively about my dalliances with so-called color science, and about how I read some of the persistent questions asked about it. Now, I’m not a licensed Color Scientist, more a Color Practitioner, or maybe just a greasy Color Mechanic, to borrow an automotive analogy, so take this all with a grain of salt.

To start, I believe the major manufacturers of digital cameras are striving to make ‘general purpose’ devices, that is, image recorders that encode data representing the scene as most of us would see it. I think the CS-folk call this “colorimetric”, with the Holy Grail being the so-called Luther-Ives condition, in which a digital imaging sensor records colors exactly as humans would see them. For a variety of reasons this is not possible with current digital imaging technology, so it’s a matter of how close they can get. What’s relevant here is that I don’t think the manufacturers are striving for a particular ‘look’, as that would make their cameras not so general-purpose.

In all that, our cameras are able to encode a range of color hues larger than any of the rendition media we currently have, displays or prints, can depict. And so, at some point in the processing of raw data to rendition data (camera and software), there has to be a ‘color conversion’, where the rich hues the camera captured are translated to something within the ‘gamut’ (range of hues) of the particular rendition medium. Again, this conversion in cameras is baselined to be “colorimetric” in what most call their ‘Neutral’ processing profile. There’s also a conversion of tone, or luminance distribution, if the rendition medium does not have the same luminance response as the original light of the scene, but I’m going to defer discussing that for another diatribe.

So, to me, the topic of ‘about printing and “color science”’ boils down to what color conversion is required to scrunch the camera hues into printer hues. Just so the process is understood, color conversion in the ICC profile world is a two-step process: first from input to XYZ, then from XYZ to destination. ‘XYZ’ is a colorspace associated with the range of human vision, derived originally from the CIE 1931 color-matching experiments, where 17 humans got to set the baseline for how cameras strive for “colorimetric”. The ICC profile scheme uses it as an intermediary in color conversion, so input (camera) profiles don’t have to know about all the possible output (printer/display) profiles.
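
For the mechanically inclined, at its simplest the two-step conversion is just two matrix multiplications. In this sketch the camera matrix is made up for illustration (its rows are only chosen so camera white lands near D65 white), while the XYZ → linear sRGB matrix is the standard published one:

```python
# Minimal sketch of the two-step idea: input -> XYZ -> destination.
import numpy as np

cam_to_xyz = np.array([[0.52, 0.33, 0.10],   # hypothetical camera profile matrix
                       [0.25, 0.65, 0.10],   # (rows picked so white ~ D65 white)
                       [0.02, 0.13, 0.94]])

xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],   # standard XYZ -> linear sRGB (D65)
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

camera_rgb = np.array([0.32, 0.28, 0.12])    # some linear camera-space pixel

xyz  = cam_to_xyz @ camera_rgb               # step 1: input profile -> PCS (XYZ)
srgb = xyz_to_srgb @ xyz                     # step 2: PCS -> destination profile

print(xyz, srgb)
```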

So, what you need to make all this work is 1) a camera profile that represents your camera’s ‘spectral response’ (how it encodes colors) and 2) a printer profile that represents your particular printer’s gamut. On the output side, the same thing applies to displays. And to JPEGs, but that’s a bit of a conundrum, as you don’t know about all the displays out there in the world. So, for that destination we typically use sRGB as ‘close enough’ (another technical term :laughing: )
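
If you want to poke at this without writing a raw processor, LittleCMS (here via Pillow's ImageCms wrapper) will do both hops through the PCS for you. The file names below are placeholders for an sRGB-tagged image and whatever paper/printer profile you happen to have:

```python
# Hedged sketch using Pillow's ImageCms (a LittleCMS wrapper).
# "photo.jpg" and "my_paper.icc" are placeholders you supply yourself.
from PIL import Image, ImageCms

im = Image.open("photo.jpg")                          # assumed to be sRGB

srgb    = ImageCms.createProfile("sRGB")              # built-in sRGB profile
printer = ImageCms.ImageCmsProfile("my_paper.icc")    # hypothetical printer/paper profile

# One call does both hops through the PCS: sRGB -> XYZ/Lab -> printer space
converted = ImageCms.profileToProfile(
    im, srgb, printer,
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    outputMode="RGB",    # most consumer inkjet paper profiles are RGB; use "CMYK" for CMYK profiles
)
converted.save("print_ready.tif")
```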

Been thinking about writing a blog post capturing all this, so your rumination is timely…

8 Likes

You explain it in a very concise and easy-to-understand way, especially for someone uneducated in these matters (me). A blog post would be nice. Do you think it could be one of those featured pixls.us blog posts? We haven’t had one in a while.

3 Likes

Oh well, you call yourself a color mechanic… but you must be one of those mechanics on a MotoGP or F1 team!

Thanks for the explanation (also thanks to everybody else who has commented).

PS I like the mechanic label so much because I’m a bit of a motorhead who discovered only in middle age the satisfying sensation of replacing and fixing wheels and parts on bikes and motorbikes…

3 Likes

Yeah, there’s so much ‘science’ written about the topic, not much ‘mechanics’. When I implemented the colorspace tool in rawproc, I had to do a lot of head-scratching to figure out what exactly I was implementing. Mechanically, it’s pretty simple, and understanding the ICC process made wrapping my head around ACES a lot easier.

A lot of my understanding was shaped by Elle Stone’s writings here:

https://ninedegreesbelow.com/

She used to post here. Also, she wrote some C code that uses the LittleCMS library to make a bunch of the standard profiles, with variations of the tone curve you can’t easily find elsewhere:

https://github.com/ellelstone/elles_icc_profiles

You don’t have to mess with the C code, just go into the profiles directory for ready-made .icc files. I use that directory as the foundation of my profile “zoo”…
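
As a quick way to see what’s in there, something like this (Pillow’s ImageCms again; the path is just wherever you cloned the repo) will print the description of each ready-made profile:

```python
# Small sketch: peek inside a local checkout of elles_icc_profiles and print
# the description of each ready-made .icc file.
from pathlib import Path
from PIL import ImageCms

profile_dir = Path("elles_icc_profiles/profiles")     # assumed checkout location

for icc in sorted(profile_dir.glob("*.icc")):
    profile = ImageCms.ImageCmsProfile(str(icc))
    print(f"{icc.name}: {ImageCms.getProfileDescription(profile).strip()}")
```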

1 Like

I actually wrote a post on a good bit of what we’re discussing here in 2020:

Doesn’t include the color transform mechanism; that might be what I can write on next.

2 Likes

Last weekend, I took part in a printing workshop, where our local photo club invited a renowned professional printer to tell us all about photo printing.

He came in with a huge and expensive twelve-ink Canon Pro1000 printer and several boxes full of Ilford and Hahnemühle A3 paper.

Everyone came prepared with a dozen or so of their favorite photos, and we’d print four of them for each participant during the workshop.

It was a valuable experience. The first batch of prints he did was truly awful. All detail lost in the shadows, just a mushy brown mess covering half the image. A beautiful abstract over-sharpened such that every edge was visible in triplicate. A sunset that faded to numb white instead of vibrant glory. And yet, he did not notice. “Gorgeous print!”, he proclaimed, and everyone dutifully oohed and clapped. Nobody else noticed, either. In a room full of a dozen photographers, nobody noticed the absolutely glaring issues.

His explanations of various questions regarding photography and printing were similarly suspect, at least where I knew a bit about the subject myself. “Overexpose, don’t underexpose”, “you can’t rescue shadows in post”, “dye inks will only last two months”.

It is, however, not my place to criticize another man’s work in front of others. So I kept my mouth shut, and merely asked to use the paper with the non-broken profile when it was time for my prints.

So, I learned a lot about my fellow photographers, got three nice and one ok print out of it, and noted down a few valuable insights about printing.

But above all, I learned one thing: if you want to learn to print, do it yourself. Do small proof prints first. Try the papers yourself. Use your own printer. Put the images up where they’re supposed to be hung, and view them in the real lighting of that spot. Another person’s remote expertise is no substitute for that. And there’s no printer profile or monitor calibration that can adjust for your aunt’s crappy living-room lights.

10 Likes

The PDF linked in the reference below has some nice information about color and camera pipeline processing, including examples of the different matrices used for standard WB settings between camera manufacturers, and obviously some different spectral properties between the different cameras’ colorspaces (which I know you have dabbled with, Glenn, and if I recall you keep your editing in that space until output??). What I am not sure of is whether the spectral data of the camera colorspace is a direct measurement of a particular sensor, or whether it is also tweaked in some way, such that the same sensor used in a camera made by Sony might be tweaked by Nikon to impart some of the differences we see between the cameras’ resulting colorspaces, or whether it’s mostly just different sensors… I guess the sensor data goes through some electronics and processing as well, so a difference there would likely show up too??

https://www.eecs.yorku.ca/~mbrown/ICCV2019_Brown.html

1 Like

It’s a direct measurement of the sensor of a particular camera model. What’s interesting, though, is how similar cameras of a particular make can be. For an example, go to this link:

https://glenn.pulpitrock.net/openfilmtools_ssf_plots/

Compare the two Nikons, and the two Canons.

I have used spectral profiles made for my Nikon D7000 on images from my Z 6, and the resulting images looked very close to ones made with the matched camera profile. Not so much when dragging a Canon profile over to a Nikon image.

I would surmise the camera manufacturers have specifications for their Bayer dyes that they pass to their sensor manufacturers.
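
For anyone curious how those SSF plots relate to the matrices we’ve been talking about: one common (simplified) way to turn measured spectral sensitivities into a camera matrix is a least-squares fit against the CIE colour-matching functions. This sketch assumes you already have the sampled data as arrays (e.g. at the same wavelengths as the plots linked above) and ignores illuminant weighting, so treat it as an illustration of the idea rather than how any profiling tool actually does it:

```python
# Sketch: fit a 3x3 matrix M so that XYZ ~ M @ camera_rgb, given sampled
# spectral sensitivities. Nothing here is a manufacturer's actual matrix.
import numpy as np

# camera_ssf: shape (N_wavelengths, 3)  -- R, G, B sensitivities
# cmf_xyz:    shape (N_wavelengths, 3)  -- CIE x-bar, y-bar, z-bar functions
def fit_camera_matrix(camera_ssf, cmf_xyz):
    # Solve camera_ssf @ A ~ cmf_xyz in the least-squares sense,
    # then transpose so that xyz ~ M @ camera_rgb for a pixel.
    A, *_ = np.linalg.lstsq(camera_ssf, cmf_xyz, rcond=None)
    return A.T

# usage (with your own data): M = fit_camera_matrix(camera_ssf, cmf_xyz)
```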

3 Likes

I always wondered about the following:

  1. What happens to impossible or near-impossible readings recorded by the sensor? E.g. the spectral responses of most dyes are so wide that getting a zero in one channel when another is large is either impossible or very unlikely, so certain combinations are ruled out; others are possible (weird combinations of monochromatic spectra), but chances are they are noise. Are these normalized somehow?

  2. Why are the transformations linear (i.e. a matrix)? Yes, it must be a good local approximation, maybe even a good global one, but a nonlinear transformation to a linear color space might give better results (while, it goes without saying, preserving neutral readings).

  3. Why does most software use linear / device color spaces for internal representation? Yes, it is great for a lot of transformations and processing, but for others a perceptual space, such as Oklab or similar, could be better. I am thinking of noise reduction and various perceptual adjustments. (Things are moving in this direction though; e.g. JPEG XL has an XYB color space, which is just LMS with a trivial transform.)

They’re not ‘impossible’ so much as ‘not visible’. I’ve dealt with sensors that work in UV and IR ranges far outside what humans can see; to the sensor it’s just another wavelength. That said, the notion of a working colorspace is to mitigate the shenanigans such data imposes on a visually oriented workflow.

Because it’s simple, for what’s really a corner case. In the usual rendering intent (relative colorimetric), colors that are already in the destination gamut are left alone; it’s just the extreme colors that get moved. Simply collecting them at the gamut border isn’t that egregious for the large majority of images. It’s when you have large regions of extreme hues that it becomes noticeable, as the fine hue gradations of the scene go missing in the rendition.
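
As a crude illustration of that “collecting at the border”: with plain matrix profiles, the relative colorimetric path effectively amounts to converting and then pinning whatever falls outside, something like this (per-channel clipping here is a simplification of what a real CMM does):

```python
# Naive sketch: in-gamut values pass through untouched, out-of-gamut values
# get pinned to the edge of linear sRGB.
import numpy as np

xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb_clipped(xyz):
    srgb = xyz_to_srgb @ np.asarray(xyz)
    return np.clip(srgb, 0.0, 1.0)     # extreme hues pile up at the gamut border

print(xyz_to_srgb_clipped([0.20, 0.30, 0.25]))   # in gamut: unchanged by the clip
print(xyz_to_srgb_clipped([0.15, 0.40, 0.05]))   # saturated green: R and B get pinned at 0
```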

What a LUT camera profile provides is control over that hue transform in the camera → XYZ part of the color operation; the XYZ → destination part is (usually) still a matrix transform. Still, it can provide enough control to give some variance to an otherwise cartoonish rendition gradation.

To defer “damage” to the colors for as long as possible. My notion is that applying a tone transform to the original scene-linear data moves the magnitudes and their relationships into ‘unnatural’ juxtapositions, which makes subsequent color operations less “reliable”, so to speak. In rawproc I only have an HSV-oriented saturation tool; I’ve found I can use it daintily before the tone transform, while the data is still scene-linear, and the operation looks better there than if I do it after the tone tool.
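
Here’s a toy version of that ordering point, with a plain gamma standing in for whatever tone transform you actually use; the pixel value and saturation amount are arbitrary:

```python
# The same small HSV saturation nudge applied before vs. after a tone curve
# gives different results, because the curve changes the ratios between
# channels before the HSV math sees them.
import colorsys

def saturate(rgb, amount=1.1):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(s * amount, 1.0), v)

def tone(rgb, gamma=2.2):
    # simple gamma as a stand-in for a real tone transform
    return tuple(c ** (1.0 / gamma) for c in rgb)

pixel = (0.40, 0.18, 0.08)     # some scene-linear orange-ish value

print(tone(saturate(pixel)))   # saturation while still scene-linear, then tone
print(saturate(tone(pixel)))   # saturation after the tone transform
```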

FWIW.

3 Likes