There are two fundamental aspects of making a presentable image from raw camera data that require our attention: tone and color. From the time we step up to the scene with our camera to the presentation of a finished image, consideration of these two characteristics is fundamental and persistent through the workflow. The management of tone is probably more universally comprehended; it starts with the scene’s dynamic range and the camera’s exposure settings at the time the image is captured, and wends its way through the manipulation of scale, contrast, and other transformations in post-processing. Color, however, is more of a challenge, in that it’s both a physical and psychological phenomenon, and the mechanisms of its capture and depiction are quite ingenious. And vexing.
This missive attempts to present the management of color in “mechanic’s terms”, that is, more of a recipe approach without too much theory. It also focuses on the post-processing workflow, where the particular mechanisms are really quite simple in concept but are often obfuscated by the science and mathematics on which they are based. So, no gamut plots, no matrices, no words like “tristimulus” or “metamerism”. My assertion is that if the mechanics of color management were better understood, the underlying theory would be easier to comprehend afterward.
The particular mechanisms I’ll describe are built around the International Color Consortium’s profile standards. There are other mechanisms, such as Adobe’s DNG Camera Profiles (DCPs), but they all share some common ground in how color is described, and the ICC profile is the only one of those mechanisms that can be considered in the complete “soup-to-nuts” workflow, from the camera input to the display or file output.
First off, let’s make sure we understand the artifact called a digital image. This is one of those “ingenious” parts: recording light in a way that enables production of full-color images took some creative thinking on the part of numerous engineers. So we start with a mechanism called a sensor, one where the sensing mechanisms are arranged in a row-column array corresponding to the scene to be recorded. Light passes through and is focused by a lens, and that light bathes the plane of the array for all those little sensors to measure. The actual measurements are analog, an electrical voltage, and that voltage at each little sensor is turned into a corresponding digital number. All those numbers are collected into a chunk of computer memory in the camera and subsequently written into the raw file. Some camera manufacturers will do things to that data before putting it into the file, but for our intents and purposes that array of measurements is as ‘raw’ as we can expect to start with.
In order to be properly interpreted, a digital image requires a few items of information to accompany it. The most fundamental are the image width and height in pixels, corresponding to the raw measurements taken at each site on the sensor. Computer memory has no notion of “rows” and “columns”; it’s just a long chain of individual, addressable locations. So software needs to know the width and height of the captured image in order to determine where in the memory chunk one row ends and the next one begins.
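The arithmetic behind that row/column bookkeeping is simple; here’s a minimal sketch (the dimensions are hypothetical, and a row-major layout is assumed):

```python
# A raw image is a flat buffer of measurements; the width tells software
# where each row ends and the next begins (row-major layout assumed).
width, height = 6000, 4000  # hypothetical sensor dimensions

def pixel_offset(row, col, width):
    """Offset of the measurement at (row, col) in a flat, row-major buffer."""
    return row * width + col

# The first measurement of the second row sits immediately after the
# last measurement of the first row:
second_row_start = pixel_offset(1, 0, width)
```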
Relevant to this article, and not as well understood as width and height, are the numbers that describe the camera’s spectral response: how well it records light through the spectrum. These numbers are the start of the chain of information on which color management is based; without them, the raw camera data can’t eventually be transformed into an image that looks right to us humans. The main reason is that most modern cameras can record light that translates to colors we can’t see. It’s not the “black hole” sort of invisibility; rather, displays will clip: any red, say, past the bounds of what a device can reproduce just gets shown as the maximum red it can display, not the red we’d like to see. It works like how the rabbits of Watership Down count; they can only count up to four, and anything more is “hrair”…
The word “gamut” now comes into play; it refers to the bounds of the colors that can be depicted by an output device. One might be compelled to think of camera performance as a gamut, but don’t; the color people will tell you that’s wrong. The proper term for camera color performance is “spectral response”. But the information describing both looks the same:
- Three numbers describing the “white point” of the device;
- Three numbers describing the reddest-red the device can handle;
- Three numbers describing the greenest-green the device can handle;
- Three numbers describing the bluest-blue the device can handle.
These numbers are not RGB values; they’re in a coordinate space called XYZ. For purposes of this post, let’s not explore that further, except to say that these numbers are used by software to convert an image from one color space to another. The last three entries (the red, green, and blue coordinates) are commonly called “the primaries”, and are usually depicted as a 3x3 matrix. They are usually captured in a data format called a “profile”, which can be a file of its own or embedded in most image formats. There are a couple of dominant profile formats; Adobe’s DNG Camera Profile (DCP) is one, but for this post we’re going to use the ICC profile as our talking point, as it is the only one recognized in all parts of the workflow, from camera to display/printer characterization.
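To make the primaries-as-matrix idea concrete, here is a small sketch using the widely published sRGB/D65 matrix (the matrix values are standard; the code itself is only an illustration):

```python
# A profile's primaries are three XYZ coordinates, one per channel,
# arranged as a 3x3 matrix.  Multiplying a linear RGB triple through the
# matrix yields its XYZ coordinates; shown here with the sRGB/D65 matrix.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],  # X row: contributions of R, G, B
    [0.2126, 0.7152, 0.0722],  # Y row (luminance)
    [0.0193, 0.1192, 0.9505],  # Z row
]

def rgb_to_xyz(rgb, primaries):
    """Linear RGB -> XYZ via the profile's primaries matrix."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in primaries]

# Pure white (1,1,1) lands on the white point -- D65 for sRGB:
white = rgb_to_xyz([1.0, 1.0, 1.0], SRGB_TO_XYZ)  # ~ (0.9505, 1.0, 1.089)
```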
One final thing to note about the color space information is that in some instances it’s conveyed differently than in a “profile”. For instance, the OpenEXR image format has its own metadata format that has specific fields for the white point and primaries. Also, most raw file formats use a field to supply the name of a “standard” profile corresponding to how the embedded JPEG is encoded; it is up to the reading software to have the white point and primaries for that profile name handy for subsequent color work. Curiously, none of the raw formats in my experience except Adobe DNG contain white point and primaries in any form for the actual raw data; we’ll discuss this later.
Laying out the above concepts now brings us to The Key Point. This is the essential thing to comprehend in considering how to manage color. The Key Point applies to any software that handles images; either the programmers write code to do it per The Key Point, or they don’t and leave rendition of the images to the whims of the displays or printers. So, without further ado:
From opening the raw file to the display or output of the finished image, a color profile describing the image’s color characteristics must accompany the image. If the image is converted to another color space, the profile of that color space must accompany the transformed image onward.
Think of the color profile as essential metadata, as essential as the width and height numbers are to properly interpreting the image.
This figure graphically depicts our subsequent discussion:
Now, this is a generic figure, attempting to capture the essential activities of raw processing and their corresponding color considerations. Some software doesn’t use the concept of a “working profile”; dcraw, for instance, works the data in the camera space and converts it to a user-selectable color space just prior to writing the output file; the default is sRGB. But a working profile is an important concept, so it’s included in the figure. The rest of the post describes the reasons and considerations for each of the places in the workflow touched by a color space activity.
This is the first step in the chain that is required to adhere to The Key Point. For the color characteristics of the image captured by the specific camera, a white point and primaries that represent those characteristics must be “assigned” to the image. The verb “assign” is important to consider; assigning a profile just attaches it to the image; the image is not modified at this step. So now, with the camera-specific white point and primaries, the image can be properly considered with respect to its color at each subsequent step in the processing workflow.
Sooo… if this information isn’t supplied by the camera in the raw file, where does it come from? It can be specified in one of two main ways: 1) from a humongous list stored internally by the raw processing software, one entry for every camera the software supports, or, 2) in a profile provided by, and usually created by, the photographer.
Either way, the white point and primaries are determined by taking a picture of a color target with the camera, then using special software to read that image and calculate the white point and primaries. The color target has colored patches, colors specifically selected to support the calculations. A commonly used target is the MacBeth ColorChecker, which has 24 patches and can be procured in a variety of formats. For the “humongous list”, the target shot is usually exposed in sunlight. For “photographer-provided”, the photographer shoots the target in the same light as the rest of the session’s photographs.
The “humongous list” takes on a different form in each software package. In the Adobe software family, the “list” is actually a directory full of individual DCP files, with most cameras represented by at least five files. Of note is that these Adobe profiles are also used to impart subjective “looks” to an image, which is over and above the subject of this post. In dcraw, the list is an internal data structure called adobe_coeff; you can download the source code and readily find it. RawTherapee has a file called camconst.json, and if the color information for a particular camera isn’t contained in it, the dcraw list is also consulted.
The upshot of all this is that, if your camera is supported by the software, it’ll automatically assign an appropriate camera profile when the raw file is opened. That behavior can be overridden in most raw processors by identifying a separate profile in a file, which can be in either an ICC or DCP format, depending on what the programmers decided to implement.
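In spirit, that automatic assignment is just a lookup keyed by the camera model, with a user-supplied profile taking precedence. A toy sketch (the camera name and all the numbers below are made up for illustration):

```python
# A toy version of a raw processor's internal camera table: the model
# string read from the raw file's metadata keys into the color numbers.
# (Camera name, white point, and matrix values here are hypothetical.)
CAMERA_TABLE = {
    "ExampleCam X-1": {
        "white_point": (0.9505, 1.0000, 1.0890),  # D65, for illustration
        "cam_to_xyz": [[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]],          # made-up primaries
    },
}

def assigned_profile(camera_model, user_profile=None):
    """A user-identified profile wins; otherwise fall back to the table."""
    if user_profile is not None:
        return user_profile
    return CAMERA_TABLE.get(camera_model)  # None if camera unsupported
```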
For a number of reasons you can research separately, working with the image in the camera color space is generally not a great idea. So most raw processors provide the option to “convert” the image to a working profile for subsequent editing. Note the verb “convert”: this is an actual transform of the image colors, and in this step it is to a “well-behaved” color space with a large gamut. After the conversion, that working profile follows the image through the subsequent workflow.
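Mechanically, a “convert” runs every pixel through the source space’s matrix into XYZ, then through the inverse of the destination space’s matrix. A sketch using the standard sRGB matrix pair, so the round trip should hand back the original values:

```python
# "Convert" transforms the pixels themselves: source RGB -> XYZ via the
# source primaries, then XYZ -> destination RGB via the inverse of the
# destination primaries.  Demonstrated with the standard sRGB/D65 pair.
SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]
XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def convert(pixels, src_to_xyz, xyz_to_dst):
    """Convert a list of linear RGB pixels from one color space to another."""
    return [mat_vec(xyz_to_dst, mat_vec(src_to_xyz, rgb)) for rgb in pixels]

# Round trip: out of sRGB into XYZ and back again.
out = convert([[0.5, 0.25, 0.75]], SRGB_TO_XYZ, XYZ_TO_SRGB)
```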
There are a whole bunch of working profiles to consider. It is not a heady endeavor to construct one; all it takes is the specification of the three primaries and a white point. Most working profiles bound the color gamut to the visible colors, and use one of the standard white points; but neither is “required”. @Elle Stone has a good article that describes what a “well-behaved” working color space should look like.
Some raw processors provide only one choice of a working profile. This isn’t necessarily a bad thing, as long as its gamut is larger than your ultimate display or printer destination.
Generally, in a color-managed raw processor, you’re not going to be looking at the image in its working profile color space. The displayed image is first “converted” to the display profile before it’s thrown onto the hardware. This is important to your result, as you want the displayed image to represent where you want the image to end up when it is rendered for your audience. Accordingly, making a custom profile for your display and viewing conditions is important to supporting that consistent rendering.
Your display may have an “sRGB” mode, but the internal image still needs to be converted from the working profile to sRGB to “look proper” on that display in that mode. But beware: the specification of sRGB isn’t identical across the profiles available on the internet…
We generally pass images around in various file formats: JPEG, TIFF, PNG, etc. All of these particular file formats allow for the embedding of an ICC format profile containing the primaries and white point that describe the image’s color characteristics. So whatever you do in regard to the subsequent discussion in this section, embedding the corresponding color profile in the file is essential to maintaining the integrity of The Key Point.
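For JPEG specifically, the embedded ICC profile travels in APP2 marker segments whose payload begins with the identifier string “ICC_PROFILE”. A crude check for whether a file carries one (a sketch, not a real JPEG parser):

```python
# Embedded ICC profiles in JPEG live in APP2 marker segments whose
# payload begins with the identifier "ICC_PROFILE\0".  A crude presence
# check -- a real parser would walk the marker segments properly.
def has_embedded_icc(jpeg_bytes: bytes) -> bool:
    """True if the byte stream contains a JPEG ICC-profile segment tag."""
    return b"ICC_PROFILE\x00" in jpeg_bytes
```

A color-managed viewer that finds no profile can only guess at the image’s color space (usually it assumes sRGB); embedding the profile removes the guesswork.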
The predominant reason to save an image is to provide it for viewing by an audience. For all the talk of things like sRGB as the common denominator for the web, whatever you do, keep in mind The Key Point and make sure a color profile corresponding to the image is embedded in the file. If you do that, properly-configured color-managed software will know how to handle your image for their displays.
Converting to sRGB when saving an image for the web has worked okay until now because most displays’ gamuts were in the neighborhood of sRGB. The increasing availability of wide-gamut displays is going to vex that thinking; paying homage to The Key Point will help you somewhat, as at least color-managed destinations will be nice to your image.
Consider this: If all image viewing software in the whole wide world were “color-managed”, that is, they converted the image to the color characteristics of their particular output device to render it, you could send images around in whatever color space you chose, and the proper conversion would be done to display it. Alas, that is a Utopian dream; in “the wild” that is generally not the case. That’s why sRGB has become a de facto least common denominator: most consumer displays up to this time have approximated this color space, so things looked “about right”, mostly, sometimes, well…
In your own world, there are a few good reasons to save your final image to a file in its working profile. Mainly you’d do this to support further editing in another software, preserving the larger gamut. If you do that, make sure the corresponding working profile is embedded in the file.
Printing an image requires the same consideration as displaying it; a custom profile that represents the printer’s color capabilities should be the conversion destination.
Saving an image to a file to send to a printing service requires mainly adherence to the printing service’s instructions. If they tell you, “send sRGB”, send them a sRGB image. If they tell you to make sure the file has an embedded profile, make sure the image conforms to that profile and the profile is embedded in the image file. If they tell you that they’ll do the conversion, you can send in whatever color space as long as the corresponding profile is attached. The Key Point plays here, as well.
The phenomenon of color is very complicated. However, the mechanisms to manage color are rather straightforward, if you respect The Key Point. And, once you understand the mechanisms, working your way through the science and theory of color is a lot more manageable.
- RawPedia, “How to get LCP and DCP profiles”
- RawTherapee, camconst.json (GitHub: Beep6581/RawTherapee, dev branch)
- Elle Stone, “What Makes a Color Space Well Behaved?”
- Elle Stone, “Will the Real sRGB Profile Please Stand Up?”