Okay, here’s my experiment. I set out to do two things: 1) demonstrate scene-referred editing with my existing rawproc program (the concept), and 2) look for the places where I’d insert OCIO (figure out where to insert the mechanism). It might also inform others trying to figure it all out, or it might confuse mightily. Anyhoo…
To start, a couple of things about rawproc, my hack software. It’s on github, here:
For those who have downloaded a copy at some time or are synced to the github repository: you’ll not be able to replicate this specific experiment unless you sync to a commit (49d3dc) I made today, which allows inserting a colorspace tool when color management is disabled. Seemed like a good idea at the time… I’m on a path to a release, and instead of a major build in a few weeks, I might do a minor build in a day or two.
So, rawproc is a toolbox-oriented image processor. You open an image, then apply tools in arbitrary order. The list of tools applied is shown in a panel; you can add or insert tools to your satisfaction. Each applied tool has a copy of the image, developed by starting with a copy of the previous tool’s image. So, a chain of applied tools is really a stack of images, which saves time selecting images for display as well as in processing. Yes, rawproc is a memory hog. That’s a feature, not a bug…
Each tool has a checkbox that you use to select the displayed image; you can be working a particular tool but displaying another. That’ll be useful for the experiment.
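To make the stack-of-images idea concrete, here’s a minimal Python sketch of how such a tool chain might be structured. All the names here are hypothetical, not rawproc’s actual internals: each tool caches its own copy of the image, computed from the previous tool’s copy, so switching the displayed stage is just a lookup, and inserting a tool only recomputes from that point down.

```python
# Sketch of a stack-of-images tool chain (hypothetical names, not rawproc's code).
# Each tool caches its result, computed from the previous tool's cached image.

class Tool:
    def __init__(self, name, func):
        self.name = name       # e.g. "gamma:2.2"
        self.func = func       # the pixel operation for this stage
        self.image = None      # cached result for this stage
        self.display = False   # the checkbox: show this stage?

class ToolChain:
    def __init__(self, source_image):
        self.source = source_image
        self.tools = []

    def add(self, tool, index=None):
        """Append or insert a tool, then recompute from that point down."""
        if index is None:
            index = len(self.tools)
        self.tools.insert(index, tool)
        self.recompute(index)

    def recompute(self, start):
        """Re-run every tool from 'start' on, each starting from upstream's copy."""
        prev = self.source if start == 0 else self.tools[start - 1].image
        for t in self.tools[start:]:
            t.image = t.func(prev)
            prev = t.image

    def displayed(self):
        """Return the image of the checked tool, or the last one if none is checked."""
        for t in self.tools:
            if t.display:
                return t.image
        return self.tools[-1].image if self.tools else self.source
```

The memory cost is one image per tool, which is the “memory hog” trade-off: display and recomputation both get cheap.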
rawproc has a boatload of configurable parameters, editable in the Properties dialog. To start this experiment, I turned off color management:
input.cms=0
This turns off all the automatic display and output ICC transforms; we’re going to do that by hand.
First, I’m going to open a raw image with only demosaic, white balance, and wavelet denoise, no color or gamma transform. Also, I’m going to assign the opened image its calibrated camera profile. Here are the properties:
input.raw.libraw.autobright=0
input.raw.libraw.cameraprofile=Nikon_D7000_Sunlight.icc
input.raw.libraw.colorspace=raw
input.raw.libraw.gamma=linear
input.raw.libraw.threshold=500
This is the same as dcraw -o 0 -g 1 1 -W -n 500. Here’s a screenshot of the result:
The internal image is 32-bit floating point, 0.0-1.0. The histogram is of the internal image, but is scaled 0-255 for convenience. This is the raw RGB data, with its original radiometric relationships ‘as-shot’. The display has no display profile applied; it’s just a transform of the 0.0-1.0 internal data to 8-bit 0-255 integer data for the GUI to render on the screen. And the assigned colorspace is the camera gamut determined by calibration, with no TRC or gamma, corresponding to the linear data, but that doesn’t show on the screen.
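That display transform is about as simple as it sounds; a sketch of the idea (my guess at the shape of it, not rawproc’s actual code) would be:

```python
def to_display(pixels):
    """Scale 0.0-1.0 float data to 8-bit 0-255 ints for the GUI.
    No display profile is applied; out-of-range values just clip."""
    return [max(0, min(255, round(p * 255.0))) for p in pixels]
```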
So now I’m going to construct a view transform using my available tools. I don’t have a LUT tool yet, but I think what I’m about to do is instructive. First, I’m going to scale the data to perceptual using a plain old 2.2 gamma:
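The gamma step itself is just a power function applied per channel; a sketch, assuming 0.0-1.0 linear input:

```python
def gamma_encode(pixels, gamma=2.2):
    """Lift linear 0.0-1.0 data toward perceptual with a plain power curve.
    Values below zero are clamped before the power to avoid domain errors."""
    inv = 1.0 / gamma
    return [max(0.0, p) ** inv for p in pixels]
```

Note this lifts midtones and shadows (e.g. 0.18 linear lands around 0.46), which is exactly the ‘spreading’ visible in the histogram.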
Note the histogram; it reflects the ‘spreading’ of the data to a perceptual range. Now, I’m going to apply an additional scaling to set the black and white points at the limits of the data range, 0.0-1.0, using a curve:
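In its simplest form, setting black and white points is a linear remap of the chosen interval onto 0.0-1.0 (the actual tool is a curve, but a straight-line sketch captures the scaling):

```python
def set_black_white(pixels, black, white):
    """Linearly remap [black, white] onto [0.0, 1.0], clipping outside.
    A stand-in for the black/white-point ends of a curve adjustment."""
    span = white - black
    return [min(1.0, max(0.0, (p - black) / span)) for p in pixels]
```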
In OCIO, I believe these two transforms would be baked into a LUT, along with maybe other “look-oriented” manipulations. Note the flat colors; the large camera gamut is being truncated by the display. So finally, I’ll add a colorspace tool to take care of that, converting the working data to linear sRGB, which is close enough to my display gamut for this experiment:
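For linear data, the core of that colorspace conversion is a 3x3 matrix multiply per pixel. Here’s a hedged sketch; in the real tool the matrix comes from the ICC profiles (camera and destination), so the identity and clipping matrices in the test are just stand-ins to show the mechanics:

```python
def convert_colorspace(rgb, matrix):
    """Apply a 3x3 linear transform to one RGB triple, clipping to 0.0-1.0.
    For a real camera->sRGB conversion the matrix would be derived from
    the ICC profiles; nothing here is an actual profile's matrix."""
    out = []
    for row in matrix:
        v = sum(c * p for c, p in zip(row, rgb))
        out.append(min(1.0, max(0.0, v)))
    return out
```

The clipping is where the “flat colors” come from when a large camera gamut is squeezed into a small display gamut: out-of-gamut values pile up at the limits.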
Oh, thanks for the really nice set of profiles, @Elle; I use them all the time.
So, at this point what I’ve done is load the camera raw as scene-linear as I can get it, then apply three tools to construct my present notion of a view transform. Note in the last screenshot I drew a red line above the first tool; tools below the line are the view transform, and I’ll maintain the line in the next screenshots.
Now, I’m going to do some work on the image: take care of a wonky color balance, resize and sharpen for output. To do this work, I’m going to insert tools in the chain above the view transform segment of the chain, which I think would pay homage to scene-referred editing. However, I’m going to keep the last tool in the chain checked for display, WYSIWYG. Here’s the result of the color balance, applied as a blue curve followed by a red curve, manipulated to bring those channels in line with the green:
Make note of where those curves went in the tool chain: after the initial open, but before the first view transform tool. Note the histogram; right now it’s always of the working data associated with the displayed image. I’m likely going to make it configurable to show the histogram of the tool in work, to support this workflow. Now, resize and sharpen:
Again, I stuck them in the chain above the view transform tools.
So, in a fashion, I’d assert I’ve demonstrated scene-referred image editing, as all of the editing was done on the linear data. You’ll note I didn’t adjust exposure; the scene doesn’t have a reliable gray patch, and I’m not shipping the image around, so I didn’t worry about that part. But I think I covered everything else. Note there was only one ICC-based transform, and that transform only did the gamut mapping from camera to display gamut.
Now, this exercise leads me to think that I could incorporate a passable first cut at OCIO by offering its transforms, from a pre-loaded configuration, as an alternate selection in the colorspace tool. A decent view transform LUT would take the place of the three tools I used for this experiment.
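To illustrate that baking idea, here’s a sketch of collapsing the gamma and black/white-point steps into a single 1D LUT and applying it with linear interpolation, roughly the way an OCIO view transform LUT would stand in for separate tools (sizes and names are my own, not OCIO’s):

```python
def bake_view_lut(size=64, gamma=2.2, black=0.0, white=1.0):
    """Bake the 2.2-gamma step followed by the black/white-point
    scaling into one 1D LUT, sampled uniformly over 0.0-1.0."""
    lut = []
    for i in range(size):
        x = i / (size - 1)
        y = x ** (1.0 / gamma)                                 # gamma first
        y = min(1.0, max(0.0, (y - black) / (white - black)))  # then levels
        lut.append(y)
    return lut

def apply_lut(p, lut):
    """Look up p (0.0-1.0) in the LUT with linear interpolation."""
    x = min(1.0, max(0.0, p)) * (len(lut) - 1)
    i = min(int(x), len(lut) - 2)
    f = x - i
    return lut[i] * (1 - f) + lut[i + 1] * f
```

One LUT application per pixel replaces the whole view-transform segment of the chain, which is the appeal: the look is precomputed once, and the per-pixel work is constant no matter how many steps were baked in.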
The thing I can’t yet figure out is how the camera colorspace gets transformed to the OCIO reference colorspace. Until I do, I’ll be putting in a colorspace tool first, to ICC-transform the data from the camera profile to a working profile corresponding to the OCIO configuration’s reference colorspace. Rec709 doesn’t seem right for this application; its gamut isn’t all that different from sRGB. So, I’d be looking to make an OCIO config with a Rec2020 reference colorspace. Still, until I learn otherwise, it’ll take an ICC transform to go from the camera to the reference colorspace.
Comments are most welcome; I’m trying hard to figure this out.