Unbounded Floating Point Pipelines

You’re always rolling your scene data through an imposed view.
The filmic view we use as an example is actually a view that resembles the response of a film camera with good dynamic range, something that is closer to our perception (and an aesthetic we are familiar with after decades of cinema and photography).
The alternative is another imposed view: the simplistic mapping of scene values to the screen with severe channel clipping, which behaves far from what we expect and know from experience and perception.
Are you seriously proposing that most photographers prefer that over a WYSIWYG view?

I would like to clarify one crucial point here… when you talk about “mapping of scene values to the screen”, are you considering a 1:1 mapping, or a “mapping through an ICC transform”?

I am asking because a properly applied ICC transform from a linear input colorspace to a well-calibrated monitor profile is such that a R=G=B=0.18 gray patch will be “perceived as middle gray” by the observer. This is obviously not a 1:1 mapping, but a linear-to-perceptual conversion.
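
To put a number on it, here is a minimal numpy sketch of that idea, using the sRGB parametric curve as a stand-in for the monitor’s TRC (this is just the math, not the actual ICC machinery):

```python
import numpy as np

def srgb_encode(linear):
    """sRGB parametric transfer curve, the kind of linear-to-perceptual
    conversion a V4 display profile would apply."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

# A scene-linear R=G=B=0.18 gray patch lands near the middle of the
# display range, which is why it is perceived as "middle gray".
print(srgb_encode(0.18))  # ~0.46
```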

Either a raw dump of linear data in the [0,1] range or the same data through a nonlinear transfer will be insufficient to display the range a photographer would expect. Your camera has 10 to 15 stops of dynamic range. You always expect that to end up displayed, not just a tiny window of a few stops and fugly channel clipping.
Whatever transform is used, it must take account of the camera’s dynamic range, otherwise it is feeding garbage to the output.
What I mentioned above was using the OCIO view (which would already be in the display range after the view transform) as the source for the display correction, which can be done with ICC (from the OCIO view space to the desired screen profile).

I don’t have time to do justice to your questions today, but I’ll try to post tomorrow.

Regardless of the tools used, I see three fundamental things that have to happen to linear scene-referred data to be displayed or printed, in no particular order:

  1. Some kind of scaling to a perceptual range. Gamma is an operation of that category; there are apparently better ways to do it.
  2. Some kind of additional scaling to put the black and white points at the limits of the display range. I think this may be accommodating the camera DR. Also, I think Look stuff is considered here.
  3. Some kind of color gamut transformation to respect the limits of the output medium.
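
A rough numpy sketch of those three steps (the black/white endpoints are made-up numbers and a standard Rec.2020→Rec.709 matrix stands in for the gamut transform; this is only to make the categories concrete, not anybody’s actual implementation):

```python
import numpy as np

# 3. Gamut transform, applied to the linear data: Rec.2020 primaries to
#    Rec.709/sRGB primaries (standard matrix), respecting the output medium.
REC2020_TO_REC709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def gamut_to_display(rgb_linear):
    return rgb_linear @ REC2020_TO_REC709.T

# 1. Scaling to a perceptual range (plain 2.2 gamma here; better curves exist).
def to_perceptual(linear):
    return np.power(np.clip(linear, 0.0, None), 1.0 / 2.2)

# 2. Additional scaling to put black and white points at the display limits
#    (the 0.02 / 0.95 endpoints are purely illustrative).
def scale_black_white(v, black=0.02, white=0.95):
    return np.clip((v - black) / (white - black), 0.0, 1.0)

scene = np.array([[0.18, 0.18, 0.18],
                  [2.50, 1.00, 0.40]])   # scene-linear, Rec.2020
display = scale_black_white(to_perceptual(gamut_to_display(scene)))
```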

To my understanding to date, OCIO is clearly all over #1 and #2, and @gez’s latest post helped me develop a perspective on #3 that I need to consider a bit.

Back in the work part (before #1, above), I totally get working in the linear-0.18gray-floatingpoint basis for scene referenced data. Not discussed enough for my satisfaction in the 299 previous posts is management of the associated color gamut; I think in OCIO that is captured in the “reference colorspace”, and video folk seem to like Rec709 for gamut. Me, I’d rather use my trusty Rec2020 or something bigger within which to work color, before rendering it down to output gamut.

My last hangup is with respect to how to get OCIO to recognize the gamut of my camera, which in ICC I currently have in a oh-so-carefully-crafted (I’m so proud… :smile:) calibrated camera profile. @anon11264400, I did look over your OCIO config that references the Nikon 5200, but not in sufficient detail to understand it, yet: GitHub - sobotka/nikonD5200ocio: Transform set based on Filmic with the Nikon D5200 camera transfer function.

My deal is this: my software is rather simple, and toolbox-like, so I can do a lot of fundamental things in arbitrary order. I think Photoflow is a lot like this. With that perspective, I see the utility in incorporating OCIO-based color management, but I also see ways to use ICC profiles to do the canonical things required to respect scene-referenced working data. Yes, I’m pretty sure I’d be using the profiles in ways Mother Nature hadn’t intended, but until I can work out how to consider my particular camera in OCIO, I’m going to use scene-referenced data with ICC profiles, specifically to capture color gamut information and transforms.

In these discussions, we need to separate the canonical work (data encoding, transforms, etc.) from the particular mechanisms. Right now, I believe I’ve constructed a toolchain in my hack software that adheres to all the scene-referenced strictures discussed here, using linear-gamma ICC profiles to carry just the color primaries. LittleCMS lets the floating point color rendering transforms take place in “infinite space” (unbounded???).

I don’t have a 3DLUT tool yet, but I will pretty soon. In the meantime for linear-to-display I’m using a plain old power gamma transform (#1) and a curve tool (#2) in its place, but the deal is I’m doing linear-to-perceptual and display scaling. A LUT would give me better control, but it would be doing the same fundamental task. Oh, and linear-to-display gamut (#3) is handled in a transform using 1.0 gamma (that is, no gamma) ICC profiles.

This thread has schooled me in ways far beyond what I originally envisioned; thanks all for that. I just need to keep separated the concepts from the mechanisms…

Your constant equating of “scene-referred=good=OCIO” and “display-referred=bad except for printing=ICC” is not helpful. Neither is the constantly repeated refrain “attempting to shoehorn a scene referred workflow into a display referred pipeline”.

You insist on limiting what can be done using ICC profile color management to V2 profiles and specs. And then you slam current ICC profile color management on this entirely false basis.

I propose removing the words “display-referred” and “scene-referred” entirely from this discussion. They aren’t helpful.

Instead try explaining how OCIO is better than ICC in concrete “this can be done and that can’t be done” terms.

Note that a filmic view can in fact be set up using ICC.

Come on. That distinction between models is the core of this discussion. Otherwise this becomes a discussion about whether you can or can’t do something with ICC/OCIO.
Which takes me to…

You can also fell a tree with a kitchen knife. It doesn’t make it the right tool for the task.

I won’t repeat what I said earlier, but it’s really about that. A tool designed for the scene-referred model vs. a tool designed for the display-referred model that needs to be adapted/hacked to fit in the scene-referred model, with no known examples of implementation that prove it’s up to the challenge.
Name ONE program that is currently used to produce scene-referred output using ICC. Link to example files that were produced that way (with “unbounded” ICC V4 profiles) and try opening those images in different software to see what happens.

Photoshop and GIMP, although they allow linear editing with floats, are still display-referred editors. Editing scene-referred images with those tools implies going blind or mangling the output.
Krita is making an effort to overcome that, and that effort is done through OCIO, but it’s still mostly a display-referred editing program where many operations will mangle scene data, so you have to be extra careful.
So where are the examples?

I got to thinking about that in terms of my available tools, and it came down to a two-step operation: 1) a gamma transform to scale the linear data to the perceptual relationship, and 2) an additional linear-function scaling to the black and white extents of the display. So, linear->perceptual->display, in my simple sandbox.

I think the 3DLUTs in the view transforms of OCIO do both things in one step, linear->display. I tried various hand-crafted curves, but nothing looked as “right” as the two-step. Yes, the power function by itself can’t do the job.
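
To make that concrete, here’s a small sketch of the two steps baked into a single table, a plain 1D LUT sampled on scene-linear input (OCIO views generally use 3D LUTs or shader transforms, so this is only the idea, not the real mechanism):

```python
import numpy as np

def two_step(linear, black=0.02, white=0.95):
    """2.2 gamma to perceptual, then linear scaling to the black/white points."""
    perceptual = np.power(np.clip(linear, 0.0, None), 1.0 / 2.2)
    return np.clip((perceptual - black) / (white - black), 0.0, 1.0)

# Bake both steps into one lookup table.
lut_in = np.linspace(0.0, 1.0, 4096)   # scene-linear sample points
lut_out = two_step(lut_in)             # display values

def apply_lut(linear):
    return np.interp(np.clip(linear, 0.0, 1.0), lut_in, lut_out)

samples = np.array([0.01, 0.10, 0.18, 0.50, 0.90])
print(apply_lut(samples) - two_step(samples))  # interpolation error is tiny
```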

Makes sense, and a LUT would produce a better result on the gamut “edges”.

“Nikon” caught my eye. Thanks, I’ll not spend any time on it for this purpose.

I think I might be doing that in rawproc, my hack software. Not sure when, but I’ll try to produce a treatise with screenshots in the next few days.

For me the filmic view isn’t really good for real-world footage, because filmic tonemappers were developed mainly for video games, not for photo editing.

Not everyone loves filmic tonemappers:
https://www.google.it/amp/s/ventspace.wordpress.com/2017/10/20/games-look-bad-part-1-hdr-and-tone-mapping/amp/

A better implementation that needs some testing is the Hable tonemapping, in the way it’s used in the mpv player.
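
For reference, this is (as far as I understand) the Hable “Uncharted 2” curve that mpv’s hable tone-mapping option is based on; a quick numpy sketch with Hable’s published constants:

```python
import numpy as np

# John Hable's "Uncharted 2" filmic curve, with his published default constants.
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
W = 11.2  # linear white point

def hable_partial(x):
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def hable_tonemap(x):
    """Map scene-linear values (possibly much larger than 1.0) into 0..1."""
    x = np.asarray(x, dtype=np.float64)
    return hable_partial(x) / hable_partial(W)

print(hable_tonemap([0.18, 1.0, 4.0, 11.2]))  # highlights roll off smoothly
```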

Scene-referred workflows and ACES are used mainly for matching different cameras and for adding CGI to video in a natural way.
Is that really necessary for the development of still raws?


The filmic tonemapper I mentioned in my examples was developed by a guy with extensive experience working in the movie industry, which I’d say is the industry that has historically taken photography to the next level and shaped the contemporary photographic aesthetics.
That some filmic tonemappers are used in games doesn’t mean that filmic tonemapping is something developed by the video game industry.
Filmic Blender was inspired by the response of cinema cameras and it was designed to replace the poor factory default from Blender. Nothing to do with games.

So what? Some bad tonemappers and poorly implemented HDR in games. Completely irrelevant.

It’s used for that, sure. But it’s also designed for retaining the dynamic range captured from the scene, and that’s its main benefit: You have some sort of “virtual scene” in your float file that keeps the light ratios from the scene, ready to be used anytime and produce physically plausible compositing.
Having wide dynamic-range scene-referred imagery is also necessary if you have multiple outputs that are SDR and HDR. The same “master” will work for your HD screen and your HDR TV (no matter how many nits it can spit). Being able to switch views is crucial for that.

Let’s deconstruct this sentence:
Development of still raws.
Are you considering development as producing a beautiful, display-ready output from your raw?
Or is it applying demosaicing and producing a good quality scene-referred file?
If it’s the former, then you’re producing display-referred imagery, so don’t worry about what the development program does internally, provided that it produces a beautiful JPEG at the tail end. Using a film-like view transform serves that purpose.
If it’s the latter, welcome to the group of people who really care for the scene-referred information. That’s information useful for us, we want to keep it. Using a view transform also serves that purpose, because it allows you to view how your scene-referred image will look through a specific display.

btw, keep in mind that I’m not advocating for a specific view transform. OCIO allows you to use whatever view transform you choose, and also to stack creative looks on top of it. Not really a forced/fixed view.

Let me provide an example of a scene-referred image that was entirely obtained with ICC-based tools.

It is the result of the exposure blending of 5 bracketed images. The RAW files were interpolated in PhotoFlow, and saved in the original camera colorspace before being aligned with align_image_stack. The aligned images have been re-imported into PhotoFlow, converted to linear Rec.2020 and blended with luminosity masks.

The blended image has then been saved in two versions, one in the same linear Rec.2020 colorspace, and a second version saved with a V4 sRGB profile.
Both TIFF files have been saved with a +4ev exposure compensation, which results in a significant amount of pixels exceeding 1.0 (just to demonstrate the unbounded conversions). The files can be downloaded from here.

The two screenshots below show the image without exposure compensation. The first one is a direct display of the original image, while the second one corresponds to the sRGB TIFF, re-opened, converted to linear Rec.2020 and with a -4ev exposure compensation applied.

The perfect agreement between the two images shows that V4 profiles with parametric TRCs are perfectly suited for manipulating images with arbitrarily large values…
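
The same idea in a few lines of numpy, with the sRGB parametric curve simply evaluated past 1.0 instead of being clipped, which is roughly what the unbounded floating-point ICC conversion does (a sketch, not PhotoFlow’s actual code):

```python
import numpy as np

def srgb_encode(x):
    # V4-style parametric sRGB TRC, evaluated past 1.0 instead of clipping.
    return np.where(x <= 0.0031308, 12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_decode(y):
    return np.where(y <= 0.04045, y / 12.92,
                    np.power((y + 0.055) / 1.055, 2.4))

scene = np.array([0.005, 0.18, 0.75])      # scene-linear pixel values
pushed = scene * 2.0 ** 4                  # +4 EV: values now exceed 1.0
restored = srgb_decode(srgb_encode(pushed)) * 2.0 ** -4

print(np.max(np.abs(restored - scene)))    # ~1e-16: the ratios survive intact
```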

Are we really sure about this? I mean, apart from the adoption of the teal-orange look that is so popular on Instagram and the like.
Below I put a random contemporary masterpiece that I can hardly believe was influenced by movie aesthetics…

This statement betrays the data.

I won’t be restating what I stated above. This demonstration does not counter anything I said. It clearly wasn’t understood.

Okay, here’s my experiment. I set out here to do two things: 1) demonstrate scene-referenced editing with my existing rawproc program (the concept), and 2) look for the places where I’d insert OCIO (figure out where to insert the mechanism). It might also inform others trying to figure it all out, or it might confuse mightily. Anyhoo…

To start, a couple of things about rawproc, my hack software. It’s on github, here:

For those who have downloaded a copy at some time or are synced to the github repository, you’ll not be able to replicate this specific experiment unless you sync to a commit (49d3dc) I made today, to allow inserting a colorspace tool when color management is disabled. Seemed like a good idea at the time… I’m on a path to a release, and instead of a major build in a few weeks, I might do a minor build in a day or two.

So, rawproc is a toolbox-oriented image processor. You open an image, then apply tools in arbitrary order. The list of tools applied is shown in a panel; you can add or insert tools to your satisfaction. Each applied tool has a copy of the image, developed by starting with a copy of the previous tool’s image. So, a chain of applied tools is really a stack of images, which saves time selecting images for display as well as in processing. Yes, rawproc is a memory hog. That’s a feature, not a bug…

Each tool has a checkbox that you use to select the displayed image; you can be working a particular tool but displaying another. That’ll be useful for the experiment.

rawproc has a boatload of configurable parameters, editable in the Properties dialog. To start this experiment, I turned off color management:

input.cms=0

This turns off all the automatic display and output ICC transforms; we’re going to do that by hand.

First, I’m going to open a raw image with only demosaic, white balance, and wavelet denoise, no color or gamma transform. Also, I’m going to assign the opened image its calibrated camera profile. Here are the properties:

input.raw.libraw.autobright=0
input.raw.libraw.cameraprofile=Nikon_D7000_Sunlight.icc
input.raw.libraw.colorspace=raw
input.raw.libraw.gamma=linear
input.raw.libraw.threshold=500

This is the same as dcraw -o 0 -g 1 1 -W -n 500. Here’s a screenshot of the result:

The internal image is 32-bit floating point, 0.0-1.0. The histogram is of the internal image, but is scaled 0-255 for convenience. This is the raw RGB data, with its original radiometric relationships ‘as-shot’. The display has no display profile applied, it’s just a transform of the 0.0-1.0 internal data to 8-bit 0-255 integer data for the GUI to render on the screen. And, the assigned colorspace is the camera gamut determined by calibration, no TRC or gamma, corresponding to the linear data, but that doesn’t show on the screen.

So now I’m going to construct a view transform using my available tools. I don’t have a LUT tool yet, but I think what I’m about to do is instructive. First, I’m going to scale the data to perceptual using a plain old 2.2 gamma:

Note how the histogram reflects the ‘spreading’ of the data to a perceptual range. Now, I’m going to apply an additional scaling to set the black and white points at the limits of the data range, 0.0-1.0, using a curve:

In OCIO, I believe these two transforms would be baked into a LUT, along with maybe other “look-oriented” manipulations. Note the flat colors; the large camera gamut is being truncated by the display. So finally, I’ll add a colorspace tool to take care of that, converting the working data to linear sRGB, which is close enough to my display gamut for this experiment:

Oh, thanks for the really nice set of profiles, @Elle; I use them all the time.

So, at this point what I’ve done is to load the camera raw as scene-linear as I can get it, then I applied three tools to construct my present notion of a view transform. Note in the last screenshot I drew a red line above the first tool; tools below the line are the view transform, and I’ll maintain the line in the next screenshots.

Now, I’m going to do some work on the image: take care of a wonky color balance, resize and sharpen for output. To do this work, I’m going to insert tools in the chain above the view transform segment of the chain, which I think would pay homage to scene-referred editing. However, I’m going to keep the last tool in the chain checked for display, WYSIWYG. Here’s the result of the color balance, applied as a blue curve followed by a red curve, manipulated to bring those channels in line with the green:

Make note of where those curves went in the tool chain, after the initial open but before the first view transform tool. Note the histogram, right now it’s always of the working data associated with the displayed image; I’m likely going to make it configurable to show the histogram of the tool in work, to support this workflow. Now, resize and sharpen:

Again, I stuck them in the chain above the view transform tools.

So, in a fashion I’d assert I’ve demonstrated scene-referenced image editing, as all of the editing tools were applied to the linear data. You’ll note I didn’t adjust exposure; the scene doesn’t have a reliable gray patch, and I’m not shipping the image around so I didn’t worry about that part. But I think I covered everything else. Note there was only one ICC-based transform, and that transform only did the gamut mapping from camera to display gamut.

Now, this exercise leads me to think that I could incorporate a passable first-cut at OCIO by putting its transforms from a pre-loaded configuration as an alternate selection in the colorspace tool. A decent view transform LUT would take the place of the three tools I used for this experiment.

The thing I can’t yet figure is how the camera colorspace gets transformed to the OCIO reference colorspace. Until I do, I’ll be putting in a colorspace tool first, to ICC-transform the data from the camera profile to a working profile corresponding to the OCIO configuration’s reference colorspace. Rec709 doesn’t seem right for this application, as its gamut isn’t all that different from sRGB. So, I’d be looking to make an OCIO config with a Rec2020 reference colorspace. Still, until I learn otherwise it’ll take an ICC transform to go from the camera to the reference colorspace.
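
Conceptually, that camera-to-reference ICC step is just two matrix hops through XYZ. A sketch of what I mean, where the camera matrix is a made-up placeholder (the real numbers would come from my calibrated camera profile) and the XYZ→Rec.2020 matrix is the standard D65 one:

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ (D65) matrix; placeholder values only,
# the real matrix would come from the calibrated camera profile.
CAM_TO_XYZ = np.array([
    [ 0.6988,  0.1869,  0.0790],
    [ 0.2857,  0.9104, -0.1961],
    [ 0.0228, -0.1720,  1.2383],
])

# Standard XYZ (D65) -> linear Rec.2020 matrix.
XYZ_TO_REC2020 = np.array([
    [ 1.7167, -0.3557, -0.2534],
    [-0.6667,  1.6165,  0.0158],
    [ 0.0176, -0.0428,  0.9421],
])

CAM_TO_REC2020 = XYZ_TO_REC2020 @ CAM_TO_XYZ

def camera_to_reference(rgb_linear):
    """Move linear camera RGB into an assumed Rec.2020 reference colorspace."""
    return np.asarray(rgb_linear) @ CAM_TO_REC2020.T
```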

Comments are most welcome; I’m trying hard to figure this out.

No, it simply shows that V4 ICC profiles with parametric TRCs can correctly handle floating-point data outside the [0,1] range. The non-linear sRGB TRC that is encoded in the pixel values of the second image can be properly “undone” by an sRGB → linear Rec.2020 ICC conversion, thus restoring the correct “ratios of energies” of the original scene-referred pixel values. That’s the only demonstration I had in mind with this example.

You asked for “a V4 ICC profile that works[1] for scene referred data”, and I tried to give an example of such a thing. I didn’t mean to counter anything you said, just to provide the example you asked for.

In fact, I think the main problem we are having in this thread is the lexical barrier. I am a scientist, I have a very good understanding of math as well as the details of how photons are detected and measured by photo-sensors, and I dare to say that I have a fair understanding of color management (although I am not a color scientist). However, I am not familiar at all with the jargon you are using, not because it is wrong, but because it is too accurate and technical for an outsider like me.

You might then say that I should not invest time in developing image editing software if I do not understand the technical discussions, but I would object to this as well. The math behind image editing is in most cases rather simple, and you do not need any VFX background to understand it correctly.

I see you removed all your posts, and that’s really a pity. I’m completely sure there is a lot we could learn from this discussion, once we get the correct key to decipher the technical terms that we find obscure…

I agree! This is such a shame. @anon11264400, could you please undo this? You might be fed up with this arguing and discussion, or frankly think you’re talking to a brick wall, but removing your contributions to the discussion is definitely not helping us get to some understanding of your vision.


And though most of this whole topic is too technical and bleeding edge for me to wrestle with, I think you’re doing a great job, I hope it goes well, keep posting!


Could we clarify the scope of the term “garbage” that has been extensively used in this thread?

What you are saying is that doing a simple ICC transform from the linear scene-referred working space to the monitor profile, which implies some clipping if the linear values are outside of the [0,1] range, results in garbage.

My opinion is that this all depends on what is the use of the produced image. If one is editing an intermediate image that is part of a static scene or a movie, and that needs to be composited/blended with other real or synthetic images, then I understand why you consider such a final product garbage.

However, in a photographic workflow where the image seen on the screen is the final product, I do not agree. The goal is to produce an artistic result that matches the ideas and personal taste of the photographer, and not a technically correct image that preserves the linear relationship between light intensities and pixel values in the scene.

In such a case, clipping might even be part of the artistic decisions of the photographer (for example, to obtain a “high-key” effect).

I am convinced that for most of the users in this forum, what matters is to have at hand tools that “do the right thing”, and that help them to achieve pleasing and realistic results. It is then their choice to deviate from realism if they wish to do so. In all cases, what they see on the screen matches what they will post on the web. Sometimes a filmic-like tone mapping is a good artistic choice to get closer to the common perception of the scene being depicted, but in other cases it might go completely against the result one wants to achieve.

There are however some basic principles that I believe should be respected. To give the most simple example, an exposure correction applied to linear, scene-referred values gives a pleasing result, while the same linear scaling applied to sRGB-encoded values most likely turns out quite ugly…
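
A tiny numpy illustration of that last point, comparing a one-stop (×2) exposure push applied to scene-linear values with the same multiplication applied to sRGB-encoded values:

```python
import numpy as np

def srgb_encode(x):
    return np.where(x <= 0.0031308, 12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

linear = np.array([0.02, 0.18, 0.40])           # scene-linear pixel values

push_linear  = srgb_encode(linear * 2.0)        # scale the light, then encode
push_encoded = srgb_encode(linear) * 2.0        # scale the encoded values

print(push_linear)    # ~[0.22, 0.63, 0.91]  a plausible one-stop push
print(push_encoded)   # ~[0.30, 0.92, 1.33]  midtones blown out, highlights clip
```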

I strongly support whoever tries to educate the photo retouchers to work on linear RGB data, but I have the feeling that at some point in the pipeline, the needs of pure photography as opposed to VFX/graphic design deviate quite significantly.

That said, I am really interested in trying to extend the capabilities of our FOSS tools to match the needs of fields other than “pure photography[1]”, but that does not mean that what we have been doing so far is just garbage.

[1] when I say “pure photography”, I mean the artistic process of creating a pleasing image out of a single, real-world scene. For example, replacing the sky in a picture is for me already outside the scope of “pure photography”…


Well, me too, actually. Three technical degrees, but only four real math courses. Long story…

Beginning of last year, I thought color management was about organizing my Crayons in the box. Hurt my head mightily, but I figured out LittleCMS and how to insert it in my software. Now, the current discourse is a bit of that all over again, but also about putting things I experienced last year with ICC in perspective. In this conversation, I think intermingling the concepts with the tools has confused the conversation. Once I teased them apart, the concepts are clear. The tools, well I’m getting there…

For me, the take-home message of this topic is the obvious one: in the real world, there is no upper limit to luminance. Hence the concept of “white”, meaning the brightest possible colour, has no meaning in the real world. There are only shades of gray.

From that, much follows.

I am gobsmacked by this statement. I am reasonably sure this statement isn’t quite your intention.

A scene referred model ensures a physically plausible set of manipulations. That means things like blurs work correctly as they would in reality, overs work, etc. Everyone here already understands the benefits of a linearized model.

What follows from that, is a scene referred model / view architecture. It cannot be a hybrid.

A scene referred model isn’t handcuffing a creative look or style. It is attempting solely to keep the various manipulations physically plausible and not corrupted by over darkening / fringing etc. and result in natural light blending, exposure adjustments via painting, etc.

As pointed out a few times, when attempting to shoehorn a scene referred model into display referred software, the following insurmountable problems result:

  • Only a small, artificial “slice” of the data will be displayed.
  • The slice of the data will not be displayed at the proper encoded output levels.
  • The slice of the data will potentially not display the correct colours.

Placing a tone map operator at the very tail end of a display referred system is the beginning of attempting to think like a scene referred system. There are other creative sweeteners that may be part of the view transform, as well as utility in optional / alternate views combined with looks.

This.

Once one has wrapped their head around this facet, the rest is a simple mental step; it applies to all operations done on scene referred data. The view forms the “render”.
