Unbounded Floating Point Pipelines

So this would be equivalent to my second definition, yes? no?:

That was a yes or no question. Do adjustments that make the image intensities “Proportionate to the intensities in the original scene, which would allow any editing operation that preserves proportionality, such as setting/changing a white balance, adding positive/negative exposure compensation” qualify as “scene-referred” adjustments as you are using the term?

I think we are talking about the same thing, but given the issues with terminology, I’d like to verify.

I’m assuming, of course, that the adjustments are made on linear RGB; otherwise the concept of “positive/negative exposure adjustments” makes no sense at all, and neither does “setting/changing a white balance”.
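To be concrete about what I mean by those two operations, here is a little NumPy sketch with made-up values: exposure compensation is a single scalar multiply on the linear values, white balance is a per-channel multiply, and neither changes the ratios between pixels.

```python
import numpy as np

# Three made-up pixels in linear RGB (scene-referred, unbounded).
linear_rgb = np.array([[0.02, 0.03, 0.05],
                       [0.18, 0.18, 0.18],
                       [0.90, 0.85, 0.70]])

# Positive exposure compensation: one scalar for the whole image, here +1 stop.
exposed = linear_rgb * 2.0 ** 1.0

# White balance: one multiplier per channel (hypothetical values).
wb = np.array([2.1, 1.0, 1.6])
balanced = linear_rgb * wb

# Per-channel ratios between any two pixels survive both operations.
print(exposed[2] / exposed[1])    # identical to linear_rgb[2] / linear_rgb[1]
print(balanced[2] / balanced[1])  # identical to linear_rgb[2] / linear_rgb[1]
```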

Shifting from ‘engineer’ to ‘bear-of-little-brain’, to my thinking the camera does the measuring of the light energy, so its notion of intensity forms the reference. Any subsequent transform that changes the differences in measured intensity amongst the pixels moves away from the scene reference.

Gray = 0.18 provides an anchoring reference, I guess because white has so many possible places to be and black is buried in the sensor noise regime.

Floating point just provides a bigger beaker within which to mix the transform chemicals.


And so, that’s my simple understanding of “scene-referenced” to date. The benefit of working the data with respect to tone and color in this reference makes intuitive sense, although I haven’t really studied the implications. Deferring linear-to-perceptual transforms until the end is the challenge for me; I like to look at my data along the way.


That’s exactly the benefit of separating the data from the view: you can look at the data both before and after the view transform, taking measurements of the scene-referred values while also seeing how the output transform modifies them.
Try this with Blender: open the program, change the renderer to Cycles (the dropdown selector in the center of the top row that has Blender Render as the default), and render the default scene with F12. Click on the rendered image and check the values sampled from the scene and post-transform.
You can then change the view transform in the color management panel (the right panel, under the Scene button) without re-rendering. Check the values again.
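If you’d rather drive those same steps from Blender’s Python console, they look roughly like this (a sketch only; the available view transform names depend on which OCIO config Blender is using):

```python
import bpy

scene = bpy.context.scene

# Switch the renderer to Cycles and render the default scene.
scene.render.engine = 'CYCLES'
bpy.ops.render.render()

# Inspect or change the view transform without re-rendering; the exact
# names ('Filmic', etc.) come from the active OCIO config.
print(scene.view_settings.view_transform)
scene.view_settings.view_transform = 'Filmic'
```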

I’ll post some screenshots as soon as I have a computer available.

This page: Scene Linear Painting — Krita Manual 5.2.0 documentation

says this:

When you are done, you will want to apply the view transform you have been using to the image (at the least, if you want to post the end result on the internet). This is called LUT baking and is not possible yet in Krita. Therefore you will have to save out your image in EXR and open it in either Blender or Natron. Then, in Blender it is enough to just use the same ocio config, select the right values and save the result as a png.

Here’s a screenshot of Krita set up to use OCIO and Troy’s filmic config:

krita-with-filmic-viewer

Hopefully I got all the drop-down boxes set up properly. I’m using a black-and-white image because I’m using sRGB as my monitor profile, which is wrong enough to make colors look unpleasant. But the OCIO config file seems to assume the monitor profile is sRGB, yes? No?

I didn’t do any actual editing. I just opened the image, set up OCIO, and exported the image as an EXR file.

I’m assuming the filmic mapping is applied to the exported image, yes? No?

I opened the EXR file with Blender:

blender-with-krita-results

But I don’t know how to tell Blender to “use the same ocio config, select the right values, and save the result as a png”. Any hints would be appreciated.
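My untested guess from poking at Blender’s Python console is something like the following, but I have no idea whether it is right (the paths are placeholders, and I’m not sure what the view transform is called once the filmic config is loaded):

```python
import bpy

# Load the scene-referred EXR exported from Krita (placeholder path).
img = bpy.data.images.load('/tmp/krita-export.exr')

# Pick the view to bake in; the available names depend on the OCIO config
# Blender was started with.
scene = bpy.context.scene
scene.view_settings.view_transform = 'Filmic'

# save_render() should write the image through the scene's color management
# (the "LUT baking" the Krita manual mentions) when saving to PNG.
scene.render.image_settings.file_format = 'PNG'
img.save_render('/tmp/krita-export-filmic.png', scene=scene)
```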

Thanks!

I will open a new thread showing examples of this workflow with Blender so we don’t contaminate this thread with information that is not directly relevant.

Edit:
Done. With lots of screenies and friendly tone :slight_smile:


I’m going to try this one more time. A gray card reading from a scene doesn’t tell you anything about the dynamic range of the scene, partly because it depends on where in the scene you put the card. On a beach, for example, the card might be under the open shade of an umbrella or in direct sunlight.

There is an easy way to estimate scene EV, and that’s by using Fred’s Ultimate Exposure Guide or the equivalent:

http://www.fredparker.com/ultexp1.htm
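In case “scene EV” is an unfamiliar term: the chart is anchored to the usual ISO 100 definition, so the arithmetic (with made-up camera settings) looks like this:

```python
import math

def scene_ev(f_number, shutter_seconds, iso=100):
    """Scene EV referenced to ISO 100, from a correct exposure at the stated ISO."""
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)

# Sunny 16 rule: f/16 at 1/100 s, ISO 100, which is roughly the "bright sun"
# entry in Fred's chart.
print(round(scene_ev(16, 1 / 100), 1))  # ~14.6, i.e. about EV 15
```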

Here is a question specifically about an ACES workflow:

When shooting a movie, presumably often more than one camera is used, and presumably the different scenes are shot under multiple light sources ranging from relatively low light to very bright. Presumably the camera person always tries to set the exposure to capture maximum information.

When archiving all the footage - the frames from the various scenes as shot by the various cameras - let’s say one scene was shot under bright sun, EV 15 according to Fred’s exposure guide. A second scene is a close-up under candlelight, EV 6. And a third scene is deep shade, EV 11.

Are the frames from each scene scaled proportionately to the EV of the scene before being archived in the ACES color space?

Does the person who shoots the scenes do something to get an accurate measure of the brightness of the predominant light source? Expose a gray card under the predominant light source? Consult Fred’s chart?

In other words, what’s the actual procedure, or at least an example of an actual procedure? And what is done in situations of extreme mixed lighting?

I think it’s a lot easier to explain a workflow if people stop talking in generalities and actually give concrete examples, which is one reason why I’m asking this question about “what is actually done”.

The other reason for asking about what is done in an ACES workflow is that shooting one still raw file that will be processed into one final image presents a different scenario than shooting footage for a film, where all of the footage will be brought together in one finished film.

I’m not clear on how scaling a captured scene in a photograph to fill a particular dynamic range, based on the camera’s dynamic range, is necessarily helpful for processing the initial scene-referred rendition into a final image, whether the image is a frame in an ACES workflow or a single still image in a photographic workflow.

EV and dynamic range:

The article linked below defines dynamic range as the spread between the EV of the highlights and the EV of the shadows:

https://www.bhphotovideo.com/explora/photography/tips-and-solutions/dynamic-range-explained

So a scene with an EV of 12 doesn’t necessarily have a dynamic range of 12 stops: maybe the photographer is really into minimalism and took a macro shot of white paper under hazy sunlight.


You are conflating an EV0 exposure with creative issues. Any given scene will have an EV0 exposure. That is how you can get a pretty good idea of comparable alignment. Given a piece of footage at EV0 with a grey card in the same position in an HDRI, one can get a pretty good alignment between the two.

For a single still image, the grey card that one would be taking as a reference ground truth anchor would be at EV0 for the shot. Given the correct view transform, that yields a properly rendered image.

EV0 is mapped to 0.18 across the entire set of ACES view transforms. The view transforms are designed around this value, in terms of both dynamic range and latitude.

Assuming you have EV0, everything else is aligned relative to it.

See above. Hence why middle grey scene referred is important as an anchor.

You always apply scene referred data through a “rendering”. In order to do so, one requires a reference point anchor. Again, you aren’t scaling an image to “fill a dynamic range” but rather to align a middle grey anchor point with the rendering transform; as per your example value range, twelve stops of data remains twelve stops of data at any exposure.

Remember that pure multiplication on scene referred values is not changing the “scale” of the data, merely moving the values up and down the exposure range linearly. This has no impact on the ratios, and merely provides a means to set exposure in the scene referred domain. An HDRI with, for example, 28 stops of latitude could be scaled, holding perfect scene values, and rendered through the virtual camera transform as required. You would only “see” a smaller subset of that 28 stop range in the view transform, of course.
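A quick numerical illustration with made-up values:

```python
import numpy as np

# Made-up scene-referred luminances spanning a wide range, plus a 0.18 grey anchor.
scene = np.array([0.0005, 0.009, 0.18, 1.0, 45.0, 820.0])

# "Exposure" in the scene-referred domain is pure multiplication, e.g. +2 stops = x4.
exposed = scene * 2.0 ** 2

# Ratios relative to middle grey are identical before and after.
print(scene / scene[2])
print(exposed / exposed[2])

# The stop range (log2 of max/min) is also unchanged by the multiply.
print(np.log2(scene.max() / scene.min()))      # ~20.6 stops
print(np.log2(exposed.max() / exposed.min()))  # same
```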

Taking a grey card reading at your chosen EV0 illumination is probably the easiest way I can explain it.

No need to link to B&H sites for explanations of exposure, dynamic range, and latitude, but thanks. :wink:

Sorry to jump in; I’m far from an expert in this subject and I’ve been reading this post in an attempt to learn. I am, however, somewhat familiar with scene-referred workflows and OCIO (in CGI and compositing, not photography), and if I’m not wrong this type of question is quite far from the point of it.

When @gez gave you the gray card example he used it this way: if you have a theoretical 100% white card and an 18% gray one, shot side by side under the same light, what you would get after capturing those scene values and taking measurements of them in a scene-referred workflow is this: where the white card measures “1.0”, the grey one measures “0.18”, meaning that the relation between the values is preserved according to the scene data.

Asking something like your example seems somewhat irrelevant; in other words, you might as well be asking “What if the gray card is in my pocket?”

Please correct me if I’m wrong, as it is likely I might be.

That would be relevant if I were photographing the inside of my pocket.

I asked for specific practical details linking the dynamic range of the camera, as @anon11264400 has done, with how, in an ACES workflow, one determines how to scale values shot under different lighting conditions by cameras with different dynamic ranges.

I suggested that the “EV” of a scene might be a more relevant measure than some odd calculation based on the dynamic ranges of the different cameras. Then I asked what real camera people shooting real scenes really do.

All I got in return was generalities.

How does getting a reading off the gray card in scene A help you scale the intensities of the footage from scene A when you are preparing them for archiving in the ACES color space? What’s the scaling for? Is it to make intensities from scene A somehow align with intensities from scene B? If so, what’s the metric for the scaling?

When shooting still raw images, pegging a gray card (if the photographer even uses a gray card) at 0.18 is likely to either result in overexposed highlights, or else result in a lot of wasted space at the brighter end of the camera’s dynamic range if the scene is a low dynamic range scene.

I also asked what is the point of scaling any given still image, somehow in accordance with the camera’s actual dynamic range. But again, no answer.

Apologies if it came out the wrong way; I was just trying to point out how your example did not make sense in my mind with regard to what a scene-referred workflow is supposed to offer. Not just this one about dynamic range in particular, but similar ones before it. As I understand it, it’s not about having a gray card there at all; those were just examples. It’s about consistency among ratios and the accurate preservation of those.
But then again, since I’m not able to explain it in less general terms, and even less able to be better at it than Troy or others here, and since this is focused on photography, I’ll refrain from trying.

Thank you and everyone else for this thread, BTW; it’s being very enlightening!


Thank you. That’s the first sensible thing that’s been said about gray cards. A gray card in a scene without the corresponding spot metering thereof doesn’t tell you anything about the scene brightness, only about how one might want to set f-stop and shutter speed.

Even using the in-camera meter is an incredibly iffy proposition, given so many variables, from “is there a filter in front of the lens” to “what mode is the in-camera meter using, spot or something else, and if spot, how wide is the angle” to “what white balance is the camera set to, which most definitely affects the metered reading” to “how is the in-camera meter even scaled”.

So you are using the gray card and a spot meter to determine EV, yes? And the gray card is put in whatever seems to be the dominant light for the scene, yes?

In an ACES workflow, are the frames from the various scenes under various lighting conditions scaled according to the EV of each scene?

My apologies, I’m still not sure how the “anchor at 0.18” is being used when the footage is archived. Let’s say scene A and scene B have gray card spot meter readings that indicate that scene A’s light was twice as bright as scene B’s.

Are the pixels in each scene scaled to match the actual scene intensities across all the different scenes, or is each scene only scaled to match the reading off the gray card for that scene, so the scaling is on a “per scene” basis?

Sorry, we cross-posted. But I think you are saying that it’s always “per scene”, rather than “scale to make the pixel intensities actually match the scene intensities”.

Long ago and far away, yes. Then I stopped taking photographs until I got my first digital camera. With digital, my main goal is to not clip pixels in the highlights, and to EV-bracket for clean shadows when possible. The concept of middle gray for me has become mostly an editing decision (“where do I put middle gray in this scene”), and never an actual “shooting” decision.


Above is the one question I’m trying to get a definitive answer for.

From everything you’ve said in the last few posts, scaling is “per scene”; there is no scaling to the absolute intensities of the pixels in the scene. Yes? No?

Somewhere in this overly long thread, I think you said something about the dynamic range of the camera playing a role in the scaling, which made absolutely no sense at all. But maybe you didn’t say exactly that when you talked about the raw file already being compressed in dynamic range to fit within the 0.0-1.0 range. Because the amount of compression required (if any) depends on the dynamic range of the scene, not the dynamic range of the camera.

I think we are talking at cross-purposes. Assuming a scene with a dynamic range higher than the camera’s dynamic range, a common goal is to avoid clipped highlights, so one would underexpose as required. The camera’s dynamic range is indirectly relevant in that it determines “what is the minimal underexposure required to avoid clipping for this specific scene”.

Maybe in reality the scene was underexposed two stops more than it needed to be to avoid those clipped pixels. Maybe it was just barely underexposed to avoid clipping pixels. And maybe during raw processing the user moved the exposure compensation farther to the left (negative exposure compensation) than they actually needed to, to avoid having any channel values >1.0 after white balancing, so the channel values aren’t actually normalized.

When moving the exposure compensation slider to set middle gray to where it belongs with respect to the “perfect middle gray exposure”, the only thing that matters is getting the gray card in the image to read 0.18. Or scaling proportionate to whatever base exposure one might have taken to set the gray card to 0.18, compared to the exposure actually used to avoid clipping.
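To make that concrete with made-up numbers (my own illustration, not anyone’s prescribed procedure): if the gray card in the linear data reads g, the compensation is simply the factor that moves g to 0.18, which is the same thing as undoing the stops of underexposure.

```python
import math

# Hypothetical linear gray-card reading after an exposure chosen to avoid clipping.
gray_reading = 0.045

# Scale factor that puts the card back at middle gray.
scale = 0.18 / gray_reading
stops = math.log2(scale)

print(scale)  # 4.0 -> multiply the linear data by 4
print(stops)  # 2.0 -> i.e. +2 stops of exposure compensation
```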

So in the most direct, practical sense possible - setting the slider to scale the image to put middle gray at 0.18 - the dynamic range of the camera is not something that the user even needs to know - Yes? No?

Let’s not be silly. You said the channel values need to be scaled. So there must be an operation in the image editor to scale the values. Maybe it’s called “exposure compensation” or “Levels” or “Exposure”, whatever. And most such operations in most user interfaces use sliders (though hopefully also provide a place to type in values).

Of course whatever creative or practical reasons the user might have had for not setting the “perfect 18% gray” exposure are irrelevant to the question. I find it helps to ask “specific situation” questions to avoid getting completely generalized answers.

Here is my last question on this topic of setting 18% gray in a scene-referred workflow:

In this scene-referred workflow you are describing, is it essential/required/important to scale the image to a “perfect middle gray shot” of 0.18 gray, even when processing a single still image? Yes? No?

Thanks for the answer!

Well, I have already been trying it in Krita and also in Blender, using output from PhotoFlow. For my first test image, the “filmic” viewer failed miserably even without adding exposure compensation, precisely because the viewer distorted the original tonality quite a lot.

When I did add exposure compensation before loading the image into Krita or Blender, the situation went from “That isn’t really working” to “The image was pretty in the raw processor, but if that’s what I had seen upon originally examining the image, I don’t think I’d have bothered processing it”.

I’d like to post some screenshots to show the problem, though only if doing so would not result in a barrage of criticisms, either of the workflow I actually used to make the final image or of the final image that I actually did produce (I like my final image; we are talking process, not critiquing results).

I have other images that I suspect would pose very similar problems, which I’m planning to open in Krita or Blender just to see what happens.

I don’t think all images would be difficult (for me, not speaking of other people) to edit when viewed through a filmic curve. But for some images, maybe even many or most images, yes, that filmic viewer seems like a show-stopper in terms of “would I consider using an OCIO photographic workflow”.

If you and/or @gez are interested, I can post the screenshots from the first trial image. But if the response is going to be “but you are working in the dark blah blah” or “ground truth demands blah blah”, then I’d rather not.

Hmm, no.

I requested that you not criticize my image or the ICC-based workflow that I used to produce the image, but rather confine the discussion to a specific problem I encountered in your specific proposed OCIO workflow, that being the blanket insistence that photographers edit all their images while viewing the images through a filmic tone mapping.

But you heaped on the verbal disparagements (“garbage”) before I even posted the screenshots. I don’t see any point in continuing a discussion of OCIO with someone whose true goal seems to be bashing ICC profile color management.

FWIW, I really like PhotoFlow’s filmic tone mapping, which I believe uses the same algorithm as the Blender/Krita filmic tone mapping. IMHO for many images applying filmic tone mapping is the quickest way to a nice final image, with beautiful roll-off to the highlights and shadows. But not always and not as an “always imposed view”.