So this would be equivalent to my second definition, yes? no?:
That was a yes or no question. Do adjustments that make the image intensities "Proportionate to the intensities in the original scene, which would allow any editing operation that preserves proportionality, such as setting/changing a white balance, adding positive/negative exposure compensation" qualify as "scene-referred" adjustments as you are using the term?
I think we are talking about the same thing, but given the issues with terminology, I'd like to verify.
I'm assuming of course the adjustments are made on linear RGB, otherwise the concept of "positive/negative exposure adjustments" makes no sense at all, and neither does "setting/changing a white balance".
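For what it's worth, here is a minimal sketch (plain NumPy, with made-up pixel values) of what I mean by those two operations on linear RGB: exposure compensation is a uniform multiply, and a white balance change is a per-channel multiply:

```python
import numpy as np

# Hypothetical linear, scene-referred RGB pixels (rows of R, G, B).
rgb = np.array([[0.18, 0.18, 0.18],
                [0.90, 0.75, 0.60]])

# +1 stop of positive exposure compensation is a uniform multiply by 2^1.
ev = 1.0
rgb_exposed = rgb * (2.0 ** ev)

# A white balance change is a per-channel multiply (gains are made up here).
wb_gains = np.array([1.25, 1.00, 0.80])
rgb_balanced = rgb * wb_gains

print(rgb_exposed)
print(rgb_balanced)
```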
Shifting from "engineer" to "bear-of-little-brain", to my thinking the camera does the measuring of the light energy, so its notion of intensity forms the reference. Any subsequent transform that changes the differences in measured intensity amongst the pixels moves away from the scene reference.
Gray = 0.18 provides an anchoring reference, I guess because white has so many possible places to be and black is buried in the sensor noise regime.
Floating point just provides a bigger beaker within which to mix the transform chemicals…
And so, that's my simple understanding of "scene-referenced" to date. The benefit of working the data with respect to tone and color in this reference makes intuitive sense, although I haven't really studied the implications. Deferring linear-to-perceptual transforms until the end is the challenge for me; I like to look at my data along the way…
That's exactly the benefit of separating the data from the view: you can look at the data both before and after the view transform. You can take measurements of the scene-referred values and also see how the output transform modifies those values.
Try this with Blender: open the program, change the renderer to Cycles (the dropdown selector in the center of the top row that has Blender Render as the default), and render the default scene with F12. Click on the rendered image and check the values sampled from the scene and post-transform.
You can then change the view transform in the color management panel (right panel, under the Scene button) without re-rendering. Check the values again.
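If it helps, here is roughly how those steps look when driven from Blender's Python console instead of the UI (a sketch; the available view transform names depend on the OCIO config Blender is using):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'    # switch the renderer to Cycles
bpy.ops.render.render()           # render the default scene (same as F12)

# Change the view transform afterwards, without re-rendering:
scene.view_settings.view_transform = 'Filmic'   # name assumed; depends on the config
scene.view_settings.exposure = 0.0
scene.view_settings.gamma = 1.0
```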
I'll post some screenshots as soon as I have a computer available.
This page: Scene Linear Painting - Krita Manual 5.2.0 documentation
says this:
When you are done, you will want to apply the view transform you have been using to the image (at the least, if you want to post the end result on the internet)… This is called LUT baking and not possible yet in Krita. Therefore you will have to save out your image in EXR and open it in either Blender or Natron. Then, in Blender it is enough to just use the same ocio config, select the right values and save the result as a png.
Here's a screenshot of Krita set up to use OCIO and Troy's filmic config:
Hopefully I got all the right drop-down boxes set up properly. I'm using a black and white image because I'm using sRGB as my monitor profile, which is wrong enough to make colors look unpleasant. But the OCIO config file seems to assume the monitor profile is sRGB, yes? No?
I didn't do any actual editing. I just opened the image, set up OCIO, and exported the image as an EXR file.
I'm assuming the filmic mapping is applied to the exported image, yes? No?
I opened the exr file with Blender:
But I don't know how to tell Blender to "use the same ocio config, select the right values, and save the result as a png". Any hints would be appreciated.
Thanks!
I will open a new thread showing examples of this workflow with Blender so we don't contaminate this thread with information that is not directly relevant.
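In the meantime, something like this might do the EXR-to-PNG step from Blender's Python console (a rough, untested sketch; the paths and the view transform name are assumptions, and I believe save_render() is what applies the scene's color management when the file is written):

```python
import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = 'Filmic'     # match the config used in Krita
scene.render.image_settings.file_format = 'PNG'

img = bpy.data.images.load('/path/to/export.exr')
# save_render() writes the image through the scene's color management,
# which is the "LUT baking" the Krita manual is talking about.
img.save_render('/path/to/result.png', scene=scene)
```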
Edit:
Done, with lots of screenies and a friendly tone.
I'm going to try this one more time. A gray card reading from a scene doesn't tell you anything about the dynamic range of the scene, partly because it depends on where in the scene you put the card - e.g. on the beach, perhaps under the open shade of an umbrella or maybe in direct sunlight.
There is an easy way to estimate scene EV, and that's by using Fred's ultimate exposure guide or equivalent:
http://www.fredparker.com/ultexp1.htm
Here is a question specifically about an ACES workflow:
When shooting a movie, presumably often more than one camera is used, and presumably the different scenes are shot under multiple light sources ranging from relatively low light to very bright. Presumably the camera person always tries to set the exposure to capture maximum information.
When archiving all the footage - the frames from the various scenes as shot by the various cameras - let's say one scene was shot under bright sun, 15 EV according to Fred's exposure guide. And a second scene is a close-up under candlelight, 6 EV. And a third scene is deep shade, 11 EV.
Are the frames from each scene scaled proportionately to the ev of the scene before being archived in the ACES color space?
Does the person who shoots the scenes do something to get an accurate measure of the brightness of the predominant light source? Expose a gray card under the predominant light source? Consult Fred's chart?
In other words, what's the actual procedure, or at least an example of an actual procedure? And what is done in situations of extreme mixed lighting?
I think it's a lot easier to explain a workflow if people stop talking in generalities and actually give concrete examples, which is one reason why I'm asking this question about "what is actually done".
The other reason for asking about what is done in an ACES workflow is that shooting one still raw file that will be processed into one final image presents a different scenario than shooting footage for a film, where all the footage will be brought together in one finished film.
I'm not clear how scaling a captured scene in a photograph to fill a particular dynamic range based on the camera's dynamic range is necessarily helpful for processing the initial scene-referred rendition into a final image, whether the image is a frame in an ACES workflow or a single still image in a photographic workflow.
EV and dynamic range:
The article linked below defines dynamic range as the ratio of the EV of the highlights to the EV of the shadows:
https://www.bhphotovideo.com/explora/photography/tips-and-solutions/dynamic-range-explained
So a scene with an EV of 12 doesn't necessarily have a dynamic range of 12 stops: maybe the photographer is really into minimalism and took a macro shot of white paper under hazy sunlight.
You are conflating an EV0 exposure with creative issues. Any given scene will have an EV0 exposure. That is how you can get a pretty good idea of comparable alignment. Given a piece of footage at EV0 with a grey card in the same position in an HDRI, one can get a pretty good alignment between the two.
For a single still image, the grey card that one would be taking as a reference ground truth anchor would be at EV0 for the shot. Given the correct view transform, that yields a properly rendered image.
EV0 is mapped to 0.18 for the entire view transform set of ACES. The view set covers both dynamic range and latitude around this value.
Assuming you have EV0, everything else is aligned relative to it.
See above. Hence why middle grey, scene-referred, is important as an anchor.
You always apply scene-referred data through a "rendering". In order to do so, one requires a reference point anchor. Again, you aren't scaling an image to "fill a dynamic range" but rather to align a middle grey anchor point with the rendering transform; as per your example value range, twelve stops of data remains twelve stops of data at any exposure.
Remember that pure multiplication on scene-referred values is not changing the "scale" of the data, merely moving the ratios up and down the exposure range linearly. This has no impact on the ratios, and merely provides a means to set exposure in the scene-referred domain. An HDRI with, for example, 28 stops of latitude could be scaled, holding perfect scene values, and rendered through the virtual camera transform as required. You would only "see" a smaller subset of that 28-stop range in the view transform, of course.
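A quick numeric illustration of that point (NumPy, values made up): multiplying scene-referred values shifts exposure but leaves the ratios, and therefore the number of stops of data, untouched:

```python
import numpy as np

values = np.array([0.0225, 0.18, 1.44, 11.52])   # spans 9 stops
scaled = values * 2.0 ** 3                        # +3 stops of "exposure"

print(values[1:] / values[:-1])                   # ratios before scaling
print(scaled[1:] / scaled[:-1])                   # identical ratios after scaling
print(np.log2(values.max() / values.min()),       # still 9 stops either way
      np.log2(scaled.max() / scaled.min()))
```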
Taking a grey card reading at your chosen EV0 illumination is probably the easiest way I can explain it.
No need to link to B&H sites for explanations of exposure, dynamic range, and latitude, but thanks.
Sorry to jump in. I'm far from an expert in this subject and I've been reading this post in an attempt to learn. I'm however somewhat familiar with scene-referred workflows and OCIO (in CGI and compositing, not photography), and if I'm not wrong this type of question is quite far away from the point of it.
When @gez gave you the gray card example he used it this way: if you have a theoretical 100% white card and an 18% gray one, shot side by side under the same light, what you would get after capturing those scene values and taking measurements of them in a scene-referred workflow is that where the white card measures "1.0", the grey one would measure "0.18", meaning that the relation between the values is preserved according to the scene data.
Asking something like your example seems somewhat irrelevant; in other words, you might as well be asking "What if the gray card is in my pocket?"
Please correct me if I'm wrong, as it is likely I might be.
That would be relevant if I were photographing the inside of my pocket.
I asked for specific practical details linking the dynamic range of the camera, as @anon11264400 has done, with how in an ACES workflow one determines how to scale values shot under different lighting conditions by cameras with different dynamic ranges.
I suggested that the "EV" of the scenes might be the more relevant measure than some odd calculation based on the dynamic ranges of the different cameras. Then I asked what real camera people shooting real scenes really do.
All I got in return was generalities.
How does getting a reading off the gray card in scene A help you scale the intensities of the footage from scene A when you are preparing them for archiving in the ACES color space? What's the scaling for? Is it to make intensities from scene A somehow align with intensities from scene B? If so, what's the metric for the scaling?
When shooting still raw images, pegging a gray card (if the photographer even uses a gray card) at 0.18 is likely to either result in overexposed highlights, or else result in a lot of wasted space at the brighter end of the camera's dynamic range if the scene is a low dynamic range scene.
I also asked what is the point of scaling any given still image, somehow in accordance with the cameraâs actual dynamic range. But again, no answer.
Apologies if it came out the wrong way. I was just trying to point out how your example did not make sense in my mind in regards to what a scene-referred workflow is supposed to offer. Not just this one about dynamic range in particular, but similar ones before it. As I understand it, it's not about having a gray card there at all; those were just examples. It's about consistency among ratios and the accurate preservation of those.
But then again, since I'm not able to explain it in less general terms and even less able to be better at it than Troy or others here, and this being focused around photography, I'll refrain from trying.
Thank you and everyone else for this thread, BTW; it's been very enlightening!
Grey cards can be used with reflective spot meters to meter any range of light values.
Thank you. That's the first sensible thing that's been said about gray cards. A gray card in a scene without the corresponding spot metering thereof doesn't tell you anything about the scene brightness, only about how one might want to set f-stop and shutter speed.
Even using the in-camera meter is an incredibly iffy proposition given so many variables, from "is there a filter in front of the lens" to "what mode is the in-camera meter using, spot or something else, and if spot, how wide is the angle" to "what white balance is the camera set to, which most definitely affects metered readings" to "how is the in-camera meter even scaled".
So you are using the gray card and a spot meter to determine ev, yes? And the gray card is put in whatever seems to be the dominant light for the scene, yes?
In an ACES workflow, are the frames from the various scenes under various lighting conditions scaled according to ev for each scene?
Even if it is based on some hardware averaging, there is still somewhere in the scene where "at exposure" would render a grey card at a particular code value.
That code value is the one we would try to anchor at 0.18 for the rendering in our software.
My apologies, I'm still not sure how the "anchor at 0.18" is being used when the footage is archived. Let's say scene A and scene B have gray-card spot meter readings that indicate that scene A light was twice as bright as scene B light.
Are the pixels in each scene scaled to match actual scene intensities for all the different scenes, or is each scene only scaled to match the reading off the gray card for that scene, so scaling is on a "per scene" basis?
With Alexa and other expensive cameras, the vendors provide a unique, typically log-based encode for the image. This allows you to load the image with the appropriate transform and have the "at exposure" value placed at 0.18 without any further adjusting.
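To illustrate the idea only (this is a toy curve, not any vendor's actual encode): a log-style decode can be built so that the grey-card code value lands exactly on 0.18 in the scene-referred domain:

```python
import numpy as np

def toy_log_decode(code, grey_code=0.40, stops_per_unit=6.0):
    """Made-up decode: grey_code maps to 0.18, and each 1/stops_per_unit
    of code value above or below it corresponds to one stop of scene light."""
    return 0.18 * 2.0 ** ((code - grey_code) * stops_per_unit)

codes = np.array([0.10, 0.40, 0.90])
print(toy_log_decode(codes))   # the 0.40 code value decodes to exactly 0.18
```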
Sorry, we cross-posted. But I think you are saying that it's always "per scene", rather than "scale to make the pixel intensities actually match the scene intensities".
Have you ever used a light meter? It helps me to contextualize this.
Long ago and far away, yes. Then I stopped taking photographs until I got my first digital camera. With digital, my main goal is to not clip pixels in the highlights, and to EV-bracket for clean shadows when possible. The concept of middle gray for me has become mostly an editing decision ("where do I put middle gray in this scene"), and never an actual "shooting" decision.
But I think you are saying that it's always "per scene", rather than "scale to make the pixel intensities actually match the scene intensities".
Above is the one question I'm trying to get a definitive answer for.
From everything you've said in the last few posts, scaling is "per scene"; there is no scaling to the absolute intensities of the pixels in the scene - Yes? No?
Somewhere in this overly long thread, I think at some point you said something about the dynamic range of the camera playing a role in the scaling, which made absolutely no sense at all. But maybe you didn't say exactly that when you talked about the raw file already being compressed in dynamic range to fit within the 0.0-1.0 range. Because the amount of compression required (if any) depends on the dynamic range of the scene, not the dynamic range of the camera.
If camera A encoded 12 stops of dynamic range and camera B 16, they both produce identical 0.0 to 1.0 linear files. That means we have no way of knowing what encoded value corresponds to the "at exposure" value, and each would require a different exposure scaling. Thinking of it another way, each could be considered a linearly encoded file at a different exposure, as they are both normalized, or uniformly scaled down, to the 0.0 to 1.0 range.
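Rough numbers for that point (the headroom figures are invented for illustration): both cameras deliver a normalized 0.0-1.0 file, but the "at exposure" grey sits at a different encoded value in each, so each needs its own exposure scaling to anchor grey at 0.18:

```python
# Assume clipping sits at 1.0 and middle grey sits N stops below it.
grey_cam_a = 1.0 / 2.0 ** 4.0    # e.g. 4 stops of highlight headroom
grey_cam_b = 1.0 / 2.0 ** 6.0    # e.g. 6 stops of highlight headroom

scale_a = 0.18 / grey_cam_a      # per-camera multiplier that anchors
scale_b = 0.18 / grey_cam_b      # middle grey at 0.18
print(scale_a, scale_b)          # different scalings for identical 0-1 files
```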
I think we are talking at cross-purposes. Assuming a scene with a dynamic range higher than the camera's dynamic range, a common goal is to avoid clipped highlights, so one would underexpose as required. The dynamic range is indirectly relevant in that it determines "what is the minimal underexposure required to avoid clipping for this specific scene".
Maybe in reality the scene was underexposed two stops more than it needed to be to avoid those clipped pixels. Maybe it was just barely underexposed to avoid clipping pixels. And maybe during raw processing the user moved the exposure compensation farther to the left (negative exposure compensation) than they actually needed to, to avoid having any channel values >1.0 after white balancing, so the channel values aren't actually normalized.
When moving the exposure compensation slider to set middle gray to where it belongs with respect to the "perfect middle gray exposure", the only thing that matters is getting the gray card in the image to read 0.18. Or scaling proportionate to whatever base exposure one might have taken to set the gray card to 0.18, compared to the exposure actually used to avoid clipping.
So in the most direct, practical sense possible - setting the slider to scale the image to put middle gray at 0.18 - the dynamic range of the camera is not something that the user even needs to know - Yes? No?
So in the most direct, practical sense possible - setting the slider to scale the image to put middle gray at 0.18 - the dynamic range of the camera is not something that the user even needs to know - Yes? No?
Yes. Although I have no idea what slider you are referencing in this instance.
Let's not be silly. You said the channel values need to be scaled, so there must be an operation in the image editor to scale the values. Maybe it's called "exposure compensation" or "Levels" or "Exposure", whatever. And most such operations in most user interfaces use sliders (though hopefully they also provide a place to type in values).
Of course whatever creative or practical reasons the user might have had for not setting the "perfect 18% gray" exposure are irrelevant to the question. I find it helps to ask "specific situation" questions to avoid getting completely generalized answers.
Here is my last question on this topic of setting 18% gray in a scene-referred workflow:
In this scene-referred workflow you are describing, is it essential/required/important to scale the image to a "perfect middle gray shot" of 0.18 gray, even when processing a single still image? Yes? No?
Yes. View transforms are fixed, and as such, the encoded range of scene-referred values is the ground truth, and must be aligned to a certain range for rendering.
Thanks! for the answer.
Try it! Export a linear 16-bit TIFF from dcraw via dcraw -T -4 and load it into the compositor as per @gez's post. Probably much easier to experiment and see the principles in action.
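A small sketch for poking at the dcraw output before taking it into Blender or Krita (assumes the third-party tifffile package; the file name and patch coordinates are placeholders):

```python
import numpy as np
import tifffile

img = tifffile.imread('shot.tiff').astype(np.float64) / 65535.0   # 16-bit -> 0..1
print(img.shape, img.min(), img.max())

# Average a patch where the grey card (or a known mid-tone) sits; the
# exposure multiplier that anchors it at 0.18 is then simply:
patch = img[1000:1050, 1500:1550].mean()
print('multiplier to put that patch at 0.18:', 0.18 / patch)
```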
Well, I have already been trying it in Krita and also in Blender, using output from PhotoFlow. For my first test image, the "filmic" viewer failed miserably even without adding exposure compensation, precisely because the viewer distorted the original tonality quite a lot.
When I did add exposure compensation before loading the image into Krita or Blender, the situation went from "That isn't really working" to "The image was pretty in the raw processor, but if that's what I had seen upon originally examining the image, I don't think I'd have bothered processing it".
I'd like to post some screenshots to show the problem, though only if doing so would not result in a barrage of criticisms, either of the workflow I actually used to make the final image or of the final image that I actually did produce (I like my final image; we are talking process, not critiquing results).
I have other images that I suspect would pose very similar problems, which I'm planning to open in Krita or Blender just to see what happens.
I don't think all images would be difficult (for me, not speaking of other people) to edit when viewed through a filmic curve. But for some images, maybe even many/most images, yes, that filmic viewer seems like a show-stopper in terms of "would I consider using an OCIO photographic workflow".
If you and/or @gez are interested, I can post the screenshots from the first trial image. But if the response is going to be "but you are working in the dark blah blah" or "ground truth demands blah blah", then I'd rather not.
completely dank workflow . . . you have been looking at all along is completely mangled garbage. . . . looking at complete and utter garbage . . . ramming data raw garbaged through a 2.2 display
. . .
Please do [post the screenshots]
Hmm, no.
I requested that you not criticize my image or the ICC-based workflow that I used to produce the image, but rather confine the discussion to a specific problem I encountered in your specific proposed OCIO workflow, that being the blanket insistence that photographers edit all their images while viewing the images through a filmic tone mapping.
But you heaped on the verbal disparagements ("garbage") before I even posted the screenshots. I don't see any point in continuing a discussion of OCIO with someone whose true goal seems to be bashing ICC profile color management.
FWIW, I really like PhotoFlow's filmic tone mapping, which I believe uses the same algorithm as the Blender/Krita filmic tone mapping. IMHO, for many images applying filmic tone mapping is the quickest way to a nice final image, with beautiful roll-off in the highlights and shadows. But not always, and not as an "always imposed view".