Hi @gez,
thanks again for your pointer to cinematiccolor (I have started reading the pdf, but I'm not done yet), and for setting up your test. Now I hope you can also help me understand what I'm seeing. I am using the unbounded-processing branch of RawTherapee, and here is what I get. (NOTE: RT doesn't support exr, so I converted your image to a 32-bit float tiff using GIMP.)
So, what's going on? As you might have noticed from the various saved versions shown in the file dialog in the video, I played with a lot of stuff to try and understand how what I get relates to your comments, but being fairly ignorant about all this, I'd like to avoid guessing.
@agriggio, that's very interesting. And I didn't know there was an unbounded branch; I will try it.
@gez, if I want to explore this 'new method', what software would you suggest starting with, please? I'm not well up on colour science, and I'm used to using RawTherapee and, to a lesser extent, GIMP. I would want to re-process some of my photos (from RAW of course, Canon .CR2) and see what happens, comparing to my previous efforts. I'm not into shading / rendering / animation. Any tips to get started would be great.
If you want to have a 'maximum' of colors from your exr image in a jpeg file, you may be able to use G'MIC.
After that we can use our artistic gifts to get something good from this beautiful image… [It's a joke]
And when do you apply your manual tonemapping? Before or after editing?
Are you editing a tonemapped image, or editing without tonemapping and going in blind because you don't see what's going on with your colour?
Also, what you said about the canned-algorithm tonemapping is not correct. You can use whatever tonemapping you want in your OCIO config and change it on the fly, all in real time without touching your data. Fully customizable.
I think your words 'stfu', 'bullying' and 'buddies' are inappropriate and more than a little misleading:
No GIMP dev has ever told me to 'stfu', though sometimes they had to work pretty hard to get me to understand that they finally did understand what I had been trying to explain.
No GIMP dev has ever 'bullied' me into writing patches. I'm very proud of the few patches that I've written that have been accepted into GIMP's code base. I'm also amazed at the courtesy and helpfulness of the many GIMP devs who helped me write those patches, starting from the very first patch I wrote (with a lot of help from the other GIMP devs) to switch GIMP from using LCMS version 1 to using LCMS version 2.
I've always thought you seem like a very nice person, insofar as one can make such a judgement based solely on interactions over the internet. I greatly appreciate your input several years ago in helping to test some issues in GIMP color management/color mixing (issues which have long since been addressed in the GIMP code base), and in helping to proofread and improve many of the articles on my website. I learned a great deal from our discussions, and I hope you also did. But I don't see how our interactions could be described as 'being buddies'.
Regarding patches, maybe a year ago I signed up to get email notifications for GIMP bugs. I am astonished at the sheer volume of bug reports from people demanding that this, that, and the other change in GIMP code be 'immediately' done, which has given me an appreciation for the constant request for patches. I'm also astonished at the rudeness of many such requests for changes in the code, and equally astonished at how politely these rudely phrased requests are dealt with by the GIMP devs.
There seems to be a general lack of understanding that 'I want this in GIMP' doesn't mean that whatever the person wants just magically happens. There is no magic. If people want changes in the code, 'someone' has to write the code.
Also, much of the 'code writing' that the devs for any project have to do involves keeping up with changes in the libraries that the program uses. Every single time a library, from GTK or Qt, glib, and other 'big' libraries, to LCMS and all the way down to the smallest file-format library, undergoes major changes that break backwards compatibility (this happens with version changes), the devs from all the projects have to modify their own code to keep up with the changes. This is time-consuming work that surely isn't especially gratifying from a programmer's point of view. And then there is the constant need to keep track of bug fixes and improvements between the major version changes.
Right now the big change in progress for GIMP, aside from getting 2.10 out the door, is the change from GTK2 to GTK3. And the bug reports from people on Windows and Mac keep pouring in, often caused by changes in these proprietary operating systems, with scarcely a Windows or Mac developer stepping up to the plate to help.
So yes, patches are welcome. And if there is a feature or bug fix that someone really wants, that's a good feature or bug fix for that person to work on.
That is simply a tonemapping.
Just out of curiosity, how well does Blender work for real-world HDR video or images?
This is 1000-nit Perceptual Quantizer (SMPTE ST 2084) with Rec.2020 primaries.
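For reference, the ST 2084 (PQ) inverse EOTF that produces that kind of signal can be sketched in a few lines of Python. The constants are the published SMPTE values; numpy is used only for convenience. Note that the PQ transfer function and the Rec.2020 primaries are independent choices:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, as published in the spec
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Inverse EOTF: absolute luminance (cd/m^2, up to 10000) -> PQ signal in [0, 1]."""
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

# 10000 nits hits code value 1.0; a 1000-nit peak lands around 0.75,
# which is why 1000-nit content only uses part of the PQ signal range.
full = pq_encode(10000.0)
peak = pq_encode(1000.0)
```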
To all, an apology. I never thought this topic would go ballistic like it has.
To those who are attempting to learn, I commend you for sticking to the dialogue in spite of the dismissiveness.
To those who are attempting to teach, I feel compelled to point out that you are responsible for your message and its intended effect; if you really want to effect change, you need to consider the student's frame of reference. It most certainly is not yours; that's not a put-down, it's just an acknowledgement that we all think differently.
So, I really am trying to wrap my head around scene-referencing, and the first order of business I'm stuck at is how to handle my raw camera input. The ACES and OpenEXR literature address the topic specifically, and if I read all this correctly they do acknowledge the need for some sort of input camera transform to get to a color-consistent basis for commencing scene-referred editing. Well and good, but what I'm not getting right now is: what's the destination of this first color transform, color-wise? In what I do now, it's a high-gamut working profile like Rec.2020 with no gamma-oriented transform, so the data stays linear with respect to its original capture. To answer an earlier question, this transform in my software is unbounded, owing to LittleCMS's ability to handle floating-point transforms in this manner.
If I can get past this little hurdle, then I'll tackle the LUT basis for the rest of it, as well as adapting my curve tool and other such to linear space.
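For what it's worth, that first camera-to-linear-working-space step amounts to a 3x3 matrix applied without clipping. Here is a minimal numpy sketch; the matrix values are made up for illustration (a real one comes from the camera's characterization data), the point being only that nothing forces the result into [0, 1]:

```python
import numpy as np

# Hypothetical camera-RGB -> linear Rec.2020 matrix (invented for this
# sketch; a real matrix comes from the camera profile/characterization).
# Rows sum to 1 so neutral grey maps to neutral grey.
CAM_TO_REC2020 = np.array([
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.15, -0.05],
    [ 0.02, -0.12,  1.10],
])

def to_working_space(cam_rgb):
    """Unbounded float transform: just the matrix, no clipping, so values
    above 1.0 survive and out-of-gamut colors may come out negative."""
    return np.asarray(cam_rgb) @ CAM_TO_REC2020.T

pixels = np.array([
    [0.18, 0.18, 0.18],  # mid grey: unchanged
    [2.50, 2.20, 2.00],  # specular highlight above 1.0: preserved
    [0.00, 0.90, 0.05],  # very saturated green: R channel goes negative
])
out = to_working_space(pixels)
```

The negative and above-1.0 results are exactly the values a bounded pipeline would clip away.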
I have the feeling that these questions are very much GIMP-centric, and are due to the fact that GIMP does not yet allow for non-destructive editing.
However, GIMP is not the only layer-based editor in the FOSS world. Both Krita and my own PhotoFlow have non-destructive tools which can be placed anywhere in the processing pipeline.
In particular, in PhotoFlow you can place a tone-mapping tool on top of the layers stack, and see the tone-mapped output in real time while making adjustments to the non-tonemapped unbounded data. The user can choose among different tone-mapping operators, like simple gamma curves or 'filmic'-like curves from here and here.
As a developer, I would like to know: does this conceptually match what Blender is doing behind the scenes to provide the output you show in your screenshots? Is this the right approach to the problem of rendering out-of-bounds pixel values to a display, and thus improving the "visual feedback" during the process of editing the unbounded image data?
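To make the question concrete, here is a toy Python sketch of the pattern being described (not PhotoFlow's or Blender's actual code): edits act on the unbounded scene-linear data, while a view transform, here a simple Reinhard curve standing in for a 'filmic' operator, is applied only for display:

```python
import numpy as np

def edit_exposure(scene_rgb, stops):
    """An edit operates on the unbounded scene-linear data itself."""
    return scene_rgb * (2.0 ** stops)

def view_transform(scene_rgb):
    """Display view only: Reinhard tonemap plus a rough 2.2 gamma.
    (A stand-in for a filmic curve, not any app's actual operator.)"""
    tm = scene_rgb / (1.0 + scene_rgb)   # maps [0, inf) into [0, 1)
    return tm ** (1.0 / 2.2)

scene = np.array([0.18, 1.0, 8.0])    # values above 1.0 are kept
brighter = edit_exposure(scene, 1.0)  # the edit touches scene data only
preview = view_transform(brighter)    # what the user actually sees
# brighter still contains unbounded values (e.g. 16.0);
# preview is confined to [0, 1) for the display.
```

Swapping `view_transform` for another operator changes only the preview; the scene data underneath is never clipped or altered.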
It was some time ago that I looked into it, but if I remember correctly the destination should be defined by the OCIO config. Do note that, as far as I understand, OCIO works a bit differently than ICC: in OCIO there is no fixed reference color space that all other color spaces are tied to[1]; instead, everything is defined with respect to the working color space[2]. So to share exr/hdr data you should also share the OCIO config (just giving the ICC profile is not enough).
Note that this is my personal understanding and I might be wrong.
[1] Which is what XYZ/LAB is used for in ICC
[2] This reference color space is often called Linear; all other color spaces defined in the config have either a to_reference and/or a from_reference transform.
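The to_reference/from_reference wiring described in [2] can be illustrated with a toy Python sketch. The colorspace names and transfer functions below are invented purely for illustration; the point is that any-to-any conversion always routes through the single reference, so no pairwise transforms are needed:

```python
import math

# Toy model of an OCIO-style config: each colorspace only defines how
# to get to and from one shared (scene-linear) reference.
COLORSPACES = {
    "display_like": {                    # gamma-encoded display-ish space
        "to_reference":   lambda v: v ** 2.2,
        "from_reference": lambda v: v ** (1 / 2.2),
    },
    "log_like": {                        # log-encoded camera-ish space
        "to_reference":   lambda v: 2.0 ** (10.0 * (v - 0.5)),
        "from_reference": lambda v: 0.5 + math.log2(v) / 10.0,
    },
}

def convert(value, src, dst):
    """src -> reference -> dst; the reference is the only shared hub."""
    ref = COLORSPACES[src]["to_reference"](value)
    return COLORSPACES[dst]["from_reference"](ref)

# log code 0.5 decodes to reference 1.0, which re-encodes to 1.0 in display
v = convert(0.5, "log_like", "display_like")
```

Adding a new colorspace to the "config" means writing just two functions, after which it interoperates with every other space automatically.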
Hi again.
First of all, it's not my intention to pick a fight with anyone, but I do believe that in these cases it is crucial to point as soon as possible to the made-up stuff that clouds the understanding of the real issue, and move on.
'Unbounded' is a concept that muddies the waters, because it immediately refers to a bound that will be exceeded, while the idea of a scene-referred model is that light emissions go from 0 to infinity. When we shoot real-world scenes or render synthetic scenes that mimic reality, we're capturing that. From 0 to infinity (devices will have their bounds, but the theoretical scene won't).
At any rate, although our cameras have a limited dynamic range, when you shoot a scene you're capturing a portion of that infinite intensity range that maintains a relationship with the intensity ratios of the real scene (point A is two stops above point B in the real scene and in your capture, etc.).
The goal of a scene-referred model is to keep those ratios intact.
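That ratio-preserving property is easy to verify numerically: on scene-linear data an exposure change is a plain multiplication, which leaves ratios untouched, whereas a display-referred gamma curve does not. A tiny Python check:

```python
# Scene-referred: exposure is a multiply, so intensity ratios survive.
a, b = 0.8, 0.2                # point A is two stops (4x) above point B
ratio = a / b                  # 4.0

exposed_a, exposed_b = a * 2.0, b * 2.0   # +1 stop on the scene data
exposed_ratio = exposed_a / exposed_b     # still 4.0: the relationship holds

# Display-referred: a gamma curve compresses the ratio.
g = 1 / 2.2
gamma_ratio = (a ** g) / (b ** g)         # 4**(1/2.2), roughly 1.88, not 4
```

Any operation built from multiplies preserves stop relationships; any nonlinear tone curve changes them, which is why such curves belong at the display end of the chain.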
Unbounded, used in the context of ICC transforms, refers to the concept introduced by Marti Maria of using an RGB colorspace as a PCS.
The problem with that unbounded concept is that it produces garbage RGB that can't be used in a proper emission model. The proof of that is the squirrelly colours, mentioned above in this thread, that result from simple operations.
So, the whole idea of stuffing this RGB-as-PCS into the processing pipeline is absolutely problematic for processing, and implying that operations that fail have to be revisited, while they are perfectly valid for valid RGB values, sounds unreasonable.
Now, if 'unbounded' only means not clipping intensity, then we all agree. That's the key part of a scene-referred model. But in that case, why are we calling this 'unbounded'? It has no bound by definition. Scene-referred already implies that; no reason for inventing a new term.
Then, if we're all talking about the same thing (a scene-referred model), isn't it a good idea to take a look at existing implementations that are open source and already used in libre apps?
A scene-referred workflow relies on a single reference, a view, and from-reference and to-reference transforms.
All the processing is done in the reference: source material that is not in the reference space is converted to the reference, and the output and view go through from-reference transforms to the needed space (it's all in the OCIO docs).
Your view allows you to actually SEE what you're doing on your display (colour pickers, UI elements, etc.).
You can have different configs with different references, different transforms and different views, created specifically for your needs.
I probably phrased it wrong. I meant that in the OCIO standard there is no reference defined, while this is the case in the ICC standard. The advantage on the ICC side is that you only need to share the ICC profile to get it to work everywhere, and the advantage of the OCIO way is that it is extremely flexible.
I think we are talking past each other here; let's see if I can put it differently.
In the ICC standard all color space transforms must be to and from XYZ, which makes XYZ the defined reference color space of the ICC standard. In contrast, the reference color space for OCIO is undefined in the standard and can be chosen at will; the config is then used to tell how to transform other color spaces to this reference color space. Does that make sense?
> In the ICC standard all color space transforms must be to and from XYZ, which makes XYZ the defined reference color space of the ICC standard. In contrast, the reference color space for OCIO is undefined in the standard and can be chosen at will; the config is then used to tell how to transform other color spaces to this reference color space. Does that make sense?
Absolutely. Which is why I said it was a protocol.
With OCIO you could operate in whatever primaries you wished, with the only convention being that the reference is scene-referred linear. Every transform would be defined in reference to that.
This also allows 1:1 ingestion without tripping through potential quantisation issues.
I agree that there is a misunderstanding on the terminology. To me, 'unbounded' simply means that there is no bound.
I just checked the first post of the thread, and there is no mention of ICC transforms. I think the context is more general: we have a pipeline of processing operations, and what is being discussed here is that it would be good if such pipelines did not assume any bound on the input and did not impose any bound on the output (except of course in the last step, when actually displaying the picture).
> Now, if 'unbounded' only means not clipping intensity, then we all agree. That's the key part of a scene-referred model. But in that case, why are we calling this 'unbounded'? It has no bound by definition. Scene-referred already implies that; no reason for inventing a new term.
If I understand what you write correctly, scene-referred has a lower bound at 0. Unbounded means that you can also have negative values. So, if the concepts are different, they deserve different names…
…or maybe I'm just a victim of this whole confusion
Yes, as the instigator of this thread, that is where I started. The bounds I was considering were more about dynamic range, related to the noise floor and saturation point of the sensor. Once the integer-based raw data is placed in a data format, say, floating point, there is now the possibility to push tones out of the original raw range, and my thought was that this data should be preserved for possible handling in other tools. White still has to be corralled at some point, just prior to humans looking at the image, no?
Scene-referencing seems to put a better perspective on that endeavor; I just need to reorganize some synapses to deal with it…