This one I find hard to wrap my head around… Either in a mask or in a module, especially color balance, there are controls via a fulcrum for white and gray. Are they not there to accommodate the light pre-filmic (or whatever tone mapper), i.e. to define those set points for the module in data that has yet to be tone mapped…
That’s exactly the reason why I like to saturate my picture with a saturation curve in color zones (in lightness mode) after filmic/sigmoid/tone curve. At this position in the pipeline it’s easier to make the distinction between highlights/midtones/shadows.
I don’t want to advertise for this mechanism, just saying that I do see the merits of doing the saturation after tone mapping.
After reflecting on what I wrote in this thread so far, I used the term scene-referred as an approach to editing rather than strictly what it is, in that the goal is to preserve scene-referred features such as chromaticities. Linear editing generally, but not always, allows us to develop and implement simpler algorithms to tackle this problem.
Strictly speaking, @flannelhead is correct in saying that that is what the camera is capturing for the most part sans photographic limitations, which we may attempt to correct in our post-processing. In fact, early papers on this topic use this term in this way! Sorry about that: I have been outside of image processing for a while now. Quite rusty.
I still disagree with @AD4K about look and encoding. They should typically be at the transform stage. Someone pointed out that there is a difference between processing and module order, which is correct. One can go back and forth among modules for finer adjustments, but the order determined by the workflow, unless overridden, has rhyme and reason. I have included “unless overridden” because the user may have good reason not to follow conventional wisdom, for certain purposes or creative licence.
However, I would add that there are legitimate pre-processing stages, which all post-processing apps worth their salt have. Perhaps this already covers what @AD4K was concerned about, for example the B&W tonal mixing mentioned.
Anyway, like most things in life, there are no hard and fast rules. Things have nuances and new discoveries are being made all the time. That is why the open source software and community are so very important and I would dare to say essential, riffing off of another recent thread.
I think there’s some misunderstanding here on your part.
We are not talking about the order you ‘tweak’ things, a.k.a the history stack.
We are strictly talking about the pipeline. The order that each module is processed. What goes after what.
The processing order MUST have ‘rhyme and reason’, otherwise you are just throwing things at the wall and seeing what sticks.
Again, it seems that you are assuming that just because I am certain that some processes must happen before other processes, that I do not know what I am doing and I’m just shuffling the modules around to ‘get a look’. That is incorrect.
Picture building is not an afterthought.
Here’s an interesting challenge:
- Try to make a monochromatic image that is made from “yellow” tone.
- Try to make a monochromatic image that is tinted yellow.
You will find that if you do this correctly, one of these has to be done before the image is ‘formed’, and the other one after.
You will notice that one image attenuates to white, while the other effectively attenuates to yellow.
If done properly this is easily achieved by (separately):
- Using ‘colorize’ module before the image formation (sigmoid or filmic) to achieve monochromatic tone (on Linear data/Scene-referred, etc.)
- Using ‘colorize’ module after the image formation (sigmoid or filmic) to achieve a monochromatic tint (on formed picture/display-referred, etc.)
Both of these are creative choices that need to happen in their precise spots in the picture making pipeline to achieve the desired result; there’s no guessing or randomness associated with this.
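If anyone wants to poke at this numerically, here is a minimal numpy sketch (purely illustrative, not darktable’s actual colorize code, and with a made-up yellow multiplier and a simple stand-in tone curve) of why the two orderings behave so differently:

```python
import numpy as np

def tone_map(x):
    # A simple Reinhard-style curve standing in for filmic/sigmoid: it maps
    # unbounded linear values into the bounded 0..1 display range.
    return x / (1.0 + x)

tint = np.array([1.0, 0.9, 0.3])               # a "yellow" multiplier, purely illustrative
luminance = np.array([0.1, 1.0, 10.0, 100.0])  # a ramp from shadows to very bright highlights

# Variant A: colorize the linear (scene-referred) data, then form the picture.
before = tone_map(luminance[:, None] * tint[None, :])

# Variant B: form the picture first, then tint the display-referred result.
after = tone_map(luminance)[:, None] * tint[None, :]

print(before[-1])   # brightest patch -> roughly [0.99, 0.99, 0.97]: attenuates to white
print(after[-1])    # brightest patch -> roughly [0.99, 0.89, 0.30]: attenuates to yellow
```

The brightest patch rolls off to white in the first case and to yellow in the second, which is exactly the distinction described above.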
Hope this helps to clarify my ramblings.
raw photo credit: difrkaguilar (Franklin Aguilar Matos)
I have been understanding what you have been saying the entire time. Please understand that I am not criticizing you in any way. That is not my intention. My intention is to discuss.
Your examples are for specific applications. By that, I mean that this is not always the case; there are exceptions. In your example, it is definitely advisable to make the change prior to the tone mapping because it serves a monochromatic photographic and editing purpose. I am not talking about guesswork. I am saying that it depends on what is required, and sometimes it is a matter of taste rather than a requirement. I think it is my way of saying things that is not clear. If so, I apologize. Also, I am still learning too.
What I am saying is that it is important to understand that some processing aspects are corrections, some are scene oriented (data, linear, non-linear), some are display oriented (media, output, surround), and still others are look oriented. Although these elements are often intertwined, for technical-minded folks it is often a good idea to address each separately so they can be tackled precisely. I think this is the best I can put it. BTW, thanks for showing examples. It makes it easier for our readership to follow.
The beauty is that it is universal.
As long as you’re working with linear data, even log data, there will always be a point where picture formation happens, no matter the software.
Circling back, this discussion started from me saying that “scene-referred” and “display-referred” terminology has many pitfalls and creates confusion. “Picture-oriented” thinking is just easier and simpler to understand, as well as easier to communicate and to build picture-making pipelines with (as the point of reference is shifted from “the scene” to “the picture”).
“Play RAW” is the best, most perfect example of this. Nobody but the photographer has a clue what “the scene” actually looked like. Yet hundreds of people make great pictures from the provided raw, with absolutely no relation to “the scene”.
Would you say that this (edit: i.e., the pre-formation that is universal) happens near or at the point of formation? Or far from it?
This is true, and in fact, of many things. Generally, things need clarification. For some, like myself, it takes many words to reach such a place of understanding.
For reference, the early paper I referred to was: https://library.imaging.org/admin/apis/public/api/ist/website/downloadArticle/jist/45/5/art00002. Although dated, it describes things more clearly than my attempts. Of course, we have come a long way since ROMM/RIMM RGB, that is for certain, but the concepts have not changed, e.g., choosing Rec. 2020 as the working profile and why our post-processors have adopted it. I mentioned ACES and OCIO too.
Excuse me for the ramblings, and I forgot to say: welcome to the forum, @AD4K.
I’m still confused. When you say, “in lightness mode”, is that select by lightness or the lightness tab?
Second, the dt 4.4 docs show a saturation tab on Color Zones, but my 4.4.2 does not have it. Is that the same as the chroma tab? Then, back to the first question, do I use the chroma tab or select by chroma (depends on the answer to my first question)?
Sorry, I have never used Color Zones.
I expect that I am wading in over my head here, and I know I am contradicting people with much more detailed knowledge than me, but here goes…
IMHO, calling the output of the tone mapper the birth of “the picture” just introduces an alternate term that is very imprecise in general English, and will therefore cause confusion for some people. What actually happens (hopefully!) is that the image goes from being a crappy looking picture to a decent looking picture.
darktable users have had “scene referred” and “display referred” drilled into their heads for a few years now, and most users understand that the tone mapper is what gets them from “scene referred” to “display referred”. I don’t love those terms either, but I think they have come to be widely understood in dt-land.
Personally, I wish that the terms used back when scene-referred tools were introduced were based on the explicit differences in the characteristics between pre tone mapper and post tone mapper data. Ultimately, I think that would lead to easier understanding for new users, but I think that ship has sailed.
What terms do you think of as alternatives?
Like this:
The goal is to saturate colors in a pleasing way. Often one can achieve this by saturating the shadows the most, the midtones a bit, and desaturating the highlights. @s7habo often does it in color balance rgb like this.
Not saying that color zones is the better module, but here you can see the actual curve in reference to the tones of the image. And with a bit of experience one can adjust that very specifically to the image’s needs.
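For anyone who prefers numbers to curves, here is a rough numpy sketch of the idea (not the color zones module itself; the lightness proxy and the three gains are just assumptions for illustration): chroma gets scaled by a gain that depends on lightness, above 1 in the shadows and below 1 in the highlights.

```python
import numpy as np

def saturation_vs_lightness(rgb, shadow_gain=1.4, midtone_gain=1.1, highlight_gain=0.8):
    # Crude lightness proxy (channel mean); darktable works in proper color spaces,
    # this is only meant to illustrate the shape of the adjustment.
    L = rgb.mean(axis=-1, keepdims=True)
    # Piecewise-linear "curve": interpolate the chroma gain between three anchors.
    gain = np.interp(L, [0.0, 0.5, 1.0], [shadow_gain, midtone_gain, highlight_gain])
    # Chroma here is simply the deviation from the achromatic axis; scale it by the gain.
    return np.clip(L + (rgb - L) * gain, 0.0, 1.0)

# A muted reddish midtone gets a bit more saturated, a pale highlight a bit less.
pixels = np.array([[0.30, 0.20, 0.18],
                   [0.95, 0.90, 0.85]])
print(saturation_vs_lightness(pixels))
```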
I think Tim is confused because you can not saturate colors in the lightness tab; you’re using the chroma tab.
Another reference might be RT, where something similar is available as the C vs Hue or C vs L tone curve options. I think this is essentially the sort of function that CZ offers when used this way…
Thank you.
Well, this is all hypothetical at this point because scene-referred and display-referred are already well established as terminology in dt (darktable 4.6 user manual - darktable's color pipeline), so what follows is probably pointless and oversimplified, but the key differences are:
- the dynamic range that can be represented due to the range limits of numeric representation of data;
- whether data is linear.
The scene has a DR and is analog. Scene DR/analog
The sensor has a maximum DR it can record, which may be less than the scene DR (which is why I think the “scene” part of “scene-referred” isn’t a great word to use), and is linear, with its numeric range constrained by the number of bits representing each site on the sensor. Sensor DR/linear
The “scene-referred” part of the pixel pipe has unbounded DR, thanks to floating point (the DR on entry to this part of the pipe would not exceed that of the sensor but could be expanded by operations performed), and is linear. Unbounded DR/linear
The “display-referred” part of the pixel pipe has a finite DR as determined by the limits of the device, which the data is mapped into by the tone mapper, and is non-linear. Display DR/non-linear
And other devices such as printers have their own finite DR values. Printer DR/non-linear
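If it helps to see those stages as numbers, here is a toy numpy sketch (the curve and constants are assumptions, nothing to do with darktable’s actual filmic/sigmoid math): linear floating-point values well above display white get mapped into the bounded range and then non-linearly encoded.

```python
import numpy as np

# About 16 stops of linear scene-referred data; values run well past display white (1.0).
stops = np.arange(-12, 5)            # exposure values relative to middle grey
scene_linear = 0.18 * 2.0 ** stops   # middle grey at 0.18, up to ~2.9

def tone_map(x, white=0.18 * 2.0 ** 4):
    # Extended Reinhard curve: monotonic, maps 0..inf into 0..1 with `white` hitting 1.0.
    return x * (1.0 + x / white ** 2) / (1.0 + x)

def encode_display(x):
    # Simplified display encoding: a plain 2.2 gamma instead of the exact sRGB curve.
    return np.clip(x, 0.0, 1.0) ** (1.0 / 2.2)

display = encode_display(tone_map(scene_linear))
print(scene_linear.max())   # ~2.88 -> linear data happily exceeds display white
print(display.max())        # 1.0   -> bounded, non-linear, ready for a screen
```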
Again, this is all hypothetical, since we already have well-known terms in general use, so if you have a bird cage that needs a new liner…
Thanks, @elGordo!
(Not sure I understood that about the bird cage, though … non-native English user here.)
But I think you anyhow address some important issues.
As a beginner with image processing and with darktable, I believe that what did most for my basic understanding of it all was the drawing Aurelien made in one of his videos, where he illustrated how we need to take care of “diminishing DR” as we move from scene → sensor → screen (→ paper).
And this is the same as is reflected in your descriptive “system”.
For the purpose of thinking about how we can best convey an understanding of dt and image processing, I’ve therefore wondered whether “scene”/“display” ought to be left in the background as much as possible, focusing instead on the aspects you also refer to. Perhaps we ought to include a slightly more abstract level and just refer in general to “sensor formatting” and “rendering formatting” (or something similar).
(Underlying this is agreement with the point made by several others here in this forum and elsewhere: the 3D “scene” of electromagnetic radiation emitted and/or reflected from atomic structures is something completely different from the image data output from the sensor, which serves as the basis for some complex, interpretative 2D image-forming cognitive processes.)
For this to make sense, you have to imagine my post being printed on paper. People often use paper to cover the bottom of their bird cages, where it gets covered in poop/guano. You wouldn’t do this with papers that were valuable. I don’t know what your native language is, so I can’t provide a translation.
(In my language I think we would somehow rather refer to paper hanging on the wall in tiny rooms …)