Ok, I’ve just visited the website and it’s not really up to date. I’ll gather some ideas and get back to you.
Regarding the user manual, what can I do to help, and what procedure should I follow? I also note that there is no French translation.
https://docs.darktable.org/usermanual/development/en/
For the docs, you can start with the “contributing to dtdocs” section of the darktable 4.4 user manual.
For the website, you can make a PR or open issues.
To see what needs work in the manual, search the darktable GitHub repository for closed PRs with the “documentation pending” label and the 4.4 tag. Read the dt PR, update the documentation, and submit a PR to the dtdocs repo, cross-referencing the dt PR.
At this point we’re probably not going to have a complete manual for 4.4, so don’t feel you need to limit yourself to just documenting the 4.4 changes.
I’m afraid I fell at the first hurdle. I have never used GitHub in anger, so the syntax for searching defeated me.
The furthest I got was entering:
is:pr is:closed is:documentation-pending is:4.4 tag
into a search box and finding some discussions of pull requests.
Can you give an example of how to find some pending documentation?
You need the label: qualifier, i.e. label:"documentation pending" (the quotes are needed because the label name contains a space).
GitHub also has a point-and-click interface for query building, so you can use the filter buttons to narrow the selection instead of typing the query by hand.
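For example, a query along the lines of the one below should list the relevant PRs. This is only a sketch, assuming the label is spelled exactly “documentation pending” and that 4.4 is tracked as a milestone; adjust the names to whatever the darktable repo actually uses:

```
is:pr is:closed label:"documentation pending" milestone:4.4
```

Paste it into the search box of the repository’s pull requests tab, or build the same filter with the label and milestone dropdowns.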
What’s the best way to make, say, clouds a bit more orange in Darktable?
I don’t really understand what the difference between scene- and display-referred is. In some places the documents say that certain modules are always one or the other, and in others they say that dragging a module beneath Sigmoid turns it into scene-referred, and above turns it into display-referred.
I use Color Balance RGB, draw a mask for the clouds, then tint them using the 4 ways tab.
Not really. “White … can be anything from 0 to infinity” doesn’t make sense to me, as neither my camera nor my monitor has the ability to capture or display infinite white. I don’t know how it differs in practice from, say, Lightroom, as both applications need the clipping warnings/blinkies enabled to show when the whites or blacks have clipped.
I believe what he means is that the white you capture can vary in intensity and the software doesn’t know what it is. A scene shot inside a house, one shot outside, and one of a white wall reflecting midday sunlight all have different white values, anywhere from 0 to infinity.
Take a look directly into the sun at high noon; that’s the meaning of infinity…
Well, mostly it’s about trying to do most of the math (i.e. the editing) on the image data before the display transform rather than after… this image demonstrates the potential consequences: https://images.app.goo.gl/bJ432mdKi7yyXQfx9
While that’s true, your editing can cause values to get outside the range of values your camera can produce:
Take an interior, exposed so that the window is not clipped (in the raw file).
That means the window will have spots with a luminosity close to 1.0 at the start of the pixel pipe.
While editing, you use the exposure module to bring the interior to a decent luminosity, say by adding +3 EV. But that will push the window luminosity to somewhere around 8 (1 × 2^3), i.e. much higher than your camera can produce or your screen can display. (The screen isn’t really bothered, it’ll just clip at 1.0.) And that’s just one module that can push luminosity upwards.
So while white at infinity is rare (although pushing the brilliance sliders in color balance RGB can get you close…), you can easily get white values beyond what your camera can produce…
So filmic wants you to tell it from which luminosity onwards it should treat values as “white” (using the pickers you can tell filmic “just grab the highest value in this region”, but that’s not always what you want).
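To make the numbers concrete, here is a tiny Python sketch of that +3 EV example. It is only a toy stand-in for the idea, not darktable’s actual filmic math:

```python
# Scene-referred values are linear and unbounded; a display clips at 1.0.

def add_ev(value, ev):
    """Exposure in linear light is just multiplication by 2**ev."""
    return value * (2.0 ** ev)

def naive_display_clip(value):
    """What a hard clip at display white does to out-of-range data."""
    return min(value, 1.0)

def toy_tone_map(value, scene_white):
    """A stand-in for the final display transform: you tell it which
    scene luminance should land on display white (1.0)."""
    return min(value / scene_white, 1.0)

# Two spots in the window after +3 EV: scene luminance 8.0 and 4.0.
bright, dimmer = add_ev(1.0, 3), add_ev(0.5, 3)

# Hard clipping: both become 1.0, the difference between them is lost.
print(naive_display_clip(bright), naive_display_clip(dimmer))   # 1.0 1.0

# Telling the tone mapper that scene white is 8.0: the difference survives.
print(toy_tone_map(bright, 8.0), toy_tone_map(dimmer, 8.0))     # 1.0 0.5
```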
I understand this, but isn’t this what Lightroom and other editors do? Once you’ve adjusted the midtones, if the highlights have been pushed to the point of clipping then you recover them using the brights/lights/whites/highlights sliders, or whatever your editor calls them, or, if a single channel has clipped, with something like RGB tone curves. “Scene-referred” and “display-referred” don’t seem to have any meaning other than to say that you should watch for clipping while you edit, which editors have supported since forever.
No - not really.
Caveat: I don’t use Lightroom. From some googling, Lightroom seems to be a display-referred editor.
The whole point of scene-referred is that the values never clip during editing, so there is zero loss of detail while you edit. Any loss of detail happens at the final mapping stage, which you control. That mapping stage can then map to any colour space required for output, rather than assuming only a semi-defined display colour space.
Darktable is in the process of moving towards scene-referred, and therefore some modules have been re-coded to support both, whereas some modules are still display-referred only.
The advice regarding module position being display-referred or scene-referred is for those people who would like to make the best of both methods.
If you like working as per Lightroom, select display-referred in the menu and forget about everything else. Work as you have always worked.
This looks nice, but I’m not sure if it will become reality. I ended up using a style with a number of color balance RGB instances, but it is not ideal because it clutters the edits.
That’s one thing for sure, but the main thing is that scene-referred means that the processing modules can act in ways that reflect the physical reality of the scene.
To all intents and purposes a camera sensor records the number of photons that reach it, in a linear manner – the charge accumulated on the sensor is directly proportional to the quantity of light arriving on it.
If we keep this data linear for as long as possible during processing (scene-referred means that the image data remains linked to the actual real-world data in the scene), we can use simpler algorithms to make adjustments to the image in a way that is more predictable and physically accurate. For example adjusting the exposure simply adds light to the scene (as if more photons had been received on the sensor). It turns out that we can do a lot of other stuff as well that both looks better and uses simpler maths, by keeping the data linear for as long as possible.
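As a rough illustration of that “simpler maths” point, here is a small Python sketch. The gamma 2.2 encode is just a stand-in for a display transform (real sRGB has a linear segment, and darktable’s pipeline is far more involved), but it shows why “+1 EV” only behaves like doubling the light when the data are still linear:

```python
# A minimal sketch: exposure on linear (scene-referred) data vs on
# already-encoded (display-referred) data. Gamma 2.2 stands in for the
# display transform; it is not darktable's actual transfer function.

GAMMA = 2.2

def encode(linear):
    return linear ** (1.0 / GAMMA)   # approximate display encoding

def decode(encoded):
    return encoded ** GAMMA          # back to linear light

scene = 0.18  # middle grey in linear light

# Scene-referred: "+1 EV" really is twice the light.
print(encode(scene * 2.0))           # ~0.63 once encoded for display

# Display-referred: multiplying the already-encoded value by 2 is a
# different operation; it no longer corresponds to doubling the photons.
doubled_encoded = encode(scene) * 2.0
print(doubled_encoded)               # ~0.92, a much bigger jump
print(decode(min(doubled_encoded, 1.0)))  # ~0.83 in linear light, not 0.36
```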
We do need to perform non-linear tone mapping operations on the image eventually in order to squeeze it into the display-referred space and make a pleasing “filmic” image. However, these non-linear operations mean we can no longer make such simple physical assumptions in later modules (we aren’t “counting photons” any more), so we do this tone mapping as the final step (where possible).
Display-referred processing, on the other hand, discards this physicality as early as possible in the pipeline, performing non-linear operations on the image to make it “start off looking good”, and making further adjustments on top of this tone mapping. This makes it much harder to process the image in subsequent modules due to the loss of linearity: the maths is hard and, often, if you push modules too far they quickly start to look “wrong”. AIUI the movie industry works almost entirely in scene-referred space when processing images and when creating visual effects, because in this linear space it’s much easier to build physically realistic models that look good.
(caveat – I’m not an expert on this so some of the details might not be entirely accurate – I try to listen to smart people and hopefully understand most of what I hear)
Take another look at the graph I shared… it shows not only how clipping happens and its impact on the channels, but also how it can introduce a hue shift. Using display-referred, you are setting 0 and 1 in the display space from the outset. You can push and pull in that non-linear space and end up recreating highlights that you push beyond those limits, but the data are clipped. In scene-referred, you push and pull in linear space, proportional to the scene, in 32-bit float, and only do the mapping down to the display after most of the editing is done.
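To put the hue-shift point in code, here is a minimal Python sketch with made-up linear RGB values. Real tone mappers do something smarter than a flat divide, but the idea of preserving the ratios between channels is the same:

```python
def clip_per_channel(rgb):
    """Display-referred style: each channel is clipped at 1.0 independently,
    which changes the ratios between channels and therefore the hue."""
    return tuple(min(c, 1.0) for c in rgb)

def scale_to_fit(rgb):
    """Scene-referred style: scale the whole pixel so its maximum lands on 1.0,
    keeping the channel ratios (and hence the hue) intact."""
    peak = max(rgb)
    return tuple(c / peak for c in rgb) if peak > 1.0 else rgb

sunset = (2.0, 1.0, 0.25)   # a bright orange highlight pushed past display white

print(clip_per_channel(sunset))  # (1.0, 1.0, 0.25): R:G ratio now 1:1, shifted to yellow
print(scale_to_fit(sunset))      # (1.0, 0.5, 0.125): R:G ratio still 2:1, still orange
```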