Blender AgX in darktable (proof of concept)

The black and white relative exposure sliders, just like in filmic, define the input range.
The sigmoid curve itself remains the same, but a larger or smaller dynamic range is mapped to the 0…1 input range.
You can start with -10 EV and +6.5 EV. That means a ‘dynamic range’ (input range) of 16.5 EV is mapped to 0…1 on the x axis of the sigmoid, and 18% mid-grey is closer to the white point than to the black.
You can change to, say, -5 EV and +5 EV. In that case, 10 EV is mapped to the same range, and the 18% mid-grey is smack between the endpoints.
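The mapping described in the two examples above can be sketched as follows. `log_encode` is a hypothetical name, assuming a log2 mapping relative to 18% mid-grey; the module's actual code may differ:

```python
import math

# Hypothetical sketch of the relative-exposure mapping described above:
# scene-linear input is expressed in EV relative to 18% mid-grey, then
# normalised to 0...1 between the black and white relative exposures.
def log_encode(lin, black_ev, white_ev, grey=0.18):
    ev = math.log2(lin / grey)                    # exposure relative to mid-grey
    return (ev - black_ev) / (white_ev - black_ev)

# Defaults -10...+6.5 EV: mid-grey (0 EV) lands at 10 / 16.5 ~= 0.606,
# closer to the white point than to the black point.
log_encode(0.18, -10.0, 6.5)
# -5...+5 EV: mid-grey sits exactly halfway, at 0.5.
log_encode(0.18, -5.0, 5.0)
```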

I’ve entered those settings into the plotter, and I’m trying to stretch the x-axis to illustrate the difference.

Notice that although the linear slope was set the same, it appears flatter when the 0…1 range is stretched out to cover a wider range (sorry, I forgot to actually set a linear part, but you can still see it using the dotted line, and the general slope in the neighbourhood of the pivot). I can scale it according to the dynamic range, but that will change the curve. There is no way to completely avoid settings influencing each other.

Top: curve stretched to -10…+6.5 EV, no linear section, slope = 2.4, toe and shoulder power at 1.5. These are the defaults.
Middle: same, only the x-stretching (‘dynamic range’) changed, running from -5 to +5 EV. You can see how the toe and shoulder were influenced, and how the angle of the toe/shoulder at (0,0) and (1,1) changed. Also, the ‘effective slope’ is much higher.
Bottom: slope changed from 2.4 to 2.4/1.65 = 1.45 (to account for the 16.5 EV → 10 EV scaling change in the x direction). The dotted slope lines of the top and bottom images are now the same, but there are still some expected changes (e.g. toe and shoulder angle). There is just no way to completely isolate one slider from the others.

If you select an even lower dynamic range (and adjust the slope accordingly), you may even lose the shoulder (or toe). Here, with -5…+3.5 EV, slope = 2.4/(16.5/8.5) ≈ 1.24, you can no longer hit white at all – even without a shoulder, the straight line would not take you to 1.

I’ve run a lot of comparisons between AgX and the Sigmoid/CB RGB combination, and I’d agree that the 4 ways controls (plus saturation and contrast) have very similar effects. My impression is that AgX explicitly lays out the parameters for the tone curve in a single view, whereas the corresponding controls in CB RGB are tucked away in the masks tab.

What I like about AgX as a user is that I can perform all of that tone mapping in a single pane, whereas with Sigmoid and Filmic I then go to CB RGB to refine the tones. The results are fine, but it’s a lot of extra work.

The ‘look’ controls are applied to the output of the sigmoidal curve. They employ no masking.
The curves in color balance rgb influence the masks that limit the effects of shadows lift and highlights gain. The effect of the global power can be tuned using the white fulcrum (since it’s performed on scene-referred data, one has to set the value to be treated as 1 – the white fulcrum).
https://darktable-org.github.io/dtdocs/en/module-reference/processing-modules/color-balance-rgb/#4-ways-tab
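The white-fulcrum idea can be sketched as a generic power function anchored at a chosen white value; this is an illustrative sketch of the concept, not the actual color balance rgb code, and `anchored_power` is a hypothetical name:

```python
# Generic sketch of a power ('gamma') anchored at a white fulcrum, applied
# to scene-referred data: the value at the fulcrum is unchanged, so the
# fulcrum defines which input value is treated as 1. Not darktable's code.
def anchored_power(x, power, white_fulcrum):
    return white_fulcrum * (x / white_fulcrum) ** power

anchored_power(4.0, 1.3, 4.0)   # the fulcrum itself maps to itself: 4.0
anchored_power(0.18, 1.3, 4.0)  # values below the fulcrum darken when power > 1
```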

The ‘look’ controls of the agx module are display-referred (0…1), but if you are not careful, the scale and the offset can create values > 1, which are not compatible with the display-referred part of the pipeline.
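A toy ASC-CDL-style calculation shows how those out-of-range values can arise; this is illustrative only, not the module's implementation:

```python
# Toy ASC-CDL-style look: out = (in * slope + offset) ** power.
# With display-referred input in 0...1, slope and offset can easily push
# the result above 1, which the display-referred pipeline does not expect.
def cdl(x, slope=1.0, offset=0.0, power=1.0):
    return max(0.0, x * slope + offset) ** power

cdl(0.95, slope=1.1, offset=0.05)  # ~= 1.095: outside the display range
```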

1 Like

Thanks for the responses!

Yes, I realize those sliders are essential in Filmic RGB. They are the meat of the module itself. But in AgX, you have Slope, Offset, Power, so I’m trying to understand the reason why we need them in AgX. The white and black relative exposure sliders were not present in earlier versions of the module, so I’m wondering why they have been introduced.

Also, Kofa said that they should be used to set the black and white points, but in my testing, the sliders seem to affect the whole range too much, so I can’t imagine being able to easily set the two extremes. If you set your white point with the white relative exposure slider, you immediately lose it when you touch the black relative exposure slider. Either this behaviour shouldn’t happen, or these sliders are not suited to setting black and white points.

Here is a video of the exact same picture used in my last video, but using Filmic RGB instead. You’ll notice that the white relative exposure slider mainly affects the highlights, and the black relative exposure slider mainly affects the shadows, which is exactly what you would expect. In AgX, it seems to work differently, if you compare the two videos.

Yes, that’s exactly the same workflow I’ve been following, and I think it’s great. You control the overall DR and black/white points with the Look controls, then you can tweak the highlights/shadows (shoulder/toe) with the extra controls. It’s all on the same page and has a distinct advantage over Sigmoid and Filmic RGB.

And this is also why I’m trying to understand the reason for having the white/black relative exposure sliders. They seem redundant in this workflow.

No, you do not. Once again, those set the input range.
The output does change, but not because the input white point is ‘lost’: it’s because effective contrast (slope) changes.
The smaller the range, the higher the contrast is, given the same sigmoid slope. I can scale the slope to maintain the contrast; that will influence the shoulder and the toe (see above). Would you like to try that? I can create such a build during the weekend.
Before the following pull request was merged, the situation was similar with filmic, too:

Dependency on white and black relative exposure was due to the fact that we divide the input by dynamic range before applying the curve. This is easy to correct, by multiplying contrast by dynamic range: x*c = (x/DR) * (c*DR)
filmicrgb: make slope at gray point only depend on contrast by rawfiner · Pull Request #10206 · darktable-org/darktable · GitHub
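The compensation quoted above can be sketched numerically; `effective_slope_per_ev` is an illustrative name, not a function from the darktable source:

```python
# Sketch of the fix from the quoted pull request: the input is divided by
# the dynamic range (DR) before the curve, so multiplying the user-facing
# contrast by DR keeps the slope per EV independent of DR:
# x*c = (x/DR) * (c*DR).
def effective_slope_per_ev(contrast, dr, compensated):
    curve_slope = contrast * dr if compensated else contrast
    return curve_slope / dr  # d/dEV of curve_slope * (ev / dr)

effective_slope_per_ev(2.4, 16.5, compensated=False)  # ~= 0.145, depends on DR
effective_slope_per_ev(2.4, 10.0, compensated=False)  # 0.24, depends on DR
effective_slope_per_ev(2.4, 16.5, compensated=True)   # 2.4, regardless of DR
effective_slope_per_ev(2.4, 10.0, compensated=True)   # 2.4, regardless of DR
```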

1 Like

I’m enjoying testing, experimenting and learning all this, so I’m more than happy to try out whatever you want to build. I just don’t want this to become frustrating for you. So, please only indulge me if it’s something you want to do. I am just one user, and I’m interested to hear other people’s feedback and opinions about this module/workflow.

I’ve read your earlier post explaining the math behind the relative exposure sliders. It all makes sense, but I think what I’m missing is what it actually means from a usability point of view. In practice, I’m finding it hard to reliably set the white and black extremes for this particular photo on my display with these sliders. And is this maybe where the confusion lies? What I want is to set the brightest part of the photo for my sRGB display, and for this part to always stay at that exact brightness regardless of which other sliders are changed. With the tone mapper being one of the last modules in the pixelpipe (and ideally the very last module, one day), one of its roles should be to set these absolute black and white points.

I understand why the sliders affect each other so much, and what you’re saying about contrast changing rather than the white point, but in practice it’s not working for a workflow like this:

  1. I adjust the white relative exposure slider to the slightest sign of clipping (for illustrative purposes), then stop. The white point is set – the brightest part of my image is now the brightest my sRGB display can physically show.
  2. I adjust the black relative exposure slider to get my black point, but the brightest part of my image is now dimmer, below what my sRGB display can handle. This feels like I’ve lost my white point (even though the underlying math might say that the white point is not lost).

But maybe I just need to see a video of someone using it in the way you envisage, and it will all click. Thanks again!

Can you post (or link to) a problematic raw, along with the module settings before you start changing the black and white point, then what white point you set, and what black point you set?

I copied over the calculations from other code, and I guess they know what they are doing, but I did not verify that ‘my’ (LLM-based) port was correct, or (more likely) if my subsequent tweaking introduced issues. So all kinds of bugs are absolutely possible.

Thanks for your feedback!

I’ll have to get back to playing with this… for me, the real power of the module was the initial simplicity. I didn’t make too many attempts, so this is a feeling more than an educated opinion, but I started to edit images a bit differently… I found myself looking to the approach shown by Boris in his recent videos: just getting the image to a neutral state with sigmoid, and then (in his case) using the tone eq and CB to further manage things from that neutral preset… Out of the gate with agx I found I was doing something similar with the original agx PoC… opening up the shadows of the image and maybe compressing the highlights… using the saturation a bit to keep color, and then I would leave room at both ends of the tonal range and could be bold with local contrast, DorS, and rgb color balance and really drive the image.

In the end I was using global brilliance to bump up a little the brightness if needed and global luminance in the 4 ways for the blacks…

The sheer simplicity of it was quite nice… I have not had the time to explore this updated version with all the extra parameters so that is the next step for me.

Try setting the white reference so that the histogram (in log mode) just touches the right edge, then change the black reference. Also try this with filmic. With filmic, in that situation you can do what you want with the black reference (and the contrast); your white point won’t change.

1 Like

Sure, a couple of examples from Play Raw are below. I’ve found that the success of those relative exposure sliders in AgX heavily depends on your settings in the Exposure module. If you can set exposure in such a way that you only need small adjustments to the relative exposure sliders in AgX, you have a lot more success. But if you need to make larger adjustments to those sliders, it can cause problems.

2015-05-23 - 7865.DNG (10.8 MB)

For this photo, I start with Exposure set at 0 and AgX at its default settings. So nothing changed so far.

Black relative exposure: Slide to the right to set black point. Black clipping starts at around -7.5 (default threshold for clipping indicator)

White relative exposure: Slide to the left to set white point, but clipping is never reached. The slider fails to get to white and starts to reverse at about 2.2.

So the photo is perhaps just too underexposed. Move Exposure up to +1.500 and start again:

Black relative exposure: Slide to the right to set black point. Black clipping starts at around -5.7

White relative exposure: Slide to the left to set white point. White clipping starts at about 2.9. Success, but the clipping indicator is now showing more black clipping. Not massively but still noticeable.

_DSC2182.NEF (22.9 MB)

For this one, you need to give the Exposure a bump to about +1.5.

Black relative exposure: Slide to the right to set black point. Black clipping starts at around -3.9

White relative exposure: Slide to the left to set white point. White clipping starts at about 2, but there is now much more clipping in the shadows.

Note that it’s not always desirable to have the black/white points just below clipping. I’m just doing this because the clipping indicator clearly shows changes that aren’t always as obvious to the eye.

Here is a grey ramp. I’ve set my white relative exposure to 4 EV (above mid grey), and black relative exposure is at -1 EV (below mid grey). So, my ‘dynamic range’ (mapped input range) is only 5 EV.

Notice where the white starts.
I can move the relative black exposure down to below -13 EV.

  • the white cutoff point did not change
  • the contrast changed: we are mapping a larger input range to the same output range

If I move the black relative exposure even further, then I lose my whites. Most of the 0…1 range is now dedicated to the darks: mid-grey is now 20 ‘units’ from the darkest mapped black and 4 ‘units’ from the white (the 0…1 range is divided into 24 parts, because of the 24 EV range). Mid-grey is therefore mapped to the input value 20/24 = 5/6 ≈ 0.83, with only about 0.17 (4/24 = 1/6) remaining between mid-grey and the top of the input range, 1. The slope remained the same, 2.4; even without a toe, y would go up 2.4 units for each 1 unit of x increase. As we go from mid-grey’s log-mapped value of 0.83 to +4 EV’s log-mapped value of 1, x only changes by 0.17, so y only changes by 0.17 * 2.4 = 2.4/6 = 0.4. That is not enough to push y to 1, so you ‘lose your white’. This is the problem that scaling the contrast will solve.

For now, you can compensate by raising the slope.
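Replicating the arithmetic from the grey-ramp example above (black at -20 EV, white at +4 EV) as a quick check:

```python
# Arithmetic from the 24 EV grey-ramp example above: between mid-grey's
# log-mapped position and the white point, a slope of 2.4 only gains 0.4
# in output, which (per the post) is not enough to reach 1 from the pivot.
range_ev = 24.0                       # -20 EV black ... +4 EV white
x_grey = 20.0 / range_ev              # mid-grey log-mapped to 5/6 ~= 0.833
slope = 2.4
dy_to_white = slope * (1.0 - x_grey)  # 1/6 of x headroom * 2.4 = 0.4
```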

4 Likes

Thanks @kofa , I’m looking forward to trying the next build. And I appreciate all your explanations.

In the meantime, have you personally already found a workflow you like with this module? Or are you still just experimenting? Do you envisage mainly using the Look section, or do you enjoy using the relative exposure sliders and the toe/shoulder tweaks? You’re getting lots of feedback from some of us, but I’m interested in what you are enjoying / not as happy with.

Cheers!

2 Likes

@kofa this module could be a game changer.

@s7habo already demonstrated that he could achieve a similar look on a difficult image with just this module – a look that took several of the currently available modules, and some time, to create.

I’ve been playing with it to try and understand it and see what it can do. I shoot mostly sports, so today I decided to try it on some basketball images I shot last week. With just this module I was able to duplicate the look of my processed images using current master. So given that I’m shooting basketball, which is indoors and has fairly constant light, I could simply create a preset and apply it to all my images and have most of my processing done in seconds. Apply denoise and lens correction and I’m done.

I’ve also noticed that with just this one module you can achieve pleasing results, which may help newcomers to darktable get good results without being overwhelmed by the number of modules available.

Judging by the number of replies and the interest, you may not have just created a module, but a monster :smile:

8 Likes

Honestly, I have not used the module much. Pretty much all my spare time in front of the computer was spent coding and trying to understand how it all works together. I’ve had great guidance from @flannelhead, but others have chimed in, too. Thanks everyone!

5 Likes

Thanks, but I didn’t. I just copied them, badly, into darktable. There’s more to come, and it’s a looong way before this comes to master, in any way or form.

4 Likes

Thanks, I have a better understanding of the black/white relative exposure controls, but could you expand a bit on how they would be used on an image within the module?

There’s a new build (Linux-only; my Windows build failed – help needed, see below). The main changes:

  • the contrast (slope) is scaled according to the selected exposure range.

  • changed the order of applying the outset matrix (spotted by @flannelhead – thanks!). However, I’m still thoroughly confused by this. In the GLSL implementation I initially started from (which provides the polynomial approximation):

    AgX’s picture formation process works by applying the following steps:

    • A matrix transformation on the input data (inset). The input is expected as linear tristimulus with Rec. 709 primary chromaticities (“linear sRGB”)
    • An encoding to log2 space followed by the application of a sigmoid curve
    • An optional ASC-CDL-based look transform; I’ve included the default looks “Golden” and “Punchy”
    • The inverse of the input matrix transform (outset) followed by the fitting EOTF (from non-linear Rec. 709)

    This explicitly says the inverse matrix is to be applied before the linearisation. The code is:

    // Inverse input transform (outset)
    val = agx_mat_inv * val;
    
    // sRGB IEC 61966-2-1 2.2 Exponent Reference EOTF Display
    // NOTE: We're linearizing the output here. Comment/adjust when
    // *not* using a sRGB render target
    val = pow(val, vec3(2.2));
    

    The comments on that GLSL page also express confusion, and I’m unable to follow the explanation.

    The OCIO config first applies a gamma of 2.2 (going from ‘gamma 2.2 display’ / ‘sRGB’ to linear), then does the outset, and finally applies gamma 1/2.2 (to encode back to ‘gamma 2.2 display’ / ‘sRGB’), if I understand correctly:

      - !<ColorSpace>
        name: AgX Base
        family: Image Formation
        equalitygroup: ""
        bitdepth: unknown
        description: AgX Base Image Encoding
        isdata: false
        allocation: uniform
        from_scene_reference: !<GroupTransform>
          children:
            - !<ColorSpaceTransform> {src: Linear BT.709, dst: AgX Log (SB2383)}
            - !<FileTransform> {src: AgX_Default_Contrast.spi1d}
            - !<ExponentTransform> {value: 2.2}
            - !<MatrixTransform> {matrix: [0.913500795022408, 0.0447436243802432, 0.041755580597349, 0, 0.0808718821432087, 0.815474287341219, 0.103653830515572, 0, 0.0359397231776182, 0.0342846039509056, 0.929775672871476, 0, 0, 0, 0, 1], direction: inverse}
            - !<ExponentTransform> {value: 2.2, direction: inverse}
    

    (SB2383-Configuration/config.ocio at main · sobotka/SB2383-Configuration · GitHub)

  • Performance may be better: no low-level optimisations, but I’ve moved a bunch of calculations out of the main loop. I have no idea whether that’s something anyone will actually notice.
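The outset-ordering confusion above can be probed with a toy one-channel example, where a scalar stands in for the 3×3 outset matrix; purely illustrative, and with a real matrix the two orderings differ even more (a power does not commute with channel mixing):

```python
# Toy one-channel comparison of the two orderings: applying the 'outset'
# before linearisation (the GLSL snippet) vs. in linear space (the OCIO
# config). A scalar m stands in for the 3x3 outset matrix.
v, m = 0.5, 0.9  # v: sigmoid output (2.2-encoded); m: toy 'matrix'

glsl_linear = (m * v) ** 2.2                 # matrix on encoded data, then ** 2.2
ocio_encoded = (m * v ** 2.2) ** (1 / 2.2)   # decode, matrix in linear, re-encode
ocio_linear = ocio_encoded ** 2.2            # linearised for comparison

# The two orderings disagree by a factor of m ** 1.2, so they are not
# equivalent even in this scalar case.
ratio = glsl_linear / ocio_linear
```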

So, as I said, there is only a Linux build this time. My Windows build always showed a mysterious prompt (I had to choose an app to open a shell script in the middle of the build, and it did not matter what I chose, e.g. Notepad++ instead of a shell; at least it worked – until now). I can no longer build for Windows (I’ve already uninstalled and reinstalled MSYS2). Like before, I just followed the instructions from darktable/packaging/windows at master · darktable-org/darktable · GitHub.

$ cmake --build build --target package
[0/2] Re-checking globbed directories...
[22/950] Updating version string (git checkout)
Version string: 5.1.0+532~gf89cda2303
[81/950] Checking validity of cameras.xml
C:/Users/kofa/msys64/home/kofa/darktable/src/external/rawspeed/data/cameras.xml validates
[84/950] Generating bin/version_gen.c

[85/950] Generating styles strings for translation
FAILED: bin/styles_string.h C:/Users/kofa/msys64/home/kofa/darktable/build/bin/styles_string.h
C:\WINDOWS\system32\cmd.exe /C "cd /D C:\Users\kofa\msys64\home\kofa\darktable\build\bin && C:\Users\kofa\msys64\home\kofa\darktable\tools\generate_styles_string.sh C:/Users/kofa/msys64/home/kofa/darktable/build/bin/../share/darktable/styles C:/Users/kofa/msys64/home/kofa/darktable/build/bin/styles_string.h"
C:\Users\kofa\msys64\home\kofa\darktable\tools\generate_styles_string.sh: line 49: $OUT: ambiguous redirect
[94/950] Generating authors.h for about dialog.
ninja: build stopped: subcommand failed.

That would be the last line here:

export IFS=$'\n'

{
	echo "// Not to be compiled, generated for translation only"
	echo

	#  Ensure that we remove the duplicate strings

	ls $STYLE_DIR/*.dtstyle | while read file; do
    	get-l10n $file
	done | sort | uniq
} > $OUT

https://tech.kovacs-telekes.org/dt-agx/Darktable-5.1.0%2B532~gf89cda2303-x86_64.AppImage

If someone could build from GitHub - kofa73/darktable at f89cda2303c3e658cd59c32b366f0e6b271bbd8c, please contribute.

3 Likes

Really strange. This is my work, and I know next to nothing about Windows. What is surprising is that this was done quite some time ago and nothing has changed in this area recently… and it used to work OK!

Maybe you could try quoting $OUT:

	done | sort | uniq
} > "$OUT"

I hope another Windows dev will reproduce and understand what is going on.

1 Like