I’ve been a big fan of the zonemapper in LightZone (which I still use for just this), and a recent search into why the zone system plugin was removed from darktable led me to these two threads:
They showed me that I’m not the only one struggling to use the tone equaliser for the things I used to do with the zone system.
So I thought I’d try to port the old zone system plugin to the new scene-referred backend.
While doing that I looked more closely at what LightZone is doing: it uses NURBS interpolation to create a smooth curve from the zonemap (instead of the plain linear interpolation in darktable’s zone system plugin), and I implemented the same in the zone system plugin. I think that at least alleviates some of the issues stemming from the discrete boundaries of the old zone system.
There are still some issues remaining: the GUI is finicky (I don’t quite understand what’s going on, but it sometimes prevents moving a node all the way to one end), and I have disabled OpenCL processing because I’ve not yet implemented a new OpenCL kernel. However, on my machine the module is plenty fast enough even running on the CPU.
You can find the code in my fork at fork. There are 2 branches:
- feature/zonemapper_nurbs, forked from master
- feature/zonemapper_nurbs_5.2.1, ported to the 5.2.1 release
That’s one way to do it, but proposing a proof of concept here for people to test and see if there is some interest before polishing the code is also a valid option.
Look at how the AGX tonemapper was developed.
I had a quick look at your code and it seems you ported the current zonesystem to the new algorithm.
As darktable has strong rules about preserving old edits, I think, if you want this to be integrated in the future, it would be easier for you to start with a new module (copy the existing and modify).
At least the code has to preserve old edits, either by keeping the old code or by having a way to reconfigure the module to give the same result.
Now, let’s start some compilation and test this
This is interesting. LightZone is still a great piece of software with its different approach. It would be good to see a zone system in darktable.
Good luck.
Yes I hadn’t thought about versioning yet. The changes to the output would probably be only very subtle in most cases, but I just asked on github what the best process is.
I’m also still playing around with different interpolation methods (I’m not quite convinced that the NURBS implementation is really the best way; at the moment I’m looking at a monotone cubic Hermite spline, which might be better suited). If people are interested I can create a couple of branches with different interpolations.
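For illustration, something along these lines is what I mean by a monotone cubic Hermite (a simplified Fritsch–Carlson variant; the names and details here are just a sketch, not the code in the branch):

```c
#include <stddef.h>

// Evaluate a monotone cubic Hermite spline (simplified Fritsch–Carlson
// tangents) through n control points (x[i], y[i]) with increasing x.
static float zone_curve_eval(const float *x, const float *y, size_t n, float t)
{
  if(t <= x[0]) return y[0];
  if(t >= x[n - 1]) return y[n - 1];

  // find the segment containing t
  size_t i = 0;
  while(i + 2 < n && t >= x[i + 1]) i++;

  const float h = x[i + 1] - x[i];
  const float d = (y[i + 1] - y[i]) / h;   // secant slope of this segment
  const float d_prev = (i > 0) ? (y[i] - y[i - 1]) / (x[i] - x[i - 1]) : d;
  const float d_next = (i + 2 < n) ? (y[i + 2] - y[i + 1]) / (x[i + 2] - x[i + 1]) : d;

  // tangents: zero across local extrema, otherwise the average of the secants
  float m0 = (d_prev * d <= 0.f) ? 0.f : 0.5f * (d_prev + d);
  float m1 = (d * d_next <= 0.f) ? 0.f : 0.5f * (d + d_next);
  if(d != 0.f)
  {
    // clamp so the cubic cannot overshoot and break monotonicity
    if(m0 / d > 3.f) m0 = 3.f * d;
    if(m1 / d > 3.f) m1 = 3.f * d;
  }
  else m0 = m1 = 0.f;

  // standard cubic Hermite basis on the normalised parameter s in [0,1]
  const float s = (t - x[i]) / h;
  const float h00 = (1.f + 2.f * s) * (1.f - s) * (1.f - s);
  const float h10 = s * (1.f - s) * (1.f - s);
  const float h01 = s * s * (3.f - 2.f * s);
  const float h11 = s * s * (s - 1.f);
  return h00 * y[i] + h10 * h * m0 + h01 * y[i + 1] + h11 * h * m1;
}
```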
Can I suggest augmenting the UI (e.g., with checkboxes or tabbed views) to enable the different code paths instead? Building once is faster than building N times, and it is much easier to do side by sides within the same DT instance. Just my 2 cents.
You are dealing with unlimited input here. Y can be 10 or more; you’d assign the whole range from slightly below 1 to “infinity” to the last “zone” (sure, there is a practical limit, but 5 EV above mid-grey, i.e. Y = 0.18 * 32 = 5.76, is an everyday situation, not extreme in any way).
On the darker end of the scale, your mid grey is no longer L = 50%, but Y = 18%; 2 EV below, it’s Y = 4.5%, not L = 25%.
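To put numbers on that (mid-grey pinned at Y = 0.18 in the scene-linear encoding):

```c
#include <math.h>
#include <stdio.h>

#define MIDGREY 0.18f

// scene-linear luminance at a given EV offset from mid-grey
static float ev_to_Y(float ev) { return MIDGREY * exp2f(ev); }
// and the inverse: how many EV above/below mid-grey a luminance sits
static float Y_to_ev(float Y)  { return log2f(Y / MIDGREY); }

int main(void)
{
  printf("%.2f\n", ev_to_Y(5.f));   // +5 EV  -> 5.76, well above 1.0
  printf("%.3f\n", ev_to_Y(-2.f));  // -2 EV  -> 0.045, i.e. 4.5%, not L = 25%
  printf("%.1f\n", Y_to_ev(10.f));  // Y = 10 -> ~5.8 EV above mid-grey
  return 0;
}
```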
Edit: removed mistaken comment about per-channel application.
If indeed the new version of zonesystem will be placed in the scene-referred part of the pipe, a new module is probably needed (you do not want the old zonesystem module before the tone mappers…)
If it’s in the display-referred section (which it seems to be if I read the forked code correctly), a new module may not be necessary (cf. filmic). But then, as @kofa said, you’ll have to deal with unlimited input, (which is not what the original film-based zone system was developed for).
What further complicates the issue: the old zonesystem module was developed when the only tone mapper module was the basecurve. That module more or less forced you to take care of the dynamic range before basecurve (as that module maps a range of 0…1 to 0…1).
One could add ‘some kind of sigmoid’ (e.g. the sigmoid module), and then work in 0…1 RGB, but even then, in darktable’s pipeline, whether we are before or after the tone mapper, the encoding is linear, and mid-grey is not at 50%, but at 18%.
Or, one could directly take scene-linear, unbounded input, map it within some bounds (like black and white relative exposure in filmic or agx), and apply some non-linearity to the mapping in a way that mid-grey ends up in the middle. It would basically be another filmic-like module (using the Y norm in filmic), with an alternative, flexible way to define the curve. Using the norm would retain RGB ratios, with its pros (no Notorious 6) and cons (salmon sunsets). Or one could switch to per-channel application, potentially also adding primaries to the mix (resulting in something like agx with an alternative way to configure the curve).
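As a very rough sketch of what that “bound it, then apply a curve” idea could look like (the names and the norm choice here are illustrative, not filmic’s or agx’s actual code):

```c
#include <math.h>

// Illustrative only: map unbounded scene-linear Y onto 0..1 using black/white
// relative exposure around mid-grey, then apply a user-defined zone curve.
typedef struct
{
  float black_ev;   // e.g. -8.0f : black relative exposure
  float white_ev;   // e.g. +4.0f : white relative exposure
} zone_bounds_t;

static float Y_to_unit(const zone_bounds_t *b, float Y)
{
  const float ev = log2f(fmaxf(Y, 1e-9f) / 0.18f);        // EV from mid-grey
  const float x = (ev - b->black_ev) / (b->white_ev - b->black_ev);
  return fminf(fmaxf(x, 0.f), 1.f);                       // clamp to 0..1
}

// Norm-based application: scale all three channels by the same ratio,
// so RGB ratios are preserved (with the pros and cons mentioned above).
static void apply_zone_curve(const zone_bounds_t *b,
                             float (*curve)(float),       // zone curve, 0..1 -> 0..1
                             const float in[3], float out[3], float Y)
{
  const float target = curve(Y_to_unit(b, Y));            // target output luminance
  const float ratio = (Y > 1e-9f) ? target / Y : 0.f;
  for(int c = 0; c < 3; c++) out[c] = in[c] * ratio;
}
```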
Is this a module that should be used instead of the tone mappers (Filmic/Sigmoid/AgX) or rather an alternative to Tone Equalizer? My understanding is that the old zone system module was the latter.
In my Tone Equalizer thread I have seen that people are very “protective” of the module because there is no alternative - while almost everything else you do with DT has multiple ways of achieving it with different sets of modules. This is why I feel there is a need for a tone equalizer alternative.
If it was a Tone Equalizer-like module, no sigmoid-curve mapping would be needed. The module would just have markers at “stops” (log2 of the input values) and the user would move them around. If the user leaves everything at the defaults, the output could be identical to the input.
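A minimal sketch of that idea, with made-up names and plain linear interpolation between the markers (a real module would no doubt smooth this):

```c
#include <math.h>

#define N_ZONES 9   // markers spanning, say, -4..+4 EV around mid-grey

// Per-zone exposure corrections in EV, set by the user; all zeros = identity.
static float zone_gain_ev[N_ZONES] = { 0.f };

// Interpolate the EV correction for a pixel from its position in stops.
static float gain_for(float Y)
{
  const float ev = log2f(fmaxf(Y, 1e-9f) / 0.18f);        // stops from mid-grey
  // map -4..+4 EV onto the marker indices 0..N_ZONES-1
  float pos = (ev + 4.f) / 8.f * (float)(N_ZONES - 1);
  pos = fminf(fmaxf(pos, 0.f), (float)(N_ZONES - 1));
  const int i = (int)pos;
  const float f = pos - (float)i;
  const float g = (i < N_ZONES - 1)
      ? (1.f - f) * zone_gain_ev[i] + f * zone_gain_ev[i + 1]
      : zone_gain_ev[i];
  return exp2f(g);                                        // EV -> linear multiplier
}

// With all corrections at 0 EV the multiplier is 1 and output == input.
static void process_pixel(const float in[3], float out[3], float Y)
{
  const float mul = gain_for(Y);
  for(int c = 0; c < 3; c++) out[c] = in[c] * mul;
}
```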
Possibly such a module should also include the guided filter logic of Tone Equalizer - maybe in a simplified way, to optionally preserve details when compressing the dynamic range. Right now I am not very active here because of real life, but I could see myself participating/helping with such an endeavor.
Yes, it turned out that Tone Equalizer was not a like-for-like replacement for the Zone System module. The latter had much more control over the individual zones, whereas Tone Equalizer is much more suited to broader changes spanning multiple zones.
Such fine control over individual zones brought its share of problems with local contrast, but the idea in principle was really good, and I still think there is room for such a module in Darktable’s toolbox. @JovianSettler’s redesign of Tone Equalizer may bring some improvements, but I think I agree with him when he says that another module would be welcome to complement it.
As this is an area I’m particularly interested in, count me in to do some testing for you. However, I’m only running Windows at the moment, so I might need to wait until this project is a little further down the line and Windows exes are being made.
I realised that I actually did not quite understand the scene-referred processing pipeline (that’s what you get for relying on AI as a shortcut for explanations instead of doing the reading). So, as some of you correctly pointed out, the module is not really doing what it’s intended to do. I think the most straightforward thing would be to convert this into a tonemapper (LightZone does something similar, using a zonemapper for its raw tone curve as its first module).
I still think the zonemapper is the most intuitive interface I’ve used for tonemapping, and I will do some homework to see if I can properly bring it to the scene-referred world.
Check if you can simply use the resulting curve as an alternative method in agx, where you have the limits set by the black and white relative exposure. You may want to transform the input so that mid-grey falls in the middle, although it’s usually not that far off (and maybe it’s natural to have more on the left than on the right of mid-grey, if you don’t have unusually bright highlights). You’d have to replace the curve logic in `_apply_curve`. There, your input is log-mapped to 0…1, and you are expected to output 0…1. You can get Y from `const float luma = _luminance_from_matrix(pixel_in_out, rendering_to_xyz_transposed);`.
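Roughly, the replacement could look like this (a sketch only; I haven’t checked `_apply_curve`’s exact signature, and `zone_curve_eval` is a hypothetical stand-in for whatever spline evaluator you end up using):

```c
#include <stddef.h>

// zone_curve_eval(): a monotone spline through the zonemap control points,
// e.g. the Hermite sketch posted earlier in the thread (hypothetical name).
float zone_curve_eval(const float *x, const float *y, size_t n, float t);

// Hypothetical drop-in for the curve logic inside agx's _apply_curve:
// 'v' is the value agx has already log-mapped to 0..1, and the return
// value must also be in 0..1.
static inline float zone_curve(float v,
                               const float *zone_x, const float *zone_y,
                               size_t n_zones)
{
  // with zone_y[i] == zone_x[i] for every control point this reduces to
  // an identity curve, i.e. agx behaves as if no zone mapping were applied
  return zone_curve_eval(zone_x, zone_y, n_zones, v);
}
```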
Edit: though, if you map using Y, you won’t benefit from the primaries manipulations, and won’t have a path to white (e.g. a bright, saturated blue will remain blue, and will end up as (small R, small G, greater-than-1 B), which, after B is clipped, produces quite a dark colour).