once you fix the average after fixing the hue/ratio, does it still preserve the hue or would you run into a loop here?
Both preserve the hue, but for the second one I added the constraint of preserving the average as well, resulting in a change in chroma instead. So we can trade chroma vs. average while maintaining constant hue.
Great!
However, this would be an HDR display-referred module.
Tried to implement a method exploiting the observation that the naive hue-preserving method is good at the border and that the hue- and energy-preserving method is good around the primary color axis. I just interpolated between the two based on where a pixel's color sits relative to the two identified boundaries. I exported the weight as part of developing it, and it looks like this:
Still only first-order smoothness for the weight, but interpolating two first-order smooth curves yields a second-order smooth function, so check out the results!
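A minimal sketch in C of what such a blend could look like (the smootherstep weight, the boundary parametrization, and all names here are just for illustration, not the actual module code):

```c
#include <math.h>

// Hypothetical C2-continuous ramp from 0 to 1; just one way to get a smooth weight.
static inline float smootherstep(float x)
{
  x = fminf(fmaxf(x, 0.0f), 1.0f);
  return x * x * x * (x * (6.0f * x - 15.0f) + 10.0f);
}

// Blend the naive hue-preserving result with the hue- and energy-preserving
// result, based on where the pixel sits between the two identified boundaries.
static void blend_hue_methods(const float naive[3], const float hue_energy[3],
                              float boundary_axis, float boundary_border,
                              float pixel_pos, float out[3])
{
  // 0 near the primary color axis (favor hue + energy), 1 near the border (favor naive)
  const float t = (pixel_pos - boundary_axis) / (boundary_border - boundary_axis);
  const float w = smootherstep(t);
  for(int c = 0; c < 3; c++)
    out[c] = w * naive[c] + (1.0f - w) * hue_energy[c];
}
```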
Pretty good if you ask me!
And this is what the HSV circle looks like:
For reference, here is what the slice I showed for the other method a few posts back looks like. Note that the result is better than first-order smooth and that we reach the same maximum chroma as with the naive method.
Here is what the Bloom example from earlier in the thread looks like (pretty much the same as the (now known as) naive preserve-hue method):
Some actual HDR images from HDR Heaven (same as tested on earlier in the thread) using this mixed-method:
And finally two of my own pictures from the summer:
What do you think?
Thanks for sharing your thought process.
i think we all should be doing more rock climbing
sorry i lost context over the length of this thread… just to be sure, you're only applying this in the context of a curve, right? not for other local contrast/tone manipulating things?
and the first step is always to map the rgb channels naively through the curve separately? did you also experiment with mapping some luminance first and reconstructing rgb by ratio?
more specifically i'm thinking about lifting shadows with the local laplacian pyramids. there the whole operation first runs on luminance only, so you don't really want to map it through each channel separately first. but it can also cause quite nasty colour shifts and oversaturated blacks.
I think the last one is both a lovely image and a beautiful edit. Beautiful subtle greens and gradations in the shadows. There's moooood.
Other than that I'm not sure what you're up to (I'm not a programmer).
The edits are nice, but if you had included the original images for comparison, it would have demonstrated the module better.
I am also not a mathematician, like @mikae1, so I can appreciate the effects and the effort.
did you see this? seems very similar in spirit, only that it employs the derivative of the curve. seems to me that it would always result in smooth transitions?
No, I haven't! Thanks for sharing.
Similar in spirit but still a bit different, interesting! I have been considering this approach but didn't like it, as there was no way to formulate it as robustly as I would have liked. It looks pretty nice though. I think the biggest difference is that all dots desaturate at the same time here, while they desaturate from the middle and out with a per-channel approach (uncomment line 129 to see that).
He also has a nice little soft clip on rows 133-135; comment those out and uncomment row 139 to see where clipping happens.
I need to see if Björn wants to hang out for a chat; fun to see what other Swedes are up to!
About that climbing session of yours, @hanatos: absolutely up for that! Would be a joy to do some. Any particular place you would like to go?
About your questions from before Christmas that I still haven't answered:
I apply this correction on the per-channel mapped values, or I use the per-channel tone map as a guide for luminance and saturation. Whichever way you want to view it.
I have experimented with RGB ratios in ways similar to the link you shared and haven't been happy with it so far. I still do not know how to guarantee that the mapped values stay within the display gamut, so I discarded it for now.
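For reference, the ratio idea looks roughly like this (a minimal sketch, not the code I actually tested; the Rec.709 luminance weights and curve() are placeholders):

```c
#include <math.h>

// Sketch of "map luminance, reconstruct RGB by ratio". The problem mentioned
// above: scaling all channels by the same ratio can push a saturated channel
// above the display maximum even though the mapped luminance itself is in range.
static void tonemap_by_ratio(const float rgb_in[3], float rgb_out[3],
                             float (*curve)(float))
{
  // Rec.709 luminance weights, used here only as an assumption.
  const float lum = 0.2126f * rgb_in[0] + 0.7152f * rgb_in[1] + 0.0722f * rgb_in[2];
  const float ratio = lum > 0.0f ? curve(lum) / lum : 0.0f;
  for(int c = 0; c < 3; c++)
    rgb_out[c] = rgb_in[c] * ratio;  // hue and channel ratios preserved, gamut not guaranteed
}
```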
You could use a similar approach to hue preservation when doing local laplacians, as long as colors are already mapped into that working gamut. You could actually relax the constraints a bit: I force the output values to be smaller than the largest per-channel mapped value and larger than the smallest per-channel mapped value. Employing a correction like this in scene space could drop that requirement.
That looser requirement makes it fine to use a gradient-based method as Björn does, though, as we do not have to worry about clipping bright values. I guess trying both and seeing how each behaves is the proper way to approach that larger topic.
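The per-channel bound I described above amounts to something like this (a sketch, not the exact implementation):

```c
#include <math.h>

// Keep the hue-corrected value inside [min, max] of the naive per-channel
// mapped values, so it can never leave the range that the per-channel curve
// already guarantees for the display.
static void clamp_to_per_channel_bounds(const float per_channel[3], float corrected[3])
{
  const float lo = fminf(per_channel[0], fminf(per_channel[1], per_channel[2]));
  const float hi = fmaxf(per_channel[0], fmaxf(per_channel[1], per_channel[2]));
  for(int c = 0; c < 3; c++)
    corrected[c] = fminf(fmaxf(corrected[c], lo), hi);
}
```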
@mikae1 and @Rajkhand, thanks for the nice comments about the pictures. I do understand that it might be hard to judge the technical aspect of it. I will come back with some practical examples once I have polished and pushed the latest code I have been working on.
I would like to see this module in the mainstream, as a user choice alternative to filmic.
I am sure filmic can do lots of things and solve most of the use cases where my results don't seem so good, but it is not easy to master.
For many things, in a correctly exposed image and when the main subject is in the midtones, filmic is great.
But it compresses the highlights (it has to) and the shadows, and then you have to fight against it, trying to expand shadows and highlights, and sometimes I cannot get a natural look.
When you have clipped channels in the highlights, things get worse: it gets quite difficult to recover them and give them a natural look.
I have tried this sigmoid module, and in these cases it gives me better (easier to get) results; it seems more similar to the way we worked in LR or other software.
But having to use two DT installations with separate databases is not an option. It is good for trying and testing, but not for daily use on the photos in your collection.
So is there a proposal to add it that we can vote for?
Could you add it as an optional module or a user-installable module to DT?
I have seen some requests along the same lines on GitHub, but I am not sure if there is an "official" proposal where general users could vote.
An example image that demonstrates this would help focus the discussion.
Well, maybe this is not the place to discuss whether or not sigmoid is better than filmic for some cases.
If you are here, it is because you like sigmoid and want to give it a try.
I have said that the problem with filmic is probably my own: not being proficient enough at using it.
Anyway, if the OP wants me to upload some images processed with sigmoid and filmic, I can look for some.
It's not only me who is having problems with highlights in filmic; I can provide a link to another Spanish forum thread where many of us tried to get a natural look and could not (with filmic).
The image is not mine, so I will just post the link to the topic, which contains a download link.
But perhaps it is better to open a separate thread with similar images and see what we can get with filmic and what we can get with sigmoid, isn't it?
Quitar el cielo morado ("Removing the purple sky") - Darktable en general - darktable en español
If you think it would help, I can look for an example of my own and create a thread for it.
It will take me a bit to find a good example and compare filmic and sigmoid (I have to upgrade my sigmoid install too).
But it does not matter whether it is due to my inexperience with filmic or to sigmoid really being superior for some images.
The fact is that there are people who like it, find it useful for some kinds of images, and find it easier to use than filmic.
So if it can be added to the mainstream, it would be great to have options, in the same way we have options for other tasks.
It has been argued that it can create confusion, but DT has a lot of confusing modules, some of them obsolete, others repetitive… having options is great.
Just don't activate it for all users, or don't put it in the general scene-referred modules group but in an alternative modules group or something like that.
Even if other modules work better with filmic (color calibration or other modern modules that Aurelien developed, which are great), it is the user's choice whether or not to use them and whether to accept the results obtained.
Sigmoid is adapted to the new scene-referred workflow: it converts from an unconstrained "linear" scene-referred RGB space to the constrained, nonlinear RGB screen output, so there is no reason to discriminate against it; it fits the new pipeline in DT.
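To illustrate the idea (just a generic example, not the module's actual formula), a log-logistic style sigmoid maps any non-negative scene-linear value into the bounded display range:

```c
#include <math.h>

// Generic illustration, not the module's actual formula: a log-logistic style
// sigmoid maps unbounded scene-linear input onto the bounded [0, 1) display range.
// m places middle grey and s controls contrast; output approaches 1 as x grows.
static float simple_sigmoid(float x, float m, float s)
{
  const float xs = powf(fmaxf(x, 0.0f), s);
  return xs / (xs + powf(m, s));
}
```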
The choice is just a matter of taste and of judging the results, and that is the job of the end user.
I am now in Darktable 3.8.
The sigmoid executable seems to be for 3.6 (at least for Windows).
But the link on GitHub in @jandren's post does not work.
Any chance of being upgraded soon?
I had it installed but uninstalled it when upgrading to 3.8.
I don't remember where I got the Windows executable from; I cannot find it anywhere now.
(I know, I could compile it myself, but no, I can't, as I don't know how, and doing it for Windows requires quite a lot of tools I don't have and don't master.)
Looks like it has not been worked on in at least a year, judging from the link in the original post… so I wouldn't hold my breath. Also, I don't think there has been any recent discussion about it being merged.
IIRC this touches many aspects of dt infrastructure. Should there be one module housing all display transforms, or many modules? What qualifies or disqualifies a specific display transform? Should look transforms and display transforms be separate (most likely yes) but linked in terms of their place in the pipeline (also most likely yes, but how)?
I don't think this is up to Jakob.
Adding more info to @priort's answer:
@TurboGit Sorry, I didn't keep a close eye on this branch during the fall, as I was trying to hunt down a solution to merge elsewhere and then took a break for other things. I would prefer not to integrate it into filmic, as filmic makes several assumptions about white and black points which aren't needed in this module, since it is defined out to infinity. So it's essentially not compatible with filmic.
@ariznaf Anyway, there is work going on in parallel about recovering clipped highlights; check this thread: Guiding laplacians to restore clipped highlights - #70 by Juha_Lintula
(It starts with Aurelien's new method, and Jens-Hanno presents another method that he was working on at the same time.)
Jens-Hannoâs PR: [WIP] Segmentation based highlights recovery for bayer sensors by jenshannoschwalm · Pull Request #10716 · darktable-org/darktable · GitHub
Aurelienâs PR: Highlights reconstruction : introduce new method by aurelienpierre · Pull Request #10711 · darktable-org/darktable · GitHub
Why just one module for the display transform specifically?
We have lots of modules for sharpening, for example, or for doing B&W conversions…
Some people prefer one, others prefer another module, even if one is technically superior.
Aurelien recently provided the diffuse or sharpen module, with sharpening profiles, and we also have the equalizer, which is used for sharpening too.
Don't get me wrong.
I appreciate Aurelien's work a lot, and I appreciate filmic.
I like the look it gives many photographs when the important subjects are in the midtones and the highlights are not clipped.
I know it can be used to recover highlights, and it even has something to do with noise, but I could not get it to work well; my fault, I am sure.
Sigmoid does a good job (I don't know if better or worse than filmic, or in which cases it does well or not) with quite a simple interface and controls.
I agree with @jandren that, for the end user, concentrating on one aspect of the photo at a time is much easier: do just one thing and do it well.
Even if there are side effects, it may be easier to fix them later than to fight everything in one place.
I know there are technical reasons, which Aurelien has explained, and that many aspects are related, and that is why he put many things in one place, as technically it is a superior solution.
But sometimes perfection and convenience don't walk the same path.
I prefer to keep it simple, something that I can manage and understand to some extent.
I think one answer is that many of the modules are now designed to work together, with assumptions or actions that take into account the properties of each. So it's not as simple as just adding another module that is not designed for that ecosystem; that might actually impair the performance of the other modules. If it were simply an equation implemented the same way as filmic, I don't think there would be an issue, but from what @jandren explained, it is not like that.
Maybe official modules work best together, but that is not the only path or the only criterion to take into account.
To keep modules working together, all modules should adhere to the current workflow: a floating-point, linear, unbounded RGB model for the editing modules, and a final conversion to the screen.
Maybe some modules are technically superior and work better with each other.
But a final conversion to the screen with another curve won't hurt or make the workflow invalid, as can be seen from the results shown in this thread.
If everybody uses the same modules and the same tools, everybody is going to get the same look.
Having options (even if they are not to everyone's taste) is not bad, as long as each option is consistent and gives good results.
With LR you have few options; you have to develop the photo the way Adobe thinks it should be done.
That is why people need to do preliminary work in LR and then go to PS (where you have many options from many vendors and users) to get the look they want in their photos.