I am fairly confident that ACDSee’s Light EQ does a little more magic. Just using the
color zones module doesn’t seem to be doing the trick. I have been playing with darktable and ACDSee in parallel for a while now and I find it rather difficult to get visually identical results.
One solution is to make a preset with 3-4 exposure module instances and predefined luminosity masks for different tone ranges. After applying that preset you can play with the exposure of each protected tone area. It is a bit tricky though.
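To make the idea concrete, here is a minimal numpy sketch of that preset approach: a soft luminosity mask per tone range, with an exposure adjustment blended through each mask. The Gaussian falloff and the three zone centres are my own illustrative choices, not darktable’s actual parametric-mask math.

```python
import numpy as np

def luminosity_mask(lum, center, width):
    """Soft mask selecting pixels whose luminance is near `center`.

    A Gaussian falloff stands in for darktable's parametric-mask
    sliders; `lum` is luminance in [0, 1].
    """
    return np.exp(-0.5 * ((lum - center) / width) ** 2)

def masked_exposure(img, ev, mask):
    """Blend an exposure adjustment of `ev` stops through `mask`."""
    exposed = img * 2.0 ** ev
    return img + mask[..., None] * (exposed - img)

# Three "zones": lift shadows, leave mids, tame highlights.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
lum = img.mean(axis=-1)

out = img
for center, width, ev in [(0.15, 0.1, +1.0), (0.5, 0.15, 0.0), (0.85, 0.1, -0.7)]:
    out = masked_exposure(out, ev, luminosity_mask(lum, center, width))
```

In darktable itself you would do the same thing with three exposure instances, each with a parametric mask on the L channel.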
Well, I think it’s completely normal that different software do things a bit differently. As I noticed in your shared image, it seems that ACDSee has done a little more than just applying Light EQ. It has definitely applied some kind of local contrast to the photo (a tone curve also seems to be in effect). I am confident you can achieve a very similar result if you do just a little more in darktable. Specifically, try “Equalizer”, “Local Contrast”, “Tone Curve”, or “Shadows & Highlights”. Good luck!
Serious congrats to you for being the adventurous and rational soul of the group, giving the “new thing” a chance before arguing. You may or may not agree, but first you have the openness to understand, to know before discarding. (Do you know serious studies show only one in twenty is able to defy the status quo?) Welcome to the club.
I hope to find someone like you on the DT development team and convince them to work on Exposure Zones because, as explained before and as you have found, there is no way to get such results with DT, at least not in the same time frame. And I do not want to simply copy the ACDSee invention but make something better.
Well, only if you like the halos (tree trunk) in the left example. Anyway, neither result is acceptable to me.
Exposure fusion gives me quite good results for such cases (plus a little shadows & highlights, plus lowpass). Lately I discovered that a second instance of the base curve with exposure fusion is even nicer. Parametric masks might help to limit the effect.
The image is CC0 if you would like to play. Yes, I know it is an easy example because of the absence of colors.
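For reference, exposure fusion in the Mertens sense blends several exposures weighted by how “well exposed” each pixel is. Below is a minimal single-scale numpy sketch of that idea, generating synthetic exposures from one image; darktable’s base curve module does a multi-scale version, so this is only an illustration of the weighting, not its implementation.

```python
import numpy as np

def fusion_weights(img, sigma=0.2):
    """Mertens-style 'well-exposedness' weight: pixels near
    mid-grey get the highest weight."""
    return np.exp(-0.5 * ((img - 0.5) / sigma) ** 2)

def exposure_fusion(img, evs=(0.0, 1.0, 2.0)):
    """Fuse synthetic exposures of one image, each pushed by `ev`
    stops, weighted by well-exposedness. Single-scale only: the
    full Mertens algorithm blends on a Laplacian pyramid."""
    stack = [np.clip(img * 2.0 ** ev, 0.0, 1.0) for ev in evs]
    weights = [fusion_weights(s) for s in stack]
    total = np.sum(weights, axis=0) + 1e-12
    return sum(w * s for w, s in zip(weights, stack)) / total
```

On a dark region this lifts the shadows toward mid-grey while leaving already well-exposed pixels mostly alone, which matches the look the base curve’s fusion option gives.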
Do you consider the picture on the left as a perfect example? Then pull the yellow a tad up, saturate the red a bit and work the shadows.
Could you provide an original raw and a “perfect” result under CC0? This way it would be much easier to see what you want and how difficult it is to get a similar result in darktable.
My example is by no means perfect! It’s only just that, an example. It was suggested that the effect from ACDSee could be replicated with darktable’s color zones module, selecting by lightness, which I found very hard to do.
@R_O how would you like to proceed? Do you have the time and means to investigate yourself on how to emulate the effect? Possibly with suggestions from us?
N.B. In my example the tone curve module of ACDSee is active to undo the applied camera curve, to make the starting point for both images identical.
ACDSee does tone mapping using the dynamic range available in each zone/division and lets you adjust blacks and whites. Color zones simply modifies the tone curve: no chroma handling, no HDR, and poor gamma correction. (Like other tone tools it loses contrast easily, and you need additional tools, such as local contrast, to fix the mess. So for Exposure Zones to work it would need the exposure/tone management used in the histogram, when you right/left-click on it, which works better than the rest.)
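One plausible reading of the per-zone idea, sketched in numpy: evenly spaced zone centres over the tonal range, each with its own gain slider, interpolated smoothly per pixel so adjacent zones blend without seams. This is a guess at Light EQ’s behaviour, not ACDSee’s actual algorithm.

```python
import numpy as np

def light_eq(lum, zone_gains):
    """Apply per-zone exposure gains (in EV) to a luminance image.

    Zones are evenly spaced centres over [0, 1]; each pixel's gain
    is interpolated from the zone sliders, so adjusting one zone
    feathers into its neighbours instead of producing a hard seam.
    A sketch of the concept, not ACDSee's implementation.
    """
    n = len(zone_gains)
    centers = (np.arange(n) + 0.5) / n
    ev = np.interp(lum, centers, zone_gains)
    return np.clip(lum * 2.0 ** ev, 0.0, 1.0)
```

For example, `light_eq(lum, [1.0, 0.0, 0.0, 0.0, 0.0])` lifts only the darkest fifth of the range by one stop and leaves the rest untouched.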
Look, most of the comments here show no understanding of the technologies involved, only a bad attitude toward my proposal. This is why I wanted to test the community first. Now I guess Exposure Zones is not for darktable. What is the purpose if most devoted users would be unable to use it?
Thank you for your time.
If another software has the feature you need, then you should use that software.
We absolutely do not need your tests. You are welcome to participate to make our software better, and you are welcome to share your knowledge and help us enrich one another, but trying to “test the community,” whatever that may mean, will not get you anywhere.
You’ve failed to give a thorough technical explanation of what “Exposure Zones” even is, so you are correct: it is not for darktable.
I understand your proposal in an intuitive way. Your Exposure Zones would be nice to have in darktable. Unfortunately I am not a developer. To be more constructive: could you find and provide more technical material, like links to the math behind it, research, etc.? Maybe someone would be up to the challenge and fork darktable, adding a new Exposure Zones module. With time it would merge into the master branch, as it was with the liquify module, if I recall correctly.
The liquify module was added by @Pascal_Obry, who has been active since I started following the project back in 2012. There were a few people who chimed in and added new features.
I have spent the time to watch the linked videos. Previously, I tested Capture One and watched their videos. As for a tone curve working in the Lab color space, darktable has had an L curve since the very beginning, I guess.
After all, I really do not know what they do better besides marketing.
No before-and-after example had been posted yet. My internet connection is damn slow on upload, and I’ve spent some time getting something up.
It looks like a guided filter used as a luminance mask, with some tone-curve magic then applied within the mask.
The dehaze module implements guided filters; Heiko Bauke said a few months ago that he planned to implement them as another masking option as well.
Anyway, the result from ACDSee looks pretty overcooked to me. I don’t know what people have against shadows these days.
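For anyone curious what “guided filter as a luminance mask” means in practice, here is a minimal grey-scale guided filter (in the style of He et al.) used to refine a hard shadow threshold into an edge-aware mask. It is a self-contained numpy sketch, not the dehaze module’s code; the box radius, epsilon, and threshold are illustrative.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via an
    integral image so the cost is independent of r."""
    w = 2 * r + 1
    p = np.pad(a, r, mode="edge")
    c = p.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col for the 4-corner sum
    return (c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]) / (w * w)

def guided_filter(I, p, r=8, eps=1e-3):
    """Grey-scale guided filter: smooth p while following edges in I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def shadow_mask(lum, threshold=0.35, r=8, eps=1e-3):
    """Hard luminance threshold refined by the guided filter, so
    the mask hugs edges instead of producing halos."""
    hard = (lum < threshold).astype(float)
    return np.clip(guided_filter(lum, hard, r, eps), 0.0, 1.0)
```

Tone adjustments applied through such a mask stay confined to the shadows without the halo artifacts a Gaussian-blurred mask would give.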
@Benja1972, that is indeed the way to go. You fork dt on GitHub and work on your branch until you are satisfied. Of course you also talk to the developers to ensure you’re moving in the right direction and the integration is OK. At some point, when you feel it is ready for merging, you create a pull request. You’ll also need to stay around to answer questions, change code after reviews, adapt things here and there… Be patient… Then it will get merged.
All the magic is already in DT.
You already have the tools to divide the shadows, and you already have the tools to recover the dynamic range and manage the exposure.
Then just provide the interface to apply the adjustment to each such division.
The auto button just recovers the light in the darker zones, and it is used as a reference, not the final adjustment.
I came also to comment that it is true that the Lab curve in color zones behaves like Capture One’s Luma curve; as a result, both have contrast/gamma correction issues. This makes ACDSee’s Light EQ unique in the world. What I am unable to figure out is why it is so hard to understand for most people.
My next proposal is to make a focus zones tool, but this is a really complex concept and I do not think DT has the tools; it would be something totally new. But let’s start with the easy things…
If I understand correctly, you just want a new module with a new UI that does what can already be achieved, because all the math is already in dt. That’s “easy”, maybe. (I’m not a dev.)
BUT because this is FOSS software, developing that involves man-hours that you don’t pay for, and you seem perplexed that nobody is doing what you say.
What you propose is maybe a cool feature, but if no dev is interested or it’s not a [critical] bug, don’t expect they’ll do what you ask.
As I see it, you have different options:
- File a feature request at redmine.darktable.org
- Keep describing and developing your idea so it grows interest
- Pitch it to another similar FOSS project
- Learn to code
- Pay someone to code it for you
PS: The “test” thing… I recommend you don’t use that argument.
I agree. (I also saw the “test” effect).
DT is already overloaded with modules, but none of them lets you easily manipulate the full information in the raw files, especially dynamic range and focus. Not because the software is bad, but because it uses a paradigm that does not consider this.
It is necessary to create a new tool based on the current photography technology and its workflow.
Imagine that you have a portrait made with a flash and a reflector, but you decide to add a second flash 3 m away on the opposite side of the main one, with lower intensity, and to reduce the depth of field to 1 m, DIRECTLY IN THE RAW FILE. Doing this with the histogram, sharpening, and noise-removal tools would be a nightmare. The current tools are focused on improving the bitmap instead of the photograph.
I am currently looking for a team to make the new software; if I succeed, you will be the first to know.
Sorry, but what you describe is physically not possible. You might be able to get something close with the right tools, but unless you have been able to get the “photograph” as a 3D scene in something like Blender, you won’t ever be able to get it exactly right. Sure, there are probably a lot of cases where this might be good enough, but it will never be perfect (and it will probably require a lot of hand-tuning, since there is no depth info in your average raw file).
That might be possible given the right tools and some computational power, but it will probably at least require multiple exposures from different angles and/or some other way to get depth information about the scene.
Yes, I know a lot of cameras do record an estimated focus distance, but this is only one point, and it is the distance to the in-focus area, so not nearly enough to do what you would want.
Actually, it is possible by two approaches:
- Emulate a 3D scene (as you suggest), but instead of creating full 3D objects you just define the object borders (as bitmap tracing does) and calculate the shadows from the distances saved in the metadata, where you already have the focal length and depth of field.
- Use the focus points in the image (as the assistants in some mirrorless and professional cameras do). With this method you have an “object” defined by those points; from there the shadows represent “volume”, and you then manipulate these shadows around the focused areas (as focus merging works).
Both methods could use the full dynamic range saved in a raw file to get more accuracy and room to create the new shadows. They could also use “artificial intelligence” to learn your style and types of “objects” instead of making the full calculation every time.
Affinity Photo already has perspective, focus, and tone tools that could evolve in the suggested direction.
You just need to think different…
In both your cases you need distance information that you don’t have. You assume that a focus point measures distance, but that is not how focus points work: you get the distance from the amount the lens has moved from its infinity stop, not from the focus points. Now, I grant you the dynamic range argument, but darktable/RawTherapee/GIMP (v ≥ 2.10)/Krita already don’t throw away that data, so that argument is moot.
And yes, with the mention of the right tools in my previous post I did mean things like “AI” and such algorithms, and those might do a good enough job most of the time (especially if you want to decrease the DoF; increasing it will require some deconvolution and probably some kind of inpainting-like guesswork).
Also note that perspective correction and tone operations are not what I am talking about here; I am mostly talking about changing focus (especially when it goes beyond just changing DoF) and adding things like an extra light source (although with a bit of practice this one should be quite fakeable).
Contrast detect works by looking at the contrast of the image and tries to reach a certain maximum contrast in the region of interest. Phase detect looks at rays that enter from different sides of the lens and matches them up at a single point; the advantage of phase detect is that you know the direction and exact point of focus, but neither option gives you distance (not directly, at least).
Theoretically you might be able to do something with phase detect, but you would probably need to calibrate each lens for this to work; it is never going to work with contrast-detect autofocus.
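To illustrate why contrast detect gives no distance: the camera only maximises a sharpness score over lens positions. The sketch below scores sharpness as the variance of a discrete Laplacian and fakes defocus with repeated box blurs (wrap-around boundaries, fine for a toy example); the metric and the simulation are my own simplifications, not any camera’s firmware.

```python
import numpy as np

def sharpness(img):
    """Contrast metric: variance of a 4-neighbour Laplacian.
    Contrast-detect AF maximises a score like this; note the score
    says nothing about subject distance."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def simulate_defocus(img, blur_steps):
    """Crude defocus stand-in: repeated 4-neighbour averaging."""
    out = img.copy()
    for _ in range(blur_steps):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

# Sweep "lens positions": blur 3, 2, 1, 0, 1, 2, 3. The AF loop
# stops at the position with the highest score (the blur-0 step),
# having learned nothing about how far away the subject is.
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
scores = [sharpness(simulate_defocus(scene, b)) for b in (3, 2, 1, 0, 1, 2, 3)]
best = int(np.argmax(scores))
```

The hill-climb finds *where* the lens is sharpest, which is exactly why a raw file carries no usable depth map.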