Do you know if darktable has a lightness control tool?

(Roger) #41

"In both your cases you need distance information"
You are right; in that case, you may enter the values manually. If you took the picture, you know the distances. Additionally, some cameras already record the distance and the depth of field, as well as the focused zones. Those are my starting point.

"already don't throw away that data"
I suggest doing this:
Exercise (A)

  1. Open a photo with any of the programs that you mentioned.
  2. Increase / decrease the exposition; you may use the “exposition control”, the “tone curve”, etc.

You will see that you lose contrast.

Exercise (B)
Now do the same with a mirrorless camera (DSLRs are not WYSIWYG).

  1. Point to an object.
  2. Increase / decrease the exposition.

You will see that the shadows change (not the contrast).

For me, the software from exercise (A) is wrong because it does not work the way reality does. It does not even use the dynamic range; it just applies a gamma correction (or something like it). This may be a legacy from the past, when PCs had only 4 bits per channel, but now we have 16 bits per channel or more. Currently only ACDSee’s Light EQ processes the exposition “well”, but, as shown here, most people do not understand this brilliant invention.
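The difference between the two exercises can be sketched numerically. On linear, scene-referred data an exposure change is a plain multiplication, which preserves the ratio between neighbouring tones, while a gamma-style lift compresses that ratio. The pixel values below are hypothetical, chosen only to illustrate the point:

```python
# Two neighbouring pixel values on a linear (scene-referred) scale.
a, b = 0.10, 0.20

# A real exposure change is a multiplication, like gathering twice the light:
exp_a, exp_b = a * 2.0, b * 2.0
print(exp_b / exp_a)   # ratio between the tones is unchanged: 2.0

# A gamma-style "brightening" instead warps the values non-linearly:
gam_a, gam_b = a ** (1 / 2.2), b ** (1 / 2.2)
print(gam_b / gam_a)   # ratio shrinks to about 1.37: relative contrast is lost
```

This is the arithmetic behind “you lose contrast”: the multiplicative model keeps tonal relationships intact, the curve-based one does not.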

This brings me to the main topic, the Exposition Zones. The final conclusion was that DT cannot do this because it needs to “evolve” its exposition/tone-curve management first. I guess it is easier to create a new program backed by a camera brand than to change DT’s core and explain to its long-time users what happened.

I also said the Exposition Zones would be the “easy” feature; the “Focus Equalizer” is indeed something complex, but I invite you to take a look at Affinity’s Focus Merge feature (here is a video) and at how the Fujifilm X-T20 manages distances and focus.


Can you tell me which cameras those are? I looked into this recently, and so far I am only aware of some phones that store depth data.

(Ingo Weyrich) #43

Afaik, among the usual consumer/semi-pro/pro cameras there are some which record the distance to the subject (calculated from the distance to the sensor), and there are also some which record the x/y coordinates of the point the camera focused on (no depth information) in the metadata.

(Tobias) #44

I only know about:

To get the depth data you need to

  1. create a focus stack
  2. calculate a depth map from it

That’s more or less what the Android camera app does.
But I don’t know of any camera that does this.
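The two steps above can be sketched with a per-pixel focus measure: the slice of the stack in which a pixel is sharpest gives a coarse depth label for that pixel. This is a hypothetical sketch on synthetic images, not how any particular camera or app implements it:

```python
import numpy as np

def depth_from_stack(stack):
    """Rough depth map from a focus stack: for each pixel, pick the slice
    where local contrast (a simple gradient measure) is highest.
    `stack` is a list of 2-D grayscale arrays, nearest focus first."""
    sharpness = []
    for img in stack:
        gy, gx = np.gradient(img.astype(float))
        sharpness.append(gx ** 2 + gy ** 2)      # per-pixel focus measure
    # Index of the sharpest slice = coarse depth label per pixel.
    return np.argmax(np.stack(sharpness), axis=0)

# Toy example: two slices, each "sharp" (textured) in a different half.
rng = np.random.default_rng(0)
near = np.zeros((4, 8)); near[:, :4] = rng.random((4, 4))
far  = np.zeros((4, 8)); far[:, 4:]  = rng.random((4, 4))
depth = depth_from_stack([near, far])   # left half -> 0 (near), right -> 1 (far)
```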

(Roger) #45

I did not explain myself well: by “record” I mean they “show” the information, but I have not verified whether they “store” it.
I guess they do not keep the distance in the metadata because it is meaningless at the moment.
If you cannot see the video, here is the manual explaining how the distance, depth of field and focus are “shown”:

The process is: contrast makes shadows, and shadows make objects. This is why it is important FIRST to improve how the software processes the exposition; it is necessary to add a “layer” and process it as shadows instead of contrast. I expect this would also improve the definition of object borders and, as a result, the focus. This would be clarity 3.0.

(Mica) #46

In English it is exposure, it is not the same word as in Spanish.

(Christian Kanzian) #47

Simply because cameras do a lot of internal image processing, which is not comparable to the sliders in any software application mentioned here. Maybe the raw processors shipped with the camera can produce similar results.

(Gustavo Adolfo) #48

I’m sorry to interrupt, but I have to say a “few” words, since I only now saw this interesting thread.

I’m a newcomer and soon realized who the devs are.

Agreed. Isn’t that one of the core attributes of the FOSS ecology: community? I’m not saying that closed-source software doesn’t have its own vibrant communities, with people helping each other, but they seem to lack a valuable quality that is maybe the starting point in FOSS: empowerment. The power to directly influence the ongoing development of something. Something that will probably never be a one-click powerful magic tool (which, under the hood, does lots of things). Imho it will probably never be that, because small teams, sometimes just one person, cannot give up the immediate feedback of testers, who happen to be all of us. In other words, their tools have to be simpler at the start and expose as many variables as they can. And they will probably carry that mark until they become extremely complex but, at the same time, extremely flexible, as they grow up and turn into mature software like those discussed here.
If one just wants something because it’s free (as in free beer), and just because of that, then he/she probably won’t deserve the status of a real FOSS community member: he’s just a consumer, not a participant.
I’m really sorry if I lose myself in a torrent of words, but what I see here has much to do with my own recent experience as someone who has just arrived from the closed-source world (in photography, not in general computer usage) and was anxious to replicate old workflows and get the expected results asap.
But I think I’m learning the zen art of controlling myself and of assuming that maybe I’m missing something.
Imho, I’m also testing, not the community, but myself.
So far so good.
Now please, proceed with the technical dialogue, which is getting better and more objective.

(Roger) #49

You are absolutely right.
Maybe “Exposition Zones” can be used as a trademark :wink:

(Roger) #50

Indeed there is a good explanation for the incorrect behaviour, especially since Capture One is made by Phase One. Actually, even ACDSee Light EQ is not an exposure tool but a tone-mapping one.
However, after bringing here the great idea of including something better in DT, I realized, and hopefully you did too, that a new paradigm must be used to manage exposure and focus.
If you liked the depth-of-field concept, here is another one: what would happen to high-ISO noise if you manipulated the image by shadows, as I suggest? It dissipates, because it does not behave like a shadow. Why? Because no two pixels are equal in a shadow (except in 8-bit or lower). Additionally, it may be possible to create a halo-removal tool.

(Gustavo Adolfo) #51

@R_O It would be great if you brought us a jpg developed by you using the tools you’ve mentioned, as well as its corresponding raw file. I assure you it would be very instructive for all of us. The way you’re putting things is not helping much. Thanks!

(Roger) #52

I post links with information and videos about every concept. I also provide steps to reproduce the phenomena.
If, after reading these, you find there are missing parts, welcome to the club: use your imagination.

(Gustavo Adolfo) #53

@R_O I risk saying that you don’t change paradigms with links and written steps, because this is not a corporate software-engineering development cycle.
Still, regarding your steps: show us practical results.
Maybe I’m wrong, but devs need to exercise their tools on images we provide and discuss the results.
Paradigm changing is not out of scope, but you should lower your expectations for now (sorry, devs, for taking your voice; correct me if I’m wrong).
Anyway, thanks for the 40-minute DT link you provided. I found it very useful and can’t wait for an opportunity to put some things into practice myself.

(Roger) #54

I am sorry to tell you that the way to change a paradigm is by viewing the process from a different side, and in current times the best tool for that is video.
However, I appreciate that you read at least one part of my post; as I said before, comments like yours push me to go deeper, until I find that this is something big.
I had to say this because you preferred to spend 40 minutes instead of the 40 seconds it takes to do the same. Do you think this is how a “paradigm changer” behaves? Why waste your valuable time on my post when it goes against your beliefs?
Any comment is welcome, but I may say those with actual suggestions are more valuable.

(Mica) #55

A little bit of humility would go a really long way.

(Roger) #57

You bring up a subject that must be clarified: the “bad thing” is not DT, which, as I said before, is great, but the method used to “emulate” the exposure.

The fact is that the exposure in the camera must be the same as the exposure in the computer, because both represent the same concept. The software must perform the way we capture light in our cameras.

In this post I have tried to explain that programs for personal computers do not do this the way mirrorless cameras do. It must be just a question of time until the former are updated, but, because it is a complete change of paradigm, making new software may be better than changing the old.

This “upgrade” would be great because the most important processes in a RAW editor could be simplified and enhanced, opening the door to new features that today are hard to believe.

Regarding the current software on the market, the tool that best emulates the exposure behaviour is ACDSee Light EQ (even though it is not an exposure tool);
as a result, you may work faster and better with it than with other programs (given the same proficiency). But this matters only if productivity and accuracy are important to you; otherwise, it is still worth showing how PC software can “emulate” the “shadows” the way the camera exposure does.

Please do not take it personal and be patient with my outspoken nature.

The most recent concept is how high-ISO noise could easily be removed without affecting sharpness, and how a halo-removal tool would become available.

Do you think it is possible?

(Roman Lebedev) #58

I’m sorry, @R_O, but all you have said is simply out of touch with today’s reality.

Today’s cameras do not store the info you think they are storing.
They store the bitmap; they do not store anything else.
If you disagree, please provide proof.

They do not store distance measurements for each pixel.
They don’t even store spectral measurements for each pixel.

WHEN they start storing said data, THEN there will be a point in talking about the software side.


What the Fujifilm camera shows there is just a digital representation of the age-old focus/DoF indicator you can find on a lot of older manual lenses and on some more expensive digital lenses.

For example

This lens is focused at ~1.5 m, with a DoF from roughly slightly over 1.2 m to slightly over 2 m.
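Those engraved scales follow from the standard thin-lens depth-of-field formulas, so the readout can be reproduced. A minimal sketch; the 50 mm / f/8 figures and the 0.03 mm circle of confusion are assumptions for illustration, not taken from the lens in the photo:

```python
def dof_limits(f_mm, N, s_m, c_mm=0.03):
    """Near/far depth-of-field limits (in metres) for a lens of focal
    length f_mm at aperture N, focused at s_m; c_mm is the circle of
    confusion (0.03 mm is a common full-frame assumption)."""
    f, c, s = f_mm / 1000.0, c_mm / 1000.0, s_m
    H = f * f / (N * c) + f                           # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = float('inf') if s >= H else s * (H - f) / (H - s)
    return near, far

# A hypothetical 50 mm lens at f/8 focused at 1.5 m:
near, far = dof_limits(50, 8, 1.5)
print(round(near, 2), round(far, 2))   # about 1.32 m to 1.74 m
```

Past the hyperfocal distance the far limit goes to infinity, which is exactly what the long end of the engraved scale (or the Fuji bar) indicates.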

This is the same thing that the Fuji camera does[1], except that it gives a digital readout.

Also, this Affinity Photo merge is probably just focus stacking (though it might be really fancy focus stacking).

[1] Check out the video you linked earlier: the operator changes focus first before confirming the distance (which he pre-measured so he could confirm it, as he explains in the video).

(Gustavo Adolfo) #60

“Paradigm” shows up here and in five other posts from @R_O.
I think that’s the main discussion here, not whether DT is capable of achieving results similar to ACDSee’s or not.
And what are the paradigms involved here?
On ACDSee, the productivity paradigm, which means one click does (almost) it all. If you have time constraints when editing pictures (maybe because you’re a professional, maybe because you’re lazy, maybe because you simply have no time available), then it’s better to go with this paradigm.
On DT (and RT and PhF and…), the flexibility paradigm, which means avoiding, as far as I can tell, encapsulating the steps hidden by the one-click paradigm. So it’s fundamentally an anti-productivity tool. (Which doesn’t mean that tools under this paradigm can’t be set up to increase productivity through custom presets.) Evidently, by forcing the user to do all the hidden steps, it forces the user to learn each concept behind image editing, something the other paradigm doesn’t: one can achieve pretty good results without having to plunge into the theory behind it.
They are opposite paradigms, and they are set in the early development stages.
So it’s clear to me that DT won’t go that way, and this thread is useless, to say the least, if that is what it proposes.
On the other side, it’s always good for us newcomers to be aware of the paradigms involved in image-editing software, so we don’t waste our valuable time on a dead-end path.

(Christian Kanzian) #61

My impression is that this discussion boils down to something like the “learning curves” of different products.

My personal experience with software is in line with this:

Product A has lower functionality and a short learning curve; Product B has greater functionality but takes longer to learn.