"In both your cases you need distance information "
You are right; in that case, you can enter the values manually. If you took the picture yourself, you know the distances. Additionally, some cameras already record the subject distance and the depth of field, as well as the focused zones. Those are my starting point.
" already don’t throw away that data"
I suggest doing this:
- Open a photo with any of the programs that you mentioned.
- Increase or decrease the exposure; you can use the “exposure control”, the “tone curve”, etc.
You will see that you lose contrast.
Now do the same with a mirrorless camera (DSLRs are not WYSIWYG).
- Point to an object.
- Increase or decrease the exposure.
You will see that the shadows change (not the contrast).
For me, the software from exercise (A) is wrong because it does not work the way reality does. It does not use the dynamic range either; it just applies a gamma correction (or something like it). This may be a legacy from the past, when PCs had only 4 bits per channel, but now we have 16 bits per channel or more. Currently only ACDSee’s Light EQ processes the exposure “well”, but, as shown here, most people do not understand this brilliant invention.
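To make the difference concrete, here is a minimal sketch (my own illustration, not any program’s actual implementation) comparing a gamma-style brightness tweak with a true exposure change applied to linear light. The gamma curve compresses the ratio between tones, which is the contrast loss from exercise (A); multiplying linear values by 2^EV, like opening the aperture one stop, preserves the ratios and only moves the shadows, which is what the camera does:

```python
# Hypothetical tone values in [0, 1]; the functions and numbers below
# are illustrative assumptions, not code from any of the programs discussed.

def gamma_brighten(v, gamma=0.7):
    """Display-referred tweak: lifts midtones, but compresses tone ratios."""
    return v ** gamma

def exposure_ev(v, ev=1.0):
    """Scene-referred exposure: multiply linear light by 2^EV (one stop
    per EV). Tone ratios are preserved until values clip at white."""
    return min(v * 2 ** ev, 1.0)

shadow, mid = 0.05, 0.40
print(mid / shadow)                                  # ratio before: 8.0

# Gamma-style slider: the ratio shrinks, so perceived contrast drops
print(gamma_brighten(mid) / gamma_brighten(shadow))  # about 4.3

# True exposure: shadows rise, but the ratio stays 8.0 (no contrast loss)
print(exposure_ev(mid) / exposure_ev(shadow))        # 8.0
```

The point of the sketch is only the ratio: a curve-based adjustment changes the relationship between tones, while a linear exposure multiply does not.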
This brings me to the main topic, the Exposure Zones. The final conclusion was that DT cannot do this because it needs to “evolve” its exposure/tone-curve management first. I guess it is easier to create a new program backed by a camera brand than to change DT’s core and explain to its long-time users what happened.
I also said the Exposure Zones would be the “easy” feature; the “Focus Equalizer” is indeed something complex, but I invite you to take a look at Affinity’s Focus Merge feature (here is a video: https://www.youtube.com/watch?v=ohtMNDYCxH8) and at how the Fujifilm X-T20 manages distances and focus (https://www.youtube.com/watch?v=YaOytMS7Khg).