Do you know if darktable has a lightness control tool?

"In both your cases you need distance information "
You are right; in that case, you could enter the values manually. If you took the picture, you know the distances. Additionally, some cameras already record the distance and the depth of field, as well as the focused zones. Those are my starting point.

" already don’ throw away that data"
I suggest doing this:
Exercise (A)

  1. Open a photo with any of the programs that you mentioned.
  2. Increase / decrease the exposure; you may use the “exposure control” or the “tone curve”, etc.

You will see that you lose contrast.

Exercise (B)
Now do the same with a mirrorless camera (DSLRs are not WYSIWYG).

  1. Point to an object.
  2. Increase / decrease the exposure.

You will see that the shadows change (not the contrast).

For me, the software from exercise (A) is wrong because it does not work the way reality does. It does not use the dynamic range either; it just applies a gamma correction (or something like it). This may be a legacy from the past, when PCs had only 4 bits per channel, but now we have 16 bits per channel or more. Currently only ACDSee’s Light EQ processes the exposure “well”, but, as shown here, most people do not understand this brilliant invention.
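(A minimal sketch of the distinction being described here, with illustrative numbers rather than values taken from any of these programs: a true exposure change multiplies scene-linear values and preserves the ratios between tones, while a gamma-style adjustment bends them, which is why the apparent contrast changes.)

```python
import numpy as np

# Three illustrative scene-linear tones: deep shadow, mid grey, bright area
linear = np.array([0.05, 0.18, 0.50])

# A real exposure change is multiplicative in linear light:
# +1 EV doubles every value, so the ratios between tones are preserved.
plus_one_ev = linear * 2.0          # -> [0.10, 0.36, 1.00]

# A gamma-style "brighten" bends the curve instead: shadows are lifted
# more than highlights, so the ratios (the apparent contrast) change.
gamma_lift = linear ** (1 / 1.8)

print(plus_one_ev / plus_one_ev[1])  # same ratios as linear / linear[1]
print(gamma_lift / gamma_lift[1])    # ratios have shifted
```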

This post brings me to the main topic, the Exposition Zones. The final conclusion was that DT cannot do this because it needs to “evolve” its exposure/tone-curve management first. I guess it is easier to create a new program backed by a camera brand than to change DT’s core and explain to its long-time users what happened.

I also said the Exposition Zones would be the “easy” feature; the “Focus Equalizer” is indeed complex. But I invite you to take a look at Affinity’s Focus Merge feature (here is a video: https://www.youtube.com/watch?v=ohtMNDYCxH8) and at how the Fujifilm X-T20 manages distances and focus (https://www.youtube.com/watch?v=YaOytMS7Khg).

Can you tell me which cameras those are? I looked into things like that recently, and so far I am only aware of some phones that store depth data.

AFAIK, among the usual consumer/semi-pro/pro cameras there are some which record the distance to the subject (calculated from the distance to the sensor), and there are also some which record in the metadata the x/y coordinates of the point the camera focused on (no depth information).

I know only about:
Exif.Photo.SubjectDistance
Exif.Photo.SubjectDistanceRange
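For reference, a minimal sketch of reading those two tags with Python and Pillow (the file name is hypothetical; 0x9206 and 0xA40C are the standard EXIF tag IDs for SubjectDistance and SubjectDistanceRange):

```python
from PIL import Image

# Standard EXIF tag IDs for the two fields named above
SUBJECT_DISTANCE = 0x9206        # Exif.Photo.SubjectDistance (metres, rational)
SUBJECT_DISTANCE_RANGE = 0xA40C  # Exif.Photo.SubjectDistanceRange (enum 0-3)

img = Image.open("photo.jpg")          # hypothetical file
exif = img.getexif().get_ifd(0x8769)   # the Exif sub-IFD holds both tags

print("SubjectDistance:", exif.get(SUBJECT_DISTANCE))
print("SubjectDistanceRange:", exif.get(SUBJECT_DISTANCE_RANGE))
```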

To get the depth data you need to:

  1. create a focus stack
  2. compute a depth map from it

That’s more or less what the Android Photo App does.
But I don’t know of any camera that does this.
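A rough sketch of those two steps in Python with OpenCV, assuming an already-aligned focus stack (the file names are hypothetical): each pixel is assigned the index of the stack slice where it is sharpest, which serves as a coarse relative depth map.

```python
import cv2
import numpy as np

def depth_from_focus(paths):
    """Coarse relative depth from an aligned focus stack: per pixel,
    pick the slice where local sharpness (Laplacian response) peaks."""
    grays = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float64)
             for p in paths]
    # Per-slice sharpness map; blurring stabilises the noisy measure
    sharp = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
        for g in grays
    ])
    # The winning slice index per pixel orders pixels by focus distance,
    # i.e. a coarse depth map (in "slice number" units, not metres)
    return np.argmax(sharp, axis=0)

# Hypothetical stack shot from near to far focus
depth = depth_from_focus(["stack_0.jpg", "stack_1.jpg", "stack_2.jpg"])
```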

I did not explain myself well: by “record” I meant they “show” the information, but I have not verified whether they “store” it.
I guess they do not keep the distance in the metadata because it is meaningless at the moment.
If you cannot watch the video, here is the manual explaining how the distance, depth of field, and focus are “shown”:
http://fujifilm-dsc.com/en/manual/x-t20_v11/taking_photo/manual-focus/index.html

The process is: contrast makes shadows, and shadows make objects. This is why it is important FIRST to improve how the software processes the exposure: it is necessary to add a “layer” and process it as shadows instead of contrast. I expect this will also improve the definition of object borders and, as a result, the focus. This would be clarity 3.0.

In English it is “exposure”; it is not the same word as in Spanish.


Simply because cameras do a lot of internal image processing, which is not comparable to the sliders in any software application mentioned here. Maybe the raw processors shipped with the cameras can produce similar results.

You are absolutely right.
Maybe Exposition Zones can be used as a trademark :wink:

Indeed there is a good explanation for the incorrect behaviour, especially since Capture One is made by Phase One. Actually, even ACDSee Light EQ is not an exposure tool but a tone-mapping one.
However, after bringing here the great idea of including something better in DT, I realized, and hopefully you did too, that a new paradigm must be used to manage exposure and focus.
If you liked the depth-of-field concept, here is another one: what would happen to the high ISO noise if you manipulated the image by shadows, as I suggest? It dissipates, because noise does not behave like a shadow. Why? Because no two pixels are equal in a shadow (except in 8-bit or lower). Additionally, it may be possible to create a halo-removal tool.

I have posted links with information and videos about every concept. I have also provided steps to reproduce the phenomena.
If after reading these you find there are missing parts, welcome to the club: use your imagination.

@R_O I risk saying that you don’t change paradigms with links and written steps, because this is not a corporate software engineering development cycle.
As for your steps, show us practical results.
Maybe I’m wrong, but the devs need to exercise their tools with images we provide and discuss the results.
Paradigm changing is not out of scope, but you should lower your expectations for now (sorry, devs, for taking your voice; correct me if I’m wrong).
Anyway, thanks for the 40-minute DT link you provided. I found it very useful and can’t wait for an opportunity to put some things into practice myself.

I am sorry to tell you that the way to change a paradigm is to view the process from a different side, and in current times the best tool for that is video.
However, I appreciate that you read at least one part of my post; as I said before, comments like yours let me dig deeper until I find that this is something big.
I had to say this because you prefer to spend 40 minutes instead of the 40 seconds it takes to do the same thing. Do you think that is how a “paradigm changer” behaves? Why waste your valuable time on my post when it goes against your beliefs?
Any comment is welcome, but I must say that those with actual suggestions are more valuable.

A little bit of humility would go a really long way.


You bring up a subject that must be clarified: the “bad thing” is not DT, which, as I said before, is great, but the method used to “emulate” the exposure.

The fact is that the exposure in the camera must be the same as the exposure in the computer, because both represent the same concept. The software must behave the way we capture light in our cameras.

In this post I have tried to explain that the programs for personal computers do not do this the way mirrorless cameras do. It must be just a question of time until the former are updated but, because it is a complete change of paradigm, making a new program may be better than changing the old ones.

This “upgrade” would be great because the most important processes in a RAW editor could be simplified and enhanced, opening the door to new features that today are hard to imagine.

Regarding the current software on the market, the tool that best emulates the exposure behaviour is ACDSee Light EQ (even though it is not an exposure tool).
As a result, you may work faster and better with it than with other programs (given the same proficiency), but this matters only if productivity and accuracy are important to you; otherwise, its value is in showing how PC software can “emulate” the “shadows” the way the camera exposure does.

Please do not take it personally, and be patient with my outspoken nature.

The most recent concept is how high ISO noise could be removed easily without affecting sharpness, and how a halo-removal tool would become available.

Do you think it is possible?

I’m sorry, @R_O, but everything you have said is simply out of touch with today’s reality.

Today’s cameras do not store the info you think they are storing.
They store the bitmap; they do not store anything else.
If you disagree, please provide proof.

They do not store distance measurements for each pixel.
They don’t even store spectral measurements for each pixel.

WHEN they start storing said data, THEN there will be a point in talking about the software side.

What the Fujifilm camera shows there is just a digital representation of the age-old focus/DoF indicator you can find on a lot of older manual lenses and some more expensive digital lenses.

For example:
[photo IMGP3852: a lens with an engraved focus-distance/DoF scale]
This lens is focused at ~1.5 m, with a DoF from slightly over 1.2 m to slightly over 2 m.

This is the same thing that Fuji camera does,[1] except it gives a digital readout.
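For what it’s worth, the readings on such a scale follow from the standard thin-lens depth-of-field formulas. A minimal sketch that reproduces roughly the numbers above, assuming a 50 mm lens at f/14 and a 0.03 mm circle of confusion (assumed values, not read off the pictured lens):

```python
def dof_limits(f, N, c, s):
    """Thin-lens DoF limits; f = focal length, N = f-number,
    c = circle of confusion, s = focus distance, all in millimetres."""
    H = f * f / (N * c) + f                        # hyperfocal distance
    near = H * s / (H + (s - f))
    far = H * s / (H - (s - f)) if (s - f) < H else float("inf")
    return near, far

near, far = dof_limits(f=50, N=14, c=0.03, s=1500)
print(f"{near / 1000:.2f} m to {far / 1000:.2f} m")  # ~1.21 m to ~1.98 m
```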

Also, this Affinity Photo merge is probably just focus stacking (though it might be really fancy focus stacking).
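If so, the merge step can be sketched with the same sharpness-selection idea as the depth sketch earlier: keep each pixel from the slice where it is sharpest (the aligned stack and file names are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical, already-aligned focus stack
slices = [cv2.imread(p) for p in ("stack_0.jpg", "stack_1.jpg", "stack_2.jpg")]

# Per-pixel sharpness of each slice (Laplacian magnitude on the grey channel)
sharp = np.stack([
    np.abs(cv2.Laplacian(
        cv2.cvtColor(s, cv2.COLOR_BGR2GRAY).astype(np.float64), cv2.CV_64F))
    for s in slices
])
best = np.argmax(sharp, axis=0)          # sharpest slice index per pixel

# Assemble the all-in-focus result pixel by pixel from the winning slices
h, w = best.shape
merged = np.stack(slices)[best, np.arange(h)[:, None], np.arange(w)]
cv2.imwrite("merged.jpg", merged)
```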


[1] Check out the video you linked earlier: the operator changes focus first before confirming the distance (which he pre-measured so he could confirm it, as he explains in the video).

My impression is that this discussion boils down to something like the “learning curves” of different products.

My personal experience with software is in line with:

Product A has lower functionality and a short learning curve. Product B has greater functionality but takes longer to learn.


@R_O – you’ve given us the sales pitch, you’ve tested us, you’ve given us assignments about shadow manipulation. You’ve talked about how great this feature would be, a lot. You’ve shared videos.

You’ve been asked by a developer for sample images and you haven’t provided them. After 57 posts in this thread, now would be a good time to provide those images.

How about a mock-up of what the tool’s panel would look like in darktable? How about some of the math or the technical techniques necessary to build such a tool?


This explanation goes for gadolf too:
Imagine that you have to go from point (A) to point (B), a distance of 100 m.
You can walk, or you can ride a bicycle. Indeed, each is a different paradigm (with its own learning curve) but, in a simplistic view, both do the same thing: you cross the field, you use your legs, etc.
Which paradigm is faster and less effortful?
Do you think the best-trained walker can beat a poorly trained cyclist? Is it possible to do things with the bicycle that are almost impossible on foot?
With this in mind, darktable is walking and ACDSee Light EQ is cycling, BUT NOT BECAUSE IT ADJUSTS THE LIGHTS WITH ONE CLICK; rather, because it manages the exposure the same way your camera does. This is why it takes you 40 seconds instead of 40 minutes to do the same thing, just as you need seconds instead of minutes to cover the 100 m on your bike.
However, Light EQ is not an exposure tool but a tone-mapping one; it is just a proof of concept showing that it is possible to adjust lightness by emulating shadows instead of by a gamma correction, as all the RAW editors do (DT, and ACDSee itself).
Now, thanks to this post, my Exposition Zones proposal is to create a new program whose core is exposure management (the tone curve) based on shadows, taking advantage of the benefits this approach could offer. I want to make an electric bike.

Of course, you may use DT or whatever software you prefer, just as you can walk or ride the bicycle. Your choice depends on your purpose, but they are not the same, nor does the result depend only on your skills.

I guess you are right, but this thread is no longer suitable; as you say, it is now large and, I may add, confusing. The worst part is that it keeps growing with topics that were already explained, raised by people who only read a small part, plus the noise from those who post things out of context. This trend will grow as more people are attracted by the number of comments.
I guess the best option is to freeze the thread by not answering new comments; this way we could build a clean timeline with the newest comments, but I would prefer to use a moderated forum to have more control.

In the meantime, I am trying to reach someone on Fuji’s software development team to learn their point of view. If the concept is valid, it would be great to get them involved.