Do you know if darktable has a lightness control tool?

I am sorry to tell you that the way to change a paradigm is to view the process from a different angle, and these days the best tool for that is video.
However, I appreciate that you read at least part of my post. As I said before, comments like yours push me to dig deeper until I find that this is something big.
I had to say this because you prefer to spend 40 minutes instead of 40 seconds doing the same thing. Do you think this is how a “paradigm changer” behaves? Why waste your valuable time on my post when it goes against your beliefs?
Any comment is welcome, but I must say that those with actual suggestions are more valuable.

A little bit of humility would go a really long way.

3 Likes

You bring up a subject that must be clarified. The “bad thing” is not darktable (as I said before, it is great) but the method used to “emulate” exposure.

The fact is that exposure in the camera must be the same as exposure in the computer, because both represent the same concept. The software must behave the way our cameras capture light.

In this post I have tried to explain that programs for personal computers do not do this the way mirrorless cameras do. It should be only a question of time until the former are updated, but because it is a complete change of paradigm, writing new software may be easier than changing the old programs.

This “upgrade” would be great because the most important processes in a RAW editor could be simplified and enhanced, opening the door to new features that are hard to believe today.

Regarding the software currently on the market, the tool that best emulates exposure behaviour is ACDSee Light EQ (even though it is not an exposure tool).
As a result, you may work faster and better with it than with other programs (given the same proficiency), but this matters only if productivity and accuracy are important to you. Otherwise, it is still worth showing how PC software can “emulate” the “shadows” the way camera exposure does.

Please do not take it personally, and be patient with my outspoken nature.

The most recent concept is how high-ISO noise could be removed easily without affecting sharpness, and how a halo-removal tool could be made available.

Do you think it is possible?

I’m sorry, @R_O, but all you have said is simply out of touch with today’s reality.

Today’s cameras do not store the information you think they are storing.
They store the bitmap; they do not store anything else.
If you disagree, please provide proof.

They do not store distance measurements for each pixel.
They don’t even store spectral measurements for each pixel.

WHEN they start storing such data, THEN there will be a point in talking about the software side.

What the Fujifilm camera shows there is just a digital representation of the age-old focus/DOF indicator you can find on a lot of older manual lenses and some more expensive digital lenses.

For example:
IMGP3852
This lens is focused at ~1.5 m, with a DOF from slightly over 1.2 m to slightly over 2 m.

This is the same thing the Fuji camera does[1], except it gives a digital readout.
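For reference, the near and far limits such an indicator displays follow from the standard hyperfocal-distance formulas. A minimal sketch (the focal length, aperture, and circle of confusion in the usage example are made-up illustration values, not read off the photo above):

```python
import math

def dof_limits(focal_mm, f_number, focus_mm, coc_mm=0.03):
    """Near/far depth-of-field limits from the standard hyperfocal formulas.

    All distances in millimetres; coc_mm is the circle of confusion
    (0.03 mm is a common full-frame assumption).
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        far = math.inf  # everything beyond the near limit is in focus
    else:
        far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

# Hypothetical example: 50 mm lens at f/8 focused at 1.5 m
near, far = dof_limits(50, 8, 1500)
```

With those made-up parameters the DOF runs from roughly 1.32 m to 1.74 m, which is the kind of bracket the lens scale (or Fuji's digital readout) reports.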

Also, this Affinity Photo merge is probably just focus stacking (might be really fancy focus stacking, though).


[1] Check out the video you linked earlier: the operator changes focus first before confirming the distance (which he pre-measured so he could confirm it, as he explains in the video).

My impression is that this discussion boils down to something like “learning curves” of different products.

My personal experience with software is in line with:

Product A has lower functionality and a short learning curve. Product B has greater functionality but takes longer to learn

2 Likes

@R_O – you’ve given us the sales pitch, you’ve tested us, you’ve given us assignments about shadow manipulation. You’ve talked about how great this feature would be, a lot. You’ve shared videos.

You’ve been asked by a developer for sample images and you haven’t provided them. After 57 posts in this thread, now would be a good time to provide those images.

How about a mock-up of what the tools panel would look like in darktable? How about some of the math or technical techniques necessary to build such a tool?

4 Likes

This explanation goes for gadolf too:
Imagine that you have to go from point (A) to point (B), a distance of 100 m.
You can walk or you can ride a bicycle. Indeed, each one is a different paradigm (with its own learning curve), but, in a simplistic view, both do the same thing: you cross the field, you use your legs, and so on.
Which paradigm is faster and less effortful?
Do you think the best-trained walker can beat a poorly trained cyclist? Is it possible to do things with the bicycle that are almost impossible on foot?
With this in mind, darktable is walking and ACDSee Light EQ is cycling, BUT NOT BECAUSE IT ADJUSTS THE LIGHTS WITH ONE CLICK; it is because it manages exposure the same way your camera does. This is why you take 40 seconds instead of 40 minutes to do the same thing, just as you use seconds instead of minutes to cover the 100 m on your bike.
However, Light EQ is not an exposure tool but a tone-mapping one; it is just a proof of concept showing that it is possible to adjust lightness by emulating shadows instead of by a gamma correction, as all the RAW editors (darktable and ACDSee itself included) do.
Now, thanks to this post, my Exposure Zones proposal is to create new software that uses shadow-based exposure management (the tone curve) as its core, taking advantage of the benefits this approach would offer. I want to build an electric bike.

Of course, you may use darktable or whichever software you prefer, just as you can walk or use the bicycle. Your choice depends on your purpose, but they are not the same thing, nor does the result depend only on your skills.
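To make the contrast being argued about concrete: a gamma adjustment raises every pixel, while a masked “shadow lift” raises only the dark tones. This is only a sketch of that distinction; the `shadow_lift` weighting below is invented for illustration and is not Light EQ’s actual algorithm:

```python
import numpy as np

def gamma_lift(img, gamma=0.7):
    # Gamma correction remaps every pixel; shadows AND highlights brighten.
    return np.clip(img, 0.0, 1.0) ** gamma

def shadow_lift(img, amount=0.5, pivot=0.5):
    # Masked lift: the weight falls to zero at and above the pivot, so only
    # tones darker than the pivot are raised (a crude "shadows" control).
    w = np.clip(1.0 - img / pivot, 0.0, 1.0)
    return np.clip(img + amount * w * (pivot - img), 0.0, 1.0)

pixels = np.array([0.1, 0.9])       # one shadow pixel, one highlight pixel
after_gamma = gamma_lift(pixels)    # both values increase
after_shadows = shadow_lift(pixels) # only the 0.1 value increases
```

Whether a band-relative shadows control is genuinely faster than a curve is the usability question the thread is circling; the code only shows the two operations are mathematically different.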

I guess you are right, but this post is no longer manageable. As you say, it is now long, and I may add confusing; worse, it keeps growing with topics that were already explained, from people who read only a small part, plus the noise from those who post things out of context. This trend will grow as more people are attracted by the number of comments.
I guess the best option is to freeze the post by not answering new comments; that way we may build a clean timeline with the newest comments, but I would prefer to use a managed forum, to have more control.

In the meantime, I am trying to reach someone on Fuji’s software development team to get their point of view. If the concept is valid, it would be great to get them involved.

To be quite honest, using Harry Durgin’s video to prove that editing in darktable is slow is very disrespectful and disingenuous. He’s making a video to teach people how to edit, he talks through his decision making, often pauses to explain things, and shows the viewer the effects of a filter he is working with. Because he is making an educational video, it is at a slower pace.

4 Likes

No, I am making a factual comparison, because doing the same thing with Light EQ, without the explanations, takes 10 seconds instead of 40 (WITHOUT USING THE AUTO BUTTON). I tested it myself in case a comment like yours came up.
I also verified whether the video uses a common method, and at least on YouTube most people do it the same way, no matter what tool they use (even ACDSee suggests such a method to promote its editor).
The method is time-consuming in itself, no matter what software you use, especially because you get a new result with every adjustment instead of seeing them all together.
How long would you take? Give Light EQ a chance in STANDARD mode (learning curve included) and see the difference for yourself.

@R_O I think you missed the most important part of my last post:

I’d say two of those three things would be bare minimum to getting this discussion back on track, otherwise I think you’ve lost everyone’s attention somewhere in the paradigm void.

2 Likes

There have been a few points I’d like to answer to, but since they are scattered all over the thread I’ll just have a single post.

  1. darktable, RT and all the other tools don’t use a gamma curve for exposure; they use the correct and trivially simple approach of multiplication. Of course that brightens the shadows too, but that’s the very definition of exposure. So complaining that changing the exposure of an image doesn’t increase the contrast in the image is just silly.
  2. The proposed feature is supposed to be able to add new shadows to objects. For that to work, a simple approach that takes the existing brightness in the image isn’t enough, so it’s not a tone-mapping operator. Instead you need a depth map of the scene, which regular cameras don’t give you. Exceptions are systems with more than one camera (stereo rigs or more) and light-field cameras, and even for those the results do not work well in many cases.
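The multiplication point in item 1 can be shown in a couple of lines. A minimal sketch of the distinction, not darktable’s actual pipeline code:

```python
import numpy as np

def exposure(img_linear, ev):
    # Exposure on linear (scene-referred) data is plain multiplication:
    # +1 EV doubles every value, lifting shadows and highlights alike,
    # exactly as if more light had hit the sensor.
    return img_linear * (2.0 ** ev)

def gamma(img_linear, g):
    # A gamma curve, by contrast, is a nonlinear remapping that changes
    # the tone distribution; it does not simulate more light.
    return np.clip(img_linear, 0.0, None) ** (1.0 / g)

img = np.array([0.1, 0.4])
plus_one_ev = exposure(img, 1.0)   # every value exactly doubles
encoded = gamma(img, 2.2)          # dark values are lifted disproportionately
```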
7 Likes

More or less, yeah. Divide the image into N luminosity sections. I guess the interesting part is that it gives you both highlights and shadows, so you can increase contrast in a particular zone.

1 Like

@paperdigits So the point is how they define those zones. In conventional luminosity masks, a shadows slider applied to a pure highlights band would have no effect at all, correct?

1 Like

I’ve never used ACDSee, so I don’t know exactly, but I’d guess the shadows are relative to the dynamic range of the luminosity band, so the tool applied to some clouds would darken the darkest part of that band.
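That guess about band-relative shadows can be sketched as code. This is purely a hypothetical model of Light EQ’s behaviour; the function and its weighting are made up for illustration:

```python
import numpy as np

def band_relative_shadows(lum, lo, hi, amount):
    """Darken the darker pixels *within* a luminosity band [lo, hi).

    The 'shadows' are defined relative to the band's own range, so even a
    pure-highlights band (e.g. bright clouds) has a darkest part that gets
    pushed down, unlike a global shadows mask which would ignore it.
    """
    out = lum.copy()
    mask = (lum >= lo) & (lum < hi)
    if not mask.any():
        return out
    # Position of each pixel inside the band: 0 = band's darkest tone.
    t = (lum[mask] - lo) / (hi - lo)
    # Weight is strongest at the bottom of the band, zero at the top.
    out[mask] -= amount * (1.0 - t) * (hi - lo)
    return np.clip(out, 0.0, 1.0)

# Highlights-only band 0.7..1.0: the 0.75 pixel darkens most,
# the 0.95 pixel barely moves, the 0.3 pixel is untouched.
clouds = np.array([0.75, 0.95, 0.3])
result = band_relative_shadows(clouds, 0.7, 1.0, 0.5)
```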

1 Like

The look of this example reminds me of the Old Oak - A Tutorial

You can do the same in dt with two instances of the tone curve with different modes.
Here is an image where I tried to copy that look:
Imgur

2 Likes

I watched this video

But what is the difference between a tone curve in dt divided into 10 to 20 points and this Light EQ?
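For comparison, a tone curve with many nodes is just a fixed one-dimensional remap built from control points; it maps a given input lightness to the same output everywhere in the image. A rough sketch (the node values are invented, and darktable actually fits a smooth spline through its nodes rather than the linear interpolation used here):

```python
import numpy as np

# Control points of a hypothetical S-ish curve: input lightness -> output.
xs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ys = np.array([0.0, 0.20, 0.5, 0.82, 1.0])

def apply_curve(lum):
    # Every pixel with the same input lightness gets the same output,
    # regardless of which part of the image it sits in.
    return np.interp(lum, xs, ys)

darkened_shadow = apply_curve(0.25)   # pulled down to 0.20 by the curve
midtone = apply_curve(0.5)            # unchanged at the curve's pivot
```

Adding more nodes refines the shape but does not change this global, position-independent character, which is where a zone-based tool could in principle behave differently.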

1 Like

@pk5dark Do you mean scale chroma automatic in XYZ for the first curve and automatic in RGB for the second?

I did this and saw some results:
Before:

After:

EDIT: But… what does doing that mean?

@gadolf

IIRC, one curve with L only in manual mode and the second in automatic mode.

The tone curves look like this for the example posted above.

tone_curve

1 Like