Recover OOC JPEG detail/sharpness from raw with darktable

I guess for low pass we can use the new blurs module now?

Unfortunately, no. The blurs module is limited to 128 pixels and has no de-saturation function.

The diffuse or sharpen module is better suited for this, since it allows much larger blur radii and much finer control over the different aspects of the blur. Unfortunately, I miss a de-saturation function in this module as well.
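For what it's worth, the lowpass recipe is essentially just a large-radius blur plus a chroma reduction, so it's easy to picture what's missing. A rough sketch of the idea in Python (numpy/scipy; the radius and saturation factor are made-up illustrations, not darktable's actual implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_desaturate(lab, radius=150.0, saturation=0.5):
    """Sketch of a lowpass-style operation: blur all Lab channels with a
    large radius, then scale the chroma channels (a*, b*) to de-saturate.
    `lab` is an (H, W, 3) float array in CIE Lab; parameters are illustrative."""
    out = np.empty_like(lab)
    for c in range(3):                 # blur L*, a* and b* alike
        out[..., c] = gaussian_filter(lab[..., c], sigma=radius)
    out[..., 1:] *= saturation         # shrink a*/b* toward 0 = less saturated
    return out
```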

If the highlights are not clamped, Lab is a superior color space for a lot of operations compared to linear RGB and linear xyY, and totally adequate for the scene-referred workflow

This is not my quote but from the text in the link I gave in the post.

If you want to discuss it, I recommend creating a new topic, because we are slowly drifting away from the main topic here, and addressing it to Aurélien, who wrote this article.
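As an aside on the "not clamped" caveat in that quote: the CIE L* formula itself copes with values above diffuse white; it is an early clamp that throws the highlight away. A quick numeric check in plain Python (standard CIE 1976 definition, nothing darktable-specific):

```python
def cie_L(Y, Yn=1.0):
    """CIE 1976 L* from relative luminance Y (Yn = reference white)."""
    t = Y / Yn
    delta = 6 / 29
    f = t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16

print(cie_L(1.0))            # 100.0 -> diffuse white
print(cie_L(4.0))            # ~168  -> unclamped highlight still gets a finite L*
print(cie_L(min(4.0, 1.0)))  # 100.0 -> clamping first flattens it to white
```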


But you have learned something wrong from that article.

Done



IMG_1946.CR2.xmp (12.3 KB)

My effort as I might do it…
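For anyone who wants to preview a shared sidecar like this without touching their own edit history: it can be rendered from the command line with `darktable-cli IMG_1946.CR2 IMG_1946.CR2.xmp output.jpg` (assuming a darktable-cli that matches the darktable version the edit was made with).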


I have to go and search, but somehow I thought it was changed somewhere in the 3.6/3.8 cycle. Could be completely wrong about this.

Filmic is already at the end by default, but that doesn’t mean you can’t or shouldn’t put stuff after it.
‘local contrast’ now serves as a sweetener you use after filmic, and from what I’ve read I was under the impression that the devs were fine with it as it is. Again, I could have misunderstood. The forums here are not always a reliable source.

Still Lab, according to the darktable 3.8 user manual - local contrast


IMG_1946.CR2.xmp (15.7 KB)

IMG_1946_01.CR2.xmp (13.8 KB)


So it seems… Weird. I guess I must be completely wrong about this.

But that also means there is now no scene-referred way to do local contrast, except diffuse or sharpen (although with its performance cost that seems impractical).

I would have sworn I had some edits where I slammed the exposure, then used the contrast equalizer and then filmic, and the highlights were not lost… Maybe it was just a dream :pensive:.
Newborn baby here, so my mind is a bit hazy…

Why in that order? I have read somewhere that we should undo effects in the reverse of the order in which they were created. So, first the lens blur was created and then the sensor (Bayer matrix) issues. Why not start with sharpening the demosaicing, then, and afterwards continue with the lens deblur?

If I understood correctly…

https://docs.darktable.org/usermanual/3.8/en/module-reference/processing-modules/diffuse/#using-multiple-instances-for-image-reconstruction

Quote:

" While more than one of these issues can affect the same picture at the same time, it is better to try to fix them separately using multiple instances of the module. When doing so, ensure the issues are corrected from coarse scale to fine scale, and that denoising always happens first. That is, your instances should appear in the following pipe order:

  1. denoise,
  2. local contrast enhancement,
  3. dehaze,
  4. lens blur correction,
  5. sensor and demosaic correction.

Starting with the coarser-scale reconstructions reduces the probability of introducing or increasing noise when performing the finer-scale reconstructions. This is unintuitive because these processes don’t happen in this order during the formation of the image. For the same reason, denoising should always happen before any attempt at sharpening or increasing acutance."

… the order is correct.
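To make the coarse-to-fine ordering concrete, here is a toy sketch in Python (numpy/scipy; the sigmas and amounts are invented, and plain unsharp masking merely stands in for the anisotropic diffusion that diffuse or sharpen actually performs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(img, sigma, amount):
    """Boost detail at one scale: add back the difference to a blurred copy."""
    return img + amount * (img - gaussian_filter(img, sigma=sigma))

def reconstruct(img):
    """Toy pipeline mirroring the manual's ordering, for a 2-D float channel."""
    img = gaussian_filter(img, sigma=1.0)        # 1. denoise first (crude stand-in)
    img = unsharp(img, sigma=50.0, amount=0.3)   # 2. local contrast (coarsest)
    img = unsharp(img, sigma=20.0, amount=0.2)   # 3. dehaze-ish (coarse)
    img = unsharp(img, sigma=5.0, amount=0.5)    # 4. lens blur correction (fine)
    img = unsharp(img, sigma=1.0, amount=0.5)    # 5. demosaic softness (finest)
    return img
```

Per the quote above, doing the fine-scale passes first would risk amplifying noise that every subsequent pass then treats as real detail.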


Neat, thanks! I noticed that you use color look up table. Is there a scene-referred workflow alternative?

You can do similar things using the brightness and colorfulness tabs in color calibration, and using the channel mixer and color balance rgb, but the CLUT is quick. Really, you can use all the modules in darktable… sometimes there are caveats, but you can use them. Boris recently demonstrated the use of the CLUT in one of his video series… he also makes use of the lowpass filter…

Just past the 19-minute mark here… but you may want to watch from the start of the edit for that image…


Working with software where “order” is the first order of business (rawproc), it’s really neat to see this discussion going on about darktable.

That said, I’m going to throw a wrench into the discussion: rely less on what others tell you is “good”, and find out why for yourself. I’ll give you my simplistic basis, and you can consider it: the camera records a scene in an array of pixels where the measurements are “energy-based”, that is, related to the light energy presented to each pixel from the scene. “Scene-referred” is another name for this, and the deal with our processing is to disturb that energy relationship as little as possible in the early operations of raw conversion, saving the departure from it, into a more perceptually pleasing presentation (e.g. by filmic and the other tone-curve tools), for the end of the toolchain.

Understanding this, and what each tool’s effect on it is, will help you make better ordering decisions on your own, rather than just going by “what @anon41087856 sez…” :laughing:
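To ground that a bit, here’s a toy illustration of what disturbing the energy relationship means (plain Python; the shoulder curve is only a stand-in for filmic or any other display transform):

```python
def tone(v):
    """Toy tone curve with a highlight shoulder (stand-in for filmic, etc.)."""
    return v / (1.0 + v)

a, b = 0.1, 4.0   # a shadow and a bright highlight, scene-referred

# Average (e.g. a blur) in linear light, then tone-map: the highlight's
# energy contributes to the mix the way it did in the scene.
linear_first = tone((a + b) / 2)         # ~0.67

# Tone-map first, then average: the highlight was already compressed,
# so the mix no longer reflects the scene's energy balance.
display_first = (tone(a) + tone(b)) / 2  # ~0.45
print(linear_first, display_first)
```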


FWIW, I would rather just pay attention to the arguments of people who have invested time into understanding these issues, and decide if I find them convincing or not. To form a serious opinion about this would require literally years of investment on my part, comparable to a graduate degree.

The problem with the experimental approach you advocate is that I would need to decide between methods that work 98% and 99.9% of the time, which is tricky because one may not run into all the issues at first.

My understanding is that the problem with the historical/display-referred/LAB/etc workflows is not that they never ever work, because then no one would have used them. The problem is that they can fail under some circumstances, and scene-referred tries to address these failures.


Fair enough, I know the time investment well, having done just that over the past seven years. It can be hard sometimes to separate from the inquiry long enough to go take pictures…

I think you’ll get to that eventually, though, as you go through bouts of “what the hell is going on here???..” and boom, you dig into it… :crazy_face:


This is a good point to consider in this discussion, as the old tools aren’t just uniformly bad. Sometimes it’s a matter of degree; a scooch of color saturation after the tone curve might not look egregious, but dial it up from there and watch the progressive circus…

My software only has crappy HSL color saturation; I haven’t spent time figuring out an alternative as of yet. So, I still use it, but only right after the initial processing to linear RGB, and then never more than just a few increments. Just keeping it before the tone curve has helped retain a bit of its usefulness…

Actually, at this point (3.8) I am fine with the complexity of Darktable. Having mostly made the transition to scene-referred, I am trying to focus on the 10-ish modules that I need 99% of the time and ignore everything else.

Currently my core workflow includes:

  1. filmic rgb,
  2. color balance rgb,
  3. diffuse or sharpen,
  4. color calibration,
  5. crop,
  6. tone eq,
  7. exposure,
  8. rotate and perspective,
  9. grain,
  10. retouch,
  11. denoise (profiled).

(I am not counting orientation, chromatic aberrations, lens correction, highlight reconstruction, demosaic, etc., because these would naturally be enabled with safe defaults and are easy to deal with when I need to.)

So the bottom line is that whenever someone says “hey, you can now ignore module X because module Y you are already using has the same functionality”, I am sold.
