Hello again, darktable

We came back home from vacation a few days ago, with about 1000 new photos on my memory cards.

In the last few months, I have mostly been using commercial software, such as Capture One and Lightroom. So that’s what I intended to do here as well.

But it didn’t work out. Capture One’s import window is broken by a bug (and has been for months), and Lightroom’s import is painfully slow, so I imported with darktable instead. Which just worked. And in contrast to both Lightroom and Capture One, its duplicate detection doesn’t take several minutes to figure things out; it does it instantly. No fuss.

When it comes to editing, I used Lightroom during the last year, but just couldn’t get used to their highlight rendering. I tried, for months. But it’s just not my taste. Capture One, meanwhile, is crippled by a performance bug that their support has been unable to resolve for many months. I cannot tell you how frustrating this experience has been.

Anyway, this means I’m back to darktable for editing as well. And it has been… super smooth, actually. The latest version is another great improvement, the app feels polished and smooth. And it’s actually good to be back with scene-referred editing, and Sigmoid’s adjustable highlight saturation, and halo-free tone equalizer, and the new and wonderful color equalizer. These are capabilities the mentioned commercial apps miss.

It’s good to be back! Thank you everybody for your hard work on darktable! It’s a tremendous app!

The only things I miss are, oddly, the AI features. AI masking is genuinely useful as a time saver (you can usually do the same thing with darktable’s masks, it just takes more effort). AI denoising is actually great, but can be replaced by pre-processors such as Neat Image or DxO PureRAW. AI object removal is a great tool, and sometimes fixes things that Retouch can’t touch. Nothing mission-critical here, but some nice time-savers.

Now I need to re-shoot my color target and get those modern film simulations for my Fuji camera set up in darktable. And I wanted to try and play with custom import profiles… Ah, back to tinkering! I missed that in the commercial programs.


This is very surprising… You would think a simple algorithm based on file name and EXIF data would be instant, just like it is with darktable. I wonder what Lightroom and Capture One do behind the scenes to cause this slowness, as I refuse to believe they are incapable of designing a quick algorithm for it :smiley:
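To illustrate why such a check *should* be fast: a duplicate key can be as little as a hash over the file name plus a few EXIF fields, making the whole scan one set lookup per file. A minimal sketch in Python (the field names and helpers here are illustrative, not how darktable or any of the commercial apps actually implement it):

```python
import hashlib

def dedup_key(filename: str, exif: dict) -> str:
    """Build a cheap duplicate-detection key from the file name
    plus a few EXIF fields (capture time, camera model)."""
    parts = [
        filename,
        exif.get("DateTimeOriginal", ""),
        exif.get("Model", ""),
    ]
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

def find_duplicates(images: list, known_keys: set) -> list:
    """Return images whose key was already imported.
    One O(1) set lookup per file, so the scan is linear overall."""
    return [img for img in images
            if dedup_key(img["name"], img["exif"]) in known_keys]
```

Even on a slow disk, reading a few EXIF tags per file and hashing them is milliseconds of work for a thousand photos, which matches the "instant" behavior described above.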

Sometimes being #3 is being #1!


Well, not even the other commercial packages have AI features as powerful as the ones available in Lightroom. This is true for Capture One (on the forum you often read that their masking tools are not as good as Lightroom’s).
Some commercial packages do not even have the basic ones: DxO PhotoLab, which I have bought, is really poor when it comes to masking (as far as the AI features are concerned).

Not to mention that, in the past, AI stuff was not available at all :slight_smile:

Thanks a lot for your report.
I am sure it is extremely rewarding for darktable developers (and users).

I wonder if AI will creep into FOSS one day, and whether it will be for better or worse. I definitely like the control that DT gives me, and maybe that makes me reluctant to embrace AI, which sometimes might be better described as Artificial Incompetence. But I admit I am biased. However, a couple of good descriptions of useful AI have been given in the original post, and maybe I should be less resistant to change.


There already exist open-source models for object detection and segmentation; theoretically they could be implemented in darktable right now, at least for masks. It’s just a matter of doing it. I’d say the hard part would be dealing with user hardware (running them on the CPU is costly), ensuring compatibility with Python and all its dependencies (a mess), etc.
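For context on why the integration (rather than the model) is the easy part: segmentation models typically output a per-pixel class map, and converting that into a darktable-style binary drawn/parametric mask is trivial. A toy sketch in plain Python, with the model inference itself omitted (the class ID and the sample class map below are made up for illustration):

```python
def class_map_to_mask(class_map, target_class):
    """Turn a 2-D grid of per-pixel class IDs (the typical output
    of a segmentation model) into a binary mask: 1 where the pixel
    belongs to target_class, 0 elsewhere."""
    return [[1 if c == target_class else 0 for c in row]
            for row in class_map]

# Hypothetical model output for a tiny 2x3 image; 15 is the
# "person" class ID in Pascal-VOC-trained models.
PERSON = 15
class_map = [
    [0, 0, 15],
    [0, 15, 15],
]
mask = class_map_to_mask(class_map, PERSON)
```

The hard 95% is everything around this: shipping the model weights, running inference fast enough on arbitrary user hardware, and avoiding a Python dependency tree inside a C application.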


Just after a quick editing session this morning: One absolutely outstanding feature of darktable is its auto-apply preset system.

My Pixel 6a phone has two cameras: one well-behaved “normal” 24mm eq lens, and a 20mm eq wide lens with deranged white balance coefficients.

Neither Lightroom nor Capture One allows per-lens presets. Darktable does, and it is brilliant!
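For readers unfamiliar with the feature: darktable’s auto-apply presets match on EXIF fields such as camera model, lens, and focal-length range, so a fix for one deranged lens fires only for that lens. The gist can be sketched like this (a hypothetical data model for illustration; darktable’s actual implementation is C code keyed on its preset database, and the preset names and lens strings below are invented):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutoPreset:
    """An auto-apply rule: every condition that is set must match
    the image's EXIF for the preset to fire."""
    name: str
    camera: Optional[str] = None   # exact camera model, or None = any
    lens: Optional[str] = None     # exact lens name, or None = any
    focal_min: float = 0.0         # focal-length range in mm
    focal_max: float = 10_000.0

    def matches(self, exif: dict) -> bool:
        if self.camera and exif.get("Model") != self.camera:
            return False
        if self.lens and exif.get("LensModel") != self.lens:
            return False
        return self.focal_min <= exif.get("FocalLength", 0.0) <= self.focal_max

def presets_for(exif: dict, presets: list) -> list:
    """Names of all presets that auto-apply to this image."""
    return [p.name for p in presets if p.matches(exif)]
```

With one rule scoped to the wide lens and another to the normal lens, each photo gets the right white-balance correction at import time with zero clicks.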


Essentially, I hadn’t touched darktable for about a year. So I had forgotten some things. I had learned to live with Lightroom.

But there were always these annoyances: my phone camera had a broken import profile that only Adobe could fix, but didn’t. The aggressively desaturated highlights. The simplistic preset system.

I have now processed about half the photos from my vacation, and thereby relearned darktable. I re-shot my color target and re-created my Fuji film sims, this time with the new Nostalgic Neg and Reala Ace, and updated my film-sim-panel accordingly. I created auto-presets for all the small differences between my three cameras (phone, GR, Fuji).

And now everything is running smoothly. With these automations, darktable is almost as quick to use as Lightroom. Of course edits are still mostly click-wait-see instead of real-time. But that’s balanced by many small repetitive tasks that are now automated. Good stuff!


If you want to share presets and examples, it would be really interesting to see.


Will do, in a few days.


What hardware are you on again? My aging Haswell i5 with a 1050 Ti is reasonable with my Z7 II files, and I assume when I upgrade to something from this decade it’ll be quite fast.


With an RTX 3080 my edits are more or less instant, even more so if I leave D&S for last. And while it was expensive when it came out, Nvidia’s next gen is bound to be released soon, and even the 60-tier cards will probably be as fast as the 3080 while having more and faster memory.


A Mac mini M1, which is reasonably fast. But that 4K screen just has a lot of pixels, and those 40 MP Fuji files have gotten big, too. It’s smooth enough for late modules such as color balance, but early modules like exposure take about a second.

I’m not complaining, this is fast enough for day-to-day editing. It’s merely a difference to Lightroom (with its fixed pipeline).

Ah, I still have 1080p screens.

4K screen and a mobile RTX 2070, i7 and 32 GB RAM, on Linux, but max 31 MP files here, and I cannot observe this amount of lag either, not even when running entirely on the CPU.

I checked in detail, and my CPU is from 2020, like the M1, but clocks at only 2.3 vs. 3.2 GHz.

So I wonder where the bottleneck is that makes darktable difficult to use on bitten-fruit computers. Is it the ARM compiler not optimizing as well? Is it obstacles that Apple puts in the way of open-source developers?


This is the difference. OpenCL vs. no OpenCL makes a huge difference.

CPU-only, I have a barely noticeable lag when, as the OP mentioned, I move the exposure around heavily (up and down and up again). With the GPU, even this barely noticeable lag vanishes.

It depends on what other modules are enabled. If D&S is enabled, things slow way down; with a minimal history stack it might be fine.

Sure, but I never have lags in the range of a second, as the OP describes. Therefore I still wonder why darktable’s performance seems more limited on Apple computers.

To give some background: I have heard this “darktable is slow on Mac” claim for years now, from different people with different computers, so I still wonder why that is. Or, if it is not actually the case, why it is perceived that way.

We discussed this recently here: Export times on iMac and Mac mini with diffuse&sharpen

The conclusion was GPU power and, importantly, memory available for OpenCL.
