A bit of a comparison between darktable and Capture One

So here’s a weird one: after using darktable for years, I was “forced” into using Capture One for a few months, and I learned a few things that I now want to share.

First, and skip this paragraph if you’re not interested, why? I learned photography on Linux, on darktable. I had no prior exposure to any other tool whatsoever. Also, I have a PhD in signal processing. So, a prime candidate for darktable. And I absolutely love darktable. But then two things happened: one, my second child was born, and all that precious free time I had spent editing photos evaporated. And two, I was having issues with post-processing that may or may not have been related to darktable. Anyway, long story short, I gave Capture One a serious try. Not a short stint during its trial period, like last time, but a real, honest investment of money and time.

And accounts like these are exceedingly rare: most people use Lightroom, so if they write a comparative review it is from the viewpoint of a Lightroom user. That is ignoring the fact that learning a new raw developer represents a serious time investment and is rarely done without external pressure, which results in very few accounts that are more than skin deep. The present text comes from a few months of more-or-less exclusive use of Capture One and a few thousand edited pictures (all new, not re-edits of old ones, and thus without the implicit incentive to recreate another program’s rendering), written from the viewpoint of a darktable user. Anyway, let’s get started.

So, Capture One. It is faster than darktable. You move a slider, you see the result of that slider immediately. Even on my lowly Surface Pro 7 tablet. That’s what got me hooked. But after a few months I noticed that this focus on speed goes deeper: I started processing pictures much more quickly, and that was frankly a revelation in the time-starved months after my second baby’s birth. Let’s look at that in a bit more detail.

My usual editing workflow in darktable goes something like this: I start by fixing up exposure, usually by dragging the histogram. Then I adjust shadow density with the black slider in filmic, recover highlight color with the white slider in filmic, and adjust white balance to taste. Then I crop, then add further adjustments such as color zones, color balance, tone equalizer, denoising, or contrast equalizer. Local adjustments come last, if necessary. The thing is, more or less every image needs at least exposure, filmic’s black and white, and cropping. And these are buried in two different modules (plus the histogram) in darktable, each requiring multiple clicks to access.

In contrast, a similar workflow in Capture One happens entirely on one screen, just by going from one slider to the next:

And furthermore, common tools such as cropping, rotating, and the white balance picker are accessible at all times with highly memorable keyboard shortcuts (C, R, and W, respectively). Taken as a whole, this allows me to positively blaze through images like never before. It took a while to appreciate this and to get to know the workflow, but at least in my usage the difference is quite significant. I now sometimes do a few days’ edits in a spare half hour, something that used to be an all-evening affair in my usual darktable workflow.

Of course, I am well aware that I could configure custom keyboard shortcuts in darktable, that darktable’s module system is infinitely more powerful, and so on. But this example highlights a bit of a philosophical difference between darktable’s unflinching priority on user control and C1’s compromise between power and speed. There are upsides and downsides to both, and at this moment in my life I begrudgingly value speed over power, simply because there is so little free time available.

Speaking of which, there were numerous occasions where I missed darktable’s deep control. Most notably, Capture One’s High Dynamic Range sliders and Clarity controls feel restrictive and oversimplified compared to the splendor of darktable’s Tone Equalizer and Contrast Equalizer: much too often, C1’s Shadows slider would adjust too large a portion of the image, Highlights not enough, and both would produce halos if not managed very carefully. In darktable, the Tone Equalizer’s explicit mask would let me limit the affected area very precisely. Similarly, C1’s two Clarity sliders act somewhat like raising/lowering the left/right half of the Tone Equalizer, but I often missed the ability to affect only the in-between wavelet sizes, for example to specifically highlight tree trunks or bird feathers. But even where I struggled with C1’s controls, I could not deny that acceptable results could be had very quickly, and I noticed myself moving on to the next image instead of going the extra mile as I would have done in darktable.
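
(To make “affect only in-between wavelet sizes” concrete: the sketch below is not darktable’s actual Contrast Equalizer, just a rough numpy illustration of the idea of splitting a luminance image into fine, mid-sized and coarse detail and boosting only the middle band. The radii, the gain and the loader name are made up for illustration.)

```python
# Rough illustration (not darktable's or Capture One's algorithm): boost only
# mid-sized detail by splitting luminance into scale bands and amplifying the
# in-between band. Radii and gain are arbitrary illustration values.
import numpy as np
from scipy.ndimage import gaussian_filter

def boost_mid_scales(lum, fine_sigma=2.0, coarse_sigma=16.0, gain=1.5):
    """lum: 2-D float array of (linear or log) luminance."""
    blur_fine = gaussian_filter(lum, fine_sigma)      # removes fine texture
    blur_coarse = gaussian_filter(lum, coarse_sigma)  # keeps only large shapes
    fine = lum - blur_fine               # small-scale texture (left untouched)
    mid = blur_fine - blur_coarse        # "in-between" sizes: trunks, feathers
    coarse = blur_coarse                 # overall tonality (left untouched)
    return coarse + gain * mid + fine

# Usage sketch (load_linear_luminance is a hypothetical helper):
# lum = load_linear_luminance("_DSF1853.RAF")
# out = boost_mid_scales(lum)
```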

There are two more features I alluded to earlier that I want to highlight in particular: darktable’s filmic black and white points. Because to the best of my knowledge, Capture One has no analogue to these two functions. These two magic sliders can recover deep shadow detail without brightening all other shadows (much), and recover burnt highlight color without darkening all other highlights (much). They are proper magic. In Capture One, ostensibly similar functionality lies in the High Dynamic Range Black/White sliders, but they tend to bleed too far into the midtones, and can even introduce lightness reversals around highlights and halos around high-contrast edges. Having explicit control over the image’s dynamic range compression in filmic is truly genius.
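
(For readers unfamiliar with filmic, here is a very stripped-down sketch of the idea behind the black/white relative exposure sliders. It is not darktable’s actual implementation, which adds a contrast curve, midtone latitude, desaturation and highlight reconstruction on top; the point is only that the two sliders choose, in EV around middle grey, how much scene dynamic range gets squeezed into the display range. The default values below are illustrative.)

```python
# Very simplified sketch of filmic's black/white relative exposure idea
# (NOT darktable's actual code). The two parameters pick the scene range,
# in EV around middle grey, that is mapped onto the display range.
import numpy as np

def log_tone_map(x, grey=0.1845, black_ev=-8.0, white_ev=4.0):
    """x: linear scene-referred values; returns display values in [0, 1]."""
    ev = np.log2(np.maximum(x, 1e-9) / grey)       # exposure around middle grey (~18%)
    v = (ev - black_ev) / (white_ev - black_ev)    # normalise that EV range to [0, 1]
    return np.clip(v, 0.0, 1.0)

# Raising white_ev from +4 to +6 pulls two more stops of highlights below
# clipping, while middle grey (x == grey) only slides from 8/12 to 8/14 of the
# display range; in the real module the contrast curve keeps grey anchored.
```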

Images such as the following are where I still definitely go back to darktable, because I find their larger dynamic range too hard to handle in Capture One:


_DSF1853.RAF (18.7 MB) (CC-BY-SA)

Although I have to add a caveat: actual recovery of partly burnt highlights just plainly works better in Capture One than in darktable. More color is recovered, deeper into the clipped areas, and with much less effort. This is an application I do prefer Capture One for. Denoising is similarly much faster and easier in Capture One.

In terms of Capture One’s much lauded color controls, I found them largely equivalent to Color Balance and Color Zones. Nothing to report here.

Being a commercial application, however, Capture One has certain benefits: for example, built-in support for Fuji’s film simulations. For newer cameras at least, sadly. Something somewhat similar can be done with darktable’s LUT support, but then it becomes a guessing game whether the specific LUT “wants” to come before, after, or instead of filmic, whether deep shadows and highlights are still recoverable afterwards, and a myriad of other opaque parameters. Essentially, I rarely found LUTs worth the effort in darktable, but I use Capture One’s film simulations frequently. It is a neat feature.
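
(The “before or after filmic” guessing game boils down to the fact that a LUT and a tone curve do not commute. A toy demonstration, where a made-up contrast tweak stands in for a film-simulation LUT and a plain gamma stands in for filmic; both functions are inventions for illustration only:)

```python
# Toy demonstration of why LUT placement in the pipeline matters. Both
# functions are made-up stand-ins, not Fuji's film simulations or filmic.
import numpy as np

def toy_look(x):        # stand-in for a film-simulation LUT
    return np.clip(1.1 * x - 0.05, 0.0, 1.0)

def toy_tone_map(x):    # stand-in for filmic / the display transform
    return x ** (1.0 / 2.2)

x = np.linspace(0.0, 1.0, 5)
before = toy_tone_map(toy_look(x))  # "LUT before filmic": look applied on scene data
after = toy_look(toy_tone_map(x))   # "LUT after filmic": look applied on display data
print(np.round(before, 3))  # approx. [0, 0.508, 0.730, 0.891, 1]
print(np.round(after, 3))   # approx. [0, 0.536, 0.753, 0.915, 1]
```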

Lastly, a few words about the library module and file organization. At first glance, I hated Capture One’s library. Absolutely hated it. You have to import every single directory manually (no multiple selections!), all the edits go into a central library and nowhere else, and sidecar support is laughable. But then someone told me a much better way: instead of using Capture One’s catalogue, create a session, but ignore all those pre-built import and output directories, as well as the import button, and instead simply navigate to any old directory on your computer with the sidebar file browser. This is clearly not how sessions are meant to be used, but it actually works reasonably well, even with photos on a network share. I actually kind of prefer it to darktable’s workflow of importing film rolls. The saving grace for darktable is that its import window allows multiple selections (I sort my pictures into daily directories), and that its import file picker displays the modification date of directories (so I can see which daily directories changed in the latest import). And might I add that these two features are exceedingly rare in raw developers, yet I absolutely rely on them. So on balance, I mostly prefer darktable’s way of doing things, if only by a small margin. But I do like the explicit file system browser in my weird way of abusing Capture One sessions.

As a corollary to file management, I have come to enjoy Capture One’s handling of sidecar metadata: its sidecars are readily picked up by darktable, but the reverse is not true, which is a bit of a shame. Perhaps it would be a good idea for darktable to save image metadata (ratings, color labels, tags) into filename.xmp files in addition to the editing data in filename.extension.xmp. This pains me a bit to admit, as I had argued the opposite in a recent thread on metadata. But I have since learned the upsides of this approach.
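
(To illustrate what such a metadata-only sidecar could look like: the sketch below writes rating, colour label and tags into a plain filename.xmp using standard XMP fields (xmp:Rating, xmp:Label, dc:subject). The helper itself is my own invention for illustration; neither darktable nor Capture One ships it, and the exact fields a given program reads back may differ.)

```python
# Sketch of a metadata-only "photo.xmp" written next to darktable's
# "photo.RAF.xmp" edit sidecar. Illustration only; not something darktable
# or Capture One provides today.
import xml.etree.ElementTree as ET

NS = {
    "x": "adobe:ns:meta/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "xmp": "http://ns.adobe.com/xap/1.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def write_metadata_sidecar(path, rating, label, tags):
    root = ET.Element(f"{{{NS['x']}}}xmpmeta")
    rdf = ET.SubElement(root, f"{{{NS['rdf']}}}RDF")
    desc = ET.SubElement(rdf, f"{{{NS['rdf']}}}Description", {
        f"{{{NS['rdf']}}}about": "",
        f"{{{NS['xmp']}}}Rating": str(rating),  # star rating 0..5
        f"{{{NS['xmp']}}}Label": label,         # colour label, e.g. "Green"
    })
    bag = ET.SubElement(ET.SubElement(desc, f"{{{NS['dc']}}}subject"),
                        f"{{{NS['rdf']}}}Bag")
    for tag in tags:                            # keywords / tags
        ET.SubElement(bag, f"{{{NS['rdf']}}}li").text = tag
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

# write_metadata_sidecar("_DSF1853.xmp", 4, "Green", ["family", "hike"])
```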

So, on the whole, I grew to quite like Capture One, mostly for its streamlined user interface and speed of operation. In terms of image quality, I honestly didn’t see much difference between darktable and Capture One. But perhaps I am not the most discerning of users, either, as my focus is not on crazy detail recovery or the more technical arts of macro or astro. When it comes to control, I find darktable in a league of its own, and frequently felt restricted and, dare I say, patronized by Capture One. And that is probably what a geek like me would say, especially on this particular forum, so take it with a massive grain of salt.

Anyway, I hope this is of interest to someone…

25 Likes

No religion here; insightful and considered observations are always welcome in my quarter.

When I set out to write rawproc, I had two goals in mind: 1) learn the gritty specifics of image processing, and 2) have software that meets my particular needs. One of those needs was to work the family images in much the same way I’d work proofs of my serious stuff, and I succeeded in that endeavor. Thing is, it does things my way, I understand my way, and I don’t think too many others will come to appreciate it. Before he left for the South Pole, I attempted to impart my ways to my son, but he’s not getting it, at least not initially. He wants two-clicks-and-a-slider and done, and I really don’t blame him…

What you’re seeing in FOSS, particularly darktable, is a lurching about-face from what seemed a decent raw workflow at the time to one that really resets one’s expectations: less about mangling the data this way and that, and more about respecting its physical origins to the greatest extent possible. In doing so, new tools are posited, and we’re in the middle of the iterative part of UI development, supporting folks’ needs while holding true to the scene-linear foundation. How many versions of filmic in just a little more than a year?

This will settle down as the UI needs are better comprehended, and I think @anon41087856 is doing a yeoman’s job at that comprehension. I don’t think it’ll ever be the one-and-done of C1, as software of that ilk tends to take abstraction to an indecent level; at DPReview, one fellow espoused abandoning output sharpening because it wasn’t necessary, but I don’t think he realized he was still doing it under the guise of an improved export tool…

Very interesting treatise; thanks for sharing…

5 Likes

I use darktable ‘presets’ so that if my exposure is correct, then my image is given a full basic processing without further fuss. Here is my basic dt result without any keystrokes at all.

At this point I had not yet adjusted filmic or done the final color adjustments.
I have great difficulty understanding why people ‘thrash about’ with each and every image, even on an initial development.

3 Likes

I just opened your RAF in rawproc and discovered this: 1/2500 s, f/2.4, ISO 800. You’re giving up dynamic range in this exposure by amping up the ISO and limiting the light on the sensor with the correspondingly short shutter speed. Now, there are reasons for doing this, particularly to stop motion, but a static scene like this doesn’t need such treatment. I’d bet you’d have better shadows right out of the box for this image with more light on the sensor.

Edit: according to the PDR plot for your camera at photonstophotos.net, you’d gain a bit better than 1.5 stops by dialing the ISO down to 200…
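
(For anyone who wants to check the arithmetic behind that suggestion: dropping from ISO 800 to ISO 200 is two stops of gain, so keeping the same image lightness at f/2.4 means a shutter speed about four times longer. The ~1.5-stop dynamic-range figure itself is read off the photonstophotos.net PDR chart, not computed here.)

```python
# Exposure equivalence for the suggestion above; the ~1.5 stop PDR gain comes
# from the photonstophotos.net chart, only the shutter maths is done here.
import math

iso_used, iso_base = 800, 200
shutter_used = 1 / 2500               # seconds, aperture held at f/2.4

stops_of_gain = math.log2(iso_used / iso_base)          # 2.0 stops
equivalent_shutter = shutter_used * 2 ** stops_of_gain  # 4x longer exposure
print(f"ISO {iso_base} needs 1/{round(1 / equivalent_shutter)} s "
      f"instead of 1/2500 s for the same lightness")
# -> ISO 200 needs 1/625 s instead of 1/2500 s (about 1/640 on the dial)
```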

7 Likes

Can you elaborate on how you have this set up for each module? This sounds really useful, but I’m not sure how to adapt it automatically to each image.

1 Like

@bastibe With the new basic adjustments widget you will be able to build a dashboard by adding controls from most modules, so this should speed things up.

@garibaldi One approach is this: since you can save the state of a module (active or not) and its parameters even in the inactive state, you can do an edit with all the modules you would normally apply. Pick your common settings or starting points, then disable all of them, or at least the extra ones that you often but not always apply, and save this as a style. Applying that style then adds a series of modules with your desired defaults (initially active or not), and you can selectively bring in the optional ones by simply enabling them. So maybe you don’t always use local contrast, but when you do, you just enable it with your default settings… The only catch with this approach is that, if you are not careful with a certain sequence of actions and compress your history stack, you might reset or remove one of those modules…

Or, for sure, you can define autopresets for all your common modules so that they are applied by default whenever you apply or reset the module… This is a good thing to do with, say, filmic, as it has many options. If you set up an autopreset, simply resetting the module will set everything as you need it rather than to the system default, instead of readjusting each time… Doing this for your commonly used modules will save a lot of time…

3 Likes

Fairly simple really:
First, you have a Fuji machine, so it is necessary to boost the exposure by +1.43 EV in the exposure module (check the raw EXIF data for confirmation). Also turn on the ‘offset compensation’. Now you should not have to fiddle with the exposure at all. Then come the nominally needed (for me) small adjustments to local contrast, contrast equalizer and color balance; they can also be preset.
With that done, I generally only need to adjust filmic rgb and then color calibration. In the case of your image I did not even do these last items.
If you are messing with your exposure in dt, then you need to apply the correct JPG-to-RAW correction … I have not needed to touch my basic exposure settings in the past 1,000 images.
Special images always need ‘special touches’ with masking and gilding the lily … that is normal, but basic development of incoming data should be easy and straightforward.
Use the immense power of dt to extract the very best out of the few 5-star images in your collection.
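
(A small aside on what that +1.43 EV means in linear terms, since EV compensation is just a power-of-two gain on the raw data; whether your particular body needs exactly 1.43 is best confirmed from the EXIF exposure bias, as noted above.)

```python
# EV compensation is a plain multiplicative gain on the linear raw values.
import math

ev = 1.43                 # the Fuji offset suggested above
gain = 2 ** ev            # about 2.69x brighter
print(f"+{ev} EV corresponds to a x{gain:.2f} linear gain")
print(f"and a x2.69 gain corresponds to +{math.log2(2.69):.2f} EV")
```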

1 Like

I was already deeply intrigued by a character who just sets out to write his own raw developer, but now you reveal even more fascinating stories. Is there a place where you chronicle your life? Your story sounds like an interesting one.

Perhaps I should have selected a different image… In general, I find I am rarely limited by the dynamic range of my camera, but I often want to include more of it in my images than is customary. So my issue is not with capturing, but with rendering.

That does sound great! I’ll be looking forward to trying that. Thank you for mentioning it.

It really depends on how you’re shooting. My way often leaves me with under- or overexposed images, because I often value capture speed over precision, so as not to hold up my family, and to spend more time playing with my kids than holding a camera. To put it another way, I prefer to push my exposure in post (where I have time) over getting it right in the field (where I don’t).

But I do know my ways are idiosyncratic, so thank you for your comment regardless.

2 Likes

It occurs to me that I might be able to emulate similar behavior with Capture One’s Levels, which can move the black/white points out as well as in.

[Screenshots: “C1 DR” and “DT DR”]

If true, a similar UI might be a great addition to darktable’s filmic module as well: a histogram with overlaid movable black/white points.
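
(For concreteness, a Levels-style black/white point is just a linear remap of the display range: moving the points “in” clips, moving them “out” compresses the image into a narrower slice of the output instead. A minimal sketch, with the proposed histogram overlay left to the imagination; this is not Capture One’s or darktable’s actual implementation:)

```python
# Minimal sketch of a Levels-style black/white point remap (illustration only).
import numpy as np

def levels(x, black=0.0, white=1.0):
    """x: display-referred values in [0, 1]."""
    return np.clip((x - black) / (white - black), 0.0, 1.0)

x = np.array([0.0, 0.02, 0.5, 0.98, 1.0])
print(levels(x, black=0.05, white=0.95))   # points moved in: shadows/highlights clip
print(levels(x, black=-0.10, white=1.10))  # points moved out: range compressed inward
```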

3 Likes

That said, I’m not the best ETTR-er. I use my camera’s highlight-weighted metering so I don’t have to distract myself from composing and focusing, which gives up some of the top end in a lot of cases, and I fall back on the crutch of my camera’s really nice low-light performance. I do keep my ISO at base by default, and only switch out of that if I specifically need a decent shutter speed to avoid motion blur.

With respect to ‘interesting’: at the moment my son is having an interesting time; I’m just working part-time in my old day job, in the basement. I guess we gave him some of the inclination; we spent a couple of years living on an island in the Pacific when he was a pre-teenager, so doing weird things for employment is some part of his thinking… :crazy_face: He bought a nice camera to take down there, and in the time we had I tried to set him up with rawproc, RawTherapee, darktable, and ART so he had options. Now that he’s there with limited internet, I can’t show him the things in rawproc that I know would make his post-processing easier. He’s still in the “SOOC JPEG” phase; when he decides to start shooting raw, I’m going to have to handle the “Why are my pictures so dark?” angst over a Discord chat… :face_vomiting:

2 Likes

Hello @davidvj, would you mind posting your basic preset? Many people seem interested in good defaults, so it may serve as inspiration.

Congratulations @bastibe on your second child! I can certainly relate to the time pressures and why you have had to change your usual processing habits. I quite enjoy post-processing, but it can be a huge time sink, and time is probably my most precious resource at the moment (job + 2 kids +…).

Having very recently moved over to the Fuji ecosystem, I now have Capture One Express given to me for free. I may try it out sometime, but I’m already expecting it to be too limited for my needs. I’ve also invested in the DAM features of darktable, so am unlikely to give it up. However, my first impression of the Fuji film simulations is that they are incredible, so I’m already wondering if I’ll sometimes want to use software with the official film simulations built in. But after shooting exclusively raw for the last 15 years, I’m realizing that using JPEGs out of camera is a viable option once more with Fuji, so I may just continue to shoot Raw + JPG and only process the raws for the photos I want to display and print. That will be the real time saver for me.

This I can certainly relate to. It was my experience with Lightroom too. The results may not be the best, but acceptable results could be achieved very quickly, often in a matter of seconds.
By the way, have you tried X Raw or Silkypix, the other Fuji software? I would be interested to hear your thoughts on those.

1 Like

I have indeed! There’s a whole article up on my blog about all the raw developers I could find, which includes Silkypix. I am quite fond of Silkypix, actually. It is quirky, and its translation is a bit idiosyncratic, but the results are excellent. But above all, its documentation is much more comprehensive and technical than that of any other commercial raw developer I know of. It is also a bit slow, though, and being quite similar to darktable in spirit, I generally stick with darktable. But I do own a license, and its Fuji film simulations are very good.

Fuji’s own X Raw is not a raw developer per se; instead, it connects to your actual camera via USB and offloads all image processing to the firmware in the camera. Which is crazy cool from a technical standpoint, and obviously gives you the “true” film simulations of your camera like nothing else can. But on the flip side, it is only as flexible as your camera’s processing engine, which really does not compare to a desktop application at all.

Yes, this is what I was scratching my head about. What is the benefit of using your camera’s processing rather than a desktop computer’s? I would have thought the latter is infinitely more powerful and capable, so I’m wondering what the purpose of X Raw is.

It lets them avoid porting the demosaicing algorithm they’ve already developed from an ASIC to a general purpose processor.

Thanks! So for the user, there’s not really much benefit other than being able to access processing algos that would otherwise be unavailable (or not as good). But in theory, if they did port the algo to desktop software, there wouldn’t be any advantage.

I guess speed, and access to those custom things they have in their JPEG engine. The film simulations are supposedly not just LUTs (idk). Then on newer cameras there are two color-chrome algos, clarity, and skin smoothing. And the quality of those is at least not terrible for a camera JPEG engine.

The simple use of aperture-priority will ensure that your camera correctly establishes the 18% pivot in 99% of all cases. It is simple point-and-shoot without holding up the family affair.

Not for me. I often want to expose a bit differently than my camera’s auto modes guesstimate. But good for you if it works for you.

The table at the end of this article might be of some interest… It might be nice to determine the raw exposure bias that Fuji uses for each camera, as it could help with a more accurate starting point, especially with filmic. It looks like Fuji underexposes, which is now known, but this changes with ISO, so some presets could perhaps be made to guide the starting point for filmic…