a library for each job

Hello everybody.
I have started studying darktable in order to move away from Lightroom, and maybe my way of thinking is wrong, so I need help.
I work mainly with wedding photography. At each event I capture from 4000 to 8000 raw images. In Lightroom I create a new catalog for each event.
After each event I export the images as JPEGs and move the raw files to an external hard drive. I only revisit these images if a customer requests an adjustment or asks for a photo that I didn’t deliver.
As I understand it, darktable has only one library that stores everything, correct?
Is this method efficient for the number of photos I take? They reach 200,000 images a year, and inevitably many will have the same file name.
What is the correct procedure for my case?

You can switch the library at program start via command-line parameters; see darktable 3.6 user manual - darktable
You can also just save the XMP sidecar file after each edit and keep it with your raw files wherever you want. Then you can simply remove the images you don’t need from your database. On importing these images again, the edit settings will be taken from the XMP.
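
For example, a minimal sketch of starting darktable against a specific library (the path and file name are just placeholders for wherever you keep your databases):

darktable --library /path/to/libraries/2021-smith-wedding.db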


Also note that DT does not import your actual image, just a reference to where it is located, so your database/catalogue is not massive.


I use a library for every year; this is the method I used when I was still using Lightroom. There is no visual way in DT to switch libraries (‘catalogs’), but you can launch darktable from the command prompt/terminal with the --library /path/to/library/nameoflibrary.db parameter.

You can always make a shortcut on your desktop that includes the parameter.

If the library doesn’t exist, Darktable will create one for you at that location.

I keep all my libraries in a DarktableLibrary folder. In it I have 2021.db, 2020.db, and so on.
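
A minimal sketch of a small wrapper script for that layout (the folder path and file names are just examples, not anything darktable ships with):

#!/bin/sh
# launch darktable with a per-year library, e.g. "dt-year 2021"
exec darktable --library "$HOME/DarktableLibrary/$1.db"

A desktop shortcut can do the same thing by putting the full command, including --library, into its target or Exec line.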

I don’t know if this makes working with DT any faster; I’m just very used to this way of working.


Sometimes weddings, mostly sporting events: 500 to 3000 images per event.

I use the library as a temporary working environment.

Import, cull, edit, tag, export, remove all files from lib.

If I need the project again, I just reimport, close darktable, let the previews generate from the command line, then open dt again and work.

https://www.darktable.org/usermanual/en/special-topics/program-invocation/darktable-generate-cache/
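
A rough sketch of that step, assuming the flags from the linked manual page and an example library path (point it at your own database, or drop the --core part to use the default one):

darktable-generate-cache --min-mip 0 --max-mip 8 --core --library /path/to/libraries/2021.db

This pre-renders thumbnails for the images in that library up to the requested mip size, so the lighttable and culling views can show them instantly.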

Boost your on-disk cache in the preferences to insane levels if you want full-size previews of all images while working on the project. Experiment with the different sizes; they depend on your camera resolution and your screen. Also learn to embrace the culling view, which is where those pre-rendered images make their shiny appearance: browsing through hundreds or thousands of images is an instant operation once you have your settings right.

The rejects get moved into separate storage, so I have them if need be, but under normal circumstances this cuts down the reimport drastically.

With that workflow I can also switch machines in a pinch, from the laptop on the road to the desktop at home. Of course, writing to XMP is mandatory. There are a few things that are only stored in the library, but they have not bitten me. Yet?

Oh, and I set library maintenance in the preferences:

storage: database: check for database maintenance = on close (don't ask)


What is your definition of insane, please?

On my hard disk, the biggest cached image files are between 2 MB and 5 MB. The mean cache file size is approx. 150 kB. I would guess that 5 MB × number of images puts you on the safe side; for a more realistic scenario, 200 kB × number of images or even less may be sufficient. Edit: but for extensive use of culling mode, better stay on the safe side :wink:

However, please do this exercise for your own images, as with different camera resolutions and cache settings your numbers may differ.

And update the setting from time to time, as your library grows …
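
One quick way to do that exercise on Linux, assuming the default cache location under ~/.cache/darktable (adjust the path if you have moved your cache):

du -sh ~/.cache/darktable/mipmaps-*.d
find ~/.cache/darktable/mipmaps-*.d -type f -printf '%s\t%p\n' | sort -rn | head

The first command shows the total size of the thumbnail cache; the second lists the largest cached previews so you can see the per-image cost at your current settings.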

Chris’ numbers are spot on for what I see with my 24 and 20 megapixel bodies.

For my setup (24 MP on a 2560x1440 screen) the previews are mip 5 and average 500 KB each. For full-size previews (mip 8 for 24 MP files) without noise reduction and at higher ISOs I need somewhere between 5 and 10 MB per image.

So if I wanted to have full-size previews of noisy images pre-rendered, that would be 1 GB per 100 files. With the 8000 files you mentioned, you would be looking at 80 GB of cache under those conditions … insane enough?

But play with your own settings and setup; I have my cache set to 8 GB and have not run into restrictions recently.

Not sure if this helps you, but I shoot sporting events and end up with similar numbers. I import into different folders using rapid-photo-downloader, then cull using Geeqie. Only once culled do I import into darktable.

Edit: oh, you were using Lightroom, so I expect you can’t use those tools as they’re on Linux. I have used Photo Mechanic for culling on Windows; it’s a bit clunky but works fine. I might have to look into dt preview generation as described by @grubernd too, now that I’ve seen that :thinking:

As for files having the same name, that’s fine, as I would hope they’re all in different folders anyway.

I might have to look into Geeqie culling, as mentioned by @swansinflight, as a different approach. :upside_down_face:


I also cull in Geeqie, though I shoot maybe 50 images on a busy day. Set a shortcut in Geeqie to open in darktable (“d” in my Geeqie setup), then just use the mouse wheel to advance in Geeqie and “d” to send an image to darktable.


Ahh, that’s a cool way of doing it. I’ll end up with 2-5k images to cull though, and don’t want to keep them all. I mark keepers with a 1, press Ctrl+1 to select them, Ctrl+Shift+I to invert the selection, then delete all those not marked (which are now selected).
I can still right-click an image and use Plugins/darktable to open one there if I want to preview an edit.