How to compact the library?

DT 4.8.1 on Leap 15.6

I just deleted over 10 000 images.

Is there a DT-preferred way of compacting the db, or should I use sqlitebrowser to vacuum?

Thanks

sqlite and vacuum
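For reference, here is what that boils down to without a GUI. A minimal sketch using Python's stdlib sqlite3 module instead of the sqlite3 CLI; the path is an assumption (darktable's default config dir), and darktable must not be running while you do this:

```python
# Sketch: compact a darktable database with SQLite's VACUUM.
# The path below is an assumption (default darktable config dir);
# adjust it if yours differs. Run only while darktable is closed.
import os
import sqlite3

def vacuum(db_path):
    """Run VACUUM on db_path; return (size_before, size_after) in bytes."""
    before = os.path.getsize(db_path)
    con = sqlite3.connect(db_path)
    try:
        con.execute("VACUUM")  # rewrites the file, dropping free pages
    finally:
        con.close()
    return before, os.path.getsize(db_path)

# Usage (with darktable closed):
# vacuum(os.path.expanduser("~/.config/darktable/library.db"))
```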

Perhaps the scripts from the darktable package will help:
purge_non_existing_images.sh and purge_from_cache.sh

purge_non_existing_images.sh deletes entries from the database for which the corresponding files no longer exist. That should not be the case here (unless OP deleted the images from outside darktable…)
purge_from_cache.sh removes thumbnails that are no longer relevant. It’s not modifying the database, afaik.

In a case as described by @foto , I’d probably use all three as follows:

  • backup the database (or trust the snapshots)
  • purge_non_existing_images.sh (shouldn’t do any harm, since we are cleaning up the database anyway…)
  • purge_from_cache.sh
  • use sqlite vacuum (will use up to twice the database size as temporary workspace on disk)
  • back up again (so you won’t be forced to restore the un-vacuumed version…)
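The steps above can be sketched in Python. The config dir and the scripts’ install path are assumptions (the purge scripts ship with darktable but their location varies by distribution), so the script invocations are left as comments:

```python
# Sketch of the backup -> cleanup -> vacuum -> backup workflow.
# Assumes the default darktable config dir and that darktable is closed.
import os
import shutil
import sqlite3
import time

CONFIG = os.path.expanduser("~/.config/darktable")

def backup(db_path, tag):
    """Copy db_path to a timestamped sibling file; return the copy's path."""
    dst = f"{db_path}.{tag}-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copy2(db_path, dst)
    return dst

def compact(config=CONFIG):
    for name in ("library.db", "data.db"):
        backup(os.path.join(config, name), "pre-vacuum")
    # Optional cleanup passes; the scripts' path is an assumption,
    # so run them by hand from wherever your distro installs them:
    #   purge_non_existing_images.sh
    #   purge_from_cache.sh
    for name in ("library.db", "data.db"):
        db = os.path.join(config, name)
        con = sqlite3.connect(db)
        try:
            con.execute("VACUUM")  # needs up to ~2x the db size free on disk
        finally:
            con.close()
        backup(db, "post-vacuum")
```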

Another way: first make a backup, then delete the database and re-add all of the images to the library, thus creating a fresh database.

This is what I do:
I work with Linux Mint 22 and have SQLite Browser version 3.12.2-3build2 installed (a GUI editor for SQLite databases).
I opened both darktable databases, data.db and library.db, with it.
One of the menus offers compacting the database, i.e. getting rid of the free space that remains in the tables after deleting data.
I was able to reduce my darktable database by 4% after compacting it. Not a single photo had been deleted beforehand, so it was just a clean-up of the database.
Make a copy of the databases before compacting them.
(Not sure what I could do to synchronise the xmp sidecars and the databases)

Better have sidecars in that case, or you lose all edits and tags…

Simplest would just be to use the sqlite VACUUM command as @wpferguson suggested, of course while darktable is not running (either sqlite would refuse to work on the database while dt is active, or you’d have two programs accessing one sqlite database, which dt itself tries to prevent…).
Extra backups are a safety measure, and the purge_… scripts make sure you do a maximum of cleanup while you are at it.
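One way to guard against vacuuming while darktable is open: darktable keeps a lock file next to each database (e.g. library.db.lock). Treating its presence as “dt is running” is an assumption sketched below; note that a stale lock file can remain after a crash:

```python
# Sketch: refuse to VACUUM while darktable appears to have the db open.
# The ".lock" sibling-file convention is an assumption based on how
# darktable guards its databases; a stale lock may survive a crash.
import os
import sqlite3

def safe_vacuum(db_path):
    """VACUUM db_path unless a .lock file suggests darktable has it open."""
    if os.path.exists(db_path + ".lock"):
        return False  # darktable (probably) running; do nothing
    con = sqlite3.connect(db_path)
    try:
        con.execute("VACUUM")
    finally:
        con.close()
    return True
```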

If you use overlays, and the overlay image has several versions, this won’t work well, as the module parameters in the database and the sidecar store:

  • the DB ID of the overlay (but this won’t be valid after a reimport)
  • the file name of the overlay image, but not the version number.

When the composite module finds that the ID is invalid, it will take the ID of the first image it finds in the DB based on the stored filename.

I haven’t used that module too much. But I can recognize that as a problem. Thanks for letting me know.

Me neither, but it did break my darktable when I removed and reimported images. That completely broken behaviour has been fixed, but the inability to fully restore the original state (including version) is still there.

Thanks, I forgot about these. I’ll run them before the ‘vacuuming’.