Darktable Database Backup Best Practice

Interesting - I think I need to check out git-annex :slight_smile:

Do you back up “darktablerc”, “data.db”, and similar files, or how do you handle those?

I don’t change many settings, and those I redo by hand. That’s also a good check of whether anything significant has changed.

Some things I export for re-import, like the tagging lists (because of the categories), and styles once they have proven themselves.

Some lines are added to the rc file.

The database … I don’t touch.

edit: I consider a database either a feature embedded in a professional environment, with transparent rollbacks, full incremental backups, and all the tooling to work with those things … or a data dump that can be thrown away at any time without a bad feeling.

Darktable keeps two kinds of database copies: a copy is created when the version changes (names ending in “-pre-<version>”), and there are the snapshots (names ending in “-snp-<date><time>”).
The “version” copies are never deleted by dt, but the number of snapshots it keeps is configurable (settings=>storage, options under the “database” heading).

And afaik, you can safely delete either of those “safety” copies (I’d do it while dt is not running).
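If you want to review what has piled up before deleting anything by hand, here is a minimal sketch that just lists those safety copies. It assumes the default config directory ~/.config/darktable and the naming patterns mentioned above; adjust if your setup differs.

```python
# List darktable's database safety copies ("-pre-<version>" and
# "-snp-<date><time>") so you can review them before deleting any
# by hand -- do this while darktable is NOT running.
from pathlib import Path

DT_DIR = Path.home() / ".config" / "darktable"  # assumption: default location

for pattern in ("*-pre-*", "*-snp-*"):
    for f in sorted(DT_DIR.glob(pattern)):
        print(f"{f.name}  ({f.stat().st_size / 1e6:.1f} MB)")
```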

Thanks, I wasn’t aware of this setting, will do as you advised!

I second this. All other (non-deduplicating) backup strategies are pretty much obsolete at this point.

One should back up the whole home folder, possibly excluding large files, using an automated cronjob. It can run every 15 minutes or so when on an unmetered connection. One can of course add tweaks like freezing databases or taking ZFS snapshots, but just doing the above is sufficient.
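As an illustration, the cron job could call a small wrapper like the sketch below. This is not anyone’s actual setup: it assumes BORG_REPO and BORG_PASSPHRASE are exported in the cron environment, and the exclude patterns are just examples.

```python
# Minimal cron-friendly borg wrapper: back up the home folder while
# keeping large media files and caches out of the repository.
import os
import subprocess

home = os.path.expanduser("~")

# assumption: BORG_REPO and BORG_PASSPHRASE are set in the cron environment,
# so "::archive" refers to that repository
subprocess.run(
    [
        "borg", "create",
        "--compression", "zstd",
        "--exclude", "*.mp4",       # keep accidental movie files out
        "--exclude", "*/.cache/*",  # caches are not worth backing up
        "::home-{now:%Y-%m-%dT%H:%M}",  # borg expands the {now} placeholder
        home,
    ],
    check=True,
)
```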

While of course you can back up to your own server, I find borgbase to be an excellent provider. They have a “write only” feature, which ensures that even if an adversary (e.g. ransomware) gets hold of your computer, it will only be able to add to the repo, not remove anything. They also contribute to the development of borgbackup.


That’s still an issue: I accidentally add stuff to the backup from time to time, e.g. a large movie file. I wish there were a heuristic that recognizes large changes (single files or whole directory structures) and asks whether the addition was intentional. Or, for an automated backup, the suspect files/folders could first land in a staging area on the backup medium/server for later review, so they are not lost but also not “irreversibly” in the backup repository.
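A rough sketch of such a pre-backup check; the 500 MB threshold, the scan root, and the timestamp file are all arbitrary assumptions, not an existing feature of any backup tool:

```python
# Pre-backup sanity check: flag unusually large files added since the
# last run, so an accidental movie file doesn't slip into the repo.
from pathlib import Path

THRESHOLD = 500 * 1024 * 1024            # 500 MB -- pick to taste
ROOT = Path.home()                       # directory tree to be backed up
STATE = Path.home() / ".last_backup_ts"  # hypothetical timestamp file

last_run = STATE.stat().st_mtime if STATE.exists() else 0
suspects = [
    f for f in ROOT.rglob("*")
    if f.is_file()
    and f.stat().st_mtime > last_run
    and f.stat().st_size > THRESHOLD
]

if suspects:
    print("Large new files since last backup; review before backing up:")
    for f in suspects:
        print(f"  {f}  ({f.stat().st_size / 1e6:.0f} MB)")
else:
    # nothing suspicious: record this run and let the real backup proceed
    STATE.touch()
```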
