Unfortunately, I don't back up darktable's database (including the whole config folder) as often as I should. I want to improve, but have no good idea how to handle the topic. File system backups (e.g. with rsync) would work, of course, but I don't think that is very efficient for the SQLite part, and Google hasn't turned up a best practice for it so far. Therefore, I wonder how you are doing this, and how you handle the automation. In particular: do you define a threshold of database changes (and how) before a backup is triggered, or do you just use timed backups? Do you check that darktable has released the database before backing it up? And how do you do the actual backup? Incrementally?
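For the SQLite part specifically, one established approach is the online-backup facility exposed by the `sqlite3` CLI's `.backup` dot-command, which copies a transactionally consistent snapshot of the file. A minimal sketch (the paths in the comment assume the default Linux config location; adjust to your setup):

```shell
#!/bin/sh
# Back up one SQLite database consistently via the sqlite3 CLI's
# ".backup" command. Unlike a plain file copy, this takes a
# transactionally consistent snapshot even if the database is in use,
# though backing up while darktable is closed is still the safest.
backup_db() {
    src=$1
    dst=$2
    sqlite3 "$src" ".backup '$dst'"
}

# Typical use (default Linux config dir -- adjust to taste):
#   backup_db ~/.config/darktable/library.db /mnt/backup/library.db
#   backup_db ~/.config/darktable/data.db    /mnt/backup/data.db
```

The rest of the config folder (darktablerc, styles, …) is plain files and can simply be rsynced alongside.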
P.S.: I regularly back up the sidecar files and have them versioned in a git repository as well, but some features rely on the database only, and the number of pictures is quickly approaching critical mass …
Hm, I do full backups of my home folder as well (incremental rsync to an external HDD), but I think it wouldn't be bad to have something that happens more often, probably triggered by the amount of changes done rather than an arbitrary time schedule.
Maybe set up your darktable config folder as a git repository (git init; git add .; git commit should be enough for an initial run) and then have a recurring job which runs git commit? If you're paranoid about losing the database/config, you could also push it to a private GitLab repo.
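A rough sketch of that idea (the config path in the comment is the Linux default, and the cron line is just one way to make it recur):

```shell
#!/bin/sh
# Sketch of the suggestion above: version the darktable config folder
# with git and commit on a schedule.
snapshot_config() {
    dir=$1
    cd "$dir" || return 1
    [ -d .git ] || git init -q          # first run: create the repo
    git add -A
    # commit only if something actually changed since the last snapshot
    git diff --cached --quiet || git commit -qm "snapshot $(date -u +%F-%T)"
}

# e.g. run hourly from cron, pointing at ~/.config/darktable:
#   0 * * * * /path/to/snapshot-script
```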
Thanks afre
I save my pictures to an external hard drive from my main computer.
If my laptop has the same version of darktable as my main computer, may I copy the db from my main computer to my laptop so that both machines are alike?
If yes, do I have to remove the existing db from the laptop first?
This is for Windows 10, 64-bit.
The topic is old, but I can strongly recommend symbolic and junction links for convenient backups. I am using this under Windows.
A junction or symbolic link to the darktable config directory is placed in one of my standard backup source directories. From this source I make manual, automated, and versioned backups to multiple backup targets (USB, NAS, other PCs …). With the junction/symbolic links it is easy to “collect” important directories and files of various software tools (not only DT) that are spread across different Windows subdirectories.
Good backup tools let you enable/disable the following of junction and symbolic links.
I also make heavy use of the DOS subst command to map different subdirectories to drive letters. That is a good method for flexible file locations with constant drive letters. A different topic, but also quite useful for well-defined backup strategies.
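For those who haven't used them, the two ideas above look roughly like this in a Windows command prompt (the backup-source paths are made-up examples; darktable's Windows config lives under %LOCALAPPDATA%\darktable, and mklink may need an elevated prompt):

```bat
:: Collect the darktable config dir into a central backup source
:: via a junction (example path -- adjust to your layout):
mklink /J C:\BackupSource\darktable "%LOCALAPPDATA%\darktable"

:: Map a directory to a constant drive letter with subst:
subst B: C:\BackupSource
```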
DT also has its own means to redirect most, if not all, of those files … I do something similar, i.e. syncing that directory, but I just used the DT parameters to point to folders I created …
A couple of months ago I changed half of my backup toolchain from incremental rsync to borg, which was eye-opening. The chunk-wise deduplication makes a tremendous difference and I am convinced that this is how backup should work. As I can always mount and check my backups with a FUSE file system, I no longer feel the need for the current state of my files to be directly accessible in the backup (as it was with rsync). I tested this with more than 20 TB of backup data (8 TB with incremental rsync, 2 TB deduplicated) and it worked flawlessly (comparing every bit between the original files and the backup). A big benefit of borg is that it is very simple to get old backups into borg and also to get stuff out of the chunk store again.
The other half of my strategy is still git-annex (for media where chunk-wise deduplication is less important).
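For anyone curious what such a borg workflow looks like, a minimal session is roughly this (repo path, source directory, and prune policy are made-up examples, not the poster's actual setup):

```shell
# Illustrative borg workflow (example paths and retention policy).
borg init --encryption=repokey /mnt/backup/repo          # one-time repo setup
borg create --stats /mnt/backup/repo::'{hostname}-{now}' ~/Pictures
borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/repo
borg mount /mnt/backup/repo /mnt/borg                    # browse old states via FUSE
borg umount /mnt/borg
```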
Should DT purge old data and library snapshots automatically (like keeping just 2 of each, for example), as keeping them all will make ~/.config/darktable/ grow in size indefinitely?
I’ve just checked mine after seeing this thread and it’s a whole GB already.
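Until DT does it itself, this can be scripted: keep only the newest N matching snapshot files and delete the rest. A sketch, with the caveat that the `-snp-*`/`-pre-*` name patterns are an assumption here; check the actual file names in your ~/.config/darktable first and adjust the glob:

```shell
#!/bin/sh
# Keep only the newest $keep database snapshots per database in the
# darktable config dir. The "-snp-*" / "-pre-*" patterns are assumed
# naming -- verify against your own config folder before using.
prune_snaps() {
    dir=$1
    keep=$2
    for base in library.db data.db; do
        # list matching snapshots newest-first, delete everything past $keep
        ls -1t "$dir/$base"-snp-* "$dir/$base"-pre-* 2>/dev/null |
            tail -n +$((keep + 1)) | xargs -r rm --
    done
}

# e.g. keep the two newest of each:
#   prune_snaps ~/.config/darktable 2
```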
That’s what I would do if I had to start from scratch or had to change disks. But as the disks are way more than 50% full, I cannot easily convert the media backup to borg.
Also, I am currently using czkawka to compare my full SD cards with the remote backup, to ensure the photographs are backed up before I remove them from the card. This is a bit more complicated and probably slower with a borg FUSE mount (but not impossible). Other methods may be possible, such as adding the contents of the card entirely to the borg repo, checking whether there are files with the exact same chunk list, and removing those files from both the card and the snapshot. But I would probably stick to option 1.
For darktable, or to be more precise, any application configuration, I go the complete opposite way:
I don’t backup configuration.
I have systems in place to recreate the configuration that is needed.
I can go from blank disk to production system with current project data available where it should be in about two hours. And yes, this is tested. Not everything in every application will be totally perfect, but it will work. And that is what I care about. The tuning for the pretty things is a constant game anyway.