After reading all the discussions in the AgX and darktable rebrand threads and others, I thought about compiling darktable again for testing purposes, as I did more regularly a couple of years ago. However, with a growing image collection, my anxiety grows that one day I will run the “wrong” darktable (dev) version and break all my database contents and edits. I have backups, but I might still lose a couple of hours or days and have the burden of restoring from backup if something breaks.
Therefore, I’d like to know how you ensure that test versions do not interfere with the release version used for productive work (I know the command line flags; the problem is more that I might accidentally forget to add them). Furthermore, I wonder if there is an option that makes darktable recognize itself as a non-release version and, in that case, default to a different database, cache, etc. location and automatically switch off XMP writing.
@kofa, @wpferguson, thanks for your great answers. One more question: if I understand correctly, using a shell script would solve the database issue, but the XMP files would still be overwritten until I set the preferences of the dev version not to do so. Is there a “simple” way to make the dev version not overwrite XMP files from the start, maybe via compile-time options?
The goal would be to use the dev version on the same images as the production version, e.g. to be able to test with large datasets.
If you start with a separate configdir, you’ll get a new library.db and data.db that contain no images until you import something. So you can start the new version for the first time and set XMP writing to “never”; that way it shouldn’t affect the existing XMP files. When you import, it should still read the existing XMP files.
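A minimal sketch of such a first launch, assuming the dev build lives under /opt/darktable-dev (all paths here are just examples):

```bash
# Start the dev build with its own config and cache directories.
# The first run creates a fresh library.db/data.db in the configdir;
# before importing anything, set "write sidecar files" to "never"
# in the preferences.
/opt/darktable-dev/bin/darktable \
    --configdir "$HOME/.config/darktable-dev" \
    --cachedir  "$HOME/.cache/darktable-dev"
```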
I would create a set of test data to try this out and confirm how it behaves.
I always use a separate set of images for each instance of darktable I run. Actually, I have one set of images and create symlinks to the files for each separate darktable instance. That way I have the same set available everywhere without incurring the storage overhead.
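As a rough illustration of that symlink approach (the directory names are made up):

```bash
# One real copy of the test images, one symlinked view per instance.
SRC="$HOME/pictures/test-set"
DST="$HOME/pictures/test-set-dev"
mkdir -p "$DST"
# Link every file instead of copying it, so both darktable instances
# see the same pixels without duplicating the storage.
for f in "$SRC"/*; do
    ln -s "$f" "$DST/$(basename "$f")"
done
```

If I understand darktable’s behaviour correctly, any sidecars then get written next to the symlinks rather than next to the originals, which adds another layer of separation.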
darktable supports a rich set of configuration parameters defined by the user in file darktablerc, located in the directory specified by --configdir or defaulted to $HOME/.config/darktable/. You may temporarily overwrite individual settings on the command line with this option – these settings will not be stored in darktablerc on exit.
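For example (the write_sidecar_files key and its accepted values may differ between darktable versions, so check your own darktablerc for the exact spelling):

```bash
# Override sidecar writing for this session only; the override is
# not written back to darktablerc on exit.
darktable --configdir "$HOME/.config/darktable-dev" \
          --conf write_sidecar_files=never
```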
This sounds a bit risky to me, as I might forget it one day and only notice after importing some folders and changing metadata or even editing some images. Anyway,
this is excellent advice. And
this sounds like a good idea as well, but it also depends on my interaction and therefore on the most likely source of errors: me. Good advice, though.
In general, when starting this thread I was hoping there is a hidden compile-time option that disarms the dev version, one that all developers know about but I had obviously missed. It seems this is not the case, so thanks for sharing your workflows. I’ll adapt them and find my own solution based on your suggestions.
On my side, I use a patch before compiling that changes the default folders.
I never thought about the XMP files, mostly because if I come back to a file it’s to start a new edit, and I have an exported JPEG on the side.
Here is the patch; it could probably be extended to also change the default XMP extension from .xmp to .dev.xmp or similar. 0001-Make-darktable-dev-paralllel-installable.patch (2.8 KB)
My approach is that I have a folder with test images, mostly play raws. This folder is what I use for any dev build or for testing PRs. I never put a dev build of dt in a position where it is used on my personal image library, and I don’t think anyone should.
Do you have a pointer to that script? Is it only in the development folder?
On my Fedora system, /usr/bin/gimp is an ELF binary, not a script.
On my side, I package the dev darktable version with Fedora copr; the package installs into /opt/darktable-dev. I did not want to rely on scripts to override the paths: just launch /opt/darktable-dev/bin/darktable and everything works in separate folders.
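For anyone building by hand rather than via copr, a rough equivalent with plain CMake could look like this (the prefix and options are assumptions; with the patch above applied, the separate config folders become the built-in defaults, so no extra flags are needed at launch):

```bash
# Configure, build and install a parallel copy under /opt/darktable-dev.
cmake -S . -B build \
      -DCMAKE_INSTALL_PREFIX=/opt/darktable-dev \
      -DCMAKE_BUILD_TYPE=Release
cmake --build build -j"$(nproc)"
sudo cmake --install build
```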
Also, the patch modifies some utility scripts that I already had to use to clean up the dev library a bit.
In fact, I should probably propose a patch to modify the scripts to be able to override the user config dir.
I was referring to a potential gimp script, as you said “like Gimp does”, just to get some inspiration.
If it does not exist, it could indeed be written.
This is a typical job where an LLM shines: ‘Call this command; if it’s a release version, its output will look like this; if it’s an experimental build, it’ll look like that. If it’s a release build, just run darktable without parameters; otherwise, extract the version from the output and run it with a version-specific config directory: --configdir ~/.config/darktable-{version} --conf some/setting=some_value’
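Something along these lines, hand-written rather than LLM-generated; the parsing of the --version output is a guess, and the dev-build pattern will need adjusting to whatever your build actually prints:

```bash
#!/bin/sh
# Pick the config directory based on whether this darktable looks like
# a development build (e.g. a version string with a "+commits" suffix).
ver="$(darktable --version | head -n1 | awk '{print $NF}')"
case "$ver" in
    *+*)  # looks like a dev build: isolate its config and sidecars
        exec darktable --configdir "$HOME/.config/darktable-$ver" \
                       --conf write_sidecar_files=never "$@"
        ;;
    *)    # release build: run with the normal config
        exec darktable "$@"
        ;;
esac
```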
Back in the day (and what a pleasant day it was, when systems managers only had systems to support, not endless PC/Windows stuff), when users had only terminals on their desks, they logged in to a shell script from which they chose one of the (few!) programs they could run.
Most people can probably visualise the case-statement backbone of such a script, and many have written one themselves. It could easily be used to set the environment and executable for different versions of the same program.
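Adapted to the darktable situation, such a backbone might look like this (paths and labels are made up):

```bash
#!/bin/sh
# A toy menu launcher: pick a darktable version, get a matching
# executable and config directory.
echo "1) darktable (release)"
echo "2) darktable (dev build)"
printf "choice: "
read -r choice
case "$choice" in
    1) exec darktable ;;
    2) exec /opt/darktable-dev/bin/darktable \
            --configdir "$HOME/.config/darktable-dev" ;;
    *) echo "unknown choice" >&2; exit 1 ;;
esac
```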