Check out the :memory: option of --library… it might be of use to you; you could run a shortcut using it. See: darktable 3.6 user manual - darktable
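For anyone who wants to try it, a minimal sketch (assuming a typical Linux install where darktable is on the PATH):

```shell
# Use a transient in-memory database instead of the on-disk library.db;
# nothing imported during this session is remembered after exit.
darktable --library :memory:
```

This makes darktable behave like a plain file browser for the session, which is handy for one-off edits you don't want cluttering the library.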
I also don’t keep the images in the library for long. I import a directory (or a handful, as I’m slow with my edits), edit, export, and remove. I don’t use the :memory:
option, as it takes me days or weeks to process a trip’s photos.
No images are imported into the database as BLOBs; only processing parameters, tags, EXIF data, etc. are. This data is also written to the XMP sidecar files, so you do not need to rely on the database, as others have already mentioned.
Yeah, I guess I am essentially re-importing each time I run DT, but for me it doesn't take long, so it's just like browsing or opening whatever files I select… it's reading the data from the XMPs each time, so it could take me years to edit and it wouldn't really matter. I guess if your folder had very large raw files and/or lots of them it might be slow, but as I said, I don't use any of DT's DAM features, so this just gives me one less thing to manage/keep track of. It's also the best way I've found to work on several PCs accessing a common image pool: I keep the files in the cloud so I can edit them from my laptop or either of my desktops…
This was basically how Aperture from Apple worked back in the day (before it was discontinued). The database was represented as one monolithic file in Finder (the file manager). It was actually a directory full of renamed file blobs if you peeked behind the scenes (with a terminal), but Aperture kept tight control over it and treated it like a sealed-off box. (Very Apple-like, right?) At least there was a way for people to extract images with a bit of know-how. (Lightroom later added import support to make the task easier.)
All the other raw editors (FOSS and proprietary), however, keep things on disk, either in the location where they already exist or, depending on the app, with the option to move or copy them on import. Some have local databases to speed things up, while others just work on files or smaller directories.
As far as darktable is concerned, it has a local database that serves as a working cache so it doesn’t have to read all the XMP sidecar files all the time… as that just does not scale, especially on spinning disks (which are still necessary for larger photo collections). The database enables huge libraries and lets someone search through all the data pretty quickly. Doing that over XMPs would not work for something like a 4+ TB collection (which is what I have, spanning back to 1999).
It’s possible to recreate darktable’s library database by re-importing the images and their XMPs. I’ve done this before too (it takes hours for me). But we don’t have to worry about database corruption, as we can always rebuild, as long as we save the data back out to XMPs (which is enabled by default). This means we don’t risk losing images, or even metadata, to database corruption.
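As a rough sketch of that rebuild (assuming the default config location under ~/.config/darktable, and a build that accepts a directory on the command line; move the old database aside rather than deleting it, in case something goes wrong):

```shell
# Quit darktable first, then move the old database out of the way;
# a fresh, empty library.db is created on the next start.
mv ~/.config/darktable/library.db ~/.config/darktable/library.db.bak

# Re-importing the image tree reads every XMP sidecar back in,
# repopulating the database with the saved edits, tags, ratings, etc.
darktable /path/to/photos
```

The thumbnail cache is rebuilt lazily as you browse, which is part of why a full re-import takes a while on big collections.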
I’m happy there’s a way to use darktable in both modes (transient file browser and library), so we can all use it the way we want.
Yes that is my preference as well.
Well, this *import of files* wording seems to be confusing. But there are no files imported at all; it is just the metadata, read from the files and stored in the database (and, not to forget, a thumbnail cache will be built as well). Maybe it should be named *import image metadata* and not *import files*?
In theory you can just use your preferred file manager for browsing and open files one by one in darktable from there. But I find the current interface very clean and easy to use. Who cares where the raw files are stored on disk, once they are copied from the camera/SD card?
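For that file-manager workflow, opening a single image amounts to passing its path on the command line (a hypothetical filename here):

```shell
# Open one image directly in darktable; it gets added to the
# library as a side effect, but no folder-level import is needed.
darktable ~/Pictures/2021/IMG_1234.CR2
```

Most file managers let you wire this up as an "Open with…" entry, so double-clicking a raw does the same thing.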
It’s now called add to library