Hello! I’ve moved all my photos to an 8 TB IronWolf, which is quite noisy (according to both users and Seagate, all 8 TB+ hard disks are noisy at the moment). That makes it very noticeable whenever it’s writing data, especially in “bursts” rather than in a sustained write, as I imagine it parks its head between bursts.
With this in mind, I was wondering if there’s any darktable configuration I can change so that it only updates the XMP when I switch back to the lighttable, or something similar. It seems incredibly inefficient to write on every step of a slider drag rather than only when the mouse button is released, or to apply a sliding expiration of 5–10 seconds that resets every time the user makes an edit.
I understand the current behaviour is better protection against sudden crashes, but darktable seems stable enough for that not to happen regularly, and even if it did, a sliding expiration of 5–10 seconds doesn’t seem like it would lead to much lost progress.
This is only a curious question and not to be seen as a critique. I understand this is a very “me” problem, and if there’s no config option I’ll live with it and maybe even set up an editing space on an SSD — I guess I would benefit from faster loading times with that as well.
Isn’t this what it does already? I mean, whenever darktable crashes for me I always lose everything I did since the last time I was in the lighttable. So I assumed that the XMP is written only when exiting the darkroom, switching to another image, or exiting darktable altogether. I would love to be wrong about this, though.
You can check the attachment I left at the end: whenever I move a slider it starts writing to disk, so I reckon it doesn’t behave like that. My database is in its default ~/.config/darktable directory, which is on an SSD, so it isn’t that either, because I can hear the disk.
I also remember losing every edit when dt crashed, but that was months ago. I have not had crashes for a long time.
Checked with dt 4.1.0+454~g9666e7f58 (Ubuntu 22.04), and the XMP is touched on every action in any darkroom processing module. I like this behaviour; it protects against data loss. I did a quick search on GitHub but could not find a related commit at the moment.
i suspect that’s sqlite writing through to disk directly whenever you change an entry in the db (such as the history stack while dragging sliders). also see man creat and O_DIRECT …
not a fan. guess you can test whether this is what happens if you run with --library :memory:, if that still works nowadays.
I don’t think it is just XMP writing to the hard disk. When you move a slider, it needs to execute the image processing and maybe update the buffers. There is always going to be some disk read/write.
darktable can use the disk to cache the results of image operations, which would cause disk writes if enabled. See enable disk backend for thumbnail cache and enable disk backend for full preview cache at darktable 4.0 user manual - processing.
However, the cache is normally located under ~/.cache/darktable, which I assume to be on the same SSD that stores the darktable database:
My database is in its default ~/.config/darktable directory which is on an SSD
I think that’s rather different: the issue is about bulk editing photos on the lighttable, not about writing XMPs while dragging sliders in the darkroom (‘when applying changes to many photos at once. An example would be applying tags to all photos’).
RawTherapee has (or definitely used to have) a user-definable lag period: when dragging things, it doesn’t start to recalculate until the time has expired. So you can balance the responsiveness you like against the stress you put on your system. I think that’s neat.
It is obviously also going to depend on hardware. I put on my hard-disk and CPU monitor in Win11 and moved sliders in D&S, and not much happened with regard to hard-disk activity after the initial load of the module. My PC and video card are decent. I think if you use modules that invoke tiling and extra pipeline runs, and the hardware is not a match, then caching and buffering will happen within the software, and perhaps the OS as well. If the “after edit” setting means nothing is written until moving to a new image, and you set it to never write XMPs and use :memory: for the database and still see these disk writes, that might reveal whether dt is honouring the settings as stated.
I wonder why dt does not have a ‘save’ function. I have lost not only changes to images but also the very processing position. Would ^S be interesting to others?
If you disable writing XMP files via preferences, you can use the ‘write sidecar file’ function to mimic such a ‘save’ function. Then assign a shortcut to it, and you’re done …
Yes, this is what I was trying to explain when I mentioned ‘sliding expiration’. Every time the user makes an action the timer is reset, and it only saves when the timer reaches 0. It’s a great system in my opinion.
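For anyone curious what that sliding expiration would look like, here is a minimal Python sketch (a hypothetical illustration, not darktable code): every edit cancels and restarts a timer, so a burst of slider steps collapses into a single deferred save once the user goes idle.

```python
import threading
import time

class DebouncedSaver:
    """Sliding-expiration save: every edit resets the timer, and the
    save only fires once the user has been idle for `delay` seconds."""
    def __init__(self, save_fn, delay=5.0):
        self.save_fn = save_fn
        self.delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def on_edit(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # user acted again: reset the clock
            self._timer = threading.Timer(self.delay, self.save_fn)
            self._timer.start()

saves = []
saver = DebouncedSaver(lambda: saves.append(time.time()), delay=0.2)
for _ in range(10):    # ten rapid "slider steps"
    saver.on_edit()
    time.sleep(0.02)
time.sleep(0.5)        # go idle; the single deferred save fires
print(len(saves))  # → 1: ten edits collapsed into one write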
I guess if the user’s hardware cannot keep up with the actions, it won’t update as fast. Still, I believe it doesn’t make much sense that it would use my storage hard drive and not my /home directory, where the database and all the other config/cache files are.
See this new attached video, which confirms it’s writing to the XMP: watch the change_timestamp field changing.
It even updates when my system can’t keep up with the exposure changes.
For reference, I’m running a 3700X and an RTX 3080 (10 GB version).
So set to never write and using :memory: as the database, you still get this? I would say dt is not honouring the settings, or maybe it’s a misunderstanding on my part of what they do.