When are edits written to xmp file and database

When exactly are edits written to the XMP sidecar file and/or to the database: on closing dt, on closing the module, or on going back to the lighttable?

I’m editing scans of old negatives on film or glass plates (some are more than 100 years old) and they have a lot of ‘speckles’, dust or damage to the film. So there is a lot of editing to do with the retouch tool, often more than 100 corrections. During these edits dt closes from time to time without warning and all edits are lost. Maybe I go too fast, clicking while the ‘working…’ indicator is on. How can I ensure that my edits are stored periodically?
dt 3.8.1, W10, xmp in import.

Go back to the lighttable. There is also a button in the lighttable to write the sidecar file.

Thanks

Using heal? I think that can cause some issues with all the iterations. Try to use clone if you can.

Try astro denoise for the fine stuff… tweak the patch size… I bump it up to 4 and drop the strength way down… maybe that means fewer manual fixes?

I also observed this behaviour editing negatives with the retouch tool. On my system these crashes appear if too many instances of “healing” are running simultaneously (the critical limit seems to be about five or six on my system). I was able to significantly reduce the crashes by optimizing the “performance settings” of the application (OpenCL optimization, i.e. increasing speed). Performance tuning strongly depends on your hardware; several aspects have been discussed here in the recent past. And there is a lot of work in progress in the development version 3.9 by jenshannoschwalm on github. I also remember this video by Aurélien Pierre: [EN] darktable 3.0 to 3.8: optimisation of software performance.
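
For what it’s worth, those switches end up as plain entries in darktablerc. A minimal sketch of the relevant lines, assuming the 3.8 key names (taken from memory, so verify against your own file; the safer route is the preferences > processing dialog, and only edit the file while darktable is closed):

```
# darktablerc: OpenCL-related entries (key names assumed for 3.8)
opencl=TRUE
opencl_scheduling_profile=very fast GPU
```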

To force storing the XMP I switch to another image in the darkroom, then back to the image I am editing (needs just two clicks in the filmstrip).

Using “clone” instead of “heal”, as suggested by g-man, was not a solution in my case; the edges of the cloned area are frequently visible.

I tried astro denoise. It works partially on homogeneous surfaces, like skies without clouds. In parts with fine details it smooths too much.

Yes, I mostly used the healing tool and was jumping quite fast to the next spot. The clone tool, as you said, works only in certain parts of the image. I may have reached a limit, although my machine is quite powerful and optimized according to Aurélien’s video. After reducing my ‘jumping speed’ there were no crashes anymore.

Thanks, didn’t know that.

This sounds like a bug. Getting log data from a crash using the darktable log (likely with -d memory -d perf) would help. If just slowing down the clicks helps, it seems to indicate that the previous operation is still running or still needs to release its memory.

Without seeing an example I thought it might be worth a try, as it sounded like you might have a lot of noise, like dust, to clear… I use it at very low opacity, often even 10% or less… not sure if you tried that or not… at the default it is strong.

Actually I finished treating my 130 negatives. Some of them were badly affected, fungus or something like that, sort of worms growing out of a bright spot. For others the film/layer of the glass plate was clearly damaged (scratches etc.). So I had to use the healing tool.

@g-man: how do I get a log after a crash (I can try to reproduce one)?

Sometimes they are created automatically… you can run dt in debug mode…

https://docs.darktable.org/usermanual/3.8/en/special-topics/program-invocation/darktable/

I think -d all should track everything… Are you getting the backtrace notification when it crashes, or does it just crash?

What is the exact syntax for invoking dt with the -d option?

From the cmd prompt (Windows 10) I can start dt from its directory, but ‘darktable.exe -d’ won’t start. I tried ‘darktable.exe [-d]’ but that probably just launched normal dt. I could reproduce the crash (after >150 clicks!) but no new file was written to the .debug folder.

The manual has the details and the location of the log file.

Open a cmd prompt in Program Files/darktable/bin and run:
darktable -d perf -d memory

Don’t use the .exe.
The -d all option produces so much info that it makes the log hard to understand.
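
For reference, the full sequence would be something like this (assuming the default install path; adjust if darktable lives elsewhere on your system):

```
:: change to the darktable install directory (default path assumed)
cd "C:\Program Files\darktable\bin"
:: launch with performance and memory debug output enabled
darktable -d perf -d memory
```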

OK that worked and I found a darktable-log.txt file created today at about the right time (buried very far down in a subdirectory: C:\Users\myself\AppData\Local\Microsoft\Windows\INetCache\darktable).
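
In case anyone else has to hunt for it, a recursive search from cmd would be something like this (assuming the log lands somewhere under AppData\Local, as it did here):

```
:: search AppData\Local and all subdirectories for the log file
dir /s /b "%LOCALAPPDATA%\darktable-log.txt"
```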

I attach it. I guess the log is from the run started at 18:12:28. It’s very long and I can’t make much sense of it. Could somebody more knowledgeable have a look at it before I open a bug report on github?

darktable-log.txt (750.0 KB)

Can you delete the log file and do the -d perf -d memory run again, but only try to recreate the issue? I’m on my cellphone and this log is just too long.

Eventually I think it will be best to open an issue on GitHub, e.g. “Crash from using heal too fast”, with a copy of the log.
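
Something like this would give you a clean log covering only the crash (using the log location you found; your path may differ):

```
:: remove the old log so the next run starts fresh (location as found above)
del "%LOCALAPPDATA%\Microsoft\Windows\INetCache\darktable\darktable-log.txt"
```

Then relaunch with the same -d perf -d memory flags and reproduce just the crash.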

Hard to reproduce without any other modules activated. I reached the limit of 300 corrections several times without a crash. And the log file gets very long anyway. I’ll give it another try tomorrow. But if it’s really a bug, it’s probably a minor one.

I was sitting by the pool watching the kids and managed to go through the file. It is using OpenCL for the retouch. In some cases it was quick and in others it took 2-3 s. I can’t tell if this is from the same image or multiple images. I agree it is likely a minor bug in an extreme case, but it has been seen by at least two different people. The current darktable master has a significant number of changes to OpenCL. Maybe those changes have already improved this issue. I will try to replicate when I get some time.

@Christian_Pfister I have been working on the darktable OpenCL code quite a lot over the last months. There have been quite a number of improvements, stability fixes and debugging improvements, all now in the current github master, not in a release version.

What you describe hints at a possible “race condition” (either in the healing tool or in general pipeline processing). I personally have not used that module a lot, so I would appreciate getting hold of your raw image file and the XMP file for the image you have observed crashing.
You can either share them here or email them to hanno-at-schwalm-bremen.de (if sent via mail, I would treat this as private and not share it elsewhere).

About the runtimes you mentioned (some take long, some don’t), I am not surprised at all; the amount of processing or tiling depends a lot on internal parameters (what and where to heal).

I can send you a few, no problem. These are TIFFs from scans of B&W negatives, either film or glass plates from the 1920s and 1930s. I scanned them with an Epson V750 Pro flatbed scanner and the software that comes with it. They were converted into positives by the Epson software.