Today I imported 13k images into my library, some of them very large scans (around 50 MB each). The problem is that darktable crashes at some point, but not always after the same amount of time. This is a new problem. I suspected a memory problem, because the whole system freezes. I replicated the issue with a memory monitoring program open, so I could confirm my suspicion. And indeed: at a certain point, within 3 or 4 seconds, darktable rapidly eats all memory resources, then fills the swap file, and then my system freezes.
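For anyone who wants to watch the same thing happen from a terminal instead of a GUI monitor, a one-liner like this logs memory and swap use once per second while darktable runs (vmstat ships with Linux Mint; the log path is just an example):

    # print memory/swap statistics every second, timestamped, and keep a copy
    vmstat -t 1 | tee /tmp/memlog.txt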
Help, I have never had this kind of behaviour.
Maybe it has to do with thumbnailing files that are too big for memory, because the issue mainly occurs in the lighttable. But shouldn't these files be split up and tiled?
The attached PDF file contains the console output of flatpak run org.darktable.Darktable -d opencl -d perf -d verbose -d common
My platform is Linux Mint with 16 GB of RAM, a 2 GB swap file, and 4 GB of GPU memory.
I would be glad if anyone could point me to a solution so I can use my beloved darktable again. darktable_crash.pdf (87.5 KB)
For disk swap this is true, but we have far better technologies now.
IMO the best action here is to set up zram. It has a small CPU performance penalty, but it's not a lot, and it can effectively provide you with 8 GB+ of extra usable memory. Some people even run twice their system memory in zram without issues.
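For reference, a minimal manual setup with zramctl from util-linux looks roughly like this; the 8G size and the zstd algorithm are example values to tune, and most distributions also ship packages (zram-tools, systemd's zram-generator) that do the same thing automatically at boot:

    # create a compressed swap device in RAM and prefer it over disk swap
    sudo modprobe zram
    sudo zramctl --find --size 8G --algorithm zstd   # prints the device, e.g. /dev/zram0
    sudo mkswap /dev/zram0
    sudo swapon --priority 100 /dev/zram0

Because the swapped pages stay in (compressed) RAM, reclaiming them is far faster than paging to a disk swap file, which is what keeps the system responsive under memory pressure.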
The problem is not the import of the images; that succeeded fairly quickly. The system freezes occur now, after the import. And there is no crash, because it is a freeze; and yes, sometimes the OOM killer steps in, and other times I have to use the magic SysRq key sequence to manually trigger the OOM killer (Alt + PrintScreen + F).
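Side note for anyone trying the same key sequence: the magic SysRq key is often restricted by default. Which functions are available is controlled by the kernel.sysrq sysctl; a value of 1 enables all of them, including the F (oom-kill) function:

    # check the current SysRq mask (0 = disabled, 1 = all functions enabled)
    cat /proc/sys/kernel/sysrq
    # enable all SysRq functions until the next reboot
    echo 1 | sudo tee /proc/sys/kernel/sysrq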
For sure I can increase my available RAM by either method, but I wonder what is causing this condition. Usually, when memory is too small for the task at hand, tiling of the image should take place instead of the system freezing. I am not very keen on this workaround even if it might help, as I am intrigued by the nature of this problem.
It could be that background generation of thumbnails triggers a memory leak.
Also, try -d memory as a debug option; and since you are on Linux, you can use > /path/to/debug.txt to send the output directly to a file, then attach that instead of a PDF.
So, for example: darktable -d common -d memory > /tmp/debug.txt
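If some messages still show up in the terminal rather than in the file, they are being written to stderr; adding 2>&1 captures both streams:

    darktable -d common -d memory > /tmp/debug.txt 2>&1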
I tried it with the default resource level, and I noticed one obvious effect: the memory overload took longer to occur (128 seconds vs. 55 seconds). Here is the log; maybe you can find more hints in there: darktable_log3.txt (91.4 KB)
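For completeness: assuming resourcelevel is still the darktablerc key behind that preference (worth double-checking in your own darktablerc), the level can also be overridden per run via darktable's --conf option, which makes side-by-side comparisons like the one above easier:

    # one run with a smaller resource budget, memory debug output to a file
    flatpak run org.darktable.Darktable --conf resourcelevel=small -d common -d memory > /tmp/debug_small.txt 2>&1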