Lighttable previews are pixelated since Darktable 4.6

Depending on the size of your images and your screen it may be enough to use -m 4 or -m 5 to save space on disk. Just have a look at the ~/.cache/mipmaps... folder (subfolders 0...7) and check the sizes of the created jpg files.
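
For example, a quick way to check this from a terminal (assuming the cache is in the default location under ~/.cache/darktable; the hash in the mipmaps folder name differs on every system):

du -sh ~/.cache/darktable/mipmaps-*.d/[0-7]

Each subfolder corresponds to one mip level, and the higher levels hold the larger jpgs.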

Did you run darktable-generate-cache against an empty cache? If not: the tool only generates missing thumbnails. (Old) thumbnails that are already available are not being recreated if the history did not change.
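
If you want to rebuild everything from scratch, the rough workflow would be to move the old mipmaps folder out of the way and then regenerate. A sketch only, assuming a single mipmaps folder in the default location, with darktable closed, and with a backup path that is just an example:

mv ~/.cache/darktable/mipmaps-*.d ~/mipmaps-backup.d
darktable-generate-cache -m 4

You can delete the backup folder once you are happy with the new cache.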

You’re right: after a few tests I think -m 4 (1803x1200) is enough for my screen, which saves gigabytes of disk space.

Since it’s a new cache directory, it was empty. And yes, only new thumbnails are generated by the tool, which is a good point.

Thank you for the lively discussion and help with my problem!
It was the reference to my computer specs that pointed in the right direction.
I replaced a hard drive in early January…
The new drive has more capacity but unfortunately only a very small cache, and is therefore slow with larger amounts of data. This is exactly what affects darktable…
Moving the .config/darktable directory to a faster drive fixed my problem :slight_smile:

My old drive was a Corsair MP600 Gen4 PCIe 1TB. At 1TB it had become too small, but it was very fast!
In January I bought the Crucial P3 Plus 4TB M.2 PCIe Gen4 NVMe. This drive has more space but is less suitable for darktable, because its caching makes it very slow for larger data transfers…

Thanks again for your support!

Thanks for sharing the source of the problem. I bet you are not the only one being caught out by this.

The tricky thing was that I changed the dt software version and swapped the drive at the same time.
I then completely forgot about the drive. I also did not suspect a slow drive: I thought new PCIe Gen4 hardware always performs well, but that’s not the case.
The “Crucial P3 Plus” has read/write speeds of up to 5000/4200 MB/s, but some strange caching behaviour makes it very slow with larger amounts of data…
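
If you want to verify this behaviour yourself, a crude sustained-write test looks something like this (a sketch only; it writes a large temporary file onto the drive you are testing, so make sure there is enough free space and delete the file afterwards):

dd if=/dev/zero of=./dd-testfile bs=1M count=10000 oflag=direct status=progress
rm ./dd-testfile

On drives with a small write cache you will typically see the write rate drop sharply once the cache is exhausted.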

How would a slow disk explain the problem, and how would the fix/workaround of deleting the config result in no more pixelation?
BTW, I run a SATA SSD for my home and a SATA HDD for data, and have never experienced this problem.

Hmm, good question. I can only say it works…
(I’m just a humble photographer, not a software developer.)
When I used darktable with a fresh config folder, I only imported some 20 pics or so, not my whole library.
By the way, darktable also starts up more quickly now.

Have you selected “preferences → storage → look for updated XMP files on startup”? In that case startup will take some time with a database of 170 K images, and it is not surprising that the start is much quicker with only 20 to 30 images.

Whatever was previously present in the thumbnail cache (~/.cache/mipmaps...) and possibly responsible for the pixelation: starting with a completely fresh config also resets the cache. I have just checked this again explicitly with a fresh install of the dev version (empty config) and an “old” cache.

@wiegemalt: which operating system? How much RAM? A database (library.db) with 170 K images might have a size of several GB. How is this database cached by the hard disk and/or the OS? Only vague assumptions, as we have too little information…

I did not select that one.

Maybe I was misunderstood: I went back to my old .config/darktable folder with a database of 173,887 pics.
The only thing I changed is this: I moved my home folder back from the new but slow drive to an older but faster one.

My folder .cache/darktable/ looks like this (also the old one):

[screenshot: contents of the .cache/darktable folder]

library.db 422MB

System:
Kernel: 6.6.26-1-MANJARO arch: x86_64
Desktop: KDE Plasma v: 5.27.11 Distro: Manjaro Linux
Machine:
Type: Desktop System: Gigabyte product: X570 AORUS ELITE
CPU:
Info: 12-core model: AMD Ryzen 9 3900X bits: 64 type: MT MCP cache:
L2: 6 MiB
Graphics:
Device-1: NVIDIA TU106 [GeForce RTX 2060 SUPER] driver: nvidia v: 550.67
Info:
Memory: total: 32 GiB

This seems to be extremely small for a database with more than 170 K images (about 2.4 KB/image). Are you sure you have that many images in your library.db? If you have sqlite3 installed you could run

sqlite3 /path/to/your/library.db "select count(id) from images"

to get the number of images in your database. For comparison: my database contains about 28 K images and has a size of about 700 MB (about 25 KB/image).

You have two mipmaps-... folders (which should not be the case). I suspect the second one (mipmaps-d6034...) is the relevant one. I would remove the other one (move it to another location outside .cache).

With a size of 4.7 GB your cache seems quite small. If you open this folder, how many subfolders (0...8) do you see? Are these subfolders populated with jpegs? You need (depending on the size of your images and your screen) at least levels 0...4 to get cached images shown when pressing “w” in lighttable. Otherwise the full-screen preview has to be recalculated (which takes time).
Another comparison: a complete cache of all my 28 K images (directories 0...4) has a size of around 20 GB, so your cache is most likely incomplete.
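
To see whether the individual levels are actually populated, something along these lines works (a sketch, assuming the default cache path; adjust the folder name to match your own hash):

for d in ~/.cache/darktable/mipmaps-*.d/[0-8]; do
  printf '%s: %s files\n' "$d" "$(ls "$d" | wc -l)"
done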

If you still see images shown pixelated when pressing “w”: note the ID of the affected image (displayed in lighttable → image information) and check the jpegs with the corresponding number in the directories 0...8. What do these jpegs look like (do they exist at all)?
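
For example, for an image with ID 12345 (a made-up ID, replace it with the one shown in the image information panel), and assuming the disk cache stores the thumbnails as <imgid>.jpg per level:

ls -lh ~/.cache/darktable/mipmaps-*.d/[0-8]/12345.jpg

Levels where the file is missing are the ones darktable has to recalculate on the fly.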

What are your current settings in the “preferences → lighttable → thumbnails” section?

Last point (not related to your issue): you only need to keep the latest folder with cached_v3_kernels; the older ones can be deleted.

I just checked that for myself:

sqlite3 /home/dan/.config/darktable/library.db "select count(id) from images"
145923

And the size of the db:

pwd
/home/dan/.config/darktable
[dan@kirk darktable]$ ll -h library.db
-rw-r--r--. 1 dan dan 287M 17. Apr 11:36 library.db

But what about the cached_v* directories: what is their purpose?
du -sh *

3,0M    cached_v1_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53510405
3,1M    cached_v1_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5358605
3,1M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53510405
3,1M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53512903
3,0M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5358610
3,0M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53598
3,1M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5452306
3,1M    cached_v2_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5452902
3,0M    cached_v3_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53514602
3,0M    cached_v3_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_53515405
3,0M    cached_v3_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5452902
3,0M    cached_v3_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_5452906
3,1M    cached_v3_kernels_for_NVIDIACUDANVIDIAGeForceRTX3060Ti_55067
46G     mipmaps-3372ef87e17b52b4c224d3d71571f899069ae300.d

Cached OpenCL kernels are created for each GPU driver version and each significant change to darktable’s OpenCL algorithms. You can safely delete them all; when you start darktable, new ones will be generated using your current OpenCL GPU driver.
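
So, in the listing above, something like this would keep only the current v3 folders (or simply delete all cached_v* folders and let darktable rebuild them on the next start):

rm -rf ~/.cache/darktable/cached_v1_kernels_* ~/.cache/darktable/cached_v2_kernels_*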

You can also delete both of your mipmaps folders and then trigger new thumbnails to be generated. There is a new feature in dt that can generate them while darktable is idle. Since your DB is really big, this will take a long time.

Yes, that would indeed be the cleanest solution.

You have to enable “preferences → lighttable → generate thumbnails in background” in this case; this feature is disabled by default. I personally prefer using darktable-generate-cache.

I suspect you have a large number of unedited (rejected) images?

Is there a filter to show all edited or unedited pictures in the lighttable view?

Yes, there is. Filter by tags, check the darktable tag hierarchy.

Check “lighttable → collections → darktable → history”.

[screenshot: the darktable → history collection filter]
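
If you prefer to check directly in the database, a rough count of images that carry any edit history can also be obtained with sqlite3 (a sketch only, assuming the history table layout of current darktable versions):

sqlite3 ~/.config/darktable/library.db "select count(distinct imgid) from history"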

Indeed. According to this filter (new to me) it looks like this:

Similar for me:
[screenshot: history filter counts]

I’m wondering why “basic” appears twice?!

I guess it’s because of the two album folders I configured (mnt and home): a separate “basic” for each album.

No, a hash of the history is used for classification. You can find the code in collection.c, lines 1505 ... 1532.