culling with prerendering (slow laptop)

I am on holiday, and would like to do some preliminary culling (just exposure, lens correction, sigmoid) on a slow laptop with no GPU.

Is there a way to make Darktable render the next 10-15 images and cache the result while I am eyeballing the current one? Using DT 4.8, on Linux. I have 16 GB RAM.

Why not just use darktable-cli while you go out to dinner or something?
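
Something like this should work (untested; paths and sizes are placeholders, and I believe the main darktable instance has to be closed while it runs, since they share the library). $(FILE_NAME) is the usual export-template variable:

darktable-cli ~/Pictures/holiday '/tmp/proxies/$(FILE_NAME).jpg' --width 1920 --height 1280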

Another related approach is to create a style (containing basic corrections) and auto-apply it on import, making use of the autostyle Lua plugin:
https://docs.darktable.org/lua/stable/lua.scripts.manual/scripts/contrib/autostyle/
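
If you don’t use the script manager, a single require line in ~/.config/darktable/luarc loads it (assuming the script is installed in the usual lua/contrib location):

require "contrib/autostyle"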

Can you elaborate on how this would help me? I would end up with a bunch of JPGs, but I would want to keep working on RAWs at some point.

I do this, but AFAIK darktable does not cache the results. So when I go back and forth between images, they are recomputed.

sorry, not enough sleep… I meant darktable-generate-cache.
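
For reference, a minimal invocation looks something like this (from memory, so check --help; -m caps the largest mipmap level generated, and darktable itself should not be running at the same time):

darktable-generate-cache -m 5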

Is this maybe related to …?

To verify, try the latest AppImage (nightly); the fix is merged into it.

It is under Assets.

Please do make a backup, as 4.9 is in development.

These days I use the crawler (not darktable-generate-cache). I can’t say if it matters or not, but it was suggested on the GitHub thread.

To work around the issue on 4.8.1, I apply a basic style, just so darktable treats the image as modified.

Hope this helps.

I have mixed feelings about the way the thumb crawler works.
I took this discussion as an opportunity to raise a feature request:
Add option to limit thumb crawler to current collection · Issue #17348 · darktable-org/darktable · GitHub

– edit: never mind

I don’t know if these affect culling:

prefer performance over quality
Enable this option to render thumbnails and previews at a lower quality. This increases the rendering speed by a factor of 4.

reduce resolution of preview image
Reduce the resolution of the navigation preview image (choose from “original”, “1/2”, “1/3” or “1/4” size). This may improve the speed of the rendering.

Set a fast algorithm for pixel interpolator (warp) and pixel interpolator (scaling).

Set the demosaic algorithm to PPG on one image and copy that setting to all the others. You can revert it later.

Thanks, it was a cache issue. Specifically, the thumbnail resolution I set in preferences was lower than what my current monitor displays, so it had to be recomputed. Fixed now.

It is still not clear to me how the (disk) cache works. Now I am rerunning darktable-generate-cache, and it seems to be generating all thumbnails. Is there a way I can limit it to, say, the images taken in the last month, or a specific folder? The ideal situation would be to have thumbnails generated only for the images I have visited in the last month or so, and automatically garbage-collected after that time passes.

My understanding is that the thumbnails cache on the disk has no limit, so I am not sure how to keep it in check. It has been running for an hour now and already wrote a few gigabytes.

https://docs.darktable.org/usermanual/4.6/en/special-topics/program-invocation/darktable-generate-cache/

You can specify a range of image IDs to limit cache generation.
I find this a bit clumsy, since consecutive image IDs do not necessarily represent the collection I’m currently working on.
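
If the batch lives in a single folder, you could look the ID range up in the library database first. A sketch from memory (double-check the schema and close darktable first; the folder pattern and the resulting IDs below are placeholders):

sqlite3 ~/.config/darktable/library.db \
  "SELECT MIN(i.id), MAX(i.id) FROM images i JOIN film_rolls f ON i.film_id = f.id WHERE f.folder LIKE '%/holiday%';"
darktable-generate-cache --min-imgid 12345 --max-imgid 12500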

As @vbs stated earlier: you may want to use the crawler instead.

I apologize for my ignorance, but I don’t know what the “crawler” is, or how to enable it. Searching the manual for “crawler” yields no results.

In discussions I also see mentions of the “backcrawler”; is that related?

Enable “generate thumbnails in background” in preferences and the crawler will automatically generate thumbnails as background activity as long as there is no user interaction for at least 5s (you can change that value in darktablerc)

https://docs.darktable.org/usermanual/4.6/en/preferences-settings/lighttable/#thumbnails
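
I don’t remember the exact key name off-hand, but grepping the config file (default location assumed) turns up the thumbnail/crawler settings:

grep -i thumb ~/.config/darktable/darktablerc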

Thanks, but will this generate thumbnails for all images in my collection eventually? That is not something I really want.

I have pretty much finished editing most images, so I don’t want them to have thumbnails. I have about 35k images in darktable, and work on about 1k–2k at a time (reducing that to below 200, ideally 100, after I get back from a trip). Making high-res thumbnails for all of them is a waste of CPU and disk space.

If you insist on the thumbnail cache being temporary, you probably want to disable the secondary disk cache entirely (this might decrease lighttable performance), or otherwise manage your cache directory at the filesystem level.

The latter might be something the metadata-driven views of git-annex could help with:
https://git-annex.branchable.com/tips/metadata_driven_views/
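
A rough sketch of that idea, from memory (it assumes the image tree is already a git-annex repository; check the tips page above for the exact syntax):

git annex metadata -s status=working 2024-07-trip/*.RAF
git annex view status=working
git annex vpop

The view checks out a tree containing only the tagged files; vpop returns to the full tree.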

Start a feature request.

Good idea, see:

You could already automate this, at least approximately. To find files untouched for 43 or more days (the example uses -mtime, i.e. modification time; for access time proper, substitute -atime):

kofa@eagle:~/.cache/darktable$ find mipmap* -mtime +42
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/8.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/12.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/11.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/4.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/9.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/6.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/7.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/13.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/1.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/20.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/10.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/15.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/5.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/14.jpg
mipmaps-6ac0672dac6fe81d5e75505cc7fa15bfeed1acf8.d/3/3.jpg
kofa@eagle:~/.cache/darktable$ 

Subject to the *atime family of mount options; relatime is the default:

relatime
Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than or equal to the current modify or change time. (Similar to noatime, but it doesn’t break mutt(1) or other applications that need to know if a file has been read since the last time it was modified.)

Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file’s last access time is always updated if it is more than 1 day old.
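
To see what your filesystem actually records for a given thumbnail, GNU stat prints both timestamps:

stat -c 'atime=%x mtime=%y %n' mipmaps-*.d/3/*.jpg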

To remove them: find mipmap* -mtime +42 -delete
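
If that works for you, it could run from crontab weekly; a sketch, assuming the default cache path and that access times are being updated (see the relatime note above):

0 3 * * 0 find $HOME/.cache/darktable/mipmaps-*.d -type f -atime +42 -delete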

In my opinion, most likely you don’t need all 35k images permanently imported in darktable.
Given the constraints:

  • slow laptop
  • working on 1–2k at a time (preferably 100–200)
  • not wanting to waste extra disk space and CPU power

I would remove the extra images so darktable does not look at them. The processing is still stored in the .xmp sidecars, and the images can be re-imported when needed.

When we want to see the images in the lighttable, we need the thumbnails; we just have to decide how big.

  • if the thumbnails are generated ahead of time, we lose disk space
  • to generate them ahead of time, we have to use the CPU while the computer is idle (so they can be ready)
  • if we don’t want to lose the disk space or use CPU power while the computer is idle, then we have to use the CPU when viewing

Not sure how everything can be fulfilled without sacrificing at least one item.

If we assume we work with unprocessed images and rely on the built-in preview, in theory no thumbnails need to be generated at all. But once the images are processed, the thumbnail will be based on the processed image, not on the embedded JPEG.

Also, in 4.8 a workflow based on the embedded JPEG is slow because of …

Not sure what I am missing, but it is almost like a catch-22.
If I were in your shoes, I would likely remove most of the images, import them in batches, and move images between folders (processed / not processed / working), something like this.