performance issue with NAS

Your configuration is very specific, and I don't know all the details of the mount. But just for the sake of it, could you try a much simpler mount? Unless, of course, you have specific reasons for the options you are using.

I use the following on a 3 TB drive and it performs quite well (and my "NAS" is a router with a USB drive attached). I did try a soft mount before but had issues. Again, just a thought.

nfs auto,tcp,user,intr,timeo=10,retrans=3,retry=3 0 0
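Spelled out as a full /etc/fstab entry, those options would look something like the line below (the server address and mount point are placeholders; adjust them to your setup):

```
# /etc/fstab -- hypothetical NAS address and paths, only the options column is from the post above
192.168.1.1:/mnt/usb  /mnt/nas  nfs  auto,tcp,user,intr,timeo=10,retrans=3,retry=3  0  0
```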

Most of those configuration options were attempts at fixing the issue :slight_smile: I tried all kinds of NFS settings first.

I really appreciate all the replies. However, there is still one big question (at least for me).
Why would even a completely faulty NFS mount affect working on files in a local collection?
Once the collection on the NFS share is fully imported and any startup checks have finished, why would darktable keep trying to read from or write to a collection you are not working on?

It would make sense if I had the problem while working on the NFS share, but not on the local (non-synced) files.

Wired, on the same router.

How large is the collection (picture count in CollA)? And how many folders does it have?

I would appreciate it if this could be moved to GitHub; can you open an issue there? TIA.


Ticket #13569 created

Ticket created on GitHub.


Could it be that the SELECT statement takes a long time?
From the command line, after closing darktable, you could open your library.db with sqlite3 and issue the same SELECT. If it is slow, then some index may be missing, making the query slow.
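A minimal sketch of that check. It runs on a scratch database whose table mimics darktable's `images` table (the table and column names here are assumptions for illustration); against the real thing you would point sqlite3 at `~/.config/darktable/library.db` with darktable closed, since it holds a lock on the file, and use the actual slow query (capturable by starting darktable with `darktable -d sql`):

```shell
#!/bin/sh
set -e
db=$(mktemp)

# Scratch table standing in for darktable's images table (names are illustrative)
sqlite3 "$db" "CREATE TABLE images(id INTEGER PRIMARY KEY, film_id INTEGER);
               INSERT INTO images(film_id) VALUES (1),(1),(2);"

# .timer on prints wall-clock time per statement; a slow SELECT shows up here
sqlite3 "$db" <<'EOF'
.timer on
SELECT COUNT(*) FROM images;
EOF

# EXPLAIN QUERY PLAN: "SCAN images" means a full-table scan (no usable index)
sqlite3 "$db" "EXPLAIN QUERY PLAN SELECT id FROM images WHERE film_id = 1;"

# Adding the missing index turns the SCAN into an indexed SEARCH
sqlite3 "$db" "CREATE INDEX idx_images_film_id ON images(film_id);"
sqlite3 "$db" "EXPLAIN QUERY PLAN SELECT id FROM images WHERE film_id = 1;"

rm -f "$db"
```

If the plan still says SCAN after the query you actually care about, that is the point to report on the GitHub issue, since the fix (an index) belongs in darktable's schema.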