Your configuration is very specific. I don’t even know all the details of the mount options. But just for the sake of it: do you think you could try a much simpler mount? Unless, of course, you have very specific reasons to use what you are using.
I use the following on a 3TB drive and it performs quite well (and my “nas” is just a router with a USB drive attached). I did try a soft mount before but had issues with it. Again, just a thought.
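(Not the exact line from my setup, but a minimal NFS entry would look along these lines; the server name and paths below are placeholders:)

```
# /etc/fstab — minimal NFS mount; "nas" and both paths are placeholders
nas:/export/photos  /mnt/photos  nfs  defaults  0  0
```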
Most of the configuration options were attempts at fixing the issue; I tried all kinds of NFS settings first.
I really appreciate all the replies. However, there is still one big question (at least for me).
Why would even a completely faulty NFS mount affect working on files in a local collection?
Once the collection on the NFS share is completely imported and the startup checks are done, why would darktable keep trying to read/write that collection if you are not working on it?
It would make sense if I had the problem while working on the NFS share, but not on the local files (which are not synced).
Could it be that the SELECT statement took a long time?
From the command line, after closing darktable, you could try opening your library.db with sqlite3 and issuing the same SELECT. If it is slow, some index may be missing.
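To illustrate what “a missing index” looks like in practice, here is a toy sqlite3 session. The table and column names are made up for the demo, not darktable’s actual schema; for the real check, run the slow SELECT from your log against library.db, ideally with `.timer on` in an interactive session:

```shell
# Toy demonstration (requires the sqlite3 CLI). NOT darktable's real schema;
# it only shows how EXPLAIN QUERY PLAN reveals a missing index.
rm -f /tmp/demo.db
sqlite3 /tmp/demo.db "CREATE TABLE images (id INTEGER PRIMARY KEY, film_id INTEGER, filename TEXT);"

# Without an index on film_id the planner reports a full table SCAN:
sqlite3 /tmp/demo.db "EXPLAIN QUERY PLAN SELECT * FROM images WHERE film_id = 1;"

# After adding the index, the plan switches to a SEARCH ... USING INDEX:
sqlite3 /tmp/demo.db "CREATE INDEX images_film_id ON images (film_id);"
sqlite3 /tmp/demo.db "EXPLAIN QUERY PLAN SELECT * FROM images WHERE film_id = 1;"
```

In an interactive sqlite3 session, `.timer on` prints per-statement run times, which makes a slow SELECT easy to confirm before and after adding an index.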