Performance issue with NAS

The output always stops on these three lines:

637,643112 [sql] /usr/src/debug/darktable/darktable-4.2.0/src/common/film.c:525, function dt_film_set_folder_status(): prepare "DELETE FROM memory.film_folder"
637,643979 [sql] /usr/src/debug/darktable/darktable-4.2.0/src/common/film.c:531, function dt_film_set_folder_status(): prepare "SELECT id, folder FROM main.film_rolls"
637,644032 [sql] /usr/src/debug/darktable/darktable-4.2.0/src/common/film.c:536, function dt_film_set_folder_status(): prepare "INSERT INTO memory.film_folder (id, status) VALUES (?1, ?2)"

The following line appeared after darktable started responding again:

769,254071 [sql] /usr/src/debug/darktable/darktable-4.2.0/src/libs/collect.c:1395, function tree_view(): prepare "SELECT folder, film_rolls_id, COUNT(*) AS count, status FROM main.images AS mi JOIN (SELECT fr.id AS film_rolls_id, folder, status FROM main.film_rolls AS fr JOIN memory.film_folder AS ff ON fr.id = ff.id) ON film_id = film_rolls_id WHERE (1=1) GROUP BY folder, film_rolls_id"
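For context (an assumption based on those queries, not a statement about darktable internals): dt_film_set_folder_status() appears to walk every film roll in the library and record whether each folder is currently reachable. If so, that would explain why the freeze hits local work too, since a folder check on an unresponsive NFS mount blocks until the NFS timeout expires. A rough shell analogy of such a scan, assuming the default library path:

sqlite3 ~/.config/darktable/library.db "SELECT folder FROM film_rolls;" |
while read -r folder; do
    # this existence check blocks if the folder sits on a stale NFS mount
    test -d "$folder" && echo "$folder: reachable" || echo "$folder: unreachable"
done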

What action are you trying to perform? Does your user have read/write permissions to the share?

Check what version of NFS is available on the NAS and on your workstation. If they're running different versions, the mismatch could affect data transfer.
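For example, with standard nfs-utils and util-linux tooling (the mount points shown are just whatever you have mounted):

nfsstat -m                                   # lists each NFS mount with its vers= option
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS # same information via util-linux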

Do you have "look for updated XMP files on startup" (preferences > storage) turned on?

Just a suggestion for best practice: take a look at the local copies functionality for remote files.

It gets stuck randomly.
Yes, I have read/write permissions. In fact nothing ever fails; darktable just freezes for a few seconds at random. Sometimes 2-3 seconds, sometimes 20+.

No, I don't.

OK, but that affects local files as well.
That is what is maddening. First, it's one share and not the other on the same NAS with the same settings. Second, it doesn't matter whether I am working on a local file: if the troublesome share is connected, the problem appears; if I remove it from the collection, the problem disappears, even on the second (non-troublesome) share.

Unfortunately, no, I don't.

There is no problem with another share on the same NAS with the same settings.
The problem only appears when I import one specific share, and once I do, it appears whether I am working on NAS files or local files.

Ah, I missed that. Dumb question: have you tried sharing out via SMB or SSHFS to see if it's something in the NFS implementation? I run my shares via SMB and haven't seen this issue (Fedora 27 desktop and Debian file server). I could try an NFS export and see if it behaves the same way.
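If you do test SMB, a cifs line in fstab could look something like the one below; the share name, credentials file, and uid/gid are placeholders to adapt:

//192.168.1.50/photo /nas/pics cifs credentials=/etc/nas-creds,uid=1000,gid=1000,iocharset=utf8,noauto,x-systemd.automount 0 0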

I'll try the SMB option.
However, it is weird that the problem only occurs with one particular share. I have another share (smaller, but still with thousands of files) that causes no problem.
What I am also trying to figure out is why a share would affect work on a local file. I would presume that darktable only tries to read or write to the share when I am working on it.
Edit to clarify: local files in a local collection, not local files synced with the NAS.

Can you share the command or part of the fstab that you’re using to mount these shares? And perhaps also the NFS config from the server, if you can.

192.168.1.50:/volume1/photo /nas/pics nfs soft,intr,rw,nosuid,noatime,user,rsize=32768,wsize=32768,noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0

The NAS doesn't show much in the way of config.
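One thing in that mount line that might matter (just a guess): x-systemd.idle-timeout=1min tells systemd to unmount the share after a minute of inactivity, so the next access has to re-trigger the automount, which can stall for a few seconds, and soft with a short timeo lets requests give up rather than wait. For a test, a variant with a hard mount and no idle timeout would look like:

192.168.1.50:/volume1/photo /nas/pics nfs hard,rw,nosuid,noatime,rsize=32768,wsize=32768,noauto,x-systemd.automount,x-systemd.device-timeout=10 0 0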

How are you connected to the NAS, wired or wireless? If wireless, how is the signal quality?

Your configuration is very specific; I don't even know all the details of that mount. But just for the sake of it, could you try a much simpler mount? Unless, of course, you have very specific reasons for what you are using.

I use the following on a 3 TB drive and it performs quite well (and my “nas” is a router with a USB drive). I tried soft mounts before but had issues. Again, just a thought.

nfs auto,tcp,user,intr,timeo=10,retrans=3,retry=3 0 0
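For comparison, a complete line of that shape, borrowing the server path and mount point from the earlier post, would be:

192.168.1.50:/volume1/photo /nas/pics nfs auto,tcp,user,intr,timeo=10,retrans=3,retry=3 0 0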

Most of those options were attempts at fixing the issue :slight_smile: I tried all kinds of NFS settings first.

I really appreciate all the replies. However, there is still one big question (at least for me).
Why would even a completely faulty NFS mount affect working on files in a local collection?
Once the collection on the NFS share is fully imported, and after any startup checks, why would darktable keep trying to read from or write to a collection I am not working on?

It would make sense if I had the problem while working on the NFS share but not on the local files (not synced).

Wired, on the same router.

How large is the collection (picture count on CollA)? And how many folders does it have?

I would appreciate it if this could be moved to GitHub; can you open an issue there? TIA.
