Both of your library.db files should be local, I think. If they’re remote, it’ll be much slower.
Now you have two sets of files: some copied locally, say to ~/Photos, and the remote drive you mount, say at /media/gadolf/Photos.
Load the local library and add ~/Photos. Close darktable. Make sure you mount your SMB share. Open darktable with the ‘remote’ library.db and add all the files in /media/gadolf/Photos.
You might need to separate other things too, like the cache and thumbnail directories (they should be on your local disk for speed).
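A minimal sketch of the two sessions described above (darktable's --library and --cachedir switches are real options; all paths here are examples, so pick locations that suit you):

```shell
# Session 1: default (local) library -- import ~/Photos in the UI.
# Session 2: a second library.db with its own cache dir -- import /media/gadolf/Photos.
REMOTE_LIB="$HOME/.config/darktable/library-remote.db"
REMOTE_CACHE="$HOME/.cache/darktable-remote"
mkdir -p "$REMOTE_CACHE"

# Echoed here so the sketch is safe to paste; drop the `echo` to launch for real.
echo darktable                                                        # session 1
echo darktable --library "$REMOTE_LIB" --cachedir "$REMOTE_CACHE"     # session 2
```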
As Mica said, the library file is local, but the RAW files are not. You need to add the files to the alternative library once they’re in their final location; otherwise darktable will have the wrong paths in its database.
Sorry if that wasn’t clear before!
@paperdigits and @guille2306.
So, if I understood it, the “remote” library will actually sit on the local computer, not on the remote server where all the image files are (except the new ones that are still being edited).
So the initial “remote” database load will have to be done when all the files are already in the remote server, right?
Besides, it would be better to have a script to start darktable with the “remote” library and thumbnail-cache options.
If I want to browse/edit files that stay on the remote server, I’ll start darktable using the --library and --cachedir options; otherwise, I’ll just start it the usual way.
I don’t know, even if it sounds doable and doesn’t require hacking skills, it still feels a bit like a hack, and not very fluid.
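The script idea can be as small as a one-file launcher. This is a sketch, assuming the "remote" library and cache paths shown (adjust them to taste); it writes an executable `dt-remote` into ~/bin:

```shell
# Create a small launcher for the "remote" library (paths are examples).
mkdir -p "$HOME/bin"
cat > "$HOME/bin/dt-remote" <<'EOF'
#!/bin/sh
# Launch darktable with a second library.db and its own cache dir,
# so the everyday (local) setup stays untouched.
REMOTE_LIB="$HOME/.config/darktable/library-remote.db"
REMOTE_CACHE="$HOME/.cache/darktable-remote"
mkdir -p "$REMOTE_CACHE"
exec darktable --library "$REMOTE_LIB" --cachedir "$REMOTE_CACHE" "$@"
EOF
chmod +x "$HOME/bin/dt-remote"
```

Then `dt-remote` opens the remote collection, and plain `darktable` keeps behaving as before.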
Great!
From the link provided by @paperdigits, I finally succeeded in working both with files from the network share and with files on the computer, each set represented by its own collection.
So now that it’s proven to work, I’ll attach the external HD to my computer, load all the thousands of images into the database, then re-attach the HD to the server and use the “search filmroll” function with the top-level folder.
I had some permissions hiccups because I was mounting the network share under the usual /media folder. So I decided to mount it in my home folder, using this:
sudo mount -t cifs -o username=gustavo,uid=$(id -u),gid=$(id -g) //192.168.y.xx/fotos /home/gustavo/media/gustavo/fotos
With that, even though mounting requires elevated privileges with sudo, the uid/gid options assign the mount point’s ownership to my user, so darktable can write to it.
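To make that mount survive reboots and keep the password out of the shell history, the same options can go in /etc/fstab with a credentials file. A sketch, assuming uid/gid 1000 (check yours with `id`); the share and mount point are the ones from the command above:

```text
# /etc/fstab entry -- cifs mount with ownership mapped to the local user
//192.168.y.xx/fotos  /home/gustavo/media/gustavo/fotos  cifs  credentials=/home/gustavo/.smbcredentials,uid=1000,gid=1000  0  0
```

where /home/gustavo/.smbcredentials contains `username=gustavo` and `password=...` on separate lines, and should be `chmod 600` so only you can read it.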
This opens a whole new world to me, because, so far, I’ve been using only the darkroom. Now I can put all my images under the darktable umbrella.
Although it is not technically necessary to separate the libraries, darktable may respond slowly when a single library holds many thousands of images. That’s not my case, but I’ve seen comments about it on the forum. You may want to check that (or run your own experiment; if it’s slow for everyday use, just rename/move the library file afterwards).
Your thumbnails are saved in a hidden directory in your home folder. If your home folder is mounted on a fast SSD, browsing among your pictures will be fast.
Please Read The Fine Manual on how thumbnails are created.
DanielLikesDT
Does anyone know how much space needs to be allocated for the thumbnails on your local hard drive? I mean per picture, or per 1000 pictures? Just to make sure I can browse through my thumbnails even if I am not connected to my server.
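I don’t have a fixed per-picture figure (it depends on image size and the mip levels darktable generates), but you can measure your own setup: the thumbnail cache lives under ~/.cache/darktable by default, or wherever --cachedir points. A rough sketch:

```shell
# Measure the darktable thumbnail cache and report an average per cached file.
# Default cache location; adjust if you launch with --cachedir.
CACHE="${XDG_CACHE_HOME:-$HOME/.cache}/darktable"
if [ -d "$CACHE" ]; then
    TOTAL_KB=$(du -sk "$CACHE" | awk '{print $1}')
    COUNT=$(find "$CACHE" -type f | wc -l)
    echo "cache: ${TOTAL_KB} KB in ${COUNT} files"
    if [ "$COUNT" -gt 0 ]; then
        echo "average: $((TOTAL_KB / COUNT)) KB per cached file"
    fi
else
    echo "no thumbnail cache at $CACHE yet"
fi
```

Note that darktable can keep several thumbnail sizes per image, so “per cached file” is not exactly “per picture”; dividing the total by your image count gives a better per-picture estimate.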
SMB is not encrypted either unless you set it up to be encrypted. At least that is the situation on my Win/10 box when talking to a Synology NAS box (that is using Samba). You can confirm whether the data is encrypted or not by using Wireshark.
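On the server side you can force encryption for every connection. A sketch of the smb.conf setting, assuming a reasonably recent Samba (the parameter is `server smb encrypt` on current releases, plain `smb encrypt` on older ones; check what your NAS ships):

```ini
[global]
    # refuse any connection that will not encrypt (requires SMB3)
    server smb encrypt = required
    server min protocol = SMB3
```

With `required` set, an unencrypted client simply cannot connect, which you can again verify with Wireshark.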