@paperdigits and @guille2306.
So, if I understood correctly, the “remote” library will actually sit on the local computer, not on the remote server where all the image files are (except the new ones that still need to be edited).
So the initial “remote” database load will have to be done once all the files are already on the remote server, right?
Besides, it would be better to have a script to start darktable with the “remote” library and thumbnail cache options.
If I want to browse/edit files that stay on the remote server, I’ll start darktable using the --library and --cachedir options (something like the sketch below); otherwise, I’ll just start it the usual way.
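Just a rough sketch of what I mean, with placeholder paths for the separate library file and thumbnail cache (the actual locations would of course be whatever I end up choosing):

```sh
#!/bin/sh
# Launcher sketch for the "remote" collection.
# The paths below are placeholders for my setup.
REMOTE_LIBRARY="$HOME/.config/darktable-remote/library.db"
REMOTE_CACHE="$HOME/.cache/darktable-remote"

mkdir -p "$REMOTE_CACHE"

# Point darktable at the separate library and thumbnail cache,
# leaving the default library untouched for local work.
darktable --library "$REMOTE_LIBRARY" --cachedir "$REMOTE_CACHE" "$@"
```

The usual launcher/menu entry would then keep using the default library for everything else.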
I don’t know; even if it sounds doable and doesn’t require real hacking skills, it still feels a bit like a hack, and not very fluid.
I will have to think a bit about whether the Digikam way wouldn’t be more usable. After all, it lets me populate the database locally, without stressing the network, by connecting the external hard drive directly to my computer (where Digikam is installed); only after that would I attach the drive to the server and just “move” the collection to another location, which I understand means changing the image paths in the database. And if I’m not wrong, I could even set up Digikam to use darktable as an external raw editor.
Am I missing something regarding the pure darktable solution?
Regardless of my decision, thank you very much for helping.