Working with two libraries / or other approach

The library is just a database (library.db, I think)

@elstoc @DanielLikesDT FYI: The configuration and operation files of darktable.pdf (183.9 KB)

Thanks. Just out of curiosity, what’s the source of this file?

BTW on a related note, I just discovered (in /usr/share/doc/darktable) a file called darktablerc.html that describes most of the settings in the config file.

Yep, that’s a good one… most of the entries come directly from the preferences tabs, but there are a few that don’t. The file is translated from a French author; darktable.fr is an awesome site if you can read French. I have translated some material and watched some videos with translated subtitles…


I have tried this approach and it works very well. To summarize:

  • just start darktable from your launcher or whatever you use (I am using Ubuntu) to get the default library, which is basically empty except for the pictures you are currently working on.
  • use a script or the command line to start darktable with darktable --library <library file> to access a secondary library, which is filled with the content from the server or wherever the majority of your data is stored (see the sketch below).
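
A minimal launcher sketch, assuming the secondary database lives at ~/photo-archive/library.db (an example path, not a darktable default; adjust it to wherever your archive library sits):

#!/bin/sh
# Sketch: open darktable on a secondary library database instead of the default one.
# The path below is only an example location for the archive library.db.
exec darktable --library "$HOME/photo-archive/library.db"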

As already pointed out, this approach is very easy to handle and acceptably fast for the local editing work, while it is still possible to check all historical files through the darktable interface.

The only thing I am not sure about is the database and lock files: I am not sure which of those can be deleted. I will check the proposed file.

How? I mean, the thumbnail part.

Hi, I’m trying to reproduce @guille2306’s workflow to start working with two libraries: one on a remote server (a home-made proto-NAS), the other one on my computer for the ongoing work.
I’m getting, though, this error when starting darktable with the --library option:

gustavo@N4050:~$ darktable --library smb://192.168.1.71/fotos/HD-imagens/darktable/library.db
[defaults] found a 64-bit system with 12175304 kb ram and 4 cores (0 atom based)
[defaults] setting high quality defaults
[backup failed] /home/gustavo/.config/darktable/data.db -> /home/gustavo/.config/darktable/data.db-pre-3.1.0
gustavo@N4050:~$ ^C
gustavo@N4050:~$ darktable --library smb://192.168.1.71/fotos/HD-imagens/darktable/library.db-pre-3.1.0
[defaults] found a 64-bit system with 12175304 kb ram and 4 cores (0 atom based)
[defaults] setting high quality defaults
[backup failed] /home/gustavo/.config/darktable/data.db -> /home/gustavo/.config/darktable/data.db-pre-3.1.0

darktable starts empty.
What does “backup failed” mean?
The steps I took:
1 Initial database load
1.1 Detach external HD from NAS and attach it into computer
1.2 Clear darktable database (rename ~/.config/darktable folder)
1.3 Clear darktable cache (delete ~/.cache/darktable)
1.4 Open darktable
1.5 Import folder from plugged-in HD
1.6 Close darktable
1.7 Copy ~/.config/darktable folder into plugged-in HD
1.8 Open darktable
1.9 Select all images
1.10 Remove selected
1.11 close darktable
1.12 Detach HD from computer and attach it into NAS
1.13 Open darktable: darktable --library smb://192.168.1.71/fotos/HD-imagens/darktable/library.db

EDIT: I added the --cachedir option so that darktable fetches thumbnails from my computer (they were previously created in step 1.5 above), and the backup warning/error is gone, but darktable still shows no images:

gustavo@N4050:~$ darktable --library smb://192.168.1.71/fotos/HD-imagens/darktable/library.db --cachedir /home/gustavo/.cache/darktable
gustavo@N4050:~$

Does the smb:// protocol handler work? Why not just mount the share first, then refer to the local path?
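
Something like this, for example (a rough sketch; /mnt/fotos is a placeholder mount point and the username is an assumption, while the share and database paths are the ones from your command above):

sudo mkdir -p /mnt/fotos
sudo mount -t cifs //192.168.1.71/fotos /mnt/fotos -o username=gustavo,uid=$(id -u),gid=$(id -g)
darktable --library /mnt/fotos/HD-imagens/darktable/library.db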

If I understand correctly, you’re generating the database on the computer and then moving the drive and the database together? I don’t know if the database keeps the file paths as relative paths. If they’re absolute paths, darktable may be looking for the files on your computer, even if you moved the database.

Of course, thanks!
Now struggling against permission stuff. Since mount can only be issued by root, the local path ends up owned by root, and darktable gives this error (I think it has to do with the local path’s permissions)

EDIT: chown doesn’t work when I try to change the ownership of that folder.

This thing is starting to get complicated …

Correct.

Hmmm… When I read your workflow

… I could only conclude that the alternative library you’re referring to was the one located in the “final long term location”, which, in my case, would be the network share. If not, then I didn’t understand that part of your workflow.

Both of your library.db files should be local, I think. If they’re remote, it’ll be much slower.

Now you have two sets of files: some copied locally, say ~/Photos. Then you have your remote drive that you mount, say /media/gadolf/Photos.

Load the local library and add ~/Photos. Close darktable. Make sure you mount your SMB share. Open DT with the “remote” library.db and add all the files in /media/gadolf/Photos.

You might need other options, like having separate cache dirs and thumb dirs (they should be on your local disk for speed), something like the sketch below.
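
For example (a sketch only; the darktable-remote paths are made-up names for a second library and cache location, while --library and --cachedir are the relevant darktable options):

# everyday, local library: just darktable's defaults
darktable

# "remote" library: local database and local thumbnail cache, with the image files on the mounted share
mkdir -p "$HOME/.config/darktable-remote" "$HOME/.cache/darktable-remote"
darktable --library "$HOME/.config/darktable-remote/library.db" --cachedir "$HOME/.cache/darktable-remote"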

As Mica said, the library file is local, but the RAW files are not. You need to add the files to the alternative library once they’re in their final location, otherwise darktable will have the wrong paths in the database.
Sorry if that wasn’t clear before!

@paperdigits and @guille2306.
So, if I understood it, the “remote” library will actually sit on the local computer, not on the remote server where all the image files are (except the new ones that are still being edited).
So the initial “remote” database load will have to be done when all the files are already on the remote server, right?
Besides, it will be better to have a script to start darktable using the “remote” library and thumbnail cache options.

If I want to browse/edit files that stay on the remote server, I’ll start darktable using the --library and --cachedir options (something like the script sketched below); otherwise, I’ll just start it the usual way.
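
Such a script could look roughly like this (a sketch; the mount point, library path and cache path are assumptions to adapt, only --library and --cachedir are actual darktable options):

#!/bin/sh
# Sketch: start darktable on the "remote" library with its own thumbnail cache.
MOUNT="$HOME/media/gustavo/fotos"                 # assumed mount point of the SMB share
LIB="$HOME/.config/darktable-remote/library.db"   # assumed local path for the "remote" database
CACHE="$HOME/.cache/darktable-remote"             # separate thumbnail cache, as suggested above

# Refuse to start if the share is not mounted, so darktable does not list the images as missing.
mountpoint -q "$MOUNT" || { echo "SMB share not mounted at $MOUNT" >&2; exit 1; }

mkdir -p "$CACHE"
exec darktable --library "$LIB" --cachedir "$CACHE"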

I don’t know; even if it sounds doable and doesn’t require hacking skills, it still feels a bit like a hack, and not very fluid.

I will have to think a bit about whether the Digikam way wouldn’t be more usable. After all, it allows me to populate the database locally, without burdening the network, by connecting the external hard drive directly to my computer (where Digikam is installed), and only after that attach the HD to the server and just “move” the collection to another location, which I understand as changing the image paths in the database. And if I’m not wrong, I could even set up Digikam to use darktable as an external raw editor.

Am I missing something regarding the pure darktable solution?

Regardless of my decision, thank you very much for helping.

That is the most straightforward way, yes.

You can also change the location of images using the collect images module: Moving folders in the database - #2 by Pascal_Obry

If you follow @paperdigits’ suggestion about the collect images module, the approach would be exactly the same as the one you propose with Digikam:

  • populate the alternative library with local files
  • move the files (you can also move the library, but then working with it may be slow)
  • start darktable with the alternative library and tell it where to find the ‘missing’ files using the collect images module (see the last point in this manual page: https://www.darktable.org/usermanual/en/collect_images.html)

Great!
From the link provided by @paperdigits, I finally succeeded in working both with files from the network share and with files from the computer, each set represented by a collection.
So now that it has proven to work, I’ll attach the external HD to my computer, load all those thousands of images into the database, then re-attach the HD to the server, and use the “search filmroll” function with the top-level folder.
I had some hiccups due to permission issues because I was mounting the network share in the usual /media folder, so I decided to mount it in my home folder, using this:

sudo mount -t cifs -o username=gustavo,uid=$(id -u),gid=$(id -g) //192.168.y.xx/fotos /home/gustavo/media/gustavo/fotos

With that, even though the mount has to be done with elevated privileges via sudo, the mount point ends up owned by my user, so darktable can write to it.
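
If you want the mount to survive a reboot, an /etc/fstab line along these lines should do the same thing (a sketch; the uid/gid numbers and the credentials file are assumptions to adapt):

//192.168.y.xx/fotos /home/gustavo/media/gustavo/fotos cifs credentials=/home/gustavo/.smbcredentials,uid=1000,gid=1000 0 0

where /home/gustavo/.smbcredentials is a file (readable only by you, e.g. chmod 600) containing the username= and password= lines, so they don’t end up on the command line.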

This opens a whole new world to me, because, so far, I’ve been using only the darkroom. Now I can put all my images under the darktable umbrella. :open_umbrella:

@guille2306 and @paperdigits, thanks a lot for your help!

EDIT: interesting to note that this way I won’t need an alternative library.


You’re welcome!

Although it is not technically necessary to separate the libraries, darktable may show some problems (slow response) if the library holds thousands of images. It’s not my case, but I’ve seen comments on the forum about that. You may want to check that (or do your own experiment, and if it’s slow for everyday use, just rename/move the library file afterwards).


Your thumbnails are saved in a hidden directory in your home folder (by default under ~/.cache/darktable). If your home folder is mounted on a fast SSD, browsing your pictures will be fast.

Please Read The Fine Manual on how thumbnails are created :wink:

https://www.darktable.org/usermanual/en/thumbnails.html
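
To get a feeling for how much space the thumbnail cache currently takes on disk, you can check its size directly (assuming the default cache location mentioned earlier in the thread):

du -sh ~/.cache/darktable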


Does anyone know how much space needs to be allocated for the thumbnails on your local hard drive? I mean, per picture or per 1000 pictures? Just to make sure I can browse through my thumbnails even when I am not connected to my server.

SMB is not encrypted either unless you set it up to be encrypted. At least that is the situation on my Win/10 box when talking to a Synology NAS box (that is using Samba). You can confirm whether the data is encrypted or not by using Wireshark.