[Solved] Opening DB from LAN: Why Read-only?

When I copy the config/db folder to the LAN (mapped in fstab with R/W CIFS access) and open DT with darktable --configdir /mnt/proxmox-shares/darktable.db/, the sqlite3 database is opened in R/O mode:

sqlite3 error: ./src/gui/presets.c:88, function dt_gui_presets_init(), query “DELETE FROM data.presets WHERE writeprotect = 1”: database is locked
sqlite3 error: ./src/common/database.c:5612, function dt_database_release_transaction(), query “COMMIT TRANSACTION”: database is locked
sqlite3 error: ./src/common/database.c:5579, function dt_database_start_transaction(), query “BEGIN TRANSACTION”: cannot start a transaction within a transaction
sqlite3 error: ./src/common/database.c:5612, function dt_database_release_transaction(), query “COMMIT TRANSACTION”: database is locked

I’ve copied that folder back locally, just to be sure, and it opened R/W just fine. There are also no *lock* files in the db folder, and of course no other machine is accessing that folder.

When I open that library.db file on the LAN directly with the sqlite3 command-line tool, it opens R/W and not R/O. So it seems DT somehow checks whether the db folder lies on the LAN and opens it R/O, but I haven’t checked the sources yet.

Does anyone have an idea why the database is opened in read-only mode? I would like to have the database on my LAN and not locally.

UPD1: I’ve tried to execute an INSERT INTO directly from sqlite3 against the LAN-shared library.db - and that’s R/O as well… hmm… it seems sqlite3 itself is causing this behavior.
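For anyone who wants to reproduce this quickly, a minimal write test against the shared file could look like the following (write_test is just a throwaway table name, not part of darktable’s schema; the path is the one from my setup):

    sqlite3 /mnt/proxmox-shares/darktable.db/library.db \
      "CREATE TABLE IF NOT EXISTS write_test (id INTEGER);
       INSERT INTO write_test VALUES (1);
       DROP TABLE write_test;"

If the share’s file locking is broken, this typically fails with “database is locked” (or a disk I/O error) even though the mount itself is R/W.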

UPD2: I’ve found it, so for anybody in the same situation: the problem is sqlite3 using a db file via CIFS, as CIFS is not compatible with the file locking sqlite3 needs. So when you map your NAS folder with something other than CIFS/SAMBA (e.g. sshfs in my case), the db is writable again ;-). Merry Christmas!
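For reference, the two fstab variants worth comparing look roughly like this (server name, share paths, credentials file and key file are placeholders for your own setup). nobrl tells mount.cifs not to send byte-range lock requests and is commonly suggested as a workaround for exactly this kind of sqlite locking problem, though I haven’t tested it myself:

    # CIFS, but with byte-range lock requests disabled:
    //nas/darktable  /mnt/proxmox-shares  cifs  rw,nobrl,credentials=/etc/cifs-credentials,uid=1000,gid=1000  0  0
    # or the sshfs route I ended up with (allow_other needs user_allow_other in /etc/fuse.conf):
    user@nas:/srv/darktable  /mnt/proxmox-shares  fuse.sshfs  defaults,_netdev,allow_other,IdentityFile=/home/user/.ssh/id_ed25519  0  0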

Thank you

You generally shouldn’t put darktable’s db on a network share; it’ll make darktable quite slow.

Opening the DB across a network connection is in general not optimal, as it tends to be slow. But with a 1/2.5/10 GBit network connection it can be negligible for DBs with low load, as the DT DB supposedly is.
How are you mounting the remote location? I used to use NFS and experienced several difficulties with user rights and performance. Switching to SSHFS resolved my issues with access rights and speed in general. But DT is still extremely slow with its DB on a network location.

CIFS/samba as specified in the original post.

TY for pointing out that network shares are slow; I’m aware of this fact, but for me it’s more important to work with the same library (not sidecars, but the real DB) than to have better speed locally and then manually sync the changes across the machines.

My motivation, to give you the context of why I’m looking at this: I hate spending my whole life in one place, where my main DEV machine stands. So I move around, having a few notebooks across our house with different OSs (big house, always up-to-date backup notebooks, etc.). In the end I usually want to print a few images, and this is not as easy with Canon under Linux, so I tend to balance between printing under Windoze straight away, or using my iMac with Ubuntu, which has a super nice screen and is profiled with the Gretag EyeOne spectrometer. I’ve set up a VM under Proxmox/Debian, where Windoze lives its sad life, sharing the Canon Pixma PRO-200.

That means I edit my pictures on several notebooks as I move around, and in the end I fine-tune my edits before printing on either the Win11 or Ubuntu desktop machines, which are profiled and have access to the profiled Canon printer.

In my LAN there are 3 NAS servers: 1x Proxmox (ZFS, main file sharing, VMs etc.) and 2x Synology (BTRFS, 3-2-1 backup scheme, cameras etc.), all of them snapshotting every 5 seconds for the last hour, every minute for the last 24 hours, every hour for the last week, every day for the last month, and every week for 5 years. I like to profit from this whenever possible, not only when writing code but also for stuff like editing photos. That’s why I try to use every piece of SW in an “edit anywhere, sync immediately everywhere, always 100% failsafe” mode.

I’ve already considered a way to replace sqlite with a real C/S DBMS, but I’m not 100% sure it’s the right way, as in the end, instead of playing around with photos in my spare time, I would end up compiling and debugging again :smile:

I’ve quickly googled the state of giving users the option of persisting the metadata in a real DB (e.g. PostgreSQL, MariaDB…) and it seems that it was abandoned for no real killer reasons, IMHO.

Another option I already thought about a few years ago is to replace the local persistence layer in the sqlite engine so that it wraps some remote SQL API: that way, any ‘local-only’ app based on sqlite would be able to talk to a remote server without modifying its sources.

There would be no need to make it an “enterprise-like” solution, i.e. lots of parallel users with robust locking overhead etc.; most home end-users would be aware that they can access/modify the catalog DB from only one machine at a given time. It would surely be more practical and robust than rsync/Syncthing/Dropbox-ing the local files to a common share, which are the workaround solutions I’ve seen in the forums.

If anybody is interested, maybe we could invest some time into this.

OK, well, if that doesn’t work out, the recommended way is to keep the DBs on each machine, use the Copy Locally feature, and enable searching for newer sidecar files at startup. Good luck.

Or just accept the slow operation…

Anyway, according to the edits in the original post, the “read-only” issue is solved.

@SuperNose : perhaps you could mark the title with “[Solved]”?

Yes, you’re right, the R/O issue is solved now, and the other problem (the real one: how to sync multiple machines for DT editing) belongs to another topic. TY

UPD: I feel a bit stupid now, but I don’t see any “Mark as solved” function nor can I edit the title anymore :smiley: When I find it, I’ll mark it :wink:

Don’t worry, there isn’t one :slight_smile:

Ah, that could explain why I can’t find it :yum:

I’m going to open another BUG, this time for the Windoze version: should I somehow mark old topics as “SOLVED”? A quick search here didn’t turn that up and I want to behave nicely :innocent:

TY

@SuperNose I already know that you want all your computers to have access to the same database.

I know that the speed of the database is not the most important thing.

However, you can make the database local and the same as on any other computer. There are two options: put it on a fast pendrive, or use a tool to synchronize data between computers. To be honest, I don’t know of such a program “between computers”, but I do know one between a Synology NAS and connected computers. It’s called Drive and works just like Dropbox.

And now a question for you. Did I understand correctly that the Windows and Linux computers use the same database? I’m curious how you manage that, because when importing photos on a Windows computer the paths to photos look like Z:\folder\subfolder\img, and when importing on Linux they look like /mnt/folder/subfolder/img. How do I adjust the paths so that they are the same on both systems?

No, that doesn’t work. The paths to the images are stored as full paths, so it can’t work.

The recommended way to achieve this is to have your images on the NAS, but the database and cache local to each machine. Turn on the option to look for updated XMP files on startup. Use the Copy Locally feature if necessary. Let the XMP files do the syncing.

Is there a way to “look for updated XMPs” only in a particular folder?
I know which folders change, and searching over 60,000 files is not necessary (I tried this option - 5 minutes of waiting until dt opens).

Nope.

Yes, you’re right of course, that the paths are different under Linux and Windoze.

I was just exploring the options I have to share the same database across Lin/Win: all I have to do is “search filmroll” on the root folder - this takes a few seconds, and the nice thing is, I can continue to work under the other OS :slight_smile:

For syncing files I use Syncthing; it works like magic across Win/Lin/OSX/Android, so anybody going this way should check out this tool, as it just works ;-).

As for the only real option implemented right now, the XMP way: I’m going to reply to the other post from paperdigits, where he recommends this directly, as I’ve of course already checked that.

Yes, the XMP way of persisting the metadata is currently the only implemented, usable way I’ve found in DT. The design idea of using an alternative metadata persistence parallel to the database one is a good one; the “write” direction works nicely.

The problem is the “read” direction: currently, DT assumes the sync option “look for updated XMP files on startup” is good enough for everyone, but that’s not true.

In my situation it takes around 20 minutes to “scan on startup”. I have around 145k files (JPG+RAW+PNG) stored on the LAN, connected at 1 Gbps and transferring around 50 MB/s. Scanning 145k XMP files for updates is of course not usable.
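Just to illustrate the scale, a crude way to count how many sidecars actually changed is to compare XMP mtimes against something like the local library.db (the paths below are placeholders for my share and config dir):

    find /mnt/proxmox-shares/photos -name '*.xmp' -newer ~/.config/darktable/library.db | wc -l

Even this plain find has to stat every file over the network, which is presumably where most of those 20 minutes go.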

The missing option could be named: “sync from XMP on entering edit”.

The scenario-driven story for this missing option: Andy has a lot of files in different folders on his LAN, and he uses different DT instances across different machines. He expects to be able to edit his images on machine A and have all those edits available on machine B, without doing any manual syncing work. The syncing must be automatic, fast and seamless.

I’ve already found tons of other users searching for such functionality across the forums; the non-devs/architects can’t explain it very well, but they’re intuitively looking for exactly this missing “sync from XMP on edit” option.

The missing functionality is pretty simple to implement, as all the write/read/sync code is already there; it’s just not possible to scope the sync-from-XMP-if-newer operation to ONE single file when opening the edit module.
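As a rough sketch of what “sync from XMP if newer” for a single file boils down to, here is how one could check it by hand from the shell. This assumes the images table still has a write_timestamp column and that film_rolls.folder plus images.filename give the on-disk path; the stored timestamp format has changed across darktable versions, so treat it purely as an illustration and double-check with .schema images:

    IMG=/mnt/proxmox-shares/photos/2023/IMG_0001.CR2   # hypothetical image path
    # timestamp darktable recorded for this image:
    sqlite3 ~/.config/darktable/library.db "
      SELECT i.write_timestamp
      FROM images i JOIN film_rolls f ON i.film_id = f.id
      WHERE f.folder || '/' || i.filename = '$IMG';"
    # mtime of the sidecar on disk:
    stat -c %Y "$IMG.xmp"

Doing this one comparison for exactly one row when entering the darkroom is the cheap operation that’s missing.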

TBH I would implement this myself, but I’ve been buried in .NET coding for the last few decades and am completely out of time right now. I’ve already checked how to set up a local IDE to be able to debug/code for DT, just out of pure curiosity. But I guess it’s more efficient to help the DT community with testing and feedback than to spontaneously hack a missing feature in my sandpit :smiley:

PS: The tool is so nice and robust that it could easily outperform all those CR/LR/ACDSee DAMs I used for long years. I’m using it color-managed under X; it was also a lot of “fun” to figure out how to profile everything with my EyeOne Gretag under Linux :wink:

That would make for another perfectly usable feature wish: different machines could use different absolute path prefixes. Such a setting would be bound locally to the concrete DT instance. That way a Windows machine’s path could begin with “\\server\share” and a *nix machine’s with “/mnt/share”.

DT is built around the “film roll” concept anyway, as I’m able to switch such a film-roll root to another location. I’m actually doing this right now for all my ~150k images, and it takes just a few moments. I haven’t looked into the db tables to see whether the image paths are then patched to hold the new path, or whether a “film-roll path” is used.

But even in the first case, where all the image paths would have to be patched, it takes just a few moments and is not comparable to “look for updated XMP files on startup”, which takes 20 minutes in my case.

In the latter case it would cost really nothing but a config variable. If it’s not implemented that way, it would need a bit of code analysis to see how much effort it would take to deal with the image paths dynamically, so that an absolute-path prefix plus the film-roll-relative suffix is calculated instead of just reading one column from the image table. I have no idea at this moment how the image-file code is modeled, but it would be nice if there were a structure with something like a “full path” accessor that puts the prefix and the film-roll suffix together. If something like that exists, such a feature means only a few lines of code plus a UI setting with local persistence.
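For what it’s worth, which of the two cases applies is easy to check directly against library.db. To the best of my knowledge the folder lives in film_rolls and the images rows only reference it via film_id plus a bare filename (column names from memory - double-check with .schema film_rolls):

    sqlite3 ~/.config/darktable/library.db "
      SELECT f.id, f.folder, COUNT(i.id)
      FROM film_rolls f LEFT JOIN images i ON i.film_id = f.id
      GROUP BY f.id, f.folder;"

If that holds, relocating ~150k images is fast because “search filmroll” only patches film_rolls.folder, not 150k image rows.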

For the general case, you’d need a list of such prefixes, as darktable doesn’t force a single path as a base for image directories.

Which means searching for images becomes unreliable until you have edited each and every image where you made a change in e.g. keywords elsewhere…

I’d rather see options to force a sync from XMP while darktable is running, either globally (start it just before a break) or for the current folder (“film roll” is tricky here, as that could be a search result…). Or just resync the current selection (that would fit nicely in the “actions on selection” module; not sure where to place the other sync commands).

Yes, of course a list - one for every base film roll there is under the “folders” view: I myself have 3 different film rolls there as “roots”, so yes, every one of those would need an absolute-prefix path.

Question: Is this forum also read by the DT architects/devs? Or is it better to put feedback like this somewhere else? TY :slight_smile:

I didn’t understand this, sry :flushed: I thought DT knows the path of an image when entering the edit module, doesn’t it? So when I click the “darkroom” button to enter the editing module, DT could quickly check whether there is an updated XMP stored alongside the focused image, and automatically sync those updates when they are newer than the db-stored edits.

I don’t consider the “lighttable” important enough to warrant applying any possible newer edits stored in the XMP sidecars - it’s good enough for viewing, even if maybe outdated. Only when exporting/printing/editing do those newer edits become crucial.

Yes, a new “sync from XMP” function inside the selection actions would also be OK, although this would mean an extra step for the end user every time, which costs time in the best case and could be forgotten in the worst case.