I think you could fake something like that using a NAS to store the data.db (using “--datadir” on startup) and the configuration (“--configdir”). With one very important limitation: only one instance should access that database at any given time. And I’m not sure darktable or SQLite enforces that limitation.
Also, are you sure you have the exact same configuration on your different computers?
For the rest, a centralised database implies network traffic, which is most likely a lot slower than a local setup, though that depends a lot on the number of exchanges (and the data volume exchanged). Even a gigabit connection tops out at about 125 MB/s.
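If you wanted to check whether that limitation is enforced at all, SQLite does take a file lock while writing, so a rough sketch like the one below (plain Python, with a hypothetical NAS path) will at least surface a “database is locked” error when a second instance already holds the write lock. The caveat is that those file locks are known to be unreliable on some network filesystems, which is exactly why sharing a live database over a NAS is risky.

```python
import sqlite3

# Hypothetical path on the NAS mount -- adjust to wherever the database lives.
DB_PATH = "/mnt/nas/darktable/data.db"

def try_exclusive_write(path):
    # timeout is SQLite's busy timeout: wait up to 2 s for the lock before giving up
    con = sqlite3.connect(path, timeout=2.0)
    try:
        # BEGIN IMMEDIATE grabs the write lock right away; if another instance
        # holds it, this raises "database is locked" after the timeout
        con.execute("BEGIN IMMEDIATE")
        con.execute("CREATE TABLE IF NOT EXISTS ping (ts TEXT)")
        con.execute("INSERT INTO ping VALUES (datetime('now'))")
        con.commit()
        return True
    except sqlite3.OperationalError as e:
        print(f"could not get the write lock: {e}")
        return False
    finally:
        con.close()

if __name__ == "__main__":
    try_exclusive_write(DB_PATH)
```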
Thankfully this is mostly what I’m after: a tagging and rating system. Coupled with EXIF data, I can create some simple tables and filters to fit my needs. At least for raw files this works fine, and in a way they are the only ones whose information I want shared with other software.
JPEG and similar files I guess I can keep track of in my own database; it shouldn’t be too difficult to spot duplicates based on a few EXIF attributes, or just name + date if EXIF is missing. That way I can move them around and update their absolute path when needed to keep track of where they are (plus possible real duplicates).
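In case it helps to picture it, this is the kind of duplicate key I have in mind. A rough sketch only, assuming the exifread package is available; the exact attributes would need tuning:

```python
import os
import exifread

def dedup_key(path):
    """Build a key for spotting duplicates: EXIF capture date + file name,
    falling back to name + modification date when there is no EXIF."""
    with open(path, "rb") as f:
        # details=False skips thumbnails and makernotes, which we don't need
        tags = exifread.process_file(f, details=False)
    date = tags.get("EXIF DateTimeOriginal")
    if date is not None:
        return (os.path.basename(path), str(date))
    # no EXIF: fall back to name + mtime
    return (os.path.basename(path), int(os.stat(path).st_mtime))

def find_duplicates(paths):
    seen = {}
    for p in paths:
        seen.setdefault(dedup_key(p), []).append(p)
    return {key: group for key, group in seen.items() if len(group) > 1}
```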
I will update my image viewer thread if and when I implement these things, in case anyone is interested. For now I’m busy creating an audio looper so I don’t have to buy a hardware one.
Yes, but since time isn’t (usually) variable, for me progress tends to happen in a manner roughly inverse to the slope of the curve. Steeper == slower.
interesting discussion around where/how to store and exchange metadata.
today i’m questioning the historic decision to use xmp in darktable. some thoughts:
it requires a bloaty xml/xmp sdk/exiv2 dependency to read
depending on how exactly you read and write it, you would hope you don’t destroy it for others (there are esoteric size limitations, for instance: it can’t be longer and can’t be shorter than this or that, and when is different code ever really 100% compatible?)
it really only gives you this impression of interoperability (they both do xmp, so it’s the same, right?) but then as stated above, the edit history is completely incompatible
we were worried about async file access in dt. here’s why: you don’t run your queries from the xmp. you will have a sort of internal representation (database). when do you sync back? and what if people have two programs running at the same time? say one for dam and one for raw development?
and on the other hand i’m reading the use cases for how people organise their files here with great interest. i have to say for me it’s often the same as found in the thread above: more or less careful folder organisation, then some light tagging and star rating, maybe colour labels but often for temporary use.
given both of these, i would argue it’s better to store your own stars and tags as simply and robustly as possible and provide explicit synchronisation mechanisms with others (maybe xmp). at least that makes it absolutely clear to the user what is synchronised when.
This makes sense to me: most raw editors already have DAM built in, so the cases where they need to sync with outside software are few. Making it an explicit behaviour, maybe even on demand or configurable as to when it happens, seems like the best bet.
This is one example of a well-defined technical term being used in a different (actually opposite) sense by people. IMO, instead of trying to educate others about the origin of the term, it may be better to just drop it and use plain language, as in “easy to learn” or “difficult to learn”. 99% of usage can be replaced by either. Sure, it sounds much less erudite, but one can make up for that in other parts of the text.
In any case, we photographers should be the last to complain about these things, since “aperture” and “inverse relative aperture” (the thing we measure by f-numbers) are used interchangeably.
the images are named with the GUID when you hover the mouse over them (I can work around this by looking at the name in the info panel on the left)
when local copies are in place, no sync happens to the .xmp (if I am not mistaken), and I like the sync because I have a periodic backup of the .xmp files, very frequent (hourly), on a local destination
I read somewhere that the locks only work for local storage, not for a NAS. Not sure if I understood that correctly (and I haven’t experimented).
If my understanding is correct, the reason concurrent connections to the DB are bad is that two users can try to work on the same image and the changes end up out of sync (it can become a mess). I can see how this can be problematic, but it’s not yet an issue for me: again, only one computer in place, a slow NAS (a USB drive), etc.
I view it as a safety net. If something bad, ugly and no good happens to the DB, worst case I can re-import. And there have been cases when I changed a bunch of images by accident (deleted the editing, etc.); then I restore the .xmp files and re-read them.
Sync with other programs: I haven’t seen that, i.e. somebody else reading and translating the DT xmp. But it is better to leave the option open than to close it off.
Sync between several DT systems (a team of users working on the same catalog of images): either the users will have to use a central DB repository, or their own DBs will have to somehow negotiate and auto-sync with one another.
Currently it is not explicitly visible to the user whether the .xmp files are in sync with the DB, unless a scan is performed on startup. But this is often disabled because of the speed gain.
Firstly, just to say that edits are not very interesting to have in xmp, imho; they are better left to other sidecars or databases.
Now the above comments and the implied workflow don’t make sense to me. A raw editor should read the xmp every time the raw file is opened, otherwise the exported files are “broken” and lacking metadata. The hassle of working around this is huge. I know because I’m an RT user and RT only got this feature in dev recently…
You could have a strict view that the raw editor has no DAM features: no tagging, no rating and no starring in the app. All that should be handled by dedicated DAM and culling software. Now this makes some sense, but you’d get into trouble if you have an “internal” rating system that doesn’t get written to xmp. People will expect ratings to persist, because it’s hard work to make those decisions and it’s wasted if you can’t use them down the chain.
The idea that you rarely “sync” to outside software is baffling. If your workflow depends on multiple pieces of software, and it should, the metadata needs to be “synced” frequently. Editing, tagging, culling, selecting etc happen in all sorts of orders. Often back and forth.
this means you need to constantly run the crawler / use some file system notification daemon: you don’t have to open a raw file to make it appear in a collection filtered by some metadata. which brings you into sync/file-locking issues, makes stuff complicated and potentially slow. also it sounds like a bad replacement for a shared database between applications/frontends to me.
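for illustration, this is roughly what such a notification daemon looks like (a sketch assuming the third-party watchdog package; reload_xmp() is a placeholder for whatever the app would actually do):

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def reload_xmp(path):
    # placeholder: here the app would re-parse the sidecar into its database
    print(f"re-reading {path} into the internal representation")

class XmpHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # react whenever another program rewrites a sidecar
        if not event.is_directory and event.src_path.endswith(".xmp"):
            reload_xmp(event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(XmpHandler(), "/path/to/photos", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

and even then you still need a full rescan at startup to catch changes made while the daemon wasn’t running.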
let me try to understand the use case here. you mean your workflow includes rating in RT, going back to digikam, changing rating, going back to RT, changing again?
I don’t see why. When the file is opened by the raw editor, it just reads the XMP.
FWIW, I think that using XMP as a format for raw edits was an ill-advised decision, since there is nothing standardized about the format: it is just a container for whatever each editor does. I think that using basename.index.darktable etc for the actual edits (with index differentiating duplicates) would be a much saner choice, and similarly for other raw editors. These formats could contain a standardized part in XML for rating and other metadata, and maybe a low-resolution embedded image, so DAM software could deal with them even if they are otherwise opaque.
Many apps do just that, such as ON1, DxO, Zoner and RawTherapee. Some, like Capture One, Silkypix and Exposure, put their edits in a subdirectory next to the file instead.
Using XMP sidecar files for edits (not just metadata) is actually somewhat rare, I’d say.
I believe the problem here is: what if someone changes the file while the raw editor is editing it? If a person is using both programs at once, it’ll be a mess when it comes to synchronization. Maybe it could update all files when the user goes into the ‘light table’ or similar, but this could be triggered multiple times and needlessly scan those XMPs over and over again.
To be fair, with SSDs it’s trivial to do that scan, and I would be surprised if it took more than a few milliseconds for even a few thousand images. The only problem would be users with hard drives, but even those are plenty fast nowadays.
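For a rough idea of what that scan involves, something like this is enough; a sketch only, where last_seen stands in for whatever mtimes the database already remembers:

```python
import os

def changed_sidecars(folder, last_seen):
    """Return the .xmp files whose mtime is newer than what was recorded
    on the previous scan (last_seen maps path -> mtime)."""
    changed = []
    for entry in os.scandir(folder):
        if entry.is_file() and entry.name.endswith(".xmp"):
            mtime = entry.stat().st_mtime
            if mtime > last_seen.get(entry.path, 0):
                changed.append(entry.path)
    return changed
```

One stat() per sidecar, which is exactly the kind of workload an SSD shrugs off.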
That’s a classic problem in parallel computing and database access. You want to avoid it at all costs. Not only does it mess up synchronisation, it can (and will) cause data loss:
editor reads the XMP at the start of an editing session.
then program B adds a tag or a caption and writes it to the XMP.
then the editing session ends, and the editor writes any changed info back to the XMP. You have now lost the information added by program B…
And no, the editor rereading the xmp just before rewriting it doesn’t solve the problem, it only makes the critical window smaller. You either have to use some kind of exclusive access mechanism, or allow only one program to write the xmp data.
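On Linux, one sketch of such an exclusive access mechanism is an advisory lock held around the whole read-modify-write cycle (hypothetical code; it only works if every program that touches the sidecar takes the same lock, which is the weak point of advisory locking):

```python
import fcntl

def update_xmp(path, apply_changes):
    # Open read+write without truncating, so we can lock before reading.
    with open(path, "r+", encoding="utf-8") as f:
        # Block until we hold the exclusive (advisory) lock.
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            original = f.read()
            updated = apply_changes(original)  # caller edits the XMP text
            f.seek(0)
            f.write(updated)
            f.truncate()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Since you cannot force third-party programs to cooperate with that lock, allowing only one writer remains the simpler rule.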
I don’t think there’s a serious critical window in this situation, and if there is, it’s clear to the user what happened.
Even taking into consideration the slight delay it takes to write such a small file, the user would need to be extremely fast performing both tasks at the same time, or something would have to be seriously wrong for one process or the other to get seriously delayed in its execution. In my opinion this is a non-issue. This is all based on the assumption that user input is what triggers the XMP write, of course.
It would be useful because making a selection of images is a process.
The initial selection, tagging and rating is just to surface which files to edit or look at more closely.
During edit you see things or find crops/edits that change the rating of the image. It’s then useful to be able to revise the rating immediately so you don’t forget which file it was. Usually it’s a selection between a couple of similar shots so it’s not quick to figure it out later.
You may then revise the overall rating again to whittle down a final selection.
The above sequence then gets randomized and run every way until the job is done.
It’s completely possible to work around this, as I have done in RT. I’m just saying it’s a pain in the butt, and the butt pain of managing ratings, tags and selections is particularly annoying: it requires huge concentration but is boring. I’m fine losing an edit every now and then; having to redo tagging or rating is much more painful.
I guess the conceptually clean way of implementing it would be a complete separation between tools: a raw editor that only copies metadata from raw and xmp files and writes it to exported files when supported. No metadata input at all. Just a refresh button that re-reads the xmp files in the selected folder in some file browser, perhaps.
If some kind of grouping feature is required in the editor, it should have a name and concept that differs from rating. This would of course demand that you remember to switch apps, identify the file and change the rating when you’ve realised a tweak is needed.