Simply put, my wife and I both enjoy photography. We both shoot digital. Until now we've never bothered doing anything more than me copying all files to a single NAS drive and renaming them along the way so the filename shows the basic date and time.
However, we have recently moved up a notch in what we are trying to do: we've attended a couple of courses (technical and artistic), now both shoot raw, and are taking more and more shots that aren't tied to the obvious holidays and events we can easily find by date. So we need a good management system that allows us to share what we are doing.
Enter darktable, which by and large will do everything we need for the foreseeable future, as far as we can determine. We are currently playing, experimenting and working through both the manual and Bruce Williams' videos in our lunchtimes.
We've read up on local copies and understand basic cataloguing and management (we think) in lighttable. We can adapt this to manage a lot of what we do, but we think it leaves a couple of areas uncovered.
Can anyone tell me if there are features that we’ve missed that can do the following:
If I/we are away on holiday we’ll take a relevant laptop with us.
We might take some existing shots with us if we're working on something, and for those the local copies feature seems to be the solution. However, we'll take the laptop primarily to look at and work on some of the new shots while we are away. So we want to be able to catalogue/tag/rate/colour-label etc. as we normally would and then merge these shots, their metadata and any edits back into the main file locations when back at base.
Given that the local copies function exists, is there a similar management feature to import new files while remote and then merge them when reconnected to the primary database and file system?
We want to work on the same set of files back at base using two desktops (and occasionally the laptop, to prepare for a trip or follow up after one). It's highly unlikely that either of us will ever work on the same file at the same time, but we will want both desktops to have as synchronised a database as possible. What are the options here to ensure that, e.g., if I add files to the NAS and import them into darktable, my wife can then see them from her PC/lighttable view?
If that’s not clear, please ask. I have a tendency to overexplain.
For 1, you can copy/move the folder of new images (with the XMP files) to the NAS and then import them. The only thing that won't come over is grouping, if you decided to group some images.
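In practice that can be as simple as one copy step plus an import. A minimal sketch, assuming the folder and mount paths below are only examples:

```bash
# On the laptop: copy the whole trip folder, raw files and .xmp sidecars
# together, to the NAS (paths are placeholders).
rsync -av "/home/me/Pictures/2024_trip/" "/mnt/nas/photos/2024_trip/"

# On the desktop: import that folder into the main library, either through
# the import module ("add to library") or by passing the folder on start-up.
darktable /mnt/nas/photos/2024_trip/
```

Because the .xmp sidecars travel with the raw files, ratings, tags, colour labels and edit history come along; as noted above, group membership does not.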
What about a smaller step? What if we made the library path part of the preferences and darktablerc?
Benefits: more users could have multiple library.db files on their local machine, or point one at a NAS. They can do this today via invocation (see the sketch below), but this way it would be more convenient.
Negatives: we would likely need an error when the path is not available.
Ps: I recommend only doing this with library.db and not with the entire config path (data, shortcuts, minimal, etc).
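For reference, the "via invocation" route mentioned above is darktable's existing --library start-up option; a quick sketch (the NAS path is only an example):

```bash
# Point this darktable session at a specific library database instead of
# the default ~/.config/darktable/library.db.
darktable --library /mnt/nas/darktable/library.db
```

(There is also --configdir, but that relocates the whole config directory, which is exactly what the PS above advises against.)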
Just saw this post. It seems like we have similar needs for a workflow; see the other topic I started.
Currently I am trying an approach where I don't sync the database at all, but only the images and sidecar files, with a somewhat automatic discovery of newly added images with the help of a Lua script. The new images could either come from another PC running darktable or be added without any knowledge of darktable, e.g. when a smartphone uploads to a filmroll folder.
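To make the discovery idea concrete (the Lua script does the real work inside darktable; this is just a rough shell illustration, with made-up folder and marker-file names):

```bash
# List image files that landed in the shared filmroll folder since the last
# check, using a marker file's timestamp as the reference point.
MARKER="$HOME/.cache/filmroll-last-scan"
[ -f "$MARKER" ] || touch -d '1970-01-01' "$MARKER"  # first run: compare against the epoch
find /mnt/nas/photos/incoming -type f \
     \( -iname '*.nef' -o -iname '*.cr3' -o -iname '*.jpg' \) \
     -newer "$MARKER" -print
touch "$MARKER"  # remember this run for the next comparison
```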
I guess synchronizing the database between multiple PCs only works if all PCs have exactly the same images and the paths are the same.
Re 1: I understand the option but was hoping to avoid it if I could, simply because, for me at least, the workflow would start as soon as I import the files into darktable on the laptop to view/filter/reject/catalogue the images. That is always best done as close to the source as possible, and when you're on an extended vacation to, say, Canada or New Zealand and come back with maybe 3,000 to 5,000 images from a six-week trip, it's much better if they're at least at the "junk thrown out, quality and interest ratings initially done" stage.
As a newcomer with little field experience, I'm not sure how much of that survives a reimport from the .XMP into a different database. If everything comes across, then as a workaround it might be doable (for me at least; less so for my wife, who is a user rather than a tinkerer).
Re syncing: I had a quick look at the thread you referenced.
I suspect it's in the ballpark of what I am seeking, but again, as I don't fully understand how darktable operates, I wouldn't ask for the database to be synced per se (although that may be the required solution); I'd ask for a user outcome.
I'll ponder a bit, dig a little more and then engage in that thread. Anything like that needs to be generic rather than fulfil individual users' needs, as I suspect it will be a fairly fundamental change to the way darktable operates.
I haven’t coded for 40 years so I’m not about to offer that as I’m way out of touch, but happy to provide input as long as it’s useful and welcomed.
Yes, that sounds almost identical in context, but you are way ahead of me in seeking workarounds. From your suggestions/questions it sounds like a couple of things that had already occurred to me are non-trivial, at least from my current level of knowledge.
Right now you can't share the database simultaneously. The first client opens it and creates the lock file, so the second client sees the database as locked. And I don't think sqlite3 handles concurrent database access anyway.
The user has a way to really mess up their setup. Then someone has to support it and deal with the frustrated user. The option is there to override if you're advanced enough to need it and you understand it. Let's not make it easier for users to shoot themselves in the foot.
As each copy of darktable has its own sqlite engine, that seems probable: hard to synchronise accesses over several programs without some kind of supervisor.
The library is loaded before you have access to the GUI, so a change to the database path might mean a restart of darktable. That means you don't gain all that much by having the library path as a preference/setting.
And I agree with @wpferguson:
as using multiple databases isn't something that's used all that often, and certainly not something for new users or users with no idea what a database or a file system is (more than you want to feed…), I'm not sure handing them another foot gun is smart.
Even Adobe seems to advise using multiple catalogs only for certain use cases, none of which seem very common for amateurs ("different clients", allowing access by several users, …). And there, a catalog change also needs a restart, from what I saw.
If multiple users need access to the database simultaneously, you need an external database server (MariaDB, MySQL, …). digiKam supports that; darktable does not.
Trying to circumvent that by using sidecars only (i.e. --library :memory:) is possible, but open to race conditions: when two users edit the same image, one is almost sure to lose their edits…
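For completeness, that sidecar-only mode is just a start-up option; for example:

```bash
# Run darktable against a throwaway in-memory library: nothing is written to
# library.db on disk, so anything you want to keep has to live in the .xmp sidecars.
darktable --library :memory:
```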
Currently not possible: you cannot have more than one darktable client access the database. Even working on the same images is tricky if you are using sidecars (see my previous post). In a nutshell, darktable is not designed to accept parallel access to images, or the database.
For the less initiated: is there any documentation anywhere that describes the overall architecture of darktable (the program), or is this sort of info just spread between developers, beta testers and advanced users pushing the envelope, and/or described in discussion threads in the GitHub repository?
Just trying to gauge how to come up to speed most effectively.
darktable using SQLite means you cannot have two instances access the database without issues. That's inherent to SQLite, so there's no need to explain it in the manual.
And in general, the overall architecture of a program isn't interesting to most users, so it's not described in the user manual. You might find a document in the source tree, but even that isn't all that common: it's hard to keep documentation in sync with the source code, and wrong documentation is worse than no documentation.
The issue here is that your scenario needs simultaneous access from independent locations. That means more than one instance can write to the same data at the same time, a so-called "race condition".
Imagine you start editing an image. After you load it, someone else loads the same image for editing, still without your edits. You edit, and your changes are written to the database. Then the other party edits the image, and their changes are written to the database without taking your edits into account.
Not good…
I’m not saying that it is likely to happen all the time, but it is possible. And if it’s possible, it will happen sooner or later (and then you spend a lot of time figuring out what went wrong…).
Having only the image files on the NAS, with each of you using a local database, may work, but you will have an issue when syncing the databases if both of you have edited the same images…
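And reconciling edits by just copying .xmp sidecars around has the same problem, only moved to the file level: whichever sidecar was written last wins. A crude sketch of that failure mode (paths are examples only):

```bash
# Push both users' sidecars to the NAS, "newest file wins" style.
# If you both edited the same image, --update silently keeps only the
# sidecar with the later timestamp; the other person's edits are lost.
rsync -av --update /home/me/Pictures/trip/*.xmp   /mnt/nas/photos/trip/
rsync -av --update /home/wife/Pictures/trip/*.xmp /mnt/nas/photos/trip/
```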