Free software and photo metadata — a chance to engage with the broader photography industry

On the whole I think separate edit and XMP files make sense, but this use case, where you do further work on files with identical names that reside in the same folder as the master, seems, as you put it, contrived. Doesn’t most software currently default to an export folder of sorts?

When I shoot raw+jpg I store them next to each other but output files end up in web or export folders.

I always put result files next to the originals. Heck, Filmulator only does that, for now (Save As coming Soon™).

With identical file name? What happens for RAW+jpg shooters?

I have tested Filmulator, but only on my own folder.

It appends -output to the jpegs.

But I put intermediates in the same folder too, anyway.

So the -output addition solves the issue: the files are no longer identical.
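
A minimal sketch of that naming scheme (`output_name` is a made-up helper, and the exact suffix logic in Filmulator may differ):

```python
from pathlib import Path

def output_name(source: Path) -> Path:
    """Append -output before the extension, as described above, so the
    rendered JPEG from IMG_0001.cr2 never collides with a camera-written
    IMG_0001.jpg sitting in the same folder."""
    return source.with_name(source.stem + "-output.jpg")

print(output_name(Path("IMG_0001.cr2")))  # IMG_0001-output.jpg
print(output_name(Path("IMG_0001.jpg")))  # IMG_0001-output.jpg
```

Both members of a RAW+JPEG pair map to the same output name, but neither overwrites an original.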

Thank you for the condescension. That was uncalled for.

I do in fact know that there are private namespaces in XMP files. But I also know that they regularly get clobbered by applications that don’t preserve the edits of others.

Even the simplest things such as color labels and ratings aren’t handled consistently: Capture One and Lightroom only allow one color tag at a time, while Darktable can apply multiple colors at once. A rating of zero is sometimes interpreted as “this file has no rating”. In other applications, it is “this file was rejected”. Tags sometimes live in a hierarchy, sometimes they don’t.

And let’s not even start with how some applications only read XMPs at startup, some only read them during import, and others actually make an effort to detect XMP changes.

It is a mess, is what I’m saying, and unless applications start to honor not just a common file format, but also common behaviors, it is going to stay messy.

Again, color label is not standard. LR stores it in its private namespace and so does DT.

This is actually specified in the standard: -1 means rejection and zero is a valid rating. An empty field would be “no rating.”
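
In code, that interpretation of `xmp:Rating` could look like this (hypothetical helper; `None` stands in for an absent field):

```python
def describe_rating(xmp_rating):
    """Interpret xmp:Rating as described above: -1 means rejected,
    0 is an explicit zero-star rating, and an absent field (None here)
    means the file has no rating at all."""
    if xmp_rating is None:
        return "no rating"
    if xmp_rating == -1:
        return "rejected"
    return f"{xmp_rating} stars"

print(describe_rating(None))  # no rating
print(describe_rating(-1))    # rejected
print(describe_rating(0))     # 0 stars
```

The bugs in the wild come from applications that conflate the first two cases.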

The main point, I think, is that following the standards is quite a good idea. If free software behaved well here, it would be an advantage.

I’m also pretty sure the standards body has considered most issues and that we don’t really have to figure it out ourselves, except when they failed to take some edge case into account, that is. I do think it’s important to separate out which metadata is standardized and which can be stored in separate namespaces. The discussion mixes these issues, and I think that’s part of the reluctance to adopt the standard.

I think it’s fine that some file structures are discouraged and can lead to the wrong metadata being associated with those files. If you care about metadata just follow the standard.

This is already done and decided, people just need to read up :wink:

That’s really a huge advantage of standards isn’t it… :slight_smile:

Standards give people control of their data and freedom to move between different software. Seems like a golden opportunity to me!

I’m not convinced that taking a standard and implementing it wrong is any better than following a standard that has some flaws (most standards have at least one drawback).

This is not merely wrong, Bastian. It is wrong on an epic scale. It’s hard to imagine how it could possibly be more wrong!

Simple test:

Take a lonely CR2 file in Windows:


Add some very basic IPTC metadata to it in Photo Mechanic, which to recap is stored using the XMP standard:

Voila, the XMP file appears:


Take it into Linux. Load it in digiKam. You have to make the XMP file extension lowercase for digiKam to recognize it (!?!?!!?), but hey, it otherwise works as intended:

darktable also shows the XMP data from the same CR2 and its sidecar:

Meanwhile back in Windows load up the file in FastRawViewer (a project of the libraw developers), and of course the XMP data is visible there too:

I didn’t test what would happen when writing the IPTC metadata in either of the Linux programs, but it’s clear that they at least read it. I have not been following the steps the digiKam team has taken to implement the XMP standard, but from preliminary indications it seems their approach has been, well… more complicated (and confusing) than the one mandated by the XMP standard.

No Adobe program was used in any of these steps, although I guess some (or maybe all) of them draw on part of the BSD licensed Adobe XMP SDK.

The simple reality is that any contemporary photography program on Windows or macOS that works with IPTC metadata uses the XMP standard, which means it uses file.xmp. As I mentioned, the only exception I am aware of is Corel Aftershot, which is a great example of what not to do.

If possible it would be good to hear why Adobe chose the file.xmp standard. Perhaps the people who developed the standard are still alive and they can explain the history and reasoning behind it. I’m curious to know myself.


As far as I can tell, nobody’s advocating not reading filename.xmp if present, the question is about whether you should write to that or to filename.cr2.xmp.
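
A sketch of that read side, checking both naming conventions (`find_sidecar` is an invented helper, not code from any of the applications discussed):

```python
from pathlib import Path
from typing import Optional

def find_sidecar(image: Path) -> Optional[Path]:
    """Look for an XMP sidecar next to an image file, reading both
    conventions: the Adobe-style IMG_0001.xmp (extension replaced)
    and the darktable-style IMG_0001.cr2.xmp (extension appended).
    The contentious question is which of the two to *write*."""
    for candidate in (image.with_suffix(".xmp"),
                      image.with_name(image.name + ".xmp")):
        if candidate.exists():
            return candidate
    return None
```

Reading both is cheap; it’s only on write that an application has to pick a side.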

It seems to me that I should have Filmulator write metadata only to the standard filename.xmp, and then do my own thing (sqlite sidecar) for edits.

If you write to filename.cr2.xmp then nothing outside of the Linux-centric world will read it. And that’s a whole bunch of programs that millions of people depend on to work with other programs in predictable ways. It’s that simple, from the desktop to the cloud. :man_shrugging:

For a different point of view: I like having my edits in the sidecar. It’s extra insurance: if the database corrupts or is otherwise lost or damaged, I still have my edits. Also, my sidecar files are checked into git, so they’re versioned.


The best thing would be to ask the people who originally designed the XMP standard. I have no idea exactly who they are, if they are still alive, or would be willing to talk. But let’s see. It should be possible to find out.

I’m planning on having edits in sidecar SQLite databases, not sidecar text files. It’s much more robust against application crashes and such, because of transactions.

Plus, my main database is also sqlite so it reduces the maintenance burden for keeping the schemas synced.
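
A rough sketch of why transactions help here (the table name and columns are invented for illustration):

```python
import sqlite3

def save_edit(db_path, image_id, params_json):
    """Write an edit record inside a transaction. If the application
    crashes mid-write, SQLite rolls the change back instead of leaving
    a half-written sidecar, which a plain text file can't guarantee."""
    con = sqlite3.connect(db_path)
    try:
        with con:  # opens a transaction; commits on success, rolls back on error
            con.execute("CREATE TABLE IF NOT EXISTS edits "
                        "(image_id TEXT PRIMARY KEY, params TEXT)")
            con.execute("INSERT OR REPLACE INTO edits VALUES (?, ?)",
                        (image_id, params_json))
    finally:
        con.close()
```

The same atomicity argument applies to syncing the sidecar schema with the main database.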

Yes, that’s what I do 50-60 hours a week for my job that pays me. All day XML and XSLT.


I think it’s not the XMP standard itself but rather a de facto convention from commercial software to use file.xmp.
Could you point me to the paragraph of the XMP standard supporting that?
But often a de facto standard from commercial software becomes the norm.
Perhaps FOSS programs need a “commercial” mode to manage the XMP files.

This isn’t about being Linux-centric. It’s about RAW+JPEG, where you might want different metadata for each file, as well as potentially different edits. Just because the big boys haven’t considered a scenario doesn’t mean it’s unrealistic.