How to load an exported sidecar file

Yes, but those files are the ones I’d say are associated with the raw (input) file rather than the exported image file.

It is probably fair to say that my thinking is a bit contaminated by my use of other programs. When an image file (e.g., tif format) is created from a raw file using RawTherapee (RT), you also get a sidecar file named “PXL_20220508_14425766.tif.out.pp3” as well as “PXL_20220508_14425766.dng.pp3”. Of course ART, being a fork of RT, does the same thing, but so does Canon DPP4, even though I have to request it whenever I do an export.

Note: RT also allows you to save and reload multiple sidecar files as desired while editing the image. It looks like that is what Duplicates are about. Duplicates might be good, for all I know, but at present I’d say they are a more complicated way of doing the same thing.

What that allows is doing more editing with RT on the raw file after having created the image file, which does change the sidecar associated with the raw file BUT NOT the one associated with the image file. Therefore, I can come back later, maybe years later, and resume editing the same raw (by loading that sidecar file), resetting the parameters to the ones used to create the image file. If you really do it years later, as I have done, it is a good idea to do it with the same version of RT used to create the subject image file. That is why running in portable fashion, where I never lose a prior version, is advantageous.

I think what I’m learning here is that what I just described is the reason for DT including the processing parameters in the metadata. While I want to avoid unnecessary changes to my workflow, it is looking like this may be necessary should I want to use DT. I can see an advantage, which is that the metadata is harder to separate from the image file than a sidecar file is. However, metadata is another subject that has its share of complexities.

If what I’m saying is correct, I’ll probably start by making my own copy of the sidecar file, following a naming convention like RT’s, in order to reserve the option of omitting the subject metadata from the image file (i.e., the resulting work product).

Have a look at the discussion concerning Lightroom imports: different programs, same situation. Those threads explain why most editing steps cannot be transplanted between programs (things like crop and similar adjustments might translate; things like “filmic” or “tone equaliser” certainly won’t).

I would expect it is NOT useful at all, which has something to do with why I dislike the idea of storing that data in the image file as metadata. I do know from experience that metadata can have unintended consequences in other software. I think this is a case where the image files being produced are intended to be used by all manner of other software, so we cannot possibly anticipate what might go wrong.

I have been spending a lot of time in the User Manual. At present, I’d say it is what I like most about Darktable (DT). However, as Mica has subsequently posted, it now looks like what I thought it did is NOT the way DT actually works, which is also consistent with a very helpful reply from Todd. That is, it is NOT to be found in the manual.

With respect to dt’s processing metadata: if another application reads and mangles something, that application is broken with respect to processing. XMP is the eXtensible Metadata Platform, and the extensible part comes from using a namespace in the XMP XML. If you look at darktable’s processing instructions, you see <darktable:exposure>1.234533</darktable:exposure>; if another program reads that and gets it wrong… well :person_shrugging: all the code is public and commented in the source.
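To make the namespace point concrete, here is a minimal sketch (not darktable code) of how a well-behaved program would pick such a value out of an XMP packet. The fragment and the namespace URI are assumptions built around the element quoted above; anything that doesn’t know the darktable namespace should simply skip the whole block.

```python
# Minimal sketch: read a namespaced darktable element from an XMP fragment.
# The fragment and the namespace URI below are illustrative assumptions.
import xml.etree.ElementTree as ET

xmp_fragment = """
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description xmlns:darktable="http://darktable.sf.net/">
      <darktable:exposure>1.234533</darktable:exposure>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
"""

root = ET.fromstring(xmp_fragment)
ns = {"darktable": "http://darktable.sf.net/"}
exposure = root.find(".//darktable:exposure", ns)
print(exposure.text if exposure is not None else "no darktable data found")
```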

The manual explains very specific things in a very specific way. We omit a lot of conceptual things in the manual so we don’t have like 2,000 pages. The manual is already 300 pages long, which is a lot.

I never touched the settings, but I think I know how it works. So bear in mind that I might be wrong:

Darktable has a database, which contains the location of imported files, duplicates and all the metadata and edits. So, a few points:

  • The original RAW file you imported should never be changed by Darktable. Just in case you were thinking that.
  • Any edits you do are always stored in the database.

Now, write sidecar file for each image set to on import (in Settings → Storage) does exactly that: Darktable will create (or update) .xmp files for each imported file when you import them. So if you start editing afterwards (and modifying tags or other metadata), those changes will not end up in the xmp file (automatically).

The after edit option does more of what you want, I guess: it means that after every edit you do (or after you leave the Darkroom mode, I’m not sure), the .xmp file is created / updated and will contain any metadata updates and any edits you made. This is handy if you keep the xmp files together with the raw files in your backup / storage; that way the edits you made always stay together with the file. But the edits are written as ‘a description of what Darktable has done’, and that means (most) other software can’t do anything with those edits. Maybe simple stuff like cropping or orientation, but mostly only tags and simple metadata.

never is self-explanatory, I guess: Darktable will just never touch any xmp files unless you do it manually.

Now, reading of xmp files is done on startup (if you enable the option in Settings → Storage) and maybe when you open a RAW file for editing again (I’m not sure about that one).

This all has nothing to do with the metadata that is written to a jpg file when you export one from Darktable. I believe that will always contain metadata, and so always contains the edits as metadata. So as long as another program hasn’t stripped the Darktable metadata from the file, you can manually do load sidecar file... and then select a jpg that was written by Darktable.

In the Lighttable mode (photo picker mode as I call it), on the right side there is a history stack panel, and it has two buttons at the bottom: load sidecar file... and write sidecar files.

The ‘load sidecar file’ button allows you to pick an xmp or jpg file to read the metadata and edits from. Handy when you saved multiple copies (with different filenames) and want to revert to something, or when you want to try out edits from this forum.

‘write sidecar files’ will forcibly write xmp files for all the currently selected images. You can’t change the filename (as far as I know); this will always be <base filename>.xmp for any <base filename>.RAW file, in the same folder.


So, back to your original question.

I don’t know ‘how you cannot figure it out’. It should do this by default, I guess. You double-click an image from the Lighttable mode, you end up in Darkroom mode and you make edits. Those edits are then saved to the database. So the next time you double-click the same image, it should open up with the edits you did last time.

Now, if you want to throw away your edits and reset it to a certain .xmp (or .jpg) file:

  1. In the Lighttable mode, select the image you want.
  2. In the right sidebar, in the panel history stack, you might want to click ‘discard history’ to throw away all your edits and reset the image back to defaults.
  3. Now you can click load sidecar file... and pick an xmp or jpg file which contains Darktable edits. Then you can double-click the image again to open it in Darkroom mode and you’ll see the edits you just loaded.

(Instead of using ‘discard history’, there is also an ‘overwrite mode’ for loading a sidecar that does this in one go, but it’s set to ‘append mode’ by default. And discarding edits is something I always want to explain explicitly, in case you do it by accident.)

If I compare to other raw converters I know of:

  • Lightroom / Adobe Camera Raw will not create .xmp files by default, but will write the edits to their database. A setting can be made to always write xmp files (so basically kinda like Darktable). I don’t know if there is a specific ‘load xmp file’ option though.
  • DxO PhotoLab always writes in its own .dop file format. There is no database of edits, so if you throw away the file, you start over with your edits. But they are always (over)written, so you can’t easily go back to a previous version of the edit unless you made a copy of the .dop file yourself.
  • ON1 PhotoRaw writes its own sidecar files with edits, but they are primarily stored in the database (so the sidecar files are there as ‘a safety’ thing). I don’t even know how to reset the edits and load edits from a sidecar file.
  • Capture One writes its own sidecar files; I don’t know what they contain. But the edits are stored for sure in a Session / Database. I don’t know if it’s possible to reset the edits and read them from a sidecar file.
  • RawTherapee works purely with its own sidecar files. Edits are written to them, and you can overwrite or merge edits with those from a .pp3 file. A pp3 file has the option to contain multiple sets of edits (snapshots) though, so trying things out doesn’t have to mean messing with multiple sidecar files for a single image.

So I’m a bit struck by this comment, because I think this is far from intuitive or common in other software. They all have their own way of doing things.


It might also be that you are just looking for a ‘snapshot’ feature: a method of having one RAW file but multiple versions of edits for it, so you can try things out and compare edits. In Darktable this is called a ‘duplicate’.

In the Lighttable mode, it can be done from the right sidebar, in the selected image[s] panel. There is a duplicate button there. I call it ‘a virtual copy’.

In the Darkroom mode, I recently learned, there is a duplicate manager in the left sidebar. You’ll see previews of the other versions of the file, and you can click around to select them. I believe there is even a ‘mouse hover preview’ function by holding Alt (and clicking? Not sure on this). And there is a duplicate button to create a duplicate as is (from your current version), and an original button to create a duplicate but start again from defaults.

This is a nuance I am not getting… not that I get everything… those xmp files are basically the script with the instructions for how to create that export, so they are intimately related to it; DT just doesn’t offer to write a second copy with the exported file. If you apply them to another image it will blindly apply those steps, which may or may not be the result you want on a new image. If you use Shift+Ctrl+C and Shift+Ctrl+V you can do selective copies and pastes from the history stack onto a duplicate, to make a new edit or starting point, or onto another image in a series, so this might be something you can also experiment with.


There is a click and hold… if you click on a duplicate and hold, it will come up; when you let go, the image you are currently editing comes back, so you can do some on/off comparisons between your duplicates and the currently displayed image.

When I say associated with the raw (input) file, I mean those are the ones automatically being updated by the software and synchronized with the database. If I want to freeze something that is certain NOT to be updated any further, or that can be used for restoration should the database go away (as happens when using the :memory: option for --library), a different file is needed. That is the kind I’m talking about.

Okay, well I guess DT takes the approach that sidecars could get lost or need managing, so it just embeds the data. I suspect something like a tiff would be your most common export to pass along, so why not just make sure to use a copy of your exported file in the next step? Then your export is intact, has the metadata locked in time and in sync with the export, and if the next program in your workflow mangles the DT data, no big deal… I don’t think it would be that difficult to manage that way, and if you use duplicates and selective copy and paste and name your duplicates, you have a well-organized set of exports from DT. If you really wanted this I am sure you could implement it in scripting or a batch file: use the command line to do your exports and then also copy the xmp used for processing to the export directory… maybe even renaming it in a way it could not be modified, like xmpx for exported, so that you would have to physically rename it to use it again… this should protect it.
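If you did want to script that idea, something along these lines could work; a rough sketch, not darktable’s own tooling. The file names, the exports/ directory and the .xmpx suffix are just the convention suggested above, and it assumes darktable-cli’s `<input> <xmp> <output>` calling form and darktable’s usual `<name>.<ext>.xmp` sidecar naming.

```python
# Rough sketch: export with darktable-cli, then freeze the sidecar that was
# used for the export by copying it next to the output under a name darktable
# will never touch. The ".xmpx" suffix and exports/ directory are made up here.
import shutil
import subprocess
from pathlib import Path

raw = Path("PXL_20220508_14425766.dng")        # hypothetical raw file
sidecar = raw.with_name(raw.name + ".xmp")      # assumed <name>.<ext>.xmp sidecar
export = Path("exports") / (raw.stem + ".tif")
export.parent.mkdir(exist_ok=True)

# darktable-cli <input> <xmp> <output> renders the raw using that sidecar's history.
subprocess.run(["darktable-cli", str(raw), str(sidecar), str(export)], check=True)

# Keep a read-only-by-convention copy of the parameters that produced this export.
shutil.copy2(sidecar, export.with_name(export.name + ".xmpx"))
```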

Yes! Having given it a little more thought it is pretty clear that my thinking is tainted by my experience with other software that happens to behave quite similarly in this respect.

As previously mentioned there is merit to having the processing parameters embedded in the file. Also, because I’m using several different programs I do need to distinguish which files have been processed by which programs. The sidecar files do present a bit of a burden for organization and maintenance purposes.

Also, and possibly most significant, the image files I create from the programs used to develop raw files are usually in 16-bit uncompressed tiff format. These could be called my master image files. As you might expect, these are NOT the final work product and NOT what I share or, for that matter, print. Further post-processing is involved to create image files ready (sized) to print, maybe with borders and text added, even a signature for the ones I really like. This further post-processing is typically done using GIMP, which has some (now much improved) capability for managing metadata. There are also some metadata edits that require ExifTool. Between GIMP & ExifTool I should be able to remove the XMP metadata added by DT, which, I might point out, has become pretty irrelevant at this point in my pipeline.
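For the ExifTool part, a call along these lines should remove just darktable’s block; this is a sketch that assumes ExifTool recognises the XMP-darktable group, and the file name is only an example.

```python
# Sketch: strip only darktable's XMP block from a master TIFF with ExifTool.
# Assumes ExifTool knows the XMP-darktable group; -overwrite_original skips the
# "_original" backup copy ExifTool would otherwise leave behind.
import subprocess

master = "PXL_20220508_14425766.tif"  # hypothetical 16-bit master file
subprocess.run(
    ["exiftool", "-overwrite_original", "-XMP-darktable:all=", master],
    check=True,
)
```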

This topic turned out to be a bit more complicated and ended up generating much more discussion than I originally expected. However, it was very helpful to me and I want to thank all participants for taking the time to contribute, for which I am most grateful.

THANKS!


A bit late to the party but with two words of caution:

  • make sure the appropriate option to save the development info in the exported file is enabled (darktable 3.8 user manual - export)
  • in some rare instances the resulting metadata, including the development, is larger than the file format allows (not sure if it applies to TIFF, but it certainly does for JPG). In those cases it will be truncated and the rest lost (*). Make sure to enable the option to compress that metadata to minimize the risk (darktable 3.8 user manual - storage)

(*) there is actually a warning in the manual about not using it blindly for XMP backup because of that

Well now that is good to know. I did notice the option about compression and was a little puzzled about why that might be desired.

What I think is an obvious question is “how do you determine that adding the processing parameter metadata failed?”. It sounds like the phrase “the rest is lost” means that you end up with a bunch of pretty useless metadata in the image file. If so, why does that make sense? Wouldn’t a good solution to this problem be to do what other software does and write a sidecar file, which certainly does NOT have any size limits? Oh, and since that solves the problem, why NOT write the processing parameters used to create an image file into a comparable read-only sidecar file with a name that associates it with the resulting image file? If some users would rather NOT do that, make it optional.

Also, if I’m understanding correctly, does this mean that DT cannot be used to create an image file that is free of (without) this defective metadata? Fortunately, if such an image file is otherwise correct, one can use other software such as GIMP to produce a comparable file without the defective metadata.

I personally think this would only ever happen if someone had a massive number of masks… someone should torture test it and see… There are scripts, at least two I believe, that you can run to strip the metadata if that was a desire… Google lua scripts and I think you will find that someone has one as part of an export process, i.e. running the script provides that in the export step… at least I believe I have seen that…

Think this is the one I recall…

Seems like option 4 allows for a custom exiftool action…perhaps this would allow you to create your output or strip data as you see fit…

From this topic


You might also want to make use of this…

Probably there is an error in the console if you run darktable from the command line. But in any case, according to this, the limit for JPG is 64 kB and for TIFF is 4 GB (effectively infinite). If you’re exporting to JPG you could check the XMP sidecar size beforehand. As @priort said, you’ll need quite a complicated development with many masks to get close to the limit (especially when the metadata is compressed), so it’s a very rare occurrence.
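If you wanted to make that check routine, here is a trivial sketch. The 64 kB figure comes from the limit quoted above, the sidecar name is illustrative, and because the embedded copy can be compressed this is a conservative test.

```python
# Pre-flight check before a JPG export: warn if the sidecar alone already
# exceeds the ~64 kB JPG metadata limit mentioned above (conservative, since
# darktable can compress the embedded copy).
from pathlib import Path

JPG_XMP_LIMIT = 64 * 1024

sidecar = Path("PXL_20220508_14425766.dng.xmp")  # hypothetical sidecar
size = sidecar.stat().st_size
if size > JPG_XMP_LIMIT:
    print(f"{sidecar.name}: {size} bytes, development data may be truncated in a JPG")
else:
    print(f"{sidecar.name}: {size} bytes, comfortably within the JPG limit")
```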

You could file a feature request (Issues · darktable-org/darktable · GitHub). Maybe a Lua script is enough.

Yes, just disable the option to save the development info in the metadata (it’s the only part big enough to even remotely hit the size limit).

Would the criteria for determining that the amount of metadata is excessive have something to do with the proportion of the resulting file size? If so, a low-quality (highly compressed) jpeg would seem like the most likely candidate. My 16-bit uncompressed tiff files are pretty big, typically more than 100 MB from my old low-resolution cameras. The thinking here is that it would take an awful lot of metadata to significantly increase the relative size of such files.

On the other hand, if the idea is that there is an absolute limit to the size of the metadata irrespective of the file size, it might not be so hard to reach.

@guille2306 made some good points, I think. I really don’t know that much about it, although it would be easy to find out with some research… It might be useful if there were a way to dynamically flag, during export, any files that would exceed the physical limits on that embedded data. Then you could choose how to proceed, knowing the file would not contain a complete record of all the required metadata, but again I am not sure how to go about that or how hard it would be… As it may be rare I wonder how often it would happen, but I am sure there is someone out there capable of those calculations…