Although my photographic workflow is based on digiKam, I use several Bash shell scripts to import, organize, and process photos and raw files. Here are two scripts I use on a regular basis:
And I’m always looking for ways to automate and improve my workflow through scripting. If you happen to use scripts to optimize your workflow, would you consider sharing them with us?
I couldn’t comment on the GitLab snippet so I’ll comment here.
I never studied POSIX, but I'm quite fluent in Bash. Photo Funnel is called pf.sh, which suggests a generic shell script, but the shebang requests the Bash executable: #!/bin/bash. If a Bash-compatible shell is installed but isn't called bash, and there is no symlink from /bin/bash to it, the script will fail. The script otherwise looks POSIX-compliant, though as I haven't studied POSIX I can't say that with certainty. So you can either change the shebang to #!/usr/bin/env bash and make the rest of the script actually use Bash (replace [ ... "$foo" ] with [[ ... $foo ]], see BashGuide/TestsAndConditionals - Greg's Wiki), or change the shebang to #!/bin/sh to use the POSIX shell.
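To make the difference concrete (a minimal demo, not taken from pf.sh):

```shell
#!/bin/sh
# POSIX test command: quoting the variable is mandatory,
# otherwise "two words" splits into two arguments.
foo="two words"
[ "$foo" = "two words" ] && echo "posix ok"
# Bash's [[ ]] keyword: no word splitting, so quoting is optional.
# Run via bash explicitly so this line works even under a POSIX sh.
bash -c 'foo="two words"; [[ $foo == "two words" ]] && echo "bash ok"'
```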
This dir='~/backup' is potentially dangerous, as ~ won't expand inside quotes. It should be dir="$HOME/backup" (double quotes, so that $HOME does expand).
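A quick demonstration of why (nothing here is specific to the script):

```shell
#!/bin/sh
# Inside single quotes, neither ~ nor $HOME expands.
dir='~/backup'
echo "$dir"            # prints the literal string ~/backup
# Inside double quotes, $HOME does expand (but ~ still would not).
dir="$HOME/backup"
echo "$dir"            # prints something like /home/user/backup
```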
It’s likely not going to be immediately useful to anybody here, but since you asked
The only script I use on a regular basis is one that automatically moves files from my camera (now via an SD card reader) to a central location on my server via udev (and hotplug/HAL before that). I have an SD reader connected to my file server, so when I get home I can just drop the SD card into the reader - no need to open an app, or even have a workstation turned on (which my wife loves).
I’ve been using this for around 15 years, so it’s got a fair amount of cruft as I added/removed features over the years.
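Not my actual rule, but for the curious, the udev side of such a setup can be sketched in one line (the rule filename, filesystem type, and script path here are all made up):

```
# /etc/udev/rules.d/99-photo-import.rules -- hypothetical example
# When a vfat filesystem on a USB block device appears, pass the
# kernel device name to an import script.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="vfat", \
  ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/photo-import.sh /dev/%k"
```

Note that udev kills long-running RUN+= processes, so in practice the script usually just mounts the card or kicks off a systemd unit that does the actual copying.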
This little one is extremely useful to my photographic workflow:
# Print checksums for any file not yet listed in md5sum.txt,
# skipping the checksum file itself and .xmp sidecars.
# Globbing (instead of parsing ls) handles spaces in filenames.
for f in *; do
    case $f in
        md5sum.txt|*.xmp) continue ;;
    esac
    if ! grep -qE " $f\$" md5sum.txt 2>/dev/null; then
        md5sum "$f"
    fi
done
(I do not claim authorship, since this is a copy-paste of several sources plus a bit of fiddling with the result - no masterstroke of mine.)
I named it md5new and usually call it like this:
md5new | tee -a md5sum.txt
Called in a folder with images, it updates the checksums of recently imported raw and JPEG files. Since the sidecars (XMP in my case, since I use darktable) live in a git repository, they are ignored by the script.
What is still missing is an automated way to crawl my entire photo library, check all the photos, and find those that were never added to a checksum file. Unfortunately, getting find to handle all the spaces and special characters correctly takes hours of trial and error, and I am still not sure it would be the most reasonable approach. So, from time to time, I check some random samples of my photo library instead.
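For what it's worth, find's -exec ... {} + hands the results straight to a shell as arguments, so spaces and special characters are never re-parsed. This is only a sketch under my own assumptions (library root passed as an argument, a made-up extension list), not a tested part of my workflow:

```shell
#!/bin/sh
# Report files that have no entry in their folder's md5sum.txt.
# -exec ... {} + passes filenames as real arguments, so spaces and
# special characters survive intact.
find_unchecked() {
    find "$1" -type f \( -iname '*.cr2' -o -iname '*.arw' -o -iname '*.jpg' \) \
        -exec sh -c '
            for f in "$@"; do
                dir=$(dirname "$f")
                name=$(basename "$f")
                grep -qF "  $name" "$dir/md5sum.txt" 2>/dev/null ||
                    printf "missing checksum: %s\n" "$f"
            done
        ' sh {} +
}
```

Something like find_unchecked ~/photos would then list every raw or JPEG whose checksum is missing. The fixed-string grep (-F) avoids treating filenames as regexes, at the cost of not anchoring the match at the end of the line.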
An even better way would be sidecars containing the checksums, but if this is already implemented somewhere, I missed it. I guess, the dng format offers the possibility to have checksums in the raw, but I shoot canon and sony. If I were sure that darktable would handle “stub” sidecars containing only the checksum (as a correct xmp xml entry), I would try that approach. But I don’t want to ask for support, since the dt community already got too many of my workflow improvement feature requests .
@chris you should take a look at git-annex. Basically, when you add a file in git-annex, it hashes the file, moves it to a read-only location, renames it to its hash, then creates a symlink in the original location with the original name. You can run git annex fsck and it'll compare the file to its hash. Lots of other neat stuff too; it's how I'm storing my raw files.
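For anyone who hasn't seen it, a typical session looks roughly like this (the filenames are invented):

```
$ git init photos && cd photos
$ git annex init "photo library"
$ git annex add IMG_0001.CR2    # hashes the file, moves the content
                                # aside, and leaves a symlink behind
$ git commit -m "add raw file"
$ git annex fsck                # re-hashes the content and verifies it
```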
If anyone is on GitHub, please feel free to let me know and I'll be happy to add you to the organization. Otherwise, @paperdigits or I can get any scripts into the repo if you'd like!
@paperdigits, I already know git-annex and have been planning to git-annex my photo library for ages, not only for the checksums but also for copying files around etc. I have already tried it twice and failed. The first time, the problem was that I was not able to install the same version of the software on client and server, and two different annex versions were not able to talk to each other in a useful way. The second time, the problem was that I tried to set it up on an already existing git repo, which was synced to the server with unison. That messed up both repos to a degree that I was not able to fix, and that's been the status quo for several years now. Luckily, I am still able to use the same git repo for the sidecars.
That written, it does not mean git-annex is bad software. I like it very much and use it in places where I set up something new or handle less data, so that I can copy things over if I do something stupid. Furthermore, both failures were my mistakes, not the software's, and, time permitting, I will give it another try.
That said, I still think additional checksums are cheap and therefore a good thing to have, and a tighter integration into my workflow would be extremely helpful.