Following a recent suggestion from @Elle Stone, and thanks to her help, I have managed to create a script that blends multiple exposures into a low-noise, high-dynamic-range image, pretty much like Guillermo’s ZeroNoise and HDRMerge… with a few important differences:
the various bracketed exposures are individually processed and converted to 16-bit TIFFs using PhotoFlow’s batch processor. In this step, one can already correct the lens distortions, which helps the subsequent alignment and merging of the shots
the TIFFs are then aligned using Hugin’s align_image_stack
the aligned images are then included in an auto-generated .PFI file that provides all the required exposure compensations and luminosity masks in the form of non-destructive layers
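The three steps above can be sketched as a small driver that builds the commands without running them. The pfbatch argument order (preset first, then the RAW files) follows the description later in this thread, but everything else about its CLI is an assumption; align_image_stack and its `-a` output-prefix option are standard Hugin. The preset name is a placeholder.

```python
# Sketch of the pipeline, building the argv lists without executing them.
# The pfbatch invocation is assumed (preset path, then RAW files, as
# described later in the thread); align_image_stack is Hugin's real tool.
from pathlib import Path

def build_pipeline(preset: str, raw_files: list[str]):
    """Return the argv lists for the conversion and alignment steps."""
    tiffs = [str(Path(f).with_suffix(".tif")) for f in raw_files]
    convert = ["pfbatch", preset, *raw_files]                 # step 1: RAW -> 16-bit TIFF
    align = ["align_image_stack", "-a", "aligned_", *tiffs]   # step 2: align the TIFFs
    # step 3, generating the .PFI with exposure-compensation layers and
    # luminosity masks, is handled by the script itself
    return convert, align

convert_cmd, align_cmd = build_pipeline("develop.pfp", ["IMG_1.CR2", "IMG_2.CR2"])
```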
One advantage of this approach is that once the final .PFI file is opened, all aspects of the exposure blending can still be tweaked. For example, it is possible to modify the blending masks in order to manually remove ghosting due to moving objects. It is also possible to tweak the exposure compensation of each image in case the auto-generated values are not precise enough.
Finally, the .PFI file can be directly opened in GIMP via the PhF plug-in.
Here is an example of a five-stop bracketed sequence merged with the script:
@Carmelo_DrRaw so the PFI file stores all layers? I’m curious, why PFI and not layered TIFF or some other existing format? Do PFI files generally work as sidecar files, and if not, how do you store the info that usually goes into sidecar files?
PFI files are human-readable XML files that store the layer structure as well as all the different layer parameters.
A PFI file is more complex than a layered TIFF, because layers can be grouped, and also because the PFI file contains all the parameters of the tools associated with each layer. It basically contains the full logical structure of the edit.
PFI files work somewhat like sidecar files, in the sense that the image data is not stored in the PFI file itself but remains in the original input files.
However, there is not a 1:1 relation between PFI files and image files, because a PFI file can contain several “image layers”, each loading one raster or RAW input file. The PFI file only contains the name(s) of the image(s) that need to be loaded…
Whoa! This sounds amazing! I’d be glad to try it with some bracketed sets that I have.
One thing I wanted to ask - is it correct that the conversion from RAW to TIFF determines the demosaicing algorithm/parameters and WB and applies them, so these cannot be changed later on after opening the TIFF files for alignment and tweaking?
Correct… there is one option though for WB: it is possible to save the TIFF files in the camera color space (i.e. do not apply any conversion to a standard colorspace like sRGB), and then add the CAMERA → RGB conversion in the final PFI and above the blending.
You could then insert a WB correction layer in the final PFI file, between the blending and the CAMERA → RGB conversion.
This will work fine if you need to apply small adjustments to the original WB.
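The layer ordering described here can be sketched numerically: the blended camera-space pixels pass through the WB correction layer first, then through the CAMERA → RGB conversion. The identity matrix and the multiplier values below are placeholders, not a real camera profile.

```python
# Sketch of the layer order described above, applied bottom-to-top:
# blended camera-space pixels -> WB correction -> CAMERA -> RGB matrix.
# The identity matrix and the WB multipliers are placeholder values.

def apply_wb(rgb, wb):
    """WB correction layer: per-channel multipliers in camera space."""
    return tuple(c * w for c, w in zip(rgb, wb))

def camera_to_rgb(rgb, matrix):
    """CAMERA -> RGB conversion layer: a 3x3 matrix transform."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
blended = (0.2, 0.3, 0.4)                 # output of the exposure blending
out = camera_to_rgb(apply_wb(blended, (2.0, 1.0, 1.5)), IDENTITY)
```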
For larger adjustments, I would need to think further… the main issue is that the exposure blending is based on a special luminosity mask that considers the maximum of the RGB channels of each pixel. This guarantees that every pixel where at least one of the RGB channels is clipped or near the clipping point is replaced by one with a lower exposure wherever possible.
However, the clipping point varies depending on the WB multipliers, therefore the luminosity mask needs to be built from the final WB.
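The max-of-RGB mask described above can be sketched as follows. The 0.9 knee and the linear roll-off are assumptions for illustration, not PhotoFlow’s exact formula, but they show why the mask depends on the WB multipliers.

```python
# Illustrative sketch of the blending mask: a pixel is masked out
# (mask -> 0) when the maximum of its white-balanced RGB channels
# approaches the clipping point, so a lower exposure can take over.
# The 0.9 knee and the linear roll-off are assumptions.

def blend_mask(rgb, wb=(1.0, 1.0, 1.0), clip=1.0, knee=0.9):
    """rgb: linear (r, g, b) tuple; wb: white-balance multipliers."""
    m = max(c * w for c, w in zip(rgb, wb))
    if m <= knee * clip:
        return 1.0          # well below clipping: keep this exposure
    if m >= clip:
        return 0.0          # clipped: use the darker exposure instead
    # linear roll-off between the knee and the clipping point
    return (clip - m) / (clip - knee * clip)

# The effective clipping point moves with the WB multipliers:
print(blend_mask((0.85, 0.5, 0.5)))                      # 1.0: below the knee
print(blend_mask((0.85, 0.5, 0.5), wb=(1.2, 1.0, 1.0)))  # 0.0: red pushed past clip
```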
If I remember correctly, you are working under Windows, right? In that case, I would need help from someone skilled in writing Windows batch files, as the version of the script I have at the moment only works under Linux…
@assaft I have packaged a “portable” version of the experimental branch, which can be found here.
The package is a simple ZIP file. It should be possible to extract it anywhere on your disk, and then run photoflow.exe and pfbatch.exe directly from the bin folder, without any installation…
The current script requires specifying a PhF preset that is used for converting the RAW files to TIFF. The preset must contain a RAW developer layer, plus any other layer you might want to add (for example a lens correction layer). The path to the preset is given as the first parameter, while the rest of the parameters are the RAW files to be converted (the order doesn’t matter, since the files are re-ordered according to the EXIF data).
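The re-ordering by EXIF data can be sketched with pre-read exposure times. The file names and values below are made up, and darkest-first is just one plausible ordering; in the real script the times come from the RAW files' metadata.

```python
# Re-order the input files by exposure, darkest (shortest shutter time)
# first. The exposure times here are made-up placeholders; in practice
# they would be read from the EXIF data of the RAW files.
exif_exposure = {
    "IMG_3.CR2": 1 / 100,   # seconds
    "IMG_1.CR2": 1 / 400,
    "IMG_2.CR2": 1 / 25,
}
ordered = sorted(exif_exposure, key=exif_exposure.get)
print(ordered)   # darkest exposure first
```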
The reason I had asked about a script is because I recently concluded that the best way to coax a decent image out of Sony’s A7 camera - with its entirely crippled firmware - is to plant the camera on a tripod and take EV-bracketed exposures. Your script works perfectly. Here are some 100% crops comparing editing results with and without using the blended EV-bracketed shots:
EV-bracketed exposures blended in PhotoFlow - the image is dark because I had to underexpose the base exposure quite a lot to retain the highlight information (side-lit glasses and bottles). But the shadows are super-clean and can withstand fairly extreme editing.
After +4 exposure in GIMP-CCE, followed by a rather steep Curves operation (roughly approximating the tonality of the final image) - these are extreme edits and yet the shadow areas don’t show any noise.
Same processing as above, except done on just the base exposure that was underexposed to capture the highlights. This is the amount of noise the final image would have had if I had tried to process just the base exposure instead of the blended tiff from the ev-bracketed raw files.
Thanks. I made some progress but it’s not ready yet. Unfortunately, the Windows shell is pretty limited compared to bash, so it won’t look as elegant as I’d want it to be.
It would help me to see the console output that you get on OS X/Linux, and the files $plist, aligned.txt, exposures.txt and images.txt before they are deleted at the end of the script. It would be great if you could upload a zip with all of these.
Maybe it would be interesting to look at how Adobe’s Linear DNG format works. As far as I know it is similar to TIFF in the sense of being a pixel-data (demosaiced) file format, but it still doesn’t have WB applied to it. I used to work with this format in the past when I used PhotoAcute to combine files for the purpose of HDR or Super Resolution and wanted to stay in a format that allows me to tweak the WB later on.
I was wondering if tone curve adjustments such as brightening an image, or recovering highlights or shadows, are more limited / noisier when applied to a TIFF compared to a RAW file.
My experience with Lightroom is that when a RAW file is opened, the converter gives much more leeway for recovering highlights or brightening the shadows compared to when a TIFF file is opened. I thought that this is because when we deal with a RAW file, the converter can perform some of these adjustments as part of the demosaicing process and this gives better results. Is it the same in PhotoFlow? Is there any advantage to brightening an image using the exposure compensation setting in the RAW Developer layer, compared to keeping it at 0, exporting to TIFF, and then opening the TIFF and applying a tone curve that brightens the pixel data in a similar way?
@assaft the RAW data is stored in linear format, with 14-bit accuracy at best… the TIFF files resulting from the RAW processing are 16-bit, and can optionally be stored in linear-gamma format as well, so the TIFF files can store the image data with an accuracy at least comparable to the original RAW.
Therefore, I doubt that a linear DNG format would provide better accuracy than TIFF. On the other hand, the blended image has to be processed and ideally stored in floating-point format, to take advantage of the increased amount of information provided by the blending of bracketed exposures. PhF indeed processes the image with floating-point accuracy, but does not (yet) provide the option to save the result to floating-point TIFF.
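A simplified counting argument for why floating point is needed after blending. The numbers below just compare representable levels, ignoring noise and quantization details:

```python
# Simplified level-counting: a 14-bit RAW fits into a 16-bit linear
# TIFF with room to spare, but blending a 5-stop bracket multiplies
# the representable range by 2**5, which a 16-bit integer container
# can no longer hold without sacrificing shadow precision.
raw_levels = 2 ** 14                  # one exposure, linear RAW data
tiff_levels = 2 ** 16                 # 16-bit integer TIFF container
bracket_stops = 5
blended_levels = raw_levels * 2 ** bracket_stops

single_fits = tiff_levels >= raw_levels        # True
blended_fits = tiff_levels >= blended_levels   # False: needs floating point
```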
However, this option will be added very soon.
What influences the tone adjustment of the image is in fact mostly the gamma encoding… many operations, like exposure compensation, channel mixing or WB corrections, give better results when applied to linear-gamma data.
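A quick numerical illustration of that point, using a plain 2.2 power function as a stand-in for the actual transfer curve:

```python
# +1 EV means doubling the light. Doubling a linear value does exactly
# that; applying the same *2 to a gamma-encoded value (a 2.2 power law
# is used here as a stand-in for the real encoding) brightens the
# image by far more than one stop.
GAMMA = 2.2

def to_gamma(x):
    return x ** (1.0 / GAMMA)

def to_linear(x):
    return x ** GAMMA

lin = 0.18                                    # mid-grey, linear
linear_push = lin * 2.0                       # +1 EV applied to linear data
naive_push = to_linear(to_gamma(lin) * 2.0)   # same *2 on gamma-encoded data

print(linear_push / lin)   # 2.0: exactly one stop
print(naive_push / lin)    # 2**2.2, about 4.6: far more than one stop
```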