ZeroNoise-like script for PhotoFlow

@assaft I have packaged a “portable” version of the experimental branch, which can be found here.

The package is a simple ZIP file. It should be possible to extract it anywhere on your disk, and then run photoflow.exe and pfbatch.exe directly from the bin folder, without any installation…

The current script requires you to specify a PhF preset that is used for converting the RAW files to TIFF. The preset must contain a RAW developer layer, plus any other layers you might want to add (for example a lens correction layer). The path to the preset is given as the first parameter, while the rest of the parameters are the RAW files to be converted (their order doesn’t matter, since the files are re-ordered according to the EXIF data).
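For example (the script and file names below are only placeholders, not the actual names):

path-to-the-script/raw-to-tiff.sh my-develop-preset.pfp IMG_001.NEF IMG_002.NEF IMG_003.NEF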

Thank you for writing this script!

The reason I asked about a script is that I recently concluded that the best way to choke a decent image out of Sony’s A7 camera (with its entirely crippled firmware) is to plant the camera on a tripod and take EV-bracketed exposures. Your script works perfectly. Here are some 100% crops comparing editing results with and without the blended EV-bracketed shots:


EV-bracketed exposures blended in PhotoFlow. The image is dark because I had to underexpose the base exposure quite a lot to retain the highlight information (side-lit glasses and bottles), but the shadows are super-clean and can withstand fairly extreme editing.


After a +4 EV exposure adjustment in GIMP-CCE, followed by a rather steep Curves operation (roughly approximating the tonality of the final image). These are extreme edits, and yet the shadow areas don’t show any noise.


Same processing as above, except applied to just the base exposure that was underexposed to capture the highlights. This is the amount of noise the final image would have had if I had tried to process just the base exposure instead of the TIFF blended from the EV-bracketed RAW files.


Thanks. I made some progress, but it’s not ready yet. Unfortunately, the Windows shell is pretty limited compared to bash, so it won’t look as elegant as I’d want it to be.

It would help me to see the console output that you get on macOS/Linux, and the files $plist, aligned.txt, exposures.txt, and images.txt before they are deleted at the end of the script. It would be great if you could upload a ZIP with all of these.

@assaft I will send the outputs as soon as I can… Concerning the script, maybe the best option would be to re-write it in Perl? But I’m not a Perl specialist at all…

I can rewrite it in Python if that’s OK with you.


OK. Some thoughts:

  1. Maybe it would be interesting to look at how Adobe’s Linear DNG format works. As far as I know, it is similar to TIFF in the sense of being a pixel-data (demosaiced) file format, but it still doesn’t have WB applied to it. I used to work with this format in the past, when I used PhotoAcute to combine files for HDR or super-resolution and wanted to stay in a format that allows me to tweak the WB later on.

  2. I was wondering whether tone curve adjustments such as brightening an image, or recovering highlights or shadows, are more limited / noisier when applied to a TIFF compared to a RAW file.
    My experience with Lightroom is that when a RAW file is opened, the converter gives much more leeway for recovering highlights or brightening the shadows than when a TIFF file is opened. I thought that this is because, when we deal with a RAW file, the converter can perform some of these adjustments as part of the demosaicing process, and this gives better results. Is it the same in PhotoFlow? Is there any advantage to brightening an image using the exposure compensation setting in the RAW developer layer, compared to keeping it at 0, exporting to TIFF, and then opening the TIFF and applying a tone curve that brightens the pixel data in a similar way?

@assaft TIFF is just a container; in fact, a lot of RAW formats are based on TIFF. There is nothing stopping you from applying white balance or curves to the contents of a TIFF file.

@assaft The RAW data is stored in linear format, with 14-bit accuracy at best… The TIFF files resulting from the RAW processing are 16-bit, and can optionally be stored in linear gamma as well, so the TIFF files can store the image data with an accuracy at least comparable to the original RAW.

Therefore, I doubt that a linear DNG format would provide better accuracy than TIFF. On the other hand, the blended image has to be processed, and ideally also stored, in floating-point format, to take advantage of the increased amount of information provided by the blending of bracketed exposures. PhF indeed processes the image with floating-point accuracy, but does not (yet) provide the option to save the result to floating-point TIFF.
However, this option will be added very soon.

What influences the tone adjustments of the image is in fact mostly the gamma encoding… Many operations, like exposure compensation, channel mixing, or WB corrections, give better results when applied to linear-gamma data.
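To make this concrete, here is a minimal NumPy sketch (my own illustration, not PhotoFlow code, and the pure power gamma of 2.2 is only an approximation of typical encodings): a +1 EV exposure push is a plain ×2 in linear data, but the same ×2 applied to gamma-encoded values corresponds to roughly ×4.6 in linear terms, i.e. more than a +2 EV push, which clips the highlights much sooner.

```python
import numpy as np

linear = np.array([0.10, 0.20])       # two linear values, one stop apart

# +1 EV in linear space: a plain multiplication by 2
linear_pushed = linear * 2.0          # -> [0.20, 0.40]

# The same multiplication on gamma-encoded (pure power 2.2) values
gamma = linear ** (1 / 2.2)           # encode
gamma_pushed = gamma * 2.0            # naive "+1 EV" on encoded data
decoded = gamma_pushed ** 2.2         # -> ~[0.46, 0.92], i.e. ~x4.6 in linear

print(linear_pushed, decoded)
```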

OK, thanks for the information.

So about what you said here: [quote=“Carmelo_DrRaw, post:7, topic:3022”]
the main issue is that the exposure blending is based on a special luminosity mask that considers the maximum of the RGB channels of each pixel. This guarantees that every pixel where at least one of the RGB channels is clipped or near the clipping point is replaced by one with a lower exposure wherever possible. However, the clipping point varies depending on the WB multipliers, therefore the luminosity mask needs to be built from the final WB.
[/quote]

Why not convert the RAW files to TIFF without any WB applied (as if all multipliers were equal to 1.0), and apply the WB with a dedicated layer after the exposures have been blended?

I’m perfectly fine with Python… The main constraint is to be able to do the floating-point computation of the “exposure factors” starting from the aperture and shutter speed values, and to be able to extract the EXIF information somehow.
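Something along these lines, perhaps (a sketch of the math only; the function names are mine, and the EXIF extraction itself could be delegated to a library or an external exiftool call):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    # EV relative to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

def exposure_factor(ev_image: float, ev_reference: float) -> float:
    # Linear multiplier that brings an image shot at ev_image to the
    # brightness of the reference image shot at ev_reference
    return 2.0 ** (ev_image - ev_reference)

# Example: f/8 at 1/120 s needs a factor of 4.0 (+2 EV) to match f/8 at 1/30 s
print(exposure_factor(exposure_value(8, 1 / 120), exposure_value(8, 1 / 30)))
```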

I tried to explain this in one of the previous messages…:

In short: using the wrong WB multipliers might result in non-optimal exposure blending, since in some cases the multipliers can be close to 2.0…

You would need to apply the final WB to the individual images and then generate the luminosity blending mask from the WB-corrected layer. This would mean having as many WB correction layers as input images, all with the same multipliers. It is doable, but not very practical…
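A toy numeric example of the problem (the values here are made up for illustration):

```python
red_multiplier = 2.0   # a daylight-ish red WB multiplier (made-up value)
raw_red = 0.6          # raw red value, well below the clipping point of 1.0

after_wb = min(raw_red * red_multiplier, 1.0)   # 1.2 -> clipped to 1.0

# A luminosity mask built from the un-balanced value (0.6) would treat
# this pixel as safe, even though the final WB-corrected image is clipped
# there; a mask built after WB would replace it with a shorter exposure.
```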

Sure. Actually, the floating-point computation was one of the main problems with the Windows shell. OK, I’ll get back to you once I have something working. Please send me an example of the output you get on Linux once you have it.

I understand what you are saying, but I don’t see why this is different from the case in which the photographer uses the expose-to-the-right technique and obtains a RAW file with values as close as possible to the RAW saturation point. Opening such a file in a RAW converter and applying WB with high multipliers will clip some pixels, but that’s OK, because the photographer will use some tone curve adjustments to compress the DR. So the end result will be without clipping.

I think that HDRMerge shows a similar scenario. As far as I know, it doesn’t look at the WB when selecting the pixels for the blending; it just takes those from the exposure where they have the highest value that is still below the clipping point. The WB is ignored because, no matter how high the multipliers are set when developing the file, the photographer will be able to avoid clipping by compressing the DR in the RAW converter.

That’s correct.

From my experience with this kind of merging, you may want to have a bit of roll-off, or else you might find bands where the noise suddenly gets stronger if the user chose too big a bracketing step.

This is actually what already happens in the creation of the luminosity mask… There is an inverted curve with a linear roll-off that is used to generate the mask from the initial channel data:
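In NumPy terms, the idea is roughly the following (a sketch only; the knee and clip values are assumptions, not the actual curve parameters used by PhF):

```python
import numpy as np

def blend_mask(rgb: np.ndarray, knee: float = 0.8, clip: float = 1.0) -> np.ndarray:
    """Opacity mask for one exposure: 1 where the pixel is safely below
    clipping, 0 where any channel is clipped, with a linear roll-off in
    between (knee and clip are illustrative values, not PhF's)."""
    max_rgb = rgb.max(axis=-1)                 # max of R, G, B per pixel
    mask = (clip - max_rgb) / (clip - knee)    # 1 at the knee, 0 at clipping
    return np.clip(mask, 0.0, 1.0)
```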

[quote=“assaft, post:25, topic:3022”]
I think that HDRMerge shows a similar scenario. As far as I know it doesn’t look at the WB when selecting the pixels for the blending. It just takes those from the exposure where they have the highest value but still below the clipping point.
[/quote]

After further thought, I now agree with you. I am therefore modifying the script to convert the RAW files to TIFF with UniWB and the camera colorspace, and then to apply the WB and color conversion after the blending, as part of the blend.pfi edit.


How do I open multiple photos to do exposure blending? I downloaded the portable version for my Windows 10 computer, but I can’t seem to get it to open and then blend multiple exposures…

This is still quite experimental: it requires a non-standard version of the code and a script to prepare the input images. At the moment I have a working solution for Linux, but not for Windows.

@assaft is kindly helping me to write a Python script that can be executed in the Windows shell, since the original script was written in bash… In a couple of days we will hopefully be able to post detailed instructions here on how to run the script under Windows and Linux, and I will provide Windows packages of the needed PhF version.

The need for a script comes from the fact that the RAW images first have to be converted to TIFF, then aligned, and finally combined together through suitable luminosity masks. The script automates all the steps, from RAW processing to the final blending…

Thanks for explaining it. Looking forward to a Windows version.

I am bumping this rather old thread to announce that I have updated the ZeroNoise-like scripts to be compatible with the current PhotoFlow batch processing options.

The new scripts are available from the GitHub repository: PhotoFlow/scripts/exposure-blend at stable · aferrero2707/PhotoFlow · GitHub

Using the bash and Python versions of the scripts is very easy:

  • put the RAW files from your bracketed shots in some folder
  • from this folder, invoke the script, passing the list of RAW files, for example:
path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.sh *.NEF

or

path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.py *.NEF
  • this will create a file called blend.pfi in the current directory. Open this file with PhotoFlow, and you will get a stack of layer groups that load the individual images, apply the appropriate exposure compensation, and blend the non-overexposed areas together to minimize the noise:

A White Balance layer at the top allows you to adjust the WB of the final image.

The blend masks of each image can still be edited and refined, if needed.

Here is a before/after comparison of an under-exposed dark area (on the left are the pixels from the shortest exposure, on the right the pixels from the longest exposure):

The Python version of the script still has a bug in the case where the bracketed images were not taken in ascending exposure time, which I am trying to solve… The bash script, on the other hand, properly handles and arranges images taken in an arbitrary bracketing order.

The scripts require the latest PhF version from yesterday, either from the stable or the v0.3.0-RC1 branches.

Ping @XavAL as he is probably interested in this update.
