ZeroNoise-like script for PhotoFlow

OK, thanks for the information.

So about what you said here: [quote=“Carmelo_DrRaw, post:7, topic:3022”]
the main issue is that the exposure blending is based on a special luminosity mask that considers the maximum of the RGB channels of each pixel. This guarantees that every pixel where at least one of the RGB channels is clipped or near the clipping point is replaced by one with a lower exposure wherever possible. However, the clipping point varies depending on the WB multipliers, therefore the luminosity mask needs to be built from the final WB.
[/quote]

Why not convert the RAW files to TIFF without any WB applied (i.e. with all multipliers equal to 1.0), and then apply the WB with a dedicated layer after the exposures have been blended?

I’m perfectly fine with Python… the main constraints are being able to do the floating-point computation of the “exposure factors” from the aperture and shutter speed values, and being able to extract the EXIF information somehow.
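To illustrate the kind of computation involved, here is a minimal Python sketch that reads the aperture and shutter speed via exiftool and derives the linear exposure factor of each frame relative to the first one. This is not the actual exposure-blend script; the exiftool invocation (-json, -n, tag names) is simply one way I would expect to read the values.

```python
import json
import math
import subprocess
import sys

def read_exposure_params(raw_file):
    """Read aperture (f-number) and shutter speed (seconds) from EXIF via exiftool."""
    out = subprocess.run(
        ["exiftool", "-json", "-n", "-FNumber", "-ExposureTime", raw_file],
        capture_output=True, text=True, check=True).stdout
    tags = json.loads(out)[0]
    return float(tags["FNumber"]), float(tags["ExposureTime"])

def exposure_factor(f_number, shutter, f_ref, shutter_ref):
    """Linear exposure factor of a frame relative to a reference frame.

    The light reaching the sensor is proportional to t / N^2, so the
    factor is (t / t_ref) * (N_ref / N)^2; log2 of it is the EV offset.
    """
    return (shutter / shutter_ref) * (f_ref / f_number) ** 2

if __name__ == "__main__":
    files = sys.argv[1:]
    params = [read_exposure_params(f) for f in files]
    f_ref, t_ref = params[0]                  # use the first frame as the reference
    for name, (fn, t) in zip(files, params):
        fac = exposure_factor(fn, t, f_ref, t_ref)
        print(f"{name}: factor {fac:.4f} ({math.log2(fac):+.2f} EV)")
```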

I tried to explain this in one of the previous messages:

In short: using the wrong WB multipliers might result in non-optimal exposure blending, since in some cases the multipliers can be close to 2.0… For example, a pixel whose red channel sits at 0.6 of the raw clipping point looks safe before WB, but a red multiplier of 2.0 pushes it to 1.2, well past clipping, so a mask built before WB would not replace it.

You would need to apply the final WB to the individual images and then generate the luminosity blending mask from the WB-corrected layer. This would mean having as many WB correction layers as input images, all with the same multipliers. It is doable, but not very practical…

Sure. Actually, the floating-point computation was one of the main problems with the Windows shell. OK, I’ll get back to you once I have something working. Please send me an example of the output you get on Linux once you have it.

I understand what you are saying, but I don’t see how this differs from the case in which the photographer uses the expose-to-the-right technique and obtains a RAW file with values as close as possible to the RAW saturation point. Opening such a file in a RAW converter and applying WB with high multipliers will clip some pixels, but that’s fine, because the photographer will use tone-curve adjustments to compress the DR, so the end result will be free of clipping.

I think HDRMerge presents a similar scenario. As far as I know, it doesn’t look at the WB when selecting the pixels for the blending: it simply takes them from the exposure where they have the highest value that is still below the clipping point. The WB is ignored because, no matter how high the multipliers are set when developing the file, the photographer can avoid clipping by compressing the DR in the RAW converter.

That’s correct.

From my experience with this kind of merging, you may want a bit of roll-off, otherwise you might find bands where the noise suddenly gets stronger if the user chose too large a bracketing step.

This is actually what already happens in the creation of the luminosity mask… an inverted curve with a linear roll-off is used to generate the mask from the initial channel data.
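To make the shape of such a mask concrete, here is a small NumPy sketch; it is my own illustration, not the actual PhotoFlow code, and the 0.8/0.95 thresholds are invented for the example. The mask follows the maximum of the RGB channels and fades linearly to zero as the pixel approaches the clipping point.

```python
import numpy as np

def blend_mask(rgb, roll_start=0.8, clip_point=0.95):
    """Opacity mask for one exposure in the stack.

    rgb: float array of shape (H, W, 3), scaled so that the clipping
    point is 1.0 (after exposure compensation and, ideally, the final WB).
    Returns 1.0 where the pixel is safely below clipping, 0.0 where any
    channel is at or above clip_point, with a linear roll-off in between.
    """
    lum = rgb.max(axis=-1)                        # max of the RGB channels
    mask = (clip_point - lum) / (clip_point - roll_start)
    return np.clip(mask, 0.0, 1.0)
```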

[quote=“assaft, post:25, topic:3022”]
I think that HDRMerge shows a similar scenario. As far as I know it doesn’t look at the WB when selecting the pixels for the blending. It just takes those from the exposure where they have the highest value but still below the clipping point.
[/quote]

After thinking about it further, I now agree with you. I am therefore modifying the script to convert the RAW files to TIFF with UniWB and the camera colorspace, and to apply the WB and color conversion after the blending, as part of the blend.pfi edit.
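As an illustration of that conversion step, here is a hedged Python sketch that drives dcraw; the actual script may use a different RAW developer, and these flags are just my assumption of one way to obtain a linear 16-bit TIFF with unity multipliers and the camera colorspace.

```python
import subprocess

def raw_to_uniwb_tiff(raw_file):
    """Develop a RAW file to a linear 16-bit TIFF, with unity WB
    multipliers (UniWB) and the camera colorspace left untouched, so
    that WB and the color conversion can be applied later, after the
    blending."""
    subprocess.run(
        ["dcraw",
         "-T",                        # write a TIFF instead of a PPM
         "-4",                        # linear 16-bit output, no gamma/brightness
         "-o", "0",                   # keep the raw/camera colorspace
         "-r", "1", "1", "1", "1",    # unity WB multipliers (UniWB)
         raw_file],
        check=True)
```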


How do I open multiple photos to do exposure blending? I downloaded the portable version for my Windows 10 computer, but I can’t seem to figure out how to open and then blend multiple exposures…

This is still quite experimental, it requires a non-standard version of the code and a script to prepare the input images. At the moment I have a working solution for Linux, but not for Windows.

@assaft is kindly helping me write a Python script that can be executed in the Windows shell, since the original script was in bash… in a couple of days we will hopefully be able to post detailed instructions here on how to run the script under Windows and Linux, and I will provide Windows packages of the needed PhF version.

The need for a script comes from the fact that the RAW images first have to be converted to TIFF, then aligned, and finally combined through suitable luminosity masks. The script automates all of these steps, from RAW processing to the final blending…
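For a rough mental model of the workflow, here is a short, self-contained Python outline of the three stages. It is my own sketch, not the actual exposure-blend script; in particular, the use of dcraw for the conversion and of Hugin's align_image_stack for the alignment are assumptions about tools that could perform each step.

```python
import subprocess
import sys

def exposure_blend(raw_files):
    # stage 1: develop each RAW to a linear, UniWB TIFF
    tiffs = []
    for f in raw_files:
        subprocess.run(["dcraw", "-T", "-4", "-o", "0",
                        "-r", "1", "1", "1", "1", f], check=True)
        # dcraw typically writes <name>.tiff next to the RAW file
        tiffs.append(f.rsplit(".", 1)[0] + ".tiff")
    # stage 2: register the frames so that the blend masks line up
    subprocess.run(["align_image_stack", "-a", "aligned_"] + tiffs,
                   check=True)
    # stage 3: write a blend.pfi that stacks the aligned images, one
    # group per exposure, each with its exposure compensation and
    # luminosity blend mask (the .pfi format is PhotoFlow-specific and
    # not reproduced here)

if __name__ == "__main__":
    exposure_blend(sys.argv[1:])
```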

Thanks for explaining it. Looking forward to a Windows version.

I am bumping this rather old thread to announce that I have updated the ZeroNoise-like scripts to be compatible with the current PhotoFlow batch-processing options.

The new scripts are available from the GitHub repository: PhotoFlow/scripts/exposure-blend at stable · aferrero2707/PhotoFlow · GitHub

Using the bash and Python versions of the scripts is very easy:

  • put the RAW files from your bracketed shots in some folder
  • from this folder, invoke the script, passing it the list of RAW files, for example:
path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.sh *.NEF

or

path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.py *.NEF
  • this will create a file called blend.pfi in the current directory. Open this file with PhotoFlow, and you will get a stack of layer groups that load the individual images, apply the appropriate exposure compensation, and blend the non-overexposed areas together to minimize the noise:

A White Balance layer at the top allows you to adjust the WB of the final image.

The blend masks of each image can still be edited and refined, if needed.

Here is a before/after comparison on an over-exposed dark area (left are the pixels from the shortest exposure, right are the pixels from the longest exposure):

The Python version of the script still has a bug in the case where the bracketed images were not taken in order of ascending exposure time, which I am trying to solve… the bash script, on the other hand, properly handles and arranges images taken in an arbitrary bracketing order.

The scripts require the latest PhF version from yesterday, from either the stable or the v0.3.0-RC1 branch.

Ping @XavAL as he is probably interested in this update.


Thanks, thanks, thanks! :smiley:


Can you help me figure out why the stack script fails with a FITS file?

ImageStack/stack.sh: line 32: 33442 Segmentation fault: 11 ${phfdir}photoflow --batch "$img" "$fitspreset" "$img.tif" >&"$img.log"

@matsmyth Welcome to the forum! Could you start by telling us what version you are using?

I would advise you to use the most current one here: Release Continuous build · aferrero2707/PhotoFlow · GitHub. Unfortunately, it isn’t the easiest to find.

If it doesn’t work on the latest version, it means that the script needs updating.

Hi @matsmyth, welcome to the forum and sorry for the late reply!

I have never tried to process FITS files… could you provide me with the exact command you are running and a sample file?

Thanks!