ZeroNoise-like script for PhotoFlow

@assaft I will send the outputs as soon as I can… concerning the script, maybe the best would be to re-write it in perl? But I’m not a perl specialist at all…

I can rewrite it in python if that’s ok with you.

OK. Some thoughts:

  1. Maybe it would be interesting to look at how Adobe’s Linear DNG format works. As far as I know it is similar to TIFF in the sense of being a pixel-data (demosaiced) file format, but it still doesn’t have the WB applied to it. I used to work with this format in the past, when I used PhotoAcute to combine files for HDR or super-resolution and wanted to stay in a format that allowed me to tweak the WB later on.

  2. I was wondering if tone curve adjustments, such as brightening an image or recovering highlights or shadows, are more limited / noisier when applied to a TIFF compared to a RAW file.
    My experience with Lightroom is that when a RAW file is opened, the converter gives much more leeway for recovering highlights or brightening the shadows than when a TIFF file is opened. I thought this is because, when we deal with a RAW file, the converter can perform some of these adjustments as part of the demosaicing process, and this gives better results. Is it the same in PhotoFlow? Is there any advantage to brightening an image using the exposure compensation setting in the RAW Developer layer, compared to keeping it at 0, exporting to TIFF, and then applying a tone curve that brightens the pixel data of the TIFF in a similar way?

@assaft tiff is just a container. In fact a lot of raw files are based on tiff. There is nothing stopping you from applying white balance or curves to the contents of a tiff file.
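
For example, here is a minimal Python sketch of applying WB multipliers to the contents of a 16-bit TIFF (it assumes the numpy and tifffile packages, a TIFF stored with linear gamma, and made-up multiplier values):

import numpy as np
import tifffile

# Load a 16-bit TIFF and work in floating point, in the [0, 1] range
img = tifffile.imread('input.tif').astype(np.float32) / 65535.0

# Hypothetical WB multipliers for the R, G and B channels
wb = np.array([2.0, 1.0, 1.5], dtype=np.float32)

# Apply the multipliers channel-wise; values pushed past 1.0 get clipped
img = np.clip(img * wb, 0.0, 1.0)

tifffile.imwrite('output.tif', np.round(img * 65535.0).astype(np.uint16))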

@assaft the RAW data is stored in linear format, with 14-bit accuracy at best… the TIFF files resulting from the RAW processing are 16-bit, and can also be stored in linear gamma, so the TIFF files can hold the image data with an accuracy at least comparable to the original RAW.

Therefore, I doubt that a linear DNG format would provide better accuracy than TIFF. On the other hand, the blended image has to be processed, and ultimately stored, in floating-point format in order to take advantage of the increased amount of information provided by the blending of bracketed exposures. PhF indeed processes the image with floating-point accuracy, but does not (yet) provide the option to save the result to a floating-point TIFF.
However, this option will be added very soon.

What influences the tone adjustment of the image is in fact mostly the gamma encoding… many operations, like exposure compensation, channel mixing or WB corrections, give better results when applied to linear-gamma data.
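
For example (a quick Python illustration, using the simple 2.2 power approximation of the sRGB curve rather than the exact piecewise one): a +1 EV exposure compensation is a plain doubling of the linear values, while doubling the gamma-encoded values gives a different, distorted result:

import numpy as np

GAMMA = 2.2  # rough power-law approximation of the sRGB encoding

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

v = 0.25  # a gamma-encoded pixel value

# +1 EV done correctly: decode to linear, double, re-encode
linear_way = to_gamma(np.clip(to_linear(v) * 2.0, 0.0, 1.0))

# +1 EV applied naively to the encoded value
gamma_way = np.clip(v * 2.0, 0.0, 1.0)

print(linear_way, gamma_way)  # ~0.34 vs 0.5: visibly different tones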

OK, thanks for the information.

So about what you said here: [quote=“Carmelo_DrRaw, post:7, topic:3022”]
the main issue is that the exposure blending is based on a special luminosity mask that considers the maximum of the RGB channels of each pixel. This guarantees that every pixel where at least one of the RGB channels is clipped or near the clipping point is replaced by one with a lower exposure wherever possible. However, the clipping point varies depending on the WB multipliers, therefore the luminosity mask needs to be built from the final WB.
[/quote]

Why not convert the RAW files to TIFF without any WB applied (as if all multipliers were equal to 1.0), and apply the WB using a dedicated layer after the exposures have been blended?

I’m perfectly fine with python… the main constraint is to be able to do the floating-point computations of the “exposure factors” starting from the aperture and shutter speed values, and to be able to extract the EXIF information somehow.
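
To give an idea of what is needed, a rough Python sketch (assuming exiftool is installed, that the ISO is constant across the bracket, and placeholder file names):

import math
import subprocess

def aperture_and_shutter(path):
    # exiftool prints bare values with -s3, and numeric values with -n
    # (e.g. 0.005 instead of 1/200)
    out = subprocess.check_output(
        ['exiftool', '-s3', '-n', '-FNumber', '-ExposureTime', path],
        text=True).split()
    return float(out[0]), float(out[1])

def exposure_factor(path, ref_path):
    # The light reaching the sensor is proportional to t / N^2
    n, t = aperture_and_shutter(path)
    n_ref, t_ref = aperture_and_shutter(ref_path)
    return (t / t_ref) * (n_ref ** 2) / (n ** 2)

files = ['img_0.NEF', 'img_1.NEF', 'img_2.NEF']
for f in files:
    k = exposure_factor(f, files[0])
    print(f, k, math.log2(k), 'EV')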

I tried to explain this in one of the previous messages…:

In short: using the wrong WB multipliers might result in non-optimal exposure blending, since in some cases the multipliers can be close to 2.0… for example, with a red multiplier of 2.0, a red channel at 55% of the RAW range ends up clipped once the WB is applied, so a mask built before the WB would wrongly consider that pixel safe.

You would need to apply the final WB to the individual images and then generate the luminosity blending mask from the WB-corrected layer. This would mean having as many WB correction layers as input images, all with the same multipliers. It is doable, but not very practical…

Sure. Actually, the floating-point computation was one of the main problems with the Windows shell. OK, I’ll get back to you once I have something working. Please send me an example of the output you get on Linux once you have it.

I understand what you are saying, but I don’t see why this is different from a case in which the photographer uses the expose-to-the-right technique and obtains a RAW file with values as close as possible to the RAW saturation point. Opening such a file in a RAW converter and applying WB with high multipliers will clip some pixels, but that’s OK, because the photographer will use some tone curve adjustments to compress the DR, so the end result will be without clipping.

I think that HDRMerge shows a similar scenario. As far as I know, it doesn’t look at the WB when selecting the pixels for the blending: it just takes those from the exposure where they have the highest value while still being below the clipping point. The WB is ignored because, no matter how high the multipliers are set when developing the file, the photographer will be able to avoid clipping by compressing the DR in the RAW converter.

That’s correct.

From my experience with doing this kind of merging, you may want to have a bit of roll-off, or else you might find bands where the noise suddenly gets stronger, in case the user used too big a bracket step.

This is actually what happens already in the creation of the luminosity mask… there is an inverted curve with a linear roll-off that is used to generate the mask from the initial channel data:
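
In numpy terms, the idea is roughly the following (a sketch; the actual curve and threshold values used by PhotoFlow differ):

import numpy as np

def luminosity_mask(img, clip_point=1.0, rolloff=0.25):
    # img: float RGB data in [0, 1], with shape (height, width, 3).
    # The luminosity is the maximum of the RGB channels, so a pixel
    # counts as near-clipping as soon as any single channel gets close.
    lum = img.max(axis=2)
    # Inverted curve with a linear roll-off: 1 well below the clipping
    # point, ramping down to 0 at the clipping point itself.
    return np.clip((clip_point - lum) / rolloff, 0.0, 1.0)

Where the mask is 1 the pixel is kept from the current exposure, where it is 0 it is replaced by the next shorter one, and the linear ramp in between is what avoids the sudden noise bands you mention.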

[quote=“assaft, post:25, topic:3022”]
I think that HDRMerge shows a similar scenario. As far as I know, it doesn’t look at the WB when selecting the pixels for the blending: it just takes those from the exposure where they have the highest value while still being below the clipping point.
[/quote]

After further thinking, now I agree with you. I am therefore modifying the script to convert RAW files to TIFF with UniWB and camera colorspace, and then to apply the WB and color conversion after the blending, as part of the blend.pfi edit.

How do I open multiple photos to do exposure blending? I downloaded the portable version for my Windows 10 computer, but I can’t seem to figure out how to open and then blend multiple exposures…

This is still quite experimental: it requires a non-standard version of the code, and a script to prepare the input images. At the moment I have a working solution for Linux, but not for Windows.

@assaft is kindly helping me to write a python script that can be executed in the Windows shell, as the original script was written in bash… in a couple of days we will hopefully be able to post detailed instructions here on how to run the script under Windows and Linux, and I will provide Windows packages of the needed PhF version.

The need for a script comes from the fact that the RAW images first have to be converted to TIFF, then aligned, and finally combined together through suitable luminosity masks. The script automates all the steps, from the RAW processing to the final blending…
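
In pseudo-code, the flow is something like this Python sketch (assuming a --batch mode that takes an input file, a preset and an output file; the raw-to-tiff.pfp preset name and the file names are placeholders, and the alignment is illustrated with Hugin’s align_image_stack, so the real script differs in the details):

import subprocess

raws = ['img_0.NEF', 'img_1.NEF', 'img_2.NEF']

# 1. RAW -> 16-bit TIFF with a PhotoFlow preset, one file at a time
tiffs = []
for raw in raws:
    tif = raw + '.tif'
    subprocess.run(['photoflow', '--batch', raw, 'raw-to-tiff.pfp', tif],
                   check=True)
    tiffs.append(tif)

# 2. Align the TIFFs, here with Hugin's align_image_stack
subprocess.run(['align_image_stack', '-a', 'aligned_'] + tiffs, check=True)

# 3. Write out a blend.pfi that stacks the aligned images with their
#    exposure compensations and luminosity blend masks (this is the
#    part that the exposure-blend script generates)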

Thanks for explaining it. Looking forward to a windows version.

I am bumping this rather old thread to announce that I have updated the ZeroNoise-like scripts to be compatible with the current PhotoFlow batch processing options.

The new scripts are available from the GitHub repository: PhotoFlow/scripts/exposure-blend at stable · aferrero2707/PhotoFlow · GitHub

The usage of the bash and python versions of the scripts is very easy:

  • put the RAW files from your bracketed shots in some folder
  • from this folder, invoke the script passing the list of raw files, for example:
path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.sh *.NEF

or

path-to-the-local-photoflow-git-repository/scripts/exposure-blend/exposure-blend.py *.NEF
  • this will create a file called blend.pfi in the current directory. Open this file with PhotoFlow, and you will get a stack of layer groups that load the individual images, apply the appropriate exposure compensation, and blend the non-overexposed areas together to minimize the noise:

A White Balance layer at the top allows you to adjust the WB of the final image.

The blend masks of each image can still be edited and refined, if needed.

Here is a before/after comparison on an over-exposed dark area (left are the pixels from the shortest exposure, right are the pixels from the longest exposure):

The Python version of the script still has a bug in the case where the bracketed images were not taken in ascending exposure time, which I am trying to solve… on the other hand, the bash script properly handles and arranges images taken in an arbitrary bracketing order.

The scripts require the latest PhF version from yesterday, from either the stable or the v0.3.0-RC1 branch.

Ping @XavAL as he is probably interested in this update.

Thanks, thanks, thanks! :smiley:

Can you help me figure out why the stack script fails with a FITS file?

ImageStack/stack.sh: line 32: 33442 Segmentation fault: 11 ${phfdir}photoflow --batch "$img" "$fitspreset" "$img.tif" >&"$img.log"

@matsmyth Welcome to the forum! Could you start by telling us what version you are using?

I would advise you to use the most current one here: Release Continuous build · aferrero2707/PhotoFlow · GitHub. Unfortunately, it isn’t the easiest to find.

If it doesn’t work on the latest version, it means that the script needs updating.