HDR and linear gamma

As per another post, I am trying to find software for a workflow that includes both HDR merging and image averaging.

One of the central ideas in HDR merging is to estimate the “camera response function”, or CRF. I recently came across this paper, which points out that the key assumptions behind this approach to HDR merging are basically false. In particular, it shows that the CRF is not a fixed property of the camera but an artifact of the nonlinear processing applied to the raw file; the raw data itself is quite linear, and thus has a trivial CRF.

The paper develops a way to un-gamma-correct a set of LDR images that have already been converted.
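In the simplest possible case, where the nonlinearity really is a plain power-law gamma, undoing it is just applying the inverse power; a minimal sketch, assuming a 2.2 exponent purely for illustration (the paper's actual method has to estimate the nonlinearity from the image stack):

```python
# Undo a pure power-law gamma encoding v = linear**(1/gamma).
# The 2.2 exponent is an illustrative assumption, not the paper's estimate.
import numpy as np

def ungamma(img, gamma=2.2):
    """Map a gamma-encoded image in [0, 1] back to (approximately) linear."""
    return np.clip(img, 0.0, 1.0) ** gamma
```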

However, their Section 4 shows that HDR merging then reduces to a simple weighted average of the linearly converted raw files. So if you have the raw files, there is no point in applying nonlinear processing and then having to undo it.
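In code, that merge amounts to something like the following; a minimal numpy sketch of my own (not the paper's code), assuming aligned linear float images in [0, 1] with known exposure times, and with illustrative clipping thresholds:

```python
import numpy as np

def merge_linear_hdr(imgs, times, clip=(0.01, 0.99)):
    """Weighted-average HDR merge of aligned linear images.

    Each frame is divided by its exposure time to put all frames on a
    common radiance scale, then averaged with weights proportional to
    exposure time; pixels near the clipping limits get zero weight.
    """
    num = np.zeros_like(imgs[0], dtype=np.float64)
    den = np.zeros_like(imgs[0], dtype=np.float64)
    for img, t in zip(imgs, times):
        w = t * ((img > clip[0]) & (img < clip[1]))  # exposure-time weight, masked
        num += w * (img / t)                         # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)              # guard against all-zero weights
```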

My first question is how I can achieve this with existing software. Is there a way to do it in RawTherapee or darktable, possibly in combination with Hugin (for alignment)?

This seems like a superior approach to making HDR files compared with the conventional algorithms that work on nonlinearly converted files.

I am not sure what would happen if one tried merging linear-gamma files made with RT or darktable in a conventional HDR merge process. Would that give the correct weighted linear average, or would an algorithm that insists on finding a nonlinear CRF mess things up?


Have you tried HDRMerge yet?

No, I haven’t tried HDRMerge, in part because its website on GitHub says that it implements a set of older algorithms: Ward’s 2003 algorithm for image registration and a 2001 paper by Jarosz on fast convolutions.

So I incorrectly assumed it was “old school HDR”. Prompted by your suggestion I just read the manual.

The good news is that it does seem to use linear gamma conversions from RAW!

On the other hand, the bad news (from my standpoint) is that it does not appear to do a weighted average of the pixels; instead, it seems to calculate masks to grab certain pixels from each image.

I would also like to do image averaging, using multiple LDR files at the same exposure. HDRMerge does not appear to support that.

But thanks for the suggestion!

Filmulator’s old command-line version prior to the GUI had the ability to merge bracketed HDR, with proper weighted averaging in linear space.

I never ended up implementing the functionality in the GUI version, though. If someone has a good clean UI suggestion for how to do it…

The main issue for the UI is what weights to use. The paper referenced in my original post weights by exposure time, as do a bunch of other HDR papers, so the weights can be taken from the EXIF data.

In the case of multiple files with the same exposure, just give them the same weight.
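Reading the exposure times could look roughly like this; a sketch assuming the Python exifread library, where frames with identical exposure times automatically get identical weights:

```python
import exifread

def exposure_times(paths):
    """Read the exposure time (in seconds) from each file's EXIF data."""
    times = []
    for path in paths:
        with open(path, 'rb') as f:
            tags = exifread.process_file(f, details=False)
        ratio = tags['EXIF ExposureTime'].values[0]   # e.g. the ratio 1/250
        times.append(float(ratio.num) / float(ratio.den))
    return times
```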

Weights are easy to handle without UI; I handled that by sampling the image for the brightness of non-clipped pixels.

I think I may have some UI ideas, anyway.

Note that the paper contains a link to the MATLAB code.

HDR code for creating a high dynamic range image from a stack of nonlinear low dynamic range images captured with different exposure times. The algorithm allows the user to select the reference image; if not specified, the algorithm automatically selects one as the reference image.

INPUT: Stack of non-linear LDR images.
OUTPUT: A single HDR image.

PARAMETERS:
- 'exp0': sets the reference image from the stack (e.g. exp0=5 for setting the 5th image as reference).
- 'clipping': discards too dark/too bright pixels (range 0-255, e.g. clipping = [5 245]).
- 'factor': resizes the original images (to speed up the algorithm in case they are huge); a 2x1 array, where the first element resizes the final result and the second is for computational purposes.
- 'sift' and 'dense': if the images are not registered, choose 'sift'; for dense SIFT, choose 'dense'.
- 'pairwise': in case the stack is too long, this option computes the new linear stack pairwise.
- 'show': displays the HDR image using a Naka-Rushton nonlinearity.

The code that is available from the paper handles the task of trying to undo the gamma correction of nonlinearly processed files.

It isn’t the code for the simpler task of dealing with linearly converted raw files.

I have written my own version in Mathematica to go from linear TIFF, but it is both slow and not really ready for production workflow use.

The exposure merging code still exists, actually! Take a look for reference.

You call that routine once for each additional equal-or-brighter image you want to merge in.

It’s not hooked up to anything at the moment, though.

I haven’t read the code. Just mentioning it for those who might be curious. Are you saying that their code is incomplete, only addressing the first part?

No, I am saying that their code addresses this problem: given LDR pictures that have already been converted with nonlinear processing, how do you make a good HDR file from them?

However, it is far simpler and less error-prone to go directly from raw. That will give much more accurate color, because their fix is only approximate, and it ought to produce fewer artifacts.

So the case they did is just not that interesting to somebody who has RAW files.

There’s a recent thread where we discussed normalizing pano images using Light Value (LV), which is defined:

LV = 2 * log2(Aperture) - log2(ShutterSpeed) - log2(ISO/100) (source: exiftool)

These numbers for each image allow one to calculate the EV compensation required to bring all the images to the same “lightness” for stitching.
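In code form, roughly (my own sketch; the choice of the first frame as reference and the example settings are just for illustration):

```python
import math

def light_value(aperture, shutter_seconds, iso):
    """LV = 2*log2(Aperture) - log2(ShutterSpeed) - log2(ISO/100), per exiftool."""
    return (2 * math.log2(aperture)
            - math.log2(shutter_seconds)
            - math.log2(iso / 100))

# EV offset of each frame relative to the first (reference) frame:
lvs = [light_value(8, 1/250, 100), light_value(8, 1/60, 100)]
ev_offsets = [lv - lvs[0] for lv in lvs]
```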

I can’t find the thread…


I’ve never tried to merge HDR from OOC JPEGs, and I wouldn’t want to, because the processing to make the JPEG can’t be readily reversed. The “gamma” is simple enough, but there are also prettifying functions for saturation increase and sharpening and who knows what else.


@nathanm Yes, the weights are 0 or 1 when you have very different exposures :wink: Indeed, weighted averaging would be a good feature to have (btw, that’s how Google’s HDR+ works), so I would encourage you to request it on the HDRMerge GitHub. There is also another tool with the same name that sounds closer to what you’re after…

@ggbutcher There is also the statistical approach to equalization, since the amount of light actually captured might not always be ‘spot on’ with respect to what the dials say, or the illumination may have fluctuated slightly between burst shots.

For merging multiple images at the same exposure, median blending is IMHO a much better approach than averaging (the link refers to Photoshop, but the technique is general).

There seems to be some debate about median versus mean versus other approaches. Median is clearly the best for eliminating large changes, like moving objects or heavy noise.

A lot of sources claim mean is better at reducing finer grain noise.

The same workflow is needed either way: you need to align the images and then do a pixel-wise median or mean.
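Something like this, as a minimal numpy sketch assuming the frames are already registered and loaded as same-shaped float arrays:

```python
import numpy as np

def blend_stack(imgs, method="mean"):
    stack = np.stack(imgs, axis=0)       # shape: (N, H, W) or (N, H, W, 3)
    if method == "mean":
        return stack.mean(axis=0)        # averages down fine-grained noise
    if method == "median":
        return np.median(stack, axis=0)  # robust to outliers such as moving objects
    raise ValueError(f"unknown method: {method}")
```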

Median is clearly better for removing outlier noise, but the problem is that it leads to posterization.

Just thinking aloud.

An adaptive blending method is likely required, because the right choice depends on what your inputs are, how much has changed between frames (movement of camera and subjects; changes in lighting, camera settings and other gear, etc.), and where the changes are situated locally on the overall image (and in the 3D space of the actual scene).

Research tends to favour video or fast still frames because it is easier to keep track of trends and anomalies that way. Adaptive doesn’t have to mean complex. Simply choosing the right blending method (mean or median, or another robust estimator) is a start.

Theoretically, the brighter exposures should have better statistics, but in terms of structure, detail and highlight clipping this might not always be true. Thus, blending between frames of different exposures is not merely about getting more dynamic range. Ideally, we perform the fusion while dropping data that doesn’t contribute to the quality and ground truth of the scene (in other words, noise and artifacts).

There are a ton of research papers on different blending methods for reducing “ghosting” in HDR images where something moved.

However, the state of the art there seems to be to use patch-based image alignment to do the heavy lifting. Incidentally, that shows that HDR merging and image averaging have a different sort of alignment problem than panorama stitching, where you must work at the frame level rather than the patch level.

I tried a couple of blending methods on my test photo.

This shows blending 16 different 200 x 200 samples of an image, and the effect on SNR.

Now, in this case the noise is just from the camera; the subject is an evenly lit board that is way out of focus.

In this case, mean works best. Trimmed mean here means that for N > 4 it throws out the highest and lowest values and then takes the mean of the rest. Median is the conventional median.

However, in this case the noise is limited in amplitude. Where median comes into its own is when there are a bunch of wild outliers, which is not true in this case.
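For reference, the trimmed mean as described above is essentially this (a numpy sketch, with the N > 4 cutoff mentioned earlier):

```python
import numpy as np

def trimmed_mean(imgs):
    stack = np.stack(imgs, axis=0)
    if stack.shape[0] <= 4:
        return stack.mean(axis=0)        # too few frames to trim anything
    stack = np.sort(stack, axis=0)       # sort per pixel along the stack axis
    return stack[1:-1].mean(axis=0)      # drop the min and max, average the rest
```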

What I meant was that it takes multiple strategies, and which ones to use depends on the image set. We have to be careful about what we do, though; we wouldn’t want to increase complexity needlessly. Trimmed mean is a compromise between mean and median, so it makes sense for it to fall between the two.

BTW, it would be helpful if you gave us a sample image and showed your work (how you arrived at your graphs and analysis) with sidecars, etc. I am sure your method is sound; it’s just that the more context we are given, the easier it is to understand your problem.