A .blend file would be awesome. I’ve just modified the forum uploads to allow them.
The blend file I just uploaded points to the three TIFF files created from the DNGs with dcraw.
They are rather huge. Should I upload them too? (I’m uploading them to my drive anyway)
Oh, and btw, you can use the same blend file for stacking your own shots too. It’s configured for stacking only three shots (0EV, +2EV and -2EV exposures), but it can easily be modified for different values.
It only requires that the three input files are linear rec.709 tiffs produced with dcraw -T -4 from your RAWs.
@gez I don’t think that is necessary. As long as we know how to reproduce them. Maybe add the dcraw command in the actual post. Thanks for sharing something different.
@afre it’s all in the other thread I linked above.
Basically it’s using dcraw with the parameters -T -4 on your raw files.
I also use -w (camera white balance) and -n 100 (some noise reduction); -q 3 gave me good results too, though I’m not sure whether that depends on the source files.
Trust the dcraw man page more than me.
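For convenience, the conversion of the three raws could be scripted like this (a sketch only: the flags are the ones discussed above, while the file-presence guard and the Python wrapper are my own illustration):

```python
import os
import shutil
import subprocess

def dcraw_cmd(raw_path):
    # -T: TIFF output, -4: linear 16-bit, -w: camera white balance,
    # -n 100: wavelet denoising, -q 3: high-quality (AHD) demosaicing
    return ["dcraw", "-T", "-4", "-w", "-n", "100", "-q", "3", raw_path]

raws = ["CRW_3008.DNG", "CRW_3010.DNG", "CRW_3012.DNG"]
# Only run if dcraw is installed and the files are actually present.
if shutil.which("dcraw") and all(os.path.exists(r) for r in raws):
    for raw in raws:
        subprocess.run(dcraw_cmd(raw), check=True)
```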
CRW_3012.jpg.out.pp3 (11,2 KB)
My try using 3012 only. Rec2020. RT only :).
Here is my first attempt, using a combination of PhotoFlow, align_image_stack and enfuse.
The final result is here:
The processing is a bit over-done on purpose, to show what can be achieved.
My processing steps can be outlined like this:
I prepared a scene-referred starting point by merging the RAW files with HDRMerge and saving a neutral processing in linear Rec.2020
from the scene-referred version I saved 5 images at different exposure values (from -1EV to +3EV with respect to CRW_3012.DNG), which were then processed with
enfuse. The enfuse output looks flat, but provides a compressed dynamic range with limited suppression of the local contrast:
next, I applied an S-shaped curve to re-introduce some global contrast:
at this point I started to separately target the left, right and central portions of the image with opacity masks, adjusting the contrast in each part using RGB curves:
finally, I applied some RL-deconvolution sharpening
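Scripted, the fusion stage of the steps above could look roughly like this (a sketch: the intermediate file names are hypothetical, and I am only building standard align_image_stack/enfuse command lines with default options):

```python
# Sketch of the fusion stage. File names are hypothetical; both tools
# are invoked with their default options, as described above.
evs = [-1, 0, 1, 2, 3]
tiffs = [f"merged_ev{ev:+d}.tif" for ev in evs]

# align_image_stack -a PREFIX writes aligned_0000.tif, aligned_0001.tif, ...
align_cmd = ["align_image_stack", "-a", "aligned_", *tiffs]

# enfuse blends the aligned frames into a single fused image.
aligned = [f"aligned_{i:04d}.tif" for i in range(len(tiffs))]
enfuse_cmd = ["enfuse", "-o", "fused.tif", *aligned]
print(" ".join(align_cmd))
print(" ".join(enfuse_cmd))
```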
I think that, speaking the OCIO language, the
enfuse step can be seen as a particular “view” transform that compresses the global dynamic range while preserving the local contrast. Everything that follows the enfuse output should be considered display-referred processing, I guess.
@Carmelo_DrRaw Really newbie question, but I didn’t see any reference to “linear Rec.2020” color space in HDRMerge. Is it possible to change color spaces in it?
@Carmelo_DrRaw Also, could you elaborate on this?
Regarding the exposure values, you’ve opened the scene-referred version (which, I assume, is a linear Rec.2020 tif resulting from opening the hdrmerge dng in PF and exporting it), then changed the exposure value in the following steps: -2, -1, 0, +1, +2, exporting each one as a new linear Rec.2020 tif. Is that correct?
And why the enfuse step? Why didn’t you simply start from the HDRMerge output?
No, HDRMerge produces a RAW DNG file, which is still in the camera colorspace and encoded with a Bayer pattern. It is up to the RAW processing software (PhF in this case) to apply the appropriate demosaicing and color conversion.
In this case I did not use any intermediate scene-referred TIFF, I simply opened the HDRMerge output and saved linear Rec.2020 TIFFs at different exposure compensation values.
The HDRMerge output is a RAW image that is exposed like the darkest frame in the sequence (CRW_3012.DNG in this case), but with better details in the shadows that are taken from the brighter exposures.
I saved the following exposure steps from the HDRMerge output: -1, 0, +1, +2, +3.
Applying the same exposure compensation to CRW_3012.DNG would have provided similar results, but with much higher noise in the shadows:
HDRMerge output +3EV:
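As an aside, exposure compensation on scene-referred linear data is just a multiplication: each stop doubles (or halves) the values. A minimal numpy sketch (my own illustration, not PhotoFlow’s code):

```python
import numpy as np

def apply_ev(linear_rgb, ev):
    # On linear data, +1EV doubles every value and -1EV halves it;
    # nothing is clipped, so the full range stays available.
    return linear_rgb * (2.0 ** ev)

pixel = np.array([0.02, 0.05, 0.10])  # illustrative linear RGB values
print(apply_ev(pixel, 3))             # +3EV -> 8x brighter
```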
Enfuse applies a multi-level blending of the different exposures in order to achieve a compression of the dynamic range while preserving the local contrast.
To understand the difference, let’s compare the enfuse output with the result of a simple RGB curve that gives approximately similar brightness in the shadows and highlights.
RGB curve applied to the scene-referred HDRMerge output:
As you notice, the second image is much flatter… in fact, there is no simple curve that can reproduce the enfuse output, because of the enhanced local contrast provided by the latter…
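The reason a global curve can’t reproduce it: exposure fusion weights each pixel of each input by how well exposed it is, so the output is not a function of the input value alone. A heavily simplified per-pixel sketch (my illustration; real enfuse blends the weighted inputs in a multi-resolution pyramid, which this omits):

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring mid-tone values (around 0.5), as in
    # Mertens-style exposure fusion; sigma = 0.2 is a common default.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    # Per-pixel weighted average of the exposure stack. Real enfuse
    # blends the weighted images in a Laplacian pyramid instead, which
    # is what preserves local contrast across large regions.
    weights = np.stack([well_exposedness(e) for e in exposures])
    weights /= weights.sum(axis=0)
    return (weights * np.stack(exposures)).sum(axis=0)

dark = np.array([0.05, 0.10, 0.45])
bright = np.array([0.40, 0.80, 1.00])
print(fuse([dark, bright]))
```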
@Carmelo_DrRaw Impressive! Many thanks for your comprehensive explanation.
When exporting the tiffs, would the recommended settings be like these?
As for the clip overflow values and clip negative values, I saw you’ve already made a recommendation here. Is it still effective? (clip negatives, don’t clip overflow)
And for exposure increase, should I clip the highlights? I’d risk saying no, because at this point in the workflow we’re just gathering as much data as we can, at the best quality we can extract from the raws; in other words, selecting the best pixels from a set of images. Would this be what scene-referred means?
If, within a single image, a photosite is fully saturated, the proper approach is to ignore the data contained there, and all debayered values around it. They are non-data. That is, think of it as a 3D coordinate missing one or two of its components.
In terms of scene-referred data, no particular value has special meaning as a maximum. As such, no clipping should be applied, since the entire range of data is legitimate.
This is also why it is impossible to reconcile scene-referred linear manipulations within a display-referred model; the data you are looking at is only a portion of the range, and even the values within that portion are likely displayed incorrectly.
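In code, treating saturated photosites as non-data rather than clamping them could look like this (my illustration; the normalized white level is an assumption):

```python
import numpy as np

WHITE_LEVEL = 1.0  # assumed normalized sensor saturation point

def mask_saturated(raw):
    # Mark fully saturated photosites as NaN ("non-data") instead of
    # clipping them; downstream merging can then ignore those samples.
    out = raw.astype(float)
    out[out >= WHITE_LEVEL] = np.nan
    return out

samples = np.array([0.2, 0.7, 1.0])
print(mask_saturated(samples))
```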
@Carmelo_DrRaw I couldn’t reproduce the enfuse results you got.
Your enfuse image has more contrast than mine.
With enfuse default settings:
I tweaked the weight parameters, but the best I could get was eliminating the clipped highlights. Contrast, however, didn’t change:
Could you share the intermediate images that you used as input for enfuse? That would help me understand where the difference is…
On my side, I used enfuse with default options.
Another try, this time using luminosity masks in PhotoFlow.
enfused_CRW_3008-3012-cw05sw06.pfi (113.1 KB)
I did almost the same workflow as @Carmelo_DrRaw above, but on the final step I decided to use a modified version of luminosity masks in PF: instead of three luminosity masks, I used nine (like L, LL, LLL, M, …)
This link contains one single TIFF file. In my case, I produced 5 TIFF files spaced by 1EV (from -1EV to 3EV with respect to the exposure value of the HDRMerge output), which I then merged with enfuse.
Hmmm, it shouldn’t; I also used the 5 files you mentioned and was using that link. I guess I left the laptop uploading the files and someone closed the lid…
I can only redo the upload when I get back home, sorry for that
@gadolf asked me to repeat this point here re. rectangular curves with luminosity masks…
I said that I always thought that the more extreme curves are, and especially if they have corners, then the more likely unpleasant artefacts will result. (I appreciate that many tools do not allow you to create a corner)
To which I replied to @RawConvert that maybe a Gaussian blur filter on top of the rectangular curve (on the group layer mask) helps reduce, if not eliminate, those artifacts. At least that’s what happened to me during my last edit of this picture. Then it was just a matter of tweaking the blur radius and/or the mask slider.
@gadolf @RawConvert There is actually a substantial difference between blurring and smearing a luminosity mask.
I took the images of this thread as an example, using
CRW_3011.DNG as the source for the luminosity masks, which were used to blend
CRW_3010.DNG (brighter) onto
Here is what is obtained with a sharp mask cutting at 50% luminosity (mask first, blend result next):
The same mask, blurred with a 50px radius:
Notice the appearance of strong halos…
Next, the same mask but with a less sharp curve:
No halos in this case, but a strong flattening of the contrast in the transition regions.
When the smearing of the mask is pushed to the maximum, the transitions are smoother but the flattening of the contrast is extended to the whole image:
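For what it’s worth, the blend itself is the same per-pixel mix in all of these cases; only the spatial shape of the mask changes. A toy numpy sketch (my own illustration):

```python
import numpy as np

def blend(dark, bright, mask):
    # Per-pixel mix: mask = 1 takes the bright exposure, mask = 0 the dark one.
    return mask * bright + (1.0 - mask) * dark

luma = np.linspace(0.0, 1.0, 5)             # toy 1-D "image"
sharp_mask = (luma < 0.5).astype(float)     # hard 50% luminosity cut
soft_mask = np.clip(1.0 - luma, 0.0, 1.0)   # smeared version of the same cut
dark, bright = luma, luma * 2.0
print(blend(dark, bright, sharp_mask))
print(blend(dark, bright, soft_mask))
```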
That should be compared with the enfuse output:
Notice how the last two images have similar brightness in the highlights and shadows, but the enfuse output has better local contrast in the mid tones…