CRW_3012.jpg.out.pp3 (11,2 KB)
My try using 3012 only. Rec2020. RT only :).
Here is my first attempt, using a combination of PhotoFlow, align_image_stack and enfuse.
The final result is here:
The processing is a bit over-done on purpose, to show what can be achieved.
My processing steps can be outlined like this:
I prepared a scene-referred starting point by merging the RAW files with HDRMerge and saving a neutral processing in linear Rec.2020
from the scene-referred version I saved 5 images at different exposure values (from -1EV to +3EV with respect to CRW_3012.DNG), which were then processed with
enfuse. The enfuse output looks flat, but provides a compressed dynamic range with limited suppression of the local contrast:
next, I applied an S-shaped curve to re-introduce some global contrast:
at this point I started to separately target the left, right and central portions of the image with opacity masks, adjusting the contrast in each part using RGB curves:
finally, I applied some RL-deconvolution sharpening
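For anyone who wants to play with the idea, the S-shaped curve step above can be sketched in NumPy like this (a hypothetical smoothstep-style curve, not the exact one used in PhotoFlow):

```python
import numpy as np

def s_curve(x, strength=1.0):
    """Smoothstep-style S curve on [0, 1]: darkens the shadows and
    brightens the highlights while leaving the midpoint fixed."""
    s = x * x * (3.0 - 2.0 * x)                 # classic smoothstep
    return (1.0 - strength) * x + strength * s  # blend with identity

flat = np.linspace(0.0, 1.0, 5)
print(s_curve(flat))   # 0, 0.5 and 1 stay fixed; 0.25 drops, 0.75 rises
```

The `strength` parameter lets you blend the curve with the identity, which is a simple way to dose the amount of added global contrast.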
I think that, speaking the OCIO language, the enfuse step can be seen as a particular “view” transform that compresses the global dynamic range while preserving the local contrast. Everything that follows the enfuse output should be considered display-referred processing, I guess.
@Carmelo_DrRaw Really newbie question, but I didn’t see any reference to “linear Rec.2020” color space in HDRMerge. Is it possible to change color spaces in it?
@Carmelo_DrRaw Also, could you elaborate on this?
Regarding the exposure values, you’ve opened the scene-referred version (which, I assume, is a linear Rec.2020 TIFF, the result of opening the HDRMerge DNG in PF and exporting it), then changed the exposure value in the following steps: -2, -1, 0, +1, +2, and exported each one of them as a new linear Rec.2020 TIFF. Is that correct?
And why this enfuse step? Why didn’t you simply start from the HDRMerge output?
No, HDRMerge produces a RAW DNG file, which is still in the camera colorspace and encoded with a Bayer pattern. It is up to the RAW processing software (PhF in this case) to apply the appropriate demosaicing and color conversion.
In this case I did not use any intermediate scene-referred TIFF, I simply opened the HDRMerge output and saved linear Rec.2020 TIFFs at different exposure compensation values.
The HDRMerge output is a RAW image that is exposed like the darkest frame in the sequence (CRW_3012.DNG in this case), but with better details in the shadows that are taken from the brighter exposures.
I saved the following exposure steps from the HDRMerge output: -1, 0, +1, +2, +3.
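Since the data is linear at this point, each exposure step is just a multiplication by a power of two; the five variants can be sketched like this (hypothetical pixel values, NumPy standing in for PhotoFlow’s export):

```python
import numpy as np

# Stand-in for the linear HDRMerge output (any range is legitimate here).
merged = np.array([0.01, 0.18, 1.0, 4.0])

# One variant per exposure step: -1, 0, +1, +2, +3 EV.
# In linear data, an n EV shift is a multiplication by 2**n.
variants = {ev: merged * 2.0 ** ev for ev in (-1, 0, 1, 2, 3)}
print(variants[3])     # +3 EV: every value multiplied by 8
```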
Applying the same exposure compensation to CRW_3012.DNG would have provided similar results, but with much higher noise in the shadows:
HDRMerge output +3EV:
Enfuse applies a multi-level blending of the different exposures in order to achieve a compression of the dynamic range while preserving the local contrast.
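As a rough illustration of the weighting idea (not enfuse’s actual algorithm, which blends the weights on a multi-level Laplacian pyramid), a single-scale sketch of exposure fusion could look like this:

```python
import numpy as np

def fuse(exposures, sigma=0.2):
    """Single-scale sketch of exposure fusion: weight each exposure by
    how close its values are to mid grey, then take the weighted
    average per pixel.  Real enfuse blends the weights on a
    multi-level Laplacian pyramid, which is what preserves the local
    contrast across large transitions."""
    stack = np.stack(exposures)                         # values in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0)                      # normalise per pixel
    return (weights * stack).sum(axis=0)

dark   = np.array([0.02, 0.10, 0.45])   # hypothetical bracketed pixels
bright = np.array([0.30, 0.60, 0.98])
print(fuse([dark, bright]))             # the better-exposed value dominates
```

The single-scale version already shows why the result favours well-exposed pixels, but only the pyramid blending avoids the halos that a naive per-pixel mix produces at hard edges.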
To understand the difference, let’s compare the enfuse output with the result of a simple RGB curve that gives approximately similar brightness in the shadows and highlights.
RGB curve applied to the scene-referred HDRMerge output:
As you notice, the second image is much flatter… in fact, there is no simple curve that can reproduce the enfuse output, because of the enhanced local contrast provided by the latter…
@Carmelo_DrRaw Impressive! Many thanks for your comprehensive explanation.
When exporting the tiffs, would the recommended settings be like these?
If, within a single image, a photosite is fully saturated, the proper approach is to ignore the data contained there, and all debayered values around it. They are non-data. That is, think of it as a 3D coordinate missing one or two of its components.
In terms of scene-referred data, no particular high value carries any special meaning. As such, no clipping should be applied, as the entire range of data is legitimate.
This is also why it is impossible to reconcile scene referred linear manipulations within a display referred model; the data you are looking at is only a portion of the range, and even the values within a portion are likely displayed incorrectly.
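A short sketch with hypothetical values shows why: once scene-referred values above 1.0 are clipped, the ratios between highlights are destroyed and no later exposure move can bring them back:

```python
import numpy as np

scene = np.array([0.5, 2.0, 8.0])     # linear scene-referred; > 1.0 is legitimate
clipped = np.clip(scene, 0.0, 1.0)    # a display-referred clip

print(scene / scene[1])               # highlight ratios preserved: [0.25 1. 4.]
print(clipped / clipped[1])           # after the clip: [0.5 1. 1.] -- ratios gone
print(clipped * 2.0 ** -3)            # a -3 EV move cannot recover the 8.0
```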
@Carmelo_DrRaw I couldn’t reproduce the enfuse results you got.
Your enfuse image has more contrast than mine.
With enfuse default settings:
Could you share the intermediate images that you have used as input for enfuse? That would help to understand where the difference is…
From my side, I used enfuse with default options.
Another try, this time using luminosity masks in PhotoFlow:
enfused_CRW_3008-3012-cw05sw06.pfi (113.1 KB)
I followed almost the same workflow as @Carmelo_DrRaw above, but in the final step I decided to use a modified version of luminosity masks in PF: instead of three luminosity masks, I used nine (like L, LL, LLL, M, …)
This link contains one single TIFF file. In my case, I produced 5 TIFF files spaced by 1EV (from -1EV to 3EV with respect to the exposure value of the HDRMerge output), which I then merged with enfuse.
Hmmm, it shouldn’t; I also used the 5 files you’ve mentioned and was using that link. I guess I left the laptop uploading the files and someone closed the lid…
I can only redo the upload when I get back home, sorry for that
@gadolf asked me to repeat this point here re. rectangular curves with luminosity masks…
I said that I always thought that the more extreme curves are, and especially if they have corners, then the more likely unpleasant artefacts will result. (I appreciate that many tools do not allow you to create a corner)
To which I replied that maybe the Gaussian blur filter on top of the rectangular curve (on the group layer mask) helps to reduce, if not eliminate, those artifacts. At least that was what happened to me during my last edit of this picture. Then it was just a matter of tweaking the blur radius and/or the mask slider.
@gadolf @RawConvert There is actually a substantial difference between blurring and smearing a luminosity mask.
I took the images of this thread as an example, using
CRW_3011.DNG as the source for the luminosity masks, which were used to blend
CRW_3010.DNG (brighter) onto
Here is what is obtained with a sharp mask cutting at 50% luminosity (mask first, blend result next):
The same mask, blurred with a 50px radius:
Notice the appearance of strong halos…
Next, the same mask but with a less sharp curve:
No halos in this case, but a strong flattening of the contrast in the transition regions.
When the smearing of the mask is pushed to the maximum, the transitions are smoother but the flattening of the contrast is extended to the whole image:
That should be compared with the enfuse output:
Notice how the last two images have similar brightness in the highlights and shadows, but the enfuse output has better local contrast in the mid tones…
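The blur/smear distinction can be sketched in 1-D with NumPy (hypothetical ramp and parameters): blurring widens the transition by a fixed number of pixels, regardless of the image content, while smearing widens it by a fixed range of luminosity values:

```python
import numpy as np

luma = np.linspace(0.0, 1.0, 101)     # a smooth 1-D luminosity ramp

# Sharp mask: hard cut at 50% luminosity.
sharp = (luma > 0.5).astype(float)

# "Blurred" mask: the sharp mask averaged spatially (box blur).  The
# transition width is fixed in pixels, independent of the image content,
# which is what produces halos around high-contrast edges.
blurred = np.convolve(sharp, np.ones(11) / 11.0, mode="same")

# "Smeared" mask: the cut softened in the tonal domain.  The transition
# width is fixed in luminosity values, so it follows the image content
# but flattens the contrast inside the transition range.
smeared = np.clip((luma - 0.3) / 0.4, 0.0, 1.0)

print(blurred[50], smeared[50])       # both soften the cut around 50%
```

On a real image the spatial blur crosses object boundaries (hence the halos), while the tonal smear never does; it only trades away contrast within the masked luminosity band.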
@Carmelo_DrRaw So I understand that you advise against using masks like I did. However, I could get some interesting results with those. I pushed the blur radius to its maximum of 20 and didn’t get any noticeable artifacts.
No, I am just pointing to the fact that if you want to compress the dynamic range, enfuse might be a better option.
Luminosity masks can be used for many other purposes, and it is useful to see the difference between blurring and smearing. A combination of the two (blurring the smeared mask) might also be a good compromise…
@Carmelo_DrRaw Ok. Just take note that I used the luminosity masks on the enfuse output, and just to be able to edit only portions of it.
Just to clarify, here’s what I did:
After that, I opened the enfuse output and added all those luminosity masks:
Each luminosity mask (BBB, BB, B, M-, M, M+, L, LL, LLL) has the following structure:
Now the BBB curves:
The B curves: (My BB is wrong, so I skip it for now)
The M- curves:
… and so on, until I reach the right side of the histogram,
So, I believe I’m not using the lum masks to compress the dynamic range, just for the final edits before saving to jpg.
On the other hand, I don’t think I could use so many masks if I were smearing them, because they would overlap each other and the pixels in the intersections would be adjusted multiple times (with whatever adjustment I make on each mask).
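For what it’s worth, a nine-mask ladder can be sketched as overlapping triangular luminosity weights; when the hats sum to one everywhere, the per-mask adjustments blend back together without gaps (a hypothetical NumPy sketch, not PhotoFlow’s actual masks):

```python
import numpy as np

def luminosity_masks(luma, n=9):
    """Split the luminosity range into n overlapping triangular masks,
    a sketch of an L/LL/LLL-style ladder.  Hat functions spaced by
    their own width sum to 1 everywhere, so neighbouring masks share
    the transition zones without double-counting any pixel."""
    centres = np.linspace(0.0, 1.0, n)
    width = centres[1] - centres[0]
    return [np.clip(1.0 - np.abs(luma - c) / width, 0.0, 1.0) for c in centres]

luma = np.linspace(0.0, 1.0, 101)
masks = luminosity_masks(luma)
print(np.allclose(sum(masks), 1.0))   # True: a partition of unity
```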