[Play Raw] Luminosity masks

I do that too, but the thing is there are more ways to stabilize the camera than using a tripod. Be creative! I haven’t examined your raws yet, but I usually find the first and second frames have the largest deviation, because pressing the shutter button can move the camera.

One thing I just remembered from your shots in the last PlayRaw: the exposure of each new bracket set is taken from the previous set. Before you take the next bracket, make sure you readjust your exposure; that way you don’t accidentally make each set progressively darker. But it looks like this PlayRaw is just one set of 5.

@yteaot Yours is my favourite so far. It is less fake-HDR than your recent entries, more natural, which I like. The ghosting is mostly gone as well, especially where the trees are, though on the left side we see some double-imaged cables, etc.

Yeah, I know, but I remember clearly that I was in a hurry that day, the sun was cooking me, and I was a bit concerned about safety, because that spot is not the safest one. Besides, at the time I was reading about the advantages of stacking not-perfectly-aligned images to get rid of noise (I know, the misalignment shouldn’t be that big, only a couple of pixels), so I really relaxed about stabilizing.

No, I don’t think so. Actually the bracketing is accomplished by a CHDK Lua script, where you set up how many bracket exposures you want, the EV amount between brackets, and the number of steps. It was coded by me (oneaty) based on another one, aimed at bracketing and stacking. Each time the loop repeats, the camera starts from the 0EV exposure.
From this specific session I have a couple more sets, but they’re all badly misaligned so I left them out of this post.

lol … while I was trying to decrypt your statement, Harry Dean Stanton crossed the street. Or was it Trump?

(Forget about the flag)

That’s Harry all right, he’s on his way to buy some milk ,-)
:brazil: Forgotten… tropicalia alley dreaming of progress :corn:

Two funny presidents mentioning smart rockets? (This actually does exist.) Sorry.

Hi! This thread looked like a good opportunity to prepare another sample of scene-referred editing, so I played with the files too :slight_smile:

I used dcraw to produce linear TIFFs as discussed in this thread, then produced a simple stack of three exposures (0EV, +2EV, -2EV) using Blender’s nodal compositor.
The result is a scene-referred image with wide dynamic range, so you have a lot of room to apply any creative grading afterwards.

dcraw -T -4 -w -n 100 -q 3 [raw file]

It may look scary for people not used to nodal compositing, but it’s actually quite simple. If anyone is interested I can upload the blend file.

This is the stacked image through the Filmic Blender view with medium-high contrast. (the output of the big frame on the left)

This one has a little colour grading. Only three operations (curves, colour balance and saturation correction).

I hope you like it

EDIT And now the blend file:
Stacking-NodeTree.blend (531.8 KB)

…And now the link to the tiff files too.
The .blend file included in the compressed file with the images has a little bug I just fixed. Use the .blend file attached above.

I kept playing with the file and added a few more nodes. Now it has some unsharp-mask and clarity/local-contrast. Those operations were re-created using the existing compositing operations. For those interested, selecting the node and pressing tab enters the group and exposes the operations inside.

Stacking-NodeTree.blend (596.9 KB) Added filters


A .blend file would be awesome. :slight_smile: I’ve just modified the forum uploads to allow them.

Hi @patdavid!
The blend file I just uploaded points to the three TIFF files created from the DNGs with dcraw.
They are rather huge. Should I upload them too? (I’m uploading them to my drive anyway)

Oh, and btw, you can use the same blend file for stacking your own shots too. It’s configured for stacking only three shots (0EV, +2EV, -2EV exposures), but it can easily be modified for different values.
It only requires that the three input files are linear Rec.709 TIFFs produced with dcraw -T -4 from your RAWs.
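
If you want to run it without opening the Blender UI, the whole prep can also be done from a terminal. This is just a sketch (the bracketed placeholders are mine, and the headless render line assumes the three Image nodes in the .blend already point at your TIFFs):

dcraw -T -4 [0EV raw] [+2EV raw] [-2EV raw]
blender -b Stacking-NodeTree.blend -o //stacked_ -F TIFF -x 1 -f 1

Here -b runs Blender in the background, -o and -F set the output path and format, and -f 1 renders frame 1 through the compositor.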

@gez I don’t think that is necessary, as long as we know how to reproduce them. Maybe add the dcraw command in the actual post. Thanks for sharing something different.

@afre it’s all in the other thread I linked above.
Basically it’s using dcraw with the parameters -T -4 on your raw files.
I also use -w (camera white balance) and -n 100 (some noise reduction); -q 3 gave me good results, though I’m not sure whether that depends on the source files or not.
Trust the dcraw man page more than me. :slight_smile:
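
Putting it all together, the full command is the one from my earlier post:

dcraw -T -4 -w -n 100 -q 3 [raw file]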

CRW_3012.jpg.out.pp3 (11,2 KB)

My try using 3012 only. Rec2020. RT only :).

Here is my first attempt, using a combination of PhotoFlow, align_image_stack and enfuse.

The final result is here:

The processing is a bit over-done on purpose, to show what can be achieved.

My processing steps can be outlined like this:

  • I prepared a scene-referred starting point by merging the RAW files with HDRMerge and saving a neutral processing in linear Rec.2020

  • from the scene-referred version I saved 5 images at different exposure values (from -1EV to +3EV with respect to CRW_3012.DNG), which were then processed with enfuse (see the command sketch after this list). The enfuse output looks flat, but provides a compressed dynamic range with limited suppression of the local contrast:

  • next, I applied an S-shaped curve to re-introduce some global contrast:

  • at this point I started to separately target the left, right and central portions of the image with opacity masks, adjusting the contrast in each part using RGB curves:

  • finally, I applied some RL-deconvolution sharpening

I think that, in OCIO terms, the enfuse step can be seen as a particular “view” transform that compresses the global dynamic range while preserving the local contrast. Everything that follows the enfuse output should be considered display-referred processing, I guess.
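
For reference, the align_image_stack / enfuse part of the pipeline is just two commands, roughly like this (the bracketed placeholder stands for the five exposure TIFFs; the alignment step is actually optional here, since all of them come from the same HDRMerge output):

align_image_stack -a aligned_ [exposure TIFFs]
enfuse -o fused.tif aligned_*.tif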


@Carmelo_DrRaw Really newbie question, but I didn’t see any reference to “linear Rec.2020” color space in HDRMerge. Is it possible to change color spaces in it?

@Carmelo_DrRaw Also, could you elaborate on this?

Regarding the exposure values, you’ve opened the scene-referred version (which, I assume, is a linear Rec.2020 TIFF, the result of opening the HDRMerge DNG in PF and exporting it), then changed the exposure value to the following steps: -2, -1, 0, +1, +2, and exported each one of them as a new linear Rec.2020 TIFF. Is that correct?

And why this enfuse step? Why didn’t you simply start from the HDRMerge output?

No, HDRMerge produces a RAW DNG file, which is still in the camera colorspace and encoded with a Bayer pattern. It is up to the RAW processing software (PhF in this case) to apply the appropriate demosaicing and color conversion.

In this case I did not use any intermediate scene-referred TIFF, I simply opened the HDRMerge output and saved linear Rec.2020 TIFFs at different exposure compensation values.
The HDRMerge output is a RAW image that is exposed like the darkest frame in the sequence (CRW_3012.DNG in this case), but with better details in the shadows that are taken from the brighter exposures.
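
By the way, HDRMerge can also be run from the command line. If I remember correctly, the invocation is something like the following, but check hdrmerge --help, since I usually drive it from the GUI:

hdrmerge -o merged.dng [bracketed DNG files]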

I saved the following exposure steps from the HDRMerge output: -1, 0, +1, +2, +3.
Applying the same exposure compensation to CRW_3012.DNG would have provided similar results, but with much higher noise in the shadows:

CRW_3012.DNG +3EV:

HDRMerge output +3EV:

Enfuse applies a multi-level blending of the different exposures in order to achieve a compression of the dynamic range while preserving the local contrast.
To understand the difference, let’s compare the enfuse output with the result of a simple RGB curve that approximately gives similar brightness in the shadows and highlights.

enfuse output:

RGB curve applied to the scene-referred HDRMerge output:

As you can see, the second image is much flatter… in fact, there is no simple curve that can reproduce the enfuse output, because of the enhanced local contrast provided by the latter…


@Carmelo_DrRaw Impressive! Many thanks for your comprehensive explanation.

When exporting the tiffs, would the recommended settings be like these?


As for the clip overflow values and clip negative values options, I saw you’ve already made a recommendation here. Does it still apply? (clip negatives, don’t clip overflow)
And for the exposure increase, should I clip the highlights? I’d risk saying no, because at this point of the workflow we’re just gathering as much data as we can, with the best quality we can extract from the raws; in other words, selecting the best pixels from a set of images. Would this be what scene-referred means?

If, within a single image, a photosite is fully saturated, the proper approach is to ignore the data contained there, and all debayered values around it. They are non-data. That is, think of it as a 3D coordinate that is missing one or two of its components.

In terms of scene-referred data, no particular value has any special meaning as a maximum. As such, no clipping should be applied, as the entire range of data is legitimate.

This is also why it is impossible to reconcile scene referred linear manipulations within a display referred model; the data you are looking at is only a portion of the range, and even the values within a portion are likely displayed incorrectly.


@Carmelo_DrRaw I couldn’t reproduce the enfuse results you got.
Your enfuse image has more contrast than mine.
With enfuse default settings:


I tweaked the weight parameters, but the best I could get was eliminating the clipped highlights. The contrast, however, didn’t change:
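
For reference, this is the kind of thing I tried (the weights here are just an example; if I remember correctly, enfuse’s defaults are --exposure-weight=1, --saturation-weight=0.2 and --contrast-weight=0):

enfuse --exposure-weight=1 --saturation-weight=0 --contrast-weight=0.3 -o fused.tif [input TIFFs]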

Could you share the intermediate images that you used as input for enfuse? That would help me understand where the difference is…

From my side, I used enfuse with default options.

@Carmelo_DrRaw The CRW*.tif in this link: https://filebin.net/sg3t4un5w6823n0o (still uploading as of now)