HDR from single raw?

Here's a little question.

Is it possible to create a reasonable HDR file from a single raw file? The reason I'm asking is that when I was visiting the Isle of Skye last month, I took some bracketed raw exposures of Talisker Bay, with the aim of creating an HDR panorama.

Unfortunately I didn't have my tripod with me at the time, so the bracketed exposures were hand held. Despite trying to align the bracketed images in both Hugin and Luminance HDR, I'm finding that they aren't lining up very well at all; there are quite a lot of artifacts due to moving waves, clouds, etc.

So what I'm curious about is whether it is possible to somehow take a single raw file, set its exposure compensation to -2, 0, then +2, create an individual image for each exposure from the raw, and then use these to create a pseudo HDR? Also, does anyone know of a command-line script (I run Ubuntu 17.04) to automate this?
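Something along these lines is what I have in mind – just a rough sketch, assuming dcraw's -b brightness multiplier can stand in for exposure compensation (the file name is a placeholder):

```bash
#!/bin/bash
# Rough sketch: develop one raw at three "exposures" and fuse them.
# Assumes dcraw and enfuse are installed (both are in the Ubuntu repositories).
RAW=IMGP2646.DNG   # placeholder file name

# -b is a linear brightness multiplier, so 0.25 / 1 / 4 roughly mimics -2 / 0 / +2 EV;
# -W keeps dcraw from auto-brightening, -6 -T writes a 16-bit TIFF.
for ev in 0.25 1 4; do
    dcraw -w -W -q 3 -6 -T -b "$ev" "$RAW"
    mv "${RAW%.*}.tiff" "render_${ev}.tiff"
done

# Blend the three renders into one pseudo-HDR with exposure fusion.
enfuse -o fused.tiff render_*.tiff
```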

Thanks,

Brian

It depends on what you mean by "HDR". There is either the real meaning of an image with high dynamic range, or you could mean a regular image that looks like it has been tonemapped. The latter is possible, and exporting a raw at several exposures as you suggested is regularly done to be able to use stock tonemapping operators to get the desired look. It's not really an HDR though. A proper HDR image obviously cannot be created from a single raw – you can't make up more values than are already in the input file.
If you are willing to spend some time, it might be possible to assemble the different parts of the hand-held stack in GIMP and mask out regions that are over- or under-exposed to get a nice, even exposure without any artifacts.

4 Likes

If the original values come from the same raw file, then it makes no sense to shift the exposure and combine the results into a new HDR image. You cannot create information from nothing. However, a camera sensor generally has a higher bit depth than most output devices, so it is possible to manipulate tone curves in a non-linear fashion to make very dim and very bright signals visible in the same image. The problem is that the dim signals are often also very noisy. Taking several images with different exposures is definitely the better approach.
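To illustrate the tone-curve idea (just a sketch – the file name and the curve parameters are arbitrary examples, not a recipe):

```bash
# Develop the raw to a linear 16-bit TIFF so the full sensor bit depth is preserved
# (-4 = linear 16-bit, -T = TIFF output, -w = camera white balance); writes IMGP2646.tiff
dcraw -w -4 -T IMGP2646.DNG

# Apply a gamma plus a gentle S-curve with ImageMagick to make dim and bright
# signals visible together in one image.
convert IMGP2646.tiff -gamma 2.2 -sigmoidal-contrast 3x40% IMGP2646_tonemapped.tiff
```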

2 Likes

@Brian_Innes I have little experience taking bracketed shots. When I do, I often encounter your problem. Sometimes, it is the subjects that are moving erratically.

What I would say is that, if you have two shots that are not too far off, then you have a chance of making an acceptable HDR image. To elaborate, a quick random web search tells me that HDR TV has 13 or more stops. Say your raws have 7; modern cameras may have more. That means a fusion of just two raws can already get you to one HDR image. Then you can tone map however you please.

Let us know how it goes, or if you like, share some of those raws for the kind folks on this forum to take a look and provide feedback.

1 Like

In addition to what @houz wrote, I'll add that the main purpose of bracketing is to reduce noise. You could just as well tone-map the brightest image that has no clipping in the highlights and shadows where you don't want clipping (it is acceptable for some areas of the photo, such as the sun, to be clipped), but without bracketing the shadows would be noisy.

For examples of tone-mapped images made from a single raw file vs. bracketed exposures, see these:

It is easy to combine hand-held shots and mask out movement using HDRMerge.
Also see this Experimenting with multiple exposures (Hugin, HDRMerge, RT...) - #3 by Morgan_Hardwood
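If you prefer the command line, HDRMerge can also be scripted; here is a minimal sketch (the file names are placeholders – check hdrmerge --help for the exact options in your version):

```bash
# Merge three hand-held raws into a single 32-bit floating-point DNG.
# HDRMerge aligns the frames; ghosted areas can then be brushed out in its GUI.
# -o sets the output name, -b the bits per sample of the output DNG.
hdrmerge -b 32 -o IMGP2646-2648_merged.dng IMGP2646.DNG IMGP2647.DNG IMGP2648.DNG
```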

2 Likes

Hi Brian,

LuminanceHDR has a built-in workflow for single-image tonemapping.
http://luminancehdr.readthedocs.io/en/latest/getting_started/hints/#single-raw-file-workflow

James

3 Likes

Also, do you actually need to produce an HDR file, or are you looking for a means to simply combine a couple of exposures as needed?

For instance, if you can align the major features of your multiple images but find that some things move in ways you don't like (clouds, waves), you could use something like a luminosity mask to combine the relevant sections of each exposure, and then simply paint the mask over the moving areas so that only the exposure that best shows them is used.

  1. Align images.
  2. Create a luminosity mask.
  3. Paint over areas of movement (ghosting is the common term for these artifacts) so that only one of the images is used for that section.

Might be an idea to start with (I’ve done this often in the past).

Tools like LuminanceHDR also have anti-ghosting tools you can use to produce an actual HDR file if that is what you prefer (I believe HDRMerge does as well).

1 Like

I believe the technique you are looking for is called "DRO" (Dynamic Range Optimization), which you can do from a single exposure, rather than "HDR", which always requires bracketed exposures. Some cameras can do DRO as part of their native raw processing (http://www.amateurphotographer.co.uk/technique/camera_skills/how-to-get-the-most-from-your-in-camera-dynamic-range-optimisation-4509), but you can probably achieve something similar with tone curves, luminosity masking, and simulated ND grad filters in post. Look up the tutorial on luminosity masking for darktable as a good starting place: PIXLS.US - Luminosity Masking in darktable
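As a rough illustration of the simulated ND grad idea (only a sketch – the image size and file names are placeholders), ImageMagick can generate a vertical gradient and multiply it over the developed image to pull the sky down:

```bash
# Build a vertical gradient the same size as the photo: dark at the top, white at the bottom.
convert -size 6000x4000 gradient:gray30-white ndgrad.tiff

# Multiply the gradient over the developed TIFF – roughly the effect of a soft ND grad on the sky.
convert photo.tiff ndgrad.tiff -compose multiply -composite photo_ndgrad.tiff
```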

1 Like

It depends to some extent on what you use to blend the 2 exposures, as you're unlikely to want a typical HDR look.

Why do it? It can be an easier way of getting the result you want, and it might even be a better result – no need for highlight recovery and reconstruction etc., and the same at the shadow end.

I use enfuse and its GUI on Linux to do it sometimes. The problem is that I need to guess what the output will look like when the images are blended. This may mean doing the same thing twice. There is also a Windows spin-off of enfuse that isn't free, and I understand it offers more control.
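To cut down on the guesswork a little, enfuse's weights can also be adjusted from the command line; a sketch with illustrative values and placeholder file names:

```bash
# Align the two hand-held exposures first (writes al_0000.tif, al_0001.tif, ...)
align_image_stack -a al_ dark.tif bright.tif

# Fuse them; the weights steer the look – e.g. drop the contrast weight and
# favour well-exposed pixels. The values here are just starting points to tweak.
enfuse --exposure-weight=1 --saturation-weight=0.2 --contrast-weight=0 \
       -o blended.tif al_*.tif
```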

I mentioned the method to someone who is rather new to post-processing on another forum. It can help to some extent in that case, when the changes needed are pretty extreme. I also mentioned the other way: the same 2 exposures again, blended via layers and masks. Masks can be hand painted, but I also provided a link to Pat David's excellent tutorial on obtaining masks from colour channels for B&W photos. These may do the job, or seriously reduce the amount of hand painting needed. I think Pat's page even pleased a bloke who teaches photography, and one or two others. They all use PS too and, believe me, are infuriatingly competent.
I did a stupid example years ago, after deliberately taking a shot into the sun on water, after someone mentioned that this was rather tricky to post-process.

High Dynamic Range aka Fake HDR | Flickr

You can clearly see that I could have brought out more water detail. Sometimes masks will be tricky to produce; blending is then an easier way of getting a result.

The name for me is fake HDR. No one told me to do it; it was just obvious that the method could be useful.
John

1 Like

Thanks everyone for the very informative replies. Lots to be experimenting with! :slight_smile:

@patdavid, I think you may have the best solution. Overall I’m just wanting to get a bit more detail in the landscape, while making the sky look a bit more dramatic.
So what I may do is convert all the bracketed raws to 16-bit TIFFs.

Create separate panoramas for the negative, middle and positive exposure brackets, load these as separate layers in GIMP, then apply masks.
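I'm thinking of something like this for the conversion step (assuming dcraw, and that the extension matches whatever my camera writes):

```bash
# Convert every bracketed raw in the folder to a gamma-corrected 16-bit TIFF
# (-w = camera white balance, -q 3 = best demosaic, -6 = 16-bit, -T = TIFF output)
dcraw -v -w -q 3 -6 -T IMGP*.DNG
```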

@Isaac, that looks like a good tutorial on luminosity masking in darktable – it seems a very powerful tool!

@afre, I've uploaded the set of bracketed exposures which is being most troublesome with camera movement. It'll be interesting to see what people make of this:

IMGP2646-2648.zip (23.7 MB)

Regards,

Brian

1 Like

HDRmerge & Photoflow

Ha ha ha, I'm laughing because I just realized that PhotoFlow is intimately related to the noise of my computer's fan under high load. I know, pure silliness, but still… true.

Well Brian, despite the title you've ended up providing a set of (3) bracketed shots of what seems to be a panorama. I just took it from there and fed them to HDRMerge, which aligned them and created the HDR output; then in PhotoFlow I made a couple of basic adjustments with luma masks and tried my best to avoid nasty halos between the sky and the hill. Just for fan, I mean fun, I threw in a bucket of Provia. All of this nonsense was done in 2 steps, as the split details and the sharpening like to pretend to be tax collectors :stuck_out_tongue:

IMGP2646-2648_A-B.zip (2.9 KB)

 
:ghost::scream::ghost::scream::ghost:
BTW I just noticed that there are a couple of problematic areas in the sky… but it seems to be a merging artifact, so one would just have to fix that (ghost masking / brushing) in HDRMerge, and maybe also apply a bit of noise reduction =)

the bracketed shots

@chroma_ghost did a lot more work on the image.

@Brian_Innes Here is a bare-bones dcraw + align_image_stack + enfuse result, using mostly the default settings. There was no attempt to tackle ghosting, for the sake of comparison. To me, IMGP2647 + IMGP2648 looks the most successful. With a little more effort, I am sure that you can improve on these results.
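For anyone who wants to reproduce something similar, the pipeline amounts to roughly this (a sketch – the flags and file names are illustrative, not the exact ones used):

```bash
# 1. Develop the three raws to 16-bit TIFFs
dcraw -w -q 3 -6 -T IMGP2646.DNG IMGP2647.DNG IMGP2648.DNG

# 2. Align the hand-held frames (writes aligned_0000.tif, aligned_0001.tif, ...)
align_image_stack -a aligned_ IMGP2646.tiff IMGP2647.tiff IMGP2648.tiff

# 3. Exposure-fuse the aligned frames with enfuse's defaults
enfuse -o IMGP2646-2648_fused.tif aligned_*.tif
```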

Final edit, I promise: Sorting through and uploading the correct images can be confusing!

IMGP2646 + IMGP2647 + IMGP2648

IMGP2646 + IMGP2647 only

IMGP2647 + IMGP2648 only

Bonus: IMGP2647 + IMGP2648 only, with strong tone mapping

You can also use something like Hugin's align_image_stack on your images once you've developed them into TIFFs. Hopefully it'll make quick work of getting things aligned for GIMP.

https://patdavid.net/2013/01/focus-stacking-macro-photos-enfuse.html#hugin-align-images

I also neglected to mention in my last post (5x edited :blush:) that if I had done some lens corrections, the ghosting would have probably been less severe.

@patdavid It was the E-PL1 that switched me to m4/3. The camera most people hate initially and then eventually love. My wife bought me an E-P3, a big step up in performance in many areas. I still use it at times. Then came an E-M5 and later an E-M1. I keep the E-M5 as a spare body just in case, and to be honest the E-M1 doesn't really offer a lot more. The plastic kit lens received a lot of criticism, but optically it's a cracker – it has loose parts that can be wobbled about, and nobody checked how firm they were when the camera was switched on. People might break it if they tried too hard.

My only beef with Olympus is the pro lenses. Fast, heavy and expensive. Mostly the heavy aspect, but that, via the f-ratios, also ups the cost. Panasonic lenses can be used as well. That just leaves the macro lens – its 60mm focal length means getting too close at higher magnifications.

The Mk II E-M1 is a bit ouch price-wise for me, and I wonder whether another may come out in the next few years, but unlike some manufacturers they do add features via firmware updates, so it's hard to be sure. Just a feeling from certain reviews.

ImageJ can also be used to stack and align. There is a spin-off called Fiji that some reckon makes this easier to do, but from what I can see it just uses ImageJ plugins. I've also seen descriptions of using Hugin for alignment and ImageJ for stacking. It's a Java app, so it can run on anything. What you get is a pretty powerful image-processing core for scientific use and a graphical interface. People who understand that sort of thing write and publish macros for it. It's hosted on a .gov site and is very widely used in certain scientific imaging areas. Free too.

John

1 Like

Yep, ImageJ is popular with the NASA folks as well, IIRC.

The E-M5 has been a great fun little camera for a long time for me, but I got to play with an XT2 recently (before I rented the 5dmk3) and I can still hear it whispering to me…

Thanks everyone for the replies!

The biggest issue I have been having trying to create HDRs from the bracketed exposures is ghosting in the foreground rocks, as well as in the sky and sea. This was probably due to my hand holding the camera since I didn’t have my tripod with me. That was what led me to think that if I could create a pseudo HDR from a single raw, then it might get around the problems.

However, perhaps I need to look more closely at the anti-ghosting function of Luminance HDR.

It does, however, seem that I will hopefully be able to make something out of the bracketed exposures! :slight_smile:

The Mk II E-M1 is shouting at me, but I'm not listening. They seem to have achieved the same sort of dynamic range at ISO 200 as anyone else, or better.

My son bought a 5D IV. It has wonderful hidden auto features in auto mode, but it doesn't get used much. He also had a 7D (whatever the latest is), which he sold to buy an 80D instead.

I’ve got too much invested in lenses now to switch but this is why it’s tempting.

Screenshot_20170926_194336

:wink: Why chase stops, though, when I can take 2 shots? It gains 1 stop over the Mk I.

John

1 Like

I bought one at the start of the summer (or, more correctly, the University bought one that I get the exclusive use of). It’s an amazing camera, I have to say. I also got the 12-40mm f2.8 Pro lens. Amazing image quality. But, together, it is not smaller than a typical APS-C DSLR, and it is heavy. This is why my trusty little E-M10-ii with the nice Panasonic 20mm pancake remains in my possession, and is not going anywhere. I use the “big gun” when I “go out to take photos” and need the IQ, but I bring the little guy when I’m just out and about and want to have a camera on me… I’ve captured some great shots with both.

PS. I actually only bought the E-M1-ii because I needed it for my fieldwork as an archaeologist. Dust and rain resistance was key, as was increased resolution, and image fidelity. I’ve been using the “high res” mode to take pretty spectacularly detailed images of artifacts and sediment columns.

1 Like

It might be easier to just layer-mask the well-exposed land image with the well-exposed sky image.

1 Like