A passionflower close-up. Tried to max out the detail/sharpness and increase saturation without getting too flashy with RT(*), then exported to JPG at full size, scaled down in Gimp, plus a bit of wavelet sharpening.
Nice photo! I like the pastel colors and composition. It’s a bit too grainy for my taste, though. Did you by any chance also sharpen the smooth parts of the image?
The one thing I’m missing is separation between the foreground and the background. I’m not sure if that’s down to the colors or the contrast, but something to make the flower pop out just a bit more would help, in my opinion.
In the link you provided, great results are described even for a low number of images. I wonder what such a workflow could look like while preserving more than 8 bits of data per colour channel, e.g. with TIFF images, or early in a raw workflow.
@chris some cameras, like my old Pentax K10D, allow you to take several (up to 9 or 12) raw shots and combine them into one in-camera, leading to a very clean result. I never found a real use for it, though; low-ISO shots are clean enough, unless the intention is to fake a long exposure.
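For what it’s worth, the noise benefit of such an in-camera merge is easy to demonstrate: averaging N independent exposures reduces the noise standard deviation by roughly sqrt(N). A minimal NumPy sketch, with a purely synthetic flat scene standing in for real raw data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat gray scene; real raw data would come from the camera.
scene = np.full((64, 64), 100.0)

def simulate_shot(scene, noise_sigma=10.0):
    """One noisy exposure of the same static scene."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# Nine shots, like the K10D's 9-frame in-camera merge.
shots = [simulate_shot(scene) for _ in range(9)]
merged = np.mean(shots, axis=0)

# Noise drops by about sqrt(9) = 3 compared to a single frame.
single_noise = np.std(shots[0] - scene)
merged_noise = np.std(merged - scene)
```

With nine frames the residual noise ends up near a third of a single exposure, which matches how clean those in-camera merges look.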
If you want an example of such an in-camera merge shot, see “pentax_k10d_books_multi-exposure_in-camera” here.
@Morgan_Hardwood, 1. I don’t have a practical application; I just wondered if there’s something available, out of curiosity. It seems that techniques learned from video, here especially optical flow, can help a lot in photography as well, but most video tools are themselves somewhat limited (e.g. in resolution, colour depth, etc.). That said, 2. I’m a Canon shooter and therefore always searching for improved noise reduction techniques.
Of course I have, several times. AFAIR it does not deal with optical flow, and IIRC the same goes for HDRMerge. The nice thing about the technique described in the aforementioned article is that it should be able to deal with comparatively bad conditions, such as parallax errors from hand-held shots.
Every algorithm will fail under some conditions, but I can imagine a reasonably ordered set of operations, each working better in most conditions than its predecessor:

- just a median stack
- alignment + median stack
- alignment + some transformation + median stack
- motion compensation for fixed block sizes + median stack (IIRC that was used in the article)
- motion compensation for variable block sizes + median stack
- motion compensation for variable block sizes + block rotation + median stack
- …

There may be a lot to discover.
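To illustrate the first rung of that ladder: a plain median stack already rejects transient outliers (a passer-by, a hot pixel) as long as they show up in a minority of frames. A toy NumPy sketch, with synthetic frames instead of real exposures:

```python
import numpy as np

rng = np.random.default_rng(1)
base = np.full((32, 32), 50.0)  # hypothetical static scene

frames = []
for i in range(7):
    f = base + rng.normal(0.0, 2.0, base.shape)  # per-frame sensor noise
    f[i, i] = 255.0  # a transient bright "object" in a different spot each frame
    frames.append(f)

# The median ignores the outlier at each position, since it is
# present in only 1 of 7 frames there; mean stacking would not.
stacked = np.median(frames, axis=0)
```

An aligned stack would just warp each frame onto a reference before this step; the later rungs mostly refine how that warp is estimated.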
I wonder if optical flow could even help in panorama stitching, since it would add another degree of freedom. Of course the techniques are already similar in principle, but what I mean is a block-wise, semi-independent treatment of extracted features and their surroundings. In the overlapping region, the relation between matching image details could show some statistical variation, which may help to compensate for parallax errors and/or objects moving between shots.
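As a crude stand-in for such block-wise matching: the translation of one block relative to another can be estimated with phase correlation, and doing that per block gives exactly the kind of semi-independent local motion estimates I mean. A NumPy sketch (integer shifts only, circular boundaries; real stitchers are far more robust):

```python
import numpy as np

def block_shift(a, b):
    """Estimate the integer translation of block b relative to block a
    via phase correlation (peak of the normalized cross-power spectrum)."""
    f = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    f /= np.abs(f) + 1e-12  # keep only the phase information
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the block size wrap around to negative values.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
block = rng.random((32, 32))
moved = np.roll(block, (2, 3), axis=(0, 1))  # simulate local motion
```

Here `block_shift(block, moved)` recovers the (2, 3) displacement; applied per block across the overlap region, the scatter of those per-block estimates is the statistical variation that could flag parallax or moving objects.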