Help with noise in the water

I feel the same as you on many occasions, and this tool helps me sometimes

I tried to guess the distance to the tree and set it as 2 meters.
I also set the focal length and aperture (and, obviously, the camera model).

This is Android Hyperfocal Pro.

EDIT: Set subject distance to 2.7 meters and there you have infinity focus:

Basic scene-referred workflow plus camera input profile for Canon 50D from @ggbutcher’s database, which adds more punch to the yellows and greens.


water_noise.cr2.xmp (10.9 KB)
(darktable 3.2.1)


Nice, this looks like a useful app! Does this mean that if I had been 2.7 meters or more from the tree and had still focused on it, the background (e.g., the bridge, island, etc.) would also have been in focus and sharper? I was definitely closer than 2 meters, so that would explain why it wasn’t focusing to infinity as I had intended.

That’s it.
Download it and play with it, if you don’t want to do the math manually from those dof charts you find on the internet.
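If you’d rather script it than read charts, the usual hyperfocal formula is H = f²/(N·c) + f, where f is the focal length, N the f-number and c the circle of confusion. A minimal sketch (the focal length and aperture here are illustrative, not taken from this photo):

```python
def hyperfocal_mm(focal_mm: float, aperture: float, coc_mm: float = 0.019) -> float:
    """Hyperfocal distance in mm. coc_mm ~0.019 mm is a commonly quoted
    value for Canon APS-C sensors; pick your own for other formats."""
    return focal_mm ** 2 / (aperture * coc_mm) + focal_mm

# Illustrative: 17 mm at f/8 on APS-C
print(f"{hyperfocal_mm(17, 8) / 1000:.2f} m")  # ≈ 1.92 m
```

Focus at or beyond that distance and everything out to infinity stays “acceptably sharp”.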
Thanks to @shreedhar who first mentioned the app to me.

At the magnifications you use, you may be disappointed. You need to understand that hyperfocal means “acceptably sharp”, and that is very much biased towards holding a hard copy and what the human eye can see. Circle of confusion - Wikipedia has details; blowing an image up to 200% on a computer screen is a different situation. Fujifilm cameras have an LCD setting with two depth-of-field scales: one is wide and based on print, and one is very narrow and based on a computer screen.
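For reference, the common rule of thumb puts the circle of confusion at the sensor diagonal divided by 1500, which already bakes in print-viewing assumptions. A quick sketch (the sensor dimensions are the 50D’s APS-C size; the divisor is the conventional one, not anything from this thread):

```python
import math

def coc_mm(sensor_w_mm: float, sensor_h_mm: float, divisor: float = 1500) -> float:
    """Rule-of-thumb circle of confusion: sensor diagonal / 1500.
    The divisor assumes a modest enlargement viewed at normal distance."""
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

print(round(coc_mm(22.3, 14.9), 4))  # ≈ 0.0179 mm
```

Pixel-peeping at 200% effectively demands a much smaller c, which is why DoF calculators look optimistic on a monitor.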

Depth of field is a matter of degrees.

If something is barely in the zone of acceptable focus, that means it’s almost out of focus.

I tend to let the backgrounds fall out of focus in favor of perfect focus for the foreground in this sort of photo.

I haven’t tried it yet, but Spencer Cox at Photography Life has an interesting focus technique, described here:

The link goes into the middle of the article, to the section titled “6. The Double the Distance Method”. Very easy to do…
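The appeal of that method is a neat thin-lens property: if you focus at twice the distance of the nearest thing you care about, the defocus blur on that near object comes out the same as the blur at infinity. A sketch under a simple thin-lens model (focal length, aperture and distances are made up for illustration):

```python
def blur_circle_mm(focal_mm: float, aperture: float,
                   focus_mm: float, subject_mm: float) -> float:
    """Approximate defocus blur-circle diameter for a thin lens."""
    return (focal_mm ** 2 / aperture) * abs(subject_mm - focus_mm) \
        / (subject_mm * (focus_mm - focal_mm))

# Focus at 4 m: the blur at 2 m (half the focus distance) matches infinity.
near = blur_circle_mm(35, 5.6, 4000, 2000)
far = blur_circle_mm(35, 5.6, 4000, 1e12)  # effectively infinity
print(round(near, 4), round(far, 4))  # both ≈ 0.0552 mm
```

Everything between the near object and infinity is blurred less than either endpoint, which is why the technique balances sharpness across the scene.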


I don’t seem to be able to recreate this with the enhanced yellows and greens. When I tried loading the Canon 50D profile into the input color profile module, I don’t see any difference in the colors, just the sky getting darkened (default settings with enhanced color profile on the left, Canon 50D profile on the right):

Is there something else you need to do to use a custom input profile like this besides just selecting it in the input color profile module?

Thanks, this article (and the explanations by @nodal and @CarVac) was helpful for me to better understand hyperfocal distance; I had not given much thought to sharpness throughout the DOF before, but I definitely will now. The double the distance method seems like an easy technique to remember; I’ll have to give it a try!

I don’t know what to say in technical terms.
Please load my xmp, select step 15 (tone eq.) in the history and take a snapshot. Then go up to the input profile step and compare. I can see the yellows and greens pop. But the sky is heavily flattened after that. In the next step, I use the tone equalizer to bring the sky down until the histogram regains contrast (actually, when applying this step, I don’t use the histogram to guide me, I just look at the image).
Note that I’m using the waveform histogram.

EDIT: Wow, strange… I just clicked on the images I uploaded, and the one where the input profile is enabled seems to flatten the yellows and greens. I double-checked in darktable and the punch is there. Maybe it’s the browser’s color management at work here (but, again, I’m not technically savvy).

If you’re not color-managed, the input color profile won’t do anything. It’s the input to a color transform, where the output is defined by either a working profile or the display/output profile. Check to see if you have a working profile enabled, and also a display profile.
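To illustrate why the input profile is only one end of a transform: conceptually, camera RGB goes through the input profile’s matrix to XYZ, then through an output matrix to the working/display space. If no such transform runs (no color management), swapping input profiles changes nothing. In this sketch the camera matrix is purely hypothetical; only the XYZ-to-linear-sRGB matrix is the standard one:

```python
# Hypothetical camera-to-XYZ matrix (a stand-in for an input profile's matrix;
# its rows sum to the D65 white point so camera white maps to display white).
CAM_TO_XYZ = [
    [0.60, 0.25, 0.10],
    [0.25, 0.70, 0.05],
    [0.05, 0.10, 0.94],
]

# Standard XYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

cam_rgb = [0.4, 0.5, 0.2]                # camera-space pixel
xyz = matvec(CAM_TO_XYZ, cam_rgb)        # input profile applied
srgb_linear = matvec(XYZ_TO_SRGB, xyz)   # output side of the transform
```

Change `CAM_TO_XYZ` and `srgb_linear` changes; skip the transform entirely and it can’t matter, which is the “not color-managed” case.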

Took me a bit to get all of this in order; after I did, I wrote this: Article: Color Management in Raw Processing


I just compared the image processed with the Canon 50D matrix and ssf input profiles, and there’s not a lot of discernible difference for this image. In some of the evergreen greens I can detect a shift from the yellow side, but most of the deciduous leaves seem to be yellower shades of green to begin with.

Probably a good illustration of the colorimetric accuracy of well-made matrix profiles; where they suffer is in retaining hue gradation of extreme colors due to the straight-line transform to a place just inside of the destination gamut. Also to consider: If the encoded colors of the original image capture are already within a destination gamut, a colorimetric rendering intent won’t touch them…

water_noise.cr2.xmp (12.8 KB) water_noise_01.cr2.xmp (14.4 KB)

I can see the histogram changing as seen in the screenshots you posted, but I still don’t see a noticeable difference in the yellows and greens. Thanks @ggbutcher for the article and suggestion about color management - I am indeed using colord, and darktable-cmstest shows that I have a profile selected for my monitor. In any event, I appreciate knowing about this profile for the Canon 50D and will try using it with future shots!

Thanks - I appreciate seeing how to use a completely different set of modules than I normally use to process an image! Can you explain how you’re using the lowpass and defringe modules? It looks like the former is used to boost saturation (but why do it this way, with lowpass?) and the latter to reduce color noise?

I’ll admit that I need to spend some time working with the filmic and tone equalizer modules. Us old dogs are sometimes slower at learning new tricks. :grin:

Recently, I watched an older Harry Durgin edit and saw how he used lowpass for saturation.
Go to 21:55.
https://www.youtube.com/watch?v=dYDCYCwPlMY
As for the defringe, I used that for the leafless branches in the foreground on the left. BTW, I can’t remember where, but I recall seeing the defringe and hot pixels modules used for reducing chroma noise on very high ISO images.


In many cases, even a handheld bracket works well with HDRmerge as long as there isn’t much subject motion.

Thanks for the tip on HDRmerge - I had not seen this tool before! I have used other HDR programs like LuminanceHDR and Photomatix in the past, but I don’t like that they require the tonemapping to happen outside of Darktable (since often I can’t create as good of an image with those tools as when I use Darktable), so this seems like a significant improvement since I can then process the DNG normally with Darktable after generating it with HDRmerge.

I took a set of 3 bracketed images (0 EV, +0.67 EV, and +1.33 EV) that I shot handheld, merged them with HDRmerge (using the Darktable plugin), and then processed the resulting DNG and the 0 EV image with the same history stack. The results are below, showing substantially less noise in the shadows with the DNG (left) than the 0 EV image (right):

With the original images, if I wanted to recover that much detail from the tree using the +1.33 EV exposure, I would get significant highlight clipping:

Here’s the full DNG after processing with the scene-referred workflow:

Given this experience, I feel confident that I can shoot bracketed shots handheld (in situations where there is enough light) and use this method to create a DNG RAW with greater dynamic range; this is great!
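The arithmetic behind that bracket is simple: each EV doubles the light, so the +1.33 EV frame collects about 2.5× the photons in the shadows, which is where the cleaner shadow detail in the merged DNG comes from. A quick sketch (the EV values are the ones from my bracket):

```python
evs = [0.0, 0.67, 1.33]
ratios = [2 ** ev for ev in evs]  # linear exposure relative to the base frame
for ev, r in zip(evs, ratios):
    print(f"+{ev:.2f} EV -> {r:.2f}x light")
# +1.33 EV gives ~2.51x the light: shadow photon counts (and so SNR) rise
# accordingly, at the cost of highlights clipping ~1.33 stops sooner.
```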


I thought your photo presented some interesting material to work with.

_1080087_06.RW2.xmp (29.6 KB) dt3.3

The original is wonky by 2~2.3 degrees, has barrel distortion, and shows a little bit of the lens hood at minimum focus and the widest angle (only with open-source raw processing, though; it’s corrected in the internal JPEG). It has noise that I don’t find distracting but that can be removed really well using RT’s wavelets tool, so as a play raw “learning” photo it’s quite good.

I think the point I was attempting to make was that at 200% every image can be nitpicked, and probably something about just going out and taking pictures, even in one’s own back garden.