ISO-based denoise presets useless for ISO-invariant cameras

When exploiting the full potential of an ISO-invariant sensor with darktable’s scene-referred workflow, the denoise profiles based on ISO are, most of the time, off by at least one stop.

Shooting ISO-invariant means you set your exposure to the lowest ISO and the exposure time that retains your maximum (expected) highlights, and leave it at that. Or you set your camera to auto-exposure with at least -1 EV of compensation, but usually more, at -1.7 or -2 stops.

Any ideas how to handle this with hundreds of images?

e.g. reportage, wedding receptions etc. - scenarios where you do not want to and/or cannot look at each image in super detail but still need maximum quality.

I don’t get what is useless about an ISO preset?

ISO-invariant means that you can freely choose between shooting ISO 1600 at ‘correct exposure’, or ISO 100 at -4 EV and raising exposure in post.

It means the shadows are clean enough (AT BASE ISO!!!) to raise in post vs using a higher ISO in the field (and in doing so saving more highlights).

This all has nothing to do with how much noise is generated (and profiled) at certain ISOs?

A profile for ISO 100 will try to remove just the noise floor, nothing more. Whether you leave exposure as shot or raise it by 4 EV doesn’t change the noise added at that specific ISO, right?

Maybe a profile is a bit too heavy-handed in the shadows (which you want to raise so they’re not shadows anymore), but there is a shadow bias slider for that… And it basically means the ISO profile is a bit too crude.

TL;DR: an ISO-invariant sensor has nothing to do with profiling how much noise it generates at certain ISOs.

That is only true if you do not change the exposure in post.

Staying with your example:
An image shot at ISO 1600 and one shot at ISO 100 but pushed +4 EV in post will have the same noise - the noise that an ISO 1600 profile would remove most of.

Using the ISO 100 profile is pretty much useless and makes no discernible difference in the look of the image.
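
A toy model of that claim (my own sketch, nothing from darktable; it just encodes the assumption that on an ISO-invariant sensor the noise added after the gain stage is negligible):

```python
# Toy model: amplify by 16x (4 EV) in camera vs. in post.
# Assumption: for an ISO-invariant sensor, the noise added AFTER the
# gain stage is ~zero, so both routes give the same result.
import numpy as np

rng = np.random.default_rng(0)
photons = rng.poisson(lam=50, size=1_000_000).astype(float)  # shot noise
pre_gain = rng.normal(0, 1.5, photons.shape)   # read noise before the gain stage
post_gain = rng.normal(0, 0.1, photons.shape)  # ~0 on an ISO-invariant sensor

iso1600 = (photons + pre_gain) * 16 + post_gain          # amplified in camera
iso100_pushed = ((photons + pre_gain) + post_gain) * 16  # amplified in post (+4 EV)

print(np.std(iso1600), np.std(iso100_pushed))  # nearly identical here;
# increase the post_gain sigma and the in-post push becomes visibly noisier
```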

Are you sure it’s the same type of noise?
Increasing ISO in-camera means analog amplification, does it not?
Underexposing by 4 EV and raising exposure by 4 EV in post-processing means discarding 4 bits of information (for a 14-bit sensor, a value that would be recorded as 10101010101010 without underexposing is recorded as 00001010101010, boosted to 10101010100000 via the exposure adjustment). That would introduce quantisation noise, which, I believe, is different in nature from sensor noise.
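
To spell out that bit arithmetic (a quick sketch with the same made-up value):

```python
# 14-bit sensor value from the example above.
full = 0b10101010101010     # recorded without underexposing
under = full >> 4           # -4 EV: signal is 16x weaker, low 4 bits lost
boosted = under << 4        # +4 EV in post: multiplied back by 16

print(f"{full:014b}")       # 10101010101010
print(f"{under:014b}")      # 00001010101010
print(f"{boosted:014b}")    # 10101010100000  (trailing bits are gone)
```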


But that wouldn’t be the case if your underexposed scene still fits within the sensor’s dynamic range, right? In that scenario you’d still have the same range of data, just shifted to the left of the histogram.

If anything, I’d think the downside is that underexposing pushes the shadows into the low end, increasing the impact of noise, but that’s different from quantization.

Manually choosing a profile for a correspondingly higher ISO looks good.

And given the way ISO invariance works, it should be the same. It might be a little different because in one version the camera pushes the exposure (via ISO) and in the other the software does it. From my experience so far, pushing in darktable with the scene-referred workflow is way better.

To speed up the process of selecting the “right” ISO preset for your shots, you could create some darktable styles where, let’s say, the ISO 400 preset correction is applied (called “400iso denoise”), another “1600iso denoise”, and so on, and bulk-apply the styles to sessions you know were shot at a given ISO.

In most cameras, no. Check PhotonsToPhotos for specifics.

OP:
I think there is an important distinction in your workflow. If you are using the camera metering to determine the shot exposure (shutter speed and aperture), then selecting a higher ISO in the camera will negatively impact the data captured by the sensor. Why? At the higher ISO, the camera will meter and suggest a speed/aperture that allows less energy (light × time) to reach the sensor. Therefore you will be capturing less information (less energy reaching the sensor) from the shadow areas of the image. So keep the ISO in the camera low to capture more energy, even if the image ends up looking “underexposed” in camera; you can then use the software to amplify the captured signal.

PS: in other words, how/where you meter matters. Try to get as much energy as possible without clipping the sensor signal.

I thought darktable’s profiled denoise ran prior to exposure compensation? (That’s what I remember; it has been a while.)

If the denoise happens before exposure compensation in the pipeline, then the assertion remains true even if you do change exposure.

This is not true for the majority of cameras on the market, since even ones that are ISO-invariant USUALLY have a dual-gain sensor. Pretty much any Sony newer than the A7M2 or so has a dual-gain configuration. ISO 1600 is above the dual-gain cut-in for nearly every camera on the market that has dual gain.

Can you point me at some specific article there? There are quite a few. 🙂
And, of course, I meant ISO at the raw level (ignoring ISO values implemented using the same base exposure with different in-camera tone curves applying different levels of digital amplification).

I think this is the article that applies most to this conversation. Most current cameras have digital gain or a different scheme (e.g. dual gain). It gets really complex and I have probably misunderstood half of what that website says. Of all the graphs on that site, the Photographic Dynamic Range Shadow Improvement chart is the one I look at to identify which camera ISO I should use.

https://www.photonstophotos.net/GeneralTopics/Sensors_&_Raw/Sensor_Analysis_Primer/Photographic_Dynamic_Range_Shadow_Improvement.htm


I have not tested against that, but in theory that would complicate things even more.

Let’s assume a dual-gain sensor with ISO 100 and 800 as its bases.
Then ISO 100 +4 EV and ISO 800 +1 EV could really look different.
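
Back-of-the-envelope (the bases are hypothetical, this is just the arithmetic): both routes land at the same effective exposure index, they just start from different gain stages.

```python
def effective_iso(base_iso, ev_push):
    """Each EV of push doubles the effective exposure index."""
    return base_iso * 2 ** ev_push

print(effective_iso(100, 4))  # 1600, low-gain stage pushed hard
print(effective_iso(800, 1))  # 1600, high-gain stage pushed gently
```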

On my Nikon D500 I have not really seen any relevant differences … yet. =)

That is currently my only idea for applying some preset,
but it would have to be based on ISO + EV offset.

I just have no idea how to do that efficiently en masse.

You have a dual-gain cut-in at ISO 400:
https://www.photonstophotos.net/Charts/RN_e.htm#Nikon%20D500_14

You definitely should not expose at ISO 100 if you have insufficient light to the point where you want to go above 400, as you don’t want that extra read noise from the additional capacitor if you don’t need it.

Similarly, per the shadow improvement chart linked by @g-man: Shadow Improvement of Photographic Dynamic Range versus ISO Setting

Definite improvement when you jump past ISO 400

As to how that interacts with profiled denoise, I need to do some extra thinking. It may indeed be that denoise happening before exposure compensation is exactly the problem here, since the shot noise model probably changes with exposure compensation.

As to the issue of quantization noise - at this point I believe most cameras are now bound by read noise and shot noise, and quantization noise will not be a problem even with significant underexposure. That said, per Mr. Claff, you shouldn’t be dropping below 400 unless you have so much light that you are clipping things of interest at 400, since your read noise increases past there.


+1. I would keep this camera at ISO 400 in most cases.

Presets won’t work, as you don’t have a criterion to decide which one to apply (ISO is available in the metadata, the EV offset isn’t).
But you can apply styles to selections of multiple images, and perhaps you can eyeball the required EV correction in the lighttable view (and thus decide which style to apply).

But do you really need to apply denoise while still working with hundreds of images for an event?
Are you really going to present all the images, or only a selection of, say, 100 (which, as a viewer, I usually find more than enough)? How are those images presented? If on screen, there is no need to denoise the images with lower EV corrections (say <= ISO 400 equivalent); the noise won’t be visible anyway due to the downscaling applied.
And how many are finally going to be delivered? And in what form?

I mean, you don’t need to edit all images you took up to presentation standard (let alone ready for printing).

To avoid misunderstanding: once a final selection is made, and you are editing for delivery, you will need to use all tools, and spend time on each image. But no need to spend that time on images that are not going to be presented, let alone delivered in final form.

Thanks for the link, somehow I missed that chart while I was doing some research just a few weeks ago. Very helpful.

If you do that before any exposure corrections, you won’t know which noise preset/style is needed. If you do it after you have corrected all images, well, they all look good in the lighttable view. I am really looking for some way to get decent noise reduction across all images without having to look at 100% views of each and every one.

And there it is … the unsolicited view from someone who pretends to know better without having any insight into the requirements whatsoever. Now that is some patronizing if I ever saw it. So let me just answer you in your own style: “I mean, you need to shut up.”

As you wish

If you think about the way that profiled denoise data is generated, it makes sense that it doesn’t work well in this case. To generate the data, you shoot a frame at each ISO speed, clipping both blacks and whites. A script characterizes the noise at each ISO, then the module either matches or interpolates the data for each ISO. The profile is then simply matched from the ISO metadata.
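
For illustration, a minimal sketch of how such a per-ISO profile could be fitted, assuming the commonly used Poisson-Gaussian model (variance ≈ a·mean + b). The function name and numbers are mine, not darktable’s actual tooling:

```python
import numpy as np

def fit_noise_profile(means, variances):
    """Least-squares fit of variance = a*mean + b from flat-patch samples.
    a tracks the (ISO-dependent) shot-noise gain, b the read-noise floor."""
    a, b = np.polyfit(means, variances, deg=1)
    return a, b

# Flat patches measured from one test shot at a single ISO (made-up numbers):
means = np.array([0.01, 0.05, 0.1, 0.3, 0.6, 0.9])
variances = np.array([0.0002, 0.0006, 0.0011, 0.0031, 0.0061, 0.0091])
print(fit_noise_profile(means, variances))
```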

If this is something you plan to do often and for hundreds or thousands of images, then you should probably generate your own custom noise profiles for this use case.

This seems unnecessarily harsh and I’d request some empathy in future replies.
