Adjusting Black Level Correction

Thanks, everyone. I will stop worrying about this question and avoid black point correction going forward, except in cases of obvious crushed blacks.

4 Likes

My understanding is that it serves two purposes:

  1. By design, a sensor may record total darkness as a positive number. This offset is subtracted in the pipeline, but electrical noise can push the result negative, and you can correct for that here. In practice, though, I think this is already handled in the raw black/white point module, so you no longer need to do it in this module.

  2. Some operations (e.g. sigmoid) take logarithms of pixel values, and the logarithm of zero is -Inf. A small positive value is introduced here to avoid that, but this is a hack; the relevant modules should just special-case 0.
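Both points can be sketched together. This is a minimal illustration with hypothetical names (`subtract_black`, the `eps` offset), not darktable's actual code:

```python
import numpy as np

def subtract_black(raw, black_level, eps=1e-6):
    """Subtract the sensor's black level, clamp noise-induced negatives
    to zero, then add a tiny positive offset so that later log-based
    modules (e.g. sigmoid) never see exactly 0."""
    out = raw.astype(np.float64) - black_level
    out = np.clip(out, 0.0, None)  # electrical noise can push values below 0
    return out + eps               # the "hack": keeps log(out) finite

# A dark pixel (1000) sits below the black level (1008) due to noise:
print(subtract_black(np.array([1000, 1008, 1200]), 1008))
```

With the clamp and offset in place, every output is strictly positive, so a downstream logarithm is always finite.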

In practice, you don’t need to touch this. It is surprising that it is exposed in the GUI.

1 Like

It is my understanding that no sensor will record all the way down to zero, due to what's called 'dark current': imperfections in the electronics produce a small signal even when no photons are exciting the sensor. Others may please correct/clarify…

Cameras will either 1) pre-subtract a value to deliver black (0.0) values out of the box, or 2) deliver the raw data un-subtracted and report a suitable black-subtraction value in the metadata. My Nikon D7000 does #1, and the Z 6 does #2. My surmise is that such a number is a compromise, as there's probably a bit of ambiguity about where dark current tapers off and where recordable photons pick up. My further surmise is that dt provides an additional adjustment to the metadata number for that reason.
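A sketch of the two conventions, with hypothetical names (`normalize_raw`, `metadata_black`); the 1008 value is the Z 6 black level mentioned in this thread:

```python
import numpy as np

def normalize_raw(raw, metadata_black=None):
    """Bring raw data to a common zero-black convention.
    Convention 1 (e.g. D7000): the camera already subtracted the black
    level, so metadata_black is None and the data passes through.
    Convention 2 (e.g. Z 6): the metadata reports the value to subtract."""
    raw = raw.astype(np.float64)
    if metadata_black is None:
        return raw  # already black-subtracted in-camera
    # Clamp, since noise can leave values just under the black level:
    return np.clip(raw - metadata_black, 0.0, None)

print(normalize_raw(np.array([1000, 1500]), metadata_black=1008))
```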

In rawproc, my default toolchain includes a subtract tool which applies the metadata number if available (if the number isn’t available, the default is 0, so that lets me keep the tool in the default toolchain without worry). I can turn it off, and for a Z 6 raw (metadata black subtract: 1008) this is the result:

Anyway, FWIW.
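A toy version of that subtract-with-fallback behaviour (illustrative only, not rawproc's actual implementation): when the metadata black level is missing, the default of 0 makes the tool a safe no-op.

```python
def subtract_tool(raw, metadata):
    """Apply the metadata black level if available; default to 0 so the
    tool can stay in the default toolchain without harming files that
    lack the metadata."""
    black = metadata.get("black", 0)  # missing metadata -> subtract 0
    return [max(v - black, 0) for v in raw]

# Z 6-style metadata (black subtract: 1008):
print(subtract_tool([1000, 1500, 2000], {"black": 1008}))  # [0, 492, 992]
# No metadata: subtracting 0 leaves the data untouched:
print(subtract_tool([1000, 1500], {}))  # [1000, 1500]
```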

1 Like