Exposure shift for incident light metering

I would like to set my camera exposure based on an incident light meter - so there would be no grey card or color checker to help me set exposure in Darktable.
Additionally, from the FastRawViewer page linked below (under method 2), I know that this topic (the exposure shift) is also relevant if you want to perform “correct” ETTR based on sensor specifics.

I know (from other discussions) that even with a “perfect” in-camera exposure, you have to apply an exposure shift to reach the middle gray luminance that camera JPGs reach via their tone curve (before Filmic/Sigmoid). darktable’s default for this is +0.7 EV, which sits in a typical range of roughly +0.5 to +1.2 EV depending on the camera.
https://docs.darktable.org/usermanual/4.8/en/preferences-settings/processing/
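(For reference, in case it helps anyone reading along: an EV shift is just a power-of-two multiplier on the linear values, so the figures above translate like this. A trivial illustration, not darktable code.)

```python
import math

def ev_to_multiplier(ev):
    """An exposure shift of `ev` stops multiplies linear values by 2**ev."""
    return 2.0 ** ev

def multiplier_to_ev(m):
    """Inverse: how many stops a given linear multiplier corresponds to."""
    return math.log2(m)

print(ev_to_multiplier(0.7))   # ~1.62x, darktable's +0.7 EV default
print(ev_to_multiplier(0.5))   # ~1.41x
print(ev_to_multiplier(1.2))   # ~2.30x
```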

I am curious as to what the specific value is for my camera (Canon EOS 7D - yes, the old one).

I see four methods to find this out:

  1. You’re a Fuji user and can use the EXIF value exploited by the corresponding LUA script.
  2. You compare the camera RAW and JPG exposed to middle gray.
    How to Use the Full Photographical Dynamic Range of Your Camera | FastRawViewer
    This failed completely for me, testing the whole spectrum of ISO values with different targets and lighting conditions: every time, there were huge differences in the necessary shift, and I believe this is because aperture, shutter speed and ISO only move in discrete steps, so the settings never match up exactly between the shots being compared.
  3. Compare the RAW and JPG versions of the DPReview test images.
    https://www.dpreview.com/articles/4109350402/welcome-to-our-studio-test-scene
    In this case, judging from small movements in the plants, the two versions seem to have been taken separately. Comments below the presentation article also note that in some cases the exposure was “wrong,” and in general we have no information on which metering mode and target area were used. So one might be comparing apples and oranges.
  4. Activate the old base curve module with its camera-specific profiles, and set the global color picker to measure a zone that maps to middle gray (around 50 in LCh) on a color checker area in the DPReview RAW. Then deactivate the base curve module and, with the global color picker still active, set the exposure shift so that the same area comes back up to middle gray (see the worked numbers right after this list).
    In my case (Canon EOS 7D), this meant +0.904 EV.
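For what it’s worth, here is the arithmetic behind method 4 as I understand it: L = 50 in LCh/CIELAB corresponds to roughly 18.4% linear luminance, and the exposure shift needed to lift a measured linear value onto that anchor is just a log2 ratio. A minimal sketch of that relationship (the 0.0985 is a made-up measured value, chosen only to show a shift of about +0.9 EV), not how darktable computes anything internally:

```python
import math

def lab_L_to_linear_Y(L):
    """CIELAB L* -> relative luminance Y (white = 1.0), valid for L* > 8."""
    return ((L + 16.0) / 116.0) ** 3

def ev_shift_to_target(measured_Y, target_Y=None):
    """Stops of exposure needed to move measured_Y onto the middle-gray anchor."""
    if target_Y is None:
        target_Y = lab_L_to_linear_Y(50.0)   # ~0.184
    return math.log2(target_Y / measured_Y)

print(lab_L_to_linear_Y(50.0))     # ~0.184
print(ev_shift_to_target(0.0985))  # hypothetical measured value -> ~+0.90 EV
```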

There are (at least) two disadvantages of method 4:
a) I have no idea how accurate the given “general” Canon EOS preset is for my specific model.
b) It’s an embarrassingly “analog” transfer from a module with a digitally set curve. Yet, if you try to read out the value of the base curve profile at 50% by clicking another point, you automatically change the curve.

Question: Is method 4 acceptable for you?
Is there a better method of retrieving the middle gray values for a camera-specific exposure shift?

Personally, I feel like all these approaches to editing by numbers fail as often as they work.

For raw images, embracing the scene-referred workflow, my approach is to use the 50% setting in the exposure module together with the picker to set exposure. I let it try the whole image first, and if the result is not good, I modify the region a bit to remove extreme areas, or select a region around the subject of interest instead.

Then I use the tone equalizer and, if I decide to, the tone mappers to manage things from there.

Doing this for a while also shows how much exposure is typically added for your style of shooting, so if the sampled values hover around a number and you do want to apply a fixed offset, you could use the average of those values…

I’m not saying this is the way to do it just that it’s one way to do it…

3 Likes

This seems really complicated and I’d hazard a guess that you’ll end up clipping more highlights than you’ll save with the method you’ve outlined?

Are you having problems getting a decent exposure? How much exposure do you generally add in the exposure module?

2 Likes

I feel your method is way too complex and overthinks the problem.

My personal method of setting the exposure module in DT is to give the best exposure to the highlights and ignore what is happening to the shadows. Then I use a combination of the shadows and highlights module, the tone equalizer module, the color balance rgb module, and even additional instances of the exposure module masked to the shadows to brighten the shadows to the correct level.

This method works well for me because DT has great options to recover shadows, but I don’t depend upon a single module alone to achieve this.

3 Likes

Keep in mind that the 0.7 EV is an average, and corrects for the tendency of cameras to underexpose (presumably based on their view of what the ideal JPEG should look like). With external metering, the “required” correction will be different (and your JPEGs may look overexposed, but who cares?)

Using an incident light meter doesn’t stop you from including a gray card or colour checker in the image and then using those within darktable. A handful of test shots should give you whatever correction is needed relative to the metered value (which is what you used for exposure, right?).

Also, while ETTR may be useful in certain conditions, I find it impractical for most of my shooting (hunting insects or birds depending on location and season, with rather short notice before a shot, and changing light conditions).

2 Likes

Someone (don’t remember who) shared this method in the IRC/Matrix channel some time ago:

do you have a white wall?
that is evenly lit
you spot measure it to be centered on 0 EV, then take a picture. Open the picture in dt, place a color picker in Lab over the same surface you spot measured, and increase/decrease exposure until the color picker shows 50 as L value.
with my camera, it’s strictly always +0.67 Ev in the exposure module.
it’s better if the sun is your light source, or equivalent
I made the test at different iso value too.

  • after spot measuring, deactivate filmic or any other module that can interfere with the measurement, then increase/decrease exposure

Haven’t gotten around to trying it yet, but maybe it works for you.

Are you referring to the raw exposure bias, which is 0.72 for lower ISO values?

I have also used the studio shots from DPReview for my cameras. It’s important to switch to the view comparing raw files and download the JPEG linked there; the JPEGs linked in the default view seem to be conversions made with Lightroom. If the JPG and RAW come from the same shot, it shouldn’t matter which part of the scene was metered. When I did this for my cameras, I think the exposure was metered on the middle grey patch of the colour checker.

The values I obtained from the DPReview files were close to the values I often ended up with when adjusting by eye only: for Fujifilm 1.2–1.3 EV, and for Olympus 1.4–1.6 EV. Referring to your first point, adding Fujifilm’s reported raw exposure bias to darktable’s default exposure of 0.7 EV would result in quite a bright image.

Because then you are probably doing a double correction there: the 1.2–1.3 EV mentioned should replace the 0.7 EV, which is an average raw exposure bias for general use. (Strictly speaking, it’s the negative of the camera bias you are correcting for, since the camera underexposes.)

The shift I discuss here is a technical need.
Sure, there are many reasons for changing exposure in DT - and I use your method too. Yet, what I care about here is just the technical, camera/manufacturer-specific correction to get us from the display-referred workflow of (in-camera) tone curves to the scene-referred mindset of anchoring middle gray. For the sake of this point, let’s just assume we “nailed” exposure in-camera.

Yes, it is complicated, but I think it is a question that has remained unanswered since DT shifted to scene-referred. It currently means making two decisions: 1. how you want to expose in-camera, and 2. how to correct exposure in DT because of the manufacturer-specific shift (which, of course, we can sidestep with the “average” 0.7 EV).

Maybe, but I “only” seek a default way of using two modules (Exposure + Filmic/Sigmoid) after taking an exposure decision in-camera. You propose tweaking ~4 modules instead. Not wrong, but not simpler either.

If you’re right, this will make things more complicated. I hope you’re not. Regarding ETTR: I just included it since the linked article explains how knowing the precise in-camera shift matters in order to do ETTR “really” correctly, but of course, the method itself is only useful in certain situations.

That’s what I describe as “method 2” above. I still don’t know why the results varied so strongly for me in different situations.
And that was without additional in-camera “dynamic range increase” options like Highlight Tone Priority / HTP [Canon], Active D Lighting [Nikon] or Dynamic Range [Fujifilm].

Thanks for the hint!

At least it would matter regarding which patch actually lies at middle gray luminance. If it is just some average metering mode, the middle gray patch in the JPG may not sit at the middle gray value. And if you are forced to mix neighboring patches in the JPG (to get your numerical target middle gray area), it might land you on a combination of two positions on a non-linear curve.
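To illustrate that last point: averaging two tone-curved (encoded) values is not the same as tone-curving the average of the underlying linear values, so a picker straddling two patches lands slightly off the curve. A toy example with plain sRGB encoding, just to show the effect exists:

```python
def srgb_encode(y):
    """Linear -> sRGB-encoded value (0..1)."""
    return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

a, b = 0.15, 0.22                        # two linear patch luminances around middle gray
mean_of_encoded = (srgb_encode(a) + srgb_encode(b)) / 2
encoded_of_mean = srgb_encode((a + b) / 2)
print(mean_of_encoded, encoded_of_mean)  # ~0.465 vs ~0.467: small but systematic offset
```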

Exactly! Nothing to add regarding the Fuji topic.

As an overall “simple” solution: does anyone know how to extract the factor that the different base curve profiles apply at the point where they reach middle gray (getting to it, not modifying from it)? That might be a compromise value closer to individual cameras.
My next best guess would be to test this on a generic digital gray chart (not a photo!) with graduated luminance values, since I saw that my “method 4” depends too much on finding middle gray in some sample photo rather than on the actual shift factor applied by the tone curves.
This way, one could produce a list of standard presets like in base curve, for those lazy people who trick themselves into believing they “nailed” exposure.
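On the “extract it from the presets” idea: as far as I understand, the base curve presets live in darktable’s source as lists of (input, output) nodes on linear data, so in principle one could solve for the input the curve maps to middle gray and read the shift off directly, without any target image. A rough sketch of that idea; the node list below is a made-up placeholder (not a real preset), the real module uses spline interpolation rather than the linear interpolation here, and I am assuming the curve’s output is linear with sRGB encoding applied afterwards:

```python
import math
import numpy as np

MIDDLE_GRAY = 0.1842  # linear luminance corresponding to L = 50

def ev_shift_from_basecurve(nodes, target=MIDDLE_GRAY):
    """Given monotone (input, output) curve nodes on linear data, return the EV shift
    the curve effectively applies at the point where its output reaches middle gray."""
    x = np.array([n[0] for n in nodes])
    y = np.array([n[1] for n in nodes])
    x_at_target = float(np.interp(target, y, x))   # invert the curve at the target
    return math.log2(target / x_at_target)

# hypothetical node list, NOT a real darktable preset:
demo_nodes = [(0.0, 0.0), (0.08, 0.15), (0.20, 0.35), (0.55, 0.75), (1.0, 1.0)]
print(ev_shift_from_basecurve(demo_nodes))   # ~+0.87 EV for this made-up curve
```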

Sorry, everyone, for the “wall of text,” I just wanted to take all your replies seriously.

DT has a tool that was used to make the basecurves…

This might give you insight into that process… https://www.youtube.com/watch?v=LufwQZx01gk

There is also darktable-chart… this is a process to match the JPG with a tone curve and a LUT made for the color lookup module…

Finally, you can use the color calibration module with a color chart, and it will tell you what exposure correction is needed for the color values to be accurate. I believe it uses patches on the tone ramp, so you could experiment with that… checking what exposure values you get…

1 Like

Thanks, that’s really helpful. I’ll look into that.

Not sure, what you’re trying to say, but perhaps I was a bit unclear. I was not questioning darktable’s default. The EXIF value in Fujifilm cameras is something separate, which, by chance, is almost what darktable is trying to compensate by default. However, this would result in a very dark image for Fujifilm cameras. The 1.2–1.3 EV indeed replace darktable’s default value of 0.7 and are not added on top. With the bright image, I was thinking about adding the raw exposure bias of Fujifilm to darktable’s default. This would result in 1.42 EV (darktable’s 0.7 EV + Fujifilm’s 0.72 EV) and, compared to the 1.2–1.3, in an brighter image.

I agree. For the studio shots I was interested in, the numerical values of the mentioned patch were middle grey, and according to X-Rite’s description of the colour checker, the patch indeed should be middle grey. So, yes, this should be checked.

Wouldn’t this be something similar as your point about the non-linearity of tone curves? I think, taking an image with your camera of middle grey would be the most reasonable approach for an exact value. Although I assume that images with the same model from someone, who does this regularly, are as good. Or you just try some of the values mentioned here and see, how you like it. Maybe someone already figured this out for your model here in the forum.

Not sure if this fits your need, but DT will tell you the correct exposure to set if you use the color checker and do the assessment in the color calibration module… it also creates a channel mixer matrix to get a better color match from the current input profile…

Yes, it would be a problem. That’s why I’m currently searching Google for high-bit greyscale gradation images in PNG or TIFF with a high enough resolution. There I could use the color picker on narrow gradient strips (with the smallest possible differences between adjacent ones) to test the existing curve profiles on them.
And I’m almost sure there will be additional problems regarding the color space settings.

Of course, camera-specific measurements with test photos would be even better, but I just don’t seem to get it right between different scenes or lighting conditions. I still think it’s due to the step-wise options of the exposure triangle.

I feel like there might be a grey ramp in here… I’ll have to look at them all…

Looks like mostly color images on review…there are some color charts in the AP0 directory…

1 Like

I don’t understand. What is the technical need? Why do you need it?

1 Like

Thanks, for now I ended up with this image:

A 16-bit gradient in 2048x1024px in TIFF.

from this page:
http://www.bealecorner.org/red/test-patterns/

I chose this because it offered the most gradations, though admittedly, an even wider file would have helped use more of the ‘shades’ that 16-bit is capable of. So please understand this as an approximation.

Input/Output color profile was sRGB by default.

I turned on Base Curve (color preservation: none) and went through all the presets.
Each time, I placed the global color picker “dot” (mean luminance) inside the stripe closest to 50% luminosity, at maximum zoom, and noted the value (only one number for RGB, since it’s gray…). Then I turned off Base Curve and activated Exposure with the global color picker still active on the same spot. The area exposure mapping only works with one decimal place, so I typed in different exposure values until the picker showed the same luminance as before (not 50%!). In a few cases, I had to round the third decimal place up or down.

As an alternative, I repeated this with the global color picker area extended into the next adjacent stripe. Often, but not always (the exceptions are in italics), this resulted in luminance values closer to 50%.

| Base curve preset | Luminance LCh (stripe closest to 50%) | Luminance RGB | +EV shift | Luminance LCh (2 stripes closest to 50%) | Luminance RGB | +EV shift |
| --- | --- | --- | --- | --- | --- | --- |
| Canon EOS | 50.12 | 119 | 0.942 | 50.00 | 119 | 0.940 |
| Canon EOS alternate | 50.15 | 119 | 1.044 | 50.02 | 119 | 1.042 |
| Cubic spline | 50.19 | 119 | 0.000 | 49.98 | 119 | 0.000 |
| Fujifilm | 50.24 | 120 | 1.102 | 49.98 | 119 | 1.098 |
| Kodak Easyshare | 50.00 | 119 | 0.303 | | | |
| Konica Minolta | 50.10 | 119 | 0.577 | 49.92 | 119 | 0.576 |
| Leica | 50.07 | 119 | 0.498 | 49.85 | 119 | 0.494 |
| Nikon | 50.20 | 119 | 0.873 | 49.96 | 119 | 0.870 |
| Nikon alternate | 49.96 | 119 | 0.956 | 50.21 | 119 | 0.960 |
| Nokia | 50.08 | 119 | 0.445 | 49.86 | 119 | 0.440 |
| Olympus | 50.16 | 119 | 0.603 | 49.92 | 119 | 0.598 |
| Olympus alternate | 50.19 | 119 | 1.217 | 49.97 | 119 | 1.215 |
| Panasonic | 50.07 | 119 | 0.498 | 49.85 | 119 | 0.494 |
| Pentax | 50.04 | 119 | 0.496 | 49.82 | 118 | 0.492 |
| Ricoh | 50.04 | 119 | 0.496 | 49.82 | 118 | 0.492 |
| Samsung | 50.07 | 119 | 0.735 | 49.82 | 118 | 0.729 |
| Sony Alpha | 49.75 | 118 | 1.069 | 50.01 | 119 | 1.074 |

So there are some things to discuss:

  • Based on the “target” image, which is the highest resolution I could find, this is still just an approximation. A vector graphic or floating-point format (if such a thing exists in this shape) or a higher resolution would be more precise.
  • These are the shift values to [the closest measurable point next to] neutral gray that the generic presets in Base Curve give us. They are not model-specific, and I have no idea who wrote them. Still, it may be useful to bring them over to the scene-referred workflow.
  • Based on the “target” image, I derived two values: one from the stripe closest to 50% luminance, and one from the mean of this closest stripe and the adjacent stripe on the other side of 50%. Since we’re dealing with nonlinear curves, this may already introduce a certain distortion, so I also don’t know whether you should, in turn, aim for yet another mean between both columns (see the interpolation sketch after this list). For scale, in most cases (deviations in italics), these luminances still map to an RGB value of 119.
  • I presume these shifts will be different with additional in-camera “dynamic range increase” options like HTP activated.
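Regarding the interpolation sketch mentioned above: since the two columns bracket the 50% point in most rows, one option is to interpolate the EV shift linearly between the two measured L values instead of just averaging them. The numbers below are the Sony Alpha row from the table; over such a tiny interval the linearity assumption should be harmless, and the differences are probably within measurement noise anyway:

```python
def ev_at_L50(L1, ev1, L2, ev2, target_L=50.0):
    """Linearly interpolate two measured (L, EV) pairs to the exact L = 50 point."""
    return ev1 + (target_L - L1) * (ev2 - ev1) / (L2 - L1)

# Sony Alpha row from the table above: the two samples bracket L = 50
print(ev_at_L50(49.75, 1.069, 50.01, 1.074))   # ~1.074
```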

I’d have to try to wrap my head around all this but just looking without thinking it is not clear how slapping a non-linear profile out of the gate on an image, ie srgb and then using a basecurve module that will usually expect exposure to be managed into the display range (ie 0-1) before it is applied is relevant to using DT normally where sigmoid and filmic and the scene referred workflow do not … but I guess maybe if I go back and work through things I might understand what the end goal is…

2 Likes

Right, that looks like a lot of work…

Now, how does this tell us anything about the offset in camera metering relative to what it “should be” (i.e. to get a proper 0.18 gray value in our demosaiced, linear image after the exposure module)?

What is measured here is (at best) the required exposure correction between raw and in-camera jpeg, which is not the same thing. At worst, you measure the exposure correction between raw and a (semi-)randomly selected curve… (see “Cubic spline”, “Olympus alternate”, “Nikon alternate”, “Canon EOS alternate”).

Note the base curves can be algorithmically derived from raw/jpeg pairs (normally a raw and a camera jpeg). That means that in the case of alternates for a brand, you have to know which model corresponds to which curve…


You should perhaps rethink what question you want to answer: you started with wanting to use an incident light meter, and claimed that that prevented you from using a gray card or color checker. (I beg to disagree here).
Now you are deriving exposure shifts from base curves, using an artificial image. All that tells you is what the basecurve does, so at best, what the camera manufacturer does with middle gray from their raws…
And the basecurves are explicitly aimed at simulating the in-camera processing.

3 Likes

There is a lot of confusion in the sentence above.

First, the camera exposure is the combination of aperture, shutter speed, and (to a lesser extent, because of ISO invariance) the ISO you shot the image with. You can use an exposure meter, but it does not make much sense for digital cameras, especially MILCs, as they have pixel-level “exposure meters” built in (visualized with histograms, zebra patterns, etc).

side note

Global exposure meters, either built in or as an accessory, made more sense for film, which is very tolerant of overexposure. For most film types, you still get to see a lot of details at significant overexposure, which even high DR digital cameras would clip hopelessly.

Second, the exposure you set in Darktable is just, more or less, a number that multiplies the values recorded by the camera sensor. (Some argue that calling it exposure is a misnomer, but that is water under the bridge now; the term has stuck.) It is not unlike setting the ISO in your camera, except that it does not introduce additional noise or clipping.

That term (“perfect” or “nailed” exposure) is ill-defined for two reasons. First, unless your image has very little dynamic range, camera exposure is almost always a compromise between getting some detail in the shadows and not blowing the highlights. There is no “perfect” exposure — it depends on what you want to do with your image.

Similarly, exposure depends on processing intent: where you want your middle grays, which part of the image you want to emphasize, etc. To confuse matters, a lot of other modules can counteract exposure, ie a more or less identical image can be achieved with a wide range of exposure settings (eg high exposure, tame highlights in tone equalizer, or low exposure, lift shadows in the same module).

3 Likes