Darktable - Maintaining Sensor Linearity RAW to .tif

Hi,

I am an engineering student aiming to characterize a flux distribution on a plane by correlating pixel values with flux sensor measurements in the plane. However, I am not very knowledgeable about photography, so I’d just like to clarify some things about processing RAW files.

My understanding is that DSLR sensors have a basically linear response to light intensity (I am using a Canon EOS 60D), but if I shoot in JPG, the camera will apply a nonlinear gamma adjustment to the image. Therefore, I should shoot in RAW, convert to grayscale/monochrome, and then export as a .tif, which I can use in MATLAB for the rest of my post-processing. In the conversion process, I want to ensure that linearity is maintained.

  1. In darktable, there are three color profile options: input profile, working profile, and output color profile. If I select the input color profile as standard color matrix, and then my output color profile as any of the default linear color profiles (linear Rec709, linear Rec2020, etc.), linearity will be maintained, correct?

  2. White balance is irrelevant. Whether I apply any white balance or not, linearity is maintained in the grayscale image (as long as I don’t clip any of the channels) correct?
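As an aside on question 1: in the linear case, an output profile conversion amounts to a 3×3 matrix applied to each pixel, and a matrix transform preserves proportionality. A minimal NumPy sketch with made-up coefficients (not a real camera matrix):

```python
import numpy as np

# Illustrative 3x3 color transform matrix (made-up values,
# not an actual camera or Rec709 matrix).
M = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])

rgb = np.array([100.0, 50.0, 25.0])

# Matrix multiplication is linear: doubling the input doubles the output.
out1 = M @ rgb
out2 = M @ (2.0 * rgb)
print(np.allclose(out2, 2.0 * out1))  # True
```

This is why a gamma-encoded profile (like plain sRGB) would break linearity, while the "linear" variants do not.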

darktable is a wonderful, powerful, complex, sophisticated program. But I suggest it is overkill for your needs.

Instead, I suggest either dcraw or dcraw_emu from the libraw toolset. For example:

dcraw_emu.exe -v -w -4 -o 0 -T yourfile.cr2

This creates the linear TIFF file yourfile.cr2.tiff. (Roughly: -4 requests linear 16-bit output with no gamma curve, -o 0 selects raw colour with no output profile, -w applies the camera’s recorded white balance, -T writes a TIFF, and -v is verbose.)


For greater understanding, Google “develop a raw by hand”; there is a two-part blog that shows all the steps. It is a great background resource.


Speaking of, here is the MATLAB-specific one.

As @snibgo already said, darktable is overkill for what you want to do. In addition, do you really want to figure out what darktable does to your data (a jury/supervisor can get picky about such things)? dcraw is fairly simple code to understand. You may also want to think about the resolution you need: simply binning the RGB data from the raw file might be enough. You divide the number of pixels by four that way, but there is no interpolation, so it is simple to understand and explain.
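The binning idea can be sketched in a few lines of NumPy. This assumes a hypothetical RGGB Bayer mosaic held in a plain array; in practice the mosaic would come from dcraw/libraw:

```python
import numpy as np

# Hypothetical 4x4 raw RGGB mosaic; values are illustrative only.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)

# Bin each 2x2 Bayer cell into one RGB pixel: take R and B directly,
# average the two greens. No interpolation, half resolution.
r = raw[0::2, 0::2]
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
b = raw[1::2, 1::2]
rgb = np.stack([r, g, b], axis=-1)

print(rgb.shape)  # (2, 2, 3)
```

Because each output value is a sum/average of raw values, the binned result stays linear in the incoming light.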

White balance is basically applying scaling factors to the red and blue pixels to correct for differences in illumination.
Whether that correction is relevant for you or not depends on what you want to do exactly. If your plane and the incoming light are always the same colour, it shouldn’t matter. If not, you’ll have to test.

Keep in mind that you are not making photographs here, but doing measurements. So being able to trace where the errors come from is important, very fine detail may or may not be as important.


I don’t want to meet up with you in a dark alley… :laughing:

+1

You might also find some useful information in this thread…

Thank you everyone. I will switch to using dcraw

@ggbutcher OK, getting all the details is not easy :stuck_out_tongue: But getting what the code does is a lot less complicated for a console application than it is for GUI applications like dt. And when you are a student, you have to understand as much as possible of your tools/methods (been there :confused: )

y’know, thinking about this, I never could get what I thought was true unmodified data from dcraw. A program that would be more verifiable would be the aptly named unprocessed_raw.cpp in the libraw samples directory:

https://github.com/LibRaw/LibRaw/blob/master/samples/unprocessed_raw.cpp

You’d have to clone and compile libraw, but it might better support establishing a pedigree for your data.

unprocessed_raw doesn’t demosaic. This might (or might not) be what the OP actually needs.

(dcraw can also do “no demosaic”, of course.)

Okay, maybe even easier to use would be rawproc, my hack raw processor. It opens raw files (using libraw) and presents them unmodified (set the property input.raw.libraw.rawdata=1 for totally unmodified data, or =crop to remove the masked borders) for further work. With it, you can open a raw file and then immediately save it to TIFF for your purposes.

Link to the current version, 1.1:

Windows installer and Linux AppImage.

Wow, this is much easier than using LibRaw (I don’t know C++, so using LibRaw directly is a headache). Thank you!

One question about this. I still want to adjust for the camera’s black/white point, debayer, and convert to grayscale. So what I have done in the program is keep “subtract:camera”, “demosaic:[amaze]”, “blackwhitepoint:camera”, and “gray” (with R, G, and B each set to 0.33). Is having both subtract:camera and blackwhitepoint:camera redundant, or does that mess up the black point? Should I only use the blackwhitepoint:camera tool if I want to shift into 8-bit/16-bit space?

Use blackwhitepoint:data, and it’ll just scale to the data limits. This is probably redundant with subtract:camera, but I sometimes do different things with blackwhitepoint, so keeping the subtract retains that essential operation.

It doesn’t change the linearity, but if you’re doing things that need to respect the original magnitudes you should probably remove it.
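For reference, the equal-weight “gray” step described above is itself a linear combination of the channels, so it does not affect linearity either. A minimal sketch with illustrative pixel values:

```python
import numpy as np

# Illustrative linear RGB pixels (not real sensor data).
rgb = np.array([[10.0, 20.0, 30.0],
                [20.0, 40.0, 60.0]])

# Equal channel weights, as in the "gray" tool set to 0.33 per channel.
weights = np.array([1/3, 1/3, 1/3])
gray = rgb @ weights

print(gray)  # [20. 40.] -- doubling the input pixel doubles the gray value
```

Any fixed set of weights would do; equal weights just avoid favouring one channel’s spectral response over another.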

@Wesski: You may need to be careful about how you define “linearity”. This might mean:

(a) the values are proportional to the energy received, V = k * E, or:

(b) the values are proportional to the energy received plus a constant, V = k * E + c.

Depending on your needs, either definition may be appropriate, or perhaps it doesn’t matter.
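One way to distinguish the two definitions is a straight-line fit of values against known exposures: if the intercept is (near) zero, definition (a) holds; otherwise only (b). A sketch with synthetic data, where c plays the role of a black-level offset:

```python
import numpy as np

# Synthetic "measurements": strictly linear with an offset,
# i.e. definition (b): V = k * E + c.
E = np.array([0.0, 1.0, 2.0, 4.0, 8.0])  # relative exposure/energy
k, c = 3.0, 50.0
V = k * E + c

# Least-squares straight-line fit recovers slope and intercept.
k_est, c_est = np.polyfit(E, V, 1)
print(k_est, c_est)  # approximately 3.0 and 50.0
```

With real data you would vary exposure time at fixed illumination, then check both the fit residuals (linearity) and the intercept (which definition applies).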

Probably a fair assumption at the sensor level, but is it also valid for all raw files?

Okay thank you

For consumer cameras, I think so.

I show a method for testing in Linear camera raw.

I don’t have enough experience to say, but I suspect that most good cameras have linear raw files, or at least close enough for photographic purposes. But cameras are not calibrated photometric instruments. For example, the colour filter array passes certain spectra of light, and manufacturers don’t publish those spectra, so we don’t really know what quantity is being linearly measured. For any application, we should ask: How much accuracy do I need? Are files from this camera as accurate as I need?