When processing a raw file, one of the first steps is to demosaic the image. If you set the demosaicing method to “passthrough (monochrome)”, this discards color information during the demosaicing process, and darktable will flag the image as monochrome. Note: You should only use this for images taken on a camera where the Bayer filter has been removed.
I have two questions, maybe rhetorical, but I hope someone can explain them better.
From the note, one could remove the Bayer filter from a camera… Is that really possible? It reads as if I could remove the Bayer filter from my Canon 70D. I suppose I misread the sentence.
I looked at Wikipedia about color filters and I see that there are many kinds of filters besides Bayer. (Color filter array - Wikipedia)
Given the list of color filters (RGBE, RYYB, etc.), can I use the passthrough option of the demosaic tool with any filter, or only with Bayer filters?
On a side note, how do I find out which kind of filter a given camera has on its sensor?
If you have any CFA applied to your sensor, you should properly demosaic first and then convert to B&W. The different filters in the CFA have a different response to the incident light, which is taken into account when demosaicing and for the color input profile. In that case you first want to get accurate colors before you convert those colors into grayscale.
If your camera has no CFA, this demosaicing step and the color calibration are unnecessary (edit: please correct me if I’m wrong). You directly capture the lightness of the scene without color information. You can select “Passthrough” for the demosaicer in that case.
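To make the two paths concrete, here is a minimal sketch using the rawpy library (the file name is a placeholder and the Rec.709 luma weights are just one illustrative choice; darktable’s actual pipeline is more involved):

```python
import rawpy
import numpy as np

# "IMG_0001.CR2" is a placeholder -- substitute your own raw file.
with rawpy.imread("IMG_0001.CR2") as raw:
    # Camera WITH a CFA: demosaic and color-manage first, then
    # reduce the accurate colors to grayscale.
    rgb = raw.postprocess()  # demosaic + white balance + color matrix
    gray = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights

    # Camera WITHOUT a CFA: every photosite already records plain
    # intensity, so the mosaic itself is the monochrome image --
    # essentially what "passthrough (monochrome)" exposes.
    mono = raw.raw_image.astype(np.float32)
```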
Oh, wow! Scratching off the Bayer filter is a thing, then. Thanks for sharing. Now I better understand why I would use the passthrough option in the demosaic tool!
There are a few companies (worldwide) that will remove the Bayer filter for you. That comes at a price; expect around €1000. And the advantage is not as big as with a true monochrome sensor, because the micro-lenses are also removed.
It’s complicated… I thought that dpreview had the patterns listed, but I can’t seem to find the info anymore.
Also … in my experience the best monochrome conversions come when you can hold on to all the color information as long as possible, adjust everything for a near perfect color image and then use that as a base for your monochrome version. You gain so much control that way, it really improves the end result.
With optical sensor arrays, tiny lens systems serve to focus and concentrate the light onto the photo-diode surface, instead of allowing it to fall on non-photosensitive areas of the pixel device.
Ok, I was wrong. My thought was that for each micro-lens or photodiode you have one color (two green, one red, one blue). So one “dot” of the image needs 4 photodiodes (= 1 micro-lens). Removing the color filter would give 1 dot per micro-lens. And this is where I’m wrong ^^
What you can do at dpreview is download a raw of the camera you’re interested in from their studio scene comparison tool, then just extract the metadata with either exiftool or exiv2:
glenn@bena:~/Photography/rawproc$ exiftool -G DSG_3111.NEF | grep CFA
[EXIF] CFA Repeat Pattern Dim : 2 2
[EXIF] CFA Pattern 2 : 0 1 1 2
[Composite] CFA Pattern : [Red,Green][Green,Blue]
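For anyone wanting to decode that tag programmatically: per the EXIF spec, the CFAPattern values map 0→Red, 1→Green, 2→Blue (then 3→Cyan, 4→Magenta, 5→Yellow, 6→White). A tiny sketch:

```python
# Decode an EXIF CFAPattern from its dimensions and values,
# e.g. dims "2 2" and values "0 1 1 2" as in the output above.
CFA_COLORS = {0: "Red", 1: "Green", 2: "Blue",
              3: "Cyan", 4: "Magenta", 5: "Yellow", 6: "White"}

def decode_cfa(dims, values):
    rows, cols = dims
    names = [CFA_COLORS[v] for v in values]
    return [names[r * cols:(r + 1) * cols] for r in range(rows)]

print(decode_cfa((2, 2), (0, 1, 1, 2)))
# [['Red', 'Green'], ['Green', 'Blue']] -- matches the Composite tag
```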
Actually, I think it goes one step further, in that you can influence the monochrome tonality most effectively with color manipulation, to the point of abstraction. When I grayscale an image, my tool has three sliders for the R, G, and B channel contributions. I almost always go through the “single-channel” variations, where I slide one of the channels to 100% and the other two to zero, just to see what aesthetic effect concentrating on a single channel might bring to the image. There are other colors, mixes of two or all three channels, that might bring a more interesting image; you just have to inspect the scene, find the dominant colors in the texture you like, and play with them. I’ll also go back and maybe add a color saturation before the grayscale, to punch out colors before I mash them into monochrome…
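In code terms those three sliders are just a weighted sum of the channels; a minimal numpy sketch (the function name and weights are mine for illustration, not rawproc’s actual implementation):

```python
import numpy as np

def channel_mix_gray(rgb, wr, wg, wb):
    """Grayscale as a weighted channel mix; rgb is a float H x W x 3 array."""
    return rgb @ np.array([wr, wg, wb])

# "Single-channel" variations -- look at each channel on its own:
#   gray_r = channel_mix_gray(rgb, 1.0, 0.0, 0.0)
# A red-heavy mix (the classic red-filter look for dark skies):
#   gray = channel_mix_gray(rgb, 0.8, 0.2, 0.0)
```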
Aren’t these sliders similar to the ones in the color calibration module (gray tab)? Maybe with some internal variation in the code.
This is something I tried to play with, but in my experience with the few images I tried it on I didn’t see a major difference. But I’m really a newbie at channel mixing.
@olivier I am really not sure you are understanding the Bayer stuff correctly; just to make sure, and don’t feel offended if I understood you wrong.
For normal cameras you interpolate every pixel from the surrounding area with a demosaicing algorithm and end up with red, green, and blue values for every pixel. These can be used in any way; one example would be a color-to-grayscale conversion.
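As a toy illustration of what “interpolate every pixel from the surrounding area” means, here is a bilinear demosaic for an RGGB mosaic; a real demosaicer (AMaZE, RCD, …) is far more sophisticated, so treat this purely as a sketch of the idea:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(raw):
    """Bilinear demosaic of a 2-D float RGGB Bayer mosaic."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Each missing value becomes the mean of its known neighbours.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb, mode="mirror")
    g = convolve(raw * g_mask, k_g,  mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])  # full red/green/blue at every pixel
```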
A few special cameras (mostly from Leica) have a true monochrome sensor: there is no red/green/blue stuff there, every pixel is just grayscale. This offers better monochrome tonality, sensitivity, dynamic range, and a higher resolution, with only one downside: no color at all.
And there are a few people who modify a normal camera by working on the sensor itself, or have a company do it for them, as such work carries a high risk of destroying the camera completely. This is nothing software-related at all. After that mod these cameras also know nothing of colors; every sensor pixel (photosite) records only light intensity. For those very special cameras we need a special “demosaicer”; in fact it does nothing fancy, it just renders luminance.
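Which is why the passthrough “demosaicer” can be so simple in spirit; compared to the bilinear sketch above, there is nothing to interpolate (again just a sketch, not darktable’s actual code):

```python
import numpy as np

def passthrough_monochrome(raw):
    """No interpolation at all: on a CFA-less sensor every photosite
    is already a luminance sample, so hand the mosaic back as-is."""
    return raw.astype(np.float32)
```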
I’m not offended at all. The Bayer filter is something unknown to me, and I try to explain it with things I might already know… which may be true or not. That is why I like to share my point of view, and if I’m wrong, please correct me, of course! I try to learn by asking questions and trying to connect some dots.
Camera sensors measure light as energy: more intense light at a location yields a bigger number. So all the little sensors arranged in an array would, by themselves, produce measurements that readily yield a monochrome image.
But light has two components to consider in telling our brains how to make colors: intensity and wavelength. The sensors themselves can only measure intensity, so we need another contrivance to capture wavelength. That is the Bayer array, an array of high, mid, and low bandpass filters laid upon the sensor array. You are probably used to calling them red, green, and blue filters, but it’s more useful to consider what they’re really doing, which is to let only selected wavelengths pass to the respective sensor location. Our eyes actually work in a similar way: the cones come in three types, each sensitive to a different band of wavelengths (roughly short, medium, and long), while the rods mostly record overall intensity.
So, the camera records the light of the scene as splayed upon the sensor array covered with this mosaic of bandpass filters. That information can then be used by software to construct a color encoding at each location (demosaicing), using the measurement at that location and some interpolation of the surrounding locations’ measurements.
So, when you scrape the Bayer filters off a camera’s sensor, which is not that hard to do with the right solvent and tools, you end up with just the bare semiconductor photosites, which can make a fine monochrome image of just the light intensities.