Raspberry Pi Processing

Hello! I’m very interested in RT, and have a few questions about its suitability for my application, and how to install it on an RPi.

Briefly, I am designing a color sensor system consisting of an RPi with an RPi camera module (either a V2 or HQ). The functioning of the system is simple: capture an image (with a gray card in the FOV periphery for white balance), and determine the average color of the captured scene (not including the gray card). Image processing will occur automatically via scripts (i.e. not a GUI), and ideally will all occur on the RPi. The top priority of the system is color accuracy, and it’s my understanding that I should work with RAW images and maintain linearity in my processing steps. I have been following this helpful blog post by Jack Hogan on rendering RAW images, which outlines some of the necessary steps: 1) load the RAW image and subtract black levels, 2) white balance, 3) correct linear brightness, 4) clip image data, 5) demosaic, 6) apply color transforms. However, as outlined in Chapter 5 of the RPi camera algorithm/tuning guide, there are other processing steps that may be necessary to include (e.g. lens shading correction, defective pixel removal). A few questions below:
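To make the intended pipeline concrete, here is a minimal sketch of the scripted, linear processing I have in mind using rawpy + NumPy, where postprocess() is assumed to cover black-level subtraction, white balance, demosaicing, and the color matrix; the filename, white-balance multipliers, and gray-card coordinates are placeholders:

```python
# Minimal sketch: render a linear image from the RAW capture with rawpy,
# then average the scene color. Path and gray-card coordinates are hypothetical.
import rawpy
import numpy as np

with rawpy.imread("capture.dng") as raw:       # hypothetical filename
    # postprocess() subtracts black levels, white balances, demosaics and
    # applies the camera->sRGB color matrix; gamma=(1, 1) keeps the data linear.
    rgb = raw.postprocess(
        gamma=(1, 1),                   # no gamma: stay linear
        no_auto_bright=True,            # no automatic brightness adjustment
        use_camera_wb=False,
        user_wb=[2.0, 1.0, 1.5, 1.0],   # placeholder multipliers; gray-card WB below
        output_bps=16,
        output_color=rawpy.ColorSpace.sRGB,
    )

rgb = rgb.astype(np.float64) / 65535.0   # normalize to 0..1, still linear

# Exclude the gray-card region (hypothetical coordinates) before averaging.
mask = np.ones(rgb.shape[:2], dtype=bool)
mask[0:200, 0:200] = False               # gray card assumed in this corner
mean_rgb = rgb[mask].mean(axis=0)
print("average linear sRGB:", mean_rgb)
```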

  • If I supply a DNG file, can RT be scripted via the CLI to automate all of the above steps? One piece I am having trouble understanding is how (or whether it’s possible) to automate the white balance in RT based on the gray card in the FOV for each image (see the sketch after this list for the NumPy version of that step).

  • Alternatively, I think I can code up steps #1-6 from the blog post above in NumPy - could I then hand off this semi-processed image to RT for additional processing (e.g. lens shading correction, defective pixel removal)?

  • Can RT be installed on an RPi (in particular I am hoping to use an RPi Zero W), and are there instructions specific to the RPi and Raspbian for how to perform the installation?
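For the gray-card step specifically, here is the kind of custom code I have in mind - a minimal sketch assuming a linear RGB image and a known (hypothetical) card location:

```python
# Minimal sketch of gray-card white balance on a linear RGB image.
# The card location is a placeholder; in practice it would be detected
# or fixed mechanically in the frame.
import numpy as np

def gray_card_multipliers(rgb_linear, card_slice):
    """Per-channel gains that make the gray-card patch neutral (green as reference)."""
    patch = rgb_linear[card_slice]
    means = patch.reshape(-1, 3).mean(axis=0)   # average R, G, B of the card
    return means[1] / means                     # gains relative to green

def apply_wb(rgb_linear, gains):
    return np.clip(rgb_linear * gains, 0.0, 1.0)

# usage with a hypothetical card location in the top-left corner:
# card = (slice(0, 200), slice(0, 200))
# gains = gray_card_multipliers(rgb, card)
# rgb_wb = apply_wb(rgb, gains)
```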

Thank you!
Mike

Honestly, LSC and defective pixel removal are probably easier than a high-quality demosaic algorithm, so in your case I think you’re probably better off with RawPy + NumPy. You will likely spend more time trying to get RT’s CLI integrated into the pipeline than you would just keeping the whole pipeline in NumPy.

Thank you very much for the reply and helpful information! I’ve been looking into RawPy as well in an effort to determine whether that or RT will be suitable for my application and processing pipeline. I recognize that this is an RT forum (so I’m happy to ask these questions elsewhere if more appropriate), but does RawPy (and LibRaw) have built-in functions to do things like lens shading, chromatic aberration, defective pixel, and distortion corrections, or will I have to write my own code for that given the specific nuances of my processing pipeline? I believe the only non-standard aspects of my processing pipeline are that I would like to use the FOV gray card for white balance, and to not apply gamma. So it seems like I can rely on RawPy (or RT) functions to get me part of the way, but then I will need some custom code for the white balance correction (i.e. finding the card in the scene and then applying the correction), and I’m wondering if it’s feasible to then return to RawPy (or RT) functions to finish up the processing. Thank you!


mmm … if you just want color, it seems a bit overdone to process a lot of pixels in an image just to get the general color reflected by a surface?

Won’t it be easier to have a sensor and a translucent cap, or to use something like a colorimeter?
A display calibrator may get better results…

Sounds like you’re making a colorimeter… ??

Regarding the additional corrections:

  1. Lens transmissivity: If you can get your hands on a spectrophotometer, you should be able to measure your lens with a full-spectrum light shining through the wide-open lens. Now, that set of measurements will need to also be compensated for the light’s spectral power distribution, but that would be trivial - just remove the lens, and measure the light directly. @ilia3101 has some plots of the sort of data you’ll need here: https://discuss.pixls.us/t/high-quality-spectral-response-data-incoming/28497/16
  2. Defective pixels: If the measured patch of color is relatively uniform, taking the median of each of the three channels should eliminate the effect of defective pixels (see the sketch below).
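To make that concrete, a minimal sketch of the per-channel median in NumPy, assuming the patch has already been cropped out as a linear RGB array:

```python
# Per-channel median over a roughly uniform patch; isolated defective
# pixels do not affect the result.
import numpy as np

def patch_color_median(rgb_patch):
    """rgb_patch: (H, W, 3) linear RGB of a roughly uniform region."""
    return np.median(rgb_patch.reshape(-1, 3), axis=0)
```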

For my camera profiling, I worked a bit on making a Raspberry Pi spectrophotometer, with an RPi and a monochrome camera. I stopped working on it when I discovered the monochrome camera had a 660nm IR cutoff filter, which obliterated any measurements above that.

Have Fun!!

Fernando - thank you very much for your comment! I’ll address your question re. a colorimeter below in my response to Glenn, as he brought this up as well.

Glenn, thank you very much for your helpful comments!

Re. your (and Fernando’s) question about a colorimeter, I’ll first preface my answer by stating that admittedly color science is not my field, so please let me know if it seems I’ve misunderstood anything. The system I am designing is intended to be deployed outside for long periods of time (thus under widely varying illumination conditions), and the location of the object it is observing will be at least 1-2 meters beneath it. Are colorimeters intended for that type of application (especially re. the large distance between sensor and observation)?

Glenn, great point re. defective pixels and the median. Can you clarify why the lens transmissivity info is needed? I do not see that listed in the RPi algorithm/tuning guide. Have the RPi cameras already had this type of characterization performed?

A colorimeter is usually used at close range, to measure a single patch of color. Your device is similarly configured, but it sounds like you have a different application. I’m a bit less familiar with colorimeters than with spectrometer stuff, as it looks like they have an RGB encoding step - I know that mechanism, but I don’t know the destination colorspaces. Someone like @gwgill will be a better source.

It probably depends on the achromaticity (lack of chromatic bias) of your lens. I’ve always assumed the better camera manufacturers worked hard to make their designs as achromatic as possible, but @ilia3101’s charts at the link I posted above belie that. One thing good about using the RPi cameras is that the cameras and their lenses are pretty common, making it more likely one could end up in front of a spectrometer.

I don’t currently have a spectrometer (I fudged that part of my camera profiling), but I’m about to procure one of these:

I think it has an I2C interface, which would be easily interfaced with the RPi. I’ve used the pigpio GPIO library, which has two I2C APIs, and it works nicely with my model train control stuff.
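For reference, talking to an I2C device from Python with pigpio is only a few lines. A minimal sketch - the bus, device address, and register below are placeholders, not values for any particular sensor, and the pigpiod daemon must be running:

```python
# Minimal sketch of reading an I2C register from Python with pigpio.
import pigpio

pi = pigpio.pi()                              # connect to the local pigpiod daemon
handle = pi.i2c_open(1, 0x39)                 # bus 1, hypothetical device address
value = pi.i2c_read_byte_data(handle, 0x00)   # hypothetical register
print("register 0x00:", value)
pi.i2c_close(handle)
pi.stop()
```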

Edit: Oh, if I get one of the above, I have an RPi V2 camera, and I’d be more than willing to point it at the V2 lens, with a tungsten-halogen light shining through it…

The usual reason for using a camera-type sensor to measure color is a need to gather a 2D array of color values. There are specialized systems for doing exactly this type of thing, but of course they are not cheap.
Generally an off-the-shelf RGB camera is not a good color sensor, except in very special situations. The reason is that their spectral sensitivities are typically far from those of the human observer. They can be made to work quite well if the source of color is restricted to 3 dimensions, i.e. an RGB display or a CMY print under a fixed illumination.
Most cheap colorimeters are intended for short-range use (i.e. < 500 mm), but some high-end ones have more specialized optics for measuring at a distance (e.g. something like the Konica Minolta CS-150 or its competitors’ equivalents). Of course, if color accuracy is not so important, a general-purpose camera may give you a rough approximation to a colorimetric value.


Camera sensors are covered by Bayer filters that filter the light each pixel receives.

The red filter does not only let red light pass; it also lets through a lot of green and blue.
There are twice as many green pixels as red or blue ones, so green is favored, but the green filter still lets too much red pass.

Blue is the least favored color, and is usually the least accurate.

So the color accuracy of a camera sensor is not very good.

They are designed to give pleasant results by combining, for each channel, information from the adjacent channels too, not to be color accurate.

You can get some kind of color information from them, but you have to fight against all of that and apply all that color-correction machinery to get something meaningful.

Colorimeters are designed for that.
There are several kinds.

The simplest ones are similar to camera pixels: three monochromatic sensors, each covered by a filter.
But the filters are designed to be more color accurate.
You will have to do some math to convert that information to XYZ colors (that is what the colorimeter software does), but since you only have “one pixel”, you can afford lengthy calculations even on an RPi.

There are other kinds of colorimeters, better than that, with no filters that could degrade over time.

You just need to concentrate the light from the area you want to measure onto the colorimeter.

They usually have a lens, but one designed to measure from a close distance.

Maybe you could use three light meters and cover them with filters.

You would need to calibrate the filters to determine their transfer functions and do the convolution calculation.

I am not an expert at all in this; I’m just sketching the big picture.

There is another type of device: the spectrophotometer.
There is no need for filters there, so they are the most accurate.

They provide you with the energy distribution at each wavelength.

From that you can convolve it with the published transfer functions of the retina (red, green, and blue cones) to convert it to XYZ, and then apply the perceptual transforms to get it into Lab or whatever other color space you want.

I cannot provide details, as color science is very complex and there are many color spaces and subtleties.

You first need to determine what you need it for, the degree of accuracy you need, and what you mean by “measuring color” - in what color space?

In the end, using a spectrophotometer seems the easiest way to do it, as you would have the energy spectrum of the light it receives (you would need to point it at the area of interest, perhaps with a lens) and then use convolutions to do the math and transform it to the space you need.

If you use a lens and need to be accurate, you would need to calibrate the lens to determine its transfer function and correct for it (or get it from the manufacturer).

All of those calculations are standard in the color science world and easily available.

Actually, they are bandpass filters, each in the region of the spectrum for which they’re named.

Filters can be a bit confusing; when I was shooting black-and-white film back in the day, I’d use red filters to take blue skies to black; I thought then the filter was blocking red light but it was really letting only red through, of which in the sky there was just about none. It also blocked most of the blue from reaching the film, making that part of the image black… :crazy_face:

That’s it; each filter is a bandpass filter with a transfer function associated with it.
That means it blocks part of the energy at each wavelength and lets the rest pass.
The amount of light you measure behind the filter is the sum, over all wavelengths, of the energy reaching the filter at each wavelength multiplied by the fraction of energy it lets through at that wavelength (which is the characteristic transfer function of the filter).

Or, mathematically:
let \Phi(\lambda) be the spectral power distribution of the light at wavelength \lambda,
and \Upsilon(\lambda) the proportion of energy the filter lets pass at that wavelength.

The total energy that passes through the filter (and thus the value captured by the sensor) is:

v= \sum_{\lambda} \Phi (\lambda)\Upsilon(\lambda)

For a continuous distribution it is expressed mathematically as the integral of the product of the two functions:

v= \int_{0}^{\infty} \Phi (\lambda)\Upsilon(\lambda)\,d\lambda

If you have three filters (mostly green, mostly red and mostly blue) you get 3 RGB values.
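As a toy numerical example of that sum in NumPy (the light spectrum and the filter shapes below are made-up placeholders, not real camera data):

```python
# Numerical version of v = sum over lambda of Phi(lambda) * Upsilon(lambda),
# evaluated for three placeholder filter curves on a common wavelength grid.
import numpy as np

wavelengths = np.arange(400, 701, 10)          # nm, 10 nm sampling
phi = np.ones_like(wavelengths, dtype=float)   # placeholder flat light spectrum

def gaussian(center, width=40.0):
    # placeholder filter transmission shape
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

filters = np.stack([gaussian(600), gaussian(540), gaussian(460)])  # R, G, B

rgb = (filters * phi).sum(axis=1)   # one weighted sum per filter
print("sensor RGB response:", rgb)
```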

You can’t reconstruct the whole original light spectrum from those three values.

Retinal cone cells are like a combination of a filter and a sensor, and produce a stimulus response for a given light spectrum as the integral of that spectrum weighted by their transfer function (different for each cone type).

The transfer functions for a standard observer are known and standardized.

Exact equations for calculating CIE XYZ values from the camera filter responses are not really available; you cannot derive exactly the values a standard observer would have obtained from the same light spectrum simply from the RGB values the camera sensor recorded.

They are approximated through a linear matrix calculation that combines the camera RGB values:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} M_{rr} & M_{rg} & M_{rb}\\ M_{gr} & M_{gg} & M_{gb}\\ M_{br}& M_{bg} & M_{bb}\\ \end{pmatrix} \begin{pmatrix} r \\ g \\ b \end{pmatrix}

That is essentially the color calculation any raw developer does in its first steps after Bayer interpolation to get the color of a pixel (though in practice they usually convert directly from the original camera RGB space to some working RGB space without calculating intermediate XYZ values).
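A minimal sketch of that matrix step in NumPy; the matrix values are placeholders, since a real matrix comes from profiling the specific camera:

```python
# Camera RGB -> approximate XYZ via a 3x3 matrix (placeholder values).
import numpy as np

M = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])                                          # hypothetical camera-RGB -> XYZ matrix

camera_rgb = np.array([0.30, 0.50, 0.20])   # a white-balanced, linear pixel
xyz = M @ camera_rgb
print("approximate XYZ:", xyz)
```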

So it is possible to approximate XYZ values from the data captured by the sensor, but it won’t be the most accurate calculation.
And if you have a not-so-powerful device, it does not make much sense to do all those calculations (plus the other color corrections needed to get the result into some particular color space) just to get a rough approximation of the color of the light arriving at your device.

It seems much more accurate and convenient to use a spectrophotometer sensor that provides the energy distribution of the light it receives, then convolve it with the transfer functions of the standard observer to get the XYZ values of the color, and transform them from there to whatever color space you want the data in.

These calculations are quick: the integrals reduce to simple multiply-and-sum operations over the sampled wavelengths, which standard numerical libraries handle easily.

If you have no access to a spectrophotometer (too expensive, or none available that can be controlled from the RPi), a colorimeter may provide you with the RGB values it gets from the scene, or you could even use a monochrome sensor covered with filters that you calibrate to get the matrix weights for the transformation from its RGB values to XYZ.
You will need those matrix weights, or you can calibrate them against a calibrated device that provides XYZ values, adjusting your own weights until your results are as close as possible to the calibrated device when both are exposed to the same controlled light.
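A minimal sketch of that calibration: fit the 3x3 matrix by least squares from paired readings of your sensor and the calibrated reference under the same lights (the arrays below are random placeholders standing in for real patch measurements):

```python
# Fit a 3x3 camera-RGB -> XYZ matrix from paired measurements.
import numpy as np

sensor_rgb = np.random.rand(24, 3)       # e.g. 24 test patches (placeholder data)
reference_xyz = np.random.rand(24, 3)    # matching readings from the reference device

# Solve M such that sensor_rgb @ M.T ~= reference_xyz, in the least-squares sense.
M_T, residuals, rank, _ = np.linalg.lstsq(sensor_rgb, reference_xyz, rcond=None)
M = M_T.T

xyz_estimate = sensor_rgb @ M.T          # apply the fitted matrix to new readings
```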

Forgive me for any imprecision in my exposition, but I think that is the general idea.

As I have said previously, I am far from being a color scientist, or even a math geek.


@m_brown, I’m down-selecting a suitable spectrophotometer, and I ran across this; it might be well-suited to your application:

@gwgill @ggbutcher @ariznaf, thanks so much for your helpful comments and information!

A goal for this system is to be low-cost (<$500) to allow for the deployment of many systems, with the understanding that this will likely result in tradeoffs with color accuracy.

Re. colorimeters, the price and object distance are what has kept us from exploring those options. Re. spectrophotometers, we have actually explored their use with Ocean Insight hyper-spectral models (~$1500). However, it’s my understanding that if you point a spectrophotometer at an object, you will get its “apparent” color, i.e. its color influenced by the illumination conditions/geometry, not its “true” color. In our application this resulted in needing three spectrophotometers (~$4500): one looking at the object (at an angle), one looking at the sky (at the same angle to allow for subtraction of sky reflectance), and one looking straight up to measure downwelling irradiance to calculate a reflectance. Using multiple spectrophotometers also results in the need to inter-calibrate them (both wrt wavelength and magnitude - the latter being difficult).

Ultimately we are interested in assessing broad changes in color from blue-green-red, and are hoping that using a single sensor system (no inter-calibration of sensors) consisting of a camera with a gray card in the FOV (to help account for illumination conditions) will allow us to do that. We are now trying to determine the image processing pipeline (e.g. with RT or Python) for this application.

Then, this one might be useful:

18 channels spread from 410nm to 940nm. Three of these will set you back a whole lot less than the Ocean Insight instruments. When they come back in stock, I’m going to buy one to measure the SPD of light sources, which is what you’re doing with the second and third devices in your description. With full-spectrum light, I don’t think you need single-digit-nm resolution, like you might with one of those freaking LED sources…

Agreed - these look very cool! For our current application we would have to deal with the inter-calibration, and maintaining/tracking it throughout numerous diurnal/seasonal temperature changes, etc. which would likely prove tricky. I’m keeping them in the back of my mind for other applications though!

Those sorts of industrial RGB sensors aren’t any more colorimetric than an RGB camera, though. To get any sort of accuracy they need to have standard-observer filters in them - something like this TCS3430-based breakout, for instance.


How do you calibrate them, though? (Note that it needs the same light sample directed into 3 different chips to give you 18 bands.) You need at least a known spectrum to amplitude-calibrate, and some sort of spectral reference to wavelength-calibrate them. Ideally you need a calibrated narrow-band source to plot the spectral sensitivities. Relying on the manufacturer’s “typical” data might get you some of the way if you aren’t too concerned about accuracy.
And having done that, you can approximate standard observer curves by weighting the band values, but with a result possibly not as smooth as 3 purpose-designed filters. 10-20nm bandwidth is typically good for reflectance measurement, where colorants have smooth spectra and the illumination is chosen to be smooth, but they can show their limitations if you are trying to measure narrower emission spectra.
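As an illustration of that weighting, a minimal least-squares sketch in NumPy; the band sensitivities and the target observer curve below are synthetic placeholders, not real sensor or CIE data:

```python
# Approximate a standard-observer curve as a weighted sum of band sensitivities.
import numpy as np

wavelengths = np.arange(410, 941, 5)                # nm
n_bands = 18
centers = np.linspace(410, 940, n_bands)
# placeholder Gaussian band sensitivities, one row per band
bands = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 20.0) ** 2)

# placeholder target: a y-bar-like curve peaking near 555 nm
target = np.exp(-0.5 * ((wavelengths - 555.0) / 50.0) ** 2)

# least-squares weights so that weights @ bands approximates the target curve
weights, *_ = np.linalg.lstsq(bands.T, target, rcond=None)
approx = weights @ bands
```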

I’m looking for a full-spectrum way to measure the illuminator SPD when capturing a diffraction-grating-based spectrum image for camera profiling. Tungsten-halogen SPD is “smoothly distributed”, so I don’t think I need fine resolution. I was about to buy an i1Studio, but they’ve been sold to someone who’s jacked up the price, which got me thinking about the primary purpose of the device, and I decided I could maybe do better with something less resolute.

Oh, and I don’t need to know absolute power, I just need to be able to make a normalized dataset through the range of interest.

The latter is the true color - a reflectance spectrum without illumination cannot cause any stimulus in a human eye and brain, and so is not color.

The resulting color of light reflected from a surface is a product of the spectra of both the surface and the illuminant. So a surface does not have an XYZ value independent of an illuminant!
What a surface does have is a reflectance spectrum. To actually measure this does require a spectrometer of some sort.

That doesn’t sound right. To measure the spectral reflectance of non-fluorescent samples you should only need two spectral measurements - the illumination spectrum and the reflected spectrum.
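A minimal sketch of those two measurements turned into reflectance (placeholder spectra on a common wavelength grid; a real setup would use the instrument’s actual readings):

```python
# Spectral reflectance of a non-fluorescent sample as the ratio of the
# reflected spectrum to the illumination spectrum.
import numpy as np

wavelengths = np.arange(400, 701, 10)                            # nm
illumination = np.full_like(wavelengths, 100.0, dtype=float)     # measured source SPD
reflected = 0.4 * illumination                                   # measured off the sample

eps = 1e-9                                        # guard against gaps in the SPD
reflectance = reflected / np.maximum(illumination, eps)
```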

But an important question is what sort of surfaces you are dealing with. Are they matt or glossy? Matt surfaces can be measured with a 0/45 degree geometry; glossy surfaces are trickier. If you want the total reflectance including specular, you need an integrating sphere.
If you aren’t controlling the illumination then you are also at the mercy of how continuous the illuminant spectrum is. If it has gaps, your S/N at those wavelengths gets bad.
If the sample is matt, then the simplest approach would be to have a mechanical means of switching the sample path between the sample and a reference white illuminated in the same way as the sample.
