Mathematically correct values when decomposing to Lab?

It is playing time up here in Sweden: I picked up a Nadorcott and my ColorMunki Photo spectrophotometer. Using Argyll’s spotread, I measured five more or less random spots on the Nadorcott (handheld), and here is what the ColorMunki reported:

XYZ: 29.621478 20.013466 1.305972, D50 Lab: 51.852433 44.910105 66.767355
XYZ: 32.354465 22.585119 1.937511, D50 Lab: 54.642401 42.956591 64.520720
XYZ: 31.430769 21.333597 1.291062, D50 Lab: 53.312679 45.349696 69.476870
XYZ: 27.254337 18.020918 0.972389, D50 Lab: 49.521469 45.719796 67.450873
XYZ: 32.618553 22.894400 1.436598, D50 Lab: 54.963400 42.515748 70.509651
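For anyone who wants to check the math in the thread title: these D50 Lab numbers follow straight from the CIE definition. A quick check, assuming the standard D50 white point (Xn = 96.422, Yn = 100.0, Zn = 82.521), and noting that all the values above are over the linear-branch threshold:

$$
f(t) = t^{1/3} \;\text{for}\; t > (6/29)^3, \qquad
L^* = 116\, f(Y/Y_n) - 16, \quad
a^* = 500\,[f(X/X_n) - f(Y/Y_n)], \quad
b^* = 200\,[f(Y/Y_n) - f(Z/Z_n)]
$$

Plugging in the first row gives L* ≈ 51.85, a* ≈ 44.9, b* ≈ 66.8, matching spotread’s output.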

Have fun!
Claes in Lund, Sweden

I had a look at the paper you sent, and there are indeed some interesting points for setting up the scanner:
“Each fruit was sliced at the equator at a thickness of 1 cm, and the slice was captured by a flatbed image scanner (GT-X980, Seiko Epson Corp., Suwa, Nagano, Japan). The flatbed scanner is readily available and allows the easy acquisition of highly stable samples because its shooting environment (i.e., the configuration of light source, camera and scanning surface) is fixed within the same product. The cover of the scanner (the transparency unit) was removed to accommodate the fruit; thus, in order to eliminate external lighting, the fruit was covered by a black Bakelite board during scanning. The scanning resolution was 1200 dpi and the color depth was 16 bits per R/G/B component.”

I had planned to cover the peels with blue paper (the same background I used for taking photos with the camera), cover the blue paper with a black cloth, and then press it all down with the scanner’s usual cover.

1. Byte order should not matter since you are saving to a TIFF, which is a standard format (readers handle either endianness).
2. If the ICC profile isn’t embedded, it means that the data is in the scanner’s native colour space and you can make a custom profile for the scanner. (It is possible that the scanner firmware or app converts the colour space internally, but then why would we have the option of not including the ICC profile?)


One thing I would add to the dpi vs bit depth business: I would rather scan at a lower dpi with the native (maximum) bit depth, since colour integrity and gradients are more important to you. Ideally, you would scan at 1200 dpi and 48 bits, then downsample the resolution using a batch script.
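Something like this would do for the batch step - a sketch with ImageMagick, where the filenames and percentage are placeholders:

```
# downsize every 48-bit, 1200 dpi scan to a quarter of its size (i.e. 300 dpi);
# "-scale" does plain box averaging, no sharpening
for f in scan_*.tif; do
  convert "$f" -scale 25% "small_$f"
done
```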

Haha! I am glad you enjoy measuring colors during your free time :wink:

I did hundreds of colour measurements with the portable BYK colorimeter. I got values close to L*a*b* = 40; 0; 15 with D65. (Those peels were green, since I am comparing different stages of maturity.)

I was thinking of making a comparison between those and the values I will get by analyzing images (more likely the scans, since I did not manage to get lighting even enough to take photos), but it makes me think of what @Elle said earlier:

The values I got with the BYK colorimeter are relative to the D65 white point, but the values I will get from digital images will be relative to the D50 white point. Hum…!
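(If I understand correctly, Lab values under different white points can’t be compared directly; the usual route is to convert each Lab value back to XYZ and apply a chromatic adaptation transform. A sketch, assuming the commonly quoted Bradford D65→D50 matrix - please double-check the numbers, e.g. against Bruce Lindbloom’s site:

$$
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{D50} \approx
\begin{pmatrix}
 1.0478112 & 0.0228866 & -0.0501270 \\
 0.0295424 & 0.9904844 & -0.0170491 \\
-0.0092345 & 0.0150436 & \phantom{-}0.7521316
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{D65}
$$

then recompute Lab using the D50 white point.)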

Maybe @afre is saying the same thing and I’m misinterpreting. But it seems to me that scanning at lower than the maximum optical resolution simply skips readouts: there is no binning going on during the actual scan, just skipping over data points.

So for maximum signal to noise ratio, I’d suggest scanning using the maximum optical resolution the scanner provides, at the highest bit depth. And then as @afre suggests, use a command-line program to downsize, thereby binning to reduce noise while making a smaller image file.

@snibgo or @afre or other ImageMagick experts - what command would be used to downsize using “binning”, rather than the more usual downsizing that tries to preserve detail? Also, is there a way to tell ImageMagick not to do any “gamma” correction? At this point, before making a scanner profile, it is unknown what native “gamma” the scanner might (but hopefully doesn’t) incorporate even in its “raw” output.

Thanks =)

Sorry, it is still not absolutely clear to me.
The plan is to make a custom profile of the scanner thanks to the SpyderCheckr 24 (I have never dealt with profiles before). If I save all my scans with the ICC profile embedded, will I then not be able to “replace” the scanner’s embedded ICC profile with my own “customized” ICC profile made with the target chart? Or am I misunderstanding?

From what I understood from the Help section of the scanner software, the scanner doesn’t apply a gamma.
I chose “no color correction”, whereas one of the other options was “apply automatic exposure + choose gamma 1.8 or 2.2”.
There was no possibility of choosing a gamma with the “no color correction” option.

I’m not sure what you mean by “binning”.

Suppose we have four values (0, 1/3, 2/3 and 1), and we downsize by “binning” to two values. What are those values?

IM won’t do any gamma correction unless you tell it to. There are a very small number of exceptions to this rule, and “-set colorspace sRGB” takes care of those. This operation doesn’t change pixel data, it only changes metadata, so IM won’t internally convert the image to sRGB.
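For example, a minimal sketch (filenames are placeholders):

```
convert scan.tif -set colorspace sRGB -scale 50% scan_half.tif
```

Here “-set colorspace sRGB” only tags the metadata; the pixel values pass through untouched.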

Beware that some file formats can’t store gamma metadata, and for those formats that can, some software ignores it.

I guess that @MarionGaff1’s “no color correction” means the image is recorded as linear RGB, rather than non-linear sRGB. Testing with and without should answer that question.

I didn’t make measurements, but those scans look very flat - nice!

If you upload a similar low-resolution scan of the other side (just the scan, without any profile), I would try to make an ICC profile for the scanner using ArgyllCMS, just to see whether the results look believable and whether the procedure holds any surprises. I’ve never profiled a scanner, but it can’t be that different from profiling a camera.

If you’d rather experiment first before sharing (totally understandable!), here’s the relevant ArgyllCMS “how to” page:
http://argyllcms.com/doc/Scenarios.html#PS1

And here’s where to download ArgyllCMS if you don’t already have it installed (or don’t have the latest version): http://argyllcms.com/ - just scroll down to where it says “Downloads” - there are separate downloads for various operating systems.
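For reference, the procedure on that page boils down to two commands. A sketch, assuming your Argyll version ships reference files for the SpyderCheckr 24 (the .cht/.cie filenames are guesses - check Argyll’s ref directory):

```
# locate the patches in the scanned chart and read their values (writes chart.ti3)
scanin -v chart.tif SpyderChecker24.cht SpyderChecker24.cie

# build an ICC profile from the measured patch values
colprof -v -D "Epson GT-X980 scanner" -qm chart
```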

Well, downsizing is a standard way to reduce noise. For a camera raw file it can be done by asking for the type of interpolation that just combines each RGBG quad into one “RGB” pixel, producing a smaller but cleaner image. I haven’t ever used that type of interpolation. But in GIMP there is “linear” interpolation, which seems to do something like taking the median of the pixels surrounding each pixel (including the actual pixel). Asking for a 50% reduction in width and height does produce a much cleaner image, at least when starting from a non-demosaiced image (it was an experiment, and it worked really well, as the starting file really was very noisy).
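(With dcraw, I believe the RGBG-combining interpolation is the “-h” half-size option - a sketch, with a placeholder filename:

```
# half-size output: each RGBG quad becomes one RGB pixel, no demosaicing;
# -T writes a TIFF instead of a PPM
dcraw -h -T photo.nef
```

though I haven’t verified how closely that matches GIMP’s linear interpolation.)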

This “binning” - and maybe I’m using totally the wrong word, but hopefully I’ve explained what I’m trying to point to by using it - is different from using a scaling algorithm that tries to preserve detail. The assumption is that noise and detail look an awful lot alike to a scaling algorithm, so if size reduction while also decreasing noise is the goal, don’t use algorithms that try to preserve detail.

Here is what I got scanning the other side (200 dpi, no color correction, 48-bit color, intermediate automatic exposure level, TIFF format). The same as for the “gray side”: two files, one with the ICC profile embedded, one with no ICC profile. (scans attached)

I will use ArgyllCMS later on, as soon as I can find the time to do so, because I have several experiments in parallel :smiley:

To me, “binning” means categorizing a large number of pixels into a smaller number of “bins”, and then processing all pixels in the same bin in the same manner.

But never mind the terminology, I’m looking for a precise definition of what you want, so I can suggest the IM operation(s).

IM can do median, if that’s what you want, eg “-median 3x3” changes every pixel to be the median of the 8 neighbours and itself. It doesn’t change the image size.

If your answer to my question about (0, 1/3, 2/3 and 1) was (1/6, 5/6) then the operation would be “-scale 50%”.
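(That is, “-scale” does plain box averaging: each output pixel is the unweighted mean of the input pixels it covers, so (0 + 1/3)/2 = 1/6 and (2/3 + 1)/2 = 5/6.)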

What I’m looking for is an ImageMagick downsizing command that does something like the code in this file: https://gitlab.gnome.org/GNOME/gegl/blob/master/gegl/buffer/gegl-sampler-linear.c, which, according to the online documentation (https://docs.gimp.org/2.10/en/gimp-tools-transform.html#gimp-tool-transform), takes the average of the four nearest pixels, not including the actual pixel. It works well for downsizing when the goal is noise reduction, when starting from a raw file that hasn’t been demosaiced but instead was simply written to disk without interpolation.
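(If there is an ImageMagick near-equivalent, my guess - unverified - is the bilinear “Triangle” filter:

```
convert scan.tif -filter Triangle -resize 50% scan_half.tif
```

but someone who knows IM’s filters better should confirm.)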

But I guess I shouldn’t have used the word “binning” - sorry! And given that the scanner actually produces RGB, no interpolation needed, maybe some other down-scaling algorithm would work better.

But again, the goal of the down-scaling (down-sampling?) is noise reduction rather than preservation of detail.

Wouldn’t a Gaussian blur do the trick?

With work, I expect that any Gimp process can be replicated in ImageMagick, though it might need C code.

If the Gimp process gives a good result, I would simply use Gimp. Gimp can edit images non-interactively. (At least, it used to be able to. I haven’t tried this recently.) So it could batch-process the images.
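For example, this sort of Script-Fu batch call used to work - a sketch, untested, with placeholder sizes and filenames:

```
gimp -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "scan.tif" "scan.tif"))))
              (gimp-context-set-interpolation INTERPOLATION-LINEAR)
              (gimp-image-scale image 1000 1500)
              (gimp-image-flatten image)
              (file-tiff-save RUN-NONINTERACTIVE image
                              (car (gimp-image-get-active-drawable image))
                              "scan_small.tif" "scan_small.tif" 0))' \
     -b '(gimp-quit 0)'
```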

IM’s common downsampler is “-resize”, which has infinite varieties via “-filter”. Most people want to add sharpness, but I expect some varieties will do the opposite.

If noise is a problem, I would treat that as a separate issue to downsampling. Denoise first, then downsample. Different types of noise need different treatment. Maybe a simple blur to remove or reduce high-frequency data. This will also blur edges, which I guess doesn’t matter here.
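Eg, a minimal sketch (the blur radius and scale factor are placeholders to experiment with):

```
# light blur to knock down high-frequency noise, then plain box-average downsample
convert scan.tif -gaussian-blur 0x1 -scale 25% scan_small.tif
```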

I’m currently playing with techniques that limit outliers. For example, calculate the mean and standard deviation in a small area around every pixel. If the pixel is outside the range (mean +/- k * std_dev), cap it to be within that range.

But will noise be a problem when scanning orange peel? I don’t know.

Sorry about leading you all down a rabbit trail with binning and downsampling. I admit I haven’t been exactly concise in my posts, but I’m glad the hints led somewhere. I blame insomnia. :blush:

The first question we need to ask is
– What are the native input properties of the scanner? Those would be its optimal settings.

The next question we ask is
– What data resolution (spatial, temporal, etc.) and precision would best represent the thing being observed? There is usually a sweet spot: more or less information than necessary usually introduces more problems and therefore requires more consideration (time and energy).

This is why reading dozens if not hundreds of papers of previous research is vital, and why I keep pushing my other points. The design of the experiment and the research questions guide what we do.

The point of downsizing in this discussion is that the scanner produces files that are very large. A first step towards making the file size manageable is to downsize. Some downsizing operations actually add sharpening; that would be a mistake for the current use case, as it would just add artifacts.

I think GIMP’s linear scaling is a really good choice for the current use case. Nohalo/lohalo are very CPU-intensive, “cubic” adds sharpening, and “none” just takes every other pixel when you ask for a 50% size reduction, so it isn’t really any different from selecting a lower-than-native scanning resolution.

I don’t know how to use GIMP from the command line, and I bet trying to open the full-size scan in the GIMP UI would be difficult, depending on one’s computer and amount of RAM.

Scaling in GIMP is actually done using GEGL, which can also be used directly at the command line. I don’t know the command. People on GIMP IRC might know; in particular, Pippin would know for sure. Other places to ask Pippin about using GEGL at the command line would be the GIMP-dev mailing list or the babl/GEGL-dev mailing list.

@ggbutcher - A small Gaussian blur can be useful as a preliminary step before downsizing. But Gaussian blur is not a downsizing operation, which I’m sure you know :slight_smile: so clearly there’s been a miscommunication somewhere. I’ll take the blame!

Gaussian blur, technically speaking, actually samples the entire image for every pixel in the image, though in practice various shortcuts are taken. The point of downsizing in the current use case isn’t to blur the image but to bring it down to a more manageable file size.

I don’t think it’s a rabbit hole. Rather, it’s a very interesting and useful/practical topic, especially today when cameras and scanners can produce huge files, far larger than the user may want or need. I’m thinking about starting a new thread on the topic unless someone else does so before I get around to it :slight_smile: .

@MarionGaff1 - I did make several different types of ICC profiles for your scanner using the reduced-size target chart you uploaded. I’ll try to post some images and the relevant commands later today, or more likely not until tomorrow morning (US east coast time). In the meantime, I would suggest a couple of things when scanning the chart:

It would be good to put a piece of black paper behind the chart, large enough to cover the scanner bed. It might not matter for a scanner - I just don’t know. But for a camera, surrounding the target with white just causes camera “veiling flare” - I think that’s the right term. So to be on the safe side it might be better to just eliminate the possibility by using a black background behind the target chart.

Also, speaking of the white background, I measured a small difference from one end of the background to the other: around L=92-93 at one end, down to L=89-90 at the other. I’m not sure how much this might affect results, but maybe flat-fielding - again, something I’ve never done - before making the ICC profile might be a good idea. The fall-off in intensity seems uniform and consistent across the various scans.

On the other hand, maybe flat-fielding would be more trouble than it’s worth. It might be interesting to put the target in the center of the scanner and scan it twice, rotating it 180 degrees for the second scan, and compare the resulting ICC profiles from each scan.

Does anyone here have experience with using flat fields? I know RT makes the process easy, but the online documentation makes it seem as if RT only works with raw files for flat-fielding.

I was thinking more of taking a patch from the image and blurring it, which would ‘average’ out the pixels and allow taking a more consistent Lab value.
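Something like this, perhaps - a sketch with ImageMagick, where the crop geometry is a placeholder (it averages the patch down to one pixel rather than blurring it, which amounts to the same thing here):

```
# average a 40x40 patch down to a single pixel and print it in IM's Lab representation
convert scan.tif -crop 40x40+500+300 +repage -scale 1x1! -colorspace Lab txt:-
```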

Thank you so much!

Here are the scans that I obtained without an embedded ICC profile. (scans attached)
I still haven’t figured out whether I should save my scans with the ICC profile embedded or not.
If I save all my scans with the ICC profile embedded, will I then not be able to “replace” the scanner’s embedded ICC profile with my own “customized” ICC profile made with the target chart, OR are the embedded ICC profile and the image “glued together” forever?
If I understand well what ICC profiles are, they are made to give information about the frame of reference needed to properly interpret the coordinates that define each color.
So, if I don’t embed any ICC profile, how is software able to open the scans with the right colors (the right colors being the information given by the scanner)?

And here with the ICC profile embedded (black background, 200 dpi):