Mathematically right values when decomposing to LAB?

gimp

#81

Haha! I am glad you enjoy measuring colors during your free time :wink:

I made hundreds of color measurements with a portable BYK colorimeter. I got values close to
40; 0; 15 for L*a*b* with D65. (Those peels were green, since I am comparing different stages of maturity.)

I was planning to compare those values with the ones I will get by analyzing images (most likely the scans, since I did not manage to get light even enough to take photos), but it reminds me of what @Elle said earlier:

The values I got with the BYK colorimeter are with white point D65, but the values I will get with digital images will be related to white point D50. Hum…!


(Elle Stone) #82

Maybe @afre is saying the same thing and I’m misinterpreting. But it seems to me that scanning at lower-than-maximum optical resolution simply skips readouts: there is no binning going on during the actual scanning, just data points being skipped.

So for maximum signal to noise ratio, I’d suggest scanning using the maximum optical resolution the scanner provides, at the highest bit depth. And then as @afre suggests, use a command-line program to downsize, thereby binning to reduce noise while making a smaller image file.

@snibgo or @afre or other imagemagick experts - what command would be used to downsize using “binning” rather than the more usual downsizing that tries to preserve detail? Also, is there a way to tell imagemagick not to do any “gamma” correction? At this point, before making a scanner profile, it is unknown what native “gamma” the scanner might (but hopefully doesn’t) incorporate even in “raw” output.


#83

Thanks =)

Sorry, it is still not absolutely clear to me.
The plan is to make a custom profile for the scanner using the SpyderCheckr 24 (I have never dealt with profiles before). If I save all my scans with the scanner’s ICC profile embedded, will I then be unable to “replace” that embedded profile with my own “customized” ICC profile made with the target chart? Or am I misunderstanding?


#84

From what I understood from the Help section of the scanner software, the scanner doesn’t incorporate gamma.
I chose “no color correction”, whereas another option was “apply automatic exposure + choose gamma 1.8 or 2.2”.
There was no possibility of choosing a gamma when the “no color correction” option is selected.


(Alan Gibson) #85

I’m not sure what you mean by “binning”.

Suppose we have four values (0, 1/3, 2/3 and 1), and we downsize by “binning” to two values. What are those values?

IM won’t do any gamma correction unless you tell it to. There are a very small number of exceptions to this rule, and “-set colorspace sRGB” takes care of those. This operation doesn’t change pixel data, it only changes metadata, so IM won’t internally convert the image to sRGB.

Beware that some file formats can’t store gamma metadata, and for those formats that can, some software ignores it.

I guess that @MarionGaff1’s “no color correction” means the image is recorded as linear RGB, rather than non-linear sRGB. Testing with and without should answer that question.


(Elle Stone) #86

I didn’t make measurements, but those scans look very flat - nice!

If you upload a similar low resolution scan of the other side (just the scan without any profile), if it would help I would try to make an ICC profile for the scanner using ArgyllCMS, just to see if results look believable without any surprises in the procedure. I’ve never profiled a scanner, but it can’t be that different from profiling a camera.

If you’d rather experiment first before sharing (totally understandable!), here’s the relevant ArgyllCMS “how to” page:
http://argyllcms.com/doc/Scenarios.html#PS1

And here’s where to download ArgyllCMS if you don’t already have it installed (or don’t have the latest version): http://argyllcms.com/ - just scroll down to where it says “Downloads” - there are separate downloads for various operating systems.


(Elle Stone) #87

Well, downsizing is a standard way to reduce noise. For a camera raw file it can be done by asking for the type of interpolation that just combines the RGBG pixels into one “RGB” pixel, producing a smaller but cleaner image. I haven’t ever used that type of interpolation. But in GIMP there is “linear” interpolation, which seems basically to do something like taking the median of the pixels surrounding each pixel (including the pixel itself). Asking for a 50% reduction in width and height does produce a much cleaner image, at least when starting from a non-demosaiced image (it was an experiment, and it worked really well, as the starting file really was very noisy).

This “binning” - and maybe I’m using totally the wrong word, but hopefully I’ve explained what I’m trying to point to by using it - is different from using a scaling algorithm that tries to preserve detail. The assumption is that noise and detail look an awful lot alike to a scaling algorithm, so if size reduction while also decreasing noise is the goal, don’t use algorithms that try to preserve detail.


#88

Here is what I got scanning the other side (200 dpi, no color correction, 48-bit color, intermediate automatic exposure level, Tif format). Same as for the “gray side”: two files, one with the ICC profile embedded and one with NO ICC profile.

I will use ArgyllCMS later on, as soon as I can find the time to do so, because I have several experiments in parallel :smiley:


(Alan Gibson) #89

To me, “binning” means categorizing a large number of pixels into a smaller number of “bins”, and then processing all pixels in the same bin in the same manner.

But never mind the terminology, I’m looking for a precise definition of what you want, so I can suggest the IM operation(s).

IM can do median, if that’s what you want, eg “-median 3x3” changes every pixel to be the median of the 8 neighbours and itself. It doesn’t change the image size.

If your answer to my question about (0, 1/3, 2/3 and 1) was (1/6, 5/6) then the operation would be “-scale 50%”.
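As a quick sanity check of that answer, here is a plain-Python sketch (an illustration only, not ImageMagick source code) of what a box-filter “-scale 50%” does along one dimension: each non-overlapping pair of samples is replaced by its mean.

```python
def scale_half(values):
    """Box-filter downsample by 2: mean of each adjacent, non-overlapping pair."""
    return [(a + b) / 2 for a, b in zip(values[0::2], values[1::2])]

# Alan's example: (0, 1/3, 2/3, 1) -> approximately (1/6, 5/6)
result = scale_half([0, 1/3, 2/3, 1])
```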


(Elle Stone) #90

What I’m looking for is an imagemagick down-sizing command that is something like the code in this file: https://gitlab.gnome.org/GNOME/gegl/blob/master/gegl/buffer/gegl-sampler-linear.c, which according to the online documentation (https://docs.gimp.org/2.10/en/gimp-tools-transform.html#gimp-tool-transform) takes the average of the four nearest pixels, not including the actual pixel. It does work well to downsize when the goal is noise reduction, when starting from a raw file that hasn’t been demosaiced, but instead simply output to disk without first doing an interpolation.
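For illustration only (plain Python, not the GEGL code linked above): halving both dimensions by averaging non-overlapping 2x2 blocks is the kind of detail-indifferent downsize being described here, and for uncorrelated noise, averaging four samples reduces the noise standard deviation by about a factor of two.

```python
def downsize_2x2(img):
    """Halve width and height by averaging each non-overlapping 2x2 block.
    img is a list of equal-length rows; both dimensions assumed even."""
    return [
        [(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
         for x in range(0, len(img[0]), 2)]
        for y in range(0, len(img), 2)
    ]
```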

But I guess I shouldn’t have used the word “binning” - sorry! And given that the scanner actually produces RGB, no interpolation needed, maybe some other down-scaling algorithm would work better.

But again, the goal of the down-scaling, down-sampling? is noise reduction rather than preservation of detail.


(Glenn Butcher) #91

Wouldn’t a gaussian blur do the trick?


(Alan Gibson) #92

With work, I expect that any Gimp process can be replicated in ImageMagick, though it might need C code.

If the Gimp process gives a good result, I would simply use Gimp. Gimp can edit images non-interactively. (At least, it used to be able to. I haven’t tried this recently.) So it could batch-process the images.

IM’s common downsampler is “-resize”, which has infinite varieties, using “-filter”. Most people want to add sharpness, but I expect some varieties will do the opposite.

If noise is a problem, I would treat that as a separate issue to downsampling. Denoise first, then downsample. Different types of noise need different treatment. Maybe a simple blur to remove or reduce high-frequency data. This will also blur edges, which I guess doesn’t matter here.

I’m currently playing with techniques that limit outliers. For example, calculate the mean and standard deviation in a small area around every pixel. If the pixel is outside the range (mean +/- k* std_dev), cap it to be within that range.
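That outlier-limiting idea can be sketched in a few lines. This is a plain-Python, one-dimensional illustration of the rule described above (cap each sample to mean ± k·std_dev of a small neighborhood), not an ImageMagick implementation.

```python
import statistics

def cap_outliers(pixels, k=2.0):
    """Cap each sample to mean +/- k*stdev of its 3-sample neighborhood
    (the sample plus its two horizontal neighbors, clamped at the edges)."""
    out = []
    n = len(pixels)
    for i in range(n):
        window = pixels[max(0, i - 1):min(n, i + 2)]
        mean = statistics.fmean(window)
        sd = statistics.pstdev(window)
        lo, hi = mean - k * sd, mean + k * sd
        out.append(min(max(pixels[i], lo), hi))
    return out
```

A lone spike gets pulled toward its neighborhood, while flat regions pass through unchanged.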

But will noise be a problem when scanning orange peel? I don’t know.


#93

Sorry about leading you all down a rabbit trail with binning and downsampling. I admit I haven’t been exactly concise in my posts, but I’m glad the hints led somewhere. I blame insomnia. :blush:

The first question we need to ask is
– What are the native input properties of the scanner? That would be its optimal setting.

The next question we ask is
– What data resolution (spatial, temporal, etc.) and precision would best represent the thing being observed? There is usually a sweet spot: more or less information than necessary usually introduces more problems and therefore requires more considerations (time and energy).

This is why reading dozens if not hundreds of papers from previous research is vital, and why I keep pushing on my other points. The design of the experiment and the research guide what we do.


(Elle Stone) #94

The point of down-sizing in this discussion is that the scanner produces files that are very large. A first step towards making the file size manageable is to downsize. Some down-sizing operations actually add sharpening; that would be a mistake for the current use case and would just add artifacts.

I think GIMP’s linear scaling is a really good choice for the current use case. Nohalo/lohalo are very CPU-intensive, “cubic” adds sharpening, and “none” just takes every other pixel when you ask for a 50% size reduction, so it isn’t really any different from selecting a lower-than-native scanning resolution.

I don’t know how to use GIMP from the command line, and I bet trying to open the full-size scan in the GIMP UI would be difficult, depending on one’s computer and amount of RAM.

Scaling in GIMP is actually done using GEGL, which can also be used directly at the command line. I don’t know the command. People on GIMP IRC might know; in particular, Pippin would know for sure. Another place to ask Pippin about using GEGL at the command line would be the GIMP-dev mailing list or the babl/GEGL-dev mailing list.

@ggbutcher - A small gaussian blur can be useful as a preliminary step before downsizing. But gaussian blur is not a down-sizing operation, which I’m sure you know :slight_smile: so clearly there’s been a miscommunication somewhere. I’ll take the blame!

Gaussian blur technically speaking actually samples the entire image for every pixel in the image, though in practice various short-cuts are taken. The point of down-sizing in the current use case isn’t to blur the image but to down-size the image to a more manageable file size.

I don’t think it’s a rabbit hole. Rather, it’s a very interesting and useful/practical topic, especially today when cameras and scanners can produce huge files, far larger than the user may want or need. I’m thinking about starting a new thread on the topic unless someone else does so before I get around to it :slight_smile: .

@MarionGaff1 - I did make several different types of ICC profiles for your scanner using the reduced-size target chart you uploaded. I’ll try to post some images and the relevant commands later today or more likely not until tomorrow morning (US east coast time). In the meantime I would suggest a couple things when scanning the chart:

It would be good to put a piece of black paper behind the chart, large enough to cover the scanner bed. It might not matter for a scanner - I just don’t know. But for a camera, surrounding the target with white just causes camera “veiling flare” - I think that’s the right term. So to be on the safe side it might be better to just eliminate the possibility by using a black background behind the target chart.

Also, speaking of the white background, I measured a small amount of difference at one end of the background vs the other end, around L=92-93 at one end, down to L=89-90 at the other end. I’m not sure how much this might affect results, but maybe flat-fielding - again, something I’ve never done - before making the ICC profile might be a good idea. The fall-off in intensity seems uniform and consistent for all the various scans.

On the other hand, maybe flat-fielding would be more trouble than it’s worth. It might be interesting to put the target in the center of the scanner and scan it twice, spinning it 180 degrees for the second scan, and comparing resulting ICC profiles from each scan.

Does anyone here have experience with using flat fields? I know RT makes the process easy, but the online documentation makes it seem as if RT only works with raw files for flat-fielding.


(Glenn Butcher) #95

I was thinking more of taking a patch from the image and blurring it, which would ‘average’ out the pixels to allow taking a more consistent LAB value.


#96

Thank you so much!

Here are the scans that I obtained without ICC profile integrated.


I still have not figured out whether I should save my scans with the ICC profile embedded or not.
If I save all my scans with the ICC profile embedded, will I then be unable to “replace” the scanner’s embedded ICC profile with my own “customized” ICC profile made with the target chart, OR are the embedded profile and the image “glued together forever”?
If I understand correctly what ICC profiles are, they are made to give information about the frame of reference to use to properly interpret the coordinates that define each color.
So, if I don’t embed any ICC profile, how can software open the scans with the right colors (the right colors being the information given by the scanner)?


#97

And with ICC profile (black background 200 dpi)



(Elle Stone) #98

Hi @MarionGaff1 - whatever your scanner software or user manual might be telling you, there’s no embedded ICC profile in any of the scan files, at least not in the ones I’ve downloaded and checked whose file names indicated a profile was embedded.

You might want to open one supposedly with, and one without a profile embedded by the scanner - both scanned without repositioning the target or opening the window, etc - and calculate the “difference” which should be zero for all pixels. Visually there doesn’t seem to be any difference. Imagemagick does allow this “difference” operation from the command line. Example imagemagick commands can be found here: https://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html - but that’s an old article, perhaps syntax has changed in the meantime. Don’t bother reading the text unless you just happen to want to, just locate the sample imagemagick commands. Or ask @afre or @snibgo for the syntax.

When using command line tools such as exiftool or ArgyllCMS to examine metadata (for example, to see if there’s any embedded ICC profile), life is hugely simpler if file names don’t have any spaces - use underscores or hyphens instead.

Doing a quick check, the darkest patches of a scan of the target chart with the white background are consistently a bit lighter compared to a scan with the dark background. But the Lab Lightness difference is less than one. Of course comparing two scanned files in such a slapdash fashion isn’t worth much from a statistical perspective so I’ll leave it to you to do further experimenting, or else just use the black background. You might also want to experiment to see what difference using a blue vs a black background for the orange peels might make in terms of possibly scattered light.

You are absolutely right. Software needs an embedded ICC profile to properly interpret the colors:

  1. First scan the target chart.
  2. Then use ArgyllCMS to make the scanner input profile. ArgyllCMS profile-making utilities don’t need an input profile.
  3. Then use Imagemagick or GIMP or other command line utility or image editing software to embed the scanner input profile into the scan of the target chart and also embed the input profile into all the scans of the orange peels and anything else you might need to scan.

Here’s a sample command to use exiftool to embed an ICC profile:

exiftool "-icc_profile<=/path/to/your/scanner-profile.icc" /path/to/your/scanned-target.tif

So after you make the input profile, if the scanner profile is named “scanner-profile.icc” and the scan of the target chart is named “scanned-target.tif”, and both are in the same folder, then cd to the folder and type this:

exiftool "-icc_profile<=scanner-profile.icc" scanned-target.tif

And the usual warning: Any time someone hands you a command to type into a terminal, test, test, test and test again to verify that the command is working as expected. Even with trustworthy sources of such commands, typos happen, people forget the right syntax, and syntax does vary from one OS to the next. I don’t know Windows or Mac syntax. I only know Linux syntax.

Here’s the exiftool web page: https://sno.phy.queensu.ca/~phil/exiftool/
There’s an exiftool forum for asking about syntax and such.

ImageMagick, LCMS, ArgyllCMS, and no doubt quite a few other programs all have utilities for embedding ICC profiles at the command line. Exiftool isn’t the only option.

Well, that’s all the typing I want to do for now. My apologies for running out of steam before uploading the scanner profile and sample commands :slight_smile:


(Alan Gibson) #99

Those ImageMagick commands should work fine for IM v6, or IM v7 if installed “with legacy commands”.

I recommend not using “composite” for anything. That is a very old tool, long deprecated. Instead, use “convert … -composite”, with the two input images in the opposite order. That way, a sequence of IM commands can be readily squished into a single command.


(Elle Stone) #100

@snibgo - thanks! I doubt I’ll ever redo those commands to update them to IM v7. But I’ll add a note to my article with the information you provided in case anyone is ever actually tempted to try to follow along the steps in the article.