Feedback with the use of CLUTs

Hi there,

My colleagues and I have written a long journal paper about our CLUT compression technique
(the one implemented in G’MIC, in the filters Colors / Color Presets and Colors / Film Simulation), which is an extended version of our previous research report: https://hal.archives-ouvertes.fr/hal-02066484. We have submitted it to TOG (ACM Transactions on Graphics).

We received the reviews a few weeks ago, and the paper has been rejected. Some of the reviewers made very useful comments, and we have already worked on improving the paper with their suggestions (mainly considering colorspaces other than sRGB for the compression / decompression steps).

However, one of the reviewers was very negative, with comments that make me suspect he is not impartial. I think he may be one of the authors of a “competing” paper that performs CLUT compression (a lossless compression technique, so with compression rates far lower than those we obtain with our method).
I have no clear evidence for this assertion, but a reviewer who complains that a bunch of papers from the same authors are missing from the bibliography is generally not a good sign, particularly when one of their papers is already cited in it.

I’d really like to have your thoughts on some of the remarks he made. I’m not sure that what he says is 100% correct. But I may be wrong. So your opinions are welcome.

"In general, I would like to summarize my thought that this paper is not really about compression, rather it focuses on the reconstruction of a color look-up table using a discrete set of points by iterative placement. I would encourage the authors to look at the use of ICC profiles which are an industry standard for communication of color transforms.

One of the key differences between this work and ICC profiles is that ICC profiles use a regularly gridded structure, whereas this work uses a sparse distribution of keypoints located to characterize the curvatures of the color space.

On line 164, the statement is made that “Usually a CLUT is stored either as an ASCII zipped file …, or as a PNG image”. This statement is not correct. Although the examples that you have shown are stored in that manner, in my 20+ years of experience, color tables are commonly stored using ICC profiles, or binary formats."

So my first question is: have you ever seen image retouching software offering color-modification filters based on the use of ICC? I was not aware of such software. In our paper, it is quite clear from the introduction that we are interested in the “artistic” side of color modifications.
I thought ICC profiles were mainly used for color correction of devices (printers, scanners, monitors, etc.).

"On line 172, the statement is made “typical to sizes 32^3, 48^3, 64^3”. As an engineer that has implemented many 3D CLUTs over my career, I would never use an even sampling of the input color space. When using an even number (16, 32, etc.) it means that the domain is sampled in non-integer increments. For example, if using a 16^3 CLUT, you only have 15 subdivisions, resulting in an increment of 17.066 per node spacing. This is not desirable, so it is common to use an odd number that is one greater than a power of two, for example 17 or 33, such that the increment will be an integer value."

I would say that 17^3 is a ridiculously low resolution for most CLUTs. Do you think 17^3 is acceptable in any way for general CLUTs? I’ve already processed CLUTs with a higher level of detail (variations) in them. There is no way they can be subsampled accurately at a 17^3 resolution.
Also, I’m not sure I agree that a 2^n + 1 resolution is preferable. I don’t really see the point. What I know is that with 2^n resolutions, you can often generate a 2D image by unrolling the values of the 3D CLUT, and save it as a .png file, which is really convenient (the CLUT set from RawTherapee is provided as 1728x1728 .png files, and I don’t think this is a wrong approach).
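To illustrate the unrolling: mapping an N^3 CLUT to a square 2D image is just index arithmetic. A quick Python sketch (where `unroll_index` is a hypothetical helper, not G’MIC code) of the Hald-CLUT-style layout used by those RawTherapee .png files:

```python
# Unrolling an n^3 CLUT into a square 2D image (Hald-CLUT-style layout).
# Works whenever n**3 is a perfect square, e.g. n = 64 -> 512x512 pixels,
# n = 144 -> 1728x1728 (the RawTherapee .png size mentioned above).
import math

def unroll_index(r, g, b, n):
    """Map a 3D node index (r, g, b) in an n^3 CLUT to (x, y) in the image."""
    side = math.isqrt(n ** 3)              # image is side x side pixels
    assert side * side == n ** 3, "n^3 must be a perfect square"
    flat = r + g * n + b * n * n           # scanline order: r fastest, b slowest
    return flat % side, flat // side

# 144^3 nodes fit exactly into a 1728x1728 image:
assert math.isqrt(144 ** 3) ** 2 == 144 ** 3
print(unroll_index(0, 0, 0, 144))        # first node -> top-left pixel (0, 0)
print(unroll_index(143, 143, 143, 144))  # last node -> bottom-right (1727, 1727)
```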
Maybe the guy has only designed CLUTs for printers or devices with limited memory ?

"It is clear that the authors themselves are not up-to-date on the current standard used in the industry. ICC profiles have been around for over 20 years and provide a very efficient mechanism for describing color appearance transforms.
In general, I would suggest that the authors spend a significant amount of additional time learning about ICC profiles, color science and color tables. That will provide a solid foundation upon which to attempt to re-write this paper."

So here again, I’m asking: have you ever used software that is mainly based on ICC profiles to apply color transformations to images? Every CLUT pack I’ve downloaded from the net only provides CLUTs, as .cube or .png files. I’ve never seen any .icc for this purpose.


welcome to the glamorous world of TOG i suppose.

pascal has done a prototype to use ICC/argyll to describe arbitrary colour transforms:

but i would not call this an “industry standard”. also i wonder which part of the old industry this may have been. i thought every hip kid in 2015 used OpenColorIO instead of ICC based workflows. maybe this is movie vs photography.

i suppose the fixed point/integer argument is valid for 8-bit input. in this case you don’t need to round to the nearest values or interpolate. i’m not sure it is a very good argument here though, it sounds like an implementation-specific off-by-0.5 rounding issue.

going forward i suppose you could make it clearer in your writeup that you’re working with artistic/film emulation luts with high frequency and abrupt changes. now, your intro begins with “Color calibration and correction tools”, which probably misled the reviewer into thinking you’re talking about colour calibration instead. those luts might be smoother, much more well-behaved, and only work on a specified input bit depth. in any case a discussion of opencolorio/lut profiles/icc would help.

in general, SIGGRAPH/TOG is a very excited environment. if you get reviews that you see as unfair, it might be because the reviewers were not excited about your writeup. this may be lack of “wow factor” in the application or in the technical contribution. have you considered VMV, or CGF? deadline for eurographics short papers is 20th of december…

good luck with the submission.


You made me curious, so I went looking and stumbled onto this page from Adobe about exporting CLUTs. They include ICC profiles as one of the formats.

I understand that you can still export CLUTs as .icc profiles.
I’d be really interested to know which well-known image retouching software lets you import .icc profiles just to apply a color-modification filter to an image :slight_smile:

I haven’t seen such software. But even if there is no such software, this doesn’t disprove the reviewer’s comment. ICC profiles do store colour tables, and those could be used for image retouching. David’s task is to show that, although he could store CLUTs in ICC profiles, he doesn’t for reasons X, Y and Z.

I can see that argument. A channel with 256 possible values can be divided exactly into 16 divisions, which means 17 samples. Of course 17 is very low, but the same argument applies to 65536 possible values, which can be divided exactly into 32 divisions, which needs 33 samples.

So the argument is valid, but is it correct for this application? Does it matter that David’s scheme has non-integer divisions? I don’t know. David might say it doesn’t matter for reasons X, Y and Z.

(Disclaimer: I have no known connection with David, or TOG, or the reviewer.)

Because I’ve seen no software that takes .icc profiles as input to apply color changes to an image?
Apart from that, I’m not sure that saying file format X is better than Y is useful, unless you demonstrate X is better suited to storing the data. Here .icc profiles are able to store CLUTs, and then what?
As the reviewer said, that’s still a classical storage as an array of colors, so I don’t see the added value compared to using .png or .cube.zip files.

I still don’t get it, why subdividing by 17 is better than by 16…

Hi,

(disclaimer: I didn’t read the paper, so take my comments for what they are…)

FWIW, I agree with @snibgo, this seems to be a fair point, and something you should be able to address very easily in your revised version. It doesn’t matter much that you think it’s not worth it: as you have been unlucky enough to find a (self-proclaimed) expert reviewer who thinks this is a necessary comparison, this is something you “should” do if you really want to resubmit to the same journal. It wouldn’t be the first time people add content just to please some reviewers looking for an excuse to “kill” a paper. I agree it shouldn’t work that way, but unfortunately sometimes you have to bite the bullet…

On the other hand, I can totally understand why you were upset by this comment:

In general, I would suggest that the authors spend a significant amount of additional time learning about ICC profiles, color science and color tables. That will provide a solid foundation upon which to attempt to re-write this paper.

This kind of patronizing statement serves no purpose other than feeding the reviewer’s ego, and I think the best thing to do is simply to ignore it…

HTH

We are not dividing by 17. We are dividing into 16 intervals, which means taking 17 samples.

This may be a misunderstanding by the reviewer. (If so, the submitted paper may need to be clearer.) For input values of 0 to 255, so 256 possible values, there might be a sample at value=256. The reviewer seems to assume that is so.

With 17 samples, the sample points are then at 0, 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224, 240, and 256.

With 16 samples, the sample points would be at 0, 17.066, 34.133, 51.2, … 256.

On the other hand, David’s scheme may not take a sample at 256. In that case, 16 samples would mean 16 (not 15) intervals.

Not quite, the upper limit for an 8 bit table would be 255, not 256.

For integer increments you want 255/(n-1) to be an integer; since the prime factors of 255 are 3, 5 and 17, you get integer steps when n is a product of some of these plus 1.

e.g for 16 steps, the samples are at 0, 17, 34, … 238, 255.
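That divisibility rule is easy to check exhaustively; a quick Python sketch:

```python
# For an 8-bit domain [0, 255], n samples give n-1 intervals; the spacing
# 255 / (n - 1) is an integer exactly when n - 1 divides 255 = 3 * 5 * 17.
sizes = [n for n in range(2, 257) if 255 % (n - 1) == 0]
print(sizes)  # -> [2, 4, 6, 16, 18, 52, 86, 256]

steps = {n: 255 // (n - 1) for n in sizes}
print(steps[16], steps[18])  # 16 samples -> step 17, 18 samples -> step 15

# Note that 17 and 33 are absent: the reviewer's "2^n + 1" rule only yields
# integer steps over a [0, 256] domain, not over [0, 255].
```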

@paulmiller: see my last post above. The reviewer may be assuming (possibly incorrectly) that a sample is taken at 256.

Ah. Comprehension failure on my part - I thought you were describing an actual table, not the reviewer’s mistaken version. Sorry.

Indeed, 256 is not a part of the interval. 255 is the last value.

That seems to answer the reviewer’s problem.

From the HAL-archive paper https://hal.archives-ouvertes.fr/hal-02066484 , the domain is [0,255]^3, so it is fairly obvious that 256 is not part of the interval, and that samplings of size 16^3, 32^3, 64^3, 128^3 and 256^3 of a [0,255]^3 cube give integral divisions.

Perhaps this is not so obvious in the new paper, or the reviewer was having a bad day.

For precision, we might want domains [0,65535]^3 or larger. Perhaps the new paper addresses these.

The new paper might also consider the extent of lossiness. In the thread Help to create a set of "PIXLS.US" color LUTs ? , “The PseudoGrey CLUT cannot be lossy compressed without losing its pseudogrey property”. I guess the pseudogrey property was below the quality threshold. Is there some analysis of which CLUTs can be effectively compressed, or of where problems occur?

rawproc will allow an arbitrary application of a “colorspace” transform (‘convert’, in the ways @Elle taught us) at any point in the toolchain, using an ICC profile as the output profile, if there is an input profile attached to the internal working image at that point in the toolchain. I use LittleCMS for the heavy lifting ( cmsTransform(…) ), so whatever it can handle in a profile is valid, including LUTs.

Reading the snippets you provide, it would seem the author of them is not aware (or disdainful of) other LUT formats and their applications. A good counter example is the grading of log format video; these LUTs are not used from ICC profiles. His objection to your lack of consideration of an ICC approach shouldn’t be a constraint to your paper’s acceptance. With regard to ‘competition’, if the committee is actually thinking in those terms they’re doing a disservice to the concept of scientific inquiry and the presentation of alternative work to the same end. If they both meet the grade, I would argue both papers should be accepted.

If the other papers are relevant to your line of inquiry, go ahead and cite them. Who knows, you might find something additional to consider. I know in dissertations such completeness is a virtue; this may not be that sort of writing, but doing it probably helps you get past the acceptance wicket…

I haven’t published anything before, but I recall being taught to cite beyond what I used, for completeness and to avoid the nasty business of accusations of plagiarism. Red tape, for sure. As for ICC, merely mentioning it as a possibility might be enough.


Incidentally: the decompression part of David’s work takes a sparse clut and fills in the blanks, making a complete clut.

I’ve done the same thing, at Sparse hald cluts (2017). My goal was different: given two versions of the same image, what haldclut transforms one version into the other? The pixels of one version give the RGB indexes into a haldclut, and the corresponding pixels of the other version give the colour values. Then we fill in the blanks, so we can use the same transformation on other images.

Beware: my method uses ImageMagick, and is clunkier than David’s, and probably slower. I didn’t even write it up properly because I became distracted. But it works.

I expect David’s method could easily be tweaked to similarly find the clut that transforms one image into another.
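To make the fill-in idea concrete, here is a toy Python sketch using nearest-keypoint assignment. This is NOT David’s actual reconstruction (which interpolates smoothly between keypoints); it only illustrates the sparse-to-dense step, with made-up keypoints:

```python
# Toy sketch of "filling in the blanks" of a sparse CLUT: each node of a
# small 4^3 grid takes the colour of its nearest keypoint.  A real
# reconstruction would interpolate smoothly; this is just the skeleton.

# keypoints: (r, g, b) grid index -> output colour (both invented here)
keypoints = {
    (0, 0, 0): (0, 0, 0),          # shadows stay black
    (3, 3, 3): (255, 240, 220),    # highlights pushed towards warm white
}

def nearest_keypoint(node):
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(keypoints, key=lambda k: dist2(k, node))

n = 4
dense = {(r, g, b): keypoints[nearest_keypoint((r, g, b))]
         for r in range(n) for g in range(n) for b in range(n)}

print(dense[(0, 0, 1)])  # near the black keypoint -> (0, 0, 0)
print(dense[(3, 2, 3)])  # near the warm keypoint  -> (255, 240, 220)
```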

Some remarks on this topic:

  • For those interested, here you will find our WIP revised version (12 pages) : https://tschumperle.users.greyc.fr/tmp/revised_draft.pdf
  • Compared to the HAL report, it contains more details about the use of different colorspaces and error metrics for compression/decompression, a comparison with RBFs for reconstruction, and an additional section about what @snibgo mentioned (haldclut creation from a pair of images).
  • At this point, I don’t really know what we could add :slight_smile:

There is already a filter in G’MIC that does a similar thing: Colors / CLUT from After - Before Layers.

I really don’t think we want such big domains. Even 256^3 is never considered in practice; it’s already way too large for storing color transformations (which are smooth most of the time). So far, I’ve never seen a CLUT file defined at 256^3, so I can’t imagine considering 65536^3 (theoretically, this would require 786432 GB of memory for storage :slight_smile: ).
Beware, I’m not saying that the value precision of the CLUT-transformed colors must be 8 bits or less; I’m just talking about the precision of the input colors (as we are using interpolation, and the CLUTs are smooth, it’s OK to stay with a quite low ‘spatial’ resolution such as 33, 64, …).
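To show why a coarse grid plus interpolation is enough for smooth transforms, here is a minimal trilinear lookup in pure Python (using an identity CLUT as a placeholder; this is a sketch, not the paper’s code):

```python
# Minimal trilinear lookup into a coarse n^3 CLUT: the low node count is
# compensated by interpolation between nodes.  The table here is just the
# identity transform, so the lookup should reproduce its input.
n = 33
step = 255.0 / (n - 1)                              # node spacing over [0, 255]
clut = {(r, g, b): (r * step, g * step, b * step)   # identity CLUT
        for r in range(n) for g in range(n) for b in range(n)}

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def lookup(r, g, b):
    """Trilinearly interpolate an input colour in [0, 255]^3."""
    f = [min(v / step, n - 1 - 1e-9) for v in (r, g, b)]  # clamp top edge
    i = [int(v) for v in f]                 # lower node index per axis
    t = [v - j for v, j in zip(f, i)]       # fractional position in the cell
    def node(dr, dg, db):
        return clut[(i[0] + dr, i[1] + dg, i[2] + db)]
    c00 = lerp(node(0, 0, 0), node(1, 0, 0), t[0])
    c10 = lerp(node(0, 1, 0), node(1, 1, 0), t[0])
    c01 = lerp(node(0, 0, 1), node(1, 0, 1), t[0])
    c11 = lerp(node(0, 1, 1), node(1, 1, 1), t[0])
    c0 = lerp(c00, c10, t[1])
    c1 = lerp(c01, c11, t[1])
    return lerp(c0, c1, t[2])

# An identity CLUT reproduces any input, even between nodes:
print(lookup(100, 37, 200))   # ~ (100.0, 37.0, 200.0)
```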

I have read (more like skimmed) the revised paper. I don’t have time to give proper feedback, but I must say that it is richer than before and therefore looks much better. It would probably be ready after one more revision.

We are talking about two different things here: domains and sampling frequency. If images have 16 bits/channel/pixel, then our cluts should have that precision both for the RGB indexes into the clut and for the colour values within the clut. Hence domains of [0,65535]^3. If we don’t have that precision, we are discarding data or rounding results. This may not matter if we are making a final image, but it probably will matter for intermediate results. Rule of thumb: don’t discard data until you know you will never need it.

Larger domains may be needed if we are using cluts to manipulate displacement maps. (We are more sensitive to change of location than to change of colour.)

So we need domains of [0,65535]^3 or more. The fully-populated clut would have 65536^3 entries. With current computers, making such cluts is not reasonable. But sampling at 256^3 is reasonable (16 million entries). Even 1024^3 is reasonable. For future-proofing, we don’t want to restrict sampling to 256^3.
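Those storage figures are easy to sanity-check (assuming 3 channels at 1 byte each per entry; a hypothetical helper, not anyone’s actual code):

```python
# Memory needed for a fully-populated n^3 CLUT, at bytes_per_channel bytes
# per colour channel (1 for 8-bit values, 2 for 16-bit values).
def clut_bytes(n, bytes_per_channel=1):
    return n ** 3 * 3 * bytes_per_channel

GiB = 1024 ** 3
print(clut_bytes(256) / GiB)     # 256^3 table (16.7M entries): ~0.047 GiB
print(clut_bytes(1024) / GiB)    # 1024^3 table: 3 GiB
print(clut_bytes(65536) / GiB)   # full [0,65535]^3 domain: 786432 GiB
```

This matches the 786432 GB figure quoted earlier in the thread, and shows why 256^3 or 1024^3 sampling stays practical while a fully-populated 65536^3 table does not.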

How much error would a [0,255]^3 domain restriction give? Less than one part in 256, obviously. For final images this would not be visible. But for some purposes, it does matter.

I’m not saying that David’s paper imposes a [0,255]^3 restriction. As far as I can see, it should work with any size of domain. But it makes that assumption, and there is no statement that larger domains can be used.

(Am I being picky? Possibly. But it took Gimp a long time to break free of the 8-bit restrictions. I don’t want current work to accidentally fall into the same trap.)