3D LUT Compression/Decompression Does Not Work

Hello All,

I’m wondering if anyone has any insights into the problem I am having:

I am currently testing the compress_clut and decompress_clut commands for use in an upcoming project. The problem I am encountering is that when I pass a LUT through the compression algorithm and then the decompression algorithm, the result is a LUT that looks nothing like the original. Additionally, applying the round-tripped LUT to an image changes it very little compared to the original image with no LUT applied.

I think this may have something to do with the compression algorithm using the color cube vertices as keypoints (the LUT I am using to test does not have the vertices in-gamut).

Any insight into this problem would be greatly appreciated.

That’s probably the cause of the issue. The compression algorithm presupposes that all color values are in-gamut!
If you have the opportunity to share a .cube file of this LUT, maybe I’ll see if I can handle this in the compression algorithm (but I don’t promise anything).


I’ve done a quick simulation with an out-of-gamut LUT, and indeed the compression algorithm does not work.

After investigating, I found the cause: if the LUT had a maximum value greater than 255, the compression algorithm assumed it was encoded in 16 bits, and so divided the LUT values by 257 to bring them back into the [0,255] range (still float-valued, though).
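
In other words, the old behavior was roughly equivalent to this sketch (not the actual implementation; lut.cube is a placeholder filename):

$ gmic lut.cube if {iM>255} div 257 fi

(257 because 65535/257 = 255, so a full 16-bit range maps back onto [0,255].)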

I’ve fixed this, and now an out-of-gamut LUT can be compressed by the algorithm (I only did a quick test with one such LUT, so more intensive testing may be required).

The fix should be available in less than one hour, after a $ gmic update, as usual
(assuming you are running G’MIC version > 2.8.0).

Let me know if that solves your issue.

Thank you so much for your help! I am out and about for now, but will test later today.

I want to clarify, since from your responses it sounds as though I may have been unclear.

The LUT I am using does not have any output values at the vertices of the cube. Inputting those vertex values into the LUT returns values somewhere inside the cube. When I visualize the original and compressed LUTs as 3D point clouds, the original LUT clearly has all of its points contained inside the cube, while the compressed LUT skews out towards all 8 vertices, presumably because the algorithm is adding those 8 points as keypoints.

From your response, it sounds like you solved the opposite problem: a LUT returns values outside the cube, and those caused errors. That is not the problem I am having; however, a fix for that is still definitely appreciated, as I can foresee it being an issue in the future as well.

Here are the results, after running $ gmic update. As you can see from the point cloud, the compressed version still exhibits skewing towards the vertices of the cube (this is especially clear at the “blue” vertex).


I may be encountering these results due to the method I am using to decompress the LUT. I could not get the decompress_clut command in G’MIC to accept input, so I have been using the following C++ program, modified to output a HALD CLUT: https://framagit.org/dtschump/libclut/blob/master/decompress_clut.cpp
If there is a way to make the G’MIC command line accept a LUT as input when using decompress_clut, please let me know, as it would make scripting much easier.
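
For reference, I would have expected something like this to work (a guess on my part, not tested syntax; keypoints.gmz is a hypothetical file of compressed keypoints, and 64 would be the output CLUT resolution):

$ gmic keypoints.gmz decompress_clut. 64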

What I don’t get is why you don’t have LUT data defined on the full RGB cube.
G’MIC always assumes that every (R,G,B) point of the RGB cube has an associated color, and so does the compression algorithm.
I don’t even understand how you pass your color data to the compression algorithm in the case of an incomplete cube.

The visualizations I have illustrate the output values of the LUT, which in my understanding (my background is as a Director of Photography and Colorist for films) do not need to fill the color cube. Most LUT standards (.cube is the one I am familiar with) do work on the basis that every point in the cube (the input) has some output, but that output can be arbitrary. If a LUT is in essence a function, then the domain of that function is the whole color cube, but the range of the function may be a subset (or superset) of the color cube.

Take as an example a color correction which halves the gain of an image (multiplies every value by 0.5). If we were to generate and visualize a LUT based on this function, we would see that no value in the output cube is greater than 0.5 (127 in 8-bit). The compression (or perhaps decompression) algorithm would seem to fail in this case, as it assumes that white, (1, 1, 1), will be in the output set, whereas our gain function dictates that the highest value should be (0.5, 0.5, 0.5).
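
For reference, a LUT like this can be synthesized directly in G’MIC, so the round trip is easy to reproduce (a sketch: a 32^3 identity CLUT has values [x,y,z]*255/31, so halving the gain gives roughly [x,y,z]*4; the compression parameters here are illustrative):

$ gmic input 32,32,32,3,[x,y,z]*4 +compress_clut 0.5,0.1 +decompress_clut. 32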

I just tested the gain example I posted above, here are the results.

My long-term goal is to use the decompression algorithm in a way it was possibly not intended for. I am interested in generating 3D LUTs to match different cameras or film stocks by correlating two sets of data, one from each camera. My plan is to feed these data into the decompression algorithm as keypoints and (hopefully) get a LUT out the other side that is a close approximation of the “look” of the camera the data set came from.

I may be able to apply a tone curve to each data set which normalizes the inputs and outputs to the range [0,1]. I think the compression/decompression algorithm may work as intended if the LUT is normalized, so to speak; then I can just apply the inverse of the tone curve afterwards.
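
Something like this might serve as a first approximation of that step (a sketch, using G’MIC’s normalize command as a linear stand-in for a real tone curve; lut.cube is a placeholder filename and the compression parameters are illustrative):

$ gmic lut.cube normalize 0,255 compress_clut 0.5,0.1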

Edit: This is how I am generating my 3D cube visualizations:
gmic <inputLut.cube> distribution3d colorcube3d primitives3d 1 add3d


The 0.5 gain is a good example indeed, I’ll do some tests to see if I can compress it properly.

For me, this works with the latest change about the compression algorithm:

$ gmic input 32,32,32,3,[x,y,z]*8-127 +compress_clut 0.5,0.1 +decompress_clut. 32

Command input creates the out-of-gamut CLUT, then I compress it with quite high-fidelity parameters, then decompress it, and I seem to get two similar CLUTs.

[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input image at position 0, with values '[x,y,z]*8-127' (1 image 32x32x32x3).
[gmic]-1./ Compress color LUT [0] as a set of colored keypoints, with maximum error 0.5, average error 0.1, 2048 maximum keypoints, DeltaE_2000 metric and srgb colorspace for reconstruction.

* Process CLUT '[image of '[x,y,z]*8-127']' (32x32x32).
  > Add [#22] Max_Err = 0.403142, Avg_Err = 0.0885882         
  > Rem [#16/17] Max_Err = 0.956023, Avg_Err = 0.118862        
[gmic]-1./ Decompress colored keypoint [1] into 32x32x32 CLUTs, using srgb colorspace for reconstruction.
[gmic]-3./ Display images [0,1,2] = '[image of '[x,y,z]*8-127'], image_of_x,y,z*8-127_c1, image_of_x,y,z*8-127_c2'.
[0] = '[image of '[x,y,z]*8-127']':
  size = (32,32,32,3) [384 Kio of floats].
  data = (-127,-119,-111,-103,-95,-87,-79,-71,-63,-55,-47,-39,(...),121,121,121,121,121,121,121,121,121,121,121,121).
  min = -127, max = 121, mean = -3, std = 73.8651, coords_min = (0,0,0,0), coords_max = (31,0,0,0).
[1] = 'image_of_x,y,z*8-127_c1':
  size = (1,18,1,6) [432 b of floats].
  data = (0;0;0;0;24.6774;41.129;57.5806;82.2581;98.7097;139.839;172.742;172.742;(...),-47;-95;49;25;-103;1;-103;89;-127;121;-127;121).
  min = -127, max = 255, mean = 63.4519, std = 121.061, coords_min = (0,0,0,3), coords_max = (0,14,0,0).
[2] = 'image_of_x,y,z*8-127_c2':
  size = (32,32,32,3) [384 Kio of floats].
  data = (-127,-118.653,-110.382,-102.172,-94.0126,-85.8976,-77.8207,-69.7767,-61.7611,-53.77,-45.7999,-37.8478,(...),121.345,121.336,121.319,121.292,121.256,121.209,121.153,121.09,121.028,120.978,120.959,121).
  min = -127.958, max = 121.644, mean = -2.91478, std = 73.6164, coords_min = (18,31,0,2), coords_max = (0,19,31,2).
[gmic]-3./ End G'MIC interpreter.

(look at the stats of images [0] and [2], which are basically the same).

I wonder if the issue doesn’t come from the .cube loader/saver rather than the compression algorithm. I get some weird .cube data with out-of-gamut CLUTs.

Confirmed, there is currently a limitation in command input_cube that prevents reading .cube files with negative (out-of-gamut) values.
I’ll try to fix this ASAP.

This change should fix the issue with input_cube and out-of-gamut values.
Should be available in a few minutes, after a $ gmic update.


I’ve finally had a chance to test the changes, and it appears that everything is now working as expected. Thank you so much for your work on this!

One last question: can you help me understand how keypoints are “encoded” in the compressed LUT output (i.e. the gmic_cluts.ppm file)? It’s clear that each line encodes a unique CLUT, but I don’t understand how the decompression algorithm relates keypoints to the mapping between the input color space (a uniform cube) and the output space. I would think there would need to be paired colors for each keypoint: one input and one output.

Actually, two consecutive lines encode a single CLUT: first row = keypoint coordinates, second row = keypoint colors.
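
For instance, something like this should isolate the two rows of the first CLUT in the dataset (an untested sketch, assuming gmic_cluts.ppm is in the current directory):

$ gmic gmic_cluts.ppm rows 0,1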

Ahh, that makes more sense. In that case, why does the compress_clut command only output a single set of data? Does this correspond to keypoint coordinates or keypoint colors?

Both :wink:
G’MIC uses a 6-channel image to store keypoint coordinates (first 3 channels) and keypoint colors (last 3 channels).
The G’MIC image format .gmz allows this, as well as storing an associated CLUT name.
The up-to-date CLUT dataset is available here, in .gmz format:
gmic/gmic_cluts.gmz at master · dtschump/gmic · GitHub
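
For example, here is a quick sketch that splits the first keypoint image of the dataset into its coordinate and color parts (assuming the .gmz file has been downloaded locally):

$ gmic gmic_cluts.gmz keep[0] +channels[0] 3,5 channels[0] 0,2

After this, image [0] holds the keypoint coordinates and image [1] the keypoint colors.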

The .ppm file available in the libclut repo (https://framagit.org/dtschump/libclut/blob/master/gmic_cluts.ppm) has been generated from the .gmz file, as it can be read more easily by non-G’MIC users.