HaldCLUT compression

We managed to compress all the color mapping data used in the various Film emulation filters of the GIMP plug-in, with a quite smart home-made compression algorithm. So, all the 303 color profiles we provide in the plug-in now represent only 621 KB of data (instead of 82 MB before!!).

@David_Tschumperle well done! Could you point to/explain the compression so that others can save some bandwidth too?

I believe @David_Tschumperle is going to write a paper about the method he used so others can see and use it…

Yes, I’ll write a paper on this as soon as possible.
Basically, the idea is to model a HaldCLUT by a small set of colored keypoints, so that you can accurately reconstruct the whole HaldCLUT by interpolating those colored keypoints in the RGB cube. It’s nothing more than that.
Now, there are some subtleties in how this interpolation can be done efficiently in 3D, as well as in how those keypoints are chosen (note that the compression algorithm is sloooow: it took me 4 days of non-stop parallel computation on my 24-core machine to compress the 303 CLUTs we have in G’MIC :slight_smile: ).
But once it’s done, it’s done.

That’s it basically.
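To make the idea a bit more concrete, here is a simplified Python sketch of the reconstruction/decompression side. This is not the actual G’MIC code: the interpolator, kernel and chunking used here are just placeholders for the real scheme.

```python
# Simplified sketch of the decompression idea: rebuild the full CLUT lattice
# from a sparse set of colored keypoints by smooth interpolation in the RGB cube.
# (Illustrative only; the actual interpolation scheme used in G'MIC differs.)
import numpy as np
from scipy.interpolate import RBFInterpolator

def reconstruct_clut(keypoint_pos, keypoint_col, size=64):
    """keypoint_pos: (N, 3) keypoint positions in the RGB cube, in [0, 1]
       keypoint_col: (N, 3) colors stored for those keypoints
       returns a (size, size, size, 3) CLUT lattice."""
    axis = np.linspace(0.0, 1.0, size)
    lattice = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1).reshape(-1, 3)

    # Scattered-data interpolation of the keypoint colors over the whole cube.
    interp = RBFInterpolator(keypoint_pos, keypoint_col, kernel='linear')

    # Evaluating all 64^3 lattice points against up to ~2000 keypoints is the slow
    # part, so do it in chunks to keep memory bounded.
    out = np.concatenate([interp(chunk) for chunk in np.array_split(lattice, 64)])
    return np.clip(out, 0.0, 1.0).reshape(size, size, size, 3)
```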

EDIT: Also note that the decompression step is quite complex and therefore time-consuming, so once a CLUT has been decompressed, it is stored on the user’s disk as an easier-to-read data file (still a bit compressed, but approx. 300 KB each), so that applying the same filter to a different image doesn’t take too much time afterwards.
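The caching itself is nothing fancy; something along these lines, where the cache path and the `decompress` helper are made up for the example:

```python
# Sketch of the on-disk cache: decompress a CLUT once, then just reload it next time.
# The path and the `decompress` callable are placeholders, not the real plug-in layout.
import os
import numpy as np

def load_clut_cached(name, decompress, cache_dir="~/.cache/gmic_cluts"):
    """decompress: callable returning the full CLUT lattice for `name` (the slow step)."""
    cache_dir = os.path.expanduser(cache_dir)
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, name + ".npz")
    if os.path.exists(path):
        return np.load(path)["clut"]                 # fast path: already decompressed
    clut = decompress(name)                          # slow path: keypoint interpolation
    np.savez_compressed(path, clut=clut.astype(np.float32))  # still lightly compressed
    return clut
```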

Wow, that’s a lot of time! But it seems to be worth it to include the compressed CLUTs in packages; decompression can be done during package installation.

that sounds like a really good gain! we’re working on something quite similar for darktable. our aim is to make luts more accessible/useful for users by making them editable from the gui.

the result sounds really similar to your approach: we have a sparse set of 3d colour points and use thin plate splines to interpolate between them. the sparse fitting in our case is done via orthogonal matching pursuit (which is a matter of seconds for one lut).

we’re aiming at 24 control points/patches for one lut, 49 for a high quality one (mostly to keep it manageable in the gui). evaluation isn’t cheap, but not more expensive than, say, denoising (unoptimised as it stands now). it does introduce some errors occasionally (delta E around 1.3 with 49 patches, up to 2 or 3 with 24 patches).
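roughly, the fit looks like the sketch below. this is a scikit-learn stand-in, not our actual code: it fits each colour channel independently and uses every sample as a candidate centre, whereas a real lut would need a subsampled candidate set.

```python
# Rough sketch of a sparse thin-plate-spline fit via orthogonal matching pursuit.
# scikit-learn stand-in, not the darktable code.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def tps(r):
    # classic thin plate spline basis, phi(r) = r^2 * log(r), with phi(0) = 0
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def fit_sparse_lut(inputs, targets, n_points=24):
    """inputs:  (M, 3) RGB coordinates of a (subsampled) reference LUT lattice
       targets: (M, 3) colors the LUT maps them to
       returns (centers, per-channel weights) of the sparse fit."""
    # Dictionary: one TPS basis function per candidate center, evaluated at every sample.
    dists = np.linalg.norm(inputs[:, None, :] - inputs[None, :, :], axis=-1)
    dictionary = tps(dists)

    # Fit the color *shift* (target minus identity), so an all-zero fit means "no change".
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_points)
    omp.fit(dictionary, targets - inputs)

    support = np.any(omp.coef_ != 0, axis=0)   # union of the per-channel supports
    return inputs[support], omp.coef_[:, support]
```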

i was wondering if you also trade precision for storage?

our code is here in case you’re interested:
https://github.com/darktable-org/darktable/tree/lab-io/src/lut

Yes, I also have reconstruction errors.
The amount of acceptable error is actually given by two parameters of the compression algorithm, which set the desired average and maximum reconstruction errors. Both criteria must be satisfied, otherwise we keep adding new keypoints (with a limit of 2048 points).
For some of the CLUTs we have, I actually never reach these criteria (some CLUTs are noisy for instance, so there is no chance of reconstructing them accurately with a smooth interpolation).
For now, I’ve set avg_error = 1.75 and max_error = 17.5. I found out that limiting the max_error is important to keep the reconstructed CLUT contrasted enough (sometimes the average error is low, but we lose a bit of the color dynamics in some regions because the max error is too high there).
But maybe this comes from my reconstruction scheme; I need to investigate a bit more before drawing conclusions.
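In pseudo-Python, the stopping rule is basically the following. This is a toy version only: the real algorithm is smarter about where keypoints are placed, and the sketch assumes errors are measured on an 8-bit [0, 255] scale.

```python
# Toy version of the stopping rule: keep adding keypoints where the reconstruction
# is worst until both error criteria are met, or the 2048-point limit is reached.
# `reconstruct(kp_pos, kp_col, queries)` stands in for the actual interpolation scheme.
import numpy as np

def compress_clut(lattice, colors, reconstruct,
                  avg_error=1.75, max_error=17.5, max_points=2048):
    """lattice: (M, 3) RGB coordinates, colors: (M, 3) target CLUT colors in [0, 255]."""
    rng = np.random.default_rng(0)
    chosen = list(rng.choice(len(lattice), size=8, replace=False))  # arbitrary seed set
    while True:
        approx = reconstruct(lattice[chosen], colors[chosen], lattice)
        err = np.linalg.norm(approx - colors, axis=1)        # per-lattice-point error
        if (err.mean() <= avg_error and err.max() <= max_error) or len(chosen) >= max_points:
            return lattice[chosen], colors[chosen]
        chosen.append(int(err.argmax()))   # add a keypoint where the error is largest
```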

This sounds interesting. It’s definitely something I could use in my film emulator. Currently the 8-bit CLUTs take up about 50 MB, so I’m loading them on demand. Getting that down would make an offline version feasible.
I guess this would require a bit of WebGL to evaluate the splines. Could be a good excuse to do a 2.0 under a proper open source license. :slight_smile:
