Feedback with the use of CLUTs

Actually, I don’t talk about “sampling frequency”. When I talk about an 8-bit/channel CLUT with a resolution lower than 256^3, e.g. 64^3, I consider that the CLUT has been downscaled with 3D averaging boxes. It’s not like I sample the CLUT at every 4th voxel: I consider the CLUT data has been averaged to produce a single voxel for each 4x4x4 block of voxels in the corresponding 256^3 CLUT.
That is a bit different.
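For concreteness, this box-averaging downscale could be sketched as follows. This is my own sketch, not code from the paper: I assume interleaved RGB floats stored in row-major voxel order, and a source resolution divisible by the averaging factor.

```cpp
// Sketch: downscale an N^3 RGB CLUT by averaging each factor^3 box of
// voxels (e.g. 4x4x4 for 256^3 -> 64^3), instead of keeping every 4th voxel.
// Assumed layout: interleaved RGB floats, index = ((z*N + y)*N + x)*3 + c.
#include <cstddef>
#include <vector>

std::vector<float> downscale_clut(const std::vector<float>& src, int N, int factor) {
    const int M = N / factor;  // destination resolution (N assumed divisible)
    std::vector<float> dst((std::size_t)M * M * M * 3);
    const float inv = 1.0f / (factor * factor * factor);
    for (int z = 0; z < M; ++z)
        for (int y = 0; y < M; ++y)
            for (int x = 0; x < M; ++x) {
                float acc[3] = {0.0f, 0.0f, 0.0f};
                for (int dz = 0; dz < factor; ++dz)  // sum over the box
                    for (int dy = 0; dy < factor; ++dy)
                        for (int dx = 0; dx < factor; ++dx) {
                            std::size_t s = (((std::size_t)(z * factor + dz) * N
                                            + (y * factor + dy)) * N
                                            + (x * factor + dx)) * 3;
                            for (int c = 0; c < 3; ++c) acc[c] += src[s + c];
                        }
                std::size_t d = (((std::size_t)z * M + y) * M + x) * 3;
                for (int c = 0; c < 3; ++c) dst[d + c] = acc[c] * inv;  // box mean
            }
    return dst;
}
```

Unlike voxel decimation, each destination voxel here is the mean of its whole 4x4x4 (or factor^3) neighborhood, which is exactly why a smooth CLUT survives the downscaling well.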

In particular, if the CLUT function is sufficiently smooth, this way of downscaling won’t destroy many details in the CLUT color variations. And CLUTs are usually very smooth color functions. In this context (and only in this context), it is acceptable to stick with low CLUT resolutions (32^3, 64^3, ...) without losing important color information.
For CLUTs, I really don’t see the point of considering a 65536^3 definition domain. 256^3 is already overkill (I’m saying it again because the CLUT functions are really smooth; of course it would be different for 3D functions with a lot of discontinuities inside). This doesn’t prevent CLUTs from working on 16-bit/channel images.
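A small illustration of why the image bit depth and the CLUT resolution are independent: the input channel value, whatever its depth, is simply normalized into the CLUT’s continuous coordinate range. This helper is hypothetical (my own naming, not from the paper):

```cpp
// Sketch: map an integer channel value of any bit depth to a continuous
// coordinate in an N^3 CLUT. vmax is 255 for 8-bit input, 65535 for 16-bit.
// The CLUT resolution N is independent of the input bit depth.
float to_clut_coord(unsigned v, unsigned vmax, int N) {
    return (float)v / (float)vmax * (float)(N - 1);
}
```

An 8-bit and a 16-bit white both land on the same last voxel of a 64^3 CLUT; the deeper input just produces finer intermediate coordinates.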

Also, of course we use linear/cubic interpolation to get the transformed color for any [R,G,B] input color (with floating-point precision). If your CLUT is 128^3, it means that for an 8-bit/channel image, you’ll read the CLUT values at locations (x + dx, y + dy, z + dz), where dx, dy, dz can be either 0 or 0.5. For a 16-bit/channel image, there are just more possible fractional parts for (dx, dy, dz), but that doesn’t add extra complexity to the code once you have a 3D value interpolator working at floating-point precision.
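For the trilinear case, reading the CLUT at a floating-point location reduces to blending the 8 surrounding voxels with the fractional parts (dx, dy, dz). Again a sketch under my own layout assumptions (interleaved RGB floats, row-major voxels), not the paper’s code:

```cpp
// Sketch: trilinear lookup in an N^3 RGB CLUT at float coordinates in
// [0, N-1]. Assumed layout: index = ((z*N + y)*N + x)*3 + c.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Read channel c of voxel (x,y,z), clamping to the CLUT bounds.
static float fetch(const std::vector<float>& lut, int N, int x, int y, int z, int c) {
    x = std::clamp(x, 0, N - 1);
    y = std::clamp(y, 0, N - 1);
    z = std::clamp(z, 0, N - 1);
    return lut[(((std::size_t)z * N + y) * N + x) * 3 + c];
}

void clut_lookup(const std::vector<float>& lut, int N,
                 float x, float y, float z, float out[3]) {
    const int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    const float dx = x - x0, dy = y - y0, dz = z - z0;  // fractional parts
    for (int c = 0; c < 3; ++c) {
        // Interpolate along x on the 4 edges of the surrounding cube...
        float c00 = fetch(lut,N,x0,y0,  z0,  c)*(1-dx) + fetch(lut,N,x0+1,y0,  z0,  c)*dx;
        float c10 = fetch(lut,N,x0,y0+1,z0,  c)*(1-dx) + fetch(lut,N,x0+1,y0+1,z0,  c)*dx;
        float c01 = fetch(lut,N,x0,y0,  z0+1,c)*(1-dx) + fetch(lut,N,x0+1,y0,  z0+1,c)*dx;
        float c11 = fetch(lut,N,x0,y0+1,z0+1,c)*(1-dx) + fetch(lut,N,x0+1,y0+1,z0+1,c)*dx;
        // ...then along y, then along z.
        out[c] = (c00*(1-dy) + c10*dy)*(1-dz) + (c01*(1-dy) + c11*dy)*dz;
    }
}
```

Whether (dx, dy, dz) take 2 possible values (8-bit input on a 128^3 CLUT) or many more (16-bit input), this routine is unchanged, which is the point made above.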

In that case, I guess the type of function is really different: a displacement map may contain strong discontinuities.

Note that our proposed method doesn’t, in theory, limit the CLUT “spatial” resolution. We chose [0,255] for the sake of clarity, but the multi-scale reconstruction algorithm can work with any resolution. What I can say is that the reconstruction of a 256^3 CLUT probably takes a few minutes, considering the high number of voxel values to reconstruct.

Thanks for the clarification.

I suspected this was the case, but the paper doesn’t say this. I think it should.

One point I missed from reading the paper is that the key points are located at the CLUT sample locations, and you move the key points to the nearest CLUT sample when performing the multi-resolution algorithm. I couldn’t understand how m(x) could work, since it would probably be 1 everywhere. The C++ sample code you posted earlier makes this clear.