Hello all,
I’ve written a small and simple Python package for generating a .cube 3D LUT from a bunch of raw/JPEG image pairs. Samples are drawn from multiple image pairs, so that (hopefully) more colors are sampled than with the usual single-image approach with gmic (see How to create HaldCLUTs from in-camera processing styles ). So, depending on the use case and source images, it may or may not perform better.
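For anyone curious about the target format: a .cube file is plain text, and a minimal writer (a generic sketch of the format, not the package's actual code) fits in a few lines. In the .cube layout, the red axis varies fastest, then green, then blue:

```python
def write_cube(path, lut, title="generated"):
    """Write an n x n x n nested structure of RGB triples in [0, 1] to a .cube file.

    lut[r][g][b] holds the output (red, green, blue) for grid point (r, g, b).
    The .cube format lists entries with red varying fastest, then green, then blue.
    """
    n = len(lut)
    with open(path, "w") as f:
        f.write(f'TITLE "{title}"\n')
        f.write(f"LUT_3D_SIZE {n}\n")
        for b in range(n):
            for g in range(n):
                for r in range(n):
                    red, green, blue = lut[r][g][b]
                    f.write(f"{red:.6f} {green:.6f} {blue:.6f}\n")
```

For an identity LUT the first data line is `0.000000 0.000000 0.000000` and the last is `1.000000 1.000000 1.000000`.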
It was more or less a weekend project and is neither tested nor sophisticatedly coded (AKA messy code), but maybe it is useful for someone.
Just a reminder that LUTs imply a display-referred workflow and won’t scale to “HDR” (by HDR I mean anything operating on RGB values > 100 % display).
Yes, it is a similar logic.
The estimation procedure is a bit different (@sacredbirdman seems to use local averages of target colors with respect to all LUT subregions).
My script exposes some more options to the user (in particular, the LUT size and sample count can be chosen) and tries to do as much as possible automatically, so that the user does not need to convert the images with darktable beforehand. This might come with some drawbacks on some systems or with different darktable versions, though.
Also, I haven’t compared the results of both scripts, so it is worth trying both solutions in the end, especially regarding stability in sparsely sampled color regimes.
Thanks for pointing that out.
Maybe I should emphasize this in the documentation, as it may be a bit counterintuitive for some users (e.g. Applying LUTs and recovering highlights ).
I’m recreating my LUTs from the other topic about building LUTs with your program as we speak, and will report my findings. I modified your scripts to use my already-generated training set instead of processing each file with your provided darktable style (cool idea, though!).
My plan is to create LUTs that go after filmic, where we’re in display-referred space, and retain filmic’s processing options. We’ll see how well that goes! I’m excited, as my own approach didn’t fare well with high-saturation colors. I’m hoping yours will be more robust in those regimes.
I think the point is that LUTs are applied in display space, and all LUT processing is therefore in display referred.
You can still do your scene referred processing before the LUT, though. (This is somewhat uncommon, though, as many people tend to use LUTs as a shortcut to replace custom processing. It also depends on the LUT: If the LUT includes a strong tone curve, it might be intended to replace the default tone curve.)
So today I built LUTs using the technique from this thread, but with the same training images I had used in the other thread. Lo and behold, the produced LUTs show the same sort of strange artifacts as in my own experiments. Of course I could smooth those over just like I did in the other thread, but that doesn’t really solve the problem, it merely hides it.
However, there were two significant differences between the published code and what I did:

1. I didn’t use darktable as part of the processing, as I had already exported matching darktable exports and camera JPEGs.
2. I couldn’t get the automatic image alignment to work reliably. It always got stuck in an infinite loop on some of my images and wouldn’t converge. But since my images were shot without lens corrections or other processing, I simply skipped the alignment.
If I understand the code correctly, it essentially matches the histogram of the darktable JPEG to that of the camera JPEG using an optimization algorithm. Interestingly, this procedure produces slightly different colors than my pixel-to-pixel matching approach.
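For reference, the textbook form of histogram matching maps each source value through the source CDF and then the inverse target CDF. A single-channel numpy sketch (a generic illustration, not the package's exact optimization-based procedure):

```python
import numpy as np

def match_histogram(source, target, bins=256):
    """Remap `source` values (in [0, 1]) so their histogram approximates `target`'s.

    Classic CDF matching: source value -> source CDF -> inverse target CDF.
    """
    src_hist, edges = np.histogram(source, bins=bins, range=(0.0, 1.0))
    tgt_hist, _ = np.histogram(target, bins=bins, range=(0.0, 1.0))
    src_cdf = np.cumsum(src_hist) / source.size
    tgt_cdf = np.cumsum(tgt_hist) / target.size
    # For each source bin, find the target value whose CDF is closest.
    mapping = np.interp(src_cdf, tgt_cdf, edges[1:])
    idx = np.clip((source * bins).astype(int), 0, bins - 1)
    return mapping[idx]
```

Unlike pixel-to-pixel matching, this only aligns the value distributions, so it is insensitive to misalignment between the two images but can map individual pixels differently.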
At any rate, I learned a few things from this: one, I had a broken file in my training set that confused things (d’oh!); two, a 16x16x16 or even 9x9x9 LUT is perfectly sufficient, no need to go to the ridiculous 64x64x64 I was using. And probably some more.
This is not the impression I have (and not the way I use LUTs). I use LUTs to emulate color film and, perhaps more importantly, to give a series of images a shared look. I always place the lut 3D module last (before output color profile, though).
Yes, good regularization, extrapolation and sufficient color space coverage in the samples are really challenging!
I’ve also found that, even when using highly saturated test patterns, estimation for extreme color values often just doesn’t work well. On very bright and/or saturated gradients, visible artifacts occur.
I wonder if this is really due to our algorithms or maybe a problem in the sample data generation process.
The handling of out-of-gamut colors and highlight clipping in the color space transformations of darktable’s pixel pipeline might be an issue: for example, if clipping destroys the one-to-one mapping between the resulting colors of the raw and JPEG samples, the estimation receives contradictory targets.
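The clipping problem is easy to illustrate with a toy example: once values are clipped, two distinct scene colors collapse onto the same display color, and no LUT can pull them back apart (the values below are made up for illustration):

```python
import numpy as np

# Two distinct "raw" highlight colors (hypothetical values)...
raw_a = np.array([1.2, 0.9, 0.4])
raw_b = np.array([1.6, 0.9, 0.4])

# ...collapse to the same color after highlight/gamut clipping, so the
# raw -> JPEG mapping is no longer one-to-one and the LUT estimation
# sees two different inputs demanding the same (or conflicting) outputs.
jpeg_a = np.clip(raw_a, 0.0, 1.0)
jpeg_b = np.clip(raw_b, 0.0, 1.0)
assert np.array_equal(jpeg_a, jpeg_b)
```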
In the latest release, I’ve added some more options and also output a few charts for visualization. Have fun (see the Readme and the --help option)!
Also, I’ve tuned the optimization and extrapolation a bit (still not a sophisticated approach, but let it be for now) and added a simple test image generator for sample generation.
IMO it is really important not to overparameterize. E.g. a 9x9x9 cube already produces visually identical results to my camera’s JPEGs. While this is a low-resolution cube, it still has 729 × 3 parameters! Given that the colors in real images are far from uniformly distributed, this is already a lot, and good generalization performance is hence difficult to achieve.
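For concreteness, the parameter count grows cubically with the edge length (n³ grid entries × 3 channels):

```python
# Free parameters of an n x n x n RGB LUT: n^3 grid entries x 3 channels.
for n in (9, 16, 64):
    print(f"{n}x{n}x{n} cube: {n ** 3 * 3} parameters")
# 9x9x9   ->      2187 parameters
# 16x16x16 ->    12288 parameters
# 64x64x64 ->   786432 parameters
```

So a 64x64x64 cube has over 350 times as many parameters as a 9x9x9 one, with the same amount of training data to constrain them.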
Hmm, have you tried to tune the ECC alignment parameters? Interesting that it does not work for you. For my images, it works flawlessly.
EDIT: Here are Fuji LUTs generated with the current version and the sample pattern generator. It is important to remember to set the interpolation to trilinear and the color space to Adobe RGB in the lut 3D module. fujis.zip (33.7 KB)
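The interpolation setting matters because it is what turns the coarse grid into a smooth mapping. A minimal numpy sketch of applying a 3D LUT with trilinear interpolation (generic illustration, not darktable's implementation):

```python
import numpy as np

def apply_lut_trilinear(rgb, lut):
    """Apply an n x n x n x 3 LUT to one RGB triple in [0, 1] using
    trilinear interpolation between the 8 surrounding grid points."""
    n = lut.shape[0]
    pos = np.asarray(rgb, dtype=float) * (n - 1)   # position in grid units
    lo = np.clip(np.floor(pos).astype(int), 0, n - 2)
    frac = pos - lo                                # fractional offset in the cell
    out = np.zeros(3)
    # Accumulate the 8 corner contributions, each weighted by its
    # opposite sub-volume (the standard trilinear weights).
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0])
                     * (frac[1] if dg else 1 - frac[1])
                     * (frac[2] if db else 1 - frac[2]))
                out += w * lut[lo[0] + dr, lo[1] + dg, lo[2] + db]
    return out
```

Since trilinear interpolation reproduces any linear function exactly, an identity LUT maps every color to itself regardless of the grid size.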
PS: I have also added the link to your script to the readme; great!
Oh, this was a packaging error by me. The required .xmp files were not included.
Should be fixed now; you can use `pip install --upgrade darktable_lut_generator` to update to version 0.0.5.
Hmm, it seems the .xmp version is incompatible with your installed darktable version…
I have created the .xmp files with a recent git build of Darktable.
I don’t have a good solution for this at the moment; maybe I’ll re-create the xmps with an older darktable version in the next few days.
But there is still an easy workaround: you can create two .xmp files of your own with your darktable version: one for the JPG, which should be pretty much the default processing but with the output color profile module set to Adobe RGB, and one for the .DNG with all modules after color calibration disabled and the output color profile also set to Adobe RGB.
Then you can simply point the script to your own xmps with the --path_xmp_raw and --path_xmp_image arguments (this has not been tested yet, however…).