Help Me Build a Lua Script for Automatically Applying Fujifilm Film Simulations (and more)

I think all sorts of modules can be applied as long as:

  • they are applied in the same way in the template and the later RAWs.

  • every pixel is changed in the same way.

  • they do not exceed the value range of darktable-chart.

I had already tried it with the base curve. But unlike filmic, the basecurve settings are rarely changed, so there’s no benefit to that.

This PR might have some info for you regarding this topic.


8 bit color depth. I’ll add that to the instructions. That recommendation is mainly to prevent any compression artifacts from interfering with the result.


I should have looked at the repo. I see you have sample files, so I could have checked those. Thanks for responding…I’m glad you formalized this, and thank you for sharing your work…


I got it running, but when I tried to use a raw/JPG pair it crashed… I used your test images and it ran fine, and when I used my own raw as both source and target it also ran fine, but using a JPG as the target failed. I exported them from DT with the default 8-bit PNG settings (i.e. compression 5) and resized to 300 on the width, as they are portraits. Not sure if it’s the settings or something I did wrong… do the images converted to PNG all have to originate as raw files?? I didn’t understand that from the description…it seems the target could be an OOC JPG??

Are you Foto-Zenz? That’s a really cool blog post! Thank you so much for sharing it! I’ll have to experiment with your Darktable-chart styles!

Even with the “problematic” LUTs from Stuart Sowerby, I find it surprisingly useful to have a near-JPEG starting point. Actually I noticed that it is not that important to me that my starting point matches the JPEG completely. But having a rough approximation of what I saw in the viewfinder actually helps a lot.

Still, next week I’ll try to produce my own LUT with the help of @sacredbirdman’s program and see where that’ll take me.

Let me know how you make out. I could not get it to work with a raw/JPG pair…I might be missing a nuance or two…

To mimic a camera profile, let’s say your camera produced DSCF1330.RAF and DSCF1330.JPG.

  1. You’d run the raw through Darktable and save it as a PNG into source_images/DSCF1330.png
  2. then you’d convert the camera-made JPG into a PNG and put it into target_images/DSCF1330.png
  3. ensure the resulting PNGs have exactly the same resolution
  4. repeat that for every photo you want to include in the test set.
  5. list those files, e.g. DSCF1330.png, in the images.txt file (the program tries to find that name in both source_images and target_images)

Hope that helps :slight_smile:


That is the process I followed. I noted that my raw and JPG are listed at the same size, but the raw also has a smaller size in brackets; I think this reflects the border allowed for demosaicing. I cropped a middle portion of the raw that was about 2/3 of the image, then resized it, as you mentioned, to around 300 by 500. I copied the crop to the JPG and did the same.

I will try again but just use the full image and resize or any other combination I can think of…

Yesterday I came back from a weekend trip (during which my camera died :disappointed_relieved: and my kids freaked out :sob:), to find at my doorsteps the lens adapter I had ordered for creating LUTs.

So I took out an ancient Canon FD-compatible Tokina 28mm, set my camera to take exposure brackets, and went shooting a training set. Some 180 images were taken, in series of [-2, -1, 0, 1, 2] EV exposure. I tried to get a varied set that covered all the major colors. Then I plugged my camera into the Windows VM, and told the Fujifilm X-Transformer to render these 180 images in each film simulation. This took some time, but was easy to do. At the same time, I exported the same images with Darktable in scene-referred mode, with filmic enabled and at default values, and everything else turned off.

With the training set thus prepared, I installed a zig compiler (available as a snap) and the required libraries (libsdl2-dev, libsdl2-image-dev on Ubuntu). Compiling the zig program was extremely easy. Impressive.

Then, less easy, I figured out the required incantation of ImageMagick to convert my training sets to PNGs, and coded up a Python script to apply it to whole directories, with some parallelization for good measure. Surprisingly, this part took the most effort. If you’re interested, `convert -resize 600x600 -auto-orient source.jpg destination.png` works in almost all cases.
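For what it’s worth, such a driver can be sketched in a few lines. This isn’t the actual script from the post, just a minimal stdlib-only version: `build_cmd` mirrors the quoted `convert` invocation, and a thread pool is enough for parallelism since `convert` does its work in a subprocess:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def build_cmd(src, dst):
    # Mirrors the quoted invocation:
    #   convert -resize 600x600 -auto-orient source.jpg destination.png
    return ["convert", "-resize", "600x600", "-auto-orient", str(src), str(dst)]

def convert_dir(src_dir, dst_dir, workers=4):
    """Convert every *.jpg under src_dir to a same-named PNG in dst_dir,
    running up to `workers` ImageMagick processes at a time."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    jobs = [(p, dst / (p.stem + ".png")) for p in sorted(Path(src_dir).glob("*.jpg"))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda j: subprocess.run(build_cmd(*j), check=True), jobs))
```

Calling `convert_dir("jpegs", "target_images")` would then fill the target directory in one go.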

Then I ran @sacredbirdman’s wonderful program, and voila, new Fujifilm LUTs:

In my extremely limited testing on a couple of sample images from before my camera broke, these LUTs seem to look better than Stuart Sowerby’s LUTs I had used beforehand.

But I’ll have to check this in more detail later, and specifically look for defects. These LUTs are at this point less than an hour old. If you find any problems, please let me know! I’d guess that most problems can be fixed with a few more sample images but I’ll need to know what to look for!


Fantastic. I’ll do some tests tomorrow. It’s good to have these at hand for when you just need a quick edit without much thought going into it, plus the film sims look good. Thank you for your work, and sorry to hear about your camera :smiling_face_with_tear:

I wish I could contribute some classic negative samples, but the only camera I have with it is the X100V and from what I’ve read above it won’t work because of the lens corrections, such a shame.

For the record, one of my three cameras died, and it is probably repairable. I already sent it off to be repaired. So not that big a deal, really.

The problem was more that it was the only ILC on that particular trip, and I didn’t get to play with the cool wide angle I had brought on the trip.


Thanks so much…I tried using PNGs created in DT, but they didn’t work. Any insight on the settings you used to prepare them? As you have done, I was able to install and compile. I could use the single test image pair provided by @sacredbirdman, but when I tried the same with a pair of my own images the process crashed. If I used either of the images as both target and source, the process would run, so the images seemed readable by the computational process, but the two as a pair would not work…not sure what I did wrong, other than that the test images were not blurry…

Thanks for your work sharing the LUTs. Sorry to hear about the camera. In my experience very saturated violet&pink colors tend to have gaps in samples but I’m not sure if they have much of an impact in real world use :slight_smile:

I’m glad to hear you found the results (at least so far) good since one of the reasons for creating this program was my dissatisfaction with the existing LUTs for my cameras (mainly the color casts and other artifacts)… so I wanted to make a program that could closely enough mimic signature color renderings but also create well-behaved LUTs so you can rely on them.


I could take a look at the problem if you can share a png pair that causes a crash :slight_smile:

I’ll shoot something again…I only have my phone right now…I did try cropping them down as you suggested, and I don’t think that was the issue…but maybe??

Here are a couple …lots of colors …


PXL_20220418_194054213.dng (12.7 MB)

I created the PNGs from Darktable-exported JPEGs using the quoted ImageMagick command. Weirdly, the zig program crashed for three of my 180 images, which were all white and blown out. The error was a segfault when trying to read the pixel data, which I presume hints at some sort of issue with SDL.

Another problem was orientation, as the SDL-loaded PNGs do not seem to have their orientation bit read correctly. The image must be a true 600x400; a rotated 400x600 does not count. And image dimensions must match exactly, which is a problem since Darktable likes to export one pixel wider than my camera does. I’m hoping that downscaling and my less-than-perfectly-sharp lens saved me from issues with that.

I’d be happy to upload my training set by the way, if someone else wants to play with it or contribute to it.

Could you clarify: did you crop all images to 400x600, or resize the whole image??

I exported full-size JPEGs from Darktable, then resized them to 600x400 PNGs using `convert -resize 600x600 -auto-orient source.jpg destination.png` (`convert` is part of ImageMagick). Then I exported “fine” JPEGs from Fujifilm X Raw Studio and converted them the same way.


It’d be good if you could share the actual PNG images you tried to use. I suspect the problem might be with the dimensions; the program throws a tantrum if the images are not the same size. I just pushed a fix so it only rejects the image pairs (and shows a warning about them) if the dimensions don’t match. So you could try pulling the fixes from the repository and trying again :slight_smile: