Help Me Build a Lua Script for Automatically Applying Fujifilm Film Simulations (and more)

Today I tried various modes of processing Fuji files in camera, and it seems there's no way to reliably disable all lens corrections. Which is a shame, as it means I'll have to shoot a training set instead of just picking images from my catalog. I now have an adapter on order to connect my old Canon FD glass to the Fuji, so that'll take a few days.

I have always fantasized about implementing some sort of automatic warping that aligns the camera JPEG with a Darktable render, matching up the pixels between the two. But I worry that's one rabbit hole too deep for my limited free time.
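For anyone tempted by that rabbit hole, the natural starting point would be feature-based alignment, e.g. with OpenCV. Here's a rough sketch; the caveat is that a homography only handles projective misalignment, while residual lens distortion would need a denser warp model, which is exactly where the rabbit hole deepens. The OpenCV calls are real, but the overall script is hypothetical, not an existing tool:

```python
import cv2
import numpy as np

def warp_jpeg_to_render(jpeg_path: str, render_path: str) -> np.ndarray:
    """Estimate a homography from the camera JPEG to the Darktable render
    via ORB feature matching, then warp the JPEG onto the render's pixel grid."""
    jpeg = cv2.imread(jpeg_path)
    render = cv2.imread(render_path)

    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(jpeg, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(render, cv2.COLOR_BGR2GRAY), None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = render.shape[:2]
    return cv2.warpPerspective(jpeg, H, (w, h))
```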

2 Likes

RawTherapee has such a feature, maybe it could be brought over?

http://rawpedia.rawtherapee.com/Lens/Geometry#Geometric_Distortion

1 Like

Yes, Fuji does not provide a way to disable in-camera lens corrections. I tried using exiftool to wipe the lens information from the raw files, but it seems that's not possible. OTOH, there are actually geometric distortion parameters in the raw file, but I don't know how those would translate to Darktable's correction module…

Thanks for sharing… your instructions say to save as PNG… any specific parameters, e.g. 8-bit vs. 16-bit?

This is likely far less sophisticated than what @sacredbirdman has worked up, but I modified it to work with my SpyderCheckr just for fun. The result is an ICC profile targeted to match a provided source image, i.e. a processed JPG. I modified it for the 24-patch ColorChecker and the SpyderCheckr. I can provide the details if you have any interest; or, if you want to share a test shot pair and happen to have a ColorChecker, I can test it for you and send you the resulting ICC file. This is a bit like darktable-chart, but it creates an ICC file rather than a style.

2 Likes

So I’ve been experimenting a lot with film simulations. Initially I used ICC profiles in darktable. But I never got good results.

In my opinion, the HaldCLUTs for the Fuji film simulations also deliver poor results compared to the camera JPEGs.
I think there are two fundamental problems here:

  1. A HaldCLUT assumes that its input pixels are in a certain baseline state. This state must correspond exactly to the one used when the CLUT was created (see the sketch after this list).

  2. Creating a HaldCLUT is a simple process. Tone-value and color-changing operations (e.g. the film simulation) are applied to the Hald_CLUT_Identity_12.tif file and the result is saved as a PNG.
    The problem with this is getting Hald_CLUT_Identity_12.tif into the Fuji camera as a raw file so that the film simulations can be applied to it.
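To make point 1 concrete: applying a HaldCLUT just means using each input pixel's RGB values as coordinates into the CLUT cube, so any shift in the input's baseline state shifts every lookup. A minimal nearest-neighbor sketch (real implementations interpolate between lattice points; Pillow and NumPy are my choice of dependencies here, not part of any of the tools discussed):

```python
import numpy as np
from PIL import Image

def apply_hald_clut(image_path: str, clut_path: str) -> Image.Image:
    """Apply a HaldCLUT with simple nearest-neighbor lookup."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64)
    clut_img = np.asarray(Image.open(clut_path).convert("RGB"))

    side = clut_img.shape[0]          # e.g. 1728 for a level-12 CLUT
    cube = round(side ** (2 / 3))     # samples per channel, e.g. 144
    clut = clut_img.reshape(-1, 3)    # cube**3 RGB entries, red fastest

    # Map 0..255 pixel values to lattice indices. This is where point 1
    # bites: the input must be in the same state the CLUT was built for.
    idx = np.clip(np.round(img * (cube - 1) / 255.0), 0, cube - 1).astype(np.int64)
    flat = idx[..., 0] + idx[..., 1] * cube + idx[..., 2] * cube * cube
    return Image.fromarray(clut[flat].astype(np.uint8), mode="RGB")
```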

I have had much better experiences with darktable-chart.
There is also the option to correct the colors in the color-lookup-table module if necessary.

see:

If you want, you can load Hald_CLUT_Identity_12.tif into darktable, apply the tone curve and color lookup table from a film simulation, and export the result as a PNG, which will then serve as a HaldCLUT.
That HaldCLUT could then replace the tone-curve and color-lookup-table modules with the 3D-LUT module, provided the order of the pixelpipe fits.
The question, however, is what would be gained by doing so.
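As an aside, the identity file mentioned above can also be generated programmatically rather than downloaded: for level n the LUT cube has side n², stored as an n³ × n³ image, so level 12 yields the familiar 1728 × 1728 file. A sketch, again with Pillow and NumPy as assumed dependencies (and writing 8-bit PNG rather than the original TIFF):

```python
import numpy as np
from PIL import Image

def hald_identity(level: int = 12) -> Image.Image:
    """Generate an identity Hald CLUT image for the given level."""
    cube = level ** 2   # samples per channel: 144 for level 12
    side = level ** 3   # image width and height: 1728 for level 12
    idx = np.arange(side * side)
    # Red varies fastest, then green, then blue (standard Hald ordering).
    r = idx % cube
    g = (idx // cube) % cube
    b = idx // (cube * cube)
    rgb = np.stack([r, g, b], axis=-1) * 255.0 / (cube - 1)
    return Image.fromarray(rgb.round().astype(np.uint8).reshape(side, side, 3))

if __name__ == "__main__":
    hald_identity(12).save("Hald_CLUT_Identity_12.png")
```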

3 Likes

I have been meaning to take a look through your article again. I shared it some time ago, translated to English, on the FB website, and I included the pixls.us link… In the past I also had good luck with darktable-chart… I recall Harry Durgin's video where he went against the grain at the time and added the base curve during creation of the style, as he found it gave better results…

I think all sorts of modules can be applied as long as:

  • they are applied in the same way in the template and the later RAWs.

  • every pixel is changed in the same way.

  • they do not exceed the value range of darktable-chart.

I had already tried it with the base curve. But unlike filmic, the base curve's settings are rarely changed, so there's no benefit to that.

This PR might have some info for you regarding this topic.

1 Like

8-bit color depth. I'll add that to the instructions. That recommendation is mainly to prevent any compression artifacts from interfering with the result.

1 Like

I should have looked at the repo. I see you have sample files, so I could have checked those. Thanks for responding… I'm glad you formalized this… thank you for sharing your work…

1 Like

I got it running, but when I tried to use a raw/JPG pair it crashed… I used your test images and it ran fine, and when I used my same raw as both source and target it also ran fine, but using a JPG as the target failed. I exported them from DT with the default PNG settings for 8-bit, i.e. compression 5, and resized on the width to 300, as they are portraits. Not sure if it's the settings or something I may have done… do the images converted to PNG all have to originate as raw files? I didn't understand that from the description… it seems like the target could be an OOC JPG?

Are you Foto-Zenz? That’s a really cool blog post! Thank you so much for sharing it! I’ll have to experiment with your Darktable-chart styles!

Even with the “problematic” LUTs from Stuart Sowerby, I find it surprisingly useful to have a near-JPEG starting point. Actually, I've noticed it's not that important to me that my starting point matches the JPEG completely; having a rough approximation of what I saw in the viewfinder already helps a lot.

Still, next week I’ll try to produce my own LUT with the help of @sacredbirdman’s program and see where that’ll take me.

Let me know how you make out. I could not get it to work with a raw/JPG pair… I might be missing a nuance or two…

To mimic a camera profile, let's say your camera produced DSCF1330.RAF and DSCF1330.JPG.

  1. You'd run the raw through Darktable and save it as a PNG into source_images/DSCF1330.png,
  2. then you'd convert the camera-made JPG into a PNG and put it into target_images/DSCF1330.png,
  3. ensure the resulting PNGs have exactly the same resolution,
  4. repeat that for every photo you want to include in the test set,
  5. and list those files, e.g. DSCF1330.png, in the images.txt file (the program tries to find that name in both source_images and target_images). A small helper that automates steps 3 and 5 is sketched after this list.
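To cut down on the manual bookkeeping, here is a rough Python sketch of such a helper. The directory layout and images.txt come from the steps above, but the script itself is my own assumption, not part of @sacredbirdman's program:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SOURCE_DIR = Path("source_images")
TARGET_DIR = Path("target_images")

def build_image_list() -> None:
    names = []
    for src in sorted(SOURCE_DIR.glob("*.png")):
        tgt = TARGET_DIR / src.name
        if not tgt.exists():
            print(f"skipping {src.name}: no matching target image")
            continue
        # Step 3: both PNGs must have exactly the same resolution.
        with Image.open(src) as a, Image.open(tgt) as b:
            if a.size != b.size:
                print(f"skipping {src.name}: {a.size} vs {b.size}")
                continue
        names.append(src.name)
    # Step 5: list the usable pairs in images.txt.
    Path("images.txt").write_text("\n".join(names) + "\n")

if __name__ == "__main__":
    build_image_list()
```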

Hope that helps :slight_smile:

1 Like

That is the process I followed. I noted that my raw and JPG are listed at the same size, but the raw also has a smaller size in brackets; I think this reflects the border allowed for demosaicing. I cropped a middle portion of the raw that was about 2/3 of the image, and then resized it as you mentioned to around 300 by 500. I copied the crop over to the JPG and did the same.

I will try again but just use the full image and resize or any other combination I can think of…

Yesterday I came back from a weekend trip (during which my camera died :disappointed_relieved: and my kids freaked out :sob:), to find on my doorstep the lens adapter I had ordered for creating LUTs.

So I took out an ancient Canon FD-compatible Tokina 28mm, set my camera to take exposure brackets, and went shooting a training set. Some 180 images were taken, in series of [-2, -1, 0, 1, 2] EV exposures. I tried to get a varied set that covered all the major colors. Then I plugged my camera into the Windows VM, and told the Fujifilm X-Transformer to render these 180 images in each film simulation. This took some time, but was easy to do. At the same time, I exported the same images with Darktable in scene-referred mode, with filmic enabled and at default values, and everything else turned off.

With the training set thus prepared, I installed a zig compiler (available as a snap) and the required libraries (libsdl2-dev, libsdl2-image-dev on Ubuntu). Compiling the zig program was extremely easy. Impressive.

Then, less easily, I figured out the required ImageMagick incantation to convert my training sets to PNGs, and coded up a Python script to apply it to whole directories, with some parallelization for good measure. Surprisingly, this part took the most effort. If you're interested, convert -resize 600x600 -auto-orient source.jpg destination.png works in almost all cases.
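For anyone who wants to reproduce this step, here's a rough sketch of what such a batch-conversion script could look like. It shells out to ImageMagick's convert, which it assumes is on the PATH; the directory arguments are placeholders, and this is not the exact script described above:

```python
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def convert_one(src: Path, dst_dir: Path) -> None:
    """Run the ImageMagick incantation quoted above on a single file."""
    dst = dst_dir / (src.stem + ".png")
    subprocess.run(
        ["convert", str(src), "-resize", "600x600", "-auto-orient", str(dst)],
        check=True,
    )

def convert_dir(src_dir: Path, dst_dir: Path) -> None:
    dst_dir.mkdir(parents=True, exist_ok=True)
    files = sorted(p for p in src_dir.iterdir() if p.suffix.lower() == ".jpg")
    # Each conversion is an independent process, so parallelize across cores.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(convert_one, f, dst_dir) for f in files]
        for fut in futures:
            fut.result()  # re-raise any conversion errors

if __name__ == "__main__":
    convert_dir(Path(sys.argv[1]), Path(sys.argv[2]))
```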

Then I ran @sacredbirdman’s wonderful program, and voila, new Fujifilm LUTs:

In my extremely limited testing on a couple of sample images from before my camera broke, these LUTs seem to look better than Stuart Sowerby’s LUTs I had used beforehand.

But I’ll have to check this in more detail later, and specifically look for defects. These LUTs are at this point less than an hour old. If you find any problems, please let me know! I’d guess that most problems can be fixed with a few more sample images but I’ll need to know what to look for!

5 Likes

Fantastic. I’ll do some tests tomorrow. It’s good to have these at hand for when you just need a quick edit without much thought going into it, plus the film sims look good. Thank you for your work, and sorry to hear about your camera :smiling_face_with_tear:

I wish I could contribute some classic negative samples, but the only camera I have with it is the X100V and from what I’ve read above it won’t work because of the lens corrections, such a shame.

For the record, one of my three cameras died, and it is probably repairable. I already sent it off to be repaired. So not that big a deal, really.

The problem was more that it was the only ILC on that particular trip, and I didn’t get to play with the cool wide angle I had brought on the trip.

1 Like

Thanks so much… I tried using PNGs created in DT, but they didn't work… Any insight on the settings you used to prepare them? As you have done, I was able to install and compile. I could use the single test image pair provided by @sacredbirdman, but when I tried to do the same with a pair of my images the process crashed… If I used either of the images as both the target and source, the process would run, so the images seemed readable by the computational process, but the two as a pair would not work… not sure what I did wrong, other than that the test images were not blurry…