Help Me Build a Lua Script for Automatically Applying Fujifilm Film Simulations (and more)

I like to apply Stuart Sowerby’s Fuji Film Simulations to my Fujifilm raws. However, Fujifilm cameras provide several such film simulations, and it is tedious to manually choose them in Darktable for every picture I have taken.

Thus I wrote this Lua plugin to automate the process:
fujifilm_auto_settings.lua

The plugin works by reading the film simulation from the raw file’s EXIF data and applying a style with the corresponding name. This works well, but it requires the user to have set up the appropriate styles before using the plugin.

Similarly, the plugin reads the aspect ratio of the associated JPG file and applies an appropriate crop style, and reads the Fujifilm dynamic range setting and applies a matching DR style.
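For anyone curious, the core of the approach looks roughly like this. This is a minimal sketch, not the published plugin: it assumes exiftool is on the PATH, that styles with the hard-coded names below (e.g. “astia”) already exist, and that your darktable release still uses the two-argument `register_event` signature.

```lua
-- Minimal sketch: read the Fuji film simulation with exiftool and apply
-- a darktable style of the same name. Style names are assumptions.
local dt = require "darktable"

local film_mode_styles = {     -- exiftool FilmMode value -> style name
  ["Provia"] = "provia",
  ["Velvia"] = "velvia",
  ["Astia"]  = "astia",
}

local function apply_film_simulation(image)
  local raw = image.path .. "/" .. image.filename

  -- read the maker-note film simulation tag (value only, thanks to -s3)
  local handle    = io.popen("exiftool -s3 -FilmMode '" .. raw .. "'")
  local film_mode = handle:read("*l") or ""
  handle:close()

  for pattern, style_name in pairs(film_mode_styles) do
    if film_mode:find(pattern) then
      -- apply the first darktable style whose name matches
      for _, style in ipairs(dt.styles) do
        if style.name == style_name then
          dt.styles.apply(style, image)
          return
        end
      end
    end
  end
end

-- register_event gained an extra name argument in newer darktable
-- releases; this is the older two-argument form
dt.register_event("post-import-image",
  function(event, image) apply_film_simulation(image) end)
```

The crop and DR styles mentioned above would be applied the same way, just keyed on different exiftool tags.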

(Fun fact, this is more Fujifilm-specific adaptation than I have seen in any other raw developer. Yay for scripting interfaces!)

A better solution would modify the development parameters directly, by setting the appropriate LUT and modifying filmic’s contrast setting in the develop module, without going through a style.

Here’s my plea for help: Can a Lua plugin effect such processing changes? If so, how?
Any help on this would be greatly appreciated!

2 Likes

The Lua darktable API doesn’t support image development because Lua isn’t fast enough to be responsive in darkroom mode.

1 Like

That explains it. Thank you.

Then my style-based approach is probably the way to go.

(Although my experience with Lua in game dev and embedded audio suggests that Lua would be plenty fast enough for controlling image editing parameters. But that’s a different discussion for another day.)

Note that the Sowerby / @Jean-Paul_GAUCHE styles / LUTs (New Addition to the “Largest Collection of Presets” list – Fujifilm Simulations by Stuart Sowerby transformed to DT Styles – Open Source Photography) seem to add a definite colour cast:

You may want to check out the Astia LUT by @sacredbirdman.

I’ve tried with the latest master build, and the outcome is the same. The original greyscale gradient (also check the ‘rgb parade’ in the top right):

With the LUT (FujiFilm XTrans III - Astia) turned on; bottom is the original, top is with the LUT applied. Notice the shift in the ‘rgb parade’, too:

Of course, you may prefer that tint as part of the ‘look’.

Sacredbirdman’s LUT:

2 Likes

Thank you for the heads-up! That’s good to know.

Since that LUT still seems to be useful to other people besides me, I decided to revive the tool I made for generating them… It’s now published as open source :slight_smile: Maybe we can use it to create a repository of all kinds of camera/film emulations together…

4 Likes

Awesome! This is something I’ve been wanting to do for a while, so thank you for putting up a full solution!

Whenever I thought about this, my main concern was dealing with lens distortions. I’ll have to check if my camera has the option of completely disabling distortion correction, to be fully compatible with Darktable’s rendering. Come to think of it, it might be preferable to shoot a manual lens, just to have no chance of some in-camera correction to “fix” something.

I’ll try to assemble a test set and see how this behaves!

I also wonder how robust this procedure will be against the small changes introduced between sensor generations, and perhaps more importantly, Darktable versions.

Although come to think of it, it should be fairly trivial to rebuild the test set for each new version of Darktable as it comes out.

There are a few variables to consider, though. I think I’d build against the current scene-referred pipeline with Filmic at the end. Probably with a strongly reduced contrast, though, to preserve shadows and highlights for the LUT’s benefit. The exposure module should probably not correct the in-camera exposure compensation.

It might also be useful to shoot everything in DR400, as those files can be converted to DR100 and DR200, but not vice versa. That way we could build dedicated LUTs for the different DR modes. If I understand this correctly, the minor variations in rendering introduced by varying ISO should be of little concern, right?

Does the current software have some kind of completeness check that verifies all colors have been seen in the training set? I assume there’s some kind of interpolation going on to smooth over missing colors, right? (I haven’t had time to look into the code yet.)

Anyway, this is awesome, thank you!

I knew I forgot something from the quick instructions :slight_smile: Yes, lens distortion & correction is indeed a problem and I’ve always shot these images with adapted lenses so the camera doesn’t try to do anything clever.

I haven’t tried shooting with DR modes… I don’t actually know what kind of processing they apply. If it involves some kind of tone mapping (or other local changes), it will throw off the LUT calculations.

There are quite a few internal parameters in the program that aren’t currently exposed to the user… but they can be tweaked in the code. At the moment it divides the color space into 16×16×16 segments and then calculates a correction vector for every segment (so it doesn’t try to correct every individual color). If a segment doesn’t receive enough samples (at the moment 5 samples), it discards them to prevent outliers from creating strange color casts. If a whole segment doesn’t receive any samples, it is left as neutral and will just be interpolated over.

There are also cut-off points for almost black (values less than 3) and almost white (values above 252), and it uses interpolation (toward neutral) when generating values for those extreme ends… again, this gets rid of weird color casts at the extremes.

Lastly, it interpolates over those color space segments to create the final LUT. This means it won’t mimic the target data with 100% accuracy, but in my experience the result is quite convincing and well-behaved.
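Out of interest, here is a minimal Lua sketch of that binning step as I understand it from the description above. It is not the tool’s actual code; the 8-bit value range and the exact meaning of “correction vector” (mean target-minus-source difference) are my assumptions.

```lua
-- Sketch of the segment binning: divide the RGB cube into 16x16x16
-- segments and average one correction vector per segment.
local SEGMENTS    = 16   -- segments per axis
local MIN_SAMPLES = 5    -- below this, a segment's samples are discarded

local bins = {}          -- bins[key] = { sum = {r,g,b}, count = n }

local function segment(v)                      -- 0..255 -> 0..15
  return math.min(SEGMENTS - 1, math.floor(v * SEGMENTS / 256))
end

local function key(px)
  return (segment(px[1]) * SEGMENTS + segment(px[2])) * SEGMENTS + segment(px[3])
end

-- accumulate one (source pixel, target pixel) pair
local function add_sample(src, dst)
  local k   = key(src)
  local bin = bins[k] or { sum = {0, 0, 0}, count = 0 }
  for c = 1, 3 do
    bin.sum[c] = bin.sum[c] + (dst[c] - src[c])  -- correction vector
  end
  bin.count = bin.count + 1
  bins[k] = bin
end

-- mean correction for a segment, or nil if it saw too few samples;
-- nil segments stay neutral and get interpolated over later
local function correction(k)
  local bin = bins[k]
  if not bin or bin.count < MIN_SAMPLES then return nil end
  return { bin.sum[1] / bin.count,
           bin.sum[2] / bin.count,
           bin.sum[3] / bin.count }
end
```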

2 Likes

Today I tried various modes of processing Fuji files in camera, and it seems that there’s no way of reliably disabling all lens corrections. That’s a shame, as it means I’ll have to shoot a training set instead of just picking images from my catalog. I now have an adapter on order to connect my old Canon FD glass to the Fuji, so that’ll take a few days.

I have always fantasized about implementing some sort of automatic warping that matches the camera JPEG to a Darktable render, to match up pixels in some way. But I worry that’s one rabbit hole too deep for my limited free time.

2 Likes

RawTherapee has such a feature; maybe it could be brought over?

http://rawpedia.rawtherapee.com/Lens/Geometry#Geometric_Distortion

1 Like

Yes, Fuji does not provide a way to disable in-camera lens corrections. I tried using exiftool to wipe out the lens information from the raw files, but it seems that’s not possible. OTOH, there are actually geometric distortion parameters in the raw file, but I don’t know how those would translate to Darktable’s correction module…

Thanks for sharing… your instructions say to save as PNG… any specific parameters, 8-bit, 16-bit, etc.?

This is likely far less sophisticated than what @sacredbirdman has worked up, but I modified it to work with my SpyderCheckr just for fun. The result is an ICC profile targeted to match a provided source image, i.e. a processed JPG… I modified it for the 24-patch ColorChecker and the SpyderCheckr… I can provide the details if you have any interest, or if you want to share a test shot pair (if you happen to have a ColorChecker) I can test it for you and send you the resulting ICC file… This is a bit like darktable-chart, but it creates an ICC file rather than a style…

2 Likes

So I’ve been experimenting a lot with film simulations. Initially I used ICC profiles in darktable. But I never got good results.

In my opinion, the HaldCLUTs for Fuji film simulations also deliver poor results in comparison to the JPG.
I think there are two fundamental problems here:

  1. A HaldCLUT assumes that it gets pixels in a certain basic state. This must correspond exactly to the state that was used when the CLUT was created.

  2. Creating a HaldCLUT is a simple process (see the sketch after this list). Tone value and color changing operations (e.g. the film simulation) are applied to the Hald_CLUT_Identity_12.tif file and the result is saved as a PNG.
    The problem with this is getting Hald_CLUT_Identity_12.tif into the Fuji camera as a RAW in order to apply the film simulations to it.
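To make the identity file concrete, here is a minimal Lua sketch that writes one out. It uses level 8 and a plain PPM purely for brevity; the file mentioned above is a level-12 TIFF, but the layout idea (every r/g/b combination of the LUT, r varying fastest) is the same.

```lua
-- Write a level-8 identity Hald CLUT as a binary PPM. A level-n identity
-- encodes an (n^2)-per-axis 3D LUT in an image of n^3 x n^3 pixels, so a
-- copy of it that has been run through a colour transform can be read
-- back as that transform's LUT.
local level = 8                       -- level 12 would give 1728x1728
local cube  = level * level           -- LUT resolution per axis (64)
local side  = level * level * level   -- image side in pixels (512)

local f = assert(io.open("hald_identity_8.ppm", "wb"))
f:write(string.format("P6\n%d %d\n255\n", side, side))

for i = 0, side * side - 1 do
  local r = i % cube
  local g = math.floor(i / cube) % cube
  local b = math.floor(i / (cube * cube))
  -- scale 0..cube-1 to 0..255 and emit one pixel
  f:write(string.char(
    math.floor(r * 255 / (cube - 1) + 0.5),
    math.floor(g * 255 / (cube - 1) + 0.5),
    math.floor(b * 255 / (cube - 1) + 0.5)))
end

f:close()
```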

I have had much better experiences with darktable-chart.
There is also the option to correct the colors in the color-lookup-table module if necessary.

see:

If you want, you can load Hald_CLUT_Identity_12.tif into darktable and apply the tone curve and color lookup table from a film simulation. The result can be exported as a PNG, which will serve as the HaldCLUT.
This could replace the tone curve and the color lookup table with the 3D LUT module, if the position in the pixelpipe fits.
The question, however, is what has been gained by doing so.

3 Likes

I have been meaning to take a look through your article again. I shared it some time ago, translated to English, on the FB website… I included the pixls.us link… In the past I also had good luck with darktable-chart… I recall Harry Durgin’s video where he went against the grain at the time and added the base curve during creation of the style, as he found it gave better results…

I think all sorts of modules can be applied as long as:

  • they are applied in the same way in the template and the later RAWs.

  • every pixel is changed in the same way.

  • it does not exceed the value range of darktable-chart.

I had already tried it with the base curve. But unlike filmic, the basecurve settings are rarely changed, so there’s no benefit to that.

This PR might have some info for you regarding this topic.

1 Like

8-bit color depth. I’ll add that to the instructions. The recommendation to use PNG is mainly to prevent compression artifacts from interfering with the result.

1 Like

I should have looked at the repo. I see you have sample files, so I could have checked those. Thanks for responding… I am glad you formalized this… thank you for sharing your work…

1 Like

I got it running, but when I tried to use a raw/JPG pair it crashed… I used your test images and it ran fine, and I used my same raw as both source and target and it ran fine, but using a JPG as the target failed. I exported them from DT with the default PNG settings for 8-bit, i.e. compression 5, and resized the width to 300 as they are portraits. Not sure if it’s the settings or something I may have done… do the images converted to PNG all have to originate as raw files?? I didn’t understand that from the description… seems like the target could be an OOC JPG??