I quite recently discovered the collection of HaldCLUT files for all kinds of film stocks, available here (the link is found on the RawPedia page), and I am having a lot of fun trying them out (mostly in darktable). Some have lovely effects on colours which I’d never think of otherwise.
I’m really curious how these were created. Did someone actually shoot the same scenes on digital and on all these different film stocks, so the LUTs could be generated with something like dt-chart? Or, maybe more likely, were they done by eye to match the ‘look’? Either way, it must have been a lot of work.
To all those involved, if you’re reading this, thank you!
Edit: Just found this page; it seems @patdavid at least started it? If not more!
I’ve been shooting mostly film (and more recently scanning and inverting it in RT with the film negative module). And it’s only recently that I’ve started playing with the film simulation HaldCLUT module for my “purely” digital shots.
With that said, I’m very interested in the process, the choices and the tweaks that were made to create so-called “film simulations”. Unfortunately the RawPedia article about the module omits this part.
The number of film stocks available in the RawTherapee Film Simulation Collection is impressive. I’d be glad to hear what kind of images were used as a base for a given look. Were you contributors working from already-inverted scans? Did it involve colour checkers (CC24, 48 or more patches)? In your source images, to what degree was the processing subject to interpretation?
Is there a place I should look to find documentation that would help assess the “validity” of certain looks (i.e. how confident are the authors that a given film simulation HaldCLUT is as realistic as possible)?
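For background on the mechanics (separate from the open question of how the RT collection itself was authored): a Hald CLUT is just a 3D lookup table serialized as an ordinary image, and the usual creation recipe is to generate the identity image, run it through whatever film-like processing you want to capture, and save the result as the CLUT. A minimal sketch in Python/NumPy of the level-8 identity image (the 512×512 layout used by the RawTherapee collection); this is illustrative, not the collection authors’ actual tooling:

```python
import numpy as np

def hald_identity(level=8):
    """Identity Hald CLUT as a (size, size, 3) uint8 array.
    A level-n CLUT has n*n steps per channel and measures n^3 x n^3 px
    (level 8 -> 64 steps, 512x512 image)."""
    steps = level * level               # 64 steps per channel at level 8
    size = level ** 3                   # 512 px at level 8
    i = np.arange(steps ** 3)           # one flat index per CLUT entry
    r = i % steps                       # red varies fastest
    g = (i // steps) % steps
    b = i // (steps * steps)            # blue varies slowest
    rgb = np.stack([r, g, b], axis=1) * 255 // (steps - 1)
    return rgb.reshape(size, size, 3).astype(np.uint8)

clut = hald_identity(8)
# clut.shape == (512, 512, 3); top-left entry is black, bottom-right white.
```

Processing this image through a film-emulation pipeline and saving the output yields a CLUT that replays that pipeline on any photo.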
I don’t remember the contents of the thread, but I would say, from your quote, that I used many shoulds. I may have said this before: the simulation depends on its author’s goal. At the end of the day, they are looks applied to images that didn’t start out that way. A simulation can be:
Variations of what an actual physical combination of film, photographic hardware and darkroom process could achieve (that could be quite broad, since one can do a lot of unconventional things in and outside of the darkroom)
Same as above, but an idealized version to overcome actual or perceived limitations such as going from analog to digital, adapting to viewing conditions, following a manufacturer or brand’s digital style, or changing the temperature or mood
Further idealization, leaning more into the author’s artistic tastes and opinions, which can involve Frankenstein simulations, adding features to technical simulations to stretch their potential, or going in a completely different direction
I guess the first order of business is to place simulations into these buckets.
Another thing that could be useful in documentation is a description of the limitations of each simulation. Not all of them can or should be applied to all images, particularly the more extreme or lopsided simulations and/or raw images. I forget whether the RT module has tools to interpolate and otherwise reduce the chance of a super-unpalatable result, or whether it’s capable of triggering all of RT’s warnings and making the user panic.
I hate to be “that” guy, but the film simulations included on the RawPedia page very closely match the names of some of VSCO’s film presets.
Now, I’m not saying that the original creator ran a program over those presets and generated the HaldCLUTs that way, but he just might have. If that were the case, the original creator of the HaldCLUTs would have no idea how VSCO came up with the film simulation in the first place.
All that aside, being a long-term user of the VSCO presets (packs 1-7), I find the RawTherapee interpretations very close.
Since I’m into the whole “digicam-as-the-main-camera” thing, I found that developing digicam raw files in RawTherapee is not only a breeze; it’s also the only software that will open my Konica Minolta DiMAGE G600 raw files. And let me tell you, putting those raw CCD files through the film simulation LUTs renders by far the closest I have ever seen a digital image look to an actual film scan. The colors are beautiful, and the noise is usually far more uniform and less “blotchy” than on my newer CMOS-sensor cameras. My preferred one is Fuji Superia 200 (which is also my preferred preset in the here-unnameable “LR”).
Running CHDK on my old Canon SD450 allows me to save the images in raw format. These files are far too noisy to be used as color images, but once converted to black and white with the AGFA APX 25 film simulation applied, I couldn’t believe my eyes.
I had never seen AGFA APX 25, but I have definitely seen Tri-X 400, and the results couldn’t possibly be any closer. I immediately started to shoot more and more, since I love the film presets and the whole RawTherapee process so much.
In the past I used to emulate film by overlaying dust and grain layers in GIMP; I would tweak the colors, add random grain, and apply all sorts of processes to my digital images, only to end up with less-than-perfect, digital-looking results. I have found there is no better way to emulate film than accepting whatever noise the original raw file had, using the AMaZE demosaicing method, removing the hot pixels, applying some delightful microcontrast (contrast threshold = 7, quantity = 70, uniformity = 0) to bring out the graininess of the noise, and then applying a film simulation before finally adjusting the tone curves to your heart’s content.
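To make the final step of that recipe concrete: applying a Hald CLUT is, at its core, an indexed lookup into the CLUT image (RawTherapee additionally interpolates between neighbouring entries for smooth results). A rough nearest-neighbour sketch in Python/NumPy, assuming the standard level-8 layout; the identity CLUT here is only for demonstration:

```python
import numpy as np

def identity_clut(level=8):
    # The no-op CLUT, used here to sanity-check the lookup below.
    steps, size = level * level, level ** 3
    i = np.arange(steps ** 3)
    rgb = np.stack([i % steps, (i // steps) % steps, i // steps ** 2], axis=1)
    return (rgb * 255 // (steps - 1)).reshape(size, size, 3).astype(np.uint8)

def apply_hald_clut(image, clut, level=8):
    """Nearest-neighbour Hald CLUT application.
    image: (H, W, 3) uint8 array. Each RGB triple is quantized to the
    CLUT grid and indexes one pixel of the CLUT image."""
    steps, size = level * level, level ** 3
    q = image.astype(np.int64) * (steps - 1) // 255   # 0..steps-1 per channel
    idx = q[..., 0] + q[..., 1] * steps + q[..., 2] * steps ** 2
    return clut[idx // size, idx % size]
```

With the identity CLUT, black and white pixels map to themselves; a film CLUT substitutes its baked-in colour response at each grid point.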
Thanks for sharing all of these resources on emulating film. I’m planning to create a webpage where people can upload high-resolution film scans with color checkers, so that we can gather a large database on film and hopefully emulate it better in the future.
VSCO copied Instagram, and Instagram copied Polaroid, and we have some Polaroid LUTs in there… So your insinuation that the LUTs copy VSCO is strange, since VSCO’s whole existence is a copy of a copy of a copy.
It makes sense that you would think I said that, but I didn’t.
I’m not even saying it’s copying VSCO; I’m saying there is a strong similarity in the naming convention of the presets.
I’m also not saying it’s a good or a bad thing, though we are all certainly benefiting from it.
I’m also not sure VSCO copied Instagram. Regarding the platform? Sure, but they were a photo-preset company before they even copied the Instagram model.
At any rate, VSCO’s and Instagram’s original filters wanted to copy analog film, and so do a lot of these presets… So if they’re trying to emulate the same look/film stock, it isn’t surprising they look the same. But saying “well, maybe you copied them” makes no sense when they are a copy of a copy anyway.
A phone app that gives an analog-film-like look to your smartphone photos.
Thanks for the info! I’ve never considered using digiKam for raw development… you learn something every day. Interesting about the similarity of naming with those presets. I had wondered myself whether there may have been some “inspiration” from other software. But who knows; the naming may have just been seen as a good model…
YES! That’s one way to read my initial question. I want to know what kind of “filter” I’m applying:
Do I have something that’s fantasizing about a rumour of a film look? We’ve all heard “I love those Portra tones” too many times.
I’m not saying “fantasy” VSCO, Instagram, Snapseed, ACDSee, you-name-them filters are worthless; they’re just obscure, pun intended.
Or do I have something that was actually built from known inputs (e.g. a colour checker shot on negative film at, say, 5000K colour temperature, with only the grey gradient neutralised)?
Yes, why not; and at the same time, I wonder what tools would be appropriate to conveniently and objectively describe the effect of a given filter. Would a graph of some sort be adequate?
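On the graph question: one simple, objective description is the per-channel tone curve you get by sampling the CLUT along its neutral (r = g = b) axis; a real film-simulation CLUT bends these curves, and colour shifts off the neutral axis could be charted the same way. A sketch in Python/NumPy under the standard level-8 layout (the identity builder is only there so the example is self-contained):

```python
import numpy as np

def identity_clut(level=8):
    # No-op Hald CLUT, used to sanity-check the sampler below.
    steps, size = level * level, level ** 3
    i = np.arange(steps ** 3)
    rgb = np.stack([i % steps, (i // steps) % steps, i // steps ** 2], axis=1)
    return (rgb * 255 // (steps - 1)).reshape(size, size, 3).astype(np.uint8)

def neutral_axis_response(clut, level=8):
    # Sample the CLUT along r = g = b; returns (input levels, output RGB).
    steps, size = level * level, level ** 3
    g = np.arange(steps)
    idx = g + g * steps + g * steps ** 2      # flat index of each grey entry
    x = g * 255 // (steps - 1)                # input level scaled to 0..255
    return x, clut[idx // size, idx % size]

x, y = neutral_axis_response(identity_clut())
# For the identity CLUT the curves are straight lines: y[:, c] == x.
```

Plotting `y[:, 0]`, `y[:, 1]`, `y[:, 2]` against `x` for any film CLUT gives a compact picture of its contrast and colour cast.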
So, again, kindly pinging @patdavid and possibly other folks here for pointers to documentation on the film simulation collection for RT.
I’d really love to understand the purpose and, more specifically, how each so-called simulation was built.
I used these in LR 10 years ago or so, and I tend to agree with your assessment. For portrait work, I’ve really grown to appreciate the Portra 400 2 HaldCLUT. I’ll back it off to 50-60% or so, but it really gets the job done in a way that I appreciated about those earlier VSCO presets.
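The “back it off to 50-60%” step has a simple interpretation: blend the CLUT output with the untouched image. A minimal sketch (Python/NumPy; the 0.55 default is just an example value, not anything from RT):

```python
import numpy as np

def blend_strength(original, filtered, strength=0.55):
    """Apply a film simulation at partial opacity: linearly interpolate
    between the untouched image and the CLUT result (strength in 0..1)."""
    o = original.astype(np.float32)
    f = filtered.astype(np.float32)
    return np.clip(o + strength * (f - o), 0, 255).astype(np.uint8)
```

At strength 0 you get the original back, at 1 the full look; 0.5-0.6 lands halfway, which is the effect described above.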
These were generally a combination of manual tweaking and fiddling in an iterative process, sometimes using a starting point from others’ work. I’ve written about it before, and based some things on work like what Petteri Sulonen did initially with his Portra-esque, Velvia-esque, Provia-esque, etc. curves.
This was usually less a rigorous application of parameters (as you’ve mentioned: color checkers, specific-temperature lights, etc.) and more an attempt to replicate a feeling for general tones as they applied to subjects. Portra does wonderful things for skin tones, for instance, but I don’t really recall testing anything on other subjects when playing.
Some of those CLUTs were likely also reverse-generated from other color curve packages and there was some work in the G’MIC package by authors far more proficient than me.
That is to say, the process was usually less about color accuracy and more about subjective tweaking to arrive at something that might feel similar in results - pick how far down the rabbit hole you want to travel. (Some folks are happy just crushing and lifting blacks to get a print feeling; others might go farther…) Indeed, these are by nature crude attempts at replication, and certainly fodder for making color scientists’ eyes twitch (please don’t poke or trigger any of the color scientists - they can be… ornery sometimes).
In the end I think it’d be a fun exercise to shoot again in a controlled environment with solid references to revisit the replication, but I just don’t have the time (or energy?) these days to tackle something like this. If you wanted to, I’m willing to purchase some color targets and ship them over to you. (Actually, this goes for anyone who might be interested.)
As far as vouching for the validity or approximation of any of the results - I’d say no, I’m doubtful these would be identical at all. I do think they produce a pleasing result that would trigger some memory of how I saw those films once upon a time…
(Sorry this wasn’t a more solid answer than you might have been hoping for.)