First, I know just enough at this point to be dangerous, so please forgive any gross errors! Also, this is a question about whether an idea I just had is feasible, not a proposal!
I use Fuji X series cameras, and rather enjoy the idea of film simulations. There are lots of discussions out there on film simulations, and some interesting software for generating 3D LUTs from pairs of RAW and in-camera processed images. These appear to work well, but they come with limitations, chiefly that the LUT replaces the scene-referred tone mapper, such as AgX, rather than working alongside it.
In other words, the image processing looks like Y = (g ∘ f)(X), i.e. Y = g(f(X)), where X is the raw image, Y is the processed image, f represents all of the processing up to the LUT (or scene mapper), and g represents either the LUT or the scene mapper.
It seems to me that if the scene mapper at its default settings could be modelled as an invertible function, such as another LUT, then one could look at the image processing as (g ∘ s ∘ f)(X), where f is as before, s is a LUT designed to be applied before scene mapping, and g is the scene mapper, such as AgX.
If that were the case, then one ought to be able to generate the LUT s by using g⁻¹(Y) as the target image, i.e. fitting s so that s(f(X)) ≈ g⁻¹(Y).
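To make the idea concrete, here is a toy numerical sketch. It uses a simple monotone 1-D curve as a stand-in for the scene mapper g (the real AgX works per-channel with additional matrix steps, so this is only an illustration of the fitting idea, not darktable's actual pipeline; the curve and values below are hypothetical):

```python
import numpy as np

# Toy stand-in for a fixed scene mapper g: a monotone curve, hence invertible.
def g(x):
    return x / (x + 0.18)

# Closed-form inverse of the toy curve above (g inverse).
def g_inv(y):
    return 0.18 * y / (1.0 - y)

# Pretend f(X) is darktable's scene-referred output just before the mapper,
# and Y is the target look (e.g. an in-camera JPEG). Both are made up here.
fX = np.linspace(0.01, 4.0, 64)        # scene-referred sample values
Y = g(fX) ** 0.9                       # hypothetical target rendering

# The LUT s should satisfy g(s(f(X))) ~ Y, so its target values are
# the display-referred targets pulled back through the inverse mapper.
target = g_inv(np.clip(Y, 0.0, 0.999))

# The pairs (fX, target) define the nodes of the 1-D LUT s.
lut_nodes = np.stack([fX, target], axis=1)
```

With these nodes in hand, s could be evaluated at arbitrary inputs with ordinary interpolation (e.g. `np.interp`); the open question is whether the real g is well-enough behaved for this inversion to be stable in 3-D.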
Such a LUT would be applied before scene mapping, and would be defined on darktable's usual colour space and bit depth. That would make it a lot more like any other module.
So, is that idea crazy, or is the behaviour of scene mapping modules like AgX (at some fixed setting) too input-dependent for an inverse function to be easily approximated?