I think you add significant value to this subject by introducing and highlighting a subset of the myriad possible ways one might deliver an automation that encapsulates some kind of "picture intelligence" in the translation from a raw file to an intelligible image that humans recognise as a real photograph.
Clearly, the mantra of "just provide a bunch of tools and let the end user figure out their own presets" is one approach, and it works for some, once they have mastered tools such as the base curve, tone curve and filmic rgb, or similarly involved tools in other digital raw photo development apps. It does take some effort to stay on course through the initial challenges with these tools and to identify which of their options are the most effective manipulators of the image.
But whether we like it or not: while some may wish to eke out a living in the wilderness, grow their own food, cut their own hair, raise their own animals, and make their own clothes (I do wonder how far they would go: do they also grow their own cotton and wool, make their own leather and shoes, or drill for their own oil?), my point is that the argument for starting from scratch fails if taken too far. We are all better off using tools, and the intelligence encapsulated in software, to our advantage. If a tool always forces us back to elementary nuts and bolts, that defeats the whole purpose of using a tool; we might as well write our own software. Then again, for some it is exactly this, writing their own software, that has helped improve tools like darktable: they wrote software for themselves, and now it is available to us.
Certainly some sort of initial intelligence that helps us get to first base in image processing has to be the way to go, and I have no issue with also giving the end user access to the nuts-and-bolts settings that created that first base, for further tweaking.
The developer of the filmic module mentioned in another thread that, once upon a time, filmic did have presets, which, I would add, is similar to the approach of providing presets in darktable's base curve module.
Your approach buttresses the observation that arriving at that first-base look may require a set of workflow steps beyond the abilities of any single darktable module, and you are spot on: darktable styles would be a great way to address this.
There is nevertheless an important issue to consider, and I am not sure which term is best to use, because different people call it different things: color science, tone mapping, etc.
I am confident in asserting that the kind of color science/tone mapping that occurs in many digital cameras, or in their software equivalents from the likes of Sony or Canon, and which works behind the scenes in tools from Adobe or Capture One, is dynamic. It is not just simple curves, like the presets of different camera manufacturers' "looks" bundled in darktable's base curve module, but definitely more than that. Who knows how many parameters and rules are working behind the scenes to achieve their "look"? Only those working at these enterprises, and it must be a closely guarded secret, like the formula for Coca-Cola.
Furthermore, these dynamic color science/tone mapping rules are not historically static either. I was fortunate to discover, and have mentioned in other threads, that in Capture One I can dial into the same image, using the built-in presets, some of the look and feel of almost any digital camera's jpeg. My point is that this color science has also evolved over many years, improving with the more recent cameras, and this improvement in image processing is independent of improvements in sensor technology. With Capture One you can take the same image and "age" it by choosing a look from one of the earlier digital cameras, or choose a look from one of the more recent ones, and though subtle, the difference is obvious: more recent photo-intelligence algorithms, or emulations thereof, do a better job of retaining colour, amongst other things.
To express it in darktable terms: old digital photos have a look more like what filmic or the base curve produces when you turn on preservation of chrominance (i.e. enable an option to preserve colours; the base curve module has a similar setting).
With the more recent digital camera emulations, colours are richer, more like what you end up with when chrominance/preserve colours is turned off in the darktable modules. The difference is that in darktable this can lead to oversaturation, whereas the digital camera emulations use what has to be a dynamic approach to desaturation, so the image still looks natural and detail in what would otherwise be saturated, washed-out areas remains visible; contrast is retained.
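To make the distinction concrete, here is a toy Python sketch of the two strategies. The gamma-style curve and the Rec.709 luma weights are illustrative stand-ins, not darktable's actual filmic or base curve math:

```python
# Toy illustration of the two tone-mapping strategies discussed above.
# The curve is an arbitrary gamma-like function, purely for demonstration.

def curve(x):
    """A simple stand-in tone curve (gamma 1/2.2), clamped to [0, 1]."""
    return min(max(x, 0.0), 1.0) ** (1 / 2.2)

def per_channel(rgb):
    """'Preserve chrominance off': apply the curve to R, G, B separately.
    Hue and saturation can shift, often giving richer (or oversaturated)
    colour."""
    return tuple(curve(c) for c in rgb)

def preserve_chrominance(rgb):
    """'Preserve chrominance on': apply the curve to luminance only, then
    rescale all three channels by the same ratio, keeping the colour
    ratios between channels intact."""
    lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]  # Rec.709 luma
    if lum <= 0:
        return (0.0, 0.0, 0.0)
    ratio = curve(lum) / lum
    return tuple(min(c * ratio, 1.0) for c in rgb)

pixel = (0.4, 0.2, 0.1)  # a reddish mid-tone
print(per_channel(pixel))           # each channel curved independently
print(preserve_chrominance(pixel))  # same R:G:B ratios, brighter overall
```

Per-channel mapping compresses bright channels more than dark ones, which shifts hue and boosts perceived saturation; the luminance-ratio version brightens without touching the colour ratios, which is the "washed out but faithful" look described above.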
We find similar evolutions of looks implemented in software over time: filmic, for example, gives us the option to choose the color science of version 3 or version 4 of the module.
Where I notice the color science/tone mapping of the commercial apps and digital cameras deviating from whatever I have come across in open-source apps is in how they address difficult photos, such as overexposure, or areas where the image tends to become saturated. These more involved tools from the leading digital camera and software businesses, in my own words, "bend or twist light" to rescue as much of the image as possible.
At the same time, the commercial apps do their best to avoid two things, oversaturation and out-of-gamut excesses, keeping both significantly controlled simultaneously, albeit at the expense of an image that somewhat deviates from reality!
I use the phrase "bend light" because I find that these more involved algorithms perform selective desaturation of overexposed, or nearly overexposed, areas of the image, and who knows what else, to keep the image pleasing to the eye, if not exactly like the image that was captured.
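As a rough sketch of what such selective highlight desaturation could look like, here is a hypothetical Python example. The knee threshold and the blend law are entirely invented for illustration; they are not taken from any real camera pipeline:

```python
def desaturate_highlights(rgb, knee=0.8):
    """Blend a pixel toward neutral grey as its brightest channel nears
    clipping, so near-overexposed areas lose saturation gradually instead
    of blocking up. The knee value and linear blend are made up for
    illustration only."""
    peak = max(rgb)
    if peak <= knee:
        return rgb  # well below clipping: leave the colour alone
    # blend factor: 0 at the knee, 1 at full clip
    t = min((peak - knee) / (1.0 - knee), 1.0)
    grey = sum(rgb) / 3.0
    return tuple(c + t * (grey - c) for c in rgb)

print(desaturate_highlights((0.3, 0.2, 0.1)))  # below knee: unchanged
print(desaturate_highlights((1.0, 0.5, 0.2)))  # at clip: fully neutral
```

The point of a scheme like this is exactly the behaviour described above: instead of a channel clipping and the area turning a wrong, saturated colour, the transition rolls off smoothly toward white, so detail and contrast survive.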
Therefore any set of presets based on a static transform, such as mapping/estimating the transition between a raw file and its jpeg equivalent, will only work within a set of input photos of a certain exposure. You find similar behaviour with the base curve presets: they work ok when your image is definitely not overexposed, or is somewhat underexposed. Once you go outside the expected inputs, the presets no longer work well.
So there are limits to what a static transform method can achieve. That does not negate the benefit of the approach; it just means that, with a static transform, you may optimally need not one preset but a series of presets that you can try on in quick succession, find the one that brings the image closest to a decent starting point, and then tweak that look further.
In 2020 I was learning photography as a new hobby, going beyond auto mode on my cameras and getting into raw file processing. I got so frustrated with darktable, because of my learner's overexposed images, that I gave up for a few weeks. When I eventually found the willingness to re-approach it, my salvation was creating by hand, through trial and error, a whole set of my own custom base curves as presets, which I could quickly try on to see which most closely matched the kind of images I had been taking. That rescued my faith in digital photography from raw sources: depending on the exposure in the image (at the time I was shooting 100% manual), a different custom curve would render the image more suitably as a starting point. It definitely improved the efficiency of my workflow.
In spite of its limitations, a workflow based on static transforms can be of benefit. In the video world LUTs are already used pretty heavily (they are also supported in darktable via the 3D LUT module), and all the video camera makers provide official LUTs to statically transform video recorded in their Log formats in camera to standards like Rec709. The look-up table approach is a static one, definitely not dynamic: garbage in, garbage out. There are rules for keeping the captured video within certain exposure limits, and once those are adhered to, you get a pretty good result with the LUT straight away. So there is some evidence that a static transform process is not a waste of time, as long as those who use it understand the caveats: the input must conform to certain exposure rules recommended by the camera manufacturer, who also provides the associated transform LUT.
There is also a practice of providing LUTs that address overexposed or underexposed video, so rather than a one-size-fits-all LUT, you have one for each kind of exceptional condition.
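For readers unfamiliar with how a LUT performs its static transform, here is a minimal pure-Python sketch of a 3D LUT lookup with trilinear interpolation. An identity LUT stands in for a real Log-to-Rec709 table; real LUTs (e.g. .cube files) work the same way, just with non-trivial entries:

```python
# Minimal sketch of a 3D LUT: the input RGB indexes into a small cube of
# output colours, interpolating between the 8 surrounding lattice points.

def make_identity_lut(n):
    """An n x n x n LUT that maps every colour to itself."""
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(n)] for g in range(n)] for r in range(n)]

def apply_lut(lut, rgb):
    """Look up one pixel with trilinear interpolation."""
    n = len(lut)
    pos = [min(max(c, 0.0), 1.0) * (n - 1) for c in rgb]  # lattice position
    base = [min(int(p), n - 2) for p in pos]              # lower corner
    frac = [p - b for p, b in zip(pos, base)]             # fractional offset
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                entry = lut[base[0] + dr][base[1] + dg][base[2] + db]
                for i in range(3):
                    out[i] += w * entry[i]
    return tuple(out)

lut = make_identity_lut(17)  # 17^3 is a common LUT size
print(apply_lut(lut, (0.3, 0.6, 0.9)))  # identity LUT: (approximately) the input
```

This also makes the "garbage in, garbage out" point visible: the table has no knowledge of the image, it just maps each input colour to a fixed output colour, so it only works when the input sits in the exposure range the LUT was built for.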
If I may therefore augment your efforts, I would suggest a series of "presets", in your case dstyles, covering a range of inputs that conform to certain exposure compensations, e.g. from -3 to +3 EV, in increments of maybe 1/3 of a stop, or whole stops, i.e. -3, -2, -1, 0, 1, 2, 3, each encapsulating the static transform required to imitate the raw-to-jpeg picture logic at that exposure. That might offer a more comprehensive way to encapsulate the dynamic image intelligence of the digital cameras and their software equivalents, and make it available to darktable users.
Sure, the end user has to try out more than one preset/dstyle to find the one that takes them closest to their goal, then tweak further from there. But this approach should address the shortcomings, outlined earlier, inherent in aiming to provide one preset/dstyle to rule them all.
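The try-in-quick-succession workflow could even be assisted mechanically. A minimal sketch, assuming a hypothetical ladder of styles keyed by whole-stop exposure compensation and some external estimate of how far off the exposure is; the style names and the estimate are invented, not real darktable styles:

```python
# Sketch of the preset-series idea: a ladder of styles keyed by exposure
# compensation from -3 to +3 EV in whole stops. All names are placeholders.

def nearest_style(estimated_ev, styles):
    """Pick the style whose EV key is closest to the image's estimated
    deviation from 'correct' exposure."""
    return min(styles, key=lambda ev: abs(ev - estimated_ev))

# one hypothetical style per whole stop, -3 .. +3
style_ladder = {ev: f"raw-to-jpeg-look_{ev:+d}EV" for ev in range(-3, 4)}

# e.g. an image judged to be about 1.4 stops overexposed
ev = nearest_style(1.4, style_ladder)
print(style_ladder[ev])  # the +1 EV style is the closest starting point
```

Even without any automation, just naming the dstyles by their EV bucket like this would let the user jump straight to the two or three candidates worth trying.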
And one more caveat: the jpeg includes some local and global sharpening, as well as noise reduction, and I suspect that the algorithms used commercially are also dynamic, making use of information such as the ISO as an input.
My point is to set out valid expectations for anyone who uses presets, whether styles or module presets: at this time they can only address issues from a static perspective, and the end user has to appreciate these caveats. For example, they may need to do their own sharpening and local contrast as an additional step to come a bit closer to the jpeg, or whatever sharpening and local contrast is included in a multi-module preset (like a dstyle) may need further tweaking to suit their own image.
But there is hope.
Aiming to emulate the in-camera jpeg is a laudable objective, and best wishes from me with the effort.
Rather than a quest to conform to the in-camera jpeg, I have been on a quest simply to obtain an image that looks somewhat like the scene I captured. In that regard I have not been satisfied with the in-camera jpegs, and in some cases not with the results from tools like Capture One, Sony's own Imaging Edge, or Adobe's, which, while they produce images that are optically correct in contrast and look real, sometimes deviate in colour from what my eye recalls. And this is not about memory: I can be pretty detailed, and when I take an image of, typically, a plant, I may also snap off a branch or leaf (with permission) and compare it with the final image back at my computer. So I am looking at the very thing I photographed, and can tell which app gets the colour right, not just the contrast.
Please note: white balance has been taken out of the equation, because I keep it as identical as possible between the apps. I mention this because some people, and this may be why they go down the tinkerer/tweaker route, want to avoid, adapt, or deviate from the default in-camera look of the jpeg, or the baked-in look that Adobe's or Capture One's tools (or others) confer on the image.
Whichever approach we take, whatever tools we use, and no matter how many controls are made available to the end user, our starting point will already have a baked-in look to some extent, right from the sensor data. Each manufacturer starts the workflow of its custom look right at the sensor, so we cannot avoid this; some of it is already baked into the raw file before any further processing is done.
In the same vein, every approach, including one that provides the end user with the most extensive set of flexible tools, cannot avoid a certain minimum of an in-built "look". I gave the example earlier of filmic's version 3 and version 4 "color science", which you can choose between as options. Whether we like it or not, we are already starting with some kind of "photo intelligence" baked into any set of tools we use. Presets, i.e. a predetermined starting point, something that has been pre-set, are inherent in any approach; we only have limited control over them and cannot change all aspects of the image (unless we write our own software!).
Building a higher-level set of presets is therefore simply going one step further and, by extension, is also a most sensible approach.
I am making one more push to see how far I can get using only filmic to create presets that work, for one key reason: it has been the module which, in my view, produces colours somewhat closest to the scene as captured by the camera, with respect to colour accuracy. In that regard, filmic produces some of the more realistic renditions of the scene. I am still struggling to find easy ways to push the image in the direction I want after filmic has done its job, but that is another item; it could be that the fault lies with the image itself.
In closing, each raw processor has its own "taste", and maybe over time we come to appreciate each one for its individual slant. I find filmic able to produce images with a particular character, when used judiciously of course. Rather than poking you in the eye forcibly, like the larger-than-life first impressions from some raw processing apps, with filmic your eye is drawn into the image and you are invited to take another look, and come back over and over again, to appreciate the finer details of the image, which you inevitably do.
I would definitely say that some of the processing I have done recently with filmic achieves that look-again result, where you want to see the image again, and I can attest that in some ways these images best resemble the original scene, IMHO, especially in colour accuracy. Caveat: this is only true after the chrominance control is turned off!
Here is an example, which uses filmic and very little else. Nothing else I have attempted to process the raw file with gives me colours that look more like the real-life scene.