Yes, not the correct phrase. Perhaps I should have said just using tools that work in scene-referred space, such as filmic and color balance rgb.
I agree that the process may be hobbled; I was really requesting more details on how it would work. I think a general free-for-all may be difficult to manage.
I get it, and it's hard with so many variables in play. The way I look at it, the camera JPEG is not a scene-referred edit, so to judge how well the raw processor reproduces that look, you wouldn't actually do a scene-referred edit: by design it takes a different approach, one that likely wouldn't give you the JPEG edit if you applied the tools as intended, with the math in mind. Maybe that's just my mindset.
I reckon let's just see what happens here, and make sure we don't restart the conversation on another thread…
I think you add significant value to this subject, introducing and highlighting a subset of the myriad possible ways one might deliver an automation that encapsulates some kind of "picture intelligence" in the translation from raw to an intelligible image that humans would recognise as a real image.
Clearly the whole mantra of "let's just provide a whole bunch of tools for the end user to figure out their own presets" is one approach, and it works for some, further to their mastery of those tools, such as the base curve, tone curve and filmic rgb, or any other similarly involved tools in other digital raw photo development apps. It does take some effort to stay on course through the initial challenges with these tools, and to identify which of their options are the most effective manipulators of the image.
But also, whether we like it or not: while some may wish to eke out a living in the wilderness, grow their own food, cut their own hair, raise their own animals, and make their own clothes (I do wonder how far they will go: do they also make their own cotton, wool, leather and shoes, or drill for their own oil?), my point is that the argument for starting from scratch fails if taken too far. We are all better off using tools, and the intelligence encapsulated in software, to our advantage. If the tools force us to always go back to elementary nuts and bolts, this defeats the whole purpose of using a tool; we might as well write our own software. Nevertheless, for some it is exactly this, writing their own software, that has helped to improve tools like darktable: they wrote software for themselves, and now it's available to us.
Certainly some sort of initial intelligence that helps us get to a 1st base in image processing has to be the way to go, and I have no issue with providing the end user with access to the nuts-and-bolts settings that created that 1st base, for further tweaking.
The developer of the filmic module did mention in another thread that once upon a time filmic did have presets, which, I would add, is similar to the approach of providing presets in darktable's base curve module.
Your approach buttresses the observation that it may require a set of workflow steps beyond the abilities of any single module in darktable to arrive at that 1st-base look, and you are spot on: darktable styles would be a great way to address this.
There is nevertheless an important issue to consider, and I am wondering which is the best term to use, cos different people call it different things: color science, tone mapping, etc.
I am confident in my assertion that the kind of color science/tone mapping that occurs in many digital cameras, or in their software equivalents from Sony or Canon, and which we find working behind the scenes in tools from Adobe or Capture One, is dynamic. Not simple curves only, like the kinds bundled in darktable's base curve module as presets of different camera manufacturer "looks", but definitely more than this. Who knows how many parameters and rules are working behind the scenes to achieve their "look"? Only those working at these enterprises know, and it must be a closely guarded secret, like the formula for Coca-Cola.
Furthermore, these dynamic color science/tone mapping rules of image processing are not historically static either. I was fortunate to discover, and have mentioned this in other threads, that using Capture One I can dial into the same image, using their in-built presets, some of the look and feel of almost any digital camera's JPEG. What I mean to draw out of this is that this color science has evolved over many years, improving with the more recent cameras, and this improvement in image processing is independent of improvements in sensor technology. With Capture One, you can take the same image and "age" it by choosing a look from one of the earlier digital cameras, or choose a look from one of the more recent ones, and though subtle, the difference is obvious: more recent photo-intelligence algorithms, or emulations thereof, do a better job of retaining colour, amongst other things.
To express it in darktable terms, old digital photos have a look more like what filmic or base curve produces when you turn on the preservation of chrominance (i.e. turn on an option to preserve colours; in base curve it's a similar thing).
With the more recent digital camera emulations, colours are richer, more like what you end up with when chrominance/preserve colours is turned off in the darktable modules. The difference is that this can lead to oversaturation in darktable, whereas the digital camera emulations use what has to be a dynamic approach to desaturation, so the image still looks more natural, and detail in what would have been saturated, washed-out areas remains visible, i.e. contrast is retained.
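To make the per-channel vs. preserve-chrominance distinction concrete, here is a minimal toy sketch (my own stand-in curve and weights, not darktable's actual code): applying a tone curve per channel lets channel ratios drift, while applying it to luminance only and rescaling all channels by one ratio keeps the chrominance intact, at the risk of pushing the strongest channel into clipping.

```python
# Toy sketch of the two strategies (stand-in curve, not darktable's code).
# RGB values are linear, in [0, 1].

def curve(x):
    # Stand-in tone curve: a simple gamma lift that brightens midtones.
    return x ** 0.45

def tonemap_per_channel(rgb):
    # Curve applied to each channel independently ("preserve chrominance" off):
    # channel ratios drift, so saturation and even hue can shift.
    return tuple(min(curve(c), 1.0) for c in rgb)

def tonemap_preserve_chrominance(rgb):
    # Curve applied to luminance only, all channels rescaled by one ratio
    # ("preserve chrominance" on): channel ratios are kept, but the strongest
    # channel can be pushed into clipping, i.e. oversaturation.
    y = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]  # Rec.709 weights
    if y == 0.0:
        return rgb
    ratio = curve(y) / y
    return tuple(min(c * ratio, 1.0) for c in rgb)

red_dominant = (0.9, 0.2, 0.1)
print(tonemap_per_channel(red_dominant))           # ratios change, nothing clips
print(tonemap_preserve_chrominance(red_dominant))  # ratios kept, red clips at 1.0
```

This is of course only the skeleton of the idea; the real camera pipelines add dynamic, image-dependent desaturation on top, which is exactly what a static curve cannot do.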
We find similar evolutions of looks implemented in software over time: filmic gives us the option to choose the color science of version 3 or version 4 of the module, for example.
Where I notice the color science/tone mapping of the commercial apps and digital cameras deviating from whatever I have come across in open-source apps is in how they address difficult photos, such as overexposure, or aspects where the image has a tendency to become saturated. These more involved tools from the leading digital camera and software businesses, in my own words, "bend or twist light" to rescue as much of the image as possible.
Yet the commercial apps also do their best to avoid two things, oversaturation and out-of-gamut excesses, keeping both of these challenges significantly controlled at the same time, albeit at the expense of an image that somewhat deviates from reality!
I use the phrase "bend light" cos I find that these more involved algorithms perform selective desaturation of overexposed or almost-overexposed areas of the image, and who knows what else, to keep the image pleasing to the eye, if not exactly like the image that was captured.
Therefore any "static" set of presets, based on a static transform, such as mapping or estimating the transformation between a raw file and its JPEG equivalent, will only work within a set of input photos of a certain exposure. You find similar behaviour with the base curve presets: they work OK when your image is definitely not overexposed, or is somewhat underexposed. Once you go outside of the expected inputs, the presets no longer work well.
So there are limits to what a static-transform method can achieve. This doesn't negate the benefit of the approach; it's just stating the obvious that there may then need to be not one preset but, using a static-transform approach, optimally a series of presets that one can try on in quick succession, to find the one which best brings the image closer to a decent starting point, which we can then tweak further.
In 2020 I was learning photography as a new hobby, going beyond auto mode on my cameras and getting into raw file processing, and I got very frustrated with darktable and my learner's overexposed images. When I eventually found the willingness to re-approach darktable after a few weeks of giving up, my salvation was fortuitously creating by hand, through trial and error, a whole set of my own custom base curves as presets, which I could quickly try on to see which most closely matched the kind of images I had been taking. That rescued my faith in digital photography from raw sources. Depending on the exposure of the image (at the time I was shooting 100% manual), a different custom curve would render the image more suitably as a starting point. It definitely improved the efficiency of the workflow.
In spite of its limitations, the static-transform workflow can be of benefit, and in the video world it is already used pretty heavily via LUTs, which are also supported in darktable via the lut 3D module. All the video camera manufacturers provide official LUTs to statically transform videos recorded to their in-camera log formats into standards like Rec.709. The look-up table approach is a static one, definitely not dynamic: garbage in, garbage out. There are rules for ensuring that the video is captured within certain exposure limits, and once these are adhered to, you get a pretty good result with the LUT straight away. So there is some evidence that a static-transform process is not a waste of time, as long as those who use it understand the caveats: the input must conform to certain rules of exposure, recommended by the manufacturer of the camera who also provides the associated transform LUT.
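As a toy illustration of why a LUT is a purely static transform, here is a sketch of a 1D lookup table with linear interpolation (real video LUTs are usually 3D, e.g. .cube files, but the principle is the same). The table entries are made-up numbers; note that out-of-range input is simply clamped, with no dynamic rescue.

```python
# Toy 1D LUT: a fixed, evenly spaced table applied per channel.
# The mapping never adapts to the image: garbage in, garbage out.

def apply_lut_1d(value, lut):
    """Map a value in [0, 1] through an evenly spaced 1D LUT."""
    value = min(max(value, 0.0), 1.0)      # clamp: no dynamic rescue of extremes
    pos = value * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac  # linear interpolation

# Hypothetical 5-entry "log to display" table (made-up numbers).
log_to_rec709 = [0.0, 0.18, 0.45, 0.75, 1.0]

print(apply_lut_1d(0.5, log_to_rec709))   # mid-range value, read from the table
print(apply_lut_1d(1.3, log_to_rec709))   # over-range input just clamps
```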
There is also a practice of providing LUTs which address overexposed or underexposed video, so rather than a one-size-fits-all LUT, you have one for each kind of exceptional condition.
If I may therefore augment your efforts, I would suggest a series of "presets", in your case dstyles, which cover a range of inputs conforming to certain exposure compensations, e.g. from -3 to +3 EV, in increments of maybe 1/3 of a stop or in whole stops (i.e. -3, -2, -1, 0, 1, 2, 3), each encapsulating the static transforms required to imitate the raw-to-JPEG picture logic for a different exposure. This might offer a more comprehensive way to encapsulate the dynamic image intelligence of the digital cameras and their software equivalents, and make it available to darktable users.
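The bracketed-preset idea could be sketched like this (the style names are hypothetical; each stop of exposure compensation corresponds to a 2^EV linear gain):

```python
# Sketch of a bracketed preset series: one style per exposure offset.
# Names and structure are hypothetical; only the 2**ev gain math is standard.

def preset_series(ev_min=-3.0, ev_max=3.0, step=1.0):
    presets = []
    ev = ev_min
    while ev <= ev_max + 1e-9:        # small tolerance for float accumulation
        presets.append({
            "name": f"jpeg-look {ev:+.1f} EV",   # hypothetical style name
            "exposure_gain": 2.0 ** ev,          # linear multiplier for that EV
        })
        ev += step
    return presets

for p in preset_series():
    print(p["name"], round(p["exposure_gain"], 3))
```

With `step=1/3` you get the finer 1/3-stop series (19 styles) instead of 7 whole-stop ones.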
Sure, the end user has to try out more than one preset/dstyle to find which one takes them closest to their goal, then tweak further from there. But this approach should address the shortcomings, outlined earlier, inherent in aiming to provide one preset/dstyle to rule them all.
And one more caveat: the JPEG has some local and global sharpening included, as well as noise reduction, and I suspect that the algorithms used commercially are also dynamic, making use of information such as the ISO as an input.
My point is to set out valid expectations for anyone who uses presets, whether styles or module presets: these can, at this time, only address issues from a static perspective, and the end user has to appreciate the caveats, e.g. be aware that they need to do their own sharpening and local contrast as an additional step to come a bit closer to the JPEG. Or whatever sharpening and local contrast is included in a multi-module preset (like a dstyle) may need to be further tweaked to suit their own image.
But there is hope.
Aiming to emulate the in-camera jpeg is a laudable objective, and best wishes from me with the effort.
Rather than a quest to conform to the in-camera JPEG, I have been on a quest simply to obtain an image that looks somewhat like the scene I captured. In that regard I have not been satisfied with the in-camera JPEGs, and in some cases I have also not been satisfied with the results of tools like Capture One, Sony's own Imaging Edge, or Adobe's, which, while they produce images that are optically correct in contrast and look real, sometimes deviate the colours from what my eye recalls. And this is not about memory; I can be pretty detailed: when I take an image, typically of a plant, I may also snap a branch or leaf off it (with permission) and compare it with the final image when back on my computer. So I'm looking at the very thing I took an image of, and can tell which app gets the colour right, not just the imaging contrast.
Please note: white balance has been taken out of the equation, cos I keep that identical between the apps as much as possible. I mention this because some people, and this may be why they go down the tinkerer-tweaker route, want to avoid, adapt, or deviate from the default in-camera look of the JPEG, or the baked-in look that Adobe's or Capture One's tools (or others) confer on the image.
Whichever approach we take, whatever tools we use, and no matter how many controls are made available to the end user, our starting point will already have a baked-in look to some extent, right from the sensor data, cos each manufacturer starts the workflow of its custom look right at the sensor. We cannot avoid this, as some of it is already baked into the raw file before any further processing is done.
In the same vein, every approach, including one with the most extensive set of flexible tools provided to the end user, cannot avoid a certain minimum of an in-built "look". I gave the example earlier of version 3 and version 4 of filmic's "color science", which you can choose as options. In other words, whether we like it or not, we are already starting with some kind of "photo intelligence" baked into any set of tools we use. "Presets", i.e. predetermined starting points that have been pre-set and that we have only limited control over, are already inherent in any approach; we cannot change all aspects of the image (unless we write our own software!).
Therefore building a higher-level set of presets is simply going one step further, and is by extension also a most sensible approach.
I'm making one more push to see how far I can get using only filmic to create presets that work, for one key reason: it has been the module, or tool, which produces colours that, in my view, are closest to the scene as captured by the camera, with respect to colour accuracy. In that regard, filmic produces some of the more realistic renditions of the scene. I am still struggling to arrive at easy ways to push the image in the direction I want after filmic has done its job, but that's another item; the fault could come from the image itself.
Each raw processor, in closing, has its own "taste", and maybe over time we appreciate each one for its individual slant. I find filmic able to produce images with a particular character, when used judiciously of course. Rather than poke you in the eye forcibly, like some results of certain raw processing apps with their larger-than-life first impressions, with filmic your eye is drawn into the image and you are encouraged, invited, to take another look, and come back over and over again to appreciate the finer details of the image, which you inevitably do.
I would definitely say some of the processing I have done recently with filmic achieves that look-again result, where you want to see the image again, and I can attest that in some ways the results best resemble the original scene, IMHO, in certain aspects, especially colour accuracy. Caveat: this is only true after the chrominance control is turned off!
Here is an example, which uses filmic and very little else. Nothing else I have attempted to process the raw file with gives me colours that look more like the real-life scene.
Could you go the PlayRaw route? Pick a couple that you struggle with and share them, to see how others work on them, rather than continuing with trial and error.
Thanks, great suggestion. If I do get completely stuck and have exhausted all logic known to me, I will reach out and upload some raw files and darktable (or other apps') sidecar files, as well as the outcome of my own efforts, and ask for help.
With base curve, my challenges led me to go back to improve my gear and technique. I’d like to confront my challenge with filmic with the same resolve, to see what changes I could make at source, my hunch being the answer may lie in addressing lighting conditions, as well as exposure, as I only shoot with natural light outdoors.
Who knows, maybe filmic is the only "app" that's telling the truth, while others are telling you what you want to hear, even if it's not as accurate…! If I ever get the filmic settings and my image-taking to sync up into outstanding pics, I'll be back to share these on its forum thread.
The key to filmic is not clipping your highlights.
It’s really not rocket science…
These were my thoughts also, but it appears there must be a variety of highlight-clipping scenarios.
With the image I posted, there was no highlight clipping in the raw image, neither in the combination of the red, green and blue channels nor in any single channel. So on the camera, as well as using the indicator tools in the photo editors, there was absolutely no clipping of any highlights in the source.
But in this case there was a strong predominance of the red channel in the raw file itself (it's attached). So any attempt to brighten the image, because of the predominance of the red channel, pushes it into saturation. In processing I've adjusted white balance to 5500 K, cos this looked more accurate to the scene (no significant impact on the red channel with this change).
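The headroom problem described here can be sketched numerically (the sample values below are made up, not taken from the attached raw): with a strong red predominance, the largest exposure gain you can apply before any channel clips is dictated by the red channel, even though nothing is clipped in the source.

```python
import math

def max_safe_gain(pixels, white=1.0):
    """Largest linear gain keeping every channel of every pixel at or below `white`."""
    peak = max(c for px in pixels for c in px)
    return white / peak

# Hypothetical linear RGB samples from a red-dominant image (made-up numbers):
# nothing is clipped, but red carries the highest value everywhere,
# so red alone limits how much brighter the image can go.
samples = [(0.80, 0.30, 0.25), (0.72, 0.28, 0.22), (0.65, 0.40, 0.30)]

gain = max_safe_gain(samples)
print(f"max safe gain: {gain:.3f}x ({math.log2(gain):+.2f} EV)")
```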
I’ve been able to find workarounds using masks, but I’d love to avoid too much tinkering with an image, cos what I enjoy is taking the photos and looking at the end result, not hours of slaving over a pic, or what I call light bending!
A nice challenge, to figure out what's the best practice for getting it optimal at the source. I do remember in the old days, if you were attending an event at a TV station, there were some colours you were advised not to wear… Could be something related… There is some info out there about how red and blue are amplified more than green in the sensor. Anyway, a good challenge, if I can resolve this at source, so I can easily add more "light" to it in post without running the risk of saturating any colours, or introducing out-of-gamut issues in the photo editor, none of these being issues which exist in the raw file.
DSC01997.ARW (16 MB)
I have attached the raw file in a post before this response, so it's easier to discuss in case anyone wishes to appreciate the challenge. In simple terms, there was a lot of red channel in the raw file, but not clipped in any way.
So any attempt to make the image brighter tends to saturate the red channel. Rather than do this, I had to leave the final image a bit darker than intended… I'd hope to find a solution that fixes this at the time of picture taking, rather than have to mess with the colour channels in post and "distort" the colours… That's my wish; we'll see how this goes.
Some more practice out on photo-taking sessions, and coming back to view the results on the computer, should lead to a better understanding and a solution, or to accepting that under certain conditions there are limitations in the capture process that lead to this kind of constraint…
Appreciated. I think I'm already an adherent of the process, and well adapted to the scene-referred workflow.
I've attached the raw image in a prior response, in case anyone wishes to try with filmic, and also to avoid any saturation while making the image brighter than what I have achieved, using simple methods like a bit of brightness, not any complex arcane approach.
The image was well captured, no issues there, but it seems we are running into one of the well-known issues with attempting to make optical changes to a digital image, such as making it brighter; in this case it becomes a bit of a challenge. A compromise.
The apps which enable a brighter image also distort the colour from what I know it is. The app which gets the colour right, darktable's filmic module (and I think it does a good job of this), then establishes an image which is difficult to make bright without running into saturation and out-of-gamut issues.
The image was shot in the shade not direct sunlight, so maybe more light at source might have been the ideal solution rather than attempt to fix it in post… via digital tricks of the trade…
@OK1 can you upload your xmp file for this image as well?
So I'm not quite sure what you are looking for. The following image is brightened a bit (exposure, filmic white/black points), colours not too saturated (color balance rgb), with a bit of sharpening applied (contrast equalizer). It doesn't seem to need much in the way of denoising.
DSC01997.ARW.xmp (6.2 KB)
And here is the out-of-camera JPG:
This one seemed pretty straightforward to me. I darkened the shadows a bit to try to make the flower pop out more. This is using a recent master build of darktable.
Only exposure, filmic rgb, local contrast, then tone equalizer to darken the shadows.
DSC01997.ARW.xmp (7.1 KB)
Much the same for me. 1 minute edit in current darktable master. I maybe pushed the saturation a bit more than I normally would:
DSC01997.ARW.xmp (6.7 KB)
Brilliance in color balance rgb might do the trick; try that…
I tried a little but wasn’t sure how bright or desaturated you were looking for??
And I think the key is not trying to get so much out of it. It can certainly give a great result at times, wrt all aspects of a photo, and at other times it should just be used to map the dynamic range and leave the rest to other tools. I still think this is often the issue, i.e. people trying to get "too" much out of filmic rather than just using it for its primary mission of managing dynamic range…
I still don't understand what people are expecting to get out of it. I literally only expect it to set the upper and lower bounds of my data. Sometimes I don't need to do much more than that, but if I do, it's mostly in tone equalizer or color balance, not in filmic itself (highlight reconstruction aside).
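For readers wondering what "setting the upper and lower bounds of the data" means in practice, here is a rough sketch of the idea (a simplified log normalization with illustrative default values, not darktable's actual filmic code, which follows this with an S-shaped contrast curve): scene-referred values between the chosen black and white relative exposures are mapped into [0, 1], and everything outside is pinned to the bounds.

```python
import math

def map_dynamic_range(value, grey=0.1845, white_ev=4.0, black_ev=-8.0):
    """Map a linear scene-referred value into [0, 1] between the black and
    white relative exposures (in EV around middle grey). Simplified sketch:
    grey/white/black defaults here are illustrative, not darktable's."""
    if value <= 0.0:
        return 0.0
    ev = math.log2(value / grey)                 # exposure relative to grey
    t = (ev - black_ev) / (white_ev - black_ev)  # normalize to [0, 1]
    return min(max(t, 0.0), 1.0)                 # pin everything outside

print(map_dynamic_range(0.1845))           # middle grey lands at 8/12
print(map_dynamic_range(0.1845 * 2 ** 4))  # the white point maps to 1.0
```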
I think the discussion has veered off from the topic; the content is good and interesting though. I suggest we move this to a new thread @paperdigits.
I agree we should stick to styles, but let's face it, filmic is the main player in a style that uses the scene-referred workflow, so as long as the discussion sticks to how one might better use darktable to best match a JPEG, it is still on topic as far as I can see. Not that I really care one way or the other. I think one experiment would be to actually see how people do on a few photos they struggle with, using both workflows, rather than comparing against each other: is one or the other actually better and easier for getting the JPEG look? Then maybe the answer is to use that one. If it turns out not to be scene-referred, then maybe use scene-referred for the technical advantages it can offer, but don't try to shape it into the JPEG look if the other method is better suited to that particular goal… In any event, it will be interesting to see what people propose.
This is the little modification I mentioned earlier to the colormatch script, to use it with a SpyderCheckr 24. The raw/JPEG-matched ICC gives nice results in my opinion, and could assist in getting closer to the JPEG out of the gate.