If editing is a lower priority (and believe me, I get that), I think that’s a decent approach to photography.
I used to handle things that way, but I came to realize that in a lot of cases I was applying a tone curve that reversed the effect of the “base” curve, or whatever was applied to get the first notion of a “result”. And that slewing of the image data first one way, then the other, accumulates hue shift that can eventually become noticeable if you lay curve upon curve.
If you realize that and take care with how many tone curves you lay into the workflow (the last of which is the display/export color profile, which we tend to overlook), you can get decent results. Just realize that every tone curve in the workflow adds to the hue shift…
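To make the accumulation concrete, here is a tiny Python sketch (the color value and the curve exponents are made up purely for illustration) showing how stacking two per-channel power curves nudges the hue of a saturated color, even though the second curve partially undoes the first:

```python
import colorsys

def apply_curve(rgb, gamma):
    """Apply a simple per-channel power curve (a stand-in for any tone curve)."""
    return tuple(c ** gamma for c in rgb)

# A saturated orange in linear RGB (hypothetical example value).
orig = (0.8, 0.4, 0.1)

# Stack curves: a strong "base" curve followed by a partial counter-curve.
curved = apply_curve(apply_curve(orig, 1 / 2.2), 1.8)

h_orig = colorsys.rgb_to_hsv(*orig)[0] * 360
h_curved = colorsys.rgb_to_hsv(*curved)[0] * 360
print(f"hue before: {h_orig:.1f} deg, after: {h_curved:.1f} deg")
```

The hue drifts by a couple of degrees even though the two curves nearly cancel in brightness; each extra curve in the stack adds its own drift.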
This first Hue paper was interesting. I would summarize it in the following way:
- Reduce dynamic range using rgb ratio
- Introduce hue shifts following the Bezold–Brücke effect
- Map out of gamut colors to the display saturation boundary
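A minimal sketch of the first bullet, assuming a max-RGB norm and a made-up Reinhard-style curve (the paper's actual norm and curve may well differ): the tone curve is applied to a single norm of the pixel, and all three channels are scaled by the same ratio, so the channel ratios (and thus the hue) are untouched.

```python
def tonemap_preserve_hue(rgb, curve):
    """Scale RGB by curve(norm)/norm so channel ratios (hue) are preserved."""
    norm = max(rgb)  # max-RGB norm; other norms (luminance etc.) also work
    if norm == 0:
        return rgb
    ratio = curve(norm) / norm
    return tuple(c * ratio for c in rgb)

# Hypothetical tone curve: simple Reinhard-style compression of [0, inf) into [0, 1).
curve = lambda x: x / (1 + x)

bright_red = (4.0, 0.5, 0.25)
mapped = tonemap_preserve_hue(bright_red, curve)
# mapped is just a scaled copy of the input, so its hue is identical.
```

The second bullet (Bezold–Brücke shifts) would then be applied as a deliberate, controlled correction on top of this hue-stable base rather than falling out of per-channel curves by accident.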
Here is a good illustration from handprint: colormaking attributes
(That page has a ton of other stuff to digest as well at a later stage)
If I read the diagram correctly, this effect is similar to what we get with the crosstalk option. It is at least in the same direction. This would explain why it seems natural that bright objects shift like this.
They are not focusing on the problem of out-of-gamut saturation; it’s only briefly mentioned. So I’m a bit hesitant to draw that conclusion from this paper.
The second paper is more involved and it, more importantly, includes local effects for the tone mapping. I’m currently working with the hypothesis that most local effects are better dealt with as separate modules earlier in the darktable pipeline.
And finally, that post by sankos you linked @afre has some interesting links that finally took me all the way to a small open-source program called dcamprof. A one-man effort at making profiles that are as correct as possible for later use in image editing pipelines. I tried to have a look at the code, and his approach seems to be based on what I have named preserve hue. He names it after the fact that it seems to be used in the Adobe dng tone curve. This result is then modified to match the saturation of the original picture, and some hue adjustments are finally applied to the highlights. These include making red colors more yellow for pleasing sunsets. Again similar to the effect with crosstalk, but just for a selected range of colors.
Ya I use dcamprof… it also has a commercial version with a GUI: Lumariver Profile Designer
Given the fact of rod intrusion, mesopic vision is actually tetrachromatic.
That’s a cool snippet from that page and also quite scary when you think about the consequences…nice!
Can this program be integrated into darktable to make better camera profiles? Or is the present one in the colour calibration module sufficient?
As it stands, dcamprof is mostly about generating DCP profiles, which DT does not support. But I think @jandren is looking at the dcamprof code because it has some commands that can be used to tweak those profiles, and it has a neutral tone reproduction operator that can be introduced to control hue shifts… I can’t speak for @jandren, but I think this is what he is looking into wrt the code and how it might be leveraged in his work on the sigmoid tone mapping…
dcamprof does not only create DCP profiles; it also creates very good ICC profiles for Capture One. The only thing is that you have to export your ColorChecker shot as a proper TIFF file, which is explained in detail for Capture One. If someone can explain how to export it from DT, we can use it to create an ICC profile for DT. Here is the link: http://lumariver.com/lrpd-manual/#make_icc
Nice to hear from some dcamprof users!
@Rajkhand Doing any form of integration of either dng or icc profiles from dcamprof is out of scope for what I’m trying to figure out in this thread. I’m also not sure if it at all fits with the darktable scene-referred pipeline. I just found it to be an interesting example with public source code which makes it possible for me to try to learn something from it which hopefully could propagate to a better sigmoid module. But something you could help me with is providing some example images where you think it works well in Capture One. I.e. RAW + lightly edited jpeg. I’m especially interested in how it behaves with bright objects in the image.
@PhotoPhysicsGuy uff, tetrachromatic? That might cause problems… Wonder if it can be exploited for a better model!
Rewriting color-theory to be 4-dimensional seems like a monumental task. It’s a long discussion in 3 dimensions already. Or it may be 1D for achromatic light and 2.5D instead of 2D for chromatic stuff?
Haha, yeah, like international research team sponsored by large industry partner monumental! Could be an interesting PhD position though
There’s a solution for this with argyllcms & darktable.
See pcode — Darktable Camera Color Profiling
But it’s easier to just use the new color calibration enhancement in the upcoming 3.6, which takes the calibration from a color checker target.
I don’t have Lumariver, only the command-line dcamprof, but as for DT, the proper way to generate the TIFF for color profiling programs is generally as outlined here: GitHub - pmjdebruijn/colormatch: ColorMatch. The key is that the input and output profiles should be the same. This way DT does a direct pass-through and gives you a TIFF with no color corrections.
Actually, Pascal’s script using argyllcms and a jpg/raw pair does a nice job of matching the two, if that is something people want to try. I haven’t gotten around to comparing a custom ICC with Aurélien’s correction in color calibration yet… maybe doing both would really nail the color: make a custom ICC and then further tweak it with color calibration…
I think you could actually double up. So you could make that custom profile, use it, and then add the color calibration, which in the end is like a set of channel mixer tweaks to try to match the color better. At least you could try a couple of times to see if it was worth it? But I think we are hijacking the topic a bit…
It’s what I use to make LUT ICC camera profiles from spectral measurements.
Note that in darktable 3.6, the colour calibration module will support improving colours using a Color Checker.
I think he asked about it already above but was just not sure if it was any good…
It would be interesting to use the evaluation features and compare the standard profile vs a custom one, to see what sort of delta E difference there is between them using the color correction profile assessment.
The story of hot lightsabers and why even monochromatic lasers should desaturate at high brightness levels.
Why do lightsabers in both the old and the new movies look super bright and highly energetic even on both prints and low dynamic range screens?
First an example from one of the classics:
And one from the third trilogy
Compare that with this disaster
Why is this last one so bad compared to the two from the movies? Whoever made it thought that the actual blade should be in bright, saturated, displayable colors, while the movie frames are essentially white. This is quite understandable; just study your local traffic lights next time you are out for a stroll at night. They do not desaturate at all to our human eyes, but they still look bright. Yet it somehow does not work for images and paintings. Have a look at this classic lightsaber VFX tutorial: VIDEO COPILOT | After Effects Tutorials, Plug-ins and Stock Footage for Post Production Professionals. Andrew uses a white solid for the blade and a colored glow around it to make it look bright, and it works: it’s white, but we perceive it as a self-luminous colored lightsaber.
Here is a more modern version of the same tutorial where Ian Hubert takes advantage of the ACES-like desaturation of high brightness colors and adds a scene-referred glow to make it, well glow!
So why does this work so well? It’s called the glare effect, or grey glow illusion, and it makes it possible for reflective materials like a print to appear emissive. Here is a classical test image and some papers to read if it is of interest:
But how does this translate into image editing? Here is my take. A global scene-to-display transform should smoothly converge to pure white as the brightness goes up; it actually should not preserve the saturation of highlights. The problem is that good lenses are designed to suppress glare like this, which causes bright light sources to just be white with no punch. I generated my own version of this using Blender, one render with and one without glare (known as bloom in Blender). A good lens will produce the second version without bloom, which will look much worse than a lens with some imperfections! Import the exr, add sigmoid with preserve hue, and a bit of exposure tuning to reproduce the results.
glare triangle bloom.exr (21.7 MB)
glare triangle.exr (1.1 MB)
The solid triangle in the above two images is the same, think about that for a second! A global tone mapper, which only works on individual pixel values, could and should never take the second image without bloom and produce anything like the version with bloom. The correct result for a global tone mapper, be it sigmoid, filmic, or something else, is instead just the white triangle. The bloom effect has to be created either manually when shooting, by some filter or dirt on the lens, or in post in a separate module.
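For intuition, here is a toy per-channel sigmoid in Python (not darktable’s actual sigmoid implementation, and the saber color is made up) showing why a global transform converges to white: push the exposure of any color up, and all three channels saturate toward 1.0 together, so the displayed color desaturates toward white on its own.

```python
import math

def sigmoid(x):
    """A simple per-channel sigmoid-like display transform, mapping [0, inf) to [0, 1).
    Illustrative only; real modules use more carefully designed curves."""
    return x / math.sqrt(1 + x * x)

def tonemap_per_channel(rgb):
    return tuple(sigmoid(c) for c in rgb)

# A hypothetical blue "saber core" color in linear scene-referred values.
saber_blue = (0.5, 1.0, 4.0)
for stops in (0, 2, 4, 6):
    exposed = tuple(c * 2 ** stops for c in saber_blue)
    print(f"+{stops} EV -> {tonemap_per_channel(exposed)}")
```

At +0 EV the output is clearly blue; by +6 EV all three channels are above 0.99 and the pixel is effectively white, which is exactly the “correct” white triangle a global tone mapper should produce.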
I hope that some of you darktable users are now shouting, “but the bloom module, we have the bloom module for this!” And you absolutely should, because that is the correct way of solving this in post! The only problem is that the result of that module is, weeeeeelll…, kind of disappointing. The reason? Simple: the module assumes a display-referred workflow and needs an update to scene-referred (operating on rgb ratios instead of L in Lab) for it to work as expected.
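A sketch of what scene-referred bloom means in practice, on a 1D scanline with made-up values: the energy above a threshold is blurred in linear light and added back before any display transform. The threshold, radius, and strength here are illustrative, and a real module would use a large Gaussian blur rather than this naive box blur.

```python
def box_blur(values, radius):
    """Naive box blur; a real module would use a large Gaussian."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def scene_referred_bloom(scanline, threshold=1.0, radius=2, strength=0.2):
    """Blur only the energy above `threshold` in LINEAR light, then add it back.
    Doing this before the display transform is the key difference from
    blurring L in Lab, which operates on nonlinear display-ish values."""
    spill = [max(0.0, v - threshold) for v in scanline]
    glow = box_blur(spill, radius)
    return [v + strength * g for v, g in zip(scanline, glow)]

# A dark scanline with one very bright "lightsaber" pixel (linear values).
line = [0.05, 0.05, 0.05, 16.0, 0.05, 0.05, 0.05]
print(scene_referred_bloom(line))
```

Because the spill is proportional to the linear energy above the threshold, a pixel 16× over white contributes 16× the glow, which is what makes the halo read as physically bright after the display transform.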
PS I can’t stop laughing over how bad the darktable bloom module looks on this test image. DS
Enjoy your long form posts. Hope D isn’t feeling particularly litigious.
The darktable user manual already states quite clearly that when working with a scene-referred workflow, the “bloom” module should be avoided, for the exact reason that it does its blurs in Lab space:
So, no need for litigation
There is a new “diffusion” module based on heat transfer equations that is proposed, which works in linear scene-referred space, and it might be interesting to apply that module to this sort of problem.