We already have a bunch of (good) ways of working with colors globally, but affecting hue, saturation and brightness based on color ranges is a pain in the scene-referred workflow, and I honestly don’t understand those in the thread who think color calibration and color balance rgb make the color equalizer redundant.
Nine instances of color balance rgb plus masking is the best we have now. While I’m happy someone created presets as a workaround, it’s a rather painful way of doing the job.
Yes! That’s #1 on my wishlist. As a replacement I’m often falling back to the color zones, but I would really like to have a more robust, native scene referred module for that purpose.
Actually I would really love to have it in a similarly flexible way to the color zones. Setting chroma vs. lightness provides a way to shape a “saturation curve” (more flexible than color balance rgb), like this:
and chroma vs. chroma enables something like a customized “vibrance curve”.
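To make the idea concrete, here is a minimal Python sketch of what such a lightness-driven “saturation curve” could boil down to. The node positions, factor values and function names are all made up for illustration; they are not taken from any darktable module:

```python
# Hypothetical sketch of a lightness-driven "saturation curve":
# chroma is scaled by a factor interpolated from the pixel's lightness.
# Node values below are illustrative only.

LIGHTNESS_NODES = [0.0, 0.25, 0.5, 0.75, 1.0]
CHROMA_FACTORS  = [1.0, 1.30, 1.10, 0.90, 1.0]   # boost shadows, tame highlights

def saturation_factor(lightness: float) -> float:
    """Piecewise-linear interpolation of the chroma multiplier."""
    if lightness <= LIGHTNESS_NODES[0]:
        return CHROMA_FACTORS[0]
    for (x0, y0), (x1, y1) in zip(zip(LIGHTNESS_NODES, CHROMA_FACTORS),
                                  zip(LIGHTNESS_NODES[1:], CHROMA_FACTORS[1:])):
        if lightness <= x1:
            t = (lightness - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return CHROMA_FACTORS[-1]

def apply_saturation_curve(chroma: float, lightness: float) -> float:
    """Scale a pixel's chroma based on its lightness."""
    return chroma * saturation_factor(lightness)

print(apply_saturation_curve(0.2, 0.5))   # mid-tone pixel: chroma * 1.10
```

A “chroma vs. chroma” curve would work the same way, just with the factor looked up from the input chroma instead of the lightness.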
I would say the opposite - it’s actually easiest to have color zones near the end of the pipeline. Reason: you’ll now be doing the selective edits based on the hue you see on the display, not something that was derived from the state of the data earlier in the pipe.
Color zones is pretty good, and it’s available today. You can get good results out of it, just don’t push it too hard and make sure to use it after filmic / sigmoid.
I think there were some pretty good ideas going on in the color eq proto that would probably help to avoid some of the artifacts. I still tend to think a module like this would better fit in a position after filmic / sigmoid.
Could write more about this later, but perhaps this discussion could be split out of the Primaries thread…
I don’t have much to add to this discussion TBH, but following with interest.
As far as blotching and related side effects go, I think it could be better but it’s also hard to avoid with more extreme adjustments I suspect.
I’ve noticed that this kind of adjustment is a Lr staple, and I’ve seen some styles (for sale, too!) that push it too hard IMO. And then you get weird transitions between colours…
I use color zones a lot, moved between two instances of color calibration, to use it as a color filter for monochrome images. And yes, it sometimes gets blotchy.
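For context, the “color filter” part of that trick reduces, conceptually, to a weighted sum of the RGB channels (the sort of thing color calibration’s gray tab exposes). A rough Python sketch with made-up weights, not darktable’s actual code:

```python
# Conceptual sketch of a monochrome conversion as a weighted channel mix.
# The weights below mimic a red filter; values are illustrative only.

def to_mono(r: float, g: float, b: float,
            weights=(0.7, 0.2, 0.1)) -> float:
    """Weighted channel mix; weights should sum to ~1 to preserve exposure."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# A red object (high R, low G/B) renders bright with a red-filter mix
# and much darker with a blue-filter mix:
print(to_mono(0.8, 0.1, 0.1))                   # red filter
print(to_mono(0.8, 0.1, 0.1, (0.1, 0.2, 0.7)))  # blue filter
```

Shaping the hue response *before* that mix (which is what the color-zones trick does) is where the blotchiness tends to creep in.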
This discussion shows that the current pipe is all about color and nowhere near about monochrome – it took me rather long to get half-decent conversions out of the scene-referred workflow. Often a LUT would do a better job – if I could find a suitable one – but that workflow is so much last millennium.
So my input would be: whatever the thoughts and ideas on color zones are, please consider us poor folks that still like to create monochrome images and often have to use brutal color adjustments to get a nice looking result.
Just so you know, I have been thinking about the monochrome scene referred workflow a lot in the past three years, but still have not come up with a solid solution - drawing a little blank here, despite getting kinda-ok conversions by now.
IMO that would be the same with ‘color eq’.
There is very seldom a situation where I need to change some generic range of hues; the masking is needed anyway. As a result it’s easier to use ‘color balance rgb’, since I can adjust other things like contrast/brilliance on the same instance with the same mask.
And it isn’t a replacement for the old Lab modules either.
It doesn’t provide the convenience that other tools do, e.g. sampling a color directly in the module UI to adjust the range.
I would say the easiest for color manipulation is the ‘color look up table’ module: one click directly in the image selects a color, another click anchors a color you want to protect from being changed. And then it has sliders for the manipulation, which give better precision IMO.
My random input. I realise that lots of people here like to tinker with lots of different solutions to color grading and don’t mind learning three or four different modules that kind of overlap to get what they want to achieve. However, I’ve got a feeling that there’s also a bunch of people, maybe less active and visible on the forum, that find that all a bit overwhelming and just want a few fully functional modules that work and won’t accidentally turn their images into bad phone pics.
I know, but it works and no scene referred tool comes even close in terms of simplicity, speed and result.
I have now. His techniques are very solid but also waaayyyyy too slow and cumbersome for daily and/or mass editing. Watching the video makes me wish darktable had a node-based tool setup … something I usually don’t like for image editing.
See, I spent years mastering images for all kinds of usages so it’s nice if someone pays for that by the hour. Awesome methods to have a client sitting next to you and watch you unfolding magic upon a picture. But personally and professionally I like to take pictures, not spend my nights editing them - so I prefer my basic tools to be rather efficient and only start with the tinkering when I really want to have an image mastered properly. Now darktable is all set up for the latter, but as @TonyBarrett has pointed out a lot seems to be lacking for the former.
Just the amount of adding and moving modules needed to get a half-decent result tells me that the scene-referred workflow was designed as if monochrome images never made it onto the requirements list.
I’m curious, as I mostly use DT, and when I need such a tool for color tweaks I use color zones or CLUT; if I don’t push them they seem okay, so that’s my only frame of reference. If you do pop them out and put them after filmic, are they any worse at creating artifacts than the equivalent tools in something like RT (their color equalizer / HSL tools) or Adobe’s HSL tools? Won’t they all break the image if you’re not careful? Or is that part of the goal for a new tool, i.e. that it not only works in the scene-referred part of the pipeline but also constrains adjustments, so that you get superior results with fewer artifacts thanks to better math?
That’s supposedly why there’s a “beginner” layout preset in the darkroom.
As for “accidentally turning their images into bad phone pics”: that’s why you can go back in the history stack to below the point where you made the error…
But that kind of thing requires users to (careful now) read the manual (or at least browse it). Any decently powerful editor allows you to destroy your images…
Noting this comment and the use of the word “clipping”, I wonder if someone might elaborate. I believe it was stated at one time (accurately or not) that no module in DT actually clips data: the pipeline is calculated in 32-bit float. What I believe is true, as I understand it, is that the interface of certain modules, i.e. the GUI, is limited to working in display space, so the edits are not working on the full dynamic range of the data, but there is no clipping. It may be a nuance, and maybe it’s a case of me not really understanding what that comment meant. If that is an accurate way to describe it, and yet people are focused on “clipped” data, I think it shapes the way people think and feeds the fear of using these modules. Also, if you use them after the pipeline is back to display-referred, is it a massive issue either way? These comments are not an argument against a new tool, nor a claim that there are no potential limitations to using display-referred modules, but I do wonder about this notion that they “clip” the data. Would it be more accurate to say that they can distort the data, since they can only affect the portion available to the GUI?
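To illustrate the distinction being asked about, here is a small sketch in plain Python (not darktable code): float pixel data above 1.0 survives a scene-referred operation unharmed and reversibly, while an operation that clamps to display range discards it for good:

```python
# Float pipeline vs. display-referred clamping, as a toy example.
# Values and the clamp itself are illustrative, not darktable internals.

scene = [0.5, 1.2, 4.0]                    # scene-referred, unbounded values
pushed = [v * 2.0 for v in scene]          # exposure +1 EV: nothing clips
recovered = [v / 2.0 for v in pushed]      # fully reversible in float
assert recovered == scene

clamped = [min(max(v, 0.0), 1.0) for v in pushed]  # a display-range clamp
print(clamped)   # → [1.0, 1.0, 1.0]: data above 1.0 is gone for good
```

So “works in display space” and “clips the data” are genuinely different failure modes: the first distorts what the GUI lets you reach, the second destroys information.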