Hello,
Just a quick message to say a big thank you to everyone for your feedback and help with this adventure.
I think the module is ready for the next stage.
Greetings from Luberon,
Christian
@Christian-B , I don’t think you covered this. I’m interested to know - Contrast Eq can do coarse without affecting fine.
I have been trying to learn a little about this pyramid concept…I came across this just recently
Learning Differential Pyramid Representation for Tone Mapping
The user trial in section 4.5 shows the method was liked rather less than “GT”. Any ideas what GT is?
There is a “Global Tone Mapping” represented as GTP. So, I guess it is “Global Tone”.
I tried to compile this branch for my Macbook Air M1 running Tahoe, but I fail to install gtk-osx-application-gtk3 as described by Todd Prior. I’m getting a checksum error for this package. Any other options to compile it for my macbook M1? (I can compile the Linux version just fine, but that machine is very slow)
I’m not familiar with building stuff for the macbook beyond that, so I’m stuck there.
Maybe. “ground truth” is also mentioned.
I am using the Linux build in VirtualBox on OSx_86.
The GT images are the reference images in the datasets used for training… I am no expert by far, but when training I think you have a set of verified images, i.e. ones that have been reviewed or created and that contain annotations etc., so that when you run your math you can check whether, for example, your model could accurately identify automobile colour, skin tones, jeans on a person, what kind of jeans, etc.
I am not sure of the exact process; in the early days real humans pored over images to produce this sort of reference.
They give some outline of what they did, and likely it’s also in an appendix and available as a dataset…
“created corresponding ground truths using several software and toning tools combined with selection algorithms. These images cover a wide range of scenes, such as indoor and outdoor, including natural light/artificial light, multi-contrast, sunrise/sunset, urban/nature, daytime/nighttime, and so on”
You should probably use the build instructions and scripts in the project. Though I also had trouble. Using homebrew, I needed to install lua manually, and prepend “/usr/bin/” to the paths of sed and find in the build scripts. And for some reason I had to comment out a failure after invoking hdiutil. I’m not sure why that helped or how the package was still able to install. The Mac build procedure is rough around the edges. I might be able to reproduce and fix all these issues if I make a clean build.
Hello Todd,
Thank you for sharing this information. This article confirms what I have observed in my tests: image resolution is a critical variable for pyramidal approaches. Since filter radii are defined in pixels, the physical scale of contrast changes depending on the density of the sensor.
On an old sensor (10 MP), I noticed that the spatial scale (blending - “contrast scale”) needs to be reduced and edge protection (feathering - “pyramid edge protection”) significantly increased compared to a 36 MP sensor. This is a point that I will carefully incorporate into the tooltips and documentation to guide users.
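To make the scale dependence concrete: since filter radii are defined in pixels, covering the same physical scene scale on a denser sensor requires a larger pixel radius, roughly in proportion to the linear resolution, i.e. the square root of the megapixel ratio. A minimal sketch of that rule of thumb (an illustration only, not the module’s actual code):

```python
import math

def scaled_radius(base_radius_px: float, base_mp: float, target_mp: float) -> float:
    """Scale a filter radius defined in pixels so it covers roughly the same
    physical scene scale on a sensor with a different pixel count.
    Linear resolution scales with the square root of the megapixel ratio."""
    return base_radius_px * math.sqrt(target_mp / base_mp)

# A 50 px radius tuned on a 10 MP sensor corresponds to roughly 95 px on 36 MP.
print(round(scaled_radius(50, 10, 36)))  # -> 95
```

This matches the observation above: settings tuned on a 10 MP sensor need to be scaled up noticeably before they behave the same way on a 36 MP one.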
Greetings from Luberon,
Christian
Can modules see the dimensions of the incoming image? Could you make it adjust appropriately based on image size?
It would need to be adjusted to output resolution which would need to be user entered. There’s probably a better way.
Can you explain why it would need to be adjusted for output dimension, instead of input dimensions? By input I mean the dimensions of the image coming into the module. By output dimensions I assume you mean export dimensions.
Hello,
That’s a very good question. It is indeed possible to determine the image resolution. However, at this stage, I prefer not to calculate or automate this “hard-coded” in the code. Especially since noise can also be misleading and require appropriate correction. I prefer to first plan a series of visual tests to, for example, propose presets adapted to an image resolution and, above all, good communication.
Greetings from Luberon,
Christian
By output dimensions I really mean visible detail when viewing the final product. The ideal control would let you adjust using sliders referenced to what you would see when viewing the eventual product (be that a print, a picture on your phone or a billboard). That way the interface would be consistent and you would quickly build an intuition as to what each slider does.
Given that that’s impossible, the question is what we should anchor the scale against. 100% zoom is consistent, but doesn’t work when considering the whole image. Zoom to whole image (your suggestion) is also consistent and may well be the best real-world compromise, but then people who work at 100% view will be confused. A user-defined option would also work, but that adds complexity.
Not sure what the best answer is, but it’s good to consider all the options.
I know it would be some work, but you could add a manual override for those edge cases where a user needs to be able to customize the filter radius. Kind of like the toggle in AgX, “keep pivot on the diagonal”, that when enabled hides the manual gamut control.
You obviously will choose a default filter radius, and I think making the radius scale with max(imageWidth, imageHeight) by default will be generally more helpful and more user-friendly than having it be strictly static.
Just my opinion.
People usually work at 100% for pixel-peeping or detailed masking work I think. I don’t think people usually adjust contrast/local contrast from 100% zoom.
Since the image as a whole is the final “product” coming out of the pipeline, and contrast is generally assessed across the whole image, it makes sense to me to base filter radius scaling on the max dimension of the image (height or width).
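As a quick sketch of what that suggestion could look like (the function name, the 2% fraction, and the signature are all hypothetical, just to illustrate the idea of tying the default radius to the longest side):

```python
def default_radius(width: int, height: int, fraction: float = 0.02) -> int:
    """Hypothetical default: express the filter radius as a fraction of the
    image's longest side, so the perceived scale of the effect stays
    consistent across image resolutions."""
    return max(1, round(fraction * max(width, height)))

# Same relative scale on a 24 MP (6000x4000) and a 10 MP (3888x2592) image:
print(default_radius(6000, 4000))  # -> 120
print(default_radius(3888, 2592))  # -> 78
```

A manual override slider, as suggested above, could then simply replace the computed default for the edge cases.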
Many many thanks for producing this Dmg.
I could not get it to work initially: the darktable icon was greyed out so would not copy to the Application folder.
My solution was to upgrade the operating system from the last version of Sequoia to the current version of Tahoe.
Am running this on an M3 Pro on a MBP.
I hope this helps any others having difficulty accessing the wonderful contrast management RGB module.
If the build is done with Homebrew, then you need at least the build machine’s macOS version to be able to run it elsewhere. (That’s the main reason why macOS version support of official releases is quite limited.)