When tweaking image A to match the histogram of image B, the result is best when the two images show the same amounts of the same subjects. For example, your first image includes more of the yellow plate and the white plate than the second image does, and these differences will have an adverse effect.
If you first crop both images to match subjects, the result of matching the histograms is better.
I don’t have a method of automatically doing the crops.
@snibgo Yes, the more similar the images are, the better the result will be. But this will not always be possible. I was quite pleased with this first result. I had matched the histograms of the channels independently.
I suspect that histogram matching will be faster and better than tweaking each image with masks etc.
Here are the first and second inputs manually cropped, and the first of these with its histogram adjusted to the second. The result is fairly good. It will never be perfect because, for example, the child’s arm occupies a larger area in the second image.
Automatic cropping could be fairly simple. For example, crop the first image to 80%, and search for that in the second. Some arithmetic gives us the areas that overlap, and those areas are the crops we can use.
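The search step could be sketched like this: a naive sum-of-squared-differences scan in numpy (the function name, the grayscale assumption, and the brute-force scan are mine, just to illustrate the idea):

```python
import numpy as np

def find_overlap(img1, img2, frac=0.8):
    """Crop the centre `frac` of img1, then find its best-matching
    position in img2 by sum-of-squared-differences (2D grayscale arrays)."""
    h, w = img1.shape
    ch, cw = int(h * frac), int(w * frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    tmpl = img1[y0:y0 + ch, x0:x0 + cw].astype(float)
    best, best_pos = None, (0, 0)
    # Exhaustive scan; a real implementation would use FFT correlation.
    for y in range(img2.shape[0] - ch + 1):
        for x in range(img2.shape[1] - cw + 1):
            ssd = np.sum((img2[y:y + ch, x:x + cw].astype(float) - tmpl) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    # Crop rectangle in img1, and where it sits in img2.
    return (y0, x0, ch, cw), best_pos
```

From the returned rectangle and match position, the arithmetic for the overlapping crops of both images is straightforward.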
And that wouldn’t be perfect either. In the manual crop, the second image is taller because the camera has changed position. This is reasonable because I wanted to include the same amount of the yellow plate. Doing that automatically might be difficult.
I used the following script to do the work. Windows BAT syntax:
I should have said: in my script, “childSpoon.jpeg” is the image posted in the OP.
I might add: the method is perfect, in the sense that the resulting histograms match. But for perceptual perfection, we want isolated elements to match: the background in the two images should match, the visible part of the child’s face in the two images should match, the jug in the two images should match, and so on. Unless all the elements occupy the same area in both inputs, this won’t happen.
The match-histogram method uses statistics from entire images to calculate the CLUT. But that’s not really what we want.
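As a rough illustration of how such a CLUT is derived from whole-image statistics, here is a minimal per-channel sketch in numpy (not the script from this thread; the function name and the 8-bit assumption are mine):

```python
import numpy as np

def match_histogram_channel(src, ref):
    """Remap 8-bit channel `src` so its histogram matches `ref`,
    by pairing values with equal positions on the cumulative histograms."""
    src_hist = np.bincount(src.ravel(), minlength=256)
    ref_hist = np.bincount(ref.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # CLUT: for each source value, the reference value at the nearest CDF
    clut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return clut[src]
```

Because the CDFs are computed over the whole of each image, any element that occupies different areas in the two inputs skews the mapping, which is exactly the problem described above.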
Another, more complex, approach is to crop small matching areas from both inputs: the wall, the jug, the face, and so on, so we collect statistics from those small areas. Then match the histograms based on those small areas. This gives as many images as there are areas, so we blend these using some 2D method: Shepard’s, or triangulation and barycentric distances, or whatever.
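The 2D blending step could be sketched with Shepard’s inverse-distance weighting. This is a toy scalar version (in practice the blended values would be per-channel adjustments, and all the names here are mine):

```python
import numpy as np

def shepard_blend(corrections, centres, shape, p=2):
    """Blend per-area scalar corrections over an image of `shape`,
    weighting each by inverse distance to that area's centre (Shepard's)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    num = np.zeros(shape, float)
    den = np.zeros(shape, float)
    for val, (cy, cx) in zip(corrections, centres):
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        w = 1.0 / np.maximum(d2, 1e-9) ** (p / 2)  # avoid divide-by-zero
        num += w * val
        den += w
    return num / den
```

Each pixel then gets a smoothly varying mixture of the per-area corrections, dominated by the nearest matched area.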
darktable has a color mapping module with histogram mapping and LUT, and also a deflickering option to match exposure from photo to photo (intended to be used for timelapses, but it should do the trick).
@snibgo Thanks a lot for your detailed instructions. I understand the principle of your approach, but it will take a while until I understand exactly what the script is doing.
This tweaks the RGB channels, giving each a multiplier and addition to make the mean and standard deviations match the target. This is simpler than the match-histogram method, and the result is fairly good. When the range of input colours is small, it is more reliable than match-histogram, so might be better for a method that chops one image into tiles, finds the tiles in the other input, matches pairwise, then blends the results.
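That per-channel adjustment amounts to out = (in − mean_src) × (std_ref / std_src) + mean_ref. A minimal sketch, assuming float RGB arrays (the function name is mine, not from the script):

```python
import numpy as np

def match_mean_std(src, ref):
    """Linearly remap each channel of `src` (float arrays, shape HxWx3)
    so its mean and standard deviation equal those of `ref`."""
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[2]):
        s, r = src[..., c], ref[..., c]
        gain = r.std() / s.std()                        # the multiplier
        out[..., c] = (s - s.mean()) * gain + r.mean()  # addition folded in
    return out
```

Two numbers per channel instead of a full CLUT, so small or oddly distributed tile statistics can’t produce a wild mapping.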
I do my raw inspection in Mathematica, because so far I have never really bothered to learn Python. If you know Python, you could use rawpy (on PyPI), which utilizes libraw to import your files, and try to find what you’re looking for.
I was curious to analyze my RAW histograms in more detail: viewing single sensor bins (R, G1, G2, B)… I know I can get this in RT with no demosaicing and the inspector… but
I’m amazed that in FOSS there isn’t software like Rawdigger.
I’ve taken up the endeavor a couple of times, but put it aside as more pressing programming arose (sounds more ‘official’ than it is; really, it’s about what I felt like doing day-to-day…). Instead, I put a bit of work into rawproc’s histogram, and it meets my needs rather well…
I have put some time into a command-line program that’ll read a raw file courtesy of libraw, then walk the image array, collect and sum the channel data per value, and puke it out as comma-separated text suitable for opening in your favorite spreadsheet program. LibreOffice makes a nice histogram of the data with its column chart. Works okay on my test raw, but it’s not easily compiled by non-programmers in its present state.
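The counting step could be sketched in Python like this, assuming the per-channel raw data is already in numpy arrays (e.g. via rawpy; the actual program is C with libraw, and this function name is mine):

```python
import csv
import numpy as np

def dump_histogram_csv(channels, path, maxval=65535):
    """Count occurrences of each value per channel and write them as
    comma-separated text, one row per value, for a spreadsheet chart.
    `channels` maps channel names (e.g. "R", "G1") to integer arrays."""
    counts = {name: np.bincount(data.ravel(), minlength=maxval + 1)
              for name, data in channels.items()}
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["value"] + list(counts))
        for v in range(maxval + 1):
            writer.writerow([v] + [int(counts[n][v]) for n in counts])
```

Opening the resulting file in a spreadsheet and charting the count columns against the value column gives the per-channel histogram.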
Really, Rawdigger isn’t such a commercial abomination; it helps fund libraw, the open-source core library that some of us use in our raw processing programs.