I was advised to ask this question here, as it might pique someone’s interest. There’s an abstract version of it, and the specific thing I’m trying to achieve.
The abstract version: given two images that were derived from the same source, and so share similar geometry but differ in tone, colour, saturation, etc., is there an easy (or algorithmic) way of adjusting one of the images (tone, curves, saturation, or any common image-processing operation) so that it matches the other? Something like histogram matching might be the sort of thing I’m after; a rough sketch of what I mean is below.
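This is only a sketch of the idea, assuming Python with scikit-image and hypothetical file names; I don’t know whether histogram matching is actually the right tool here.

```python
import numpy as np
from skimage import io, exposure

# Two images derived from the same source, differing in tone/colour
# (file names are placeholders for illustration only).
source = io.imread("image_to_adjust.png")
reference = io.imread("image_to_match.png")

# Adjust each colour channel of `source` so its histogram matches
# the corresponding channel of `reference`.
matched = exposure.match_histograms(source, reference, channel_axis=-1)

io.imsave("matched.png", np.clip(matched, 0, 255).astype(np.uint8))
```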
The specific version: I have a transparency (which happens to be 4x5), and I have scans of it, but the colours never seem to come out right, so the scan doesn’t look like the original transparency. So I laid the transparency against a plain white area of the computer screen, next to the best version of the image I currently have. That way the transparency and the on-screen image are lit by the same light source, eliminating that as a variable.
I then take a picture of them both together, and I can cut out the part showing the transparency and the part showing the on-screen image. Since both appear in the same photograph, any differences introduced by the capture itself are shared, so that’s eliminated as a variable too.
Now, if I can find a transform that turns the on-screen cutout into the transparency cutout, that same transform should be what I need to make the on-screen image look like the transparency itself. A rough sketch of how I imagine applying it is below.
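Again only a sketch under assumptions: a simple per-channel shift-and-scale (mean/standard-deviation) transfer estimated from the two cutouts and then applied to the original on-screen file, with placeholder file names and 8-bit RGB images assumed.

```python
import numpy as np
from skimage import io

# Cutouts from the single photograph of screen + transparency,
# plus the original on-screen image (placeholder file names).
screen_crop = io.imread("screen_crop.png").astype(float)
trans_crop = io.imread("transparency_crop.png").astype(float)
original = io.imread("best_onscreen_image.png").astype(float)

corrected = np.empty_like(original)
for c in range(3):
    # Shift and scale each channel of the on-screen image so that its
    # statistics (as measured in the photographed screen cutout) match
    # those of the photographed transparency cutout.
    mu_s, sd_s = screen_crop[..., c].mean(), screen_crop[..., c].std()
    mu_t, sd_t = trans_crop[..., c].mean(), trans_crop[..., c].std()
    corrected[..., c] = (original[..., c] - mu_s) * (sd_t / sd_s) + mu_t

io.imsave("corrected.png", np.clip(corrected, 0, 255).astype(np.uint8))
```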
The goal of all this is to make a computer image which looks as close as possible to the actual transparency.
It seems like there should be a relatively straightforward transform from one to the other, but I’m failing to see what it is or how to apply it, ideally in a standard photo editor.
Here’s an example.