Hi @nonophuran, sorry for the delay. Here’s an example of what I was referring to in my previous post: processing the raw file to a negative tiff (with “Camera Standard” as input profile), and then inverting the tiff (using this experimental RT branch):
Note how the yellow licence plate looks more similar to the iPhone shot, although the red shade is still not the same.
Regarding the Medium article about tri-color scanning, thanks for the link, it is very interesting indeed. I had to try it myself!
I tried using one of these cheap RGB bulbs, by taking 3 separate shots, with red, green and blue light respectively.
Then I created a composite, “fake” raw file where, for each corresponding pixel, I kept the largest value among the 3 images (this should be analogous to what “Lighten” does in the Medium article: each photosite responds most strongly to the light matching its CFA color, so the per-pixel maximum picks the correctly-exposed shot at every position of the mosaic).
This is easily done with existing tools. Let’s say we have 3 raw files test_r.ARW, test_g.ARW and test_b.ARW.
Extract each raw file to a grayscale, linear tiff, without demosaic or colorspace conversion:
dcraw -v -T -o 0 -4 -H 1 -d test_{r,g,b}.ARW
Combine the 3 tiffs and keep the larger value for each pixel:
convert test_{r,g,b}.tiff -grayscale Rec709Luminance \
-evaluate-sequence Max test_comp.tiff
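For completeness, the same per-pixel maximum can be sketched in Python with numpy. This is just a minimal illustration of the “keep the larger value” logic on synthetic data, not a replacement for the ImageMagick command; reading and writing the actual 16-bit tiffs would additionally need a TIFF library (e.g. tifffile), which is not shown here:

```python
import numpy as np

def max_composite(frames):
    """Per-pixel maximum across equally-sized grayscale arrays.

    Mirrors ImageMagick's `-evaluate-sequence Max`, i.e. the "Lighten"
    blend mode from the Medium article.
    """
    return np.stack(frames, axis=0).max(axis=0)

# Tiny synthetic demo: each "exposure" is brightest at the photosites
# whose CFA color matched the light used for that shot.
r = np.array([[40000,   100], [  200,   150]], dtype=np.uint16)
g = np.array([[  300, 42000], [41000,   100]], dtype=np.uint16)
b = np.array([[  100,   250], [  180, 43000]], dtype=np.uint16)

comp = max_composite([r, g, b])
# -> [[40000, 42000], [41000, 43000]]: each pixel keeps its "own" exposure.
```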
Now rename the composite tiff to dng, and add some metadata to mark it as a mosaiced, RGGB raw file:
mv test_comp.tiff test_comp.dng
exiftool -DNGVersion=1.1.0.0 \
-PhotometricInterpretation="Color Filter Array" \
-IFD0:CFAPattern2="0 1 1 2" \
-IFD0:CFARepeatPatternDim="2 2" \
test_comp.dng
At this point, RT can open this DNG as a normal raw file from an unknown camera model.
First of all, to check whether this could work, I tested it with a standard, positive picture of a color target (no film negative involved). I took three separate R, G, B shots, plus the same picture with a normal xenon flash for comparison.
I used a linear sRGB input profile (this one) for both the composite DNG and the flash picture to make sure colors were treated the same, despite different metadata. This is the result (RGB composite on the left, flash on the right):
The composite DNG is much more saturated, but besides that, there doesn’t seem to be any huge color deviation, which is good news.
Then I tested with an actual negative. Here is an example with Kodak Portra 800 (left: RGB composite, right: xenon flash backlight):
In order to make a fair comparison, both pictures are processed using Linear sRGB as input profile.
Even if I try to boost Saturation and Chromaticity in the right picture to match the colors on the left, the result is never as good (note especially the yellow and yellow-green patches):
Here’s a real-world example, Kodak ColorPlus 200 (please excuse the artistic quality, I’m not a photographer :-D)
This is also processed with Linear sRGB as input profile, and just some tone curves and chromaticity boost.
My impression is that with this method it’s more straightforward to get a good result. With a white backlight it is also possible to achieve a comparable result, but in some cases it may require more tweaking. The obvious disadvantage is having to take 3 shots each time, and the greater sensitivity to vibration: if one of the shots is slightly offset… bye bye demosaic!
Anyway… maybe you could try this method on that Audi TT picture, and see what comes out…
alberto