I’m very new here and very new to RT. Since I don’t know what I don’t know, please be kind and just point me in the right direction. I’ve already read RawPedia and searched the forums.
I’ve been working on some code that takes a raw scan of a color negative film, inverts it into a positive, linearizes it, conforms it to a color profile, and then outputs the image to a floating point DNG file. The resulting DNG is, for all intents and purposes, a linear HDR image that contains (depending on the film stock used and the contrast of the scene captured) upwards of 20 stops of dynamic range.
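For context, here is a minimal sketch of the kind of inversion/linearization step I’m describing. This is not my actual code, and the `negative_to_positive` helper and its per-channel `base` input are hypothetical; it just illustrates the general idea of normalizing out the film base and inverting the linear scan:

```python
import numpy as np

def negative_to_positive(scan: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Invert a linear raw scan of a color negative into a linear positive.

    scan : linear per-channel scan data, float
    base : per-channel film-base + fog value sampled from an unexposed
           edge of the frame (hypothetical helper input)
    """
    # Normalizing by the film base factors out the orange mask before
    # inversion.
    transmittance = scan / base
    # A denser (darker-scanning) negative means more scene exposure, so
    # the linear positive is the reciprocal of the transmittance.
    positive = 1.0 / np.clip(transmittance, 1e-6, None)
    return positive.astype(np.float32)
```

The output stays scene-referred and linear; scaling it so an 18% gray card lands at a chosen code value happens afterwards.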
The samples are scaled such that a correctly exposed 18% gray card sits 4 stops below 0.18, with the BaselineExposure tag set to +3. The lower limit, where the film base plus fog sits, is roughly 4–5 stops below that, and the upper limit, where the film stops recording density, is easily several (or more) stops above the middle gray value.
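To make that scaling concrete, here is the stop arithmetic spelled out (these are my own numbers from above, so treat the exact values as assumptions about my file, not about DNG in general):

```python
MIDDLE_GRAY = 0.18
BASELINE_EXPOSURE = 3.0  # stops, from the DNG BaselineExposure tag

# The 18% card is encoded 4 stops below 0.18 in the file...
encoded_gray = MIDDLE_GRAY / 2**4                     # 0.01125
# ...and BaselineExposure pushes it back up 3 stops at render time.
rendered_gray = encoded_gray * 2**BASELINE_EXPOSURE   # 0.09

# Film base + fog sits roughly 4-5 stops below the encoded gray value.
base_fog_floor = encoded_gray / 2**5                  # ~0.00035
print(encoded_gray, rendered_gray, base_fog_floor)
```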
In short, it’s a linear DNG with a lot of dynamic range. Here’s where I’m left scratching my head trying to figure out what I’m doing wrong. When I add this DNG to Adobe Lightroom and look at the preview it generates, it looks exactly how I would expect: not really any different from any other image I might have shot digitally, except that it has far more dynamic range than if I had shot it with a digital camera. When I export it to a TIFF or JPEG, what I saw in the preview is pretty much what I get in the exported file. Life is good.
Now for the confusing part: when I open this exact same file in RT, no matter what I do, I get a crazy, super low contrast image. The colors look OK, and if I place an 18% gray card at 0.18, it renders out to about 50% on the histogram in RT, so it looks like RT is recognizing the DNG color matrix tag that I embedded. But for some reason it does not seem to treat this file as having linear gamma. If I’m feeding it linear data, and it’s linear internally, then the preview rendered to my display through my display ICC should look correct; instead it looks super low contrast.
From my own experience with Lightroom and my files, I know that internally it clips floating point data at 256.0, so any values above 256 just get clipped off, and it assumes a black level of 0 unless the file says otherwise and treats 0.18 as 18% gray. Knowing this, I set the WhiteLevel tag in the DNG to 256 and got a completely clipped-to-white image in RT, even though none of the values in my DNG go that high. Confusing. If I scale the WhiteLevel tag up, it starts to look more normal by 16383 and is back to what I was originally seeing at 65535; however, the blacks are always at least 40% on the histogram. I’m really confused. I would think that if I feed a scene-referred floating point DNG to RT, even with the Neutral profile, it would treat it as a scene-referred linear image and simply clip off anything in the preview that is outside the displayable range until I go into the processing options and start to muck around with stuff, instead of doing what it’s doing.
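For reference, the WhiteLevel values I tried are whole stops apart relative to that 256.0 ceiling (the 256.0 internal clip is my own observation of Lightroom, not documented behaviour, so treat it as an assumption):

```python
import math

# Stop offset of each WhiteLevel value I tried, relative to the 256.0
# ceiling I believe Lightroom uses internally (my assumption).
offsets = {wl: math.log2(wl / 256) for wl in (256, 16383, 65535)}
for wl, stops in offsets.items():
    print(wl, round(stops, 2))
```

So going from 256 to 65535 shifts the effective white point by about 8 stops, which lines up with how differently RT renders the same data at each setting.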
What am I doing wrong? Should I be assuming that RT is expecting the floating point values to be in a different range? Why does it seem to be crushing all the contrast down to a super flat image?
Please help out a clearly clueless person. Here’s a link to download the DNG file in question as well as the .pp3 file that RT made when I opened it in the editor: Dropbox - File Deleted
Many thanks.
Adrian