Hello, I recently got a Blackmagic Pocket Cinema Camera 4K (BMPCC 4K), which shoots DNG raw, and noticed that the offsets sometimes aren’t quite right, and that on lossy-compressed BMPCC 4K DNG files blown highlights get filled with a mess of dark, saturated purple and cyan pixels. I have test shots of all resolution and compression combinations, in both normally exposed and fully clipped pure-white variants, for a total of 24 shots (4 compression modes × 3 resolutions × 2 exposures, normal and fully blown white). I was wondering what is necessary to upload to the sample raw photo database. Are all permutations necessary, or just one set for resolution, one set for compression, and another set of full-white frames?
I know about the RPU, and have already submitted two samples. My question is: is a set of compression-setting and resolution-setting variations (11 images) enough, or are all possible permutations (24 images) needed?
Those are DNGs, so in principle that’s between you and the camera’s manufacturer.
That being said, for RPU I’d be interested in replacing whatever samples we currently have with normally exposed, daylight, landscape-ish stuff, with a total of just 4 samples: one per compression mode. I guess all should be at the highest resolution.
For the most part, the BRAW push is for video, and the state of standardization for raw sensor data in video is pretty bad. There is also that whole pesky RED patent situation: BRAW has explicitly been described as designed to work around one particularly notorious patent (as in, the subject of a lawsuit between Apple and RED).
I think it went wider than that; it was claimed that even CinemaDNG, or any video raw storage in general, possibly infringed on RED patents (IANAL, but it would be interesting to see how this ends up; it just sounds like too broad a claim to be upheld…). That’s why BRAW is not really “raw”, but “partially demosaiced”…
Yup. There’s also the fact that while DNG includes a fairly effective lossless compression codec as part of the standard (and futzing with that codec honestly doesn’t give enough benefit to justify claiming the standard is insufficient), CinemaDNG does only intraframe compression, no interframe compression, which makes it an extremely poor choice for video.
There’s a reason almost no one uses MJPEG and it’s almost universally frowned upon as something only cheap/low-cost/low-resource implementations do, despite being quite standardized: its compression ratio is crap.
But getting back to the topic of this thread: in general I’d expect them to be better than most companies at complying with the DNG standard when actually saving out a DNG. Maybe BM screwed up, or maybe we’ve found a corner case where RT isn’t handling the white level tags properly?
Edit: Wait… Lossy compression? I sort of recall that isn’t quite as well defined in the standard…
Yes, the uncompressed and losslessly compressed files decode just fine; it’s the lossy-compressed ones that have issues with artifacts at or near clipped highlights.
The lossy compression might be using some nonlinearity trick like NEF does, so something funny might be happening when linearizing back… Is there a LinearizationTable tag in the lossy DNGs?
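If anyone wants to check their own files, something along these lines should do it (just a sketch: it assumes exiftool is installed and on PATH, and the file name is a placeholder):

```python
# Quick check for the relevant DNG tags via exiftool (installed separately).
import subprocess

def dump_dng_levels(path):
    # Standard DNG tag names; exiftool prints "(Binary data ...)" for a large
    # LinearizationTable unless -b is used, but presence/absence is enough here.
    result = subprocess.run(
        ["exiftool", "-s", "-LinearizationTable", "-WhiteLevel", "-BlackLevel", path],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout or "none of these tags present")

dump_dng_levels("bmpcc4k_lossy.dng")  # placeholder file name
```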
Yep - the table indeed peaks out at 65472 (and not 65536 as the file WhiteLevel tag claims), and all four CFA channels in that sample seem to reach that saturation level, so I’m not sure why there are still some subtle artifacts after the RT tweak…
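In case anyone wants to repeat that per-channel check, something like this should work (a rough sketch assuming rawpy; the file name is a placeholder, and the values come back after LibRaw’s own decode, so they should already be linearized):

```python
# Sketch: per-CFA-channel maxima of the decoded raw data.
import rawpy

with rawpy.imread("bmpcc4k_lossy.dng") as raw:  # placeholder file name
    data = raw.raw_image_visible
    cfa = raw.raw_colors_visible          # 0..3 index into raw.color_desc (e.g. b'RGBG')
    for ch in range(4):
        vals = data[cfa == ch]
        print(chr(raw.color_desc[ch]), "max:", int(vals.max()))
    print("LibRaw-reported white level:", raw.white_level)
```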
@kmilos It’s entirely possible RT doesn’t properly use the linearization table and therefore causes the highlights to clip or something… It hasn’t been investigated.
Also, I see these compression modes being offered on other Blackmagic cameras like the micro, so maybe this will help with those cameras if someone ever gets a hold of one as well.
Another possibility is that compression artifacts lead to some areas that were clipped on input being not-quite-at-whitepoint after decompression, leading to them being treated as valid data and not “clipped” when going into highlight reconstruction/etc. It may be that a slightly lower white point needs to be assumed when working with lossy compression.
I would assume that if the linearization table weren’t being used you’d be seeing far worse than just artifacts in the clipped areas.
Yes, analysis sees all four channels hitting 65472 - but when lossy compression is in play, I don’t think it’s safe to assume that, just because some pixels that went in at 65472 came back out at 65472, they all did so.
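To put a rough number on that, one could count how many pixels land just below the nominal maximum instead of exactly on it (again only a sketch with rawpy; the nominal max is the table peak discussed above, and the band width is an arbitrary guess for illustration):

```python
# Sketch: how many pixels sit just under the nominal clip level instead of exactly on it.
import rawpy

NOMINAL_MAX = 65472      # peak of the LinearizationTable in this sample
BAND = 512               # arbitrary "near-clipped" margin, purely for illustration

with rawpy.imread("bmpcc4k_lossy.dng") as raw:  # placeholder file name
    data = raw.raw_image_visible
    at_max = int((data == NOMINAL_MAX).sum())
    near_max = int(((data >= NOMINAL_MAX - BAND) & (data < NOMINAL_MAX)).sum())
    print("exactly at nominal max:", at_max)
    print("within", BAND, "below it:", near_max)
```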
@Waveluke - perhaps try fiddling with the “White Point Correction” under “Raw white points”?
I’m not sure if it would be possible to make a “safe for all use cases” assumption here without a more thorough understanding of the codec in play.
That sounds quite likely, yeah, since some form of JPEG is used and flat saturated areas might be more heavily quantized, so you don’t actually get the maximum 4095 code back before linearization… I guess the most pragmatic thing would be to just walk down the LinearizationTable and try the next lower value as the white level until the artifacts disappear:
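For example, something along these lines could list candidate levels to try in RT’s white point correction, using the distinct near-peak values in the decoded data as a stand-in for walking the table entries themselves (sketch; same rawpy assumption and placeholder file name, and the search window below the peak is arbitrary):

```python
# Sketch: list distinct raw values just below the peak as candidate white levels to try.
import numpy as np
import rawpy

with rawpy.imread("bmpcc4k_lossy.dng") as raw:  # placeholder file name
    data = raw.raw_image_visible
    peak = int(data.max())                       # 65472 in the sample discussed here
    near = np.unique(data[data >= peak - 2048])  # arbitrary search window below the peak
    for level in sorted(near.tolist(), reverse=True)[:10]:
        print("candidate white level:", level)
```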
I have fiddled with white point correction, and it didn’t seem to have much effect on the artifacts until it was set to crazy-high values that discard LOTS of information.