I am trying to understand the raw image processing pipeline. I noticed that the out-of-camera images from my Sony A6700 are sized differently from what Darktable produces.
To be precise, the raw dimensions of the images are 6656x4608. This includes a black border on the right and bottom, as well as a column of ‘duplicated pixels’ on the right, on the inside of the black border.
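For anyone who wants to reproduce the numbers, something along these lines should show the full sensor readout versus the visible area. Note that I am using rawpy here, which wraps LibRaw, i.e. yet another decoder than RawSpeed, so its idea of the crop may differ again; the file name is just a placeholder:

```python
import rawpy

# Placeholder path to an A6700 ARW file
with rawpy.imread("DSC00001.ARW") as raw:
    s = raw.sizes
    print("full raw:     ", s.raw_width, "x", s.raw_height)  # including masked borders
    print("visible:      ", s.width, "x", s.height)          # LibRaw's active area
    print("top/left offs:", s.top_margin, s.left_margin)     # where the active area starts
```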
The metadata in the raw image specifies the following crop area:
DefaultCropOrigin 26 20
DefaultCropSize 6192 4128
If I understand correctly, this is how the raw image should be cropped according to Sony. This results in a 6192x4128 image: 26 and 20 pixels are removed from the left and top respectively, and consequently 438 and 460 pixels from the right and bottom.
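Spelled out, the arithmetic I am assuming is (nothing clever, just to make the margins explicit):

```python
# Crop arithmetic implied by the DNG-style tags above
raw_w, raw_h = 6656, 4608        # full raw dimensions
orig_x, orig_y = 26, 20          # DefaultCropOrigin
crop_w, crop_h = 6192, 4128      # DefaultCropSize

right_margin = raw_w - orig_x - crop_w    # pixels discarded on the right
bottom_margin = raw_h - orig_y - crop_h   # pixels discarded on the bottom
print(right_margin, bottom_margin)        # -> 438 460
```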
Now, RawSpeed’s README says:
RawSpeed does NOT crop the image to the same sizes as manufactures, but supplies biggest possible images.
OK, fair enough. In practice, Darktable crops the A6700 images to 6244x4168, by removing 412 and 440 pixels from the right and bottom edges respectively (and nothing from the top left).
Now, what I am confused about is how RawSpeed determines this. In RawSpeed’s cameras.xml, there is this line for the ILCE-6700:
<Crop x="0" y="0" width="-28" height="0"/>
This suggests that only 28 pixels are cropped from the right, which would correspond to the strange column of ‘duplicated’ pixels, I suppose. Does that mean that Darktable crops off the black border automatically?
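To make my assumption explicit, this is roughly how I am reading that entry; the negative-value semantics are pure guesswork on my part and exactly the thing I would like confirmed:

```python
# Guess: a non-positive width/height in <Crop> means "trim this many pixels
# from the right/bottom edge", rather than an absolute output size.
def apply_crop_entry(raw_w, raw_h, x, y, width, height):
    out_w = width if width > 0 else raw_w - x + width     # width="-28" -> 6656 - 28
    out_h = height if height > 0 else raw_h - y + height  # height="0"  -> 4608
    return out_w, out_h

# ILCE-6700 entry applied to the 6656x4608 raw:
print(apply_crop_entry(6656, 4608, 0, 0, -28, 0))  # -> (6628, 4608)
```

If that reading is correct, RawSpeed alone would deliver 6628x4608, which still does not match the 6244x4168 I actually get, hence the question about the black border above.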
Moreover, is it even valid to deviate from the official crop dimensions and crop to the biggest possible image instead, considering for example lens correction, which is now applied to a bigger image than it ‘should’ be?
Lastly, purely out of curiosity, why do raw images even include these “masked pixels”? Is there any benefit to including them in raw files, or is it just an artifact of how image sensors work?