Same phone, different raw... rgbw and rgb versions?

As mentioned above, an alternative method is to tell raw readers that these weird pixels are bad. For dcraw, the bad-pixel list is a text file with each line containing three space-separated numbers: x-coord, y-coord, and timestamp. A timestamp of zero means “we don’t know when this pixel became bad” or “this pixel was always bad”.

Within each 16x16 square, before any image rotation, the four weird pixels are at:

6,10
6,14
14,2
14,6

A Windows BAT script that creates badpix.txt, the full list of all weird pixels, is:

@echo off
setlocal enabledelayedexpansion

rem The sensor is 4224x3136 pixels. Step through it in 16x16 tiles,
rem emitting the four weird-pixel coordinates within each tile.
(
  for /L %%Y in (0,16,3135) do (
    for /L %%X in (0,16,4223) do (
      set /A X1=%%X+6
      set /A Y1=%%Y+10
      echo !X1! !Y1! 0
      set /A X1=%%X+6
      set /A Y1=%%Y+14
      echo !X1! !Y1! 0
      set /A X1=%%X+14
      set /A Y1=%%Y+2
      echo !X1! !Y1! 0
      set /A X1=%%X+14
      set /A Y1=%%Y+6
      echo !X1! !Y1! 0
    )
  )
) >badpix.txt
echo on

A bash script for the same thing is:

# The sensor is 4224x3136 pixels; visit each 16x16 tile and emit
# the four weird-pixel coordinates within it.
for Y in {0..3135..16}
do
  for X in {0..4223..16}
  do
    ((X1=$X+6))
    ((Y1=$Y+10))
    echo $X1 $Y1 0
    ((X1=$X+6))
    ((Y1=$Y+14))
    echo $X1 $Y1 0
    ((X1=$X+14))
    ((Y1=$Y+2))
    echo $X1 $Y1 0
    ((X1=$X+14))
    ((Y1=$Y+6))
    echo $X1 $Y1 0
  done
done >badpix.txt
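
Either script visits 264 x 196 tiles (the loops cover the 4224x3136-pixel sensor in steps of 16) and writes four lines per tile, so badpix.txt should contain 264 * 196 * 4 = 206,976 lines. A quick sanity check:

wc -l badpix.txt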

The first few lines of badpix.txt are:

6 10 0
6 14 0
14 2 0
14 6 0
22 10 0
22 14 0
30 2 0
30 6 0
:
:

Dcraw can then interpolate the missing values before it does any rotation, demosaicing and conversion to sRGB (-P reads the bad-pixel list, -6 and -T write a 16-bit TIFF, and -O names the output file):

dcraw -P badpix.txt -T -6 -O i3.tiff IMG_20191105_134321.dng

For RawTherapee, which doesn't use a timestamp, see Dark-Frame - RawPedia.

I did it. It seems to work with RawTherapee.

I'll test it more thoroughly when my friend gives me more raw images.

Meanwhile, another big thank you!

Such a strange sensor this is. I'm wondering whether, as I think some have theorized, the oddball pixels are PDAF sensels that aren't being interpolated/masked out.

(Newer Sony cameras create PDAF sensels by “stealing” a blue sensel adjacent to a green sensel and creating a larger 2x1 sensel with a microlens over these. I believe the PDAF sensel still has a green CFA, as analysis of sensor data has found that the blue sensels are being interpolated from their neighbors by the camera, but green sensels don’t appear to be. I’m wondering if this sensor is doing something similar but the software is not interpolating out the “stolen” sensels before saving the DNG.)

That seems plausible, @Entropy512.

To test that, I extracted images of the four channels of the raw image: R, G0, G1 and B. Unsurprisingly, G0 and G1 look roughly the same. Then, by eye, I compared the x0, x1, x2 and x3 images I showed upthread against each single-channel image, judging the relative tonalities of the blue sky and the green foliage in each.
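
A minimal Python sketch of that kind of channel extraction (rawpy and the RGGB layout are assumptions; any decoder that exposes the undemosaiced values would do, and raw.raw_pattern reports which plane is which for a real file):

import rawpy
import numpy as np

# Load the undemosaiced sensor values. Using raw_image_visible assumes
# the DNG's active area has no margin offset; otherwise use raw_image.
with rawpy.imread("IMG_20191105_134321.dng") as raw:
    mosaic = raw.raw_image_visible.astype(np.float64)

# The four 2x2-subsampled planes, assuming an RGGB mosaic.
planes = {
    "R":  mosaic[0::2, 0::2],
    "G0": mosaic[0::2, 1::2],
    "G1": mosaic[1::2, 0::2],
    "B":  mosaic[1::2, 1::2],
}
for name, plane in planes.items():
    print(name, plane.mean())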

I decided that each x-image was closer to the G channels than to R or B. So, maybe the weird pixels have a green filter.

But maybe not, because the green channels are similar to the lightness (the final colour image reduced to grayscale), so a match with green doesn't rule out "white" pixels.

Yeah. What I've never quite figured out: there has been clear evidence that the "stolen" blue pixels are being interpolated on devices like the A7M3 (based on investigations into noise statistics by the user Horshack on DPReview; I don't have links, as they're fairly old and I just remember them). But I never understood why there wasn't any statistical evidence of something "weird" happening with the green pixels that got added onto PDAF sensels, since performing as part of a PDAF sensel does alter things. Maybe the effect is so small in magnitude, compared to a normal pixel after things have been normalized out, that it just isn't visible?

The fact that the “oddball” sensels aren’t at significantly reduced brightness indicates strongly that it’s not one of the older “half masked sensel” PDAF arrangements.

We can put numbers to my comparisons. For each image, crop the top-left and bottom-right corners (10% in each direction), calculate the mean of each crop, and divide the mean of the top-left by the mean of the bottom-right. This measures the relative tonality of blue versus green objects in each image.
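
As a sketch, the measure could be computed like this in Python with Pillow and numpy (the file names are the ones in the results below):

import numpy as np
from PIL import Image

# Mean of the top-left 10% crop divided by the mean of the
# bottom-right 10% crop.
def corner_ratio(path):
    img = np.asarray(Image.open(path), dtype=np.float64)
    ch, cw = img.shape[0] // 10, img.shape[1] // 10
    return img[:ch, :cw].mean() / img[-ch:, -cw:].mean()

for name in ["x-0.png", "x-1.png", "x-2.png", "x-3.png",
             "4321_R.tiff", "4321_G0.tiff", "4321_G1.tiff",
             "4321_B.tiff", "gray.tiff"]:
    print(name, corner_ratio(name))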

x-0.png 5.94716
x-1.png 5.70896
x-2.png 5.61052
x-3.png 6.02948

4321_R.tiff 3.58296
4321_G0.tiff 4.91598
4321_G1.tiff 4.8789
4321_B.tiff 9.93241
gray.tiff 2.80004

By this measure, the x-images are closer to the green channels than to any other channel, and closer to the greens than to the grayscale version.

This blows my theory that they are “white” pixels. They seem to be green, but somewhat blueish-green.

Of course, the numbers in the raw file may not be the numbers recorded by the sensor. And perhaps the phone software should have replaced the numbers with the average of the four nearest blue pixels.

Hmm… It would be interesting to see (I won't be able to fiddle with this until tonight, and I'm kind of sick, so probably not tonight either; see the sketch after this list):
Does the ratio of the xn signals to each other shift across the image?
Does the ratio of the xn signals to each other seem to change depending on depth?
Does the ratio of the xn signals to one of their G neighbors seem to change with depth or across the image?
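
A rough way to start on the first two, as a Python sketch (rawpy and a margin-free active area are assumptions, the 16x16 offsets are the ones listed earlier, and note the bad-pixel list uses x,y order while numpy indexes row,column):

import rawpy
import numpy as np

with rawpy.imread("IMG_20191105_134321.dng") as raw:
    mosaic = raw.raw_image_visible.astype(np.float64)

# One plane per weird sensel, one sample per 16x16 tile.
# Offsets are (x, y) = (6,10), (6,14), (14,2), (14,6).
x0 = mosaic[10::16, 6::16]
x1 = mosaic[14::16, 6::16]
x2 = mosaic[2::16, 14::16]
x3 = mosaic[6::16, 14::16]

def coarsen(a, n=8):
    # Average n x n blocks of tiles to suppress per-pixel noise.
    h, w = (a.shape[0] // n) * n, (a.shape[1] // n) * n
    return a[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# A flat ratio map suggests the sensels behave identically;
# a left-right gradient would fit horizontal PDAF pairs.
ratio = coarsen(x0) / coarsen(x1)
print(ratio.min(), ratio.mean(), ratio.max())

Taking the ratio of each x-plane against a nearby green plane instead would get at the third question.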

As to my question about shifting right/left (assuming the PDAF sensels are horizontal; if vertical, they would shift up/down), see 5D4 DPRAW secondary subframe oddities and "white unbalance" - FM Forums.

PDAF sensels will have an offset near the edges of images. This is, I believe, why Sony E-mount cameras with PDAF will only offer PDAF in the center of the frame unless the lens provides some form of optical formula data (what data, I don't know; my reverse-engineering project has not gotten that far), and why, when using legacy adapter emulation, PDAF often misbehaves at the edge of the frame with many lenses. (I suspect the specific optical parameter needed is exit pupil distance.)
PDAF sensels will have an offset near the edges of images. This is, I believe, why Sony E-mount cameras with PDAF will only offer PDAF in the center of the frame unless the lens provides some form of optical formula data (what data I don’t know - my reverse engineering project has not gotten that far), and when using legacy adapter emulation, often misbehaves at the edge of the frame with many lenses. (I suspect the specific optical parameter needed is exit pupil distance.)