Same phone, different raw... RGBW and RGB versions?

Front? Could the inconsistency the OP is dealing with be a front/back lens thing?

Or was one of the units purchased in a different country? Sometimes phones that carry the same “basic” designator (such as Galaxy S3) are VASTLY different internally (and do have a different detailed model number.)

For example, from long ago (but a particularly notable example):
GT-I9300 - International Galaxy S3 (Exynos CPU)
GT-I9305 - International Galaxy S3 (Exynos CPU with ??? LTE modem)
SGH-I747 - AT&T GS3 (Qualcomm Snapdragon SoC with Qualcomm LTE)

I believe some of them had different camera sensors too…

1 Like

@Entropy512 Ho-hum… I was a bit too quick with my Italian! Here is a more accurate translation:

  • a rear 13MP image sensor by Sony (model IMX278), and
  • a front 5MP one

Scusi, Dottore!

Claes in Lund, Sweden

1 Like

There is another possibility: the part has the tech but didn’t pass QA and therefore is processed as if those pixels were dead.

The Sony “Product Brief” for IMX278 (http://cdn.specpick.com/images/photonics/products/IMX278.pdf) says:

We can pull out all the x0 pixels and make an image. Then the same for x1, x2 and x3.

These are, essentially, lightness images. But x0 and x3 are lighter than x1 and x2, by about 10% overall, so these sensels can’t be used in a simple manner for lightness. Further work might establish whether the difference is a deliberate manufacturing choice, a manufacturing defect, or something else.
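For anyone wanting to reproduce this, here is a minimal sketch of one way to isolate a single x-plane and estimate its mean brightness with ImageMagick (v7 assumed). It reuses the 16x16 tile-mask trick from the script further down the thread, takes cam_gray.tiff (the grayscale raw dump produced there by dcraw) as input, and uses position (1,6) as a stand-in for “x0”; which of the four positions carries which label is my guess, and m0.png / x0_sparse.tiff are just example names.

# 16x16 mask with a single white point at one of the weird-sensel positions
magick -size 16x16 xc:Black -fill White -draw "point 1,6" m0.png

# keep only those sensels; every other pixel is multiplied to black
magick cam_gray.tiff \
  -size %[fx:w]x%[fx:h] tile:m0.png \
  -compose Multiply -composite \
  x0_sparse.tiff

# only 1 pixel in 256 survives, so the plane's mean is roughly 256 times the image mean
magick x0_sparse.tiff -format "%[fx:mean*256]\n" info:

Repeating this with the other three positions gives one number per plane that can be compared directly.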

Demosaicking algorithms for RGBW color filter arrays looks interesting, but is for RGBW CFAs that have a greater proportion of white sensels than the IMX278.

3 Likes

Thank you, I’ll try. Yes, my ImageMagick is Q16.

Both phones were bought in Italy, my friend’s from the Honor store, mine from Amazon.it. Maybe the one from the Honor store was an international version, I don’t know. Look, the software and everything else really is the same.

I have posted a list of phones that should all use this sensor, but unfortunately there is no sample from any other phone on raw.pixls.us. IIRC the Huawei P8 doesn’t create raw files, only JPEGs.

Also, mine (which demosaics better than this one) has a lot of white dots (see some previous posts) that should be used for autofocus, but maybe they also contain some image info.

Maybe ignoring them as a first strategy isn’t a bad idea.

You might be able to find out in the info pane. Often this is hidden but you could access it by a long press or something to that effect. It depends on the software. Or you could crack open the phone and examine its innards but you would risk damaging it and voiding the warranty.

I briefly examined this and others (e.g., ones with yellow pixels) but they are way beyond my ability to implement.

Since this phone’s pattern is sparse and different (x0 and x3 vs x1 and x2), I still think they might be used for something other than straight-up RGBW interpolation. Besides focusing, it could be used for depth, transmission or salience mapping. Everything I say is speculation: it is fun to guess what it is. :stuck_out_tongue:

1 Like

And beyond mine, unless I spent loads of time on it, which I won’t as I don’t have an RGBW camera.

I haven’t found any online information about how to interpret these weird sensels. A more enthusiastic person might contact Sony. But Sony might mutter about proprietary information.

1 Like

The info panel reports the same things, unfortunately…

Thank you anyway for your great study.

I need to find out how to translate your method of making the DNG usable to Linux; that would be a great thing!

The dcraw part should be OK. I don’t have a -O option, but I think it won’t matter that much.

But I don’t understand how to translate the exiftool part, so I am unable to proceed.

Thank you again.

Use -c to output to stdout, then redirect to a file. In CMD, I could do

dcraw -T -c _DSC0488.NEF > 488.tif

-T Output to *.tiff.
-c Write decoded images or thumbnails to standard output.
command > filename Redirect command output to a file.

Notice that with redirection I can name the output any way I want. I like to shorten the extension to *.tif.

1 Like

Here is a bash script version of my above commands. Tested with Cygwin bash.

CAMERA_SRC=phone/IMG_20191105_134321.dng

# dump the raw sensor values as a linear 16-bit grayscale TIFF, no demosaicing
dcraw -v -o 0 -6 -r 1 1 1 1 -g 1 0 -D -d -c -T $CAMERA_SRC >cam_gray.tiff

# collect the tags needed to turn the patched TIFF back into a DNG
(
  exiftool -args -make -model $CAMERA_SRC

  echo -DNGVersion=1.4.0.0
  echo -DNGBackwardVersion=1.3.0.0
  echo -EXIF:SubfileType=Full-resolution Image
  echo -PhotometricInterpretation=Color Filter Array
  echo -IFD0:CFARepeatPatternDim=2 2
  echo -IFD0:CFAPattern2=1 2 0 1
  echo -Orientation=Horizontal
  echo -BitsPerSample=16
  echo -SamplesPerPixel=1
)>exifargs.txt

# 16x16 tile mask: white points mark the weird-sensel positions within each tile
magick \
  -size 16x16 xc:Black \
  -fill White \
  -draw "point 1,6 point 5,6 point 9,14 point 13,14" \
  m.png

# 5x5 kernel: average of the four neighbours two pixels away (same position in the 2x2 CFA)
KNL=\
0,0,1,0,0,\
0,0,0,0,0,\
1,0,0,0,1,\
0,0,0,0,0,\
0,0,1,0,0

# where the tiled mask is white, replace the weird sensels with the convolved average
magick \
  cam_gray.tiff \
  \( +clone \
     -define "convolve:scale=!" \
     -morphology Convolve "5x5:$KNL" \
  \) \
  -size %[fx:w]x%[fx:h] tile:m.png \
  -compose Over -composite \
  cam_gray2.tiff

# rebuild a DNG from the patched TIFF, using the tags collected above
rm cam_gray.dng
exiftool -@ exifargs.txt -o cam_gray.dng cam_gray2.tiff

dcraw -v -6 -c -T cam_gray.dng >x.tiff

This assumes ImageMagick is version 7. For v6, see my comments above. V7 is better than v6, and I recommend upgrading, unless there are particular reasons for staying with v6.

5 Likes

Great thank you!

I have been able to run the script and to get the picture!

Excellent!

Here’s an idea, if you feel like experimenting further just for kicks:

Open all x0/x1/x2/x3 channels in overlaid windows/layers and try flipping between them quickly. If the in-focus objects do not change but the out-of-focus ones move around or change intensity slightly, then those are indeed phase-detect pixels.
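One low-tech way to do that flipping, assuming the four planes have been saved to files such as x-0.png … x-3.png, is to build a looping GIF with ImageMagick and just watch it:

magick -delay 50 x-0.png x-1.png x-2.png x-3.png -loop 0 flicker.gif

(-delay 50 shows each frame for half a second; -loop 0 loops forever. The output name is arbitrary.)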

Most probably it is a case of pre-production camera firmware not interpolating those pixels out (by some method similar to the above) before saving the DNG.

As mentioned above, an alternative method is to tell raw readers that these weird pixels are bad. For dcraw, the bad-pixel list is a text file with each line containing three space-separated numbers: x-coord, y-coord, and timestamp. A timestamp of zero means “we don’t know when this pixel became bad” or “this pixel was always bad”.

Within each 16x16 square, before any image rotation, the four weird pixels are at:

6,10
6,14
14,2
14,6

A Windows BAT script that creates badpix.txt, the full list of all weird pixels, is:

setlocal enabledelayedexpansion

echo off
(
  for /L %%Y in (0,16,3135) do (
    for /L %%X in (0,16,4223) do (
      set /A X1=%%X+6
      set /A Y1=%%Y+10
      echo !X1! !Y1! 0
      set /A X1=%%X+6
      set /A Y1=%%Y+14
      echo !X1! !Y1! 0
      set /A X1=%%X+14
      set /A Y1=%%Y+2
      echo !X1! !Y1! 0
      set /A X1=%%X+14
      set /A Y1=%%Y+6
      echo !X1! !Y1! 0
    )
  )
) >badpix.txt
echo on

A bash script for the same thing is:

for Y in {0..3135..16}
do
  for X in {0..4223..16}
  do
    ((X1=$X+6))
    ((Y1=$Y+10))
    echo $X1 $Y1 0
    ((X1=$X+6))
    ((Y1=$Y+14))
    echo $X1 $Y1 0
    ((X1=$X+14))
    ((Y1=$Y+2))
    echo $X1 $Y1 0
    ((X1=$X+14))
    ((Y1=$Y+6))
    echo $X1 $Y1 0
  done
done >badpix.txt

The first few lines of badpix.txt are:

6 10 0
6 14 0
14 2 0
14 6 0
22 10 0
22 14 0
30 2 0
30 6 0
:
:

Dcraw can then interpolate the missing values before it does any rotation, demosaicing and conversion to sRGB:

dcraw -P badpix.txt -T -6 -O i3.tiff IMG_20191105_134321.dng
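If your dcraw build doesn’t have the -O option (as mentioned earlier in the thread), the -c/redirect form should do the same job:

dcraw -P badpix.txt -T -6 -c IMG_20191105_134321.dng > i3.tiff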

For RawTherapee, which doesn’t use a timestamp, see Dark-Frame - RawPedia.
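Assuming RawTherapee’s badpixels file is just the same list without the timestamp column (check the RawPedia page above for the exact file name and location it expects), badpix.txt could be converted with something like:

awk '{print $1, $2}' badpix.txt > badpixels.txt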

3 Likes

I did it. It seems to work with RawTherapee.

I’ll test it more thoroughly when my friend gives me more raw images.

Meanwhile, another big thank you!

1 Like

Such a strange sensor this is. I’m wondering if (as I think some have theorized) the oddball pixels are PDAF sensels that aren’t being interpolated/masked out.

(Newer Sony cameras create PDAF sensels by “stealing” a blue sensel adjacent to a green sensel and creating a larger 2x1 sensel with a microlens over these. I believe the PDAF sensel still has a green CFA, as analysis of sensor data has found that the blue sensels are being interpolated from their neighbors by the camera, but green sensels don’t appear to be. I’m wondering if this sensor is doing something similar but the software is not interpolating out the “stolen” sensels before saving the DNG.)

That seems plausible, @Entropy512.

To test that, I extracted images from the four channels of the raw image: R, G0, G1 and B. Unsurprisingly, G0 and G1 look roughly the same. I compared the x0, x1, x2 and x3 images I showed upthread to each single-channel image. By eye, I compared the relative tonalities of the blue sky and the green foliage in each x-image with each single-channel image.

I decided that each x-image was closer to the G channels than to R or B. So, maybe the weird pixels have a green filter.

But maybe not, because the green channels are also similar to the lightness, i.e. the final colour image reduced to grayscale.

Yeah. What I’ve never quite figured out is this: there is clear evidence that the “stolen” blue pixels are being interpolated on devices like the A7M3 (based on investigations into noise statistics by the user Horshack on DPReview; I don’t have links, as they’re fairly old and I just remember them), but I never understood why there was no statistical evidence of anything “weird” happening with the green pixels that were being added to, since serving as part of a PDAF sensel does alter things. Maybe the effect is just so small in magnitude, once things have been normalized out, that it’s not visible?

The fact that the “oddball” sensels aren’t at significantly reduced brightness indicates strongly that it’s not one of the older “half masked sensel” PDAF arrangements.

We can put numbers to my comparisons. For each image, crop the top-left and bottom-right corners (10% in each direction). Calculate the mean of each crop, and divide the mean of the top-left by the mean of the bottom-right. This measures the relative tonality of blue versus green objects in each image.
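A minimal sketch of how that ratio can be computed with ImageMagick (v7 assumed; the 10% crops are as described above, and the x-0.png file name is just one of the planes):

# mean of the top-left 10% x 10% crop
TL=$(magick x-0.png -gravity NorthWest -crop 10%x10%+0+0 -format "%[fx:mean]" info:)
# mean of the bottom-right 10% x 10% crop
BR=$(magick x-0.png -gravity SouthEast -crop 10%x10%+0+0 -format "%[fx:mean]" info:)
# relative tonality: top-left over bottom-right
echo "$TL $BR" | awk '{print $1/$2}'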

x-0.png 5.94716
x-1.png 5.70896
x-2.png 5.61052
x-3.png 6.02948

4321_R.tiff 3.58296
4321_G0.tiff 4.91598
4321_G1.tiff 4.8789
4321_B.tiff 9.93241
gray.tiff 2.80004

By this measure, the x-images are closer to the green channels than to any other channel, and closer to those than to the grayscale version.

This blows my theory that they are “white” pixels. They seem to be green, but somewhat blueish-green.

Of course, the numbers in the raw file may not be the numbers recorded by the sensor. And perhaps the phone software should have replaced the numbers with the average of the four nearest blue pixels.