Support for Pentax pixel shift files #3489


(Ilias Giarimis) #61

Ha … I could never have imagined such an improvement from false colour suppression!! Although I prefer to kill the artefact at the root …

Ingo, have you researched the reason for this pattern? Is it a small movement, or is it the strong CA? In the previous sample with the cat I think it was a movement … but here we have no problem in the nearby area with the letters !! … or is it a rotational movement, so that it increases the farther we get from the center?

EDIT … if we try with the Low light version https://www.dpreview.com/reviews/image-comparison?attr18=lowlight&attr13_0=pentax_k1&attr13_1=pentax_k1&attr13_2=pentax_k3ii&attr13_3=pentax_k3ii&attr15_0=raw&attr15_1=raw&attr15_2=raw&attr15_3=raw&attr16_0=100&attr16_1=100&attr16_2=100&attr16_3=100&attr126_0=highres&attr126_1=highres&attr126_2=highres&attr126_3=normal&normalization=full&widget=1&x=0.29062067021636845&y=0.10783620532730145

I think the result is better because any mirror/shutter shock has much smaller influence due to the long exposure (3 sec).


(Ingo Weyrich) #62

Ilias, you’re right. The Low light version is much better. Though it also shows that I have to make the changes to allow hot/dead pixel detection for pixelshift.

Here is a link to 2 tif files from the Low light version using neutral profile + amaze and neutral profile + pixelshift.


(Ilias Giarimis) #63

Ingo, MUCH better … for both single and multiple frames …
So it must be something with the “daylight” shot (shock motion? … CA? … inaccurate pixelshift movement?) which triggers the artifacts.


(Ingo Weyrich) #64

Here are the first screenshots of motion detection.
Left is without motion correction.
Right in the first screenshot is with ‘Show motion’.
Right in the second one is with motion correction.


(Ingo Weyrich) #65

Though it’s not ready to push at the moment, I still want to give some additional info on how the motion detection and correction works.

The detection uses a method derived from dcrawps to detect ‘motion’. In fact these are differences in the green channels, which are sampled twice. dcrawps uses 2 pixels (× 2 samples) for this, but after some tries I found that using 4 pixels (× 2 samples) gives better results. So the RT implementation uses 4 pixels (× 2 samples) for the detection.
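
The detection idea can be sketched as follows (a minimal illustration in Python; RT itself is C++, and the function name, array layout, and threshold handling here are invented for the example — the real code works on the Bayer frames directly):

```python
import numpy as np

def detect_motion(green1, green2, threshold, window=4):
    """Flag 'motion' where the two green samples of a pixel-shift
    sequence disagree.  green1/green2 are aligned 2-D arrays holding
    the two green measurements of each position.

    window=2 mimics the dcrawps variant (2 pixels x 2 samples),
    window=4 the variant described above (4 pixels x 2 samples).
    """
    diff = np.abs(green1.astype(float) - green2.astype(float))
    # Average the per-pixel differences over `window` horizontally
    # adjacent pixels before thresholding; the larger support
    # suppresses false positives from noise.
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), 1, diff)
    return smoothed > threshold
```
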

If ‘Pixelshift motion control’ is > 0 and ‘Show motion’ is enabled, the detected ‘motion’ pixels are marked green.

If ‘Pixelshift motion control’ is > 0 and ‘Show motion’ is disabled, the detected ‘motion’ pixels are replaced by the amaze demosaiced version of the selected ‘Sub-image’. This way you can even decide with which of the 4 versions of the image you want to replace the ‘motion’.

If ‘Pixelshift motion control’ is = 0 then motion detection and correction is disabled.
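
The three cases above amount to a simple per-pixel decision. Here is a sketch of that logic (names and the tuple-based pixel representation are illustrative only, not RT's actual API):

```python
def apply_pixelshift_motion(ps_pixel, amaze_frames, selected_sub_image,
                            motion_detected, motion_control, show_motion):
    """Decide the output value of one pixel, following the behaviour
    described above.  motion_control == 0 disables detection and
    correction entirely."""
    GREEN_MARKER = (0, 255, 0)
    if motion_control == 0 or not motion_detected:
        return ps_pixel                        # keep the pixel-shift value
    if show_motion:
        return GREEN_MARKER                    # visualize detected motion
    # replace by the amaze-demosaiced value of the chosen sub-image
    return amaze_frames[selected_sub_image]
```
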

Maybe I can push the new version at the weekend…

Ingo


(Ingo Weyrich) #66

Another example of pixelshift motion correction.
Left without, right with motion correction.


#67

Really great work already, thanks a lot @heckflosse for bringing PixelShift to RawTherapee! Motion correction seems to work pretty well :slight_smile:


(Ilias Giarimis) #68

@heckflosse Ingo, very good !!

The option to select the best frame to use for amaze demosaic is very useful.
I wonder what happens with the trumpet in …505.PEF …

  • is any motion detected there?
  • how do frames 2, 3, 4 look with amaze? … I mean that
    • there is a great possibility that any shock only affects the first frame
    • the alignment of pixels relative to the lines is better in some frames (this alignment difference can result in a significant difference in raw value, and so it could also be detected as motion in pixelshift :wink: )
      My previous proposal to only use pairs of consecutive frames could be applied here … i.e. instead of losing all of pixelshift’s benefits by using a pure 1-frame amaze, use the info of the next or previous frame constructively. As a first step there is no need to fully adapt amaze to this scheme … just help it a bit, i.e.
  • As I understand it, amaze (like all demosaic algorithms) makes a first-step interpolation of missing values … instead of using this first interpolation, use the available values of the next or previous frame … i.e. instead of interpolating from the 4 red neighbour pixels at a green pixel, use the already measured red value from the neighbouring frame … maybe a next step is to use some weighting between calculated (interpolated) values and measured values …
    … then continue with the amaze magic …
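
The proposal boils down to blending a directly measured value from a shifted frame with the demosaicer's interpolated estimate. A tiny sketch (hypothetical helper, not RT code; the frame indexing is an assumption for illustration):

```python
def blended_value(frames, frame_with_sample, y, x, interpolated, weight=1.0):
    """At a position where the current frame has no red sample, another
    pixel-shift frame measured red at exactly (y, x).  Use that measured
    value instead of, or blended with, the demosaicer's interpolated
    estimate.

    frames: dict frame_index -> 2-D raw array
    weight=1.0 uses only the measured value; intermediate weights are the
    'next step' weighting mentioned above."""
    measured = frames[frame_with_sample][y][x]
    return weight * measured + (1.0 - weight) * interpolated
```
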

Also useful would be an option to control the sensitivity of the detection (dcrawps offers this as a percentage, i.e. 10%, 20%, etc.). The threshold should be different for noisy shots (high ISO vs. low ISO), and also different for dark pixels vs. bright pixels, because due to shot noise we measure different values.
For example, at a level where 64 photons are detected, the shot-noise stdev is sqrt(64) = 8, which means there is a great probability (more than 50%) that our two measurements differ by more than 10% due to shot noise alone (and not any movement of the subject), so a 10% threshold is not safe … while if 10000 photons are detected, the stdev is sqrt(10000) = 100, i.e. a very low probability of measurements differing by more than 3% due to shot noise, so a 5% threshold is totally safe for detecting only motion.
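
The shot-noise argument above follows directly from Poisson statistics (relative noise = 1/sqrt(photons)); a small worked check:

```python
import math

def relative_shot_noise(photons):
    """Relative standard deviation of a Poisson-distributed photon
    count: sigma/mean = sqrt(N)/N = 1/sqrt(N)."""
    return math.sqrt(photons) / photons

# Numbers from the paragraph above:
# 64 detected photons -> sigma = 8, i.e. 12.5% relative noise, so a
# fixed 10% motion threshold is frequently exceeded by noise alone.
low = relative_shot_noise(64)      # 0.125
# 10000 photons -> sigma = 100, i.e. 1% relative noise, so a 5%
# threshold is practically never triggered by noise.
high = relative_shot_noise(10000)  # 0.01
```
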
I can make a table with the photons/raw_value relation for the K1, K3II, K70, etc.
For the K1 it’s around 3 photons per 14-bit raw unit @ ISO 100 … 1.5 @ ISO 200, etc. … K1_photons ≈ 3 * (100/ISO) * raw_value_14bit
For the K70 it’s around 2 photons per 14-bit raw unit @ ISO 100 … 1.0 @ ISO 200, etc. … K70_photons ≈ 2 * (100/ISO) * raw_value_14bit
For 12-bit values multiply by 4.
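
Expressed as code, this per-camera conversion (using the approximate gains quoted above; the function name and parameterization are mine, not from RT) looks like:

```python
def photons_from_raw(raw_value_14bit, iso, gain_at_iso100):
    """Estimate detected photons from a 14-bit raw value.  The gain
    halves each time ISO doubles, hence the (100/ISO) factor.
    For 12-bit raw values, multiply the raw value by 4 first."""
    return gain_at_iso100 * (100.0 / iso) * raw_value_14bit

# Approximate figures from the post:
K1_GAIN_ISO100 = 3.0    # Pentax K-1: ~3 photons per 14-bit unit @ ISO 100
K70_GAIN_ISO100 = 2.0   # Pentax K-70: ~2 photons per 14-bit unit @ ISO 100
```
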


(Ilias Giarimis) #69

Ahh … I totally missed this comment …
Ingo, yes … this is what I mean … divide each full-colour grid position into 4 Bayer pixels R, G1, B, G2, where the values come from interpolating all 4 neighbour pixels, weighted by inverse square distance.

I think it will be somehow better and somehow worse, but I give it good chances of being better on average, with fewer artefacts. Of course the per-pixel detail will be worse at 100%, but after downsampling to 1/2 (same pixel count as pixelshift) or something intermediate, i.e. 0.717 (2× the pixel count of pixelshift), it will come close.
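
The interpolation step of this proposal is plain inverse-distance weighting; a generic sketch (illustrative only — the neighbour selection and distances would come from the actual pixel-shift grid geometry):

```python
def idw(values, distances, power=2):
    """Inverse-distance weighting of neighbour samples.
    power=2 gives the inverse *square* distance weights proposed
    above; each neighbour contributes proportionally to 1/d^power."""
    weights = [1.0 / (d ** power) for d in distances]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```
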


(Gimbal Lock) #70

Just want to jump in and cheer you on, keep up the good work. Can’t wait to try this myself.


(Ingo Weyrich) #71

At the setting needed to detect the real motion in …505.PEF, no motion is detected in the trumpet.

That’s already possible using the ‘Pixelshift motion control’ adjuster (see screen shots above). I forgot to mention that.


(Ingo Weyrich) #72

Short status update:

The last days I worked on improving motion correction.

My current version (still not pushed) supports three levels of motion correction.

  1. Level 1: the same as implemented in dcrawps (if a pixel is detected as ‘motion’, take the values for the pixel and the horizontally next one from the demosaicer)

  2. Level 2: if the pixel or a pixel in its 3x3 neighbourhood is detected as motion, take the value for the pixel from the demosaicer

  3. Level 3: if the pixel or a pixel in its 5x5 neighbourhood is detected as motion, take the value for the pixel from the demosaicer
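
The three levels can be sketched as mask expansion followed by replacement (illustrative Python, not RT's actual code; note `np.roll` wraps at borders, where a real implementation would clamp):

```python
import numpy as np

def correct_motion(ps_image, demosaiced, motion_mask, level):
    """Replace 'motion' pixels of the pixel-shift result with the
    demosaiced fallback, per the three levels described above.
    motion_mask is a boolean array of detected motion pixels."""
    out = ps_image.copy()
    if level == 1:
        # dcrawps style: replace the flagged pixel and the one to its right
        replace = motion_mask | np.roll(motion_mask, 1, axis=1)
    else:
        # level 2: dilate the mask over a 3x3 window; level 3: over 5x5
        r = 1 if level == 2 else 2
        replace = np.zeros_like(motion_mask)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                replace |= np.roll(np.roll(motion_mask, dy, axis=0), dx, axis=1)
    out[replace] = demosaiced[replace]
    return out
```
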

Here are two screenshots which show the differences. The left part of each screenshot is the dcrawps method of motion correction.
The right part of the first screenshot is the RT method using the 3x3 grid; the right part of the second is the RT method using the 5x5 grid.

Edit: In case of interest, here is the current rt pixelshift code (still not pushed)


#73

Very subtle differences between the 3x3 and 5x5 grids.
In this sample the 5x5 grid looks best :wink:
Please enable this code or share this build of RT for tests…


(Ingo Weyrich) #74

I pushed the changes to allow tests of motion correction, though currently it’s not correctly integrated into the RT pipeline.
Expect colour casts when you change the demosaic method or the pixelshift frame.
In this case just reload the image to continue testing. Enjoy!


#75

You are a genius!

Very good work!

The final correction effect is excellent!

Just a few cosmetic tweaks and you can say that RT fully supports pixel shift…


(Ingo Weyrich) #76

No, it’s more than cosmetic work. What I pushed is a test version. It’s far from being complete.
For example, none of the raw preprocessing tools currently work with pixelshift…
I’m working on that.


#77

Do not worry :wink:

Capture One Pro 9.XX

Pentax K-1 Pixel shift mode not supported :wink:

https://www.phaseone.com/en/Products/Software/Capture-One-Pro/Supported-Cameras.aspx

In RT Pentax K-1 is partially supported :slight_smile:


(Ingo Weyrich) #78

Why should I worry? As @floessie wrote some time ago in an RT GitHub comment, coding can be a fun session :slight_smile:
Keep in mind that Pentax pixelshift support in RT is still WIP (work in progress) :wink:


#79

Today it seems that open-source software is better than commercial software :wink:
Fast coding and a rapid response to user needs are the basis!


(Ingo Weyrich) #80

Thanks to @Hombre, who provided PDCU processed pixelshift files, I could make a comparison between PDCU motion correction and RT motion correction. Left is PDCU without motion correction, middle is PDCU with motion correction, right is RT with motion correction.

The RT motion correction was processed using the new method based on the idea from @ilias_giarimis.
We are still working on it…