Ha … I could never have imagined such an improvement from false color suppression! Although I prefer to kill the artefact at the root …
Ingo, have you researched the reason for this pattern? Is it a small movement, or is it the strong CA? In the previous sample with the cat I think it must be movement … but here we have no problem in the nearby area with the letters! … or is it a rotational movement, so it increases the farther we get from the center?
Ilias, you’re right. The Low light version is much better. Though it also shows that I have to make the changes to allow hot/dead pixel detection for pixelshift.
Here is a link to 2 tif files from the Low light version using neutral profile + amaze and neutral profile + pixelshift.
Ingo, MUCH better … for both single and multiple frames …
So it must be something with the “daylight” shot (shock motion? … CA? … inaccurate pixelshift movement?) which triggers the artifacts.
Here are first screen shots of motion detection.
Left is without motion correction.
Right in first screenshot is with ‘Show motion’.
Right in second one is with motion correction.
Though it’s not ready to push at the moment, I still want to give some additional info on how the motion detection and correction work.
The detection uses a method derived from dcrawps to detect ‘motion’. In fact these are differences in the green channels, which are sampled twice. dcrawps uses 2 pixels (x 2 samples) for this, but after some tries I found that using 4 pixels (x 2 samples) gives better results. So the rt implementation uses 4 pixels (x 2 samples) for the detection.
If ‘Pixelshift motion control’ is > 0 and ‘Show motion’ is enabled, the detected ‘motion’ pixels are marked green.
If ‘Pixelshift motion control’ is > 0 and ‘Show motion’ is disabled, the detected ‘motion’ pixels are replaced by the amaze demosaiced version of the selected ‘Sub-image’. This way you can even decide with which of the 4 versions of the image you want to replace the ‘motion’.
If ‘Pixelshift motion control’ is = 0 then motion detection and correction is disabled.
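To make the mechanics concrete, here is a minimal sketch (in Python/NumPy, not RT’s actual C++ code) of the kind of green-channel difference test described above. The function name, the 2x2 grouping of the 4 pixels and the default threshold are my own assumptions, not the real implementation:

```python
import numpy as np

def detect_motion(green_a, green_b, threshold=0.1):
    """Flag 'motion' where the twice-sampled green channel disagrees.

    green_a, green_b: 2-D arrays holding the two green samples that a
    pixelshift stack provides for each position. Hypothetical sketch."""
    a = green_a.astype(float)
    b = green_b.astype(float)
    # Relative difference between the two green samples at each position.
    rel = np.abs(a - b) / (0.5 * (a + b) + 1e-6)
    # Average the per-pixel differences over 2x2 blocks of 4 pixels,
    # mimicking the "4 pixels (x 2 samples)" grouping described above.
    h, w = rel.shape
    blocks = rel[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)) > threshold  # True = 'motion'
```

A True in the result marks a block that would be painted green by ‘Show motion’, or replaced by the amaze-demosaiced sub-image otherwise.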
The option to select the best frame to use for amaze demosaic is very useful.
I wonder what happens with the trumpet in …505.PEF …
Is any motion detected there?
How do frames 2, 3, 4 look with amaze? … I mean that:
- there is a great possibility that any shock only affects the first frame
- the alignment of pixels relative to the lines is better in some frames (this alignment difference can result in a significant difference in raw value, so it could also be detected as motion in pixelshift)
My previous proposal to only use pairs of consecutive frames … could be applied here … i.e. instead of losing all of pixelshift’s benefits by using pure 1-frame amaze, use the info of the next or previous frame constructively. As a first step there is no need to fully adapt amaze to this scheme … just help it a bit, i.e.:
As I understand it, amaze (and all demosaic algos) makes a first-step interpolation of missing values … instead of using this first interpolation, use the available values of the next or previous frame … i.e. instead of interpolating the 4 red neighbour pixels on a green pixel, use the already measured red value from the neighbour frame … maybe a next step is to use some weighting between calculated (interpolated) values and measured values …
… then continue with the amaze magic …
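The substitution proposed above could be sketched roughly like this. This is only a toy illustration under my own assumptions: a naive 4-neighbour mean stands in for the demosaicer’s first interpolation pass, and the function and parameter names are hypothetical:

```python
import numpy as np

def seed_missing_red(bayer, red_mask, neighbour_red, weight=1.0):
    """Seed missing red values from a pixel-shifted neighbour frame
    instead of (or blended with) plain interpolation. Hypothetical."""
    f = bayer.astype(float)
    # Naive 4-neighbour mean, standing in for a demosaicer's
    # first-step interpolation of the missing red values.
    interp = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
              np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    # Blend the value actually measured by the shifted neighbour frame
    # with the interpolated guess; weight=1.0 keeps only the measurement.
    seeded = weight * neighbour_red.astype(float) + (1.0 - weight) * interp
    # Where the current frame itself has a real red sample, keep it.
    return np.where(red_mask, f, seeded)
```

With weight=1.0 this is the "use the measured value" first step; intermediate weights correspond to the weighting between interpolated and measured values mentioned above.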
Also useful would be the option to control the sensitivity of the detection (dcrawps gives this as a percentage, i.e. 10%, 20% etc.). The threshold should be different for noisy shots, i.e. high ISO vs low ISO, and also different for dark pixels vs bright pixels, because due to shot noise we measure different values.
For example, at a level where 64 photons are detected, the shot noise stdev is sqrt(64) = 8, which means there is a good chance (more than 50%) that our two measurements differ by more than 10% due to shot noise (and not any movement of the subject), so a threshold of 10% is not safe … while if 10000 photons are detected, the stdev is sqrt(10000) = 100, i.e. a very low chance that the measurements differ by more than 3% due to shot noise, so a threshold of 5% is totally safe to detect only motion.
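The shot-noise argument can be turned into a tiny helper. This is my own sketch (the function name and the k = 2 sigma safety margin are assumptions, not anything from dcrawps or RT):

```python
import math

def safe_threshold(photons, k=2.0):
    """Smallest relative-difference threshold that sits safely
    above shot noise at a given signal level (in photons)."""
    # The difference of two Poisson samples with mean `photons` has
    # stdev sqrt(2 * photons). A relative threshold of k such sigmas
    # is a level that real motion must exceed to stand out from noise.
    return k * math.sqrt(2.0 * photons) / photons

# safe_threshold(64)    -> ~0.35: a 10% cut would mostly flag shot noise
# safe_threshold(10000) -> ~0.028: a 5% cut sits safely above the noise
```

This reproduces the conclusion above: 10% is unsafe at 64 photons, while 5% is comfortable at 10000.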
I can make a table with photons/raw_value relation for K1, K3II, K70 etc.
For K1 it’s around 3 photons / raw_value_14bit @ ISO 100 … 1.5 @ ISO 200 etc. … K1photons = 3 * (100/ISO) * raw_value14
For K70 it’s around 2 photons / raw_value_14bit @ ISO 100 … 1.0 @ ISO 200 etc. … K70photons = 2 * (100/ISO) * raw_value14
For 12bit values multiply by 4.
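Those per-camera gains plug into a one-line conversion; a sketch using the figures quoted above (the helper itself and its names are hypothetical):

```python
def photons_from_raw(raw_value_14bit, iso, gain_iso100):
    """Approximate detected photons from a 14-bit raw value.

    gain_iso100: photons per 14-bit raw DN at ISO 100
    (~3 for the K1, ~2 for the K70, per the figures above);
    the gain scales with 100/ISO. For 12-bit raw values,
    multiply the raw value by 4 first."""
    return gain_iso100 * (100.0 / iso) * raw_value_14bit
```

E.g. for the K1 at ISO 200 this gives the 1.5 photons per DN quoted above.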
Ahh … I totally missed this comment …
Ingo, yes … this is what I mean … divide each full-color grid position into 4 Bayer pixels R, G1, B, G2, where the values come from interpolating all 4 neighbour pixels, weighted by inverse square distance.
I think it will be somewhat better and somewhat worse, but I give it better chances of being better on average, with fewer artefacts. Of course the per-pixel detail will be worse at 100%, but downsampling to 1/2 (same pixel count as pixelshift) or something intermediate, i.e. 0.717 (2x the pixel count of pixelshift), will bring it close.
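For the inverse-square-distance weighting itself, a minimal generic sketch (my own helper, not tied to any particular grid layout):

```python
import numpy as np

def idw(samples, distances):
    """Inverse-square-distance weighted average of neighbouring samples,
    as proposed for filling each of the 4 Bayer positions from the
    surrounding full-colour grid points. Illustrative only."""
    w = 1.0 / np.asarray(distances, dtype=float) ** 2
    s = np.asarray(samples, dtype=float)
    # Weight each neighbour by 1/d^2 and normalise by the weight sum.
    return float(np.sum(w * s) / np.sum(w))
```

Equal distances reduce this to a plain mean; a closer sample pulls the result toward its own value.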
The last days I worked on improving motion correction.
My current version (still not pushed) supports three levels of motion correction.
1st level: the same as implemented in dcrawps (if the pixel is detected as ‘motion’, take the values for the pixel and the horizontally next one from the demosaicer)
2nd level: if the pixel or a pixel in its 3x3 neighbourhood is detected as motion, take the value for the pixel from the demosaicer
3rd level: if the pixel or a pixel in its 5x5 neighbourhood is detected as motion, take the value for the pixel from the demosaicer
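The 3x3 and 5x5 neighbourhood checks amount to dilating the motion mask; a sketch in Python/NumPy under my own assumptions (note that np.roll wraps at the image edges, which a real implementation would clamp instead):

```python
import numpy as np

def expand_motion(mask, radius):
    """Treat a pixel as motion if any pixel in its (2*radius+1)^2
    neighbourhood was detected as motion (radius=1 -> 3x3,
    radius=2 -> 5x5). Hypothetical sketch of the idea."""
    out = mask.copy()
    # OR the mask with every shifted copy of itself within the window.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out
```

Pixels set in the expanded mask would then take their values from the demosaicer.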
Here are two screen shots which show the differences. Left part of screenshots is dcrawps method of motion correction.
Right part of first screen shot is rt method using 3x3 grid. Right part of second screen shot is rt method using 5x5 grid.
I pushed the changes to allow tests of motion correction, though currently it’s not correctly integrated into the rt pipeline.
Expect colour casts when you change demosaic method or pixelshift frame.
In this case just reload the image to continue tests. Enjoy!
No, it’s more than cosmetic work. What I pushed is a test-version. It’s far from being complete.
For example all the raw preprocessing tools currently don’t work with pixelshift…
I’m working on that.
Why should I worry? As @floessie wrote some time ago in rt github comment, coding can be a fun session
Keep in mind that Pentax pixelshift support in rt is still wip (work in progress)
Thanks to @Hombre, who provided PDCU processed pixelshift files, I could make a comparison between PDCU motion correction and RT motion correction. Left is PDCU without motion correction, middle is PDCU with motion correction, right is RT with motion correction.