Support for Pentax pixel shift files #3489

When I open a PEF (from a K-70) in RT, I see a violet frame around the image in the main RT window (bug?).

If I convert the raw file to DNG, this problem is not visible.

RT does not support Pixel Shift Resolution!

The JPEG output from RT is only 9.86 MB, while the JPEG output from DCU is 22.8 MB!

The degradation of detail in RT from the Pixel Shift Resolution DNG is very big!

How to write useful bug reports

Many words, not enough content …

Try this PEF yourself :wink:


No words needed. For the crop :wink:

```diff
diff --git a/rtengine/camconst.json b/rtengine/camconst.json
index e8cba2f..9792b70 100644
--- a/rtengine/camconst.json
+++ b/rtengine/camconst.json
@@ -1740,6 +1740,12 @@ Quality X: unknown, ie we knowing to little about the camera properties to know
+    { // Quality C, only crop
+        "make_model": [ "PENTAX K-70" ],
+        "raw_crop": [ 54, 25, 6021, -8 ]
+    },
     { // Quality B, intermediate ISOs info missing
         "make_model": [ "RICOH PENTAX 645Z", "PENTAX 645Z" ],
         "dcraw_matrix": [ 9519,-3591,-664,-4074,11725,2671,-624,1501,6653 ], // adobe dcp d65
```

Ok ! :wink:

This is a much more difficult task :slight_smile:

Pixel Shift Resolution DNG :


and the JPEG quality of RT's output should be like this JPEG (only AWB set; output from DCU 5 for Pentax cameras):


For the moment, here is the updated camconst.json with K-70 support (PEFs only …)

@heckflosse Ingo, do you plan to embed raw stacking in RT (import the HDRMerge code)? I think this gets increasingly useful as many new models export multiple frames, like Pentax’s pixel shift and Canon’s Dual Pixel (5D IV).

I think as a first step RT should activate the -s switch of dcraw (-s 0 is the default and means the first frame, -s 1 reads the second, -s 2 and -s 3 the third and fourth) … also useful for some old Fujis.
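For illustration, the dcraw calls in question would look roughly like this (the file name is hypothetical; `-s` selects which of the embedded raw images to decode):

```shell
# Hypothetical example file name; -s picks one of the embedded raw frames.
# "-s all" decodes every frame, writing one output file per frame.
# -6 = 16-bit output, -T = write TIFF instead of PPM.
dcraw -6 -T -s all IMGP1234.PEF

# Or extract just the second frame (frames are numbered from 0):
dcraw -6 -T -s 1 IMGP1234.PEF
```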

Hi Ilias,

thanks for the -s1(2;3) hint. I just tried it and it is easy to extract each of the frames in RT.
From what I read, for the K-70 demosaicing would not be needed for pixel shift shots. Just combining the 4 frames in the right way should do it.

I opened Issue 3489

Right! I tried selecting ‘none’ as the demosaic method in RT’s settings, but I only see green patterns :wink:

Short status update:

Branch pixelshift now allows accessing individual frames from Pentax pixelshift and Pentax HDR (bracketing) files.

Combining the individual sub-frames of pixelshift files into a so-called ‘super resolution’ image will follow later.
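For readers unfamiliar with the idea, the straightforward 4-frame combination can be sketched like this. This is a minimal illustration in Python, not RT’s actual code; the RGGB pattern and the shift order are assumptions made for the example:

```python
def combine_pixelshift(frames):
    """Combine four pixel-shift sub-frames into one full-color image.

    Illustrative sketch only. Assumes `frames` is a list of four H x W
    Bayer mosaics (lists of lists, RGGB pattern) taken with the sensor
    shifted by one pixel between shots, so every pixel position is
    sampled through each color filter across the four frames. The shift
    order below is an assumption, not the actual K-70 sequence.
    """
    cfa = [[0, 1], [1, 2]]                     # 0 = R, 1 = G, 2 = B (RGGB)
    shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # assumed one-pixel shifts
    h, w = len(frames[0]), len(frames[0][0])
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                c = cfa[(y + dy) % 2][(x + dx) % 2]
                if c == 1:               # green is sampled twice: average
                    out[y][x][1] += frame[y][x] / 2.0
                else:                    # red and blue sampled once
                    out[y][x][c] = frame[y][x]
    return out
```

Every output pixel ends up with a measured value in all three channels, which is why no demosaicing is needed for static scenes.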


Wow, this is great news. I have a Pentax K3-II and it would be sweet to be able to use RawTherapee to process a pixel shift RAW and end up with a sharper image that’s a combination of the four shots. I’m subscribed to the GitHub issue.

I haven’t tried the modified version of dcraw yet. Are you planning to attempt to avoid artefacts caused by movement like that version of dcraw does?

Anyway, let me know if you’d like any RAW files or testing. Thanks!

In the next step I will implement the simple combination of the 4 shots into a super resolution image.
Then I have to solve some issues with pixelshift images and raw preprocessing in RT, e.g. .badpixels files currently won’t work correctly on shots 1, 2 and 3 if you use the coordinates from shot 0. There will also be some things which can’t work at all with pixelshift shots, e.g. raw CA correction cannot be applied: it would have to be applied to the raw data before combining the shots, but that would make combining impossible because raw CA correction moves the pixel values.
When all the issues are solved, I will try the approach from dcrawps to avoid artifacts caused by movement, but slightly differently: I want to use the green channel variation as a blend mask between the super resolution combined image and an AMaZE-demosaiced image from the first shot.
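The blend-mask idea could be sketched roughly as follows. This is illustrative only: it compares the aligned raw values of all four frames directly instead of just the green samples, and the threshold value is made up:

```python
def motion_mask(frames, threshold=0.05):
    """Sketch of a motion blend mask: where the aligned frames disagree
    at a pixel, assume the subject moved between shots.

    Hypothetical simplification of the green-variation idea: compares
    the values of all four aligned H x W frames directly; `threshold`
    is an invented tuning parameter. Returns 1.0 where the frames agree
    (use the pixel-shift combination) and 0.0 where they differ (fall
    back to the demosaiced first frame).
    """
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames]
            if max(vals) - min(vals) >= threshold:
                mask[y][x] = 0.0
    return mask

def blend(superres, fallback, mask):
    """Per-pixel blend between the combined image and the fallback."""
    h, w = len(mask), len(mask[0])
    return [[mask[y][x] * superres[y][x]
             + (1.0 - mask[y][x]) * fallback[y][x]
             for x in range(w)] for y in range(h)]
```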

Ingo, I don’t understand this. Isn’t the data still in Bayer form after autoCA? Why can’t we combine this altered data?

Ilias, sure, the data is still in Bayer form after autoCA, but it’s kind of shifted due to the CA correction, which means I would not expect a good fit when combining the individual shots of a pixelshift file into a super resolution file. But maybe it’s worth a try.

I had (have) great hopes for autoCA because, as I see it, pixelshift files (exported JPEGs/TIFFs) suffer more from CA than single Bayer shots do.
But maybe before this we (you :wink: ) should try to fix the autoCA algorithm … it gives artifacts (reddish color on neutral black details, e.g. black letters/lines on white background … see the Imaging Resource still life samples). I will open an issue tomorrow …

An interesting alternative approach regarding artefacts coming from motion would be to use only one of the consecutive pairs (AB, BC, CD) instead of all four frames, to minimize the motion effect. Logic says that such a pair would give more than half the benefit of the full 4-frame combination, because green is sampled at all grid positions and red/blue are fully sampled in half of the lines (or columns, depending on the pair used) … then the interpolation of the missing R’s or B’s can draw on much more robust data than plain Bayer (six pixels in the lines above and below), where an adapted AMaZE (mainly its sophisticated gradient detection and color upsampling) can do wonders with minimal demosaic errors (moiré/false color etc.).

BTW … “super resolution” could (should?) force a larger display size (for better visibility of fine details), i.e. a 2× grid … then why not “bayerize” the 4 frames into a single 2× Bayer grid and let the Bayer tools do their job? :wink:
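The “bayerize” idea could be sketched like this. Purely illustrative: which frame belongs on which phase of the doubled grid, and whether the result is a valid Bayer pattern at all, depends on the camera’s actual shift sequence:

```python
def bayerize_2x(frames):
    """Interleave four H x W pixel-shift sub-frames into one 2H x 2W mosaic.

    Sketch of the 'bayerize' suggestion: the four frames are placed on
    the even/odd rows and columns of a grid with doubled dimensions.
    The placement order below is an assumption for illustration.
    """
    h, w = len(frames[0]), len(frames[0][0])
    big = [[0.0] * (2 * w) for _ in range(2 * h)]
    phase = [(0, 0), (0, 1), (1, 1), (1, 0)]   # assumed placement order
    for frame, (py, px) in zip(frames, phase):
        for y in range(h):
            for x in range(w):
                big[2 * y + py][2 * x + px] = frame[y][x]
    return big
```

The doubled mosaic could then be run through the existing Bayer pipeline and downsized afterwards, as suggested above.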

Where can I find binaries of this branch for Windows, or a *.deb for Linux, for testing?

First of all, I have not the slightest idea what these cameras do and how pixel shift works. So ignore me if I am not making sense.

Can’t you merge the different shots into one new Bayer file that can go through autoCA afterwards? The CA should be quite uniform across the single images I assume?

Wouldn’t it be better to leave CA correction until after the images have been combined?

I would think you could get better results from correcting the fully sampled red and blue channels, rather than correcting the sub-sampled channels on each image.

Also, there’s no need to rely on the red and blue channels to get the best fully sampled green channel. It’s fully sampled in the combined image.

You won’t find any as long as the branch is under development. But you can build from source.

Yes, but that requires some coding time because the current CA correction implementation is based on the sub-sampled channels. I’m not saying that I won’t do it, but it will take a while.

When I wrote that CA correction can’t work at all with pixelshift shots, I meant the current implementation. A new implementation as @Iain suggested would solve that.

I think we will end up with different methods:
First one will be the simple combination of 4 frames to one image (without motion detection).
Next one could be the one from dcrawps with motion detection, but with a different demosaicer for the regions with motion.
Or the one @ilias_giarimis described above with only one pair of frames.

Do you mean to generate a Bayer grid where each dimension is doubled? Then of course we can use all the Bayer tools and downsize afterwards. But do you think that will give the same detail as the simple version, without the need to debayer?

Here are four screenshots from my first implementation of pixelshift in RT. Left is AMaZE, right is pixelshift.


If you could provide at least one pixel shift file meeting the following requirements, that would be great:

1.) Static subject, no movement.
2.) ISO 100 and f/8 or f/5.6 (not more than f/8 please) to get maximum detail (with a prime lens if available).
3.) It should have chromatic aberration (not purple fringing).

Thanks in advance,