Support for Pentax pixel shift files #3489

Are those global or local values? I think you need a slowly varying correction to handle all the anomalies in PS. That would preserve real changes in illumination across the image but fix changes between frames. I think that both methods you have are too variable if done pixel by pixel.
See what I’m getting at?
RONC
PS: Pixel Shift doesn’t mention Pentax in the list.

It’s in the tooltip :wink:

Didn’t look there.
RONC

Do you have a physical model for what you are trying? Just throwing statistical methods at a bunch of numbers doesn’t give you a meaningful output, no matter how nice it looks.
For PS to work properly, color balance must be correct at every pixel location. But then you need to figure out why the data doesn’t fit it. Maybe that is what one of the methods is doing, but I don’t see it.
Comments from others. Please.
RONC

For the special case when the illumination changes between the PS frames, you get artifacts because the frames don’t match.

To reduce this, RT already had an option to equalize the brightness of the frames based on the global green channel brightness, which was (optionally) used to calculate correction factors for all channels. That already worked quite well, but of course it didn’t take into account changes of colour temperature. The new (optional) method tries to get a bit closer to these changes in the colour temperature of the illumination. Of course this can only be an approximation and will not fit every part of the image, because, for example, the colour difference in the shadows might be different from that in parts which are directly illuminated.
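
For illustration, here is a minimal C++ sketch of such a global equalization, assuming a simple RGGB layout; all names (RawFrame, greenMean, equalizeFrames) are made up, and this is not RawTherapee’s actual code:

```cpp
// Hedged sketch, not RawTherapee's actual implementation: equalize the
// brightness of the four pixel shift frames with one global factor per
// frame, derived from the mean of the green CFA values.
#include <array>
#include <vector>

struct RawFrame {
    int width = 0, height = 0;
    std::vector<float> data;               // CFA values, row-major
    bool isGreen(int row, int col) const { // RGGB pattern assumed
        return (row + col) % 2 == 1;
    }
};

// Mean of all green photosites of one frame.
float greenMean(const RawFrame& f) {
    double sum = 0.0;
    long n = 0;
    for (int r = 0; r < f.height; ++r) {
        for (int c = 0; c < f.width; ++c) {
            if (f.isGreen(r, c)) {
                sum += f.data[r * f.width + c];
                ++n;
            }
        }
    }
    return n > 0 ? static_cast<float>(sum / n) : 1.f;
}

// Scale frames 1..3 so their global green brightness matches frame 0;
// the same factor is applied to all values of a frame.
void equalizeFrames(std::array<RawFrame, 4>& frames) {
    const float ref = greenMean(frames[0]);
    for (int i = 1; i < 4; ++i) {
        const float factor = ref / greenMean(frames[i]);
        for (float& v : frames[i].data) {
            v *= factor;
        }
    }
}
```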

Ingo,
I have two ideas for this. First, take the local median and build your scale factors from the distance of local values to the medians. To make that a spatially better fit, apply a Gaussian or boxcar filter to the median table, then compare the local values to it. Second, calculate the old method and save it. Calculate using the median and blend the two sets of weights. That would probably handle most cases. You might have to bias towards the old method. (A rough sketch of this idea follows below.)

Let me know what you think.
RONC
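
A minimal sketch of the idea described above, with a tile-based median table and hypothetical helpers (median, boxcar3x3, medianFactor, blendFactor); it illustrates the proposal only, not anything implemented in RT:

```cpp
// Sketch: local medians over tiles, smoothed with a boxcar filter, give a
// per-pixel correction factor, which is then blended with the old global
// factor (optionally biased towards the old method).
#include <algorithm>
#include <vector>

// Median of a small set of values (the copy is partially sorted).
float median(std::vector<float> v) {
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

// 3x3 boxcar smoothing of a tile table of size tw x th (edges clamped).
std::vector<float> boxcar3x3(const std::vector<float>& t, int tw, int th) {
    std::vector<float> out(t.size());
    for (int y = 0; y < th; ++y) {
        for (int x = 0; x < tw; ++x) {
            float sum = 0.f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    const int yy = std::clamp(y + dy, 0, th - 1);
                    const int xx = std::clamp(x + dx, 0, tw - 1);
                    sum += t[yy * tw + xx];
                    ++n;
                }
            }
            out[y * tw + x] = sum / n;
        }
    }
    return out;
}

// Per-pixel factor from the distance of the local value to the smoothed
// median table (one plausible reading of the proposal above).
inline float medianFactor(float localValue, float smoothedMedian) {
    return localValue > 0.f ? smoothedMedian / localValue : 1.f;
}

// Blend the median-based factor with the old global factor; bias > 0.5
// leans towards the old method, as suggested above.
inline float blendFactor(float medianFactor, float oldFactor, float bias = 0.6f) {
    return bias * oldFactor + (1.f - bias) * medianFactor;
}
```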

Another option might be to apply the old method first and follow with the median, or vice versa. They use different measures to attack different problems.

RONC

Hi,

I am trying to understand what pixel shift does.

On the Pentax link quoted above it is said that each pixel is exposed to the three colours of the Bayer matrix. This implies that the Bayer matrix is not fixed to the detector, which I had always thought would be the case. So the filter matrix is indeed shifted with respect to the detector? Is this what is happening?

Hermann-Josef

No, the matrix is fixed, but the sensor shifts by 1 px between the frames

So the statement on the Pentax page is wrong if the matrix is fixed with respect to the sensor.

So is the issue then that a given spot in the image is exposed through each of the R, G and B filters? These are then aligned perfectly to give the evident improvement in the images when pixel shift is applied. Is this the difference from just taking four consecutive images and aligning them?

Hermann-Josef

Yes, if I take multiple shots on my tripod with my D750, then each point of the scene (theoretically) gets exposed on the same pixel of the sensor.

I wouldn’t say the statement is wrong. It’s just a question of the definition of a pixel. For example, the statement:

> Each pixel of the finally combined pixel shift frames was exposed to the three colours

is true.

Good evening,

I do not agree. The pixel is not subject to definition. It is a physical item on the sensor. So the question remains: is a given pixel, i.e. a given piece of silicon, exposed to R, G and B light, or always to the same colour?

Hermann-Josef

According to your definition of a pixel ;-), of course it’s only exposed to one colour, because the matrix is fixed.

exactly

Ingo, thank you very much for the clarification! So in the end the “pixel shift” is rather similar to taking four separate exposures and then averaging them in an optimum way.

I am an astronomer and have used CCDs for more than 30 years. In the end this is very similar to our practice called “dithering”. We move the telescope between exposures of the same object by (hopefully) an integer number of pixels to avoid, for example, the object we are studying happening to lie on a dead or hot pixel.

Hermann-Josef

The term “pixel” is very much subject to definition and context. Less ambiguous terms for a single sensor element are “photosite” or “sensel”. An array of filters of a repeating color pattern covers the sensor. The pattern used on Pixel Shift-capable cameras is called the Bayer pattern. When shooting in Pixel Shift mode, a shot is taken, the sensor is moved one photosite up, another shot is taken, right, shot, down and last shot. The filter stays static relative to the sensor. The end result is that a single point of the scene exposes four photosites (R, G, B, G).
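
To make the geometry concrete, here is a toy C++ combiner, assuming the four frames have already been shifted back into registration and an RGGB layout; the shift order in dy/dx and all names are illustrative only, not RT’s code:

```cpp
// Toy combiner: at output position (r, c), frame k sampled the scene
// through the CFA colour at (r + dy[k], c + dx[k]) of an RGGB pattern,
// so the four frames cover one R, one B, and two G samples per photosite.
// The two green samples are averaged.
#include <array>
#include <vector>

enum class CfaColor { R, G, B };

// Colour of the RGGB filter at a given photosite.
CfaColor rggbAt(int row, int col) {
    if (row % 2 == 0) {
        return col % 2 == 0 ? CfaColor::R : CfaColor::G;
    }
    return col % 2 == 0 ? CfaColor::G : CfaColor::B;
}

struct Rgb { float r, g, b; };

// Full RGB at one photosite from the four registered frames.
Rgb combine(const std::array<std::vector<float>, 4>& frames,
            int width, int r, int c) {
    static const int dy[4] = {0, 1, 1, 0}; // hypothetical shift order
    static const int dx[4] = {0, 0, 1, 1};
    float red = 0.f, green = 0.f, blue = 0.f;
    int greens = 0;
    for (int k = 0; k < 4; ++k) {
        const float v = frames[k][r * width + c];
        switch (rggbAt(r + dy[k], c + dx[k])) {
            case CfaColor::R: red = v; break;
            case CfaColor::B: blue = v; break;
            case CfaColor::G: green += v; ++greens; break;
        }
    }
    return {red, green / greens, blue};
}
```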


@Morgan_Hardwood
Thanks for the clarification. You confirm what I had written above. I agree that one has to differentiate between the pixel in an image (i.e. a digital item) and the physical pixel on the sensor. What is meant usually follows from the context. For me it was important to understand that the Bayer matrix is indeed fixed with respect to the sensor.

Hermann-Josef

In the next few days I will try to make an improvement for high-ISO Pixel Shift files. Currently the standard procedure for high-ISO Pixel Shift is to use the Pixel Shift combined image for regions without motion and the demosaiced version of the selected sub-image for regions with motion.

For really high-ISO images there is another way which gives even less noise in the areas without motion:
It uses the median of the demosaiced versions of all 4 sub-images. Here’s an example (left is the standard pixel shift combination, right is the median of 4 demosaiced frames; the image is ISO 51200):

In the current version of RT Pixel Shift this can only be used for images which don’t need motion correction, and also only a few people know about it :wink: I intend to implement this method in a way that the median of 4 can be used for the parts without motion while the usual single frame is used for the parts with motion. (A rough sketch of this follows below.)

I will keep you informed here.

Ingo
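
A minimal sketch of that approach, assuming the four frames are already demosaiced and a boolean motion mask is available; names like blendMedian and median4 are made up, and the real implementation may differ (for instance in how the median of four values is defined):

```cpp
// Sketch: per pixel and per channel, take the median of the 4 demosaiced
// frames where no motion was detected; otherwise fall back to one
// user-selected demosaiced frame.
#include <algorithm>
#include <array>
#include <vector>

// Median of 4 values, here taken as the average of the two middle ones.
inline float median4(float a, float b, float c, float d) {
    std::array<float, 4> v{a, b, c, d};
    std::sort(v.begin(), v.end());
    return 0.5f * (v[1] + v[2]);
}

struct Image { std::vector<float> r, g, b; };

Image blendMedian(const std::array<Image, 4>& demosaiced,
                  const std::vector<bool>& motion, // true = motion detected
                  int selected)                    // user-chosen fallback frame
{
    const size_t n = demosaiced[0].r.size();
    Image out{std::vector<float>(n), std::vector<float>(n), std::vector<float>(n)};
    for (size_t i = 0; i < n; ++i) {
        if (motion[i]) {
            out.r[i] = demosaiced[selected].r[i];
            out.g[i] = demosaiced[selected].g[i];
            out.b[i] = demosaiced[selected].b[i];
        } else {
            out.r[i] = median4(demosaiced[0].r[i], demosaiced[1].r[i],
                               demosaiced[2].r[i], demosaiced[3].r[i]);
            out.g[i] = median4(demosaiced[0].g[i], demosaiced[1].g[i],
                               demosaiced[2].g[i], demosaiced[3].g[i]);
            out.b[i] = median4(demosaiced[0].b[i], demosaiced[1].b[i],
                               demosaiced[2].b[i], demosaiced[3].b[i]);
        }
    }
    return out;
}
```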


I made a first implementation of using the median of 4 demosaiced frames for regions without motion and one demosaiced frame of the user’s choice for regions with motion. That works, but the noise difference between regions with and without motion is about 2 stops, which makes the transitions between the regions clearly visible. I will now try to improve that transition issue (a generic sketch of one possible direction follows below)…

Ingo
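
One generic way to soften such a seam, shown purely as an illustration and not as what RT ended up doing: blur the hard motion mask into a soft weight in [0, 1] and blend the median-of-4 and single-frame results instead of switching between them:

```cpp
// Sketch: turn a binary motion mask into a soft weight via a separable
// box blur (standing in for a proper Gaussian). The caller then blends:
//   out = w * singleFrame + (1 - w) * medianOf4
#include <vector>

std::vector<float> softenMask(const std::vector<bool>& motion,
                              int width, int height, int radius) {
    std::vector<float> w(motion.size());
    for (size_t i = 0; i < motion.size(); ++i) {
        w[i] = motion[i] ? 1.f : 0.f;
    }
    std::vector<float> tmp(w.size());
    // Horizontal pass.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.f;
            int n = 0;
            for (int d = -radius; d <= radius; ++d) {
                const int xx = x + d;
                if (xx >= 0 && xx < width) { sum += w[y * width + xx]; ++n; }
            }
            tmp[y * width + x] = sum / n;
        }
    }
    // Vertical pass.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.f;
            int n = 0;
            for (int d = -radius; d <= radius; ++d) {
                const int yy = y + d;
                if (yy >= 0 && yy < height) { sum += tmp[yy * width + x]; ++n; }
            }
            w[y * width + x] = sum / n;
        }
    }
    return w;
}
```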
