A7RIII pixel shift and RT

Technically that may be the only reason. Practically, it allows motion detection based on differences in the G values, and it allows reducing noise in high ISO shots (and even in the dark regions of low ISO shots).
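To make that concrete, here is a minimal sketch (illustrative only, not RT's actual algorithm) of motion detection from the two green samples: a pixel is flagged when the greens recorded at different times disagree by more than a relative threshold.

```cpp
// Sketch only: flag a pixel as "motion" when the two green samples,
// recorded at different times, disagree by more than a relative threshold.
#include <cmath>
#include <cstddef>
#include <vector>

// g1, g2: the two green frames on the same pixel grid.
// 'threshold' is a hypothetical tuning parameter, e.g. 0.02 (2 %).
std::vector<bool> motionMask(const std::vector<float>& g1,
                             const std::vector<float>& g2,
                             float threshold)
{
    std::vector<bool> mask(g1.size());
    for (std::size_t i = 0; i < g1.size(); ++i) {
        const float avg = 0.5f * (g1[i] + g2[i]) + 1e-6f; // avoid division by zero
        mask[i] = std::fabs(g1[i] - g2[i]) / avg > threshold;
    }
    return mask;
}
```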

Practically, we have 4 frames shot at different times. The first frame gives green values for half of the pixels, the second frame gives green values for the other half of the pixels, the third frame… the fourth frame…
Two frames would be enough to get green values for all pixels, do you agree?
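For illustration, assuming an RGGB pattern and the one-pixel down-left-up shift sequence mentioned later in the thread (the exact order is camera dependent), the green coverage works out like this:

```cpp
// Illustration only, under the assumptions stated above. On an RGGB sensor
// the green sites of the unshifted frame are the pixels where (row + col)
// is odd; a one-pixel shift moves the pattern so green lands on the other
// half -- which is why two frames already give a green value everywhere.
int greenFrameFor(int row, int col)
{
    return ((row + col) & 1) ? 0 : 1; // 0/1: the first pair of frames
}
```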

If we use only one green value per pixel, we have to decide which frame to use. Does that single value fit the red and blue values taken from the other frames better?

Missed that part. That’s an option and makes sense for experienced users, but most users will use the automatic mode…

@heckflosse do you think the double sampling could be the cause of the patterns occasionally visible when sharpening? @rechmbrs description does seem to match what I’m seeing.

@rechmbrs are your comments based on theory or observation?

Thanks for the response.

We need to think, with this kind of (non-cosmetic) program, about what happens in the subsequent processing. Many people will never see the benefit of PS at all. To me, if we didn’t have what is called the Movement Problem, we should shoot pixel shift always. The movement compensation thing is much more complicated than most realize. It is not only objects moving but also changes in illumination and other things.

Regards,
RONC

I’ve been out of pixel shift development for some time, until this issue came up. Can you provide an example of those occasionally visible patterns? I will test @rechmbrs’ approach then.

Pimples in PS are not only theoretical but sometimes show up after sharpening. It gets really hard to separate cause and effect with such small changes, but it does add to the final output.

I’m not advocating just tossing the extra G (I have no idea how to decide which one is extra); just don’t sum the two Gs. Use all of the information for deciding what to do, but don’t sum both Gs. In building a model image for analysis I wouldn’t use it either.

Sharpening is a problem item for me, as there is nothing that tells me how good or bad it is. We need a diagnostic for sharpening quality.

Regards,
RONC

@rechmbrs

Do you have an answer to my question?

Perhaps this one. Note that it’s extremely sharpened to emphasize the effect, and I have no idea whether this is pixel shift or sharpening related. To my eye there are horizontal bands visible at extreme zoom.

DNG, pp3 and exported tiff can be found at: Filebin | z2e8ygxzb7idszy3

You and everyone else is free to use the file as you see fit.

Just noticed the pixel shift icon on my ps files. Nice!

edit: an overview might be nice, just because

(screenshot: 2017-12-12-215110_580x777_scrot)

My answer is I don’t have an answer.

I would expect we could do an analysis of each pair and choose the G frame
with the narrowest histogram of the two as best. If we had a different
pattern on the sensor, that would help.
Let’s discuss some more, and see my other responses in this thread.
RONC
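One way to read that suggestion (my interpretation, with hypothetical names) is to use the spread of each green frame's values as a cheap histogram-width proxy and keep the tighter one:

```cpp
// Sketch of the "narrowest histogram" idea above, using the standard
// deviation as a proxy for histogram width. Assumes non-empty buffers.
#include <cmath>
#include <vector>

float stddevOf(const std::vector<float>& v)
{
    double sum = 0.0, sum2 = 0.0;
    for (float x : v) { sum += x; sum2 += double(x) * x; }
    const double mean = sum / v.size();
    return float(std::sqrt(sum2 / v.size() - mean * mean));
}

// Returns 0 if the first green frame looks "tighter", 1 otherwise.
int pickNarrowerGreen(const std::vector<float>& g1, const std::vector<float>& g2)
{
    return stddevOf(g1) <= stddevOf(g2) ? 0 : 1;
}
```

This could just as well run per region instead of per frame; the thread leaves that open.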

What is the layout of the values coming from the ARQ file via read_shorts()? I guess I was expecting to see something more complicated, like (for example) averaging the four blue values coming from the four base images to get the blue value of a pixel.

Thanks

Each pixel has 4 shorts for Red, Green, Green, Blue. So samples[0] is red, samples[1] and samples[2] are green (see the discussion above about averaging vs taking a single one…), and samples[3] is blue. Each comes from a different frame of the 4 ARWs.
So, you don’t need to interpolate anything, just place the values in the image array so that dcraw can then continue with its usual processing (scaling according to black and white level, white balancing, color space conversion, etc.)
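A minimal sketch of that layout (names are illustrative, not dcraw's), with the two greens averaged as one option from the discussion above:

```cpp
// Sketch: the ARQ stores, per pixel, four shorts in R, G, G, B order,
// each taken from a different one of the four ARW frames. 'raw' stands in
// for the buffer filled by read_shorts(); scaling, white balance, etc.
// would follow in the usual dcraw pipeline.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGBImage { std::vector<float> r, g, b; };

RGBImage unpackArq(const std::vector<uint16_t>& raw, int width, int height)
{
    RGBImage img;
    const std::size_t n = std::size_t(width) * height;
    img.r.resize(n);
    img.g.resize(n);
    img.b.resize(n);

    for (std::size_t p = 0; p < n; ++p) {
        const uint16_t* s = &raw[p * 4]; // samples[0..3] = R, G, G, B
        img.r[p] = s[0];
        img.g[p] = 0.5f * (s[1] + s[2]); // or pick one; see the discussion above
        img.b[p] = s[3];
    }
    return img;
}
```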


Very interesting, but I have no idea.
A suggestion, to understand:
Get the first pixel shift frame, the PS output, and the PS plus sharpening files.
Subtract the single frame from the PS and apply levels so we can see deeper
into the difference. Do the same with PS and PS + sharpening. View them
separately and then together if nothing hits you about what is up.
My brick house has bricks with vertical ridges like what seems to be
there. Look hard at the single frame too. Maybe PS really is working here and
sharpening is adding to it.
I apologize for rambling.
RONC
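That diagnostic is easy to sketch (buffers and the stretch factor are hypothetical): subtract the two renderings and stretch the residual around mid-grey so small differences become visible.

```cpp
// Sketch of the difference-and-levels diagnostic described above.
#include <cmath>
#include <cstddef>
#include <vector>

// a, b: same-size grayscale buffers normalized to [0,1].
// gain: hypothetical stretch factor, e.g. 20, centering the residual at 0.5.
std::vector<float> stretchedDifference(const std::vector<float>& a,
                                       const std::vector<float>& b,
                                       float gain)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        const float d = 0.5f + gain * (a[i] - b[i]);
        out[i] = std::fmin(1.f, std::fmax(0.f, d)); // clamp to displayable range
    }
    return out;
}
```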

Haha, now I’m really confused. :slight_smile:

If the data is interleaved as you say, then I would expect the ARQ file to be the same length as a single ARW file. Yet the ARQ file is four times larger.

Does the Pentax lay out the data as you describe? I’m concerned that Sony is doing it differently.

Anyway, thanks for your responses. I’m new to PS and RT as of a few days ago…

Well, an ARW only contains one value per pixel (either R, G or B), and you have to reconstruct the others by demosaicing… see Demosaicing - Wikipedia

Because it has 4 values per pixel instead of only one for a single Bayer raw.

Sony made it much simpler than Pentax as far as the format goes. Is every pixel
sorted so the program has no shifting to do?

RONC

If you don’t want to do motion correction, yes. But RT breaks it into the 4 frames again, and then shifts them back.
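A sketch of that split (illustrative, not RT's actual code): pull the interleaved ARQ data apart into four single-channel planes again. Note the simplification: which plane corresponds to which physical frame depends on the Bayer phase at each pixel, which this sketch ignores.

```cpp
// Sketch only: de-interleave the 4 shorts per pixel back into 4 planes so
// they can be compared (and shifted back) for motion correction.
#include <cstddef>
#include <cstdint>
#include <vector>

// raw: interleaved shorts, 4 per pixel (R, G, G, B).
std::vector<std::vector<uint16_t>> splitPlanes(const std::vector<uint16_t>& raw,
                                               int width, int height)
{
    const std::size_t n = std::size_t(width) * height;
    std::vector<std::vector<uint16_t>> planes(4, std::vector<uint16_t>(n));
    for (std::size_t p = 0; p < n; ++p)
        for (int s = 0; s < 4; ++s)
            planes[s][p] = raw[p * 4 + s]; // one plane per sample slot
    return planes;
}
```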

Ah OK, thanks. I was thinking of a pixel as a 2x2 set of sensor sites, not 1x1. :slight_smile: Now it makes sense.


I had a look at your example. I think the artifacts are caused by the extreme sharpening settings.
I tried with the settings from the bundled PS No Motion profile. Imho that gives more pleasing results.

@heckflosse @nosle @ron
The sharpening settings are simply enhancing the underlying differences from frame to frame, making them more visible.

The principal reason is the differences that exist between frames … in this case it looks like it is not motion but light intensity differences … a lot of the checkerboard artefacts disappear if we apply the “Equalize brightness of frames” option :wink:
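A minimal sketch of what an “Equalize brightness of frames” step plausibly does (my reading, not RT’s exact code): scale each frame so its mean matches a reference frame, compensating illumination flicker between the four exposures.

```cpp
// Sketch only: per-frame gain so every frame's mean matches the reference.
#include <vector>

void equalizeBrightness(std::vector<std::vector<float>>& frames, int reference)
{
    double refMean = 0.0;
    for (float v : frames[reference]) refMean += v;
    refMean /= frames[reference].size();

    for (auto& f : frames) {
        double mean = 0.0;
        for (float v : f) mean += v;
        mean /= f.size();
        const float scale = float(refMean / mean);
        for (float& v : f) v *= scale; // match the reference exposure
    }
}
```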

Something that very few users realise is how difficult it is to achieve the perfect pixel shift shot (no movement, no illumination changes between frames).
Even the pixel shift mechanism looks to be inaccurate at times … motion is detected at edges at specific angles only … just as if the sensor movement was not always exactly 1 pixel moving down-left-up :wink:

A characteristic example is DPReview’s ISO 100 studio sample for the Pentax K-1 … the one under “daylight simulation” suffers from checkerboarding at the resolution trumpet, while the same shot with “LowLight” (tungsten illumination and a much longer shutter time) is perfect … exactly because the long exposure averages out the fluctuations of the illumination …