One ISO >= 3200 image and one at base ISO (both at F8 or F5.6) would be nice.
Maybe you can even include a feather (in the hope it will move a bit) to test motion correction.
Yes, I should be able to do that in the next few days, at least for sunlight. I’ll have to check if I still have any tungsten bulbs.
OK, I should be able to do those this evening.
I’m fresh out of feathers, but I did take some shots a few days ago (for this very purpose) of a tiny waterfall with foliage, rocks, and a breeze. They should be great for testing motion correction.
Also let me know (for both this and the color target shots) if you want them in RPU or filebin or both.
Great, thank you
Please upload them to filebin. @LebedevRI can tell you if he needs them on RPU too.
Yes, but there are textures and details that I doubt are the result of sharpening, e.g. the texture in the woven basket strips, or the scratch in the table to the left of the bottom-left corner of the golden object.
Do you mean something that is not in the dcraw output, or something that would not be visible in a 1-frame result? The dcraw code is straightforward: it doesn’t do anything other than read the RGB values out of the ARQ, without any interpolation
edit: except for averaging the two greens…
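Conceptually it boils down to something like this; a minimal sketch with assumed array names and frame ordering, not the actual dcraw code (which also handles black levels, scaling, and the real ARQ layout):

```c
/* Sketch: combine the 4 pixel-shift frames into one RGB image.
 * Every pixel location was captured once under each CFA color
 * (R, G, B and a second G), so no demosaicing is needed.
 * The frame-index -> color mapping is an assumption made for
 * illustration; it is not the real ARQ frame order. */
void combine_ps_frames(const float *frame[4], float *rgb,
                       int width, int height)
{
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            const int i = row * width + col;
            float r  = frame[0][i];  /* red sample   */
            float g1 = frame[1][i];  /* first green  */
            float b  = frame[2][i];  /* blue sample  */
            float g2 = frame[3][i];  /* second green */
            rgb[3 * i + 0] = r;
            rgb[3 * i + 1] = (g1 + g2) / 2.0f;  /* average the two greens */
            rgb[3 * i + 2] = b;
        }
    }
}
```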
I’m sorry, but I don’t think so, for the reason you highlighted; and the file size is just ridiculous.
Just to chime in with my experiences with Pentax pixel shift:
- Some scenes benefit more than others. I don’t quite understand it yet, but that’s my experience.
- Motion is not a big deal as long as your main subject is static. Things that move are a bit noisier and have less resolution, but your brain accepts this, IMHO.
- Sharpen with a tiny radius to reveal detail (see the sketch below). You can sharpen a lot before things fall apart. With Pentax you get pixel-sized patterns when you overdo it.
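To illustrate the tiny-radius point, here is a minimal unsharp-mask sketch (a 3×3 box blur stands in for a small Gaussian; the function name and parameters are made up for illustration, this is not RT’s or Pentax’s sharpening):

```c
/* Sketch: unsharp mask with a very small (3x3) support.
 * img/out are single-channel, width x height, row-major floats;
 * amount around 0.5..2.0 controls the sharpening strength.
 * Border pixels are left untouched for brevity. */
void sharpen_small_radius(const float *img, float *out,
                          int width, int height, float amount)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            /* cheap 3x3 box blur as a stand-in for a tiny Gaussian */
            float blur = 0.0f;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    blur += img[(y + dy) * width + (x + dx)];
            blur /= 9.0f;
            /* add back the high-pass detail */
            out[y * width + x] = img[y * width + x]
                               + amount * (img[y * width + x] - blur);
        }
    }
}
```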
I’m curious… I understand the RGGB->RGB conversion here. Can you share the code snippet that does the “magic” of combining the 4 shifted images into 1, or point me to it on GitHub? I’d like to understand how that works.
Thanks
The sony_arq_load_raw() function in @agriggio’s dcraw patch does exactly that.
The complete RT pixel shift code with motion correction is here
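Very roughly, the motion detection exploits the redundancy: on a static scene the two green samples at a pixel should agree up to noise, so a large difference hints at subject motion between frames. A sketch of the idea only, not RT’s actual (far more sophisticated) algorithm:

```c
#include <math.h>

/* Sketch of the motion-detection idea (not RT's real algorithm):
 * the two greens come from different exposures, so on a static
 * scene g1 ~= g2 up to noise; a larger difference suggests motion. */
static int is_motion(float g1, float g2, float sigma)
{
    const float k = 5.0f;  /* arbitrary illustrative threshold */
    return fabsf(g1 - g2) > k * sigma;
}

/* Where motion is flagged, fall back to a demosaiced single frame
 * instead of the pixel-shift combination for those pixels. */
```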
I want to question summing the two G values. It will bias the solution by improving the S/N ratio by √2, so this technique, which is trying to remove the Bayer pattern, will put a different pattern onto the image. I think the main benefit of having two G values is only for comparing the frames’ relative strength and position. This is noticeable on some Pentax data where the Pixel Shift program sums the two.
Pixel Shift output should have one (1) value for each color per pixel location! Pixel Shift is a no-summation process.
COMMENTS please???
RONC
It has one value for each colour per pixel location. But in the case of the greens, this one value is the average of the two available greens. That reduces noise, for example. In the RT implementation with motion correction we use the original green values (before averaging) to detect motion, but in the end the final green value for regions without motion will still be the average of the two green values. I see nothing wrong with doing that.
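For what it’s worth, assuming the two green samples carry independent noise of equal variance σ², the standard result is:

```latex
\operatorname{Var}\!\left(\frac{g_1+g_2}{2}\right)
  = \frac{\sigma^2 + \sigma^2}{4}
  = \frac{\sigma^2}{2}
  \quad\Longrightarrow\quad
  \sigma_{\bar g} = \frac{\sigma}{\sqrt{2}}
```

which is the √2 S/N improvement mentioned above; it applies to the green channel only, which is exactly the asymmetry being debated here.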
Ingo,
I think you are putting a pattern in the output by using both G values. Theoretically, the idea of PS is to have the attributes of every pixel color be the same. Summing will place “better” green values in the output based on summing. I hate to throw things away, but the output should have statistics that are ergodic in nature, rather than anomalies (pimples) that summing will cause.
Processes that follow PS benefit from locally constant statistics. When sharpening is applied, the spatial window should be consistent, or it will add other pimples.
When people nitpick PS, these things show. I think you should offer both ways, with an explanation. Somehow we had not discussed this, I guess.
Regards,
RONC
As Ingo (@heckflosse) already said, that part is right there
Doing pixel shift without any motion correction is quite straightforward, in fact. The “magic” is in the motion detection and correction part, and for that Ingo is the wizard
The only reason there are two G values is that there is no other way of getting every single color at a pixel when acquiring the PS frames. If we had RGB for the sensor rather than RGGB (Bayer pattern), we would record only three frames instead of four and have no redundancy (see the sketch below).
RONC
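To make the redundancy concrete: with an RGGB CFA and a one-pixel shift between frames, every location is covered by all four CFA positions over the sequence, so exactly one color (green) repeats. A sketch; the shift order below is an assumption, the real sequence is camera-specific:

```c
/* Sketch: which CFA color a pixel (row, col) sees in shift step k.
 * Assumes an RGGB pattern and the shift sequence
 * (0,0) -> (0,1) -> (1,1) -> (1,0); real cameras may differ. */
static const char *cfa[2][2] = { { "R", "G" },
                                 { "G", "B" } };

const char *color_at(int row, int col, int k)
{
    static const int dr[4] = { 0, 0, 1, 1 };
    static const int dc[4] = { 0, 1, 1, 0 };
    return cfa[(row + dr[k]) & 1][(col + dc[k]) & 1];
}
/* Over k = 0..3 every pixel sees R, G, B and a second G. */
```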
I don’t know what the “blessed” definition of Pixel Shift is, but the rationale of the above is simply to use all the information captured in the 4 frames…
But as you can see, the code is really simple; you can just try it out and see what happens if you pick only one of the greens. If you do, please share your findings!
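Relative to the combination sketch earlier in the thread, the change would just be the one (hypothetical) line:

```c
rgb[3 * i + 1] = g1;  /* keep one green instead of (g1 + g2) / 2.0f */
```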
I’m not disagreeing with your code. You programmed it the way you think is natural for recording four frames. We don’t have to use all of the info captured, and with PS there is redundant data which “in theory” shouldn’t be used. I usually stand on the side of the theory, but I truly understand why you coded it that way.
See my other comments please.
RONC
Well, I coded it that way because I didn’t know the “theory” of pixel shift
But I’d be interested in seeing whether it does indeed make a difference.
I think this would make sense, based on your reasoning.