How to set a monopod foot (x, y, z) point of perspective rotation in the PP3

When you take a burst of frames with a monopod and zoom in on adjacent images, in every case there is some misalignment between the images.
Leaning forward or backward causes a Pitch. Leaning left or right causes a Roll. And twisting about the monopod shaft causes a Yaw.
Comparing two frames, if landmarks on the right side of the frame are rising while landmarks on the left side are falling, and everything is shifted to the left, that suggests a roll to the left.

I am customizing the .pp3 file to set a Camera Pitch, Roll and Yaw:
[Perspective]
Method=camera_based
CameraFocalLength=28
CameraPitch=5.000000
CameraRoll=6.00000
CameraYaw=7.00000
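
Since a .pp3 is an INI-style text file, these keys can also be set in bulk from a script. Below is a minimal sketch using Python's configparser; the sidecar file name is just a placeholder and the values mirror the block above:

```python
import configparser

# Minimal sketch: set the camera-based perspective keys in a .pp3 sidecar.
# The file name is a placeholder; the keys are the ones shown above.
pp3_path = "IMG_0001.NEF.pp3"   # hypothetical sidecar file

parser = configparser.ConfigParser(interpolation=None)  # leave '%' in values alone
parser.optionxform = str        # .pp3 keys are case-sensitive, keep them as-is
parser.read(pp3_path)

if not parser.has_section("Perspective"):
    parser.add_section("Perspective")

persp = parser["Perspective"]
persp["Method"] = "camera_based"
persp["CameraFocalLength"] = "28"
persp["CameraPitch"] = "5.000000"
persp["CameraRoll"] = "6.000000"
persp["CameraYaw"] = "7.000000"

with open(pp3_path, "w") as f:
    parser.write(f, space_around_delimiters=False)  # keep the key=value style
```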

The Pitch appears to rotate about the bottom of the image.
The Roll pivots around the center of the picture.
The Yaw rotates about the vertical axis running through the center of the frame.

I want to mimic the behavior of a camera on a Monopod with the Pitch, Roll and Yaw rotating about the foot of the monopod.
The Monopod foot is usually directly below the camera, ~69" from ground level to the focal point along the vertical (Y) axis.
The horizontal X coordinate is ~ 1/2 of the image width.
The Z coordinate, positive toward the subject, is ~0.
There is a 2" ball head just below the camera, the monopod Y height is adjustable and the best point of contact may not be directly below the camera but I would start with the simple case of monopod X=ImageWidth/2, Z=0 and Y is function(FocalDistance, FOV), ~~4 * ImageHeight.

The CameraShiftHorizontal and CameraShiftVertical seem to hold 1 edge fixed and push the other side off of the screen.
The GUI → Vertical maps to the PP3 → CameraPitch
The GUI → Horizontal maps to the PP3 → CameraYaw

The [Gradient] section has CenterX and CenterY values, which are along the lines of what I am looking for:
CameraMonopodX, CameraMonopodY, CameraMonopodZ

Is there a way to set a monopod foot (x, y, z) point of rotation in 5.11 through a PP3 alteration?

I am using RawTherapee 5.11 on Linux.

This is because you have Auto-fill enabled. Disable it and the image will shift without scaling. The values in the GUI are in percent. For example, a horizontal shift of 20 will shift the image 20% of the image width.

If I understand correctly, you just want to simulate the yaw, pitch, and roll of a monopod without considering the translation? If that’s the case, the coordinate of the foot doesn’t matter. After setting the yaw, pitch, and roll, you’ll need to do some math to find the appropriate Post Correction Adjustments shift. That’s because you need to undo the shifting that the perspective correction does to lock the center of the image so that it always stays in the center. The math is complicated a bit by the fact that there is also some scaling involved. It’s scaled such that at the center of the image, the axis with the least amount of scaling has a scale factor of exactly one.

Do you need to use RawTherapee? It might be easier to use other software like Hugin.


Lawrence,
I turned the Auto-fill off and all corners of the picture are moving as expected. Thanks!

you just want to simulate the yaw, pitch, and roll of a monopod without considering the translation? If that’s the case, the coordinate of the foot doesn’t matter.

A translation of landmarks within a frame is unavoidable with every rotation and the foot position is essential to calculating each pixel’s translation caused by the rotations.
Both the Gradient and the Vignetting are handled with a center point, (CenterX, CenterY).

The general formula for rotation of a point (x, y) about a Center of Rotation point (CenterX, CenterY) is:
x_rotated = (x - CenterX) * cos(θ) - (y - CenterY) * sin(θ) + CenterX
y_rotated = (x - CenterX) * sin(θ) + (y - CenterY) * cos(θ) + CenterY
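
Written out as a small function, just to make the convention explicit (θ in degrees here; the coordinates are arbitrary pixel units, not anything RawTherapee uses internally):

```python
import math

def rotate_about(x, y, center_x, center_y, theta_deg):
    """Rotate the point (x, y) about (center_x, center_y) by theta_deg degrees."""
    theta = math.radians(theta_deg)
    dx, dy = x - center_x, y - center_y
    x_rot = dx * math.cos(theta) - dy * math.sin(theta) + center_x
    y_rot = dx * math.sin(theta) + dy * math.cos(theta) + center_y
    return x_rot, y_rot

# A 6 degree roll about the image centre vs. about a pivot well below the
# frame (a stand-in for the monopod foot); coordinates are arbitrary pixels.
print(rotate_about(1000, 0, 500, 500, 6))    # pivot at the image centre
print(rotate_about(1000, 0, 500, 2500, 6))   # pivot far below the image
```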

The (CenterX, CenterY) in RawTherapee appears to be hard-coded to the center of the image. Accepting (float) values for CenterX and CenterY, at least in the processing profile, would handle general Perspective cases, not just one special case. The above calculation is already done with hard-coded values for the center, so no extra math would be involved. And the code to parse the center coordinates is already used for Gradient and Vignetting.

HUGIN:
I have used Hugin for panorama stitching and it works well for that. It is designed not to handle parallax; instead it uses a common viewpoint between frames, the center of a virtual sphere:

“all images are shot from a common viewpoint. A common viewpoint is the only way to avoid Parallax … Images are positioned inside a virtual sphere no matter what output projection[*] is used.”

The focal point of the camera, which they call a “common viewpoint”, never moves. The Pitch, Roll and Yaw all rotate about the center of the virtual sphere.
This would be the case if a theoretical monopod foot were located at X = width/2, Y = height/2 and Z = 0, i.e. at the image center.

Hugin accommodates small “translations”:

“To take a (little) movement of the camera into account, this model has been extended with the translation parameters TrX, TrY and TrZ,”

The TrX and TrY map to the RawTherapee GUI Camera_based corrections: Horizontal and vertical shifts.
With AutoFill=false, these are simple, linear translations.

From Hugin manual Image_positioning_model (url not allowed)

By translation, I mean the translation of the camera. For example, if the monopod tilts to the right, the camera rotates with a clockwise roll and translates to the right and slightly downward. Do you want to simulate the effect of the downward-right translation too, or just the roll? Subject distance matters when simulating camera translation.
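
To put rough numbers on that translation (a back-of-the-envelope sketch, assuming only that the rig pivots rigidly about the foot and using the ~69″ figure from above):

```python
import math

# If the rig pivots rigidly about the monopod foot, a roll of theta moves the
# camera sideways by h*sin(theta) and down by h*(1 - cos(theta)).
h = 69 * 0.0254          # camera height above the foot, in metres (~69")
theta = math.radians(6)  # a 6 degree roll, matching the pp3 example above

sideways = h * math.sin(theta)
drop = h * (1 - math.cos(theta))
print(f"sideways: {sideways * 100:.1f} cm, drop: {drop * 100:.1f} cm")
# roughly 18 cm sideways and 1 cm down for a 6 degree lean
```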

The formulas apply to a rotation in 2D space, but perspective correction is more complicated than that. Geometrically, camera-based perspective correction does this:

  1. Place a virtual camera at the origin, facing forward.
  2. Place the image in front of the camera at the appropriate distance according to the field of view (calculated by the focal length and crop factor).
  3. Place a canvas with exactly the same size and position as the image.
  4. Apply the horizontal and vertical shift to the image. Keep note of where the center was in relation to the translated image.
  5. Rotate the camera and image (as one unit) about the origin using the yaw, pitch, and roll.
  6. Move the canvas, without rotating it, so that its center is located at the center of the image. Not the new center of the image, but the center noted in step 4.
  7. Project the image onto the canvas with respect to the origin. From the virtual camera’s point of view, the image and the image projected onto the canvas should look the same.
  8. Take the image on the canvas. This is the corrected image.

Ignoring the camera’s translation, what you want is the above without step 6. Step 6 is there so that the perspective correction doesn’t push the image off the canvas. Taking out step 6 is mathematically equivalent to doing all 8 steps, then making some final shifting and scaling of the image. The shifting can be done with the Post-correction Adjustment shifts. Manual scaling won’t be available until RawTherapee 5.12.
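
Here is a rough numerical sketch of steps 1 to 8 with the recentering (step 6) made optional. It is plain Python, not RawTherapee's code; the rotation order, axis conventions, angles, and test points are made up for illustration, and the shift of step 4 and the final scaling are left out:

```python
import math

def rot_matrix(yaw_deg, pitch_deg, roll_deg):
    """Rotation matrix from yaw (about y), pitch (about x), roll (about z).
    The axis conventions and multiplication order are a guess for
    illustration, not necessarily what RawTherapee uses."""
    y, p, r = (math.radians(a) for a in (yaw_deg, pitch_deg, roll_deg))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    ryaw   = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rpitch = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    rroll  = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(ryaw, matmul(rpitch, rroll))

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def correct(points, yaw, pitch, roll, recenter=True):
    """Steps 1-8 for a few image points, with step 6 optional.
    Points are (x, y) on an image plane placed at z = 1 in front of the
    camera; the shift of step 4 and the final scaling are omitted."""
    r = rot_matrix(yaw, pitch, roll)
    ccx, ccy, ccz = apply(r, [0.0, 0.0, 1.0])     # rotated image centre (step 5)
    out = []
    for x, y in points:
        px, py, pz = apply(r, [x, y, 1.0])        # rotate image point (step 5)
        if recenter:
            # Step 6: canvas moved so its centre sits at the rotated image
            # centre, then project through the origin onto that plane (step 7).
            out.append((px * ccz / pz - ccx, py * ccz / pz - ccy))
        else:
            # No recentering: canvas stays centred on the optical axis at z = 1.
            out.append((px / pz, py / pz))
    return out

corners = [(-0.4, -0.3), (0.4, -0.3), (0.4, 0.3), (-0.4, 0.3), (0.0, 0.0)]
print(correct(corners, yaw=7, pitch=5, roll=6, recenter=True))
print(correct(corners, yaw=7, pitch=5, roll=6, recenter=False))
```

With recenter=True the last point (the image centre) maps back to (0, 0); with recenter=False it drifts, and that drift is what you would otherwise compensate with the Post-Correction Adjustment shifts.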

If it proves to be useful for enough people, I can add an option to disable the automatic recentering (step 6). Then you wouldn’t need to do any shifting and scaling unless you want to simulate the camera translation too.