Quick math questions

In general terms (i.e. for an arbitrary curve, not just spirals), assuming the curve is given by x(t) and y(t), where t is some non-dimensional parameter which starts at 0 and increases from there (in Thanatomanic’s post, he used \theta for the spiral, which is also the angle in polar coordinates), the curve length s(t) can be computed like this:

{ds \over dt} = \sqrt{\left({dx \over dt}\right)^2 + \left({dy \over dt}\right)^2}
\Rightarrow s(t) = \int^t_0 \sqrt{\left({dx \over dt}\right)^2 + \left({dy \over dt}\right)^2} \, dt

In words: As t changes by some small amount dt, x changes by {dx \over dt} dt and y changes by {dy \over dt} dt, and the length of that very small piece of curve is \sqrt{\left({dx \over dt}\right)^2 + \left({dy \over dt}\right)^2} \, dt. Integrating (adding) that up from the start of the curve gives you the length of the curve for any given t. Depending on what kind of equation you’re using, there might be an easy analytical solution, or you might have to do it numerically.
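For example, for a circle of radius r with x(t) = r\cos t and y(t) = r\sin t, the integral gives the familiar result:
s(t) = \int^t_0 \sqrt{(-r\sin\tau)^2 + (r\cos\tau)^2} \, d\tau = \int^t_0 r \, d\tau = r\,t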

If you are doing it numerically, simply counting pixels will be inaccurate, because two consecutive pixels on a curve are either directly next to each other (distance 1) or diagonally offset (distance \sqrt 2), independent of what slope your curve has. So if you want to know the curve length at some point t_p, it’s a lot more accurate to compute sample points along the curve for a large-enough(*) number n:
t_i = t_p\frac{i}{n} with i = 0, 1, 2 ... n
x_i = x(t_i) , y_i = y(t_i)
Then compute the distance between neighbouring sample points on the curve:
\Delta s_i = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}

And from this, you can compute the curve length for your point by adding up all the \Delta s values:
s(t_p) = \sum_{i=1}^n {\Delta s_i}

The smart way to implement this is to choose t_p to be at the end of the curve (however far the curve matters to you), then choose n to be large enough(*), sample all the x and y coordinates, compute all the \Delta s_i, and then compute the cumulative sums of \Delta s_i, i.e. just the first value, the sum of the first and second, the sum of the first three … and voilà, you have the curve length at every sample point.
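A minimal sketch of that scheme in C++ (my own illustration, with a placeholder curve — swap in whatever x(t) and y(t) you actually use):

#include <cmath>
#include <cstdio>
#include <vector>

// Placeholder curve (an Archimedean-style spiral with a = 2); replace with your own x(t), y(t).
double x_of(double t) { return 2.0 * t * std::cos(t); }
double y_of(double t) { return 2.0 * t * std::sin(t); }

int main() {
    const double pi = 3.14159265358979323846;
    const double t_end = 6.0 * pi;   // t_p: how far along the curve we care about
    const int n = 10000;             // "large enough" number of samples

    std::vector<double> x(n + 1), y(n + 1), s(n + 1);
    for (int i = 0; i <= n; ++i) {
        const double t = t_end * i / n;
        x[i] = x_of(t);
        y[i] = y_of(t);
    }

    // Cumulative sum of the segment lengths Delta s_i gives s at every sample point.
    s[0] = 0.0;
    for (int i = 1; i <= n; ++i) {
        const double dx = x[i] - x[i - 1];
        const double dy = y[i] - y[i - 1];
        s[i] = s[i - 1] + std::sqrt(dx * dx + dy * dy);
    }

    std::printf("curve length up to t = %g: approximately %g\n", t_end, s[n]);
    return 0;
}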

To compute normals: Since you already have all the point coordinates, and assuming that curvature does not change much between the sampled intervals, you can compute the direction of the tangent vector using central differences:
x'_{i} = 1/2 (x_{i+1} - x_{i-1})
y'_{i} = 1/2 (y_{i+1} - y_{i-1})
This tells us how the x and y coordinates are changing, and so if we plotted a line starting at (x_i + 100 x'_i , y_i + 100 y'_i) and ending at (x_i - 100 x'_i , y_i - 100 y'_i), that would be a tangent to our curve. To get a normal line, we need to turn it by 90 degrees, which is easy:
x_{\perp i} = -y'_{i}
y_{\perp i} = x'_{i}
So a line from (x_i, y_i), to (x_i + x_{\perp i} , y_i + y_{\perp i}) would be perpendicular to the curve at point i.

To draw a normal with a particular length l, you compute the length of your normal vector:
\left| \left( x_{\perp i} \atop y_{\perp i} \right) \right|= \sqrt{x_{\perp i}^2 + y_{\perp i}^2 }
and scale the normal vector accordingly (divide by its own length, multiply with the length you want):
\left( x_{\perp i} \atop y_{\perp i} \right) \frac{l} {\sqrt{x_{\perp i}^2 + y_{\perp i}^2 } }
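Continuing the C++ sketch from above (again just my illustration): given the sampled x[i] and y[i], the tangent direction, the normal direction, and a normal segment of the desired length l could be computed like this:

#include <cmath>
#include <vector>

// A line segment from (x0, y0) to (x1, y1).
struct Segment { double x0, y0, x1, y1; };

// Normal segment of length l at sample i (needs 0 < i < n so both neighbours exist).
Segment normal_at(const std::vector<double>& x, const std::vector<double>& y,
                  int i, double l) {
    // Central differences give the tangent direction (x'_i, y'_i).
    const double xp = 0.5 * (x[i + 1] - x[i - 1]);
    const double yp = 0.5 * (y[i + 1] - y[i - 1]);
    // Rotate by 90 degrees to get the normal direction.
    const double nx = -yp;
    const double ny = xp;
    // Scale the normal vector to length l.
    const double len = std::sqrt(nx * nx + ny * ny);
    return { x[i], y[i], x[i] + nx * l / len, y[i] + ny * l / len };
}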

So, at any point i with parameter t_i you know the coordinates (x_i, y_i) and the curve length s_i, and you can draw tangent and normal lines of whatever length you like.

Done!

If you are using spirals or circles, then this can all be done on paper (as Thanatomanic has shown), although using cos() and sin() is not very fast for a computer. With some other curve types (splines, for example), all the derivatives above are really easy to do by hand, too, and work out to simple polynomial functions (they look something like ax^3 + bx^2 + cx + d). In the general case, it can get messy, though…

(*) large enough: depends on how “curved” your curve is. Generally, if the distance between two samples is always less than a pixel, you’re safe, but if the curve is not bending a lot, you can get away with far fewer sample points. If your “curve” is actually a straight line, then n = 1 is already perfectly accurate.


Hi @Mister_Teatime, nice write-up, but I don’t think this solves @Reptorian’s original problem. The method you describe is very suitable for a single curve, but not for finding the points ‘in between’ the arms of the Archimedean spiral, perpendicular to the arm itself.

Edit: I need to clarify. You do describe how to obtain tangent and normal lines to a curve. The tricky thing is: where does that line stop? The value of l can probably be found, but only numerically as well, as I have shown in my post.

In fact the arc-length of the Archimedean spiral is known analytically: see e.g. Archimedes' Spiral -- from Wolfram MathWorld
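For reference, for r = a\theta (measured from \theta = 0) it works out to:
s(\theta) = \frac{a}{2}\left[\theta\sqrt{1+\theta^2} + \ln\!\left(\theta + \sqrt{1+\theta^2}\right)\right]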
But again, I don’t see how that helps to solve the actual problem.


Apologies for not having read through the whole thread, but what occurrences might those be @Mister_Teatime ?

Light transfer from scene to retina is expected to be linear with the origin at zero. Any non-linearities are introduced by non-idealities, by mistakes, or for convenience.

Ideally, zero light intensity needs to be reflected in zero values as the very first step in the raw conversion process, otherwise key operations like white balance and normalized clipping, not to mention some of the more advanced demosaicing algorithms and color transforms, become messy and often introduce mistakes.

That’s also why astrophotographers do flat fields and some of the early Nikons used dedicated optical black pixels to determine a physical BlackLevel specific to each capture, subtracting it before writing data to the raw file, as I think @NateWeatherly mentioned. That took care of non-idealities like temperature dependence, inaccurate CDS, dark current and some DSNU. The downside to the Nikon approach was that it messed up histograms near the origin, which frustrated people needing uber-accurate mean signal readings there (a tiny percentage of users). More recently, possibly when 4-T pinned photodiode configurations became common, they apparently feel comfortable in the consistency of their BlackLevels (say +/- 0.5DN throughout the ISO range) and optical black pixels are nowhere to be seen.

Yeah, I might have gotten a little carried away :wink:

I had not read the original question to include having to find the intersections of the normals with the curve. That’s of course not easily solved when using some generic function to define the curve … if, however, you pick a curve which does not wrap around (within the bounds of the picture), you can just continue along the normals until you meet the edge, and get any type of colour gradient along any kind of curve.

I think the reply thread got messed up (or I did not hit reply at the post I was replying to … unthinkable!), so you can’t see what I was replying to.
@snibgo mentioned a raw file which did contain zero values, and my statement was in response to the occurrence of such zero values in raw files – if and when they happen. I can’t, of course, comment on how they got there or on whether (and what) something went wrong to create them.

If I get it right, then the only way to get actually zero light on any pixel of your sensor is by preventing any light at all from getting to the sensor (keep the shutter closed, or cover the lens and OVF), so any normal exposure should not have pixels which did not even get a single stray photon – this means the ideal raw file (of an actual scene) should not contain black pixels, even after black level subtraction.

Thanks for your explanations on dark currents and related issues! Following that, most cameras should never have black pixels (except if they’re dead, maybe?), but you might still get an image with applied black point subtraction, and depending on how that was done, some pixels could be “corrected” to zero – which would mean that the subtracted black levels would have been off.

I’m not doing astrophotography, but I did work with flatfields for image analysis in biology once, and what they do is:

  • correct for vignetting (i.e. if my scene consisted of perfectly even light, what would the camera produce?)
  • in the case I used it, also quantify the uneven background illumination of the light table we were using.

Following your explanations, I think you were referring to dark frames, not flatfields? I understand that dark frames are trying to capture the values introduced by the sensor and electronics of the camera itself in the absence of light. So I imagine (without practical experience) that if a dark frame was not entirely accurate (e.g. due to some minor shot-to-shot variation in the dark frame pattern, or some implicit simplified assumptions about the system), some particularly dark pixels from the scene frame could be over-corrected to zero (that is: to below zero, then clipped to zero).

Anyway, I started out with math, and ended up “getting” dark frames, so I’m feeling good now :slight_smile:

I agree until your final comment, which doesn’t follow from your previous:

– this means the ideal raw file (of an actual scene) should not contain black pixels, even after black level subtraction.

A camera can only record integers between zero and some maximum, eg 16383. Suppose the camera writes values that are proportional to the light energy, so zero light records a value of zero. The scene might have a contrast of 16 stops from darkest to lightest. So the darkest shadow should be recorded at 16383 / 2^{16} \approx 0.25, a value between zero and one, and closer to zero than one.

What value should the camera record here? Surely the correct, most accurate answer, is “zero”.

Similarly for lower-contrast scenes if the camera or photographer doesn’t ETTR.

“Zero” may cause problems in some algorithms, and they may choose to clamp that to a small positive value. That’s a different matter. “Zero” is a valid value from a camera, even when some light energy was received.

Just for giggles, I opened a few raws and looked at their minimum channel values, unadorned by processing. My Nikon D7000s all had 0 mins for all three channels, all the time. My Z 6 raws, however, have consistent channel mins of about 983, including a dark frame I shot for the purpose. Indeed, the Z 6 NEF metadata has a black field, value 1008, and I subtract that from the image or I get weirdness in the dark areas. Nikon D7000 has no such value in the metadata.

With apologies to the OP for the OT diversion, sensors are rough photoelectron (e-) counters. A 14-bit camera like the above-mentioned Z 6 can count a maximum of about 100k e- at base ISO, which means that it takes about 6.5 e- for it to tick up one raw value at the ADC (100k/(16383 - BlackLevel of 1008)), which means that ideally any perfectly valid signal of about 3 e- or less should be recorded as 1008 (minus the BlackLevel = 0 DN). 4 e-, for instance, will be clocked at 1009 (or 1 DN after BL subtraction).

However, the downstream electronics superimposes read noise on the output of the photodiode. While the signal is Poisson, so it can never be negative, read noise is hopefully Gaussian with a mean of zero, so it can push the signal to be ‘negative’, meaning less than the BlackLevel (1008 above). However, because of the symmetry of the normal distribution, if you take the average of, say, a 100x100 pixel area uniformly lit by a 3 e- mean signal, the result will be dithered and produce an accurate mean reading of 1008.46 +/- 0.01, or 0.46 DN after BlackLevel subtraction.

Glenn’s D7000, on the other hand, can’t pull this trick off, because it did not carry the ‘negative’ values below the BlackLevel and therefore crushed the symmetry of the bell curve by truncating it at zero. Its raw values near zero illumination will therefore be biased to the right, hence inaccurate. Which is why Nikon no longer subtracts the BlackLevel before writing data to the raw file.
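To put rough numbers on that, here is a toy simulation (my own sketch, with assumed values: a 3 e- mean signal, ~6.5 e-/DN, 5 e- of Gaussian read noise, BlackLevel 1008) comparing the mean over a 100x100 patch with and without truncation at the BlackLevel:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::poisson_distribution<int> signal(3.0);             // 3 e- mean photoelectron count
    std::normal_distribution<double> read_noise(0.0, 5.0);  // assumed ~5 e- of Gaussian read noise
    const double gain = 6.5;                                 // e- per DN (from the estimate above)
    const int black_level = 1008;
    const int npix = 100 * 100;

    double mean_kept = 0.0, mean_truncated = 0.0;
    for (int i = 0; i < npix; ++i) {
        const double electrons = signal(rng) + read_noise(rng);          // can dip below zero
        const int dn = black_level + (int)std::lround(electrons / gain); // quantized raw value
        mean_kept += dn - black_level;                                   // values below BL kept
        mean_truncated += std::max(dn, black_level) - black_level;       // values below BL clipped
    }
    std::printf("mean, below-BL values kept:      %.3f DN (expect about %.2f)\n",
                mean_kept / npix, 3.0 / gain);
    std::printf("mean, below-BL values truncated: %.3f DN (biased high)\n",
                mean_truncated / npix);
    return 0;
}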

HTH
Jack
PS some detail here Photographic Sensor Simulation | Strolls with my Dog


Thanks, @JackH. That helps my understanding.

@ggbutcher: Yes, some cameras don’t record zero at all, even in zero light.

If I had such a camera, I would investigate along the lines of Linear camera raw to understand what transformation is needed to get linear values, by which I mean values that are proportional to light energy, NOT plus a constant.

The required transformation may be a simple subtraction of a constant such as 1008, but I would want to test that for myself.
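One quick sanity check (my own suggestion, not from the linked article): shoot the same uniform patch at exposure times E and 2E and take the mean raw value of the same region in both. If the response really is linear apart from a constant offset, v = g\,E + b, then
v_1 = g E + b, \quad v_2 = 2 g E + b \;\Rightarrow\; b = 2 v_1 - v_2
and b should come out consistently near the metadata value (1008 here) across several exposure pairs; if it wanders, something more than a constant offset is going on.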

Further reading: Noise, Dynamic Range and Bit Depth in Digital SLRs.

Keeping in mind that this is an issue that would be relevant for an exceedingly small percentage of photographers (e.g. astro shooters, for whom this is not an ideal camera), it can be dealt with theoretically by taking into consideration the nature of the truncated Gaussian distribution near zero light. Curve fitting and LUTs can be made to work also.

Yes, makes sense. There will be some minuscule amount of light which a sensor site could receive without ticking up the counter, though following JackH’s explanation above, that’s a very small amount, so I would suspect that if the camera were a perfect light meter, it should have some amount of light to report at each pixel for almost any “real” scene.

Well, if the camera is 14-bit and the scene has a dynamic range of more than 14 stops, then the camera will either record some saturation, or some zero light, or both.

For me, this always happens at music performances (EDIT: I mean, with stage lighting and otherwise dark), and street scenes at dusk or night. It often happens with landscapes that include bright sky. Outdoor daylight scenes that don’t include the sky are usually within 10 stops DR, so there is no problem.

There should be a healthy dose of probability going on, so it isn’t so pat at the extremities. In general, it is advisable to take images in the ranges and conditions in which the camera is supposed to perform most reliably, the rest is not guaranteed. Remember that consumer cameras are just that. Well, no instrument can observe and record phenomena perfectly due to a number of reasons, some natural and others by design. I won’t go into the details, mostly because I forgot my education. :blush: That said, this discussion about “0” is still valuable.

The quantum nature of the raw material of general photography (light) makes it a probabilistic process by definition, our eyes expect it so, nothing to worry about that. The rest can be modeled fairly accurately, see the earlier link for a simple intuition. If one understands the basics one knows what to expect and how to get the best out of one’s kit.


I know my comment isn’t helpful on a practical level. What I am saying is that cameras are black boxes. The data isn’t untouched so to speak. But due to you all being persistent nerds, it is easy to overcome that by analyzing the output, examining the hardware and firmware, and making profiles and other adjustments. And as you say, fairly accurate is good enough.

“Quick math questions” turns into “How do noise and 0 work in digital photography”. Apparently this is still a photography forum at the core :smiley:

Detours are fun. We substituted “quick” with “light”, which is both quick and light. :rofl:

The original purpose was for us to pose questions related to math and coding, not so much theory. My earliest questions after joining the forum were concerned with unusual values (NaN, inf, zero, imaginary).

Whenever I look into changing illuminants or colorspaces of an image, my head explodes, because the tutorials out there aren’t as straightforward as I hope. In particular, I am interested in how we arrive at these matrices, and would like an easy-to-follow, step-by-step explanation so that I can make more of them.

From https://github.com/Beep6581/RawTherapee/blob/dev/rtengine/iccmatrices.h; e.g., how do we get to this?

constexpr double xyz_rec2020[3][3] = {
    {0.6734241,  0.1656411,  0.1251286},
    {0.2790177,  0.6753402,  0.0456377},
    { -0.0019300,  0.0299784, 0.7973330}
};

PS Future: once clarified, I or someone could make G’MIC more capable in this area.


I found these links:
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html

https://www.ryanjuckett.com/rgb-color-space-conversion/

https://engineering.purdue.edu/~bouman/ece637/notes/pdf/ColorSpaces.pdf

https://physics.stackexchange.com/questions/487763/how-are-the-matrices-for-the-rgb-to-from-cie-xyz-conversions-generated

https://mina86.com/2019/srgb-xyz-matrix/

EDIT: these two are the clearest to me
https://www.ryanjuckett.com/rgb-color-space-conversion/

https://engineering.purdue.edu/~bouman/ece637/notes/pdf/ColorSpaces.pdf
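To make the procedure on the Lindbloom page above concrete, here is a minimal sketch (my own, not taken from RawTherapee): build the matrix from the Rec.2020 primaries and the D65 white point by stacking the primaries’ XYZ as columns and scaling the columns so that RGB = (1,1,1) maps to the white point. Note that the iccmatrices.h values quoted earlier appear to be chromatically adapted to D50 (their rows sum to roughly the D50 white point), so reproducing them exactly would need an additional Bradford adaptation step, which Lindbloom also describes.

#include <cstdio>

struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

// Chromaticity (x, y) with Y = 1 -> XYZ.
static Vec3 xy_to_XYZ(double x, double y) {
    return { x / y, 1.0, (1.0 - x - y) / y };
}

static Vec3 mul(const Mat3& a, const Vec3& v) {
    return { a.m[0][0] * v.x + a.m[0][1] * v.y + a.m[0][2] * v.z,
             a.m[1][0] * v.x + a.m[1][1] * v.y + a.m[1][2] * v.z,
             a.m[2][0] * v.x + a.m[2][1] * v.y + a.m[2][2] * v.z };
}

// Inverse of a 3x3 matrix via the adjugate.
static Mat3 inverse(const Mat3& a) {
    const double (*m)[3] = a.m;
    const double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    Mat3 r;
    r.m[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    r.m[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    r.m[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    r.m[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    r.m[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    r.m[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    r.m[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    r.m[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    r.m[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
    return r;
}

int main() {
    // Rec.2020 primaries and D65 white point (chromaticity coordinates).
    const Vec3 R = xy_to_XYZ(0.708, 0.292);
    const Vec3 G = xy_to_XYZ(0.170, 0.797);
    const Vec3 B = xy_to_XYZ(0.131, 0.046);
    const Vec3 W = xy_to_XYZ(0.3127, 0.3290);

    // Columns of P are the (so far unscaled) XYZ of the primaries.
    Mat3 P = {{ { R.x, G.x, B.x },
                { R.y, G.y, B.y },
                { R.z, G.z, B.z } }};

    // Scale each column so that RGB = (1, 1, 1) maps to the white point.
    const Vec3 S = mul(inverse(P), W);
    const double s[3] = { S.x, S.y, S.z };
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            P.m[row][col] *= s[col];

    for (int row = 0; row < 3; ++row)
        std::printf("%10.7f  %10.7f  %10.7f\n", P.m[row][0], P.m[row][1], P.m[row][2]);
    return 0;
}

The same routine works for any RGB space: swap in its primaries and white point.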