Color profiling with a fisheye lens?

So I’m in the process of trying to profile a camera that has horribly broken built-in DNG profiles.

A “less horrible” profile exists, but I consider it to be under a license that precludes sharing, since the profile was generated by someone who uses the corrected profile data as a selling feature for paid software (see the “Better Color Representation” section of H360 - MiSphere Converter for Android).

The camera in question (A Xiaomi Mi Sphere) has fixed fisheye lenses.

All of the profiling documentation at How to create DCP color profiles - RawPedia strongly indicates that dcamprof expects rectilinear input images.

However this camera saves dual fisheye images side-by-side - any suggestions on how to reproject to rectilinear for input to dcamprof without causing color issues that break the profile?

If I understand what I read in the docs correctly, lensfun can be used to do this conversion. I imagine you’d have to surround it with C code to do that, as I don’t know of any software that exposes that functionality.

Hugin should be able to defish your fisheye.


Crop the side-by-side fisheyes to only one side, and then distortion correct it.
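The cropping step above is easy to get wrong by an off-by-one on odd widths. As a minimal sketch (not from the thread - the function name and the sample 6912×3456 dimensions are my own assumptions, so substitute your camera’s actual raw dimensions), here is how the crop box for one half of a side-by-side dual-fisheye frame could be computed, using the (left, upper, right, lower) box convention that Pillow’s `Image.crop` expects:

```python
# Sketch: compute a crop box covering one fisheye of a side-by-side pair.
# Box convention is (left, upper, right, lower), as used by Pillow's
# Image.crop. Dimensions below are illustrative, not the Mi Sphere's
# guaranteed raw size.

def half_crop_box(width, height, side):
    """Return the box for the requested half of a side-by-side frame.

    side is "left" or "right".
    """
    half = width // 2
    if side == "left":
        return (0, 0, half, height)
    if side == "right":
        return (half, 0, width, height)
    raise ValueError("side must be 'left' or 'right'")

# With Pillow available, usage would look like:
#   img.crop(half_crop_box(img.width, img.height, "left"))
print(half_crop_box(6912, 3456, "left"))
print(half_crop_box(6912, 3456, "right"))
```

After cropping, the single fisheye can be distortion-corrected on its own.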

So, since I’ve primarily worked only with rectilinear lenses in Hugin, it was a bit of a PITA to figure things out, but I think I’ve gotten things to work with a workflow similar to what @CarVac suggests. It’s also something @Morgan_Hardwood suggested in another post asking about RT and defishing (a long time ago, found via Google).

Unfortunately it’s a major headache, because Hugin’s 16-bit display assist functionality described in the manual doesn’t seem to work very well. I see some small differences when changing settings, but overall the image is extremely dark, so it’s hard to do perspective correction. The camera’s EXIF lens data is pretty bogus, which doesn’t help - I just have to guess at the FOV, although at least for this particular purpose getting close is good enough.

Semi-OT for @Morgan_Hardwood - if dcamprof spits out:
Hue shift discontinuity between LUT entry neighbors are not allowed. Aborting.

Is that an indication I had too much specular glare? I think I need to reshoot my D50 shot - getting the ColorChecker to fill a decent percentage of the frame without casting a shadow on it, combined with shooting in winter at northern latitudes, is proving quite difficult.

Hey @Entropy512

Regarding dcamprof’s “hue discontinuities” messages, see here:
https://www.ludd.ltu.se/~torger/dcamprof.html#dcp_hue_shift_discontinuity

I don’t believe glare has anything to do with it. In fact I think there was a glare parameter or conditional warning somewhere in the output.

Regarding Hugin’s dark preview, I’m not sure exactly what you’re referring to, but:

  • if it’s the panoramic preview, then try the slow one, not OpenGL (I opened a bug report about 32-bit images being unusably dark some time ago).
  • if it’s the control point screens and similar, then play with Preferences > Control Points Editor > Curve.

And if neither of those work - file a bug report :slight_smile:

Thanks for the info. I’m still trying to figure out how to get a decent D50 shot without glare - unfortunately this camera’s WiFi preview (the ONLY preview method) is pretty poor quality, so what looked like a glare-free scenario in the preview had subtle glare once I copied the shots over. It SEEMS like I got the lowest percentage of hue-shift discontinuities in the least-glare image, but I could be wrong. After some more practice with Hugin defishing, I think I’ve got some tricks I can use on the next capture attempt. Unfortunately, living at a higher latitude in winter, combined with other factors (usually at work at noon, leaving only 2/7 days a week when I can hope for a cloudless sky, which is ALSO rare in winter), means it’s indeterminate when I’ll get another capture opportunity.

StdA was much easier once I figured out where on the shelves at Lowes I could still buy legacy incandescent non-halogen bulbs. :slight_smile:

As to StdA - do you know what alternatives have been discussed in color profiling now that many countries are banning incandescent bulbs?

From my own experience: when creating two DCPs, one from a shot taken on a cloudless summer day at midday and one from a completely overcast summer day at midday, and applying both DCPs to several test shots taken under various daylight conditions, there are no visible differences in the resulting colors. Similarly, when comparing a DCP made from a shot taken on a cloudless summer day against one from a cloudless winter day, the differences are barely visible, if present at all.

I would expect the angle of the sun to make a difference, but not a drastic one, and besides - you’re still making a DCP for the light you shoot under. It should be more than enough for personal use.

There are several “new” standard illuminants, but even if you can profile your modern light sources correctly you’re still stuck with one DCP per light type. And if we’re talking about uncontrolled lighting situations, such as a street at night, there can be several types of lights on a single street, leading to a hopeless mix of photonic goo.

The good news is that using an input profile designed under illuminant A (StdA, incandescent tungsten) on a photograph of a night scene lit by anything other than incandescent tungsten still leads to better results than using a daylight profile, so at least there’s that. But as for a proper long-term solution capable of automatically handling all modern light sources, I don’t know.

Well, I agree that profiling under StdA seems to apply pretty well to handling many other “similar” light sources - my question is, how will we profile under StdA in the future when incandescent bulbs are outright banned except for a few small niches?

So far, I think my issues with hue shift discontinuities are partly caused by scanin not registering the image very well, even though I thought I had mostly defished it. As a result, I am getting diag images that routinely show the sample areas scanin uses right at the edges of the actual color patches instead of centered on them. scanin appears to be VERY picky.

So I finally got things to profile correctly - scanin’s automatic recognition feature is REALLY fragile.

From the scanin documentation: the -F option lets you specify where in the image the fiducial marks are. These are x,y coordinates starting from the fiducial closest to the brown patch and going clockwise around the chart (if your image is rotated 180 degrees, you need to start at the lower right, for example). Using this, I was able to get a good profile of my Mi Sphere after defishing with Hugin, without any of the hue shift discontinuity errors.
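Since a 180-degree-rotated chart means every fiducial moves to its mirrored position, a small sketch may help when reworking the coordinate list. This is my own illustration, not from the thread: the helper names and sample coordinates are hypothetical, and the comma-separated x,y format matches my reading of the Argyll scanin docs for -F:

```python
# Sketch: remap fiducial pixel coordinates for a 180-degree-rotated image,
# then format them as scanin's -F argument (comma-separated x,y pairs).
# Helper names and the sample coordinates are hypothetical.

def rotate_180(points, width, height):
    """Map pixel coordinates to their positions after a 180-degree rotation."""
    return [(width - 1 - x, height - 1 - y) for x, y in points]

def format_fiducials(points):
    """Format (x, y) fiducials as a scanin -F style argument string."""
    return ",".join(f"{x},{y}" for x, y in points)

# Hypothetical fiducials on an upright 3456x2304 defished image, listed
# clockwise starting from the mark nearest the brown patch:
fiducials = [(100, 120), (3300, 110), (3310, 2200), (90, 2210)]
print(format_fiducials(rotate_180(fiducials, 3456, 2304)))
```

Applying `rotate_180` twice returns the original coordinates, which is a quick sanity check before pasting the list into the scanin command line.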
