Newb wants to sharpen non-demosaic'd imagery for photogrammetry

Hello, my first post. In fact, if this wouldn’t work in Gimp (not to rule that out), I’ll need entry-level hand-holding to point me to the right tool, preferably open source, like RawTherapee? My preprocessing workflow going into photogrammetry has been limited to Lightroom, which takes RAW input but always outputs demosaiced (deBayered) RGB. My source data, 42 MP shot under controlled lighting and such, gives RealityCapture (RC) a nice dataset for producing a high-quality model with plenty of 8K textures that look great. But in that first step, scene reconstruction, the resolution of the geometry is compromised by the demosaicing of the source imagery, so I’m curious to experiment with a couple of ideas.

The first idea is simply to present RC with the non-demosaiced imagery and see if it can handle feature detection. That input could be grayscale, with each pixel location or “sensel” simply feeding a luminance value, or it could carry the actual R, G, B color (as described in this post). I’m not sure what’s implied by the two methods, but I’d expect RC to value “more” data; I’m just uncertain whether, in Bayer-pattern form, the color is in play in the way RC’s algorithms are set up to analyze pixels.

The second idea is to apply sharpening to this non-demosaiced image to further improve the resolution of the geometry. Judging by this paper by Dr. Zhang, an inventor of a pan-sharpening tool used in processing global imagery, I’m clearly not reinventing the wheel, simply borrowing from a proven and effective technique used in a system that somewhat relates to photogrammetry, insofar as tiling and such involves feature detection in overlapping regions of related photography. The difference here is that his methods involve multispectral photography combined with panchromatic photography, the latter of which doesn’t need demosaicing and is what the pan-sharpening works from. Suffice it to say, I’m getting warmer to what I’m after, but I’m reaching the limit of my knowledge.

The link I provided in connection with that first idea should suffice to produce a 16-bit TIFF containing non-demosaiced imagery, but it’s not clear how to achieve this in Gimp. Any thoughts on that? Or any ideas on whether RawTherapee is the tool? Thanks for advising. Same goes for the concept of pan-sharpening: any thoughts on how best to approach that idea are most appreciated.

Benjy

If you want non-demosaiced output you can use dcraw, assuming your camera is supported (dcraw hasn’t been updated in a while).
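
If I remember the switches right, something like this writes the untouched sensor data as a linear 16-bit TIFF, with no demosaicing, white balance or gamma applied (running dcraw with no arguments lists the options for your build, so double-check there):

dcraw -D -4 -T photo.ARW

-D is document mode without scaling, -4 gives linear 16-bit output, and -T writes a TIFF rather than a PGM.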

I’m not sure that sharpening will improve feature detection; it never improves detail capture, only contrast for detail that was already captured.

darktable lets you bypass demosaicing and output either a monochrome image (passthrough) or one that shows the raw colors (photosite color). Not sure how close these are to what you want.

You can turn off demosaicing in RT. I’m not sure what profile setting and tiff format would be best for output…
This is a zoom on the white patch of a colorchecker…
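
That said, I think the demosaic method itself is set in the [RAW Bayer] section of a .pp3 processing profile, so a partial profile along these lines ought to select it (I’d save a profile from the GUI with the method set to None and check the exact key names rather than trust my memory):

[RAW Bayer]
Method=none

That profile can then be applied in batch from the command line, something like:

rawtherapee-cli -o output_folder -p nodemosaic.pp3 -t -b16 -c input.ARW

where -t asks for TIFF output, -b16 for 16 bits per channel, and -c marks the input file(s) and has to come last.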

I don’t think sharpening an image before demosaicing will do anything useful. Sharpening assumes that adjacent pixels will have similar values, and that isn’t true before demosaicing.

You could separate the three channels before demosaicing, sharpen each of the monochrome images, then combine them (e.g. making a DNG), then demosaic that. I don’t know if that would be useful.
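
For the separation step, ImageMagick can pull the four Bayer positions (the two greens counted separately) out of a mosaic TIFF such as the one dcraw -D writes, using its sub-pixel sample offsets. A rough sketch with made-up file names, using ImageMagick v7 (v6 would use convert instead of magick); which position is R, G or B depends on your camera’s CFA layout:

magick mosaic.tif -define sample:offset=25x25 -sample 50% pos_00.tif
magick mosaic.tif -define sample:offset=75x25 -sample 50% pos_01.tif
magick mosaic.tif -define sample:offset=25x75 -sample 50% pos_10.tif
magick mosaic.tif -define sample:offset=75x75 -sample 50% pos_11.tif

Each quarter-size image could then be sharpened on its own, e.g. magick pos_00.tif -unsharp 0x2 pos_00_sharp.tif, before recombining. Putting the channels back into a mosaic and wrapping that as a DNG is the fiddlier part.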

You might also find a plugin or plugins here to assist you.

This software does pretty much anything to images.

It’s used a lot with medical and scientific images.

And if you are playing with the raw data, you might find this blog interesting and/or useful.

https://www.odelama.com/photo/Developing-a-RAW-Photo-by-hand/

Firstly, I’m struck by how generous with their time the folks on this forum are; so many useful comments and suggestions. Thank you!

Thanks @CarVac, I don’t see Sony’s ARW files listed, so I checked out darktable; I can’t tell yet if it supports ARW, but it looks hopeful. As for sharpening and making up data where there is none, @snibgo’s idea of breaking out separate channels to sharpen individually, then bringing them back together, seems interesting. Why not test? Thanks for the link to Fiji, I’ll explore the plugins.

I’m aware that AI is being used to push resolution well beyond what’s inherent to a single image when you feed it thousands of images as input, which is what you have with photogrammetry. Not that producing a model requires thousands of photos of each spot, but so many perspectives of a common section can also be used to learn more about what’s there, like finer resolution. For now I’ll happily bite off something closer at hand, and I should report back and possibly post an A/B comparison.

For anyone interested in seeing what kind of work this relates to, check out the animations under Demo here. The source photography comes from a Sony A7R II, but what you see is all virtual cinematography with virtual lighting.

Again, many thanks for assisting a newb.

On extracting the channels, processing each, re-combining into a DNG, and debayering the DNG: my page Camera noise uses this technique for various denoising methods. That page uses dcraw, exiftool and ImageMagick in Windows BAT scripts, but any tools could be used.

For performance, a simple C program using libraw could do all the work without needing intermediate files.

Gosh, that looks pretty intense, possibly too steep a learning curve were I to implement it, but it looks really interesting. I have a decent Windows machine and have made friends with plenty of unfamiliar tech and software, but what am I honestly up against taking this on in dcraw, exiftool, and ImageMagick? Any tutorials to speed the learning curve?

Check out the link I posted to developing a raw by hand; it’s very instructional and hits on some of the same topics, e.g. getting at the Bayer data.

A slightly gentler introduction to the method is shown in Processing Bayer pixels. [EDIT: I mean, slightly gentler than my other page I linked to.]

Thank you both. I see your work is documented with full details explained, and the flowcharts and such make good sense. I’m simply unfamiliar with the command-line interface; I’ve only dealt with it once, having to compile an engine downloaded from GitHub, and I’m too used to a spoonfed workflow in GUIs. Do programs like dcraw, exiftool, and ImageMagick feature a GUI that in turn has a command-line tool icon you click, into which I’d paste the instructions your tutorials walk me through? Is there a video tutorial that illustrates basic usage, not any particular workflow here, just getting a lay of the land?

dcraw, exiftool, and ImageMagick are all command-line tools. RawTherapee and darktable are both GUI tools, but with some command-line abilities.

CLI tools have great flexibility to do whatever processing we want, in whatever order we want, and using whatever tools we want at each stage. Sadly, the flexibility comes with a cost to convenience.

Got it. I’m not afraid of pasting command lines into the prompt; I’m wondering if your nice documentation simply has me copy/pasting at each step into the cmd prompt and letting these programs run in the background. Would I see images opening and see the changes per step? Is it that level of straightforwardness, or does it assume other basic knowledge of operations?

I suppose my pages are tutorials, of a sort. I try to explain what happens at each stage, and I show each result, which is normally an image but is sometimes text. If you follow the same stages with your initial images, you should be able to get results similar to mine.

However, I don’t duplicate documentation that is elsewhere. Most CLI programs can list their own options, and most have web pages of documentation. And the tools I use have published source code, so we can see the lowest-level processing detail if we want. So I assume that interested readers can download, install and run those programs.

I don’t publish canned, black-box solutions. True, some of my scripts can be used that way, but that isn’t my goal. I am far more interested in discovering and explaining principles of image processing, with concrete examples, so interested readers can apply those principles to their own circumstances.

Above all, my pages are really notes to myself. My brain is too small to remember everything I need to know. Today I needed to solve a problem, and I had a vague memory that I had a solution somewhere. A quick search through my web pages found it.

Videos can be useful learning tools for GUI systems. They are less useful for learning CLI. They are not good reference sources. They take more effort to make than web pages, and they are much harder to maintain. So I don’t make video tutorials.

No, I get it: the advantages of the CLI, and of documenting the way you have, clearly outperform the GUI/video-tutorial route. I also take copious notes, because it’s so often months between stints in a particular area; between muscle memory fading on workflow and the quick key commands I’d otherwise have to relearn from video tutorials, I prefer my notes. Yours are impeccable, much appreciated. Am I right in thinking I’ll be able to pretty much copy/paste each section of command lines from your page, hit Enter, then watch things happen on screen, with the imagery opening in these apps and the changes showing? I just need to be realistic about what kind of learning curve I’m up against implementing your workflow.

Am I right in thinking I’ll be able to pretty much copy/paste each section of command lines from your page, hit Enter, then watch things happen on screen, with the imagery opening in these apps and the changes showing?

Not quite. The commands I show, highlighted in pale green, are Windows BAT syntax rather than Windows CMD syntax. In other words, they go in a BAT script rather than being typed in at the console. When I rebuild a page, a script copies all the commands into a BAT script and runs it, re-creating all the output images and text. And then another process checks whether the outputs have changed since the previous time.

If you want to paste them into a CMD console, change any doubled percents %% to single percent %.
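
For example, a loop over a folder of TIFFs would be written like this inside a BAT script (the magick command is just a stand-in to show the pattern):

for %%F in (*.tif) do magick %%F -unsharp 0x2 sharp_%%F

but typed directly at the CMD prompt it becomes:

for %F in (*.tif) do magick %F -unsharp 0x2 sharp_%F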