Image processing: how to get the best from the original signal without adding fake information?

Hello, bit of an odd question here.

When I take pictures, I’d like to capture things as close to reality as possible.
Hence, while I value the multitude of neural-network tools that pop up here and there, I would rather have a smaller, grainier image than an upscaled one with made-up information in it.

I’d like to know which tools can be used to get as close as possible to the original signal without adding bogus information, as well as where you personally draw the line.

I think deconvolution is probably one of the tools that shouldn’t add fake information, as is unsharp mask, but I’d like to hear from more experienced people: what do you use, what do you think about this, and what does your workflow look like?
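To be clear about what I mean by unsharp mask not inventing detail: my mental model is that it only amplifies local contrast already present in the data. A minimal sketch, assuming a single-channel float image scaled to [0, 1] (the function name and parameter values are just placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.5, amount=0.6):
    # Subtract a blurred copy to isolate local contrast, then add a
    # scaled portion of it back. Nothing is invented; existing edges
    # are simply amplified (and noise along with them).
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)
```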

Thank you

2 Likes

Hi @zaknu, I am not sure I fully understand your question here. Are you trying to achieve upscaling, or sharpening, or what? I use the diffuse or sharpen module found in darktable to sharpen my image without creating artefacts.

I would personally dispense with the notion that you can capture “reality.” As soon as you frame something up you’ve broken from reality, twice over when you click the shutter.

Photography can, and often is, realistic, but it is not reality.

7 Likes

FWIW … I am not interested in trying to recreate exactly ‘what the camera saw’. I want to find something in the raw data that persuaded me to push the button in the first place. I run into a great deal of opposition from critics and judges who feel that I have been ‘over-expressive’ in my interpretation … the colors may be too ‘pushy’ or it’s not natural enough.
Many/most of the images that I see at our local camera club are the product of Photoshop and include painted layers and other (what I would call) appalling processes.
darktable allows us to freely create within the bounds of our raw data …
Two of my recent images are below.

[two images]
I do not think that we can ever emulate nature and I have no intention of even trying. Photography is a creative process, for some of us, and I hope it stays that way.

Welcome to the forum. Good place to post that question…

As @paperdigits pointed out, you depart from reality the second you take the capture. For instance, colors in the scene aren’t “RGB”, they’re a rich mix of spectra, and all you walk away from the scene with are those oh-so-coarse triples.

So, the point is, you need to take already-damaged data and do more damage to it to make a rendition. Some of that “damage” may be “fake data” - resizing is a very good example of either consolidating pixels for downsize or making up new ones for upsize. You need to independently slew the individual channels to make a white thing “white”, owing to the camera’s biased spectral response. The single most important thing you can do to control this is to understand each operation in both its effect and implications.
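To make the channel slewing concrete: white balance is just an independent scale of each channel. A minimal sketch, assuming an (H, W, 3) float array; the multipliers here are placeholders, the real ones come from camera metadata or a gray-card measurement:

```python
import numpy as np

# Placeholder multipliers; real values come from the camera metadata
# or a measurement of a neutral patch in the scene.
wb_multipliers = np.array([2.1, 1.0, 1.6])

def apply_white_balance(raw_rgb):
    # Scale each channel independently so a neutral object ends up
    # with equal R, G and B values despite the sensor's biased
    # spectral response.
    return raw_rgb * wb_multipliers
```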

Your example, sharpening, is one that can be used for different objectives. Early on, it can be used to correct/mitigate capture artifacts; later, it can be used to re-introduce acutance lost in a downsize. Either way, it’s damaging the original data; you just need to understand its behavior and implications. Me, I don’t do capture sharpening for my camera/lenses (I might need to, I just don’t see the need in my small renditions). Most of my renditions are to smaller sizes for web/computer viewing, so I do apply just the minimum amount of a deconvolution sharpen to re-introduce acutance.
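For the curious, a deconvolution sharpen is in the family of things like Richardson-Lucy. Here’s a bare-bones sketch, assuming a known 2-D point-spread function (which in practice you have to estimate, and the result is only as honest as that estimate):

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, iterations=10):
    # Classic Richardson-Lucy: iteratively redistribute intensity
    # according to an assumed point-spread function, rather than
    # inventing detail from nothing.
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= convolve(ratio, psf_mirror)
    return estimate
```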

I aim for something similar to your objective. So, I shoot for “colorimetric” color, then a tone distribution that just “looks right”. What we think we see in the scene fools us, more so in tone than color (unless we have a color perception deficiency). To that end, I pay close attention to my camera color profile with respect to color. Tone, I work that mainly to mitigate the decision to expose for highlight preservation. Each scene is different in that regard.

So, I think of it this way: 1) capture good data; 2) minimize damage in post by understanding the behaviors and implications of each operation applied going from capture to rendition.

4 Likes

It’s tough. Reality has three dimensions, and stretches infinitely in all directions. Yet we squeeze it into a rectangular frame, project it into two dimensions, and limit its dynamic range to just reflected light.

That leaves us with a rendition, to steal @ggbutcher’s term. We can hope to evoke a sense of what reality was like, through careful composition and post processing. But we’ll always leave out more of reality than we can include.

Personally, I have given up on strict realism in photography. I can show a crowded place as empty and serene, by catching the right moment of capture. I’ll even occasionally edit out the odd object if it detracts from the image. But I still try to express my real feelings of the moment, which mostly means I won’t add anything to the scene that wasn’t there, for whatever that’s worth.

2 Likes

Not to mention that everyone perceives color differently. Reality from person to person is not the same.

3 Likes

While I personally don’t abstract as much as your examples, that’s still what I’m after: to find and communicate what caused the (sometimes visceral and instinctive) reaction to what I saw. It’s difficult to find and try to isolate, though, much less communicate. Usually my attempts are rooted in reality, but they’re usually enhanced in some way.

2 Likes

I may not know ‘reality,’ but I know what I like.

3 Likes

I personally like to do “photography”, that is, capturing an image with a camera and not “painting” it. But I have to develop the image, because the raw data are not an image by themselves. So demosaicing is necessary, and sharpening and adapting brightness and saturation are generally useful. Also, the dynamic range of my camera is much higher than that of prints or screens, so this also needs adaptation.
After that, I might add an “artistic” touch but restrict myself (most times), so that the result looks halfway realistic.
So, no sky replacement for me, but I do use drawn masks to edit my images.

1 Like

Welcome to the forum, @zaknu.

Colour digital cameras mostly capture “mosaic” data, where each sensor pixel records only one colour channel. Software interpolates values for the other two colour channels. Is this “fake, bogus, made-up information”? Yes, it is. It’s a reasonable guess. If you don’t like this, you can downsample instead of demosaicing, as in the sketch below.
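To illustrate the downsampling alternative: a “superpixel” decode collapses each 2×2 quad into one RGB pixel at half size, so every output value comes from a real sensor sample. A minimal sketch, assuming an RGGB mosaic in a float array:

```python
import numpy as np

def superpixel_rgb(bayer):
    # Collapse each 2x2 RGGB quad into a single RGB pixel at half
    # the resolution. Every output value comes from a real sensor
    # sample (the two greens are averaged); nothing is interpolated.
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])
```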

Cameras and the human visual system are different to each other, and neither captures objective “reality”.

Sometimes deviating as little as possible from objective reality is important. For example, astrophotography, police crime scene photos.

Sometimes, we are more concerned with subjective reality: we want to make a photograph that somehow re-creates what we felt when we viewed the original scene. That is mostly what I do. And if that means cropping or dodging or burning or shifting tones or colours, that is fine with me. I rarely paint out objects or paint them in, but it does sometimes happen.

3 Likes

This topic has appeared on here before, with very similar replies.

I still find most of the replies absurd, as if there were nothing between the actual situation, objectively observed by a god-like, viewpoint-less entity, and a complete fabrication. I can only see replies pointing to the impossibility of perfect capture as attempts at validating a personal preference for making images that look cool regardless of their relation to reality.

That the steps of capturing and displaying an image are technically flawed, that framing involves choosing a view, and that lenses bend light are not meaningful departures from reality as far as photography is concerned.

The arguments should be artistic, but since “scientific” ones are used, I think it might be worth framing this as a question of tolerance. All correct measurements are “wrong” because they are approximations at the correct level of accuracy. Determining what tolerance is required is crucial; going on about errors irrelevant to the question being researched is itself a serious error.

3 Likes

A couple of years back I tried Topaz AI denoise, and it sometimes decided to add things like hair or a beard where there was absolutely none (on a noisy shadow on the cheek of a six-year-old).

That has been one of the only big offenders I have come across, trying to create content where there was none.

Other AI tools like DxO DeepPRIME, DxO DeepPRIME XD, and Adobe Super Resolution with their new AI denoise are designed never to do that. They make decisions in the demosaicing process and operate within something like a 4×4 pixel radius (or just a bit more).

They can maybe over-sharpen, yes, but not create something that is not supposed to be there.

It also means that other, non-deep-learning-based systems can give results that are just as good if not better (or more predictable).

DxO made DeepPRIME XD to also fold detail enhancement / sharpening into the denoising step, but I find it either doesn’t add much or makes things too crunchy.
The original DeepPRIME output, loaded as a DNG file into darktable to use diffuse or sharpen, is still a strong combo for me, and the one I always miss when going to other tools.

I think most of these tools are some sort of deconvolution (I put unsharp mask in that category as well), and in the end they give similar results.

1 Like

Going by the tone of the post, you’d best throw your cameras in the trash and delete all processing apps from your computer.

I admire the sentiment and would recommend searching for and reading about ‘scene-referred’ editing. As to sharpening, I do use deconvolution.

Reality is a torrent that we are trying to draw only a droplet from.

“I went to capture reality, what tools can I use” – you can’t; it’s all an approximation.

If we try to read into the question more, then the OP doesn’t want AI… Most AI tools are still labeled as such and should be easy to avoid.

If the OP really means “I don’t want something that looks over-processed”, then that’s even more a matter of taste.

1 Like

My argument is that you can capture reality at the “tolerances” required for photography.

Applying the highest scientific fidelity at all steps would be wrong because you’re not adapting tolerances to the task. Newton’s physics is the appropriate tool for some problems even if it is flawed.

So again, arguments about the scientific impossibility of capturing reality are wrongheaded. Artistic arguments may well be valid but haven’t been put forward.

Removing or adding content is generally regarded as a big step in photography. There are plenty of famous examples of people doing so and it becoming a scandal.

Something being a matter of taste doesn’t make it hard to discuss. On the contrary, it makes it meaningful to discuss. Taste is socially created and has social meaning.

1 Like

Interesting. Please tell us what part or parts of Newtonian physics is/are flawed and how.

Is p=ma OK? :slight_smile:

That’s not even Newton though? It’s late here, but I think you made a typo? Should be velocity, not acceleration? (Or force rather than momentum?)

The digital imaging mechanism is a coarse approximation of the rich spectrum of the original scene; you can’t get past that. That we see corresponding hues and tones in our renditions that are representative of the scene owes in part to the metameric adaptation of the human vision mechanism.

Now, I don’t think it’s impossible to render reality, that is, render wavelength-rich renditions. To do so, however, would require soup-to-nuts retooling of the entire digital imaging chain to replace the RGB encoding with a spectrally-rich equivalent.
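To see just how coarse the triples are: each channel is the scene spectrum integrated against a single sensitivity curve. A toy sketch, with made-up Gaussian sensitivities standing in for real measured curves, shows why many different spectra collapse to the same triple (metamerism):

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, sampled every 10 nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Made-up illustrative sensitivity curves; real ones are measured
# per camera and look quite different.
sensitivity = {"r": gaussian(600, 40),
               "g": gaussian(540, 40),
               "b": gaussian(460, 40)}

def camera_response(spectrum):
    # Each channel collapses the whole spectrum into one number, so
    # all finer spectral structure is discarded; distinct spectra
    # can produce identical triples.
    return {ch: float(np.dot(s, spectrum)) for ch, s in sensitivity.items()}
```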

https://en.wikipedia.org/wiki/Spectral_imaging