Image processing: how to get the best from the original signal without adding fake information?

I’d argue you could capture reality with black and white film, again in a sense relevant to photography. The fidelity of the data is at best secondary to other questions about the approach to the subject, framing, etc.

Can’t say I disagree with you; I just use best-fidelity in my particular approach. It gives me flexibility to do whatever I want in post. Mostly, I try to maintain the hue correspondence of the scene, but I’ll take liberties in post-processing, cropping to make the rendition I couldn’t capture due to limitations in where I could stand, etc. Once in a while, I’ll even go nuts with color:

[image: DSC_7696c-small]

This rendition is from a dimly-lit snapshot with my old D50; the original is a JPEG. In the original rendition, he’s not even looking at the camera. G’MIC filters in GIMP let you do all sorts of damage…

My post was an attempt to establish what the OP was really asking. But @zaknu hasn’t responded to the many posts.

… get as close as possible to the original signal without adding bogus information, …

The trivial answer is: don’t do any processing. In particular, don’t demosaic.
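Purely for illustration, a minimal sketch of what “no processing at all” could look like, assuming the rawpy package (LibRaw bindings) and a placeholder file name: read the mosaic straight off the file and stop there.

```python
import rawpy  # LibRaw bindings; one way (among several) to read raw files

# 'DSC_0001.NEF' is a placeholder file name
with rawpy.imread('DSC_0001.NEF') as raw:
    mosaic = raw.raw_image_visible.copy()  # un-demosaiced sensor values, one per photosite
    cfa = raw.raw_colors_visible.copy()    # which CFA color each photosite belongs to

print(mosaic.shape, mosaic.dtype)  # no interpolation, no white balance, no gamma
```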

Some write it as F = ma, but just to be clear: force = mass times acceleration.

So please do start by pointing out the flaw in that relationship.

Failing that, do show us some flawed Newtonian physics that affect photography so that we may learn some more …

That’s an interesting word. What is reality? The four-dimensional quantized space-time, where things have insides and interact with an infinite electromagnetic spectrum, perhaps. Maybe even a multiverse that forks at every quantum interaction. To the best of our understanding of physics, anyway.

Yet, what we capture with our cameras is innately human: a tristimulus rendition of “visible” light, projected onto a 2D plane. Things no longer have an inside, no backside either, no past nor future, not even a dark side.

Anyway, this is going too far afield. We’re talking about photography, so that is normal, and it’s not what the OP was asking about. But it is interesting to remind oneself from time to time that we never experience “reality”, only the slim subset that our human senses can perceive.

1 Like

You completely misunderstood my post.

I was trying, for what seems to be mostly a science/engineering crowd, to show a basic example of how different questions/problems have different “rules”.

Newtonian physics is fine for most earthbound problems. Similarly, our capture devices (film/CMOS) are fine for capturing “reality” as far as photography is concerned. There are well-established ideas of what you can’t do to a photo and still submit it to competitions or publish it as news: no cloning, etc.

AI tools are blurring the lines in rather complex ways. Sharpening and denoising that add meaningful detail are an interesting corner case. We don’t really know where the detail comes from, but generally it looks like the same image. Blurry birds get detail in their feathers, etc. Those details, in my quick assessment, may well belong to a different bird.

Lots of sophisticated photographers deny that photography is capturing reality, even when no post-processing or tricks have been done. Their arguments are not technical; instead they come from an understanding of how the photographer influences a situation and cuts out a small view.

4 Likes

I like how the OP throws out a single post and kicks off a philosophical discussion while he/she doesn’t even read any of the answers.

Is this the beginning of the real Matrix, where AI-fueled spam bots keep the humans entertained?

2 Likes

A question is a question, no matter its source. I’ve found this particular discourse to be quite interesting, thanks to @nosle, who I’m pretty sure is a human… :laughing:

4 Likes

How do you tell who has read what? I didn’t know we could do that.

In this special case a look at the profile tells the story:

Joined    : 4 days ago
Last Post : 4 days ago
Seen      : 4 days ago

STATS
1 day visited
< 1m read time
2 topics viewed
9 posts read
0 hearts given
2 hearts received
1 topic created
0 posts created

And I highly doubt that @zaknu reads the posts without being logged in.

2 Likes

For fun and giggles, here is what ChatGPT had to offer when asked for a shorter answer; the first response was overly verbose. I like how “Wilma”, as we call her at the university, talks about her own workflow. :see_no_evil: :rofl:

Interesting. And scary…

This is a very interesting take on the subject:

We are not communing with the God of the Scene when making a picture. We are also not “presenting”, “capturing”, “relaying”, or “conveying” a “scene” in terms of a simulacrum.

The author’s take is that we humans read pictures just like we read texts - extracting meaning and signs from what we see - and that brings an interesting aspect to it.

6 Likes

And scary…

Why scary? Don’t you remember this from 1964? ELIZA - Wikipedia
In case you want to have a go at it yourself: How to Build Eliza Chatterbot

Have fun!
Claes in Lund, Sweden

1 Like

Welcome to the forum @zaknu. It seems you have kicked off a conversation that may eventually try to determine “why is there air?”.

Everyone has to decide what their own line is, and some may have different limits if they do multiple types of photography. Photojournalists and people who enter contests may have to stay within externally imposed limits.

You said that you wanted to avoid adding made-up/bogus/fake information to your images, and then mentioned a couple of tools. I’d like to clarify whether your references to fake information were referring to things that intentionally and fundamentally alter the appearance of an image (e.g., cloning out things in an image or replacing a sky), or to tools that can result in unsightly artifacts (e.g., halos) in images. I think everyone has assumed the former, but I’m guessing that you meant the latter.

ELIZA didn’t have a way to get to the internets…

At the risk of (more) thread drift: Newton was famous for F = ma etc., but also for his book “Opticks”. See the 1730 (fourth) edition at

But take the book with pinches of salt. For example, on p108:

I never yet found any Body, which by reflecting homogeneal Light could sensibly change its Colour.

Evidently Newton hadn’t encountered Fluorescence - Wikipedia, which had been observed in 1560.

1 Like

I found it funny reading through that! I can’t quite explain why; something in the clashing of conceptual frameworks.

Importantly, though, he doesn’t actually go into the issue of “real” vs. fake but, from my quick skim, has just arrived at an understanding of how social and constructed image reading is. Hopefully he doesn’t go ‘gaaah, there’s no foothold, everything is fake, nothing has value, the world doesn’t exist’ but rather comes to understand how art and the social sciences work.

2 Likes

Apologies for not logging back in.
I am grateful for all the answers; some I didn’t expect, which shows that I might have used better wording.

Regarding what I perhaps inappropriately called fake data, I meant things like upscaling neural networks making wilder guesses and always providing an “answer” even when there is none (for example, restoring blemishes or adding face details that didn’t exist to begin with), as well as halo artifacts, etc.

While the arbitrariness of art and personal preference has been highlighted, the kind of idea I had is probably closest to that of scene rendition (suggested in a few posts).

With all due approximations, I’d like to keep them to a bare minimum, so I’d like the colors in my pictures to be as close as possible to the original scene (I guess color checkers or manual WB before shooting are required), and also the fine details and textures rendered as close as possible to the original.
I am not sure whether scientific photography or scene rendition is closest to this idea.

I’d leave the artistic side to framing and composing as needed, then capture what’s framed with as little approximation as possible.

So, I guess the workflow should consist only of:

  1. get good data: tripod, EFCS, histogram check
  2. chromatic aberration correction
  3. demosaicing (after CA correction)
  4. lens distortion correction
  5. deconvolution

and nothing else?
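To make the question concrete, here is a rough Python sketch of what I imagine steps 2-5 could look like. The packages (rawpy, lensfunpy, OpenCV, scikit-image), the file name, the camera/lens strings, the CA factors and the Gaussian PSF are all placeholders and assumptions on my part, not something suggested in this thread:

```python
import numpy as np
import rawpy                 # LibRaw bindings
import lensfunpy             # lensfun database bindings
import cv2
from skimage.restoration import richardson_lucy

# Step 1 (good data: tripod, EFCS, histogram check) happens at capture time.

# Steps 2 + 3: chromatic aberration correction and demosaicing.
# LibRaw scales the red/blue channels on the mosaic, so the CA correction
# is applied as part of the demosaic step here.
with rawpy.imread('DSC_0001.NEF') as raw:                # placeholder file
    rgb = raw.postprocess(
        use_camera_wb=True,                              # keep the WB set at the scene
        no_auto_bright=True,                             # no hidden brightness changes
        output_bps=16,
        demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,
        chromatic_aberration=(0.9995, 1.0005),           # red/blue scale factors (lens-specific)
    )

# Step 4: lens distortion correction from the lensfun database.
db = lensfunpy.Database()
cam = db.find_cameras('Nikon', 'D50')[0]                 # placeholder camera
lens = db.find_lenses(cam, 'Nikon', '18-55mm')[0]        # placeholder lens
h, w = rgb.shape[:2]
mod = lensfunpy.Modifier(lens, cam.crop_factor, w, h)
mod.initialize(focal_length=35.0, aperture=5.6, distance=3.0)
undist_coords = mod.apply_geometry_distortion()
rgb = cv2.remap(rgb, undist_coords, None, cv2.INTER_LANCZOS4)

# Step 5: Richardson-Lucy deconvolution with an assumed Gaussian PSF.
def gaussian_psf(size=9, sigma=1.2):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

img = rgb.astype(np.float64) / 65535.0
psf = gaussian_psf()
sharp = np.stack(
    [richardson_lucy(img[..., c], psf, num_iter=20) for c in range(3)],
    axis=-1,
)
```

(I realize the CA factors and the PSF would really have to be measured for the specific lens, which I suppose is part of my question.)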

Thanks again to everybody for your help

1 Like

The problem I have with reality is that it’s boring — I see it every day. :wink:

3 Likes