If image software is based on math, is there new math available?

Facebook figured out I’m interested in photography. So I’ve seen interesting ads for noise reduction and sharpening by Topaz. My understanding of photo processing software is that programmers found a way to convert visual traits into math, and then use math to change the image.

So, when I see these impressive examples in the ads, does that mean someone out there has improved the math that powers image-manipulation software?

Well, “inventing new math” is pushing it, but new algorithms for image processing are being developed and tested all the time.

See e.g. Google Scholar: demosaicing algorithms.
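
For a flavour of what those papers are about, here’s a rough sketch of the classic bilinear approach they all improve upon. It assumes an RGGB Bayer layout and a float mosaic; real raw converters use much smarter, edge-aware variants, so treat this as a toy illustration only:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (H x W float array).

    Each channel keeps the pixels the sensor actually sampled and
    fills the gaps by averaging the nearest sampled neighbours.
    """
    h, w = raw.shape
    # Masks marking where each colour was sampled (RGGB layout assumed).
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    g = 1.0 - r - b
    # Bilinear interpolation, written as convolution of the sparse samples.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    planes = [convolve(raw * m, k, mode='mirror')
              for m, k in ((r, k_rb), (g, k_g), (b, k_rb))]
    return np.stack(planes, axis=-1)
```

Most of the published improvements amount to replacing those fixed averaging kernels with gradient-driven logic that doesn’t smear edges.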

2 Likes

My terminology was a little off, but you’ve confirmed what I suspected.

I know science will occasionally hit a wall, even if it’s only a conceptual barrier. I’ll try that Google Scholar link!

Look at it this way. Archimedes “almost” discovered calculus way back in circa 250 BC. Newton & Leibniz made it happen much later. Who knows when a breakthrough may come along? Image manipulation in the digital sense is basically still in its embryonic stage.

Topaz doesn’t use math, at least not as such, or anything resembling traditional denoising or upscaling algorithms. Rather, they use machine learning (AI): in effect, they teach their models what quality “looks like”.
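
To make that concrete, here’s a minimal sketch of the general recipe (nothing like Topaz’s actual code, which is proprietary; the architecture and data here are stand-ins): show a small network pairs of degraded and clean images and let it learn the mapping by gradient descent.

```python
import torch
import torch.nn as nn

# A deliberately tiny denoiser; commercial products use far deeper
# architectures, but the training recipe is the same basic idea.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    # Stand-in data: random "clean" patches plus synthetic noise.
    # A real pipeline would draw patches from a huge photo library
    # and simulate realistic sensor noise.
    clean = torch.rand(8, 3, 64, 64)
    noisy = clean + 0.1 * torch.randn_like(clean)

    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)  # "how far from quality is this?"
    loss.backward()
    opt.step()
```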

1 Like

The “new” facility isn’t math, but more powerful computers. Glancing through my AI textbooks from the 1980s, the fundamentals don’t seem to have changed much. What has changed is our ability to process vastly greater quantities of data. Machine learning has been around since the 1960s, but we can now apply it to far larger datasets.

2 Likes

Science can only hit a wall when the math is wrong, which is true of an awful lot of physics. Typically the math is wrong because they don’t completely understand the concept - in other words, there are things they haven’t discovered yet. More nefariously, though, wrong math is sometimes promoted because there is money or power to be gained by using it to push an agenda.

If you want to see a huge collection of corrections and innovations that don’t get pushed in the mainstream (because he spends so much time exposing them), all explained in understandable English (which is often not the case with maths), visit here: Homepage for Miles Mathis science site

I’m not an algorithm writer and don’t know how useful that will be for image processing, but it should be very engaging for mathematicians and physicists.

There were several algorithmic breakthroughs in the late 90s. These helped make deep learning practical and affordable.

1 Like

Tnx for that link… looks very interesting. Will provide some bedtime reading :+1:

What did I read :upside_down_face:

2 Likes

This is absolute rubbish. Sorry.

3 Likes

It’s hard to make a meaningful rebuttal against a criticism that offers no reason, but this is probably not the forum for lengthy mathematical debates, so I don’t expect the conversation to progress past basic criticisms or appraisals. Everyone is welcome to their own take, of course.

This person makes sweeping claims that the very fundamentals of mathematical physics are wrong, “but he has done the research”. Calculus is supposedly not what it seems and is fundamentally flawed. Pi is actually equal to four, because of reasons. Even the very definition of a point is ill-defined; therefore Euclid was wrong, and so is everyone else, except those who listen to him – at least that’s his vibe.
I cannot take this seriously. He is also a devout conspiracy theorist: see Best Fake Stories.

Do with that what you please, but don’t introduce this person in a topic about progress in mathematics.

8 Likes

Not so much new maths; most of it is new algorithms (e.g. machine learning) and an improved understanding of the human visual system. It’s a huge field. A tiny but interesting portion can be seen here: https://www.ipol.im/
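
As a taste of the classical (non-ML) side of that field, here’s a toy unsharp mask, one of the oldest sharpening tricks, which works precisely because the visual system is so sensitive to local contrast at edges. The sketch assumes a single-channel float image in [0, 1]; the sigma and amount values are just illustrative defaults:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Classic unsharp masking on a single-channel float image in [0, 1].

    Adding back the difference between the image and a blurred copy
    exaggerates edges while barely touching flat regions, which is
    exactly what the eye reads as "sharper".
    """
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```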

1 Like

I’ve hidden this response as OT to the overall topic.

In the back of my mind, I was wondering about A.I. And, as someone working in tech, I knew the machines have been steadily improving.
Your answer addresses my curiosity on the topic; thanks!

That’s an interesting link! Thank you for posting it.

On a side note, there is persuasive research that our human intelligence is greatly facilitated by the vision center in the brain.

Hey, other people wandered off-topic.

Thanks everyone for the lively discussion.
(Even if the referees had to step in.)

Specifically, Alex Krizhevsky’s 2012 research paper (AlexNet) wasn’t a new form of math - it was primarily about figuring out how to implement a neural network on a GPU in order to get acceptable performance. Prior to AlexNet, no one had managed to get even remotely acceptable performance out of a deep neural network at that scale. It turns out that GPU shader hardware was very well suited to the task, and did not need many changes to become perfectly suited to it. (There have been a LOT of improvements in GPU compute capability over the last decade, driven almost solely by neural networks, including things most image processing gurus here would scoff at, like int8 math. Apparently, while that precision is usually too low for most computations, it’s still fine for most neural networks.)

Neural networks had been around for a while, and GPUs with general-purpose compute capability had been around for a while too; the big breakthrough was when someone thought of combining the two.
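
To illustrate the int8 point, here’s a toy experiment (not anyone’s production quantisation scheme; the layer sizes and symmetric-scale choice are just assumptions for the demo): quantise the weights of a small layer to int8 and compare against the float32 result.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully-connected layer in float32.
weights = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=256).astype(np.float32)

# Symmetric quantisation: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
w_int8 = np.round(weights / scale).astype(np.int8)

# Compare the full-precision output with the dequantised int8 weights.
y_fp32 = weights @ x
y_int8 = (w_int8.astype(np.float32) * scale) @ x

# The error is small relative to the signal, which is why int8
# inference is usually good enough for neural networks.
rel_err = np.abs(y_fp32 - y_int8).max() / np.abs(y_fp32).max()
print(f"worst relative error with int8 weights: {rel_err:.3%}")
```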

3 Likes

Y’know, int8 is where most renditions live, so if the neural net is appropriately trained…

My sense, though, is that parametric operations on high-precision data will continue to handle the variety of situations encountered in encoding scenes better, for a while yet.
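
A quick NumPy illustration of that point (a toy example, not anyone’s actual pipeline): even if the final rendition is 8-bit, doing the *edits* in 8-bit permanently throws away levels, while a float pipeline quantised once at the end keeps them.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.uniform(size=(256, 256)).astype(np.float32)  # "scene" data in [0, 1)

def to8(x):
    """Quantise a [0, 1] float image to 8-bit."""
    return np.round(np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

# Darken by two stops and bring it back, quantising after every step...
dark8 = to8(to8(img).astype(np.float32) / 255 * 0.25)
back8 = to8(dark8.astype(np.float32) / 255 * 4.0)

# ...versus doing the same edits in float and quantising once at the end.
backf = to8(img * 0.25 * 4.0)

print("levels left, 8-bit pipeline:", np.unique(back8).size)  # ~65: banding
print("levels left, float pipeline:", np.unique(backf).size)  # ~256: intact
```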