and ask yourself how many people see those obvious AI graphics and get turned off by them. I've seen that comment more than once lately,
which then might be detrimental to your actual goal.
and ask yourself how many people see those obvious AI graphics and get turned off by them. I've seen that comment more than once lately,
which then might be detrimental to your actual goal.
This is exactly what people listen to, sadly.
Companies aren't really trying to one-up each other artistically, especially when it comes to marketing. Have you seen ads lately? It's a far cry from the creative ads that existed even into the early 2010s. Everything is becoming so homogenized and similar that there's really not much leeway for artistic expression, so what AI spews out fits those cases perfectly.
Even company logos are now just very similar sans-serif fonts.
But is that really true? I suppose it is for some people, but for me it's never about the algorithms. It's always about how they are used, why they have been developed, and by whom.
I think a very common perception of AI these days is along the lines of "I can see how it will do incredible things in certain areas (e.g., finding new antibiotics), but we don't want it to devalue our art or deskill our jobs."
That's not really blaming the algorithms but rather the capitalistic use of them.
It's obviously far more nuanced than this, but I think it's a well-established concern that the companies behind the tech are profit-driven above anything else.
If that were the case, stock photography wouldn't exist.
I think they want cost-efficiency.
In Switzerland, many of the ads are really bold and humorous. However, the quality of the graphics and images (including photos) in them is secondary.
I don't think it is just that the companies are doing a bad job.
So I can see why you are excited from the technical side. But not everything we can do technically is actually a good thing.
no
Saw this one while at work earlier, David. Excellent content for sure, and I'll share it with other sites I frequent. As for calling what AI makes art, all I have to say is: it's not art in the sense that the AI cannot choose to create what it does. It is art in the sense that someone had the vision to tell the AI to do so. I see such tools as filters and, yes, as tools to carry out my own vision, but mainly to let someone who doesn't have any dexterity, only a curious mind, actually make something that means something and is hopefully appreciated by others. Yes, art is created, but it's not done by the choice of an automaton. My thoughts on this matter. Folks may poo-poo the results if they choose to, but the result is art; I just won't call it done by artificial means alone; it requires a conductor to command and the AI to obey.
Whenever I talk to people, they say AI is truly useful, just not for their stuff. The C++ guys use it for Python but say it's pretty useless for C++. The writers use it for illustrations but say it's pretty useless for writing. And so on.
Which implies that current AI technology is a decent generalist but no expert. The claims of "PhD-level" AIs have been debunked over and over. This makes sense, as training on the sum total of human output is necessarily going to produce an average result. As a PhD-level expert myself, this gives me hope.
On the other hand, the job market in my area (image processing and programming) is rough at the moment. Even PhD candidates seem to struggle to find jobs. If that's related to the AI crunch, the market does not seem to agree with my claims above, and seems to squeeze even the experts.
What honestly gives me hope is the evident unsustainability of the large AI systems. The big AI corpos appear to be spending hundreds or thousands of dollars for every dollar they earn. If inference is indeed a few orders of magnitude more expensive than what it is currently sold for, this current wave of AI exuberance will likely become unprofitable very soon. Hopefully human labor will then once again be cost-competitive.
100% agree with all these.
Also:
No, this implies the Gell-Mann amnesia effect. People with expertise in a field can see how shit the output is in that field, but they'll happily swallow the slop if it's about something they don't know much about.
When it comes to inference, it's good to note that Chinese companies keep releasing smaller and smaller models which can compete with bigger ones at specialized tasks. For example, Qwen 2.5 Coder 32B can fit on a top-of-the-line consumer GPU (24 GB of VRAM needed, plus more for context, so think RTX 5090 with 32 GB, around 2.5k€), and it's pretty competitive in programming compared to top-of-the-line models. Inference is really, really cheap compared to training. This is also bad news for those companies, of course. If you can buy and dedicate an RTX 5090 machine to providing a coding agent for your employees, it quickly pays for itself compared to paying for a Copilot subscription or similar, and you gain complete privacy and local processing, if your company requires it.
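The "pays for itself" claim above is easy to sanity-check with back-of-the-envelope arithmetic. The prices below are illustrative assumptions (the thread only mentions the ~2.5k€ GPU figure; the subscription price and team size are made up for the sketch):

```python
# Break-even sketch: one local GPU machine vs. per-seat subscriptions.
# All figures are assumptions for illustration, not vendor quotes.
GPU_COST_EUR = 2500             # RTX 5090-class machine, per the comment above
SUBSCRIPTION_EUR_PER_SEAT = 19  # assumed monthly coding-assistant subscription
TEAM_SIZE = 10                  # assumed number of developers sharing the box

monthly_saving = SUBSCRIPTION_EUR_PER_SEAT * TEAM_SIZE
break_even_months = GPU_COST_EUR / monthly_saving
print(f"Local GPU pays for itself after ~{break_even_months:.1f} months")
```

Under these assumptions the hardware breaks even in roughly a year; electricity and maintenance would push that out somewhat, but the order of magnitude supports the point.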
Yes, 100% this.
I think what I'm most concerned about is that "they" don't care (the corporations and shareholders profiting from replacing workers). Wealth inequality is already atrocious, but I don't see anything happening to stop it. Instead, all signs point to it getting worse. Sorry to pick on the States, but their current administration gives off all the vibes that those in charge really only care about their own wealth and are just hoodwinking the populace into believing they care about them. (It's not just the States, obviously, but that's the one that seems to be in my face all the time at the moment.)
The natural result of this is some kind of societal collapse or revolution, of course, but that's not what we want. It's much better if we don't let it get that far. But is any government going to have the guts to put a brake on Big Tech, and if they do, will it be soon enough?
Because a lot of companies just want "good enough" instead of real expertise. Sadly, it seems a large proportion of the populace is also happy with good enough, especially if they don't know what excellent looks like. I think there will always be a place for the best of the best and many experts, depending on the field. But "high performers" are in trouble.
Long, long ago, music-creation software had the option to subtly vary pitch and timing. I used to use "Band in a Box", for example; this is from fifteen years ago:
http://kronometric.org/phot/music/Home%20Again.mp3
The guitar à la The Shadows is mine.
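That "subtly vary pitch and timing" trick (often called humanization) is simple to sketch. The function below is a generic illustration of the idea, not Band-in-a-Box's actual behavior or API; the note representation and jitter ranges are assumptions:

```python
import random

def humanize(notes, timing_jitter=0.01, pitch_jitter_cents=5, seed=42):
    """Apply subtle random variation to note events, the way old
    music-creation software 'humanized' mechanical sequences.
    Each note is a (start_time_sec, pitch_hz) pair; the representation
    and ranges here are illustrative assumptions."""
    rng = random.Random(seed)  # fixed seed so the result is reproducible
    out = []
    for start, pitch in notes:
        # Shift timing by up to +/- timing_jitter seconds.
        t = start + rng.uniform(-timing_jitter, timing_jitter)
        # Detune by up to +/- pitch_jitter_cents (100 cents = 1 semitone).
        cents = rng.uniform(-pitch_jitter_cents, pitch_jitter_cents)
        p = pitch * 2 ** (cents / 1200)
        out.append((t, p))
    return out

# Three mechanically identical A4 notes become slightly "off", like a human.
mechanical = [(0.0, 440.0), (0.5, 440.0), (1.0, 440.0)]
print(humanize(mechanical))
```

The variations are small enough to be felt rather than heard as mistakes, which is what makes a sequenced performance sound less robotic.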
There are definitely professional applications where specific subject matter focused models can be useful.
But in the hands of the general public it's so far only subjected me to: a Jesus made of string beans getting 50,000+ likes, SpongeBob making meth, fake news confusing Zoomers and Boomers, any number of racist memes about Indians flying poop-powered toilets, lazy coworkers piping their email through it (seriously annoying), and a bunch of code cranked out by junior developers that I now get to figure out how and why it doesn't work and breaks in odd ways. I did get to make whatever you'd call this horror show in Stable Diffusion on my 7900 XT, though.
I guess that's kinda neat.
There are positive things that have come out of social media too, but most of us would probably agree the juice is not worth the squeeze. Generative AI is that all over again. We're lighting our house on fire to stay warm.
I guess that's kinda neat.
I would be careful about posting actual AI output. I've been threatened several times here for doing that.
Or are Generative AI images OK to post, team?
I've been threatened several times here for doing that.
Nobody has threatened you here, ever. The fact that you've written that makes me both sad and disappointed.
What did happen is that we asked people not to post AI answers in this forum, and we explained why. We welcomed discussion about that decision, but there wasn't much.
Literally the only person who cannot do what is asked of them is you. We've tried to talk to you publicly, we've tried to talk to you privately, but it doesn't make any difference, as you just keep on doing what we have explicitly asked you not to do, probably two dozen times now. I've lost count.
I'm tired of spending my time this way.
There are definitely professional applications where specific subject matter focused models can be useful.
Well, even there I wonder how we validate the relevance of the assumptions the model makes.
And yes, your image is a good illustration of the nonsense it is being used for. E.g., certain esoteric circles are using it to "finally document/illustrate the truth, with dragons and so on."
She is doing research on the subject and even has a book out about it:
Incidentally, AI has been received very gratefully in the esoteric & conspiracy bubble. Finally there can be cheap videos of dragons (dinosaurs, of course, they don't believe in), aliens, gigantic primeval forests (which were cleared by aliens…), Atlantis &...
Guys, you perfectly illustrate what I mean: just because most people and corporations do crap with generative AI (just as with the Internet) doesn't mean generative AI itself is crap.
That was precisely my point.
That is just part of the problems I have with the tech; see above.
And if most of the use of something is counterproductive, then the tech might not be something we should allow.
And if most of the use of something is counterproductive, then the tech might not be something we should allow.
The internet?
Most of global bandwidth has been spent on social media, serving non-AI slop for more than a decade now, porn, etc. At one point Netflix was 15% of global traffic, and many arguments were made against its existence being productive as well.
While I agree with your overall argument, we need to be careful, because at the end of the day many good "techs" (not saying LLMs or generative AI are among them) can be used by the majority for bad things and still be decent.