Google Gemini is great as an instant DIY FAQ assistant

I think it is a big marketing con to anthropomorphize LLMs. They do, in effect, average words together, and quite often they do it poorly; since “to err is human”, talking about them as if they had human qualities makes their errors seem more acceptable. But that’s not true.

Humans make errors. That’s OK.

When a computer program commits an error, it’s buggy. This software errors so often, it’s pre-alpha quality.

What is evil is the amount of power it consumes while not even producing good results consistently. That is not its fault, though. What is evil is that it often repeats things from its training data verbatim, and yet somehow that doesn’t count as copyright infringement; but if I copy something verbatim and reuse it, there are consequences.

What is evil is the people in control who will use it to put people out of jobs. And then the program won’t even do a good job.

I don’t see my views as dogmatic: I’ve used the technology, and I’ve run the Llama models myself. You can see how this will be used, and the results aren’t even that good.

1 Like

I think those artificial neural networks work in fuzzy ways – just like natural ones do. Sometimes it’s variety, sometimes it’s an error. When people or (traditional) machines produce items (be it sections of text or engine parts), some fall within tolerance and some do not. You don’t treat people as idiots or discard machines as long as the failure rate is low enough; you accept a certain percentage of waste. If LLMs can ingest huge amounts of data, discard the ‘filler’ with high (but not perfect) precision, and hand the rest over to humans, we can save ourselves a lot of menial work, or enable things that were simply not possible before. Interesting examples are mentioned here: Project: Civic Band—scraping and searching PDF meeting minutes from hundreds of municipalities and Project: VERDAD—tracking misinformation in radio broadcasts using Gemini 1.5

Putting people out of jobs: that argument has been used against machines since they were first introduced. However, I think we are all glad there are things that now don’t have to be done by humans. Interestingly, some seemingly simple, mechanical jobs cannot be automated properly: Where Are The Robotic Bricklayers? - by Brian Potter; the same goes for much dangerous, filthy, or laborious work.

The amount of energy they consume, and the amount of influence they grant to companies (or rather, the amount of trust placed in said companies and the money granted to them by investors), I also see as worrisome.

People trusting these machines just as much or more than humans is also problematic. But we are known to trust all the wrong people, too.

Equating AI (all of machine learning, computer vision etc.) with LLMs and other forms of generative AI is also wrong – but LLMs/generative AI is simply something that’s easy to sell to the masses.

In certain areas, these tools (I don’t mean LLMs, but e.g. computer vision) are ‘superhuman’ (they exceed human speed and/or precision), but that is also the case for mechanical machines. If I’m ill, and a machine can spot the cause faster and with better reliability than someone working in a lab, bent over a microscope, I won’t complain.

And of course you’re also completely right about the copyright issues. This is something that Simon also covers here: https://www.youtube.com/watch?v=h8Jth_ijZyY&t=1765s

1 Like

Indeed so, but at least as I recall it, the folks at the top were not salivating for it so hard and so openly. Perhaps it’s just some form of worker oppression. Or maybe I’m just wrong.

Yes, there are certainly some great ML applications, like early detection of breast cancer.

Equating all AI with LLMs is just more bad marketing. The term AI will be useless if the LLM hype fizzles out. It’s on the “crypto” path.

1 Like

They were salivating so much that they used child laborers to operate machines in ways adults couldn’t. Most of our workers’ rights were born out of those periods; people were heavily exploited during and after the Industrial Revolution. One could argue we haven’t even fully recovered when it comes to the number of hours we work per week.

I do agree with the sentiment of protecting people’s welfare if they lose their jobs to an AI, but I also believe humans should be freed from menial and repetitive jobs. At least the majority of people, since a select few do enjoy working in factories or doing repetitive tasks.

Just yesterday I was reading articles about how the new programmers, who were strongly advised by the current administration to “learn to code”, are finding it very difficult to land jobs, and much of the cause was attributed to AI. Not because they don’t know AI, but because AI is taking on much of the work.

1 Like

Where I live, there are now regularly billboard ads that are blatantly AI-generated. I find them very obvious, and quite offensive. But the marketing managers who paid for them evidently don’t.

There was one with a huge pretzel on it. Except the AI put water droplets on the pretzel instead of salt, which is wildly silly and looks off-putting. Worse, a pretzel is just about the most ubiquitous pastry around here, yet it was apparently not even worth their time to walk down to the nearest bakery and take a simple photo of one!

Yesterday I walked by a rather beautiful billboard of a mountain of presents, with snowflakes etc. in the background. It had all the telltale asymmetries and wonky details of an AI-generated image.

If I were a graphic designer, I’d be very worried about my future salaries. Or perhaps not. Perhaps these images are exactly the output of a graphic designer who doesn’t care.

At work, we just got access to an LLM in Visual Studio (a programming environment). My boss was quite impressed with it, particularly for writing commit messages and tests. I have also found it useful on occasion, for what amounts to web searches (“how to check if two line segments are collinear”, “what is the memory bandwidth of DDR4 vs DDR5”). Its answers were good, but not perfect: they contained too many factual errors to be trusted outright.
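For the collinearity question, at least, there is a compact textbook answer the LLM’s output can be checked against; here’s a minimal sketch using 2D cross products (the function names are my own, not anything the tool produced):

```python
# Two segments are collinear if all four endpoints lie on one line.
# cross(a, b, c) is the 2D cross product of vectors AB and AC;
# it is zero exactly when the points a, b, c are collinear.

def cross(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_collinear(p1, p2, q1, q2, eps=1e-9):
    """True if segments p1-p2 and q1-q2 lie on one line (within tolerance eps).
    Degenerate segments (p1 == p2) are not handled specially here."""
    return abs(cross(p1, p2, q1)) < eps and abs(cross(p1, p2, q2)) < eps

print(segments_collinear((0, 0), (2, 2), (3, 3), (5, 5)))  # True
print(segments_collinear((0, 0), (2, 2), (3, 4), (5, 5)))  # False
```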

But as the billboards showed, in marketing, these errors don’t seem to matter. In my job, they happen to matter. But I do wonder in how many programming jobs they don’t. I think if I were a junior programmer, I’d worry about my future salaries, too.

3 Likes

I’m teaching a fundamentals-of-programming course, and we recently asked the students which tools they use to learn. Many answered ChatGPT. We noticed, however, that some use very advanced techniques which were not explained in the course. When we asked the students about this, they said they had learned it from ChatGPT and saw that it worked for the exercises. At the exam, they had no clue what it actually does and could not apply it properly.
That showed me (again) that you have to be the expert - not the LLM. You have to understand what it is generating, and you have to judge what to do with that information.

For my work, I also use tools like Elicit or ChatGPT. I think they offer something that is hard to obtain from other people: you can ask a question in the weirdest way and still get some answer. I mean, not that this is impossible otherwise, but it is really hard to find a person with the kind of knowledge needed to infer the answer from your novice question.
No doubt the answer from the LLM can be BS, but so far it has helped me a lot in finding the correct terms used in the literature. I can then go back to Google Scholar or Scopus and search directly with those terms. My problem was that I’m not, for example, a trained statistician and thus don’t know a lot of the terminology. However, with the help of ChatGPT I actually found the terms just by describing what I wanted to do. It gave me four possible methods, and then I could go to the primary literature and check what they do and whether they really fit my problem. Again, you have to be the expert to judge the answer, and some methods did not directly apply to my problem - nevertheless, it amazes me that my gibberish question contained enough information in the latent space to point me to the methods I was looking for (or maybe not, and I’m still using the wrong thing :smiley: )

3 Likes

Outside top-tier advertising, most ads are pure “slop”, so indistinguishable from each other that they might as well be AI-generated anyway; there’s no soul or art in them. If there were, people would probably spot the fake stuff, as it would stand out. Much like designing things in Corporate Memphis, I can’t believe workers enjoy creating those things at all.

I feel for the designers that got replaced, and hopefully they can find a job doing something better.

Unrelated: we as a species should start recognizing the mental assault that is advertising and ban it altogether.

1 Like

One does not have to hold any of those opinions to be just skeptical and/or cautiously optimistic, or anything in between.

I think that machine learning (including LLMs, but a lot of other tools too) will add a lot of value to certain kinds of tasks, and automate a lot of menial ones.

At the same time, I think that at present LLMs are overhyped (and to be fair, they were a lot more overhyped a while ago, so things are improving). And a lot of people participating in this chorus have absolutely no clue how LLMs work, how they are trained, etc. Not even a layperson’s understanding; they just approach it as some new form of magic that everyone should use.

I just find it tiresome to have LLMs pushed into my face by corporations, who are generally not holding my best interests in mind, and a chorus of clueless people, at every single opportunity.

4 Likes

The thing about AI tech that I find most troubling is the sheer magnitude of money being invested in it. Meta and Microsoft have spent more than $50 billion on AI tech in the last year. To wit: one billion dollars is a stack of $100 bills one kilometer high.
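That stack arithmetic roughly checks out, assuming the commonly cited ~0.0109 cm thickness of a US banknote (my figure, not from the post above):

```python
# Sanity check: how tall is $1 billion as a stack of $100 bills?
bills = 1_000_000_000 // 100                    # ten million $100 bills
thickness_cm = 0.0109                           # assumed thickness of one US bill
height_km = bills * thickness_cm / 100 / 1000   # cm -> m -> km
print(f"{height_km:.2f} km")                    # ~1.09 km, so roughly one kilometer
```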

This has two troubling ramifications:

  1. If that’s the required investment, this tech is inherently exclusive to the richest companies on earth.
  2. If they’re investing this much, they’re expecting a commensurate return. If that’s not forthcoming, they’re going to force it into their customers’ faces with all their might.

This amount of money is in the ballpark of the Manhattan Project, the Apollo Project, and the iPhone. It is a bet on a disruption of the entire industry. It’s going to be a hell of a pop when that bubble bursts.

4 Likes

I don’t think that companies can really force anything on customers.

If all that investment into ML does not pay off, large companies will just swallow the loss (you will see decreasing stock prices), while smaller companies that overreached will go bankrupt or get bought up by their competitors.

1 Like

Maybe they won’t outright burst, since their investment in hardware can always be repurposed for non-consumer markets. I learned today about AlphaFold and how some of the researchers working on it earned the Nobel Prize in Chemistry for their work on using deep learning to predict protein structures.

As these techniques become more widely used in different fields, hardware will always be a necessity, and it’s something these companies can offer. Sort of like how Amazon started AWS. Although I wonder how much money that can make compared to consumer markets…

1 Like

I’ve been reading Gemini’s answers to my PowerShell programming queries at work, and it just makes up function and method names, or outright copies stuff it’s been trained on. Then I move on and find a link that explains what I’m actually looking for.

1 Like