This is by and large a misconception based on outdated figures. I am not saying that LLMs do not consume energy, just that they do not consume the huge amount that many think they do. And unlike other energy-hungry activities, e.g., TikTok scrolling and Netflix streaming, LLMs can at least be used to build something useful.
For the record, energy efficiency is one of the areas where LLM vendors are investing the most effort (not because megacorps are nice, but because they do not like paying huge utility bills - so you can trust that they will stay very motivated to make progress quickly).
I cannot find up-to-date figures for the last half-year, but last year Google reported a 33x efficiency improvement (that is, 33 times more efficient per prompt) over the previous year. Interesting discussion here:
An interesting excerpt from the document (@JasonTheBirder may also want to read this):
The marginal energy used by a standard prompt from a modern LLM in 2025 is relatively established at this point, from both independent tests and official announcements. It is roughly 0.0003 kWh, the same energy use as 8-10 seconds of streaming Netflix or the equivalent of a Google search in 2008
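For what it’s worth, the quoted comparison can be sanity-checked with a quick back-of-envelope calculation. The inputs below are just the figures from the excerpt above, not measurements of mine:

```python
# Sanity check of the quoted figures: one prompt ~0.0003 kWh,
# said to equal 8-10 seconds of Netflix streaming.
PROMPT_KWH = 0.0003      # quoted marginal energy per LLM prompt
NETFLIX_SECONDS = 9      # midpoint of the quoted "8-10 seconds"

def implied_streaming_rate_kwh_per_hour(prompt_kwh: float, seconds: float) -> float:
    """Streaming power draw implied by 'one prompt = N seconds of Netflix'."""
    return prompt_kwh / seconds * 3600

rate = implied_streaming_rate_kwh_per_hour(PROMPT_KWH, NETFLIX_SECONDS)
print(f"Implied Netflix draw: {rate:.3f} kWh per hour of streaming")
```

The implied streaming draw works out to roughly a tenth of a kWh per hour, so the two halves of the quoted claim are at least internally consistent with each other.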
So, if you really want to save the planet, stopping binge-watching your favorite series may be a more effective strategy.
Indeed - I think it’s good to limit advanced technology use as much as possible, in as many domains as possible, not just in AI. It’s not always going to be possible, of course. But I don’t support Netflix or any streaming service, never have and never will. I moreover think Google Search, and search in general, is one of the worst things to have been invented on the internet.
And measuring energy per prompt is disingenuous anyway. It is not just the energy use per PROMPT, but how the proliferation of the technology will require more energy to upgrade it at later stages, plus the land use and the mining. But like you said, that’s also a reason to be against many other technologies such as streaming, and I am.
Meanwhile you are shooting with one of the most advanced cameras on the market, posting on YouTube, and THUS you support a streaming service by making content.
This whole ‘limit advanced technology’ argument… I hear it about AI, I have heard it about electric cars and others… I have heard it in Europe about nuclear energy.
It is so arbitrary… completely inconsistent. If you’re serious about this, go and live with the Amish. They do not do electricity, not much tech at all.
Yes, of course, that’s how I make my living. But I try to limit my use of technology as much as possible while still making enough to eat. That’s precisely the problem: everyone must contribute to it in some way to eat. I previously had a job that I considered to be even MORE destructive than supporting YouTube. What’re you going to do? Given the conditions, I do what I must, but I would STILL prefer a world where it wouldn’t be necessary.
Everyone uses all sorts of technology in direct and indirect ways often because it’s mandatory. Should it be exempt from criticism then?
People criticize their government and yet still benefit from government services, so why not tell them to go live in international waters? I do what I must or I won’t have food, and I also think I can fight industrial civilization better with at least some online presence.
I am serious about it, but there always must be compromises. People in prison also make use of the prison services, yet they don’t want to be there, etc.
The Amish are critical of technology and they don’t use electricity, but you can say the same thing about them: they also use technology in various ways. They have had to adapt, and many of them use generator- or battery-powered tools outside the home in their workshops, because their large family sizes mean not all of them can be farmers. They also use automobiles through non-Amish drivers. I have studied the Amish at length, and it sounds like you know little about them.
Even if the Amish were more consistent themselves, joining them still might not be ideal, because my goal is not to live without technology but to spread dissent and distrust of it so that people become more cautious of it, contributing in a small way, over a long period of time, to slowing down technological development.
Frankly, I find your argument that because I use some technology I should not criticize it laughable. We all must live in some form of society, and some technology use is INEVITABLE, regardless of the wants of the users. I use the internet because I HAVE to have a JOB, even though I dislike most aspects of the internet and think it’s a net detriment to society.
No matter what sort of vocation you have, you support industrial society in some way, but that does not mean we shouldn’t criticize it. If you think it’s contradictory to criticize technology yet use it, then you should tell people to stop criticizing their government even though EVERYONE uses their government and benefits from it in some way.
Edit: I get this reaction a lot, whereas if I criticize other things that I use (like the government, services, companies, banks, etc.), I never get it. I think that’s partly because people have a strong emotional attachment to technology. Intelligent people often do, because technology was a safe haven for many of them when they were growing up, and it often becomes their place of refuge. So I think they perceive criticism as a personal attack on their own emotionally safe zone, almost on their tribe, which causes too much dissonance for them.
LLMs are designed to be convincing above all else which means they are great at introducing subtle bugs.
You’re quoting standard prompting whilst talking about agents, which run for a significant length of time. Workflows are also getting more complex, with multiple agents running. I’m entirely unconvinced by your power numbers. Look at the number of datacenters being planned: the numbers are staggering, and they are already causing inefficient local generation to be rushed into place to service them.
All of which does not change the fact that a lot of people have a very wrong perception of how much energy is required to prompt an LLM, and that the efficiency per prompt is getting better very rapidly.
It would be interesting to estimate how much energy one actually saves (or maybe not) by implementing something in one day instead of two weeks (to power laptops, displays and what not).
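A very rough sketch of how such an estimate could look. All numbers here are assumptions I made up for illustration (a ~100 W workstation including the display, 8-hour workdays, and the ~0.0003 kWh/prompt figure quoted earlier in the thread); datacenter overheads and training costs are deliberately ignored:

```python
# Back-of-envelope: workstation energy for a task done manually in two weeks
# vs. done in one day with heavy LLM assistance. All inputs are assumptions.
WORKSTATION_KW = 0.1     # assumed draw of laptop + display, in kW
HOURS_PER_DAY = 8        # assumed workday length
PROMPT_KWH = 0.0003      # per-prompt figure quoted earlier in the thread

def task_energy_kwh(days: float, prompts: int) -> float:
    """Workstation energy for the task plus marginal LLM prompt energy."""
    return days * HOURS_PER_DAY * WORKSTATION_KW + prompts * PROMPT_KWH

two_weeks_manual = task_energy_kwh(days=10, prompts=0)    # 10 workdays, no LLM
one_day_assisted = task_energy_kwh(days=1, prompts=500)   # heavy LLM use
print(two_weeks_manual, one_day_assisted)
```

Under these made-up inputs the assisted day comes out well below the manual two weeks, but the point of the sketch is only that the marginal prompt energy is small next to the workstation term; change the assumptions (cloud VMs, training amortization, rebound effects) and the picture can shift.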
The energy usage is an interesting question.
If I sit at my computer for a week, I consume energy. If I sit at my computer for a day, but use LLMs to work faster and deliver the same job in less time, the comparison is not so easy. Especially if I don’t work on a laptop, but on a virtual machine hosted in the cloud.
Of course, if the same number of developers remain active and, in addition to their computer use, they now also consume AI services, the electricity bill (and, more importantly, the water usage, which cannot be replenished using solar panels or wind) will go up. Whether we’ll keep our jobs is another question (but it has little to do with slop or not).
Certainly you can’t compare the cost of people and supporting them to the costs of LLMs. People are the reason for doing anything. LLMs are just a tool.
I was talking about a software developer, be it as work or as hobby, and the energy requirements. For all we know, the energy requirements of darktable development may be reduced, if developers become more efficient by using AI tools. Or maybe not. I’m just saying that the energy usage of AI is only one side of the coin; there’s also energy saved by the developer using less local energy, or if we use AI to improve the performance of our algorithms, then all users will use less energy, for years. It’s not simple, and not black and white.
The problem with the idea that getting tasks done quicker = less energy consumption is that, at least under corporate work, the expectation isn’t that you’ll do the same work faster and more efficiently, it’s that you’ll do more work.
When discussing the energy issues, I think most people are more concerned about the energy and water usage of training runs and data centers, not the individual cost per query. For me this is very important, because I live in an area that is hurting for water but is a prime target for data centers.
Of course you are allowed to criticise. That is not my point. And I am always sceptical about some of the aspects of LLMs.
But what I see is a lot of reactions like yours:
You take a moral high ground. It comes in the form of: I am not polluting in a certain way, I am not training the war machine, I am not doing this because of energy consumption.
But often there is a lot of hypocrisy around that statement. Like in yours: yes, but I have to make money. And yes, I am doing much better than before (literally like the commenter who reduced his flying from 70 to 65 flights per year).
And when someone points it out, they get angry. If you are really concerned: petition your government. Only that can work.
I don’t know how you can hold this position with a straight face when we’re living in a world with skyrocketing RAM, storage and compute prices due to the insane rate at which new mega-scale datacentres are being built, with all their resource consumption issues (both power and water) and the damage they do to the communities they’re built in: not only the power and water issues, but also sound and infrasound pollution.
Personally, I don’t use LLMs as a coding assistant at all. Putting prompts into the magic bot that spits out convincing-looking code is how you slip down the pathway from “It’s a tool and you have to use it correctly and review all of its code” to “It’s right most of the time, so I’m not checking as thoroughly any more”, and ultimately become entirely disconnected from your codebase and the important architectural decisions you made when designing it in the first place. It’s a tool that encourages and allows you to turn off the critical parts of your brain and degenerate into simple input-to-output thinking. Programming with an LLM, even done right, is like pair programming with a junior, except the junior never gets any better, and in the long run it would have been quicker to just do it yourself.
I do use LLMs for other tasks though; I’m not simply a hater for hating’s sake. The integration of Photoprism with a self-hosted Ollama instance for photo tagging is an amazing time saver when uploading large numbers of photos, and it does a good enough job to make searching easier. Ultimately it’s a task where subjectivity is fine, and no harm is done when it gets things wrong.
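For anyone curious, a tagging bridge like that essentially just posts the image to the local Ollama generate endpoint. A minimal sketch of the request it might build; the model name "llava", the prompt wording, and the endpoint URL are my assumptions for illustration, not what Photoprism actually sends:

```python
import base64
import json

# Assumed local Ollama endpoint; adjust host/port for your own setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_tag_request(image_bytes: bytes, model: str = "llava") -> str:
    """Build a JSON body asking a vision model for comma-separated photo tags."""
    payload = {
        "model": model,
        "prompt": "List short, comma-separated keywords describing this photo.",
        # Ollama accepts images as base64-encoded strings
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # one complete response instead of a token stream
    }
    return json.dumps(payload)

# Dummy bytes stand in for a real image file here.
body = build_tag_request(b"\x89PNG fake image bytes for illustration")
print(json.loads(body)["model"])
```

In a real setup you would read the photo from disk, POST this body to `OLLAMA_URL`, and feed the returned keywords into the photo library’s tag field.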
You are, possibly without meaning to, misrepresenting my words.
I said that (1) querying an LLM is not as costly as many folks think, as its energy cost is comparable to many other activities that we do every day without even thinking about it (like video or music streaming), and (2) their per-prompt energy efficiency is going up very, very fast.
These being facts, and not opinions, there is no “position” to hold.
Now, coming to your objection.
Yes, the aggregate compute requirements are skyrocketing. And the reason is simple: unlike TikTok, Netflix and Spotify, LLMs can be used to create value (they can also be used to generate fake videos and stupid images, which I find pointless, but who am I to judge), hence there is huge demand from people willing to pay to use them on a daily basis for production work.
Which ties back, although through a different route, to the OP. If the productive world wants to use this stuff at scale, that is probably evidence that it can be used to build useful stuff, i.e., non-slop.
That is a conjecture that remains to be seen. Every AI company that’s a household name (Anthropic, OpenAI, Anysphere, Perplexity) is losing money hand over fist, and the only thing keeping their valuations up is a deeply irrational market and self-dealing between all the major players, all promising each other future money on service agreements and projected profits that don’t line up with reality in the slightest. Some people and businesses might be paying for it, but no one is paying enough to keep it afloat. This would normally lead to a product failing, but AI is not allowed to fail, at least not yet, because of just how much money is tied up in these companies.
The projections made about productivity gains have not played out so far, not in real-world economic figures or in controlled studies, so what are we doing building these data centres and generating all this waste without proof of any benefit? Just because you are being told to use something, or being sold a product, does not mean that product is useful. Just because it can produce non-slop doesn’t mean it’s worth adopting when the reality is showing little to no effect on productivity in exchange for massive global waste.
Is the technology transformative? Also yes, even in this phase, and it has not reached its full potential yet.
Statistics just do not reflect (yet) the degree of impact that it’s having on the workplace. There are two main reasons: (1) really capable models surfaced only very recently (within the last 2-3 months), and (2) most of the people who use the stuff do not know how to use it properly.
I speak from direct experience, not from hearsay. In the last two months, the way that we work has been turned upside down, and oh boy does it make a difference. The extent of the transformation is difficult to imagine if you have not experienced it.
It always seems to be the way that the real improvements are just around the corner and that the next version is the one that’s really going to change everything. There’s plenty of people out there saying it’s made them so much more efficient, you just need to believe them, you’ll see one day! But that’s nothing but hearsay and meaningless without actual numbers or proof to back it up.
Self-experimentation is anecdote, not evidence, and intelligence doesn’t make you less susceptible to cognitive bias. The only things you can trust are well-designed and well-executed scientific studies, and the data we actually have about how the economy, hiring practices and profits have changed in this post-AI world; that evidence is not bearing out the gut feelings so many evangelistic adopters have.
I don’t think you can state that as a fact. For a given quality of response, you can now achieve it with far less energy than before, but most of the recent improvement in output comes from moving from simple one-shot prompting to “thinking” agentic systems. If you want more quality still, you have deep research. Model sizes have also been increasing, though that is at least levelling off a bit. The quality improvements people are chasing are increasing the energy cost. You have made statements as fact, but I don’t agree.