Friday, May 16, 2025

Why Using ChatGPT Is Not Bad for the Environment

Andy Masley:

It’s not bad for the environment if you or any number of people use ChatGPT, Gemini, Claude, Grok, or other large language model (LLM) chatbots. You can use ChatGPT as much as you like without worrying that you’re doing any harm to the planet.

[…]

Throughout this post I’ll assume the average ChatGPT query uses 3 Watt-hours (Wh) of energy, which is 10x as much as a Google search. This statistic is likely wrong. ChatGPT’s energy use is probably lower according to EpochAI. Google’s might be lower too, or maybe higher now that they’re incorporating AI into every search. We’re a little in the dark on this, but we can set a reasonable range. It’s hard for me to find a statistic that implies ChatGPT uses more than 10x as much energy as Google, so I’ll stick with this as an upper bound to be charitable to ChatGPT’s critics.

[…]

If you multiply an extremely small value by 10, it can still be so small that it shouldn’t factor into your decisions.

[…]

They hear about AI data centers rapidly growing, look around, and see that everyone’s using ChatGPT, and assume there must be some connection. […] The mistake they’re making is simple: ChatGPT and other AI chatbots are extremely, extremely small parts of AI’s energy demand.

Via Adam Engst:

Masley calculates that, on a daily basis, the average American uses enough energy for 10,000 ChatGPT prompts and consumes enough water for 24,000–61,000 prompts.
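
As a back-of-the-envelope check, here is what those two figures imply when combined (this just multiplies the 3 Wh assumption above by the 10,000-prompt equivalence; it is not an independent measurement):

    # Rough sanity check of the numbers quoted above.
    # 3 Wh/query is Masley's assumed upper bound; 10,000 queries is the
    # stated daily-energy equivalence for the average American.
    wh_per_query = 3
    queries_per_day_equivalent = 10_000
    implied_daily_energy_kwh = wh_per_query * queries_per_day_equivalent / 1000
    print(implied_daily_energy_kwh, "kWh/day")  # 30.0 kWh/day implied per person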

Wayne Williams:

The power needed to run generative AI is pushing infrastructure beyond what traditional air cooling can handle.

To explore the scale of the challenge, I spoke with Daren Shumate, founder of Shumate Engineering, and Stephen Spinazzola, the firm’s Director of Mission Critical Services.

[…]

A typical ChatGPT query uses about 10 times more energy than a Google search – and that’s just for a basic generative AI function. More advanced queries require substantially more power and have to go through an AI cluster farm to process large-scale computing across multiple machines.

Dan Drake:

If you’re measuring energy consumption, you need to do a kind of “lifecycle analysis” -- if the choice is between using a traditional search engine and asking a chatbot, you should compare the entire workflow with each.

If I do a regular web search for something, I will frequently click three to four of the results and open them in new tabs, because I’m not sure exactly which one will answer my question; I might do another search. Each of those loads a website, with all the accompanying HTML, JS, and so on.

With chatbots, I find it’s more common for the response to have exactly what I want. “One and done”, as they say.

Also, as AI gets better, people will use it more. They will ask it to do deep research tasks that they would not have even attempted with Google. Or that perhaps they would have paid a person to do.



Using ChatGPT is bad for the brain.

And I will die on this goddamn hill.

I'll add what I already said on Mastodon: I love how a lot of people think it's either Google search or some crap 'AI' tool. I still do web searches myself, using DuckDuckGo (*not* its 'AI' assistant). I never used an 'AI' tool, never felt the need to, and I've always been able to find what I was looking for without feeling it took me too much time. And even if it takes some time, I learn more in the process. I train *myself*, and I trust my method & judgement more than a packaged 'AI' response/result.


@Riccardo My opinion on this is evolving. I think in many cases these tools are bad at providing reliable answers, but they can in some cases be better than Web searches at helping you towards finding answers yourself. I’ve had some really bad experiences with searches lately, where none of the engines find what I’m looking for anywhere on the first few pages of results. But then ChatGPT tells me something that isn’t fully true but that sets me in the right direction.


I recently tested the Deep Research feature in ChatGPT while searching for some used U.2 SSDs. I told it to search sites like eBay and locate all drives of a specific size, then search for the specs of each model number with a link to the OEM data sheet for verification. Put it all in tables, one sorted by cost/TB, another sorted by write speed (an indicator of a low-spec SSD). Look for a drive that is highly rated on both tables, and I have a hit!
This is the sort of crap that ChatGPT is good at: web searching and reorganizing the results. And even then, it took some prodding to get it organized the way I like. Well, it's easier than writing spreadsheets, at least.
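
A minimal sketch of that kind of reorganization (not the actual Deep Research output; the drives, prices, and speeds below are made-up placeholders, and the listing data is assumed to have already been gathered):

    # Sketch: reorganize already-collected SSD listings into tables sorted by
    # cost per TB and by write speed. All data here is hypothetical.
    drives = [
        {"model": "ExampleDrive A", "capacity_tb": 3.84, "price_usd": 180, "write_mbps": 1800},
        {"model": "ExampleDrive B", "capacity_tb": 7.68, "price_usd": 420, "write_mbps": 3200},
        {"model": "ExampleDrive C", "capacity_tb": 3.84, "price_usd": 150, "write_mbps": 900},
    ]

    for d in drives:
        d["cost_per_tb"] = d["price_usd"] / d["capacity_tb"]

    by_cost = sorted(drives, key=lambda d: d["cost_per_tb"])                # cheapest per TB first
    by_speed = sorted(drives, key=lambda d: d["write_mbps"], reverse=True)  # fastest writes first

    # A drive near the top of both tables is the "hit".
    for d in by_cost:
        print(f'{d["model"]}: ${d["cost_per_tb"]:.0f}/TB, {d["write_mbps"]} MB/s write')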


I’ve had some really bad experiences with searches lately, where none of the engines find what I’m looking for anywhere on the first few pages of results.

At least some of this is because pages of AI-generated spew are taking over the results, so the problem has been caused by the solution.


I read Andy's post a week or two ago, and something fails my "smell" test. Let me try to explain....

So I cut my 5-minute shower by 1 second, and in return I can do 40 ChatGPT prompts? Then why are so many AI power stations needed? Texas, Memphis, and now the United Arab Emirates. Maybe because the *current* power grid is built to *existing* demand. While I'm not sure I agree 100% with Stephen Hackett (512pixels.net) and how South Memphis is handling xAI's need for more than was agreed on, it definitely comes closer to passing my smell test than saying I just need to run my vacuum 10 minutes less or cut my daily shower by 1 second....

And what of the environment? What of coal, air pollution, etc.? It feels like Mr. Masley is trying to say... what? That we *don't* need these massive power farms but instead, if every human being could just change their daily lifestyles - whether they use AI or not - it all works out? How about we let the billionaires (many of whom paid our new president some big $$$) build - on their dime, not taxes - for their AI expenses.


@Dave AI overall needs a lot of power, but Masley says that 97+% of this is not due to chatbots. (I don’t know whether this is correct, but that’s what he says.) Perhaps the estimates are complicated by the fact that models can be trained and then used for multiple purposes. But the narrow argument is that the incremental cost of your doing some queries is very low.


AI fans are starting to use the bad arguments crypto fans used to justify crypto's insane energy usage.

That's not a good sign.


"They will ask it to do deep research tasks that they would not have even attempted with Google."

And get incorrect answers with made up information. That's got to be the most incredibly foolish statement I've ever read on your blog.

I can't believe I have to explain this to a professional computer programmer. LLMs DO NOT KNOW ANYTHING. They just assemble a sequence of words that will sound plausible. They are programmed not to be correct, but to sound knowledgeable and confident.

That fools a lot of people into thinking that they can be trusted to provide them with information. People who think a chatbot that sounds persuasive and assured must be an expert, because they have no idea what goes on under the hood. For someone who does know what goes on under the hood to say what you just said... I am boggled. How incredibly stupid.

Thank you, by the way. I've been uncertain if I should keep your blog on my RSS feed. But now that you've said the most foolish, profoundly unintelligent thing I've ever read on your site, I know that I've been wasting my time here. Plonk.


Someone else

Andy Masley is an Effective Altruism dude… I dunno… seems very hand-wavy to me (like EA). Also, he discounts all the training energy use from competitors (the plagiarism, etc.), the incorrect answers….

Spending lots of energy to generate an incorrect answer (and give that incorrect answer to thousands of people) — is that efficient?

Also, he posted some updates/corrections/deletions already:
https://substack.com/home/post/p-163672156


Someone else

@Glaurung,

> Also, as AI gets better, people will use it more. They will ask it to do deep research tasks that they would not have even attempted with Google. Or that perhaps they would have paid a person to do.

This is not an incorrect statement, and you’re reading more into it than was actually said.

This is what’s literally happening now — I see it all the time — folks (of all skill levels) treat it like they’re talking to an expert, and it actually sounds quite helpful for that or as a sounding board (energy use notwithstanding).

“Good enough and easy” beats “accurate and hard-to-use” for most consumers (actually, easy wins every time. Also, consumers don’t know the limitations. That’s a problem for sure, but doesn’t change the accuracy of the statement.)


@Glaurung I think you’re misunderstanding my point. Regardless of whether people should do this, they clearly will. Some already are. I think I’ve covered the problems with LLMs a lot on this blog, both in theory and in practice how they keep giving me bad answers. That said, I think asking a chatbot to pull together a bunch of links on a topic is a reasonable use of the technology. Obviously, you don’t ask it to fill in a table and trust the data, but doing a bunch of searches and organizing the results could be useful. Automating Google, basically. Anyway, the overall point I was getting at is that search queries and AI queries are kind of apples and oranges, and we don’t really know what the future requirements and use patterns will be. Maybe we’re arguing about the bandwidth requirements of plaintext vs. HTML e-mails, but MP3s and video are on the way.


@Someone else Yeah, I mentioned the training question above. I’m seeing some criticisms of Masley and will put together some responses next week. It’s obviously hand-wavy, because it seems like a lot of the information isn’t public, and maybe he’s wrong, but he seems to be making a good-faith effort at an estimate. If someone has a good alternative analysis I’d be interested to see it.


"LLMs DO NOT KNOW ANYTHING. They just assemble a sequence of words that will sound plausible"

LLMs *do* know things and *don't* assemble a plausible-sounding answer; they assemble a *likely* answer based on their training data.

A likely answer corresponds to a mostly or entirely correct answer for many questions.

Suppose you ask it a question about a Java API. In that case, it will almost always provide an accurate answer, do so faster than it would have taken you to look it up yourself, and you receive immediate feedback from your IDE on whether the answer is correct, so the cost of a wrong answer is low.

Suppose you ask it to list all vacuum robots with mops that have been released in the last two years and are highly rated by reviewers. You can't do this using a normal search engine, and, again, an LLM will provide a reasonably accurate answer that you can use as a basis for further research.

However, suppose you ask it a question that has no answer, doesn't have a likely, correct answer based on its training data, or has a lot of misleading information in its training data. Or suppose you ask it a leading question based on a false premise. In that case, it might happily hallucinate something for you that is entirely disconnected from reality. But that doesn't apply to most questions people ask LLMs.

There are plenty of reasons not to use LLMs, from energy usage (I think training costs have to be included in that calculation) to ethical reasons (the whole thing is just one global copyright infringement of everything ever produced by anyone), but claiming that LLMs aren't helpful, are always lying, or don't know anything is just false.


Hardik Panjwani

As a teacher, I find that ChatGPT often gives me wrong answers as it applies the wrong formula or sometimes even makes up a wrong formula from scratch.

Very unreliable. You have to know what you are doing so that you can catch it if it goes wrong. Trusting it blindly is a very bad idea.


@Plume... your first three sentences had me strongly disagreeing. Not because I consider them wrong, but rather the tone had me thinking you consider an LLM something more than a simple program. Fortunately, you referred to an LLM as "it" six times after that (even if you also said it has the capability of "hallucinating").

Back in the day... oh, maybe 5 years ago... this kind of computer program was called Machine Learning, or ML. But Artificial Intelligence or AI has a nicer ring. Have you seen the viral video of a robot in a Chinese factory going crazy? Not sure if "it" was a vacuum robot (or on a list an AI can produce), but was it AI causing the issue or simply a bug in "its" programming?

As for your second-to-last paragraph? One, AI do not "happily hallucinate". They give false answers. (See xAI and the "white genocide" issue due to "unauthorized modification".) Here's the thing... when this happens, regardless of the actual prompt... how does a human being know? I honestly do not know the answer to this, but there should be something - in the programming - prominently telling a human being that an LLM is just whistling in the wind.


"the tone had me thinking you consider an LLM something more than a simple program"

The fact that we anthropomorphize things does not mean we don't understand that they are things.

"AI do not "happily hallucinate". They give false answers."

This is a distinction without a difference.

"how does a human being know?"

By validating the answer.


If I have to validate the answer at the end of the process, using presumably actual research of actual human-generated work, why wouldn’t I just do the research in the first place? We’ve lost our minds with this stuff. I hope, once everyone finally goes clear, we still have humans generating the answers in the first place.


"why wouldn’t I just do the research in the first place"

For most questions, validating answers is orders of magnitude less work than researching them.

"We’ve lost our minds with this stuff"

You're correct, given the aggressive arguments people make in this thread.

If you have moral or ethical concerns about LLMs, please express them. Don't pretend they're worthless just because you don't like them.


Never said they were worthless or even that I don’t like them, so don’t tell me that I did.

The way people use them, the way so many point to them as the future or even present be-all of knowledge accruement, the way so many blindly believe they can do the job on their own or that they’re going to replace humans when the logic that defines their construction has so many clear dead ends … that’s what I don’t like. And I think a lot of people who grow reliant on them are going to get burned, and I simply hope that alternative sources of information will endure this completely unjustified hype cycle.

How's that? Better?


Also: “For most questions, validating answers is orders of magnitude less work than researching them.”

Except, in this case, not really, because if an LLM gets an answer wrong, the mistake that led it there could be in the details as well as the opening or closing answer. There could be numerous incorrect details even lurking inside a “correct” answer. It doesn’t know what it doesn’t know. So, depending on what kind of research I’m doing and for what purpose, I can’t just query an answer and confirm the LLM is correct — I need to confirm every single strand of it, and at that point I might as well research and write the strands myself.


You did say that they were worthless when you wrote this:

"If I have to validate the answer at the end of the process, using presumably actual research of actual human-generated work, why wouldn’t I just do the research in the first place?"

You do have to validate the answer at the end of the process, so your if-condition is satisfied. You conclude that you should do the research yourself, making them worthless.

"don’t tell me that I did"

Don't tell me what to do 🤠

"so many point to them as the future or even present be-all of knowledge accruement (...)"

You specifically responded to me, and I did not say anything like that.
