Friday, May 16, 2025

Why Using ChatGPT Is Not Bad for the Environment

Andy Masley:

It’s not bad for the environment if you or any number of people use ChatGPT, Gemini, Claude, Grok, or other large language model (LLM) chatbots. You can use ChatGPT as much as you like without worrying that you’re doing any harm to the planet.

[…]

Throughout this post I’ll assume the average ChatGPT query uses 3 Watt-hours (Wh) of energy, which is 10x as much as a Google search. This statistic is likely wrong. ChatGPT’s energy use is probably lower according to EpochAI. Google’s might be lower too, or maybe higher now that they’re incorporating AI into every search. We’re a little in the dark on this, but we can set a reasonable range. It’s hard for me to find a statistic that implies ChatGPT uses more than 10x as much energy as Google, so I’ll stick with this as an upper bound to be charitable to ChatGPT’s critics.

[…]

If you multiply an extremely small value by 10, it can still be so small that it shouldn’t factor into your decisions.

[…]

They hear about AI data centers rapidly growing, look around, and see that everyone’s using ChatGPT, and assume there must be some connection. […] The mistake they’re making is simple: ChatGPT and other AI chatbots are extremely, extremely small parts of AI’s energy demand.

Via Adam Engst:

Masley calculates that, on a daily basis, the average American uses enough energy for 10,000 ChatGPT prompts and consumes enough water for 24,000–61,000 prompts.
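The arithmetic behind that comparison is easy to check. Here’s a minimal sketch in Python, assuming Masley’s 3 Wh per prompt and roughly 30 kWh of household electricity per day (the 30 kWh figure is my own round approximation, not a number from either post):

    # Back-of-the-envelope check of the "10,000 prompts per day" comparison.
    # Assumptions: 3 Wh per ChatGPT prompt (Masley's working number) and
    # ~30 kWh/day of household electricity (a round approximation, not from the post).
    WH_PER_PROMPT = 3
    DAILY_ENERGY_WH = 30_000

    prompts_per_day = DAILY_ENERGY_WH / WH_PER_PROMPT
    print(f"{prompts_per_day:,.0f} prompts")  # 10,000 prompts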

Wayne Williams:

The power needed to run generative AI is pushing infrastructure beyond what traditional air cooling can handle.

To explore the scale of the challenge, I spoke with Daren Shumate, founder of Shumate Engineering, and Stephen Spinazzola, the firm’s Director of Mission Critical Services.

[…]

A typical ChatGPT query uses about 10 times more energy than a Google search – and that’s just for a basic generative AI function. More advanced queries require substantially more power and have to go through an AI Cluster Farm to process large-scale computing across multiple machines.

Dan Drake:

If you’re measuring energy consumption, you need to do a kind of “lifecycle analysis” -- if the choice is between using a traditional search engine and asking a chatbot, you should compare the entire workflow with each.

If I do a regular web search for something, I will frequently click three to four of the results and open them in new tabs, because I’m not sure exactly which one will answer my question; I might do another search. Each of those loads a website, with all the accompanying HTML, JS, and so on.

With chatbots, I find it’s more common for the response to have exactly what I want. “One and done”, as they say.
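Drake’s point can be made concrete with a toy model. Every number below is an illustrative assumption, not a measurement; the only claim is that the right comparison is per workflow, not per request:

    # Toy "lifecycle" comparison of a search workflow vs. a chatbot workflow.
    # All numbers are illustrative assumptions, not measurements.
    def workflow_energy_wh(requests, wh_per_request, pages_loaded=0, wh_per_page_load=0.0):
        """Total energy for one complete workflow, in watt-hours."""
        return requests * wh_per_request + pages_loaded * wh_per_page_load

    # Hypothetical: two searches at 0.3 Wh each plus four page loads at 0.2 Wh each,
    # versus a single 3 Wh chatbot query that answers the question directly.
    search_total = workflow_energy_wh(2, 0.3, pages_loaded=4, wh_per_page_load=0.2)  # 1.4 Wh
    chatbot_total = workflow_energy_wh(1, 3.0)                                       # 3.0 Wh

    # The per-workflow gap is smaller than the headline 10x per-request gap.
    print(search_total, chatbot_total)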

Also, as AI gets better, people will use it more. They will ask it to do deep research tasks that they would not have even attempted with Google. Or that perhaps they would have paid a person to do.

Update (2025-05-19): There are Hacker News and Lobsters pages for Masley’s post. Simon Willison says it’s “by far the most convincing rebuttal of this idea that I’ve seen anywhere.” Michael Lazar wrote a rebuttal, which I find to be long on axe grinding and rhetorical criticisms and short on substance (via Dustin Westphal). Masley has a follow-up post about what he got wrong.

I think the best criticism is that the narrow question Masley is investigating is not what really matters. If you’re against the idea of LLMs or the overall energy consumption of AI (including training and non-chatbot uses), you don’t particularly care about the incremental cost of one more person using ChatGPT. Also, the numbers for ChatGPT may not apply to other systems such as Grok.

Stephen Hackett:

As I wrote about earlier this week, xAI has broken ground on a second data center on Tulane Road here in Memphis that will require an unbelievable amount of electricity.

[…]

As seen here, the SELC has photographic evidence that some 35 turbines have been in operation at xAI’s initial data center, despite Memphis Mayor Paul Young claiming in mid-April that only 15 were in use. If 15 strikes you as an oddly specific number, it’s because the Shelby County Health Department’s permit to xAI only covers 15 permanent units.

If the plan outlined in this document comes to pass, there could be anywhere from 40 to 90 turbines running in south Memphis across the two sites.

Matt Birchler:

I could keep going, but I have some very real options for not only offsetting my ChatGPT usage, but also radically reducing my tech energy footprint overall. The easiest win for me is scheduling my Synology to power down overnight.

[…]

I didn’t write this post to suggest we should all use as much energy as possible, screw the environment, let’s just burn it all down. My intention was to present the same ChatGPT and other LLM energy use numbers you see in alarmist articles in a different way to show that you can tell different stories depending on how you present the same data. Do LLMs use more energy than a lot of other digital actions? Yeah, they seem to, but the base number is so microscopically small that we still aren’t dealing with large numbers in the grand scheme of things.
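For scale, using the post’s 3 Wh-per-prompt figure and an assumed 30 W idle draw for a NAS (my guess, not Birchler’s number), powering it down for eight hours a night saves the equivalent of roughly 80 prompts a day:

    # The Synology example in ChatGPT-prompt equivalents.
    # Assumptions: ~30 W idle draw for the NAS (a guess, not Birchler's number)
    # and 3 Wh per prompt (the figure used throughout the post).
    NAS_IDLE_WATTS = 30
    HOURS_OFF_PER_NIGHT = 8
    WH_PER_PROMPT = 3

    wh_saved = NAS_IDLE_WATTS * HOURS_OFF_PER_NIGHT  # 240 Wh per night
    print(wh_saved / WH_PER_PROMPT)                  # ~80 prompts' worth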

Update (2025-05-23): James O’Donnell and Casey Crownhart:

Today, new analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses—down to a single query—to trace where its carbon footprint stands now, and where it’s headed, as AI barrels towards billions of daily users.

[…]

By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year.

[…]

The Lawrence Berkeley researchers offered a blunt critique of where things stand, saying that the information disclosed by tech companies, data center operators, utility companies, and hardware manufacturers is simply not enough to make reasonable projections about the unprecedented energy demands of this future or estimate the emissions it will create. They offered ways that companies could disclose more information without violating trade secrets, such as anonymized data-sharing arrangements, but their report acknowledged that the architects of this massive surge in AI data centers have thus far not been transparent, leaving them without the tools to make a plan.
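The household comparison roughly checks out, using round figures of about 10,500 kWh per year for an average US home and about 130 million US households (my approximations, not numbers from the article):

    # Rough check of the "enough to power 22% of US households" comparison.
    # Assumptions: ~10,500 kWh/year per average US household and ~130 million
    # US households (round approximations, not figures from the article).
    AI_TWH_HIGH = 326                # upper end of the 2028 estimate, in TWh/year
    KWH_PER_HOUSEHOLD_YEAR = 10_500
    US_HOUSEHOLDS = 130_000_000

    households_powered = AI_TWH_HIGH * 1e9 / KWH_PER_HOUSEHOLD_YEAR  # TWh -> kWh
    print(f"{households_powered / US_HOUSEHOLDS:.0%}")  # ~24%, in line with the quoted 22%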

Via Nick Heer:

This robust story comes on the heels of a series of other discussions about how much energy is used by A.I. products and services. Last month, for example, Andy Masley published a comparison of using ChatGPT against other common activities. The Economist ran another, and similar articles have been published before. As far as I can tell, they all come down to the same general conclusion: training A.I. models is energy-intensive, using A.I. products is not, lots of things we do online and offline have a greater impact on the environment, and the current energy use of A.I. is the lowest it will be from now on.

Nick Heer:

Thinking about the energy “footprint” of artificial intelligence products makes it a good time to re-link to Mark Kaufman’s excellent 2020 Mashable article in which he explores the idea of a carbon footprint.

Update (2025-06-11): Jay Peters:

OpenAI CEO Sam Altman, in a blog post published Tuesday, says an average ChatGPT query uses about 0.000085 gallons of water, or “roughly one fifteenth of a teaspoon.” He made the claim as part of a broader post on his predictions about how AI will change the world.
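The two figures are consistent: a US gallon is 768 teaspoons (128 fluid ounces at 6 teaspoons each), so 0.000085 gallons is about a fifteenth of a teaspoon:

    # Unit check on Altman's figure: 0.000085 gallons per query vs. "one fifteenth of a teaspoon".
    TSP_PER_GALLON = 128 * 6        # 128 fl oz per US gallon x 6 teaspoons per fl oz
    gallons_per_query = 0.000085

    teaspoons = gallons_per_query * TSP_PER_GALLON
    print(teaspoons, 1 / teaspoons)  # ~0.065 tsp, i.e. about 1/15 of a teaspoon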

35 Comments


Using ChatGPT is bad for the brain.

And I will die on this goddamn hill.

I'll add what I already said on Mastodon: I love how a lot of people think it's either Google search or some crap 'AI' tool. I still do web searches myself, using DuckDuckGo (*not* its 'AI' assistant). I never used an 'AI' tool, never felt the need to, and I've always been able to find what I was looking for without feeling it took me too much time. And even if it takes some time, I learn more in the process. I train *myself*, and I trust my method & judgement more than a packaged 'AI' response/result.


@Riccardo My opinion on this is evolving. I think in many cases these tools are bad at providing reliable answers, but they can in some cases be better than Web searches at helping you towards finding answers yourself. I’ve had some really bad experiences with searches lately, where none of the engines find what I’m looking for anywhere on the first few pages of results. But then ChatGPT tells me something that isn’t fully true but that sets me in the right direction.


I recently tested the Deep Research feature in ChatGPT while searching for some used U.2 SSDs. I told it to search sites like eBay and locate all drives of a specific size, then search for specs of each model number with a link to the OEM data sheet for verification. Put it all in tables, one sorted by cost/TB, another sorted by write speed (an indicator of a low spec SSD). Look for a drive that is highly rated on both tables, I have a hit!
This is the sort of crap that ChatGPT is good at, web searching and reorganizing it. And even then, it took some prodding to get it organized the way I like. Well, it's easier than writing spreadsheets at least.


I’ve had some really bad experiences with searches lately, where none of the engines find what I’m looking for anywhere on the first few pages of results.

At least some of this is because of AI-generated spew pages taking over the results, so the problem has been caused by the solution.


I read Andy's post a week or two ago, and something fails my "smell" test. Let me try to explain....

So I cut my 5-minute shower by 1 second, and in return I can do 40 ChatGPT prompts? Then why are there so many AI power stations needed? Texas, Memphis, and now the United Arab Emirates. Maybe because the *current* power grid is built to *existing* demand. While I'm not sure I agree 100% with Stephen Hackett (512pixels.net) and how South Memphis is handling xAI's need for more than was agreed on, it definitely comes closer to passing my smell test than saying I just need to run my vacuum 10 minutes less or cut my daily shower by 1 second....

And what of the environment? What of coal, air pollution, etc.? It feels like Mr. Masley is trying to say... what? That we *don't* need these massive power farms but instead, if every human being could just change their daily lifestyles - whether they use AI or not - it all works out? How about we let the billionaires (many of whom paid our new president some big $$$) build - on their dime, not taxes - for their AI expenses.


@Dave AI overall needs a lot of power, but Masley says that 97+% of this is not due to chatbots. (I don’t know whether this is correct, but that’s what he says.) Perhaps the estimates are complicated by the fact that models can be trained and then used for multiple purposes. But the narrow argument is that the incremental cost of your doing some queries is very low.


AI fans are starting to use the bad arguments crypto fans used to justify crypto's insane energy usage.

That's not a good sign.


"They will ask it to do deep research tasks that they would not have even attempted with Google."

And get incorrect answers with made up information. That's got to be the most incredibly foolish statement I've ever read on your blog.

I can't believe I have to explain this to a professional computer programmer. LLMs DO NOT KNOW ANYTHING. They just assemble a sequence of words that will sound plausible. They are programmed not to be correct, but to sound knowledgeable and confident.

That fools a lot of people into thinking that they can be trusted to provide them with information. People who think a chatbot who sounds persuasive and assured must be an expert, because they have no idea what goes on under the hood. For someone who does know what goes on under the hood to say what you just said... I am boggled. How incredibly stupid.

Thank you, by the way. I've been uncertain if I should keep your blog on my RSS feed. But now that you've said the most foolish, profoundly unintelligent thing I've ever read on your site, I know that I've been wasting my time here. Plonk.


Someone else

Andy Masley is an Effective Altruism dude… I dunno… seems very hand-wavy to me (like EA). Also, discounts all the training energy use from competitors, (the plagiarism, etc.), the incorrect answers….

Spending lots of energy to generate an incorrect answer (and give that incorrect answer to thousands of people) — is that efficient?

Also, he posted some updates/corrections/deletions already:
https://substack.com/home/post/p-163672156


Someone else

@Glaurung,

> Also, as AI gets better, people will use it more. They will ask it to do deep research tasks that they would not have even attempted with Google. Or that perhaps they would have paid a person to do.

This is not an incorrect statement, and you’re reading more into it than was actually said.

This is what’s literally happening now — I see it all the time — folks (of all skill levels) treat it like they’re talking to an expert, and it actually sounds quite helpful for that or as a sounding board (energy use notwithstanding).

“Good enough and easy” beats “accurate and hard-to-use” for most consumers (actually, easy wins every time. Also, consumers don’t know the limitations. That’s a problem for sure, but doesn’t change the accuracy of the statement.)


@Glaurung I think you’re misunderstanding my point. Regardless of whether people should do this, they clearly will. Some already are. I think I’ve covered the problems with LLMs a lot on this blog, both in theory and in practice how they keep giving me bad answers. That said, I think asking a chatbot to pull together a bunch of links on a topic is a reasonable use of the technology. Obviously, you don’t ask it to fill in a table and trust the data, but doing a bunch of searches and organizing the results could be useful. Automating Google, basically. Anyway, the overall point I was getting at is that search queries and AI queries are kind of apples and oranges and we don’t really know what the future requirements and use patterns will be. Maybe we’re arguing about the bandwidth requirements of plaintext vs. HTML e-mails, but MP3s and video are on the way.


@Someone else Yeah, I mentioned the training question above. I’m seeing some criticisms of Masley and will put together some responses next week. It’s obviously hand-wavy, because it seems like a lot of the information isn’t public, and maybe he’s wrong, but he seems to be making a good-faith effort at an estimate. If someone has a good alternative analysis I’d be interested to see it.


"LLMs DO NOT KNOW ANYTHING. They just assemble a sequence of words that will sound plausible"

LLMs *do* know things and *don't* assemble a plausible-sounding answer; they assemble a *likely* answer based on their training data.

A likely answer corresponds to a mostly or entirely correct answer for many questions.

Suppose you ask it a question about a Java API. In that case, it will almost always provide an accurate answer, do so faster than it would have taken you to look it up yourself, and your IDE gives you immediate feedback about whether the answer is correct, so the cost of a wrong answer is low.

Suppose you ask it to list all vacuum robots with mops that have been released in the last two years and are highly rated by reviewers. You can't do this using a normal search engine, and, again, an LLM will provide a reasonably accurate answer that you can use as a basis for further research.

However, suppose you ask it a question that has no answer, doesn't have a likely, correct answer based on its training data, or has a lot of misleading information in its training data. Or suppose you ask it a leading question based on a false premise. In that case, it might happily hallucinate something for you that is entirely disconnected from reality. But that doesn't apply to most questions people ask LLMs.

There are plenty of reasons not to use LLMs, from energy usage (I think training costs have to be included in that calculation) to ethical reasons (the whole thing is just one global copyright infringement of everything ever produced by anyone), but claiming that LLMs aren't helpful, are always lying, or don't know anything is just false.


Hardik Panjwani

As a teacher, I find that ChatGPT often gives me wrong answers as it applies the wrong formula or sometimes even makes up a wrong formula from scratch.

Very unreliable. You have to know what you are doing so that you can catch it if it goes wrong. Trusting it blindly is a very bad idea.


@Plume... your first three sentences had me strongly disagreeing. Not because I consider them wrong, but rather the tone had me thinking you consider an LLM something more than a simple program. Fortunately, you referred to an LLM as "it" six times after that (even if you also said it has the capability of "hallucinating").

Back in the day... oh, maybe 5 years ago... this kind of computer program was called Machine Learning, or ML. But Artificial Intelligence or AI has a nicer ring. Have you seen the viral video of a robot in a China factory going crazy? Not sure if "it" was a vacuum robot (or one on a list an AI can produce), but was it AI causing the issue or simply a bug in "its" programming?

As for your second-to-last paragraph? One, AI do not "happily hallucinate". They give false answers. (See xAI and the "white genocide" issue due to "unauthorized modification".) Here's the thing... when this happens, regardless of the actual prompt... how does a human being know? I honestly do not know the answer to this, but there should be something - in the programming - prominently telling a human being that an LLM is just whistling in the wind.


> the whole thing is just one global copyright infringement of everything ever produced by anyone

How well does that statement hold water? As I understand the way this kind of machine learning works -- and please do keep in mind that I don't understand it deeply -- it cannot actually fully reproduce the work that goes into them, just bits and pieces of it rearranged in different ways.

So to me that has always brought up the question, in this specific respect, how is it different than a human brain? Our brains are also constantly absorbing all of the information and media we experience. We then incorporate that into our minds so that we can reproduce it later, be it a piece of information, aspects of a story, the sound or visuals from a piece of art or music, and so on.

There have been numerous cases of people accidentally plagiarizing other people because they didn't realize what they came up with wasn't original. (...assuming you give them the benefit of the doubt. I'm sure more than 0% of these cases are genuine and not people lying to cover their ass.)

If you ask something like Stable Diffusion to create a piece of artwork in the style of a known artist, which many people have decried as being theft or plagiarism, how is that different from another human artist studying the pieces of the aforementioned artist and then creating a pastiche?


"the tone had me thinking you consider an LLM something more than a simple program"

The fact that we anthropomorphize things does not mean we don't understand that they are things.

"AI do not "happily hallucinate". They give false answers."

This is a distinction without a difference.

"how does a human being know?"

By validating the answer.


If I have to validate the answer at the end of the process, using presumably actual research of actual human-generated work, why wouldn’t I just do the research in the first place? We’ve lost our minds with this stuff. I hope, once everyone finally goes clear, we still have humans generating the answers in the first place.


"why wouldn’t I just do the research in the first place"

For most questions, validating answers is orders of magnitude less work than researching them.

"We’ve lost our minds with this stuff"

You're correct, given the aggressive arguments people make in this thread.

If you have moral or ethical concerns about LLMs, please express them. Don't pretend they're worthless just because you don't like them.


Never said they were worthless or even that I don’t like them, so don’t tell me that I did.

The way people use them, the way so many point to them as the future or even present be-all of knowledge accruement, the way so many blindly believe they can do the job on their own or that they’re going to replace humans when the logic that defines their construction has so many clear dead ends … that’s what I don’t like. And I think a lot of people who grow reliant on them are going to get burned, and I simply hope that alternative sources of information will endure this completely unjustified hype cycle.

How’s that? Better?


Also: “For most questions, validating answers is orders of magnitude less work than researching them.”

Except, in this case, not really, because if an LLM gets an answer wrong, the mistake that led it there could be in the details as well as the opening or closing answer. There could be numerous incorrect details even lurking inside a “correct” answer. It doesn’t know what it doesn’t know. So, depending on what kind of research I’m doing and for what purpose, I can’t just query an answer and confirm the LLM is correct — I need to confirm every single strand of it, and at that point I might as well research and write the strands myself.


You did say that they were worthless when you wrote this:

"If I have to validate the answer at the end of the process, using presumably actual research of actual human-generated work, why wouldn’t I just do the research in the first place?"

You do have to validate the answer at the end of the process, so your if-condition is satisfied. You conclude that you should do the research yourself, making them worthless.

"don’t tell me that I did"

Don't tell me what to do 🤠

"so many point to them as the future or even present be-all of knowledge accruement (...)"

You specifically responded to me, and I did not say anything like that.


An LLM can be a thousand things -- me pointing out that it provides little value in one area isn't me saying it's worthless. Not sure why this is so confounding.

"You specifically responded to me, and I did not say anything like that."

You asked for a better response about my initial reasoning so I gave you one. Didn't realize I had to make it a story about you, though I guess we're all just putting words in each other's mouths anyway so what's the difference. I hope you and your chatbot have a great weekend.


@Plume... long tl;dr; coming (for those who may not know that's short-hand for too long, don't read).... I honestly think I'm in (mostly) agreement with you.

To quote your reply to me... a human being should ABSOLUTELY validate the answer. At the risk of taking this comment thread off topic?

(1) Do you know how frequently my Safari/Apple/AI tried to correct my spelling for this post? I'm a bad typist (and it tried to my typing "typo" but not "risk"), when does AI do the work of being able to express a *single* thought of a person? Not saying I believe about those who can use AI to do this, just wondering what you feel about this. Shouldn't post-grade school grads be able to do this? Instead I was lazy (like usual) and started my response but I was more intensive about the "dumb" intrusive spell-check that Apple does anymore. (And it corrected me on things it should.)

(2) To quote you: "For most questions, validating answers is orders of magnitude less work than researching them." Again, I think we agree! But you didn't "follow this up". Why is this bad? (Laziness, no need for it, you did your job already... maybe you need a half dozen more?) I *think* I agree with what you think. Still, this lacks reason. And please, I hope you didn't think I was aggressive... (and Safari spell-check wanted something simply wrong on that last word).

(3) Do I have moral or ethical concerns about AI? NO!!!! (And yet spell-check didn't cut in.) I am worried though. (And yes, I don't like them... again quoting you, whom I do not know.) If you noticed, I had some sort of "AI" (quotes intended) intrude on my poor typing skills. And YES, hopefully it's made my s*&tty typing better. And that's a good thing. But there's a word....

TRUST.

I purposely set that word apart. No spell-check, no AI. And I think this *IS NOT* "a distinction without a difference".

(2)


After re-reading through this fascinating comment thread... one last thing?

>"AI do not "happily hallucinate". They give false answers."
>This is a distinction without a difference.

The distinction is your use of the word "happily". They most definitely give wrong answers. It's all about the programming! Happily? that suggests an LLM has emotions!


Final comment. @Plume and @Billyok (and whoever else).... your comments are thought-provoking, and even if I disagree, things I can learn from. While I do not have any way to keep this discussion going except email (that will not be published), I would really like to continue this! Again, I have much to learn from your views. This isn't some kind of "empty" comment/thought. I'll gladly post my email in a comment if you'd like.


> "Very unreliable. You have to know what you are doing so that you can catch it if it goes wrong. Trusting it blindly is a very bad idea."

So basically, AI is like a journalist.


@Billyok:

"Except, in this case, not really"

If I ask an LLM, "List the ten highest-rated vacuum robots with mopping function," the answer is trivially easy to validate. If I ask an LLM, "How do I read an image file into a byte array in Java?" I will immediately know if it's wrong.

This applies to the vast majority of questions. Questions where validating the answer requires similar effort to generating it are quite uncommon.

"me pointing out that it provides little value in one area isn't me saying it's worthless"

Your point is that it provides no value for the thing 99% of people use it for 99% of the time. I will therefore acknowledge that you only said it was 98% useless, but that's within the margin of error of being completely useless 😃

"You asked for a better response about my initial reasoning"

Which was still in response to what I said.

"I hope you and your chatbot have a great weekend."

Why are you so unpleasant?


@Dave

"Happily? that suggests an LLM has emotions!"

I'm anthropomorphizing the LLM in this sentence, but that doesn't mean I think it has emotions. I sometimes refer to it as if it had emotions because these types of analogical descriptions can help discuss LLMs.

In this case, "happily" doesn't mean it experiences the emotion of happiness when it hallucinates things; it means it readily hallucinates things, easily falling into a state of hallucination. Of course, even the word "hallucination" is false, because LLMs don't hallucinate anything. They don't have senses in the way humans do. However, "hallucination" is a helpful analogy for "giving an answer it thinks is likely based on its training data, even though it is false." And of course, it doesn't "think", but again, using these words as analogies is helpful.

If I write "My old sedan groaned and complained as it climbed the steep hill," I'm not indicating that it has feelings. Likewise for LLMs.


@Bri

"it cannot actually fully reproduce the work that goes into them"

It often can, if prompted correctly (or wrongly, depending on viewpoint). This is one of the issues with LLMs generating code, since they can regurgitate code under specific licenses. See, for example:

https://arxiv.org/html/2408.02487v1

This is also how the NYT established that OpenAI had ingested its data: it got OpenAI's LLMs to output its content verbatim.

So can humans, but then again, if I memorize a book and publish a copy I write out from memory, I'm also violating copyright.

I will acknowledge that both sides have valid points on this topic.

From an ethical viewpoint, I think that current copyright laws are pretty terrible, but I also believe that LLMs reading all of human creation and then putting people out of work is pretty terrible.


In regards to LLMs often being wrong, I'll also point out that we all need to exercise the same kind of discernment when searching the web, because humans are also frequently wrong, even humans claiming to be authoritative about whatever they're speaking about.

The more I think about LLMs as a better search engine, *especially* the ones that actually do web searches and summarize results, the more I see their purpose. They are a very sophisticated way of aggregating information together that has a natural language interface. Understanding that the information it aggregates could be wrong, and its algorithm for aggregating it can also make mistakes, puts the whole thing in perspective.

And thank goodness it can do that too, because web searches are basically broken now. Using an LLM to find answers on the internet feels more like using Google back in 2008.

So really the problem is that laypeople want to think about LLMs as though they're the Enterprise computer, because that's how these tech companies portray them. Of course we all know they're not even close to that; as has been mentioned numerous times, they're not intelligent or rational.

As for the copyright issue... I think that has more to do with the totally upside down priorities of our society than these generative algorithms. Copyright has been prioritizing business interests over individuals for ages now. Writers, artists, and musicians have been undervalued and getting put out of work well before AI arrived. And the ones that do have work are generally being paid to create soulless corporate drivel, not art. In a healthy world, they'd have the opportunity to pursue and develop their art in a personal and fulfilling way, and not have it be tied to their ability to sustain themselves.


Sounds like a lot of excuses and rationalization


>Using an LLM to find answers on the internet feels more like using Google back in 2008.

Yes. Google could have owned this if they weren't so focused on selling ads.


Someone else

@someone, No, AI is not like a journalist. Journalists (good ones, in good publications, anyway) have fact checkers. LLM AI does not.

LLM AI is more like a convincing auto-complete-with-memory BS generation machine. Everything — both true and false — sounds equally plausible.

That’s a huge problem, because most people don’t take the second step of verification, and because they speak confidently, they make convincing arguments to non-experts or non-diligent people.

And as we know from our election, BS machines that are tuned to optimize attention-holding can and do lead to some pretty dark places.

That said, there ARE other ‘AI’ systems that do actual logic and calculations — I’m super curious about Wolfram Alpha which has been out for over a decade and does actual calculations (which LLMs do not).

There are plenty of other AI techniques, but wow, if we’re building new energy capacity, it sure does sound like we’re just wanting to warm the planet faster in the pursuit of money and mostly right answers.

Also, everyone, play this game (it’s low-energy-use, I promise):

You are the Gen AI


Paraphrasing an old adage: Don't anthropomorphize AIs. They hate when you do that.

I'm an IT guy (system admin), and my knowledge of how AIs work is limited. I can describe how it works to a non-IT person, but I'm not going to be rolling my own anytime soon. As a user of ChatGPT et al, I have found that I can ask some things and get reasonable answers, especially if what I'm asking is objective. I can ask for pieces of code or scripts and get decent results. And asking "how do I..."-type questions is also pretty reliable.

What a lot of people will struggle with is what many of you have pointed out above. AIs are not infallible. They're getting better as they learn more, but you kinda have to know what you're asking about before you ask the questions. I can ask, "how can I format a hard drive on Linux from the command line?", or "show me a function in Swift to calculate the area of a rectangle", and I can know (or find out pretty quickly) whether or not the results are right.

But if I ask questions about things that are outside my areas of expertise, I need to be more skeptical of what it says. The same can be said of search engine results, of course. But the confidence that AIs project when presenting results can give people a sense that it must know what it's talking about. I've asked technical questions and gotten wrong answers from ChatGPT, and it presents them as absolute facts. I can then say, "no, that's wrong." and it will say "You're right, sorry about that. Here's the correct answer." And sometimes it still comes back with another wrong answer. But if you don't already know at least a little about what you're asking about, you may not know that you're getting incorrect information.

Trust, but verify. I think we're maybe, kinda moving in that direction, although it probably needs to be more like "you may or may not be able to trust this information, depending on your situation."
