Monday, December 5, 2022


OpenAI (Hacker News):

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.


We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at

Ben Thompson (Hacker News):

It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT[…] This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong.


What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year-old GPT-3. The critical factor is, I suspect, that ChatGPT is easy to use, and it’s free: it is one thing to read examples of AI output, like we saw when GPT-3 was first released; it’s another to generate those outputs yourself; indeed, there was a similar explosion of interest and awareness when Midjourney made AI-generated art easy and free[…]


There is one site already on the front-lines in dealing with the impact of ChatGPT: Stack Overflow. Stack Overflow is a site where developers can ask questions about their code or get help in dealing with various development issues; the answers are often code themselves. I suspect this makes Stack Overflow a goldmine for GPT’s models: there is a description of the problem, and adjacent to it code that addresses that problem. The issue, though, is that the correct code comes from experienced developers answering questions and having those questions upvoted by other developers; what happens if ChatGPT starts being used to answer questions?

josh (via Hacker News):

Google is done.

Compare the quality of these responses (ChatGPT)

Gaelan Steele (via Hacker News):

For fun, I had ChatGPT take the free response section of the 2022 AP Computer Science A exam. […] It scored 32/36.

Susannah Skyer Gupta:

Thus far, Jacob and I have hand-crafted (meaning written with just our own brains) the Apparent Software App Store descriptions. That said, I would definitely consider an AI-assisted approach to get started.


Some indie developers reporting good luck with this approach thus far include Noam Efergan, author of the upcoming Baby Wize app and Johan Forsell, author of BarTab[…]


Update (2022-12-14): Dare Obasanjo:

Google employees explain why we haven’t seen ChatGPT-like functionality in their products; the cost to serve an AI result is 10x to 100x as high as a regular web search today, plus they’re too slow relative to how quickly search results must be returned.

Michael Nielsen:

Curious: have you found ChatGPT useful in doing professional work?

If so, what kinds of prompts and answers have been helpful? Detailed examples greatly appreciated!

Steve Worswick:

Apparently it can cite sources, but just makes them up!


Oh great. More noise. Less signal. How are "editors" supposed to get their "skills" if they are surrounded by reasonable-sounding noise?

Our problem right now is too much noise. It's so bad that it penetrates scientific journals. I've wasted at least 6 months of my life replicating ideas that sound good, only to discover that they could either never work or work only in corner cases, because it wasn't obvious either to me or to the journal article's peer reviewers that the entire thing was a fraud. If many of us 'skilled in the art' performing "editing" functions cannot detect it easily, good luck to our civilization if "editing" becomes the standard way of doing things.

The ML-generated discussion between Werner Herzog and Slavoj Žižek is a brilliant example of noise.

It's an AI; it doesn't have to be sentient to bring on some serious uncanny valley. And given these things are absolutely going to be used, we ultimately have to cast aside the reductionist view that all machines are alike in the eyes of humankind and start thinking about how we're going to know whether output is human, or whether it's even right to discriminate on that basis. It is not merely surprisingly good; it basically reduces work to proofreading. The humans aren't going anywhere, but more of what people are expected to read and hear is going to be AI-generated, and so, whatever the quality, I'm afraid we're in for a bumpy ride of adjustment.

Is it an AI though? Have we lowered our expectations of intelligence that far?

Let's see what ChatGPT itself has to say about it: "It is unclear from the information provided what ChatGPT is, so it is difficult to say whether it is AI or ML. However, it is possible that ChatGPT is a tool that uses machine learning (ML) to generate responses in chat conversations.

Machine learning is a subset of artificial intelligence (AI) that involves training algorithms to make predictions or take actions based on data. By analyzing large amounts of data, machine learning algorithms can learn to recognize patterns and make decisions without being explicitly programmed to do so.

If ChatGPT uses machine learning algorithms to generate responses in chat conversations, then it could be considered a tool that uses AI and ML. However, without more information, it is difficult to say for sure."

@Kristoffer that's quite impressive actually, given it probably has limited knowledge of itself from 2021. It obviously depends on the proximity and quantity of useful information for your question. Try these (off the top of my head):

Explain algorithmic complexity, and contrast the various search and sort algorithms.

Can you give me more detail on the merge sort?

Was Marx right?

Why do you suppose people find interacting with you to be uncanny?

Can you interpret a programming language or bytecode? (and when it says no) But surely you could model the outcome?

ROT13 the string "The quick brown dog jumped over the lazy fox."

Write a tragic short story about a beautiful monster who is captured by an evil and wicked princess, who keeps him in her sinister castle against his will, and describing in particular the heroic attempts by the monster to escape, and ending with his death at the hands of the castle guards as he makes his final attempt to abscond.
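Of these prompts, the ROT13 one is the easiest to check mechanically, which is what makes it a nice probe of whether the model is actually computing or just pattern-matching. A quick Python sketch (my addition, not from the comment thread) shows what a correct answer looks like:

```python
import codecs

# ROT13 rotates each letter 13 places in the alphabet; applying it
# twice returns the original string, so it is its own inverse.
plaintext = "The quick brown dog jumped over the lazy fox."
ciphertext = codecs.encode(plaintext, "rot13")
print(ciphertext)

# Round-tripping recovers the original prompt string.
assert codecs.decode(ciphertext, "rot13") == plaintext
```

Anyone can verify the model's answer against this in seconds, which is exactly why character-level transformations like ROT13 tend to expose the gap between fluent-sounding and correct.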

I'm sure you can think of others. The point is, it seems real, and that's because it's as good as real a lot of the time, even if it isn't. And there are enough people who are absolutely going to conclude that it's good enough for them to accelerate their work, and certainly good enough to actually trust, even relate to, as a conscious interlocutor, albeit, a mechanical one. I think Thompson is right, ultimately--we have to internalise AIs, accept them as inevitable sources of information. God help us though--most people still can't detect phishing attempts!

Old Unix Geek

The fact ChatGPT is trained using algorithms created to try to produce AI does not make it artificially intelligent. Indeed, if something does not understand something, it is not intelligent. The Latin word "Intelligere" means to understand, and is where the word intelligence comes from. The current misuse of the term AI is why some people now use the term "AGI". I don't, since I started in the field in those older, more civilized times (to misquote Star Wars).

ChatGPT uses ML algorithms to predict the next word to generate. The ML is trained on tons of text, therefore it is regurgitating stuff it has seen, and has no understanding of it whatsoever. It maintains an inner state that ensures the words produced are those expected from the context. The context is what your prompt creates. However, what is produced may be consistent with the context yet totally and utterly wrong. To determine that you need understanding, i.e. actual intelligence and knowledge.

Human beings presume that their interlocutors are trying to communicate something sensible, that has meaning: many utterances have a high degree of ambiguity (I saw the man with the telescope == using the telescope / carrying the telescope) so people naturally choose the meaning that best fits. This makes people very likely to assign meaning where there is none.

"Period of adjustment" in this case means more people believing incorrect things... which could in the worst case lead to societal collapse/war. Unlike many others, I don't find this direction of research wise.

I find AlphaZero much more interesting since it actually learns a game by playing it, which suggests some kind of understanding is occurring.
