Thursday, February 2, 2023

ChatGPT Plus

OpenAI (Hacker News):

The new subscription plan, ChatGPT Plus, will be available for $20/month, and subscribers will receive a number of benefits:

  • General access to ChatGPT, even during peak times
  • Faster response times
  • Priority access to new features and improvements

Johan Lajili (via Hacker News):

Whereas you might think “well, if it’s not broken don’t fix it”, I believe the web as a way to access information is getting worse by the day. Content generated with GPT-3 is going to start to show up for every long tail search under the sun, whereas regular content is going to get even heavier with SEO keywords to survive. The web is going to get worse and worse, and the only way to get good information is with a system that can extract the signal from the noise, a.k.a. ChatGPT.

Arvind Narayanan and Sayash Kapoor (via Hacker News):

The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference.

Yet, there are three kinds of tasks for which ChatGPT and other LLMs can be extremely useful, despite their inability to discern truth in general:

  1. Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.

  2. Tasks where truth is irrelevant, such as writing fiction.

  3. Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation.


8 Comments

I use ChatGPT regularly for work, mostly as a springboard when I'm writing bulk copy. Like, wow, a text about ALT tags.

Then I do a bit of editing, create some helpful illustrations, and the job's done.

I was also asked to elaborate on a client case that was written in bullet points.

ChatGPT provided a page of buzzword heavy nonsense, which I trimmed down to half a page.

It's marvelous.

Old Unix Geek

Although I work in AI, I consider ChatGPT to be pretty much the work of the devil, and I believe it and its ilk have a good chance of destroying our society. It's an interesting thing to run in the lab, but it shouldn't be a product.

Firstly, it is a very credible bullshitter. It doesn't understand anything; it only predicts the probability distribution of the next word to emit, given a training set of examples. That means that when generating scientific-sounding text, it knows that this is where a citation should go, so it produces a citation, which may or may not exist, and even if it does exist may or may not (mostly not) actually make the point ChatGPT generated. Because it repeats things it was trained on, it will repeat all sorts of old wives' tales. And because the probability distribution always sums to one, it will sound equally confident about everything it says, however thin the supporting evidence in the training set.
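That last point can be made concrete with a minimal sketch (my own toy example in Python, not any real model's API or vocabulary): a language model maps context to raw scores over its vocabulary, and softmax turns those scores into a distribution that sums to one, so sampling always emits some token with full fluency, however weak the evidence behind the scores.

    import numpy as np

    # Hypothetical vocabulary and logits, purely for illustration.
    vocab = ["exists", "does-not-exist", "is-fabricated"]

    def softmax(logits):
        exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
        return exps / exps.sum()

    # Nearly uninformative scores still normalize to a proper distribution.
    weak_evidence = np.array([0.1, 0.0, -0.1])
    probs = softmax(weak_evidence)
    print(dict(zip(vocab, probs.round(3))))  # always sums to 1.0

    # Sampling emits *some* token regardless; there is no built-in
    # "I don't know" unless such text was itself in the training data.
    print(np.random.choice(vocab, p=probs))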

Secondly, given how easily people are conned, we need more truth, not the bullshit everyone already believes. Sorry, Mr Shortcut, but we need less "bulk copy" not more. Your job doesn't improve the world, it just makes it harder to find useful stuff.

Similarly, we also need software written by people who know what they are doing, not random bits of open-source code merged together and checked by “editors”. It's well known that reading software is harder than writing it, so the notion that you can just proofread whatever ChatGPT produces might sound good, but it will lead to even more bug-ridden software as confidence-tricksters get jobs they could not otherwise bluff their way into.

Thirdly, since the probability landscape large language models learn is derived from copyrighted materials, all the "knowledge" they have is stolen. I, for one, did not contribute to GPL software for its source code to be stolen by large corporations. I contributed mostly to Linux and other Unixy tools because I wanted everyone to have tools that were independent of large corporations. I did not even want people lifting bits of what I wrote to put into other products. I did not write papers and books for someone to "auto-generate" my ideas, but to share my understanding and delight with other people who were interested in the same stuff.

If anything, therefore, ChatGPT will further erode people's understanding of competence. The “Mr Shortcuts” of the world will produce cheap and cheerful content, undercutting the people who actually know something and who charge more because it took work to learn what they know. Youngsters will wonder why they should bother to learn to create anything. Why put in the hard work, when any skill is just a replaceable commodity?

In the long term, who will produce the new “training material” for ChatGPT? The manifold will just become more and more self-similar, more and more full of crap, as the training set consists more and more of data generated by previous versions of ChatGPT.
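That feedback loop is easy to simulate with a toy model (my own sketch, not from any of the linked pieces): fit a distribution to data, sample from the fit, refit on the samples, and repeat. The fitted spread tends to collapse as the generations go by.

    import numpy as np

    # Toy "model collapse": each generation is trained only on samples
    # drawn from the previous generation's fitted model.
    rng = np.random.default_rng(42)
    mu, sigma = 0.0, 1.0  # generation 0: the real data distribution
    for gen in range(1, 51):
        samples = rng.normal(mu, sigma, size=10)         # last model's output
        mu, sigma = samples.mean(), samples.std(ddof=1)  # refit on it alone
        if gen % 10 == 0:
            print(f"generation {gen}: sigma = {sigma:.3f}")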

The notion that skilled people disappear when there seems to be a better-paying adjacent field is not new. A similar phenomenon is happening with electrical engineers: software seems to bring in so much money that few students now bother to become E.E.s. Software seems easier. No math needed. So now there's a shortage of people to develop the next CPUs and GPUs. Oh well, who needs those? I guess we'll end up buying them from the Chinese and the Russians (a quarter of whom are engineers, so despite their smaller population they have more engineers than the US).

Finally, I actually believe it would be relatively easy to take GPT-x and use it to kill people. You might remember the story of a young woman who told her depressed friend, via social media, “Do it already!”, and he killed himself. What if someone built a crawler that finds lonely people, uses GPT-x to generate friendly messages, and, once the lonely people depend on these virtual friends, starts telling them to kill themselves? Using reinforcement learning, the algorithm could get better and better at whatever it was doing. Even if no one builds a killer bot, the same technology could be used to persuade people to vote for someone, or to buy some product.

It's actually ridiculous to me that in a timeline in which half the US believes Russia won Trump's election through disinformation, something like ChatGPT is legal. (No, bullshit generators don't get freedom of speech. I don't think corporations should either. Only people.)

If you ever watched the sci-fi series Dollhouse, somehow this development reminds me of it. Everything seems fine, until a tipping point of degeneration is reached.

I actually agree with you 100%, and I'm hoping a byproduct of ChatGPT is that people realize that we don't need bulk copy.

I think it will kill content created for SEO (which today mostly consists of lazy rewrites and copy-pastes), because there will be such a deluge of it.

I hope more considered, thought out pieces that try to find unexpected angles rise to the top.

Because everything else is just a chat away. Why Google, then click, when I can just ask?

Then there's the necessary bulk copy in training manuals, where mundane summaries and rewrites need to be right there, not a click away.

The amount of time copy editors will save by having ChatGPT as a co-author is great. Time they can spend on better things.

I also think the AI code thing will be for people who can't code and can't afford to hire someone who can. We will see tons of really shitty things, tons of scams and viruses, and a few nuggets of joy.

Old Unix Geek

I'm glad I didn't offend you, Mr Shortcut. I fear that real A(G)I will be needed to ensure the more considered pieces can be found and rise to the top... and despite today's hype, that is still far away.

I rest my case about the danger of ChatGPT… A judge using ChatGPT to research the law in a case is quite literally insanity. It's further gone than I imagined in my worst nightmares.

Old Unix Geek

And here we are, shooting ourselves in the foot, as usual. The problem isn't the AI enthusiast who can't train a GPT-4 on his GPU rig and shares his code, but the giant corporations who want to milk everyone else's IP for all they can.

My kingdom for entry into a less stupid timeline!

I don't think any of us will see AGI. The comical stupidity of ChatGPT and the silly symbols-versus-neural-nets turf war in research are but two of my reasons for that.

Hopefully hand curated sites, like this one, will become more appreciated going forward.

Is there a tip jar?

@Kristoffer Yes, there’s a Patreon.

Old Unix Geek

It seems it was trained on some pretty racist material…
