Wednesday, March 15, 2023

GPT-4

OpenAI (Hacker News):

GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.

[…]

We are releasing GPT-4’s text input capability via ChatGPT and the API (with a waitlist). To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start. We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements.

Hartley Charlton:

Apple is testing generative AI concepts that could one day be destined for Siri, despite fundamental issues with the way the virtual assistant is built, the New York Times reports.

Employees were apparently briefed on Apple’s large language model and other AI tools at the company’s annual AI summit last month. Apple engineers, including members of the Siri team, have reportedly been testing language-generation concepts “every week” in response to the rise of chatbots like ChatGPT.

Update (2023-03-20): Gary Marcus (via Hacker News):

Chomsky co-wrote a New York Times op-ed the other day, and everyone is out there once again to prove they are smarter than he is, in the smuggest possible language they can muster.

Update (2023-03-22): Bill Gates:

In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.

[…]

I thought the challenge would keep them busy for two or three years. They finished it in just a few months.

In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam—and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam.

Update (2023-03-24): DV (via Hacker News):

You might know that MSFT has released a 154-page paper on #OpenAI #GPT4, but did you know they also commented out many parts of the original version?

A thread of hidden information from their LaTeX source code.

10 Comments

Siri on HomePod does this fun new thing where, when I tell Siri to “stop” a timer alarm, she stops the alarm, then replies “there is nothing to stop.”

Maybe it's a call for help?

Old Unix Geek

GPT-4 can generate the Swift code for an app for someone who knows literally nothing about coding.

It scrapes various third-party websites suggested by ChatGPT-4 and displays the top movies. It uses SwiftUI. The code is here. I presume a good programmer would have taken at least 4 hours to write this, even if it doesn't require a deep understanding of computer science. I'm not sure the world is ready for this.

I submitted the following to OpenAI through their job application form, regarding their proud claim on their home page:

----------
Your statement:

"For example, it [GPT-4] passes a simulated bar exam with a score around the top 10% of test takers; ..."

is, to say the least, deliberately misleading and irresponsible. No AI can "pass a bar exam." All any AI can do is "generate text that adequately answers questions commonly found on a bar exam, thus appearing to pass the exam." The difference is not just semantic: "passing a bar exam" encourages one to depend on the AI as a substitute for a lawyer, whereas "generating text...that passes a bar exam" clearly indicates that the AI Is Not A Lawyer, which is the true state of affairs.

There is no link on OpenAI's website that I can find to submit feedback, or to contact anyone, for any purpose at all, other than this form. So much for "Artificial general intelligence has the potential to benefit nearly every aspect of our lives—so it must be developed and deployed responsibly."

----------
It looks like AI is the new Bitcoin.

Old Unix Geek

Despite the smiling and reassuring OpenAI people telling everyone that their "capped profit structure" means they are doing this for the benefit of everyone... I am highly suspicious of their motives.

They were founded as a non-profit. Elon Musk, who literally gave them $100M to get started, is confused as to how they became a for-profit. If they've essentially cheated one of their largest donors, how can they be trusted to keep their word this time?

They originally said they were developing AI for the benefit of everyone, to prevent it being controlled by a single corporation. However, they now refuse to disclose any details of GPT-4's architecture that would enable alternatives. They are, themselves, a single corporation.

And just as they've stolen the material on which they train their system, they're using innovations created by others, such as Google's Transformer architecture, to build it. The output of their system will pollute the well (the data source they trained on, i.e., the internet and GitHub), preventing other systems from being trained on it: they claim ownership of their system's output, and their system's hallucinations will contaminate that data source. It will also reduce their competitors' interest in releasing their discoveries to the public, since they know a well-funded competitor is happy to take but not to reciprocate.

Why do I say stolen about the training material? As Pedro Domingos has shown, all neural networks essentially compute how similar an input is to the examples on which they were trained, and produce the output they were trained to produce for the most similar inputs. This is proven mathematically.
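
To make that claim concrete, here is a minimal sketch (not Domingos' actual derivation; the kernel and the toy data are invented purely for illustration): the "model" does nothing but blend the outputs of the training examples most similar to the query.

```python
import numpy as np

# Illustrative sketch only: a prediction formed as a similarity-weighted
# vote over stored training examples, in the spirit of the claim that
# trained networks behave approximately like kernel machines.
# The kernel and the toy data below are made up for this example.

def kernel(x, x_train):
    """Similarity between a query and one training input (RBF kernel)."""
    return np.exp(-np.sum((x - x_train) ** 2))

def predict(x, train_inputs, train_outputs):
    """Blend the training outputs, weighted by similarity to the query."""
    weights = np.array([kernel(x, xi) for xi in train_inputs])
    weights /= weights.sum()
    return weights @ train_outputs  # weighted average of outputs already seen

# The model can only interpolate between outputs it was trained on.
train_inputs = np.array([[0.0, 0.0], [1.0, 1.0]])
train_outputs = np.array([0.0, 1.0])
print(predict(np.array([0.9, 0.8]), train_inputs, train_outputs))
```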

Finally, Sam Altman (one of their founders) is repeatedly speaking about driving the cost of "intelligence" down to zero, and Greg Brockman, another founder, says their goal is to create machines that outperform humans at most economically valuable work. That's nice. How are humans supposed to sustain themselves economically and intellectually if that happens?

Sam Altman speaks of unleashing the same level of progress every year as has happened since the Renaissance. How on earth does he think society will cope with that?

It's important to realize that ChatGPT is not an AI. It has no model of the world. It has no notion of cause and effect. A house fly understands more about the real world. It is Apparent Artificial Intelligence, not Actual Artificial Intelligence. Despite this, it could very well end up eliminating most cognitive work, just as most musicians and potters can no longer live off their skills.

The notion that we need human experts to verify whether something produced by the tool is a hallucination or not is, frankly, quaint. If you can do without skill most of the time, the value of skill will fall, and instead incompetent people with "the right attitude" will be promoted. Expect those who are skilled yet costly to be eliminated, and thus, over time, the costs associated with gaining skill will also be eliminated. Why read a book if you can ask ChatGPT? Why go to university? And if the thing speaks, then why bother learning to read and write? In case you think I'm exaggerating: in one of his interviews on YouTube, Sam Altman said that the skills we had 100 years ago will disappear, and those that remain will be those of 50,000 years ago.

To reinforce the house fly point: in this case we're speaking of a probability distribution over words. For instance, if you ask a language model whether it will result in the destruction of expertise, it answers Yes with some probability and No with some probability. Each word is chosen randomly in accordance with the probability distribution. The justification that follows is also selected randomly, but will be consistent with the first chosen word (yes or no). Therefore, if you try multiple times, you'll get "no, expertise will still be needed because..." or "yes, expertise will no longer be needed because...". The reasons it gives aren't there because it understood something, and they aren't why it decided to say "Yes" or "No". But few people will grok that.
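
Here is a toy sketch of that sampling step (the probabilities and canned continuations are invented for illustration; a real model recomputes a distribution over its whole vocabulary at every token): the first word is drawn by chance, and the justification merely follows whichever word came up.

```python
import random

# Toy sketch of next-token sampling. The distribution and continuations
# are invented; a real model would derive them from the preceding context.
first_token_probs = {"Yes": 0.55, "No": 0.45}

continuations = {
    "Yes": "expertise will no longer be needed because ...",
    "No": "expertise will still be needed because ...",
}

def sample(dist):
    """Draw one token at random according to its probability."""
    r = random.random()
    total = 0.0
    for token, p in dist.items():
        total += p
        if r < total:
            return token
    return token  # fall back to the last token on rounding error

first = sample(first_token_probs)           # "Yes" or "No", chosen by chance
print(first + ", " + continuations[first])  # justification follows the coin flip
```

Run it a few times and the answer flips between Yes and No, with the justification matched to whichever word happened to come up first.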

So I see a lot of stupidity, ignorance and arrogance on the part of OpenAI. This product of theirs could easily wreck our civilization, and they even seem to know it, but not care. I wonder whether this isn't the Great Filter that resolves the Fermi Paradox.

For now, the only good thing is that it's still limited to a 32,000-token context, so the tasks it can do will be limited by that.

@Old Unix Geek

These are the kind of comments that we come to the comments section for. I don't agree with all of your rant, but it's a very good rant. Thanks for taking the time!

Old Unix Geek

I'm glad you liked it, David.

I'm confused why you think it's a rant. I wasn't angry when I wrote it, but I am very concerned by the long term consequences I see it having on civilization. Perhaps you meant it was long and impassioned?

There's already a technology that humanity decided not to pursue: gene editing of human beings, because the consequences could be terrible and irrevocable. People who try it go to prison and are shunned, as happened recently to a Chinese doctor. I think there's a good chance this type of generative AI should fall into the same category.

@Old Unix Geek

Yes, I labeled it a rant because it was "long and impassioned." Sometimes passion comes across to other people as anger, or at least anger-tinged. I think when I read it the first time I did think it was a little bit angry (but reasonable). But that might be me reading too much into it—and obviously how we read something is not necessarily how it was written!

Chomsky's early language research was funded by the Pentagon. A controversial book about Chomsky says that his research was supposed to create a spoken interface for the military. Basically, he was asked to create something like Siri/Alexa/ChatGPT in 1963. The book's opinion is that because Chomsky hated the military, he tried to sabotage the military's efforts by making his research too high-level to be useful. Either way, this kind of points to the idea that Chomsky might hate ChatGPT because he knows this is exactly what the military wants.

https://en.wikipedia.org/wiki/Decoding_Chomsky#Further_research_on_Chomsky_at_MIT

Old Unix Geek

I find that thesis a stretch, James.

The way I understand Chomsky's point is that there is no scientific value to ChatGPT. It produces a probability distribution over the next word based purely on correlations derived from the examples it has seen. Therefore there is no causal modelling, only masses of data that have not been distilled into a few simple rules. In science, gathering data is often the first step, but it needs to be followed by an actual theory. Kepler's discovery that the planets' orbits around the sun are elliptical follows this pattern, for example. Chomsky's point is that ChatGPT's algorithms can model any kind of sequence, not just language. Therefore the algorithms do not impose any linguistic priors on the sequences they emit. This is true: similar transformer-based architectures have been used to create music, for instance, which doesn't involve grammar.
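
A toy way to see the "no linguistic priors" point (using simple bigram counts as a stand-in for a transformer, with made-up training sequences): exactly the same next-token code handles English words and musical notes, because nothing in it knows anything about language.

```python
import random
from collections import defaultdict

# Toy sketch: the same next-token model works on words or on notes,
# because nothing in it is specific to language. Bigram counts stand
# in for a transformer here; the training sequences are invented.

def train(sequence):
    """Count how often each token follows each other token."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length):
    """Sample a continuation, one token at a time, from the counts."""
    token, out = start, [start]
    for _ in range(length):
        nxt = counts.get(token)
        if not nxt:
            break  # no known successor; stop
        choices, weights = zip(*nxt.items())
        token = random.choices(choices, weights=weights)[0]
        out.append(token)
    return out

words = "the cat sat on the mat the cat ran".split()
notes = ["C", "E", "G", "C", "E", "G", "A", "G"]

print(generate(train(words), "the", 5))  # a word sequence
print(generate(train(notes), "C", 5))    # a note sequence, same code
```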

If the military decides to use ChatGPT to decide what to bomb, or to do the bombing itself, we'll be in a world of hurt. WOPR from WarGames was more logical and understood language much better than ChatGPT... "Hi Professor Falken. Would you like to nuke Alaska today?" (Alaska being a low-probability choice, chosen by the random number generator.)
