Monday, May 20, 2024

Sutskever and Leike Out at OpenAI

Sigal Samuel (tweet):

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

[…]

Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology?

[…]

For employees, all this led to a gradual “loss of belief that when OpenAI says it’s going to do something or says that it values something, that that is actually true,” a source with inside knowledge of the company told me.

I don’t think wanting access to chips is a bad sign, but it seems clear that the safety folks lost the power struggle within the company.

Greg Brockman and Sam Altman:

We’re really grateful to Jan for everything he’s done for OpenAI, and we know he’ll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

As many of the replies note, the words seem rather hollow and don’t really correspond with their actions.

Kelsey Piper:

But there was no stronger sign of OpenAI’s commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed.

[…]

And it makes it clear that OpenAI’s concern with external oversight and transparency couldn’t have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you’re doing, making former employees sign extremely restrictive NDAs doesn’t exactly follow.

Altman claims that they didn’t actually mean to cancel the equity for employees who didn’t sign the exit NDA. It was just a mistake in the paperwork (via Ryan Jones, Hacker News).


Update (2024-05-21): See also: Edward Zitron and Scott Aaronson.

Update (2024-05-24): See also: Nick Heer, Hacker News, John Gruber.

7 Comments


Old Unix Geek

My impression of OpenAI is slowly merging with my impression of Apple: a fake image, where the "cool people" are revealed to be very slick thieves. As long as you are on their side, they'll treat you nicely, but the moment you go against their agenda, they'll tear you asunder. They'll defend what they consider to be theirs assiduously, but won't mind helping themselves to other people's stuff, and then lying, often by omission, to create the impression that their products are entirely the result of their own genius.

The fact that Ilya left as soon as GPT-4o came out (and that many of the senior people on his team also left), the fact that he never returned to the office after the board revolt against Altman, the fact that OpenAI "made a mistake" about the life-changing amounts of equity people had earned, and the claims that safety is still a concern to them, even as they demonstrate the destruction of yet another job (tutoring students): it all sounds to me like a façade hiding something rotten in the Kingdom of Denmark. For instance, I wouldn't be surprised if it turned out that important people like Ilya were told that to keep their equity, they had to stay until GPT-4o came out, so as not to convey any further impressions of "disunity" or flailing around.

My guess is that OpenAI needs a lot of CPU cycles (i.e. capital) to build their product, yet they probably aren't making that much money, so they need to demonstrate value to investors. If that involves destroying jobs, violating ethics, or compromising "AGI safety", that's just a small price to pay as far as they are concerned. Investors must be kept happy, and they were probably unimpressed by the board revolt over "safety" that temporarily displaced Altman. Since I don't see this changing, I am far from reassured that OpenAI will build "AGI that is both safe and beneficial," to quote Ilya's departing tweet.

I also have to wonder: if Ilya really believes OpenAI is on the path to building AGI, why would he leave? After all those "Feel the AGI" exhortations? For a personal project? It sounds unlikely. Did he have no choice, or is AGI just a marketing gimmick, and therefore "safety" one too? I find it all very odd, since Hinton clearly said Ilya was the main thing standing between AGI and extinction, and Elon said he lost his friendship with Larry Page over his hiring of Ilya Sutskever for OpenAI.

Anyway, this is all just my opinion, based on where I am sitting.


Old Unix Geek

Here's another example of what I meant by "helping yourself to other people's stuff": https://twitter.com/BobbyAllyn/status/1792679435701014908

OpenAI claims Sky's voice wasn't patterned after Scarlett Johansson's (who voiced the AI in the movie Her), except that Altman specifically asked her to do it, made the voice using "a different voice actor", and then asked Scarlett to reconsider after launch.


"I wouldn't be surprised if it turned out that important people like Ilya were told that to keep their equity, they had to stay until GPT-4o came out, so as not to convey any further impressions of "disunity" or flailing around."

The fact that he just returned after the failed coup was super sketchy to begin with, so this at least makes some kind of sense.

It's fascinating that this company was ostensibly started to curtail the possible harm from this type of technology, only to become the main perpetrator of that exact harm.


It's becoming more and more clear that the board was right to fire Altman. Microsoft will suffer the consequences of propping up a megalomaniac.


Sounds like a tech company being a tech company to me.

Or really any large corporation being a large corporation. Does anyone really expect them to ever act in the best interests of humanity?


Old Unix Geek

More evidence.

To summarize, the tweet thread claims that OpenAI's lawyers refused to remove the clauses requiring people to shut up until death, when people requested such changes. And in the case of someone who lawyered up to keep them from cancelling his vested equity, they replaced the clause with another one saying that if he did not sign the shut-up clause, he would not be able to sell said equity… (which might be worse from a tax perspective, particularly if rumors of a proposal to tax unrealized capital gains turn out to be true). When asked how this squares with Altman's earlier claims, OpenAI refused to answer.

So as I understand it, OpenAI claims a pound of flesh, but only tells its employees about this claim when they leave.

Now that they have been found out, OpenAI claims to be fixing this... but if I were an OpenAI ex-employee, I would still fear speaking out, unless I had forsaken selling my vested equity.

