Friday, November 17, 2023

Altman and Brockman Out at OpenAI

OpenAI (Hacker News):

Sam Altman will depart as CEO and leave the board of directors. […] Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

[…]

As a part of this transition, [co-founder] Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

This is quite a surprise. It sounds like something really bad happened. It’s unclear whether Brockman was also involved or merely disagreed with the other four board members. Either way, it seems unlikely that he’ll stick around for the long term.

Update (2023-11-17): Amir Efrati and Jon Victor:

The blog post said Brockman would step down from his role as chairman of the OpenAI board but that he would stay on in an operating role. But by Friday afternoon, he decided to resign.

Jon Victor, Stephanie Palazzolo, and Anissa Gardizy:

OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation.

[…]

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. [He said it did not.] To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software[…]

Update (2023-11-20): soneca:

I understand it’s a big deal, as AI is the current big thing and OpenAI is the center of it. And it’s good gossip. But the firing post is now the third most upvoted post on HN ever!

Ina Fried and Scott Rosenberg (via Hacker News):

Sam Altman’s firing as OpenAI CEO was not the result of “malfeasance or anything related to our financial, business, safety, or security/privacy practices” but rather a “breakdown in communications between Sam Altman and the board,” per an internal memo from chief operating officer Brad Lightcap seen by Axios.

Kara Swisher (via Hacker News):

[Sources] tell me chief scientist Ilya Sutskever was at the center of this. Tensions with Sam Altman and Greg Brockman over role and influence had been increasing, and he got the board on his side.

The developer day and how the store was introduced was an inflection moment of Altman pushing too far, too fast.

unusual_whales (via Hacker News):

Sam Altman had been looking to raise tens of billions of dollars from Middle Eastern sovereign wealth funds to create an AI chip startup to compete with processors made by Nvidia, $NVDA, before being fired, per Bloomberg.

John Loeber (via Hacker News):

Yesterday, Sam Altman and Greg Brockman were fired from the Board of Directors of OpenAI. Afterward, all of Tech Twitter was abuzz with one question: wait a moment, who was on the Board? And after they found out, they asked: who on earth are Tasha McCauley and Helen Toner? It turns out that OpenAI’s Board had undergone numerous changes over the years, especially recently. And that just wasn’t ever the biggest news about OpenAI, so those changes didn’t spark the concerns that maybe they should have.

I combed through the Internet Archive and OpenAI’s non-profit filings to try to make sense of OpenAI’s governance. Below, I have attempted to chronicle the composition of OpenAI’s Board over time, point out the conflicts, and show how we got to the earthquake yesterday.

[…]

The first thing that sticks out to me is that there have been, for several quarters, two significant conflicts of interest on the Board[…]

Kevin Roose:

Ilya Sutskever, the company’s chief scientist and a member of its board, defended the ouster, according to a person briefed on his remarks. He dismissed employees’ suggestions that pushing Mr. Altman out amounted to a “hostile takeover” and claimed it was necessary to protect OpenAI’s mission of making artificial intelligence beneficial to humanity, the person said.

Via John Gruber (Hacker News):

According to Brockman — who until he quit in protest of Altman’s firing was chairman of the OpenAI board — he didn’t find out until just 5 minutes before Altman was sacked. I’ve never once heard of a corporate board firing the company’s CEO behind the back of the chairman of the board.

Benj Edwards (via Hacker News):

As Friday night wore on, reports emerged that the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI’s tech deployment.

“This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity,” Sutskever told employees at an emergency all-hands meeting on Friday afternoon, as reported by The Information.

Wes Davis (via Hacker News):

Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence.

Ilya Sutskever (via Hacker News):

I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.

Alex Heath and Nilay Patel (via Hacker News):

A source close to Altman says the board had agreed in principle to resign and to allow Altman and Greg Brockman to return but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.

[…]

Last night, after we learned OpenAI was trying to get Altman back and that the board was waffling, chief strategy officer Jason Kwon told employees that the company is “optimistic” about Altman returning and would share more Sunday morning. Meanwhile, a bunch of OpenAI employees took to X (formerly Twitter) to voice their support of Altman with heart emoji.

Eric Newcomer:

Sam Altman is rallying the troops. OpenAI employees are tweeting heart emojis in his defense. Dozens of people, including some OpenAI employees, visited Altman in his Russian Hill home in what seems to be a sort of resistance camp. Airbnb CEO Brian Chesky and Coinbase CEO Brian Armstrong — both among the most valuable Y Combinator portfolio companies — have offered words of support for Altman. Investor godfather Ron Conway compared Altman’s ouster by the OpenAI nonprofit board to a “coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs.” Microsoft is reportedly working with Tiger Global and Thrive Capital to reinstate Altman. From reading the news or drinking from the Twitter firehose, you would think Altman’s return is a fait accompli. One tech Twitter account quipped yesterday when it seemed that Altman’s reinstatement could happen any minute, “wow it even took jesus three days.”

[…]

My understanding is that some members of the board genuinely felt Altman was dishonest and unreliable in his communications with them, sources tell me. Some members of the board believe that they couldn’t oversee the company because they couldn’t believe what Altman was saying.

[…]

There are three key historical case studies here: First, Dario Amodei, Jack Clark and the team at Anthropic felt troubled enough by OpenAI’s approach that they needed to spin off and create their own more safety and alignment-oriented foundation model company. What (or who) exactly got that team so worried that it needed to jump ship? Altman was certainly at the center of that decision.

Emily Chang, Edward Ludlow, Rachel Metz, and Dina Bass (via Hacker News):

Efforts by a group of OpenAI executives and investors to reinstate Sam Altman to his role as chief executive officer reached an impasse over the makeup and role of the board, according to people familiar with the negotiations.

Will Knight and Steven Levy (via Hacker News):

More than 600 employees of OpenAI have signed a letter saying they may quit and join Sam Altman at Microsoft unless the startup’s board resigns and reappoints the ousted CEO.

Jon Victor and Amir Efrati (via Hacker News):

Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup, told associates they had resigned, these people said.

Nilay Patel and Alex Heath (via Hacker News):

After a weekend of negotiations to potentially bring back Sam Altman as OpenAI CEO following his shock firing, the company’s nonprofit board has gone another way entirely and named former Twitch CEO and co-founder Emmett Shear as interim CEO, according to a person familiar with the matter. He will take over as CEO for Mira Murati, who was publicly aligned with Altman.

Matas (via Hacker News):

A list of things that a coherent story does not make[…]

Satya Nadella:

We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.

Dylan Patel and Daniel Nishball (via Hacker News):

Sam and Greg were considering creating a brand-new startup, but that would have likely caused a >1 year speed bump. Instead, now there is a new subsidiary within Microsoft.

[…]

There is a mass exodus of the core OpenAI team leaving and joining Microsoft. This new organization within Microsoft will get hundreds of technical staff from OpenAI.

[…]

The OpenAI for-profit subsidiary was about to conduct a secondary at a $80 billion+ valuation. These “Profit Participation Units” (PPUs) were going to be worth $10 million+ for key employees. Suffice it to say that this is not going to happen now, and the OpenAI board has foolishly destroyed the chance of generational wealth for many of the team. Despite this literal fumbling of the bag, key OpenAI employees who leave will be treated extremely well.

Part of Satya’s incredible deal with Sam and Greg is likely that these key OpenAI employees that join Microsoft will have their now worthless PPUs pseudo-refreshed for equity in Microsoft which vest over multiple years.

Ben Thompson (via Hacker News):

This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.

[…]

Here’s the reality of the matter, though: whether or not you agree with the Sutskever/Shear tribe, the board’s charter and responsibility is not to make money. This is not a for-profit corporation with a fiduciary duty to its shareholders; indeed, as I laid out above, OpenAI’s charter specifically states that it is “unconstrained by a need to generate financial return”. From that perspective the board is in fact doing its job, as counterintuitive as that may seem: to the extent the board believes that Altman and his tribe were not “build[ing] general-purpose artificial intelligence that benefits humanity” it is empowered to fire him; they do, and so they did.

This gets at the irony in my concern about the company’s non-profit status: I was worried about Altman being unconstrained by the need to make money or the danger of having someone in charge without a financial stake in the outcome, when in fact it was those same factors that cost him his job.

[…]

That leaves Anthropic, which looked like a big winner 12 hours ago and now feels increasingly tenuous as a standalone entity. The company has struck partnership deals with both Google and Amazon, but it is now facing a competitor in Microsoft with effectively unlimited funds and GPU access; it’s hard to escape the sense that it makes sense as a part of AWS (and yes, B corps can be acquired, with considerably more ease than a non-profit).

Michael Spencer (via Hacker News):

While some are claiming it’s a great victory for Satya Nadella, I’m not so sure. Cannibalizing your biggest investment doesn’t usually turn out very well. Just one year after ChatGPT launched, Generative A.I. consolidation is already occurring? Given the moves of Inflection, Anthropic, and Character.AI, BigTech was already at the doorstep of these startups.

But with OpenAI being torn in half, it seems like independent startups in Generative A.I. really cannot survive or keep up on their own, which means real innovation may be stunted.

Update (2023-11-22): Kali Hays (via Hacker News):

Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.

These explanations didn’t make sense to employees and were not received well, one of the people familiar said. Internally, the going theory is that this was a straightforward “coup” by the board, as it’s been called inside the company and out. Any reason being given by the board now holds little to no sway with staff, the person said.

Geoffrey Irving:

Third, my prior is strongly against Sam after working for him for two years at OpenAI:

1. He was always nice to me.

2. He lied to me on various occasions

3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

David Goldman:

OpenAI’s overseers worried that the company was making the technological equivalent of a nuclear bomb, and its caretaker, Sam Altman, was moving so fast that he risked a global catastrophe.

So the board fired him. That may ultimately have been the logical solution.

But the manner in which Altman was fired – abruptly, opaquely and without warning to some of OpenAI’s largest stakeholders and partners – defied logic. And it risked inflicting more damage than if the board took no such action at all.

Deepa Seetharaman et al. (via Hacker News):

Top investors and senior OpenAI leaders were still pushing to reinstate Sam Altman to his CEO role at OpenAI as the future of the artificial-intelligence company remained in jeopardy.

The talks continued as much of OpenAI’s staff threatened Monday to quit if the board didn’t restore Altman to power, according to people familiar with the matter. Meanwhile, OpenAI’s rivals were making public overtures to any disgruntled employees at the startup company behind the viral chatbot ChatGPT.

Salesforce Chief Executive Marc Benioff offered to hire any OpenAI researcher to work on his company’s own AI program, proposing similar compensation and asking candidates to send him their résumés directly. Microsoft also offered to hire OpenAI employees at their same compensation, according to an X post Tuesday by Chief Technology Officer Kevin Scott.

Kevin Scott (via Hacker News):

To my partners at OpenAI: We have seen your petition and appreciate your desire potentially to join Sam Altman at Microsoft’s new AI Research Lab. Know that if needed, you have a role at Microsoft that matches your compensation and advances our collective mission.

Matthew Prince:

Contrary to what @kevinroose and others have written, Microsoft was not a winner of the events of the last few days around #OpenAI. They were in a much better place on Friday morning last week than they are today. Friday morning they had invested ~$11B in OpenAI and captured most of its upside while still having enough insulated distance to allow @BradSmi to claim things to regulators like “ChatGPT is more open than Meta’s Llama” and to allow any embarrassing LLM hallucinations or other ugliness to be OpenAI’s problem, not Microsoft’s.

[…]

I think the chances of the senior OpenAI folks still being at Microsoft in 3 years are asymptotically approaching zero. Where the independence and clear mission of OpenAI was exactly what could have kept that group of incredible talent motivated and aligned over the long term, making Office365 spreadsheets a bit more clever isn’t something that rallies a team like theirs. Sure they’ll try and have some level of independence, but the machinery of a trillion-dollar+ business software behemoth is hard to not get caught up in and ground out by.

Alex Ivanovs (via Hacker News):

The letter that the OpenAI employees prepared initially had 500 signatures (out of ~700 employees), and recent reports say that number is now almost 100%.

[…]

This is also about the people and, more importantly, the 2 million developers who use the OpenAI API, whether for personal purposes or business. There have been an enormous number of self-made people on Twitter, Discord, and other social media platforms worrying that the world is about to come crashing down on the dreams that OpenAI has enabled them to accomplish.

[…]

Nadella emphasized Microsoft’s deep involvement in AI development alongside OpenAI. Despite the upheaval, he reassured that Microsoft retains “all the rights and all the capability” necessary for AI innovation. This statement suggests a robust backup plan, ensuring the continuity of services and technologies developed in partnership with OpenAI.

Dave Lee (via Hacker News):

Whether board members were justified in seeking to remove Altman isn’t the real issue. What’s truly important is that the board made a decision that was almost instantaneously overturned by the sheer power and popularity of a trailblazing cofounder. In that sense, OpenAI was no different to the tech giants that came before it: Mark Zuckerberg’s dictatorial hold on Meta Inc., or Larry Page’s and Sergey Brin’s unparalleled voting power at Google-parent Alphabet Inc. Over the past year, many felt reassured (if perplexed) by the fact that Altman, unlike those founders before him, did not hold any stock in OpenAI. The stated reason was to remove any sense that greed was the motivating factor behind the pursuit of profits, while subjecting Altman to what had been considered a higher-than-normal level of accountability. Turns out that none of it mattered: Despite warning after warning after warning, this weekend’s events prove the cult of the founder is alive and well in Silicon Valley.

Anna Tong et al. (via Hacker News):

Some investors in OpenAI, the maker of ChatGPT, are exploring legal recourse against the company’s board, sources familiar with the matter told Reuters on Monday, after the directors ousted CEO Sam Altman and sparked a potential mass exodus of employees.

[…]

Investors worry that they could lose hundreds of millions of dollars they invested in OpenAI, a crown jewel in some of their portfolios, with the potential collapse of the hottest startup in the rapidly growing generative AI sector.

John Gruber:

OpenAI named a new interim CEO, Twitch co-founder Emmett Shear. (Shear is an AI worrier, who has advocated drastically “slowing down”, writing “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”) OpenAI CTO Mira Murati was CEO for about two days.

[…]

Nadella appeared on CNBC and admitted that Altman and Brockman were not officially signed as Microsoft employees yet, and when asked who would be OpenAI’s CEO tomorrow, laughed, because he didn’t know.

OpenAI (via Hacker News):

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

Update (2023-11-27): Keach Hagey et al.:

One solution that Altman devised was a curious corporate structure that led to his ouster. A nonprofit board governs OpenAI’s for-profit business arm with the sole purpose of ensuring the company develops AI for humanity’s benefit—even if that means wiping out its investors.

[…]

Over the weekend, Altman’s old executive team pushed the board to reinstate him—telling directors that their actions could trigger the company’s collapse.

“That would actually be consistent with the mission,” replied board member Helen Toner, a director at a Washington policy research organization who joined the board two years ago.

Cade Metz, Tripp Mickle, and Mike Isaac (via Hacker News):

At one point, Mr. Altman, the chief executive, made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Austen Allred:

OpenAI board member Helen Toner published an article Altman took issue with.

She described it as “an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I.”

[…]

The article is literally an analysis of different ways you can force AI companies (and governments using AI) to slow development, and recommendations on how they can be used and which are best.

Anna Tong et al. (via Hacker News):

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said.

Alex Heath:

Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.

Via Nick Heer:

Heath’s counterclaim relies on a single source compared to Reuters’ two — I am not sure how many the Information has — but note that none of them require that you believe OpenAI has actually made a breakthrough in artificial general intelligence. This is entirely about whether the board received a letter making that as-yet unproven claim and, if that letter was received, whether it played a role in this week of drama.

Deepa Seetharaman:

OpenAI said Sam Altman will return as chief executive of the artificial-intelligence startup that he co-founded, ending a dramatic five-day standoff between him and the board that fired him.

[…]

The new board will include Bret Taylor, the former co-CEO of Salesforce; Larry Summers, the former Treasury secretary; and Adam D’Angelo, the only member of OpenAI’s previous board to remain. Taylor will be the chairman, the company said. Altman won’t be on the initial board.

Elizabeth Dwoskin and Nitasha Tiku (via Hacker News):

Four years ago, one of Altman’s mentors, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported.

Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board.

Matt Levine:

The question is: Is control of OpenAI indicated by the word “controls,” or by the word “MONEY”?

Lucas Ropek (via Hacker News):

As far as the tech industry goes, it’s hard to say whether there’s ever been a more shocking series of events than the ones that took place over the last several days. The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley. That said, the long-term fallout from this gripping incident is bound to be a lot less enjoyable than the initial spectacle of it.

[…]

So much of the drama of the episode seems to revolve around this argument between Altman and the board over “AI safety.” Indeed, this fraught chapter in the company’s history seems like a flare up of OpenAI’s two opposing personalities—one based around research and responsible technological development, and the other based around making shitloads of money. One side decidedly overpowered the other (hint: it was the money side).

Update (2023-12-06): Sam Altman (via Hacker News):

I am returning to OpenAI as CEO. Mira will return to her role as CTO. The new initial board will consist of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

[…]

While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.

[…]

We clearly made the right choice to partner with Microsoft and I’m excited that our new board will include them as a non-voting observer.

[…]

Bret, Larry, and Adam will be working very hard on the extremely important task of building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events.

Charles Duhigg (via Hacker News):

Altman began approaching other board members, individually, about replacing [Toner]. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”

Paresh Dave (via Hacker News):

During Altman’s tenure as CEO, OpenAI had signed a letter of intent to spend $51 million on AI chips from a startup called Rain AI, a company in which he has also invested personally.

Rain is based less than a mile from OpenAI’s headquarters in San Francisco and is working on a chip it calls a neuromorphic processing unit, or NPU, designed to replicate features of the human brain. OpenAI in 2019 signed a nonbinding agreement to spend $51 million on the chips when they became available, according to a copy of the deal and Rain disclosures to investors this year, seen by WIRED. Rain told investors that Altman had personally invested more than $1 million in the company. The letter of intent has not been previously reported.

Update (2023-12-08): Meghan Bobrowsky (via Hacker News):

Toner maintains that safety wasn’t the reason the board wanted to fire Altman. Rather, it was a lack of trust. On that basis, she said, dismissing him was consistent with the OpenAI board’s duty to ensure AI systems are built responsibly.

[…]

In the interview, Toner declined to provide specific details on why she and the three others voted to fire Altman from OpenAI.

[…]

The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said.

By this point, several of OpenAI’s then-directors already had concerns about Altman’s honesty, people familiar with their thinking said.

Kali Hays et al. (via Hacker News):

After Sam Altman was fired from OpenAI late last month, the startup’s employees threatened to leave and accept a blanket offer from Microsoft to hire them all.

This was an audacious bluff and most staffers had no real interest in working for Microsoft, several current and former employees told Business Insider.

[…]

One current OpenAI employee admitted that, despite nearly everyone on staff signing up to follow Altman out the door, “No one wanted to go to Microsoft.” This person called the company “the biggest and slowest” of all the major tech companies — the exact opposite of how OpenAI employees see their startup.

[…]

Some Microsoft employees, meanwhile, were furious that the company promised to match salaries for hundreds of OpenAI employees. The offer came after Microsoft had laid off more than 10,000 employees, frozen salaries, and cut bonuses and stock awards this year.

Update (2024-05-29): Richard Lawler (Hacker News):

Former board member Helen Toner is filling in blank spaces in an interview on The TED AI Show podcast, providing her perspective on the events that caused board members to stop trusting Altman, as well as how he eventually returned.

[…]

Toner says that one reason the board stopped trusting Altman was his failure to tell the board that he owned the OpenAI Startup Fund; another was how he gave inaccurate info about the company’s safety processes “on multiple occasions.”

[…]

Toner cites the launch of ChatGPT itself as an example of how the board didn’t have real oversight over the company. “[W]hen ChatGPT came out November 2022, the board was not informed in advance. We learned about ChatGPT on Twitter,” says Toner.

Update (2024-05-31): See also: Zvi Mowshowitz.

11 Comments


I thought it was something mundane, like income not growing as much as Altman said.

Fired for unsafe development of statistical models... what would that entail?


It is dangerous if people believe what ChatGPT says, and people are doing so, because it's becoming a "tool" for answering questions including medical questions. Microsoft envisages it being a tool the CEO can ask questions of just as s/he would ask his/her underlings. If it screws up, bye bye corporation.

Making it available to anyone for any purpose is also dangerous: if you don't know that the social media account you are following or the "person" on the phone is a bot which was programmed/prompted to cheat you out of your vote or your life savings, you could be in big trouble.

For civilization to work, we assume other people are not bullshitting us, but are providing us with information that makes sense. If everyone violates that basic assumption, we no longer know who to trust, and civilization collapses.

About 10 years ago, most people were so sure that the people who analyzed X-rays would be replaced by AI. Yet they weren't, because the AI systems didn't work well enough. Now, it's the same with ChatGPT, except this time around there seem to be fewer checks and balances to make sure we don't all go crazy.

Why is ChatGPT not believable? Because LLMs predict the next word from high-dimensional manifolds which were built from example data, by interpolating across these manifolds' surfaces. The more data, the more points on the manifold are specified, and the more or less accurately the manifold is specified; thus the appearance of more and more sensible output. There is no special mode called hallucination: the exact same interpolation underlies both "good output" and "hallucinations". Hallucinations are in the eye of the beholder: it's all interpolated. We should treat it all as "hallucinated", since it's unclear where the shape of the manifold follows our expectations/the behavior of the world and where it doesn't.
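To make that point concrete, here is a toy Python sketch (the vocabulary and logits are invented for illustration, not taken from any real model): the plausible answer and the "hallucination" fall out of the very same softmax-and-sample step.

    import math
    import random

    # Toy next-token sampler. Vocabulary and logits are made up; a real
    # LLM has on the order of 100,000 tokens and learned scores.
    vocab = ["Paris", "Lyon", "Atlantis", "banana"]
    logits = [4.0, 2.0, 1.5, -1.0]  # scores for "The capital of France is ..."

    def sample_next(logits, temperature=1.0):
        # Softmax over scaled logits, then sample: the one mechanism
        # behind every token the model emits, right or wrong.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(vocab, weights=weights, k=1)[0]

    # Mostly "Paris", occasionally "Atlantis". Nothing in the code marks
    # one as a hallucination; that label is applied by us, after the fact.
    print([sample_next(logits) for _ in range(10)])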

The goal of AI isn't (or at least shouldn't be) about making the LLM produce politically correct nonsense. It should be about making damn sure that the generative models do not create so much confusion and mistrust that civilization collapses. Perhaps Ilya has realized this, but Sam didn't care.


@Old Unix Geek I’ve been playing with ChatGPT recently and having a very different experience than what I’d read about. Of all the Mac programming questions I’ve asked it, I don’t think it’s gotten a single one right. But the incorrect answers are often convincing enough that if you don’t know the domain you would think it knows what it’s talking about and really be led astray.


@Michael Tsai:

Yes... the thing is that if you're not a domain expert, you might not realize it. And even if you are, you might trust it in other domains. It reminds me of the media: sure, they get stuff quite wrong in my domain of expertise, but I'll believe them when it comes to other areas. People used to quote the NYTimes as if it were chapter and verse; now on Twitter I see them quoting ChatGPT in the same way... This could break society.

Btw, my last paragraph should have read "The goal of safety in AI"... somehow I _ a word.


I have no idea what to make of this:

https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo

Apparently more people have left:

https://www.theinformation.com/articles/three-senior-openai-researchers-resign-as-crisis-deepens?rc=k5vrz1

Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup, told associates they had resigned, these people said.

So there seems to be an "Open Revolt" at "Open AI"...


An odd but possible explanation is that Ilya and the other board members thought OpenAI had reached AGI. At that point, Microsoft and other investors lose access to the underlying technology. That might cause the external investors to freak out and threaten to hold back resources, such as access to Microsoft's cloud. (Supposedly Satya Nadella was furious about what happened at OpenAI.)

https://nitter.net/AndrewCurran_/status/1725679404637196433#m

This incentive structure would push Altman and Brockman to lean towards disbelieving that AGI was achieved, if they wanted to keep the current investment structure, because agreeing would require them to stop, according to the terms of the legal structure they created for their non-profit/capped-for-profit thing. Others on the board might disagree, since their job is to enforce the legal structure if they sincerely believe AGI has been reached.

If nothing else, this hypothesis shows us that the incentives around OpenAI aren't very well aligned, should they ever attain AGI.


Oddly, Ilya Sutskever just signed a letter asking the board to resign, or they'll all leave and join a Microsoft subdivision making AI. End of OpenAI...

https://nitter.unixfox.eu/karaswisher/status/1726599700961521762#m

Apparently, Ilya's had a change of heart

https://nitter.unixfox.eu/ilyasut/status/1726590052392956028#m

This appears to falsify my previous post about AGI. Good. But I hardly trust Microsoft monopolizing AGI... So much for the capped non-profit thing.

https://nitter.unixfox.eu/satyanadella/status/1726509045803336122#m


It is interesting that OpenAI's employees decided to rebel.

If people were rational economic actors, one would assume they wouldn't react to Sam Altman's dismissal: some of them are on H1B visas that could be cancelled if OpenAI were to collapse. Most of them were about to get life-changing giant bonuses. All of that is now in question.

That would have been in doubt if Microsoft pulled their hardware, but Microsoft and interim CEO Murati both said the Microsoft-OpenAI relationship (and their GPUs) was still on solid ground.

I wonder whether assuming people are rational might be why people like https://old.reddit.com/user/Anxious_Bandicoot126 miscalculated.

Another take saying this is not the best outcome for Microsoft: https://nitter.net/eastdakota/status/1726735785188073726#m


The fact that one of the most powerful, dangerous technologies we have is controlled by people who are this volatile is genuinely terrifying.

Also, everybody working at a company that is secretly developing a powerful artificial intelligence suddenly going insane is the beginning of half the dystopian sci-fi movies out there.


Four days before the Altman saga, an OpenAI project named Q* solved mathematical problems beyond expectations, causing some OpenAI researchers to write to the board, suggesting a possible threat to the future of humanity.

https://www.cnbc.com/amp/2023/11/22/sam-altmans-ouster-at-openai-precipitated-by-letter-to-board-about-ai-breakthrough-sources-tell-reuters.html

Combining LLMs and actual reasoning seems to me to be an obvious path towards trying to achieve AGI.

