Thursday, June 20, 2024

Safe Superintelligence Inc.

Ilya Sutskever et al. (via Hacker News):

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

[…]

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

[…]

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

I don’t really understand how they know whether what they are doing is “safe.” And currently, I think, people are more worried about what humans will do with AI (which they can’t control) than about what the AI will do by itself. But, I guess, good luck to them in outrunning the other companies that focus less on safety.

Om Malik:

Daniel Gross, former AI lead at Apple, and researcher Daniel Levy are co-founders of the company.

[…]

What does “safe” mean when it comes to superintelligence? […] I have read fewer words that have more clarity.

Simon Sharwood:

Building an SSI “is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”

Who are those investors? The page doesn’t indicate. Ditto the business model.


6 Comments

These folks waited waaaaay too long after the horse left the barn at OpenAI, and I’m sure they cashed in accordingly with one hand while wagging their finger with the other.

My only solace is that I think the vast majority of this LLM stuff is a parlor trick. I wonder if they think that too and are either correcting course or cashing in a second time before the hype fades. For now, I think you’d have to be crazy to give them the benefit of the doubt.

Old Unix Geek

Safe superintelligence is intelligence better than human intelligence that understands the consequences of its actions, so that it doesn't end up killing us. For instance, it won't tell its users how to engineer viruses to kill off the people its users hate. Nor will it suggest new methods of space flight through hyperspace that kill the passengers. It's not necessarily "safe" in the sense of safe spaces: it might well tell you that you are obese and that it's bad for you.

Personally, I'm not bullish on LLMs as the road to AGI, although they may pave the road to serfdom. However, I am impressed by tools like AlphaZero and AlphaFold, so there is promise there.

Its base in Tel Aviv is concerning... Israel isn't exactly a peaceful, stable place, and it's easy to imagine pressures on the super-intelligence company to put its principles aside. Let's build our Safe Super AI next to Armageddon!

"Building safe superintelligence (SSI) is the most important technical problem of our​​ time."

No, I'm pretty sure that's the climate crisis. You know, the thing AI is already making markedly worse because of how wasteful it is?

What kind of world do these tech bros imagine? One where our planet and ecosystems are destroyed but at least we created a lot of shareholder value or an AGI to live in the wreckage?

AI alignment is a speculative philosophical field that for the most part ties in with transhumanism, though sound approaches do exist. SI (superintelligence) seems to allude to AGI ideas, which have a long tradition; however, AGI is not clear even in principle, and there is nearly no research into it. AI is generally conflated with it for a number of reasons. Current AI, however, rests purely on 80s connectionism and on probabilistic generative paths or feature mappings in vector spaces. There is little conceptual complexity; its complexity stems from the large data sets. Current AI is what we need and a solution to our problems and challenges, much like blockchain-crypto-NFTs-web3 were. It's already in sell-out mode.

Old Unix Geek

@Anna: The main reason AI is conflated with AGI is that AI meant AGI when I did my PhD in AI... Then people started calling everything AI, and the actual AI people had to create a new name, "AGI". AI isn't just 1960s neural networks, or even machine learning, although that's what's having its moment in the sun right now. Theorem provers, planners, and chess engines, which reason symbolically, were also forms of "AI", because we thought they might solve intelligence. To summarize, the idea at the time was that search would provide a solution to intelligence. Now the idea is that similarity and learned probabilistic rules will provide a solution to intelligence. Both bets are too narrow to achieve AGI. I'm not betting on LLMs alone, but they're interesting from an intellectual standpoint.

@Old Unix Geek

Yes :)

I had that packed into "long tradition": symbolic AI, PSSH (the physical symbol system hypothesis), Chomsky et al. in the Handbook of Mathematical Psychology, etc.

AI always meant AI. Now it means bullshit.*

* https://link.springer.com/article/10.1007/s10676-024-09775-5
