Yuxi Liu (via Hacker News):
The legendary Cyc project, Douglas Lenat’s 40-year quest to build artificial general intelligence by scaling symbolic logic, has failed. Based on extensive archival research, this essay brings to light its secret history so that it may be widely known.
[…]
In 1984, he launched Cyc to manually encode millions of facts and rules about common sense, predicting that once this “knowledge pump” was primed, the system would begin true machine learning by reading natural language texts and conducting autonomous scientific experiments. Cyc grew to contain approximately 30 million assertions at a cost of $200 million and 2,000 person-years. Yet despite Lenat’s repeated predictions of imminent breakthrough, it never came.
[…]
Cycorp achieved long-term financial stability that is uncommon for a small technology company, but all known commercial uses of its system involve standard methods in expert systems, data integration, and information retrieval, functionally the same as similar services offered by established corporations like Oracle and IBM. No evidence suggests that Cyc’s purported higher intelligence provided any competitive advantage.
Alberto Romero (via Hacker News):
I’d been holding off on writing about Gemini 2.5. Focusing on the AI model didn’t feel like enough to tell the full story of Google’s comeback. Gemini 2.5 is only a piece—albeit a big one—of something much larger. Back in December 2024, I said they would come out on top by the end of 2025. We’re not even halfway there and it’s already happened.
[…]
Perhaps most importantly, the benchmark scores match the signal I receive from vibes checks, high-taste testers, and firsthand testimonials: people are reporting en masse that Gemini 2.5 Pro is indeed the best model today. A rare sight to witness. (Watch Matthew Berman’s clip below.)
And that’s just pure performance. Add to the above that Gemini 2.5, compared to models in its category, is fast and cheap—I mean, they’re giving away free access!—has a gigantic context window of 1 million tokens (only recently surpassed by Meta’s Llama 4), and is connected to the entire Google suite of products (more on that soon).
Jordan Novet (2024, via Hacker News):
OpenAI co-founder John Schulman said in a Monday X post that he would leave the Microsoft-backed company and join Anthropic, an artificial intelligence startup with funding from Amazon.
The move comes less than three months after OpenAI disbanded a superalignment team that focused on trying to ensure that people can control AI systems that exceed human capability at many tasks.
Emma Roth:
Claude, the AI chatbot made by Anthropic, now has a desktop app. You can download the Mac and Windows versions of the app from Anthropic’s website for free.
Last week, Anthropic released its “computer use” feature in public beta, which allows the Claude 3.5 Sonnet model to control a computer by looking at a screen, moving the cursor, clicking buttons, and entering text. This capability isn’t available within the app, however.
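The feature ships as an API beta rather than as part of the app: the model is handed a virtual “computer” tool and replies with actions (take a screenshot, move the mouse, click, type) that the caller’s own code has to execute. Here is a minimal sketch against the Anthropic Python SDK, assuming the model name, tool type, and beta flag from the initial beta announcement; the screen size and prompt are placeholders:

# Minimal sketch of the computer-use beta via the Anthropic Python SDK.
# The model name, tool type, and beta flag are the ones documented for the
# initial public beta; the screen size and prompt are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # the virtual "computer" tool
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    messages=[{"role": "user", "content": "Open the browser and check the weather."}],
    betas=["computer-use-2024-10-22"],  # opt in to the beta
)

# Claude responds with tool_use blocks describing actions such as
# {"action": "screenshot"} or {"action": "left_click", "coordinate": [x, y]}.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)

In practice this runs as a loop: execute each requested action locally, send back a tool_result block with a fresh screenshot, and repeat until the model stops asking for actions.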
Sebastiaan de With:
Big miss from Anthropic releasing a super clunky macOS Electron app that feels like a bad wrapper of their website. Very weird non-standard UI all over, choppy and sloppy animations.
OpenAI is really leagues ahead in making good apps (+ has ChatGPT Search rolling out today)
Via John Gruber (Mastodon):
There’s much talk that Anthropic’s Claude 3.5 has pulled ahead of OpenAI’s ChatGPT 4o in terms of chatbot “intelligence”, but as an overall experience ChatGPT wins hands-down. For one thing ChatGPT has been able to search the web for answers for a while now, and it works great. For another, just today OpenAI launched ChatGPT’s dedicated “search” mode. Claude has nothing like it.
[…]
ChatGPT’s native Mac app, on the other hand, is a truly native Mac app. It looks like a Mac app and feels like a Mac app because it really is a Mac app.
Kylie Robison (via Hacker News, Slashdot):
Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”
[…]
The achievement seemed to position Meta’s open-weight Llama 4 as a serious challenger to the state-of-the-art, closed models from OpenAI, Anthropic, and Google. Then, AI researchers digging through Meta’s documentation discovered something unusual.
In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn’t the same as what’s available to the public. According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality,” TechCrunch first reported.
First Google demo shenanigans, then Apple, and now Meta.