Thursday, June 26, 2025

Software Is Changing (Again)

Y Combinator (transcript, slides, via Duncan Davidson, Hacker News):

Drawing on his work at Stanford, OpenAI, and Tesla, Andrej [Karpathy] sees a shift underway. Software is changing, again. We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.

He explores what this shift means for developers, users, and the design of software itself — that we’re not just using new tools, but building a new kind of computer.

He says that LLMs are, in a way, the new operating systems.

Thomas Ptacek (via Nick Lockwood, Hacker News):

Some of the smartest people I know share a bone-deep belief that AI is a fad — the next iteration of NFT mania. I’ve been reluctant to push back on them, because, well, they’re smarter than me. But their arguments are unserious, and worth confronting. Extraordinarily talented people are doing work that LLMs already do better, out of spite.

All progress on LLMs could halt today, and LLMs would remain the 2nd most important thing to happen over the course of my career.

[…]

but the code is shitty, like that of a junior developer

Does an intern cost $20/month? Because that’s what Cursor.ai costs.

Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic. Using agents well is both a skill and an engineering project all its own, of prompts, indices, and (especially) tooling. LLMs only produce shitty code if you let them.

Gergely Brautigam (Hacker News):

While the post is funny at times, I feel like it’s absolutely and completely missing the point of the skepticism. Or at least I feel that it is glossing over some massive pain points of said skepticism.

Peter Steinberger:

You gotta look at the iOS app. This is a completely agent-built port of the web frontend.

ageesen (via Peter Steinberger):

I’ve been using CC NON-STOP (think 3 or 4 five hour sessions a day) over the last 11 days. Mostly Opus 4 for planning and Sonnet 4 for coding. I have a workflow going that is effective and pushing out very good quality code.

I just installed ccusage out of curiosity, and was blown away by the amount of daily usage.

Any of you feeling the same kind of urgent addiction at the moment?

Like this overwhelming sense that everything in AI tech is moving at light speed and there literally aren’t enough hours in the day to keep up? I feel like I’m in some kind of productivity arms race with myself.

Don’t get me wrong - the output quality is incredible and I’m shipping faster than ever (like 100x faster). But this pace feels unsustainable. It’s like having a coding superpower that you can’t put down…. and I know it’s only going to get better.

Ken Kocienda (Mastodon):

Well, over the last year or so, I’ve made the biggest-ever change to the way I write software. I now code with AI assistance all the time. Here’s why. Here’s how.

[…]

I write fewer lines of code than ever—by hand in the old-fashioned way—yet I create more code than ever. What’s more, as far as I can tell, there is no detectable reduction in quality. I’m just faster at making changes, fixing bugs, and turning out more features.

[…]

I still think of the features ideas. I still plan how I want the features to be implemented. I still read over all the code before I commit—and I still take the same responsibility over the code I merge—but I don’t write each and every if/then or function call anymore. No more typing out boilerplate code, either. I no longer have to. The AI does this grunt work for me.

My mind feels freed up. I remain at the higher levels of abstraction, with more time to think about ideas and plans. There’s less cognitive overhead in attempting things, so I attempt more things.

I still don’t really get how to apply this to my work. Most of what I’m doing is already thinking vs. typing grunt work. Describing how to change or enhance my existing codebase seems more daunting than just doing it directly. Is reviewing code written by an AI actually like reviewing code written by another human? And how does it help you fix bugs?

These days, I do most of my coding in python. I don’t love the language—maybe someday I’ll say why in more detail. However, since the models know python so well, it is possibly the most effective language to use for AI coding. Unlike other languages.

Kyle Hughes:

At work I’m developing a new iOS app on a small team alongside a small Android team doing the same. We are getting lapped to an unfathomable degree because of how productive they are with Kotlin, Compose, and Cursor. They are able to support all the way back to Android 10 (2019) with the latest features; we are targeting iOS 16 (2022) and have to make huge sacrifices (e.g. Observable, parameter packs in generics on types). Swift 6 makes a mockery of LLMs. It is almost untenable.

This wasn’t the case in the 2010s. The quality and speed of implementation of every iOS app I have ever worked on, in teams of every size, absolutely cooked Android. I have to give Google credit: they took all of the flak about fragmentation they got for a decade and grinded out the best mobile developer ecosystem in the world, and their lead seems to be increasing at an accelerating pace. I am uncomfortable with how I have positioned my career, to say the least.

To be clear, I’m not part of the Anti Swift 6 brigade, nor aligned with the Swift Is Getting Too Complicated party. I can embed my intent into the code I write more than ever and I look forward to it becoming even more expressive.

I am just struck by the unfortunate timing with the rise of LLMs. There has never been a worse time in the history of computers to launch, and require, fundamental and sweeping changes to languages and frameworks.
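
The sacrifices Hughes mentions are concrete. For instance, the @Observable macro requires iOS 17, so a codebase targeting iOS 16 falls back to Combine’s ObservableObject. A minimal sketch of the difference (the CounterModel type is hypothetical, purely for illustration):

```swift
import SwiftUI

// iOS 17+ (Observation framework) would be just:
//
//     @Observable
//     final class CounterModel {
//         var count = 0
//     }
//
// Targeting iOS 16 means the older Combine-based protocol instead,
// with a per-property @Published annotation and coarser-grained
// view invalidation.
final class CounterModel: ObservableObject {
    @Published var count = 0
}

struct CounterView: View {
    @StateObject private var model = CounterModel()

    var body: some View {
        Button("Count: \(model.count)") {
            model.count += 1
        }
    }
}
```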

John Gruber (Mastodon):

There were pros and cons to Apple’s approach over the last decade. But now there’s a new, and major con: because Swift 6 only debuted last year, there’s no great corpus of Swift 6 code for LLMs to have trained on, and so they’re just not as good — from what I gather, not nearly as good — at generating Swift 6 code as they are at generating code in other languages, and for other programming frameworks like React.

John Voorhees:

To hear the AI fans tell it, I, the developers we write about, and nearly everyone else will be out of jobs before long. Some days, that threat feels very real, and others, not so much. Still, it’s caused a lot of anxiety for a lot of people.

Rand Fishkin (via Hacker News):

Over the weekend, I went digging for evidence that AI can, will, or has replaced a large percent of jobs. It doesn’t exist. Worse than that, actually, there’s hundreds of years of evidence and sophisticated analyses from hundreds of sources showing the opposite is true: AI will almost certainly create more jobs than it displaces, just like thousands of remarkable technologies before it.

Brian Webster (Mastodon):

I’ve been an independent Mac developer for going on twenty years now (yikes!).

[…]

My initial reaction was pretty skeptical, since it’s clearly fully into its hype cycle at the moment, and the previous hype cycle of crypto/blockchain/Web3/NFTs has pretty much proven to mostly be a way to run more elaborate scams. As time has gone along though, it’s undeniable that this LLM stuff has actual utility, even if it’s being thrown at everything under the sun by CEOs in hopes of being able to pay less money to their employees.

[…]

But the code itself isn’t actually the satisfying part: it’s the process of creating something new, and it’s all the things I outlined above about diving deep into a particular area, and solving problems for people.

Probably the biggest limitation of being indie is the fact that you just only have so many hours in the day, and there will always be more stuff you want to do than you possibly have time for. What has started to get me excited about using AI tools to assist with coding is that it can take a lot of the grunt work out of the process of doing what I ultimately want to do, which is to try to apply my expertise to solve problems. While my coding expertise is obviously a decent part of why I’m able to do what I do, the truth is that the individual lines of code that I type out are not really what lets me add something to the world, it’s being able to help people via the mechanism of encoding my expertise into software.

[…]

I’m only just getting started with this stuff, having been working full time with Claude Code for all of a week now, but I’ve already implemented features in hours that would have normally taken me days, with basically the same quality of code output in terms of readability, maintainability, etc.

Peter Steinberger:

A friend asked me to show off my current workflow, so I did an impromptu workshop for him and his developers. This is a snapshot of how I approach vibe coding these days.

Kyle Hughes:

I legitimately think that agentic LLMs are the future of personal computers, the new operating system. Using Claude Code to interact with your own software over MCP, and see it autonomously solve problems with it and using it, is transcendent. The rest of the computer feels so antiquated, handmade GUIs feel cumbersome. Our computers will use our computers soon.
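
For those unfamiliar, MCP (Model Context Protocol) is a JSON-RPC 2.0-based protocol that lets an agent like Claude Code discover and call tools your software exposes. A tool invocation on the wire looks roughly like this (a sketch of the shape only; the search_notes tool and its arguments are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_notes",
    "arguments": { "query": "drafts from last week" }
  }
}
```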

Peter Steinberger:

The thing is, people don’t understand that you don’t actually have to pay that much to get incredible AI productivity. After using the best AI subscription deals 2025 has to offer, here’s the real math (all prices in USD). (And yes, I built Vibe Meter to track exactly how much I’m spending.)

Previously:

Update (2025-06-27): Nikhil Suresh:

I think this essay sucks and it’s wild to me that it achieved any level of popularity, and anyone that thinks that it does not predominantly consist of shoddy thinking and trash-tier ethics has been bamboozled by the false air of mature even-handedness, or by the fact that Ptacek is a good writer.

Helge Heß:

The main feature of AI is the license eraser. FOSS software for almost everything was available all the time. But you wouldn’t use it.

Jeff Johnson:

The only empirical evidence of “increased productivity” I’ve seen from AI lovers is a huge number of articles praising AI.

Cesare Forelli:

I appreciate that, to most, reading other developers’ AI success stories is far less interesting or exciting than it is for those who experienced them, but after reading @mjtsai’s Software is Changing Again I decided to share one that blew my mind today.

Setup: customer has a 4D database; for them I built an iPadOS app used by production workers, plus a Vapor “middleman” that makes that app talk with the database.
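
To picture the architecture: the iPad app never talks to 4D directly; it calls the Vapor service, which forwards requests to the database. A minimal sketch of what one such pass-through route might look like (the orders resource and the 4D server URL are placeholders, not Forelli’s actual code):

```swift
import Vapor

// The "middleman" shape: the iPadOS app calls this route, and the
// service proxies the customer's 4D database behind it.
func routes(_ app: Application) throws {
    app.get("orders", ":id") { req async throws -> Response in
        let id = try req.parameters.require("id")
        // Placeholder URL; a real deployment would target the 4D
        // server's HTTP interface and add authentication.
        let upstream = try await req.client.get(
            "http://4d-server.internal/rest/orders/\(id)")
        return Response(
            status: upstream.status,
            body: .init(buffer: upstream.body ?? ByteBuffer()))
    }
}
```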

Max Seelemann:

I feel that with Claude in Cursor, I can finally work as fast as I can think.

26 Comments


Here's the thing for me... why call it "vibe coding"? Why not "fake coding", or more accurately "AI coded this"? Not trying to be snarky, just wondering. If you want snarky, I have a few ways to describe it:

-- "I'm' too lazy to learn and wish to just create coding"
-- "I just want to make money as fast as can be and can't be troubled to learn code"

But what's up with "vibe"? And when another 5 years pass by and this pans out differently than the voices of today think it will, what will be the new name?


@Dave It’s a bit confusing because Karpathy is the one who coined “vibe coding,” but most of what’s being discussed here is not that. Kocienda and others reviewing the generated code is the opposite of vibe coding.


"but the code is shitty, like that of a junior developer

Does an intern cost $20/month? Because that’s what Cursor.ai costs."

What an absolutely dire period of shitty software we're walking into.


Currently the cost of AI is massively subsidised. It’s a common pattern in tech, Uber being the clearest example. Once the true cost of AI is passed on to the consumer I suspect using it liberally will become an expensive proposition. Becoming so reliant on it now that one’s problem-solving abilities atrophy seems more like a risky bet than a wise move.

I currently believe that AI is and will continue to be a net negative. I’d recommend reading Ed Zitron’s posts to anyone else who finds the AI hype relentlessly vapid.


"Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic."

Part of being a senior developer is making less-able coders into senior developers.


To me this particular usage makes sense. I'm not a programmer. But part of the reason why it never appealed to me is that it seemed like so much time spent doing such basic things.

I've tried asking ChatGPT to straight up do some things I've needed to do non-programming but tech related. Suffice it to say, you would not want to turn an entirely untrained person loose with ChatGPT's instructions and no further experience or guidance. You're gonna have a bad time.

So the real question to me becomes: where are the expert programmers to guide these things going to come from when the next generation is raised on generated code?

By then are the computers good enough to do it completely themselves, and we quickly enter a Von Neumann type situation?


Aleks Dorohovich

> but the code is shitty, like that of a junior developer
> Does an intern cost $20/month? Because that’s what Cursor.ai costs

I think Thomas Ptacek misses the point of an internship, or the role of junior devs in general.


ProfessorPlasma

Perhaps a hot take, but doesn't the fact that LLMs struggle with languages that haven't been around for a long time better illustrate that they aren't inherently intelligent? At least, it seems more compelling than the recent Apple white paper on the Tower of Hanoi. If they truly were "learning" then they would be adept at applying abstract and algorithmic concepts that are independent of the language.


> you don’t actually have to pay that much to get incredible AI productivity.

Yeah, because they're currently in the "we have so much funding we're basically giving away products to get people addicted to it" phase. Remember how that ended up last time. It's foolish to fall for it again.


Personally, I don't think this is an indication that AI is getting so much better, but rather that the quality of software is getting that much worse. And I get it. Apps feel disposable now, so why bother with craft? Just ship it.


What are the best offline, local model options here?


> I still don’t really get how to apply this to my work. Most of what I’m doing is already thinking vs. typing grunt work. Describing how to change or enhance my existing codebase seems more daunting than just doing it directly.

I’ve used ChatGPT to speed up certain tasks. It can save time, but it can also waste time. On a couple of issues I had it output some code and, with full confidence, explain that the way it did it was “correct.” Of course I had to fix it.

You can waste a lot of time modifying prompts to try to get something usable and often you are just better off writing the whole thing yourself.

For basic tasks it can generate starting points that save time, and help automate tedious stuff. For porting code from a foreign programming language it can be very helpful, and when it makes mistakes (it almost always does) you see them in your native language after translation and can fix them. It isn’t practical (at least for me) to become an expert in *every* programming language, but AI allows me to grab at things without having to read the Holy Bible on said programming language.

With that said, I’m not a CocoaPods guy. I don’t like grabbing pods of shit from other people and just adding them as dependencies. I use my own frameworks most of the time and very rarely rely on third party stuff. When I do rely on third party stuff I look at the code thoroughly to make sure it’s good. So when AI ports something for me, I have to take the time to read the code and make sure what it wrote makes sense.

So I do think AI can be a useful tool, but it isn’t clear to me how someone with **no programming skills** at all can just talk to an LLM and make a real app. You have to be really determined to not want to learn anything to do that. You’re just going to run buggy AI-generated app code you don’t understand and just release it when you feel like it works well enough?
Even if someone was tenacious enough to do that, how on earth would they ever be able to maintain the app? When a user files a bug report are you just going to cross your fingers and ask your LLM to fix it…and pray that it gets it right?

I don’t think devs are going anywhere anytime soon.


Also, to add: LLMs are great at generating mock data for unit tests (see the sketch at the end of this comment).

But you have to care about what you’re doing to write unit tests…

So IMO AI is not totally useless to programmers, I don’t agree with that crowd. But I just don’t see it becoming the new “programming language” anytime soon either
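
For instance, here is the sort of fixture an LLM will happily stamp out (the Invoice type and the values are hypothetical, purely for illustration):

```swift
import XCTest

// Hypothetical model, only to illustrate LLM-generated fixtures.
struct Invoice: Equatable {
    let id: Int
    let customer: String
    let total: Double
}

final class InvoiceTests: XCTestCase {
    // The tedious part an LLM is good at: plausible, varied mock
    // data, including edge cases like a zero-total invoice.
    let fixtures = [
        Invoice(id: 1, customer: "Acme Corp", total: 1249.99),
        Invoice(id: 2, customer: "Globex", total: 89.50),
        Invoice(id: 3, customer: "Initech", total: 0),
    ]

    func testTotalsAreNeverNegative() {
        for invoice in fixtures {
            XCTAssertGreaterThanOrEqual(invoice.total, 0)
        }
    }
}
```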


Do any of these tools work with an Obj-C codebase? Do you have to give all your code to a third party?


Beatrix Willius

Fully agree with Brian Webster. AI allows me to do things faster. My language has two problems: it went through a change a couple of years ago, and it's similar but not identical to Visual Basic. Both confuse the LLMs. Nevertheless, my productivity has increased.

The UI of my new application is in large part HTML. I could have written the CSS myself, only with a lot more time and a lot more cursing.

I hope that the vibe coding fad will go away like the no-code stupidity.


@Michael Tsai

Well, about Kocienda and vibe coding, it seems you are forgetting something stated in his article. And it has bugged me a lot lately: why is Claude unable to do anything good when I use it, while others are claiming it's so good?

"Mostly python
These days, I do most of my coding in python. I don’t love the language—maybe someday I’ll say why in more detail. However, since the models know python so well, it is possibly the most effective language to use for AI coding. Unlike other languages."

Here it goes. All the people stating that LLMs are helping so much at coding are developers using managed programming languages, where all the complexity of programming is already hidden by the language itself. From there, it's easy to state "I couldn't have done better," as indeed, programming in those languages doesn't impact performance or memory usage much unless you are doing something really wrong.

But try to do C/C++/Odin programming and it's a complete disaster, as you can quickly see that LLMs can't reason. Worse, in my experience, LLMs have created bugs in perfectly valid code using manual memory management.

Which brings us to a second conclusion: despite what Apple is claiming, Swift isn't an easy language to learn or use, as LLMs have tremendous difficulty generating good code. It's not only an issue of "Swift 6" being too new, but of so much complexity being built into the language that you need to know about, unlike Python or JavaScript.

IMHO, Apple is going to have serious issues pretty soon. Native iOS coding is already in bad shape, their relationship with developers is horrible, and now they won't be able to compete with LLMs + managed languages with their broken stack.


@Beatrix Willius are you referring to Xojo? ;)


PS: Vibe coding = not seeing the code at all. If you see the code or fix it manually => you are not vibe coding.


"I’m shipping faster than ever (like 100x faster)"

"I’ve already implemented features in hours that would have normally taken me days"

These claims are absolutely wild to me. How slow were you before using LLMs that they could speed you up to that degree?

I use agentic coding for some things, like writing unit tests, but not because it's faster. I do it because I hate writing unit tests. It usually ends up being slower because generating the code takes almost as much time as it would have taken me to just type it out, and reviewing it again takes almost as much time as just typing it.

LLMs do speed up my coding overall because they cut down on work that would typically take more time, like looking up API documentation. But they speed me up maybe 10-20%, not 100x.

"Currently the cost of AI is massively subsidised"

That seems like a non-issue, given how fast cost seems to be falling. But even if all of these companies die tomorrow or crank up their prices, I'll just use a local model and keep doing the same thing I'm doing now.

"What are the best offline, local model options here?"

There are a ton of options from the companies that provide local models, and "the best" constantly changes. The current DeepSeek-Coder is always worth a look. Devstral is also pretty good. But there are also coding-oriented versions of Qwen, Gemma, and Llama. You'll have to try and see which model works best for you.


@Damien I share your sentiments. Dumpster fire langs like Swift suffer in the LLM era:

- Adding endless features, syntax variations, and keywords doesn't help machines (or humans) produce good code. It allows too many pattern variations.
- Fumbling the ball constantly and churning the language every year doesn't help train models.
- Leaving tooling in a brittle state hobbles agentic integrations.

I'm sure a Swiftluencer with a course to sell will share a Claude.md file to limp the agents along (and cheerlead Swift when he posts it).

The best part of this era is we now get to dunk on the smug Swift pricks:

Turns out the "language of the future" is not Swift; it's English!

Time to learn English, Swifties! Every line of Swift you write is now obsolete!

;)


> PS: Vibe coding = not seeing the code at all.

You are still vibe coding if you see the code but don’t understand any of it.


@Damien What makes you think I’m forgetting the part of his article that I quoted? (And, BTW, I was speculating here that that would be a problem a year or two ago.) I do think it’s a somewhat open question whether the problem with LLMs and Swift is the lack of modern training data vs. something inherent to the language. On the one hand, it seems intuitively obvious that it’s easier to generate Python and JavaScript. And it’s a problem that even expert humans seem to have trouble with the newer Swift stuff. On the other hand, maybe having a very strict compiler is helpful in a world of generated code because it lets you find latent mistakes. Overall, I agree that there are problems in Apple development land. I think they focused on a lot of the wrong things over the last 10 years or so.
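
To make the strict-compiler point concrete, here's a minimal sketch (hypothetical code, not from any of the posts above) of the kind of latent bug that Swift 6's strict concurrency checking refuses to compile; a looser language would let generated code like this ship and fail intermittently at runtime:

```swift
// A mutable reference type with no synchronization: not Sendable.
final class Cache {
    var entries: [String: String] = [:]
}

func warmUp() {
    let cache = Cache()
    Task {
        // Swift 6 language mode rejects this: `cache` is sent into a
        // concurrently executing task while the code below keeps
        // using it, so the compiler reports a potential data race.
        cache.entries["greeting"] = "hello"
    }
    print(cache.entries.count) // concurrent access from the caller
}
```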


ageesen:

> Don’t get me wrong - the output quality is incredible and I’m shipping faster than ever (like 100x faster). But this pace feels unsustainable. It’s like having a coding superpower that you can’t put down…. and I know it’s only going to get better.

Faster, faster, faster, more 'productivity', more 'output'… It's all because of this. This frenzy of vomiting stuff indiscriminately instead of focusing on creating meaningful applications.

Kocienda:

> I write fewer lines of code than ever—by hand in the old-fashioned way—yet I create more code than ever.

Et tu, Ken? "I create more code than ever" — no, you don't. No, you don't.

These people and their AI coding orgasms think they're living the dream: make good apps quickly and cheaply. Well, I don't know about you, but if you're going to ask _me_ for money for 'your' app that was 90% created by automated tools, I won't give you a penny. How about that?

You think that's an unfair attitude? How about this: I have an idea for a sci-fi story, tell some chatbot to write the whole novel, quickly proofread it, then put the book on sale for $25. Would you want to buy that?


@Riccardo How are you going to know from the outside how they made their app? Can you tell from a restaurant menu whether the chef uses a microwave? Ultimately, unless there ends up being some sort of “organic” labelling about how the product was made, we can only judge the end result. And software problems, unfortunately, often don’t manifest right away. But I guess my overall attitude here is that with people like Kocienda and Webster who have a history of doing good work, if they say the tools are helpful and can be used responsibly my default is to believe them rather than assume they must have gone crazy.


Thomas Ptacek with a really dumb take, and I've seen many who should know better, like Peter, make the same one. You are missing the point; an intern learns. You may handhold them at first, but they learn, and if they are successful, they go on to do great work on their own, or in teams. These models do not learn. You waste hours trying to iron out the dumbest of issues, and a minute later, it makes the same mistake again. If you find that productive, Ptacek, then you are certainly not among the "smartest people" you know.


@Hammer, I agree 99% with your comment. The 1%? You have to read the 10-post (actually 11) thread by Cesare Forelli. (Nobody believes in long posts to blogs anymore. What was life like before Twitter? Or weblogs? Or the World Wide Web? Only those who were born before 1980 may have an accurate answer.)

Anyway, the posted link takes you from a Mastodon thread to https://iosdev.space/@cdf1982/114752728079400321, and post 5 of 10/11 has this quote:

"....I pasted the attached prompt (in Italian though, as sometimes I don’t even think about the language with these things, since they do not care)..."

So you needn't even know English. The language of the future isn't Swift, nor English. It's AI prompts. That is, until AI learns to listen... and then...?

Leave a Comment