Software Is Changing (Again)
Y Combinator (transcript, slides, via Duncan Davidson, Hacker News):
Drawing on his work at Stanford, OpenAI, and Tesla, Andrej [Karpathy] sees a shift underway. Software is changing, again. We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.
He explores what this shift means for developers, users, and the design of software itself—that we’re not just using new tools, but building a new kind of computer.
He says that LLMs are, in a way, the new operating systems.
Thomas Ptacek (via Nick Lockwood, Hacker News):
Some of the smartest people I know share a bone-deep belief that AI is a fad — the next iteration of NFT mania. I’ve been reluctant to push back on them, because, well, they’re smarter than me. But their arguments are unserious, and worth confronting. Extraordinarily talented people are doing work that LLMs already do better, out of spite.
All progress on LLMs could halt today, and LLMs would remain the 2nd most important thing to happen over the course of my career.
[…]
but the code is shitty, like that of a junior developer
Does an intern cost $20/month? Because that’s what Cursor.ai costs.
Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic. Using agents well is both a skill and an engineering project all its own, of prompts, indices, and (especially) tooling. LLMs only produce shitty code if you let them.
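That last point is worth making concrete. One common form of the “tooling” Ptacek alludes to is a hard gate: agent output only lands if the project still builds and the tests still pass. A minimal sketch (mine, not Ptacek’s; the commands and paths are illustrative):

```swift
import Foundation

// Run a command via /usr/bin/env and report its exit status.
func run(_ command: [String]) throws -> Int32 {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = command
    try process.run()
    process.waitUntilExit()
    return process.terminationStatus
}

// The gates every agent-produced change must clear before review.
let gates: [[String]] = [
    ["swift", "build"],  // it must compile
    ["swift", "test"],   // and the test suite must pass
]

for gate in gates {
    guard try run(gate) == 0 else {
        print("Rejected: `\(gate.joined(separator: " "))` failed")
        exit(1)
    }
}
print("Change accepted; ready for human review")
```

Anything that fails the gate goes straight back to the model instead of into the codebase, which is one way of making “LLMs only produce shitty code if you let them” operational.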
Gergely Brautigam (Hacker News):
While the post is funny at times, I feel like it’s absolutely and completely missing the point of the skepticism. Or at least I feel that it is glossing over some massive pain points of said skepticism.
You gotta look at the iOS app. This is a completely agent-built port of the web frontend.
ageesen (via Peter Steinberger):
I’ve been using CC [Claude Code] NON-STOP (think 3 or 4 five-hour sessions a day) over the last 11 days. Mostly Opus 4 for planning and Sonnet 4 for coding. I have a workflow going that is effective and pushing out very good quality code.
I just installed ccusage out of curiosity, and was blown away by the amount of daily usage.
Any of you feeling the same kind of urgent addiction at the moment?
Like this overwhelming sense that everything in AI tech is moving at light speed and there literally aren’t enough hours in the day to keep up? I feel like I’m in some kind of productivity arms race with myself.
Don’t get me wrong - the output quality is incredible and I’m shipping faster than ever (like 100x faster). But this pace feels unsustainable. It’s like having a coding superpower that you can’t put down… and I know it’s only going to get better.
Ken Kocienda:
Well, over the last year or so, I’ve made the biggest-ever change to the way I write software. I now code with AI assistance all the time. Here’s why. Here’s how.
[…]
I write fewer lines of code than ever—by hand in the old-fashioned way—yet I create more code than ever. What’s more, as far as I can tell, there is no detectable reduction in quality. I’m just faster at making changes, fixing bugs, and turning out more features.
[…]
I still think of the feature ideas. I still plan how I want the features to be implemented. I still read over all the code before I commit—and I still take the same responsibility over the code I merge—but I don’t write each and every if/then or function call anymore. No more typing out boilerplate code, either. I no longer have to. The AI does this grunt work for me.
My mind feels freed up. I remain at the higher levels of abstraction, with more time to think about ideas and plans. There’s less cognitive overhead in attempting things, so I attempt more things.
I still don’t really get how to apply this to my work. Most of what I’m doing is already thinking vs. typing grunt work. Describing how to change or enhance my existing codebase seems more daunting than just doing it directly. Is reviewing code written by an AI actually like reviewing code written by another human? And how does it help you fix bugs?
These days, I do most of my coding in python. I don’t love the language—maybe someday I’ll say why in more detail. However, since the models know python so well, it is possibly the most effective language to use for AI coding. Unlike other languages.
At work I’m developing a new iOS app on a small team alongside a small Android team doing the same. We are getting lapped to an unfathomable degree because of how productive they are with Kotlin, Compose, and Cursor. They are able to support all the way back to Android 10 (2019) with the latest features; we are targeting iOS 16 (2022) and have to make huge sacrifices (e.g., Observable, parameter packs in generic types). Swift 6 makes a mockery of LLMs. It is almost untenable.
This wasn’t the case in the 2010s. The quality and speed of implementation of every iOS app I have ever worked on, in teams of every size, absolutely cooked Android. I have to give Google credit: they took all of the flak about fragmentation they got for a decade and grinded out the best mobile developer ecosystem in the world, and their lead seems to be increasing at an accelerating pace. I am uncomfortable with how I have positioned my career, to say the least.
To be clear, I’m not part of the Anti Swift 6 brigade, nor aligned with the Swift Is Getting Too Complicated party. I can embed my intent into the code I write more than ever and I look forward to it becoming even more expressive.
I am just struck by the unfortunate timing with the rise of LLMs. There has never been a worse time in the history of computers to launch, and require, fundamental and sweeping changes to languages and frameworks.
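For readers who haven’t tracked Swift’s recent changes, here is a rough sketch (mine, not the commenter’s) of the parameter-pack sacrifice described above. Pack-generic functions (SE-0393, Swift 5.9) run on older deployment targets, but a generic type that abstracts over a pack (SE-0398) requires the iOS 17 runtime, so it’s off the table for an app targeting iOS 16:

```swift
// A generic *type* over a parameter pack needs the iOS 17 runtime;
// an app deploying back to iOS 16 has to forgo patterns like this.
struct TupleWrapper<each T> {
    let values: (repeat each T)
}

// A pack-generic *function* back-deploys fine.
func makeTuple<each T>(_ value: repeat each T) -> (repeat each T) {
    (repeat each value)
}

let tuple = makeTuple(1, "two", 3.0)  // inferred as (Int, String, Double)
```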
There were pros and cons to Apple’s approach over the last decade. But now there’s a new, and major con: because Swift 6 only debuted last year, there’s no great corpus of Swift 6 code for LLMs to have trained on, and so they’re just not as good — from what I gather, not nearly as good — at generating Swift 6 code as they are at generating code in other languages, and for other programming frameworks like React.
To hear the AI fans tell it, I, the developers we write about, and nearly everyone else will be out of jobs before long. Some days, that threat feels very real, and others, not so much. Still, it’s caused a lot of anxiety for a lot of people.
Rand Fishkin (via Hacker News):
Over the weekend, I went digging for evidence that AI can, will, or has replaced a large percent of jobs. It doesn’t exist. Worse than that, actually, there’s hundreds of years of evidence and sophisticated analyses from hundreds of sources showing the opposite is true: AI will almost certainly create more jobs than it displaces, just like thousands of remarkable technologies before it.
Brian Webster:
I’ve been an independent Mac developer for going on twenty years now (yikes!).
[…]
My initial reaction was pretty skeptical, since it's clearly fully into its hype cycle at the moment, and the previous hype cycle of crypto/blockchain/Web3/NFTs has pretty much proven to mostly be a way to run more elaborate scams. As time has gone along, though, it's undeniable that this LLM stuff has actual utility, even if it's being thrown at everything under the sun by CEOs in hopes of being able to pay less money to their employees.
[…]
But the code itself isn’t actually the satisfying part: it’s the process of creating something new, and it’s all the things I outlined above about diving deep into a particular area, and solving problems for people.
Probably the biggest limitation of being indie is the fact that you only have so many hours in the day, and there will always be more stuff you want to do than you possibly have time for. What has started to get me excited about using AI tools to assist with coding is that they can take a lot of the grunt work out of the process of doing what I ultimately want to do, which is to try to apply my expertise to solve problems. While my coding expertise is obviously a decent part of why I’m able to do what I do, the truth is that the individual lines of code that I type out are not really what lets me add something to the world; it’s being able to help people via the mechanism of encoding my expertise into software.
[…]
I’m only just getting started with this stuff, having been working full time with Claude Code for all of a week now, but I’ve already implemented features in hours that would have normally taken me days, with basically the same quality of code output in terms of readability, maintainability, etc.
Peter Steinberger:
A friend asked me to show off my current workflow, so I did an impromptu workshop for him and his developers. This is a snapshot of how I approach vibe coding these days.
I legitimately think that agentic LLMs are the future of personal computers, the new operating system. Using Claude Code to interact with your own software over MCP, and seeing it autonomously solve problems with it, is transcendent. The rest of the computer feels so antiquated; handmade GUIs feel cumbersome. Our computers will use our computers soon.
The thing is, people don’t understand that you don’t actually have to pay that much to get incredible AI productivity. After using the best AI subscription deals 2025 has to offer, here’s the real math (all prices in USD). (And yes, I built Vibe Meter to track exactly how much I’m spending.)
Previously:
- Swift Assist, Part Deux
- Model Context Protocol (MCP) Tools for Mac
- Claude 4
- Is Electron Really That Bad?
- Tim, Don’t Kill My Vibe
- Vibe Coding
- Swift 6
Update (2025-06-27): Nikhil Suresh:
I think this essay sucks and it’s wild to me that it achieved any level of popularity, and anyone that thinks that it does not predominantly consist of shoddy thinking and trash-tier ethics has been bamboozled by the false air of mature even-handedness, or by the fact that Ptacek is a good writer.
The main feature of AI is the license eraser. FOSS software for almost everything was available all the time. But you wouldn’t use it.
The only empirical evidence of “increased productivity” I’ve seen from AI lovers is a huge number of articles praising AI.
Cesare Forelli:
I appreciate that, to most, reading other developers’ AI success stories is far less interesting or exciting than it is for those who experienced them, but after reading @mjtsai’s Software is Changing Again I decided to share one that blew my mind today.
Setup: customer has a 4D database; for them I built an iPadOS app used by production workers, plus a Vapor “middleman” that makes that app talk with the database.
I feel that with Claude in Cursor, I can finally work as fast as I can think.
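For context, the “middleman” pattern Forelli describes is just a thin HTTP service the iPad app can talk to, which in turn talks to the database. A minimal sketch of what one such Vapor route might look like (my illustration, not his code; the DTO, endpoint, and the 4D server’s HTTP interface are all invented):

```swift
import Vapor

// Hypothetical DTO; the real app's models aren't shown in the thread.
struct ProductionOrder: Content {
    let id: Int
    let status: String
}

func routes(_ app: Application) throws {
    // The iPadOS app calls this endpoint; the server forwards the
    // request to the 4D database's HTTP interface and relays the result.
    app.get("orders", ":id") { req async throws -> ProductionOrder in
        guard let id = req.parameters.get("id", as: Int.self) else {
            throw Abort(.badRequest)
        }
        // "db4d.internal" stands in for the customer's 4D server.
        let upstream = try await req.client.get("http://db4d.internal/orders/\(id)")
        return try upstream.content.decode(ProductionOrder.self)
    }
}
```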
42 Comments
Here's the thing for me… why call it "vibe coding"? Why not "fake coding", or more accurately "AI coded this"? Not trying to be snarky, just wondering. If you want snarky, I have a few ways to describe it:
-- "I'm too lazy to learn and wish to just create coding"
-- "I just want to make money as fast as possible and can't be troubled to learn code"
But what's up with "vibe"? And when another 5 years pass by and this pans out to something different than the voices today think it's headed, what will be the new name?
@Dave It’s a bit confusing because Karpathy is the one who coined “vibe coding,” but most of what’s being discussed here is not that. Kocienda and others reviewing the generated code is the opposite of vibe coding.
"but the code is shitty, like that of a junior developer
Does an intern cost $20/month? Because that’s what Cursor.ai costs."
What an absolutely dire period of shitty software we're walking into.
Currently the cost of AI is massively subsidised. It’s a common pattern in tech, Uber being the clearest example. Once the true cost of AI is passed on to the consumer I suspect using it liberally will become an expensive proposition. Becoming so reliant on it now, such that one’s problem-solving abilities atrophy, seems more like a risky bet than a wise move.
I currently believe that AI is and will continue to be a net negative. I’d recommend reading Ed Zitron’s posts to anyone else who finds the AI hype relentlessly vapid.
"Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic."
Part of being a senior developer is making less-able coders into senior developers.
To me this particular usage makes sense. I'm not a programmer. But part of the reason why it never appealed to me is that it seemed like so much time spent doing such basic things.
I've tried asking ChatGPT to straight up do some things I've needed to do non-programming but tech related. Suffice it to say, you would not want to turn an entirely untrained person loose with ChatGPT's instructions and no further experience or guidance. You're gonna have a bad time.
So the real question to me becomes, where are the expert programmers to guide these things going to come from when the next generation is raised on generated code?
By then are the computers good enough to do it completely themselves, and we quickly enter a Von Neumann type situation?
> but the code is shitty, like that of a junior developer
> Does an intern cost $20/month? Because that’s what Cursor.ai costs
I think Thomas Ptacek misses the point of an internship, or the role of junior devs in general.
Perhaps a hot take, but doesn't the fact that LLMs struggle with languages that haven't been around for a long time better illustrate that they aren't inherently intelligent? At least, it seems more compelling than the recent Apple white paper on the Tower of Hanoi. If they truly were "learning" then they would be adept at applying abstract and algorithmic concepts which are independent of the language.
> you don’t actually have to pay that much to get incredible AI productivity.
Yeah, because they're currently in the "we have so much funding we're basically giving away products to get people addicted to it" phase. Remember how that ended up last time. It's foolish to fall for it again.
personally i don't think this is an indication that AI is getting so much better, but rather that the quality of software is getting that much worse. and i get it. apps feel disposable now, so why bother with craft? just ship it.
> I still don’t really get how to apply this to my work. Most of what I’m doing is already thinking vs. typing grunt work. Describing how to change or enhance my existing codebase seems more daunting than just doing it directly.
I’ve used ChatGPT to speed up certain tasks. And it can save time, but it also can waste time. On a couple issues I had it output some code and with full confidence explain that the way it did it was “correct.” Of course I had to fix it.
You can waste a lot of time modifying prompts to try to get something usable and often you are just better off writing the whole thing yourself.
For basic tasks it can generate starting points that save time and help automate tedious stuff. For porting code from a foreign programming language it can be very helpful, and when it makes mistakes (it almost always does) you see them in your native language after translation and you can fix them. It isn’t practical (at least for me) to become an expert in *every* programming language, but AI allows me to grab at things without having to read the Holy Bible on said programming language.
With that said, I’m not a CocoaPods guy. I don’t like grabbing pods of shit from other people and just adding them as dependencies. I use my own frameworks most of the time and very rarely rely on third party stuff. When I do rely on third party stuff I look at the code thoroughly to make sure it’s good. So when AI ports something for me, I have to take the time to read the code and make sure what it wrote makes sense.
So I do think AI can be a useful tool, but it isn’t clear to me how someone with **no programming skills** at all can just talk to an LLM and make a real app. You have to be really determined to not want to learn anything to do that. You’re just going to run buggy AI-generated app code you don’t understand and just release it when you feel like it works well enough?
Even if someone was tenacious enough to do that, how on earth would they ever be able to maintain the app? When a user files a bug report are you just going to cross your fingers and ask your LLM to fix it…and pray that it gets it right?
I don’t think devs are going anywhere anytime soon.
Also, to add: LLMs are great at generating mock data for unit tests.
But you have to care about what you’re doing to write unit tests…
So IMO AI is not totally useless to programmers; I don’t agree with that crowd. But I just don’t see it becoming the new “programming language” anytime soon either.
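To make the mock-data point above concrete: the kind of repetitive fixture an LLM produces quickly and reliably looks like this (my sketch; the model type and values are invented):

```swift
import XCTest

// Hypothetical model under test.
struct Invoice: Codable, Equatable {
    let number: String
    let amountCents: Int
    let paid: Bool
}

final class InvoiceParsingTests: XCTestCase {
    // Tedious-to-type fixture data: the part worth delegating to an LLM.
    let mockJSON = Data("""
    [
        {"number": "INV-0001", "amountCents": 12999, "paid": true},
        {"number": "INV-0002", "amountCents": 450, "paid": false},
        {"number": "INV-0003", "amountCents": 78000, "paid": true}
    ]
    """.utf8)

    func testDecodesMockInvoices() throws {
        let invoices = try JSONDecoder().decode([Invoice].self, from: mockJSON)
        XCTAssertEqual(invoices.count, 3)
        XCTAssertEqual(invoices[1], Invoice(number: "INV-0002", amountCents: 450, paid: false))
    }
}
```

You still have to read the fixtures to make sure they exercise the cases you care about, which is the commenter’s caring-about-your-tests point in miniature.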
Do any of these tools work with an Obj-C codebase? Do you have to give all your code to a third party?
Fully agree with Brian Webster. AI allows me to do things faster. My language has two problems: it went through a change a couple of years ago, and it's similar but not identical to Visual Basic. Both confuse the LLMs. Nevertheless, my productivity has increased.
The UI of my new application is in large part HTML. I could have written the CSS myself, but only with a lot more time and a lot more cursing.
I hope that the vibe coding fad will go away like the no-code stupidity.
@Michael Tsai
Well, about Kocienda and vibe coding, it seems you are forgetting something stated in his article. And that bugged me a lot lately: why is Claude unable to do anything good when I use it, while others are claiming it's so good?
"Mostly python
These days, I do most of my coding in python. I don’t love the language—maybe someday I’ll say why in more detail. However, since the models know python so well, it is possibly the most effective language to use for AI coding. Unlike other languages."
Here it goes. All the people stating that LLMs are helping so much at coding are developers using managed programming languages, where all the complexity of programming is already hidden by the language itself. From there, it's easy to state "I couldn't have done better", as indeed, programming in those languages doesn't impact performance or memory usage much unless you are doing something really wrong.
But try to do C/C++/Odin programming and it's a complete disaster, as you can quickly see that LLMs can't reason. Worse, in my experience, LLMs have created bugs in perfectly valid code using manual memory management.
Which brings us to a second conclusion: despite what Apple is claiming, Swift isn't an easy language to learn or use, as LLMs have tremendous difficulty generating good code for it. It's not only an issue of "Swift 6" being too new, but of so much complexity being built into the language that you need to know about, unlike Python or JavaScript.
IMHO, Apple is going to have serious issues pretty soon. Native iOS coding is already in bad shape, their relationship with developers is horrible, and now they won't be able to compete with LLMs + managed languages with their broken stack.
PS: Vibe coding = not seeing the code at all. If you see the code or fix it manually => you are not vibe coding.
"I’m shipping faster than ever (like 100x faster)"
"I’ve already implemented features in hours that would have normally taken me days"
These claims are absolutely wild to me. How slow were you before using LLMs that they could speed you up to that degree?
I use agentic coding for some things, like writing unit tests, but not because it's faster. I do it because I hate writing unit tests. It usually ends up being slower because generating the code takes almost as much time as it would have taken me to just type it out, and reviewing it again takes almost as much time as just typing it.
LLMs do speed up my coding overall because they cut down on work that would typically take more time, like looking up API documentation. But they speed me up maybe 10-20%, not 100x.
"Currently the cost of AI is massively subsidised"
That seems like a non-issue, given how fast cost seems to be falling. But even if all of these companies die tomorrow or crank up their prices, I'll just use a local model and keep doing the same thing I'm doing now.
"What are the best offline, local model options here?"
There are a ton of options from all the companies that provide local models, and "the best" constantly changes. The current DeepSeek-Coder is always worth a look. Devstral is also pretty good. But there are also coding-oriented versions of Qwen, Gemma, and Llama. You'll have to try and see which model works best for you.
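For anyone wondering how much ceremony the local route involves: not much. Runners like Ollama expose a small HTTP API on localhost, so tooling can talk to a local model the same way it talks to a hosted one. A minimal sketch, assuming an Ollama-style server on its default port (the model name and prompt are illustrative):

```swift
import Foundation

// Request/response shapes for Ollama's /api/generate endpoint.
struct GenerateRequest: Codable {
    let model: String
    let prompt: String
    let stream: Bool
}

struct GenerateResponse: Codable {
    let response: String
}

var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONEncoder().encode(GenerateRequest(
    model: "deepseek-coder",  // whichever local model you settled on
    prompt: "Write a Swift function that reverses a string.",
    stream: false             // one JSON reply instead of a stream
))

let (data, _) = try await URLSession.shared.data(for: request)
print(try JSONDecoder().decode(GenerateResponse.self, from: data).response)
```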
@Damien I share your sentiments. Dumpster fire langs like Swift suffer in the LLM era:
- Adding endless features, syntax variations, and keywords doesn't help machines (or humans) produce good code. It allows too many pattern variations.
- Fumbling the ball constantly and churning the language every year doesn't help train models.
- Leaving tooling in a brittle state hobbles agentic integrations.
I'm sure a Swiftluencer with a course to sell will share a Claude.md file to limp the agents along (and cheerlead Swift when he posts it).
The best part of this era is we now get to dunk on the smug Swift pricks:
Turns out the "language of the future" is not Swift; it's English!
Time to learn English, Swifties! Every line of Swift you write is now obsolete!
;)
> PS: Vibe coding = not seeing the code at all.
You are still vibe coding if you see the code but don’t understand any of it.
@Damien What makes you think I’m forgetting the part of his article that I quoted? (And, BTW, I was speculating here that that would be a problem a year or two ago.) I do think it’s a somewhat open question whether the problem with LLMs and Swift is the lack of modern training data vs. something inherent to the language. On the one hand, it seems intuitively obvious that it’s easier to generate Python and JavaScript. And it’s a problem that even expert humans seem to have trouble with the newer Swift stuff. On the other hand, maybe having a very strict compiler is helpful in a world of generated code because it lets you find latent mistakes. Overall, I agree that there are problems in Apple development land. I think they focused on a lot of the wrong things over the last 10 years or so.
ageesen:
> Don’t get me wrong - the output quality is incredible and I’m shipping faster than ever (like 100x faster). But this pace feels unsustainable. It’s like having a coding superpower that you can’t put down… and I know it’s only going to get better.
Faster, faster, faster, more 'productivity', more 'output'… It's all because of this. This frenzy of vomiting stuff indiscriminately instead of focusing on creating meaningful applications.
Kocienda:
> I write fewer lines of code than ever—by hand in the old-fashioned way—yet I create more code than ever.
Et tu, Ken? "I create more code than ever" — no, you don't. No, you don't.
These people and their AI coding orgasms think they're living the dream: make good apps quickly and cheaply. Well, I don't know about you, but if you're going to ask _me_ money for 'your' app that was 90% created by automated tools, I won't give you a penny. How about that?
You think it's an unfair attitude? How about I have an idea for a sci-fi story, tell some chatbot to write the whole novel, I quickly proofread it, then put the book on sale for $25. Would you want to buy that?
@Riccardo How are you going to know from the outside how they made their app? Can you tell from a restaurant menu whether the chef uses a microwave? Ultimately, unless there ends up being some sort of “organic” labelling about how the product was made, we can only judge the end result. And software problems, unfortunately, often don’t manifest right away. But I guess my overall attitude here is that with people like Kocienda and Webster who have a history of doing good work, if they say the tools are helpful and can be used responsibly my default is to believe them rather than assume they must have gone crazy.
Thomas Ptacek with a really dumb take, and I've seen many that should know better, like Peter, make the same one. You are missing the point; an intern learns. You may handhold them at first, but they learn, and if they are successful, they go on to do great work on their own, or in teams. These retarded models do not learn. You waste hours on trying to hone the dumbest of issues, and a minute later, it makes the same mistake again. If you find that productive, Ptacek, then you are certainly not among the "smartest people" you know.
@Hammer, I agree 99% with your comment. The 1%? You have to read the 10 (actually 11) post thread by Cesare Forelli. (Nobody believes in long posts to blogs anymore. What was life like before Twitter? Or weblogs? Or the World Wide Web? Only those who were born before 1980 may have an accurate answer.)
Anyways, the posted link takes you from a mastodon thread to: https://iosdev.space/@cdf1982/114752728079400321 and post 5 of 10/11 has this quote:
"....I pasted the attached prompt (in Italian though, as sometimes I don’t even think about the language with these things, since they do not care)..."
So you needn't even know English. The language of the future isn't Swift, nor English. It's AI prompts. That is until AI learns to listen... and then...?
@Michael Tsai — Some sort of ‘organic’ labelling about how the product was made is actually a great idea for the future. In the meantime, I'll just have to do my homework or ask the developer directly.
I'm aware that these tools can be helpful — it's the ‘can be used responsibly’ part that's going to be problematic. For what it's worth, I'll be demanding as much transparency as possible, because charging for an app for which the human part was essentially just the idea and a modicum of supervision over the automation, to me feels fraudulent at best and unethical at worst.
I wonder: if so many languages/frameworks require lots of boilerplate code that can be written by an automated bullsh*t generator, then maybe we really need to work on building higher-level languages and frameworks?
@Léo that is an interesting point, and I think you're right but also wrong about the timeline.
That intern may still be bungling at the end of the first day, week, or even month about some things. It takes time to learn. Six months seems like a reasonable amount of time to wait to evaluate a human's progress at a job. Maybe not a specific task, but I'm thinking a little more generally.
These models do get better. A six month review of AI models shows a lot of progress.
Sorry for double post but I think the true takeaway from my above post should be about the value of that human vs the AI.
It sounds like I'm advocating to replace interns with AI. I'm not. That would be a disaster. It's all about people and it has to continue to be or what's this all for?
I'm just saying, they do make progress, and like all the other fads with an actual solid core to them over the relatively short history of computing, we have to find a way to make the best of this one. Vibe coding is the fun word for it now, this is just the tip of the iceberg.
@Riccardo
"charging for an app for which the human part was essentially just the idea and a modicum of supervision over the automation, to me feels fraudulent at best and unethical at worst."
This is interesting to me. I understand where you are coming from. And right now that does seem to be the reality of it. But I wonder about the level of abstraction. Many objects that used to be made by hand are now made by machine, and a human operates the machine. Is it unethical?
What is the role of man vs machine? The machine cannot create. Without your idea, the machine just sits there. Now instead of physical manufacturing machines we have word manufacturing machines. Without your guidance, it doesn't know what words to make.
I know this is about code, same idea though. We (I at least) have long argued against software patents because it's all the same math. The novelty is in the idea, which is covered under copyright. (Yes I know AI copyright is a landmine too, bear with me.)
Does the principle not hold? It's all the same numbers, but you the human are the one who decided they needed to be put in the right order to do what a human decided needs to be done.
I guess my point is, where's the line? It keeps moving and it's very uncomfortable to be up close to it.
@bart I tend to agree with you about software patents, but what do you mean about the idea being covered by copyright? I thought copyright was for the expression of ideas, i.e. the code rather than the algorithm.
@bart It depends on the interns. I've had interns that pick up Objective-C, our [over]complicated designs and UI feel faster than some "señors". Even if it takes months, you still get someone that is far more competent in the long term than these models that are insufferably stupid, and will remain such. Even with juniors, the amount of fiddling one requires is minimal after a while. You give them a task, it may be difficult for them at first, so it takes a while, they might ask questions or ask for assistance, but it goes down with time. Whereas the AI slop is always terrible (or has been for the last few years): you have to be constantly on guard for absolutely retarded patterns the models introduce in their "solutions", constantly refine your "prompts", and it never learns (obviously), so the next prompt will be just as terrible. And the next 28397 prompts will be just as terrible. Whereas if a junior remained that bad after a period with little to no improvement, they would just be shown the door.
@Michael Tsai
I guess I interpreted the fact you quoted Kocienda and others reviewing the code as a validation of the quality of vibe coding. Kocienda is well known for his low level programming on iOS but now he's doing mostly high level programming like others and like I said, there isn't much difference between good or bad code in a managed language. So my bad 🙏
@Michael yes when I re-read that I realized I muddled it a bit. The idea would actually be patents. I really meant the implementation.
The thing with software patents is almost like obscenity, you just know someone is ripping something off when you see it. It's impossible to do some things more than one way, but of course there's more to an application than just the core algorithms. Actual patents remove all the nuance. Copyright is more appropriate because it's far more similar to writing a book than inventing an object.
So if someone uses an AI-enhanced IDE and the AI writes all the boilerplate but it was the human's idea, I still think that counts. And at some point, if history and science fiction are any indication, the rising tide of automation raises all ships, and we just take things to the next level of abstraction. Buggy whip makers and all that.
Again I misused the word idea. When I say idea, I really mean the creative force the machine lacks. You can give it one prompt and then just take what it gives you, but that's not even "vibe" coding. The "vibe" is the human force of creativity the computer lacks, the ideas, the imagination. That's the key difference. For now, they can't do it alone any more than a loom can make clothes by itself.
Apparently my newsreader ate my original post.
Short version, @Michael, yes I misspoke. I meant the implementation. But my corrective post above gives the gist.
Writing software is more like writing a book than making an object. Patents remove all the nuance because there's only one right way to do some core math, but the principles apply broadly. Like using a common phrase in a novel.
@Léo exactly. The real difference is what you have in six months or a year. You have an actual human with human experience.
The AI depends a lot on context. Right now at the end of that six months you've just enriched OpenAI or whomever with that experience.
Which is why I think trying to replace humans with AI is a big mistake in most cases. That's the real difference here, I don't like the insinuation by Ptacek that you can replace junior coders with AI. That is the one and only actual problem. That is not a solution, it's a short term gain with long term disaster. It's offshoring for the 2020s.
Hire those interns, teach them and let them use AI to help everyone, but for god sake don't try to replace human beings with this stuff.
I think we almost entirely agree here, but I'm still hoping this AI thing turns out to be a big help and not a huge disaster, if we take the right approach.
I feel like people in tech can get really obnoxious with their predictions about “da future.” It’s exhausting.
~Ten years ago it was obvious to some that tablets would replace desktop computers. It didn’t happen.
It was obvious to some that it was smart to write apps in Swift v1 and ObjC was “legacy.” That didn’t age well.
Truth is some people like iPads but they didn’t make desktop/laptops obsolete.
I see the same story here.
I don’t expect AI to replace developers. It’s an available tool that can be used or abused.
If it gets good enough to write software completely with natural language prompts I have a hard time imagining that that would actually save time, assuming it got “that good”. Would it be like speaking to an LLM in Applescript? Everyone hates writing Applescript.
If it does get good enough to write and maintain software completely autonomously it will extend way beyond software development so we probably shouldn’t worry.
In that world everyone will have a lot of free time on their hands. It would touch everything, and I’d expect you wouldn’t bother asking if something was made by a human or made by AI because the cost of everything will essentially be zero. You could just ask an AI to clone “app name here” and pay nothing. Communism fails because it takes away the incentive to work hard and society isn’t productive. But if machines are doing the work, society is still productive. Communism may finally win! Everyone will be dumb as rocks, though.
Overhyped
@bart
> Many objects that used to be made by hand are now made by machine, and a human operates the machine. Is it unethical?
A human operates a machine, yes, for example in a factory, where they receive a salary from their employer to do the job. Nothing wrong with that.
A so-called craftsman who claims to sell hand-made products, and charges people accordingly, while such products are de facto entirely made by a machine — definitely something wrong with that.
An author who simply drafts the idea for a book and then hires a ghost writer to do the actual writing is already a fraud and certainly not an ‘author’. The ‘idea’ here is not enough to maintain legitimacy or authority. Writing is _all the process_. Every word counts. Every way of expressing something is what makes your ‘voice’ and style, and only yours.
I'm aware that programming is a different process and it's not that one can do math with a particular style. But when I purchase an app, I like to think that I'm paying not just for the idea or the utility, but that I'm also covering part of the cost of development, maintenance, future updates and expansions of functionality — especially if it's not strictly a one-time purchase but a subscription.
And if such costs dramatically decrease because 90% of the process is now largely automated, I'm sorry but I don't feel like paying for your shortcuts. Unless, of course, you reduce the price of your app accordingly.
> And if such costs dramatically decrease because 90% of the process is now largely automated, I'm sorry but I don't feel like paying for your shortcuts. Unless, of course, you reduce the price of your app
How much lower can it possibly go? Software is currently one of the cheapest products you can buy in life. If you get a non-subscription app you can get *years* of free updates. Many of these purchases are under $10. You can’t eat lunch for that. Even subscription apps at like $9.99 a year are pretty cheap, I’d say. A year of support and updates for ten bucks.
I’ve got some apps for sale under $4 and some people complain that that’s too expensive.
__
The reason natural language isn’t used for programming is because natural language is a very inefficient way to accomplish programming tasks. I don’t think describing conditionals in natural language will end up being a shortcut. It will end up taking longer. I’ll come back to this thread and post a video of myself eating a sock if I’m wrong and everyone is describing apps in natural language in 5 years.
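To make that concrete (example mine, not the commenter’s): a rule that is one unambiguous line of code becomes a paragraph of careful English, and the English still has to be checked against the code it produces:

```swift
struct Order { let total: Double }
struct Customer { let isMember: Bool }

let order = Order(total: 120)
let customer = Customer(isMember: true)

// The rule as code: one line, no ambiguity.
let discount = (order.total > 100 && customer.isMember) ? 0.15 : 0.0

// The same rule as a natural-language prompt:
// "If the order total is more than one hundred dollars and the customer
// is a member, apply a fifteen percent discount; otherwise apply none."
print(discount)  // 0.15
```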
I just watched a little bit of the video, they put kids in a room and taught them to “vibe code”… our society is in trouble. I think it’s more likely that the people who try to make apps this way will just run into dead ends, not be able to fix bugs because they don’t got the chops.
But using AI as a tool is useful and I don’t really have a problem with it, any more than I have a problem with a construction worker using an electric saw. I wouldn’t want to buy a vibe coded app though because I know the person who “created it” has no idea how to maintain it.
Vibe coded apps will = more App Store scams. Now you can make a scam app without having to learn how to code!
@ObjC4Life
> You can waste a lot of time modifying prompts to try to get something usable and often you are just better off writing the whole thing yourself.
Do LLMs have any kind of debugging mode?
I've played with them a few times. The output was rather unsatisfying and I cba to spray and pray with prompts.
> Even if someone was tenacious enough to do that, how on earth would they ever be able to maintain the app? When a user files a bug report are you just going to cross your fingers and ask your LLM to fix it…and pray that it gets it right?
Yes. I've seen a person do that to a previously generated Terraform infra. Multiple times.