Cheating Is All You Need
Steve Yegge (via Hacker News, Mastodon):
LLMs aren’t just the biggest change since social, mobile, or cloud – they’re the biggest thing since the World Wide Web. And on the coding front, they’re the biggest thing since IDEs and Stack Overflow, and may well eclipse them both.
[…]
In one shot, ChatGPT has produced completely working code from a sloppy English description! With voice input wired up, I could have written this program by asking my computer to do it.
[…]
All you crazy MFs are completely overlooking the fact that software engineering exists as a discipline because you cannot EVER under any circumstances TRUST CODE. That’s why we have reviewers. And linters. And debuggers. And unit tests. And integration tests. And staging environments. And runbooks. And all of goddamned Operational Excellence. And security checkers, and compliance scanners, and on, and on and on!
So the next one of you to complain that “you can’t trust LLM code” gets a little badge that says “Welcome to engineering motherfucker”. You’ve finally learned the secret of the trade: Don’t. Trust. Anything!
Conversations with ChatGPT probably reveal a lot about how you yourself program. I’ve asked ChatGPT very few actual coding questions like what I see online (“what function does this” or “write this for me”), but I spend a lot of time asking ChatGPT to help me design or name things. If I think back, a lot of my “coding time” is in fact spent just thinking about how to lay things out vs. generating tons and tons of view code or whatever.
One way of looking at this is like ChatGPT being the “ultimate rubber duck” for “rubber duck debugging”. Just someone that will listen to your programming thoughts and bounce ideas off of.
With GPT so hot in the news right now, and seeing lots of impressive demos, I’m curious to know: how are you actively using GPT to be productive in your daily workflow? And what tools are you using in tandem with GPT to make it more effective? Have you written your own tools, or do you use it in tandem with third-party tools?
I’d be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.
But I’m also interested in hearing about useful prompts that you use to increase your productivity.
Horace He (via Hacker News):
I suspect GPT-4’s performance is influenced by data contamination, at least on Codeforces.
Of the easiest problems on Codeforces, it solved 10/10 pre-2021 problems and 0/10 recent problems.
This strongly points to contamination.
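His before/after-cutoff comparison is simple to frame in code. Here is a minimal sketch in Python, where `Problem` and the `try_solve` harness are hypothetical stand-ins for however one prompts the model and checks its output against the Codeforces judge:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class Problem:
    problem_id: str
    published: date

def contamination_check(
    problems: list[Problem],
    try_solve: Callable[[Problem], bool],  # hypothetical harness: prompt model, judge output
    cutoff: date = date(2021, 9, 1),       # assumed training-data cutoff
) -> tuple[float, float]:
    """Solve rates on problems published before vs. after the cutoff.
    Near-perfect results on old problems alongside near-zero results
    on new ones suggest memorized training data, not general skill."""
    def rate(ps: list[Problem]) -> float:
        return sum(map(try_solve, ps)) / len(ps) if ps else 0.0
    old = [p for p in problems if p.published < cutoff]
    new = [p for p in problems if p.published >= cutoff]
    return rate(old), rate(new)
```

The exact cutoff date is an assumption; what matters is the gap between the two rates, not the absolute numbers.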
Sternsafari (via Dan Luu, Hacker News):
My job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Right now all I do is prompting, photoshopping, and implementing good-looking pictures. The reason I became a 3D artist in the first place is gone. I wanted to create form in 3D space, sculpt, create. With my own creativity. With my own hands.
It came overnight for me. I had no choice. And my boss also had no choice. I am now able to create, rig, and animate a character that’s spit out from MJ in 2-3 days. Before, it took us several weeks in 3D. The difference is: I care, he does not. For my boss it’s just a huge time/money saver.
Hawkins’s theory is not, to the best of my knowledge, accepted fact, in large part because it’s not even clear how it would be proven experimentally. It is notable, though, that the go-to dismissal of ChatGPT’s intelligence is, at least in broad strokes, exactly what Hawkins says intelligence actually is: the ability to make predictions.
Since the implosion of web3, the raising of interest rates, and layoffs in FANGs, the tech world has been searching for a new bright spot and has, for now, seized on generative AI.
[…]
So I – and most people I know now – use it just about every day. I have a fraught relationship with it. I send my boss a spec I labored over researching for hours, and he gets back to me immediately with “oh, I found one issue — ChatGPT says we should also support X.”
[…]
The iPhone comparison is probably the most apt one — the iPhone, like ChatGPT, was just a skillful combination of existing, well-understood technologies in a nice package. But it feels like magic.
[…]
But the most overlooked impact of the current generative AI explosion — more than what is actually directly possible with these models — is giving so many people a fun and motivating entry point into the field. Regardless of anything OpenAI may do, it seems a certainty that there will be more and more “script kiddies” at every level learning how to customize AI for their own problems, either by tweaking someone else’s models, fine-tuning, or making their own.
Previously:
- GitHub Copilot X
- GPT-4
- ChatGPT for Apple Platforms Development
- What Is ChatGPT Doing and Why Does It Work?
- ChatGPT
- Midjourney and Stable Diffusion
Update (2023-04-27): Ldorigo:
Just paste in a chunk of systemd (or whatever) logs and start asking questions. Often just pasting in the logs and pressing enter results in it identifying potential problems and suggesting solutions. It helped me troubleshoot a huge number of issues on Linux desktops and servers that would have taken me a lot longer with Google – even if it doesn’t always give the right solution, 99% of the time it at least points to the source of the error and gives me searchable keywords.
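Ldorigo’s workflow is easy to script end-to-end. A minimal sketch, assuming the pre-1.0 `openai` Python package; the model name, prompt wording, and `journalctl` filters are illustrative choices rather than part of the setup described above:

```python
# Feed recent systemd logs to a chat model and ask it to flag likely
# problems. Assumes the pre-1.0 openai package (pip install "openai<1").
import subprocess
import openai

openai.api_key = "sk-..."  # your API key

def triage_logs(lines: int = 200) -> str:
    # Grab the last N warning-or-worse log lines from the current boot.
    logs = subprocess.run(
        ["journalctl", "-b", "-p", "warning", "-n", str(lines), "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a Linux sysadmin. Identify likely problems "
                        "in these logs, suggest fixes, and give searchable keywords."},
            {"role": "user", "content": logs},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_logs())
```

Pasting logs straight into the chat UI, as described above, gets the same result; scripting it just makes the triage repeatable.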
The fact that the things (code or otherwise) generated by these models are imperfect means we need to be careful with how we use them, but it does not mean they are useless. As always, I like to look at these things as practically as I can, and the video below shows me adding a feature to Quick Reviews in real time; I think it’s a good example of how these tools can help without replacing the need to understand what you’re doing.
Update (2023-06-15): Erik Dörnenburg:
But how would Copilot fare with a less common language and code involving more complicated data structures? To find out, I turned to Crellinor, my genetic programming / artificial life simulator written in Rust[…]
3 Comments
Plume:
I now have multiple LLMs (at least ChatGPT, Open Assistant, and Bing) open at all times, and use them hundreds of times a day. I don't use them in the way most people seem to ("write a function that does X in language Y"); instead, I prompt them more generically with "I need to do X in language Y, how do I do it?", which gives a lot of additional information about APIs, libraries, and related things. Then I write the code myself based on the input from the LLM.
I find that this avoids weird unknown-unknown bugs in my code, means that I fully understand what the code does, and seems to be about as fast as having it write all the code and then iterating until it works the way I want.
The devs at my job say that the LLMs help them "not get stuck". Sounds similar to what @Plume is doing.