Grief and the AI Split
But where I think Amodei’s remarks, quoted above, are facile is that it hasn’t played out so simply, with lines of code that would have been written by human programmers now just being generated by AI models instead. That’s part of it, for sure. But what’s revolutionary (a topic I’ve posted about twice already today) is that AI code generation tools are being used to create services and apps and libraries that simply would not have been written at all before. It may well be that the total number of lines of code written by people today isn’t much different from the number written a year ago. But there might be 10× more code generated by AI than is written by people today.
Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.
Now there's a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.
I wrote about this split before in terms of my college math and CS classes: some of us loved proofs and theorems for their own sake, some of us only clicked with the material when we could apply it to something.
[…]
Here's what I notice about my grief: none of it is about missing the act of writing code. It's about the world around the code changing. The ecosystem, the economy, the culture. I think that's a different kind of loss than what Randall and Lawson are describing. Theirs is about the craft itself. Mine is about the context and the reasons why we're doing any of this.
Orchard’s fine essay examines a philosophical divide within the ranks of talented, considerate craftsperson developers. The divide that I’m talking about has been present ever since the demand for programmers exploded, but AI code generation tooling is turning it into an expansive gulf. The best programmers are more clearly the best than ever before. The worst programmers have gone from laying a few turds a day to spewing veritable mountains of hot steaming stinky shit, while beaming with pride at their increased productivity.
There are additional groups. Some good programmers don’t use AI for coding. They’re against it for philosophical reasons, or it doesn’t apply well to their current project, or they haven’t taken the time or built the skills to really make it work for them. There are also hobbyists and people who know some programming, but are not really professional programmers, who are successfully using AI to help build tools for themselves.
Pssst… most software development was slop before AI.
Sure, your code was artisanal and custom crafted to be beautiful, performant, and not at all created to be shipped in a fortnight between countless meetings with unclear goals and zero understanding of the bigger picture or user needs.
But those other folks…
I have so much gratitude for the people who wrote extremely complex software character by character. It already feels difficult to remember how much effort it really took.
This is such a completely different reality from where I live that, at this point, it’s just difficult to say anything meaningful about it at all.
Previously:
Update (2026-03-26): Terence Eden (Hacker News):
Part of the crypto grift was telling people to “Have Fun Staying Poor”. That weaponisation of FOMO was an insidious way to get people to drop their scepticism.
I feel the same way about the current crop of AI tools. I’ve tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I’m utterly content to wait until their hype has been realised. Why should I invest in learning the equivalent of WordStar for DOS when Google Docs is coming any-day-now?
If this tech is as amazing as you say it is, I’ll be able to pick it up and become productive on a timescale of my choosing, not yours.
Comments
People are upset about advances in AI leading to staffing reductions, layoffs, and AI-washing of the same.
The thing I keep coming back to, and I wish more people would discuss openly, is why wealthy companies can over-hire, fire, hire younger and cheaper, replace with machines, etc. with impunity, at least in the US.
The business of digital software and design has for decades put the responsibility of continuous education on the employee, often without any stipend or other compensation, much less clear expectations or career advancement as a reward. This is notably different from many other professions.
Some people think all regulations are bad. Not all regulations are bad. I like clean air and water. Companies will happily accept dirty air and water if the existence of poison in the environment increases profit.
We have laws that require companies to pay into unemployment benefits funds at the state level, so that their staffing decisions don't overburden ex-employees in the short term to the point where those burdens fall on the state. This makes sense.
These benefits (which are paid for by the companies and are not welfare, regardless of opinions on that topic) are paid out at a fraction of wages, often not enough to cover frugal living expenses, and typically run out after months, not years. Recently unemployed tech workers are experiencing periods of unemployment of one to two years, or even more. Current hiring and firing practices have rendered the unemployment benefits system largely insufficient.
Imagine a world where wealthy tech companies are required to provide compensation (time and money and access) to continuing education. Imagine a world where companies are required to pay into retraining funds, similar to unemployment benefit funds — more if they choose not to retrain workers before laying them off, mirroring unemployment benefits surcharges for companies with excessive layoffs.
Tech has always favored the young, the healthy (read: return to office), and those willing to work 80 hours per week for a 40-hour salary. This trend is accelerating with AI, and we know even these ideal workers are working more for less, leading to more stress, burnout, and health issues. Many young tech workers do not plan to stay in the industry longer than about a decade; get in, get as much as you can, and get out. At what point is that unsustainable? Perhaps never, because robots?
What is happening to laid off tech workers nearing retirement is particularly disheartening. For many, being five or even ten years from retirement age can mean retraining is a higher cost to incur than simply taking the hit and having less as they exit the workforce. Combined with where social security is heading, this is particularly concerning. Even those who saved and invested well may end up with a much different standard of living late in life, and no doubt some will die earlier as a result.
Simply put, the current benefits and risks are asymmetrical to such a high degree that it's practically criminal. But those are "just evolutionary changes in the business of tech" and there is "nothing we can do about it".
We are not adapting fast enough in this rapidly changing landscape. Segue into discussion on the future of work (or not), UBI, the value of a human and what it means to be one, etc.
What an exciting and terrifying time to be alive.
Perhaps a bit more narrowly scoped comment regarding craftsmanship:
Individual craftsmanship in any industry eventually gives way to industrial automation, by and large.
I lament this for our profession in general, but mostly because of how quickly the transformation is taking place. That said, it seems that a lot of people fighting the change are in denial that it has already happened and there is no going back. Most of those willing to die on the hill of craftsmanship without AI will be left behind in droves, at least in terms of professional employment.
As alluded to in my previous comment, I strongly believe companies should be compelled to assist in providing a path forward for these people. I also recognize that these people's resistance to AI-driven change seems to work against this. Though I do wonder how many would resist less if the rug weren't being pulled out from under them, economically and otherwise.
I've said before here that I'm a sys admin rather than a software developer, so I come at this from a different perspective.
I've loved computers since I was young, but math has always held me back. I understand the concepts as they apply, but my brain just can't handle the math. Which was fine for me, because I came up at exactly the right time that computers could do the math for me, as long as I understood the concepts well enough to apply them properly.
I realize it's a complex topic for many reasons, but application-wise, this is how I see AI. Increasingly, the computer is not there to do what I want it to; it's there to push me toward what the few companies who control most technology want me to do.
The point about software being slop before AI is spot on. At some point it's not about how the software got there; it's about what the objective of the software is. Corporate software is not about what the users want anymore; it's not even about what's technically best. It's only about the money. Software companies are no longer founded by people who care about technology; they are founded and run by people who care about money. They could be selling anything. They're salespeople and executives and accountants. The software is more like the latest model car than a carefully crafted information tool.
My point is this: giving the people who actually have to use the software the power to create more software can be a good thing. Of course the same corporations who have been offshoring and cutting every possible corner to increase their personal wealth are going to take advantage of the latest way to essentially legally defraud the company. That's just what they do.
Of course the people who were paying others to write their papers or otherwise cheating will continue cheating/paying their way through college, getting cushy jobs through connections, and generally draining society. AI didn't do that. Like the internet, it's just accelerating it.
We need more true direct democracy on the internet, in software, and in general. This could at least be a way to democratize software development. And those few developers who already knew how to make good software will stand out because the end product will be better. And as we know, "better" has absolutely nothing to do with commercial success.
I read "Grief and the AI Split" and agree with the surface observation that there is a split reaction to AI-generated code, but disagree that the crux is a "make-it-go" versus "love-of-the-craft". At least, I'm mourning with the alleged craft side, but it's not because I love the joy of writing beautiful code myself. In fact, after 40 years of writing code I often find the task tedious.
Rather, I believe that it is meaningful to talk about "software quality" that encompasses multiple dimensions, including maintainability, readability, performance, security, and supportability, among others. A human developer writes quality software piece by piece from other quality software, making tradeoffs at each level. Often we can't see that tradeoffs are even needed until we are deep in the code. I don't see how we can prompt even a very capable AI to make tradeoffs that align with our objectives and values if we don't know that they need to be made.
The strawman argument from the "make-it-go" side is "who cares about quality? code generation is cheap, so just generate a new version of the app tomorrow or just have it fix the buggy software". My concern with that is that each re-generation or fix begets a new quality issue. Maybe today's version of the app has a security bug, tomorrow's has a usability bug, and on and on. The app never gets better, it just is broken in a different way. Or maybe the code-gen is indeed hyperintelligent and builds an app better than we ever could. But then whose values and objectives are embodied in that app?
I've gone through all the stages of grief and am now (mostly) at acceptance and trying to use these tools, but I am not willing to give up the notion that source code is a durable artifact that should be understandable and maintainable by humans. I view the source repository as an artifact to be maintained by humans and agents as peers and I still read every line for software that I use, even though it takes longer. I fear the day when I stop doing that, especially if it's because I can't actually understand the code the machine has generated.
There is good software and bad software. AI just amplifies that. Hand-crafted software can be just as bad as AI-created code.
What absolutely horrifies me are the vibe coders who are so proud of their ugly little ducklings.
Every time I read an extensive CLAUDE.md file, I think human onboarding has never been better. I just don’t know why it’s not called CONTRIBUTING.
It’s up to the AI engineer to build up the framework in which the AI operates. How can you measure well-crafted code? Install code quality tooling and set a bar to clear, write the tests first, etc. Spend some time writing down your philosophy, and then let the AI build code that way. None of this would have been needed if you wrote the code yourself, but if you choose to have the AI be the actor, then your role is to tell it how to play.
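A minimal sketch of what “install code quality tooling and set a bar to clear” could look like in practice, assuming a Python project checked with ruff, mypy, and pytest (the coverage flag requires the pytest-cov plugin; the specific tools and the 80% bar are illustrative choices, not anything prescribed above):

```python
#!/usr/bin/env python3
"""Quality gate: every change, human- or AI-authored, must clear this bar."""
import subprocess
import sys

# Each check is a (name, command) pair. The tools here are examples;
# substitute whatever linter, type checker, and test runner you use.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("types", ["mypy", "src"]),
    # Tests written first, per the workflow above; coverage must stay >= 80%.
    ("tests", ["pytest", "--cov=src", "--cov-fail-under=80"]),
]

def main() -> int:
    for name, cmd in CHECKS:
        print(f"==> {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"Bar not cleared: {name} failed.")
            return 1
    print("All checks passed; the bar is cleared.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point isn’t these particular tools; it’s that the philosophy is written down and enforced mechanically, so the AI has to clear the same bar a human contributor would.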
Saying “10x more code will be written now” reminds me of the story on Folklore.org where one of the original Mac developers was irritated by a management mandate that they had to keep track of the lines of code they wrote each day.
One day he made some code a lot more efficient, cutting down its length, and so he reported that he had written negative lines of code that day. After that, they stopped requiring him to waste time on that silly metric.
It’s a fun illustration of the thing even the John Grubers of the world used to know: measuring by amount of code is absurd and quality is more important than quantity.
John Gruber is so good at getting things consistently wrong
@Bradley
It’s funny that people think UBI is on the table in a world where thousands of people live and die on the streets. The sociopathic billionaire does not care about you. That’s why they love AI. The whole point is to replace you.
And if we embrace it, we’ve handed them our one point of leverage and power: they needed us for labor.
I can’t believe how foolish the techies have been about this. You guys have been so easily fooled by charlatans parading grandiose, utopian language when it serves them, like Zuckerberg saying Facebook’s purpose was to connect people when in reality its purpose was to scrape their data and sell it to the highest bidder.
I hope you can wake up sooner rather than later. If you think using AI is going to save you, you’re not understanding anything.
Now is the time for workers to unionize and refuse to be corralled into betraying each other and themselves.
"And if we embrace it, we’ve handed them our one point of leverage and power: they needed us for labor."
The billionaires who control this technology are not even subtle about it. They're all building their luxurious fallout bunkers to survive the downfall of society. In hindsight, it's funny that we used to think that post-scarcity would result in Star Trek or The Jetsons.
I've never been more glad than now that I decided against having children. With any luck, I'll be dead before the wars begin.
LLMs are pretty cool, though. I'll enjoy being more productive in the time I still have until Bezos can build his superyachts without any human involvement and I'm no longer valuable.
@Nick M Great points about quality and source code being a durable artifact. I think that’s part of what’s missing in the analogy with compilers for high-level languages. The generated machine code is not what we check in to Git.
@Manx I don’t see Gruber making a normative statement about the quality vs. quantity tradeoff. He says there’s a lot more code being written/generated, that it’s not just an acceleration but rather that some new things exist that otherwise wouldn’t at all, and that lots of this code is “veritable mountains of hot steaming stinky shit.” All true.
@Manx
I actually agree with you.
More tech and advancements in tech have not created a utopian society, and yes, there are people on the streets, exactly. I have unfortunately been there quite literally, not long ago. One can go from being relatively well off to being homeless inside of a year in this country. It is surprisingly easy for people to fall through the cracks, especially when major world events become an unexpected factor. People in other developed countries think our policies are barbaric. As one of my (relatively conservative!) doctors once said regarding healthcare, the problem is that the US is not mature enough to fix these fixable problems.
I do not believe UBI can or will seriously be on the table in the US any time soon, regardless of what a few billionaires have said or written in its favor. All motion accelerates toward the opposite. Unions are being trampled and legislated away, monopolistic oligarchies are fed, policy is bought by corporations, the gap between rich and poor widens every year, and individuals have little to no leverage in the modern economy. If you get really sick, we'll fix you right up... and take your house. Many are insulated from all of this and do not see what the problem is. The US might actually be the worst "best place on earth". But hey, I like Neil Young.
That all said, it is difficult to imagine a world where the future of software development does not require the use of AI tools. That's a dose of reality like the rest. If people reject these tools outright, even for good and ethical reasons, many will likely have to find a way to make a living other than writing code the way they have been. I do not necessarily like this reality, especially the part where we do not own the means.
None of the above is mutually exclusive and all of it can be true simultaneously.
Part of my original point was that we as a society have had opportunities to create a better default contract between corporations and their employees: a better system for working society as a whole. Despite many attempts to change the current trajectory, we ultimately allowed corporate interests and various ideologies to slowly chip away at our control and leverage for decades. More recently, employers jumped at the opportunity to directly take back an enormous amount of control and leverage over their employees in these post-pandemic years, RTO being just one example that disproportionately harms groups such as caregivers and disabled people who previously did not need to identify as such.
If we retain democracy, perhaps there can be meaningful change at some point, but humans sure are good at sticking their fingers in their ears until they're fully engulfed in the flames. We're predisposed to accept the slow boil and it's difficult to effect lasting change.
There are undertones of these broader topics in the perspectives on both sides of the AI Split, and I find them compelling. How AI as a group of technologies is rapidly changing my profession is interesting, but the macro around it, for me at least, is more informative about how we got here and where we can and must go in the years to come.
AI will advance many fields and yes, there will still be people on the streets here. The worst part is that there will continue to be people in power saying that those people didn't work hard enough. Perhaps let's start by kicking those people out of power.
@Michael @Nick M On durable artifacts, I've been thinking the same with regard to differences between machine code and what we check in to git. Related (at least in my mind), distilling the mental model of VCS down to a checkpointing system that can be analyzed, audited, rolled back, and forked draws parallels to other professional focuses that require stringent processes and procedures to ensure each step or artifact can be understood by the humans now and in the future.
Enterprise DevOps and Data will likely require significant human management and engineering for the foreseeable future. While infra as code is prevalent and integration and automation with AI is absolutely a primary focus there, much of the core management for on-prem and private cloud is orchestrated by human-managed systems like Kubernetes and OpenShift. The principles of governance, risk, and compliance certainly mandate direct human oversight and command in highly sensitive and regulated environments.
Even if AI coding produced deterministic output (it obviously does not), there are environments outside of SaaS and app development where "artifact" takes on additional meanings and responsibilities due to differing needs and rules. Some of what may be possible with AI may not be adoptable in those environments, at least not yet, or perhaps not in the same forms as elsewhere. It will be interesting to see how these environments evolve and adapt to broader AI offerings as the desire to adopt new advanced technologies increasingly applies pressure against these constraints, and how these constraints impact the development of AI technologies tailored for the enterprise.
@Manx
> The sociopathic billionaire does not care about you. That’s why they love AI. The whole point is to replace you
> And if we embrace it, we’ve handed them our one point of leverage and power: they needed us for labor.
Well, another point of leverage is that they'll need us to keep buying stuff. If everyone is out of a job, nobody will have money to spend on products. So how are these fully AI-workforce-driven companies going to make revenue? Who will buy their products? How will they keep growing?
>they'll need us to keep buying stuff
No, they really don't. That's the destination.
Ultimately, the only reason they need money is to pay salaries. But the goal is to replace people with technology. If they don't need to pay people, they also don't need income. In fact, they don't need people at all. The goal isn't to make you poor, the goal is to get rid of you altogether.
> That said, it seems that a lot of people fighting the change are in denial that it has already happened and there is no going back. Most of those willing to die on the hill of craftsmanship without AI will be left behind in droves, at least in terms of professional employment.
Why? Who are you to make this proclamation?
What I see is a future full of slopware that's brimming with egregious security holes, atrocious performance, and heisenbugs that are all but impossible to understand (since nobody read any of the code in the first place). All based on a training corpus comprised of utterly mediocre code. (Or are we pretending that these algorithms can improve themselves beyond our own capabilities? Have we already forgotten that "the greatest shortcoming of the human race is our inability to understand the exponential function"?)
When I see these sorts of sentiments, I get this uncanny feeling of the AI-god speaking through the many mouths of its glassy-eyed acolytes. It's just so bizarrely, brazenly, assertively apocalyptic when this stuff has barely been in production for a couple of months. Maybe give things time to cook before declaring an entire profession over and done with?
You can now generate 10,000 lines of janky code through some non-deterministic prompting, and the fact that this is possible is a miracle. Good luck dealing with the evolution and technical debt of a codebase that nobody understands, for a product that no one actually needs.
I think it's entirely possible that professional coders, to their great surprise and irritation, will find that drawing endless code from the Great Statistical Amalgam will not actually lead to improved productivity or better software.
"I think it's entirely possible that professional coders, to their great surprise and irritation, will find that drawing endless code from the Great Statistical Amalgam will not actually lead to improved productivity or better software."
That's not a reasonable position anymore. There can be discussion about exactly how helpful current models are, but there is no question that they are helpful.
I'm not going to bore you with my hobby projects written by Claude Code, and how none of them would have been done at all without it.
What I will say is that, in my professional capacity, I've done the three best prototypes of my 25+ year career over the past month.
Fully fleshed-out interactive prototypes that sometimes had a public web frontend, a back-office admin interface, and a companion app for people in a lab.
It has been a tremendous help for me when talking to clients.
As for rich assholes sharing their profit via UBI, that will never happen unless we force them. They don't need consumers anymore. You make money by inventing "economic products", i.e., different forms of gambling.
> That's not a reasonable position anymore. There can be discussion about exactly how helpful current models are, but there is no question that they are helpful.
Did I say LLMs can’t be helpful? No — my hot take is that spinning up endless prototypes based on sloppy, mediocre code is not the same as software engineering. You’re not actually being productive just because you feel productive, and it seems that research on gains from LLM use might bear this out. If LOC was a blocker, Meta would have accomplished a lot more with its thousands upon thousands of engineers.
And that’s to say nothing of forming your entire career around a spigot that will surely skyrocket in price following market capture, all while making the fash-supporters in charge ultra rich and ultra powerful. But I suppose that’s a separate matter.
I’ll say this: we should be pushing strongly for publicly owned AND publicly trained frontier models — the Linux equivalent of Claude. The algorithms are (relatively speaking) not terribly complicated, and powerful GPUs are available in every gaming computer (assuming training could be distributed). It should be table stakes that engineers own the tools they use.
"Did I say LLMs can’t be helpful?"
I don't want to argue semantics, but the way I read your comment is that you implied that using LLMs cannot improve productivity or software quality, which is false. If that's not what you meant, then I might not disagree.
"my hot take is that spinning up endless prototypes based on sloppy, mediocre code is not the same as software engineering"
Some people use LLMs as you describe, and I agree that's dumb, but it's not the only way to use them.
"around a spigot that will surely skyrocket in price following market capture"
I doubt that there can be market capture for LLMs. In fact, companies like Anthropic and OpenAI are kind of screwed, because they're burning billions and have absolutely no moat.
Cost will not skyrocket. GLM-5 is a perfectly cromulent LLM for software engineering, and providers run it at profit with entirely acceptable pricing.
"we should be pushing strongly for publicly owned AND publicly trained frontier models — the Linux equivalent of Claude"
100% agree.
> I don't want to argue semantics, but the way I read your comment is that you implied that using LLMs cannot improve productivity or software quality, which is false. If that's not what you meant, then I might not disagree.
Sure, I've observed STS build his modeling app prototype without looking at any code. A few years ago, this would have been unthinkable. But I think people see this and extrapolate to "programming is dead, all hail LLMs." After the initial excitement of rapid prototyping wears off, I think an engineer will still have to dig in and actually understand (and correct) every single section of code to make a usable product. Unless you're building on top of solid abstractions — which LLMs are categorically not — you can't generate your way out of the "engineering" part of software engineering. (Unless your job is just gluing components and code snippets together, I suppose.)
See also "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity," though that's based on older models.
Whether or not LLMs *can* improve productivity and software quality, I think it's far more likely that they will result in a world bursting at the seams with crappy, bug-filled, homogenous shovelware. Users will find their computing experience degrading, not improving.
> I doubt that there can be market capture for LLMs. In fact, companies like Anthropic and OpenAI are kind of screwed, because they're burning billions and have absolutely no moat.
Well, by hook or by crook, they'll certainly try. I think Altman, for instance, is aiming for market capture by regulation. Capitalists will not allow a free project to pop their trillion-dollar bubble.
"an engineer will still have to dig in and actually understand (and correct) every single section of code to make a usable product"
That's my experience. But I've found that giving precise instructions and then reviewing the code is now usually faster than writing code from scratch, particularly in codebases I'm not super familiar with.
"Whether or not LLMs *can* improve productivity and software quality, I think it's far more likely that they will result in a world bursting at the seams with crappy, bug-filled, homogenous shovelware"
Both are true. LLMs can improve productivity and quality, *and* many people will instead use them to produce large amounts of untested, unstable, and insecure code.
> That's my experience. But I've found that giving precise instructions and then reviewing the code is now usually faster than writing code from scratch, particularly in codebases I'm not super familiar with.
I worked at a FAANG for a few years, and in my experience, reviewing production-grade code is far more exhausting than actually writing it in the first place. And that's assuming the coder is an intelligent engineer who puts actual thought into every line. I can already feel my energy draining from the thought of spending most of my days reviewing reams of machine-generated code.
The ability to rapidly prototype brand new ways of solving things leads to better end products.
There absolutely has to be a step where the thing is actually built. But the fact that the prototype was made in hours hopefully makes it more likely that the code is thrown out, and the end result is built from scratch.
Used to be I spent weeks on wireframes. Now I can give the developers fully working flows to look at after a couple of hours. Prototypes where I've iterated and refined the interactions, and caught many more edge cases than before.
Yes there will absolutely be shovelware, and the hustle bros are having a field day. But for serious devs the LLM thing is a gift.
I think the larger problem will be the senior dev burnout from doing endless code reviews, with the amount to verify just growing and growing.
"reviewing production-grade code is far more exhausting than actually writing it"
You're comparing reviewing LLM-generated code during coding with reviewing a PR. That's not the same. Reviewing PRs is genuinely exhausting. You lack context, so you first have to understand the purpose of the PR. Then you have to look at a dozen or more files that usually have hundreds of changed LOC and put everything into context. It's difficult.
That's not how reviewing LLM-generated code works. You tell the LLM to make a specific targeted change, then you review that diff. You know what you told the LLM to do, so you already know exactly what the code is supposed to do. It's not hundreds of LOC, it's usually maybe two or three dozen LOC. It's not hard.
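As a hedged illustration of keeping each change that small, here is a sketch of a checkpoint script that measures the current working-tree diff and flags anything too big to review in one pass. Nothing here is a published workflow; the 50-line budget is an arbitrary stand-in for the "two or three dozen LOC" above, and it assumes you are running inside a Git repository:

```python
#!/usr/bin/env python3
"""Flag LLM-generated edits that are too large to review as one diff.

Sketch only: run after each agent edit, before committing.
"""
import subprocess
import sys

MAX_CHANGED_LINES = 50  # arbitrary budget, standing in for "two or three dozen LOC"

def changed_lines() -> int:
    # `git diff --numstat` prints "added<TAB>deleted<TAB>path" for each
    # changed file in the working tree (unstaged changes).
    out = subprocess.run(
        ["git", "diff", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        print(f"{n} changed lines: too big for one review pass; split the prompt.")
        sys.exit(1)
    print(f"{n} changed lines: reviewable as a single targeted diff.")
```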
> That's my experience. But I've found that giving precise instructions and then reviewing the code is now usually faster than writing code from scratch, particularly in codebases I'm not super familiar with.
That last part, the codebases I'm not super familiar with, is the thing. I'm working on an update to an old app of mine. I cannot just unleash an AI agent on the entire codebase and let it scrape the whole thing. Actually, I could, but I won't. I find it amazing that so many devs are willing to do this.
There is this thing I'm tweaking, and maybe I could save some time asking the LLM to do it for me, or maybe it'll just waste my time. I know what I'm doing here, and I find it an exhausting chore to write an essay for an LLM, have it do the work, only to find out that it screwed up and I've got to do it anyway.
"Part of the crypto grift was telling people to “Have Fun Staying Poor”."
The reason they are doing that is that they need people to put money into crypto to make their holdings' value go up. What's the equivalent for LLMs? Most people advocating for them don't benefit from your usage; in fact, more usage probably hurts them.
"If this tech is as amazing as you say it is, I’ll be able to pick it up and become productive on a timescale of my choosing not yours."
That's not a logical argument. Planes are amazing, and they make traveling long distances much easier than a cart and horse or a ship, but I don't expect to become a pilot anytime I want.
"I’m utterly content to wait until their hype has been realised."
Good for you.
I don't understand the point of these articles, from either side.
"You must use LLMs now or you will be left behind!"
"Nu-uh, I don't wanna!"
OK, what did we learn from this interaction? Absolutely nothing. Tell me *how* you succeeded and *how* you failed in using them, and I will learn something.