No.

The more time I spend using LLMs for code, the less I worry for my career - even as their coding capabilities continue to improve. […]

No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job — one that other humans will be happy to outsource to an expert practitioner.

It's also about 80% of what I do as a software developer already.

Simon Willison, Identify, Solve, Verify

Coding agents require scaffolding and learning, and often demand more attention than tools do, but are built to look like teammates. This makes them both unwieldy tools and lousy teammates. We should either have agents designed to look like teammates properly act like teammates, or, barring that, have tools that behave like tools. […]

Selling sci-fi is way too effective. And as long as AI is perceived as the engine of a new industrial revolution, decision-makers will imagine it can deliver one, and task people with making it so.

Things won’t change, because people are adaptable and want the system to succeed. We consequently take on the responsibility for making things work, through ongoing effort and by transforming ourselves in the process. Through that work, we make the technology appear closer to what it promises than what it actually delivers, which in turn reinforces the pressure to adopt it.

As we take charge of bridging the gap, the machine claims the praise.

Fred Hébert, The Gap Through Which We Praise the Machine

Radiology has embraced AI enthusiastically, and the labor force is growing nevertheless. This augmentation-not-automation effect of AI holds despite the fact that, AFAICT, there is no identified "task" at which human radiologists beat AI. So maybe the "jobs are bundles of tasks" model in labor economics is incomplete. Paraphrasing something @MelMitchell1 pointed out to me: if you define jobs in terms of tasks, maybe you're actually defining away the most nuanced and hardest-to-automate aspects of jobs, which lie at the boundaries between tasks.

Can you break up your own job into a set of well-defined tasks such that if each of them is automated, your job as a whole can be automated? I suspect most people will say no. But when we think about other people's jobs that we don't understand as well as our own, the task model seems plausible because we don't appreciate all the nuances.

Arvind Narayanan, @random_walker

We already have a phrase for code that nobody understands: legacy code. […]

If you don't understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

Steve Krouse, Vibe code is legacy code

What are we actually saying here — that even Microsoft has to evaluate its employees on their usage of “AI” directly, because it doesn’t affect their performance enough to have an obvious impact otherwise? That the technology is so limp that even its biggest investor has to strong-arm its own employees into using it? That their own employees don’t want to use it?

Genuinely good new tools don’t tend to need coercion to fuel their adoption only a few years into their existence, right?

eevee, The rise of Whatever

The problem is that the actual meat of human communication is a tiny fraction of the amount of symbols being spat out. The benefit of compressing the actual-ideas part of a message well can get lost in the noise, and a better strategy is simply evasion. Expressing an actual idea will be more right in some cases, but expressing something which merely sounds like an actual idea is overwhelmingly likely to be very wrong unless you have strong confidence that it’s right. So the AIs optimize by being evasive and sycophantic rather than expressing ideas.

Bram Cohen, AIs are Sycophantic Blithering Idiots