I’m going to take a big chance here and make predictions about GenAI’s future. Yeah, I know, you’re feeling overloaded on this stuff and me too, but it seems to have sucked the air out of all the other conversations. I would so like to return to arguing about Functional Programming or Free Trade. This is risky and there’s a pretty good chance that I’m completely wrong. But I’ll try to entertain while prognosticating.
Reverse Centaurs · That’s the title of a Cory Doctorow essay, which I think is spot on. I’m pretty sure anyone who’s read even this far would enjoy it, it’s not long, and it’d help in understanding this piece. Go have a look; I’ll wait.
Hallucinations won’t get fixed · I have one good and one excellent argument to support this prediction. Good first: While my understanding of LLMs is not that deep, it doesn’t have to be to understand that it’s really difficult (as in, we don’t know how) to connect the model’s machinations to our underlying reality, so as to fact-check.
The above is my non-expert intuition at work. But then there’s Why Language Models Hallucinate, three authors from OpenAI and one from Georgia Tech, which seems to show that hallucinations are an inevitable result of current training practices.
And here’s the excellent argument: If there were a way to eliminate the hallucinations, somebody already would have. An army of smart, experienced people, backed by effectively infinite funds, have been hunting this white whale for years now without much success. My conclusion is, don’t hold your breath waiting.
Maybe there’ll be a surprise breakthrough next Tuesday. Could happen, but I’d be really surprised.
(When it comes to LLMs and code, the picture is different; see below.)
The mass layoffs won’t happen · The central goal of GenAI is the elimination of tens of millions of knowledge workers. That’s the only path to the profits that can cover the costs of training and running those models.
To support this scenario the AI has to run in Cory’s “reverse centaur” mode, where the models do the work and the humans tend them. This allows the production of several times more work per human, generally of lower quality, with inevitable hallucinations. There are two problems here: First, that at least some of the output is workslop, whose cleanup costs eat away at the productivity wins. Second, that the lower quality hurts your customers and your business goes downhill.
I just don’t see it. Yeah, I know, every CEO is being told that this will work and they’ll be heroes to their shareholders. But the data we have so far keeps refusing to support those productivity claims.
OK then, remove the “reverse” and run in centaur mode, where smart humans use AI tools judiciously to improve productivity and quality. Which might be a good idea for some people in some jobs. But in that scenario neither the output boost nor the quality gain gets you to the point where you can dismiss enough millions of knowledge workers to cover the AI bills.
The financial damage will be huge · Back to Cory, with The real (economic) AI apocalypse is nigh. It’s good, well worth reading, but at this point pretty well conventional wisdom as seen by everyone who isn’t either peddling a GenAI product or (especially) fundraising to build one.
To pile on a bit, I’m seeing things every week like for example this: The AI boom is unsustainable unless tech spending goes ‘parabolic,’ Deutsche Bank warns: ‘This is highly unlikely’.
The aggregate investment is ludicrous. The only people who are actually making money are the ones selling the gold-mining equipment to the peddlers. Like they say, “If something cannot go on forever, it will stop.” Where by “forever”, in the case of GenAI, I mean “sometime in 2026, probably”.
… But the economy won’t collapse · Cory forecasts existential disaster, but I’m less worried. Those most hurt when the bubble collapses will be the investing classes who, generally speaking, can afford it. Yeah, if the S&P 500 drops by a third, the screaming will shake the heavens, but I honestly don’t see it hitting as hard as 2008 and don’t see how the big-picture economy falls apart. That work that the GenAI shills say would be automated away is still gonna have to be done, right?
The software profession will change, but not that much · Here’s where I get in trouble, because a big chunk of my professional peers, including people I admire, see GenAI-boosted coding as pure poison: “In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears.” (The title of that essay is “I Am An AI Hater.”)
I’m not a hater. I argued above that LLMs generating human discourse have no way to check their output for consistency with reality. But if it’s code, “reality” is approximated by what will compile and build and pass the tests. The agent-based systems iteratively generate code, reality-check it, and don’t show it to you until it passes. One consequence is that the quality of help you get from the model should depend on the quality of your test framework. Which warms my testing-fanatic heart.
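To make that loop concrete, here’s a minimal sketch of the generate-and-test cycle as I understand it, not any particular product’s implementation. The generate_patch() function is a hypothetical stand-in for whatever model the agent calls; the reality check is just the project’s test suite run via pytest.

```python
# A sketch of the agentic loop described above: generate code, check it
# against reality (here, the project's test suite), and only surface it to
# the human once it passes. generate_patch() is a hypothetical stand-in
# for whatever LLM backend the agent uses; the rest is plain Python.
import subprocess

def generate_patch(task: str, feedback: str) -> str:
    """Hypothetical call into a model; returns candidate source code."""
    raise NotImplementedError("wire up your model of choice here")

def agent_loop(task: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_patch(task, feedback)
        with open("candidate.py", "w") as f:
            f.write(candidate)
        # "Reality" is whatever the test framework says it is.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return candidate                      # only now does a human see it
        feedback = result.stdout + result.stderr  # feed the failures back in
    return None                                   # give up; a human takes over
```

The point of the sketch: the better your tests, the tighter that reality check gets, which is why the quality of the help should track the quality of your test framework.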
So, my first specific prediction: Generated code will be a routine thing in the toolkit, going forward from here. It’s pretty obvious that LLMs are better at predicting code sequences than at predicting human language.
In Revenge of the junior developer, Steve Yegge says, more or less, “Resistance is useless. You will be assimilated.” But he’s wrong; there are going to be places where we put the models to work, and others where we won’t. We don’t know which places those are and aren’t, but I have (weaker) predictions; let’s be honest and just say “guesses”.
Where I suspect generated code will appear:
Application logic: “Depreciate the values in the AMOUNT field of the INSTALLED table forward ten years and write the NAME field and the depreciated value into a CSV.” Or “Look at JIRA ticket 248975 and create a fix.” (There’s a sketch of what that first prompt might produce after this list.)
(By the way, this is a high proportion of what actual real-world programmers do every day.)
Glorified StackOverflow-style lookups like I did in My First GenAI Code.
Drafting code that needs to run against interfaces too big and complex to hold in your head, like for example the Android and AWS APIs (“When I shake the phone, grab the location from GPS and drop it in the INCOMING S3 bucket”). Or CSS (“Render that against a faded indigo background flush right, and hold it steady while scrolling so the text slides around it”).
SQL. This feels like a no-brainer. So much klunky syntax and so many moving pieces.
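As promised above, here’s roughly the shape of script the depreciation prompt might produce. The INSTALLED table and its NAME and AMOUNT columns come from the prompt; the SQLite file, the 20% declining-balance rate, and the output filename are assumptions invented purely for illustration.

```python
# Roughly what the "depreciate and dump to CSV" prompt might yield.
# Table and column names come from the prompt; the SQLite file, the
# assumed 20% annual rate, and the output filename are made up here.
import csv
import sqlite3

RATE = 0.20   # assumed annual depreciation rate
YEARS = 10    # "forward ten years", per the prompt

def depreciated_value(amount: float) -> float:
    return amount * (1 - RATE) ** YEARS

def main() -> None:
    conn = sqlite3.connect("assets.db")               # assumed database file
    rows = conn.execute("SELECT NAME, AMOUNT FROM INSTALLED")
    with open("depreciated.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["NAME", "DEPRECIATED_AMOUNT"])
        for name, amount in rows:
            writer.writerow([name, round(depreciated_value(amount), 2)])
    conn.close()

if __name__ == "__main__":
    main()
```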
Where I suspect LLM output won’t help much:
Interaction design. I mean, c’mon, it requires predicting how humans understand and behave.
Low-level infrastructure code, the kind I’ve spent my whole life on, where you care a whole lot about conserving memory and finding sublinear algorithms and shrinking code paths and having good benchmarks.
Here are areas where I don’t have a prediction but would like to know whether and how LLMs fit in (or not):
Help with testing: Writing unit and integration tests, keeping an eye on coverage, creating a bunch of BDD tests from a verbal description of what a function is going to do. (There’s a sketch of that last idea after this list.)
Infrastructure as code: CI/CD, Terraform and peers, all that stuff. There are so many ways to get it wrong.
Bad old-school concurrency that uses explicit mutexes and java.lang.Thread, where you have to understand language memory models and suchlike.
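On the “BDD tests from a verbal description” item above: given a one-line spec such as “parse_duration turns strings like ‘2h30m’ into seconds and rejects garbage,” a model might draft tests along these lines. The module and function names are invented purely for illustration; nothing here is a real library.

```python
# Tests a model might draft from the one-line spec quoted above.
# "mymodule" and parse_duration are hypothetical, for illustration only.
import pytest

from mymodule import parse_duration   # hypothetical function under test

def test_hours_and_minutes():
    assert parse_duration("2h30m") == 2 * 3600 + 30 * 60

def test_minutes_only():
    assert parse_duration("45m") == 45 * 60

def test_garbage_is_rejected():
    with pytest.raises(ValueError):
        parse_duration("soon-ish")
```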
The real reason not to use GenAI · Because it’s being sold by a panoply of grifters and chancers and financial engineers who know that the world where their dreams come true would be generally shitty, and they don’t care.
(Not to mention the environmental costs and the poor folk in the poor countries where the QA and safety work is outsourced.)
Final prediction: After the air goes out of the assholes’ bubble, we won’t have to live in the world they imagine. Thank goodness.
From: Tim (but not THE Tim) (Oct 01 2025, at 22:18)
I think your predictions and guesses make sense.
My latest takes on AI are that (upper) management is investing because clueless shareholders see it as the new hot thing; and that the problem is not AI of all flavors per se, but twofold: the misuse of AI by some, and the too-trusting belief in AI by others. I think those hoping to eliminate employee cost through AI fall into the latter group.
I'm sure you remember the buzz around "expert systems" back in the 1980s. Some useful expert systems were built, but many companies tried to jump on the bandwagon with projects that weren't suited to expert systems or were done very poorly, and so for a time that sort of AI fell out of favor.
Then, in the 1990s (I think) we had the 'natural language' craze that turned your language into a database query but used so much time to do it (at least at our shop) that the users couldn't stand it.
This is a more-expensive uphill climb in the AI hype-and-forget cycle; as with the others, some useful stuff will be built but most projects will be costly failures.
From: Robin (Oct 01 2025, at 22:28)
Even if the financial collapse isn't as bad as 2008, surely pension funds will take a hit which affects everyone. And of course governments have a history of bailing out the rich. "Socialism for the rich, capitalism for the poor", as they say.
From: Dave Pawson (Oct 02 2025, at 00:14)
Re hallucinations. https://www.sciencealert.com/openai-has-a-fix-for-hallucinations-but-you-really-wont-like-it looks pretty solid logic, only failing to refute incorrect (unverified) sources.
From: Nathan (Oct 02 2025, at 04:58)
Re: low-level infrastructure
I used Claude Code to write an eBPF program yesterday. One thing I noticed: Claude can nigh-instantly understand the output of the verifier. Its eBPF programs often failed the verifier, but so do human-written eBPF programs.
I don't really have a strong conclusion here except that there are places in the lowest of the low-level where I can see AI agents thriving.
From: chris e (Oct 05 2025, at 10:03)
In terms of economic collapse, I'm not sure what your comparison point is here?
Allegedly the bubble is - at this point - much larger than both the dotcom boom and the housing bubble.
The difference this time is that finance is quite open with the idea that it's a bubble that is going to collapse -- the financial press is full of quotes from Goldman et al to this effect.
In that case you'd have to assume they've shifted these loans on dumber money -- in which case the unwinding *will* hit the broader economy.
From: Ryan Baker (Oct 07 2025, at 07:29)
You touch on some good high level points, but I think you miss a few finer points that lead to more nuanced results.
Hallucinations - They are "solvable". A lot of progress has been made here. Testing for code is a core principle. Thinking and questioning responses is another path. Solutions don't need to be 100%, humans aren't 100%. We survive via layers. Think before you leap. Identify when we need to double check our work. Identify when we need someone else to double check our work.
I find it difficult to talk to people about hallucinations because (a) it is a very important topic that deserves attention and advancement, but (b) doubters love to lock-in on it, and use wordplay to turn it into a proof "this all isn't going to work, give up now", when there's ample proof it is working, and has surpassed multiple barriers that type of thinking would have assumed were insurmountable.
Mass layoffs - You make a mistake in assuming the "only path to the profits that can cover the costs of training and running those models". If that was true, it would have been true of industrialization in the past. Yet, this created more jobs and was financially sound in the long run. Yes, there were bubbles here too, but confusing bubbles with long-run sustainability is a big mistake. Bubbles are often not about the long term future, they are about the timing.
The first rule of investing is you should be in it for the long haul. The second rule of investing is that other than the first rule, timing is everything. Since those are almost oxymorons, it's an easy one to make mistakes about.
As to the economy, yes, it's good to think about who is "harmed" when a bubble bursts. In a sense, a lot aren't even harmed, in that they invested when prices were lower, and the numbers were paper numbers, and even after stock prices decline, they can still sell that share for more than they could have 2 years ago. Sure, the mirage that the last 2 years made them believe in has disappeared. There are exceptions to this; people who started investing now due to this excitement, or maybe just personal life timing. But the retiree who's been building their 401K for the last 30-40 years.. they bought the SP500 long ago and are mostly riding the ups and downs for the long haul.
The bigger concern should be the economy overall, and the ripples. I think you're right to discount some of the common ripples.. i.e. the financial contagion that would disrupt the normal operation of other parts of the economy, because they all depend on financing to operate normally and optimally. This seems (so far) to not be the case here. But that doesn't mean there aren't other risks, and you don't have to dig too hard to find the biggest risk to the economy here. If AI investment is propping up an otherwise failing economy, what are you left with after it slows down to a rational level?
https://substack.norabble.com/p/the-economic-future-from-and-of-ai-cf1
From: cris f (Oct 12 2025, at 13:44)
One quibble. Doesn't change your argument, but "That’s the only path to the profits" may not be as clear-cut as everyone is claiming. It's certainly what the AI boosters are building the hype on. But there are other paths; the first one is: 1) get the population used to turning to AI instead of search, then 2) sell ads to a much more gullible audience.