I think I’m probably going to lose quite a lot of money in the next year or two. It’s partly AI’s fault, but not mostly. Nonetheless I’m mostly going to write about AI, because it intersects the technosphere, where I’ve lived for decades.

I’ve given up having a regular job. The family still has income but mostly we’re harvesting our savings, built up over decades in a well-paid profession. Which means that we are, willy-nilly, investors. And thus aware of the fever-dream finance landscape that is InvestorWorld.

The Larger Bubble · Put in the simplest way: Things have been too good for too long in InvestorWorld: low interest, high profits, the unending rocket rise of the Big-Tech sector, now with AI afterburners. Wile E. Coyote hasn’t actually run off the edge of the cliff yet, but there are just way more ways for things to go wrong than right in the immediate future.

If you want to dive a little deeper, The Economist has a sharp (but paywalled) take in Stockmarkets are booming. But the good times are unlikely to last. Their argument is that profits are overvalued by investors because, in recent years, they’ve always gone up. Mr Market ignores the fact that at least some of those gleaming profits are artifacts of tax-slashing by right-wing governments.

That piece considers the observation that “Many investors hope that AI will ride to the rescue” and is politely skeptical.

Popping the bubble · My own feelings aren’t polite; closer to Yep, you are living in a Nvidia-led tech bubble by Brian Sozzi over at Yahoo! Finance.

Sozzi is fair, pointing out that this bubble feels different from the cannabis and crypto crazes; among other things, chipmakers and cloud providers are reporting big high-margin revenues for real actual products. But he hammers the central point: What we’re seeing is FOMO-driven dumb money thrown at technology by people who have no hope of understanding it, just because everybody else is throwing it and because the GPTs and image generators have cool demos. Sozzi has the numbers, looking at valuations through standard old-as-dirt filters and shaking his head at what he sees.

What’s going to happen, I’m pretty sure, is that AI/ML will, inevitably, disappoint; in the financial sense, I mean. It’ll probably do some useful things, maybe even a lot, but it won’t generate the kind of profit explosions you’d need to justify the bubble. So it’ll pop, and my bet is it takes a bunch of the finance world with it. As bad as 2008? Nobody knows, but it wouldn’t surprise me.

The rest of this piece considers the issues facing AI/ML, with the goal of showing why I see it as both a bubble-inflator and an eventual bubble-popper.

First, a disclosure: I speak as an educated amateur. I’ve never gone much below the surface of the technology, never constructed a model or built model-processing software, or looked closely at the math. But I think the discussion below still works.

What’s good about AI/ML · Spoiler: I’m not the kind of burn-it-with-fire skeptic that I became around anything blockchain-flavored. It is clear that generative models manage to embed significant parts of the structure of language, of code, of pictures, of many things where that has previously not been the case. The understanding is sufficient to reliably accomplish the objective: Produce plausible output.

I’ve read enough Chomsky to believe that facility with language is a defining characteristic of intelligence. More than that, a necessary but not sufficient ingredient. I dunno if anyone will build an AGI in my lifetime, but I am confident that the task would remain beyond reach without the functions offered by today’s generative models.

Furthermore, I’m super impressed by something nobody else seems to talk about: Prompt parsing. Obviously, prompts are processed into a representation that reliably sends the model-traversal logic down substantially the right paths. The LLMbots of this world may regularly be crazy and/or just wrong, but they do consistently if not correctly address the substance of the prompt. There is seriously good natural-language engineering going on here that AI’s critics aren’t paying enough attention to.
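(You can actually watch the first, visible step of that processing yourself. Here’s a minimal sketch using the Hugging Face tokenizer, with GPT-2’s vocabulary as a convenient stand-in; production models differ in detail, and the engineering I’m admiring happens downstream of this step, inside the model.)

```python
# First visible step of prompt parsing: text -> subword tokens -> ids.
# GPT-2's tokenizer is just a stand-in; commercial models differ in detail.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
prompt = "Explain why the sky is blue, in one sentence."
ids = tok(prompt)["input_ids"]
print(ids)                             # a list of integer token ids
print(tok.convert_ids_to_tokens(ids))  # the subword pieces the model sees
```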

So I have no patience with those who scoff at today’s technology, accusing it of being a glorified Markov chain. Like the song says: Something’s happening here! (What it is ain’t exactly clear.)
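(For anyone who hasn’t seen one, here’s the whole technique the scoffers are invoking, a word-level Markov chain, in a few lines; a toy sketch with a made-up corpus. The point is that the chain conditions on a tiny fixed window of preceding words, while a transformer attends over the entire prompt.)

```python
# A word-level Markov chain text generator: the strawman the critics invoke.
# Toy sketch; the corpus and order are arbitrary choices for illustration.
import random
from collections import defaultdict

def train(words, order=2):
    """Map each `order`-word context to the words observed right after it."""
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, seed, length=20):
    """Sample forward from the table. The chain only ever sees the last
    `order` words -- unlike a transformer, which attends over everything."""
    out = list(seed)
    for _ in range(length):
        candidates = table.get(tuple(out[-len(seed):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the model sees the prompt and the model sees the pattern".split()
print(generate(train(corpus), seed=("the", "model")))
```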

It helps that in the late teens I saw neural-net pattern-matching at work on real-world problems from close up, and developed serious respect for what that technology can do. An example is EC2’s Predictive Auto Scaling (and gosh, it looks like the competition has it too).
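(If you’re curious what using it looks like: turning predictive scaling on is roughly a one-call affair. A sketch via boto3, to the best of my recollection of the API; the group name and target value are placeholders, and ForecastOnly mode generates forecasts without acting on them.)

```python
# Sketch: attach a predictive-scaling policy to an existing Auto Scaling
# group. The group name and target value are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-fleet",   # hypothetical group
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,           # aim for ~50% average CPU
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastOnly",            # forecast first, act later
    },
)
```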

And recently, Adobe Lightroom has shipped a pretty awesome “Select Sky” feature. It makes my M2 MacBook Pro think hard for a second or two, but I rarely see it miss even an isolated scrap of sky off in the corner of the frame. It allows me, in a picture like this, to make the sky’s brightness echo the water’s.

[Photo: brightly-lit boats on dark water under a dark sky]

And of course I’ve heard about success stories in radiology and other disciplines.

Thus, please don’t call me an “AI skeptic” or some such. There is a there there.

But… · Given that, why do I still think that the flood of money being thrown at this tech is dumb, and that most of it will be lost? Partly just because of that flood. When financial decision makers throw loads of money at things they don’t understand, lots of it is always lost.

In the Venture-Capital business, that’s an understood part of the business cycle; they’re looking to balance that out with a small number of 100x startup wins. But when big old insurance companies and airlines and so on are piling in and releasing effusive statements about building the company around some new tech voodoo, the outcome, in my experience, is very rarely good.

But let’s be specific.

Meaning · As I said above, I think the human mind has a large and important language-processing system. But that’s not all. It’s also a (slow, poorly-understood) computer, with access to a medium-large database of facts and recollections, an ultra-slow numeric processor, and facilities for estimation, prediction, speculation, and invention. Let’s group all this stuff together and call it “meaning”.

Have a look at Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data by Emily Bender and Alexander Koller (2020). I don’t agree with all of it, and it addresses an earlier generation of generative models, but it’s very thought-provoking. It postulates the “Octopus Test”, a good variation on the bad old Chinese-Room analogy. It talks usefully about how human language acquisition works. A couple of quotes: “It is instructive to look at the past to appreciate this question. Computational linguistics has gone through many fashion cycles over the course of its history” and “In this paper, we have argued that in contrast to some current hype, meaning cannot be learned from form alone.”

I’m not saying these problems can’t be solved. Software systems can be equipped with databases of facts, and who knows, perhaps some day estimation, prediction, speculation, and invention. But it’s not going to be easy.

Difficulty · I think there’s a useful analogy between the narratives around AI and those around self-driving cars. As I write this, Apple has apparently decided that generative AI is easier than shipping an autonomous car. I’m particularly sensitive to this analogy because back around 2010, as the first self-driving prototypes were coming into view, I predicted, loudly and in public, that the technology was about to become ubiquitous and turn the economy inside out. Ouch.

There’s a pattern: The technologies that really do change the world tend to have strings of successes, producing obvious benefits even in their earliest forms, to the extent that geeks bring them in through the back doors of organizations just to get shit done. As they say, “The CIO is the last to know.”

Contrast cryptocurrencies and blockchains, which limped along from year to year, always promising a brilliant future, never doing anything useful. As to the usefulness of self-driving technology, I still think it’s gonna get there, but it’s surrounded by a cloud of litigation.

Anyhow, anybody who thinks that it’ll be easy to teach “meaning” (as I described it above) to today’s generative AI is a fool, and you shouldn’t give them your money.

Money and carbon · Another big problem we’re not talking about enough is the cost of generative AI. Nature offers Generative AI’s environmental costs are soaring — and mostly secret. In a Mastodon thread, @Quixoticgeek@social.v.st says “We need to talk about data centres”, and includes a few hard and sobering numbers.

Short form: This shit is expensive, in dollars and in carbon load. Nvidia pulled in $60.9 billion in 2023, up 126% from the previous year, and is heading for a $100B/year run rate, while reporting a 75% margin.
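(Putting those numbers next to each other: the quarterly figure below is my addition from Nvidia’s most recent report, not from the articles cited, so treat the run-rate arithmetic as a back-of-envelope check.)

```python
# Back-of-envelope check on the Nvidia numbers quoted above.
fy_revenue = 60.9e9                  # 2023 revenue, reportedly up 126%
prior_year = fy_revenue / 2.26       # back out the year before
print(f"implied prior year: ${prior_year / 1e9:.1f}B")           # ~$26.9B

latest_quarter = 22.1e9              # most recent quarter (my addition)
print(f"annualized run rate: ${latest_quarter * 4 / 1e9:.1f}B")  # ~$88B and climbing
```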

Another thing these articles don’t mention is that building, deploying, and running generative-AI systems requires significant effort from a small group of people who now apparently constitute the world’s highest-paid cadre of engineers. And good luck trying to hire one if you’re a mainstream company where IT is a cost center.

All this means that for the technology to succeed, it not only has to do something useful; people and businesses will also have to be ready to pay a substantial price for that something.

I’m not saying that there’s nothing that qualifies, but I am betting that it’s not in ad-supported territory.

Also, it’s going to have to deal with pushback from unreasonable climate-change resisters like, for example, me.

Anyhow… · I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM — is hideously toxic and broken, and it usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.

The road ahead looks bumpy from where I sit. And when the business community wakes up and realizes that replacing people with shitty technology doesn’t show up as a positive on the financials after you factor in the consequences of customer rage, that’s when the hot air gushes out of the bubble.

It might not take big chunks of InvestorWorld with it. But I’m betting it does.



Contributions


From: Tristan Louis (Feb 29 2024, at 11:32)

As always, a fascinating and thought provoking piece.

One of the areas where I believe you may need to modulate your thinking is the proverbial shovels-vs.-gold framework. Nvidia (and to some extent, your previous employer) are selling shovels (chips, or time on systems) that let them extract value right now from those panning for AI gold. Assuming they read the curve well enough, they could adjust to the downward leg of the cycle when the craze subsides.

If we think of the dotcom era as a comparable bubble/bust cycle (albeit on a much larger scale now), a lot of companies making AI central to their existence will make the same mistakes as the dotcoms, which made being an Internet asset central to theirs. In the same way, the companies that provided tools (e.g. the telcos and hardware vendors), or that added the new tech to their offerings without jeopardizing their core business (or that used the new offering to repackage old ideas into new, enhanced ones, e.g. turning the Sears catalog into an Internet ordering system), will survive the incoming crash and potentially amass enough cash reserves to build a longer-term large asset.


From: Max Pool (Feb 29 2024, at 11:59)

Aswath Damodaran (a Professor of Finance at the Stern School of Business at NYU) recently analyzed the Magnificent Seven.

Nvidia is 55.84% overvalued, with a five-year expected revenue CAGR of 32.20% and a target operating margin of 40.00%.

Excel: https://pages.stern.nyu.edu/~adamodar/pc/blog/NVIDIA2024.xlsx

Blog post: https://aswathdamodaran.blogspot.com/2024/02/the-seven-samurai-how-big-tech-rescued.html


From: Justin Watt (Feb 29 2024, at 12:33)

From one willy-nilly investor to another: aim for average, invest in index funds, and enjoy the ride. But I'm sure you already do that. For further reading/self-soothing, check out The Simple Path to Wealth by JL Collins or The Psychology of Money by Morgan Housel.


From: Rob (Feb 29 2024, at 12:40)

The UK may be thinking about it; Canada has been using it for, I think, a couple of years now: https://www.cicnews.com/2023/05/minister-fraser-clarifies-how-ircc-uses-ai-in-application-processing-0537338.html#gs.57gcdt. Basically, when you apply for a Temporary Residence Visa to, say, stay in Canada with your spouse while you wait on IRCC (i.e. Immigration) to process your application, your application is processed by an LLM. If it turns you down, you have no avenue of appeal to a human; all you can do is re-write it and submit it again.

I see the day coming when you will have to pay for an LLM to make your applications (to, say, welfare, immigration, the parole board, the tax man) more palatable, much like paying SEO shysters nowadays to show up on Google, or the vigorish for Amazon. Because that has worked out so well.


From: philvec (Feb 29 2024, at 12:59)

AI specialist here, by both degree and job experience. I had long been looking for an opinion like this, particularly on the insufficiency of LLMs alone to model "meaning". I don't agree 100% with all the statements, but THANK YOU TB, since my hope for humanity has risen from the dead: now I know somebody else is also aware of the problem!


From: Dave Pawson (Mar 01 2024, at 00:31)

Fully agree with your overall conclusion, Tim. Wondered if you'd considered the political input to this potential market-killer? How will they react when all their friends (and funders) are screaming as gelt runs through their fingers?


From: Ole Eichhorn (Mar 01 2024, at 08:17)

I’m as impressed as you are by prompt parsing, but for a different reason: as it turns out, you don’t need seriously great NLP to do this; all you need is seriously good applied statistics and a sufficiently large cohort of tokens. To me that’s an incredible finding, and a key to why AI/ML is powerful.

I have ChatGPT open 24x7 and use it for everything, and it’s made me X times more productive. It’s not perfect and not always accurate and mostly needs to be checked and edited, but … wow! We blew right past the Turing Test at 100mph and haven’t looked back.

I can’t foresee the economic impact (agreed, the computing power required is vast, and probably many lower-income jobs like customer service will be replaced), but the societal impact will be massive.


From: Leo (Mar 01 2024, at 17:57)

While I tend to agree with most of what you said, I do not share the overall pessimistic sentiment. We have barely scratched the surface of what LLM-powered products are capable of, and as another reader mentioned, they have already drastically improved our productivity. Combine this technology with advancements in AR and things straight out of sci-fi novels become reality. So when you say you're "pretty sure ... that AI/ML will, inevitably, disappoint", I disagree. I think this comes from an "empirical" mindset, based on previous inventions. We shouldn't try to compare GenAI with previous inventions. For example, the pace of innovation in this domain since October 2022 has no equivalent, and as such, we should refrain from applying the same mental models.

In terms of energy footprint, a good comparison is the advent of combustion engines. In the '60s, all we cared about was building faster cars, and we paid no attention to how much they polluted. Since the early '70s, engine size and emissions have been cut drastically. I envision the same happening to LLMs, and we're already seeing it, with much smaller models performing almost as well as giant ones. That being said, I'm not saying it's all smooth sailing from here, and I agree there are significant challenges to overcome. I guess I'm just more optimistic that we will overcome them.

This is just my opinion and I'm sure I'll be wrong on many things. I enjoyed your post and can't wait to see what happens next!


From: Soph (Mar 02 2024, at 06:27)

Regarding use of AI by governments, I think two previous events can point at how damaging it can be: the Horizon Post Office scandal in the UK (a few suicides, people wrongly sent to prison or made bankrupt) and the Robodebt scandal in Australia (an assumed attempt at hitting the poorest hard that also led to suicides and lives destroyed).

Searching for the exact names made me discover that Wikipedia has an Algocracy category.


From: Paul Boddie (Mar 02 2024, at 10:31)

And eventually along came the boosters, claiming that they are n-times more productive, presumably "freeing themselves up for more strategic thinking" or whatever those too precious for actually writing software (and who never really did, anyway) tend to say.

Meanwhile: "For example, the pace of innovation in this domain since October 2022 has no equivalent..." Really? Have you ever heard of the Manhattan Project? And that was something that happened eighty years ago, before Internet Time or whatever publications like Wired used to bang on about.

And to bring up that endeavour turns out to be quite pertinent, given that the most hyped form of "AI" is effectively being weaponised to fight information wars, as the credulous gush at all the fun it provides, and as the technology billionaires profit from its proliferation.


From: Mike B (Mar 02 2024, at 17:45)

Have you tried GitHub copilot or other LLM programming assistants? They are legit useful.


From: Owen Miller (Mar 08 2024, at 18:40)

I think you're being too harsh on the idea of AI hastening government work.

AIs have proven useful for software engineering, for the postal service, and for medicine, amongst other fields.

Governments are not sufficiently competitive, and we saw during COVID that they can actually change their paradigms rapidly if they're really pushed. We needn't believe them when they declare that this is the only way of doing things; they're just lazy and accustomed to cushy jobs where they can't be fired.

AI could significantly reduce the cost of living and could significantly improve our society. I discuss it further in Why Robots Deserve Rights: https://nonhuman.party/post/why_give_rights_to_robots/


From: Len (Mar 11 2024, at 18:48)

I spent the time digging into the theory to understand the tech. Dot math with lots of parameters, used to sieve lots of data preprocessed by meatware. The I is in the meat that provides the data. In short form, as I commented to Sabine: expecting AI to be truly creative is expecting your shadow to evolve.

That said, when doing creative work (primarily writing, recording songs, and making no-cost videos), I love it. Image generators, and eventually video generators, get me out of the habit of using sources from the web. The main problem is that eventually I'll have to dip into retirement funds and do a serious upgrade of my production systems. There is little money in music, though much joy. So what the hell? The other use is translation. If I give it a lyric that, say, I want to sing in Spanish as a mambo, and the lyric was singable going in, it's singable coming out. That saves me enormous time.

I did a fair amount of testing. It comes down to being able to write an expressive prompt. Quality in; good enough out.

