I gave a talk to strategists and planners from the Outside Perspective group on my recent paper The Dot LLM era? The talk summarised some of the key takeaways from the paper and also reflects a slight refinement in my thinking, given events since I drafted it over the Christmas holidays.
About Outside Perspective
Outside Perspective is a community of brand planners and strategists. All of the members of Outside Perspective are freelance or self-employed. The members' clients are drawn from all around the world and from all sectors.
My presentation was the first Outside Perspective huddle of the year, where strategists share expertise and areas of interest with their peers.
I have put the slides in at the appropriate places alongside my notes.
Dot LLM era for Outside Perspective
Good afternoon everyone. I hope I'm not depriving you too much of lunch. If I am, just tuck in; go on mute if you are tucking in, because otherwise it'll make me hungry.
So The Dot LLM era came from a question that I posed to myself. I was working at the time for a client who is a major AI company. I was looking at all of the stuff happening around me and thought that the company I'm working for is probably going to be all right. But it does feel as if we're in a bubble. So I then started to think about the bubble and eventually pulled it all into a paper.
We (Outside Perspective) will share the PDF of this presentation, and you can get the paper from the QR code later on. My thinking has been refined slightly as I've prepared this presentation, just nuances here and there based on what's been happening since I originally published the paper.
Key points in the presentation
One of the first things I was taught about presenting was: tell them what you're going to tell them, tell them, and then tell them what you told them. So this is me telling you what I'm going to tell you.
So as a technology, LLMs (large language models), what people call AI at the moment, are making lasting changes from business to culture. They're changing aesthetics, even if some of that impact is negative, like AI slop. The cultural effects are going to stay with us and evolve, just as previous technologies have done, from the printing press onwards.
Now looking at the economics, the question is: what's really going to happen? The AI sector has a valuation in the trillions, which is an insane amount of money to think about. There are two main challenges from an economic perspective, which is the angle I really looked at this from:
- Amortisation risk: the speed at which the hardware becomes obsolete, or literally burnt out, is three to five years, versus a much longer likely payback period given the trillions of dollars involved.
- The self-defeating economics of AI, which I'll go through in a bit more detail: economics limit how fast AI can actually be adopted without destroying the AI providers themselves.
Both factors give Dot LLM era players a very narrow margin of success from a business perspective; they need to thread the needle and land in just the right place in order to succeed.
The Long Boom
When I came up with the term dot LLM era I was thinking about parallels to the dot com era. I'll talk a little bit about the dot com era as well, because I realise some people might not be terribly au fait with it. I lived through it and carry the scars of my professional involvement with it.
The dot com era happened at a time that Wired magazine termed the long boom. During this period you had US preeminence: the Warsaw Pact had collapsed and China wasn't yet a member of the WTO. During the Clinton presidency there was a US government budget surplus, so the US had headroom for fiscal policy interventions if needed.
So if something like the COVID pandemic had happened back then, they would have had much more economic flexibility to deal with it than we had coming into 2020.
Today, the US at least looks much more like the Reagan era that preceded the long boom.
The West is on the back foot. There's a resurgent Russia waging an invasion of Ukraine and 'active measures' in the rest of Europe. China is resurgent economically, militarily and from an innovation perspective, which I'll touch on a little bit later. There is high government debt, particularly in the US, but also in Europe and much of the rest of the developed world.
There is sticky inflation, and the headline inflation figures quoted in the business press are lower than what people are actually seeing in the shops. Consumer sentiment about the economy is much worse than the headline inflation number would suggest.
Finally, there's a slackening labour market. That isn't about AI at the moment. Companies say they're making layoffs due to AI, but usually they're cutting costs by outsourcing and offshoring roles; they might be doing a little bit of AI in the background because they've given employees access to Microsoft Copilot or similar. That doesn't mean AI won't have an impact in the near future.
The Dot Com Bubble
When we talk about the dot-com boom, we tend to think of it as one thing, but it was actually three interrelated bubbles.
There was an online business bubble, which was relatively low capital but had a high burn rate through that capital in an attempt to build a moat. This is what most people think of when they think about the dot com era.
There was a smaller, less visible bubble related to open source software. With the internet, it suddenly became much more important because you had a way of contributing to open source projects and collaborating in a way that wasn’t available between different individuals or organisations previously. While open source made software development collaboration easier, and provided good quality software to download for free, businesses struggled to build a profitable open source business model.
Finally, there was a telecoms bubble, which was capital intensive. There was a huge amount of infrastructure built out. There was vendor financing by manufacturers of networking equipment. There were industry incumbents, companies like BT in the UK or the Bells in the US. And then there were also new telecoms companies like Enron Broadband Services, MCI WorldCom and Qwest.
More on them a bit later on. But with the graph on the right, what you see is that the peak the NASDAQ reached in March 2000 took 15 years to regain after the dot-com bust later that year. This is considered not as bad as what happened during the 2008 financial crisis, but it gives you an idea of the way things can go.
Hyman Minsky's financial instability hypothesis
I want to introduce the idea of Minsky moments.
The economist Hyman Minsky came up with the financial instability hypothesis. He considered a crisis to involve three steps that needed to occur.
First, a self-reinforcing boom driven by easy credit. Interest rates are higher than they have been recently, but they're still relatively low from a historical point of view.
If you look at the amount of money and the valuations that went into the likes of OpenAI and CoreWeave in January alone, you can clearly see that the self-reinforcing boom is under way.
The second step Minsky described is a shock, where investors re-examine cash flows; this is what's often termed an 'emperor's new clothes' moment. They suddenly start asking questions like: when are we actually going to get our money repaid, let alone make an obscene amount of profit on it? We're not quite there yet, but there have been some signs of concern from investors (for instance when Microsoft announced its recent quarterly results). There are always dissenting voices, but they're only proved prescient in hindsight.
Lastly, there's a de-risking stage through rapid asset sales. Investors and management realise they've got a flaming bag of crap and want to hand it off to someone else. They want rid of it.
So let's next think about those three earlier bubbles and how good an analogy they are for our present era of technology.
Online commerce
So like the early web, pure-play LLM providers like Anthropic and OpenAI are currently providing tokens at below their marginal cost. The price you pay for AI actions is less than what those actions cost to provide. And that's before you think about the research costs and the cost of capital invested in the company.
They're losing money to build an AI moat, just as e-tailers and service providers back in the dot-com era lost money to build a moat in a particular sector, like Amazon did in books. Move forward 25 years and AI companies are trying to do the same across various service models. The burn rates of dot-com failures mirror loss-making AI businesses, but only at a surface level: dot-coms were capital-light in comparison to their modern Dot LLM era counterparts.
Look at the dog sock puppet on the right; he was the mascot and brand spokesanimal of Pets.com. Pets.com had a horrendous burn rate for the time and went bankrupt. Their bankruptcy came down to two things:
- The logistics of actually sending out bags of dog meal and rabbit bedding were expensive compared to the amount that was being charged. It took Amazon the best part of a decade to radically reduce the cost of logistics for its own business. Even now, Amazon benefits from Chinese government overseas postal subsidies given to China-based businesses on Amazon.
- The large amount of money they put into advertising and brand building around a dog sock puppet with attitude. Great marketing, but if the consumer proposition isn't right, marketing can't save your business.
Open source software
The open source bubble saw the rise of what’s known as LAMP. That stands for:
- The Linux operating system
- Apache HTTP web server
- MySQL as a database management system
- P was for the Perl, PHP and Python programming languages
If you've ever run a WordPress blog, all of that probably sounds vaguely familiar, because that stack still supports a lot of the web. Linux extended into laptops, tablets and cellphones, including smartphones. (Apple products run a similar UNIX-style operating system, built on the Mach micro-kernel together with BSD components.)
During the dot com era there were numerous companies in this space. Red Hat was the outlier success with its enterprise-grade support offering, and it sold itself to IBM for $34 billion in 2019. Red Hat was the most successful exit and most profitable business of its peers, becoming the first of its kind to generate $1 billion in revenue.
Now you can see Chinese companies competing against US rivals and winning a lot of users in the global south by providing open source and open weight models like Alibaba's Qwen and Moonshot AI's Kimi K2.
These are perfectly usable models at lower cost. You can run them on your own machines. They use a lot less processing power than US AI models and are challenging closed AI models. Huawei has built a lot of infrastructure in the developing world, so there's a lot of opportunity there that's now closed off from American AI companies.
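To make "you can run the models on your own machines" concrete, here is a minimal sketch using the Hugging Face transformers library to load one of Alibaba's published open-weight Qwen checkpoints locally. The model name, prompt and generation settings are illustrative assumptions rather than anything from the paper; any open-weight checkpoint you have the hardware for works the same way.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face
# transformers. Assumes transformers, torch and accelerate are installed and
# you have enough GPU memory (or patience on CPU) for the chosen checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # illustrative open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # keep the precision the weights were published in
    device_map="auto",   # place layers on whatever hardware is available
)

messages = [{"role": "user", "content": "Summarise the dot com bubble in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The point is less the specific model than the workflow: the weights sit on your own infrastructure, so there is no per-token bill from a US API provider.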
US organisations like Airbnb and Silicon Valley based VC companies are running these Chinese models for their own uses.
Airbnb is an interesting case; CEO Brian Chesky is a really good friend of Sam Altman, yet he's still choosing to use open-source models rather than OpenAI because it makes commercial sense.
The telecoms bubble
The telecoms boom. There's been a similarly optimistic build-out of massive infrastructure to the one that happened during the telecoms boom. Back then, they invested about half a trillion dollars in fibre-optic networks based on a misreading of traffic growth data. In the dot-LLM era, we're seeing orders of magnitude more investment across computing power, networking within the data centre and even data centre power generation.
The graph on the right just gives you an idea of how much AI capital expenditure has taken off.
Amortisation risk
I want to introduce you to some concepts. One I've alluded to already is the idea of hardware amortisation.
During the telecoms bust, there was dark fibre, so optical fibre networks that weren’t lit. Dark fibre that was laid in the 1990s had a useful life of at least a decade.
In the current dot LLM era the equivalent would be GPUs and TPUs. The processors and the network interconnect hardware within the data centre, particularly those used for training models, have a useful life of three to five years before they become technically obsolete. It's usually closer to three years, because they're used so intensively that a lot of the processors get damaged by the heat generated from the extreme amount of processing they do.
With your laptop, even though things might run slow sometimes, 95% of the time your laptop's processor is close to idle, unless you're doing some really hardcore 3D rendering, video editing or complex work in Photoshop.
Your computer's processor isn't running at full performance all day and all night. By comparison, AI training processors wear out within three to five years, depending on whose numbers you believe.
The chart on the right gives you an idea of how, over time, a lot of the major hyperscalers have been lengthening the number of years over which they depreciate their processors, even though the processors' actual useful life has stayed pretty constant in that three-to-five year window before they need to be written down to zero.
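To show why this matters (and why the short sellers I'll come to in a moment care), here's a back-of-envelope sketch of how stretching the depreciation schedule flatters reported earnings. The capex and earnings figures are made-up illustrations, not any company's actual numbers; the mechanism is just straight-line depreciation.

```python
# Back-of-envelope sketch: stretching the depreciation schedule on AI hardware
# flatters reported operating income. All figures below are illustrative.

hardware_capex = 40e9              # hypothetical annual GPU/TPU spend, in dollars
income_before_depreciation = 30e9  # hypothetical earnings before the hardware charge

def straight_line_depreciation(capex: float, useful_life_years: int) -> float:
    """Write the asset down evenly to zero over its assumed useful life."""
    return capex / useful_life_years

for life_years in (3, 5, 6):
    charge = straight_line_depreciation(hardware_capex, life_years)
    reported_income = income_before_depreciation - charge
    print(f"{life_years}-year schedule: annual charge ${charge / 1e9:.1f}bn, "
          f"reported operating income ${reported_income / 1e9:.1f}bn")

# A 3-year schedule charges about $13.3bn a year against earnings; stretching
# the same spend over 6 years charges about $6.7bn, materially inflating the
# reported profit on hardware that still wears out in three to five years.
```

That gap between the accounting life and the physical life is exactly what the chart is showing.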
A second aspect of this depreciation is that the amount of energy per token is dropping substantially with each new generation of chip. So a five-year-old chip, if it's still working, is the cloud computing equivalent of an old, decrepit gas-guzzler of a car.
Financial picture
From a financial perspective, the change in hardware amortisation has caught the attention of short sellers. The reason is that the AI hardware is collateral against loans for some AI companies: they have a mortgage out on their chips. So the length of the hardware's useful life really matters. If you had a house that lasted three years and a mortgage for five years, it wouldn't be a great position to be in.
(The most high-profile short seller is Michael Burry, who runs a Substack newsletter. He's the chap portrayed by Christian Bale in the film adaptation of The Big Short. Extremely smart guy, not as arrogant as he appears in the Christian Bale portrayal. Really great Substack; I recommend that you read it.)
There have also been a number of financing and accounting changes going on. We've talked about the lengthening lives of the AI hardware. You're also seeing off-balance-sheet deals being done to help finance data centre development. A number of the hyperscalers like Meta, Google, Amazon and Microsoft have been very cash-generative businesses, because software and online advertising are high-margin businesses that generate a lot of cash.
Meta and Microsoft have teamed up with private equity companies to co-finance their data centre build-out and their acquisition of processors. These loans sit in special purpose vehicles that keep them off Microsoft's and Meta's balance sheets. Short sellers are alarmed by this, as it is similar to what we saw in the telecoms business from the likes of Enron and MCI WorldCom during the dot com bubble.
There are also accusations of circular financing. The chart on the right-hand side came from Bloomberg in September of last year. It started to get people worried about the idea of an AI bubble, because a lot of it is financed by loans to and from the major technology vendors and the AI players.
Short sellers allege that the valuations and profits are being artificially overstated by the hardware depreciation treatment and this circular financing. They wonder where the real transactions are: if you look at the circular financing, there isn't meaningful revenue behind it at the moment.
Looking at recent market valuations: when I wrote this paper at the end of the year, the 'magnificent 10' (a lot of the hyperscalers, including Meta, Amazon, Alphabet and Nvidia) had a price-to-earnings (P/E) ratio of 35x.
So that would mean it would take 35 years of earnings to pay back the share price. The S&P 500 at its dot-com peak had a P/E ratio of 33 times earnings.
Those valuations assume that LLMs will drive one to four trillion dollars in revenue growth or cost savings for dot LLM companies in the next two years, which is a huge amount of money.
$1 trillion revenue target
So how are businesses going to get there? I started with a thought experiment:
Advertising will only contribute a relatively small amount. It will be big numbers to the rest of us here, but against the target that needs to be hit, it'll only contribute a small part of the revenue needed.
Advertising as an industry is about 1% of global GDP. And while AI can increase the efficiency of advertising, which might be a reason to go there, it may further decrease advertising's effectiveness.
Large brands in particular aren't getting the returns out of digital advertising that they already should be, if you look at their growth and earnings over the past 15 years or so.
So what about business efficiencies? Yes, AI can automate tasks and might be able to reduce jobs. The way it's optimally pitched is what Microsoft Research described as a 'wingman' approach.
Then there's a third option, which is lower probability because it relies on a certain amount of serendipity and AI companies have a lot less control over it.
What does $1 trillion in job cuts look like?
Which raises the question: what does a notional $1 trillion in savings from job cuts actually mean?
As a thought experiment, it scared the living daylights out of me. It equates to about 10.5 million jobs in the US. I used two economic models:
- The Phillips curve, which models the relationship between unemployment and inflation.
- Okun’s law, which looks at the impact of job losses on GDP.
A trillion dollars in job cuts wipes out roughly $4 trillion of GDP and produces around 3% deflation to start with. There would likely be additional secondary effects that I didn't even attempt to calculate.
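For anyone who wants to check my working, here's the kind of back-of-envelope arithmetic behind those numbers. The average cost per job, labour force size, GDP figure, Okun coefficient and Phillips-curve slope are all rule-of-thumb assumptions I've plugged in to show the mechanism, not figures from the paper.

```python
# Back-of-envelope sketch of the $1 trillion job-cut thought experiment.
# Every input below is a rule-of-thumb assumption, chosen to show the
# mechanism rather than taken from any official dataset.

savings_target = 1e12        # $1 trillion in payroll savings
cost_per_job = 95_000        # assumed average fully loaded cost of a US job
labour_force = 168e6         # assumed size of the US labour force
us_gdp = 29e12               # assumed US GDP, roughly $29 trillion

jobs_cut = savings_target / cost_per_job              # roughly 10.5 million jobs
unemployment_rise_pp = 100 * jobs_cut / labour_force  # rise in percentage points

# Okun's law, rule-of-thumb form: each 1pp rise in unemployment costs ~2% of GDP.
okun_coefficient = 2.0
gdp_hit = us_gdp * (okun_coefficient * unemployment_rise_pp / 100)

# Phillips curve, rule-of-thumb form: ~0.5pp less inflation per 1pp more
# unemployment, which pushes a low-inflation economy towards deflation.
phillips_slope = 0.5
disinflation_pp = phillips_slope * unemployment_rise_pp

print(f"Jobs cut:          {jobs_cut / 1e6:.1f} million")
print(f"Unemployment rise: {unemployment_rise_pp:.1f} percentage points")
print(f"GDP hit (Okun):    ${gdp_hit / 1e12:.1f} trillion")
print(f"Disinflation:      {disinflation_pp:.1f} percentage points")
```

With those (debatable) inputs you land at roughly 10.5 million jobs, a GDP hit in the region of $3.5 to 4 trillion and around 3 percentage points of disinflation, before any second-order effects.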
It creates an efficiency paradox that would destroy the dot LLM ecosystem financially. These companies rely on being able to raise money to invest, and that would drop through the floor. You would have fewer businesses and fund managers investing, and fewer retail investors.
Rather than being able to gradually increase prices over time behind a moat, AI companies would have to continually decrease prices due to deflation.
The efficiency paradox means there's a sweet spot in the degree of productivity benefit AI can provide within a market without destroying AI as a business.
This all assumes that we’re actually operating within the closed system of the American AI market. But it’s worse because it isn’t closed.
The China factor
The US doesn't have global AI dominance. Some experts think that China may be ahead. I don't necessarily think that's the right framing, because I think China is running a slightly different race. It's taking a very different approach to these things from the US.
Competition is not only economic, it's geostrategic, and that might change and impact the economics we have been talking about.
The Chinese models are about 10 times more power- and processor-efficient than their American counterparts. They're already being used, with million-plus downloads. They obviously do a good enough job that some Silicon Valley companies trust them.
Seven Scenarios
I thought about scenarios for how the dot LLM era might play out, and I represented everything on a continuum from economic transformation to total collapse. The more I thought about it, the more I saw that we're going to have overlapping scenarios rather than a single outcome.
On the left-hand side was what the AI product was trying to do: create new value or drive efficiency. Along the top was the likely net impact, from negative or zero value creation through to a positive, transformational effect.
A probability based approach
The moral hazard scenario happens because AI is no longer just economic in nature; it also becomes geopolitical and a national security issue. I rated it at 95% because China is already there, treating vast amounts of its economy, from food to technology, as a national security issue, and AI is no exception. I read a paper by a Canadian think tank that equated hosting a US AI company's data centre with having a US military base on your territory.
The AI players become too big to fail. There's a national security imperative and a government backstop for them, and taxpayers will have to foot the bill. The AI companies don't necessarily have to innovate as hard, and they can take financial risks, similar to banks pre-2008.
The wingman economy: we're already seeing this in the way Google, Microsoft and Adobe position their AI offerings.
The idea is that you avoid catastrophic job losses by striking a balance between growth and efficiency to land in just the right place.
The Red Hat model: the idea that pure-play AI businesses struggle to find profitability. You get LLM proliferation, like what I talked about with open source models. Open-weight models like Qwen already have people using them around the world. Then value starts to shift to enterprise support and integration, like Airbnb, who are already integrating these models into their services.
Telecoms bust: you only have to look at things like the private equity firm Blue Owl pulling out of a funding round for an Oracle data centre. It could be how they feel about Oracle's AI offering in particular, which a lot of money has been going into but not a lot of results have been coming out of, or it could be sentiment around the dot LLM ecosystem in general.
The last three options are much lower probability events.
The new economy model: the idea that AI will make transactions frictionless and that, with agentic automation, a lot of things will just happen. There's uncertainty around the economics of this at the moment, and there are numerous concerns, like the AI drifting away from the original human intent over time. There are also bottlenecks with legal and regulatory issues.
The breakthrough is a black swan event by its very nature. This would be something like a major scientific breakthrough, but it would likely take 10 years to commercialise. Think about new drugs like Wegovy or Ozempic: they were an innovation that launched during the COVID period, but the underlying discovery was made back in 1979.
It took decades to turn them into a commercial weight-management product.
With other technologies, that period might be shorter. A new oil field might only be a 10-year project from discovery to commercialisation. Either way, it won't pay off in a two-year period.
Where we’re at?
I do believe that the dot LLM era is a financial bubble and a technological shift. The shift will continue to happen and evolve. It will continue to influence culture and business.
The financial bubble may destroy economies. It will keep driving national rivalries.
There is likely to be, at least in major players like Google and Amazon, a wariness of self-defeating economics, where efficiency-seeking destroys the consumer base. Even if there aren't worries within the AI companies, governments will hit them pretty hard, because if you see a four trillion dollar drop in US GDP and 10 million jobs lost, even the current Trump administration would have to step in and regulate.
And the way they'd regulate is that they'd probably overcompensate on the economic impact.
So a lot of the major companies, possibly with the exception of Elon Musk, will be thinking about these factors to a certain extent. I think we're in phase one, the boom, and could go to the next stage at any time. We've got the seeds of a lot of the scenarios, including moral hazard.
It's the geopolitical factors that are really complicating things at the moment.
Eventually we’ll get to a new normal. How long it will take depends on the amount of government intervention that actually happens from an economic point of view. It will also depend on geopolitical factors.
If you get a Taiwan invasion, that will impact the manufacture of GPUs and TPUs, because they're all made on the island by TSMC.
Large hyperscalers like Alphabet, Amazon and Microsoft are the most likely to survive the bust, as they have multiple revenue streams and can integrate their AI capabilities into their existing products.
A special thank you to Matthew Knight of Outside Perspective for organising and facilitating the session.