Chinese New Year (CNY) 2026 is also known as Lunar New Year, Spring Festival or, in Vietnam, Tết. 2026 marks the year of the Fire Horse. In the same way that the Super Bowl and Christmas are the stand-out advertising moments of the year in the US and Europe, CNY 2026 will be the same for much of East Asia and Southeast Asia.
There is a large amount of tradition and ritual around celebrating the festival, which provides rich seams of inspiration for strategists and marketing moments.
I featured an advert from Brunei for the first time.
As with previous years, Malaysia had a lot of campaigns running, many of which were partnerships with local musicians to collaborate on a seasonal song. One advantage of partnering with local musicians is their ability to cross-post on their own channels, broadening a video's reach.
In the Malaysian adverts that were storytelling-driven, coping with aging relatives suffering from dementia came through as a common social theme.
Social video has been a great leveller. I have featured a few videos from small businesses this year which were nicely executed despite operating with minimal budgets.
Coca-Cola in China was notable in that it showed strategic thinking closer to what we now see in the west with social-first ‘Instagrammable’ tactics.
Australia
Godiva
Anywhere up to 8 percent of Australia’s population have some connection to China, which explains why Godiva have done a Chinese new year themed range of chocolates.
Brunei
Flower Journal
Flower Journal is a florist shop based in Brunei, yet they have created a cinematic advert with great storytelling. The craft is arguably better than a number of the big brands featured this year. The work by local agency Cinekota really impressed me.
China
Adidas
Adidas made a film about a school football team, focusing on how the team is a 'football family'. Reuniting with family is an important part of lunar new year. It's also about looking forward to the future, hence the children's wishes.
Apple
TBWA\ Media Arts, Shanghai teamed up with film director Bai Xue for Apple's CNY 2026 advertisement. The film joins Apple's series of 'Shot on iPhone' mini movies.
Coca-Cola
Coca-Cola China took a social and experiential approach focused around togetherness. A drone show in Chongqing paired with fireworks that are considered part of China’s intangible cultural heritage was supported by social video clips of a famous father and daughter.
All of this was to address young adults' dual sense of togetherness during Spring Festival, as mainland Chinese call CNY: being together with friends, à la Friends and This Life, as well as more traditional family connections.
Valentino
Valentino put relatively subtle lunar new year symbols into a Chinese take on an American diner: a galloping-horse zoetrope, and red accents throughout the restaurant from neon signs to red floor tiles. As for the film itself, it's basically a video lookbook.
Hong Kong
Hang Seng Bank
Hang Seng Bank ties into the importance of welcoming good fortune into your life at Chinese New Year. Celebrities dress as the god of good fortune, giving wishes for flourishing prosperity to different neighbourhoods across Hong Kong.
Malaysia
AEON
Japanese supermarket chain AEON did a Malaysian market-specific film featuring a mix of well-known entertainers. The 'giddy up' line telegraphs its horse-related theme, and the cultural impact of K-pop is evident throughout the video.
Affin Bank
Affin Bank is consistent in their lunar new year campaigns. Each year they tell the story of how a famous business customer battled adversity to succeed. This time it was Malaysian book retailer BookXcess.
Affinity
Affinity is a Malaysian estate agent. The video creative is a pretty run-of-the-mill reenactment of Chinese new year, with the horse head mask hinting at the CNY 2026 theme. The song itself is a bit of an earworm.
Air Selangor
Air Selangor hits you with a gut punch of an emotional Chinese New Year story that felt like it came straight out of a Thai advertising agency rather than Malaysia. (Thai agencies are famous for wringing you through an emotional shredder, leaving you drained after an insurance ad.)
AmBank
The film melds together traditions around fabric sharing and lion dance to tell a Chinese new year story of a community coming together.
Astro
Astro is a Malaysian holding company that has a mix of linear TV, connected TV and radio assets. Think the reach of the BBC, but a private enterprise.
Bamboo Green Florist
Bamboo Green Florist is a single shop business based in Penang. For a small business their Chinese new year advert punches above its weight.
GVRide
GVRide is a Malaysian ride-hailing app; they sponsored a new year song music video by Namewee alongside other brands.
IJM Land
IJM Land is a Malaysian property developer (part of a larger conglomerate). They position themselves as one of Malaysia's top property developers. The film sits at the tension between the love of heritage, accumulating wealth and the non-monetary aspects of CNY 2026: coming together, family, building memories and legacy.
JinYeYe
JinYeYe sell seasonal hampers, so lunar new year is their peak sales time. Their advert is targeted at the global Chinese diaspora, and they partnered with Tourism Malaysia alongside local musicians. A bee is considered to be a symbol of blessings, representing sweetness, hope and companionship.
Lee Kum Kee
Hong Kong's Lee Kum Kee were the inventors of oyster sauce and have a place in every Asian kitchen cupboard. But their advert is weak sauce (pun intended) that could have been knocked out in PowerPoint.
Listerine
Listerine just straight up sponsored Malaysian producers 1119's new year themed music video.
Loong Kee
Loong Kee is a Malaysian food company that makes everything from processed meats to baked goods. This is at least the third year that they have partnered with internet-famous local musicians to collaborate on a new year themed song.
Lotus
Lotus supermarket was formerly part of Tesco's international footprint, before the UK brand divested its international stores to Thai conglomerate Charoen Pokphand (CP) Group. This advert taps into family friction, with a couple of nice touches referencing wushu cinema. It reminded me a lot of SingTel's films from previous years.
It handles the diversity of Malaysia well, without the awkward approach that Malaysian Airlines went for.
Malaysian Airlines
Malaysian Airlines focuses on Malaysians coming home, which is fitting given that the airline is government-owned. While ethnically Chinese, and speaking Chinese at home, the woman is a devout Muslim.
In reality that’s about 1-2% of the ethnic Chinese population – for ethno-political, social and cultural reasons that I don’t want to get into on this post. The video is as much about a government approved theme as it is about the airline.
Marrybrown
Marrybrown is a Malaysian quick service restaurant. It is really nice how the story moves through time with relatively small but important cues on screen.
McDonalds Malaysia
Great storytelling, but with a serious topic, as middle-aged siblings deal with an aging parent showing signs of dementia.
Nescafé Gold
Instant coffee brand Nescafé Gold goes down the sponsored music video route. But with a few noticeable differences:
Better product placement that articulates the customer moment.
A more diverse cast than most of the other adverts.
The video title, Gongxi Kemeriahan, is a mix of Mandarin and Malay: gongxi means best wishes or congratulations, and kemeriahan means excitement.
All of which is likely because Nestlé is a western multinational and its marketers are looking to target all Malaysians rather than just the ethnic Chinese.
PMG Healthcare
PMG Healthcare is a regional provider of pharmacies, medical and dental clinics to private health insurance customers.
Mr Potato
Mr Potato is a local potato chip brand in Malaysia. Their CNY 2026 advert is a spoof of the Jackie Chan kung fu film Drunken Master.
Public Bank
Public Bank is a Malaysian headquartered bank. This year they have done an AR-based activation. Each Chinese new year you can go into your bank and get a pack of red envelopes and crisp new bills to give out to family, friends and junior colleagues. So this execution makes sense.
RHB
Malaysian bank RHB continued the theme of inspiring stories told in previous Chinese New Year campaigns through to its CNY 2026 campaign. This year tells the story of Komuniti Tukang Jahit, a small tailor's shop that empowers women through sewing skills and fair income opportunities.
Shopee
Singaporean e-commerce platform Shopee partnered with local act 3P on a Chinese New Year song for its Malaysian ad campaign. Throughout Asia, lunar new year songs are all over TV, films, Spotify and YouTube playlists. This leans right into that trend.
SPD Racing
SPD Racing is a small workshop that services motorcycles and sells aftermarket parts. This short video is really nicely executed, replacing parts on the motorcycle with red fittings in the same way that people wear new red outfits at Chinese new year for good luck.
Tenaga
Tenaga is a Malaysian electrical utility. There is a nice bit of storytelling about a lion dance troupe. This could be rerun in future years given its lack of specificity to CNY 2026.
U Mobile
U Mobile is a Malaysian wireless operator. Their advert focuses on the travel use case over lunar new year, as more people travel rather than staying at home.
Vida C
Vida C is kind of like an energy drink; in a number of Asian countries a high vitamin C content is used in the same way that taurine and caffeine are in western energy drinks. They did relatively subtle product placement in this comedic music video. It's much less PC than western multinationals would allow.
Watsons
Watsons is the Boots of Asia. Like previous years, its advert tells a story of family coming together with the joy and chaos that usually ensues. It features Maria Cordero, a Macau-born entertainer, radio and TV personality with a famous Hong Kong cooking show, who is known throughout the region.
Singapore
Carlsberg
Carlsberg launched a pan-Asian campaign with a mix of horse themed packaging design and having it promoted by SKAI ISYOURGOD – a popular Malaysian rapper with appeal across Asia.
FairPrice
Singapore supermarket chain FairPrice focused on the small family moments of the new year celebrations and their ability to build lasting memories. The advert was created by TBWA\ Singapore.
LVMH
LVMH's drinks portfolio has been suffering from declining sales. Family get-togethers are an ideal consumption moment, so it makes sense that Hennessy leant in with special packaging and a Singapore family reunion 'kit'.
Singapore government
A comedic short film with relatively light social engineering, aiming at harmonious relationships and community during CNY 2026. The family is framed as salt-of-the-earth Singaporean Chinese living in an old HDB flat. The food photography was very on point.
United States
Panda Express
Panda Express is an American fast food chain that specialises in American Chinese food. Its advert sits outside the usual lunar new year traditions, becoming a Roald Dahl-style fantasy.
Vietnam
Ensure Gold
Abbott Health's Ensure Gold is a Complan-type drink designed to fortify health and restore strength. The film uses family reunion traditions to reflect on the past, recover during the Tết festival and look to the future with a shared sense of resilience. The theme is even reflected when the family performs traditional ancestor worship and we hear the wishes of their departed family members.
Home Credit
Home Credit are an online financial services company. They provide credit cards, vehicle loans, pre-payment accounts and instalment payments for consumer products. The advert focuses on everyday people and how they prepare for Tết, including decorating the home, getting new clothes and a new karaoke machine for the family gathering.
Mirinda
Mirinda is a Vietnamese soft drinks brand similar to Tango. Their adverts were notable for their brevity: they ran three five-second spots and two 15-second spots. No real story, but there is energy, brand colours feature heavily and it gives off a joyous vibe.
MyKingdom
MyKingdom is a Vietnamese toy retailer similar to Toys R Us. Their mobile first content focuses on the challenges of parents looking to buy toys that will last longer than the spring festival.
Sunhouse
Sunhouse is a home electronics brand making everything from kitchen appliances to cookware.
In the advert they focus on starting the new year healthy; there is a belief in starting the new year as you would like it to go on.
Viettel
Wireless carrier Viettel subverts the idea of a family reunion storyline during Tết. Instead when the family can’t come home, an uncle visits his family members around the country.
As I find more CNY 2026 campaigns I will add them here.
The dot LLM era is one of the chunkiest posts that I have written, so I have put it together in a PDF as well that you can download and share freely amongst colleagues and peers.
The dot LLM era executive summary
The "dot LLM era" represents a pivotal moment in technological history, drawing striking parallels to the dot-com bubble of the late 1990s. This period is defined by a massive influx of capital into Large Language Models (LLMs) and artificial intelligence infrastructure. The dot-com era offers a 'three bubbles' framework of clear analogues: online businesses, open-source ventures, and telecommunications (the closest analogue to the current dot LLM era).
The Core Thesis
The current $1 trillion valuation of the AI sector faces two existential challenges:
Amortisation Risk: Unlike the dark fibre of the 1990s, which had a useful life of over a decade, modern GPU and TPU hardware becomes technically obsolete within 3 to 5 years.
Self-Defeating Economics: If AI-driven automation successfully provides $1 trillion in cost savings through job cuts, the resulting increase in unemployment and drop in GDP could destroy the very macroeconomic environment required to sustain hyperscaler growth.
A Tale of Three Bubbles
The document argues that we are conflating three distinct historical analogues:
Online Businesses: Recalling the “burn rates” of the early web, where pure-play LLMs are currently providing tokens for less than their marginal cost.
Open-Source: Comparing current model proliferation to the rise of Linux, where the ultimate winners may not be the model creators but those providing enterprise-grade support.
Telecommunications: The most instructive analogue, involving massive infrastructure build-outs, vendor financing, and potential “Minsky moments” where optimism outstrips sustainable cash flow.
Geopolitical and Economic Realities
Unlike the 1990s “Long Boom” characterized by US pre-eminence and budget surpluses, the dot LLM era exists within a climate of high government debt and inflation. Furthermore, US dominance is challenged by Chinese hyperscalers and open-source models like Alibaba’s Qwen, which offer high performance at significantly lower costs.
Potential Outcomes
The document outlines seven possible scenarios for the era’s conclusion, ranging from The Breakthrough (total economic transformation) to The Weird Gizmo (total collapse). Currently, “The Moral Hazard”—where AI is deemed “too big to fail” and receives government backing—is viewed as the most likely path (~95% likelihood).
How this dot LLM exploration started
This dot LLM post came out of a number of ideas and vibes.
Everyone[i] from commentators[ii] and podcast hosts to friends is talking about a dot-com-type bubble in LLMs, what I've termed as shorthand the dot LLM era. The dot LLM era comparison has become a steady tempo of concern.
Interest in the term 'AI bubble' took off during September 2025.
The dot LLM era is shorthand for moving backwards and forwards in time, comparing the current AI boom with the dot-com boom of the 1990s to 2001. It's a very different type of 'Y2K trend'.
Many pure-play LLM customers are currently getting tokens for less than their marginal cost[iii], which is part of the reason (alongside the high cost of model training) why the likes of OpenAI, C3.ai, Perplexity, Anysphere and Anthropic are raising new rounds of financing[iv]. They have been losing money[v] and continue to do so.
Spending by both pure-play LLMs and their hyperscaler partners is driven by the effort to create an AI moat[vi]: a sustained proprietary advantage derived from a company's use of artificial intelligence that makes its offerings fundamentally superior, cheaper or 'stickier' than those of rivals, and which is hard for rivals to replicate.
Even the most historically bullish institutional investors, like James Anderson[vii], formerly of Baillie Gifford, have turned bearish on Nvidia and pure-play LLM offerings.
To meet the needs of these services, development of an extra 1,500 data centres has been announced – only a quarter of which are under construction at the time of writing.[viii]
It is a time reminiscent of the mid-2010s when venture capitalists subsidised the cost of services like Uber and Lyft[ix] to grow markets from the ground up. Going back further to the dot-com era, Amazon took a similar approach with its business.
Valuations for the Magnificent 10 (Apple, Alphabet, Amazon, AMD, Broadcom, Meta, Microsoft, Nvidia, Palantir and Tesla) are high. The 24-month forward P/E ratio of the Magnificent 10 is 35 times. By comparison, the S&P 500's equivalent P/E ratio at the peak of the dot-com boom approached 33[x], with a brief peak at the market top of 44.[xi]
Built into these Magnificent 10 valuations is an assumption that LLMs will help them cut costs and/or drive revenue growth by $1 – 4 trillion in the next two years.[xii]
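The forward P/E comparison above is simple arithmetic, and can be sketched as follows. The index levels and earnings figures are illustrative placeholders chosen to reproduce the quoted multiples, not market data:

```python
# A minimal forward P/E sketch of the comparison above. The prices and
# earnings estimates are illustrative placeholders, not market data.

def forward_pe(price: float, expected_earnings: float) -> float:
    """Forward price-to-earnings: today's price over expected future earnings."""
    return price / expected_earnings

mag10 = forward_pe(price=350.0, expected_earnings=10.0)        # 35x, the Magnificent 10 figure
dotcom_sp500 = forward_pe(price=990.0, expected_earnings=30.0) # 33x, the dot-com peak comparison

assert mag10 == 35.0 and dotcom_sp500 == 33.0
```

The takeaway is that a forward multiple embeds an earnings forecast: hold the price constant and the only way to justify the multiple is for the expected earnings to materialise.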
Like the dot-com era[xiii], the dot LLM era is spawning several businesses that are likely to be considered weird gizmos or bad business ideas that will be mocked in the future. The dot-com analogues included the likes of proto-digital currencies Beenz and Flooz[xiv], CueCat[xv] – a bar code scanner that let web users scan codes in magazines to reach related pages online – or the short-lived[xvi] 3Com Audrey[xvii][xviii] and Sony eVilla[xix] internet appliances.
(Disclosure: in my first agency-side role, I worked on 3Com’s consumer products and the Palm device business that was spun off as palmOne[xx] to give space for the Ergo connected home internet appliance range. Audrey’s ability to sync with two Palm devices[xxi], despite Palm being seen as an internal competitor, gives you an idea of how disjointed and chaotic internal planning was in companies like 3Com when they were trying to move at ‘internet speed’. One of the last 3Com projects I worked on was the launch of Audrey in October 2000.)
Bubbles don't stop technology from moving forwards
Like the dot-com era, the dot LLM era is likely to move through two separate cycles: one financial and the other technological. While the financial bubble destroyed a lot of shareholder value, the underlying web technology cycle and use cases became commonplace and evolved. Email became part of our culture[xxii] in the same way that social media became cultural fabric a decade later. LLMs or their successors (such as nested models[xxiii] and world models[xxiv]) are likely to be influential and change the nature of work, life, business and culture.
Already we can see the dot LLM era playing out on social media as over half of content is estimated to be produced with generative AI.
This relentless forward progress for technological adoption and refinement was likened to an organic being by author Kevin Kelly in a phenomenon he called the ‘technium’.[xxv]
Believing that AI is undergoing a dot LLM bubble isn't the same as believing that the technology won't have an ongoing impact.
A Tale of Three Bubbles
When we talk about the dot LLM era we are conflating a number of related bubbles bursting.
The bubbles were based around a common conceit: prior experience counted for naught because the internet changed everything.
This resulted in three distinct historical bubbles:
Online business bubble
Open-source bubble
Telecommunications bubble
The one that most people recall is the dot-com boom where online businesses went under.
Online businesses
Iconic ones included technically ambitious clothing retailer Boo.com, pet care supplies firm Pets.com and many more.
Boo.com burned through $135 million in just 18 months[xxvi]. And they weren't the only ones. In March 2000, Pegasus Research put out a research paper[xxvii] outlining the burn rates of online businesses. The report was under-reported at the time, but took a clear-eyed look at the sector.
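Burn rate and runway, the metrics the Pegasus report turned on, are simple to compute. Here is a minimal sketch using Boo.com's figures from above; the cash-remaining number is hypothetical:

```python
# A toy burn-rate / runway calculation of the kind Pegasus Research applied
# to online businesses. Boo.com's figures from above: $135 million over
# 18 months. The $30m cash-remaining figure is hypothetical.

def monthly_burn(total_spend: float, months: int) -> float:
    """Average cash spent per month."""
    return total_spend / months

def runway_months(cash_remaining: float, burn_per_month: float) -> float:
    """How long until the money runs out at the current burn rate."""
    return cash_remaining / burn_per_month

boo_burn = monthly_burn(135_000_000, 18)      # $7.5m a month
runway = runway_months(30_000_000, boo_burn)  # with $30m left: 4 months

assert boo_burn == 7_500_000 and runway == 4.0
```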
Successful business people failed. Podcaster and academic Scott Galloway[xxviii] founded RedEnvelope[xxix], an online commerce site that sold gifts including personalised items and experiences. Bob Geldof’s online travel site deckchair.com[xxx] doesn’t even merit a mention in most profiles of the famous musician.
Back when I worked at Yahoo! long-time employees said that only a pivot to provide dating services had kept the rest of Yahoo! Europe afloat during the dot-com bust of 2001/ 2002. Online advertising revenues at the time dropped more than 30% over a 12-month period. The difference between success and failure was a very narrow gap.
Amazon survived and eventually thrived because it managed to convince its shareholders to defer profitability for a decade in favour of growth. That move and the company's nascent web services business (AWS) led to the online juggernaut that Amazon is today[xxxi]. While Amazon was founded in 1994 and first went online in 1995, it didn't make its first quarterly profit until the end of 2001[xxxii] ($5 million on revenue of $1.12 billion[xxxiii]), and its first annual profit came in 2003[xxxiv]. Uber and Lyft learned from the example that Amazon had set a decade earlier.
Open-source bubble
The second bubble was the 'open-source' bubble. The rise of the commercial web (and the millennium bug[xxxv]) disrupted existing technology stacks and opened up new opportunities to sell enterprise computing hardware and software. Several companies were launched to support the rollout of open-source software that threatened the Microsoft and Unix operating system duopoly.
My former client VA Linux Systems built web servers and workstations optimised for Linux users[xxxvi]. Now VA Linux Systems is remembered more for its IPO, which priced shares at $30; they opened for trading at $299[xxxvii]. Red Hat[xxxviii] and SuSE[xxxix] provided commercially supported versions of Linux for corporate enterprises. Like their online business counterparts, few of the open-source bubble companies could be considered 'successful'; the outlier was Red Hat, which eventually sold to IBM in 2019 for $34 billion[xl].
The winner, Red Hat, didn’t sell the open-source software (Linux) as its business model; it sold enterprise-grade support, integration, and services.
While the open-source bubble was the smallest of the three bubbles, it had an outsized impact with Linux being the foundation for everything from the Android mobile OS to the largest data centres.
Telecoms bubble
The telecoms bubble was the least visible, yet most spectacular bubble and the one that is most instructive about the dot LLM era.
There are three points where you could start the story of the telecoms bubble: April 30, 1995, when the NSFnet was decommissioned[xli]; the Telecommunications Act of 1996; or 1984.
I am going to go with 1984[xlii]. While the internet was growing in academic and military circles in the US, and there were nascent computer networks elsewhere like the UK[xliii], the real revolution was happening on the London Stock Exchange. The UK government under prime minister Margaret Thatcher looked to get the government out of business ownership. A programme of privatisation took place to sell off numerous nationalised businesses; plans to privatise British Telecom were proposed in 1982. 1984 saw the IPO of British Telecom plc, the previously government-owned telecoms provider[xliv]. The UK government also licensed the first competitor, Mercury Communications[xlv].
From a technological perspective the IPO seemed to be a catalyst[xlvi] for wider telecoms deregulation in western Europe[xlvii] and around the world. In 1985, the Japanese government privatised NTT and opened the Japanese telecommunications market up to competition[xlviii]. The European Commission began developing a regulatory framework to open up national telecoms markets in 1987[xlix]; Europe and Japan would spend the next decade opening up their markets to alternative telecommunications services.
It was into this global landscape that the US overhauled its telecommunications regulations with the Telecommunications Act of 1996[l]. The stated intention of the act was to “let anyone enter any communications business – to let any communications business compete in any market against any other.”[li] The act incentivised the expansion of networks and new services across the US.[lii] Early US netizens rejected the act as a way to regulate cyberspace[liii].
The following year 69 members of the World Trade Organisation (WTO) agreed to open their basic telecoms markets to competition[liv].
In parallel with the wider atmosphere of telecommunications liberalisation was the rise of the internet. Home computer ownership in US households grew from 15% in 1990 to 35% in 1997[lv]. At that time, a small percentage of people would be dialling directly into work, into nascent online services like CompuServe or AOL, into their Charles Schwab account, or into bulletin boards.
Outside the US, it was more likely that your computer was a standalone machine with a spreadsheet, a word processing application and maybe design software, allowing you to write a document at home and bring it in to work on a floppy disk, or possibly an Iomega disk of some sort[lvi].
Private long-distance optical fibre networks, together with free local telephone calls, were the infrastructure for internet connectivity. The web as we know it now was not a surefire winner[lvii]. Much speculation centred on the information superhighway: digital cable television with value-added services like online shopping.[lviii] Bill Gates, at the peak of his power as CEO of Microsoft, was convinced in his book The Road Ahead that digital cable TV was the way forward.[lix] The next edition of the book was edited to reflect the reality of the web instead. The open, interoperable nature of the web proved to be more attractive than the walled-garden digital services envisaged by cable TV companies.[lx]
Investment in telecoms infrastructure increased to meet the future needs of digital services, based on a misreading of internet data traffic growth[lxi]. US telecoms providers invested $500 billion between 1996 and 2001 – mostly on optical fibre networks.[lxii] Much of this spending was done by new entrants including Global Crossing, WorldCom, Enron, Qwest and Level 3. There was a corresponding scale-up by equipment makers like Lucent to supply the telecoms providers.[lxiii] Telecommunications equipment companies Lucent and Nortel[lxiv] both provided vendor financing for their dot-com era client base – engineered in such a way as to inflate sales figures and their share price.[lxv]
Lucent lent customers the money to purchase their equipment. They then booked the loan value as revenue, even though the repayment risk remained and the debt was held as an asset on the Lucent balance sheet.
Nortel used its own shares as financing for its customers. It is believed that Nortel lent $7 billion+ to help start-up telecommunications carriers make equipment purchases. Many of these were unsecured loans, interest-free and tied to future purchases.
Carriers engaged in ‘round-tripping’. Global Crossing would ‘sell’ network capacity to Qwest; Qwest would ‘sell’ similar capacity back to Global Crossing for nearly the same amount. Both companies booked the deals as revenue. US regulators found that this was a pre-arranged swap designed to inflate revenue, despite having no commercial purpose.
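The mechanics of round-tripping are easy to illustrate. Below is a toy sketch (the company names and the $100m figure are hypothetical stand-ins) showing how a matched pair of capacity 'sales' inflates both carriers' reported revenue while no net cash changes hands:

```python
# A toy model of dot-com era 'round-tripping': two carriers swap
# near-identical capacity contracts and each books its 'sale' as revenue.
# Company names and the 100.0 ($100m) figure are hypothetical.

from dataclasses import dataclass

@dataclass
class Carrier:
    name: str
    cash: float = 0.0
    reported_revenue: float = 0.0

def swap_capacity(a: Carrier, b: Carrier, amount: float) -> None:
    """Each carrier 'sells' capacity to the other for the same amount."""
    a.reported_revenue += amount  # a books its sale as revenue
    a.cash += amount
    b.cash -= amount
    b.reported_revenue += amount  # b books the mirror-image sale as revenue
    b.cash += amount
    a.cash -= amount

seller = Carrier("CarrierA")
buyer = Carrier("CarrierB")
swap_capacity(seller, buyer, 100.0)

# Both top lines grow by 100, yet no net cash has changed hands.
assert seller.reported_revenue == buyer.reported_revenue == 100.0
assert seller.cash == buyer.cash == 0.0
```

This is why regulators treated the swaps as pre-arranged: the transactions are perfectly offsetting, so the only thing they change is the reported revenue line.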
Had the bubble continued, WorldCom CEO Bernie Ebbers expected to invest another $100 billion in the company's network infrastructure in 2005[lxvi]. Instead, Ebbers left WorldCom investors with a $180 billion loss. When the telecoms bubble imploded, an estimated trillion dollars in debt was owed, much of which was not expected to be recovered.[lxvii]
In 2002, the telecoms bubble helped change the way business is conducted. In reaction to a number of major corporate and accounting scandals, notably Enron[lxviii] and WorldCom – US lawmakers enacted the Sarbanes-Oxley Act of 2002[lxix]. This Act (SOX as it became known) mandated standards in financial record keeping and reporting for public companies. It covered responsibilities of the board of directors and criminal penalties for certain practices[lxx]. It required the SEC to create regulations for compliance. SOX drove up the cost of a company going public and remaining public due to the administrative burden to remain legally compliant.
Technology vendor financing from companies like Cisco and IBM continued to be an issue through the 2008 financial crisis,[lxxi] but was largely kept out of the common discourse by the tsunami of sub-prime mortgage debt defaults.
The dot LLM era hinges around service providers and equipment makers, in the same way that the telecoms bubble did. Here are some examples and their dot LLM analogues.
Service providers
Dot-com era: Enron, PSINet, Qwest, UUNET, Worldcom
Dot LLM era: Alphabet, Amazon, Anthropic, OpenAI, Oracle, Microsoft, Salesforce
Equipment makers
Dot-com era: 3Com, Ciena, Cisco, Equinix, Juniper Networks, Lucent, Sun Microsystems
Of course, the analogues don't line up perfectly. While the excessive build-out of optical fibre networks could be considered analogous to hyper-scaled AI infrastructure, it isn't an exact match.
The acceleration in network and computing capability at the hyperscalers shows the kind of positive trajectory that Mary Meeker had in her dot-com era analyst presentations[lxxii].
Some critics think that the massive acceleration in network and compute investment for LLM purposes heralds a Minsky moment[lxxiii]: an event that fits Hyman Minsky's Financial Instability Hypothesis.
Minsky considered this coming in three parts:
A self-reinforcing boom driven by optimism and easy credit
A shock, that can be minor in nature, has investors re-look at cash-flow shortfalls
Rapid asset sales and deleveraging / de-risking
The scale of investment in data centre construction, together with the new electricity generating capacity to power them, is orders of magnitude larger than the telecoms boom.
Secondly, LLM infrastructure has a much shorter life. LLM hyperscalers go through GPUs (and TPUs) extremely fast, with a useful life of three years or so.[lxxiv] Complete technical obsolescence of a given GPU / TPU design occurs within five years of launch.[lxxv]
Therefore, if there is an AI bust the processors wouldn’t be available to use in the next economic upswing in the tech sector. By comparison the optical fibre networks laid during the dot-com boom had a useful life of 10+ years and the growth of web 2.0 and social startups was largely built on surplus server and networking equipment left over from the dot-com era. The dot LLM era represents a financial and technological amortisation risk.
There is an added wrinkle in this last point about the useful life of GPUs and TPUs. Hyperscalers' company filings show that they are amortising their network and compute capital expenditure over longer periods, by lengthening the assumed useful lives of components in their financial reporting.
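A minimal straight-line depreciation sketch shows why the assumed useful life matters so much; the capex figure below is illustrative, not taken from any filing:

```python
# A minimal straight-line depreciation sketch contrasting the useful lives
# discussed above. The 120.0 ($120bn) capex figure is illustrative.

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation: an equal expense in each year of useful life."""
    return capex / useful_life_years

capex = 120.0  # $bn, hypothetical

gpu_3y = annual_depreciation(capex, 3)      # 40/year at a 3-year GPU life
gpu_6y = annual_depreciation(capex, 6)      # 20/year after lengthening the assumed life
fibre_10y = annual_depreciation(capex, 10)  # 12/year for dot-com era dark fibre

# Doubling the assumed life halves the reported annual expense, flattering
# near-term earnings without changing the hardware's real obsolescence.
assert gpu_6y == gpu_3y / 2
```

The same capex looks far cheaper per year on paper under a longer life assumption, which is exactly the earnings flattery the filings point to.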
The economic environment
The economic conditions that the dot-com era happened in were very different to the conditions of the dot LLM era.
The US economy had struggled through much of the 1980s and into the early 1990s. Reaganomics had driven a 'jobless recovery' as the financial and services sectors took over from manufacturing as the US economic growth engine. The Savings and Loan crisis peaked in 1989.[lxxvi] This occurred alongside rising interest rates to battle inflation. An oil price spike resulting from the first Gulf War exacerbated economic conditions, and the recession ended George H. W. Bush's ambitions of a second presidential term. Under a new administration, by spring 1994, jobs and economic growth both picked up. 1996 saw growth continuing, and by May 1997 US unemployment dropped below 5% for the first time in 24 years.
Other countries had similar recessions in the late 1980s and early 1990s due to restrictive monetary policies, oil prices and the end of the Cold War. By 1994, global GDP growth returned.[lxxvii] Wired magazine talked of the 1980s as a contagious idea:[lxxviii]
America is in decline, the world is going to hell, and our children’s lives will be worse than our own. The particulars are now familiar: Good jobs are disappearing, working people are falling into poverty, the underclass is swelling, crime is out of control. The post-Cold War world is fragmenting, and conflicts are erupting all over the planet. The environment is imploding—with global warming and ozone depletion, we’ll all either die of cancer or live in Waterworld. As for our kids, the collapsing educational system is producing either gun-toting gangsters or burger-flipping dopes who can’t read.
In the same article, they cast the 1990s as the start of 'The Long Boom' – 25 years of prosperity, freedom and a better environment for the world.
By 2000, the US government went from running a budget deficit eight years earlier to running a surplus. This eased the credit markets for businesses and consumers. The US Taxpayer Relief Act lowered marginal capital gains tax and helped fuel stock market investments. Day trading became a thing by 1999,[lxxix] mirroring investors in crypto and stocks in the 2020s.[lxxx]
By comparison, the current economic climate is more similar to the 1980s than the 1990s. Government debt has reached new heights. Governments have struggled to rein in inflation created by COVID-era supply shocks – which contributed to several governments, including the Biden administration, being voted out of office. High government debt and inflation leave governments with fewer policy tools to manage a systemic shock compared to their 1990s counterparts. The Economist claimed that western countries had government debt levels unseen since Napoleonic times.[lxxxi] There is no US government budget surplus and little 'headroom' for monetary policy.
Wired magazine’s ‘contagious idea’ sounds very familiar:
Climate despair has been recognised as a condition by mental health professionals.[lxxxii]
Global warming is cited[lxxxiii] as a cause of extreme weather conditions[lxxxiv].
Good jobs are disappearing and this is often blamed[lxxxv] on generative AI.
US tariffs, Brexit and the Ukraine war are disrupting global commerce.
In conclusion, the dot-com era economy was much more conducive for retail investors than the dot LLM era is.
The internet changes everything
Dot-com businesses had it right in their view that the internet would change business and shopping for consumers and enterprises. Some of them, like Amazon, made it; many didn't. The investment bank analysts believed it too.[lxxxvi]
You see similar things being written about AI now, along with similar looking ‘hockey stick’ charts.[lxxxvii]
Microsoft research[lxxxviii] suggests that there is a strong link between GDP per capita and AI usage. But it also notes that adoption in advanced economies tends to plateau between 25% and 45%, suggesting non-economic factors eventually moderate growth. This implies that the dot LLM era may not be the kind of game-changer its advocates believe it to be. I would recommend that the reader keeps an open mind on this rather than automatically concluding that it proves generative AI is a technological dead-end. More work is required to understand why the plateau happens and whether it represents a ceiling or a brief rest before adoption accelerates again.
Artificial general intelligence or AGI
AGI describes a system that matches or surpasses the average human across tasks. The idea of AGI has taken on a messianic fervour similar to that of dot-com era visions like George Gilder's Telecosm. Many executives at the most prominent LLM developers believe AGI is imminent.
Elon Musk holds the most aggressive timeline[lxxxix]. He thinks that the main bottlenecks to AGI – specifically power supply and high-end chip availability – are being solved rapidly. Through his company xAI's computing power, he believes that the next generation of models will surpass human intelligence in almost any individual task by early 2026. Anthropic's CEO Dario Amodei believes that AGI could arrive in 2026/7[xc]. OpenAI's Sam Altman considers 2027 to be a realistic timeline for the arrival of AGI[xci]. DeepMind co-founder Shane Legg has come up with a notional timeline of 2028, based on the current rate of progress in both computing hardware and LLM algorithms.[xcii] Long-time AI advocate Ray Kurzweil has published a series of books about AGI, which he termed the 'singularity'; the latest puts 2029 as the year in which AGI is likely to occur[xciii].
As with any cultural artefact, AGI has become blended with religious thinking, as exemplified by this outlandish quote from podcaster Joe Rogan.
“Jesus was born out of a virgin mother. What’s more virgin than a computer? If Jesus does return, you don’t think he could return as artificial intelligence? AI could absolutely return as Jesus.” – Joe Rogan[xciv]
All of which is reminiscent of Timothy Leary’s infatuation with the early web[xcv] and the Heaven’s Gate Cult[xcvi].
Despite some prominent advocates, many experts in the field are sceptical about the imminent arrival of AGI. Among these sceptics is OpenAI co-founder Andrej Karpathy, who believes that the nature of LLMs means AGI won't arrive using current techniques on the timeline that advocates predict[xcvii]. Researchers Rodney Brooks[xcviii] and Yann LeCun[xcix] believe that understanding the physical world is critical for technology to achieve AGI – work that is only starting now. Academic Melanie Mitchell argues that until systems can grasp 'meaning', AGI will not happen[c].
The good bubble
Some of the most important US business executives of the LLM era admit that we are in some kind of bubble. Here’s what they’ve said in their own words.
“When bubbles happen, smart people get overexcited about a kernel of truth … Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.”[ci]
“This frenzy gives us pause … The belief in an A.G.I. or superintelligence tipping point flies in the face of the history of technology”[cii]
“This is a kind of industrial bubble … investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. And that’s also probably happening today.”[ciii]
“Given the potential of this technology, the excitement is very rational. It is also true when we go through these investment cycles there are moments we overshoot as an industry. We can look back at the internet right now, there was clearly a lot of excess investment, but none of us would question if the internet was profound or did it have a lot of impact it was fundamentally changed how we work digitally as a society. I expect AI to be the same; I think it’s both rational and there are aspects of irrationality to a moment like this.”[civ]
“Most other infrastructure buildouts in history, the infrastructure gets built out, people take on too much debt, and then you hit some blip … a lot of the companies wind up going out of business, and then the assets get distressed and then it’s a great opportunity to go buy more … definitely a possibility that something like that would happen here.”[cv]
The real question is whether the dot LLM era is a 'good' bubble or a bad bubble. What does a good bubble look like? And how much will it cost? Most of the quotes above see the dot LLM era as similar in nature to the internet boom and bust: while pioneers may have died, society was irrevocably changed.
Some of the irrationality in the 'good bubble' hypothesis seems to include hubris. For example, OpenAI shunned having external advisers work on its $1.5 trillion worth of data centre deals.[cvi] While OpenAI has relationships with investment banks and corporate law firms, it didn't make much use of them.
These explanations assume that there will be a corresponding surplus of infrastructure that will spark new innovation on the backs of dead companies – a concept that most closely mirrors the telecoms aspect of the dot-com era. The explanations ignore the financial losses suffered by pension funds and retail investors as these companies went bankrupt. They also ignore that AI computing hardware becomes obsolete faster than railway tracks or laid fibre-optic cables.[cvii] Short sellers have accused hyperscalers of assuming unrealistically long useful lives for their computer equipment, in particular the GPUs that power AI model training and inference. The allegations claim that profits are artificially overstated by depreciating assets over a longer period.[cviii]
In March 2000, the NASDAQ index peaked at 5,048. When the dot-com bubble burst, the index declined to 1,139. Recovery took 15 years from the peak value: the NASDAQ reached 5,048 again in March 2015.[cix] The risk is arguably greater this time around, as the top ten stocks in the S&P 500 index constitute 40% of its value.[cx] This implies a vulnerable, brittle market environment prior to any economic bust. So the idea of 'good' is very narrowly defined, and the term is being asked to do a lot of heavy lifting. Predicting the peak of the market[cxi] is challenging too[cxii].
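The asymmetry of that drawdown is worth spelling out, using the index levels cited above:

```python
# Drawdown arithmetic for the dot-com bust, using the NASDAQ levels
# cited in the text: peak of 5,048 (March 2000), trough of 1,139.
peak, trough = 5048, 1139

drawdown = (peak - trough) / peak     # fraction of value lost
recovery_gain = peak / trough - 1     # gain needed to regain the peak

print(f"Decline from peak: {drawdown:.1%}")              # ≈ 77.4%
print(f"Gain required to recover: {recovery_gain:.0%}")  # ≈ 343%
```

A 77% fall requires roughly a 340% gain to unwind, which goes some way to explaining why the recovery took 15 years.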
Can the demand for LLMs grow at the speed implied by invested capital?
Advertising as a possible use case
The first use case to consider for how the dot LLM era could meet its full 'potential' is the ongoing disruption of advertising by digital platforms. Depending on who you believe, the global total market for advertising is close to, or has just exceeded, $1 trillion in total value.
Globally, advertising represents about 1 percent[cxiii] of global GDP, and it usually holds at around that proportion as global economic growth waxes and wanes. In some key markets such as the US, UK and Singapore it makes up a higher percentage of GDP, as these countries are home to advertising platforms, advertising agencies with international responsibility and technology suppliers to the industry.
Advertising isn’t just a cost centre for businesses, but also a driver of economic growth and profit. One Euro of advertising is estimated to generate up to 7 Euros of economic value.[cxiv]
It took digital advertising over a quarter of a century to go from zero to over half of advertising spend. This hinged on two growth spurts: one in 2000 with the rise of online businesses, and the second in 2020 with the COVID-19 lockdown. One factor in digital advertising's growth has been the fragmentation of audiences across media platforms alongside traditional media.
AI (but not LLMs) has been used in advertising as long as digital advertising has been around. It started to be used for understanding consumer behaviour and delivering targeted advertising.[cxv] Amazon started using AI for its recommendations in 1998.[cxvi]
Not all economic value in digital advertising accrued from the transfer of 'traditional' advertising to digital advertising. There is evidence that a rise in e-commerce drives a decline in retail property values; given the strong linkage between e-commerce, retail media and search advertising, part of that value exchange would accrue to the advertising platforms.
… one percent increase in e-commerce sales as a percent of total sales will decrease commercial real estate prices by 7.64%.[cxvii]
It is worth reading the whole economic paper on the decline in commercial real estate prices to appreciate the multiple factors that the author tried to take into account when isolating the impact of e-commerce sales.
The sales didn't only shift online, but offshore. For instance, China-based advertisers accounted for around 11% of Meta's total revenue in 2024[cxviii], which amounted to $18.35 billion. A significant portion of this is believed to come from large e-commerce companies like Temu and Shein[cxix], rather than a large number of small businesses. These companies benefited from Chinese state support[cxx] covering their international logistics and postage costs, which allowed their businesses to run on razor-thin margins.
There has also been a corresponding value transfer from the lost profits of advertising clients to the platforms as well. Advertising industry consultant Michael Farmer made this point in his discussion of large fast-moving consumer goods businesses.
…for the fifty years from 1960 to 2010, the combined FMCG sales of P&G, Unilever, Nestle and Colgate-Palmolive grew at about an 8% compounded annual growth rate per year.
The numbers associated with this long-term growth rate are staggering. P&G alone grew from about $1 billion (1960) to $79 billion in 2010. Throughout this period, P&G was the industry’s advocate for the power of advertising, becoming the largest advertiser in the US, with a focus on traditional advertising — digital / social advertising had hardly begun until 2010. Since 2010, with the advent of digital / social advertising, and massive increases in digital / social spend, P&G, Unilever, Nestle and Colgate-Palmolive have grown, collectively, at less than 1% per year, about half the growth rate of the US economy (2.1% per year).
They are not the only major advertisers who have grown below GDP rates. At least 20 of the 50 largest advertisers in the US have grown below 2% per year for the past 15 years.
Digital and social advertising, of course, have come to dominate the advertising scene since 2010, and it represents, today, about 2/3rds of all advertising spend.[cxxi]
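The compound annual growth rate implied by Farmer's P&G figures can be checked directly; the ~9% result for P&G alone sits comfortably with the ~8% he cites for the four companies combined:

```python
# CAGR implied by the quoted P&G figures:
# roughly $1bn (1960) growing to $79bn (2010) over 50 years.
start_sales, end_sales, years = 1.0, 79.0, 50  # sales in $bn

cagr = (end_sales / start_sales) ** (1 / years) - 1
print(f"P&G implied CAGR, 1960-2010: {cagr:.1%}")  # ≈ 9.1%
```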
Digital advertising at its heart represents marketing efficiency because of its ability to be created and ‘trafficked’ at a much lower cost and greater speed. But this efficiency comes at the cost of corresponding marketing effectiveness in terms of short-term sales and longer-term preference and purchasing impact.
LLMs could undoubtedly further refine marketing efficiency; they could even 'understand' the marketing effectiveness challenge. But LLMs are restricted by the way the audience interacts with advertising, limiting their ability to solve the corresponding marketing effectiveness challenge. Marketing conglomerate WPP have launched a performance media platform that looks to further increase marketing efficiency by no longer requiring a traditional client-agency model. WPP Open Pro[cxxii] is the first advertising agency delivered as a software service powered by an LLM. There is some concern that LLMs could destroy the very platforms which serve advertisers to consumers.[cxxiii]
Based on all these factors, advertising is likely to be only one part of the market supporting AI's growth, and is unlikely to contribute more than a small proportion of the implied trillion-dollar payback required over the next two years if the dot LLM era is not to turn from boom to economic bust.
Business process efficiencies
A second use case mentioned is deriving business efficiencies. This could be done in a number of ways:
Automating white-collar roles
Automating blue-collar and pink-collar roles in conjunction with robotics[cxxiv].
OpenAI recently did research[cxxv] to find out how their service is being used. The sample looked across free, premium and corporate usage of ChatGPT. Some caveats around the research before we delve into it:
It ignored the use of API services.
It is worthwhile remembering that ChatGPT may be under-represented for some tasks like writing code, as developers are very aware of the current best tool for them.[cxxvi]
Microsoft Worklab research[cxxvii] supports the view of LLM as wingman for white-collar workers. In a story arc that is similar to that of early personal computer adoption, they see LLM use as employee advocated and driven.
Actions have consequences
Economists have models that look at the interplay between unemployment[cxxviii], inflation and GDP. I have used the Phillips Curve[cxxix] and Okun's Law[cxxx] in a thought experiment to model the effect on the US economy if AI managed to provide up to $1 trillion in cost savings through automating jobs. Even with notional cost savings of $1 trillion, the revenue that would accrue to LLM providers would be a very small proportion of the $1 trillion revenue growth over the next two years implied by current dot LLM era investments.
The average salary is about $94,952 (based on $45.65/hr[cxxxii] x 40 hours/week x 52 weeks/year).
$1 trillion in job cuts would represent about 10.53 million unemployed.
Phillips Curve – I used a standard slope where a 1% increase in the unemployment rate corresponds to a 0.5% decrease in inflation.
Okun's Law – I used a standard coefficient where a 1% increase in the unemployment rate corresponds to a 2% decrease in real GDP.
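The thought experiment can be reproduced as a short calculation. The wage, slope and coefficient are the values stated above; the US labour force figure of roughly 168 million is my added assumption (approximately the 2024 level):

```python
# Thought experiment: macro impact of $1 trillion in AI-driven job
# cuts, using the Phillips Curve slope and Okun's Law coefficient
# stated in the text. The labour force size is an assumption.

avg_hourly_wage = 45.65
avg_salary = avg_hourly_wage * 40 * 52        # ≈ $94,952 per year

cost_savings = 1_000_000_000_000              # $1 trillion
jobs_cut = cost_savings / avg_salary          # ≈ 10.53 million workers

us_labour_force = 168_000_000                 # assumed, roughly 2024 level
unemployment_rise = jobs_cut / us_labour_force * 100  # percentage points

# Phillips Curve: +1pp unemployment -> -0.5pp inflation.
inflation_change = -0.5 * unemployment_rise
# Okun's Law: +1pp unemployment -> -2pp real GDP.
gdp_change = -2.0 * unemployment_rise

print(f"Average salary:    ${avg_salary:,.0f}")
print(f"Jobs cut:          {jobs_cut / 1e6:.2f} million")
print(f"Unemployment rise: {unemployment_rise:.2f} pp")
print(f"Inflation change:  {inflation_change:.2f} pp")
print(f"Real GDP change:   {gdp_change:.2f} pp")
```

On these assumptions, unemployment rises by roughly six percentage points, pushing the toy model towards deflation and a double-digit fall in real GDP.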
The degree of economic change, at a time of deflation and a drop in GDP, would make the environment very hostile for businesses dependent on high growth rates. The economic model of achieving a $1 trillion payback through cost savings is self-defeating. The very success of automation on that scale would destroy the macroeconomic environment required to sustain the hyperscalers' growth projections.
As we have seen in Japan during the lost decades,[cxxxiii] deflation would delay purchases and investments. The reduction in GDP would mean that there would be less money available for purchases and investments – creating a negative economic environment for all parties involved including the hyperscalers who would have precipitated the economic change. This scenario has alternative asset management firm Blackstone concerned that its peers are not considering the level of economic disruption the LLM era will bring.[cxxxiv]
That is before you even consider the economic shockwave[cxxxv] that would roll around the globe in a similar manner[cxxxvi] to the 2008 financial crisis. All of this means that there is an optimal point up to which productivity can be increased through dot LLM era automation without tanking future growth for hyperscalers and their clients.
AI optimists would see the economic shockwave as short-term in nature, followed by a long-term boom. In this respect, they would draw on examples like the rise of the steam engine, railways or electricity. On balance, I would disagree with these optimists. Economic conditions are very different now. For instance, western economies are now much more 'financialised'[cxxxvii], so the 'short-term' shockwave could be well over a decade in length, more similar to the Great Depression.[cxxxviii] Developed economy governments may not have the headroom[cxxxix] to escape a depression through a Franklin D. Roosevelt-style New Deal Keynesian stimulus.[cxl]
Productivity benefits?
Personally, I have found working with generative AI useful in a number of circumstances, in particular, solving the blank page problem. I have also used it as a research tool, a proof-reader and an editing partner. This article was written with the help of generative AI from an editing perspective. But I have also spent a lot of time looking at the outputs given and ensuring that they accurately reflected the exploration of where I wanted to go. And then there is the issue of hallucinations.
So far, the evidence has been mixed. There are a number of reasons for this; IT projects are hard to implement successfully.
Businesses that have embraced LLMs to improve productivity have been penalised by investors due to the high upfront costs required.[cxli] Some critics claim that US data implies a plateauing of adoption of generative AI tools in companies[cxlii] – I personally think that this data is far from conclusive at the present time.
Some AI researchers, like DeepSeek senior researcher Chen Deli, believe that in the short term AI could be a great assistant to humans, but that over a longer period of 5-to-10 years it would threaten job losses as LLMs become good enough to replace humans in some forms of work.
“In the next 10-20 years, AI could take over the rest of work (humans perform) and society could face a massive challenge, so at the time tech companies need to take the role of ‘defender’,” he said. “I’m extremely positive about the technology but I view the impact it could have on society negatively.”[cxliii]
Many of the leading companies in the LLM space such as Nvidia believe that the technology will drive a leap forward in robotics.[cxliv] Companies are currently building training sets on movement that are similar in function to the knowledge training sets used for LLMs. Even for well-known procedures, there are layers of formidable complexity to simple robotics tasks which would tax the most sophisticated process engineers.[cxlv]
There are limiting factors outside the control of the LLM era ecosystem, ranging from power, the degree of control and the limitations of mechanical engineering, to supply chain challenges wrought by globalisation.[cxlvi] None of these move at, or are related to, Moore's Law speed and scale of innovation. A key component is strain wave gearing (also known as a harmonic drive),[cxlvii] which is made to standard sizes by very few companies, representing an innovation chokepoint similar to ASML's lithography machines in semiconductor manufacturing. The standard sizing limits capabilities from mechanical power to precision and increments of movement, which is one of the reasons why Apple still relies on hand assembly for its iPhones despite 'pick-and-place'[cxlviii] machines (surface mount technology (SMT) component placement machines) being available as far back as the 1980s. This chokepoint is one of the reasons why robotics vendors have focused on software-based differentiation, with limited success so far.
Different LLMs seem to lend themselves to different tasks, as shown by Anthropic's[cxlix] and OpenAI's[cl] own research into the economics and usage behaviour of their respective tools.
The global environment
Unlike other technological leaps forward, the LLM era isn't likely to see American platforms dominate everywhere outside China. The dot-com era was the high point of American power. Coming out of the Cold War, globalisation was benefiting US technology companies. The decline of Russia allowed the Clinton administration to open up the internet to commercial usage. American companies dominated enterprise software, semiconductors, wireless and computer network products.
25 years later, the US no longer has pre-eminence. Many of its past champions like Lucent[cli] or Motorola[clii] are either much reduced or no longer American companies. Globalisation in the technology industries has meant that the concentration of expertise has become interconnected and dissipated to global centres of excellence such as TSMC[cliii], Foxconn[cliv] and Huawei[clv]. China has developed a parallel ecosystem, parts of which, like ByteDance, successfully compete head-to-head with large American technology platforms.
The LLM era is no longer only American in nature. Chinese companies have compelling offerings. For instance, Chinese hyperscaler Alibaba claim to have models that are comparable to their American counterparts, yet need 82% fewer Nvidia processors to run.[clvi] Even Silicon Valley companies are using Chinese LLM models over the likes of OpenAI or Anthropic. The news that Airbnb opted to use Alibaba's open-source Qwen AI model over ChatGPT was a milestone event.[clvii] US technology sector investors are using the Kimi K2 model because it was 'way more performant and much cheaper than OpenAI and Anthropic'.[clviii] China benefits from a much cheaper model training cost per token. The open-source models can be run on private infrastructure, keeping sensitive data in-house and ensuring 'corporate sovereignty'.
In the global south, China's technology companies have corporate and government relationships built up over years. Their low cost combined with trusted relationships reduces American hyperscalers' opportunities for global expansion.
While US companies have access to more powerful chips, sanctions against Chinese companies aren't fully effective, with Nvidia chips being smuggled into China and heavy computing work like model training being run in data centres[clix] based in other Asian countries, notably Malaysia.[clx]
There is one clear parallel between the earlier telecoms bubble and the dot LLM era: demand in the global south seems to be constrained by infrastructure rather than user interest in adopting generative AI tools.[clxi]
Other bubbles
The dot-com era tends to be cited due to it being a technology story as much as an economic story. Many other bubbles were purely financial in nature:
The sub-prime mortgage crisis of 2008/9
The US savings and loan crisis of the late 1980s and early 1990s
1929 stock market crash
Tulpenmanie (the Dutch tulip mania) of 1634–1637
The 1929 crash has sometimes been described as an electric generation bubble bursting, since some 19% of the shares available on the market were from utility companies. But the impact was so widespread that it would be hard to argue that it was really a 'technology bubble'.[clxii]
The British railway mania of the 1840s is often cited as an analogue of the telecoms bubble a century and a half later. The railway mania rolled out at a slower pace than the dot-com boom. It featured a Minsky moment and resulted in a consolidation of rail companies rather than the outright failure of many businesses. Even so, up to a third of railway companies started during the period collapsed before building their lines, due to poor financial planning.[clxiii]
The key defining factor for how bad the bust is from a bubble, and how long the bust lasts for is the amount of borrowing (or leverage) involved.[clxiv]
How might the dot LLM era differ from the dot-com era in terms of the corresponding bust?
Zero-cost co-ordination
An economic paradigm shift would have occurred that has no clear analogue in history that I am aware of. For instance, there are theoretical writings about how LLMs and agents will change the very nature of economics, and how the corporation may be changed by the advent of 'zero-cost co-ordination'[clxv] reducing economic friction. This could upend the very nature of what a company is.
Historically, one of the reasons given for participating in a firm was that internal coordination costs were cheaper than market coordination (transaction costs). If agentic AIs are rational actors that reduce market transaction costs (search, negotiation, contracting) to near-zero, the need for large, hierarchical firms changes and likely diminishes.[clxvi]
If this theory were true, the excessive capital expenditure would simply be the price paid for creating the world's first zero-friction economic system. In theory, it's possible, but it depends on the humans involved being rational decision-makers in a rational culture that doesn't exhibit risk aversion, and on their agents not developing similar biases over time. This often isn't true, even in business-to-business situations; for instance, in the past 'nobody ever got fired for buying IBM'.[clxvii]
This viewpoint in some ways is similar to Wired magazine’s editorial team circa 1998 and futurist author Kevin Kelly’s ideas on the ‘new economy’.[clxviii] The thesis was that the internet would reduce information friction. The dot-com bust provided a more tempered lens on the ideas of the ‘new economy’. Would efforts to reduce economic friction fare any better than the information friction reduction of the ‘new economy’?
Google Research economists have asked this same question[clxix] and came back with more open questions than answers. The authors posit that AI systems, being built on optimisation principles, can be modelled as standard 'textbook' economic agents. When AI agents deviate from perfect rationality, they may exhibit 'emergent preferences' and display behavioural biases similar to those found in humans. The authors also highlight what they termed the 'contract' problem: an analogy between the AI alignment problem and the economic theory of 'incomplete contracts', where a designer (the principal) cannot perfectly specify the AI agent's goals, leading to unpredictable behaviour. The economists were concerned there would be a need for new institutions to govern an AI agent economy, to ensure markets remain well-functioning and stable.
The open questions:
Whether AI agents have stable 'beliefs'.
How they update those beliefs.
Whether they can hold 'higher-order beliefs' (beliefs about others' beliefs).
There is a lack of research and benchmarks for evaluating AI performance in complex, multi-agent systems which needs to be addressed. One of the key challenges is that small differences between AI and human behaviour can become magnified in an equilibrium.
But what if, as Francis Fukuyama argues,[clxx] transaction friction isn't the block on economic growth? Instead, it's resource constraints and social and political considerations that are the brake on how fast economic growth can happen.
AI-fuelled breakthroughs
The infrastructure boom fuels foundational AI research far beyond current capabilities. In this scenario, AI systems become active engines of scientific discovery. The AI research achieves a breakthrough in a hard-science field like drug discovery (e.g., new classes of effective antibiotics), materials science (e.g., room-temperature superconductors), novel ways of rare earth metal extraction, or sustained controllable nuclear fusion – and facilitates record compression of time-to-market for these developments. LLMs would not only have to facilitate the breakthrough, but drive mass-accelerated implementation and regulation.
In theory, LLMs could:
Optimise experiment and trial design.
Perform in-test and post-test data analysis.
Drive synthesis of regulatory compliance documents and evidence.
Optimise production and supply chains to facilitate the manufacture and commercialisation of a new break-out product.
If all this happened, it would create entirely new sources of economic value, far dwarfing the infrastructure cost. That is a lot of serendipity, of huge scope and massive assumptions: even the NASA Apollo Program[clxxi] took eight years to have its first crewed lunar flight[clxxii] and another year to put the first men on the moon.
AI-fuelled breakthroughs are usually linked with progress towards AGI, or human-level AI.[clxxiii] A research paper from Cornell University outlined benchmarking for progress towards understanding the real world. The paper introduced WorldTest, a new framework for evaluating how AI agents learn and apply internal world models through reward-free exploration and behaviour-based testing in modified environments. Its implementation revealed that while humans excel at prediction, planning and change detection tasks, leading AI reasoning models still fall short. Their shortcomings were associated with flexible hypothesis testing and belief updating. The findings suggest that future progress in AI world-modelling depends less on scaling compute and more on improving metacognition, exploration strategy and adaptive reasoning.
Platform lock-in and bundling
Many of the established hyperscalers and enterprise software vendors (Adobe, Alphabet, Amazon, Microsoft, Oracle and Salesforce) have established client relationships across a range of products:
CRM.
Creative Suite and Marketing Cloud.
Office suite or Workspace.
Enterprise Cloud services.
Rather than a disruptive paradigm shift, the LLM payback could come from an immediate, embedded, non-disruptive uplift across existing indispensable products and services. It extracts value from the existing enterprise wallet, which breaks the historical analogy of relying on new economic value creation. On the face of it, this is a largely risk-free proposition.
The US legal environment is very different from the dot-com era. Microsoft would not have to worry about facing an antitrust trial similar to its browser-bundling conflict with Netscape.[clxxiv]
While US antitrust enforcement is considered laxer than during the Biden administration, these technology companies would still be concerned about competition regulators in the EU and elsewhere. For example, just this September, Microsoft had to unbundle Teams from its Office software to avoid EU antitrust fines.[clxxv] Alphabet[clxxvi] and Amazon[clxxvii] have had previous bruising run-ins with authorities outside the US, which would complicate any decision to bundle an LLM service.
What could dot LLM era outcomes look like?
I have come up with seven scenarios that range in the kind of impact that generative AI as a sector may have. These range from wildly successful to total failure.
The breakthrough: total economic transformation due to a hard-science breakthrough.
The ‘new economy’: frictionless co-ordination facilitates more economic activity.
The ‘wingman economy’: a managed productivity boom.
The ‘Red Hat model’: an open-source foundation driving value-added services.
The ‘moral hazard’: major AI players are considered ‘too big to fail’ and backstopped with government loan guarantees.
The ‘telecoms bust’: a Minsky moment and amortisation crisis.
The ‘weird gizmo’ collapse: total bust.
How these scenarios map out, by the level of value created or value saved through increased efficiency:

New value creation, negative / zero net value: the ‘weird gizmo’ collapse (value was illusory).
New value creation, positive to transformative: the breakthrough (new science); the ‘new economy’ (new coordination).
Efficiency / existing value, negative / zero net value: the ‘telecoms bust’ (capex > value); the ‘moral hazard’ (value is geopolitical rather than financial).
Efficiency / existing value, positive to transformative: the ‘wingman economy’ (managed productivity); the ‘Red Hat’ model (value moves to services).
The breakthrough: total economic transformation
What it looks like: The massive capital expenditure on infrastructure is validated because AI achieves a true, hard-science breakthrough. This creates entirely new sources of economic value, such as sustained nuclear fusion, room-temperature superconductors, or new classes of antibiotics. In this outcome, the $1 trillion in implied value is not only met but vastly exceeded, justifying the “bubble” as the necessary investment for a new industrial revolution.
What to watch:
Scientific breakthroughs.
Metric:
High-impact scientific publications that use AI for novel discovery, NOT just analysis.
Source:
Track major journals like Nature and Science – and coverage in New Scientist – for breakthroughs in AI-driven drug discovery, materials science, or physics. Recent reports on AI’s role in molecular innovation and even quantum computing show this is a key area to watch.
The “new economy”: frictionless co-ordination
What it looks like: Agentic AI successfully reduces market transaction costs (search, negotiation, contracting) to near-zero. This upends the nature of the corporation, as the historical reason for firms (cheaper internal vs. market coordination) diminishes. The massive capital expenditure is seen as the “price paid for creating the world’s first zero-friction economic system”. This is the 1998 Wired “new economy” thesis finally coming true, though it faces challenges like the “Contract problem” and AI alignment.
What to watch:
Agentic breakthroughs
Metric:
Demonstrations of “agentic” AI (AI that can independently complete complex, multi-step tasks), particularly in commercial or economic settings.
Source:
Monitor announcements from leading research labs (DeepMind, FAIR, OpenAI) and market analysis on “agentic AI” to see if it’s moving from theory to reality.
The ‘wingman’ economy: a managed productivity boom
What it looks like: The technology finds its “optimal economic point”. LLMs become a powerful “wingman for white-collar workers”, similar to the adoption of early PCs. This drives real productivity gains, but the $1 trillion in cost savings is implemented gradually, avoiding the catastrophic deflationary shock modelled by the Phillips Curve and Okun’s Law. The “Magnificent 10” see steady growth, but the ‘pure play’ LLMs struggle to find profitability on their own.
What to watch:
National Productivity Data
Enterprise Adoption & AI Mentions in Earnings
Metrics:
US labour productivity and unit labour costs. We are looking for a “golden path”: productivity rising faster than unit labour costs, which would suggest companies are becoming more efficient without just slashing jobs en masse.[clxxviii]
The number of S&P 500 companies citing “AI” on their quarterly earnings calls. A high number (e.g., over 40-50%) shows it’s a top-level strategic priority.
Sources:
U.S. Bureau of Labor Statistics (BLS) – productivity and costs. The quarterly releases from the BLS are the single best macro-indicator for this scenario.
FactSet Earnings Insight[clxxix] – they regularly publish analyses of the frequency of “AI” mentions in earnings calls, which is a direct proxy for corporate focus and investment.
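The “golden path” described in the metrics above is simple arithmetic: productivity growth outpacing unit labour cost growth. A minimal sketch in Python, using purely hypothetical quarterly figures (not real BLS data):

```python
def on_golden_path(productivity_growth_pct, unit_labour_cost_growth_pct):
    """The 'golden path': productivity rising faster than unit labour costs."""
    return productivity_growth_pct > unit_labour_cost_growth_pct

# Hypothetical quarterly figures in the style of BLS releases (percent, annualised).
quarters = [(2.3, 1.0), (2.7, 2.9), (3.1, 1.8)]
print([on_golden_path(p, u) for p, u in quarters])
```

A sustained run of True values across quarters would support the ‘wingman economy’ scenario; isolated quarters prove little either way.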
The ‘Red Hat’ model: an open-source foundation driving value-added services
What it looks like: The “pure play” LLMs like OpenAI and Anthropic, which are losing money, ultimately fail or are acquired for pennies on the dollar. However, open-source and open-weight models (like Llama) proliferate. Alibaba’s Qwen model has already been very successful: Singapore’s national AI programme dropped Meta’s Llama in favour of it,[clxxx] joining Airbnb as a Qwen user;[clxxxi] meanwhile Chinese model DeepSeek has been adopted by European startups.[clxxxii] The long-term winners are not the model creators but the companies that, like Red Hat, sell “enterprise-grade support, integration, and services”.
LLM models have an “outsized impact”, becoming the “Linux” for the next generation of applications, but the initial investors see a massive correction.
What to watch:
Open-source vs. closed-source momentum
Metric:
Rate of change in download statistics, new model uploads, and developer activity on open-source AI platforms.
Source:
Hugging Face Trends.[clxxxiii] This dashboard shows which open-source models are gaining traction. If downloads for open-source models are growing faster than API call revenue for closed-source models (a harder metric to find), it signals a shift toward this “Red Hat” scenario. GitHub’s annual “Octoverse” report is another key source, as it tracks the rise of AI-focused projects.
The ‘moral hazard’: major dot LLM players are considered ‘too big to fail’ and backstopped with government intervention
There are elements of a non-bubble, financial-crisis dynamic to the dot LLM era. Chinese LLM vendors are being given subsidised electricity by local governments,[clxxxiv] alongside preferential rates in data centres. The US government could come to consider the LLM era too large a part of the economy to be allowed to fail through normal market forces. OpenAI has recently had to deny rumours[clxxxv] that it sought US government loan guarantees for at least part of the multi-trillion-dollar deals it has put in place for data centre infrastructure and hardware. AI sovereignty comes to be seen as a geostrategic and national security imperative, with business and investor considerations taking a back seat.
Hyperscalers are hitting a ‘power wall’: they cannot get the equivalent electricity-generating capacity of 16 Hoover Dams. Getting over the wall would require a massive amount of government infrastructure funding.[clxxxvi]
Major government involvement may impact the speed of development as LLM model providers and supporting infrastructure no longer have to constantly innovate and instead move at the speed of their government clients.
What to watch:
Shift in rhetoric from commercial to critical: Observe how language from policymakers, military leaders, and national security bodies evolves. A shift from discussing AI in terms of commercial competition (e.g., “market leadership”) to national infrastructure (e.g., “digital sovereignty,” “critical asset,” “geostrategic imperative”) is a primary indicator. This reframes an economic failure as a national security failure.
Direct & indirect state support mechanisms: look beyond simple R&D grants. Watch for the creation of new, targeted support instruments:
Direct: preferential pricing on energy/compute, state-backed datacentre construction, sovereign wealth fund investments, or direct “national champion” subsidies.
Indirect: government-backed loan guarantees for infrastructure (like the rumoured OpenAI deal), strategic procurement (where the government becomes the anchor customer) – Palantir would be an exemplar, and “regulatory moats” that favour incumbents (e.g., high-cost safety/licensing rules that only large, state-backed labs can afford).
“Bailout” vs. “investment” framing: monitor how state intervention is publicly justified. A struggling “national champion” AI firm receiving a sudden capital injection from a state-adjacent entity will likely be framed as a “strategic investment in national capability,” not a “bailout.” This framing is key.
Metrics:
Value of state & military contracts: Track the total disclosed value of government contracts (especially from defence and intelligence agencies) awarded to foundational model providers. A rapid increase, or contracts for non-competitive “strategic deployment,” signals TBTF (“Too Big to Fail”) status.
Frequency analysis of policy language: quantify the co-occurrence of terms like “AI,” “sovereignty,” “national security,” and “critical infrastructure” in parliamentary/congressional records, national strategy documents, and defence budget justifications. A rising frequency indicates the ideological groundwork for a TBTF policy.
State-backed capital flows: monitor announcements from sovereign wealth funds, national investment banks (e.g., UK’s National Security Strategic Investment Fund), or public pension funds. Track the size and frequency of their investments into large, established AI labs, as opposed to a diverse portfolio of early-stage start-ups.
Subsidy disclosures: quantify the value of announced subsidies (e.g., tax credits, energy discounts, land grants) specifically earmarked for AI datacentres and R&D hubs associated with the major players.
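The frequency-analysis metric above can be prototyped in a few lines. A minimal sketch, assuming plain-text records; the watch-term list and sample snippets below are invented for illustration, not a real corpus:

```python
import re
from collections import Counter
from itertools import combinations

# Watch-terms from the metric above; co-occurrence within one document
# is counted once per pair, however often the words repeat.
TERMS = ["AI", "sovereignty", "national security", "critical infrastructure"]

def term_cooccurrence(documents):
    """Count documents in which each pair of watch-terms appears together."""
    counts = Counter()
    for doc in documents:
        # Word-boundary matching avoids counting 'AI' inside words like 'said'.
        present = sorted(
            t for t in TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", doc, re.IGNORECASE)
        )
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

# Hypothetical snippets standing in for parliamentary records.
docs = [
    "AI is now critical infrastructure for national security.",
    "The committee debated AI research budgets.",
]
print(term_cooccurrence(docs).most_common())
```

Run against dated batches of records, a rising co-occurrence count over time would indicate the ideological groundwork for TBTF policy described above.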
Sources:
Financial & policy journalism: The Financial Times, Bloomberg (especially its Bloomberg Government vertical), and Politico as media sources. Their reporters are often the first to break stories on subsidies, lobbying, and the intersection of tech and state power.
Government procurement & grant databases: official portals like USASpending.gov in the US or the UK’s Contracts Finder service. While difficult to navigate, they provide primary evidence of public funds flowing to specific companies.
Think tank & national security publications: Reports from organisations like the Center for a New American Security (CNAS) in the US, the Royal United Services Institute (RUSI) in the UK, or the Mercator Institute for China Studies (MERICS). They often analyse and quantify the geostrategic rhetoric and policy shifts. The main challenge with this source might be timeliness of publication in comparison to the previous sources.
Company filings & investor calls: For publicly traded companies (Microsoft, Google, Amazon, Nvidia), annual reports (10-K forms) and quarterly investor calls often mention large government contracts or regulatory risks/opportunities, providing a corporate-side view of this trend.
The ‘telecoms bust’: a Minsky moment and amortisation crisis
What it looks like: The $1 trillion in value fails to materialise from either advertising or business efficiencies. Investors have a Minsky moment and realise the debt and capex are unsustainable. The bubble implodes like the telecoms bubble did. The key difference is the financial and technological amortisation risk: the GPUs (with a 2-to-5-year useful life) become obsolete. Unlike the dot-com era’s dark fibre, this infrastructure cannot be repurposed by a “web 2.0”. This leads to trillions in write-offs, analogous to WorldCom’s $180 billion loss.
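The amortisation risk is straightforward to quantify with straight-line depreciation. A minimal sketch, using a hypothetical $100B GPU fleet rather than any real company’s figures:

```python
def annual_depreciation(capex, useful_life_years, salvage=0.0):
    """Straight-line depreciation: an equal charge each year of useful life."""
    return (capex - salvage) / useful_life_years

# Hypothetical $100B GPU fleet: shortening the assumed useful life
# from 5 years to 2 raises the annual charge from $20B to $50B.
for life_years in (5, 3, 2):
    charge = annual_depreciation(100e9, life_years)
    print(f"{life_years}-year life: ${charge / 1e9:.1f}B per year")
```

This is why the useful-life assumption matters so much: a forced move from five-year to two-year schedules would more than double the annual earnings hit before any write-offs.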
What to watch:
Hyperscaler capital expenditure (Capex)
GPU amortisation & resale value
Metrics:
Quarterly capex announcements from Alphabet (Google), Meta, Microsoft, Oracle and Amazon (AWS). This is made trickier to interpret by Meta, Microsoft and Oracle exploring forms of private-equity financing.
The rate of change in Nvidia’s[clxxxvii] data centre revenue, and in Broadcom’s and AMD’s enterprise / data centre revenue. This is the “equipment maker” side of the equation. As long as this number is growing, the bubble is inflating. A sudden slowdown would be the first sign of a “Minsky moment”.
The resale value of last-generation GPUs (e.g., H100s as B200s/B300s roll out). If these prices collapse, it validates the thesis that the assets cannot be repurposed, and the financial write-downs will be catastrophic.
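The “sudden slowdown” signal in the metrics above can be operationalised as consecutive quarters of decelerating growth. A minimal sketch with invented revenue figures:

```python
def qoq_growth(revenues):
    """Quarter-over-quarter growth rates for a revenue series."""
    return [(b - a) / a for a, b in zip(revenues, revenues[1:])]

def growth_slowing(revenues, quarters=2):
    """True if growth decelerated for `quarters` consecutive quarters."""
    g = qoq_growth(revenues)
    recent = g[-(quarters + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

# Hypothetical data-centre revenue in $B: still growing in absolute
# terms, but each quarter's growth rate is lower than the last.
rev = [20.0, 26.0, 31.2, 35.0]
print(qoq_growth(rev), growth_slowing(rev))
```

The point of the sketch is that revenue can keep rising while the second derivative turns negative; that deceleration, not a revenue fall, would be the earliest warning.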
Sources:
Hyperscaler capex reports from financial analysts and data centre publications. Recent reports show combined capex is projected to hit hundreds of billions, a clear sign of the infrastructure race.
Alphabet, Meta, Microsoft, Oracle and Amazon quarterly results and investor roadshow presentations.
Nvidia, Broadcom and AMD quarterly earnings reports. Nvidia’s Q2 FY2026 report showing data centre revenue of $41.1B is a perfect example of this indicator.
Resale value of GPUs is a harder metric to track. Monitor tech hardware forums and eBay listings, or look for analyst reports on the “used GPU market.” A collapse in this secondary market for last generation GPUs is a major red flag.
The “Weird Gizmo” Collapse: total bust
What it looks like: The technology is ultimately seen as a novelty. It’s the 2020s version of Boo.com, Beenz and Flooz, or the 3Com Audrey. The argument that “AGI is not imminent[clxxxviii], and LLMs are not the royal road to getting there” wins the day. This bear view of AGI is widely shared by prominent experts[clxxxix] within the machine learning field, which is why new ways of working, like nesting models and world models, are being explored alongside quantum computing. In this scenario, the pure play companies burn through all their cash and vanish. The hyperscalers are left with billions in useless, obsolete silicon, and the “dot LLM era” is remembered as a short-lived period of speculative mania.
What to watch:
AI startup burn rates & funding (the “burn rate” indicator)
Metric:
Quarterly venture capital funding for AI startups, specifically looking for a rise in “down rounds” (where valuations decrease) or outright failures.
Source:
Data from firms like CB Insights or Crunchbase.[cxc] Recent reports show that while “mega-rounds” for established players (like Anthropic) are still huge, seed-stage funding is declining, showing a “haves and have-nots” market. A slowdown in the mega-rounds would signal the bust is beginning.
Personal assessment of likely outcomes by scenario
The moral hazard (estimated likelihood ~95%):
US–China trade disputes and geopolitical strife.
Chinese government investment in startups.
Chinese local government subsidies for operating AI services.
The current position that AI has in driving US GDP growth across sectors, including construction and energy.
Likely OpenAI loan guarantees.
Palantir is already deeply embedded in the US government as a vendor and has partnerships with defence contractors like Anduril.

The ‘wingman’ economy (estimated likelihood ~80-90%):
Some research reports indicate that AI is augmenting knowledge workers in different sectors.
Claims of AI replacing workers are more difficult to validate, for example: Klarna moving to automation and then rehiring.
Clifford Chance offshoring back-office roles to Poland and China while claiming that the job losses were due to AI.

The ‘Red Hat’ model (estimated likelihood ~70-80%):
Airbnb opting to use Alibaba’s open-source Qwen AI model over ChatGPT was a milestone event.[cxci]

The ‘telecoms bust’ (estimated likelihood ~75%):
Concerns about the size of capital expenditure.
Rate of growth of supporting infrastructure.
Uncertainty about length of depreciation affecting overall shareholder trust in hyperscalers.
Cheaper alternatives like Qwen.

The ‘new economy’ (estimated likelihood <15%):
The uncertain economics of ‘zero friction’ transactions.
Real-life legal and regulatory issues.
Amazon’s dispute with Perplexity over using AI agents on its website.

The breakthrough (estimated likelihood <10%):
A black swan event.

The ‘weird gizmo’ (estimated likelihood <5%):
It would be unusual for a technology to disappear completely.
LLMs have been finding some use already.
The rise of open-source AI models reduces the cost of operation.
Where are we at the moment?
I put together a diagram to assess where we are at the moment, given that some of the scenarios outlined are running concurrently with each other.
[clxxix]FactSet Insight blog – Search their blog for keywords like “AI” or “earnings.” They regularly publish analyses on the number of S&P 500 companies that cite “AI” on their earnings calls, which is a direct proxy for C-suite focus.
[clxxxiii]Hugging Face models hub – the view can be filtered by ‘trending’ and ‘most downloaded’ to see what the community is using, versus what closed-source models are being marketed.
[cxc]Crunchbase News – They provide regular analysis of funding rounds. Watch for ‘down rounds’, M&A consolidation among start-ups or acquihires and slowdowns in $100M+ mega-rounds of fund raising.
China bans tech companies from buying Nvidia’s AI chips | FT – the Nvidia ban is an interesting move by the Chinese government. I don’t think it’s just about putting pressure on their semiconductor companies and foundries. I think it also steers the software industry and its approach to AI towards more computationally efficient models. China can achieve comparable compute using more, lower-spec chips and more power.
At the moment the leading-edge models in the west are taking a hardware-led approach, rather like putting a larger-capacity engine in a car, a la an old-school hot rod. China is forcing its technology sector to take a more holistic approach.
Having Nvidia lobby the US for permission to sell Blackwell in China is a secondary benefit, and the ban is not the hard block people think it is. Compute jobs are already done abroad to get around the ban anyway. It’s easy to move SSDs from China to Malaysia and run workloads on local data centres.
Can China really make its consumers spend? | Jing Daily – after decades of export success, the country’s bet on domestic consumption to propel growth bumps up against beliefs about money and security. They’ve got more chance of increasing the number of children born; the beliefs are that ingrained.
I love some of the apparently random things that Toyota under Akio Toyoda does. From the GR Yaris to this documentary on a vintage Komatsu steel press that was instrumental in Toyota’s first car factory and is still doing sterling work.
The dialogue is in Japanese but English subtitles are available.
My thinking on the concept of intelligence per watt started as bullets in my notebook. At first it was more of a timeline than anything else, and it provided a framework of sorts from which I could explore efficiency in terms of intelligence per watt.
TL;DR (too long, didn’t read)
Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering.
Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.
Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.
A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits.
Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.
As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.
Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.
Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use.
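As a crude way of making the intelligence per watt idea concrete, it can be treated as a ratio of useful output to power draw. The sketch below uses invented device figures purely for illustration:

```python
def intelligence_per_watt(tokens_per_second, watts):
    """A rough proxy: useful model output per unit of electrical power."""
    return tokens_per_second / watts

# Hypothetical figures: a server accelerator is faster in absolute
# terms, but an on-device NPU can win on output per watt.
server = intelligence_per_watt(tokens_per_second=10_000, watts=1_000)
device = intelligence_per_watt(tokens_per_second=60, watts=4)
print(server, device)
```

The absolute numbers are meaningless; the point is that the metric can favour the small, local device even when the data centre wins on raw speed.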
An early example of integration was not a computer but a vacuum valve that combined several radio parts in one glass envelope. It had three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components sat inside their own glass tubes. Normally each triode would be inside its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.
Post-war scientific boom
Between 1949 and 1957 engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas were made possible when breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on the one piece of silicon to create the IC.
Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends how you define the IC, with what is now called a monolithic IC considered a ‘true’ one; Kilby’s version wasn’t a true monolithic IC. As with most inventions, it was the child of several interconnected ideas that coalesced at a given point in time. In the case of ICs, this happened in the midst of materials and technology developments, including data storage and computational solutions from the idea of virtual memory through to the first solar cells.
Kilby’s ICs went into an Air Force computer[ii] and the onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].
TTL (transistor–transistor logic) circuitry was invented at TRW in 1961 and licensed out for use in data processing and communications – propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payment systems still rely on the mainframe.
AI’s early steps
What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.
Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:
Reasoning as search – essentially a step-wise, trial-and-error approach to problem solving, compared to wandering through a maze and back-tracking when a dead end was found.
Natural language – where related phrases existed within a structured network.
Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer.
Single layer neural networks – to do rudimentary image recognition.
By the time the early 1970s came around AI researchers ran into a number of problems, some of which still plague the field to this day:
Symbolic AI wasn’t fit for purpose in solving many real-world tasks, like crossing a crowded room.
Trying to capture imprecise concepts with precise language.
Commonsense knowledge was vast and difficult to encode.
Intractability – many problems require an exponential amount of computing time.
Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems.
By 1966, US and UK funding bodies were frustrated with the lack of progress from the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to major centres researching AI was cut by both the US and UK governments. Despite the reduction in funding to the major centres, work continued elsewhere.
Mini-computers and pocket calculators
ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first commercially successful minicomputer, the PDP-8, in 1965. The cost of mini-computers allowed them to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.
A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments. It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.
LISP machines and PCs
AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines continued on and surpassed them, as the cost of computing dropped due to mass production.
The rise and decline of LISP machines was driven not only by Moore’s Law, but also by Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so; Tsugio Makimoto observed 10-year pivots between standardised and customised semiconductor processors[v]. The rise of personal computing drove a pivot towards standardised architectures.
PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.
Personal computing started in businesses when office workers brought in computers to use early programs like the VisiCalc spreadsheet application. This allowed them to take a leap forward, not only tabulating data but also seeing how changes to the business might affect financial performance.
Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own.
Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.
A Bicycle for the Mind
Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across in everything from briefing product teams to press interviews. As a concept, a 1990 filmed interview with Steve Jobs articulates the context of this saying particularly well.
In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].
The ‘bicycle for the mind’ concept was repeated in early Apple advertisements for the time[vii] and even informed the Macintosh project codename[viii].
Jobs articulated a few key concepts.
Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, you programmed a computer if you bought one. That was why early personal computer owners in the UK went on to birth a thriving games software industry, including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac; my Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
At the time Jobs originally did the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea for something greater.
Jobs' idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn't neatly fit into the intelligence-per-watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs' Apple owed a lot to the vision shown in Doug Engelbart's 'Mother of All Demos'[x].
Networks
Work took a leap forward with office networked computers, pioneered by Apple's Macintosh Office[xi] and soon overtaken by competitors. Networking facilitated workflow within an office, and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.
At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].
In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repairmen to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.
Otherwise messages were taken by a receptionist in their office, if they had one, or more likely by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.
Fuzzy logic
The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.
Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association's Tokyo office told the Washington Post[xv]:
“Some of the fuzzy concepts may be valid in the U.S.,”
“The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”
“But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “
By the end of the 1990s, fuzzy logic was embedded in various consumer devices:
Air-conditioner units – understood the room, the temperature difference inside and out, and the humidity, then switched on and off to balance cooling and energy efficiency.
CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
Dishwashers – understood how many dishes were loaded and their type of dirt, then adjusted the wash programme.
Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer was sitting from the set.
Vacuum cleaners – adjusted suction power as they moved from carpeted to hard floors.
Video cameras – compensated for the movement of the camera to reduce blurred images.
Fuzzy logic sold on the benefits and concealed the technology from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple dedicated purposes, it could rely on small, low-power specialist chips[xvi], offering a reasonable amount of intelligence per watt some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached 'peak intelligence' for what they needed to do, based on the power of fuzzy logic[xvii].
Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail.
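To make the mechanism concrete, here is a minimal sketch of a fuzzy controller in Python. The membership functions, rule set and power levels are invented for illustration; they are not taken from any real appliance.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_c):
    """Map pot temperature to a heater power level (0.0-1.0).

    Three hypothetical rules of the rice-cooker kind:
      IF temp is cold THEN power is high
      IF temp is warm THEN power is medium
      IF temp is hot  THEN power is low
    """
    # Fuzzify: how strongly does the temperature belong to each set?
    cold = tri(temp_c, -40, 20, 70)
    warm = tri(temp_c, 50, 80, 100)
    hot = tri(temp_c, 90, 105, 140)
    # Defuzzify: weighted average of each rule's output power level.
    weights = [cold, warm, hot]
    levels = [1.0, 0.5, 0.1]  # high, medium, low power
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, levels)) / total if total else 0.0
```

Because the sets overlap, the output changes smoothly as conditions change – the quality that let these controllers react "in a similar way to people" on tiny, low-power chips.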
Decision support systems & AI in business
Decision support systems or Business Information Systems were being used in large corporates by the early 1990s. The techniques used were varied, but some used rules-based systems. These were used in at least some capacity to reduce manual office work tasks. For instance, credit card approvals were processed based on rules that included various factors, including credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort, involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.
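As a sketch of how such a rules-based approval flow worked in principle – the thresholds and outcomes below are hypothetical, not taken from any real credit card provider:

```python
def approve_card(credit_score, annual_income, existing_defaults):
    """Toy rules-based credit card decision of the early-1990s kind.

    All thresholds are illustrative. Note the brittleness: every rule
    had to be hand-written and maintained for each new use case.
    """
    if existing_defaults > 0:
        return "decline"
    if credit_score >= 700 and annual_income >= 30_000:
        return "approve"
    if credit_score >= 600:
        return "refer"  # route to a human analyst for manual review
    return "decline"
```

The "refer" branch captures the point in the text that only some providers added a manual review step on top of the automated rules.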
Three decades on, IBM had a similar problem with its Watson offerings, with a particularly high-profile failure in mission-critical healthcare applications[xviii]. Secondly, a lot of tasks were ad hoc in nature, or might require transposing data across disparate systems.
The rise of the web
The web changed everything. Its underlying technology allowed for dynamic data rather than static pages.
Software agents
Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user’s computer. The software client monitored price changes for products that the customer was interested in buying. The app then notified the user when the monitored price reached a price determined by the customer. The company became known as DealTime in the US and UK, or Evenbetter.com in Germany[xix].
The PapriCom client app was part of a wider set of technologies known as 'push technology', which brought content that the netizen would want directly to their computer, in a similar way to mobile app notifications now.
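A minimal sketch of what a price-watching agent like PapriCom's client did in principle – the function names and data shapes here are my own illustration, not PapriCom's actual design:

```python
def check_prices(watchlist, fetch_price):
    """Return alerts for watched products whose price has hit the target.

    watchlist: {product_id: target_price} set by the customer.
    fetch_price: callable returning the current price for a product –
    a stand-in for the network lookup the real client performed.
    """
    alerts = []
    for product, target in watchlist.items():
        price = fetch_price(product)
        if price <= target:
            # In the real client this would trigger a desktop notification.
            alerts.append((product, price))
    return alerts
```

Run periodically, this is the whole pattern: monitor a condition on the network, notify the user when it is met – the same shape as a modern push notification.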
Web search
The wealth of information quickly outstripped netizens' ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the workload of powering web search amongst them. As search engines tried to make sense of an exponentially growing web, machine learning became part of the developer toolbox.
Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].
All the major search engines looked at how deep learning could help improve search results relevance.
Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google's success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built from optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating in clusters.
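A back-of-envelope calculation shows why energy per query mattered at search-engine scale. All of the numbers below are illustrative, not published Google figures:

```python
def annual_energy_cost(queries_per_day, joules_per_query, price_per_kwh):
    """Rough yearly electricity bill for serving search queries.

    Inputs are hypothetical; the point is the linear relationship:
    halve the joules per query and you halve the bill.
    """
    joules_per_year = queries_per_day * 365 * joules_per_query
    kwh_per_year = joules_per_year / 3.6e6  # 1 kWh = 3.6 million joules
    return kwh_per_year * price_per_kwh

# At a billion queries a day, 1,000 J per query and $0.10 per kWh,
# the bill runs to roughly $10 million a year for compute energy alone.
```

At that scale, every efficiency gained in silicon, cooling or software drops straight to the bottom line – which is what made intelligence per watt a business metric, not just an engineering one.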
General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products, used for everything from embedded applications to early mobile and portable devices and computers, drove progress in improving intelligence per watt.
Makimoto’s tsunami back to specialised ICs
When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, Makimoto’s predicted phase was stuck in ‘standardisation’. It just worked. But several factors drove the swing back to specialised ICs.
Lithography processes got harder: standardisation got its performance and intelligence-per-watt bump because there had been steady step changes in foundry lithography processes that allowed components to be made at ever-smaller dimensions. Those dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed the development of key technologies that allow EUV photolithography[xxv]. During this time the Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy's research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work, and once it was in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that the production processes improving IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses, and alongside them fewer leading-edge foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world's largest 'pure-play' foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
Progress in EDA (electronic design automation): production process improvements in IC manufacture allowed for an explosion in device complexity, as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to the idea of very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on 'blocks' of proprietary designs like ARM's cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software, to help improve intelligence per watt[xxx].
Increased focus on portable devices: a combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones, to laptop and tablet computers – disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in reality, it would do a month or so of work. Laptops at the time could do half a day's work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors; during the announcement at the company's worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].
Apple's first in-house designed IC, the A4 processor, launched in 2010, marked the pivot of Makimoto's wave back towards specialised processor design[xxxiii] and a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].
New devices also meant new use cases that melded data on the web, on the device, and in the real world. I started to see this in action working at Yahoo!, with location data integrated onto photos and social data like Yahoo! Research's ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series 'multimedia computers' (what we'd now call smartphones), starting with the Nokia N73[xxxv]. A year later, the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson's speculative fiction novel Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].
QR codes in the real world helped connect online services with physical things, for uses such as mobile payments, or reading content online like a restaurant menu or a property listing[xxxvii].
I labelled this web-world integration a 'web-of-no-web'[xxxviii] when I presented on it back in 2008, as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline-to-online), and Kevin Kelly articulated a future vision of this fusion which he called Mirrorworld[xl].
Deep learning boom
Even as there was a post-LISP-machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s, due to the large amount of computing power deep learning needed. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search, and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].
A key factor in deep learning's adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build Large Language Models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres, with less thought given to the electrical infrastructure required[xliii].
Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later OpenAI built on Google’s work with the generative pre-trained transformer model better known as GPT[xlvi].
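The core of the transformer's attention mechanism can be sketched in a few lines. The (n × n) score matrix it builds is why compute and memory grow quadratically with sequence length, which is what makes the architecture so computationally intensive:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the building block of the transformer.

    Q, K, V: (n, d) arrays for a sequence of n tokens. The pairwise
    (n x n) score matrix is the quadratic cost referred to in the text.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n, n) similarity of every token pair
    # Softmax each row into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # (n, d) weighted mixture of the values
```

Doubling the sequence length quadruples the score matrix, which is why serving long-context models consumes so much compute and memory per query.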
Since 2018 we've seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations for building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].
These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].
The current direction of development in generative AI services is raw computing power, rather than a more energy-efficient focus on intelligence per watt.
Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].
| Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom processing chips bought |
| --- | --- | --- | --- |
| Amazon | 196,000 | – | 1,300,000 |
| Alphabet (Google) | 169,000 | – | 1,500,000 |
| ByteDance | 230,000 | – | – |
| Meta | 224,000 | 173,000 | 1,500,000 |
| Microsoft | 485,000 | 96,000 | 200,000 |
| Tencent | 230,000 | – | – |
These numbers provide an indication of the massive deployment of GPT-specific computing power. Despite the massive amount of computing power available, services still weren't able to cope[l], mirroring some of the service problems experienced by early web users[li] and Twitter's 'fail whale'[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].
There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks.
Agentic AI
Thinking on software agents goes back to work done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the 'Knowledge Navigator'[lviii] in 1987, which hinted at autonomous software agents. What we'd now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.
A classic example was Wildfire Communications, Inc., who created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005, due to an apparent decline in subscribers[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple's Siri. Wildfire did have limitations: it was an off-device service accessed via a phone call rather than an internet connection, which limited its use to Orange mobile subscribers on early digital cellular networks.
Almost a quarter of a century later, we're now seeing devices that look to go beyond Wildfire, with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv], and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end are Boeing's MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany crewed combat aircraft into battle, once it has been directed to follow a course of action by its human colleague in another plane.
The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size of computing resources and their power is a trade-off with the amount of fuel or payload it will carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.
As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].
This is still a long way from the vision of completely local execution of 'agentic AI' on a mobile device, because intelligence per watt hasn't scaled down to the level needed to be useful, given the vast range of possible uses that would be asked of an agentic AI model.
Maximising intelligence per watt
There are three broad approaches to maximise the intelligence per watt of an AI model.
Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs), and their vendors have been building large data centres to run the models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim.
Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril's Copperhead underwater drone family[lxxiv].
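A quick worked example of why the 'optimise' approach pays off, using weight quantisation as one well-known technique. The 70-billion-parameter figure is hypothetical, chosen only to make the arithmetic legible:

```python
def model_memory_gb(n_params, bits_per_weight):
    """Approximate weight-memory footprint of a model at a given precision.

    n_params: number of model parameters (hypothetical here).
    bits_per_weight: numeric precision the weights are stored at.
    """
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

# A hypothetical 70-billion-parameter model needs ~140 GB of weight
# memory at 16-bit precision, but only ~35 GB quantised to 4-bit –
# the difference between a data-centre rack and a high-end workstation.
```

Less memory moved per inference also means less energy per answer, which is the intelligence-per-watt gain the smaller players are chasing.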
Even if an AI model can do something, should the model be asked to do so?
We have seen a clear direction of travel over the decades towards more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows models to be run locally.
Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This works for general everyday tasks with no information risk, for instance helping you complete a newspaper crossword when there is an answer you are stuck on and the intellectual struggle has gone nowhere.
A private cloud AI service would make sense when working, accessing or processing data held on the service. Examples of this would be Google’s Vertex AI offering[lxxv].
On-device AI models make sense in working with one’s personal private details such as family photographs, health information or accessing apps within your device. Apps like Strava which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. ***I am using Strava as an example because it is popular and widely-known, not because it is a bad app per se.***
While businesses have the capability and resources to maintain a multi-layered security infrastructure to protect their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don't have the same security. (As I write this, privacy concerns[lxxxii] have been expressed about Waymo's autonomous taxis.) However, an individual's mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models down inside a laptop, tablet or smartphone?
I am fortunate to be an awards juror for the second time. This is for the Adforum PHNX advertising awards which attracts entries from around the world.
As part of the process, I responded to some interview questions. I hope that my old gaffer Tony Gresty appreciates my quoting him decades later; I was surprised that there wasn't pushback about my assertion of a 'post-social' marketing era.
What motivates you to be part of the PHNX Jury, and what do you hope to bring to the judging process?
Before I worked in advertising, I served an apprenticeship in plant process engineering. My old gaffer, who was responsible for me, had a few sayings, one of which was 'practice sharpens skill'. By being a judge, I hope that I am helping people within the industry around the world to sharpen their skill. Seeing great, challenging work and asking myself how it fits the customer and client needs in turn helps further sharpen my skill as a strategist. TL;DR (too long; didn't read): altruistic generosity?
PHNX has always been about celebrating creativity in all its forms. What new perspectives or disciplines do you think deserve more recognition in award shows today?
Strategy has the Effies, but their focus is broader than creativity, with a major emphasis on efficiency and effectiveness rather than the creative process. Strategy always provides the 'assist' with a single-minded insight and creative JTBD (job to be done), but is never the 'goal scorer', to use an American sports stats metaphor. I think that creativity is not only about the creative, but also the context where the creative is placed – which brings in the disciplines of project management's orchestration, production's craft and media planning. I think this is going to become far more important as we go towards a post-social era.
Which countries or regions do you think are leading the creative field right now? And which emerging markets should we look out for?
A really interesting question. Leading is probably the wrong phrase to use, but there are markets that are under-estimated. Thailand and the Philippines have well-deserved reputations for emotional storytelling. Year after year, when I look at lunar new year adverts, Malaysia punches well above its weight given the size of the market.
Japan has been consistently delighting advertising folk for the past five decades.
Probably a better question to ask me after I have been through this year’s award entries.
What trends or cultural shifts do you think will define the most impactful creative work this year?
With everything that's going on, I think we'll need more humour. Trends within the advertising industry are also leaning towards a better mix of formats. As an industry we over-index on social versus attention, efficiency and effectiveness for large brands, so we've seen a renaissance in OOH amongst other formats.
There is also a return to basics: creative consistency, fluent devices, the power of storytelling and humour. Finally, consumers are more interested in consuming longer-form audio and video content, so what a creative execution might look like will, I hope, be very different.
If you could give one piece of advice to agencies and creatives submitting their work, what would it be?
Be single-minded in terms of category consideration. My biggest criticism of last year's PHNX award entries was not about the quality of the work per se, but that many entries put a given creative in for consideration in the wrong category – and it was the same entries doing it over and over again. If it isn't relevant, it's just going to get ignored, or get under the skin of the judges, in the same way that a poorly-placed ad slapped all over the place without consideration would in the real world.
Rant over. I wish everyone the best of luck, and finally, don't be disheartened: all of the work was of a high standard, and choosing winners is hard.
Which creative minds are inspiring you the most right now?
In the widest creative sense, I am working my way through veteran Hong Kong film director Johnnie To's back catalogue; some of his works, like Breaking News, feel exceptionally contemporary given our media environment. A couple of creatives using AI in a really smart way are Omar Karim aka @arthur_chance on Instagram and the Dor Brothers. Agency work-wise, there's VCCP's immersive installation for Transport for London, and a small Malaysian shop called Days Studios (whose bread-and-butter work is usually weddings!) did a fantastic job producing a Chinese New Year ad for a cosmetic treatment clinic, Aglow Studio – not what you'd call a big client, yet it felt like a bigger production than many large brands'. Go Google the ad 'hiss of prosperity' and watch it on YouTube.