Singapore filmmakers OGS put together an hour-long documentary covering the last days of the Golden Mile complex. It was a mixed-use development similar to the Cityplaza development that I lived in when I was in Hong Kong. The upper levels were apartments above a shopping centre. The complex was a fantastic brutalist design that served as a hub for the Thai expat community in Singapore. While the site will be redeveloped, rebuilding the community centred around the Golden Mile will be much harder to do.
Google Docs
In this interview, Tim O’Reilly pulls together the best oral history of Google Docs that I have heard. O’Reilly was the right person to do the interview, given that he was at the centre of web 2.0 – hosting the conference that defined the scene and even publishing the books that provided guidance on the relevant developer tools and programming languages.
Toyota Series 60 Land Cruiser
This is from a great Japanese YouTube channel that interviews local car owners, asking them about the vehicle they drive, why they drive it and what they like about it. The test drives and the details of each vehicle are amazing. This episode is about the Series 60 Toyota Land Cruiser – the point at which Toyota moved from small Jeep-like vehicles to a highly reliable, full-size SUV.
Hentai Land
A Japanese documentary showing the interface between cosplay and the manga art form. The documentary shows manga artists painting models as a form of performance art.
Play School
The BBC told the backstory of Play School, from the actors to the educational theory behind the seminal British children’s TV programme. It ran from 1964 to 1988 and went on to influence other shows like RTÉ’s Bosco.
A mix of technology and economics in the 1980s saw consumers spending much more time in a world of their own at home. Driven by the video recorder and cable television in the US, this shift sank traditional cinemas and encouraged the film industry to fight back with the multiplex. We became more depraved as porn moved from grubby cinemas to the living room. Sociologists and marketers called this phenomenon cocooning. In reality cocooning never went away; it evolved. For cocooning 2.0, instead of NICAM, VHS and Betamax you have DVD, DSL, social networks and BitTorrent.
I think that technology has extended the unreality bubble that cocooning creates from the home to the move: consumers can now screen themselves from society, interacting through the screen rather than face-to-face, wherever they are. What got me thinking about this was a conversation with my Dad.
My Dad has been driving since the early 1960s. In his first (and only) new car – having a family screws your disposable income – he used to have an assortment of petrol company maps and an Esso tiger soft toy, which he hacked so that its eyes flashed in time with the indicator lights. My childhood and young adult experience in a car was with jagged mosaics of petrol company maps that I tried to fold neatly and organise in his cars, until I got tired of the whole process and introduced him to the AA’s series of atlases. It’s harder to make a mess with a well-bound book.
My Dad is finally selling the ‘rave machine’ (a Volkswagen LT35 turbo diesel camper van which served me well as a comfortable base for diving and for going to raves and free festivals). I was clearing out a mass of petrol company maps that he no longer used and putting them all in the recycling bin. He didn’t want to move the maps over to his present car, since he is the proud owner of a GPS navigation unit that my Mam had picked up for him from discount supermarket Aldi. I realised that the amount of interaction that service station staff and the local populace would have with my Dad would be hugely reduced now that he had gone digital: he paid for his diesel at the pump with a built-in card reader and no longer ventured into the office unless he wanted to feed his Fisherman’s Friend addiction. Nor would there be that kitchen sink drama played out as he and my Mam argued about asking a local person for directions.
My Dad’s move to a GPS device made me realise that we cocoon ourselves from society around us on the move as well as in our home:
How many people use their mobile phone or check their email whilst waiting for a friend in the pub?
How many people use an iPod or play a game on their mobile phone when on public transport? It is interesting that the first Walkmans came with two earphone sockets for shared listening, whereas the design of Apple’s earbuds would make you think twice about sharing your listening experience with anyone who wasn’t really close to you.
How many people have used Google Maps on their phone rather than asking someone for directions?
Iceland and other supermarkets have been delivering groceries for a while, meaning that you don’t have to interact with your local community in the shop any more
Apart from having an online supermarket shop, I have done all the activities listed above. And that’s mainly because my fridge freezer is a bit small rather than any great desire to chat with my neighbours over the tops of our wire supermarket trolleys. My smartphone goes beyond location-based services to provide me with a location-based reality bubble.
Cocooning 2.0 is about shutting out society in the real world even as we interact more closely with our trusted communities online. Thinking about some of the utopian ideas that I heard from Castells speaking the other week, I am curious about what this means for civil society in the longer term as we become disconnected and unable to relate.
I am sure that entrepreneurs will already be thinking about the future market potential for ‘authentic’ experiences, even as we wilfully ignore them on our front doorstep. When you think about your local independent coffee shop, Starbucks or private members’ clubs like The Hospital, isn’t part of their attraction that they provide an ‘authentic’ community experience?
I started to think about the internet of heavier things after I spent a bit of time with my Dad. We talked about work, engineering stuff in general and technology in general.
My Dad has a pragmatic approach to technology: it’s OK so long as it meets three distinct criteria:
It’s useful
It’s efficient in what it does and how you use it
It doesn’t get in the way of product serviceability
The last point is probably the one we tend to think about least, but my Dad considers it because he is a time-served mechanical fitter. Just prior to Christmas one of the gears went in my parents’ Singer sewing machine. The machine has been in the family for about 50 years. I managed to buy the relevant cog from a website for just under a tenner.
Contrast this with most electronic goods where you tend not to be able to replace products at a component level. Even if you did, trying to find 50 year old standard catalogue processors, let alone a custom ASIC (application specific integrated circuit) would be a thankless task.
We got to talking about a concept I had read in EE Times earlier that month: the internet of heavier things (IoHT). IoHT basically means wiring up, or making smart, fixed infrastructure and machinery. Venture capital firm KPCB thinks that the IoHT will generate $14.2 trillion of global output by 2030.
Boosters like KPCB think that this opportunity revolves around a number of use cases:
Being able to flag up when preventative servicing is required. For a lot of manufacturing machinery, companies like Foxboro Instruments (later Invensys Foxboro, now Foxboro by Schneider Electric) had been doing this prior to the widespread implementation of TCP/IP network protocols; it is the bread and butter of SCADA systems. But it could also be bridges and viaducts indicating that they need work done
MRI machines and other medical equipment that are financed on a per scan unit rather than as a capital cost. Basically extending the enterprise photocopier model into capital equipment expenditure
Machinery that is continuously re-designed based on user feedback
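The first use case above – flagging assets for preventative servicing – can be sketched as a minimal condition-monitoring check. This is only an illustrative toy, not how a real SCADA system works; the metric names, threshold values and readings are all hypothetical:

```python
# Minimal sketch of threshold-based preventative-maintenance flagging.
# All metric names, thresholds and readings are hypothetical examples.

THRESHOLDS = {
    "bearing_vibration_mm_s": 7.1,  # illustrative vibration alarm limit
    "motor_temp_c": 85.0,           # illustrative winding temperature limit
}

def service_flags(readings: dict) -> list:
    """Return the metrics whose readings exceed their service threshold."""
    return [
        metric for metric, value in readings.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

# A pump whose vibration has crept past the limit gets flagged for the
# next planned shutdown rather than triggering an immediate stop.
readings = {"bearing_vibration_mm_s": 7.8, "motor_temp_c": 62.0}
print(service_flags(readings))  # ['bearing_vibration_mm_s']
```

The point of the sketch is that the flag informs scheduled maintenance; as the discussion below notes, in a continuous process plant you cannot simply act on it immediately.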
Kicking it around with my Dad got some interesting answers:
Flagging up items for servicing was seen as a positive thing; however, how would this work with the reality of life in a manufacturing plant? Take a continuous process, say an oil refinery or a food production line, where the whole line needs to be shut down to enact changes. That is why maintenance is scheduled well in advance, on an annual or semi-annual basis. The process needs to take into account the whole supply chain beyond the factory, and both shutdown and start-up are likely to be complex undertakings. When I worked in the petrochemical industry before going to college, the planning process for a shutdown took six to nine months. Secondly, there was redundancy built into some of the plant, so certain things that might need to be taken offline on a regular basis could be. A further consideration is that plants are often not off-the-peg but require a good deal of tailoring to the site. Plants generally aren’t all new either; there is a thriving market in pre-owned equipment. In the places I worked this included equipment such as pressure vessels, electric motors and valves – all of which would have implications for interoperability.
Lastly, what would be the implications when the ethereal nature of the technology underpinning the internet of heavier things met infrastructure that has a realistic life of a hundred-plus years, in the case of bridges or buildings?
Looking at the defence industry, we can see how maintenance costs and upgrading technology drive much of the spending on weapons systems – a bridge will generally last longer than a B-52 bomber or a Hercules transport plane (both are 60-year-old systems).
Financing on a per-use unit cost was discussed less. The general consensus was that this could dampen innovation, as the likes of GE Medical would become finance houses rather than health technology companies, in a similar direction to Xerox or early-21st-century Sony.
Machinery that is continually redesigned on user feedback sparked a mix of concern and derision from my Dad. It seemed to be based on a premise that products aren’t evolved already – they are changed. The pace of change is a compromise between user feedback, component supply issues and backward serviceability. Moving to an ‘always beta’ model like consumer software development could have a negative impact on product quality, safety and product life due to issues with serviceability of equipment.
I ended up giving a lot of thought to the concept of technology adoption and what it really means. I have been spending a bit of time with the family over the Christmas period as the Carroll family CTO, and reading some of the statistics out there about technology adoption got me thinking.
In my role as family CTO I had my work cut out for me. My first task on Christmas morning was to recover their Apple ID so that the iPad could be used effectively.
Their mobile communications needs pose a far thornier problem, and I have given some thought to my parents and their battered feature phones.
The problem I have is that it’s getting increasingly difficult to get them the kind of phone that they want:
Focused on voice
Really simple-to-use SMS
Good haptic feedback (just like real buttons provide)
Something that can be easily locked
Something that can be obtained SIM-free
Something that is physically robust
Something that I can troubleshoot easily
It is a tough call, and I have been down this route before. I gave them my old Palm Treo 650 a number of years ago; it got them thinking about digital photography, but it failed as a phone. Its failures were:
Being too complicated
Providing too many choices
Having too confusing a keyboard
The software was also buggy as hell, but I could troubleshoot any problems they had from my memory of using it a few years before they got their hands on it. The Treo 650 eventually gave up the ghost as the family digital camera, to be replaced by the iPad. My friends who have managed to get their parents using Weixin/WeChat on a mobile phone are not particularly good case studies for what I need to do. There is an absolute unwillingness to have phones with a data package: it is hard for my parents to understand the vagaries of mobile phone company tariffs, and email is something that they can pick up at home. They never hit the wall on their data allowance from their ISP, so it never occurs to them as a consideration.
There is also something about the iPad which means it is accepted as something different from a complex smartphone, despite the similar pictures-under-glass interface.
Instead a market stall provided a Samsung feature phone with a late Series 40-esque interface which pushes the envelope in terms of my Dad’s comfort level using it. Meanwhile my Mum soldiers on with an old Nokia. My immediate gut reaction is to go to eBay and pick up something like the Nokia 225 or a Samsung Solid Immerse GT-B2710 for the both of them.
I know other people who have faced similar conundrums and have gone with a Windows Phone (it fails my spec because I wouldn’t be able to troubleshoot it for them), but the tiled front page presents what could be a senior-friendly experience in their eyes. The shy and retiring Tomi Ahonen got hold of some Nokia data on phone activations and was both astonished and angry: roughly a third of Nokia Lumia phones which went out from the factories were never activated. His theory was that a combination of high handset failure rates, unsold inventory from the messy switch-over to Windows Phone 8 and possible channel stuffing might be involved.
I don’t know what might justify a 26 million handset shortfall, but I could imagine that an appreciable number of them might be due to people using a smartphone as a feature phone – not having a data plan, being perfectly happy for a phone to be a phone. Is a smartphone still a smartphone if it’s used as a feature phone?
Extending this analogy further, a large number of ‘smart TVs’ are now being sold and touted as the new, new thing in terms of internet eyeballs. Web TV isn’t particularly new as an idea; combining the web with a TV format has been going on since at least the mid-1990s, when Steve Perlman founded what would later become MSN TV.
We know that a large number of homes are buying TVs that are smart, but how do they use them? Are they just using them for the delivery of Apple TV-like services – a cable box over IP – or are they doing the ‘lean forward’ activity one would expect of a smart TV, like email, Facebook updates and the like?
I suspect most smart TVs are video delivery mechanisms and that’s pretty much it – are they then really smart? All of this may sound like semantics, but it could feed into the decisions of advertisers in terms of platforms and creative execution. It is also likely to feed back into product management in the consumer electronics sector, where TV makers enjoy (if that’s the right word) razor-thin margins.
From an information security point of view, how would you explain to smart TV owners with ‘dumb TV’ usage patterns that their set may be at risk of being hacked, and that they should spend money to protect themselves? A worst-case scenario might be a bot army of Sony Bravia (or other manufacturers’) TVs that never gets shut down because of consumer apathy towards the perceived security risk.
Marc Andreessen’s 2026 AI outlook was published by A16z. As one of the leading funders of Silicon Valley startups, his world view matters.
I’ve gone through and contrasted his 2026 AI outlook with the viewpoint I researched and wrote up. The core point where we differ: Andreessen sees the dawn of a new, unbound industrial revolution, whereas I think that the upside he sees is hard to navigate to, and also risks echoing financial bubbles past in many of the high-profile pure-play AI companies like OpenAI or C3.ai.
Andreessen has been the quintessential techno-optimist for at least the past two decades with his emblematic essay Why Software Is Eating the World. His worldview is of an industry where demand is insatiable and revenue is undeniably real.
Like Andreessen in his 2026 AI outlook, I would agree that players like Alphabet, Amazon and Microsoft are making money to variable degrees from their AI offerings as part of cloud computing and productivity software bundles.
But in my view there are two aspects of concern:
Pure plays, with the notable exception of Anthropic, don’t seem to have a clear path to profitability in a time period that would match the future revenues implied by their loans and valuations.
Hyperscalers like Meta and Microsoft are using unusual partnerships to fund their infrastructure and have been inconsistent over how fast they would depreciate their AI computing hardware.
Both Mr Andreessen and myself agree that we are witnessing a technological shift of seismic proportions, arguably, “bigger than the internet” as he put it.
I think that change will happen more slowly in the short term, due to an economic sandbox acting as a rate limiter on infrastructure and the labour efficiencies derived from it – in particular, the economic viability of the infrastructure being built to support it.
The Nature of the Boom: Real Revenue or Irrational Exuberance?
A key question is the solidity of the current market.
For Andreessen, the AI boom is not a speculation fuelled exercise but a demand-driven reality. He argues that unlike the early internet, which required years of physical infrastructure build-out before finding business models, AI is generating cash immediately.
“This new wave of AI companies is growing revenue like… actual customer revenue, actual demand translated through to dollars showing up in bank accounts at like an absolutely unprecedented takeoff rate.”
Andreessen points to “revealed preferences”: what people do is more revealing than what they say when asked by market researchers. While the US and European publics express fear of job losses, they are simultaneously adopting AI tools at a fast pace.
My “Dot LLM Era” report, however, suggests this revenue may be dwarfed by the capital expenditure required to generate it.
It posits that the sector faces a “self-defeating economic” cycle. The report highlights that the current valuations of the “Magnificent 10” tech giants (including Nvidia, Microsoft, and Alphabet) imply a forward price-to-earnings (P/E) ratio of 35 times. This is alarmingly close to the S&P 500’s P/E ratio at the peak of the dot-com boom, which approached 33.
In the report I warn that current valuations assume LLMs will drive revenue growth by $1–4 trillion in the next two years.
If this growth is achieved through massive job automation, the resulting unemployment could depress the very economy needed to sustain these companies. Like the 2008 financial crisis, this would invite various forms of regulatory intervention.
This implies a sweet spot between the speed of productivity gains and the efficiencies, in terms of job losses, realised by clients – and it acts as a brake on the velocity of adoption.
The Infrastructure Trap: The Amortisation Crisis
Perhaps the most critical technical divergence between our viewpoints concerns the hardware powering this revolution.
The “Dot LLM Era” report introduces a chilling concept: Amortisation Risk. It draws a comparison to the “telecoms bubble” of the late 90s, where companies laid massive amounts of fibre optic cable.
The Telecoms Analogy: Fibre optic cable had a useful life of over a decade, meaning even after the companies went bust (like WorldCom), the infrastructure remained useful for Web 2.0.
The AI Reality: Modern AI infrastructure relies on GPUs and TPUs. These processors have a useful life of only 3 to 5 years before becoming technically obsolete.
The report argues that if an AI bust occurs, the hardware will not be waiting for a resurgence; it will be electronic waste.
Hyperscalers are currently lengthening the assumed useful lives of this hardware in their financial filings—Google to 6 years, Meta to 5 years—which the report suggests may artificially overstate profits and adversely affect competitors with debt securitised against AI data centre hardware.
The rapid obsolescence of chips represents a “financial and technological amortisation risk” that could lead to trillions in write-offs, similar to the $180 billion loss left by WorldCom.
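The accounting mechanics behind that amortisation risk are straight-line depreciation: annual expense equals asset cost divided by assumed useful life. A rough sketch of how stretching the life assumption flatters reported profits, using an entirely hypothetical $100bn GPU fleet rather than any company’s actual figures:

```python
# Straight-line depreciation: annual expense = cost / assumed useful life.
# The $100bn fleet cost is a hypothetical round number for illustration.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Annual straight-line depreciation charge for an asset."""
    return cost / useful_life_years

fleet_cost = 100e9  # hypothetical GPU fleet

three_year = annual_depreciation(fleet_cost, 3)  # ~$33.3bn expensed per year
six_year = annual_depreciation(fleet_cost, 6)    # ~$16.7bn expensed per year

# Stretching the assumed life from 3 to 6 years halves the annual charge,
# lifting reported profit by the difference - with no change to the hardware.
print(f"Reported profit uplift: ${(three_year - six_year) / 1e9:.1f}bn/year")
```

The same arithmetic run in reverse is the write-off risk: if the kit really is obsolete in three years, the deferred expense eventually lands all at once.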
Andreessen views this hardware cycle through a different lens: Elasticity.
He acknowledges that massive investment leads to gluts, but argues that “the number one cause of a glut is a shortage”. He believes the price of AI compute is falling “much faster than Moore’s Law”.
As prices collapse, demand will expand exponentially. If chips become cheap and plentiful, AI can be embedded into everything, moving from massive “God models” in data centres down to small models running on local devices.
Geopolitics: The US-China Race
Both Andreessen and I agree that the AI landscape is a bipolar contest between the United States and China, but our assessments of the leaderboard differ.
Andreessen frames this as a “new Cold War” where the US must maintain dominance. He is encouraged by the “DeepSeek moment”, referring to a powerful open-source model released by a Chinese hedge fund. To him, this proves that catching up is possible and that the US cannot rest on its laurels. He advocates for open source as a way to proliferate American standards globally.
I offer a more sobering assessment of American pre-eminence. Unlike the 1990s “Long Boom”, where the US was the undisputed hegemon, the current era is defined by high debt and strong competition. From an economic perspective, the US may be on the Soviet side of a Reagan-era ‘Strategic Defense Initiative (aka Star Wars)’-style AI race.
Alibaba’s Qwen model claims to deliver comparable performance to American models while requiring 82% fewer Nvidia processors to run.
China doesn’t have an electrical power crunch in the way that western data centres do, due to its sustained investment in coal, nuclear, gas and renewable power sources.
Even Silicon Valley investors and major companies like Airbnb are opting for Chinese open-source models because they are “way more performant and much cheaper”.
Andreessen worries about regulation stifling US innovation; I think that the real geopolitical threats are:
China’s ability to operate largely immune to US sanctions, smuggling chips and utilising data centres in neutral geographies like Malaysia.
China’s speed at propagating the use of AI within its own populace, driving utility at a lower cost per token than their US competitors with state regulation defining trial ‘sand pits’. This will drive new uses faster in China and in China’s client states across the global south.
The Outcome: Transformation or “Minsky Moment”?
Where is this all heading? Andreessen is betting on a future where AI becomes as ubiquitous and essential as electricity. He envisions a “pyramid” structure: a few massive “God models” at the top, cascading down to billions of specialised, cheap models running on edge devices. He admits these are “trillion-dollar questions,” but his firm is aggressively investing in every viable strategy.
I think that LLMs will make a sustained technological impact, but it will be limited in velocity by economic boundaries. In the “Dot LLM Era” report, I outlined seven potential scenarios, ranging from total transformation to total collapse. Currently, it assigns a ~95% likelihood to the “Moral Hazard” scenario.
The Moral Hazard: This scenario posits that major AI players will be considered “too big to fail” due to national security imperatives. Governments will step in with loan guarantees and subsidies to backstop the massive infrastructure debts, effectively nationalising the risk – rather like the banks during the 2008 financial crisis.
The Telecoms Bust: With a ~75% likelihood, the report fears a “Minsky Moment”, a sudden market collapse driven by the realisation that cash flows cannot cover the massive debts incurred to build short-lived data centres.
Comparative Summary of Perspectives
| Feature | Marc Andreessen (the techno-optimist) | My own view (the economic limiting skeptic) |
|---|---|---|
| Current phase | Inning 1. “Biggest technological revolution of my life.” | Phase 1: the boom / the inflation of a bubble. |
| Revenue | Real, unprecedented, showing up in bank accounts. | Potentially illusory for at least some players; reliant on untenable cost savings. |
| Infrastructure | Shortages lead to gluts; cheap chips drive adoption. | Amortisation risk: hardware obsolescence in 3–5 years. |
| Pricing | Usage-based is great for startups; prices falling fast. | US dominance challenged; Chinese models are more efficient, and effective enough for organisations from Singapore to Silicon Valley to adopt them. |
| Outcome | Long-term ubiquity; widespread prosperity. | Technological change bounded by economic limitations; current high risk of a “Minsky Moment” or government bailouts. |
Conclusion: The Trillion-Dollar Questions
Marc Andreessen candidly admits that “companies… need to answer these questions and if they get the answers wrong, they’re really in trouble”. His firm’s strategy is to bet on everything: large models, small models, apps, and infrastructure. He is doing this on the assumption that the aggregate wave will lift all boats.
My ‘Dot LLM Era’ report offers a counterweight to this enthusiasm. A bubble decouples technological and financial progress. Technological utility does not always equal investor profit. As I note in ‘Dot LLM Era’, “Bubbles don’t kill technology from moving forwards”. The internet did change everything, but it also wiped out trillions in shareholder value along the way.
The defining question for the next five years is whether the demand for AI can grow fast enough to pay for the hardware before that hardware becomes obsolete.
If Andreessen is right, this elasticity of demand would save the day. If I am right, we may be heading for the most expensive recycling project in human history.
The Productivity Paradox and Society
A fascinating tension exists between Andreessen’s view of societal adoption and my own macroeconomic warnings regarding productivity and labour.
The “Wingman” Economy vs. The Phillips Curve
Andreessen describes a “symbiotic relationship” where AI acts as a productivity multiplier: a “wingman” for doctors, coders, and writers. He argues that higher pricing in SaaS can actually benefit the customer by funding better R&D, suggesting a cycle of value creation.
I argue that the wingman sweet spot is optimal but tricky to land. In the report I showed the risk through a darker macroeconomic “thought experiment” that used the Phillips Curve and Okun’s Law to model what happens if Andreessen’s AI scenario succeeds too well.
The Thought Experiment: If AI automation generates $1 trillion in cost savings through job cuts, it implies approximately 10.5 million unemployed US workers.
The Consequence: Such a spike in unemployment could trigger deflation and a massive drop in GDP. This is “self-defeating economics”: the hyperscalers need a healthy economy to consume their services, yet their success might undermine the investor, enterprise customer and consumer base.
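The arithmetic behind the thought experiment can be sketched in a few lines. The fully-loaded cost per worker, the labour force size and the Okun coefficient below are my own illustrative assumptions chosen to reproduce the ~10.5 million figure, not numbers taken from the report:

```python
# Back-of-envelope version of the Okun's Law thought experiment.
# All input figures are illustrative assumptions.

savings = 1e12              # $1tn of AI-driven cost savings via job cuts
cost_per_worker = 95_000    # assumed fully-loaded annual cost per US worker
labour_force = 168e6        # approximate US labour force

jobs_cut = savings / cost_per_worker               # ~10.5 million workers
unemployment_rise = jobs_cut / labour_force * 100  # percentage points

# Okun's Law rule of thumb: each 1pt of extra unemployment costs
# roughly 2% of GDP relative to potential.
okun_coefficient = 2.0
gdp_gap_pct = okun_coefficient * unemployment_rise

print(f"Jobs cut: {jobs_cut / 1e6:.1f}m")                 # ~10.5m
print(f"Unemployment rise: {unemployment_rise:.1f}pts")   # ~6.3pts
print(f"GDP below potential: ~{gdp_gap_pct:.1f}%")        # ~12.5%
```

Even with generous assumptions, a double-digit GDP gap is the kind of outcome that would swamp the original $1tn of savings – which is the self-defeating part.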
Andreessen counters this fear by citing historical context. He notes that the “Committee for the Triple Revolution” warned Lyndon B. Johnson in 1964 that automation would ruin the economy – a prediction that proved false. He believes AI will follow the path of electricity or the internet: initially terrifying, eventually indispensable.
Automation did displace a massive number of developed-world jobs, but it moved at a much slower pace than the one Andreessen predicts for AI. Electricity likewise rolled out slowly compared to the pace envisioned by AI’s champions.
The Open Source Debate
The role of open source is pivotal in both narratives.
For Andreessen, open source is the great accelerator. He marvels at how knowledge is proliferating: “Some of the best AI people in the world are like 22, 23, 24”. He views the leak of knowledge as inevitable and beneficial for US competitiveness, provided the US stays ahead.
I analysed the “Red Hat Analogue.” It suggests that in the dot-com era, open-source (Linux) won, but the companies that built the models (or distributions) mostly failed, with Red Hat being the notable exception.
I assigned a ~70-80% likelihood to the “Red Hat Model,” where pure-play LLM creators (like OpenAI or Anthropic) might struggle to justify their capital burn as open-source models like Meta’s Llama or Alibaba’s Qwen commoditise the intelligence.
We have already seen Singapore’s national AI programme drop Llama for Alibaba’s Qwen, reinforcing the idea that the value might accrue to those who service the models, not those who create them.
Final Thoughts
The divergence between Marc Andreessen and my own analysis is not about whether AI works – both of us would agree that the technology can be magical and transformative. The disagreement is about who pays for it, how that affects the velocity of AI, and who profits.
Andreessen sees a future of abundance where falling prices drive infinite demand.
My own view sees a future of financial reckoning shaping the 2026 AI outlook where shorter hardware lifespans and brutal competition erode margins, setting a slower pace at which we reach Andreessen’s abundance.
Andreessen’s viewpoint reminded me a lot of mid-20th-century aspirations for nuclear power, which offered a similar vision of electricity too cheap to meter. That was never close to being achieved, even in the likes of France – arguably the most passionate adopter.
As with the railway mania of the 1840s or the optical fibre boom of the 1990s, society may inherit a significant infrastructure, albeit one with a shorter lifespan, built on the ashes of investor capital.
Our differing views boil down to a question for the 2026 AI outlook: are we in the “boom” phase, or are we staring down the barrel of the “amortisation crisis”?
As Andreessen himself concluded, “These are trillion-dollar questions, not answers”.