Former MS privacy chief warned of NSA spying | Telecom Asia – The issue is that most privacy laws were drafted to cover communications, not computing. Technically it is possible to encrypt data and store it securely in the cloud, but that is not possible if one wants to compute with that data – NSA spying is just the canary in the coal mine. What is being done on NSA spying today is also likely being done by the Chinese MSS, the main European powers, the British and the Israelis. Swiftly following them will be the UAE and Saudi Arabia, followed by regimes across the world.
Groklaw – Forced Exposure ~pj – Groklaw has gone offline. It feels like the NSA is dismantling the whole of US geek culture. The bad news is that these are the kinds of people who gave the US its competitive advantage in cyberspace.
Stack « Mugi Yamamoto – I haven’t wanted a printer this badly since my old Apple StyleWriter II, which I used at college and which saved me from fighting for IT resources in the college computer labs.
Jing Daily: Gold iPhone debate on Weibo – To speak the truth, when I first saw the gold iPhone, my immediate reaction was, ‘This is too tacky for me.’ But after a few minutes, I calmed down a lot – every time Apple makes something, it always smashes people’s existing notions and broadens the world’s possibilities. – I guess everyone in Apple’s brand marketing can now die happy after that remark.
Mark Cerny: The Man Behind the PlayStation 4 | MIT Technology Review – I’ve had decades to get used to the increasing complexity of video games. But these days children learn how to play games on iPads and smartphones, which are buttonless. So we have a gulf between the beginner players and the blockbuster game players
LG GD910 Mobile Phone – LG Electronics UK – I remember seeing Iain Tait with one of these, which he got as a going-away gift from a client, before he went to W+K a number of years ago. I guess that’s why the smart watch will have a bigger reaction from others than from me. Don’t forget the Sony companion devices for Android phones, the Bluetooth-enabled G-Shocks and Microsoft SPOT either. Looking forward to the contextual device future.
My thinking on the concept of intelligence per watt started as bullets in my notebook. It was more of a timeline than anything else at first and provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt.
TL;DR (too long, didn’t read)
Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering.
Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.
Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.
A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits.
Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.
As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.
Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.
Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use.
While not a computer, an early example of component integration was a vacuum valve that combined several radio parts in one glass envelope. It had three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components sat inside their own glass tubes. Normally each triode would be inside its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.
Post-war scientific boom
Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas became possible when breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on one piece of silicon to create the IC.
Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define an IC, with what is now called a monolithic IC being considered a ‘true’ one. Kilby’s version wasn’t a true monolithic IC. As with most inventions, it is usually the child of several interconnected ideas that coalesce at a given point in time. In the case of ICs, it was happening in the midst of materials and technology developments including data storage and computational solutions, from the idea of virtual memory through to the first solar cells.
Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].
TTL (transistor–transistor logic) circuitry was invented at TRW in 1961, which licensed it out for use in data processing and communications – propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems still rely on the mainframe as a concept.
AI’s early steps
What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.
Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:
Reasoning as search – essentially a step-wise, trial-and-error approach to problem solving, compared at the time to wandering through a maze and backtracking when a dead end was found (a minimal sketch of the idea follows this list).
Natural language – where related phrases existed within a structured network.
Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer.
Single layer neural networks – to do rudimentary image recognition.
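To make the ‘reasoning as search’ idea above concrete, here is a minimal sketch in Python: depth-first search with backtracking over a tiny maze. The maze layout and state names are invented for illustration and aren’t drawn from any historical AI system.

```python
# A minimal sketch of 'reasoning as search': depth-first search with
# backtracking over a small state graph. The maze layout and goal are
# invented for illustration, not taken from any historical AI system.

MAZE = {                       # each state lists the states reachable from it
    "entrance": ["hall", "dead end A"],
    "hall": ["dead end B", "junction"],
    "junction": ["dead end C", "exit"],
    "dead end A": [],
    "dead end B": [],
    "dead end C": [],
    "exit": [],
}

def solve(state, goal, path=None):
    """Return a path of states from `state` to `goal`, or None if there isn't one."""
    path = (path or []) + [state]
    if state == goal:
        return path
    for next_state in MAZE[state]:
        if next_state in path:          # don't loop back on ourselves
            continue
        result = solve(next_state, goal, path)
        if result:                      # found a route via this branch
            return result
    return None                         # dead end - backtrack to the caller

print(solve("entrance", "exit"))
# ['entrance', 'hall', 'junction', 'exit']
```

The retreat from a dead end back to the last junction is the backtracking behaviour that early researchers likened to retracing your steps in a maze.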
By the early 1970s, AI researchers had run into a number of problems, some of which still plague the field to this day:
Symbolic AI wasn’t fit for purpose when solving many real-world tasks, such as crossing a crowded room.
Trying to capture imprecise concepts with precise language.
Commonsense knowledge was vast and difficult to encode.
Intractability – many problems require an exponential amount of computing time.
Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems.
By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, both the US and UK governments reduced funding to the major centres researching AI. Despite the reduction of funding to the major centres, work continued elsewhere.
Mini-computers and pocket calculators
ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first minicomputer, the PDP-8, in 1965. The lower cost of mini-computers allowed them to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.
A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments. It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.
LISP machines and PCs
AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to a point where they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines carried on and surpassed them as the cost of computing dropped due to mass production.
The rise and decline of LISP machines was not only down to Moore’s Law, but also to Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so, while Tsugio Makimoto originally observed 10-year pivots between standardised and customised semiconductor designs[v]. The rise of personal computing drove a pivot towards standardised architectures.
PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.
Personal computing started in businesses when office workers brought in a computer to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.
Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own.
Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.
A Bicycle for the Mind
Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across, in everything from briefing product teams to press interviews. A 1990 filmed interview with Steve Jobs articulates the context of the ‘bicycle for the mind’ saying particularly well.
In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].
The ‘bicycle for the mind’ concept was repeated in early Apple advertisements for the time[vii] and even informed the Macintosh project codename[viii].
Jobs articulated a few key concepts.
Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, if you bought a computer you programmed it, which is why early personal computer owners in the UK went on to birth a thriving games software industry including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac; my Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
At the time Jobs originally did the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his vision of something greater.
Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].
Networks
Work took a leap forward with networked office computers, pioneered by Apple’s Macintosh Office[xi]. This was soon overtaken by competitors. It facilitated workflow within an office and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.
At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].
In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repair men to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.
More often, messages were taken by a receptionist in their office, if they had one, or more likely by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.
Fuzzy logic
The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.
Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:
“Some of the fuzzy concepts may be valid in the U.S.,”
“The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”
“But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “
By the end of the 1990s, fuzzy logic was embedded in various consumer devices:
Air-conditioner units – understood the room, the temperature difference inside-and-out and the humidity, then switched on and off to balance cooling and energy efficiency.
CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
Dishwashers – understood how many dishes were loaded and the type of dirt, then adjusted the wash programme.
Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer was sitting from the set.
Vacuum cleaners – adjusted vacuum power as they moved from carpeted to hard floors.
Video cameras – compensated for the movement of the camera to reduce blurred images.
Fuzzy logic products were sold on their benefits, with the technology concealed from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes it could rely on small, lower-power specialist chips[xvi] offering a reasonable amount of intelligence per watt, some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
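As a rough illustration of how those appliances blended rules, here is a minimal sketch of a fuzzy controller for fan speed. The membership functions, rules and numbers are invented for this example; real appliance firmware would be tuned per product and run on a small, low-power microcontroller.

```python
# A minimal sketch of a fuzzy controller in the spirit of the appliances above.
# The membership functions and rule outputs are made up for illustration.

def cold(temp_c):          # membership: how 'cold' the room is, 0.0-1.0
    return max(0.0, min(1.0, (20 - temp_c) / 10))

def comfortable(temp_c):   # membership: how close to 20 C the room is
    return max(0.0, 1.0 - abs(temp_c - 20) / 5)

def hot(temp_c):           # membership: how 'hot' the room is, 0.0-1.0
    return max(0.0, min(1.0, (temp_c - 20) / 10))

def fan_speed(temp_c):
    """Blend three rules: cold -> slow (10%), comfortable -> medium (50%), hot -> fast (100%)."""
    rules = [(cold(temp_c), 10), (comfortable(temp_c), 50), (hot(temp_c), 100)]
    total = sum(weight for weight, _ in rules)
    if total == 0:                       # guard; won't happen with these functions
        return 50
    # weighted average of the rule outputs - a simple defuzzification step
    return sum(weight * speed for weight, speed in rules) / total

for t in (12, 20, 24, 31):
    print(t, "C ->", round(fan_speed(t)), "% fan")
```

The point is that several partially-true rules fire at once and are blended into one smooth output, rather than the controller snapping between hard-coded thresholds.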
Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail.
Decision support systems & AI in business
Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used were varied, but some used rules-based systems. These were used in at least some capacity to reduce manual office work. For instance, credit card approvals were processed based on rules that included various factors such as credit scores; only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.
Three decades on, IBM had a similar problem with its Watson offerings, with a particularly high-profile failure in mission-critical healthcare applications[xviii]. Secondly, a lot of tasks were ad-hoc in nature, or required transposing data across disparate systems.
The rise of the web
The web changed everything. The underlying technology allowed for dynamic data.
Software agents
Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user’s computer which monitored price changes for products that the customer was interested in buying. The app then notified the user when the monitored price reached a level determined by the customer. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].
The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer, in a similar way to mobile app notifications now.
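As a rough sketch of what that kind of client-side price watcher does, here is a minimal Python version. The feed URL, JSON shape, product name and polling interval are all invented for illustration – the real PapriCom/DealTime client pre-dated this sort of JSON API entirely.

```python
# A rough sketch of a client-side price watcher in the spirit of PapriCom.
# The feed URL, JSON shape and watchlist are hypothetical placeholders.
import json
import time
import urllib.request

WATCHLIST = {
    # product id -> price at which the user wants to be notified
    "portable-cd-player": 79.99,
}
FEED_URL = "https://example.com/prices.json"   # hypothetical price feed

def fetch_prices():
    with urllib.request.urlopen(FEED_URL) as response:
        return json.load(response)             # e.g. {"portable-cd-player": 74.50}

def check_once():
    prices = fetch_prices()
    for product, target in WATCHLIST.items():
        current = prices.get(product)
        if current is not None and current <= target:
            print(f"{product} has dropped to {current} (target {target})")

if __name__ == "__main__":
    while True:                 # poll periodically, like the original desktop client
        check_once()
        time.sleep(3600)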
Web search
The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the workload amongst them to power web search. As search engines started trying to make sense of an exponentially growing web, machine learning became part of the developer toolbox.
Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about the given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].
All the major search engines looked at how deep learning could help improve search results relevance.
Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of this was built using optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating in clusters.
General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products for everything from embedded applications to early mobile and portable devices and computers drove progress in improving intelligence per watt.
Makimoto’s tsunami back to specialised ICs
When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, Makimoto’s predicted phase was stuck in ‘standardisation’. It just worked. But several factors drove the swing back to specialised ICs.
Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of the key technologies that allow EUV photolithography[xxv]. During this time Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy’s research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once it was in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that production-process improvements to IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer leading-edge foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
Progress in EDA (electronic design automation): production process improvements in IC manufacture allowed for an explosion in device complexity as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to think about very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve intelligence per watt[xxx].
Increased focus on portable devices: a combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way towards more portable technologies. From personal digital assistants, MP3 players and smartphones to laptop and tablet computers, disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in reality it would do a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].
Apple’s first in-house designed IC, the A4 processor was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. This marked a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].
New devices also meant new use cases that melded data on the web, on device, and in the real world. I started to see this in action working at Yahoo! with location data integrated on to photos and social data like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].
Real-world QR codes helped connect online services with the real world, for things such as mobile payments or reading content online like a restaurant menu or a property listing[xxxvii].
I labelled this web-world integration a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008 as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].
Deep learning boom
Even as there was a post-LISP-machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s because of the large amount of computing power deep networks needed. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].
A key factor in deep learning’s adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build Large Language Models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres with less thought given to the electrical infrastructure required[xliii].
Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later OpenAI built on Google’s work with the generative pre-trained transformer model better known as GPT[xlvi].
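For a sense of why the transformer’s attention mechanism is computationally intensive, here is a toy sketch of scaled dot-product attention using random numbers: the score matrix is sequence-length by sequence-length, so doubling the input length roughly quadruples that part of the work. This is only a generic illustration, not code from the paper.

```python
# A toy sketch of scaled dot-product attention, the core of the transformer.
# Illustrative only: random data, single head, no masking or batching.
import numpy as np

seq_len, d = 8, 16                      # toy sequence length and model width
Q = np.random.randn(seq_len, d)         # queries
K = np.random.randn(seq_len, d)         # keys
V = np.random.randn(seq_len, d)         # values

scores = Q @ K.T / np.sqrt(d)           # (seq_len, seq_len) pairwise scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
output = weights @ V                    # weighted mix of the value vectors

print(output.shape)                     # (8, 16)
```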
Since 2018 we’ve seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations in building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].
These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].
The current direction of development in generative AI services is raw computing power, rather than having a more energy efficient focus of intelligence per watt.
Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].
| Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom processing chips bought |
| --- | --- | --- | --- |
| Amazon | 196,000 | – | 1,300,000 |
| Alphabet (Google) | 169,000 | – | 1,500,000 |
| ByteDance | 230,000 | – | – |
| Meta | 224,000 | 173,000 | 1,500,000 |
| Microsoft | 485,000 | 96,000 | 200,000 |
| Tencent | 230,000 | – | – |
These numbers provide an indication of the massive deployment of GPT-specific computing power. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘fail whale’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].
There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks.
Agentic AI
Thinking on software agents goes back to work being done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987, which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.
A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations due to it being an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile service subscribers on early digital cellular mobile networks.
Almost a quarter century later we’re now seeing devices that are looking to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv] and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end is Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany US combat aircraft into battle, once it’s been directed to follow a course of action by its human colleague in another plane.
The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size of computing resources and their power is a trade-off with the amount of fuel or payload it will carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.
As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].
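As an illustration of what ‘running natively’ can look like in practice, here is a minimal sketch using llama-cpp-python, one of several ways to run a quantised model locally on a Mac. The model file path is a placeholder for whichever downloaded model you have; this is a hedged sketch of the general approach, not a recommendation of a specific model or tool.

```python
# A minimal sketch of local, on-device inference with llama-cpp-python.
# The model path below is a placeholder - point it at whatever quantised
# GGUF model file you have downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="models/local-model.gguf", n_ctx=2048)

result = llm(
    "In one sentence, why does on-device AI help privacy?",
    max_tokens=64,
)
print(result["choices"][0]["text"])
```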
This is still a long way from the vision of completely local execution of ‘agentic AI’ on a mobile device, because intelligence per watt hasn’t scaled down to a level that is useful given the vast range of possible uses that would be asked of an agentic AI model.
Maximising intelligence per watt
There are three broad approaches to maximise the intelligence per watt of an AI model.
Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs), and the large data centres that have been built to run their models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim. (A toy sketch of one generic optimisation technique follows after this list.)
Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples range from the fuzzy logic used for the past four decades in consumer electronics to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].
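To make the ‘optimise models’ point a little more concrete, here is a toy sketch of post-training weight quantisation – storing weights as 8-bit integers instead of 32-bit floats so each parameter needs a quarter of the memory, and less energy to move around. It is a generic illustration of one lever, not a description of how DeepSeek, Manus or any named vendor actually optimises their models.

```python
# A toy illustration of post-training quantisation: mapping 32-bit float
# weights to 8-bit integers. Generic sketch only, not any vendor's method.
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)    # stand-in for a layer

scale = np.abs(weights).max() / 127.0                  # map the range onto int8
q_weights = np.round(weights / scale).astype(np.int8)  # stored 8-bit weights
dequantised = q_weights.astype(np.float32) * scale     # reconstructed at inference

print("bytes before:", weights.nbytes, "after:", q_weights.nbytes)
print("max absolute error:", np.abs(weights - dequantised).max())
```

The trade-off is a small loss of precision in exchange for a large cut in memory traffic, which is where much of the energy in inference is spent.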
Even if an AI model can do something, should the model be asked to do so?
We have had a clear direction of travel over the decades towards more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally.
Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This makes sense for general everyday tasks with no information risk, for instance helping you complete a newspaper crossword if there is an answer you are stuck on and the intellectual struggle has gone nowhere.
A private cloud AI service would make sense when working, accessing or processing data held on the service. Examples of this would be Google’s Vertex AI offering[lxxv].
On-device AI models make sense in working with one’s personal private details such as family photographs, health information or accessing apps within your device. Apps like Strava which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. ***I am using Strava as an example because it is popular and widely-known, not because it is a bad app per se.***
While businesses have the capability and resources to build a multi-layered security infrastructure that protects their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same security. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo’s autonomous taxis. For individuals, their mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models into a laptop, tablet or smartphone?
The Washington Post alleged that the British government had served a technical capability notice against Apple in December 2024 to provide backdoor global access into encrypted Apple iCloud services. The BBC’s subsequent report appears to support the Post’s allegations. It also begs a philosophical question: what does it mean when the government has a copy of your ‘digital twin’?
What is a technical capability notice?
A technical capability notice is a legal document issued by the UK government that compels a telecoms provider or technology company to maintain the technical ability to assist with surveillance activities like interception of communications, equipment interference, or data acquisition. When applied to telecoms companies and internet service providers, it is usually UK-only in scope. What is interesting about the technical capability notice allegedly served against Apple is that it is extra-territorial in nature. The recipient of a technical capability notice isn’t allowed to disclose that they’ve been served with the notice, let alone the scope of the ask.
Apple outlined a number of concerns to the UK parliament in March 2024:
Breaks systems
Lack of accountability in the secrecy
Extra-territoriality
TL;DR – what the UK wants with technical capability notices is disproportionate.
Short history of privacy
The expectation of privacy in the UK is a relatively recent one. You can see British spy operations going back to at least the 16th century with Sir Francis Walsingham, who ran a network that read couriered mail and cracked codes in Elizabethan England.
By Victorian times, you had Special Branch attached to the Metropolitan Police and related units across the British Empire. The Boer War saw Britain found permanent military intelligence units that were the forerunners of the current security services.
By the First World War, the security services as we now know them had been formed. They were responsible for intercepting mail, telegraphs, radio transmissions and telephone conversations where needed.
Technology leapt forward after World War 2.
ECHELON
ECHELON was a Cold War-era global signals intelligence network run by Australia, Canada, New Zealand, the UK and the US. It originated in the late 1960s to monitor the military and diplomatic communications of the Soviet Union and its Eastern Bloc allies, and the ECHELON project became formally established in 1971.
ECHELON was partly inspired by earlier US projects. Project SHAMROCK had started in 1940 and ran through to the 1970s, photographing telegram communications in, or transiting through, the US. Project MINARET tracked the electronic communications of listed American citizens who travelled abroad. They were helped in this process by the British signals intelligence agency GCHQ.
In 2000, the European Commission filed a final report on ECHELON which claimed that:
The US-led electronic intelligence-gathering network existed
It was used to provide US companies with a competitive advantage vis-à-vis their European peers; rather like the treatment US defence contractors have alleged they received from Chinese hackers.
Capenhurst microwave tower
During the cold war, one of the main ways that Irish international data and voice calls were transmitted was via a microwave land bridge across England and on to the continent.
The route ran from Dublin’s Dame Court to Holyhead, Llandudno and on to Heaton Park. Just next to the straight-line path between Llandudno and Heaton Park was a 150-foot tower at Capenhurst on the Wirral. This siphoned off a copy of all Irish data into the British intelligence system.
Post-ECHELON
After 9/11, there were widespread concerns about the US PATRIOT Act, which obligated US internet platforms to provide their data to the US government, wherever that data was hosted. After ECHELON was exposed, it took Edward Snowden to reveal PRISM, which showed how the NSA was hoovering up data from popular internet services such as Yahoo! and Google.
RAMPART-A was a similar operation taking data directly from the world’s major fibre-optic cables.
The US programme BULLRUN and UK programme Edgehill were designed to crack encrypted communications.
So privacy is a relatively new concept that relies on the inability to process all the data taken in.
Going after the encrypted iCloud services hits different. We are all cyborgs now; smartphones are our machine augmentation and are seldom out of reach. Peering into the cloud ‘twin’ of our device is like peering into our heads, giving indications of hopes, weaknesses and intent, which can then be taken and interpreted in many different ways.
What would be the positive reasons for issuing a technical capability notice?
Crime
Increasing technological sophistication has gone hand in hand with the rise of organised crime groups and new criminal business models such as ‘Klad’. Organised crime is also transnational in nature.
But criminals have already had access to dedicated criminal messaging networks, a couple of which were detailed in Joseph Cox’s Dark Wire. They use the dark web, Telegram and Facebook Marketplace as outlets for their sales.
According to Statista, less than six percent of crimes committed in the UK resulted in a charge or summons in 2023. That compares to just under 16 percent in 2015.
Is going after Apple really going to result in an increased conviction rate, or could the resources be better used elsewhere?
Public disorder
Both the 2011 and 2024 riots caught the government off-guard. Back in 2011, there was concern that the perpetrators were organising over secure BlackBerry messaging. The reality was that the bulk of it was being done over social media. It was a similar case with the 2024 public disturbances as well.
So gaining access to iCloud data wouldn’t be that much help, given the effort needed to filter through it and the fact that the signals and evidence were out there in public for everyone to see.
The big challenges for the police were marshalling sufficient resources and countering an online narrative that took on a momentum of its own.
Paedophiles
One of politicians’ strongest cards to justify invasion of privacy is protecting against nonces, paedos and whatever other label you use to describe the distributors of child sexual abuse images. It’s a powerful, emotive subject that hits like a gut punch. The UK government has been trying to explore ways of understanding the scale of abuse in the UK.
Most child abuse happens in the home, or by close family members. Child pornography rings are more complex, with content being made around the world and repeatedly circulated for years through various media. A significant amount of the content is produced by minors themselves – such as selfies.
The government has a raft of recommendations to implement from The Independent Inquiry into Child Sexual Abuse. These changes are more urgently needed, like getting the police to pay attention to vulnerable working-class children when they come forward.
Terrorism
The UK government puts a lot of work into preventing and combating terrorism. What terrorism is has evolved over time. Historically, cells would mount terrorist attacks.
Eventually, the expectation of the protagonist surviving the attack changed with the advent of suicide tactics. Between 1945 and 1980, these were virtually unheard of. The pioneers seem to have been Hezbollah against UN peacekeepers in Lebanon.
This went on to influence 9/11 and the London bombings. The 9/11 commission found that the security services didn’t suffer from a lack of information, but challenges in processing and acting on the information.
More recently, many attacks have been carried out by single actors rather than a larger conspiracy. Many of the signs were in their online spiral into radicalisation, whether it’s right-wingers looking to follow the example of The Turner Diaries, or those that look towards groups like ISIS.
Axel Rudakubana’s actions in Southport don’t currently fit into the UK government’s definition of terrorism because of his lack of ideology.
I am less sure what the case would be for being able to access every Apple customer’s cloud twin of their iPhone. The challenge seems to be in the volume of data and metadata to sift through, rather than a lack of data.
Pre-Crime
Mining data on enough smartphones over time may show up patterns that might indicate an intent to commit a crime – essentially the predictive crime-solving promised in the Tom Cruise dystopian speculative-future film Minority Report.
Currently the UK legal system tends to focus on people having committed a crime; the closest we have to pre-crime was the more intelligence-led operations during The Troubles that were investigated by the yet-to-be-published Stalker/Sampson Inquiry.
There are so many technical, philosophical and ethical issues with this concept – starting with what it means for free will.
What are the reasons against issuing a technical capability notice?
The UK Government supports strong encryption and understands its importance for a free, open and secure internet and as part of creating a strong digital economy. We believe encryption is a necessary part of protecting our citizens’ data online and billions of people use it every day for a range of services including banking, commerce and communications. We do not want to compromise the wider safety or security of digital products and services for law abiding users or impose solutions on technology companies that may not work within their complex systems.
Extra-territorial reach
Concerns about the US PATRIOT Act and PRISM saw US technology companies lose commercial and government clients across Europe. Microsoft and Alphabet were impacted by losing business from the likes of UK defence contractor BAE Systems and the Swedish government.
The UK would likely experience a similar effect. Given that the UK is looking to biotechnology and technology as key sectors to drive economic growth, this is likely to have a negative impact on:
British businesses looking to sell technology services abroad (DarkTrace, Detica and countless fintech businesses). They will lose existing business and struggle to make new sales.
Britain’s attractiveness for inbound investment, be it software development, regional headquarters functions or infrastructure such as data centres. Having no exposure to the UK market may be more attractive to companies handling sensitive data.
You have seen a similar pattern play out in Hong Kong as more companies have moved regional headquarters to Singapore instead.
The scope of the technical capability notice, as it is perceived, damages UK arguments around freedom of speech. State surveillance is considered to have a chilling effect on civil discourse and has been criticised in the past, yet the iCloud backdoor access could be considered to do exactly the same thing that the British government opposes in countries like China, Hong Kong and Iran.
Leverage
The UK government has a challenge in terms of the leverage that it can bring to bear on foreign technology multinationals. While the country has a sizeable market and a talented workforce, it’s a small part of these companies’ global revenues and capabilities.
They can dial down services in the UK, or they can withdraw completely from the UK marketplace, taking their jobs and infrastructure investment with them. Apple supports 550,000 jobs in the UK through direct employment, its supply chain, and the iOS app economy. In 2024, Apple claimed that it had invested over £18 billion there over the previous five years.
In terms of the number of people employed through Apple, it’s a big number. Let me try to bring it to life for you: imagine for a moment if every vehicle factory (making cars, tractors, construction vehicles, race cars and wagons), parts plant, research and development centre, MOT station, dealership and repair shop in the UK fired half their staff. That is the toll that Apple leaving the UK would have on unemployment.
Now think about how that would ripple through the community: fewer goods bought in the supermarket, fewer pints poured in the pub and less frequent haircuts.
Where’s the power in the relationship between the tech sector and the government?
Precedent
Once it is rumoured that Apple has given in to one country’s demands, the equivalent of technical capability notices is likely to be employed by governments around the world. Apple would find it hard not to provide similar access to the other Five Eyes countries, China, India and the Gulf states.
Even if they weren’t provided with access, it’s a lot easier to break in when you know that a backdoor already exists. A classic example of this in a different area is the shock-and-awe felt when DeepSeek demonstrated a more efficient version of a ChatGPT-like LLM. The team had a good understanding of what was possible and started from there.
The backdoor will be discovered, if not by hackers then by disclosure – like the Capenhurst microwave tower that was known about soon after it went up – or by an Edward Snowden-like whistle-blower, given the number of people who would have access to that information in allied security apparatus.
This would leave people from around the world vulnerable to authoritarian regimes. The UK is currently home to thousands of political émigrés from Hong Kong who are already under pressure from the organs of the Chinese state.
From a domestic point-of-view while the UK security services are likely to be extremely professional, their political masters can be of a more variable quality. An authoritarian populist leader could put backdoors allowed by a technical capability notice to good use.
Criminal access
The hackers used by intelligence services, especially those attributed to China and Russia, have a reputation for double-dipping: working for their intelligence masters while also looking to make a personal profit by nefarious means. Databases of iCloud data would be very tempting to exploit for criminal gain, or to sell on to other criminals, allowing them to mine bank accounts and credit cards or conduct retail fraud.
It could even be used against a country’s civilians and their economy as a form of hybrid warfare that would be hard to attribute.
In the past intelligence agencies were limited in terms of processing the sea of data that they obtained. But technology moves on, allowing more and more data to be sifted and processed over time.
What can you do?
You’ve got nothing to hide, so why worry? With the best will in the world, you do have things to hide, if not from the UK government then from foreign state actors and criminals – who are often the same people:
Your bank account and other financial related logins
Personal details
Messages that could be taken out of context
I am presuming that you don’t have your children’s photos on your social media where they can be easily mined and fuel online bullying. Your children’s photos on your phone could be deep faked by paedophiles or scammers.
Voice memos that can be used to train a voice scammer’s AI to be good enough
Client and proprietary information
Digital vehicle key
Access to academic credentials
Access to government services
So, what should you do?
Here’s some starting suggestions:
Get your kids’ photos off your phone. Get a digital camera, have prints made to put in your wallet or a photo album, or use an electronic picture frame that can take an SD card of images and doesn’t connect to the web or use a cloud service.
Set up multi-factor authentication on your accounts if you can. It won’t protect you against a government, but it will make life a bit more difficult for criminals, who may move on to hacking someone else’s account instead – given that there is a criminal ecosystem to sell data en masse.
Use the Apple password app to generate passwords, but keep a record of them offline in a notebook. If you are writing them down, have two copies and use legible handwriting.
You could delete ‘important’ contacts from your address book and use an old-school Filofax or Rolodex for them instead. You’re not likely to be able to do this with all your contacts – it wouldn’t be practical. If you are writing them down, have two copies and use legible handwriting.
Have a code word with loved ones. Given that a dump of your iCloud service may include enough training data for a good voice AI, having a code word to use with your loved ones could prevent them from getting scammed. I put this in place ages ago as there is enough video out there on the internet of me in a public speaking scenario to train a passable voice generative AI tool.
Use Signal for messaging with family and commercially sensitive conversations.
My friend and former Mac journalist Ian Betteridge recommended using an alternative service like Swiss-based Proton Cloud. He points out that they are out of the legal jurisdiction of both the US and UK. However, one has to consider history – Crypto AG was a Swiss-based cryptography company actually owned by the CIA. It gave the intelligence agency access to the secure communications of 120 countries including India, Pakistan and the Holy See. Numerous intelligence services including the Swiss benefited from the intelligence gained. So consider carefully what you save to the cloud.
If you are not resident in the UK, consider using ‘burn devices’ with separate cloud services. When I worked abroad, we had to do client visits in an authoritarian country. I took a different cellphone and laptop to protect commercially sensitive information. When I returned, these were both hard-reset by the IT guy and were ready for future visits. Both devices only used a subset of my data and didn’t connect to my normal cloud services, reducing the risk of infiltration and contamination. The mindset of wanting to access cloud services around the world may be just the thin end of the wedge. Countries generally don’t put down industrial and political espionage as justifications for their intelligence services’ powers.
What can criminals do?
Criminals already have experience procuring dedicated secure messaging services.
While both dark web services and messaging platforms have been shut down, there is an opportunity to move the infrastructure into geographies that are less accessible to western law enforcement: China, Hong Kong, Macau or Russia, for instance. A technical capability notice is of no use here. The security services have two options to catch criminals out:
Obtain the criminal’s end devices:
While they are unlocked, and put them in a Faraday cage to prevent the device from being wiped remotely.
Have an informant give you access to their device.
Crack the platform:
Through hacking
Setting the platform up as a sting in the first place.
If the two criminals are known to each other, a second option is to go old school using a one-time pad. This might mean both having the same edition of a book, with each letter or word reference advancing through the book.
So if you used the word ‘cat’, and it was the fourth word on line 3 of page 2 of the book, you might get something like 4.3.2 – which will mean nothing to anyone who doesn't have the same book. The writer and their correspondent would not use 4.3.2 to signify ‘cat’ again; instead they would move onwards through the book to find the next occurrence of ‘cat’. A sleuthing cryptographer might guess your method of encryption from the increasing numbers, but unless they know the book, your feline secret is safe from their efforts.
Above are two pages from an old one-time pad issued by the NSA, called DIANA.
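As a toy illustration of the book-cipher style scheme described above (not the DIANA pad itself), here is a minimal sketch. The word.line.page output format follows the 4.3.2 example in the text; the sample ‘book’ is invented.

```python
# Toy book cipher: encode each plaintext word as "word.line.page",
# always moving forward through the shared book so a reference is never reused.

def index_book(pages: list[list[str]]) -> list[tuple[str, str]]:
    """Flatten a book into (word, 'word.line.page') references in reading order."""
    refs = []
    for p, page in enumerate(pages, start=1):
        for l, line in enumerate(page, start=1):
            for w, word in enumerate(line.lower().split(), start=1):
                refs.append((word.strip(".,;!?"), f"{w}.{l}.{p}"))
    return refs

def encode(message: str, pages: list[list[str]]) -> list[str]:
    refs = index_book(pages)
    cursor = 0
    out = []
    for word in message.lower().split():
        # Search forward from the cursor so the same reference is never reused.
        for i in range(cursor, len(refs)):
            if refs[i][0] == word:
                out.append(refs[i][1])
                cursor = i + 1
                break
        else:
            raise ValueError(f"'{word}' not found in the remaining book text")
    return out

if __name__ == "__main__":
    # Hypothetical two-page 'book': each page is a list of lines.
    book = [
        ["the quick brown fox", "jumps over the lazy dog"],
        ["my neighbour has a cat", "the cat sat on the mat"],
    ]
    print(encode("the cat", book))  # e.g. ['1.1.1', '5.1.2']
```

Without the same edition of the book, the references are just climbing numbers; with it, decoding is a simple lookup.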
The point is, those criminals that really want to stop the security services understanding their business can do so. Most criminals in the UK are more likely to rely on basic tactics (gloves, concealing their faces, threatening witnesses) and the country's low crime clearance rate.
Instead of a technical capability notice, these criminals are usually caught through metadata analysis (who is calling whom, who is messaging whom, who is transferring money to whom, etc.) and investigative police work including stings, surveillance and informants.
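To make the metadata point concrete, here is a minimal sketch of the kind of link analysis that can be run over call records alone. The records and names are invented; real investigative tooling is far more sophisticated.

```python
from collections import Counter, defaultdict

# Hypothetical call-detail records: (caller, callee, duration_seconds).
calls = [
    ("alice", "bob", 120),
    ("bob", "carol", 45),
    ("alice", "carol", 300),
    ("dave", "carol", 30),
    ("bob", "carol", 60),
]

# Who calls whom, and how often.
pair_counts = Counter((caller, callee) for caller, callee, _ in calls)

# Distinct contacts per person: hubs in the network stand out,
# even though the content of the calls is never seen.
contacts = defaultdict(set)
for caller, callee, _ in calls:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

print("Most frequent pairs:", pair_counts.most_common(2))
print("Best connected:", sorted(contacts, key=lambda p: len(contacts[p]), reverse=True)[:2])
```

Encrypting the content of messages does nothing to hide this layer, which is why the who-calls-whom graph is often enough.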
Why?
All of which raises the questions:
Why Apple, and why was the notice served in December 2024?
What trade-offs has the UK government factored in, considering the potential impact on its economic growth agenda and the political ramifications?
Who was behind the leak itself, and why? Finally, the timing of the leak was interesting: the early days of the Trump administration.
I don’t know how I feel about the alleged technical capability notice and have more questions than answers.
I have worked at Interpublic twice during my career: once at the very start of my career and more recently at McCann Health. I was never vested in Interpublic stock and I don't own any Interpublic or Omnicom shares. This is not financial advice; I am not telling you what you should do.
This post is not intended to be, and shall not constitute, an offer to buy or sell or the solicitation of an offer to buy or sell any securities, or a solicitation of any vote or approval, nor shall there be any sale of securities in any jurisdiction in which such offer, solicitation or sale would be unlawful prior to registration or qualification under the securities laws of any such jurisdiction.
I am pointing out the bits in discussions that I found interesting, and some bits that I found deathly dull, but pertinent.
The shape of it
The acquisition would be done by issuing stock. It wouldn't involve Omnicom's cash reserves or raising debt to make the purchase. Following the deal, the new Omnicom would be owned by:
60.6% of former existing Omnicom shareholders
39.4% of former Interpublic shareholders
The deal was expected to close in the second half of 2025. Once closed, Omnicom expected to realise $750 million in cost savings over the following two years, with combined cash flow of more than $3 billion a year.
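As a back-of-the-envelope check on the ownership split, here is a sketch using the widely reported exchange ratio of roughly 0.344 Omnicom shares per Interpublic share; the share counts are my rounded assumptions, so treat the output as illustrative rather than exact.

```python
# Back-of-the-envelope ownership split for an all-stock deal.
# Assumed, rounded inputs: ~0.344 Omnicom shares per Interpublic share,
# ~197m Omnicom and ~372m Interpublic shares outstanding.
exchange_ratio = 0.344
omnicom_shares = 197_000_000
interpublic_shares = 372_000_000

new_shares_issued = interpublic_shares * exchange_ratio
total_shares = omnicom_shares + new_shares_issued

print(f"Former Omnicom holders:     {omnicom_shares / total_shares:.1%}")
print(f"Former Interpublic holders: {new_shares_issued / total_shares:.1%}")
# Roughly 60.6% / 39.4%, matching the split above.
```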
Investment analyst call
The investment analyst call was led by Omnicom's John Wren and featured Philippe Krakowsky. One of the main factors Wren raised on the call was the reduction in Omnicom's debt-to-EBITDA ratio from 2.5x to 2.1x. The combined organisation also had a more balanced maturity profile on its debt.
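The leverage point is simple arithmetic: fold in a business that carries less debt relative to its EBITDA and the blended ratio falls. The figures below are placeholders chosen only so the arithmetic reproduces the 2.5x to 2.1x move mentioned on the call; they are not the companies' actual balance-sheet numbers.

```python
# Illustrative only: how a combined debt-to-EBITDA ratio can fall
# when a less-leveraged company is folded in. All figures are assumed.
omnicom_debt, omnicom_ebitda = 6.3, 2.5          # $bn, roughly 2.5x leverage
interpublic_debt, interpublic_ebitda = 2.5, 1.7  # $bn, lower leverage

combined_ratio = (omnicom_debt + interpublic_debt) / (omnicom_ebitda + interpublic_ebitda)
print(f"Standalone Omnicom: {omnicom_debt / omnicom_ebitda:.1f}x")
print(f"Combined group:     {combined_ratio:.1f}x")
```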
The deal impacted scale in two ways:
Efficiencies due to scale.
Increased capacity to borrow and fund future purchases.
What was less clear from the call was the value to customers. Healthcare was cited as an area of opportunity as both businesses had a substantial healthcare marketing offering. But nothing on how to capitalise on the opportunity.
What I didn't hear was how the combined business was going to get to $750 million of savings, only that they were confident they could hit that number within two years of the deal closing.
I also didn't hear a clear position on how the combined firm would deal with the drain of advertising revenue from marketing conglomerates and media companies to platforms. There was some lip service given to being able to better address generative AI-related change as a larger group.
Finally, there was no analysis or consideration of how Omnicom and Interpublic would surpass their competitors' innovation. Instead the focus was purely on existing combined size.
Shareholder value
At the time of the announcement, the deal was said to offer a premium in terms of value to Interpublic shareholders.
As for Omnicom shareholders, the claim was: "The transaction will be accretive to adjusted earnings per share for both Omnicom and Interpublic shareholders."
Slow gains – which might make taking that money out of their existing shares and putting it into an S&P 500 tracker ETF seem more attractive.
Industry animal spirits (aka what people were saying in my feeds and op-eds)
The reaction on social platforms was shrill and overwhelmingly negative. The reasons given included:
The inevitable job cuts.
The internal preoccupation that comes from two large organisations coming together.
The lack of clarity about unique benefit that the new company would provide.
The two-year inward focus on consolidation would allow more innovative competitors (depending on who you listened to: Accenture, Brandtech, Dentsu, Publicis, Stagwell) to gain further ground.
Later on, the discussion moved on towards the reactionary nature of the discussion itself.
From within Interpublic itself, I heard concern about the future from people in different parts of the business. This was down to a lack of internal communication rather than anything specific in nature.
Left unchecked, it could sap morale and encourage some of the best talent to leave for more stable environments.
Update: January 17, 2025 – Campaign magazine podcast. The most interesting argument made in the podcast was that the media buying and creative arms of Interpublic are seen as having little to no value, and that the deal from Omnicom's perspective was all about Interpublic's data platform.
Any self-respecting investment banker worth their salt would be able to break the conglomerate down into constituent parts and sell them off (as has already happened with Interpublic agencies R/GA and Huge).
In the PR and social / influence sector, Golin and the Weber Shandwick Collective would make natural groupings to be spun off, each still with enough scale to compete on the global stage.
From a creative agency perspective, it would be a similar situation with Mullen Lowe and McCann World Group.
IPG Health looks like it had already been pre-packaged for private equity when it was carved away from its advertising groups; it nominally has a full suite of offerings to serve pharmaceutical sector clients.
For the bits of networks that you can't sell – for instance, if the purchaser doesn't want an agency office in Malaysia (Malaysia is only here hypothetically; in reality I have no idea why more global corporate headquarters aren't located in the Cameron Highlands) – you can recoup some of your money by facilitating a management buyout. These are more common than you might realise.
Instead the podcast participants think that clients are just all about first and third party data platforms. I would argue that’s a simplistic view that ignores:
The relatively complementary nature of the Interpublic and Omnicom networks in terms of product spread and geographical reach. In most markets, one or the other network has an appreciably stronger position. Where consolidation is needed, it would most likely result in redundancies in the Asia Pacific and European regions.
Client brands' need for continued brand building, and the current chaos as the major platforms pivot to the new presidential administration's direction.
‘Bad neighbourhoods’ for brand content will adversely affect the ability of brands to advertise or promote themselves effectively. It’s harder to build effective brand memory structures in what consumers are likely to perceive as a hateful, or hostile environment.
Finally, there is the little-acknowledged fact that social platform advertising is disproportionately supported by D2C marketing and varying forms of hucksterism, from Temu to get-rich-quick schemes. These aren't the kinds of businesses that fill up the client ranks of large marketing conglomerates like Omnicom and Interpublic.
What business thinking says
Harvard Business Review claims that 70 to 90 percent of mergers and acquisitions fail. By comparison, anywhere between 25 and 80 percent of large IT projects fail, and 70 to 85 percent of new consumer product launches fail. TL;DR: running a business is tough.
Secondly, Omnicom and Interpublic both grew historically through acquisitions, which would suggest that they understand how to move a business forward and integrate a new acquisition.
The business model that marketing services conglomerates historically worked on was a mix of an arbitrage play, driving integration and efficiencies.
Arbitrage
Omnicom and Interpublic both relied on a few ways to gain an arbitrage benefit:
Private companies are generally cheaper to buy than publicly listed firms; it's a matter of economics, as publicly listed firms trade in something closer to a perfect market. Secondly, buyout contracts tie management to financial targets that facilitate either a faster financial payback or a cheaper price on the business.
Larger companies like Omnicom can borrow money on more favourable terms than a small to medium-sized business. Larger companies with lower levels of leverage will also be able to raise money in a more favourable format than more highly leveraged businesses of the same size.
Driving integration
Historically these groups take a light touch on integration for agencies whose capabilities are common to more than one agency, provided the acquired agency is hitting the ambitious financial targets set by the holding company. Integration tends to be limited to integrated new business pitches and common selling of new products or capabilities.
This might be where the client is looking for an integrated solution. Or it might be where it makes sense to pool resources to deal with a new area like Amazon advertising and retail media or generative AI services.
Once a newly acquired business has become ‘part of the furniture’ and the founders have stepped away, you are more likely to see it become more deeply knitted into the holding group business fabric. This is likely to include common systems and processes: time-tracking software, HR and talent management software, accounting software, cloud services and productivity software.
Efficiencies
Sources of efficiency overlap with integration, through standardisation and being able to buy in bulk. A second source of efficiency is the consolidation of common business functions:
Accounting / finance
Business development
Freelance staff pool
Human resources and recruitment
IT
Knowledge management
Legal services
Open questions
Both Omnicom and Interpublic have experience of integrating and spinning off parts of their businesses. What’s different about the Interpublic acquisition is that the scale involved is different from anything else that’s been undertaken in the sector.
How will this be done successfully?
What (additional) value is in the resulting business for clients?
ADWEEK polled marketers to better understand their attitude to the merger. On balance, they weren't supportive of the deal: twice as many respondents were negative about it as were positive. The good news was that almost 60 percent either hadn't made up their minds or were broadly neutral. I need to caveat the results with the note that there wasn't a breakdown of respondents by role or seniority.
But it implied that Omnicom had a serious communications job to do in convincing wider stakeholders of the merits of the deal.
The problem might be greater than telling a better story. By some estimates, 60% of Interpublic and Omnicom scopes of work are allegedly already understaffed – if true, that likely puts customer satisfaction at risk. And that's before the reduction in headcount needed to deliver the cost savings.