
  • Intelligence per watt

    My thinking on the concept of intelligence per watt started as bullets in my notebook. At first it was more of a timeline than anything else, but it provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt.

    TL;DR (too long, didn’t read)

    Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering. 

    Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.  

    Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.  

    A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits. 

    Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.  

    As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.

    Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.

    Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use. 

    Integration

    In 1926 or 1927, Loewe (now better known for their high-end televisions) created the 3NF[i].

    While not a computer, it integrated several radio parts in one glass-envelope vacuum valve: three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components were sealed in their own glass tubes. Normally each triode would have been housed in its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.

    Post-war scientific boom

    Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas became feasible once breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on one piece of silicon to create the IC.

    Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define the IC, with what is now called a monolithic IC being considered a ‘true’ one; Kilby’s version wasn’t a true monolithic IC. As with most inventions, it is usually the child of several interconnected ideas that coalesce at a given point in time. In the case of ICs, this happened in the midst of materials and technology developments, including data storage and computational solutions ranging from the idea of virtual memory to the first solar cells.

    Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].

    TTL (transistor–transistor logic) circuitry was invented at TRW in 1961; TRW licensed it out for use in data processing and communications, propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems still rely on the mainframe as a concept.

    AI’s early steps 

    Science Museum highlights

    What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.

    Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:

    • Reasoning as search – an essentially step-wise, trial-and-error approach to problem solving, compared at the time to wandering through a maze and back-tracking when a dead end was found (a minimal sketch of this approach follows this list).
    • Natural language – where related phrases existed within a structured network.
    • Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer.
    • Single-layer neural networks – used for rudimentary image recognition.
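    As flagged in the first bullet, here is a minimal Python sketch of reasoning as search: a depth-first walk through a maze that back-tracks at dead ends. The maze layout and coordinates are invented for the example.

```python
# A minimal sketch of 'reasoning as search': depth-first search with
# back-tracking through a maze, the metaphor early AI researchers used.
# The maze, start and goal are illustrative values only.

def solve(maze, pos, goal, path=None):
    """Return a path of (row, col) steps from pos to goal, or None."""
    path = path or [pos]
    if pos == goal:
        return path
    r, c = pos
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] == 0 and (nr, nc) not in path):
            result = solve(maze, (nr, nc), goal, path + [(nr, nc)])
            if result:          # a dead end returns None: back-track
                return result
    return None                 # dead end - back-track to the caller

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]              # 0 = open, 1 = wall
print(solve(maze, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```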

    By the early 1970s, AI researchers had run into a number of problems, some of which still plague the field to this day:

    • Symbolic AI wasn’t fit for purpose for many real-world tasks, like crossing a crowded room.
    • Imprecise concepts proved hard to capture with precise language.
    • Commonsense knowledge was vast and difficult to encode. 
    • Intractability – many problems require an exponential amount of computing time. 
    • Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems. 

    By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to the major centres researching AI was reduced by both the US and UK governments. Despite the reduction in funding to the major centres, work continued elsewhere.

    Mini-computers and pocket calculators

    ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first commercially successful minicomputer, the PDP-8, in 1965. The low cost of mini-computers allowed them to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.

    A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments.  It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.

    LISP machines and PCs

    AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines carried on and surpassed them as the cost of computing dropped due to mass production.

    The rise and decline of LISP machines was not only down to Moore’s Law, but also to Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so, while Tsugio Makimoto observed 10-year pivots between standardised and customised semiconductor processors[v]. The rise of personal computing drove a pivot towards standardised architectures.
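    As a back-of-envelope illustration of how the two observations interact: under Moore’s two-year doubling, one of Makimoto’s ten-year pivots spans roughly five doublings, a 32-fold increase in transistor budget.

```python
# Back-of-envelope: transistor budget growth under Moore's observation
# (doubling roughly every two years). Illustrative starting figure only.
def transistors(n0, years, doubling_period=2.0):
    return n0 * 2 ** (years / doubling_period)

# Over one of Makimoto's ~10-year pivots, the budget grows ~32x:
print(transistors(1_000_000, 10))   # 32000000.0
```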

    PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.

    Personal computing started in businesses when office workers brought in computers to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.

    Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own. 

    Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.

    A Bicycle for the Mind

    Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team, he used stories and memetic concepts to get his ideas across, in everything from briefing product teams to press interviews. A 1990 filmed interview with Steve Jobs articulates the ‘bicycle for the mind’ concept particularly well.

    In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].

    The ‘bicycle for the mind’ concept was repeated in early Apple advertisements of the time[vii] and even informed the Macintosh project codename[viii].

    Jobs articulated a few key concepts. 

    • Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, if you bought a computer, you programmed it – which is why early personal computer owners in the UK went on to birth a thriving games software industry, including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
    • The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac. My Mac is still my primary creative tool. In terms of the real difference made, a personal computer is more powerful than a shared computer.
    • At the time Jobs originally gave the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea of something greater.

    Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].

    Networks

    Work took a leap forward with networked office computers, pioneered by Apple’s Macintosh Office[xi] and soon overtaken by competitors. Networking facilitated workflow within an office, and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.

    At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].

    In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repairmen to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.

    Otherwise, messages were often taken by a receptionist in their office, if they had one, or more likely by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.

    Fuzzy logic 

    The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.

    Rice cooker with fuzzy logic, 3,000 yen, available end of June

    Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:

    “Some of the fuzzy concepts may be valid in the U.S.,”

    “The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”

    “But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “

    By the end of the 1990s, fuzzy logic was embedded in various consumer devices:

    • Air-conditioner units – understand the room, the temperature difference inside and out, and the humidity, then switch on and off to balance cooling and energy efficiency.
    • CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
    • Dishwashers – work out how many dishes are loaded and the type of dirt, then adjust the wash programme.
    • Toasters – recognise different bread types and the preferred degree of toasting, and perform accordingly.
    • TV sets – adjust the screen brightness to the ambient light of the room and the sound volume to how far away the viewer is sitting from the set.
    • Vacuum cleaners – adjust vacuum power as they move from carpeted to hard floors.
    • Video cameras – compensate for the movement of the camera to reduce blurred images.

    Fuzzy logic sold on the benefits and concealed the technology from western consumers. It embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes, it could rely on small, low-power specialist chips[xvi], offering a reasonable amount of intelligence per watt some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
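    To make the mechanism concrete, here is a minimal sketch of the kind of fuzzy rule such appliances apply: a crisp sensor reading is mapped to overlapping membership grades, rules fire in proportion to those grades, and the outputs are blended into one smooth control value. The ranges and power levels below are invented, not taken from any real appliance.

```python
# Minimal fuzzy-logic controller sketch: triangular membership functions
# plus a weighted-average defuzzification step. Values are illustrative.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def heater_power(temp_error_c):
    """Map 'how far below target temperature' to heater power (0-1)."""
    # Fuzzify: one reading can be partly 'small' and partly 'large'.
    small = tri(temp_error_c, -1, 0, 10)    # near the target
    large = tri(temp_error_c, 5, 20, 35)    # far below the target
    # Rules: small error -> low power (0.2), large error -> high (0.9).
    weights = [(small, 0.2), (large, 0.9)]
    total = sum(w for w, _ in weights)
    return sum(w * p for w, p in weights) / total if total else 0.0

for error in (1, 8, 25):
    print(error, round(heater_power(error), 2))  # 0.2, 0.55, 0.9
```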

    Fuzzy logic also helped in business automation, for example automatically reading the hand-written numbers on cheques in banking systems, and the postcodes on letters and parcels for the Royal Mail.

    Decision support systems & AI in business

    Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used varied, but some relied on rules-based systems. These were used in at least some capacity to reduce manual office work. For instance, credit card approvals were processed based on rules that included various factors, such as credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort, involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.
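    A hedged sketch of what such a rules-based approval flow might have looked like follows; the thresholds and factor names are invented for illustration, not taken from any real issuer.

```python
# Illustrative rules-based decision system for card approvals.
# The rule thresholds and factors are hypothetical examples.

def approve_card(applicant):
    """Return (decision, reason) from a fixed, hand-written rule set."""
    if applicant["credit_score"] < 580:
        return "decline", "credit score below floor"
    if applicant["debt_to_income"] > 0.45:
        return "decline", "debt-to-income too high"
    if applicant["credit_score"] < 660:
        return "refer", "marginal score - route to analyst review"
    return "approve", "passed all rules"

print(approve_card({"credit_score": 640, "debt_to_income": 0.30}))
# ('refer', 'marginal score - route to analyst review')
```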

    Three decades on, IBM had a similar problem with its Watson offerings, including a particularly high-profile failure in mission-critical healthcare applications[xviii]. A second problem was that many tasks were ad hoc in nature, or required transposing data across disparate systems.

    The rise of the web

    The web changed everything. The underlying technology allowed for dynamic data. 

    Software agents

    Examples of intelligence within the network included early software agents. A good example of this was PapriCom, which had a client on the user’s computer. The software client monitored price changes for products that the customer was interested in buying, then notified the user when the price dropped to a threshold the customer had set. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].

    The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now.
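    Conceptually, a PapriCom-style agent is a simple watch-and-notify loop. The sketch below simulates the price feed; the function names and structure are assumptions for illustration, not PapriCom’s actual design.

```python
# Conceptual sketch of a PapriCom-style price-watching agent. The
# price feed is simulated here; the real client polled merchant prices.
import time

def watch(prices, target_price, notify=print):
    """Scan a stream of observed prices; notify at the customer's target."""
    for price in prices:            # stands in for periodic polling
        if price <= target_price:
            notify(f"price hit {price} (target {target_price})")
            return price
        time.sleep(0.01)            # real clients slept between checks
    return None

watch(prices=[119.99, 112.50, 99.00], target_price=100.00)
# -> "price hit 99.0 (target 100.0)"
```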

    Web search

    The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the workload of web search amongst them. As search engines tried to make sense of an exponentially growing web, machine learning became part of the developer toolbox.
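    A toy sketch of that workload-sharing idea: shard the documents across cheap nodes, build an inverted index per node, then fan each query out and merge the partial results. Real systems layered ranking, replication and fault tolerance on top.

```python
# Toy sketch of sharded web search: each 'node' holds an inverted index
# over its shard of documents; a query fans out and results are merged.
from collections import defaultdict

docs = {1: "cheap linux cluster", 2: "web search engine",
        3: "linux search cluster", 4: "machine learning for search"}

# Partition documents across nodes; build one inverted index per node.
nodes = [defaultdict(set), defaultdict(set)]
for doc_id, text in docs.items():
    index = nodes[doc_id % len(nodes)]        # naive sharding by id
    for word in text.split():
        index[word].add(doc_id)

def search(word):
    """Fan the query out to every node and merge the partial results."""
    return sorted(set().union(*(node[word] for node in nodes)))

print(search("search"))   # [2, 3, 4]
print(search("linux"))    # [1, 3]
```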

    Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].

    All the major search engines looked at how deep learning could help improve search results relevance. 

    Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built with optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating in clusters.

    General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products for everything from embedded applications to early mobile and portable devices and computers drove progress in improving intelligence per watt.

    Makimoto’s tsunami back to specialised ICs

    When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, Makimoto’s predicted cycle was stuck in its ‘standardisation’ phase. It just worked. But several factors drove the swing back to specialised ICs.

    • Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in improvements in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed the development of key technologies behind EUV photolithography[xxv]. During this time, Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy’s research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once it was in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that the production processes improving IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses, and alongside them fewer leading-edge foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
    • Progress in EDA (electronic design automation). Production process improvements in IC manufacture allowed for an explosion in device complexity, as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to the idea of very large-scale integration (VLSI) in IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. It also facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve intelligence per watt[xxx].
    • Increased focus on portable devices. A combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the move towards more portable technologies. From personal digital assistants, MP3 players and smartphones to laptop and tablet computers, disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in reality, a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors; during the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].

    Apple’s first in-house designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. It also marked a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].

    New devices also meant new use cases that melded data on the web, on the device and in the real world. I started to see this in action working at Yahoo!, with location data integrated onto photos and social data like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact for adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later, the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].

    Real-world QR codes helped connect online services with the physical world, for uses such as mobile payments or reading content online like a restaurant menu or a property listing[xxxvii].

    I labelled this web-world integration a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008, as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas came to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].

    Deep learning boom

    Even as there was a post-LISP-machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s because of the large amount of computing power deep learning needed. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search, and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].

    A key factor in deep learning’s adoption was access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build large language models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres with less thought given to the electrical infrastructure required[xliii].

    Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but it is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later, OpenAI built on Google’s work with the generative pre-trained transformer model, better known as GPT[xlvi].
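    A compact sketch of the scaled dot-product attention at the heart of the transformer shows where the computational intensity comes from: the n × n score matrix grows quadratically with sequence length n.

```python
# Scaled dot-product attention, the core of the transformer architecture
# (Vaswani et al., 2017). The n x n score matrix is the quadratic cost.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (n, d) arrays for one head. Returns (n, d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)     # (n, n): every token vs every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                # blend values by attention weight

rng = np.random.default_rng(0)
n, d = 4, 8                           # 4 tokens, one 8-dimension head
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(attention(Q, K, V).shape)       # (4, 8)
```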

    Since 2018 we’ve seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information. One of the key limitations on building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].

    These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services started looking at nuclear power access for their data centres[xlviii].

    The current direction of development in generative AI services is raw computing power, rather than a more energy-efficient focus on intelligence per watt.

    Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].

    Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom chips bought
    Amazon | 196,000 | – | 1,300,000
    Alphabet (Google) | 169,000 | – | 1,500,000
    ByteDance | 230,000 | – | –
    Meta | 224,000 | 173,000 | 1,500,000
    Microsoft | 485,000 | 96,000 | 200,000
    Tencent | 230,000 | – | –

    These numbers provide an indication of the massive deployment of generative AI-specific computing power. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘fail whale’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].

    There is a second class of players, typified by Chinese companies DeepSeek[liv] and Manus[lv], that looks to optimise older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks.

    Agentic AI

    Thinking on software agents goes back to work done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987, which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.

    A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations: as an off-device service that used a phone call rather than an internet connection, its use was limited to Orange mobile subscribers on early digital cellular networks.

    Almost a quarter of a century later, we’re now seeing devices that look to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv], and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end are Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany crewed combat aircraft into battle, once it’s been directed to follow a course of action by its human colleague in another plane.

    The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it weren’t jammed, the time taken to beam AI instructions to the aircraft would negatively impact performance. So it is likely to have a large amount of on-board computing power. As with any aircraft, the size and power draw of computing resources trade off against the amount of fuel or payload it can carry. So efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.

    As well as a more hostile world, we also live in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment, models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].

    This is still a long way from the vision of completely local execution of agentic AI on a mobile device, because intelligence per watt hasn’t scaled down to a level that is useful given the vast range of possible uses that would be asked of the agentic AI model.
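    Some rough, illustrative arithmetic shows the scale of the gap; the figures below (a data-centre GPU around 700 W, a phone chip budget of a few watts) are round assumptions, not benchmark measurements.

```python
# Rough, illustrative intelligence-per-watt comparison. All numbers
# are assumed round figures for the sketch, not measurements.
def tokens_per_joule(tokens_per_second, watts):
    return tokens_per_second / watts    # 1 watt = 1 joule per second

datacentre = tokens_per_joule(tokens_per_second=1500, watts=700)  # big GPU
handset    = tokens_per_joule(tokens_per_second=15,   watts=5)    # phone SoC

print(f"data centre: {datacentre:.2f} tokens/J")  # ~2.14
print(f"handset:     {handset:.2f} tokens/J")     # 3.00
# Similar tokens-per-joule isn't enough: the handset's few-watt budget
# caps how large (and how capable) a locally run model can be at all.
```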

    Maximising intelligence per watt

    There are three broad approaches to maximise the intelligence per watt of an AI model. 

    • Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). Their makers have been building large data centres to run the models in, building on past developments in distributed computing going all the way back to 1962[lxx].
    • Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim. (One common optimisation of this kind is sketched after this list.)
    • Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].
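    As flagged in the second bullet, one widely used optimisation (not named above) is quantisation: storing model weights at lower precision so the same model needs less memory, and less energy per inference. A toy sketch:

```python
# Toy post-training quantisation: store float32 weights as int8 plus a
# scale factor. Real schemes are more careful; this shows the principle.
import numpy as np

def quantise(weights):
    scale = np.abs(weights).max() / 127.0   # map the largest weight to 127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, s = quantise(w)
print(q.nbytes / w.nbytes)                  # 0.25: a quarter of the memory
print(np.abs(dequantise(q, s) - w).max())   # small reconstruction error
```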

    Even if an AI model can do something, should the model be asked to do so?

    AI use case appropriateness

    We have a clear direction of travel over the decades towards more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally.

    Having an AI run on a cloud service makes sense when you are on a robust internet connection, such as the wi-fi network at home. This works for general everyday tasks with no information risk – for instance, helping you complete a newspaper crossword when there is an answer you are stuck on and the intellectual struggle has gone nowhere.

    A private cloud AI service would make sense when working with, accessing or processing data held on the service. An example of this would be Google’s Vertex AI offering[lxxv].

    On-device AI models make sense for working with one’s personal private details, such as family photographs, health information, or accessing apps within your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. (I am using Strava as an example because it is popular and widely known, not because it is a bad app per se.)

    While businesses have the capability and resources to run a multi-layered security infrastructure that protects their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same protection. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo’s autonomous taxis. For individuals, however, a mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models into a laptop, tablet or smartphone?
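    To summarise the deployment logic above as a sketch, with invented task attributes and thresholds:

```python
# Illustrative routing of AI tasks by deployment context, following the
# logic above. Task attributes and rules are invented for the sketch.
def choose_runtime(sensitivity, needs_low_latency, on_robust_network):
    """Pick where a task should run: on-device, private cloud, or cloud."""
    if sensitivity == "personal":         # photos, health data, app access
        return "on-device"
    if sensitivity == "business":         # data already held on the service
        return "private-cloud"
    if on_robust_network and not needs_low_latency:
        return "public-cloud"             # everyday, no-risk tasks
    return "on-device"                    # default to local when in doubt

print(choose_runtime("none", needs_low_latency=False, on_robust_network=True))
# 'public-cloud'  (e.g. help with a crossword clue at home on wi-fi)
```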


    [i] Radiomuseum – Loewe (Opta), Germany. Multi-system internal coupling 3NF

    [ii] (1961) Solid Circuit(tm) Semiconductor Network Computer, 6.3 Cubic inches in Size, is Demonstrated in Operation by U.S. Air Force and Texas Instruments (United States) Texas Instruments news release

    [iii] (2000) The Chip that Jack Built Changed the World (United States) Texas Instruments website

    [iv] Moravec H (1988), Mind Children (United States) Harvard University Press

    [v] (2010) Makimoto’s Wave | EDN (United States) AspenCore Inc.

    [vi] Jobs, S. (1980) Presentation on Apple Computer history and vision (United States) Computer History Museum via Regis McKenna

    [vii] Sinofsky, S. (2019) ‘Bicycle for the Mind’ (United States) Learning By Shipping

    [viii] Hertzfeld, A. (1981) Bicycle (United States) Folklore.org

    [ix] Jones, D. (2016) Codemasters (United Kingdom) Retro Gamer – Future Publishing

    [x] Engelbart, D. (1968) A Research Center For Augmenting Human Intellect (United States) Stanford Research Institute (SRI)

    [xi] Hormby, T. (2006) Apple’s Worst business Decisions (United States) OSnews

    [xii] Honan, M. (2009) From Brick to Slick: A History of Mobile Phones (United States) Wired

    [xiii] Ericsson History: The Nordics take charge (Sweden) LM Ericsson.

    [xiv] Singh, H., Gupta, M.M., Meitzler, T., Hou, Z., Garg, K., Solo, A.M.G & Zadeh, L.A. (2013) Real-Life Applications of Fuzzy Logic – Advances in Fuzzy Systems (Egypt) Hindawi Publishing Corporation

    [xv] Reid, T.R. (1990) The Future of Electronics Looks ‘Fuzzy’. (United States) Washington Post

    [xvi] Kushairi, A. (1993). “Omron showcases latest in fuzzy logic”. (Malaysia) New Straits Times

    [xvii] Watson, A. (2021) The Antique Microwave Oven that’s Better than Yours (United States) Technology Connections

    [xviii] Durbhakula, S. (2022) IBM dumping Watson Health is an opportunity to reevaluate artificial intelligence (United States) MedCity News

    [xix] (1998) PapriCom Technologies Wins CommerceNet Award (Israel) Globes

    [xx] Von Ahn, L., Dabbish, L. (2004) Labeling Images with a Computer Game (United States) School of Computer Science, Carnegie Mellon University

    [xxi] Butterfield, D., Fake, C., Henderson-Begg, C., Mourachov, S., (2006) Interestingness ranking of media objects (United States) US Patent Office

    [xxii] Delaney, K.J., (2005) Yahoo acquires Flickr creator (United States) Wall Street Journal

    [xxiii] Hood, S., (2008) Delicious is 5 (United States) Delicious blog

    [xxiv] (2017) 10 years of Carbon Neutrality (United States) Google

    [xxv] Bakshi, V. (2018) EUV Lithography (United States) SPIE Press

    [xxvi] Wade, W. (2000) ASML acquires SVG, becomes largest litho supplier (United States) EE Times

    [xxvii] Lammers, D. (1999) U.S. gives ok to ASML on EUV effort (United States) EE Times

    [xxviii] Mead, C., Conway, L. (1979) Introduction to VLSI Systems (United States) Addison-Wesley

    [xxix] Lavagno, L., Martin, G., Scheffer, L., et al (2006) Electronic Design Automation for Integrated Circuits Handbook (United States) Taylor & Francis

    [xxx] (2010) Apple Launches iPad (United States) Apple Inc. website

    [xxxi] (1997) PalmPilot Professional (United Kingdom) Centre for Computing History

    [xxxii] Jobs, S. (2005) Apple WWDC 2005 keynote speech (United States) Apple Inc.

    [xxxiii] (2014) Makimoto’s Wave Revisited for Multicore SoC Design (United States) EE Times

    [xxxiv] Makimoto, T. (2014) Implications of Makimoto’s Wave (United States) IEEE Computer Society

    [xxxv] (2006) Nokia and Yahoo! add Flickr support in Nokia Nseries Multimedia Computers (Germany) Cision PR Newswire

    [xxxvi] Gibson, W. (2007) Spook Country (United States) Putnam Publishing Group

    [xxxvii] The O2O Business In China (China) GAB China

    [xxxviii] Carroll, G. (2008) Web Centric Business Model (United States) Waggener Edstrom Worldwide for LaSalle School of Business, Universitat Ramon Llull, Barcelona

    [xxxix] Carroll, G. (2008) Web of no web (United Kingdom) renaissance chambara

    [xl] Kelly, K. (2018) AR Will Spark the Next Big Tech Platform – Call It Mirrorworld (United States) Wired

    [xli] Heckerman, D. (1988) An Empirical Comparison of Three Inference Methods (United States) Microsoft Research

    [xlii] Sze, V., Chen, Y.H., Yang, T.J., Emer, J. (2017) Efficient Processing of Deep Neural Networks: A Tutorial and Survey (United States) Cornell University

    [xliii] Webber, M. E. (2024) Energy Blog: Is AI Too Power-Hungry for Our Own Good? (United States) American Society of Mechanical Engineers

    [xliv] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I. (2017) Attention Is All You Need (United States) 31st Conference on Neural Information Processing Systems (NIPS 2017)

    [xlv] Marche, S. (2024) Was Linguistic A.I. Created By Accident? (United States) The New Yorker.

    [xlvi] Radford, A. (2018) Improving language understanding with unsupervised learning (United States) OpenAI

    [xlvii] Heath, N. (2025) Authors outraged to discover Meta used their pirated work to train its AI systems (Australia) ABC (Australian Broadcast Corporation)

    [xlviii] Morey, M., O’Sullivan, J. (2024) In-brief analysis: Data center owners turn to nuclear as potential energy source (United States) Today in Energy published by U.S. Energy Information Administration

    [xlix] Bradshaw, T., Morris, S. (2024) Microsoft acquires twice as many Nvidia AI chips as tech rivals (United Kingdom) Financial Times

    [l] Smith, C. (2025) ChatGPT’s viral image-generation upgrade is ruining the chatbot for everyone (United States) BGR (Boy Genius Report)

    [li] Wayner, P. (1997) Human Error Cripples the Internet (United States) The New York Times

    [lii] Honan, M. (2013) Killing the Fail Whale with Twitter’s Christopher Fry (United States) Wired

    [liii] Mazarr, M. (2025) The Coming Strategic Revolution of Artificial Intelligence (United States) MIT (Massachusetts Institute of Technology)

    [liv] Knight, W. (2025) DeepSeek’s New AI Model Sparks Shock, Awe, and Questions from US Competitors (United States) Wired

    [lv] Sharwood, S. (2025) Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer (United Kingdom) The Register

    [lvi] Hewitt, C., Bishop, P., Steiger, R. (1973). A Universal Modular Actor Formalism for Artificial Intelligence. (United States) IJCAI (International Joint Conference on Artificial Intelligence).

    [lvii] Sculley, J. (1987) Keynote Address On The Knowledge Navigator at Educom (United States) Apple Computer Inc.

    [lviii] (1987) Apple’s Future Computer: The Knowledge Navigator (United States) Apple Computer Inc.

    [lix] Kelly, K. (1995) Out of Control: The New Biology of Machines (United States) Fourth Estate

    [lx] Nwana, H.S., Azarmi, N. (1997) Software Agents and Soft Computing: Towards Enhancing Machine Intelligence Concepts and Applications (Germany) Springer

    [lxi] Rifkin, G. (1994) Interface; A Phone That Plays Secretary for Travelers (United States) The New York Times

    [lxii] Richardson, T. (2005) Orange kills Wildfire – finally (United Kingdom) The Register

    [lxiii] Spoonauer, M. (2024) The Truth about the Rabbit R1 – your questions answered about the AI gadget (United States) Tom’s Guide

    [lxiv] Garun, N. (2019) One year later, restaurants are still confused by Google Duplex (United States) The Verge

    [lxv] Roth, E. (2025) Amazon can now buy products from other websites for you (United States) The Verge

    [lxvi] MQ-28 microsite (United States) Boeing Inc.

    [lxvii] Warwick, G. (2019) Boeing Unveils ‘Loyal Wingman’ UAV Developed In Australia (United Kingdom) Aviation Week Network – part of Informa Markets

    [lxviii] Udinmwen, E. (2025) Apple Mac Studio M3 Ultra workstation can run Deepseek R1 671B AI model entirely in memory using less than 200W, reviewer finds (United Kingdom) TechRadar

    [lxix] Kelly, K. (2010) What Technology Wants (United States) Viking Books

    [lxx] Andrews, G.R. (2000) Foundations of Multithreaded, Parallel, and Distributed Programming (United States) Addison-Wesley

    [lxxi] Criddle, C., Olcott, E. (2025) OpenAI says it has evidence China’s DeepSeek used its model to train competitor (United Kingdom) Financial Times

    [lxxii] Russell, J. (2025) China Researchers Report Using Quantum Computer to Fine-Tune Billion Parameter AI Model (United States) HPC Wire

    [lxxiii] Mistral AI home page (France) Mistral AI

    [lxxiv] (2025) High-Speed Autonomous Underwater Effects. Copperhead (United States) Anduril Industries

    [lxxv] Vertex AI with Gemini 1.5 Pro and Gemini 1.5 Flash (United States) Google Cloud website

    [lxxvi] Untersinger, M. (2024) Strava, the exercise app filled with security holes (France) Le Monde

    [lxxvii] Nilsson-Julien, E. (2025) French submarine crew accidentally leak sensitive information through Strava app (France) Le Monde

    [lxxviii] Arsene, Liviu (2018) Hack of US Navy Contractor Nets China 614 Gigabytes of Classified Information (Romania) Bitdefender

    [lxxix] Wendling, M. (2024) What to know about string of US hacks blamed on China (United Kingdom) BBC News

    [lxxx] Kidwell, D. (2020) Cyber espionage for the Chinese government (United States) U.S. Air Force Office of Special Investigations

    [lxxxi] Gorman, S., Cole, A., Dreazen, Y. (2009) Computer Spies Breach Fighter-Jet Project (United States) The Wall Street Journal

    [lxxxii] Bellan, R. (2025) Waymo may use interior camera data to train generative AI models, but riders will be able to opt out (United States) TechCrunch

  • February 2025 newsletter

    February 2025 newsletter introduction

    Welcome to my February 2025 newsletter. I hope that your year of the snake has gotten off to a great start. This newsletter marks my 19th issue – which feels like a really short time and strangely long as well. Thank you to those of you who have been on the journey so far as subscribers to this humble publication. Prior to writing this newsletter, I found that the number 19 has some interesting connections.

    In Mandarin Chinese, 19 sounds similar to ‘forever’ and is considered lucky by some people, but the belief isn’t as common as that around 8, 88 or 888.

    Anyone who listened to pop radio from the mid-1980s to the mid-1990s would be familiar with Paul Hardcastle’s documentary-sampling single ’19’. The song mixed documentary-style narration with sampled news archive footage of the Vietnam war, including news reports read by Walter Cronkite. The title came from what was cited as the average age of the soldier serving in Vietnam; however, this is disputed by Vietnam veteran organisations, who claim that the correct number was 22. The veterans’ groups did a lot of research to provide accurate information about the conflict, overturning common mistakes repeated as truth in the media. It’s a handy reminder that fallacies and misplaced trust in media began way before the commercial internet.

    New reader?

    If this is your first newsletter, welcome! You can find my regular writings here and more about me here.

    Strategic outcomes

    Things I’ve written.

    • Zing + more things – HSBC’s Zing payments system was shut down, emblematic of a wider challenge for legacy financial institutions trying to compete against ‘fintech’ startups. I covered several other things as well, including new sensor technology.
    • The 1000 Yen ramen wall is closing down family restaurants across Japan. A lack of consumer tolerance for price rises, combined with inflation-driven ingredient costs, is driving them to the wall. Innovation and product differentiation have not made a difference.
    • Luxury wellness – why luxury is looking at wellness, what the thematic opportunities are, and who the competitors would be if the main luxury conglomerates are successful.
    • Technical capability notice – having read thoroughly about the allegations that Apple had been served with an order by the British government to provide access to its customers’ iCloud data globally, I still don’t know what to think; reading didn’t manage to assuage any of my concerns.

    Books that I have read.

    • World Without End: The million-copy selling graphic novel about climate change by Jean-Marc Jancovici and Christophe Blain. In Japan, graphic novels regularly cover non-fiction topics, like textbooks or biographies. A French climate scientist and an illustrator collaborated to take a similar approach to climate change and the energy crisis. Their work cuts through false preconceptions and trite solutions with science.
    World without end by Jancovici & Blain
    • Laws of UX by Jon Yablonski. Yablonski breaks down a number of heuristics or razors based on psychological research and how they apply to user experience. These include: Jakob’s Law, Fitts’s Law, Hick’s Law, Miller’s Law, the Peak-End Rule and Tesler’s Law (on complexity). While the book focuses on UX, I thought of ways that the thinking could be applied to various aspects of advertising strategy.
    • I re-read Hooked: How to Build Habit-Forming Products by Nir Eyal. Eyal’s model did a good job of synthesising B.J. Fogg’s work on persuasive technology, simplifying it into a model that the most casual reader can take and run with.
    • Kapferer on Luxury by Jean-Noël Kapferer covers the modern rise of luxury brands as we now know them. Like Dana Thomas’ Deluxe: How Luxury Lost Its Lustre, Kapferer addresses the mistake of globalised manufacturing and the massification of luxury. However, Kapferer points out the ‘secret sauce’ that makes luxury products luxurious: the hybridisation of luxury with art, and the concept of ‘incomparability’. The absence of both factors explains why British heritage brands from Burberry to Mulberry have failed in their current incarnations as luxury brands.
    • Black Magic by Masamune Shirow is a manga work from 1983. Masamune is now best known for the creation of Ghost In The Shell which has been turned into a number of anime films, TV series and even a whitewashed Hollywood remake. Despite the title, Black Magic has more in common with space operas like Valerian & Laureline by Pierre Christin and Jean-Claude Mézières than the occult. In the book Masamune explores some of the ideas which he then more fully developed in Ghost In The Shell including autonomous weapons, robots and machine intelligence.
    • Doll by Ed McBain. Doll was a police procedural novel written in 1965 that focused on the model agency industry at the time. The novel is unusual in that it features various artistic flourishes, including a model portfolio and handwritten letters in different styles of penmanship. The author behind the McBain pen name managed to produce over 50 novels. They all have taut dialogue that’s ready for TV, and some of them were adapted for broadcast, notably as an episode of Columbo. You can see the influence of McBain’s work in the likes of Dick Wolf’s productions, like the Law & Order, FBI and On Call TV series franchises.

    Things I have been inspired by.

    Can money make you happy?

    Past research indicated that happiness from wealth plateaued at a middle-class salary. The latest research, via the Wharton School at the University of Pennsylvania, indicates that might not be the case: instead, earning more makes you happier, and there might not be a point at which one has enough. The upper limit in the research seems to have been set by the difficulty of finding sufficiently rich research respondents rather than any natural plateau. As a consumer insight, that has profound implications for marketing across a range of sectors, from gaming to pensions and savings products.

    AgeTech

    I came across the concept of ‘agetech’ while looking for research launched in time for CES in Las Vegas (7–11 January 2025). In the US, the Consumer Technology Association (CTA) and the American Association of Retired Persons (AARP) have put together a set of deep qualitative and quantitative research looking at the needs of the ‘aged consumer’ for ‘AgeTech’. AgeTech isn’t your grandma’s iPad or your boomer CEO’s laptop. Instead, it is products that sit at the intersection of health, accessibility and taking care of oneself in the home. The top five perceived age technologies are connected medical alert devices, digital blood pressure monitors, electric or powered wheelchairs/scooters, indoor security cameras, and electronic medication pill dispensers/reminders. Their report, 2023 Tech and the 50-Plus, noted that technology spending among those 50-plus in America is forecast to be more than $120 billion by 2030. Admittedly, that ’50-plus’ label could encompass people at the height of their career and family households – but it’s a big number.

    It even has a negative impact on the supply side of the housing market for younger generations:

    The overwhelming majority (95%) of Americans aged 55 and older agree that aging in place – “the ability to live in one’s own home and community safely, independently, and comfortably, regardless of age, income, or ability level” – is an important goal for them. This is up from 93% in 2023.

    The Mayfair Set v 2.0

    Spiv

    During the summer of 1999, a set of documentaries by Adam Curtis covering the reinvention of business during the latter half of the 20th century was broadcast. I got to discover The Mayfair Set much later on. The documentaries covered how the social contract between corporates and their communities broke down, and how buccaneering entrepreneurs disrupted societal and legal norms for profit. There is a sense of déjà vu watching the series now, from Meta’s business pivots to the UK government’s approach to intellectual property rights for the benefit of generative AI model building.

    It probably won’t end well, with the UK population being all the poorer for it.

    The Californian Ideology

    As to why The Mayfair Set 2.0 is happening, we can go back to a 1995 essay by two UK-based media theorists, Richard Barbrook and Andy Cameron, who were at the University of Westminster at the time. It was originally published in Mute magazine.

    This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. 

    It reads like all these things at once:

    • A prescient foreshadowing from the past.
    • Any Stewart Brand op-ed piece from 1993 onwards.
    • The introduction from an as-yet ghost written book on behalf of Sam Altman, a la Bill Gates The Road Ahead.
    • A mid-1990s fever dream from the minds of speculative fiction authors like Neal Stephenson, William Gibson or Bruce Sterling.

    What the essay makes clear is that Peter Thiel, Larry Ellison and Elon Musk are part of a decades-long continuum of the Californian Ideology, albeit greatly accelerated, rather than a new thing. One of the main differences is that the digital artisans no longer have a chance to get rich with their company through generous stock options.

    Jobsmobile

Even Steve Jobs fitted the pattern. For a hippy, he drove a 5 litre Mercedes sports car, parked in the handicapped spaces in the Apple car park and had a part in firing Apple’s first gay CEO, Michael Scott, because of homophobia and Scott’s David Brent-like handling of Black Wednesday. It may be a coincidence that Tim Cook didn’t come out publicly as gay until over three years after Steve Jobs died.

    … a European strategy for developing the new information technologies must openly acknowledge the inevitability of some form of mixed economy – the creative and antagonistic mix of state, corporate and DIY initiatives. The indeterminacy of the digital future is a result of the ubiquity of this mixed economy within the modern world. No one knows exactly what the relative strengths of each component will be, but collective action can ensure that no social group is deliberately excluded from cyberspace.

A European strategy for the information age must also celebrate the creative powers of the digital artisans. Because their labour cannot be deskilled or mechanised, members of the ‘virtual class’ exercise great control over their own work. Rather than succumbing to the fatalism of the Californian Ideology, we should embrace the Promethean possibilities of hypermedia. Within the limitations of the mixed economy, digital artisans are able to invent something completely new – something which has not been predicted in any sci-fi novel. These innovative forms of knowledge and communications will sample the achievements of others, including some aspects of the Californian Ideology. It is now impossible for any serious movement for social emancipation not to incorporate feminism, drug culture, gay liberation, ethnic identity and other issues pioneered by West Coast radicals. Similarly, any attempt to develop hypermedia within Europe will need some of the entrepreneurial zeal and can-do attitude championed by the Californian New Right. Yet, at the same time, the development of hypermedia means innovation, creativity and invention. There are no precedents for all aspects of the digital future. As pioneers of the new, the digital artisans need to reconnect themselves with the theory and practice of productive art. They are not just employees of others – or even would-be cybernetic entrepreneurs.

    They are also artist-engineers – designers of the next stage of modernity.

    Barbrook and Cameron rejected the idea of a straight replication of the Californian Ideology in a European context. Doing so, despite what is written in the media, is more like the rituals of a cargo cult. Instead they recommended fostering a new European culture to address the strengths, failings and contradictions implicit in the Californian Ideology.

    Chart of the month: consumer price increases vs. wage increases

This one chart of consumer price increases versus wage increases from 2020 to 2024 tells you everything you need to know about UK consumer sentiment and the everyday struggle to make ends meet.

    Consumer prices vs. wage increases

    Things I have watched. 

The Organization – Sidney Poitier’s last outing as Virgil Tibbs. The Organization as a title harks back to the 1950s, when the FBI was still denying that the Mafia even existed. Organised crime in popular culture was thought of as a parallel corporation similar to corporate America, but crooked; it featured in the books of Richard Stark. This was despite law enforcement stumbling on the American mafia’s governing body in 1957. Part of this was down to the authorities believing that the American arm of the mafia was a bulwark against communism. Back to the film: it starts with an ingenious heist set piece and then develops through a series of twists and turns across San Francisco. It was a surprisingly rewarding film to watch.

Nikita – Nikita is an early Luc Besson movie made after Subway and The Big Blue. It’s an action film that prioritises style and attitude over fidelity to tactical considerations. The junkies at the start of the film feel like refugees from a Mad Max film who happen to have invaded a large French town at night. It is now considered part of the ‘cinéma du look’ film movement of the 1980s through to the early 1990s, which also features films like Diva and Subway. Jean Reno’s character of Victor the Cleaner foreshadows his later breakout role as Léon. It was a style of its time, drawing on the vibes of the more artistic TV ads and music videos, Michael Mann’s Miami Vice TV series and his films Thief and Manhunter.

Stephen Norrington’s original Blade film owes as much to rave culture and cinéma du look as it does to the comic canon on which it’s based. It’s high energy and packed with personality, rather like a darker version of the first Guardians of The Galaxy film. Blade as a character was influenced by blaxploitation characters like Shaft, and first appeared in a Marvel series about a team of vampire hunters. Watching the film almost three decades after it came out, it felt atemporal – from another dimension rather than from the past per se. Norrington’s career came off the rails after his adaptation of The League of Extraordinary Gentlemen did badly at the box office and star Wesley Snipes went to jail for tax-related offences.

The Magnificent Seven – I watched the film a couple of times during my childhood. John Sturges had already directed a number of iconic films: Bad Day at Black Rock and Gunfight at the OK Corral. With The Magnificent Seven, he borrowed from Seven Samurai. It was a ‘Zapata western’, covering the period of the Mexican revolution, and was shot in Cuernavaca, Mexico. The film did two things to childhood me: it made me curious about Japanese cinema and about storytelling. There are some connections to subsequent Spaghetti Westerns:

• Sergio Leone’s A Fistful of Dollars (shot in 1964) would borrow from another Akira Kurosawa film, Yojimbo.
    • Eli Wallach played a complex Mexican villain in both The Magnificent Seven and Leone’s The Good, The Bad & The Ugly.
• The visual styling of the film is similar to spaghetti westerns, though the clothes were still too clean. Yul Brynner’s role as the tragic hero in black is a world away from the traditional Hollywood coding of the good guys wearing white hats (or US cavalry uniforms).
• The tight, sparse dialogue set the standard for the Dollars Trilogy and action films moving forward.
• Zapata westerns were the fuel for more pro-leftist films in the spaghetti western genre. While The Magnificent Seven still has a decidedly western gaze, it took on racism in a surprisingly on-the-nose way for a Hollywood film of its era.

    Watching it now as a more seasoned film watcher only sharpened my appreciation of The Magnificent Seven.

Breaking News by Johnnie To feels as much about now as about the time the film was shot 20 years ago. The first time I watched it was on a seat-back screen on a Cathay Pacific flight. Back then I was tired and just let the film wash over me. This time I took a more deliberate approach to appreciating it. In the film, the Hong Kong Police try to control and master Hong Kong public opinion as a robbery goes wrong. However, the police don’t have it all their own way, as the criminals wage their own information campaign. The film also has the usual tropes you expect from the Hong Kong heroic bloodshed genre: amazing plot twists and choreographed action scenes, along with spectacular locations within Hong Kong itself. Watching it this time, I got to appreciate details such as the cowardly deadbeat dad Yip, played by veteran character actor Suet Lam.

    Useful tools.

    Current and future uncertainties.


This could be used as a set of thought starters for thinking about business problems, for horizon scanning and scenario planning. It’s ideal as fuel for you to then develop a client workshop from, but I wouldn’t use something this information-dense in a client-facing document. You can download it as a high-resolution PDF here.

    Guide to iPhone security

Given the use of phone snatching to take over bank accounts, and the need to secure work phones, the EFF guide to securing your iPhone has a useful set of reminders and how-to instructions for privacy and security settings here.

    Novel recommendations

I got this from Neil Perkin: an LLM-driven fiction recommendation engine trained on Goodreads data (which reminds me, I need to update my Goodreads profile). When I asked it for ‘modern spy novels with the class of John le Carré’ it gave me Mick Herron’s Slow Horses, Chris Pavone’s The Expats and Charles Cumming’s The Trinity Six. All were solid recommendations.

    Smartphone tripod

Whether it’s taking a picture of a workshop’s forest of post-it notes or an Instagrammable sunset, a steady stand can be really useful. Peak Design (who were falsely accused of being a ‘snitch‘) have come up with a really elegant mobile tripod design that utilises the MagSafe section on the back of an iPhone.

    Apple Notes alternative

I am a big fan of Apple Notes as an app. I draft in it, and sync ideas and thoughts across devices using it. But for some people that might not work – different strokes for different folks. I was impressed by the quality of Bear, which is a multi-platform alternative to the default Notes app.

    The sales pitch.

    I am now taking bookings for strategic engagements; or discussions on permanent roles. Contact me here.

    More on what I have done here.

    bit.ly_gedstrategy

    The End.

Ok, this is the end of my February 2025 newsletter. I hope to see you all back here again in a month. Be excellent to each other and onward into March.

    Don’t forget to share if you found it useful, interesting or insightful.

    Get in touch if there is anything that you’d like to recommend for the newsletter.

  • 2024 iPad Pro

In my take on the 2024 iPad Pro I am going to look at things through three lenses, now that the initial hot takes have cooled down. These three lenses are:

    • Hardware
    • Semiconductors
    • Advertisement

Apple and Microsoft both push their most powerful tablets, like the 2024 iPad Pro, as creator tools. However, at the time of writing I have been working alongside creative teams in a prominent ad agency, and both the creative and strategic elements of the work we were doing were pulled together using different software but the same hardware: Apple MacBook Pro computers and large secondary monitors. An illustrator attached a ‘graphics tablet‘ alongside their laptop to provide additional tactile control, in the same way that I am known to use an outboard Kensington trackball for additional fine control in creating presentation charts.

    Where I have seen iPads used:

• Seniors (older executives) replying to emails – I suspect it’s because the screen is bigger than a smartphone’s.
    • As a media player device. The iPad is the travel and bedside equivalent of the book and the portable DVD player.
• As a presentation device. Friends who give a lot of public presentations at conferences, and one who works as a university lecturer, use the iPad as a device to present from in place of lugging around a laptop.

    In all of these use cases, there isn’t that much to differentiate iPad models and the main limitations are user intent or software-related.

My parents use an iPad I bought them to keep in touch with me. We started using an iPad as a Skype client over a decade ago. Then iMessage and FaceTime started to make more sense, particularly as they started getting Skype spam. It’s the computing equivalent of a kitchen appliance: largely intuitive, and very little can go really wrong – that’s both the iPad’s strength and its weakness.

Secondly, there is the confusion of the Apple iPad product line-up, which is at odds with the way Apple got its second wind. In Walter Isaacson’s flawed biography of Steve Jobs, one of the standout things that the returning CEO did was ruthlessly prune the product line-up.

He made it into a 2 x 2 grid: professional and consumer, portable and desktop. For most of the past number of years, the iPhone has gone down this ‘pro and consumer’ split.

    The iPad line-up is less clear cut to the casual observer:

    • iPad Mini
    • iPad
    • iPad Air
    • iPad Pro

In addition, there are Apple Pencils – a smarter version of the stylus that was used before capacitive touchscreens became commonplace. Some of these pencils work with some devices, but not others. It’s a similar case with other Apple accessories like keyboards that double as device covers. All of which means that your hardware accessories need an upgrade too. This is more than just getting a new phone case. It’s more analogous to having to buy a new second monitor or mouse every time you change your computer.

    With all of that out of the way, let’s get into hardware.

    Hardware

The 2024 iPad Pro launched before the Apple Worldwide Developer Conference, so we had no idea how the device would work in conjunction with iPadOS 18. Addressing long-term criticism of the iPad is as much about software as it is about hardware.

The 2024 iPad Pro still doesn’t have a definitive use case, but Apple decided to focus on creativity in their marketing.

    Presumably this is because the main thing to celebrate about the 2024 iPad Pro is increased computing power and creative apps are the most likely to make use of that power. For many ‘non-creative’ use cases, the previous generation of iPad Pro is very over-powered for what it does.

Some of the choices Apple made with the hardware are interesting. The existing iPad Pro was already a thin, lightweight computing device. The 2024 iPad Pro is Apple’s thinnest device ever. This thinness is a clever feat of engineering, but so would be an iPad of the same size with more battery capacity. Instead Apple made the device a bit thinner, with exactly the same battery life as previous models.

The iPad Pro uses two screens, one behind the other, to provide deeper and brighter colours at an extremely high resolution. This provides additional benefits, such as avoiding the screen burn-in that OLED screens were considered vulnerable to.

The camera has moved from the side to the top of the 2024 iPad Pro in landscape mode. This has necessitated a new arrangement of magnets for attachments, which then drove the need for new accessories, including the new Apple Pencil Pro.

    Semiconductors

    The M4 processor is Apple’s latest silicon design and represents a move on from the current processors in Apple’s Mac range.

It is made by TSMC on a leading-edge 3 nanometre process – TSMC’s second-generation 3nm process. Having it as the processor in the 2024 iPad Pro allows Apple and its partners to slowly ramp up production and usage of the new processor to match gains in semiconductor chip yields. This will give them time to iron out any production challenges and resolve any quality issues. Relatively low production volumes would be a good thing prior to the processor being rolled out more widely.
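To see why low early volumes make sense, it helps to look at the first-order Poisson yield model that the semiconductor industry uses as a rule of thumb: the bigger the die and the higher the defect density of an immature process, the fewer good dies per wafer. Below is a minimal sketch with purely illustrative numbers – the die size, wafer size and defect densities are assumptions, not Apple or TSMC figures.

```python
# A minimal sketch of the first-order Poisson die-yield model, with purely
# illustrative numbers - not Apple or TSMC data.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per circular wafer (standard rule of thumb)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Second term corrects for partial dies lost around the wafer edge.
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero random defects: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

die_area = 160.0                      # assumed large die on a 300 mm wafer
gross = dies_per_wafer(300, die_area)
for d0 in (0.5, 0.2, 0.1):            # defect density falls as a node matures
    good = gross * poisson_yield(die_area, d0)
    print(f"D0={d0:.1f}/cm^2 -> ~{good:.0f} good dies out of {gross}")
```

As the assumed defect density falls with process maturity, the share of good dies rises sharply – which is why shipping a new node first in a relatively low-volume product is a sensible way to ride the yield curve.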

Apple seems to be designing the M-series processors in parallel with the A-series processors that have been used in iPhones and iPads in the past, and seems to have them in mind for a wider range of devices.

    Advertisement

    Apple previewed an advertisement to promote the 2024 iPad Pro.

Crush! was executed with a high degree of craft in the production. But it drew a lot of negative reactions from celebrities and current Apple customers, who saw it in terms of:

• It being a wider metaphor for what technology was perceived to be doing to creativity. For instance, Hollywood actors and screenwriters are concerned about streaming and the effects of large language models.
    • Destroying real-life artefacts that consumers have attached meaning to. For instance, I use digital music, but also have a physical music collection that not only reflects my taste, but much more. Real-world experiences now provide respite from the digital world.

With product launches like the iPhone 3G, Apple created adverts that used show-and-tell and were less of a literal metaphor for everything that could be crammed into the device.

    Reversing the Crush! ad makes a similar point, but in a less oppressive way.

And as with everything else in life, there is seldom a time when an idea is truly new. There was an ad by BBH London which used a crush metaphor to demonstrate all the features in LG’s Renoir phone circa 2008. As this circulated around, Apple was perceived as being a copycat.

    Presentation

Given that Apple events are now largely virtual post-COVID, we didn’t have a positive live audience reaction amongst those who ‘got it’ to guide public opinion. Instead it was left on social media ‘contextless’.

The Apple exhibition centre at the new ‘spaceship’ campus doesn’t seem to be used in the way that Apple did live events prior to 2020. Apple held small event screenings for journalists in New York and London.

    But was Crush! bad?

When I first saw it, I thought that it was good from a craft point of view, though I was a bit surprised at how dark the lighting was; it felt a little off-key.

My personal opinion about the concept was that it felt a bit heavy-handed because it was so literal. The creative brief done by a strategist is usually the jumping-off point, not the literal creative concept.

But that doesn’t make it a bad advert; it just felt not particularly clever to someone who is probably more media-literate than the average person. I would go as far as to say it would have been unlikely to win creative advertising awards.

But I was also aware that my opinion didn’t mean the ad wouldn’t be effective. Given the 2024 iPad Pro’s role as M4 guinea pig, Apple probably weren’t hoping for barnstorming sales figures, and in the grand scheme of things the advert just wasn’t that important.

    I was probably as blindsided as Apple was by the depth of feeling expressed in the online reaction.

    TL;DR I don’t know if Crush! really is ‘bad’. Let’s ask some specific questions about different aspects of the ad.

Am I, or the negative responders, the target market?

Maybe, or maybe not. I don’t have a place for it in my current workflow; I still find that a Mac works as my primary creative technology device. But what if Apple were aiming at college kids and first-jobbers? These people wouldn’t come to the 2024 iPad Pro with the same brand ‘baggage’ that I and many of the commentators have.

Working in marketing, the 1984 ad and the Think Different campaigns were classics to me. Hell, I can remember being a bit of an oddball at college as a Mac user. I helped friends get their secondhand Mac purchases up and running.

I remember going to coffee shops or working in the library and seeing a sea of laptop lids emblazoned with the Dell, Gateway, Toshiba and H-P logos. If people were a bit quirky they might have a Sony Vaio instead.

I remember the boos and hisses in the audience at Macworld Boston in 1997, when Apple announced its partnership with Microsoft.

    Even when I worked at Yahoo! during the web 2.0 renaissance, Mac users were second-class citizens internally and externally in terms of our product offering.

In the eyes of young people today, Apple was always there, front and centre. The early iPad or iPhone experience as pacifier. The iPhone as must-have teenage smartphone. The Mac at home and maybe an Apple TV box.

Finally, many high-performing adverts of the past aimed at young adults have left mainstream media and tastemakers nonplussed.

    How did the ad test?

According to anecdotal evidence I have heard from people at IPSOS, a survey found that about half of respondents said they would be interested in finding out more about the 2024 iPad Pro. The younger the respondent, the more likely they were to be interested in the device.

System 1 tested the ad and found that it scored 1.9 out of a possible maximum of 5. In System 1 parlance this indicates somewhere between low and modest long-term brand growth derived from the advertisement. The average score for US advertisements is 2.3, but over half of the ads run in the Super Bowl this year scored between 1 and 2. Which would imply that the ad could be improved; but the devil might be in the details, as implied by the IPSOS research.

Is Crush! just a copycat?

You can have the best creative director in the world who has seen a lot of advertising, but they might not know all advertising. Secondly, the advertising industry is getting rid of long-term professionals. According to the Institute of Practitioners in Advertising, no one retired from the industry in 2023, as staff were ‘phased out‘ of the industry way before retirement age. All of which means that there isn’t the historical memory to know if a campaign is sailing close to plagiarism.

And it isn’t just advertising. Earlier in my career, I got to see former business journalist and newspaper editor Damian McCrystal speak at a breakfast event. One thing stayed with me from his presentation, in which he talked about the financial industry:

    The reasons why we make the same mistakes over-and-over again is because ‘the city’ has a collective institutional memory of about eight years.

Damian McCrystal

So we had Northern Rock, Bear Stearns and Lehman Brothers, despite the fact that pretty much every financier I have ever met had read Liar’s Poker by Michael Lewis – based on his experiences as a bond salesman at Salomon Brothers during the Savings and Loan crisis of the 1980s.

    So no, despite the similarity of the LG Renoir advertisement, I don’t think that Crush! was an intentional copy.

    More related content can be found here.

    More information

    Some thoughts about Apple’s new iPads | Ian Betteridge

    The M4 iPad Pros | Daring Fireball

    Brief Thoughts and Observations on Yesterday’s ‘Let Loose’ iPad Keynote | Daring Fireball

    How Apple’s ‘tone deaf’ iPad ad signals a turning point | FT

    Apple’s New iPad Ad Leaves Its Creative Audience Feeling … Flat – The New York Times

    Apple’s new iPad ad has struck a nerve online. Here’s why | AP News

    Commentary: Apple’s tone-deaf iPad ad triggers our darkest AI fears – CNA

    The Fat iPhone, 11 years on: The iPad’s over a decade old and we’re still not sure what it’s for • The Register

    12 things I learned by switching from the 13-inch MacBook Pro to the 12.9-inch iPad Pro | Macworld

  • Dove 20 years of real beauty

    I was privileged to freelance at Ogilvy on Dove a number of years ago and got to understand the brand a little better during that time. My work on Dove was focused on product advertising for Dove soap in Brazil, the US, Vietnam and the Philippines rather than adding to the master brand canon around beauty standards.

When the 20th anniversary of the master brand campaign rolled around, my LinkedIn feed was filled with posts about 20 years of the Real Beauty (or ‘changing beauty’ as it’s currently articulated) positioning for the Dove brand. I took more of a slow read/write approach to my take on Dove.

    Dove origin.

The origins of Dove lie in the injuries experienced by American servicemen during the Second World War. There was a need for a milder soap to address the needs of burn victims, so moisturising cream (or cleansing cream as it was called in the earlier ads) was included in the soap to rehydrate skin, rather than leaving it excessively dry after stripping off the skin’s natural oils.

    Dove was introduced as a consumer product in 1957. The original advertising focused on the functional benefits of the product.

Decades later, Dove advertising continued to focus on the product’s functional benefits.

For instance, this 1990s advert positions Dove against everyday beauty brands and the premium brand Neutrogena.

    Dove still does functional benefit advertising, but it’s the master brand level advertisements that tend to get the most attention.

    2004.

It is worthwhile considering the context that Dove was entering into with its reinvention. While we were post-9/11, the culture still had the optimism of the early 2000s. Celebrity gossip and paparazzi photos and videos were still a thing. Facebook had been launched for Harvard University students. Myspace had launched a year earlier with a focus on music, and blogging was gaining a head of steam as a social channel. Real Media had launched a streaming music service, but Spotify was still a few years away from launch.

    iTunes music downloads, CD ripping and iPods were reinventing music. Television shows were used to find the next popstars, while Dido and Eminem were dominating radio play.

DVD series box sets were a thing. Season three of the TV show 24 was must-see TV, with Jack Bauer trying to stop a biological terrorist attack while dealing with his own heroin addiction.

    I was using a Nokia smartphone and a Palm Tungsten T personal digital assistant at the time.

    Beauty soap category at the time.

Beauty soap was not a new category. Unilever had arguably marketed the first beauty soap: Pears. By the time real beauty happened, Pears was no longer distributed or marketed by Unilever in the UK. As well as Dove, Unilever owned Lux, which was seen as a ‘milder for your skin’ soap. By this time Lux was a heritage brand that my Grandmother had liked, and its main market focus was Latin America, Africa and South / South East Asia. Lux has since pivoted to a girl-power-like position against societal sexism in its brand-purpose-led advertising.

Procter & Gamble had their own Lux analogue called Camay, which traded on the glamour of famous actresses and socialites. At this time Camay was not seen as contemporary in the UK, but was selling well in Eastern Europe. By a strange twist of fate, P&G sold Camay to Unilever in 2015; by then it was available in Latin America.

Simple soap was a British market competitor that had been part of Smith & Nephew’s spin-off of their consumer products division, done so they could focus on their medical businesses, including advanced wound management. Simple’s positioning was that it contained no unnecessary ingredients and was ideal for sensitive skin.

Nivea had cleansing products like shower gels rather than soap per se, but was in the personal care space.

At the time, Dove – like Palmolive and Simple – might be bought by a housewife and used by all the family. My Mum and Dad still use Dove or Simple soap bars, based on whichever they find first on their supermarket run.

    Real beauty.

Dove’s global brand team wanted to reposition Dove more firmly in the beauty category. The story that is promoted revolves around how the brand team presented the then Unilever board with interview footage of the board members’ own wives and daughters giving their opinions on beauty.

    There were a few iconic images that came out of the campaign.

    Dove.

    The tickbox images that appeared in a lot of out of home executions at the time.


The Dove Evolution video captured what lots of people in the media industry already knew, and tapped into wider public discussions about the use of photo manipulation that were appearing around that time.

    How real beauty memed.

Dove’s outdoor execution on the London Underground had wags using pens and markers to tick the negative answers. I remember seeing every advert on the escalator in Holborn station with the box ticked. It even memed, with online celebrity news site Holy Moly launching the ‘campaign for real gossip’.

    Campaign for Good Gossip - campaign for real beauty obituary

    Dove Men+Care range.

The Dove brand extension Dove Men+Care was launched in 2010 and now has a comprehensive range of everyday products. Unilever described this as a ‘white space’, but Nivea for Men had been in this space since 1986, and Nivea had sold shaving products to men as far back as the 1920s.

Dove Men+Care’s purpose wasn’t that clear when I worked on Dove, as the master brand is so focused on empowering women and girls.

    We believe that care makes a man stronger, and in order to best care for those that matter to you most, you need to start with care for yourself first.

    Unilever website

This take from the Unilever website about what the Dove Men+Care brand stands for is still very generic: it could cover anything from Gillette or a Jordan Peterson sound bite to Andrew Tate’s various manosphere-oriented, fitness-focused enterprises.

    The risk of a male counterpart.

    It would be a major undertaking to build this into something a bit more pointed, yet fit for purpose. I could understand why it would be low on the priority list, particularly when Gillette’s effort was received so badly at the time.

We know from behavioural science that positive reinforcement works better than taking a negative stance. There were a couple of hypotheses put around at the time:

• Men may use Gillette razors, but the women in their households buy them.
• Women represent the largest growth market for disposable razor systems – due to Gillette’s male market dominance, male consumers’ inertia to change brand once chosen, and facial hair growth – which meant that the Gillette brand team didn’t feel they were taking a risk.

In both cases, men feature in the advert but may not have been the ad’s target audience.

However, I think the media buying suggests these hypotheses were wrong: the ad was run during a prime TV spot on the Super Bowl. Critics point to Procter & Gamble taking an $8 billion non-cash writedown for the shaving giant.

    P&G reported a net loss of about $5.24 billion, or $2.12 per share, for the quarter ended June 30, due to an $8 billion non-cash writedown of Gillette. For the same period last year, P&G’s net income was $1.89 billion, or 72 cents per share.

    …The charge was also driven by more competition over the past three years and a shrinking market for blades and razors as consumers in developed markets shave less frequently. Net sales in the grooming business, which includes Gillette, have declined in 11 out of the last 12 quarters.

    Reuters – P&G posts strong sales, takes $8 bln Gillette writedown (July 30, 2019)

From a societal perspective, masculinity-related topics are in general a cultural landmine, particularly when #allmenaretrash and similar hashtags are now commonplace, so it is harder to use the kind of nuance Gillette attempted in an effective manner.

Egard, a watch brand, made this response video to Gillette.

    Impact

    Dove grew as a brand and became a form of social currency. It made the agencies involved (Ogilvy and Edelman) famous for years to come. What Edelman actually contributed to the creative concept is open for debate.

    In terms of the Dove real beauty brand purpose, the results seem to be more mixed.

The current Dove master brand ad ‘The Code’ seems very similar to the original ‘Evolution’ ad; the only change is that where Photoshop was being used by an expert, AI has now put image manipulation in the hands of teenage girls.

The distortion remains the same. The Girlguiding Girls’ Attitudes Survey run at the end of last year indicated that things have got worse over the past decade rather than better. And this was supported by another research-driven article I read in The New York Times: What It’s Like to Be a 13-Year-Old Girl Today.

While the public discourse has changed, behaviours haven’t, and the wellbeing of girls and women seems to be in a similar or worse position today than it was 20 years ago.

Part of this is likely to be societal: we live in more anxious times, and the status quo might have been even worse had Dove not sparked the kind of public discourse it did.

    Brand purpose?

    At the time when Dove’s campaign came out, I can’t remember purpose really being a ‘thing’. The closest thing I could remember in the marketing zeitgeist is that people would occasionally talk about technology in terms of the pitch a young Steve Jobs made to PepsiCo executive John Sculley: do you want to sell sugared water all your life, or do you want to change the world?

    There was talk about changing attitudes and creating a movement – but it was seen in terms of creativity, rather than a higher purpose.

At the time, Unilever’s fragrance brand Lynx / AXE was running creative like this.

    AXE / Lynx is still the world’s number one men’s fragrance brand, but its positioning has changed a bit.

    When you smell good, good things happen. You’re a little more confident and life opens up a world of possibilities. We believe that attraction is for everyone and between anyone. It doesn’t matter your race, your sexuality, or your pronouns. If you’re into it and they’re into it, we’re into it. That’s The New AXE Effect.

    Unilever website

Lynx and AXE content wasn’t that far out. Advertising in the late 1990s and early 2000s wasn’t so serene: there were several ad campaigns that were subversive or transgressive in nature.

    A good deal of this was cultural zeitgeist. If you were a creative director in your mid-30s at the time, your terms of reference were very different. You would have likely enjoyed sub-cultures like the rave scene and independent music that drew from 1960s psychedelia and counterculture icons. You probably watched the Jim Rose Circus Sideshow film, one of their TV appearances or attended one of their live shows. Russell Brand was considered funny.

Brands getting attention and critical acclaim, like Sony’s PlayStation games console, Levi’s and Skittles, were taking brand risks with campaigns that were far edgier than we’d be likely to see now. One direct mail shot from Sony PlayStation designed to promote the Tekken 3 fighting game was sent out in a plain manila envelope stamped ‘private and confidential’. Inside was a convincing medical card advising that the recipient receive immediate medical treatment for a potentially serious condition. Some of those mailed were waiting for hospital test results and complained to the authorities.

    Meanwhile in the US, Mountain Dew was promoting pager plans as part of a co-marketing deal. But this was happening in the middle of a moral panic on pagers being a portal to drug dealer hook-ups and teen prostitutes receiving bookings from johns. Kids were being arrested and charged for possessing pagers in schools and colleges.

Failed online business Pets.com had a distinctive shouty voice of a kind we probably wouldn’t see again until Poundland’s ‘teabagged’ social posts.

    Two examples give a good temperature check of what was happening in agency teams at this time up to just before 2010.

The Volkswagen ‘terrorist’ film was used as a door opener by the creative team of Lee Ford and Dan Brooks. It leaked online, much to the bemusement of Volkswagen. The creatives thought it would be well received by a brand marketing team with a sense of humour. While VW didn’t like it, it did get them work with a large production house in the US and London agency Quiet Storm.

The second was Lean Mean Fighting Machine’s Facebook campaign for Dr Pepper in 2010, which referenced an online Brazilian porn clip known as ‘2 girls, one cup’. The client had signed it off without knowing the context. Controversy ensued on Mumsnet and the agency was fired from the account.

Amidst all this cynicism, boundary-pushing and counterculture, Dove’s real beauty would have been distinctive and differentiated, even if it did run a risk of being perceived as cynical, self-serving corporate schmaltz.

    Brand purpose as an idea seems to have gained popular currency after Dove’s campaign for real beauty.

You can see in this chart, based on Google Books data, how English-language mentions of ‘brand purpose’ took off.

Mentions of ‘brand purpose’ over time – data from the Google Books Ngram Viewer
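If you want to poke at the underlying data yourself, the chart can be roughly reproduced in Python. Google doesn’t publish an official Ngram API, so the sketch below uses the Ngram Viewer’s unofficial JSON endpoint; treat the URL and parameters as assumptions that may change without notice.

```python
# A rough sketch using the Ngram Viewer's *unofficial* JSON endpoint.
# The URL and parameters are assumptions and may change without notice.
import requests

params = {
    "content": "brand purpose",
    "year_start": 1980,
    "year_end": 2019,
    "corpus": "en-2019",   # English corpus label used by the current viewer
    "smoothing": 3,
}
resp = requests.get("https://books.google.com/ngrams/json",
                    params=params, timeout=30)
resp.raise_for_status()

for series in resp.json():  # one entry per phrase requested
    years = range(params["year_start"], params["year_end"] + 1)
    for year, share in zip(years, series["timeseries"]):
        print(year, f"{share:.3e}")  # share of all ngrams that year
```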

Brand purpose critic Nick Asbury traces the rise of brand purpose to the 2008 financial crisis and related events such as the Occupy movement, which fits the post-2014 surge in interest. 20 years on, Dove is now seen as emblematic of brand purpose. Dove took on brand purpose as a concept over time, with the increasing prominence of the Dove Self-Esteem Project being a case in point.

    More related posts can be found here.

  • CMOs

    Premature obituaries of CMOs

Almost 11 years ago, business academic Dominique Turpin wrote an article describing CMOs as ‘dead’. Turpin worked at the IMD business school in Switzerland, and the article was a classic bid for thought leadership.

Alicia Tillman, CMO at SAP AG until 2021 and at the time of writing CMO at Delta Airlines, at the UN Women Global Innovation, Technology and Entrepreneurship Industry Forum

It’s just the kind of thumb-stopping headline that drives readership of LinkedIn content, where the article was published.

     … the decline in the CMO’s influence is alarming, especially at companies that claim to put the customer first but in reality are product-driven.

    True, some companies have marketing in their DNA, especially firms that had a visionary founder with a great understanding of the customer. Examples include Ingvar Kamprad at IKEA or the late Steve Jobs at Apple.

    But these are exceptions. The norm these days is that the CEO sets the overall strategy, the R&D and innovation teams design the product, and the CFO determines pricing and departmental budgets. No wonder some CMOs feel unloved and are considering a career change

Dominique Turpin – The CMO is dead (August 21, 2013)

Turpin goes on to explain what he believes are the four causes that together result in the demise of CMOs:

    • Most CMOs aren’t focused on planning and delivering customer value
• Short-termism has meant that organisations have become CFO-focused – à la ‘Neutron’ Jack Welch’s perception of shareholder value, rather than a balanced scorecard approach
    • Marketing impact is hard to measure
    • Organisations lack a clear understanding of what marketing is

Instead Turpin wanted to create a CCO role – chief customer officer. He saw that this role could sit with the CEO, the CFO or the former CMO. While the CFO as CCO might take the fuzziness out of marketing, as Turpin put it, there would be a tension between their natural ‘Neutron Jack Welch’ nature and being customer-centric. What about the CEO? Turpin pointed out that CEOs tend to come from engineering or finance. Both are efficiency-focused disciplines with incremental short-term views, both are barriers to customer-centricity, and both would be largely blind to long-term effects.

    3G Capital

Move forward six years, and a CFO-driven approach to running Kraft Heinz by 3G Capital saw a massive destruction of value – some $15.4 billion of the $50 billion paid to buy out the business in the first place. The quarterly dividend got cut and shareholders filed lawsuits. The founder of 3G Capital talked extensively about the GE Way driven by Jack Welch as a key influence on their approach.

    Scott Galloway


    Professor Scott Galloway is a serial entrepreneur who is provocative, interesting and often right.

On CMOs, Galloway said:

    “If you’re the CMO that shows up and says ‘I need more budget so that I can do a brand identity study, can spend money on advertising and get invited to great conferences and hang out with people who are more interesting and better looking than me by spending media dollars that are less and less impactful’ then you’re like the second lieutenant in Vietnam — you’re dead in 18 months or less,”

    Scott Galloway

    There is a lot to unpack in that statement, but it doesn’t spell the end of the CMO or advertising.

    Pax Americana to Pax Australis

In this post Galloway taps into a wider criticism of American marketers that we’ve seen over the past few years. When I was in college, American professors and marketing thinkers set the tempo for the profession around the world. As Mark Ritson recently wrote:

    In the 20th Century marketing was American. The discipline, the theories, the textbooks, and the approach. To arrive at Wharton in 1994 was to see a future that was not just untenable in the UK, it was one nobody back home was even aware of. Marketing was a decade ahead of anything in the UK. The American marketers I met, academics and practitioners, were so advanced it made my head spin.

    Mark Ritson – Effectiveness ignorance has left American marketing lagging behind the rest of the world (Marketing Week)

Textbooks by the likes of Philip Kotler and David Aaker were the received wisdom of old white academics. None of this thinking was evidence-based, beyond anecdotal successful case studies.

One of the ‘secrets’ that marketers and CMOs at large FMCG companies like Mars, Procter & Gamble, Kellogg’s and Unilever had was access to Australian-based marketing science research. This was primarily via their long-term sponsorship of the Ehrenberg-Bass Institute in Adelaide, Australia.

This body of research in turn shook up wider marketing thinking when Professor Byron Sharp published How Brands Grow. (There were other important works as well, such as publications from the UK’s Institute of Practitioners in Advertising – notably The Long and The Short of It and Effectiveness In Context.)

    While marketing outside the US was shaken up by the works of Sharp and Binet, the US continued onwards in its marketing the way it always had.

Mark Ritson’s recent column on the state of American marketing caused an international furore in the marketing community, despite Marketing Week being a UK-only publication. Ritson complained about American marketers’ lack of awareness of the importance of marketing effectiveness.

This comment on Mark Ritson’s post sharing the article, while humorous, has a lot of truth in it:

    Okay, okay. Stop throwing big words like vituperative and effectiveness at us simple-minded Americans. Sure, we handed the keys of marketing over to software engineers at the turn of the century. Maybe that led us to a fair bit of myopic strategery here in the very exceptional United States of America.

    Based on the comments I’ve read, when asked to define effectiveness the answer provided is essentially #IYKYK. Why let a golden opportunity to school American marketers on the wise ways of the world beyond our ample shores slip through your fingers?

    I don’t care if you call it Marketingwirksamkeit or efficacité du marketing, what matters is more than just numbers on the scoreboard, but how the points were won. Just the other day I was reading a scholarly piece on the effectiveness of meme marketing by the faculty of Griffith University in Queensland. It hit me. My American, effectiveness ignorance has blinded me. I now lag behind the rest of the world.

    Michael Simmons, Sendofy

While the US obsessed over marketing technology, the rest of the world was attempting (imperfectly) to build knowledge about marketing effectiveness rather than the marketing technology stack. Having worked in the technology sector, it was the last place I would have gone to for marketing lessons. At best, marketing as a function was sales support.

A case in point in this misapplication of focus was the relative performance of this year’s Super Bowl advertisements. I realise that this delve into marketing effectiveness has at least made the case for a dramatic change in US-based CMOs. But not so fast: CEOs don’t think that their CMOs and marketing teams are performing that badly.

    CMOs: from the dog house to the boathouse

American agency Boathouse has been running an annual survey of CEO attitudes to marketing and their CMOs. The third edition of this survey was published in January this year.

    Some of the highlights of the report include:

    • CEOs identified what they want their CMOs and marketing teams to address: driving growth, market share/sales, differentiation, improving brand reputation, and “transforming company narrative.”
    • 49 percent of CEOs believe their marketing team is “best in class.”
    • 40 percent of CMOs are rated “best in class.”
    • One point I found quite interesting was that half of CEOs believe the short tenure many CMOs have “is a sign of success, not failure.”
• CMOs’ perceived trust with the C-suite stands at 43 percent, and at 41 percent with the CEO. This has doubled over the three years that Boathouse has been running this research project.
    • In a bit of an odd note: CEOs believe CMO loyalty is growing, stating that “in a dramatic shift from 2021, the CEO’s perception of CMO loyalty is growing, [as] 8 in 10 CEOs perceive CMOs would take a bullet for them (up from 3 in 10 in 2021).”
• 76 percent of CEOs are “integrating A.I. into their organizations” and 90 percent believe that their CMOs are engaging with AI for the benefit of the company in areas such as “content, analytics (about two-thirds), and customer experience or research (half).”

    TL;DR CMOs don’t sound as if they will be disappearing in the next 18 months or so as Galloway believed.

C-suites without powerful CMOs get punished.

You could argue that CEOs are the ultimate arbiters of CMO success, failure and tenure. But the reality for public companies is that large investors can vote the CEO of most companies out of office. The key influencers in this decision process are the equity analysts who sit within, or advise, client organisations such as fund managers.


Even if CEOs don’t think that their marketing is important enough to have a CMO, their shareholders do. Equity analysts have indicated that they rate brand strength and marketing as more important than reported profit or leadership quality.

Given that most of the C-suite can’t speak the language of marketing or do it effectively, they really need their CMOs. Companies like 3G Capital and Reckitt Benckiser have been punished in public markets for failing at marketing, despite operational and financial excellence. Unilever has been punished for its focus on ESG at the expense of brand building and is even under regulatory investigation.

    More information

    “You’re Dead In 18 Months Or Less”: Scott Galloway On The Future Of CMOs – B&T

    UPS’ Removal Of CMO Role Reveals The Real Problem Facing The C-Suite

    Boathouse CEO Study on Marketing and the CMO | Boathouse

    Fortune 500 companies are cutting CMO jobs | Fortune

    The Unspoken Truth About CMO Churn | AdWeek

    Marketers, investing in market research is not superfluous | Marketing Week

    Gartner Survey Shows 73% of CMOs Will Fall Back on Low Risk, Low Return Strategies for 2021

    9 recent CMO departures that point to the radical transformation of marketing | Marketing Dive

    Mark Read: CMOs have become too much like chief communications officers | PR Week

    Coca-Cola’s decision to scrap the CMO role for a CGO should begin to pay off anytime soon | Observations In Marketing | The Thinking Marketer