Search results for: “Kevin Kelly”

  • Intelligence per watt

    My thinking on the concept of intelligence per watt started as bullets in my notebook. It was more of a timeline than anything else at first and provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt. 

    TL;DR (too long, didn’t read)

    Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering. 

    Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.  

    Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.  

    A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits. 

    Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.  

    As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking. 

    Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.

    Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use. 

    Integration

    In 1926 or 27, Loewe (now better known for their high-end televisions) created the 3NF[i].

    The 3NF was not a computer; instead it integrated several radio parts into one glass-envelope vacuum valve. It contained three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components were sealed in their own glass tubes. Normally each triode would have sat in its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous. 

    Post-war scientific boom

    Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we’d now think of as the integrated circuit (IC). These ideas only became practical once breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and the Sprague Electric Company to connect different types of components on one piece of silicon to create the IC. 

    Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define the IC, with what is now called a monolithic IC being considered a ‘true’ one. Kilby’s version wasn’t a true monolithic IC. As with most inventions, it was really the child of several interconnected ideas that coalesced around a given point in time. In the case of ICs, this happened in the midst of materials and technology developments including data storage and computational solutions, from the idea of virtual memory through to the first solar cells. 

    Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].

    TTL (transistor-transistor logic) circuitry was invented at TRW in 1961; TRW licensed it out for use in data processing and communications, propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems still rely on the mainframe as a concept. 

    AI’s early steps 

    Science Museum highlights

    What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science. 

    Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:

    • Reasoning as search – essentially a step-wise, trial-and-error approach to problem solving, compared at the time to wandering through a maze and backtracking when a dead end was found (see the sketch after this list). 
    • Natural language – where related phrases existed within a structured network. 
    • Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer. 
    • Single layer neural networks – to do rudimentary image recognition. 
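    A minimal sketch of the ‘reasoning as search’ idea from the list above: depth-first search over a toy maze, retreating at dead ends. The maze encoding and function are invented for illustration and are not code from any historical AI system.

    ```python
    # Illustrative sketch only: 'reasoning as search' in the spirit of early symbolic AI.
    # The maze, its encoding and the function are invented for illustration.

    def solve(maze, pos, goal, path=None):
        """Depth-first search with backtracking: try a step, retreat on dead ends."""
        if path is None:
            path = [pos]
        if pos == goal:
            return path
        row, col = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # candidate moves
            nxt = (row + dr, col + dc)
            r, c = nxt
            if 0 <= r < len(maze) and 0 <= c < len(maze[0]) \
                    and maze[r][c] == 0 and nxt not in path:
                result = solve(maze, nxt, goal, path + [nxt])
                if result:                                   # a route was found
                    return result
        return None                                          # dead end: backtrack

    maze = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]   # 0 = open, 1 = wall
    print(solve(maze, (0, 0), (0, 2)))
    # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
    ```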

    By the early 1970s, AI researchers had run into a number of problems, some of which still plague the field to this day:

    • Symbolic AI wasn’t fit for purpose when it came to solving many real-world tasks, like crossing a crowded room. 
    • Trying to capture imprecise concepts with precise language.
    • Commonsense knowledge was vast and difficult to encode. 
    • Intractability – many problems require an exponential amount of computing time. 
    • Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems. 

    By 1966, US and UK funding bodies were frustrated with the lack of progress on the research being undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to the major centres researching AI was cut by both the US and UK governments. Despite the reduction of funding to the major centres, work continued elsewhere. 

    Mini-computers and pocket calculators

    ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first commercially successful minicomputer, the PDP-8, in 1965. The cost of mini-computers allowed them to be used to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.

    A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments.  It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.

    LISP machines and PCs

    AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that had enabled LISP machines carried on and surpassed them, as the cost of computing dropped due to mass production. 

    The rise and decline of LISP machines was not only down to Moore’s Law, but also to Makimoto’s Wave. While Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so, Tsugio Makimoto observed roughly 10-year pivots between standardised semiconductor processors and customised processors[v]. The rise of personal computing drove a pivot towards standardised architectures. 

    PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing, alongside Intel’s x86 processors. 

    Personal computing started in businesses when office workers brought in a computer to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance. 

    Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own. 

    Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.

    A Bicycle for the Mind

    Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across in everything from briefing product teams to press interviews. As a concept, a 1990 filmed interview with Steve Jobs articulates the context of this saying particularly well. 

    In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].

    The ‘bicycle for the mind’ concept was repeated in early Apple advertisements of the time[vii] and even informed the Macintosh project codename[viii].

    Jobs articulated a few key concepts. 

    • Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, you programmed a computer if you bought one – which was the reason why early personal computer owners in the UK went on to birth a thriving games software industry, including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software. 
    • The idea of a personal, individual computing device (rather than a shared resource). My own computer builds on years of how I have adapted to and used my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac. My Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made. 
    • At the time Jobs originally gave the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea for something greater. 

    Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].

    Networks

    Work took a leap forward with networked office computers, pioneered by Apple’s Macintosh Office[xi]. This was soon overtaken by competitors. It facilitated workflow within an office and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services. 

    At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology needed to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched the first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].

    In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repair men to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape. 

    Often messages were taken by a receptionist in their office, if they had one. Or, more likely, by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes. 

    Fuzzy logic 

    The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice. 

    Advertisement for a fuzzy logic rice cooker

    Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was underappreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:

    “Some of the fuzzy concepts may be valid in the U.S.,”

    “The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”

    “But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “

    By the end of the 1990s, fuzzy logic was embedded in various consumer devices: 

    • Air-conditioning units – understand the room, the temperature difference inside and out, and the humidity, then switch on and off to balance cooling and energy efficiency.
    • CD players – enhance error correction on playback, dealing with imperfections on the disc surface.
    • Dishwashers – sense how many dishes are loaded and what type of dirt is on them, then adjust the wash programme.
    • Toasters – recognise different bread types and the preferred degree of toasting, and perform accordingly.
    • TV sets – adjust the screen brightness to the ambient light of the room and the sound volume to how far away the viewer is sitting from the TV set. 
    • Vacuum cleaners – adjust suction power as they move from carpeted to hard floors. 
    • Video cameras – compensate for the movement of the camera to reduce blurred images. 

    Fuzzy logic sold on the benefits and concealed the technology from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes it could rely on small, lower-power specialist chips[xvi] offering a reasonable amount of intelligence per watt, some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
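    Below is a minimal sketch of how a fuzzy controller of this kind works, using a vacuum cleaner adjusting suction power as the example. The membership functions, thresholds and power levels are assumptions invented for illustration, not taken from any real appliance firmware.

    ```python
    # Illustrative sketch only: a toy fuzzy controller in the spirit of the consumer
    # devices above. Membership functions, rule weights and values are invented.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def suction_power(dirt_level):
        """Map a 0-10 dirt sensor reading to a motor power percentage."""
        # Fuzzify: how 'clean', 'dusty' and 'dirty' does the reading look?
        clean = tri(dirt_level, -1, 0, 4)
        dusty = tri(dirt_level, 2, 5, 8)
        dirty = tri(dirt_level, 6, 10, 11)
        # Each rule maps a fuzzy input to a crisp output level (percent power).
        rules = [(clean, 30), (dusty, 60), (dirty, 100)]
        # Defuzzify with a weighted average of the fired rules.
        total = sum(weight for weight, _ in rules)
        return sum(weight * power for weight, power in rules) / total if total else 0

    for reading in (1, 5, 9):
        print(reading, round(suction_power(reading)))   # 1 -> 30, 5 -> 60, 9 -> 100
    ```

    The appeal for appliance makers was that a handful of rules like these could run on a small, cheap microcontroller rather than anything resembling a general-purpose computer.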

    Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail. 

    Decision support systems & AI in business

    Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used were varied, but some used rules-based systems. These were used in at least some capacity to reduce manual office work tasks. For instance, credit card approvals were processed based on rules that included various factors such as credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort involving highly paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised. 
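    A minimal sketch of the kind of rules-based approval flow described above. The field names, thresholds and rules are hypothetical, invented for illustration rather than drawn from any real credit provider’s system.

    ```python
    # Illustrative sketch only: a toy rules-based decision system. Thresholds,
    # field names and rules are invented and not taken from any real provider.

    def assess_application(applicant: dict) -> str:
        """Apply fixed business rules in order; flag borderline cases for an analyst."""
        rules = [
            (lambda a: a["credit_score"] < 500,             "decline"),
            (lambda a: a["existing_defaults"] > 0,          "decline"),
            (lambda a: a["credit_score"] >= 700
                       and a["debt_to_income"] < 0.35,      "approve"),
        ]
        for condition, outcome in rules:
            if condition(applicant):
                return outcome
        return "refer to analyst"   # anything the rules don't settle gets manual review

    print(assess_application({"credit_score": 720, "existing_defaults": 0, "debt_to_income": 0.2}))
    # approve
    print(assess_application({"credit_score": 640, "existing_defaults": 0, "debt_to_income": 0.4}))
    # refer to analyst
    ```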

    Three decades on, IBM had a similar problem with its Watson offerings, with a particularly high-profile failure in mission-critical healthcare applications[xviii]. Secondly, a lot of tasks were ad hoc in nature, or required transposing data across disparate systems. 

    The rise of the web

    The web changed everything. The underlying technology allowed for dynamic data. 

    Software agents

    Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user’s computer. The software client monitored price changes for products that the customer was interested in buying, then notified the user when the monitored price dropped to a level the customer had set. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].  
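    The general shape of such a price-watching agent can be sketched in a few lines. This is illustrative only: fetch_price is a hypothetical stand-in for however the real client retrieved prices, and no real service or API is implied.

    ```python
    # Illustrative sketch only: the general shape of a price-watching agent like the
    # one described above. fetch_price() is a hypothetical placeholder.
    import time

    def fetch_price(product_id: str) -> float:
        """Hypothetical stand-in: return the current advertised price for a product."""
        raise NotImplementedError

    def watch_price(product_id: str, target_price: float, interval_seconds: int = 3600):
        """Poll the price periodically and notify the user once it drops to the target."""
        while True:
            price = fetch_price(product_id)
            if price <= target_price:
                print(f"{product_id} is now {price:.2f}, at or below your target {target_price:.2f}")
                return price
            time.sleep(interval_seconds)   # wait before checking again
    ```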

    The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now. 

    Web search

    The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering large numbers of cheap Linux-powered computers together and sharing the workload between them to power web search. As search engines started trying to make sense of an exponentially growing web, machine learning became part of the developer toolbox. 

    Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].

    All the major search engines looked at how deep learning could help improve search results relevance. 

    Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built using optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating together in clusters. 

    General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products used for everything from embedded applications to early mobile / portable devices and computers drove progress in improving intelligence per watt.

    Makimoto’s tsunami back to specialised ICs

    When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory.  From the mid-1990s to about 2010, Makimoto’s predicted phase was stuck in ‘standardisation’. It just worked. But several factors drove the swing back to specialised ICs. 

    • Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in improvements in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of the key technologies that allow EUV photolithography[xxv]. During this time Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy’s research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once it was in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that the production processes which improve IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm. 
    • Progress in EDA (electronic design automation). Production process improvements in IC manufacture allowed for an explosion in device complexity as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led to technologists thinking about the idea of very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve the intelligence per watt[xxx].
    • Increased focus on portable devices. A combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones, to laptop and tablet computers – disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi]. In reality it would do a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply. Manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].

    Apple’s first in-house-designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. It also marked a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].

    New devices also meant new use cases that melded data on the web, on the device, and in the real world. I started to see this in action working at Yahoo!, with location data integrated onto photos and social data, like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].

    Real-world QR codes helped connect online services with the real world, for example mobile payments or reading content online, like a restaurant menu or a property listing[xxxvii].

    I labelled this web-world integration a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008, as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].

    Deep learning boom

    Even as there was a post-LISP machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s due to the large amount of computing power needed. Internet companies like Google gained experience in large clustered computing AND had a real need to explore deep learning. Use cases included image recognition to improve search, and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing on Bayesian approaches to problem-solving[xli].  

    A key factor in deep learning’s adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build large language models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres with less thought given to the electrical infrastructure required[xliii].  

    Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later, OpenAI built on Google’s work with the generative pre-trained transformer model, better known as GPT[xlvi].
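    For readers who want the mechanics, a minimal sketch of the scaled dot-product attention at the core of the transformer architecture is shown below in NumPy. The shapes and data are arbitrary; a real model adds learned projections, multiple attention heads, masking and positional information.

    ```python
    # Illustrative sketch only: scaled dot-product attention as described in the
    # cited transformer paper, reduced to its bare essentials in NumPy.
    import numpy as np

    def attention(Q, K, V):
        """Each query attends over all keys; outputs are weighted sums of the values."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
        return weights @ V                                         # blend values by weight

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
    K = rng.normal(size=(6, 8))   # 6 key positions
    V = rng.normal(size=(6, 8))   # one value vector per key
    print(attention(Q, K, V).shape)   # (4, 8)
    ```

    Every query position is compared against every key position, which is part of why the architecture is so computationally intensive as sequences grow.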

    Since 2018 we’ve seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations for building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].

    These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii]

    The current direction of development in generative AI services is raw computing power, rather than having a more energy efficient focus of intelligence per watt. 

    Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].

    Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom processing chips bought
    Amazon | 196,000 | – | 1,300,000
    Alphabet (Google) | 169,000 | – | 1,500,000
    ByteDance | 230,000 | – | –
    Meta | 224,000 | 173,000 | 1,500,000
    Microsoft | 485,000 | 96,000 | 200,000
    Tencent | 230,000 | – | –

    These numbers provide an indication of the massive deployment of GPT-specific computing power. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘fail whale’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].

    There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks. 

    Agentic AI

    Thinking on software agents goes back to work being done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987, which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented. 

    A classic example of this was Wildfire Communications, Inc., who created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations because it was an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile subscribers on early digital cellular networks. 

    Almost a quarter of a century later, we’re now seeing devices that are looking to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv], and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end is Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany combat aircraft into battle, once it has been directed to follow a course of action by its human colleague in another plane. 

    The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size of computing resources and their power is a trade-off with the amount of fuel or payload it will carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot. 

    As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].  

    This is still a long way from the vision of completely local execution of ‘agentic AI’ on a mobile device, because intelligence per watt hasn’t scaled down to a level that is useful given the vast range of possible uses that would be asked of the agentic AI model. 

    Maximising intelligence per watt

    There are three broad approaches to maximise the intelligence per watt of an AI model. 

    • Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLM services such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). Their developers have been building large data centres to run the models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
    • Optimise models to squeeze the most performance out of them (one common technique is sketched after this list). The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim. 
    • Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples of this range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].  
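    As one example of the second approach, post-training quantisation stores a model’s weights at lower precision to cut memory and energy use. The sketch below shows symmetric 8-bit quantisation of a weight matrix; it is a generic textbook technique shown for illustration, not a description of what DeepSeek, Manus or any other named vendor actually does.

    ```python
    # Illustrative sketch only: symmetric 8-bit post-training quantisation, one common
    # way to squeeze more intelligence per watt out of an existing model.
    import numpy as np

    def quantise_int8(weights):
        """Map float weights onto int8 values plus a single scale factor."""
        scale = np.abs(weights).max() / 127.0            # largest weight maps to +/-127
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantise(q, scale):
        """Recover approximate float weights for use at inference time."""
        return q.astype(np.float32) * scale

    w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
    q, scale = quantise_int8(w)
    error = np.abs(w - dequantise(q, scale)).mean()
    print(q.nbytes, w.nbytes, error)   # ~4x smaller storage, small average error
    ```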

    Even if an AI model can do something, should the model be asked to do so?

    AI use case appropriateness

    We have seen a clear direction of travel over the decades towards more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally. 

    Having an AI run on a cloud service makes sense when you are on a robust internet connection, such as using the wi-fi network at home. This makes sense for general everyday tasks with no information risk, for instance helping you complete a newspaper crossword if there is an answer you are stuck on and the intellectual struggle has gone nowhere. 

    A private cloud AI service would make sense when working with, accessing or processing data held on that service. An example of this would be Google’s Vertex AI offering[lxxv].

    On-device AI models make sense when working with one’s personal, private details such as family photographs, health information, or accessing apps within your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. ***I am using Strava as an example because it is popular and widely-known, not because it is a bad app per se.***

    While businesses have the capability and resources to maintain a multi-layered security infrastructure to protect their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same security. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo’s autonomous taxis. An individual’s mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models down into a laptop, tablet or smartphone? 


    [i] Radiomuseum – Loewe (Opta), Germany. Multi-system internal coupling 3NF

    [ii] (1961) Solid Circuit(tm) Semiconductor Network Computer, 6.3 Cubic inches in Size, is Demonstrated in Operation by U.S. Air Force and Texas Instruments (United States) Texas Instruments news release

    [iii] (2000) The Chip that Jack Built Changed the World (United States) Texas Instruments website

    [iv] Moravec H (1988), Mind Children (United States) Harvard University Press

    [v] (2010) Makimoto’s Wave | EDN (United States) AspenCore Inc.

    [vi] Jobs, S. (1980) Presentation on Apple Computer history and vision (United States) Computer History Museum via Regis McKenna

    [vii] Sinofsky, S. (2019) ‘Bicycle for the Mind’ (United States) Learning By Shipping

    [viii] Hertzfeld, A. (1981) Bicycle (United States) Folklore.org

    [ix] Jones, D. (2016) Codemasters (United Kingdom) Retro Gamer – Future Publishing

    [x] Engelbart, D. (1968) A Research Center For Augmenting Human Intellect (United States) Stanford Research Institute (SRI)

    [xi] Hormby, T. (2006) Apple’s Worst business Decisions (United States) OSnews

    [xii] Honan, M. (2009) From Brick to Slick: A History of Mobile Phones (United States) Wired

    [xiii] Ericsson History: The Nordics take charge (Sweden) LM Ericsson.

    [xiv] Singh, H., Gupta, M.M., Meitzler, T., Hou, Z., Garg, K., Solo, A.M.G & Zadeh, L.A. (2013) Real-Life Applications of Fuzzy Logic – Advances in Fuzzy Systems (Egypt) Hindawi Publishing Corporation

    [xv] Reid, T.R. (1990) The Future of Electronics Looks ‘Fuzzy’. (United States) Washington Post

    [xvi] Kushairi, A. (1993). “Omron showcases latest in fuzzy logic”. (Malaysia) New Straits Times

    [xvii] Watson, A. (2021) The Antique Microwave Oven that’s Better than Yours (United States) Technology Connections

    [xviii] Durbhakula, S. (2022) IBM dumping Watson Health is an opportunity to reevaluate artificial intelligence (United States) MedCity News

    [xix] (1998) PapriCom Technologies Wins CommerceNet Award (Israel) Globes

    [xx] Von Ahn, L., Dabbish, L. (2004) Labeling Images with a Computer Game (United States) School of Computing, Carnegie-Mellon University

    [xxi] Butterfield, D., Fake, C., Henderson-Begg, C., Mourachov, S., (2006) Interestingness ranking of media objects (United States) US Patent Office

    [xxii] Delaney, K.J., (2005) Yahoo acquires Flickr creator (United States) Wall Street Journal

    [xxiii] Hood, S., (2008) Delicious is 5 (United States) Delicious blog

    [xxiv] (2017) 10 years of Carbon Neutrality (United States) Google

    [xxv] Bakshi, V. (2018) EUV Lithography (United States) SPIE Press

    [xxvi] Wade, W. (2000) ASML acquires SVG, becomes largest litho supplier (United States) EE Times

    [xxvii] Lammers, D. (1999) U.S. gives ok to ASML on EUV effort (United States) EE Times

    [xxviii] Mead, C., Conway, L. (1979) Introduction to VLSI Systems (United States) Addison-Wesley

    [xxix] Lavagno, L., Martin, G., Scheffer, L., et al (2006) Electronic Design Automation for Integrated Circuits Handbook (United States) Taylor & Francis

    [xxx] (2010) Apple Launches iPad (United States) Apple Inc. website

    [xxxi] (1997) PalmPilot Professional (United Kingdom) Centre for Computing History

    [xxxii] Jobs, S. (2005) Apple WWDC 2005 keynote speech (United States) Apple Inc.

    [xxxiii] (2014) Makimoto’s Wave Revisited for Multicore SoC Design (United States) EE Times

    [xxxiv] Makimoto, T. (2014) Implications of Makimoto’s Wave (United States) IEEE Computer Society

    [xxxv] (2006) Nokia and Yahoo! add Flickr support in Nokia Nseries Multimedia Computers (Germany) Cision PR Newswire

    [xxxvi] Gibson, W. (2007) Spook Country (United States) Putnam Publishing Group

    [xxxvii] The O2O Business In China (China) GAB China

    [xxxviii] Carroll, G. (2008) Web Centric Business Model (United States) Waggener Edstrom Worldwide for LaSalle School of Business, Universitat Ramon Llull, Barcelona

    [xxxix] Carroll, G. (2008) Web of no web (United Kingdom) renaissance chambara

    [xl] Kelly, K. (2018) AR Will Spark the Next Big Tech Platform – Call It Mirrorworld (United States) Wired

    [xli] Heckerman, D. (1988) An Empirical Comparison of Three Inference Methods (United States) Microsoft Research

    [xlii] Sze, V., Chen, Y.H., Yang, T.J., Emer, J. (2017) Efficient Processing of Deep Neural Networks: A Tutorial and Survey (United States) Cornell University

    [xliii] Webber, M. E. (2024) Energy Blog: Is AI Too Power-Hungry for Our Own Good? (United States) American Society of Mechanical Engineers

    [xliv] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I. (2017) Attention Is All You Need (United States) 31st Conference on Neural Information Processing Systems (NIPS 2017)

    [xlv] Marche, S. (2024) Was Linguistic A.I. Created By Accident? (United States) The New Yorker.

    [xlvi] Radford, A. (2018) Improving language understanding with unsupervised learning (United States) OpenAI

    [xlvii] Heath, N. (2025) Authors outraged to discover Meta used their pirated work to train its AI systems (Australia) ABC (Australian Broadcast Corporation)

    [xlviii] Morey, M., O’Sullivan, J. (2024) In-brief analysis: Data center owners turn to nuclear as potential energy source (United States) Today in Energy published by U.S. Energy Information Administration

    [xlix] Bradshaw, T., Morris, S. (2024) Microsoft acquires twice as many Nvidia AI chips as tech rivals (United Kingdom) Financial Times

    [l] Smith, C. (2025) ChatGPT’s viral image-generation upgrade is ruining the chatbot for everyone (United States) BGR (Boy Genius Report)

    [li] Wayner, P. (1997) Human Error Cripples the Internet (United States) The New York Times

    [lii] Honan, M. (2013) Killing the Fail Whale with Twitter’s Christopher Fry (United States) Wired

    [liii] Mazarr, M. (2025) The Coming Strategic Revolution of Artificial Intelligence (United States) MIT (Massachusetts Institute of Technology)

    [liv] Knight, W. (2025) DeepSeek’s New AI Model Sparks Shock, Awe, and Questions from US Competitors (United States) Wired

    [lv] Sharwood, S. (2025) Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer (United Kingdom) The Register

    [lvi] Hewitt, C., Bishop, P., Steiger, R. (1973). A Universal Modular Actor Formalism for Artificial Intelligence. (United States) IJCAI (International Joint Conference on Artificial Intelligence).

    [lvii] Sculley, J. (1987) Keynote Address On The Knowledge Navigator at Educom (United States) Apple Computer Inc.

    [lviii] (1987) Apple’s Future Computer: The Knowledge Navigator (United States) Apple Computer Inc.

    [lix] Kelly, K. (1995) Out of Control: The New Biology of Machines (United States) Fourth Estate

    [lx] Nwana, H.S., Azarmi, N. (1997) Software Agents and Soft Computing: Towards Enhancing Machine Intelligence Concepts and Applications (Germany) Springer

    [lxi] Rifkin, G. (1994) Interface; A Phone That Plays Secretary for Travelers (United States) The New York Times

    [lxii] Richardson, T. (2005) Orange kills Wildfire – finally (United Kingdom) The Register

    [lxiii] Spoonauer, M. (2024) The Truth about the Rabbit R1 – your questions answered about the AI gadget (United States) Tom’s Guide

    [lxiv] Garun, N. (2019) One year later, restaurants are still confused by Google Duplex (United States) The Verge

    [lxv] Roth, E. (2025) Amazon can now buy products from other websites for you (United States) The Verge

    [lxvi] MQ-28 microsite (United States) Boeing Inc.

    [lxvii] Warwick, G. (2019) Boeing Unveils ‘Loyal Wingman’ UAV Developed In Australia (United Kingdom) Aviation Week Network – part of Informa Markets

    [lxviii] Udinmwen, E. (2025) Apple Mac Studio M3 Ultra workstation can run Deepseek R1 671B AI model entirely in memory using less than 200W, reviewer finds (United Kingdom) TechRadar

    [lxix] Kelly, K. (2010) What Technology Wants (United States) Viking Books

    [lxx] Andrews, G.R. (2000) Foundations of Multithreaded, Parallel, and Distributed Programming (United States) Addison-Wesley

    [lxxi] Criddle, C., Olcott, E. (2025) OpenAI says it has evidence China’s DeepSeek used its model to train competitor (United Kingdom) Financial Times

    [lxxii] Russell, J. (2025) China Researchers Report Using Quantum Computer to Fine-Tune Billion Parameter AI Model (United States) HPC Wire

    [lxxiii] Mistral AI home page (France) Mistral AI

    [lxxiv] (2025) High-Speed Autonomous Underwater Effects. Copperhead (United States) Anduril Industries

    [lxxv] Vertex AI with Gemini 1.5 Pro and Gemini 1.5 Flash (United States) Google Cloud website

    [lxxvi] Untersinger, M. (2024) Strava, the exercise app filled with security holes (France) Le Monde

    [lxxvii] Nilsson-Julien, E. (2025) French submarine crew accidentally leak sensitive information through Strava app (France) Le Monde

    [lxxviii] Arsene, Liviu (2018) Hack of US Navy Contractor Nets China 614 Gigabytes of Classified Information (Romania) Bitdefender

    [lxxix] Wendling, M. (2024) What to know about string of US hacks blamed on China (United Kingdom) BBC News

    [lxxx] Kidwell, D. (2020) Cyber espionage for the Chinese government (United States) U.S. Air Force Office of Special Investigations

    [lxxxi] Gorman, S., Cole, A., Dreazen, Y. (2009) Computer Spies Breach Fighter-Jet Project (United States) The Wall Street Journal

    [lxxxii] Bellan, R. (2025) Waymo may use interior camera data to train generative AI models, but riders will be able to opt out (United States) TechCrunch

  • End of culture

    This post on the end of culture was inspired by a presentation. Pip Bingemann of Springboards.ai presented at Cannes in Cairns – a marketing festival for Australians who wouldn’t be able to go to the Cannes Festival of Advertising. Pip’s presentation touched on things I had seen about the end of culture and had some interesting points within it. I didn’t agree with a lot of what Pip said – some of it was down to nuance – but I appreciated the journey that it took.

    I have built the main headers around Pip’s slides; strap in for the end of culture.

    What’s wrong with advertising?

    Bingemann’s presentation was in praise of the disruption that (generative) AI was bringing. The thesis he put forward was that ‘machines’ had already messed up the advertising and media industries. 

    • Advertising became self-service in nature. 
    • There had been a move in online media to relevance over distinctiveness
    • We became slaves to numbers

    Let’s look at those elements first. 

    Advertising became self-service in nature

    Like the technological disruption of banking in the past with: 

    • Postal banking
    • Automatic teller machines
    • Telephone banking 
    • Online banking 

    Meta and Google’s advertising platforms democratised media buying. Years ago, a guy I have since lost touch with used to be a manager at a McDonald’s branch in the West End of London. 

    Before cellphones became commonplace, he had a side hustle. He used the restaurant telephone to phone up the newspapers to book small ads. The newspapers had advertising sales teams that he would speak to. He did it once for a friend and then word got around. Eventually, he was calling for businesses across Soho: premium line suppliers, porn publishers and adult mail order catalogue companies. Eventually they needed the ads to be designed too. This work was done alongside creating porn DVD covers and other marketing material. 

    Ovid was a pimp

    He built a small successful agency off the back of it based in Soho. The agency remained in Soho until it was priced out by the fund management firms who moved in. Lots of other small businesses did the same for their plumbing business or hair salon. Their adverts would run in local newspapers across the country. 


    For more sophisticated ads like large print ads, television or cinema advertising; help was needed. This help got the ad ready, made sure that the publication received the artwork on time and in a format that they could use. They made sure that the artwork was presented in the manner agreed. With the likes of television, the advert might have to go through regulatory approval prior to publication. 

    If you were a larger brand with a national or international campaign, further help was needed in pre-testing and orchestration. Expertise might be needed to access more regulated markets while remaining on the right side of the law. 

    Technology allowed newspaper type adverts to be easily accessed by both agencies and brands. 

    TLDR: Advertising has been self-service for decades, but I will grant that online allowed more sophisticated formats such as videos, colour photos and carousels. AND regulation has been slower to police advertising online – for instance, YouTube ads don’t get the scrutiny that TV ads get.

    Relevance over distinctiveness and slaves to numbers

    The move to relevance over distinctiveness in online media was down to where online media sat in the customer journey. It was (and for the most part still is) at the bottom of the funnel.

    Relevance made sense, particularly in search advertising. The first online adverts such as Craigslist classified and display ads were conceptually similar to their equivalents in the back pages of newspaper advertising. Newspaper ads were served in sections: cars for sale, homes for sale, local businesses, cinema listings, vets or pharmacies with a late closing time.

    Search and many banner ad campaigns for that matter are about the last step (hopefully) before purchase. In the old pre-internet world, they would be direct mail or the direct response adverts that used to appear in magazines or the special offers beloved of shopper marketing.

    Vintage 1960s Columbia Record Club Ad Double Page Advertisement 1962

    Distinctiveness appeared further up the funnel, building long-term memory structures through brand building. It was TV advertising, radio jingles, magazine print advertising and billboards that evoked emotion and still evoke nostalgia decades later.

    Silk Cut cigarette ad
    Saatchi & Saatchi for Gallaher

    I would argue that the issue is less about relevance at the expense of distinctiveness; instead it’s about short-termist mindsets facilitated by numbers. The media industry is about to double down on this error with initiatives like the European Programmatic TV initiative. And so I can empathise with Pip’s last point about becoming slaves to numbers. It’s ironic that the PowerPoint-friendly charts used by Google search advertising to explain its value to marketers took off and drove marketing thinking.

    Technology marketing itself came from broken origins and still is basically sales strategies by another name. A good deal of what data is created is based on what technology companies can see; rather than what marketers need to measure to get the balance between long term and short term marketing needs.

    This MIGHT BE about to change if marketing expert Mark Ritson is to be believed. He posits that marketing technology start-up Evidenza.AI will provide business-to-business marketers with the kind of insight previously driven by market research, but much faster. From then on he sees it doing a better job at communications and media strategy. I am trying to keep an open mind on this at the moment.

    TLDR: Advertising hasn’t become about relevance at the expense of distinctiveness, but instead about short-term at the expense of long-term marketing effects; partly down to technologists having a poor understanding of marketing.

    Technology outputs data which marketers paid an inordinate amount of attention to; reinforcing the short term bias. Machine learning techniques now becoming available might turn this around by providing better marketing insight.

    Machine learning tends towards the mean

    Pip’s presentation went on to assert that machine learning tends towards the mean. Generative AI synthesises content based on what has already been done, which is why Pip assumes that everything tends towards the mean. But that depends on how one uses these tools that we’ve been given.

    As a strategist, I have used generative AI to knock out the too-obvious propositions, so that I give the creative teams something more interesting to work with in the creation of distinctive assets.

    Apparently creative teams have been taking a similar approach in terms of ideation.

    One thing I’ve heard more than once recently is how creative teams are using LLMs for brainstorms. But not quite how you’d expect… Because these algorithms answer back with the most likely predicted outcomes based on available data, you get the mean. The average. In creative terms that means the well worn “cliches”. So when starting a brainstorm or ideation session, quizzing the LLMs leads to a list of suggestions of what creative teams are generally most likely to suggest. At which point the team knows what NOT to do. The already well trodden ground. The list of the obvious. That also somehow gives a wonderfully smug angle on the use of AI in the pursuit of original work.

    Nic Roope on LinkedIn

    TLDR: generative AI will tend towards the mean, BUT that can be used creatively.

    Agencies and clients screwed advertising

    Pip’s slides don’t necessarily dig into the reasons why this happened. But I can put together some hypotheses and provide evidence that may indicate their validity or lack of it.

    Clientside factors

    • Shareholder value ethos – shareholder value as we understand it now can be traced back to the 1960s. While Milton Friedman popularised it in an essay, A Friedman Doctrine: The Social Responsibility of Business Is to Increase Its Profits, the idea had surfaced years earlier in an opinion editorial published in Fortune magazine. The so-called Friedman doctrine became a lodestar for investors and boards, including the likes of ‘Neutron’ Jack Welch at General Electric. While this thinking still dominates the tyranny of the quarterly numbers that CEOs of publicly traded companies operate under, it is not the only perspective in the c-suite.
    • The financialisation of businesses – related to the Friedman doctrine, businesses became increasingly financialised, prioritising short-term financial decisions. A classic example of this is how legacy airlines in the US have been managed post-deregulation. Another example is Brazilian private equity firm 3G Capital, which managed to destroy billions of dollars in shareholder value with marketing cuts. Financialisation has definitely had an impact, but it varies from company to company. We also see it showing up on the agency side, with the move to using more freelance staff and burning out the staff that they do have. They have a fig leaf of mental health care in their talent acquisition literature, but it’s largely BS.
    • What gets measured gets done – Google advertising’s success was as much down to it being easy to tell a story about the marketing spend conducted on the platform as it was about effectiveness. The dashboards lent themselves to being easily reproduced in PowerPoint and spoke in the universal c-suite language of line graphs and pie charts. This was really important for Google to survive and thrive after the dot com bust and through the 2008 recession.
    • Marketing literacy – since before I went to college, the c-suite has been largely marketing-illiterate. It doesn’t matter if they are a self-starting boy or girl made good, or minted from an Ivy League business school with an MBA. I have worked with both and they had a similar marketing knowledge level; the only thing that varied was the level of self-confidence despite this gap. Nor are the management consultants that they may employ any better. Which is the reason why the team at 3G Capital were surprised when they cut marketing costs and destroyed brand and shareholder value.
    • Procurement – practices to systemise purchasing and avoid issues like nepotism and corruption have introduced a muscular procurement function that knows the price of everything and the value of nothing. Margins across disciplines have been squeezed to breaking point. This has led to a decline in entertainment and side benefits; my LinkedIn feed had advertising folk explaining that the cost of attending the Cannes Festival of Creativity was likely paid for through budget cuts in training, subscriptions for tools and publications, and even headcount. We might not have had an end of culture, but this is no longer the industry portrayed in Mad Men.

    Agencyside factors

    • Splitting creative and media – prior to the mid-1970s, creative and media buying were two departments in the one advertising agency. That allowed the free flow of research between the departments and the creative use of context as well as content. It also meant that margins had to support two management teams. Secondly, the best option for defending margins was on the media-buying side of the house, depending on how integrated into the media technology stack the media-buying agency became.
    • Change in north star from FMCG to technology companies – the rise of the internet completely changed the nature of marketing. Prior to the internet becoming mainstream, having FMCG experience as a marketer helped your career. In the early 2000s, Google, Yahoo! and later Facebook became the brands marketers wanted on their CVs. The difference was that FMCG brands had subscriptions to the likes of the Ehrenberg-Bass Institute for Marketing Science. In American and British academia, by contrast, most thinking from even the most prestigious schools boiled down to the considered common-sense opinion of tenured professors like David A. Aaker and Philip Kotler. Kotler was reportedly not interested in engaging with marketing science as consumer behaviour was too complex and difficult to model.
    • Relatively recent awareness of marketing science – for reasons that I don’t fully understand, marketing science is both an old and a new phenomenon. The late Andrew Ehrenberg formally founded his Centre for Research in Marketing in the early 1990s, but had been turning out marketing science academic papers for decades before that. His work on the myth of ‘heavy consumers‘ and polygamous brand buying (smaller brands suffering a double jeopardy of fewer people purchasing them, and those that did purchase them doing so less often) was done back in the 1950s for the Attwood Consumer Panel (which would eventually become part of TNS). Some agency strategists knew about Ehrenberg, such as Stephen King of JWT. Some of this thinking was likely hidden by the decline of market research projects in agencies and the split between media buying and creative. In addition, Ehrenberg theorised why marketing science had low adoption outside his centre’s FMCG clients, which also encapsulated the gatekeeper role American academics played in mainstream academic adoption:

    I also realised slowly that our kind of theorising – which at base describes and explains already-established and generalised empirical discoveries and which thus post-dicts them – was anathema to many American academic marketing colleagues. They espoused much more ambitious and complex-looking econometric procedures which never worked in practice, with the recent citation for a Nobel typically not referring to any established empirical patterns

    My Research in Marketing : How It Happened by Andrew Ehrenberg
    • Channels – I don’t know who decided that a video view could be just a couple of seconds, but digital platforms benefited from it. Some of the wisdom from this year’s Cannes Festival of Creativity was that short adverts don’t work that well as they fail to build memory structures. Somehow agencies, platforms and brands suspended disbelief to develop marketing campaigns that only made sense in 1980s cyberpunk fiction like Max Headroom. Even at Cannes, platforms like TikTok claimed that they operate like, and have a similar impact to, a TV advert…
    • Research – like most strategists, I have found that I am often operating with less qualitative research than I would like. One of the biggest programmes whose research I managed to work on was the global launch of a now-famous weight management product. Even then we didn’t do enough interviews around the world to understand the cultural nuances in play. I remember reading about strategists in the 1970s spending a good deal of time listening to focus groups hosted around the country. There was a mid-week ritual of taking a drive or a train to a city or town outside London for this research. Social listening has been touted as a possible research substitute for product tracking and can sometimes be a useful source of consumer soundbites.
    • Testing – hand-in-hand with the decline in research has been a decline in types of testing. Content still gets tested, but brands and agencies didn’t test channels to the same degree. Which is why we’ve had short-form ad formats for years, yet the knowledge that they’re not as good at building memory structures doesn’t seem to have embedded itself in client and agency teams.

    OK, but that’s advertising, what about the end of culture?

    Pip claims that advertising is just one part of our world that has been under attack (from technology). Alex Murrell’s essay The Age of Average was cited as the source of this insight. Murrell makes his case on the common looks in car design driven by developments in aerodynamics over time, architecture and cityscapes, coffee shop styles, logos, book covers, video game franchises, packaging design and product design.

    Part of the reason for the architecture was Le Corbusier and his function-over-form theory of design and architecture (modernism), captured in Towards a New Architecture.

    Murrell harked back to a time of distinctive cities like Victorian London. However, what Murrell’s explanation overlooked was that even back in Victorian times London was becoming ‘standardised’. Chimney pots, bricks, cast-iron beams, windows and even church stained glass came out of catalogues. The same designs repeated over and over again, and the church stained glass went around what was then the British empire. It is a similar situation today: buildings are designed with standardised materials and design tools as we understand more about engineering.

    Technology over time allowed buildings to get taller and let in more light thanks to improvements in construction, lifts (elevators) and environmental control. Where things get interesting is when governments and societies make decisions on what they want to keep or rebuild. Shanghai has preserved only a little of the Bund and few of its shikumen neighbourhoods. Hong Kong has so far managed to keep some examples of its composite buildings. However, once you get to street level, you see a distinct evolving local culture despite the apparently similar skylines.

    This mix of standardised components bought from a supply chain, improved engineering and regulation has also driven similarities in other products, such as motor cars, which Murrell cited as an example. But again those similarities only really hold at a macro viewpoint. On closer examination, diversity in car culture and driving experiences starts to build clear lines of distinctiveness.

    And the car industry for decades has indulged in badge engineering where one vehicle truly does look like another.

    (Images above: Wolseley Hornet – photo PBWA Hammersmith and Fulham, Austin Cooper Mini, 1975 Innocenti Mini 1300 and 1967 Riley Elf.)

    The examples I used above were all based on the Austin Mini. Wolseley was a luxury brand owned by BMC at the time. Italian car manufacturer Innocenti licensed the Mini from Austin until the agreement was cancelled by British Leyland. Lastly, the Riley Elf was a slightly more expensive alternative to the Wolseley; both brands were owned by BMC.

    General Motors were the masters of badge engineering, using ‘common platforms’ as far back as 1909.

    As for the complaints about logo design: books and later the web allowed influential design motifs, like Neville Brody’s work at The Face, Arena and The Guardian, to go around the world, collected in three volumes by Thames & Hudson. His cover designs were in Tower Records stores from New York to Tokyo. Design is an industry sensitive to global influences that spread around the world. A second reason for the simplification and flattening of logos is the world that we now live in. Before the web, logos only existed in the physical world. Digital brings common requirements:

    • Works in a website template that can be used globally.
    • Works in email headers and footers.
    • Works in a favicon and in a mobile app button.

    One interesting point came out when Murrell (and Bingemann) looked at media: there was a coalescence of homage images and content based around a success. But these in turn created their own genres, like the sweary covers on self-help books. How this marks a low point in culture is beyond me.

    I thought of genres like the European ‘giallo’ films, or the European takes on the western of which spaghetti westerns are the best known. A lot of the films were dreadful. In the case of European westerns, many of them borrowed a character’s name from more successful films. So you saw ‘apparent’ franchises around ‘Ringo’, ‘Django’ and ‘Sartana’.

    Western saloon, cinema studio tabernas (Almeria)

    (Film director Alex Cox published one of the best works on the Italian western film genre, 10,000 Ways to Die. It’s based on his university thesis and is a fascinating read, if you choose to jump down that rabbit hole.)

    You had a similar experience in the Asian martial arts film industry, with countless variations on the star name Bruce Lee as the industry coped with the loss of its most famous star.

    To quote Sturgeon’s revelation:

    90 percent of anything is crap.

    This doesn’t mark the end of culture, but the manufacture of culture. What’s good or great is then strained through the filter of time and changing social attitudes.

    As for the cinematic superhero cul-de-sac, there are clear parallels with the end of the western and the New Hollywood movement. This time it’s distribution in the driving seat rather than a new generation of directors. Like the New Hollywood movement there will be both successes and car crashes along the way, and I am largely excited by it.

    Bingemann also cites Adam Mastroianni’s essay Pop Culture Has Become an Oligopoly. Mastroianni hits on what is called a long tail. In scale-free networks with preferential attachment, power law distributions are created because some nodes are more connected than others – so Taylor Swift will sell more because of the size of the fan base she has grown over time. These distributions have been studied since at least 1946, and Benoit Mandelbrot, who is better known for his work on fractals, was one of the main researchers. Wired magazine touched on it in 1998 when it published The Encyclopaedia of the New Economy, written by John Browning and Spencer Reiss, and the influence showed up in Wired contributor Kevin Kelly’s New Rules for the New Economy. So one can guess that the ideas were being thrown around then.
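
    To make the ‘rich get richer’ mechanism concrete, here is a minimal Python sketch of preferential attachment – my own illustration rather than anything from Mastroianni or Mandelbrot. Each new node attaches to an existing node with probability proportional to that node’s current number of links, and a handful of nodes end up with an outsized share, which is the power-law shape the long tail describes.

        # Minimal preferential attachment sketch (illustrative only).
        import random
        from collections import Counter

        random.seed(42)
        endpoints = [0, 1]  # the first two nodes share one link

        for new_node in range(2, 10_000):
            target = random.choice(endpoints)      # picks an existing node proportional to its degree
            endpoints.extend([new_node, target])   # record both ends of the new link

        degrees = Counter(endpoints)
        top5 = sum(count for _, count in degrees.most_common(5))
        print(f"top 5 nodes hold {top5 / len(endpoints):.1%} of all link endpoints")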

    Wired editor Chris Anderson wrote about it in a magazine article for Wired in October 2004, and turned it into a book. Algorithms in online services create bubbles and rabbit holes in different areas and surface media winners like MrBeast. But again, culture has thrived out of sight of the general public, in spite of popular culture, for decades and will continue to do so. Examples include Northern Soul, punk, the Chicago house music scene, UK garage, grime, drill and donk. The long tail does not mark an end of culture.

    TL;DR: Could the current culture eco-system be better? Yes, absolutely. But it isn’t broken in the way, or to the extent, that Bingemann believes. We definitely aren’t at the end of culture and it doesn’t need to be ‘saved’ by generative AI.

    So what can AI do?

    Bingemann believed that generative AI offers society a way out of the end of culture. So presumably it offers a way to enhance and create culture. He believes that it creates; I would finesse this a bit to say that it emulates, synthesises and combines elements to meet consumer instructions – since it is the sum of its training data.

    Ironically, Bingemann bases his thesis on how surreal and abstract art represented the ‘death of traditional art’, reinvented the meaning of art and unleashed a large amount of creativity. Traditional art didn’t die per se; there are still several artists selling realistic pieces, including paintings and sculptures, alongside the ‘new art’ movements.

    Generative AI puts tools in the hands of creatives that previously would have required a lot of work, in the same way that desktop publishing and Photoshop replaced the cut-and-paste compositing on layers of glass panels which were then photographed, and the image retouching that used to be done by hand.

    In advertising Bingemann sees five opportunities enabled by generative AI:

    • Move to value-based pricing (presumably based on a substantially reduced cost of production). It’s what Huge tried to do with their pivot and what thinkers like Michael Farmer have been recommending. We’ll see what happens when this aspiration meets client procurement teams. I hope Bingemann is right.
    • Design AI around people. So far the progress on this has been mixed. We have seen some companies like Klarna using ‘good enough’ generative AI to automate jobs out of existence. Adobe have taken more of a creative enablement approach. Based on my experience working on ads in the past with collaged backdrops and photoshoots for global campaigns, this could save tens of hours or more in artworking.
    • Embrace the newcomers. Just like social and digital before it, when we had new agencies like Crayon, AKQA and Poke, Bingemann thinks that generative AI is likely to bring new businesses to the advertising eco-system.
    • Spend 10x more effort developing the next generation. Given that the advertising industry continually churns experienced people out, and according to the IPA no one was recorded as having retired from the industry last year, this is going to be a tall order. It would make more sense if AI was used to make advertising more representative.
    • Unite clients, agencies and technology. It’s a nice aspiration, but when clients are looking for good-enough, efficient content, agencies are looking for a margin while trying to deliver effectiveness as well, and technology companies are trying to hold back their natural instinct to suck all the value to themselves, it will be a hard feat to achieve.

    Bingemann argues that this is necessary not just for advertising but also for creativity, and considers advertising’s role to be breaking culture rather than just reflecting it. Culture and creativity will exist without advertising. Even during the Soviet Union era, there was still creativity, art and culture – both mainstream and underground.

    A Final Thought To Leave You On

    GZero Media quoting Douglas Rushkoff (of Media Virus fame) on what generative AI means for culture moving forward.

    While it’s not the end of culture as we know it, Springboard.ai are putting out some interesting tools that I could see competing with the likes of Julian Cole, Mark Pollard and others who are filling the ‘how to strategy’ gap for brand planners.

    More related content can be found here.

    More information

    The ‘Pernicious Nonsense’ Of Maximizing Shareholder Value | Forbes.

    Customer Value, Shareholder Wealth, Community Wellbeing: A Roadmap for Companies and Investors by Denis Kilroy and Marvin Schneider

    CIA appoints ex-MindShare chief de Pear | MarketingWeek

    Vici – The evolution of display advertising

    Profit squeeze for ad agencies | MarketingWeek

    3G Capital discovers the limits of cost-cutting and debt | The Economist

    My Research in Marketing : How It Happened by Andrew Ehrenberg

    Creative Impact Unpacked: 11 effectiveness trends from Cannes Lions 2024 | WARC

    The Age of Average by Alex Murrell

    Modern Man: The Life of Le Corbusier, Architect of Tomorrow by Anthony Flint.

    10,000 Ways to Die by Alex Cox.

    The New Hollywood: From Bonnie and Clyde to Star Wars by Peter Kramer.

    Pop Culture Has Become an Oligopoly | Experimental History.

    New Rules for the New Economy by Kevin Kelly.

    The Long Tail: How Endless Choice is Creating Unlimited Demand by Chris Anderson.

    AI and creativity | renaissance chambara.

  • Car screens and synthesisers

    The current debate over car screens / car as computer design reminded me a lot of the journey that synthesisers have gone through.

    Charging screen

    I went down this train of thought on car screens thanks to a LinkedIn post by Nic Roope, reacting to an article published in Car Design News in praise of push buttons.

    There is a view in car circles that the reliance on screens to mediate so many of the functions of a car can be a bad thing. I can understand it. For enthusiasts, driving a car is still a very analogue experience, including the haptics of a direct steering connection and a manual gearbox.

    I would be remiss if I didn’t share the opinion of Doug DeMuro, who argued the case for screens for two reasons:

    • Costs. Buttons cost more money, and there are the associated connectors. Modern vehicles offer such a range of controls that implementing them as physical buttons, rather than soft buttons and car screens, would be cost- and space-prohibitive.
    • Technological momentum. DeMuro essentially articulates a position similar to Kevin Kelly’s concept of the technium in his book What Technology Wants. Kelly uses a biological metaphor of progress as an organism, a Gaia-type metaphor of something that keeps growing and moving at its own pace. While Kelly has been accused of techno-mysticism, we do know that the development of key technologies like television or the light bulb was happening at the same time in different parts of the world, in isolation from each other – there came a time when they were inevitable.

    the greater, global, massively interconnected system of technology vibrating around us

    Kevin Kelly on the technium in What Technology Wants

    Colin Chapman versus software engineers.

    DeMuro’s first point is based on the proposition that all this extra control in car screens is a good thing. Do we really need to have car interior mood lighting? And if we do, do we need to have colours that result in night blindness and make the car interior looks like a booth at a bottle service bar in Dubai?

    For some drivers, the answer will be no.

    Different car manufacturers have had models that do very different things. One of the philosophies articulated most by car enthusiasts is that of Lotus Cars founder Colin Chapman: “simplify, and add lightness”.

    Chapman’s design ethos was very in-tune with the likes of mid-century thinkers like polymath Buckminster Fuller and those he influenced notably architect Sir Norman Foster.

    Chapman’s world view wasn’t perfect his vehicles were fragile and had quality issues, partly due to his daring use of new materials and techniques influenced by aerospace. It’s also a world away from the Tesla approach, where the vehicle can’t be started up without the screen even as a ‘limp mode’ function.

    Instead, Tesla’s pickup and car screens are infested with boondoggles, including:

    • A video of a fireplace filled with burning logs
    • A game that allows you to break the windows of a virtual Cybertruck
    • Customisable horn sounds including celebrity voices
    • A pre-programmed light show

    Modern car economics.

    Car screens have advanced in lock-step with the move towards an electric car future. A technology transition at the best of times is difficult, but the car industry has other problems that will affect consumer views of vehicles.

    Consumer choice.

    In the 1970s cars seldom lasted over a decade, but improvements in corrosion treatment and in car designs that removed water traps have extended the potential life of a car. Classic cars are also much less damaging to the environment: the average classic emits 563kg of CO2 per year, yet an average passenger car has a 6.8-tonne carbon footprint immediately after production. This means that a new car would need to be run for several years to achieve a climate ‘payback’, so older cars can be attractive for consumers if they meet their needs reliably.
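
    As a rough back-of-envelope check on the figures above (it ignores the classic’s worse fuel economy and the new car’s own running emissions, so it is purely illustrative):

        # Back-of-envelope comparison using the figures quoted above.
        classic_kg_per_year = 563     # quoted annual CO2 emissions of an average classic
        new_car_production_kg = 6800  # quoted 6.8-tonne footprint of a new car at the factory gate

        years = new_car_production_kg / classic_kg_per_year
        print(f"about {years:.0f} years of classic motoring to match the new car's production footprint")
        # prints roughly 12 years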

    Vehicle affordability.

    Over the time I have held a driving licence, the secondhand car market went from being the dumping ground for fleet sales to the Alice in Wonderland after-effects of the lease agreements that drove new and nearly-new car sales. The financialisation of the car market isn’t without risk and has been considered a possible future risk in the way that consumer finance and home mortgages have been in the past.

    Yamaha DX7II-D

    So what do car touchscreens have to do with synthesisers?

    In order to answer that question, we need to go back in time. Massive steps forward in electronics inspired research into different ways of creating sounds, based on modulation techniques that had been used in radio broadcast signals for decades. In the 1960s digital technology was also moving forward and provided a more stable base for FM synthesis. Stanford University scholars worked with Yamaha technologists to turn FM synthesis into a product.
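
    FM synthesis works by using one oscillator (the modulator) to wobble the phase of another (the carrier), creating rich sidebands from just two sine waves. Below is a minimal Python sketch of a two-operator FM tone written to a WAV file – the frequencies and modulation index are arbitrary values picked for illustration, not anything from Yamaha’s designs:

        import wave
        import numpy as np

        sample_rate = 44_100
        t = np.arange(sample_rate) / sample_rate   # one second of time steps

        carrier_hz = 440.0    # the pitch you hear
        modulator_hz = 220.0  # the oscillator doing the modulating
        mod_index = 3.0       # depth of modulation: higher = brighter, more sidebands

        # Two-operator FM (phase modulation form): the modulator pushes the carrier's phase around.
        signal = np.sin(2 * np.pi * carrier_hz * t
                        + mod_index * np.sin(2 * np.pi * modulator_hz * t))

        # Write a 16-bit mono WAV file so the result can be auditioned.
        pcm = (signal * 0.5 * 32767).astype(np.int16)
        with wave.open("fm_tone.wav", "wb") as wav_file:
            wav_file.setnchannels(1)
            wav_file.setsampwidth(2)
            wav_file.setframerate(sample_rate)
            wav_file.writeframes(pcm.tobytes())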

    The first instrument that it appeared in was the New England Digital Synclavier, whose makers had licensed the technology from Yamaha. The Synclavier was a couple of racks full of computer storage, a processing unit, cooling and audio interfaces, all connected up to a monitor and a keyboard. Over time the Synclavier would evolve into the ancestor of the modern digital audio workstation (DAW), like Apple’s Logic Pro app.

    1983 comes around and Yamaha is finally ready to launch a mainstream product featuring FM synthesis. It also features MIDI, a standard that is still used to control musical instruments (and other studio equipment) remotely. Roland had already released a couple of devices that supported the standard.
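
    For a sense of how lightweight the standard is: a MIDI note-on message is just three bytes – a status byte carrying the message type and channel, a note number and a velocity. A quick Python sketch of the wire format (illustrative, nothing to do with the DX7’s own firmware):

        # Note-on for middle C (note 60) at velocity 100 on channel 1, then the matching note-off.
        note_on = bytes([0x90, 60, 100])   # 0x90 = note-on, channel 1
        note_off = bytes([0x80, 60, 0])    # 0x80 = note-off, channel 1
        print(note_on.hex(), note_off.hex())   # -> 903c64 803c00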

    But Yamaha’s DX7 proved to be the blockbuster product. At that time electronic music was a niche interest and instrument manufacturers would be very lucky to sell 50,000 units. Yamaha sold over 300,000 units in the first three years of sales over its 7 year life and 10,000s of more devices of the DX and TX families.

    Digital changes the interface

    Analogue synthesisers were full of switches and dials. The Oberheim synthesiser above isn’t that different from its analogue predecessors from five decades prior.

    The DX7 was a very different beast. It couldn’t have a dial or button for every parameter, rather like modern car screens with endless settings. So it had a few buttons which changed their function depending on which mode the synthesiser was in. A few earlier models had taken a similarly spartan approach with limited sales, but the DX7 mainstreamed the idea.

    A few things happened that might be instructive for how we now think about car screens:

    • Other synthesiser manufacturers like Roland and Korg copied Yamaha’s approach to interface design. Some of them tried using devices like jog wheels to provide additional intuitive control, in a similar way conceptually to BMW’s iDrive interface for its car screens.
    • Software companies looked to fill the gap and provide a better interface, which eventually begat modern digital audio workstation applications like Logic Pro. We might see similar developments sold for cars, and this is likely the opportunity that the likes of Apple CarPlay see. There is consumer demand to support it.
    • Despite the obvious benefit of soft-button-driven instruments, there remained a strong demand for analogue controls. Now there is a strong demand for tactile interface controls and old-style synthesis. In the car world that would equate to providing car enthusiasts with analogue experiences, while the mainstream goes to the Tesla minimalism of the car screen. We can see this in the design of Hyundai’s analogue-feeling performance electric cars that try to emulate a manual gearbox, and in Ineos’ switchgear that owes more to aviation than automotive manufacturing.

    You can find similar posts to this here.

    More information

    Average Age of Cars in Great Britain | NimbleFins

    In praise of pushbuttons (and other physical controls) | Car Design News

    Car pollution facts: from production to disposal, what impact do our cars have on the planet? | Auto Express

    MIDI Quest Pro Yamaha DX7 software editor

    Patchbase Yamaha DX7 software editor

  • Jeremy Deller + more stuff

    Jeremy Deller

    Jeremy Deller is famous for examining 1980s events from the Miners’ Strike to rave culture, most notably his work Acid Brass, working with the Williams Fairey Brass Band on cover versions of early house and techno music. In this documentary he walks a group of sixth formers through the context that rave culture began in.

    Smiley detail

    Jeremy Deller has also staged a re-enactment of the Battle of Orgreave and made an inflatable version of Stonehenge in a piece called Sacrilege.

    Jacobs cream crackers

    As a child, I would have eaten several cream crackers manufactured on this line. In more recent times the production of Jacobs products was moved abroad to cut costs. There is something mesmerising about watching the process of production.

    Cyberpunk

    Quinn’s Ideas exploration of Neuromancer pointed out some links that I hadn’t realised in the formation of cyberpunk as a genre and cultural force.

    LA Noir

    Why film noir happened when it did, and why it is so synonymous with the city of Los Angeles, is explained in this documentary, which features an interview with author James Ellroy.

    EDC

    Everyday Carry (EDC) – whilst it has evolved as an online cultural phenomenon, it has been around for as long as men and boys have had pockets. My childhood friend Nigel was obsessing about Swiss Army knives and really small Leica binoculars back when I was in primary school. His Dad was a well-to-do dentist and Nigel fulfilled his EDC goals before becoming a teen. It was only natural that it eventually became a thing when the internet came around. Kevin Kelly had been talking about cool tools on the web for the past quarter of a century; Drop.com, when it was founded over 12 years ago, covered products that would be considered EDC today.

    All of which leaves me more puzzled about why EDC has suddenly become the focus of media attention in the quality newspapers.

    Tacit and explicit knowledge

    Vicky Zhao’s content are handy thought starters for presentations and problem solving. I particularly enjoyed this one on tacit knowledge.

  • Humane AI pin + more things

    Humane AI pin

    The Humane AI pin has been hyped for a while. Now it’s been launched as a product with what seems to be a small initial batch based on a waiting list and drop type distribution model. I thought I would wait a bit to post on the Humane AI pin and let the dust settle.

    The Humane AI pin is an interesting take on a personal device, particularly with its ‘AI experience’ switching – picking the right smarts for the right task. This seems to fulfil the kind of vision that the likes of Kevin Kelly have outlined in the past. It also seems to access communications services like messaging, and the audio design in the product seems interesting. There is also a projected interface of sorts on the Humane AI pin. It’s an interesting alternative direction to the spatial computing vision of Apple’s Vision Pro.

    humane ai pin

    The Humane AI device falls down by being such a network-centric device. Although it has onboard machine learning technology, its reliance on a relationship with T-Mobile US’ cellular network is problematic. Cellular connectivity is not ubiquitous. It is one of several device visions that have been articulated over the years, but what I still don’t understand is the ‘why?’

    What’s going to be more interesting is what the Humane AI pin does next?

    Beauty

    How Chinese influencer Li Jiaqi’s outburst turned the poster child of C-beauty into a laughing stock

    China

    China’s first deficit in foreign investment signals West’s ‘de-risking’ pressure | Reuters

    US law firms rethink China future amid economic woes, data crackdown | Reuters – Of the 73 largest U.S. law firms with a presence in China, 32 shrank their attorney presence in the last decade, according to a Reuters review of data from Leopard Solutions, which tracks law firm hiring. In Beijing, 26 of the 48 largest U.S. law firms drew down their presence since 2018. Worthwhile reading with: US consultancy Gallup withdraws from China | FT – market research was sensitive when I worked in China. Gallup’s business was closer to consulting than polling, to get around these challenges. Interesting that they can no longer thread the needle in China.

    China’s family-run businesses face succession challenges – Nikkei Asia – more than 80% of China’s 1 billion private enterprises are family-owned, with about 29% of these businesses in traditional manufacturing. From 2017 to 2022, around three-quarters of China’s family businesses are in the midst of a generational leadership transition

    Economics

    Fading prosperity and global shipping surplus pose challenges | DigiTimes

    Maersk cuts 10,000 jobs as shipping demand falls – BBC News

    Energy

    Shipowners urged to protect vessels against electric-car fires | FT

    Russia’s weaker hand undermines case for Power of Siberia 2 gas link to China | Reuters

    FMCG

    Post-pandemic party’s over as Americans shun cognac | FT – Half of all cognac in the US is drunk by African Americans, a demographic that has been disproportionately affected by the cost of living crisis, according to analysis by Bernstein. The skew to African American consumers is in part due to the fact that French spirits producers ignored the segregation mandated by America’s Jim Crow laws and “cultivated the African American market segment in ways that other producers did not,” said David Crockett, professor of marketing at the University of Illinois Chicago. French spirits producers at the time marketed in Black-owned and Black-targeted publications. As early as the 1970s the advertisements conveyed a message of upward socio-economic mobility, said Naa Oyo Kwate, a sociologist at Rutgers University.

    Sprite Ads Starred Rappers When Hip-Hop Was Young

    25-Year Lasagna, Special Ops Oatmeal, and the Survival Food Boom | WIRED

    Finance

    The BofA $136B Dynamite Stick – Puck – treasury bond related losses.

    Health

    Did the Carpal Tunnel Epidemic Ever Really End? – The Atlantic

    Plain packaging on cereal? The obesity crisis is far more complex | Comment and Opinion | The Grocer

    Hong Kong

    The Hong Kong Activist Who Called Washington’s Bluff – The Atlantic – The United States praised Joshua Wong and pledged itself to Hong Kong’s freedom. But when China cracked down, Wong found himself with nowhere to go….

    Wong wanted to enter the U.S. consulate. The diplomats told him that only the rooms in the St. John’s Building were on offer, and that the office tower did not offer the protection of a diplomatic compound. In Washington, Ngo took the matter up with one of Hawley’s policy advisers, reasoning that the ultra-Trumpian senator might have the president’s ear. Responding at 1 a.m., Hawley’s staffer promised to pass the message on to his boss, but nothing changed. On July 1, the national-security law passed. The diplomats’ positions were the same: Wong couldn’t enter the consulate and couldn’t apply for asylum from outside the United States. Wong and Ngo knew the rules. But they were asking for the same pathway to haven that had been granted to Fang and Chen…

    The focus in Washington has moved on from Hong Kong to Taiwan. The island is under constant military threat from Beijing, which claims the territory as its own, even though the Chinese Communist Party has never controlled it. But for those in Taiwan who cherish their democracy, Hong Kong’s story offers a cautionary tale. The United States gave Hong Kong’s cause its vocal backing, then abandoned the city in its time of greatest need.

    Bored Ape crypto fans report ‘eye burn’ after Hong Kong party | FT – this comes on the back of safety problems of a Mirror concert at a local arena in July 2022.

    This city never slept. Now with China tightening its grip, is the party over? | CNN Business

    Asia is much more important to U.S. interests than the Middle East | Noahpinion – East Asian cities like Tokyo, Seoul, Singapore, and until recently, Hong Kong are arguably the world’s most magnificent — hyper-dense and efficient and bustling with life and creativity and personal freedom, but also extremely safe. East Asia is a wealthy region with high quality of life across the board, rivaled only by North Europe and parts of the Anglosphere. Maciej Cegłowski called them “Zeroth World”, and I think that is an apt description. – the burn for Hong Kong on this is real

    Ideas

    The challenges of sustainable societies and solar punk.

    The one where Chandler Bing’s impenetrable job defined a generation | FT – André Spicer, Executive Dean of Bayes Business School, suggests a new category altogether: a “Chandler Bing job”, one indifferent to finding meaning, “low on existential rewards but relatively high on extrinsic rewards, like pay and promotion”. Chandler’s stoicism more broadly reflects Gen X’s tacit acceptance of their lot: the forgotten latchkey kids squished between the Baby Boomers and the Millennials. Jennifer Dunn, author of Friends: A Cultural History, says he “showed that we might not all find fulfilment in the first, or even the longest lasting job we will ever have.” Compared to today’s employers who are increasingly concerned about making their younger colleagues happy, few cared about Gen X’s work-life balance.

    Korea

    Korea’s brutal economy.

    Luxury

    Skims and Swarovski Announce Collaboration | BoF – body jewellery, underwear and ready-to-wear pieces in the range, launched in Swarovski stores.

    Morgan Stanley’s Top 20 Swiss Watch Company Ranking for 2023 | Professional Watches

    Former Vogue editor Carine Roitfeld says ‘no one’ wanted to dress Kim Kardashian | The Independent

    Marketing

    IPA and ISBA Launch Industry Principles for Use of Generative AI in Advertising | LBBOnline

    Media

    Chinese tech executive Chen Shaojie, CEO of DouYu, said to be held ‘incommunicado’ after authorities find porn on popular live-streaming platform | South China Morning Post

    John Battelle’s Search Blog Why Prime Time TV Might Make a Comeback | Battelle Media

    Is the Music Business Losing Money to Sped-Up Song Remixes? – Billboard

    Tech In Asia gets the exit and acquirer we all expected | Asian Tech Review – acquired by Singapore Press Holdings

    Online

    Israel-Gaza war fuels online anti-Semitism, Islamophobia in China | Israel-Palestine conflict News | Al Jazeera

    Short-Form War | No Mercy / No Malice – TikTok as weapon, chances are China has its thumb on the direction of the algorithm

    Why Puma Sees a Future in Virtual Products, Despite the NFT Bust | BoF

    Did SEO experts ruin the internet or did Google? – The Verge

    The OpenAI Keynote – Stratechery by Ben Thompson – OpenAI as consumer technology company

    Retailing

    Kantar’s O’Donnell Unravels The Secrets To Gaining Unplanned Purchases – NCA

    TikTok Shop Sellers Make Money From Viral Products but Fear Penalties | Business Insider – why can’t TikTok shop handle viral product demand?

    Shein’s US head of strategy on the company’s business model

    Fortnum & Mason resumes delivery to the EU following Brexit troubles – Retail Gazette

    Security

    Big Brother Unchained: UK Government to Abolish Biometrics and Surveillance Safeguards As It Embraces Facial Recognition | naked capitalism

    Software

    Apple is Heavily Invested In Generative AI While Samsung Seeks Help From Microsoft’s ChatGPT And Google’s Bard

    This Cheat Sheet Can Help You Unlock ChatGPT’s Full Potential

    What are the autonomous car levels? Levels 1 to 5 of driverless vehicle tech explained | CAR Magazine

    How AI chatbots like ChatGPT or Bard work – visual explainer | Technology | The Guardian

    Xiaomi launches home-grown cross-device system with HyperOS, as US-sanctioned Huawei moves further from Google’s Android | South China Morning Post – not a lot of detail in the underpinnings at launch but imagine that it will have Linux at the heart

    Taiwan

    The US is quietly arming Taiwan to the teeth – BBC News

    Technology

    Apple unveils M3 processor threesome for Mac computers | EE News Europe as a counterpart to
    Qualcomm Snapdragon Summit – The Gauntlet – Radio Free Mobile

    Telecoms

    Starlink’s competition: Astranis.

    Web of no web

    BMW ConnectedRide smartglasses bring head-up displays to eyewear | CAR Magazine

    MB.OS: How Mercedes’ new software push gives it a direct line to customers | CAR Magazine – at the end of the day this looks like a dystopian Dubai night club, but without bottle service and ‘hostesses’