Search results for: “technium”

  • Intelligence per watt

    My thinking on the concept of intelligence per watt started as bullets in my notebook. At first it was more of a timeline than anything else, but it provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt.

    TL;DR (too long, didn’t read)

    Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering. 

    Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.  

    Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.  

    A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits. 

    Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.  

    As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.

    Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but also for user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.

    Ironically, the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use.

    Integration

    In 1926 or 27, Loewe (now better known for their high-end televisions) created the 3NF[i].

    The 3NF was not a computer; instead it integrated several radio parts into one glass-envelope vacuum valve: three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components were sealed in their own glass tubes. Normally each triode would be inside its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.

    Post-war scientific boom

    Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas became feasible once breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on a single piece of silicon to create the IC.

    Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define an IC, with what is now called a monolithic IC being considered a ‘true’ one. Kilby’s version wasn’t a true monolithic IC. As with most inventions, it was the child of several interconnected ideas that coalesced over a given period in time. In the case of ICs, it happened in the midst of materials and technology developments, including data storage and computational solutions ranging from the idea of virtual memory through to the first solar cells.

    Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].

    TTL (transistor–transistor logic) circuitry was invented at TRW in 1961 and licensed out for use in data processing and communications – propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems rely on the mainframe as a concept.

    AI’s early steps 

    What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.

    Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:

    • Reasoning as search – essentially a step-wise trial-and-error approach to problem solving that was compared to wandering through a maze and back-tracking if a dead end was found (a minimal sketch follows this list).
    • Natural language – where related phrases existed within a structured network. 
    • Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer. 
    • Single layer neural networks – to do rudimentary image recognition. 
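
    To make ‘reasoning as search’ concrete, here is a minimal sketch of depth-first search with back-tracking over a toy maze. The maze layout and function names are my own illustrative assumptions, not period code:

        # Depth-first 'reasoning as search' with back-tracking.
        # 'S' is the start, 'G' the goal, '#' a wall.
        MAZE = [
            "S.#",
            ".##",
            "..G",
        ]

        def solve(maze):
            rows, cols = len(maze), len(maze[0])
            start = next((r, c) for r in range(rows) for c in range(cols)
                         if maze[r][c] == 'S')

            def dfs(pos, path, visited):
                r, c = pos
                if maze[r][c] == 'G':
                    return path
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and maze[nr][nc] != '#' and (nr, nc) not in visited):
                        found = dfs((nr, nc), path + [(nr, nc)], visited | {(nr, nc)})
                        if found:               # a dead end returns None: back-track
                            return found
                return None

            return dfs(start, [start], {start})

        print(solve(MAZE))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]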

    By the early 1970s, AI researchers had run into a number of problems, some of which still plague the field to this day:

    • Symbolic AI wasn’t fit for purpose in solving many real-world tasks, like crossing a crowded room.
    • Trying to capture imprecise concepts with precise language.
    • Commonsense knowledge was vast and difficult to encode. 
    • Intractability – many problems require an exponential amount of computing time. 
    • Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems. 

    By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to major centres researching AI was cut by both the US and UK governments. Despite the reduction of funding to the major centres, work continued elsewhere.

    Mini-computers and pocket calculators

    ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first commercially successful minicomputer, the PDP-8, in 1965. The cost of mini-computers allowed them to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.

    A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments.  It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.

    LISP machines and PCs

    AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines carried on and surpassed them as the cost of computing dropped due to mass production.

    The rise and decline of LISP machines was due not only to Moore’s Law, but also to Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so. Tsugio Makimoto observed 10-year pivots between standardised semiconductor processors and customised processors[v]. The rise of personal computing drove a pivot towards standardised architectures.

    PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.

    Personal computing started in businesses when office workers brought in a computer to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.

    Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own. 

    Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.

    A Bicycle for the Mind

    Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across, in everything from briefing product teams to press interviews. A 1990 filmed interview with Steve Jobs articulates the context of the ‘bicycle for the mind’ saying particularly well.

    In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].

    The ‘bicycle for the mind’ concept was repeated in early Apple advertisements of the time[vii] and even informed the Macintosh project codename[viii].

    Jobs articulated a few key concepts. 

    • Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, if you bought a computer, you programmed it. This was the reason why early personal computer owners in the UK went on to birth a thriving games software industry, including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
    • The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac. My Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
    • At the time Jobs originally gave the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea for something greater.

    Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].

    Networks

    Work took a leap forward with office networked computers, pioneered by Apple’s Macintosh Office[xi] and soon overtaken by competitors. Networking facilitated workflow within an office, and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.

    At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone – to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].

    In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repairmen to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.

    Often messages were taken by a receptionist in their office, if they had one – or, more likely, by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.

    Fuzzy logic 

    The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.

    Advertisement: rice cooker with fuzzy logic, 3,000 yen, available end of June.

    Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:

    “Some of the fuzzy concepts may be valid in the U.S.,”

    “The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”

    “But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “

    By the end of the 1990s, fuzzy logic was embedded in various consumer devices:

    • Air-conditioner units – understood the room, the temperature difference inside and out, and the humidity, then switched on and off to balance cooling and energy efficiency.
    • CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
    • Dishwashers – sensed how many dishes were loaded and the type of dirt on them, then adjusted the wash programme.
    • Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
    • TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer was sitting.
    • Vacuum cleaners – adjusted suction power as they moved from carpeted to hard floors.
    • Video cameras – compensated for the movement of the camera to reduce blurred images.

    Fuzzy logic was sold on its benefits, with the technology concealed from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes, it could rely on small, lower-power specialist chips[xvi], offering a reasonable amount of intelligence per watt some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
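
    To give a flavour of how little computation this style of intelligence needs, here is a minimal fuzzy controller sketch of the kind a rice cooker or heater might embody. The membership functions, thresholds and power levels are my own illustrative assumptions, not any real appliance’s rules:

        # A minimal fuzzy logic controller: fuzzify a temperature reading,
        # apply three rules, then defuzzify with a weighted average.
        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def heater_power(temp_c):
            # Degrees of truth (0..1) for each fuzzy set.
            cold = tri(temp_c, 0, 20, 60)
            warm = tri(temp_c, 40, 70, 100)
            hot = tri(temp_c, 80, 110, 140)
            # Rules: cold -> high power, warm -> medium, hot -> low.
            weights, outputs = [cold, warm, hot], [90.0, 50.0, 10.0]
            total = sum(weights)
            return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

        for t in (15, 55, 75, 95):
            print(t, 'C ->', round(heater_power(t), 1), '% power')

    The whole controller is a handful of arithmetic operations per reading, which is why it fitted comfortably on the small, low-power chips of the era.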

    Fuzzy logic also helped in business automation, for example automatically reading hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail.

    Decision support systems & AI in business

    Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used varied, but some used rules-based systems. These were used in at least some capacity to reduce manual office work tasks. For instance, credit card approvals were processed based on rules that included various factors, including credit scores. Only some credit card providers had an analyst manually review the decisions made by the system. However, setting up each use case took a lot of effort, involving highly paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.

    Three decades on, IBM had a similar problem with its Watson offerings, with particularly high-profile failures in mission-critical healthcare applications[xviii]. A second problem was that a lot of tasks were ad hoc in nature, or required transposing data across disparate systems.

    The rise of the web

    The web changed everything. The underlying technology allowed for dynamic data. 

    Software agents

    Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user’s computer. The software client monitored price changes for products that the customer was interested in buying. The app then notified the user when the monitored price reached a price determined by the customer. The company became known as DealTime in the US and UK, or Evenbetter.com in Germany[xix].  

    The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now.

    Web search

    The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the web search workload amongst them. As search engines tried to make sense of an exponentially growing web, machine learning became part of the developer toolbox.

    Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].

    All the major search engines looked at how deep learning could help improve search results relevance. 

    Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built using optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating in clusters.

    General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products for everything from embedded applications to early mobile and portable devices and computers drove progress in improving intelligence per watt.

    Makimoto’s tsunami back to specialised ICs

    When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, the industry was stuck in Makimoto’s predicted ‘standardisation’ phase. It just worked. But several factors drove the swing back to specialised ICs.

    • Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of the key technologies that allow EUV photolithography[xxv]. During this time, Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy’s research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once they had it in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that production processes to improve IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
    • Progress in EDA (electronic design automation). Production process improvements in IC manufacture allowed for an explosion in device complexity, as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to think about the idea of very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve the intelligence per watt[xxx].
    • Increased focus on portable devices. A combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones to laptop and tablet computers, disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in practice, a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swapping batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].

    Apple’s first in-house-designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. This marked a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].

    New devices also meant new use cases that melded data on the web, on the device, and in the real world. I started to see this in action working at Yahoo!, with location data integrated onto photos and social data like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later, the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].

    Real-world QR codes helped connect online services with the real world, for uses such as mobile payments or reading content online like a restaurant menu or a property listing[xxxvii].

    I labelled this web-world integration a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008, as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].

    Deep learning boom

    Even as there was a post-LISP-machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Because of the large amount of computing power deep learning needed, academia explored other areas during the 1990s and early 2000s. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search, and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].

    A key factor in deep learning’s adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build large language models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres with less thought given to the electrical infrastructure required[xliii].

    Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but it is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later, OpenAI built on Google’s work with the generative pre-trained transformer model, better known as GPT[xlvi].
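
    The heart of the transformer is scaled dot-product attention. Here is a minimal sketch using nothing beyond NumPy; the score matrix it builds over the input tokens is one source of the computational intensity mentioned above:

        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
        import numpy as np

        def attention(Q, K, V):
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every query to every key
            scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
            return weights @ V                              # weighted mix of the values

        # Toy self-attention: 4 tokens with 8-dimensional embeddings.
        x = np.random.default_rng(0).standard_normal((4, 8))
        print(attention(x, x, x).shape)  # (4, 8)

    Because every token attends to every other token, compute and memory grow quadratically with sequence length – one reason intelligence per watt suffers as models and contexts grow.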

    Since 2018, we’ve seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations for building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].

    These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].

    The current direction of development in generative AI services is raw computing power, rather than a more energy-efficient focus on intelligence per watt.

    Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].

    Company           | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom chips bought
    Amazon            | 196,000            | –               | 1,300,000
    Alphabet (Google) | 169,000            | –               | 1,500,000
    ByteDance         | 230,000            | –               | –
    Meta              | 224,000            | 173,000         | 1,500,000
    Microsoft         | 485,000            | 96,000          | 200,000
    Tencent           | 230,000            | –               | –

    These numbers give an indication of the massive deployment of generative AI-specific computing power. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘fail whale’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].

    There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks. 

    Agentic AI

    Thinking on software agents goes back to work being done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987, which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]. This was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.

    A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005, due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations due to it being an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile subscribers on early digital cellular networks.

    Almost a quarter of a century later, we’re now seeing devices that look to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv], and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end are Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany combat aircraft into battle, once it has been directed to follow a course of action by its human colleague in another plane.

    The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed, the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size and power draw of computing resources is a trade-off with the amount of fuel or payload it can carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.

    As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].  

    This is still a long way from the vision of completely local execution of ‘agentic AI’ on a mobile device, because intelligence per watt hasn’t scaled down to a level that is useful, given the vast range of possible uses that would be asked of the agentic AI model.

    Maximising intelligence per watt

    There are three broad approaches to maximise the intelligence per watt of an AI model. 

    • Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). They have been building large data centres to run their models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
    • Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim. (A minimal quantisation sketch follows this list.)
    • Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples of this range from the fuzzy logic used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].
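
    One widely used optimisation is post-training quantisation: storing weights in fewer bits to cut memory traffic and energy per inference. The sketch below is a toy 8-bit example of the general idea, not the actual method used by DeepSeek, Manus or any named vendor:

        # Post-training 8-bit quantisation: map float32 weights to int8 plus a scale.
        import numpy as np

        def quantise(weights):
            scale = np.abs(weights).max() / 127.0        # largest magnitude maps to 127
            q = np.round(weights / scale).astype(np.int8)
            return q, scale

        def dequantise(q, scale):
            return q.astype(np.float32) * scale

        w = np.random.default_rng(1).standard_normal(1024).astype(np.float32)
        q, s = quantise(w)
        err = np.abs(w - dequantise(q, s)).mean()
        print(f"memory: {w.nbytes} -> {q.nbytes} bytes, mean error {err:.5f}")

    A quarter of the memory for a small, often tolerable, loss in precision is exactly the kind of trade that improves intelligence per watt.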

    Even if an AI model can do something, should the model be asked to do so?

    AI use case appropriateness

    We have seen a clear direction of travel over the decades towards more powerful, portable computing devices – devices which could function as an extension of their user once intelligence per watt allows AI to be run locally.

    Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This makes sense for general everyday tasks with no information risk – for instance, helping you complete a newspaper crossword when there is an answer you are stuck on and the intellectual struggle has gone nowhere.

    A private cloud AI service would make sense when working with, accessing or processing data held on the service. An example of this would be Google’s Vertex AI offering[lxxv].

    On-device AI models make sense when working with one’s personal private details, such as family photographs, health information, or data held in apps on your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. (I am using Strava as an example because it is popular and widely known, not because it is a bad app per se.)

    While businesses have the capability and resources to run a multi-layered security infrastructure that protects their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same security. As I write this, privacy concerns[lxxxii] are being expressed about Waymo’s autonomous taxis. An individual’s mobile device, however, is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models into a laptop, tablet or smartphone?


    [i] Radiomuseum – Loewe (Opta), Germany. Multi-system internal coupling 3NF

    [ii] (1961) Solid Circuit(tm) Semiconductor Network Computer, 6.3 Cubic inches in Size, is Demonstrated in Operation by U.S. Air Force and Texas Instruments (United States) Texas Instruments news release

    [iii] (2000) The Chip that Jack Built Changed the World (United States) Texas Instruments website

    [iv] Moravec H (1988), Mind Children (United States) Harvard University Press

    [v] (2010) Makimoto’s Wave | EDN (United States) AspenCore Inc.

    [vi] Jobs, S. (1980) Presentation on Apple Computer history and vision (United States) Computer History Museum via Regis McKenna

    [vii] Sinofsky, S. (2019) ‘Bicycle for the Mind’ (United States) Learning By Shipping

    [viii] Hertzfeld, A. (1981) Bicycle (United States) Folklore.org

    [ix] Jones, D. (2016) Codemasters (United Kingdom) Retro Gamer – Future Publishing

    [x] Engelbart, D. (1968) A Research Center For Augmenting Human Intellect (United States) Stanford Research Institute (SRI)

    [xi] Hormby, T. (2006) Apple’s Worst business Decisions (United States) OSnews

    [xii] Honan, M. (2009) From Brick to Slick: A History of Mobile Phones (United States) Wired

    [xiii] Ericsson History: The Nordics take charge (Sweden) LM Ericsson.

    [xiv] Singh, H., Gupta, M.M., Meitzler, T., Hou, Z., Garg, K., Solo, A.M.G & Zadeh, L.A. (2013) Real-Life Applications of Fuzzy Logic – Advances in Fuzzy Systems (Egypt) Hindawi Publishing Corporation

    [xv] Reid, T.R. (1990) The Future of Electronics Looks ‘Fuzzy’. (United States) Washington Post

    [xvi] Kushairi, A. (1993). “Omron showcases latest in fuzzy logic”. (Malaysia) New Straits Times

    [xvii] Watson, A. (2021) The Antique Microwave Oven that’s Better than Yours (United States) Technology Connections

    [xviii] Durbhakula, S. (2022) IBM dumping Watson Health is an opportunity to reevaluate artificial intelligence (United States) MedCity News

    [xix] (1998) PapriCom Technologies Wins CommerceNet Award (Israel) Globes

    [xx] Von Ahn, L., Dabbish, L. (2004) Labeling Images with a Computer Game (United States) School of Computing, Carnegie-Mellon University

    [xxi] Butterfield, D., Fake, C., Henderson-Begg, C., Mourachov, S., (2006) Interestingness ranking of media objects (United States) US Patent Office

    [xxii] Delaney, K.J., (2005) Yahoo acquires Flickr creator (United States) Wall Street Journal

    [xxiii] Hood, S., (2008) Delicious is 5 (United States) Delicious blog

    [xxiv] (2017) 10 years of Carbon Neutrality (United States) Google

    [xxv] Bakshi, V. (2018) EUV Lithography (United States) SPIE Press

    [xxvi] Wade, W. (2000) ASML acquires SVG, becomes largest litho supplier (United States) EE Times

    [xxvii] Lammers, D. (1999) U.S. gives ok to ASML on EUV effort (United States) EE Times

    [xxviii] Mead, C., Conway, L. (1979) Introduction to VLSI Systems (United States) Addison-Wesley

    [xxix] Lavagno, L., Martin, G., Scheffer, L., et al (2006) Electronic Design Automation for Integrated Circuits Handbook (United States) Taylor & Francis

    [xxx] (2010) Apple Launches iPad (United States) Apple Inc. website

    [xxxi] (1997) PalmPilot Professional (United Kingdom) Centre for Computing History

    [xxxii] Jobs, S. (2005) Apple WWDC 2005 keynote speech (United States) Apple Inc.

    [xxxiii] (2014) Makimoto’s Wave Revisited for Multicore SoC Design (United States) EE Times

    [xxxiv] Makimoto, T. (2014) Implications of Makimoto’s Wave (United States) IEEE Computer Society

    [xxxv] (2006) Nokia and Yahoo! add Flickr support in Nokia Nseries Multimedia Computers (Germany) Cision PR Newswire

    [xxxvi] Gibson, W. (2007) Spook Country (United States) Putnam Publishing Group

    [xxxvii] The O2O Business In China (China) GAB China

    [xxxviii] Carroll, G. (2008) Web Centric Business Model (United States) Waggener Edstrom Worldwide for LaSalle School of Business, Universitat Ramon Llull, Barcelona

    [xxxix] Carroll, G. (2008) Web of no web (United Kingdom) renaissance chambara

    [xl] Kelly, K. (2018) AR Will Spark the Next Big Tech Platform – Call It Mirrorworld (United States) Wired

    [xli] Heckerman, D. (1988) An Empirical Comparison of Three Inference Methods (United States) Microsoft Research

    [xlii] Sze, V., Chen, Y.H., Yang, T.J., Emer, J. (2017) Efficient Processing of Deep Neural Networks: A Tutorial and Survey (United States) Cornell University

    [xliii] Webber, M. E. (2024) Energy Blog: Is AI Too Power-Hungry for Our Own Good? (United States) American Society of Mechanical Engineers

    [xliv] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I. (2017) Attention Is All You Need (United States) 31st Conference on Neural Information Processing Systems (NIPS 2017)

    [xlv] Marche, S. (2024) Was Linguistic A.I. Created By Accident? (United States) The New Yorker.

    [xlvi] Radford, A. (2018) Improving language understanding with unsupervised learning (United States) OpenAI

    [xlvii] Heath, N. (2025) Authors outraged to discover Meta used their pirated work to train its AI systems (Australia) ABC (Australian Broadcast Corporation)

    [xlviii] Morey, M., O’Sullivan, J. (2024) In-brief analysis: Data center owners turn to nuclear as potential energy source (United States) Today in Energy published by U.S. Energy Information Administration

    [xlix] Bradshaw, T., Morris, S. (2024) Microsoft acquires twice as many Nvidia AI chips as tech rivals (United Kingdom) Financial Times

    [l] Smith, C. (2025) ChatGPT’s viral image-generation upgrade is ruining the chatbot for everyone (United States) BGR (Boy Genius Report)

    [li] Wayner, P. (1997) Human Error Cripples the Internet (United States) The New York Times

    [lii] Honan, M. (2013) Killing the Fail Whale with Twitter’s Christopher Fry (United States) Wired

    [liii] Mazarr, M. (2025) The Coming Strategic Revolution of Artificial Intelligence (United States) MIT (Massachusetts Institute of Technology)

    [liv] Knight, W. (2025) DeepSeek’s New AI Model Sparks Shock, Awe, and Questions from US Competitors (United States) Wired

    [lv] Sharwood, S. (2025) Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer (United Kingdom) The Register

    [lvi] Hewitt, C., Bishop, P., Steiger, R. (1973). A Universal Modular Actor Formalism for Artificial Intelligence. (United States) IJCAI (International Joint Conference on Artificial Intelligence).

    [lvii] Sculley, J. (1987) Keynote Address On The Knowledge Navigator at Educom (United States) Apple Computer Inc.

    [lviii] (1987) Apple’s Future Computer: The Knowledge Navigator (United States) Apple Computer Inc.

    [lix] Kelly, K. (1995) Out of Control: The New Biology of Machines (United States) Fourth Estate

    [lx] Nwana, H.S., Azarmi, N. (1997) Software Agents and Soft Computing: Towards Enhancing Machine Intelligence Concepts and Applications (Germany) Springer

    [lxi] Rifkin, G. (1994) Interface; A Phone That Plays Secretary for Travelers (United States) The New York Times

    [lxii] Richardson, T. (2005) Orange kills Wildfire – finally (United Kingdom) The Register

    [lxiii] Spoonauer, M. (2024) The Truth about the Rabbit R1 – your questions answered about the AI gadget (United States) Tom’s Guide

    [lxiv] Garun, N. (2019) One year later, restaurants are still confused by Google Duplex (United States) The Verge

    [lxv] Roth, E. (2025) Amazon can now buy products from other websites for you (United States) The Verge

    [lxvi] MQ-28 microsite (United States) Boeing Inc.

    [lxvii] Warwick, G. (2019) Boeing Unveils ‘Loyal Wingman’ UAV Developed In Australia (United Kingdom) Aviation Week Network – part of Informa Markets

    [lxviii] Udinmwen, E. (2025) Apple Mac Studio M3 Ultra workstation can run Deepseek R1 671B AI model entirely in memory using less than 200W, reviewer finds (United Kingdom) TechRadar

    [lxix] Kelly, K. (2010) What Technology Wants (United States) Viking Books

    [lxx] Andrews, G.R. (2000) Foundations of Multithreaded, Parallel, and Distributed Programming (United States) Addison-Wesley

    [lxxi] Criddle, C., Olcott, E. (2025) OpenAI says it has evidence China’s DeepSeek used its model to train competitor (United Kingdom) Financial Times

    [lxxii] Russell, J. (2025) China Researchers Report Using Quantum Computer to Fine-Tune Billion Parameter AI Model (United States) HPC Wire

    [lxxiii] Mistral AI home page (France) Mistral AI

    [lxxiv] (2025) High-Speed Autonomous Underwater Effects. Copperhead (United States) Anduril Industries

    [lxxv] Vertex AI with Gemini 1.5 Pro and Gemini 1.5 Flash (United States) Google Cloud website

    [lxxvi] Untersinger, M. (2024) Strava, the exercise app filled with security holes (France) Le Monde

    [lxxvii] Nilsson-Julien, E. (2025) French submarine crew accidentally leak sensitive information through Strava app (France) Le Monde

    [lxxviii] Arsene, Liviu (2018) Hack of US Navy Contractor Nets China 614 Gigabytes of Classified Information (Romania) Bitdefender

    [lxxix] Wendling, M. (2024) What to know about string of US hacks blamed on China (United Kingdom) BBC News

    [lxxx] Kidwell, D. (2020) Cyber espionage for the Chinese government (United States) U.S. Air Force Office of Special Investigations

    [lxxxi] Gorman, S., Cole, A., Dreazen, Y. (2009) Computer Spies Breach Fighter-Jet Project (United States) The Wall Street Journal

    [lxxxii] Bellan, R. (2025) Waymo may use interior camera data to train generative AI models, but riders will be able to opt out (United States) TechCrunch

  • Car screens and synthesisers

    The current debate over car screens / car as computer design reminded me a lot of the journey that synthesisers have gone through.

    I went down this train of thought on car screens thanks to a LinkedIn post by Nic Roope, reacting to an article published in Car Design News in praise of push buttons.

    There is a view in car circles that the reliance on screens to mediate so many of the functions of a car can be a bad thing. I can understand it. For enthusiasts, driving a car is still a very analogue experience, including the haptics of a direct steering connection and a manual gearbox.

    I would be remiss if I didn’t share the opinion of Doug DeMuro, who argued the case for screens on two grounds:

    • Costs. Buttons cost more money, along with their associated connectors. Modern vehicles offer such a range of controls that implementing them all as physical buttons, rather than soft buttons on car screens, would be cost- and space-prohibitive.
    • Technological momentum. DeMuro essentially articulates a position similar to Kevin Kelly’s concept of the technium in his book What Technology Wants. Kelly uses a biological metaphor of progress as an organism, a Gaia-type metaphor, that keeps growing and moving at its own pace. While Kelly has been accused of techno-mysticism, we do know that key technologies like television or the light bulb were being developed at the same time in different parts of the world in isolation from each other – there came a time when they were inevitable.

    the greater, global, massively interconnected system of technology vibrating around us

    Kevin Kelly on the technium in What Technology Wants

    Colin Chapman versus software engineers.

    DeMuro’s first point is based on the proposition that all this extra control in car screens is a good thing. Do we really need car interior mood lighting? And if we do, do we need colours that cause night blindness and make the car interior look like a booth at a bottle-service bar in Dubai?

    For some drivers, the answer will be no.

    Different car manufacturers have had different models that do very different things. One of the philosophies cited most by car enthusiasts is that of Lotus founder Colin Chapman: “simplify, and add lightness”.

    Chapman’s design ethos was very much in tune with mid-century thinkers like polymath Buckminster Fuller and those he influenced, notably architect Sir Norman Foster.

    Chapman’s world view wasn’t perfect: his vehicles were fragile and had quality issues, partly due to his daring use of new materials and techniques influenced by aerospace. It’s also a world away from the Tesla approach, where the vehicle can’t even be started without the screen – there is no ‘limp mode’ fallback.

    Instead, the Tesla pickup and car screens are infested with boondoggles including:

    • A video of a fireplace filled with burning logs
    • A game that allows you to break the windows of a virtual CyberTruck
    • Customisable horn sounds including celebrity voices
    • A pre-programmed light show

    Modern car economics.

    Car screens have advanced in lock-step with the move towards an electric car future. A technology transition is difficult at the best of times, but the car industry has other problems that will impact consumer views of vehicles.

    Consumer choice.

    In the 1970s, cars seldom lasted over a decade, but improvements in corrosion treatment and car designs that removed water traps have extended the potential life of a car. Classic cars can also be much less damaging to the environment: the average classic emits 563kg of CO2 per year, yet an average passenger car has a 6.8-tonne carbon footprint immediately after production. This means that a new car would need to be run for several years to achieve a similar climate ‘payback’, and older cars can be attractive for consumers if they meet their needs reliably.
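
    The rough arithmetic behind that ‘payback’ claim, using the figures above (a sketch that ignores the new car’s own running emissions, so it understates the gap):

        # Years of classic car use that equal one new car's production footprint.
        production_footprint_kg = 6800   # average new car, immediately after production
        classic_annual_kg = 563          # average classic car, per year of driving
        print(production_footprint_kg / classic_annual_kg)  # ~12 years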

    Vehicle affordability.

    Over the time I have held a driving licence, the secondhand car market went from being the dumping ground for fleet sales to the Alice in Wonderland after-effects of the lease agreements that drove new and nearly-new car sales. The financialisation of the car market isn’t without risk, and has been considered a possible future hazard in the way that consumer finance and home mortgages have been in the past.

    Yamaha DX7II-D

    So what do car touchscreens have to do with synthesisers?

    In order to answer that question, we need to go back in time. Massive steps forward in electronics inspired research into different ways of creating sounds, based on the modulation techniques that had been used in radio broadcast signals for decades. In the 1960s, digital technology was also moving forward and provided a more stable base for FM synthesis. Stanford University scholars worked with Yamaha technologists to turn FM synthesis into a product.
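
    FM synthesis itself is compact enough to sketch: a sine-wave carrier whose phase is modulated by a second sine wave. The frequencies and modulation index below are illustrative, not the DX7’s actual operator settings:

        # Two-operator FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))
        import math, struct, wave

        SAMPLE_RATE = 44100
        carrier_hz, modulator_hz, mod_index = 440.0, 220.0, 2.0

        frames = bytearray()
        for n in range(SAMPLE_RATE):                 # one second of audio
            t = n / SAMPLE_RATE
            y = math.sin(2 * math.pi * carrier_hz * t
                         + mod_index * math.sin(2 * math.pi * modulator_hz * t))
            frames += struct.pack('<h', int(y * 32767 * 0.8))   # 16-bit PCM sample

        with wave.open('fm_tone.wav', 'wb') as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SAMPLE_RATE)
            f.writeframes(bytes(frames))

    Varying the modulation index over time is what gives FM its characteristic evolving, bell-like timbres.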

    The first instrument it appeared in was the New England Digital Synclavier, which had licensed the technology from Yamaha. The Synclavier was a couple of racks full of computer storage, a processing unit, cooling and audio interfaces, all connected up to a monitor and a keyboard. Over time the Synclavier would evolve into the ancestor of the modern digital audio workstation (DAW), like Apple’s Logic Pro app.

    1983 comes around and Yamaha is finally ready to launch a mainstream product featuring FM synthesis. It also features MIDI, a standard that is still used to control musical instruments (and other studio equipment) remotely. Roland had already released a couple of devices that supported the standard.
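
    Part of MIDI’s longevity is its simplicity: a note-on message is just three bytes. A minimal sketch of building one by hand (the channel, note and velocity values are illustrative):

        # A MIDI note-on message: status byte (0x90 | channel), note number, velocity.
        def note_on(channel, note, velocity):
            return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

        print(note_on(0, 60, 100).hex())  # '903c64' - middle C on channel 1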

    But Yamaha’s DX7 proved to be the blockbuster product. At that time electronic music was a niche interest, and instrument manufacturers would be very lucky to sell 50,000 units of a synthesiser. Yamaha sold over 300,000 DX7 units in the first three years of its seven-year life, plus tens of thousands more devices across the wider DX and TX families.

    Digital changes the interface

    Analogue synthesisers were full of switches and dials. The Oberheim synthesiser pictured above isn’t that different from its analogue predecessors from five decades prior.

    The DX7 was a very different beast. It couldn’t have a dial or button for every parameter – rather like modern car screens with endless settings. So it had a few buttons which changed their function depending on the mode the synthesiser was in. A few earlier models with a similarly spartan approach had seen limited sales, but the DX7 mainstreamed the idea.

    A few things happened that might be instructive for how we now think about car screens:

    • Other synthesiser manufacturers like Roland and Korg copied Yamaha’s approach to interface design. Some of them tried using devices like jog wheels to provide additional intuitive control, in a similar way conceptually to BMW’s iDrive interface for its car screens.
    • Software companies looked to fill the gap and provide a better interface, which eventually begat modern digital audio workstation applications like Logic Pro. We might see similar developments sold for cars, and this is likely the opportunity that the likes of Apple CarPlay see. There is consumer demand to support it.
    • Despite the obvious benefits of soft-button-driven instruments, there remained a strong demand for analogue controls. Now there is strong demand for tactile interface controls and old-style synthesis. In the car world, that would equate to providing car enthusiasts with analogue experiences while the mainstream goes to the Tesla minimalism of the car screen. We can see this in the design of Hyundai’s analogue-feeling performance electric cars, which try to emulate a manual gearbox, and Ineos’ switchgear, which owes more to aviation than automotive manufacturing.

    You can find similar posts to this here.

    More information

    Average Age of Cars in Great Britain | NimbleFins

    In praise of pushbuttons (and other physical controls) | Car Design News

    Car pollution facts: from production to disposal, what impact do our cars have on the planet? | Auto Express

    MIDI Quest Pro Yamaha DX7 software editor

    Patchbase Yamaha DX7 software editor

  • General Magic

    General Magic has a reputation for being the technology equivalent of the Jordan-era Chicago Bulls, yet it ended up going nowhere. I never got to see the device in person; it was only available in Japan and the US. It’s as famous for its alumni as it is for its commercial failure.

    Apple "Paradigm" project/General Magic/Sony "Magic Link" PDA

    This is captured in a documentary of the same name. For students of Silicon Valley history and Apple fanboys, the team at General Magic reads like a who’s who of the great and the good in software development and engineering.

    General Magic started within Apple with a brief that sounds eerily like what I would have expected for the iPhone decades later.

    “A tiny computer, a phone, a very personal object . . . It must be beautiful. It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it’s not being used… Once you use it you won’t be able to live without it.”

    Sullivan M. (July 26, 2018) “General Magic” captures the legendary Apple offshoot that foresaw the mobile revolution. (United States) Fast Company magazine

    The opening sequence tells you what the documentary is going to lay out. Over carefully curated images of Silicon Valley campuses, Segway riders and the cute bug-like Google autonomous vehicle, a voice talks about success and failure: that failure is part of the process of development, and that General Magic has a legendary status as a precursor to our always-on modern world – while the company failed, the ideas didn’t.

    The genesis of the spirit of General Magic goes back to the development and launch of the Macintosh, with its vision of making computers accessible. The team looked around for the next product that would have a similar vision and impact. The Mac had put some of these developers on the front cover of Rolling Stone – they were literally rockstars.

    You get a tale of dedication and excitement that revolved around a pied-piper-type project leader, Marc Porat, who came to the table with a remarkably complete vision and concept of where General Magic (and the world) would be heading. The archive footage of the offices, with their cool early-to-mid-1990s Apple office products, still amazes now. The look of the people in that footage makes my Yahoo! colleagues of a decade later seem corporate and uptight by comparison.

    Veteran journalist Kara Swisher said that she started following the company because it was ‘the start of mobile computing, this is where it leads’.

    What sets the documentary apart is that it tapped into footage shot by filmmaker David Hoffman, who was hired to capture the product development process. The protagonists then provide a voiceover to their younger selves. Their idealism reaches back to the spirit of the 1960s. You can see how touchscreens and skeuomorphic metaphors were created, and even animated emoticons.

    I’ve never known a development process with so much documentary footage. Having been inside this kind of process myself, I can say the General Magic documentary portrays a process and dynamics that haven’t changed that much.

    The alliance that the startup assembled – including AT&T, Apple, Motorola and Sony – made sense given the power that Microsoft had behind it; it’s hard to explain now how dominant and aggressive Microsoft was in the technology space. Apple’s Newton was seen as a complete betrayal, and John Sculley, who is interviewed in the documentary, comes across worse than he would have liked.

    The documentary also has access to the 1994 promotional film in which General Magic publicly discussed the concept of ‘The Cloud’, i.e. what we now think of as modern web infrastructure – but the documentary doesn’t dwell on this provable claim.

    Goldman Sachs was a key enabler: the idea of the concept IPO set the precedent for Netscape, Uber, WeWork and the SPAC fever of the 2020s.

    At a time when more than one thing was changing the technology environment, General Magic pursued the walled garden of its private cloud and missed the web for a while. Part of this was down to its relationship with AT&T.

    The documentary covers how project management problems dogged the project. Part of the problem was that perfectionism won out over the art of the possible, at the expense of focusing on the critical items that needed to be done – until the panic of having to ship set in.

    It’s about getting the balance right between ‘move fast and break things’ and crafting a jewel of a product.

    But shipping wasn’t enough: the execution of shopper marketing and sales training was a disaster. The defeat was hard to take given the grand vision. But the ultimate lesson is that YOU are not representative of the mainstream market.

    The documentary post-mortem, featuring thinkers like Kara Swisher and Paul Saffo, points out the lack of supporting infrastructure, which would take years to catch up to where General Magic’s Magic Link had gone. Saffo uses a surfing analogy – catching the right wave at the right time – that I had previously read in Bob Cringely’s Accidental Empires.

    John Sculley made similar mistakes to the General Magic team, which resulted in him being fired from Apple. Sculley makes the very human admission that being fired took him about 15 years to recover from personally.

    IBM Simon

    The documentary gives a lot of the credit (maybe too much of it) to General Magic as the progenitor of what we now think of as smartphones. The reality, as with other inventions, is that innovation has its time and several possible ‘inventors’ – what author Kevin Kelly would call ‘the technium’. This is the idea that technological progression is inevitable and stands on the layers of what has gone before, like fossils found inside rocks several feet deep. For instance, IBM created a device called Simon, an early ‘smartphone’ that sold about 50,000 units to BellSouth customers in the six months it was on the market. Motorola – a General Magic partner – also launched a smartphone version of the Apple Newton, the Motorola Marco, in January 1995, and there were more devices around the same time.

    Reality is messy and certainly not the clean, direct line that the General Magic documentary portrays; even the Newton was only part of the story.

    The Wonder Years

    I was thinking about what I liked so much about the General Magic documentary. I immediately thought of how it reminded me of falling in love with the nascent internet and technology, which then brought me to the start of my agency career working with Palm (the company that eventually helped kill off General Magic’s product ambitions) and the Franklin REX, which came out of synchronisation pioneer Starfish Software.

    But it was deeper than that. The Silicon Valley portrayed in the documentary wasn’t the dystopian hellscape of platform firms, generation rent, toxic tech-bro culture and ‘churn and burn’ HR practices. Instead it represented the halcyon past of Silicon Valley portrayed in books like Where Wizards Stay Up Late, Fire In The Valley and Insanely Great, where talented people, motivated by a fantastic vision and a user-centred mission, worked miracles. The darkness of fatigue and god knows what else is largely hidden by a Wonder Years-style feel-good nostalgia. Maybe it gives us hope again in the tech sector, despite Peter Thiel, Mark Zuckerberg, Tim Cook and Elon Musk? Maybe that hope might inspire something great again?

    Marc Porat’s personal tragedy and Tony Fadell’s business failure bring a hint of the real world through the door. The documentary uses Fadell’s link with the iPod and iPhone as a point of redemption, resilience, perseverance and vindication for General Magic.

    There’s also a cautionary tale full of lessons learned for new entrepreneurs, who often get the vision thing but forget about the details. More on General Magic here.

    More reviews here.

  • AI and creativity

    Why AI and creativity?

    This post on AI and creativity was inspired by experiments being done at work by a member of our design department. They had been using Midjourney to create images within a minute of receiving an initial set of words as creative prompts.

    For example, we created this surreal image, which sits somewhere between the Christian kitsch familiar to Catholic households around the world and a touch of Syd Mead‘s visual futurism. It came from the prompt:

    Jesus fighting alongside the US Air Force
    https://flic.kr/p/2nSGwLB

    Other efforts weren’t successful: we had faces featuring eyes with two pupils, and when Midjourney tried to render round shapes it didn’t know when to stop – hands would go on and on as a twisted mass of flesh. This could be resolved by creating a human character in a service like MetaHuman and uploading that to Midjourney as a base model instead.

    How do neural networks drive AI and creativity?

    Midjourney works using two neural networks. The first renders an image. The second compares the image in progress to exemplars from a bank of images. There is a back-and-forth exchange between the two networks until a number of variants are rendered. At this point the human operator is given a choice, or they can have further variations created if the originals don’t meet their requirements.
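
    The back-and-forth described above is essentially the training loop of a generative adversarial network (GAN): one network generates, the other judges against real exemplars. Below is a minimal PyTorch sketch of that loop – a toy for illustration only, not Midjourney’s actual (proprietary) pipeline; the layer sizes and image dimensions are invented:

    ```python
    # Toy GAN loop: a generator renders images, a discriminator compares
    # them against real exemplars, and the two improve in tandem.
    # Illustrative only -- sizes and architecture are assumptions.
    import torch
    import torch.nn as nn

    LATENT = 64     # size of the random input vector (assumption)
    IMG = 28 * 28   # toy flattened image size (assumption)

    # Network 1: renders an image from random noise.
    generator = nn.Sequential(
        nn.Linear(LATENT, 256), nn.ReLU(),
        nn.Linear(256, IMG), nn.Tanh(),
    )

    # Network 2: scores how much an image resembles the exemplars.
    discriminator = nn.Sequential(
        nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    def training_step(real_images: torch.Tensor) -> None:
        """One round of the back-and-forth exchange between the networks."""
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # Discriminator: learn to tell exemplars from rendered images.
        fakes = generator(torch.randn(batch, LATENT))
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fakes.detach()), fake_labels))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator: learn to render images the discriminator accepts as real.
        g_loss = loss_fn(discriminator(fakes), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # One step on a batch of stand-in 'exemplars', just to show the call:
    training_step(torch.randn(8, IMG))
    ```

    For what it’s worth, current image generators are widely reported to use diffusion models rather than GANs, but the two-network intuition above is the one this paragraph describes.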

    These images can be rendered in high resolution allowing for an amazing level of detail.

    Dystopian vibes

    The dystopian feel of the use of AI and creativity is down to a few different factors.

    The first reason is that dystopia is at the centre of our cultural zeitgeist in the west. Documentary maker Adam Curtis covers it really well in this discussion with the Joe Politics channel on YouTube. This zeitgeist affects the type of imagery that the AI has available to draw upon and the kind of prompts that people use to create AI images.

    Secondly, the use of AI to ‘create’ something lacks the feeling and collective emotional experiences of a real person. Those elements can’t be captured in prompts, which is why the images land with the sensation of a dead fish.

    What does AI and creativity mean for agencies?

    Concepting

    The most immediate impact could be in rapid concepting, analogous to rapid prototyping in manufacturing design. Creative teams would still need to conceive of ideas, but concepts could then be brought to life in minutes.

    It’s as far from the black marker and pad that creative directors traditionally used as paste-up graphic design techniques were from the desktop publishing software that started to impact the design world in the mid-to-late 1980s.

    News illustrations and graphic novels show the way

    One of the first areas to be really shaken up by AI and creativity has been the world of the political cartoonist and news illustrator. At the moment newspapers and news magazines pay skilled artists to develop conceptual illustrations that convey a political idea.

    A good example of this is the covers of The Economist. However, things are starting to change. US political publication The Bulwark has already started using AI-generated illustrations made with Midjourney. Midjourney has also been used to create graphic novels.

    One could easily see how this might be extended into business-to-business marketing for intangible products like software and services.

    Production

    The hyper-realistic effects that AI can produce are likely to inspire a desire in clients to use them more often for cost-effective production. At the moment, however, the results can be very hit and miss. There are problems with hands, faces, interlocking round shapes and a ‘dead’ look to the work.

    Social implications

    At first we had a discussion about what happens to designers. Were they doomed? Should there be a universal income for them, or should they march in the streets? How could the technology be stopped?

    I wasn’t exactly a ray of sunshine in this discussion. I pointed out that over the past few centuries, capital has won out over labour every time; people only kept their jobs if they cost less than the process to automate their tasks.

    Globalisation versus automation

    London, like a few other cities, has ad agency work done that is designed for global audiences. At the moment I work on campaigns designed for markets including the UK and Ireland, Spain, Italy, the US, Vietnam and Japan. Globalisation seems to have benefited hub cities rather than moved the work to cheaper locales.

    This is in sharp contrast to what happened to British manufacturing, where whole sectors largely disappeared:

    • Steel making
    • Textile mills
    • Shipbuilding
    • Car manufacturing
    • Chemical industry
    • Pharmaceutical manufacturing
    • Engineering and fabrication

    Where capacity was spared, it was largely down to the UK being a good point of entry into the European Union. As the number of member countries expanded, new manufacturing jobs moved east, and workers moved west to fill workforce needs in established UK factories, depressing salaries.

    Research shows that globalization only accounts for 13 percent of job loss in US manufacturing while 88 percent of losses were from automation including robotic manufacturing. In fact, availability of ever-cheaper automation options combined with uncertainty in the global supply chain has led to a resurgence in “onshoring” manufacturing.

    Debunking the Myths: Job Loss, Globalization and Automation by Greg Council Parascript (April 14, 2017).

    Automation has been the quiet destroyer of roles. During the 1960s businesses had typing pools and secretaries. Many of these roles disappeared due to desktop computers, office productivity software and the democratisation of touch typing as a skill.

    The Technium

    Even if labour had won out over capital in the UK, there is no guarantee it would have been able to stop the march of technology. Kevin Kelly, in his book What Technology Wants, shares the idea of ‘The Technium’: that technology has a momentum of its own, building on previous progress. Kelly goes as far as to describe it as a super-organism of technology. He believes it exerts a force that is partly cultural, with technology influencing culture and in turn being influenced by it. All of which means adaptation and accommodation are likely to be the way forward for now.

    Adaptation

    While people don’t realise it, we’ve all been using what could be termed AI for decades:

    • Autofocus on a camera
    • Losing ‘shake’ in camcorder and smartphone video
    • Programmes in a microwave that attempt to cook a casserole or baked potato
    • Predictive text (although it seems to have become more stupid over time)
    • Siri, Alexa and Google’s various search functions

    In the case of a designer, it would also include tools like the ‘lasso’ function in Photoshop that automatically cuts around objects, including frizzy hair on a model. So it’s a bit late in the day for people to get squeamish about AI and creativity. Dominant creative software company Adobe sees AI more as a technology to augment designers in their work than to replace them. Much of Adobe’s current focus seems to be on easing the on-ramp for new users of its software packages.

    There will be more of a challenge for supporting professions like photographers. Fashion brand Hugo Boss is looking to AI-powered 3D design to aid product design and rendering, threatening product photography for websites, lookbooks and catalogues.

    Limitations of AI and Creativity

    One of the things a colleague said that really stuck with me was: if an AI told the world’s funniest joke, would it know that it was funny? Software is being used to track emotional responses, but it wouldn’t necessarily know why something was funny.

    An AI can’t be coded with a summation of life experiences; it can analyse emotions but, as far as we know, doesn’t experience them yet. This probably explains why Studio Ghibli and Disney animation feels like it has much more life in it than the best AI renders.

    Is it art?

    Auction houses have sold works generated using AI, but is the art in the creation of the work, or in the decision to use an AI to do the work and the thinking of the artist behind that idea? AI can produce works like those that have existed before and mash up genres and ideas, but it wouldn’t (at the moment) be able to create something completely novel through a leap of abstraction, such as Marcel Duchamp’s sculpture ‘Fountain‘.

    AI images can be nice, but do they involve an illusion of creativity? Everything that appears in an AI image depends on the inputs the AI receives and the content in the image banks it uses as a reference – which is why AIs often ‘sign’ works with an indecipherable script.

    Do artists’ styles have to be better protected as part of their IP, as well as their works?

    IP issues go beyond artists. We created an artistic rendering of the Pokémon character ‘Pikachu’ in Midjourney using the prompt:

    Definitely not a pikachu

    In conclusion

    If you’re a creative, these are the four thoughts we eventually got to from the discussion:

    1. In the grand scheme of things, change is the only constant
    2. AI has been changing things and will continue to do so
    3. It is inevitable that there will be some automation and augmentation happening in creative professions such as design
    4. In the words of Douglas Adams’ Hitchhiker’s Guide To The Galaxy, ‘Don’t Panic’ – but be prepared to adapt, learn new skills and develop new areas of expertise

    Suggested reading

    AI-generated images open multiple cans of worms | Axios 

    AI-generated digital art spurs debate about news illustrations | Axios 

    AI Makes 1993 Video Game Look Photorealistic 

    The Push of a Button – by David OReilly – Reminders 

    DALL-E now allows anyone to cash in on AI art, but ownership gets complicated | Quartz

    GitHub – microsoft/AI-For-Beginners: 12 Weeks, 24 Lessons, AI for All! 

    Inceptionism: Going Deeper into Neural Networks

    The Golden age of AI-generated art is here. It’s going to get weird – FT Online

    Novo Nordisk wins over doctors with AI email subject lines — and a human touch – Endpoints News 

    MAX Sneaks – by Kevin Hart & Bria Alexander, Adobe, Inc.

    Demo of AI ad copywriting and art direction for online ads 

    Books

    Harvard Business Review – Artificial Intelligence: The Insights that you need to know (HBR Insights Series)

    Society of Mind by Marvin Minsky

    Artificial Life: A Report from the Frontier Where Computers Meet Biology by Steven Levy

    Accidental Empires: How the Boys of Silicon Valley Make their Millions, Battle Foreign Competition and Still Can’t Get a Date – Robert X. Cringely

    The Inevitable & What Technology Wants by Kevin Kelly

    The Quest for Artificial Intelligence: A History of Ideas and Achievements by Nils J Nilsson

    8vo: On The Outside by Mark Holt & Hamish Muir

    Hackers: Heroes of the Computer Revolution by Steven Levy

  • Kris Wu and other news

    Kris Wu

    Canadian-Chinese performer Kris Wu was in a Korean group before embarking on a solo career in China as a rap artist. He became a TV star as a judge on Rap of China, a TV talent show. Kris Wu has also appeared in some Chinese films, mostly wushu films that look and feel more like a computer game. The scandal Kris Wu finds himself in would be a career-finisher in the west, but it will be interesting to see what happens in China.

    Actor Kris Wu Accused of Predatory Behavior | HYPEBAE – Kris Wu has endorsement deals in place with Louis Vuitton, BVLGARI, Porsche, Lancôme, L’Oreal and Kans in China, along with other companies like Master Kong Ice Tea, Tuborg Brewery and more. Kans, a Shanghai-based beauty brand owned by C-beauty giant Chicmax, was the first to cut ties with Wu, announcing on July 18 that it had “terminated Wu’s endorsement contract”. Meanwhile, Porsche, Master Kong Ice Tea, Vatti and King of Glory have deleted all their posts of Wu on Weibo. Louis Vuitton temporarily archived its Weibo posts of Wu but put them back on its feed not long after.

    Kris Wu: Brands Drop Pop Star Amid China Misconduct Allegations – Variety – the interesting thing is that Chinese brands dropped Kris Wu first, before western brands, despite western brands’ exposure to the #metoo movement. Note: the Chinese police have since found that Wu did avail himself of the casting couch and that the accuser went public to gain fame – make of it what you will.

    China

    The Lab-Leak Theory: Inside the Fight to Uncover COVID-19’s Origins | Vanity Fair – Wuhan is also home to China’s foremost coronavirus research laboratory, housing one of the world’s largest collections of bat samples and bat-virus strains. The Wuhan Institute of Virology’s lead coronavirus researcher, Shi Zhengli, was among the first to identify horseshoe bats as the natural reservoirs for SARS-CoV, the virus that sparked an outbreak in 2002, killing 774 people and sickening more than 8,000 globally. After SARS, bats became a major subject of study for virologists around the world, and Shi became known in China as “Bat Woman” for her fearless exploration of their caves to collect samples. More recently, Shi and her colleagues at the WIV have performed high-profile experiments that made pathogens more infectious. Such research, known as “gain-of-function,” has generated heated controversy among virologists. – this shows you how bolloxed China’s soft power is

    What if America Delists Chinese Firms? by Shang-Jin Wei – Project Syndicate – Chinese firms’ most egregious accounting frauds tend to be discovered by professional short-sellers using techniques – such as undercover company visits – that auditing firms do not employ.

    Consumer behaviour

    The New-Style Family Values Underpinning the ‘China Dream’ – the emergence of a new kind of “familism” — an ideology in which family interests take precedence over individual ones. Yan, a professor of anthropology at the University of California, Los Angeles, sees this “neo-familism” as distinct from traditional Chinese familism, which revolved around ancestor worship and the perpetuation of one’s lineage. Success under neo-familism is defined in material terms such as wealth and consumption.

    This year, Yan edited “Chinese Families Upside Down,” a collection of essays from academics that seeks to go beyond the conventional focus on filial piety to examine the new dynamics of intergenerational relations under neo-familism. Speaking with Sixth Tone over the phone, Yan talked about how and why family structures have received an unprecedented degree of high-level policy attention in recent years, the changes taking place in Chinese families, and the growing anxiety felt by parents and children in an increasingly risk-laden society

    We’re All Teenage Girls Now | EE Times – During the early days of mobile telephony, I was living in Tokyo, where I observed schoolgirls glued to their clunky DoCoMos, learning the obligations and pitfalls of 24-hour texting, taking proto-selfies with their primitive photo apps and flocking — like moths to a streetlight — to Harajuku and Akihabara to blow their allowance on the latest advanced purveyor of girl gab.

    Fast forward to this month, in Paris. I was on the suburban train from Charles de Gaulle Airport to the heart of the city. I looked up from the book I was reading…  

    …and I looked around. There were perhaps forty people in the car, including a busker rendering “Au Ciel de Paris” on a battered accordion. My trainmates represented all shapes, sizes and colors of adulthood between the ages of 25 and 70-plus. Of this random cadre, not including the accordionist, three-quarters were clutching slim rectangles of metal and glass, gazing raptly downward into a tiny screen at words, photos, videos, news, games, mail, etc. One woman in her thirties, impeccable in hair, clothing and makeup, never once — as I observed — raised her eyes from her phone through the entire 45-minute haul from airport to place St. Michel. Her thumbs, when she set them to messaging, were a blur

    The first Olympic gold medal in skateboarding went to Japan — Quartz – the big issue for skateboarding as a sport is how the Olympics might affect its culture and related categories. Despite its aspirations, the Olympics isn’t the X-Games.

    Design

    Humble Utility Poles Have a Long-Term Infrastructure Maintenance Plan | EE Times

    After 3D printing now comes 4D printing | Smart2Zero 

    Economics

    EDA, IC Manufacturing Gear Sales Hit Records | EE Times 

    Why Biden Seems Worse to China Than Trump – The New York Times

    In defense of sanctions – by Kevin Carrico – NSL can’t cancel me 

    China is keeping its borders closed, and turning inward | The Economist 

    Will China Retaliate Against U.S. Chip Sanctions? – Lawfare 

    Hong Kong

    Hong Kong leader Carrie Lam shrugs off scenes of residents leaving at airport, says city has ‘prosperous future’ ahead | South China Morning Post – Hong Kong’s leader on Tuesday brushed off recent scenes at the airport suggesting an exodus of residents, adding that she would tell anyone considering leaving that the city would continue to prosper with Beijing’s support and the help of the national security law – this is quite shocking. I don’t think I have seen a country allow a wilful brain drain in this way. Medical staff, teachers and the middle classes are the people slipping away to supermarket jobs in the UK.

    Now News exec. resigns, cites ‘turbulent times’ for Hong Kong media – reports – Now News is a cable news channel

    ‘Hong Kong Tram Green’ now a recognised colour | Hong Kong Free Press HKFP – great to see the ‘dingding’ having its iconic nature be recognised

    Over 90% of Hong Kong industrial estate tenants facing eviction oppose redevelopment plan | Hong Kong Free Press HKFP 

    Explainer: From ‘violent attack’ to ‘gang fight’ – How the official account of the Yuen Long mob attack changed | Hong Kong Free Press HKFP – to be fair, other governments spin as well. But this shattered the illusion of Hong Kong being a place where the rule of law held: the Hong Kong Police were no longer ‘Asia’s finest’ but instead seen as Hong Kong’s biggest gang, tight with the triads like the bad old days of the 1960s and early 1970s. And then you have the ‘wolves and sheep’ book prosecution: National security law: Hong Kong police arrest 5 for allegedly conspiring to distribute seditious children’s books | South China Morning Post

    Interview: London Mayor Sadiq Khan rolls out welcome mat for Hongkongers, HK$9.6m fund to help visa holders settle

    Billionaires and a Hong Kong bank chief handed seats on powerful new election body | Hong Kong Free Press HKFP – the HSBC appointee is retiring from HSBC

    Ideas

    How cryptocurrency empowered the far-right – The Face 

    Outrage reaching boiling point as virus rages out of control – by Andrew MacGregor Marshall – Secret Siam – just wow, interesting article about COVID in Thailand. I imagine that this picture has played out to varying degrees around the world

    The Year Modern Sport Watches Were Born | Gear Patrol – it’s interesting that all these iconic watch designs appeared in a single year, 1953. Every idea has its time and builds on previous innovations – an empirical illustration of Kevin Kelly’s idea of the ‘technium’.

    The Work of Culture – Made in China Journal – the commercialisation and bureaucratisation of academia have led to a shift from ‘poetic technologies’ to ‘bureaucratic technologies’, which is one of the reasons why today we do not go around on those flying cars promised in the science fiction of the past century. As universities are bloated with ‘bullshit jobs’ and run by a managerial class that pits researchers against each other through countless rankings and evaluations, the very idea of academia as a place for pursuing groundbreaking ideas dies (Graeber 2015: 135; 2018). As conformity and predictability come to be extolled as cardinal virtues, the purpose of the university increasingly becomes simply to confirm the obvious, develop technologies and knowledge of immediate relevance for the market, and exact astronomically high fees from students under the pretence of providing them with vocational training.

    Luxury

    Coach CEO talks China: From digital-first to staff as KOLs | Vogue Business – “About two years ago, even before the virus, we developed a very extensive programme to train each of our sales into KOLs so that we can leverage not only professional KOLs but also have hundreds of our own brand ambassadors,” explains Bozec.

    Jackson Wang and Palm Angels’s Ragazzi on their new collab, making celebrity lines work | Vogue Business 

    Marketing

    Subway Tuna Is 100% REAL Wild-Caught Tuna – Subway launches response site to debate on ‘is their tuna real tuna?’

    Marketing imperatives for a cookieless world | WARC – Sophisticated marketers will attempt a shift to contextual and moment-based communication – a bit of time travel by brand custodians to the pre-internet era, where passion group targeting and focus on context might resurrect. There will likely be a pivot from the “bottom of the funnel” performance optimization to “top of the funnel” preference strategies. As the levers at the lower funnel weaken, it will become imperative to move the needle to build brand salience and affinity: bringing the right audience to their owned website, capturing first-party data, building a strong CRM capability, and recalibrating emphasis on performance media to performance creative. The emergence of a third-party cookieless world presents an opportunity for brand marketers to truly own the consumer journey via meaningful and relevant communication strategies.

    Long-Term Business Vitality Should Outweigh Short-Term Sales Gains – Nielsen – great essay and research on long termism versus short termism in marketing

    Ehrenberg-Bass: 95% of B2B buyers are not in the market for your products

    Male advertisers win sex discrimination case | Financial Times – unfortunate use of the word ‘obliterate’ in Jo Wallace’s presentation. WPP have got to hope to hell that Mark Read’s comments on older staff aren’t brought up.

    Media

    Official Secrets Act reform could see journalists treated like spies | Press Gazette

    Online

    Why do people on Tinder list their Instagram? | British GQ – not terribly surprising when you think about the dynamics of Tinder. This has implications for Tinder’s business model of friendship and dating based on buying premium services (visibility, bundles of super likes, the ability to rewind and re-examine a profile).

    How to be an Instagram influencer | British GQ

    How Neopets Paved the Road to the Metaverse – by Rex Woodbury – Digital Native 

    Security

    US accuses China of masterminding cyber attacks worldwide | Financial Times 

    Who is Mr Gu? – Intrusion Truth – interesting investigation into Gu Jian, a former PLA member who is an information security academic and associated with Hainan Xiandun, one of a network of front companies for APT activity.

    Operation Fox Hunt: How China Exports Repression Using a Network of Spies Hidden in Plain Sight — ProPublica 

    The Huawei Moment – Center for Security and Emerging Technology

    Technology

    AI tool cuts 3nm chip design times 

    ABB to buy Spanish autonomous robotics group | EENews Europe – I wouldn’t have set up the factory in China

    ARM shows first plastic M0+ microcontroller 

    Web of no web

    Robotaxis: have Google and Amazon backed the wrong technology? | Financial Times

    Wireless

    What the Orange Dot on Your iPhone Means | Gear Patrol