Search results for: “unix”

  • Intelligence per watt

    My thinking on the concept of intelligence per watt started as bullets in my notebook. It was more of a timeline than anything else at first and provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt. 

    TL;DR (too long, didn’t read)

Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay of developments in telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering.

    Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.  

    Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.  

    A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits. 

    Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.  

As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.

    Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.

    Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use. 

    Integration

    In 1926 or 27, Loewe (now better known for their high-end televisions) created the 3NF[i].

While not a computer, it integrated several radio parts in one glass-envelope vacuum valve: three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components went inside their own glass tubes. Normally each triode would be inside its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.

    Post-war scientific boom

Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas only became practical once breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on a single piece of silicon to create the IC.

Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define an IC, with what is now called a monolithic IC being considered a ‘true’ one. Kilby’s version wasn’t a true monolithic IC. As with most inventions, it was the child of several interconnected ideas that coalesced around a given point in time. In the case of ICs, it happened in the midst of materials and technology developments including data storage and computational solutions such as the idea of virtual memory through to the first solar cells.

Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].

TTL (transistor–transistor logic) circuitry was invented at TRW in 1961. The company licensed it out for use in data processing and communications – propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems still rely on the mainframe as a concept.

    AI’s early steps 


What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on the hypothesis that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.

    Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:

• Reasoning as search – essentially a step-wise trial-and-error approach to problem solving that was compared to wandering through a maze and back-tracking if a dead end was found (see the sketch after this list).
    • Natural language – where related phrases existed within a structured network. 
    • Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer. 
    • Single layer neural networks – to do rudimentary image recognition. 
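To make the ‘reasoning as search’ idea concrete, here is a minimal sketch of depth-first search with back-tracking through a tiny maze. The maze layout and function names are my own illustration, not taken from any period system.

```python
# Minimal 'reasoning as search' sketch: depth-first search with back-tracking.
# The maze and names are illustrative only. 0 = open cell, 1 = wall.
MAZE = [
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
]
GOAL = (2, 2)

def solve(position=(0, 0), path=None):
    path = path or [position]
    if position == GOAL:
        return path
    row, col = position
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (row + dr, col + dc)
        r, c = nxt
        in_bounds = 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0])
        if in_bounds and MAZE[r][c] == 0 and nxt not in path:
            result = solve(nxt, path + [nxt])
            if result:          # a route to the goal was found
                return result
    return None                 # dead end: back-track to the previous cell

print(solve())  # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

The back-tracking happens when a recursive call returns None: the search abandons that branch and tries the next direction, which is the ‘wandering through a maze’ behaviour described above.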

By the time the early 1970s came around, AI researchers ran into a number of problems, some of which still plague the field to this day:

• Symbolic AI wasn’t fit for purpose when it came to solving many real-world tasks, like crossing a crowded room.
    • Trying to capture imprecise concepts with precise language.
    • Commonsense knowledge was vast and difficult to encode. 
    • Intractability – many problems require an exponential amount of computing time. 
    • Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems. 

By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to major centres researching AI was reduced by both the US and UK governments. Despite the reduction of funding to the major centres, work continued elsewhere.

    Mini-computers and pocket calculators

ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first minicomputer, the PDP-8, in 1965. The low cost of mini-computers allowed them to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.

    A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments.  It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.

    LISP machines and PCs

AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines continued on and surpassed them as the cost of computing dropped due to mass production.

The rise and decline of LISP machines was due not only to Moore’s Law, but also to Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so. Tsugio Makimoto originally observed 10-year pivots between standardised semiconductor processors and customised processors[v]. The rise of personal computing drove a pivot towards standardised architectures.

PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.

Personal computing started in businesses when office workers brought in a computer to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.

    Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own. 

Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to using correction fluid.

    A Bicycle for the Mind

Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across in everything from briefing product teams to press interviews. A 1990 filmed interview with Steve Jobs articulates the context of the ‘bicycle for the mind’ saying particularly well.

    In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].

The ‘bicycle for the mind’ concept was repeated in early Apple advertisements of the time[vii] and even informed the Macintosh project codename[viii].

    Jobs articulated a few key concepts. 

• Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, you programmed a computer if you bought one, which was the reason why early personal computer owners in the UK went on to birth a thriving games software industry including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
• The idea of a personal, individual computing device (rather than a shared resource). My own computer builds on years of how I have grown to adapt and use my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac. My Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
    • At the time Jobs originally did the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea for something greater. 

Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].

    Networks

Work took a leap forward with office networked computers, pioneered by Apple’s Macintosh Office[xi]. This was soon overtaken by competitors. Networking facilitated workflow within an office and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.

At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].

In the same way that the networked office radically changed white collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repair men to travelling sales people. Before then, if they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.

Often messages were taken by a receptionist in their office, if they had one, or more likely by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.

    Fuzzy logic 

The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.

A rice cooker with fuzzy logic, 3,000 yen, available end of June.

Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:

    “Some of the fuzzy concepts may be valid in the U.S.,”

    “The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”

    “But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “

By the end of the 1990s, fuzzy logic was embedded in various consumer devices:

• Air-conditioner units – understood the room, the temperature difference between inside and outside, and the humidity, then switched on and off to balance cooling and energy efficiency.
• CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
• Dishwashers – understood how many dishes were loaded and their type of dirt, then adjusted the wash programme.
• Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
• TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer was sitting from the TV set.
• Vacuum cleaners – adjusted vacuum power as they moved from carpeted to hard floors.
• Video cameras – compensated for the movement of the camera to reduce blurred images.

Fuzzy logic products were sold on their benefits, with the technology concealed from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple dedicated purposes, it could rely on small, lower-power specialist chips[xvi] offering a reasonable amount of intelligence per watt, some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
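To give a feel for how this differs from a simple thermostat-style threshold, here is a minimal fuzzy-control sketch. The membership breakpoints, power levels and rice-cooker framing are my own illustrative assumptions, not taken from any of the products above.

```python
# Minimal fuzzy-logic sketch: blend overlapping temperature 'memberships' into a
# heater power level, rather than switching hard at a single threshold.
# Breakpoints and output levels are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership: 0 at a, peaks at 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def heater_power(temp_c):
    # How strongly each rule applies at this temperature.
    cold = tri(temp_c, -40.0, 20.0, 80.0)   # pot well below cooking temperature
    warm = tri(temp_c, 60.0, 85.0, 100.0)   # approaching the boil
    hot = tri(temp_c, 95.0, 110.0, 125.0)   # at or past the boil
    strengths = (cold, warm, hot)
    outputs = (1.0, 0.4, 0.0)               # each rule proposes a power level
    total = sum(strengths)
    if total == 0.0:
        return 0.0
    # Blend the proposals, weighted by rule strength (Sugeno-style defuzzification).
    return sum(s * o for s, o in zip(strengths, outputs)) / total

for temp in (30, 70, 90, 105):
    print(f"{temp}C -> heater at {heater_power(temp):.0%}")
```

The output ramps smoothly from full power through partial power to off as the temperature rises, which is the ‘reacts like a person’ behaviour that made these controllers feel clever on such modest hardware.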

    Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail. 

    Decision support systems & AI in business

Decision support systems or Business Information Systems were being used in large corporates by the early 1990s. The techniques used were varied but some used rules-based systems. These were used in at least some capacity to reduce manual office work tasks. For instance, credit card approvals were processed based on rules that included various factors, including credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.

Three decades on, IBM had a similar problem with its Watson offerings, with a particularly high-profile failure in mission-critical healthcare applications[xviii]. A second problem was that a lot of tasks were ad-hoc in nature, or might require transposing data across disparate systems.

    The rise of the web

    The web changed everything. The underlying technology allowed for dynamic data. 

    Software agents

Early software agents were examples of intelligence within the network. A good example of this was PapriCom. PapriCom had a client on the user’s computer. The software client monitored price changes for products that the customer was interested in buying. The app then notified the user when the monitored price reached a level determined by the customer. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].

The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now.

    Web search

The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the workload of web search amongst them. As search engines started trying to make sense of an exponentially growing web, machine learning became part of the developer tool box.
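As a rough sketch of the clustering idea – split the document index into shards, score the query against each shard in parallel, then merge the best results – here is a toy example. The documents and the scoring function are invented for illustration.

```python
# Toy sharded search: score a query on several index shards in parallel worker
# processes, then merge the top results. Documents and scoring are invented.
from multiprocessing import Pool

SHARDS = [
    {"doc1": "unix workstation history", "doc2": "fuzzy logic rice cooker"},
    {"doc3": "unix and linux clusters", "doc4": "steam engine maintenance"},
]

def score_shard(args):
    query, shard = args
    terms = query.split()
    # Crude relevance score: how many query terms appear in each document.
    return [(doc_id, sum(term in text for term in terms))
            for doc_id, text in shard.items()]

def search(query, shards):
    with Pool(processes=len(shards)) as pool:
        partials = pool.map(score_shard, [(query, shard) for shard in shards])
    merged = [hit for part in partials for hit in part if hit[1] > 0]
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

if __name__ == "__main__":
    print(search("unix clusters", SHARDS))  # [('doc3', 2), ('doc1', 1)]
```

Real search clusters add replication, caching and ranking models on top, but the fan-out-and-merge shape is the same one that made cheap commodity hardware viable for web-scale search.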

Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about the given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].

    All the major search engines looked at how deep learning could help improve search results relevance. 

Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of which were made using optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating together in clusters.

General purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements that were being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products for everything from embedded applications to early mobile / portable devices and computers drove progress in improving intelligence per watt.

    Makimoto’s tsunami back to specialised ICs

    When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory.  From the mid-1990s to about 2010, Makimoto’s predicted phase was stuck in ‘standardisation’. It just worked. But several factors drove the swing back to specialised ICs. 

• Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in improvements in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of key technologies that allow EUV photolithography[xxv]. During this time Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once they had it in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that production processes to improve IC intelligence per watt slowed down and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
• Progress in EDA (electronic design automation). Production process improvements in IC manufacture allowed for an explosion in device complexity as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led to technologists thinking about the idea of very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve the intelligence per watt[xxx].
• Increased focus on portable devices. A combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones, to laptop and tablet computers – disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi]. In reality it would do a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply. Manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to cleaner product designs that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].

Apple’s first in-house designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. It was a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].

New devices also meant new use cases that melded data on the web, on device, and in the real world. I started to see this in action working at Yahoo!, with location data integrated onto photos and social data like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].

Real-world QR codes helped connect online services with the real world, such as mobile payments or reading content online like a restaurant menu or a property listing[xxxvii].

I labelled the web-world integration as a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008, as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online) and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].

    Deep learning boom

Even as there was a post-LISP machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s due to the large amount of computing power deep learning needed. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].

A key factor in deep learning’s adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build Large Language Models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres, with less thought given to the electrical infrastructure required[xliii].

Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later OpenAI built on Google’s work with the generative pre-trained transformer model, better known as GPT[xlvi].
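The core of the transformer, scaled dot-product attention, fits in a few lines. The sketch below uses NumPy with random toy inputs; the important detail is the sequence-length-by-sequence-length score matrix, which is why compute and memory grow quadratically as the context gets longer.

```python
# Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
# Toy sizes; note the (seq_len x seq_len) score matrix, which is why cost grows
# with the square of the sequence length.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # weighted mix of values

seq_len, d_model = 8, 16
rng = np.random.default_rng(0)
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

print(scaled_dot_product_attention(Q, K, V).shape)    # (8, 16)
```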

Since 2018 we’ve seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations for building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].

These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].

The current direction of development in generative AI services is towards raw computing power, rather than a more energy-efficient focus on intelligence per watt.
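To make the metric itself concrete, here is a minimal sketch of how an intelligence-per-watt comparison could be framed. Every number below is a made-up placeholder rather than a measurement.

```python
# Hypothetical 'intelligence per watt' comparison. All figures are placeholders,
# not benchmarks of any real chip or model.

def intelligence_per_watt(tokens_per_second: float, watts: float) -> float:
    """Very crude proxy: useful output (tokens per second) divided by power draw."""
    return tokens_per_second / watts

# A large data-centre accelerator: high throughput, high power draw.
datacentre = intelligence_per_watt(tokens_per_second=2500.0, watts=700.0)

# A phone-class neural processor: much lower throughput, far lower power draw.
on_device = intelligence_per_watt(tokens_per_second=25.0, watts=5.0)

print(f"data centre: {datacentre:.2f} tokens/s per watt")
print(f"on device:   {on_device:.2f} tokens/s per watt")
```

On these invented figures the phone-class part delivers more useful work per watt even though the data-centre part is far more capable in absolute terms – which is the trade-off the rest of this post keeps returning to.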

    Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].

Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom processing chips bought
Amazon | 196,000 | – | 1,300,000
Alphabet (Google) | 169,000 | – | 1,500,000
ByteDance | 230,000 | – | –
Meta | 224,000 | 173,000 | 1,500,000
Microsoft | 485,000 | 96,000 | 200,000
Tencent | 230,000 | – | –

These numbers provide an indication of the massive deployment of computing power specific to generative AI. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘fail whale’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].

    There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks. 

    Agentic AI

Thinking on software agents went back to work being done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987, which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and was captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.

A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations due to it being an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile service subscribers using early digital cellular mobile networks.

Almost a quarter century later we’re now seeing devices that are looking to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv] and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end are Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany US combat aircraft into battle, once it’s been directed to follow a course of action by its human colleague in another plane.

The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed, the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size of computing resources and their power is a trade-off with the amount of fuel or payload it will carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.

    As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].  

This is still a long way from the vision of completely local execution of ‘agentic AI’ on a mobile device, because intelligence per watt hasn’t scaled down to a level that is useful given the vast range of possible uses that would be asked of the agentic AI model.

    Maximising intelligence per watt

    There are three broad approaches to maximise the intelligence per watt of an AI model. 

• Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). Their vendors have been building large data centres to run the models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
• Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic (a generic sketch of one such optimisation follows this list). The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim.
• Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples of this range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].
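One widely used optimisation of the second kind is quantisation: storing model weights at lower precision to cut memory, bandwidth and energy use. The sources above don’t say which techniques DeepSeek or Manus actually use, so treat the following purely as an illustrative sketch of the general idea.

```python
# Illustrative post-training quantisation: map float32 weights to int8.
# A generic technique sketch, not a description of any named model.
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric linear quantisation of a float32 tensor to int8 plus a scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.default_rng(1).normal(size=(1024, 1024)).astype(np.float32)
q, scale = quantise_int8(weights)

print(f"float32: {weights.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")
print(f"max absolute error: {np.abs(dequantise(q, scale) - weights).max():.4f}")
```

Quarter-size weights make it plausible to serve the same model from cheaper, lower-power hardware – a direct intelligence-per-watt gain, bought at the cost of a small amount of accuracy.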

    Even if an AI model can do something, should the model be asked to do so?

    AI use case appropriateness

We have a clear direction of travel over the decades to more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally.

Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This makes sense for general everyday tasks with no information risk, for instance helping you complete a newspaper crossword if there is an answer you are stuck on and the intellectual struggle has gone nowhere.

A private cloud AI service would make sense when working with, accessing or processing data held on the service. An example of this would be Google’s Vertex AI offering[lxxv].

On-device AI models make sense when working with one’s personal private details such as family photographs, health information, or accessing apps within your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. (I am using Strava as an example because it is popular and widely known, not because it is a bad app per se.)

While businesses have the capability and resources to have a multi-layered security infrastructure to protect their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same security. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo’s autonomous taxis. However, an individual’s mobile device is rarely out of physical reach and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze down their most complex models inside a laptop, tablet or smartphone?


    [i] Radiomuseum – Loewe (Opta), Germany. Multi-system internal coupling 3NF

    [ii] (1961) Solid Circuit(tm) Semiconductor Network Computer, 6.3 Cubic inches in Size, is Demonstrated in Operation by U.S. Air Force and Texas Instruments (United States) Texas Instruments news release

    [iii] (2000) The Chip that Jack Built Changed the World (United States) Texas Instruments website

    [iv] Moravec H (1988), Mind Children (United States) Harvard University Press

    [v] (2010) Makimoto’s Wave | EDN (United States) AspenCore Inc.

    [vi] Jobs, S. (1980) Presentation on Apple Computer history and vision (United States) Computer History Museum via Regis McKenna

    [vii] Sinofsky, S. (2019) ‘Bicycle for the Mind’ (United States) Learning By Shipping

    [viii] Hertzfeld, A. (1981) Bicycle (United States) Folklore.org

    [ix] Jones, D. (2016) Codemasters (United Kingdom) Retro Gamer – Future Publishing

[x] Engelbart, D. (1968) A Research Center For Augmenting Human Intellect (United States) Stanford Research Institute (SRI)

    [xi] Hormby, T. (2006) Apple’s Worst business Decisions (United States) OSnews

[xii] Honan, M. (2009) From Brick to Slick: A History of Mobile Phones (United States) Wired

    [xiii] Ericsson History: The Nordics take charge (Sweden) LM Ericsson.

    [xiv] Singh, H., Gupta, M.M., Meitzler, T., Hou, Z., Garg, K., Solo, A.M.G & Zadeh, L.A. (2013) Real-Life Applications of Fuzzy Logic – Advances in Fuzzy Systems (Egypt) Hindawi Publishing Corporation

    [xv] Reid, T.R. (1990) The Future of Electronics Looks ‘Fuzzy’. (United States) Washington Post

    [xvi] Kushairi, A. (1993). “Omron showcases latest in fuzzy logic”. (Malaysia) New Straits Times

    [xvii] Watson, A. (2021) The Antique Microwave Oven that’s Better than Yours (United States) Technology Connections

    [xviii] Durbhakula, S. (2022) IBM dumping Watson Health is an opportunity to reevaluate artificial intelligence (United States) MedCity News

    [xix] (1998) PapriCom Technologies Wins CommerceNet Award (Israel) Globes

    [xx] Von Ahn, L., Dabbish, L. (2004) Labeling Images with a Computer Game (United States) School of Computing, Carnegie-Mellon University

    [xxi] Butterfield, D., Fake, C., Henderson-Begg, C., Mourachov, S., (2006) Interestingness ranking of media objects (United States) US Patent Office

    [xxii] Delaney, K.J., (2005) Yahoo acquires Flickr creator (United States) Wall Street Journal

    [xxiii] Hood, S., (2008) Delicious is 5 (United States) Delicious blog

    [xxiv] (2017) 10 years of Carbon Neutrality (United States) Google

    [xxv] Bakshi, V. (2018) EUV Lithography (United States) SPIE Press

    [xxvi] Wade, W. (2000) ASML acquires SVG, becomes largest litho supplier (United States) EE Times

    [xxvii] Lammers, D. (1999) U.S. gives ok to ASML on EUV effort (United States) EE Times

[xxviii] Mead, C., Conway, L. (1979) Introduction to VLSI Systems (United States) Addison-Wesley

    [xxix] Lavagno, L., Martin, G., Scheffer, L., et al (2006) Electronic Design Automation for Integrated Circuits Handbook (United States) Taylor & Francis

    [xxx] (2010) Apple Launches iPad (United States) Apple Inc. website

    [xxxi] (1997) PalmPilot Professional (United Kingdom) Centre for Computing History

    [xxxii] Jobs, S. (2005) Apple WWDC 2005 keynote speech (United States) Apple Inc.

    [xxxiii] (2014) Makimoto’s Wave Revisited for Multicore SoC Design (United States) EE Times

    [xxxiv] Makimoto, T. (2014) Implications of Makimoto’s Wave (United States) IEEE Computer Society

    [xxxv] (2006) Nokia and Yahoo! add Flickr support in Nokia Nseries Multimedia Computers (Germany) Cision PR Newswire

    [xxxvi] Gibson, W. (2007) Spook Country (United States) Putnam Publishing Group

    [xxxvii] The O2O Business In China (China) GAB China

    [xxxviii] Carroll, G. (2008) Web Centric Business Model (United States) Waggener Edstrom Worldwide for LaSalle School of Business, Universitat Ramon Llull, Barcelona

    [xxxix] Carroll, G. (2008) Web of no web (United Kingdom) renaissance chambara

    [xl] Kelly, K. (2018) AR Will Spark the Next Big Tech Platform – Call It Mirrorworld (United States) Wired

    [xli] Heckerman, D. (1988) An Empirical Comparison of Three Inference Methods (United States) Microsoft Research

    [xlii] Sze, V., Chen, Y.H., Yang, T.J., Emer, J. (2017) Efficient Processing of Deep Neural Networks: A Tutorial and Survey (United States) Cornell University

    [xliii] Webber, M. E. (2024) Energy Blog: Is AI Too Power-Hungry for Our Own Good? (United States) American Society of Mechanical Engineers

    [xliv] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I. (2017) Attention Is All You Need (United States) 31st Conference on Neural Information Processing Systems (NIPS 2017)

    [xlv] Marche, S. (2024) Was Linguistic A.I. Created By Accident? (United States) The New Yorker.

    [xlvi] Radford, A. (2018) Improving language understanding with unsupervised learning (United States) OpenAI

    [xlvii] Heath, N. (2025) Authors outraged to discover Meta used their pirated work to train its AI systems (Australia) ABC (Australian Broadcast Corporation)

    [xlviii] Morey, M., O’Sullivan, J. (2024) In-brief analysis: Data center owners turn to nuclear as potential energy source (United States) Today in Energy published by U.S. Energy Information Administration

    [xlix] Bradshaw, T., Morris, S. (2024) Microsoft acquires twice as many Nvidia AI chips as tech rivals (United Kingdom) Financial Times

    [l] Smith, C. (2025) ChatGPT’s viral image-generation upgrade is ruining the chatbot for everyone (United States) BGR (Boy Genius Report)

    [li] Wayner, P. (1997) Human Error Cripples the Internet (United States) The New York Times

    [lii] Honan, M. (2013) Killing the Fail Whale with Twitter’s Christopher Fry (United States) Wired

    [liii] Mazarr, M. (2025) The Coming Strategic Revolution of Artificial Intelligence (United States) MIT (Massachusetts Institute of Technology)

    [liv] Knight, W. (2025) DeepSeek’s New AI Model Sparks Shock, Awe, and Questions from US Competitors (United States) Wired

    [lv] Sharwood, S. (2025) Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer (United Kingdom) The Register

    [lvi] Hewitt, C., Bishop, P., Steiger, R. (1973). A Universal Modular Actor Formalism for Artificial Intelligence. (United States) IJCAI (International Joint Conference on Artificial Intelligence).

    [lvii] Sculley, J. (1987) Keynote Address On The Knowledge Navigator at Educom (United States) Apple Computer Inc.

    [lviii] (1987) Apple’s Future Computer: The Knowledge Navigator (United States) Apple Computer Inc.

    [lix] Kelly, K. (1995) Out of Control: The New Biology of Machines (United States) Fourth Estate

    [lx] Nwana, H.S., Azarmi, N. (1997) Software Agents and Soft Computing: Towards Enhancing Machine Intelligence Concepts and Applications (Germany) Springer

    [lxi] Rifkin, G. (1994) Interface; A Phone That Plays Secretary for Travelers (United States) The New York Times

    [lxii] Richardson, T. (2005) Orange kills Wildfire – finally (United Kingdom) The Register

    [lxiii] Spoonauer, M. (2024) The Truth about the Rabbit R1 – your questions answered about the AI gadget (United States) Tom’s Guide

    [lxiv] Garun, N. (2019) One year later, restaurants are still confused by Google Duplex (United States) The Verge

    [lxv] Roth, E. (2025) Amazon can now buy products from other websites for you (United States) The Verge

    [lxvi] MQ-28 microsite (United States) Boeing Inc.

    [lxvii] Warwick, G. (2019) Boeing Unveils ‘Loyal Wingman’ UAV Developed In Australia (United Kingdom) Aviation Week Network – part of Informa Markets

    [lxviii] Udinmwen, E. (2025) Apple Mac Studio M3 Ultra workstation can run Deepseek R1 671B AI model entirely in memory using less than 200W, reviewer finds (United Kingdom) TechRadar

    [lxix] Kelly, K. (2010) What Technology Wants (United States) Viking Books

    [lxx] Andrews, G.R. (2000) Foundations of Multithreaded, Parallel, and Distributed Programming (United States) Addison-Wesley

    [lxxi] Criddle, C., Olcott, E. (2025) OpenAI says it has evidence China’s DeepSeek used its model to train competitor (United Kingdom) Financial Times

    [lxxii] Russell, J. (2025) China Researchers Report Using Quantum Computer to Fine-Tune Billion Parameter AI Model (United States) HPC Wire

    [lxxiii] Mistral AI home page (France) Mistral AI

    [lxxiv] (2025) High-Speed Autonomous Underwater Effects. Copperhead (United States) Anduril Industries

    [lxxv] Vertex AI with Gemini 1.5 Pro and Gemini 1.5 Flash (United States) Google Cloud website

    [lxxvi] Untersinger, M. (2024) Strava, the exercise app filled with security holes (France) Le Monde

    [lxxvii] Nilsson-Julien, E. (2025) French submarine crew accidentally leak sensitive information through Strava app (France) Le Monde

    [lxxviii] Arsene, Liviu (2018) Hack of US Navy Contractor Nets China 614 Gigabytes of Classified Information (Romania) Bitdefender

    [lxxix] Wendling, M. (2024) What to know about string of US hacks blamed on China (United Kingdom) BBC News

    [lxxx] Kidwell, D. (2020) Cyber espionage for the Chinese government (United States) U.S. Air Force Office of Special Investigations

    [lxxxi] Gorman, S., Cole, A., Dreazen, Y. (2009) Computer Spies Breach Fighter-Jet Project (United States) The Wall Street Journal

    [lxxxii] Bellan, R. (2025) Waymo may use interior camera data to train generative AI models, but riders will be able to opt out (United States) TechCrunch

  • Layers of the future

This post about layers of the future was inspired by an article that I read in EE News. The article headline talked in absolutes: the external power adapter is dead. The reality is usually much more complex. The future doesn’t arrive complete; instead we have layers of the future.


    Science fiction as an indicator.

The 1936 adaptation produced by Alexander Korda of HG Wells’ The Shape of Things To Come shows a shiny, complete new utopia. It is a tour-de-force of art deco design, but loses somewhat in believability because of its complete vision.

    https://youtu.be/knOd-BhRuCE?si=HfIDYsaa7nUZKrYE

This is partly explained away by a devastating war, largely influenced by the Great War, which had demonstrated the horrendous power of artillery and machine guns. The implication being that the layers of architecture assembled over the years had been literally blown away. So architects and town planners would be working from a metaphorical clean sheet – if you ignore land ownership rights, extensive rubble, legacy building foundations and underground works like water pipes, sewers, storm drains and cable ducting.

In real life, things aren’t that simple. Britain’s major cities were extensively bombed during the war. The country underwent extensive rebuilding in the post-war era. Yet even in cities like Coventry that were extensively damaged, you still have a plurality of architecture from different ages.

In the City of London, partly thanks to planning permission, 17th-century architecture exists alongside modern tower blocks.

Contrast the blank-sheet approach of Things To Come with the immersive story nature of the anime Ghost In The Shell, which based its architecture on Hong Kong.

A still from Ghost In The Shell, with its Hong Kong-inspired cityscape.

    You can see a mix of modern skyscrapers, tong tau-style tenements and post-war composite buildings that make the most of Hong Kong’s space. Given Hong Kong’s historically strong real estate marketplace, there are very strong incentives to build up new denser land uses, yet layers of architecture from different ages still exist.

    COBOL and other ‘dead’ languages.

If you look at computer history, you realise that it is built on layers. Back in the 1960s computing was a large-organisation endeavour. A good deal of these systems ran on COBOL, a computer language created in 1959. New systems were being written in COBOL through the mid-2000s for banks and stock brokerages. These programmes are still maintained, many of them still going long after the people who wrote them retired from the workforce.


These systems were run on mainframe computers, though some of these have been replaced by clusters of servers. IBM still sells its Z-series of mainframe computers. Mainframe computing has even been moved to cloud computing services.

In 1966, MUMPS was created out of a National Institutes of Health project at Massachusetts General Hospital. The programming language was built out of frustration, to support high-performing databases. MUMPS has gone on to support health systems around the world and projects within the European Space Agency.

If you believed the technology industry, all of these systems would have been dead and buried, killed off by:

    • Various computer languages
    • Operating systems like UNIX, Linux and Windows
    • Minicomputers
    • Workstations
    • PCs and Macs
    • Smartphones and tablets
    • The web

At a more prosaic level, organisations like UK railway companies, German businesses and Japanese government departments have kept using fax machines for over two decades after email became ubiquitous in businesses and most households in the developed world.

    The adoption curve.

The adoption curve is a model that shows how products are adopted. The model was originally proposed by academic Everett Rogers in his book Diffusion of Innovations, published in 1962. In the chart, the blue line is the percentage of new users over time and the yellow line is an idealised market penetration. However, virtually no innovations get total adoption. My parents don’t have smartphones, friends don’t have televisions. There are some people that still live off the grid in developed countries without electricity or indoor plumbing.

Diffusion of innovations adoption curve.
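For the mathematically minded, the idealised S-curve can be sketched with a logistic function; the parameter values below are arbitrary, and the ceiling can be set below 100% to reflect the point that virtually no innovation reaches total adoption.

```python
# Idealised diffusion-of-innovations curve via a logistic function.
# Parameter values are arbitrary illustrations.
import math

def cumulative_adoption(t, ceiling=0.9, midpoint=10.0, steepness=0.6):
    """S-shaped market penetration at time t, topping out at `ceiling`."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 21, 5):
    now = cumulative_adoption(t)
    new = now - cumulative_adoption(t - 1)
    print(f"t={t:2d}  penetration={now:5.1%}  new adopters this period={new:5.1%}")
```

The per-period new adopters trace out the familiar bell shape, peaking around the midpoint and tailing off among the laggards.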

    When you look at businesses and homes, different technologies often exist side-by-side. In UK households turntables for vinyl records exist alongside streaming systems. Stuffed bookshelves exist alongside laptops, tablets and e-readers.

    Yahoo! Internet Life magazine


Yahoo! Internet Life magazine is a microcosm of this layers-of-the-future co-existence. Yahoo! is now a shadow of its former self, but it’s still valued for its financial news and email. The company was founded in 1994, just over 30 years ago. It was in the vanguard of consumer Internet services alongside the likes of Wired, Excite, Go, MSN, Lycos and Netscape’s Net Center.

    Yahoo! Internet Life magazine was published in conjunction with Ziff Davis from 1996 to 2002. At the time when it was being published the web was as much a cultural force as it was a technology that people adopted. It was bigger than gaming or generative AI are now in terms of cultural impact. Yet there was no incongruity in being a print magazine about online media. Both existed side-by-side.

    Post-print, Yahoo! Life is now an online magazine that is part of the Yahoo! web portal.

    Technology is the journey, not the destination.

Technology and innovation often don’t meet the ideals set for them; for instance, USB-C isn’t quite the universal data and power transfer panacea that consumers are led to believe. Cables and connectors that look the same have different capabilities. There is no peak reached, but layers of the future laid on each other and often operating in parallel. It’s a similar situation in home cinema systems using HDMI cables or different versions of Bluetooth connected devices.

  • Pagers + more things

    Pagers

Pagers went back into the news recently with Hezbollah’s exploding pagers. YouTuber Perun has done a really good rundown of what happened.

Based on Google Analytics information about my readership, the idea of pagers might need an explanation. You’ve probably used a pager already, but not realised it yet.

A restaurant pager from Korean coffee shop / dessert café A Twosome Place.

For instance, if you’ve been at a restaurant and given a puck that rings when your table is ready, that’s a pager. The reason why it’s big is to prevent customers stealing them, rather than the technology being bulky.

On a telecoms level, it’s a similar principle but on a bigger scale. A transmitter sends out a signal to a particular device. With early commercial pagers launched in the 1960s, such as the ‘Bellboy’ service, the device made a noise and you then went to a telephone and phoned up a service centre to receive a message left for you. Over time, the devices shrank from something the size of a television remote control to even smaller than a box of matches. The limit to how small the devices got depended on display size and battery size. You also got displays that showed a phone number to call back.

By the time I had a pager, they had started to get a little bigger again because they had displays that could show both words and numbers. These messages tended to be shorter than an SMS message and operators used shortcuts for many words, in a similar way to instant messaging and text messaging. The key difference was that most messages weren’t frivolous emotional ‘touchbases’ and didn’t use emojis.

A Motorola pager of a similar vintage to the one I owned.

    When I was in college, cellphones were expensive, but just starting to get cheaper. The electronic pager was a good half-way house. When I was doing course work, I could be reached via my pager number. Recruiters found it easier to get hold of me, which meant I got better jobs during holiday time as a student.

I moved to a cellphone after college when I got a deal at Carphone Warehouse: a Motorola Graphite GSM phone which could display two lines of SMS text. I had a plan that included the handset, cost £130 and covered 12 months of usage, with an allowance of 15 minutes of local talk time a month.

    I remember getting a call about winning my first agency job, driving down a country road with the phone tucked under my chin as I pulled over to take the call. By this time mobile phones were revolutionising small businesses with tradesmen being able to take their office with them.

    The internet and greater data speeds further enhanced that effect.

Pagers still found their place as a communications back-up channel in hospitals and some industrial sites. Satellite communications allowed pagers to be reached in places mobile networks don’t reach, without the high cost of satellite phones.

That being said, the NHS are in the process of getting rid of their pagers after COVID, and prior to COVID many treatment teams had already moved to WhatsApp groups on smartphones. Japan closed down its last telecoms pager network in 2019. Satellite two-way pagers are still a niche application for hikers and other outward bound activities.

Perun goes into the reasons why pagers were attractive to Hezbollah:

• They receive and don’t transmit back. (Although there were 2-way pager networks that begat the likes of the BlackBerry device, based on services like Ericsson’s Mobitex.)
• The pager doesn’t know your location. It doesn’t have access to GNSS systems like GPS, Beidou, Galileo or GLONASS. It doesn’t have access to cellular network triangulation. Pagers can’t transmit long messages, and you have to assume that messages are sent ‘in the clear’, that is, they can be read widely.

    Consumer behaviour

    Yes, CEOs are moving left, but ‘woke capitalism’ is not the whole story | FT

    Culture

    ‘We were cheeky outlaws getting away with it’: the total euphoria of Liverpool’s 90s club scene | The Guardian – maybe one day I will tell my side of this tale. It’s all a bit more nuanced and I was stone cold sober throughout it all, which is a rare perspective.

    Economics

    Invest 2035: the UK’s modern industrial strategy – GOV.UK

    Corporate Germany is on sale | FT

    Health

    Ukraine’s pioneering virtual reality PTSD therapy | The Counteroffensive

    Korea

    The sabukaru Guide to Seoul’s PC Room Culture | Sabukaru

    Luxury

    How luxury priced itself out of the market | FT – brands have tested the elasticity of pricing and pushed beyond the limits for their middle class customer base

    Watch-maker Jaeger-LeCoultre expands into fragrances inspired by its Reverso dress watches: Jaeger-LeCoultre fragrances take form | Luxury Daily

    Ozempic is transforming your gym | FT, The Vogue Business Spring/Summer 2025 size inclusivity report | Vogue Business – GLP-1s blamed for stalled progress.

    Ferrari, Hermès lead global luxury brand growth in 2024: Interbrand | Luxury Daily

What is Chinese style today? | Vogue Business – street style at Shanghai Fashion Week has been low-key. The bold looks of the past have given way to a softer aesthetic that’s more layered and feminine, with nods to Chinese culture and history. This pared-back vibe was also found on the runways. Part of this might be down to a policy-led movement against conspicuous consumption typified by Xi Jinping’s ‘common prosperity‘.

    Marketing

    Adland’s talent spill: Seniors double blocked as ageism, cost-cutting compounded by ‘threatened’ younger managers | Mi3

    Where to start with multisensory marketing | WARC – 61% of consumers looking for brands that can “ignite intense emotions”. Immersive experiences that are holistic tap into people’s emotions and linger in the memory. It’s also an opportunity for using powerful storytelling to communicate a brand story.

    Media

    Tesco to launch location-based self-scanner adverts in stores – Retail Gazette

    Everyone is burning out on the news, including journalists – Baekdal

    Online

    Roblox: Inflated Key Metrics For Wall Street And A Pedophile Hellscape For Kids – Hindenburg Research

    How Google Influences Public Opinion | HackerNoon

    Software

    Apple macOS 15 Sequoia is officially UNIX • The Register

    Web-of-no-web

Airbus to cut 2500 staff in Space Systems | EE News Europe – “In recent years, the defence and space sector and, thus, our Division have been impacted by a fast changing and very challenging business context with disrupted supply chains, rapid changes in warfare and increasing cost pressure due to budgetary constraints,” said Mike Schoellhorn, CEO of the Airbus Defence and Space business.

    Wireless

    Boeing plans quantum satellite | EE News Europe

    Elon Musk battles Indian billionaires over satellite internet spectrum | FT

  • Y2K

Early last year, fashion started to pillage the late 1990s and early 2000s for inspiration, which became a Y2K trend on social platforms and in the fashion media. But this divorced Y2K from its original meaning. Y2K was technologist shorthand for a calendar problem in a lot of legacy systems that had been designed around a two-digit representation of the year.
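To make that concrete, here is a minimal sketch (not taken from any real Y2K-era system) of how arithmetic on two-digit years breaks at the century rollover:

```python
# Hypothetical illustration of the Y2K bug: years stored as two digits ('72' for 1972).

def years_between(start_yy: int, end_yy: int) -> int:
    """Legacy-style date arithmetic on two-digit years."""
    return end_yy - start_yy

# Interest on an account opened in 1972, calculated in 1999 and then in 2000:
print(years_between(72, 99))  # 27  -- correct while both dates fall in the 1900s
print(years_between(72, 0))   # -72 -- 2000 stored as '00' makes the account look unopened
```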

    The rise of micro-processors had meant that the world had more computers, but also more computer control of processes from manufacturing to building air conditioning systems.

    The HBO documentary Time Bomb Y2K leaned into the American experience of Y2K in an Adam Curtis type archival view, but without his narrative.

    Millennium layers

    There was so much to unspin from the documentary, beyond the Y2K bug, including the largely alarmist commentary. The run-up to the millennium had so many layers that had nothing to do with Y2K, but were still deeply entwined with anxiety around what might happen with Y2K.

    This included:

    • Internet adoption and more importantly the idea of internet connectedness on culture through the lens of cyberpunk – which in turn influenced the spangliness of fashion around this time and the preference for Oakley mirror shades that looked as if they were part of the wearer. The internet was as much a cultural construct and social object as it was a communications technology. It memed AND then got people online.
    • Telecommunications deregulation. In the United States the Telecommunications Act of 1996 set out a more level playing field and allowed new entrants across everything from telecoms networks to television. It also defined ‘information services’, which internet platforms and apps fitted into, giving them many freedoms and relatively few responsibilities. You had similar efforts at telecoms deregulation across what was then the EEC. This saw a rise in alternative carriers who then drove telecoms and data communications equipment sales, together with a flurry of fibre-optic cables being laid. There was a corresponding construction of data centres and ‘internet hotels‘ to provide data services. With these services came an expectation that the future was being made ‘real’, which in turn fed into the internet itself as a cultural phenomenon. The provision of new data centres, opportunities for computer-to-computer electronic data interchange (EDI) and services that could be delivered using a browser as the interface also drove a massive change in business computing.
    • An echo boom of the hippy back-to-the-land movement: many of the people involved in that movement were early netizens. Hippy favourites The Grateful Dead had been online since at least 1996 and were pioneers in the field of e-commerce. The Whole Earth ‘Lectronic Link (or The WELL) had founders from hippy bible The Whole Earth Catalog. There was also a strong connection through Stewart Brand to Wired magazine. Long-time ‘Dead lyricist John Perry Barlow created A Declaration of the Independence of Cyberspace – a libertarian totem for netizens up to the rise of social media platforms like Facebook.
    Dead.net circa 1996
    Heaven's Gate's final home page update

    The confluence of noise around Y2K drove some anxiety and a lot of media chatter.

    Advertisers did their bit to fuel insecurities as well.

However, by October 1999, American consumers who responded to a poll by the Gallup Organisation were pretty confident that glitches would be unlikely:

    • 55% considered it unlikely ATMs would fail.
    • 59% believed direct deposit processing wouldn’t be a problem.
    • 60% said they felt that temporary loss of access to cash was unlikely.
    • 60% believed credit-card systems were unlikely to fail.
    • 66% felt that problems with check processing were unlikely.
    • 70% had received Y2K-readiness information from their banks.
    • 90% were confident their bank was ready for Y2K.
    • 39% said they would definitely or probably keep extra cash on hand.
    Y2K: More Signs of the Time | Computerworld (January 10, 2000)

Experts had felt that the Y2K challenge had largely been beaten, but some prudent advice was given. I worked for a number of technology clients at the time including telecoms provider Ericsson and enterprise software company SSA Global Technologies. I had to keep my cellphone with me in case anything went wrong and we had to go into crisis mode for our clients. Needless to say, I wasn’t disturbed during my night out at Cream by THAT call.

Technology experts like Robert X. Cringely were rolled out to advise consumers on prudent precautions. Have a bit of cash in your wallet in the unlikely event that card merchant services don’t work at your local shop. Have some provisions in that don’t need refrigeration in case there is a power cut. And a battery or solar powered radio, just in case.

    All of these are still eminently sensible precautions for modern-day living.

    y2k Cringely

    Why were we ok?

    The warning

    There were several people who voiced warnings during the 1990s. Some of the most prominent were Ed Yourdon and Peter de Jager.

    Risk management

During the 1990s company auditors were informing boards that they had to address Y2K. Failure to do so would affect their ability to trade: their public accounts wouldn’t be signed off and there would be implications for the validity of the insurance policies needed to run a business.

    Approaches

    IT professionals took Y2K very seriously, which meant that there was little to no impact. Some academics such as UCL’s Anthony Finkelstein posited that the problem was taken too seriously, though it is easier to say that in retrospect. There were a number of approaches taken to combat the risk of failure due to Y2K. In order of least to most ambitious they were:

    • Systems testing
    • Rip and replace
    • Recode

    Systems testing

The Russian military had tested their systems for vulnerability to the millennium bug and announced this in the last quarter of 1999. Meanwhile, businesses often outsourced the testing to contractors like Accenture with teams based in India, the former Soviet Union or the Philippines. There was a thriving market for auditing software to check whether applications used two-digit dates or not. One of these was Peregrine Systems’ ServiceCenter 2000 Y2K Crisis Management software.
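The internals of these commercial tools weren’t public, but at heart a Y2K code audit was a pattern scan over source files. A toy sketch of that idea in Python (the directory path and the patterns are purely illustrative, not how any vendor’s product actually worked):

```python
import re
from pathlib import Path

# Illustrative patterns a reviewer might want flagged for manual inspection.
SUSPECT_PATTERNS = [
    re.compile(r"PIC\s+9\(2\)", re.IGNORECASE),  # COBOL two-digit numeric year fields
    re.compile(r"\bYY\b"),                       # 'YY' date masks (whole token, so 'YYYY' is ignored)
    re.compile(r"\b\d{2}/\d{2}/\d{2}\b"),        # hard-coded DD/MM/YY literals
]

def audit(source_dir: str) -> None:
    """Print the file, line number and text of every line matching a suspect pattern."""
    for path in Path(source_dir).rglob("*.cbl"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(pattern.search(line) for pattern in SUSPECT_PATTERNS):
                print(f"{path}:{lineno}: {line.strip()}")

audit("./legacy_src")  # hypothetical directory of COBOL source files
```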

Testing highlighted problems at the Oak Ridge laboratories, which process American nuclear weapons, at the alarm systems of Japanese nuclear power stations, and in some kidney dialysis machines.

    Problems would then be addressed by ripping and replacing the systems or recoding the software.

    Rip and replace

    Apple used Y2K as a sales tool to get Macs into businesses, including this campaign from early 1999 where the HAL computer from 2001: A Space Odyssey featured in Apple’s Super Bowl advert.

Two years earlier, IBM CEO Lou Gerstner had the company re-orient around an offering he called e-business. Snazzy advertising campaigns ran over an eight-year period.

    Mainframes and high powered UNIX workstations became internet servers running multiple instances of Linux. IBM Consulting learned as they went building the likes of internet retailer Boxman (which would go bust due to IBM’s cack-handed software and the rise of Amazon).

Timely replacement of business systems with e-business systems, paired with new personal computers like the latest Apple Macs, allowed firms to avoid Y2K problems and move faster in digitising their businesses.

German enterprise software company SAP launched SAP Business Connector in association with webMethods in 1999. This provided an integration and migration layer for SAP and other business software applications. It also allowed the business software to be accessed using a web browser and to trigger business processes like email updates.

Articles (like Robertson & Powell) highlighted the wider business process benefits that could be generated as part of a move to rip and replace existing systems with ones that were Y2K compliant. Reducing the number of systems in place through rationalisation as part of Y2K preparation would then provide benefits in terms of the training and expertise required.

    Recode

Where rip and replace wasn’t an option due to cost, complexity or mission criticality, recoding was looked at as an approach. For PC networks there were a few off-the-shelf packages to deal with low-level BIOS issues.

One such package was IntelliFIX 2000 by Intelliquis International, Inc., which would check hardware, the DOS operating system and software. A free version ran a pass/fail test; the full version, which could be purchased for $79, would report the issues and permanently correct date problems with the BIOS and the CMOS real-time clock. In 1999, Stewart Cheifet of the Computer Chronicles rated the product as a very good all-in-one solution for hardware and software.

    National Museum of American History: Y2K collection

    Products similar to IntelliFIX included Catch/21 by TSR Inc.

Longtime software makers like Computer Associates and IBM provided large companies with tools to audit their existing code bases and repair them. IBM’s software charged $1.25 per line inspected. OpenText estimates that there are 800 billion lines of COBOL code out there, so having one of these tools could be very lucrative at the time.

You might have mainframe code on a system that hadn’t been altered since the 1970s or earlier. Programmers in the developed world with skills in legacy languages were looking at the end of their careers as more of this work was outsourced to Indian software factories, and saw Y2K as a last hurrah.

    COBOL is still very robust and runs business processes very fast, so is maintained around the world today.

    Y2K impact

Professor Martyn Thomas, in a keynote speech given in 2017, documented a number of errors that did occur, from credit card reading failures and process shutdowns to false positive medical test results across the world. But by and large the world carried on as normal.

Academic research (Anderson, Banker et al) suggests that the most entrepreneurially competitive companies leaned hard into the Y2K focus on IT and used the resources spent to transform their IT infrastructure and software. Garcia-Feijóo and Wingender showed that these investments provided a benefit to publicly listed companies’ stock prices at the time.

    There were also some allegations that software companies and consultants over-egged the risks. Hindsight provides 20:20 vision.

IT spending dropped dramatically during 2001 and 2002, and by the middle of 2003 the technology sector started to see replacement of the software and equipment bought to address Y2K. But the US Department of Commerce claimed this was no more than a transient effect on economic growth. This was supported by the Kliesen paper in 2003, which posited that the boom and subsequent economic bust were not a result of Y2K preparation.

    More information

    Like It or Not, Gaudy Y2K Style Is Roaring Back | Vogue

    These Celebrity Y2K Outfits Weirdly Look Like They’re From 2023 | InStyle magazine

    20 Years Later, the Y2K Bug Seems Like a Joke—Because Those Behind the Scenes Took It Seriously | Time magazine (December 30, 2019)

    National Museum of American History – Y2K collection

    Y2K: a retrospective view by Anthony Finkelstein (PDF)

    Y2K: Myth or Reality? Luis Garcia-Feijóo and John R. Wingender, Jr.
    Quarterly Journal of Business and Economics (Summer 2007)

    Replacing Y2K technology boosts spending | The Record (July 28, 2003)

    Y2K spending by entrepreneurial firms by Mark C. Anderson, Rajiv D. Banker, Ram Natarajan, Sury Ravindran US: Journal of Accounting and Public Policy (December 2001)

    Exploiting the benefits of Y2K preparation by Stewart Robertson and Philip Powell (September 1999) Communications of ACM

    Was Y2K Behind the Business Investment Boom and Bust? Kevin L. Kliesen

    What Really Happened in Y2K? Professor Martyn Thomas (April 4, 2017) (PDF)

  • Pipes by Yahoo

    I discovered something at the end of last year. The belatedly missed Yahoo Pipes was, in fact, officially called “Pipes by Yahoo.” I made that mistake, despite being well-versed in the brand guidelines, having spent a year working there with a copy consistently at my side.

    Now, why this journey down the memory superhighway? That’s a valid question. The inspiration for this post came from Bradley Horowitz’s initial post on Threads. (I had to go back and re-edit the reference to post from tweet to include it in the previous sentence, force of habit). In his post, Bradley shared the history of Pipes by Yahoo. I’m acquainted with Bradley from my time at Yahoo!. During that period, he was one of the senior executives in Jeff Weiner’s Yahoo! Search and Marketplace team.

Consider this article as complementary to the Pipes by Yahoo history that Bradley pointed out. I will share the link where it makes sense to go over and read it in depth. My commentary provides context prior to Pipes by Yahoo launching, the impact it had and why it’s pertinent now.

    Origins

    To comprehend Pipes by Yahoo, a fair amount of scene-setting is necessary. The contemporary web experience is now a world apart from the open web of Pipes, just as Pipes was distant from the pre-web days of the early 1990s.

    Boom to bust

    During the mid-1990s through the dot-com bust, Yahoo! generated substantial revenue from various sources, with online display advertising being the most pivotal. Launching a blockbuster film from the late 1990s to the early 2010s often involved a page takeover on Yahoo! and featuring the trailer on the Yahoo! Movies channel and Apple’s QuickTime.com. A similar approach applied to major FMCG marketing campaigns, with large display advertising initiatives.

    San Francisco billboard drive-by

    Yahoo! profited significantly during this period, as the internet was the new trend, and display advertising was a cornerstone for brand building. Money was spent generously, akin to contemporary budgets for influencer marketing programmes.

    Yahoo! occupied a space between TV, magazine advertising, and newspaper advertising. The design of the My Yahoo! page mirrored the multi-column layout of a traditional newspaper.

    Similar to a newspaper, Yahoo! developed various departments and services:

    • Search
    • News (including finance)
    • Music services
    • Shopping, featuring a store for small businesses, auctions, and a shopping mall-type offering
    • Sports
    • Communications (email, instant messaging, voice calls, early video calling)
    • Web hosting

    Then came the dot-com crash. Advertising revenue plummeted by around a third to 40 percent, depending on who you ask. Deals like the acquisition of Broadcast.com shifted from appearing speculative and experimental to extravagant wastes of money as the bust unfolded. This experience left scars on the organization, restraining the size of deals and the scope of ambition. Opportunities were second and third-guessed.

    Yahoo! Europe narrowly survived, thanks to a white-label dating product. Love proved to be a more dependable revenue source than display advertising. A new CEO from the media industry was appointed to address shareholder and advertiser concerns.

    The advertising industry was in a constant state of learning. Performance marketing emerged as a significant trend, and search advertising gained prominence.

    The initial cast in this story

    Jeff explains something to the phone

    Weiner was hired into Yahoo! by then CEO Terry Semel. Semel knew Weiner from his work getting Warner Brothers into the online space.

    Bradley

Yahoo! had started getting serious about search by acquiring a number of search technology companies and hiring talented people in the field. Bradley Horowitz had founded an image and video search startup called Virage and joined Yahoo! (a year before I got there) as director of media search.

    Tim Mayer Yahoo

There was former Overture executive Tim Mayer, who was VP of search products and drove an initiative to blow out Yahoo!’s search index as part of a feature and quality battle with Google, MSN Search and Ask Jeeves. It was a great product, but with the best will in the world we didn’t have the heat. The majority of Yahoos internally used Google because of muscle memory.

    how many points for visiting the metro?

Vish Makhijani was ex-Inktomi and was VP of international search, with more of a focus on operations. He worked on getting non-US Yahoo! users feature parity – at least in search products.

    Former Netscaper, Eckhart Walther was the VP in charge of product management.

    Aside: where did Ged sit?

    Where did I sit? Low on the totem pole. To understand my position in the organisation, imagine a Venn diagram with two interlocking circles: the European central marketing team and Vish’s team. I would have sat in the interlocking bit. If that all sounds confusing, yes it was.

    Downtown San Jose

    Search wars and web 2.0

    Pipes by Yahoo emerged from the confluence of two technological trends that developed in parallel, extending all the way to early social media platforms.

    Search wars

    I had been discussing the prospect of working at Yahoo! with a couple of people since around 2003. I had an online and technology brand and product marketing background. I had been blogging regularly since late 2002 / early 2003 and managed to incorporate online reviews and forum seeding into campaigns for the likes of Aljazeera and BT. The business was emerging from survival mode. As an outsider, it wasn’t immediately apparent how precarious Yahoo!’s situation had been. However, the threat posed by Google was undeniable.

    At that time, Google didn’t have the extensive workforce it boasts today. One of my friends served as their PR person for Europe. Nevertheless, Google had embedded itself into the zeitgeist, seemingly launching a new product or feature every week. If there wasn’t a new product, stories would sometimes ‘write themselves,’ such as the time the face of Jesus was supposedly found on Google Maps photography of Peruvian sand dunes. The closest contemporary comparison might be the cultural impact of TikTok.

The geographical impact of Google’s cultural dominance was uneven. In the US, Yahoo! was a beloved brand that many netizens were accustomed to using. Yahoo! held double the market share in search there compared to Europe. Part of this discrepancy was due to Europeans coming online a bit later and immediately discovering Google. But Google didn’t do that well with European languages further removed from English, like Czech. It had similar problems with East Asian languages like Korean, Chinese and Japanese.

    Google explosion

    I can vividly remember the first time I used Google. At that time I was using a hodge podge of search engines, usually starting with AltaVista and then trying others if I didn’t get what I wanted. This was before tabbed browsers were a thing, so you can imagine how involved the process became.

Google appeared in an online article, which I think was on Hotwired some time during late 1998, less than a year after it had been founded. I clicked on a link to use the search engine. Google looked very different to now. It had a clean page with three boxes beneath the search box. The first held a few special searches; I think one of them was Linux-related, which tells you a lot about the audience at the time. The second was a set of corporate links, including a link explaining why you would want to use Google – although experiencing one search was enough for most people that I knew. The final box was to sign up to a monthly newsletter that would give updates on what developments Google was up to.

From then on, I very rarely searched on AltaVista, though my home page was still My Excite for a long time. This was more because I had my clients’ news set up on the page already and it had decent finance coverage at the time.

The difference in searches was really profound; there were a number of factors at work:

    • Google’s approach seemed to give consistently better results than the vectored approach taken by Excite or AltaVista.
    • There was no advertising on the SERP (search engine results page), but that was to soon change.
    • You could use very directed Boolean search strings, which isn’t possible any more since Google optimised for mobile.
    • Search engine optimisation wasn’t a thing yet.
    • The web, while seeming vast at the time, was actually small compared to its size now. Web culture at the time was quirky and, in aggregate, nicer and more useful than it is now. Part of this was down to the fact that the early web had a good deal of 1960s counterculture about it. Wired magazine would write about the latest tech thing and also profile psychedelic experimenters like Alexander Shulgin. Cyberpunk, rave and psychedelic tribes blended and found a place online. You can see the carcass of this today with Silicon Valley’s continued love of Burning Man. (Note: there were rich dark seams if that was the kind of thing you were into. There wasn’t the same degree of social agglomeration that we now have, nor were there algorithms that needed constant new content to feed diverse realities.)
    • Content creation on the web was harder than it is now. Blogging was at best a marginal interest, the likes of Angelfire, AOL Hometown, Geocities and Tripod provided free hosting, but you couldn’t put up that much content to pollute the search index even if you wanted to.

The impact was instantaneous, and by early 1999 Google was as much a part of the nascent netizen culture as Terence McKenna.

    Homage to Terence McKenna

    McKenna spent the last bit of his life interrogating the search engine for four to five hours a day. He was convinced that the online world it provided access to represented some sort of global mind.

    Sometimes he treats the Net like a crystal ball, entering strange phrases into Google’s search field just to see what comes up. “Without sounding too cliché, the Internet really is the birth of some kind of global mind,” says McKenna. “That’s what a god is. Somebody who knows more than you do about whatever you’re dealing with.”

    As our society weaves itself ever more deeply into this colossal thinking machine, McKenna worries that we’ll lose our grasp on the tiller. That’s where psychedelics come in. “I don’t think human beings can keep up with what they’ve set loose unless they augment themselves, chemically, mechanically, or otherwise,” he says. “You can think of psychedelics as enzymes or catalysts for the production of mental structure – without them you can’t understand what you are putting in place. Who would want to do machine architecture or write software without taking psychedelics at some point in the design process?”

    Terence McKenna’s Last Trip – Wired.com (May 1, 1999)

    A year after that McKenna interview, Google was running over 5,000 Linux servers to power the search engine.

At first, Google also powered search on some of the web portals and saw itself as a competitor to search appliance businesses like Inktomi and Autonomy. The advertising kaiju started operation in 2000 and was tiny at first. Its pay-per-click advertising violated patents held by GoTo.com – a business subsequently acquired by Yahoo!.

    Post-bust

    Once Yahoo! had disentangled itself from the carnage of the dot com bust, search was a much bigger deal. And Google had become a behemoth in the space of a few years. In 2002, Google launched Google News – a direct challenge to web portals like Yahoo!, MSN and Excite. Around about this time Google started to be used as a verb for using a web search engine.

While display advertising had taken a dive, search advertising had taken off for several reasons:

    • It was performance marketing, even when a business is just surviving sales are important
    • Behavioural intent – if you were searching for something you were likely interested in it and may even purchase it
    • So easy to do at a basic level, even small and medium sized businesses could do it
    • Advertising dashboard – Google did a good job at helping marketers show where the advertising spend had gone.

We’ll ignore the difficult facts for the time being, for instance:

    • The role of brand building versus brand activating media
    • What attribution might actually look like
    • That Google advertising is a rentier tax, rather than a business generator

    Google listed on the stock market in August 2004. Investors ignored governance red flags like the dual share structure so the founders could retain voting rights.

    Yahoo! in the search wars

    Yahoo! had come out of the dot com bust battered but largely intact. Yahoo! was scarred in a few important ways.

    Identity crisis

    Yahoo! came about pre-Judge Jackson trial when Microsoft spread terror and fear into the boardroom of most sensible technology companies. I know that sounds weird in our iPhone and Android world. Rather than the bright cuddly people who give us Xbox, it was a rabid rentier with a penchant for tactics that organised crime bosses would have approved of. It took a long time to work that out of their system.

    Another big factor was the fear of Microsoft. If anyone at Yahoo considered the idea that they should be a technology company, the next thought would have been that Microsoft would crush them.

    It’s hard for anyone much younger than me to understand the fear Microsoft still inspired in 1995. Imagine a company with several times the power Google has now, but way meaner. It was perfectly reasonable to be afraid of them. Yahoo watched them crush the first hot Internet company, Netscape. It was reasonable to worry that if they tried to be the next Netscape, they’d suffer the same fate. How were they to know that Netscape would turn out to be Microsoft’s last victim?

Paul Graham – ex-Yahoo and founder of Y Combinator

That Yahoo! went on to hire media mogul Terry Semel as it went through the dot com bust shows that this thinking must have coloured views somewhat.

    Cheque book shy

Even Mark Cuban would admit that Broadcast.com was not worth the multi-billion-dollar price tag that Yahoo! paid for it. It was a high-profile mistake at the wrong point in the economic cycle which haunted Yahoo!’s acquisition plans for years. Which is one of the reasons why Yahoo! may have dropped the ball when it had the chance to buy Google and Facebook.

    The game has changed

But the game had changed. Display advertising was no longer as profitable as it had been. Search advertising was the new hotness, fuelled by online commerce. By early 2004, Yahoo! was confident enough in its own search offering to drop Google, which had been providing its search function.

Yahoo! acquired search appliance business Inktomi in 2002 and then Overture Services in 2003. Overture provided the basic ad-buying experience for Yahoo! search advertising.

In 2004, Yahoo! realised that having search wasn’t enough; you had to offer a product at least as good as Google’s, if not better. This is where Tim Mayer came in, and for the next couple of years he led a project to build and maintain search parity with Google.

There was a corresponding project on the search advertising side, with a large team of engineers, to bring the Overture buying experience up to par with Google’s. That became a veritable saga in its own right, and the project name ‘Panama‘ became widely known in the online advertising industry before the service launched.

    Search differently

Googling is a habit. In order to elicit behavioural change you would have to:

    • Have an alternative
    • Change what it means to search in a positive way

    Yahoo! approached this from two directions:

    • Allowing different kinds of information to be searched, notably tacit knowledge. I worked on the global launch of what was to become Yahoo! Answers, which was in turn influenced by Asian services, notably Naver Knowledge IN. This approach was championed internally by Jerry Yang.
    • Getting better contextual data to improve search quality providing a more semantic web. This would be done by labels or tags. In bookmarking services they allowed for a folksonomy to be created. In photographs it provided information about what the pictures or video content might be, style or genres, age, location or who might be in them.

    Web 2.0

    Alongside a search war there was a dramatic change happening in the underpinnings of the web and how it was created. While the dot com bust caused turmoil, it also let loose a stream of creativity:

    • Office space was reasonably priced in San Francisco, only a couple of years after startups and interactive agencies had refurbished former industrial buildings South of Market Street (SoMa).
    • Office furniture was cheap, there was a surplus of Herman Miller Aeron chairs and assorted desks floating around due to bankruptcies and lay-offs.
    • IT and networking equipment was available at very reasonable prices on the second-hand market for similar reasons. You could buy top-of-the-range Cisco Catalyst routers and Sun Microsystems servers for pennies on the dollar compared with what their former owners had paid for them less than one computing generation before. This surplus of supplies could be bought online from eBay or GoIndustry.com.
    • Just in time for the internet boom wi-fi had started to be adopted in computers. The first wi-fi enabled laptop was the Apple iBook. Soon it became ubiquitous. Co-working spaces and coffee shops started to provide wi-fi access connected to nascent mainstream broadband. Which meant that your neighbourhood coffee shop could be a workspace, a meeting space and a place to collaborate. We take this for granted now, but it was only really in the past 25 years that it became a thing. It also didn’t do Apple’s laptop sales any harm either.
    • Open source software and standards gave developers the building blocks to build something online at relatively little financial cost. Newspapers like the Financial Times would have spent 100,000s of pounds on software licences to launch the paper online. In 2003, WordPress was released as open source software.
    • Amazon launched its web services platform that allowed developers a more flexible way for putting a product online.
    • The corresponding telecoms bust provided access to cheaper bandwidth and data centre capacity.

All of these factors also changed the way people wrote services. They used web APIs to build new things, rather than digital versions of offline media. APIs were made increasingly accessible for a few reasons:

    • Adoption of services was increased if useful stuff was built on top of them. Flickr and Twitter were just two services that benefited from third party applications, integrations and mashups. Mashups were two or more services put together to make something larger than the ingredients. The integration process would be much faster than building something from scratch. It worked well when you wanted to visualise or aggregate inputs together.
    • Having a core API set allowed a service to quickly build out new things based on common plumbing. Flickr’s APIs were as much for internal development as external development. Another example was Yahoo! UK’s local search product combining business directory data, location data and mapping – a rough sketch of that kind of combination is below.
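As a rough illustration of the mashup pattern described above, here is a toy sketch in Python. The directory entries, the coordinates and the geocode() helper are stand-ins, not Yahoo!’s actual data or APIs:

```python
# Toy web 2.0-style mashup: join a small business directory with a geocoder
# and emit points that a mapping API could plot.

DIRECTORY = [
    {"name": "Example Cafe", "postcode": "EC1A 1AA"},
    {"name": "Example Books", "postcode": "M1 1AE"},
]

def geocode(postcode: str) -> tuple[float, float]:
    """Stand-in for a geocoding web service call (postcode -> latitude/longitude)."""
    lookup = {"EC1A 1AA": (51.52, -0.10), "M1 1AE": (53.48, -2.24)}  # illustrative values
    return lookup[postcode]

def to_map_points(directory: list[dict]) -> list[dict]:
    """Combine directory entries with coordinates, ready for a map overlay."""
    points = []
    for business in directory:
        lat, lon = geocode(business["postcode"])
        points.append({"title": business["name"], "lat": lat, "lon": lon})
    return points

print(to_map_points(DIRECTORY))
```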

    There was also a mindset shift, you had more real-world conferences facilitating the rapid exchange of ideas, alongside an explosion of technical book publishing. One of the most important nodes in this shift was Tim O’Reilly and business O’Reilly Publishing. Given O’Reilly’s ringside seat to what was happening, he got to name this all web 2.0.

Finally, a lot of the people driving web 2.0 from a technological point of view were seasoned netizens who had been exposed to early web values. The following cohort of founders like Mark Zuckerberg were more yuppie-like in their cultural outlook, as were many of the suits in the online business like Steve Case or Terry Semel. But the suits weren’t jacked into the innovation stream in the way that Zuckerberg and his peers were – though that would come later.

    This was the zeitgeist that begat Pipes by Yahoo.

The approach to a new type of search needed the foundational skills of web 2.0 and its ‘web of data’ approach. Yahoo! acquired a number of companies including Flickr, Upcoming.org and Delicious. At the time, developers and engineers were looking to join Yahoo! because they liked what they saw at Flickr, even though the photo service was only a small part of the roles at the business.

    Web 2.0 talent

    The kind of people who were building new services over APIs were usually more comfortable in a scrappy start-up than the large corporate enterprise that Yahoo! had become. Yet these were the same people that Yahoo! needed to hire to develop new products across knowledge search, social and new services.

    There were some exceptions to this, for instance the 26-person team at Whereonearth who operated a global geocoded database and related technology had a number of clients in the insurance sector and Hutchison Telecom prior to being acquired by Yahoo!. The reason why Yahoo! became so interested was a specific Whereonearth product called Location Probability Query Analyser. The technology went on to help both the Panama advertising project and Yahoo! search efforts. George Hadjigeorgiou was tasked with helping them get on board.

I knew some of the first Flickr staff based out of London; they sat alongside technologist Tom Coates, who would later work on Fire Eagle. They all sat in a windowless meeting room on the floor below the one the European marketing team sat on.

Most people didn’t even know that they were there, working away thinking about things like geotagging – a key consideration in Where 2.0 services and mobile search.

Going over to the Yahoo! campus in Sunnyvale made it clear to me that the difference in cultural styles was just as pronounced over there – something apparent from just one cigarette break with Stewart Butterfield of Flickr.

    Secondly, there was the locale. The best way I found to help British and Irish people get the environment of Silicon Valley was to describe it as a more expansive version of Milton Keynes with wider roads and a lot more sunshine. One of the biggest shocks for me on my first visit to the Bay Area was how ordinary Apple and Google’s offices felt. (This was 1 Infinite Loop before Apple Park construction started). The canopy over the main building entrance looked like an airport Novotel, or every shopping centre throughout the UK.

    In the same way that Milton Keynes is not London; Silicon Valley’s quintessential campus laden town Sunnyvale is not San Francisco.

This is not the dystopian doom-spiral San Francisco of today, with failed governance and pedestrianisation projects. At this time, San Francisco was on the up. Having been clobbered by the dot com bust in the early noughties, financial services had kept the city ticking over, and technology was on the rise again. Home town streetwear brand HUF was making a name for itself with its first shop in the Tenderloin, and the DNA Lounge had consistently great nights, from west coast rave and goth sounds to being a haven for mashup culture with its Bootie nights.

There were great cinemas, vibrant gay night life and the sleaze of the Mitchell Brothers O’Farrell theatre. The Barry Bonds era San Francisco Giants won more than their fair share of baseball matches.

    If Yahoo! were going to keep talent, they’d need a place in the city. It makes sense that setting up the San Francisco space fell to Caterina Fake. Fake was co-founder of Flickr and was given a mandate by Jerry Yang to ‘make Yahoo! more like Flickr’. So she decided to set up an accelerator for new products.

    Brickhouse

    According to Caterina Fake on Threads:

    I dug around on the company intranet and exhumed an old deck for an initiative called “Brickhouse” which had been approved by the mgmt, but never launched.

    Caterina Fake (@cefake on threads)

This tracks with my experience in the firm: projects would form, make rapid progress and then disappear. And during the first dot com boom, San Francisco was home to online media companies, such as Plastic (Razorfish SF), Organic and Agency.com, many of whom also had offices in New York. Wired magazine had its office there, as did a plethora of start-ups.

    Fake goes on to say that Brickhouse managed to use the same office space she had worked in while she had worked at Organic over a decade earlier.

    The 60 Minutes episode Dot-com Kids marked an acme in this evolution of San Francisco. At the time Fake was doing this exercise, there was probably a Yahoo! sales team based in San Francisco proper, but that would be it.

Fake cleans up the Brickhouse deck and, with Bradley Horowitz, gets it through the board again, with the then Chief Product Officers Ash Patel and Geoff Ralston, president Sue Decker and Chief Yahoo Jerry Yang as the board champions of the project.

Fake hands off to Chad Dickerson to realise Brickhouse as she heads off on maternity leave. Fake, Dickerson and Horowitz assemble the Brickhouse team (aka the TechDev group) and the ideas that would eventually become Pipes by Yahoo, Fire Eagle and other projects.

    This is where my origins viewpoint on Pipes by Yahoo finishes. For the download on its creation, go here now; the link should open in a new tab and I will still be here when you get back to discuss the service’s impact.

    Pipes by Yahoo was launched to the public as a beta product on February 7 2007. Below is how it was introduced on the first post added to the (now defunct) Yahoo Pipes Blog. At this time product blogs became more important than press releases for product launches as information sources to both tech media and early adopters.

    Introducing Pipes

    What Is Pipes?
    Pipes is a hosted service that lets you remix feeds and create new data mashups in a visual programming environment. The name of the service pays tribute to Unix pipes, which let programmers do astonishingly clever things by making it easy to chain simple utilities together on the command line.

    Philosophy Behind the Project
    There is a rapidly-growing body of well-structured data available online in the form of XML feeds. These feeds range from simple lists of blog entries and news stories to more structured, machine-generated data sources like the Yahoo! Maps Traffic RSS feed. Because of the dearth of tools for manipulating these data sources in meaningful ways, their use has so far largely been limited to feed readers.

    What Can Pipes Do Today?
    Pipes’ initial set of modules lets you assemble personalized information sources out of existing Web services and data feeds. Pipes outputs standard RSS 2.0, so you can subscribe to and read your pipes in your favorite aggregator. You can also create pipes that accept user input and run them on our servers as a kind of miniature Web application.

    Here are a few example Pipes to give you an idea of what’s possible:

    • Pasha’s Apartment Search pipe combines Craigslist listings with data from Yahoo! Local to display apartments available for rent near any business.
    • Daniel’s News Aggregator pipe combines feeds from Bloglines, Findory, Google News, Microsoft Live News, Technorati, and Yahoo! News, letting you subscribe to persistent searches on any topic across all of these data sources.

    What’s Coming Soon?
    Today’s initial release includes a basic set of modules for retrieving and manipulating RSS and Atom feeds. With your help, we hope to identify and add support for many other kinds of data formats, Web services, processing modules and output renderings.

    Here are some of the things we’ve already got planned for future releases:

    • Programmatic access to the Pipes engine
    • Support for additional data sources (such as KML)
    • More built-in processing modules
    • The ability to extend Pipes with external, user-contributed modules
    • More ways to render output (Badges, Maps, etc…)

    Pipes is a work in progress and we’ll need your help to make it a success. Try building some simple pipes and advise us what works well and what doesn’t in the online editor. Tell us how you’d like to use Pipes, what we can do to make cool things possible, and show us ways you’ve found to use Pipes that never even occurred to us. In return, we promise to do our best to make Pipes a useful and enjoyable platform for creating the next generation of great Web projects.

    And please have fun!

    The Pipes Development Team
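Pipes itself is long gone, but the shape of a typical pipe – fetch several feeds, merge them, filter, sort and re-publish – is easy to sketch in code. Here is a rough equivalent in Python using the third-party feedparser library; the feed URLs are placeholders, and the real service did all of this visually, without any code:

```python
import feedparser  # third-party library: pip install feedparser

FEEDS = [
    "https://example.com/news.rss",       # placeholder feed URLs
    "https://example.org/blog/atom.xml",
]

def aggregate(feeds, keyword=None, limit=20):
    """Merge several RSS/Atom feeds, optionally filter by a keyword, newest first."""
    items = []
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            if keyword and keyword.lower() not in title.lower():
                continue
            items.append(entry)
    # Sort on the parsed publication date where one is present.
    items.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
    return items[:limit]

for entry in aggregate(FEEDS, keyword="unix"):
    print(entry.get("published", ""), "-", entry.get("title", ""))
```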

    Pipes impact

I had a good, if exhausting, time at Yahoo! It was my first in-house role and my part of the central marketing team had a heavy workload. By the time Pipes by Yahoo launched, I had left Yahoo! Europe. There had been a re-organisation of the European arm and the business had been ‘Kelkoo-ised’; a few of us on the European central marketing team took the opportunity to take the money and run.

    I remember bringing Salim (who headed the European search team) up to speed and getting his support to push for me getting a payout, rather than fighting my corner.

    Peanut Butter Memo

Brad Garlinghouse’s peanut butter manifesto was made public towards the end of the year, portraying a Game of Thrones-type power play which would have seen the kind of structures that were put in place in the European organisation rolled out globally.

    On the face of it, some of it was pertinent, but it lacked a wider vision.

While Garlinghouse has gone on to have a really successful career at Ripple, the Yahoo! business unit he ran had several problems. He was in charge of Music and the Comms & Community BU. At the time it had a poor record of building products fit for early adopters, like music properties that weren’t Mac-compatible, just as the iTunes Store and the Apple iPod were springboarding off the Mac community and into the mainstream.

Then there was the then-new Yahoo! Mail, which didn’t work on Safari, and a Messenger client which was worse to use than third-party clients like Trillian or Adium. All of which made it hard to build the kind of buzz that would bridge to mainstream users. Yahoo! Messenger could have been Skype or WhatsApp. It became neither.

    For a more modern example, think about the way Instagram and Threads were Apple iPhone first to build a core audience.

    At the time, I was less charitable about the memo. And the memo raised wider questions about the business; like was the CEO facing an executive revolt?

The launch of Pipes by Yahoo helped to inject some more positive energy back into the Yahoo! brand. Remember what I said earlier about how talent wanted to join Yahoo!’s engineering and development teams because of Flickr. They started to want to join Yahoo! because of Pipes.

    The outside world

    I was back agency side when Pipes launched. I had friends within Yahoo! still and kept an eye on the various product blogs. I got the heads-up on Pipes and put aside an afternoon and an evening to explore it fully. A quick exploration gave one an idea of how powerful Pipes by Yahoo could be. While Pipes was powerful, it was also relatively user friendly, like Lego for data. It was more user friendly than Apple’s Automator, which inspired Pipes by Yahoo! in the first place.

At this time in London, the number of people working on social media and online things was still relatively small. Knowledge was shared rather than hoarded at grassroots events and on an ecosystem of personal blogs. This was a group of people with enquiring minds, a number of whom I can still call friends.

We shared some of the public recipes on Pipes by Yahoo and learned from them, just as I had learned about Lotus 1-2-3 macros in the early 1990s by picking through other people’s examples. (I put this to use automating data records in the Corning optical fibre sales support laboratory that I worked in at the time.)

The agency I worked at had a number of large technology clients including AMD, Fujitsu Siemens’ personal computing devices (notably smartphones), parts of Microsoft and LG.

AMD and Microsoft were keen to keep track of any mention of their brands in a number of priority blogs and news sites at the time. Social listening was in its infancy and there were a number of free tools available, which I got adept at using.

We managed to build and sell AMD and Microsoft each a custom feed which provided them with links to relevant content in near real-time, which they then published on an internal site so that key audiences always had their fingers on the pulse.

This was all built on top of two free Pipes by Yahoo accounts which used similar but tweaked recipes to make this happen.

    On the back of that work, we managed to sell in a couple of small websites to the Microsoft team based on WordPress. I had long moved on to another agency role by the time the Pipes by Yahoo feeds would have died.

    Discussing Pipes by Yahoo with friends, they said it had inspired them to learn to code. Pipes by Yahoo spurred creativity and creation in a similar way to HyperCard.

    Zeitgeist

    While all of this has talked about Pipes by Yahoo! and how great the launch was, the ending of Pipes was much more humdrum. The service had been glitchy at the best of times and wasn’t being maintained in the end. In conversations I had with friends, it was compared to a British sports car: unreliable but loveable. Yahoo! closed it down on September 30, 2015.

Which begs the question: why is Pipes by Yahoo, which was shut down eight and a half years ago, being celebrated amongst the digerati?

    I think that the answer to this is in the current online zeitgeist. The modern web isn’t something that anyone involved in web 2.0 would have signed up for. Algorithms have fragmented the global town hall archetype envisaged for social. The web no longer makes sense in aggregate, as it’s splintered by design.

    The modern web feels ephemeral in nature. This seems to have gone hand-in-hand with a video first web exemplified by TikTok.

The social platforms driving that fragmentation seem to be declining in relevance and it isn’t clear what’s next. The people-driven web of knowledge search and web 2.0 is under pressure from AI providing a mass of ‘just good enough’ content. Even influencers are being usurped by digital avatars, and even audience engagement is often synthetic. All of which leaves the netizen in a state of confusion rather than the control that Pipes by Yahoo offered.

Taylor Lorenz is a journalist who made net culture and platforms her beat. Her book Extremely Online, published in October last year, feels like she is reporting from another planet rather than the recent web.

    More information

    Mediasaurus no more? The Well

    Let’s Get This Straight: Yes, there is a better search engine | Salon.com (December 21, 1998)

    The Original GOOGLE Computer Storage Page and Brin

    Notre histoire en détail | Google

    How Google Became a Verb | TLF Translation

    Facebook Yahoo! patents case | renaissance chambara

    Yahoo! Answers Adoption | renaissance chambara

    Sadowski, J. (2020). “The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism.” Antipode 52 (2): 562-580.

    Amazon.com Launches Web Services; Developers Can Now Incorporate Amazon.com Content and Features into Their Own Web Sites; Extends ”Welcome Mat” for Developers | Amazon.com newsroom

    Nobody Knows What’s Happening Online Anymore – The Atlantic

    Extremely Online: The Untold Story of Fame, Influence and Power on the Internet by Taylor Lorenz

    The Age of Social Media Is Ending | The Atlantic

    AI is killing the old web, and the new web struggles to be born | The Verge

    Is the web actually evaporating? | Garbage Day