Apple development changes were at the forefront of Apple’s WWDC keynote for 2025. I think the focus on Apple development was happening for a few reasons:
Apple got burned announcing sub-standard AI offerings last year.
The new translucent interface is divisive.
The multi-tasking iPad was interesting for power users, but most usage is as a communal device to consume content.
Apple has a number of small on-device models that do particular things well, which is why Apple needs to get developers on board to come up with compelling uses.
The Mac still has great hardware, passionate developers and a community that cares about great, life-changing software. Apple’s development focus was coming home.
My thinking on the concept of intelligence per watt started as bullets in my notebook. It was more of a timeline than anything else at first and provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt.
TL;DR (too long, didn’t read)
Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering.
Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.
Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.
A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits.
Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt (a rough sketch of the arithmetic follows this summary).
As intelligence per watt improves, there will be a point at which the question isn’t just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn’t mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.
Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but also for user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.
Ironically the force multiplier in intelligence per watt is people and their use of ‘artificial intelligence’ as a tool or ‘co-pilot’. It promises to be an extension of the earlier memetic concept of a ‘bicycle for the mind’ that helped inspire early developments in the personal computer industry. The upside of an intelligence per watt focus is more personal, trusted services designed for everyday use.
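As flagged above, here is a minimal sketch of one way to put a number on intelligence per watt. It assumes that throughput on a workload (tasks or tokens per second) is an acceptable proxy for ‘intelligence’ and that average power draw can be measured at the device; the figures below are illustrative assumptions rather than measurements.

```python
# Minimal sketch: one way to operationalise 'intelligence per watt'.
# Assumes throughput (tasks or tokens per second) is a workable proxy for
# intelligence and that average power draw is measured in watts.
# All numbers below are illustrative assumptions, not real measurements.

def intelligence_per_watt(throughput_per_second: float, avg_power_watts: float) -> float:
    """Work done per joule of energy: throughput divided by power (1 W = 1 J/s)."""
    return throughput_per_second / avg_power_watts

# Hypothetical comparison: a large cloud-hosted model vs a small on-device model.
cloud_model = intelligence_per_watt(throughput_per_second=400.0, avg_power_watts=700.0)
on_device_model = intelligence_per_watt(throughput_per_second=30.0, avg_power_watts=8.0)

print(f"Cloud model:     {cloud_model:.2f} units of work per joule")
print(f"On-device model: {on_device_model:.2f} units of work per joule")
```

Framed this way, a modest model with a very low power draw can beat a far more capable one on intelligence per watt – the argument of this piece in miniature.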
While not a computer, an early example of integration was a vacuum valve that combined several radio parts in one glass envelope. It contained three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components were housed in their own glass tubes. Normally each triode would be inside its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.
Post-war scientific boom
Between 1949 and 1957 engineers and scientists from the UK, Germany, Japan and the US proposed what we’d think of as the integrated circuit (IC). These ideas were made possible when breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and Sprague Electric Company to connect different types of components on the one piece of silicon to create the IC.
Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define an IC, with what is now called a monolithic IC being considered a ‘true’ one. Kilby’s version wasn’t a true monolithic IC. As with most inventions, it is usually the child of several interconnected ideas that coalesce at a given point in time. In the case of ICs, it was happening in the midst of materials and technology developments, including data storage and computational solutions ranging from the idea of virtual memory through to the first solar cells.
Kilby’s ICs went into an Air Force computer[ii] and an onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].
TTL (transistor–transistor logic) circuitry was invented at TRW in 1961; the company licensed it out for use in data processing and communications – propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems rely on the mainframe as a concept.
AI’s early steps
What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on the hypothesis that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it’. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we’d now call cognitive science.
Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:
Reasoning as search – essentially a step-wise trial-and-error approach to problem solving, compared to wandering through a maze and back-tracking when a dead end was found (see the sketch after this list).
Natural language – where related phrases existed within a structured network.
Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer.
Single-layer neural networks – for rudimentary image recognition.
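Below is a minimal sketch of the ‘reasoning as search’ idea from the first item above: depth-first trial and error through a maze, backing up whenever a dead end is reached. The maze layout and function names are my own illustration, not drawn from any 1950s system.

```python
# Minimal sketch of 'reasoning as search': step-wise trial and error with
# back-tracking, like wandering through a maze and retreating from dead ends.

MAZE = [
    "S.#.",
    ".#..",
    "...#",
    "#..G",
]  # 'S' start, 'G' goal, '#' wall, '.' open floor

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    visited = set()

    def search(r, c, path):
        if not (0 <= r < rows and 0 <= c < cols) or maze[r][c] == "#" or (r, c) in visited:
            return None                      # dead end: back-track
        visited.add((r, c))
        path = path + [(r, c)]
        if maze[r][c] == "G":
            return path                      # goal reached
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            found = search(r + dr, c + dc, path)
            if found:
                return found
        return None                          # nothing worked from here: back-track further

    return search(*start, [])

print(solve(MAZE))  # prints the path of squares from start to goal
```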
By the time the early 1970s came around, AI researchers had run into a number of problems, some of which still plague the field to this day:
Symbolic AI wasn’t fit for purpose when solving many real-world tasks, like crossing a crowded room.
Trying to capture imprecise concepts with precise language.
Commonsense knowledge was vast and difficult to encode.
Intractability – many problems require an exponential amount of computing time.
Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems.
By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to the major centres researching AI was reduced by the US and UK governments. Despite the reduction of funding to the major centres, work continued elsewhere.
Mini-computers and pocket calculators
ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, easier to manufacture and easier to maintain. DEC (Digital Equipment Corporation) launched the first minicomputer, the PDP-8, in 1965. The lower cost of mini-computers allowed them to be used to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we’d think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.
A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments. It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.
LISP machines and PCs
AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore’s Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point that they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines continued and eventually surpassed them as the cost of computing dropped due to mass production.
The rise and decline of LISP machines was due not only to Moore’s Law, but also to Makimoto’s Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so, while Tsugio Makimoto originally observed 10-year pivots between standardised and customised semiconductor processors[v]. The rise of personal computing drove a pivot towards standardised architectures.
PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel’s x86 processors.
Personal computing started in businesses when office workers brought in their own computers to use early programs like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.
Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own.
Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.
A Bicycle for the Mind
Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across, in everything from briefing product teams to press interviews. A 1990 filmed interview with Jobs articulates the context of the saying particularly well.
In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].
The ‘bicycle for the mind’ concept was repeated in early Apple advertisements for the time[vii] and even informed the Macintosh project codename[viii].
Jobs articulated a few key concepts.
Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, if you bought a computer you programmed it, which was the reason why early personal computer owners in the UK went on to birth a thriving games software industry including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true of most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac; my Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
At the time Jobs originally did the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn’t stop his idea for something greater.
Jobs’ idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn’t neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs’ Apple owed a lot to the vision shown in Doug Engelbart’s ‘Mother of All Demos’[x].
Networks
Work took a leap forward with networked office computers, pioneered by Apple’s Macintosh Office[xi]. This was soon overtaken by competitors. Networking facilitated workflow within an office and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.
At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].
In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repairers to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have to be checked manually by playing back the tape.
Often messages were taken by a receptionist in their office, if they had one, or more likely by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.
Fuzzy logic
The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.
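As a rough illustration of how this style of fuzzy controller works – a simplified sketch of the general technique, not Zojirushi’s actual algorithm – crisp sensor readings are mapped onto overlapping membership functions and a handful of rules blend their outputs, rather than tripping hard if/else thresholds.

```python
# Simplified sketch of a fuzzy controller in the spirit of the rice cooker
# example: fuzzify the input, apply rules, blend the outputs. Illustrative
# only; the membership functions and set points are made-up assumptions.

def too_cold(temp_c):     # membership: 1 well below target, fading to 0
    return max(0.0, min(1.0, (95 - temp_c) / 10))

def about_right(temp_c):  # membership peaks around the target temperature
    return max(0.0, 1 - abs(temp_c - 98) / 5)

def too_hot(temp_c):
    return max(0.0, min(1.0, (temp_c - 100) / 5))

def heater_power(temp_c):
    """Weighted blend of rule outputs (a crude centre-of-gravity defuzzification)."""
    rules = [
        (too_cold(temp_c),    1.0),   # if too cold    -> full power
        (about_right(temp_c), 0.3),   # if about right -> gentle simmer
        (too_hot(temp_c),     0.0),   # if too hot     -> heater off
    ]
    total_weight = sum(weight for weight, _ in rules)
    if total_weight == 0:
        return 0.0
    return sum(weight * output for weight, output in rules) / total_weight

for t in (80, 96, 99, 103):
    print(f"{t}°C -> heater at {heater_power(t):.0%}")
```

The appeal for consumer electronics was that this sort of control fits comfortably on a small, cheap, low-power chip.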
Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association’s Tokyo office told the Washington Post[xv]:
“Some of the fuzzy concepts may be valid in the U.S.,”
“The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”
“But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “
By the end of the 1990s, fuzzy logic was embedded in various consumer devices:
Air-conditioner units – understood the room, the temperature difference inside and out, and the humidity, then switched on and off to balance cooling and energy efficiency.
CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
Dishwashers – understood how many dishes were loaded and their type of dirt, then adjusted the wash programme.
Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer sat from the set.
Vacuum cleaners – adjusted suction power as they moved from carpeted to hard floors.
Video cameras – compensated for the movement of the camera to reduce blurred images.
Fuzzy logic was sold on its benefits, with the technology concealed from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes, it could rely on small, lower-power specialist chips[xvi] offering a reasonable amount of intelligence per watt, some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached ‘peak intelligence’ for what they needed to do, based on the power of fuzzy logic[xvii].
Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail.
Decision support systems & AI in business
Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used varied, but some relied on rules-based systems. These were used in at least some capacity to reduce manual office work. For instance, credit card approvals were processed based on rules that included various factors such as credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort, involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.
Three decades on, IBM had a similar problem with its Watson offerings, with particularly high-profile failures in mission-critical healthcare applications[xviii]. Secondly, a lot of tasks were ad-hoc in nature, or might require transposing data across disparate systems.
The rise of the web
The web changed everything. The underlying technology allowed for dynamic data.
Software agents
Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user’s computer which monitored price changes for products that the customer was interested in buying. The app then notified the user when the monitored price reached a level determined by the customer. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].
The PapriCom client app was part of a wider set of technologies known as ‘push technology’, which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now.
Web search
The wealth of information quickly outstripped netizens’ ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the web search workload amongst them. As search engines tried to make sense of an exponentially growing web, machine learning became part of the developer toolbox.
Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].
All the major search engines looked at how deep learning could help improve search results relevance.
Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google’s success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved by a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built using optimised versions of open-source software and cheap general-purpose PC components ganged together in racks and operating in clusters.
General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products used for everything from embedded applications to early mobile and portable devices and computers drove progress in improving intelligence per watt.
Makimoto’s tsunami back to specialised ICs
When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, Makimoto’s predicted phase was stuck in ‘standardisation’. It just worked. But several factors drove the swing back to specialised ICs.
Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been a steady step change in foundry lithography processes that allowed components to be made at ever-smaller dimensions. Those dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of the key technologies that allow EUV photolithography[xxv]. During this time, Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy’s research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once they had it in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that the production processes improving IC intelligence per watt slowed down and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world’s largest ‘pure-play’ foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
Progress in EDA (electronic design automation): production process improvements in IC manufacture allowed for an explosion in device complexity as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to think about the idea of very large-scale integration (VLSI) within IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. Secondly, it facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on ‘blocks’ of proprietary designs like ARM’s cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve intelligence per watt[xxx].
Increased focus on portable devices: a combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones to laptop and tablet computers – disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in reality it would do a month or so of work. Laptops at the time could do half a day’s work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swappable batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company’s worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].
Apple’s first in-house designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto’s wave back to specialised processor design[xxxiii]. It was a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].
New devices also meant new use cases that melded data on the web, on device, and in the real world. I started to see this in action working at Yahoo! with location data integrated on to photos and social data like Yahoo! Research’s ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact on adding Flickr support to Nokia N-series ‘multimedia computers’ (what we’d now call smartphones), starting with the Nokia N73[xxxv]. A year later the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson’s speculative fiction story Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].
Real-world QR codes helped connect online services with the physical world, enabling things like mobile payments or reading content online such as a restaurant menu or a property listing[xxxvii].
I labelled the web-world integration as a ‘web-of-no-web’[xxxviii] when I presented on it back in 2008 as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].
Deep learning boom
Even as there was a post-LISP-machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Because of the large amount of computing power needed, academia explored other areas during the 1990s and early 2000s. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].
A key factor in deep learning’s adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build large language models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres, with less thought given to the electrical infrastructure required[xliii].
Google’s conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later OpenAI built on Google’s work with the generative pre-trained transformer model better known as GPT[xlvi].
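The heart of the transformer described in that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The bare-bones NumPy rendering below follows that published formula and shows why the architecture is computationally heavy: the score matrix grows with the square of the sequence length.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per Vaswani et al. (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_len, seq_len) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted mix of the value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (random, for illustration only).
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```

That quadratic cost in sequence length is one reason attention-based models need so much more compute – and power – than the narrow fuzzy-logic controllers described earlier in this piece.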
Since 2018 we’ve seen successive GPT-style models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations for building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].
These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].
The current direction of development in generative AI services is towards raw computing power, rather than a more energy-efficient focus on intelligence per watt.
Technology consultancy / analyst Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix].
| Company | Nvidia GPUs bought | AMD GPUs bought | Self-designed custom processing chips bought |
|---|---|---|---|
| Amazon | 196,000 | – | 1,300,000 |
| Alphabet (Google) | 169,000 | – | 1,500,000 |
| ByteDance | 230,000 | – | – |
| Meta | 224,000 | 173,000 | 1,500,000 |
| Microsoft | 485,000 | 96,000 | 200,000 |
| Tencent | 230,000 | – | – |
These numbers provide an indication of the massive deployment of AI-specific computing power. Despite the massive amount of computing power available, services still weren’t able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter ‘whale FAIL’[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].
There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks.
Agentic AI
Thinking on software agents goes back to work done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the ‘Knowledge Navigator’[lviii] in 1987 which hinted at autonomous software agents. What we’d now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]. This was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.
A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple’s Siri. Wildfire did have limitations: it was an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile subscribers on early digital cellular networks.
Almost a quarter century later we’re now seeing devices that are looking to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv] and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end is Boeing’s MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany US combat aircraft into battle, once it’s been directed to follow a course of action by its human colleague in another plane.
The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn’t jammed the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size of computing resources and their power is a trade-off with the amount of fuel or payload it will carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.
As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].
This is still a long way from the vision of completely local execution of ‘agentic AI’ on a mobile device, because intelligence per watt hasn’t yet scaled down to that level to be useful, given the vast range of possible uses that would be asked of the agentic AI model.
Maximising intelligence per watt
There are three broad approaches to maximise the intelligence per watt of an AI model.
Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). Their developers have been building large data centres to run the models in. They build on past developments in distributed computing going all the way back to 1962[lxx].
Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don’t know about the veracity of either claim. A rough sketch of one common optimisation technique follows this list.
Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril’s Copperhead underwater drone family[lxxiv].
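As flagged in the second item above, here is a rough sketch of one common optimisation technique – quantisation, i.e. storing each model weight in fewer bits. It is my own illustration rather than anything attributed to the companies mentioned, and the parameter counts are assumptions chosen purely to show the arithmetic.

```python
# Back-of-the-envelope sketch: how weight precision changes the memory needed
# just to hold a model's weights - one reason quantisation helps intelligence
# per watt on battery-powered devices. Parameter counts are illustrative.

BYTES_PER_WEIGHT = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def model_memory_gb(n_parameters: float, precision: str) -> float:
    """Approximate memory for the weights alone, ignoring activations and overhead."""
    return n_parameters * BYTES_PER_WEIGHT[precision] / 1e9

for params, label in [(7e9, "7B-parameter model"), (70e9, "70B-parameter model")]:
    for precision in ("fp16", "int8", "int4"):
        print(f"{label} at {precision}: ~{model_memory_gb(params, precision):.1f} GB")
```

Less memory to hold and move weights generally translates into less energy per token, which is the intelligence per watt argument expressed in hardware terms.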
Even if an AI model can do something, should the model be asked to do so?
We have a clear direction of travel over the decades towards more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally.
Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This works for general everyday tasks with no information risk, for instance helping you complete a newspaper crossword when there is an answer you are stuck on and the intellectual struggle has gone nowhere.
A private cloud AI service would make sense when working, accessing or processing data held on the service. Examples of this would be Google’s Vertex AI offering[lxxv].
On-device AI models make sense for working with one’s personal private details such as family photographs, health information or accessing apps within your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. ***I am using Strava as an example because it is popular and widely known, not because it is a bad app per se.***
While businesses have the capability and resources to run a multi-layered security infrastructure to protect their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don’t have the same security. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo’s autonomous taxis. Meanwhile, an individual’s mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models down into a laptop, tablet or smartphone?
The IT director is seeing a return to power, and it’s thanks to hackers and AI. The smartphone, the resurgence of Apple, SaaS and the rise of knowledge working saw IT decisions become more organic, thanks to increased access to online services that provided better features than traditional enterprise software companies. IT teams found management of mobile devices onerous and faced hostile users.
This meant that the IT director became less important in software marketing. A decade ago, marketing pivoted to a bottom-up approach of ‘land and expand’. This drove the sales of Slack, Monday.com and MongoDB.
Two things impacted this bottom-up approach to enterprise innovation:
Cybercrime: ransomware and supply chain attacks. Neither is new – ransomware can be traced back to 1989, with malware known as the AIDS trojan (a name with much cultural resonance back then). US politicians were considering supply chain attacks a security issue by 2011, and they gathered pace in the 2010s with the Target data breach. Over COVID, with the rise of remote working, the attacks increased. The risk put the IT director back in the firing line.
AI governance: generative AI systems learn from their training data and from user inputs. This led to a wide range of concerns, from company intellectual property leaving via the AI system to AI outputs based on intellectual property theft.
The most immediate impact of this is that the IT director is becoming a prized target on more technology marketers’ agendas again. This harks back to the IT director-focused marketing of the 1980s and the early 2000s, with a top-down, C-suite focus that included the IT director. It implies that established brands like Microsoft and IBM will do better than buzzier start-ups. It also means I am less likely to see adverts for Monday.com in my YouTube feed over time.
This doesn’t mean that the IT director won’t be disrupted in other parts of their role, as machine learning facilitates process automation in ways that are continuing to evolve.
Brands plan for a quiet Pride Month | News | Campaign Asia – The hesitation around Pride may also be related to executives’ increasing reluctance to speak out on social issues more broadly. Wolff pointed to Edelman’s Trust Barometer, which found that 87% of executives think taking a public stance on a social issue is riskier than staying silent. “Essentially, nine out of every 10 executives believe that the return on investment for their careers is not worth the support during this turbulent time,” said (Kate) Wolff. “This is clearly problematic for both the community and the progress we have made in recent years.”
Chinese Firms Are Investing Heavily in Whisky Market | Yicai Global – Although international liquor giants have developed the local whisky consumption market for many years, the market penetration rate of overseas spirits in China, including whisky, is only about 3 percent. This means domestic whisky producers will need to develop new consumption scenarios, Yang said. Whisky consumption in China centers mainly around nightclubs, gift-giving and tasting events held by affluent consumers, Yang noted, but in these scenarios, imported whisky brands with a long history tend to be more popularly accepted, so it will be difficult for domestic rivals to compete. According to the latest report from alcohol market analysts IWSR, China’s whisky market was worth CNY5.5 billion (USD758 million) last year, having grown more than fourfold over the past 10 years. It is expected to reach CNY50 billion (USD6.9 billion) in the next five to 10 years.
Yoox Net-a-Porter exits China to focus on more profitable markets – Multi-brand luxury clothing sales platform Yoox Net-a-Porter is closing its China operations, this against a backdrop of other brands also pulling out of Chinese e-commerce including Marc Jacobs fragrances. The corporate line from Richemont was “in the context of a global Yoox Net-a-Porter plan aimed at focusing investments and resources on its core and more profitable geographies”.
Ignite the Scent: The Effectiveness of Implied Explosion in Perfume Ads | the Journal of Advertising Research – Scent is an important product attribute and an integral component of the consumption experience as consumers often want to perceive a product’s smell to make a well-informed purchase decision. It is difficult, however, to communicate the properties of a scent without the physical presence of odorants. Through five experiments conducted in a perfume-advertising context, our research shows that implied explosion, whether visually (e.g., a spritz blast) or semantically created, can increase perceived scent intensity, subsequently enhancing perceived scent persistence. It also found a positive effect of perceived scent persistence on purchase intention. In conclusion, the research suggests that implied explosion can be a powerful tool for advertisers to enhance scent perception, consequently boosting purchase intention.
Mat Baxter’s Huge turnaround job | Contagious – interesting perspective on his time at Huge. What I can’t square it all with is what we know about marketing science and declining effectiveness across digital media
On my LinkedIn, I couldn’t escape from the Cannes festival of advertising, partly because one of the projects I had been involved in was a shortlisted entry. One of the most prominent films was Dramamine’s ‘The Last Barf Bag: A Tribute to a Cultural Icon’. It was notable because of its humour, which was part of this year’s theme across categories.
Turning a sashiko project born out of earthquake recovery into a brand! The challenge of 15 mothers! – CAMPFIRE – ancient Japanese craft – KUON and Sashiko Gals are part of a new generation of designers keeping the traditional Japanese technique of sashiko alive. And together, they are bringing the decorative style of stitching to our favorite sneakers (including techy Salomons!). Sashiko is a type of simple running stitch used in Japan for over a thousand years to reinforce fabrics. It’s typically done with a thick white thread on indigo fabric and made into intricate patterns.
Nationalism in Online Games During War by Eren Bilen, Nino Doghonadze, Robizon Khubulashvili, David Smerdon :: SSRN – We investigate how international conflicts impact the behavior of hostile nationals in online games. Utilizing data from the largest online chess platform, where players can see their opponents’ country flags, we observed behavioral responses based on the opponents’ nationality. Specifically, there is a notable decrease in the share of games played against hostile nationals, indicating a reluctance to engage. Additionally, players show different strategic adjustments: they opt for safer opening moves and exhibit higher persistence in games, evidenced by longer game durations and fewer resignations. This study provides unique insights into the impact of geopolitical conflicts on strategic interactions in an online setting, offering contributions to further understanding human behavior during international conflicts.
The West Coast’s Fanciest Stolen Bikes Are Getting Trafficked by One Mastermind in Jalisco, Mexico | WIRED – “Not so long ago, bike theft was a crime of opportunity—a snatch-and-grab, or someone applying a screwdriver to a flimsy lock. Those quaint days are over. Thieves now are more talented and brazen and prolific. They wield portable angle grinders and high-powered cordless screwdrivers. They scope neighborhoods in trucks equipped with ladders, to pluck fine bikes from second-story balconies. They’ll use your Strava feed to shadow you and your nice bike back to your home.” – not terribly surprising, you’ve seen the professionalisation and industrialisation in theft across sectors from shoplifting, car theft and watch thefts so this is continuing the trend.
OpenAI Just Gave Away the Entire Game – The Atlantic – The Scarlett Johansson debacle is a microcosm of AI’s raw deal: It’s happening, and you can’t stop it. This is important not from a technology point of view, but from the mindset of systemic sociopathy that now pervades Silicon Valley.
Apple Intelligence is Right On Time – Stratechery by Ben Thompson – Apple’s orientation towards prioritizing users over developers aligns nicely with its brand promise of privacy and security: Apple would prefer to deliver new features in an integrated fashion as a matter of course; making AI not just compelling but societally acceptable may require exactly that, which means that Apple is arriving on the AI scene just in time.
‘Rare, vintage, Y2K’: Online thrifters are flipping fast fashion. How long can it last? | Vogue Business – as secondhand shopping becomes increasingly commonplace, this latest outburst brings to light the subjectivity of resale. What determines an item’s worth, especially in an age of viral micro-trends and heavy nostalgia? Is it ethically moral to set an item that’s the product of fast fashion — long criticised for not paying workers fairly — at such a steep upcharge, and making profit from it? If someone is willing to pay, does any of it matter?
In my take on the 2024 iPad Pro I am going to look at things through three lenses, now that the initial hot takes have cooled down. These three lenses are:
Hardware
Semiconductors
Advertisement
Apple and Microsoft both push their most powerful tablets, like the 2024 iPad Pro, as creator tools. However, at the time of writing I have been working alongside creative teams in a prominent ad agency, and both the creative and strategic elements of the work we were doing were pulled together using different software but the same hardware: Apple MacBook Pro computers and large secondary monitors. An illustrator attached a ‘graphics tablet‘ alongside their laptop to provide additional tactile control, in the same way I am known to use an outboard Kensington trackball for additional fine control when creating presentation charts.
Where I have seen iPads used:
Senior (older) executives replying to emails – I suspect it’s because the screen is bigger than a smartphone.
As a media player device. The iPad is the travel and bedside equivalent of the book and the portable DVD player.
As a presentation device. Friends who give a lot of public presentations at conferences, and one who works as a university lecturer, both use the iPad as a device to present from in place of lugging around a laptop.
In all of these use cases, there isn’t that much to differentiate iPad models and the main limitations are user intent or software-related.
My parents use an iPad I bought them to keep in touch with me. We started using an iPad as a Skype client over a decade ago. Then iMessage and FaceTime started to make more sense, particularly as they started getting Skype spam. It’s the computing equivalent of a kitchen appliance: largely intuitive and very little can go really wrong – that’s both the iPad’s strength and its weakness.
Secondly, there is the confusion of the Apple iPad product line-up, which is at odds with the way Apple got its second wind. In Walter Isaacson’s flawed biography of Steve Jobs, one of the standout things that the returning CEO did was ruthlessly prune the product line-up.
He made it into a 2 x 2 grid: professional and consumer, portable and desktop. For most of the past number of years, the iPhone has gone down this ‘pro and consumer’ split.
The iPad line-up is less clear cut to the casual observer:
iPad Mini
iPad
iPad Air
iPad Pro
In addition, there are Apple Pencils – a smarter version of the stylus that was used before capacitive touchscreens became commonplace. Some of these pencils work with some devices, but not others. It’s a similar case with other Apple accessories like keyboards that double as device covers. All of which means that your hardware accessories need an upgrade too. This is more than just getting a new phone case. It’s more analogous to having to buy a new second monitor or mouse every time you change your computer.
With all of that out of the way, let’s get into hardware.
Hardware
The 2024 iPad Pro launched before the Apple Worldwide Developer Conference, so we had no idea how the device would work in conjunction with iPadOS 18. Addressing long-term criticism of the iPad is as much about software as it is about hardware.
The 2024 iPad Pro still doesn’t have a definitive use case, but Apple decided to focus on creativity in its marketing.
Presumably this is because the main thing to celebrate about the 2024 iPad Pro is increased computing power and creative apps are the most likely to make use of that power. For many ‘non-creative’ use cases, the previous generation of iPad Pro is very over-powered for what it does.
Some of the choices Apple made with the hardware are interesting. The existing iPad Pro is a thin, lightweight computing device. The 2024 iPad Pro is Apple’s thinnest device ever. This thinness is a clever feat of engineering, but so would be an iPad of the same size with more battery capacity. Instead, Apple made the device a bit thinner with exactly the same battery life as previous models.
The iPad Pro uses two screens, one behind the other, to provide deeper and brighter colours at an extremely high resolution. This provides additional benefits such as avoiding the screen burn-in that OLED screens were considered to be vulnerable to.
The camera has moved from the side to the top of the 2024 iPad Pro in landscape mode. This has necessitated a new arrangement of magnets for attachments, which in turn drove the need for new accessories, including the new Apple Pencil Pro.
Semiconductors
The M4 processor is Apple’s latest silicon design and represents a move on from the current processors in Apple’s Mac range.
It is made by TSMC on a leading-edge 3-nanometre process – TSMC’s second generation of that process. Having it as the processor in the 2024 iPad Pro allows Apple and its partners to slowly ramp up production and usage of the new processor to match gains in semiconductor chip yields. This will give them the time to iron out any production challenges and resolve any quality issues. Relatively low production volumes would be a good thing prior to the processor being rolled out more widely.
Apple seems to be designing the M-series processors in parallel with the A-series processors used in iPhones and iPads in the past. They seem to have them in mind for a wider range of devices.
Advertisement
Apple previewed an advertisement to promote the 2024 iPad Pro.
Crush! has been executed with a high degree of craft in its production. It drew a lot of negative reactions from celebrities and current Apple customers, who saw it in terms of:
It being a wider metaphor of what technology was perceived to be doing to creativity. For instance, Hollywood actors and screen-writers are concerned about streaming and the effects of large language models.
Destroying real-life artefacts that consumers have attached meaning to. For instance, I use digital music, but also have a physical music collection that not only reflects my taste, but much more. Real-world experiences now provide respite from the digital world.
With product launches like the iPhone 3G, Apple created adverts that used show-and-tell rather than a literal metaphor for everything that could be crammed into the device.
Reversing the Crush! ad makes a similar point, but in a less oppressive way.
And as with everything else in life, there is seldom a time when an idea is truly new. There was an ad done by BBH London which used a crush metaphor to demonstrate all the features in LG’s Renoir phone circa 2008. As this circulated, Apple was perceived as being a copycat.
Presentation
Given that Apple events are now largely virtual post-COVID, we didn’t have a positive live audience reaction amongst those who ‘got it’ to guide public opinion. Instead it was left ‘contextless’ on social media.
The Apple exhibition centre at the new ‘spaceship’ campus doesn’t seem to be used in the way Apple ran live events prior to 2020. Apple held small event screenings for journalists in New York and London.
But was Crush! bad?
When I first saw it, I thought that it was good from a craft point of view. I was a bit surprised at how dark the lighting was; it felt a little off-key.
My personal opinion about the concept was that it felt a bit heavy-handed because it was so literal. The creative brief done by a strategist is usually the jumping-off point, not the literal creative concept.
But that doesn’t make it a bad advert; it just felt not particularly clever to someone who is probably more media-literate than the average person. I would go as far as to say it would have been unlikely to win creative advertising awards.
But I was also aware that my opinion didn’t mean that the ad wouldn’t be effective. Given the 2024 iPad Pro’s role as M4 guinea pig, Apple probably weren’t hoping for barn-storming sales figures and in the grand scheme of things the advert just wasn’t extremely important.
I was probably as blindsided as Apple was by the depth of feeling expressed in the online reaction.
TL;DR I don’t know if Crush! really is ‘bad’. Let’s ask some specific questions about different aspects of the ad.
Am I, or the negative responders, the target market?
Maybe, or maybe not. I don’t have a place for it in my current workflow. I still find that a Mac works as my primary creative technology device. What if Apple were aiming at college kids and first-jobbers? These people wouldn’t come to buying the 2024 iPad Pro with the same brand ‘baggage’ that I and many of the commentators have.
Working in marketing, the 1984 ad and the Think Different ads were classic campaigns. Hell, I can remember being a bit of an oddball at college as a Mac user. I helped friends get their secondhand Mac purchases up and running.
Going to coffee shops or working in the library, I would see a sea of laptop lids emblazoned with the Dell, Gateway, Toshiba and HP logos. If people were a bit quirky they might have a Sony Vaio instead.
I remember the boos and hisses in the audience at MacWorld Boston in 1997, when Apple announced its partnership with Microsoft.
Even when I worked at Yahoo! during the web 2.0 renaissance, Mac users were second-class citizens internally and externally in terms of our product offering.
In the eyes of young people today, Apple was always there, front and centre. The early iPad or iPhone experience as pacifier. The iPhone as the must-have teenage smartphone. The Mac at home and maybe an Apple TV box.
Finally, many high-performing adverts of the past aimed at young adults have left mainstream media and tastemakers nonplussed.
How did the ad test?
According to anecdotal evidence I have heard from people at IPSOS, a survey found that about half the respondents said they would be interested in finding out more about the 2024 iPad Pro. The younger the respondent, the more likely they were to be interested in the device.
System1 tested the ad and found that it scored 1.9 out of a possible maximum of 5. In System1 parlance this indicates somewhere between low and modest long-term brand growth derived from the advertisement. The average score for US advertisements is 2.3, but over half of the ads that ran in the Super Bowl this year scored between 1 and 2. This would imply that the ad could be improved; but the devil might be in the details, as implied by the IPSOS research.
Is Crush! just a copycat?
You can have the best creative director in the world who has seen a lot of advertising, but they might not know all advertising. Secondly, the advertising industry is getting rid of long-term professionals. According to the Institute of Practitioners in Advertising, no one retired from the industry in 2023, as staff were ‘phased out‘ of the industry well before retirement age. All of which means that there isn’t the historical memory to know if a campaign is sailing close to plagiarism.
And it isn’t just advertising. Earlier in my career, I got to see former business journalist and newspaper editor Damian McCrystal speak at a breakfast event. One thing stayed with me about his presentation, in which he talked about the financial industry:
The reason why we make the same mistakes over and over again is because ‘the City’ has a collective institutional memory of about eight years.
Damien McCrystal
So we had Northern Rock, Bear Stearns and Lehman Brothers, despite the fact that pretty much every financier I have ever met had read Liar’s Poker by Michael Lewis, a book based on his experiences as a bond salesman during the savings and loan crisis of the 1980s and 1990s.
So no, despite the similarity to the LG Renoir advertisement, I don’t think that Crush! was an intentional copy.
Welcome to my May 2024 newsletter. I hope that you’re looking forward to the spring bank holiday; unfortunately, if like me you’re in the UK, it’s the last public holiday until the end of August. This newsletter marks my 10th issue, and I wasn’t certain that I would get to a tenth edition.
The number ten carries a good deal of cultural symbolism, from the biblical Ten Commandments to the ten celestial (or heavenly) stems of the Shang dynasty, which marked the days of their week. There were corresponding earthly branches based on 12-day groupings. While the stems are no longer used in calendars, they still appear in feng shui, Chinese astrology, mathematical proofs (in place of the Roman alphabet), student grading systems and multiple-choice questionnaires.
New reader?
If this is your first newsletter, welcome! You can find my regular writings here and more about me here.
Things I’ve written.
I wrote a comment that struck a bit of a nerve about being asked to do a project ‘for my portfolio’.
Omakase and luxury futures. In the face of all the changes facing the luxury sector, is the answer learning from the Japanese tradition of omakase?
April marked the 20th anniversary of Dove’s campaign for real beauty. I took a slower approach than the LinkedIn hot takes to reflect on its legacy.
How behavioural science can help optimise the response to a coffee shop problem.
I saw clear parallels between car touchscreens and the changes that digital music instruments went through in terms of design and adoption.
I have had Alex Kassian’s cover version of the Manuel Göttsching classic E2 – E4 on heavy rotation. It was released just in time for the Ibiza season and has Mad Professor remixes dubbing out the balearic vibes for all the deep house shamans.
Books that I have read.
After Watches and Wonders 2024, I finally managed to get the time to read Rolex Wristwatches: An Unauthorized History by James M. Dowling. Dowling is the person the pre-owned watch market goes to for authentication of really old or unusual Rolex models. His history of the company, while unauthorised, had the collaboration of early Rolex staffers. What comes out is an interesting tale of adaptation. Rolex started off as a UK reseller. The company innovated due to client needs and somewhere along the way became the luxury watch manufacturing giant we know today. What becomes apparent is that its success was partly down to timing, circumstance and a belief that you change nothing unless you’re making it better. The last point is something that product managers the world over could learn from.
David McCloskey’s Damascus Station came highly recommended as leisure reading. My taste in espionage fiction runs more towards Mick Herron and John le Carré than the more action-orientated end of the genre, but this book had enough intellect and imperfection to make me put up with the James Bond factor.
At the time of writing I am working my way through Nixonland by Rick Perlstein, which I started before the student sit-ins against the conflict in the Gaza Strip happened. More on this book once I have finished it.
I like watches, the design and quality of engineering that they represent and even the sound of them ticking away, but I generally don’t enjoy Hodinkee interviews. However, when they interviewed sneaker legend Ronnie Fieg I watched it. Fieg’s story around his watches is amazing, with each watch marking a milestone.
TML Partners and Accenture Song have done an interesting report on ‘the future of intelligent marketing performance‘ – basically CRM and e-commerce – based on an impressive roundtable of marketers. What immediately struck me was how many of the problems would have been written about in a similar way a decade ago. We are constantly in a state of digital transformation, which is starting to feel more like ‘digital treading water’ now. That is down to relatively short organisational memory and the lack of a ‘learning element’ in organisations.
Back when I worked in Hong Kong, I got to work on Colgate alongside other agencies. The work I was doing was in association with the dedicated agency Red Fuse, which was the umbrella for all WPP work. I was eventually shut down from working on it by APAC senior management at my own agency at the time, due to internal agency politics that I long gave up trying to understand.
While I was working on the project, I got to meet Jason Oke, who is now in charge of global client relationships at Dentsu in New York. Jason appears on the Google Firestarters podcast discussing how to get great advertising ideas made. Some of the thoughts are timeless and echo the advice in Ogilvy on Advertising. It’s well worth listening to.
BBH Singapore Cultural Bleats newsletter
Every agency has some sort of email newsletter, but one that stands head and shoulders above the rest is BBH Singapore’s Cultural Bleats. I promise you, once you get past the name, it’s brilliant. The premise of the newsletter is that they put together interesting cultural things to act as useful provocations. This is exactly the kind of thinking, curation and sharing that planning and strategy teams should be doing if they aren’t over-committed on Workfront. A prime example of the kind of thing that Cultural Bleats might pick up on is how rich people no longer appear to eat, due to Ozempic and meal replacements like Huel.
Dow and Procter & Gamble announced an agreement to develop a proprietary way to recycle mixed plastics. I am all for improving the recycling of plastics, but a proprietary method adds complexity to a recycling system that’s already unfit for purpose. I hope that once commercialisation happens, P&G will follow the example of Unilever, which freely licensed its more efficient aerosol can technology to other manufacturers who were interested in it.
The Norwegian government published the results of its Mannsutvalget, or Men’s Equality Commission. The report goes into policies across several areas here (in Norwegian). It has some interesting findings that echo think-tank thinking about the intersection of social class and opportunity outcomes.
Some of the content around health is particularly interesting. Dagens Medisin covered some of these findings; you can see a translation of their article here. However, some of the findings on health did make me wonder. The report notes that men in Norway live shorter lives than women and considers this to be an equality challenge. Most writing I have seen around the gender mortality gap treats it as a biological given rather than a ‘gap’. It felt like more research was needed to ground this reframing in science, rather than leaving it as a well-meaning aspiration.
The report calls on the Research Council of Norway to take up the challenge of improving the knowledge base on many of the issues it tackles. The commission acknowledged data-related challenges and wanted revised statistics and indicators for gender equality, so that they reflect the equality challenges of boys and men better than those currently available.
If you have semiconductor clients and haven’t been on Malcolm Penn’s Future Horizons semiconductor industry awareness workshop, you’re in luck: he’s running it again on June 18th. I started my agency career working on technology hardware, gadgets and semiconductors, and the Future Horizons course helped no end. I went on to work for numerous technology clients including AMD, ARM and Qualcomm.
Finally, this essay on human creativity provided a lot of fuel for thought. It pulls together a multivariate model of why human creativity is on the wane.
Factors included:
A childhood lack of free time for play and imagination; children have much more regimented, structured lifestyles today.
Massive access to more cultural artefacts from around the world than we could possibly consume, at our fingertips. The unknown space is now limited, so there is less opportunity to be creative within it.
Science and technology innovation is connecting less disparate areas of knowledge in order to make a ‘thing’.
Stimulation is focused, rather than a wide range of stuff washing over us.
Things I have watched.
I have found myself watching less Netflix over time. Then Netflix moved from getting paid through the Apple App Store to wanting a direct payment and bumped the price up. So a mix of inertia and the lack of a compelling show or two to watch has meant that I have consciously uncoupled from Netflix for the time being. I will probably go back when I have a good enough reason. In the meantime, I am buying the odd Blu-ray or DVD here and there instead. It seems that I am not the only one who has taken this approach.
Amazon Prime Video seems to have a split personality, veering between Apple TV+-level tentpole content and a wide range of trashy films, some of which deserve the moniker ‘cult cinema’. Red Queen fits into the former category rather than the latter. It is based on a series of books by Juan Gómez-Jurado. I have just started reading the book Red Queen, but the TV series is compelling; I didn’t realise that I had managed to watch four episodes in one sitting.
I went back to watch the Alain Delon film Traitement de choc, aka Shock Treatment. Delon plays Dr Devilers, the proprietor of a clinic on the Brittany coast. The clinic focuses on rejuvenating tired wealthy clients with spa treatments, special diets and infusions. The middle-aged patients at the clinic are true believers, becoming more child-like as the rejuvenation takes hold. The dark side of the clinic is that the serum comes at a price. A new patient finds out what actually happens, and what plays out is a French New Wave allegory that touches on similar ethical health concerns to the film adaptation of John le Carré’s The Constant Gardener.
My internet went down and I managed to work my way through The Street Fighter trilogy, starring Sonny Chiba and made famous by the Tony Scott-directed True Romance. The Street Fighter series was a key influence on Quentin Tarantino, who wrote the films in as a plot device in True Romance and had Sonny Chiba appear in his Kill Bill films. All of the films feel a bit hackneyed in a post-John Wick world, but the first instalment is hard-bitten. Given the torrent of films coming out of Hong Kong at the time, The Street Fighter films stood apart for the unflinching violence displayed on screen. The first film became the first in the US to receive an X rating for violence alone.
Along with the Shaw Brothers boxsets and Bruce Lee’s filmography, The Street Fighter trilogy is essential viewing for both Asian cinema buffs and martial arts movie fanatics.
How do the sequels stack up? The second and third films in the series have a bit more playfulness and some off-kilter aspects, akin to spaghetti westerns of the same era. Sonny Chiba’s 1974 trilogy typifies the martial arts craze that swept western cinema from the early 1970s onwards. In the UK, The Street Fighter was called Kung Fu Street Fighter. The likely reasons were two-fold: a similarly named Charles Bronson film and the glut of Hong Kong martial arts films being shown.
The Source is a French police procedural series that shows the cat-and-mouse game between a French-Moroccan crime family and the police tasked with catching them. I am a few episodes in and really enjoying the show so far.
Useful tools.
Email charter
My friend Marshall mentioned this email charter on LinkedIn. Share it with anyone you work with to improve the quality and reduce the volume of team communications. Much of it is about level-setting expectations. More about the email charter here.
Martin
Martin is an app that integrates Claude 3, Deepgram’s Nova speech-to-text service and GPT-4 Turbo to interact with Google personal productivity software, including Google Calendar and Gmail. Conceptually it’s a better Siri-type digital assistant. I have heard good things about it, but I don’t rely heavily on Google services myself, so your mileage may vary. More details here.
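Martin’s internals aren’t public, but if you are curious about the general shape of this kind of assistant, here is a minimal Python sketch of the pattern under my own assumptions: it reads today’s events via the public Google Calendar API and asks an LLM (via the Anthropic SDK) to turn them into a short briefing. The token.json path and model name are placeholders, and none of this reflects how Martin actually works.

```python
# Hypothetical sketch of an LLM-over-calendar assistant; not Martin's implementation.
# Assumes a Google OAuth token saved as token.json (placeholder path) and an
# ANTHROPIC_API_KEY set in the environment.
import datetime

import anthropic
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]


def todays_events() -> list[str]:
    """Fetch the next 24 hours of events from the primary Google Calendar."""
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    service = build("calendar", "v3", credentials=creds)
    now = datetime.datetime.now(datetime.timezone.utc)
    end = now + datetime.timedelta(days=1)
    result = service.events().list(
        calendarId="primary",
        timeMin=now.isoformat(),
        timeMax=end.isoformat(),
        singleEvents=True,
        orderBy="startTime",
    ).execute()
    return [
        f"{e['start'].get('dateTime', e['start'].get('date'))} – {e.get('summary', '(no title)')}"
        for e in result.get("items", [])
    ]


def briefing(events: list[str]) -> str:
    """Ask an LLM to turn the raw event list into a short spoken-style briefing."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # illustrative model choice
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": "Summarise my day in two sentences:\n" + "\n".join(events),
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    print(briefing(todays_events()))
```

The speech-to-text and Gmail pieces would layer on top of the same fetch-then-prompt loop; the value of an app like Martin is presumably in the plumbing, permissions and polish around it.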
Magnet
Magnet is a handy piece of software that keeps your desktop organised. It was recommended to me by a friend who codes software for a living. It is particularly handy for keeping ‘presence’-based channels (like Slack, Teams and Mail.app) together on one screen as a ‘war room’ type view, and keeping creation work on another screen. It even works if you use your screen in a vertical orientation.
PamPam
A service that allows you to create and share maps. You can import maps in various formats, or describe a map in text for PamPam to render. Strangely useful.
Scribd downloader
I am not sure how Scribd managed to digest so many resources and hide them behind a paywall, but this might be the antidote if you have something specific that you need.
The sales pitch.
I have had a great time working on a project with GREY & Tank Worldwide. I am now taking bookings for strategic engagements for the time I have available in early to mid-June, or discussions on permanent roles. Contact me here.