Hydrogen power and hydrogen fuel cells have been around for decades. The hydrogen fuel cell was invented in the 19th century. The modern fuel cell was refined before the second world war and has been used in NASA's space programme since Project Apollo. The space programme's use of hydrogen power inspired General Motors to create a hydrogen fuel cell powered van in 1966. By the late 1980s, BMW had developed a hydrogen-powered engine, which it trialled in its 7-series vehicles a decade later.
By the mid-2010s there were four hydrogen-powered passenger cars using fuel cells: the Honda Clarity, Toyota Mirai, Hyundai ix35 FCEV and Hyundai Nexo. BMW is collaborating with Toyota to launch another four models next year.
In commercial vehicles and heavy plant, Hyundai, Cummins and JCB have hydrogen power offerings. JCB and Cummins have focused on internal combustion engines, while Hyundai went with hydrogen fuel cells. The aviation industry has been looking to hydrogen power via jet turbines.
Hydrogen power offers greater energy density and lower weight than batteries. Unlike batteries or power lines, hydrogen can be transported over long distances by tanker. So somewhere like Ireland, with its wind and tidal power potential, could become an energy exporter.
The key problem for hydrogen power has been a lack of investment in infrastructure and an over-reliance on batteries. Batteries bring their own set of problems and a global strategic dependency on China.
Toyota is now warning that if there isn't imminent international investment, China will also dominate the supply chain, exports and energy generation in the hydrogen economy. It feels like we are reaching a historic point of no return.
Skincare you can wear: China’s sunwear boom | Jing Daily – A jacket with a wide-brim hood and built-in face shield. Leggings infused with hyaluronic acid to hydrate while shielding skin from the sun. Face masks with chin-to-temple coverage. Ice-cooling gloves designed to drop skin temperature. In China, UV protection apparel isn’t just functional — it’s fashionable, dermatological, and high-tech. Once a niche category for hikers or extreme sports enthusiasts, China’s sunwear market has exploded into a $13 billion category blending climate adaptation, anti-aging culture, and the outdoor lifestyle wave. While other apparel segments slow, the sunwear sector is projected to reach nearly 95.8 billion RMB ($13.5 billion) by 2026 expanding at a CAGR of 9%, according to iResearch.
MACAU DAILY TIMES 澳門每日時報 – Beware of Li Ka-shing's supersized value trap – But as the initial excitement starts to fade, investors are growing nervous, wary of a billionaire family that has a poor track record on shareholders' returns. The Li clan takes pride in the myriad of businesses and markets it operates in. But what kind of value-add can a diversified conglomerate offer when globalization is out of favor and geopolitical risks are on the rise? CK's de-rating has accelerated since Trump's first term, with the stock trading at just 35% of its book value even after the recent share bump. The complex business dealings have made enterprise valuation an impossible task. In a sign of deep capital market skepticism, CK seems to have trouble monetizing its assets. Its health and beauty subsidiary A.S. Watson is still privately-held, a decade after postponing an ambitious $6 billion dual listing in Hong Kong and London. Softer consumer sentiment in China, once a growth market, has become a drag. Last summer, CK Infrastructure did a secondary listing in the UK, hoping to widen its investor base. – Rare direct criticism of CK Hutchison's conglomerate discount.
Ghost in the Shell’s Creator Wants to Revisit the Anime, But There’s One Problem – Production I.G’s CEO Mitsuhisa Ishikawa—who produced both Ghost in the Shell films—spoke at the event. Ishikawa revealed a key obstacle preventing a third film: finances. He explained that Innocence had an enormous budget, estimated at around 2 billion yen (approximately $13 million), with profits reaching a similar figure. However, the film was planned with a ten-year financial recovery period, and even after 20 years, it has yet to break even.
‘Gucci’s 25% sales collapse should shock no one’ | Jing Daily – “Gucci is so boring now.” “They’ve lost all their confidence.” “It feels desperate — just influencers and celebrities.” Comparing Gucci’s bold, visionary eras under Tom Ford and Alessandro Michele and today’s safe, uninspired iteration reveals a stark contrast. That classroom moment reflected a broader truth: Gucci’s Q1 2025 is not a temporary dip. It’s the result of a deeper structural identity crisis — arguably one of the worst brand resets in recent luxury history.
My thinking on the concept of intelligence per watt started as bullets in my notebook. It was more of a timeline than anything else at first and provided a framework of sorts from which I could explore the concept of efficiency in terms of intelligence per watt.
TL;DR (too long, didn’t read)
Our path to the current state of ‘artificial intelligence’ (AI) has been shaped by the interplay and developments of telecommunications, wireless communications, materials science, manufacturing processes, mathematics, information theory and software engineering.
Progress in one area spurred advances in others, creating a feedback loop that propelled innovation.
Over time, new use cases have become more personal and portable – necessitating a focus on intelligence per watt as a key parameter. Energy consumption directly affects industrial design and end-user benefits. Small low-power integrated circuits (ICs) facilitated fuzzy logic in portable consumer electronics like cameras and portable CD players. Low power ICs and power management techniques also helped feature phones evolve into smartphones.
A second-order effect of optimising for intelligence per watt is reducing power consumption across multiple applications. This spurs yet more new use cases in a virtuous innovation circle. This continues until the laws of physics impose limits.
Energy storage density and consumption are fundamental constraints, driving the need for a focus on intelligence per watt.
As intelligence per watt improves, there will be a point at which the question isn't just what AI can do, but what should be done with AI? And where should it be processed? Trust becomes less about emotional reassurance and more about operational discipline. Just because it can handle a task doesn't mean it should – particularly in cases where data sensitivity, latency, or transparency to humans is non-negotiable. A highly capable, off-device AI might be fine at drafting everyday emails, but a questionable choice for handling your online banking.
Good ‘operational security’ outweighs trust. The design of AI systems must therefore account not just for energy efficiency, but user utility and deployment context. The cost of misplaced trust is asymmetric and potentially irreversible.
Ironically, the force multiplier in intelligence per watt is people and their use of 'artificial intelligence' as a tool or 'co-pilot'. It promises to be an extension of the earlier memetic concept of a 'bicycle for the mind' that helped inspire early developments in the personal computer industry. The upside of a focus on intelligence per watt is more personal, trusted services designed for everyday use.
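To make the metric concrete before the history starts, here is a minimal sketch of how intelligence per watt could be compared across deployment targets. The figures are entirely hypothetical, and tokens generated per second is only a crude proxy for 'intelligence':

```python
# Hypothetical figures for illustration only - not measured benchmarks.
# "Intelligence" is crudely proxied here by tokens generated per second.
devices = {
    "cloud GPU server": {"tokens_per_second": 2500.0, "watts": 1200.0},
    "laptop NPU":       {"tokens_per_second": 30.0,   "watts": 15.0},
    "smartphone SoC":   {"tokens_per_second": 12.0,   "watts": 4.0},
}

for name, d in devices.items():
    ipw = d["tokens_per_second"] / d["watts"]  # tokens per second per watt
    print(f"{name:18s} {ipw:6.2f} tokens/s/W")
```

The point of the sketch is not the numbers themselves, but that the ratio – not raw capability – is what matters once the workload has to live on a battery.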
While not a computer, an early example of integration was a vacuum valve that combined several radio parts in one glass envelope. It contained three triodes (early electronic amplifiers), two capacitors and four resistors. Inside the valve, the extra resistor and capacitor components were sealed in their own glass tubes. Normally each triode would have been housed in its own vacuum valve. At the time, German radio tax laws were based on the number of valve sockets in a device, making this integration financially advantageous.
Post-war scientific boom
Between 1949 and 1957, engineers and scientists from the UK, Germany, Japan and the US proposed what we'd think of as the integrated circuit (IC). These ideas became possible once breakthroughs in manufacturing happened. Shockley Semiconductor built on work by Bell Labs and the Sprague Electric Company to connect different types of components on one piece of silicon to create the IC.
Credit is often given to Jack Kilby of Texas Instruments as the inventor of the integrated circuit. But that depends on how you define an IC, with what is now called a monolithic IC being considered a 'true' one. Kilby's version wasn't a true monolithic IC. As with most inventions, it was the child of several interconnected ideas that coalesced at a given point in time. In the case of ICs, it happened in the midst of materials and technology developments including data storage and computational solutions, from the idea of virtual memory through to the first solar cells.
Kilby's ICs went into an Air Force computer[ii] and the onboard guidance system for the Minuteman missile. He went on to help invent the first handheld calculator and thermal printer, both of which took advantage of progress in IC design to change our modern way of life[iii].
TTL (transistor–transistor logic) circuitry was invented at TRW in 1961; the company licensed it out for use in data processing and communications, propelling the development of modern computing. TTL circuits powered mainframes. Mainframes were housed in specialised temperature- and humidity-controlled rooms and owned by large corporates and governments. Modern banking and payments systems still rely on the mainframe as a concept.
AI’s early steps
What we now think of as AI had been considered theoretically for as long as computers could be programmed. As semiconductors developed, a parallel track opened up to move AI beyond being a theoretical possibility. A pivotal moment was a workshop held in 1956 at Dartmouth College. The workshop focused on a hypothesis: 'every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it'. Later that year, a meeting at MIT (Massachusetts Institute of Technology) brought together psychologists and linguists to discuss the possibility of simulating cognitive processes using a computer. This is the origin of what we'd now call cognitive science.
Out of the cognitive approach came some early successes in the move towards artificial intelligence[iv]. A number of approaches were taken based on what is now called symbolic or classical AI:
Reasoning as search – essentially a step-wise, trial-and-error approach to problem solving that was compared to wandering through a maze and back-tracking if a dead end was found (see the sketch after this list).
Natural language – where related phrases existed within a structured network.
Micro-worlds – solving for artificially simple situations, similar to economic models relying on the concept of the rational consumer.
Single layer neural networks – to do rudimentary image recognition.
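The 'reasoning as search' idea can be shown with a toy sketch: a depth-first search through a small maze that backtracks whenever it hits a dead end. The maze and code below are illustrative only, not a reconstruction of any specific early AI programme:

```python
# Toy illustration of 'reasoning as search': depth-first search with backtracking.
# The maze and goal are made up for the example. S = start, G = goal, # = wall.
MAZE = [
    "S.#.",
    ".##.",
    "...G",
]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")

    def dfs(pos, visited, path):
        r, c = pos
        if maze[r][c] == "G":                      # goal reached
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and maze[nr][nc] != "#" and (nr, nc) not in visited:
                visited.add((nr, nc))
                result = dfs((nr, nc), visited, path + [(nr, nc)])
                if result:                         # success further down this branch
                    return result
                # otherwise: dead end, back-track and try the next direction
        return None

    return dfs(start, {start}, [start])

print(solve(MAZE))  # prints the list of grid positions from S to G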
By the time the early 1970s came around, AI researchers had run into a number of problems, some of which still plague the field to this day:
Symbolic AI wasn't fit for purpose when solving many real-world tasks, like crossing a crowded room.
Trying to capture imprecise concepts with precise language.
Commonsense knowledge was vast and difficult to encode.
Intractability – many problems require an exponential amount of computing time.
Limited computing power available – there was insufficient intelligence per watt available for all but the simplest problems.
By 1966, US and UK funding bodies were frustrated with the lack of progress on the research undertaken. The axe fell first on a project to use computers for language translation. Around the time of the OPEC oil crisis, funding to major centres researching AI was reduced by both the US and UK governments. Despite the reduction of funding to the major centres, work continued elsewhere.
Mini-computers and pocket calculators
ICs allowed for mini-computers due to the increase in computing power per watt. As important as the relative computing power, ICs made mini-computers more robust, and easier to manufacture and maintain. DEC (Digital Equipment Corporation) launched the first minicomputer, the PDP-8, in 1964. The lower cost of mini-computers allowed them to be used to run manufacturing processes, control telephone network switching and control laboratory equipment. Mini-computers expanded computer access in academia, facilitating more work in artificial life and what we'd think of as early artificial intelligence. This shift laid the groundwork for intelligence per watt as a guiding principle.
A second development helped drive mass production of ICs – the pocket calculator, originally invented at Texas Instruments. It demonstrated how ICs could dramatically improve efficiency in compact, low-power devices.
LISP machines and PCs
AI researchers required more computational power than mini-computers could provide, leading to the development of LISP machines – specialised workstations designed for AI applications. Despite improvements in intelligence per watt enabled by Moore's Law, their specialised nature meant that they were expensive. AI researchers continued with these machines until personal computers (PCs) progressed to the point where they could run LISP faster than LISP machines themselves. The continuous improvements in data storage, memory and processing that enabled LISP machines continued on and surpassed them as the cost of computing dropped due to mass production.
The rise and decline of LISP machines was driven not only by Moore's Law but also by Makimoto's Wave. Gordon Moore observed that the number of transistors on a given area of silicon doubled every two years or so, while Tsugio Makimoto observed 10-year pivots between standardised and customised semiconductor processors[v]. The rise of personal computing drove a pivot towards standardised architectures.
PCs and workstations extended computing beyond computer rooms and laboratories to offices and production lines. During the late 1970s and 1980s, standardised processor designs like the Zilog Z80, MOS Technology 6502 and the Motorola 68000 series drove home and business computing alongside Intel's x86 processors.
Personal computing started in businesses when office workers brought in a computer to use early programmes like the VisiCalc spreadsheet application. This allowed them to take a leap forward in not only tabulating data, but also seeing how changes to the business might affect financial performance.
Businesses then started to invest more in PCs for a wide range of uses. PCs could emulate the computer terminal of a mainframe or minicomputer, but also run applications of their own.
Typewriters were being replaced by word processors that allowed the operator to edit a document in real time without resorting to correction fluid.
A Bicycle for the Mind
Steve Jobs at Apple was as famous for being a storyteller as he was for being a technologist in the broadest sense. Internally with the Mac team he shared stories and memetic concepts to get his ideas across, in everything from briefing product teams to press interviews. A 1990 filmed interview with Steve Jobs articulates the context of the 'bicycle for the mind' saying particularly well.
In reality, Jobs had been telling the story for a long time through the development of the Apple II and right from the beginning of the Mac. There is a version of the talk that was recorded some time in 1980 when the personal computer was still a very new idea – the video was provided to the Computer History Museum by Regis McKenna[vi].
The ‘bicycle for the mind’ concept was repeated in early Apple advertisements for the time[vii] and even informed the Macintosh project codename[viii].
Jobs articulated a few key concepts.
Buying a computer creates, rather than reduces, problems. You needed software to start solving problems and making computing accessible. Back in 1980, if you bought a computer, you programmed it – which is why early personal computer owners in the UK went on to birth a thriving games software industry, including the likes of Codemasters[ix]. Done well, there should be no seam in the experience between hardware and software.
The idea of a personal, individual computing device (rather than a shared resource). My own computing builds on years of adapting to and using my Macs, from my first sit-up-and-beg Macintosh to the MacBook Pro that I am writing this post on. This is even more true for most people and their use of the smartphone. I am of an age where my iPhone is still an appendage and emissary of my Mac; my Mac is still my primary creative tool. A personal computer is more powerful than a shared computer in terms of the real difference made.
At the time Jobs originally gave the speech, PCs were underpowered for anything but data processing (through spreadsheets and basic word processor applications). But that didn't stop his idea of something greater.
Jobs' idea of the computer as an adjunct to the human intellect and imagination still holds true, but it doesn't neatly fit into the intelligence per watt paradigm. It is harder to measure the effort spent developing prompts, or that expended evaluating, refining and filtering generative AI results. Of course, Steve Jobs' Apple owed a lot to the vision shown in Doug Engelbart's 'Mother of All Demos'[x].
Networks
Work took a leap forward with office networked computers, pioneered by Apple's Macintosh Office[xi]. This was soon overtaken by competitors. It facilitated workflow within an office, and its impact can still be seen in offices today, even as components from print management to file storage have moved to cloud-based services.
At the same time, what we might think of as mobile was starting to gain momentum. Bell Labs and Motorola came up with much of the technology to create cellular communications. Martin Cooper of Motorola made the first phone call on a cellular phone to a rival researcher at Bell Labs. But Motorola didn’t sell the phone commercially until 1983, as a US-only product called the DynaTAC 8000x[xii]. This was four years after Japanese telecoms company NTT launched their first cellular network for car phones. Commercial cellular networks were running in Scandinavia by 1981[xiii].
In the same way that the networked office radically changed white-collar work, the cellular network did a similar thing for the self-employed, from plumbers, electricians and photocopier repair men to travelling salespeople. If they were technologically advanced, they may have had an answering machine, but it would likely have had to be checked manually by playing back the tape.
Often messages were taken by a receptionist in their office, if they had one – or, more likely, by someone back home. The cell phone freed homemakers in a lot of self-employed households to go out into the workplace and helped raise household incomes.
Fuzzy logic
The first mainstream AI applications emerged from fuzzy logic, introduced by Lotfi A. Zadeh in a 1965 mathematical paper. Initial uses were for industrial controls in cement kilns and steel production[xiv]. The first prominent product to rely on fuzzy logic was the Zojirushi Micom Electric Rice Cooker (1983), which adjusted cooking time dynamically to ensure perfect rice.
Fuzzy logic reacted to changing conditions in a similar way to people. Through the 1980s and well into the 1990s, the power of fuzzy logic was under-appreciated outside of Japanese product development teams. A spokesperson for the American Electronics Association's Tokyo office told the Washington Post[xv]:
“Some of the fuzzy concepts may be valid in the U.S.,”
“The idea of better energy efficiency, or more precise heating and cooling, can be successful in the American market,”
“But I don’t think most Americans want a vacuum cleaner that talks to you and says, ‘Hey, I sense that my dust bag will be full before we finish this room.’ “
By the end of the 1990s, fuzzy logic was embedded in various consumer devices:
Air-conditioner units – understood the room, the temperature difference inside and out, and the humidity, then switched on and off to balance cooling and energy efficiency.
CD players – enhanced error correction on playback, dealing with imperfections on the disc surface.
Dishwashers – understood how many dishes were loaded and how dirty they were, then adjusted the wash programme.
Toasters – recognised different bread types and the preferred degree of toasting, and performed accordingly.
TV sets – adjusted the screen brightness to the ambient light of the room and the sound volume to how far away the viewer was sitting from the set.
Vacuum cleaners – adjusted suction power as they moved from carpeted to hard floors.
Video cameras – compensated for the movement of the camera to reduce blurred images.
Fuzzy logic was sold on its benefits while the technology was concealed from western consumers. Fuzzy logic embedded intelligence in the devices. Because it worked on relatively simple, dedicated purposes, it could rely on small, lower-power specialist chips[xvi] offering a reasonable amount of intelligence per watt, some three decades before generative AI. By the late 1990s, kitchen appliances like rice cookers and microwave ovens had reached 'peak intelligence' for what they needed to do, based on the power of fuzzy logic[xvii].
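To give a flavour of how these appliances worked, here is a minimal fuzzy-logic sketch in the spirit of a fan or cooling controller. The membership functions, rules and numbers are invented for illustration and are not any particular product's implementation:

```python
# Minimal fuzzy-logic sketch: set fan speed from how far the room is above
# its target temperature. Membership functions and rules are invented for
# illustration - real appliance firmware is tuned per product.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(delta_c):
    # Fuzzify: how much is the overshoot 'small', 'medium' or 'large'?
    small  = tri(delta_c, -1.0, 0.0, 2.0)
    medium = tri(delta_c,  1.0, 3.0, 5.0)
    large  = tri(delta_c,  4.0, 8.0, 12.0)
    # Each rule maps a fuzzy input to a crisp fan speed (% of maximum).
    rules = [(small, 20.0), (medium, 55.0), (large, 100.0)]
    total = sum(weight for weight, _ in rules)
    if total == 0:
        return 0.0
    # Defuzzify with a weighted average of the rule outputs.
    return sum(weight * speed for weight, speed in rules) / total

for overshoot in (0.5, 2.5, 6.0):
    print(f"{overshoot:>4} degC over target -> fan at {fan_speed(overshoot):5.1f}%")
```

The appeal for appliance makers was that this sort of rule table fits comfortably on a small, cheap, low-power microcontroller.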
Fuzzy logic also helped in business automation. It helped to automatically read hand-written numbers on cheques in banking systems and the postcodes on letters and parcels for the Royal Mail.
Decision support systems & AI in business
Decision support systems, or business information systems, were being used in large corporates by the early 1990s. The techniques used varied, but some used rules-based systems. These were used in at least some capacity to reduce manual office work tasks. For instance, credit card approvals were processed based on rules that included various factors, such as credit scores. Only some credit card providers had an analyst manually review the decision made by the system. However, setting up each use case took a lot of effort involving highly-paid consultants and expensive software tools. Even then, vendors of business information systems such as Autonomy struggled with a high rate of projects that failed to deliver anything like the benefits promised.
Three decades on, IBM had a similar problem with its Watson offerings, with a particularly high-profile failure in mission-critical healthcare applications[xviii]. Secondly, a lot of tasks were ad hoc in nature, or required working across disparate, separate systems.
The rise of the web
The web changed everything. The underlying technology allowed for dynamic data.
Software agents
Examples of intelligence within the network included early software agents. A good example of this was PapriCom. PapriCom had a client on the user's computer that monitored price changes for products the customer was interested in buying. The app then notified the user when the monitored price reached a level determined by the customer. The company became known as DealTime in the US and UK, and Evenbetter.com in Germany[xix].
The PapriCom client app was part of a wider set of technologies known as 'push technology', which brought content that the netizen would want directly to their computer – in a similar way to mobile app notifications now.
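The underlying pattern is simple to sketch today. The snippet below is a hypothetical illustration of a PapriCom-style price watcher – poll a price source and notify the user when their target price is reached – not a description of how PapriCom itself was engineered (fetch_price is a made-up placeholder):

```python
# Illustrative price-watch 'agent' in the PapriCom / DealTime mould.
# fetch_price() is a placeholder - a real client would call a retailer API
# or scrape a product page.
import random
import time

def fetch_price(product_id: str) -> float:
    """Placeholder price source: returns a value wandering around 100."""
    return 100.0 + random.uniform(-15.0, 15.0)

def watch(product_id: str, target_price: float, interval_s: float = 1.0, max_checks: int = 20):
    for _ in range(max_checks):
        price = fetch_price(product_id)
        if price <= target_price:
            print(f"Notify user: {product_id} is now {price:.2f} (target {target_price:.2f})")
            return price
        time.sleep(interval_s)
    print(f"{product_id}: target {target_price:.2f} not reached after {max_checks} checks")
    return None

watch("example-widget", target_price=92.0, interval_s=0.1)
```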
Web search
The wealth of information quickly outstripped netizens' ability to explore the content. Search engines became essential for navigating the new online world. Progress was made in clustering vast numbers of cheap Linux-powered computers together and sharing the workload among them to power web search. As search engines started trying to make sense of an exponentially growing web, machine learning became part of the developer toolbox.
Researchers at Carnegie Mellon looked at using games to help teach machine learning algorithms, based on human responses that provided rich metadata about a given item[xx]. This became known as the ESP game. In the early 2000s, Yahoo! turned to web 2.0 start-ups that used user-generated labels called tags[xxi] to help organise their data. Yahoo! bought Flickr[xxii] and del.icio.us[xxiii].
All the major search engines looked at how deep learning could help improve search results relevance.
Given that the business model for web search was advertising-based, reducing the cost per search while maintaining search quality was key to Google's success. Early on, Google focused on energy consumption, with its (search) data centres becoming carbon neutral in 2007[xxiv]. This was achieved through a whole-system effort: carefully managing power in the silicon, storage, networking equipment and air conditioning to maximise intelligence per watt. All of it was built using optimised versions of open-source software and cheap, general-purpose PC components ganged together in racks and operating in clusters.
General-purpose ICs for personal computers and consumer electronics allowed easy access to relatively low-power computing. Much of this was down to process improvements being made at the time. You needed the volume of chips to drive innovation in mass production at a chip foundry. While application-specific chips had their uses, commodity mass-volume products for everything from embedded applications to early mobile and portable devices and computers drove progress in improving intelligence per watt.
Makimoto’s tsunami back to specialised ICs
When I talked about the decline of LISP machines, I mentioned the move towards standardised IC design predicted by Tsugio Makimoto. This led to a surge in IC production, alongside other components including flash and RAM memory. From the mid-1990s to about 2010, Makimoto's wave was stuck in its 'standardisation' phase. It just worked. But several factors drove the swing back to specialised ICs.
Lithography processes got harder: standardisation got its performance and intelligence per watt bump because there had been steady step changes in foundry lithography processes that allowed components to be made at ever-smaller dimensions. The dimensions are a function of the wavelength of light used. The semiconductor industry hit an impasse when it needed to move to EUV (extreme ultraviolet) light sources. From the early 1990s on, US government research projects championed development of the key technologies that allow EUV photolithography[xxv]. During this time, Japanese equipment vendors Nikon and Canon gave up on EUV. Sole US vendor SVG (Silicon Valley Group) was acquired by ASML, giving the Dutch company a global monopoly on cutting-edge lithography equipment[xxvi]. ASML became the US Department of Energy's research partner on EUV photolithography development[xxvii]. ASML spent over two decades trying to get EUV to work. Once they had it in client foundries, further time was needed to get commercial levels of production up and running. All of which meant that production process improvements to IC intelligence per watt slowed down, and IC manufacturers had to start thinking about systems in a more holistic manner. As foundry development became harder, there was a rise in fabless chip businesses. Alongside the fabless firms, there were fewer foundries: GlobalFoundries, Samsung and TSMC (Taiwan Semiconductor Manufacturing Company Limited). TSMC is the world's largest 'pure-play' foundry, making ICs for companies including AMD, Apple, Nvidia and Qualcomm.
Progress in EDA (electronic design automation): production process improvements in IC manufacture allowed for an explosion in device complexity, as the number of components on a given size of IC doubled every 18 months or so. In the mid-to-late 1970s this led technologists to think about the idea of very large-scale integration (VLSI) in IC designs[xxviii]. Through the 1980s, commercial EDA software businesses were formed. The EDA market grew because it facilitated the continual scaling of semiconductor technology[xxix]. It also facilitated new business models. Businesses like ARM and LSI Logic allowed their customers to build their own processors based on 'blocks' of proprietary designs like ARM's cores. That allowed companies like Apple to focus on optimisation in their custom silicon and integration with software to help improve intelligence per watt[xxx].
Increased focus on portable devices: a combination of digital networks, wireless connectivity, the web as a communications platform with universal standards, flat-screen displays and improving battery technology led the way in moving towards more portable technologies. From personal digital assistants, MP3 players and smartphones to laptop and tablet computers, disconnected mobile computing was the clear direction of travel. Cell phones offered days of battery life; the Palm Pilot PDA had a battery life allowing for a couple of days of continuous use[xxxi] – in reality, it would last a month or so of normal work. Laptops at the time could do half a day's work when disconnected from a power supply, and manufacturers like Dell and HP provided spare batteries for travellers. Given changing behaviours, Apple wanted laptops that were easy to carry and could last most of a day without a charge. This was partly driven by a move to a cleaner product design that did away with swapping batteries. In 2005, Apple moved from PowerPC to Intel processors. During the announcement at the company's worldwide developer conference (WWDC), Steve Jobs talked about the focus on computing power per watt moving forwards[xxxii].
Apple's first in-house designed IC, the A4 processor, was launched in 2010 and marked the pivot of Makimoto's wave back to specialised processor design[xxxiii]. It was also a point of inflection in the growth of smartphones and specialised computing ICs[xxxiv].
New devices also meant new use cases that melded data on the web, on device, and in the real world. I started to see this in action while working at Yahoo!, with location data integrated onto photos and social data like Yahoo! Research's ZoneTag and Flickr. I had been the Yahoo! Europe marketing contact for adding Flickr support to Nokia N-series 'multimedia computers' (what we'd now call smartphones), starting with the Nokia N73[xxxv]. A year later the Nokia N95 was the first smartphone released with a built-in GPS receiver. William Gibson's speculative fiction novel Spook Country came out in 2007 and integrated locative art as a concept in the story[xxxvi].
Real-world QR codes helped connect online services with the physical world, enabling things like mobile payments or reading content online such as a restaurant menu or a property listing[xxxvii].
I labelled this web-world integration a 'web-of-no-web'[xxxviii] when I presented on it back in 2008 as part of an interactive media module I taught to an executive MBA class at Universitat Ramon Llull in Barcelona[xxxix]. In China, wireless payment ideas would come to be labelled O2O (offline to online), and Kevin Kelly articulated a future vision for this fusion which he called Mirrorworld[xl].
Deep learning boom
Even as there was a post-LISP machine dip in funding of AI research, work on deep (multi-layered) neural networks continued through the 1980s. Other areas were explored in academia during the 1990s and early 2000s due to the large amount of computing power needed. Internet companies like Google gained experience in large clustered computing and had a real need to explore deep learning. Use cases included image recognition to improve search, and dynamically altered journeys to improve mapping and local search offerings. Deep learning is probabilistic in nature, which dovetailed nicely with prior work Microsoft Research had been doing since the 1980s on Bayesian approaches to problem-solving[xli].
A key factor in deep learning's adoption was having access to powerful enough GPUs to handle the neural network compute[xlii]. This has allowed various vendors to build large language models (LLMs). The perceived strategic importance of artificial intelligence has meant that intelligence per watt has become a tertiary consideration at best. Microsoft has shown interest in growing data centres with less thought given to the electrical infrastructure required[xliii].
Google's conference paper on attention mechanisms[xliv] highlighted the development of the transformer model. As an architecture it got around problems in previous approaches, but it is computationally intensive. Even before the paper was published, the Google transformer model had created fictional Wikipedia entries[xlv]. A year later, OpenAI built on Google's work with the generative pre-trained transformer model, better known as GPT[xlvi].
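For readers curious about what the attention mechanism at the heart of the transformer actually computes, here is a minimal single-head, unmasked scaled dot-product attention sketch with toy data – a simplification of what the paper describes, not production code:

```python
# Minimal scaled dot-product attention, the core operation of the transformer.
# Toy dimensions and random data for illustration only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)         # similarity of each query with each key
    weights = softmax(scores, axis=-1)       # normalise into attention weights
    return weights @ V                       # weighted sum of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)  # (4, 8)
```

Every token attends to every other token, which is part of why the architecture is so computationally (and therefore energy) intensive as sequences grow.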
Since 2018 we've seen successive GPT-based models from Amazon, Anthropic, Google, Meta, Alibaba, Tencent, Manus and DeepSeek. All of these models were trained on vast amounts of information sources. One of the key limitations on building better models was access to training material, which is why Meta used pirated copies of e-books obtained using BitTorrent[xlvii].
These models were so computationally intensive that the large-scale cloud service providers (CSPs) offering these generative AI services were looking at nuclear power access for their data centres[xlviii].
The current direction of development in generative AI services is raw computing power, rather than having a more energy efficient focus of intelligence per watt.
Technology consultancy and analyst firm Omdia estimated how many GPUs were bought by hyperscalers in 2024[xlix]:
| Company | Number of Nvidia GPUs bought | Number of AMD GPUs bought | Number of self-designed custom processing chips bought |
| --- | --- | --- | --- |
| Amazon | 196,000 | – | 1,300,000 |
| Alphabet (Google) | 169,000 | – | 1,500,000 |
| ByteDance | 230,000 | – | – |
| Meta | 224,000 | 173,000 | 1,500,000 |
| Microsoft | 485,000 | 96,000 | 200,000 |
| Tencent | 230,000 | – | – |
These numbers provide an indication of the massive deployment of GPT-specific computing power. Despite the massive amount of computing power available, services still weren't able to cope[l], mirroring some of the service problems experienced by early web users[li] and the Twitter 'fail whale'[lii] phenomenon of the mid-2000s. The race to bigger, more powerful models is likely to continue for the foreseeable future[liii].
There is a second class of players typified by Chinese companies DeepSeek[liv] and Manus[lv] that look to optimise the use of older GPT models to squeeze the most utility out of them in a more efficient manner. Both of these services still rely on large cloud computing facilities to answer queries and perform tasks.
Agentic AI
Thinking on software agents goes back to work done in computer science in the mid-1970s[lvi]. Apple articulated a view[lvii] of a future system dubbed the 'Knowledge Navigator'[lviii] in 1987, which hinted at autonomous software agents. What we'd now think of as agentic AI was discussed as a concept at least as far back as 1995[lix]; this was mirrored in research labs around the world and captured in a 1997 survey of research on intelligent software agents[lx]. These agents went beyond the vision that PapriCom implemented.
A classic example of this was Wildfire Communications, Inc., which created a voice-enabled virtual personal assistant in 1994[lxi]. Wildfire as a service was eventually shut down in 2005 due to an apparent decline in subscribers using the service[lxii]. In terms of capability, Wildfire could do tasks that are currently beyond Apple's Siri. Wildfire did have limitations: it was an off-device service that used a phone call rather than an internet connection, which limited its use to Orange mobile subscribers on early digital cellular networks.
Almost a quarter century later we're now seeing devices that are looking to go beyond Wildfire with varying degrees of success. For instance, the Rabbit R1 could order an Uber ride or groceries from DoorDash[lxiii]. Google Duplex tries to call restaurants on your behalf to make reservations[lxiv] and Amazon claims that it can shop across other websites on your behalf[lxv]. At the more extreme end are Boeing's MQ-28[lxvi] and the Loyal Wingman programme[lxvii]. The MQ-28 is an autonomous drone that would accompany US combat aircraft into battle, once it's been directed to follow a course of action by its human colleague in another plane.
The MQ-28 will likely operate in an electronic environment that could be jammed. Even if it wasn't jammed, the length of time taken to beam AI instructions to the aircraft would negatively impact aircraft performance. So, it is likely to have a large amount of on-board computing power. As with any aircraft, the size and power consumption of computing resources is a trade-off against the amount of fuel or payload it can carry. So, efficiency in terms of intelligence per watt becomes important to develop the smallest, lightest autonomous pilot.
As well as a more hostile world, we also exist in a more vulnerable time in terms of cyber security and privacy. It makes sense to have critical, more private AI tasks run on a local machine. At the moment models like DeepSeek can run natively on a top-of-the-range Mac workstation with enough memory[lxviii].
This is still a long way from the vision of completely local execution of 'agentic AI' on a mobile device, because intelligence per watt hasn't scaled down to a level that is useful, given the vast range of possible uses that would be asked of an agentic AI model.
Maximising intelligence per watt
There are three broad approaches to maximise the intelligence per watt of an AI model.
Take advantage of the technium. The technium is an idea popularised by author Kevin Kelly[lxix]. Kelly argues that technology moves forward inexorably, each development building on the last. Current LLMs such as ChatGPT and Google Gemini take advantage of the ongoing technium in hardware development, including high-speed computer memory and high-performance graphics processing units (GPUs). Their vendors have been building large data centres to run the models in, building on past developments in distributed computing going all the way back to 1962[lxx].
Optimise models to squeeze the most performance out of them. The approach taken by some of the Chinese models has been to optimise the technology just behind the leading-edge work done by the likes of Google, OpenAI and Anthropic. The optimisation may use both LLMs[lxxi] and quantum computing[lxxii] – I don't know about the veracity of either claim. (A toy quantisation sketch follows this list.)
Specialised models. Developing models by use case can reduce the size of the model and improve the applied intelligence per watt. Classic examples range from fuzzy logic, used for the past four decades in consumer electronics, to Mistral AI[lxxiii] and Anduril's Copperhead underwater drone family[lxxiv].
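As a concrete flavour of the optimisation approach mentioned above, the toy sketch below shows weight quantisation – storing weights at lower precision so a model needs less memory and less energy to move data around. The numbers are invented, and this is an illustration of the general idea rather than the specific methods used by any of the labs named:

```python
# Toy illustration of weight quantisation: 32-bit floats squeezed into 8-bit
# integers plus a per-tensor scale. Numbers are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
weights_fp32 = rng.normal(scale=0.1, size=1_000_000).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0           # map the value range onto int8
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)
dequantised = weights_int8.astype(np.float32) * scale

print(f"fp32 size: {weights_fp32.nbytes / 1e6:.1f} MB")
print(f"int8 size: {weights_int8.nbytes / 1e6:.1f} MB")   # roughly 4x smaller
print(f"mean absolute error: {np.abs(weights_fp32 - dequantised).mean():.6f}")
```

A quarter of the memory traffic for a small loss of precision is exactly the kind of trade-off that improves intelligence per watt.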
Even if an AI model can do something, should the model be asked to do so?
We have a clear direction of travel over the decades to more powerful, portable computing devices – which could function as an extension of their user once intelligence per watt allows AI to be run locally.
Having an AI run on a cloud service makes sense where you are on a robust internet connection, such as using the wi-fi network at home. This works for general everyday tasks with no information risk – for instance, helping you complete a newspaper crossword when there is an answer you are stuck on and the intellectual struggle has gone nowhere.
A private cloud AI service would make sense when working on, accessing or processing data held on that service. An example of this would be Google's Vertex AI offering[lxxv].
On-device AI models make sense when working with one's personal private details such as family photographs, health information, or accessing apps within your device. Apps like Strava, which share data, have been shown to have privacy[lxxvi] and security[lxxvii] implications. ***I am using Strava as an example because it is popular and widely-known, not because it is a bad app per se.***
While businesses have the capability and resources to maintain a multi-layered security infrastructure to protect their data most[lxxviii] of[lxxix] the[lxxx] time[lxxxi], individuals don't have the same security. As I write this, there are privacy concerns[lxxxii] being expressed about Waymo's autonomous taxis. Meanwhile, an individual's mobile device is rarely out of physical reach, and for many their laptop or tablet is similarly close. All of these devices tend to be used in concert with each other. So, for consumers, having an on-device AI model makes the most sense. All of which results in a problem: how do technologists squeeze their most complex models down to fit inside a laptop, tablet or smartphone?
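Pulling those deployment choices together, a minimal sketch of the routing logic might look like this – the task categories, rules and function names are my own illustrative assumptions, not any vendor's actual policy:

```python
# Illustrative routing of AI tasks to cloud, private cloud or on-device models
# based on data sensitivity and connectivity. Categories and rules are
# assumptions made for the example.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    sensitivity: str   # "public", "work", or "personal"
    online: bool       # is a robust connection available?

def route(task: Task) -> str:
    if task.sensitivity == "personal":
        return "on-device model"                    # photos, health data, app access
    if task.sensitivity == "work":
        return "private cloud service" if task.online else "on-device model"
    return "public cloud service" if task.online else "on-device model"

for t in (
    Task("crossword clue help", "public", online=True),
    Task("summarise client document", "work", online=True),
    Task("search family photos", "personal", online=True),
):
    print(f"{t.name:28s} -> {route(t)}")
```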
This post about layers of the future was inspired by an article that I read in EE News. The article headline talked in absolutes: the external power adapter is dead. The reality is usually much more complex. The future doesn't arrive complete; instead we have layers of the future.
Science fiction as an indicator.
The 1936 Alexander Korda adaptation of H.G. Wells' The Shape of Things To Come shows a shiny, complete new utopia. It is a tour-de-force of art deco design, but loses somewhat in believability because of its complete vision.
https://youtu.be/knOd-BhRuCE?si=HfIDYsaa7nUZKrYE
This is partly explained away by a devastating war, largely influenced by the Great War, which had demonstrated the horrendous power of artillery and machine guns. The implication is that the layers of architecture assembled over the years had been literally blown away, so architects and town planners would be working from a metaphorical clean sheet – if you ignore land ownership rights, extensive rubble, legacy building foundations and underground works like water pipes, sewers, storm drains and cable ducting.
In real life, things aren't that simple. Britain's major cities were extensively bombed during the war and the country underwent extensive rebuilding in the post-war era. Yet even in cities like Coventry that were extensively damaged, you still have a plurality of architecture from different ages.
In the City of London, partly thanks to planning rules, 17th-century architecture exists alongside modern tower blocks.
In Hong Kong, you can see a mix of modern skyscrapers, tong tau-style tenements and post-war composite buildings that make the most of the territory's space. Given Hong Kong's historically strong real estate market, there are very strong incentives to rebuild towards denser land uses, yet layers of architecture from different ages still exist.
COBOL and other ‘dead’ languages.
If you look at computer history, you realise that it is built on layers. Back in the 1960s, computing was a large-organisation endeavour. A good deal of these systems ran on COBOL, a computer language created in 1959. New systems were being written in COBOL through the mid-2000s for banks and stockbrokerages. These programmes are still maintained, many of them still going long after the people who wrote them retired from the workforce.
These systems were run on mainframe computers, though some have been replaced by clusters of servers. IBM still sells its Z-series mainframe computers, and mainframe computing has even moved onto cloud computing services.
In 1966, MUMPS was created out of a National Institutes of Health-funded project at Massachusetts General Hospital. The programming language was built out of frustration, to support high-performance databases. MUMPS has gone on to support health systems around the world and projects within the European Space Agency.
If you believed the technology industry, all of these systems would have been dead and buried by:
Various computer languages
Operating systems like UNIX, Linux and Windows
Minicomputers
Workstations
PCs and Macs
Smartphones and tablets
The web
At a more prosaic level, infrastructure businesses like UK railway companies, German businesses and Japanese government departments have kept using fax machines more than two decades after email became ubiquitous in businesses and most households in the developed world.
The adoption curve.
The adoption curve is a model that shows how products are adopted over time. The model was originally proposed by academic Everett Rogers in his book Diffusion of Innovations, published in 1962. In the chart that usually accompanies it, one curve shows the percentage of new adopters over time and the other shows an idealised cumulative market penetration. However, virtually no innovation gets total adoption. My parents don't have smartphones, some friends don't have televisions, and there are people in developed countries who still live off the grid without electricity or indoor plumbing.
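For the curious, the two curves in Rogers' model are easy to reproduce: new adopters per period follow a roughly bell-shaped curve, while cumulative adoption traces an S-curve that, in the idealised version, approaches but rarely reaches 100%. A minimal sketch with illustrative parameters only:

```python
# Rogers-style diffusion sketch: bell-shaped new-adopter curve and the
# S-shaped cumulative adoption it produces. Parameters are illustrative.
import math

def new_adopters(t, peak=10.0, spread=3.0):
    """Share of the population adopting in period t (bell-shaped)."""
    return math.exp(-((t - peak) ** 2) / (2 * spread ** 2)) / (spread * math.sqrt(2 * math.pi))

ceiling = 0.9          # idealised maximum penetration - rarely 100% in practice
cumulative = 0.0
for t in range(0, 21, 2):
    cumulative += 2 * new_adopters(t) * ceiling
    print(f"period {t:2d}: new {new_adopters(t) * ceiling:5.3f}, cumulative {min(cumulative, ceiling):5.3f}")
```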
When you look at businesses and homes, different technologies often exist side-by-side. In UK households turntables for vinyl records exist alongside streaming systems. Stuffed bookshelves exist alongside laptops, tablets and e-readers.
Yahoo! Internet Life magazine
Yahoo! Internet Life magazine is a microcosm of this layers-of-the-future coexistence. Yahoo! is now a shadow of its former self, but it's still valued for its financial news and email. The company was founded in 1994, just over 30 years ago. It was in the vanguard of consumer internet services alongside the likes of Wired, Excite, Go, MSN, Lycos and Netscape's Net Center.
Yahoo! Internet Life magazine was published in conjunction with Ziff Davis from 1996 to 2002. At the time when it was being published the web was as much a cultural force as it was a technology that people adopted. It was bigger than gaming or generative AI are now in terms of cultural impact. Yet there was no incongruity in being a print magazine about online media. Both existed side-by-side.
Post-print, Yahoo! Life is now an online magazine that is part of the Yahoo! web portal.
Technology is the journey, not the destination.
Technology and innovation often don't meet the ideals set for them; for instance, USB-C isn't quite the universal data and power transfer panacea that consumers are led to believe. Cables and connectors that look the same have different capabilities. There is no peak reached, but layers of the future laid on each other, often operating in parallel. It's a similar situation in home cinema systems using HDMI cables or different versions of Bluetooth-connected devices.
Mooncakes were a big part of my time in Hong Kong and Shenzhen. This year, mid-September marked the mid-autumn festival across Asia, known as Chuseok in Korea. It is similar to harvest festivals that happen elsewhere in the world.
It is celebrated in Chinese communities with mooncakes. Mooncakes have traditionally been made of fat-rich pastry cases and lids, filled with red bean or lotus seed paste and a salted dried egg yolk.
Mooncakes are moulded and have auspicious messages or symbols embossed on the top, like the double happiness ideogram which also appears on new year decorations and at weddings.
In the past, mooncakes have been used to make political statements in Hong Kong, where they were embossed with messages against the Fugitive Offenders and Mutual Legal Assistance in Criminal Matters Legislation (Amendment) Bill 2019. This mirrored mooncake history, where concealed messages were alleged to have been used to foment rebellion against Mongol rule in China centuries ago.
China saw a halving of mooncake sales this year compared to last year. This is a mix of fast-moving factors like the state of consumer spending and longer-term factors including gifting culture and attitudes to health and fitness.
The economy
The consumer economy seems to be doing worse than industrial output. Youth unemployment is still an issue.
Gifting culture
China saw a crackdown on premium-priced mooncakes as part of a government move against 'excessive consumption' driven by societal excess and 'money worship'. This overall movement has dampened luxury sales. The Chinese government stopped officials buying mooncakes a decade ago as part of a crackdown on corruption.
Some consumers just aren’t into them
They are as divisive as Christmas cake is in Irish and British households. Brands like Häagen-Dazs and Starbucks have looked to reinvent mooncakes into something more palatable.
Health and fitness
Health and fitness has been steadily growing as a trend in China, for a number of reasons including changing beauty standards. Chinese women still tend to favour slimness over muscle, but home workouts and running have been increasing in popularity. The fitness industry has been growing and the Chinese government has also tried to foster interest in winter sports. So there is good reason to avoid ruining all that hard work by eating mooncakes.
Why Do Workers Dislike Inflation? Wage Erosion and Conflict Costs by Joao Guerreiro, Jonathon Hazell, Chen Lian and Christina Patterson – workers must take costly actions ("conflict") to have nominal wages catch up with inflation, meaning there are welfare costs even if real wages do not fall as inflation rises. We study a menu-cost style model, where workers choose whether to engage in conflict with employers to secure a wage increase. We show that, following a rise in inflation, wage catch-up resulting from more frequent conflict does not raise welfare. Instead, the impact of inflation on worker welfare is determined by what we term "wage erosion" – how inflation would lower real wages if workers' conflict decisions did not respond to inflation. As a result, measuring welfare using observed wage growth understates the costs of inflation. We conduct a survey showing that workers are willing to sacrifice 1.75% of their wages to avoid conflict. Calibrating the model to the survey data, the aggregate costs of inflation incorporating conflict are more than double the costs of inflation via falling real wages alone.
FMCG
Unilever ended up as a punching bag for Greenpeace, having its purpose blown up. As a campaign idea, the Dove brand team's public celebration of the 20th anniversary of Dove's real beauty positioning and creative left them open to this. Greenpeace used a skilful reframing in its creative.
The developing world seems to be disproportionately affected by the plastic waste highlighted for a number of reasons:
A lot of paper and plastic recycling is shipped abroad. It used to go to China, but China declined to accept waste for recycling from 2018 onwards, so this waste went to other markets.
Developing markets have single-portion packaging so that FMCG companies can distribute via neighbourhood shops and sell the product at a price a consumer can afford.
Plastic is easier to colour, manufacture, package and transport than glass, metal or coated paper. Biodegradable or effective post-use supply chains are well behind where they should be. And even if you were open to recycling, there may be brand issues.
The pairing of advertisers with consumers close to the point of purchase via rich, first-party data is leading to better ROI relative to other channels for some advertisers and is cited as a key driver of increasing retail media investment.
Retail media is growing in double digits every year; it currently accounts for around 14% of global ad spend and is projected to account for 22.7% of online advertising by 2026.
Retail media is no longer a ‘medium’ in the conventional sense but is instead evolving into an infrastructure underpinning the entire digital advertising ecosystem.
Welcome to my April 2024 newsletter which marks my 9th issue. We managed to make it through the winter and the clocks moved forward allowing for lighter evenings in the northern hemisphere.
The number nine is full of positive symbolism. In Chinese culture it sounds similar to the word for 'long-lasting'. It is strongly associated with the mystical and powerful nature of the Chinese dragon, from the number of dragon types and children to the number of scales on the dragon – which were multiples of nine. There are nine channels in traditional Chinese medicine. In Norse mythology there are nine worlds, and Odin the all-father hangs on the tree of life for nine days to gain knowledge of the runes.
Social media-related cognitive dissonance
A couple of conversations with people spurred me to write this next piece.
I know it's obvious and common sense, but it needs to be said occasionally. This time last year, I was on a work trip to Zurich, providing support to a teammate running a workshop for a client who viewed the agency as the least-worst option. We did good work and built a temporary rapport, and we got insight into the wider client-side politics at play. It was a classic example of the complexities involved in agency life – and Lord knows we already have enough internal politics in our own shops to deal with.
The photo I shared on Instagram at the time gave no clue to what was happening, serving as a reminder to consider the curated nature of social feeds when scrolling through.
New reader?
If this is the first newsletter, welcome! You can find my regular writings here and more about me here.
What my answers to Campaign’s a-list questions would look like.
Boutique e-tailers and why the multi-brand luxury retail sector has gone from boom to bust.
Very Ralph and other things – Ralph Lauren's world-building abilities and how others, from a cancer patient to overseas migrant workers, have bent the world to their needs or made a new one.
Books that I have read.
There are a few books that I revisit and the March 1974 JWT London planning guide is one of them. In many respects it feels fresh and more articulate than more modern tomes.
Chinese Antitrust Exceptionalism by Angela Zhang sounds exceptionally dry to the uninitiated. But if like me, you’ve worked on brands like Qualcomm, Huawei or GSK you realise how much of an impact China’s regulatory environment can have on your client’s success. Zhang breaks down the history of China’s antitrust regulatory environment, how it works within China’s power structures and how it differs from the US model. What becomes apparent is that Chinese power isn’t monolithic and that China is weaponising antitrust legislation for strategic and policy goals rather than consumer benefit. It is important for everything from technology to the millions of COVID deaths that happened in China due to a lack of effective vaccines. Zhang’s book won awards when it first came out in 2021, and is still valuable now given the relatively static US-China policy views. Given the recent changes in Hong Kong where she lives, we may not see as frank a book of its quality come out of Hong Kong academia again on this subject matter.
Van Horne and Riley’s Left of Bang was recommended by a friend who recently left military service. It codified and gave me a lexicon for describing observations of focus group dynamics and observation-based shopper marketing. Probably of bigger value to people more interested in the analytical side of behavioural science is the bibliography – which is extensive.
Things I have been inspired by.
Sustaining a sustainable brand
Kantar do a good webinar series called On Brand with Kantar. I got to watch one of them: Why consumers ignore brands' sustainability efforts. Consumers are reluctant to trust brands' sustainability efforts. Kantar's recommendation is to stay the course and continue to demonstrate real sustainability. Kantar's work complemented System1's Greenprint US-orientated sustainable advertising report. There is a UK-specific version as well, with half a dozen ideas for marketers, published in partnership with ITV.
Media platform trends
GWI released their 2024 Global Media trends report. GWI takes a survey based approach to understand consumer media behaviour.
Broadcast TV still commands the greatest share of total TV time, despite Netflix, Amazon Prime Video and a plethora of other streaming platforms from Criterion to Disney+.
Survival/horror players are most excited about gaming luxury collabs; whether luxury brands are equally excited about survival or horror gamers is a bigger question.
Games console ownership has halved in the past ten years. This surprised me given how many of my friends have a Switch or PlayStation 5. It probably explains why Microsoft is focusing on being a publisher rather than on platforms as well.
They focused exclusively on internet advertising, which gives you a good idea of where they want the balance of media spend to go, rather than necessarily the right tool for the right job. Yes, digital is very important, BUT we live in a world where we are wrapped by, and consume, layers of digital and analogue media.
We can see from GWI data that this viewpoint is likely still excessively myopic in terms of media, given offline–online media linkages. This is likely to be even more so in Japan, which still has a more robust traditional media industry.
Internet advertising reached a new high, despite it being a couple of years since the Olympic Games were hosted in Tokyo. (Media spend tends to be skewed upwards in the year a country hosts the Olympics.)
One thing I would flag is that this report is built on a survey of people across the Japanese advertising industry, so there may be some biases built into that process. Overall it’s a fascinating read.
Social media engagement benchmarks
RivalIQ published their 2024 Social Media Industry Benchmark Report; download it to get the full details. Three things struck me straight away:
A macro-level decline in engagement rates across platforms, which matches the trends that Manson and Whatley outlined ten years ago in their Facebook Zero paper for Ogilvy Social.
If brands didn’t already have enough reasons to reduce their exposure to Twitter, falling engagement rates on the platform add another. Overall, video seemed to underperform photos on engagement.
One thing leapt out at me in the industry verticals data: if you are looking to reach student-age adults, why not consider collaborating with higher education institutions’ social media accounts rather than influencers?
Shocking health outcomes
The Hidden Cost of Ageism | A Barrier to Innovation & Growth | Future Work – sparked a lot of discussion about its implications for workplace practices, particularly within the advertising sector. What was less discussed, but more important, were the implications of ageism-related biases for healthcare treatment:
Under-treatment or Over-treatment: Older adults may receive less aggressive treatment, or be over-treated, because of age-related biases rather than their individual health needs and preferences.
Dismissal of Concerns: Healthcare providers might dismiss older patients’ health issues as inevitable parts of ageing, potentially overlooking treatable conditions.
Age-Based Prioritisation: In some cases, age influences the allocation of healthcare resources, with younger individuals being prioritised over older ones, assuming they have more “life worth living.”
The Hidden Cost of Ageism | Future Work
MSNBC in the US did a report on what it called a ‘post-Roe underground’, echoing the Underground Railroad that helped enslaved people escape the Southern states and the Vietnam War-era draft dodgers who went north to Canada. This time it is about helping women access abortion pills or procedures in other states or in Mexico.
MSNBC
My friend Parrus hosted a talk on World Health Day (more on that here). The key takeaway for me was not to try to replicate developed-market solutions in developing markets, but instead to think about how they could be reinvented. That thinking could be extended beyond healthcare to the consumer goods, telecoms and technology sectors as well.
Luxury market shake-up
Business of Fashion covered a US court case in which two women brought a lawsuit against Hermès, alleging that purchase of its sought-after Birkin bag is dependent on the purchase of other products, an “illegal tying arrangement” that violates US antitrust law.
Hermès is more vulnerable than other brands because it owns its retail stores. The case, if successful, could have implications far beyond the luxury bag-maker: for instance, how Ford selected prospective owners for its GT-40 sports car, or most Ferrari limited editions for that matter.
While we’re on the subject of luxury, LVMH are rerunning their INSIDE LVMH certificate, which is invaluable for anyone who might work on a luxury brand now or in the future. More here.
Morizo
Toyota are on a tear at the moment. They correctly guessed that electric cars are still too expensive for many buyers and focused on hybrids as a stepping stone to electric and hydrogen fuel cell production. They have also successfully used a passion for driving in both their products and their marketing. The Toyota GR Yaris was the result of Chairman Akio Toyoda instructing engineers to make something sporty enough to win the World Rally Championship yet still affordable.
He also outed himself as a speed demon who races under the pseudonym Morizo.
Quebec
For many English speakers, one of the most dissonant experiences is being confronted by a language you can’t speak. It’s part of the reason why Ireland managed to become the European base of companies like Alphabet and Intel. So I was very impressed by this campaign by the Quebec government to attract visitors and inbound investment.
Things I have watched.
I watched Mr Inbetween series one in March and managed to work through series two and three this month. I couldn’t recommend the show highly enough; each series builds on the last.
Over Easter, I revisited some old VHS tapes my parents still had and rediscovered the Christopher Walken science fiction horror film “Communion.” It epitomizes its era: alien abduction narratives emerged during the Cold War and permeated popular culture from “Close Encounters of the Third Kind” to “The X-Files,” tapering off after 9/11. “Communion” demonstrates how effective editing and minimal special effects can heighten tension and emotion. Despite the film’s implausible premise, Walken delivers a fantastic performance.
“Modesty Blaise” is from a time when comic book adaptations were uncommon in cinemas. This 1966 adaptation of the 1960s comic strip shares stylistic similarities with “Barbarella” and stars a young Terence Stamp. I received a tape copy from a friend who was attending art college at the time. The depiction of the computer as a character with emotional reactions feels contemporary, echoing the rise of virtual assistants like Siri and ChatGPT, despite being portrayed as a mainframe. It is interesting to contrast it with Spike Jonze’s film Her, made almost 50 years later.
Useful tools.
A lot of this month’s tools were inspired by my trusty Mac slowly dying and the need to get a new machine up and running before the old one gave out.
Time Machine
Apple’s native backup software, Time Machine, serves as a personal sysadmin for home users. Regular backups are essential. If a crucial document disappears while you’re working on it, Time Machine, coupled with a Time Machine-enabled hard drive, allows you to retrieve earlier versions of the document, potentially saving your sanity in critical moments.
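Most people will only ever touch Time Machine through System Settings, but it can also be driven from the Terminal via Apple’s tmutil command-line tool. Below is a minimal sketch of my own (not from Apple’s documentation) using Python to check what backups exist and when the last one completed; it assumes a Time Machine disk is already configured, and the output format can vary between macOS versions.

```python
# Minimal sketch: querying Time Machine from Python via Apple's `tmutil`
# command-line tool (macOS only). Illustrative only; assumes a backup disk
# is configured and mounted.
import subprocess


def tmutil(*args: str) -> str:
    """Run a tmutil verb and return its stdout as text."""
    result = subprocess.run(
        ["tmutil", *args], capture_output=True, text=True, check=False
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # List completed backups on the currently mounted backup volume.
    print("Backups found:")
    print(tmutil("listbackups") or "  none (is the Time Machine disk mounted?)")

    # Path of the most recent completed backup.
    print("Latest backup:", tmutil("latestbackup") or "n/a")

    # Kick off a backup immediately (same as "Back Up Now" in the menu bar).
    # Uncomment to use:
    # tmutil("startbackup")
```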
Microsoft Office
I prefer the one-off payment model over Office 365 services. I use Apple’s Mail, Contacts, and Calendar apps instead of Outlook. While Office is available for just £100, which is reasonable considering its features, I still prefer Keynote over PowerPoint for creating presentations.
Superlist
Many of you may recall Wunderlist, which Microsoft acquired, but much of its original charm was lost in the transition to Microsoft To Do. Superlist is a reboot of Wunderlist by the original team, this time without Microsoft’s involvement. It’s available on iOS, macOS, and the web, catering to both individual and team task management needs.
https://youtu.be/2MzzbRhYlSA?si=04eBXH-MqKLpX2bN
ESET Home Security Essential
I used to rely on Kaspersky, and while I generally like their products, I have concerns about the potential influence of the Russian government. Therefore, I switched providers. ESET has a strong reputation and offers better Mac support than F-Secure. I can recommend their ESET HOME Security Essential package.
Amazon Basics laptop sleeve
I use various bags depending on my destination and activities. Over the years, I’ve found that Amazon Basics laptop sleeves work well for my machines. They’re often among the cheapest options available and tend to outlast the computers they protect.
Laptop camera cover
The photo of Mark Zuckerberg’s laptop with tape covering the camera raised awareness about privacy. Webcam privacy covers, such as a sliver of plastic that slides across the lens, are ideal as they are thin enough to let your laptop close fully. A pro tip: use a red LED torch to locate the camera precisely when applying the stick-on cover.
Protective case and keyboard cover
I’m a big fan of clip-on polycarbonate shells to protect my laptop, as they provide a better surface for the stickers that personalize my machine over time. You don’t necessarily need a big-name case; the one I have came with a keyboard cover that works well. Anything that prevents Red Bull, coffee, or croissant flakes from getting under my keys is worth having.
Screen protector film
The screen protector film provides great protection and is easy to apply and clean, even for beginners like me. I’ll update you if my opinion changes.
The sales pitch.
I have enjoyed working on projects for PRECISIONeffect and am now taking bookings for strategic engagements or discussions on permanent roles. Contact me here.
OK, this is the end of my April 2024 newsletter. I hope to see you all back here again in a month. Be excellent to each other and enjoy the bank holiday.
Don’t forget to like, comment, share and subscribe!
Let me know if you have any recommendations to be featured in forthcoming issues.