I started down this train of thought about the idea of a cyborg after a discussion with my colleague Colleen about the changes we had been seeing in consumer behaviour. With that in mind, I thought I would reflect on my understanding of what cyborgs are.
‘Moo-mail’ Yahoo! CowParade cow. The web appliance / cow cyborg hybrid used to stand in the lobby of building D, next to the Yahoo! branded merchandise store on the Yahoo! campus back when I worked there. It was originally created in 2000 as a buzz marketing gimmick to promote Yahoo! Mail – the company’s email product – to New Yorkers. More here.
Cyborg in culture
I can just about remember playing with friends’ bionic man toys at primary school, and I remember the opening credits of The Six Million Dollar Man. The show ran from 1973 to 1978 and had a corresponding spin-off show called The Bionic Woman.
According to the show a cyborg was:
CY’BORG
A HUMAN BEING WHOSE ORIGINAL HUMAN PARTS HAVE HAD TO BE REPLACED TO ONE EXTENT OR ANOTHER BY MACHINES THAT PERFORM THE SAME FUNCTIONS.
By that definition, at the time of writing my Dad is a cyborg, having had a pacemaker fitted a year or two ago. So would be the character Batou in Ghost In The Shell.
The cyborg was a feature of cyberpunk culture. The key difference was that people chose to have augmentation, not just as a repair but as a form of enhancement.
Optional enhancement
Johnny Mnemonic had a brain storage interface fitted that allowed him to work as a giant walking thumb drive.
Fellow William Gibson creation Molly Millions has retractable razor-sharp blades in her fingers and an augmented metabolic system. She has permanently fitted mirrored lenses over her eyes that enhance her vision.
Captain Cyborg
Real life produced a rather poor version of this cyberpunk fantasy in academic Kevin Warwick, whom IT paper The Register spoofed with the moniker Captain Cyborg. He did foolish things like implanting himself with an RFID chip usually used for pet identification. And yes, of course, Warwick did a TED talk. I can’t tell whether the audience is laughing with him or at him.
So what has an office conversation got to do with a cyborg?
Digital drugs
Which brings me back to how an office conversation about compulsive behaviour got me thinking about cyborgs. Culture did envisage some form of device addiction. The premise of Neal Stephenson’s novel Snow Crash revolves around a file that doesn’t just crash a hacker’s computer, but leaves the hacker with real-world brain damage in the process.
Long live the new flesh
Ten years earlier, Videodrome featured a TV executive called Max Renn investigating a satellite TV show called Videodrome. It is described as a socio-political battleground in which a war is being fought to control the minds of the North American population. Built into it is a signal that produces a malignant brain tumour. Renn’s reality dissolves over the rest of the film as he finds out more, and then he kills himself.
There is a clear analogy with the heroin and crack cocaine epidemics that ravaged the cities of the western world through the 1980s and 1990s as drugs of desperation in the face of globalisation. Science fiction is as much about the past and the present as it is about the future. Heroin and crack both cost large amounts of money, so children tended to be secondary and tertiary victims rather than addicts in their own right. It would also have been problematic for authors to contemplate gratuitous harm to children in their works back then, let alone now in more anxious times.
In both Snow Crash and Videodrome users suffer damage from technology that they are unwilling to put aside.
Back to now
Addiction is ‘real’
My colleague put forward the following points:
Screens now dominate our lives, and their presence is only getting stronger and more powerful
(Some) adults can control, to a certain extent, how often and when they use screens. But screen addiction is commonplace.
Smartphone addiction and drug addiction share some similarities, including a neglected personal life, a preoccupation with the subject of the addiction, and social media used as a mood modifier or for escapism. The implication is that the smartphone is an appendage that adds capabilities (some of which are of questionable value) and can’t be put down. All of which reminded me of my childhood (and adult) relationship with music. But it is why I started thinking about the nature of a cyborg.
Smartphone addiction
Smartphone addiction goes by many names, including screen addiction and online or internet addiction. Japan identified the phenomenon of hikikomori; the term was coined by social scientist Tamaki Saito in a 1998 book. While the term itself means socially withdrawn, it hinged on the person staying home and playing video games or living a virtual life.
By 2015, academic research indicated that somewhere between 1.9 and 2.5 percent of Hong Kongers aged from 12 to 29 might fall into the hikikomori category, compared to the 1.5 percent of Japanese believed to be in the category.
Meanwhile in the early 2000s, BlackBerry email devices were nicknamed the CrackBerry, often by users who admitted overusing them in anti-social contexts. There was a corresponding term, ‘BlackBerry orphans’, for children who were ignored by parents wrapped up in writing and reading emails on their BlackBerry instead of engaging at home.
China was the first country to push for action to clamp down on children’s online time, in particular the use of online games. As far back as autumn 2005, China’s General Administration of Press and Publication had started trialling a fatigue system to limit screen time.
By 2007, the local government of Shanghai had set up a camp to help cure teens of internet addiction, working with a pilot group of inmates aged between 14 and 22. Just a year later the FT was documenting how the Chinese government was struggling to combat the addiction throughout the country. This addiction implies a cyborg-like relationship with the internet access device.
In 2017, the substitute phone was launched as a kind of fidget tool. It provides the tactile experience of swiping and button pressing, but without any of the compelling, addictive software.
By 2018, smartphone manufacturers were worried about smartphone addiction and came up with different ways to try and give their customers better information and control over their smartphone usage.
What about the children?
My colleague asked the following question: given the impact on adults, who haven’t grown up with screens, what does this all mean for children?
Remember the BlackBerry orphans earlier? My colleague proposed that children are now taught from birth that screens and smartphones, rather than people, are at the centre of life. Parents use their smartphone as a substitute for toys, parent–child playtime, conversation, or even reading to the child.
This is claimed to manifest in impaired social and emotional development. Expert opinion is that children below two years old shouldn’t have any ‘technology in their life’.
There is a belief amongst experts that screen time can result in permanent damage to developing children’s brains, impacting concentration, social skills and vocabulary. Some even believe that there might be a link between ADHD and TikTok.
But the Royal College of Paediatrics and Child Health (RCPCH), the UK’s professional body most concerned with a child’s health, doesn’t publish any recommendations. There isn’t any research to indicate the ‘safe’ level and arguably commissioning this research would likely pose ethical questions.
By the time children enter secondary education, they are likely to own a smartphone of some sort. They may be exhibiting a number of physiological effects:
‘Text neck’
Premature eye ageing
Sleepless nights
Admittedly, as a child I was told that reading after lights out and listening to the radio or watching TV in the dark would result in ‘going blind’ or a lack of much needed sleep.
Like television before it, online screen time adversely affects academic performance; my own exam grades were empirical evidence of this. China, South Korea and Taiwan each have different ways of limiting screen time. China has built technology-based limits into online games platforms. Taiwan holds parents directly responsible and even fines them.
Questions
All of this prompted a number of questions for me:
Is it the device or is it the media?
Is it different to other waves of technology?
Moral panics – or what can we learn from the past & our cyborg future
Media
Rock music – academic research indicated that listening to rock music was linked to an increase in reckless behaviour, including drug use, unprotected sex, casual sex, drunk driving, speeding and vandalism
Violent content – while violent content was considered to trigger a response in children, the overall risk associated with it was difficult to prove conclusively despite decades of research. Studies as far back as the mid-1990s indicated that there are a lot of other factors to consider in addition to the exposure, including mental health and cognitive ability.
Sexual content – the US Center for Media Literacy pulled together views on sexual violence in content. There wasn’t a lot of clarity in the plurality of views, beyond the challenge of defining what counts as overly sexual content. What views were expressed were not backed by scientific research
Video gaming – academic research in 2015 indicated that, because of the strategies used by players, video games might have a negative impact on brain development over time.
Devices
Personal stereos – the use of a Sony Walkman, and later the iPod, was considered to have a negative effect on hearing. They were also considered to have a social effect: depending on who you ask, they were either empowering or dislocating from society, with increased narcissism. The positive, autonomy-based interpretation was called the ‘Walkman effect’. The implication from this research is that not giving a child a smartphone at a certain point could have a detrimental effect on them – at some point the child has to become a smartphone | human cyborg.
Televisions – when I was a child I was constantly told not to sit too close to the television, and that doing so would cause me to go blind. According to Scientific American, it isn’t the distance from the television that affects the child, but watching for long enough can cause eye strain.
The implication in past concerns about media and devices is that it’s the content that tends to do the damage rather than the device. This tends to indicate where action should be taken on ‘screen addiction’. As for our great cyborg future – it can’t be stopped.
Minitel: The Online World France Built Before the Web – IEEE Spectrum – For a generation of French citizens, Minitel wasn’t about hardware, switches, or software. It was about the people they chatted with, the services they used, the games they played, and the advertisements for these services they saw in newspapers and on billboards. Many of the services that we associate with the Web had predecessors in Minitel. Before there was Peapod, there was 3615 TMK (Tele-Market), a service that enabled Parisians to order groceries for same-day delivery. Before there was Cortana or Siri, there were Claire and Sophie, services that provided personalized information using natural-language interfaces. Before there was Ticketmaster, there was Billetel. And before there was telebanking, there was Minitel banking
Brand Equity May Be Auto Industry’s Biggest AI Risk | CLS Strategies – The AI Risk Index reflects a substantial gap between what is intended and what is perceived by critical stakeholders. The results are stark—especially in the context of substantial investment and many more years of public scrutiny as AI is improved—and reveal a growing crisis of trust. Though an average of 62% of Americans are familiar with companies in the transportation industry, only 35% have a positive opinion of them (compared to 43% for non-automotive manufacturing and 41% for retail companies) and only 37% trust them (compared to 44% for manufacturing and retail companies). Even more concerning is that the transportation companies most heavily involved in AI technology drive this sense of distrust, more so than traditional carmakers. That may explain why only three out of eight transportation companies analyzed during the third quarter of 2018 mentioned advancements in AI at all—indicating that auto companies are either communicating poorly or not communicating at all.
CNET came up with a list of 25 technologies that have come to prominence during the past quarter century and changed the world. That inspired me to take a run at it and make my own list of 25 technologies.
CNET’s list | My list
Apple iPhone | SMS and instant messaging
Wi-Fi | Wi-Fi
IoT (internet of things) | Mobile broadband
Voice assistants | DOCSIS & DSL
Bluetooth | Bluetooth
VPN (virtual private network) | Voice recognition
Bitcoin | Search
Blockchain | SaaS (Software as a Service)
MP3 | VoIP (voice (and video) over IP)
Facial recognition | Global navigation satellite systems
Artificial intelligence | OSS (open source software)
Drones | Email
DNA testing | XML (eXtensible Markup Language)
Quantum computing | JavaScript
Social networking | Social networking
3D printing | MPEG (Moving Picture Experts Group)
Video streaming | NFC (near field communications)
Apps | Apps
Autonomous vehicles | 2FA (2 factor authentication)
RFID | RFID
Virtual reality | Strong cryptography
Video conferencing | OCR
E-cigarettes | Machine learning
Ransomware | USB
Music streaming | CMOS sensors
Looking over the lists now, I can see that my ideas were about the more foundational 25 technologies required to make the modern technology environment. I have also taken a more sanguine view of the 25 technologies.
Bitcoin and blockchain didn’t make the cut. Most of the applications that people like IBM look at call for a ‘private blockchain’, which negates the distributed ledger benefit, and a blockchain can’t handle as many transactions as an Oracle database, or handle them as fast. Digital currency may be a thing, and central banks have been actively thinking about it, but I am less convinced by cryptocurrencies. Cryptocurrencies are also exceptionally energy inefficient, which matters in a world trying to move towards a low carbon economy.
With quantum computing it is just too early to tell. The technology is probably only where the digital computer was back in the 1940s. IBM have form in backing alternative forms of computing that haven’t panned out, like Josephson junctions, optical computing and gallium arsenide based computing. None of which have made it into mainstream computing.
Back in the 1980s, progress was made at a furious rate on superconducting materials, with a promise of room temperature superconductors at some point in the future. Although the research gave us some novel materials, it has mostly made a difference in sensors for heavy hospital-based medical equipment. Hence my hesitation to get excited about technologies still in their relative infancy.
Common items in the list of 25 technologies
Wi-Fi – like many technologies, Wi-Fi didn’t suddenly spring forth from the ether. It was the child of several developments over three decades. The name itself came about in 1999, created by branding agency Interbrand. It meant nothing in and of itself except as a pun on ‘hi-fi’. The name and logo were important at the time as they were signs of compatibility: a laptop with Wi-Fi could log on and use a network with the right security details.
This changed IT and buildings dramatically. Before Wi-Fi, you needed ethernet cable, a modem, modem cables and socket adaptors. And you’d still need your laptop power brick, as battery life was a lot poorer back then. Wi-Fi was easy to install and changed spaces at home, at work and in between. If you had internet at home before broadband, you – or at least your modem – were tethered to the telephone port. The internet was used in a fixed space. At work you were tied to your desk, and forget about working in a coffee shop if you needed to log on. Even the term ‘log on’ implies a time when going on the internet was an active thing to do. Wi-Fi redefined all that: you could work wherever you wanted to in the house and connect whatever devices you wanted. I still use ethernet at home for my computer and Apple TV, but I don’t have to. My laptop switches over to the Wi-Fi network when I move away from my desk.
Wi-Fi was also critically important for smartphones. Mobile networks are patchy, even more so indoors, but with smartphones came the ability to route cellular calls over Wi-Fi. This was first of use to BlackBerry users and is now an option that can be turned on with most modern smartphones. Logging on no longer had to be an active state; we became always on, all the time. Along the way Wi-Fi had to see off competition from a European standard called HiperLAN2. I worked on promoting Ericsson’s home hubs for it – lovely product design, but it was going nowhere.
Bluetooth
While the origins of Bluetooth owe a lot to a couple of Ericsson engineers in the late 1980s, much of what we now think of as Bluetooth is down to a partnership between Ericsson and IBM in 1997. They looked to incorporate a short-link wireless connection between a laptop and a cellular phone; the cellular phone would then be used as a modem for basic email. At the time, the other options were a cable or IrDA – an infrared connection. IrDA was supposed to offer a one-metre point-to-point connection; I found in practice that you had to be within a third of that distance most of the time. This limitation at least made it secure. Bluetooth eventually made it to phones, laptops and headsets during the dot-com boom. A key driver in this was the more compact nature of lithium-ion batteries. People found it disconcerting when someone next to them was apparently talking to no one, so Bluetooth headsets didn’t really take off until people stopped using voice on phones so much. I was fortunate to go to the US on a business trip in 2006 and picked up a Jawbone headset. It was a major improvement in noise reduction and call quality, but I only ever used it for Skype calls at home as I didn’t want to look like a douchebag boiler-room sales professional.
What’s amazing now is the sheer ubiquity of Bluetooth: industrial computing networks, medical technology, consumer electronics, gaming and electronic fences.
Social networking
Social networking as a concept has existed for as long as consumers have gone online. There was bulletin board culture, forums and services that helped you build your own sites. Chat rooms served much the same role that Twitter hashtags do. Ten years ago, social networking was a place of interesting experiments: localised solutions for different markets, with Japan and Korea way out in front doing mobile social. Mass adoption changed things. Now social is ingrained in the fabric of society, like a bunion. What we didn’t get was the digital utopian dream of a harmonious global village, but the same grubby aspects of society accelerated through a digital domain. The truth is no longer a universal concept.
Apps
Apps – or more accurately, an online app store and signed apps – have changed computing. The app store first appeared in the early 1990s on NeXT computers. It was designed to manage intellectual property rights on digital media and software, and built on a Unix-like system tool called a package manager. Palmix was an Indian web-based app store aimed at PDA users. A year later NTT DoCoMo launched i-mode, an online integrated app store for mobile phones. Vodafone, KPN and Nokia followed with stores soon after. Handango released the first on-device store, similar to the Apple App Store experience now.
It’s often forgotten that the original iPhone launched without an app store. Instead Apple thought the device would run web apps; unless you were on Wi-Fi, they weren’t great. Everyone had got caught up in web 2.0 fever, whereby, through the miracle of JavaScript and XML, web pages were no longer catalogues but could do things. Palm made a similar mistake with its webOS. Fortunately Apple pivoted with iOS 2 and an app store was launched. But this didn’t stop mobile developers arguing and blogging for years afterwards about which was the better approach: native apps or web apps. The thinking moderated a little and now hybrid apps make it into the mix as well.
Apple quickly realised that the app store was a winner and put it front and centre of its marketing.
RFID
RFID was originally used to track boxes in a warehouse and containers in a port. Technology brought the cost down so that it could track most items in a shop, or books in a library. Security guards walking a beat could tap and go at checkpoints, and credit card payments went contactless too. Pets could be returned to their owners thanks to an RFID pellet injected below the skin. In a secure lab that I worked in, it took a certain knack to swipe your card through the magnetic stripe reader and open the door; with RFID, it became tap and go. In a post-9/11 world, RFID tags went into every passport, changing the immigration experience of air travel forever.
On a more prosaic level it sparked off several stored-value transport cards, including Oyster cards in London and Octopus in Hong Kong. On average, they still get you through the turnstile faster than a phone app and NFC.
The rest of the 25 technologies
SMS and instant messaging
The UK and US developed in very different ways during the late 1990s and into the 2000s, thanks to the EU spending so much research and development money on getting second-generation networks up and running. Meanwhile over in the US there was a plethora of cellular network standards, and mobile roaming was a nightmare. Instead the US established an internet culture earlier: free local calling made dial-up internet popular. This meant that Americans developed an instant messaging culture, whilst Europe saw a similar surge around SMS messaging. Both provided training wheels for the adoption of our current mobile messaging culture. SMS is still used as a lingua franca for smartphone messaging, by everyone from Amazon to airlines. Like email, tales of its demise are premature.
Mobile broadband
GSM or 2G democratised mobile phone usage, but it was limited by data bandwidth and data latency. Whilst it was rated as being similar to a dial-up modem, it often felt way slower. It was only 2.5G (EDGE or EGPRS), 3G and 4G that made possible what we now take for granted as a mobile experience. With 2G, getting anything done took a real effort. Downloading text emails was painfully slow, and data was expensive. Mobile connections were worthwhile for specialist applications, like news and sports photographers needing to get images to picture desks as fast as possible for sale. 3G promised video calls and TikTok-esque sports highlights; the reality was passable email and access to maps. It was around about this time that I stopped carrying an A-to-Z atlas of London with me everywhere. You couldn’t have Instagram, WhatsApp, Google Maps, Siri or the weather without mobile broadband. It not only empowered services like downloads, streaming and video, but changed our relationship with the internet. Bandwidth and real-terms price drops were as responsible for our always-on life as WhatsApp and Skype.
Voice recognition
As with many of our 25 technologies, voice recognition as we understand it now started with work done at Bell Labs. Back in the early 1950s, they managed to train a system to recognise a single voice dictating digits. From there voice recognition evolved in fits and starts, with innovation predominantly driven by the telephone companies and the defence industry. 1990 was a pivotal year: Dragon Dictate – a personal computer-based system – was launched, and AT&T deployed its Voice Recognition Call Processing service, which allowed calls to be routed without the involvement of a receptionist. This is now usually the first line of a call centre experience, or of phone banking used to validate online banking payments.
Voice recognition has become more important as smartphone interfaces have hidden the number pad on calls. Voice is also an area that phone interfaces and home devices have tried to tap into. For many people they have worked reasonably well; personally, I have found the results more inconsistent. My Ericsson T39 from 2001 was able to recognise ‘Call <insert name>’ consistently, associating the name with a person in my speed dial list – something that Siri struggles to do now. Siri manages to play me the headlines from the BBC, and Google doesn’t seem to understand me at all.
Speech recognition moves forward in fits and starts. The UK may prove trickier due to the relative volume of accents compared to the size of the population. And then you have people like me, with an accent that has changed over time as I have moved around, unconsciously adapting to my environment and losing some of the edges of my North of England and Irish upbringing.
Search
Like most people who have been using the internet since the mid-1990s, my experience was divided not into before and after Facebook, but before and after Google. Originally the web was so small that the original search engines worked remarkably well. I remember using them as part of my research process during my degree. As the internet grew, the original search engines like HotBot, AltaVista and Excite struggled to keep up. Onto the scene came Google.
Google changed the way that we found things on the web. Concepts like web rings and directories are now ancient history. Our relationship with the web was mediated through its search box, and it became our gateway to the web. Search also changed our relationship with our devices: it inspired journaled indexes of computer drives, as consumers expected to find items on their computer with the same ease as on the web. Search is now the primary way that I navigate my Mac and my iPhone. It is a design metaphor that will be with us for a long time.
SaaS (software as a service)
SaaS actually dates from before the web. IBM used to provide ‘time-sharing’ on mainframe computers, back before PCs were a thing. Supercomputers in many academic institutions still provide the same kind of function for organisations looking to do market or economic modelling. The internet, however, provided a new way of connecting with time-shared resources. Eventually virtualisation removed the major blockage: that every user needed a separate instance of a software application to run on. Web 2.0 pioneer Oddpost – a paid-for email service – offered new levels of functionality. Oddpost used JavaScript and XML to provide a desktop-like application experience in the browser. Google extensively copied these ideas for Gmail two years later.
This then opened the door for the modern versions of Salesforce, e-Days, Workday and other SaaS. This software was now available for smaller businesses that couldn’t support running the applications on premises. SaaS, XML and Javascript are intimately connected in my choice of 25 technologies.
VoIP
Voice over Internet Protocol (VoIP) was first used in the early 1970s to pipe instructions into a flight simulator over the ARPANET. It really found its feet in 1991 with the first software programme allowing VoIP communications. The following year Communique was released, which was something like a Zoom analogue. As the commercial internet rolled out in the US, Israeli firm VocalTec released its ‘Internet Phone’ application, and soon after the ITU looked at VoIP standards. The rise of the internet led to alternative telcos that routed voice minutes over data networks – a mix of old and new telecoms.
I started my agency career working on one such alternative telco, which used technology from Israeli VoIP start-up deltathree. At this time the price of voice calls declined precipitously, particularly for international calling, at the expense of quality. The industry attracted numerous spivs. The SIP standard was developed as an analogue to SS7 for voice and video calls.
3G phones and a modicum of good interface design drove VoIP calls over services like Skype and Vonage. These were displaced in popularity by a new generation of mobile-first services like Viber, WhatsApp and FaceTime. Zoom built on this base for its conference call platform. In the meantime, telecoms providers have tried to reinvent themselves, some with more success than others.
Global navigation satellite systems
The US highlighted the impact of global navigation satellite systems with its military’s use of GPS during the first Gulf War.
After the Gulf War, non-defence usage came into focus: telematics and navigation. GPS also provided timing to a diverse range of technologies, from mobile networks to ATMs. In the early 2000s, PDA manufacturers like Fujitsu managed to integrate GPS modules into their PDA (personal digital assistant) devices. Nokia’s N95 smartphone was the first popular device with a built-in GPS receiver, and this spurred the adoption of maps on a smartphone.
Now the use cases are limitless, as smartphone apps can tap into location data whenever a person is outside a building. The next step is accurate indoor positioning – albeit no longer relying on satellite signals.
OSS
Open Source Software (OSS) is pervasive in the modern day. This blog runs on OSS (Linux, Apache, MySQL and PHP). The Mac that I write this post on is based on OSS (Darwin, the Mach microkernel, FreeBSD). The web browser is based on a branch of KDE’s Konqueror called WebKit, and that’s the same with the iPhone and iPad as well. If you’re using an Android phone, it’s based on Linux. Even smart home light bulbs run Linux.
The rise of OSS went hand-in-hand with the web. Widespread adoption started in server software that worked with open standards. Pretty soon you saw attempts to put it elsewhere: desktop Linux, including netbooks – lightweight, low-power laptops ideal for checking your email or surfing the web. At the same time, Apple transitioned from the ‘classic’ Mac OS to something based on NeXTSTEP, acquired with NeXT Computer. Motorola and other manufacturers put Linux into mobile phones as forerunners of the modern smartphone. From there it went into the Sony PlayStation 3 console. As globalisation drove electronics manufacturing to China, manufacturers of all kinds of gadgets saw the benefits of Linux – even if they didn’t honour the law and spirit of open source, cough, cough Huawei…
Email
Despite Facebook owning all our data, email is the key identifier: the identifier that you use to log into your Amazon account, on to Netflix and countless other services. Despite email supposedly being dead, with countless other services layered on top to replace it, it’s still very much alive. My own email account has selected correspondence that goes back to 2001.
Email marketing statistics show declining effectiveness, yet it’s still a very effective medium. Just look at businesses like ASOS.
Our relationship with email changed. When I left college, I signed up for an online account with Yahoo! so I could keep in touch with friends and apply for jobs. The email address went on my CV, and every Saturday I went to a cybercafe in Liverpool with a disc full of email messages to send. I usually had coffee and carrot cake with a friend whilst I sent it. We’d then go into the shopping district of central Liverpool to chat and do some window shopping.
Working in an office, I could check my personal email at lunchtime. Home broadband meant that I could check my account at home. Move forward ten years and email is in the palm of our hands, everywhere we go. I managed to get email to work on a Nokia 6600. You can see a surge in Gmail accounts that coincides with the rise in popularity of smartphones.
XML
XML (and JSON) are ways of getting formatted data onto a webpage and allowing the page to become an app. Portability of data is now a foundational technology for the modern web. Want an app or web page that uses data, like the weather forecast or a spreadsheet? It will have XML-like feeds in the background. It is so pervasive alongside JavaScript that it is more like data as a utility; electric cabling or indoor plumbing would be a good real-world analogy.
Prior to XML and JavaScript, a web page had to be completely refreshed to show updates. There were software-as-a-service applications running on the web, but they were painful. I know: I was a beta tester for an early version of an IPG company’s real-time reporting tool. Every agency person knows the pain of time tracking, but time tracking when the page had to constantly refresh was ten times worse.
JavaScript
The modern web, where web pages function as apps, is down to a group of technologies, one of which is JavaScript. JavaScript is a programming language that alters the in-browser behaviour of a web page. When combined with formatted data such as XML or JSON, it allows a web page to perform as a piece of software: you make a change and the page doesn’t need to refresh, which improves responsiveness. The potential of this first became obvious with a web email application called Oddpost, a subscription-based email service. What you paid your $2.99 a month for was a web interface that, thanks to JavaScript and XML, worked just like a desktop app. It was Internet Explorer-only, which gives you an idea how close Microsoft came to extending its monopoly from Windows to web browsers. Oddpost was eventually acquired by Yahoo!, having inspired the launch of Gmail two years after it debuted. From there you got self-service enterprise apps like e-Days and Workday. JavaScript has proved surprisingly resilient, and that’s why it makes the 25 technologies from the past 25 years.
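To make that concrete, here is a minimal sketch of the pattern in modern terms: fetch structured data and rewrite one part of the page in place, with no full refresh. Oddpost-era code used the XMLHttpRequest object and XML; this sketch uses today’s fetch API and JSON, and the endpoint and element id are hypothetical.

```typescript
// Minimal sketch: update part of a page without a full refresh.
// The endpoint and element id are hypothetical.
async function refreshInbox(): Promise<void> {
  // Ask the server for data, not a whole new page
  const response = await fetch("/api/inbox.json");
  const messages: { from: string; subject: string }[] = await response.json();

  // Rewrite just the message list; the rest of the page is untouched
  const list = document.getElementById("message-list");
  if (list) {
    list.innerHTML = messages
      .map((m) => `<li><strong>${m.from}</strong>: ${m.subject}</li>`)
      .join("");
  }
}

// Poll every 30 seconds; the page stays responsive throughout
setInterval(refreshInbox, 30_000);
```

The point is the shape of the interaction: the browser asks for data rather than a whole new document, which is what made Oddpost feel like a desktop app.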
MPEG
MPEG stands for Moving Picture Experts Group, which is responsible for pretty much every form of audio and video format that we use today. Whilst the technology might come from a multitude of sources, MPEG-set standards are invaluable. Whether it’s digital radio, online radio, digital physical media like Blu-ray and DVD, or streaming media, MPEG has had an outsized influence. It also relates directly to voice and video communications codecs, hence its place in the 25 technologies. If you’ve done a FaceTime call, listened to Spotify or watched a movie, you can thank MPEG.
NFC
Near-field communications (NFC) offers a way of using devices for authentication. It has really come into its own in smartphones, where they serve as contactless digital wallets, access passes and digital car keys. Admittedly mobile wallets have a poor experience, and it’s frightening to think that you might not be able to get into your car because someone couldn’t be bothered to maintain the Android or iOS app. Yet whether we like it or not, NFC has become part of our tech ecosystem. I would have preferred not to have to put it in this list of 25 technologies, but I had to acknowledge its impact.
2FA
Over the past ten years, two-factor authentication (2FA) has gone from being an enterprise-level security tool to consumer-grade security. The traditional RSA dongle, with its constantly changing number codes, was a status symbol of the corporate road warrior alongside Tumi luggage and a BlackBerry. Now we get those numbers via a smartphone app or by SMS. This has happened as online identity theft and data breaches have become commonplace and massive databases of passwords have been cracked. 2FA regrettably therefore ended up as one of my 25 technologies.
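Those constantly changing numbers are typically time-based one-time passwords (TOTP). Below is a minimal sketch of the scheme along the lines of RFC 6238, assuming a shared secret has already been exchanged between server and device; it is an illustration rather than a production implementation.

```typescript
// Minimal TOTP sketch (after RFC 6238): HMAC over the current 30-second
// time step, then dynamic truncation to a 6-digit code. Illustrative only.
import { createHmac } from "node:crypto";

function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  // Counter = number of time steps since the Unix epoch
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));

  const hmac = createHmac("sha1", secret).update(msg).digest();

  // Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];

  return (code % 10 ** digits).toString().padStart(digits, "0");
}

// Server and device compute the same code from the same secret and clock,
// which is why the dongle and the bank can agree without talking to each other
console.log(totp(Buffer.from("shared-secret-example")));
```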
Strong cryptography
It’s hard to convey how pervasive strong cryptography has become. Up to a quarter of users online currently use a VPN application, which encrypts their web traffic. Web connections between a site and a browser are now encrypted more often than not. If you’ve ever done online banking, or bought something online with your credit card, you’ve used strong cryptography. My laptop uses Apple’s FileVault to encrypt the drive completely. Messaging via iMessage, WhatsApp, Signal or Silent Phone all uses strong cryptography. Back in the early 1990s, strong cryptography was seen as a weapon and was limited in its export. I strongly recommend reading Steven Levy’s Crypto to find out how we got here. I remember when Lotus Notes came with weaker encryption outside the US during the dot-com era. Now I am leery of using any communications platform that doesn’t have strong cryptography. In fact, when I freelanced, encryption was an important consideration in my even being able to get professional insurance. It’s now a core part of business, so it is one of my 25 technologies.
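To give a flavour of what strong cryptography looks like to a developer today, here is a minimal sketch of authenticated encryption with AES-256-GCM using Node’s built-in crypto module. Key management is deliberately simplified; a real system would derive and store keys far more carefully.

```typescript
// Minimal sketch: authenticated encryption with AES-256-GCM.
// Key handling is deliberately simplified for illustration.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encrypt(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // must be unique per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() }; // tag detects tampering
}

function decrypt(iv: Buffer, ciphertext: Buffer, tag: Buffer, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the message was altered
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // 256-bit key
const { iv, ciphertext, tag } = encrypt("pay Alice on Friday", key);
console.log(decrypt(iv, ciphertext, tag, key)); // "pay Alice on Friday"
```

The authentication tag is the part that matters for things like banking: it means a message that has been tampered with in transit fails to decrypt, rather than silently yielding garbage.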
OCR
Optical character recognition (OCR) is technology that has been around for decades. In its modern sense it starts around 1974 with entrepreneur Ray Kurzweil. Now it’s a foundational technology for many leading-edge applications:
Interpreting the real world (billboards, road signs, automatic number plate readers)
Real time translation (using Google translate to read restaurant menus etc)
Digitisation of books and manuscripts (Google Books)
Handwriting recognition and pen computing
Making digitised documents searchable
All of this helps technology to interact with the real world in near real time. You need it for many of the wide range of future technologies that are envisaged. The slow rise of a web-of-no-web, where the real world is blended with the online world, is possible because of multiple technologies, from GPS and QR codes to optical character recognition. Its use in Google Translate alone would be enough to make it into this list of 25 technologies from the last 25 years.
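For a sense of how little code modern OCR takes to use, here is a minimal sketch with the open source tesseract.js library; the image file name is hypothetical, and the convenience API shown is an assumption about the library version in use.

```typescript
// Minimal OCR sketch using the open source tesseract.js library.
// The image file name is hypothetical.
import Tesseract from "tesseract.js";

async function readImageText(imagePath: string): Promise<string> {
  // recognize() loads English language data and extracts any text it finds
  const { data } = await Tesseract.recognize(imagePath, "eng");
  return data.text;
}

readImageText("menu-photo.jpg").then((text) => {
  console.log(text); // raw text, ready for search, translation and so on
});
```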
Machine learning
When people talk about artificial intelligence, they usually mean machine learning. Google and other companies are applying techniques that were developed at the University of Toronto in the 1980s, during an AI winter. The idea is that if you show a computer programme enough pictures with cats in them, it will recognise cat attributes as a pattern and recognise cats in the future. It’s a very particular skill, which is the reason why machine learning has offered so much promise and let us down at the same time.
I talked about an AI winter. That’s a time when there was a dearth of spending in artificial intelligence research. We’ve had several cycles of massive government investment and withdrawal as AI historically failed to deliver.
So under the right circumstances, machine learning can count craters in lunar photography or flag likely cancerous tumours in X-ray imagery. Yet machine intelligence struggles to recognise what I ask. AI-driven ad platforms get targeting hilariously wrong. It mirrors some of the fuzzy logic capabilities of Japanese consumer electronics: the autofocus camera, lifts that optimise for traffic flow in tall buildings, or the microwave that knows how long to cook your food for. Fuzzy logic was based on a mathematical paper published in 1965 by an academic at UC Berkeley.
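The ‘show it enough examples’ idea can be made concrete with the smallest possible learner: a single perceptron trained on labelled data. The deep networks behind image recognition are vastly more elaborate, but the learn-from-labelled-examples loop has the same shape; the data below is made up for illustration.

```typescript
// Toy perceptron: learns a linear rule from labelled examples.
// Far simpler than the networks used for cat photos, but the same idea:
// adjust the weights whenever a prediction is wrong.
type Example = { features: number[]; label: 0 | 1 };

function trainPerceptron(data: Example[], epochs = 20, lr = 0.1): number[] {
  const weights = new Array(data[0].features.length + 1).fill(0); // [bias, w1, w2, ...]
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { features, label } of data) {
      const x = [1, ...features]; // prepend 1 for the bias term
      const activation = x.reduce((sum, xi, i) => sum + xi * weights[i], 0);
      const predicted = activation >= 0 ? 1 : 0;
      // Nudge the weights towards examples the model got wrong
      for (let i = 0; i < weights.length; i++) {
        weights[i] += lr * (label - predicted) * x[i];
      }
    }
  }
  return weights;
}

// Learn logical OR from four labelled examples
const weights = trainPerceptron([
  { features: [0, 0], label: 0 },
  { features: [0, 1], label: 1 },
  { features: [1, 0], label: 1 },
  { features: [1, 1], label: 1 },
]);
console.log(weights); // a linear boundary that separates the two classes
```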
Moore’s Law and the worry of digital disruption have pushed machine learning adoption. The results may disappoint, but realpolitik will keep it in play. It will be the most invisible of the 25 technologies listed: you will feel its impact rather than see it.
USB
The shock of seeing the floppy disk disappear and USB appear on the first iMac would have been enough to get it on this list of 25 technologies. Computing before USB was messy. There was a range of ports for different things: connecting a printer, connecting a keyboard and connecting an external hard drive or CD-ROM drive all required different-sized cable connectors. When you were setting up a computer, each port’s function would be clearly labelled on the back of the machine. The cables that came with the Macs I owned had ideograms moulded into the tops of them.
CMOS sensors
CCD sensors were invented over 50 years ago. If you had asked me about 25 technologies back when I was a teenager, CCD would have been very close to the top of the list. They were well understood and had been incorporated in video cameras since at least the early 1980s. CCD sensors offered better quality, but had issues with lag, and techniques designed to deal with this helped the performance of CMOS sensors. CMOS sensors were invented by NASA’s Jet Propulsion Lab, building on work that Olympus did in the 1980s. First they went into mice, then into low-end cameras. The technology got better all the time, doing more in less space with less power. Eventually CMOS sensors went into webcams and cellphones. Nowadays you’re only likely to see CCDs in very particular use cases. CMOS sensors are everywhere in modern life, even in high-end photography equipment like Phase One.
What would be in your 25 technologies, and how would they differ from mine or CNET’s?
Over the past 20 years, has the modern web become a dumb internet? That’s essentially a less nuanced version of what media theorist Douglas Rushkoff proposed.
Douglas Rushkoff at WebVisions 2011 taken by webvisionevent
Too much focus and analysis has been put on the new, new thing. Novelty gets the attention over human impact
Consumer movements or subcultures become fads when they lose sight of their purpose
Rushkoff thinks that netizens let go of the social / intellectual power of the web. This provided the opportunity for the web to become yet another large corporate business
Bulletin boards, messaging platforms and email lists facilitated non-real-time or asynchronous communications. Asynchronous communications channels allowed people to be ‘smarter’ versions of themselves.
The move to an ‘always-on’ medium has been detrimental. Going online went from an active choice to a constant state of being. The resulting disorientation is self-reinforcing.
Rushkoff’s commentary is interesting for a number of reasons. He had been a herald of how online culture would change society and consumer behaviour.
But his essay posits a simple storyline. It wasn’t people that ruined the internet, it was big business that did it when people weren’t looking. So I wanted to look at the different elements of his hypothesis stage-by-stage.
Too much focus and analysis has been put in the new, new thing
With most technologies we see the thing and realise that it has potential. But it is only when it reaches the consumer, that we truly see its power.
Different cultures tend to use technology in very different ways. Let’s think about examples to illustrate this. Technology research giants like Bell Labs and BT Research had science fiction writers onboard to try and provide inspirational scenarios for the researchers. So it was no surprise that mobile wireless based communications and computing was envisaged in Star Trek.
A replica of a Science Tricorder from Star Trek by Mike Seyfang
And yes looking back Star Trek saw that the computer was moving from something the size of a filing cabinet, to something that would be a personal device. They realised that there would be portable sensing capabilities and wireless communications. But Star Trek didn’t offer a lot in terms of use cases apart from science, exploration and telemedicine.
These weren’t games machines, instead the crew played more complex board games. Vulcan chess seemed to be chess crossed with a cake stand.
Yes, but that’s just the media – surely technologists would have a better idea? Let’s go to a more recent time and cellphones.
Here’s Steve Ballmer, at the time the CEO of the world’s largest technology company. Microsoft Research poured large amounts of money into understanding consumer behaviour and tech developments. In hindsight the clip is laughable, but at the time Ballmer was the voice of reason.
The Nokia E90 Communicator and Nokia 6085 that I used through a lot of 2007
I was using a Nokia E90 Communicator around about the time that Ballmer made these comments.
I was working in a PR agency at the time, and the best selling phones amongst my friends in the media industry were:
The Nokia N73 I’d helped launch right before leaving Yahoo! (there was an integration with the Flickr photo sharing service)
The Nokia N95 with its highly tactile sliding cover and built in GPS
The Danger Sidekick was the must-have device for American teenagers. Japanese teens were glued to keitai phones that offered network-hosted ‘smartphone’ services. Korea had a similar ecosystem to Japan, with digital TV. Gran Vals by Francisco Tárrega was commonplace as the Nokia ringtone, from Bradford to Beijing. Business people toted BlackBerry, Palm or Motorola devices which were half screen and half keyboard.
The iPhone was radical, but there was no certainty that it would stick as a product. Apple had managed to reinvent the Mac. It had inched back from the brink to become ‘cool’ in certain circles. The iPod had managed to get Apple products into mainstream households. But the iPhone wasn’t a dead cert.
The ideas behind the iPhone weren’t completely unfamiliar to me. I’d had a Palm Vx PDA, the first of several Palm touch screen devices I’ve owned. But I found that a Think Outside Stowaway collapsible keyboard was essential for productive work on the device. All of this meant that, at the time, I thought Ballmer was talking the most sense.
Ballmer wasn’t the only person wrong-footed. Mike Lazaridis of Research In Motion (BlackBerry) repeatedly underestimated the iPhone, and Nokia underestimated it too.
So often organisations have the future in their hands, they just don’t realise it yet; or don’t have the corporate patience to capitalise on it. A classic example is Wildfire Communications and Orange. Wildfire Communications was a start-up that built a natural language software-based assistance system.
In 1994 they launched an ‘electronic personal secretary’. The Wildfire assistant allowed users to use voice commands on their phone to route calls and handle messaging and reminders. The voice prompts and sound gave the assistant a personality.
Orange bought the business in 2000 and then closed it down five years later, as it didn’t have enough users taking it up. Part of this was that the product was orientated towards business users, as cellphones had been in the 1980s and early 1990s.
But cellphone growth had taken off when the device bridged into consumer segments with the idea of a personal device. No similar horizon-scanning view was taken of Wildfire – for instance, what would be the impact of lower network latency from 3.5G and 4G networks?
Orange had been acquired by France Telecom and there were no longer executives advocating for it.
Demo of Wildfire’s assistant that I found on the web
In retrospect with the likes of Siri, Alexa and Google Assistant; Wildfire was potential wasted. Orange weren’t sufficiently enamoured with the new, new thing to give it the time to shine. And the potential of the service wasn’t fully realised through further development.
The reason why the focus might be put on the new, new thing is that it’s hard to pick winners, and even harder to see how those winners will be used.
Consumer movements or subcultures become fads when they lose sight of their purpose
I found this to be a particularly interesting statement. Subcultures don’t necessarily realise that they’re a subculture until the label is put on them. It’s more a variant of ‘our thing’.
The Z Boys of Dogtown realised that they were great skaters, but probably didn’t realise that they were a ‘subculture’.
Shawn Stüssy printed up some t-shirts to promote the surfboards he was shaping. He did business the only way he knew how. Did he really realise he was building the foundations of the streetwear culture of roadmen and hypebeasts?
Punks weren’t like the Situationists with a manifesto. They were doing their thing until it was labelled and the DIY nature of doing their thing became synonymous with it.
The Chicago-based producers making electronic disco music for their neighbourhood clubs didn’t envisage building a global dance music movement. Neither did the London set who decided they had such a good time in Ibiza; they’d like to keep partying between seasons at home.
Often a movement’s real purpose can only be seen in hindsight. What does become apparent is that scale dilutes, distorts or even kills a movement. When the movement becomes too big, it loses shape:
It becomes too loose a network
There are no longer common terms of reference and unspoken rules
The quality goes down
But if a community doesn’t grow, it ossifies. A classic example of this is The WELL, an online bulletin board with a mix of public and private rooms covering a wide range of interests. Since it was founded in 1985 (on dial-up), it has remained a disappointingly small business with an outsized influence on early net culture. It is still an interesting place, but its size and the long threads on there feel as if the 1990s never left (and sometimes I don’t think that’s a bad thing).
When you bring everyone into a medium, that has an effect. The median in society is low brow. The low-brow segment of society was well documented as a concept in George Orwell’s 1984 and Aldous Huxley’s Brave New World. Tabloid newspapers like The Sun or the National Enquirer write to a reading age of about 12 years old for the man in the street. Smart people do stupid things, but stupid people do stupid things more often.
It is why Hearst, Pulitzer and Beaverbrook built media empires on yellow journalism. It is why radio and television were built on the back of long-running daytime dramas (or soap operas) that offer a largely stable, unchanging backdrop in contrast to a fast-changing world.
Netizens let go of the social / intellectual power of the web
When I thought about this comment, I went back to earlier descriptions of netizens and the web. Early netizen culture sprang out of earlier subcultures. The WELL came out of The Whole Earth Catalog:
A how-to manual
A collection of essays
Product reviews – a tradition that Kevin Kelly keeps alive with his Cool Tools blog posts
The Whole Earth Catalog came out of the coalescence of the environmental lobby and the post-Altamont hippy movement going back to the land. Hippy culture didn’t die, but turned inwards. Across the world, groups of hippies looked to carve out their own space; some were more successful than others at it. The Whole Earth Catalog was designed as an aid for them.
The hippy back-to-the-land movement mirrored earlier generations of Americans who had gone west in the 19th century, emigrants who had sailed to America seeking a better life, and even post-war GIs and their families who headed out to California from the major east coast cities.
The early net offered a similar kind of open space to make your own, not bounded by geographic constraints. Underpinning that ethos was a certain amount of libertarianism. The early netizens cut a dash and created net culture. They also drew from academia: software was seen as shareable knowledge, just like the contents of The Whole Earth Catalog. This gave us the open source underpinnings that this website and my laptop both rely on.
That virtual space that was attractive to netizens also meant boundless space for large corporates to move in. Since there was infinite land to stake out, the netizens didn’t let go of power.
To use the ‘wild west’ as an analogy; early netizens stuck with their early ‘ranch lands’, whilst the media conglomerates built cities that the mainstream netizens populated over time.
The netizens never had power over those previously unmade commercial lands which the media combines made.
Asynchronous communications channels allowed people to be ‘smarter’ versions of themselves
Asynchronous communications at best do allow people to be smarter versions of themselves. That is fair, to a point. But it glosses over large chunks of the web that were about being dumb: flame wars, classes in Klingon and sharing porn. Those are things that have happened on the net for a long, long time.
Being a smarter version of yourself requires a desire to reflect that view to yourself, if not to others. I think that’s the key point here.
The tools haven’t changed that much. Some of my best discussions happen in private Facebook groups. It’s about what you choose to do, and who you choose to associate with.
In some ways I feel like I am an anachronism. I try and read widely. I come from a family where reading was valued. My parents had grown up in rural Ireland.
This blog is a direct result of that wider reading and the curiosity that it inspired. I am also acutely aware that I am atypical in this regard. Maybe it is because I come from a family of emigres, or that Irish culture prizes education in the widest sense. My Mum was an academically gifted child; books offered her a way off the family farm.
My father had an interest in mechanical things. As the second son, he had to think about a future beyond the family smallholding that his older brother would eventually inherit.
Being erudite sets up a sense of ‘otherness’ between society at large and yourself. This shows up unintentionally in having a wider vocabulary to draw from and so being able to articulate with a greater degree of precision. This is often misconstrued as jargon or complexity.
I’d argue a good deal of the general population doesn’t want to be smarter versions of themselves. They want to belong, to feel part of a continuum rather than a progression. And that makes sense: we’re social animals, hardwired to be concerned about difference as an evolutionary trait. Difference could have got you killed – an enemy or an infectious disease.
The move to an ‘always-on’ medium has been detrimental
Rushkoff and I both agree that the ‘always-on’ media life has been detrimental. Where we disagree: Rushkoff believes that it is a function of platforms such as Twitter, whereas I see it more as a continuum derived directly from the network connectivity that drove immediacy.
Before social was a problem, we had email bankruptcy and information overload. Before widespread web use, 24-hour news broadcasting drove a decline in the editorial space required for analysis, which changed news for the worse.
James Gleick’s book Faster alludes to a similar concept adversely affecting just about every aspect of life.
Dumb Internet
I propose that the dumb internet has come about as much from human factors as technological design. Yes, technology has had its place: algorithms create reductive personalised views of content based on what they think is the behaviour of people like you, and adverts are then vended against that. Consumers are both the workers creating content and the product in the modern online advertising ecosystem, as Jaron Lanier’s You Are Not a Gadget succinctly outlines.
The tools that we have, like Facebook, do provide a path of least resistance to inform and entertain us – although it ends up being primarily entertainment and content that causes the audience to emote.
But there is a larger non-technological pull at work as well. An aggregate human intellectual entropy that goes beyond our modern social platforms.
If we want a web that makes us smarter, complaining about technology or the online tools provided to us isn’t enough:
We need to want to be smarter
We need to get better at selecting the tools that work for us as individuals
We need to use those tools in a considered, deliberate way
Men Know It’s Better to Carry Nothing – The Cut – Medium – women clean up because fashion allows it. She pointed to the size of women’s bags, which allow us — like sherpas or packhorses — to lug around the tool kits of servitude. A woman is expected to be prepared for every eventuality, and culture has formalized that expectation. Online, lists of necessities proliferate: 12, 14, 17, 19, 30 things a woman should keep in her purse. Almost all include tissues, breath mints, hand sanitizer, and tampons — but also “a condom, because this is her responsibility, too.” (A woman’s responsibility for everyone else’s spills extends to the most primal level.) – I don’t think that this ‘carry nothing’ mentality of men is true any more. One only has to look at the backpacks carried around. Or the whole EDC culture of over-engineered products to optimise the carry experience, making a lie of carry nothing as a concept. For a lot of men, the car is the handbag, but that’s a whole other discussion around the idea of carry nothing. More consumer behaviour related content here
Silicon Valley’s China Paradox | East West Center – the period from 2014 to 2017 as a time of “segmentation and synergy,” two words that on their face are opposites of each other. Their juxtaposition forms the core of what Sheehan labels “Silicon Valley’s China paradox.” While at a corporate level US and Chinese companies were entirely separate, the flow of money, people, and ideas reached an all-time high during this period. “This is when you saw a lot of investors from China showing up in Silicon Valley, some prominent US researchers and engineers joining Chinese companies in positions of leadership, and ideas flowing in two directions,” said Sheehan. He noted that the concept of shared bicycles, now popular in US cities, started in China, and both Chinese and US companies have been active in the development of autonomous vehicles. Even while the relationship between the two national governments was in many ways going sour, “the relationship at the grassroots level, the technology relationship, was still very free-flowing,” he noted. Sheehan suggested that the relationship has now entered a new and uncharted phase, which he termed the new “technology cold war,” with the US government asserting national policies in what was previously considered a private arena. This new phase has three dimensions, he said. The first is an effort to disentangle the interconnected technology communities that bind the two countries together. In 2018, the US Congress passed the Foreign Investment Risk Review and Modernization Act (FIRRMA). This new legislation increases US government oversight and supervision of Chinese investment in Silicon Valley, Sheehan pointed out. The US State Department also began restricting visas for Chinese graduate students working in sensitive fields of science and technology. The second dimension is heightened competition between US and Chinese companies in other countries. In general, “American companies know they can’t win in China, and Chinese companies know they can’t make a dent in the US market,” according to Sheehan. So US and Chinese companies are competing in markets such as India, Brazil, and Southeast Asia. (PDF)