A few thoughts on innovation

I started thinking about a post on innovation after an agency meeting about a possible project. My friend Nigel Scott has been researching the venture capital industry; his ideas sparked some of the thoughts in this post.

It caused me to reflect again on innovation and the way we think about it.

Innovation rewards hard work?

We are often told that innovators work really hard and strive to achieve their goals. In Where Wizards Stay Up Late there is a description of the engineering culture that built the early Internet: late nights by engineers and takeout food were considered among the factors that drove it. Engineers were building new technologies as they went, at break-neck speed.

The problem is that for many jobs there is no 9-to-5 now. When I worked in agencies, 12+ hour days were typical depending on the client load. Yet we weren’t pulling Cannes Lions award-winning work out of our butts.

In China, many companies now work to ‘996’: 9am to 9pm, six days a week as core hours. This is a basic minimum requirement for engineers. Somewhere like Huawei tries to build a ‘wolf’ mentality: staff are worked much harder and are expected to retire at 45 – presumably physically and mentally burned out.

Working hard is a hygiene factor; technology has made it that way. Your typical Uber driver is gamed by the driver app into putting in excess of 12 hours a day. Both knowledge workers and unskilled workers have a similar level of time poverty.

Innovation is like buses

For long-suffering public transport users in the UK, many services get compared to buses. Due to road traffic and scheduling, you would often wait an overly long time for a bus to arrive. When it eventually did, there would be another two following very close behind.

You can see a similar thing with innovation.

Whilst we’re used to thinking of John Logie Baird as the inventor of television – and Baird worked very hard on it – the reality is that television was based on a series of inventions from the middle of the 19th century onwards.

There are at least 20 different inventors with some claim to coming up with the light bulb, but Edison did manage to create the first commercially successful one. British school children are taught about Joseph Swan’s carbon filament bulb. It was let down by the vacuum process used in manufacturing and poor quality electricity supplies, so the bulbs didn’t last very long. Swan eventually solved these problems and changed the filament.

It was only at this point that Edison started his research into electric light bulbs.

More recently, I was talking to an agency about a piece of work that didn’t come off in the end. The discussion turned to a drug that had very recently launched. The problem was that although the company was first to market, it wasn’t the only inventor. A large rival had sought approvals for its own product in markets where the original firm hadn’t focused its initial applications. Another two companies were immediately behind them and likely to drop their prices (and profit margins) to make up for later market entry.

If one thinks about the modern computer with its graphical user interface, it was created by layers and layers of innovation. Doug Engelbart, whilst working at SRI International, demonstrated the following to an audience at the Fall Joint Computer Conference in 1968:

  • A graphical user interface (GUI)
  • Mouse pointing device
  • Text manipulation
  • Collaborative editing
  • Video conferencing (a la Skype)

Xerox PARC (Palo Alto Research Center) refined Engelbart’s concepts further into a vision of the complete modern office by 1973. Steve Jobs and his team got in to see it, which drove work on the Lisa and then the Macintosh. Microsoft got in too and eventually came up with Windows; Microsoft also learned from building software applications for the Macintosh.

Digital Research developed its own GUI layer called GEM, which was demoed at Comdex in 1984, right about the time Apple launched the Macintosh. Commodore launched the Amiga in 1985 and also added multi-tasking – the ability to run two or more applications at the same time.

These are just a few examples, kept short for the sake of brevity. But the image of the inventor slaving away in isolation to come up with something uniquely innovative is not rooted in evidence, even though intellectual property law is built on this myth. I don’t want to belittle the work done, but it is as if there is a certain amount of predestination to invention, based on prior innovations.

Innovation happens

This predestination of technological progress is something that Kevin Kelly labelled the Technium. In his book What Technology Wants he posited that technological progress can be slowed, but nothing short of an apocalypse can stop it completely. Here’s what Kevin Kelly said in an interview with Edge.org when supporting the launch of What Technology Wants:

The technium is a superorganism of technology. It has its own force that it exerts. That force is part cultural (influenced by and influencing of humans), but it’s also partly non-human, partly indigenous to the physics of technology itself.

We understand the innovation process?

Nigel Scott has done some research on the historic records of venture capital companies. A key finding was that Silicon Valley venture capital firms do a ‘random walk’ down Sand Hill Road. This implies that much of the advice they dispense is survivorship bias or post-rationalisation.

You hear the phrase ‘pivot’, which means changing the business model in search of profitability. Old-time VCs used to talk about investing in people or teams, which may explain why research by Boston Consulting Group found that women entrepreneurs get less funding than men.

Venture capitalists have the monetary incentive and the budgets to develop a thorough understanding of innovation, yet they don’t seem to apply it successfully. Which raises the question: how much do we really understand about innovation?

Innovation: did software really eat the world?

Back in 2011, Marc Andreessen wrote an op-ed (opinion piece) in the Wall Street Journal, ‘Why Software Is Eating The World’:

Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.

Over two billion people now use the broadband Internet, up from perhaps 50 million a decade ago, when I was at Netscape, the company I co-founded. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.

On the back end, software programming tools and Internet-based services make it easy to launch new global software-powered start-ups in many industries — without the need to invest in new infrastructure and train new employees. In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon’s cloud costs about $1,500 a month.

As one can see, Andreessen’s title is a bit of a misnomer. Software is only the front end of a technology stack that is transforming the world. That transformation started before the web and before broadband infrastructure, with the rise of integrated circuits. Machine learning is doing some impressive things, but it is part of a continuum. Machine learning in data mining builds on work done in academia in the 1980s, and it replicates work done in the 1990s on decision support systems and business intelligence software.

Even back in the early 1990s, commercial chemical labs were using software to guide product development. Rather than having to test every combination religiously, you input formulations and results, and the software would then extrapolate possible combinations and narrow in on an ideal formulation much more quickly.
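
A minimal sketch of that kind of workflow in Python, with made-up numbers: fit a response curve to the formulations tested so far, then use the model to suggest the next candidate rather than brute-forcing every mixture in the lab. The single ‘surfactant’ variable, the scores and the ranges are all hypothetical; the real systems handled many variables at once.

    import numpy as np

    # Hypothetical bench data: surfactant concentration (%) vs. measured cleaning score.
    tested = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    scores = np.array([52.0, 61.0, 68.0, 66.0, 58.0])

    # Fit a simple quadratic response curve to the results entered so far.
    model = np.poly1d(np.polyfit(tested, scores, deg=2))

    # Score a fine grid of untested concentrations and suggest the predicted best,
    # instead of physically running every combination in the lab.
    candidates = np.linspace(1.0, 5.0, 81)
    best = candidates[np.argmax(model(candidates))]
    print(f"Next formulation to try: {best:.2f}% surfactant")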

[Image: ‘It’s a Sony’ logo]

As for machine learning in consumer products, it mirrors the late 1980s. Fuzzy logic came out of a 1965 research paper by Lotfi A. Zadeh at the University of California, Berkeley.

Japanese manufacturers built lifts that optimised for the traffic flow of people, microwaves that set their own defrosting timers, washing machines that customised spin cycles based on the drum load, and televisions that adjusted their brightness to the ambient conditions of the room. (When similar technology was rolled out on early Intel MacBook Pro screens and keyboard backlights, it was billed as game-changing.) Fuzzy logic also removed a lot of blur from camcorder videos and improved energy efficiency, with more precise lighting, heating and cooling. None of these applications are a million miles away from smart homes and consumer technology today.
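
As a rough illustration of how that fuzzy logic worked, here is a toy washing machine controller in Python. The triangular membership functions, load ranges and spin speeds are invented for the example; production controllers used many more inputs and rules.

    def triangular(x, a, b, c):
        """Triangular membership function: zero outside [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def spin_speed(load_kg):
        # How strongly the drum load belongs to each fuzzy set.
        light = triangular(load_kg, -1.0, 0.0, 4.0)
        medium = triangular(load_kg, 2.0, 5.0, 8.0)
        heavy = triangular(load_kg, 6.0, 10.0, 14.0)
        # Each set votes for a spin speed; defuzzify with a weighted average.
        votes = [(light, 1400), (medium, 1000), (heavy, 700)]
        total = sum(weight for weight, _ in votes)
        return sum(weight * rpm for weight, rpm in votes) / total if total else 1000

    print(spin_speed(1.5))  # mostly a 'light' load -> fast spin
    print(spin_speed(9.0))  # mostly a 'heavy' load -> gentler spin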

Western analysis of Japanese technology companies usually attributes their ‘defeat’ by Silicon Valley to an apparent lack of software skills. I’d argue that this lacks an understanding of Japanese software capabilities. From gaming to rock-solid RTOS (real-time operating systems), Japanese products met Andreessen’s software definition. What the Japanese didn’t manage to do was sell enterprise software in the same way as Silicon Valley. That is something to bear in mind given the current glut of machine learning-orientated businesses in Silicon Valley. Does it mean that we won’t have the type of general AI applications that we’ve been promised? No, far from it, though a technological idea often takes several tries before it breaks through.

What becomes apparent is that software making an impact is merely the last stage of previous innovations. The problem with Andreessen’s model is that it portends what Judy Estrin described as innovation entropy.

Andreessen’s model couldn’t exist without:

  • Packet-switched networks – 1960 (RAND)
  • Unix-type operating systems – mid 1960s (MIT, AT&T Bell Labs, General Electric)
  • C programming language – 1972 (Unix development team)
  • Optical fibre networks – 1965 (Telefunken)
  • Internet router – 1966 (UK National Physical Laboratory)
  • ADSL – 1988 (Bellcore)
  • DOCSIS – 1997 (CableLabs)

So the core technologies that Andreessen’s software relied upon to eat the world were between 15 and 50 years old. It also relied on a massive overinvestment in optical fibre. The dark fibre was laid as part of a telecoms boom that occurred around the same time as the dot-com boom. Software isn’t eating the world; it’s just the cherry on top of innovation that has gone before. More importantly, software seems to be an end point that doesn’t extend the base of innovation further.

A second problem is that the semiconductor industry’s phenomenal progress in integrated circuits is slowing down. Part of the problem is that more money is being dumped into disrupting the supply and demand of service industries, rather than funding the start-ups that will power the next wave of underlying innovation that future software will rely on.

ICYMI | 万一你错过了| 당신이 그것을 놓친 경우

The booming male beauty market in China – Daxue Consulting – Market Research China – finding the Korean idol flower boy image difficult to square with mainstream male beauty products

Hayden Cox On Becoming An IWC Ambassador, And The Watches We Should Be Wearing – GQ – interesting choice of ambassador aiming at millennials

Leading taxi-hailing app providers in Japan and South Korea to collaborate | The Japan Times – interesting move by Kakao. It shows the rise in Korea – Japan tourism

Doing One Thing, Well: The UNIX Philosophy | Hackaday – great essay on the design philosophy of Unix

Baidu in Hot Water After Hospital Mix-Up – Caixin Global – not the first time for Baidu

Five for Friday | 五日(星期五) | 금요일에 다섯 가지

I’ve managed to recover from the crushing ennui of Apple’s Special Event. Here are the things that made my day this week:

Dr Eugenia Cheng talks about the key themes in her book The Art of Logic: How to Make Sense in a World that Doesn’t

Merino wool producers assume that cyber punk is a synthetic dystopia. The ad was done for the Woolmark Company by TBWA Sydney. Do humans dream of technical merino wool?

Live footage of Talking Heads performing Once in a Lifetime in 1980

Zegna’s radical reinvention | How To Spend It – great profile of Gildo and the fashion brand that he manages. The process of reinvention doesn’t seem to create a tension with the heritage.

Jori Hulkkonen – Attack Magazine –  he has a great back catalogue, so looking forward to his Simple Music for Complicated People album

Apple Special Event – September 12, 2018

Random notes as I watched the iPhone Xr/s and Apple Watch Series 4 launch.

[Image: Phil Schiller]

Watching the introductory clip, I felt this was an event designed mostly for an internal audience. The events have become a parody of themselves, with very well-worn tropes.

Company and eco-system update

  • Apple stores: 500,000,000 visitors per year. The stores have free wi-fi and classes, so this isn’t just about purchasing or building loyalty with customers. They have become privately owned public space.
  • 2 billion iOS devices sold – many users will have owned at least three devices, so the community of likely iOS users is probably closer to 600 million. iPads tend to end up being communal devices in family homes and so have a longer life.
  • Apple Watch is the number one watch – I found this claim surprising. I find it hard to believe given the ubiquity of the Casio G-Shock range, or the F-91W family of basic digital watches.

Apple Watch Series 4

Some of the engineering is clever: mass-producing a ceramic back, and the way Apple has managed to squeeze an ECG function in there. But there is a lot not to like about the watch.
The case design proportions seem off in the video; it may look better in real life. I am guessing that part of the move is about the cellular aerial, but then you have the ceramic back.

  • They still haven’t sorted the crown positioning and protection – it will still fire up Siri for no apparent reason
  • The device is only minimally waterproof
  • The awful information design of the watch face used in Apple Watch hero images

[Image: Apple Watch Series 4 hero shot with the training watch face]
Which got me rooting through old copies of Wired magazine. They used to have a ‘Future of’ section on the inside back cover. And lo and behold:
[Image: watch concept from Wired’s ‘Future of’ section]

iPhone Xs

I was really unimpressed by this. Don’t get me wrong, it looks ‘nice’ and takes a lot of engineering, but there isn’t a compelling reason for X users to upgrade. I find the AR applications are gimmicks rather than regularly used apps. The notable exception would be the measuring tape app included in iOS 12.

Screen pixel counts are now getting ridiculous – you won’t be able to see the difference in terms of pixel refinement. Contrast may improve in HDR.

The sound work on the device doesn’t recognise that consumers mostly use headphones; it was all about louder speakers.

For iPhone 6/7/8 users, the battery life descriptions of the new X devices were weasel language; I would be wary of upgrading on this cycle.

Facial recognition but no in-screen biometric touch sensor means that you still have a notch. It also means that there is a dissonance in experience with the touch sensors on the latest MacBook Pro and iPad models. How will Apple handle websites that have integrated Apple Pay validation?

As a MacBook Pro user, I took this as a signal to hang on to my current device and wait to see whether Apple changes the authentication approach again on the next round.

A12 Bionic chip: 20 years ago, five trillion instructions per second would have been impressive – that would have been a supercomputer. Now it is pretty much in line with what one would expect from Moore’s Law. Intel are squeezing double that rate of computing power out of FPGAs. You’ve got all that power and you get animojis…

How the software handles the parallelism of the chip is key. That is something that Sony found with the Cell architecture of the PlayStation 3. Don’t expect that power to be obvious in third-party applications. The addressable memory claim surprised me: it’s a 64-bit processor, so of course it could address 512GB of memory.
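
To put a rough number on that, a 64-bit address space dwarfs 512GB, so the figure isn’t remarkable on its own. A quick back-of-the-envelope check (the 512GB is the keynote’s claim; the rest is just arithmetic):

    # Back-of-the-envelope check on the 64-bit addressing point.
    full_address_space = 2 ** 64          # bytes addressable with 64-bit pointers
    claimed = 512 * 2 ** 30               # the 512GB figure from the keynote
    print(full_address_space // claimed)  # 33554432 - the claim uses a tiny fraction of the space
    # Real ARM cores expose fewer physical address bits than 64, but 512GB still fits comfortably.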

  • How much of the A12 chip is required to get FaceID to work?
  • How will the software get the most out of the cores?
  • There isn’t modem integration, which gives rivals an advantage with their circuit board designs.

iPhone camera ‘breakthroughs’ seem to come from intellectual property that Lytro developed?

Dual SIMs – it is definitely a minority interest. It is likely to annoy carriers in mature markets with the exception of challengers like T-Mobile US.

The SIMs are all non-standard formats, which is a pain in the backside. eSIMs are only supported by EE and Vodafone in the UK. The nano-SIM is yet another smaller SIM format that will be hard to sell to carriers. The most attractive model is the China-market one with two physical SIMs.

This could be:

  • Because China Mobile, China Unicom or China Telecom wouldn’t get on board with eSIMs
  • To screw with the Chinese grey market for iPhones (which is on the decline anyway)
  • An unfortunate side effect is that it makes the China models more desirable for a (minority) consumer like me, so the grey market is likely to go the other way

iPhones are coming with a USB-A rather than a USB-C cable in the box, which raises questions about the longer-term commitment to Thunderbolt 3…

iPhone Xr

Why did Apple create so many colour versions? One of Apple’s historic strengths has been keeping a tight leash on the product portfolio.

ICYMI | 万一你错过了| 당신이 그것을 놓친 경우

The End of Days Is Coming — Just Not to China – Foreign Policy – interesting essay on why China doesn’t produce as much apocalyptic fiction as the rest of the world

Zegna’s radical reinvention | How To Spend It

Starbucks debuts in Italy with premium brews, novelty bar – The Mainichi – the irony…