Throwback gadget: shareware

Back before the internet became ubiquitous, software was distributed via bulletin boards. It was expensive to dial into a board, so magazines used to come with storage media pre-loaded with applications mounted on the front cover.

For much of the late 1990s and early 2000s my parents used MacFormat magazine CDs and floppy disks as coffee coasters. One disk might come with bloatware such as the installation software for AOL, Demon or Claranet. The other disk would be full of free or paid-for software.

The paid-for software was often written by a single developer. It was a labour of love / cottage industry hybrid. Often the developers wrote the software to meet a real need of their own, then passed it on because they thought others would benefit as well.

Open source software as we understand it now was only in its infancy in terms of public awareness. Packaged software was big money. As recently as 2000, Microsoft Office for the Mac would have cost you £235. QuarkXPress – the Adobe InDesign of its day – would have cost in the region of £700 plus VAT.

Into the gap sprung two types of software: freeware and shareware.

Freeware was usually provided as-is, with little expectation of application support. It would become orphaned when the developer moved on to other things.

ChocoFlop shareware nag screen

Shareware usually had mechanisms that let you try it; if you could see the benefit, you paid a fee. This unlocked new features or got rid of nag screens (like the one from the image editing app ChocoFlop).

In return you also got support if there were any problems with the app. Shareware hasn’t died out, but it has become less visible in the world of app stores. One application that I have been using on and off for over 20 years is GraphicConverter by Lemke Software. It handles any kind of arcane graphic file you can throw at it and converts it into something usable.

Kagi Software was one of the first companies to provide programmers with a way of handling payments and software activation. Kagi provided an onscreen form to fill out, print and mail along with your payment. It was pre-internet e-commerce.

I can’t remember exactly what utility program I first bought for my college PowerBook, but I do remember that I sent the printed form and cheque to a developer in Glasgow. I got a letter back with an activation code and a postcard (which I’ve now lost) from the Kelvingrove Art Gallery and Museum.

Later on, Kagi were one of the first online payment processors.

From the late 1990s, FTP sites and the likes of download.com began to replace the magazine cover-mounted disks. Last year Kagi died, making life a little more difficult for the worldwide cottage industry of small software developers. It was inconvenient, but with PayPal developers now have an easy way to process payments, and there are various key management options.

Jargon watch: lights out production lines

If you are of a certain age, ‘hand made by robots’ brings to mind the Fiat Strada / Ritmo, a thirty-something-year-old hatchback design that was built in a factory with a high degree of automation for its time.

Fiat subsidiary Comau created Robogate, a highly automated system that sped up body assembly; it was eventually replaced in 2000. The reality is that ‘hand made by robots’ involved a liberal amount of creative licence, and it didn’t enable Fiat to shake off its rust bucket image. Beneath the skin, the car was essentially a Fiat 127. Car factories still aren’t fully automated.

Foxconn is looking to automate its own production lines and create products that truly are ‘hand-built by robots’. Like Fiat, it has its own robotics firm, which is manufacturing 10,000 robots per year.

Foxconn has so far focused on production lines for the final assembly of larger products (like televisions) and workflow on automated machine lines: many consumer products use CNC (computer numerical control) machines. That’s how the chassis of Apple’s iPhones and Macs are made. Foxconn calls these totally automated lines ‘lights-out production lines’.

Foxconn is looking to automate production because China is undergoing a labour shortfall as the population gets older. Foxconn uses a lot of manual workers for the final assembly of devices like Apple’s iPhone because the components are tightly packed together. It will be a while before Foxconn manages to automate this, as robotic motor control isn’t fine enough yet.

More information
Foxconn boosting automated production in China | DigiTimes – (paywall)

Belated Christmas Gift: updated set of marketing data slides

I started pulling together and publishing different data sets focused on online marketing, from social platforms to the size of mobile screens. I think they might be useful for strategists and planners. Feel free to use them; if you do find them useful, drop me a note. You can scroll through the embedded version below and download the PowerPoint version here.

Why Amazon wins

Much has been written about how Amazon has:

  • Amazing data, which it uses to try and better understand intent
  • Access to large amounts of capital, so it can scale internationally and defeat local e-tailing champions
  • An amazing logistics footprint to satisfy consumer needs quickly

But one of the biggest factors in Amazon’s success is the quality of competition that it often faces.

Let me give you an example that happened to me this week. I have kept the vendor’s name anonymous because they are no worse than many other e-tailers – and they make damn fine iPhone cases.

I got an iPhone 7 Plus when the phone first came out and ordered a protective case from my usual preferred case manufacturer. I ordered direct because Amazon hadn’t got it in stock at the time. The supplier sent me two cases instead of one – probably an order fulfilment error.

I then got this email from them this week:

Keep your Pixel and Pixel XL protected and pristine. XXXX Certified XXXX Protection

Protect your Google Pixel and Google Pixel XL with XXXX Certified XXXX Protection.

Commute with Confidence with our Commuter Series or choose Rugged Daily Defence with our Defender Series.

Shop Google Pixel Shop Google Pixel XL

Let’s think about this for a moment. They have me on record buying a cover for an iPhone 7 Plus. The average consumer replaces their phone on roughly a two to three year cycle:

Citigroup estimates the phone-replacement cycle will stretch to 29 months for the first half of 2016, up from 28 months in the fourth quarter of 2015 and the typical range of 24 to 26 months seen during the two prior years.

(Wall Street Journal – Americans Keep Their Cellphones Longer)

They have a number of pieces of information about me:

  • Date of purchase
  • Model of phone that I purchased a case for
  • Colour combination that I selected
  • Gender (based on my title)
  • Address
  • Email

They will also know information about the phone model itself, since they make an Apple-certified product:

  • Dimensions
  • Date of release

They also know, based on previous Apple launches, that this handset is likely to be in the product line for two years: one year as the flagship product and the next as a cheaper option.

So why did they decide to send me the Google Pixel email?

I can think of three likely hypotheses:

  1. The company’s email marketers don’t have access to information that could be used for targeting – good for privacy, not so good for successful email marketing campaigns
  2. The email marketers had the data but didn’t bother to use it – poor work
  3. The email marketers viewed the Pixel as a must-buy device and considered me a likely purchaser – their opinion would be at odds with reviews of the Pixel

Using Occam’s razor, the answer is likely to be one or two. It’s not that hard for Amazon to win with competition like this.
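
As a minimal sketch (the record fields, dates and threshold below are my assumptions for illustration, not the vendor’s actual system), even a one-rule suppression filter on the data they already hold would have kept this email out of my inbox:

    from datetime import date, timedelta

    # Hypothetical CRM records; the field names are assumptions for illustration.
    customers = [
        {"email": "me@example.com", "case_model": "iPhone 7 Plus", "purchased": date(2016, 9, 20)},
        {"email": "pixel_fan@example.com", "case_model": "Nexus 5X", "purchased": date(2015, 11, 2)},
    ]

    # Roughly 29 months, per the Citigroup replacement-cycle estimate quoted above.
    REPLACEMENT_CYCLE = timedelta(days=29 * 30)

    def pixel_campaign_audience(records, today):
        """Suppress anyone who recently bought a case for a competing handset."""
        audience = []
        for record in records:
            recent_iphone_owner = (
                "iPhone" in record["case_model"]
                and today - record["purchased"] < REPLACEMENT_CYCLE
            )
            if not recent_iphone_owner:
                audience.append(record["email"])
        return audience

    print(pixel_campaign_audience(customers, today=date(2017, 1, 6)))
    # -> ['pixel_fan@example.com'] – the iPhone 7 Plus buyer is suppressed

The point isn’t the code; it’s that the rule fits on a napkin, and the data to drive it was already sitting in their order history.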

2017: just where is it all going?

We are entering a period of turbulence in much of the world and I suspect that there are going to be more changes over the coming years.

Smart watches still won’t be as big as fitness trackers. Fitness trackers will peak.

  • Smart watches are struggling for a reason to purchase. Apple’s Watch Series 2 was the product that they should have realised the first time around; it fixed the bugs in the first version. But there is still no compelling reason to purchase
  • Android Wear supporters seem to have eased off on development. Huawei’s watches are now available for half the list price. Lenovo has laid its Android Wear ambitions aside
  • Fitness trackers seem to have done a very good job of reaching health fanatics. However, the market will soon become driven by replacement devices. There is a constant tension between a cheap device, which requires little purchase consideration, and a moderately expensive device that competes against the smartphone doing the same job. It is interesting that Jawbone could not find a buyer and Pebble was sold to Fitbit

If there is a common content format and a rise in content (beyond brand marketing), then VR could take off (and hammer TV sales in the process, at least in single-user situations). I still think that VR goggles could act as a TV substitute for single-person households, shared living and student dorms. When content is time-shifted or binge streamed you can get by without a TV tuner.

The key driver would be the high cost of housing. If you are a hipster living in a small bedsit, having a large TV is a waste of your precious space.

The ‘next billion’ smartphone users in the developing world won’t get their handsets as fast as everyone thinks. Why?

  • Much of the supply will come from small no-name brands. These brands are currently on razor-thin margins, and smartphone manufacturers are being shaken out
  • Razor-thin margins are crushing key component manufacturers; those that are left will prioritise big customers first
  • The Hanjin Shipping meltdown will hit small suppliers with valuable cashflow tied up in containers that can’t move. Hanjin is expected to precipitate failure in other shipping businesses, as the industry still has massive over-capacity and financial institutions will be less interested in helping out distressed businesses. Maersk’s acquisition of Hamburg Süd is a further sign of this
  • Increasing nationalism in key markets like Indonesia and India is requiring local investment in production lines and component sourcing. This will take the focus away from addressing other markets and will likely temporarily raise manufacturing costs
  • A declining economic outlook in mature markets, including China, the US and the EU, will affect the capital available to fund speedy expansion

Leaks about Uber’s finances and rising interest rates are likely to drive increased scrutiny of Silicon Valley businesses. Uber’s finances sound eerily like an investment money pit.

Media investment is going to pour into the Alt-Right at a VC level. It’s a void that VCs have left untouched up to now. Given that many of the markets they’ve tried to disrupt are going nowhere, expect Breitbart and Co. to start seeing VC-funded competition.

2016: crystal ball gazing, how did I do?

Here are the predictions I made at the top of the year:

I expect Uber will continue to funnel money into China and still get sand in its face. Quite what this means for Lyft I am not so sure.

Uber raised more money, realised that things still weren’t improving and then got a face-saving exit from the Chinese market. I’ll call that a win.

Twitter gets a change of management, but that doesn’t do any good… All of this would be bad news for potential advertisers and their intermediaries in the advertising and PR world.

God, where do we start with Twitter? It has had extensive management churn and a big staff lay-off. I don’t think that my own view about a change of management was correct though. I envisaged this as a strategic, proactive move by the board rather than the current revolving door. I have been impressed by how well Twitter advertising has held up. Twitter might look like the Yahoo! of social media, but it still holds a lot of weight with the mainstream media, which still counts for something.

Fintech bubble that will take good ideas and bad ones down together. Banks are currently considered to be ripe for disruption. One of the key problems with this is that technologists think it will be easy to sweep aside regulations that banks operate under.

This one is still percolating. Banks are looking particularly at blockchain as the basis of a better transaction ledger/database. Informally, I have heard that VC funding has largely dried up for fintech start-ups, but the other shoe has yet to drop. Zopa applying for a banking licence and becoming a bank felt like a watershed moment.

The internet in the EU will become increasingly regulated. At the moment the European Union is succumbing to The Fear. 

The fear has grown beyond terrorism to being overrun by immigrants (some of whom will be terrorists). The UK is well on its way to putting into law some of the most draconian web laws in the western world, from porn filtering to sharing citizens’ web histories with a wide range of government agencies.

Overall this has made less progress than I expected because Brexit became the existential challenge that the EU members will seek to vanquish.

We will have reached peak smartphone and tablet. China has now reached replacement rate for devices, and there is a corresponding lack of paradigm shifts in the pipeline for smartphone design and software. Tablets have shown themselves to be nice devices for data consumption, but they neither require regular upgrades like the smartphone nor replace the PC.

We’ve certainly reached peak tablet. Smartphones are taking longer to shake out. What we are seeing is declining margins in smartphones. Apple increased its industry share of profits to 90% despite:

  • Making a weak update to the iPhone 6S
  • Having a declining market share
  • Having a higher cost in terms of bill of materials

There were some one-off factors such as the Samsung Note 7 recall and the collapse of Hanjin Shipping which curtailed the supply of some handsets.

VR in 2016 will be all about finding the right content. VR won’t work in gaming unless it provides e-gaming athletes with some sort of competitive advantage, if it does then gaming will blow things up massively. Gaming will not be the only content vehicle for VR, it needs an Avatar-like moment to drive adoption into the early mainstream.

There were two things that surprised me about VR in 2016.

  • It took Sony so long to get VR on to the PlayStation that it will be a while before we see the impact of gaming on the use of VR. It certainly provides immersive experiences, but does it provide e-athletes with competitive advantage?
  • China blew the number of VR headsets available out of the water, but there has been a corresponding dearth of content. The stuff on YouTube is nice demo-ware, but where is the ‘Breaking Bad’ of VR?

One thing that people aren’t talking about is the role of VR goggles as a replacement for a large TV set. I have heard that one of the most-used apps for VR is Netflix.

Older predictions
2016: just where is it all going? | renaissance chambara
2015: crystal ball gazing, how did I do? | renaissance chambara
2015: just where is it all going? | renaissance chambara
2014: crystal ball gazing, how did I do?
2014: just where is it all going? | renaissance chambara 
Crystal ball-gazing: 2013 how did I do?
2013: just where is it all going?
Crystal ball-gazing: 2012 how did I do?
2012: just where is digital going?
Crystal ball-gazing: 2011 how did I do?
2010: How did I do?
2010: just where is digital going?

China is making a product that Apple should have made

Trawling eBay gives access to a cottage industry of predominantly China-based suppliers. They take iPod Classics and remanufacture them. They get new cases and new batteries.

The real trick is in the new component put into the device. Out goes the 120GB or 160GB Toshiba micro hard drive and in goes a 256GB SSD. Apple had abandoned production of the iPod Classic because it couldn’t get the right parts any more. Technology had moved on and flash memory had replaced micro hard drives as the storage technology of choice for portable consumer devices.

Swapping out the hard drive for an SSD gives the iPod a number of advantages:

  • It’s a third lighter than Apple’s version of the iPod Classic. This changes the dynamics of usage: it no longer has the same heft, and you feel less conscious of it in a pocket or jacket
  • The battery lasts longer. I now get about 30 hours of listening from the iPod; by comparison I get 18 hours out of my smartphone. If I used the smartphone as a music player as well, that battery time would drop further. If I used a streaming service, it would sound worse and hammer the battery life and mobile phone bill even further
  • It holds more music. At 256GB, up from 160GB in the last model of iPod Classic, it makes the difference between being able to carry all of my music library or not. You don’t need Spotify when you have 15,000+ tracks to choose from
  • The same great iPod experience. iTunes still syncs with the device. It has a good quality DAC (digital-to-analogue converter) chip. With the right headphones and a sufficiently high sample rate it is indistinguishable from CDs. Under normal circumstances it sounds better than your typical smartphone – which is trying to do lots of jobs well
  • It is quieter than the original iPod Classic. There is no longer the noise of a hard drive spinning up and reading the music data from the disk
  • Vigorous movement is not a problem. Apple had done a good job with the original iPod Classic: songs were cached in RAM to iron out temporary stoppages caused by movement affecting the hard disk. An SSD has no moving parts, so it isn’t an issue any more

What becomes apparent is that Apple wouldn’t have had to make that much effort to build this product itself, but for whatever reason it didn’t want to.

I suspect that part of this is down to:

  • The law of big numbers. The iPod Classic revamped in this way would be a decent business for most companies, but just isn’t as big as Apple is used to
  • A modified iPod is probably too simple a design solution. Apple likes to take a big step forward (even when it doesn’t) – there are no plaudits or design awards in an iPod Classic with a solid state drive

The reimagined iPod is a development in sharp contrast to Apple’s new product developments:

  • Loved products bought by key Apple advocates have gone without updates or been ignored: the Mac Pro and the Apple display (which Apple has abandoned)
  • Moving out of entry-level products. With the MacBook Pro and MacBook line-ups, the entry device is now a secondhand laptop rather than the 11″ MacBook Air or the non-Retina MacBook Pro
  • Big bets that aren’t resonating with the marketplace: the Apple Watch has been a best-selling smart watch, but it is in a category which lacks a compelling reason to purchase. The iPad is a passive content consumption device for most consumers, with a replacement cycle that would be more familiar to television manufacturers than a computer company

Thoughts on the new Apple MacBook Pro

Having slept on it for a few days, here are my thoughts on Apple’s new MacBook Pro. I have been a Mac user since it was the mark of eccentricity. I am writing this post on a 13″ MacBook Pro and have a house full of other Macs and peripherals.

Theatre
Apple launched a new range of MacBook Pros on October 27, 2016, a day after Microsoft’s reinvigoration of its Surface franchise. Apple ignores timing and tries to plough its own furrow, but comparisons by journalists and market analysts are inevitable.

Microsoft has done a very good job at presenting a device that owes its build quality to the schooling that Apple has given to the Shenzhen eco-system over the past two decades.

The focus on touch computing feels like a step on a roadmap to Minority Report style computing interfaces.  Microsoft has finally mastered the showmanship of Apple at its best.

Apple’s presentation trod a well-worn formula. Tim Cook acts as the ringmaster and provides a business update. Angela Ahrendts sits in a prominent place in the audience and appears in a few cut-in shots. Craig Federighi presented the first product, setting a tone of light self-deprecating humour with in-jokes that pull the Apple watchers through the fourth wall and draw them inside ‘Apple’. Eddy Cue plays a similar role for more content-related products. In that respect they are interchangeable, like pieces of Lego.

Phil Schiller was given the job of doing the heavy lifting on the product introduction. Given the Pro nature of the computer, Apple couldn’t completely hide behind ‘design’ as it has done with the MacBook. While the design had some points of interest, including Touch ID and the touchpad, the ports on the machine are a major issue.

There was the usual Jony Ive voiceover video on how the product was made, with identikit superlatives from previous launches. It could almost be done by a bot with the voice of Jony Ive, rather than disturbing his creative process.

It all felt like it was dialled in; there wasn’t the sense of occasion that Apple has managed in the past.

User experience
Many people have pointed out that Microsoft’s products looked more innovative and seemed to be actively courting the creatives that have been the core of Apple’s support. In reality much of it was smoke and mirrors. Yes, Apple has lost some of the video market because its machines just aren’t powerful enough in comparison to other workstations out there.

The touch interface is more of a red herring. Ever since the HP-150, touch hasn’t played that well with desktop computers because content creators don’t like to take their hands too far from the keyboard when they work. It ruins the flow if you can touch type or have muscle memory for your Photoshop shortcuts.

Apple didn’t need to invent the Surface Dial because an equivalent already existed – the PowerMate, made by Griffin Technology. In fact the PowerMate had originally been available for Windows Vista and Linux as well, but for some reason the device software didn’t work well with Windows 7 and 8.

I can see why Apple has gravitated towards the touchpad instead. But it needed to do a better job telling the story.

Heat
Regardless of the wrong-headedness of Microsoft’s announcements, the company has managed to get much of the heat that Apple used to bring to announcements. By comparison Apple ploughed exactly the same furrow as it has done for the past few years – the products themselves were interchangeable.

The design generated little enthusiasm amongst the creatives that I know, beyond agitation at the pointless port changes and the inconvenience they bring.

While these people aren’t going to move to Microsoft, the Surface announcements provided them with a compare-and-contrast experience which agitated the situation further. To quote one friend:

Apple doesn’t know who it is. It doesn’t know its customers and it no longer understands professionals.

Design
Apple’s design of the MacBook Pro shows a good deal of myopia. Yes, Apple saved weight in the laptops, but that doesn’t mean that the consumer saves weight. The move to USB C-only ports has a huge impact: a raft of new dongles, SD card readers and adaptors is required. If, like me, you present to external parties, you will have a Thunderbolt-to-VGA dongle.

With the new laptop, you will need a new VGA dongle, and a new HDMI dongle. I have £2,000 of Thunderbolt displays that will need some way of connecting to Apple’s new USB C port. I replace my displays less often than my laptop. We have even earlier displays in the office.

Every so often I transfer files on to a disk for clients with locked-down IT systems. Their IT departments don’t like file transfer services like WeTransfer or FTP. They don’t like shared drives from Google or Box. I will need a USB C to USB adaptor to make this happen. Even the encrypted USB thumb drive on my ‘real life’ key chain will require an adaptor!

I will be swimming in a sea of extra cables and parts that will weigh more than the half pound that Apple managed to save. Thank you for nothing, Apple. Where interfaces have changed before, there was a strong industry argument for it – Apple hit the curve at the right time for standards such as USB and dispensing with optical drives.

The move to USB C seems to be more about having a long thin slot instead of a slightly taller one. Getting rid of the MagSafe power connector has actually made the laptop less safe; MagSafe is still superior to anything else on the market. Apple has moved from an obsession with ‘form and function’ to ‘form over function’.

The problem is one of Apple’s own making: it has obsessed about size zero design since Steve Jobs used to have a Motorola RAZR.

Price versus Value
So despite weighing half a pound less and bringing a lot of inconvenience, the devices come in at $200 more expensive than their predecessors. It will be harder for Apple customers to justify upgrading to this device unless their current machine is at least five years old. I don’t think that this laptop will provide the injection in shipments that Apple believes it will.

A quick word on displays
Apple’s move away from external displays was an interesting one. There can’t be that much engineering difference between building the iMac and the Apple display, yet Apple seems to have abandoned the market. It gives some professionals a natural break point to review whether they should stay with Apple. Apple displays aren’t only a product line but a visible ambassador of Apple’s brand: you can see the sea of displays in agencies and know that they are an Apple shop. It is the classic ‘Carol Bartz’ school of technology product management.

More information
Initial thoughts on Windows 8 | renaissance chambara
Size Zero Design | renaissance chambara
Why I am sunsetting Yahoo! | renaissance chambara
Apple just told the world it has no idea who the Mac is for – Charged Tech – Medium
Apple (AAPL) removed MagSafe, its safest, smartest invention ever, from the new MacBook Pros — Quartz
How Apple’s New MacBook Pros Compare To Microsoft’s New Surface Studio | Fast Company | Business + Innovation – a subtly cutting article on the new MacBook Pro
New MacBook Pro touches at why computers still matter for Apple | CNet
Apple’s new MacBook Pro kills off most of the ports you probably need | TechCrunch

The internet of hacking or WTF is happening with my smart home?

Mirai is a botnet powered by a range of devices including infected home routers and remote camera systems. It took over these systems by using their default passwords. The network of compromised machines is then directed to overload a target network or service. Last week the Dyn DNS service was targeted, which restricted access to lots of other services for users on the east coast of the US.

DNS is like a telephone directory of internet destinations; if no one knows where to go, it becomes a lot harder to get in touch.
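
As a rough sketch, this is the kind of lookup that stopped working for many users during the Dyn outage (the hostname is just an example):

    import socket

    # A DNS lookup turns a human-readable name into an IP address.
    # If the resolver behind that name (for example Dyn's servers) can't answer,
    # the call fails and the service looks 'down' even though its servers are fine.
    try:
        address = socket.gethostbyname("www.example.com")
        print("Resolved to", address)
    except socket.gaierror as error:
        print("Name resolution failed:", error)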

DDoSing
Mirai didn’t spring miraculously out of thin air. It finds its history in passionate gamers who used distributed denial of service (DDoS) attacks to slow down or even kick opponents off online gaming platforms. Eventually the gaming companies got hip to it and went after the cheaters; not to be outdone, the cheaters went after the gaming companies.

Taking a service offline using DDoS became a source of extortion against online banking and e-commerce services. Attacks can be used as a form of ‘digital hit’ to take out opponents or critics like online security commentator Brian Krebs.

Computing
Moore’s Law has made computing power so small and plentiful that it is surprising what we often have in the palms of our hands. The first Cisco router was built on the circuit board of a Sun Microsystems workstation. Home routers now are basically small computers running Linux. A CCTV camera box or a DVR is a basic PC complete with a hard drive.

Back in 2007, BlackBerry co-founder Mike Lazaridis described the iPhone as

“They’ve put a Mac in this thing…”

The implication was that the power of a sophisticated PC was essentially in the palm of one’s hand. The downside of this is that your thermostat is dependent on a good broadband connection and Google-based cloud services, and your television can get malware in a similar manner to your PC.

Security
For a range of Chinese products identified as part of the botnet, the manufacturer acknowledged that they shipped with a default admin password. It fixed the problem in a later version of the device firmware; resetting the default password is now part of the initial device set-up.
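
As a minimal sketch of why this matters (the device address and credential list below are illustrative assumptions, not Mirai’s actual code), this is roughly how easy it is to check whether a camera or DVR on your own network still answers to a factory login:

    import urllib.request
    import urllib.error

    # Common factory credentials of the kind Mirai-style scanners try first.
    DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "1234"), ("root", "root")]

    def accepts_default_login(device_url):
        """Return the first factory username/password the device's admin page accepts, if any."""
        for username, password in DEFAULT_CREDENTIALS:
            manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
            manager.add_password(None, device_url, username, password)
            opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(manager))
            try:
                opener.open(device_url, timeout=5)
                return (username, password)  # the device accepted a factory login
            except (urllib.error.HTTPError, urllib.error.URLError):
                continue  # rejected or unreachable; try the next pair
        return None

    # Audit a device on your own network (the address is hypothetical).
    print(accepts_default_login("http://192.168.1.64/"))

If that function returns anything other than None, the device is exactly the kind of recruit Mirai was built to find.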

The current best advice for internet of things security is to protect the network with a firewall at the edge. The reality is that most home networks have, at best, a firewall on the connected PCs. The average consumer doesn’t have a dedicated security appliance on the edge of the home network.

Modern enterprises no longer rely only on security at the edge; they take a layered, ‘defence in depth’ approach to security.

That would be a range of technology including:

  • At least one firewall at the edge
  • Intrusion detection software as part of a network management suite
  • A firewall on each device
  • Profile-based permissions across the system (if you work in HR, you have access to the HR systems, but not customer records)
  • Decoy honeypot systems
  • All file systems encrypted by default so if data is stolen it still can’t be read

Processes:

  • Updating software as soon as it becomes available
  • Hard passwords
  • Two-factor authentication

Defence in depth is complex in nature, which makes it hard for the average family to pull off. IoT products are usually made to a price point. These are products as appliances, so it is hard for manufacturers to build a security eco-system around them. The likelihood of anti-virus and firewall software for light bulbs or thermostats is small to non-existent.

The Shenzhen eco-system
Shenzhen, just across the border from Hong Kong, has been the centre of assembly for consumer electronics over the past 20 years. Although this is changing – Apple devices, for instance, are now assembled across China – Shenzhen has expanded into design, development and engineering. A key part of this process has been a unique open source development culture: specifications and designs are shared informally under legally ambiguous conditions, which spreads development costs across manufacturers and allows for iterative improvements.

There is a thriving maker community that blurs the line between hobbyists and engineers. A hobbyist’s passion can quickly become a prototype and then go into production. Shenzhen manufacturers can go to market so fast that they harvest ideas from Kickstarter and can have them on sale before the idea has even been funded on the crowdfunding platform.

All of these factors would seem to favour the ability to get good security technologies engineered directly into the products by sharing the load.

China
The European Union was reported to be looking at regulating security into the IoT eco-system, but in the past regulation hasn’t improved the security of related products such as DSL routers. Regulation is only likely to be effective if it is driven out of China. China does have a strong incentive to do this.

The government has a strong desire to increase the value of Chinese manufacturing beyond low value assembly and to have local products seen as being high quality. President Xi has expressed frustration that Chinese manufacturing appears sophisticated, yet cannot make a good ballpoint pen.

Insecurity in IoT products is rather like that pain point of poor quality pens. Fixing it is a win for customers, the Chinese manufacturing sector and, by extension, the Party.

More Information
WSJ City – Massive Internet Attack Stemmed From Game Tactics
Your brilliant Kickstarter idea could be on sale in China before you’ve even finished funding it | Quartz
Asus lawsuit puts entire industry on notice over shoddy router security | Ars Technica
Europe to Push New Security Rules Amid IoT Mess — Krebs on Security
Why can’t China make a good ballpoint pen? | Marketplace.org

Technology autopsies

Jean-Louis Gassée has been at the centre of the technology industry for at least the past 30 years. He worked at Apple through the early Macintosh years, founded Be Inc. – a now forgotten OS and workstation company that focused on multimedia prowess – and was chairman at PalmSource.

He recently published a meditation on why Palm, BlackBerry, Nokia and Microsoft failed in the smartphone sector. It makes a really good read; I have linked to it under more information. But there are a few details missing, I suspect for ease of storytelling. I’ve added them below as accompanying notes to his essay.

Nokia

Reading David Wood’s ‘biography’ of Symbian makes you realise how, from the early years, the OS was kludged together into something fit for purpose. Moving Symbian on was a major issue, one that Nokia knew it faced. It was perplexing why Nokia couldn’t get Maemo right. I had used a developer model Nokia N950 and it was an impressive piece of kit – a symbol of what could have been.

A second part of Nokia’s problems was hardware-related. Nokia’s networks and phones businesses had thrown their lot in with Intel on WiMAX for 4G, rather than LTE, which was championed by Siemens, Ericsson and NTT Docomo.

That put them in the wrong camp to do business with Qualcomm and its Snapdragon processors for modern smartphones. Nokia’s engineering brain trust had been completely wrong-footed. It also explains why valuable time was lost merging Nokia’s next generation mobile device OS with Intel’s similar project. Ironically, this operating system now powers Samsung smartwatches – a testament to its ability to squeeze real-world performance out of extremely low-powered devices.

Texas Instruments, a long-time Nokia supplier, pulled out of the mobile embedded processor market in 2012, which would have had implications for Nokia’s much vaunted supply chain, in particular chip pick-and-place machines. One could see how these operational problems would have rippled through the engineering organisation.

Nokia actually had a prototype iPhone-esque device running by mid-2004, but was afraid to make a leap of faith:

“It was very early days, and no one really knew anything about the touch screen’s potential,” Mr. Hakkarainen explained. “And it was an expensive device to produce, so there was more risk involved for Nokia. So management did the usual. They killed it.”

I had used touch screen devices since 1999, but it is hard to explain how transformative a responsive capacitive touchscreen interface was in comparison to everything that had gone previously.

Palm

By 2002 Palm had acquired Be Inc., presumably because it realised that mobile computing needed a modern OS for its underpinnings. Palm had previously looked at moving its OS over to Symbian, as by 2000 PalmOS was creaky. PalmOS at that time ran on a low-power version of the Motorola 68000-series processor that powered Macs from the mid-1980s to the mid-1990s. The OS was migrated to ARM processors for use on mobile devices.

Its PalmSource subsidiary was spun out of the business to better build an eco-system of licensees. The work of Be Inc. made it into a modern version of Palm OS called Cobalt in 2004, but this was not used by Palm or anyone else. Cobalt offered multitasking, better security and better multimedia.

PalmSource acquired a Chinese mobile Linux company in early 2005; PalmSource itself was then sold to ACCESS of Japan.

ACCESS Linux offered the Palm interface running on top of a Linux micro-kernel, plus functionality for mobile networks and so on. ACCESS Linux was ready to go in 2006, prior to the launch of the iPhone. While there was collaboration with NEC, Panasonic and NTT Docomo, no ACCESS Linux-powered device was ever launched.

Instead Palm launched its WebOS in 2009. WebOS was slow and sluggish to use, partly because the device was under-powered compared to competitor products. So despite having an interface with many of the right pieces in place, Palm had at least three goes at the software and still failed badly in terms of execution.

Microsoft

Gassée rightly points out that Google giving away its OS left Microsoft’s business model for Windows Mobile disrupted.

However, truth be told, Google did a poor job of signing up the disparate Chinese manufacturers and getting them fully legitimate on Google services. Many Chinese handsets had not gone through official channels for compatibility testing (CTS) and do not have a Google Mobile Services (GMS) licence. Google historically hadn’t bothered to scale to address the international aspirations of Chinese tier-two and tier-three handset makers.

Building a partner eco-system in the west would have been challenging. Microsoft had too many skeletons in its closet and its partners didn’t do too well.

  • Nortel was a historic Microsoft partner in wireline telecoms prior to going bankrupt in 2009. Where companies have PC / phone integration, it is built on the know-how Microsoft gained with Nortel on VoIP PBXs
  • Motorola had a Microsoft Windows Mobile-powered smartphone, the Motorola Q. That worked out sufficiently well that Motorola abandoned it and focused on Android devices
  • Sony, when it was co-branded Sony Ericsson, used Windows Mobile for its Xperia phones for two years once it recognised that Symbian had reached the end of the line. Sony Ericsson eventually moved over to Android in March 2010; the company has struggled to remain relevant in the mobile market
  • Sendo was a start-up founded in 1999 that signed an agreement with Microsoft to be the go-to-market partner for its Smartphone 2002 mobile operating system. The deal gave Microsoft a royalty-free licence to Sendo’s designs if the company went insolvent. There was a legal dispute when Microsoft used Sendo’s designs to create the first of the Orange SPV phones made by HTC
  • After making Windows phones from 2009 to 2013, LG said that there was no demand for Windows Phone devices and moved its portfolio exclusively over to Android where it competes with a respectable performance against Samsung

Microsoft’s name in the telecoms industry is mud. To add insult to injury its Skype VoIP application is a direct competitor to carrier voice minute businesses on both wireless and wired connections.

More information
Blackberry: Meditation At The Grave | Jean-Louis Gassée
Nokia’s New Chief Faces Culture of Complacency | New York Times (paywall)

According to Google, PR and SEO are no longer earned media-only disciplines

Back in August, Google started to roll out changes to its Keyword Planner tool. Users who did not have an active AdWords campaign running on their account would no longer get search volume data. Instead an indicative range appeared.
Crippled version of Google Keyword Planner – information that isn’t particularly useful.

Search volume as a directional metric is important for both online and offline communications:

  • Public relations and branding specialists use it to understand the language of the customer. PRs would use it to help hone key messages; branding specialists would use it to help build the brand lexicon or editorial guide
  • SEO specialists use this information as part of their process to pick the most important keywords for a web page or website (see the sketch below)
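
As a quick illustration of why a range is so much less useful than an exact figure (the keywords, volumes and bucket boundaries below are my assumptions, not Google’s), exact volumes let you rank candidate terms, while bucketed ranges make them indistinguishable:

    # Hypothetical monthly search volumes for three candidate keywords.
    candidates = {"graphic converter": 2400, "image converter": 8100, "picture converter": 3600}

    def to_indicative_range(monthly_searches):
        """Collapse an exact volume into a coarse bucket of the sort Keyword Planner now shows."""
        for upper in (100, 1000, 10000, 100000):
            if monthly_searches <= upper:
                return f"{upper // 10} - {upper}"
        return "100K+"

    # With exact volumes you can pick a clear winner...
    print(sorted(candidates, key=candidates.get, reverse=True))
    # ...with ranges, all three candidates collapse into the same '1000 - 10000' bucket.
    print({keyword: to_indicative_range(volume) for keyword, volume in candidates.items()})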

Now PR people and branding specialists need access to an account with a continuous advertising spend running, even if they aren’t doing any online-related work themselves.

The road to paid-only access has been a long one. Keyword Planner was announced in 2013, a result of Google rationalising some of its existing tools into one: the Keyword Tool and Traffic Estimator were merged. Keyword Planner was notable for requiring an AdWords account – a noticeable change and a not-too-subtle hint from Google.

Unlike other product changes this latest ‘enhancement’ was not announced on any of the Google product blogs like Inside AdWords, but instead was acknowledged retrospectively by a Google spokesperson answering a question from Search Engine Land. It has been up to the community to explore the full implications.

One aspect is that if you reduce your Google advertising spend, you lose access to the data. So PR, branding and SEO become inextricably linked with PPC advertising. Critics could accuse Google of abusing its market power, with a 95%+ share of search in Europe; Google would argue that it has had to move this way because of bad actors. Secondly, since the service had been provided for free since it launched at the end of 2008, there was never any guarantee that it would remain free to use.

Google now has antitrust investigations coming at it in markets around the world; it probably just doesn’t care any more and it’s all about profit maximisation. If one looks at the Alphabet group overall, Google is the obvious cash cow.

However, it is ironic that the Russian search engine Yandex, via its Wordstat tool, now provides better free information on search volumes than Google.

More information
What the heck is going on with Google Keyword Planner? | Search Engine Journal
Introducing Keyword Planner: combining the Keyword Tool and Traffic Estimator into One | Google Inside AdWords blog
Announcing the Search-based Keyword Tool | Google Inside AdWords blog
Yandex Wordstat

Throwback gadget: Danger Hiptop

Years ago I read an article which talked about the collective memory of London’s financial district being about eight years or so. Financiers with beautifully crafted models in Excel would be doomed to make the same mistake as their predecessors.
Hiptop
Marketers make the same mistakes, unable to draw on patterns of universal human behaviour where it meets technology. Today’s obsession with the ‘dark social’ of OTT messaging platforms is very reminiscent of the culture that grew up around the Danger Hiptop. The Hiptop drove use of instant messaging platforms (Yahoo!, AOL and MSN) in a similar way to today’s use of Kik, Facebook Messenger and WhatsApp by young people.

Heritage

Danger was started back in 1999 by veterans from Apple, Philips and WebTV.

Back then mobile data was very primitive, email was slow and the only people I knew who used mobile data on a regular basis were press photographers, sending images back from early digital SLRs using a laptop connected up to their phone. At that time it was still sometimes easier to bike images over. 3G wireless was on the horizon, but there wasn’t a clear use case.

Apple was not the force it is now, but it was recovering from a near-death experience. The iMac, blue-and-white G3 tower units and ‘Wall Street’ laptops had reignited belief amongst core customers. Mac OS X Server 1.0 was released in March that year and pointed to the potential that future Macs would have.

WebTV at the time was a company that felt like it was at the apex of things. Before the internet took off, companies like Oracle and BT had tried providing interactive TV services, including CD-ROM-type experiences and e-commerce, in a walled-garden environment. This was based on having a thin client connected to a TV as a monitor. WebTV took that idea and built it upon the internet of the mid-1990s. It wasn’t appreciated how commoditised the PC market would become over time. WebTV was acquired by Microsoft in 1997; later that year Microsoft would also buy Hotmail.

At the time, Philips was a force to be reckoned with in consumer electronics and product design. The company had a diverse portfolio of products and a reputation for unrewarded innovation including the compact cassette, interactive CD media and audio compact discs. Philips was the company that the Japanese wanted to beat and Samsung still made third-rate televisions.

Some of them were veterans of a failed start-up called General Magic that had spun out of Apple. A technology super-team of engineers and developers came up with a wireless communicator device that failed in the marketplace. Its name became a byword for a failed start-up years later. Talent was no predictor of success: General Magic was the Silicon Valley equivalent of Manchester United getting relegated and going bankrupt in a single season. So it is understandable that they may have been leery of making yet another wireless device.

The device

The Hiptop was unapologetically a data-first device. It was a thick device with a sliding screen which revealed a full keyboard and a four-way directional button to move the cursor; on later devices this became a trackball. The screen was a then-giant 240 x 160 pixels. It became available in colour during the device’s second iteration, and later devices had a screen that was 854 pixels wide.

It was large enough to provide a half-decent browsing experience and to read and write messages and email. It was held in a landscape orientation, and the chunky frame worked well in a two-handed hold not that different from a games console controller, with thumb-based typing which worked better than the BlackBerry keyboard for me. Early devices let you move around the screen with a four-way rocker switch; later devices had a trackball. This keyboard rather than touchscreen orientation made sense for two reasons:

  • Touchscreens were much less responsive than they are now
  • It enabled quick-fire communication in comparison to today’s virtual smartphone keyboards

Once the device went colour it also started to have LEDs that lit up for ringing and notifications, providing the kind of visual cues enjoyed by Palm and BlackBerry owners.

The Hiptop had a small (even by Symbian standards) number of apps, but these were held in an app store. At the time, Symbian had signed apps as a precaution against malware, but you would usually download the apps from the maker’s website or the likes of download.com or TUCOWS and then sideload them on to the device from a Mac or PC.

The Hiptop didn’t need the mediation of a computer, in this respect it mirrored the smartphones of today.

Product life

When Danger launched the Hiptop in 2002, carriers had much more sway over consumers. The user experience of devices was largely governed by carriers, who usually made a mess of it. They decided what the default applications on a device would be and even the colour scheme of the default appearance theme.

Danger’s slow rise to popularity was because it had a limited number of channels per market. In the UK it was only available via T-Mobile (now EE).

In the US, the Hiptop became a cult item primarily because IM had grown in the US in a similar way to SMS usage in Europe.

Many carriers viewed Hiptop as a competitor to BlackBerry and refused to carry it in case it would cannibalise sales.

Danger was acquired in 2008, and that is pretty much when the death of the Hiptop set in, as Microsoft acquired the team to build something different. An incident in which the Danger data centres lost consumers’ data and took two months to restore full service from a month-old back-up didn’t help things. It was a forewarning of how dependent on cloud services users would become.

Danger held much user data and functionality in the cloud; at the time it made sense as it kept the hardware cheaper. Danger devices came with a maximum of 2GB of internal memory.

Even if Microsoft hadn’t acquired Danger it would have been challenged by the rise of both Android and iOS. Social platforms like Facebook would have offered both an opportunity and a challenge to existing messenger relationships. Finally the commoditisation of hardware would have made it harder for the Hiptop to differentiate on value for its millennial target market.


The Yahoo! Data Breach Post

Yahoo! had a data breach in 2014; it declared the breach to consumers on September 22. This isn’t the first large data breach that Yahoo! has had over the past few years, just the largest.

In 2012, there was a breach of 450,000+ identities. Millions of identity records were apparently being sold by hackers in August 2016, which the media initially linked to the 2012 breach. It would be speculative to assume that the records for sale in August were part of the 2014 raid.

The facts so far:

  • 500 million records were stolen by the hackers. Based on the latest active email account numbers disclosed for Yahoo! many of these accounts are inactive or forgotten
  • Some of the data was stored unencrypted
  • Yahoo! believes that it was a state sponsored actor, but it has offered no evidence to support this hypothesis. It would be a bigger reputational issue if it was ‘normal’ hackers or an organised crime group
  • There are wider security implications because the data included personal security questions

The questions

A Vermont senator asked the following questions in a letter to Yahoo!:

  • When and how did Yahoo first learn that its users’ information may have been compromised?
  • Please provide a timeline detailing the nature of the breach, when and how it was discovered, when Yahoo notified law enforcement or other government authorities about the breach, and when Yahoo notified its customers. Press reports indicate the breach first occurred in 2014, but was not discovered until August of this year. If this is accurate, how could such a large intrusion of Yahoo’s systems have gone undetected?
  • What Yahoo accounts, services, or sister sites have been affected?
  • How many total users are affected? How were these users notified? What protection is Yahoo providing the 500 million Yahoo customers whose identities and personal information are now compromised?
  • What steps can consumers take to best protect the information that may have been compromised in the Yahoo breach?
  • What is Yahoo doing to prevent another breach in the future?
  • Has Yahoo changed its security protocols, and in what manner?
  • Did anyone in the U.S. government warn Yahoo of a possible hacking attempt by state-sponsored hackers or other bad actors? When was this warning issued?

Added to this, shareholders and Verizon are likely to want to know:

  • Chain of events / timing on the discovery on the hack?
  • Has Yahoo! declared what it knew at the appropriate time?
  • Could Yahoo! be found negligent in their security precautions?
  • How will this impact the ongoing attrition in Yahoo! user numbers?

Additional questions:

  • How does Yahoo! know that it was a state sponsored actor?
  • Was there really Yahoo! data being sold on the dark web in August?
  • Was that data from the 2014 cache?
  • How did they get in?

More information
An Important Message About Yahoo User Security | Yahoo – Yahoo!’s official announcement
UK Man Involved in 2012 Yahoo Hack Sentenced to Prison | Security Week
Congressional Leaders Demand Answers on Yahoo Breach | Threat Post

Modern PR impact and consequences

Jessica Lessin wrote a great piece in The Information about her perspective as a journalist on how the practice of (tech) PR had changed (at least in Silicon Valley). The New PR Reality and What it Means outlines a number of traits emblematic of modern PR:

  • Press release as op ed piece on corporate or executive blog to promote one “story of record” about whatever you want to announce
  • Lessin considered exclusives with a friendly publication to be another variant of the same strategy
  • Lessin laments the demise of the press conference and the access that it brings to corporate executives for journalists.

Lessin also warns that the lack of information and dialogue reduces the variations and reflections one would see on the story in terms of analysis. The audience needs to have a greater capacity for critical thinking and a certain amount of cynicism to ask why.

The Silicon Valley bubble

Lessin and peers like Kara Swisher got to see an industry mature over time. They were in the right part of the world to build face-to-face relationships with the people that mattered.

The reality for journalists outside the Silicon Valley area was generally less access. 80 percent of the time when I arranged media access to my clients it was a ‘down-the-line’ telephone interview.

As an outsider who has had the opportunity to observe public relations and media relationships in Silicon Valley, I was surprised by the cordial, deferential aspect of it. There generally aren’t that many challenges; dissenting voices are usually shrill and stifled through a lack of access. The classic examples of this are Apple’s relationship with The Register, the 2009 blacklisting of CNet by Google over Eric Schmidt’s opinions on privacy, or Peter Thiel’s role in putting Gawker Media out of business.

This constriction of debate and access that Lessin cares about is in keeping with a wider trend of Silicon Valley hubris and ego.

The reasons why public relations has changed

In the late 1990s through to the early 2000s, the mass media was the best way to talk to the end consumer, through advertising and PR. PR had a relatively low cost barrier to entry, but was relatively inefficient from a cost-per-reach and campaign impact point of view.

Online advertising offered new dynamics that changed the way marketing money was spent. This meant that you had to do more with a static or declining marketing spend, which had a number of follow-on factors:

  • Less budget for out-of-pocket expenses. The first agency I worked in launched Hitachi Data Systems’ Skyline Trinium range of IBM-compatible mainframes. (I know, I know, you want to sleep.) We took a whole pile of journalists on a helicopter flight over London’s financial district as part of the launch, so they could see the iconic skyline (I know, groan at the crushingly twee creative concept). You just wouldn’t do that now. There isn’t the money for decent gift bags or cleverly presented press packs either
  • Mid- and senior-level agency staff salaries have been static for at least the past decade, which affects the quality of the thinking and the work done

There was also a corresponding change in the way PR was done in order to improve campaign impact. It used to be that you made a big bang and hoped that the deluge of coverage would provide a 360-degree experience of sufficient reach, frequency and impact that client commercial goals would be achieved.

That theory fell down. Not only had PR spend changed, but publication advertising spend had changed as well. There were fewer publications and fewer journalists writing for them. Those that wrote for the publications had to write more content.

That meant more time writing and less time researching, thinking and networking. Less time to turn up at press conferences. Press conferences became a relatively high-risk tactic for the agency PR to recommend unless you had a landmark event.

What if you throw a press conference and few people show up or don’t stick around? Angela Eagle’s disastrous launch of her campaign to become leader of the UK Labour Party is a case in point.

Through little fault of Eagle’s campaign team, the Conservative leadership contest collapsed, leaving Theresa May as prime minister. Eagle ended up with a poorly attended press conference and few questions from the media. Now imagine if a similar scenario happened to a Silicon Valley leader like Larry Ellison.

From an agency perspective, this ‘journalist scarcity’ became a catalyst to change the approach to try and drive greater impact from the coverage generated. It’s what agencies call ‘storytelling’: you work with a publication to craft all the right conditions, including executive access, so that a story will run.

Working with a large corporate means that this takes a lot of time:

  • Building the story first of all; this is your product, and you then reverse-engineer the journalist’s ‘journey’ through it. It takes into account areas of interest that the journalist has previously written about, the publication’s style and the likely word count (a bigger canvas is better)
  • You pitch this to the client. This would include a complete plan covering what you hope to get from the publication (likely headlines and a synopsis) and how this rolls up to business objectives
  • The pitching process to the journalist is a high-touch process. The journey that they are taken on might take months, depending on executive and resource availability (such as lab tours)

With one agency client, my back-of-a-cigarette-packet maths produced some disturbing numbers: placing a story in the Wall Street Journal cost roughly the same as buying a full page of ad space.

Secondly, stories need heroes: people. Bill Gates was framed as a superman – an image that was torn to shreds in the deposition videos from the Judge Jackson antitrust trial. A more cynical interpretation of the Bill and Melinda Gates Foundation is that it had at least some role in rehabilitating Gates’ profile as a statesman of the technology sector.

Many of the heroes are drawn from the bench below the CEO; Microsoft used former research head Rick Rashid in that role for a number of years, and Google highlighted Marissa Mayer in a similar role – neither executive now works for those employers.

So how do you make the storytelling process more agile and scalable to improve campaign impact and frequency? Social media offered part of the answer for prominent technology companies. Corporate channels became de rigueur and new media channels like The Verge and BuzzFeed News sprang up. The technology sector even bankrolled some of these titles, notably Sarah Lacy’s Pando.

HubSpot has turned this into an industry, as this approach is emblematic of the content marketing methods and tools it sells to businesses around the world – codifying the PR techniques of Silicon Valley for a wider audience.

More information
The New PR Reality and What it Means | The Information (paywall)
Hitachi (finally) releases Skyline Trinium Nine high-end mainframe | ComputerWorld