Platforms and information war


Bill Bishop, in his excellent newsletter Sinocism (paywall), asked some interesting questions as part of an opinion piece covering a wide range of issues. One particular question he posed about platforms and information war caught my attention.

…if you believe that the PRC is engaged in a coordinated global information war to control the narrative about China and delegitimize the US and the West, should the US and other governments targeted in this campaign pay more attention to the use of social media platforms like Youtube, Facebook and Twitter in those efforts? If so how?

Bill Bishop, Sinocism – Weekly Open Thread 2021 #10: Gratitude; Narrative control; Xinjiang and social media (April 2, 2021)

I replied to Bill’s question in his thread and thought I would publish my thoughts here in an expanded and hopefully better-written form.

In my response I decided to deal with online / social platforms and information war head on. It is a problem that western countries have been wrestling with for a good while, whether it's:

  • State actors like China, North Korea or Russia
  • Political extremists on the right and the left including populism
  • Conspiracies: anti-vaxxers, anti-maskers, QAnon, 5G causes COVID-19
  • Non-state actors: jihadist groups, ‘spontaneous’ Chinese Han nationalists

On platforms

The main platforms of concern are:

  • Alphabet (Google search, YouTube)
  • Facebook (Facebook, WhatsApp, Instagram)
  • Twitter
  • Bytedance (TikTok)

For sake of convenience I am going to refer to Alphabet / Facebook / Twitter / Bytedance as AFTB for the rest of the article.

All of the platforms are controlled by algorithms, but what does that really mean? The algorithms are created by technologists to surface content that the audience will want to engage with, and to make less visible content that is less interesting, rather like an editor selecting articles in a magazine. Algorithms are also used to select and match advertising with content. The algorithms incorporate feedback loops: whilst they are created by technologists, they are in turn altered by audience behaviour over time. Descriptors like deep learning, reinforcement learning, machine learning and artificial intelligence are often used to describe the technologies supporting algorithmic selection. Popular media conjures up images of sentient, intelligent computers, but that isn't really helpful.
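
To make the feedback loop concrete, here is a minimal sketch in Python. It is purely illustrative: the content names, scores and learning rate are my assumptions, not any platform's actual ranking system. Content is ordered by a predicted engagement score, and every real interaction nudges that score, so repeated audience behaviour reshapes what gets surfaced.

```python
# Minimal sketch of an engagement feedback loop. All names, scores and the
# learning rate are illustrative assumptions, not any platform's real ranker.
scores = {"video_a": 0.5, "video_b": 0.5, "video_c": 0.5}  # prior engagement estimates
LEARNING_RATE = 0.1

def rank_feed() -> list[str]:
    """Surface the highest-scoring content first, like an editor picking articles."""
    return sorted(scores, key=scores.get, reverse=True)

def record_interaction(item: str, engaged: bool) -> None:
    """Nudge an item's score towards what the audience actually did."""
    target = 1.0 if engaged else 0.0
    scores[item] += LEARNING_RATE * (target - scores[item])

# The loop in action: the audience repeatedly engages with one video and
# ignores another, and the ranking drifts to match that behaviour.
for _ in range(10):
    record_interaction("video_b", engaged=True)
    record_interaction("video_c", engaged=False)

print(rank_feed())  # ['video_b', 'video_a', 'video_c']
```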

Instead I would like to draw on geography as a metaphor to describe what's going on. Imagine a fissure in the earth opens up in the side of a lake. The water follows a natural path downhill; over time the water carves out a gully that becomes a river. It removes the topsoil, carrying it down to the sea. Stones are tumbled and rubbed together by the flow of the river until they become smooth and lozenge shaped. The landscape directs the flow of the water and is acted upon by the water, and the water's effect becomes more pronounced over time.

So it is with algorithms. They are created, then they interact with the audience. Over time, their behaviour is shaped by the audience behaviour patterns that they experience over and over again. The reality is that AFTB have power, but are also influenced by audience behaviours.

Is the audience real?

Algorithms automate advertising. Online advertising was estimated to be worth 319 billion dollars in 2019 and is expected to reach 1,089 billion dollars by 2027. To give a frame of reference, in 2019 online advertising spend was roughly equivalent to the GDP of Singapore. By 2027 it will be greater than the GDP of Indonesia, the country with the world's fourth-largest population and largest Muslim population. All of that money attracts serious efforts to defraud advertisers and platforms. Statista estimates that advertising fraud will be worth 44 billion dollars by 2022. A lot of that will be generated by technology-driven 'fake' audiences, or bots.
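
To put those estimates in proportion, a quick back-of-the-envelope calculation. The 2022 spend figure below is my own linear interpolation between the 2019 and 2027 estimates, not a sourced number:

```python
# Figures from the estimates cited above; the 2022 spend is a rough linear
# interpolation between the 2019 and 2027 numbers (an assumption, not a source).
spend_2019 = 319    # billion USD, online advertising, 2019 estimate
spend_2027 = 1089   # billion USD, online advertising, 2027 estimate
fraud_2022 = 44     # billion USD, ad fraud, Statista estimate for 2022

spend_2022 = spend_2019 + (spend_2027 - spend_2019) * (2022 - 2019) / (2027 - 2019)
print(f"Interpolated 2022 online ad spend: ~{spend_2022:.0f}bn USD")
print(f"Fraud as a share of spend: ~{fraud_2022 / spend_2022:.1%}")
# -> roughly 608bn USD and ~7%: a prize big enough to industrialise fraud.
```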

One of the main questions that platforms ask themselves is whether their audience members are real. A lot of effort has gone into counteracting fake advertising audiences. Some of these efforts have also carried over into platforms and information war, where the fakery takes the form of fake commenters and automated social accounts.

This has given rise to mass organised real commenters, from influencers in WhatsApp groups arranging to like and comment on each other's posts, to troll farms and self-organised groups. State actors like Russia and China are known to have both troll farms and self-organised groups of the politically faithful working for them. A lot of the time, their comments aren't designed to persuade other people on social media, but to distract, drown out or intimidate them. Their activity is also designed to shape the algorithms that surface content.
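
For a flavour of how platforms hunt for this kind of activity, here is a toy sketch of one widely described heuristic: flagging clusters of accounts that post near-identical text within a short time window. The data, field names and thresholds are illustrative assumptions, not any platform's real detection pipeline.

```python
# Toy heuristic for spotting coordinated commenting: clusters of accounts
# posting near-identical text inside a short window. Data, field names and
# thresholds are illustrative assumptions, not a real detection pipeline.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [  # (account, text, timestamp)
    ("acct_a", "Great video, so true!", datetime(2021, 4, 2, 10, 0)),
    ("acct_b", "Great video, so true!", datetime(2021, 4, 2, 10, 1)),
    ("acct_c", "Great video, so true!", datetime(2021, 4, 2, 10, 2)),
    ("acct_d", "An interesting take on this.", datetime(2021, 4, 2, 12, 0)),
]

WINDOW = timedelta(minutes=10)  # assumed coordination window
MIN_CLUSTER = 3                 # assumed minimum cluster size

by_text = defaultdict(list)
for account, text, ts in posts:
    by_text[text.lower()].append((ts, account))

for text, hits in by_text.items():
    hits.sort()  # order by timestamp
    if len(hits) >= MIN_CLUSTER and hits[-1][0] - hits[0][0] <= WINDOW:
        print(f"Possible coordination: {[a for _, a in hits]} -> {text!r}")
```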

A YouTuber talks about how his posts are commented on, demonetised and flagged for takedown

Platforms also use algorithms and audience participation as a first line of defence against inappropriate content. Chinese netizens have built up a great deal of expertise in successfully flagging content for removal or demonetisation. Demonetising a video soon after it has been posted is particularly harmful for video creators, as this is when they get their most views and their greatest opportunity for ad revenue. Video views follow a curve that declines steeply from the moment of posting. Channel view distribution roughly follows the long tail model, with a small number of popular channels having an outsized audience and taking the bulk of the advertising revenue anyway. This puts demonetised creators in a very precarious situation, and it puts aggressors at an advantage in terms of platforms and information war.
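
A toy model shows why the timing matters so much. If daily views are assumed to decay exponentially after posting (the decay rate below is an illustrative guess, not platform data), a demonetisation covering just the first few days wipes out most of a video's lifetime ad revenue:

```python
# Toy model: daily views decay exponentially after posting, so the first few
# days carry most of the lifetime ad revenue. The decay rate is an
# illustrative guess, not platform data.
import math

DECAY = 0.35       # assumed per-day decay rate of views
DAYS = 60          # horizon over which revenue accrues
DEMONETISED = 3    # days demonetised, starting at upload

daily_views = [math.exp(-DECAY * day) for day in range(DAYS)]
lost_share = sum(daily_views[:DEMONETISED]) / sum(daily_views)
print(f"Revenue lost to a {DEMONETISED}-day demonetisation: {lost_share:.0%}")
# With these assumptions, roughly two thirds of lifetime revenue is gone.
```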

Creators will find dealing with this process very wearing, even if they are among the lucky few who have a creator account manager. The platforms can't fix that without spending a large amount of money on real people, which would impact profitability.

The exception to the rule would be TikTok, which has more aggressive algorithms, similar to those used on Chinese social platforms, and actively filters controversial content. TikTok focuses on light entertainment, and a good deal of this approach is down to its Chinese ownership. Douyin, the China-only version of TikTok, is even stricter. The algorithms are supplemented by an army of human censors. The other platforms have a wider remit.
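
For illustration only, here is a bare-bones sketch of the kind of pre-moderation pipeline such platforms are reported to run: an automated filter blocks the clearest cases and queues borderline content for human censors. The term lists and categories are placeholder assumptions.

```python
# Bare-bones sketch of a pre-moderation pipeline: hard-block the clearest
# cases, queue borderline content for human review, publish the rest.
# The term lists are placeholder assumptions.
BLOCKLIST = {"banned_term_a", "banned_term_b"}        # assumed hard-block terms
WATCHLIST = {"sensitive_term_x", "sensitive_term_y"}  # assumed review terms

def moderate(text: str) -> str:
    tokens = set(text.lower().split())
    if tokens & BLOCKLIST:
        return "blocked"          # never published
    if tokens & WATCHLIST:
        return "held_for_review"  # routed to a human censor
    return "published"

print(moderate("a post mentioning sensitive_term_x"))  # held_for_review
```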

Content that works

There have been leaps forward in understanding how to make more effective content and which app designs work. The secret sauce is variable rewards: the unpredictability of when, and how big, the next payoff will be is what keeps audiences coming back. An example would be content associated with the QAnon conspiracy, where each instalment is known as a Q drop. Some Q drops were big claims, others relatively small details.
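
A toy simulation makes the mechanism clearer. In a variable-ratio reward schedule, most 'pulls' pay off with little or nothing and a few pay off big. The probabilities below are illustrative assumptions:

```python
# Toy variable-ratio reward schedule: most refreshes pay off with little or
# nothing, a few pay off big. The probabilities are illustrative assumptions.
import random

random.seed(42)  # reproducible demo

def next_drop() -> str:
    """Return a reward of unpredictable size, like a Q drop or a feed refresh."""
    r = random.random()
    if r < 0.05:
        return "big claim"      # rare, high-salience reward
    if r < 0.40:
        return "small detail"   # common, low-salience reward
    return "nothing new"        # most of the time, no payoff at all

for refresh in range(8):
    print(f"refresh {refresh}: {next_drop()}")
# Not knowing which refresh will pay off is what keeps people pulling the lever.
```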

Nir Eyal’s Hooked is a book that covers this, and is one of the more accessible works on this area from the advertising / marketing / product design-industrial complex.

The difficulty of changing platforms

It is really hard to get platforms to do more than what they're already doing through non-policy means. A case in point would be Facebook's recent history. The reason Facebook was able to withstand large brand boycotts in 2020 was that it makes its money from small companies around the world. A good deal of the business is small D2C (direct-to-consumer) brands, gaming companies and major brands, many of which are Chinese. Examples would be:

  • wish.com
  • shein.com
  • Oasis Games
  • Xiaomi
  • Vivo and OPPO (part of BBK)

A similar, if likely less pronounced, layer of immunity exists for Google advertising as well. As I write this, I can see that Air China is running Google advertising campaigns, in both English and simplified Chinese, against a range of search terms.

Secondly, the problems with social media platforms wouldn't be solved by policy alone. You'd need a reorientation of priorities in the boardroom; call it a higher purpose, or patriotism in the broadest sense, of a kind that hasn't been seen in the US since the Eisenhower administration.

You would need a mammoth tech revamp inside the platforms, and a huge increase in human account management and intervention. For instance, it would mean YouTube having to shake up the pro-China (or QAnon) rabbit holes that are instrumental in attitudinal change.

Deplatforming

You would need to deplatform domestic advocates who are either paid by, or are fellow travellers with, groups using platforms for information warfare. I would imagine that you would have a lot of people baulking at that. In order to inoculate local populations of Russians or Chinese against the use of their favoured platforms for information warfare, you would need to completely rebuild their native-language media within western countries. In the case of Chinese immigrants and students, you would need to start filtering content on WeChat. That would be a major undertaking for any security service, and WeChat could make the job a lot harder if it integrated end-to-end encryption into the app for overseas users.

You would need to deplatform foreign media organisations such as Russia Today, Global Times and CGTN.

Finally, there would need to be eye-wateringly punitive damage done to corporates who act as apologists for these countries. This would need to be multilateral: the US should be prepared to blow up Facebook and Goldman Sachs, the UK HSBC, and Germany Daimler-Benz or Deutsche Bank. They would need to pick a side.

Given the heavy involvement of large corporates in setting policy, that’s quite a conundrum. More related posts here.