

8.5. CASE STUDY: DISINFORMATION

by Vicente Diaz de Villegas Roig


The truth is the first victim in any conflict. The duty of every civil society is to develop its resilience and protect information as a common good. If you fail to take your place in the information environment, others will.

During the Cold War, the prospect of mutually assured destruction in a nuclear conflict served as a deterrent in the physical environment. However, the birth of the internet and the subsequent rise of social networks have turned the information environment into a battleground. Government agencies, private organisations and other pressure groups fight a 24/7 battle to control the narrative – a battle in which the technological gap is no longer a determining factor.

Disinformation has come to the fore in today’s crises. Although it is not a new phenomenon, its systematic use and the ease with which it can be disseminated thanks to new technologies have turned it into one of the main vehicles for hybrid threats. In this regard, the Joint Framework on Countering Hybrid Threats, published by the European Union in 2016, states that ‘massive disinformation campaigns, using social media to control the political narrative or to radicalise, recruit and direct proxy actors can be vehicles for hybrid threats’.

DISINFORMATION GENERATES DOUBTS

In the battle for the narrative, disinformation seeks to generate doubts about the truthfulness of facts. By devaluing public discourse, it relativises the truth and breeds distrust in the institutions governing society. The main tool used to achieve this effect is not so much blatant lies as the exploitation of information taken out of context and of messages that appeal to emotion rather than reason. An individual who doubts, mistrusts and is permanently subjected to information overload is fickle in their views, which makes it easy for their passive opinions to be turned into active convictions.

Assessing the effectiveness of disinformation is no simple task. The question is: can disinformation create new opinions, or does it simply strengthen existing ones? In order to answer this question, we need to consider society’s vulnerability factors, such as external and internal divisions, the presence of minorities, fragile institutions and a weak media culture. The media play a fundamental role. Customised narratives (in some cases involving microtargeting or even individualised targeting), interference in democratic processes, self-serving leaks and document falsification are just a few examples of the ways in which the media can contribute to disinformation.

BOOM IN SOCIAL NETWORKS

Those responsible for disinformation campaigns have found in cyberspace an ideal place to hide their footprint. In other words, the nature of the internet makes it difficult to hold individuals accountable for their actions, at least under traditional regulations.

The horizontal nature of social networks enables just about any individual to become a journalist without going through any kind of editorial filter. Community saturation and the presence of troll farms (organised groups of people who make provocative comments to create controversy or divert attention away from a topic) have transformed the dynamics of how information is generated and disseminated. Semi-automatic and automatic dissemination systems also exist now, in the form of bots (computer programs that automatically perform repetitive tasks online) and zombie servers.

HOW DO THEY DO IT?

In order to increase the time internet users spend online, platforms use personalisation algorithms that isolate users in an echo chamber (a ‘filter bubble’) of content related to their search history, reducing their exposure to information that runs counter to what they have already read. Troll communities do similar work, creating large numbers of false identities (‘sock puppets’) that all convey the same idea through similar messages. In many cases, these messages are supported by false content created with increasingly sophisticated sound-, photo- and video-editing tools.
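Real platform ranking systems are proprietary and far more complex, but the narrowing effect can be illustrated with a minimal sketch: a toy recommender that scores candidate content purely by topical overlap with a user’s reading history. Everything below – the function, topics and titles – is hypothetical, not any platform’s actual algorithm.

```python
# Toy illustration of a filter bubble: items that share no topics with a
# user's history sink to the bottom of the ranking and gradually vanish
# from view. All data here is invented for illustration.

def rank_items(history_topics: set[str], items: list[dict]) -> list[dict]:
    """Rank candidate items by topical overlap with the user's history."""
    def score(item: dict) -> int:
        return len(history_topics & set(item["topics"]))
    return sorted(items, key=score, reverse=True)

history = {"immigration", "security"}  # topics the user already reads
candidates = [
    {"title": "Border policy op-ed", "topics": ["immigration", "security"]},
    {"title": "Fact-check of a viral claim", "topics": ["media-literacy"]},
]

for item in rank_items(history, candidates):
    print(item["title"])
# The fact-check, which runs counter to the user's existing feed, is
# ranked last – repeated over time, it effectively disappears from view.
```

Even in this crude form, the ranking never surfaces counter-attitudinal content unless it happens to share a topic with what the user already reads, which is precisely the dynamic the ‘filter bubble’ label describes.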

Humour has taken centre stage in information manipulation campaigns, with ‘memes’ – images combined with a small amount of text that appeal to viewers’ emotions and are easy to relay – proving to be a very effective tool.

What is right is right if everyone says so. In 2006, Cialdini set out six principles of persuasion. One of them – the principle of social proof – states that ‘we determine what is correct by finding out what other people think is correct’. This principle certainly applies on social networks: once a piece of information is on our radar, the more likes it gets, the more appealing it becomes. Likes can also simply be bought. One of the main activities of troll communities is commenting on one another’s posts to give the impression that most people agree with the ideas they promote.
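The compounding effect of seeded approval can be shown with a back-of-the-envelope simulation, in which each arriving user engages with a post with probability proportional to its current like count. The post names and all numbers below are invented purely for illustration.

```python
# Toy simulation of social proof: a "rich get richer" dynamic in which a
# small head start of seeded likes (e.g. bought likes, or mutual liking
# within a troll community) tends to snowball.
import random

random.seed(42)
likes = {"organic post": 5, "seeded post": 15}  # seeded post starts ahead

for _ in range(1000):  # 1,000 users arrive and follow the crowd
    posts, weights = zip(*likes.items())
    chosen = random.choices(posts, weights=weights)[0]
    likes[chosen] += 1

print(likes)  # the early, artificial advantage compounds over time
```

The point of the sketch is that the final gap between the two posts reflects the seeded head start far more than any difference in quality – which is exactly why buying or exchanging likes is worth a troll community’s effort.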

Trolls also aim to increase social polarisation by actively participating in discussions about controversial issues such as immigration or racial tensions, taking both sides. In numerous cases, sites and active profiles created from the same server are used to produce emotional content for each of the conflicting positions, thus seeking to sow greater social division.

LEAKS: WHERE DOES MY OPINION COME FROM?

One of the most powerful dissemination vehicles is the information leak. It is a very effective method because the target audience feels the information must be true, having been obtained directly from the source. In most cases, however, leaks are part of a disinformation campaign: their dissemination is self-interested and decontextualised, and ‘tainted leaks’ – documents deliberately altered to change the story – are mixed in, often going unnoticed.

HOW DOES THE EUROPEAN UNION PROTECT ITSELF?

Interference in electoral processes can target either voters, through campaigns to influence how they will vote, or electronic systems, in order to modify the databases that feed the electoral roll, to tamper with vote counting or simply to steal data. The mere suspicion that the results of a vote may have been manipulated generates a feeling of mistrust in the electorate that can undermine the legitimacy of the process. The European Union has been forced to act in light of an increase in cases of interference in electoral processes, in particular the Brexit referendum, the US presidential elections and the French elections.

The EU Global Strategy of 2016, the year of the Brexit referendum, established a series of priorities, chief among them the security of the EU against current threats. To counter those threats, it proposed a series of improvements to the EU’s defence, cyber, counter-terrorism, energy and strategic communication capabilities. The latter, in particular, must be able to rapidly and objectively refute disinformation, promote an open research and media environment both within and outside the European Union, and develop the Union’s ability to act through social networks.

The European Union’s Action Plan against Disinformation defines disinformation as ‘verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally deceive the public, and which may cause public harm. Public harm includes threats to democratic processes as well as to public goods such as Union citizens’ health, the environment or security. Disinformation does not include inadvertent errors, satire and parody, or clearly identified partisan news and commentary.’

The Union’s coordinated response presented in the plan is based on four pillars.

1. Improving the capabilities of Union institutions to detect, analyse and expose disinformation. This is to be achieved by reinforcing the strategic communication teams of the European External Action Service, the Union Delegations and the Hybrid Fusion Cell with specialised staff, monitoring services and big data analysis software.

2. Strengthening coordinated and joint responses to disinformation. The plan states that prompt reaction via fact-based and effective communication is essential to counter and deter disinformation, including disinformation concerning Union matters and policies. To that end, a Rapid Alert System was established in Brussels in March 2019 to facilitate the sharing of data between Member States and EU institutions and thus enable common situational awareness, which in turn was intended to support the development of coordinated, time- and resource-efficient responses.

3. Mobilising the private sector to tackle disinformation. About 70% of web traffic goes through Google and Facebook, which means that the vast majority of websites, including news sites, are accessed via these platforms. Aware of this, the EU published a Code of Practice on Disinformation about a year before the European Parliament elections. Facebook, Google and Twitter signed the code, pledging to develop, before those elections, internal intelligence capabilities enabling them to detect, analyse and block malicious activities on their services. The Commission and the European Regulators Group for Audiovisual Media Services (ERGA) monitor on a monthly basis the actions taken to uphold these commitments.

4. Raising awareness and improving societal resilience. ‘Greater public awareness is essential for improving societal resilience against the threat that disinformation poses. The starting point is a better understanding of the sources of disinformation and of the intentions, tools and objectives behind disinformation, but also of our own vulnerability.’

WHAT ABOUT THE CORONAVIRUS PANDEMIC?

The abundance of information about COVID-19 – be it true, deliberately misleading or simply inaccurate – makes it difficult for individuals to identify reliable sources. This ‘infodemic’, as the World Health Organization (WHO) has dubbed it, is spreading as fast as the virus itself.

The origins of the coronavirus have not escaped manipulation. One of the most common theories circulating on the web is that the virus is a US biological weapon, intentionally spread on Trump’s orders to isolate China. Another theory attributes it to a British laboratory that allegedly also poisoned Russian dissident Sergei Skripal in Salisbury, while others argue that Chinese spies stole it from a Canadian laboratory. Many more such theories will follow.

In this regard, the European Parliament resolution of 17 April on the COVID-19 pandemic urged the European Commission to counter aggressive propaganda efforts that exploit the pandemic with the aim of undermining the EU and sowing mistrust of the European Union among local populations.

Following the adoption of this resolution, on 30 April the European Parliament and HR/VP Borrell debated the latest report by the European External Action Service on disinformation activities related to the COVID-19 pandemic. The report reveals many troubling facts, for instance the significant number of coordinated disinformation campaigns spreading false health information in and around Europe, as well as conspiracy theories and claims that authoritarian political systems – not democracies – are best suited to deal with the current crisis.

Following the debate, Foreign Affairs Committee Chair David McAllister stated that, ‘to counter negative narratives, it is particularly important to communicate about the EU’s financial, technical and medical support in response to the pandemic, both between EU countries and to our other partners, among them China. Most acts of solidarity, by organisations, professionals or individuals, take place far away from the gaze of cameras and reporters. But it would also be unfair to all the health workers, volunteers helping fellow citizens and people organising the transport of crucial equipment to let the lies about a lack of European solidarity spread without effectively challenging them.’

WHO CERTIFIES THE IMPARTIALITY OF ‘DIGITAL POLICE’?

The European Union’s Code of Practice on Disinformation received a strong initial boost when the large social networking platforms implemented self-regulatory tools (mainly filters and moderators) against so-called malicious activities. However, both tools can be manipulated: their neutrality is questionable and their power to shape opinion undeniable. In answering the above question, it should therefore be borne in mind that attempts to identify information manipulation run the risk of creating ‘ministries of truth’ which, in order to strengthen a certain political narrative, undermine one of the greatest achievements of democracy: freedom.

OTHER INITIATIVES

The NATO StratCom Centre of Excellence in Riga provides analysis and advice, supports doctrine development and conducts research and experiments to find practical solutions to problems in the field of strategic communications, including disinformation.

There are also other private or semi-private organisations, such as the Digital Forensic Research Lab (DFRLab) and Bellingcat, that analyse open sources and social networks in order to identify and expose disinformation.

Finally, the traditional mainstream media – present as they are on the web as well as in print – can play a significant role as guardians of sound journalistic practices. They are a key element in detecting and reporting manipulated information, and they also have a role to play in educating society. Several media outlets already make special efforts to uncover information manipulation, such as Agence France-Presse with its ‘Fact Check’, the BBC with ‘Reality Check’ and Le Monde with ‘Decodex’.

Society can benefit from an environment in which government, institutions, journalists and specialised organisations collaborate based on a common understanding of disinformation dynamics.
