
1 Digital tools: support or danger for democracy?

Social networks, from heaven to dust

How is it possible that, in the space of a few years, roughly between the end of 2010 and the end of 2020, social networks and, more generally, digital communication tools have gone from being seen as a great instrument of freedom and democracy (so much so that it has been suggested that Facebook and Twitter should be awarded the Nobel Peace Prize) to being seen as a danger to institutions, to relations between people and to civil democratic coexistence?


Symbolic images of this change are the covers of the well-known weekly Time, which in the immediate aftermath of the so-called 'Arab revolutions' named Mark Zuckerberg Person of the Year 2010, and at the end of 2020 wondered whether it might not be necessary to delete Facebook.

Since the early 2000s, we have witnessed a long series of social movements of different kinds and in very different parts of the world: from 'Me Too' in the US in 2006 to the protests in Latin America in 2019 and 2020, through the so-called 'Arab Spring', 'Black Lives Matter', 'EuroMaidan' in Ukraine, 'Fridays for Future' and Hong Kong. These moments of protest have always shared a common element: the use of social networks and the Internet to promote and organise the movements' activities and to compare ideas and proposals. These experiences, some of which are still ongoing, have given rise to the myth of the Internet, and even more so of social networks, as great instruments of democracy.

Thanks also to various scientific studies carried out by universities and specialised centres, this debate was then extended to the world of 'social' communication in its 'algorithmic' sense, that is, communication based on algorithms that filter and organise what users see online.

What radically changed the perception of these digital tools were two global political events in 2016: the Brexit referendum and the victory of Donald Trump in the US presidential elections.

Immediately after each of these votes, both of which went against most pollsters' predictions, a debate began (mainly among specialists and academics) about the role that social networks, and Facebook in particular, had played during the campaigns.

But it was only at the beginning of 2018, when journalistic investigations and whistleblowers revealed the background of the affair and the Cambridge Analytica scandal broke, that a long public debate began on the real impact that social networks can have on society and democratic processes.

Without going too much into the technical details of the affair, what the Cambridge Analytica scandal revealed was the possibility of creating 'personal profiles' by studying the data of millions of users held by social networks, on the basis of which hyper-personalised messages could be created with a very high capacity to influence voters' choices and behaviour.

The squirrel: from relevance to conditioning risks

Underlying these processes is a principle dear to the large technology platforms and companies: relevance, that is, the attempt to provide users with what is most 'relevant' to them, often anticipating their moves online.

Over time, however, with the refinement of technical tools and their application to persuasion processes, the suggestion of 'relevant' content has turned into the 'suggestion' of behaviour, if not actual conditioning.

And to explain what the concept of 'relevance' means to data giants, we turn once again to the founder of Facebook, who is credited with the phrase "A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa".

It is the principle of relevance that inspires Amazon's advertising 'suggestions', the display priorities of Google's search results, or even the posts that Facebook prioritises for each user and, quite often, the content we see on a newspaper's homepage or the position of individual articles within it. In short, it is the principle that drives much, if not all, of people's online experience. It is therefore crucial for large web companies to know what is most 'relevant' to their users, or more precisely, what is relevant to each of their individual users, who, let us not forget, number in the billions.
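To make the idea of relevance-based selection concrete, here is a minimal sketch in Python of how a feed ranker might score and order content against a user's interest profile. The profile weights, topic labels and items are all invented for illustration; no real platform's algorithm is this simple.

```python
# Toy sketch of relevance-based ranking. All names and weights are invented.

def relevance_score(item, user_profile):
    """Score an item by how strongly its topics match the user's interest weights."""
    return sum(user_profile.get(topic, 0.0) for topic in item["topics"])

def rank_feed(items, user_profile):
    """Order a feed so the most 'relevant' items for this user come first."""
    return sorted(items, key=lambda it: relevance_score(it, user_profile), reverse=True)

# A hypothetical user who engages mostly with pets, a little with sport,
# and almost never with world news.
profile = {"pets": 0.9, "sport": 0.6, "world_news": 0.1}

feed = [
    {"title": "Famine worsens abroad", "topics": ["world_news"]},
    {"title": "A squirrel in your street", "topics": ["pets"]},
    {"title": "Local match report", "topics": ["sport"]},
]

for item in rank_feed(feed, profile):
    print(item["title"])  # the squirrel comes first, the famine last
```

Even in this toy version, the mechanism Zuckerberg's squirrel quote describes is visible: ranking by personal interest, not by any external measure of importance.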

To do this, it is necessary to have information and tools to 'classify' each individual user: to have (to use the terms already used for Cambridge Analytica) a 'personalised profile' of every user of Amazon, Facebook, Google, Apple, Microsoft and so on. A personalised profile of each of us, internet users.

To understand the sophistication and personalisation of the suggestions that different digital channels can make, it is useful to know how well they 'know us'. On this point, research conducted at the University of Cambridge in 2015 (which, of course, has nothing to do with Cambridge Analytica) showed that Facebook can build a psychological profile of its users based on their 'likes'.

The box shows the number of likes required for Facebook to know its users better than the people closest to them.

How many likes Facebook needs before it knows you better than:

• 70 likes: Facebook knows you better than a friend or room-mate;

• 150 likes: Facebook knows you better than a family member or close relative;

• 300 likes: Facebook knows you better than your partner.

Source: University of Cambridge, The Psychometrics Centre, 2015. The test used is available at www.applymagicsauce.com

Considering that in 2016 the big data-broking companies had (according to the Netflix documentary 'The Great Hack') between 2,000 and 5,000 data points on each US voter, it is easy to imagine how deep these companies' knowledge of individual voters is.

And, therefore, how great their power to influence voters' decisions with tools, messages and suggestions targeted 'ad personam'.

From data harvesting to potential manipulation

Although the data we have seen above refers to the US 'market', where the legal protection of privacy is weaker than in Europe (especially since the GDPR came into force), the techniques for collecting, analysing and classifying user data have been further refined in recent years. It is therefore not unlikely that a similar scenario exists in Europe, also due to the public's widespread neglect of privacy protection.

In order to be able to have such a large amount of data on each voter/user, large communications companies and those that have 'data broking' in their corporate mission use different technical tools. In fact, there are companies whose main mission is precisely to broker, analyse and sell data on Internet users.

Through various tools, such as tracking software and small pieces of code embedded in the websites we visit or in the apps we use on our smartphones, thousands of pieces of information are collected at every moment of our online lives, allowing profiles to be created that can later be used to target us with advertising, information and tailored communications.
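As a rough illustration of how such scattered signals become a profile, the following sketch aggregates browsing events into per-category counts. Real trackers collect far richer signals than this; the event format, URLs and categories here are invented for the example.

```python
# Toy sketch of profile-building from browsing events. Data is invented.
from collections import Counter

def build_profile(events):
    """Aggregate page-visit events into per-category counts: a crude interest profile."""
    profile = Counter()
    for event in events:
        profile[event["category"]] += 1
    return profile

# Hypothetical events as a tracker might record them
events = [
    {"url": "news.example/politics/1", "category": "politics"},
    {"url": "shop.example/shoes", "category": "shopping"},
    {"url": "news.example/politics/2", "category": "politics"},
]

profile = build_profile(events)
print(profile.most_common(1))  # the user's dominant interest category
```

Multiply three events by years of browsing across thousands of sites and apps, and the 2,000 to 5,000 data points per voter cited above become easy to picture.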

If we limit our discussion here to political communication, which certainly has the most direct impact on democratic processes and the civil rights of European citizens, we can see the risks associated with a deep knowledge of people's personalities: the risk of being exposed to 'hyper-personalised' communication, with individualised messages no longer aimed at large social groups (e.g. 'workers vs. businessmen' or 'suburbs vs. inner cities'), but targeted at individual citizens on the basis of their precise characteristics, history, behaviours and even dreams.

A textbook example of these risks was seen during the Brexit campaign in 2016, which saw a large number of personalised messages (often spreading false or misleading information) linked to the specific interests of small, but very specific, target groups of voters.

One such message warned of a hypothetical European 'film tax' on streaming services. In reality, a 'film tax' has never been discussed, nor even proposed, by European institutions at any level.

However, such a hypothetical proposal was clearly an issue of great concern to users of film streaming platforms, mobilising them to vote for Brexit or, if they had intended to vote Remain, to reconsider their choice.

Similarly, a highly emotive message aimed at the large segment of the British population with origins and relatives in Commonwealth countries suggested that, with Brexit, all citizens of these countries would be able to obtain a British passport. The claim was obviously false: Indian or Canadian citizens would not automatically obtain British citizenship after Brexit.

From personalisation to polarisation, a short and risky step

The same algorithmic processes that enable the creation of 'hyperpersonalised' advertising are behind all the processes of content selection that are displayed to internet users in different contexts. As mentioned above, these algorithms now select which posts are displayed on social networks, which links are displayed first in Google search results, and often even which news items, or in which order, are reported on the front pages of online newspapers.

Such personalisation of online content has given rise to the phenomenon of "filter bubbles": the user is placed in a "communication bubble" that filters content for them, providing a distorted image of the world that appears less diverse and rich than it actually is.

It goes without saying that there is a natural tendency for people to choose topics of their own interest. Nobody reads a 'noir' book if they are not interested in the genre, and nobody watches a 'tear-jerker' film if they do not like that kind of story. But these are precisely individual and conscious 'choices' between different contents.

The 'filter bubbles' created on the Internet by algorithms determine, in fact, not a choice but an impossibility of choice, because the reality and contents presented to users are one-sided, excluding or in any case restricting the diversity of the world in all its possible meanings: artistic, cultural, religious, political, etc.

This phenomenon, together with that of the so-called "echo chambers", could have serious consequences not only in cyberspace but also in the real world, with worrying polarisation phenomena.

In such echo chambers, the natural human tendency to seek confirmation of one's own beliefs and positions is exacerbated. It is easy to fall into a vicious circle when one's filter bubble serves up the same kind of personalised news, endlessly reproducing a world similar to one's own.

Indeed, the combined effect of filter bubbles and echo chambers is constant confirmation and reinforcement of one's own beliefs and, consequently, rejection of those of others: a perverse mechanism that can only lead to cultural and behavioural polarisation.
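This reinforcement loop can be sketched in a few lines: a toy ranker always shows the viewpoint the profile already favours, and each display feeds back into the profile, so an initially small preference hardens over time. All weights and topic names are invented; the point is only the direction of the feedback.

```python
# Toy sketch of the filter-bubble feedback loop. Weights are invented.

def step(profile, topics):
    """One cycle: show the top-weighted topic, then reinforce it via 'engagement'."""
    shown = max(topics, key=lambda t: profile[t])
    profile[shown] += 0.1  # seeing the content feeds back into the profile
    return shown

# A user who starts only slightly more exposed to one political view
profile = {"politics_a": 0.52, "politics_b": 0.48}

shown_history = [step(profile, list(profile)) for _ in range(5)]
print(shown_history)  # the slightly dominant view is shown every single time
print(profile)        # and its weight keeps growing, the other's never does
```

Starting from a 52/48 split, the minority view is never shown again: a caricature, but one that captures why small initial biases can end in polarisation.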

When this applies to social relationships, to human relationships based on strong opinions (e.g. religious beliefs or political positions), it risks polarising debate and ultimately people's behaviour.

A phenomenon in which the United States is once again in the vanguard (in this case a negative one), as evidenced by the many studies documenting its growing and increasingly violent cultural and political polarisation, of which the events of January 2021 were perhaps a natural progression out of the virtual world, but not yet an epilogue.

Social networks: quantity vs quality. Not all that glitters is gold

Since the big 'social' platforms lost their 'network of friends' dimension to become 'social media' about 15 years ago, with the explosion of the phenomenon of so-called 'influencers' with millions of followers and views, large numbers have been the dominant element in assessing the success of communication.

Today, it is widely reported that a large majority of the European population of all ages uses social networks, although there are large differences in the way they are used by different age groups. For example, Facebook and WhatsApp, both owned by the same American company, Meta, are now used by the majority of the European population, according to Eurostat data for 2020.

Among younger people in the EU aged 16-24, almost 9 out of 10 (87%) participated in social networking sites. This ranged from 79% in Italy to 97% in Denmark.

Among older people aged 65-74, more than a fifth (22%) participated in social networks. This ranged from 10% in Croatia to 60% in Denmark.

It is no coincidence that the success of the messages disseminated by both political figures and show business stars is measured in thousands or millions of views, comments, shares and, above all, likes. An evaluation that is purely numerical and has nothing to do with the quality and content of the messages.

An example of Facebook's deep social penetration is given by the number of registered users in the city of Palermo, Italy: more than half the total of the entire resident population (of all ages, from newborns to the over 100s).

It is indeed clear that in view of these figures and this level of penetration of digital media and platforms, it is nowadays impossible to ignore their existence and use when thinking about organising social activities, campaigns, movements and forms of democratic participation.

In view of the limitations mentioned above (risks to privacy, quality over message, etc.), it is therefore useful to take stock of digital and online tools for promoting active citizenship, highlighting the advantages and disadvantages of each one.
