
Are our democracies in danger?

The intricate relation between social media, data protection, misinformation, artificial intelligence and democracy.

by Alina Mancino

It will probably not come as a surprise that 2024 is considered the biggest election year in history, with around fifty countries going to the polls.

Social media has played a crucial role in many elections over the last decade, but how did we get here? How did social media affect political communication and what are the consequences and threats to our democracies?

Social media and electoral campaigns

As social media entered people’s daily lives and gained attention and importance, political practice changed and adapted its communication methods to this new medium. The role of political parties, once the main connection between citizens and their representatives, has declined in significance. It has been replaced by online communication, in some ways distancing local politicians from their territories.

Today’s political leaders are supported by professionals who curate their public image and communication online. Electoral campaigns carried out on social media platforms are very complex: they rely on data-driven scientific methods and communication tactics based on targeting specific audiences. Data analysts have gained importance as they build predictive models of citizens’ electoral behavior and define which messages need to target which audiences.

Most voters today do not have a strong political identity, so their vote is more volatile and more likely to change based on specific issues or topics. Consequently, electoral campaigns are no longer confined to a short pre-election period but have become a state of permanent activity.

Social media creates echo chambers, or filter bubbles, that polarize people’s thinking. Inside these bubbles there is no space for genuine debate among people who hold diverse perspectives. As a result, the illusion of direct communication produces interactions characterized by a lack of empathy, a degeneration of public discourse, and a lack of accountability for the users who produce certain content.

A very common practice in electoral campaigns on social media is “microtargeting”: politicians segment their audience and send each segment personalized messages. Different people receive different messages, a practice that did not exist before the social media era. Political debate has suddenly become more private: it reaches citizens’ phones and creates a narrative that varies for different categories of people.

In this new landscape, where fake news flourishes because social media platforms struggle to identify and remove it, the prevalence of data-driven campaigns varies. Countries like Canada, the United States, and Australia have very few data protection laws, while Germany, for example, has some of the most restrictive legislation.

The Cambridge Analytica Case

The Cambridge Analytica case involved the unauthorized collection of Facebook users’ data for political profiling and targeting during elections. It raised concerns about privacy, data misuse and the influence of digital campaigning on democratic processes.

The Cambridge Analytica case is a fascinating story that I highly recommend looking into. There’s a Netflix documentary called “The Great Hack” and several books that cover it. I will give a brief summary here.

Cambridge Analytica was founded in 2013 as the American affiliate of SCL Group, based in the UK. Cambridge Analytica worked with citizens’ behavioral data to support specific candidates; in the USA it began campaigning for the Republican Party during the 2014 midterm elections, and 75% of the candidates it supported won.

What is behavioral data?

Behavioral data is all the information gathered from what we do with our smartphones and other smart objects we own. The extraction of personal information happens thanks to our (usually passive) acceptance of cookies on the websites we visit. This information is the most valuable asset we give to the companies that let us use their platforms “for free”: our data is the price we actually pay.

Between 2015 and 2016 Cambridge Analytica collected data on 87 million Facebook users, studied their personality types, and created personalized content for each category. In this way the social media electoral campaign was built from the bottom up: first learning what people need to hear, read, or watch, then sending out personalized content carrying different messages from the same party or politician.

Cambridge Analytica identified the people whose personalities were more emotionally unstable and inclined to believe conspiracy theories, and sent them messages, articles, and invitations to Facebook groups designed to increase their anger and frustration. For Facebook this was valuable because it increased people’s activity online; for Cambridge Analytica it was a way to influence people’s votes.

Cambridge Analytica worked for Trump’s electoral campaign and there are doubts about its work for the Brexit campaign as well, both happening in 2016.

It’s impossible to say if and how Cambridge Analytica’s work actually influenced the election results, but we have to question the methods used and where to set the boundaries, a job that politicians and legislators have to do as well.

Misinformation as the first global risk

The 19th edition of “The Global Risks Report 2024”, released in January 2024 by the World Economic Forum, surveyed around 1,500 experts across different sectors. It found that misinformation and disinformation are perceived as the top global risk over the next two years, with societal polarization ranked third.

This year the widespread use of misinformation and disinformation may play a particular role in undermining the legitimacy of newly elected governments.

In response to misinformation, governments could increasingly control information based on what they determine to be “true”. But this solution carries its own risk: a country’s democratic health is measured partly by how free its journalists are to write what they want.

The use of artificial intelligence for content creation on social media intensifies the spread of fake news, as videos, audio, and images created by AI are still difficult to detect. Big tech companies such as Meta, TikTok, and Google require users to disclose when content has been created with AI, but there are already many examples where this rule hasn’t been applied.

Legislations

The only way to protect fair elections is stricter control over how social media platforms and websites use citizens’ data. One broad example is the General Data Protection Regulation (GDPR), which came into force in 2018 and gave the European Union quite strict data protection legislation. Platforms like Meta and TikTok, for example, can access less data in the European Union than they can in the USA. In electoral campaigns, platforms need data to personalize content: without access to your data, they can’t target you, leaving you freer to critically analyze politicians’ messages and shared content.

Today’s use of AI for content creation is a problem for the transparency of democratic elections. In December 2023 the European Parliament and the Council reached an agreement on the AI Act, which aims to ensure that all AI systems used inside the European Union align with the EU’s values and rights, guaranteeing human oversight, security, privacy, transparency, non-discrimination, and social and environmental well-being. Since the AI Act still needs a final vote in the European Parliament, it’s expected to apply from 2026.

This is an important step for Europe, but the delay in enforcing the AI Act poses a significant risk, particularly for this year’s elections, which will remain unprotected until the law is implemented. Democratic processes are too slow compared to the pace of technology: however important legislation is, if it isn’t applied quickly, we will keep missing opportunities for better protection.

What can we do?

Understanding microtargeting techniques and how political campaigns work on social media helps us pay more attention to what we consume online. Learning about the importance of data protection is key to preventing the collection of our data. One simple step is to decline cookies on websites that offer the option: cookies store our data, and refusing them is a small but important act.

Gaining access to valuable information is crucial, and to achieve this we must dedicate time to reading and educating ourselves. In a world that always seems in a rush, taking the time to read well-written articles, watch documentaries, or listen to talks is key to critical thinking and understanding, and therefore makes us better citizens in our democracies.
