DEMOCRACY FOR THE DIGITAL AGE
Many anxieties about AI are focused on hypothetical future developments. But given the spike in online political disinformation and voter profiling during recent UK and US elections, one fear seems all too present: could AI destabilise society by undermining democracy?
A 2023 position paper by two Southampton researchers – Regius Professor of Computer Science Dame Wendy Hall and Associate Professor of Politics and International Relations Matt Ryan – and technology policy consultant and Visiting Fellow Ben Hawes tackles this question. The paper explains that AI can create and spread misinformation, manipulate public opinion, use data profiling to target voters, hack online voting systems and enable cyberattacks on critical infrastructure before or during an election.
While none of these tactics is new, AI can vastly increase their effectiveness, volume and frequency; generative AI technologies, which can themselves create and distribute new content, are a particular worry.
There are steps that government, law enforcement, media, civil society organisations and the tech industry need to take to counter these threats. But Matt, whose research underpins the social science aspects of the paper, believes we can do more than just conserve democracy: we can use AI technology to help understand and improve it.
Language insights
“There’s a lot of talk about how AI is a potential threat to democracy. I’d like to look at it as a potential boon to democracy,” said Matt. His work on the ‘Rebooting Democracy’ project looks at how to regulate for inclusion in political speech, and how machines can track and predict behaviour, classify political communication, and foster political participation.
Research into Natural Language Processing (NLP), carried out alongside Dr Rafael Mestre and Dr Stuart Middleton, has yielded insights into how political speech works. The technique entails analysing speeches or writing by looking at the relationships between words – for example, how frequently words occur near one another. The team has looked at electoral debates in the US and parliamentary debates in the UK, as well as online and in-person conversations between ordinary people.
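To make that concrete, here is a minimal sketch in Python of the kind of co-occurrence counting described above. It is illustrative only, not the team's actual pipeline, and the sample transcript is invented:

```python
# Minimal sketch: count how often pairs of words occur within a small
# window of each other in a transcript. Not the team's actual pipeline.
from collections import Counter

def cooccurrences(text: str, window: int = 5) -> Counter:
    """Count unordered word pairs appearing within `window` tokens of each other."""
    tokens = text.lower().split()
    counts: Counter = Counter()
    for i in range(len(tokens)):
        for other in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((tokens[i], other)))] += 1
    return counts

# Invented mini-transcript for illustration.
transcript = (
    "the minister argued that the policy would fail "
    "and the opposition argued that the policy would succeed"
)
for pair, n in cooccurrences(transcript).most_common(5):
    print(pair, n)
```

Counts like these become the raw material for statistical models of how words relate to one another in political speech.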
With enough data, a program can build a rule-based model of how language works. It can then learn to predict the next word, or to identify where an argument is happening. This is useful for understanding the kinds of rhetorical devices people use, or trends in the way they communicate, such as when men talk over women in political debates. The data then allows a corrective to be ‘programmed in’ – for example, via an automated facilitator that could moderate an online discussion space.
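The next-word idea mentioned above can be illustrated with a tiny bigram model, a sketch standing in for the far larger models such work relies on; the sample sentences are invented:

```python
# Illustrative only: a tiny bigram model that predicts the next word
# from counts of which words followed which in training text.
from collections import Counter, defaultdict

def train_bigrams(corpus: list[str]) -> dict[str, Counter]:
    """Map each word to a Counter of the words observed to follow it."""
    model: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model: dict[str, Counter], word: str) -> str | None:
    """Return the continuation seen most often in training, if any."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Invented sample corpus.
model = train_bigrams([
    "i give way to the honourable member",
    "i give way to my honourable friend",
])
print(predict_next(model, "give"))  # -> way
```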
Just as human facilitators can improve the quality of deliberation and make sure people are included and listened to respectfully in discussions, a program could be trained to provide interventions such as prompting an over-zealous contributor to give someone else a turn or asking people to give reasons for their arguments. Via NLP, an AI could learn to recognise and predict problematic behaviours from the language being used – or even, using different techniques, to interpret and respond to body language captured on video.
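A hedged sketch of that automated-facilitator idea follows, using a crude word-count rule where a real system would use trained NLP classifiers; the speakers, messages and threshold are all invented for illustration:

```python
# Rule-of-thumb stand-in for a trained classifier: if one participant
# has produced more than `share_threshold` of all the words in the
# discussion, suggest a facilitation prompt nudging them to yield.
from collections import Counter

def facilitator_prompt(turns: list[tuple[str, str]],
                       share_threshold: float = 0.6) -> str | None:
    """`turns` is a list of (speaker, message) pairs from a discussion."""
    if not turns:
        return None
    words = Counter()
    for speaker, message in turns:
        words[speaker] += len(message.split())
    speaker, count = words.most_common(1)[0]
    total = sum(words.values())
    if total and count / total > share_threshold:
        return f"{speaker}, thanks for your input - shall we hear from someone else?"
    return None

# Invented example: Ana dominates, so the facilitator steps in.
turns = [
    ("Ana", "I think the budget should prioritise housing because rents keep rising"),
    ("Ben", "Agreed"),
    ("Ana", "And beyond housing we should fund transport, schools and parks too"),
]
print(facilitator_prompt(turns))
```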
The dominance of particular ways of communicating can keep people shut out if they don’t have the right education or background, said Matt. “One way that people are excluded is through speech. Politics privileges rhetorical and deliberative forms of argument, while other modes and forms of engagement – such as storytelling – are deprioritised.” The goal is to use computer-aided tools and better design to include more voices in the discussion.
Designing for democracy
Many of us will know from experience that popular online spaces don’t lend themselves to productive or inclusive conversations. “Social media platforms are designed to sell us products and give us a dopamine hit, not for a rational intellectual debate,” said Matt. Affordances – the opportunities technology gives to take actions such as liking or retweeting a post – are not always used in the way designers intended. And affordances can themselves shape user behaviour, with unintended consequences.
Matt has run workshops (alongside Dr Selin Zileli and Dr Richard Gomer) for practitioners working in the democracy sector to understand what they and their end users really need from software. This understanding lays the foundation for design that encourages constructive engagement. Participants have included the World Bank, Southampton City Council, Involve Foundation and the Scottish Parliament.
This approach, Matt hopes, can lead to digital spaces which are more useful and inclusive. “When social media platforms were first around, you’d have guys in Silicon Valley saying, ‘this is what we think the world really needs’. But their ideas about those needs are exclusive to certain types of people. As a social scientist I can ask critical questions about what’s missing.”
Questions and solutions
Working with computer scientists on these problems has its challenges, Matt said. “When you take on interdisciplinary projects, they’re naturally going to be slower than something that’s right in your wheelhouse.” Specialist language, for example, can be confusing, and ideas about worthwhile approaches and outcomes can differ between disciplines.
But there are benefits even in these differences, Matt reflected. While social scientists are trained to understand and critique the world as it is, their colleagues in computer science are driven to engineer solutions. “They’re coming from a design perspective, so they’re asking very practically, if we want to design something to make a better world, what would it look like? I like when computer scientists talk in the sense of what’s possible.”
Dr Matt Ryan was awarded a UKRI Future Leaders Fellowship in 2020, which provides the primary funding for Rebooting Democracy. He is also Co-Director of the Centre for Democratic Futures and Policy Director at the Web Science Institute.