Artificial Intelligence, Disinformation and Elections: A case study from Pakistan
The recent elections in Pakistan saw an unprecedented level of AI-generated content used online, highlighting key challenges for Parliaments and Governments in protecting digital democracy.
In the recent general elections in Pakistan, a historic shift unfolded as independent candidates, largely backed by the Pakistan Tehreek-e-Insaf (PTI), secured the highest number of seats in the National Assembly. This defied historical trends: independents had never previously won the largest share of seats in the National Assembly. Despite the PTI being barred from contesting under its party symbol, its popularity soared, positioning it as the most favoured party in a nation where political campaigning is pivotal. Even with the party's leader, Imran Khan, imprisoned and unable to hold office, the PTI successfully navigated this challenge through the strategic use of Generative AI technology.
The PTI embraced AI technology to conduct virtual rallies and deliver speeches, creating AI-generated footage of its leader delivering speeches from his prison cell and urging supporters to vote. Online rallies on social media, watched by hundreds of thousands, further amplified the party's reach. One AI-generated speech by the party leader during a virtual session garnered over 1.5 million views on YouTube within 12 hours, a powerful display of political influence. In response to the electoral commission's ban on the party's cricket bat symbol, the PTI shifted its website to GitHub and developed an offline app to provide crucial information for voters. Additionally, an AI chatbot integrated with the party leader's Facebook account engaged voters through personalized messages and real-time updates.
This technological adaptation by the PTI signifies a paradigm shift in the conduct of political campaigns in Pakistan, reflecting a broader evolution within the Pakistani electorate. While the use of AI presents challenges, its strategic deployment in the 2024 General Election suggests that dismissing it outright may not be a viable option. This raises a crucial question: Are we ready for the transformative impact of AI on our democratic processes?
Democracy, as a governance model, hinges upon the assumption that individuals possess freedom and rationality, enabling them to collectively determine decisions for the common good through informed deliberation and political representation. It is imperative that citizens not only have the right to comprehend the rationale behind decisions affecting them but also hold their representatives accountable.
The utilization of big data analytics and artificial intelligence (AI) in political campaigns and elections has introduced profound changes to democratic politics globally. Political parties increasingly leverage ‘big data’ analytics to formulate detailed voter profiles, employing behavioral and psychometric data to categorise voters into interest groups for targeted political messaging. This phenomenon, referred to as ‘computational politics’, applies computational methods to vast datasets from diverse sources, both online and offline, facilitating outreach, persuasion and mobilization.
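To make the mechanics of this segmentation concrete, the sketch below shows the simplest possible version of grouping voters into interest groups by their strongest issue score. The voter profiles, issue names and scores are entirely invented for illustration; real campaign systems use far richer behavioural data and clustering methods.

```python
# Hypothetical sketch of voter segmentation for targeted messaging.
# All voter IDs, issue names and scores below are invented examples.
from collections import defaultdict

def segment_voters(profiles):
    """Group each voter under the issue they score highest on."""
    segments = defaultdict(list)
    for voter_id, scores in profiles.items():
        top_issue = max(scores, key=scores.get)
        segments[top_issue].append(voter_id)
    return dict(segments)

profiles = {
    "v1": {"economy": 0.9, "security": 0.2, "education": 0.4},
    "v2": {"economy": 0.3, "security": 0.8, "education": 0.1},
    "v3": {"economy": 0.7, "security": 0.1, "education": 0.5},
}
print(segment_voters(profiles))
# v1 and v3 grouped under "economy", v2 under "security"
```

Each segment can then be served its own tailored message, which is precisely what makes the resulting information asymmetry between parties and citizens so stark.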
However, beyond its potential for political manipulation, computational politics gives rise to substantial information asymmetries between political entities and citizens. The algorithmic curation of news on social media platforms fosters information silos and echo chambers, creating filter bubbles that act as a form of ‘invisible propaganda’, manipulating public opinion. In other words, constantly being fed the same information leads to siloed thinking and less engagement with differing views.
Generative AI technologies, including deepfakes and digital avatars, introduce challenges in discerning reality from fiction. The emergence of manipulated materials, as witnessed in instances in Turkey and in Argentina, raises concerns about trust erosion. Deepfake materials, especially when strategically deployed close to elections, can significantly impact public perceptions, taking advantage of the limited time available for responsible actors to counter such manipulations.
The impact of AI extends to micro-targeting and voter manipulation, with the potential risk of fostering an ongoing perception of manipulation in the democratic process. Media discussions can also contribute to shaping this perception, and algorithmic biases within Generative AI contexts can further amplify these concerns. The use of AI to predict election outcomes introduces further uncertainty: the parameters are chosen by those seeking the predictions, potentially leading to biased models.
Moreover, AI's capacity to distort reality in ways that incite violence, and to power manipulative personality chatbots, poses a distributed risk of harm. When voters struggle to differentiate between authentic and fabricated information, doubt spreads even to genuine information. Watermarking and labeling efforts, meanwhile, often lack transparency about the entities behind them, potentially exacerbating distrust.
While chatbots can serve as sources of information, their limitations must be acknowledged. Accessible summaries, while simplifying complex government issues, run the risk of substituting genuine reporting, potentially compromising the depth and accuracy of information.
The concentration of information within a handful of tech firms raises concerns about biased decision-making and polarization, akin to issues observed in social media platforms. Addressing this requires governance mechanisms that set clear rules and regulations, ensuring a diverse and transparent approach to information dissemination. The shift from traditional media outlets to tech companies raises questions about norms and authority in disseminating information, emphasising the importance of deciding where authority should reside and establishing the norms that govern it.
In the contemporary digital landscape, the exponential growth of data poses challenges of comprehension, owing to its unstructured nature and sheer volume. However, technology and algorithms offer ways to translate this data into visualizations and manageable sections that decision-makers can use. Natural language processing (NLP) facilitates named entity recognition and topic modeling, while machine learning algorithms identify patterns and trends. Visualization tools further enhance data understanding, empowering decision-makers to make informed choices based on representative data.
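As a minimal illustration of what surfacing themes from unstructured text involves, the sketch below counts the most frequent meaningful terms across a handful of documents. Real pipelines would use dedicated NLP libraries (for example, spaCy for named entity recognition or gensim for topic modeling); the sample documents and stopword list here are invented for illustration.

```python
# Minimal, assumption-laden sketch of theme extraction via term frequency.
# Production NLP would use proper tokenization, stopword lists and models.
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "on", "and", "to", "in", "for"}

def top_terms(documents, n=3):
    """Return the n most frequent non-stopword terms across documents."""
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return [term for term, _ in counts.most_common(n)]

docs = [
    "Debate on election security and voter turnout",
    "Election commission reviews turnout data",
]
print(top_terms(docs))
# "election" and "turnout" surface as the dominant terms
```

Even this toy version shows the trade-off discussed above: the summary is only as representative as the data and the rules used to condense it.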
Yet the complexity and opacity of AI systems present significant limitations. Machine learning, including neural networks (computer systems modelled on the human brain and nervous system), may generate opaque ‘black box’ solutions, making it difficult to explain how recommendations are formulated: if an AI system operates with minimal human input and develops its own ‘approach’ from a few initial instructions, then not even the people who created it can explain how it ultimately works. The potential perpetuation of biases and inequalities in these systems raises concerns about their impact on democratic processes, emphasising the need for careful consideration and oversight.
In conclusion, the integration of AI into political discourse is inevitable, and its responsible use can significantly contribute to the enhancement of democratic processes. The recent case study of the PTI party in Pakistan serves as a testament to the potential of AI in political engagement. As we tread into this new era, a balanced and collaborative approach is essential in harnessing the benefits of AI while mitigating potential risks, ensuring a future where democracy thrives amidst technological advancements.