
Artificial Intelligence (AI) and Deepfakes

Should We Be Worried About AI and Deepfake Technology Amongst Youth?

By Dr. Mmaki Jantjies

AI has significantly strengthened technology's role as an enabler across sectors. Yet alongside the exciting potential for innovation around AI lies a perilous threat of misinformation, which is not always easy to detect given the quality of new AI-generated content. Recent advances are exemplified by OpenAI's DALL-E 2, which can create realistic images from text descriptions, with the potential to revolutionise a whole range of sectors of the modern economy.

However, this progress is accompanied by the downsides of deepfake technology, which can manipulate the images and words of leaders and celebrities, as seen in the recent AI-generated explicit images of megastar Taylor Swift.

Deepfakes are particularly alarming. A deepfake is a type of synthetic media created using AI techniques, especially deep-learning algorithms, which analyse and manipulate existing images, videos, or audio recordings to generate highly realistic fake content, often featuring individuals saying or doing things they never actually did.

As witnessed in the case of Taylor Swift, the misuse of AI-generated content to propagate false narratives highlights the urgent need for robust mechanisms to combat misinformation across all areas of modern life. And with crucial elections on the horizon this year, the convergence of AI capabilities and deepfake technology raises serious challenges for the healthy exercise of democracy.

There have also been several cases of social media misinformation undermining young people's confidence and leading to mental well-being challenges amongst young users. One of the most pressing concerns is the potential impact of deepfakes on democratic processes across the world, as many countries ran elections this past year and more head to the polls in the year ahead.

The emergence of deepfake technology has introduced a new and concerning dimension to electoral processes, as evidenced by incidents in Nigeria and Slovakia in 2023. In Slovakia, AI-generated audio recordings were used to fabricate statements attributed to a political candidate, suggesting an intention to manipulate markets and rig an election. Similarly, in Nigeria, an AI-faked audio clip falsely implicated a presidential candidate in ballot manipulation, potentially swaying public opinion and voting preferences.

These cases underscore the urgent need for robust measures to combat the proliferation of deepfake technology and to safeguard citizens through education about deepfakes. The dissemination of manipulated media can have far-reaching implications, not just for young people but also for trust in institutions.

To date, several pieces of legislation have shown how deepfakes can be combated, and South Africa has broader legal frameworks that might be used to tackle them. So how can countries find additional measures to tackle the menace of misinformation propelled by AI and deepfake technology?

Education and digital literacy remain the cornerstone of the effort to counter digital manipulation. Educating citizens not just in basic digital literacy but in what it means to be a digital citizen exposes them to the existence and potential dangers of deepfakes, empowering them to discern fact from fiction and resist misinformation.

The promotion of digital literacy and critical thinking skills helps individuals distinguish authentic content from manipulated media. Indeed, raising awareness about the existence and implications of deepfakes is essential to fostering vigilance, making citizens more capable of identifying and rejecting false narratives.

Looking to the private sector, companies can further invest in research and development to create advanced deepfake detection tools. These tools use machine learning algorithms to analyse and identify inconsistencies in media content, thus helping to flag potential deepfakes before they are spread widely.

Ultimately, confronting the challenge of misinformation propelled by AI and deepfake technology demands a multisector and multifaceted approach rooted in education, regulation, and collaboration. As we stand at the crossroads of technological advancement and societal resilience, it is incumbent upon us to harness the transformative potential of AI while safeguarding and empowering citizens. Only through collective action and unwavering commitment to digital education and prevention can we navigate the dual edge of AI while enjoying the benefits that it continues to bring across sectors.

Dr. Mmaki Jantjies, Group Executive of Innovation and Transformation at Telkom