Deepfakes

Artificial Intelligence vs Real Intelligence

Dr Simon J. Cropper University of Melbourne

Do you believe what you see? Did Obama really say that (he probably did, but not on video)? Can you tell real from fake? And if you could, would you share it with others? Deepfakes are getting better and better, yet (some) humans remain the best judges of the truth despite the efforts of the machines. The better-known deepfake videos circulating are either for entertainment, such as Bill Hader channelling and momentarily becoming Tom Cruise and Seth Rogen, or to illustrate the danger of deepfakes, as pithily done by Jordan Peele (and Barack Obama). More disturbingly, a recent example of Joe Biden falling asleep in an interview has surfaced, which in the year of the US elections takes an altogether more sinister turn.

The dissemination of false information online is a well-recognised societal problem that has been around since the internet itself, and indeed the spread of disinformation is a fundamental characteristic of human self-interest.

The harmful (conspiracy) theories surrounding COVID-19, highlighted by David Coady in the August edition of Sentry, are doubtless fuelled by such disinformation. Indeed, there is growing concern that credible fake videos (known as deepfakes due to the way in which they are produced – plus it sounds cool) are an insidious form of online disinformation, exploiting the innate assumption that photographic evidence is more reliable than written or spoken evidence.

These sources of ‘information’ inevitably erode our trust in traditional sources of information and ultimately promote disengagement from democratic processes, arguably fundamental to a civilised society.

Deepfake detection

In response to this problem, there has been an extensive body of research into artificial intelligence (AI) methods of deepfake detection to weed out the fakes before they reach the audience; human vulnerability to deepfakes, however, remains largely unexplored.

The AI approach recently prompted one of the more culpable disseminators of false information to issue a challenge to coders to find a way to automatically detect manipulated videos: the Facebook Deepfake Detection Challenge (DFDC). Facebook/Kaggle created over 124,000 videos, both real and fake, of unknown ‘actors’ talking about day-to-day things to train and test the submitted code.

The winning code (written by Selim Seferbekov) was able to distinguish a real video from a fake video in the private test set 66% of the time (guessing would yield 50%). That’s not particularly good, and given the rate at which the quality of the fakes improves, that performance is likely to get worse rather than better over time unless the detection technology can keep up.

We think humans can do better, so, using the publicly available DFDC stimulus set, we developed our own challenge for National Science Week in Victoria: Fake Out, a short (15-minute) challenge in which we showed a selection of the DFDC videos and asked participants to judge whether each was real or fake, how confident they were, and on what basis they made their decision if they deemed the video ‘fake’.

We also concurrently ran a longer survey that included more videos and personality measures. The short survey has been completed by over 700 participants thus far, the longer one by over 100. While we are still examining the data, some interesting aspects are already falling out of the analysis.

You Won’t Believe What Obama Says In This Video!, deepfake by Jordan Peele (YouTube)

On average, people (RI – Real Intelligence) seem to be about as good as the machines (AI), with a hit rate of around 62%, but the variation between individuals is large, with the highest scorers achieving almost perfect performance. The AI did not manage this even on the public test videos, where the code could ‘examine’ the stimuli multiple times before coming to a decision; our human subjects saw each video only once (equivalent to the private test set conditions in the DFDC).

When asked about the reasons for their decision, a very common response (apart from stimuli with some obvious glitch or discontinuity, to which we are particularly sensitive in natural images such as faces) was that the person in the video just did not ‘feel’ right: a very RI response. It remains to be seen whether performance correlates with measurable personality traits, and whether the ‘super-detectors’ are good at other related tasks such as facial recognition or memory for time and place, but it is a clear suggestion that RI can beat AI at its own game.

Of course, these results characterise only the first stage of the deepfake lifecycle: being good enough to deceive. The viewer then needs to decide to share it, and to have the means to do so, for the cycle to continue; examining this behaviour is the next phase of the research, but as with all good science, however slow it may seem (for good science is slow), you start at the beginning.

Trust your gut

So, what does this all mean? In the current climate of a global pandemic and heavily distorted political power structures, misinformation and disinformation play a significant role in skewing people’s beliefs and reactions to the world, often with the potential for great damage. From dangerous ‘cures’ for COVID-19 to discrediting a presidential candidate, these things change the course of society, and not in the right direction. Our research suggests deepfakes are not easy to detect when done well, so it might well make sense to trust your gut if you do see something odd or unexpected.

With the pandemic merely the forerunner of the crises we will have to deal with in the 2020s, accurate, believable information is crucial. We need to believe Dr Norman Swan when we see him on our screens and hope for better leaders at every level; they need to be allowed to speak, and to be heard and seen, unadulterated.

I once harboured hopes that Donald Trump was simply an elaborate fake to warn us of what might happen if we were truly that careless, but the sad truth is that no fakery can make him more repulsive to humanity than he really is.