LAURA WATKINS
TECHNOLOGY
CAN YOU BELIEVE WHAT YOU THOUGHT YOU SAW?

Tom Cruise deepfake (top left), Tom Cruise (top right), Barack Obama deepfake (bottom)

A few months ago, Tom Cruise joined the myriad of celebrities on TikTok who, deprived of an audience, were uploading videos of themselves in their homes and #keepingitreal. Cruise did magic tricks, turned a cookie into currency, cleaned the floor, and talked about the importance of exfoliator. Except he didn't. The videos were uploaded to the account @deeptomcruise, and they are the work of a very talented visual effects artist. The account was made to have fun, and to make people aware of what is now possible. It's working, and the murky world of deepfake technology is gaining more attention. Reality has never been more flexible.

Although the Cruise videos were made with good humour – they are clearly presented as clever fakes and don't feature the actor doing anything unsavoury – they are the most recent example of how far machine learning has come, and of what is possible. Deepfake is a combination of 'deep learning' and 'fake': someone combines an existing video or image with another person's likeness. This may involve acting, clever lighting and direction, but it also relies on powerful machine learning and AI to manipulate images and footage and generate believable results.

One of the first and best-known examples is the video of Barack Obama, created in 2017, also to raise awareness of the technology. Machine learning modelled the President's mouth using only 14 hours of footage, which then allowed the developers to put any words into his mouth, resulting in very realistic videos. Every time a new video surfaces, it seems to be even more believable, and the most recent Cruise videos would be incredibly difficult to detect.
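The Obama and Cruise clips were built with different pipelines, but the early face-swap tools that popularised the word 'deepfake' mostly shared one simple idea: a single encoder learns what faces look like in general, while a separate decoder per person learns to redraw that specific face. The sketch below is purely illustrative – the layer sizes, names and shapes are assumptions, not any real tool's code – and shows how swapping decoders at inference time puts person B's face onto person A's pose and expression.

```python
# Illustrative sketch only (assumed layer sizes and names, not any real tool's code)
# of the shared-encoder / two-decoder autoencoder idea behind early face-swap deepfakes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

# One encoder is shared; each identity gets its own decoder.
encoder = Encoder()
decoder_a = Decoder()   # would be trained to reconstruct person A's face
decoder_b = Decoder()   # would be trained to reconstruct person B's face

# Training (not shown) reconstructs A's crops with decoder_a and B's with decoder_b.
# The "swap" at inference time: encode a frame of A, then decode it with B's decoder,
# so B's face appears with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real video frame of person A
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)                            # torch.Size([1, 3, 64, 64])
```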
Even experts examining the Cruise videos have said that only a slight distortion around the pupils gives them away.

Unsurprisingly, around 90–95% of deepfake videos are porn, and around 90% of those are non-consensual porn of women. In fact, the technology got its start in this arena, with a tech-savvy Reddit user swapping female celebrities' faces onto porn videos back in 2017. It wasn't long before it began to be used in a 'revenge' capacity, with women finding intimate or violent images of themselves online that they never posed for. Unlike revenge porn (where someone makes public footage or images that weren't meant for public sharing), there is no law against faked images or videos, and nothing the police can do. The real issue arises when the fakes look so real that anyone watching would believe them. How do you explain that the images of you in compromising positions online aren't actually you?

Hot on the heels of this new technology came apps that do the same thing. DeepNude, launched in 2019, helped users create the videos they wanted with minimal input or knowledge of their own, and there is code that uses AI to remove the clothing of any woman whose image you upload. When the resulting deepfake images are indistinguishable from the real thing, is an invasion of privacy taking place? Where is the line? The UK is looking at its laws around online harassment, so hopefully the legal system will soon catch up with the technology available.

Offering almost limitless potential for misuse, this technology has criminals waking up to the possibilities it offers. Businesses, governments and the public need to take note of the dangers it poses and consider how best to tackle this new threat to truth. Despite the dangers, the technology currently sits at the fringes of public awareness – around 80% of the general public are unaware of what deepfake technology is – but it is not difficult to see the potential issues for business, politics and healthcare.