
CAN YOU BELIEVE WHAT YOU THOUGHT YOU SAW?


A few months ago, Tom Cruise joined the myriad of celebrities on TikTok who, deprived of an audience, were uploading videos of themselves at home and #keepingitreal.

Cruise did magic tricks (turning a cookie into currency), cleaned the floor, and talked about the importance of exfoliator. Except he didn’t. The videos were uploaded under the account name @deeptomcruise, and they are the work of a very talented visual effects artist.

The account was created for fun, and to make people aware of what is now possible. It’s working: the murky world of deepfake technology is gaining more attention. Reality has never been more flexible.

Although the Cruise videos have been made with good humour – they are clearly presented as clever fakes and don’t feature the actor doing anything unsavoury – they are the most recent example of how far machine learning has come, and what is possible.

‘Deepfake’ is a combination of ‘deep learning’ and ‘fake’, whereby an existing video or image is combined with someone else’s likeness. The process may involve acting, clever lighting and direction, but it also uses powerful machine learning to manipulate images and footage and generate believable results.
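For the technically curious, the classic face-swap approach behind many of these videos trains one shared encoder with a separate decoder per identity; a face is swapped by encoding it and decoding it with the other person’s decoder. Below is a minimal sketch of that idea (the PyTorch layer sizes, the 64x64 input and all names are illustrative assumptions, not taken from any particular tool):

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Layer sizes and the 64x64 input are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent code: pose/expression
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct person A's faces
decoder_b = Decoder()   # trained to reconstruct person B's faces

# Training: each decoder learns to reconstruct its own person's faces
# from the SHARED latent space (loss = reconstruction error).
faces_a = torch.rand(8, 3, 64, 64)   # stand-in for real aligned face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: encode person A's face, decode with person B's decoder,
# producing B's identity with A's pose and expression.
fake_b = decoder_b(encoder(faces_a))
```

Because the encoder never learns which identity it is looking at, it is forced to capture only pose, lighting and expression; the identity lives in the decoders, which is what makes the swap possible.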

One of the first and best-known examples is the video of Barack Obama, created in 2017, also to increase awareness of the technology. Machine learning modelled the President’s mouth using only 14 hours of footage, which then allowed the developers to put any words into his mouth, resulting in very realistic videos. Every time a new video surfaces, it seems even more believable, and the most recent Cruise videos would be incredibly difficult to detect. Even experts have said that only a slight distortion around the pupils gives them away.

Unsurprisingly, around 90-95% of deepfake videos are porn, and around 90% of those are nonconsensual porn of women. In fact, the technology got its start in this arena, with a tech-savvy Reddit user swapping female celebrity faces onto porn videos back in 2017. It wasn’t long before the technology started to be used in a ‘revenge’ capacity, with women finding intimate or violent images of themselves online that they never posed for.

Unlike revenge porn (where someone publishes footage or images that were never meant for public sharing), there is no law against faked images or videos, and nothing the police can do. The real problem arises when the fakes look so real that anyone watching would believe them. How do you explain that the compromising images of you online aren’t actually you?

Hot on the heels of this new technology came apps that do the same thing. DeepNude, launched in 2019, helped users create the videos they wanted with minimal input or knowledge of their own, and there is code that uses AI to remove the clothing of any woman whose image you upload.

When the resulting deepfake images are indistinguishable from the real thing, is an invasion of privacy taking place? Where is the line?

The UK is reviewing its laws around online harassment, so hopefully the legal system will soon catch up with the technology.

Offering almost limitless potential for misuse, this technology has criminals waking up to its possibilities. Businesses, governments and the public need to take note of the dangers it poses and consider how best to tackle this new threat to truth.

Despite the dangers, deepfakes currently sit at the fringes of public awareness (around 80% of the general public are unaware of what the technology is), but it is not difficult to see the potential issues for business, politics and healthcare.

Given the current vaccine hesitancy across the world, we can all too readily imagine the impact of a faked video of a politician or healthcare worker talking about how vaccines contain microchips, or how shares would react to a video of a politician promising tighter business restrictions. Impacts would be felt long after the video was proven a fake.

REALITY HAS NEVER BEEN MORE FLEXIBLE.

It also offers a new avenue for fraud. Though the public are aware of scams over traditional channels (email, text, phone), millions are still lost each year as scammers grow ever more sophisticated. What would be the impact of this new technology?

Imagine going online to meet your boss or financial advisor over Zoom. You talk about moving money around, making purchases or investments, and once the meeting ends you make the transactions you discussed. Your actual boss or financial advisor has no idea that you think you’ve spoken to them.

This isn’t as unlikely as you may think. The first known case involving this technology happened in 2019, when criminals used machine learning to impersonate the voice of the chief executive of a UK-based energy firm’s parent company. The firm’s CEO thought he was talking to his boss on the phone and was told he urgently needed to pay £200,000 to a Hungarian supplier. The money was transferred, but when the criminals tried for a second payment later that day, the CEO became suspicious and reported it. The £200,000 was swiftly moved to Mexico and lost in a sea of bank accounts.

It is unknown if this is the first case of its kind, as previous victims may not have reported a crime, or the use of AI may not have been detected. Either way, companies are going to have to get more serious about fraud, cyber-crime and protecting themselves.

In the above scenario, as we’ve seen in the earlier examples of Obama and Cruise, fraudsters would only need 14 or more hours of footage of the person they are imitating (from awards shows, press conferences, or even harvested from social media). They can then build a working AI ‘puppet’ of that person, which can be manipulated to say whatever they want.

PROTECTING YOURSELF

We are somewhat behind in detecting this technology; as always, the threat appears before the protection is developed.

There are software programmes that can detect deepfakes; however, these are more useful when, for example, a video emerges of a politician saying something unsavoury. They look at blink rates, unusual head poses and facial expressions, and inconsistencies introduced by face-warping.
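Blink counting, for instance, rests on the observation that early deepfakes blinked unnaturally rarely, because training footage contains few closed-eye frames. One common heuristic is the eye aspect ratio (EAR), computed from facial landmarks. The sketch below assumes landmarks have already been extracted per frame by a detector such as dlib; the landmark ordering and threshold values are illustrative assumptions:

```python
# Sketch of blink counting via the eye aspect ratio (EAR).
# Assumes six (x, y) landmarks per eye, ordered p1..p6 around the eye
# contour as in the common dlib 68-point layout.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.21, min_frames=2):
    """Count runs of consecutive frames where EAR stays below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# A real pipeline would compute EAR from landmarks on every video frame;
# here we fake a 10-second, 30 fps clip with one blink around frame 150.
ears = [0.30] * 300
ears[150:153] = [0.15, 0.10, 0.15]
print(count_blinks(ears))  # -> 1; humans blink roughly 15-20 times a minute,
                           # so a long clip with almost no blinks is suspicious
```

Heuristics like this are an arms race: once published, newer fakes are trained to blink normally, which is why detection keeps falling behind generation.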

For the day-to-day operations of a business, tackling this technology is unfortunately more about prevention than detection, and there are steps we can take to mitigate the risk:

• Make your social media accounts private and only allow people you know to access your videos and images.

• Minimise communication channels within the company. The CEO above became suspicious in part because an unknown number was used. Consider using only certain lines of communication, and never deviating from them. That way, if a staff member gets a WhatsApp message from you, they’ll know something is wrong.

• Drive consistent information distribution.

• Create multiple layers of authorisation.

• Change passwords on communication platforms regularly.

• Organise central monitoring and reporting of attacks and suspicious activity.

• You may want to consider agreeing a password, or a confirmation call from a known phone number, when discussing sensitive information or making financial transactions (a minimal version of such a check is sketched after this list).

• Talk to a specialist. Each business is different and has different needs. A specialist in cyber security will be able to put together a plan for keeping you and your business as safe as possible.
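To make the layered-authorisation and known-number ideas from the list concrete, here is a hypothetical sketch of how a payment request could be gated behind both multiple approvals and an out-of-band callback to a number already on file. The names, threshold and phone directory are all illustrative assumptions, not a real system:

```python
# Hypothetical sketch: gate a payment behind multiple approvals and an
# out-of-band confirmation call to a phone number already held on file.
from dataclasses import dataclass, field
from typing import Optional

DIRECTORY = {"ceo": "+44 20 7946 0000"}   # numbers on file, agreed in advance

@dataclass
class PaymentRequest:
    requester: str
    amount_gbp: float
    approvals: set = field(default_factory=set)
    confirmed_via: Optional[str] = None   # number the callback actually reached

    def approve(self, approver: str):
        if approver != self.requester:    # no self-approval
            self.approvals.add(approver)

    def is_authorised(self, required_approvals: int = 2) -> bool:
        known_number = DIRECTORY.get(self.requester)
        return (
            len(self.approvals) >= required_approvals  # layered authorisation
            and self.confirmed_via == known_number     # out-of-band callback
        )

req = PaymentRequest(requester="ceo", amount_gbp=200_000)
req.approve("finance_director")
req.approve("company_secretary")
req.confirmed_via = "+44 20 7946 0000"    # call back the number on file,
                                          # never the number the request came from
print(req.is_authorised())  # True only when both layers pass
```

The key design choice is that the confirmation call goes out to the number on record, rather than trusting whatever channel the request arrived on; a cloned voice on an incoming call defeats neither check.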

Creating truly convincing deepfake videos is currently very time consuming and requires specialist skills, but the threat is coming, if it isn’t here already. It will pay to make your business as unappealing a target as possible.

With the proliferation of ‘fake news’ on the internet, it is already crucial to really question what we read and hear. This new technology makes it essential to carefully consider the information we choose to believe.
