GLOBAL PERSPECTIVE

/ By Deon Van Zyl /

Emerging Threats: The Dangerous Duo of Deepfakes and Social Engineering

Social engineering is the practice of deceiving people into disclosing private information or carrying out specific actions, and it is a major concern for individuals and organizations across Africa. Artificial intelligence (AI) is making it even more complex and effective, with numerous AI technologies available, many of them free and easy to use.

Deepfake pictures, audio, and video are examples of how AI is being used in social engineering. Copied photos of real people have been used in phishing profiles for many years, but AI-generated photos of non-existent people are a newer element that might make you look twice. LinkedIn, used by both job seekers and employers, is a target for fake profiles with this AI twist. Such profiles and photos can seem so authentic that even Twitter has been fooled into stamping them as verified.

AI-generated voices are near perfect, able to mimic celebrities or any other person given enough data. AI voices were recently used in the theft of $35 million by impersonating a company's director.

Lastly, the most familiar format might be the deepfake video, in which the face of a person in an original video is replaced with another face. These ultra-realistic personas are a growing cybersecurity risk to businesses.

The key areas of concern:

> Corporate sabotage

> Stock manipulation

> Fraud

> Defeat of biometric security

> Propaganda

> Deepfake extortion: portraying individuals or organizations engaging in a variety of illicit (but fictitious) activities that could harm their reputation

> Election influence

> Inciting violence

> Deepfake kidnapping

> Cyberbullying

> Production of false evidence in criminal cases

While AI has given criminals new and sophisticated ways to perpetrate cybercrime, it has also given businesses a chance to create new tools and strategies to defend against AI-powered cyberattacks.

Solutions on the horizon

There are numerous solutions coming to the forefront to address AI-generated media. China has issued regulations that make it illegal to create AI media without watermarks, a good indication of how governance and law might help in this fight.

Adobe (in partnership with Microsoft) has approached the problem with a solution called “Content Credentials.” Companies that adopt the solution add a button to videos and images stating the history of the content (such as who took the photo, and when it was made and edited), which then serves as an indicator of trust, although it is not a silver bullet.

Deepfake audio detection is also possible, with new research that points to tells distinguishing synthetic from organic voices. These tells arise because human speech is shaped by the anatomy of the biological vocal tract, which synthetic voices do not reproduce exactly.

Intel has introduced a real-time deepfake detector that promises to spot fakes even in formats such as streaming video by looking for authentic clues in real footage, assessing what makes us human, such as subtle “blood flow” in the pixels of a video. Clearly, deepfakes will increasingly be countered by protective AI measures.

Mitigation measures

Internal/Digital Security: Traditional security measures such as antivirus (spyware and trojan protection), spam protection, and network traffic monitoring. These help prevent even text-based attacks like phishing and guard against digital snooping.

Implementation of efficient security policies and procedures. Prevent data leakage: text, audio, photos, and videos may otherwise become available to unintended viewers for nefarious uses. Secure meeting and conversation recordings: introduce passwords and encryption so that bad actors cannot use the data.
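
Encrypting recordings is best left to a vetted cryptography library, but even Python's standard library can help detect whether a stored recording has been altered or substituted, which matters when deepfakes put the authenticity of media in question. A minimal sketch (the key handling here is illustrative only, not production practice; it complements rather than replaces encryption):

```python
import hashlib
import hmac
import os

# Illustrative secret key; in practice derive it from a strong password
# (e.g. via hashlib.pbkdf2_hmac) and store it separately from the data.
KEY = os.urandom(32)

def tag(recording: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a recording's raw bytes."""
    return hmac.new(KEY, recording, hashlib.sha256).hexdigest()

def is_untampered(recording: bytes, expected_tag: str) -> bool:
    """Verify a recording against its stored tag (constant-time compare)."""
    return hmac.compare_digest(tag(recording), expected_tag)

original = b"...raw audio bytes of a recorded meeting..."
t = tag(original)                           # store alongside the recording
print(is_untampered(original, t))           # True: data unchanged
print(is_untampered(original + b"x", t))    # False: data was altered
```

The tag is computed when the recording is made; any later edit to the bytes, however small, changes the tag and fails verification.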

Training and Security Awareness: Continuous training, which may include inspecting for visual or audio tells such as limited blinking, glitches such as blurring, choppy sentences, or phrasing that seems out of place. At the text level, email addresses should be inspected for validity, along with the context, spelling, and structure of the message.
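
Part of that text-level inspection can be automated. As a minimal sketch using only the standard library (the trusted-domain list and the similarity threshold are illustrative assumptions, not from this article), a script can flag sender domains that closely resemble, but do not exactly match, a domain the organization trusts, a common phishing tell:

```python
import difflib

# Illustrative list of domains the organization actually uses (assumption).
TRUSTED_DOMAINS = {"example.com", "example.org"}

def lookalike_score(domain: str) -> float:
    """Highest similarity between the sender's domain and any trusted domain."""
    return max(
        difflib.SequenceMatcher(None, domain, trusted).ratio()
        for trusted in TRUSTED_DOMAINS
    )

def is_suspicious(sender: str, threshold: float = 0.8) -> bool:
    """Flag addresses whose domain is close to, but not exactly, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match passes this particular check
    return lookalike_score(domain) >= threshold

print(is_suspicious("ceo@example.com"))   # False: exact trusted domain
print(is_suspicious("ceo@examp1e.com"))   # True: lookalike with digit "1"
print(is_suspicious("friend@gmail.com"))  # False: unrelated, not a lookalike
```

A check like this only catches lookalike domains; it does not replace inspecting the context, spelling, and structure of the message itself.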

Penetration testing: Get a second opinion. This is a great measure of the effectiveness of the security measures you have implemented.

By implementing these steps, combined with AI-assisted countermeasures, we can prevent deepfake-based social engineering attacks and lessen the effects of cybercrime.

Deon van Zyl (Norway)

BCom (Hons), Senior System Developer

Linkedin: deonvanzyl

Deon is a sophisticated technical IT professional with a solid history of effectively bridging the gap between Programming, Security, Digital Forensics, Artificial Intelligence, and Teaching. His track record of over 24 years spans major corporations, academic institutions, and government.
