
BEYOND THE PIXELS

The digital revolution has radically changed the way people interact. Today almost half of the world’s population is connected to the internet, whether watching or sharing videos, browsing images on social media, or reading and watching the news. We now have 24/7 access to the rest of the world online. But not everything we see is reality.

Pam Johnston, a developing lecturer at RGU’s School of Computing Science and Digital Media, has been looking beyond the pixels to detect when an image or a video has been manipulated – ultimately helping to expose digital tampering.


Pam began her journey with RGU as a KTP associate working with Dr Eyad Elyan and a local technology company. When that project came to an end, she applied for a fully funded PhD focussed on learning and utilising video compression features for the localisation of digital tampering.

“Video compression is pervasive in digital society,” Pam comments. “With rising usage of deep convolutional neural networks (CNNs) in the fields of computer vision, video analysis and video tampering detection, it is important to investigate how patterns invisible to human eyes, like compression, may be influencing modern computer vision techniques.

Video compression refers to methods for reducing the amount of data needed to encode digital video content. For a given clip, this reduction translates into smaller storage requirements and lower transmission bandwidth.
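For the technically curious, the principle can be sketched in a few lines of code. This is a toy illustration only – real codecs such as H.264 rely on motion compensation and transform coding, not this scheme – but it shows the core trade: quantising pixel values discards detail so that a simple run-length encoding needs far fewer entries.

```python
# Toy lossy compression: quantise pixel values (discarding fine detail),
# then run-length encode. Fewer distinct values -> fewer runs -> less data.

def quantise(pixels, step):
    """Snap each value to the nearest multiple of `step` (loses detail)."""
    return [step * round(p / step) for p in pixels]

def run_length_encode(pixels):
    """Encode a sequence as [value, run-length] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

# A smooth gradient, like a patch of sky in a video frame.
row = [i // 3 for i in range(120)]
coarse = quantise(row, 8)            # heavier quantisation = more loss
encoded = run_length_encode(coarse)
print(len(row), "values ->", len(encoded), "runs")
```

The quantised row still looks close to the original to a human eye, yet its encoding is a fraction of the size – the essence of lossy compression.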

“I’ve been looking at how compression affects learning in neural networks. I knew all data would be compressed to some degree and that compression algorithms were specifically designed to fool human eyes, but no one seemed to be looking at whether they also affected – or even fooled – machine vision algorithms. I look at videos and see that they are compressed, which means they’re not perfect representations of reality. The compression patterns reveal a little bit about the history of a video, and I started investigating whether computers could detect that – whether CNNs could be trained to estimate the level of compression right from the pixels themselves.”
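Pam’s CNNs learn such patterns from data. The hand-crafted toy below is not her method, but it hints at why the compression level is recoverable from pixels at all: quantisation leaves an arithmetic fingerprint. Here it is recovered with a greatest common divisor, which is fragile on real, noisy frames – precisely where a trained network earns its keep.

```python
# If pixels were quantised to multiples of some step, the differences
# between values share that step as a common factor - a simple,
# fragile fingerprint that a trained CNN learns far more robustly.
from functools import reduce
from math import gcd

def quantise(pixels, step):
    """Snap each value to the nearest multiple of `step`."""
    return [step * round(p / step) for p in pixels]

def estimate_step(pixels):
    """Estimate the quantisation step as the GCD of value differences."""
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:]) if a != b]
    return reduce(gcd, diffs) if diffs else 0

patch = [17, 23, 31, 44, 58, 63, 71, 90, 104, 119]
for step in (4, 8, 16):
    print("true step:", step, "estimated:", estimate_step(quantise(patch, step)))
```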

Around the time of Pam’s research, ‘Deep Fakes’ hit the headlines. Also known as ‘digital puppetry’, this form of video tampering involves altering people’s faces, or putting someone’s face over an actor’s.

Pam explained: “The thing that made my research a little bit more unique is that I looked at trying to detect multiple different tampering techniques with one algorithm. We reviewed some of the current video manipulation techniques and it’s frightening to see how realistic some of them are. The whole point of tampering with a video is to make it look like it hasn’t been tampered with. Humans can’t even see some video tampering, let alone guess at the type of tampering, so new, computer-based detection techniques are required – and they should be broadly applicable to multiple tampering types.

“Of course, you need quite a lot of data and a few powerful computers to train a CNN to detect compression straight from the pixels, but we managed it to some degree in the end.

‘Deep Fakes’ are a form of video tampering that involves altering people’s faces, or putting someone’s face over an actor’s

“I am working towards a detection technique that works on different kinds of video manipulation – I have one algorithm that detects two different kinds of tampering using compression estimates from my trained CNN.

A digital fake is a digital video, photo, or audio file that has been altered or manipulated using editing software.

“Ultimately, I want to be able to detect images that have been entirely manufactured, and reliably detect images that have been manipulated and then re-compressed. So right now my research is focussing more on detecting recompression from pixels – estimating the number of times a given image patch has been compressed. Again, that’s mostly invisible to human eyes, but research shows that computers are quite good at it.
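Why recompression is detectable at all can be illustrated with the same toy quantisation as before – a sketch only, since real double-compression detectors typically analyse statistics of transform (DCT) coefficients rather than raw values. Quantising with one step and then another leaves ‘forbidden’ values that a single compression would have filled in.

```python
def quantise(pixels, step):
    """Snap each value to the nearest multiple of `step`."""
    return [step * round(p / step) for p in pixels]

def empty_bins(pixels, step, lo=0, hi=120):
    """Multiples of `step` in [lo, hi) that never occur in the patch."""
    seen = set(pixels)
    return [v for v in range(lo, hi, step) if v not in seen]

patch = list(range(120))                    # dense range of pixel values
single = quantise(patch, 3)                 # compressed once
double = quantise(quantise(patch, 5), 3)    # compressed, then recompressed

# Single compression fills every multiple of 3; double compression
# leaves periodic gaps - a telltale sign the patch was re-compressed.
print("empty bins, single:", len(empty_bins(single, 3)))
print("empty bins, double:", len(empty_bins(double, 3)))
```

The gaps in the double-compressed histogram are the kind of invisible-to-humans trace that a detector can pick up from the pixels alone.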

“If we can detect it reliably, then we’ll be able to detect when a video has been manipulated and help to prevent digital puppetry.”
