A Question of Neural Networks and AI Learning
BY DOMINIQUE LANGEVIN
Despite the incredible progress made in artificial intelligence in recent years, many image recognition systems run into a similar problem: if only a couple of pixels in an image are changed, the system will completely misidentify its contents, and do so consistently, even when a human can still tell what is there (1). Such altered images are called adversarial examples (1), and they can succeed with the alteration of just a single pixel, a technique known as a one-pixel attack (2). This poses a serious risk to any government or organization using image recognition AI, because it makes it remarkably easy for a hacker to incapacitate the system (2).
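To make the idea concrete, here is a minimal sketch of a one-pixel attack. Everything in it is hypothetical and not from the original article: the "classifier" is a toy linear model with random weights standing in for a trained deep network, and the search is a simple brute-force sweep, whereas published one-pixel attacks typically use an optimization method such as differential evolution.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
NUM_CLASSES, H, W = 3, 8, 8

# Hypothetical stand-in "classifier": a linear model with random
# weights. A real one-pixel attack would target a trained deep network.
weights = rng.normal(size=(NUM_CLASSES, H * W))

def predict(image: np.ndarray) -> int:
    """Return the predicted class for a grayscale image."""
    return int(np.argmax(weights @ image.ravel()))

image = rng.uniform(size=(H, W))   # stand-in input image
original = predict(image)

# Brute-force one-pixel attack: push each pixel to an extreme value
# and report the first change that flips the predicted class.
for y, x, value in product(range(H), range(W), (0.0, 1.0)):
    candidate = image.copy()
    candidate[y, x] = value
    if predict(candidate) != original:
        print(f"Pixel ({y}, {x}) set to {value} flips the prediction "
              f"from class {original} to {predict(candidate)}")
        break
else:
    print("No single-pixel flip found on this toy model.")
```

The point of the sketch is how small the attack surface is: the attacker never touches the model itself, only one value in the input.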
These advanced AI systems often rely on deep learning neural networks, which are designed to mimic human neural processing: they contain multiple levels of processing and are trained on massive sets of data (3). In the case of image recognition AI, this learning takes the form of being presented with an image, providing an answer, and being corrected when the answer is wrong (a loop sketched in code at the end of this section). Have you ever been around a young child who calls any four-legged animal a dog? The process by which the child learns to distinguish between dogs, cats, and other furry tetrapods is the same one AI scientists are trying to recreate in their systems through machine learning. The issue is that, like human brains, deep learning neural networks tend to be black boxes, with even the designers having trouble understanding the
FIGURE 1: Adversarial example (1)
DID YOU KNOW?
ReCaptcha is a machine learning tool; every time you fill one out, you are helping to train image recognition AI.
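As promised above, here is a minimal sketch of the answer-and-correction training loop the article describes. It is illustrative only and assumes details not in the article: it trains a single artificial neuron (a perceptron) on made-up data rather than a deep network, but the core cycle of guessing a label and being corrected only when wrong is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up stand-in for a labeled image set: 100 random 8x8 "images"
# whose labels follow a hidden linear rule the learner must discover.
images = rng.uniform(size=(100, 64))
hidden_rule = rng.normal(size=64)
labels = (images @ hidden_rule > 0).astype(int)   # 1 = "dog", say

# Perceptron learner: present an image, take its answer, and correct
# it (nudge the weights) only when that answer is wrong.
weights = np.zeros(64)
for epoch in range(20):
    mistakes = 0
    for image, label in zip(images, labels):
        guess = int(image @ weights > 0)
        if guess != label:                     # the "correction" step
            weights += (label - guess) * image
            mistakes += 1
    if mistakes == 0:                          # every answer now correct
        break

print(f"Finished after {epoch + 1} passes; {mistakes} mistakes on the last pass.")
```

Like the child learning to tell dogs from cats, the learner starts out wrong almost everywhere and is shaped entirely by its corrections.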