
A Question of Neural Networks and AI Learning

BY DOMINIQUE LANGEVIN

Despite the incredible progress made in artificial intelligence in recent years, many image recognition systems are running into a similar problem: if just a couple of pixels are changed in an image, they will completely misidentify its contents, consistently getting it wrong even when a human can still tell what is there (1). These altered images are called adversarial examples (1), and an attack can succeed with the alteration of just one pixel, known as a one-pixel attack (2). This poses a serious risk to any government or organization attempting to use image recognition AI, since it means a hacker could incapacitate their system with relative ease (2).

These advanced AI systems often rely on deep learning neural networks, which are designed to mimic human neural processing: they contain multiple levels of processing and are trained on massive sets of data (3). In the case of image recognition AI, this learning takes the form of being presented an image, giving an answer, and being corrected when the answer is wrong. Have you ever been around a young child who calls any four-legged animal a dog? The process by which that child learns to distinguish between dogs, cats, and other furry tetrapods is the same process AI scientists are trying to recreate in their systems through machine learning, as sketched below.
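To make that present-answer-correct cycle concrete, here is a minimal, purely illustrative Python sketch using a tiny perceptron-style classifier; the data, labels, and learning rate are all invented for illustration and stand in for real images and a real deep network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy data: each "image" is reduced to 4 made-up features,
# labeled 1.0 for "dog" and 0.0 for "not a dog".
features = rng.random((100, 4))
labels = (features[:, 0] + features[:, 1] > 1.0).astype(float)

weights = np.zeros(4)
bias = 0.0
learning_rate = 0.1

# The present-answer-correct loop: show an example, take the model's
# answer, and nudge the weights whenever that answer is wrong.
for epoch in range(20):
    for x, y in zip(features, labels):
        prediction = float(weights @ x + bias > 0)  # the model's "answer"
        error = y - prediction                      # zero when correct
        weights += learning_rate * error * x        # the correction step
        bias += learning_rate * error

accuracy = np.mean((features @ weights + bias > 0) == labels)
print(f"training accuracy: {accuracy:.2f}")
```

A real image classifier replaces this single layer with many stacked layers and gradient-based corrections, but the show-answer-correct rhythm is the same.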

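The fragility described above can be sketched as well. The original one-pixel attack (2) used differential evolution to search for a single pixel whose position and color change the classifier's output. Below is a minimal sketch of that search; the classifier here is an invented stand-in (a fixed random linear map) rather than a trained deep network:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Stand-in "classifier": a fixed random linear map from flattened pixels
# to 3 class scores. A real attack would target a trained deep network.
W = rng.normal(size=(3, 8 * 8 * 3))

def class_probs(image):
    logits = W @ image.ravel()
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

image = rng.random((8, 8, 3))  # toy 8x8 RGB image
original_label = int(np.argmax(class_probs(image)))

def apply_pixel(img, params):
    x, y, r, g, b = params
    perturbed = img.copy()
    perturbed[int(x), int(y)] = (r, g, b)  # overwrite exactly one pixel
    return perturbed

# Differential evolution searches for the one pixel (position + color)
# that most reduces confidence in the originally predicted class.
def objective(params):
    return class_probs(apply_pixel(image, params))[original_label]

bounds = [(0, 7.99), (0, 7.99), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(objective, bounds, maxiter=30, seed=0)

adversarial = apply_pixel(image, result.x)
print("prediction before:", original_label)
print("prediction after: ", int(np.argmax(class_probs(adversarial))))
```

When the search succeeds, the two printed labels differ even though only one pixel of the image has changed.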

DID YOU KNOW?

reCAPTCHA is a machine learning tool: every time you fill one out, you are helping to train image recognition AI.

The issue is that, like human brains, deep learning neural networks tend to be black boxes, with even their designers having trouble understanding the true mechanisms by which these networks process information (3). This is what makes the adversarial example problem so difficult to solve: because it is nearly impossible to know how information is being processed inside the network, it becomes nearly impossible to correct the defect that causes an image with a few altered pixels to be misrecognized. But what if the issue weren't within the neural network itself, but rather a missing component?

All this time, scientists have been seeking to harness the immense processing power of human neural networks. However, neurons make up only about 10 percent of our total brain cells; the remaining 90 percent are glial cells, such as astrocytes, oligodendrocytes, and microglia (4). Each kind of cell serves a number of purposes, but the one to focus on as a potential solution to the adversarial example problem is the astrocyte. Fields et al. (2014) described the key role astrocytes play in human perception of figure-ground relationships, owing to their function in connecting and regulating large numbers of neuronal connections across many regions of the brain. Astrocytes allow for the successful completion of the complex cognitive functions needed for perception and large-scale analysis (5). This is what lets us distinguish between the foreground and background of a scene and use depth cues on a flat image. In classic black-and-white figure-ground illusions, these cells allow us to mentally switch between seeing either the black or the white shape as the foreground, with the other becoming the background. We have astrocytes to thank for the saying "once you see it, you can't unsee it."

Finally, the question is: if AI engineers were to incorporate synthetic astrocytes, or to mimic astrocyte function in their neural networks, could the adversarial example problem be solved?
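No one yet knows what such a component would look like. As a purely speculative illustration, though, one could imagine an "astrocyte-like" stage that watches activity across an entire layer at once and modulates all of its units together, loosely echoing how astrocytes coordinate large numbers of neuronal connections. Every name and mechanism below is invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# An ordinary two-layer network: local, connection-by-connection processing.
W1 = rng.normal(scale=0.5, size=(16, 8))
W2 = rng.normal(scale=0.5, size=(3, 16))

def astrocyte_gate(hidden, strength=0.5):
    """Invented 'astrocyte-like' step: compute one global summary of the
    layer's activity and use it to rescale every unit together, loosely
    echoing how astrocytes coordinate many connections at once."""
    global_activity = hidden.mean()
    gate = 1.0 / (1.0 + np.exp(-strength * (hidden - global_activity)))
    return hidden * gate

def forward(x, use_astrocytes=True):
    hidden = relu(W1 @ x)
    if use_astrocytes:
        hidden = astrocyte_gate(hidden)  # global, cross-unit modulation
    return W2 @ hidden                   # class scores

x = rng.random(8)
print("without gating:", forward(x, use_astrocytes=False))
print("with gating:   ", forward(x, use_astrocytes=True))
```

Whether anything like this global gating would actually blunt one-pixel attacks is exactly the open question the article poses.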
