You Don’t Smell Human
from Issue 30
BY DOMINIQUE LANGEVIN
In an age where an artificial intelligence can hire a human being to solve a CAPTCHA for it by claiming to be a person with a vision impairment, many people worry that the AI uprising is close at hand. However, while this technology is rapidly becoming better than humans at a number of tasks, it is nowhere near perfect. Sometimes, for example, it labels a human as a giraffe because they are wearing a fun sweater. This is one instance of what are called adversarial examples: inputs to a machine learning system created by applying small but intentionally worst-case “perturbations” to images from the dataset. That might mean altering a single pixel, or adding a faint, noise-like pattern computed from the model's own gradients, a technique known as the fast gradient sign method. An input counts as an adversarial example if the perturbation causes the model to output an incorrect answer with high confidence (1).
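For the curious reader, here is a minimal sketch of the fast gradient sign method in Python, using PyTorch. The `model`, `image`, and `label` are stand-ins for whatever classifier and data are under attack, and `epsilon` sets how faint the added noise is; real attacks tune all of these carefully.

```python
# A minimal sketch of the fast gradient sign method (FGSM) described above.
# `model`, `image` (a batched pixel tensor in [0, 1]), and `label` are assumed
# to exist already; epsilon controls how faint the perturbation is.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that the model is likely to misclassify."""
    image = image.clone().detach().requires_grad_(True)
    output = model(image)                    # forward pass
    loss = F.cross_entropy(output, label)    # how wrong the model is on the true label
    loss.backward()                          # gradient of the loss w.r.t. the input pixels
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values valid
```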
These attacks on image recognition AI range from one-pixel tweaks that fool digital recognition systems to special styles of makeup that protesters use to escape identification. One-pixel attacks and the fast gradient sign method tend to be the province of people well-versed in mathematics and computer science who are comfortable coding. But many others are using this technology to fight digital authoritarianism with only a layman's knowledge of how these machine learning systems work.
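As an illustration, here is a minimal sketch of a one-pixel attack in Python, using SciPy's differential evolution optimizer. The `predict` function is a hypothetical stand-in for any classifier that returns class probabilities; the search simply looks for the one pixel change that most undermines the model's confidence in the correct class.

```python
# A minimal sketch of a one-pixel attack. `predict` is a hypothetical
# stand-in for an image classifier that maps an (H, W, 3) array to a vector
# of class probabilities. The optimizer searches over a single candidate
# pixel (x, y, r, g, b) to minimize confidence in the true class.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict, image, true_class, iters=30):
    height, width, _ = image.shape

    def apply_pixel(candidate, img):
        x, y, r, g, b = candidate
        out = img.copy()
        out[int(y), int(x)] = (r, g, b)   # overwrite exactly one pixel
        return out

    def true_class_confidence(candidate):
        return predict(apply_pixel(candidate, image))[true_class]

    bounds = [(0, width - 1), (0, height - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(true_class_confidence, bounds,
                                    maxiter=iters, seed=0)
    return apply_pixel(result.x, image)   # the perturbed image
```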
Facial recognition software must complete four key steps to successfully identify an individual: detection, normalization, extraction, and recognition (2). Some of the first adversarial examples used by the general population were makeup, applied to mask or distort facial features so that the algorithm fails at the third step, the extraction of those features. Adam Harvey developed a successful version of this kind of makeup, a technique he named CV Dazzle. It works by altering the light and dark areas of the face through a combination of makeup and hair styling, and it was able to break the most widely used facial recognition software of its time (3). The Juggalo makeup worn by fans of the band Insane Clown Posse works in a similar way: the exaggerated clown paint prevents cameras from picking out facial features such as the jawline, so the software can still tell it is looking at a human but cannot pin down whose face it is (4). These innovations have been used by protesters to remain anonymous, protecting their right to assembly as partisan law enforcement attempts to make arrests after the fact using social media images or security camera footage.
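To make those four steps concrete, here is a schematic sketch in Python. Every function here is a dummy stand-in, defined only so the flow can run; real systems implement each stage with trained models. The point is that makeup like CV Dazzle targets the hand-off into step three.

```python
# A schematic, hypothetical sketch of the four-stage pipeline described above.
# The stage functions are dummies that only name the stages.

def detect_faces(frame):          # 1. detection: locate face regions in the frame
    return [frame]                # dummy: pretend the whole frame is one face

def normalize(face):              # 2. normalization: align, rescale, correct lighting
    return face

def extract_features(face):       # 3. extraction: reduce the face to a feature vector
    return [sum(face) % 97]       # dummy "feature vector"

def recognize(features, known):   # 4. recognition: match features against a database
    return known.get(tuple(features), "unknown")

def identify(frame, known):
    return [recognize(extract_features(normalize(f)), known)
            for f in detect_faces(frame)]

print(identify([10, 20, 30], {(60,): "Alice"}))  # -> ['Alice']
```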
Blending In By Standing Out

Additionally, as facial recognition software has improved beyond reading the light and dark spots of one's face to using facial geometry for identification (3), new, popularly accessible adversarial technologies have kept pace. Cap_able, an Italian garment company, has taken adversarial examples a step further and turned them into a fashion item. The designers used their own AI to create a fast-gradient adversarial patch, which they then applied to items of clothing such as sweaters and t-shirts (5). By putting this product on the market, they allow the public to easily avoid being tracked by networks of security cameras, like those present in London or across China. It is a simple yet effective way to keep the data of your daily life from becoming the government's property.
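As a rough illustration of how such a patch could be made, here is a minimal sketch in Python with PyTorch. The `detector` is an assumed stand-in that returns a "person" confidence score, and pasting the patch in a fixed corner is a simplification; published attacks also vary the patch's position, scale, and lighting so it survives the real world.

```python
# A minimal, hypothetical sketch of optimizing an adversarial patch.
# `detector` is an assumed stand-in mapping a batch of images to a "person"
# confidence score; real attacks use an actual object detector.
import torch

def train_patch(detector, images, steps=500, lr=0.01, size=64):
    patch = torch.rand(3, size, size, requires_grad=True)  # start from random noise
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for image in images:                              # image: (3, H, W) tensor in [0, 1]
            pasted = image.clone()
            pasted[:, :size, :size] = patch.clamp(0, 1)   # paste patch in a fixed corner
            score = detector(pasted.unsqueeze(0))         # assumed: person confidence
            loss = score.mean()                           # lower score = harder to detect
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return patch.detach().clamp(0, 1)                     # the finished printable pattern
```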
As digital authoritarianism continues to rear its head in our society, people are harnessing adversarial examples to fight back, and they are using their own AI to do so. As AI advances, it becomes less vulnerable to these kinds of attacks; but because adversarial examples are themselves generated with AI, each advance also hands people new tools for subverting the technology. As much as news outlets can make it feel as though these systems exist in an industry vacuum, they will inevitably be absorbed into society, leading to weird, creative, and innovative uses that drive further progress and remind governments and corporations that they do not hold all the power.