NUPR Spring 2021


Robert Williams had just pulled into his Farmington Hills driveway after another long day’s work, relieved to spend time with his wife and two daughters. Nothing seemed amiss. But something was very wrong. A police car waiting down the street inched forward, trapping Williams against his own house. Two officers approached and cuffed him in front of his family as they cried out in bewilderment and distress. His wife’s pleas to learn where he was being taken were met only with one officer’s snarky reply: “Google it.” Williams was driven to a nearby detention center, where law enforcement took his mugshot, fingerprints, and DNA. He was held overnight, interrogated by two detectives, and assigned a court date. But once in court, prosecutors dropped the charges. His arrest, they admitted, was based on “insufficient evidence.” Later, state police revealed that their “insufficient evidence” was the product of a facial recognition machine learning algorithm. It had analyzed grainy video footage from a robbed Detroit retail store and determined that Williams was the offender, even though, as Williams recalled, he looked nothing like the robber.

Williams’s wrongful arrest may be the first of its kind, but it won’t be the last. One in four law enforcement agencies has unregulated access to facial recognition software, and more than half of Americans are in police facial recognition databases. Nor is it surprising that Williams, a Black man, was the first case; according to the National Institute of Standards and Technology, a majority of facial recognition software is racially biased.

To be fair to the Michigan police who arrested Williams, it seems outlandish that machine learning could be racist. Machine learning algorithms are just lines of computer code that minimize error between values in a dataset—called training data—to generate predictive models.

Error is calculated however the coder wants it to be, and the code runs until the error reaches zero or the coder tells it to stop. One common measure of error is the difference between a value and the average value of its group. Squaring each of those differences and adding them together produces a sum of squared errors (SSE), a single number that quantifies the error of an entire dataset. Some machine learning algorithms minimize dataset error by splitting the data into smaller subsets and recalculating the SSE for each one.
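To make that concrete, here is a minimal sketch in Python; the function name and the sample numbers are illustrative only, not drawn from any particular dataset or library.

    # Sum of squared errors (SSE): square each value's distance from the
    # group's mean, then add the squares together.
    def sse(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values)

    # A group of identical values has zero error; a spread-out group does not.
    print(sse([5, 5, 5]))   # 0.0
    print(sse([2, 5, 11]))  # 42.0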


Adding the subsets’ SSEs gives the algorithm a total SSE unique to each split, and the split that produces the smallest total SSE is considered the best and recorded for later. Imagine using a machine learning model to analyze a dataset describing crime in North Carolina. If we fed the variable “population density” to the algorithm, it would try splitting the data into two groups at every density value until it found the split that minimized the total SSE. But the dataset includes other variables, such as year, probability of arrest, and police officers per capita. Machine learning algorithms account for the different variables in the training data by minimizing the total SSE for each one, then selecting the best split out of all the variables. The algorithm then repeats the process inside each subset, splitting the data again and again until the error reaches zero or a set endpoint. The best splits are organized into a model that can be fed new data to predict outcomes. These complex calculations have no obvious racist influences; machine learning algorithms themselves can’t discriminate. However, the data that programmers feed to algorithms can train them to.
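A minimal sketch of that split search, under the same assumptions as the example above: for every candidate variable and every observed value, divide the rows into two groups, add the two groups’ SSEs, and keep the split with the smallest total. The toy rows, column names, and the best_split helper below are hypothetical, not taken from the actual North Carolina data.

    def sse(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values)

    def best_split(rows, features, target):
        """Return (feature, threshold, total_sse) for the lowest-error split."""
        best = None
        for feature in features:                     # every variable is tried...
            for threshold in {r[feature] for r in rows}:  # ...at every observed value
                left = [r[target] for r in rows if r[feature] <= threshold]
                right = [r[target] for r in rows if r[feature] > threshold]
                if not left or not right:            # skip splits that leave a group empty
                    continue
                total = sse(left) + sse(right)       # total SSE unique to this split
                if best is None or total < best[2]:
                    best = (feature, threshold, total)
        return best

    rows = [
        {"density": 0.8, "officers_per_capita": 0.002, "crime_rate": 0.030},
        {"density": 1.4, "officers_per_capita": 0.003, "crime_rate": 0.035},
        {"density": 2.1, "officers_per_capita": 0.004, "crime_rate": 0.052},
        {"density": 3.0, "officers_per_capita": 0.004, "crime_rate": 0.060},
    ]
    print(best_split(rows, ["density", "officers_per_capita"], "crime_rate"))

A full decision-tree learner would repeat this search recursively inside each resulting subset, as described above; notice that nothing in the search knows or cares what any column means.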

Machine learning models have no ethical code. If predicting an outcome by race minimizes error and improves predictive accuracy, the algorithm will do it. The same is true when predicting by sex, religion, nose length, or any characteristic.

