
The Mystery of Algorithms

BY JOE AZZINARO

They Rule the World, But What Are They?

With each passing day, we're told that modern life has reached its current state thanks to something called algorithms. They are fast becoming ubiquitous, touching nearly every human activity: business, leisure, transportation, shopping, even eating and sleeping. But what are they? Where did they come from?


“Algorithms are essentially man-made,” explains a popular blog called Memory. “They are tools that humans have created by drawing from our actions, habits and preferences.”

It’s like baking a cake

In more erudite language, authors Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein say in their book Introduction to Algorithms, “An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. Algorithms are like road maps for accomplishing a given, well-defined task. Even a simple function for adding two numbers is an algorithm, in a sense. If you want to bake a cake, the steps are: preheat the oven; mix flour, sugar, and eggs; pour into a baking pan; and so forth. This set of instructions is what we call a recipe. Recipes tell you how to accomplish a task by performing a number of steps. Algorithms are like recipes.” Except in this case, they are recipes prepared by humans for use by programmers and computers.
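To make the recipe analogy concrete, here is a minimal sketch in Python. It is illustrative only, not code from the book; the function name and the list of recipe steps are invented for the example.

```python
# A minimal, illustrative sketch: even adding two numbers is an algorithm,
# a well-defined procedure that turns inputs into an output.
def add(a, b):
    """Take two numbers as input and produce their sum as output."""
    return a + b

# A recipe is the same idea expressed as an ordered list of steps.
cake_recipe = [
    "Preheat the oven",
    "Mix flour, sugar, and eggs",
    "Pour the batter into a baking pan",
    "Bake until done",
]

if __name__ == "__main__":
    print(add(2, 3))  # -> 5
    for number, step in enumerate(cake_recipe, start=1):
        print(f"Step {number}: {step}")
```

The point of the sketch is simply that both the function and the recipe are finite, ordered sets of instructions that take inputs and produce a result, which is all the textbook definition requires.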

The word “algorithm” can be traced back to the 9th-century Persian astronomer and mathematician Abdullah Muhammad bin Musa al-Khwarizmi, best known as “the father of algebra.” His name, al-Khwarizmi, when Latinized, produced the word “algoritmi.”

Until recently, algorithms were the domain of mathematicians like Alan Turing, who helped break the supposedly unbreakable German “Enigma” code in World War II. Then they became the domain of computer programmers. From algorithmic trading to the Facebook algorithm to algorithmic warfare, the use of algorithms has crisscrossed every facet of life. The internet runs on algorithms. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. All those opinions and postings on social media are brought to you by algorithms.

“Because of the sophistication of algorithms,” says the blog Memory, “and because we perceive them as being free from human error, they must be irreproachable when it comes to making decisions. But this isn’t true. Machine learning algorithms can only do what they’re taught. And since they are created and trained by humans, human flaws will inescapably slink in. Distorted data, incorrect logic, or the biases of human programmers mean algorithms can not only replicate human biases, they can intensify them, too.”

And Hannah Fry, a mathematician at University College London, argues that we need to be paying more attention to the people programming them. “Algorithms are making hugely consequential decisions in our society on everything from medicine to transportation to welfare benefits to criminal justice. Yet the public knows almost nothing about them, and even less about the engineers and coders who are creating them behind the scenes.

“They are changing human life in all sorts of ways. From what we choose to read and watch to who we choose to date, algorithms are increasingly playing a huge role. We’ve invited these algorithms into our courtrooms and our hospitals and our schools, and they’re making decisions on our behalf that are subtly shifting the way our society is operating.

“They are not perfect, and often contain the biases of the people who created them. We shouldn’t blindly trust algorithms, but we also shouldn’t dismiss them altogether. They’re incredibly consistent. They never get tired, and they’re absolutely precise. The problem is that algorithms don’t understand context or nuance. They don’t understand emotion and empathy in the way that humans do,” she explains.

A stronger regulatory framework

“But we do need a stronger regulatory framework,” says Fry. “We’ve been living in the technological Wild West, where you can collect private data on people without their permission and sell it to advertisers. We’re turning people into products, and they don’t even realize it. And people can make any claims they want about what their algorithm can or can’t do, even if it’s absolute nonsense, and no one can really stop them from doing it.”

Technologist Anil Dash concurs, “The best parts of algorithmic influence will make life better for many people, but the worst excesses will harm the most marginalized. We’ll need both industry reform within the technology companies creating these systems and more savvy regulatory regimes to handle the challenges that arise.”

John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, adds, “I am most concerned about the lack of algorithmic transparency. We are a society that takes its life direction from our smartphones. Guidance on everything from the best BBQ to who to pick for a spouse is algorithmically generated. There is little insight, however, into the values and motives of the designers of these systems.”

Fry continues, “Even if a particular algorithm works, there is no one assessing whether or not it is providing a net benefit or cost to society. We need an agency that can protect the intellectual property of a company that comes up with an algorithm but also ensure the public isn’t being harmed or violated in any way.”

As algorithms become more embedded in our daily conversations, culture and commerce, there are issues that will determine how comfortable we become with them. Barry Chudakov, founder of Sertain Research, says, “If every algorithm suddenly stopped working, it would be the end of the world as we know it. We have already turned our world over to algorithms. The question now is, how to better understand and manage what we have done?”

Where are algorithms taking us? What significant benefits lie ahead? Experts see breakthroughs in science, conveniences, human capacities and the ability to connect people to important information.

Stephen Downes, of the National Research Council of Canada, points to banks, health care, and government as examples. “More people will be able to obtain loans in the future, as banks turn away from using factors like race, socio-economic background, zip code, and the like to assess fitness. With more data, and with a more interactive relationship with its clients, banks can reduce their risk, thus providing more loans and a range of services individually directed to help a consumer’s financial state.

“Health care is a growing expense because of the significant overhead required to support increasingly complex systems, including prescriptions, insurance, facilities and more. New technologies will enable health providers to shift a significant percentage of that load to the individual, who will, with the aid of personal support services, manage their health better and create less of a burden on the system.

“Government is based on regulation and monitoring, which will no longer be required with the deployment of automated production and transportation systems, along with sensor networks. This includes many daily and unpleasant interactions we have with government, from traffic offenses to treatment in commercial and legal processes.”

The future of human judgment

As for the future of human judgment? Will it be lost when data and predictive modeling become paramount? Some critics argue that algorithms are primarily written to optimize efficiency and profitability without much thought for societal impact. Writing in a Pew Research Center study titled Code-Dependent: Pros and Cons of the Algorithm Age, authors Lee Rainie and Janna Anderson contend that humans are treated as “input” rather than as real, thinking, feeling, changing beings; that we are creating a flawed, logic-driven society; and that as the process evolves and algorithms begin to write other algorithms, humans may get left out of the loop entirely, letting robots decide.

“The core problem is lack of accountability,” observes Marc Rotenberg, of the Electronic Privacy Information Center. “Machines have become black boxes; even developers and operators do not fully understand how outputs are produced. The problem is exacerbated by an unwavering faith in the reliability of big data. There is a larger problem with the increase of algorithm-based outcomes beyond the risk of error or discrimination: the increasing opacity of decision-making. We need to confront the reality that power is moving from people to machines. That is why transparency is one of the great challenges of our era.”

Researcher Andrew Tutt calls for an FDA for Algorithms, explaining, “The rise of increasingly complex algorithms calls for critical thought about how to best prevent, deter and compensate for the harm they cause. Algorithmic regulation will require federal uniformity, expert judgment, political independence and pre-market review to prevent, without stifling innovation, the introduction of dangerous algorithms.”

Bart Knijnenburg of Clemson University offers: “Algorithms will capitalize on convenience and profit, discriminating against certain demographics, but also eroding the experience of everyone else. The goal of algorithms is to fit some of our preferences, but not all of them. They essentially present a caricature of our tastes and preferences. Unless we tune our algorithms for self-actualization, it will be too convenient for people to follow the advice of an algorithm or too difficult to go beyond such advice, turning these algorithms into self-fulfilling prophecies, and users into zombies.”

Despite their flaws, inconsistencies, even biases, algorithms are here to stay, so we need to learn to coexist with them. “They’re solving more problems than they’re creating,” believes Hannah Fry.

“We should stop thinking about how accurate we can make an algorithm, and how few outliers we have. Instead, we should accept that algorithms are never going to be perfect. Stop over-relying on them and make it so that the human habit of over-trusting machines is considered at every possible step.”
