ALGORITHM vs. ALGORITHM
CARY COGLIANESE, Edward B. Shils Professor of Law and Professor of Political Science and Director of the Penn Program on Regulation
In “Algorithm vs. Algorithm,” recently published in the Duke Law Journal, Coglianese and Alicia Lai L’21 tackle a key choice increasingly confronting governmental decision-makers: when to automate administrative tasks. They frame this choice as fundamentally one between the use of digital algorithms — such as artificial intelligence (AI) and machine learning — versus the continued reliance on the existing algorithms that constitute human decision-making.
Humans “operate via algorithms too,” write Coglianese and Lai, and these are reflected in status quo governmental processes — including existing administrative procedures.
In this pathbreaking article, Coglianese and Lai offer a framework for determining when government should choose digital algorithms over human ones. Although they caution that “public officials should proceed with care on a case-by-case basis,” they also argue that decision-making about AI ought to be predicated on the acknowledgement “that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making.”
Human algorithms are susceptible to many of the same problems as digital algorithms, they write, and “will in some cases prove far more problematic than their digital counterparts.” Digital algorithms can “improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent,” they argue.
Limitations of Human Algorithms
Coglianese and Lai review the range of physical, biological, and cognitive limitations that afflict human decision-making. These include issues with memory, fatigue, aging, impulse control, and perceptual inaccuracies.
Human bias in various forms can also “lead to systematic errors in information processing and failures of administrative government,” write Coglianese and Lai. While training programs and other attempts at “debiasing” humans may counteract some of these problems, it is not always possible to remove errors and biases completely from human decision-making, they note.
Some of the examples they provide of biases embedded in human “algorithms” include:
• The availability heuristic: that is, “the human tendency to treat examples which most easily come to mind as the most important information or the most frequent occurrences”;
• Confirmation bias: or, “the tendency to search for and favor information that confirms existing beliefs, while simultaneously ignoring or devaluing information that contradicts them”;
• Anchoring effects: by which decisions can be skewed by how questions are framed or selective information is provided;
• System neglect: which often occurs when individuals make decisions in isolation “with insufficient regard to the systemic context”;
• Present bias: or the undue discounting of the future, which can also be related to loss aversion;
• Susceptibility to overpersuasion: which occurs when people are “persuaded by superficial, even irrelevant” appeals; and
• Racial and gender discrimination: which can manifest either as explicit animus or implicit but deeply ingrained biases.
Coglianese and Lai show how these individual human tendencies negatively affect governmental decision-making.
They also note that, when it comes to making group decisions within organizational settings — as frequently occurs within government — humans succumb to a variety of well-documented collective dysfunctionalities, such as groupthink and free-riding. These problems, too, often impair governmental decisions, Coglianese and Lai explain.
The Promise of Digital Algorithms
Coglianese and Lai extol the potential advantages of using digital algorithms in governmental processes by noting their importance to “nearly every major advance in science and technology” in recent years.
Machine-learning algorithms, they write, are often grouped into two categories: “supervised learning,” in which algorithms are provided with labeled data, and “unsupervised learning,” in which they can learn without labeled data. While humans are necessary to establish AI processes, machine-learning algorithms otherwise can operate autonomously. They “largely design their own predictive models based on existing data, finding patterns in the data that can be used to generate predictions that are quite accurate,” write Coglianese and Lai.
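As a rough illustration of this distinction, consider the sketch below; it is ours rather than the authors’, and its dataset and models are purely hypothetical. A supervised algorithm learns from labeled examples, while an unsupervised one must find patterns in the data on its own:

```python
# A minimal, hypothetical sketch of the supervised/unsupervised distinction
# described above; it is not drawn from the article itself.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic records standing in for labeled agency data.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: the algorithm is provided with labeled data (y).
supervised = LogisticRegression().fit(X, y)
print("Predicted labels:", supervised.predict(X[:5]))

# Unsupervised learning: the algorithm finds structure without any labels.
unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Discovered clusters:", unsupervised.labels_[:5])
```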
Machine-learning algorithms have become increasingly attractive in both the private and public sectors because of benefits that “might even be characterized as inherent to digital algorithms” — accuracy, consistency, speed, and productivity.
Coglianese and Lai are quick to point out that this does not mean that machine-learning algorithms will always be better than human algorithms. They argue that the choice is always a comparative one: that is, one of a digital algorithm versus a human one.
They note the strengths of each type of algorithm. Digital algorithms can provide consistency and speed, while the human mind “is well-suited to making reflexive, reactionary decisions in response to sensory inputs.”
Coglianese and Lai discuss a growing body of research that has “compared machine-learning algorithms’ performance with status quo results and found improved performance in a variety of distinctively public sector tasks.”
Despite these results, worries persist about the use of machine-learning algorithms, especially about the possibility that they can be “too opaque and prone to bias.” Coglianese and Lai note that existing systems dependent on human algorithms, though, “do not necessarily compare favorably to machine learning” on these grounds.
“When it comes to bias, the issue again is not whether machine-learning algorithms can escape bias altogether, but rather whether they can perform better than humans,” they write.
Deciding to Deploy Digital Algorithms
In their article, Coglianese and Lai warn against human errors that can occur in designing and operating digital algorithms. They suggest that the necessary human element in the design and establishment of computerized systems may be digital algorithms’ biggest weakness. Still, they argue that digital algorithms promise to make fewer mistakes overall — “if they are used with care.”
“The key is for humans to engage in smart decision-making about when and how to deploy digital algorithms,” write Coglianese and Lai.
To aid government officials in deciding between digital and human algorithms, Coglianese and Lai present three approaches for balancing different, often competing values involved in these decisions:
• Due process balancing: As articulated by the Supreme Court in Mathews v. Eldridge, this approach “seeks to balance the government’s interests affected by a particular procedure . . . with the degree of improved accuracy the procedure would deliver and the private interests at stake”;
• Benefit-cost analysis: Under this approach, “machine learning would be justified . . . when it can deliver net benefits (i.e., benefits minus costs) that are greater than those under the status quo”;
• Multicriteria decision analysis: A variation of the first two, this last approach requires running through “a checklist of criteria against which both the human-based status quo and the digital alternative should be judged.”
Coglianese and Lai settle on multicriteria decision analysis as the best and most practical approach for structuring administrative decisions about automation — and they explain that the key question then becomes which criteria should be used in reaching such a decision.
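To make the checklist idea concrete, a comparison of this kind might be scored as in the sketch below. The criteria, weights, and scores here are entirely hypothetical — the article prescribes no particular formula:

```python
# Hypothetical multicriteria decision analysis: score the human status quo
# and the digital alternative against a weighted checklist of criteria.
# Every criterion, weight, and score here is invented for illustration.
weights = {"accuracy": 0.4, "consistency": 0.2,
           "timeliness": 0.2, "transparency": 0.2}

scores = {
    "human status quo":  {"accuracy": 5, "consistency": 4,
                          "timeliness": 3, "transparency": 7},
    "digital algorithm": {"accuracy": 8, "consistency": 9,
                          "timeliness": 9, "transparency": 5},
}

for option, marks in scores.items():
    total = sum(weights[c] * marks[c] for c in weights)
    print(f"{option}: weighted score {total:.1f} out of 10")
```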
They acknowledge that the precise criteria to rely upon will vary according to each particular use case, but they explain that generally two types of criteria should be considered when deciding whether to digitize a governmental process. Specifically, these criteria are those related to the preconditions for the successful use of digital algorithms and the validation of improved outcomes from digital automation.
Coglianese and Lai write that, in addition to the need for adequate human expertise and computer technology, three main preconditions “can be thought of as a necessary, even if not sufficient, condition for a potential shift from a human- to machine-based process”: (1) goal clarity and precision, (2) data availability, and (3) external validity.
“Taking these three preconditional factors together,” they write, “machine-learning systems will realistically only amount to a plausible substitute for human judgment for tasks where the objective can be defined with precision, tasks that are repeated over a large number of instances (such that large quantities of data can be compiled), and tasks where data collection and algorithm training and retraining can keep pace with relevant changing patterns in the world.”
On how to validate whether machine learning in fact improves outcomes, Coglianese and Lai identify three general types of impacts that should be tested: (1) goal performance, (2) impacts on those directly affected by an automated system, and (3) impacts on the broader public.
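One way the first of these impacts — goal performance — might be tested is by comparing an algorithm’s results against the human status quo on held-out cases. The sketch below is a hypothetical illustration of that comparison, not a method drawn from the article:

```python
# Hypothetical goal-performance test: compare a model's accuracy on
# held-out cases against a measured human baseline before deployment.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)
human_acc = 0.80  # hypothetical baseline from audited past human decisions

print(f"Digital algorithm: {model_acc:.2f} vs. human status quo: {human_acc:.2f}")
```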
Coglianese and Lai emphasize the importance of agency officials carefully thinking through their decisions to digitize. Failing to do so “can have real and even tragic consequences for the public” as well as expose agencies to public controversy and litigation.
The article concludes by offering readers three principal strategies for making sound decisions about putting digital algorithms into place: planning, public participation, and procurement provisions.
Coglianese and Lai conclude that “agency officials should take appropriate caution when making decisions about digital algorithms — especially because these decisions can be affected by the same foibles and limitations that can affect any human decision.” They add that “officials should consider whether a potential use of a digital algorithm will satisfy the general preconditions for the success of such algorithms, and then they should seek to test whether such algorithms will indeed deliver improved outcomes.”
“Algorithm vs. Algorithm” is the latest in a series of articles that Coglianese has authored or coauthored on public sector use of artificial intelligence, including “Regulating by Robot: Administrative Decision-Making in the Machine-Learning Era,” “Transparency and Algorithmic Governance,” “Administrative Law in the Automated State,” and “AI in Administration and Adjudication.” A more complete collection of his work on artificial intelligence can be found online within the Penn Carey Law scholarship repository.