Regulation Magazine Spring 2019


Regulation: The Cato Review of Business and Government

Trump Will Hurt Americans if He Repeals H-4 Work Permits P.8
The Troubling History of Cancer Risk Assessment P.16
Do Government-Supported Energy Efficiency Programs Make Sense? P.26
SPRING 2019 / Vol. 42, No. 1 / $6.95

Should Automakers Be Responsible for Accidents?

Automaker enterprise liability would have useful incentives that driver liability law misses


Economics with Attitude

"The best students in our program find our 'economics with attitude' contagious, and go on to stellar careers in research, teaching, and public policy." —Professor Peter Boettke, Director of the F. A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics at the Mercatus Center

Mercatus Graduate Student Fellowships provide financial assistance, research experience, and hands-on training for graduate students in the social sciences.

PhD Fellowship: Train in Virginia political economy, Austrian economics, and institutional analysis, while gaining valuable practical experience to excel in the academy.

Adam Smith Fellowship: Are you in a PhD program at another university? Enhance your graduate studies with workshops in Hayekian political economy, public choice, and institutional economics.

MA Fellowship: Earn a master's degree in applied economics to advance your career in public policy.

Frédéric Bastiat Fellowship: Interested in policy? Complement your graduate studies with workshops on public policy analysis.

Learn more and apply at grad.mercatus.org.

Bridging the gap between academic ideas and real-world problems


Volume 42, Number 1 / Spring 2019

CONTENTS


FEATURES

LABOR
8 Hurting Americans in Order to Hurt Foreigners
Benefit–cost analysis challenges the Trump administration's effort to end the H-4 EAD program. By Ike Brannon and M. Kevin McGee

LABOR
12 Tipped Workers and the Minimum Wage
California's broad minimum-wage increases appear tailored to manage political forces. By Richard B. McKenzie

HEALTH & MEDICINE
16 The Troubled History of Cancer Risk Assessment
The Linear-No-Threshold paradigm, which asserts there are no safe exposure levels, is the product of flawed and corrupted science. By Edward J. Calabrese

DEPARTMENTS

Briefly Noted
2 Are Market Data Fees Too High? By Ike Brannon and Robert Jennings
4 The SEC's Pay Ratio Regulation Reveals What We Already Knew By Sam Batkins and Ike Brannon
6 USDA Reform: Help Rural America by Freeing Scientific Innovation By Amanda Maxham and Henry I. Miller

COVER: Illustration by Keith Negley


INSURANCE & LIABILITY
20 Should Automakers Be Responsible for Accidents?
Automaker enterprise liability would have useful incentives that driver liability law misses. By Kyle D. Logue

ENERGY & ENVIRONMENT
26 A Cautionary Tale about Energy Efficiency Initiatives
If these programs are such bargains, then why does government mandate them and energy utilities push for them? By Kenneth W. Costello

ENERGY & ENVIRONMENT
30 Utility Energy Efficiency Initiatives Are Good Policy
These programs address important market failures and have been shown to be cost-effective. By Martin Kushler, Ed Vine, and Ken Keating

IN MEMORIAM
34 A Conservative Anarchist? Anthony de Jasay, 1925–2019 By Pierre Lemieux

In Review
38 In Defense of Openness Review by Pierre Lemieux
40 Incentives to Pander Review by Greg Kaza
42 Fighting Financial Crises Review by Vern McKinley
44 Can You Outsmart an Economist? Review by Phil R. Murray
46 Stubborn Attachments Review by Pierre Lemieux
49 Rethinking America's Highways Review by George Leef
51 Noncompliant Review by Vern McKinley
53 Working Papers Reviews by Peter Van Doren and Ike Brannon

Final Word
56 Why Can't We Admit Policy Mistakes? By Tim Rowland

Regulation, 2019, Vol. 42, No. 1 (ISSN 0147-0590). Regulation is published four times a year by the Cato Institute (www.cato.org). Copyright 2019, Cato Institute. Regulation is available on the Internet at www.cato.org. The one-year subscription rate for an individual is $20; the institutional subscription rate is $40. Subscribe online or by writing to: Regulation, 1000 Massachusetts Avenue NW, Washington, DC 20001. Please contact Regulation by mail, telephone (202-842-0200), or fax (202-842-3490) for change of address and other subscription correspondence.



Regulation

EDITOR
Peter Van Doren

MANAGING EDITOR
Thomas A. Firey

DESIGN AND LAYOUT
David Herbick Design

CIRCULATION MANAGER
Alan Peterson

CONTRIBUTING EDITORS
Sam Batkins, Ike Brannon, Art Carden, Thomas A. Hemphill, David R. Henderson, Dwight R. Lee, George Leef, Pierre Lemieux, Phil R. Murray

EDITORIAL ADVISORY BOARD

Christopher C. DeMuth

Distinguished Fellow, Hudson Institute

Susan E. Dudley

Distinguished Professor of Practice and Director of the Regulatory Studies Center, George Washington University

William A. Fischel

Professor of Economics and Hardy Professor of Legal Studies, Dartmouth College

H.E. Frech III

Professor of Economics, University of California, Santa Barbara

Robert W. Hahn

Professor and Director of Economics, Smith School, Oxford University

Scott E. Harrington

Alan B. Miller Professor, Wharton School, University of Pennsylvania

James J. Heckman

Henry Schultz Distinguished Service Professor of Economics, University of Chicago

Andrew N. Kleit

MICASU Faculty Fellow and Professor of Environmental Economics, Pennsylvania State University

Michael C. Munger

Professor of Political Science, Duke University

Sam Peltzman

Ralph and Dorothy Keller Distinguished Service Professor Emeritus of Economics, University of Chicago

George L. Priest

Edward J. Phelps Professor of Law and Economics, Yale Law School

Paul H. Rubin

Samuel Candler Dobbs Professor of Economics and Law, Emory University

Jane S. Shaw

Board Member, John William Pope Center for Higher Education Policy

Richard L. Stroup

Professor Emeritus of Economics, Montana State University

W. Kip Viscusi

University Distinguished Professor of Law, Economics, and Management, Vanderbilt University

Clifford Winston

Searle Freedom Trust Senior Fellow in Economic Studies, Brookings Institution

Benjamin Zycher

John G. Searle Chair, American Enterprise Institute

PUBLISHER

Peter Goettler

President and CEO, Cato Institute

Regulation was first published in July 1977 "because the extension of regulation is piecemeal, the sources and targets diverse, the language complex and often opaque, and the volume overwhelming." Regulation is devoted to analyzing the implications of government regulatory policy and its effects on our public and private endeavors.

BRIEFLY NOTED

Are Market Data Fees Too High? Should We Care?
✒ BY IKE BRANNON AND ROBERT JENNINGS

IKE BRANNON is president of Capital Policy Analytics and a senior fellow at the Jack Kemp Foundation. ROBERT JENNINGS is professor of finance emeritus at the Kelley School of Business at Indiana University.

When ordinary investors buy stock, they usually do so through a brokerage. The brokerage executes the trade on one of a dozen different stock exchanges in the United States.

Most people have heard of the New York Stock Exchange and Nasdaq, but there are several other U.S. exchanges, including the Investors Exchange (or IEX, referenced in Michael Lewis's 2014 book Flash Boys) and the Better Alternative Trading System (BATS, now part of Cboe Global Markets). There also are trading venues that are not formally stock exchanges—the so-called "dark pools" run by Wall Street firms. And a significant amount of order flow simply gets "internalized" in a brokerage's own system for matching orders.

When a brokerage buys or sells stock on behalf of a client, the brokerage is legally obligated to try to obtain the best execution possible. This requires it to have access to a wide variety of data on prices and quantities traded on an exchange, as well as the various bids and asks that are current. The exchanges charge for that information and their prices have risen dramatically over the last decade, a trend that was helped along by a provision in the 2010 Dodd–Frank Act. The question is whether those price increases can be justified under normal standards employed by the Securities and Exchange Commission. The SEC seems to have some concerns about that; it recently pushed back—at least temporarily—on the increases by overturning prior fee increase approvals given to Nasdaq and the NYSE. That has given brokerages some hope for more price relief in the future.

This is more than just a fight between exchanges and brokerages. If the price of these data keeps increasing, it may lead to a reconsideration of the present execution standards. The more brokerages and hedge funds have to pay to execute trades, the higher they will set their fees for ordinary investors who have retirement funds invested with such entities. And thanks to the magic of compound interest, even small reductions in net returns can result in a significant reduction in the ultimate size of an investor's nest egg.

Making data more available / The Securities Acts Amendments of 1975 charged the SEC to develop a "national market system" that would link the numerous financial markets. The SEC concluded that this law required the exchanges (and other non-exchange execution venues as well) to publish current bid and offer prices—and their quantities—as well as the price and quantities of recent trades. To that end, the SEC required that the industry provide a consolidated information source known as a securities information processor (SIP), which would be run jointly by the exchanges and their private regulator, the Financial Industry Regulatory Authority (FINRA). This information is referred to as core data.

Exchanges also are free to sell non-core data that customers might also find valuable. This includes the so-called depth-of-book data—that is, the bids and offers currently on the limit order book that are not at the best prices. For example, assume that the current market for Acme Inc. is $10.00 bid for 5,000 shares, while 3,000 shares are offered at $10.01. In deciding how to route orders, firms might find it useful to know the quantity of Acme shares demanded below $10.00 and quantity supplied above $10.01—in other words, enough information to draw approximate supply and demand curves. Suppose there is a high demand for Acme at $9.99 and below, but not much supply at $10.02 and $10.03. Market participants might find this information useful. It would be particularly important information for institutional brokers, who might wish to transact 250,000 shares when the current quote is relevant for only a small transaction. In many cases, these brokers are trading stocks for mutual funds owned by common investors.

In the last decade, the cost of obtaining these data has increased considerably. Table 1 contains some of the NYSE fees in both 2008 and 2018. Access fees nearly tripled over that time and new fees were introduced. The SEC's targets for its recent order are the fees for depth-of-book products charged by the NYSE's ArcaBook and Nasdaq's Level 2, both of which were implemented in 2010. Dodd–Frank permitted exchanges to immediately implement fee increases while the SEC determined their appropriateness. Before then, the exchanges had to file a formal proposal to increase fees and then wait for SEC approval, an often-drawn-out process that included a public comment period.

Table 1: ArcaBook Monthly Fees in 2008 and 2018

Fee | 2008 | 2018
Access Fee (1) | $750 | $2,000
Multiple Data Feed Fee | N/A | $200
Redistribution Fee (2) | N/A | $2,000
Non-Display Fee (3) | N/A | $6,000 per device (capped)
Professional User Fee (4) | $30 per user | $60 per user (capped)
Non-professional User Fee (4) | $10 per user (capped at $20,000) | $10 per user (capped at $40,000)

(1) The basic charge to get the particular data in question. (2) The cost for the right to resell. (3) The cost to have the data go directly to a computer without being displayed to any human (presumably to use in some sort of trading algorithm). (4) Fee for someone to actually see the data.
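To make the depth-of-book idea concrete, here is a minimal Python sketch of how the Acme example above can be turned into approximate supply and demand schedules. The order-book quantities and prices are invented for illustration, not actual exchange data.

```python
# Illustrative only: a toy limit order book for "Acme Inc." The numbers are
# invented for this sketch; real depth-of-book feeds (e.g., ArcaBook) carry
# the same kind of information at much finer granularity.
bids = {10.00: 5_000, 9.99: 12_000, 9.98: 7_500}   # price -> shares demanded
asks = {10.01: 3_000, 10.02: 1_000, 10.03: 1_500}  # price -> shares offered

def cumulative_demand(bids):
    """Shares demanded at or above each bid price (an approximate demand curve)."""
    total, curve = 0, []
    for price in sorted(bids, reverse=True):   # best (highest) bid first
        total += bids[price]
        curve.append((price, total))
    return curve

def cumulative_supply(asks):
    """Shares offered at or below each ask price (an approximate supply curve)."""
    total, curve = 0, []
    for price in sorted(asks):                 # best (lowest) ask first
        total += asks[price]
        curve.append((price, total))
    return curve

print(cumulative_demand(bids))  # [(10.0, 5000), (9.99, 17000), (9.98, 24500)]
print(cumulative_supply(asks))  # [(10.01, 3000), (10.02, 4000), (10.03, 5500)]
```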

Determining fair and reasonable fees / Giving the SEC such a vague mandate has made its adjudications a bit problematic, but there are two commonly accepted ways to determine fair and reasonable in this context. One would be to compare the fee to the marginal cost of providing the service, and the other would be to show that the fee is based on some sort of market price. The exchanges embrace the latter approach—understandably so because their incremental cost of providing those data is almost assuredly close to zero.

The exchanges claim that competition constrains their ability to impose fees in two ways. First, if their fees are too high, then customers will simply reroute orders to other exchanges. That would cost high-priced exchanges both data fees and trading fees for transactions carried out on their platforms. Second, the exchanges argue that the market data of other exchanges are substitutes for their data product. In this scenario, if the ArcaBook fee gets too high, then customers will simply drop ArcaBook and pick up market data from some other exchange because the limit order books across exchanges are highly correlated. In 2014 an administrative law judge agreed with the exchanges on this point.

However, a 2018 court case and an associated 2018 SEC finding concluded otherwise. While agreeing that the competition for order flow is intense, both the court and the SEC found that the exchanges had not done an adequate job of demonstrating that the link between order flow and market data fees was strong. In fact, in 2014 the SEC concluded that while depth-of-book market data drives order flow, most market participants do not purchase the depth-of-book market data, contrary to the competition argument. In addition, the current findings question whether exchanges' depth-of-book products are, in fact, close substitutes. No one has apparently examined the data all that closely to determine the truth.

Our conclusion is that there are market participants that need depth-of-book data from all exchanges in order to meet their obligations to seek best execution for clients. This does not necessarily hold for every participant in the market, but an important subset cannot stop sending orders to a particular market or merely assume that the book on Exchange A is perfectly correlated with the book on Exchange B.

Justifying data price increases / Complicating the market price standard is the fact that a few firms control the bulk of the order flow, which gives each a modicum of market power when negotiating fees. Nasdaq claims that 90% of its order flow comes from 100 firms—way too many for any sort of collusion to occur. The court did not believe that the exchanges demonstrated that firms would divert order flow elsewhere if fees went up. In particular, the court found the exchange "evidence" to be merely anecdotal, as it amounted to a couple of examples of what happened to order flow when fees increased, which included one firm substantially diverting order flow.

There are multiple alternatives to the non-core data of an exchange: the exchange's own core data, the non-core data of other exchanges, purchasing non-core data from data vendors, or merely "pinging" orders (which entails sending an oversized order to an exchange to see what the depth behind the best price is). However, the existence of various substitutes does not imply that the exchanges lack market power; all of these alternatives are lacking in some way.

Several experts, working for the exchanges, produced an event study of market share of trading around data fee increases, complete with a regression analysis of market share of trading as a function of the fees. They found no significant correlation. However, the 2018 court ruled that these experts had not properly controlled for various important independent variables. The court eventually concluded that the best approach would be to estimate the product's elasticity of demand without specifying a particular methodology. We believe that a serious economic analysis should be conducted that does precisely this task, as we do not believe such a study currently exists.
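As a rough illustration of what such an elasticity estimate might look like, the Python sketch below fits a constant-elasticity (log-log) specification to hypothetical fee and subscription figures. The numbers are placeholders, and a serious study would use actual subscriber counts around fee changes plus the controls the court found missing.

```python
# A minimal sketch of the elasticity estimate the court called for:
# regress log subscriptions on log fee (constant-elasticity specification).
# The fee/subscription figures are invented placeholders.
import numpy as np

fees = np.array([750, 1000, 1500, 2000])   # hypothetical monthly fees
subs = np.array([400, 390, 370, 355])      # hypothetical subscriber counts

X = np.column_stack([np.ones_like(fees, dtype=float), np.log(fees)])
beta, *_ = np.linalg.lstsq(X, np.log(subs), rcond=None)
print(f"estimated price elasticity of demand: {beta[1]:.2f}")  # slope of the log-log fit
```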

What is best for retail investors? / Exchanges have brokerages over a barrel with these fees, to some extent. In order to get the best execution for their clients as required by law, brokerages must obtain data from multiple exchanges. The exchanges have taken advantage of this requirement by dramatically increasing the price of their data beyond what can be justified by either costs or market competition, the two commonly accepted methods to justify data price increases. The data substitutes for non-core data suggested by the exchanges—their own core data, non-core data from other exchanges, or pinging the market—are not viable substitutes.

We do not want to return to a world where retail investors would not necessarily get their trades executed in a timely manner and unwitting traders received prices that were hours or even a day out-of-date. However, it is not clear to us that the cost of the data that brokerages must obtain in order to comply with best-execution rules is worth the benefits. As long as the SEC holds that these data must be acquired, then the commission should also ensure that data fees are appropriate. We do not believe that is currently the case. We suggest that the SEC look more closely at the fees as well as the broader information acquisition requirements for funds.

SEC's Pay Ratio Regulation Reveals What We Already Knew
✒ BY SAM BATKINS AND IKE BRANNON

SAM BATKINS is director of strategy and research at Mastercard. IKE BRANNON is a senior fellow at the Jack Kemp Foundation and president of Capital Policy Analytics. The views expressed in this article are their own.

It has been nearly four years since the Securities and Exchange Commission finalized its rule mandating that companies annually publish the ratio of chief executive officer compensation to the salary of the company's median employee. We've previously argued that this provision is one of the costliest regulations required by the 2010 Dodd–Frank Act, writing in these pages that "mandating the regular publication of a crude gauge of relative CEO compensation is a costly exercise that fixes precisely nothing" ("The Meaninglessness of the SEC Pay Disclosure Rule," Spring 2014). Given the information that has been reported to the SEC after this rule's implementation, we stand by that assessment.

Our chief complaint with the rule when it was first promulgated was that the SEC's cost estimate for compliance woefully understated reality. To calculate the total compensation of every employee in a company—domestically and internationally—likely would require the use of significant resources and cost firms millions of dollars. However, in its initial proposed rule, the SEC pegged annual costs at just $72 million for all affected firms, with an associated 545,000 paperwork-burden hours. This works out to roughly $18,000 per company and 142 paperwork-burden hours. Industry objected that these figures significantly underestimated the true compliance costs. For instance, respondents to a U.S. Chamber of Commerce survey of 118 firms—the only such survey on the rule's cost that we found—estimated it would take 952 paperwork hours per firm at a cost of roughly $185,000. That translated to an industry-wide burden of more than 3 million hours and aggregate costs exceeding $700 million, potentially vaulting it into one of the most expensive Dodd–Frank rules on record.

To its credit, the SEC implicitly acknowledged this error when it released its regulatory impact analysis for the final rule. It increased its annual aggregate compliance cost estimate to $526 million, a seven-fold increase, and paperwork compliance rose to more than 2.3 million hours, a four-fold increase. The total net present value cost jumped eight-fold in the final rule.

The benefits to society in return for these costs are unclear. We see no reason to think that this exercise has shed any useful light on U.S. income inequality or that the rule is ameliorating income inequality's causes. In addition, there is no evidence the requirement will prompt Congress to take any sort of action on executive pay; such action, if it happens, will be the product of ideology, not SEC reporting. In retrospect, even if compliance costs were minimal, that still would not justify the rule.

The SEC's final cost estimate incorporated numerous concessions to ease compliance costs. For instance, the final rule allowed multinationals operating in foreign jurisdictions with onerous privacy laws to exclude foreign employees if obtaining total compensation figures would violate those privacy laws. The final rule also limited the burden of collection to just consolidated subsidiaries instead of forcing companies to calculate the median pay of all employees. Finally, the SEC allowed companies to identify the median employee every three years instead of annually, provided there is no reason to believe there would be a significant change in the company's pay ratio.

Despite these cost-ameliorating measures, the pay ratio disclosure rule still ranks as the eighth most expensive Dodd–Frank rule promulgated, exceeding rules on home mortgage disclosure, standards for swap-dealers, and regulatory capital requirements.
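The per-firm and industry-wide figures above can be reproduced with simple arithmetic. In the Python sketch below, the number of affected filers is not stated in the article; it is backed out of the SEC's own aggregate estimates and should be read as an implied assumption.

```python
# A minimal sketch of the compliance-cost arithmetic cited above.
SEC_TOTAL_COST = 72e6          # initial proposed-rule estimate, dollars per year
SEC_TOTAL_HOURS = 545_000      # paperwork-burden hours per year
PER_FIRM_HOURS_SEC = 142       # SEC's implied per-firm burden

implied_filers = SEC_TOTAL_HOURS / PER_FIRM_HOURS_SEC              # ~3,800 firms (implied, not stated)
print(f"implied filers: {implied_filers:,.0f}")
print(f"SEC cost per firm: ${SEC_TOTAL_COST / implied_filers:,.0f}")  # roughly $18,000-19,000

# Scaling the Chamber of Commerce survey responses to the same population:
CHAMBER_HOURS_PER_FIRM = 952
CHAMBER_COST_PER_FIRM = 185_000
print(f"survey-implied total hours: {implied_filers * CHAMBER_HOURS_PER_FIRM:,.0f}")  # > 3 million
print(f"survey-implied total cost: ${implied_filers * CHAMBER_COST_PER_FIRM:,.0f}")   # > $700 million
```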



Knowing what we know now about the causes of the Great Recession, the pay ratio rule seems especially unjustified. Three years after implementation, the rule continues to impose unnecessary compliance costs without generating any measure of value for investors, the market, and perhaps even the politicians who insisted on its inclusion in Dodd–Frank. A study by executive compensation consultant Deb Lifshey weighed in on the cost-ineffectiveness of the rule. She observed that it would prove to have disproportionate effects on large multinational firms, something we noted in our 2014 article.

More generally, she suggested that the larger the firm, the larger the ratio, with consumer-facing firms especially affected.

A redundant rule / When the SEC began contemplating the pay ratio rule, plenty of data relevant to the rule were already available. For instance, existing law already required the publication of CEO pay for public companies, which meant that the pay ratio rule merely required that companies do the arithmetic necessary to calculate median employee compensation (as opposed to just wages or salary), which turns out to be more complicated than meets the eye for numerous reasons. For starters, assigning a value to fringe benefits of each employee is a complicated and often subjective enterprise. What's more, aggregating compensation for thousands of employees across numerous countries—a step that is necessary for multinational corporations—also requires numerous decisions to account for exchange-rate fluctuations and purchasing power differentials. Many of these companies may also have to integrate various payroll systems that do not otherwise connect, which can be a costly undertaking. Finally, the treatment of part-time and partial-year employees can easily bias the estimate: retail companies may have the majority of their workforce working part-time, which means the CEO comparison is made to someone working less than 40 hours a week.

It is also not clear that the rule provides any new information. Numerous scholars had already estimated an overall CEO/median worker ratio before the formal regulation appeared in the Federal Register. A 2014 piece published by the Economic Policy Institute estimated a pay ratio of roughly 200:1, which was consistent with several contemporaneous studies. A few years earlier, the Society for Human Resource Management pegged the ratio at 344:1 and a 2009 study by the Center for American Progress estimated 240:1 in 2005. There was plenty of research before the rule on CEO pay itself. For instance, one study found that companies with the highest-paid CEOs tend to have below-normal returns—good fodder for corporate board discussion.

These days, it takes just a few seconds of sleuthing to uncover a ballpark estimate for the average pay of a particular company that is immune to the SEC's dictate. Those figures—such as the ones reported on the websites Glassdoor and Payscale—are voluntarily provided by current or prospective employees. For example, Payscale's data suggest that the typical employee at Honeywell earns $81,000, while at General Electric the typical employee earns $86,400. Both websites also disaggregate salary data within a company by profession, providing more relevant data than the SEC.

When the first set of pay ratios was reported, there were some attention-grabbing revelations. But the numbers were not as extreme as researchers anticipated. For instance, Lifshey found the median pay ratio was 70:1 for Russell 3000 companies and 166:1 for Equilar 500 companies. Both figures are less than the estimates done prior to the regulation (and the 2008 financial crisis), but that does not mean that the rule resulted in some sort of decline in compensation. More likely, the stock market rise pre-2008 increased CEO compensation and a few firms accelerated compensation into the year prior to the beginning of pay ratio reporting or else arranged for some sort of contingent compensation.

As always, the individual firm data provide more perspective than any national average, and extreme outliers can be illuminative. For instance, 10 companies reported a ratio of zero—indicating the CEO did not take a compensation package (a group that included Twitter and RE/MAX). The highest reported ratio was at Weight Watchers, where the CEO earned $35 million compared to median employee compensation of just over $6,000. A ratio of 6,000:1 may very well provoke a modicum of outrage, but it provides investors—who are supposed to benefit from such information—no useful context to understand the ratio. It does implicitly reveal the company has plenty of part-time staffers, which any educated investor would presumably already know. But it provides no relevant insight into the appropriateness of a given CEO's compensation.

Data for whom? / Several other provisions of Dodd–Frank were inserted with the intent to do little other than shame some firms. For instance, the law's "Conflict Minerals" rule required businesses to disclose whether their minerals originated in the Democratic Republic of Congo or an adjoining country. That turned out to be quite difficult—and costly—for some to ascertain. But the pay ratio rule may be even more shabby than the others because it may not provide anything close to an accurate estimate of what Congress intended to be revealed. Deferred pay, accumulated bonuses, or one-time company-wide bonuses of the sort that were provided by numerous corporations in the wake of the 2017 tax reform will distort pay-ratio estimates. Hiring in a growing economy will have a similar distortionary effect. For instance, a company that adds 5,000 new employees mid-year will doubtless increase its pay ratio as many of those workers will be brought in at low "training" wages, but the net result of this development is an unalloyed good for the labor market and the company's workers.

If we are stuck with the pay ratio rule—and, absent Dodd–Frank reform, we most certainly are—then one way we could improve the statistic so that it measures something useful would be to adjust for part-time workers. Companies like Weight Watchers and McDonald's employ a bevy of part-time workers who may log as few as 10 hours a week. Using their data to determine the "median" informs absolutely no one of the true status of income inequality. Such a fix would not be all that difficult. Robert Pozen and Kashif Qadeer of MIT's Sloan School of Management suggested in a 2018 Wall Street Journal op-ed that firms could simply consider full-time equivalents in their calculus, something that is commonly done in other contexts. The SEC could accomplish this by issuing an administrative guidance document allowing for part-time pay to be annualized.
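A minimal Python sketch of that annualization fix, using an invented workforce, shows how a full-time-equivalent adjustment changes the median; the sample pay figures are placeholders, not data from any filing.

```python
# Illustrative only: annualize part-time pay to a full-time-equivalent basis
# before taking the median, in the spirit of the Pozen-Qadeer suggestion.
import statistics

FULL_TIME_HOURS = 40

workforce = [
    # (weekly hours, actual annual pay) -- invented figures
    (10, 8_000), (15, 12_500), (20, 17_000), (40, 52_000), (40, 61_000),
]

def annualized(hours: float, pay: float) -> float:
    """Scale a part-timer's pay up to what a 40-hour schedule would imply."""
    return pay * FULL_TIME_HOURS / min(hours, FULL_TIME_HOURS)

raw_median = statistics.median(pay for _, pay in workforce)
fte_median = statistics.median(annualized(h, p) for h, p in workforce)
print(raw_median, fte_median)  # the FTE median is no longer dragged down by 10-hour schedules
```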

The pay ratio rule is, in fact, incongruous with the body of regulations promulgated by the SEC. By law those regulations must be intended to promote capital formation, increase market efficiency, or facilitate investor protection. It is difficult to argue that the pay ratio rule advances any of those goals, especially given the previous requirement that CEO compensation be publicly disclosed.

U.S. corporations currently keep two different sets of data: one for investors and the other for the government for the purpose of reporting taxes. Keeping those two disparate books may seem redundant, but it's for a good reason: the information that the IRS requires for tax purposes is not necessarily relevant for discerning how a corporation is actually performing. The pay ratio represents a datum that is not relevant to either investors or the IRS. The ratio is put forth for those who feel compelled to check the behavior of companies to ensure that they hew to whatever social standards policymakers are embracing at the moment. This sets a troubling precedent: will we soon be asking firms to provide other data that are irrelevant to management, shareholders, or the tax authority?

READINGS

■ "Executive Pay: Perception and Reality," by Robert J. Grossman. HR Magazine, April 1, 2009.
■ "Performance for Pay? The Relation between CEO Incentive Compensation and Future Stock Price Performance," by Michael J. Cooper, Huseyin Gulen, and P. Raghavendra Rau. Social Science Research Network working paper #1572085, November 1, 2016.
■ "The CEO Pay Ratio: Data and Perspectives from the 2018 Proxy Season," by Deb Lifshey. Harvard Law School Forum on Corporate Governance and Financial Regulation, Oct. 14, 2018.
■ "The Fix for Misleading 'CEO Pay Ratios,'" by Robert Pozen and Kashif Qadeer. Wall Street Journal, March 20, 2018.

USDA Reform: Help Rural America by Freeing Scientific Innovation
✒ BY AMANDA MAXHAM AND HENRY I. MILLER

AMANDA MAXHAM is an astrophysicist and science writer. HENRY I. MILLER, a physician and molecular biologist, is a senior fellow at the Pacific Research Institute.

Last year, the U.S. Department of Agriculture requested public comment on proposed measures to "improve efficiencies" at the department. Those measures mainly would reorganize the USDA by merging some agencies and shuffling boxes around its organizational chart. It would also establish a Rural Development Innovation Center (RDIC) to "identify and develop new tools to better serve rural communities in achieving prosperity."

Perhaps the changes will make life better for farmers and other rural residents in some ways. But if the USDA really wants to promote prosperity in rural America, it should pare back the excessive regulations that have been constraining agricultural innovation for the last three decades.

Consider the USDA's questionable certification of "organic" products, which has been discussed in these pages previously. (See "The USDA's Meaningless Organic Label," Spring 2016.) Because the "organic" designation doesn't reflect any difference in food safety or healthfulness, and because the requirements for earning the "USDA Organic" label are wholly arbitrary—not to mention the USDA's questionable effectiveness in correctly identifying which products meet those requirements—consumers and producers of organic products would likely be better served by private certification regimes that would respond to market demands rather than political whims.

Another example, also previously discussed in these pages, would be a drastic relaxation—if not an outright repeal—of USDA and Food and Drug Administration regulations that inhibit the development and use of genetically modified organisms (GMOs). (See "The USDA's Anti-Science Activism," Summer 2011.) The longstanding scientific consensus is that GMOs pose no more (and often less) risk to human health and the rest of the planet than organisms created through largely unregulated traditional techniques. Permitting broader use of genetic modification would open the way for developing both animals and plants that require fewer inputs, are more healthful and environmentally friendly, and would make land that would have been needed for agriculture available for other uses.

Likewise, the once-promising sector of "biopharming," which uses genetic engineering techniques to induce crops such as corn, rice, and tobacco to produce high concentrations of high-value pharmaceuticals, is moribund as a result of USDA regulation. Not surprisingly, few companies or other potential sponsors are willing to invest in the development of badly needed genetically improved varieties of the subsistence crops grown in the developing world. Unwise excessive regulation has a wide ripple effect.

Needed regulatory reform at APHIS / The USDA, through the Biotechnology Regulatory Services organization within its Animal and Plant Health Inspection Service (APHIS), is responsible for the regulation of genetically engineered plants. APHIS had long regulated the importation and interstate movement of organisms (plants, bacteria, fungi, viruses, etc.) that are plant pests, which are defined by means of an inclusive list—essentially a binary "thumbs up or down" approach. A plant that an investigator might wish to introduce into the field is either on the prohibited list of plant pests and therefore requires a permit, or it is exempt.

This straightforward approach is risk-based in that the organisms required to undergo case-by-case governmental review are an enhanced-risk group. But for more than a quarter-century, in addition to its basic risk-based regulation, APHIS has applied a parallel regime that focuses exclusively on plants altered or produced with the most precise genetic engineering techniques. Thus, APHIS distorts the original concept of a plant pest (something known to be harmful) because it has crafted a new category—a "regulated article"—defined in a way that captures virtually every recombinant DNA-modified plant for case-by-case review, regardless of its potential risk, because it might be a plant pest.

In order to perform a field trial with a "regulated article," a researcher must apply to APHIS and submit extensive paperwork before, during, and after the field trial. After conducting field trials for a number of years at many sites, the researcher must then submit a vast amount of data to APHIS and request "deregulation" of the organism, which is equivalent to approval for unconditional release and sale. These requirements make genetically engineered plants extraordinarily expensive to develop and test. The cost of discovery, development, and regulatory authorization of a new trait introduced between 2008 and 2012 averaged $136 million, according to Wendelyn Jones of DuPont Pioneer, a major corporation involved in crop genetics.

APHIS's approach to recombinant DNA-modified plants is difficult to justify. Plants have long been selected by nature, as well as bred or otherwise manipulated by humans, for enhanced resistance or tolerance to external threats to their survival and productivity. These threats include insects, disease organisms, weeds, herbicides, and environmental stresses. Plants have also been modified for qualities attractive to consumers, such as seedless watermelons and grapes and the tangerine–grapefruit hybrid called a tangelo.

APHIS has not shown any willingness to rationalize its regulatory approach, so the regulatory obstacles that discriminate against genetic engineering continue to impede the development of crops with both commercial and humanitarian potential. Many innovative genetically engineered crops foreseen in the early days of the technology have literally withered on the vine as regulatory costs have made testing and commercial development economically unfeasible. The opportunity costs of unnecessary regulatory delays and inflated development expenses are formidable. As agricultural economists Gregory Graff, Gal Hochman, and David Zilberman observed in a 2009 paper in the journal AgBioForum, "The forgone benefits from these otherwise feasible production technologies are irreversible, both in the sense that past harvests have been lower than they would have been if the technology had been introduced, and in the sense that yield growth is a cumulative process of which the onset has been delayed."

Conclusion / If the USDA wants to undertake meaningful, long-overdue, and obviously needed reform, it should worry less about its organizational chart and more about the restraints it places on agricultural innovation. Department officials have acknowledged that plants modified with the new gene editing techniques will not be considered "regulated articles" because they don't meet the definition in the regulations. But there's nothing in its current reform plans that indicates a more appropriate, scientific approach to regulating recombinant DNA-modified plants or for removing the USDA's involvement in the organics market.



Hurting Americans in Order to Hurt Foreigners

Benefit–cost analysis challenges the Trump administration's effort to end the H-4 EAD program.

✒ BY IKE BRANNON AND M. KEVIN MCGEE

IKE BRANNON, a contributing editor to Regulation, is a senior fellow at the Jack Kemp Foundation and president of Capital Policy Analytics. M. KEVIN MCGEE is professor emeritus of economics at the University of Wisconsin, Oshkosh.

In 2015, the Obama administration authorized temporary work permits for the spouses of H-1B visa holders who were awaiting green cards. Over 90,000 of these H-4 visa holders have since received a permit, known as an Employment Authorization Document (EAD), and three-fourths of them are gainfully employed. In 2017, the Trump administration announced that it intended to repeal the rule providing this work authorization. This February the administration followed through on that announcement with a notice of proposed rulemaking.

The administration's stated reason for repealing the rule is that it would create more jobs for U.S. citizens. We believe a thorough benefit–cost analysis, as required under Executive Order 12866, would find this justification unfounded. Ending the ability of these workers—who are, by and large, well-educated and high-skilled—to hold jobs in the United States would at best have no net effect on Americans' employment and likely would reduce Americans' employment and wages. Further, ending EAD would hurt the U.S. economy and U.S. taxpayers.

THE ECONOMICS OF SKILLED IMMIGRATION

The effect of any immigrant group on the U.S. economy depends on those immigrants' skills and educational attainment. Highly skilled, well-educated workers, both foreign-born and domestic, have high employment levels, are less likely to avail themselves of public services such as food stamps and welfare, and are more likely to be in occupations that are hard to fill. As a result, they boost U.S. tax revenues while having little effect on government spending.

These skilled foreign workers benefit U.S. economic growth and employment, both for skilled and unskilled American workers. One reason for this is that skilled foreign labor has a relatively small substitution effect on skilled domestic workers because skilled foreign workers are relatively mobile and go where there are many available jobs. In contrast, the U.S. labor force is not so flexible: geographic mobility has gradually diminished in the United States since the 1950s and has fallen by 10% in just the last two years. The chief reason for this trend is the rise in two-income households, which increases the cost of moving for one spouse's job.

Another reason that skilled foreign workers have a positive effect on domestic employment is that they create what economists call a "scale effect": they boost overall economic activity, creating more opportunities and jobs for both skilled and unskilled domestic workers. This effect outweighs the small substitution effect for skilled domestic workers. For example, a 2014 study by Giovanni Peri, Kevin Shih, Chad Sparber, and Angie Marek-Zeitlin showed that reducing the number of skilled foreign workers coming to a community significantly reduced the wages of college-educated, U.S.-born workers in those communities who work with computers.

Skilled foreign-born workers have an unambiguously positive effect on unskilled U.S.-born workers. Skilled workers and unskilled workers are, in general, complementary, just as skilled workers and capital are complementary—that is, an increase in the quantity of one increases the demand and price for the other. Hence, an increase in the supply of skilled foreign workers increases the amount of capital in the economy and—along with it—the demand for unskilled workers. This results in higher wage and employment levels for unskilled workers, even without the scale effect.

Highly skilled foreign workers are also more likely to create new businesses than U.S. citizens with similar skills and education. Foreign-born workers in the United States are 30% more likely to start a new business than a native worker, and 25% of all startups in Silicon Valley have been founded by immigrants. Workers currently holding an H-4 visa are almost exclusively skilled workers. Thus, eliminating their work authorizations would have a small negative effect on skilled domestic workers, a large negative effect on unskilled domestic workers, and a significant negative effect on new business formation in the United States.

THE DATA

The American Immigration Lawyers Association distributed a questionnaire we prepared to its approximately 14,000 members, encouraging them to ask their H-1B and H-4 clients to complete the survey. At the same time, an H-4 advocacy group called Save H-4 EAD sent the survey to its members. The questionnaire had 25 questions pertaining to the immigrants’ level of education and area of study, the type and extent of work experience both in the United States and abroad, family status, and the visa status of the respondent.


We received responses from 4,708 individuals currently holding H-4 visas, 90% of whom were female. Not surprisingly, our sample was highly educated. Recall that H-4 visa holders are spouses of H-1B visa holders, who are foreign workers in specialty occupations. H-1B recipients overwhelmingly have college degrees in science, technology, engineering, and mathematical (STEM) disciplines and are employed in occupations like engineering, computer science, bio-sciences, and other high-tech areas. Their spouses, H-4 visa holders, tend to have similar educational backgrounds. Less than 1% of our sample had less than a college degree, and nearly 60% had a master's degree, doctorate, or other professional or postgraduate degree.

Some 83% of our respondents currently hold EADs; 75% of those EAD holders are currently employed in the United States, and almost 7% of them report being self-employed. Employed EAD holders typically have held an EAD for two years and earn about $77,000 a year—an income that is above the U.S. median salary. Some 66% of them work in a STEM field, mostly in computer-related, engineering, or math or statistics jobs, earning on average about $83,000 annually. Some common self-reported job titles in our survey include systems engineers, software developers, automation engineers, quality assurance analysts, and data analysts—all jobs that U.S. employers have trouble filling. Another 16% of respondents report working in the Business, Finance, or Management fields: this group typically reports holding occupations such as project managers or product managers and averages about $73,000 annually. An additional 8% of employed respondents report working in the Healthcare Practitioner or Healthcare Support fields, in such occupations as physician, dentist, pharmacist, nurse, physical therapist, and healthcare business analyst. Once again, these are areas of high economic value, reflecting the high level of education and training among these H-4 visa holders. The average earnings of this group are about $76,000 a year.

Self-employed EAD holders / Most of the 7% of employed EAD holders who report being self-employed appear to be independent contractors—technically self-employed, but doing work for one company. However, 2% of all the individuals we surveyed operate businesses that employ both themselves and others. About 22% of this cohort were in the Business, Finance, and Management fields, another 22% were in a STEM field, and about 23% were in the Healthcare Practitioner and Healthcare Support fields. These self-employed have on average worked longer in the United States than those who work for others; 84% of the self-employed got their EADs in 2015 or 2016, as compared to only 70% of those employed by others. The self-employed report an average income of about $60,000 a year and employ five other people on average.

THE COSTS AND BENEFITS OF REPEALING EAD FOR H-4 VISAS

The process for any administration to enact or rescind a rule is straightforward: it must formally publish its intent to do so, make the entirety of the proposed rule change available on its website, and allow at least 30 days for public comment. For a rule that has an estimated economic effect of at least $100 million—either to the government, the wider economy, or both—it must also pass a benefit–cost analysis pursuant to E.O. 12866. The order tasks the Office of Information and Regulatory Affairs (OIRA) with determining if the rule does indeed pass a benefit–cost threshold.

There are approximately 91,000 H-4 workers with an EAD, and our estimates suggest that approximately 75% of this cohort are currently employed, with an average annual salary of $80,000. The sample we obtained is both representative and quite large—approximately 4% of the population, an almost unheard-of ratio—which implies that our earnings estimate is likely quite close to the true average. The arithmetic implies that this cohort's annual income—and its annual contribution to U.S. gross domestic product—is approximately $5.5 billion, 55 times the $100 million threshold for a benefit–cost test. Our estimates show that rescinding the EAD for H-4 visa holders would reduce federal tax revenue as well as U.S. economic activity without creating any jobs on net for domestic workers.

Economic and tax revenue effects / The EAD program clearly affects the ability of American employers to hire and retain H-4 visa holders. Our estimate of $5.5 billion in annual lost earnings for this group provides a minimum estimate of their employers' lost output. However, the H-4 EAD program also affects the ability of American employers to hire and retain H-1B visa holders. Some 28% of our employed H-4 respondents reported that their EADs have been important in their families' decision to remain in the United States; an identical 28% of the H-4 respondents who are not currently employed but want to work identify the EAD as important to their remaining in the United States. This suggests that rescinding EADs could result in the loss of up to 25,000 H-1B employees and the roughly $2 billion in U.S. production that they contribute to the U.S. economy. Thus, a better estimate is that the rule rescission could reduce U.S. GDP by around $7.5 billion per year.

We can improve our estimate of the annual cost to the federal government from ending EAD in terms of forgone tax revenues. H-4 visa holders earn about $80,000 a year and they all must have employed H-1B spouses. After the $24,000 standard deduction and one $2,000 child tax credit, the H-1B spouse would have about $54,000 in taxable income. At current tax rates, the H-4 spouse's income would result in an additional tax bill of $15,300. To that we must add both the employee's and the employer's shares of the payroll tax, about $12,200. Hence, each employed H-4 visa holder would, on average, pay $27,500 in federal taxes annually. With roughly 68,000 H-4 visa holders currently employed, that means that ending their employment would result in a federal revenue loss of around $1.9 billion annually. This estimate does not reflect the additional tax revenue that would be lost when American employers lose some of their H-1B employees.

We also estimate the state and local taxes paid by this cohort by multiplying their total income of $5.46 billion by the average proportion of income that states and localities assess, 9.75%, via income, sales, property, and other taxes. This results in a state and local tax revenue loss of $530 million. This estimate is undoubtedly low because it assumes that the distribution of H-4 visa holders resembles that of the overall population; in fact, they disproportionately congregate in high-tax states like California. Further, our calculation ignores the effect of the EAD rescission on employers' ability to attract and retain H-1B workers.
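The aggregation behind these revenue figures is straightforward; the Python sketch below reproduces it, taking the per-worker income and payroll tax figures directly from the article rather than recomputing them from the tax code.

```python
# A minimal sketch of the revenue arithmetic above. The $15,300 and $12,200
# per-worker figures are taken from the article as given; this reproduces
# only the aggregation, not the underlying tax calculation.
H4_EAD_HOLDERS = 91_000
EMPLOYMENT_RATE = 0.75
AVG_SALARY = 80_000

employed = H4_EAD_HOLDERS * EMPLOYMENT_RATE            # roughly 68,000 workers
total_income = employed * AVG_SALARY                   # about $5.46 billion

income_tax_per_worker = 15_300                          # article's figure at current rates
payroll_tax_per_worker = 12_200                         # employee + employer shares, per the article
federal_total = employed * (income_tax_per_worker + payroll_tax_per_worker)   # about $1.9 billion
state_local_total = total_income * 0.0975                                       # about $530 million

print(f"federal revenue at stake: ${federal_total/1e9:.2f} billion")
print(f"state/local revenue at stake: ${state_local_total/1e6:.0f} million")
```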

Employment effect / A question basic to the motivation for rescinding the work permit is whether the jobs currently held by H-4 visa holders would subsequently be filled by U.S. citizens. This defies an easy or objective answer, but we can make a few relevant observations.

First, the education and training of typical H-4 visa holders mean that the jobs they occupy would be relatively difficult for employers to fill with American workers. Unemployment rates tend to fall steadily with educational attainment. In June 2018, the unemployment rate for people without a high school degree was 5.8%, two points above the national average, but 4.3% for those with a high school diploma and only 2.5% for college graduates. For those with professional degrees or doctoral degrees, the unemployment rates were 1.5% and 0.9%, respectively. These numbers suggest that there is very little unused supply of U.S. labor with the ability to do the work of the typical H-4 worker with a postgraduate degree and several years of professional experience.

A second, complementary point is that, as we approach the 11th year of an economic expansion, the availability of unemployed workers at any skill or educational level willing and able to do the jobs currently held by H-4 visa holders is relatively slight. This is not to say that there is no pool of underemployed workers in the U.S. economy; the labor force participation rate, which measures the proportion of the adult non-institutionalized population that is active in the labor market, is 5–7 percentage points below where it was at the peak of the previous two business cycles. This suggests that there may be a pool of domestic workers willing to enter the labor market if the opportunity arose. However, it is more likely that the labor force participation rate today does not have much room to increase: research published by Ike Brannon and Andrew Hanson finds that the gap between past rates and the current rate is due to demographics (a greater proportion of workers are above age 55), the crippling effects of opioid addiction, and the ongoing sluggishness of new home construction. None of these suggest that there is a sizable pool of unemployed college graduates with STEM degrees.

To reasonably approximate the number of H-4 visa-held jobs that would be filled by U.S. citizens, we began with the Bureau of Labor Statistics' unemployment rates by occupation for June 2018. We assumed that the frictional unemployment rate is roughly 2%, so any occupation with a 2% or lower unemployment rate would have no excess workforce whatsoever: all the currently unemployed workers in that occupation would be in the process of searching for and moving to a new employer. On the other hand, we assumed that if the unemployment rate in an occupation is 8% or higher, there is sufficient slack to allow all the H-4 visa workers to be replaced. For occupational unemployment rates between 2% and 8%, we used linear interpolation: at a 4% unemployment rate, only one-third of the H-4 workers would be replaced, but at a 6% unemployment rate two-thirds would be replaced.

In our sample, over two-thirds of the employed H-4 EAD holders were in fields with a June 2018 unemployment rate below 2%: Management, Engineering, Legal, Healthcare Practice, and Computer and Mathematics. Interpolation suggests that only 8% of the H-4 EAD workers would be replaced by American workers if the H-4s lost their employment status—12% if we were to boost our slack estimates. We conclude that eliminating the employment status of 68,000 working H-4 visa holders would result in the employment of only 5,500 to 8,200 U.S. citizens.

Remember that 2% of our employed H-4 visa holders were self-employed, and they in turn employed an average of five workers. If the 68,000 employed H-4 EAD holders all lose their employment status, 6,800 U.S. citizens would also become unemployed when the H-4 business owners liquidate their businesses. This would almost exactly cancel out any employment gains accruing to U.S. citizens from H-4 job replacement. And this ignores the other employment-creating effects from high-skilled H-4 workers. Overall, ending the H-4 program will likely reduce overall employment and wages for American workers.
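The interpolation rule described above can be written compactly; the Python sketch below reproduces the one-third and two-thirds figures cited in the text (the example unemployment rates are placeholders).

```python
# A minimal sketch of the replacement-rate rule: no slack at or below 2%
# occupational unemployment, full replacement at 8% or above, linear in between.
def replacement_share(unemployment_rate: float,
                      floor: float = 0.02, ceiling: float = 0.08) -> float:
    """Fraction of H-4-held jobs assumed to be refilled by U.S. workers."""
    if unemployment_rate <= floor:
        return 0.0
    if unemployment_rate >= ceiling:
        return 1.0
    return (unemployment_rate - floor) / (ceiling - floor)

for rate in (0.015, 0.04, 0.06, 0.09):
    print(f"{rate:.1%} unemployment -> {replacement_share(rate):.0%} replaced")
# 4% -> 33% and 6% -> 67%, matching the one-third / two-thirds figures above.
```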

NO DISCERNIBLE ECONOMIC BENEFITS—AT BEST

Rescinding employment authorization for H-4 visa holders would result in substantial costs to the U.S. economy, to federal and state tax coffers, and to U.S. employers' ability to attract H-1B workers. There would be no employment or income gains by domestic workers to offset those losses because the relatively small gains to U.S. workers replacing H-4 workers would be offset by the jobs lost when self-employed H-4 workers were forced to close their businesses and dismiss their employees. Worse, the loss of these high-skilled foreign workers is likely to have a negative overall effect on Americans' employment and wages. In short, rescinding H-4 visa holders' ability to work fails to meet any credible benefit–cost analysis. This proposal should thus be rejected by OIRA.

READINGS
■ "Closing Economic Windows: How H-1B Visa Denials Cost U.S.-born Tech Workers Jobs and Wages during the Great Recession," by Giovanni Peri, Kevin Shih, Chad Sparber, and Angie Marek-Zeitlin. Partnership for a New American Economy, June 2014.
■ "Foreign Scientists and Engineers and Economic Growth," by Giovanni Peri, Kevin Shih, and Chad Sparber. Cato Papers on Public Policy 3: 107–184 (2014).
■ "Wisconsin: A Blueprint for More Workers," by Ike Brannon and Andrew Hanson. Badger Institute, August 2018.



Tipped Workers and the Minimum Wage

California's broad minimum-wage increases appear tailored to manage political forces.

A

✒ BY RICHARD B. MCKENZIE

t the start of this year, California’s mandated minimum wage for workers in firms with 26 or more employees rose from $11 to $12 an hour, after rising in 50¢ increments the previous two years. The state’s minimum wage for those employers will continue to rise by a dollar a year until it reaches $15 at the start of 2022, an increase of 50% over six years. Smaller employers will have to match that level the following year. Proponents of minimum-wage increases have argued that the hikes are needed to provide low-income, low-skill workers in, say, fast-food restaurants a “living wage,” meaning enough income to provide an acceptable living standard. Interestingly, though, the coming California wage increases will have an unheralded (and, some might say, perverse) effect: they will raise the paychecks of some well-off California restaurant servers who work in higherend restaurants and whose total earnings, including hefty tips, are now well above the “living wage.” Thus, minimum-wage policy is not particularly well targeted to the low-wage workers that it’s ostensibly intended to help. Minimum-wage coverage of already-well-off servers may be nothing more than an unintended consequence of the legislation, a simple policy mistake made by busy legislators. However, another explanation is that this policy is intended to dampen opposition from some affected industry sectors, as well as manage political forces. SERVERS’ HOURLY TIP INCOME

RICHARD B. MCKENZIE is the Gerken Professor of Economics and Society (emeritus) in the Merage School of Business at the University of California, Irvine. His latest book is A Brain-Focused Foundation for Economic Science (Palgrave, 2018).

Because California law, unlike federal law, does not allow a subminimum wage for tipped workers, restaurants must pay those workers the state's minimum wage regardless of how much they receive in tips. Tips are considered by law to be servers' "sole property" and many servers make significantly more income from tips than from their wages.

In the fall of 2015, the Restaurant Opportunities Centers United and other labor groups pressed for a ban on restaurant tipping on the grounds that it leads to unequal server incomes (because of, say, customers' racial and gender biases). Further, these groups argue, tipping forces servers to grovel for income and "tolerate inappropriate and degrading behavior from customers, co-workers and managers in order to make a living," to quote a labor activist writing in the New York Times. Tipping-ban advocates proposed to replace tips with a minimum hourly wage of $15.

As I previously described in these pages ("Should Restaurant Tipping Be Abolished?" Summer 2016), I interviewed 40 servers in eight Orange County, CA "casual" table-service restaurants, which I defined as serving alcoholic beverages and cheeseburger-and-fries meals for less than $12 (e.g., Chili's). All interviewed servers scoffed, without hesitation, at the idea of giving up their tips for a "mere" $15 minimum wage. I then asked what minimum wage they would accept in exchange for giving up tips plus the then-$10 minimum wage. Their responses ranged from $18 to $50 an hour, with $30 being the median. (Several servers responded after consulting their smartphones, where they recorded their tips.) That median translates to slightly more than $33 an hour today, after adjusting for inflation on tip income and adding the $2 increase in the state's minimum wage since 2015.

Understandably, many servers said that if tips were banned and replaced with a $15 minimum wage, they'd leave their restaurant jobs for work that pays better. Most servers volunteered that their relatively high income inclusive of tips was the key inducement for serving. Several added that they were quietly pleased that customers think they make far less than they do, believing that results in better tips.




SERVER INCOMES

To assess the absolute and relative income of the interviewed servers, let's suppose a server works 40 hours a week for 50 weeks a year, earning an average of $33 an hour. The server's annual income would be $66,000, which is 6% above the June 2018 national household median. Assuming the "median server" reports for tax purposes two-thirds of his total income, he would save $5,903 in taxes by not reporting his full tip income, making his "effective pre-tax annual income" $71,903. That income would put the server in the top 42% of American households, roughly speaking. If two median servers live together, they would have an effective pre-tax household income of $141,890 and be in the top 15% of all households.

With the scheduled $3 increase in the California minimum wage by 2022, the effective annual income of an individual server will be $79,061 (assuming tips do not rise, which they could with minimum-wage-induced increases in menu prices). The median server, alone, will be in the top 38% of households. Two such servers, living together, will reach the top 12% of households. If those two servers make the upper limit of my responses, $50 an hour, they would be in the top 5% of all households in 2022.
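The core income arithmetic can be reproduced in a short sketch. The hourly figure and the $5,903 tax-savings estimate are taken from the article; the household-percentile comparisons and the 2022 projection are not recomputed here.

```python
# A minimal sketch of the income arithmetic above, using the article's figures.

HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 50
HOURLY_INCOME = 33.0             # median wage-plus-tips of the interviewed servers
TAX_SAVED_ON_UNREPORTED = 5_903  # article's estimate from reporting only two-thirds of income

gross_income = HOURS_PER_WEEK * WEEKS_PER_YEAR * HOURLY_INCOME   # $66,000
effective_income = gross_income + TAX_SAVED_ON_UNREPORTED        # $71,903 "effective pre-tax"

print(f"Gross annual income:      ${gross_income:,.0f}")
print(f"Effective pre-tax income: ${effective_income:,.0f}")
```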


Of course, servers in low-level restaurants don’t do nearly as well as my interviewed servers. But servers in higher-end restaurants (e.g., Morton’s) often do much better than my surveyed workers and still must be paid the state’s mandated minimum wage. I recognize that my interviewed servers would rank lower on the Orange County income scale (which has a median income 42% above the national median), but that doesn’t affect their real incomes. Their relatively higher living cost—mainly from housing—does. However, the servers can also take advantage of the driving forces behind the higher housing costs, mainly the mild year-round climate, proximity to stunning coastlines, pockets of superior public schools, and the surrounding high-tech/entrepreneurial economy (with an unemployment rate at 2.8% in November 2018, a full percentage point below the national rate). I don’t suggest that servers’ hourly income is not undercut by living costs, only that there are good reasons the interviewed servers continue to work and live in the area. THE MINIMUM WAGE AND RESTAURANT TECHNOLOGY

Econometric studies of the employment effects of minimum-wage hikes over the last 70 years have mainly focused on young, low-wage, low-skill workers. For those labor groups, empirical studies have found minor reductions in employment—generally no more than 3% and, at times, between 0 and 1%—given wage increases of 10%. No one (to my knowledge) has considered the employment effects of minimum-wage increases on groups of skilled and well-off workers receiving the minimum wage, namely servers and other workers with significant (even substantial) tip incomes.

My interviewed servers do well enough that, as several volunteered, they consider their wage to be an income supplement that mainly covers their withheld income taxes, while the tips provide the lion's share of their take-home pay. Restaurant managers, on the other hand, clearly see mandated wages as a significant cost that needs to be managed to remain competitive on menu prices—especially now that the California minimum wage will rise another 25% by 2022.

California's minimum-wage increases will give an added competitive edge to restaurants that conserve paid labor by shifting work to customers. Of the eight restaurants I surveyed, Chili's and Red Robin are the only ones to date to have placed small consoles with touchscreens on tables. They both did that in mid-2017, after the scheduled minimum-wage increases became law earlier that year. Customers use the consoles to order and pay for their meals, resulting in less work for servers. The devices offer the prospect of mutual gains for management and some—but not all—servers. Because the consoles enable individual servers to cover more customers, managers can reduce their wait-staffs and wage payments, as well as moderate menu-price increases. For servers who keep their jobs, the consoles can cause their total tips to increase because they're working more customers. But the consoles can also cause servers' tips to decrease because the servers have less interaction with their customers (and a number of my interviewed servers were confident they could affect their tips with customer interaction).

I pressed four Chili's and Red Robin bartenders on the effects of the devices. All agreed that the consoles had been a boon to their ability to cover more customers. One Chili's bartender observed, "Had we not had the consoles at lunch today, I could not have handled the demands without delays." When asked if server hours had been cut as a result of the devices, he admitted, "Instead of having three servers in the bar at lunch before the consoles were installed, we now have just me, but then I have an additional 'runner' to move completed orders from the kitchen to customers." Red Robin took a different tack. It took out all bussers and shifted table-clearing duties to the bartenders, and it plans to streamline the kitchen and use fewer cooks. The Chili's bartender assured me that the consoles had increased his average daily tip income by 20% "at least!" The Red Robin bartender, who confessed to not tracking his tips carefully, could only say that his percentage tips had risen, primarily because people who use the consoles click on "20% tip," which is at the top of the list of tip options. If the bartenders had their tip increases right, we should expect

the consoles to pop up in other casual restaurants. The looming minimum-wage increases will raise the rate of return on ordering technologies that shift some of the food service work to unpaid customers. Some restaurants will make the substitution for an old-fashioned economic reason: to cut costs. Others will adopt them later for another old-fashioned reason: to fend off the cost and pricing gains by competitors that have already implemented the technologies. Similar labor-saving consoles and kiosks are now spreading rapidly at many airport restaurants, as well as fast-food restaurants. The coming minimum-wage increases can also be expected to favor “fast-casual” (counter-service) restaurants over “casual” (table-service) restaurants. This is because the fast-casual model reduces paid server hours by shifting work to customers and permits restaurants to tap lower-skilled workers who don’t have the experience and language skills that servers need. In lieu of tips, fast-casual restaurants can raise menu prices to cover added minimum-wage costs, effectively claiming a portion of what would have been server tips and, at the same time, leaving customers with lower total bills than they would receive otherwise, with an implicit payment for their added work. One manager noted that his burger-based restaurant chain was already feeling market pressure from the growing number of fast-casual, burger-based restaurants (e.g., Five Guys, The Habit, The Stand) that have higher-quality burgers and prices than traditional fast-food (and some casual) restaurants but with prices lower than his own casual chain. Interestingly, in late 2018, fast-casual sales were estimated to be expanding by 7.5% for the year, while casual and fine-dining sales were expected to grow by only 2.7% and 1.8%, respectively. Such a shift in the relative growth rates of restaurant segments can mean increased employment for low-skill workers, partially offsetting the overall job losses among menial workers in fast-food restaurants (and elsewhere) that can result from higher menu prices. ARE WAGE INCREASES FOR SOME WELL-OFF SERVERS UNINTENDED?

Minimum-wage increases for some well-off tipped workers could be an unintended effect of policy advocates and policymakers failing to think through the consequences of their proposed market controls. But the effect could also be intended, grounded in the underlying economic and political forces that press for the expansion of wage controls beyond their narrowly intended targets—say, fast-food workers who seldom make tips. If restaurant segments above fast food were exempt from paying the scheduled higher minimum wages because of servers’ tip incomes, then fast-food restaurants would suffer a growing cost disadvantage relative to other industry segments. Their menu prices would have to rise relative to near-competitors’ prices. Understandably, fast-food chains might vehemently oppose minimum-wage increases in principle but, at the same time, when wage hikes are assured, they could strongly favor industry-wide



coverage. From this perspective, tipped servers are the beneficiaries of policy efforts to disturb as little as possible the competitive balance across restaurant segments. In addition, full industry coverage of minimum-wage hikes makes it easier for all restaurants to pass along their higher wage costs to customers in the form of higher menu prices. Doing this converts the wage increases into a disguised excise tax with the intent of redistributing income from customers to workers (with some collateral income destruction overall). Broad minimumwage coverage might even moderate resulting job losses by not concentrating the effects of the wage hikes on a given market segment—fast food, for example—that could be most vulnerable to replacement of workers with technology and reduction in sales because of the segment’s relatively price-sensitive customers. In short, labor (or any other) market controls can beget extension of their coverage, not so much because of fairness considerations, but because of the economic and political forces set in motion by controls. Friedrich Hayek warned of this in The Road to Serfdom and other works. CONCLUDING COMMENTS

California’s scheduled minimum-wage increases will likely have conventional and unconventional labor-market effects. They can decrease the growth, albeit marginally, in the state’s total


employment opportunities for low-wage workers, as menu price increases reduce growth in restaurant meals eaten and as readily available technology replaces some workers. The increases can also be seen, ironically, as an indirect and partial means of curbing servers' opportunities to work for tips, perhaps causing some servers to move to different jobs, such as teaching, that pay less than what they once earned in their restaurant jobs. At the same time, relatively well-off tipped servers may have been included in the state's minimum-wage increases not because they "deserve" or "need" more money, but because including them reduces the political opposition from different types of employers.

READINGS

■ "Minimum Wages and Employment: A Review of Evidence from the New Minimum Wage Research," by David Neumark and William Wascher. National Bureau of Economic Research working paper #12663, January 2006.
■ "Should Tipping Be Abolished?" by Richard B. McKenzie. Policy Report #382, National Center for Policy Analysis, March 2016.
■ "The 2018 Restaurant Technology Trends to Look Out For," by Caitlin Stanley. Revel Systems, January 6, 2018.
■ "The Effect of the Minimum Wage on Employment and Unemployment," by Charles Brown, Curtis Gilroy, and Andrew Kohen. Journal of Economic Literature 20(2): 487–528 (June 1982).
■ "Why Are There So Few Job Losses from Minimum-Wage Hikes?" by Richard B. McKenzie. Policy Report #354, National Center for Policy Analysis, April 9, 2014.





The Troubled History of Cancer Risk Assessment
The Linear-No-Threshold paradigm, which asserts there are no safe exposure levels, is the product of flawed and corrupted science.

✒ BY EDWARD J. CALABRESE

When crafting regulations on exposure to carcinogens and other dangers, policymakers often vow to "follow the science" on what is safe and what is unsafe. But what if that science is flawed or grounded in questionable judgments—or worse?

In 1956, a National Academy of Sciences (NAS) panel formally recommended to the U.S. Government that it change how it assesses risk from ionizing radiation. That sounds innocuous enough, but the Biological Effects of Atomic Radiation (BEAR) I Genetics Panel's proposed change was momentous: switching from a threshold model in which exposure is deemed safe if kept below a certain level, to a linear model in which no exposure is considered safe. This recommendation would ultimately be accepted by leading regulatory and advisory bodies in the United States and internationally, and extended to other prospective hazards like chemical carcinogens. As the saying goes, "As the twig is bent, so grows the tree." All subsequent cancer risk assessments in the United States and throughout the world would inherit the risk assessment paradigm from the NAS BEAR I Genetics Panel. But was this change sound?
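To make the distinction concrete, here is a minimal sketch of the two dose-response assumptions; the slope and threshold values are illustrative placeholders, not figures from the article or from any risk-assessment body.

```python
# A minimal sketch contrasting the two dose-response assumptions described above.
# The slope k and the threshold dose d0 are illustrative placeholders.

def excess_risk_lnt(dose: float, k: float = 1e-3) -> float:
    """Linear-no-threshold: any nonzero dose carries a proportional excess risk."""
    return k * dose

def excess_risk_threshold(dose: float, d0: float = 10.0, k: float = 1e-3) -> float:
    """Threshold model: doses at or below d0 are treated as carrying no excess risk."""
    return 0.0 if dose <= d0 else k * (dose - d0)

for dose in (0.0, 5.0, 10.0, 50.0):
    print(dose, excess_risk_lnt(dose), excess_risk_threshold(dose))
```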

X-RAY MUTATIONS

EDWARD J. CALABRESE is professor of toxicology at the University of Massachusetts, Amherst.

The NAS BEAR I Genetics Panel based its recommendation mainly on a strongly held belief that all radiation-induced mutation was unrepairable, irreversible, cumulative, and linear in the matter of dose response. However, the empirical evidence for this view was weak and equivocal. Yet the recommendation had considerable authority because the panel was deemed by opinion

leaders, including the New York Times, as a virtual genetics “dream team” that included a Nobel laureate, a future laureate, and others of high achievement and prestige. The origin of the Linear-No-Threshold (LNT) belief was borne in the judgment and passion of Hermann Muller, who was the first to claim that X-rays induced gene mutations. Muller had indeed made a momentous breakthrough in late 1926 when he found a way to produce quickly copious transgenerational phenotypic changes (e.g., alterations in size, color, or shape) in fruit flies, which he interpreted as being the result of gene mutation. This was something that no one else had done. Muller believed that he had discovered the long-sought mechanism of evolution, as he claimed that he had produced the “artificial transmutation of the gene.” He even introduced the term “point mutation” (i.e., very small mutational gene change) into the geneticist’s lexicon. Muller rushed to publish his discovery after only the first of the three seminal experiments that would ultimately earn him a Nobel Prize. However, the first article, published in the journal Science, offered no data, instead presenting a discussion of his observations. Several months later and with considerable suspense, he unveiled the data at a large conference in Berlin, to great acclaim. (The relevance of all this will be explained below.) His star rose meteorically and he became the clarion of “the new genetics” that gained insight into evolution as well as medical concerns resulting from excessive use of X-rays. Just when the initial commotion settled down, Muller made headlines a second time, in 1930, when he announced that the nature of the dose response for X-ray-induced mutation was linear, all the way down to a single ionization. That is, he claimed, there is no safe exposure. He thought this idea was basic, a universal concept, occurring in all life, including the plant, microbe, and animal domains, and called it the “Proportionality Law.”





MOUSE EXPERIMENTS SUPPORTING LNT

Muller’s gene mutation breakthrough and his formulation of its implications in the Proportionality Law would lead several highly prestigious geneticist and physicist colleagues to provide a mechanism (i.e., gene target theory) of the X-ray-induced gene mutation Proportionality Rule. A 1935 paper presented the


“linear non-threshold – single hit model” and applied it to mutation; it would later be applied to cancer. The NAS BEAR Genetics Panel would use this model in its 1956 recommendation. The model was reaffirmed 16 years later with the next NAS committee, then called the U.S. Biological Effects of Ionizing Radiation (BEIR) Genetics Subcommittee. However, instead of using data from Muller’s fruit flies, this committee based the LNT model on the massive mouse model experiments of William Russell of the Oak Ridge National Laboratory. Russell used over 2 million mice in his studies, a size that will likely never be approached again. The findings and the linearity recommendation became the basis for the Environmental Protection Agency’s adoption of the LNT model and its regulatory applications to radiation and chemicals. This reflected the belief that cancer was mediated by a commonly shared mutation mechanism. The BEIR recommendation has been the “gold standard” for exposure regulation, providing the assurance that linearity was “real” because of the limitations of epidemiological studies to confidently resolve dose–response relationships in the low-dose zone. In many ways, the Russell findings became the toxicology and risk assessment version of the Rosetta Stone. They offered a reliable translation of experimental and epidemiological studies to the language of human risk assessment. MULLER’S MISTAKE

The above summary is the "official" history of cancer risk assessment offered in most toxicology texts. However, several historical revelations have emerged over the past few years that have turned this entire story upside down. Those revelations affect the reputations of some very prestigious scientists, the validity of a Nobel Prize, and the scientific foundations of cancer risk assessment worldwide.

The problems with the "authoritative" cancer risk assessment story start with its foundation. Muller's claim to have induced gene mutation has been found to be incorrect. He actually induced massive gene deletions—chromosomal damage rather than changes within individual genes—that produced the transgenerational phenotypic changes. This criticism of his work was raised while Muller was still alive, but he was able to stifle it. However, modern DNA/nucleotide analysis studies have shown the relevance of this criticism. Muller's mistakes on the gene mutation interpretation invalidated the 1935 LNT–single hit model that was based on the assumption of gene mutation. Muller mistook an observation (i.e., transgenerational phenotypic changes) for a mechanism (gene mutation), conflating the two. He made a big mistake and perpetuated it for decades, with profound consequences because it infected cancer risk assessment principles and practices.

Fruit flies and a cover-up / The story would grow even more bizarre, involving the atomic bomb–making Manhattan Project of World War II. While the Manhattan Project primarily focused on the bomb, it also had a genetics component designed to assess the effects of ionizing radiation on heredity. That work was conducted at the University of Rochester under the direction of world-renowned geneticist Curt Stern. Muller, then at Amherst College, was a paid consultant. The project included a new fruit fly experiment to confirm Stern and Muller's belief in linearity. However, the flies did not cooperate, showing a threshold response in the most extensive study ever conducted on the topic. The findings shocked Stern and his research team, threatening to turn their scientific views and world upside down.

Stern and co-author Ernst Caspari did write a manuscript presenting the new data, but they directed the scientific community not to accept or use their findings, even though they came from what was clearly the best study yet done on the subject. A reading of preserved letters and memos between the two and other colleagues reveals their fear that the new findings would invalidate the LNT–single hit model. Their work became an effort to "save the hit model" rather than follow the data.

Five weeks before he received his Nobel Prize, Muller received the Stern–Caspari manuscript. He soon wrote Stern acknowledging the great threat the findings posed for the LNT–single hit model and strongly requested the study be repeated. He also admitted that he could find no problem with Caspari as a researcher or with the study. Despite the new findings, Muller announced at his Nobel lecture that the threshold model should be trashed and replaced with the LNT–single hit model, knowing full well that a better study did not support that claim. Needless to say, he didn't share that information with his Nobel audience. In effect, this started Muller on the road to deliberate deceit and deception, along with Stern, to ensure the acceptance of the LNT–single hit model.

Muller's public deceptions did not stop with his Nobel lecture. He would publish several dishonest and incorrect articles to further the LNT position, all under the watchful eyes of Stern, Caspari, and others. They simply let their Nobel laureate colleague mislead the scientific community and the general public.

PANEL PROBLEMS

The 1956 NAS BEAR I Genetics Panel also exhibited some novel, odd, and troubling features. First, it was not funded by the U.S. Government, but by the Rockefeller Foundation. Second, the president of the NAS was Detlev Bronk, who also was president of the Rockefeller Institute for Medical Research. In essence, Bronk decided to fund himself. Third, he appointed the chair of the NAS BEAR Genetics Panel, Warren Weaver, who was not a geneticist but a mathematician and who had long worked for the Rockefeller Foundation. Weaver had funded essentially most of the genetic researchers in the United States and elsewhere. Fourth, Weaver and Bronk selected the panel and stacked it with LNT believers, clearly ignoring other geneticists with differing views. Fifth, Weaver selected eight geneticists who had no prior publications on the effects of radiation on mutations. Sixth, during a panel session, Weaver tempted panel members with vast sums of research dollars—a seeming bribe. Seventh, panelist James Crow persuaded his colleagues to alter the research record on two specific matters in order to ensure the likelihood of having the LNT recommendation accepted.

LNT would soon become the law of the land, so to speak, and helped to lead the environmental revolution of the 1960s and 1970s. However, Oak Ridge Labs' Russell reported in December 1958 that the BEAR Panel was wrong when it assumed that all ionizing radiation-induced genetic damage was irreversible and cumulative. He convincingly showed that thresholds could occur at low dose rates, probably because of a DNA repair process. That finding shocked Muller and others. Russell's suggestion on DNA repair was confirmed several years later in research that would earn the 2015 Nobel Prize. These findings set the stage for the next battle. In 1972, the BEIR Committee acknowledged that the 1956 BEAR Genetics Panel had been wrong and that dose rate, not total dose, was the



key factor for mutation and cancer risk assessment. Russell had shown that at an ionizing radiation exposure rate some 27,000 times above the background exposure, female mice showed a clear threshold—that is, no increase in mutations. While the male mice showed a strong trend in the threshold direction in the same experiment as the females, they had not yet achieved the safe threshold dose. Nonetheless, the BEIR NAS Committee retained the LNT model, relying only on the male mice data. QUIET REVERSAL

This is where things stood for 25 years, until another Oak Ridge Lab geneticist, Paul Selby (a former student and colleague of Russell), discovered several data problems with the Russell control group. Selby dug into the issue, finding more problems with the research—enough to challenge the key scientific findings. Because of the great sensitivity and significance of these developments, he went to the very top of the Department of Energy and presented his data challenging the Russell findings. The DOE quietly convened an external panel to evaluate the Selby claims while giving Russell a chance to defend himself. In the end, the panel sided with Selby; Russell had made a major error with his control group data. Russell and Selby both published these revelations in the scientific literature, though they differed on the size of the errors. The corrections were so


scholarly in tone that one could not easily detect the magnitude and significance of the controversy and the underlying hostilities that had emerged. The write-up was amazingly tame and clinical.

These findings sat quietly for another two decades until I came upon them. After obtaining many of the details of the DOE hearing and other information, including a long series of telephone interviews (about 12 hours in total) with Selby, I applied the appropriate correction to the 1972 data used by BEIR to sustain LNT. I found that had the data been corrected at the time of the BEIR I recommendation, it would have supported a threshold rather than the LNT model. Thus, these new findings call into question the "gold standard" that has been guiding U.S. cancer risk assessment since 1977.

CONCLUSION

The story of cancer risk assessment as told by regulatory agencies such as the EPA is really a profound example of flawed science—the product of errors, deception, perverse incentives from academic grants, and ideology. A major remaining question is whether our regulatory agencies can honestly and objectively confront this history and make the needed corrections, or will they simply preserve the historical “lie” that they and society have long been living. If they do the latter, it will continue the harm to both science and public welfare.




Should Automakers Be Responsible for Accidents?
Automaker enterprise liability would have useful incentives that driver liability law misses.

✒ BY KYLE D. LOGUE

KYLE D. LOGUE is the Douglas A. Kahn Collegiate Professor of Law at the University of Michigan Law School. This article is condensed from his paper, "The Deterrence Case for Comprehensive Automaker Enterprise Liability," Journal of Law and Mobility 1 (2019).

Motor vehicles are among the most dangerous products sold anywhere. Automobiles pose a larger risk of accidental death than any other product, except perhaps opioids. Annual auto-crash deaths in the United States have not been below 30,000 since the 1940s, reaching a recent peak of roughly 40,000 in 2016. And the social cost of auto crashes goes beyond deaths. Auto-accident victims who survive often incur extraordinary medical expenses. Those crash victims whose injuries render them unable to work experience lost income. Auto accidents also cause nontrivial amounts of property damage—mostly to the automobiles themselves, but also to highways, bridges, or other elements of the transportation infrastructure. Finally, serious motor vehicle accidents often cause severe noneconomic injuries—that is, "pain and suffering." According to some estimates, such noneconomic harms amount to more than twice the magnitude of the aggregate economic damages caused by auto accidents.

All of this may be about to change. According to many auto-industry experts, the eventual transition to driverless vehicles will drastically lower the economic and noneconomic costs of auto accidents. Why might this be so? Humans are bad drivers. People have bad judgment, slow reflexes, inadequate skills, and short attention spans. They drive too fast. They drive while intoxicated or sleepy or distracted. According to the National Highway Traffic Safety Administration, roughly 94% of auto accidents today are attributable to "driver error."

The hope is that computers can do better. Fully driverless




vehicles, sometimes referred to within the industry as “Level 5s” to distinguish them from vehicles with levels of partial autonomy, would not suffer from the problems that plague human decisionmaking in the driving context. These vehicles thus promise to be substantially safer than the human-driven alternative. How should the automobile tort/insurance regime be redesigned to take into account the emergence of driverless vehicles? I propose to replace our current auto tort regime (including auto products liability law, driver-based negligence claims, and auto nofault regimes) with a single comprehensive automaker enterprise liability system. This new regime would apply not only to Level 5s, but to all automobiles made and sold to be driven on public roads. My basic argument is that while current negligence-based auto liability rules could in theory work to provide optimal accidentavoidance incentives, in practice they do not. The current system requires courts and drivers to evaluate benefit–cost tradeoffs they are not equipped to make. Also under the current system, much of auto-accident costs are offloaded onto medical and dis-


ability insurers or taxpayers. By contrast, under an automaker enterprise liability system, responsibility for those costs would be placed on the parties in the best position to reduce and insure them: vehicle manufacturers. In addition, automakers would be induced to charge enough for cars to fully internalize the costs of automobile accidents. Further, if auto-insurance contracts—and auto-insurance premium adjustments—could be deployed to improve driving habits, auto manufacturers would be induced to coordinate with auto insurers to achieve these deterrence gains. Moreover, to the extent that Level 5s reduce the cost of accidents, they would be cheaper to purchase than conventional vehicles, which would provide a natural subsidy to encourage (and potentially accelerate) their deployment. EVALUATING THE DETERRENCE IMPLICATIONS OF CURRENT AUTO TORT LAW

Existing automaker liability law is primarily a negligence-based regime. Under current law in most U.S. jurisdictions, individuals who suffer harm caused in an automobile crash can recover from the automaker in tort if they can prove that the harm resulted from negligence (or a lack of reasonable care) on the part of the automaker in designing or constructing the vehicle. Alternatively, auto accident victims can invoke modern product liability doctrine and argue that a "defect" in the vehicle's design, manufacturing process, or warnings caused the harm. And in most jurisdictions, the definition of a product defect likewise requires a showing of negligence.

A negligence-based liability rule would induce automakers to take efficient care, provided the following two assumptions are true:

■ Automakers are aware of the law and respond rationally to it.
■ Courts perform a thorough and accurate benefit–cost analysis in their determinations regarding what constitutes automaker negligence or what counts as a design defect.

Under those assumptions, the negligence-based regime would incentivize efficient automaker care levels—i.e., investments in crash–risk reduction—because automakers would avoid negligence-based liability if they make all cost-justified design and warning changes. A negligence-based automaker liability regime can also create incentives for efficient driver care-levels. A negligence-based regime would leave accident costs on victims and their insurers if the automaker is not negligent. That would induce drivers to drive carefully so as to minimize their own risk of uncompensated accident losses. Thus, an efficiently and accurately applied negligence-based automaker liability rule can produce efficient incentives for both automakers and drivers to take care to avoid auto accidents. There are obvious problems with this rosy picture, however. First, consider the effects on automaker care levels if we relax the assumption that courts accurately apply negligence-based



standards. If judges and juries are not very good at doing the complex and information-intensive analysis, the outcomes of courts’ negligence determinations become highly uncertain. This can produce incentives for automakers to both over-invest and under-invest in auto safety. The incentive to over-invest can arise when manufacturers expect courts to set the standard of reasonable care (or a non-defective design) inefficiently high. The incentive to under-invest can arise if courts rely too much on custom within the industry as their source for what constitutes reasonable care because industry custom can lag what is a truly efficient level of safety. A second problem with a negligence-based auto products liability regime has to do with driver care levels. For a negligencebased regime to efficiently incentivize drivers to drive carefully, the tort system must impose on drivers the risk of accidents that are not cost-justifiably preventable by the manufacturer. But drivers simply are not aware of the tort law rules that apply to them or the product liability rules that apply to automakers. Moreover, even when drivers do know about accident risks and legal rules, they may not respond rationally to that information or may externalize those risks to insurance companies. Because of these facts, the ability of a negligence-based auto products liability regime to optimize driver care levels is substantially undermined. Legally imposing costs on drivers would not—or at least may not—have the desired deterrence effect on driver care levels. The final deterrence problem with a negligence-based auto products liability regime would exist even if judges and juries were good (accurate and unbiased) at applying benefit–cost standards. In fact, this problem results because automakers would expect accurate application of the negligence-based rules. The problem involves the effect of a negligence-based automaker liability rule on the number of vehicles sold or, in the language of deterrence, the effect on automaker “activity levels.” Even an efficiently safe car (one with no defects whatsoever) that is driven carefully by its human or algorithmic driver poses some residual risk of crashing. This residual risk will tend to be ignored or externalized by automakers under a negligence-based product liability regime because automakers are not liable for them under a negligence liability standard. The result is that the number of cars sold may be higher than the social-welfare-maximizing level, even ignoring the effect of automobile emissions on the environment, because the price of vehicles does not include this cost of unpreventable auto accidents. To summarize, under our current negligence-based automaker liability regime, there are reasons to be concerned that automaker and driver care levels may be too low and activity levels too high. Driver liability law / In a majority of U.S. states, if someone is injured or suffers property damage as a result of a driver’s negligent operation of an automobile rather than as a result of automaker negligence, the victim may recover from the negligent driver under standard common-law principles of tort. The victim must demonstrate that the harm to her was a result of the

driver's failure to do something that a reasonable driver would have done under the circumstances, or the driver's doing something that a reasonable driver under the circumstances would not have done.

Negligence-based driver liability law can have beneficial deterrence effects on driver care levels (that is, how safely people drive) if we make the following assumptions:

■ Drivers are well-informed about accident risks (and how their behavioral changes affect those accident risks).
■ Drivers are well-informed about the rules of tort law.
■ Drivers internalize those risks (rather than externalize them to insurers, for example).
■ Drivers process the information about those risks rationally (without any systematic cognitive biases).
■ Courts are good at applying benefit–cost-type negligence-based liability rules.

If all of these assumptions are true, drivers would have adequate incentive to drive with efficient care in terms of driving speed, safe braking and passing practices, smart-phone usage (or nonusage), and the like. This is so because, by taking efficient care in driving, drivers would avoid liability for the accidents that nevertheless occur. The assumptions listed above almost certainly do not hold in the real world. While drivers may be generally aware of the broad outlines of the driver liability regime in their state (whether it is fault-based or no-fault), they likely do not understand what the precise implications of that fact are on their chances of being found liable in court for unsafe driving. What’s more, the average driver, while generally and vaguely cognizant of the risks of driving, is almost certainly uneducated about the precise levels of risk associated with various aspects of driving—for example, precisely how much the chance of a crash is increased by texting while driving or changing lanes abruptly with no signal. In fact, there is a good chance that most drivers underestimate those risks. Thus, a negligence-based driver liability regime, which relies on assumptions of informed and rational drivers to produce optimal driver care levels, may not produce the deterrence benefits that are predicted by deterrence theory. How is this pessimistic picture of driver liability law as a system of incentivizing good driving changed by the presence of auto insurance? The answer is complicated. On one hand, automobile insurance has the potential to correct some of these deterrencerelated problems. Auto insurers are, unlike most drivers, extremely well-informed about the intricacies of accident law. They employ teams of lawyers whose job is to understand how driver liability laws in each state affect the liability risks of their customers. Indeed, their profitability and their survival as going concerns depend on this expert understanding of the auto liability laws of all sorts. In addition, auto insurers have unparalleled access to enormous amounts of detailed information regarding the crashrisk characteristics of millions of drivers and automobiles. This



is the result of decades of experience providing auto insurance coverage to hundreds of millions of drivers and vehicles, which in turn means pricing millions of auto insurance policies and adjusting millions of auto-crash claims over the years. No other institution or organization has the same amount of driver-specific and automobile-specific data as the auto insurance industry. In addition, recent innovations in “telematics” (which combines telecommunications, data science, and automotive technology) have increased auto insurers’ ability to gather and analyze risk-relevant driver and vehicle data. With this new and emerging technology, not only do insurers have access to information regarding how drivers’ past auto-claims and traffic-ticket histories affect their riskiness as drivers, they also have the ability to gather information on the effects of a range of specific driving behaviors on auto-crash risks. For example, a number of insurers currently gather information about drivers’ braking, acceleration, speeding, turning, and cornering behaviors. Once these driver-specific data are combined with data gathered by insurers and others (including NHTSA) about what factors cause auto accidents generally, it becomes possible for auto insurers to link specific driving behaviors of particular drivers with premium discounts. All of this information is to varying degrees already being taken into account by many auto insurance companies in the pricing of their insurance policies. For example, policy discounts are offered to drivers with good safety records as well as for vehicles with particular safety features. In addition, insurers are now offering discounts if drivers will improve their driving ability—for example, if they will take defensive driving classes. Because of the telematics revolution, auto insurers are even able to adjust premiums on the basis of the specific driving behavior of individual drivers. For example, some insurers give discounts for a range of drivercare-level factors such as wearing seatbelts, driving at moderate speeds, limiting late night trips, and avoiding aggressive braking. Also, the advances in telematics have made “pay as you go” auto insurance, under which premiums are a function of the number of miles driven, more accurate—and thus more prevalent—than ever before. Driving-behavior-sensitive auto insurance premiums— which take into account both good and bad driving choices (i.e., driver care levels) and, critically, the number of miles driven (i.e., driver activity levels)—would incentivize risk-reducing driving behavior more than even the most sophisticated government regulator could hope to do. But here is the problem: under current law and existing market conditions, auto insurers do not have strong incentives to make full use of their comparative advantage at gathering risk-relevant information and pricing their insurance on the basis of that information. The reason is that the amount of coverage currently being provided by auto insurers represents only a fraction (in many cases a small fraction) of the total risks of auto crashes. This is true of first-party auto insurance coverage, which tends to cover only a fraction of the accident risks that any driver faces. It is also true of auto liability coverage because the mandatory minimum amounts


in most states are far less than the maximum harm threatened by an auto accident that results in even one serious injury or death. As a result, many of the costs of auto accidents are currently being externalized to non-auto first-party health and disability insurers who—unlike auto insurers in the telematics age—do not tailor premiums at all based on their insureds’ driving decisions. (To the extent such coverage is provided through government programs funded by tax dollars, there is obviously no premium being charged at all.) Thus, even to the extent that auto insurers do attempt to charge individualized, behavior- and risk-adjusted auto insurance rates (which, as I noted above, they are increasingly trying to do), this incentive is undermined by the fact that auto insurers cover only a fraction of the risks of auto accidents. There are important ways, however, in which the allocation of auto-accident risks to non-auto first-party insurers has cost-reducing advantages. This may seem incongruous with the argument in the previous paragraph, but it is not. While auto insurers are in a good position, through premium discounts, to help optimize driver care and activity levels, auto insurers are not necessarily in a good position to minimize some other costs associated with providing insurance benefits. For example, primary health care coverage provided through auto insurance companies is almost certainly much more expensive than primary health care provided through regular non-auto, first-party health insurers. Although auto insurers, in a sense, specialize increasingly in reducing driver ex-ante moral hazard, it is non-auto health insurance companies who specialize in reducing ex-post medical moral hazard—that is, excessive or wasteful use of the health care system. My point here is only that the current division of auto-accident costs, allocating so little to auto insurers, may be non-optimal given auto insurers’ potential ability to incentivize better (and less) driving. To summarize, because of drivers’ lack of accident-risk information and understanding of auto tort law and their susceptibility to cognitive biases, and because of the presence of costexternalizing private and public insurance coverage for auto-crash risks, there is reason to doubt that the current negligence-based auto tort laws—automaker liability laws as well as driver liability laws—work to optimize driver care and activity levels. THE AUTOMAKER ENTERPRISE LIABILITY ALTERNATIVE

As an alternative to our current negligence-based auto tort regime, consider the possibility of a comprehensive automaker enterprise liability regime. Under such a regime, anyone who suffers a physical injury or property damage in an automobile accident would be entitled to recover compensation for the losses sustained as result of the accident from the manufacturer of the vehicle. Accident victims would not be required to show negligence on the part of the manufacturer. Nor would they have to prove that the automobiles, or any of the warnings or instructions accompanying the automobiles, are in any way defective or unreasonably dangerous. Crash victims would need to prove only that the harms for which



they seek compensation "arose out of the use of" a vehicle that was designed and built by the manufacturer from whom compensation is sought. Each automaker would be financially responsible for the losses resulting from any crash arising out of the use of that automaker's vehicles.

Liability under an enterprise liability regime, however, would not necessarily be limited to auto manufacturers. Liability could also be extended to a range of other enterprises that fall within the design, production, sale, and distribution chain of any given vehicle. The allocation of responsibility among those enterprises, however, would presumably be determined by contracts among the various counter-parties. Those contracts should be enforced so long as the cost of auto accidents is not allocated to parties who are insolvent or judgment-proof.

The types and amount of compensation recoverable under an automaker enterprise liability regime would probably be limited to economic losses—medical expenses, lost income, and property damage. The dearth of pain-and-suffering insurance observed in the marketplace could suggest that limiting compensation to economic losses would be consistent with consumer preferences. And, in any event, not providing compensation for noneconomic harms is a common and reasonable political compromise for alternative compensation regimes.

The compensation regime I am imagining is a comprehensive automaker enterprise liability regime. In other words, it would apply to all automobiles sold after the effective date of the enacting legislation, whether driven by humans, computer algorithms, or any combination of the two. One result of the adoption of a comprehensive automaker enterprise liability regime would be an increase in the price of most newly purchased automobiles relative to vehicles purchased before the effective date of the enacting legislation. This would happen because the cost of auto accidents that had been hidden in non-auto first-party medical insurance coverage prior to the enterprise liability regime would be brought into the open through increases in automobile and auto-insurance prices.

Theoretical deterrence benefits / Under a comprehensive automaker enterprise liability regime, because automakers would be responsible for all of the economic costs of auto accidents associated with their vehicles, they would be forced to internalize those costs. As a result, there would be beneficial deterrence consequences for automaker and, potentially, driver care and activity levels.

First and most obviously, automakers would have a strong legal and financial incentive to develop and implement cost-justified auto-safety innovations, whatever those might be. That is, if an automaker determined that there was some new brake design (such as a new computer-assisted automatic braking system) or some new guided cruise control mechanism that would reduce overall accident costs relative to its costs of development and implementation, then enterprise liability would reward them for

implementing those innovations and punish them for not doing so. In addition, there would be no incentive to stick with existing industry customs or consumer expectations if such customs or expectations were lagging behind proven safety innovations. And there would be no incentive to over-invest in safety features that are likely to impress a court or jury in a negligence-based lawsuit (such as a design defect lawsuit) but that, in actuality, provide less additional accident-risk reduction than they cost to produce. Second, enterprise liability would force the price of automobiles to reflect the full expected costs of auto accidents. That cost internalization, in turn, could result in a scale of automotive manufacturing and sales that would be closer to the social optimum than is currently the case because drivers would—in deciding whether to purchase a vehicle—be more likely to consider something closer to the full social costs of that decision. In other words, auto enterprise liability could push us in the direction of optimal manufacturer activity levels—the optimal number of vehicles being sold. If that were to happen, it would be a clear improvement—in terms of overall efficiency—over the existing negligence-based automaker liability regime. What would the implications of auto enterprise liability be for fully driverless vehicles? If Level 5s have lower expected accident costs relative to human-driven vehicles, then they would also have a substantially lower enterprise liability “tax” relative to humandriven vehicles (including perhaps partially driverless vehicles) made and produced after the new regime is adopted. Thus, the adoption of a comprehensive automaker liability regime would, under present assumptions, strongly incentivize and reward auto manufacturers to proceed, as quickly as is feasible, with the development and distribution of Level 5s. If an enterprise liability regime is likely to have deterrence benefits on the automaker side, what about its deterrence effects on driver behavior? First, enterprise liability would create strong legal and financial incentives for automakers to develop and adopt the most cost-effective ways of warning drivers about crash risks and of instructing drivers about how best to avoid certain types of accidents. This effect flows from the fact that enterprise liability makes automakers responsible for all of the economic costs of their vehicles’ accidents. If an automaker could actually reduce the frequency or severity of accidents in its vehicles by altering the wording, design, or placement of warnings or instructions, it would have an incentive to do so. On the other hand, if some new or revised warning would be more likely to confuse or annoy drivers than to educate them, the automaker would be incentivized under enterprise liability not to add that sort of unhelpful warning—even if it would have gotten the automaker “off the hook” under a more traditional negligence-based warning-defect standard. Automakers would do whatever works best to reduce accident costs. Thus, in the transition to Level 5s, automakers would be incentivized to warn and instruct optimally regarding both the risks and the appropriate uses of intermediate driverless technology such as guided cruise control.
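The cost-internalization point above lends itself to a small illustration: the price of each vehicle would have to carry its expected accident costs, so a vehicle with a lower expected crash rate would carry a smaller loading. This is a minimal sketch; all numbers are illustrative assumptions, not estimates from the article.

```python
# A minimal sketch of expected-cost internalization under enterprise liability.
# The probabilities, loss amounts, and coverage period are hypothetical.

def liability_loading(crash_prob_per_year: float,
                      avg_economic_loss: float,
                      years_covered: int) -> float:
    """Expected accident cost an automaker would need to build into the vehicle's price."""
    return crash_prob_per_year * avg_economic_loss * years_covered

conventional = liability_loading(0.05, 20_000, 10)  # hypothetical human-driven vehicle
level_5 = liability_loading(0.01, 20_000, 10)       # assumed lower crash probability

print(f"Loading, conventional vehicle: ${conventional:,.0f}")  # $10,000
print(f"Loading, Level 5 vehicle:      ${level_5:,.0f}")       # $2,000
```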



In addition, enterprise liability could incentivize automakers to restructure the ways that automobiles are insured and sold in order to improve driver care and activity levels. Under an enterprise-liability regime, automakers would have an incentive to shift contractually much of the expected costs of auto accidents to auto insurers. This somewhat counterintuitive result flows from the fact that auto insurers have a comparative advantage with respect to monitoring and regulating driver care and activity levels. If automakers could get auto insurers to take on somewhat more of the risk of auto accidents, the insurers would have a strong incentive to help drivers reduce expected accident costs. That is, because of competition for customers in the insurance industry, auto insurers would be incentivized to use the tools at their disposal—including individualized, driving-behaviorsensitive, risk-adjusted insurance premiums—in ways that would tend to encourage better driving habits and perhaps less driving, especially by high-risk drivers. What does this mean for how auto insurance would be sold? Auto insurance under an enterprise liability regime might be sold in the same way it is today. An individual auto purchaser, in other words, might pay the automaker for the vehicle itself and then purchase a separate auto insurance policy at the same time from a separate auto insurance company. However, given that automakers ultimately would be responsible legally for the autoaccident losses paid by the auto insurers, there would be strong incentives for contractual coordination between automakers and auto insurers. Individual auto manufacturers might even be induced to partner with particular auto insurers in an effort to offer the best, most competitively priced combined product of vehicle and vehicle-insurance coverage. Another way that enterprise liability could improve driver care and activity levels is through its effect on how automobiles are sold. For example, the introduction of an enterprise liability regime might push the automotive industry in the direction of lease transactions rather than outright sales because leasing would make it easier for automakers to enforce the terms of the auto insurance policies sold by an insurer that is contractually partnered with the automaker. Under a lease arrangement, for example, if a driver became uninsurable (because of bad driving behavior and/or increased claim payouts) or if the driver simply stopped paying her premiums, there might be a provision in the lease empowering the automaker to reclaim the vehicle. In addition to favoring leasehold arrangements, the introduction of enterprise liability might create market pressure on auto manufacturers to sell vehicles to commercial purchasers rather than individual consumers. These commercial purchasers, in turn, would either lease the vehicles to individual drivers or perhaps make them available through ride-share arrangements. Automakers would be incentivized to choose commercial purchasers who are financially responsible and would be incentivized to purchase efficient auto insurance contracts to cover the enterprise liability payouts. Such a trend toward commercial fleets would be con-


sistent with already existing market trends toward ride-sharing companies. I am not suggesting that comprehensive automaker enterprise liability would necessarily result in auto-lease arrangements replacing individual sales or ride-sharing replacing driving. Rather, once automakers are made legally responsible for the cost of auto accidents (or for most of those costs), they will have an incentive (and the ability) to structure automobile distribution markets in ways that are more efficient.

CAVEATS, CONCERNS, AND CONCLUSIONS

This description of an automaker enterprise liability regime is only a rough outline of an idea, a jumping-off point for further discussion. The actual design of such a program would require empirical research into a range of topics, including whether shifting to enterprise liability would actually, and not just theoretically, produce substantial deterrence benefits. Among the questions to be answered would be these:

■ Under any real-world version of an automaker enterprise liability regime, how long would automakers' responsibility for insuring their vehicles remain in effect? Would it be for the useful life of the vehicle or for some period of time—say, 10 years? If for some period of time, who would be responsible for covering the accidents arising out of the later use of the vehicle?
■ What would the precise relationship be between an automaker enterprise liability regime and state mandatory insurance/financial responsibility laws? Presumably, rescission of coverage by the insurer because of excessive accident experience or the failure to pay premiums would result in a suspension of driving privileges, but how would that be enforced?
■ Furthermore, if an auto enterprise liability regime were adopted, would all vehicles manufactured and sold before a given date be exempt? Or would older vehicles made before the new law goes into effect be transitioned into the new regime over time? If older vehicles were fully exempted from the new regime, how would we deal with the resulting potentially large price differential between new vehicles (which would be priced with full accident costs internalized into the purchase price) and used vehicles (which would not be)? What role could increased mandatory minimum levels of auto insurance play in assisting with that transition?
■ In addition, given that the transition to an automaker enterprise liability regime would almost certainly increase the "experienced" price of autos and driving, how would low-income families be expected to afford access to auto ownership, which has been shown to foster upward mobility?

All of these are fair questions and would need to be answered before auto enterprise liability were seriously considered.


ENERGY & ENVIRONMENT

A Cautionary Tale About Energy Efficiency Initiatives

If these programs are such bargains, then why does government mandate them and energy utilities push for them?

BY KENNETH W. COSTELLO

KENNETH W. COSTELLO is a regulatory economist and independent consultant.

I constantly hear about how wonderful utility and government-mandated energy efficiency (EE) initiatives are. Many EE supporters claim these efforts to push consumers to buy higher-efficiency appliances and use more insulating materials are "negative-cost" ways to reduce carbon emissions—that by reducing energy consumption along with emissions, these changes more than pay for themselves. For instance, in 2009 the consulting firm McKinsey & Co. estimated that adoption of cost-effective EE investments in the United States could generate $700 billion in net private cost savings. Amory Lovins, an environmental scientist and chairman of the Rocky Mountain Institute, once remarked that EE is the "lunch you are paid to eat." Yet these free lunches seem suspicious to me—and to many analysts who have studied the benefits and costs of EE initiatives. If these efforts are such a bargain, then why must government mandate them and utilities push for them?

WHY DO WE NEED EE POLICY?

The conventional economic defense for government-imposed EE standards begins by assuming deep flaws in consumer rationality, barriers to information, or underpricing of energy. Supposedly, these factors lead to consumers making incorrect calculations and tradeoffs between the initial costs of appliances and their subsequent energy-use costs. Consumers allegedly are unwilling to pay more initially for consumer durables that would use

less energy and save money in present value. Instead, they buy cheap durables that are costlier to run over time. Mandatory energy standards force consumers to make the "correct" tradeoff between initial and operating costs, "purchase" more energy efficiency, and eliminate the so-called "EE gap."

In the typical EE gap study, analysts often calculate the savings in energy costs over the lifetime of an appliance by using a discount rate converting the stream of annual costs into a present value. If the present value of cost savings from an efficient appliance is greater than the incremental cost of the efficient appliance relative to a conventional substitute, then an EE gap is said to exist. Said differently, the discount rate that consumers appear to use in their decisions about paying more initially for later energy savings is "too high" relative to the "market" discount rate used by the analyst. This gap provides the justification for both government EE standards and utility EE initiatives. Policymakers attribute the "low" adoption of EE investments to market failure or consumer-behavioral problems. The presumption is that consumers are incapable of making the correct calculations or else make decisions contrary to their self-interest. Hence, there is an economic rationale for government policies such as energy building codes, appliance standards, and utility subsidies. However, this rationale includes two assumptions that often go unrecognized by EE supporters:

■ The gap truly represents a market or behavioral failure.
■ The benefits from correcting this failure are greater than the costs.
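To make the gap arithmetic concrete, here is a minimal, purely illustrative sketch of the present-value comparison described above. All of the numbers (the appliance cost premium, annual savings, lifetime, and the two discount rates) are hypothetical and are not drawn from any study cited in this article.

```python
# Illustrative EE-gap arithmetic (hypothetical numbers, not from any cited study).
def present_value(annual_saving, years, rate):
    """Discount a constant stream of annual savings back to a present value."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

incremental_cost = 300.0  # extra up-front cost of the efficient appliance ($)
annual_saving = 60.0      # assumed yearly energy-bill savings ($)
lifetime = 12             # assumed appliance life (years)

pv_market = present_value(annual_saving, lifetime, 0.05)    # analyst's "market" discount rate
pv_consumer = present_value(annual_saving, lifetime, 0.25)  # implied consumer discount rate

# An "EE gap" is declared when savings discounted at the market rate exceed the
# incremental cost, even though consumers act as if they used a much higher rate.
print(f"PV of savings at 5%:  ${pv_market:.0f}  (gap claimed: {pv_market > incremental_cost})")
print(f"PV of savings at 25%: ${pv_consumer:.0f} (consumer's implied verdict: {pv_consumer > incremental_cost})")
```

On these assumed figures, the analyst's 5% rate makes the efficient appliance look like a net gain of roughly $230, while a consumer implicitly using 25% sees a net loss. That divergence is exactly what gets labeled an "EE gap."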

Just because market problems exist that might hinder EE invest-



ments does not mean that utility or governmental intervention is socially desirable.

RECONCILING AN EE GAP AND RATIONAL CONSUMERS


Energy consumers who do not invest in seemingly cost-effective EE can be acting rationally. To understand why, we must keep in mind three additional factors. First, consumers have difficulty verifying energy savings claims. And even if the energy savings are verifiable, future energy prices are not. Past energy prices have varied dramatically; they were


much higher in the 1970s, then low from the mid-1980s through the early 2000s, high again in the mid-2000s, and now they are low again. Thus, consumers have reason to balk at making EE investments because of uncertainty over whether those investments will pan out. The second factor is consumer heterogeneity—the simple fact that different people use energy differently. Although the average consumer may find an EE investment economically attractive, some may not because of differences in preferences, the level of energy usage, and the cost of borrowing. The third factor is the need to consider costs borne by consum-



ers themselves. These include transaction costs (e.g., the time spent by households in searching for energy-efficient appliances), poor appliance performance (e.g., dishwashers and clothes washers that do a poor job on especially soiled loads), and so forth.

ACADEMIC VS. UTILITY EVALUATIONS OF EE PROGRAMS

Another problem is that supposedly objective analyses of specific EE initiatives often reach very different conclusions. Utility-sponsored studies of EE proposals often yield results that are much more optimistic about energy savings than subsequent academic, peer-reviewed studies of the programs once they are in place. Why does this happen, and whose results should regulators believe?

Academic reviews of EE programs conclude that such programs are not the "low-hanging fruit" that many people believe. Academic reviews find that utilities grossly overstate energy savings from EE programs because they rely on ex-ante engineering estimates. The reviews also note that utilities often fail to consider "hidden costs" for consumers from the time and effort spent on both energy audits and investments. The combination of these factors, according to some academic studies, has led to utilities understating the costs of EE programs by as much as 50% or more.

Academic research on utility studies has also found "rebound effects" that reduce anticipated energy savings. A "rebound" occurs when energy consumers use their air conditioners and heating systems more intensively because of lower operating costs for the EE technologies. This reduces the actual energy savings relative to those predicted by engineering possibilities.

Academic studies also find "free riders." These are individuals who would have purchased lower energy-use appliances or HVAC systems regardless of the existence of the EE programs and thus their energy savings should not be counted as benefits created by the policy. The subsidies they receive for purchasing their EE products are pure transfers from other utility customers, many of whom are low-income households. For instance, a 2016 Energy Journal paper by Anna Alberini, Will Gans, and Charles Towe documents this effect in a heat pump subsidy program.

EE building codes have also produced less-than-expected energy savings. For instance, a 2016 American Economic Review article by Arik Levinson found that California's strict EE building codes have resulted in much less energy savings than projected. The common perception is that residential weatherization programs have produced large and cost-effective savings to low-income households. But a 2015 American Economic Review: Papers and Proceedings article by Meredith Fowlie, Michael Greenstone, and Catherine Wolfram and a 2016 Energy Journal paper by Joshua Graff Zivin and Kevin Novan provide empirical evidence to the contrary. They find ex-ante energy savings projections to be grossly high and the overall net benefits to participating households in many instances to be negative.

Most utilities fail to apply the best analytical tools to their evaluations of EE programs. These tools include randomized trials and quasi-experimental designs to measure energy savings and understand consumer behavior. The problem with other approaches is that they do not reliably measure the actual energy savings from individual EE programs.

WHY ARE EE PROGRAMS SO POPULAR?

Despite the negative evaluations of EE programs by academics, these programs are politically popular. Legislatures, governors, and state public utility commissions (PUCs) want utilities to promote EE. Some utilities initially balk at this, but PUCs then offer support to ensure the utilities' profitability isn't hurt by reduced energy sales. For instance, about half the states have adopted "revenue decoupling" for gas utilities; that is, the PUCs permit utilities to raise their rates in order to offset lower sales. These initiatives have been instrumental in mitigating utility opposition to EE programs. Instead, the utilities release reports (arguably both biased and technically flawed) showing that EE initiatives are cost-beneficial.

Everyone's happy, right? Well, someone has to pay for these initiatives, and it is almost always the utility's customers. But is it equitable and good public policy to compel utility customers to pay for EE initiatives? Many of these initiatives benefit only a relatively few customers, most of whom can afford to pay for higher EE without any financial assistance. Besides, these consumers are quite capable of making rational decisions, just like they do when they invest in other activities. So, why should utilities offer these customers subsidies and why should other customers bear the costs?
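To make the revenue-decoupling mechanism just described concrete, here is a minimal sketch of the true-up arithmetic under a simple decoupling design. The revenue requirement, forecast sales, and realized sales are hypothetical figures, and actual decoupling tariffs differ in their details from state to state.

```python
# Simple revenue-decoupling true-up for a gas utility
# (hypothetical figures; real tariffs vary by state).
allowed_revenue = 120_000_000.0  # revenue requirement approved by the PUC ($/year, assumed)
forecast_sales = 400_000_000.0   # sales forecast used to set the base rate (therms/year, assumed)
actual_sales = 384_000_000.0     # realized sales after EE programs reduce usage (therms/year, assumed)

base_rate = allowed_revenue / forecast_sales              # $/therm set in the rate case
collected = base_rate * actual_sales                      # revenue actually collected at that rate
surcharge = (allowed_revenue - collected) / actual_sales  # per-therm true-up billed to customers

print(f"Base rate: {base_rate:.3f} $/therm")
print(f"Decoupling surcharge: {surcharge * 100:.2f} cents/therm")
```

The point of the sketch is simply that when sales fall below forecast, the surcharge rises so that the utility still collects its allowed revenue, which is why decoupling blunts the utility's financial objection to selling less energy.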

ARE SOME EE PROGRAMS NOW UNECONOMICAL?

An especially relevant question for gas utilities today is, should they have eliminated or downsized some of their EE programs over the course of the “fracking” era? After all, shale gas has greatly increased the supply and lowered the cost of gas, thereby altering the energy efficiency calculus. Yet, gas utilities now spend



about $1.5 billion annually on EE programs, up from $320 million in 2007. It seems that the rationales for EE programs of both electric and gas utilities are less valid today than when they were first implemented. Their customers have better information on EE programs, and natural gas prices are low and are expected to remain so for the next several years. Presumably, the most cost-effective actions have already been exploited. Thus, market failures for EE have decreased over time, lessening the need for utility or government intervention to advance EE. Over time (we are talking about decades), we should expect to see a continual erosion of market problems, as well as consumer-behavioral ones, warranting fewer utility/regulatory ("bureaucratic") programs. That is, society should rely more heavily on the marketplace to influence EE investments, or the role of utilities should be increasingly displaced by better-functioning market mechanisms that rely on the self-interest of individual customers to reduce their energy bills.

THE PUSH FOR ELECTRIFICATION RESEMBLES THE PUSH FOR ENERGY EFFICIENCY

“Electrification” refers to the enactment of policies to induce consumers to use electricity rather than natural gas and other fossil fuels for specific end-use applications. Electrification can include conversion from natural gas heating to an electric heat pump in an existing home, or conversion from gasoline to electricity for transportation. Electrification, according to its advocates, would reduce carbon emissions, lower energy costs for at least some consumers, and increase EE by reducing the primary energy use per unit of energy service (e.g., the full-cycle energy usage per mile of driving or gallon of heated water). These advocates assume that an “electrification gap” exists—that is, there is a deviation between socially optimal electrification and actual electrification. Electrification advocates inevitably push for additional subsidies and out-of-market incentives to accelerate electrification. (Both electric vehicles and electric heat pumps presently receive subsidies from both the government and utilities.) Advocates have referred to electrification as “strategic electrification,” “smart electrification,” “beneficial electrification,” “efficient electrification,” and “policy-driven electrification.” I would add to this lexicon “bad electrification” and “artificial or subsidized electrification.” Studies have shown electrification to be technically feasible in many end-use applications and economically feasible in at least some applications. Technological advances and public policy (e.g., digitization and the focus on clean energy) seem to favor electricity over fossil fuels in the future. Electrification proponents champion policies that would accelerate electrification. Before committing to such policies, should we not have more precise calculations of the costs and benefits, instead of referring to them in qualitative terms (which so far has dominated the analyses)? Lacking today is evidence that market and behavioral problems


are severe enough to warrant additional government intervention to hasten the pace of electrification. There is a more-than-remote chance that subsidized electrification will have a negative effect on society. The question at present for policymakers is how fast electrification should develop. We should expect the electrification advocates in the coming years to employ many of the same justifications that are now used to advocate EE.

CONCLUSION

The best available evidence—peer-reviewed studies conducted by disinterested analysts using sophisticated methods—suggests that EE initiatives funded by utility customers should be scrutinized rather than reflexively praised by policymakers. Even if EE programs were ever cost effective, the “shale gas” era has made many of them ineffective now. The best available evidence suggests that EE programs transfer money from some utility customers to others with no gains in efficiency. Regretfully, this evidence has had little effect on these programs because the public is unaware of the transfers, energy efficiency is culturally popular, and utilities can enjoy their support without suffering any financial consequences. Despite that, many of these programs would fail a benefit–cost test and should be called into question. READINGS ■ “Are the Non-Monetary Costs of Energy Efficiency Investments Large? Understanding Low Take-Up of a Free Energy Efficiency Program,” by Meredith Fowlie, Michael Greenstone, and Catherine Wolfram. American Economic Review: Papers and Proceedings 105(5): 201–204 (May 2015). ■ “Electrification: The Nexus between Consumer Behavior and Public Policy,” by Kenneth W. Costello. The Electricity Journal 31(1): 1–7 (2018). ■ “Evaluating the Costs and Benefits of Appliance Efficiency Standards,” by Jerry A. Hausman and Paul L. Joskow. American Economic Review: Papers and Proceedings 72(2): 220–225 (May 1982). ■ “Free Riding, Upsizing, and Energy Efficiency Incentives in Maryland Homes,” by Anna Alberini, Will Gans, and Charles Towe. The Energy Journal 37(1): 259–290 (2016). ■ “How Much Energy Do Building Energy Codes Save? Evidence from California Houses,” by Arik Levinson. American Economic Review 106(10): 2867–2894 (October 2016). ■ “Motivating and Evaluating Energy Efficiency Policy,” by Kenneth Gillingham, Amelia Keyes, and Karen Palmer. Resources for the Future Working Paper WP17–21, November 2017. ■ “The Energy Paradox and the Diffusion of Conservation Technology,” by Adam B. Jaffe and Robert N. Stavins. Resource and Energy Economics 16(2): 91–122 (May 1994). ■ “Upgrading Efficiency and Behavior: Electricity Savings from Residential Weatherization Programs,” by Joshua Graff Zivin and Kevin Novan. The Energy Journal 37(4): 1–23 (2016). ■ Unlocking Energy Efficiency in the U.S. Economy, published by McKinsey Global Energy and Materials (McKinsey & Co.), 2009. ■ “What Does a Negawatt Really Cost? Evidence from Utility Conservation Programs,” by Paul Joskow and Donald B. Marron. The Energy Journal 13(4): 41–74 (1992).


ENERGY & ENVIRONMENT

Utility Energy Efficiency Initiatives Are Good Policy

These programs address important market failures and have been shown to be cost-effective.

BY MARTIN KUSHLER, ED VINE, AND KEN KEATING

MARTIN KUSHLER is senior fellow at the American Council for an Energy Efficient Economy and former director of evaluation at the Michigan Public Service Commission. ED VINE is a former senior staff scientist at the Lawrence Berkeley National Laboratory. KEN KEATING is the former manager of evaluation at the Bonneville Power Administration. All three have served as board president of the International Energy Program Evaluation Conference.

Researchers have been evaluating and documenting the effects of utility energy efficiency programs for decades, and nearly every state in the nation now has policies providing for utility energy efficiency programs. The research shows that these programs have been generally cost-effective and are well-justified as a way to address market failures such as imperfect information, split incentives, externalities such as environmental costs, and regulatory concerns that arise from utility monopoly power. From the outset, some critics leveled three arguments against these programs. Those arguments are:

■ If these energy efficiency measures are really so beneficial, then consumers would adopt the measures on their own.
■ The methods used to evaluate these programs are flawed.
■ There are insufficient evaluation data to demonstrate that energy efficiency programs are cost-effective.

Perhaps the most-cited example of these arguments is Paul Joskow and Donald Marron's 1992 Energy Journal article "What Does a Negawatt Really Cost?" Immediately following the appearance of that article, and in the quarter-century since, energy efficiency program supporters have responded to those arguments. For example, in a 1994 Electricity Journal article, Amory Lovins authored "Apples, Oranges, and Horned Toads: Is the Joskow & Marron Critique of Electric Efficiency Costs Valid?" And in a 1996 Energy Journal article titled "The Total Cost and Measured Performance of Utility-Sponsored Energy Efficiency Programs," Joseph Eto et al. examined 20 resource-oriented utility programs and confirmed the cost effectiveness of those programs and their viability as a utility resource option.

REBUTTING THE ARGUMENTS

Those responses and many years of subsequent field testing and program evaluation have persuaded energy regulators. Total annual utility spending on energy efficiency programs has increased seven-fold since 1996. However, given that the old criticisms have resurfaced in recent years, this brief article offers some updated responses. Consumer choices? / With respect to the argument that consumers would adopt energy-saving measures on their own if the measures were truly efficient, the obvious response is, “Then why haven’t they?” There is plenty of cost-effective energy efficiency improvement available to be captured by energy efficiency programs year after year. Considerable research has identified the market failures and obstacles to customer implementation of energy efficiency measures, e.g., lack of information, lack of easy access in the local market, lack of capital, etc. For examples of this literature, see the 2015 U.S. Department of Energy Report to Congress Barriers to Industrial Energy Efficiency and the Lawrence Berkeley National Laboratory report Market Barriers to Energy Efficiency by William Golove and Joseph Eto.

Flawed research? / With respect to the criticisms of evaluation methods, some of those concerns had some validity in the early




days of energy program planning (e.g., over-reliance on ex-ante engineering projections, failure to account for “free-riders” who would have adopted the efficiency measures without the programs). But practitioners have subsequently recognized those concerns and taken steps to address them. For example, there are 75 peer-reviewed papers on the International Energy Program Evaluation Conference (IEPEC) website (www.iepec.org) about free-ridership. Evaluators now routinely take free-ridership into consideration when evaluating programs. Similarly, evaluators have contributed feedback to improve ex-ante engineering expectations in program planning, which has led to less divergent estimates between the ex-ante and ex-post estimates. Most importantly, at this point professional practice within the evaluation industry would never simply claim ex-ante engineering estimates as the reported energy savings from a program without any ex-post analysis and verification. More recently, critics of energy efficiency programs have argued that in order to produce sufficient evidence of the programs’ benefits, a true randomized experiment (where subjects are randomly assigned to treatment and control conditions) must be conducted. Practitioners in the field know that because of practical constraints and regulatory concerns about customer access, it is seldom possible to randomly assign customers to receive a program. Instead, the program evaluation profession uses a variety of technically sophisticated quasi-experimental methods to try to answer important policy questions using the best empirical evidence. Such methods are widely used in many other professions, from advertising to education to mental health. While we support the greater use of randomized experimental design in the evaluation of energy efficiency programs, to suggest that anything short of a true randomized experiment is not methodologically sufficient is a poor criticism and ignores the limitations of such


a methodology. Our view is echoed by the State and Local Energy Efficiency Action Network, which provides guidance and recommendations on methodologies that can be used for estimating energy savings resulting from energy efficiency programs. Insufficient data? / A particularly objectionable criticism of energy efficiency programs that has rarely but occasionally been raised is that evaluations that show utility energy efficiency programs are cost-beneficial must somehow be biased. This claim clashes with the seriousness of program evaluation professionals. IEPEC has been providing training and conducting conferences on program evaluation for over 30 years, with hundreds of peerreviewed professional papers being published and cited in regulatory proceedings as well as the academic literature. The IEPEC website provides free access to all its published papers since 1997. Utility regulators also take their jobs very seriously and offer another buttress against the claim that evaluations of energy efficiency programs produce biased results. States universally require utility energy efficiency programs to pass cost-effectiveness tests (with the exception of low-income programs, which are justified by equity considerations). Nearly all states require that evaluations be conducted by independent contractors rather than by utility staff. Proposed utility programs and utility energy efficiency program results are typically examined in contested case proceedings where all interested parties are free to challenge those results. (One of the authors of this paper was the director of evaluation at a utility regulatory commission for 10 years and would argue that, dollar for dollar, no other area of utility expenditures receives as much scrutiny as energy efficiency programs.) As for the question of whether “government mandates” are necessary to achieve utility energy efficiency programs, there is a fundamental market failure at play here. Despite extensive



evidence that energy efficiency programs are much less expensive than building, fueling, and operating power plants and delivering that energy through extensive transmission and distribution networks, utilities inherently would rather sell more energy than less. Absent requirements and regulatory mechanisms such as decoupling and performance incentives, utilities simply would not provide—and historically have not provided—serious energy efficiency programs for their customers. Utilities are regulated monopolies that operate under all sorts of government mandates in exchange for their franchise. So, if energy efficiency programs are in the public interest (e.g., they reduce total costs to customers for the utility system), it is entirely appropriate for government to require and/or incentivize utilities to provide these programs. Finally, as to the fundamental question of whether these utility energy efficiency programs are a good value, the evidence is overwhelming. There are literally thousands of individual reports documenting the effects of these programs (many of which are cited in the IEPEC archives noted above). This extensive analysis is itself testimony to the fact that utility regulatory commissions require extensive scrutiny of energy efficiency programs. How many independent evaluation reports have been required and published for other utility expenditures, from bucket-trucks, to transformers, to billing systems, to company management structures, etc.? For a good overview of utility energy efficiency results, comprehensive summaries are available from the American Council for an Energy Efficient Economy (ACEEE) as well as from the Lawrence Berkeley National Laboratory (LBNL). In a 2018 ACEEE analysis, Maggie Molina and Grace Relf examined results for the 49 largest electric utilities in the United States and found an average cost of saved electricity of 3.1¢ per kilowatt-hour. A previous ACEEE report by Molina summarized the results from 10 different states across four years of programs and found an overall utility cost of 2.8¢/kWh saved for electric programs and 35¢ per therm (100,000 Btu) saved for natural gas programs. An earlier 2009 ACEEE study by Katherine Friedrich et al. across 10 states for electricity and six states for natural gas found average costs of 2.5¢/kWh and 37¢/ therm. A 2018 LBNL study by Ian Hoffman et al. examined the cost performance of 8,790 electricity efficiency programs between 2009 and 2015 for 116 investor-owned utilities and other program administrators in 41 states and found an average cost of 2.5¢/ kWh saved. A 2014 LBNL study by Megan Billingsley examined over 100 programs across 31 states over a three-year period and found an average total utility cost of 2.1¢/kWh and 38¢/therm. All of these average costs are well below the utility system avoided cost for delivered electricity and natural gas, demonstrating clearly that energy efficiency is indeed a cost-effective utility resource. This is true even in an era of very low well-head natural gas prices. (And the risks associated with that era being temporary is a whole other subject.) Furthermore, this assessment of energy efficiency program value does not attempt to incorporate the value of any additional benefits commonly associated with

energy efficiency programs, such as improved customer health and safety, business productivity and operation and maintenance savings, or reduced environmental emissions.
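As a rough illustration of how a "cost of saved energy" figure like those just cited is computed, here is a minimal sketch of the usual levelization arithmetic: annualize the program spending over the life of the installed measures, then divide by the annual savings. The spending, savings, measure life, and discount rate below are hypothetical inputs, not the actual data behind the ACEEE or LBNL estimates.

```python
# Levelized cost of saved energy (illustrative sketch with hypothetical inputs).
def capital_recovery_factor(rate, years):
    """Factor that spreads a one-time cost over a measure life at a given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

program_cost = 10_000_000.0        # one year of utility program spending ($, assumed)
annual_savings_kwh = 40_000_000.0  # yearly savings from that year's installed measures (kWh, assumed)
measure_life = 12                  # average life of the installed measures (years, assumed)
discount_rate = 0.05

annualized_cost = program_cost * capital_recovery_factor(discount_rate, measure_life)
cost_of_saved_energy = annualized_cost / annual_savings_kwh  # $ per kWh saved

print(f"Cost of saved energy: {cost_of_saved_energy * 100:.1f} cents/kWh")
```

A figure computed this way is then compared with the utility's avoided cost of supplying a kilowatt-hour, which is the comparison the studies cited above rely on when they call the programs cost-effective.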

CONCLUSION

The best available evidence robustly demonstrates that utility energy efficiency programs have been a cost-effective utility resource. The public policies that require and encourage these programs are well-justified, both conceptually as a response to market failures and monopoly power, as well as empirically given the demonstrated cost-effective results. Regulators should of course continue to exercise good oversight, but there is no basis for abandoning the policy and regulatory framework that facilitates these programs. READINGS ■ “A National Survey of State Policies and Practices for the Evaluation of Ratepayer-Funded Energy Efficiency Programs,” by Martin Kushler, Seth Nowak, and Patti Witte. ACEEE Research Report U122, 2012. ■ “Apples, Oranges, and Horned Toads: Is the Joskow & Marron Critique of Electric Efficiency Costs Valid?” by Amory B. Lovins. The Electricity Journal 7(4): 29–49 (May 1994). ■ Barriers to Industrial Energy Efficiency: Report to Congress. U.S. Department of Energy, June 2015. ■ “Does Efficiency Still Deliver the Biggest Bang for Our Buck? A Review of Cost of Saved Energy for US Electric Utilities,” by Maggie Molina and Grace Relf. Proceedings, ACEEE Summer Study, 2018. ■ “Evaluation, Measurement, and Verification (EM&V) of Residential BehaviorBased Energy Efficiency Programs: Issues and Recommendations,” by Annika Todd, Elizabeth Stuart, Steven R. Schiller, and Charles A. Goldman. Lawrence Berkeley National Laboratory, 2012. ■ “Experimentation and the Evaluation of Energy Efficiency Programs,” by Edward Vine, Michael Sullivan, Loren Lutzenhiser, et al. Energy Efficiency 7: 627–640 (2014). ■ Market Barriers to Energy Efficiency: A Critical Reappraisal of the Rationale for Public Policies to Promote Energy Efficiency, by William H. Golove and Joseph H. Eto. Energy & Environment Division, Lawrence Berkeley National Laboratory, March 1996. ■ “Saving Energy Cost-Effectively: A National Review of the Cost of Energy Saved through Utility-Sector Energy Efficiency Programs,” by Katherine Friedrich, Maggie Eldridge, Dan York, et al. ACEEE Research Report U092, 2009. ■ “The 2018 State Energy Efficiency Scorecard,” by Weston Berg, Seth Nowak, Grace Relf, et al. ACEEE Research Report U1808, 2018. ■ “The Best Value for America’s Energy Dollar: A National Review of the Cost of Utility Energy Efficiency Programs,” by Maggie Molina. ACEEE Research Report U1402, 2014. ■ The Cost of Saving Electricity through Energy Efficiency Programs Funded by Utility Customers: 2009–2015, by Ian M. Hoffman, Charles A. Goldman, Sean Murphy, et al. Lawrence Berkeley National Laboratory, 2018. ■ The Program Administrator Cost of Energy Saved for Utility Customer-Funded Energy Efficiency Programs, by Megan A. Billingsley, Ian M. Hoffman, Elizabeth Stuart, et al. Lawrence Berkeley National Laboratory, 2014. ■ “The Total Cost and Measured Performance of Utility-Sponsored Energy Efficiency Programs,” by Joseph Eto, Edward Vine, Leslie Shown, et al. The Energy Journal, 17(1): 31–51 (1996). ■ “What Does a Negawatt Really Cost? Evidence from Utility Conservation Programs,” by Paul Joskow and Donald B. Marron. The Energy Journal 13(4): 41–74 (1992).





IN MEMORIAM

A Conservative Anarchist?

ANTHONY DE JASAY 1925–2019

BY PIERRE LEMIEUX

PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences of the Université du Québec en Outaouais. His latest book is What's Wrong with Protectionism? Answering Common Objections to Free Trade (Rowman & Littlefield, 2018).

Anthony de Jasay died January 23rd at age 93. Although he is not a household name even among academics and intellectuals, he was one of the most creative thinkers of our time and I think he will be recognized as such by future generations. He already has many fans among libertarians and classical liberals. Perhaps, as we shall see, he should have more admirers among conservatives, too.

He was an academic anomaly. His books, published by major academic publishers, and his scholarly articles were at the cutting edge of political philosophy but he was not affiliated with any university and did not have a Ph.D. (which, at least in America, is the membership card of the academic class). At age 23 he left his native Hungary to study economics in Australia and at Oxford. In 1962 he settled in Paris where he made a career in banking and finance. He retired in 1979, moved to Normandy, and devoted the rest of his life to independent scholarship.

I was privileged to know him personally, as he participated in several of the Liberty Fund conferences I directed in France from 1990 to 2009. He was nearly blind for most of the years I knew him; his wife Isabelle would read aloud for him. Our last contact was in June 2018. By that time, he had suffered a stroke from which he had not totally recovered. On June 4, I published a review of his masterpiece The State (1985) in the new Liberty Classics section of Liberty Fund's Library of Economics and Liberty. Two days later I received a nice email from him, beginning with, "You have given me great pleasure with your review of my book."

THE PROBLEMS WITH THE STATE

De Jasay's 1985 book set out the basic problem of the state, the whole apparatus of formal government. The state cannot please ("bring utility to," as economists would say) everybody because individuals have different preferences. Whatever intervention the state carries out, it will benefit some individuals but harm others. The simplest example is redistribution of money: it benefits the recipients and harms the coerced donors. Regulations and other interventions have a similar effect. The state is necessarily an "adversarial state."

Because individual preferences are subjective, there is no scientific way to weigh the costs and benefits of policy to all affected individuals and derive a net benefit or cost that is meaningful. Economics has been haunted by this problem for nearly a century but economists have generally tried to sidestep or ignore it. De Jasay took it seriously. Even if only one individual is harmed and countless others benefit, we cannot know if the utility (or satisfaction or happiness) lost by the harmed individual is larger or smaller than the utility gained by the others.

Comparing the utility of two individuals, and a fortiori between two sets of individuals, relies on nothing but the "personal value judgments of whoever is doing the comparison," de Jasay wrote in his last book, Social Justice and the Indian Rope Trick (2015). (See "The Valium of the People," Spring 2016.) Putting a money value on gains and losses from a government intervention does not solve the problem; only an actual voluntary exchange can guarantee that both parties gain.

As The State put it in immortal terms, "When the state cannot please everybody, it will choose whom it had better please." The chosen beneficiaries will of course be the clients whose support is most necessary to the state—that is, to the state's politicians and bureaucrats (including judges, prosecutors, policemen, and soldiers). The state buys the consent of the clients it needs in order to stay in power.

The incipit of The State reads, "What would you do if you were the state?" (de Jasay's emphasis). You would, of course, try to increase or at least maintain your power, which is the proximate means by which to obtain money, perks,



access to people, and a host of subjective benefits. A democratic state does not change that. On the contrary, it is a formal feature of the democratic state that it must satisfy (a majority of ) the electorate. But the more it satisfies some voters at the expense of the others, the more dissatisfied the others become. If it then tries to compensate the latter, it will re-alter the distribution of costs and benefits, making more people unhappy. The state has no money but what it takes from its citizens. Its regulations rarely—never, de Jasay would say— benefit everybody and leave no one unharmed. Political competition forces the state to run the redistribution machine to the point where most people are inextricably both benefited and harmed, and where the state uses all its power to satisfy insatiable clienteles. As time goes on, the state becomes more and more of a redistributionist drudge. While its total power has increased, its dwindling discretionary power is used just to stay in power. At some point it will have no choice but to limit electoral competition and put all its former clients under its yoke. Tyranny must be the ultimate outcome. De Jasay exposed the concept of “social justice” as being nothing but a set of conflicting claims that put in motion and justify state redistribution. Some groups simply want to grab benefits at the expense of other groups. For a while, the most powerful and influential win; ultimately, everybody loses except for state rulers.

HOW CAN SOCIETY WORK?

Can society exist without state laws and social justice? Or, as the question is usually asked, if the state doesn't make laws and pursue social justice, who will? De Jasay argues for a theory of justice based on evolved conventions that simply prevent wrongs and thus circumscribe exceptions to a general presumption of liberty. The performance of promises and the rules of property (including the principle of "finders keepers") are such conventions. Ownership is prima facie proof of legitimate property unless it can be proven that a specific piece of property has been stolen or is creating a nuisance. Conventions solve the problems of social coordination that can't be solved by contract. By following established conventions, individuals contribute to the requirements of social cooperation, as shown by game theory in repeated interactions. Conventions solve prisoner's dilemma situations (that is, situations where individual rationality leads to results that nobody wants).

One way to understand de Jasay's theory is to contrast it with that of James Buchanan, the 1986 economics Nobel laureate and principal originator of the public-choice school of economics. Buchanan wrote in Public Choice a very favorable review of The State despite the fact that it represents a major challenge to his own political theory. Like most economists, Buchanan believed that "public goods" exist, and they can only be produced or financed at an optimal level by the state. The reason is that, by definition, public goods are goods that everybody wants but are non-rival in consumption. For example, Buchanan would say, everybody equally benefits from national defense or (locally) from a flood-control dam once those goods are produced. Free-riders will not pay for such goods, hoping that others will. But then the goods won't be produced at an optimal quantity—if at all. The state ostensibly has to intervene and tax people to produce what everybody wants.

De Jasay, on the contrary, argued that contractual means and voluntary institutions exist to produce public goods. Many consumers will not want to risk the chance that a public good they want will not be produced, so they will not take the gamble to free-ride. Others might but, in the absence of a functioning market, there is no way to know what the "optimal" quantity of a public good is. If there is "market failure," it is not as dangerous as "government failure"—government-mandated, non-optimal provision. One instance of government failure is that production




of public goods by the state brings the "return of the free rider": the larger the state is, the more benefits it can offer, and the more citizens will free-ride by minimizing their taxes and maximizing their receipt of state benefits (including subsidies). De Jasay's 1989 book Social Contract, Free Ride: A Study of the Public Goods Problem analyzes these issues.

Another major point of discord between de Jasay and Buchanan is social contractarianism. Buchanan imagined an implicit, unanimous social contract that calls on the state to produce public goods while at the same time limiting the reach of the state. De Jasay countered that if individuals are capable of unanimously agreeing on a social contract, they also are able to contractually agree to the production of the public goods that supposedly justify the social contract. He also argued that a presumed social contract fools people into believing that they are, as it were, coerced by themselves, thus disarming resistance and strengthening Leviathan.

LIBERALISM, CONSERVATISM, ANARCHY

Where exactly does this put de Jasay on the political spectrum? Prodded by philosopher and economist Hartmut Kliemt of the Frankfurt School of Finance and Management in a 2000 video by Liberty Fund, de Jasay confessed, "Yes, I am an anarchist." He might have worn that badge more proudly had he not believed, speaking of "the few other anarchists," that "many of them are crazy." In a 2014 email I asked him if he would accept the label "conservative anarchist." Without explicitly disavowing my proposal (on the contrary, he said that one of his grandsons had had the same idea), he replied, "If I have to describe myself, I would say that I am a modern liberal." He later added, much in the vein of his conversation with Kliemt, that he did not like the label "anarcho-capitalist" because, he said, "I do not wish to be counted as one of that company."

By "modern liberal," de Jasay meant "classical liberal," but the label does not fit perfectly. He seemed suspicious of universal values, yearning instead for the close community where conventions can be voluntarily enforced. He was critical of codified rights and their supporting theories, which he pejoratively called "rightsism." He did not seem worried that his convention-based anarchy could be unenlightened, stifling, and oppressive, which may be the Achilles heel of his theory. All that seems more conservative than liberal. I still believe that the label "conservative anarchist" partly fits him. His defense of property also looks at least as conservative as liberal. He invoked against immigration the strange argument that a country can be viewed as "the extension of a home." In some cultural sense, he was a non-politically-correct conservative. He did not try, like many libertarians, to be everything to everybody. An example, which also illustrates his devastating humor, is found on the first page of Social Contract, Free Ride. His first generic "he" ("Nobody would suffer or profit from 'spillovers' he did not cause") directed to a footnote that read, "Wherever I say 'he' or 'man,' I really mean 'she' or 'woman.'" He once told me that Murray Rothbard had said the book was worth its price if only for that footnote.

THE PRIMACY OF ANARCHY

The conjunction of anarchism and conservatism may look strange, but classical liberalism and anarchism are clearly related. As Raymond Ruyer, the late French philosopher, wrote in his 1969 book In Praise of the Consumer Society (Éloge de la société de consommation), "Real anarchism, feasible and actual, as opposed to mere emotional statements, is simply the liberal economy." Another French author, academician Émile Faguet, wrote in 1888 that "an anarchist is an uncompromising liberal." Tony is in good company.

The fraternity between anarchism and classical liberalism suggests a common denominator between de Jasay and Buchanan, who also defined himself as a liberal. For both theorists, the state is dangerous and anarchy would be the ideal. Buchanan thought that a limited state was necessary to protect ordered anarchy, while de Jasay believed that limiting the state was a mission impossible. But both agreed with what we could call the primacy of anarchy. This provides us with a principle for evaluating all things political.

De Jasay went much further than Buchanan on the road to anarchy. The only state that could ideally tempt de Jasay was a "preemptive state" or "capitalist state" or "minimal state." He wrote that such a state "would be an anti-state actor whose rational purpose would be the opposite of that of the state, preempting the place that a state can otherwise take and expand in." The minimal state would protect anarchy, if that were possible.

In a sense, de Jasay combined the best in classical liberalism, cultural conservatism, and anarchism. In that, too, he and Buchanan were similar. But de Jasay was not an optimist. He was not sure that anarchy could survive if it were to appear again in the world as it appeared before in primitive societies. And constraining Leviathan, he argued, is impossible. I would suggest that we should try, even while keeping the focus on anarchy as an ideal. In this task, the work of Anthony de Jasay will be very useful.




IN REVIEW

Us or Them, or Us and Them?

REVIEW BY PIERRE LEMIEUX

In Defense of Openness: Why Global Freedom Is the Humane Solution to Global Poverty
By Bas van der Vossen and Jason Brennan
240 pp.; Oxford University Press, 2018

PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences of the Université du Québec en Outaouais. His latest book is What's Wrong with Protectionism? (Rowman and Littlefield, 2018).

The question in the title of this review is paraphrased from the new book by Bas van der Vossen and Jason Brennan, philosophers at Chapman University and Georgetown University respectively. Their book, In Defense of Openness, presents a strong, well-argued case for global openness, by which they mean not only free trade in goods and services but also open immigration.

Global openness, they argue, is the only way to resolve the injustices that have generated or maintained so much poverty in the world. Their case is primarily a moral case: morally defendable individual rights include economic freedom across political borders. They argue that a strong presumption exists for liberty and this presumption is impossible to invalidate. They also present an economic case for openness, which is the only way to increase prosperity over the whole planet. It is an interesting book of philosophy informed by economics (as it should be).

Van der Vossen and Brennan believe that justice must be compatible with "common-sense moral intuitions and ideas" and with empirical facts. For example, economists have shown that the quality of institutions (social, political, legal, economic) is a determining factor in economic growth and we must include this factor in any theory of justice. What is needed is "positive-sum global justice"—that is, win–win cooperation among individuals as opposed to simply taking from some individuals to give to others.

Good institutions are built around the rule of law, private property rights, and economic freedom. These economic rights are "human rights" by themselves, the two philosophers argue, adopting the usual rights-talk of mainstream philosophers.

'Yes' to mass migration / If economic rights are defendable within national borders, they seem to also be valid in interactions over national borders. Thus, there is a moral presumption for free trade and free international mobility, just as such a presumption applies within a given country. This presumption may be defeated, but only with justifications. To assert that normal economic freedoms stop at a political border because they are superseded by the group rights of people across the border presupposes a demonstration that group rights (already a fuzzy construct) are sufficient to defeat the presumption of liberty. This is not easy to do.

How could the moral presumption for free mobility and thus free immigration be defeated? Certainly not by economic arguments, the authors argue persuasively. Economic research suggests that free mobility of workers, whereby every individual can move wherever his work is most valued in the world, would greatly increase global GDP, perhaps by as much as 50% to 150%. Open immigration would be a win–win, just as free mobility within a country increases economic efficiency.

The two philosophers, who know much about economics, debunk standard economic objections to open immigration. Immigration cannot generally push down wages, if only because (as they could have noted) immigrants also increase the demand for other goods and services and thus the wages in those other industries. Did women push down wages when they arrived on the labor market? And, anyway, would that be a good objection to their freedom to work? The answer to both questions is negative.

As for the welfare state, it does not justify cutting immigration. As a matter of fact, immigrants in America don't seem to use the welfare state more than the natives. Assuming there is a welfare-state problem, it would only justify cutting welfare payments to immigrants, not cutting immigration—although this would raise other issues.

Van der Vossen and Brennan confront the "illiberal immigration" argument proposed notably by economist Paul Collier. Van der Vossen and Brennan explain this argument as follows: "Immigrants bring along their cultures, ones that lack support for the rule of law, democracy, and freedom." "As a result," the argument continues, "allowing people to move freely from poor to rich societies undercuts the very institutions that make prosperity possible, and with it social stability and liberal freedom." Open immigration would destroy the very institutions and features of free societies that make them attractive.

Note that Van der Vossen and Brennan use the term "liberal" and its opposite "illiberal" to refer to classical liberalism writ large and its opposite. "Liberals" include libertarians as well as those American-style liberals who believe in the presumption of liberty.

To the Collier "illiberal immigrants" objection, the book offers many counterarguments. First, the implied extreme scenario of the liberal receiving country transforming into a poor, illiberal country is not likely. Liberal cultures have proven to be extremely robust, as American history shows. Second, the authors admit that



“forcibly preventing immigration could be justified if it really were necessary to avoid this nightmare scenario.” At worst, they add, the danger of illiberal immigration would be a reason to restrict immigration from institutionally bad countries only— assuming that such discrimination would be constitutionally or politically feasible. And if it were legitimate to block illiberal immigrants in order to preserve the liberal society, would it not also be legitimate for, say, Virginians to protect themselves against West Virginians? It won’t do to answer, “No, because Virginians and West Virginians are from same country,” because the moral legitimacy of prioritizing fellow citizens is precisely what needs to be demonstrated. We are back to the need to defeat the presumption for liberty in order to oppose international openness. “We need to know,” Van der Vossen and Brennan write, “why countries are supposedly justified in doing things to foreigners that they would view as horribly unjust if done to their subjects.” We must avoid the circular argument that fellow citizens must be prioritized over, and protected against, foreigners because the latter are different; and that the latter are different because fellow citizens must be prioritized. Doubts about open borders / Is the two phi-

losophers’ defense of open immigration as tight as it appears? Is the invasion objection so easily dismissed? Isn’t it likely that, past a certain threshold, illiberal immigrants would destroy liberal institutions and prosperity? It is not mainly—or at least only—a matter of the right to vote, which immigrants don’t immediately obtain anyway and which could be postponed longer. It is more a matter of informal institutions being toppled by an invasion of people with different cultures. Think about the rules of tolerance. Or think about trust, a certain level of which is important, especially in a free society. Some research suggests that trust can be best, if not only, maintained among individuals who generally follow the same rules and share the same culture. Assume a million-strong liberal soci-

ety with liberal institutions and imagine that two million illiberal immigrants take up residence in their midst. It seems obvious that the invasion will change this society’s institutions. Predictability of human behavior will diminish. Mistrust will increase. People will self-segregate in different enclaves. Individuals will feel more and more insecure. To maintain some sort of social peace, laws will eventually change. Social relations will become more regulated and individual liberty more controlled. James Buchanan’s contractarianism may illuminate the problem under investigation. The parties to a Buchanan-type of (implicit) social contract would see their country as a club with a controlled membership precisely in order to preserve their liberal institutions. This approach, which Van der Vossen and Brennan do not discuss, would not justify completely closing the border to foreigners. It is unlikely that the contracting individuals would unanimously agree that, for example, foreign spouses could not immigrate or that citizens could not hire foreigners as nannies or business employees. But it could justify some reasonable and not-illiberal control on immigration.

/ Compared to the murky case of immigration, trade represents a simple case. Van der Vossen and Brennan argue that the right to trade internationally is, just like the right to trade domestically, a basic right that is essential for an individual to pursue justice and the good life—notwithstanding philosopher John Rawls. The moral presumption for the freedom to trade internationally seems as irrefutable as other economic freedoms. The economic case reinforces the moral presumption: free trade has been shown to lead to increased production and a radical drop in poverty. Following the law of comparative advantage (of which Van der Vossen and Brennan provide a good explanation), free trade benefits both poor and rich countries, just as it benefits different regions within a country. The authors could have added to their arguments that free trade between California and MissisFree trade: a simple case


sippi benefits people in both states even if wages are 40% lower in the latter. In Defense of Openness finds that no good philosophical argument overcomes the moral presumption for free trade. For example, “exploitation” in sweatshops is not a good argument. Working there is the best option for the poor who choose it; otherwise they would have chosen another option among those open to them— scavenging dumps or prostitution, for example. Closing a sweatshop amounts to removing the best option of its workers, making them worse off. “We shouldn’t take away a victim’s best option on the grounds that it is unjust unless we can replace it with an even better option,” they write (emphasis in original). (See “Defending Sweatshops,” Spring 2015.) The two philosophers also answer an argument of Aaron James, a philosopher at the University of California, Irvine. Abolishing a tariff (or another obstacle to trade), James argues, hurts those who benefited from it just as establishing it hurts those who previously traded freely. In both cases, he claims, some lose and some win, and there is no presumption one way or the other. Ignore the fact that those harmed by a tariff shoulder a higher cost than the benefits of those it favors, which is the same as saying that free trade leads to a net benefit in terms of money. Van der Vossen and Brennan emphasize that the ban of a liberty does not have the same moral status as the restoration of a liberty. Note that a Buchanan type of social contract creating an island of liberty cannot conceivably limit free trade as it can constrain immigration. The parties to the contract are unlikely to unanimously accept that protecting their liberty and liberal institutions requires limiting their freedom to trade goods and services over the country’s borders more than within their country. On the contrary, the power to limit trade with foreigners would put too much power in the hands of Leviathan. Positive-sum justice / One major strand of

In Defense of Openness is the idea that justice or the correction of injustices—the focus



of the political philosopher—can best be attained through individual liberty and economic growth. Consider the injustices created by colonialism, which may partly explain the poverty of many of today’s underdeveloped countries. The two philosophers point out that the depredations of the natives may explain as much. Moreover, the residents of the colonizing countries were probably exploited by their own imperial governments: “Empires don’t pay for themselves.” The moral and efficient way to correct such past injustices is not to impose a collective responsibility on today’s descendants of the colonizers, but for them to open their borders and markets. Since everybody would benefit, it would be positive-sum justice. Many philosophers, such as Thomas Pogge of Yale University and Nicole Hassoun of Binghamton University, argue for some form of international redistribution toward poor countries. There is no need for morally and economically doubtful redistribution, reply Van der Vossen and Brennan. What is needed is simply to stop the current injustice of governmentimposed obstacles to international trade and mobility. Foreign aid cannot be defended from either a moral or an economic viewpoint. As many economists have observed, the large amounts of aid given over the last five decades have had practically no effect—or worse, have fed corrupt regimes and thus retarded economic growth. Experience has shown that trade liberalization is the way to cut world poverty. Trade, not aid! “The main way people in developed societies have contributed to ending poverty abroad,” write Van Der Vossen and Brennan brilliantly, “has been through buying Made in China products.” And the buyers have benefited, too. Free trade is justice. “Not only do people have a prima facie right to exchange goods without coercive interference,” the authors write, but also “allowing them to do so generally works to the benefit of everybody.” Climate change—which the authors take very seriously— is often used as an argument to restrain economic growth. To the con-

trary, the book argues, growth can provide the resources to mitigate the effects of climate change without harming developing countries, which are responsible for much of the current growth in greenhouse gas emissions. But economic growth requires international openness, which would also allow the people who are most harmed by climate change to move to more hospitable places. “The choice, the authors write, “is largely between a world much better equipped to deal with poverty and climate change, and a world much worse in both respects” (emphasis in original). “What we owe people around the world is openness,” concludes the book’s post-

script (emphasis in original). Van der Vossen and Brennan provide several strong arguments for abolishing at least some immigration restrictions, but I have argued that completely open immigration is very questionable. In fact, the two authors are often less radical than they appear at first sight. The presumption of liberty perhaps is what’s most important: departures from it need justifications. At any rate, the book remains a good antidote to the current irrational discourse and callousness against the convenient scapegoats that immigrants represent. And Van der Vossen and Brennan’s case for free trade is unassailable.

The Truth about Economic Incentive Programs ✒ REVIEW BY GREG KAZA

The oft-stated goal of government economic development incentive programs is to create jobs. Yet policymakers and program advocates seldom conduct careful analysis of these programs to determine how well they work. Given how many states use these programs liberally yet fail to even keep up with national job-creation rates, one suspects these efforts are largely ineffective. So why do these programs continue to multiply? In this new book, Nathan Jensen (University of Texas, Austin) and Edmund Malesky (Duke University) advance original arguments that explain the ubiquity of these incentives and offer technically feasible and politically practical reforms to rein in these programs. The issue is topical: in recent decades government pandering with incentives has grown in response to threats by sports teams, movie companies, and manufacturing firms to relocate their operations.

Zero-sum games / Government incentives can be discretionary (“deal-closing funds”) or statutory. Oftentimes they are targeted by policymakers to assist only a small

GREG KAZA is executive director of the Arkansas Policy Foundation.

number of firms in the vast universe of enterprise—sometimes even a single firm (think of the goodies different states and localities recently offered for Amazon’s HQ2). These incentives have grown in size, with at least 17 single-firm state packages eclipsing the $100 million mark in recent decades. Examples include South Carolina’s $130 million-plus offer to attract a BMW plant (1992) and Georgia’s $258 million to land a Kia plant (2006). Less well known is the aggressiveness of city and county programs. One example Jensen and Malesky share is Lenoir, NC offering a quarter-billion dollars in incentives (mainly tax breaks) over 30 years to woo a Google server farm. That equals roughly $1 million for each center employee. The economic inefficiency of these programs is a recurring theme in the book. The authors tell the story of an “incen-




tive war” between Kansas and Missouri that resulted in employers of 3,200 workers moving from the east side of the Kansas City metroplex to the west, and employers of 2,800 workers moving from the west to the east, which the authors term “the very essence of a zero-sum game.” There is no clear evidence that incentives create net public benefit, however, as numerous academic studies have shown them to be ineffective and redundant.

Incentives to Pander: How Politicians Use Corporate Welfare for Political Gain / By Nathan M. Jensen and Edmund J. Malesky / 258 pp.; Cambridge University Press, 2018

Why use such a flawed policy? And why do politicians herald these incentives rather than hide them? Jensen and Malesky present “puzzles,” building on Gordon Tullock’s insight that voter ignorance about incentives is rational given the absence of knowledge of their true costs. Incentives are the perfect “pandering tool” for politicians engaged in “credit-claiming and blame-avoiding roles” with voters. The “consistent use of incentives in press releases by governors’ offices around the country and in campaign materials,” they write, “suggests that politicians see the use of incentives as an asset, not a liability.” Given that, we must conclude that the main purpose of these incentives is for political gain. Apparently, politicians use incentives to signal alignment with voter interests, assuming that voters have imperfect knowledge of incentives’ importance. The counterfactual is unobserved: voters don’t know that most incentives are given to firms that are already planning investments. They may lower firms’ costs but they seldom create jobs.

Pandering / Jensen and Malesky develop a “theory of pandering” to explain this bad public policy. They begin by rejecting the argument, commonly found in the popular press, that incentives are driven by corruption or as legal means to obtain campaign contributions. They dismiss the corruption charge because “politicians do not hide their allocations of incentives to firms. [This is] far from what we would expect from under-the-table exchanges of campaign contributions for financial support.”

Indeed, elections provide politicians with an incentive to publicize incentives. Voters prefer incumbents who take credit for creating jobs over those opposed to incentives. Voters also prefer incumbents who try to attract jobs with incentives even if they fail, instead of critics who vow to eschew the practice. Politicians exploit their “information advantage” by providing too many and too generous incentives. This “information asymmetry between voters, politicians, and firms” can lead incumbents to use incentives to take credit or reduce the blame for economic outcomes. Politicians will use incentives, regardless of investors, if voters believe they are effective.

Consider Donald Trump’s highly publicized move in late 2016 to retain manufacturing jobs in two counties in Indiana. The authors write, “From the start, there was some fuzziness in the numbers” of jobs ostensibly created or saved by Trump and Indiana officials’ efforts. In fact, Bureau of Labor Statistics data show total employment in one county declined after the announcement, while the other county experienced an employment increase of only 0.3% versus the national average of 2.5%. Capital movement influenced by globalization, they write, “can provide politicians with opportunities to pander to the public and take credit for new investment. ... Rather than making domestic politics irrelevant, globalization can lead to increased political activity.” Incentives give politicians reason to take credit.

This process is also visible at the local level, where cities with mayor–council systems offer more generous incentives but are less likely to mandate performance requirements and benefit–cost analyses. The authors find mayors “are more prone to use incentives for electoral gain.” Incumbents facing electoral pressure are more likely to use incentives than city managers shielded from the ballot.

Regressive incentives / How do local politicians pay for these programs? Oftentimes, regressive sales and excise taxes shift burdens onto the poorest taxpayers. Call it “economic development by sales tax,” bad public policy that drives economic inequality. Incentives create a reverse Robin Hood effect, as wealth is transferred from the poor and middle class to wealthy residents. For instance, Ferguson, MO politicians filled their budget hole from funding new incentives by increasing the revenue from fines and penalties, fueling racial acrimony through increased policing. Critics of incentives should explore this common ground with citizens who are troubled by inequality and injustice.

Incentives’ use is not restricted to western-style governments. Authoritarian regimes, especially those linked with meritocratic performance at the local level, are associated with higher levels of use of these programs than their democratic counterparts. Incentives are more likely to be provided to foreign investors if the regime has strong protections for meritocratic promotion for sub-national leaders. Central government elites want gross domestic product, government revenue, and employment growth, and are agnostic about how those gains are achieved. The



authors term this phenomenon “upward pandering” and note it exists in single-party states with quasi-meritocratic institutions until the point when officials are no longer eligible for promotion. Aging Vietnamese officials, Jensen and Malesky observe, abandoned the use of tax incentives once they became ineligible for promotion. Interestingly, “personalist regimes” such as Russia under President Vladimir Putin offer far less in incentives because loyalty trumps performance. Jensen and Malesky pose a series of questions about incentives that should be answered by every politician contemplating their use. Are they worth the cost? Are they effective at attracting or retaining investments? Can governments target firms and pick winners? Do they generate jobs and are those jobs worth the cost? Are the incentives the only option for generating economic development? Most incentive-happy politicians answer in the affirmative. One recent exception, to some extent, was Michigan governor Rick Snyder. As a businessman turned candidate in 2010, he criticized tax incentives during his election campaign, though he did simply relabel some of the incentives “grants” when he continued them once he took office. But he also signed a 2012 executive order dissolving the Michigan Economic Growth Authority (MEGA), established in 1995 over the objections of the Mackinac Center, a market-based think tank, and state legislators. (Disclosure: I was among those critics.) MEGA’s demise is significant for two reasons. First, BLS records show total Michigan employment declined (4,450,800 to 3,893,700) under Snyder’s predecessor, Jennifer Granholm, an incentives proponent. Since the U.S. economy began expanding in June 2009, a period that largely coincides with Snyder’s tenure, Michigan has been one of the 17 states with a job-creation rate above the U.S. average. Invert those circumstances by postulating a state that does not use incentives and records negative jobs growth, or a state that ends a program like MEGA and trails the nation in jobs creation, and what would political commentators say? Incen-

tive critics should always present their case to the public using the easiest-understood argument: lack of jobs means incentive programs should end. Researchers, good-government activists, and policymakers can use other strategies to challenge these programs. State-level researchers focus on local programs. One example: Jacob Bundrick of the Arkansas Center for Research in Economics found “no evidence that the [state’s] Quick Action Closing Fund (a discretionary program)

provides meaningful increases in employment or net business establishments at the county level.” Clawbacks allow tax dollars to be recouped when programs do not meet political promises. Written performance criteria provide greater transparency. GASB Statement No. 77 (2015) requires disclosure of the true cost of tax abatements. Politicians’ embrace of incentives means researchers should embrace this book for its insights about political pandering.

Let’s Hear It for the Standard Narrative ✒ REVIEW BY VERN MCKINLEY

The year 2018 marked the passing of a decade since the lowest point of the financial crisis. Observers of the financial industry rehashed the various narratives of the crisis as part of a burst of anniversary commemorations. The dominant narrative about the policy response to the crisis (although not necessarily the most accurate or fact-based one) is what might be called the standard narrative: the interventions of the financial authorities during 2008 and 2009 likely saved us from another Great Depression. There are variants of this standard narrative, but most of its adherents believe that either the response (consisting of bailouts and massive financial support) was measured and effective, or else the authorities should have been even more aggressive in their interventions. This book, Fighting Financial Crises by economists Gary Gorton and Ellis Tallman, supports the latter version of the standard narrative. Interestingly, the book liberally cites fellow standard narrative advocates Ben Bernanke and Timothy Geithner, who were the chief architects of the U.S. response. Bernanke even provided a blurb for the book’s jacket cover. Gorton is a professor at the Yale School of Management and is widely known for

advancing the argument that the financial crisis was a “run on the repo market.” He was also an adviser during the crisis to American International Group (AIG), which was one of the largest government bailout recipients during the crisis. Tallman is the director of research at the Federal Reserve Bank of Cleveland and is known for his work on the history of banking panics and liquidity lending during financial crises. The premise of Fighting Financial Crises is that, consistent with the authors’ prior research, there is a “plug and chug” formula for responding to financial crises. Whether we look at the panics of the Gilded Age or the recent crisis, this formula requires that the financial authorities need only do some basic research to determine the appropriate policy response to the crisis. Specifically, they need to:

VERN MCKINLEY is a visiting scholar at the George Washington University Law School and coauthor, with James Freeman, of Borrowed Time: Two Centuries of Booms, Busts and Bailouts at Citi (HarperCollins, 2018).

■ Find the short-term debt causing the instability (run).
■ Suppress individual institution financial information.
■ Open emergency lending facilities.
■ Prevent systemic (too-big-to-fail) institutions from failing by bailing them out.
■ Circumvent any laws and regulations that stand in the way of this response.

Panics and bailouts of the 19th century /

The authors open the book with a deep dive into the National Bank panics of the Gilded Age. They divide these into more severe panics (1873, 1893, and 1907) and less severe panics (1884, 1890, 1896, 1914) in order to judge the prudence of interventions in each panic. Disappointingly, the authors do not explain clearly what distinguishes the more severe from the less severe panics and the comparative data Gorton and Tallman provide do not clearly support such a classification. The authors then shift to a detailed discussion of the New York Clearing House Association (NYCHA), its history, and what tools it used to fight panics. The NYCHA was a privately organized association, modeled after a counterpart in London, through which the New York banks would “settle their accounts with each other and make or receive payment of balances and to ‘clear’ the transactions of the day for which the settlement is made.” It also conducted periodic bank examinations of its members to assess the risk the individual banks posed to the clearinghouse members: “Each member has a direct interest in every other, for it does not wish to run the risk of loss in giving credit to checks of an insolvent institution.” Special examinations, which were more targeted, were triggered by rumors of weakness. If a bank did not follow the recommendations incorporated into an examination report, it could be suspended or expelled from NYCHA membership. As Gorton and Tallman describe it, the function of the NYCHA was akin to a “regulatory and central-bank-like role.” In the early stages of a panic, the NYCHA’s supportive response involved three actions: ■

■ issuance of clearinghouse loan certificates, which were short-term, collateralized loans “effectively guaranteed by the clearinghouse membership jointly”;
■ bailouts of too-big-to-fail institutions; and
■ suppression of financial information of individual institutions.


In their chapter “Too Big to Fail Before the Fed,” which is also the name of a National Bureau of Economic Research paper they released in March 2016, Gorton and Tallman make the case that the megabank bailouts of the past 35 years had their origins in similar bailouts through the NYCHA during the 19th century. The chapter starts off with a direct attack on the “moral hazard” argument against bailouts:

Banks have allegedly engaged in taking risks greater than they otherwise would because of a belief that they would be bailed out by the government, possibly causing or contributing to the financial crisis of 2007–8, because large banks believe they are too big to fail. … In the modern era it has been hard to find evidence that large banks are the beneficiaries of implicit too-big-to-fail government policies and become riskier as a result.

The chapter then walks through case studies of how and why it made economic sense for the member banks of the NYCHA to support too-big-to-fail institutions during the panics of the 1800s:

Because a private-market coalition of banking institutions took these actions, it strongly suggests that a too-big-to-fail practice or policy per se (and the associated “moral hazard” problem of exacerbating bank risk taking) is not the problem causing crises…. In the pre-Fed era, bailing out large, interconnected banks was a reasonable response to the vulnerability of short-term debt to runs that could unnecessarily threaten large banks and thereby the entire banking system.

Fighting Financial Crises: Learning from the Past / By Gary B. Gorton and Ellis W. Tallman / 256 pp.; University of Chicago Press, 2018

MNB / To support their case, the authors set out statistics for the 12 outright bank failures and five “bank assistance transactions” in New York City from 1864 to the creation of the Fed in 1913. Gorton and Tallman choose a case study of a bailout by the NYCHA of Metropolitan National Bank (MNB) in 1884. MNB was double the size of the average clearinghouse bank and was quite interconnected based on the data the authors reference: “Had the clearinghouse not acted with admirable promptness in coming to its assistance, there is little question that out-of-town banks would have become alarmed for their deposits, not only in this bank but for those in the banks generally.” A bailout of $6 million was extended by the NYCHA to MNB: “Private-market participants were therefore acutely aware that their actions were effectively a bailout of the stricken bank. The benefit was the prevention of banking panic on a wider scale.” MNB ultimately failed outright several months later, after the panic subsided.

Gorton and Tallman state that the bailout of MNB does “align closely to our view of proper responses to fight financial crises.” There were questions of the bank’s solvency, as there often are during crises. The bank had a high degree of interconnectedness, there was a risk of losses to clearinghouse members as a result of the bailout, and the implication is that MNB was “systemically important.” The authors go on to make an analogy to the bailouts of the 2000s crisis, claiming MNB was “a model for an orderly resolution.” The authors contrast MNB with the NYCHA’s decision during the Panic of 1907 to allow



Knickerbocker Trust to fail, which they argue “led to the most severe period of the financial crisis of 1907, and likely the ramifications of that failure contributed largely to that distress.” Conclusion / Gorton and Tallman’s histori-

cal details of the panics of the 1800s for the New York banks at the epicenter of the financial system are unequaled, based on my research. The authors have taken stories from the contemporary New York press and employed available financial data from bank reports to develop a narrative and accompanying tables that bring to life the panics of that era. These details alone make the book worthy of a place on any financial historian’s bookshelf. Where their effort falls short is in the policy conclusions they draw from those details. I’m not convinced by their arguments justifying the public sector bailouts

that have become so familiar in the past century. There is an enormous difference between a voluntary organization of bankers like the NYCHA bailing out an institution based on their own financial interests and the case where government authorities use public funds to bail out politically connected institutions. As for Gorton and Tallman’s consideration of moral hazard, if you look at a toobig-to-fail bank with a long history, e.g., Citi, you find that during the Gilded Age it was a rock-solid bank that absorbed weaker institutions during panics. Since the creation of the Fed and the proliferation of government bailouts in the last century, Citi has now morphed into a perpetual ward of the state. This would have been a good moral hazard case study for Gorton and Tallman to consider. Unfortunately, it seems they already had their chosen narrative and stuck to it.

Landsburg’s New Puzzles ✒ REVIEW BY PHIL R. MURRAY

Steven Landsburg teaches at the University of Rochester, where he ponders the “big questions” and writes terrific, accessible books on economics. His latest is Can You Outsmart an Economist? and it continues this tradition. To Landsburg, “Economics is, first and foremost, a collection of intellectual tools for seeing beyond the obvious.” “If this book has a moral,” he professes, “it is this: think beyond the obvious.” The book offers a number of puzzles involving economics, probability and statistics, and more. They range from easy to difficult. Consider this easy one: Suppose the government imposes a price ceiling on wheat, so that instead of selling at the current price of, say $4 per bushel, nobody is allowed to charge more than $3 a bushel. What happens to the price of bread?

PHIL R. MURRAY is a professor of economics at Webber International University.

Don’t be fooled into thinking that wheat will be more abundant. Those who are

fooled will conclude that the price of bread will fall. A student of economics knows that price ceilings cause shortages. Wheat will become scarcer, the supply of bread will decrease, and the price of bread will rise. Puzzle solved. Now consider a more difficult puzzle: My wife and I each drive exactly the same number of miles every day and would continue to do so even if we upgraded our vehicles. Now, which would save more gas—replacing my wife’s 12-mile-pergallon SUV with a 15-mile-per-gallon SUV, or replacing my 30-mile-per-gallon car with a 40-mile-per-gallon car?

The trap here is to think that replacing the

car is a better idea because getting another 10 mpg beats getting another 3 mpg with the SUV. Or that replacing the car is better because 40 mpg is 33% more than 30, whereas 15 mpg with the new SUV is only 25% more than 12. To the contrary, Landsburg explains, “the SUV uses so much gas to begin with that a little added efficiency goes a long way.” Replacing the car would reduce the Landsburgs’ fuel consumption by 7%, but replacing the SUV would reduce it by 14%. Landsburg later adds that the assumption that he and his wife wouldn’t change their driving habits despite the vehicle change is an “arbitrary (and probably quite unrealistic) assumption.” It would be realistic to assume that after replacing the SUV with one that gets better gas mileage, his wife will drive more. As a result, they wouldn’t get the 14% reduction in fuel consumption. Better solutions to puzzles anticipate changes in behavior. Landsburg follows up the fuel efficiency puzzle with one about child safety: Infants traveling on airplanes are currently permitted to ride in their parents’ laps. Every five years, approximately two of those infants die from injuries sustained during turbulence. If a new rule required each infant to be strapped into a separate seat, how many infant lives could be saved?

Reckoning that the regulation would save two babies per five years fails to anticipate changes in behavior. Landsburg asks: Under the new rule, how many families would choose to drive rather than pay for an extra airline seat? How many of those families would be involved in car accidents? How many of those car accidents would result in infant fatalities?

He reports that economists who tackled those questions found that “for every two infant lives saved in the air, there would be about seven infant lives lost on the roads.” The effect of a regulation requiring infants to have their own airplane seats would therefore be a net loss of five babies’ lives.




Many solutions require a calculation. Take “The Gender Gap.” “Alice” observes that women earn 77% of what men earn. “Bob” is skeptical of this because it suggests employers are leaving money on the table; for example, if a manager is paying a man $10 an hour to generate $11 of revenue per hour, he could substitute a woman for the man, pay her $7.70 per hour, and increase his profit from $1 to $3.30 per hour. Landsburg supposes that half the labor force is women. Firms pay two-thirds of their revenue to workers. Of the other third, they pay half to bondholders and half to stockholders. “To a very rough approximation,” he continues, “the total value of the bond market and the total value of the stock market are equal.” Given those assumptions, Landsburg calculates that by substituting women for men in the workplace and paying them 77% as much, managers would increase profits by 42%. Without doubting that some employers may be oblivious to increasing profits this way, Landsburg argues that a 42% increase in profits is so “huge” that its “widespread” existence must be “implausible.” Confident that gender discrimination does not explain why women earn 77% of what men earn, Landsburg speculates as to what else might account for the gap.

This reviewer wants to question a few of the author’s “rough but reasonable” assumptions. If the labor force consists of more men than women, won’t it become increasingly difficult to substitute lower-paid women for higher-paid men? In order to measure the gain to stockholders, is the relevant comparison between “the total value of the bond market” and “the total value of the stock market,” or the market for all corporate liabilities including bank loans and the stock market? If the latter, and corporate balance sheets show more debt than equity, will substituting women for men be even more profitable? By questioning these assumptions, I am not arguing that they are unacceptable; I am merely trying to test my understanding and join the fun of solving the puzzle.

Can You Outsmart an Economist? 100+ Puzzles to Train Your Brain / By Steven Landsburg / 288 pp.; Mariner, 2018

Exploiting irrationality / Economists conventionally assume that individuals are rational. On one level, this means that if an individual prefers apple pie to blueberry pie, and blueberry pie to cherry pie, he’ll prefer apple to cherry. Landsburg introduces Sidney Morgenbesser, who ordered apple among those three flavors. When the waitress mentions that she actually has no blueberry pie on-hand, Morgenbesser changes his mind and picks cherry. That’s not only funny; it’s irrational.

Landsburg writes, “You’re irrational if your preferences allow me to bleed you dry.” He would offer Morgenbesser the apple, blueberry, and cherry pie, and Morgenbesser would pick apple. Landsburg would then say there really was no blueberry, prompting Morgenbesser to pick cherry. Landsburg agrees so long as Morgenbesser pays a nominal charge for the switch. Landsburg would then announce that blueberry is back on the menu. Now Morgenbesser selects apple and is willing to pay another nominal amount to get it. Landsburg would then continue to remove and replace the blueberry pie, pumping money out of Morgenbesser and proving that Morgenbesser is irrational.

Buyers of the book can take Landsburg’s quiz that evaluates how rational they are. Most questions, by themselves, do not reveal whether an individual is rational. Answers to pairs of questions, however, can reveal inconsistencies that suggest the quiz taker is irrational. One question asks how much the reader would give up to avoid playing Russian roulette with two bullets in a pistol with six chambers. The next question would note that the six-shooter now holds four bullets and ask how much the reader would pay to remove one of those bullets. Although I thought carefully about my answers, my score was mediocre. If I lose my job as an economics professor, maybe I’ll look for work as a “performance artist.”

Many puzzles involve probability and statistics. For instance, Landsburg enjoys demonstrating what is known as Simpson’s paradox, a phenomenon in probability and statistics in which a trend appears in several different groups of data but disappears or reverses when the groups are combined. In perhaps the best-known real-world example of this paradox, in the 1970s lawyers sued the University of California, Berkeley on grounds of sex discrimination in admissions. Their evidence was that graduate programs admitted 46% of male applicants versus 30% of female applicants. Landsburg presents a table showing the numbers of men and women that applied to each of six departments, as well as the numbers accepted. Half the departments admitted more women than men. Four of six departments accepted a greater percentage of women than men. How did a smaller share of all women get accepted overall? Landsburg explains, “Women were being disproportionately rejected because women were disproportionately applying to the most selective departments.” Using the numbers Landsburg provides, 72% of female applicants applied to the three departments with the lowest acceptance rates for women. Meanwhile, 34% of male applicants applied to the three departments with the lowest acceptance rates for men. “The moral,”



the author warns, “is to beware of aggregate statistics.” There are some puzzles I can solve without peeking at the solution. There are many I was unable to solve, though I could understand the solution once I read it. Some have solutions beyond my understanding.

Albert and the dinosaurs / One of the most challenging puzzles is “Albert and the Dinosaurs.” Albert is trying to drive home from work without being attacked by dinosaurs. Apparently, he does survive the attacks, but he’d prefer to avoid them, as much as possible. To avoid them, he must go straight at the first intersection and right at the second. What makes this a puzzle is that “Albert is extremely absent-minded” and does not recognize either intersection. The author anticipates in the introduction that “you might be tempted to ask: ‘But what has this got to do with economics?’” His response is that economics is “anything to do with thinking beyond the obvious.” Albert will need to do that in order to avoid the dinosaurs as much as he can. Landsburg explains that, given Albert’s absent-mindedness, his optimal course of action “is to flip a fair coin at each intersection, with the faces labeled ‘straight’ and ‘right.’” On any day, there are three possible outcomes.

■ The coin shows right at the first intersection with probability 0.5, and Albert is attacked by a dinosaur.
■ The coin shows straight at the first intersection and straight at the second intersection with probability 0.5 × 0.5 = 0.25, and Albert is attacked by a dinosaur.
■ The coin shows straight at the first intersection and right at the second with probability 0.5 × 0.5 = 0.25, and Albert makes it home unmolested.
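To make the “imagine what happens day after day” reasoning below concrete, here is a minimal simulation sketch (not from the book; the function name, seed, and day count are illustrative) that reproduces the two numbers in question: under the coin-flip strategy Albert gets home about a quarter of the time, and about two-thirds of his intersection arrivals are at the first intersection.

import random

def simulate_albert(days=100_000, seed=1):
    # Coin-flip strategy from the puzzle: choose "straight" or "right" at random at each intersection.
    rng = random.Random(seed)
    safe_days = 0
    first_street_arrivals = 0
    total_arrivals = 0
    for _ in range(days):
        total_arrivals += 1          # Albert always reaches the first intersection
        first_street_arrivals += 1
        if rng.random() < 0.5:
            continue                 # coin said "right" at First Street: dinosaur attack
        total_arrivals += 1          # coin said "straight": he reaches the second intersection
        if rng.random() < 0.5:
            safe_days += 1           # coin said "right" at Second Street: home safe
        # otherwise he went straight again: dinosaur attack
    return safe_days / days, first_street_arrivals / total_arrivals

p_home, share_first = simulate_albert()
print(round(p_home, 3), round(share_first, 3))   # roughly 0.25 and 0.667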

But this is only scratching the surface because Albert cannot recall at which intersection he is. The first question is: “What’s the probability that Albert is approaching First Street?” Imagine what

happens day after day. Each day Albert arrives at the first intersection. Half the days he makes it to the second intersection. So over the course of say, 100 days, he arrives at the first intersection 100 times and arrives at the second approximately 50 times. Thus, on these 150 times when he’s arriving at an intersection, two-thirds of the times (100 ÷ 150) it’s the first intersection. Does this change the probability that Albert gets home? Landsburg shows how it might and adds that “it depends on exactly how we interpret the word probability.” The discussion becomes very complicated thereafter. Readers who demand more relevance of Albert and the Dinosaurs to economics will probably not be satisfied. Landsburg leaves whatever applications there are to

the imagination. The puzzle serves to show the extensive amount of thinking one can do beyond the obvious.

Conclusion / The author delivers on his intention to show the reader a good time. Landsburg’s enthusiasm for solving puzzles is contagious. He introduces puzzles he has known since his childhood as well as some that perplex full-time thinkers. One should expect a few humbling experiences. The author also delivers on his intention to squeeze in some intellectual edification. There are “morals” galore. Unfortunately, one puzzle the author doesn’t grapple with is why citizens who wouldn’t challenge, say, a scientist, nevertheless expound uninformed on economic affairs.

An Open and Enlightened Libertarianism ✒ REVIEW BY PIERRE LEMIEUX

Is it worth saving a person’s life today at the cost of 39 billion deaths (or perhaps non-births) some five centuries later? What about killing a baby if it saves $5 billion of GDP, equivalent to a new $200,000 house for 25,000 poor families? Those are some of the questions Tyler Cowen considers in Stubborn Attachments, a book of political philosophy informed by economics. Cowen is a creative thinker who teaches economics at George Mason University. The scope of his new book is indicated by its subtitle: “A Vision for a Society of Free, Prosperous, and Responsible Individuals.” As for the title (not to mention the overall thesis), it is a subtle extraction from a sentence on the first page: “We need to develop a tougher, more dedicated, and indeed a more stubborn attachment to prosperity and freedom.”

PIERRE LEMIEUX is an economist affiliated with the Department of Management Sciences of the Université du Québec en Outaouais. His latest book is What’s Wrong with Protectionism: Answering Common Objections to Free Trade (Rowman and Littlefield, 2018).

Distant future / So what of that tradeoff of

one life for 39 billion? Assume, as benefit– cost analysis does, that future lives must be discounted just like other benefits (e.g., money) are. Assume a discount rate of 5%. How many lives in 500 years are equivalent to one life today? Multiply 1 (one life) by 1.05 (1 plus the discount rate) raised to the 500th power (500 years). The result is 39,323,261,827 (lives). The magic of compound interest is always amazing. Cowen notes that, under this line of thinking, one life today “could even be worth the entire subsequent survival of the human race, if we use a long enough time horizon for the comparison.” It is difficult not to agree that this result hurts “common-sense morality,” which would seem to counsel that one life does not outweigh 39 billion or more.
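In symbols, the reviewer’s arithmetic (using his assumed 5% discount rate and 500-year horizon) is:

\[
1 \times (1.05)^{500} \;\approx\; 3.93 \times 10^{10} \;\approx\; 39{,}323{,}261{,}827 \ \text{lives},
\]

so at a 5% annual discount rate, one life today carries the same weight as roughly 39 billion lives 500 years from now.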



It may thus be argued that, as far as human lives are concerned, the discount rate for the far future should be zero or at least much lower than what we usually assume. This means that if we have to choose between different paths of economic growth—what individuals will be able to consume in goods or leisure as time passes—the path that is consistently higher should always be chosen. It is moral to choose to have more today only if this choice also implies that individuals in the future will obtain more than they would have received otherwise. We must have a “deep concern for the distant future.” The practical implications are massive. One implication is that environmental problems, such as climate change, gain a heightened importance if they will retard economic growth long-term. More generally, we should be concerned with the longterm future of our civilization.

Need for economic growth / Like interest, economic growth is compounded—growth applies to the result of previous growth—and produces the magical inverse of discounting. Indeed, the same math underlies both processes. At a growth rate of 1%, income doubles every 69 years. At a growth rate of 10%, which we saw in China during its liberalizing spree, income doubles every seven years. What’s great about the growth of income—or gross domestic product, which is the same thing—is that “wealthier societies offer greater opportunities and freedoms to pursue one’s preferred concepts of happiness.” Life expectancy, diet quality, and leisure time grow. Since 1870, in developed countries a typical employee’s working time outside the home has decreased by nearly half. No wonder that, as recent research confirms, economic growth makes people happier. Money may not buy happiness, but it certainly makes life easier. Cowen would probably add that, in the long term—if, for example, incomes have been multiplied by 39 billion after five centuries—it does buy happiness ceteris paribus.

Sustainability / It is true that standard income (GDP) figures don’t provide a complete picture of how production contributes to happiness. To GDP, Cowen prefers a theoretical concept that he calls “Wealth Plus,” a measure of well-being that incorporates leisure time, domestic production (goods and services produced in the home), and “sustainability,” along with standard economic production. (Instead of “Wealth Plus,” by the way, he should have written “Income Plus” because wealth is a stock while income and GDP are flows.) Cowen rescues the idea of “sustainability,” which has become a mantra in environmental discourse. His notion of sustainability includes the environmentalists’ “environmental amenities” and “the prerequisites for a durable civilization.” Over and above individual preferences, which can be “irrational or misguided,” he welcomes the “plural values” that may be required for the good society. “Sustainability” is largely left undefined and raises many problems that Cowen brushes aside a bit too easily. Keep in mind, however, that Stubborn Attachments is a short book obviously written for a general public of intelligent laymen.

Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals / By Tyler Cowen / 160 pp.; Stripe Press, 2018

Problem of aggregation / Sustainable economic growth, Cowen argues, helps resolve problems stemming from clashing preferences among different individuals, especially in the long run where growth produces “an overwhelming preponderance of benefits.” “The wealthier society will, over time, make just about everyone much better off.” All problems are flattened by the benefits of long-term growth. Just let “happiness talk”! Turning Keynes on his head, Cowen basically says that in the long run we are all good.

Do the long-run benefits of economic growth sidestep the aggregation problem, as Cowen claims? This problem, best represented by Kenneth Arrow’s Impossibility Theorem, is the mathematically demonstrated proposition that, under realistic conditions, it is impossible to derive from the preferences of all individuals a consistent and non-imposed “social welfare function” telling us what “society” wants. This amounts to saying that “society” cannot want anything that would be consistent and equally representative of all individuals’ preferences. Cowen suggests that all individuals will agree on what Wealth Plus means. This agreement is precisely what appears to be impossible.

For example, how do “we” choose a path of economic growth or reach any other social choice? Who is this “we”? For the reader conscious of this problem, the constant use of “we” in Stubborn Attachments becomes annoying.

A related problem that pops up is the scientific impossibility of comparing the utility or happiness of different individuals. We can only indirectly measure ordinal utility, that is, the degree of happiness of a given individual. We cannot measure cardinal utility and add it over many individuals, even indirectly. Cowen recognizes this problem but dismisses it. It does seem to



make sense to say that most individuals are better off in a wealthy society than in a poor society, but it may be because the “we” has been more effectively silenced—the state has put its nose in fewer activities—in the former than in the latter. The reader—or at least this one—would have liked to hear more from Cowen on that. As we’ll see shortly, epistemic humility is in order.

Principle of economic growth / If we suspend difficult questions about aggregation and interpersonal utility comparisons, and focus instead on social coordination and common-sense morality, we get Cowen’s “Principle of Growth”: “maximize the rate of (sustainable) economic growth” and, when in doubt, choose growth. This seems to make sense. How to reconcile this ode to economic growth with the “great stagnation” to which Cowen’s name is now associated? (See his book The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick and Will (Eventually) Feel Better, 2011.) He anticipated that question and answers that “progress is unevenly bunched,” implying that the great stagnation is only temporary.

Consequences / Economists are natural consequentialists: they are interested in the social consequences of individual actions and public policies. But we face what philosophers call the “epistemic problem.” As Cowen phrases it, “We hardly know anything about long-run consequences.” How can we seriously evaluate individual actions and public policies? It is a troubling problem. If Hitler’s parents had conceived him in a slightly different position in bed or at a different time, his genetic make-up would have been different. He would not have been Hitler. He could have become the grandfather of a second Mother Teresa or, better for economic growth, the father of another Jeff Bezos. Many things would be different today and in 500 years’ time. Changed genetic identities change the genetic identities that follow. This problem is especially acute if the long-term future is discounted

at a lower rate because good and bad consequences loom larger in our eyes. Cowen argues that the epistemic problem should not paralyze us. It should instead bring us to focus on big actions more likely to push in the right direction. “We should not discriminate on the basis of relatively small benefits and losses,” he writes, because “anything we try is floating in a sea of long-term radical uncertainty.” We should “pursue values that are high in absolute importance” and are consistent with doing the right thing given broad rules of moral action. This may not be a totally satisfactory answer, but Cowen is after some common-sense morality.

Individual rights / Not everything must be sacrificed to economic growth, as “sustainable” as it might be. Cowen argues that the principle of economic growth must be constrained by “nearly absolute” or “semi-absolute” “human rights.” (Instead of “individual rights,” he uses the more faddish expression “human rights.” The latter term can be seen as the degenerated and politically correct version of the 18th-century “rights of man” or Adam Smith’s “natural liberty.” The degenerated version is quite certainly not what Cowen means by “human rights,” so I don’t contradict him by using “individual rights” instead.)

The rights Cowen has in mind follow Robert Nozick’s model in that they define strict constraints on what individuals (alone or in gangs) may do to others. Cowen is even less explicit than Nozick, and not necessarily as radical-libertarian, about what these rights are or should be. The “nearly” or “semi” qualification to the absolutist character of rights is intended to cover minor practical exceptions where exercising a right would generate very large costs. But it does not affect the Principle of Growth, which applies to a phenomenon for which, nearly by definition, the benefits are massively larger than the costs. It follows that “the dual ideals of prosperity and liberty will be central to ethics” (Cowen’s emphasis). The motto is “Growth and Human Rights.” In a kindred political regime, one can do what one wants, provided only that it is compatible with what others want.

Cowen sees the case for (nearly) inviolable human rights as bolstered by the “froth of massive uncertainty” that covers long-term consequences. “Rights rarely conflict with consequences in the simple ways set out by philosophical thought experiments,” he writes. “We can think of radical uncertainty as giving us the freedom to act morally, without the fear that we are engaging in consequentialist destruction.”

The case of the baby’s life versus $5 billion illustrates these points. Such an alternative is meaningless because there is no way to know what would be the long-term consequences of killing the baby or, for that matter, of losing $5 billion of GDP. You might be killing baby Hitler, but then you might be killing baby Mozart—there’s no way to know. On the other hand, it is a bad rule to kill babies if one wants to preserve civilization and its institutions, which are the conditions for future economic growth. If killing babies doesn’t violate individual rights, nothing will. So, even in a consequentialist perspective, don’t kill the baby.

Stubborn attachment needed / What is “the appropriate scope of redistribution,” to borrow the title of one chapter? “Our strongest obligations,” Cowen writes, “are to contribute to sustainable economic growth and to support the general spread of civilization.” Some redistribution is warranted only to the extent that it contributes to these general objectives. The book contains an interesting discussion



on why anybody living in a rich country is not morally compelled to give all his income to much poorer people in poor countries. One reason, of course, is that self-sacrifice by everyone would be selfdefeating because there would be nothing to share; productive people in developed nations would soon lose their motivation to produce. Cowen continues to sail close to common-sense morality—or at least to what people in the classical liberal tradition consider such. The book does not clearly answer the question of whether or when redistribution by the public sector is preferable to private charity. But the author obviously thinks that private charity (and perhaps some public redistribution) is good if it contributes to long-term growth. A short postscript explains how Cowen feels a stubborn attachment (the second and only other time the expression appears in the book)

to a poor entrepreneur he met in Ethiopia, to whom he is donating the book’s profits. One of the many originalities of Stubborn Attachments is how it invokes Ayn Rand, with some caveats. Rand almost certainly would not have given money to an Ethiopian quidam. But, as Cowen notes, she “is the one writer who best understood the importance of production to moral theory.” She also emphasized the “the creative individual mind” and the importance of ideas, which are “the wellspring of economic growth.” The author of Stubborn Attachments concludes that we should think big and entertain a utopian vision for the long-term future. Sustainable economic growth constrained by “semi-absolute human rights” should be our “working standard.” These ideas provide an imperfect ethics, but it hugs common-sense morality. In many ways, Cowen shows a path to an open and enlightened libertarianism.

Roads for the Future ✒ REVIEW BY GEORGE LEEF

Bob Poole is well-known for two things. First, he was one of the founders of Reason magazine in 1970, giving the nation a consistently libertarian investigative magazine. Second, he has devoted most of his career to the analysis of America’s transportation problems, especially our highways. This book brings together decades of his research with the objective of showing how we could enjoy a far more efficient highway system if we would shift away from the heavily politicized approach to roadway funding that has predominated for more than a century, in favor of a utility model. In short, Poole argues that we should build and maintain our roads the same way we build and maintain our water and electric utilities: customers pay companies for their use.

GEORGE LEEF is director of research for the James G. Martin Center for Academic Renewal.

Stuck in the past / We are rapidly approaching a turning point regarding highway policy, Poole argues. The reason is that

our old funding model for highways is breaking down just as many of our ill-maintained roads and bridges are. Ever since the Great Depression, we have relied primarily on the federal gas tax to provide the money needed for roadways. At that time there was no convenient way to meter the amount of driving Americans did, so the best way to fund road construction was to tax gasoline and diesel purchases. Over the years, Congress often raised the amount of that tax, but it has not done so since 1993. The taxes on gasoline and diesel have been 18.4¢ and 24.4¢, respectively, per gallon for a quarter-century. Moreover, fuel economy has improved steadily and a small but growing


percentage of vehicles use relatively little or none of those fuels. As a result, the federal gas tax is less and less able to pay for our current highway system, much less any major upgrades. And our highway system certainly could use some upgrading. Poole cites research showing that Americans waste at least $160 billion per year because of highway congestion. Rational investments could greatly reduce that cost while modernizing many bridges that, though not “crumbling” (contrary to the claims of politicians eager for public money for their districts), are on their way to obsolescence. (Among the virtues of Poole’s writing is that he takes down exaggerated claims on both sides. He likes cool, sober analysis.) Mired in politics / The main reason why we

are behind some other countries in the modernization of highways is that our system is so entwined in politics. Resources are frittered away on low-priority projects, some of which don’t have the slightest connection to roads. Special interest groups involved in transportation are good at using their political clout to block changes to the roadway funding system that would upset what’s for them a comfortable status quo. The federal Highway Trust Fund, Poole writes, has been gradually converted into “an all-purpose public transportation works program.” Money the public believes is going to highways is increasingly spent on other things like “urban transit, bike paths, sidewalks, recreational trails, historical preservation, and even transportation museums.” Naturally, voters are opposed to increasing the gas tax, in part because much of the money will get siphoned away into the kinds of projects that politicians love to brag about when they want to show their constituents that they’re “bringing home the bacon.” Legislative maneuvering also undermines state highway funding. Prospective efficiency improvements are often delayed or completely sidetracked because each representative wants some chunk of the spending for his district.



Democracy has saddled us with a very suboptimal highway system. What America needs to do, Poole argues, is escape from “the mistaken belief that highways are the kind of thing that only government can provide.” History offers alternatives. In our early years, many toll roads were built privately and, as you would expect, cost less to construct than government roads. The federally built National Road that was begun in 1811 to open access to the Northwest Territories cost $13,000 per mile, whereas the contemporaneous and private Lancaster Turnpike in Pennsylvania cost only $7,500 per mile. Unfortunately, private roads suffered from travelers avoiding paying the tolls. Most toll road companies went bankrupt. Government stepped in and the idea that roads must be provided by government took hold. However, modern technology has found ways to prevent this public goods problem—if only the United States would give private roads greater support.

Further harming roadway mobility in the United States is the popularization of the notion that America “can’t build its way out of highway congestion” and therefore ought to “get people out of their cars” and into governmental mass transit. On the contrary, Poole argues, we can build our way out of congestion if we allow the market to work, albeit usually in conjunction with government. Privately financed and operated highways can and do work, as he shows with numerous examples from both the United States and abroad.

Rethinking America’s Highways: A 21st-Century Vision for Better Infrastructure / By Robert W. Poole Jr. / 363 pp.; University of Chicago Press, 2018

The new toll roads / Highway privatization has been embraced in Europe, Asia, and South America for decades. Italy approved the first modern investor-owned toll road in 1921 and a network of privately owned highways now operates there. Beginning in 1955, the French have built numerous tolled highways, usually financed with a combination of private and government funding. It is now common for the government to seek competitive bids from companies for new construction, such as the astonishingly beautiful Millau Viaduct that Poole pictures on the book’s cover. Spain and Portugal have followed France in turning to private firms operating under toll concessions for improvements in their highway systems. So have Australia, Japan, South Korea, Brazil, and Chile.

Privately financed and operated toll roads are not, of course, unknown in the United States. Poole recounts in great detail the first such project here, the Dulles Greenway toll road extension. It was the brainchild of Reagan administration transportation official Ralph Stanley, who lobbied for the necessary legislation in Virginia and then oversaw the project, which opened six months ahead of schedule in 1995. At the same time on the other side of the country, private toll roads were coming to the rescue of congestion-desperate drivers in California. The state legislature had approved a bill allowing up to four privately funded toll roads in the state. The SR-91 “express lane” toll road in Southern California was the first to open and was an immediate success.

Of course, some private highway projects have been losers, just as you would expect in any business. For example, the “Southern Connector” in Greenville, SC opened in 2001 and declared bankruptcy in 2010. The losses, however, fell upon the road’s investors, not taxpayers. The creditors restructured their bonds and the roadway continues to operate today. The Dulles Greenway also experienced defaults in its early years and underwent a massive

refinancing in 1999, but since has come good. (See “A New Approach to Private Roads,” Fall 2002.) Bringing in the innovative thinking and know-how of the private sector has proven extremely beneficial in some states, particularly Florida. Poole explains how partnering with one of the French highway firms enabled Florida’s Department of Transportation to save a great deal of money in its Port of Miami Tunnel, built to alleviate congestion and wear-and-tear on surface streets from heavy trucks. The bulk of the book is about highways, but Poole devotes a chapter to possible improvements in urban expressways and arterial roads. He envisions transponder technology, which enables road companies to bill drivers based on the amount and times they use roads as the key to revitalization and improved efficiency. I find persuasive and appealing Poole’s vision of depoliticizing roads and highways, turning them into network utilities where customers pay regular bills based on usage. But there are some powerful opposition groups who want to prevent that from happening. First, there are conservative/populist enemies who fight any suggestion of privatization because “we already pay for roads with taxes” and anything more is “double taxation.” A relative handful of pundits, bloggers, and radio talk show hosts can move masses of people to register their opposition to toll roads. The cogency of the case for escaping from road socialism into a free market doesn’t seem to have any effect on those people. Antitoll forces of this sort have been especially effective in Texas. Second, there are left-wing enemies who influence policy with claims that the roads rightfully belong to the people and private firms shouldn’t profit from them. These critics argue against “selling our infrastructure” and quite a few Americans are persuaded by them. Finally, there are interest groups that are wedded to the status quo. Governmental toll entities, for instance, do not want Prognosis /



any competition that would threaten their comfortable jobs. In 2007, when Pennsylvania Gov. Ed Rendell advanced a plan to privatize the Pennsylvania Turnpike, the existing toll authority fought and eventually defeated it. An even bigger obstacle is the panoply of environmental groups that dislike cars and oppose changes that would make driving more efficient.

In short, the road to the kind of market-based highway utility Poole has in mind is cracked and strewn with potholes. Nevertheless, he is optimistic. The good record of toll highways here and abroad should ultimately persuade people, but even more significant will be the federal government's increasingly dire fiscal situation. As entitlements eat up more and more federal revenue, turning to the private sector to build and maintain our highways will become very hard to resist.

Poole concludes by stating that the Interstate Highway System is wearing out and will have to be replaced at a cost of around $1 trillion. At present, we do not have a funding source for this. The good news, he writes, is that "large-scale investment capital is waiting for the opportunity to invest in replacing and modernizing U.S. highway infrastructure. It's time to begin the transition to this new and better model for 21st century highways."
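Poole's "network utility" vision described above is, mechanically, metered billing: charge drivers for how much and when they drive, the way an electric utility charges for kilowatt-hours by time of day. The toy sketch below is my own illustration of that idea, not anything from the book; the rates, peak hours, and trips are all invented.

```python
# Toy illustration of transponder-based, time-of-day road pricing.
# All rates and trip records are hypothetical; nothing here comes from Poole's book.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hour: int  # hour of day (0-23) when the trip occurred

BASE_RATE_PER_MILE = 0.05   # assumed off-peak charge per mile
PEAK_MULTIPLIER = 3.0       # assumed congestion surcharge during rush hours
PEAK_HOURS = set(range(7, 10)) | set(range(16, 19))

def monthly_bill(trips: list[Trip]) -> float:
    """Sum per-mile charges recorded by a transponder, with peak-hour surcharges."""
    total = 0.0
    for trip in trips:
        rate = BASE_RATE_PER_MILE * (PEAK_MULTIPLIER if trip.hour in PEAK_HOURS else 1.0)
        total += trip.miles * rate
    return round(total, 2)

# 12 miles at 8 a.m., 12 miles at 8 p.m., 30 miles at 5 p.m.
print(monthly_bill([Trip(12, 8), Trip(12, 20), Trip(30, 17)]))  # 1.80 + 0.60 + 4.50 = 6.90
```

Pricing by time of day is what would turn a congestible commons into something closer to the regular, usage-based utility bill Poole has in mind.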

A Damning Portrait of the New York Fed ✒ REVIEW BY VERN MCKINLEY

VERN MCKINLEY is a visiting scholar at the George Washington University Law School and coauthor, with James Freeman, of Borrowed Time: Two Centuries of Booms, Busts and Bailouts at Citi (HarperCollins, 2018).

Noncompliant: A Lone Whistleblower Exposes the Giants of Wall Street
By Carmen Segarra
352 pp.; Nation Books, 2018

Carmen Segarra's story broke in a big way in 2013 and 2014. It was a tale of big banks in New York, the supervisors of those banks, and tapes of sensitive conversations she secretly recorded before being fired as an employee of the Fed's most prominent regional bank. Her story revealed that, five years after the financial crisis, there were still systemic problems, not just with the big financial institutions that received all manner of bailouts, but also with the Federal Reserve Bank of New York, which was at the epicenter of distributing those bailouts. Contemporaneously with the release of her story, renowned business writer Michael Lewis wrote an article claiming her revelations were a clear indication of "how dysfunctional our financial regulatory system is."

It has taken nearly half a decade, but now Segarra, the whistleblower who first told her story to journalists at ProPublica, has turned that story into a book. She is an attorney who worked in regulatory compliance positions in banks such as MBNA, Citigroup, and Société Générale before taking a job at the New York Fed. Her work as a regulator was not that of a typical bank examiner who visits the banks to review loan files for asset quality and crunch numbers on the bank's capital, liquidity, and earnings. Rather, she specialized in areas on the "compliance" side of regulation to check matters such as how banks monitor their conflicts of interest. This position was a good match for her legal background and is why Noncompliant makes for a logical title for the book.

Segarra does a good job of summarizing the book in one of her final chapters:

A lawyer goes to work for the New York Fed. She is assigned to supervise a bank, verifying whether said bank is complying with the law. In the process the lawyer discovers that numerous laws, rules and regulations are being violated and
disregarded. And not just by the bank the lawyer supervises—but also by some of her fellow New York Fed regulators.

/ Segarra’s first few weeks on the job at the New York Fed just happened to be at the same time that a bizarre reorganization of the bank’s compliance function was in motion. Her new boss explained that the group in which Segarra would work was staffed by “relationship managers” who in the past were responsible for scrutinizing compliance at the megabanks. But, as part of the reorganization, the relationship managers would be replaced by “risk specialists,” the role that she would play. These specialists were assigned to monitor market, credit, audit, operational, legal, and compliance risk. Most of these positions would be filled by experts who, like her, were new to the New York Fed. The idea was to replace the “relationship managers,” who were former long-time bank examiners, in order to “upgrade the New York Fed’s personnel.” As Segarra summarizes it, “This convoluted and confusing structure had more to do with giving the old bank examiners the appearance of a job so as to improve their prospects of getting hired out of the New York Fed and less to do with how supervision would work moving forward under the new structure.” Additionally, the manager she was replacing, Jonathan Kim, was supposed to transition out of the job within a month, but he ultimately remained in his position the whole time Segarra was at the New York Fed (about seven months). She states the obvious: this structure “made my job very difficult.” Other relationship managers also remained or received promotions: “So much for getting rid of the old guard…. So much for changing a culture that was rotten to its core.” To add to the confusion, Segarra was not scheduled to receive vital systems training for her job until many months after her arrival. When she raised this concern with a colleague, the response was troubling: “Don’t worry about that. I didn’t do anything the first year I was here.” During the transition before her training, she could The reorganization


52 / Regulation / SPRING 2019 IN REVIEW

do nothing more than listen closely and take meticulous notes. "Dysfunctional" seems like a kind assessment of the work environment.

Supervising Goldman / As luck would have it, Segarra was assigned to work on compliance matters for Goldman Sachs:

Long before I arrived at the New York Fed, Goldman's reputation in legal and compliance circles was not good…. If the word on the street was right, my job would be incredibly easy. Finding issues with their legal and compliance programs would be like shooting fish in a barrel.

The oversight of Goldman was shared with the New York State Department of Financial Services and the Federal Deposit Insurance Corporation, with all the supervisors from the three working on the "Goldman regulator floor." The regulator floor and the Goldman offices are the primary settings for the book.

The writing style of Noncompliant is not breezy by any means, but becomes predictable. Most of the storyline involves Segarra describing the meeting (or meetings) of the day, either with Goldman, her colleagues at the New York Fed, or the other agencies that oversee Goldman. She characterizes the sequence as "another mind-numbingly repetitive meeting." One of her major findings was that "Goldman did not have a firm-wide conflicts-of-interest policy." After she discovered this, countless meetings ensued and a Goldman legal counsel admitted, "There is no one policy per se." As the story unfolds, the reader gets bombarded with acronyms from the financial industry: CFPB, MRIA, RCSA, MOU, BSC, SR, IO, BSA, AML, CCAR. Reading the book is analogous to watching a very long episode of The Office, but without the bursts of humor.

Segarra reveals some really egregious practices: "Many New York Fed employees had side jobs…. We were free to set up our own legal practice on the side and make money practicing law while working full-time at the New York Fed." As for sharing information with colleagues at the Board of Governors in Washington to facilitate oversight of Goldman, a New York Fed colleague claimed, "We don't share information with the Board." Segarra claims that she was blocked from taking a tough enforcement stance against Goldman. "We made a deal with Goldman last year that we would raise their rating," explained a veteran Fed colleague.

Insider trading apparently is a "side-gig" for some New York Fed employees. "Have you gotten any good trading tips yet?" one former employee asked Segarra. Regulatory capture was pervasive: "A number of the [New York Fed Goldman] team members often leapt to the bank's defense and worried how Goldman would react to negative criticism from the risk specialists."

The hammer falls / As Segarra pushed back against this culture, one of the managers she worked under made clear that her moves were not appreciated and that "he had received some troubling feedback about [her] from a few people on his team." Her notes of official conversations, which were to allow her to both learn her job and create the official record for meetings, were brought under scrutiny. One colleague interrogated her: "Isn't it interesting how different people hear different things in meetings…. I don't recall hearing a lot of these things noted in your meeting notes." Segarra implies that the true meaning of the comments was clear: destroy her minutes of the meeting. That colleague would depart a few years later to work at Goldman.

With the evidence building that she was becoming persona non grata, Segarra began to realize that "I need to talk to a lawyer." Her lawyer advised her to purchase a USB recording device.

With the end of her six-month probationary period looming, she had concerns. But she took hope when Kim entered her in the performance appraisal system, believing "he would not have been bothering to [set me up in the system] if the New York Fed was planning to fire me." But she would soon learn that, as with everything else at the New York Fed, the performance appraisal process was a "shit-show."

The end came seven months into her tenure, when she was ushered into a conference room by Kim, where one of the managers and someone from human resources awaited her. "Carmen, I am here to tell you you've been released from the bank," she was told. "We've lost confidence in your ability to allow your work to be adequately supervised." Segarra fought her dismissal in court but her wrongful termination case was ultimately thrown out. "The experience had eroded my trust in the government's ability to supervise the financial system and protect the savings of taxpayers," she writes.

Conclusion / Segarra's continuous narrative regarding one meeting after another could have been presented better in the book. For example, it would have been helpful for the reader if she had offered a simple scorecard of all the many players at the New York Fed, Goldman, and elsewhere. All the Presidents' Bankers, a 2015 book also published by Nation Books, did just that.

Maybe she read too much into some of the comments of her colleagues. Maybe working in a bureaucracy was too much for her. Maybe some of her colleagues considered her insubordinate. But even if some of what she has to say in Noncompliant was exaggerated or misunderstood, the picture is very troubling for the fate of megabank oversight.



Working Papers ✒ BY PETER VAN DOREN AND IKE BRANNON

A SUMMARY OF RECENT PAPERS THAT MAY BE OF INTEREST TO REGULATION'S READERS.

PETER VAN DOREN is editor of Regulation and a senior fellow at the Cato Institute. IKE BRANNON, a contributing editor to Regulation, is a senior fellow at the Jack Kemp Foundation and president of Capital Policy Analytics.

The ACA and Opioid Deaths

"Health Insurance and Opioid Deaths: Evidence from the Affordable Care Act Young Adult Provision," by Gal Wettstein. Forthcoming in Health Economics.

Accidental drug overdoses have become the leading cause of death for those under age 50, and the rate of death via opioids has increased dramatically in the last few years. In 2017, approximately 72,000 people died of a drug overdose in the United States, which is nearly twice as many as in 2013 and four times as many as at the turn of the 21st century.

The recent spike in drug mortality coincides with the advent of the Affordable Care Act, which greatly increased the ability of young people to obtain medical coverage. In the years following the ACA's passage, insurance coverage for people between 18 and 25 increased from 70% to 87%. This has led some people to infer that the increase in insurance coverage contributed to the increase in opioid deaths. The rationale is that having a doctor and insurance coverage makes it easier for people to access and become addicted to opioids, despite attempts in recent years to restrict the availability of the drugs.

Despite the timing, it is not clear that the increase in opioid deaths has anything to do with the increase in health insurance coverage. In this paper, Gal Wettstein notes that, ex ante, the very opposite effect is possible: people with regular health care should have better health and thus have less reason to seek painkillers to begin with. Moreover, those who do become addicted will find it easier to access treatment via mental health counseling or medication, as well as follow-up care. Most importantly, addicts with insurance generally abuse prescription opioids, which are less risky than heroin or other narcotics bought illegally. Indeed, the Centers for Disease Control attributes fully 40% of all overdose deaths to fentanyl, an incredibly lethal drug that is often added to batches of heroin to accentuate the high it confers.

With such confounding intuitions, Wettstein turns to the data to attempt to determine if there is a connection between the ACA and opioid deaths. He uses the ACA's health insurance provision for young adults as a quasi-experiment. The ACA allows children to remain on their parents' health insurance until they turn 26, and this provision took effect upon the law's passage on September 23, 2010, while many other ACA provisions did not take effect until 2014. The distinct implementation dates mean that we should see the effect of the ACA on drug deaths—if there is one—occur at different times for different age cohorts. Two confounding events occurred between these two dates that
make this natural experiment a bit less than ideal. The first is that marijuana became legal (or quasi-legal) in several states over this time period, and there is some evidence that marijuana dampens opioid usage by serving as another way for people to deal with pain. Also, use of naloxone, a medication that can rapidly alleviate the effects of an overdose, became more prevalent over this time. It is possible naloxone availability could reduce overdose deaths; it is also possible it could contribute to them through moral hazard: people may be more willing to take risks with opioids if they know naloxone is at the ready in case of overdose.

Another problem in this analysis, Wettstein notes, is that it can be difficult to discern precisely what, in fact, killed someone. Not all decedents get an autopsy, and coroners may forgo one if they believe that they can easily discern a cause of death from circumstances and there is no next of kin insisting that an autopsy be done. People who spent years abusing drugs and died of a heart attack at a young age may have clearly had their lives cut short because of drug abuse, but the coroner may attribute their death to natural causes.

Wettstein looks at opioid-related deaths of people ages 19–29 by year and state from 2011 to 2016. The 29-year-olds in 2016 could obtain coverage in 2011. He compares this cohort's overdose death rate to the overdose death rate via opioids for people ages 32–36, who had no access to young-adult health insurance coverage. He employs two distinct methods of analysis. The first is a difference-in-differences approach, which entails comparing the two groups. The second method is a simple dose-response model whereby he measures what happened after the implementation of the ACA while attempting to control for other factors.

Using the difference-in-differences approach, Wettstein finds that deaths from opioid abuse in the older cohort increased faster than in the younger group post-ACA. That leads him to tentatively conclude that health insurance access reduced deaths from opioids. However, there are caveats. One concern is that prescription opioids spill over between age groups within a state, confounding cohort comparisons. For instance, in their 2015 paper "How Increasing Medical Access to Opioids Contributes to the Opioid Epidemic: Evidence from Medicare Part D," David Powell, Rosalie Liccardo Pacula, and Erin Taylor found that states with higher take-up rates for Medicare Part D also saw greater drug abuse among non-retirees. In other words, younger people having more insurance may actually increase access to drugs for older people as well. This "dilutes the experiment," giving us one more thing that cannot be controlled for.

The regression results from the dose-response methodology show a decline in deaths from an increase in health insurance coverage. Wettstein does not discern any obvious stepwise linear tradeoff, but his data do suggest accumulated declines in drug abuse deaths. By looking at the entire panel of observations, he discerns a lagged effect of access to health insurance, with the reduction in death rates from health insurance access increasing in subsequent years. He estimates that a 1-percentage-point increase in health insurance coverage ultimately reduces opioid deaths by 3.6 per 100,000, which is a 16.5% reduction.

Wettstein cautions against reading too much into his data, noting that 2011–2016 may turn out to be anomalous, with death rates much higher than in previous—and hopefully subsequent—eras. He concludes that health insurance seems to have reduced drug deaths from where they would be otherwise, and he suggests that it does so partly through the improved physical and mental health that regular access to health care begets. —Ike Brannon
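The difference-in-differences logic is worth spelling out: compare how overdose death rates changed for the cohort eligible for the young-adult provision (ages 19–29) with how they changed for the ineligible cohort (ages 32–36), before and after the provision took effect. The sketch below is purely illustrative—the data are invented and this is not Wettstein's specification—but it shows the mechanics of estimating the interaction term that carries the effect.

```python
# Illustrative difference-in-differences sketch on made-up state-year data.
# Not the paper's actual data or specification.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per state, cohort, and year.
df = pd.DataFrame({
    "state":      ["PA", "PA", "PA", "PA", "OH", "OH", "OH", "OH"],
    "year":       [2009, 2013, 2009, 2013, 2009, 2013, 2009, 2013],
    "young":      [1, 1, 0, 0, 1, 1, 0, 0],  # 1 = ages 19-29 (eligible cohort)
    "death_rate": [14.0, 16.0, 18.0, 23.0, 15.0, 17.5, 19.0, 25.0],  # opioid deaths per 100,000
})
df["post"] = (df["year"] >= 2011).astype(int)  # after the young-adult provision took effect

# The coefficient on young:post is the difference-in-differences estimate:
# how much less (or more) the eligible cohort's death rate rose relative to the older cohort.
model = smf.ols("death_rate ~ young + post + young:post + C(state)", data=df).fit()
print(model.params["young:post"])
```

The actual paper runs this comparison across all states over 2011–2016 with additional controls; the point of the sketch is only the interaction term. Note also that the headline dose-response estimate—3.6 fewer deaths per 100,000 for each percentage point of coverage, described as a 16.5% reduction—implies a baseline death rate of roughly 22 per 100,000 in the affected group.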

Banking Regulation

"The Limits of Shadow Banks," by Greg Buchak, Gregor Matvos, Tomasz Piskorski, and Amit Seru. October 2018. SSRN #3260434.

Traditional banks accept deposits that are federally insured, issue loans, are members of the Federal Reserve System, and are subject to safety and soundness banking regulations and examinations. Shadow banks, on the other hand, do not accept deposits; they raise money in the capital markets. They originate loans, not to hold in their portfolio, but to securitize and sell to other investors. Among those investors are the government-chartered Federal National Mortgage Association (Fannie Mae) and Federal Home Loan Mortgage Corporation (Freddie Mac), which also receive government subsidies. The 2010 Dodd–Frank banking reform legislation, passed after the financial crisis in 2008 and the ensuing Great Recession, altered the regulation of traditional banks but not shadow banks. This paper argues that because shadow and traditional banks are partial substitutes for each other, the regulatory reform of traditional banking has shifted some banking activity to the shadow sector, dampening any soundness benefits from Dodd–Frank.

The market for mortgages is segmented. Well-capitalized traditional banks issue so-called jumbo loans (above $484,350, or $726,525 in high-cost home areas) that cannot be sold to Fannie Mae and Freddie Mac. Thus, the traditional banks hold those loans on their balance sheets. Shadow banks originate loans to distribute to Fannie and Freddie or private investors through securitization. Poorly capitalized traditional banks with limited balance sheet capacity also originate loans to distribute to investors.

A central component of traditional banking reform has been increased capital requirements. The more equity in a traditional bank's capital structure, the less likely depositors are to lose money if loans are not repaid in full. (See "Bank Capital Requirements," Working Papers, Winter 2010–2011.) Under Dodd–Frank, capital requirements were increased from 4% of assets (i.e., loans) in 2010 to 6% in 2015. The central insight of this paper is that traditional banks rely
on deposit insurance to attract deposits while shadow banks and poorly capitalized traditional banks rely on the government subsidies to Fannie and Freddie to facilitate their business model. As subsidies for traditional banks decline because of increased capital requirements, jumbo loan activity declines but "conforming" loan activity—loans below $484,350, or $726,525 in high-cost loan areas—increases to take advantage of the mortgage guarantees provided by Fannie and Freddie. —Peter Van Doren
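To make the segmentation concrete, here is a back-of-the-envelope sketch (mine, not the authors'): it classifies a loan as jumbo or conforming using the limits quoted in the summary and shows how the Dodd–Frank increase in required capital raises the equity a traditional bank must hold against loans kept on its balance sheet. The portfolio figure is hypothetical.

```python
# Illustrative only: loan classification and a simple capital-requirement calculation.
# Loan limits are those quoted in the summary above; the portfolio is hypothetical.
CONFORMING_LIMIT = 484_350   # standard areas
HIGH_COST_LIMIT = 726_525    # high-cost areas

def is_jumbo(loan_amount: float, high_cost_area: bool = False) -> bool:
    """A loan above the conforming limit cannot be sold to Fannie Mae or Freddie Mac."""
    limit = HIGH_COST_LIMIT if high_cost_area else CONFORMING_LIMIT
    return loan_amount > limit

def required_equity(balance_sheet_loans: float, capital_ratio: float) -> float:
    """Equity a bank must hold against loans it keeps on its own balance sheet."""
    return balance_sheet_loans * capital_ratio

portfolio = 1_000_000_000  # $1 billion of jumbo loans held on the balance sheet
print(required_equity(portfolio, 0.04))        # pre-Dodd-Frank 4% requirement: $40 million
print(required_equity(portfolio, 0.06))        # post-2015 6% requirement:      $60 million
print(is_jumbo(600_000))                       # True in a standard area
print(is_jumbo(600_000, high_cost_area=True))  # False in a high-cost area
```

The extra $20 million of required equity on the same $1 billion portfolio is the kind of cost that, on the paper's account, pushes loan origination toward shadow banks and toward conforming loans that can be passed to Fannie and Freddie.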

Soda Taxes

"The Impact of Soda Taxes: Pass-Through, Tax Avoidance, and Nutritional Effects," by Stephan Seiler, Anna Tuchman, and Song Yao. January 2019. SSRN #3302335.

Taxes on high-calorie beverages, i.e., "soda taxes," have become a popular policy response to the obesity epidemic. Mexico implemented a nationwide soda tax in 2014. Estimates of its effects have relied on standard elasticity estimates: a 1% increase in soda price results in a 1–3% decrease in consumption. In a previous Working Papers column ("Soda Taxes," Winter 2017–2018) I discussed research that lowered those estimates by considering the purchase of cheaper soda (switching brands) as a taxpayer response.

In the United States, beverage taxes have been enacted by localities rather than nationwide. A response to such a tax could be shopping outside the jurisdiction and avoiding the tax. On January 1, 2017, Philadelphia imposed a 1.5¢-per-ounce tax on sweetened beverages. This was a large tax, amounting to $1.01 on a 2-liter bottle that had a pre-tax price of $1.56—a 65% tax on the price. In comparison, the Mexican tax was 9% of the pre-tax average price.

In this paper, the researchers found that the Philadelphia tax has had little, if any, effect on city residents' consumption of caloric soda. Beverage purchases within Philadelphia decreased by 42% after the tax, but that reduction was fully offset by an equivalent increase in purchases at stores outside of Philadelphia. —P.V.D.
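The size of the Philadelphia levy can be checked directly from the figures in the summary; the short calculation below reproduces that arithmetic and contrasts it with what a textbook elasticity would predict. The bottle size and prices come from the article; the elasticity applied is only illustrative.

```python
# Back-of-the-envelope check of the Philadelphia soda tax figures cited above.
OUNCES_PER_LITER = 33.814
TAX_PER_OUNCE = 0.015          # $0.015 = 1.5 cents per ounce

bottle_ounces = 2 * OUNCES_PER_LITER      # a 2-liter bottle is about 67.6 oz
tax = bottle_ounces * TAX_PER_OUNCE       # about $1.01
pre_tax_price = 1.56
print(round(tax, 2))                      # 1.01
print(round(tax / pre_tax_price * 100))   # about 65 (percent of the pre-tax price)

# With a textbook price elasticity of -1 to -3, a 65% price increase would predict
# a very large drop in consumption. The paper instead finds the drop inside the city
# is offset by cross-border shopping, so total consumption barely moves.
elasticity = -1.0                         # illustrative, low end of the cited range
predicted_change = elasticity * (tax / pre_tax_price)
print(round(predicted_change * 100))      # roughly -65 percent at unit elasticity
```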

Securities Regulation

"A New Market-Based Approach to Securities Law," by Kevin S. Haeberle and M. Todd Henderson. August 2018. SSRN #3233122.

Three claims are used to justify modern securities regulation:

■ Firms fail to disclose enough information.
■ Firms disclose untruthful information.
■ Insiders trade on information, reducing their incentive to release information and the incentive of outsiders to invest in information production.

Modern securities regulation attempts to solve all three of these problems through mandates and restrictions.



Government-mandated production of information results in the overproduction of information irrelevant to firms' soundness (e.g., blood diamond disclosure, CEO pay ratios) and underproduction of relevant information. Disclosure law has become a focal point, and securities fraud litigation reinforces this legally defensible but mindless focal point. "Additional statements mean additional exposure to lawsuits based on the allegation that those statements are false or misleading," note the authors of this paper. Class action investor fraud lawsuits result in overcompensation (the authors point out in a footnote that, since 1996, these suits have named 35,000 defendants and produced $95 billion in settlements) and are just a wealth transfer from one set of shareholders to another, with a healthy cut for the lawyers. (See "The End of Securities Fraud Class Action?" Summer 2006.) The marginal deterrence benefits from securities fraud litigation are low because the payments are orders of magnitude greater than the actual level of fraud.

For the authors, the central economic problem with current securities regulation is that it mandates that firms provide information for free. The authors' solution is to legalize payments for early access to public information. This money would generate incentives for firms to provide the information that investors want. Participation in class action securities fraud suits would be limited to those who paid for early access, acted on that information, and lost money because of fraudulent information. And corporate insider trading would be severely reduced because such behavior would now cost the firm money: insider trading would undermine the firm's profits from selling early access to information.

The most important objection to their proposal is that, in this new regime of advance disclosure, there would be no uninformed investors from whom the knowledgeable could buy and sell securities. The uninformed, who did not pay for access, would avoid trading during the publicly announced time periods in which some investors get early access to the information, thereby protecting themselves from getting fleeced by the informed. Thus, the only people trading in these periods would be the informed. Given the belief that serious money is made only by the informed trading with the uninformed, there wouldn't be anyone willing to pay for early access to information because they couldn't make any money from that information. In this view, the wolves make money only by selling to the sheep.

The authors counter that informed people now trade with each other because they differ on the implications of information that they all possess: "Information can be valuable even when other people have it (if you have different predictions or can get to market first) or if it can be used to predict outcomes in related areas." —P.V.D.




FINAL WORD ✒ BY TIM ROWLAND

Why Can’t We Admit Policy Mistakes?

TIM ROWLAND is the author of the books Strange and Obscure Stories of Washington, DC and Strange and Obscure Stories of New York City.

Every so often, usually in a backwater weekly newspaper, you can still find what was once a ubiquitous newspaper feature known as the police blotter. It is a dutiful, verbatim digest of business that recently came across the police desk. Often sad and occasionally humorous, it records every last disturbance, from a drugstore shoplifting to a rat in a toilet.

The blotter is often a nonlethal version of the Darwin Awards. If I had to pick a favorite item from my years in newspapers, it was the car thief in Key West who hotwired a jalopy and sped away. Sadly for him, the only road out of town was the 113-mile-long Overseas Highway, a corridor from which there is no exit. In no particular hurry, the police radioed ahead to the community of Islamorada, 84 miles to the east, and asked the police chief there to please nab the thief when he happened by. Which, an hour and 45 minutes later, he did.

The blotter can be viewed as a leading social and cultural indicator. Have opioids infected the community? Are economic stresses causing increased incidences of domestic abuse? Do unsupervised juveniles suggest fractured families? But the failures revealed in the blotter do not always lie with the perp. Vestiges of failed law and government behavioral modification show up as well.

Two blotter items published in northern New York earlier this year, when the roads were awash in salt and slush, show how desperately we cling to such policies. In each item, a driver was pulled over on the pretense that his license plates were unreadable—which they probably were,
along with every other car traveling the Northway that day. In each case, the ostensible safety stop resulted in a charge of possession of a small amount of marijuana. One man, an executive of color from the Bronx, was driving a newly minted Range Rover. The other was an unemployed 42-year-old who was driving an old beater of a Volvo. Their commonality, along with an affinity for weed, is the misfortune of being tagged for violating a law that in another year very well might not exist. Reflecting on the Catholic Church’s decision to permanently absolve the sin of Friday meat consumption, George Carlin quipped, “I bet there are still some guys in hell doing time on a meat rap.” And so it will be for these two. Like too many laws, marijuana was criminalized without any study, without any science, without any scintilla of evidence that the common good would be improved were it to be scoured from the face of the earth. Cannabis was among the tinctures sitting in American medicine chests minding its own business when it got swept up along with prohibitions of other “poisons” such as opium and cocaine. In 1914, the New York Times praised the criminalization of

marijuana on the grounds that—well, there were no grounds except that "the inclusion of Cannabis indica among the drugs to be sold only on prescription is common sense. Devotees of hashish are now hardly numerous here enough to count, but they are likely to increase as other narcotics become harder to obtain." In other words, shoot all your cows today and you won't have to worry about brucellosis tomorrow. More nefariously, criminalization of marijuana was a tool in the toolboxes of Southwestern lawmen who needed an excuse to detain Mexicans crossing the border.

But when bad law finally falls, it falls fast. In another decade, will there be anywhere in America where you can't walk into a streetcorner merchant and buy a little weed? OK, fine, insert a Mississippi joke here. Meantime, whither our New York friends with the obstructed license plates? How much disruption have they suffered in their lives because of a little-toe of a law that in evolutionary terms is not far from dropping off? Multiply that by millions of others who have lost money, freedom, careers, and dignity over the past century-plus—all because, once passed, we have such trouble admitting that our new law is a failure.

Weed is all-too-emblematic of our policy mindset. When presented with a problem and no particular information or facts concerning said problem, our first solution always is "Jail." Or, in the corporate realm, "Law." Incarcerate or regulate now and worry about the consequences later.

The Washington Post recently reported on two approaches to the deadly drug fentanyl. The traditional approach of fighting it as one fights crime resulted in a massive wave of overdose deaths on the East Coast and in Appalachia. But in California, public health workers mingled among the users, encouraging proper labeling of the drug and demonstrating what levels were safe to use. Ideal? Hardly. Better for social welfare? By far. If only the same sort of calm analysis had gone into marijuana policy a century ago.






