Soundings

including the Catholic Church in Kansas, which has donated some $750,000 to the political campaign for it. Leading the opposition is the ACLU, which has so far contributed some $235,000 to defeating it.

In Kentucky, religious organizations want to add a clause to the constitution like the one proposed in Kansas, explicitly denying any right to abortion. Voters in four other states (Alabama, Tennessee, West Virginia, and Louisiana) have already passed such amendments. Conversely, Vermont voters will get their say on a Right to Personal Reproductive Autonomy Amendment, which would enshrine the right to abortion in that state’s constitution. The measure is being promoted by the state’s ACLU and Planned Parenthood chapters and opposed by religious groups and Vermonters for Good Government, which fears that the passage of the amendment might make taxpayers responsible for funding abortions, fertility treatments, gender-transformation surgeries, and other procedures related to reproduction. The California legislature, meantime, has placed an initiative on the ballot that would explicitly add language to its constitution guaranteeing a right to abortion. In Montana, voters will decide on a law declaring that infants born alive during an abortion procedure are persons and must receive medical care. Supporters placed the law on the ballot after the state’s then-governor, Democrat Steve Bullock, vetoed a similar measure in 2019.


Unionization is on the ballot in Illinois and Tennessee, albeit in different ways. In recent years, a wave of right-to-work legislation has swept the country. Four states bordering Illinois (Indiana, Wisconsin, Iowa, and Kentucky) have adopted laws giving workers the right to opt out of unions. Alarmed, union allies in the Illinois legislature have now placed on the state’s ballot a constitutional amendment guaranteeing a right to collective bargaining in both the private and public sectors. The proposed amendment not only gives workers the right to bargain but also explicitly states that Illinois may not pass laws restricting unions’ ability to negotiate over wages and benefits, as well as “other terms and conditions of employment.” While the amendment has gained the support of many unions in Illinois, the expansive nature of the language in the ballot initiative has sparked opposition from employer groups, including the Illinois Association of School Boards, the Illinois Chamber of Commerce, and the Illinois Manufacturers’ Association.

Tennessee, meantime, was one of the first states to adopt right-to-work, shortly after Congress gave states that option in the 1947 Taft-Hartley Act. Bolstered by research showing that right-to-work states have vastly outperformed required-unionization states on private-sector job growth over the last two decades, Tennessee now wants to join nine other states that have enshrined right-to-work protections in their constitutions. The state has been particularly effective at grabbing new manufacturing jobs and winning business relocations from unionization states like California in recent years.

The unexpectedly robust rebound in tax revenues after the Covid-induced economic lockdowns has left many states flush with cash. A ballot initiative in Colorado will give voters a chance to cut the state’s income-tax rate to 4.40 percent, from 4.55 percent—an estimated $500 million reduction in revenue. This would be the state’s second voter-approved cut in two years: in 2020, Colorado residents approved a reduction from 4.63 percent to the current level, and Democratic governor Jared Polis backed the initiative. Are Colorado voters in the mood for more? Maybe. Polis recently said that the state should aim to eliminate its income tax and find better, less economically painful, ways to raise revenues.

Massachusetts Democrats want to take their state in a dramatically different direction. Local Democrats have approved a state ballot initiative that seeks to raise taxes by $2 billion. Voters will get to weigh in on the issue this November, amid increasingly good news on state finances. In April, Massachusetts collected $2 billion more in tax revenues from its residents than anticipated, and Republican governor Charlie Baker has been negotiating for tax cuts, even as Democrats ask voters for a whopping hike.

The right to bear arms is never out of the news for long. The Supreme Court’s recent Bruen decision has kept a spotlight on gun rights, and citizens of several states will have their chance to vote on Second Amendment issues this fall. In November, Iowa citizens will vote on a constitutional amendment to “keep and bear arms.” Currently, 44 other states have similar reinforcements of the Second Amendment in their constitutions (California, New York, and Maryland are among the six that do not). One Iowa legislative supporter of the initiative said that the amendment is an attempt to set up obstacles for “liberal judges” who are “willing to just take away your right to keep and bear arms.” Democrats have opposed the amendment on grounds that it might make it harder to modify the state’s gun laws.

Given the long lead time necessary to place a referendum on the ballot in most states, this year’s initiatives are the result of momentum created before our current news cycles. But given timely policy debates over issues like abortion and gun rights, it seems likely that what voters in some states will decide over the next few months will set the stage for more direct-democracy campaigns.

Facing Reality on Entitlements

A Congressional Budget Office report leaves no doubt about the cause of the nation’s crushing debt burden.

Milton Ezrati

A new Congressional Budget Office report confirms that Washington’s financial prospects are dire. Even while making generous assumptions about inflation and interest rates, these government economists and statisticians anticipate that federal spending will continue to outpace revenues well into the twenty-first century, creating historically large annual deficits and adding to the nation’s accumulation of public debt. By 2032, they forecast, outstanding government debt will stand at 110 percent of the nation’s gross domestic product (GDP), and by 2052, that figure will reach 185 percent—an astronomical share that exceeds that accumulated during World War II.

Past decisions have made this forecast all but inevitable. More than anything, the growth of entitlements—Social Security, Medicare, and Medicaid—has caused spending to exceed revenues, creating outsize deficits and a mounting debt burden. Though CBO forecasters did their best to work moderation into this inexorable trend, they could not erase the effect on their projections. Their numbers make clear that Washington won’t get a handle on its deteriorating finances until it gains some control over entitlement spending and financing.

Assuming that tax law remains largely unchanged, these economists project that federal revenues from all sources will grow roughly in tandem with the economy, with the government taking in an average of about 18 percent of GDP annually. The bulk of the projected deficit emerges on the spending side. The CBO projects that federal outlays will expand from 23.8 percent of the economy this year to 25.8 percent by the mid-2030s, and then climb to 28.9 percent by mid-century.

Entitlement spending is to blame for most of this increase. Social Security, health care, and other transfers from Washington swell in these projections from 10.8 percent of GDP this year to 13.7 percent in the mid-2030s and to 14.9 percent by 2052, amounting to more than four-fifths the relative boost in all spending. The rest comes from the need for Washington to pay interest on an enlarged debt load, itself the result of past increases in entitlement spending.

The CBO’s forecast tells a familiar story. Federal spending for decades has grown as a portion of the economy. In 1970, for instance, overall federal spending equaled some 18.7 percent of the country’s GDP. The rise from this level to the CBO’s 23.8 percent estimate for 2022 occurred even as defense expenditures have fallen from 7.8 percent of GDP in 1970 to just 3.5 percent today. It was the relative rise in entitlement spending from some 7.6 percent of GDP in 1970 to today’s level that all but erased the budgetary relief from the relative decline in defense spending.

With this history in mind, the CBO may have taken too optimistic a tack in its calculations. For instance, it assumes that defense spending will hold steady at about 3.5 percent of GDP—but mounting geopolitical pressures seem likely to drive some expansion in defense outlays. On entitlements, too, forecasters show signs of optimism, predicting that spending will go up at something close to the historical pace—yet several considerations suggest that it may accelerate. For example, the government has decided to continue subsidies under the Affordable Care Act, even though they were set to expire. These will accumulate over time. As of late June, President Biden was considering forgiving student debt, or at least a portion of it. This would constitute a new entitlement. Exact amounts would, of course, depend on how much the government decides to forgive and what income tests it places on recipients, but the government’s posture suggests an increase in the relative entitlement burden on the economy—and also sets a precedent for future generations of indebted collegians.

More significant than student-debt relief is the likely impact of aging on the country’s population. As the huge baby-boom generation retires, the number of dependent retirees will continue to mount. In 1970, some 10 percent of the population was aged 65 or older; by 2019, that number was 16 percent; and by the mid-2030s, according to Census Bureau estimates, it will be 21 percent. Such a large proportion of older people will increase demands for Social Security and Medicare entitlements and other federal services. Of course, CBO estimates try to take this trend into account, but perhaps not sufficiently. They consider the draw on Social Security, for instance, but not the burden on Medicare from the growing number of recipients over age 75.

None of this is to say that entitlements, already at some two-thirds or more of the federal budget, represent the wrong way for Washington to spend money. These programs reflect popular priorities, ratified by Congresses of both parties and signed into law by Democratic and Republican presidents alike. How the nation should allocate its output of goods and services is ultimately a political judgment.

Economics can only point out that those decisions will condemn federal finances to deficits and ever-increasing debt burdens until Washington takes one of three admittedly tough steps: reining in the growth of entitlement spending; sacrificing other government services; or telling voters that they must pay more in taxes. As the CBO has made clear, these are the only real ways to avoid an unprecedented debt burden.

Perhaps the economy will eke out faster real economic growth than the CBO’s assumption of slightly more than 1.5 percent a year, which would relieve some financial strain. Or perhaps inflation will persist longer than the CBO’s expectation that all will be well by 2023, which would benefit debtors, including the government, even if it hurts American households. But even if events reduce the intensity of the government’s financial problem, the budget basics would remain the same. Washington must address the impact of entitlements realistically— and the clock is ticking.


In Defense of Slippery-Slope Arguments

They identify a structural tendency of contemporary progressive thought.

Benedict Beckeld

Every now and then, a piece of philosophical theory breaks into the popular consciousness, such that people without any philosophical education regularly refer to it. One such theory is the rejection of the slippery-slope argument, which holds that an event, A, will set off a series of events culminating in some dreadful consequence, B—and therefore A should not occur. “The supposed ‘slippery slopes’ are fake, silly rhetoric to placate the faithful,” a columnist recently wrote, pointing to how partisans on different sides of contentious issues like gun rights or abortion take extreme positions. True, such arguments can sometimes ignore a potential middle ground and overlook the fact that the dreaded consequence will not necessarily follow. But slippery-slope arguments are not always incorrect, and they offer insight about the nature of modern progressivism.

Usually, rejection of slippery-slope arguments occurs in the context of their claims that some policy will have bad consequences. These claims may be wrong, but the slippery-slope label does not prove their wrongness. Societies slide down such slopes all the time: history is full of examples of nations that moved in a progressive direction over time, tended toward decadence or exhaustion after altering rules for elites, and then relaxed moral standards. Indeed, the slippery-slope argument, especially in the context of social decay, has a noble pedigree. Plato observes in The Republic that democracy leads to authoritarianism; as freedom and equality expand beyond orderly limits, only hardheaded authority can rein them in. The fall of the Roman republic to authoritarian empire and the rapid collapse of French republicanism before the rise of Napoleon stand as examples.

Today’s slippery slopes are more familiar. Consider contemporary discussion over the nature of rights. While political conservatives generally define rights negatively (freedom from something), progressives define them positively (freedom to something). The preservation of negative liberties is definite and circumscribed, seeking to conserve a particular thing; the search for positive liberties is less bounded, aiming to widen the scope of what is alleged to be true freedom. Contemporary progressives tend not to be satisfied with certain political victories, which, once achieved, give way to new demands: for example, activists hoping to secure rights for sexual minorities initially made assurances that those who disagreed would be left alone; now they intend to stamp out dissent and expand the universe of rights beyond gay marriage. Given this history, one may be forgiven for suspicion of progressive intentions, or for concluding that the slippery slope is itself embedded in the progressive posture. It is also a definitional question. Whereas conservatism wishes to remain in one place, or, at most, to move only in certain limited respects, the very definition of progressivism is to progress—that is, to keep moving.

But the slippery-slope argument often resonates as a criticism of modern progressivism and liberalism. By leaving the space open for debate and increased equality, liberalism tends to empower those who, unwittingly or not, seek to overthrow anything that can be called liberal. The race to ever-changing equality takes on a logic of its own. The slippery-slope argument is therefore plausibly applicable to progressive politics, by dint of the latter’s own nature to push onward.

Slippery-slope arguments thus deserve some respect. The mere fact that an argument follows that form is largely irrelevant to determining its veracity. Someone accused by his interlocutor of using the slippery-slope argument should reply that the debate should not focus on the notion that event A could lead to consequence B—which is unremarkable—but rather on how likely it is that A will lead to B. If recent experience is any indication, the slippery slope against which that interlocutor had protested is likely to unfold in all its decadence as the years progress.

Encouraging Office Conversions

Repurposing office space can ensure that city centers remain vibrant in the remote-work age.

Connor O’Brien

The rise of remote employment may benefit workers who suddenly have access to a wider national labor market, but it has left many office-heavy corridors of city downtowns half-empty. Some U.S. cities rich in specialized white-collar workers, such as New York, San Francisco, and Washington, D.C., risk being left with permanently suppressed demand for office space and high apartment rents that push newly virtual workers out to the suburbs.

Revamping commercial corridors for mixed or residential use presents a compelling option for cities with still-sluggish office markets: here is a chance to put vacant office space on valuable land to better use while easing urban housing shortages. If as many as 40 percent of office workers in cities like New York ultimately work from home in a post-pandemic world, these office-to-residential conversions will be unavoidable. But conversions will not happen on their own; cities must make a serious effort to bring down their cost.

Conversions tend to be inconvenient and expensive, particularly for postwar structures with large floor plates. Modern office buildings are typically wide and filled with windowless offices, storage rooms, and elevator shafts. Since apartments require natural light, converting buildings with deep footprints will yield dark, long units. Some buildings are so deep that the only feasible conversions involve cutting an interior courtyard. And tearing out or redirecting existing HVAC, plumbing, and electricity to individual apartment units is costly and time-consuming.

Easing office-to-housing conversions will thus require much more than simply zoning neighborhoods for new uses. Cities should also relax rules to allow by-right conversions without lengthy reviews. Further, they should exclude these renovation projects from any parking requirements, setback rules that differ between commercial and residential developments, and limits on height or floor-to-area ratios. A streamlined process that shepherds conversions quickly through the relevant permitting steps and limits discretionary review will also save potential developers time and money.

In commercial zones dominated by office space, planners should consider exempting conversions from inclusionary-zoning requirements. High affordability quotas—requiring a share of new units to be rented or sold at prices likely unprofitable for developers—may deter conversions. Given larger office floor plates, developers will need to build larger units, making affordability requirements even more cumbersome. In an all-residential or mixed-use neighborhood, inclusionary zoning can be the price that developers pay for building a new apartment complex; but in commercial zones with few or no existing residents, political backlash and displacement are less relevant concerns. And though conversion developments will probably need to consist of large units with above-average rent or sale prices to be financially viable, additional supply will still ease the rent crises gripping major cities this year.

Finally, leftover federal money from last year’s American Rescue Plan could be redirected to tax breaks for office-to-residential conversion projects. Cities should be careful not to overemphasize conversion or attempt to engineer the conversions of entire neighborhoods, keeping open the possibility that more workers might return to the office down the road. But given how high work-from-home rates remain more than two years after the beginning of the pandemic and how few conversions are happening in city centers, it’s hard to imagine a well-tailored subsidy yielding too many conversions.

Farther from central business districts, the case for subsidies is weaker. Lower rents and land values make more conversions viable without public funding. In some suburban markets, workers and firms are returning to in-person work much faster than in city centers—a process with which subsidies might interfere. Public subsidies also should not chase conversions of Class A offices, instead focusing on older Class B or C buildings that are easier to convert and less coveted by tenants.

Not all office buildings—even those sitting largely vacant right now—are cut out to be converted to residential uses. Some may simply need to be demolished, while others will ultimately fill up again. But even converting just 10 percent of midtown Manhattan’s older office space to residential use could yield 14,000 units in a city starved for new housing. It may not be local government’s role to assume the losses that building owners are facing in a post-pandemic economy. But cities do have an interest in preventing prime neighborhoods from emptying and decaying, ensuring an adequate supply of housing, and adapting nimbly to economic change.

Mayors and council members will need more than slogans and public pressure to bring life back to eerily quiet downtowns. They must meet this challenge with flexibility and a focus on dismantling the barriers to productive, adaptive reuse across their cities.

The Corruption of Medicine

Guardians of the profession discard merit in order to alter the demographics of their field.

Heather Mac Donald

The post–George Floyd racial reckoning has hit the field of medicine like an earthquake. Medical education, medical research, and standards of competence have been upended by two related hypotheses: that systemic racism is responsible both for racial disparities in the demographics of the medical profession and for racial disparities in health outcomes. Questioning those hypotheses is professionally suicidal. Vast sums of public and private research funding are being redirected from basic science to political projects aimed at dismantling white supremacy. The result will be declining quality of medical care and a curtailment of scientific progress.

Virtually every major medical organization—from the American Medical Association (AMA) and the Association of American Medical Colleges (AAMC) to the American Academy of Pediatrics—has embraced the idea that medicine is an inequity-producing enterprise. The AMA’s 2021 Organizational Strategic Plan to Embed Racial Justice and Advance Health Equity is virtually indistinguishable from a black studies department’s mission statement. The plan’s anonymous authors seem aware of how radically its rhetoric differs from medicine’s traditional concerns. The preamble notes that “just as the general parlance of a business document varies from that of a physics document, so too is the case for an equity document.” (Such shaky command of usage and grammar characterizes the entire 86-page tome, making the preamble’s boast that “the field of equity has developed a parlance which conveys both [sic] authenticity, precision, and meaning” particularly ironic.)

Thus forewarned, the reader plunges into a thicket of social-justice maxims: physicians must “confront inequities and dismantle white supremacy, racism, and other forms of exclusion and structured oppression, as well as embed racial justice and advance equity within and across all aspects of health systems.” The country needs to pivot “from euphemisms to explicit conversations about power, racism, gender and class oppression, forms of discrimination and exclusion.” (The reader may puzzle over how much more “explicit” current “conversations” about racism can be.) We need to discard “America’s stronghold of false notions of hierarchy of value based on gender, skin color, religion, ability and country of origin, as well as other forms of privilege.”

A key solution to this alleged oppression is identity-based preferences throughout the medical profession. The AMA strategic plan calls for the “just representation of Black, Indigenous and Latinx people in medical school admissions as well as . . . leadership ranks.” The lack of “just representation,” according to the AMA, is due to deliberate “exclusion,” which will end only when we have “prioritize[d] and integrate[d] the voices and ideas of people and communities experiencing great injustice and historically excluded, exploited, and deprived of needed resources such as people of color, women, people with disabilities, LGBTQ+, and those in rural and urban communities alike.”

According to medical and STEM leaders, to be white is to be per se racist; apologies and reparations for that offending trait are now de rigueur. In June 2020, Nature identified itself as one of the culpably “white institutions that is responsible for bias in research and scholarship.” In January 2021, the editor-in-chief of Health Affairs lamented that “our own staff and leadership are overwhelmingly white.” The AMA’s strategic plan blames “white male lawmakers” for America’s systemic racism.

And so medical schools and medical societies are discarding traditional standards of merit in order to alter the demographic characteristics of their profession. That demolition of standards rests on an a priori truth: that there is no academic skills gap between whites and Asians, on the one hand, and blacks and Hispanics, on the other. No proof is needed for this proposition; it is the starting point for any discussion of racial disparities in medical personnel. Therefore, any test or evaluation on which blacks and Hispanics score worse than whites and Asians is biased and should be eliminated.

The U.S. Medical Licensing Exam is a prime offender. At the end of their second year of medical school, students take Step One of the USMLE, which measures knowledge of the body’s anatomical parts, their functioning, and their malfunctioning; topics include biochemistry, physiology, cell biology, pharmacology, and the cardiovascular system. High scores on Step One predict success in a residency; highly sought-after residency programs, such as neurosurgery and radiology, use Step One scores to help select applicants.

Black students are not admitted into competitive residencies at the same rate as whites because their average Step One test scores are a standard deviation below those of whites. Step One has already been modified to try to shrink that gap; it now includes nonscience components such as “communication and interpersonal skills.” But the standard deviation in scores has persisted. In the world of antiracism, that persistence means only one thing: the test is to blame. It is Step One that, in the language of antiracism, “disadvantages” underrepresented minorities, not any lesser degree of medical knowledge.

The Step One exam has a further mark against it. The pressure to score well inhibits minority students from what has become a core component of medical education: antiracism advocacy. A fourth-year Yale medical student describes how the specter of Step One affected his priorities. In his first two years of medical school, the student had “immersed” himself, as he describes it, in a student-led committee focused on diversity, inclusion, and social justice. The student ran a podcast about health disparities. All that political work was made possible by Yale’s pass-fail grading system, which meant that he didn’t feel compelled to put studying ahead of diversity concerns. Then, as he tells it, Step One “reared its ugly head.” Getting an actual grade on an exam might prove to “whoever might have thought it before that I didn’t deserve a seat at Yale as a Black medical student,” the student worried.

The solution to such academic pressure was obvious: abolish Step One grades. Since January 2022, Step One has been graded on a pass-fail basis. The fourth-year Yale student can now go back to his diversity activism, without worrying about what a graded exam might reveal. Whether his future patients will appreciate his chosen focus is unclear.

Every other measure of academic mastery has a disparate impact on blacks and thus is in the crosshairs.

In the third year of medical school, professors grade students on their clinical knowledge in what is known as a Medical Student Performance Evaluation (MSPE). The MSPE uses qualitative categories like Outstanding, Excellent, Very Good, and Good. White students at the University of Washington School of Medicine received higher MSPE ratings than underrepresented minority students from 2010 to 2015, according to a 2019 analysis. The disparity in MSPEs tracked the disparity in Step One scores.

The parallel between MSPE and Step One evaluations might suggest that what is being measured in both cases is real. But the a priori truth holds that no academic skills gap exists. Accordingly, the researchers proposed a national study of medical school grades to identify the actual causes of that racial disparity. The conclusion is foregone: faculty bias. As a Harvard medical student put it in Stat News: “biases are baked into the evaluations of students from marginalized backgrounds.”

A 2022 study of clinical performance scores anticipated that foregone conclusion. Professors from Emory University, Massachusetts General Hospital, and the University of California at San Francisco, among other institutions, analyzed faculty evaluations of internal medicine residents in such areas as medical knowledge and professionalism. On every assessment, black and Hispanic residents were rated lower than white and Asian residents. The researchers hypothesized three possible explanations: bias in faculty assessment, effects of a noninclusive learning environment, or structural inequities in assessment. University of Pennsylvania professor of medicine Stanley Goldfarb tweeted out a fourth possibility: “Could it be [that the minority students] were just less good at being residents?”

Goldfarb had violated the a priori truth. Punishment was immediate. Predictable tweets called him, inter alia, possibly “the most garbage human being I’ve seen with my own eyes,” and Michael S. Parmacek, chair of the University of Pennsylvania’s Department of Medicine, sent a schoolwide e-mail addressing Goldfarb’s “racist statements.” Those statements had evoked “deep pain and anger,” Parmacek wrote. Accordingly, the school would be making its “entire leadership team” available to “support you,” he said. Parmacek took the occasion to reaffirm that doctors must acknowledge “structural racism.”

That same day, the executive vice president of the University of Pennsylvania for the Health System and the senior vice dean for medical education at the University of Pennsylvania medical school reassured faculty, staff, and students via e-mail that Goldfarb was no longer an active faculty member but rather emeritus. The EVP and the SVD affirmed Penn’s efforts to “foster an anti-racist curriculum” and to promote “inclusive excellence.”

Despite the allegations of faculty racism, disparities in academic performance are the predictable outcome of admissions preferences. In 2021, the average score for white applicants on the Medical College Admission Test was in the 71st percentile, meaning that it was equal to or better than 71 percent of all average scores. The average score for black applicants was in the 35th percentile—a full standard deviation below the average white score. The MCATs have already been redesigned to try to reduce this gap; a quarter of the questions now focus on social issues and psychology.

Yet the gap persists. So medical schools use wildly different standards for admitting black and white applicants. From 2013 to 2016, only 8 percent of white college seniors with below-average undergraduate GPAs and below-average MCAT scores were offered a seat in medical school; less than 6 percent of Asian college seniors with those qualifications were offered a seat, according to an analysis by economist Mark Perry. Medical schools regarded those below-average scores as all but disqualifying—except when presented by blacks and Hispanics. Over 56 percent of black college seniors with below-average undergraduate GPAs and below-average MCATs and 31 percent of Hispanic students with those scores were admitted, making a black student in that range more than seven times as likely as a similarly situated white college senior to be admitted to medical school and more than nine times as likely to be admitted as a similarly situated Asian senior.

Such disparate rates of admission hold in every combination and range of GPA and MCAT scores. Contrary to the AMA’s Organizational Strategic Plan to Embed Racial Justice and Advance Health Equity, blacks are not being “excluded” from medical training; they are being catapulted ahead of their less valued white and Asian peers.

Though mediocre MCAT scores keep out few black students, some activists seek to eliminate the MCATs entirely. Yet the MCATs, like all beleaguered standardized tests, are constantly scoured for questions that may presume forms of knowledge particular to a class or race. This “cultural bias” chestnut has been an irrelevancy for decades, yet it retains its salience within the antitest movement. MCAT questions with the largest racial variance in correct answers are removed. External bias examiners, suitably diverse, double-check the work of the internal MCAT reviewers. If, despite this gauntlet of review, bias still lurked in the MCATs, the tests would underpredict the medical school performance of minority students. In fact, they overpredict it—black medical students do worse than their MCATs would predict, as measured by Step One scores and graduation rates. (Such overprediction characterizes the SATs, too.) Nevertheless, expect a growing number of medical schools to forgo the MCATs, in the hope of shutting down the test entirely and thus eliminating a lingering source of objective data on the allegedly phantom academic skills gap.

Meantime, medical professors need to be reeducated, to ensure that their grading and hiring practices do not provide further evidence of the phantom skills gap. Faculty are routinely subjected to workshops in combating their own racism. On May 3, 2022, the Senior Advisor to the NIH Chief Officer for Scientific Workforce Diversity gave a seminar at the University of Pennsylvania medical school titled “Me, Biased? Recognizing and Blocking Bias.” Senior Advisor Charlene Le Fauve’s mandate at NIH is to “promote diversity, inclusiveness, and equity in the biomedical research enterprise through evidence-based approaches.” Yet her presentation rested heavily on a supposed measure of bias that evidence has discredited: the Implicit Association Test (IAT). The IAT’s own creators have acknowledged that it lacks validity and reliability as a psychometric tool.

Increasing amounts of faculty time are spent on such antiracism activities. On May 16, 2022, the Anti-Racism Program Manager at the David Geffen School of Medicine at the University of California at Los Angeles hosted a presentation from the Director of Strategy and Equity Education Programs at the Icahn School of Medicine at Mount Sinai titled “Anti-Racist Transformation in Medical Education.” Mount Sinai’s Dean for Medical Education and a medical student joined Mount Sinai’s Director of Strategy and Equity Education Programs for the Los Angeles presentation, since spreading the diversity message apparently takes precedence over academic obligations in New York.

Grand rounds is a century-long tradition for passing on the latest medical breakthroughs. (Thomas Eakins’s great 1889 canvas, The Agnew Clinic, portrays an early grand rounds at the University of Pennsylvania.) Rounds are now a conduit for antiracism reeducation. On May 12, 2022, the Vice Chair for Diversity and Inclusion at the University of Pittsburgh’s Department of Medicine gave a grand rounds at the Cleveland Clinic on the topic “In the Absence of Equity: A Look into the Future.” Afterward, attendees would be expected to describe “exclusion from a historical context” and the effects of “hierarchy on health outcomes”; attendance would confer academic credit toward doctors’ continuing-education obligations.


The medical school curriculum itself needs to be changed to lessen the gap between the academic performance of whites and Asians, on the one hand, and blacks and Hispanics, on the other. Doing so entails replacing pure science courses with credit-bearing advocacy training. More than half of the top 50 medical schools recently surveyed by the Legal Insurrection Foundation required courses in systemic racism. That number will increase after the AAMC’s new guidelines for what medical students and faculty should know transform the curriculum further.

According to the AAMC, newly minted doctors must display “knowledge of the intersectionality of a patient’s multiple identities and how each identity may present varied and multiple forms of oppression or privilege related to clinical decisions and practice.” Faculty are responsible for teaching how to engage with “systems of power, privilege, and oppression” in order to “disrupt oppressive practices.” Failure to comply with these requirements could put a medical school’s accreditation status at risk and lead to a school’s closure.

Mandatory instruction in such politicized concepts will help diversify the faculty and administration—for who better to teach about oppression than a person of color? (Part of the appeal of diversity trainings and bureaucracy, whether in academia or the corporate world, lies in the creation of new employment slots dedicated to diversity activities, which can be filled without as great a sacrifice of meritocratic standards.) But being indoctrinated in “intersectionality” does nothing to improve a student’s clinical knowledge. Every moment spent regurgitating social-justice jargon is time not spent learning how to keep someone alive whose body has just been shattered in a car crash. Advocates of antiracism training never explain how fluency in intersectional critique improves the interpretation of an MRI or the proper prescribing of drugs.


The academic skills gap, confirmed in every measure of knowledge before and during medical school, does not close over the course of medical training, despite remedial instruction. Yet the lower representation of blacks throughout the medical profession is solely attributed to racism on the part of the profession’s gatekeepers. Nature accused itself of denying a “space and a platform” to black researchers, without naming any such researchers against whom it had discriminated or any editor who had done the discriminating. In April 2022, the Institute for Scientific Information decried the fact that the proportion of black authors in medical research did not match U.S. census data on the population at large. Black representation had not improved between 2010 and 2020, lamented the institute. If white supremacy lay behind that lack of progress, it was a mystery why the proportion of published Asian researchers over the same decade had outstripped Asian population changes.

Despite the persistent academic skills gap, a minority hiring surge is under way. Many medical schools require that faculty search committees contain a quota of minority members, that they be overseen by a diversity bureaucrat, and that they interview a specified number of minority candidates. One would have to be particularly dense not to grasp the expected result. In recent years, the Memorial Sloan Kettering Cancer Center, the Cleveland Clinic Taussig Cancer Center, the Uniformed Services University of the Health Sciences, the University of Chicago Cancer Center, the University of Pittsburgh Division of Medical Oncology, the Massey Cancer Center at Virginia Commonwealth University, the University of Miami Miller School of Medicine, and the Department of Medicine at UCLA’s medical school have hired black leaders.

These candidates may all have been the most qualified, but the explicit calls for diversity in medical administration inevitably cast a pall on such selections. In at least one case, the runner-up possessed a research and leadership record that far surpassed that of the winning candidate. But he lacked the favored demographic characteristics.

It matters who heads research ventures and medical faculties. Top scientists can identify the most promising directions of study and organize the most productive research teams. But the diversity push is discouraging some scientists from competing at all. When the chairmanship of UCLA’s Department of Medicine opened up, some qualified faculty members did not even put their names forward because they did not think that they would be considered, according to an observer.

College seniors, deciding whether to apply to medical school, can also read the writing on the wall. A physician-scientist reports that his best lab technician in 30 years was a recent Yale graduate with a B.S. in molecular biology and biochemistry. The former student was intellectually involved and an expert in cloning. His college GPA and MCAT scores were high. The physician-scientist recommended the student to the dean of Northwestern’s medical school (where the scientist then worked), but the student did not get so much as an interview. In fact, this “white, clean-cut Catholic,” in the words of his former employer, was admitted to only one medical school.

Such stories are rife. A UCLA doctor says that the smartest undergraduates in the school’s science labs are saying: “Now that I see what is happening in medicine, I will do something else.”

Funding that once went to scientific research is now being redirected to diversity cultivation. The NIH and the National Science Foundation are diverting billions in taxpayer dollars from trying to cure Alzheimer’s disease and lymphoma to fighting white privilege and cisheteronormativity. Private research support is following the same trajectory. The Howard Hughes Medical Institute is one of the world’s largest philanthropic funders of basic science and arguably the most prestigious. Airline entrepreneur Howard Hughes created the institute in 1953 to probe into the “genesis of life itself.” Now diversity in medical research is at the top of HHMI’s concerns. In May 2022, it announced a $1.5 billion effort to cultivate scientists committed to running a “happy and diverse lab where minoritized scientists will thrive and persist,” in the words of the institute’s vice president. “Experts” in diversity and inclusion will assess early-career academic scientists based on their plans for running “happy and diverse” labs. Those applicants with the most persuasive “happy lab” plans could receive one of the new Freeman Hrabowski scholarships. The scholarships would cover the recipient’s university salary for ten years and would bring the equivalent of two or three NIH grants a year into his academic department. If an applicant’s “happy lab” plan fails to ignite enthusiasm in the diversity reviewers, however, his application will be shelved, no matter how promising his actual scientific research.

The HHMI program and others like it amplify the message that doing basic science, if you are white or Asian, is not particularly valued by the STEM establishment. How many scientific breakthroughs will be forgone by such signals is incalculable.

The leaders of today’s medical schools, professional organizations, and scientific journals would reject the foregoing critique. Teaching racial justice concepts and advocacy is not a swerve from medicine’s core competencies and obligations, they would argue; it is the highest fulfillment of those obligations. Racial disparities in health, they would say, are the biggest medical challenge of our time, and they are a social, not a scientific, problem. If blacks have higher rates of mortality and disease, it is because systematic racism confronts them at every turn. Changing the demographics of the medical profession is essential to eliminating the sometimes-lethal racism that black patients encounter in health care. Changing the profession’s awareness of its own biases is also key to achieving medical equity. And changing the orientation of medical research—away from basic science and toward race theory—simply moves medicine to where it can be most effective.

And here we encounter a second a priori truth: health disparities are the product of systemic racism; any other explanation is taboo and will be ruthlessly punished.

On February 24, 2021, Ed Livingston, deputy editor for clinical reviews and education at the Journal of the American Medical Association (JAMA), recorded a podcast with Mitch Katz, president of New York City Health and Hospitals, called “Structural Racism for Doctors— What Is It?” Livingston, a UCLA surgeon, asked Katz to define structural racism. Katz gave as examples the routing of diesel trucks through poor neighborhoods and disparities in access to top-level medical care. Livingston responded that Katz had described a “very real” problem: impoverished neighborhoods with poor quality of life and little opportunity, where most residents are black and Hispanic. Livingston agreed with the urgency of making sure that all people “have equal opportunities to become successful.” His only quibble was with the current emphasis on “racism,” which “might be hurting” the cause of racial equality, he said. Livingston had been taught to revile discrimination and yet was being told that he was racist. The focus, as Livingston saw it, should be on socioeconomic disparities, not alleged racial animus.

After the podcast became an instant totem of white supremacy, JAMA disappeared it from the web. Livingston himself was disappeared from JAMA shortly thereafter. (Back at his home base at the UCLA medical school, he faced a show trial from fellow faculty members.) JAMA’s editor-in-chief Howard Bauchner, a professor of pediatrics and public health at Boston University, apparently sensed that he might be next on the chopping block and started issuing serial apologies. The disappeared podcast, Bauchner declared, was “inaccurate, offensive, hurtful, and inconsistent with the standards of JAMA.” JAMA would be “instituting changes that will address and prevent such failures from happening again”—a “failure” being defined as deviation from racial justice orthodoxy. Bauchner genuflected further in an official statement: “I once again apologize for the harms caused by this podcast and the tweet about the podcast.” (JAMA had promoted the podcast with a tweet asking: “No physician is racist, so how can there be structural racism in health care?”) For good measure, Bauchner also released a letter dated March 4, 2021, apologizing for the “harm” caused by the tweet and podcast and expressing his “commitment” to call out “injustice, inequity, and racism in medicine.”

JAMA was once a leading forum for physicians and other scientists to present research to their peers. Now JAMA’s overseers regard a fundamental component of the scientific method—debate—as out of bounds, at least regarding the diversity agenda. Livingston’s disagreement with Katz and the “structural racism” conceit was over language, not substance. Yet because Livingston suggested taking the “racism” out of the “structural racism” phrase and focusing instead on equal opportunity, he had, in Bauchner’s widely shared view, harmed blacks and violated professional standards of journalism. No disagreement is tolerated.

Meanwhile, Bauchner’s efforts to distance himself from the “offensive” dialogue were not bearing fruit. Ominously, an AMA committee put him on administrative leave, pending an “independent investigation”—as if there were a complex backstory to what were clearly Livingston’s personal opinions. By June 2021, Bauchner, too, was out, even though, as he ruefully observed, he “did not write or even see the tweet, or create the podcast.”

The chance that the AMA would not appoint an intersectional editor-in-chief to replace the hapless Bauchner was zero. But just to be safe, the AMA named a black epidemiologist specializing in racial disparities to lead the search and staffed the search committee with suitably diverse members. The new editor, Kirsten Bibbins-Domingo, is a “health-equity researcher”—also an overdetermined fact, given the career course of many black M.D.s.

Bibbins-Domingo has already announced her determination to bring in “new voices” to ensure that JAMA’s family of journals regularly “name” structural racism as the cause of health inequities. Will those new voices be conducting the most cutting-edge clinical science? It doesn’t matter: basic science is, at best, irrelevant to structural racism and, at worst, complicit in it.

Livingston’s challenge to the idea that health disparities are caused by racism was sui generis among medical journalists. The hold of that idea within medical publishing is otherwise absolute. The New England Journal of Medicine, another formerly august institution now in thrall to racial politics, presents a nonstop stream of articles on such topics as the “Pathology of Racism,” “Toward Antiracist Allyship in Medicine,” and “How Structural Racism Works—Racist Policies as a Root Cause of U.S. Racial Health Inequities.”

Entire issues of scientific journals have been devoted to racism. Scientific American published a “special collector’s edition” on “The Science of Overcoming Racism” in summer 2021. The edition was dominated by paeans to the IAT, denunciations of the police, and scorn for any suggestion of patient self-efficacy. (Prescribing weight loss to black women, for example, is a “racist” way to fight obesity, wrote a sociology professor and a nutritionist.) A special issue of Science in October 2021 addressed “Criminal Injustice” and “Mass Incarceration.” The issue opened with an editorial by a social work professor claiming that the U.S. crime rate is “comparable to those in many Western industrial nations.” This is a fanciful proposition, in light of the fact that the American firearm homicide rate is 19.5 times higher than the average of other high-income countries, and nearly 43 times higher among 15- to 24-year-olds.

Like the AMA’s Organizational Strategic Plan to Embed Racial Justice and Advance Health Equity, many of these antiracism articles consist of the formulaic rhetoric of academic victim studies, supplemented by the personal narratives that characterized early critical race theory in law schools. Others, though, try to quantify the racism that allegedly produces higher levels of illness and mortality in blacks. Those efforts, done through regression analysis, do not capture the personal behaviors that affect the course of disease, such as compliance with a doctor’s orders, adherence to a medication regime, and showing up for follow-up appointments. In some cases, the regression analysis does not account for the differences in the illnesses suffered by black patients and white patients at the start of the study.

Nevertheless, the second a priori truth—that health disparities are necessarily the product of systemic racism—has devalued basic science and encumbered medical research with red tape. The fight against cancer has been particularly affected. White and Asian oncologists are assumed to be part of the problem of black cancer mortality, not its solution, absent corrective measures. According to the NIH, leadership of cancer labs should match national or local demographics, whichever has a higher percentage of minorities.


Cancer grant applications must now specify who, among a lab’s staff, will enforce diversity mandates and how the lab plans to recruit underrepresented researchers and promote their careers. As with the Howard Hughes Medical Institute’s Freeman Hrabowski scholarships, an insufficiently robust diversity plan means that a proposal will be rejected, regardless of its scientific merit. Discussions about how to beef up the diversity section of a grant have become more important than discussions about tumor biology, reports a physician-scientist. “It is not easy summarizing how your work on cell signaling in nematodes applies to minorities currently living in your lab’s vicinity,” the researcher says. Mental energy spent solving that conundrum is mental energy not spent on science, he laments, since “thinking is always a zero-sum game.”

A lab’s diversity gauntlet has just begun, however. The NIH insists that participants in drug trials must also match national or local demographics. If a cancer center is in an area with few minorities, the lab must nevertheless present a plan for recruiting them into its study, regardless of their local unavailability. Genentech, the creator of lifesaving cancer drugs, held a national conference call with oncologists in April 2022 to discuss products in the research pipeline. Half of the call was spent on the problem of achieving diverse clinical trial enrollments, a participant reported. Genentech admitted to having run out of ideas.

There is no evidence that racist researchers are excluding minorities from drug trials on nonmedical grounds, nor has anyone presented a theory as to why they would. The barriers to such drug trial diversity include a higher incidence among blacks of disqualifying comorbidities, higher levels of personal disorganization, and a suspicion of the medical profession, which suspicion that same profession constantly amplifies with its drumbeat about racism.

In May 2022, a physician-scientist lost her NIH funding for a drug trial because the trial population did not contain enough blacks. The drug under review was for a type of cancer that blacks rarely get; there were, therefore, almost no black patients with that disease to enroll in the trial. Better, however, to foreclose development of a therapy that might help predominantly white cancer patients than to conduct a drug trial without black participants.


The requirement of racial proportionality in drug trials is perplexing, since diversity advocates insist that race is a social construct, without biological reality. Suggesting that genetic differences exist between racial groups will brand you a racist. The AMA’s Organizational Strategic Plan to Embed Racial Justice and Advance Health Equity sneers at “discredited and racist ideas about biological differences between racial groups.” If race does not exist, as received wisdom now has it, then the racial makeup of clinical trials should not matter.

The proponents of the systemic racism hypotheses are making a large bet with potentially lethal consequences. In accordance with the idea that racism causes racial health disparities, they are changing the direction of medical research, the composition of medical faculty, the curriculum of medical schools, the criteria for hiring researchers and for publishing research, and the standards for assessing professional excellence. They are substituting training in political advocacy for training in basic science. They are taking doctors out of the classroom, clinic, and lab and parking them in front of antiracism lecturers. Their preferential policies discourage individuals from pariah groups from going into medicine, regardless of their scientific potential. They have shifted billions of dollars from the investigation of pathophysiology to the production of tracts on microaggressions.

The advocates of this change insist that it is essential to improving minority health. But what if they are wrong? If it turns out that individual behavior, pathogens that disproportionately infect certain groups, and other genetic dispositions have a more proximate influence on health than supposed structural racism, then this reorientation of the medical project will have impeded progress that helps all racial groups. Obstetricians working in inner-city hospitals report that black mothers have higher rates of complications during pregnancy and in delivery because of higher rates of morbid obesity, hypertension, and inattention to prenatal care and prenatal-care appointments. Packing those doctors off to diversity reeducation will not improve black childbirth outcomes. It will, though, divert attention from solutions that could improve those outcomes— whether offering help in keeping appointments and complying with a medication regime or encouraging exercise and weight loss. And yet we are told that efforts directed at behavioral change are racist and that convincing patients that they have power over their health is victim-blaming.

The higher rate of Covid fatalities among blacks is the latest favored proof of medical racism, amplified by a 2022 Oprah Winfrey and Smithsonian Channel documentary, The Color of Care. State and federal health authorities gave priority to minorities in vaccination and immunotherapy campaigns, however, and penalized the highest-risk group—the elderly—simply because that group is disproportionately white. Those are not the actions of white supremacists. The likelier reasons for disparities in Covid outcomes are vaccine hesitancy and obesity rates. When the constant refrain about medical racism intensifies vaccine resistance among blacks, the widened mortality gaps will be used to confirm the racism hypothesis, in a vicious circle.

Medical science has been one of the greatest engines of human progress, liberating millions from crippling disease and premature mortality. It has also seen its share of dead ends and misconceptions. Science goes astray when politics becomes paramount, as in the denial of plant genetics and natural selection under Stalin. America’s very real history of structural racism, a history that took us too long to remedy, resulted in segregated hospitals and cruel disparities in treatment. That past is belatedly but thankfully behind us.

The scientific method is a natural corrective to such fatal errors. Now, tragically, when it comes to the contention that racism is the defining trait of the medical profession and the source of health disparities, opposing views have been ruled out of bounds, and voicing them is grounds for being purged. The separation of politics and science is no longer seen as a source of empirical strength; it is instead a racist dodge that risks “reinforcing existing power structures,” according to the editor of Health Affairs.

The guardians of science have turned on science itself.

The Elite Panic of 2022

From the end of Covid restrictions to Elon Musk’s Twitter bid to the Dobbs ruling, startling developments threaten progressives’ grip on power.

Martin Gurri

On April 18, U.S. District Judge Kathryn Kimball Mizelle struck down the federal requirement for wearing surgical masks on airplanes, in airports, and while riding mass transit. Online videos showed passengers and airline staff ripping off their masks and celebrating in midflight. Given the accumulated frustration of two years of pandemic travel, the reaction was understandable.

Far more remarkable was the vehemence of those opposed to the ruling. Judge Mizelle was unfit for office, they said. She was too young, at 35; she was unelected; she was a single, unrepresentative voice. Worst of all, she was an “activist Trump judge” and thus branded with the mark of the beast. Rescinding government policy—the kind of thing that American judges engage in with abandon, and usually to progressive cheers—in this instance was condemned as a usurpation of the powers of the executive branch.

Judge Mizelle had crashed an exclusive party reserved for people of higher caste. “The CDC has the capability, through a large number of trained epidemiologists, scientists, to be able to make projections and make recommendations,” said Anthony Fauci, bureaucratic czar of all things Covid-19. “Far more than a judge with no experience in public health.”

That was the heart of the matter. Fauci embodied a bureaucracy and political class that, with the active support of the media, had converted the public’s fear of infection into a principle of elite authority. Under this principle, only trained scientists can make projections and recommendations. The writ of government stretched as far as the boundaries of scientific truth—and those boundaries were, of course, determined by government agencies. It wasn’t just a question of specific policies like lockdowns and vaccine mandates. At stake was the restoration of the public’s habit of obedience that had gone missing during the Trump years.

By spring of this year, however, the public had shed most of its fears, as the in-flight celebrations demonstrated. Legally and psychologically, the state of emergency couldn’t last forever. Judge Mizelle merely officiated at the burial rites over the carcass of an improvised authority. The disproportion between a ruling about masks and the existential howl of the opposition can be explained in terms of the loss of elite control—and it wasn’t the only recent example of such panic.

Three days after the mask mandate was struck down, on April 21, Barack Obama delivered the bad news about “disinformation” to a Stanford University forum on that subject. His unacknowledged theme, too, was the crisis of elite authority, which he explained with a history lesson. The twentieth century, Obama said, may have excluded “women and people of color,” but it was a time of information sanity, when the masses gathered in the great American family room to receive the news from Walter Cronkite and laugh over I Dream of Jeannie and The Jeffersons. Those were the days when a “shared culture” could operate on a “shared set of facts.”

The digital age has battered that peaceable kingdom to bits. Obama seemed unaware of the argument he was making, but it boiled down to this: the rise of social inclusiveness has opened the door to political chaos. As in the Judge Mizelle flap, the question, asked only tacitly, was who had the authority to make projections and recommendations.

Online, everyone did. People with opinions that the former president found toxic—nationalists, white supremacists, unhinged Republicans, Vladimir Putin and his gang of Russian hackers—could say anything they wished on the Web, no matter how irresponsible, including lies. A defenseless public, sunk in ignorance, could be deceived into voting against enlightened Democrats.

Total blindness to the other side of the story is a partisan affliction that Obama makes no attempt to overcome. At Stanford, he never mentioned the most effective disinformation campaign of recent times, conducted against Trump by the Hillary Clinton campaign, in which members of his own administration participated. He simply doesn’t believe that it works that way. Disinformation, for him, is a form of lèse-majesté—any insult to the progressive ruling class.

How are we to deal with this “tumultuous, dangerous moment in history”? Obama was clear about the answer: we must recover the power to exclude certain voices, this time through regulation. The government must assume control over disorderly online speech. First Amendment guarantees of freedom of speech don’t apply to private companies like Facebook and Twitter, he noted. At the same time, since these companies “play a unique role in how we . . . are consuming information,” the state must impose “accountability.” The examples he provided betray nostalgia for a lost era: the “meat inspector,” who would presumably check on how the algorithmic sausage is made; and the Fairness Doctrine, which somehow would be applied to an information universe virtually infinite in volume.

Obama views disinformation much as Fauci does Covid-19: as a lever of authority in the hands of the guardian class. Democracy, he tells us over and again, must be protected from “toxic content.” But by democracy, he means the rule of the righteous, a group that coincides exactly with his partisan inclinations. By toxic, he means anything that smacks of Trumpism. The former president’s speech was vague on details, but it left all options open. Who can say what pretext will be needed to expel the next rough beast from social media, tomorrow or the day after?

The tone, nevertheless, was far from hopeful. The Barack Obama who spoke at Stanford plainly believed that nefarious forces have overrun the information sphere and that the Web, supreme medium of American politics, is now the mother of lies. The talk of regulation, like the strangely unprogressive yearning for the twentieth century, was a plea for dramatic intervention from a government that can’t enact an annual budget. An air of quiet desperation, I thought, clung to the performance.

Obama’s speech, in turn, took place four days before the apparent sale of Twitter to Elon Musk—at which point elite despair, always volatile, at last exploded in a fireball of rage and panic. Our unhappy age possesses a capacity for rhetorical grandiosity that may be unparalleled in history. Twitter in the clutches of the “supervillain” Musk, we were warned, could trigger World War III “and the destruction of our planet.” The complaints about Judge Mizelle resembled genteel mumblings in comparison.

In April, a U.S. district judge struck down federal mask mandates for all transit.

Two clear themes emerged from the noise surrounding this pseudo-event. The first is that the people in charge of American politics and culture are obsessed with control and hostile to any principle that might interfere with it. After expelling Trump and edging out founder Jack Dorsey, Twitter had evolved into a gated community for progressive-minded elites, with heretical opinions on race and Covid-19, for example, ever more tightly policed. Musk, whose Twitter acquisition is not yet finalized, calls himself a “free speech absolutist.” He agreed to purchase the platform with the aim of returning it to a more “neutral” posture.

For a considerable number of agitated people, the goal of neutrality was an abomination. Suddenly, “free speech” became a code for something dark and evil—racism, white nationalism, oligarchy, transphobia, “extremist right-wing Nazis”—all the phantoms and goblins that inhabit the nightmares of the progressive mind. Self-awareness was the first casualty of this war of words. The Washington Post, owned by multibillionaire Jeff Bezos, solemnly preached the need for regulation “to prevent rich people from controlling our channels of communication.”

Following the Obama formula, the itch to control what Americans can say online was equated with the defense of freedom. Granting unfettered speech to the rabble, as Musk intended, would be “dangerous to our democracy,” Elizabeth Warren said. “For democracy to survive we need more content moderation, not less,” was how Max Boot, Washington Post columnist, put it. “We must pass laws to protect privacy and promote algorithmic justice for internet users,” was the bizarre formulation of Ed Markey, junior senator from Massachusetts. The Biden White House, never a hotbed of originality, recited the Obama refrain about holding the digital platforms “accountable” for the “harm” they inflict on us.

Honesty is a rare quality in the exercise of the will to power. For at least a generation, our elites haven’t valued anyone’s freedom but their own; still, it was fascinating to hear them say it.

The second theme follows from the first. The elites are convinced that their control over American society is slipping away. They have conquered the presidency, both houses of Congress, and the entirety of our culture; yet their mood is one of panic and resentment. Trump, they are certain, will return to Twitter. There, as Obama warned, he will spew toxic clouds of disinformation and poison the minds of a gullible public. Attempts to enforce truth by legal means will be foiled by Trumpist judges like Mizelle. Inevitably, Trump will be reelected, and a revanchist holocaust will ensue.

To conservatives and Republicans, this rending of garments will appear disingenuous. After all, if you are obsessed with control, you will never get enough of it. But there’s no necessary contradiction between the two perspectives: you can be addicted to control and aware of its loss. In a vague and inchoate way, the progressive elites sense that they have power but lack authority. They live in dread of a reversal in the tide of history that will bestow the future on the worst kind of people and the bloody idols they worship.

Much of the gloom reflected the changed political climate. Starting with the onset of Covid-19 in the spring of 2020, elite fortunes took an almost magical turn. The pandemic frightened the public into docility. The Black Lives Matter riots enshrined, as not only legitimate but mandatory, racial doctrines that demanded constant state interference in every corner of American culture. The malevolent Trump went down to defeat, and the presidency passed to Biden, a hollow man easily led by the progressive zealots around him. The Senate flipped Democratic.

This victory parade culminated on January 6, 2021, when Trump’s outrageous behavior led to his expulsion from social media and the shunning by polite society of anything that reeked of “insurgent” Republicanism. By Inauguration Day, the elites and their identitarian ideas seemed to stand unchallenged.

That was never quite true—and the moment quickly passed. The burden of incumbency has crushed the Democrats. With Trump gone, they have nothing of substance to rally around. President Biden has staggered from disaster to disaster and is touching calamitous levels of unpopularity. Biden inherited peace and a recovering economy but must now deal with inflation, shortages, and a major war in Europe.

As I write, the conventional wisdom is that the 2022 midterm elections will witness a slaughter of Democrats in both the House and Senate. With Biden’s incumbency blocking better candidates, Democratic prospects of retaining the presidency in 2024 look increasingly poor. Next Inauguration Day, the elite class, so recently triumphant, could find itself stranded in the political wilderness. Yet even then, all would not be lost.

There is a tremendous asymmetry in the alignment of ideological forces in this country. Politically, we are fractured: war-bands of every denomination prowl restlessly through a zone of perpetual conflict. Electorally, we are divided. Voting is binary: in practice, this means that the war-bands get artificially squeezed into one of two mega-tribes. On Election Day, we must choose one or the other—and, because of the dynamic among war-bands, any one of which can defect at any moment, majorities rest on a razor’s edge.

Culturally, however, we are monolithic. From the scientific establishment through the corporate boardroom all the way to Hollywood, elite keepers of our culture speak with a single, shrill voice—and the script always follows the dogmas of one particular war-band—the cult of identity—and the politics of one specific partisan flavor, that of progressive Democrats.

The imbalance between a divided nation and a monolithic culture warps our shared perception of reality. A potentially scandalous story about the son of the Democratic presidential candidate, though entirely true, can be smothered to death by Facebook, Twitter, and Google. On the other side, if you are a former Republican president, you can expect to get locked out of social media permanently, even though 74 million Americans voted for you.

“Free speech absolutist” Elon Musk’s attempt to buy Twitter and return it to a more neutral posture enraged progressives.

These decisions don’t reflect a consensus of public opinion. None of us was polled on the proper informational treatment for Hunter Biden or Donald Trump. This was control at a far more elemental level—and only here, in the murky depths of truth and post-truth, can we discern the motive for this year’s meltdown over disinformation and its avatar, Musk. The elites, confronting what they believe to be a political tempest of biblical proportions, are terrified of losing their monopoly over culture as well.


Whether this will actually happen is beyond the reach of analysis: culture evolves in mysterious ways. But it may be useful to speculate on the matter. In this spirit, let me propose three strong countercurrents, already visible across the American landscape, that might, in time, threaten the cultural supremacy of the elites.

The first is the intrusion of the political into the cultural. Since conservatives and Republicans are politically strong but culturally nonexistent, they will flex their political muscle to try to right the imbalance. Virginia and Florida have banned the teaching of certain progressive doctrines in public schools. When Disney, Florida’s largest employer, vocally condemned these laws, the company was punished with the removal of local privileges. Should Republicans win Congress and the White House, I would expect American politics to experience a cultural Armageddon. The output of culture can’t be legislated on demand: otherwise, the Soviet Union would have been a golden age of creativity. But raw political power can make the cost of cultural monopoly—and of idle posturing, Disney-style—unpleasantly high.

A second threat to elite culture is the defection of the victim class. The cult of identity generates an insatiable demand for victim groups, which, by necessity, must become ever smaller and more marginal not only to the mainstream but also to traditional minorities. Even as the elites solidified their grip on culture, the focus of their performative outrage was drifting from civil rights and pocketbook issues to more esoteric questions of sexuality and climate justice. The new causes simply don’t resonate with Hispanics or blacks, whose socioeconomic interests lie in other directions. According to recent polls, significant numbers of both groups are threatening to abandon the Democratic Party.

Progressivism is essentially a protection racket. If the elites ever lose the undisputed right to shout “Racism!” at the producers of culture, the latter will begin to fracture like the rest of the country and to look to the marketplace, rather than ideology, for inspiration.

The last countercurrent may be the most potent of all: the internal churning and dispersal of populations spurred by the pandemic and the availability of remote work. The number of Americans moving from their home regions, a recent survey found, is at the highest level on record. Though conservative writers are quick to observe that this is predominantly a flight from Democratic-controlled states to Republican strongholds in the Sunbelt, the political implications strike me as unclear. Many of the newcomers, I’m guessing, will be Democrats.

Far more significant will be the impact on the culture. Migration is a powerful solvent. Millions of people are leaving home in pursuit of change. They wish to be reborn, reinvented, liberated from the dead hand of the past; pick your metaphor for personal transformation. Such sweeping tides of humanity have always exemplified the central tenet of the American creed: that we are not captives to fate. Each wave of immigrants will begin a strange new story. To tell it, the culture, too, must be reborn and reinvented—and the mold of progressive dogmatism will be shattered in the process.

An unexpected blow against the progressive hold on culture came on May 2, when an anonymous leaker within the Supreme Court made public Justice Samuel Alito’s draft decision to overturn Roe v. Wade and devolve the regulation of abortion to Congress and the states. By the time the formal ruling came down on June 24, traumatized elites seemed ready to repudiate the one branch of the federal government that they did not control. The Supreme Court had “burned whatever legitimacy they may still have had,” Senator Elizabeth Warren proclaimed. “They just took the last of it and set a torch to it.” Abortion on demand—an early victory over traditional culture—has become sacramental to the left, with Roe v. Wade as holy writ. If Republican governors can align with Republican-appointed justices to demolish this once-settled arrangement, then every facet of the culture will be up for grabs. Justice Alito’s opinion “is not just about a woman’s right to choose. It is about much more than that,” cautioned Hillary Clinton, after the draft leaked. “Once you allow this kind of extreme power to take hold, you have no idea who they will come for next.”

The Supreme Court decision in the Dobbs case, overturning Roe v. Wade, proved traumatic for the Left.

Are we on the cusp, then, of an anti-elite cultural revolution? I still wouldn’t bet on it. For obscure reasons of psychology, creative minds incline to radical politics. A kulturkampf directed from Tallahassee, Florida, or even Washington, D.C., won’t budge that reality much. The group portrait of American culture will continue to tilt left indefinitely.

But that’s not the question at hand. What terrifies elites is the loss of their cultural monopoly in the face of a foretold political disaster. They fear diversity of any kind, with good cause: to the extent that the public enjoys a variety of choices in cultural products, elite control will be proportionately diluted.

Our cultural monolith, never popular, is today pounded by crosscurrents that undermine its solidity. Alongside the vast progressive choir, quieter voices—conservative, libertarian, religious, none-of-the-above—could soon arise, leaving our culture more fractured, more divided, and more representative of the nation as a whole. If that were to occur, sullen elites would point to the sale of Twitter to Elon Musk in 2022’s springtime of discontent and remark, with typical vehemence, that their panic was fully justified.

School Choice Rising


Parental discontent with public education has sparked new momentum for alternatives.

Steven Malanga

For school-choice supporters, 2011 is remembered as a banner year that reshaped education policy. It began with a newly installed Republican-led House of Representatives voting to reinstate Washington, D.C.’s Opportunity Scholarship Program, which congressional Democrats had worked to kill. A flurry of new state initiatives followed, from higher charter school caps to scholarship programs for kids leaving public schools. Behind the burst of activity was a dramatic change in leadership in some states after the 2009 and 2010 gubernatorial elections, with school-choice-supporting Republicans gaining a net of seven governors’ seats. States with new governors, including Wisconsin, Florida, and Oklahoma, accounted for half of school-choice legislation passed that year.

It has taken a decade and then a pandemic for the school-choice movement to surpass 2011’s achievements. Widespread public school closures and harsh health mandates in response to Covid-19—including compulsory masking of students—angered many parents, as did school districts adopting unpopular programs inspired by critical race theory, or CRT. One result: a bonanza of new choice legislation in 2021. In all, 18 states (six more than in 2011) either instituted school-choice programs for the first time or expanded offerings. Some 3.6 million students nationwide gained potential access to more education options, thanks to these moves. Several states widened initiatives originally designed to help lower-income children so that middle-class kids could also take advantage of them—an illustration of how disillusionment with public schools during the pandemic extended beyond failing urban districts. Homeschooling also received a boost in some states.

Several states have recently created education savings accounts that residents can use to finance alternatives to public schools—including homeschooling, which became more popular during Covid closures.

The broadening discontent may mark a decisive change for school choice. Far from dissipating after last year’s activity, the momentum seems to be building. Already in 2022, state legislators around the country have sought to bring scholarships or charter schools into their state or to boost funding of existing programs. More radical proposals, like breaking troubled districts into smaller groups of charter schools, are also under consideration. Recent polls show support rising, even in Democratic-leaning states, for choice initiatives like education savings accounts (ESAs), tax credits for donors to scholarship programs, vouchers for special-education students, and charter schools.

And the unexpected victory in Virginia last November of Republican Glenn Youngkin, who tapped into parents’ opposition to CRT in schools, has helped propel education reform as an election issue in other states this fall. “In Florida, we are taking a stand against the state-sanctioned racism that is critical race theory,” Governor Ron DeSantis, up for reelection this year, said in April. “We won’t allow Florida tax dollars to be spent teaching kids to hate our country or to hate each other.”

The stakes in November are significant, with 36 governorships and thousands of state legislative seats up for grabs. Victories by reform advocates might initiate even more far-reaching efforts in 2023. No wonder optimism among choice backers has rarely been higher.

Economist Milton Friedman originally proposed the idea of school choice in a 1955 paper, arguing that the current model for government-financed and administered public education concentrated too much power in the public sector, leading inevitably to an ineffective monopoly. He suggested a dramatic way to stave off decline: give money to parents and let them choose where to send their children to school. Over the years, as the performance of public schools, especially in urban districts serving low-income kids, declined, Friedman’s criticism intensified. In a 1995 Washington Post essay, he decried a system that produced “dismal results: some relatively good government schools in high-income suburbs and communities; very poor government schools in our inner cities.” Mounting support for school choice, he observed, suggested a tide that only the educational bureaucracy was holding back.

In some early forms, school choice was bipartisan. One of the movement’s early signature programs, vouchers introduced in Milwaukee in 1990, was the work of Democratic mayor John Norquist and Wisconsin Republican governor Tommy Thompson. Democrat Cory Booker won a 2006 Newark mayoral election stumping for alternatives to the city’s awful public schools. After Republican Chris Christie won New Jersey’s governorship in 2009, he worked with Booker to grow the number of charter schools in that city. In his 2008 election campaign, Barack Obama promised to double the federal money allocated for charters under President George W. Bush.

Over time, that bipartisanship faded. One sign of a widening division emerged after Obama became president. He increased funding for charters modestly, but the support fell well short of his initial promises. Obama also backed away from extending the Washington, D.C. Opportunity Scholarship Program, begun by his Republican predecessor Bush, which many of Obama’s Democratic allies, including teachers’ unions, opposed. The program, funding scholarships at private schools for some 2,000 low-income kids, was on the verge of being phased out by Obama, though it had proved so popular that its waiting list numbered some 9,000 students. But the 2010 midterm elections saved it, as Republicans, with a new majority in Congress, and a small group of Democrats, led by Senator Joe Lieberman, voted to renew the scholarships.

That victory, combined with state election results, energized the school-choice movement.

Advocates in Wisconsin and around the nation heavily backed Milwaukee County Executive Scott Walker in his 2010 bid to become governor of the state. Opponents poured money into the campaign of Walker’s adversary, Democratic Milwaukee mayor Tom Barrett. After Walker won with nearly 53 percent of the vote, he quickly made national headlines by introducing Act 10, legislation that allowed employees in Wisconsin to opt out of unions and that took away the right of government labor groups to bargain collectively over wages and benefits. Almost immediately, public-sector union membership slumped in Wisconsin, shrinking from nearly 15 percent of all government workers to just 8 percent—in the process, cutting the power of labor groups, including the formidable Wisconsin teachers’ union.

Following that victory, in 2011, Walker signed legislation expanding the 20-year-old Milwaukee choice program, opening it to all students in the district and introducing a similar voucher plan in Racine. Walker would keep plugging away at school choice—in 2013, for instance, making vouchers available to qualifying families of students around the state. The number of students benefiting from that effort has grown from 511 in 2014 to more than 14,500 today.

Business executive Rick Scott similarly made school choice a key part of his campaign to win Florida’s 2010 gubernatorial election, after Republican-turned-independent Charlie Crist declined to seek reelection. “I want to offer parents a menu of options for their children, including but not limited to charter schools, private schools, homeschooling and virtual schools,” Scott said. “I want to create an educational program that will allow parents to get creative in how to meet the distinctive needs of their children.” Winning in a close, contentious race, Scott signed several key pieces of education legislation during his first year in office—increasing the number of special-education vouchers, making it easier for students from failing high schools to transfer to new schools, and raising the number of charter schools. He added other reforms in ensuing years, making Florida a school-choice leader.

In Oklahoma, Republican Mary Fallin made history in 2010, becoming the state’s first female governor. One of her first-year victories was a law creating scholarships to private schools for low-income students, via a tax-credit program. In Indiana, Governor Mitch Daniels, in his third year in office, signed in 2011 what may have been the nation’s most sweeping school-choice legislation—removing the state cap on charter schools, enabling universities in Indiana to authorize charters, and establishing vouchers to help low- and middle-income students finance their education at private schools. Arizona Republican governor Jan Brewer, meantime, signed a measure creating ESAs for special-needs students—an idea that subsequently spread to other states, including Ohio, where Governor John Kasich doubled the size of the state’s choice scholarship program and boosted the money that low-income students in Cleveland could tap for scholarships and tutoring. In Louisiana, Governor Bobby Jindal, who had expanded vouchers and tuition tax-credit programs earlier in his administration, added a scholarship-choice initiative for special-needs kids in 2011.

After 2011, the population of students taking advantage of school choice would rise significantly. The number of children using ESAs, government scholarship programs, and vouchers more than tripled over ten years, from 200,000 in 2011 to 621,000 today. By 2021, ESAs were providing students with some $3.2 billion annually to spend on their K–12 education, and tax-credit scholarship initiatives distributed nearly $3 billion more.

But this growth has run into limits. Until the pandemic, the most credible argument for school choice remained that it was a way to help children from low-income families or with special needs in failing districts. Enthusiasm for more universal programs was less strong in many places. Even in Republican-leaning states, skeptics included otherwise-conservative legislators from rural districts, who sometimes opposed educational choice on the grounds that too few alternative schools existed in their districts. Moreover, as the Democratic Party under Obama, and then during the Trump years, moved further leftward, opposition to school-choice programs grew intransigent; bipartisanship, such as it was, disappeared. An American Enterprise Institute analysis of some 70 state programs found that Republicans provided 2,844 votes to pass choice bills, with Democrats voting for them just 381 times. While Democrats have never been the main drivers of alternatives to centralized public school education, many choice programs are passed today without any help from the party. That makes new legislative victories for the GOP the most likely path for future expansion of educational choice.

The pandemic proved a school-choice accelerant. When Covid hit, schools were among the institutions that officials shut down first, largely because we knew so little about the virus, how it spread, and who was most vulnerable. Fairly quickly, however, it became clear that children were among the least affected and that spread was not common in schools. In Europe, many school systems reopened in late spring 2020, just months after the virus struck. In America, openings were slower to happen. A late 2021 UNESCO survey estimated that France had closed its schools for only 12 weeks over the previous two years. In Spain, the number was 15 weeks; in the U.K., schools shut down for 27 weeks, and in Germany for 38. American schools had closed for 71 weeks on average.

Florida governor Ron DeSantis (shown here signing a 2019 bill to create a new voucher program) has been a leader in pushing for parents’ educational rights, from barring the use of critical race theory in schools to expanding scholarships for poor children.


As time wore on, many American parents also got a lesson in the power of teachers’ unions to dictate policy. School closures varied considerably by district and by state; one audit found that states with some of the least in-person instruction during the pandemic included California, Oregon, Washington, Illinois, New Jersey, and Massachusetts—all with strong teachers’ unions. In Chicago, the teachers’ union made national headlines by shutting down schools several times, including during the spread of the Omicron variant. Meantime, two states where unions enjoyed far less bargaining power—Texas and Florida—boasted the most in-person instruction.

Pervasive discontent with public schools manifested itself in unprecedented enrollment declines. In the school year starting in September 2020, enrollment fell 3 percent, according to the National Center for Education Statistics—a trend that seems to have continued for a second year, according to a National Public Radio survey of major American school districts. Early reporting in California, for example, suggests that enrollment fell by 1.8 percent in the 2021–22 school year, on top of a 2.6 percent drop the previous year. Bigger transformations may be coming. Los Angeles school officials recently warned that the district faces an unprecedented 30 percent enrollment drop in the next decade, driven by demographic factors and a shift toward alternative schools.

By contrast, a Cato survey of K–12 private schools estimated that recent enrollment gains may have been as high as 7 percent. The National Alliance for Charter Schools reported a similar rise among schools that it surveyed. These numbers are no mystery. In a 2022 poll, 18 percent of parents said that they had switched schools for one of their children recently, and more than half were considering changes. More than one-third reported that the main reason they were looking elsewhere was their current school’s pandemic policies.

Lurking in that poll is another explosive detail: 21 percent of parents considering a move said that they wanted more of a say in their child’s curriculum. While that sentiment can mean many things, the issue of school curriculum took on new importance over the last two years—above all, with the rise of critical race theory. Following the death of George Floyd in police custody in Minneapolis in May 2020, progressive educators and politicians intensified their advocacy for an instructional approach that teaches kids that systemic racism is pervasive in the United States and that whites enjoy special privileges that impede the advancement of other racial groups.

A backlash ensued. In Loudoun County, Virginia, parents showed up at a board of education meeting to decry the racialized pedagogy. “You are now training our children to be social justice warriors and to loathe our country and our history,” one Chinese-American mother told the board. “Growing up in Mao’s China, all this seems very familiar.” Parents sued the board, claiming that the instruction was racially discriminatory. In Wisconsin, a black retired Air Force pilot helped lead protests against CRT in schools, calling it a “pernicious ideology” and demanding curriculum transparency.

In turn, some educators and government officials reacted to the parental concerns in ways that caused even more controversy. When parents in a Missouri school district objected to racialized instruction, a district literacy coordinator encouraged teachers to hide course materials from parents. As protests spread around the country, the National School Boards Association sent the Biden administration a letter claiming that school officials were under “immediate threat.” U.S. Attorney General Merrick Garland then directed the FBI to investigate school protests as possible “domestic terrorism,” enraging parents. In Virginia, 2021 gubernatorial candidate Terry McAuliffe objected to a curriculum transparency bill because “I don’t think parents should be telling schools what they should teach.” Republican Youngkin, who supported transparency, subsequently upset McAuliffe in the November election.

“After a year and a half, almost two years, of incredibly disrupted institutional experience that was visited on almost every family in the country, you probably shouldn’t say something like ‘Parents don’t matter,’ ” Derrell Bradford, president of 50 Can, a school-choice advocacy group, observed at a Harvard conference on education policy. “There’s a lesson there about treating people poorly.”

What followed Covid lockdowns and curriculum fights was a burst of education legislation, marking another milestone in the choice movement. Eighteen states launched new choice programs or added to existing ones in 2021. The earlier emphasis had been on alternative schools like charters; but in 2021, legislation focused more on providing parents—including those in middle- and upper-income groups—with additional education options so that they could select those that suited them best. Florida again took a leading role. In May 2021, Governor Ron DeSantis traveled to a Catholic high school in Hialeah to sign a bill that committed about $200 million to increased scholarships for low-income students, covering 100 percent of tuition at the school of their choice, while also raising the income cap on the program so that families making up to $100,000 a year could qualify. The state estimated that the expansion, which also exempts children of military personnel from scholarship waiting lists, would make it possible for 60,000 more kids to take advantage of the initiative.

New Hampshire illustrated the demand for such programs. Pre-pandemic, the state offered limited choice-scholarship money, funded by businesses, to low-income children; the program had a waiting list of just 30 students. Once Covid lockdowns began, that list lengthened to 800 kids. Seeing the demand, state Republicans, who had wanted to build out school-choice offerings, passed a law in June 2021 to establish ESAs, giving families making less than $79,500 a year $4,000 to $5,000 in state money, which could be used on private school instruction, homeschooling materials, or other services. By November, New Hampshire already had 1,600 applicants for the program. Indiana, Kentucky, Missouri, and West Virginia joined New Hampshire in creating ESAs last year.

West Virginia shows how the tide has turned, post-Covid. Back in 2018 and 2019, teachers in the state mobilized to stop an expansion of charters and to vie for higher pay and better benefits. After striking in both years, they had managed, for example, to water down a charter school bill so that it sanctioned just three new schools a year—this, in one of the few states at the time with no educational choice. The union also waged campaigns against Republican state legislators who had sponsored school-choice measures, defeating two of them. But following Covid school closings and a 2020 election in which Republicans won a state legislature supermajority, lawmakers passed a bill that offers ESAs to students of all income categories. The expansive new initiative, like other ESAs, pays for tuition at private schools and includes money for tutoring and to purchase learning materials.

Last year’s successes may be a prelude to further gains. For one thing, government schools are flush with cash after the 2021 Biden stimulus provided K–12 education with an unprecedented $128 billion in federal money. That has helped mute criticisms that school-choice programs take money away from traditional public schools, leaving them cash-starved. In addition, polls suggest that Republican candidates are poised to make substantial gains in November elections—not just in Washington but in state and local races, too. That could supercharge school-choice initiatives in places where Democrats or moderate Republicans have been blocking programs.

The year has gotten off to a fast start. Alabama’s legislature in April increased funding for education scholarships by 50 percent, to $30 million. Earlier, South Dakota lawmakers expanded the state’s tax-credit scholarships.


Thousands of Milwaukee parents—including Sheila Haygood, shown here with her four daughters—have taken advantage of a voucher program in Wisconsin, one of the nation’s earliest, that pays for children in the city to attend private schools.

“It seems like every Republican lawmaker is sponsoring an education bill in this General Assembly,” an Ohio newspaper quipped in March. Among other things, Buckeye State Republicans want the state’s vouchers to cover all students. Tennessee is considering a similar bill that would expand statewide a 2019 voucher program originally designed for students in Memphis and Nashville. In South Carolina, lawmakers want to use part of a budget surplus to create the state’s first vouchers. “The two things that I think are very distinct and loud that we’ve heard is that parents want a voice in their children’s education,” the head of the state’s legislative budget-writing committee said. Arizona expanded access to its state-funded scholarship program, originally designed for disabled children, to all students, allowing them to spend tax dollars on a school of their choice. In Wisconsin, Republicans have proposed breaking up the Milwaukee school system into several smaller systems. The bill is a response to a sharp drop in student performance during the pandemic.

The ballot box may also prove decisive this election cycle (or next) in Michigan, where school-choice supporters, backed by former Secretary of Education Betsy DeVos, are trying to get enough signatures to place the Let Kids Learn referendum on the November ballot. DeVos and family members have contributed $400,000 to the effort, which would establish ESAs in the state.

Some individual state races will be crucial. Wisconsin governor Tony Evers, a Democrat, has stymied Republican-led school-reform initiatives. He’s up for reelection, facing voters worried about crime, the economy, and schools. Wisconsin voters, polling suggests, strongly approve of the state’s school-choice program, so it’s no surprise that Republicans vying to run against Evers are talking about education reform. Former lieutenant governor Rebecca Kleefisch, for instance, kicked off her campaign with a 30-second ad lamenting school closings and pledging to expand school choice.

Though Texas is considered one of the nation’s most conservative states, its record on school reform is modest. Advocates hope to change that by defeating Democrats and, in primaries, moderate Republicans opposed to school choice. Groups like the Texas Federation for Children, a PAC, have poured money into school-choice candidates’ campaigns. Already, in a special state-legislative election in Waxahachie, a candidate backed by choice advocates handily defeated another Republican endorsed by the local teachers’ union. The upheaval has also struck primary elections for the state board of education, where two incumbent Republicans lost to candidates viewed as more supportive of school choice. Betsy DeVos has also waded into local battles, writing in the Fort Worth Star-Telegram that Texas’s educational opportunities pale beside Florida’s: “Texas’ students felt this pain acutely during the pandemic, as many districts shut down and left students and families scrambling,” DeVos noted. “Add in concerns about how public schools are handling issues such as teachings on race and sex, and it’s even more remarkable Texas continues to leave government, not parents, in control of education.” She issued a challenge to public officials in the state, warning that parents “aren’t inclined to accept any more excuses as to why there’s a Texas-sized hole on the map of states that empower families to make the best educational choices.”

The last two school years may turn out to be the launching point of a transformational era in American education. School closures and other Covid-related measures, along with the outsize power that teachers’ unions wielded over classrooms, have drawn far more parents into the school-reform fold than ever before. No longer is public school choice an issue largely confined to poor parents in failing districts. More important, polls and interviews illustrate that many parents are upset enough to change how they vote in order to get reform. “Interviews with New Jersey voters revealed that some Democrats’ breaks from their party last fall were neither flippant nor fleeting,” the Wall Street Journal recently observed. “Many [voters] described personal struggles to stress what they viewed as the needs of their family or community over partisanship.” The next six months will tell us just how deep those new priorities are.

Subsidizing Addiction

The government pays homeless addicts to stay on drugs and alcohol.

Judge Glock

Ira, an older, soft-spoken homeless man, recently went to the Downtown Austin Community Court to see if he was eligible for free housing. One of my friends, a fellow researcher, accompanied him. Ira answered questions about his background, including whether he had had run-ins with the law or a history of drug abuse. After the interview, the social worker at the court told Ira that his problems were not severe enough to get housing. Dejected, Ira joked, “If only I would have been a drug addict.” The social worker shrugged and responded that the community court’s housing program “takes a lot of things into consideration, but yeah.”

Today, drug abuse is not a barrier for homeless people seeking housing and welfare. In fact, many policies make drug abuse a prerequisite for services. Federal, state, and local programs give addicts more funds and assistance than nonaddicts. And other favors go to homeless individuals who can prove that they’re engaging in criminal activity.

The government claims that it is trying to support the most vulnerable people, which means finding those with the most problems. But it has come to regard those problems as immutable, in need of a constant flow of funding. The government ignores how, by rewarding destructive behavior, it makes it harder for people to get their lives together—and thus, how it is encouraging the very problems that it claims to be solving.

Most Americans recognize that subsidizing drug abuse and crime is a terrible idea. In the 1990s, the public forced Congress to end similar, older programs. But the bureaucracy and the advocates have found ways to resurrect such policies—and expand them. For many welfare programs today, the deserving recipients are no longer those with the most setbacks or the least income but those maintaining the worst addictions and committing the most crimes. If anyone wonders why, say, Los Angeles suffers more than 2,000 homeless deaths yearly—quadruple the level of about a decade ago—and if anyone wonders why drug abuse and violence are the overwhelming killers of the homeless, one reason is that the government is paying them to kill themselves and one another.

Needle exchange in Albuquerque, New Mexico: many government programs hand out free needles to anyone who asks.

We have seen the lamentable effects of subsidizing addiction before. From 1972 to 1996, the government defined addiction to drugs and alcohol as a disability, which meant that an addict could get a monthly disability check. If you could prove to a federal bureaucrat that you had a crippling drug dependence, the government would pay you enough to feed yourself and your habit. If you got clean, it would declare you recovered and cancel your payments. The incentives, as economists say, were perverse.

Congress created Supplemental Security Income (SSI) in 1972 to combine disability payments for the poor into a single program. Representative Hugh Carey, a New York Democrat, was worried about his own state’s then-unique disability program, which funded heroin addicts—not because he wanted to end it but because he wanted federal taxpayers to pay for it. He succeeded in getting drug and alcohol addiction included as a disability in the House of Representatives’ SSI bill. Senator Harold Hughes, a Democrat from Iowa and a recovering alcoholic, understood the dangers of subsidizing addiction and got it excluded from the Senate’s version. Yet Carey triumphed over Hughes in the congressional conference committee. Soon, about 10,000 addicts were receiving SSI checks—almost all of them from New York’s welfare rolls.

In its early years, the addiction program remained limited in scope. The Social Security Administration, more at ease cutting checks for the elderly than administering a complicated welfare scheme, disfavored it. But a 1984 congressional expansion of disability benefits, and some new court rulings, added more drug abusers to the caseload. Then, in 1989, the federal government began spending tens of millions of dollars in outreach to get people on the disability rolls, focusing on the homeless, whose frequent addiction woes provided an easy route to benefits. Soon, 250,000 addicts were on the rolls.

Americans would eventually reject the experiment. Newspaper reports told of addicts dying the day the latest SSI check arrived. An episode of NBC’s Dateline showcased a recovered alcoholic, who observed that the federal checks to addicts were “killing them on the installment plan.” CBS’s 60 Minutes ran a devastating piece featuring an addiction specialist who fretted that the federal government was “enabling” addiction and discouraging treatment. In 1996, a recently elected Republican majority, with the help of many Democrats, ended the SSI addiction program.

Academics and many welfare bureaucrats were outraged. They spent years finding ways to skirt the law and restore addiction as a route to secure benefits. They have succeeded.

One way that addicts have secured benefits is through new homelessness programs. The HEARTH Act, signed by President Obama in 2009, reorganized homelessness spending to emphasize giving permanent homes to the chronically homeless—those on the streets for more than a year and with a disability. This was known as the Housing First philosophy. Without any seeming debate, Congress adopted the bureaucracy’s own definition of disability, which included “substance use disorder.”

The Department of Housing and Urban Development told local governments to devise a single ranking to determine which homeless people would get free and permanent housing. Those with “significant health or behavioral health challenges,” such as “substance use disorders,” should get an advantage, HUD said. In a new twist on the old disability programs, the federal government also began making criminality a favorable condition for benefits. One of its housing-voucher programs for the homeless, HUD said, should prioritize those with “criminal records” and those with “high utilization of crisis services,” such as “jails.”

The HUD mandates led a nonprofit group, OrgCode, to create the infelicitously named Vulnerability Index–Service Prioritization Decision Assistance Tool, or VI-SPDAT (pronounced vee-eye spi-dat), which local governments nationwide adopted. In a typical VI-SPDAT survey for single homeless adults, a homeless person can accumulate “points” toward free housing. He can get a point if he has “run drugs for someone” or “shared a needle.” He gets another point if his drug abuse got him evicted from an apartment. There’s another bonus point for taking medication other than “the way the doctor prescribed,” or selling the medication. For crime, the system awards a point if the homeless person has tried to harm someone in the last year, another for being the “alleged perpetrator of a crime,” and yet another for landing in a drunk tank, jail, or prison. If the person does enough drugs and commits enough crimes, he can get six total points. With enough time on the streets, he can get to the necessary eight points toward a free house, without showing any other issues, apart from criminal behavior and drug abuse.
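To make the arithmetic concrete, here is a minimal sketch, in Python, of how such a point tally works, using only the survey items and the eight-point figure cited above. The flag names, and the two points assumed here for time on the streets, are invented for illustration; this does not reproduce the actual VI-SPDAT instrument or its full scoring rules.

# Illustrative sketch only: a simplified tally of the survey items named above,
# not the actual VI-SPDAT instrument.

# Hypothetical answers for a single homeless adult (flag names invented here).
answers = {
    "ran_drugs_or_shared_needle": True,       # 1 point
    "drug_abuse_led_to_eviction": True,       # 1 point
    "misused_or_sold_medication": True,       # 1 point
    "tried_to_harm_someone_last_year": True,  # 1 point
    "alleged_perpetrator_of_crime": True,     # 1 point
    "drunk_tank_jail_or_prison": True,        # 1 point
}

points = sum(1 for answered_yes in answers.values() if answered_yes)  # six points from drugs and crime alone
points += 2  # assumed here: additional points accrued for enough time on the streets

PRIORITY_THRESHOLD = 8  # the article's figure for priority toward free housing

if points >= PRIORITY_THRESHOLD:
    print(f"{points} points: prioritized for permanent housing")
else:
    print(f"{points} points: not prioritized")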

For families—almost always, single mothers—the scoring system rewards both drug and child abuse. Beyond the usual substance-use points, mothers get a point if children are frequently truant. They get a bonus point if their child spends two or more hours per day without any responsible adult around. Incredibly, a mother can also get a bonus point if child protective services has removed one or more of her kids. Having “two or more planned activities each week,” such as going to the library or park, is a negative on their benefits score.

A benefits system that rewards drug abuse, crime, and child abuse should collapse after public exposure, right? Yet the system came under attack only after tenuous accusations of racism. Activists argued that not enough minorities were getting into housing. Earlier this year, the creator of VI-SPDAT issued a mea culpa, calling for an end to the putatively racist program and for accelerating “activities to improve approaches that further promote racial and gender equity.” Though the Fair Housing Act forbids rewarding housing based on race, HUD in 2020 had already said that cities should change their scoring system to “dismantle embedded racism in [scoring] and prioritization processes” and find ways to get more minorities enrolled.

Now, as a consequence, many states and cities provide their own scoring systems, often after adjusting the scoring to emphasize questions to which black people are, supposedly, more likely to answer yes. Under the Massachusetts “Vulnerability Assessment Tool,” a homeless individual gets four points for agreeing that “I am currently using alcohol or drugs and not in recovery” but only one point if he has “been in recovery for more than one year.” The individual gets an extra two points if he has had an overdose or alcohol poisoning in the past 12 months. In Tacoma, Washington, the government says that the scoring system should focus on getting housing for those with “active substance use,” “frequent criminal justice interactions,” and, ideally, a “felony.” San Francisco asks if applicants have “ever had to use violence to keep yourself safe”; answering affirmatively yields bonus points for a new house. In all these places, getting into recovery or refraining from violence is almost fatal for an applicant’s chances for housing and benefits.

The homeless can accelerate their benefits by committing new crimes. Many cities have created “specialty courts” to deal with the particular problems of the addicted, the mentally ill, and the homeless. While some of these institutions use the power of mandated treatment to change lives, others have become another means to distribute funds to criminals. (See “Keeping the Mentally Ill Out of Jail,” Autumn 2018.) If you’re homeless in Austin and commit an offense such as trying to sell drugs, you will get assigned to the Downtown Austin Community Court. According to internal documents, your supposed “punishment” will be “access to basic needs, social services, and other resources,” so as to “address the root causes” of your situation. Your community service requirement will be met by the time you spend applying for welfare. Your assigned case managers will help you with those applications and also serve as chauffeurs, driving you to meetings and appointments. A single criminal act can open up various new benefits. As one news report says, the court wants to be “more of a social service organization than a court of law.”

In San Francisco, similarly, the CONNECT program allows anyone charged with crimes vaguely associated with homelessness, such as “defecating in public,” “aggressive soliciting,” “drinking in public,” “fighting,” or even plain “destruction of property,” to receive, instead of punishment, “supportive housing, case management, medical services, family & employment programs,” and “meals service.”

The government’s effort to give housing priority to addicts and criminals is even more damaging because the current Housing First model discourages treatment for addiction or other problems. The idea behind Housing First, also known as permanent supportive housing, was that homeless people needed “low barriers” to get off the street and into housing; any mandates for treatment, on this view, would discourage homeless applicants. Housing First is now official federal policy, and every local homelessness group receiving federal funds has to adopt it. HUD tells these groups that mandates for addiction services “should be rare and minimal if used at all.” The head of a major nonprofit providing housing for Native Americans in Arizona told me that many of her homeless clients suffered severe alcohol problems, but the federal government upbraided her when she tried to require minimal treatment in exchange for housing. The Tacoma homeless-services center warns that housing “may not be restricted based on . . . current sobriety,” willingness “to participate in substance abuse treatment or counseling,” or even “goal setting” of any sort. In exchange for free housing, then, the government expects less than nothing from its clients. Some Housing First programs don’t just disdain treatment; they actively encourage drug abuse. One Pennsylvania program, Pathways to Housing, for example, provides homeless addicts with free apartments (“fully furnished units chosen by the participants”)—as well as a needle exchange and necessary drug paraphernalia.

Subsidizing Addiction

It’s not surprising that housing filled with criminal addicts under zero requirements for treatment attracts problems. A San Francisco Chronicle investigation reported that in 2020–21, at least 166 people in the city’s permanent supportive housing program overdosed. This represented 14 percent of all overdoses in San Francisco during that period, even though these houses held less than 1 percent of the city’s population. One resident, Joel Yates, described what happened when he moved from a recovery house, which required sobriety, to a low-barrier supportive-housing unit: he quickly bumped into a neighbor on his floor who was smoking crack—and Yates relapsed.

The only reasons the number of overdoses in San Francisco housing is not higher are that, first, the city doesn’t track all overdoses, and, second, it has installed hallway Narcan dispensers to help revive overdosed residents. The horrific results of these programs confirm recent studies that show that the homeless placed in supportive housing are more likely to abuse drugs and alcohol than those left on the streets. And these grim findings dovetail with decades’ worth of research showing that boosting addicts’ incomes increases their drug consumption and the likelihood of relapse. A free house frees addicts from lots of other expenses.

The drug-abuse problem in these units has gotten so bad that the federal government awarded the largest homeless-housing provider, CSH, almost $4 million to research “overdose prevention practices in permanent supportive housing.” The outcome of the research, by the very definition of permanent supportive housing, cannot be to discourage drug abuse.

Activists, bureaucrats, and judges have begun chipping away at legal restrictions on addiction subsidies in other programs. In 2007, the federal government released an article titled “Documenting Disability for Persons with Substance Use Disorders & Co-Occurring Impairments,” which explains ways to get SSI disability payments for addicts, while avoiding the formal ban. The article claims that a profound difference exists “between the [ban on addiction funding] and scientific understanding of addiction,” which suggests that the right approach should be to provide indefinite checks to drug abusers. The article claims, without concrete evidence, that the 1990s-era cutoff led to more addicts in jail and restricted access to treatment. It also explains how to demonstrate to welfare officials that an applicant’s addiction was a manifestation of other, fundable disabilities, using lines such as the “patient’s cocaine use clearly exacerbates his underlying psychiatric conditions.”

Psychiatric ills have become the easiest way to get addicts back on disability. After the cutoff of addiction funding in 1996, the percentage of SSI awards based on psychiatric diagnoses soared, to more than a third of the total, often with “co-occurring” addiction as a secondary disability. The increase in mental diagnoses absorbed almost half of the addicts who had been kicked off the rolls. In contrast to the bureaucracy’s claims of harm, a National Bureau of Economic Research paper found “appreciable increases in labor-force participation and current employment” for addicts removed from the rolls in the first years, but noted that, in the longer run, disability checks “returned to earlier levels, and the short-run gains in labor market outcomes waned.”

The federal Substance Abuse and Mental Health Services Administration, SAMHSA, a hotbed of activism, runs several programs designed to get benefits to addicts. One, SSI Outreach, Access, and Recovery, seeks to enroll individuals with “substance use disorders” for disability. Its literature trumpets the ways its disability applications get flagged by the bureaucracy for quicker and more positive treatment and reports that 71 percent of its applications are approved—twice the rate of all applicants. Another SAMHSA program, Grants for the Benefit of Homeless Individuals, tries to connect “clients who experience substance use disorders,” along with other mental-health problems, to everything from Medicaid to SSI to food stamps.

In 1990, a bipartisan majority in Congress forbade the Veterans Administration from giving disability to people based on their drug and alcohol addictions. But activists convinced the administration that if a “primary” injury, such as a leg wound, led to the “secondary” problem of drug abuse, the addiction garnered extra benefits. In the 2001 case Allen v. Principi, a federal court accepted the argument that substance abuse was often a sign of a psychiatric disorder and thus should count as positive evidence of disability when awarding benefits. Now, despite earlier, broad-based concerns that the government was feeding veterans’ addictions, the government is again paying for drug and alcohol abuse among a vulnerable population.

Deserving recipients in many social-services programs are no longer those with the most setbacks or the least income but those with the worst addictions and who commit the most crimes.

In line with the medicalization of every aspect of modern life, federal Medicaid dollars for indigent health care are now used for housing the homeless, and specifically for drug addicts. Connecticut and Florida use Medicaid funds to help those with both mental health and addiction disorders find a place to live and help them keep their current housing. Florida says that its goal is to keep people with substance-abuse disorders in “sustainable housing through improved supports.” North Dakota says that its special Medicaid program for the disabled provides housing support to those with “alcohol abuse,” “cannabis abuse,” “cocaine abuse,” “hallucinogen abuse,” “opioid abuse” and the all-encompassing “other stimulant abuse,” as long as these are accompanied by a “drug-induced mood disorder.” Los Angeles uses Medicaid and other funds to support the most incorrigible addicts. A homeless drug abuser in LA won’t get special help if she overdosed only twice last year—but if she had three or more overdoses, she can get free “transportation, childcare support, establishment of benefits,” as well as assistance in receiving “SSI, SSDI, CAPI, CalFresh, and General Relief” funds.

These programs are in addition to the general cash provision for the homeless, which itself has deleterious consequences. Many cities, including New York and San Francisco, phased out such cash programs about 20 years ago (the latter under then-mayor Gavin Newsom’s Care Not Cash program), due to concerns about the money fueling alcohol and drug abuse, but they’ve now come back into vogue. One “old-school junkie” told former California gubernatorial candidate Michael Shellenberger about all the funds he gets to abuse drugs and live in the City by the Bay—over $600 in cash and $200 in food stamps a month. “I get paid to be homeless in San Francisco.”

The “harm reduction” approach to dealing with addiction began with the simple idea that clean needles could prevent the spread of blood-borne diseases such as HIV or hepatitis among intravenous drug users. But it has morphed into yet another source of addiction subsidies. Earlier harm-reduction activists backed needle exchanges, where addicts brought in dirty needles to trade in for clean ones. But now, government programs hand out dozens of free needles to anyone who asks. Cities like Seattle and San Francisco moved to providing free glass pipes for meth or free foil and cookers for heroin, usually focused (again) on the homeless. The putative health benefits of new glass pipes and foil have never been clearly explained.

Since 1988, federal law has prohibited the funding of needles for illegal drugs. But in 2015, Congress allowed funding for all aspects of needle programs except the needles themselves. Instead of the simple needle exchanges that some politicians wanted to support, the bureaucracy recommended supporting programs that offered as many free needles as possible. The Centers for Disease Control and Prevention says that “although restrictive syringe distribution approaches such as 1:1 exchange may seem desirable,” they are “not recommended.” Instead, the agency notes that providing people up to 30 syringes a month may be helpful. President Biden’s stimulus act provided funds for many types of “syringe service programs and other harm reduction” initiatives without any of the usual restrictions, and that led the bureaucracy to try to maximize free drug supplies. SAMHSA offered a $30 million grant program to distribute, among other harm-reduction tools, “smoking kits/supplies” for those smoking crack and methamphetamine. The grant prioritized smoking-kit distribution in poor and minority communities. What was once a wild conspiracy theory about the U.S. government encouraging crack use among African-American city-dwellers is now a publicly stated policy.

The harm-reduction programs have extended their ambit to sustaining anyone with an addiction. According to financial statements, St. Ann’s Corner of Harm Reduction, in the Bronx, received over $3.2 million in government grants and contracts in 2020. Though this was slightly more than it spent on all its programs that year, much of that spending went to general lifestyle support. An academic study of an unnamed, government-funded Bronx harm-reduction center noted that it offered free food, clothing, backpacks, and MetroCards to attract “clients” from three “competing” needle programs. As the study noted, the center’s funding was “based upon the volume of individuals served.” The center paid regular income to “peer” counselors—individuals with a current or past drug addiction who advise other addicts, though what a current drug abuser could tell another, besides how to score, was unclear. Those addicts passed over for the coveted peer positions told the academic researcher that they would “take [their] talents” to other needle programs. The center also became a useful place to fence stolen goods, with employees purchasing some of the contraband themselves, the study noted.

Some cities have found even more direct ways to subsidize addiction. San Francisco famously provided alcohol, marijuana, and cigarettes to homeless addicts when it put them in free hotel rooms during the Covid lockdowns. The city also laid out in the hotel lobbies the usual assortment of free needles, rubber tourniquets for injections, cookers for heroin, and glass pipes for methamphetamine and crack. The city said that these supplies were necessary to keep the homeless in the hotels and protect them from the effects of Covid. And it worked: San Francisco did not see a single homeless death from Covid in the pandemic’s first year. Yet overall homeless deaths were double any previous year in the city’s history; more than 80 percent were substance overdoses.

Over the past century, elites have tried to redefine all crimes and all social problems as illnesses. The goal has been to remove any sense of personal responsibility and to attribute every problem either to biology or society.

Addiction, of course, is an illness. It hijacks the brain and turns a human into a vessel seeking just one thing: a fix. But indulging that addiction is a choice. It must be if we are going to encourage addicts to take steps toward their own recovery. Twelve-step programs require a person to “make a decision” to stop abusing substances. Yet the government ignores choice and pretends that continued self-abuse is inevitable. It enables addiction instead of fighting it.

A Los Angeles center for “harm reduction”—an approach that began on a principle of prevention but has morphed into yet another source of addiction subsidies

We know that current policies don’t work. Drug-overdose deaths have risen 500 percent in just two decades. More than 100,000 Americans died of overdoses last year, the vast majority from opioids. More drug abuse has been accompanied by more violence and crime, as well as increases in homelessness, especially on the street. Many cities have seen a doubling in annual homeless deaths just over the last five years. Earlier fears about heroin or the crack epidemic pale in comparison with the modern blight.

Thankfully, some efforts promote better decisions. The bipartisan 2018 SUPPORT for Patients and Communities Act created the Recovery Housing Program, which houses sober individuals recovering from addiction for up to two years. The program should bolster nonprofits like Oxford House, which allows recovering addicts to live together in small, suburban houses and help one another on the path to recovery. All clients in Oxford House must work and stay clean. Conditioning more housing and services on sobriety would be the best possible incentive for personal change. But such programs remain the exception.

In the 1990s, the public learned about destructive programs that funded drug abuse, and it struck back with laws and prohibitions. Yet new programs, often sneaked in by the bureaucracy, have reversed this trend and instead are encouraging what the law forbids. Then and now, most Americans understand the obvious: the government shouldn’t be supporting drug abuse and crime. The taxpayer should not feed such habits or make it harder for addicts to get clean. In a better world, the government would help damaged individuals to move ahead with their lives. But today, as Ira and other homeless people know, the government is helping them kill themselves on the public dime.

The Green War on Clean Energy


Radical environmentalists fight against the very technologies that would cut carbon emissions.

James B. Meigs

In 2018, a radical new environmental group emerged in the United Kingdom. The loose-knit organization called itself Extinction Rebellion, or “XR,” and aimed to raise awareness of climate change through disruptive protests. XR activists staged dramatic “die-ins” and shut down London bridges and metro stations. The group’s leaders warned that climate change could “kill six billion people this century” and called for Britain to halt the use of fossil fuels virtually overnight. Like the Occupy Wall Street movement that inspired it, XR disdains detailed policy prescriptions. But its members generally scorn our modern, energy-intensive lifestyles, while also rejecting nuclear power and other high-tech approaches to reducing emissions. To save the planet, many believe, capitalism itself needs to be overthrown.

One of the group’s most charismatic spokespeople was Zion Lights. The daughter of Indian immigrants and a mother of two, Lights was a longtime environmental advocate. (The Telegraph once dubbed her “Britain’s greenest mum.”) But she found herself hard-pressed to defend XR’s more extreme claims. Hoping to understand the issues better, Lights returned to college, where she studied the debates surrounding nuclear power and related themes. “I started to realize that almost everything I had believed was wrong,” she told me, when I interviewed her recently for a podcast. When Lights tried to discuss her new perspective with her XR colleagues, she said, “I found there was this immense, immense resistance.”

Activists from Extinction Rebellion demonstrate at Munich Re’s headquarters, demanding the end of fossil fuels.

Ultimately, Lights had to ask herself a painful question: “What if you’d dedicated most of your life to trying to save the planet,” she wrote in Quillette last year, “but then you realized that you may have actually—potentially— made things worse?” It’s a question that more environmentalists should grapple with today. Over the past half-century, their movement has scored world-changing victories in reducing air and water pollution, preserving wilderness, and protecting wildlife. But when it comes to fighting global warming, the issue that most environmentalists now see as the planet’s paramount threat, the green-policy elite has arguably done more harm than good.

That claim certainly sounds counterintuitive, but evidence shows that some of the activists’ favored policies—especially the single-minded focus on wind and solar facilities for making electricity—have been marginally effective, at best. Other policies, such as replacing gasoline and diesel fuel with biofuels made from plants, actually increase emissions. One of the environmental movement’s biggest self-described victories has been its long-running war against nuclear power, the only technology that demonstrates the capability to reduce dramatically a nation’s carbon footprint. Today, some green activists are fighting against the next generation of climate-friendly technologies, including advanced nuclear reactors and systems to capture and store the carbon in fossil fuels, or even scrub it from the atmosphere. Call it the green war on clean energy.

Extremists like Extinction Rebellion aren’t the only ones with misguided ideas about how best to reduce emissions. Last November, heads of state and representatives from global NGOs, financial firms, and energy companies gathered in Glasgow for COP26, the United Nations climate summit. Speakers unleashed their most impassioned language. “We are digging our own graves,” said UN Secretary-General António Guterres. British prime minister Boris Johnson compared the planet to James Bond, “strapped to a doomsday device” that threatens to “end human life as we know it.” Despite the catastrophism, conference attendees mostly stuck to a well-worn playbook. Governments promised to boost spending on renewable energy and restrict use of oil and gas. Financial organizations agreed to international guidelines that penalize fossil-fuel investments and favor green-energy projects.

While some countries promised to set even stricter targets for future emissions, China, the world’s biggest greenhouse-gas emitter, resisted demands to curtail its heavy coal consumption and pledged only to start reducing emissions sometime in the indefinite future. As the Associated Press noted, “the high aspirations and apocalyptic imagery at the start of the summit were soon met with a cold dose of reality.”

Nonetheless, global emissions do appear to be peaking. The more apocalyptic scenarios that some activists forecasted are unlikely to happen. In fact, most developed nations are slowly reducing their carbon footprints, though not at the aggressive rates they’ve promised. Ironically, these reductions in emissions often occur not because of the policies advocated at climate conferences but despite them.

Ted Nordhaus, founder of the eco-modernist Breakthrough Institute, is skeptical of the “global climate-industrial complex” on display at COP26. “A climate movement less in thrall to fever dreams of apocalypse would focus more on balancing long-term emissions reductions with growth, development, and adaptation in the here and now,” he writes. The extremists of Extinction Rebellion and similar groups demand “system change,” by which they mean dismantling free markets, creating alternatives to existing democratic institutions, and deliberately reducing living standards through a process they call “degrowth.” The COP26 technocrats don’t advocate anything that radical, but they, too, envision a more centralized, less growth-oriented model for society. Under the COP26 paradigm, entire sectors of the economy—energy, transportation, manufacturing, housing—would undergo wrenching transformations.

According to this vision, markets are not adequate to manage the necessary transitions. Instead, change must be driven through government regulation, supranational agreements between industry and NGOs, financial controls, and other top-down measures. Certain technologies—electric vehicles, say, or rooftop solar panels—must be heavily subsidized, while others—internal combustion engines, gas stoves—should be penalized or even banned. The use of fossil fuels should be curtailed by any means necessary, including pushing up prices by restricting drilling and pipeline construction. All policies must be geared to achieve “net-zero emissions” by 2050.

This is a staggeringly difficult goal, which would touch every aspect of modern life. Yet net-zero advocates too often reject or neglect the very policies most likely to help the world achieve it. As Nordhaus recently wrote in The Economist, the activist community “insists upon re-engineering the global economy without many of the technologies that most technical analyses conclude would be necessary, including nuclear energy, carbon capture and carbon removal.” In other words, green elites want to upend the lives of billions but show surprisingly little interest in whether their programs work. In some parts of the world, the climate lobby has already managed to enact policies that raise prices, hinder growth, and promote political instability—all while achieving only marginal reductions in emissions.

The problem starts with the movement’s blanket opposition to fossil fuels. For example, most environmentalists viscerally oppose fracking and natural-gas pipelines. The Biden administration moved to curtail U.S. gas drilling within days of taking office (one reason U.S. gas prices have roughly tripled since Biden became president). But in fact, since natural gas emits nearly 50 percent less carbon dioxide than coal, it is one of our best tools to bring down emissions in the short term, while also benefiting the economy. Alex Trembath, deputy director of the Breakthrough Institute, writes: “The U.S. fracking boom of 2008 onward tempered inflation, created hundreds of thousands of jobs during the worst recession in a century, and, yes, reduced carbon emissions by displacing much dirtier coal-fired power.” Eco-pragmatists like Trembath see natural gas as a “bridge fuel” that can ease the transition to lower-carbon energy sources. (Soon, carbon capture and storage [CCS] technology could make it feasible to harness the energy in gas while putting much less carbon into the atmosphere.) But most environmental activists argue that we must phase out natural gas as rapidly as possible, replacing it almost exclusively with wind and solar power.

Wind and solar power can help reduce carbon emissions, as long as they are part of a mix of energy sources. But renewable-energy champions tend to gloss over the huge challenges of trying to power the grid primarily with such on-again, off-again energy sources. People understand, of course, that wind and solar facilities make power only when the wind blows or the sun shines. But even experts sometimes underestimate what a complex challenge this “intermittency” presents to grid operators. Since most wind and solar facilities sit idle most of the time, renewable-power producers have to overbuild production capacity massively. Renewable power also requires a whole new network of transmission lines in order to shuttle power from, say, sunny areas to cloudy ones. Renewable backers promise that imminent breakthroughs in battery technology will make intermittency a minor problem. In reality, while batteries can help grid operators manage short peaks in demand, they remain far too expensive to serve as a long-term backup. All these challenges mean that, while the “all-renewable” power grid that activists demand isn’t technically impossible, it would cost far more—and take far longer to build—than more balanced approaches.


Despite those obstacles, most green activists regard wind and solar power as something close to a climate panacea. So one would assume that environmental groups are lobbying hard to get these projects approved and built. Yet environmental activists often lead the way in opposing the construction of renewable-energy projects—especially when they’re slated to be built in their own backyards. In the U.S., environmental groups are currently fighting solar installations in Massachusetts, California, Nevada, Florida, and many other states. Wind-turbine farms face even more opposition: since 2015, more than 300 U.S. communities have rejected or restricted wind projects, according to a database maintained by energy author Robert Bryce.

It’s no wonder many environmentalists are conflicted: the zero-carbon energy sources they demand can take a terrible toll on the wildlife and open spaces they love. California’s iconic Altamont Pass wind farm, for example, kills thousands of birds yearly, including an estimated 75 to 110 golden eagles. Solar farms threaten endangered desert tortoises and other wildlife. Because of their low energy density, wind and solar developments require enormous tracts of land, compared with other energy sources. New York’s now-shuttered Indian Point nuclear power plant sits on just 240 acres. Replacing its power entirely with wind power would require more than 500 square miles of turbines. That’s a massive amount of land and habitat lost to energy production.

The biggest roadblock that the green movement has thrown in front of cutting emissions is its long-standing opposition to nuclear energy. Leading environmental groups, including the Sierra Club, the Natural Resources Defense Council, and the League of Conservation Voters, have been fighting nuclear power since the 1970s. “When you are in the environmental movement, you are just automatically anti certain things,” Zion Lights told me. “And nuclear power is the biggest bogeyman.”

Even after decades of research into alternative energy, nuclear power remains the only proven means to produce electricity that is at once reliable, emissions-free, and capable of being scaled up to meet growing demand. But decades of antinuclear activism have eroded public support. After the Three Mile Island and Chernobyl accidents, the U.S. and other countries imposed regulatory burdens that go far beyond legitimate safety needs. In most Western nations, nuclear-plant construction has largely ground to a halt.

But what if nuclear research and plant construction had continued to advance at the pace seen in the 1970s? One Australian researcher concluded: “Had the early rates continued, nuclear power could now be around 10 percent of its current cost.” That cheap, clean power would have made the use of coal—and, in many cases, even natural gas—unnecessary for power generation. In turn, this hypothetical nuclear revolution would have eliminated roughly five years’ worth of global emissions from fossil fuels and prevented more than 9 million deaths caused by air pollution. Most green activists today would see such numbers as nothing short of a miracle. Yet it was environmentalists who led the campaign to halt the rollout of the cleanest, and greenest, of all power sources.

When Lights studied the debates around energy and climate, she came to the same conclusions that other open-minded environmentalists have reached: that fears of nuclear accidents and waste are wildly overblown; that the advantages of renewable energy have been oversold; and that policies limiting the supply of energy inflict heavy costs on the poor. In 2021, Lights decided to split with her radical green allies, launching Emergency Reactor, a group that advocates for nuclear power and takes a more positive stance toward energy in general. “Wealthy countries need reliable, non-carbon energy, and poorer countries need clean energy to develop,” she writes on the group’s website.

Lights isn’t alone. As I have written in these pages, a growing number of pragmatic environmentalists now embrace nuclear power. (See “The Nuclear Option,” Winter 2019.) Tech gurus, including Bill Gates, are investing in next-generation nuclear startups. A handful of environmental groups have softened their opposition to the technology. And some political leaders—notably, France’s Emmanuel Macron and President Biden—have embraced nuclear energy. As part of its $1 trillion infrastructure plan, the Biden administration is rolling out a $6 billion program to help save endangered U.S. nuclear plants. After years of discouraging investments in nuclear power, the European Union recently moved to include nuclear in its “Green Taxonomy” of technologies that it considers compatible with net-zero goals. Recently, the global energy crunch caused by the war in Ukraine gave nuclear supporters another boost.

Wind-power technologies kill thousands of birds yearly, like this red-tailed hawk.

Nuclear advocates still face an uphill battle. Most leading environmental groups continue to oppose the technology. The Capital Research Center estimates that American nonprofits campaigning against nuclear power “spent at least $1.1 billion in 2018.” And official support for nuclear often comes with strings attached. The EU’s inclusion of nuclear in its Green Taxonomy, for example, includes tight time limits and other restrictions calculated to scare off investors.

So despite hints of progress, the nuclear industry remains in a vise: on one side, nuclear plants face pressure from activists and politicians; on the other, they are financially squeezed by renewable energy, which receives comparatively massive subsidies. Not surprisingly, U.S. nuclear facilities are closing at a rate of roughly one per year, with several plants likely to shut down over the next five years. And groups, including the Union of Concerned Scientists, have begun lobbying against regulatory approval for the next generation of designs, including small modular reactors and other concepts. Despite ample evidence that these advanced reactors will be dramatically safer than today’s (already quite safe) nuclear plants, UCS opposes them—partly because their small size and low risk “could facilitate placement of new reactors in BIPOC [black, indigenous, people of color] communities.” The U.S. Nuclear Regulatory Commission recently pleased these critics when it rejected an application from Oklo Power—one of the most promising nuclear startups—to build a test version of the company’s groundbreaking micro-reactor.

What makes nuclear power such a lightning rod for environmentalists? Climate economist Gernot Wagner notes that the modern environmental movement “came of age against the backdrop of the global threat of all-out nuclear war.” He writes: “Take this anti-war base, add to it a hefty dose of anti-corporatism, a helping of anti-capitalism, and a pinch or two of ‘small is beautiful,’ and most environmentalists’ attitude toward nuclear power becomes a fait accompli.” As Wagner says, green hostility toward nuclear power harmonizes with broader progressive political views. Some activists clearly state that their real enemy isn’t carbon but the market economy. In her 2014 book This Changes Everything: Capitalism vs. the Climate, Canadian writer Naomi Klein calls climate change “the best argument there has ever been for changing . . . the rules of capitalism.”

Even moderate environmentalists often express similar, if less explicit, sentiments. These include the suspicion that technology is somehow antithetical to nature, a fear that markets are fatally corrupted by greed, and a vague yearning for a more natural way of life. This faintly Rousseauian worldview leads to certain policy preferences: organic farms are better than “industrial agriculture”; collectivist solutions are superior to money-grubbing markets; growing biofuels is preferable to drilling for oil and gas; and so on. In this mind-set, wind and solar power (despite requiring plenty of exotic materials and technologies) intuitively seems like the antithesis of scary, high-tech nuclear energy. What could be more natural than harvesting the wind and the sun? Not all climate advocates embrace this kind of fuzzy thinking, of course. But an alarming number of lawmakers, NGOs, and even heads of state continue to favor utopian sentiments over economic and engineering reality.

Europe offers a vivid example of this phenomenon. In 2000, Germany announced its ambition to become the world leader in developing renewable energy, while renouncing fossil fuels and nuclear power. As noted environmental scientist Vaclav Smil writes, this Energiewende policy “is rooted in Germany’s naturalistic and romantic tradition.” It reflects the socialist influence of the Green Party as well as the German public’s antipathy to all things nuclear. Two decades later, Germany has spent well more than 500 billion euros on wind and solar infrastructure, biofuels, and other initiatives. Nonetheless, Energiewende is an environmental, economic, and geopolitical train wreck. By 2019, Smil notes, the country’s total share of energy produced by fossil fuels had fallen from—wait for it—84 percent to 78 percent. Despite its huge commitment to renewable energy, Germany hasn’t managed to reduce its carbon emissions any faster than the U.S. has. The country still mines and imports mountains of dirty coal. Even before the Ukraine crisis, German consumers were paying the highest electricity rates in Europe. And shortfalls in domestic energy production have made Germany desperately dependent on coal—and on natural gas from Vladimir Putin’s Russia.

Faced with that cascade of undesirable outcomes, you might think that Germany’s leaders would reassess. But no. In January 2022—as winter set in, energy anxieties mounted, and Putin amassed his troops—Germany closed three of its last six remaining nuclear plants. The rest are scheduled to shut down by the end of the year. At that point, Germany will have eliminated in a single year 12 percent of its total electrical generating capacity—all safe, reliable, and carbon-free. The action was “applauded by environmentalists,” wrote the New York Times. The biggest winner in this debacle has been Putin.

In the U.S., California has followed a similar route. In 2018, Governor Jerry Brown signed into law a mandate to create “an entirely carbon-free energy grid” by 2045. The plan isn’t going well. Replacing the reliable baseload electricity from fossil fuels and nuclear plants with fluctuating wind and solar power has made the state’s power grid notoriously unreliable. To avoid blackouts, California has had to allow gas plants to exceed normal emissions limits, import coal-generated electricity from other states, and even permit the use of diesel generators for grid power. Not surprisingly, California consumers now pay about 80 percent more for electricity than most Americans. But the state’s carbon emissions have fallen only about 5 percent since 2000, roughly on par with the national average.

Nonetheless, for years, California leaders remained committed to retiring Diablo Canyon, the state’s last operating nuclear power plant, in 2025. Fortunately, Governor Gavin Newsom is having second thoughts. But the arguments against the plant show the quasi-religious nature of some renewable-energy advocacy. After experts pointed out that keeping Diablo running would prevent an 11 percent spike in carbon emissions and “save ratepayers billions of dollars,” the Los Angeles Times responded with an editorial arguing that California should close the plant regardless. There are “better ways to fight climate change,” the paper said. The plant’s closure should “serve as an impetus for California to accelerate the shift to renewable energy.” In other words, the paper contended, instead of taking the path most likely to result in lower emissions and lower costs, the state should go the dirtier, more expensive route to spur itself to even more virtuous action. As economist Wagner notes, “the real fear for most opposed to nuclear power appears to be that supporting it is a distraction from rapid solar and wind deployment.” For environmentalists of this stripe, getting to net-zero seems more like an abstract moral crusade than a genuine effort to cut emissions.

Even mainstream environmental groups sometimes seem strangely biased against policies that might bring down energy prices or help the economy. In New York’s Hudson Valley, the environmental nonprofit Riverkeeper has an impressive history of protecting the Hudson River habitat. But it also spearheaded the campaign to close Indian Point, the nuclear plant that provided 25 percent of the electricity in the New York City region. Advocates for closing the plant promised that renewable energy would easily replace the power lost. In addition to new wind and solar projects, they pointed to a planned underground transmission line that would carry renewable hydro power from Quebec to the metro region. Then-governor Andrew Cuomo promised that the closure would result in “no new carbon emissions.”

But when Indian Point shut down for good in April 2021, all the wind and solar facilities in New York State combined were producing less than a third of the power churned out by that single plant. So, just as in other regions where nuclear plants have closed, grid operators turned to natural gas to fill the gap. Statewide grid-related CO2 emissions shot up by 15 percent. Analysts warned of potential blackouts. Electricity prices rose, too, jumping 50 percent for New York City residents. Then Riverkeeper executed a brazen maneuver: with Indian Point now closed, the organization began lobbying New York’s Public Service Commission against the proposed power line from Canada that it had previously supported. The group announced that it had “the courage to take a second hard look at this project.” Many clean-energy advocates were outraged. Jesse Jenkins, a respected energy analyst at Princeton, took to Twitter to say that he found it “incredibly frustrating to see environmental groups who allegedly see climate change as a ‘crisis’ regularly and actively opposing solutions.”

Riverkeeper’s about-face reveals a troubling contradiction at the heart of the climate movement. Green technocrats say that we must “electrify everything,” shifting cars and trucks, home heating, industrial processes, and more to electric power instead of fossil fuels. In a world of ample, cheap electricity, that process might be feasible, even desirable. But while activists support renewable energy in theory, they consistently oppose the infrastructure needed—not just to produce that energy but to deliver it to consumers. For example, a mostly renewable-power grid would require hundreds of thousands of miles of new high-voltage transmission lines. Nonetheless, environmental groups have filed lawsuits against a proposed line designed to carry wind power from New Mexico to Arizona and a similar transmission corridor linking Iowa and Wisconsin. Following a Sierra Club campaign against the project, Maine voters recently rejected a planned power line designed to deliver Canadian hydropower to New England.

The green economy that activists envision would also entail a massive network of high-speed rail lines to help replace air travel. But NIMBY activists are fighting every mile of California’s planned high-speed rail system. That project’s estimated costs have ballooned to $100 billion, with no reasonable expectation that it will ever be completed. Electric vehicle batteries and components for wind and solar facilities will require millions of tons of minerals: lithium, cobalt, rare-earth metals, and more. Maine has one of the world’s richest deposits of lithium, but a 2017 law makes mining in that state virtually impossible. Activists are fighting other proposed mines in Nevada, North Carolina, and other states. In Nevada’s Black Rock Desert, habitués of the Burning Man festival are suing to stop a proposed geothermal energy project. Greenpeace and other groups oppose research into technologies that can capture and store the carbon in fossil fuels, or even strip CO2 from the atmosphere. Critics worry that CCS technologies could “prolong demand for fossil fuels,” according to Inside Climate News.

The list goes on. Time and again, climate visionaries propose sweeping transformations of our way of life in the name of reducing emissions. But then they fail to build—or even actively oppose—the infrastructure necessary to make that dream a reality.

Environmental radicals like the members of Extinction Rebellion might say that this is a good thing: our society is too rich, too energy-hungry; we must be taught a lesson in austerity. Even supposed moderates sometimes echo that message. Conservatives never forgot Obama energy secretary Steven Chu’s 2008 comment that “we have to figure out how to boost the price of gasoline to the levels in Europe.” Even as he tries to reassure Americans about today’s stratospheric gas prices, President Biden optimistically describes the price surge as part of the “incredible transition” away from fossil fuels.

After two decades of heavy investments in renewable energy, Germany has only slightly reduced its share of power produced by fossil fuels—and it still mines and imports mountains of dirty coal.

“Only when the tide goes out do you discover who’s been swimming naked,” Warren Buffett once said. Russia’s Ukraine invasion slashed Europe’s energy supplies and exposed the risks of relying too heavily on wind and solar power. Some experts warn of blackouts, gas shutoffs, and economic chaos. Now European leaders are scrambling to get their hands on any type of fossil fuel they can. Germany is reopening coal mines and has asked the EU to roll back plans to limit investments in overseas fossil fuel projects. But despite the growing crisis, Germany refuses to consider reopening its recently retired nuclear facilities, or to keep its last three plants running. Belgium, which gets half of its electricity from nuclear power, also aims to close its nuclear plants by 2025.

Other countries are taking a broader approach. France’s Macron had already announced a program to build up to 14 new nuclear reactors. The Netherlands is making plans to build two new nuclear power stations. Several other countries are exploring partnerships with U.S. companies to build small modular reactors. In Japan, Prime Minister Fumio Kishida wants to accelerate the reopening of nuclear plants that the country mothballed after the 2011 Fukushima disaster. Energy pragmatism is in the air.

Today’s economic and geopolitical crises may be an opportunity for climate activists to dial down the catastrophism and focus on policies that actually reduce carbon—without destroying our standard of living. For decades, radicals and even mainstream environmentalists have spoken the language of deprivation. Zion Lights and her eco-pragmatist allies prefer to argue for abundance. We don’t need to punish the public to save the planet, she says. The key is simply to “build a lot of clean energy.” In contrast to the angry, anarchic protests launched by Extinction Rebellion, her group Emergency Reactor recently held a series of small, cheerful demonstrations around London. Their aim: to educate the public about nuclear power. “People are keen to engage, get involved, and have thanked us for focusing on solutions instead of the negative aspects of climate change,” she said. When a former green radical becomes an optimistic environmental pragmatist, that’s a sign of progress.

The Influencer

Charles Murray’s social science is sometimes provocative, usually controversial, and always significant to the national debate.

Robert VerBruggen

Charles Murray’s 1984 book Losing Ground was the type of consequential study rarely seen anymore. Culminating in a radical “thought experiment” of eliminating the social safety net, it captured public attention, drew academic fire, laid the groundwork for the welfare reforms of the following decade, and launched a career in which Murray would provoke—and shape—debates over IQ, genetics, class, race, education, and more. As both welfare policy and racial disparities return to prominence in our political debate, it’s worth looking back on that career. A mix of preparation and happenstance catapulted him into the closest thing to superstardom to which a social scientist can aspire.

“The whole thing about Ronald Reagan ‘shredding the safety net’ had been a big deal,” Murray recently recalled from his home in Burkittsville, Maryland. The president had popularized the term “welfare queen” and tightened some welfare rules by signing a 1981 bill. And by that point, Murray had some informed thoughts about the safety net.

Charles Murray in his office in 1994, the year he published The Bell Curve, coauthored with Richard Herrnstein


Then pushing 40, Murray had already lived quite a life. He hailed from Newton, Iowa, where his father was a Maytag executive—and where Murray had taken on identities both as a smart misfit who played chess by mail and as a pool-hall prankster in what he calls a “happy, uneventful childhood.” He’d earned a history degree from Harvard and a political science Ph.D. from the Massachusetts Institute of Technology. And he’d spent years evaluating government programs.

Murray joined the Peace Corps after getting his B.A., heading to Thailand as a volunteer with the Village Health and Sanitation Project, which promoted modern sanitation to rural Thais. He worked for two years in that role and another four doing research on rural development. An insurgency was under way, and the Thai and U.S. governments were funding such work to win over the people. (“No, I was not a covert CIA agent,” Murray told me, referring to an allegation one finds online.) It was in Thailand that Murray learned how government can backfire, as well-meaning people wade into affairs they don’t understand and state-provided aid infantilizes formerly self-sufficient citizens.

It was also in Thailand that Murray first worked with the American Institutes for Research and his mentor Paul Schwarz, whose writing style he intentionally copied in his early years. After returning stateside and putting in his time at MIT, Murray again worked for Schwarz and AIR, this time studying U.S. programs and eventually becoming chief scientist of AIR’s Washington, D.C., office. Murray was sympathetic to those who tried to help the poor—but he kept finding that the programs didn’t work.

Murray soon tired of his AIR work. “I’m not sure anybody ever read the reports, and even if they did, they didn’t have any effect,” he says. He was also coming out of a guilt-ridden divorce from his first wife, with alimony and child-support obligations. But he decided to take a risk, quitting his job with a plan of doing some consulting and writing a book about the nature of happiness. When consulting didn’t work out as he hoped, he reached out to conservative think tanks, setting off a lucky series of events in the early 1980s.

Murray’s first success was an offer to write a monograph about welfare policy for the Heritage Foundation. In researching it, Murray became fascinated by how the decline of poverty had slowed after the introduction of Great Society programs, and he decided to “practice” writing an op-ed about it. He turned the result over to Heritage, expecting little to come of it. Weeks later, he was surprised when Schwarz—for whom he was still doing consulting work—congratulated him on appearing in the Wall Street Journal.

Between the monograph and the unexpectedly high-profile op-ed, Murray’s arguments reached the conservative intelligentsia, including Irving Kristol, then editing The Public Interest, and Joan Kennedy Taylor, then director of book publishing at the Manhattan Institute. The institute invited Murray to speak, raised a $30,000 advance against royalties for him to write a book on welfare, and promoted his ideas to lawmakers and media outlets. Murray would spend the better part of a decade as an MI senior fellow.

Losing Ground is the story of how things were supposed to improve after the 1960s but didn’t. Disadvantaged young women were having children out of wedlock, disadvantaged young men were dropping out of the labor force, crime was up, and education had gone to hell.

Reading the book today, one is struck by how far social science has come. Nowadays, poverty data are available at the click of a mouse; back then, Murray spent hours in the Library of Congress and the reading room at the Census Bureau’s facilities in Suitland, Maryland, often settling for less-than-ideal data. But the book is gripping, anyway, using a mix of graspable numbers and commonsense storytelling to make its point.

As Murray noticed, the official federal poverty rate fell markedly in the 1950s and 1960s, but progress faded in the 1970s. However, the official measure is an odd statistic; in determining how much money a person has to live on, it counts cash income, including welfare benefits, but excludes other sources of government aid like food stamps. So Murray also presented trends for “net” poverty, which includes those other sources of support, and “latent” poverty, which excludes government assistance entirely. Net poverty continued to decline after the safety net expanded, but latent poverty, which Murray labeled the “most damning statistic,” stalled—and even started rising. Poverty had fallen for two decades as the economy had grown, but that progress had ended, with any further “gained ground” coming from government transfers.
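
The three measures are easy to state in miniature. Here is a small, hypothetical illustration of the definitions described above; the poverty line and the dollar figures are invented for the example and do not come from the book.

    # Hypothetical illustration of the three poverty measures described above.
    # The poverty line and dollar amounts are invented for the example.
    POVERTY_LINE = 20_000

    def official_poor(earnings, cash_welfare, in_kind_aid):
        # Official measure: cash income only, including cash welfare.
        return earnings + cash_welfare < POVERTY_LINE

    def net_poor(earnings, cash_welfare, in_kind_aid):
        # "Net" poverty: also count in-kind aid such as food stamps.
        return earnings + cash_welfare + in_kind_aid < POVERTY_LINE

    def latent_poor(earnings, cash_welfare, in_kind_aid):
        # "Latent" poverty: would the family be poor with no government help?
        return earnings < POVERTY_LINE

    # A family earning $12,000 with $6,000 in cash welfare and $4,000 in food
    # stamps is officially poor, not "net" poor, but still "latently" poor.
    print(official_poor(12_000, 6_000, 4_000))  # True
    print(net_poor(12_000, 6_000, 4_000))       # False
    print(latent_poor(12_000, 6_000, 4_000))    # True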

Murray’s narrative of how progress ended relies heavily on incentives. In rewriting welfare policy, society changed the rules for the poor. It now made sense, in the short term, to behave in ways that were self-destructive in the long term. He illustrated this vividly in the narrative of “Harold and Phyllis,” a fictional young couple. Phyllis is pregnant, neither plans to go to college, and their parents have little money. Murray explained the options that the couple would have confronted in 1960, and again in 1970. In 1960, welfare was unattractive; the benefits for single mothers were low, and “man in the house” rules would kick Phyllis off the program if the two lived together. But by 1970, welfare was more generous. Phyllis could support herself that way, if hardly lavishly; the two could live together without affecting her benefits, so long as they didn’t marry; and Harold might be able to work only sporadically if he didn’t enjoy his job.

Some aspects of the welfare system that Murray described were frustratingly wrongheaded. Before 1967, welfare mothers who got jobs were taxed at a rate of 100 percent, losing a dollar in benefits for every dollar they earned. That year, the government replaced this policy with the “thirty and a third” rule, meaning they could keep the first $30 they made and a third of the money they earned above that amount. As Murray notes, this change encouraged welfare recipients to work but made the dysfunctional program more attractive to those not already on it.
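
A rough arithmetic sketch makes the change in incentives concrete. The $300 monthly benefit below is invented for illustration; only the 100 percent reduction and the “thirty and a third” disregard come from the rules described above.

    # Hypothetical sketch of the benefit-reduction rules described above.
    # The $300 monthly benefit is invented; only the two reduction rules
    # come from the text.
    BASE_BENEFIT = 300.0

    def benefit_pre_1967(earnings: float) -> float:
        # Before 1967: benefits fall a dollar for every dollar earned.
        return max(BASE_BENEFIT - earnings, 0.0)

    def benefit_thirty_and_a_third(earnings: float) -> float:
        # After 1967: the first $30 of earnings, plus one-third of the rest,
        # is disregarded before benefits are reduced.
        disregard = min(earnings, 30.0) + max(earnings - 30.0, 0.0) / 3.0
        return max(BASE_BENEFIT - (earnings - disregard), 0.0)

    # Earning $150 a month costs the recipient $150 in benefits under the old
    # rule (no net gain) but only $80 under the new one.
    print(benefit_pre_1967(150.0))            # 150.0
    print(benefit_thirty_and_a_third(150.0))  # 220.0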

The ensuing furor over the book was messy. As Murray himself wrote in 1985, Losing Ground “covers too much ground and makes too many speculative interpretations to lend itself to airtight proof.” Contemporaneous critiques tended to dispute that welfare had caused the social problems that Murray pinned on it, to credit welfare for alleviating more hardship than Murray acknowledged, and to quibble about how generous safety-net programs really were. Looking back, Murray told me that he’s still proud of the work, but he admitted that its focus on incentives came with some blind spots, including that “there was something in the nature of modernity that was pushing these phenomena of family breakdown over and above the welfare system.”

Of course, the United States never embraced Murray’s “most ambitious thought experiment,” which was to end welfare almost entirely and let families and communities step in. But within a decade, it was clear to most Americans that the welfare system was indeed highly dysfunctional, and states were experimenting with better ways of doing things. Congress passed some work-focused measures, most famously the 1996 welfare reform—essentially declaring that society would continue to help poor single mothers but that they would be expected to get jobs. Like Reagan’s “shredding” of the safety net, welfare reform didn’t stop the rise of federal social spending—at least, not for long. And like the expansion of welfare benefits discussed in Losing Ground, welfare reform inspired debate as to its effects. But single mothers worked more, and their poverty rate declined, after the laws changed. Welfare reform is considered one of the Right’s biggest policy successes in recent decades.

Murray eventually accepted the safety net. In 2006, he released In Our Hands, which proposed replacing government social programs with a direct payment to each adult: a “universal basic income,” or UBI. It was a concept he’d thought about since the late 1980s—inspired by the ideas of Milton Friedman, his growing awareness of the fact that people are born with different abilities, and his dislike of meddlesome government—but that he didn’t think the United States could afford until federal spending ballooned to its more recent levels. Murray was therefore an early adopter of an idea that has attracted interest across the political spectrum.

Murray’s writings echo in welfare debates to this day. As President Joe Biden has pushed to expand the safety net through his Build Back Better plan, for example, conservative critics have highlighted the unwelcome incentives that such changes would create in terms of work, marriage, and unwed childbearing.

A few years after Losing Ground, Murray became interested in the topic of IQ. It related to themes he’d already been writing about, he found, and most social scientists had neglected the literature on the subject. He decided to write a book about it but worried that the Harvard psychology professor Richard J. Herrnstein—who had written IQ in the Meritocracy in the early 1970s and an article in The Atlantic about fertility differentials by IQ in 1989—might be thinking the same thing.

Murray called to ask. Herrnstein had no such plans, but he suggested that the two write a book together. Murray agreed. For this project, he moved to the American Enterprise Institute. The Bell Curve, appearing shortly after Herrnstein’s death in 1994, is several books in one. It summarizes the academic literature about intelligence and its measurement. It presents an original study based on the 1979 National Longitudinal Survey of Youth (NLSY79), testing the theory that intelligence is a powerful predictor of life outcomes. And in its later chapters, it dives headfirst into the forbidden topics that it’s best known for, including racial differences in IQ.

A sort of cognitive horsepower, IQ is the ability to process complicated information, and it can be measured as accurately as any psychological trait. High IQ is almost a prerequisite for success in many high-paying occupations, and it’s measurably beneficial to workers in many lower-skilled jobs, too.

As Herrnstein and Murray explained, a “cognitive elite” was emerging based largely on this characteristic. College attendance had grown, and universities had increasingly relied on standardized tests that correlate strongly with IQ, so the brightest kids from around the country could demonstrate their smarts and head to the top schools. Such tests had brought Murray from Iowa to Harvard.

That’s often called “meritocracy,” but as Herrnstein and Murray explained, people don’t do anything to earn their IQs. Quite the contrary: IQ is significantly genetic in origin; and in 1994, there was little evidence that deliberate environmental interventions—short of, say, adopting a child into a new family—could raise or lower someone’s IQ. (More recent studies suggest that mandatory-schooling laws did have that effect, to the tune of one to five added IQ points per additional year of schooling.) In other words, society was stratifying based on a trait that is partly inherent and, at any rate, very hard to change.

Like the statistics in Losing Ground, Herrnstein and Murray’s analysis of the NLSY79 stands out today for its simplicity. Even at the time, researchers were building increasingly sophisticated statistical models to explore their data sets, but Herrnstein and Murray avoided that “bottomless pit” to tell a story.

The NLSY79 had given its subjects the Armed Forces Qualification Test (AFQT), which Herrnstein and Murray believed measured IQ well, in their teens and early twenties, and then followed these kids to see how they did. Herrnstein and Murray used the data to answer this question: If all you know about someone is his IQ and a number summarizing his parents’ socioeconomic status, or SES (combining income, education, and occupation), which will better predict his outcome? To eliminate the role of race, these analyses were run only on whites.

On metric after metric—poverty, unemployment, out-of-wedlock childbearing, dropping out of high school, getting a college degree, crime—IQ proved the better predictor. One might object that neither IQ nor SES was a fixed determinant of someone’s future (Herrnstein and Murray noted this themselves) or that the approach is unsophisticated. But these analyses issued a challenge to social science’s focus on SES.

By the mid-1990s, Murray’s arguments in Losing Ground (1984) had influenced welfare-reform efforts, culminating in legislation that President Bill Clinton signed.

Then the book turned to race. It’s undisputed that black Americans score lower on IQ tests than whites, on average. Herrnstein and Murray emphasized that a difference in averages says nothing about individuals: millions of blacks are smarter than millions of whites. They also summarized the debate over whether this gap was a function of whites’ and blacks’ differing social environments or instead reflected some genetic difference.

After laying out the evidence in detail both ways—for example, statistically controlling for SES reduced the black/white IQ gap by only about a third, but the gap had been shrinking in recent years, and children fathered by white and black U.S. servicemen in Germany following World War II had similar IQs—they presented their ultimate verdict: “It seems highly likely to us that both genes and environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate.”

Murray told me that Herrnstein and he had applied “ordinary rules of evidence and Occam’s razor.” Many readers, though, winced at the idea of treating a genetic IQ gap among racial groups the way one would treat any other topic.


The Bell Curve then asked whether society was getting less intelligent on the genetic level because lower-IQ women were having more children than smarter women. (On the environmental level, society was actually getting smarter: in a phenomenon that Herrnstein and Murray christened “the Flynn Effect,” IQ scores had been rising for generations, far too quickly to be the result of genetic changes.)

Finally, the book also reiterated Murray’s call to end welfare, arguing that redistribution was dysgenic:

We can imagine no recommendation for using the government to manipulate fertility that does not have dangers. But this highlights the problem: The United States already has policies that inadvertently social-engineer who has babies, and it is encouraging the wrong women. . . . We urge generally that these policies, represented by the extensive network of cash and services for low-income women who have babies, be ended. The government should stop subsidizing births to anyone, rich or poor.

The Bell Curve inspired even greater pushback than Losing Ground had. Articles and entire books appeared in response, raising every conceivable objection: the AFQT isn’t an IQ test; IQ tests are racially biased; many forms of intelligence exist; other ways of measuring the environment make it more competitive with IQ as a predictor of outcomes; race is a social construct; racial groups’ IQ scores respond to social conditions more than the book acknowledged; Herrnstein and Murray’s sources were racist.

One can spend weeks reading these old debates and taking stands on each sub-issue. (Thomas Sowell’s American Spectator review, “Ethnicity and IQ,” and James Heckman’s Reason review, “Cracked Bell,” are two good critical takes.) My own view is that Herrnstein and Murray had gotten ahead of the evidence on genetics and that they should have kept dysgenics out of welfare policy.

Yet The Bell Curve is convincing when it comes to the power of IQ in modern societies. Much of what it said was indeed just summarizing current science, as numerous experts showed during the controversy. And its themes continue to resonate, sometimes in unexpected ways: witness the rise of “hereditarian leftists,” such as socialist pundit Freddie deBoer and liberal behavior geneticist Kathryn Paige Harden, who accept that genes powerfully influence how individuals fare in modern societies but argue that this is a reason to help the disadvantaged. Advances in genetics have also kept alive the debate over race, though modern geneticists prefer the term “human populations.” A 2018 New York Times article by Harvard’s David Reich urged acceptance of the fact that these populations differ on the genetic level in important ways:

Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases. . . .

I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science. I am also worried that whatever discoveries are made . . . will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.

That’s not an endorsement of The Bell Curve, but it’s an expansion of the range of acceptable opinion.

Murray returned to the topic of race in 2020’s lengthy Human Diversity—which didn’t reiterate his position that the black/white IQ gap is partly genetic but did summarize the emerging science of genetic differences across human populations, in addition to exploring the literature on sex differences. For this reason, I found Human Diversity more cautious than The Bell Curve, but Murray disagreed. Given the broader scope of the newer book, including differences in personality and social behavior, he didn’t think that there was a good reason to focus on IQ specifically: “The only reason to have emphasized IQ is to say, ‘And oh, by the way, on something I took so much shit about, we were right on that, too.’”

Murray’s latest book, the shorter Facing Reality, steers clear of discussing the causes of racial gaps. But, inspired by the mess of a public debate that followed the George Floyd protests of 2020, it tries to force America to confront the fact that racial gaps in crime and cognitive ability exist. It explains, for example, that the race gap on academic tests stopped narrowing in the 1990s and that blacks commit crimes at higher rates than whites by every available measure. Yet where his earlier books drew critics’ ire for years after their publication, Human Diversity and Facing Reality hardly registered on the national radar. Why can’t Charles Murray annoy people like he used to? “I went into Facing Reality saying, ‘It is your obligation to write this book because you’re one of the few people who is in a position to do it without putting their career in jeopardy’—and nothing happened,” he said. He’d even declined to dedicate the book to anyone and kept his usual agent out of the process in order to protect her from blowback. “It’s almost as if, in the current intellectual climate, it is no longer necessary to argue with people who say things like I’ve been saying,” he observed. “Given my history and my age and everything else, apparently I’m ignorable. You don’t have to confront the data.”

On such matters, the truth is one question, and whether we should talk about it is another. Numerous thinkers have urged commentators to avoid such discussions for various reasons: it’s demoralizing to see one’s race characterized as inherently less smart on average; the science is too shaky to permit strong conclusions; people will be tempted to jump from a difference in group averages to a belief that all whites are smarter than all blacks; historically, a belief in genetic IQ differences has led to monstrous policies. In a recent discussion with Murray, writer Coleman Hughes urged viewers to think about the situation that black parents would find themselves in if it became normal to talk about the IQ gap on the nightly news.

Murray believes that compelling reasons exist to talk about racial differences, that efforts to sideline the genetic question have failed, and that any practical consequences of opening up the discussion can’t be worse than the status quo. For many purposes, he notes, public policy and public debates start with an assumption that any racial disparity must result from discrimination. If we cannot talk about racial gaps in IQ and crime, we cannot explain why this assumption is false, and therefore we cannot stop the stampede toward race-based policies designed to equalize outcomes.

When I noted his discussion with Hughes and asked if he worried about the consequences of a franker debate, especially adding genetics to the mix, Murray responded:

We’ve had a natural experiment; we’ve tried for 60 years to not talk about all those wounding things in public. And what has come out of it is the worst racial polarization since the Civil Rights Act—it’s been building over a long period of time. We have colleges dropping the SAT. We have Oregon outlawing minimum standards in math and reading and writing and so forth. We have a rhetoric in which whites are called evil and oppressive, and not just privileged but, worse than privileged, racist, no matter how hard they try not to be racist, and in which “colorblind” is hate speech, “melting pot” is hate speech. . . . I could keep on going. . . . So when you tell me that I am going to create bad stuff by now saying, “Look, we’ve probably got differences that are genetic to some degree,” I don’t buy it. I don’t see how it could be any worse.

Does Murray enjoy stirring up controversy? He doesn’t write with the freewheeling joy or spiteful rhetoric of someone who revels in political incorrectness. He has a clear prose style and presents well-crafted arguments that aim to convince an open-minded skeptic. Yet he’s no stranger to provocation.

This tension lies at the heart of a 1994 profile of Murray by Jason DeParle in the New York Times Magazine, titled “Daring Research or ‘Social Science Pornography’?” Written in the lead-up to The Bell Curve’s release, the profile followed Murray on a trip to Aspen, Colorado, during which “the man who would abolish welfare” flew first-class, drank fancy wines, and unguardedly doled out quotable quotes to the reporter, from using the term “white trash” to admitting that the topics of The Bell Curve offered “the allure of the forbidden.” “Social-science pornography,” from the article’s title, wasn’t an allegation from a Murray hater; it was an off-the-cuff comment from Murray himself, describing how his data could answer such questions as which types of white kids are most likely to drop out of high school. “Murray’s persona in print is that of the burdened researcher coming to his disturbing conclusions with the utmost regret,” DeParle wrote, “but at the moment, he seems to be having the time of his life.”

DeParle also mentioned an incident from Murray’s past:

In the fall of 1960, during their senior year, [Murray and his friends] nailed some scrap wood into a cross, adorned it with fireworks and set it ablaze on a hill beside the police station, with marshmallows scattered as a calling card. [An old friend of Murray’s] recalls his astonishment the next day when the talk turned to racial persecution in a town with two black families. “There wouldn’t have been a racist thought in our simple-minded minds,” he says. “That’s how unaware we were.”

In Coming Apart (2012), Murray identified troubling trends in white America that have become obvious a decade later, especially drug addiction.

A long pause follows when Murray is reminded of the event. “Incredibly, incredibly dumb,” he says. “But it never crossed our minds that this had any larger significance. And I look back on that and say, ‘How on earth could we be so oblivious?’ I guess it says something about that day and age that it didn’t cross our minds.”

To some of Murray’s detractors, this was a chance to paint him as a white-hooded cross-burner. Since even the Times didn’t portray the incident that way, however, others have drawn a different connection between this story and his later work, one in which the through-line is a certain racial obliviousness. DeParle, for example, wrote that a controversial passage from The Bell Curve recalled “the high-school prankster who burned a cross, only to learn later what the fuss was all about.”

I talked with Murray about the DeParle profile (he admits being casual in his language to convey a certain persona), the cross incident (“the kind of thing that’s so stupid that only teenagers could do it”), and whether he enjoys controversy. He did confess to a contrarian streak: “Any time something is the conventional wisdom, there is an itch within me to say, ‘Oh, yeah?’”

But if there’s part of him that enjoys the fire he’s come under over the years, he said, “it must be hidden really deeply.” He was depressed after The Bell Curve came out, especially because people accused him—falsely—of having the numbers wrong. “There was no part of me that I could tap into that was saying, ‘Isn’t this cool?’”

Nearly 20 years after The Bell Curve, Murray published Coming Apart, the third book of his to make a major impact. It tied into The Bell Curve’s theme of a society increasingly stratified along class lines but viewed the topic through a less IQ-focused and more sociological lens. Murray told the story of how the social problems once associated with the “underclass,” including unwed childbearing and lack of work, had afflicted lower-educated whites more generally, while elite whites had self-segregated. Some accused Murray of neglecting the role of economics in these patterns. But Murray saw something happening within white America that few others noticed, and that no one can deny a decade later, given lower-educated whites’ role in both Donald Trump’s election and the opioid epidemic.

Coming Apart serves as a sturdy bridge between Murray’s most high-profile works and the impressive assortment of other books he has written. Some of these, like In Our Hands and Human Diversity, update Murray’s analyses in his most well-known areas of expertise; others flesh out the lessons that his work holds in a specific issue area, as in Real Education, which (among other proposals) urges educators to grapple more effectively with the fact that students have a wide range of cognitive ability.

But still others—such as In Pursuit, What It Means to Be a Libertarian, American Exceptionalism, and By the People—deal with deeper matters. These books are key to understanding what animates Murray, as they explain his thought in a less fraught context.

Murray believes that humans want to gain the satisfaction that comes from a life well lived. People want to earn their own way, make use of their talents, overcome challenges, and feel valued. They want to believe that, without their hard work, their families and communities would be worse off. Left to their own devices, with a limited government that keeps the peace, individuals and communities can strive toward that ideal. Murray especially has a soft spot for small towns, as shown by his choosing to live in a community of fewer than 200 people. But when a large, impersonal government provides too much, it robs citizens of the satisfaction, dignity, and self-respect that comes from taking care of themselves and one another.

Murray also has a deep love of the American Founding. Having once called himself a libertarian, he now goes by “Madisonian.” He believes that the Founders got a lot right when it came to enabling citizens to pursue happiness and that early Americans really were an exceptional people, including in their insistence on limited government and in their dedication to an individualistic creed where people were judged on their merits, not on their social class at birth.

Murray does not deny the horrors of America’s racial history. But alongside the positive developments on race and freedom since the Founding, he thinks that America has lost parts of what made it special. In By the People, Murray pinpointed 1937–42 as the period when the Constitution’s limits on the federal government dissolved in a series of Supreme Court decisions. Facing Reality raises the alarm about the threat that leftwing racial ideology poses to America’s individualistic streak. Over the years, Murray has offered ideas for restoring what America has lost. In Pursuit sought a return to the ideals of Jeffersonian democracy, with more local control; By the People proposed lawsuits and civil disobedience to tame the federal administrative state. Now, he’s mostly pessimistic. “Even if we were to try to bring things back,” he laments, there’s so much constitutional, legal, and institutional “sludge” to wade through.

For now, Murray doesn’t have another book in the works. He’s spending some time working on databases, something he loves doing. One of his projects is to post publicly more of the data behind Human Accomplishment, a 2003 book in which Murray ranked history’s most impressive artists and scientists based on the attention they had received in encyclopedias and histories from around the world.

How is Murray faring on that all-important question of a life well lived? He recalled the advice he gave in The Curmudgeon’s Guide to Getting Ahead, a short, lighthearted book from 2014: “Marry your soul mate and find a vocation you love, and everything else is a rounding error. I’ve done both of those things.”

As for his family, he feels indebted to his wife, Catherine Bly Cox, for the disproportionate role she played in raising the children while he focused on his work. The two, both from Newton, began seeing each other about a year after Murray’s divorce.

“I’ve been a good dad in reasonable ways,” Murray said. “Have I been as good as other dads are? No, I’ve spent too much time in this room, sitting in this chair, to have been as good as other dads can be.” In Pursuit argues that the satisfaction that one gains from an endeavor is proportional to the effort put in, which Murray has become only more aware of since he wrote it: “I don’t think the kids paid too heavy a price because they have such a wonderful mother, but the satisfaction I’ve taken from raising kids is not as much as it would have been if I had made a greater investment, and that’s just a reality of human life. There are trade-offs.”

As for his professional accomplishments, most writers would envy Murray’s influence. But Murray says that only one of his books will last 100 years, or maybe even 1,000—Apollo, which he coauthored with Cox in the 1980s. Rather than focusing on the astronauts who went to the moon, Apollo tells the story of the people who designed the spacecraft, planned the missions, and guided the ships, based on extensive interviews.

“What will people remember about the twentieth century 1,000 years from now?” he asks. “Catherine argues that they’ll remember two things. They will remember World War II, which she thinks will take on kind of a Homeric ‘good versus evil’ that will keep it alive in the same way that a few wars have been kept alive. She says the other thing will be—and I agree with this—it’ll be the century in which human beings first left the earth. And the first time they did it was Apollo, and we will be a kind of primary source for historians, for as long as people write about it. I’m not saying that it will be a bestseller 1,000 years from now. But that book will last.”

The Deacon and the Dog

Fifty years later, a former FBI agent looks back on the bizarre bank robbery that inspired an iconic New York film.

Daniel Edward Rosen

The memory that sticks out to Jim Murphy from the screwiest bank robbery in New York City’s history is not the slow drive down a dark road at JFK Airport, with a shotgun leveled inches from his head, or the scrum of onlookers hooting and hollering every time hostage-taker John Wojtowicz stood toe-to-toe with negotiators. It’s not the salacious details of Wojtowicz’s backstory—man robs bank to pay for his “wife’s” sex-change operation in attempt to woo him/her back—or the pop of Murphy’s revolver as he shot Sal Naturale during a struggle for control of Naturale’s shotgun. It isn’t the kiss on the cheek from the hostage he had just saved, or the night, a few years later, that he saw Lance Henriksen play a grim-faced caricature of him in Dog Day Afternoon, the Sidney Lumet film based on the 1972 robbery, while seated in a theater packed with an audibly pro–Al Pacino (playing “Sonny Wortzik,” the fictionalized version of Wojtowicz) and anti-Henriksen audience.

What Murphy remembers most is the shot he didn’t take. It’s the feeling of the trigger as he aimed his gun at Wojtowicz, the mastermind of the robbery. At that moment, Murphy had just shot Naturale in his torso. Another FBI agent had just disarmed Wojtowicz of his rifle. But Wojtowicz also had a pistol in his waistband.

John Wojtowicz gestures outside a Chase Manhattan Bank branch during the infamous robbery and hostage-taking in Brooklyn, August 22, 1972.



His hands were slowly moving down toward his waist. Murphy knew that Wojtowicz had the pistol and commanded him to “freeze,” to get his hands back up in the air; his trigger finger maintained the tension between mercy and retribution.

Fifty years later, seated at a diner in Fresh Meadows, Queens, Murphy says that he can still feel that tension, the great control he had at that moment—and when Wojtowicz eventually complied with his orders, the sensation of the trigger’s release. Had Murphy not released it—had the incalculable hours of training he received at the Bureau not kicked in—he could have shot two men that early morning instead of one. Wojtowicz “wasn’t at his gun yet. He was going for it. I could have shot him, and people would have said it was a justifiable shooting. I don’t think that’s the best way to behave. The instinct isn’t to kill somebody. The instinct is to stop the action,” Murphy noted.

“You can’t leave these things in the bad guys’ hands. And I use ‘bad guys’ for lack of a better term. We’re talking about a moment. I don’t think Sal was a bad guy. I don’t think there’s anyone in the world who’s a bad guy, you know? But he put himself in a very bad situation where the opposition can’t make that distinction,” said Murphy.

Both the robbery and Dog Day Afternoon brought Murphy stature and admiration within the Bureau, as he regularly gave talks to starstruck FBI agents about the eternal conflict between the facts of a case and its Hollywood portrayal. Outside the Bureau, he remained relatively anonymous—few people knew of his involvement in the robbery, save for friends and family. In both the film and in published reports about the event, he was known simply as “Murphy.”

Today, Murphy runs his own private investigation firm, which he’s done since he resigned from the Bureau in 1984. He looks the same as he did back then, save for more gray in his close-cropped hair. He still loves the Bureau and everyone he worked with there. He still wears collared shirts, ironed to perfection, still wears an expression that’s congenial yet discerning, still speaks in a gentle Queens accent. He is a man at peace with his life and the good and bad it has brought with it.

Murphy could have stayed at the Bureau and risen in the ranks. His last role was as assistant special agent in charge at the FBI’s Brooklyn-Queens Metropolitan Resident Agency. But he retired early to be closer to his family and take care of his younger son, who, at the time, had been diagnosed with cancer. He remains a man of deep Catholic faith. For the past 20 years, Murphy has served as a deacon for the Diocese of Rockville Centre. That evening, he would be presiding over a wake service for a family that had just lost a relative to suicide following a struggle with depression. “We’ve got to make some sense of it.”

As for Sal Naturale, the young man he shot, he feels sorrow for him. He wishes things could have been different. Naturale, in Murphy’s mind, was a lost soul, a person whose free will steered him wrong. But for Murphy, sorrow does not equal guilt. “I do feel bad about Sal. The kid never had an opportunity to live his life. That has nothing to do with guilt,” Murphy said. “He got up that morning not having any idea what was going to be happening so many hours later. He had no idea, nor did I, on where he was going to be that night.”

For Wojtowicz, the bank robbery was over a man who wanted to be a woman. Wojtowicz met Ernest Aron at St. Anthony’s feast in Soho in 1971. Tall, thin, and effete, Aron was dressed in semidrag. Wojtowicz, a Vietnam veteran, was smaller, irascible—and promiscuous. He was also a married father of two who became involved in the Gay Activists Alliance under the alias “Littlejohn Basso,” the last name a nod to his mom’s maiden name, the first a reference to his microphallus.

“He was a Goldwater Republican who volunteered in the war in Vietnam to serve his country, came back home with his brain scrambled, and somehow in the Army he discovered he liked having gay sex,” said Randy Wicker, a reporter, author, and gay activist who knew Wojtowicz and Aron. Wojtowicz became infatuated with Aron, and, after a long courtship, they got “married” in an informal ceremony, with Aron in a flowing wedding gown and his male wedding party dressed as bridesmaids.

But Aron’s desire to transition to a woman caused friction in their relationship. Wojtowicz opposed the idea. During an argument, Aron had told him, “I want to have a sex change, or I want to die.” Aron became suicidal, and on his birthday, August 19, 1972, he overdosed on pills and was taken to Kings County Hospital, where he was committed to the psychiatric unit.

Heartbroken and determined to get Aron out of the hospital, Wojtowicz recruited Bobby Westenberg and Naturale, a 19-year-old from New Jersey with priors for grand larceny and drug possession, to help him rob a bank. After a few false starts at other banks in the city—in the 2013 documentary about him, The Dog, Wojtowicz recalled that they dropped a shotgun outside a Lower East Side bank, cutting that attempted robbery short, while Westenberg ran into a family friend at another bank in Howard Beach—they eventually settled on a Chase Manhattan Bank in Gravesend, Brooklyn.

On August 22, 1972, at closing time, Naturale and Wojtowicz, armed with a Colt revolver and carrying a .303 British rifle and a 12-gauge shotgun, both concealed inside a box, entered the bank. Westenberg was supposed to hand a typewritten note to the teller:

THERE IS NO REASON FOR ANYONE TOO [sic] BE INJURED OR KILLED. IT IS ALL UP TO YOU. THIS IS AN OFFER YOU CANT [sic] REFUSE

Instead, he got cold feet and fled the bank, abandoning his two accomplices with seven bank tellers, a branch manager, and an unarmed security guard.

Without Westenberg, Wojtowicz and Naturale proceeded with the robbery. Naturale, revolver in hand, approached the desk of Robert Barrett, the bank manager, and informed him that his bank was being held up. The excitable Wojtowicz, rifle now out of the box, jumped behind the teller’s counter and went through the drawers and carriages, separating real money from the decoy bills, telling bank employees that he had once worked in a bank himself and knew how these things went. The bank tellers were instructed to keep answering phone calls and to act as though everything was fine. When Barrett got a call from a human resources officer at Chase Manhattan’s downtown office to discuss staffing, the officer, struck by Barrett’s unusual tone, asked if there was a problem. “Very much so and have a nice day,” said Barrett, before hanging up. Chase Manhattan notified the NYPD.

As it is today, New York City was then grappling with troubling crime numbers. Midway through 1972, 810 homicides had been committed in the city—a new record—along with 443 shootings, according to the New York Times. In comparison, in 2022, 559 shooting incidents had taken place in New York as of June 12, NYPD statistics show. The number of murders—185 during this time frame—thankfully hasn’t reached 1972 levels. But serious crimes have been rising in the city for several years now.

In contrast to today, bank robberies and hijackings were exceedingly common in the 1970s. Gotham saw 469 bank robberies in 1970–71 alone, the New York Times reported. Back then, these crimes—along with kidnappings and hostage negotiations, among other disruptions—informally fell under the auspices of the FBI’s Bank Robbery Squad. Working on nonviolent squads that handled, say, white-collar crimes involved long hours stationed at desks or inside surveillance vans. The Bank Robbery Squad, by contrast, gave agents the chance to work the high-risk and dangerous cases that made them want to join the Bureau in the first place. “Some of the groups that we worked that were committing bank robberies were the BLA [Black Liberation Army], the Weather Underground, and the Westies. You had bank robberies where guys went in and claimed they had a bomb with them,” recalled Murphy.

Working for the Bank Robbery Squad meant being a jack-of-all-trades—sharpshooter, hostage negotiator, investigator, anything the situation might call for—at a time when bank robberies were a daily occurrence in New York. “You had to be prepared for meeting violence at the time of an arrest, and that was the adrenaline rush that we all sought and pursued, not for glory, but to get these guys and get them off the street,” said Kenneth Lovin, a former FBI Special Agent with the Bank Robbery Squad.

Skyjackings were another unofficial FBI specialty. It was “the golden age of hijacking,” observes Brendan Koerner, in The Skies Belong to Us. More than 130 U.S. airplanes were hijacked between 1968 and 1972. One such attempted takeover involved Richard Obergfell, an unemployed airline mechanic and lovesick New Jersey native. He had become infatuated with a woman in Italy, who was also his pen pal. He boarded a Chicago-bound TWA flight and, using a pistol he sneaked onboard with him, commandeered the plane, demanding that it be rerouted to Milan. As the Boeing 727 lacked the fuel capacity for a cross-Atlantic trip, Obergfell was flown back to LaGuardia and transported by car to JFK, where another jet awaited him and the air stewardess he had taken hostage.

The Bureau tried to de-escalate the situation, bringing in a Catholic priest and FBI negotiators to reason with Obergfell. Those tactics failed. When Lovin arrived at JFK, John Malone, assistant director in charge of the FBI’s New York Field Office, ordered him to stand behind a blast fence 175 yards away from the plane, armed with a Remington 760 rifle. When Obergfell was to make his way from the car toward the plane, Lovin had orders to take the shot—but only if he could avoid harming the stewardess. Lovin had scoped him for 15 minutes when Obergfell became distracted by a police car. “During his moment of excitability, he removed the gun for a matter of a few inches away from the girl’s head. And when he did that, I felt that even if I hit him and he pulled the trigger, she was in no danger. And that’s when I took my shot,” recalled Lovin. He shot Obergfell twice, sending him to the ground. The stewardess escaped unscathed. Obergfell died. The Federal Aviation Administration told the New York Times that it was the first fatal shooting of a hijacker in the United States.

“We tried to appeal to him and to de-escalate, and we wanted it to be resolved without any loss of life, period,” Lovin said. “But things don’t always work out that way.”

Murphy was in the FBI’s field office on East 69th Street when they got the call from the NYPD advising of the bank robbery in progress. By then, it was all over the media. NYPD and FBI snipers were already positioned on roofs. “Anyone else who had a right to carry a gun showed up,” noted Murphy.

Bob Kappstatter, then a reporter for the Daily News, managed to get Wojtowicz on the phone by calling the bank. “I asked him, ‘How you doing? Do you think you could kill all these people?’ And he says, ‘Yep, I could kill.’ So we’re off and running,” says Kappstatter.

Murphy arrived on the scene to find thousands of spectators watching the standoff from the tops of trucks and behind police barricades. By then, the bombshell had already dropped: Wojtowicz was robbing the bank to pay for Aron’s sex change. “You couldn’t think of a better angle while a story was happening. It came out of the blue, and people’s jaws were dropping,” said Kappstatter. Wojtowicz confessed to being a “homosexual,” something that also made headlines back then. Inside the bank, Wojtowicz told tellers that he didn’t plan to harm them— that the police had forced his hand to keep them as hostages.

By 8:00 pm, Dick Baker, special agent in charge at the FBI’s New York City office, took over hostage negotiations from the NYPD. Wojtowicz left the bank throughout the late afternoon and into the evening to speak with negotiators, each meeting causing a stir from the crowd. Wojtowicz, dressed in a T-shirt, offered to trade a hostage for Aron, who was still in Kings County Hospital. He asked for hamburgers to be delivered to the bank. Instead, pizza was dropped off at the front door, for which Wojtowicz paid by tossing $1,000 in cash in the air, FBI agents scrambling to pick up the bills. The hostage-takers never ate the pizza, fearing it was drugged. “Every time he did exit the bank, he’d have the community yelling and screaming and chanting in support of them,” said Murphy. Inside the bank, the atmosphere between the hostages and their captors was, surprisingly, festive. “We cried, we laughed and joked. We took it as it came,” one of the hostages told the New York Times.

Wojtowicz and his transsexual girlfriend Elizabeth Debbie Eden (Ernest Aron) in 1979, after his release from prison

Eventually, Aron was brought to the scene, per Wojtowicz’s request. But he refused to meet with Wojtowicz directly, fearing Wojtowicz’s “bad temper.” Instead, Aron was set up at a neighborhood barbershop, which had been converted into a makeshift police command center. Wojtowicz said that “he wanted to come out but he was afraid to, and that if he left, Sal would kill everybody,” according to Aron, in archival footage in The Dog.

Meantime, the police had cut off both the telephone lines and the air conditioning. Wojtowicz and Naturale were now overheated and hungry. Sure, the two men now had over $38,000 in cash and nearly $175,000 worth of traveler’s checks in their possession. But with that had come eight restless hostages and unrelenting, unflattering media coverage. They were trapped inside a bank, surrounded by a battalion of law enforcement and spectators who wanted to see things escalate to an explosive finale. (On two occasions, Wojtowicz fired his gun: once toward the rear of the bank upon hearing a “menacing” noise, and once when he accidentally discharged his rifle after bumping it into a desk, nearly blowing off a foot.) What Wojtowicz didn’t have was Aron—or a clear escape plan.


After hours of negotiations, Baker and Wojtowicz reached an agreement. The two robbers would be taken with their hostages to JFK, where a plane would fly them to multiple destinations. At each stop, two hostages would be released, and when all the hostages were off the plane, Wojtowicz and Naturale would continue on to freedom, wherever that might be. Inside the bank, the robbers and their hostages brainstormed ideas on where they’d go—maybe Moscow, maybe Tel Aviv.

But how would the two robbers and their hostages get to the airport? There was talk of taking separate cars, one robber and a few hostages per car, to JFK. Ultimately, it was agreed that they would be transported in an airport limousine, with a sole FBI agent, who would be at the wheel. Baker presented four agents to Wojtowicz, having them stand in a line in front of the bank. Among the four were Thomas Sheer, a former Marine who would go on to serve as assistant director of the FBI from 1986 to 1987, and Jack Jansen, who stood over six feet and was built like a boulder. Wojtowicz looked them over and pointed at the least physically imposing of the four. “I’ll take him,” he said—meaning Murphy.

Wojtowicz went back into the bank. It dawned on Murphy that someone was going to die that night. It could be him. It could be one of the bad guys. God forbid it be one of the hostages. It was an unsettling feeling. Murphy turned to the crowd and looked for the priest he had seen earlier—the one who had tried to reason with Wojtowicz.

As the product of a Catholic education, from grammar school through St. John’s University, Murphy remained a man of faith. But he wasn’t living that faith the way he would have liked. If this was going to be it, he wanted to demonstrate remorse for any wrongs he had done. It would be a while before the airport limousine arrived. Murphy approached the priest. “I told him what was going to happen. I told him my concern was that there was a real possibility here that someone was going to die, someone was going to get shot, and it could be me. I asked him if he would hear my confession,” Murphy recalled. They walked along the tree-lined streets away from the crowd, and the priest listened to his confession. When they returned, Murphy felt at peace.

Then the limousine arrived.

The first challenge for Murphy was where to secrete his weapon, a Smith & Wesson Model 15. An agent offered his ankle holster and weapon, but Wojtowicz would find it if he patted Murphy down. He settled on placing the gun underneath the gas and brake pedals, concealing it under the floor mat.

Murphy drove the airport limousine to the front of the bank. Wojtowicz ordered Murphy to walk to the back of the vehicle, in order to be frisked. Wojtowicz made a great show of the patdown, slowing down his search when he reached Murphy’s groin. The spectators went bananas. Murphy felt humiliated. Murphy returned to the driver’s seat and placed his foot on the brake. Wojtowicz searched underneath the driver’s seat and other locations—but not beneath the pedals. When Wojtowicz went to get the hostages, Murphy put the gun in his belt, covering the grip with his tie.

Around 4:10 am, Wojtowicz and Naturale exited the bank. Naturale had hostages huddled around him—a human shield. In the limo, three hostages were placed in the second row. Naturale sat in the middle of the third row, a hostage on either side. In the last row was Wojtowicz, sitting in the middle, also sandwiched between two hostages. There were now seven hostages left—a security guard had been released earlier in the standoff, and another hostage had been allowed to leave when everyone walked out of the bank. At 4:45 am, Murphy and his passengers headed off to the airport. Baker was in the lead car in front of him. Behind him was a 20-car convoy of law-enforcement vehicles. It was a 25-minute drive to the airport.

Naturale was extremely nervous, Murphy saw. He held the shotgun at the back of Murphy’s head.

“Sal, do me a favor and put that up. My wife will be really disappointed if that goes off,” said Murphy.

“Don’t worry. It won’t fire,” replied Naturale.

“If we go over a bump and it accidentally discharges, it’s going to fire and I’m not going to be here anymore,” said Murphy. Naturale lowered the gun.

On the drive over, they exchanged small talk, putting both hostage-takers at ease. Wojtowicz was starving and wanted to stop and get hamburgers. That couldn’t happen, Murphy explained, but he would see if he could sort something out at the airport, maybe get them some food on the plane.

At JFK, Murphy turned onto a long, dark road that would take them to the satellite area where the plane would eventually be—the same area where Obergfell was shot. Naturale was now frightened. He again raised the gun to Murphy’s head.

Baker had already arrived in the satellite area. Murphy stopped the limousine and said he’d talk to Baker about getting them some food. Per prior agreement, a hostage would be released at the airport. It was supposed to be Barrett, but he refused. Instead, it was one of the women sitting behind Murphy, creating an opening between him and Naturale.

Baker and Murphy met halfway between the cars and devised their plan. When the airplane taxied into the satellite area, Baker would walk back to the limo’s right rear window, close to where Wojtowicz was sitting. With Baker in position, if Murphy thought he could take the shotgun from Naturale, he would ask Baker, “Will there be food on the plane for these people?” If Baker responded yes, it meant that he thought he could get the rifle from Wojtowicz, who had it resting on his lap. Their plan decided, Murphy returned to the limousine. Baker was looking into getting some food for them, he told the robbers.

Poster for Dog Day Afternoon, the 1975 film based on the robbery

It was another 20 minutes before the plane would arrive. Murphy slowly got himself in position, resting his right knee up on the seat. He had his right elbow on top of the seat, his handgun concealed in his right hand. They continued the small talk.

The plane taxied into the satellite area. The glare of the lights and the whine of the jet engines distressed Naturale even more. Baker casually walked back to the rear window. Murphy turned around, sensing Naturale’s unsteadiness.

“Will there be food on the plane for these people?” asked Murphy.

Baker looked at him. “Yes.”

Murphy swung in his seat. He grabbed Naturale’s shotgun with his left hand and pushed it up to the ceiling, raising his gun in his right hand while doing so. Naturale hung on to the shotgun with both hands. Murphy fired a shot, hitting Naturale in his chest. Naturale let go and collapsed in his seat. Baker pulled the rifle away from Wojtowicz.

The hostages opened the doors and streamed out. Murphy had his gun on Wojtowicz, whose hands were inching down. But Murphy didn’t shoot him. Wojtowicz stopped. The robbery, as sensational and unprecedented as it was, had reached a predictable end.

Murphy jumped over his seat. With Murphy’s hand on the back of his neck, Naturale let out one long exhale, flapping his lips, and then stopped breathing. He died on the way to the hospital. Later that evening, a hostage approached Murphy and asked if Naturale had died. “Yes,” he responded. “That’s too bad,” she said, kissing him on the cheek.

Murphy’s case was the third incident in 1971–72 in which hostage-takers were shot and killed by special agents from the Bank Robbery Squad. Lovin’s was the first.

The second was the hijacking of a Mohawk Airlines plane by Heinrich von George, a failed businessman and father of seven, who faced indictment for simultaneously drawing welfare and unemployment checks. Von George successfully hijacked the plane as it departed Albany, using what he claimed were a pistol and a bomb. The plane landed at Westchester County Airport, where he released all the passengers. After hours of negotiations, he received the $200,000 and two parachutes that he had demanded.

The plane departed for Pittsfield, Massachusetts, diverted midair to Poughkeepsie, New York, per von George’s orders, and landed at Dutchess County Airport, where a car waited on the runway for him and his hostage, a stewardess. As von George entered the car, FBI agents charged. They had followed von George’s plane in one of their own, flying with no lights. One of those agents was the massive Jack Jansen, who, months later, would be rejected by Wojtowicz as a driver to JFK. Jansen jumped in, pulling the woman away from von George. He had his shotgun aimed at von George and yelled “freeze.” Von George raised his gun toward Jansen and fired. Jansen returned fire, shooting von George in the throat at point-blank range. He died, facedown on the tarmac. Von George’s gun turned out to be a starter pistol. The bomb was two canteens filled with water and wrapped in a red blanket.

Three failed crimes committed by three failed men, each a victim of a thousand self-perceived indignities. They all shared the same delusion: that the bounty they would steal would finally bring them the life owed to them, the people they desired, the respect they desperately needed. Their strategies were, at best, half-baked. The crimes themselves were beset by delays and intense negotiations that dragged on for hours; when all options for peaceful surrender had been exhausted, the men were met with lethal force.

Like Murphy, Jansen and Lovin had their own flashbulb memories of the incidents, images that they could never shake. For Jansen, it was the viscera that sprayed forth from von George, levitating in the air like smoke, before hitting the ground on its final descent. For Lovin, it was the impact of the bullet as it struck Obergfell dead center in the chest, and then the dying man’s face, his eyes and mouth agape, a rictus of shock. Over 50 years later, Lovin says that he still can’t wipe that face from his memory bank.

The three men spoke about this on a few occasions, but not often. They knew what each was going through, and they all reached the same conclusion: while it was sad that it had to happen, these men had their chances to surrender. They wouldn’t do it.

In life and in death, Sal Naturale was a big unknown. He went by the alias “Donald Masterson.” During the standoff, the press described him as a homosexual, which he denied and wanted corrected. Whereas the whole world knew about Wojtowicz’s dirty laundry, few knew what Naturale looked like, let alone who he was. To those who did know him, he was a “six-time loser” whom “nobody gave a damn about,” according to the New York Times.

The “Sal” the rest of the world came to know was the dimwit played by thirtysomething John Cazale in Dog Day Afternoon—though, in real life, Naturale had been just a lost 19-year-old kid. The movie amped up Naturale’s naïveté to the point of ridicule, which bothered Murphy.

When the film came out in 1975, Murphy took his wife to see it in a packed theater on the Upper East Side. The audience, much like the crowd at the scene of the real standoff, seemed mostly to be on the side of the “bad guys.” When Henriksen, playing Murphy, shoots Cazale, dead center in the forehead, the crowd booed. Murphy wanted to leave before anyone in the audience might recognize him.

Relatives who saw Dog Day Afternoon were indignant, cornering Murphy at family gatherings to ask: “How could you have possibly shot that guy?” They doubted whether he was in the right. “That forced me into a situation of sitting down with my [older] son, who, at the time, was only five, and explaining to him what had happened because I couldn’t allow him to find out from anybody else or to get a distorted view of it,” said Murphy.

But the second-guessing of his actions lasts to this day. Reporter Randy Wicker still thinks that Murphy’s shooting was “cold-blooded murder,” as he did when it happened. “He pushed the boy’s gun to the ceiling and instead of saying ‘freeze,’ he simply shot him in the chest,” said Wicker.

Yet simply to say “freeze” to someone holding a shotgun—with another armed criminal in the car, along with several hostages—could have been a catastrophic misstep, according to Frank Straub, director of the National Policing Institute. In these scenarios, “you’re either going to neutralize the threat or you’re going to be neutralized and, potentially, all the hostages get neutralized,” observed Straub.

Besides, what would have happened had they let the hostages and the robbers board the getaway plane? No way was the FBI going to let the situation go mobile. And unknown to Wojtowicz, FBI men were waiting for them onboard the plane. If it wasn’t Murphy who stopped Naturale, the job would have fallen to another agent. It’s unfortunate that Naturale was killed. But it happened.

As for Wojtowicz, after serving just five years in prison, he spent the second act of his life parlaying his notoriety into publicity stunts, like signing autographs in front of the same bank he robbed (while wearing an “I robbed the bank” T-shirt). He died of cancer in 2006.

Aron transitioned to Elizabeth Eden, a surgery paid for by the sale of Wojtowicz’s film rights for Dog Day Afternoon. She died of AIDS-related pneumonia in 1987. Murphy’s son died from cancer in 1992. He was only 12. His death left Murphy in unthinkable pain and in need of a higher calling. At the gentle suggestion of his wife, he pursued the diaconate at his church, becoming ordained in 1999. Being a deacon has brought him the peace and sense of service that he had been seeking. “My son now has what we’re all working toward: salvation. Eternal happiness with the Lord. He’s never going to go through the loss of a child or some of the other heartache that’s around,” said Murphy. “I’m getting up there, so it’s not going to be long before we’re together again.”

Kappstatter interviewed Wojtowicz again in 1979, not long after he left prison. He asked Wojtowicz whether, after the death of a friend, a stint behind bars, the collapse of two marriages, and inspiring a Hollywood movie, he had learned anything.

“Yeah,” said Wojtowicz. “Don’t rob a bank.”

Crash Curse

In New York City, traffic deaths are up as enforcement is down.

Nicole Gelinas

Just before 2 am on February 28, 2022, after a night partying in upper Manhattan, Edgar Valette, 39, got into his BMW to drive two friends—Kimberly Martinez, 28, and Michael Santos, 30—home. Careening southbound down the Henry Hudson Parkway, he lost control of his powerful vehicle, vaulting over a barrier onto train tracks 500 feet below. The driver and passengers died. Ten weeks later, just before midnight on May 18, 30-year-old Alwayne Hylton lost control of his own speeding BMW on the elevated Bruckner Expressway, north of the Manhattan border in the southwest Bronx, plummeting to the roadway below to his death. Not long after that crash, on May 26, just before 7 pm, also in the Bronx, an unnamed 25-year-old man sent his Mercedes hurtling off the New England Thruway, landing on the street below; he, too, perished. Also in May, a 36-year-old man rode his motorcycle down the West Side Highway; as the sun rose, he slammed into a median barrier, dying on impact. Weeks later, in Queens, a 28-year-old man crashed his motorcycle “at a high rate of speed” down the Utopia Parkway into a brick wall, with the same fatal results.

A fatal and deliberate crash in Times Square, 2017, involving a mentally ill driver: New York cut traffic deaths significantly over the three decades before Covid-19, but since the pandemic, it’s been a different story.

Since the Covid pandemic hit New York City in March 2020, traffic deaths have skyrocketed, just as they have across the country. Locally and nationally, these deaths have paralleled the same double-digit trajectory upward as homicides and drug-overdose deaths. In 2019, 220 New Yorkers died on city streets, 0 near the record low of 206, set the year before. In 2021, 273 people died, a nearly one-quarter increase in two years. In 2022, as of late May, 93 people have died, down slightly from last year, but Yet the bad raw numbers hide some successes. 12 percent above pre-Covid levels. The changes that the city has made to its streets

Beyond the human toll, this reversal of street over the last decade or so—creating room for pesafety was a particular blow for former mayor destrians and cyclists and slowing car and truck Bill de Blasio, who left office at the end of 2021. drivers—have helped pedestrians, especially, De Blasio had made traffic safety a mayoral cen- who are dying in fewer numbers relative to a deterpiece, promising, in his 2013 campaign, signif- cade ago. The city hasn’t made as much progress icantly to curtail traffic deaths, building on the in protecting cyclists, but nor have cyclist deaths double-digit reductions that his predecessors, soared during the pandemic—an achievement, Michael R. Bloomberg and Rudolph W. Giuliani, considering how much cycling has increased, as had made. By the conclusion of de Blasio’s final New Yorkers avoid the subway and as food-determ, the increased carnage on the roads would livery workers serve people eating more takeout. appear, at first glance, to have undone all the im- Who, then, is perishing now in greater numprovements that he, too, had notched. bers? The victims often fit the profile of those

Progress Interrupted

From 1990 to 2019, New York City cut traffic deaths significantly—but then pandemic disorder sent them soaring. 700 600 CITY TOTAL CITY PEDESTRIAN ANNUAL TRAFFIC DEATHS 500 400 300 200 100 2021 1990 1995 2000 2005 2010 2015 2020

SOURCE: New York City Department of Transportation

Losing Control

killed in the single-car crashes noted above: younger men, drivers or passengers in motor vehicles, often late at night, often speeding. New York’s increase in traffic deaths, in other words, tends to mirror its (and the nation’s) broader public-safety problem: the self-destructive and dangerous behavior of a young male demographic. As with the recent explosion in violent crime, members of this group are taking advantage of a law-enforcement vacuum that lets them get away with ever more antisocial behavior—until it kills them or someone else. Street engineering has mitigated this problem to some degree, and can do more, but it can’t entirely fix it. Policing and other direct enforcement of behavior also have crucial roles to play.

As in many areas of public safety and public health, New York City started the pandemic with an advantage. In 2019, the city’s 220 traffic deaths—whether people in cars, or pedestrians, or cyclists—represented a per-capita rate of about 2.6 per 100,000 residents, just a small fraction of the 11.1 per 100,000 killed nationwide. Among large, urbanized areas, New York stood out for safety, as well. In Miami-Dade County in 2019, for example, the rate was 11 per 100,000; metro Atlanta’s rate was similar. Even among denser northeastern and mid-Atlantic cities, which have long had lower traffic-death rates than the sprawling south and west, New York performed slightly better than Boston, with its 2.8 traffic deaths per 100,000, and much better than Philadelphia, with its 5.7 deaths per 100,000.

Pre-pandemic, New York’s falling traffic deaths made it a national outlier. Between 2011, when traffic deaths hit a modern low nationwide, and 2019, such fatalities across the country rose by 11.9 percent, to 36,355 annually. In Gotham over this period, by contrast, they fell 12 percent. The difference in pedestrian casualties was especially striking. Nationwide, pedestrian deaths began rising in 2010, after having fallen, reasonably steadily, for at least three decades. By 2019, annual pedestrian deaths had risen from their 2009 low by more than half. But in New York, pedestrian deaths fell by 21.5 percent over the same near-decade.


What made the difference? New York’s decade of reengineering its streets to favor walkers and cyclists indisputably saved lives. Bike lanes and new pedestrian islands and plazas narrowed automotive driving lanes, for example, forcing motor vehicles to go slower and giving walkers and cyclists more room to move and cross. As Alex Armlovich, then of the Manhattan Institute, concluded in a 2017 study, “the evidence is clear that Vision Zero”—the aspirational global slogan to achieve zero traffic deaths—“has improved street safety.” Indeed, citywide, as street makeovers expanded, pedestrian deaths fell from 158 in 2009 to a modern pre-pandemic low of 108 in 2017. Such remade streets were good for the safety of people in cars and trucks, too. In 2009, 90 motorists, including motorcyclists, died on city streets; in 2019, the figure was 68.

Enforcement also saved lives. Notes Michael Replogle, deputy transportation commissioner for policy during most of the de Blasio years, “automated and conventional traffic enforcement,” coupled with a lower 25 mph speed limit on non-highway roads, “expanded greatly to discourage aggressive driving.” Automated speed cameras in school zones, introduced by Bloomberg and widened under de Blasio, have slowed drivers over the past decade (even though, until recently, the state legislature required the city to turn the cameras off on weekends and overnight, when the plurality of crashes occur). Crash injuries fell by 13.9 percent in the year after a camera’s installation. More than half of drivers getting a speeding ticket in a school zone needed only one such $50 citation to change their behavior, never receiving another ticket, city data show.

Police action reinforced the technology. In 2018, the record-low year for traffic fatalities, police issued a modern record-high number of speeding tickets—152,381, more than double the 2012 total. Whether making a purposeful substitution or not, as police retreated from the tactic of stopping, questioning, and frisking young men allegedly behaving suspiciously on foot, they devoted some of these resources to stopping and summonsing young men behaving dangerously in cars.

Though cyclist deaths are not driving the current surge in road violence, New York hasn’t made as much progress protecting cyclists as it has pedestrians.



The law-enforcement role in making traffic safer had begun much earlier, under former mayors David Dinkins and Rudolph W. Giuliani. Between 1990, New York’s record-high year for traffic deaths (and murders), and 2001, when Giuliani left office after eight years, traffic deaths fell from 701 to 393 annually; annual pedestrian deaths fell from 366 to 193. On traffic safety, remembers Giuliani, “We tried to apply the same process we applied to the other problems . . . find out where the most deaths were taking place, and figure out what is needed to reduce it.” Among other initiatives, Giuliani launched a TrafficStat program, similar to CompStat for crime, to create a statistical map of hot spots for traffic crashes. And his administration also cracked down hard on drunk drivers and reckless drivers, seizing their vehicles and charging reckless drivers with misdemeanors, something rarely done. The city seized so many vehicles, Giuliani recalls, “that we didn’t know what to do with the darn cars.”

The pandemic and related shutdowns that began in March 2020 have imperiled these intertwined successes of street engineering and law enforcement. Over the past two years, New York has performed even more dismally when it comes to traffic bloodshed than the rest of America. In 2021, 42,915 people died on the nation’s roads, up 18 percent from 2019—but New York saw an even worse increase of 24 percent. As Mayor Eric Adams said in May 2022, “we’ve seen traffic violence increase drastically in the past two years. This is a real crisis.”

Even amid these grim figures, however, we can glimpse what New York has done right, relative to the rest of the country. The least bad news, relatively speaking, is the pedestrian death toll. In 2021, 126 pedestrians died. This figure is higher than the annual average of 114 between 2017 and 2019—again, nothing to brag about. But the pedestrian death toll nationwide has risen much more. Further, in 2022, through the end of May, 43 pedestrians had died, down from the same period in 2021 and 2019.

One might argue that fewer pedestrians died in 2021, relative to the increase in other traffic deaths, because fewer of them were on the streets. This hypothesis was true, for a while. In 2020, pedestrian deaths reached a record low. No pedestrians died in April 2020, a never-before-achieved feat. Yet absent the extremity of total lockdown, that’s not how pedestrian deaths work. Pedestrians enjoy safety in numbers, not just from violent crime but from dangerous drivers. Large crowds on foot in intersection crosswalks, for example, deter drivers from speeding through turns. This reality is why dense New York has long had a much lower pedestrian death rate compared with other cities, where fewer people walk.

Nor are cyclist fatalities pushing New York’s traffic-death numbers higher. This is a testament to the safety-in-numbers principle, as well as to New York’s improved cycling infrastructure. Bike riding soared by an estimated one-third on major corridors during the first year of the pandemic and has remained elevated since. The Citi Bike cycle-sharing program has smashed all its daily pre-pandemic records, with 72,298 rides daily in April 2022, up 21 percent from three years before; during the summer months, daily Citi Bike ridership reaches the low six figures. Tens of thousands of delivery riders have also taken to the streets, often riding faster-moving electric bikes that put their riders in greater danger. Indeed, of the 26 cyclists who died in 2020 and the 19 who died in 2021, 11 were “e-cyclists” (not including moped riders or riders on other vehicles without pedals). Working cyclists face disproportionate risk, but this changing mix of danger did not push the overall cyclist death toll up.

New York’s network of bicycle lanes works, where it exists. During 2021, none of the cyclists killed was riding in a well-protected bike lane when he died. One Brooklyn cyclist was killed, though—allegedly by a teenage wrong-way truck driver—in a lane where the city had allowed barriers between cyclists and drivers to deteriorate, replacing physical separations with inadequate lane markings. The provisional takeaway is that the city’s bike lanes and other infrastructure are a success; they are just insufficient to keep up with growing demand. As Sergio Solano, a food-delivery cyclist who has

helped organize a trade group, El Diario de los doubled up on motorbikes without helmets, Deliveryboys en la Gran Manzana, to advocate joyriding. In 2020 and 2021, motorcyclist deaths for better safety for working cyclists, notes, outpaced the annual average between 2017 and “putting . . . barriers between cars and cyclists is 2019. There’s one rough way to quantify the working,” albeit “very slowly.” motorcyclist risk-taking that has helped cause

That’s not to say that New York should be these deaths: between 2017 and 2019, according satisfied with these numbers. Deaths remain far to my calculations derived from a state database higher than they are in comparable European cit- of crash factors maintained by the Institute for ies. In London in 2021, for example, 55 pedestri- Traffic Safety Management and Research, just ans and nine cyclists died—less than half of New 11 percent of motorcyclists killed in New York York City’s totals, despite similar populations. City crashes failed to wear a helmet, as required But in New York, experiencing an explosion in by law. In 2020, the figure rose to 17.8 percent, the murder rate since and in 2021, to 25 percent. The proliferation of electric mopeds (which, like motorcycles, but unlike electric bicycles, require a license plate, and thus fall into the “motor vehicle” category) adds to the problem. In the summer of 2020, three riders on bike-share mopeds operated by the Revel startup died, March 2020, let’s declare a modest success when we find one: the city has not lost a decade, or more, of progress in keeping pedestrians (and, to a lesser degree, cyclists) safer. It’s a different story with motor vehicles. In “A new urban blight: drag racers, drivers accelerating at light changes, and drivers who seem enraged.” 2020, 122 motorists died, a 52.5 percent increase two helmet-less, despite city requirements. That over the average between 2017 and 2019. In fall, a Revel moped rider killed a Manhattan pe2021, 113 car and truck drivers (mostly car) and destrian, a rare but real example of the danger motorcyclists died, a 42.2 percent rise over the that fast-moving two-wheeled vehicles can pose pre-Covid level. The trend is continuing in (nearly all pedestrians are killed by car and truck 2022, with 41 motor-vehicle occupants killed drivers). through May, up from 25 for the same period in The most significant increase in the death toll 2019. since 2020, though, is borne by people driving

Motorcyclists make up a big share of the in- or riding in cars or SUVs. Since 2020, New Yorkcrease. Though many motorcyclists are responsi- ers on foot, on bicycles, and in other cars and ble hobbyists, and many others aren’t at fault in trucks will be familiar with a new urban blight: fatal crashes with cars and trucks, this category drag racers, drivers accelerating at light changes, of motor-vehicle death has long been a mark of and drivers who seem enraged. This translates young male recklessness. Before the pandemic, into real public-safety deterioration. Between motorcycles made up just 2 percent of registered 2017 and 2019, an average of 46 vehicle occuconveyances in the city but 14 percent of fatal pants—drivers or passengers—died annually crashes. As the city estimated in 2015, almost all in car crashes. The pandemic smashed—litermotorcycle crash victims were male, more than ally—these historically low numbers. In 2020, half were under 35, and more than four in ten the death toll was 71, and in 2021, 64—increases were driving without a license. of 54.3 and 39.1 percent, respectively. As of May,

Motorcycling has become far more dangerous the situation was worsening: 33 such motor-vesince the pandemic began. If you’ve spent time hicle occupants have died so far this year, nearly in New York City over the past two and a half double the average of 17 in the first five months years, you’ve doubtless observed more cyclists of 2017, 2018, or 2019.


It’s never been a secret that male drivers—particularly, young male drivers driving their own vehicles—are responsible for most traffic deaths. A 2010 city study found that “80 percent of pedestrian [fatal or serious] crashes involve male drivers, while only 57 percent of New York City driver’s licenses are held by males.” In about eight in ten fatal crashes, drivers were behind the wheel of their own vehicle, not a commercial vehicle such as a taxi. Between 2017 and 2019, in more than one-quarter of fatal crashes (across genders), the driver was under 30.

The gender breakdown hasn’t changed much of late. Men continued to be behind the wheel in more than eight in ten fatal crashes the last two years. The other risk factor, though, has grown even riskier: young drivers. In 2020, drivers under 30 were in 100 fatal crashes, up 42.9 percent from the average between 2017 and 2019 and constituting 31.4 percent of the total. In 2021, the trend, though abated, continued, with young males driving in 28.2 percent of all fatal crashes. Just as fewer motorcyclists are wearing helmets, as the law requires, fewer drivers and passengers in fatal crashes were wearing seatbelts, as similarly mandated—40.8 percent were unbelted in 2021 and 2022, up from 31 percent in the immediate pre-Covid years.

More drivers are speeding: 69 such deaths occurred in 2020, a major increase over the average of 42 annually between 2017 and 2019, followed by a new high of 80 such deaths in 2021. And more are drinking. Though drunk-driving deaths fell in 2020—presumably because bars and restaurants were closed for much of the year—they rose past the pre-pandemic average in 2021, as entertainment venues reopened. Finally, more fatal crashes are occurring at night: 97 in 2020 and 114 in 2021, both up from the pre-pandemic average.

As former city traffic commissioner Sam Schwartz says, drivers are “behaving outrageously.” Schwartz visited the scene of that single-car February 2022 Hudson Parkway crash that killed three, including the driver—and found no skid marks. In 2020, one could at least argue that drivers were behaving recklessly by accident. With roads devoid of traffic, people could perhaps speed or lose their attention without fully noticing. By 2021, though, normal traffic levels had returned—and the deathly trips continued.

What was the big factor that changed beginning in 2020, apart from the less trafficked roads and the fact that many newly unemployed and out-of-school teens and young men had extra time for motoring misadventures? It’s not as if New York ripped out its new pedestrian plazas and bike lanes. To the contrary, the city made even more such space for cyclists and walkers, and for diners, via its “open streets” and “open restaurants” recreation and dining programs.

Rather, law enforcement changed. Automated cameras continued to issue fines—nearly 4.4 million tickets in 2020 (numbers for previous years aren’t comparable, as the city greatly expanded the program in 2020). But police-directed enforcement of the laws of the road—against drunk driving, speeding, and general reckless behavior—plummeted. Between 2017 and 2019, the NYPD issued an average of slightly more than 1 million “moving violations”—for speeding, red-light running, and other dangerous driving—annually. In 2020, the figure dropped to 510,000, and in 2021, to 508,000. Just as New York drivers were proving that they couldn’t regulate their own behavior, the city severely curtailed its regulation of that behavior.

On the densest streets and avenues, drivers still find some of their reckless behavior thwarted by bike lanes, pedestrian plazas, speed bumps, frequent red lights and intersections, and other physical obstacles, and the city can and should create more such obstacles. But bike lanes and bus lanes, especially those without physical barriers to keep out car and truck drivers, aren’t entirely self-enforcing. As A. Mychal Johnson, founder of South Bronx Unite, a quality-of-life advocacy group, notes of his own truck- and car-dominated neighborhood, “traffic deaths have increased because traffic enforcement is not a priority for the NYPD. They overlook double-parked cars, especially their own, vehicles in bike lanes, delivery trucks double-parked, and trucks traversing through residential areas to avoid congested truck routes.” In early May 2022, the driver of a box truck careening through a mid-Bronx intersection struck and killed 16-year-old Alissa Kolenovic, who was walking to school; weeks later, a truck driver killed a bicyclist on Bruckner Boulevard. In both instances, chaotically designed streets, coupled with negligible enforcement of traffic in a semi-industrial area, were to blame.

The law-enforcement absence is most glaring on limited-access highways and wide arterial roads. “We’ve made strides on intersections,” says Matthew Carmody, a veteran transportation engineer at city planning firm AKRF. “But away from intersections, where you have these long, wide boulevards” and “big distances between intersections . . . [drivers] speed.” Wide roads, including highways, are “empty of traffic many hours of the day.” Crash barriers did not stop Valette and his friends from plunging over an upper Manhattan highway embankment to their deaths.

Bad drivers, like other antisocial actors, have proved since 2020 that they’re not going to control themselves. The Adams administration has taken some welcome steps to do so. Most important, the mayor realizes that traffic violence and violent crime go together. “I’m sending a clear message that this city is not going to be a city of disorder,” he said in May, and “vehicle crashes” are a sign of “a city of disorder.” The mayor will revive the Giuliani-era TrafficStat program, pinpointing locations where “people are speeding, driving fast, reckless[ly] driving” for stepped-up police enforcement. The city will also continue to build out speed bumps, new intersection designs that slow traffic with raised markings, and bike lanes.

This spring, the city successfully lobbied the state legislature to let it keep its speed cameras on 24 hours daily. But cameras can’t make up for the human enforcement pullback, especially since, as The City news site reports, drivers are increasingly using bogus or obscured license plates to evade cameras. Police officers must stop drivers with such plates.

The Adams administration can’t ignore the death toll among working delivery cyclists. Most cyclists are independent contractors for apps such as DoorDash and Grubhub. The business model requires them to ride long distances, and quickly, to deliver hot food. A double-digit annual death number that would be unacceptable in any other blue-collar occupation, such as construction, should be unacceptable in this one as well.

As for reckless car and truck operators, repeat dangerous drivers, including the tens of thousands whose vehicles rack up five or more speed-camera violations yearly, should face consequences. Before 28-year-old Tyrik Mott sped the wrong way down Gates Avenue in Brooklyn and killed infant Apolline Mong-Guillemin in her baby carriage in September 2021, his vehicle had racked up 91 speed-camera tickets, as Streetsblog reported. Cops had repeatedly pulled him over, and the state had suspended his license. But Mott kept driving. Similarly, Michael de Guzman, the person charged with drunkenly hitting and killing New York University student Raife Milligan in May 2022, had four speeding violations on his vehicle in just five months. But both still drove with impunity.

To stop such drivers before they kill, Adams should revive the other Giuliani-era program, as well: seizing the vehicles of the most reckless drivers, people caught behaving dangerously several times behind the wheel. “If you get arrested for reckless driving to the point where we charge you with a misdemeanor, we’re going to take your automobile from you,” Giuliani said in 2000. “And we’re going to take it from you . . . because it’ll remind you that this is important. This kills people. It also kills you.” Twenty-two years later, his words are more relevant than ever.

How the Media Polarized Us


The shift from ad revenue to the pursuit of digital subscriptions has turned journalism into post-journalism.

Andrey Mir

Public trust in the media has hit an all-time low. Common explanations for this crisis of credibility include bias, polarization, and fake news, but these causes are themselves effects of the tectonic, and generally overlooked, shift in the media’s business model. Throughout the twentieth century, journalism relied for its funding predominantly on advertising. In the early 2010s, as ad money fled the industry, publications sought to earn revenue through subscriptions instead of advertising. In the process, they became dependent on digital audiences—especially their most vocal representatives. The shift from advertising to digital subscriptions invalidated old standards of journalism and led to the emergence of post-journalism.

Everything we once knew about journalism depended on the model of the ad-funded news media. Advertising accounted for most of the news industry’s revenue during the twentieth century.

This business model provided a selective advantage to certain kinds of media. Since the revenue from copy sales was not sufficient to maintain news production, news outlets needed to attract advertising. As a result, media that relied mostly on the reader’s penny, such as the formerly influential working-class press, eventually lost out in the marketplace. The mass media that oriented themselves around the “buying audience”—the affluent middle class—received money from growing advertising and thrived.

In political economy, this selective effect is called “allocative control.” The ad money did not tell the media what to do; it just chose the media that encouraged its audience to buy goods. Media critics Edward S. Herman and Noam Chomsky argued that it affected the mechanism of discourse formation, as the media maintained a context favorable for consumerism and political stability, and thereby “manufactured consent.”

This business model was extremely successful. By the end of the twentieth century, the news media had reached the apex of their 500-year history. Even regional newspapers such as the Baltimore Sun possessed several well-staffed foreign bureaus. Never were the media as rich and influential as in their golden age, just 25 years ago. Plenty of journalists still on the job remember those glorious days.

Under the ad-based model, media capital represented a significant social force. It protected its interests, its market value, and therefore its independence. The abundance of money enabled newsrooms to develop an autonomy secured by the division between news production and ad-sales departments—a “glass wall” between ads and news. Preselected by ad money, news organizations geared toward affluent audiences became influential to the point that their autonomy determined their market value.

Ad money carried the risks of advertisers’ pressure in news production, which would have undermined newsroom autonomy, a source of reputation and therefore capitalization. So professional standards were elaborated to protect journalism from advertisers and establish the credibility of news coverage. Credibility was seen as a professional virtue but also as a commodity. “The theory underlying the modern news industry has been the belief that credibility builds a broad and loyal audience, and that economic success follows in turn,” declared the American Press Association in its 1997 “Principles of Journalism” statement.

Thus, paradoxically, the allocative control of ad money determined the allegiance of mainstream media to corporate elites (hence, “corporate media”) but also sustained high-quality journalism. Newsroom autonomy was protected by the standards of objectivity, nonpartisan and unbiased reporting, attention to the arguments of all parties involved, investigative rigor, the separation of fact from opinion, and other guarantees enshrined in the ethical and professional codes of news organizations.

The same set of professional standards that was meant to secure credibility and independence from ad money turned journalism into a public service. “The central purpose of journalism is to provide citizens with accurate and reliable information they need to function in a free society,” claimed the American Press Association’s statement. “Commitment to citizens also means journalism should present a representative picture of all constituent groups in society. Ignoring certain citizens has the effect of disenfranchising them.”

The only entity that journalism was called on to be biased against, even meticulously so, was power. This was a part of the credibility code, too. Endowed with these principles, initially rooted in ad funding, journalism evolved in the twentieth century as the “watchdog of democracy,” positioning itself above partisan party struggle.

Finally, the media’s dependence on advertising determined their attitude toward their readers. If the audience was supposed to be affluent, mature, and capable, so, too, were journalists

expected to avoid judgment when reporting— they were to present the naked facts and the positions of both political sides to the public to judge. Hypocrisy and professional arrogance, of course, had always had a place in the profession: journalists have long seen themselves as a kind of priestly class. Nevertheless, leaving judgment to readers (or at least pretending to do so) was one of the fundamental virtues of ad-funded journalism. And since publications wanted to broaden their audience, not narrow it, they served reader preferences by downplaying, rather than emphasizing, potentially divisive issues.

All of this cooled the political activity of the public. In his 1999 book Rich Media, Poor Democracy, media historian Robert McChesney described the low degree of participation in elections as “democracy without citizens.” If the medium is the message, then the message of the ad-funded news media was “buy!”—not “vote!” or “protest!” This might have seemed to be bad for democracy, but polarization in society was at a low point, while the influence and prosperity of the mass media were at an all-time high. The political tranquility of the public was a side effect—detrimental or benevolent, depending on one’s perspective.

The Internet broke this idyll. It turned out that the ad-based model relied not on the content attracting an affluent audience but on the monopoly over ad delivery that the Internet simply destroyed. The ad-based media business achieved power and prosperity over the course of 100 years; it collapsed in just ten.

The collapse started with the classifieds. At their peak in 2000, classified ads brought in $19.6 billion, about one-third of newspapers’ revenue. As Craigslist, eBay, and others killed this market, classifieds revenue plummeted to $2.2 billion in 2018. Corporate advertising was next. Suddenly, firms found that they could reach their desired audience online directly and precisely, with full control over content, context, and targeting. Google and Facebook delivered the fatal blow. It became obvious to advertisers that old media had offered them a costly and inefficient method of carpet-bombing their targeted audiences. By contrast, Google and Facebook knew the preferences of billions of individuals and provided personally customized delivery of ads to each of them. In 2013, Google alone made $51 billion in ad revenue. That year, American newspapers’ ad revenue was $23 billion, and the global newspaper industry collected $89 billion in ad revenue. The Google-Facebook duopoly surpassed a 60 percent share of the U.S. digital ad market in 2018. It became increasingly clear that old media had little chance of competing with digital platforms.

Ad revenue in the U.S. press hit rock-bottom in 2013, falling below the level of 1950, when the industry started measuring the print ad market. In 2016, the Newspaper Association of America stopped reporting newspapers’ annual ad revenue: this source of revenue had basically ceased to exist. Residual advertising in print media, both offline and online, lost its industrial scale and any commercial meaning. Today, advertising contracts in the media often resemble charity from ideologically aligned businesses.

Advertising revenue fell below the level of reader revenue at the same time across the world. In 2014, ad revenue in the global newspaper industry ($86.5 billion) trailed reader revenue ($92.4 billion) for the first time in the history of industrial measurement. Even the strongest American newspapers could not hold advertisers: the New York Times began getting more revenue from readers than from ads in 2012. Observing these changes, McChesney wrote in a foreword to the 2015 reissue of Rich Media, Poor Democracy that “the marriage of capitalism and journalism is over.”

Divorced by capitalism, journalism now sought new partners. Some publications invested their hopes in ancillary businesses—from organizing conferences to selling wine—but these markets were already saturated. Others courted philanthropic billionaires or public funding, but the handful of high-profile survival stories could not arrest the dynamic of decline. The news media returned to their natural and only remaining source of revenue—selling content—at a time when subsisting on print subscriptions and newsstand copies was no longer viable. Losing ad business and having no support from the printed word, news organizations turned to their last hope: digital subscriptions.

Who was the digital audience by the early 2010s? Social media had already spread around the world, beginning with young, urban, educated, and usually progressive people. Both Twitter and Facebook were created by youth, for youth. Social-media users strove to find, produce, and share facts, evidence, opinions, and expertise—anything that could trigger interest from others. Discourse on social media involves a dispersed mechanism of mutual informing that does the job of what I call the “viral editor”: selecting, refining, and delivering socially relevant content, optimized for virality.

This tendency created an alternative news environment. And led by the pioneers of digital activism, young progressives shaped an alternative agenda. Before long, they revealed how significantly their agenda differed from that of the old mainstream media.

The transition of news coverage and public discussions from legacy media to social media invited politicization. The old news media, which tended to serve established institutions, began competing with the multidirectional, versatile, and oscillating forces born in the live interactions of peers and structured by the viral editor. A power structure based on hierarchical distributions of material resources—the “pyramid”—began competing with one formed by the coagulating contributions of information resources: the “cloud.”

These forms of social organization are incompatible. And the greater the differences between the agendas shaped by social media and by the mainstream media, the more intense the clash becomes. Between 2009 and 2014, the alternative agendas induced on social media became so powerful that they produced a “crisis of authority,” in the words of Martin Gurri, who described this as the “revolt of the public.” The viral editor agitated the digitized, urban, educated, and progressive youth to the point of political indignation. Protests, and even revolutions, broke out across the globe, including the Arab Spring (2009–11), Occupy Wall Street (2011), the “social justice” protests in Israel (2011), the Indignados protest in Spain (2011), the student protests in Greece (2010–11), the anti-Putin protests in Moscow (2011–12), the Taksim Square Protest in Turkey (2013), and many others.

Each of these events had its own set of causes, of course, but all had several features in common. First was the demographics of the participants—they tended to be, again, those digitized, urban, educated, and progressive youth. Second, the protests generally opposed the establishment, regardless of ideology, from Hosni Mubarak’s and Vladimir Putin’s regimes to the U.S. economic and political system during Barack Obama’s presidency. Social media elevated the role of progressive discourse producers: academic, bohemian, and social-justice activists. The main social feature of the new medium—the intensity of self-expression in the pursuit of response—tended to convert private talks into public activism and thus empowered activism as a mind-set, not just an activity. In the 2010s, activism gained momentum in digital media and thus proliferated far beyond its traditional circles.


Newspapers once relied heavily on advertising revenue, an arrangement that kept journalists’ natural liberal predisposition in check.

Such were the conditions in which legacy media began looking for business opportunities in a new digital environment. To sell digital subscriptions, they needed to find ways to attract the digital audience.

This is not to say that journalists were complete strangers to the digital public. On the contrary: nowadays, journalists usually come from the ranks of urban, educated, and progressive elites; they are often young; and some were themselves social-media pioneers. Journalists, therefore, were naturally predisposed to align with the dominant ethos of early social media—in part, because they “have always been more liberal than their fellow countrymen,” as Batya Ungar-Sargon pointed out in her 2021 book Bad News: How Woke Media Is Undermining Democracy. “But in the past,” she maintained, “this liberalism was checked by their publishers, who were often the owners of large corporations, or Republicans, or both. They wanted their newspapers and their news stations to appeal to the vast American middle, which meant that journalists were not at liberty to indulge their own political preferences in their reporting.”

Indeed, the ad-based business model had kept the natural liberal predisposition of journalists in check. The balance between the liberalism of the newsrooms and the business necessity to appeal to the “vast middle” for better advertising maintained both the market value and cultural power of journalism. Despite its inherent liberalism, journalism still needed to address affluent consumers, encouraging journalists to follow the professional standards of objectivity and unbiased investigative rigor. The highest examples of that work—such as the Watergate investigation, the Pentagon Papers, and the Boston Globe’s “Spotlight” investigation of sexual abuse in the Catholic Church—bolstered the professional reputation of the news media.

Yet the essential ingredient of that recipe—the advertising-dictated necessity to appeal to the median American—had disappeared by the early 2010s. The inherent liberal predisposition of the newsrooms was suddenly unchecked by any financial imperative.

The ad-revenue model funded a cornucopia of print choices for consumers.


The cultural proximity between journalists and the progressive users of early social networks, the news-gathering power of social media, and the need for media organizations to secure digital subscriptions led to an ideological convergence between large media organizations and digital progressives. Quantitative studies cited by Ungar-Sargon indicate that the use of terminology associated with woke politics, such as “racism,” “people of color,” “slavery,” “white supremacy,” and “oppression,” has skyrocketed in the American mainstream media precisely since 2011.

The principles of news coverage also changed significantly. Coverage came to be determined by the pressing social issues highlighted by the progressive Twitterati. The need to go digital made the media treat Twitter as their referential source for discourse formation. This radical shift affected the entire news ecosystem—including television and radio, which dutifully followed the changing discourse model of the print press, acquiring their own digital addictions and dependence on the social-media crowd.

Still, this ideological transformation did not bring the media any financial gains—at least, not until Donald Trump arrived on the scene.

The metamorphosis of the media did not happen during Trump’s presidency. Instead, Trump’s ascension was in part a result of the media’s transformation from being ad-funded to chasing digital subscriptions.

As social media began permeating society, the user demographic grew older, more rural, less educated, and more conservative. Comparative data on social-media proliferation suggest a hypothesis that the online activity of a certain demographic group can lead to the political activation of this group if the group exceeds an “awareness threshold”—defined as the point at which about 60 percent of the group uses social media. The urban, college-educated, and aged 18–49 cohorts crossed this threshold in 2011. Social-media use for older, less urban, and generally more conservative demographics had reached about 60 percent by 2016.

In the early 2010s, digital progressives still identified with a new, decentralized power structure that fought the establishment. But as the mainstream media gravitated toward social media and propelled them into a dominant discourse-formation role, digital progressives became the establishment. In a matter of several years, digital progressivism resettled from the “cloud” to the “pyramid,” from a posture of rebellion against centralized power structures to one of alliance with them.

Meantime, social media kept growing. By 2016, digital conservatives represented the new “cloud.” As their younger predecessors had done years earlier, they became a socially significant force. Soon enough, they discovered that the agenda that the mainstream media imposed clashed with their views—and their sense of losing ground and losing country grew sharp. The power of social media lies not so much in exposing mainstream bias but in revealing that so many other people see these biases, too. As Marx said, an idea becomes a material force as soon as it grips the masses; by providing access to information and self-expression, social media enabled the materialization of indignation.

What happened next is history. Donald Trump sensed the demand, gained extraordinary media attention for free, made himself a channel for conservative indignation, and brought about the release of the already built-up resentment of the conservative “cloud.” His rise used the same mechanism that underlay the early Twitter revolutions. In terms of media ecology, Trump’s ascension completed the Occupy Wall Street movement, but on a different demographic basis.

Between 2010 and 2016, digital subscriptions remained insignificant from a business perspective. The news media wooed the digital progressives, but it was not until the conservative demographic—and Trump—arrived as forces on social media that the news media started raking in digital subscriptions. Until then, the mainstream media did not have any commodity to offer their newly chosen referential group. Trump helped fix that. He became that missing commodity immediately after his shocking victory. The mainstream media understood the signal, upgraded Trump from amusement to existential danger, and started selling the Trump scare as a new commodity.

The media quickly learned to solicit subscriptions as support for a noble effort—the protection of democracy from “dying in darkness,” as the Washington Post put it. A new business model emerged, soliciting subscriptions as donations to a cause. Donations required triggers that the love-hate alliance of Trump and the media readily supplied. The crucial part of the new business model was not just Trump himself but the significant number of his supporters. The most terrifying thing was that fully half the electorate supported such a “monster” (in the view of the other half).

By no means were the media interested in mitigating this divide. They needed to maintain frustration and instigate polarization to keep donors scared, outraged, and engaged. The news media reminded readers how outrageous the outrageous events were, and their focus turned toward such events. As the scare came to replace news as a commodity, the mainstream media switched from news supply to news validation.

Both ends of the political spectrum were involved. Right-wing outlets also tried to sell scare instead of news—the scare of losing ground and country. The new business model made the media the agents of polarization. They organically joined the mechanisms of polarization that had formed in the larger media environment—on the Internet and social media. Some mainstream media grew their digital subscriptions severalfold during Trump’s tenure.

What comes next for the media industry?

The validation of disturbing news within certain value systems has finally become a viable business model. But this business model has stratified the press, bringing meaningful results only to large, nationally concerned media outlets. News validation creates a swarming effect: people want to have disturbing news validated by an authoritative notary with a greater followership. Audiences want to pay only for flagship media, such as the New York Times or the Washington Post. If other, smaller media outlets don’t join the chorus, they risk digital backlash; if they do join it, they struggle to differentiate themselves and lack authority to be a recognized news validator, anyway. Most subscription money flows to a few behemoths. The new subscription model has led not only to media polarization but also to media concentration.

The biggest loss, however, is the mutation of journalism into post-journalism. The death of those newspapers that shut down before this mutation was at least honorable. Journalism wanted its picture to fit the world. Post-journalism wants the world to fit its picture, which is a definition of propaganda. Post-journalism has turned the media into the crowdfunded Ministries of Truth. The worst part for journalists is that only a few enterprises can succeed in this new business model. The worst part for society is that all legacy media need to pursue digital subscriptions or viewership as their last hope for survival, and thus must join the race of post-journalism.

A temptation always exists to blame media bias on a closely held conspiracy, but the real drivers lie deeper. Creed and greed might fill the medium with the messages, but it is the medium itself that defines polarization—its true message. If ad-driven media manufactured consent, reader-driven media manufacture anger. If ad-driven media served consumerism, reader-driven media serve polarization. There can be no “solution” for a shift of such magnitude. “How do we fix polarization?” is the wrong question. The right question is, “How are we going to live with it?”

Can We Manage to Integrate?


Chicago suburb Oak Park’s effort to achieve racial balance counsels skepticism about engineered diversity.

William Voegeli

One lesson from America’s two-decade Afghanistan debacle is that you can’t achieve success if you can’t define it. A political goal is seldom attained if every description of it sounds vague or arbitrary; it cannot be realized by any known policy mechanism; and it draws strong opposition from foes while earning only tepid support from putative constituents. Housing integration is no exception. As a senator, Walter Mondale was a leading congressional sponsor of the Fair Housing Act. After its 1968 enactment, he said that the law’s purpose was to replace ghettos by means of “truly integrated and balanced living patterns.” Fifty-four years later, it remains unclear how to parse or implement that objective.

Not that people have stopped caring.

A view of downtown Chicago from Oak Park’s Metra commuter rail station


The Imperative of Integration (2010), by philosophy professor Elizabeth Anderson, contends that integration “promotes greater equality and democracy” by enlarging the shared civic realm from “particularistic ethno-racial identities” to “identification with a larger, nationwide community.” Similarly, Richard Rothstein of the Economic Policy Institute argued in The Color of Law: A Forgotten History of How Our Government Segregated America (2017) that federal policies are the main reason we have housing segregation, by which he means that blacks and whites are far less likely to share a neighborhood than random chance would predict. Such segregation has harmed all Americans, he contends, but especially blacks. Further, only countervailing federal policies can end it, which is urgent because “integration will benefit all of us, white and African American.”

At one point, however, after endorsing various government measures to promote integration, Rothstein admits that it’s “appropriate to wonder why we should go to great expense to persuade people to follow a policy that nobody, black or white, seems to want.” One attorney told him: “I am a middle-class African-American professional woman, and I want to live where I can be comfortable, where there are salons that know how to cut my hair, where I can easily get to my church, and where there are supermarkets where I can buy collard greens.” And the evidence for anti-integration sentiment goes beyond anecdotes. Rothstein cites surveys showing that most whites and blacks speak favorably about integration in the abstract. But the data immediately reveal a difficulty: whites consider a neighborhood integrated when the proportion of blacks residing there is around 10 percent, close to the present national total of 12.1 percent. (Rothstein ascribes this preference to whites’ desire to “dominate.”) The same polls show that African-Americans believe that a neighborhood is integrated when blacks account for 20 percent to 50 percent of the inhabitants, two to four times their proportion in the national population.

Is integration still a goal worth pursuing?

That whites and blacks have irreconcilable ideas about what integration means is no small problem. If, as a thought experiment, we subordinate every conflicting political consideration to racial integration, a housing czar could assign people homes in specific locations. By embracing the standard that African-Americans should constitute 12 percent of the residents of each block, he could use his unchecked power to integrate every zip code and census tract. Yet, if the goal is to increase the number of communities where blacks amount to one-fifth or half of the local population, there’s an obvious limit to what he could do, given that they constitute just one-eighth of the national population. Even the most obsessive social engineer would eventually run out of blacks to integrate, leaving some places integrated according to the more expansive definition of the term and many others with few or no African-American residents.

In this scenario, not only would many communities remain predominantly white; no community could be predominantly black. Sociologists Maria Krysan and Reynolds Farley showed in 2002 that the first choice for 20 percent of African-Americans was to live in a neighborhood that was all-black, and another 23 percent preferred one that was more than two-thirds black. In 2019, a banker, who could have afforded a home anywhere in Chicago, explained to journalist Carlo Rotella his decision to reside in South Shore, a neighborhood more than 90 percent black: “I value the quality of Afrocentrism, and not just in the political sense. I value being recognized and regarded as normal, not being seen when I walk down the street as different and remarkable.”

Suppose the housing czar accedes to those blacks who want to live in a predominantly black area. If they amount to 43 percent of the U.S. black population, the zero-sum problem becomes even more acute. Few communities will ever be “truly integrated and balanced,” by any definition, if only 57 percent of black Americans—6.9 percent of all Americans—are available for the czar’s integration project.

Step outside this thought experiment, back to the reality where Americans have considerable latitude to move from place to place, and it becomes even harder to spell out what integration means and how we achieve it. Integration in the real world requires public policies, economic incentives, and social pressures that induce some people to relocate and some to stay put. Otherwise, it’s highly likely to meet a Chicago alderman’s cynical, but empirically grounded, definition of integration: the transitional period that commences when the first black family moves into a neighborhood and concludes when the last white family moves out.

Rothstein defines a “stable integrated community” as a suburb with a black population no more than 10 percentage points over, but also no less than 10 under, the proportion of blacks in an entire metropolitan area. In greater New York, where 15 percent of the residents are African-American, the boundaries would be 5 percent and 25 percent, and they’re between 22 percent and 42 percent in metropolitan Atlanta, where the figure is 32 percent.

Rothstein’s proposals for achieving this goal do not lack for audacity. One is for the federal government to purchase, at market rate, houses for sale in suburbs where African-Americans are underrepresented, and then sell them to black buyers at steeply discounted prices. Another calls on Congress to “amend the tax code to deny the mortgage interest deduction” to all homeowners in a suburb where the proportion of black residents falls more than 10 percentage points below the metropolitan average.

He implicitly acknowledges that a Catch-22 will impede such programs. Such sweeping proposals won’t be politically feasible until a widespread “sense of outrage” exists, based on the acceptance of his thesis about housing segregation’s causes and effects. This is the climate of opinion that we might expect in an America where a political party could win congressional majorities by promising to suspend the mortgage interest deduction in insufficiently integrated suburbs. In that country, though, Rothstein’s policy agenda would be redundant. That version of America would already comprise thousands and thousands of predominantly “white communities whose interracial hospitality,” as he terms it, had become “widely known.”

The more promising course, then, may be to heed the advice offered on many Prius bumpers: think globally but act locally. One of the most durable and, by its own standards, successful efforts to achieve a stable integrated community can be found in Oak Park, Illinois, a suburb of some 55,000 residents abutting Chicago’s border, eight miles west of the Loop. Oak Park was 99 percent white in 1970, but blacks accounted for 11 percent of its residents in 1980 and 18.5 percent by 1990. Based on the pattern established in many other Chicago neighborhoods and suburbs, as well as in communities across the country, Oak Park should have been poised for “white flight,” as the black population’s growth reached a “tipping point” that saw the slow departure of whites turn into a rush for the exits.

Instead, Oak Park’s demographics have changed only slightly. In 2021, the village’s population was 18.4 percent black. (As such, it satisfies Rothstein’s criterion for integration, since African-Americans account for 16.7 percent of metropolitan Chicago’s population.) The modest decline in Oak Park’s white population, from 76.9 percent in 1990 to 66.1 percent in 2021, corresponds to a rising number of Hispanic and Asian residents.

To achieve this degree of integration and then sustain it for decades is as unusual in northeastern Illinois as it is throughout the United States. You won’t find it anywhere in Oak Park’s immediate vicinity, which is made up of other communities that were also nearly all-white for most of the twentieth century. An adjacent village, River Forest, remains 83 percent white and 7 percent black. Cicero and Berwyn, to Oak Park’s south, are now predominantly Hispanic, with small minorities of non-Hispanic whites and even smaller ones of non-Hispanic blacks.


Just west of Oak Park, Maywood and Bellwood have large African-American majorities, as does Austin, the Chicago neighborhood immediately east of Oak Park, which is 84 percent black and 4 percent white.

Austin’s transformation from middle-class and predominantly white (more than 99 percent white in 1960) to poor and predominantly black (over 86 percent black in 1990) was the proximate cause of Oak Park’s decision to confront and control its demographic change. “Reconsidering the Oak Park Strategy,” an academic paper written in 2002 by Evan McKenzie, a political scientist, and Jay Ruby, an anthropologist, states: “The 1970s witnessed classic block-by-block resegregation in Austin, an event that had enormous psychological impact on Oak Parkers.” As a result, “Austin became a negative example for many Oak Parkers, who were determined to chart a different course.”

That course became “managed integration,” also known as “integration maintenance” or “intentional integration.” The policy was carried out by the village government, advisory boards, civic groups, and, above all, the nonprofit Oak Park Regional Housing Center (OPRHC), created in 1972. The goal was to assist people seeking to move into Oak Park, especially blacks, while reassuring those thinking about leaving Oak Park, especially whites.

Beyond stabilizing Oak Park’s demographic profile to resemble closely that of the Chicago metropolitan area, the managed integration program worked to prevent the emergence of any predominantly black or white neighborhoods within the suburb. As J. Robert Breymaier, executive director of OPRHC from 2006 to 2018, has written, the goal is to encourage as many relocations as possible that will “sustain or improve the integration of a particular building or block.” McKenzie and Ruby observed that 81 percent of Oak Park’s blocks had at least one black family in 2000. This is, they say, “an achievement that few communities have realized.” It is also, however, an effort consistent with writer Steve Sailer’s derisive opinion that managed integration amounts to making sure that there is “one black per block” but as few as possible in excess of that.

Initially, Oak Park’s managed integration effort focused on homeowners, seeking simultaneously to encourage “fair housing”—a nondiscriminatory real-estate market—and discourage white flight. To keep the sight of For Sale signs on lawns from triggering panic selling, as had occurred in Austin and other Chicago neighborhoods, Oak Park prohibited them. A 1977 Supreme Court ruling held that such bans violated the First Amendment, but because no Oak Park real-estate agent has challenged it in court, the prohibition remains in effect as a practical matter.

Oak Park also offered, beginning in 1978, the Equity Assurance Program, an insurance policy providing protection from housing-market changes related to integration. In Chicago neighborhoods that had resegregated, either the fact or fear of demographic change had led many homeowners to sell at a loss. The Equity Assurance Program guaranteed policyholders that they would be compensated if they found themselves selling their home at a price below what they paid for it.

Perhaps the best evidence for the success of Oak Park’s effort to stabilize integration is that no claims have ever been made under this insurance program. Instead, property values rose steadily as Oak Park became one of the most affluent suburbs in western Cook County. Breymaier writes that diversity is among Oak Park’s “core values,” central to its “identity and sense of place,” as well as the village’s “brand.” Integration, in this view, is sustained by a virtuous circle: it draws people of all ethnicities who value life in a diverse community, and then gives them a stake in preserving it. The steadily climbing property values, in turn, guarantee that homeowners of any ethnicity are likely to be upper-middle-class professionals with compatible outlooks and concerns.

By the time McKenzie and Ruby examined Oak Park’s managed integration program, nearly 30 years after its inception, it had “become so complicated that few Oak Parkers fully understand it.” There’s nothing esoteric about a crucial element, however. Oak Park is an older inner-ring suburb, unlike those around the country that saw subdivisions of single-family homes spring up in the years after World War II. As a result, it has fewer homeowners and more renters than most suburbs. Breymaier reports that when the managed integration effort began in the 1970s, half of Oak Park’s residential units were rental properties; 40 percent remain so. Most of these rental units are apartments in small buildings, managed by “mom-and-pop” landlords who own, at most, a handful of additional apartment buildings.

The city government has used sticks, such as building inspections and the threat of fines, and carrots, including loans and other financial benefits, to motivate landlords to use OPRHC as their rental agent. Through the city’s Diversity Assurance Program (DAP), for example, owners of apartment buildings can receive low-interest loans to make upgrades in exchange for a five-year commitment to do business with OPRHC. In addition, McKenzie and Ruby write, DAP “pays the owner rent for leaving apartments empty until a tenant of the proper race can be found—i.e., a white tenant for a predominantly black building, or a black tenant for one that is predominantly white.”

Breymaier estimates that OPRHC brokered 40 percent of the leases signed in Oak Park over the five years from 2010 through 2014. OPRHC, in turn, engages in “affirmative escorting” or “reverse steering” when an apartment-seeker contacts it to inquire about vacancies. A 1988 Chicago Reader article on Bobbie Raymond, OPRHC’s founder, described the goal less clinically: for clients to end up in “appropriate apartments.” That is, “whites are usually given listings on the east side of the village, next to Chicago’s mostly black Austin neighborhood.” Blacks, conversely, “are sent to white neighborhoods in the center and west of Oak Park, or referred to other suburbs with newer housing.” Breymaier calculates that 68 percent of relocations that OPRHC facilitates are “affirmative moves”—ones that increase a block’s or a building’s racial integration. By his estimation, only 25 percent of relocations taking place on the open market, without OPRHC involvement, are affirmative moves.

Managed integration operates in a gray area. As Breymaier carefully stipulates, “Landlords do not have the same legal ability to engage in integration activity that the nonprofit and property-free Housing Center enjoys.” Translated, this means that an Oak Park landlord would invite endless trouble by rejecting an application from a tenant of the “wrong” racial or ethnic group. OPRHC, by contrast, has spent 50 years telling clients, based on their race, that they should consider this neighborhood rather than that one, and has done so without incurring lawsuits or bad publicity. “It is through direct, face-to-face conversation that OPRHC addresses irrational fears, provides missing information, replaces myths and stereotypes with facts, and engages in gentle persuasion to consider new options,” in Breymaier’s account of the coaching process. “This results in a much different housing search than would occur without OPRHC.”

Its long-term success in stabilizing integration makes Oak Park an exception. But what rule does it prove? It’s clear to Breymaier that, because managed integration has bequeathed “strong and stable property values” and “a foundation for community harmony,” Oak Park has “provided a replicable model for other communities.”

If this is true, why have only a few other places tried the Oak Park strategy, and why has none replicated its success? Presumably, thousands of localities would be pleased to combine strong, stable property values with community harmony. And Oak Park’s achievement is well documented and widely known.

Yet even localities that launched versions of managed integration before Oak Park did have scaled back or given up. Like Oak Park, the Cleveland suburb of Shaker Heights regards “healthy race relations” as “a cornerstone of the community’s identity,” according to a 2019 Washington Post story. Yet, in Breymaier’s assessment, Shaker Heights’s programs, which predate Oak Park’s by more than a decade, have abandoned the integration of individual neighborhoods, retreating to stabilization of the city’s aggregate demographic makeup.

At least they’re still at it. Chicago’s South Shore neighborhood also tried to stabilize its demographic mix in the 1960s. The endeavor, studied in sociologist Harvey Molotch’s Managed Integration: Dilemmas of Doing Good in the City (1972), was admirably earnest—and utterly ineffective. “The efforts of the South Shore Commission and the public resources allocated to ‘save’ South Shore were,” Molotch concludes, “wasted.” South Shore went from nearly all-white to nearly all-black over the course of 20 years, just like many other South Side and, later, West Side neighborhoods. If anything, managed integration accelerated resegregation. The commission’s public-relations campaigns about the satisfactions of life in a multiracial neighborhood neither induced whites to move in nor dissuaded them from moving out. Circulars about improved schools and parks did, however, help persuade black people living elsewhere in Chicago to relocate to South Shore.

McKenzie and Ruby have the better argument when they contend that Oak Park cannot be an integration template, since its success rests on a unique mix of factors: “proximity to a depressed urban neighborhood, aging housing stock, a high percentage of apartment buildings, and a small, affluent, politically independent liberal community that has the means to be proactive.” They note that Republican presidential nominee George W. Bush won 23 percent of the vote in the 2000 election, losing all 70 precincts in what a Chicago Tribune columnist calls “the People’s Republic of Oak Park.”

The more pressing question, in McKenzie and Ruby’s view, is not whether the Oak Park Strategy can be implemented elsewhere, but how long it can continue in Oak Park itself. In a roundabout way, Breymaier confirms those doubts. If not for OPRHC’s programs, he told the Washington Post in 2015, “Oak Park would probably remain diverse, but it would start segregating very quickly.” But it’s unclear why, given Oak Park’s prosperity and alleged harmony, integration has been managed for 50 years without becoming self-sustaining, and apparently needs to be managed for decades to come, with no prospect of persisting on its own.

Breymaier’s explanation—managed integration “is not something we can stop doing” because without “an intention to promote integration, segregation often just happens because of the way our society is built”—is too amorphous to resolve this paradox. Better insight begins with Nikole Hannah-Jones, famous for guiding the New York Times’s 1619 Project. In a 2016 article on race and public education in New York City, she wrote disdainfully about the “carefully curated integration” that certain schools practiced in order to reassure whites by enrolling “some students of color, but not too many.”

Such curation appears integral to the Oak Park strategy. OPRHC added the word “Regional” to its name in the 1990s, when it began “expanding choices for its clients throughout the western suburbs,” according to the official history. The change enhanced OPRHC’s capacity to limit the number of blacks in Oak Park by affirmatively escorting black clients to available properties beyond the village borders. For the black proportion of Oak Park’s population to “fluctuate” between 18 percent and 22 percent over a 31-year period is otherwise difficult to explain. Yet candidly acknowledging this consideration would, McKenzie and Ruby say, come “perilously close to saying that there should be a quota for blacks in Oak Park,” thereby implying “that black residents are no longer welcome, which is radically contrary to the village’s historic openness and racial liberalism.”

This circumscribed hospitality, implicit in a managed integration program relying on complexity and euphemism, becomes explicit when every decision about who moves in gets made by a single entity. Starrett City Associates, owner and manager of a 5,800-unit Brooklyn rental apartment complex that opened in 1974, was committed to integration, which it pursued by maintaining fixed percentages of the major demographic groups residing in Starrett City—not just overall, but in each building, and even on each floor. Within its first years of operation, however, managed integration came to mean that black applicants for a Starrett City apartment were placed on a waiting list eight times as long as the one for white applicants. Lawsuits by private parties and the Justice Department resulted in federal court rulings that these quotas violated the 1968 Fair Housing Act. “Although integration maintenance programs are consistent with the spirit of residential desegregation,” sociologist Douglas S. Massey wrote in American Apartheid: Segregation and the Making of the Underclass (1993), “ultimately they operate by restricting black residential choice and violating the letter of the Fair Housing Act.” When managed integration does not constrain blacks’ housing options directly with quotas, it does so indirectly, says Massey, “through a series of tactics designed to control the rate of black entry.”

Beyond maintaining neighborhoods’ ethnic ratios, curating a suburb’s population entails screening prospective residents, managing the sort of people who move in as well as the number. Consider: Oak Park, with some 10,000 African-American residents, averages about one murder every three years. Chicago’s Austin neighborhood, home to about 84,000 African-Americans, had more than 450 homicides from 2001 through 2012, a rate exceeding three per month, evidence that Austin is 100 times as violent as Oak Park. Given this contrast, it’s hard to doubt that Oak Park’s population is very different, socioeconomically, from Austin’s. It’s not much easier to believe that these differences came about spontaneously, or can be explained entirely by Oak Park’s higher housing costs. If, alternatively, Oak Park and Austin residents are not so different, we must believe that the experience of integration is profoundly pacifying, while enduring the trauma of diversity deprivation incubates murderous rage. (And yet, River Forest, as white as Austin is black, has a crime rate lower than Oak Park’s.)

Breymaier’s insistence that Oak Park must prop up its integration eternally suggests that there’s less to the village’s openness, liberalism, and harmony than meets the eye. This suspicion is fortified by a ten-part documentary, America to Me, first aired on cable television in 2018. It was directed by Steve James, best known for his 1994 movie Hoop Dreams. James lives in Oak Park, where his children attended Oak Park and River Forest High School, the village’s only public secondary school. It is the setting for America to Me, filmed over the course of the 2015–16 academic year. (River Forest’s population is one-fifth the size of Oak Park’s. A graphic in the first episode says that the high school’s student population is 55 percent white, 27 percent black, 9 percent Latino, 6 percent biracial, and 3 percent Asian.)

Though affecting, the documentary leaves the impression of race relations that are more wary and tense than harmonious. America to Me mentions early on that white students and families were conspicuously reluctant to appear on camera. At one point, an African-American education consultant voices the sort of judgment that such families might well have feared. “What I’ve discovered about white liberal people is that their liberalness goes only as far as when it starts to challenge their situation personally. That’s the Oak Park–River Forest community.”

At another point, in a meeting of parents, teachers, and administrators to discuss black students receiving, on average, lower test scores than kids in other demographic groups, one woman says that her son had graduated from the high school with “a sense of urgency to rediscover his identity.” In other words, “like a lot of other African-American kids who left here, [he] couldn’t get to a historically black college and university fast enough. Because he needed to re-find himself.” It appears that life in integrated Oak Park leaves some black residents with the same aversion as the South Shore banker’s to being seen, with uncomfortable regularity, as different and remarkable.

All told, the Oak Park strategy appears to be that rare phenomenon: a discouraging success. Those who share the cautious temperament prized by philosopher Michael Oakeshott will conclude that Oak Park proves, again, that a consensus that a certain social change should come about does not establish that there must be some way to bring it about. Nor does it guarantee that a policy approach that worked once, somewhere, can work everywhere else, or even anywhere else, without unacceptable costs or collateral damage.

Unless we jettison liberty for the sake of equality and fraternity, Americans will always find ways to vote with their feet against imposed integration. That reality should constrain our ingenuity, leading us to reject shoves in favor of nudges. Examples of the latter include Oak Park’s Equity Assurance Program and the Shaker Heights initiative of using private grants to provide mortgage subsidies to people who purchased homes in neighborhoods where most residents were of a different race. The zeal to do more—to transform—leads directly to what author Tanner Colby calls “integration fatigue.” As a black resident said after the Supreme Court put a failed Kansas City school desegregation program out of its misery in 1995, “We’re tired of chasing white people.”

A Serious Critic for Unserious Times

Brian Allen

Hilton Kramer rejected political correctness to champion aesthetics and standards in art.

When I was an art history graduate student in the early 1990s, Hilton Kramer (1928–2012) was a peripheral figure for my colleagues and me. We didn’t read The New Criterion, which he cofounded and published from 1982 to his retirement in 2007. All of us were too young to remember his tenure from 1965 to 1982 with the New York Times, during which he wrote more than 1,000 reviews of exhibitions. He was the paper’s first art critic.

Political correctness wasn’t pervasive enough in those days to paralyze our brains, so no one dismissed him as a right-winger, a Nazi, or a racist. Still, his take on contemporary culture made him seem antique. “We are still living in the aftermath,” he wrote in 1982, “of the insidious assault on the mind that was one of the most repulsive features of the radical movement of the Sixties.”

Neither an art historian nor an academic, Kramer was a self-taught scholar, which made his view of art history fresh and quirky. He saw the art marketplace change from the 1960s through the 1980s, powered by new buyers, many of them volatile, tasteless, commitment-phobic, and always clamoring for something new. He hated the triumph of irony in the 1960s as the basis for much new art. This was the grease on the skids that took art from the soulful to the soulless. He thought kitsch a waste of time.

Kramer was best known in his day as a champion of Modernism in art. For him, Modernism was, first of all, a liberation movement. It overthrew lots of things and produced a hundred styles. Picasso, Matisse, Miró, Léger, Pollock, and Rothko—names we all know—show the range. All were rebels. Modernism in art belonged to a broad social, political, and economic movement starting in the nineteenth century that was driven by the abandonment of disguises and fake distinctions. This movement affected everything, including aesthetics, and most of Kramer’s writing is concerned with aesthetics only. Modernism was the confident, positive triumph of the individual (both artist and viewer) over officialdom, of progress over senescence—always freewheeling and inventive. It was a great tossing out of phoniness, emperors with no clothes, cheap melodrama, and philistinism of all stripes. Modernism was the discipline of freedom and truth. That’s the big idea.

This didn’t mean that avant-garde artists were supposed to run out and shoot the first archduke they found. Modernism in art was its own Drain the Swamp movement, but the hygiene was aesthetics. Kramer’s take on Pre-Raphaelite art is instructive. Its style and subjects—languid ladies with flowing hair, moral messaging, decorative flourish, and tight finish—visually defined the Victorian zeitgeist. In his view, it took a water cannon to clean the “literary excrescences” that made it so awful. Good art wasn’t propaganda, and it was no one’s tool but the artist’s. The truth that a Modernist artist sought went beyond the mundane. Strange for a newspaperman to think this way, but Kramer believed that the daily headlines were the last things that should interest an artist.

After reading hundreds of his columns, I would call Kramer prescient. By the late 1960s, he had spotted the then-nascent forces shaping high culture today. Political correctness became the new jingoism. Diversity, boutique socialism, and privilege studies now rule the roost. Of the Whitney Biennial, he was blunt. It, along with most contemporary biennial art exhibitions, “seem[s] to be governed by a positive hostility toward—and really visceral distaste for—anything that might conceivably engage the eye in a significant or pleasurable visual experience.”

Kramer in 1975, when he was art critic for the New York Times

Kramer didn’t think that good art was a gimmick aimed at a giggle or a sneer. He had standards and believed in quality, but he wasn’t pompous, and he saw quality in many different things. He abhorred the invasion of semiotics, feminism, Marxism, and multiculturalism in the study of art, feeling that they undermined aesthetics as the basic criterion for judging art; they made art a branch of social studies.

His writing comes with a handicap now, since he was a serious person, and we live in an unserious time. He was also direct, and we live in an age of hypocrisy, convolution, and denial. Kramer was born in Gloucester, on Cape Ann in northeastern Massachusetts, in 1928 and majored in English at Syracuse University. He started writing for Partisan Review in 1952. The next year, he wrote a story for the magazine criticizing the culture critic Harold Rosenberg’s advocacy of Abstract Expressionism. Rosenberg, writing for Art News, said that art by Jackson Pollock, Franz Kline, and Willem de Kooning was a “psychological event” driven by the artists’ individual biographies, rather than an aesthetic act or an engagement with existing subject matter. Rosenberg approved of this; Kramer did not. If artists were making art as a psychological event, he felt, this implied that the viewer would need to be a psychologist to access it. Such artists, Kramer argued, annihilated not only content but reality.

For Kramer, just 25, the Partisan Review essay was a feat of intuition translated into authority. In those days, within the tiny art intelligentsia, mano-a-mano moments like these were atomic. Rosenberg’s nemesis, Clement Greenberg, hired Kramer to write for Commentary. Kramer later became editor-in-chief for Arts Magazine and, in 1965, the art news editor at the New York Times. In 1974, he became the paper’s chief art critic.

For Kramer, the history of art, and the start of what we call Modernism in art, begins with J. M. W. Turner, the nineteenth-century British master of swirling, abstract seascapes. Kramer almost never wrote about the Old Masters; he may have thought that he was unqualified. Besides, the Old Masters weren’t really part of his Times beat. The galleries and museums rarely did Old Masters shows, and the art market for this work was based in London.

Turner, he felt, divorced color from drawing. Color didn’t fill in the lines because Turner didn’t have lines, which for Kramer meant that he felt free to break rules. He conveyed less the visual content of nature, or nature dressed for display, than nature in the raw—not its simple look alone, but its energy. Turner painted from experience, and he strove to present the truth he construed from that experience as profoundly and directly as he could.

Kramer saw every work of art as a piece of fiction, an abstract of something real and tangible. Real objects exist, but the artist modifies, or interprets, how he perceives them. The best artists shed social constructions, distractions, and disguises until they reach something essential. That’s what makes Modernism a radical art movement.

He thought Turner was onto something, but in exploring Modernism, Kramer’s foundation was French. Paul Cézanne (1839–1906) was, for him, the ur-Modernist. A little later, Cézanne’s artistic children, Henri Matisse (1869–1954), Pablo Picasso (1881–1973), Piet Mondrian (1872–1944), and Wassily Kandinsky (1866–1944), were the most analytical, serious, and influential figures in peak Modernism. Amid their differences, they were all essentialists and abolitionists of pomp and cant.

Each chased a Modernist ethos in a different way, in Kramer’s view. Cézanne started the fragmentation of the subject in earnest. He made his buildings, landscapes, and people from cones, rectangles, and cubes—coolly seeking structure. Picasso fragmented further, giving us Cubism and, later, women carved to pieces and reassembled in a way that resembled violence. Picasso’s Les Demoiselles d’Avignon (1907) at the Museum of Modern Art was his first big brothel picture, with fragmented women wearing African masks and looking like beasts.

Kramer loved Édouard Vuillard (1868–1940) for his balance of exquisite chromatic observation, charm, and wealth of common experience with a toughness—a reduction of every form into a flat, though sometimes tiny, field of color. Vuillard took Seurat’s cool, detached observation and gave it affection and warmth. Vuillard was one last step to Matisse, who was the bigger, more intense artist—intelligent, absolutely serious in his pursuit of harmony and order. Picasso, Kramer felt, got lost in his libido, but Matisse stayed true to a vision of a serene paradise. Picasso might have mastered dissonance, but Matisse gave the eye’s satisfaction a spiritual dimension.

Matisse was, for Kramer, the greatest painter of the twentieth century. Matisse sought what the artist called “an art of balance, of purity and serenity, an art devoid of troubling or depressing subject matter.” Kramer loved The Red Studio (1911), which hangs at MoMA. He loved Matisse’s palette and considered him a brilliant colorist but also admired his quiet sense of order. Most early critics of Matisse thought that he was too decorative, in contrast with Picasso’s strength and passion. Kramer saw him as the ultimate heir of Giotto, Piero della Francesca, and Raphael. Matisse’s quest for balance, purity, and tranquility was the most egalitarian of journeys, spanning centuries and the full range of human emotion. For Kramer, Picasso and Matisse were the twentieth century’s two Modernist giants—standing at opposite poles.

Kramer tended to downplay the polarity of abstraction and realism. Many people get tripped up by the notion that the two are separate universes. Since Modernism is the assertion of the individual, it doesn’t mean a single style or theme; it’s all over the place visually. Kramer sees the best artists, and the best of Modernism, as peeling an onion to get to deeper, fresher elements of the actual thing. This can take the artist far from the surface look, or keep him close to it. Impressionism, he believed, explored “the intensity of nature seen freshly.” Forms might have been indistinct and brushstrokes irregular, but the object remained, altered by the artist’s emotional response to it. It was emotionalism and expressionism leavened by discipline.

The best art, Kramer believed, has what he called a moral purpose. I’d quibble with the word “moral.” Morals are personal and vary based on an individual’s upbringing and experience. Kramer, rather, saw the best art as rooted in ethics, those broadly held standards of best conduct—going beyond family, tribes, or taste—that make us a human community. Rigor, honesty, expressiveness, vision, yearning, and conviction are at the center of the art he liked best.

Kramer didn’t write about religious belief often. He was Jewish—not observant, but religious feeling was never far away. He wasn’t dogmatic about faith but was keen on soulfulness. He considered the early Modernists like Kandinsky and Kazimir Malevich, the most extreme abstractionists, not dogmatically religious artists but certainly searchers after God. They were willing to abandon direct, discernible references to recognizable objects to get beyond materialism and the desperation, unbelief, and lack of purpose that it foments. Kramer thought that the best art broached topics like immortality and the meaning of life.

As his career unfolded, Kramer’s thinking became more explicitly social and political. In the 1970s, Kramer had grown more and more troubled by the left-wing drift of the Times, his employer, and by the 1990s, he had written brilliant pieces on, among other literary figures, Whittaker Chambers, whom he admired, and Lillian Hellman and Susan Sontag, whom he didn’t. Kramer’s book The Twilight of the Intellectuals (1999) is about culture and politics during the Cold War; its first section is called “In the Service of Stalinism.”

In the 1970s, Kramer began to see broad cultural trends having a deleterious impact on art. Even in the 1960s, he wasn’t happy with what he was observing in New York’s top galleries and American museums. He felt that the truth-seeking mission of Modernism ran off the rails, probably because a surfeit of prosperity made people spoiled. Modernism in art might have been a liberation movement at one time, but things were getting muddy and directionless. And what direction he saw, he didn’t like. Kramer thought that de Kooning (1904–97) had run out of steam by 1960. He found Rothko increasingly sad and depressing. Pollock (1912–56) had a brief period of triumph in the late 1940s but quickly became repetitious, decorative, and deflating. Kramer called his work after 1950 “Abstract Expressionist Salon painting.” For Kramer, that was a big insult.

Two artists—Andrew Wyeth (1917–2009) and Salvador Dalí (1904–89)—were totems for much of what Kramer came to dislike most in contemporary art. He conceded their technical proficiency but felt that they had no real vision other than self-promotion. Wyeth offered an image of American life—“pastoral, innocent, and homespun—that bears about as much relation to reality as a Neiman Marcus boutique bears to the life of the old frontier.” Of Dalí, Kramer writes:

He understands very well the modern appetite for violence and scandal, and has made a career of catering to this appetite, spicing each successive dish with sufficient outrage and surprise to keep the public a little baffled, a little angry, a little appalled, but always delighted, impressed, and—above all else—interested. He is a master showman who lavishes his real genius on the instruments of public relations.

Kramer saw Pop Art’s celebration of kitsch and camp as a giddy repudiation of substance. It relieved high culture of its rectitude and critical consciousness. In one of his many articles on Pop Art, Kramer quotes the architect Philip Johnson: “What good does it do you to believe in good things? It’s feudal and futile. . . . I think it much better to be nihilistic and forget it all.” Kramer found this poisonous. He rejected Pop Art as a “cult of the facetious.” It takes Cézanne, Picasso, Matisse, and Mondrian out to the trash, replacing them with a bemused sterility that Kramer associated most with Andy Warhol (1928–87): infantile, mercenary, and “half-straight, half-gag double talk.”

It’s striking that many of the artists Kramer prized the most in the 1970s and 1980s—Milet Andrejevic, Helen Torr, Morris Kantor, Elsie Driggs, Augustus Vincent Tack, Mary Frank, Anne Arnold, and Richard Hunt, among others—never took off. Most are known to art insiders and niche collectors. Besides their obscurity, these artists had a few things in common: superb craftsmanship and a unique vision. Otherwise, these late favorites were all over the map. Arnold (1925–2014) created quirky sculptures of animals and people, using wood. Andrejevic (1925–89) was a Realist painter of landscapes and scenes of everyday life. Kramer saw in his work “a purity of tone and a gravity of feeling” absent in the work of more “clamorous” Realists like Richard Estes, who were gaudy and not much more than tricky copyists. About Jean-Michel Basquiat, Jeff Koons, Cindy Sherman, Keith Haring, Richard Prince, and Barbara Kruger—among the biggest names in 1980s art—he couldn’t have cared less.

Kramer saw high culture as bearing three impossibly heavy structural burdens as it stumbled into the twenty-first century. One was the state of art history—“so many minuscule talents burrowing in ever tinier reaches of the mind,” as he put it. Kramer memorably skewered the field in “The ‘Apples’ of Meyer Schapiro,” a 1981 essay in The American Scholar. Schapiro (1904–96) taught at Columbia for decades, fomenting a new art history that introduced class conflict and social upheaval as interpretive contexts. Initially a scholar of Romanesque art, Schapiro became a central figure in Modernist scholarship. He was a Jew, as well as a Marxist. He served as the éminence grise for younger art historians looking to make the field—a rarefied subject that demanded good taste and a travel budget—into a means to explore social, political, and economic problems.

Reviewing a compendium of Schapiro’s scholarship published in 1979, Kramer disputed Schapiro’s reading of Cézanne’s The Apples. Cézanne, Schapiro believed, revealed a “displaced erotic interest” and “an unconscious symbolizing of a repressed desire” in the picture of two apples, which he considered surrogate breasts. Kramer felt that this reading was absurd, as was Schapiro’s intellectual justification, which began with Horace and Virgil, borrowed from Flaubert and Baudelaire, and ended with a flourish: psychologists studying dreams. The painting’s aesthetic characteristics were buried, Kramer said, by blather and a false reading.

Earlier in his career, Kramer noted, Schapiro was quick to drain religious fervor from Romanesque church sculpture. He called the style “a new sphere of artistic creation without religious content and imbued with values of spontaneity, individual fantasy, delight in color and movement, and the expression of feeling that anticipate modern art.” Kramer was appalled that an art historian would dispute the obvious role of religion in religious art while inflicting on Cézanne an entirely speculative sexual agenda. Schapiro was happy to see aesthetic impulses as a reason to throw medieval spirituality under the bus, but even happier to jettison Cézanne’s aesthetics for repressed impulses dating to the artist’s childhood. This, Kramer felt, was an intellectually dishonest double standard. It was the art historian, not the artist, who was repressed.

A few years later, in The New Criterion, Kramer wrote “T. J. Clark and the Marxist Critique of Modern Painting,” a review of Clark’s just-published book, The Painting of Modern Life, which he describes as “just another contribution to the propagation of the mythic phenomenon which lies at the heart of the Marxist conception of history: class conflict.” Kramer took offense at Clark’s denial of “even the slightest degree of aesthetic independence from the iron laws of history.” The Impressionist fascination with industry was a classist rebuke to labor. Impressionists like Monet and Renoir, who depicted Paris’s boulevards, celebrating Baron Haussmann’s redesign of Paris, cleansed the city of its restless, working-class slums, Clark wrote. Their choice of subjects—boulevards, leisure, train stations—grew from class allegiances. That all art is, first and foremost, sociological reportage, a symptom of social ills, or a weapon to change the world became gospel before too long. For Clark, the measure of an object began and ended with how well it advanced or thwarted revolution. Kramer saw this as a perversion of art.

The second of high culture’s structural burdens for Kramer was the change in the art market. Until the 1970s, the best art exhibitions, including ambitious loan shows, were organized by dealers at their galleries. The landmark 1970 exhibition One Hundred Years of Impressionism was hosted at Wildenstein’s. The art market was small. Collectors tended to be serious and informed. Dealers were few, too, and developed long-term relationships with artists and collectors. News from the art market reached the separate worlds of money and media glacially. But glamour and celebrity had already begun to intrude, beginning in the 1960s. Soon marketplace success and chic came to define the canon. Kramer believed that money was an invasive species in the art world, treating its creations as mere investments.

Kramer understood the value of the commercial art gallery in keeping standards elevated. Dealers take risks on artists. They show courage. They make discoveries. They rotate shows often. They show art to the public for free. Dealers knew their artists in depth, and often supported them through rough patches. Museums make decisions slowly, by committee, and long after critics have vetted an artist. The collapse of the small and mid-level gallery economy and the current hegemony of a few big dealers would have distressed Kramer.

The third burden was the state of museums, which Kramer saw as the traditional keepers of standards. His bugaboo was Thomas Hoving, director of the Metropolitan Museum of Art from 1967 to 1977. Hoving, he believed, started a trend in American museums that continues in fits and starts. Hoving made everything he touched, however serious or arcane or complex, a branch of show business. From there, museums slid into the realm of mass culture, competing with shopping malls, video games, television, and sports as just another source of entertainment. But the entertainment industry already existed to give people what they wanted; it was the job of museums, Kramer believed, to give them what they needed. He was right—though he sounded like an old Victorian to say it.

Kramer’s most controversial piece in the Times was a 1976 article, “The Blacklist and the Cold War,” which assessed the revisionist trend to rehabilitate the Communist sympathizers barred from working in Hollywood in the early 1950s. He thought that his long article was balanced—presenting, as it did, the considerable complexities of the time but also the many lies told by people who were clearly Communists, as well as the facts surrounding a lesser-known blacklist that Hollywood, the theater, book publishing, and newspapers would enforce against peers who were vocally anti-Communist.

As the 1970s proceeded, he was appalled by how thoroughly people in entertainment, as well as historians and journalists, whitewashed an ugly episode of civic cannibalism. He believed that many prominent cultural figures from the early Cold War years bore America ill will, and the revised, official storyline was not only letting them off the hook but also praising their courage. If a Manhattan conservative is a liberal mugged by reality, then this was the new reality that mugged Kramer.

Kramer was sharp-eyed when it came to spotting the perils to artists and art historians of making their work political. Until the 1960s, Modernist artists were far removed from politics. It simply wasn’t their sphere. They were focused, as was Kramer, on the imperatives of aesthetics, which are nonpolitical. The politicization of art and art history reminded Kramer of culture behind the Iron Curtain. Culture has no purpose for totalitarians except to serve the state. Art in the Soviet Union, but also in its doppelgänger Nazi Germany, was thoroughly debased as a consequence. In The Twilight of the Intellectuals, Kramer took many of America’s Cold War–era thinkers and writers to task as, at best, useful idiots and, at worst, consciously complicit.

By the 1990s, he feared that the same thing was happening again. The arts issue that concerned Kramer was the role of political correctness in demolishing art criticism, since, fundamentally, it was an assault on quality. Kramer wrote: “In a culture now so largely dominated by ideologies of race, class, and gender, where the doctrines of multiculturalism and political correctness have consigned the concept of quality in art to the netherworld of invidious discrimination and all criticism tends to be judged according to its conformity to current political orthodoxies, even to suggest . . . that aesthetic considerations be given priority in the evaluation of art is to invite the most categorical disapprobation.” I can imagine Kramer’s blood pressure rising as he wrote that long sentence in 1993, reviewing a new book collecting the writing of Clement Greenberg.

Though Kramer admired Abe Rosenthal, the Times’s executive editor, and had many cherished colleagues, by the late 1970s a new, younger tier of reporters and editors was shifting the paper ever leftward. Kramer viewed this with alarm and, eventually, disgust. He left the paper in 1982.

His post-Times years spanned multiple acts. In the 1990s, he would toss TNT spitballs at his old employer via his “Times Watch” column in the New York Post. He aimed weekly at news media bias and incompetence profession-wide, but mostly at the Times. The paper, he wrote, didn’t mind “offending people so long as they’re white heterosexual males.”

The real love and mission of his later years, however, was The New Criterion, which he cofounded with pianist and critic Samuel Lipman in 1982. Marking its 40th year in 2022 under longtime editor Roger Kimball, the journal remains devoted to “championing what is best and most humanely vital in our cultural inheritance and of exposing what is mendacious, corrosive, and spurious”—a Kramerian directive, if ever there was one. In the end, Kramer’s vision was neither conservative nor liberal but, rather, catholic in its approach to high culture.

There was nothing arbitrary about Kramer’s move to Damariscotta, Maine, toward the end of his life. Damariscotta isn’t far from Gloucester, where he grew up. Kramer was highly cultured and erudite and hardly craggy. He was incisive and trenchant and less likely than other critics to suffer fools or phonies gladly. Despite his many years in Manhattan, he never lost personal traits that marked him as a New Englander.

Kramer died in 2012 from many ailments, among them advanced dementia. Toward the end of his life, he moved to the Vicarage by the Sea in Harpswell, Maine, a small hospice. Weaned from drugs, left to walk the seaside grounds and talk, even read, he died an apparently peaceful death.

Writing a story focused on a single artist means getting into his or her head, as best the writer can. It’s no different when one is focused on an art critic— and easier, too, since a critic like Kramer left us millions of words. What would he think today? That American society has gone bonkers. What would he propose? That’s a trickier question. Kramer didn’t think much of federal support for the arts, which he believed bolstered conventional left-wing thinking. He’d surely feel today that high culture is struggling. Cratering audiences; malpractice in the classroom, leaving students ignorant of high culture and history; diminished numbers of discerning collectors and specialist dealers; and the decline of serious art criticism—all have damaged the arts.

These days, Kramer might also ask, “Where is the Right?” The state of high culture and good taste should be of deep interest to conservatives. In Kramer’s day, art critics, art historians, and the art market together promoted high standards, a need for rigor, and the privileging of aesthetics. This served Modernism well. It has served creativity and high culture well, too, both modern and classical. Each of these sectors is now befuddled or lost. The Right, Kramer would likely warn, is too often an absentee landlord on the culture front. But conservatives ignore high culture at their peril.

A project of renewal might start with education, with supporting classical and traditional music, art, theater, and dance more broadly and deeply, not just in New York and not just at the biggest venues. It might include think tanks adding art and culture to their roster of concerns. It would mean supporting magazines, like The New Criterion, that defend high standards. As for the university crisis, Kramer would probably attribute it in part to frightened leadership. “Grow a pair,” he might advise college presidents and trustees cowed by the outrage machines on campus. I’d certainly be happy to see more cojones on campus at Yale and Williams, where I studied art history. Tackling the tiresome fury among students finding racism everywhere, the assaults on freedom of speech and thought, growing anti-Semitism, and political nihilism requires both common sense and courage. Hilton Kramer had both, and much else.

Raven’s End

Theodore Dalrymple

Revisiting a controversial postwar British murder case and execution

On October 11, 1949—the day I was born—a man named Daniel Raven, aged 23, was arrested in a north London suburb and charged with the murder the previous evening of both of his parents-in-law. He was hanged in Pentonville Prison slightly less than three months later, public petitions to spare his life notwithstanding.

Was he guilty? No eyewitnesses emerged, but the circumstantial evidence against him was strong, though perhaps not beyond doubt. There was also the question of his mental state if he did do the acts of which he was accused. This was not raised at the time of his trial because he insisted that he had not performed those acts; to have entered a plea of insanity would have diluted his claim to innocence of the actus reus, the guilty deed.

The case troubled the public conscience and also that of one of the detectives involved. The only book published about it (60 years later) was written by that detective’s son, Jeff Grout, who discovered after his father’s death that the only case on which his father had kept documentation, though he had been involved in many famous or infamous cases, was that of Daniel Raven. This might have been mere chance or coincidence, but more likely it revealed a mind long haunted by doubts or regrets.

The facts were these. Raven’s wife, Gertrude, had just given birth in a private clinic. On the evening that the murders took place, Raven and his parents-in-law, Leopold and Esther Goodman, had visited her there. They left in two cars, more or less at the same time. The Goodmans had bought a house for their daughter and son-in-law that was near their own home—a gift that 70 years later would have been worth more than $1 million. Daniel Raven stopped off at the Goodmans’ house for a brief chat and then went home.

According to his story, he returned to the Goodmans’ house about half an hour later (for reasons never explained) and found them dead, lying in their sitting room, their heads bashed in—Esther Goodman so brutally beaten that her face was no longer recognizable. Because he knelt beside them, he got blood on his trousers; instead of calling for help, he panicked, went home, and tried to burn his trousers. He said that he thought he would be a prime suspect if he had called the police straightaway.

It did not take the police long to find and question him. They discovered his trousers only partially burned, and what was left was spotted with blood of the AB group, which, though comparatively rare (just 2 percent of the population), was that of both Goodmans. Raven demonstrably lied to the police during questioning, which was construed as further evidence of his guilt.

Raven’s defense at his trial, conducted with considerable brilliance by John Maude, who later became a Conservative Member of Parliament and then a judge, was that in essence his story was true: that he had found his parents-in-law dead and that he had panicked. Such panic was understandable, if not exactly laudable. (I was reminded of a trial for murder in which I was a witness. A man had strangled his girlfriend in a jealous rage. Afterward, he put her in the trunk of his car. I was asked whether this was not indicative of an irrational state of mind. I replied that, never having had to make such a decision, I could not really say; but it struck me as being as rational as any other action in the circumstances.)

Maude surmised that an intruder had killed the Goodmans, and then fled when Daniel arrived. In support of this hypothesis was the fact that the Goodmans’ main bedroom was in a state of disarrangement when their bodies were discovered, but no blood was found in it, which suggested that it must have been disarranged before the murders were committed: the murderer, whoever he was, must have had considerable blood upon him. (Maude had only to sow doubt in the jury’s mind, so he came up with a hypothetical alternative explanation.) In fact, the state of the bedroom remained forever unexplained; burglary was unlikely, for money was lying around the bedroom untouched. Maude’s alternative explanation was that Leopold Goodman was a police informer who had once informed on an Australian immigrant for breaking the then-strict currency regulations, leading to the man’s deportation. The intruder, on this theory, was someone hired to take that man’s revenge: but there was no evidence of his existence. Further, if the intruder had entered the house to wreak revenge, he surely would have come armed with a knife, gun, or crowbar. But the murder weapon was the base of a television antenna, in those days a heavy and clumsy piece of equipment. Surely no one would use such a weapon if he had a more efficient one?

People read notices on the case shortly before Raven was executed in Pentonville Prison.

The prosecution insisted that, in English law, it had no requirement to prove motive, for murder was the deliberate killing of someone without lawful excuse, and that absence of motive was no bar to conviction for such a crime. The defense pointed out, however, that if the prosecution did have evidence of motive, it would certainly have made the most of it; and the question of motive was bound to arise in a case such as this, in which a man with no known history of violence, and whose relations with his supposed victims were close and friendly (if not always completely harmonious), suddenly acted with terrible ferocity.

Since Daniel Raven’s defense was that he did not commit the act, no medical or psychiatric evidence was entered. Nowadays, with much less at stake because of the abolition of the death penalty, every murderer is examined medically and psychiatrically, irrespective of the wishes of the defense (who may, of course, ask for additional reports). But in 1949, an examination had to be requested, and both Daniel Raven and his father, who believed his son innocent, firmly opposed making one.

Would medical evidence have saved Raven from the gallows? I think it would have, by casting enough doubt on his mental capacity at the time of the killings (assuming that he committed them) to make commutation of his sentence, if not acquittal, likely. In 1949, only half of death sentences in Britain were carried out, with the rest commuted—often with less reason than in this case—to life imprisonment. The doctors, if asked to provide evidence, might have shed more heat than light on the problem, because they might have disagreed strongly with one another, but they would have sown sufficient doubt in the mind of, if not the jury, at least the trial judge, who, though legally obliged to pass a death sentence, was entitled to recommend clemency. The home secretary, with the final say on commutation, would surely not have ignored medical evidence, even if it were not unanimous. To execute someone who was mad rather than bad would be unjust and cruel: and the home secretary at the time, James Chuter Ede, was not a cruel man.

What would the doctors have said? Freudianism was at its high tide, and Raven’s upbringing would have offered some clues. His father was unstable, given to violent and irrational rages, and beat his son, even after the boy was fully grown. The father was a chronic founder of failed businesses in various parts of the country. By the time Daniel Raven was 12, he had attended six schools because of his father’s peripatetic lifestyle. The boy was above average intellectually but apparently highly strung. He wet the bed until he was ten, feared the dark, had night terrors, and bit his nails until they bled. He was taken out of school at 14 because the family moved yet again.

No doubt one might have claimed a connection between this disordered childhood and Raven’s subsequent violence. If the child is father to the man, then surely, the argument would go, his childhood contained the key to his adulthood: and that childhood was a difficult one. The counterargument, of course, would be that not many unhappy childhoods lead to the murder of parents-in-law and that one could probably find an explanation for criminal behavior in any life—from parental overindulgence to parental neglect. Raven’s anxious personality would likewise not have helped his case: anxiety is so common that few people would want it to count as an excuse for, or even an extenuation of, killing.

A more promising approach might have observed that Raven was epileptic and committed the act in a state of epileptic automatism. Though he had not acted violently before, he had exhibited irrational rages, disproportionate to any provocation; he also supposedly suffered from absences, when he seemed to lose contact with the world, from which he recovered without remembering anything. The most eminent expert in the country at the time, Dr. Denis Hill, reported that Raven’s electroencephalogram was abnormal, though the doctor was too cautious or scrupulous to claim that this had a direct bearing on his state of mind at the time of the killings (if, indeed, he committed them). It is possible that Raven had a form of epilepsy that manifested itself in irrational violence, but he hindered a defense based on this by insisting on his own innocence of the killings and failing to mention any period of loss of consciousness at the relevant time. He might have killed in a state of post-seizure confusion and then, on recovering, found himself in the presence of two brutally killed people without any knowledge of what had happened—after which it was only too plausible that he would have panicked and told lies.

All murders are tragic, but one courtroom scene during the trial must have been almost unbearable to watch. Raven’s wife, Gertrude, took the stand as a witness for the defense. According to a newspaper report, she looked across at her husband and smiled at him. The only point on which she testified was that, when her parents and their son-in-law left the clinic after visiting her, they appeared to be in a good mood, without evident conflict. The prosecution thought it wise not to cross-examine her, for this would have created sympathy for the accused. One reporter claimed that, after the court adjourned, he had asked Gertrude what she would do if her husband were acquitted. “Well,” she replied, “I suppose he would come home and I would make him a cup of tea.” Another reporter claimed that the only thing anyone had heard her say at the trial, other than her largely monosyllabic testimony, was that she would “never believe that Danny murdered my mother and father. Danny could never do such a thing.”

Her mental agony must have been terrible, for she had to know that the case against her husband was strong. After the closing prosecution speech, a newspaper observed, “she raised a tired hand to her forehead and asked her friend to take her from the court. She was smuggled out of a side door”— reporters were as intrusive and unscrupulous then as they are today—“and driven away by police.”

It took the jury no time to find Raven guilty, and he was duly sentenced to death. That Gertrude accepted the verdict is suggested by the fact that she neither wrote to, nor visited, him in the prison where he was held in the condemned cell for 40 days and 40 nights. It is difficult to imagine that anyone could suffer more, at least in peacetime, than to have the man she loved kill the parents she loved. It is a thought from which the mind instinctively turns away.

After the trial, a brief challenge to the verdict arose. One of the jury was Jewish—as were all the main characters in the story—and it turned out that he had taken his juryman’s oath on a New Testament. It was surmised, then, that he was not properly sworn, and therefore could not deliver a verdict, which at the time had to be unanimous (now only a majority of 10–2 is required); but a rabbi testified that the juror in question told him that he nevertheless stood by his oath, and the challenge faded away.

One intriguing aspect of the case reveals how much attitudes have subsequently changed. Raven’s defense lawyer, John Maude, was strongly in favor of retention of the death penalty, while James Chuter Ede, the home secretary, was strongly for its abolition. Yet it was Maude who pleaded for Raven’s life and Chuter Ede who refused to commute.

It was clear that Maude was emotionally, not merely professionally, involved in the case. Once the trial was over, and Raven’s appeal had been turned down, Maude’s official duty was performed, but he went much further than he was obliged to do. He had done his best at the trial, and his closing address to the jury was brilliant, if unsuccessful. But he continued to advocate for Raven until the eve of his execution, when he sent the following telegram to the home secretary:

YOU KNOW HOW DEEPLY I FEEL UPON THE MATTER ABOUT WHICH YOU SAW ME AND I NOW BEG YOU TO GIVE EFFECT TO ALL THE LONG HISTORY OF THE MANS ABNORMALITY STOP SIMPLY CANNOT RID MYSELF OF A PROFOUND BELIEF IN IRRESPONSIBILITY IN THIS CASE WHICH I FOUND OVERWHELMING AND TERRIBLE

To this telegram (and the very word “telegram” is redolent of an age as bygone as that of horse-drawn carriages), Chuter Ede replied:

RECEIVED AND CAREFULLY CONSIDERED YOUR TELEGRAPH BUT REGRET AM UNABLE TO ALTER MY DECISION

This exchange speaks well of both men. Maude, who made a speech in Parliament endorsing the view that the death penalty was a necessary deterrent to murder, was also obviously possessed of a strong sense of justice in each individual case and not merely of the social utility of the deterrent. When he said to the jury that it was the prosecution’s duty to prove its case beyond reasonable doubt, that is precisely what he meant: the accused was entitled to the benefit of any doubt, and he believed that he had cast sufficient doubt on the prosecution’s case to merit acquittal.

[Photo caption: Though an advocate for the abolition of the death penalty, British Home Secretary James Chuter Ede (shown here inspecting the London Fire Brigade) felt that it was his duty to uphold the law as it stood.]

Chuter Ede, for his part, had introduced before the war a motion in Parliament to abolish the death penalty and was soon to vote for its temporary suspension while a commission reported on the measure (it finally recommended retention, but only under very restricted circumstances). Yet Chuter Ede also felt that it was his duty—no doubt painful—to uphold the law as it stood, which was more important as a principle than adherence to his personal convictions, however strong, on a matter about which more than one opinion was possible.

Chuter Ede was a scrupulous man and had not come to his decision lightly; while Raven was in the condemned cell, the home secretary had asked three eminent psychiatrists, including Hill (who thought that Raven was epileptic), for a report. They examined him in prison, and their report was not favorable to reprieve on psychiatric grounds. “We do not consider that Raven was insane at the time of the crime or that he is insane now. He is probably an anxious and nervous type of man, but we do not believe that he is suffering now, or was suffering at the time of the crime, from any minor mental abnormality which would justify us making any medical recommendation.”

Chuter Ede felt that he had no grounds for commutation of the sentence. Maude’s inner conviction that Raven either had not committed the actus reus (the guilty act) or had lacked the requisite mens rea (the guilty mind) was not enough. The law had to take its course.

On his last night alive, Raven wrote four letters: to his mother, his cousin Muriel, his sister Sylvia, and one of his lawyers. He did not write to his wife, the mother of his child, either because she now thought of him as guilty and had forsaken him, or from a certain delicacy of feeling. As far as we know, he never confessed to the crimes.

If Raven had gone to trial under current law instead of that of 1949, and in the present state of medical knowledge, he would likely have been convicted of manslaughter, not murder. To establish the lesser charge, the defense would have had to prove, on the balance of probabilities, that Raven suffered at the time of the killings from a state of mind so different from normal that it reduced his mental responsibility for his acts. Doctors probably could have convinced a jury that this was so. If found guilty of the lesser charge, Raven would either have been sent to a mental hospital or sentenced to prison for fewer years than for a murder conviction. But even if found guilty of murder, he would have had his life spared and received a sentence of life imprisonment, with the possibility of parole after 15 years.

On the whole, this seems more humane than what actually happened. My one reservation is the following. If Raven were found guilty of murder, under these alternative circumstances, a sentence of (in effect) 15 years’ imprisonment would be inadequate, not because, once released, he might repeat his crimes but because it would exert a downward pressure on all sentencing. The severity of sentences must reflect, at least approximately, the seriousness of the crime or crimes committed: and to bash in the heads of two parents-in-law is a very serious crime indeed.

Now, a civilized society must put a limit to the severity of a sentence that may in practice be imposed—a threshold above which we cannot go. Someone who kills ten people cannot be punished ten times more severely than someone who kills only one, though the crime, in a sense, is considerably worse. But if the threshold for the most severe sentence is set too high, leniency throughout the system is the inevitable consequence. And the consequences of leniency are obvious to all except criminologists.

Still, the execution of Daniel Raven horrifies me. So, too, does the newspaper that wrote shortly after his execution, “The uproar over the hanging of Daniel Raven has been quite out of proportion to the facts of the case,” and so does the Liverpool man who wrote to Chuter Ede that the “mentality of the 16,000 folks who signed [a petition] for [a] mindless reprieve is a blot on our civilisation.”
