Soundings
City Journal, 2022
disease—then encouraging physicians to become activists will do nothing to improve patients’ outcomes. The failure of antiracist programs to do anything to improve clinical outcomes for black patients will only deepen the frustration of clinicians and the dismay of patients.
Doctors should conduct research and find treatments that work. They shouldn’t treat patients differently based on skin color. Doing so would undermine everything that physicians pledge when they first are called “doctor.”
Nothing “Random” About New York Street Violence

The media and public officials portray the city’s escalating violent-crime problem as arbitrary—but the assailants and their targets are both entirely predictable.

Seth Barron

New York City is mired in a frightening swamp of violent crime. Assurances from proponents of criminal-justice reform that there’s nothing to worry about, as the crime rate is still well below early 1990s levels, ring hollow. These same advocates for social progress would not be consoled if they were told that maternal deaths and poverty rates were worse in the 1990s. That crime isn’t as bad as it was 30 years ago is no consolation.
People do not experience life measured in decades but as it happens—and the sudden acceleration of the murder rate in 2020 was profoundly dislocating. Even if New York logged more murders in an earlier era, the city has never experienced a 40 percent rise in homicides over just one year. This sudden plunge into violence made people feel that the streets were chaotic and dangerous—and as criminologists attest, the impression that streets are unsafe is enough to deter many people from venturing out.
In recent months, a series of depraved acts has shocked even the most jaded New Yorkers. On January 16, Martial Simon, a mentally ill career criminal, shoved Michelle Go into the path of a subway car, killing her. Simon did not know his victim, and the media and the police generally labeled the incident a “random” attack.
A few weeks later, Dorothy Clarke-Rozier, a middle-aged grocery-store employee, was on her way to work before dawn when she was suddenly stabbed to death by Anthony Wilson, who was described as emotionally disturbed and noncompliant with his treatment regimen. “It was random,” reported the police. “He was a stranger to her. No relation.”
In mid-February, Christina Yuna Lee was followed into her Chinatown apartment and murdered by Assamad Nash in what was termed a “random” assault. Nash had been arrested seven times in the previous nine months.
In March, Gerald Brevard, a mentally ill man from Washington, D.C., shot three homeless people in his hometown before traveling to New York to continue his killing spree. He shot two homeless men while they slept, killing one. The attacks were widely characterized as “random.” In 2018, Brevard was sentenced to a year in prison for assault with a deadly weapon, but the sentence was suspended.
This list could go on. The sense that one could at any time become the target of violent assault speaks to the anonymous nature of street crime. There is certainly a random quality to such attacks—but only from the perspective of the victims, who were merely going about their business when brutalized for no reason. For them, the violence is indeed arbitrary.
For the assailants, however—generally seriously mentally ill, or with extensive criminal records, or both—little about these incidents deserves the label “random.” Certainly, nothing is random about their choice of prey. If street violence were truly random, we would expect the occasional assault on a strapping young bodybuilder, or a brawny retired ironworker. But those people never seem to be “randomly” attacked. Instead, victims of street assaults are generally vulnerable: women (often petite), older men, or the disabled or indigent.
By labeling escalating street violence “random,” we let criminals and public officials off the hook. The criminals may be mentally ill, but they are rational enough to attack people weaker than themselves. Our public-safety and
mental-health officials know that the untreated mentally ill pose a significant threat to society, and they should not shelter them under the guise that the violence they commit is unpredictable. It is entirely predictable. If you continuously throw rocks off a city roof, randomness may determine who gets hit—but that someone will be hurt is a certainty.
Give Kids a Sporting Chance

The lockdown of youth athletics has taken a steep toll on physical and mental fitness.

Steven Malanga

As pandemic restrictions drag on in many parts of the United States, their negative impact on the mental and physical health of America’s children has become more obvious. Persistent school closures, in particular, have contributed to a decline in student achievement and a rise in mental-health problems. Closely tied to school shutdowns has been a dramatic curtailing of youth sports. Though children as a group have largely escaped the most serious effects of Covid-19, nearly half of all parents of youth athletes say that their kids have yet to get back to sports participation after the virus put an end to their youth leagues. The results, researchers say, include rising obesity rates and levels of depression. With many local sports groups collapsing after more than two years of lockdowns and parents reporting less interest by children in sports, reviving this valuable contributor to kids’ health and well-being won’t be easy.

In an Aspen Institute survey on youth sports released late last year, some 45 percent of American parents said that local community programs and so-called travel teams had either disappeared during the pandemic or returned with reduced capacity. Nearly a quarter said that the lost programs were an impediment to getting their children back into games, and 28 percent admitted that, since the shutdowns, their kids had lost interest in organized sports. That’s up from 19 percent in a similar survey taken a year earlier.
Covid has worsened what was a troubling trend even before the pandemic: a sharp decline in children’s participation in physical activity. From 2012 through 2019, Aspen surveys revealed, the share of American children participating in sports had declined to 31 percent, from 38 percent. Among lower-income and lower-middle-income kids, especially, studies show a sharp rise in inactivity. Meantime, rates of childhood obesity have been climbing, from about 11 percent of all kids in the mid-1990s to 19 percent by 2019, and 22 percent last year.
Participation in youth sports has also been closely associated with improved mental health and socialization skills. Not surprisingly, curtailing local programs has taken its toll. A survey of youth athletes after the first round of Covid
lockdowns in 2020 found 70 percent reporting moderate to high levels of anxiety when sports ended. Scores on quality-of-life tests, which assess an individual’s overall mental well-being, also plunged, according to University of Wisconsin researchers. Conversely, half of parents surveyed in the Aspen Institute study found that kids’ mental health greatly improved when they returned to play. Research has consistently demonstrated a long-term value of participation for youth development. One study found that the longer children participate in sports, the less likely they are to develop emotional problems like panic attacks and social phobias. Responding to the idea that playing sports during the pandemic put kids at risk, one researcher on the University of Wisconsin study said, “I take the opposite view. . . . We need to give them sports opportunities to keep them safe and healthy.”
Athletics have suffered the most in places with the strictest lockdowns, such as California. Los Angeles County, for instance, initially required all youth athletes to wear masks while playing—even outdoors, where transmission is rare. In the Aspen survey, 40 percent of California parents of student athletes said that their kids had lost interest in sports. Similarly, in New York State, 38 percent of parents report that children have abandoned sports during Covid. By contrast, in Texas, with far looser Covid requirements, only 18 percent of kids have lost interest in sports, parents told pollsters. Athletes, parents, and organizers have grown frustrated by inconsistent Covid policies, which sometimes put low priority on getting kids back to play. In early 2021, Michigan governor Gretchen Whitmer’s administration delayed the opening of winter scholastic sports by nearly two months, even though Covid cases were declining and the state had allowed bars to reopen weeks earlier. Parents and athletes protested the decision at the state capitol. The superintendent of Detroit public schools, Nikolai Vitti, wrote a caustic letter to Whitmer, pointing out that more than 99 percent of student athletes had tested negative for Covid the previous fall and warning that parents and coaches were threatening to sue the administration.
In some states and districts, advocates for youth sports balked at restrictions that shut down games, even as neighboring jurisdictions played on. In early 2021, Michigan coaches began lining up games next door in Indiana, a state with fewer restrictions. Pennsylvania governor Tom Wolf surprised parents and organizers by announcing at an August 2020 press conference that he opposed school sports resuming. That recommendation sent the state’s high school athletic association, which was ultimately responsible for making that decision but had not been consulted by Wolf, scrambling. By contrast, in neighboring New Jersey, as local papers noted, Governor Phil Murphy had worked with the state athletic association to develop a plan
that reopened youth sports. Though the Pennsylvania athletic association ultimately allowed school athletics to resume, opposition from state officials was so strong that entire athletic conferences declined to play in the fall of 2020. Some school districts attributed their decision to cancel sports to liability fears, given the pushback from state health department officials.
Two years of Covid has taught us much. We know that youth are at much less risk of serious outcomes from the virus. Transmission rates among kids are substantially lower than those among adults, as numerous studies have shown. Conversely, lockdowns have exacted a steep toll on students’ physical, emotional, and academic life. We may be seeing the effects on this generation of young people play out for years after we’ve tamed Covid. Fortunately, the educators, politicians, and experts of some districts have recognized the importance of youth athletics and have worked hard to get them going again. Before the pandemic, some 8 million high school students participated in at least one sport. Understanding the value of youth athletics is the first step toward rebuilding this essential American institution.
What Terrorists Learned from Covid

America remains vulnerable to pathogens.

Thomas Hogan and Jim Fitzgerald

American counterterrorism experts’ biggest fear remains a biological weapons attack in the United States. A conventional bomb can kill hundreds or even thousands, but a biological weapon can surreptitiously invade, spread, mutate, and kill millions. Biological weapons have been used throughout history, from poisoning wells to “gifting” smallpox-infected blankets. Whether you believe that Covid-19 came from bats sold in a wet market or escaped from a Chinese lab, terrorists are observing the pandemic’s toll on America and taking notes for a future biological attack. What lessons might they have learned from America’s reaction to Covid?
The first is that America is unprepared for a biological attack. Our national defense experts tend to look backward to past threats rather than preparing for future ones. The U.S. botched the response to swine flu under the Obama administration, avoiding a damaging experience only through some good luck, but still failed to engage in research to address future threats. Terrorists can plan on the nation not having a ready-made response to any tailor-made virus.
The next lesson: America does not affirmatively defend itself from biological threats. We train our police to respond to active shooters in schools. We take off our shoes for scanning before we get on planes. But we do not regularly monitor wastewater supplies for the presence of new and dangerous pathogens. Only now are some local governments discovering that they can track Covid outbreaks by testing wastewater. We also leave our water supply wide open to attacks; ten terrorists in ten cities adding a highly infectious virus to the water supply would have a devastating nationwide impact.
The pandemic has revealed that America’s investigative capabilities—scientific and military— are either fragile or fainthearted. Two years in, we still have no definitive answers about the origin of Covid. Nor do we appear to be pressing China particularly hard for an explanation. Tracing biological weapons is difficult unless you act quickly, competently, and decisively.
Terrorists also learned that politics is corrupting our scientific organizations. From federal agencies to private universities, ideology is distorting science. When science is filtered through the lens of politics, nobody is likely to have much respect for empirical fact-finding. Thus, mixed public-health messaging eventually leads sensible people to the conclusion that some scientists refuse to admit their own ignorance.
America’s political polarization has taught terrorists that our divisiveness will prevent any immediate and unified response to a biological attack. Many Democrats expressed doubt about Covid vaccines when Donald Trump was in the White House. When President Biden took office, some Republicans began questioning the vaccines. Some states demand vaccines and masks everywhere; some leave it to individual choice.
The politicization of science, egged on by the mainstream media, has reduced public trust.
Covid also taught terrorists that American supply chains for critical materials are weak, fractured, and often reliant on materials from outside the United States. Necessary supplies for vaccines, tests, and medical items come from other countries, forcing the nation to face shortages of items critical to our medical defenses. For instance, the United States struggled with a shortage of Covid testing kits. Over 50 percent of those kits are manufactured in China.
Finally, potential bioterrorists have learned that Americans are physically vulnerable to a biological attack. Our borders are porous. Our population is exceptionally obese and unfit. Particularly in dense urban areas, this lack of physical vitality makes an inviting target for weaponized viruses. The coronavirus took a disproportionately heavy toll on obese Americans. If you’re a terrorist designing a virus, this is hard information to ignore.
Yet some good news remains. The United States retains the ability to innovate quickly: through Operation Warp Speed, American pharmaceutical companies performed a miracle in creating and mass-producing vaccines in an incredibly short time frame. Not since the nation geared up for World War II have we seen such a concerted effort to achieve a singular result. And Americans remain a strong, independent-minded people. Even as the government bungled an economic response to the pandemic, markets rebounded. The American public has awakened to the fact that its leaders and the media might not be fully trustworthy or competent, so the age-old national tradition of questioning authority has returned with vigor.
Last, any terrorist planning a biological attack on the U.S. will have to deal with the reality that Covid has acted as a national stress test. Somewhere in the depths of American law enforcement, agents, prosecutors, and scientists are considering the strategic implications of what we have been through in recent years. They are running models, preparing new approaches and defenses, and learning new lessons. With hard work and luck, they will have solutions in place if and when such a threat emerges.

Breachers of the Peace

A day at the park in Astoria, Queens, can be a bucolic experience—until the biker battalions show up.

Theodore Kupfer

When I moved to New York, I left behind my small central Pennsylvania hometown for Astoria, which is as close to bucolic as it gets in Gotham. Since moving, I’ve taken up running: I go to new neighborhoods, size up nearby Greek restaurants, and find forgotten pockets of northwest Queens. I usually end my runs in Astoria Park, which sometimes reminds me of home—trading the Susquehanna for the East River, the Blue Mountain Ridge for the Manhattan skyline. It’s one of the things that made the transition tolerable.
Still, everywhere has its annoyances, and Astoria is one of many New York neighborhoods beset since the pandemic by noise from drag racers and dirt bikes, which can sometimes make one long for leaf blowers—that scourge of suburbia. (See “The Gotham Cacophony,” Autumn 2021.) Organizing their meet-ups online, the racers constitute a discrete subculture. A former participant in New York’s drag-racing scene told me in an online message that a meetup some years ago was “one of the best days of my life.” But of all hobbies, it’s one of the more antisocial.
This isn’t some isolated complaint from a hayseed. Vehicle noise is a common theme of community meetings for the NYPD’s 114th Precinct, and commenters on a neighborhood forum that tends to deride precious newcomers have coined the term “fart cars” for the problem. In October 2020, the New York Times published a long report on the city’s “insanely loud car culture.” Local politicians have sought to combat it: State Senator Andrew Gounardes, a Democrat representing South Brooklyn, sponsored the Stop Loud and Excessive Exhaust Pollution Act, which raises fines for modifications that make vehicles louder and requires police to carry decibel meters that can measure vehicle noise. The law takes effect this spring.
But officers can do only so much, as I discovered on a balmy March afternoon, when I took
advantage of the now-seasonably warm weather and set out running down Shore Boulevard, a two-mile stretch of road bordering the water that’s been closed to traffic since spring 2020. My noise-canceling earbuds were blaring either the new Animal Collective album or the latest episode of Bill Simmons and Ryen Russillo’s interminable NBA talk show, but neither could block out the unmistakable noise of bikes. Sure enough, when I looked over my shoulder, I saw a convoy of motorcyclists racing down the street.

Admittedly, I antagonized them: I yelled something like, “Get in the bike lane!” as they passed and gestured helpfully to show them the way. I maintain that the motorized five-man weave that they were carrying out was the original antagonism, but the bikers objected to my admonishment. They turned around and started circling me, and an unfriendly colloquy ensued. I didn’t want to compromise my pace, and they didn’t want to dismount, so the six of us continued yelling at each other as we made our way down the road. But when I informed one of my interlocutors that, actually, this street was only for pedestrians, they took that as a challenge. One invited me to “call the cops! I love the cops!” When I said that I might, they circled one more time before taking off, but not before the last biker reached out and pounded me on the back of the head.

It didn’t hurt much, but I wanted to prove a point (plus, onlookers seemed sympathetic to my situation). So I took the gentleman’s invitation and called the police, even though the bikers were long gone by now. After the cops arrived and scolded a group of kids for playing on the rocks by the river, they thanked me, told me that they would send more patrols to the area, and conceded that there wasn’t much that they could do. To the responding officers’ chagrin, New York cops have been barred from giving chase to otherwise-law-abiding dirt bikers since 2012—because someone could get hurt. The motorcyclist’s assertion that he loves the police may well have been true, I was told. Bikers know that cops can’t do anything to them, and they relish reminding the authorities of this fact.
Nothing about the incident shook me, but someone who isn’t a man in his twenties might have felt menaced. And while I’m under no illusions about what living in New York actually entails, you can see why these kinds of incidents can make some people want to stay away. Bikers roaming the streets with impunity—ruining a quiet weekend for a family outing at the park, or subjecting everyone else to ear-splitting engine noise at 2 am—is one of those “grinding, day-to-day incivilities and minor street offenses that erode the quality of urban life, make people afraid, and create the milieu within which serious crime flourishes,” as the great criminologist George Kelling described such incursions. I don’t plan on changing my routine, though. You can’t let them win.
Won’t Get Fooled Again
After the pandemic, Americans should never let public-health authorities deprive them of their liberties.
John Tierney
More than a century ago, Mark Twain identified two fundamental problems that would prove relevant to the Covid pandemic. “How easy it is to make people believe a lie,” he wrote, “and how hard it is to undo that work again!” No convincing evidence existed at the start of the pandemic that lockdowns, school closures, and mask mandates would protect people against the virus, but it was remarkably easy to make the public believe that these policies were
“the science.” Today, thanks to two years of actual scientific evidence, it’s clearer than ever that these were terrible mistakes; yet most people still believe that the measures were worthwhile— and many are eager to maintain some mandates even longer.
Undoing this deception is essential to avoid further hardship and future fiascos, but it will be exceptionally hard to do. The problem is that so many people want to keep believing the falsehood—and it’s not just the politicians, bureaucrats, researchers, and journalists who don’t want to admit that they promoted disastrous policies. Ordinary citizens have an incentive, too. Adults meekly surrendered their most basic liberties, cheered on leaders who devastated the economy, and imposed two years of cruel and unnecessary deprivations on their children.
They don’t want to admit that these sacrifices were in vain.
They’re engaging in “effort justification,” a phenomenon famously demonstrated in 1959
with an experiment involving a tame version of a hazing ritual. Social psychologists Elliot Aronson and Judson Mills offered female undergraduate students a chance to join a discussion group on the psychology of sex, but first some of them had to pass an “embarrassment test.” In the mild version of the test, some students read aloud words like “prostitute” and “petting.” Others had to pass a more severe version by reading aloud from novels with explicit sex scenes and lots of anatomical obscenities (much more embarrassing for a young woman in the 1950s than for students today). Afterward, all the students, including some who hadn’t been required to pass any test, listened in on a session of the discussion group, which the researchers had staged to be a “dull and banal” conversation about the secondary sexual behavior of lower-order animals. The participants spoke haltingly, hemmed and hawed, didn’t finish their sentences, mumbled non sequiturs, and “in general conducted one of the most worthless and uninteresting discussions imaginable.”
But it didn’t seem that way to the women who’d undergone the severe embarrassment test. They were far more likely than the other students to give the discussion and the participants high ratings for being interesting and intelligent. The experiment confirmed the then-novel theory of cognitive dissonance: the young women didn’t like thinking that they’d gone through an ordeal for the sake of a worthless reward, so they avoided this mental discomfort (cognitive dissonance) by rewriting reality to justify their effort. Other studies showed the same effect in people who had undergone real-life initiation rituals to join fraternities and other groups. The more effort involved in the initiation ritual, the more valuable seemed the reward of membership.
Researchers also reported that “shared dysphoric experiences” produced “identity fusion” within a group, making members more loyal and more willing to make further sacrifices for their comrades. Thus, fans of English soccer teams who suffered together through a losing season were more devoted to one another than were fans of a winning team, and members of Brazilian jujitsu clubs who endured a painful graduation ceremony—walking a gauntlet while being whipped by belts—became more willing to make charitable donations to their club than were members at similar clubs with less extreme ceremonies.
If one brief bad experience can transform people’s thinking, imagine the impact of the pandemic’s ceaseless misery. It’s been a two-year-long version of Hell Week, especially in America’s blue states, with Anthony Fauci and Democratic governors playing the role of fraternity presidents humiliating the pledges. Americans obediently donned masks day after day, stood six feet apart, disinfected counters, and obsessively washed their hands while singing “Happy Birthday.” They forsook visits to friends and relatives and followed orders to skip work and church. They forced young children to wear masks on the playground and in the classroom—a form of hazing too extreme even for Europe’s progressive educators.
Some Americans refused to submit to these rituals, but their resistance only intensified solidarity among the faithful. The most zealous kept their masks on even after they were vaccinated, even when walking alone outdoors. The mask became their version of a MAGA hat or a fraternity brother’s ring; some have vowed to keep wearing theirs long after the pandemic. They’ve already called for permanent mask mandates on airplanes, trains, and buses, and they’ll probably clamor for more school closures and lockdown measures during future flu seasons.
Facts alone will not be enough to change their minds. To undo the effects of the hazing, we need to ease their cognitive dissonance by showing that they’re not to blame for their decisions. The mental mistakes were not made by citizens who dutifully sacrificed for two years. They assumed that the Centers for Disease Control knew how to control disease and that scientists and public-health officials would provide sound scientific guidance about public health. Those were reasonable assumptions. They just turned out to be wrong.
After a great disaster, the traditional response is to appoint a blue-ribbon panel to investigate it, and a bill has already been introduced in Congress
to create a Covid commission. In theory, this could be a worthy public service, allowing experts to sift the evidence impartially and determine which strategies worked, which ones failed, how much needless damage was done—and whom to blame for it. But in practice, which experts would the current Democratic administration or Congress appoint? Presumably, the pillars of the public-health establishment—the same luminaries whose advice was followed so calamitously the past two years.
Before Covid, the United States drew up plans for a pandemic and maintained the world’s most lavishly funded scientific and medical institutions to deal with one. When the coronavirus arrived, the leaders of those institutions should have identified who was at serious risk and who wasn’t and adopted proven strategies to protect the vulnerable while doing the least harm to everyone else. They should have monitored the effects of their policies and adjusted them based on what they learned. By honestly communicating the risks and considering the overall public good, they could have tamped down needless fear and united the country behind their efforts.
Instead, they proceeded to ignore their own plans as well as the basic principles of science and public health. Leaders of the CDC terrified the public with worst-case scenarios based on computer models—and then used those blatantly unrealistic projections to claim unprecedented powers and experiment with untested strategies. Their pre-Covid planning scenarios had rejected business and school closures even for a pandemic as deadly as the Spanish flu of 1918, but when the Covid-19 pandemic came, they imposed lockdowns without even pretending to weigh the hypothetical benefits against the tangible economic, medical, and social costs— not to mention the intangible costs in emotional hardship and lost liberty. Randomized clinical trials conducted before the pandemic had repeatedly shown that masks did little or no
good at preventing viral spread, but the CDC proclaimed them effective against Covid and promoted mask mandates nationwide. As the pandemic wore on, federal health officials looked for excuses to justify the lockdowns and mandates, hyping flawed studies and cherry-picked data, while failing to sponsor rigorous research testing their strategies. They stubbornly ignored the hundreds of studies around the world showing that, except in a few isolated places, lockdowns did not reduce Covid mortality and that mask mandates were generally ineffective and senselessly cruel in classrooms. The most glaring evidence came from places with the least restrictive policies, like Florida, along with the Nordic countries Sweden, Finland, and Norway, which avoided lockdowns and mask mandates—yet did as well as, or better than, the average in preventing both Covid deaths and overall mortality during the course of the pandemic.

Instead of heeding all this evidence of their mistakes, federal officials did their best to suppress it and silence dissenters. Francis Collins, the head of the National Institutes of Health, wrote to Anthony Fauci in October 2020 urging “a quick and devastating takedown” of the “three fringe epidemiologists” responsible for the Great Barrington Declaration. These three researchers—from the “fringe” institutions of Harvard, Stanford, and Oxford—criticized the deadly harms of the lockdowns and urged an alternative strategy of focusing protection on the high-risk elderly, allowing natural immunity to grow among the younger population at low risk. Fauci and Collins went on a media offensive, dismissing the focused-protection strategy as “very dangerous” and “not mainstream science.” Other scientists quickly joined the attack on the Great Barrington Declaration by signing a rebuttal, the John Snow Memorandum, which asserted that lockdowns were effective and dismissed the idea of natural immunity, claiming that “there is
no evidence for lasting protective immunity to SARS-CoV-2 following natural infection.”
It was a remarkably irresponsible claim, given already-existing evidence at the time (the fall of 2020) that people’s immune systems developed defenses after a Covid infection. It would have been surprising if an infection didn’t produce durable protection. Yet this denial of natural immunity appeared in The Lancet and was signed by thousands of scientists and doctors, including Rochelle Walensky, who enshrined this unscientific notion as CDC policy when she became the agency’s leader during the Biden administration. It didn’t matter that natural immunity was repeatedly shown to be stronger and longer-lasting than vaccine immunity, or that other countries exempted people with natural immunity from vaccine mandates. The Biden administration insisted on vaccinating everyone—and firing workers who refused, including hospital staff and other essential workers whose prior Covid infections gave them stronger immunity than their vaccinated colleagues. Instead of uniting Americans against the virus, public-health officials chose painful policies that divided the faithful from the disobedient.
The public needs to learn what went wrong during the pandemic, but they’re not going to hear it from the Biden administration. It rewarded Fauci for his failures by giving him a new title, Chief Medical Advisor to the President, and would surely resist any serious investigation of its Covid strategies. Republicans could start the process if they win control of Congress in November and establish a Covid commission, but they’d be taking on the federal bureaucracy as well as the public-health establishment. Scott Atlas, the health-policy analyst from the Hoover Institution who joined the Trump administration’s Covid task force and fought unsuccessfully against Fauci’s pandemic policies, says that his experience in Washington has convinced him that any government-run commission would be a mistake.
“It’s massively naïve to think that our government will do anything objective,” Atlas says. “Any U.S. government panel would be viewed as, or be in reality, 100 percent partisan. It could
be smarter to have an international organization do it, looking at the overall questions of management, because if it’s only an assessment of the U.S., then it naturally becomes a political blame game.” But which international organization could be trusted to do a fair investigation? The obvious candidates, like the World Health Organization or the World Bank, would presumably rely on establishment figures loath to admit their mistakes. And even if they honestly evaluated the pandemic strategies, how much impact would
the report have? The mainstream press would probably either ignore it, as it ignored a recent meta-analysis concluding that lockdowns had “little to no effect” on Covid mortality, or attack it with the same tactics used to smear the Great Barrington Declaration as “not mainstream.”
The Great Barrington scientists’ ideas about focused protection and natural immunity have been vindicated—unlike the counterclaims and unproven strategies promoted in the John Snow Memorandum—but these researchers were no match for their media-savvy opponents, as Stanford’s John Ioannidis recently concluded after analyzing the credentials of the two sides. By considering how often the scientists’ research had been cited in the scientific literature, he found that the signatories of the Great Barrington Declaration included just as many top-cited scientists as did the signatories of the John Snow Memorandum. But there were a few crucial differences: the Snow signatories had many more Twitter followers, and they received a lot
more attention on Twitter than in the scientific community. They had the dubious distinction of scoring much higher on a scale called the Kardashian index, named after the celebrity Kim Kardashian, which measures the discrepancy between a scientist’s social-media footprint and the citation impact of the scientist’s research. Twitter enabled activist scientists to exert an outsize impact on the public debate over Covid strategies. The lockdowns and mask mandates came to be perceived as “the science,” parroted by the mainstream press and enforced by censors on social-media platforms.
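For readers curious about the arithmetic behind that scale: in Neil Hall’s original formulation (Genome Biology, 2014), the index divides a scientist’s actual Twitter following by the following that his citation record would predict. A minimal sketch of that calculation, with the sample figures invented for illustration:

```python
# A minimal sketch of the Kardashian index (K-index) from Neil Hall,
# "The Kardashian index" (Genome Biology, 2014). Sample inputs invented.

def predicted_followers(citations: int) -> float:
    """Twitter followers 'expected' from scholarly impact, per Hall's
    fit to a sample of scientists: F(C) = 43.3 * C^0.32."""
    return 43.3 * citations ** 0.32

def kardashian_index(actual_followers: int, citations: int) -> float:
    """K = F(a) / F(C): the ratio of real followers to predicted ones.
    Values far above 1 mean a social-media footprint that outruns the
    citation record; Hall's tongue-in-cheek cutoff for a 'Science
    Kardashian' was K > 5."""
    return actual_followers / predicted_followers(citations)

# A heavily cited scientist who barely tweets...
print(round(kardashian_index(actual_followers=500, citations=20_000), 2))    # ~0.49
# ...versus a modestly cited scientist with a large Twitter audience.
print(round(kardashian_index(actual_followers=50_000, citations=2_000), 2))  # ~101
```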
The activists kept pretending that those strategies worked, but their narrative became harder to sustain. Surveys by the Pew Research Center showed that the public’s trust in scientists rose at the start of the pandemic and then began falling. The earliest and steepest declines occurred among Republicans, so that today only 15 percent of Republicans have “a great deal of confidence” in scientists—while more than a third have “not too much” or “none at all.” Democrats remain far more trusting, but even their confidence in scientists is now lower than at the start of the pandemic.
“Public trust in science is over unless there’s a thorough review of the pandemic policies,” says Jay Bhattacharya of Stanford, one of the Great Barrington Declaration scientists. Unlike Atlas, he believes that a federal Covid commission could serve a purpose. “The harms of the lockdowns are so obvious, and the failure to protect the vulnerable so obvious, that it would be hard for a commission to whitewash the facts. It’s going to take political will, but it needs to be done to restore trust in public health.”
For now, the best opportunity for a public airing of the facts may be the 2022 election campaign. Some candidates are already attacking the lockdowns and mask mandates, and pandemic strategies could become a major issue in the 2024 presidential race, especially if Ron DeSantis runs on his success as Florida’s governor. That prospect has already inspired hit pieces in the media and attacks from Democrats like Gavin Newsom, the governor of California, which suffered one of the nation’s worst surges in unemployment during its strict lockdowns. Newsom recently defended his state’s draconian mandates by claiming that an additional 40,000 Californians would have died if he had followed Florida’s policies. But that misleading figure, repeated uncritically by journalists, was based on a crude comparison of the states’ Covid mortality rates without accounting for the larger percentage of elderly people in Florida.
When properly adjusted for the age of the population, the cumulative Covid mortality rate in Florida has been below the national average. As of late March, Florida’s rate was the 19th lowest among the states, only a little higher than California’s, which was the 14th lowest. And by a more important indicator—the rate of excess mortality, a measure of how many more deaths than normal from all causes occurred during the pandemic—Florida has fared slightly better overall than California, and notably better among the young. The rate of excess mortality among young adults has been consistently lower in Florida than in California, where the strict lockdowns presumably contributed to deaths from other causes. If California’s cumulative rate of excess mortality equaled Florida’s, about 5,000 fewer Californians would have died during the pandemic. And if California’s unemployment rate equaled Florida’s last year, 500,000 fewer Californians would have been out of work.
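Both adjustments rest on simple arithmetic. Age standardization re-weights each state’s age-specific death rates by a common reference population, so that Florida’s older demographics don’t skew the comparison; excess mortality subtracts a baseline of expected deaths from observed all-cause deaths. A minimal sketch of both calculations, using invented placeholder numbers rather than the actual Florida or California data:

```python
# Minimal sketches of the two mortality measures discussed above.
# All figures are invented placeholders, not actual state data.

# 1) Direct age standardization: weight each age band's death rate by a
#    shared reference population, so a state with more elderly residents
#    isn't penalized by demography alone.
def age_adjusted_rate(rates_per_100k: dict, reference_weights: dict) -> float:
    return sum(rates_per_100k[band] * reference_weights[band]
               for band in reference_weights)

reference = {"0-24": 0.31, "25-64": 0.52, "65+": 0.17}  # shares sum to 1.0
state = {"0-24": 5.0, "25-64": 90.0, "65+": 900.0}      # deaths per 100k
print(age_adjusted_rate(state, reference))              # 201.35 per 100k

# 2) Excess mortality: observed all-cause deaths minus the expected
#    baseline (say, a pre-pandemic average), expressed per 100k residents.
def excess_rate_per_100k(observed: int, expected: int, population: int) -> float:
    return (observed - expected) / population * 100_000

print(excess_rate_per_100k(observed=230_000, expected=200_000,
                           population=20_000_000))      # 150.0 per 100k
```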
Those are the hard truths that Americans need to hear after two years of Covid hazing. It won’t be easy convincing them that they fell for a deception, but it can be done, as DeSantis demonstrated at a recent appearance in Florida when he urged a group of college students on the podium to take off their masks. “We’ve got to stop with this Covid theater,” he said. “If you want to wear it, fine, but this is ridiculous.” As usual, the facts were distorted by the press, which pretended that by giving the students a choice, DeSantis was somehow guilty of “bullying”—as if these poor students hadn’t been bullied for two solid years into wearing masks that they didn’t need. Some students on the podium kept their masks on, looking like meek pledges during Hell Week, but a few were emboldened to uncover their faces and breathe fresh air. At least for the moment, they were free to wonder whether this ridiculous fraternity was worth staying in anymore.
How Really to Be an Antiracist
Teach black kids to read.
Kay S. Hymowitz
There’s an old joke about a chemist, a physicist, and an economist stranded on a desert island with only a sealed can of food. The chemist and physicist each propose their own ideas about how to open the can. The punch line comes from the economist, who proffers: “First, assume a can opener.”
I’ve been brooding over this joke while watching “antiracism” teaching—some might call it Critical Race Theory (CRT) or social justice—take over the American education world with Omicron-like speed. Lesson plans, books, tips for in-class activities, discussion points, and curricula swamp the teachers’ corner of the Internet. The proposals come from a metastasizing number of pedagogic entrepreneurs and activist groups, some savvy newcomers, some influential veterans like Black Lives Matter at School, Learning for Justice (formerly Teaching Tolerance), Teaching People’s History (the Zinn Education Project), the Racial Justice in Education Resource Guide (from the National Education Association), and, of course, the current star: the 1619 Project (the Pulitzer Center). To me, all these ideas seem like the ruminations of desert-island economists. They start with an impossible premise: that the students of these recommended texts actually know how to read.
I am overstating, but not by much. A significant number of American students are reading fluently and with understanding and are well on their way to becoming literate adults. But they are a minority. As of 2019, according to the National Assessment of Educational Progress (NAEP), sometimes called the Nation’s Report Card, 35 percent of fourth-graders were reading at or above proficiency levels; that means, to spell it out, that a strong majority—65 percent, to be exact—were less than proficient. In fact, 34 percent were reading, if you can call it that, below a basic level, barely able to decipher material suitable for kids their age. Eighth-graders don’t do much better. Only 34 percent of them were proficient; 27 percent were below-basic readers. Worse, those eighth-grade numbers represent a decline from 2017 for 31 states.
As is always the case in our crazy-quilt, multiracial, multicultural country, the picture varies, depending on which kids you’re looking at. If you categorize by states, the lowest scores can be found in Alabama and New Mexico, with just 21 percent of eighth-graders reading proficiently. The best thing to say about these results is that they make the highest-scoring state—Massachusetts, with 47 percent of students proficient— look like a success story rather than the mediocrity it is.
The findings that should really push antiracist educators to rethink their pedagogical assumptions are those for the nation’s black schoolchildren. Nationwide, 52 percent of black children read below basic in fourth grade. (Hispanics, at 45 percent, and Native Americans, at 50 percent, do almost as badly, but I’ll concentrate here on black students, since antiracism clearly centers on the plight of African-Americans.) The numbers in the nation’s majority-black cities are so low that they flirt with zero. In Baltimore, where 80 percent of the student body is black, 61 percent of these students are below
basic; only 9 percent of fourth-graders and 10 percent of eighth-graders are reading proficiently. (The few white fourth-graders attending Charm City’s public schools score 36 points higher than their black classmates.) Detroit, the American city with the highest percentage of black residents, has the nation’s lowest fourth-grade reading scores; only 5 percent of Detroit fourth-graders scored at or above proficient. (Cleveland’s schools, also majority black, are only a few points ahead.)
In April 2020, the Sixth Circuit Court of Appeals ruled in favor of former students suing Detroit schools for not providing an adequate education. The suit cited poor facilities and inadequate textbooks, but below-basic literacy skills were the primary academic complaint. One of the plaintiffs was a former Detroit public school student who went on to community college and ended up on academic probation, in need of a reading tutor. His story is so typical as to be barely worth mentioning—except for the fact that he graduated at the top of his public high school class. And as if this weren’t bad enough, the numbers appear likely to get worse with the impact of Covid-19 disruptions.
The tragedy for black children and their families, as well as a nation trying to reckon with racial disparities rooted in its own history, can’t be overstated. If you want to make sense of racial gaps in high school achievement, college attendance, graduation, adult income, and even incarceration, you could do worse than look at third-grade reading scores. Three-quarters of below-proficient readers in third grade remain below proficient in high school. Before third grade, children are learning to read; after that, they are reading to learn, in one well-known formulation. All future academic learning in humanities, social sciences, business, and, yes, STEM fields depends on confident, skilled reading. “The kids in the top reading group at age 8 are probably going to college. The kids in the bottom reading group probably aren’t,” as Fredrik deBoer, the iconoclastic author of The Cult of Smart, has put it. And the absence of a sheepskin is hardly the
worst of it. Upward of 80 percent of adolescents in the juvenile justice system are poor readers, according to the Literacy Project Foundation. Over 70 percent of inmates in America’s prisons cannot read above a fourth-grade level. It’s been said that authorities use third-grade reading scores to predict how many prison beds will be needed. That meme is probably apocryphal, but the sad fact is that it makes sense.
The irony would bring tears to the eyes of Martin Luther King, Jr. Before the Civil War, most Southern states had laws forbidding slaves from reading or writing. Enslaved men and women were known to risk whippings and death in order to learn their letters, sometimes with the aid of a sympathetic white but frequently on the strength of their own determination. “Once you learn to read, you will be forever free,” the most famous of those readers, Frederick Douglass, promised. What would he, or King, make of an education system that leaves more than half of twenty-first-century black kids barely literate?
Scour antiracist education sites on the Internet, and you’ll get the distinct impression that no one in the field has grasped the implications of this reality or that educating children in any familiar sense of the term was never the goal, anyway. In fact, a number of antiracist activists and educators have been blunt about their indifference to teaching reading. What else could it mean when the chancellor of the nation’s largest school system scorns “worship of the written word” as an imposition of white supremacy? In fairness, most educators are probably simply assuming the proverbial can opener—namely, competent readers who also have considerable background knowledge, including basic facts about the world and history. Learning for Justice, for instance, recommends a fourth-grade text about a woman named Helen Tsuchiya. Though Tsuchiya was born in the U.S., the site tells us, she was moved “to an internment camp surrounded by barbed wire after the Japanese attack on Pearl Harbor.” What are the chances that the fourth-grader reading at a basic level—never mind the majority of black children who are reading below basic—will be able to decipher words like internment, barbed wire, and Pearl Harbor, much less grasp their significance enough to facilitate comprehension? Progressive educators are not only failing to factor in the sad truth about students’ reading ability but also overlooking the fact that American students do even worse in geography and history than in reading.
Another lesson plan for elementary and middle school students, this one recommended in the Pulitzer Center’s 1619 portal, reveals a similar chasm between politicized pedagogical fantasy and student reality. “In this unit, students learn to identify underreported stories of migration, and what is missing from mainstream media representations of migrants’ experiences,” the plan reads. “They analyze nonfiction texts and images, practice identifying perspectives in media, and synthesize their learning to form a new understanding of migration. In their final project, students communicate how their perspective on migration has grown or changed through a creative project, original news story, or existing news story edited to provide a more holistic picture of migration.” The lesson’s unspoken purpose is to impress students with the putatively anti-immigrant slant of American news. But an elementary schooler probably doesn’t know what the “mainstream media” is and is even less likely to have read any of it. Basic readers will have difficulty deciphering words like migrant or immigration. (Unless they have family there, they also won’t know the location of Syria or Sweden, two of the immigrant countries mentioned in the lesson plan—there’s that geography problem again.) The same obstacles are bound to trip up the typical middle schooler; remember, 68 percent of eighth-graders can’t read proficiently. This is not education but indoctrination: teachers are being told to foist an opinion worthy of debate on ill-informed children, while denying them the capacity to evaluate it critically.
Social-justice educators would doubtless object that the catastrophic literacy rates of black students are solid proof of the structural racism and teacher bias that they’re intent on fighting. They would rightly observe that reading scores correlate with parental income and education; black children tend to come from less affluent and less educated homes, a fact at least partially tied to
historical racism. But evidence that racial disadvantage should not be an obstacle to literacy is there for anyone who bothers to look. Nearly 60 percent of black children in New York City charter schools read proficiently; that’s true for only 35 percent of those in district schools. (And 80 percent of the kids in New York City charters are economically disadvantaged.) Unless someone can prove that district teachers are more racist than those at charters—an unlikely theory—it would seem that charters simply do a better job of teaching kids to read.

The differences between states also point to a pedagogical, rather than white-supremacist, explanation for racial discrepancies. People might reasonably predict that poor Southern states would have lower overall reading scores than more affluent states in the Northeast, and they’d be right. But the Urban Institute has developed a nifty interactive chart that lets us compare states adjusting for race and poverty (or other variables). The counterintuitive results show that Mississippi, the poorest state in the nation and one with a dreadful racial history and an equally dreadful education record, is turning things around. The state is now more successful at teaching disadvantaged black children to read than top-ranked and affluent Massachusetts and New Jersey.
These successes are no mystery, but they do require a quick history of the nation’s long-simmering “reading wars.” For at least a generation now, American educators’ preferred approach to reading has been known as “whole language.” Whole language encourages teachers to do “shared” and “interactive” reading with children, to sight-read words that they’ve seen before, and to guess, with the help of illustrations and intuition, when they encounter an unfamiliar word. The guiding assumption is that reading is a natural process and teachers should just guide kids toward literacy. Children don’t need direct instruction to read any more than they need instruction to learn to talk.
Struggling Readers
Percentage of U.S. fourth-graders reading below NAEP’s basic level:
All U.S. fourth-graders: 34 percent
Hispanic fourth-graders: 45 percent
Black fourth-graders: 52 percent
(Source: NAEP Report Card: Reading, http://www.nationsreportcard.gov/reading/nation/achievement/?grade=4)

But over recent decades, linguists, cognitive psychologists, and data-driven educators have reached a consensus that this is not what makes Johnny read. The beginning reader needs, first of all, to “de-code.” To accomplish that, teachers must systematically impart “phonemic awareness.” The shorthand for this approach is “phonics”—that is, the relation between the letters on the page and the sounds of speech. Children learn to blend those sounds, or phonemes, together into syllables, which they then combine into words. With practice, the process becomes fluent, even automatic, freeing up the bandwidth for a fuller comprehension of the meaning of the words. One example from journalist Emily Hanford, who has done some of the best work on reading science, succinctly captures the problem when children are not taught to decode. Hanford interviewed a group of adolescents reading at a third-grade level in a phonics-oriented class in a Houston juvenile detention center. She asked 17-year-old DeShawn what he is learning in his class. “Like ‘ph.’ It’s a ‘f,’ like physics,” DeShawn explained. “I never knew that.”

Though whole language has been failing many millions of schoolchildren like DeShawn (and some unknown number of middle-class kids whose parents can afford to spend money on private tutors to teach the decoding skills that their children should have learned in school), educators have been loath to give up their dreams. So they introduced a (supposedly) new approach with the benign-sounding name “balanced literacy.” In theory, balanced literacy blends the two methods of whole language and phonics; in practice, phonics gets short shrift. Few ed schools or teaching programs show student teachers how to teach phonics in the defined, logical progression necessary for students to catch on to the complexities of the English language. Basement-level reading scores haven’t budged.

Still, signs of change are evident. In 2013, legislators in Mississippi provided funding to start training the state’s teachers in the science of reading; I’ve already noted their encouraging results. Other states, including Florida, Colorado, and Tennessee, are gesturing toward
taking reading science more seriously. And David Banks, New York City’s new schools chancellor, canceled his predecessor’s dismissal of the “white worship of the written word.” Teachers have been “teaching wrong” for 25 years, Banks said. “‘Balanced literacy’ has not worked for Black and Brown children. We’re going to go back to a phonetic approach to teaching.”
The good news comes with some cautions: first, for reasons no one understands, a significant minority of children will learn to read competently without getting any direct instruction in how to sound out words; their success continues to have the unintended consequence of providing balanced-literacy supporters cover for their otherwise disastrous results. Second, phonics needs to be taught systematically from kindergarten through third grade; no one should expect solid results with a random sprinkling of “phonemic awareness” here and there, the practice in most balanced-literacy classes. Third, learning how to decode is not everything; to become proficient readers, children also must know what words mean. They will, in other words, need to develop a rich vocabulary and varied background knowledge. Finally, intelligent teaching methods are not a panacea for racial and income disparities; no matter how well black children are taught to read, white children are still more likely to grow up with educated parents, which means that they will be hearing more vocabulary words, more complex language, and more useful information about the wider world. This problem can be solved over time but only if more disadvantaged kids are given the chance to pass on the benefits of their own literacy to their children.

A Big Difference in Reading-Proficient Students
Percentage reading proficiently:
Black students: 58 percent in NYC charters; 35 percent in NYC public schools
Hispanic students: 53 percent in NYC charters; 37 percent in NYC public schools
(Source: NYC Charter School Facts 2021–22, New York City Charter School Center, http://nyccharterschools.org/policy-research/fact-sheets/charter-facts)

The reading emergency should be the primary focus for educators, especially those in a position to help black children. Yet a growing number of school districts are interviewing prospective teachers, even those for elementary school, fixated on one question: “What have you done personally or professionally to be more antiracist?” The best answer to that question would be: “Teach black children how to read.” Better yet, change the question to “What’s the best way to teach reading?” and we might see some real racial progress.
China’s Impossible Dream of Order
Haunted by past humiliations, the nation’s leaders seek to restore what they see as its rightful place in the world.
Guy Sorman
Since November 2021, Lithuania has been China's enemy Number One. How did a country with 2 million inhabitants manage to provoke Chinese leaders to the point of ending diplomatic and commercial relations? The Lithuanian government dared to allow Taiwan to open a representative office in Vilnius, the capital, using the name Taiwan, instead of Taipei, the term that China prefers. Taipei is a city whose existence the Chinese regime cannot deny; Taiwan is a dissident republic that isn't supposed to exist. The Lithuanians, fiercely anti-Communist after enduring Soviet occupation, acted deliberately. Perhaps they underestimated Beijing's aggressive reaction—but then the West sometimes has difficulty grasping what appears to be Chinese paranoia.
To understand China, Henry Kissinger observed, one should put oneself in its place; it was advice that he himself practiced. Chinese leaders, haunted by a desire for international recognition, perceive the slightest breach in diplomatic protocol as a resurrection of imperialism. China was once the world's greatest power, but it was late to recognize the West's rise, as well as the importance of science and industry in fueling that rise. This blindness led to China's effective colonization in the nineteenth century—by Europeans, Americans, and, in a supreme humiliation, the Japanese. Over the course of that century, Chinese emperors had to sign capitulatory treaties in bunches and to cede territory, before the Empire collapsed entirely in 1911.
Then followed a half-century of violent struggle between warlords, until the victory of the Communist army, led by Mao Zedong and supported by the Soviets, put the Communist Party in power. The real reason Mao and his successors found acceptance among the various peoples of China was not the new leaders' Marxism; it was that they ended the civil wars. They replaced the wars with the eradication of the middle class, totalitarian constraints on private life, the destruction of ancient customs, and the crushing of religions—but for the Chinese, anything was better than the horror of ceaseless civil strife. Too often in the West, we believe that the Chinese Communist Party's legitimacy is based on economic growth, but growth didn't take off until 1979. More fundamental than growth is order. The Beijing regime is, in a way, akin to Franco's Spain, more fascist than Communist, though any classification should be historically contextualized.
China under the CCP wants to maintain order, then, but it also wants to erase the stain of the colonial period. The official historiography blames the colonizers for all the woes that brought down the Empire. Chinese historians thus greatly exaggerate the importance of the Opium Wars (fought between 1839 and 1860), which were merely local conflicts, intensified by commercial rivalries between Chinese and British businesses. In reality, the Empire was a victim above all of its incapacity to modernize—a task that Japan, during the same period, accomplished.
Chinese president Xi Jinping has amassed Mao-like authority.
If we consider this mind-set today, we’ll be less surprised that a rising China is indignant that international institutions, international laws, and human rights are imposed on it, while it had no part in their elaboration. If we were Chinese, we would not easily accept the presence of the American fleet patrolling our coastlines. As a Chinese ambassador to France asked: How would the Americans react if they saw, every day, the Chinese war fleet along the California coasts?
The Chinese, including in intellectual circles, bristle when Westerners put them under a microscope and judge them. When I traveled extensively in China in the mid-2000s for my book The Empire of Lies, my interlocutors would ask why I was writing about their country and not my own, France. It is true that Westerners have published countless books on China; comparatively few have been written about the West by Chinese authors.
This Chinese indifference to the external world characterized the Empire, as the story of the admiral Zheng He illustrates. In 1405, the Yongle emperor of the Ming Dynasty tasked Zheng He, a Muslim eunuch, with exploring the world beyond the seas. Was this curiosity, or a will to conquest? The project was without precedent for this rural empire, which had never before even had a maritime fleet. Admiral Zheng He would head up a gigantic squadron, carrying, at its apogee, nearly 30,000 warriors on 300 vessels, and undertaking seven expeditions from 1405 to 1433. These journeys took Zheng He from what is today Indonesia to the Horn of Africa.
Zheng concluded that none of the civilizations he encountered was of comparable power to China or, indeed, of much interest. At no point did he envision taking possession of far-off lands. After the Yongle emperor's death, in 1424, his son, the Hongxi emperor, ordered the maritime explorations stopped (though Zheng He conducted one last voyage under the Yongle emperor's grandson). The construction of new boats was banned, and the existing fleet destroyed.
The memory of these expeditions remained largely effaced until 2006, when a great exhibition in Beijing resurrected it. The contemporary goal was ideological, not historical: to show how China, unlike the West, had always respected other civilizations, never imposed its religions or norms, and never colonized distant lands. This was intended to reassure Africans and Asians about the presence of Chinese maritime bases, which the newly assertive government was seeking to establish around the globe.
“There is nothing to learn from others,” the Ming emperors had concluded. This haughty stance reemerged when European religious missions, beginning in the seventeenth century, and then diplomatic and commercial missions, failed to forge relations with the emperor. Over three centuries, all emissaries—Jesuits, ambassadors, and business interests—were sent away for the same reason: China had nothing to learn from the outside.
This indifference has not totally disappeared. It was not until the regime of Deng Xiaoping, after Mao’s death, in 1976, that China began watching the West closely, careful to import only techniques and not cultural and political ideas. It was then that the number of Chinese students in American universities surged, mostly in technical fields. President Xi Jinping has been explicit in this emphasis, continually repeating his hostility to liberal ideas; piracy of Western technologies, on the other hand, is encouraged.
The Harvard political scientist Joseph Nye popularized the term “soft power,” denoting various nonquantifiable cultural values of universal significance. The soft power of nations depends on their ability to win the admiration, and even the allegiance, of people in other countries. By this measure, the United States remains the leading soft-power nation, thanks to its incomparable cultural vitality, both popular and elite—from Disney to the Metropolitan Opera. France and Italy also possess considerable soft power, as one can see in their ability to attract tourists and the worldwide appeal of their fashions. Soft power can also be ideological: the Soviet Union’s draw came not from Russian literature but from its model of society, which propaganda presented as a shining alternative to capitalism and colonialism. It was all a deception, of course, but it fooled many for a long time.
Chinese leaders’ aspiration to international legitimacy demands a soft power as attractive as that of the Americans and Europeans. Mao realized this as he exported the revolutionary ideology that bore his name, inspiring movements that shook India, Indonesia, Peru, Italy, and France during the 1960s. Western intellectuals flocked to Beijing seeking enlightenment, just as an earlier generation went to Moscow to bow before the dictatorship of the proletariat. Maoism was eradicated after Mao’s death. Since then, China has exported almost nothing immaterial, whether ideas, films, or books. (Only Chinese science fiction has found an international audience, in the translated works of Cixin Liu and a few other writers.)
China’s soft power has dropped to near-zero because the Communist Party systematically destroyed Chinese civilization. Mao began the destruction. In a speech delivered in 1949 from Tiananmen Square, he called for “an ocean of smokestacks” in the capital, which had been known as the “city of a thousand pagodas.”
In the early 2000s, one could still find the old pagodas here and there, surrounded by factories. Today, the ancient city has been leveled. Only a few vestiges, such as the Forbidden City, survive as tourist attractions, and these are badly restored amid Beijing’s ordinary buildings and urban highways. Beijing is not only among the most polluted capital cities in the world; it is also the ugliest.
The poverty of contemporary Chinese culture holds true in literature. When Gao Xingjian, the greatest contemporary Chinese writer, received the Nobel Prize in 2000, the Chinese government, far from celebrating his achievement, let it be known that he did not represent China—it pretended that he was not truly Chinese but French, as he was living in Paris when awarded the Nobel. (In reality, Gao writes in Chinese and does not speak a word of French.) Then Beijing pressured the Nobel committee to honor a true Chinese writer—that is, one selected by the Communist Party: Mo Yan. The Nobel jury gave in, crowning him in 2012. When I met Mo Yan in Beijing that year, I noted that, in his books, he denounced the destruction of the Chinese patrimony but had never mentioned the 1989 massacre of students in Tiananmen Square. We were in a busy café, and, nervously looking around, he responded: "It is much too early to speak of that."
Portrait of Zheng He, the fifteenth-century admiral whose voyages led him to conclude that China had little to learn from the outside world.
An early sixteenth-century painting found in Taiwan's National Palace Museum; Taiwan has become the repository of much of traditional Chinese culture.
Back in 2006, it was still possible to find traces of religiosity in a nation once profoundly religious. In China, religion must be spoken of in the plural, as it still is in Taiwan, where the old faiths live on. Before the Communist conquest, the Chinese adhered to Buddhist, Taoist, Muslim, Catholic, and Protestant forms of worship. The Communist Party, after trying to eradicate these spiritual traditions by assassination of religious leaders and other repressive measures, decided to tolerate them, as long as they accepted party control. As Xi Jinping declared in 2017, at the 19th National Congress of the Communist Party: "Religious personnel or leaders in China must be Chinese in orientation and provide active guidance to religions so that they can adapt themselves to socialist society."
The result has been a kind of two-level religious practice in temples, mosques, pagodas, and churches, with an officially sanctioned form and a continuing illegal fervor. The Communist Party succeeded in bureaucratizing the two largest official Chinese religions, Taoism and Buddhism, the oral teaching of which depends on the quality of masters, who were exterminated, exiled, or replaced by patriotic personnel. Islam is no better tolerated, though it is practiced by some ethnic Chinese, whose families converted centuries ago. The fault of the horrifically oppressed Uighurs of Xinjiang is to be both Muslim and of another race. As for underground Christianity, from what I have seen in its secret gatherings, it is a hodgepodge of beliefs borrowed from various Christian sources—more reflecting a desire for Westernization than expressing a coherent faith.
I emphasize the disappearance of religions because they were constitutive of the old China and because the Communist Party fears them more than it does democratic dissidents. Liu Xiaobo, winner of the Nobel Peace Prize in 2010, who died under guard in a Chinese hospital in 2017, managed to convince his Western interlocutors, including me, that democracy was compatible with Chinese civilization; but few know of him outside of university circles. The same is true of Wei Jingsheng, considered the leader of the Chinese democracy movement. He was freed from Chinese captivity in 1997, following pressure from President Bill Clinton, but since then, he has lived in exile in America and is without a significant audience in China. The most destabilizing recent protest against the Communist Party, by contrast, was of religious origin: in 1999, 10,000 members of the Falun Gong Buddhist community occupied, in silence, an area near the party's Central Committee compound in Beijing. The sect was subsequently crushed in China, but party leaders still ask themselves how it could have escaped their vigilance.
What form of soft power, then, is to be exported, and how, given the rest of the world's lack of appetite? There is, of course, the Chinese language—the official Mandarin—useful in business; and also an economic model that some think is more effective than the West's liberal capitalism.
The exportation of Chinese soft power goes partly through the so-called Confucius Institutes, which the Communist Party has sought to spread across the world, especially on university campuses. Since the institutes are unencumbered by any academic ethic and censor truths unpalatable to the party, leading American universities refuse to host them. But some schools, needing the funds, accept them.
One might be surprised by the institutes' name. After all, Confucius is, in principle, hated by the Communist Party, since his thought exalts a lost paradise—the antithesis of the progress that the CCP promises. True, Confucius counsels obedience to rulers, but he also argues for revolution if their behavior is immoral. But the party is of the view that, beyond China's borders, Confucius is a recognizable brand.
Still, China's soft power, at a low ebb since the Tiananmen massacre, has continued to decline with Xi Jinping's rise. The limited creative freedom in Communist China that had emerged before Xi has now vanished. The relatively predictable rules of succession that Deng Xiaoping established—collegial leadership, a maximum of ten years in power—have been replaced by a new personality cult and a government that, in some ways, is as oppressive as that under Mao. It is a regime unlikely to win over people in China or outside of it.
Xi is betting on China's vast construction of new infrastructure in other countries to expand its influence globally—the new silk roads. But the Belt and Road initiative, as the project is known, hasn't always gone smoothly, especially when poorer countries in Africa or Central America discover that they must pay back Chinese loans at rates higher than those in the world markets and that the specific building projects are directed, and often executed, by Chinese expatriates, who frequently despise the locals.
China’s Impossible Dream of Order
When Kissinger asked about China’s plans for conquering Taiwan, Deng replied, in essence, “We are in no hurry.” At the time, China had other priorities—above all, its economy, as the country was still very poor. With Xi, Beijing seems to be in more of a hurry. In official talk, Taiwan is an obsession—much more so, in my experience, than among the general population.
This recurring obsession has many facets. One is historical. In 1949, the last troops of the nationalist armies, with their leader, Chiang Kai-shek, took refuge in Taiwan, escaping from Mao, who had no navy. To take Taiwan would be to complete the Communist military victory. It is also the party's view that no historically Chinese territory should escape Beijing's authority. This notion is important for understanding Chinese geographic claims over borderlands conquered by earlier dynasties: Tibet, eastern Turkestan (today Xinjiang), the frozen deserts of the Himalayas, the lost islands in the China Sea, and a few rocks disputed with Japan. Yet Taiwan, one should note, was not always Chinese but frequently independent, peopled by Austronesian aborigines—and later, by Dutch and Japanese colonists.
Yet, for party doctrine, whatever was for even a day Chinese must again be Chinese. This puts in question a large part of Siberia, which Russia today occupies—though Chinese leaders don’t say this aloud. Having crossed the Amur River, which forms the border with Russia, I observed that Chinese peasants and traders in this part of Siberia acted as if it were their home—indeed, the Chinese consider eastern Siberia to be rightfully China’s, but they are in no hurry in this case, either—while the Russians, few in number, pretended not to notice. Both sides remember the deadly 1969 military engagement on the Ussuri River, when the Soviets beat back the Chinese army, at the cost of many casualties on both sides.
Another reason an independent Taiwan infuriates Beijing: it is a prosperous democracy, right at China’s doorstep, and is more authentically Chinese than the mainland. In fact, Taiwan is a conservatory of Chinese culture. As he fled the mainland, Chiang Kai-shek took with him all the treasures of the Imperial City; the National Palace Museum in Taipei contains major works that the emperors accumulated over 1,000 years. Imagine the treasures of the British crown removed by the Irish and exhibited in Dublin! Beyond these material artifacts, Taiwan is home to the arts and traditions of classical China at its peak: music, opera, calligraphy, lacquered and ceramic artwork—all that has disappeared from Communist China. Similarly, all the religions banished from the mainland are freely practiced in Taiwan, especially Taoism, the Chinese religion par excellence. It is in Taiwan (and, until recently, Hong Kong) that the books censored in Beijing appear. These works find their way clandestinely from Taiwan to the mainland, sold underground or published on the Internet, in simplified characters, as only Taiwan preserved the ancient calligraphy of classical China.
In Taiwan, the Taiwanese language, imported from Fujian province, is spoken by those who have inhabited the island for several generations, while Mandarin, the official language of Communist China, is spoken by more recent immigrants. This linguistic difference, a cultural form of democracy, obviously displeases Beijing, since it reminds people that China was once a federation of peoples, cultures, and languages, as India remains today.
In any takeover, Beijing would not hope to confiscate Taiwan’s riches—these would vanish with the flight of entrepreneurs, just as Hong Kong is being emptied of its financiers. The objective would be to eliminate the example of a free, authentic, and prosperous China—one without the Communist Party.
Is a military move likely? It may seem to Xi that the time is propitious, with America seemingly weakened and less inclined to rush to anyone's aid—concerns only amplified by Russia's invasion of Ukraine and the ongoing war in that country. That Xi continually references Taiwan in his speeches seems designed to prepare global and local opinion for a military operation. An attack would also demonstrate that China now has a powerful military, helping to obscure the memory of a number of external defeats—in Korea in 1950, in the 1969 Russian conflict, and against Vietnam in 1979.
Would war with Taiwan make it possible to rally the Chinese people around a national cause? For Xi, nationalism could serve as a substitute ideology for Marxism-Leninism, which now rings increasingly hollow. But this would be a Western importation, since the Chinese have never practiced nationalism; traditionally, each person was from his own province, the language of which he spoke, while also seeing himself as a subject of the emperor. With Mao, a conversion to permanent revolution was required—but not to nationalism. Even supposing that the Chinese people would endorse en masse the taking of Taiwan, which I doubt, and supposing further that the Chinese army would prove equal to an unprecedented combat, there remains the question of Taiwanese forces and, in the possible absence of the Americans, an intervention by the Japanese army and fleet, among the world's most technologically advanced. In the geopolitical scenarios surrounding China, we often forget Japan—and that is a mistake.
Covid-19 dealt a blow to China's aspirations for expanded soft power.
Most troublesome for the West is something still emerging: a formidable political innovation, which Chinese leaders consider an unrivaled alternative to Western democracy. Call it technological despotism. Until the regime of Xi Jinping, the post-Mao Communist Party had sought to combine economic growth with public safety, while deploying a security force that worked to ensure that nothing escaped the party’s grasp. But now, the collecting of personal data is taking despotism to a new level, with high-capacity computers enabling the government to keep files on every Chinese person and assign each an algorithm that tracks behaviors and expectations, establishing a “social-credit” score. The state will thus know what every person hopes and fears and will be able to deliver customized goods and services— or sanction every deviation from the party line— with precision.
This system already exists for attributing credits for consumer goods or for obtaining lodging, especially in China's large cities. Abetting it is facial-recognition technology, which is highly advanced in China, allowing the government to identify unwanted behavior and monitor non-Han persons, including Uighurs and Tibetans, who are, by definition, suspect. In the streets of Urumqi, the capital of Xinjiang, the cameras that proliferate across the city make it possible immediately to identify the Uighurs by their physiognomy and to arrest purported Islamic troublemakers and incarcerate them in reeducation camps.
Chinese leaders mock Western democracy for its disorder and inefficiency. But without a real alternative, the Communist Party long mimicked the external forms of democracy. The people elected representatives, mayors, and assembly members—but the elections were parodies, with unanimous results. A few attempts at genuine local elections in villages occurred in the early 2000s. I witnessed several, along with Jimmy Carter, whose foundation financed the purchase of ballot boxes and the printing of ballots. The experiment proved disastrous for the Communist Party, with independent candidates regularly beating the party choices. After a year or so, the government returned to the simulacra of elections, closing a rare opening that made it possible to see what many Chinese really thought of the party.
National sentiment remains mysterious today, at least to the outside world. If I trust my own observations, which are far from scientific, the Chinese I know have renounced political action and taken refuge in family life. Resistance to the Communist Party still shows up privately. When Deng Xiaoping, seeking to limit China’s population growth, mandated that families have only one child, many couples had two, despite facing fines. Then Xi, worried about a rapidly aging population, ordered families to have two children; but many parents decide to have just one.
Even the parody of democracy will soon be abandoned. In a new technological despotism, what the Chinese continue to think in their hearts will escape the algorithms—but the party won't care. What will matter is outward behavior: conformity to the party line, enforced by omnipresent surveillance. Xi claims that this will allow society to flourish; as he puts it, the Communist Party "really strives for the happiness of the Chinese people." We might scoff at this science-fiction-like project, but China's leaders see in it a future that will guarantee the eternal power of the Communist Party.
What is most probable in China, however, even in an era of algorithmic control, remains the unexpected. A virus that began infecting people in Wuhan in late 2019 wound up severing the supply chains linking China to the world, cutting the growth rate in half for a time—and launching a global pandemic that killed millions and has transformed economies, altered political arrangements, and left social and cultural ramifications that will take years to absorb and understand. (And Covid-19, whatever the truth of its provenance, will do no good for China’s soft power, either, such as it is.) Nor can we exclude the possibility that new social movements could arise in China, perhaps of a religious inspiration. The history of China is haunted by religious revolts that brought down dynasties.
Ten years ago, no one foresaw an economic ascent as astounding as what China has experienced. Two-thirds of the population have reached the standard of living of the middle classes of Western countries, at least in appearance.
Nevertheless, housing is mediocre, health services are archaic, and life outside large urban areas is difficult. Despite an average annual prepandemic economic growth rate of 8 percent, a quarter of Chinese peasants live in poverty. The villages of western China are no better off than the poorest in India or Africa, but the more vigorous inhabitants have the opportunity of leaving for worksites in the east or the south. They will be exploited there, and they lack rights because their domestic passport, or hukou, ties them to their place of origin, where the police can return them at any moment.
This reserve army of the proletariat, to borrow Marx’s vocabulary, exerts downward pressure on Chinese salaries, contributing to the international competitiveness of the country’s industries. Communist China has, in this regard, remained a model of classical economics.
Deng Xiaoping was the first Chinese leader to understand that China could not invent an original model for growth and that it would have to bend to the scientific laws of economics. The government returned the land that Mao had collectivized to the peasants, who, property owners once again, went back to work. They succeeded in feeding themselves, supplying
food to the cities, and reaping surplus value and freeing up part of the labor force. This allowed factories to run at low cost and to begin to export to Western consumers. The profits thus accumulated made it possible to modernize production methods. There never was a Chinese economic "miracle" but only the application of the laws of classical economics, applied to a society eager to escape poverty, with workers prevented from rebelling against abusive work hours and low pay.
The Chinese economic story exemplifies the classical theory of the division of labor, identified by Adam Smith in his 1776 book The Wealth of Nations. What may be truly miraculous is that China's conversion to classical economics coincided with a historically unprecedented globalization of trade. It is access to the global market, in the absence of a liquid domestic market, that has spurred so much annual growth.
The Chinese have also borrowed from the classical model the key role of innovation. Growth, on this view, is based first on an exodus from rural areas, which improves productivity, and then on innovation, which takes up the baton. But innovation costs a lot before yielding marketable goods and services—unless one takes a shortcut, as Japan did in the 1920s and South Korea in the 1950s: copy what others have done. The Chinese did not invent the pirating of patents and intellectual property, but they systematized it on a grand scale and even improved it. High-speed trains are an example. In the 2000s, the Chinese asked global leaders in building such trains to submit project proposals for a Chinese system, which they did. Chinese engineers closely examined these models and combined elements in order to devise an improved Chinese version. This method of recombination or "tweaking" has become an oft-deployed method in China. The Chinese do not see themselves as copying, and they even file patents so as to give their approach legal protection. This helps explain why the Chinese appear to register the greatest number of patents in the world, though these are recognized only in China. To measure China's true capacity for innovation, one should refer to the patents called "triadic"—that is, recognized in the United States, Europe, and Japan. Under this classification, the U.S. is still well in the lead, followed by Japan and Europe. China, as well as India and Russia, is more or less invisible. The future will not be written in China, but in America.
Beyond what it borrows from classical economics, the model has distinctive characteristics that cannot be reproduced. Are we so sure that the Communist Party wants the impoverished quarter to be absorbed into the general economy? A recent example makes me skeptical: Shein, a world leader in low-priced fashion, has an office in the city of Guangzhou and copies new styles within 48 hours of their appearance in Paris, London, or New York. The copies get assigned to thousands of subcontractors, who subcontract, in turn, down to the most isolated villages, where labor costs are minimal. In less than a week, the completed styles come back to Guangzhou and are then exported to the world, at a quarter of the price of European or American competitors. The unknown village workers remain unknown, with no right to future orders or social protection. The head person at Shein, an American of Chinese origin, acknowledged that the clothing line was made in the "far west"—that is, China's extremely poor west. Shein's method of production is widespread in China, which now exports more to countries of middling development in Asia, Africa, and Latin America than to the advanced West.
At the end of 2021, under the guise of restraining monopolistic behavior, Xi Jinping curbed the expansion of the two largest service enterprises on the Chinese Internet: Tencent and Alibaba. They
China’s Impossible Dream of Order
were forbidden to raise further funds or to extend further offers of service without government approval. Western observers were stupefied, as these businesses and their founders had earlier been celebrated and held up as exemplars of success for Chinese youth.
On reflection, however, this power play—even with the cost of slowing down the national economy and its technical capacities—is consistent with the nature of the regime. It is permissible to make money, as long as no aspersions are cast on the dominion of the Communist Party. The sanctioned firms were also gathering data on the Chinese people, but the party considers itself alone authorized to control such data, which are a major tool for the emerging technological despotism.
Everything by the party, for the party, and under control of the party—that could be the regime’s motto. But what is the party? It is not the state. The state functions as an administration, as in the West, but every decision that it makes is supervised by the party’s delegates—political commissars inserted into all the organs of political, economic, and judicial power. All leaders, public and private, are generally, though not necessarily, party members. What matters most is that they report to the political commissars above them.
At the Communist Party’s summit, one finds mostly men and engineers. You find few philosophers or sociologists; the party is in the hands of a technocracy very different in its recruitment and temperament from the mandarinate of the Empire, which was made up of learned men, versed in literature and political philosophy. For a party that views itself as Marxist, one finds few women or workers in its decision-making echelon.
Another particularity of the party is its dynastic character. The leaders are almost all sons of leaders, including Xi himself. If one had to synthesize to the extreme, China is in the hands of technocratic dynasties, for whom nothing is more important than holding on to, and transmitting, power. If, by chance, the party changes course, which happened often in Mao’s day and after, this is the result of palace revolutions— fights to the end between ruling dynasties.
An effective way of getting rid of rivals, perfected by Xi, consists in accusing them of corruption. The accusation is easy enough to back up because corruption is widespread in the party— from the “red envelope” slipped to a low-level apparatchik to give him the right to open a shop or build a building to the granting of stupendous contracts at the top, passing through a family member or a mistress in favor at court. Once, long ago, I was designated by the French government to hand over an important envelope to a Chinese courtesan, so as to obtain for a French business the contract to build an opera house. Alas, the courtesan’s name had a homonym, and I approached the wrong recipient. No matter; I was asked to make another trip and hand off a second envelope, and the business was concluded.
This example, apart from the blunder, is not isolated but a common practice in commercial relations between the Western business world and its Chinese interlocutors. The key to success is to identify the right intermediary to bribe, who will then take care of everything. This practice likely helps explain Western business executives’ enthusiasm for the Chinese model, since, for a certain sum, they can break through bureaucratic barriers and violate all the rules. It’s simpler than in today’s United States or Europe, where one must now fulfill so many social or environmental requirements.
In relations between China and the United States, American leaders, along with many commentators, seem to invert the famous formula of John Quincy Adams in 1821, warning against going abroad "in search of monsters to destroy." It suffices to analyze China, such as it really is, to see that it is neither the enemy, nor even a true competitor, of the United States. In every domain, China is significantly behind the United States. As testimony, consider the more than 300,000 Chinese students in the United States who enjoy freedom and learning unavailable in China. China often serves as an imaginary threat distracting Americans from their domestic quarrels, in the same way that Chinese leaders accuse Americans of interference to distract their own people from their lack of freedom, the mediocrity of their cultural life, the oppression of workers, and rural poverty.
The Chinese Communist Party has created a technological despotism, with omnipresent surveillance of its citizens.
The proper realist attitude for the United States to adopt must, it seems to me, be founded on two pillars. First, it would be well to know the Chinese better—the people as well as their leaders, their frustrations and their ambitions. The second pillar might take inspiration from a strategy that proved effective in confronting the Soviet Union: containment. China has the right to develop itself, and its inclusion in the global order is a net benefit for the Chinese, as it can be for non-Chinese. China has the right to be present in all international institutions. But it has no right to aggression. To encourage economic development and discourage aggression—this seems to be an attitude toward China that the United States and Europe could share. China as a monster to destroy, on the other hand, would be an error of analysis. China does not fit the role; it cannot be compared with the USSR. The Soviets wanted to conquer the world and to export their ideology. China has no such ambition. It only demands the place that, according to its leaders, it deserves. This is of the order of negotiation, not of war.
Make America Make Again
Amid a supply-chain crisis, Biden administration policies are thwarting efforts to bring industrial jobs back home.
Steven Malanga
Optimism among American manufacturers hit a 20-year high during the Trump administration, but the mood changed swiftly after Joe Biden was elected. Fearing a tsunami of onerous and expensive new regulations, manufacturers began halting expansion projects and trimming planned investments. The CEO of a midsize Ohio company, Alloy Precision Technologies, told USA Today that his firm had spent $9.5 million on new equipment to expand domestic production during the Trump years. Now, though, it was delaying an additional $2 million in spending for equipment and hiring because the Biden administration’s new directives raised costs on everything from labor to opening and expanding industrial facilities. As USA Today noted, in his first few weeks in office, Biden signed “a whirlwind of executive actions” dismantling Trump’s deregulatory agenda, which the Republican president had initiated to cut costs, especially for industrial firms.
Just a few months after Biden took office, the United States faced a crisis unanticipated by his agenda: a pandemic-related breakdown of worldwide supply chains that caused widespread shortages of goods ranging from medical gear and medicines to microchips and everyday items like cleaning wipes and toys. In February 2022, the crisis grew worse with Russia's invasion of Ukraine. Sanctions, severed trade routes, and a ban on importing Russian oil provoked rare unanimity among Democrats and Republicans: the United States needs to make more stuff here in America, instead of relying on foreign suppliers.
Last summer, dozens of cargo ships waited outside American ports to unload their goods.
How to pull that off remains the question. Facing intense pressure, the Biden administration issued a policy paper claiming that the U.S. could revive domestic industrial output by toughening employment standards—which inevitably would raise costs—to create “good jobs” and by subsidizing favored industries like clean energy. Some Republicans responded with plans that amounted to putting American industry on a war footing, with government decreeing certain sectors essential and requiring that their goods be made here.
What both sides largely ignored is something that industrial companies themselves have said for years: the Number One reason they’ve offshored jobs is costs, and 50 years of growing regulation at the federal, state, and local levels contributed enormously to that burden. By one count, industrial firms must adhere to some 300,000 federal regulations alone—a bureaucratic tax of hundreds of billions of dollars annually. Local zoning and environmental laws often short-circuit valuable industrial projects, even when they pass federal muster. The Trump administration, to its credit, understood this, which is why it initiated a regulatory review of industrial policy and began hacking away at some of the most arduous and outdated regs. But it didn’t get far enough in one term, and Biden already has undone much of what Trump accomplished.
Still, any cursory examination of the effects of these regulations, many decades old, suggests that the first step in re-industrialization—the low-hanging fruit—would be deregulatory policy that makes manufacturing in America more competitive, including enabling firms to build new plants and staff them at reasonable costs. It’s doubtful that the Biden administration, which somehow imagines that a green energy policy would alleviate the supply-chain crisis, will make such an effort. But some future progrowth president will likely make that project central to bringing back more U.S. jobs.
“How did the most dynamic country on the planet become so sclerotic? We did it to ourselves,” economist Eli Dourado noted in the New York Times last year. “We enacted laws that privilege the status quo at the expense of change and progress. We liberally passed out veto rights to anyone with the money and wherewithal to hire a lawyer. If we want to reverse the damage and create a more prosperous future, we must make it easy to build”—and to manufacture.
The decline of American manufacturing has been a long process, with many causes. Industrial jobs peaked in the U.S. in the late 1970s, at about 19.4 million, began falling in the steep recessionary period of 1980 through 1982, and then kept cycling downward, to about 12.2 million today. As a portion of the private-sector economy, the manufacturing decline has been even more dramatic, falling from more than a quarter of all jobs in 1980 to around 10 percent today. Liberalization of trade policies that encouraged globalization of manufacturing, plummeting transportation costs, a rise in output in Third World countries that began mass-producing goods: all played a role in the massive shift away from industrial production in the U.S. and in other developed economies.
This globalization brought lower costs and greater availability and variety of goods for U.S. consumers. And the larger American economy has kept expanding, despite manufacturing's contraction, adding some 45 million private-sector positions since 1980. But a price was paid as well, in the decline of regional economies that once depended substantially on industrial jobs, and in the greater reliance in the U.S. on goods from around the globe—a development that has contributed to the current supply-chain crisis.
Even pre-pandemic, American firms had begun talking about “reshoring” industrial jobs and
buying more supplies from domestic sources. Some of the movement to reshore has been a response to escalating expenses overseas—including labor—that have eroded the price advantage that came from making things elsewhere. Industrial wages in China, for instance, have risen approximately threefold since 2000. Another key factor: falling energy costs in the U.S., spurred by the nation’s natural-gas and oil-fracking boom. Some companies have already acted on the reshoring front. One survey estimated that, between 2010 and 2015, U.S. firms that make things overseas had brought back some 240,000 jobs. A 2013 Boston Consulting study suggested that new investments in American plants and equipment could add 2.5 million to 5 million industrial jobs in the country, under the right circumstances.
Pandemic-related supply woes have more firms thinking this way. A recent Thomas Industrial Survey found that 83 percent of U.S. manufacturers that make products overseas were considering reshoring some of those jobs. The survey also found widespread interest among American companies in shifting some supply contracts from foreign sources to U.S. producers. Automotive companies, slammed by microchip shortages and other supply delays, along with natural-gas and oil businesses, are the most motivated to switch back to American sources. The potential added value to the U.S. economy of such moves is nearly $450 billion, says Thomas.
But significant obstacles remain—above all, the continuing high cost of manufacturing things in America. Some 40 percent of firms still list that as the top concern. For years, manufacturers have argued that a thickening regulatory tangle makes it more expensive to hire workers, delays the opening of projects and plants as they wait for bureaucratic approval, and requires considerable outlays to comply with government mandates and (once again) to pay for energy.
Over the past 40 years, the National Association of Manufacturers calculates, the federal government has issued an average of one new regulation a week affecting industrial firms. And, if anything, the pace of new regulations has sped up, though the U.S. already has some of the developed world's strictest standards on everything from workplace safety to labor practices. During the tenure of President Bill Clinton from 1993 through 2000, the federal government issued an average of 36 major regulations (those costing more than $100 million to employers) a year. During the Obama administration's first term, that number doubled, to 72 major new regulations yearly.
Donald Trump built a significant part of his presidential campaign on reinvigorating American manufacturing. Once he was elected, some of his policies—especially imposing tariffs on imports—drew intense criticism from free-market economists. And he took a heavy-handed approach to firms that announced that they were moving jobs overseas—threatening, for example, the heating and cooling company Carrier with reprisals when it announced that it was eliminating 1,000 jobs at an Indianapolis factory. Carrier eventually accepted $7 million in government incentives to stay. But Trump officials also recognized that the regulatory regime was a significant drag on American jobs. Shortly after assuming office, Trump instructed the Commerce Department to study how to streamline regulations affecting the industry. Federal officials sought comments from company officials describing the rules they must adhere to, as well as their price tag. Manufacturers complained about, among other things, how often federal rules required them to inspect and report on their operations. One industrial firm told officials that it had to send an employee weekly to the roof of each of its eight plants to ensure by sight that they weren't leaking emissions. Since 2011, the unnamed firm
said, it had made more than 700 such inspections, consuming 1,000 worker hours, without observing a leak.
Another business noted that, while it had once spent most of its environmental resources on equipment that reduced emissions, its regulatory costs lately had grown because of increased recordkeeping, demanded by the feds. Other companies complained, a Commerce Department report said, of “inadequately designed rules that are impractical, unrealistic, inflexible, ambiguous or lack understanding of how industry operates”; of duplicative regulations; and of lengthy approval processes for projects and deadlines that federal bureaucrats regularly missed.
The Commerce Department later recommended that every federal agency that issued regulations affecting manufacturing should develop a reform plan. It also urged accelerated permitting for industrial facilities using the federal FAST Act, originally designed to provide rapid approval for transportation projects. The report criticized the cost-benefit analyses that many federal agencies must do in evaluating projects, finding that they often fail “to capture the true costs of implementing regulation.” Commerce officials urged agencies to adopt more rigorous methods.
Added to these reform initiatives were the Trump 2017 tax cuts, which reduced the basic corporate rate and offered incentives to encourage companies to invest more of their foreign earnings back in America. One global manufacturer, Illinois-based INX International, recounted late last year how it had boosted investment in the U.S., thanks to tax-reform savings—including expanding its workforce by 7 percent and purchasing new equipment. “We have not had one year since 2017 without raises or an increase in benefits,” the company’s top finance officer said. “That’s because the company has been doing pretty well—reaping the benefits from the economy and tax reform.”
The momentum that such steps created clearly influenced manufacturers’ plans. After slumping 1.85 percent, to $2.085 trillion in 2016, manufacturing output in the U.S. surged to $2.334 trillion in 2018, a gain of nearly 12 percent for the first two years of the Trump administration. Those gains coincided with a sharp uptick in manufacturers’ optimism. In a survey by the National Association of Manufacturers in the third quarter of 2016, 61 percent of manufacturers said that they were optimistic about their business. By the fourth quarter of 2017, that number had hit 94.6 percent, and then climbed still higher, to 95.1 percent, in mid-2018—an all-time high for the industrial index.
Biden agreed broadly with Trump’s desire to bring more manufacturing jobs back to the United States. Soon after his inauguration, he signed an executive order attempting to strengthen existing federal efforts to use government’s massive procurement spending to buy increasingly from domestic sources. But other Biden executive orders and legislative proposals have proved far more troubling to industrial firms. Biden’s plan to enact a minimum corporate tax, for instance, would undo some of the Trump tax relief by effectively ending key deductions for investing in local plants and equipment, manufacturers have warned. “Right now, any savings get invested into our people and our operations,” the INX chief financial officer said about the proposed Biden tax hike. “Any loss will negatively affect that.”
Biden’s push to impose labor-union organizing on companies and raise the country’s already-high labor standards would inflate the already-high cost of making goods in America. The administration’s proposed PRO Act would end right-to-work features of federal law, which allow states to let workers opt out of unions, even when the workplace is organized. Today, 28 states are right-to-work, and they accounted for 70 percent of manufacturing-employment growth in the economic rebound after the 2008 recession. Right-to-work has become a key feature of site selection for foreign firms looking to open industrial plants in the U.S. and for domestic firms expanding their manufacturing workforce here. Many of the biggest new U.S. plant projects announced recently, including those by Ford, General Motors, and Volkswagen, are located in right-to-work states. Ending this option wouldn’t boost unionization; instead, it would further discourage reshoring and likely dampen
American manufacturing employment.
The PRO Act reflects a broader Biden administration commitment to intensify regulations, increase bureaucratic oversight, and funnel resources to favored industries. That's clear from a June White House report, purportedly on how to reinforce the country's supply chains and revitalize manufacturing. In some 250 pages, the report manages largely to ignore the chief reason that manufacturers locate jobs overseas—those high domestic costs—and focuses instead on a parade of "woke" goals for industry and on government interventions that inevitably will make it more expensive for most firms to operate. The document attributes some of the country's supply problems, for example, to "lost Einsteins," who never fulfill their destiny to become "workers, researchers, and entrepreneurs" because of America's "inequality in income, race, and geography." The administration commits to new, unspecified "pathways" to develop these resources.
Many of the biggest new U.S. plant projects are in right-to-work states—like Tennessee, where workers are seen assembling Volkswagen sedans at the automaker's plant in Chattanooga.
Another key to solving supply-chain problems, the report says, is to increase pay and benefits for manufacturing and logistics workers through higher labor standards—though many of these workers, even in nonunion shops, earn considerably above the average for similarly educated employees in other fields. The administration's focus is also on developing green industries—ensuring, for instance, that enough rare minerals are available to manufacture lithium batteries for electric cars. Government interventions, including subsidies for green products and using the Defense Production Act to impose federal controls on production in key industries, are additional favorites. But while such measures might help a few sectors through government subsidies and protections, they would do little to solve the broader supply-chain problems affecting the country.
Manufacturers are just as concerned about the Biden administration's energy policies—from the refusal to support key energy projects to proposed higher taxes on natural gas to halting new leases for gas and oil extraction on federal lands. The administration rolled out these policies even as worldwide prices of oil and gas rose and then soared even higher with Russia's invasion of Ukraine, hampering manufacturing worldwide. In Europe, some energy-intensive firms even went as far as shutting down production following a 250 percent increase in natural-gas prices last year. In the U.S., where an explosion of oil and gas production had kept prices down for the last decade, costs doubled in 2021, spiking upward even more after Russia's attack and the Biden administration's ban on importing Russian oil. Even with extraordinary gains in efficiency over the last 30 years, industrial companies still use a quarter of all the energy consumed in America, so the future health of the sector depends on keeping those costs under control.
Manufacturers applauded Trump's moves to stimulate energy production and distribution, including an executive order to fast-track approval of infrastructure projects like the Keystone XL pipeline. Biden, by contrast, killed the pipeline, threatening America's "energy security," the National Association of Manufacturers said. When the administration later announced that it would release oil from the country's strategic reserve to curb skyrocketing prices, the manufacturers' association dismissed the action as a "Band-Aid." "A true energy strategy would strengthen our energy independence, enhance manufacturers' competitiveness and alleviate many of the other supply chain challenges facing our nation," the group contended. "Instead of asking OPEC and Russia to fill the void, we should let American energy workers take the lead."
[Chart: The Regulatory Burden. Issues that manufacturers say most affect their businesses: federal regulations, 88%; employee issues, 58%; attract/retain customers, 42%; R&D or production, 24%; construction or maintenance, 20%; finance/cash flow, 15%; other, 8%. SOURCE: National Association of Manufacturers]
What should be the agenda, then, for making domestic manufacturing more competitive? For now, Republicans and moderate Democrats in Washington should try to stop the Biden administration from continuing down its cost-increasing path. One important move would be to block any future legislative attempts to federalize required unionization. Biden has already showed that he wants to suppress competition among states in this area. The significant
manufacturing gains in states that rank among the least taxed and most sensibly regulated in the U.S. offer a clear lesson in how to boost industrial employment.
Policies that offer expanded social benefits that discourage work should also be opposed. Extra stimulus rounds during the pandemic, including supplemental federal unemployment benefits that stretched into last summer, undermined the nation's jobs recovery, as many of the unemployed simply stayed home. Biden has other proposals that industrial firms say would worsen this disincentive problem, among them dropping the age of Medicare eligibility from 65 to 60. That move, experts at Kearney Consulting argue, would "encourage more workers to retire earlier," disproportionately affecting manufacturers because a quarter of their workforce is already 55 or older. Similarly, the strong push within Democratic circles for free college for all would likely pull many potential industrial workers into college, whether that was right for them or not. By contrast, Biden should maintain initiatives, endorsed by both the Obama and Trump administrations, to funnel more aid into manufacturing-training pathways, including apprenticeship programs, which often reward students with well-paying industrial jobs.
There is harder work to be done, probably by some future pro-growth president and more moderate Congress. High on the list would be to reinstate the regulatory reform efforts that Trump began for manufacturers but that Biden quickly rescinded. Industrial firms have urged Biden to continue to reduce regulations on the sector, to expedite environmental reviews by federal agencies, and to pressure the bureaucracy to meet its deadlines for reform.
An even bigger task would be to reform outdated federal laws. Legislation from the 1970s such as the Clean Air Act and the National Environmental Policy Act sought to address serious environmental problems by regulating industries' emissions and waste. Over the decades, the U.S. has made enormous strides in this area, only to see the weight on industry keep getting heavier. According to the Environmental Protection Agency's own data, for instance, since 1980, the U.S. has seen lead levels in our air decline 99 percent, while carbon monoxide has fallen 75 percent and sulfur dioxide 93 percent. Yet despite such gains, a growing federal environmental bureaucracy has extended the time that it takes to approve new facilities and has hugely increased expenses. Consider the "vintage-differentiated regulations" of the Clean Air Act, which require a company building a new plant to incorporate the latest (and usually the most expensive) technology for emissions control. In 1970, the authors of that act wanted to ensure that firms rapidly adopted the newest technology to reduce pollution. Over time, though, as technologies have improved, the need to install the very latest equipment in any new plant or in an expansion of any older facility has become a disincentive to such projects—including plants that companies might build to return jobs to the U.S., according to research by Harvard environmental economist Robert Stavins.
The Biden administration's study calling for a domestic-manufacturing revival proclaims: "We must rebuild our small and medium-sized business manufacturing base, which has borne the brunt of the hollowing out of U.S. manufacturing." Yet it's smaller industrial firms that bear the heaviest load from government regulatory policies. The average bill to comply with regulations amounts to nearly $35,000 a year per employee at manufacturing firms with fewer than 50 workers—almost triple the proportionate weight that government policies impose on large manufacturers, according to one study. But Biden intends to add expensive new rules and mandates. It's a giant discordance between cause and effect that will only make it harder to bring jobs back to America.
The Empire of Fees
How charges and fines drive government growth
Judge Glock
When I wake up in the morning at my home in Austin, Texas, I turn on the lights, and thereby provide a few cents to the city government's electric company. I flush the toilet, owing a few more to Austin's sewer service. When I pour myself a glass of water, the city water department gets a piece. After I get dressed and step outside, I watch the city take my trash, my recycling, and my compost—each pickup costs a few dollars. Sometimes, I discover a $25 ticket for parking my car in the wrong spot. Then I swallow my anger and drive down the MoPac highway, where I pay a toll to the Central Texas Regional Mobility Authority. I park in a garage downtown owned by the Austin Transportation Department, pay them a few bucks, and walk to my office. If I need to take a trip out of town, I pay $1.25 for a Capital Metro District bus to the city-owned Austin-Bergstrom International Airport, where, along with the price of my plane ticket, I pay a $5.60 fee for the benefit of being patted down by a TSA agent, a Passenger Facility Charge, and a small share of any rents the city charges restaurants and retailers. Only when I'm in the air does the drain to the government stop.
In one typical morning, I handed over money to several government bodies. But I didn’t pay any taxes—only fees, charges, and fines. These are the future of government in the United States.
The idea that government operates just by taxing and spending money is anachronistic. A growing share of its revenue comes from charges that the government imposes in exchange for its services or as a penalty for breaking its rules. In 1950, about 1 percent of Americans' income went to charges from state and local governments. Today, that number is 4 percent. Include federal fees and charges, themselves the fastest-growing part of federal revenue, and that number rises to over 5.5 percent. Though largely hidden from the public, fees and charges account for most of the growth in government over the past 70 years and have become the top source of revenue for state and local governments.
Two factors drive this new reliance on special charges. First, governments are expanding the “businesses” they run—hospitals, universities, airports—and forcing users to pay more for them. Second, governments are using charges to avoid voter opposition to, and constitutional restrictions on, raising taxes. Those hoping to restrain the size of government need to understand the role of fees and charges. Though governments complain about private citizens evading taxes, the biggest tax evaders are governments themselves: charging citizens while avoiding anything that might be called a tax. In the process, they’re nickel-and-diming Americans from cradle to grave.
Not every government service can be paid for with fees. Governments must provide what economist Paul Samuelson called "public goods"—things that benefit everyone and from whose benefits no one can be excluded. It is both wrong and impossible for the government to impose fees for public goods, the classic examples of which are public safety and clean air. A city can't charge for the use of public safety because it benefits whoever happens to be there. In borderline cases, such as parks and local roads, a government could potentially exclude anyone who couldn't pay; but in practice, it's easier to keep these amenities open to all and fund them through general taxes.
But for many “private goods,” from which people can be excluded, charging individuals directly for services is easier and more justifiable.
Private companies run some roads, parks, and golf courses and impose fees for their use. Governments, too, can provide such private goods and charge for them.
What distinguishes a fee from a tax? The technical definition is that you pay a fee in exchange for some activity that you could have avoided: taking a toll road, parking illegally, or patronizing a government business. Many politicians and policy wonks justify the growth in charges as a move away from taxes to “user fees.” They argue that people who benefit from a government service should be charged for it. On one level, this is true. It’s better to charge people for using a public golf course than to make it free and have the general public pay for it. Charges both prevent the rationing that occurs with any free good and help align the costs and benefits of services.
Yet the larger question is whether the government should be providing these services in the first place. A user-fee system beats a general tax, yes—but it also demonstrates just how many private goods governments now provide. If government can charge for something and prevent those who don’t pay from using it, it’s not a public good. Democratically elected governments regulate or subsidize businesses all the time. But should the government really be appointing the CEO of the golf course and purchasing its landscaping equipment?
City governments are responsible for much of the growth in fees and charges. They have become accustomed to running all sorts of businesses and charging for them, usually in direct competition with private companies. One in every five golf courses in the United States, for instance, is a municipal course. The U.S. is home to municipal universities, hospitals, retirement communities, industrial parks, power plants, tech incubators, parking garages, shopping malls, convention centers, hotels, stadiums, arenas, and theaters. Most large American cities own several municipal housing complexes. All charge their so-called customers to sustain city budgets.
Politicians claim that they run these businesses to reduce charges on citizens. What looks like an increase in government fees, they say, is just a switch away from more expensive private options. Yet governments tend not to run businesses more efficiently and instead use subsidies and regulations to push out private alternatives.
The subsidies to these public businesses often become larger than the charges themselves. In 1991, Memphis built a 20,000-seat, pyramid-shaped arena that would supposedly pay for itself by charging rents to a major-league sports team. As befits a pyramid, however, almost no living soul set foot in the place. The city now collects a modest rent from a Bass Pro Shops megastore located inside while continuing to lose millions. Other local governments have had similar problems renting out their municipal hotels and shopping malls.
The old saying of former Indianapolis mayor Stephen Goldsmith holds true: if you can find three examples of a business in a local phone book (today, we would say online), then the city should not be in that business. But while selling off these assets is a no-brainer, cities keep building more of them, hoping that the charges will ease their budgets. More often, the businesses weigh them down.
Cities also run noncompetitive businesses, or what are known as natural monopolies: electric, water, gas, and sewer lines—and sometimes, highways, ports, and airports. Though these services are not public goods by the usual definition, it is not easy to have two or more competing companies provide them. A stronger argument exists for government involvement in these sorts of enterprises, though governments themselves do not have to run them. (As E. S. Savas shows in Privatization in the City, turning these services over to private companies can cut costs and increase efficiency by 30 percent, on average, and their fees can still be regulated.)
To see how direct government ownership of a so-called monopoly can increase both fees and government spending, consider public transit. In the early post–World War II period, private companies owned and operated most bus and rail systems in the United States. Their fees were regulated and occasionally subsidized by city governments. With some federal assistance, including that provided by the Mass Transportation Act of 1964, many cities began buying these private companies. In 1970, the fees charged by municipal transit systems still covered their cost. But by 2020, public transit users covered only one-fourth of the systems' cost—about $16 billion—while federal, state, and local governments paid $60 billion. The total amount of charges had risen, but the taxes and subsidies to support inefficient operations grew even more.
Charges and fees in cities have gone from 11 percent of city budgets a few decades ago to 40 percent today—more than any other single revenue source. Taxes didn’t fall. Charges simply rose more rapidly.
While city governments run brick-and-mortar businesses, state governments' business portfolio is more expansive. States run tolled highways and large ports or airports, as well as some liquor businesses held over from the post-Prohibition era. But they also administer esoteric financial concerns. For instance, many states along the Gulf Coast manage hurricane-insurance companies for property owners and, of course, charge for the benefit. States also run student-lending corporations, mortgage companies, business-lending companies, farm-lending companies, property-insurance companies, auto-insurance companies, and other enterprises. Some states operate their own bank-deposit-insurance businesses, separate from the better-known Federal Deposit Insurance Corporation program.
We think of some of these businesses as social support, but states run them for profit and sometimes exploit their position to extract more money. Consider workers' compensation programs for on-the-job injuries. While some states allow employers to purchase this insurance from a subsidized public company or from private firms, others, such as Ohio and Washington, allow businesses to purchase it only from the state-owned workers' comp company.
These financial businesses aren't usually good investments for states. Sure, they bring in regular interest and premium payments. But thanks to lax government accounting standards, they don't have to write off bad loans until the loans have already gone bust. Thus, state budgets show large revenues from these programs, even as they falter and fail. Many remember the hundreds of millions that the federal government lost on loans to solar-power company Solyndra, but similar state-level examples abound. Rhode Island, for instance, once guaranteed $75 million in loans to a local video-game company, which quickly failed.

States' annual financial reports helpfully contain a section on their "Business-Type Activities," which demonstrates their scope. State businesses together hold over $600 billion in assets, making them larger than Walmart and Amazon combined. When off-balance-sheet public corporations and authorities are accounted for, state businesses hold over $1.5 trillion in assets, making them larger than the five biggest corporations in America combined. These proliferating businesses explain why state charges, though not as ubiquitous as those in cities, have swollen to more than 25 percent of state revenue. The charges have almost doubled as a proportion of states' revenues in recent decades, and therefore account for most of the growth in state budgets.

Universities and hospitals are the two largest sources of revenue for both state and local governments. The "eds and meds" that power many urban centers constitute about half of all government charges. Public hospitals earn almost $175 billion annually in patient fees, and public universities bring in $125 billion annually in tuition and other fees—all this on top of billions of dollars of public subsidies. Since education and health care are the two fastest-growing parts of the American economy, government's expansion into these fields is unsurprising. But few areas of modern life are more sclerotic than a public hospital or a public university, both of which tend to jack up fees while providing declining service. Governments can improve the quality of education and health care without owning and operating these institutions at an increasing loss to the public.
The most visible government charges are the regulatory fees and fines for drivers’ licenses, construction permitting, illegal parking, and so on. The increase in such fees—and their expansion into every corner of modern life—has been rapid. For instance, governments today find it impossible to resist imposing charges on anything with a motor. Fees have risen on issuing titles, licenses, and registrations for cars, boats, all-terrain vehicles, snowmobiles, motorcycles, and planes. Anyone who actually uses these vehicles will pay other fees. Dozens of cities now charge drivers for the costs of responding to on-the-road incidents. The Cost Recovery Corporation, a company from Dayton, Ohio, has made a business helping cities charge motorists hundreds of dollars after they are in a traffic accident.
If such charges once made sense under a user-fee model for drivers who benefited from public roads, they are less defensible today. State and local governments are diverting tens of billions of such charges, as well as general gas taxes, to mass transit, bicycle paths, and so forth. These charges aren't user fees, and they don't align costs and benefits. They're just another way to extract revenue.
Any living thing today is subject to licenses as well as fees. Many cities require annual dog licenses, with larger fines if you somehow forget to renew such an essential piece of paperwork. The Department of Agriculture has special licenses for those who want to sell or exhibit animals. This means that magicians who pull a rabbit out of a hat must get a federal rabbit license for $40. If you want to keep chickens in Texas, you’ll need at least $35 for a Fowl Registration Application. For flora, you must apply for, and then periodically renew, a plant-nursery license. If you want to sell plants at a temporary location, however, you’ll need a special-event license with another fee. Then there are marriage licenses, which can cost over $100 in some states. (I once witnessed two slightly tipsy individuals try to get married in a Washington, D.C., office. The man was yelling that the fee had gone up since the last time he was there; the woman then started yelling, since he had told her that he’d never been married before.)
Licensing fees have become a revenue source for government. Economists across the political spectrum have been outraged by the increase in occupational licensing, which now forces about 30 percent of all workers to obtain government permits to do their jobs. These licenses restrict competition and raise prices, but economists ignore how they also raise money. The New York State Division of Licensing Services, which unironically places its title next to a “New York, State of Opportunity” logo on its paperwork, extracts fees from every imaginable profession. Nurses must pay up to $228 to apply for their first license and another $103 for periodic renewal. Interior designers pay $377 to apply for their initial license. Cosmetologists pay $15 for taking their written exam, $40 for their initial license, and another $40 for renewing their license every four years. The state licensing division helpfully notes that such application fees are nonrefundable: if you apply but don’t have the interior-designer qualifications, the state keeps your money.
States claim that these fees are necessary just to administer the licensing agencies. But most states have a version of what Arizona calls the 90–10 rule: 90 percent of fees go to the agency, but the general government still keeps 10 percent. In reality, the legislature often sweeps any so-called excess funds from these agencies back into the general budget. Many such fees bear no relation to costs. Courts have upheld fees even if they are three times higher than any plausible costs to provide licenses or services.
Even basic government functions have become riddled with fees. In some cities, "Pay for Spray" programs require people to pay annual firefighter subscription fees and more whenever firefighters respond to an emergency call. In Obion County, Tennessee, firefighters have let houses burn when they realized that the homeowners were in arrears on their subscription. Prisoners have found themselves subject to everything from co-pays for doctors' visits in the infirmary to phone charges of up to $1 per three minutes, rates not seen in the private sector for decades. Many jails also have "pay to stay" charges. In Mahoning County, Ohio, the jail charges a $100 booking fee and $50 for each night spent in lockup.
Also bothersome to citizens are the unexpected fines, from parking tickets to jaywalking charges, that have come to define many local governments. The increase in fines came to public attention in 2014, following riots in Ferguson, Missouri. It turned out that the city's income from fines had doubled in the years after 2010, reaching 23 percent of its total budget. And Ferguson's fines were no exception. A study by Governing found dozens of local governments getting more than 50 percent of their total budgets from fines, with some garnering more than 90 percent. Every year, state and local governments collect about $15 billion in fines, with speeding tickets, of course, being the most important source. But dozens of other minor fines can surprise people. Many cities have laws requiring bicycle registration. Though such laws are haphazardly enforced, failure to get a license can get you a $50 fine in New Jersey. You might also be fined for riding a bike without the required bell.
While state and local fees and fines have been the biggest drivers of government growth, federal fees are mounting, too. They now constitute 10 percent of federal revenue, or more than $330 billion a year. Despite the gradual decline of the largest source of federal fees—namely, the U.S. Postal Service—such fees have grown more rapidly than any other kind of federal revenue over the past four decades.
Propelling the growth in federal fees are premiums for Medicare, as well as crop-, flood-, and bank-insurance programs; fees for the Transportation Security Administration at airports; fees for patents and trademarks; and rents from federal land. Federal agencies love these fees because once they receive them, they can spend them without having to go through congressional appropriations and oversight.
The federal government also leads the way in civil-asset forfeiture—the taking of private property on the suspicion that it was used for a crime. It raises more than $5 billion a year from such forfeitures, much of it shared with state and local governments. Usually, these funds are insulated from legislative or congressional oversight; often, the proceeds directly benefit the police who make the seizures. One study noted that federal civil-asset forfeiture takes more from citizens every year than burglars do.
An important reason for the increase in fees and fines is that they allow government to increase revenue sub rosa, without voter input or judicial control. Since the 1970s, many citizen groups have secured constitutional amendments that let voters weigh in on tax increases at the state or local level. Not surprisingly, voters have been reluctant to increase taxes. Thus, more governments have been boosting their fees, and courts have agreed with them that such fees aren’t subject to the same constitutional restrictions.
Though fees are theoretically distinct from taxes, many are taxes in all but name. For instance, after California's Proposition 13 capped property-tax rates, the state gradually replaced property taxes with hikes in "impact fees" for building new housing. These can cost over $150,000 per unit, enough to buy a home in much of the United States. Most of the increase in license fees and trash-pickup fees has come in the wake of similar state constitutional restrictions.
Some citizens and taxpayer groups have tried to limit charges and fees, with little success. In 1980, Missouri passed the so-called Hancock Amendment, requiring voter approval for any new “tax, license, or fee.” State courts kneecapped the law by interpreting the word “fee” to exclude any charges “for services actually provided,” thus protecting most government fees. In a 2010 referendum, California voters approved Proposition 26—the Stop Hidden Taxes Initiative—to redefine certain regulatory fees as taxes subject to a two-thirds vote of local residents. Almost every local government in the Golden State has since ensured that its fees count as one of the enumerated exceptions.
State laws in Illinois, Texas, and California mandate that governments prove the connection between the charges they impose and the services they provide—but in reality, these laws have spawned an industry that writes elaborate user-fee studies showing, for instance, that waste-pickup charges should account for millions of dollars of city administrative overhead or that a new housing development causes a dozen more visits to a city park, which itself costs hundreds of thousands of dollars. These studies have less to do with actual costs than with letting governments charge whatever the citizen will bear. Since governments don't include the cost of user-fee studies themselves in their estimates, these must be covered by taxpayers. They've become just another subsidy created by the charging of fees.

For many government functions, the move to fees can be an improvement. Fees force those who benefit from a program to bear the burden of sustaining it. But fees should be a substitute, not a supplement, for raising taxes. Instead, as currently structured, charges and fees let government both expand and draw on more subsidies and taxes. They are a way around accountability, not a means toward it.

The fight against the inexorable growth of government must contend with charges and fees. The best way to limit them is to attack spending in general. Many states already do this; today, 15 states have limits on overall spending growth. Most of these, however, allow either the shifting of burdens to local governments (which don't have the same limits) or easy overrides by the legislature. The 1992 Colorado Taxpayers Bill of Rights Act, which limited the increase in total state and local government spending to population growth and inflation, is a better model. Despite obvious skullduggery by politicians and bureaucrats, Colorado and states that have imitated it manage to control taxes and spending more than most. An Arizona state judge recently struck down a 3.5 percent income surtax because the amount of funds raised would have busted through the constitutional spending limit.

Efforts to rein in extraneous fines have borne fruit. Some states cap fines at a certain proportion of a city's budget. After the Ferguson issue came to light, Missouri said that cities could not raise more than 20 percent of their budgets from fines. This figure is not particularly encouraging, but it's a start. More promisingly, the Supreme Court, in the 2019 case Timbs v. Indiana, finally applied the Eighth Amendment clause against excessive fines to state and local governments. In this case, the government took a $42,000 Land Rover that plaintiff Tyson Timbs had bought with the insurance money from his father's death. The government claimed that the car had been used to carry drugs. Some revenue-hungry jurisdictions may now think twice before charging large fines or seizing property.

Any new federal constitutional restrictions on fees are unlikely, but Congress can make these fees more transparent. A bipartisan bill, the Agency Accountability Act, would end bureaucrats' control over their own fees and deposit them back into the general Treasury fund. This would end the practice of keeping such revenue as a bottomless kitty outside of congressional oversight. So far, however, the bill has not advanced in Congress.

Citizens are catching on that taxes and charges are both symptoms of a wider problem: the abstruse, impenetrable nature of proliferating government programs. To limit taxes or fees without limiting the spending that enables them both is to delay the inevitable. Fees are the latest trick that government has devised to expand its scope, but they won't be the last. If we want to stop pouring money down the government drain, we need to curb spending itself.
Wokeness, the Highest Stage of Managerialism
Well-educated progressives wield institutional power to impose a new political and social order.
Malcom Kyeyune
It can be easy to forget how new our political and culture-war conflicts are. Ten years ago, critical race theory was something you'd encounter only online or in academic settings, Democratic politicians were still talking about civil unions for homosexual couples, and the media and federal government were busy pointing out how far America had come in repairing the broken race relations of the past. Today, little remains of that old order. Just how fast has this transformation unfolded? Consider a simple measure of how frequently the word "racism" appears in the nation's four largest newspapers: after staying basically constant from the 1970s to 2010, its usage explodes around 2012, with the Washington Post and the New York Times leading the charge.
Though this "Great Awokening" has scrambled political coalitions and upended widely held truths, wokeness itself remains a muddled concept. The obvious definition—that it is a belief system, what writer Wesley Yang has dubbed "the successor ideology"—has considerable merit. (See "The Identity Cult," Winter 2022.) But as American polarization increases, it becomes clear that wokeness is also a social, economic, legal, and political phenomenon; it cannot simply be reduced to the ideas inside people's heads. (See "The Genealogy of Woke Capital," Autumn 2021.)
If wokeness is an institutional force, a comparative analysis can help describe it. Most Europeans can remember when America was considered stodgy and conservative, compared with progressive Western Europe. And yet, in 2022, the U.S. is experiencing deeper levels of polarization and social strife than other Western countries. Polls suggest a rapid loss of faith in public institutions. Americans identifying with either political party increasingly see the other party as a threat to democracy itself.
Why is it, then, that people in traditionally progressive countries—my native social-democratic Sweden being a prime example—can believe the same things, read the same books, and propound the same ideas as their American counterparts, without their societies experiencing the same sort of catastrophic polarization afflicting the U.S.? Why is it that capital seems to have gone woke in the U.S. more than in the rest of the West, with large companies intervening directly in political battles in a way that would be unthinkable in the Nordic countries? If this behavior were simply a product of neo-Marxist or socialist ideology, one would think that it would be more prevalent in a country like Sweden, where the ruling Social Democratic Party still sings "The Internationale" at its congresses.
The 1941 book The Managerial Revolution, by James Burnham (left, seen here with Arthur Koestler), described a new form of class power—in which managers, not capitalists, would control the real economy.
The core thesis of James Burnham’s 1941 The Managerial Revolution helps explain what is happening in the West today. A former Trotskyite who later became a leading figure in postwar American conservatism, Burnham argued in that book that Western society would not see the collapse of capitalism and its replacement by socialism. Instead, he maintained, America would likely see capitalism replaced by a nonsocialist successor—one dominated not by capitalists in the classical sense but by a class of managers that would come to control the real economy, regardless of formal ownership status.
This distinction—between ownership of, and control over, capital—was a topic of some discussion in the interwar years, with early analyses noting that apparatchiks in the Soviet Union had appropriated control over public resources. In the U.S., Burnham's prophecy of a new managerial order came against the backdrop of the New Deal, which had coincided with a (somewhat understandable) loss of faith in capitalist ideas. The balance of power was shifting from property rights to a steadily increasing category of human rights, and Americans were becoming more accepting of state planning and control over larger parts of society.
Burnham saw America in the early 1940s as being in a somewhat transitory phase. The old, capitalist order was clearly ailing, and managers were steadily growing their power at the owners’ expense. Still, the process of forming a new rulership class was by no means complete. While “control over the instruments of production is everywhere undergoing a shift” toward managers, wrote Burnham, “the big bourgeoisie, the finance-capitalists, are still the ruling class in the United States.” New Dealism was not yet a “developed, systematized managerial ideology” that was capable of fully replacing capitalism.
But if Burnham were alive today, he might see wokeness as exactly that: a systematized, managerial ideology capable of standing on its own as a claim to rulership over society on behalf of the new class of managers. Indeed, many of the dynamics that worried or fascinated thinkers like Burnham during the interwar and New Deal era seem to reappear today in hypertrophied form.
Let us return to the question of ownership versus control. Here, wokeness serves to abrogate property rights, as seen in many controversies taking place in the business world. Consider the fate of the video-game behemoth Activision Blizzard, recently bought by Microsoft. After various ex-employees leveled allegations of workplace mistreatment and a frat-boy culture at its California offices, the company found itself under siege from multiple directions. First, the state of California sued it. Then, the media started covering the story with fervor. Various NGOs and activist organizations jumped into the fray, and the Securities and Exchange Commission launched an investigation. Though the original accusations against the company had to do only with sexual misconduct in the workplace, the list of demands made on Activision Blizzard quickly expanded beyond the original crime. Firing the offending workers or instituting mere workplace reform wasn’t good enough; rather, Activision Blizzard would need to open up its internal hiring and firing decisions to some sort of public review to ensure that it met various “diversity” targets. If one reads between the lines of the controversy, it becomes clear that the owners of a company now must subject their hiring process to review by other managerial institutions.
The main practical demand that wokeness places on society is a massive expansion of managerial intermediation in previously independent social and economic processes. With Activision Blizzard, a controversy regarding the workplace environment quickly metastasized into a struggle to implement new, alternative human-resources structures that corporate leadership would not control, and to which it would have to pay, in effect, a kind of ideological protection money. In real terms, this represents a nontrivial abrogation of property rights: you may still own your company, but don't expect to be free to run it as you see fit without the "help" of outside commissars. Another example of creeping intermediation can be seen in the Hollywood trend to hire so-called racial equity consultants to ensure that characters from various minorities are sufficiently represented in movies and TV. Time was when a screenwriter would conceive of a plot and populate it with characters, drawing upon crude, inequitable instruments such as empathy and imagination; this is less and less permissible. Populating stories with various minority characters is not just encouraged but demanded—and one must do so only after employing intermediary consultants. Writing now requires intercession from a class of moral managers.
Seen in this light, wokeness is not a mere scholastic ideology. Indeed, the woke tend to be uninterested in any form of Socratic dialogue regarding their suppositions. In 2017, the feminist philosophy journal Hypatia descended into massive controversy after a writer, Rebecca Tuvel, published an argument that transracialism ought to enjoy the same sort of philosophical status as transgenderism. Tuvel appeared to make her argument sincerely, in an effort to explore the philosophical implications of people who transcend social categories, but the effort rendered her a pariah.
If woke ideology has little use for academic discussions, it is quite adept at asserting control over institutions. One cannot separate woke controversies from struggles over hiring and firing privileges inside institutions. What appears to be a fight over principles is simultaneously a fight over institutional prerogatives and access to resources.
Like the managerial ideology that Burnham anticipated, wokeness both asserts a wide variety of rights that supersede ownership and insists upon the creation of a permanent caste of managers to monitor the implementation of these rights. This tendency toward intermediation now extends to almost every facet of modern society, including in areas previously seen as foundational to the political system. Democracy, for instance, is now seen as needing various forms of intermediation so as to function properly. Without the input of managers, the thinking goes, the raw expression of the popular will can lead to aberrations, such as the election of Donald Trump or Britain’s decision to leave the European Union. Calls are increasingly being made to impose a layer of experts qualified to judge just what political questions and issues could be safely left to purportedly benighted voters to decide. The instinct to resort to expert guidance and thereby remove contentious issues from the realm of public debate takes many forms. Consider Extinction Rebellion, a radical environmental group of marginal prominence but one that has nevertheless articulated a vision for fixing our supposedly broken political systems along these lines. Extinction Rebellion envisions the introduction of “citizens’ assemblies” consisting of a representative portion of the population that would form a “mini-public.” This mini-public would then receive information selected by a caste of experts and formulate various recommendations based on it. The experts would listen to the mini-public’s (nonbinding) recommendations before making their own decisions about what was best.
But why has America become more woke than its European counterparts? After all, many planks of progressive ideology, such as legal same-sex marriage, were achieved in Europe much earlier than in the United States. The ideas are fairly similar on both sides of the Atlantic. In my view, the material insecurity of the American managerial classes, whose numbers, as Peter Turchin argued, have grown too large to be absorbed by society in ways commensurate with their lofty economic expectations, helps account for this development.

Consider Sweden, which is far less polarized and enjoys a much more sedate cultural environment than the United States. It operates a massive government machine to furnish the scions of the managerial class with all sorts of work. My own municipality, Uppsala, a city two-thirds the size of Reno, Nevada, employs almost 100 people as "communicators." Their official workload mostly consists of managing the municipality's social media accounts and writing policy documents. The communications department is notoriously dysfunctional; the municipality hired an outside consultancy to find out what all these employees do all day. But in at least one sense, it does what it is supposed to do: provide make-work jobs for university graduates who would otherwise risk going unemployed—and become potential social agitators. Sweden is rife with various taxes, carve-outs, fees, and other accommodations that together form a massive patronage machine employing artists, bureaucrats, gender-studies majors, activists, curators, mindfulness consultants, environmental advocates, and much more. The state aggressively pays for art, education, NGOs, and even journalism—most major newspapers in Sweden depend heavily on subsidies to stay in the black. Perhaps the best illustration of the Swedish political economy is that Swedes pay in the neighborhood of $9 per gallon for gas. This massive cost difference owes almost entirely to taxes and fees, which fund social work. At first, the gas tax was intended primarily to pay for the maintenance of roads. Today, people argue for raising gas taxes to fund environmentalist causes. The managers running these causes are trying to fund themselves by imposing regressive taxes on their blue-collar countrymen.
Swedes, it’s worth observing, aren’t knocking down monuments of Carl Linnaeus. Even as the frenzy of iconoclasm and statue-toppling swept America, Swedish activists were content to launch an online poll on the subject of statue removal and give up, once it was clear that they didn’t enjoy majority support. Statue-toppling is less attractive when the municipality that owns the statues is likely to be your employer.
Even if they were not designed with this purpose in mind, the social-democratic welfare states of Europe as a whole have been adapted to provide a new form of welfare for the college-educated, aspirational managerial classes. Aggressive tax policies once enacted to eliminate disparities between workers and owners have now been altered so that, in practice, they hit hardest against rural small-business owners and workers, while funding various subventions and tax breaks for residents in the comfortable urban cores. As environmentalism furnishes these urban-dwellers with a plausible excuse for ever-increasing intermediation in society, it is no accident that the base of green parties throughout Europe is almost uniformly wealthy, urban, and highly credentialed.
In the United States, by contrast, while some public-sector sinecures exist, it is hard to imagine such a pervasive culture of make-work ever taking hold. Deep-seated cultural assumptions weigh against it, as do other practical considerations. The scope of the U.S. welfare state is narrower (though it has always been understated and, indeed, is more redistributive than its European counterparts). Further, the U.S. remains more federalist, meaning that large, state-driven projects shifting resources from one segment of the population to another are more difficult to implement. In Europe, managerial dominance in the economy can be justified as a natural outgrowth of the responsible welfare state. The woke rarely have to lower themselves to highway robbery—they can merely call for additional gas taxes to fund whatever managerial initiatives need funding. In America, woke managerial intermediation resembles a crude shakedown against private corporations and institutions.
What is the future of the managerial society? Will the European response continue unabated? Will the U.S. overcome its unique idiosyncrasies and produce a uniform system in which tax collection—or perhaps tribute extraction—funds the expansion of the managerial state, overcoming the constitutional design?
Probably not. In Europe, managers are now facing backlash as disillusionment with the welfare state grows. Regressive taxes have ignited fuel protests in Sweden, Finland, Ireland, and, most dramatically, in France. There’s no particular reason to expect Europe to be spared from large-scale conflicts between classes and political factions in the years ahead. If anything, the gilets jaunes’ rebellion prefigured a growing dynamic in European countries.
Meantime, scenes from both the United States and Canada suggest that workers operating outside the managerial structures of big unions are starting to resist intermediation by experts. Leftists were always taught that workers, once robbed of leadership and organization from a well-educated vanguard, would devolve into an inert mass of potatoes with no political agency or ability to make their voices heard. This has proved wrong, and the future promises more conflicts between workers and the managers seeking to impose further restrictions on them.
What about the owners of capital themselves? The old assumption on the radical left—that small-business owners are the faithful incubators of reaction and thus will always end up on the opposite side of working people—may be disproved in the years ahead. Insofar as the petty gentry of capitalism is concerned, the managerial regime offers little in the way of carrots, while the ever-growing requirements of the expanding caste of bureaucrats and commissars place an unsustainable financial burden on them. The dismal fate of Activision Blizzard also hints that, even for very large companies, the relationship between capitalists and managers isn't necessarily one of happy symbiosis; it is increasingly becoming one of strife and parasitism.
In short, woke managers want to impose a new political and social order. Managerialism requires intermediation, and intermediation requires a justifying ideology. Wokeness has accomplished what New Dealism never set out to do in the 1940s: it serves as a comprehensive, flexible, and ruthless ideology that can justify almost any act of institutional subversion and overreach. But already, the cracks are starting to show. With gas prices now rising precipitously and inflation running wild, the contradictions inherent in managerialism are likely only to sharpen in the days ahead. If wokeness is indeed the "highest stage"—to borrow from Lenin—of the managerialist society that Burnham saw coming more than 80 years ago, then one needn't be a revolutionary to ask: How long can it really last?
Inflation and the City
New York will have to grapple with exploding prices, both economically and fiscally.
Nicole Gelinas
After a 40-year break from high inflation, Americans are rediscovering what it means when the prices of goods and services rise rapidly. With 7.9 percent inflation in February, people now must spend $107.90 to buy the same food, clothing, day care, electronics, and other day-to-day needs that cost $100 a year ago. Inflation doesn’t just affect household spending, moreover. If it continues, which seems likely, it will have a severe impact on New York City’s economy and budget, as Wall Street profits suffer, public-sector workers demand double-digit raises, and the cost of construction projects soars, among other consequences. City government can’t entirely avoid this national shock. But smart leadership from Gotham’s new mayor, Eric Adams, can blunt it.
Two things cause inflation, and both are with us today, as they were in the 1970s, the last time America saw sustained high inflation. The first is supply shock: that is, something happens abruptly to cut off the supply of a good that people use regularly and that they can't easily substitute with another, cheaper good. In the 1970s, the supply shocks were two oil crises: the Arab oil embargo of 1973 and 1974, related to the Yom Kippur War; and in 1979, the Iranian Revolution and subsequent hostage crisis, which similarly sent oil prices into orbit. Today, the supply shock is largely due to pandemic-related disruptions around the world, which have kept people from manufacturing and shipping goods. Russia's invasion of Ukraine and the curtailment of Ukraine's critical grain exports, as well as sanctions on Russia's oil and gas exports, will intensify the effect.

Costs for construction, a bedrock of New York's blue-collar economy, are soaring—stymieing critical infrastructure work.
The second big factor is the money supply— the amount of ready money in the American economy available to spend or invest but not including sums already invested in assets such as stocks or houses. The United States, like most Western countries, runs on fiat money—that is, money not backed by gold or another store of physical value. Instead, the government, through the Federal Reserve, literally creates money, largely through setting interest rates at which banks can borrow. When rates are low, the money supply expands; when rates are high, the money supply shrinks. The Fed has a natural bias toward low interest rates and increases in the money supply, in order to keep up with the American public’s desire to borrow.
But the Fed generally cuts interest rates more sharply, and thus more dramatically expands the money supply, when it fears recession—as it does when the economy suffers supply shocks. In the 1960s, the money supply doubled. But in the 1970s, as the Fed confronted shock after shock, the money supply tripled, from less than $600 billion to $1.5 trillion. The money supply would not rise as quickly again until after the 2008 financial crisis, when the Fed slashed interest rates to an unprecedented low of zero and kept them there for seven years, until 2015. To revive the economy, the Fed reasoned, Americans had to borrow more—even though record household lending had helped trigger the crisis in the first place. In January 2008, the money supply was $7.5 trillion. By February 2020, the eve of the pandemic, it had doubled to $15.5 trillion, mostly because of this post-financial-crisis money creation.
Though the purpose of growing the money supply is to boost economic activity, and thus employment, the intention can backfire. More money can stimulate more economic growth only if the economy can quickly respond to this demand creation by producing more goods and services.
If this happy result doesn’t occur, inflation results: more money chasing the same amount of goods and services that existed before. In the 1970s, inflation rose from 3.3 percent annually in 1972 to 11.1 percent in 1974. It remained persistently high, often in double digits, until 1983. The 1970s were not a good decade for the U.S. economy. From 1974 to 1975, the country lost 2.9 percent of its jobs, and unemployment, having remained well below 4 percent for most of the 1960s, hit 9 percent in 1975. The Federal Reserve kept slashing interest rates, from 12.9 percent in mid-1974 to 4.6 percent in early 1977. Rather than spurring high growth, though, lower interest rates just introduced Americans to new terms: “stagflation” and “the misery index”—a persistently double-digit rate of unemployment and inflation combined.
In the early 1980s, then–Federal Reserve chairman Paul Volcker determined to break inflation’s back. To do so, he had to raise interest rates sharply, thus tightening the money supply and making it tougher for businesses and people to borrow. Rates, tremendously volatile already, blasted from 9.5 percent in mid-1980 to 19 percent by early 1981, where they stayed for six months. This sharp rise in the cost of money brought a short, but deep, recession. Even as the nation’s homebuilders and automakers protested, as potential customers couldn’t borrow, Volcker held fast. He had to prove not only that the Fed would cut inflation, whatever the public uproar, but that it would never again countenance double-digit inflation. It eventually worked: by the mid-1980s, inflation had fallen below 5 percent; by the 1990s, it was hovering close to the Fed’s annual 2 percent target. For nearly 40 years thereafter, people and firms could invest and lend with confidence, knowing that the base that those decisions stood upon—the dollar—would reasonably maintain its value.
Why didn’t inflation spike after 2008, when the Fed slashed interest rates to zero? Partly because the economy suffered no supply shocks. In fact, the U.S. economy was awash with a surplus of goods made by cheap labor abroad. Abundant cheap labor at home, via immigrants often working below the legal minimum wage, also restrained prices. The economy did experience a different type of inflation, however: asset inflation. The price of stocks, houses, and other investments reached record highs simply because people had no other place to put their cheap money. In the aftermath of the pandemic, the Fed reduced rates, once again, to zero.
Now, though, the more traditional cycle, of increases in the costs of everyday goods and services, may be starting anew. After the pandemic began in March 2020, the Fed pumped the money supply at a rate never previously seen. As of this writing, the money supply stood at $21.8 trillion, thrice as high as it was barely a decade and a half ago. The economy has never had so much money looking for a place to go.
New York City fared worse than the rest of the country during the 1970s stagflation, losing 16.1 percent of its jobs between 1969, a postwar high, and 1977. Though the nation at least grew in fits and starts between recessions during the 1970s, leaving the U.S. with more jobs at the end of the decade than at the beginning, New York would not recover its lost 1970s jobs total until 2008. Inflation wasn’t the only culprit in New York’s decline, of course: the city suffered the destabilizing results of middle-class flight to the suburbs, skyrocketing crime, and a failing transit system.
But inflation definitely made things worse. New York lost 11 percent of its financial-services jobs in the 1970s, for example. Inflation harmed banks' profits; the business of financial services is investing and lending, after all, but both were moribund. The stock market ended the 1970s at a lower level than at the start of that lost decade; in inflation-adjusted terms, stocks plummeted. As for lending: if inflation erodes the value of the dollar as a borrower repays a loan, the lender can end up losing money.

The low-inflation era that began in the 1980s, by contrast, jump-started New York's economy. Wall Street took off, as both stock and bond (lending) markets boomed. The city's population rose again, as white-collar "yuppies" flocked to the city to work not just in banking but in related professional industries, such as lawyering and advertising, and as immigrants and other working-class newcomers came in search of job opportunities created by this newfound wealth. The city rebuilt its tax base, giving it the resources to reinvest in its neglected physical infrastructure and, a decade later, to help confront its violent-crime crisis.
What now for New York? Though history never quite repeats itself, the parallels between the early 1970s and today are eerie. Just as in the 1970s, New York City doesn't seem like a great place to live right now, if you have other opportunities. The city's unemployment rate is 8.8 percent, more than double the national rate. As of December, the city's private sector was missing 9.8 percent of its jobs, relative to December 2019; the only reason that the unemployment rate is lower than 9.8 percent is that some people who lost their jobs left the city, searching for better chances elsewhere, and some people have given up entirely, leaving the workforce. Crime is skyrocketing, with the murder rate up more than 50 percent in two years. New York's affluent population is falling: the number of white kindergarteners, a good, if rough, proxy for affluence, fell by 16 percent post-pandemic, a far steeper decline than among any other racial group. The city's economy doesn't start from a great position to withstand high inflation, to say the least.
And high inflation may worsen these trends. Finance still powers New York’s economy, and one reason the city budget has withstood the last two years of turmoil, in addition to tens of billions of dollars in federal aid, is that Wall Street has earned record profits. Yet high inflation remains bad for Wall Street, just as in the 1970s. On the stock-market side: inflation erodes consumer confidence, thus harming corporate profits. Inflation also pushes up the price of the materials and labor that companies need to produce their goods and services, further slicing profits. On the lending side: financial firms, again, will be reluctant to make loans if inflation will erode repayment dollars.
A struggling Wall Street would have an outsize effect on the city. As state comptroller Thomas DiNapoli reported last year, Wall Street jobs constitute just 5.2 percent of the city's private-sector total—but those jobs pay nearly $407,000 on average annually, more than four times the average $92,000 wage in the rest of the city's private sector. Wall Street jobs thus make up one-fifth of all private-sector wages, and 14 percent of all economic activity, "more than any other industry." A steep drop in Wall Street profits and bonuses could shave $3 billion from New York's personal-income-tax revenues alone, taking out half the city's $6 billion in budget reserves against its $101 billion budget.
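(Those two figures are consistent. As a rough, illustrative back-of-envelope check, not the comptroller's own calculation: with 5.2 percent of private-sector jobs paying about $407,000 and the other 94.8 percent paying about $92,000, Wall Street's share of total private wages works out to

\[
\frac{0.052 \times 407{,}000}{0.052 \times 407{,}000 + 0.948 \times 92{,}000} \approx \frac{21{,}164}{108{,}380} \approx 19.5\ \text{percent},
\]

or roughly one-fifth.)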
This anticipated fall does not account for how, in a recession, Wall Street firms would look to cut costs, including real-estate costs—something made easier by how, during the pandemic, workers proved that they can do their jobs outside of expensive midtown Manhattan. Even with record profits, as the comptroller notes, Wall Street has been shrinking, down nearly 2 percent in 2020, to 179,900 jobs. “The loss of industry jobs in the city, at a time when profits are soaring, may be attributed to a combination of advances in technology and the relocation of jobs,” writes DiNapoli. “In 2021, job losses appear to have accelerated, with the industry on pace to lose 4,900 jobs.”
These New York losses come at a time when “Wall Street” jobs, in the rest of the nation, are growing, observes the comptroller; portions of the industry have moved to Florida and more hospitable climes, from a tax and cost perspective (and for other reasons, including, during 2021, relative freedom from Covid restrictions). That New York depends on Wall Street is nothing new, but a downturn would be particularly acute now, with the rest of the city’s economy so fragile. Every job loss now amounts to a greater share of the city’s shrunken economy.
Inflation could also have a direct impact on the city budget. Theoretically, inflation should not harm it directly: as private-sector wages rise, the tax revenues that the city collects would also rise, and more residents would find themselves pushed into higher tax brackets. Yet this assumes both that private-sector wages will rise in tandem with inflation and that people won’t balk at paying more of that money to the government and relocate to lower-tax states and cities.
Inflation would increase the cost that taxpayers shoulder to employ city workers. Despite modest reductions in the number of city employees that Mayor Adams suggested in his first budget presentation, in February 2022, the city still employs more than 334,000 people as of this spring—at an annual cost, in wages and salaries, of $30.6 billion. As it happens, the city’s major labor agreements are expiring or have already expired. The labor contract with District Council 37, the biggest civilian union, with 59,000 workers, expired in May 2021, and the agreement with the United Federation of Teachers, with 112,900 workers, ends this September.
As inflation spikes, unionized city workers will demand raises to keep up. The state comptroller estimates that 1 percent raises would cost about $500 million. (Raises have knock-on effects for city-paid pension contributions, as pensions are paid as a percentage of salary.) Yet city workers won’t be asking for a 1 percent raise. They’ll probably demand annual 5 percent raises and above to keep pace with rising prices—at a cost of $2.5 billion in the first year, $5 billion in the second year, and so on. Even if Wall Street profits hold up, the city would swiftly find itself drained of its $6 billion in reserves at this rate, and in deficit.
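To see how quickly such raises compound against the budget, here is a minimal sketch using the comptroller’s rule of thumb of roughly $500 million per percentage point of raises. (Note that 1 percent of the $30.6 billion wage bill alone is only about $306 million, so the comptroller’s figure evidently folds in pension and fringe costs.) The year-over-year math is kept linear, mirroring the text’s “and so on.”

```python
# Rough cost of annual raises, using the state comptroller's rule of
# thumb of about $500 million per 1 percent raise. (1% of the $30.6B
# wage bill alone is only ~$306M, so the $500M evidently includes
# pension and fringe knock-on costs.) Linear math, ignoring compounding.

COST_PER_POINT = 0.5e9   # dollars per percentage point of raises
RAISE = 5                # hypothetical 5 percent annual raise

for year in (1, 2, 3):
    added_cost = COST_PER_POINT * RAISE * year
    print(f"Year {year}: ~${added_cost / 1e9:.1f} billion above today's payroll")
# Year 1: ~$2.5B; Year 2: ~$5.0B -- matching the text, and on pace to
# exhaust the city's $6 billion in reserves.
```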
In the late 1970s and early 1980s, New York actually used inflation to its advantage in paring labor costs. Mayor Edward Koch’s administration held fast against granting inflation-adjusted raises, partly because state and federal oversight forced it to, after the city nearly went bankrupt in 1975. In 1980, city workers sought 14.5 percent raises yearly. Yet the mayor inked agreements with labor unions representing 200,000 workers for 8 percent annual raises over two years. At a time when inflation was running at 10 percent, these agreements saved the city money in inflation-adjusted terms. Importantly, the city never put clauses into contracts allowing wages to increase automatically with inflation, though the unions asked for them. “Inflation helped us,” says one former Koch budget official.
Yet this is a dangerous game. Adams can hardly award 7.5 percent raises to match inflation—nor can he award, say, 5 percent raises, hoping that inflation remains high over the term of a contract; if inflation falls, the city would wind up paying more in real terms. “Rising expenditures, including wages newly negotiated, in times when inflation is well above average could be a threat to ongoing fiscal balance if not supported on the revenue side,” says Tom Kozlik, head of municipal research and analytics at HilltopSecurities, an investment firm.
“In terms of approaching the unions,” suggests Peter Goldmark, budget director under Governor Hugh Carey in the late 1970s, when the state oversaw Gotham’s emergence from the fiscal crisis, “both the state and the city should indicate firmly and early to the unions that they are taking a broad, cooperative, and progressive approach to inflation. A central axiom of that approach should be that the more unions cooperate in limiting raises during this difficult period, the greater will be the city’s capacity to avoid layoffs.” Goldmark adds: “Despite all the references to miraculous pots of money and reserves of federal funds, the city operating budget over the next couple of years is going to be severely pressed, and if raises are not kept to reasonable levels, then there may have to be a serious number of layoffs.” If government workers hold out for inflation-linked raises, the city should simply allow agreements to remain expired for a few years. It’s easier to assess inflation costs in the past than in the future.

[Chart: Inflation, Consumer Prices for the United States, 1970–2022, percent, not seasonally adjusted; February 2022: 7.9 percent. Source: World Bank, via St. Louis Fed (https://fred.stlouisfed.org/series/FPCPITOTLZGUSA)]

Next, consider the cost of the city’s debt. The good news: New York City is far better off here than in the 1970s, when it didn’t balance its budget and borrowed billions yearly just to make ends meet. The money had to be refinanced every year, at the new, prevailing interest rate. The rate went up as inflation rose, as lenders wanted to ensure that they were compensated for the eroded value of the dollar and because the city’s credit was getting riskier. In 1973, New York paid a 5 percent interest rate on its bonds; by 1975, just before the city’s default, the rate had nearly doubled.
Today, the city owes $94.2 billion. In contrast with five decades ago, this debt wasn’t incurred to meet day-to-day expenses but to improve New York’s physical infrastructure, such as roads, bridges, and school buildings. Further, most of this debt, $86 billion, like a long-term home mortgage, has a fixed rate of interest. Even if inflation rises, and lenders want a higher interest rate to compensate, they’re out of luck. Perversely, if inflation mounts, the city can repay this debt with ever-cheaper dollars. “Sustained inflation theoretically would help existing debt become relatively more affordable,” says Kozlik.
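A simple illustration, with hypothetical rates rather than the city’s actual debt schedule, shows why holders of fixed-rate bonds are “out of luck” when inflation rises: each fixed dollar of repayment is worth less in today’s terms.

```python
# Why inflation favors a fixed-rate borrower: a fixed $1 payment due
# ten years out costs less, in today's dollars, the higher inflation
# runs. Hypothetical rates, purely illustrative.

payment = 1.0  # a fixed dollar of debt service due in year 10

for inflation in (0.02, 0.05, 0.08):
    real_cost = payment / (1 + inflation) ** 10
    print(f"Inflation {inflation:.0%}: year-10 payment worth "
          f"${real_cost:.2f} in today's dollars")
# At 8% inflation, the year-10 dollar is worth only about 46 cents today.
```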
The flip side of this supposed advantage, though, is that New York needs to borrow more money. Over the next five years, the Adams administration wants to invest $100 billion in capital, including perhaps more than $9 billion to build jails in four boroughs in order to close Rikers Island. Most of this money, $95.4 billion, would be “city-funded”—that is, borrowed. Even with record-low interest rates, the city expects its annual debt-service costs—payments on principal and interest—to go from $6.8 billion to $9.6 billion over the next four years. Abruptly higher interest rates levied by lenders to compensate for runaway inflation would make such new borrowing prohibitive, undoing some of the benefit of repaying older debt with cheaper dollars.
Then, too, there’s the infrastructure that the city proposes to build with all that borrowed money. Though national inflation stands at 7.9 percent, inflation in the construction industry is running nearly twice as high, at 13 percent. New York cannot build its new jails anywhere close to budget with this double-digit inflation, nor can it even ensure that bridges remain in good condition. The high inflation of the 1970s produced such an infrastructure backlog that, a decade later, the Williamsburg Bridge almost collapsed, causing Koch to close it in the late 1980s for emergency refurbishment. The city’s public-housing stock already faces a $40 billion repair backlog—which, every year that inflation remains at 13 percent, grows by close to $5 billion.
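The “close to $5 billion” figure is straightforward arithmetic, 13 percent of a $40 billion backlog, and compounding makes the hole deeper each year. A rough, purely illustrative sketch:

```python
# How a repair backlog balloons under construction-cost inflation.
# Inputs are the rounded figures from the text; this is an
# illustration, not a forecast.

backlog = 40e9            # NYCHA repair backlog, dollars
construction_cpi = 0.13   # construction-industry inflation rate

for year in (1, 2, 3):
    growth = backlog * construction_cpi
    backlog += growth
    print(f"Year {year}: backlog grows ~${growth / 1e9:.1f}B "
          f"to ~${backlog / 1e9:.1f}B")
# Year 1 growth is ~$5.2B, the "close to $5 billion" in the text;
# compounding makes each later year worse.
```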
Though the city doesn’t control the Metropolitan Transportation Authority, which runs subways and buses, its residents and commuters suffer from inflation pressure there, too. The contract for the Transport Workers Union, which staffs subways and buses, expires in 2023. The raises that the city awards to its own workers will help set the pattern for TWU workers. The MTA can’t award double-digit raises to its employees without hiking fares by an equivalent amount. But can subway and bus riders withstand, say, a 20 percent fare hike? Ridership remains at barely half of what it was pre-pandemic, with current riders mostly lower-wage retail, health-care, and other service-sector workers, already strapped by higher costs of food, child care, and other essentials.
The city’s regulated housing sector also stands to suffer. Of the more than 2.1 million rental-housing units in the city (two-thirds of New York’s housing stock), roughly half, or 966,000, are rent-regulated—that is, a city board, under state law, sets the maximum rent increase each year. Of course, the board can’t control how much property owners’ costs go up. Last year, the city’s rent guidelines board approved a rent bump of just 1.5 percent for people signing one-year leases and allowed that increase to take effect only during the final six months of the lease. This hike falls far behind inflation and comes as landlords struggle to collect back rent from hundreds of thousands of tenants who stopped paying rent due to the public-health emergency.
If future rent increases fail to keep up with cost inflation, more of the city’s rent-regulated housing stock will fall into distress or abandonment, as landlords fail to maintain properties that don’t pay for themselves. In 1990, after two decades of abandonment, 13.9 percent of the city’s rent-regulated housing units were distressed. The figure fell to a low of 4.9 percent in 2016, but it has inched up since; in 2019, it was 5.5 percent.

New York City mayor Eric Adams should impress on Democrats in Washington the importance of controlling inflation.
New York City is not an island—and it can’t entirely insulate itself from the damaging effects of rampant inflation. For now, Adams should impress upon fellow Democrats in Washington, including in the White House, that the government must get inflation down before it reaches double digits.
Beyond this prevention, which is the best cure, the city must ready itself to hold fast against union demands for raises that keep up with uncertain inflation. Adams also must ensure that New York spends every scarce dollar for infrastructure wisely; the specter of inflation eating away at New York’s finite dollars for infrastructure is yet another opportunity to rethink the costly four-borough jails project, for instance.
“Inflation is as violent as a mugger, as frightening as an armed robber, and as deadly as a hit man,” said Ronald Reagan in 1978. Today’s New York already has too many muggers and robbers; now it must contend with the violence of inflation, too.
Mass Incarceration Hysteria
Those alleging that the United States imprisons too many people rely on faulty history and bad facts.
Matt DeLisi and John Paul Wright
Since President Donald Trump signed the First Step Act in December 2018 to relax federal prison and sentencing laws, the federal Bureau of Prisons (BOP) population has plummeted, reaching its lowest total since 2000. The federal government is not alone in trying to reduce confinement: several states have enacted reforms with this goal. According to Bureau of Justice Statistics (BJS) data, the American jail population declined 25 percent between 2019 and 2020; the jail incarceration rate is currently at its lowest point since 1990. During the same period, the state prisoner population is down 15 percent.
Animating these reforms is a belief that the United States incarcerates too many people and that prison is ineffective or unjust. Many academics, journalists, and activists criticize incarceration as unduly harsh on lawbreakers and corrosive to inner-city communities. Such organizations as the National Research Council of the National Academies, American Society of Criminology, and Academy of Criminal Justice Sciences have assailed the “systematic” and “entrenched” racism that allegedly characterizes our criminal-justice system. For many on the left, incarceration is simply a social evil. Some on the right also back such efforts, convinced that reducing prison populations will save tax dollars or that embracing reform will yield political benefits.

America’s incarceration rate is currently at its lowest since 1990.
But mass incarceration is an ill-defined notion that ignores what we have learned about the essential role of government in controlling crime. Incarceration, appropriately applied, represents effective public policy, worthy of investment. Compelling data show that prison incarceration in the United States is lower than should be expected. While some states and public officials tout a hard line against crime, the reality is that many serious, recidivistic criminal offenders rarely see the inside of a prison cell. When they do, most get released after serving time well short of their actual sentence. Incarceration is the proverbial revolving door. Nevertheless, the mass incarceration narrative remains potent and retains bipartisan support—but its historical and empirical foundation is weak.
Contemporary reformists argue that rising incarceration rates in the late twentieth century represented a sinister attempt to reimpose a racial caste system after the end of Jim Crow. For much of the twentieth century, they say, despite increases in crime, new criminal statutes, and longer criminal sentences, prison populations and the amount of time served remained relatively unchanged—until the early 1960s, when politicians began to adopt tough-on-crime measures. Indeed, incarceration rates were low for much of the twentieth century, often not exceeding 150 per 100,000, at least using traditional methodology. Between 1880 and 1950, while the number of state prisons rose from 61 to 106, the U.S. population more than tripled, from about 50 million to about 152 million.
But incarceration was more common than this history lets on—and the eventual crackdown on crime, when it came, was long overdue. While incarceration rates seemed low in the early twentieth century, Progressive reformers also built a variety of institutions to detain the poor and infirm, the delinquent, the criminal, and the mentally ill. By 1940, almost 218,000 citizens 18 and older were in prisons, with another 99,000 in local jails and workhouses. And these numbers were far exceeded by the population housed in state mental institutions: more than 591,000, many of whom had committed criminal offenses. By 1950, that number rose to 613,000, with another 178,000 in state prisons, 86,000 in jails and workhouses, and 140,000 in youth facilities. Excluding the mentally handicapped, the total incarcerated population exceeded 1 million, for an incarceration rate of 670 per 100,000—far higher than the typically cited rate of that era of about 100 per 100,000.
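The 670-per-100,000 figure follows directly from the component counts the authors list, assuming the 1950 U.S. population of about 152 million cited earlier. A quick check:

```python
# Reconstructing the authors' circa-1950 incarceration rate from the
# component counts in the text. The population figure is the
# approximate 1950 U.S. population cited earlier in the article.

confined_1950 = {
    "state mental institutions": 613_000,
    "state prisons": 178_000,
    "jails and workhouses": 86_000,
    "youth facilities": 140_000,
}
us_population_1950 = 152_000_000

total_confined = sum(confined_1950.values())  # just over 1 million
rate_per_100k = total_confined / us_population_1950 * 100_000
print(f"Total confined: {total_confined:,}")
print(f"Rate: ~{rate_per_100k:.0f} per 100,000")
# Prints ~669, consistent with the text's figure of 670 per 100,000.
```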
As the deinstitutionalization movement targeted psychiatric institutions, individuals who had received treatment and other services in those facilities were set free. What deinstitutionalization advocates could not have forecast was the wave of social unrest and violent crime that emerged in the late 1960s and continued unabated for the next three decades. Yet state prison populations actually declined in the 1960s and early 1970s. There were 189,735 inmates in 1960 but only 176,403 in 1970. “Proof is impossible,” noted William Stuntz in The Collapse of American Criminal Justice, “but the low and falling prison populations of the 1960s and early 1970s probably contributed to rising levels of serious crime during those years.”
As crime rose, the criminal-justice system was revealed to be pathetically underprepared. Limited training, inadequate equipment, and overt corruption plagued many police departments. In the correctional sphere, the number of jail cells and prison beds had not kept pace with population growth, much less with unparalleled increases in crime. Given the scarcity of correctional space, those arrested found themselves returned almost immediately to the streets, where most continued to re-offend.
The issue was not just one of resources. Sentences for serious crimes were brief. In 1933, a person convicted of murder or manslaughter would serve a median 44 months before release; by 1980, the figure had risen somewhat, to 60 months. The median time served for a rape conviction held steady at 36 months. And between 1923 and 1981, the median time served for all offenses fell from 18 months to 16 months.
To manage the growing numbers of people arrested for serious crimes, states began holding people bound for state prisons in local jails. In short order, local jails were overflowing with inmates, and many suits were brought for Eighth Amendment violations relating to overcrowding. States and local jurisdictions also started to manipulate the levers of confinement. Probation caseloads soared as courts diverted many offenders from prison while states simultaneously liberalized conditions for parole and good behavior. Revolving-door justice was born, not because legislators held public safety in contempt but because states had historically underinvested in their systems of justice—in the resources to arrest, prosecute, and confine offenders—and thus were unprepared to handle a three-decade increase in serious crime.
Increasing crime rates eventually became a federal issue. In 1968, Congress passed the Safe Streets Act, which created the Law Enforcement Assistance Administration, providing block grants for states and cities to improve police education and training, develop and increase funding of institutes for research and training, and expand jail and prison capacity. The new law emerged out of the recognition that investment in the justice system had not kept pace with population growth or crime. Before the Carter administration recommended the elimination of the block-grant program (and Congress agreed, in 1983), it would shift about $55 billion in inflation-adjusted dollars to the states to combat crime.
Federal funds encouraged states to modernize their justice systems, but jail and prison capacity still lagged year-over-year increases in crime until the early 1980s, when many states started funding new prison construction. In 1980, for example, the prison population rose by only 10,613. As more prison space was created through the 1980s, yearly admissions increased, reaching 75,521 in 1990 and approaching 80,000 in 1995. The size of the inmate population likewise grew, from more than 288,000 in 1980 to more than 1 million in 1990, and then to its 2009 apex of 1.4 million state prisoners.

If state spending on criminal justice increased dramatically after 1980, it was long overdue. As criminologist William Spelman notes, state spending also grew precipitously on education, parks and leisure, and health care as most state populations expanded. Hence, states were spending proportionately more on criminal justice, including on prisons, partly because they had spent so little on their justice systems for so long and because states now had more revenue to spend. Contrary to claims that incarceration is too expensive, the cost of housing state inmates has actually declined as a share of overall state revenues: in 1969, the cost was about 8 percent of all state revenues; today, that figure is just 2.9 percent.

This brief summary vitiates the historical arguments of today’s reformists. If mass incarceration ever existed, it was between 1930 and 1960. As crime worsened, governments were slow to react, getting a handle on the problem only well after the crime wave had begun. By then, however, the reformers who gave us institutions of confinement had turned against them, leaving millions of seriously mentally ill people to their own devices and then waging campaigns to reduce incarceration.
Reformists decrying mass incarceration don’t just advance a tendentious history. They also overlook four key facts about the contemporary state of the U.S. criminal-justice system. Both the prevalence of behavioral disorders linked with crime and the annual offending numbers conveyed by victim reports exceed incarceration rates, suggesting that the system has failed to capture a substantial portion of criminal offenders. The system also tends to be far more lenient than activists let on. Finally, the incapacitation of criminals itself offers public-safety benefits.

[Chart: A Smaller Federal Prison Population—federal prison population, 2001–2022. Source: Federal Bureau of Prisons]
First, the prevalence problem. Behavioral disorders and pathological criminal prototypes linked with serious offenders are far more common than is incarceration. Over the past 40 years, the prevalence of prison confinement in the United States has ranged from 0.1 percent to 0.9 percent of the population. This pales next to the estimated prevalence of clinical psychopathy (1 percent), clinical antisocial personality disorder (3–4 percent), career criminality (5–6 percent), and life-course-persistent offending (10 percent). Extrapolating from these estimates, one suspects that many criminal psychopaths, career criminals, and offenders with lifelong conduct problems remain free to continue their offending careers. And data from nations spanning North America, South America, Europe, Asia, Africa, and Australia consistently validate the career-criminal prototype, the small group that accounts for more than half of the incidence of crime in a population.

Second, the annual offending problem. Each year in the United States since 1980, the FBI Uniform Crime Reports have counted 10–15 million arrests. The U.S. Department of Justice’s National Crime Victimization Survey has counted 20–42 million victimizations. Yet the total correctional population currently sits at 6.4 million, and the current prison confinement population at 1.47 million, for all offenders cumulatively sentenced over the past decades. Only a small fraction of the victimizations and arrests that occur annually culminate in a prison sentence—partly because of the incapacity of criminal-justice systems to process the magnitude of offending that occurs.

Reformists commonly lament the confinement of older prisoners, but the annual offending problem and the cumulative nature of the prisoner population counter such sentiment. A 2016 BJS report on the aging of the state prisoner population found that nearly one in three prisoners aged 65 and older was serving either a death or life sentence, and would never be released, barring exceptional circumstances. These would include multiple-homicide offenders; those who murdered while committing other grievous crimes such as kidnapping, rape, or armed robbery; and those who murdered police officers. Moreover, an aging prison population reflects not only the advanced age of the most recalcitrant offenders but also their ongoing annual offending. Between 1993 and 2013, the number of prisoners aged 55 and older admitted to state prison rose 400 percent.
Third, the leniency problem. Activist, media, and academic narratives suggest that scores of people are in prison for possessing small amounts of drugs for personal use, being unable to pay their fines and restitution, or committing trivial technical violations of their felony probation or parole. Reformists assert a discriminatory process in which minority offenders face excessive confinement.
But the truth is that the justice system tends toward leniency. Offenders get plenty of opportunities for community-based supervision, have many of their charges dismissed or reduced, and recidivate frequently before actually being sentenced to prison. Nationally representative BJS data indicate that about one in four criminal cases—including nearly one in three violent criminal cases—is dismissed, and just 54 percent of violent offenses result in prison confinement. Most traffic offenses, misdemeanors, and nonviolent felonies that end in conviction bring their perpetrators fines, restitution, deferred sentences, day reporting, home confinement, electronic monitoring, community service, or the ubiquitous probation. Less serious offenders comply well with these sanctions, complete their sentences, and exit the justice system, but even serious offenders receive these opportunities if their underlying charges are not grave. One example is Tom Latanowich, a 29-year-old probationer in Massachusetts who, in 2018, killed a police officer who was attempting to execute a warrant for his arrest. Latanowich had 111 prior offenses on his arrest record but received probation. His murder trial is ongoing.
Many arrest charges are dismissed, rejected in the interests of justice—meaning that the charges are considered too trivial to expend judicial resources—or are simply not prosecuted.
[Chart: Declining State Prisoner Counts—state prisoners, in millions, 2001–2020. Source: Department of Justice, Bureau of Justice Statistics]
In a population of federal correctional clients on supervised release after a term in federal prison, 36 percent of arrest charges in their criminal careers were dismissed. Are dismissed charges ipso facto evidence that the accused was wrongfully charged? In reality, most criminal charges are dismissed for triviality, because the victim or witnesses in the case refuse to cooperate, or because the dropped charges were part of a plea agreement. A 2017 University of Chicago Crime Lab report found that in Chicago in 2016, only 26 percent of murders and 5 percent of shootings were cleared by arrest, and a troubling proportion of those did not result in conviction. The main reason for the low clearance and conviction rates was the unwillingness of victims and witnesses to cooperate with law enforcement and prosecutors.
The correctional domain is especially lenient. With important exceptions, most states and the federal system drastically reduce prison sentences via “good-time” credits (a provision included in the First Step Act), which theoretically encourage rehabilitation; they also let inmates serve sentences concurrently, as opposed to consecutively, and grant postconviction relief when legislatures change the law. In practice, serious offenders often serve unserious sentences—undermining truth-in-sentencing and leading to miscarriages of justice. A 2021 BJS report found that the median time served in prison is 1.3 years. For violent crimes, the median is 2.4 years. For murder, the median is 17.5 years, with a mean of 17.8 years. Seventy percent of convicted homicide offenders serve less than 20 years before their initial release from state prison. And prisoners convicted of drug possession—the symbolic victims in the mass-incarceration myth—serve a median nine months in confinement before release. Serious offenders must busily engage in crime to climb the punishment ladder and receive a prison sentence, but once confined, irrespective of their conviction offense, they are soon released.
Finally, the incapacitation problem. Custody provides substantial public-safety benefits simply by incapacitating offenders. Increased incarceration through the 1980s and 1990s helped turn around the mounting levels of violence, though it was not the only factor involved. Studies find large reductions in crime attributed to earlier efforts to incapacitate criminal offenders. Consider, too, a recent BJS report on more than 404,000 state prisoners released from custody in 30 states. BJS found that 68 percent recidivate within three years and 79 percent within six years. For prisoners with the longest criminal histories, recidivism estimates exceed 80 percent. While a stint in prison may not deter future offending, the fact remains that incarcerated offenders can’t harm citizens.
Data also challenge the wisdom of conservative cases for decarceration. In November 2019, for instance, Oklahoma enacted the largest single-day prison commutation in U.S. history. Governor Kevin Stitt described the event as “another mark on our historic timeline as we move the needle in criminal justice reform, and my administration remains committed to working with Oklahomans to pursue bold change that will offer our fellow citizens a second chance while also keeping our communities and streets safe.” But while the Oklahoma Pardon and Parole Board recommended 527 inmates for commutation, 65 of those were wanted by another jurisdiction for criminal activity and could not be released. More than one in ten of these “non-violent, low-level offenders,” then, were ineligible for commutation because of the extensiveness of their criminal history. Among the 462 commuted prisoners released, 30 were back in custody by February 2020. That equates to an arrest rate of 6,494 per 100,000—more than double the national arrest rate of 3,011 per 100,000 in the United States.
The mass commutation saved $11.9 million from reduced confinement costs, equivalent to $25,758 per offender. How would those reduced prison costs compare with the per-offender costs of housing assistance, food vouchers, medical care, insurance, unemployment benefits, and substance-use treatment that are standard among inmates reentering society? How to value the public-safety threats arising from recidivism, which, BJS data make clear, is a near-certainty for most felony offenders?
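Both Oklahoma figures can be reproduced from the counts given above. A quick sketch, using the article’s own rounded numbers:

```python
# Reproducing the Oklahoma commutation arithmetic from the counts above.

released = 462          # commuted prisoners actually released
rearrested = 30         # back in custody by February 2020
savings = 11_900_000    # reported confinement-cost savings, dollars

arrest_rate = rearrested / released * 100_000
per_offender_savings = savings / released

print(f"Arrest rate: ~{arrest_rate:,.0f} per 100,000")        # ~6,494
print(f"Savings per offender: ~${per_offender_savings:,.0f}") # ~$25,758
# More than double the national arrest rate of 3,011 per 100,000
# cited in the text.
```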
In the final analysis, the U.S. incarceration system remains judicious. The narrative of mass incarceration fails on inspection. This is not to say that we should not question our use of prisons and jails—but a balanced and empirically informed approach would also question the likely consequences of incarcerating too few. The progressive Left remains unwilling to define the boundaries whereby crime reductions are maximized through confinement, or where too little incarceration increases crime and violence—as it has in recent American history, and as it does now.

The vast majority of violent crime—which exploded starting in the 1960s—is still committed by a small group of repeat offenders, often while out on bail.
The prison boom of previous years was the result of population growth, an explosion in crime and violence, and decades of neglecting our system of justice. When the volume of crime finally exceeded the system’s capacity to manage it, policymakers on the left and on the right responded to their constituents’ demands for public safety by investing broadly in our justice system: better-trained police, expanded probation departments, new intermediate sanctions—and newer, more efficient, and safer prisons. Through the combined efforts of police, prosecutors, judges, correctional officials, and legislatures, the crime surge was eventually blunted, and crime fell to unprecedented lows.
Under normal circumstances, these changes would be understood as the by-product of effective public policy. The wholesale rejection of incarceration as an effective policy tool risks too much. If past is prologue, then we can learn much from the consequences of deinstitutionalization and the failure to invest in all components of our justice system. Both led to more human suffering and to lives lost, as would current efforts to defund the police. We should avoid such mistakes and be reluctant to accept the kinds of promises that jeopardized public safety in an earlier era. “Hell,” wrote Hobbes, “is truth seen too late.”
Lawyers for Radical Change
The legal profession, once a guardian of republican government, is now a force for social upheaval.
John O. McGinnis
The transformation of the legal profession marks a fundamental change in American democracy. In the republic’s early days, lawyers provided ballast for stability and served as an anchor against excessive populism. The judiciary’s sober attachment to formal order was a primary reason for giving it the power to review the constitutionality of legislation. Law was the profession most likely to preserve the enduring framework of republican government against the mutable passions of the hour.
Nowadays, lawyers are a force for often-radical progressive change. Nothing symbolizes that shift better than the American Bar Association. Once an embodiment of conservatism, it has been captured by the Left. Its resolutions at its annual meeting constitute a wish list of Democratic Party proposals. It also deploys its influence in the accreditation process of law schools to make them even more monolithically left-wing than they already are.
The reasons for the shift lie, at least in part, in the reorientation of lawyers’ interests. Since the birth of the modern regulatory state, lawyers are no longer primarily the allies of commercial classes, as they were in the early republic, but instead the technocrats and enablers of regulation and redistribution. The more the nation intervenes in economic affairs to regulate and redistribute, the greater the slice of compliance costs and transfer payments lawyers can expect to receive. Thus, they cannot be counted on as supporters of property rights or even of a stable rule of law. Their interest lies frequently in dynamic forms of legal transformation and the uncertainty they bring. Far from supporting a sound, established social order, they are likely to seek to undermine it. Only an ideological attachment to older forms of legal order, like that which Federalist Society members manifest, can call lawyers back to the essential role they play in the civic life of our republic.

Today’s American Bar Association uses its accreditation power to make law schools monolithically left-wing.
In Democracy in America, Alexis de Tocqueville admired the American experiment in republicanism but foresaw several dangers. Democracy could turn excessively populist, he warned, as demagogues successfully appealed to the ill-considered whims of an excitable public. Democratic mutability would thereby endanger republican stability. By operating on the principle of equality in the political sphere, democracy also tends to impose equality throughout society, depressing excellence everywhere, including in commerce and in the culture. That kind of egalitarianism can engender mediocrity, not meritocracy.
Lawyers once were an important bulwark against such dangers. According to Tocqueville, the importance of law in a republic made lawyers peculiarly powerful in America. When the nobility and princes are excluded from government, lawyers become the most effective governing class. Indeed, they are the closest group that America has to an aristocracy, albeit one of talent, not birth.
Tocqueville further believed that the nature of the profession makes lawyers’ power a beneficent force in the republic. Their devotion to law gives them the inclination to resist popular passions, and even a bit of contempt for the vacillations of democracy. The formal structure of law encourages an abiding suspicion of innovations that would disturb it. Tocqueville praised, above all, the disposition of character that comes naturally, he believed, from the legal profession: “Men who have made a special study of the laws derive from this occupation certain habits of order, a taste for formalities, and a kind of instinctive regard for the regular connection of ideas, which naturally render them very hostile to the revolutionary spirit and unreflecting passions of the multitude.” Tocqueville was not so naive as to assume that every attorney has such qualities. Lawyers are prominent in every kind of political movement, after all, including revolutionary ones. But respect for tradition and resistance to popular fads are important general tendencies.
As Gordon Wood writes in Power and Liberty, his superb book on early American constitutionalism, Tocqueville saw judges’ power to void unconstitutional statutes as another restraint on democratic excess. In Federalist No. 78, the famous essay defending judicial review, Alexander Hamilton justified the power of the federal judiciary by pointing to the sound judgment and acumen of the lawyers who would staff it. In response to claims that the judiciary would consolidate power in the federal government against the plan of the Constitution, Hamilton conceded that judicial review would serve the republic only if it respected stable traditions of legal interpretation. “To avoid an arbitrary discretion in the courts,” Hamilton observed, “it is indispensable that [judges] should be bound down by strict rules and precedents, which serve to define and point out their duty in every particular case that comes before them.” And those capable of being judges will be “the few”—an allusion from political theory to elites such as aristocrats—contrasted with “the many.” Hamilton had confidence in finding lawyers “who unite the requisite integrity with the requisite knowledge.”
Hamilton’s defense of judicial review by reference to the conservative tendencies and meritocratic excellence of lawyers was vindicated in the early republic. The judiciary resisted the most populist impulses of the Democratic-Republican Party when it took over the presidency and Congress after the 1800 election. Though they made enough appointments to establish a Supreme Court majority, three successive Democratic-Republican presidents—Thomas Jefferson, James Madison, and James Monroe—did not alter its fundamental character.
Despite Democratic-Republican attacks on the constitutionality of the Bank of the United States—itself a stabilizing institution for the fledgling networks of commerce in America— the Supreme Court upheld it unanimously in McCulloch v. Maryland. Jefferson blamed the Court’s intransigence on the magical powers of his cousin Chief Justice John Marshall, but it was the nature of the bar in the early republic that ultimately safeguarded the judiciary’s constitutional role. The justices were prudent men who rose to prominence from a legal practice dependent on arbitrating the private law among merchants and thus were well disposed to protecting the commercial republic that the Constitution established.
Today’s bar looks nothing like that of the early republic. Far from being a conservative influence, lawyers are more liberal than the median voter. And those who train the next generation of lawyers, the law professors, are overwhelmingly left-wing, favoring all sorts of foolish innovations—from abolition of prisons to putting the Federal Reserve in charge of setting prices for core goods.
The rise of the regulatory state is partly to blame. At the time of the Constitution, lawyers obtained their fees largely from private transactions. They negotiated and litigated contracts, conveyed property, and drew up trusts. But since the New Deal, much of law has become administrative law because the modern state is the administrative state. Government lawyers’ practice consists in finding new ways to regulate. For private lawyers, it consists in finding new ways of complying with or avoiding regulation. Lawyers thus gain with any increase in the scope of government agencies and complexity of the related procedures.
The other primary factor behind the bar’s transformation has been the rise of living constitutionalism and rights expansion, beginning in the 1960s. Under living constitutionalism, lawyers and judges are not simply servants of the law but potentially tribunes of the people, because they can choose to create new rights and discard others. In a legal world without the formal anchoring in text and precedents that characterized the lawyer’s craft of the past, innovation and, indeed, radicalism are prized as sources of power. Lawyers are no longer the rampart of the republic but the disrupters of its order.
The American Bar Association’s history reflects the lurch to the left. The ABA began in the 1870s. Until 1938, its positions on law and politics were conservative. It favored formalism in law and free enterprise in economics. Its members overwhelmingly opposed Franklin Roosevelt’s scheme to pack the Supreme Court to assure approval of New Deal legislation. The ABA argued that such a move threatened republican government; the group garnered both praise and blame for the plan’s eventual defeat.
But as the regulatory state entrenched itself after the New Deal, the ABA grew considerably less conservative. By the era of President Lyndon Johnson’s Great Society and the rise of the rights jurisprudence of the 1960s, it was a more openly left-wing organization. The watershed public moment marking this shift was the decision in 1987 by four members of its Standing Committee on the Federal Judiciary to rate Robert Bork “not qualified” for the Supreme Court. The committee’s remit was to evaluate nominees for judgeships based on their professional qualifications. Bork had been solicitor general of the United States, a professor at Yale Law School, and the author of the most influential book on antitrust law in the history of the subject. The judgment of the committee members represented an ideological assassination under the veil of professional assessment. And it may have proved decisive, because Bork’s opponents trumpeted it as a politically neutral reason to oppose his nomination. Subsequently, studies have shown that the Standing Committee has rated Republican lower-court nominees less highly than Democratic nominees with similar qualifications. For instance, it rated as barely qualified Professors Richard Posner and Frank Easterbrook. When they took the bench, however, their colleagues on other circuit courts cited them four standard deviations more often than they did the average judge.
By the 1990s, the ABA had begun publicly to endorse left-liberal positions. The most famous resolution was its 1990 affirmation of a constitutional right to abortion. That approbation of the 1973 Roe v. Wade decision showed how far lawyers had come from a conservatism based on legal formality because the abortion decision was notoriously unmoored from the text of the Constitution or any substantial precedent. As John Hart Ely, a liberal law professor at Harvard and supporter of abortion rights, once observed: “Roe is not constitutional law and gives almost no sense of an obligation to be so.” Since then, the ABA’s resolutions have ranged even more widely, but consistently leftward. It has recently passed resolutions, for example, to provide voting rights to those incarcerated for any crime, to prohibit states from preventing persons who are biologically male from competing in women’s sports at schools, and to forgive student debt. It has called for a minimum wage, too, though lawyers have no more expertise on this and many other subjects of resolutions than any other group of citizens. As the Anglican Church was once the Tory Party at prayer, the ABA is the Democratic Party at the bar.
The ABA helps reinforce and expand the Left’s power chiefly through its influence in accrediting law schools. The ABA’s involvement in the accreditation process initially focused on the issue of greatest concern to lawyers: making sure that they were paid comparably with doctors! Thus, the ABA imposed on law schools limits on how many hours law professors could teach. Two decades ago, the Justice Department upended this cozy arrangement, seeing it as facilitating a cartel that drove up prices for students. As part of a consent decree, the ABA agreed not to impose requirements that affected salaries and certain other economic matters. The result also shifted proposals for accreditation to the Council on Legal Education. But the council’s independence from the ABA is attenuated. Its head is approved by the ABA president, and most of its members are ABA lawyers, with a strong representation of professors. The ABA can still reject the accreditation standards. The consent decree has not prevented ABA influence, in other words, but merely redirected it.
Now that the lawyers’ guild cannot openly mandate policies to advance its members’ immediate economic interests, it has switched to imposing requirements that reflect the predominant ideology of the profession. Just this year, the council has strengthened its requirements for race- and ethnic-based hiring of faculty, making clear that law schools may henceforth be compelled to take such considerations into account, unless they are in jurisdictions that explicitly forbid such hiring. The requirement offers fresh confirmation of how far the organized bar will go in using its influence for a progressive-favored cause. Even under current precedent that the Supreme Court will reconsider next year, the requirement would be illegal: the Court has approved racial preferences for student admissions but not for faculty hiring. And regardless of the standard’s legality, it is a striking abuse of political power for an accreditor to mandate a policy nationwide that voters have rejected in almost every statewide referendum on racial and gender preferences.
The rule will be almost certain to push law schools even further left. A recent study by Harvard and University of Chicago researchers found that many law schools currently lacked any substantial representation of conservatives and that professors from racial and ethnic minority backgrounds (as well as female professors) tended to be more left-wing than the median professor. Preferences in hiring would thus likely reduce law schools’ already-limited ideological diversity.
The new standards also require that a law school shall provide training and education to law students on bias, cross-cultural competency, and racism at the start of the program of legal education, and at least once again before graduation. This requirement breaks new ground by telling law schools that a social problem is so important that it must be addressed, even if not directly related to the subject of law. As a group of senior Yale law professors, not one a conservative, observed about the reality of this standard: “The new proposed requirements . . . attempt to institutionalize dogma, mandating instruction in matters that are unrelated to any distinctively legal skill.” Law schools already have a woke atmosphere because they are part of universities. The ABA is helping to ensure that they become even more of an ideological bubble.
Beyond the baleful influence of the national organized bar, associations of lawyers at the state level also pressure society leftward. The most powerful mechanism for doing so: “Missouri plans” for the appointment of state judges (taking their name from a reform that originated in that state in 1940). Many states with such plans neither replicate the federal structure of executive nomination and legislative confirmation nor permit direct election of judges. The argument against such political processes is that they allow special interests to skew nominations. Instead, Missouri plans require the state’s executive to nominate judges from a list drawn up by a special commission. Lawyers have a privileged position on those commissions, with guaranteed seats for the bar. Not surprisingly, state supreme court justices selected under such plans favor the interests of lawyers. For instance, they have often been hostile to tort reform, which tends to reduce the income of plaintiff lawyers. And in keeping with the enthusiasm for creating new rights that characterizes the modern bar, justices selected under these plans are often sympathetic to living constitutionalism. Missouri plans may reduce the power of some special interests in the nomination process, but they embed one special interest—lawyers—that wants a judiciary serving its own financial and ideological interests.
The ABA’s leftward movement has spurred the growth of the Federalist Society. Though conservatives constitute only a minority of lawyers, their abandonment by the ABA was enough to attract more than 80,000 to join the Federalist Society. The organization is dedicated to promoting the jurisprudence of originalism—the view that the Constitution should be interpreted according to its meaning at the time it was enacted. While the Society does not lobby for the appointment of originalist judges, its network both refines the arguments for originalism and brings the best exponents to the attention of the relevant decision-makers in the White House and Congress. Without the Federalist Society, we would not have an originalist-leaning Supreme Court today—making the organization the most successful and important new civic organization in America in the last five decades.

The older view of lawyers and the law, where excessive sentiment is seen as an impediment to justice, is reflected in this 1893 cartoon.
Its success enrages the Left, undermining its near-monopoly of nonpartisan institutions that engage in legal and policy discourse. The Left dominates most of the commanding heights that influence the nation’s political agenda—universities, the media and entertainment world, philanthropies, and so on. It would dominate the legal sphere even more but for the Federalist Society.
If, largely thanks to the Federalist Society’s efforts, the Supreme Court is transformed, it may slowly change the legal profession as a whole. As the Supreme Court decides cases based on formal rules rather than policy perspectives or social visions, lawyers will again find themselves more concerned with formal order; after all, lawyers must follow, to some extent, the tribunals before which they practice. And if the Court’s jurisprudence helps bring about a more restrained federal government and regulatory state, lawyers would no longer see their interests so tied to increasing regulation. The bar may then return to the characteristic lawyerly virtues of prudence and skepticism about rash innovation that Tocqueville celebrated. Lawyers as a class would then cease to be the sappers of republican stability and once again serve as its shields.
Supreme Court Justice Amy Coney Barrett speaking at the Federalist Society, an organization dedicated to promoting the jurisprudence of originalism
Urban Renewal, Redlining, and Race
As Baltimore’s experience suggests, taking the eminent-domain bulldozer away from local governments will encourage better development.
Stephen J. K. Walters
Susette Kelo and Sonia Eaddy have much in common. Kelo became famous for fighting in the late 1990s to save her home from the urban-renewal bulldozer. After she renovated a century-old cottage in New London, Connecticut, the city announced that it would use its power of eminent domain to take Kelo’s little pink house and about 90 neighboring properties to make room for a Pfizer research facility and related amenities. Kelo battled all the way to the Supreme Court but lost, the Court ruling that local governments could take private property not only for a public use (such as a highway) but for a public purpose (such as a tax-revenue-enhancing private office park). Adding insult to injury, the city’s grandiose plan was never realized.
Now Eaddy, along with hundreds of other residents of a West Baltimore neighborhood called Poppleton, finds herself in a similar fight. Members of Eaddy’s family have called the area home for three generations; in 1992, she and her husband, Curtis, bought a row home there that dates to 1900. Crime-ridden Baltimore has suffered white and black flight for decades, and the city is using eminent domain to advance 15-year-old redevelopment plans in Poppleton, which is 88 percent black. But the Eaddys are stayers, even rebuilding after a devastating fire in 2012, and Eaddy has become a leader of efforts to preserve the historic community.

Homeowner Sonia Eaddy, who, with her husband, is challenging the condemnation of her home by Baltimore housing officials—and leading an effort to preserve the city’s historic Poppleton neighborhood
Media coverage of the saga has invoked familiar narratives about Baltimore’s history of segregation and redlining, recalling James Baldwin’s long-ago charge that urban renewal translates roughly as Negro removal. “You’re not doing that in Federal Hill,” a mostly white neighborhood, Eaddy pointed out at a summer rally against the program, according to a Baltimore Sun report. But city leaders’ treatment of Eaddy and her neighbors does not suggest intentional racial bias. Predominantly black Baltimore’s political leadership has been staunchly Democratic and determinedly progressive for decades; the city’s last five mayors have been black, as is the CEO of the development firm implementing the Poppleton plan, which itself is heavy on buzzwords and stipulations about equity, inclusion, and affordability. As for Kelo, her unhappy experience began when a Republican governor targeted a largely white neighborhood.
These episodes are more complex—and instructive—than common narratives about race and privilege might suggest. What’s going on in Baltimore is indeed an injustice, but the misguided policies that brought it about cut across racial and ideological lines.
Sixty years ago, when Jane Jacobs published The Death and Life of Great American Cities, she told a story about the overcrowded, seemingly chaotic, and largely poor North End of Boston. After enjoying a walk there, marked by “buoyancy, friendliness and good health,” she called a local planner she knew. “Why in the world are you down in the North End?” he asked. “That’s a slum . . . the worst in the city. It has two hundred and seventy-five dwelling units to the net acre! I hate to admit we have anything like that in Boston, but it’s a fact.” Jacobs nudged him to find more data—all of which, to his surprise, were favorable: low delinquency, affordable rents, positive indicators of public health. The planner even admitted that he’d found it “fun” when he’d actually visited the neighborhood—though he still concluded that “we have to rebuild it eventually.” Jacobs responded, “You should have more slums like this. Don’t tell me there are plans to wipe this out. You ought to be down here learning as much as you can from it.” But, she observed, everything her planner friend had learned “about what is good for people and good for city neighborhoods, everything that made him an expert, told him the North End had to be a bad place.”
Thanks to Jacobs’s devastating critiques, many wrongheaded presumptions about the ills of density and mixed-use buildings have been cast aside. But today’s rebuilding efforts often disappoint (see “They’re Taking Away Your Property for What?,” Autumn 2005) as contemporary planners continue to misdiagnose the areas that require their attention and the remedies they need. Perhaps it’s because those carrying them out rarely ask why a neighborhood is suffering in the first place.
The answer to that question isn’t always race. Both Boston’s North End and Baltimore’s Poppleton were redlined—that is, rated as “hazardous” neighborhoods for investment and colored red on the original maps produced by the federal Home Owners’ Loan Corporation (HOLC) from 1938 to 1940. Today’s commentators often assume that these maps were motivated by, or at least reflected, racial animus; a Brookings Institution report defined redlining as “the practice of outlining areas with sizable Black populations in red ink on maps as a warning to mortgage lenders, effectively isolating Black people in areas that would suffer lower levels of investment than their white counterparts.”
This misreads the history. While the notes attending Poppleton’s rating did mention an “infiltration of Negroes,” the North End was entirely white. Indeed, only 3 percent of Boston’s population at the time was black, but 25 percent of the city was redlined and another 64 percent was rated “also declining” (and colored yellow on the HOLC maps). By contrast, Baltimore’s proportion of black residents, at 19 percent, was six times that of Boston’s—yet its proportions of red (15 percent) and yellow (31 percent) were not nearly as great. Many cities with negligible black populations were heavily redlined. San Francisco, for example, was less than 1 percent black in 1940, but its map was 31 percent red and 32 percent yellow—including Haight-Ashbury, where rents would keep falling until, a generation later, they became affordable to the vanguard of America’s counterculture.
Discrimination in mortgage lending has certainly been a problem, but the contemporary narrative about those famous HOLC maps is distorted. It would be unnecessary for a biased lender to check an address on a map while in the presence of a loan applicant whose class, race, ethnicity, or gender is easily determinable. Doing so would be an imprecise tool of discrimination, anyway, as the redlining maps showed that geographic boundaries could be a poor proxy for membership in a “disfavored” group. A paper in the Journal of Urban History points out that the maps were devised after the agency had expended its resources to stabilize Depression-era mortgage markets, and finds that black borrowers received HOLC assistance roughly in proportion to their ownership rates in most locales. Evidence suggests that racial bias played only a minor role in the construction of the HOLC maps: observed concentration of black, poor white, or immigrant households in redlined zones, notes a National Bureau of Economic Research study, reflected their modest means and thus their tendency to locate in areas that the New Deal technocrats would identify as declining. A paper by University of Pennsylvania urban studies professor Amy E. Hillier offers no evidence that the grades on HOLC’s maps explain differences in lending volumes (though interest rates tended to be higher in areas colored red). The maps told a story, but race was not its antagonist.
If race did not cause much of 1940s-era Boston to be deemed hazardous for investment, what did? What was the real root cause of the North End’s redlining and its status two decades later—at least, in the view of Jacobs’s planner friend—as Boston’s worst slum? Why would New London’s leaders view Kelo’s neighborhood as a candidate for rebuilding in the late 1990s? And why, more recently, has Poppleton become a target? The cases have differences, but they share a long-term pattern: disinvestment driven by an unfriendly tax environment. In a vacuum, houses depreciate as they age and require regular investment to maintain their value. High property taxes act to depress property values, diminish wealth, and inhibit investment. And the consequences of such disinvestment—physical decay, lower rents, and population shifts as neighborhood amenities decline—first become obvious in older neighborhoods or those that had lower-quality stock to begin with. As it happened, Boston was a hostile investment environment for much of the twentieth century. James Michael Curley, “the mayor of the poor” for four staggered terms through 1950, was a malign force—pursuing a winning political strategy by taxing the property of his city’s “haves” (well-to-do WASPs) to provide benefits to his political base (Irish immigrants). He quintupled Boston’s property-tax rate, making it the highest of any major city in the United States. As a result, Boston suffered population and capital flight even as other northern cities grew rapidly; its many redlined areas bore the most obvious signs of decline—including the historic North End, with its older stock of physical capital. Still, physical decay and resulting lower rents will be attractive to some. That’s why the HOLC maps often noted “infiltration” by blacks or immigrants. Redlined neighborhoods drew these populations because they were depreciating physically and, therefore, cheap. But as Jacobs understood from her study of the North End and many other neighborhoods, as Kelo knew as she fought to defend Fort Trumbull, and as Eaddy and her neighbors have long understood about Poppleton, a neighborhood can look as though it needs rebuilding while nonetheless serving its residents reasonably well.
Eaddy walks past the Sarah Ann Street alley homes in Poppleton.
Unfortunately, when government officials, city planners, connected developers, and the construction industry see an area suffering from disinvestment, they often respond by embracing projects that recall the days of urban renewal. They designate the area as a redevelopment zone and request proposals; the wheels of the bureaucratic process begin to turn. Accordingly, investment falls further: few property owners put money into a home or business likely to be in the bulldozers’ path. As decay accelerates and some residents give up, the area becomes, in the words of Baltimore officials discussing Poppleton, “largely vacant and desolate.”
Parcha McFadden, one of Eaddy’s neighbors and allies, remembers that “Poppleton used to be a nice neighborhood” before the plan to renew it began in 2005. This was so, despite an earlier planning catastrophe that wiped out several West Baltimore neighborhoods, including those on Poppleton’s northern border, in order to make way for an interstate—which, like Fort Trumbull, was never completed and thus became infamous as Baltimore’s “highway to nowhere.” Those neighborhoods were once home to thousands of people who had spent years creating congregations, clubs, and associations embodying tremendous accumulated social capital. Much of that has been destroyed, along with those residents’ depreciating and cheap homes.
What would better treat a disinvestment crisis? The key first step is the creation of a favorable overall investment climate via competitive tax rates and secure property rights. That’s what Boston learned when Massachusetts voters passed Proposition 2½ in 1980, ending decades of damaging tax policy and reversing the city’s long-running flight of capital and population. Baltimore’s ongoing disinvestment crisis is largely due to its leaders’ refusal to pursue tax competitiveness: its property-tax rate is more than twice the rate in the surrounding county, and its services are far inferior. Of course, voters might solve this problem themselves, and some political and faith leaders are now encouraging Baltimoreans to do so.
But the problem also flows from the Supreme Court’s Kelo decision. Justice John Paul Stevens called the case “the most unpopular opinion I ever wrote, no doubt about it.” Legal scholar Ilya Somin, among others, has argued that it’s also poorly reasoned. And despite some states’ attempts to limit the “public purposes” for which private property may be taken, Somin concludes that eminent-domain abuse won’t stop entirely until the Court reconsiders and reverses its narrow (5–4) decision in Kelo.
Sonia Eaddy’s case provides an excellent opportunity for that to happen. The starting point for all “rebuilding” plans is always identification of areas suffering from disinvestment. As the makers of the original HOLC maps documented, these areas were usually “depreciating and cheap” and therefore chiefly attractive to lower-income groups—often immigrants and blacks, though certainly not always, as we have seen. These residents’ property rights (and corresponding social capital) would seem worthy of constitutional protection from abuse by powerful officials and developers. The question now is whether taking the eminent-domain bulldozer away from local government might not only keep Sonia Eaddy and many of her neighbors in their homes but also encourage public officials to pursue more benign and effective tools of urban renewal in the future.
Deconstructing New York’s Building Costs
One-of-a-kind regulations illustrate why it’s so hard to build anything in Gotham.
Connor Harris
New York is a city unlike any other in many ways—one of the less distinguished being its one-of-a-kind building-construction regulations. Though typically justified on safety grounds, these regulations provide only questionable safety benefits and, in some cases, may be counterproductive, while considerably raising the cost of new construction. There has never been a better time to revisit these rules, as most New Yorkers now recognize the extreme expense of housing—a problem worsened by expensive construction—as one of the city’s greatest challenges.
Two aspects of this regulatory environment are particularly egregious. First, New York State liability law—above all, a law that lets injured workers sue employers for millions of dollars, even for accidents caused by their own recklessness—makes construction far more expensive and harder to insure. Second, an onerous system of crane regulations, without parallel elsewhere in the United States, entrenches the power of a corrupt local union and forces independent building firms to resort to inefficient, dangerous construction methods.
Insurance costs for construction contractors in New York State are inflated by Section 240 of the New York State Labor Law, more commonly known as the Scaffold Law (though the law has little to do with scaffolds). The law’s text obliges builders to put up scaffolds and other safety devices, “which shall be so constructed, placed and operated as to give proper protection” to workers, but gives only a few concrete prescriptions for the meaning of “proper protection.” State court decisions have molded the Scaffold Law into an open-ended obligation on contractors and building owners. Far beyond just enforcing a list of regulations, the Scaffold Law lets injured workers sue employers for almost all gravity-related workplace injuries that some conceivable safety measure, no matter how excessive, might have stopped.
In most states—including, for most classes of workers besides construction workers, New York—workers generally cannot sue employers for workplace injuries. Injured workers are instead compensated under workers’ compensation, a program designed to be predictable and efficient to administer. A “no-fault” system that does not assess workers’ culpability for accidents, workers’ compensation makes awards based on predefined formulas and avoids the adversarial legal system. It is financed through a tightly regulated market of mandatory insurance, with premiums generally dictated by state regulators. The system’s insurance policies have two parts: the main part makes payments through workers’ compensation itself; a second “employer’s liability” part covers restricted conditions in which employers can be sued over workers’ injuries. Formulas for determining insurance premiums include “experience modifiers” that give discounts to employers with better safety records, thereby encouraging worker safety. In New York, an association of insurance carriers, the New York Compensation Insurance Rating Board, calculates experience modifiers.
In limited circumstances, workers or related third parties may file lawsuits against employers. For instance, if a worker injured by defective construction equipment sues the equipment’s manufacturer, the manufacturer can file a “third-party over-action” claim against the worker’s employer, such as a developer, claiming that poor maintenance, not the equipment itself, was to blame. New York, incidentally, is one of only two states that require unlimited employer’s liability coverage; most other states stipulate only some amount in the range of hundreds of thousands of dollars. This condition likely contributes to the expense of workers’ compensation insurance in New York: according to data compiled by the Oregon Department of Business and Consumer Services, premiums in New York are the nation’s second highest, behind only New Jersey, and 55 percent above those in the median state.
Onerous regulations on cranes, favored by unions, make it difficult for independent construction firms to bid on projects.
The Scaffold Law, by contrast, allows workers injured in a fall or by a falling object—two criteria defined loosely enough that tripping from a footstool might qualify—to sue employers for damages under general liability, not employer’s liability. Like workers’ compensation, lawsuits under the Scaffold Law are functionally no-fault: the New York Court of Appeals has stated that “the statute places absolute liability upon owners, contractors, and their agents for any breach of the statutory duty which has proximately caused injury and, accordingly, it is to be construed as liberally as necessary to accomplish the purpose for which it is framed.” A trial lawyers’ website notes the Scaffold Law’s exceptional breadth: “The Scaffold Law frequently produces liability and a right of recovery where none would typically exist for the construction worker. This law imposes absolute liability against a property owner, tenants, and contractors.” A 2014 report by the New York County Lawyers’ Association, a reliable defender of the Scaffold Law, notes that workers can typically be found liable for their own injuries under the Scaffold Law only if they were “recalcitrant”—that is, if they persisted in violating safety rules despite explicit instruction—and if their own actions were the “sole proximate cause” of the accident. That’s a massive departure from the “comparative negligence” standard otherwise nearly universal in tort law, which assigns liability to parties in proportion to how much their actions contributed to accidents.
But awards from lawsuits under the Scaffold Law far exceed those from workers’ compensation. New York currently caps benefits from workers’ compensation at $1,063.05 per week of lost wages, or about $55,000 per year; awards even for severe permanent disabilities total, at most, ten years of wages. By contrast, awards under the Scaffold Law can often stretch into the millions. One law firm advertises numerous cases in which it won settlements of several million dollars under the Scaffold Law—including one in which even the firm’s curt description makes it hard to see employer negligence as a cause: a $3.5 million award for a stonemason “injured when another worker threw a piece of sheetrock out a window striking him on the ground below.”
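A back-of-the-envelope comparison, using only the figures quoted above, shows how wide that gap is: $1,063.05 per week × 52 weeks ≈ $55,278 per year, so even a maximum ten-year disability award comes to roughly $553,000, less than one-sixth of the $3.5 million stonemason settlement.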
Employers can be liable even for injuries of workers who knowingly violate safety rules. In one case, a nonprofit housing developer was found liable for the injuries of a worker who fell after detaching his safety harness in order to enter a construction site through a window, though a safer, intended form of entry was provided. The court found that the developer was nevertheless liable because it had “acquiesced” to workers’ use of unsafe window entries by not prohibiting them clearly enough. The website scaffoldlaw.org, maintained by advocates of Scaffold Law reform, lists cases of workers who won compensation even though the injuries they suffered occurred while they were under the influence of alcohol or illegal drugs.
The Scaffold Law makes the New York State market for insurance far less competitive, especially for so-called admitted carriers. New York, like other states, distinguishes “admitted” and “non-admitted” (or “excess line”) insurance providers. Admitted carriers must file a range of prices with a state regulatory agency and cannot charge outside that range. In return, if an admitted carrier goes bankrupt, the state will take over the contracts of any of its policyholders. New York State also assesses a 3.6 percent tax on non-admitted policies, and some banks require policies with an admitted carrier as a condition of financing.
According to one New York insurance broker, however, most carriers’ admitted rates for general liability insurance are calculated from national averages and come in far below the rate needed to be profitable in New York. Even for a run-of-the-mill project such as a ten-story building, therefore, only “one or two” admitted carriers would be willing to issue policies at their stated rates. Use of riskier non-admitted carriers is therefore common, but even the non-admitted market is hardly competitive.
The lack of competition means still higher insurance prices: several estimates conclude that the Scaffold Law approximately doubles insurance costs. A 2014 report from the New York Building Congress, for instance, claims that “a significant body of loss-cost data from all major insurance carriers . . . shows dramatically higher loss costs in impacted classes of construction in New York compared to all comparable and neighboring states.” It also quotes one partner at a New York insurance firm claiming that insurance costs for construction are roughly 10 percent of construction expenses in New York, versus an average of 5 percent in the rest of the United States—a difference roughly confirmed in a 2019 article from the insurance corporation CRC Group, which reports typical ranges of 7 percent to 10 percent in New York and 3 percent to 5 percent elsewhere.
Does this expense at least pay off in improved construction safety? Defenders of the Scaffold Law make two principal arguments. First, they point to Illinois after the 1995 repeal of the Structural Work Act, a similar state law. “During the three-year period preceding the repeal of the [Structural Work Act], construction deaths constituted on average 17 percent of the total workplace deaths in Illinois,” says a 2007 report from the progressive think tank Center for Justice & Democracy. “The construction industry’s ‘share’ of total fatalities increased over the next ten years to an average of one out of every five fatal injuries (or 21 percent).” (Note, though, that an increase in the share of on-the-job deaths accounted for by the construction industry does not mean an increase in the number of deaths.) Second are claims that New York’s injury rates are relatively low, compared with those of other states, for which the Scaffold Law deserves presumptive credit. The aforementioned 2014 report from the New York County Lawyers’ Association attributes New York’s putatively good construction safety record to the Scaffold Law. “This current incentive to provide a safe workplace,” it says, “is one reason why from 2000–2011, New York had the nation’s sixth-lowest construction injury rate.” The New York State Trial Lawyers’ Association, also a major advocate for the Scaffold Law, uses similar defenses.
These arguments rest on a selective look at the data. New York’s construction injury rate is indeed generally low, but the rate of fatal injuries, according to Bureau of Labor Statistics data, is unremarkable or a bit worse than average. In 2019, New York State had 10.2 fatal accidents per 100,000 construction workers and New York City had 11.6, compared with a national average of 9.7. To take a few other large, mostly urbanized states as comparisons, California had 6.5 deaths per 100,000 workers, Pennsylvania 6.3, Texas 9.9, and Florida 10.9.
Moreover, the best comprehensive comparison of state-level construction safety data—a 2014 working paper by several authors from Cornell University—finds it difficult to attribute good points of New York’s safety record to the Scaffold Law, or any bad points of Illinois’ record to the repeal of its equivalent. First, it notes, construction has become generally safer throughout the United States since the 1990s. Rates of fatal falls, in particular, the main type of injury that the Scaffold Law and Structural Work Act were meant to prevent, declined nationally by about one-quarter from the early 1990s to the early 2010s. In fact, Illinois saw a larger decline in fatal falls of construction workers than did New York, and fewer falls in absolute numbers by the end of this period. (My own analysis of online Bureau of Labor Statistics data finds no detectable break after 1995 in the overall downward trend in deaths in the construction industry or deaths from workplace falls in Illinois.)
Further, the Cornell researchers find that state-level construction safety is highly predictable from a general national trend toward greater safety over time and differences between states in the predominant type of construction: for instance, “higher commercial construction activity . . . is associated with a lower rate of nonfatal injuries.” After controlling for these factors, New York’s safety record is about average, and accident rates in industries covered by the Scaffold Law are actually somewhat higher than national trends would predict. This finding does not necessarily mean that the Scaffold Law is counterproductive—other features of New York’s regulatory environment could also contribute to New York’s excess danger—but it does suggest that the law is unlikely to have large benefits. Tom Stebbins, director of the Lawsuit Reform Alliance of New York, suggests possible mechanisms by which the Scaffold Law could actually be counterproductive. Because the law gives out large awards under no-fault processes, both good and bad contractors pay inflated premiums for general liability insurance to cover accidents that can’t reasonably be considered their fault—thereby attenuating the market advantage of lower premiums for more responsible contractors. Stebbins also notes that unscrupulous contractors in neighboring states can undercut New York–based contractors for work in border regions if they’re willing to lie to insurers by not telling them that they have projects in New York.
If the Scaffold Law is a unique presence in New York, a unique absence also plays a major role: in New York City, construction of all but the tallest buildings seldom uses cranes. In other cities (including elsewhere in New York State), cranes are commonly used even on buildings of only a few stories. Builders in New York City who do use cranes, furthermore, tend to avoid “tower cranes” embedded in the ground in favor of mobile cranes mounted on trucks or Caterpillar tracks. Mobile cranes tend to have inferior reach and lifting capacity to tower cranes, and they occupy a larger ground footprint, making them impractical for small sites and often necessitating closing streets. They are also less stable and prone to collapse in high winds, as in a fatal 2016 collapse of a large track-mounted “crawler” crane in Tribeca.
In New York, unlike in other states, construction workers injured on the job can sue their employers.
More commonly, builders without cranes resort to hauling heavy construction materials up through construction sites by hand, exhausting laborers and wasting time while also reducing safety. The levels of a construction site, like those of a ship, are ordinarily connected by ladders through narrow openings in floors—fine if workers are transporting only themselves, but dangerous if they are carrying heavy construction materials. Safe transport of heavy loads without cranes necessitates expensive construction-site redesigns such as continuous stairways from the ground to the construction deck.
Another common solution in New York: portable “spider cranes”—consisting of a boom several feet long attached to a tetrapod—which can be carried up to a construction deck by hand and then bolted down near the edge, so that the boom projects off the side of the building. Spider cranes, though, have low load limits and are prone to accidents when rushed workers overload them. In 2018, two workers were seriously injured on a construction site in Harlem when a spider crane toppled over: it was rated for 880 pounds, according to a contemporaneous report in the New York Post, but was being used to haul a 1,500-pound load. (Tower cranes, by comparison, typically have load limits of at least several tons.)
Once a load is lifted by a spider crane, it still must be moved across the construction deck to the correct location, another often-dangerous waste of labor. In 2018, a construction worker was killed on the roof of a Brooklyn worksite when a forklift carrying building materials toppled over—an accident that could have been avoided with a larger crane that could set loads down at the correct spot. One contractor who doesn’t use spider cranes on his projects calls them “a solution to a problem 100 percent created by the DOB and the licensing.” He estimates that a quarter of accidents on New York City construction sites could be avoided with more widespread use of cranes and that many projects that use only one crane in the city would use several cranes, at much higher efficiency, anywhere else.
The lack of cranes in New York stems from unique city regulations. First, the Cranes and Derricks unit of the New York City Department of Buildings must review and approve all plans to use cranes in construction; most other cities’ bureaucracies insist that a licensed engineer sign off on crane designs but do not review plans themselves. According to the principals of an engineering firm that works with cranes in New York and in many other cities, employees at Cranes and Derricks often lack specific expertise in cranes and are excessively technologically conservative, refusing to approve new crane models or imposing untenable conditions on their use; the review process’s main safety benefit is that the mere act of preparing crane designs to meet Cranes and Derricks’ conditions helps engineers spot potential safety problems. (In one case, according to a 2016 report from Crain’s, NYC DOB’s imposition of prohibitive rules for a new crane model likely stemmed in part from lobbying by politicians supported by the city’s main crane operators’ union.) Department inspectors can unilaterally stop crane work if they see any conditions that they consider possibly unsafe—one contractor complained that he received a stop-work order because an inspector was simply unfamiliar with a crane model. Restarting work with cranes requires potentially time-consuming negotiations.
Two other artificial obstacles discourage the use of large cranes in New York, especially tower cranes: excessive insurance conditions; and New York City’s sui generis licensing for crane operators. The year 2008 saw two fatal collapses of tower cranes—one caused by improper hoisting of a device for securing the crane mast to the building and the other by slapdash maintenance by the crane’s owner, who bought a safety-critical replacement part from an unauthorized manufacturer in China without designating appropriate specifications. In response, the city government raised general-liability insurance requirements for projects that use tower cranes from $10 million to $80 million, making tower cranes unaffordable for smaller projects. Some other cities mandate crane-specific insurance—Chicago, for instance, demands $1 million per occurrence for property damage and bodily injury—but nowhere besides New York, to my knowledge, do tower cranes trigger such an increase in necessary general-liability coverage.
The most counterproductive aspect of New York City’s crane regulations is its uniquely onerous regime for licensing crane operators. The National Commission for the Certification of Crane Operators (NCCCO), which gives widely recognized certification exams, lists only seven U.S. cities, including New York, with city-specific operator licenses. Most of these licenses are rubber stamps that call for, at most, passing an extra exam. Philadelphia, for instance, demands little more than that crane operators hold a valid Pennsylvania state license and certification from a recognized national organization, such as NCCCO. Pennsylvania, in turn, imposes only mild conditions on state licenses—mostly that crane operators hold an NCCCO or similar certification and pass character and fitness tests. Chicago’s regulations, the most demanding besides New York’s, stipulate that operators pass an additional exam and either have gone through a recognized apprenticeship program or have worked at least 2,000 hours (a year of full-time work) in the last four years as a crane operator in another city.
New York City’s rules are incomparably more stringent. New York offers three main classes of license. Class C licenses, the easiest to obtain, allow the operation only of mobile cranes with booms of up to 200 feet (divided further into three types, with a license subclass for each type), while the harder-to-get Class A and B licenses allow work on larger mobile cranes and tower cranes. An applicant for any Class C license must have done two years of supervised work within the last three years on similar crane models and have spent at least one of those years working in the New York metropolitan area or in one of six other U.S. metropolitan areas that NYC DOB considers comparable urban environments.
These requirements for small mobile cranes far outstrip any other city’s rules for any sort of crane. But they pale in comparison with New York City’s own requirements for the Class A and B licenses needed to work on tower cranes. In addition to obtaining a certification from NCCCO, Class A and B license applicants, no matter how experienced, must spend several years in supervised work with a current New York City license holder. For a Class A license, which allows work only on immobile cranes with a boom length of under 200 feet, applicants must have spent at least three years under the supervision of a Class A or B license holder. The higher-level Class B license, which allows operation of cranes of unrestricted size, mandates two more years of supervised work. In the recent past, these licenses were even harder to get because the city did not recognize NCCCO certifications: in addition to training under a New York City license holder, applicants had to pass a city-specific certification test. (The city government had to fight a lawsuit from the largest union of crane operators in order to begin recognizing national certifications; it finally prevailed in 2016.)
By giving crane operators control over the supply of new licensed workers, New York City massively raises the cost of crane operations. A Wall Street Journal report from 2011 states: “A relief crane operator in New York City earns $82.15 an hour in base pay and benefits . . . well above the $66 an hour he would earn in Chicago or the $39 an hour in Washington, D.C.” Including overtime and benefits, the Journal reports, crane operators can earn more than $500,000 per year. Most crane operators with New York licenses, furthermore, belong to the International Union of Operating Engineers Local 14-14B, a union with a long history of allegations of corruption and ties to organized crime, at least until a federal investigation in the late 2000s. A New York Times report from 2009 notes “allegations that union officials helped unqualified organized crime figures get lower-level crane licenses,” as well as accusations that the union’s president Edwin Christian—who made almost $400,000 in total compensation in 2019, according to the website UnionFacts.com—handed out jobs to members out of favoritism.
Much of the additional cost imposed by Local 14 comes not from crane operators’ wages but from their refusal to work for contractors who hire nonunion labor in other roles and from featherbedding of support roles. New York has some nonunion crane operators who actually earn as much as or more than union operators, though both still make far more than they would if New York had fewer regulatory barriers to entry. But Local 14’s work rules call for union wage scales for many non-operator support staff with basically unskilled jobs, such as “hoist car” operators who run elevators up and down crane masts. One contractor estimated that nonunion hoist-car operators in New York earn $25 to $30 per hour and that he had lured crane operators away from union work by, as he put it, appealing to their pride and offering them work operating a crane rather than a hoist car. Similarly, a New York Daily News article from 2011 mentions that Local 14’s rules called for high-rise construction at One World Trade Center to hire dozens of union workers, often for total compensation of hundreds of thousands of dollars per year each, for almost no work—including an “oiler” whose only job was to turn the crane on each morning.
The effect of crane regulation on construction costs is likely substantial. Crain’s cites an estimate that the construction cost of new top-end office space in New York City is about $300 per square foot, of which the direct cost of a crane is about $20 per square foot, or slightly under 7 percent of the total bill—a figure that does not include the indirect costs of using fewer cranes than optimal. Information on crane costs in other cities is hard to find, but a 2012 student project from the Technion in Israel, giving a model of construction management for a residential high-rise in central Tel Aviv, estimated $294,000 in crane operation costs—only 1.6 percent of the total construction price of $18.49 million. The total construction cost in this model project, incidentally, came out to only $100 per square foot, far below the New York average.
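As a quick check of those shares, using only the quoted figures: $20 ÷ $300 ≈ 6.7 percent of construction costs in New York, versus $294,000 ÷ $18,490,000 ≈ 1.6 percent in the Tel Aviv model project, a roughly fourfold difference in the crane share alone.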
Fortunately, most of New York’s bad construction regulations have potentially straightforward fixes. A sensible reform to state liability law, preserving employers’ incentives to keep a safe workplace while reducing prohibitive insurance costs, might be to create an affirmative defense of regulatory compliance. Builders and contractors who can prove that they followed an explicit list of safety regulations, such as OSHA guidelines, would be protected against negligence lawsuits. If the state government wants to give injured workers larger payouts than workers’ compensation currently awards, it could simply make workers’ compensation more generous, thereby ensuring that more of the payouts go to workers themselves rather than to trial lawyers.
Without cranes, workers often haul heavy materials by hand.
New York’s crane regulations deserve more discussion. Most of the construction professionals I spoke with—including those who regularly work in other cities—believed that New York’s dense pedestrian traffic and underground infrastructure could justify some unique regulations. Such opinions should be considered, but not uncritically. New York might have more high-rise construction than other cities, but the densest parts of Manhattan closely resemble the central areas of Chicago, Philadelphia, San Francisco, and many other old cities in the United States and abroad. New York’s crane regulations apply not only to high-rise construction in central Manhattan but also to less dense areas such as industrial districts in the Bronx or outer Queens. In any case, improvements in the safety of crane operations themselves must be weighed against the consequence of forcing contractors to resort to more dangerous and labor-intensive workarounds.
As a first step, New York might consider narrowing the authority of NYC DOB Cranes and Derricks: for instance, limiting its discretionary ability to approve new types of cranes and, perhaps, ending DOB review of crane design plans outside dense areas such as Manhattan and downtown Brooklyn. In less dense parts of New York, as in other cities, Department of Buildings staff would just check that a licensed engineer had signed off on plans to use cranes.
A similar reform could be made to New York’s licensing scheme. It may be justifiable to require crane operators in New York to have some experience in other dense urban environments; the case for requiring work in New York specifically is much weaker. Regulations like Chicago’s city-specific written-exam rule could ensure that operators have any requisite New York–specific knowledge. Again, it might make sense to have laxer requirements for operating cranes in lower-rise districts than in dense central areas.
We should be skeptical of arguments that New York is so special that it needs a system of construction regulations unlike anything found anywhere else. New York may be the greatest city in the world, but it is merely one of hundreds of places with many tall buildings. The same laws of physics govern construction here as anywhere else.
Twilight of the Idols
The new iconoclasm has deep roots.
Geoff Shullenberger
Last October, three large busts created by artist Chris Carnabuci were installed in Manhattan’s Union Square. One represented the late congressman and activist John Lewis, who died in the summer of 2020, just as a new wave of protests decrying racism against African-Americans was sweeping across the nation. The other two represented individuals whose deaths at the hands of the police triggered those protests: Breonna Taylor and George Floyd.
Carnabuci uses a computerized 3-D modeling technique to produce his work. The busts are constructed from stacked plywood planks carved by programmed machine tools. Their appearance evokes something of the digital method that generated them: the vertical stack of thin bands that constitutes the busts lends them a flickering, screen-like aura. They look at home on Instagram—a platform that helped propel 2020’s protests. However, Carnabuci also painted the Union Square busts bronze, stating that he aimed to create a dialogue with the tradition of monumental statuary. In this sense, the installation might be seen as a constructive contribution to recent controversies around public statuary that have often taken a destructive form.
Throughout the summer of 2020, rioters vandalized and toppled historic monuments across the country.
An equestrian sculpture of George Washington stands just behind the site of the Union Square installation. It dates back to 1856, making it the oldest statue in a city park. While no one has recently attempted to remove it, other civic monuments of similar antiquity have come down in the city in the same period in which Carnabuci’s busts went up. Just before Thanksgiving, an 1834 statue of Thomas Jefferson was removed from City Hall, and the removal of another presidential tribute, the statue of Theodore Roosevelt on the steps of the American Museum of Natural History, began shortly thereafter. Demands to remove other statues, particularly those of Christopher Columbus, have thus far been unsuccessful.
A vandal splattered the Floyd bust with paint shortly after its installation. The incident was investigated as a hate crime, but it also fell into a larger recent pattern. Much the same or worse befell older public statues during the protests the previous year, including the George Washington monument a mile to the south in Washington Square Park. Meantime, in Portland, Oregon, a George Washington statue was toppled and set on fire by protesters. Statues of figures including Francis Scott Key and Ulysses S. Grant met similar fates in San Francisco. In response, President Donald Trump issued a directive demanding the prosecution of vandals of federal monuments and threatening to withhold funding from localities that failed to protect their own statues.
Protesters in 2020 targeted figures outside the standard pantheon of dead white men, including a women’s rights monument, a statue of an abolitionist leader, and even a statue of Frederick Douglass. In aggregate, the wave of destruction that overtook the country and the world resembled historical periods of iconoclastic fervor. It was as if statuary itself, and not simply monuments to certain historical individuals, had become offensive.
In response to the vandalism, local municipalities and educational institutions across the nation opted to remove a range of controversial monuments from public view. Critics of these developments tend to acknowledge the flaws and misdeeds of at least some of the historic figures targeted but typically argue that to remove their statues is to “erase history.” If mere historical preservation were the concern, however, surely the solution would be to move contested monuments to museums, as is happening with the Jefferson and Roosevelt statues in New York. But such moves are unlikely to placate those who objected to the removals.
What they are lamenting, in fact, is not the loss of knowledge of historical events but the decline of an altogether different mode of history—the mode that Friedrich Nietzsche, in his essay “The Uses and Disadvantages of History for Life,” called “monumental.” Monumental history does not merely preserve or document the past: it memorializes great individuals and their deeds and enjoins us to follow their example. Its social function is to consolidate groups around common objects of admiration and emulation. Hence, the effect of toppling civic monuments is not to remove the figures they represent from the historical record, which is preserved elsewhere, but to demote them from their previously elevated status.
Writing in the late nineteenth century, Nietzsche glimpsed the waning of this sort of history, alongside the rise of what he called “critical history.” Rather than elevating the heroes of the past, critical history aims precisely to take them down from their pedestals. Perhaps its most notable popular embodiment in recent years was the New York Times’s 1619 Project, which sought to displace the story of America’s Founders with another foundation narrative featuring different protagonists: the first victims of the transatlantic slave trade. By memorializing two previously unknown people regarded as victims of racialized violence, Carnabuci aimed at something similar.
The toppling of statues often marks moments of regime change. In ancient Egypt, both the rise and fall of the iconoclast pharaoh Akhenaton saw massive desecration of public art. The iconoclastic destruction of the Protestant Reformation delegitimized the Catholic Church in England and northern Europe. More recently, the fall of the Iron Curtain and the U.S.-led overthrow of Saddam Hussein generated iconic media images of statues falling, thus enacting the end of the old regime. The U.S. military staged and disseminated one such scene—the toppling of the Saddam statue in Baghdad’s Firdos Square, attributed to the local populace but actually performed by U.S. soldiers—because of the potency of the message it conveyed.
These scenes vividly embody the process by which an adamantine order loses the mandate of heaven. As long as a regime remains assured in its power, statues are sites of ritual homage, or neutral backdrops of civic life. But as the historical narratives that undergird a power structure cease to inspire conviction, aggressive outbursts against its most robust physical manifestations can make tangible the criticisms that have already eaten away at its ideological authority. Destruction of statues offers proof of concept for altering a political arrangement that once seemed immutable.
The current wave of demolition and removal in the United States first targeted Confederate monuments in the South during the Obama administration’s second term. In this sense, the ongoing iconoclastic project initially appeared not as the beginning of a new process but as a belated recognition of one already accomplished: the end of Jim Crow. Most of the Confederate statues had themselves been erected well after the Civil War, part of the stealth restoration of racial hierarchies. They memorialized a fallen order to bestow a patina of historical grandeur on a more newfangled reactionary system. The fall of monuments to Robert E. Lee, Jefferson Davis, and the like thus seemed preordained. The recent targeting of a far wider array of historical monuments, including ones dedicated to those who defeated the Confederacy, was something quite different: it raised the question of whether the country can reach any consensus at all about a “usable past.” (See “Monumental Ambitions,” below.)
Before the iconoclastic outbursts of mid-2020, the public life of cities and towns had been suspended altogether, generating stunning images of empty streets in the most bustling metropolises. With the advent of Covid-19 lockdowns, life migrated almost entirely into the very different public spaces to which it had already been relocating for some time: Internet platforms. This emptying of civic spaces was then succeeded by mass gatherings in them. Those who participated in these gatherings, overwhelmingly “digital natives,” treated the stolid landmarks in these spaces not as enduring and sacrosanct but as subject to deletion, like any online post. The iconoclasm of the Reformation also occurred after a media revolution: the printing press, which disrupted long-standing information monopolies, expanded average people’s access to knowledge, and destabilized both secular and religious authority. Emergent strains of radical Protestantism rejected not only the representation of particular figures but the modes of plastic and visual representation that they saw as idolatrous. In a similar manner, the ongoing dematerialization of collective life into digital channels is one reason the objects of stone, bronze, and concrete that still punctuate our cityscapes face an uncertain future.
“You will not find,” wrote the late French philosopher Michel Serres, “any general philosophical treatise on sculpture or statues.” Ironically, he made this statement in Statues, a 1987 book that is, to my knowledge, the only exception to his assertion. Despite wide-ranging debate and reflection on this subject in recent years, the book has received little attention. Serres’s determinedly unpolemical approach limits the utility of his ideas to culture warriors of any faction. He is, as the anthropologist and Serres admirer Bruno Latour puts it, “gun-shy.” Unlike better-known French contemporaries Michel Foucault, Jacques Derrida, and René Girard, all of whom were his friends and colleagues at various points, Serres avoided pugilistic provocations in favor of allusive, digressive, lyrical inquiries into surprising subjects.
Serres’s body of writing is sprawling and eclectic but united by an interest in communication, mediation, networks, and connections. Statues pursues these same themes. Translated into English in 2015, the book argues that statues are humanity’s first media technology. According to Serres, the first statues are corpses, “before [which] every subject draws back: the dead body lies there, cutting out its space, larger lying down than standing, more terrifying dead than alive . . . stiff, hard, rigorous, coherent, consistent, absolutely stable, the first stone statue.” The cadaver, both an inert object and a human subject, intermediates between the here and the beyond. Mortuary customs, culminating in one of the earliest sciences—mummification—derive from the troubling encounter with this object.
The statue-fied body, according to Serres, becomes the first symbolic object by way of the “transubstantiation of life into sign.” In this way, “statues precede languages” and “produce hominity”: that is, they make humans human. By this same process, the distinctly human notion of “place”—somewhere designated for common habitation—also emerges, “to be defined . . . by the stone or the boundary marker beneath which the dead person lies. Cemetery, the first garden, necropolis, the first city.” The corpse’s solidity, and later its sculpted representation, grounded settlement. In many cultures, bodies were buried beneath the foundations of buildings to prevent their collapse. “Substitutions, substances, institutions,” Serres writes. “[E]verything comes out of death.” The organization of public space around statues is a reminder of these origins: the solidified dead still orient and stabilize a place.
Yet these continuities also conceal a fundamental rupture, which Serres identifies with the series of media revolutions by which text displaced statuary from its central role as a symbolic medium. “The religions of writing and speech have won so completely, have invaded space and our cultures so universally,” he writes, “that we no longer see their victory as the end of the crushing of the other zone, the one that’s forgotten, humiliated, left in silence and shadow.”
“Statues precede languages,” Serres argues, but with the capacity of text to give speech permanence, “these latter have buried them, just as the religions of the world destroy, with blows of stones and letters, the idolatries that engendered them.” Statues live on at the cost of demotion to a secondary role. During the periods of iconoclasm visited upon them in modern times, their archaism becomes too offensive to tolerate. But, Serres tells us, “iconoclasts’ fury against fetishes rings like a parricidal anger” because it is ultimately directed against our origins.
Most discussion of the recent furor against old statues pits historical preservation against the demands of the present. Serres’s account suggests that the iconoclasts actually get something right. Statues are not merely neutral records of the past: they are containers of violence and receptacles of death. In this sense, they should disturb us. Statues begins with a story from Flaubert’s Salammbô of a giant hollow metal statue of Baal into which sacrificial victims are placed, after which the effigy is placed on a fire and set alight. (The 1973 cult film The Wicker Man portrays a comparable ritual.) This horrifying image is representative because “every statue is . . . a black box whose secret walls envelop someone or something that they hide.” Last year’s protesters, in this sense, tore open such black boxes to reveal the bodies of victims.
For Serres, just as ruptures conceal continuities, oppositions hide similarities. He makes this point in a striking way in the first chapter of Statues, by juxtaposing the sacrificial idol of Baal with the Challenger space shuttle, which exploded in 1986, immolating seven crew members. The first seems to represent the primitive and barbaric, and the second the most advanced science; and yet a series of resemblances, he argues, allows for a systematic “translation” between them.
“The idol and the rocket,” he notes, are both “ingenious pieces of machinery” and are both also fiery tombs; both are projects of transcendence that seek to establish a channel of communication with the heavens and attempts at “mastery of our surroundings,” through technical expertise or ritual appeasement. In sum, the two objects are different but inhere secretly in each other: “[R]eligion is in technology; the pagan god is in the rocket; the rocket is in the statue; the rocket on its launching pad is in the ancient idol.” Serres’s uncanny translation between archaic idolatry and advanced technoscience exemplifies the strange connections that he sought in his work.
Philosopher Michel Serres’s book Statues may be the only philosophical treatise on the meaning of public monuments.
Serres likewise seeks to bridge the distance between icons and their assailants. Commenting on Nietzsche’s project of “philosophizing with a hammer,” he remarks that the “hammer” wielded by the iconoclast is “equivalent to the thing hammered”: “a hard and fashioned mass,” since if it were “less solid or dense . . . it would fly into pieces.” Likewise, “the stone thrown at the idol quickly becomes the idol itself.” Again, where others see contrast, Serres seeks underground connections: “In the Eternal Return of the thing to the thing and the hammer to the hammer, critique becomes magic, religion, fetishism; analysis changes to unanalyzed dogma.” Because all “our ideas come from idols,” they may always revert back into them.
Our species has replaced “hard sculpture” with “soft waves”: the coded information streams that dominate our media landscape. But the interplay of “hard” and “soft” media also goes back to our origins. It was figured initially, for Serres, in the opposition of statuary and music, one fixed on solid ground, the other floating across air. Serres’s project in Statues is to explore why statues remain a stumbling block and an essential landmark, even as their status is eroded by the proliferation of digital information.
The convulsions of 2020 culminated in destructive outbursts not only against statues but also against solid structures of all sorts, which were vandalized, smashed, and burned. At times, all this felt less like a repudiation of certain outmoded symbols than like a revolt against the solidity of the “hard” built environment itself on the part of people immersed in the “soft” media of the screen. Brick-and-mortar stores had been shuttered and overtaken by e-commerce as the virus chased us inside. The hard fixity of stone, brick, concrete, and bronze likewise yielded to viral outrage generated amid the flux of two-dimensional pixel space.
But the solid structures of the cityscape had already been succumbing to the prevalent logic of “soft waves” for some time before this. The demands of the digital content economy have reshaped urban spaces worldwide, turning many once-anonymous locations into preferred selfie backdrops. Unlike the fixed landmarks—many of them statues—built to punctuate our common geography, the new global itinerary of Instagram destinations emerges out of the logic of the attention economy, refashioning the city into a reflection of a digitized simulacrum. Recent transformations of public space explicitly respond to this demand.
If all this seems to signal a further melting of all that is solid into air, of hard sculpture into soft waves, it also sometimes returns us to the origins of symbolic media. Consider another recently installed New York monument: the Vessel, which, like so many statues of old, stands as a central point of reference for an area of urban settlement, albeit the inorganic, simulacral one of Hudson Yards. It is an object whose physical vacuity seems like a commentary on its status as nothing but a backdrop for the countless selfies that it would predictably prompt. Yet the vast emptiness of its structure has also made it something else: a tomb. As of this writing, the Vessel is closed to visitors because of the persistence of suicidal jumps from its labyrinthine weave of walkways. Regardless of the aims of its architect, it has become a statue of Baal, a devourer of life.
Such inadvertent reversions to archaic sacrifice illustrate what Serres was attempting to elucidate. He states that “a certain number of contemporary actions, behaviors, or thoughts repeat, almost without change, extremely archaic modes of thought or behavior.” But this recurrence owes as much to the advancement of the ancients as to our own primitivism. The contemporary fantasy of the self, released from solidity and evaporated into the digital ether, inherits ancient notions of the soul. The world remade into a selfie stage becomes a portal to the underworld. The deathly weight of the statue haunts our airy digital dispensation because the statue both anticipated its successor and persists within it.
In the haunted year of 2020, disease and death circulated as digital information. The genetic code of viral RNA that rapidly traversed the globe, eluding all controls and barriers, had a simulated counterpart in the models, graphs, charts, and maps that tried to track the shifting coordinates of a suddenly unrecognizable reality. As the virus and its representations spread among us, gluing us to our screens, we were enjoined to become sedentary—statue-like, transfixed like victims of Medusa by the serpentine peregrinations of the virus. To appease this vengeful new god, we also were asked to subject ourselves to new ascetic discipline and purification rituals. Yet this self-denial often seemed identical to indulgence in the pleasures of the screen, to whose temptations we had already been succumbing for some time.
What followed was a peculiar reversal. The abstraction of mass death by invisible infection was displaced by the singular spectacle of one man’s death, which spread virally in a sequence of grimly captivating images. The result was an astonishing mobilization that counteracted the uncanny stillness of previous months. Before George Floyd’s fatal encounter with police officer Derek Chauvin, he had fentanyl in his veins and Covid-19 in his respiratory system. Before his image became an icon worldwide, the traces of globalized biological and pharmaceutical circulation were within his body. An emblematic figure in many ways, Floyd became ubiquitous in the makeshift public art that appeared in cities nationwide. The location of his death became sacrosanct, a new pilgrimage site.
The new iconography: George Floyd in Union Square
Thus, the event that initiated the dismantling of statues worldwide also brought us back to the origins of the statue, as traced by Serres: the traumatic spectacle of the cadaver. The resulting unrest rattled the shaky foundations of a troubled regime, many of whose monuments had lost their symbolic potency before their dismantling, partly because that potency was grounded in an increasingly obsolete medium. To both his champions and his detractors, George Floyd seemed to have laid, through his death, the foundations of a new order.
Much has been said about the contradictory features of this new mode of power. Critics of woke capital and luxury beliefs note that a specific kind of concern for the downtrodden, particularly racial minorities, has become a requirement for entry into exclusive institutions and powerful corporations. The declaration of such concern increasingly legitimizes elite rule. But it is risky to regard this as merely a cynical ruse, or even as something entirely novel. Preserving life and containing death are responsibilities that the oldest rulers claimed. Today’s elite, too, requires its tutelary deities. As Serres would remind us, even as we are propelled forward into an uncertain future, we are always returning to our origins.
Monumental Ambitions
Catesby Leigh
Rodney Cook, Jr.’s Atlanta project seeks to reinvigorate American civic art.
Atlanta’s Vine City, a black neighborhood with a notable history but with many impoverished residents, has a new 16-acre park, situated in a floodplain where a calamitous 2002 storm ravaged dozens of houses that are now gone. Martin Luther King, Jr. and his family lived nearby, on Sunset Avenue. So did the civil rights activist Julian Bond and Atlanta’s first black mayor, Maynard Jackson. Sociologist and civil rights activist W. E. B. Du Bois also lived in the area. A cluster of historically black institutions of higher education is located a short distance to the south. Even so, the park is named for a dead white male. More on that shortly.
The park’s centerpiece is an elaborately landscaped pond and wetland that will provide stormwater retention. This nucleus includes fountains, one with a waterfall, and a stony channel resembling a creek bed. Pedestrian bridges loop this way and that, and paved paths cross the park, which rises gently up a grassy slope. Not far away, the gigantic, faceted roof panels of the new Mercedes-Benz Stadium, home of the National Football League’s Atlanta Falcons, loom like a surreal origami set-piece.
Rodney Cook Sr. Park has few trees—open vistas offer more security—but boasts a well-equipped playground, complete with a splash pad, along with two multipurpose athletic courts. And what is believed to be the largest maple tree in Fulton County still reposes on the western slope. Not far from that tree, Rodney Mims Cook, Jr., son of the park’s namesake, wants to erect a monumental column reaching a height of 95 feet, to be crowned by a 20-foot-tall bronze of Tomochichi, the Yamacraw chief who welcomed James Oglethorpe and his companions to what would eventually become the great state of Georgia. A statue of Cook Sr., a Republican businessman and politician who actively supported the civil rights movement when it counted, would stand on the column’s pedestal, a 10,000-square-foot building housing collections including King’s library. Numerous statues of civil rights luminaries are to be situated elsewhere around the park.
Endowed with Southern charm, chiseled good looks, and a distinguished pedigree, the 65-year-old Cook Jr. is one of a kind—a monument impresario. He’s into building classical monuments that lift the spirit, not the more fashionable, anti-monumental sites of mourning like the Vietnam Veterans Memorial in Washington, the 9/11 Memorial at Ground Zero, or Montgomery, Alabama’s new National Memorial for Peace and Justice, commonly known as the “lynching memorial.” Cook took a passionate interest in architecture as a child. His interest flourished under the tutelage of another Atlantan, Philip Trammell Shutze (1890–1982), one of the finest classical architects the South has ever produced.
As a teenager, Cook played a leading role in the fight to save Atlanta’s old Fox Theater (1929), a stupendous multipurpose venue featuring a movie palace designed as an Arabian Nights fantasia, with a nod elsewhere in the complex to the pharaonic splendor of Luxor. His numerous civic initiatives since then include two important Atlanta monuments: the World Athletes’ Monument (1996) and the Millennium Gate (2008). Both trace their descent to ancient prototypes, the former to the Choragic Monument of Lysicrates in Athens, the latter to the triumphal Arch of Titus in Rome. (The Tomochichi column would itself derive from the columnar monuments of antiquity.) Cook’s local and international connections made these projects possible.
The Tomochichi statue in front of the Millennium Gate
With good reason, former president Donald Trump appointed him to the Commission of Fine Arts prior to leaving office in January 2021. Since 1910, the seven-member CFA has reviewed architectural and commemorative designs for Washington’s monumental core. In late March, however, Cook became the fifth Trump appointee to be summarily dismissed by the Biden White House; no previous administration has taken such action.
During a visit to the park last August, I found a bronze figure of the late congressman John Lewis, the first statue that Cook’s National Monuments Foundation (NMF) installed there, waving as he looked east over the pond toward Atlanta’s skyscraper skyline. The civil rights crusader sports the Presidential Medal of Freedom that he received from President Obama. Weeks later, the bronze statue of Tomochichi that is intended to crown the park’s monumental column was unveiled at its temporary site in front of the Millennium Gate. And on March 10, a statue of Andrew Young, the King confidant and former Atlanta mayor, congressman, and diplomat, was unveiled in the park in conjunction with the celebration of Young’s 90th birthday.
Eventually, Cook plans to transform a wooded slope at the park’s southwestern corner into an acropolis harboring a “peace pantheon” and a peace institute. The former, a two-story, $10 million classical structure, would offer “thinktank incubation space” for international peace initiatives. It would be surmounted by an open-air circular shrine containing statues of Nobel Peace Prize winners with Georgia ties: King, former president Jimmy Carter, Woodrow Wilson (who grew up in Augusta), and Theodore Roosevelt, whose mother, Martha, was a Georgian—as well as the Dalai Lama and the late South African archbishop Desmond Tutu, on account of their professorships at Atlanta’s Emory University. Behind it, a much larger, $60 million building would house an institute for the pursuit of international peace named for Young, along with a noted collection of African-American art. Cook envisions the park as both anchoring a civil rights–themed historic district and rebranding Atlanta as a global center for peace. The National Park Service acquired the King family home, located a block from the park, in 2019 and is expected to open it to the public before long.
Many doubt that Cook can pull off his grand plan. And some don’t want him to pull it off. “Statues in a park?” the head of the local NAACP chapter remarked a few years ago. “Birds poop all over statues.” Though the city council unanimously approved his park plan in 2020, Cook has yet to secure a ground lease agreement from city hall.
That problem appears surmountable. Still, Cook Park is a far bigger project than any that the NMF has previously undertaken. The foundation’s fund-raising efforts are running years behind schedule, and it’s waging the campaign at a time when the generation that vanquished Jim Crow is fading away. Young, for whom the peace institute is to be named, defeated the senior Cook in a 1972 race for a House of Representatives seat. The “Atlanta Way” that Cook and Young epitomized and that Cook Jr. espouses—the hashing out of consensus on sensitive matters by the city’s white and black leaders—may have spared the city the race riots that ravaged cities like Los Angeles half a century ago; but in today’s racially polarized climate, it doesn’t get the respect that it once did. During the 1960s, the senior Cook, who died in 2013, served concurrently as an Atlanta councilman (or alderman, as the position was then known) and member of the Georgia legislature, an arrangement no longer permitted. His civil rights advocacy earned him a KKK cross-burning on his front lawn. For that, fortunately, his memory still commands respect in an overwhelmingly blue city like present-day Atlanta, despite his pivotal role in shaping Georgia’s modern Republican Party by mentoring key figures such as the late U.S. senator Paul Coverdell and former House speaker Newt Gingrich. Another challenge that the junior Cook faces is that woke politics—as demonstrated by Montgomery’s Peace and Justice memorial, whose centerpiece is a pavilion featuring hundreds of identical, six-foot-tall rectangular blocks of rust-tinted steel hanging from the ceiling—is apt to reinforce anti-monumental trends in commemorative design, and not just in the South. The impact isn’t limited to what gets built, of course; it also holds for what gets vandalized or removed. Cook’s focus on cultural continuity in public art and architecture is decidedly countercultural.
One political threat to Cook’s park plan at this juncture is Native American hostility to the Tomochichi statue, which the NMF commissioned at a cost of well over $400,000. Tomochichi has traditionally been regarded as a benevolent figure—Georgia’s answer to the Plymouth Colony Pilgrims’ loyal friend Massasoit, the Wampanoag sachem. Young has referred to Tomochichi as a forerunner of King and other Georgia civil rights leaders. But Tomochichi broke away from his ancestral Muscogee (Creek) tribe before entering into a pact with Oglethorpe, and Muscogee advocates have denounced him as a sellout who contracted to return escaped slaves to the colonists. Objections have also been raised to the youthful, muscular Tomochichi figure’s seminudity, as though it were a degradation rather than a heroic attribute. Michael Julian Bond, a city council member and son of the civil rights leader, declared in February that the statue’s inclusion in the park would have to be “reconsidered.”
Cook says that he has “almost” enough money in the bank to erect the $11 million column for which the statue was created. Whether that statue wins acceptance or not, a statue-crowned column at Cook Park will likely get built. The odds on his acropolis plan—the peace pantheon and institute—appear longer. But Cook’s vision amounts to a remarkably salutary alternative to the destructive “urban renewal” schemes of yesteryear. It would add new cultural and economic value to Vine City, where public-private initiatives will seek to limit displacement of current residents as a result of anticipated gentrification.
A quarter-century ago, Cook instigated and oversaw the design competition for the athletes’ monument—also known, thanks to its sponsor, as the Prince of Wales Monument. Cook has known Prince Charles, a longtime advocate of traditional architecture, for decades. Commemorating the 1996 Olympics, which took place in Atlanta, the monument is situated at a difficult site, a triangular spit of parkland formed by the junction of two major avenues, with an expressway verging off from one of them. The circular monument, based on the winning design by Russian-born architect Anton Glikin, is only 55 feet tall. It consists of a high base with heavily rusticated bands of limestone surmounted by five columns (representing five continents) enclosing a cauldron, all crowned by five bronze Atlas figures holding aloft an openwork bronze globe. The base is articulated with elegant aedicules—stylized window motifs—facing north and south. They contain black granite panels inscribed with the names of supporters of the project. Akin to the larger, similarly circular Soldiers and Sailors Monument (1902) in Manhattan’s Riverside Park, the athletes’ monument performs the vital civic function of endowing an otherwise amorphous, visually cluttered, but important urban intersection with a redeeming artistic focus.
Cook subsequently attempted to sell Washington, D.C., on the idea of erecting a triumphal arch at Barney Circle, where Pennsylvania Avenue approaches the Anacostia River in the city’s southeast quadrant, to celebrate the new millennium. It was an excellent idea because Washington lacks such an arch, and Barney Circle is a grassy nullity. The ambitious plan that Cook offered included the monument; roofed colonnades flanking the circle; a new, appropriately formal, entrance to the historic Congressional Cemetery immediately to the north; and elegant bastions on each side of the avenue where it meets the John Philip Sousa Bridge over the river, with fountain terraces and steps down to the riverfront. Cook says that the plan had the support of Senators Coverdell and Daniel Patrick Moynihan but that it was consigned to oblivion as a result of 9/11.
So he took his idea home. The timing was propitious. On a brownfield site formerly occupied by a huge steel foundry, a tightly woven, 138-acre mixed-use complex of offices, shopping outlets, houses, and apartments was under construction in Atlanta’s Midtown. The main artery through the Atlantic Station development is 17th Street, a six-lane avenue running east–west. Cook got his 101-foot-tall arch, the Millennium Gate, built just where the street swerves to the southwest, with eastbound and westbound lanes parting at the point where Tomochichi now stands. The arch’s design originated with a Barney Circle competition in Washington and was refined by a British architect, Hugh Petter. As you approach the structure from the high-rise business district to the east, it presents itself at a cranked angle that makes its mass much more visible—much more monumental—than if you were approaching it dead-on.
At the top, a handsome penthouse in the form of a low-slung, temple-like pavilion of bronze and glass runs athwart the arch. The pavilion, with its slender Corinthian pilasters, is a subtle but elegant touch, partially screened by the limestone arch’s parapet. The attic panel below is carved in relief with two large, crossed palm branches (a variation on the familiar crossed-swords motif). Farther down, the arch’s Latin frieze inscription reads: “In the year 2000 the American people celebrate with this monument all that has been achieved in peace since the birth of Jesus Christ.” In front of each of the arch’s broad piers, a pedestal is surmounted by a black steel tripod. Cook works out of the penthouse, though it is available for special events. The views that it offers, day and night, are spectacular.
Just before the parting of the 17th Street lanes, and fronting the pylons on each side of the street, Scottish sculptor Alexander Stoddart’s seated, hieratic female figures of Justice and Peace, each accompanied by a more animated male youth, face due east, marking the entrance to the Millennium Gate precinct. On the other side of the arch, the pavement extends back to a balustrade, with steps to either side leading down to an oval-shaped, sunken terrace lawn flanked by colonnades and, outside them, clusters of fir trees. The trees provide a screen for private gatherings. The lawn opens onto a park nestled between the east- and westbound 17th Street lanes. This park, likewise oval-shaped, features a man-made lake with a few fountain jets. It is another locus of stormwater retention, and it is girded by maples and Japanese cherry trees, with low-rise, hipster-postmodern apartment buildings framing it on each side. At the park’s far end, the street’s east- and westbound lanes reunite. The Millennium Gate thus crowns an extraordinarily cohesive and successful composition involving architecture, sculpture, and landscape design that vastly enriches its urban setting.
One end of the sunken terrace’s oval lawn abuts a massive stone arcade, leading to the foyer level beneath the arch. This level features an enfilade of galleries, designed by Cook himself, devoted to the history of Georgia and its capital city, with display cases containing historical artifacts and memorabilia. Scroll-like canvas drops on the walls provide a historical narrative, commencing with the settlement period. The Millennium Gate Museum includes a high-tech interactive gallery that lets visitors time-travel around Georgia—stopping, for example, at one of Savannah’s historic squares to see it as it looks now and as it looked 200 years ago. The museum also offers thematically wide-ranging temporary exhibitions. Pre-pandemic, it drew as many as 200,000 visitors annually.
Not long after the Millennium Gate opened, Cook turned his attention to Vine City, where a great-great-uncle and onetime Atlanta mayor, Livingston Mims, had commissioned Frederick Law Olmsted’s office to design a park extending over a full city block. Cook says that the park, which opened in 1899, was Atlanta’s first racially integrated playground. Mims Park, located a few blocks from the Cook Park site, was replaced during the 1950s by a school. It was his father, Cook adds, who got the idea of re-creating it, though the original plan to name the new park after Mims was eventually scrapped because he had served as a Confederate army officer.
In 2012, the NMF got the go-ahead from the city to build the park to its designs. But the foundation’s fund-raising efforts lagged. Four years later, the nonprofit Trust for Public Land was charged with overseeing, and raising some of the money for, the $45 million park project, which wound up being funded mainly by the city. The design was produced by a corporate firm, HDR, which had previously planned a somewhat similar park in another part of town. With its curlicued pond and wetlands layout and modernistic bridges and railings, Cook Park as built is a far cry from the more formal, traditional landscape that the NMF envisioned. What Cook is proposing amounts to a monumental overlay.
The antithesis of Cook’s monumental sensibility can be seen at the Peace and Justice memorial, perched on a low hill overlooking downtown Montgomery. Designed by Boston’s MASS Design Group, it opened in 2018. The memorial is the brainchild of Bryan Stevenson, a Harvard-educated African-American attorney whose Equal Justice Initiative and its staff of lawyers are credited with saving scores of Alabama prisoners from execution. EJI has exhaustively researched the history of lynching, identifying some 4,000 cases in the South between 1877 and 1950, a quarter of them previously undocumented. (Its findings appeared in a 2015 report that makes for disturbing reading.) EJI also has identified 400 other cases of lynchings of blacks in other states during that period and more than 2,000 cases that took place during Reconstruction. Each of the memorial pavilion’s more than 800 blocks of rusting COR-TEN steel corresponds to a U.S. county where one or more lynchings occurred. Names of the counties and the victims and the dates when they were murdered are inscribed on each block.
Steel blocks commemorate lynching victims at the National Memorial for Peace and Justice in Montgomery, Alabama.
The square, open-air pavilion rings a grassy, sloping courtyard and is itself situated within a grassy landscape. As you walk through its four galleries (one on each side of the pavilion) with their rows of steel blocks, the wood-plank floor under your feet gradually descends while the blocks, all suspended on poles from the ceiling, gradually rise from the floor and eventually hang above you. A long inscription on a concrete wall at the bottom with a thin film of water cascading down its surface pays tribute to the unknown victims. Long rows of bleacher-like seating face it. In a plaza outside the pavilion, duplicates of the blocks are neatly stacked. EJI wants communities where lynchings occurred to claim them for public display, but the Montgomery Advertiser reported in late February that not a single one had moved.
Cook’s civic initiatives include the World Athletes’ Monument (1996).
The idea of descent to some kind of moment of reckoning became an anti-monumental meme, thanks to Maya Lin’s chevron-shaped Vietnam Veterans Memorial (1982). You descend along its wings to the vertex, where the walls reach a height of 11 feet. It is here that the quantity of names of the dead and the magnitude of loss inflicted by the war are intended to register most powerfully. At the deconstructionist architect Peter Eisenman’s Memorial to the Murdered Jews of Europe (2005) near Berlin’s Brandenburg Gate, 2,711 gray concrete blocks, arranged in a grid, sprawl over five and a half acres. The uneven cobblestone pavement slopes downward as the visitor ventures into the grid. The blocks rise just a few inches from the ground on the memorial’s perimeter but gradually morph into ominously tilting slabs as tall as 15 feet.
The Berlin Holocaust memorial’s influence on the lynching memorial, with its minimalist plethora of hanging blocks, is obvious. Both are guilt memorials. And in both venues, quantity winds up displacing quality—displacing, in other words, the spatially compact, highly resolved symbolic resonance that makes classical monuments like Atlanta’s athletes’ monument and Millennium Gate tick. It’s the same anti-monumental deal, mutatis mutandis, with the 9/11 memorial in lower Manhattan, with the titanic quantities of water falling into the gigantic abysses where the Twin Towers once stood.
The lynching memorial’s landscape includes figurative sculptures addressing the themes of the African slave trade, the mid-century struggle for civil rights in Montgomery, and the criminal-justice system’s mistreatment of African-Americans. The memorial’s many textual panels give the white-guilt theme quite a workout. Panels near the entrance to the memorial grounds say that Africans were “kidnapped” for enslavement—like Kunta Kinte in Alex Haley’s famous epic, Roots—without mentioning that the overwhelming majority were, as Harvard scholar Henry Louis Gates, Jr. noted in a 2010 New York Times commentary, “enslaved by Africans and then sold to European traders.” As Gates continued: “The sad truth is that without complex business partnerships between African elites and European traders and commercial agents, the slave trade to the New World would have been impossible, at least on the scale it occurred.” A nearby panel states that the nearly 6 million blacks who migrated to the North between 1910 and 1970 “fle[d] racial terror as traumatized refugees” without mentioning economic motivations that were surely more important, especially after the mechanization of agriculture got under way in the 1930s.
The memorial, in short, is the centerpiece of EJI’s larger commemorative agenda of drawing a line from slavery to lynching to systemic racism in law enforcement. The title of EJI’s multimedia Legacy Museum exhibition in downtown Montgomery is From Slavery to Mass Incarceration. (The memorial and the original museum, which moved last fall into more spacious quarters—allowing for a larger, more elaborate, exhibition—reportedly cost some $20 million, about the same as the NMF’s Atlanta arch and museum.) No doubt the treatment that young black offenders receive at the hands of the criminal-justice system deserves serious scrutiny. But when you read about 104 people being shot in Chicago over last summer’s long July 4 weekend alone—almost all the incidents taking place in black or Hispanic neighborhoods, with none of the victims shot by police (though two policemen were shot)—you get to thinking that the Legacy Museum is missing something where the abuses suffered by today’s blacks are concerned. The museum’s overarching narrative—addressing racial injustice throughout the nation, not just the South—jibes to perfection with Black Lives Matter propaganda.
A cynic might say that the Millennium Gate’s exhibit of Georgia history amounts to propaganda. It includes dark incidents like Atlanta’s horrific race riot of 1906 and the lynching a few years later of Leo Frank, an Atlanta Jew unjustly accused of murder, but it unquestionably emphasizes the positive. That’s not the same thing, however, as indoctrination. The interlocked themes of black victimhood and white guilt, by contrast, are so relentlessly woven into EJI’s commemorative program that indoctrination appears to be the aim. And that program has been a success. Hundreds of thousands of people have visited the memorial and museum—a huge development for a city with a population of about 200,000.
Yes, simplistic narratives concerning white supremacy and systemic racism can have much uglier ramifications than a minimalist guilt memorial. A fitting synecdoche for 2020’s plague of statuary defacement by mobs of BLM and antifa enragés, and of statuary removals by spineless public officials pleading “public safety” in the aftermath of George Floyd’s killing, might have been the once-majestic granite pedestal of the equestrian statue of Robert E. Lee on Richmond’s Monument Avenue. A legally dubious state supreme court decision issued in September cleared the way for the statue’s removal. But the obscenity-riddled pedestal remained in place until year’s end, an open sore to delight the woke and distress many families living on or near the avenue. Only after Governor Ralph Northam agreed to transfer the entire state-owned monument to the city of Richmond was the pedestal finally dismantled and hauled away. Lee Circle, once the crown jewel of one of the nation’s finest streets, is now desolate.
Rodney Cook’s monumental initiatives are particularly valuable at a time when American civic art is being dumbed down—or worse—by reductive aesthetics and political fanaticism. Whether Tomochichi or another historical figure winds up crowning his column, Cook Park’s western slope is well suited to a monument of this scale, and placing an observation deck at the foot of the crowning statue should make the column, as well as the park, a very attractive destination, perhaps providing leverage for the development of the acropolis complex that Cook envisions.
Working within a demanding tradition that imposes objective standards of achievement is a tall order. Cook has met the challenge with remarkable success. With the Tomochichi column’s realization, he will have accomplished a monumental trifecta in his hometown—and that would represent a significant feat in the annals of American civic art.
Plural Like the Universe
Brian Patrick Eha
Brilliant, restive, alternately depressed and exhilarated, Portuguese poet Fernando Pessoa had second thoughts about everything.
Fernando Pessoa, the Portuguese modernist who, in many respects—and in many aspects—is a fitting poet for our identity-obsessed age, was at least four poets. His best verse, and much of his prose, entered the world variously under the sign of the pastoralist Alberto Caeiro, the classicist Ricardo Reis, and the world traveler Álvaro de Campos, as well as that of Pessoa himself, the progenitor of this powerful triad that he dubbed “heteronyms.” Too complexly realized to be mere pseudonyms, too individual in their tastes, temperaments, philosophies, and flashings-forth of genius, they were the high triumvirate among the more than 100 literary alter egos that Pessoa invented in his lifetime, many coming to light only after his death. “Be plural like the universe!” he commanded himself. Walt Whitman—one of his greatest influences—may have contained multitudes, but Pessoa sent his panoply of inner selves flocking out into the world, where they unfolded rich psychologies, personal convictions, and private obsessions. The heteronyms argued with one another in print, at times even taking issue with Pessoa himself.
Pessoa’s unstable identity reflected the upheaval of his time, as well as the disruptions of his own early life. He was born in 1888 in Lisbon, the capital of a decadent, declining power whose ruling family had sat on the throne since 1640. Even for left-wingers, colonialism was synonymous with national pride: Portugal’s economy had long depended on wealth extracted from Brazil, and its monarchy, at the time of Pessoa’s birth, laid claim to a vast swath of lightly occupied, poorly administered colonial territory stretching the entire breadth of the African continent, from what is now Angola in the west to Mozambique in the east. This was the decaying empire whose glory days Luís de Camões, Portugal’s national poet—whom Pessoa aimed to outdo—had extolled in his Virgilian epic The Lusiads.
By the time he died, in 1935, Pessoa had lived through a dictatorship, a republican revolution, the end of the Portuguese monarchy, the Great War, and the first several years of the Salazar regime. Despite writing at length on imperialism, decadence, and other cultural topics, he remained allergic to the “vocabulary of social responsibility.” Even his close friends had trouble pinning down his views. As the critic Harold Bloom remarked, “Pessoa can be read as a political poet only if you start with the good morning’s conviction that everything is political, including a good morning.” But he wasn’t insensitive to the world around him. His three major heteronyms emerged in 1914, the dawning of World War I, as though welling up from the fissures of a fractured way of life. More than any of his contemporaries, Pessoa personalized the upheaval of his time. Each disruption occasioned a seismic shift of self, as Ricardo Reis—“a Greek Horace who writes in Portuguese,” according to Pessoa—tells us:
Fate frightens me, Lydia. Nothing is certain.
At any moment something could happen
To change all that we are.
Brilliant, restive, alternately depressed and exhilarated, Pessoa had second thoughts about everything—and third and fourth thoughts, too. After dropping out of college, he cadged money from relatives and friends, borrowed against his mother and stepfather’s investment bonds, and supported himself by writing letters in English and French for Portuguese businessmen, while pursuing a dizzying array of literary projects and business schemes, most of which never got off the ground. His life in Lisbon was hectic—he made the rounds of literary cafés—but largely uneventful. Having diagnosed himself with “a mild sexual inversion,” he never married, and likely remained a virgin. He was besotted not with men or women but with language, enamored of his own alchemical creative powers. Endlessly fecund, he seemed to be at times a spectator of himself, “the meeting-place of a small humanity that belongs only to me.”
Fernando Pessoa (1888–1935): even his close friends had trouble pinning down his views.
They belong to the world now, Pessoa’s invented selves. In the dramatis personae that opens Pessoa, Richard Zenith’s mammoth new biography of the poet, we learn, charmingly, that Reis “immigrated to Brazil in 1919, and was still living in the Americas, perhaps in Peru, when Pessoa died in 1935”—the heteronym surviving his maker. Pessoa’s first biographer, the Portuguese author João Gaspar Simões, believed that the “exotic” appeal of the heteronyms would fade—but, in fact, it has only grown stronger with time, the poet’s self-partitioning an ever more apt allegory for the obsessive and self-obsessed way in which so many of us craft our digital personae, personal brands, and public-facing lives. Pessoa’s inventions, sprung from their captivity in the large wooden trunk he left behind, stuffed with more than 25,000 papers, have unquestionably outlived him.
But for every “full-fledged soul” and perfect piece of writing he produced, there are dozens of fragmentary works and pseudo-authors who exist in little more than name. These cast-off limbs—“rubble [from] a kind of literary Pompeii,” Zenith calls them—put me in mind of those wonderfully expressive hands by Rodin on display at the Metropolitan Museum of Art: radiant with genius, but incomplete. Even Pessoa’s greatest prose work is a tumulus of shards. Early on, in November 1914, he told a friend, despairingly, “My state of mind compels me to work hard, against my will, on The Book of Disquiet. But it’s all fragments, fragments, fragments.” Like his contemporary T. S. Eliot, Pessoa went on to the end of his life shoring up fragments, though not exactly against his ruins.
Born to a romantic, literary mother and a civil servant father who moonlighted as a prolific music and theater critic while slowly dying of tuberculosis, Pessoa grew up a sensitive, withdrawn, yet independent child. Words were his playthings, though he still “enjoyed the good health of understanding nothing,” as he later wrote. One thing he may have struggled to understand during those early Lisbon years was the disruptive presence of his paternal grandmother, the half-demented Dionísia. Prone like her namesake, the Greek god, to fits of madness, she furnished the future creator of so many alternate selves with early evidence, according to Zenith, “that multiple personalities can dwell in one and the same human body.”
At age five, Pessoa suffered the deaths, six months apart, of his father and his infant brother, Jorge. Then he watched, bewildered, as his mother whipsawed from grief to giddy elation. Just days after losing her son, she met a charming Portuguese navy captain: the attraction was electric, and they soon married. Here was another lesson for the budding Pessoa. “Grief doesn’t last because grief doesn’t last,” the heteronym Álvaro de Campos tells us in a poem about a newly bereaved mother. The mother who loses her son is a recurring figure in Pessoa’s mature writings, along with an awareness of how quickly the loss can lose its sting. Personalities, emotions, marriage, widowhood—it seemed that nothing was stable or endured for long.
In Pessoa’s childish imagination, reality itself grew unstable: daydreams supplanted the waking world. Egged on by a doting uncle with a weakness for make-believe, the future poet began to people his solitude with fictitious individuals—at least two of whom, Captain Thibeaut and the Chevalier de Pas, he remembered for the rest of his life as having been utterly real to him, “fathomed to the depths of their souls.” This dreamy habit only intensified in adolescence. The lonely boy’s desire to surround himself “with friends and acquaintances who never existed” prefigured the grand fakery of a literary career in which he would conduct interviews with himself, one heteronym picking another’s brain. Years later, Pessoa would play with his own selfhood as he had once played with imaginary friends: “I unwind myself like a multicoloured skein, or I make string figures of myself, like those woven on spread fingers and passed from child to child. . . . Then I turn over my hand and the figure changes. And I start over.”
Starting over was what the chameleon-like boy and his mother did in Durban, South Africa, where her new husband assumed the post of Portuguese consul. The largest city in the British colony of Natal, Durban boasted efficient public transportation, a public library, a botanical garden, literary societies, and other trappings of civilization, including the rigorous convent school where Pessoa was promptly enrolled. Forced to start the five-year primary school curriculum over from the beginning, and in a new language, he finished in just three years, receiving First Prize in both English and Latin as well as the award for all-around academic excellence. In high school, he devoured the prose of Thomas Carlyle and wrote verses emulating Milton and the English Romantics. Pessoa returned to Lisbon for good in 1905, but his exposure to Anglo-American literature proved decisive.
The most crucial influence was Whitman. The American poet, Zenith writes, taught Pessoa “how to open up, feel everything, be everything, and sing.” The experience of reading Song of Myself made possible the sudden emergence, on March 8, 1914, of his first true heteronym, a pastoral yet unsentimental poet named Alberto Caeiro:
I am a keeper of sheep.
The sheep are my thoughts
And my thoughts are all sensations.
I think with my eyes and my ears
And with my hands and feet
And with my nose and mouth.
A rush of poems poured from Pessoa’s pen in this astonishing new voice. Ostensibly lacking in formal education, Caeiro was nonetheless a keen observer, “newborn with every moment / To the complete newness of the world.” It was as though he had distilled the antidote to his own overwrought intellectualism:
I lie down in the grass
And forget all I was taught.
However, as Thomas Merton, Caeiro’s first major English translator, noted, these poems have a touch of self-consciousness. It is as though the world in which the Galician poet declares himself an “Argonaut of genuine sensations” were not the everyday world but some imagined highland of the sun, where things dwell in accurate light and cast clean shadows on the eye. Caeiro is more cerebral than Whitman, who had likewise wondered at the material world and refused to offer final answers:
A child said What is the grass? fetching it to me with full hands.
How could I answer the child? I do not know what it is any more than he.
Pessoa dreamed of launching himself as an English poet in his own right. He wrote dozens of sonnets, publishing 35 of them as a chapbook, and sent poems to the Poetry Society in London. (They were ignored.) Often, he attributed his English works to one of his other selves. Incredibly, Pessoa’s last name means “person” in Portuguese, as well as “persona.” Seemingly taking the hint, he punned with the names of his English alter egos, too—each had a distinct signature and notebooks of his own.
First came Charles Robert Anon, who published a poem in a Durban newspaper in 1904. He was superseded, around 1906, by Alexander Search, who claimed authorship of more than 100 poems, a short story, and various essays. Zenith characterizes Search as “a Platonic or transcendent version of Pessoa”—a Shelleyan idealist questing after truth, with a head full of philosophy and enlightened humanism. In other words, Pessoa’s most fervent spiritual or metaphysical inquiries in English were conducted by an alter ego named A. Search. Years later, Caeiro, again as if reacting to his maker’s native bent, declared this search pointless: “Things have no meaning: they have existence. / Things are the only hidden meaning of things.”
Zenith’s biography takes flight whenever it immerses us in the Pessoan imagination and tends to flag when it turns political or sociological. The Durban section gets bogged down in passages about the living conditions of native Africans and Indian immigrants in Natal, as well as asides about the “racist division of labor” that they were part of. The index has a two-page entry for “blackface.” Zenith’s book having been published in 2021, it was perhaps inevitable that some of its 937 pages (not including the prologue and back matter) would be devoted to convicting Pessoa of racism and misogyny wherever possible—though Zenith in the role of judge and jury has clemency enough to acknowledge that such attitudes, never central to Pessoa’s genius, had “shallow roots” and were eventually outgrown as he matured. A biographer, having promised us a portrait of the man, can be forgiven for describing his outer garments as a clue to the essential self, but it’s another thing to spend thousands of words on the pedigree and life history of the subject’s tailor and on the labor practices of the mills that produced the cloth out of which said garments were made.
The essential self lies elsewhere, and what is best in Pessoa transcends politics. Yet Zenith devotes a whole chapter to Gandhi on the thin pretext that the older man was a practicing lawyer and budding civil rights activist in Durban while Pessoa was a student in primary school. It’s true that Pessoa admired Gandhi later in life—mainly for his asceticism—but Zenith doesn’t stop there. He closes a disquisition on the British treatment of Indians as second-class citizens this way: “All of which no doubt seemed to Fernando, the stepson of a European diplomat, like the natural order of things.” Rare indeed is the biographer who would feel compelled to round out his portrait of the artist as an eight-year-old by depicting that child as a representative of white supremacy.
“If there’s one thing I hate, it’s a reformer,” writes Pessoa in The Book of Disquiet, defining this type as “a man who sees the world’s superficial ills and sets out to cure them by aggravating the more basic ills.” Call him a reactionary, if you like; one searches his oeuvre in vain for a social program. Roman Catholic by birth, he was a spiritual seeker and dabbler in the occult, obsessed with astrology, and a thoroughgoing skeptic—part of a generation “that inherited disbelief in the Christian faith and created in itself a disbelief in all other faiths,” which presumably would include most of the secular dogmas in which our media and universities today catechize the faithful. In one of his English poems, Pessoa locates God “Between our silence and our speech, between / Us and the consciousness of us.” Religious curiosity and metaphysical concerns crop up frequently in his work, whether attributed to a heteronym or not. Even Caeiro, whom Pessoa dubbed an “atheist St. Francis of Assisi” and who denies any reality beyond material things, invokes God—if only to say that the deity is largely beside the point:
To think about God is to disobey God,
Since God wanted us not to know him,
Which is why he didn’t reveal himself to us. . . .
Let’s be simple and calm,
Like the trees and streams,
And God will love us, making us
Us even as the trees are trees
As though welling up from a broken world, Pessoa’s three major heteronyms emerged in 1914, the dawning of World War I.
Religion, for Pessoa, was an illusion without which “we live by dreaming, which is the illusion of those who can’t have illusions.” His dreams were, first and foremost, about self-invention, self-division, self-multiplication. His inability to believe in the triune God seems wedded somehow to his endless unfolding of new personae. Álvaro de Campos sets intellectual uncertainty beside the wish to be someone else:
Every day I have different beliefs—
Sometimes in the same day I have different beliefs—
And I wish I were the child now crossing
The view from my window of the street below.
Just so, Pessoa’s detachment from society gives rise to the impulse to invent his own society. In his static drama The Mariner, a character tells of a shipwrecked man, who, finding it too painful to recall his former life, invents an imagined past, a fictitious homeland, the made-up people and geography and events of which gradually supplant his actual memories. Pessoa, in The Book of Disquiet, longs to create in himself
a nation with its own politics, parties and revolutions, and to be all of it, everything, to be God in the real pantheism of this people—I, to be the substance and movement of their bodies and their souls, of the very ground they tread and the acts they perform! To be everything, to be them and not them!
An infinite expansion and elevation of the self, so that it would exist as both deus and demos, bringing about a solipsistic kingdom of heaven on earth, though this nation—with its “parties and revolutions”—would be fractious, disputative, rabble-rousing. Among friends, Pessoa had a “fondness for ardently defending a certain idea one day and then attacking it the next, with equally impassioned arguments,” Zenith writes. While the more romantic modernists sought an “unfractioned idiom” (as the American poet Hart Crane put it) with which to mount their raids on the inarticulate, Pessoa’s own idiom was endlessly fractionated, full of tricks and evasions, enriched by philosophies and ways of seeing that he practiced for the length of a poem and no longer. Rather than try to integrate his disparate drives into a cohesive whole, he heightened the contradictions. He produced new selves as if by cellular mitosis and gave them independent life.
Among these selves was the “semi-heteronym” Bernardo Soares, the supposed author of Pessoa’s unclassifiable prose masterpiece The Book of Disquiet. This “factless autobiography,” as Pessoa/Soares calls it, was first published in 1982 (47 years after Pessoa’s death), but subsequent editions have enlarged and reordered its contents, about which editors and scholars disagree. A nonbook of which no original exists, begun in 1913 and consisting of irregularly dated entries composed intermittently over the course of 20-odd years, some handwritten, some typed, with no definite order or overarching schema, The Book was nonetheless an astonishing discovery. Few posthumous works have caused such a dramatic reevaluation of their author’s achievement.
“I make landscapes out of what I feel,” Soares writes. The Book of Disquiet puts you midway along the journey of another man’s life, lost in the dark wood of his interiority. But whose interiority, exactly? The Book’s heteronymic authorship changed over time; but ultimately, Pessoa laid it at the feet of his invention Soares, an assistant bookkeeper who lives in a rented room on the Rua dos Douradores and writes in his spare time. Soares espouses a philosophy of inaction, lives in his imagination, and, at times, can view his fellow Portuguese only as “an alien tide of living things that don’t concern me.” Less individuated than Caeiro, Reis, and de Campos, Soares is a sort of pared-down Pessoa, possessing his irony but not his humor. His semifictional diary is a kind of library in utero; many of the roughly 500 passages feel like the seeds of unwritten books, sorties where a more stolid writer might have launched entire campaigns.
Written over half a lifetime, The Book discloses an unwieldy profusion of styles and genres, from ethereal dream scenes and prose poems to clear-eyed confessions and cultural observations, sociological speculations, aesthetic maxims, and aphorisms worthy of Kafka. Even if you aren’t as dreamy or passive as Soares, you know what he means when he says that he is “suffering from a headache and the universe.” Paradise, for Soares, is eternal stasis, everything in abeyance: a world in which “the same moment of twilight forever paint[s] the curve of the hills,” a life that resembles “an eternal standing by the window”—because, even in paradise, he imagines himself as alienated, an observer at one remove from the scene. “The Book of Disquiet never ceased being an experiment in how far a man can be psychologically and affectively self-sufficient, living only off of his dreams and imagination,” Zenith tells us. “It was an extreme, monomaniacal version of Pessoa’s own, essentially imaginative way of living life.”
It was probably also a coping mechanism. The world of daydreams is one where mothers don’t plunge suddenly into obsessional love affairs and where little brothers don’t expire in infancy. Tedium besets Soares, but tedium is a small price to pay. “The fictions of my imagination . . . may weary me, but they don’t hurt or humiliate,” he says. “They never forsake [me], and they don’t die or disappear.”
Twenty-first-century America, with its cult of action and positive thinking, would hardly know what to make of the dreamy, ineffectual Soares, were it not that the narcissistic sublime (or a degraded facsimile) has become our dominant cultural mode. Then, too, at a time when some individuals claim to be incapable of settling on a single gender, much less any other unambiguous identity, we are primed to accept what Zenith calls Pessoa’s “poetics of fragmented selfhood.” There are moments in The Book of Disquiet when Pessoa breaks through the authorial mask, if only to affirm his lifelong masquerade: “To create, I’ve destroyed myself. I’ve so externalized myself on the inside that I don’t exist there except externally. I’m the empty stage where various actors act out various plays.” The triumph of his heteronymic enterprise is like that aimed at by those who today practice “manifesting”: the triumph of an idea transcended into life.
The Pessoan spirit—albeit lacking his genius—is alive and well. It lives on in Reddit forums, Twitch chats, Twitter feeds, and other venues where anonymity, or pseudonymity, is common and where even members of the blue-checkmark class, who use their real names, put up a front. The once-singular self divides or turns into a heteronym. Online spaces are loud with the personae we have unleashed.
I know this firsthand. In the summer of 2021, I delved into the world of non-fungible tokens—digital items that are provably unique and the ownership of which can be publicly verified on a blockchain. NFTs represent a new frontier of art and collectibles, gaming and pop culture, where even some of the biggest-name artists and collectors don’t use their real names. Instead of self-portraits, they employ NFT avatars as visual identities. In this scene, “anon”—Pessoa’s first major English heteronym—is a common form of address for a compatriot whose real-world identity you may never know.
To participate, I needed a suitable persona. So I created a pseudonymous Twitter account, registered domain names to match, and launched my alter self into that hothouse ecosystem. More gregarious than usual, I found it easy in that guise to make friends and forge bonds. The real me, such as he was (shades of Whitman again), took a backseat. And it worked: inside of three months, I had gained 1,000 followers and a reputation as a serious collector. We are what we dream ourselves to be, Pessoa says. Through his eyes, I have come to see the Internet increasingly as a place where heteronyms abound, casting large shadows and printing their own legends. From Bitcoin creator Satoshi Nakamoto to the master conspiracist Q, these identities shape the lives of millions.
Who, then, is the real Pessoa? A dream dreamed by no one, he sometimes thought—as Borges imagined Shakespeare in one of his Ficciones. One of Pessoa’s strongest poems, “The Tobacco Shop,” begins:
I’m nothing.
I’ll always be nothing.
I can’t want to be something.
But I have in me all the dreams of the world.
In Borges’s story, Shakespeare at the end of his life, having “been so many men in vain,” asks God to give him at last a singular identity to call his own. From a whirlwind, the voice of the Lord answers: “Neither am I anyone; I have dreamed the world as you dreamed your work, my Shakespeare, and among the forms in my dream are you, who, like myself, are many and no one.”
Wanting to be everyone while fearing that he was no one, Pessoa was nonetheless, in his fiercest moments, in touch with an unshakable core at the center of his kaleidoscope of selves. Refusing either to suppress or falsify his internal conflicts, he displayed a kind of radical authenticity. “Even if what we pretend to be (because we coexist with others) crumbles around us, we should remain undaunted,” he exhorts readers in The Book of Disquiet, “because we’re ourselves, and to be ourselves means having nothing to do with external things that crumble, even if they crumble right on top of what for them we are.”
Nearly 90 years after his death, the best of this inveterate pretender’s poetry and prose has not crumbled.
The Bookseller’s Tale
Theodore Dalrymple
Lessons in resilience amid calamity
About 30 miles from my home in England, deep in the Shropshire countryside, is the small and picturesque, but not very prosperous, town of Wem, where William Hazlitt, the great essayist, spent much of his early life, though I am told that the locals are not particularly proud of him, as few these days have heard of, let alone read, him.
On the night of October 6 last year, a terrible fire erupted in Wem, consuming the whole premises and stock of a business, Cosmo Books, owned by Robert Downie, with whom I had had some dealings in the past. Cosmo was an unusual bookseller, entirely online, specializing in articles taken from eighteenth- and nineteenth-century periodicals. Some 170,000 items were lost in the fire—undoubtedly caused by arson, not by lightning or electrical fault.
The bookseller was not the target of the arson. Behind his main premises—essentially a warehouse with an office attached—was another small unit, sublet not long before the fire to a group of people who turned out to be drug dealers. On the night of the fire, a person or persons drilled a hole in the door of the sublet and poured a fire-accelerant through it. The fire ripped through a part of the industrial estate, but it was Cosmo Books that suffered the worst devastation: other areas suffered only smoke damage. It took the fire department three days to extinguish the blaze completely.
Probably, though not certainly, a rival gang of drug dealers set the fire in revenge for some past violence, as a battle in a turf war, or as a way of reminding the occupants to pay a debt. Though small, poky, and with permanently boarded-up windows, the location had served as a venue for parties, perhaps tending to orgy, complete with a bar and beds and couches.
A few days later, after a display of inertia that we now expect of our police, a search turned up weapons at the site: coshes, knives, and a sword. Also found was a box containing about $250,000 worth of cocaine. Theft seems not to have been among the arsonists’ motives, unless so much cocaine was already stored there (and taken) that they simply overlooked so trifling a quantity.
It subsequently emerged, at least in rumor, that the new tenants of the small property had set up security cameras, not to protect themselves from crime but to guard against agents of the law or other gangs. They often posted lookouts on the industrial estate to warn them of the approach of the police or their rivals. Naturally, they had more to fear from their rivals than from the police; and the tenants have now disappeared without a trace. At any rate, no one has yet been caught or charged, and no one is holding his breath until someone is. My guess is that the intended victims and the culprit gang may indeed be caught one day, but more by luck than by judgment or detective work. If either group had done something really serious—utter a misogynistic remark, say—they would by now have found themselves, as prison argot puts it, “banged to rights.” As it is, they will probably continue their activities unmolested by police for some time.
After the fire had ruined Downie’s stock almost entirely (by a kind of miracle that reminded me of the famous photograph of the survival of St. Paul’s Cathedral during the Blitz, an exquisitely bound copy of a small book, Poetry of the Anti-Jacobin, remained unharmed, though the padded envelope that stored it was thoroughly charred), he informed his customers via e-mail of the disaster and received in return many messages of condolence and offers of help. I decided to pay him a visit.
Robert Downie’s Cosmo Books, in Wem, England, a provider of rare and scholarly manuscripts
Aged 60, with a resilience and good humor that I do not think I should have been capable of in such circumstances, Downie had almost immediately started to trade again from a temporary office near his old location. His stock had taken him decades to collect, collate, and catalog. One could not simply reconstitute it by sending one or two orders to suppliers. It was more like a life’s work, and 60 is not usually an age when one begins again.
But Downie said something that struck me as wiser than anything that an army of therapists might have managed to articulate. Yes, he was a victim, he said (and none more innocent), but he was not going to victimize himself a second time by brooding endlessly on his calamity, and thereby turning himself into a bitter old man. Moreover—though, in a sense, he had to start again from scratch—he knew his business thoroughly, and the fire had not cost him his knowledge.
Of course, so wise an attitude did not preclude anger at what had happened, or rather—as he soon discovered—at what had been done. He recalled arriving at his business the following morning (someone had informed him by telephone that a fire had broken out on the industrial estate, but he did not believe that it was at his premises) to see the blaze still raging; he knew at once that his work of decades was lost. An articulate man, he was at a loss for words to describe his emotions at that moment. To adapt Wordsworth slightly:
To him the local fire that burns can give
Thoughts that do often lie too deep for tears.
He was angry on several fronts: at the actual fire-setters, naturally, though they were unknown to him; at the drug-dealing tenants of the adjoining unit, but also at the person who sublet to them, probably illegally; at the owner’s agent, who did not supervise the property for which he had responsibility; at the distant owners of the site, who were indifferent to what had happened; and, finally, at the police, who had not bothered to interview him. As readers of detective novels know, and common sense dictates, he who is most directly affected by a crime should be questioned as soon as possible. After all, the arson might have been targeting him; he might have known something about the subtenants of the premises in which the fire had started; he might even have contracted for the fire himself, for what is known in arsonists’ circles as an insurance job. The fact that the police had asked him nothing removed any confidence that they were trying to find the culprits of a serious crime, one that might even have ended in fatalities. They fiddled while Wem burned.
Downie said that he might have a legal claim against the tenant, or the managing agent of the premises, or the managers of the whole site and the owners—but even if he did, he would probably not pursue it far. It would cost much and yield little, even if legal liability could be fixed on any of the parties. I gave him the benefit of my experience of civil actions in a medical context—admittedly, only as a witness; but I have observed that such processes continue for years and come to obsess the plaintiffs until they can think of nothing else. And finally, when the case settles, the plaintiffs still feel that justice has not been done, that the wrong verdict was reached or the compensation was insufficient, that the defendants got off lightly, and that the whole legal system was corruptly biased against such as they. I have never met a happy plaintiff, except those who won and whose claim was fraudulent.
Yet when I visited the burned-out warehouse in Downie’s company, my first thought, after the shock and pity of it, was of vengeance upon those who did such a thing: and it was not even my life’s work that was so wantonly destroyed. The ground was a black, tarlike mass of carbon mud. On the shelves that remained were carbonized books, completely irrecoverable and many irreplaceable, glued together by water from the firefighters’ hoses. More than half a mile of books and papers, carefully cataloged at the expense of thousands of hours of work, were no more. The roof had caved in; the wooden beams were charcoal. A glance was enough to make one despair.
Downie started as a bookseller when he was a teenager. His father, a successful car salesman, did not want him to continue his education, and Downie was too young to oppose his father’s wishes. His mother ran two small secondhand bookshops in provincial towns, and the young Downie worked in them during his school holidays. Then, when he was 16 and had left school, his father drove him to one of them, dropped him off, and said, “There, you manage it.” And he did, until he was 18, when he bought it from his mother with a bank loan that his father guaranteed. He ran it for another decade, before selling it.
By then, Downie had grown tired of direct contact with the general public. People entering his bookshop would treat him as if he worked for a tourist information office or ask him whether he had any cigarettes for sale. He decided thenceforward to sell real antiquarian books only by mail order, from catalogs. In any case, traditional bookselling from small shops would soon almost die out.
He had an illumination one day while looking through a large volume of old Acts of Parliament. Who would want such a volume? On the other hand, people with a particular interest in a subject might want a single Act for their collection or research. (By the time of the fire, Downie had amassed what was probably the world’s largest private collection of Acts of Parliament, from the seventeenth to the nineteenth centuries.) The idea came to him to break each volume into its component parts and sell each item individually. A natural extension of this idea, which proved successful, was to split up the many journals of the eighteenth and nineteenth centuries, which contained numerous essays of historical and intellectual importance, and sell them article by article. The coming of the Internet expanded the reach of his business immensely.
I asked whether he ever had any qualms about separating old volumes of journals and magazines, and he replied with what, to me, came as a surprisingly unequivocal negative. He would not break up a beautiful, precious, or rare book, he said (in fact, many such books have been wrecked for their illustrative plates by avaricious booksellers), but he was making available to the public what it might otherwise not know existed and what would otherwise molder on shelves for another 100 or 200 years. And people who might otherwise search fruitlessly for material on whatever was their interest, but who did not want large volumes of mostly extraneous matter cluttering up their homes, could now find it with a few touches of a keyboard.
He was right: I had once bought from him a two-page paper by Charles Loudon, in whom I happened to be interested. Loudon was a Scottish doctor who practiced in Leamington Spa, a town about 50 miles from where I live, in another direction from Wem. Leamington Spa's spring waters were believed in the nineteenth century to be curative, and the town soon became an elegant and aristocratic resort. Loudon wrote a book about those supposedly curative waters, which went through three editions, taking the unfashionable, and probably not locally well-received, view that they were no panacea. He was appointed to the Royal Commission on the use of child labor in factories in 1833 and was quoted by Friedrich Engels in his Condition of the Working Class in England in 1845. In 1841, Loudon retired to Paris, where he wrote (in French) a book trying to refute the population theories of Thomas Malthus—an interesting career. His first publication was about an obscure ear disease, published in an equally obscure Glasgow medical journal of the 1820s, and it was this that I bought, more for the sake of its author than its subject matter. Without Downie's business, I would never have known of its existence, nor would I have bought the complete volume of the journal that contained it if I had ever found it.
A portion of Downie’s ruined shelves after the arson
“A perfect illustration,” said Downie.
The Wem fire was illustrative in more than one respect. The day before I went to see Downie, I was involved in a debate about drug legalization. One of the arguments in favor of such legalization, put forward by a reasonable person, was that prohibiting any drug that people might want to take created a black market that promoted and gave opportunity to criminality.
I have sometimes flirted with this thought myself. But does opportunity create criminals, or do criminals create opportunity? Suppose that cocaine were not a trafficked commodity, but instead on sale legally, like alcohol. Would the destroyers of Downie's stock have then become decent, upstanding citizens, or would they, rather, have turned their attention to some other illegal activity?
The fire and its aftermath also exemplified what everyone already knows: the almost-programmed incompetence of the police. No one to whom I related the failure of the police even to question Downie was surprised: years of criticism and reform have left the police demoralized and intellectually and morally corrupted. They hardly know what they exist for these days, or if they do know, they nevertheless choose to concentrate on safer and less taxing trivialities, such as the investigation of so-called hate speech rather than the detection of real crime. The formalization of all their procedures, the constant resort to pen-pushing and form-filling, discourages flexibility and powers of discretion and has ruined even their individual intelligence, for what is not used habitually withers and decays. Even their appearance has changed, from that of a citizen in an unthreatening uniform to that of a paramilitary wing of a violent fascist movement. Only the fact that so many of them are patently unfit and could hardly chase an old lady with a walker very far prevents them from being truly fearsome, festooned as they now are around the waist with the encumbering apparatus of repression—truncheon, handcuffs, gas canisters, and sprays of various descriptions. They are, and probably at some level of their psyche know themselves to be, part of an elaborate charade, in which they bully the innocent and ignore the guilty—especially the dangerous.
The inefficacy of the police is not merely an inconvenience, though it is certainly that, as well. It indicates that the government and its associated administration, which have for decades mandated, and presided over, a depressing degeneration of law enforcement, are unaware that the maintenance of order is not merely one of their tasks among others—such as ensuring that people are protected from the way others might happen to address them—but the very justification of their existence. Criminal disorder saps their legitimacy, which, in turn, leads to further disorder.
The fire was a powerful reminder of the baleful effect of crime upon its victims—an effect that I confronted practically every day of my professional life, though I found that many educated middle-class people were reluctant to hear about it, so anxious were they to display their theoretical compassion toward the perpetrators. They never grasped that it was the poor, not the rich, who were the principal victims of crime.
It takes no great imagination to apprehend the emotions of a man whose life's work has been casually destroyed by a vicious gang, thousands of hours of labor set at naught, and what effect it might have had were he a lesser person. But Downie understood something that is seldom understood, or at least not much emphasized, in our current climate of opinion, in which vulnerability and fragility are admired and encouraged: that while emotion is important, and without it nothing would matter, it is not all-important, not the only guide to action. Emotion must be tempered by thought and rational reflection, which, by confronting the situation constructively, help limit its destructive effects. Of course, many factors aided Downie's admirable response: intelligence not the least of them, but also the fact that he was sufficiently insured to tide him over, and his indestructible knowledge of his endlessly interesting business. He claimed no great merit for his resilience, since, he said, he had little choice but to be resilient. You do not enter a business such as his to make a great fortune, and he was too young to retire. He still had his livelihood to make. But many people in his circumstances would have been crushed, or would have sought other ways of rising from the ashes, such as psychotherapy or legal action—both damaging, in such instances, to the human personality.
How we respond to circumstances is generally a choice, admirable or otherwise. I admired this bibliographical phoenix rising from the flames, and recently bought a few more items from his swiftly resuscitating business: one an essay from 1797 on women’s depravity caused by reading novels, the other an engraving of Protestant martyrs burned at the stake.