Dissent Spring 2019

Dissent (ISSN 0012-3846), issued April 8, 2019, is published quarterly—winter, spring, summer, and fall—by the University of Pennsylvania Press for the Foundation for the Study of Independent Social Ideas, Inc., 120 Wall Street, 31st floor, New York, NY 10005. Phone: (212) 316-3120. Website: http://www.dissentmagazine.org. For new subscriptions: https://www.dissentmagazine.org/subscribe; for subscription inquiries: subscriptions@dissentmagazine.org or call (212) 316-3120. Subscription payments can also be mailed by check to Dissent Subscriptions, 120 Wall Street, 31st floor, New York, NY 10005. Subscription rates: $30 for one year (students $24). Institutions: $70 for one year. Single copy $12. For foreign addresses add $18 for shipping. All payments from foreign countries must be made by credit card, U.S. money orders, or checks payable in U.S. currency. For special rates on bulk orders from organizations, teachers, etc., write directly to Dissent. For information on newsstand and bookstore distribution, call Ingram Periodicals at (615) 213-3660. Postmaster: Send address changes to Dissent, 120 Wall Street, 31st floor, New York, NY 10005. © 2019 by the Foundation for the Study of Independent Social Ideas, Inc. (FSISI). Periodicals postage paid at New York, NY and additional mailing offices. Permission to reprint any article must be obtained from pressrts@pobox.upenn.edu. Submissions must contain a stamped, self-addressed envelope. Electronic submissions may be sent to submissions@dissentmagazine.org. Volume 66, No. 2 (whole No. 275). Made in USA. Dissent does not engage in lobbying or support candidates or legislation. Opinions expressed in signed articles and editorials are entirely those of the authors. Dissent is indexed in the Alternative Press Index, Family & Society Studies Worldwide, Left Index, Periodicals Index Online, the Social Sciences Index, the Social Science Source, and Sociological Abstracts. Funding has been made possible by the Open Society Foundations and by the Puffin Foundation.


Dissent Spring 2019

Editor’s Page
Beat the Rich, by Michael Kazin

Culture Front
Already Great: The Dead-End Liberalism of Parks and Rec, by Timothy Shenk
What Amos Oz Couldn’t See, by Joshua Leifer

Labor’s Comeback
The Road to Renewal, by Michelle Chen, Sarah Jaffe, and Mark Levinson
How to Win, by Nelson Lichtenstein
A Seat at the Table: Sectoral Bargaining for the Common Good, by Kate Andrias
No Power without Organizing, by Rich Yeselson
Airport Workers Strike Back: An Interview with Sara Nelson, by Sarah Jaffe


After Act 10: How Milwaukee Teachers Fought Back, by Eleni Schirmer
Immigrants Didn’t Kill Your Union, by Ruth Milkman

Articles
Power Is Sovereignty, Mr. Bond, by Daniel Immerwahr
Nicos Poulantzas: Philosopher of Democratic Socialism, by David Sessions
Modi’s Saffron Democracy, by Sanjay Ruparelia
France’s Anti-Liberal Left, by Michael C. Behrent
Climate Politics after the Yellow Vests, by Colin Kinniburgh

Portfolio
How Eugene Debs Became a Socialist, illustrated by Noah Van Sciver, written by Paul Buhle and Steve Max with Dave Nance

Reviews
Last Days at Hot Slit: The Radical Feminism of Andrea Dworkin, eds. Johanna Fateman and Amy Scholder, reviewed by Charlotte Shane
Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America by John Sides, Michael Tesler, and Lynn Vavreck, reviewed by Harold Meyerson
Democracy and Truth: A Short History by Sophia Rosenfeld and We’re Doomed. Now What?: Essays on War and Climate Change by Roy Scranton, reviewed by Jennifer Ratner-Rosenhagen

The Last Page
An Opposing Force, by Nick Serpe

Cover: Supporters at the March for Public Education last December in Los Angeles, weeks before the city’s teachers went on strike (Ronen Tivony/NurPhoto via Getty Images)


Editor’s Page

Beat the Rich
Michael Kazin

Howard Schultz and Ken Griffin have been giving billionaires a bad name. Schultz, who became a celebrity selling overpriced coffees gentrified with Italian labels, quickly morphed into an arrogant fool who thinks his riches somehow qualify him to be president. The lesser-known Griffin recently plowed $238 million of his hedge-fund earnings into a future penthouse overlooking Central Park that will be the most expensive home in U.S. history. The Republican donor is also having a mansion built in Palm Beach, Florida, that will cost at least $10 million more. Not so long ago, many Americans would have reacted to these two characters with a mix of awe and envy. But since the financial collapse a decade ago, the notion that “greed is good” is more likely to draw disdain than a smile. In January, a national poll found a clear majority supporting the idea of doubling the marginal income tax rate on the richest among us to seventy percent. A society that “allows billionaires to exist” while some Americans live in poverty is “immoral,” declared Rep. Alexandria Ocasio-Cortez, who proposed that hefty rate hike during her very first week in Congress.

The suspicion of people with extreme wealth is not just a moral sentiment. It has always been essential to gaining mass support for reforms that made the United States a somewhat more decent society. Images of haughty industrialists sporting silk hats and diamond stickpins helped inspire the progressive income tax and regulations on big banks and railroads. Disgust at the moneyed speculators who touched off the Great Depression lit a spark that fired up the New Deal and emboldened the labor upsurge led by the CIO. After Franklin Roosevelt blasted “economic royalists” in his 1936 acceptance speech, he scored one of the greatest landslides in American history.

By itself, this sentiment will raise no taxes on the rich, win workers no seats on corporate boards, enact no curbs on private donations to campaigns, pass no component of a Green New Deal. Both the left and the labor movement are still too weak to take full advantage of the shift in public opinion. And an appalling billionaire got elected president by spouting a brand of populist rhetoric and will run for reelection by doing so again. But any hope of creating economic democracy has to name enemies and hold them up to scorn. Bashing aspiring plutocrats like Schultz and Griffin is a necessary place to start.



“In the first season, the Parks and Rec writers had played the idea of a Leslie Knope presidency for laughs; six years later, it had turned into a prophecy.” Amy Poehler as Leslie Knope (NBC)


Culture Front

Already Great
Timothy Shenk

Two days after the 2016 election, an article from Leslie Knope went viral. Like many of the essays that took over Blue America’s collective social media feed in the aftermath of Donald Trump’s victory, the piece asked readers to join what would soon be called the Resistance. “I reject out of hand the notion that we have thrown up our hands and succumbed to racism, xenophobia, misogyny, and crypto-fascism,” she declared. “Today, and tomorrow, and every day until the next election, I reject and fight that story.”

Except “Leslie Knope” wasn’t a real person. She was a character in Parks and Recreation—Parks and Rec, to its fans—a critically acclaimed NBC sitcom that aired its final episode in 2015. The show centered on a handful of bureaucrats at the parks department in the fictional town of Pawnee, Indiana. Leslie, played by Amy Poehler, was the focus of the series. A ferociously competent public servant, Leslie slept four hours a night and devoted the rest of her time to figuring out how to make her hometown a slightly better place, with the help of a cast of future stars that included Aziz Ansari (the greedy but lovable Tom Haverford), Chris Pratt (the goofy but lovable Andy Dwyer), and Nick Offerman (the libertarian but lovable Ron Swanson).

Before Parks and Rec debuted in the spring of 2009, Poehler had been best known for her role on Saturday Night Live portraying Hillary Clinton, and the show alluded to the president-in-waiting throughout its run. She’s mentioned by name around the one-minute mark of the first episode, and
her picture is prominently displayed in Leslie’s office. In 2016, comparisons between the two became a staple of liberal punditry. “[T]he only sensible way to look at the US election is through the prism of Parks And Recreation,” wrote Hadley Freeman in the Guardian. According to Vox, “Democrats’ not-so-secret-closing argument” was that “Hillary Clinton is Leslie Knope.” The show was even used to excuse Clinton’s botched attempt to cover up her pneumonia, which Salon dubbed her “Leslie Knope moment.” Mike Schur, co-creator of the series and likely author of the post-election war cry against Trump, said in the summer of 2016 that Leslie would be out “campaigning like a mofo” for the Democrats. Clinton herself blessed the association, filming a video with Poehler where she asked what kind of president Leslie Knope would be. Ten years after its premiere, we’re living in the world Parks and Rec helped make. That’s not just because it remains a ubiquitous cultural presence, especially among millennials. (“I can’t go on Tinder without finding some idiot comparing himself to Ron Swanson,” a friend complained to me the other day.) Parks and Rec was animated by a coherent philosophy—a philosophy that was still unusual a decade ago, but is now as widespread as GIFs of Ron Swanson eating bacon. It’s a world-view that fits perfectly with what American liberalism has become, a self-satisfied politics so confident in its righteousness that it can’t quite believe there’s anything left to argue about. But it didn’t start out that way. The story of Parks and Rec is the story of liberalism in the Obama years. And both begin with hope. You can think of Parks and Rec as an answer to two sets of questions that were on Mike Schur’s mind as the series was going into production in the fall of 2008. One batch was raised by Barack Obama. “[T]he show was sort of forged in the pre-Obama, ’08 election,” Schur said in a 2015 interview. “The Tea Party hadn’t happened yet, but the nation’s divide was getting worse every day.” Both sides of the debate had a spokesman in the cast. Leslie Knope was a do-gooder who wanted an active and effective government; Ron Swanson opposed bureaucracy in all its forms. (“Child labor laws,” he said in one episode, “are ruining this country.”) Their relationship provided the series with its major political arc.


Could a libertarian with stockpiles of gold buried in his backyard work alongside an idealistic reformer who hadn’t yet found a problem that government couldn’t solve? If they could do it, what excuse did the rest of us have for not getting along? The next set of questions was inspired by David Foster Wallace. In interviews, Schur talks about his discovery of Wallace’s work with the kind of passion normally reserved for religious awakenings. “It’s not a stretch to say that it’s influenced everything I’ve ever written,” he’s said. “It kind of rescrambled my brain.” As an undergraduate at Harvard, Schur arranged for Wallace to visit campus so that he could meet his hero in person, and the pair struck up a correspondence. After moving to Los Angeles, Schur purchased the film rights to Infinite Jest and wrote a character named “David Wallace” into The Office. “The creation of Leslie Knope would not have been possible,” he’s said, “without me reading David Foster Wallace.” And it must be relevant that as he was bringing Leslie Knope to life in the fall of 2008, Schur learned, along with the rest of the world, that Wallace had committed suicide. Today, Schur keeps quotations from Wallace in his office for inspiration. One of them reads: “In dark times, the definition of good art would seem to be art that locates and applies CPR to those elements of what’s human and magical that still live and glow despite the times’ darkness.” It’s a mission statement for all of the shows that Schur has created, including Brooklyn 99, The Good Place, and his new series Abby’s. Wallace believed that contemporary American culture had been overtaken by an easy but poisonous cynicism. Could sincerity survive in a culture of irony? Could earnestness, maybe, be cool? Schur was willing to gamble that the answer was yes. Ironically, he chose to wage his campaign for sincerity using the art form that Wallace held singularly responsible for cynicism’s hold on the national psyche: television. Schur came by his love of TV honestly—he watched Cheers devotedly as a child—and by 2009 he had a résumé that few in the industry could match. From the presidency of the Harvard Lampoon, he had moved on to a job writing for Saturday Night Live and then left to join Mindy Kaling and B.J. Novak on the original writing team for the American adaptation of The Office. All of them had been hired by showrunner Greg Daniels, another veteran of the Lampoon and SNL.


When NBC executives asked Daniels to come up with a spinoff for The Office, he tapped Schur to help bring the show to the screen. They ditched the idea and came back instead with a pitch for what Daniels called “a mockumentary version of The West Wing.” Although the series probably never would have made it to the screen without Daniels’s clout, it was Schur who gave Parks and Rec its distinctive worldview. Breaking from the model set by Seinfeld, this would be a show about something— about how optimism could prevail over pessimism, and about how much people could achieve when they worked together. Jokes were their secret weapon: they would turn irony’s characteristic mode of attack against itself. Transforming that aspiration into a viable sitcom took some work. The writers struggled to find the right tone for Leslie, to make her flawed without appearing pathetic, ambitious without being unmoored from reality. Both sides of the character were hinted at by her surname, which rhymes with “hope” but more directly suggests “nope.” Early in the first season, she announced, “It is my dream to build a park [long pause] that I one day visit with my White House staff on my birthday. And they say, ‘President Knope, this park is awesome. Now we understand why you are the first female president of the United States.’” A line like that would have made sense in a spin-off from The Office—Steve Carell’s barely competent Michael Scott specialized in delusions of grandeur—but coming from an anonymous government employee wasting away in the middle ranks of Indiana bureaucracy it mostly seemed sad. Parks and Rec found its balance in the second season. The writers boosted Leslie’s IQ enough to make her a talented public servant, and they turned the rest of the parks department, including her supposed ideological antagonist Ron Swanson, into her accomplices. Other departments still got in her way, the public made impossible demands, and the local business community could always block her path. But she would do what she could with the material she had. Leslie’s character, like the rest of the series, was designed to impart a lesson that Schur credited to Wallace. “I think TV has, at some level, trained people to believe that the only noble choice in life is to be the biggest, best, fastest, strongest,” he said in 2012. “One of the themes of this show is to kind of celebrate the nobility of working really hard for your
little tiny slice of America, and doing as well as you can for that part of it in a way that tangibly helps people.” That might sound hokey, but it worked. The writers let characters fail, confront problems that didn’t have easy solutions, and feel shitty about their lives, all while being really funny. And they pulled it off week after week, for a whole season of television.

There was just one problem: the ratings were abysmal. The series averaged 5.97 million viewers an episode in its first season, making it the ninety-fourth most-watched show on network television. It did even worse the next year, losing more than a million viewers. Facing imminent cancellation, the writers came up with a plan to save the show: they would turn Leslie into a superhero.

A series that began as a parody of the earnest take on government exemplified by The West Wing turned into its Obama-era equivalent. The producers hinted at the transition by adding a new name to the cast: Rob Lowe, last seen on NBC playing the fictional counterpart of George Stephanopoulos in Aaron Sorkin’s glossy reinvention of the Clinton White House. At the outset of the third season, Pawnee’s government was gushing red ink and the parks department was in danger of being eliminated, a plot point that neatly brought together the real-life austerity crunch of 2010 and the show’s own precarious standing. By the end of the year, Leslie had—with the support of her scrappy team—saved the parks department, found love with a Paul Ryan–ish auditor the state had sent to trim the town budget, and been courted to run for city council. She won that race in season four, setting her off on a path that, the series finale heavily suggested, would end at the White House. In the first season, the writers had played the idea of a Knope presidency for laughs; six years later, it had turned into a prophecy.

Critics lapped up Parks and Rec, and politicians were just as smitten. Though the show obviously leaned Democratic, Leslie was at heart a partisan of the American political system, and the producers snagged major figures from both parties for a litany of excruciating cameos. The roster of guest stars included Madeleine Albright, Newt Gingrich, Cory Booker, Orrin Hatch, Barbara Boxer, Olympia Snowe, and
Michelle Obama, plus two appearances each from John McCain and Joe Biden. Ratings remained low, but it was a hit with TV executives’ favorite kind of viewer: the rich kind. It did better with affluent households than every network comedy but Modern Family, giving Schur and his team the cachet they needed to keep the series on air for 125 episodes.

In interviews, Parks and Rec creator Mike Schur talks about his discovery of David Foster Wallace’s work with the kind of passion normally reserved for religious awakenings. (Steve Rhodes/Flickr)

But as the writers made Leslie smarter, the rest of Pawnee, which had never been that bright to begin with, seemed even worse. After a year of supporting right-minded but disastrously unpopular policies—starting with a soda tax and culminating with Pawnee’s version of a Wall Street bailout—she lost a recall vote and found herself out of a job. “I love my town, but you know how they repay me? By hating me,” she complained. “The people can be very mean and ungrateful, and they cling to their fried dough and their big sodas, and then they get mad at me when their pants don’t fit.”

The final season jumped ahead two years, placing the show in a not-so-distant future when Leslie had taken up a high-ranking job in the National Park Service, splitting her time between Washington and Pawnee, which was finally seeing the fruits of the Knope agenda. Leslie’s hometown, which she had once described as “overrun with raccoons and obese toddlers,” had become a miniature Brooklyn dotted with yoga studios, juice bars, and chic restaurants. The only major question was whether she would be able to persuade a tech giant—a combination of
Google, Facebook, and Amazon—to make Pawnee its regional headquarters and gentrify the last derelict part of town. Spoiler alert: she won. So did everyone else. By the last episode, which aired in February 2015, characters who began the series stuck in bureaucratic anonymity were launched onto careers that would bring all of them fame and success, usually outside Pawnee. One would become a best-selling author, another a real-estate mogul, and a third a congressman. Then there was Leslie, who would round out two terms as Indiana’s governor with the aforementioned turn in the White House. Remember, this was the series that was going to break TV’s habit of insisting that “the only noble choice in life is to be the biggest, best, fastest, strongest.”

Not everything was perfect. The show alluded to a coming financial crash and cuts to education budgets so deep that schools had to stop teaching math, and an Infinite Jest–style parody commercial for Verizon-Exxon-Chipotle included the tagline “proud to be one of America’s eight companies.” But with Leslie Knope on a glide path to the presidency, the country would be in safe hands. America, you see, was already great.

Two months after the Parks and Rec finale aired, Hillary Clinton announced that she was running for president. The writers had toyed with the idea of asking Clinton to appear on the show, but setting the final season in 2017 raised the tricky question of what she would be doing after Election Day 2016. They decided to avoid the issue, never mentioning the president by name in the entire season. That’s one of the strangest things about watching this era of Parks and Rec today, because in our timeline the only thing that Leslie Knope would be talking about is Donald Trump.

The gap between Parks and Rec and our own reality had been widening for years. Leslie transformed into a bureaucratic superhero in the spring of 2011, just as Republicans were taking over the House of Representatives and state governments across the country. The major legislative accomplishments of the Obama administration were all in the rearview mirror, but frustrated liberals could watch Leslie put together an epic town harvest festival. Meanwhile, back in the real world, Donald
Trump was going from station to station demanding that Barack Obama release his birth certificate. “I’m starting to think that he was not born here,” he said on NBC, the same network that broadcast Parks and Rec—and, of course, The Apprentice.

Parks and Rec had tried to prove that Republicans and Democrats could still communicate with each other, and that each would be better for it. But the show’s version of conservatism had no room for Trump. In 2016, Mike Schur told interviewers that even Ron Swanson would have voted for Clinton. He had set out to represent all sides of the country’s political debate; in the end, he couldn’t even include all of NBC’s primetime lineup.

That fall, Trump carried Leslie Knope’s home state by almost twenty points, boosted by the presence of Indiana governor Mike Pence on the ticket. Although Barack Obama won the state narrowly in 2008, it had been trending toward the GOP for the entirety of Parks and Rec’s run. By 2012, it was back in the GOP’s column; Romney beat Obama by ten points, helping give Pence a narrow victory in the governor’s race. But the writers don’t seem to have thought about politics much when they decided to put the show in Indiana. According to co-creator Greg Daniels, he and Schur chose the state because it was “a real backwater to contrast with [Leslie’s] amazing ambition and optimism.”

The Apprentice had a different understanding of ambition and optimism. While the winning team in each episode sipped champagne in private pools, the defeated were forced to spend the night in the backyard, put up in tents without electricity. Summarizing his show’s message, Trump told them that “life’s a bitch.” It sounds tough, but Trump knew his audience, and The Apprentice regularly drew twice as many viewers as Parks and Rec.

Popularity was never Leslie’s top priority anyway. She wanted to help people, not be like them, or be liked by them. “Pawnee has done you a favor,” a political consultant tells her after she loses her recall vote. “You’ve outgrown them.” By the end of the series, she was holding lunch dates with Madeleine Albright and playing charades with Joe Biden. She was still looking out for Pawnee, but from a distance. Leslie’s future had taken her where she belonged, all the way to Washington. Then again, so did Trump’s.


What’s striking in retrospect is how easy it was to bring together the two questions that Schur was puzzling through in 2009—the Obama question of how to transcend the divide between red and blue America; the Wallace question of whether sincerity could survive in a culture of irony—and how quickly both answers led toward a particularly oblivious variety of liberalism. Maybe it shouldn’t be that surprising. Reading Wallace today, what most stands out is how much the cultural landscape has changed. By 2009, it should already have been clear that the cynics were on the defensive. Obama’s presidential campaign was one long demonstration of a profound hunger for something worth believing in. Liberal America was primed for an optimistic series that claimed to speak for the country as a whole, even if most people didn’t bother watching it. In other words, a show like Parks and Rec.

It’s often forgotten, however, that Wallace thought the ironists had a point. His most extended discussion of irony’s hold on the American psyche, the 1993 essay “E Unibus Pluram: Television and U.S. Fiction,” granted that American society was filled with hypocrites spreading platitudes contradicted by the realities of daily life: “corporate ascendancy, bureaucratic entrenchment, foreign adventurism, racial conflict, secret bombing, assassination, wiretaps, etc.” Against this backdrop, he noted, “rebellious irony . . . seemed downright socially useful.” The problem was that the ironists had no second act. They could tear down a system, but they had no plan for what came next. According to Wallace, it had taken decades for irony to change from a valuable cultural counterweight into the “cynical, narcissistic, essentially empty phenomenon” it had become by the 1990s.

On Parks and Rec, the decline of sincerity took just seven years. The most telling sign of decay became apparent around season four, when the jokes stopped being funny. A suffocating niceness settled over the show. The characters spent most of their time trading compliments with each other, and the stakes could never be that high, because Leslie would always swoop in to save the day. Earnestness, it turned out, could be every bit as narcissistic and empty as cynicism. Like the ironists before them, champions of the new
sincerity didn’t have a next move. Today, there’s a name for the genre Parks and Rec pioneered: hopepunk. According to Alexandra Rowland, coiner of the term, hopepunk is “about DEMANDING a better, kinder world, and truly believing that we can get there if we care about each other as hard as we possibly can.” It’s a melodramatic framing perfect for a cultural moment that treats posting online as a form of ideological warfare.

Back in the Obama years, Parks and Rec could afford to be more covert about its politics. Only in the last season did the series acknowledge that Leslie was a Democrat. Schur admitted that the show indulged in wishful thinking, but he distinguished it from the “liberal fantasy” promoted by The West Wing. Parks and Rec had a post-partisan agenda—“an American fantasy” of mutual respect and cooperation. Really, though, it was the same liberal fantasy, where everyone could laugh together because they were all on the right side of history.

Schur might have taken a different turn if he had paid closer attention to David Foster Wallace’s critique of TV. “Television,” Wallace observed in 1993, “from the surface on down, is about desire.” Producers kept viewers in their seats by giving them what they craved. “This is what TV does: it discerns, decocts, and represents what it thinks U.S. culture wants to see and hear about itself.” And that’s what Parks and Rec did for most of its run, assuaging the anxieties of managerial-class liberals by telling them everything would be okay if we trusted the grownups—the Obamas, the Clintons, the Knopes—to look out for us. “On some level,” Schur said, “we have to present optimism.” By the end of the show, optimism meant a future where public services are gutted, a handful of corporations dominate the economy, and all your favorite characters are doing just splendidly.

Faced with a similarly dreary vista a generation ago, Wallace noted, “the forms of our best rebellious art have become mere gestures, shticks, not only sterile but perversely enslaving.” Though he never underestimated the power of catering to desire, Wallace asked a question that still deserves consideration today: Shouldn’t we want something better?


Timothy Shenk is co-editor of Dissent.

Amos Oz poses at the Basilica of Maxentius during the 2005 International Festival in Rome. He died in December 2018 as one of Israel’s most celebrated writers. (Marilla Sicilia/Archivio Marilla Sicilia/Mondadori Portfolio)

What Amos Oz Couldn’t See
Joshua Leifer

Amos Oz wrote that as a child he would imagine his own funeral. It would be a state funeral, with eulogies by politicians, “marble statues and songs of praise in my memory,” he recalled in his 2002 memoir, A Tale of Love and Darkness. He was not far off the mark. Oz died in late December 2018 at the age of seventy-nine as one of Israel’s most celebrated writers—perhaps its last “national” writer, for no other contemporary Israeli author has as insistently, or successfully, grafted their own biography onto the country’s history. Oz did not die young as a hero on the battlefield as he had once fantasized, but as a different kind of warrior, arguing over Israel’s national culture and political future. To many, his death signified that he, and his camp, had lost.

To those in his camp, Oz represented the romance of the kibbutz, the peace movement, Israel’s enlightened face abroad. To those outside of it, he embodied the Ashkenazi, secular elite and its enormous
condescension toward religious and Mizrahi Jews (of North African and Middle Eastern descent). To the right, he represented the kind of Zionism that always apologized to the world for doing what was necessary to survive. To the left, especially the left outside of Israel, he exemplified the kind of Zionism that justified whatever it did as necessary to survive and cried about this terrible necessity, as if tears were enough to expiate its crimes.

Aware of how he was seen, Oz cultivated his roles as ambassador—first of one Israel to the others, then of Israel to the West. He assumed these roles with a combination of agony and relish that can be felt across his work, but which is most perceptible in his nonfiction. In his collection of reported essays, In the Land of Israel (1983), for example, he traveled to ultra-Orthodox neighborhoods, the offices of a Palestinian newspaper in East Jerusalem, working-class Mizrahi districts in Beit Shemesh, and right-wing religious settlements in the West Bank, as much to argue as to listen. Intended as a set of snapshots of various “other Israels,” it was also an attempt by Oz, acting as emissary of a waning Labor Zionism, to explain the dovish position, particularly to the ascendant settler right, on the eve of Israel’s invasion of Lebanon.

Oz emerged on the Israeli literary scene as part of the kibbutz movement’s vanguard. In the early 1960s, he positioned himself as an ardent defender of “kibbutz values,” which appeared increasingly threatened by the demands of state-building, the growing power of commercial society, and cultural currents coming from abroad. As he proudly remembers in his memoir, he even challenged Prime Minister David Ben-Gurion in the pages of Davar, Labor Zionism’s official organ, for abandoning the core ideal of fundamental human equality. (This idea was preached far more than practiced by the kibbutznikim when it came to Palestinians and Jews from North Africa and the Middle East.)

Oz’s zealous defense of the kibbutz may have stemmed, in part, from his fear that he would never truly belong there—that, as he wrote in his memoir, he would “always be just a beggar at their table, an outsider, a restless little runt from Jerusalem.” Oz was born Amos Klausner, not on a kibbutz but in the West Jerusalem neighborhood of Kerem Avraham, to
an émigré family deeply committed to European high culture. He grew up surrounded by books in over a dozen languages and heady debates over literature, Zionism, and the future of the Jewish people. His great uncle Joseph Klausner, a major influence on Oz, was a prominent linguist, scholar, and critic who kept busts of Ludwig van Beethoven and Vladimir Jabotinsky, the father of Revisionist Zionism, in his living room.

After his mother’s suicide, the teenaged Oz abandoned his family’s petit-bourgeois nationalism for the völkisch socialism of the Hebrew “pioneers”—the “blond-haired, muscular, suntanned” warrior-farmers, with “their rugged, pensive silhouettes, poised between tractor and plowed earth.” (Oz’s writing tends to treat fair hair, light eyes, bronzed skin, and brawn as marks of character, a symptom of the hatred for the stereotypical diaspora Jew at Zionism’s core.) He left Jerusalem—though the city would become the setting of his most successful books—for Kibbutz Hulda, a Labor Zionist stronghold in central Israel, and changed his last name to Oz—in Hebrew, “courage” or “might.”

There was something untimely about Oz’s adoption of the kibbutz movement. He sensed this, too; even as he joined the movement, he worried that its high idealism was becoming a thing of the past. In 1967, with Israel’s victory in the Six-Day War and subsequent occupation of the West Bank and Gaza Strip, it became even clearer to Oz that Labor Zionism had begun to falter. Though he did not call for an immediate withdrawal from the occupied territories, as others on the Israeli left did, Oz was among the few Jewish Israelis at the time to warn that a protracted occupation would lead to disaster. “Even unavoidable occupation is a corrupting occupation,” Oz wrote in 1967, “an enlightened and humane and liberal occupation is occupation.” And while Oz recognized what the occupation meant for Palestinians, who now found themselves under military rule, his primary concern then and later was what the occupation was doing and would do to Israelis.

The reactions to Israel’s victory shocked Oz. What was supposed to have been Zionism’s rationalist, secular left wing seemed overtaken by the euphoria and even messianism unleashed by the end of the Six-Day War. Oz was repulsed, he wrote, by “the mood that had engulfed the country immediately after the military victory, a mood of nationalistic
intoxication, of infatuation with the tools of statehood, with the rituals of militarism and the cult of generals, an orgy of victory.” Oz would never abandon Labor Zionism, but from 1967 on he would represent its dissenting tendency.

For Oz, as for many Labor Zionists, 1967 was the year Zionism’s rightful captains lost control of the ship. Though members of the Labor Party would remain at its helm for a decade more, the winds had turned against them. Religious Zionism surged, its spiritual leaders heralded the opening stages of the Messianic Age, and true believers moved—quite a few from the United States—to settle the newly “liberated” territories. In 1974, following the Yom Kippur War, this movement would gain institutional form with the establishment of Gush Emunim (the Bloc of the Faithful) and dramatically alter the course of Israeli history.

Then, in 1977, the event still referred to in Israel as hamahapach, or the upset: Menachem Begin—a man whom Oz recalled mocking as a child for his unidiomatic Hebrew—was elected, in part by Mizrahi voters frustrated and marginalized by three decades of Labor Party rule. Begin’s victory—Israel’s first transition of power from one party to another—seemed to confirm that Labor Zionism could no longer claim to represent the core of Israeli society. The right took over the pioneer mantle and pursued settlements in the occupied West Bank, Gaza, and Sinai. Begin’s victory also marked the first major crack in Ashkenazi hegemony. Jews from the Middle East and North Africa, “the throngs of Sephardim, Bukharians, Yemenites, Kurds, and Aleppo Jews” Oz had encountered at Revisionist movement rallies in Jerusalem, could no longer be ignored.

Oz would often insist that his political writings and fiction were separate. “Novels for me have never been a political vehicle,” he told the Irish Times in 2014. And yet his 1987 epistolary novel, Black Box, gives an expressly libidinal interpretation to this historic shift in power. The book’s male protagonist, Alex Gideon, is a famous professor, decorated general, and expert on fanaticism—one of Oz’s enduring interests—living in the United States with terminal cancer. The plot begins when his ex-wife, Ilana, sends him a letter from Israel seven years after their
acrimonious divorce asking for money to help with their troubled son, Boaz. Ilana has remarried, to a French-Algerian Jew named Michel, whom Oz caricatures as a swarthy, scripture-quoting zealot with gold-rimmed glasses, a gold-chain watch, and bad cologne. Alex, exemplar of the old Ashkenazi elite, is dying. He has been replaced—in the home, in bed, in politics—by Michel, the traditionalist North African Jew dedicated to settling the occupied territories.

But that is not all. After Alex first agrees to send money to his ex-wife, he becomes increasingly involved in her new family’s affairs. The checks he sends, intended to aid Boaz, become increasingly large: they finance Michel’s renovation of his and Ilana’s house, an update to Michel’s wardrobe, and, eventually, enable Michel to quit his job as a French teacher to focus on organizing the settlement movement in the West Bank. The reader is left wondering whether Alex’s late-life generosity is the result of subtle extortion, or simply a dying man’s desire to set his affairs in order.

At the core of the novel is a truth that Oz was reluctant to admit. In his nonfiction, he frequently wrote as if the religious Zionists were entirely to blame for the settlement enterprise—“their messianic intoxication” and “moral autism,” as he put it in In the Land of Israel, having “brought about a collapse of Zionism’s legitimacy” after the 1967 war. But the fiction of Black Box was much closer to reality. The occupation would not have lasted as long as it has if Labor Zionist leaders had not helped perpetuate it. Oz, like many Labor Zionists, focused his criticisms on Israel’s Michels and too infrequently acknowledged the extent to which Israel’s Alexes were just as deserving of blame.

Oz’s characterization of Michel most clearly exemplifies the combination of condescension, jealousy, and contempt with which the deposed Labor Zionist elite viewed their “Oriental”—the literal translation of Mizrahi—compatriots. The Mizrahim found themselves doubly stigmatized. When they were out of power, confined to remote development towns in the desert or urban slums, they were written off as backward, primitive, inclined to petty crime; once they’d gained a measure of political power, they were to blame for the “collapse of Zionism’s legitimacy,” for the corrupting violence of the occupation. But was it not the Ashkenazi pioneers, the Zionist settlers from the Pale of Settlement, who were responsible for the violence that drove hundreds of
thousands of Palestinians from their homes? Who were the kibbutznikim, who built their ethnic-exclusivist communes over the ruins of Arab villages, to talk about morality? What made 1967 different from 1948?

Women members of Kibbutz Ein Harod carrying stones from a quarry. Labor Zionism sought to create a “new Jew” through harsh physical labor and attachment to the land. (Zoltan Kluger/GPO via Getty Images)

Oz’s answer to this last question was his greatest blind spot. Until his very last days, he was prone to facile, often marital metaphors for solutions to the Israeli-Palestinian conflict: the two sides needed “a divorce”; coexistence would not be “a honeymoon”; the land was like a house that needed to be divided “into two smaller next-door apartments.” The idea that such a neat separation could be possible is ludicrous to anyone who has been to the West Bank in the last decade, where more than half a million Jewish settlers now live, many in militarized gated communities wedged between Palestinian villages. In 1967, and perhaps even in 1995, things looked different. But the more Israel changed, the more Oz stayed the same. A self-described opponent of fanaticism, Oz never wavered in his devotion to the two-state solution, even when it began to appear like the kind of messianic vision he had spent his whole life opposing.

In the end, Oz refused to fully accept the Palestinian perspective as legitimate. He had no trouble recognizing the injustice of forcing
Palestinians in the West Bank to live under perpetual occupation; he even tried valiantly, though perhaps unsuccessfully, to convince Jewish Israelis of this. “If they feel themselves to be under occupation, then this is indeed occupation,” Oz said of the Arabs before an audience in the West Bank settlement of Ofra. “One can claim that it is a just occupation, necessary, vital, whatever you want, but you cannot tell an Arab, You don’t really feel what you feel and I shall define your feelings for you.”

But while Oz could accept the Palestinians’ feelings about 1967, it was different for 1948. For Oz, 1967 was the beginning of the corrupting, unjust, and unjustifiable occupation; it was the primary obstacle to peace, and its end would mark, if not the end of the conflict, then at least the beginning of the end. But 1948—the violent displacement of roughly 700,000 Palestinians from their homes—was non-negotiable and entirely morally justifiable. The Jews in 1948, Oz claimed, were like a drowning man; the land was his plank. “And the drowning man clinging to his plank is allowed, by all the rules of natural, objective, universal justice, to make room for himself on the plank, even if in doing so he must push the others aside a little.”

The cumulative effects of rapid Jewish settlement and the war of 1948, however, did far more than “push the others aside a little.” They ended Palestinian society as it had existed for generations and turned an entire people into refugees. Oz knew that for Palestinians, 1948 was the Nakba—the catastrophe—and 1967 the Naksa—the setback—and not the other way around. Yet he demanded the singular right to define the legitimate way to think and feel about what had happened. For a writer who spoke so much about the imperative of empathy—who captured so sensitively the suffering of refugees and “the darkness of exile”—Oz could not accept that for Palestinians, the events of 1948 would always be at the heart of the conflict. It is a shame, too, that Oz was unwilling to use his powers of imagination and sensitivity to envision what real, egalitarian Arab-Jewish coexistence could have looked like.

Was Amos Oz a great writer? He certainly thought so. He believed that Hebrew literature was a branch, though perhaps a small and brittle one, of the same great tree of European literature to which Chekhov and
Tolstoy belonged. And he wrote to prove this was true. His fiction, however, was strongest when evoking the particulars of place: the winding streets and narrow alleys, the smells and sounds of Jerusalem immediately before and after Israel’s founding. In his prose, the city itself became like a character, its architecture—limestone buildings and enclosed courtyards—enchanted, almost sentient. His fiction was weakest when attempting to inhabit the subjectivities of people unlike himself, especially women, whom Oz often imagined as promiscuous, selfish, and untrustworthy. My Michael (1968) is a haunting portrait of Jerusalem in wartime, but its protagonist, Hannah Greenbaum, is an unconvincing depiction of an unstable woman given to lusting after young boys. Ilana, the main female character in Black Box, is a compulsively adulterous sex addict who, in a somewhat notorious passage, describes Michel, during sex, “like a humble restaurant violinist who has been permitted to touch a Stradivarius.” Arabs, meanwhile, only occasionally appear in Oz’s fiction and rarely as full characters.

Mostly free of these missteps, his nonfiction was uneven. He thought himself a man of peace, and yet he often seemed compelled to justify wars. He could be sanctimonious and unoriginal. Whatever socialist commitments he had held in his youth had mostly attenuated by middle age, as he turned toward a kind of antipolitical criticism of fanaticism—his final book was titled Dear Zealots—and identified one of his enemies as “the sentimental gauche”—Western defenders of national liberation movements in the Global South.

But at his best, and A Tale of Love and Darkness is indeed his best work, Oz succeeded in etching an image of a lost world down to its most granular detail, and in so doing brought it to life. It’s no coincidence that A Tale of Love and Darkness won accolades internationally as well as in Israel. Though the book is indeed a memoir, to view it solely as such is to miss the scale of its ambition. It is an attempt at a total history of Oz’s family and, by extension, of a particular but significant segment of European Jewry—the Jews from what the American-Jewish poet Philip Levine called “Russia with another name.” With novelistic flourish and abundant digressions, Oz traced his distant forebears back to their towns in Lithuania and Ukraine and followed their path to Mandatory Palestine in the shadow of the Second World War.


That the book retrieved the common ancestry of a portion of Israeli and American Jews may partially account for its popularity in the United States, where it was made into a film, directed by Israeli-born American actress Natalie Portman, in 2015. But A Tale of Love and Darkness was also published in the midst of the Second Intifada. Instead of a Jerusalem of blown-up pizzerias and suicide bombings, where violence and death had become part of the everyday routine, Oz conjured a Jerusalem of émigré scholars, displaced rabbis, and resilient refugees, a city bustling with activity and life, filled with incessant philosophical musing and political sparring, but where the ghosts of Europe’s atrocities were never fully out of sight. Most of all, without having to do so explicitly, A Tale of Love and Darkness argued for Israel’s moral legitimacy and necessity at a time when it seemed to be in jeopardy. The characters in Oz’s memoir are men and women who narrowly escaped drowning and found, just in time, a plank—the land of Israel—that could save them. Reading the book made it all but impossible to claim they did not have a right to be there.

Which is perhaps why Oz, in 2011, sent a copy of the book to Marwan Barghouti, the imprisoned former leader of Fatah’s paramilitary wing, with a dedication reading, “This is our story, and I hope you read it and understand us better. Hoping you will soon see peace and freedom.” Oz’s gift, and particularly the dedication, caused public outrage in Israel. Speeches were canceled. There was even talk of revoking his prizes. It is hard to think of another act that better encapsulates both the bravery and the insufficiency of Oz’s politics. On the one hand, here was a small yet powerful act of resistance to the prevailing common sense, a gesture of reconciliation toward a man most Israelis consider a terrorist. On the other hand, it implied not only a desire for dialogue but a hope for a kind of conversion—that Barghouti would read the book and recognize its rightness.

How Oz will be remembered is, of course, impossible to say. But public displays of moral courage such as his letter to Barghouti will likely serve as a kind of record for future historians, proof of the efforts of Israelis who did not stand idly by as their country’s skies darkened—as well as proof of their shortcomings.


Joshua Leifer is an associate editor at Dissent.



The Road to Renewal
Introduction
Michelle Chen, Sarah Jaffe, and Mark Levinson

While President Trump claims to be responsible for an “unprecedented economic boom,” a wave of labor militancy across the country suggests that America’s workers are not sharing in the supposed prosperity. The past year witnessed more people participating in work stoppages than at any time in the last thirty years, teacher strikes in ten states, the continuation of pioneering Fight for $15 campaigns, a nationwide walkout by Google employees, campaigns for economic justice under the banner of Black Lives Matter and the women’s marches, Amazon run out of New York in large part due to its anti-union practices, and unprecedented multistate strikes by hotel workers against Marriott.

Forty years of attacks by private employers and conservative courts have radically reduced labor’s strength. Just last year, public-sector unions took a hit with the Supreme Court’s Janus ruling, which undermined their ability to collect fair-share fees from workers for whom they bargain. But in keeping with the militancy suggested by the increase in strike activity, unions mounted aggressive campaigns to shore up their membership. While some lost members, others picked up more than they lost. Contrary to expectations, public-sector union membership has barely declined after Janus. And despite, or perhaps because of, the crisis in the labor movement, public support for unions is at its highest level in years. Americans under thirty—many burdened by student debt, employment insecurity, and unaffordable housing—made up a staggering 76 percent of new union members in 2017.

Today, labor is reassessing its approach to politics (who are, and who should be, labor’s allies?), bargaining (should it be at the firm or sectoral level?), and organizing (what campaigns can build labor’s strength, and
how can community support be enlisted?). Many in labor are keenly aware that a revitalized labor movement will have to embrace innovative policies, different organizational forms, and new alliances with an increasingly diverse, globalized, and complex labor force. This special section is devoted to advancing that renewal.

Nelson Lichtenstein argues that recent labor successes, such as the Fight for $15 and the teacher strikes, occurred when the struggle moved to the political sphere where, in a form of sectoral bargaining, states and cities mandated higher wages. Lichtenstein points out that when unions were strong, they developed sectoral bargaining in the auto, steel, mining, and trucking industries without assistance from the state. That system collapsed in the late 1970s and 1980s under the pressure of deregulation, employer attacks on unions, and globalization. He cautions that while state-imposed sectoral bargaining may be useful in raising wages, a union should do much more: “It raises consciousness among its members, creates an oppositional and continuously active locus of power in a society otherwise dominated by capital, and it has the capacity to mobilize the community as well as its own members for social struggles.”

The U.S. labor movement was eroded in part by outdated and dysfunctional laws. There is a growing awareness that worksite- or firm-based bargaining is often insufficient to protect workers’ interests and to solve problems of economic and political inequality. In many other countries, there is a legal infrastructure for bargaining at the industrial or sectoral level. Kate Andrias resurrects a moment in history when the United States used a form of sectoral bargaining. While a different bargaining model cannot replace the imperative to increase labor membership, Andrias makes a powerful case for exploring “a legal regime that both encourages workers’ collective activity and gives their organizations real power in the governing process.”

Looking north of the border, Rich Yeselson asks: why are nearly three times as many Canadian workers in unions (as a percentage of the workforce) as in the United States? Spurred by Barry Eidlin’s Labor and the Class Idea in the United States and Canada, Yeselson rejects simplistic cultural arguments and interrogates Eidlin’s provocative thesis
that the U.S. labor movement’s early success in the New Deal, of which there was no Canadian equivalent, meant that the Canadian labor movement was never aligned with a major political party and thus nurtured its own oppositional party, while the U.S. labor movement was one interest among many others in the Democratic Party. Yeselson concludes that the Canadian path was never really an option in the United States, and that the only way labor can grow in both countries is by increasing its political and economic power by organizing.

The government shutdown made Sara Nelson, president of the Association of Flight Attendants, one of the most recognizable labor leaders in the country. She sounded a clarion call for a general strike that helped end the shutdown in January. In her support for massive job actions in solidarity with embattled federal workers, her words were both a fiery rebuttal to Trump’s scorched-earth budget politics and an echo of the union militancy at the heart of a federal labor clash nearly forty years ago, the air-traffic controllers’ strike, that foreshadowed the wave of neoliberalism that workers are wrestling with today.

Eleni Schirmer’s piece on the Milwaukee Teachers’ Education Association (MTEA) provides a case study of the transformational power of a union with a social vision. The MTEA once saw itself as a professional association that kept its distance from unions and was often at odds with the African-American community it served. But more recently, and especially after former Wisconsin Governor Scott Walker launched a vicious attack on unions, the MTEA has reinvented itself by aligning with the community and promoting an egalitarian and inclusive vision for education. The damage done by Act 10, Walker’s frontal assault on public-sector collective bargaining, has been severe. But conservative attacks have also spawned a feisty union that is inspiring other teachers around the country. In a moment filled with dramatic teacher strikes, this piece provides a look at the work that goes on behind the scenes to build worker power.

Finally, Trump’s one big idea is that immigration is the source of American workers’ discontent. Unionists, with a few exceptions in the building trades, have resisted this as a distraction from the real causes of declining working-class living standards. Ruth Milkman highlights the political and economic logic of labor’s movement toward embracing
immigrant rights in recent years, and argues that this advocacy can only be effective if labor develops a powerful narrative that explains how business strategies and economic policies cause working-class distress.

Even in its weakened state, the labor movement remains the largest organizational counterweight to capital and the power of the wealthy. A vibrant labor movement is a crucial component of left renewal.

Michelle Chen is a contributing editor to Dissent and co-host of its Belabored podcast. Sarah Jaffe is an editorial board member at Dissent, co-host of its Belabored podcast, and the author of Necessary Trouble: Americans in Revolt (Bold Type Books, 2016). Mark Levinson is Chief Economist of the Service Employees International Union (SEIU) and the book review editor at Dissent.


How to Win
Nelson Lichtenstein

“It was the best of times, it was the worst of times,” wrote Charles Dickens of revolutionary France. “It was the spring of hope, it was the winter of despair.” Too melodramatic for our twenty-first-century taste, perhaps, but not without a kernel of truth when applied to the contemporary labor movement this political season.

On the one hand, something is stirring in the land. The red-state teacher strikes, the Democratic sweep in the 2018 midterms, the Los Angeles teachers’ historic victory in early January, and the organizing success unions have enjoyed among millennial wordsmiths in media, both dead tree and on the web, testify to the spread of the union idea in even the most unexpected venues. In 2018 more workers took part in strikes than in any year since 1986. Fully 62 percent of Americans support unions, according to a recent Gallup poll, a number that has increased 14 points over the last decade. Among young adults under the age of twenty-nine, some surveys have found that more identify as socialists than as supporters of capitalism.

Meanwhile, in a surprise to almost everyone, left and right, the Supreme Court’s Janus decision, which outlaws “agency fees,” has not generated a public employee rush to “opt out” of paying union dues, a prospect much anticipated by right-wing legal warriors in the Freedom Foundation and other anti-union entities. On the eve of their successful strike in January, the United Teachers Los Angeles (UTLA), a union long targeted by the right, had actually increased its dues-paying membership in the post-Janus months.

The 2018 election reinforced the critical role unions play in electing progressive, pro-worker candidates. In Michigan and Pennsylvania, union-household voters made up 25 percent of the electorate and helped


sweep Democrats to victory up and down the ballot. And as the presidential campaign heats up, Democratic candidates are competing with each other to stake out policy terrain on the left. Elizabeth Warren and Bernie Sanders have both put forward programs that borrow from both European social democracy (worker representatives on corporate boards, universal health provision) and FDR’s early New Deal (higher taxes on the rich, massive infrastructure spending, higher social security benefits, and the reregulation of Wall Street).

At the March for Public Education in Los Angeles, weeks before the city’s teachers went on strike (Ronen Tivony/NurPhoto via Getty Images)

But, on the other hand, this moment is also a long “winter of despair” when it comes to a revival of trade unionism and collective bargaining, especially in the private sector, where union density is a vanishingly small 6.4 percent. Despite the remarkable victory of union school teachers in California and elsewhere and the inspiring success of union hotel workers, nurses, and a few other militant labor organizations, the union movement remains essentially stalemated in the private sector, certainly when it comes to making the kind of organizing breakthroughs and qualitative bargaining advances that were a hallmark of labor activism between 1934 and 1973. Unemployment is low, wages are barely advancing, unions are viewed in a quite favorable light, and a new generation of young and energetic organizers has been hired onto union staffs, but it still remains incredibly difficult to organize new workers or win a decent first contract.

The future for traditional, enterprise-based unionism looks bleak, not because workers don’t want to be represented in a collective fashion, but because opponents of unionism—among employers, politicians, antiunion law firms, the conservative judiciary—have had decades to perfect their legal and organizational weapons, so that today even the most robust and imaginative organizing drive can be defeated if corporate executives are willing to spend enough money, retaliate against employees wishing to organize, appeal any pro-union NLRB or court decision, and delay, delay, delay.

And of course, all this assumes that workers know who their real boss is. The rise of fissured employment—subcontracting, franchising, and the corporate transformation of millions of workers into “independent” contractors—has obscured where power, money, and responsibility lie in the employment relationship. Under the system of firm-centered organizing envisioned by the Wagner Act and diabolically refined by the NLRB and the judiciary, virtually any employer can thwart the unionizing efforts of even the most enthusiastic and dedicated set of organizers. In consequence, says Larry Cohen, former president of the Communications Workers of America (CWA) and current board chair of Our Revolution, “It is now clear that enterprise-based organizing and bargaining in the U.S. has a dim future.” David Rolf, the Seattle labor leader who pioneered the Fight for $15 movement, concurs. Of collective bargaining and private-sector unionism he has said, “The twentieth-century model is dead. It will not come back.”

Thus when and if liberals and labor partisans win power in a post-Trump America, they will not try to “revitalize” the labor movement. For more than half a century, from the mid-1960s effort to ban right-to-work laws through the Obama-era attempt to pass the Employee Free Choice Act, labor has sought to make the Wagner-era system of enterprise unionism actually function. None of these legislative reforms passed, but even if they had, their impact on labor’s capacity to organize and bargain for a better work life would have been marginal. The structures of capital have shifted too much, the managerial mindset has become too hostile, and the nation’s legal regime governing collective bargaining has become ossified, if not an outright employer weapon.

Is there a road forward, modeled on movements like the Fight for $15 and the campaigns against sweatshops, foreign and domestic? Many labor partisans think “sectoral bargaining” could be an answer for our times. Sectoral bargaining encompasses an effort to win better wages and working conditions in an entire occupation or industry, usually in one state or city. Instead of a collective bargaining contract, the goal is standard-setting laws enacted either by the legislature or through an agency—a “wage board” or other tribunal—that sets wages and working conditions once all the stakeholders have had their say. This is social bargaining with the state on behalf of all workers. Just as civil rights laws apply to all workplaces regardless of the attitude of workers or employers, so too would a wage board promulgate a set of work standards that are equally universal, at least within the industry and region over which the board has jurisdiction.

Such systems were pioneered in northern Europe, where peak associations of capitalists and unionists hammer out an incomes policy that sets a national or regional framework, which is then refined in a more decentralized fashion to account for historic industrial and occupational patterns and new economic conditions. In the United States we had something close to this system during the height of the New Deal, when government entities, from the Depression-era “codes of fair competition” through Second World War labor boards, established uniform wage and union-status guidelines in the auto, steel, rubber, trucking, electrical, and food-processing industries, as well as in such highly competitive and low-wage sectors as textiles and garment manufacturing. Legal scholar Kate Andrias recounts, in this issue of Dissent, how wage boards were a vital and integral part of the 1938 Fair Labor Standards Act during its first decade of existence.

Of course, if unions are large, powerful, and economically ambitious, it is possible and often preferable to construct a sectoral bargaining regime without assistance from the state. As Walter Reuther, the visionary UAW leader, put it in the late 1940s, “I’d rather bargain with General Motors than the U.S. government. . . . General Motors has no army.” During the 1950s and 1960s such “pattern bargaining” created a set of sectoral wage and benefit standards whereby key agreements, such as the 1950 UAW-GM “Treaty of Detroit,” were replicated, not only by Ford and Chrysler, but throughout mass-production industry. Bargaining in steel, coal, commercial construction, and short-haul trucking was even more centralized, with a committee representing the entire industry sitting down with a big union like the United Mine Workers or the Steelworkers to structure a work regime for hundreds of thousands. Jimmy Hoffa, for all his faults, used militant strike tactics and strategic negotiation to create a series of regional collective bargaining regimes that standardized wages and working conditions throughout an historically fragmented trucking industry. He even brought incomes for Southern over-the-road truckers up to Northern and Western standards in the 1960s.

That system collapsed in the 1970s and 1980s, when deregulation, deindustrialization, global competition, and the growth of employer antiunionism put wages and other work standards back in competition between one firm and another. The few remaining examples are found in key occupational niches: major league sports, the Hollywood talent guilds at the major studios and broadcast networks, and West Coast longshore. And the teacher strikes that recently swept West Virginia, Oklahoma, and Arizona were also a species of sectoral bargaining, in which negotiations took place not with the individual county boards of education, but at the state capital, where the real money and power were concentrated.

But the private sector is a harder nut to crack, and as with the teacher strikes, it requires the active engagement of the state to make sectoral bargaining once again work. The Fight for $15 could only succeed when the struggle moved to the political realm, where states and municipalities passed ordinances mandating higher wages. Such initiatives might well be given more of a “bargaining” flavor in states like New York, California, New Jersey, Massachusetts, North Dakota, and Colorado, where wage boards still exist. They were put in place during the Progressive Era, designed to raise standards for workers—mainly women—in what were then called the sweated trades. And they still work.

In New York, a 2015 wage board held an extensive series of hearings, during which it heard from workers, employers, academics, and politicians before authorizing a $15 hourly minimum wage for fast-food workers, phased in first in New York City and then more slowly in the rest of the state. And such state-mandated standards are not just for low-wage workers: in the construction trades, “prevailing wage” standards ensure that on big government projects occupational wages of up to $80 an hour are paid to skilled craftsmen, union or not.

Given this successful precedent, a push for additional state-level wage boards may well be on the liberal agenda, and not just for fast-food workers. Nursing homes, retail, warehouses, and home healthcare are largely non-union, low-wage sectors of the economy that could be covered by such government agencies. Indeed, state and municipal regulation has already begun for some gig-economy workers, whose actual employment status has been so contested.

In contrast to the federal laws governing collective bargaining, such state-level initiatives are not “preempted” by the National Labor Relations Act. Seventy years ago, labor partisans saw such “preemption” as a great legal and legislative victory because it prevented reactionary politicians in places like Texas or Mississippi from enacting their own state-level obstacles to union organizing and bargaining. But as the decades passed, this federal displacement of state activism soured, as the courts reinterpreted the meaning of the Wagner Act so as to turn labor’s Magna Carta into an employer weapon. In contrast, states retain the right to set wages and directly regulate other aspects of American work life, which is why we have so many different minimum wage and rest-break standards all across the land.

All this opens the door to a new season of liberal-labor statecraft that puts high on its agenda the kind of wage boards discussed above. The Center for American Progress, a think tank with close ties to Obama and Clinton circles, is on board; likewise the Sanders and Warren campaigns. And of course advocacy of a $15 minimum wage is now standard fare for almost every Democrat, although the demand is less radical today than when it was introduced six years ago, in part because living costs have risen.

Wage boards and a higher minimum wage are a natural fit for a leftward-shifting Democratic Party: it is a policy issue legitimized by history and current circumstance; large numbers of low-wage workers will benefit; and employer opposition will be muted, because such governmental initiatives take wages out of competition throughout an entire labor market. If a union organizing drive were to force a handful of McDonald’s restaurants in Manhattan to offer higher wages while the rest pay two or three dollars less, then one can be sure that those franchisees will scream bloody murder in the months before they close up shop. But if every fast-food eatery in the borough pays the same wage, then burger prices might rise a bit, but the competitive field remains flat and equitable.

Moreover, such sectoral bargaining is a tool that has the capacity to ameliorate the employment fissuring that has been the bane of so many organizing drives. If a wage board mandates that all janitors, home health-care workers, or warehouse employees are paid the same, then unions can avoid the near-impossible task of organizing the multitude of contractors and subcontractors in those industry sectors. Indeed, some of these subcontractors are likely to welcome a state-imposed wage standard, which would stop the chiseling and constant spin-off of fly-by-night firms whose only competitive advantage is the exploitation or self-exploitation of those who work for them.

Finally, wage boards seem to offer an alternative to the social strife, the outright class conflict, that has made even the most liberal Democrat wary of too close an identification with union organizing campaigns, contract fights, and the strike itself. These governmental wage-setting institutions promise to realize one of the more problematic ideas held out by the original Wagner Act. That 1935 law was premised, in part, upon the theory that social harmony might be achieved when and if capital and labor met on somewhat equal terms—both would be organized—and thereby both had the incentive and the power to construct a set of social bargains, with the strike weapon held largely in reserve. But if U.S. employers ever thought this policy regime a good idea, they reject it today. In the private sector, certainly, and often in the public as well, managers seek domination and unitary rule. Unions therefore are in the business of creating class conflict, when and if they have the chance, because it is only under such adversarial conditions that managers are incentivized to recognize the more advanced claims of their employees.

Liberal politicians may well offer support for contemporary strikes and organizing drives, but the turmoil created by union activism often plays havoc with a candidate’s effort to build a constituency as broad and inclusive as possible, even when, in the abstract, they stand with working people. Strikes are messy and often end in a partial victory or divisive defeat. Many people, and not just those in the managerial strata, are repelled by such social conflict. So while Democratic Party liberals may join the occasional picket line, they hesitate to identify their campaign with the fate of a union struggle. Though the Fight for $15 has, from the beginning, framed its demands as “$15 and a union,” the wage plea has captured far more attention than the call for union rights. When it comes to the latter, most Democratic politicians hesitate to put themselves squarely on the side of all those shrill and disruptive organizers. Instead they use distancing rhetoric, with appeals to create a “level playing field” between management and labor, or they seek to avoid the conflictual narrative altogether by just condemning income inequality, tax breaks for the rich, and the role of the “billionaire class” in election campaigns.

Unionism, even when its chief objective is a higher wage for union men and women, embodies far more than a mechanism for ameliorating income inequality. It raises consciousness among its members, creates an oppositional and continuously active locus of power in a society otherwise dominated by capital, and has the capacity to mobilize the community as well as its own members for social struggles, thereby demonstrating both social solidarity and a progressive vision of what would constitute a good society. All this was brilliantly demonstrated during the teacher strikes that swept the nation in 2018 and early 2019.

Wage boards do none of this, and while the Fight for $15 campaigns have often been genuine social movements, they have not won for SEIU, the key funder and organizer of that movement, more than a handful of new members. And this is crucial, because without organization and the dues flow to sustain it, the labor movement will come to resemble a philanthropic foundation that makes incremental social changes but is incapable of building a self-sustaining movement.

Without unions to institutionalize them, waves of activism dissipate. The energy that went into the first Obama campaign evaporated after the thrilling election celebrations. The Occupy movement in 2011 fizzled when the tents cleared. And the contemporary anti-Trump resistance lacks an organizational structure independent of the people it has put into office. In contrast, effective trade unionism contributes not only to the mobilization of voters at the climax of a campaign season, but in the aftermath as well, when the political and organizational trench warfare continues in a large array of legislative chambers, administrative agencies, and community political institutions. In recent years the right—through megachurches, the National Rifle Association, and ad hoc donor formations—has proven far more potent than the left in this kind of continuous partisan warfare.

Now that the nation and the labor movement are shifting to the left, progressives need to push forward policies and politics that actually strengthen those working-class institutions, so they can both play a vigorous role in raising wages—by themselves or through state agencies—and begin to win the adherence of those elements of the working class who have defected. The union movement, indeed democracy itself, has always advanced when will and circumstance conjoin to create a great leap forward, as in the Civil War, the New Deal, and the sixties. A new era of state-mandated sectoral bargaining may well be part of that reinvigoration, but its promise will fall short without the rebirth of a set of working-class organizations that give ordinary men and women their own voice and the power to make it persuasive.

Nelson Lichtenstein teaches history at the University of California, Santa Barbara, where he directs the Center for the Study of Work, Labor, and Democracy.


A Seat at the Table
Sectoral Bargaining for the Common Good
Kate Andrias

There is growing consensus among left-leaning union leaders, scholars, and public policy experts that fundamental labor law reform is necessary, not only to fix a broken labor and employment regime but also to address the nation’s staggering economic and political inequality. According to conventional wisdom, however, more social democratic approaches to labor relations—for example, enabling bargaining for all workers on a sectoral basis—are in deep conflict with American traditions.

A largely forgotten moment in U.S. history draws that conventional account into question. The Fair Labor Standards Act (FLSA), first enacted in 1938, a few years after passage of the National Labor Relations Act (NLRA), empowered tripartite industry committees of unions, business representatives, and the public to set minimum wages on an industry-by-industry basis. For about ten years, industry committees successfully raised wages for hundreds of thousands of Americans while helping facilitate unionization and a more egalitarian form of governance. Though the committees were limited in their scope and power, they were an important component of a broader struggle to democratize the economy, the workplace, and the government itself. Recovering this history can help inspire more ambitious alternatives in the future.

As many observers have noted, the rise of inequality over the last few decades is closely related to the decline of unions. More than a third of U.S. workers once belonged to unions, helping to raise wages and benefits throughout the economy and giving workers a collective voice in the workplace and in politics. Now, unions represent roughly a tenth of the labor market, and only about 6 percent of the private sector workforce.

While any number of factors help explain the drop in union density, it’s clear that the U.S. system of labor law bears significant responsibility for the withering of unions in this country. The NLRA promises to protect the right to organize, to bargain collectively, and to strike. But the statute fails to offer meaningful protection in practice: enforcement mechanisms are weak, penalties are minimal, delays are lengthy, and employers are legally permitted to engage in a wide range of anti-union activity, like “predicting” negative consequences of unionization, closing down in response to unionization, and permanently replacing striking workers.

Fight for $15 protesters in New York, where sectoral bargaining among unions, employers, and the state produced a major increase in the minimum wage (Erik Mcgregor/Pacific Press/LightRocket via Getty Images)

Moreover, although the economy has become increasingly globalized and fissured, labor law still channels bargaining and concerted activity to the worksite level. Workers at a single workplace have little power when negotiating with multinational employers, and even less ability to transform conditions along a supply chain or throughout an economic sector. In addition, the law excludes from its protections many of the most vulnerable workers, including domestic workers, agricultural workers, and independent contractors, who make up a growing portion of the workforce, at least as classified by employers.

Employment law, which protects workers on an individual basis, doesn’t fill the void left by a broken labor law. Most non-union workers are employed “at will,” with few rights at work and few protections against termination. Federal law and most state laws lack guarantees of paid family leave, vacation, or sick time; statutory minimums do not provide the wages or benefits necessary to keep a family out of poverty. Government enforcement of employment law is lax and violations are rampant, particularly in low-wage workforces. Effective private remedies are often unavailable because of mandatory arbitration clauses and the difficulties of class certification. As with labor law, many workers are excluded from employment law’s coverage. In short, both labor law and employment law have failed American workers.

Against this backdrop, there is growing support among union leaders, policymakers, and academics for a different approach to labor law—a system that would protect all workers’ rights to organize, strike, and bargain for a decent livelihood, not just at individual worksites, but across each economic sector. Scholars have shown through a number of comparative studies that power-sharing over decisions about wages, benefits, and the economy through comprehensive systems of sectoral bargaining achieves more egalitarian outcomes than firm-based bargaining alone.

Still, many argue that such a system is out of step with America’s more minimalist approach to labor relations. According to the conventional account, the United States has always been committed to government neutrality on unionization, required bargaining only at the enterprise level, and kept labor and employment law as distinct regimes (except briefly during wartime emergencies and under the failed National Industrial Recovery Act of 1933). It’s a system in which workers have little say over the direction of the political economy.


The history of FLSA challenges this account. Today, FLSA guarantees minimum wages and overtime rights. It is a relatively modest statute. But FLSA’s original ambition was much greater. The statute was designed to operate in tandem with the NLRA by implementing a system of tripartite industry committees. These committees were tasked with negotiating minimum wages on an industry-by-industry basis. In short, FLSA’s backers aspired not just to ensure subsistence wages, but also to empower unions to negotiate for all workers, to build a more egalitarian political economy, and to remake the very structure of American democracy.

The enactment of the FLSA’s industry committees followed a multidecade effort by unionists, feminists, socialists, and progressive intellectuals to resist turn-of-the-century laissez-faire economics and to democratize the political economy. They were convinced that political problems and economic problems were inextricably linked, and that treating the latter required addressing the former. In their view, democracy could not function in the context of great disparities in wealth and required institutional commitments that went beyond the franchise. To that end, Progressive Era reformers sought to rebalance the power of labor and capital. They believed that the working class needed to be organized and that the state needed to ensure the ground rules to enable such organization.

FLSA’s industry committees grew out of these ideological commitments. The bill’s strongest backers in Congress and the executive branch saw the minimum-wage law as a way to ensure a system of basic equality that extended into the political, economic, and social realms. Tripartite industry committees were one way to further this goal; they would engage unions in governing the political economy, while helping to expand the reach of union-negotiated rights to unorganized workers, particularly in the non-union South.

The American Federation of Labor (AFL) had previously resisted universal minimum-wage laws on the ground that labor conditions, at least for “able-bodied” men, were better left to private negotiation than to governmental supervision. But the organization eventually came around to support the bill. Meanwhile, the newly founded Congress of Industrial Organizations (CIO) welcomed a more universal approach to labor relations. Leaders of industrial unions, like Sidney Hillman of the garment workers union, embraced the idea of intertwining labor and employment law; in Hillman’s view, the FLSA could serve as a mechanism to enhance collective bargaining and help reduce downward wage pressure on organized shops and the related problem of capital flight.

Unsurprisingly, industry groups like the Chamber of Commerce and conservatives in Congress, many of whom objected to any legislation on wages, vigorously opposed using FLSA to support tripartite bargaining. They argued it would create a morass of government bureaucracy and would be controlled by particular interests that could not possibly provide fair representation for all. Despite business opposition, FLSA passed by a vote of 291 to 89 in the House and a similar margin in the Senate. President Roosevelt signed the bill into law on June 27, 1938.

The new statute required the administrator of the Department of Labor’s Wage and Hour Division to define different sectors of the economy, and then to appoint representatives from labor, business, and the public to committees that represented each of these sectors. The committees were tasked with proposing industry-specific minimum wage standards, which could be greater than the universal minimum though they could not exceed the upper bound set by the statute. The committees were to be evenly divided among labor, business, and public representatives.

In practice, the industry committees’ work was a mix of bargaining and administrative decision-making. The committees conducted fact-finding missions and grounded their conclusions in statutory criteria. But the decision-making emerged from compromise between business and labor, with the public committee members acting as referees, albeit usually ones supportive of labor. Committee recommendations did not have the force of law until the administrator approved them after a public hearing, but the scope of his power was limited. He could not alter a recommendation; he could only veto it, and only for failure to meet statutory standards. Public hearings were collective events, with union members and business leaders showing up and testifying in large numbers.

In the end, the industry committees were a great success at their admittedly limited task. They were widely deemed efficient and effective.


Seventy industry committees were established between 1938 and 1941, and their wage orders covered 21 million workers. Unions used the process to launch organizing campaigns and to raise awareness about workers’ plight. They took seriously their responsibility to represent nonunion workers, viewing the process as a way to undertake a form of collective bargaining for unrepresented workplaces. For example, when a forty-cent minimum went into effect in the millinery industry, Max Zaritsky, president of the United Hatters, Cap and Millinery Workers International Union, AFL, commented that he considered it “one of the most significant gains of our organization and our people in recent years.” The forty-cent minimum wage spurred a new organizing drive among the hatters. Union organizers visited homes of workers and “pointed out that for the enforcement of the order they must depend not only on the government whose facilities are limited, but upon a strong union which would see to it that there were no violations or that if there were violations, those guilty would be punished.”

The CIO was similarly aggressive in capitalizing on FLSA to promote organizing. Its weekly newspaper regularly featured stories about FLSA, and locals created a system for educating workers about the wage orders and enforcing them. They urged workers to submit any FLSA complaints through the union, emphasizing that such a method would trigger protections provided by the NLRA. The CIO initiated wage recovery suits on behalf of large groups of employees and organized picket lines and strikes to oppose violations of FLSA.

By the mid-1940s it looked like the United States might expand its tripartite system to give unions formal bargaining power over an array of economic and social welfare policy questions. War boards established during the Second World War provided a potent model. But the AFL revived its longstanding opposition to governmental involvement in labor relations and opposed making the National War Labor Board’s tripartite sectoral bargaining permanent. Business and conservative forces, particularly white Southerners hostile to the empowerment of black laborers, mobilized even more forcefully in opposition. In 1947, Congress decisively changed the statutory and regulatory landscape by passing the Taft-Hartley Act over President Truman’s veto, significantly curtailing labor rights. Against this background, a proposal to expand FLSA’s industry committees was soon rejected. In 1949 the tripartite approach was abandoned.

FLSA’s industry committees were not accused of self-dealing or inefficiency, as had been the case with committees under the earlier National Industrial Recovery Act. But rising hostility to unions, the opposition of Southern Democrats to the extension of labor rights to African-American workers, and divisions within the labor movement meant that there was insufficient support for a continuation or expansion of government-facilitated sectoral bargaining. A weakened Democratic Party and an embattled, divided labor movement were willing to trade the committee system for a new minimum wage increase. Tripartism and sectoral bargaining all but disappeared from core federal labor and employment statutes.

The FLSA industry committees show us that within the broad statutory framework that still exists today, worker organizations were once granted formal power in policymaking and the capacity to bargain for all workers in an industry. Their history also blurs the line that today exists between labor and employment law. At the outset, unions were given a role in the implementation of FLSA, and FLSA was seen as a way to advance unionization.

In the current moment, which bears so much similarity to the vast inequality, concentrated political power, and corporate-friendly judiciary of the Gilded Age, we should revisit the ideas that workers, sympathetic political leaders, and intellectuals advanced during the New Deal. For now, it’s unrealistic to expect any move to empower workers to negotiate over expansive labor and social welfare regulation at the federal level. But reforms along the lines of the early New Deal are possible at the state and local level. Federal labor law preemption forecloses nearly all state and local labor law legislation, but employment law does not face the same hurdles.

Several states, including California and New York, already have tripartite commissions vested with the power to set wages and other standards. These commissions have existed for generations and have intermittently operated to bring labor and management together under state administrative supervision to set standards on an industry-by-industry basis. For example, in 2015, after growing protests and strikes organized by Fight for $15, the New York labor commissioner exercised his authority to impanel a wage board to recommend higher wages in the fast-food industry. The board members—representatives from labor, business, and the general public—held hearings over the next forty-five days across the state. Workers organized by Fight for $15 participated in great numbers at these hearings. On July 21, the board announced its decision: $15 per hour for fast-food restaurants that are part of chains with at least thirty outlets, to be phased in over the course of six years, with a faster phase-in for New York City. The wage board order was a significant victory, followed by another victory: a bill to raise the state-wide minimum wage to $15.

Support for such reform is growing across the country. Since 2012, over two dozen states and many more localities have raised their minimum wages. Even during the election that brought President Trump to victory, minimum-wage increases prevailed when they were on the ballot, as did regulations providing for paid leave and other benefits. These new laws have emerged out of organizing campaigns that frame the demand for better employment rights and social welfare benefits as part and parcel of the demand for union rights. Some, like the 2015 restaurant worker wage increase in New York, have even emerged from sectoral bargaining among unions, employers, and the state.

Meanwhile, the recent teacher strikes in Los Angeles, West Virginia, Arizona, Colorado, and Oklahoma represent another form of worker-driven sectoral bargaining. Teachers are organizing not just at one school, or in one neighborhood, but across their cities and states. Like the early New Deal efforts, the new teacher union movements collapse traditional divides between areas of law while offering an ambitious vision for reform. That is, the teachers demand not just fair wages and good benefits for themselves, but also adequate education funding for their students. And they demand the right to negotiate about those matters on a sectoral basis.

Hotel workers, Google employees, and airport workers are also engaging in broad-based collective action at levels not seen for several decades. These movements, too, are not just seeking better conditions at individual worksites; they are demanding change across their sectors, while challenging the basic assumptions underlying current workplace law. From this on-the-ground organizing, the outline of a new, or revitalized, model of labor law is emerging that would allow workers to build strong unions at their worksites while also giving them a seat at the table in decisions about the direction of the broader political economy.

The precise contours of any future labor law remain uncertain, but the need for reform is clear. Taking a cue from the early twentieth century, we might once again begin to imagine a legal regime that both encourages workers’ collective activity and gives their organizations real power in the governing process. We might begin to imagine a more enduring democratic and egalitarian political economy.

Kate Andrias is a Professor of Law at the University of Michigan Law School. This essay draws from an article, “An American Approach to Social Democracy: The Forgotten Promise of the Fair Labor Standards Act,” originally published by The Yale Law Journal Company, Incorporated, in the Yale Law Journal, vol. 128, pp. 616–709 (2019).


No Power without Organizing
Rich Yeselson

Around twenty years ago, when I worked in the labor movement, I used to go up to Toronto to help a hotel workers’ union fight the boss. During one contract dispute, union members and staffers had an understanding with the cops and management that they could stand in front of a car attempting to enter the property for exactly a minute before letting the vehicle pass. As we were in Ontario, there were a lot of U.S. license plates amid the Canadian ones. The difference between the two groups was striking: the Canadians took the whole thing in stride, waited for their minute to pass, and then went on their way. But the U.S. drivers were enraged! A lot of them screamed and cursed and said that we “had no right” to delay their journey, and urged the cops, blandly looking on, to bust us.

What I took from this episode was a sense that Canadians, regardless of their politics, accepted unions as part of their political culture. They were institutional articulations of the working class. Americans just viewed unions as a nuisance—a special interest.

In Labor and the Class Idea in the United States and Canada, Barry Eidlin, a sociologist at McGill University, has confirmed my anecdotal intuition with a lucid and provocative work of historical sociology. Eidlin wants to understand why and how the fortunes of the labor movements in the United States and Canada so sharply diverged beginning in the mid-1960s. At that time, union density in both countries stood at between 25 and 30 percent. But since then American union density has steadily declined to its present 10.5 percent, as low as it was at the start of the Great Depression, while Canadian union density went up, declined a bit, and then stabilized at its current 28 percent.

So what happened? Eidlin argues that the seeds of this disparity were planted during the 1930s and 1940s. At that time, the U.S. labor movement had more power and success than its Canadian counterpart. Franklin Roosevelt, faced with an upsurge of labor militancy, “adopted a co-optive response to worker and farmer upsurge,” writes Eidlin, signing the National Labor Relations Act (NLRA), supporting union organizing (sort of), and making labor an indispensable part of the New Deal. Union growth skyrocketed through the late 1930s and, again with the administration’s support, during the Second World War.

Meanwhile, in Canada, the Liberal Party attracted no politicians as strategically adroit as FDR. The ineffective William Lyon Mackenzie King, who served noncontinuously as prime minister for over eighteen years beginning in 1921, was closer to Herbert Hoover than Roosevelt. As Eidlin puts it, he “hesitated to implement comprehensive collective bargaining policies because of an ideological commitment to a value normally viewed as classically American: voluntarism.” R. B. Bennett, the Conservative leader who served as prime minister from 1930 to 1935, doubled down on King’s free-market nostrums while also promoting arrests of radicals and violent repression of labor. Neither Canadian party supported an NLRA-style law.

From left: AFL president William Green, U.S. Secretary of Labor Frances Perkins, and United Mine Workers of America president John L. Lewis in 1935, the year the National Labor Relations Act was adopted (Library of Congress, Prints & Photographs Division, photograph by Harris & Ewing)

The Canadian labor movement didn’t win the concessions that FDR had granted in the United States during the 1930s. The Conservative Party unconditionally opposed organized labor, and the Liberal Party failed to support unionists enough to politically consolidate them. Canadian unions won wartime concessions due to their continued militancy and, ultimately, the passage of the Industrial Relations and Disputes Investigations Act in 1948, the rough analogue to the NLRA. The paradoxical result was that the Canadian unionists forged themselves as an oppositional “class representative.” Rather than absorbing themselves as one interest among several into a major party, they nurtured their own oppositional party, affiliating in 1943 with the agrarian-based Co-operative Commonwealth Federation, which became the New Democratic Party (NDP) in 1961. And, despite the fervent desire of the Canadian business class, Canada did not pass an equivalent to the labor-restraining Taft-Hartley Act of 1947. (It’s not entirely clear from reading Eidlin why the Canadian business class and its conservative political allies were not as ferociously supportive of such a law as were the National Association of Manufacturers and the Southern white supremacist bloc in the United States.)

Moreover, facing a milder postwar red scare than McCarthyism in the United States, Canadian labor did not fully divorce itself from Communist and other radical influences as U.S. unions did. Unions in both countries took a more nationalist turn after the radicalism of the Depression era. In Canada, this deepened labor’s leftist oppositional stance, connecting nationalism to criticism of their huge and powerful neighbor to the south. The nationalism of the U.S. labor movement had an entirely different social meaning, embodied in the reactionary Vietnam War hawk George Meany, who as president of the AFL-CIO famously opined in 1972 that he saw no particular reason to organize workers beyond labor’s current membership. Within unions and the Democratic Party, Meany’s forces vehemently fought the New Left. In Canada, by contrast, labor leftists in the 1970s helped massively increase public-sector organizing, just as globalization and the collapse of the New Deal order were undermining manufacturing unions in the United States.

American labor had gotten a head start by incorporating itself into the New Deal. But, over time, it grew complacent and dependent upon a party it could not control, stuck in what Mike Davis memorably described as a “barren marriage.” Canadian labor, harshly rejected by both major parties, had a more difficult period during the late 1930s and 1940s. Eventually, however, it controlled its own autonomous working-class party. It devoted itself not to “access” to power, but to mobilization and generating its own power.

Eidlin incisively dismantles reasons that scholars have routinely suggested for differences between the Canadian and U.S. labor movements—in particular, the old chestnut of national comparative studies that the two countries have dramatically different national cultures: the United States individualist, Canada collectivist. Instead, he focuses on the administrative and legal structures under which unions operate.

In the United States, labor was increasingly hamstrung by bureaucracy after Taft-Hartley. The NLRB general counsel—a position created by Taft-Hartley—is charged with deciding which cases the board should take up, reducing unions to supplicants rather than advocates on their own behalf. In Canada, unions represent themselves directly. Moreover, U.S. unions must deal with an onerous judicial review process: federal courts hear appeals of board decisions and frequently overturn them on substantive matters. As Eidlin writes, this “creates huge incentives for employers to appeal and delay as much as possible.” In Canada, judicial review is mostly limited to procedural issues, making the labor boards the “final and binding arbiter” of the labor-relations system.

Eidlin also attempts to address another common reason suggested for the relative weakness of U.S. labor: racism. He concedes that, while racism is pervasive in the history of both countries, American racism “has had a more damaging effect on labor in the United States compared to Canada.” Yet he also asserts that racism is exacerbated by a labor movement’s inability to create an independent class-based political formation. But the Socialist Party of the early 1900s, the most successful third party ever on the left, was, like the rest of America, rife with racism; nothing about being an Independent Left Third Party (ILTP) precluded that. The AFL unions of that time frequently barred black workers from membership. Later, the overwhelmingly working-class civil rights movement was indeed boosted by social democratic black trade unionists like A. Philip Randolph and Bayard Rustin and pro-union social democratic civil rights advocates like Dr. Martin Luther King, Jr. But unions did not lead this movement, and significant percentages of white working-class and unionized voters stood in the way of their struggle and lent support to George Wallace in 1968 and Richard Nixon in 1972. (Indeed, Nixon carried the union vote that year.) Similarly, the failure of Operation Dixie, the postwar drive to increase union membership in the South, is symptomatic of racist obstacles to unionism in the United States that are far more powerful than those in Canada.

One of Eidlin’s sharpest insights, paradoxically, points to how American racism is not given its due in the book. Eidlin compares the 1968 report by the Canadian Task Force on Labor Relations—a comprehensive historical analysis of the Canadian labor relations system that included many reform recommendations implemented by the government—with a much briefer 1970 memo prepared by the U.S. Department of Labor, “The Problem of the Blue-Collar Worker,” which focused on the individual issues of worker alienation and did not recognize unions as the institutional representatives of the working class. This is indeed a fascinating comparison. But an even more resonant parallel to the Canadian study is the much better-known Kerner Commission report on “civil disorders,” also released in 1968, in the wake of the violent urban racial conflicts of 1967. Structural racism remains the most prominent and recognizable feature of the hierarchical social order in the United States. With some significant exceptions, that racism has repeatedly undermined the possibility of American workers seeing themselves as agents of the class idea.

At times, it seems to rankle Eidlin that U.S. labor took the good deal it was offered during the 1930s. At one point, he notes that union membership “exploded” under the NLRA, but in the very next sentence he writes, “The Wagner Act’s perceived benefits drew labor toward Roosevelt [emphasis added].” But the benefits weren’t merely perceived; they were concrete. The New Deal delivered the goods, including the passage of Social Security and the Fair Labor Standards Act of 1938 (which codified a minimum wage and overtime pay). Eidlin writes that we often forget how New Deal policies “were contingent outcomes of political battles, and the degree to which historical outcomes were suppressed.” But once FDR seized control of the presidency and gained the support of Southern white supremacists for the early New Deal, it’s hard to see how things might have turned out other than the way they did. It’s easier to imagine Canada spawning a gifted, opportunistic politician like FDR who would have cut a deal with labor in the 1930s than it is to think that U.S. unions would reject the first president to recognize unions’ right to exist and even grow.

Eidlin attributes much of the failure of the U.S. labor movement to start an ILTP to conflicts between the AFL and CIO. But earlier fights between the AFL and IWW hadn’t prevented the rise of the Socialist Party (SP) decades earlier, and the obstacles to launching a successful labor-driven third party were, by the postwar period, extensive. There was also a fierce intra-CIO dispute between its still influential communist wing and the abrasively brilliant social democratic UAW president Walter Reuther. That alone made independent political collaboration hard to imagine. The structural difficulties that, in part, limited the growth of the SP earlier in the century had also persisted. By the end of the Second World War, the effective anti-labor conservative alliance between the white supremacist Southern Democratic bloc and most of the Republican Party that had begun in the late 1930s had consolidated. The passage of the 1947 Taft-Hartley Act was one major result of that alliance. Harry Truman’s failed veto of the law highlighted the risks of a third party, which would, unlike in Canada, have to simultaneously fight and win separate legislative and executive branch elections.

An independent left party in the United States would also have had to contend with a far more inhospitable political climate during the early Cold War. Eidlin allows that the Cold War made an anti-communist “backlash all the more inescapable.” But he doesn’t see why that backlash was far more pervasive in the United States than in Canada. The United States was the adversary of the Soviet Union and, until 1952, the only other nuclear power. The anti-communist right in the United States was morbidly linked, in a way it could not quite be in Canada, with “mushroom clouds, real and metaphorical,” in the words of Paul Boyer, the historian of American atomic culture. While a few left-wing intellectuals and journals (like this one) survived the McCarthy period, it is hard to imagine that a left-wing third party could have sustained itself during the coercive consensus of the 1950s.

Eidlin’s characteristic care in comparative structural analysis falters in his discussion of another critical period, the 1970s, when American labor failed to win legislative reforms under a Democratic president and Congress, while Canadian labor did win such reforms. Eidlin briefly points to the ambivalent Democratic Party support for the Labor Law Reform Act of 1977–8; the Democrats treated labor as merely a special interest, one to be judged against the needs of other party constituents. “Whereas the Canadian labor regime allowed for an effective translation of class mobilization into the political realm, leading to regular policy reforms,” writes Eidlin, “the US labor regime consistently mistranslated class mobilization into the political realm, diffusing labor’s independent political pressure.”

This misses a number of features of the American system that would have made a Canadian outcome all but impossible. We shouldn’t assume that an independent labor party would have produced better labor law when, under a first-past-the-post presidential system, it could just as easily have split the vote and led to Republicans capturing the presidency in 1976. Moreover, Canadian labor reform in the 1970s happened at the provincial level across the country. By contrast, U.S. labor law preempts most state-level legislation that would revise or repeal it. For example, in 1994, the Minnesota Supreme Court struck down that state’s anti-striker-replacement law, an effort by state Democrats to replicate the failed national version of the law. Even so, the 1978 labor reform bill could have passed were it not for another procedural obstacle: the filibuster. The mundane but profound truth is that the U.S. political system has far more chokepoints than Canada’s parliamentary, (effectively) unicameral system. Antimajoritarian procedures in the United States, including voter suppression policies, have long structured political outcomes.

At the end of his book, Eidlin observes that class politics in the Canadian labor movement and its political vehicle, the NDP, are so diluted that political observers see no salient differences between it and the business-compliant Liberals and Conservatives. The “class idea is under attack” in Canada, writes Eidlin, but it remains “more embedded” than in the United States. Eidlin offers no explanations for this deterioration in labor’s political and economic position in Canada, which makes this reader wonder why the “class idea” of the book’s title remains vulnerable to erosion even in a country that produced, according to Eidlin, such a strong version of class representation. Why, if the NDP had been so successful in representing the interests of the Canadian working class, has it recently moved away from that position?

This raises the possibility that the U.S.-Canadian comparison, while yielding many sharp insights, needs to be expanded further. Today, across the economically advanced democracies, a cohort of young leftist intellectuals is shaping a new analysis of political culture and economy. But the large institutions of the left—its traditional social democratic parties and unions—are, in many places, in crisis.


In 2017 Jagmeet Singh became the leader of the New Democratic Party, the labor-backed “third” party in Canadian politics founded in 1961. (ideas_dept/Flickr)

Eidlin is most pessimistic—as is everybody else—about the United States. Canada’s current baselines for union density and political power are indeed higher than in the United States, yet there don’t seem to be many other reasons for a more optimistic prognosis for Canadian organized labor. In fact, it might be argued that just as Canada’s labor movement didn’t get its version of the Wagner Act until a decade after the United States, it may now just be deteriorating on a later timeline.

For either country, as Eidlin emphasizes, there is no path forward without a renewed focus on workplace organization. Only workers at the point of production, provision, and distribution can enforce agreements and hold management accountable. Workplace organization, in turn, must create and sustain “durable collective identities” for today’s working class. Even in a period of great difficulty for the labor movement, we might now be seeing the development of those identities in the inspiring workplace organizing in sectors like education and media.

But militancy in those venues isn’t enough to revive U.S. labor when the commanding heights of the economy remain union-free. In a 2017 article in the Yale Law Journal entitled “Nothing New Under the Sun,” which Eidlin cites, Matthew Ginsburg, an associate general counsel of the AFL-CIO, directs the movement to its most difficult organizing challenges. “There’s no avoiding Walmart, Toyota, Amazon, T-Mobile, and Federal Express. The greatest concentration of unorganized workers in the United States is still employed at these and similar large multinational corporations.” This would truly be organizing on an enormous scale—the only way labor in the advanced world has ever grown enough to increase its political and economic power.

It seems barely conceivable to organize these goliaths any time soon. But while workers might gain benefits from less ambitious projects—alt-labor and worker centers, pushes to increase the minimum wage—nothing other than taking on the biggest companies can create political change on the scale required. Alt-labor and worker centers are boutique-level interventions. They cannot generate sufficient revenue from workers themselves without being tied to specific company bargaining agreements. And for workers’ organizations to depend upon capital’s philanthropic arm, an idea that Ginsburg rightly derides, is to become a grant recipient, not a labor movement. Fight for $15 has been stunningly successful in every way—raising the wages of millions and pushing the Democratic Party to the left—except for actually increasing union membership. Similarly, the idea of implementing sectoral bargaining is percolating among labor intellectuals in the United States, but massive militancy and numerical expansion in the labor movement is the only leverage that can win such a large reform. As Ginsburg argues, “Organizational growth on a firm by firm basis must precede any effort to significantly change the legal rules governing labor relations in the United States.”

The intellectual labor can bear fruit someday, but the only way to win big is to go big. Workers must build those durable collective identities on their own behalf. And unions must institutionalize that social solidarity. For now, we are left with incipient possibilities.

Barry Eidlin has given us a smart lay of the land for both nations. We know much more about the different labor regimes in these neighboring countries than we ever have before. And Eidlin’s core argument—that New Deal–era support for U.S. labor limited its autonomy decades later, while Canadian labor’s lengthier struggle to win concessions from the state steeled its independent, class-conscious perspective—is arresting and persuasive, even if it’s almost impossible to imagine a plausible historical counter-narrative in the United States. What he can’t explain is why today, even in Canada, the class idea is more fragile than ever, or how U.S. labor can recapture its New Deal militancy in the political economy of the twenty-first century. Nevertheless, Labor and the Class Idea is a stimulating contribution to today’s movements for egalitarianism and labor solidarity, and activists throughout North America should ponder it for some time to come.

Rich Yeselson is a contributing editor to Dissent. He is writing a book about the causes and consequences of the passage of the 1947 Taft-Hartley Act.


Airport Workers Strike Back
An Interview with Sara Nelson
Sarah Jaffe

The 2019 government shutdown, the New York Times declared, made Sara Nelson “America’s most powerful flight attendant.” Through her bold leadership and willingness to threaten a general strike, Nelson, president of the Association of Flight Attendants, became one of the country’s most recognizable labor leaders, seemingly overnight. Shortly after the shutdown’s end, Nelson joined Dissent’s Belabored podcast to talk about the symbolism of airport workers striking back against the president, the current wave of union militancy, and why so many of today’s dynamic unions are led by women.

Sarah Jaffe: Longtime labor watchers were struck by the fact that it was airport workers who helped end the recent shutdown, specifically air traffic controllers, considering Reagan’s attack on air traffic controllers was what crushed strike activity in the United States in the 1980s. Airports are becoming a center of protest and action in the Trump era—I’m also thinking about the taxi workers’ strike around the Muslim ban [at JFK in 2017]. Can you talk about the airport as a center of American life and the importance of the work that goes on there and how much power that gives airport workers?

Sara Nelson: First of all, we made a grave mistake in the past strike [the 1981 PATCO strike] in not understanding how we are all connected—not understanding how individual workers’ issues and their efforts to gain contracts and recognition for their work are directly tied to the rest of us being able to fight for the same issues. That was a grave mistake, we


should learn from it, and we should never allow it to be repeated. Anyone who has any labor consciousness certainly can’t help but think of [PATCO]. There’s a general recognition of what happened, even if people don’t understand the full ramifications. We do have tremendous power. If airline workers had stood together at that time, we could have stopped the attack and the signal from the government that it’s OK to plow over workers’ rights, to really just put them in a position of being forced to do what those with power and money want them to do.

Sara Nelson, the International President of the Association of Flight Attendants, at a press conference on aviation safety during the shutdown (Andrew Caballero-Reynolds/AFP/Getty Images)

We’ve seen a steady decline in both union membership and strikes in this country, and as a result, the American worker is working harder than ever for less pay. We’re seeing rubber bands break—all across the country. We’re seeing teachers rise up, people understanding their power in their workplace. And to your point about power at airports, yes: airports are where people converge. People of all races, genders, cultures, and creeds come together and actually climb into a metal tube together. We have a


microcosm of America on every flight. [The airport] is a central place that everyone can relate to, even those people who fly in corporate jets, because they take off and land at airports, too. They count on people buying tickets to come to cities where they’re putting on events and where they’re selling what they’re selling. Airports are places where people pay attention. There’s tremendous power in that; we should use it.

On a very separate track, we had an emergency webcast with the flight attendants [recently] to really define what’s at stake—to help them understand that they also have rights, separate and distinct from what we’re talking about in terms of a general strike. That is, to look out for their own safety. They are in these sensitive positions, where they’re looking out for the good of the public, and they see that there are lives in danger. They can withhold their service and say, I’m not going to participate in that: I’m not going to fly this flight because I believe that everyone is in danger. That is a right that they have today. We are making it very clear to our members that they have that right to withhold that service if the scene becomes too unsafe.

Sarah Jaffe: You trained as a teacher before becoming a flight attendant. It’s so striking to me that the leadership in labor is coming from teachers and flight attendants. It’s coming from workers who aren’t on a factory line but who work every day with people and who are responsible for the safety of people. Flight attendants are also literally in a field where the idea of emotional labor was invented. What is it about flight attendants, teachers, and nurses that’s making them the leaders of today’s labor movement?

Sara Nelson: Well, I love my brothers, but let’s be clear: all those professions you named have a high percentage of women. And women get results. Whether it’s [at work or] in the home, for our children, women are focused on the results, and they’re not afraid to speak with people who don’t agree with them or fight fiercely for the people they love. That’s really what’s going on here: you’ve got people who are saying that failure is not an option and that we are going to fight fiercely for the people that we love. The people we love are our students, our flying partners, and the passengers who are in our care every day. These people enrich our


lives. We also see on our planes, quite frankly, that we deal with the occasional jerk—and everybody remembers that—but let’s face it: flight attendants know firsthand that Americans are good people. There’s way more that we have in common than we have that’s different. If our country were really in a state that some people are trying to make us believe it’s in, there is no way those planes could take off and land. There’s just no way. You’re jamming people in completely uncomfortably, forced to sit together, who have to do things that they don’t want to do: stay in their seats with their seatbelt on when it’s bumpy, put their tray tables up, put their phones away, and then come through security. They have to behave themselves, and they do. And not only do they behave themselves, but they’re generally kind and nice to the people around them. That’s what we see every day. In the teaching profession, the people who are on the frontlines, the people at our post offices, flight attendants, and people interfacing with the general public—we actually know that the vast majority of people in this country care very deeply for the people who are next to them. We know that we can lead a dialogue that actually brings people together. In this moment, reversing PATCO, learning about power from labor being involved in the shutdown, from the teachers’ strikes—all of this is about reviving a labor movement that really fights for the working class. We are learning that power starts in the workplace. If we understand that and come together in our workplace, the rest of American life will follow, including our politics.

This interview has been edited and condensed for clarity.


After Act 10
How Milwaukee Teachers Fought Back
Eleni Schirmer

In 2011, as the Great Recession hit public coffers, Wisconsin’s Republican governor Scott Walker addressed a purported budget crisis by attacking public workers. Walker’s signature bill, Act 10, struck down public-sector unions’ ability to automatically collect dues and limited bargaining to wages capped at inflation. Teachers could no longer negotiate class sizes. Nurses were forced to accept mandatory overtime. Unions lost money and members. Similar anti-union legislation quickly spread across the United States. These state-level offensives culminated in a national policy shift in the summer of 2018, when the Supreme Court’s Janus ruling made it illegal for public-sector unions to automatically gather dues from employees. Critics wrung their hands and pronounced it a nail in the coffin for public unions. What started as a Wisconsin union problem has now become a national one. Yet today, teacher militancy sweeps the nation. Teachers from West Virginia’s fifty-five counties filed onto picket lines for the second time in a year. Strikes have surged across Oklahoma, Arizona, Colorado, and California. Chicago charter school teacher unionists are fighting against the racist project of education privatization. Amid this burgeoning resistance, Wisconsin unionists appear a cautionary tale of defeat, the concussed victims of a brutal first round. Union members made up less than 14 percent of the state’s workforce before Act 10; four years later they were barely 8 percent. Teachers were hit especially hard by the bill. Many fled the profession; more than 10 percent of teachers in Wisconsin quit the year after Act 10, a spike from the 6.4 percent exit rate the year before. This contradictory landscape raises a question: are today’s teachers’ unions the victorious challengers of capitalism, or among its


many victims? What constitutes success for a movement, and what constitutes defeat? The long history of Wisconsin’s largest teacher union local, the Milwaukee Teachers’ Education Association (MTEA), helps us answer this question. MTEA rebuffed solidarity with civil rights and labor groups in its first decades in order to secure bread-and-butter benefits for its predominantly white teachers. MTEA’s narrow self-interest fueled the conservative movement that led the charge against public education and teachers’ unions. Yet, starting in the mid-1980s, MTEA activists began forging a new vision for teacher unionism and public education. This vision has been exported to teachers’ unions nationwide, thanks to their local activism and their nationally circulated progressive education magazine, Rethinking Schools. Today, MTEA proudly declares itself a social justice union, with racial justice and strong community relations central to its mission. If we measure unions’ successes exclusively in terms of short-term outcomes—did they get the goods, did they defeat the bad politicians—we ignore the broader scope and power of union activity. Workers’ collective power does not only come from these wins; it also comes from creating and nurturing a space for future movements and activists to take root. This dimension of unionism takes time and can be difficult to detect. But such tasks are arguably the more vital project of a union: to change workers’ sense of what is possible, to sow solidarity, to bring faraway aspirations into reach. This vantage reveals a subtle but crucial truth: Unions don’t organize workers; capitalism does that work. Unions, instead, reorganize workers. Our current economic system has forced education to accommodate itself to hypercompetitiveness. Teachers must prepare students for high-stakes standardized tests, quantitative measures that compress education into a market value. Schools are tasked with grooming students to become not participants in democracy, but future widget workers of the world. Public goods and institutions are depleted of resources. Teachers’ unions, in this light, must do more than fight for better wages; they need to organize workers and schools and communities in order to counter capitalism’s corrosive effects. Yet unions are not preordained to organize for progressive movements against


capitalism—to demand investment in care labor, spaces for play, time for art, taxes on the rich, and meaningful education for people of all income levels, black and brown and white. As the history of MTEA illustrates, unions can deepen and exacerbate the social divisions created under capitalism, but they can also bring people together in the name of a common good. They can build our collective capacity to imagine the world we want. In its first decades, MTEA cast itself as an organization of professionals. Teaching was one of the few paid employment opportunities available to women at the turn of the twentieth century. Many who became educators were eager to distance themselves from their working-class backgrounds. A number of teachers in Milwaukee and elsewhere had little interest in joining labor federations, preferring to seek workplace improvements by emphasizing their genteelness and professionalism rather than by forging class solidarities. Milwaukee teachers first organized under the National Education Association (NEA), which was at the time unaffiliated with organized labor. The NEA held meetings in grand ballrooms. Its leaders, many of whom were district administrators, mailed teachers dainty handkerchiefs along with membership solicitations. The competing teacher organization, the American Federation of Teachers (AFT), was a member of the local labor federation. Unlike the NEA, it encouraged teachers to explicitly use the words “union” rather than “association” and “strike” rather than “professional withdrawal program.” The point of a union, AFT leaders asserted, was to draw members into political struggle. To do otherwise, as one AFT leader claimed, would be “to sterilize and fertilize the plant at the same time.”


Renee Jackson at the protest in the Wisconsin State Capitol on March 7, 2011, against Act 10, which restricted collective bargaining for public-sector workers (Justin Sullivan/Getty Images)

When Milwaukee teachers did eventually seek collective bargaining rights in 1963—thereby converting their association to a union—it was due to the organization’s predominantly white teachers’ growing fear of black students. A wave of migration of black families from southern cities crested in Milwaukee in the early 1960s, and many white teachers saw a union as a means to secure enhanced corporal punishment rights, stronger powers to remove “disruptive” students from the classrooms, and greater protections for teachers from “student attackers” in the city’s predominantly black schools. By the mid-1960s, a winnowing welfare state put schools and educators under great pressure to solve society’s ills. Whereas the New Deal programs of the 1930s addressed inequality through jobs programs, minimum wages, and labor protections, in the 1960s the state rolled back its commitment to economic redistribution and instead turned to schemes to develop human capital, such as job training programs. The War on Poverty took aim at the “culture of poverty”—the behaviors and dispositions of the poor—and education became one of its main weapons. Legal rulings called on schools to desegregate but left in place the vast inequalities between black and white neighborhoods. Public


schools were left to shoulder these systemic inequities. Teachers were responsible for curing the effects of rising poverty, housing insecurity, and unemployment, while earning wages that only just enabled subsistence. As teachers were increasingly asked to solve society’s problems, they understandably wanted more control of their jobs. But in MTEA, racist fears and narrow self-interest structured their concerns. White teachers looked to gain power over black students, rather than build power with black and poor communities for better healthcare, housing, and employment. Throughout the 1960s and 1970s, the union leadership rejected civil rights movements’ demands for desegregation. In 1974 MTEA disaffiliated from the state teachers’ union, partially in rejection of the state union’s increasingly progressive political direction. While these moves enabled MTEA to secure strong contracts for its teachers in the short term, they crippled teachers’ capacity to build the broader political movements that would defend public schools in the long term. MTEA rendered itself vulnerable to the rising tide of education privatization that would soon sweep through Milwaukee. By the 1980s, jobs were hemorrhaging from Milwaukee and incarceration rates soared. The conservative government privatized the state’s basic welfare provision; Milwaukee became a laboratory for national welfare reform. Milwaukee’s children, especially its children of color, increasingly came to school burdened by the traumas of poverty, their parents struggling to find work, shelter, and healthcare. The legitimacy of public schools themselves began to crack, as the growing conservative education movement took the lead in criticizing the quality of the city’s schools. In 1990, the nation’s first comprehensive school voucher program opened in Milwaukee. It was created with political and financial support from conservative philanthropists invested in the ideological project of privatization and religious evangelicals interested in skirting laws preventing state aid for religious schools. The program also received support from a cadre of black community leaders who saw the free market as the solution to racial inequalities baked into public institutions. Howard Fuller, a black education reform activist in Milwaukee and former superintendent of Milwaukee public schools who had butted heads with MTEA for decades, formed alliances with conservative philanthropies,


like the Bradley and Walton Family Foundations, to become a national leader of the “school choice” movement. (Years later, Fuller supported Betsy DeVos in her nomination for U.S. Secretary of Education.) Thanks to Fuller and others’ leadership, a number of black families were drawn to school choice to circumvent a teachers’ union that had turned its back on them, and to seek quality education for children of color, be it publicly or privately provided. The private financial investment, combined with the claims for racial justice, made the Milwaukee voucher experiment a political triumph in its early years, incubating the national conservative reform movement. By the 2010s, education privatization had become a major rallying point for Wisconsin’s conservative movement against the public sector, helping Republicans secure the state legislature and governorship in 2011. Despite the increasingly politicized landscape of education in Milwaukee, MTEA saw its primary function as negotiating and administering teachers’ benefit packages rather than fighting for students and teachers in defense of public education. Members had very little say or stake in the union’s operations, much less its vision and priorities. In 1981, a feisty group of progressive teachers came together to form a caucus that challenged the MTEA leadership’s apolitical posture. They saw their role as unionists as integrally connected to fights for communities and schools, especially when it came to racial justice. Though predominantly white, this group of teachers articulated a vision of public education that grappled with stark inequalities and sought to inspire the movements necessary to transform them. Their caucus was explicitly critical of MTEA’s leadership, its heavy reliance on union staff to execute the union’s priorities, and its failure to address racism. To augment their work, these educators, along with other community activists, founded Rethinking Schools in 1986, which now serves as a leading voice in progressive educational reform around the country. The progressive caucus distributed copies of Rethinking Schools to building representatives at MTEA’s monthly meetings. Its issues chronicled Milwaukee’s specific challenges, from curriculum adopted by the school board, to the union’s negotiations, to city politics. It produced some of the earliest reporting on the Bradley Foundation, a Milwaukee-based conservative organization that funded the city’s school choice initiative.


By the 2000s, neoliberal education programs had moved from the fringe to the center, as Republicans and Democrats alike adopted the mantle of “choice” and “accountability.” This agenda subjected public schools to market standards, forcing competition by awarding aid only to top-performing schools and sanctioning poorly performing ones. With an eye to winning funds from Barack Obama’s Race to the Top (RTTT) program, in 2009 Wisconsin’s Democratic governor, Jim Doyle, and Milwaukee’s Democratic mayor, Tom Barrett, began plotting the mayoral takeover of the Milwaukee public schools, a rumored RTTT eligibility criterion. The proposed takeover would have dissolved the city’s elected school board, stripping the predominantly black and brown families attending Milwaukee public schools of democratic representation and eventually leading to the closure of many schools. The threat of mayoral takeover spurred a new urgency among those fighting for public education in Milwaukee. Despite support for the takeover from key power players—the Democratic mayor, the Democratic governor, state legislators, business alliances such as the Metropolitan Milwaukee Association of Commerce, and the national group Democrats for Education Reform—many teachers, students, and community activists fought back. Some two dozen community groups, spearheaded by the teachers’ union, formed a coalition to demand a democratically governed public school system. This coalition organized protests, including at the homes of obstinate Democratic legislators. They attended hearings. They wrote letters to the editor. They picketed the press for failing to report on the plan. In the process, disparate groups became unified. Together, they articulated a grassroots, pro-labor, pro-democratic, anti-racist vision for public education. Within months, the “Stop the Takeover of MPS” coalition, as it called itself, had indeed stopped the takeover. But its work was far from over. In 2010, when Act 10 was still a twinkle in Scott Walker’s eye, Rethinking Schools founder, leader of MTEA’s progressive caucus, and anti–mayoral takeover activist Bob Peterson decided to run for MTEA president. When he won the election in April 2011, shortly after Walker’s Act 10 passed, MTEA found itself guided by people accustomed to organizing.


Immediately following the law’s passage, Peterson and his allies set to work reorganizing their union. In lieu of collective bargaining, Peterson declared, MTEA would embrace collective action. Instead of contract protections, community alliances would strengthen schools and classrooms. Today, when asked how they got involved in their union, many Milwaukee teachers hiss through gritted teeth, “I’m here because of Scott Walker.” One teacher I spoke to in 2017, as the MTEA geared up for more budget cuts, pushed up her sleeves and told me, “Oh it’s Scott Walker. He organized us here. He’s woken the sleeping giant.” Milwaukee teachers managed to defeat another attempted takeover in 2016, this time from the state of Wisconsin. Working with community coalitions, the teachers have mobilized to oppose unregulated charter school expansion. They have successfully advocated to build a community schools program that provides wraparound services for students and families and operates through community decision-making, rather than by command of private management companies. Teachers have joined with students to fight against bringing more police into schools, demanding instead more funding for educational resources. The union, in other words, has come to life. This revitalization has happened in part because of the loss of legal protections for Wisconsin unions. Milwaukee teachers have been forced to give up their all-too-common understanding of a union as an insurance company–cum–vending machine: put in dues, jimmy out legal protections if things go bad with the boss. Act 10 dismantled the laws that enabled unions to passively accumulate resources and powers. In today’s Wisconsin, if teachers want a union, they have to show up and fight for one by actively organizing. It’s an uphill battle. Simply to be recognized by the state under Act 10, each union must conduct an annual recertification election and win the support of 51 percent of all eligible members—not just of those who vote, so every abstention effectively counts as a “no.” Imagine if Governor Walker had been held to the same standard, forced to win an election every single year with 51 percent of the possible electorate. It would prove an impossible threshold that would occupy all of his administration’s time and energy. But following Act 10, MTEA committed to training teachers to organize.


Teachers regularly gather in the union’s conference room to discuss strategies and skills at the building level. But they have also taught themselves how to organize for the struggles outside of classrooms through popular education workshops on topics like political economy and its effects on schools. “Working in public education is political,” MTEA vice president Amy Mizialko told her fellow teachers during one organizing meeting in 2017. “It’s a fight about our taxes, it’s a fight about our communities, it’s a fight about what our kids are going to do, what they’re going to learn. . . . Whether we want it to be political or not, we are. And our union is engaged . . . we have a voice and power that move us forward.” In addition, the union has brought new focus to the work of educational assistants (EAs), the unsung, low-wage workers, predominantly women of color, who provide vital classroom support to students and teachers yet often live paycheck to paycheck, working several jobs to make ends meet. In 2014, MTEA embarked on a campaign to raise EA wages, linking up with local Fight for $15 activists. As part of this campaign, school board members spent a day walking in the shoes of EA members, to see and feel what life was like for someone earning $12 an hour. Board members were picked up at their homes at 5 a.m. and brought to an EA’s first shift job, then to the second, even a third. Thanks to their organizing, EAs earned themselves a wage increase. In 2017 MTEA also made a small but radical change to its bylaws, allowing EAs to hold union officer positions for the first time (they had previously only been allowed to serve as officers in their own sub-unit). With near-unanimous support, MTEA representatives voted to ensure the organization’s most vulnerable members had the power to lead the organization, bringing rank-and-file democracy to a new level in the union. Now, when Milwaukee teachers want to win demands, they don’t send a cadre to closed-door bargaining sessions. Instead, a throng of teachers gathers in MTEA’s “war room,” a sprawling basement lair with butcher paper taped to walls charting support at each building. They crowd around folding tables, phoning their fellow teachers late in the evening to tell them about the plan to pack the school board meeting the next week, or to wear their “Black Lives Matter” T-shirts to school. They hold community events to build support for their demands. One February 2017


weekend, for example, in anticipation of Walker’s 2017 budget, MTEA hosted a community “art build” to make banners, signs, and parachutes for the upcoming protests. A local graphic designer, a thirty-something Latino man with warm eyes, told me some of his artist friends asked him to join. “It’s a good use of my built-up rage,” he smiled at me. An elementary-school student showed me their hand-drawn sign, bright scribbles that read, “$9999 for Schools.” Dozens of screen-printed canvases were hung up to dry from a clothesline stretched across the room, fluttering like prayer flags. One read, “Public Schools Are the Heart of Democracy.” Another said, “Organize Students, Workers, and Immigrants,” with a woodblock image of people huddled under an umbrella; deep chisel marks made their faces look weary and fierce. Like the struggle had made them strong.

The MTEA hosted an “art build” in February 2017 in anticipation of then-Governor Scott Walker’s budget. (Joe Brusky/MTEA)

It is a good problem that, today, when people talk about teachers’ strikes, they want to talk about success. Recently, teachers in Los Angeles won a charter school moratorium, fewer cops in schools, legal support for


immigrant families, and promises for more social workers and librarians. Many are energized by the prospect of taxing billionaires to fund smaller class sizes, art classes, and playgrounds with grass. No doubt, these are successes. But what makes each of these victories a success isn’t simply that teachers got the thing they demanded—the raise, the moratorium, the better funding plan. They are successes because the organizing that achieved those demands created space for future movements to grow. Each action brought people together. Each demand brought closer the dream for a better world. A union can succeed on these terms even when winning short-term goals remains out of reach. Conversely, a union can make short-term gains while failing to achieve these movement aims. The early generation of MTEA won strong contracts for its teachers but sacrificed broad solidarity and political analysis. More recently, though MTEA lost key labor rights in 2011, it also reasserted its power to organize movements, to foment big ideas, to build bonds among disparate groups. Over three decades of political attacks on public schools and unions, Milwaukee teachers have developed the ideological architecture of social justice unionism. Their vision has fertilized movements for progressive education across the country, despite their short-term defeats. They remind us that while laws can protect unions, it is people dreaming and fighting together that make them strong. Eleni Schirmer is a PhD candidate at UW-Madison in Educational Policy Studies and Curriculum & Instruction. A former co-president of the nation’s oldest graduate employee union, UW-Madison’s Teaching Assistants’ Association, she has written for Jacobin, Labor Notes, The Progressive, and espnW.


Immigrants Didn’t Kill Your Union
Ruth Milkman

Immigrant organizing stood out as a rare bright spot on the otherwise dismal U.S. labor scene in the late twentieth and early twenty-first centuries. To the surprise of many observers, starting in the late 1980s low-wage foreign-born workers, including the undocumented, eagerly welcomed opportunities to unionize and infused the labor movement with new energy. Immigrants also helped to galvanize the “alt-labor” movement, flocking to worker centers across the nation that deployed new strategies to challenge wage theft and other employer abuses in sectors where obstacles to traditional unionism were especially formidable. Largely in response to these developments, union leaders abandoned their longstanding support for restrictive immigration policies; by the turn of the century organized labor instead had become a vociferous champion of immigrant rights. Yet some unionists dissented from this stance, especially in the relatively conservative building trades, many of which are still overwhelmingly made up of U.S.-born white males. In 2010, the Pennsylvania building trades lobbied for a proposed state bill to penalize construction firms that hired undocumented workers. More recently, in upstate New York a carpenters’ union representative admitted that his union routinely reported undocumented workers on construction sites to immigration authorities. These unionists, like many ordinary Americans, were convinced that immigrants, and especially the undocumented, lowered wages and took jobs away from U.S. citizens. On the surface, their view may seem plausible. Construction has suffered severe deunionization over recent decades, leading to lower pay and degraded working conditions, especially in the residential sector of the industry. Employers launched a vigorous anti-union assault as the


residential industry recovered from the recession of the early 1980s, using a variety of tactics to expand the non-union segment of the industry. When that happened, U.S.-born building-trades union members abandoned the jobs affected, typically moving from the residential to the commercial sector of the building industry—the latter was booming in the 1980s and remained heavily unionized. Meanwhile, employers recruited immigrant workers, both authorized and unauthorized, to fill the newly degraded jobs in residential construction. Thus the employment of immigrants did not cause the labor degradation in the industry; on the contrary, it was the result of the employers’ anti-union campaigns. Similar processes unfolded in many other industries as well. But rank-and-file workers, as well as some unionists, were unaware of this dynamic and often blamed immigrants for the degradation of jobs.

A New York construction worker demonstrates in anticipation of the Supreme Court’s Janus ruling in 2018. Construction has suffered severe deunionization in recent years, leading to lower pay and degraded working conditions. (Drew Angerer/Getty Images)

Such scapegoating has become even more widespread since the rise of Donald Trump and the aggressive attacks on immigrants that propelled


him into the presidency. His 2016 campaign, with its gratuitous attacks on birthright citizenship and “chain migration,” as well as its unfounded claims that “illegals” raised crime rates and committed voter fraud, famously aroused the latent xenophobia and racism of many white workers. After taking office, the Trump administration systematically promulgated an array of draconian anti-immigrant initiatives: the Muslim travel ban, new limitations on refugee and asylum-seeker admissions, family separations at the border, large-scale Immigration and Customs Enforcement (ICE) sweeps, and increased arrests and deportations. Some on the left point to continuity in regard to the last of these: not for nothing had Obama earned the moniker “deporter-in-chief.” ICE arrests were up 42 percent in the first eight months of the Trump administration, compared to the same period in 2016, but the numbers were even higher in 2010 and 2011, under Obama. Yet most deportations in the Obama era involved new arrivals apprehended at the border, or immigrants with serious criminal records. By contrast, under Trump ICE prioritized “internal removals” of the undocumented, often sweeping up those with no criminal records and others who had resided in the United States for many years. ICE agents became increasingly aggressive, apprehending undocumented immigrants in courthouses and outside schools, locations the agency had avoided under earlier administrations. Workplace raids, rare in the Obama years, were revived. Trump has also taken steps to curb legal immigration, for example, seeking to end “temporary protected status” for Haitians, Central Americans, and others. All these policies are relentlessly trumpeted in the president’s speeches and tweets, along with his beloved border wall proposal. As detentions and deportations became increasingly arbitrary and unpredictable, fear and anxiety in immigrant communities spiked to levels not seen for half a century. In California, the state with the largest undocumented population as well as a much-vaunted sanctuary law (introduced immediately after Trump’s election and signed into law in 2017), “thousands exist in a cordon of terror,” as Michael Greenberg reported in the New York Review of Books in November. “Paranoia has infiltrated every aspect of life. Civic activity [among the undocumented], such as attending town meetings and other public events, has ground to


a virtual halt.” Not surprisingly, despite his populist rhetoric, the president is no friend to organized labor. Still, many unionists welcomed (albeit warily) his posture on trade, resonating with the critique of NAFTA and the “tough” approach to trade with China. Labor leaders also harbored hopes that Trump’s stated commitment to rebuilding the nation’s infrastructure (which soon proved to be “fake news”) would generate a raft of new union jobs. Yet there has been no retreat from the AFL-CIO’s or the Change to Win (CTW) federation’s support of immigrant rights, with the notable exception of the unions representing ICE agents and border control officers, both of which endorsed Trump in 2016 and ever since have been cheerleaders for his “zero-tolerance” immigration policies. Indeed, organized labor mobilized in support of immigrants threatened with deportation, for example in the Working Families United coalition, formed in 2017 by the Painters union, the hotel workers’ union UNITE HERE, the United Food and Commercial Workers, the Teamsters, and LIUNA, as well as the Bricklayers and Ironworkers. That same year the AFL-CIO developed a toolkit to assist unionists threatened with workplace immigration raids. Several individual unions launched their own training efforts to educate members about how best to respond to raids or the threat of deportation. While most segments of the labor movement have continued to support immigrant rights, if less vocally than in earlier years, the liberal consensus on immigration policy has begun to weaken in the wake of Trump’s success (and that of right-wing populists in Europe) in winning working-class support by demonizing immigrants. For example, Hillary Clinton warned in an interview shortly after the midterm elections that “if we don’t deal with the migration issue it will continue to roil the body politic.” And in his 2018 book, The Nationalist Revival, John Judis confessed his sympathy for Trump’s nationalist agenda, arguing that low-wage immigration inevitably reduces the leverage of the U.S.-born working class. “Enormous numbers of unskilled immigrants have competed for jobs with Americans who also lack higher education and have led to the downgrading of occupations that were once middle class,” he declared. This type of left-wing nationalism is even more widespread in Europe.


Similarly, Angela Nagle’s provocative essay, “The Left Case against Open Borders,” published in the pro-Trump journal American Affairs, harkened back fondly to the days when organized labor embraced restrictive immigration policies, pointing out that the main supporters of open borders have been free-market ideologues like the Koch brothers, along with employers reliant on cheap labor. Historically, she added approvingly, trade unions took the opposite view:

They [unions] saw the deliberate importation of illegal, low-wage workers as weakening labor’s bargaining power and as a form of exploitation. There is no getting around the fact that the power of unions relies by definition on their ability to restrict and withdraw the supply of labor, which becomes impossible if an entire workforce can be easily and cheaply replaced. Open borders and mass immigration are a victory for the bosses.

The attack on the left for supporting “open borders” is a red herring; this stance remains on the margins of the progressive mainstream—but most progressives do oppose the restrictive policies favored by Trump and his acolytes. Moreover, the labor movement abandoned the perspective Nagle articulates two decades ago. Despite their painful awareness that many rank-and-file union members voted for Trump in 2016, the AFL-CIO leadership and that of the CTW federation, as well as the vast majority of their affiliates, have not wavered from the pro-immigrant rights stance they adopted at the end of the twentieth century. There are compelling economic reasons for progressives to align with labor in this regard, as Eric Levitz has noted in New York Magazine. Immigration obviously does expand the labor supply, but it also creates additional economic demand; and in the context of an aging population, the immigrant influx, disproportionately composed of prime-age workers, contributes to the fiscal sustainability of programs like Social Security and Medicare. This is the consensus among most experts, as a 2017 National Academy of Sciences report documented. But as Levitz observes, the case for restrictionism put forward by commentators like Judis and Nagle is “primarily an argument about politics, not economics,” pivoting on the


susceptibility of U.S.-born workers to right-wing populist appeals.

The Scapegoat (Flor de Pascua), by M. C. Escher.

The fact that proposals to support immigration restriction have surfaced among liberals and on the left in the wake of Trump’s success is remarkable in its own right. But Levitz makes a compelling case that adopting them would be politically disastrous for the Democratic Party and the wider progressive community. Given the seemingly irreversible demographic trends toward a majority-minority society, he declares, “The Democrats are going to be a visibly multiracial party in a browning


America,” adding that on both moral and pragmatic grounds “there is no way for Democrats to avoid the liabilities of that position—they can only strive to capitalize on its benefits.” To meet that challenge, for progressives and the labor movement alike, the most urgent task is to push back against the right-wing narrative that blames immigrants for the reversal of fortune suffered by white U.S.-born workers over the past four decades. Progressives need to promote instead a counternarrative that highlights the ways in which business strategies from the 1970s onward have reduced wages and undermined the labor movement—strategies that have been rendered invisible or irrelevant for the many U.S.-born workers who have been persuaded by Trump and his supporters to scapegoat immigrants. In a nutshell, the task is to redirect the entirely justifiable anger of those workers toward employers instead of the foreign-born. The case that immigration was a key driver of working-class distress does seem plausible at first glance, especially in regard to timing. Not long after the passage of the 1965 Hart-Celler Act ended four decades of highly restricted immigration, the economic status of white male non-college-educated workers, most of whom had prospered in the postwar years, began to spiral downward. In the same period, inequality surged as well. These trends are indeed interconnected, but the line of causality runs in exactly the opposite direction from what Trump’s and Judis’s anti-immigrant narratives imply. Immigration was not the cause of the neoliberal economic restructuring that began in the 1970s or of the accompanying explosion of inequality and labor degradation. On the contrary, the influx of low-wage immigrants was a consequence of these developments. U.S. employers’ efforts to externalize market risk through various forms of subcontracting, and at the same time to actively undermine labor unions, generated a surge in demand for low-wage labor. That, in turn, led millions of immigrants, both authorized and unauthorized, to enter the bottom tier of the nation’s labor market to fill “jobs Americans won’t do.” As I documented in my 2006 book L.A. Story, in many sectors immigrants entered low-wage jobs in substantial numbers only after pay and conditions had been degraded to such a degree that U.S.-born workers exited the impacted occupations.


The primary driver of labor migration, past and present, is economic demand. While “push” factors in sending countries do spur emigration, it materializes on a significant scale only in response to employers’ search for new sources of labor. The 2008 financial crisis is revealing in this regard: as the U.S. economy imploded, and jobs in sectors like construction and manufacturing evaporated, the number of unauthorized migrants crossing the border decreased dramatically. Prior to the Great Recession, immigration grew in direct response to rising employer demand for cheap and pliable labor. Starting in the late 1970s, new business strategies drove down labor costs through expanded subcontracting, deregulation, and efforts to weaken or eliminate labor unions. In industries like taxi driving and trucking, where deregulation led to union decline and wage cuts, as well as in deunionized construction, manufacturing, and service industries, many U.S.-born workers voted with their feet to reject the newly degraded jobs, and then immigrants were hired to fill the vacancies. If migrants did not arrive on their own in adequate numbers to fill the demand, employers routinely sent recruiters to Mexico and other parts of the Global South to find them, often in blatant violation of immigration laws and regulations. In short, immigration was the consequence, not the cause, of declining labor standards. Demand for immigrant labor also expanded in the domestic and personal services sector in this period. Here the key driver was not employment restructuring and job degradation but instead a combination of demographic changes and rising income inequality. As maternal labor force participation grew, the nation’s increasingly prosperous professional and managerial classes devoted a growing part of their disposable income to purchasing services from housecleaners, nannies, and eldercare providers, as well as manicurists and other “personal appearance workers.” Many affluent households now included two adults with long working hours, thanks to the feminist movement’s success in opening the professions and the corporate suite to upper-middle-class women in the 1970s, even as changing expectations of parenting and the aging of the population stimulated growing demand for care work inside the home. Yet in the same period, the traditional labor supply in domestic labor occupations was evaporating, as the civil rights movement opened


up lower-level clerical and service jobs and other options to African-American women. Black women thus began to shun domestic work just as demand for it began to rise, leading many households to replace them with immigrant women, who were increasingly available in this period as permanent family settlement came to dominate over the earlier pattern of male-dominated circular migration. Some of the biggest concentrations of Trump’s U.S.-born white working-class supporters in 2016 were in the Rust Belt. No one can seriously suggest that immigrants should be blamed for the massive wave of plant closings that swept across the Midwest starting in the 1970s. In this context, jobs were not degraded; they simply disappeared. Yet as Linda Gordon showed in her recent study of the 1920s Ku Klux Klan, immigrant scapegoating does not necessarily have to be rooted in reality. Native-born “anger at displacement, blamed on ‘aliens,’ sometimes rested on actual experience but more often on imagination and fear stoked by demagoguery,” Gordon points out. “We know this because the Klan flourished in areas with few ‘aliens.’” The right-wing anti-immigrant narrative has in effect distracted attention from the actual causes of declining working-class living standards. The white working class has every reason to be alienated and enraged by rising inequality and the disappearance of good jobs, but its anger has been profoundly misdirected. It should focus not on immigrants but on the deliberate actions of business interests to degrade formerly well-paid blue-collar jobs and to promote public policies that widen inequality. Rather than following the lead of Judis and Nagle (fortunately still a marginal position on the left) in opportunistically jumping on the anti-immigrant bandwagon, labor and progressives hoping to regain support from the white U.S.-born workers who supported Trump in 2016 should devote their energies to shifting the public conversation in this direction. Ruth Milkman is Distinguished Professor of Sociology at the CUNY Graduate Center and the CUNY School of Labor and Urban Studies. Her most recent book is On Gender, Labor, and Inequality.
















Power Is Sovereignty, Mr. Bond
Daniel Immerwahr

“Ah, Mr. Powers,” says Dr. Evil, “welcome to my hollowed-out volcano.” The setting, an elaborate underground base on a tropical island from Austin Powers: The Spy Who Shagged Me, is instantly recognizable. The deranged supervillain, his island lair, the threat of world domination—it’s so familiar you forget how bizarre it is. Of all the potentially menacing locales, why do our most ambitious evildoers, the ones bent on world domination, seek out remote specks of land in the middle of seas and oceans? You’d think the qualities of islands that make them desirable vacation spots—their distance from population centers, their relaxed pace of life—would ill suit them as launchpads for global conquest. After all, Napoleon’s adversaries sent him to Elba to neutralize him, not to encourage him to have another go. For the rest of the nineteenth century, that’s how islands were seen. Lawless and perhaps even dangerous, but not powerful places. It wasn’t until the twentieth century that the notion of planetary domination from an island started cropping up in literature. As far as I can tell, it started with Bond. Ian Fleming, the creator of James Bond, knew something about islands. During the Second World War, he’d served as the assistant to Britain’s director of naval intelligence. In 1943, he traveled to Kingston, Jamaica, for a high-level intelligence conference with the United States. The Caribbean was then in dire straits, tormented by German submarines that evaded the Allied navies. Rumors floated that the boats were finding safe berth at a secret harbor built by Dr. Axel Wenner-Gren, a mysterious Swedish multimillionaire with Nazi ties who had established himself on an island in the Bahamas.


The accusations that Wenner-Gren was using his island as a secret Nazi base proved false. But Fleming nevertheless found it all irresistible. He bought an estate in Jamaica (named Goldeneye, after one of the intelligence operations he’d helped run) and began writing his Bond novels from there. One of them, Live and Let Die, used the bit about a secret island submarine base. Another, Doctor No, took the idea further. Its titular villain, a cosmopolitan multimillionaire with a private Caribbean island, bore an undeniable resemblance to Dr. Wenner-Gren.

Known U.S. bases beyond the contiguous United States today

From his secluded base, Doctor No tells James Bond in the novel, he can use radio to monitor, jam, and redirect U.S. missiles. The secrecy of his location is essential to this. “Mister Bond, power is sovereignty,” he explains. “And how do I possess that power, that sovereignty? Through privacy. Through the fact that nobody knows. Through the fact that I have to account to no one.” The films took that notion and ran with it. The private island looms large in the movie of Doctor No, but similar locales can be found in other Bond films: Thunderball (filmed on Wenner-Gren’s island), You Only Live Twice (rocket base under a Japanese volcanic island), Diamonds Are Forever (off-shore oil rig), Live and Let Die (small Caribbean island dictatorship), The Man with the Golden Gun (private Thai island), The Spy Who Loved Me (giant sea base), and Skyfall (abandoned island).


There is a sequence in the 2006 Casino Royale that was shot, like Thunderball, on Wenner-Gren’s island. The world of James Bond contains many absurdities. The exploding pens, shark tanks, and endless procession of round-heeled female helpmeets seem more the fruits of Fleming’s rum-soaked imagination than insights into actual espionage. Yet with the island thing, Fleming was onto something. Just as he saw, islands and secret bases are instruments of world domination. James Bond was fiction, but not as far from fact as it might seem. Starting in the Second World War, the United States had begun seriously acquiring overseas bases around the planet. Some were, like Doctor No’s base, on remote islands. Others were walled-off enclaves inside other countries. In the 1950s, Washington claimed hundreds of overseas bases. It has, according to David Vine, some 800 today. For contrast, consider that all other countries in the world hold around thirty foreign bases combined. Physically, the United States’s overseas holdings aren’t vast. Mash together all U.S. island territories (such as Puerto Rico and Guam) and all of its bases and you’d still have an area smaller than Connecticut. But those tiny specks of land are spread all over the globe, perforating the sovereignty of dozens of countries. A lot, as it turns out, has happened on or around them. What, specifically, could the United States do with a base? A fine example is the Swan Islands, a small cluster of three islands in an isolated part of the Caribbean, not far from the fictional location of Doctor No’s island. In the 1950s, the CIA secretly built a landing strip and a fifty-thousand-watt radio transmitter on Great Swan. That single transmitter could reach South America, allowing the United States to cover with its radio beams territory inaccessible by ground. Soon after the CIA built that radio station, a delegation of Honduran students carrying arms came to Great Swan to “liberate” the islands and claim them for Honduras. They had no idea of the CIA’s presence, and the agency was determined to keep them in the dark. GIVE THEM PLENTY OF


BEER AND PROTECT THE FAMILY JEWELS

read the frantic cable from Washington (i.e., don’t let them discover the broadcasting equipment). Marines sped to the island to repel the invasion. What happened next can best be appreciated by reading the cable traffic from Swan to Washington:

Swan to HQ: HONDURAN SHIP ON HORIZON. BEER ON ICE. TALKED TO STUDENTS. THEY CONFABING [SIC]. HAVE ACCEPTED BEER.

Swan to HQ: STUDENTS MIXING CEMENT IN WHICH THEY INTEND TO WRITE “THIS ISLAND BELONGS TO HONDURAS.” ONE GROUP MALINGERING, LISTENING TO EARTHA KITT RECORDS AND DRINKING FIFTH BEER.

Swan to HQ: STUDENTS HAVE JUST RAISED HONDURAN FLAG. I SALUTED.

Swan to HQ: BEER SUPPLIES ARE RUNNING LOW. NOW BREAKING OUT THE RUM. THESE KIDS ARE GREAT.

Swan to HQ: STUDENTS HAVE EMBARKED FOR HONDURAS. LIQUOR SUPPLY EXHAUSTED. FAMILY JEWELS INTACT.

In the end, the students were permitted to sing the Honduran anthem, take a census, and raise their flag. They left, never realizing who their drinking buddies were. Or that a contingent of marines had been waiting, ready to start shooting if the beer didn’t work. The family jewels were worth protecting. In 1954, the CIA had used radio to spread fake news during a coup it helped stage to overthrow Guatemala’s elected government. With a transmitter on Great Swan, it could run an even more extensive operation, this time directed at Fidel Castro’s Cuba. Through “Radio Swan,” which posed as a privately run station, the United States promulgated false news reports and trolled the Cuban government. Castro and his lieutenants were “pigs with beards.” Raúl Castro was “a queer with effeminate friends.” The power and location of its transmitter allowed Radio Swan to boast fifty million


listeners throughout the Caribbean and Central and South America. In 1961, the United States sent seven ships of paramilitaries to invade Cuba, an attempt to repeat its success in Guatemala. Radio Swan played a key role, sowing confusion with cryptic messages designed to confound Castro. (“Look well at the rainbow.” “Chico is in the house. Visit him.”) During the invasion, Radio Swan broadcast orders to nonexistent battalions to encourage the rebels and spread fear among the authorities. The Bay of Pigs invasion, as it was called, ended in failure. Radio Swan’s cover was blown, and journalists snickered over the resemblance between the operation and the plot of Doctor No. But that wasn’t the end of the Swan Islands. In the 1980s, the CIA outfitted Great Swan with a port to offload cargo intended for its political allies. Munitions, uniforms, parachutes, and other materiel flowed from the island to rebels in Nicaragua who sought to bring down the leftist government there. The Swans were where right-wing paramilitaries trained, where mercenary pilots from southern Africa took off for their airdrops over Nicaragua. The illicit streams of aid at the heart of the Iran-Contra scandal flowed straight through the Swan Islands. In the 1958 novel Doctor No, the villain’s lair is on a “guano island,” beset by thickly flocking birds whose droppings were in the nineteenth century a valued sort of fertilizer. At the end of the novel, Bond defeats Doctor No by burying him in a guano pit. But in the film version, made four years later, there is no trace of guano. Instead, Doctor No’s island is the site of a nuclear reactor, and Bond triumphs by triggering a meltdown, drowning Doctor No in the pool containing the overheating reactor. (That Bond’s action would very likely have turned Jamaica and its environs into a Chernobyl-style fallout zone goes narratively unexplored.) The nuclear theme wasn’t a random choice. There is a special connection between nuclear weapons and bases. The very remoteness of military bases from the homeland, and often from large populations, makes them ideal sites to test and store nuclear devices. The United States found one such site at Bikini Atoll and the next-door atoll of Enewetak, a lightly inhabited part of the Marshall Islands in


Micronesia. The navy ushered the Marshallese off their homeland and began using the atolls for nuclear tests. Between 1946 and 1958, the U.S. military detonated sixty-seven nuclear weapons on or near Bikini and Enewetak. To the proverbial Martian looking on from space, it must have appeared that humanity was for some indiscernible reason waging furious, unrelenting war against a string of sandbars in the middle of the Pacific. One test at Bikini, the “Bravo shot,” involved exploding a hydrogen bomb with a fifteen-megaton yield. The explosion was twice as powerful as expected, and unusually strong winds carried the fallout far beyond the cordoned-off blast zone. Had it detonated over Washington, D.C., it could have killed 90 percent of the populations of Washington, Baltimore, Philadelphia, and New York within three days. On Rongelap, more than a hundred miles from the Bravo shot, islanders watched radioactive white ash fall from the sky like snow. Dozens of them suffered from radiation poisoning, and the whole island had to be evacuated for three years. A Japanese tuna fishing boat, the Lucky Dragon, also outside the blast zone, was engulfed in the fallout. All twenty-three of its crew members got radiation poisoning, and one of them died. These were small numbers, easy to ignore from Washington. “There are only 90,000 people out there,” Henry Kissinger said of Micronesia. “Who gives a damn?” Yet when the Lucky Dragon limped back to port carrying its catch of irradiated tuna, it encountered a country that very much gave a damn. Japan had firsthand experience with radioactive fallout. The return of the Lucky Dragon set off a media frenzy. Rumors that the irradiated fish had made their way onto the market briefly triggered the collapse of the tuna industry. The emperor himself began traveling with a Geiger counter. Among those swept up in the spirit was a young film producer, Tomoyuki Tanaka. He’d later go on to produce such high-end classics of Japanese cinema as Akira Kurosawa’s Yojimbo. But in 1954, the year of the Bravo shot, he had something else in mind. He hired a director, Ishirō Honda, who had traveled through Hiroshima in 1945 and seen the destruction firsthand. Their film, Gojira, told the story of an ancient dinosaur awakened by


U.S. hydrogen bomb testing. Gojira first destroys a Japanese fishing boat—a thinly veiled Lucky Dragon—before attacking and irradiating a Bikini-like island called Odo. Gojira, said to be “emitting high levels of H-bomb radiation,” then turns on Tokyo, breathing fire and laying waste to the city. As films go, Gojira isn’t subtle. It’s full of talk of bombs and radiation. “If nuclear testing continues, then someday, somewhere in the world, another Gojira may appear,” are its somber final words. That message, however, got lost in translation. Gojira, phenomenally popular in Japan, was remixed for a U.S. audience. The Hollywood version used much of the original footage but spliced in a white, English-speaking protagonist played by Raymond Burr. What got cut out was the antinuclear politics. The Hollywood version contains only two muted references to radiation. And it ends on a much happier note. “The menace was gone,” the narrator concludes. “The world could wake up and live again.” The Japanese Gojira was a protest film, hammering away at the dangers of U.S. bases in the Pacific. The English-language Godzilla, by contrast, was just another monster flick. The Japanese weren’t the only ones to object. The United States has maintained bases in every region in the world, and wherever the bases have opened, protest has followed. The French complained of U.S. “occupiers” and forced the military to abandon its base sites. Thousands of Panamanians, marching with signs reading DOWN WITH YANKEE IMPERIALISM and NOT ONE MORE INCH OF PANAMANIAN TERRITORY, also forced the bases out. For the British, the main issue was nuclear weapons. In the 1950s, the United States stored its bombs on British bases and flew B-47s regularly over England. Were those planes carrying nuclear bombs? “Well, we did not build those bombers to carry crushed rose petals,” the U.S. general in charge told the press. He was bluffing, slightly—the bombs were unarmed. But the terrified British public had no way of knowing that. The British had reason to be afraid. The United States, we now know, did fly armed bombs over its allies’ territory, and doing so was terrifically dangerous. In the 1960s, a B-52 carrying four Mark 28 hydrogen bombs


near a U.S. base in Greenland crashed into the ice at more than 500 miles an hour, leaving flaming debris. The conventional explosives in all four bombs blew up. The bombs were ostensibly “one-point safe,” meaning that those explosives around the core could go off without detonating the bomb, so long as they didn’t go off simultaneously (which would violently compress the core and trigger nuclear fission). Yet some bombs in the arsenal had proved not to be one-point safe, and a lot could go wrong in a crash. The Greenland accident didn’t set off a nuclear explosion. It did, however, spew plutonium all over the crash site. In this, it resembled the time when another B-52, also carrying four armed hydrogen bombs, crashed over a village in Spain. Part of the plane landed eighty yards from an elementary school; another chunk hit the earth 150 yards from a chapel. The conventional explosives in two of the bombs went off, sowing plutonium dust into the tomato fields for miles. A third bomb landed intact. A fourth dropped out of sight and took the military a hair-raising three months to find (all while the box office was dominated by Thunderball, a James Bond thriller about nuclear weapons gone missing).

Marine Corps Air Station Futenma—an outpost of the United States lodged in the heart of a tightly packed Okinawan city (Wikimedia Commons)

And so the British had cause for alarm about U.S. bases on their soil. Within months of the U.S. general’s announcement about the bomber


overflights, more than 5,000 well-dressed protesters gathered in the rain at Trafalgar Square. From there, they marched for four days to a nuclear weapons facility in Aldermaston. By the time they reached it, their numbers had grown to around 10,000. NUCLEAR DISARMAMENT and NO MISSILE BASES HERE, their banners read in sober black and white. An artist named Gerald Holtom designed a symbol for the Aldermaston march. “I was in despair,” he remembered. He sketched himself with his arms outstretched and downward, “in the manner of Goya’s peasant before the firing squad. I formalized the drawing into a line and put a circle around it.” The lone individual standing helpless in the face of world-annihilating military might—it was “such a puny thing,” thought Holtom. But it captured vividly the feeling of vulnerability, the combination of impotence and fear that living in the shadow of the U.S. bases engendered. Others felt it, too, it seemed. Holtom’s creation, the peace symbol, resonated and quickly traveled around the world. Operating bases in a foreign country is a delicate business. It’s not hard to imagine the public reaction to a Chinese base in, say, Texas. In fact, it’s not even necessary to imagine. In the eighteenth century, the stationing of British soldiers in North America was so repellent to the colonists that it fueled their revolution. The Declaration of Independence denounced the king for “quartering large bodies of armed troops among us.” So Washington wasn’t surprised to be met with some caginess after the Second World War when it asked to open a base in Saudi Arabia. The site was ideal, like an “immense aircraft carrier” right in the middle of the major air traffic lanes of the planet, a State Department cable noted. Yet the Saudi royals worried how it would look to let the United States fly its flag over the land of Mecca and Medina. So nervous was the king that, though he granted the military the right to open a base at Dhahran, he forbade it from physically planting a flag. Instead, the Stars and Stripes had to be attached to the U.S. consulate, to prevent it from touching Saudi soil. And the site was to be called an “airfield,” never a base.


Just as the king feared, many Muslims blanched. The Dhahran complex brought Christians and Jews to the Holy Land, making the Saudi monarchy complicit in the kingdom’s desecration. The Voice of the Arabs, an Egyptian radio station critical of the Saudi government, invoked Dhahran as its prime example of U.S. imperialism. Eventually, the Saudi government relented and ended the lease, forcing the U.S. military out in 1962. But it didn’t stay out. In 1990, Saddam Hussein, the dictator of Iraq, invaded Kuwait. It was a bold and sudden attack, giving Hussein control of two-fifths of the world’s oil supply. And it looked very much as if he might invade Saudi Arabia next. Dick Cheney, General Norman Schwarzkopf Jr., and the Pentagon’s Paul Wolfowitz flew to Jeddah the next day. Cheney proposed reopening Dhahran to the U.S. military. “After the danger is over, our forces will go home,” he promised. King Fahd acquiesced. “Come with all you can bring,” he told Cheney. “Come as fast as you can.” They did. The first planes landed at Dhahran within twenty-four hours, and they kept coming. The Pentagon put “everything aloft that could fly,” wrote Colin Powell. “You could have walked across the Mediterranean on the wings of C-5s, C-141s, and commercial aircraft moving across the region,” one pilot marveled. Saudi Arabia became the basis for Operation Desert Storm, the U.S.-led campaign against Iraq. But hosting U.S. forces at Dhahran was no less of a touchy subject in the 1990s than it had been before. Saudis near the base were unnerved by seeing female service members driving vehicles and wearing T-shirts. Radio broadcasts from Baghdad charged U.S. forces with defiling Islam’s holiest sites. Saudi clerics complained. For one vexed Saudi, Osama bin Laden, the bases weren’t only an affront to religion; they were also a maddening capitulation to empire. “It is unconscionable to let the country become an American colony with American soldiers—their filthy feet roaming everywhere,” he fumed. The United States, he charged, was “turning the Arabian Peninsula into the biggest air, land, and sea base in the region.” At the urging of the nervous Saudi government, bin Laden left the country, making his way eventually to Afghanistan. But he did not drop


the issue. That U.S. troops stayed in Saudi Arabia after defeating Saddam Hussein in the Gulf War, in breach of Cheney’s promise, only added fuel to bin Laden’s fire.

Major coalition bases used in the Gulf War

In 1996, a bomb went off at a housing facility at Dhahran. Nineteen U.S. Air Force personnel died, and 372 people were wounded. Bin Laden claimed responsibility. It’s genuinely unclear if he was involved, but someone hated the base enough to bomb it. Shortly after the Dhahran bombing, bin Laden issued his “Declaration of War Against the Americans Occupying the Land of the Two Holy Places.” On the face of it, this seemed an absurdly imbalanced war: an exile living in a cave complex in Tora Bora, Afghanistan, taking on the most powerful military in existence. Yet bin Laden, equipped with the latest satellite technology in his mountain base, calculated that he could, like some sort of Central Asian Doctor No, order strikes from afar. Those calculations were right. On the eighth anniversary of the arrival of U.S. troops at Dhahran, bin Laden used satellite communications to coordinate simultaneous bombings of the U.S. embassies in Kenya and


Tanzania. More than 200 people died, and several thousand were wounded. The climax of bin Laden’s campaign came three years later, in what Al Qaeda referred to as its “planes operation.” Nineteen hijackers, fifteen of them from Saudi Arabia, commandeered four commercial aircraft. One hit the Pentagon (“a military base,” bin Laden explained). Two more struck the World Trade Center (“It wasn’t a children’s school!”). The fourth, en route to the U.S. Capitol, crashed in a field in Pennsylvania. The attacks baffled many in the United States. “To us, Afghanistan seemed very far away,” wrote the members of the 9/11 Commission. So why was a Saudi man there attacking Washington and New York? The answer is that, for bin Laden, the United States was not “very far away.” “Your forces occupy our countries,” he wrote in his message to the U.S. populace. “You spread your military bases throughout them.” Bin Laden’s list of grievances against the United States was long, ranging from its support of Israel to Bill Clinton’s affair with Monica Lewinsky. (“Is there a worse kind of event for which your name will go down in history?” he asked.) But his chief objection, voiced consistently throughout his career, was the stationing of troops in Saudi Arabia. This is worth emphasizing. After the 9/11 attacks, “Why do they hate us?” was the constant question. Yet bin Laden’s motives were neither unknowable nor obscure. September 11 was, in large part, retaliation against the United States for its empire of bases. The war on terror was, Defense Secretary Donald Rumsfeld told the press after 9/11, a “very new type of conflict.” Previous wars had been against countries. Now the United States was fighting terrorism writ large. The old area-based military concepts of front, rear, and flank no longer made as much sense. “We’ll have to deal with networks,” Rumsfeld explained. Having identified the adversary as a series of connected points, Rumsfeld adopted a new approach to fighting. It was less a game of Risk than one of hide-and-seek. Eyes in the sky, not boots on the ground, would be the key. Rumsfeld favored a military that specialized in finding targets and zapping them from above with pinpoint aerial strikes. The


enemy in this style of warfare wasn’t a country. It was a GPS coordinate. But if large occupying armies weren’t central to this new conception of war, bases were. Even drones need launchpads, and the war on terror relied on a string of bases running from North America to the hot spots and war zones. Such bases, Rumsfeld confessed, “grate on local populations.” But even as the U.S. military has been kicked out of place after place—Vieques in Puerto Rico, Dhahran (again) in Saudi Arabia, Uzbekistan, Kyrgyzstan—it has held tight to the sites it can control, often islands. Anti-base resistance in Okinawa has led the military to plan a major expansion on Guam. As a U.S. territory, Guam has no voting power in Congress and no power to vote for the president. It is a possession, and the United States can do with it what it pleases. Guam isn’t the only spot of land that has proved crucial in the ongoing war on terror. Soon after 9/11, the Bush administration fastened on Guantánamo Bay in Cuba as a place to detain suspected terrorists. A century-old lease gave the United States complete jurisdiction over it. It had a McDonald’s, a Baskin-Robbins, a Boy Scout contingent, and a Star Trek fan club. But because the land remained technically Cuban, it was, White House lawyers argued, “foreign territory” where U.S. laws and treaties regarding the treatment of prisoners wouldn’t apply. Guam and Guantánamo Bay are a fitting pair, both U.S. outposts far from the fighting that have nevertheless become central to the war on terror. Small dots on the map like these might seem unimportant. But they are the foundation of the U.S. Empire today. They and hundreds of other sites around the globe are where the military can store its weapons, station its troops, detain suspects, launch its drones, and monitor global affairs. They are so valuable because they are outposts of the United States where, in the words of Doctor No, Washington has to “account to no one.”

Daniel Immerwahr teaches history at Northwestern University. He’s the author of How to Hide an Empire: A History of the Greater United States (Farrar, Straus and Giroux, 2019), from which this article is excerpted, and Thinking Small (Harvard, 2015).


Nicos Poulantzas: Philosopher of Democratic Socialism
David Sessions

As Marxism’s old messianic character faded in the late twentieth century, too many forgot that wandering in the wilderness is often the precondition of a prophet’s appearance. With the collapse of “really existing” socialism came what seemed like a permanent triumph of capitalism and the slow, grinding destruction of whatever resisted the market’s advance. But the far-too-unexpected renaissance of socialism in the twenty-first century reveals not only how much ground has been lost, but how much baggage has been shed. The presence of an authoritarian communist superpower was not only an ideological ball and chain for left politics outside the Eastern bloc, but also a real geopolitical straitjacket: at the electoral peak of European communist parties in the 1970s, the Soviet Union never kept secret that it preferred reactionaries in power in the West. Now that this old shadow has passed and socialists are making a slow exit from the desert, they have a chance to redefine themselves for a new century. That involves taking bigger and more difficult steps, and it is not surprising that the effort has sent contemporary democratic socialists back to the 1970s, the last historical moment when socialist thinkers enjoyed even the illusion of political possibilities. In the brief window before the neoliberal era, socialists were just beginning to ask what a left politics that could win elections in a democratic system would look like. Who would its base be—what sort of alliance between classes and identity groups would it appeal to? How would it act toward a “bourgeois” political system that communists had always seen as an unredeemable instrument of class domination? Is it even possible to be a democratic revolutionary? These questions came together in the work of Nicos Poulantzas, a


Greek thinker who spent much of the 1960s and 1970s in Paris. There, Poulantzas argued that a sophisticated understanding of the capitalist state was central to a strategy for democratic socialism. Pushing as far as possible toward a Marxist theory of politics while still holding onto the central role of class struggle, Poulantzas tried to combine the insights of revolutionary strategy with a defense of parliamentary democracy against what he called “authoritarian statism.” Recent signs of a Poulantzas renaissance, including the republication of several of his books in French and English, have a lot to do with the fact that his dual strategy for democratic socialism resonates with the task of today’s socialists: to understand how to use the capitalist state as a strategic weapon without succumbing to a long history of failed electoral projects and realignment strategies. The tensions in Poulantzas’s thinking resemble the current tensions within the left: is winning back power a matter of casting the oligarchs out of government and restoring a lost fairness, or is a more radical transformation of the state required? It is an open question whether Poulantzas himself was able to articulate a satisfying vision for democratic socialism. His work, nevertheless, goes straight to the heart of the problems that twenty-first-century socialism must face.

Toward a Structural Theory of the Capitalist State

Nicos Poulantzas was born in Athens in 1936. In his twenties, he began a law degree at the University of Athens as a back door into philosophy. Jean-Paul Sartre’s writings became a conduit for Marxism among young Greek intellectuals since, as Poulantzas later explained, it was difficult to get the original canonical Marxist texts in a country that had suffered Nazi occupation, then civil war, then a repressive anticommunist government. After a brief stint in legal studies in Germany, Poulantzas made his way to Paris, where he was soon teaching law at the Sorbonne and mingling with the editors of Sartre and Simone de Beauvoir’s journal Les Temps modernes. Poulantzas was drafted as one of a crop of new, younger writers for the journal, which published his earliest writings on law and the state and his engagements with British and Italian Marxists, including the Italian Communist Party’s in-house theorist, Antonio Gramsci. His 1964


doctoral thesis on the philosophy of law was broadly influenced by Sartre’s existentialism and the thought of Georg Lukács and Lucien Goldmann, who harmonized with the Hegelian Marxism dominant in France. Louis Althusser, then a more marginal French philosopher but soon to be famous across Europe, dissented from this Hegelian turn. Althusser’s 1965 seminar, “Reading Capital,” was a curious event in the history of Marxism that marked the intellectual itineraries of well-known theorists like Étienne Balibar and Jacques Rancière. The framework it launched into Marxist theory, usually described as “structuralism,” was inextricable from Althusser’s dual opposition to Stalinist economism and the humanism of thinkers like Sartre. In the classic Marxist schema, the economic “base” gives rise to political and ideological “superstructures”—in other words, most everything about capitalist society, from its political institutions to its culture, is ultimately fated by the laws of economics. The Althusserians argued that, on the contrary, all of the domains of capitalist society operate quasi-independently of one another in order to more flexibly reproduce capitalist domination. Of course, they are tightly interrelated, and the economic decides “in the last instance” whether economics or something else will take priority, but, according to Althusser himself, “the lonely hour of the ‘last instance’ never comes.”


Activists demonstrate against the Greek junta in London in 1967. Poulantzas left Athens for Germany, then France. (Aliki Sapountzi/Alamy Stock Photo)

Poulantzas was not a major participant in the “Reading Capital” seminar, but applied some of its theoretical principles to his own thinking about law and the state. Like Marx and Engels before him, Poulantzas believed that the fundamental role of the state is to defend class power. But the capitalist state, he argued, does this in a complex way that is obscured by both liberal and traditional Marxist theory. The capitalist state is not merely, as liberals imagined, a political structure that represents the unity of the individual members of a “civil society.” Nor is it, as in base-and-superstructure Marxism, simply an outgrowth of capital’s economic domination of labor, a straightforward tool of class power. On the contrary, liberal ideals—popular sovereignty, individual rights—are what enable the capitalist state to act in the interests of the dominant classes. Because it can pose as the representative of the people, the capitalist state is the ideal manager of the interests of the capitalist class. It can arrange compromises with the “dominated classes” necessary to establish the legitimacy of the social order while maintaining a distance from the most venal and short-sighted fractions of the capitalist class, whose natural instinct is to pursue what Marx called “the narrowest and most sordid private interests” over the well-being of the


dominant classes as a whole. Poulantzas’s shift of emphasis away from the struggle between capital and labor required him to rethink the nature of “class” and “class struggle.” Classes, he argued, are born in traditional “economic” confrontation over wages, time, and working conditions, but they are also made politically, depending on how they organize themselves and exert pressure on the political system. Poulantzas argued that the political in capitalist society in fact “overdetermines”—establishes a kind of complex, contradiction-riddled hierarchy over—other kinds of class struggle by rigging things from the beginning against the dominated classes. The same legal setup that enables the capitalist state to “organize” the interests of the dominant classes simultaneously disorganizes the dominated classes: it recognizes them, legally and politically, only as isolated individuals, with no recognition of the economic position into which they have been sorted. The capitalist state’s separation of the political from the economic isolates class struggle in factories and workplaces while the real battle has already been decided in the very functioning of the political system. As a work of militant Marxist sociology, Poulantzas’s 1968 book Political Power and Social Classes struck out onto a terrain that, since the end of the Second World War, had grown over with new liberal theories of social groups, bureaucracy, and “industrial relations” that celebrated the postwar order as an era of growing social integration and declining class conflict. Liberal sociology tended to see the growth of bureaucracy in both private firms and state administration as an inevitable result of the complexity of social organization, a new era of “managerial” or “industrial” society that was, for some, a welcome overcoming of the competition and conflict of laissez-faire capitalism. Many, though certainly not all, liberal social scientists and technocrats took an elitist view of postwar society: the Keynesian compromise delivered real gains to the masses while keeping political power safely in the hands of rational experts. Poulantzas was not the only figure of the late 1960s to sense that Marxist theory had to advance in order to demonstrate what most everyone to the left of social democrats believed: that the liberal orthodoxy of the epoch was a delusional obfuscation of the real nature of the new technocratic Keynesian state. In The State in Capitalist Society,


published just months after Poulantzas’s book, the British political scientist Ralph Miliband demonstrated empirically that the transition from the more limited liberal state to the interventionist, managerial state had done nothing to threaten the ruling class’s consolidation of power. In many cases, he argued, it wasn’t even true that big business kept a distance from the state—in fact, it had a direct and constant presence in executive cabinets and the apparatuses of financial governance and economic planning. Influenced by the American sociologist C. Wright Mills, who tried to diagnose the tight interlocking of the American ruling classes in The Power Elite (1956), Miliband assembled a mass of evidence that different kinds of elites share social origins, cultural backgrounds, educational trajectories, and mentalities, and that the exceptions were subtly indoctrinated into conforming to the rules. Whatever its compromises with the working class, the capitalist state was still the instrument of the dominant classes. Miliband’s approach to the capitalist state had certain affinities with the communist view that was Poulantzas’s other primary target. For Poulantzas, this view mistakenly saw the state as a neutral infrastructure that was corrupted by whoever had power over it. On the contrary, he argued, it made zero difference who was in charge because the capitalist state was already a highly calibrated machine for manufacturing class domination. This was a theoretical point with big strategic consequences, Poulantzas argued: if the left imagined the state could be left intact and steered toward socialism, it was in for a rude awakening. “Lenin said that it was necessary to win state power by smashing the state machine,” he declared, “and I need say no more.”

Authoritarian Statism, or How We Got Neoliberalism All Wrong

As Poulantzas was debating the nature of the state in the late sixties and seventies, the postwar, post-ideological consensus was coming undone. Left-wing movements with new ideas sprouted everywhere at the same time as traditional social democratic and communist parties’ memberships swelled, apparently putting them on the path to electoral power. But almost everywhere, socialism’s steps toward power were answered by brutal reaction. Fears of a left-wing government led to a military coup in Greece in 1967, and the democratically elected socialist government of


Salvador Allende in Chile was crushed by a similar—and equally U.S.-supported—coup in 1973. By the end of the decade, economic crisis had further complicated the situation, heralding a long period of retreat from the use of state power for redistributive and egalitarian projects. Poulantzas stood out among 1970s thinkers in seeing military dictatorship and the beginnings of neoliberalism as part of a single menu of options capitalist governments had in response to economic and political crisis. There is a doggedly persistent view that the post-1970s political-economic order involved a weakening of the nation-state: that big business demanded a retreat from state intervention in the economy, while the increasingly global system enabled capitalists to circumvent national governments. For Poulantzas, neoliberalism was only one facet of a broader turn he called “authoritarian statism”: a combination of the managerial powers of the Keynesian state with a strategic retreat from some of its former economic functions. New state tactics included deliberate submission to anti-democratic international institutions, economic policies that made life more atomized and precarious, and intensified surveillance and repression. In extreme situations, especially in countries dependent on larger “imperialist” powers, economic crisis could lead to “exceptional forms” of capitalism, like fascism or military dictatorship. In advanced liberal-democratic countries it was likely to look like a subtler combination of selective internationalism, intensified technocracy, and police violence.


In his final years, Poulantzas seemed to be straining against the seams of his thinking—and perhaps even against the Marxist tradition itself.

Early in his trajectory, Poulantzas had highlighted the importance of locating each nation’s position in a global “imperialist chain” to make sense of the particular form its state needed to take to reproduce capitalist class power. In the 1970s, he focused particularly on the emerging dependence of European states and their dominant classes on U.S. imperialism, expressed in the growing investment of American capital in Europe during the 1960s. It was not enough for the European left to conclude that the crises of “monopoly capitalism” were destined to destroy it from within, as many communist parties held. For strategic reasons, they needed to understand the specific relations of imperialism and the crises they produced, including the relations between the “imperialist metropoles” of the United States and Europe. American capital, Poulantzas argued, had increased its hold over Europe through direct investment in sectors where American corporations already exercised highly consolidated international control. By doing so, they were able to exert even broader economic influence, setting the standards for raw materials, insisting on reorganizing the labor process, and imposing certain management ideologies. The answer to Europe’s new dependence, or “satellite imperialism,” was not, as even some French liberals argued, a matter of the nation-state versus “multinational corporations,” or, as some leftists imagined, the


chance for a coalition that aligned a national bourgeoisie with the left against the dominating forces of international capital. Despite the internationalization of the economy and the growth of supranational institutions like the European Economic Community, Poulantzas insisted that the national state was still the primary site of the “reproduction” of capitalism. The rise of supranational institutions itself was merely a part of the national state’s transformation of its role in managing the economy, facilitating economic internationalization as part of its efforts on behalf of its national ruling class. But acting as the primary agent of internationalization put the capitalist nation-state in a position particularly vulnerable to crisis and with a limited range of responses. Internationalization weakened the unity of the domestic ruling classes, as the state acted on behalf of certain fractions of capital at the expense of others. It put the ideological unity of the nation in jeopardy by supporting lopsided economic development within its own territory—as illustrated by our current situation, where booming mega-cities power the global economy while small towns and rural areas suffer painful depopulation and decline. Such contradictions are certain to cause political tension and revolt because they shatter the myth that the state is a neutral arbiter on behalf of the whole nation. (They might, for example, get people thinking about “nationalists” versus “globalists.”) “In a certain sense, the state is caught in its own trap,” Poulantzas writes. “It is not an all-powerful state with which we are dealing, but rather a state with its back to the wall and its front poised before a ditch.” “Authoritarian statism,” then, was a general term for the type of capitalist governance that had emerged in the postwar period and only been accentuated by the political and economic crises of the 1970s and the upsurge of popular militancy. He deliberately intended the term as a broad stand-in for what seemed to be the transformation of capitalist government: the massive shift in power from parliaments to the executive, the decline of traditional political parties, the shift of more and more functions of governance from representative institutions to permanent bureaucratic apparatuses controlled by executive power. It also had dimensions of direct repression: the increased use of police and military violence against domestic populations, arbitrary curtailments of civil liberties, and the rise of government on an emergency basis that


transcended—sometimes permanently—the normal “state of law.” State, Power, Socialism (1978) was Poulantzas’s last major update to his theory of the capitalist state, in which one of his major tasks was to think through the French philosopher Michel Foucault’s theory of power, and to articulate how authoritarian statism, as he later put it, brought a shift from “organized brute force to internalized repression.” Unlike Foucault, however, Poulantzas insisted that such disciplinary techniques, even though they are laundered through the state, are ultimately linked back to economic exploitation and class power. Poulantzas had already argued that the separation of the political from the economic, with its attendant creation of atomized legal individuals, was part of the infrastructure of the capitalist state. In State, Power, Socialism, he reiterated that dividing up individuals for domination in the economy is the liberal state’s “primal” role; it continually institutionalizes that fracturing, reinforcing it both ideologically and materially. In other words, the state uses its own practices to make the neoliberal individual. Old markers of social hierarchy and relationships are replaced with scientific-bureaucratic norms that classify and measure people and remind them of their status as individualized social atoms. Poulantzas’s conception of the state had grown progressively more dynamic: where he had initially emphasized its functional, machine-like qualities, he now dramatized its internal fractures and divisions, and the contingencies introduced by its vulnerability to crisis and its tight links to class struggle. The state, in Poulantzas’s most famous formulation, was “the condensation of a relationship of forces between classes. . . . Class contradictions are the very stuff of the state: they are present in its material framework and pattern its organization.” Poulantzas’s insistence on the materiality of the state’s apparatuses and their reproduction of class power was thus a direct challenge to the Foucauldian theorization of power as the all-encompassing fabric of society, a kind of game in which every act of resistance was a strategic “move.” “Power always has a precise basis,” Poulantzas countered. The state “is a site and a center of the exercise of power, but it possesses no power of its own.”

Inside and Outside the State: The Democratic Road to Socialism

Poulantzas’s evolution toward a more dynamic conception of the state


had important implications for socialist strategy, one of the features of his thought that has attracted the most attention from contemporary democratic socialists. In his early work, the central argument of his theory of the capitalist state—that it was a structural device for reproducing class domination—led him to affirm a traditional Leninist strategy of “smashing the state.” But as Poulantzas got more specific about the complexity of the state’s apparatuses and their status as a force field of class struggle, he reached a new conclusion: if the state was a set of relationships rather than a “thing,” could it really be encircled or charged like a fortress? There was no question that, in its current form, the state acted as the organizer of class domination. But a crucial dimension of Poulantzas’s theory was that, in nontrivial ways, the dominated classes were already a part of the state. In the twentieth century, the capitalist state’s fundamental task of “organizing” class struggles had forced it to take major steps—not least the creation of the welfare state—toward accommodating working-class demands. While such achievements were always under threat from capital, they were still achievements that had become a real part of the state infrastructure. In the mid-1970s, as the dictatorships of Southern Europe transitioned to democracy, and as the Italian and French Communist parties wrestled with how to participate in parliamentary politics, Poulantzas began to think about how the balance of power between classes could be radically shifted so that the weak and marginal positions the dominated classes already held in the struggles over the state could be turned into bases for rupture and transformation. For both theoretical and strategic reasons, Poulantzas reconsidered the relevance of Leninist “dual-power” strategies that aimed to build working-class counter-institutions that would eventually grow strong enough to “smash” the capitalist state. This strategy had originated in a rather ad hoc fashion in the run-up to the Russian Revolution in 1917. For Poulantzas, looking at the political systems of Western Europe in the late 1970s, it was impossible to imagine a position entirely outside the state. While the dominated classes could and should build rank-and-file institutional power at a distance from the state, they could never be truly outside its field of power. “Today, less than ever is the state an ivory tower isolated from the popular masses,” he wrote. “The state is neither a


thing-instrument that may be taken away, nor a fortress that may be penetrated by means of a wooden horse, nor yet a safe that may be cracked by a burglary: it is the heart of the exercise of political power.” The rhetoric of “smashing” the state not only failed to see that the state was not a “thing” to smash, but also implied—as it ultimately had in the October Revolution—a suppression of institutions of representative democracy that could serve as a defense against an authoritarian statism under new management. Poulantzas tried to envision a way that the left could simultaneously champion both rank-and-file democracy at a distance from the state and a push for radical transformation within it. Working within the state would aim to produce “breaks” that would polarize the highly conflictual state apparatuses toward the working class, assisted by external pressure from rank-and-file organizations. “It is not simply a matter of entering state institutions in order to use their characteristic levers for a good purpose,” Poulantzas wrote. “In addition struggle must always express itself in the development of popular movements, the mushrooming of democratic organs at the base, and the rise of centers of self-management.” Poulantzas’s attempt at an internal-external strategy aimed to walk a narrow line between a social democratic reformism that merely practiced parliamentary politics as usual and a Leninist revolutionary strategy that he saw as potentially authoritarian and in any case doomed to perpetual isolation from really-existing paths to socialism. Revolutionary critics from the 1970s to the present have argued that this was merely a reformism in disguise. Poulantzas agreed that the risk of falling into reformism was real, but suggested that such a risk was endemic to every revolutionary position in the late twentieth century. “History has not yet given us a successful experience of the democratic road to socialism,” he wrote. “What it has provided—and that is not insignificant—is some negative examples to avoid and some mistakes upon which to reflect. . . . But one thing is certain: socialism will be democratic or it will not be at all.”

A Marxism for the Twenty-First Century?

Poulantzas threw himself from a window in Paris in 1979. In his final years, he seemed to be straining against the seams of his thinking—and perhaps even against the Marxist tradition itself. He had tried to remake


the theory of the capitalist state for the twentieth century and socialist strategy for an era of democratic politics. Fellow Marxists have accused him of every transgression in the book: of “scholasticism,” of reformism, of abandoning the concept of class, of remaining too attached to class struggle and the determining power of the economic. He considered his own position to be as far as one could go toward a Marxist politics without abandoning the fundamental commitment to the determinant role of the relations of production. “If we remain within this conceptual framework, I think that the most that one can do for the specificity of politics is what I have done,” he confessed to the British journal Marxism Today in 1979. “I am not absolutely sure myself that I am right to be Marxist; one is never sure.” The ambiguities of the final Poulantzas could stand for the whole of his work. Is it possible to square a structural theory of the capitalist state with a dynamic sense of class struggle? Can the vision of a machine-like state whose infrastructure unfailingly spits out class domination be reconciled with one that has “no power of its own,” that merely reflects the balance of class forces in society? Can we really think about class struggle without attention to historical subjects, to the consciousness of all the past discriminations and defeats that, as Marx put it, “weighs like a nightmare on the brains of the living”? Is the strategy of combining struggle within the capitalist state with popular movements outside it any less of a pipe dream than all the revolutionary strategies that went before? There is certainly no question of Poulantzas answering all, or even most, of the questions that democratic socialists face today. If nothing else, his at times maddeningly abstract and incantatory writing style makes his work a forbidding thicket for a reader of almost any level of preparation to penetrate. But it is also possible to argue that his very contradictions and ambiguities, which reflected an era of uncertainty that strongly resembles our own, are precisely what makes Poulantzas a provocative source today. Even if he failed to provide satisfying answers to the challenges of the 1970s, he did a great deal to highlight them. Above all, Poulantzas draws attention to what the British political theorist Ed Rooksby calls “one of the oldest and most fundamental controversies in socialist thought”—that is, “how, and to what extent,


capitalist state power might be utilized for socialist objectives.” Poulantzas’s conception of the capitalist state reveals the clear limits of the view typical on the liberal wing of the Democratic Party, likely to be on full display in the 2020 election campaign, that reversing American oligarchy is primarily a matter of restoring smart governance and rolling back the grip of the wealthy on the political system. At the same time, however, it is skeptical that unreconstructed revolutionism, which has a small but vocal presence in the resurgent American left, is anything but a fantasy and a path to continued marginality. A nuanced theoretical understanding of the state could serve as an antidote to both kinds of error. Relatedly, Poulantzas’s sense of the modulations of the capitalist state through its succession of crises is a welcome challenge to simplistic narratives that have colored even left-wing understandings of twentieth-century history. By trying to understand the phases and crisis forms of a fundamentally continuous capitalist state, Poulantzas is a helpful corrective to the notion of a mid-century Keynesian period of strong state interventionism followed by a deregulated neoliberal period marked by a weakened and undermined national state. For strategic reasons, it is important that the contemporary left understand neoliberalism as neither an overall weakening of the nation-state nor a decline in its strategic importance. Technocratic statism is, rather, a combination of state practices developed during the twentieth century, including the selective delegation of governing powers to international bodies, that have both effectively disorganized the dominated classes and provoked social resistance that now makes them sites of controversy and struggle. And then there are his writings on the democratic road to socialism, sketches that, while providing no answers in advance, leave a series of suggestive blanks begging to be filled in. “There is only one sure way of avoiding the risks of democratic socialism,” Poulantzas concluded his final book, “and that is to keep quiet and march ahead under the tutelage and the rod of advanced liberal democracy.” We know that path has frightening risks of its own.

David Sessions is a doctoral candidate in European history at


Boston College and a graduate fellow at the Clough Center for Constitutional Democracy. His essays and reviews have appeared in The New Republic, Jacobin, Commonweal, and elsewhere.


Modi’s Saffron Democracy
Sanjay Ruparelia

In May 2014, Narendra Modi became India’s fourteenth prime minister since independence. Storming to power after a charged electoral campaign, the strongman from Gujarat represented a political earthquake. Under his leadership, the Hindu nationalist Bharatiya Janata Party (BJP) was the first party to win a parliamentary majority since 1984, ending a quarter century of national coalition governments in New Delhi. It also became the only party apart from the Indian National Congress (known simply as “the Congress”), which had traditionally ruled the country under the direction of the Nehru-Gandhi dynasty, to win a mandate on its own. Roughly 66 percent of the electorate, the highest share ever, voted in the 2014 general election. Voter participation increased in virtually every state and across diverse segments of the population, including historically marginalized communities of Dalits and Adivasis and especially women. The stunning triumph of the BJP heralded a new political order in the world’s largest democracy. Before the verdict, the vast majority of observers had expected another hung parliament and diverse coalition government. Controversy had dogged Modi, a longtime pracharak of the Rashtriya Swayamsevak Sangh (RSS), the parent organization of the family of Hindu nationalist groups (including the BJP) called the Sangh Parivar. In 2002, shortly after becoming chief minister of Gujarat, the so-called laboratory of Hindutva (Hindu cultural nationalism), Modi had failed to prevent an anti-Muslim pogrom that left over 1,000 people dead. There was sufficient evidence of his administration’s involvement for the United States to deny Modi a visa in 2005. Yet the chief minister, while denying responsibility, never showed remorse for what happened. Modi accused critics of insulting Gujarati pride. A protracted investigation by a Supreme Court–appointed


committee eventually cleared him of personal responsibility in 2012. The following year, the BJP named Modi as their candidate for prime minister in the 2014 elections. Many within the party and the RSS, which prized collective decision-making, remained wary of empowering him, but Modi ruthlessly sidelined potential rivals and the rapidly aging BJP old guard. In public, opposition parties decried his candidacy. In private, many rejoiced that it would prevent the BJP from recapturing national power.

Prime Minister Narendra Modi at an event promoting a cash subsidy program for small farms in February (Arijit Sen/Hindustan Times via Getty Images)

But the weaknesses of the Congress-led United Progressive Alliance (UPA), which governed from 2004 to 2014, provided an opening. The UPA had achieved the highest rate of economic growth in India since independence. The governing coalition liberalized many sectors of the economy and oversaw substantial increases in savings and investment in the public and private sectors, as well as rising trade. It also introduced landmark welfare legislation granting rights to many socioeconomic entitlements. But persistent ideological differences within its leadership,


neglect of underlying structural problems, and a series of events badly damaged the UPA during its second term. Major public scandals involving the allocation of contracts to favored business groups galvanized the India Against Corruption movement led by the Gandhian activist Anna Hazare. Yet the Congress failed to act decisively against individual ministers accused of corruption. Opposition parties, often led by the BJP, obstructed parliament. Senior bureaucrats stopped taking potentially sensitive decisions. A sense of paralysis, fueled by aggressive media coverage, declining private investment, and adverse global conditions, induced a rapid economic deceleration. Modi, a charismatic politician with a keen sense of historical opportunity, seized the moment. Embracing his claim to being a humble chaiwala (tea-seller) from a plebeian backward caste, Modi mocked the shahzada (prince) Rahul Gandhi, the presumptive heir to the Congress. Flaunting his “fifty-six-inch chest,” he scorned the outgoing prime minister as “Maun [silent] Mohan Singh,” promising to eliminate political corruption and to stand up to Pakistani cross-border aggression. As chief minister of Gujarat, a relatively industrialized state, Modi also projected the image of a “development man.” Tapping into the frustrations and aspirations of millions of people for decent work and social mobility, he vowed to turn India into a manufacturing powerhouse and create good jobs for the millions of young people entering the national labor force every year. In Gujarat, Modi had hosted a series of conclaves to court big business with tax breaks, cheap land, and reliable infrastructural facilities. During his prime ministerial campaign, money flowed into the coffers of the BJP from industrial titans keen to elect an openly pro-business government in New Delhi. Reportedly, the party spent nearly $1 billion during the campaign, recruiting high-level IT professionals, tens of thousands of volunteers, and many paid campaigners. Harnessing old media, advertising companies’ expertise, and new social media platforms, they saturated the public sphere with images of Modi, including simultaneously broadcast holograms of him giving speeches across the country. In the end, the BJP captured 31 percent of the national vote and 282 seats in the Lok Sabha, the lower house of parliament, cementing its leadership of a multi-party coalition named the National Democratic


Alliance (NDA). The Congress won 19 percent of the vote and 44 seats, its worst performance ever. The stunning reversal of fortunes reflected the disproportional logic of India’s electoral system, which, like those of the United States and the United Kingdom, is first-past-the-post. The BJP’s vote share was conspicuously low in historical perspective, but the party achieved stunning victories in traditional regional strongholds in the north and west, where it either dominated or swept the opposition, and significant gains in the south and east where it had previously failed to penetrate. Moreover, the BJP had mobilized an unprecedented social coalition. The party consolidated its traditional high-caste, upper-class, urban base but also won a plurality of votes among other backward castes, and rising support from Dalits and Adivasis, especially among middle-class sections in semi-urban zones. Constituencies that saw a higher turnout of young voters—individuals under twenty-five years old constituted an astonishing 50 percent of the population—disproportionately voted BJP because of Modi. Triumphant, the new prime minister proclaimed, “Victory to India. Good days are coming.”

Concentrating and Personalizing Power

Since attaining power, Modi has assiduously pursued the populist, plebiscitary, and presidential style of rule that marked his campaign for office. While politicians and parties across the spectrum have long tried to constrain the autonomy of state institutions by appointing partisan administrators, few displayed the BJP’s desire and capacity to capture entire institutional domains. After a quarter century of coalition governments, which increased the clout of the president, parliamentary committees, election commissioners, the comptroller and auditor general, Supreme Court justices, and the media, the BJP-led government has recentralized political authority. Modi and a small circle of advisers dictate the tempo and direction of the government. They have disciplined the prime minister’s office’s everyday operations and political messaging while undermining collegial responsibility and cabinet autonomy. Few ministers have much sway. The various parties that make up the NDA, which formally rules, have even less. Most observers simply refer to “Modi sarkar [government],”


underlining its personalization of power. If allies in the NDA feel disempowered, opposition parties confront a far greater threat. The new ruling dispensation views electoral rivals and partisan opponents as permanent enemies to be destroyed. Modi vowed to usher in a Congress-mukt (free) India. The BJP rejected the Congress’s claim to be the official leader of the opposition, as the second largest party in the Lok Sabha, on grounds that its forty-four seats constituted less than 10 percent of the total. The prime minister rarely subjects himself to questioning in parliament. This unwillingness to face criticism extends to the media. Modi grants very few interviews. Most are highly scripted affairs with carefully selected journalists. Press conferences, routine in previous administrations, rarely occur. Modi often derides journalists as “news traders,” sparing channels such as Zee TV and Republic TV, which are essentially mouthpieces for the government. (BJP party colleagues have stooped lower, calling them “presstitutes.”) Modi prefers direct communication via new digital media with his legions of followers. To date, more than 22,000 messages have been sent on his Twitter handle, which boasts more than 45 million subscribers. He also gives a monthly radio address, Mann Ki Baat, and uses the “NaMo”—Narendra Modi—app to deliver the latest updates directly to his followers’ phones. Some traditional newspapers and media houses, along with new digital outlets, have investigated the government more assiduously. Yet critical scrutiny has led to behind-the-scenes interventions in some cases, including the dismissal of editors who failed to toe the line. In other cases, bureaucratic harassment and police raids for ostensible lapses in tax payments and other administrative matters have made life difficult. A vast swath of private media in India, afflicted by “paid news” scandals in recent years and beholden to corporate houses for advertising revenue, has been compromised for some time. Yet the climate of fear and threat of reprisal today is far more ominous, encouraging many to self-censor. The Supreme Court has provided some checks on the current government. Its expansive constitutional remit makes it one of the most powerful judiciaries in the world. Yet the structure of the court, whose thirty-one judges sit on benches of various sizes depending on the significance of a case and who must retire by sixty-five, also creates a


fluid polyphony of voices. In late 2015, a constitutional bench of the court (appointed to interpret fundamental law) rebuffed legislation designed to grant the executive far greater power in selecting higher justices. It also quashed the government’s efforts in 2016 to impose direct rule in two smaller states. And in the summer of 2017, another constitutional bench unanimously declared that privacy was a fundamental right, a landmark judgment with ramifications for many social questions. Yet the Supreme Court has also issued controversial rulings. A 2016 order made it mandatory for individuals to stand for the national anthem in cinema halls; another bench eventually overruled the order, but the incident exposed the court’s susceptibility to rising nationalist fervor. Prospective and active justices who had crossed the BJP found their nominations scuttled or posts transferred, intimating behind-the-scenes interference. Most explosively, in early 2018 the four most senior judges of the Supreme Court publicly accused the chief justice of fixing the composition of benches in sensitive cases, presumably to favor the executive. Since then, the prime minister’s office has delayed the appointment of justices and allowed a staggering case backlog in the higher judiciary to worsen. The overriding factor enabling executive overreach is the remarkable march of the BJP in state-level elections, which has transformed the map of India’s federal democracy. In 2014 the party controlled just seven of the thirty-six states and union territories in India. By the spring of 2018, either by itself or with its allies, the party had won control of twenty-one states, which comprised roughly 70 percent of India’s total population. The Congress governed just four. The sources of the BJP’s newfound electoral prowess are twofold. First, Modi remains by far the most charismatic politician in the country, the only true mass leader on a national scale. He possesses remarkable personal energy and self-discipline, an ability to rally massive crowds and exploit popular aspirations as well as elite resentment through a mastery of Hindi, and the courage to address difficult public issues. And no Indian prime minister—not even Nehru—has visited so many countries in a single parliamentary term. Second, the installation of Amit Shah, an old colleague of Modi’s from Gujarat, as president of the BJP has turned the party into an electoral machine. Party membership has increased
dramatically, with some estimates at 100 million, which would make it the largest party in the world. Moreover, Shah has tasked newly recruited members with undergoing strict ideological training and acquiring fine-grained knowledge of potential voters so they can disseminate propaganda down to the level of local voting booths, using old-fashioned canvassing as well as social media. He has also changed the social composition of the traditionally high-caste party by introducing organizational elections for positions within the party. The BJP accumulates vast funds by mobilizing these local networks and cajoling sections of business and capital to pay through a mix of coercion and inducement. These party-organizational assets, combined with the traditional ground strength of the cadre-based RSS and wider Sangh Parivar, have enabled the BJP to become an electoral juggernaut.

Disrupting the Economy

Many progressive critics feared Modi would pursue a radical agenda given his past, his clear parliamentary majority, and the expectations of Hindu nationalist ideologues in the Sangh Parivar. The BJP manifesto contained many controversial pledges, including the construction of a Ram temple on the ruins of the Babri mosque in Ayodhya, whose destruction by Hindu nationalists in 1992 unleashed terrible communal violence; the establishment of a uniform civil code, which would nullify special personal laws for religious minorities; and the abrogation of Article 370, which gave special rights to the contested Muslim-majority state of Jammu & Kashmir. Indeed, despite electing 282 MPs to the lower house, the BJP did not have a single Muslim in its ranks.

Other commentators noted that the BJP lacked a majority in the Rajya Sabha (upper house of parliament) and believed Modi would focus instead on governance and development, which would require opposition cooperation. Political tensions and communal violence would unnecessarily jeopardize that agenda. Shortly after coming to power, Modi declared, “My government will function on the mantra of ‘Minimum Government, Maximum Governance.’” Neoliberals rejoiced that New Delhi would streamline decision-making; deregulate land, capital, and labor; and devolve power to the states. The appointment of several well-known proponents of economic liberalization to key positions seemed to
signal Modi’s commitment to that agenda. Yet the size of government and the quality of governance have failed to change in expected ways.

The Modi administration has announced a plethora of schemes to modernize the economy, expand work opportunities, and improve social welfare, many of which tout personal initiative and commercial entrepreneurship: Startup India, Skill India, Stand-Up India. Others stress impressive public commitments, most prominently a campaign to eliminate open defecation through universal rural sanitation and public investment in renewable energy. But ardent liberalizers bemoan the absence of structural reform. To promote its Make in India campaign, which aimed to raise the share of manufacturing from 16 percent to 25 percent of GDP and create 100 million skilled jobs by 2025, the government has cut red tape and relaxed caps on foreign direct investment in various sectors. Labor protocols and environmental regulations have been scaled back, and businesses have been allowed to self-certify their compliance with the remaining rules to end “arbitrary harassment” by the “inspector raj.” The Modi administration passed a national value-added tax, eliminating a bevy of state-level levies that hampered interstate trade. And it unveiled a bankruptcy code, enabling struggling companies to declare insolvency and state banks to redirect credit to under-resourced sectors.

Similarly, the government has altered social welfare provisioning using market instruments. The National Mission on Financial Inclusion extended bank accounts and debit cards to roughly 150 million poor families, seeking to reduce their vulnerability to extortionate moneylenders. The JAM Number Trinity program has converted many in-kind subsidies to cash transfers, lowering the scope for bribery by officials and middlemen. Finally, the extension of accident and life insurance, and the massive increase in hospital insurance under the Ayushman Bharat scheme, provides greater economic security to informal sector workers.

But to access these accounts and entitlements, beneficiaries must authenticate their identities through biometric identity cards, a controversial project initiated by the UPA called Aadhaar. Despite widely publicized technical deficiencies, the Modi administration rapidly expanded its use through parliamentary legerdemain. The result is a massive potential expansion of bureaucratic surveillance without
adequate safeguards regarding individual privacy and data security. The Supreme Court finally intervened in the fall of 2018, saying that the government could not deny benefits to citizens lacking an ID card and private businesses could not demand it. But the judiciary upheld the constitutionality of the project and its underlying legislation for welfare entitlements using public funds, disappointing activists who wanted it to be struck down completely.

The Modi administration’s most dramatic exercise of arbitrary state power came at the end of 2016, when it decided to demonetize all 500- and 1,000-rupee notes—the sort of move rarely pursued except in countries facing hyperinflation or the aftermath of war. Modi’s government issued a variety of explanations: to eliminate illicit wealth (most of which is actually held in commodity or property form, or is stashed abroad); to weaken underground terrorist organizations using counterfeit currency; and to accelerate India’s transition to a cashless economy. The last justification is of a piece with the government’s faith in modernizing, leapfrog technologies, but it made a joke of Modi’s pledge to reduce arbitrary state intervention in the economy.

The demonetization was an extraordinary shock, affecting roughly 86 percent of all currency bills in circulation in an economy predominantly driven by cash. Citizens were forced to wait in lines for hours over many days to convert their demonetized notes to new legal tender. Urban middle classes were able to cope. But hundreds of millions of workers, merchants, and traders without access to credit in the vast informal economy suffered immense hardship, losing their businesses and jobs, likely driving many into absolute poverty. It has been reported that the prime minister consulted neither his chief economic advisers nor the Council of Ministers, and gave the Reserve Bank of India only a few hours’ notice, before announcing the decision. The demonetization exemplifies the decisive image Modi has cultivated, but it also revealed a lack of preparation and poor implementation.

Modi survived the effects of demonetization in the short run. In fact, by portraying the decision as necessary for the good of the nation and not backing down in the face of criticism, he reaped huge political dividends. Most observers assumed the BJP would suffer major losses in the subsequent 2017 assembly elections in Uttar Pradesh. Instead, voters
awarded it an overwhelming 312 seats in the largest state in the country. The party now appeared invincible. Modi’s subsequent decision to appoint Yogi Adityanath, a reactionary Hindu preacher known for fomenting communal violence, as the state’s chief minister deepened his critics’ worst fears.

Saffron Civil Society

Like virtually all cultural nationalists, members of the Sangh Parivar have long pursued the Gramscian imperative to reshape ethical norms, popular consciousness, and social practices in the trenches of civil society. Their ideological leaders have sought to reinterpret history to valorize purported Hindu deeds, rulers, and customs against the ostensibly alien influences of Christianity and Islam. They promulgate a skewed notion of Hindu common sense, which includes promoting vegetarianism, opposing intercommunal marriage, and redefining the public sphere in simplistic Hinduized terms. Their project has advanced furthest in states run by BJP administrations in northwestern India and, to a lesser extent, in the Gangetic Plain across northern India, through a mixture of state policy and social mobilization.

The Modi-led government declared that the Constitution would be its only scripture. Since attaining national power, however, it has appointed stalwart ideologues to key leadership positions in many universities, research centers, and cultural institutes. Other parties had often pursued a similar partisan approach, but the BJP’s ousting of perceived opponents, often replacing them with individuals with far less experience and more dubious qualifications, is more brazen than before. Modi’s promotion of certain Hindu nationalist myths, such as the claim that ancient Indian civilization had mastered genetic science and plastic surgery, has invited widespread ridicule. In other cases, such as the decision to designate Christmas as “good governance day,” the motive was clear: to render minority communities less visible.

More systematic efforts to normalize Hindu nationalism have occurred in the domain of education in Maharashtra, Rajasthan, and Gujarat, states run—presently or in the past—by the BJP. The party has revised school textbooks to glorify ancient Hindu myths, denigrate Mughal rulers, and downgrade the leadership of Jawaharlal Nehru and Mahatma Gandhi in the nationalist
movement in favor of key Hindutva figures such as Vinayak Savarkar and M. S. Golwalkar. Even more troubling, the revised textbooks openly praise the strong, unifying rule of Hitler and Mussolini and cast doubt upon the benefits of rights and democracy.

Modi in India’s largest state, Uttar Pradesh, where the BJP won a major electoral victory in 2017 (Rajesh Kumar/Hindustan Times via Getty Images)

Praising European fascism and distorting Indian history are part of a wider political campaign against liberal, progressive, and secular voices in civil society. The government has intensified a crackdown on political opponents and dissent that began in the last years of the UPA. New Delhi has accused many NGOs with external funding of violating the Foreign Contribution Regulation Act. Over 10,000 organizations have been compelled to modify or suspend their activities as a result, including national chapters of Greenpeace, Oxfam, and the Ford Foundation, which had questioned the social, economic, and ecological costs of recent development policies. The government justified its move on grounds of transparency and accountability, but it has also labeled these organizations as “anti-national,” revealing a cynical exploitation of
governance norms to erode civil liberties and political rights. Students, intellectuals, and activists expressing political dissent or secular views have been increasingly harassed and intimidated online as well as in the streets. Some BJP politicians have said that critics of Modi should move to Pakistan. Government authorities have charged activists and university students with sedition, a colonial-era law still on the books, for questioning state policy, social inequality, and human rights violations in Jammu & Kashmir, the northeast, and the so-called red corridor of central India where various states have waged a brutal campaign against Maoist guerrillas. Critics of Hindu mythological claims, such as Govind Pansare, M. M. Kalburgi, and Gauri Lankesh, have been murdered for expressing their views.

As prime minister, Modi has gradually moderated his rhetoric. But his proclamations that every town should have a Muslim cemetery as well as a Hindu crematorium, and every village deserves electricity for Ramadan as well as various Hindu festivals, also represent a cunning ruse to insinuate that Muslims had it better under opposition rule. State-level politicians from the BJP have pursued traditional Hindutva goals far more aggressively, encouraging campaigns that have spread militant cultural vigilantism into everyday life.

Governments in many states have introduced bans on cow slaughter with draconian penalties, a long-held demand of the Sangh Parivar. Spurred by these policies, Hindu vigilantes have taken the law into their own hands. Muslims and Dalits rumored to be involved in cow slaughter, leather tanning, and the cattle trade have been attacked and lynched in many regions since 2014. Given the significance of Dalits to the new electoral base of the BJP, the prime minister eventually denounced attacks upon them, tweeting in the summer of 2017 that “anti-social elements were spreading anarchy.” Yet he left it to state governments to resolve the problem and failed to condemn the violence against Muslims. Taking his cue, many state-level BJP politicians have shielded the perpetrators of these heinous acts or even praised them, leading to acquittals and the withdrawal of many cases.

Upon coming to power in Uttar Pradesh, the BJP sanctioned “anti-Romeo squads” in the name of protecting girls and women from harassment and violence, a searing public issue. Again, the formation of
such groups has emboldened its foot soldiers to impose vigilante justice. Campaigns by organizations affiliated with the Sangh Parivar to break relationships between Hindu girls and Muslim boys, who allegedly pursue “love jihad,” and to reconvert Dalits and Adivasis, termed ghar wapsi (back to home), are more explicit attempts to inflame communal tensions and extend social control. The failure of the government to combat these acts of intimidation and violence has encouraged more quotidian forms that differ from previous deadly episodes, like the anti-Sikh pogrom instigated by the Congress in revenge for the assassination of Indira Gandhi in 1984. Many intellectuals, artists, and other public figures have courageously decried these developments, including Bollywood stars like Aamir Khan and Naseeruddin Shah, only to incur vitriolic responses from Hindu extremists on social media.

Cracks in the Road to 2019

Given these events and trends, many commentators believe the BJP and its Hindu nationalist ideology represent a new hegemonic force in modern Indian democracy. Yet the electoral sweep of the party in many northern states, and the concentration of power in New Delhi, create political vulnerabilities too.

The early veneer of sound governance has been tarnished. The Modi administration has thus far avoided high-level corruption scandals like those that damaged the UPA. Yet questions have arisen over its failure to apprehend high-flying businessmen accused of financial misdeeds and its decision to revise the terms of the Rafale fighter jet deal with France, benefitting the powerful Ambani conglomerate. The government has also weakened the Right to Information commission as well as the Whistleblowers Protection Act through administrative delays and parliamentary amendments. It has held only two meetings of the Lokpal committee, charged with appointing national and state-level ombudsmen, a key demand of the anti-corruption movement that helped Modi capture national power. The government has introduced electoral bonds, purportedly to reduce the flow of black money in electoral campaigns, but neither the donor nor the party has to disclose the source of these bonds, merely the amount, leaving the system as non-transparent and unaccountable as before.


India’s disappointing economic performance under the government is another serious electoral liability. Annual economic growth has once again topped 7 percent. But the government’s decision to revise the figures using a new statistical methodology has even impartial observers asking questions. Moreover, growth remains extremely dependent on high public spending, especially on infrastructure. The early boom in foreign direct investment has faded. Private domestic companies and public-sector banks are still constrained by very high levels of debt and non-performing assets, necessitating massive recapitalization. The campaign to Make in India remains an aspiration. The national value-added tax brought many companies into the formal tax net, improving India’s low tax-GDP ratio, but its five tiers of tax rates and clumsy roll-out imposed a high regulatory burden on millions of small- and medium-sized enterprises it was supposed to help. Perhaps most damaging, formal sector employment has failed to expand. Last spring the Indian Railways, the country’s biggest civilian employer, received an astonishing 23 million applications for 90,000 vacancies. Moreover, since 2016, the government has stopped releasing customary job reports, saying the underlying measures had to be revamped.

On August 15, 2018, giving his last Independence Day speech before the general election this spring, Modi vowed to double farmers’ incomes by 2022. It was a characteristically extravagant pledge—but reckless too. Prolonged drought, lower prices, and higher input costs had cut agricultural growth rates in half since his administration took office in 2014. Severe rural distress, alongside the slow-burning ramifications of demonetization and poor employment prospects, exposed the “tall promises” of the BJP.

In state elections at the end of 2018, opposition parties inflicted a surprising defeat on the BJP in three of its strongholds in the Hindi heartland, despite heavy political campaigning by the prime minister. The outcome rejuvenated the Congress, bestowing greater leadership credibility on Rahul Gandhi and encouraging his more charismatic sibling, Priyanka, to finally step off the political sidelines. It also galvanized diverse regional parties to explore the prospects of mounting an anti-BJP coalition. It remains to be seen whether the opposition can mount a united front among its myriad personalities and diverse interests, and offer a real programmatic alternative, in their
respective states and across the country. So far, apart from pledges to protect minority rights and restore public institutions, its main idea is to extend a universal basic income to every poor citizen.

A Kashmiri woman near concertina wire installed following mass arrests by government forces in the aftermath of a suicide bomber attack that killed forty members of an Indian paramilitary force (Faisal Khan/NurPhoto via Getty Images)

The Modi government has responded with old-fashioned populist gestures it had previously disparaged as dole. It passed a 10 percent quota for individuals from poor upper-caste families in public-sector jobs and higher education. The government pressured the Reserve Bank of India (the country’s central bank) to share its excess savings with the finance ministry. And it broke convention by announcing new schemes—most notably cash handouts, tax breaks, and monthly pensions to small farmers, middle-class households, and informal workers—in the last budget before the 2019 polls.

The prime minister has also doubled down on old Hindutva demands, saying the government would explore how to build a Ram temple in Ayodhya once the Supreme Court had delivered its impending judgment—implying a willingness to bypass judicial review. And he has retaliated
against Pakistan for the recent suicide bombing in Kashmir claimed by Jaish-e-Mohammad, the worst attack against Indian forces in the region since 1989, raising nationalist fervor and the prospect of war to galvanize voters.

Only two years ago the BJP claimed it would rule New Delhi for at least a decade. Now, the party’s desperation is increasingly clear. The upcoming general election is a contest, the prime minister declared at the start of 2019, of “janata [the people] versus gathbandhan [coalition].” India’s demos may grant Modi another chance to embody its aspirations and fears. But his classic populist gambit failed to hide a plain truth: the “good days” he promised have still not arrived.

Sanjay Ruparelia is the author of Divided We Govern: Coalition Politics in Modern India. He holds the Jarislowsky Democracy Chair at Ryerson University in Toronto.


France’s Anti-Liberal Left Michael C. Behrent

Where did liberalism go wrong? Since right-wing populist electoral victories upended American and European politics three years ago, the left has been plagued by this question. Different voices on the left have proposed different diagnoses of liberalism’s failures, along with corresponding remedies. Some, contending that liberals are too invested in identity politics, admonish them to embrace a more encompassing vision of the common good. Others maintain that liberals have, for decades, been the enablers of free-market capitalism, offering no economic alternative to the right. The left, they believe, should make a sharp turn toward social democracy, and perhaps even socialism.

Some advocates of these positions have made a related argument: the left must reclaim the label of “populism,” which is too important to concede to demagogues and bigots. The Belgian political theorist Chantal Mouffe has recently made a robust case for “left populism,” arguing that as neoliberalism enters a period of sustained crisis, the left must accentuate the cleavage between the “people”—broadly and inclusively construed—and the political and economic “elites” that have presided over mounting inequality. Left-of-center politicians have cozied up to these elites and endorsed a sterile politics of consensus that is tone-deaf to their constituents’ concerns. In Mouffe’s view, embracing overt contention and anti-elitism—what she calls “agonism”—could help break the liberal impasse without ceding terrain to right-wing populism’s authoritarian and anti-pluralistic proclivities.

Not all left populists agree with Mouffe’s solution. The French philosopher Jean-Claude Michéa believes that the left, in its current form, is ideologically fated to betray the very people it once sought to empower.


Michéa, virtually unknown in the English-speaking world, has written over a dozen books since the mid-1990s, earning him a reputation as a withering polemicist. Still, he is hardly a marquee figure, a “French intellectual” in the grand tradition of Jean-Paul Sartre or Michel Foucault. Michéa has never held a university position, nor does he live in Paris. He spent most of his career as a high-school teacher in the southern city of Montpellier. Few of his books have appeared in English; none have the cachet of being published by Verso or Semiotext(e).

Yet Michéa’s thought has exerted a subterranean influence on a new generation of anti-capitalist radicals in France. Through his writings and media interventions, he has become a kind of patron saint of a new wave of “little magazines” written by young people on both the left and right. For those who celebrate his work, Michéa’s relatively marginal position in French intellectual life adds considerably to his appeal. For it is intellectuals, Michéa contends, who lie at the heart of liberalism’s problem. Their critique of social norms and breezy value neutrality are fundamentally at odds with popular moral instincts. Liberal intellectuals fail to see, moreover, how their moral preferences predispose them to becoming allies of the free market and its reverence for individual choice. In Michéa’s ideas, we can see what left populism fully divorced from liberalism might look like.

Michéa was born in 1950 into one of the most storied milieus of the French left: the subculture that blossomed around the Communist Party. His parents, who met as members of the French Resistance during the Second World War, were both communists. His father earned a living as the sports writer for L’Humanité, the party’s newspaper. Communism, as Michéa once put it, was his political “mother tongue.” His childhood was profoundly shaped by his family’s politics. He traveled to the Soviet Union and learned to speak Russian. Yet he remembers escaping from official tours long enough to meet ordinary workers and to discover what “really existing socialism” was really like.

Michéa left the party in 1976. Unlike the stereotypical ex-communist, he bears the party no grudge. He was never really disillusioned with communism, because he never saw it primarily as an ideology.
Communism, in his experience, was first and foremost a community. “In neighborhood or company cells,” he reminisces, “one often met men and women of incredible generosity and courage . . . who would never for a moment have considered the Party as a stepping stone for their own personal career.” The lesson of Michéa’s communist education was not doctrinal, but moral: political commitment meant living daily life according to a set of shared values.

Even so, Michéa was, like many of his contemporaries, drawn to philosophy and Marxism. His first intellectual crush was Lenin’s Materialism and Empirio-Criticism. After studying at the Sorbonne, he began his professional career in 1972 as a prof de philo—a high-school philosophy teacher. Many prominent French thinkers, from Émile Durkheim to Gilles Deleuze, taught high-school philosophy before achieving intellectual fame. Michéa, however, considers it a badge of honor that he never abandoned his position for a supposedly more “noble” career in academia. In doing so, he sought to honor his father’s mottos: “loyalty to one’s working-class origins” and “the refusal to succeed.” The latter principle, embraced by early twentieth-century French anarchists, entailed a rejection of such bourgeois values as upward mobility, acceptance of titles, and other markers of personal success.


In Jean-Claude Michéa’s ideas, we can see what left populism fully divorced from liberalism might look like. (Hanna Assouline/Flammarion)

Michéa’s outlook owes much to his experiences as a provincial teacher, especially his observation of the transformation of the French educational system in the wake of May 1968. Michéa recognized that traditional public schools had aided capitalism’s advance by creating a culturally homogenous citizenry instilled with disciplined work habits. Yet these schools had also nurtured communal practices that had little to do with money-grubbing utilitarianism, notably a commitment to “transmitting . . . knowledge”—such as Greek and Latin—“virtues, and attitudes that were as such perfectly independent of the capitalist order.”

These traditional functions came under attack in the name of the “liberalization” of education after ’68. The most famous of these reforms was the dismantling of the “stages” from which teachers had long harangued their charges, now dismissed as archaic and hierarchical. Students, not teachers, became the classroom’s focus, and they were encouraged to experience the giddy freedom that results from rejecting one’s “linguistic, moral, or cultural heritage.” By the late 1990s, European directives were instructing teachers to think of students as “clients,” a trend that Americans have encountered under the guise of “education reform.”


These experiences provided Michéa—a classic organic intellectual—with his trademark insight: free-market capitalism, by adopting the cultural radicalism of the sixties, had been given new momentum. His point was not that the ideals of ’68 had been coopted, but that complete, unrestrained liberation from social norms created virtually limitless opportunities for capitalist growth. Borrowing a neologism popularized by the eccentric communist philosopher Michel Clouscard, Michéa referred to this synthesis of capitalism and cultural radicalism as libéral-libertaire. Libéral refers to economic liberalism, while libertaire (a synonym for “anarchist”) means emancipation from cultural norms. The critique of the libéral-libertaire worldview and the search for an alternative have been the leitmotifs of Michéa’s thought.

Michéa’s outlook has been shaped by several thinkers belonging to what might be called a left-populist canon. The most important is unquestionably George Orwell, who gave Michéa a powerful analysis of the kinds of ideological reforms he had witnessed at the high-school level. In his first book, Michéa argued that Orwell’s famous novel 1984 was not a cautionary tale about socialism or totalitarianism, but a critique of progressive conceits—in particular, of the way that intellectual elites conspire to dismantle communal solidarities through turgid jargon, technocracy, and their will to power.

In a 1940 essay, Orwell praised Charles Dickens for his ability to capture the mindset of the “common man”—the impulse, as Orwell puts it, “that makes a jury award excessive damages when a rich man’s car runs over a poor man.” Left-wing intellectuals tend to dismiss Dickens’ stories as so much “bourgeois morality,” but Orwell argued that Dickens articulated the “native decency of the common man.” Orwell celebrated this gut-level morality, and he claimed to have “never met a working man who had the faintest interest” in “the philosophic side of Marxism” and its “pea-and-thimble trick” of dialectics. Michéa took this lesson to heart. “The socialist struggle,” he wrote, “is above all an effort to interiorize these working-class values and to spread their effects through all of society.” By contrast, progress was the ideology of intellectual elites and a threat to “common decency.” The scandal of Orwell’s thought, Michéa
argues, is that it is simultaneously socialist and conservative.

That same combination is what attracted Michéa to another non-French author, the American historian and social critic Christopher Lasch. Lasch’s oeuvre is organized around the conceit that intellectuals have perverted emancipatory politics by unmooring them from any grounding in popular reality. Lasch argued that American intellectuals since John Dewey have been motivated by a cult of experience and authenticity that led them to embrace social reform and even political radicalism as ends in themselves, in ways that alienated them from mainstream values. By the 1970s, Lasch claimed, this navel-gazing radicalism had detached itself from any pretense of proposing a political alternative. It succumbed, rather, to a “culture of narcissism,” steeped in ideas of self-help and well-being that proved eminently compatible with capitalist mass consumption.

In his magnum opus, The True and Only Heaven, Lasch writes that where progressives remain committed to “a wistful hope against hope that things will somehow work out for the best,” the “populist or petty-bourgeois” sensibility asserts that the “idea that history, like science, records a cumulative unfolding of human capacities” runs “counter to common sense—that is, to the experience of loss and defeat that makes up so much of the texture of daily life.” Progressives believe, in short, that we can have it all—which is why, Lasch maintained, however critical liberals may be of capitalism, they can never quite bring themselves to loathe consumerism. Populism, Lasch conceded, is less overtly radical than Marxism: it prefers evenly distributed property to the prospect of indefinite material improvement. It is a philosophy of limits, of curtailed horizons—and the moral disposition such an outlook implies.

These various ideas informed the critique of liberalism that Michéa developed in the 2000s. Its problem, he concluded, is its amoral character, or its value neutrality (neutralité axiologique). Liberalism, he maintains, was born as a philosophical solution to the religious wars of the sixteenth and seventeenth centuries. Political thought became obsessed with pacifying the violent passions unleashed by religious conviction. Two particularly promising cures were proposed: law, through a system of rights that applied to individuals as individuals, irrespective of
their beliefs; and the market, which offered the peaceful pursuit of material well-being as an appealing alternative to the elusive quest for theological certainty. Liberalism began, in short, by making a virtue of its lack of virtue. To this end, liberals launched a “methodical dismantling” of communal practices based on common decency, which were now seen as impediments to personal liberty and material gain. “Actually existing liberalism,” as Michéa calls it, rests on the illusion that a meaningful distinction can be drawn between economic liberalism, on the one hand, and political and cultural liberalism, on the other. Limitless growth is the necessary corollary to endless self-realization. By the same token, free markets only truly thrive in societies based on cultural and political liberalism. “Capital accumulation (or ‘growth’),” Michéa writes, “would not be able to go on for long if it constantly had to accommodate religious austerity, the cult of family values, indifference to fashion, or the patriotic ideal.” It follows that “a ‘right-wing economy’ cannot function in a lasting way without a ‘left-wing culture.’”

Michéa’s left populism hits its stride in his jeremiads against elite liberal culture. Among his favorite targets is Libération, the mainstream, left-of-center newspaper founded by erstwhile sixties radicals whose closest American equivalent is the New York Times. Postmodernism is another one of his bêtes noires. He expresses bewilderment at the success of Foucault’s book I, Pierre Rivière, Having Slaughtered My Mother, My Sister, and My Brother, claiming that it exemplifies “the characteristic fascination of modern intellectuals for crime and delinquency.” Michéa sneers at anything that smacks of urban intellectual life, academic fads, the “hip” and the “cool.” Invariably, his punchline is that the cognoscenti are capitalism’s closest objective allies. “The Cannes Film Festival,” he scoffs, “is not a majestic negation of the Davos Forum. It is, to the contrary, its fully realized philosophical truth.”

Michéa’s scorn for liberal cultural elites clarifies how his conception of left populism differs from the kind proposed, for instance, by Chantal Mouffe. The French philosopher and the Belgian theorist agree that left-of-center parties have, by embracing neoliberalism, failed to offer a meaningful political alternative to the right. They also both object to heaping scorn on
right-wing populists, recognizing that their supporters express genuine democratic opposition to prevailing orthodoxy. But for Mouffe, as she argues in her recent book For a Left Populism, the problem is political: by embracing consensus, the left has obscured the fundamentally conflictual (or “agonistic”) nature of politics. For Michéa, the issue is moral: by embracing liberalism (and not just neoliberalism), the left has muddied the ethical basis of its politics and diluted beyond recognition its commitments to solidarity and common decency. Inspired by poststructuralism, Mouffe cautions the left against returning to Marxism’s “class essentialism”—the belief that only the industrial working class can embody progressive aspirations. Michéa believes that essentialism—a moral essentialism, a conviction in its values’ inherent superiority—is the very core of the left’s identity.

At what point does this left-wing populism cease to be left-wing? Michéa reproaches liberalism for what many Marxists consider its undeniable achievements. He has notably criticized the left’s fixation on fighting racism and homophobia. He insists that he is not criticizing these positions per se, but showing how they provide cover for the liberal left’s increasing indifference to the victims of the free market. In Michéa’s view, racism and homophobia can only be the result of a “moral ideology”—an attempt to articulate basic moral instincts into an airtight worldview. Claims that homosexuality is a sin or a bourgeois perversion are intellectual conceits, not the spontaneous moral inclinations of ordinary people. Michéa, in this way, tars bigotry and liberalism with the same brush: both disrupt the practices of mutual aid and generosity associated with “common decency.” He argues that the liberal idea of tolerance is itself simply a “moral ideology” with little bearing on the struggle against discrimination; ordinary moral instincts, in his view, provide a much sturdier bulwark against homophobia and racism than rights conceived in liberal terms. Yet one wonders whether his confidence in popular virtues is not colored by a healthy dose of wishful thinking.

Michéa has been embraced by more than a few activists on the right—though the groups that gravitate to him defy traditional categorization. Some of his conservative readers participated in the Veilleurs
(“watchmen”) movement, which, in 2013, protested France’s gay marriage law. They claimed to oppose the legislation on Christian principles, but they are also fiercely critical of financial capitalism and embrace “integral ecology,” a Catholic variant of environmental thought. In 2015 two activists from this milieu, Marianne Durano, a twenty-seven-year-old philosophy teacher, and her thirty-one-year-old partner, Gaultier Bès, founded the journal Limite, which champions many of Michéa’s ideas. “It is Michéa’s total, holistic vision,” Durano explains, “encompassing economics, ethics, and the social, his non-schizophrenic approach to problems, that seduces me.” Like Michéa, Bès and Durano denounce a “liberal-libertarian system founded on always needing more,” while berating “sixty-eighters [who have become] the useful idiots of the almighty market.”

Michéa also inspires radicals on the left, his own political family. Yet the concerns of these radicals overlap considerably with their counterparts on the right: both are critical of liberal capitalism, scornful of the sixties generation, and subscribe to an ethos of limits. Kévin Boucaud-Victoire, a young journalist and former Trotskyist who calls Michéa his “favorite contemporary philosopher,” recently launched a journal called Le Comptoir (The Counter). Its first issue called for a genuinely “popular” form of socialism, based on “premodern or pre-capitalist social, moral, and cultural values” and a concern for “ordinary people.” Boucaud-Victoire—who is an evangelical Protestant as well as a socialist—argues that France needs “social populism in the tradition of the Narodniks,” the Russian populists of the 1860s and ’70s who maintained that radicals must “go to the people.” This populism must break with “‘cultural leftism,’ which defines itself in terms of societal and minority questions, in favor of social [i.e., labor] issues and more unifying symbols, without, however, flirting with right-wing conservatism.” He sees La France Insoumise, Jean-Luc Mélenchon’s left-populist party, which won 19 percent in the last presidential election, as approximating these ideals.

In short, it is Michéa’s emphasis on moral conviction that explains much of his appeal to the young, whether on the left or the right. Many aspects of the Michéa phenomenon reflect the distinctly French
context in which it arose. But he is nonetheless symptomatic of a broader crisis in Western political culture. His appeal represents popular discontent not only with neoliberal capitalism, but also with the current alternatives to it. Beneath Michéa’s bitter harangues against hip urban elites and fashionable intellectuals lies a resonant message: any movement that seriously maintains that the neoliberal model is headed toward disaster can no longer make excuses for supporting, when push comes to shove, center-left parties that have consistently served as capitalism’s willing enablers. Mélenchon’s refusal, in the 2017 election, to support Macron over Le Pen is consistent with this position.

Michéa’s disciples seem, at least anecdotally, to share a sociological base. They belong to the new intellectual underclass: educated young people who struggle to find full-time jobs and cannot afford Parisian rents—France’s equivalent, perhaps, of the burgeoning reserve army of adjunct professors in the United States. Though often intellectuals, they have been marginalized by elite cultural institutions and lack the economic resources those institutions afford. To those who feel screwed by contemporary society, Michéa offers the assurance that their resentment is legitimate and important.

Michéa seeks to explore the political consequences of a left-wing break with progressive values that have, as he argues over and over, furthered the interests of capital. A genuine populism must reject the ideology of boundless possibilities and adopt a philosophy of limits—economic, territorial, and cultural. A politics attuned to limits must take seriously the human need for community, or what philosophers call a “lifeworld”—an environment and sphere of relationships based on shared values and mutual understanding. Just as resources and the biosphere need to be preserved, so do human relationships and the traditions that nurture them. Michéa calls this the “conservative moment” inherent in all radical thought. His own vision forces us to consider what it would mean to sustain this moment, to put it at the center of anti-capitalist politics.

He may be too schematic and visceral a thinker to persuade his critics. At a time when populism has upset the long-prevailing liberal order, Michéa forces us to ask whether the historic compromise with liberalism has sapped the left of its moral force. Whatever one thinks of his conclusions, his writing has the merit of clarifying the stakes of this crucial question.


Michael C. Behrent teaches history at Appalachian State University.


Climate Politics after the Yellow Vests Colin Kinniburgh

I first passed the protest camp on Christmas Eve, as the sun was setting and most of the country was preparing to sit down for the holiday dinner. So were twenty-odd local gilets jaunes. This dedicated group of protesters had spent over a month camped out at Jeanne Rose, a large roundabout on the outskirts of the former industrial town of Le Creusot, about four hours’ drive southeast of Paris. Their ranks had thinned since November 17, when 150 or so protesters first rallied to the Jeanne Rose roundabout, out of some quarter-million across the country. But those who stuck around had reason to be optimistic. Already, they had won a series of concessions—including the suspension of the fuel tax hike that sparked the movement—from a government that had spent its first year and a half steamrolling reform after reform past all opposition.

Wearing their signature yellow vests, the local gilets jaunes toasted the Christmas holiday together with escargot—a regional specialty—donated by a sympathizer and grilled over the campfire. It is this kind of camaraderie that has sustained the protesters through the damp cold of France’s winter months, and has given the yellow-vest movement a much greater staying power than expected. In mid-January, a few weeks after I first visited, a series of raids cleared most of the small-town protest camps. But some groups of gilets jaunes have managed to hang on. As of early March, a cabin at the edge of the Jeanne Rose roundabout still welcomes passersby “Chez Manu et Brigitte,” bonfire roaring; across the country, mass marches and rallies remain a Saturday routine, with protesters numbering in the tens of thousands every weekend.

Meanwhile, police repression of the gilets jaunes has if anything grown fiercer. Since the movement started, hundreds of protesters and
bystanders have been gravely injured by flash-balls and other police weapons, including one who was killed, twenty-two who have lost an eye, five who have lost a hand, and dozens more who have been permanently disabled.

The yellow-vested protesters have achieved a phenomenal amount of attention, impact, and support over a relatively short amount of time. At the height of their popularity, in November and December, the gilets jaunes were favored by upward of 70 percent of those polled. The decline in popular support since then (to around 50 percent) can partly be attributed to troubling instances of violence by a subset of participants, not just against property and police but against journalists and each other. Anti-Semitic incidents on the margins of the protests, though limited in number, have drawn national attention to the far-right, conspiracist creep of segments of the movement. The harassment of philosopher Alain Finkielkraut by gilets jaunes in Paris in mid-February—amid a rash of other anti-Semitic incidents in and around the capital, including the defacement of portraits of Simone Veil with swastikas—marked an especially low point.

Far from being anti-environmental, the gilets jaunes have exposed the greenwashing of Macron’s deeply regressive economic and social agenda. (Chris McGrath/Getty Images)

The movement’s explosive trajectory and lack of clearly defined leadership mean that the gilets jaunes continue to defy easy characterization. But their lasting contribution has been a national reckoning over Macron’s pro-business agenda, which several previous rounds of strikes and protests failed to provoke. A broad swath of people who felt they had no say in politics—people who had never been to a protest or been part of a union, who had lost trust in elected officials and their parties, who felt nothing but contempt from the elites holding power over their lives—have experienced the thrill of collective power.

Whatever becomes of the gilets jaunes, their uprising has achieved at least one crucial thing. It has jolted the idea—still stubborn among policy elites—that climate change and inequality can somehow be confronted separately. It has demanded an urgent reconciliation, that is, between two of the defining challenges of our time. By way of a stern warning, the gilets jaunes have dragged the climate debate just one step further away from incremental, market-based half-measures and toward an egalitarian alternative. Climate politics, they remind us, must spell equality, not austerity. Or else.

For years, France has positioned itself as a global leader in combating climate change. It was in Paris, of course, that 195 countries in 2015 signed a landmark agreement to reduce greenhouse-gas emissions and keep global warming below 2°C; it was in Paris, too, that Emmanuel Macron, in the first weeks of his presidency, vowed to “make our planet great again” in response to Donald Trump’s announcement that the United States would withdraw from the agreement. Macron had already raised international hopes by naming a widely respected longtime activist, Nicolas Hulot, as environmental minister. In July 2017, Hulot released a climate plan designed to lead a just transition toward a low-carbon economy, ending French fossil-fuel extraction and reducing unequal access to energy along the way.

But Macron’s green halo quickly dimmed. Even as the climate plan was unveiled, his government made clear that it had other priorities. It pushed through a slew of tax cuts and other market-friendly reforms (notably to
labor law, education, and the national rail system) whose guiding principle was to make France more competitive. In keeping with his determination to transform Paris into the financial capital of post-Brexit Europe, Macron scaled back France’s financial transactions tax (which he had promised to strengthen) and abolished the “solidarity tax on wealth” (in French, ISF for short). In the process, he surrendered billions of euros of revenue earmarked for social and climate policy, and cemented his reputation as “president of the rich.” The repeal of the wealth tax, which applied only to France’s richest 5 percent, has since become a central grievance of the gilets jaunes.

Meanwhile, the government’s professed ambitions on climate were slipping. Hulot lamented that carbon emissions were going back up and pleaded for a stronger financial transactions tax. His frustration culminated in a bombshell radio interview last August, when he announced live, without warning, that he could “no longer lie to himself” and would be resigning as minister.

All of this left Macron on thin ice when it came time, last fall, to defend a fresh round of fuel tax hikes announced for 2019. The specific policy in question is, in fact, a carbon tax, which since 2014 has made up an increasing share of France’s notoriously high gas taxes. Macron accelerated these increases, and on January 1, 2019, the tax on diesel—which powers most French drivers’ cars, especially outside of big cities—was set to jump by another 11 percent.

The government’s stated purpose in raising the carbon tax was twofold: to encourage people to drive less (or, better yet, switch out their old, polluting cars for more efficient ones) and to raise revenue for green investment. It was also a way of backtracking on a three-decades-long European push toward diesel cars, based on the premise that they polluted less—an error whose full implications are finally catching up to the governments responsible, not least France’s. In this respect, Macron was perfectly open. “They told us for decades that we had to buy diesel and now it’s the opposite,” he said in early November. But, he continued, if the French public was taking his proposed solution badly, it was understandable—he simply hadn’t explained himself enough.

On the contrary, an increasing share of the public understood the
problem all too well. Their government had made a huge mistake, partly under pressure from corporate lobbies. And yet again, it was trying to weasel its way out by passing the bulk of the costs down—not to the auto companies, not to the biggest polluters, but to the people reduced to counting every cent when they went to fill up the tank. In the name of the planet, Macron was demanding that the working class sacrifice while the rich were getting tax cuts, public services were being eroded, and green investment was nowhere to be seen. For several gilets jaunes I spoke to, and many more interviewed by other outlets, this was the last straw.

About fifteen minutes’ drive southeast from the Jeanne Rose roundabout is the exit for Sanvignes-les-Mines (population 4,500), where my grandfather lives. This is “peri-urban” France: neither urban nor entirely rural, nor close enough to a big city to constitute a suburb, it’s the sort of in-between zone that now constitutes much of the French landscape. Long devoid of protest movements, such areas have become ground zero for the gilets jaunes. And it’s no coincidence: they’re the kind of places that are almost impossible to get around without a car.

Sure enough, when I visited the area in late December, I was greeted by a clownish puppet of Macron and a barrier of stacked tires, fencing off the protest camp of the gilets jaunes du Magny. Inside was a firepit and a wooden cabin, big enough to hold a dozen people. The site wasn’t chosen at random: the road it overlooks, known as the Route Centre-Europe Atlantique (RCEA), is a major artery for trucks crossing from France’s Atlantic ports into central Europe, and the on-ramp provides an easy vantage point from which to block it.

“We don’t exactly do blockades . . . let’s call it ‘filtering,’” says Yves Clarisse, who has been a fixture of the local yellow-vest protests since November 17. “We tend to slow the trucks down, because by doing so, even if it’s only for a half-hour, an hour, an hour and a half . . . it has an impact on the economy, so the government is forced to take notice.”

Clarisse, fifty-four, lives in social housing in neighboring Montceau-les-Mines (population 19,000), and spent most of his career working in factories. For the last eight years, he has devoted himself to looking after his ninety-year-old father, who suffers from Alzheimer’s.


Unlike many gilets jaunes, who express disdain for all established parties, Clarisse is an avowed supporter of La France Insoumise, the left-populist formation led by veteran leftist Jean-Luc Mélenchon. Asked why the fuel tax hike had triggered a mass movement when so many other unpopular reforms hadn’t, he said it was about freedom. But for Clarisse, the kind of freedoms allowed by owning a car could just as well be provided by free public transit. “If we moved toward free transit, it would allow a ton of people to get out of the house—whether it’s the elderly, people living alone, people who are out of work—and to express themselves more in life,” he says.

If anything, it is this desire—for a greater say in the decisions affecting their everyday lives—that has animated the gilets jaunes. The demand for a “citizens’ referendum” (RIC, for référendum d’initiative citoyenne) has become one of the movement’s signatures, and a rare point of unity. At the Jeanne Rose camp outside of Le Creusot, you could see it from the highway: the sole message greeting passing drivers, from a bright yellow banner, was the three letters “RIC.”

This demand is emblematic of the way the gilets jaunes’ positions scramble typical understanding of the divisions between left and right. Étienne Chouard, the figure most credited with popularizing the “RIC” among the gilets jaunes, styles himself as an anarchist, but his conspiracist bent has found him allies among hardened anti-Semites and other veterans of the far right. Prominent gilets jaunes friendly to Chouard, such as Eric Drouet and especially Maxime “Fly Rider” Nicolle, have spread their own share of vicious conspiracy theories—notably one that equates France’s adoption of the December 2018 Marrakech Pact on migration with “selling” France to the United Nations. Hatred of the European Union and other international institutions courses through yellow-vest social media channels; among some gilets jaunes, the call for participatory democracy itself reflects this distrust of “big government,” especially as it extends beyond the borders of the nation-state.

But for the gilets jaunes I spoke to, and in the movement’s most widely shared collective statements, the broader demand for democracy was inextricable from social—and, by extension, climate—justice. A list of forty-two demands issued in late November, compiled through an online poll in which 30,000 people were said to have participated, included a
halt to closures of local rail lines, post offices, and schools; a major retrofitting program to insulate homes; the renationalization of electric and gas utilities; a ban on the privatization of other public infrastructure; an end to austerity; and the fair treatment of asylum seekers, among a host of other measures aimed at reducing precarity and increasing equality. In January, an “assembly of assemblies” comprising 100 gilet jaune delegations from across the country concluded with a joint statement stressing many of the same themes.

Running through these demands is a call for a renewal of the public sphere—and, between the lines, an acknowledgement that in the twenty-first century, there can be no meaningful public sphere without collective, and transformative, solutions to climate change. Far from being anti-environmental, the gilets jaunes have exposed the greenwashing of Macron’s deeply regressive economic and social agenda. Clarisse pointed out that, of the additional €4 billion in revenue that the fuel tax hike—since scrapped—was projected to raise in 2019, only 19 percent would have been channeled directly toward the green transition, with the rest going back into the government’s general budget. Meanwhile, France’s biggest corporations are reaping tens of billions of euros’ worth of tax cuts under the “tax credit for competitiveness and jobs,” or CICE.

Of course, the government’s main justification for raising the carbon tax—publicly at least—was not to raise revenue, but to discourage the use of fossil fuels. But on this point too, gilets jaunes and environmental economists alike are skeptical. Attempting to change people’s habits through taxes presumes that, if one product gets too expensive, they can just switch to another one. But many working-class households simply can’t afford to decarbonize their commutes. “It’s well and good to tell people who are making €1,000 a month to change their car, but they can’t,” says Elsa Mercier, a thirty-three-year-old translator and a fellow regular at the Magny camp along with Clarisse. She, like so many gilets jaunes, sees Macron’s government as imposing a false binary between people’s livelihoods and saving the planet.

Macron’s government, for its part, claims that it has offered French drivers alternatives, by granting low-income households up to €5,000 in incentives to upgrade to a less polluting car (one of several measures the government proposed in early November in an attempt to


calm mounting anger over the fuel tax). But for many gilets jaunes, this offering was either too little, too late, or nowhere near enough. (Even a €5,000 bonus, for one thing, falls far short of the cost of switching to a hybrid or electric car.) The explosion of the yellow-vest movement on November 17 exposed a deeper unease, which a series of concessions since—including a government-funded bonus for low-wage workers— have similarly failed to allay. The continuous protests have put the president in an uneasy position of his own. It’s not just that his government’s legitimacy has taken a severe blow, though it has. (A February poll showed his approval rating steadily recovering from its record low at the height of the yellow-vest movement— when he tied his predecessor François Hollande for least popular president of the Fifth Republic—but it remained at 34 percent, several points below Trump’s.) Macron’s concessions to the protesters have also put his government at odds with EU budget rules, which mandate a maximum deficit-to-GDP ratio of 3 percent.

“It’s well and good to tell people who are making €1,000 a month to change their car, but they can’t,” says Elsa Mercier, a thirty-three-year-old translator. (Colin Kinniburgh)


Macron’s finance minister, Bruno Le Maire, was quick to clarify that the €10 billion in concessions (including the energy subsidies and wage bonus) would be made up by spending cuts elsewhere. Macron himself has insisted that restoring the wealth tax (ISF), a central demand of the gilets jaunes, is off the table. And while the government is keen to showcase a new tax on tech giants—Google, Apple, Facebook, and Amazon, or GAFA for short—Le Maire stressed in a mid-January interview that the government’s priority remains to downsize the public sector in a bid to attract foreign investment. As for the yellow-vest movement and the “great national debate” launched by Macron in response, Le Maire was sanguine. This is a “historic opportunity,” he said, for French citizens to make their voices heard—as long as they stick to the right questions. Notably: “which spending to cut in order to cut taxes?”

Of course, this austerity mindset is not only a French problem—far from it—nor is Macron its chief architect. But the audacity of a government that professes to be a global leader on the environment, while in practice catering above all to transnational capital, has brought into stark relief the true stakes of the climate fight. The French government’s approach is symptomatic of the attitude that treats climate change as a market error—one that can be corrected with a tax here, an incentive there, targeted primarily at individual consumers—when climate science increasingly tells us that confronting climate change means reorienting our entire economies, and fast.

So far, the gilets jaunes have been far more effective in underlining the flaws of neoliberal climate policy than in proposing alternatives. But other movements are filling in the gaps. Since September, near-monthly climate marches in France and neighboring countries have brought tens of thousands of protesters into the streets to demand meaningful action on climate change, including about 200 in Montceau-les-Mines in December. Even those who weren’t wearing yellow vests overwhelmingly shared the sense that their struggles were one and the same. Borrowing a phrase from Nicolas Hulot, they chanted, “Fin du monde, fins de mois / Mêmes coupables, même combat” (“End of the world, end of the month / same culprits, same fight”).

Poll after poll shows climate change to be a top concern for growing numbers of French voters, as for many of their counterparts around the world. Nicolas Hulot remains by far the most popular political figure in France, with a 75 percent approval rating. (He is practically the only one to clear the 50 percent mark.) A petition mounted by four environmental groups in mid-December, threatening legal action against the French government if it did not take immediate, concrete measures to honor its climate commitments, quickly became the most successful in French history. With 2.1 million signatures as of this writing, it outpaces the petition credited with kickstarting the yellow-vest movement by almost a million. Still, it took the gilets jaunes to send out the kind of SOS signal that the rest of the world was willing to hear. They have set the tone for the rest of Macron’s first term, and may yet augur a new era in French politics, if not European politics writ large.

In the meantime, the trucks rumble on along the RCEA. In a week, some 300 of them go back and forth just from one new Lidl warehouse outside of Le Creusot. This outpost of the German-based discount supermarket mega-chain is now the largest in France, having replaced a smaller one just a few miles down the road—directly across from the now evacuated yellow-vest outpost at le Magny. This hasn’t escaped the attention of the local gilets jaunes, who blockaded the warehouse on two separate occasions in late November. (A group of about 200 gilets jaunes did the same at a Lidl distribution center in small-town Brittany in early December.) Perhaps the movement’s most unsung strength is its knack for pinpointing such key nodes of an evolving world economy, whose carbon footprint continues to balloon while those least responsible shoulder the blame.

Taxing everyday consumption of fossil fuels may be a necessary step toward abandoning them for good, but it will only succeed if those who profit most pay most, and if the benefits to everyone else are immediate and tangible. Proposals toward that end are not lacking. Among them is the European version of the Green New Deal championed by the DiEM25/European Spring movement, led internationally by economist Yanis Varoufakis and represented in France by the new party Génération.s. This group insists that problems as fundamental as inequality and climate change cannot be solved at the national level alone. Even a French government hell-bent on taxing the rich would be hard-pressed to do so single-handedly, at least at the levels needed to finance a rapid, low-carbon overhaul of the economy, without support from European institutions. So their answer is not less Europe, but more—a more democratic, more egalitarian continental system that could oversee a massive green transition. The European Spring manifesto calls for a €500 billion a year, continent-wide green investment program; a job guarantee and a “universal citizen dividend” that would pave the way for a continental basic income; renegotiation of Europe’s energy and agricultural policies to foster renewables and agroecology; a financial transactions tax and a crackdown on tax havens; a strengthened right to housing; greater rights for migrants and refugees; and so on.

This program, like the Green New Deal in the United States, could easily be accused of being a socialist grab-bag. But the vital insight it shares with its U.S. counterpart—and what brings it coherence as a climate policy—is its emphasis on the massive investment, across a huge variety of different sectors, needed to flip the switch toward a low-carbon economy. In France, a similar platform is also being championed by Place Publique, a new group seeking to form a united front of ecological, democratic-left parties for the European elections and beyond. Their “points of unity” include the principle that the future of life on earth cannot be sacrificed to spending limits like the EU’s 3 percent deficit rule. Adding to this list is the Manifesto for the Democratization of Europe, launched by Thomas Piketty and some 120 other European intellectuals in December, and now counting over 100,000 signatures. These new initiatives build on the long-running demands of tax justice groups like Attac, which was born out of the alter-globalization movement of the late 1990s and today leads a coalition calling for 1 million climate jobs in France alone. Echoes of a Green New Deal can also be found in the platform of La France Insoumise, albeit couched in more Euro-skeptic terms. Mélenchon’s party favors the language of green economic planning, and it presents the ecological transition as the task for an “independent” France.

For all the discord over Europe, strategy, and style—not to mention accumulated grudges between party leaders—there are important common threads binding all of these proposals, as well as the demands of the gilets jaunes. At their core is the question of who pays, and who gets paid, to lead the ecological transition. There is consensus among the broad left that corporate subsidies such as France’s CICE tax breaks must be directly reinvested in the green economy. But which green economy? Renewable energy, public transit, and agroecology are no doubt central to the equation. But so is an entire other sphere of un- and under-compensated care and service work—what Marxist-feminists like Nancy Fraser have called the labor of “social reproduction.” Building on the momentum of the gilets jaunes, the demands of these workers have also been rising to the surface in France. There are the stylos rouges (red pens), the teachers calling for salary hikes and an end to job cuts. There are the gilets roses (pink vests), the child-care workers mobilizing against planned reforms to unemployment insurance that especially threaten short-term contract workers like them.

Since the movement started, hundreds of protesters and bystanders have been gravely injured by police weapons: one has been killed, twenty-two have lost an eye, five have lost hands, and dozens more have been permanently disabled. Jason (left) said police beat him in the back with a baton at one of the protests in mid-December. (Colin Kinniburgh)


And, among the gilets jaunes themselves, there are not only professional healthcare aides like Ingrid Levavasseur, who announced in January that she would head up a list of yellow-vest candidates for the European elections, but many like Yves Clarisse who devote their lives to taking care of their loved ones, for little to no compensation. (Clarisse is entitled to €500 a month in state support for looking after his father—about a third of the minimum wage—but even combined with his father’s pension, he told a local reporter, it’s barely enough to get them through the month.)

The effort to revalue care work in this mold is central to why Green New Deal advocates on both sides of the Atlantic have put a universal basic income, a job guarantee, or some combination of the two at the center of their agenda. As scholars like Alyssa Battistoni have long stressed, policies privileging care, education, and other services are natural building blocks of an egalitarian, low-carbon economy. If done right, such policies would upturn the vicious race-to-the-bottom cycle practiced by multinational corporations like Lidl, which treat employees like disposable goods in the pursuit of selling ever more, well, disposable goods—imported from global South sweatshops at great carbon cost—to customers who are sorely in need of the discounts.

Reading through the list of demands issued by the gilets jaunes in late November, it’s striking how much of this same agenda emerges. But so far, the connections between their movement and programs like the Green New Deal or the European Spring remain mostly between the lines. The “convergence of struggles” long heralded by French leftists, and flickering in these overlapping lists of demands, remains elusive. What the gilets jaunes have made clear in the interim is that to gain a foothold in France, let alone in Europe, a Green New Deal will need to harness some of the rage that animated the roundabouts this past winter. It will be an uphill battle to rally even a plurality of the French public to the idea that the democratic left, rather than the National Front, represents the most credible challenge to Macron’s three-decades-late reprise of “there is no alternative.” The European elections this May will be the first major test of which way the yellow-vest revolt ultimately points: toward a democratic, egalitarian alternative, anchored in an expansive vision of climate justice, or toward a hardening, zero-sum battle between the neoliberal center and the far right.

For now, the crackle of the bonfires that kept the protesters warm through the winter has largely given way again to a quieter smoldering of discontent. Still, at roundabouts like Jeanne Rose, a dedicated core of protesters is searching for next steps, while across the roundabout, a police car keeps close watch.

Colin Kinniburgh is a Paris-based journalist and an editor at large at Dissent.


How Eugene Debs Became a Socialist

Before Eugene Debs became the most popular socialist in American history, he was an innovative and courageous labor leader. As leader of the American Railway Union (ARU), founded in 1893, he attempted to gather all the crafts in what was then the nation’s most essential industry into a single organization that could force employers to raise the wages and improve the working conditions of millions of wage-earners. The ARU’s claim to class unity was crippled, however, when its members voted, against Debs’s advice, to bar African Americans from joining. In the 1930s, the CIO took up the task of organizing workers by industry, instead of by individual trades. And this time, radical activists helped convince “labor’s new millions” to exclude nobody.

As this excerpt from Eugene V. Debs: A Graphic Biography (illustrated by Noah Van Sciver, written by Paul Buhle and Steve Max with Dave Nance, and published by Verso this March) reminds us, the federal government essentially destroyed the ARU in 1894. Debs’s union had voted to support a strike by the poorly paid workers who built the plush Pullman sleeping cars coupled onto most interstate trains. President Grover Cleveland seized on the opportunity to douse the fires of labor militancy; he dispatched federal troops to break up the strike and clap the top officials of the ARU in jail. Debs served a six-month sentence for his “crime” and emerged from his cell a democratic socialist. One hundred and twenty-five years later, the hopes for a resurgent left depend again on the growth of a large and powerful labor movement. —Michael Kazin







Andrea Dworkin photographed in London, 1988 (Stephen Parker/Alamy)


Reviews

What Men Want
Charlotte Shane

Last Days at Hot Slit: The Radical Feminism of Andrea Dworkin
Edited by Johanna Fateman and Amy Scholder
MIT Press, 2019, 408 pp.

I suspect we are legion, we white women who first read Andrea Dworkin while cresting or just tipped past our teens. I was one, and Johanna Fateman, co-editor of Last Days at Hot Slit: The Radical Feminism of Andrea Dworkin, was another. “To read Dworkin at eighteen,” writes Fateman in the introduction, “was to see patriarchy with the skin peeled back.” Dworkin’s work presents male supremacy at its goriest and most sadistic by focusing on eroticized brutality, the habitual violence of men the world over who are, at this moment and every moment, “shoving it into her, over and over,” often when the “her” is unwilling, or when she’s a child—and sometimes until she dies, or after she’s dead, or both. This viscera is what makes Dworkin’s writing so compelling, and so repellent. For her, terror was a necessary tactic; she saw herself first and foremost as a political actor crafting “weapon[s] in a war . . . strategically, with a militarist’s heart.” Dworkin’s first publication, Woman Hating (1974), opened with an enduring mission statement: “revolution is the goal. [This book] has no other purpose.”

Fateman and co-editor Amy Scholder are adroit, sensitive handlers of this volatile material. Last Days at Hot Slit provides a service by virtue of its inclusion of previously unpublished pieces and excerpts from out-of-print books, but there’s also great skill behind the respectful, honest depiction of Dworkin’s fraught development as an intellectual. She was massively talented and occasionally brilliant, but in the 1980s her thinking became recursive and compulsive, caught on the snag of itself, and her writing suffered accordingly. Because she was a militant, she could not allow hesitation, uncertainty, or ambivalence—the very experiences that lead theorists to new insights. The stakes were too high: one moment of vacillation might cost her the war. “In her singular scorched-earth theory,” Fateman writes, “pornography is fascist propaganda” and prostitution “the bottom rung of hell.” Both had to be eliminated completely before women could be free, as would intercourse as we know it. But through this volume’s skillful curation, the writer who pledged herself to severity and absolutism is revealed to be a complex, contradictory figure.

When I was a college student, it was Dworkin’s unequivocal words that lavishly tended to my most nightmarish, inchoate impressions of what women’s existence entailed. Here was a writer who referred to a molested child as “a breachable, breakable thing any stranger can wipe his dick on,” who wrote “it is impossible to use a human body in the way women’s bodies are used in prostitution and to have a whole human being at the end.” She claimed that, “for men, their right to control and abuse the bodies of women is the one comforting constant,” and then elaborated on how frequently and unrepentantly they used that license. Was this what the world was truly like? I wondered. And is this what the world—meaning men, the dominant class everywhere on the planet—has in store for me? More than Firestone or Millett or Steinem, it is Dworkin who still leaves an indelible mark. Her crushing visions were irrefutable and inconceivable in equal measure, or at least irrefutable by me, inconceivable to me, a small-town girl who was more or less a virgin.

In Last Days’ introduction, Fateman writes that she originally distanced herself from Dworkin’s politics “with the kind of clean, capricious break that youth affords,” but I think she’s being a little hard on herself. Dworkin’s champions have long accused critics and mere non-adherents alike of spurning her out of pure misogyny, but because she traffics in extremes, she drives away many who are better than that. Perhaps it takes years of practice to be able to dismantle the militarized edifice of her books. To extend Dworkin’s preferred metaphor, successfully negotiating her work is like deactivating a bomb. If you can’t deactivate it, you should run away. The only other option is to blow up.

I thought a lot about Andrea while I sold sex shows on the internet, years ago, and I’m using her first name now because that’s how I thought of her then and, frankly, still think of her—with a sense of tremendous intimacy. Webcamming continues to be the most unpleasant variation of sex work I’ve tried, in part because of the ways anonymity emboldens people. “What’s your biggest dildo?” some guys would ask right away, and I’d hold up one of the medium-sized ones, hoping they wouldn’t notice what else lay on the bed beside me, though they usually did. This was in the mid-2000s, when anal sex was trendy, especially ATM (taking something from inside an ass and putting it directly into a mouth). “Heel fucking” also enjoyed a long stint of popularity on my site—an American platform that mainly hosted girls working in the former Eastern bloc, with occasional guest appearances from homegrown porn stars—and while it was possible to fake sliding the spike of a plastic stripper shoe into an anus, faking vaginal insertion was more challenging.

This was the tail end of the porn era David Foster Wallace characterized as enthusiastically “vile”: “in nearly all hetero porn now there is a new emphasis on anal sex, painful penetrations, degrading tableaux, and the (at least) psychological abuse of women.” Flirting with outright Dworkinism, he speculated these films were mainly made “not for men who want to be aroused” but “for men who have problems with women and want to see them humiliated.” Of course, Dworkin maintained that for the vast majority of men, those desires are one and the same. (It’s a bit on-the-nose that these words come from a man who tried to push his girlfriend from a moving car. I quote him here because he is still a respected voice in spite of this established abuse, and I worry my own recollection of the cultural trends won’t be convincing enough.)

Though I used “a” and “an” a moment ago while referring to body parts, what I mean is mine. The shows were solo, and without a man to do things to us, we, the site’s “models,” did things to ourselves. I was twenty-one and lived alone in a two-bedroom apartment with my computer, immersed in these demands. The bleak universe captured in Dworkin’s writing has never felt nearer or realer to me than it did during this time. It seemed men didn’t understand women had bodies like their own: sensitive in predictable places, with nerve endings and pain receptors and intimate vascular tissues that weren’t hard to tear. To see ignorance in place of malice was an overly generous reading, but I couldn’t make sense of it any other way.

“Have you ever wondered why we are not just in armed combat against you?” Dworkin asked a group of 500 ostensibly receptive men in 1983, when she spoke at a conference “for Changing Men.” “It is because we still believe in your humanity, against all the evidence.” Men’s routine inability to recognize women as fellow humans, fundamentally like themselves, is the bedrock of Dworkin’s philosophy. But what I tried to attribute to a lack of education or empathy, she rightly framed as the result of vehement, collective, and persistent refusal. Dworkin claimed that when a woman looks at especially violent pornography, she’s either overcome by fear or else “she entirely dissociates herself from the photograph: refuses to believe or understand that real persons posed for it.” But I was the real person posing, so I attributed the (unconscious) dissociation to men, to absolve them of their tastes and make it more bearable to share the world with them.

I didn’t want to believe in malevolence, and I didn’t always have to. One regular client, after I asked, spent hundreds of dollars’ worth of time speculating about why he always asked me to enact a scene of anal rape. Another, whose proclivities were forgettable, told me I’d helped him want to stay alive after a failed suicide attempt. A third, in his late teens, kept me company for hours each night, distracting me from the verbal abuse with inside jokes. An inability to admit the possibility of heterosocial tenderness is part of what makes Dworkin’s work so suffocating.

Because of my past, reading Last Days at Hot Slit felt like going through the journal of a close family member. There is much I remember alongside much I hadn’t read before, much that hurts and angers me, and much that I appreciate. Dworkin’s words are as sewn into me as are certain songs or smells. For every moment I see her as a comrade (in her hatred of incarceration, her mistrust of police) there is one when I see her as an enemy: for the degrading, almost gleefully cruel way she wrote about prostitutes after she herself traded sex for food and housing; for her mystifying attempt, with the help of Catharine MacKinnon, to expand the reach of Amerika’s (her preferred spelling) courts. What a subsequent decade of sex work and simple aging taught me was that the answer to my youthful inquiry of “is this what the world is really like?” is yes and no; yes, for some women, or even yes, sometimes, for all, but the world is not only this.

I wish I could argue with her about this now, though it’s well established that Dworkin despised my kind of thinking. The epilogue of her second novel, Mercy (1990), is a scornful rejection of nuance written from the point of view of a lesbian academic who subscribes to “a self that is partly obscured, partly lost, yet still self-determining, still agentic.” (This is meant to indicate a cowardly, anti-feminist, patriarchal-collaborator’s line of thinking.) Dworkin found nothing of value in any challenge or criticism, not even when it came from (former) friends. As Fateman points out, Dworkin refused to “directly engage with the positions of her feminist adversaries,” which made it much easier to smear them—and she did, viciously. Last Days debuts “Goodbye to All This,” an ugly, agonized, and previously unpublished work from 1983, written in the wake of the divisive Barnard Conference on Sexuality in 1982, which Dworkin’s group, Women Against Pornography, picketed. In the essay, Dworkin derides by name those whose politics most deeply wounded her, many of them lesbian activists. Less than ten years before, in Woman Hating, Dworkin had praised “erotic civil disobedience” and exhorted readers to cast off the shackles of sexual taboos. But here, she unleashes the full force of her scorn on the “pierced, whipped, bitten, fist-fucked and fist-fucking, wild wonderful heretofore unimaginable feminist Girls.” “It’s nice to see girls get what they want,” she practically spits. “It’s astonishing to see girls want what they get.”

Perhaps this is not evidence of self-contradiction but change. In 1995’s “My Life as a Writer,” Dworkin describes herself as an intellectually curious child who “did not like boundaries” and “saw adults as gatekeepers.” In Woman Hating, she cites Julian Beck’s words: “I am an anarchist. I dont sue, I dont get injunctions. [sic]” Yet she went on to claim, while writing with MacKinnon in Pornography and Civil Rights (1988), that “women and children are being raped because” the Marquis de Sade’s books (among many other materials) are still read, and she coauthored a law designed to remove such works from circulation by suing booksellers into bankruptcy. Canonical works were hugely influential throughout her girlhood and beyond—“I had been brought up in an almost exclusively male literary tradition,” she explains in the preface to Our Blood (1976)—yet she later claimed “that art, those books, would have robbed me of my life.” Why only “would have”? What saved her from the fate that awaits the rest of us? “My own view,” she wrote, neatly excising the rhetorical complications of a self that would be self-determining, still agentic, “is that survival is a matter of random luck.”

It’s the determined denial of an active, wily, strong, and subversive self that weakens Dworkin’s work to the point of breaking. She insisted her writing was about women, but it was about men: what they do, why they do it, and which lies they use in their defense. Women couldn’t be subjects, only faceless victims, and they were described as emptier still when they failed to live up to Dworkin’s politics. According to her, when “happy hooker[s]” and wives and other “militant conformist[s]” find themselves subjected to male violence, it “is akin to nailing the coffin shut: the corpse is beyond caring.” Consequently, portions of her writing are ghastly, maybe even unforgivable in their misogyny. “One does not violate something by using it for what it is,” she wrote in Pornography: Men Possessing Women. “A whore cannot be raped, only used.” In 1993’s “Prostitution and Male Supremacy,” a speech not included in Last Days, Dworkin assumed the mantle of an imagined “john” and said that “the prostituted woman” is “nobody real, I don’t have to deal with her . . . She is perceived as, treated as—and I want you to remember this, this is real—vaginal slime. She is dirty; a lot of men have been there. A lot of semen, a lot of vaginal lubricant. . . . Her mouth is a receptacle.” Dworkin imagined herself a champion for prostitutes when she wrote this: someone fighting “for” them by telling the truth about their lives. But the only people who ever spoke to me like that were faceless free chatters hiding behind disposable screen names. When I started seeing clients in person instead of online, no one ever said anything so hateful—and I don’t know how I could trust or organize alongside a woman who would. By articulating the internal monologue of the world’s worst women-hater, she became his mouthpiece.

How do you talk about patriarchy without saying exactly what it wants to hear: that women are abject, disadvantaged, fragile, unable to protect ourselves? We still haven’t figured that out, though early discussions about this predicament took place while Dworkin was alive and hard at work. (They impacted her in no discernible way.) In the 1990s, academics like Renée Heberle and Sharon Marcus began to seriously reexamine feminist orthodoxy around pornography, sexual assault, and the use of the legal system to respond to rape. Fixating on sexual violence risked establishing it as the moment that, in Heberle’s words, defined “women’s possibilities for being in the world.” And treating sexual violation as inescapable, omnipresent, and shattering almost inevitably encouraged women to “turn to the very social and political institutions which continue to represent public patriarchy”—meaning the courts, the law, the state. A reliance on men to adjudicate and prohibit rape impeded women’s ability to believe they could devise their own strategies of response. Even now, feminists cling to what Heberle described as the mistaken belief that if we can make “society understand the truth about itself,” it will transform, and the preferred method of trying to force that understanding is through routine, graphic exposure of our sexual wounds. But patriarchal institutions aren’t moved by the petitions of suffering women. On the contrary, such spectacles confirm and reify the institutions’ obscene power. (Kavanaugh’s hearing and subsequent confirmation are a useful recent example.)

What would it mean to deny that the power to stop rape lies exclusively in the hands of men, to begin a reversal of old narratives about female powerlessness and absolute male prerogative? These are the narratives Dworkin depicted so evocatively when she wrote passages like those in Mercy: “I’m just some bleeding thing cut up on the floor, a pile of something someone left like garbage, some slaughtered animal that got sliced and sucked and a man put his dick in it and then it didn’t matter if the thing was still warm or not because the essential killing had been done.” She invested her life in these depictions, and the commitment atrophied her revolutionary instincts. In her stern but ultimately toothless address to the aforementioned conference of feminist-allied men, she did not threaten her listeners with retaliatory murder for their gender crimes but rather implored them for twenty-four hours without rape: “one day of respite, one day off . . . how could I ask you for less—it is so little.”

That was in 1983, and her sense of hopeless resignation never let up. In “My Suicide,” Last Days’ final entry, she writes of her abusive ex-husband, “I wish someone would help me out. . . . I’m pathetic. I can’t do it myself. . . . I’m treated like the world’s hardest bitch but I can’t finish off that particular beast.” She rhetorically asks if there isn’t “a secret world of women assassins” who could kill him for her: “the worst thing is how they rape us because we’re part of a group but we have to fight back as individuals.” If she’d ever strayed from the militant’s path, she might have found the sort of camaraderie for which she seemed to yearn. “My Suicide” is heartbreaking, saturated with the sorrow that lies beyond disappointment and dismay: the pain that’s left when you’re too emotionally exhausted to feel a sensation invigorated by expectation or even anger. “Please help the women,” reads the second-to-last line. It’s a prayer to a “ruthless” god who may not even be capable of mercy. She just couldn’t imagine a future in which women help ourselves.

Charlotte Shane is a co-founder of Tiger-Bee Press, an independent publisher based in Brooklyn.

Beyond the Backlash
Harold Meyerson

Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America
By John Sides, Michael Tesler, and Lynn Vavreck
Princeton University Press, 2018, 333 pp.


In recent years, Europe’s social democratic left has been confronted with a terrible case of what hitherto had been largely an American malady: an electorate turning to nativist, largely racist, politics. To be sure, over the past millennium, the continent had experienced periodic outbursts of religious slaughter, but save in those instances (say, the Crusades) when it went spoiling for a fight, most of the violence was inflicted on homegrown Protestants (in Catholic countries), Catholics (in Protestant countries), or Jews (everywhere). Immigrants—and correspondingly, nativists—were generally few and far between. More recently, of course, immigrants from the Middle East, Africa, and beyond have sought refuge in Europe, provoking a backlash that has contributed to the shrinking of the continent’s socialist and social democratic parties.

This is but one of many reasons for the crisis of the European left, but it is, in many ways, a peculiarly American crisis that has left our European comrades floundering at sea. A number of the nations where social democracy had progressed the furthest—the Scandinavian nations in particular—were racially and religiously homogeneous, the kind of places where class solidarity could flourish in the absence of ethnic tension. That’s one reason why socialism took root in Europe and never did here. Unlike our European counterparts, America has always been a nation of immigrants and nativists, of whites and racial minorities.

In Identity Crisis, John Sides, Michael Tesler, and Lynn Vavreck tell the unhappy tale of the 2016 election, bringing together a range of metrics—polls, tallies of media appearances and campaign outlays, regression analyses, and the like—to argue, largely convincingly, that the outcome was chiefly the result of Donald Trump’s appeal to racist fear and loathing. The authors, however, are political scientists, not historians. And while the United States had never before seen a presidential nominee, much less a president, like Trump, it has seen multiple elections dominated by varieties of the hatred that Trump stirred up. From the antebellum and Civil War–era Democrats’ campaigns against “Black Republican” Abraham Lincoln and his party; to the countless anti-Catholic Republican campaigns against the immigrant Irish, Italian, and Eastern European Democrats of the late nineteenth and early twentieth century; to George Wallace’s runs for president and the Republican dog-whistle campaigns beginning with Richard Nixon’s in 1968; nativist and racist sentiment has been as much the rule as the exception in American elections, if not more so.

A teenager at a Trump rally in 2016 in Cleveland, Ohio (Spencer Platt/Getty Images)

Or, at least, it has been during those periods when racial minorities were seeking their civic and economic rights and when non-Protestant or non-white immigrants were arriving en masse—during those periods, that is, when the question of who was an American, who was the protagonist of the national narrative, was subject to change. It was only during the period when immigration from anywhere but Western Europe was almost entirely prohibited—from 1924 to 1965—that American workers were able to sufficiently overcome their fears and animosities to build powerful unions and support the creation of a semi-welfare state. The backlash to Barack Obama’s presidency came in part because he had the misfortune to govern at a time when many Americans believed he symbolized not a triumph of egalitarianism but rather the rising of a new America in which whites would no longer constitute a majority.

As Sides and his co-authors document, Obama was the first president whose popularity did not rise alongside rising consumer sentiment. For them, this means that the assessments of the economy held by Republicans and Republican-leaning independents were racialized. Indeed, noting that the share of Republicans who supported Trump was roughly the same at all income levels, they argue that it was racial, not economic, anxiety that put Trump over the top in November 2016.

My own view is that it’s more difficult than the authors admit to separate out one kind of anxiety from the other—that this anxiety, this anger, was, in quantum parlance, both particle and wave, both racial and economic. For one thing, the metrics by which they gauge views of economic and social well-being are those that measure the beliefs of individuals—consumer sentiment, wage data. Other deep dives into 2016 voter behavior, such as that by Jon Green and Sean McElwee, look at the experience of communities—of counties and zip codes. When researchers look at such collectivities, the socioeconomic roots of Trump support come more clearly into focus. As Green and McElwee note, “voters with favorable views toward Trump were more likely to live in geographic areas with worse health outcomes and a higher reliance on income from the Social Security Administration…. [T]he rate of increase in life expectancy between 1985 and 2010 was negatively correlated with Donald Trump’s vote share at the county level—that is, counties that saw slower or even negative growth in life expectancy over the past few decades saw larger Republican shifts in two-party vote share between 2008 and 2016.” They also noted a correlation between slower wage growth at the county level and the biggest electoral shifts toward Trump.

Before policy elites awakened to these grim realities, the inhabitants of “flyover” America understood full well that they had been left behind. Neither private nor public capital was finding its way to non-metropolitan areas, and the already glaring disparities between town and country, between big-city economies and those of small towns and rural areas, ballooned during Obama’s presidency. A recent study from the Brookings Institution shows that since 2008, the number of jobs has increased by 9 percent in large metropolitan areas, 5 percent in medium-sized metropolitan areas, 3 percent in small metropolitan areas, and 0 percent in small towns, while declining by 2 percent in rural areas close to big cities and by 4 percent in rural areas not close to cities at all.

This is not to say that the white working class is the sole or even the primary victim of deindustrialization. As any drive around Chicago, Cleveland, or Detroit makes strikingly clear, the factories in those cities shut down decades ago, depriving hundreds of thousands of African-American workers of full-time, decently paid, often unionized employment, as William Julius Wilson documented in his 1996 study, When Work Disappears. Latinos had a foothold in those factories, too—in the 1970s and ’80s, the local unions in a number of auto factories in California were Latino-led. Those factories all closed, however, just as immigration from Latin America surged, stranding those immigrants in low-paid service-sector and non-union construction jobs. It is to say, however, that only the white segment of the abandoned working class has responded by moving right.

One of the factors behind that movement—a historic factor that the authors don’t consider—is deunionization. Exit polls of presidential elections going back to the late 1960s have generally shown that the margin by which union members vote for the Democrat exceeds that of non-members by roughly 9 percentage points. For white male union members, however, that margin swells to 20 percentage points when compared to their non-union counterparts. Viewed through this prism, the shift of Pennsylvania, Ohio, Michigan, and Wisconsin into the Republican column in 2016 becomes a bit less mysterious: these are all states where levels of unionization have shrunk from postwar heights of close to 40 percent to current depths near single digits. And while deunionization hasn’t driven African Americans to the right, it has almost certainly reduced their turnout—also a factor in Trump’s victory in once-industrial heartland states.

The efficacy of unions in turning out a more Democratic vote must come with caveats, however. During its rise to power in the 1930s and 1940s, the United Automobile Workers was able to produce huge Democratic majorities among its Michigan members in votes for federal and state offices. When it came to Detroit city elections, however, the UAW was seldom able to persuade its members—who in those years were predominantly white—to vote for its endorsed candidates. That’s chiefly because elected officials at the federal and state level concerned themselves with economic policy, while local politics was all about housing and policing—in other words, about policies that could keep black people in Detroit in their place.


In a sense, the 2016 presidential race resembled one of those Detroit city elections more than it did those that sent politicians to Washington. Sides, Tesler, and Vavreck document how both Trump and Hillary Clinton focused on the issue of Americans’ identity—posing a definitional choice between white or multiracial, bigoted or tolerant. Trump partially neutralized the Democrats’ advantage on economic issues by pledging not to touch Social Security or Medicare; Clinton didn’t exploit the Democrats’ economic advantage, choosing instead to ask voters to make a moral judgment on Trump and affirm the rise of a new, more diverse nation. “Stronger Together” really was her message, and it didn’t fly. It was Bernie Sanders’s message—which the authors basically ignore—that actually put Democrats more in sync with that new, more diverse nation, much as Trump’s message resonated more deeply with Republicans than had any party message in a very long time.

Sanders and Trump each propelled their respective parties (notwithstanding Sanders’s refusal to call himself a Democrat) to don their new identities more openly. For the Democrats, that meant moving to a more egalitarian economic as well as social perspective, and a war on plutocracy; for the Republicans, it meant less egalitarian policies across the board. Not only does the new Republicanism bump up against the nation’s changing demographics, but Trump’s own discontents once in office have accelerated the party’s estrangement from a broader range of constituencies—especially among women—than white nationalism by itself ever could. Republicans seem committed to narrowing their already shrinking share of the American political universe. Columnist Ronald Brownstein had CNN break down the white working-class vote in the 2018 midterms between evangelicals and non-evangelicals, and found that while three-quarters of evangelicals—both college-educated and non-college-educated—voted Republican, white working-class non-evangelicals defected in large numbers: 44 percent of the men and 57 percent of the women voted for the Democrat last November. Under Trump, Republicans have become not only more openly racist and nativist, but also more openly misogynistic. The most glaring identity crisis in the nation today isn’t America’s; it’s the Republicans’. There are only so many of their compatriots whom Republicans can denigrate, keep from the polls, or gerrymander away—at least if they have hopes of winning future elections.

So can the Democrats regain power simply by virtue of the Republicans’ determination to estrange everyone outside their shrinking base? That, alas, seemed to be a strategic premise of Hillary Clinton’s campaign. Democrats must affirm the rights of all the groups that comprise the American mosaic, but they also must stress the social democratic economic causes that Clinton largely neglected or rejected if they mean to expand their electoral reach and actually seek to diminish our towering inequality. The United States isn’t roiled only by the racial identity crisis that Sides and his co-authors document. It’s also poised—if we can believe the polls on social democratic reforms—to remake much of its economic order. What would identity politics along those lines look like? Something like, “We are the 99 percent.”

Harold Meyerson is executive editor of The American Prospect and a member of the Dissent editorial board.

The End of the World as We Know It?
Jennifer Ratner-Rosenhagen

Democracy and Truth: A Short History
By Sophia Rosenfeld
University of Pennsylvania Press, 2019, 224 pp.

We’re Doomed. Now What?: Essays on War and Climate Change
By Roy Scranton
Soho Press, 2018, 360 pp.

With precipitous declines in humanities course enrollments and punishing cuts to programming, the “crisis in the humanities” still rages. In recent years, however, a growing number of academic humanists have made the move—perhaps because of the crisis in their own ranks—to train their attention on even larger crises threatening America and the world today. Sophia Rosenfeld’s Democracy and Truth: A Short History and Roy Scranton’s We’re Doomed. Now What?: Essays on War and Climate Change exemplify the growing trend of scholars willing to lean out of the ivory tower to intervene in crucial public debates, but not so far as to tumble from it and lose the insights and explanatory schemes that make their interventions so effective and necessary.

In Democracy and Truth, Rosenfeld examines our moment of “post-truth,” “alternative facts,” and “truth isn’t truth” and reveals how contestations over truth are part and parcel of the history of democratic theory and practice. In her previous book, Common Sense: A Political History (2011), Rosenfeld showed how the notion of “common sense”—the inherent wisdom of the people—became instrumental to the formation of modern transatlantic democratic populism. She explored how, since the late seventeenth century, appeals to “common sense” were made by both the left and the right to exalt popular sovereignty as well as to defend demagoguery. In Democracy and Truth, she argues that, much like “common sense,” which was never really common nor particularly sensible, “truth” has long been a fighting word in modern democracies, deployed in public struggles over authority and credibility.

Democracy and Truth reveals that today’s struggles over what constitutes “the truth”—though disturbing and potentially dangerous—do not represent a radical rupture with the past. Trump’s pathological lying and distortions may be an aberration, but as Rosenfeld shows, conflicts over truth have been baked into modern democracies since the era of the eighteenth-century transatlantic revolutions, when a “moral and epistemic commitment to truth” rather than to a ruler came to “undergird the establishment of the new political order.” Indeed, what makes for democratic citizens rather than imperial (or totalitarian) subjects, she suggests, is the fact that the task of negotiating intellectual primacy and legitimacy falls to them. In America, “the exercise of democratic politics, including the specifics of its relationship to truth and knowledge, has remained an arena of struggle since the Founding.”

In illuminating chapters on “the problem of democratic truth,” intellectual expertise, populism in historical perspective, and “democracy in an age of lies,” Rosenfeld explains how the democratic idea of truth never quite lived up to its promise of influence by persuasion rather than force. This problem at the core of modern democracies seems to be hidden in plain sight from today’s political commentators: who, in a pluralistic democracy, has the authority to adjudicate competing truth claims? And by what means should some claims be classified as true and others as false? Where are the checks and balances in debates over the truth?

Rosenfeld maintains that democracies have always depended on a continual testing and reformulating of what constitutes truth. If the Enlightenment taught anything, it was that knowledge of the world was subject to change, and so understandings, too, must be open to negotiation, interrogation, and revision. For governments founded on the premise of self-rule, with all citizens as potential carriers of epistemic authority, “it also follows that no individual, sector, or institution can hold a monopoly at any point on determining what counts as truth in public life.” That didn’t mean that all enlightened revolutionaries warmly embraced the wisdom of the crowd, despite their paeans to “common sense.” Plenty of those who pressed for political equality had no truck with intellectual egalitarianism and thought that only an educated elite was capable of discerning true knowledge and safeguarding it from superstition.


A home destroyed during the Woolsey Fire in Southern California last November (David McNew/Getty Images)

Where things have always been tricky is in striking a workable balance between the information and methods of inquiry of educated experts (without sliding into elitism and authoritarianism) and the needs and experiential wisdom of the demos (without sliding into a reactionary, know-nothing populism). The recent resurgence of populism in the United States, and in democracies elsewhere, surely signals a swing of the pendulum toward the latter. Rosenfeld in no way minimizes the pernicious effects of our populist moment, but she shows that too great a swing toward technocracy is no better. “In the end, dyed-in-the-wool populists and technocrats mimic one another in rejecting mediating bodies, . . . procedural legitimacy, and the very idea that fierce competition among ideas is necessary for arriving at political truth.”

While Rosenfeld shows that there is a long backstory to our raging truth wars today, Roy Scranton shows that there may not be much of a future for the planet as we know it. Scranton first made his mark as a commentator on climate change (and war) with his blockbuster essay “Learning How to Die in the Anthropocene,” published in the New York Times in 2013. There he argued that grappling with the Anthropocene demands asking age-old philosophical questions, with one crucial difference. Now, questions such as “what is the meaning of life?” and “how should I live?” must be “universalized and framed in scales that boggle the imagination. . . . What does one life mean in the face of species death or the collapse of global civilization?” Scranton thus encouraged a philosophical reckoning with and against the “Anthropocene” as a concept and lived reality. “The rub is that now we have to learn how to die”—and to live—“not as individuals, but as a civilization.”

Scranton opens We’re Doomed. Now What? with a haunting epigraph from Ralph Waldo Emerson’s 1844 essay “Experience”: “Where do we find ourselves?” His collection of essays—addressing everything from our climate death-spiral to his experiences as an American soldier in Baghdad and the self-defeating tribalisms within America and along its ligaments of empire abroad—can be read as an effort to answer Emerson’s question today. His answer seems to echo Emerson’s: “In a series of which we do not know the extremes, and believe that it has none.” “The time we’ve been thrown into,” Scranton writes in the book’s opening, “is one of alarming and bewildering change—the breakup of the post-1945 global order, a multispecies mass extinction, and the beginning of the end of civilization as we know it. Not one of us is innocent, not one of us is safe.” Essays with titles such as “Arctic Ghosts,” “The Precipice,” and “Raising a Daughter in a Doomed World” explore what it means—or if it’s even possible—to live with dignity in a world we’ve abused so carelessly, so relentlessly, and with such disastrous consequences for our children and our children’s children. We’re Doomed is a jeremiad, but one with a sense of humility and an appropriate amount of histrionics given the situation’s direness:

Again and again and yet again we imagine ourselves at the precipice: we must change our ways, today, this very hour, or else we’ll really have to face the consequences. We see ourselves at the cliff’s edge, trembling with anxiety, our toes kicking stones into the abyss. We summon all our inner resources. We will ourselves to action. This is it, we say. It’s now or never. Then something catches our attention. Dinner. Twitter. Soccer. Trump. Before we know it, life pulls us back into its comforting ebb and flow.

It is rare to encounter an author who envies Friedrich Nietzsche, but Scranton does. After all, Nietzsche had it easy. He had to cope “only with the death of God, . . . while we must come to terms with the death of our whole world.” Scranton understands that this isn’t exactly how climate change works. It won’t wipe all humans off the face of the planet but just make living on it awful, especially for the world’s poorest communities. But he’s right to invoke Nietzsche when contemplating climate change’s catastrophic implications, for Nietzsche’s philosophy was, as he himself put it, the “monument to a crisis.”

Scranton’s essays raise hard questions for humanists dedicated to thinking and teaching others how to do so more effectively. “What is thinking good for today, among the millions of voices shouting to be heard, as we stumble and trip toward our doom?” he asks. Scranton leaves us little room to squirm our way back to tired conventions and bad-faith apologias. His view of the neoliberalization of the university is unsparing and wholly accurate. But it’s really the internet that dominates our intellectual exchanges and, more important, shapes our thoughts. The picture he paints of thinking in the age of a virtual public sphere is not pretty. For Scranton, the internet is a veritable sea of slime, muck, and mental sewage. Even gold-standard journalism and literary essays are “eminently disposable, fated to be consumed and retweeted and referred to for a few hours then forgotten, like everything else passing through the self-devouring gullet of the ouroborosian media Leviathan we live within.”

It’s bad enough that global temperatures are spiking and sea levels are rising; that upwards of 200 species are going extinct every day; and that disastrous floods, droughts, and storms are becoming our globe’s new normal. But what makes matters worse for Scranton is the prospect that any serious thinking he does about these calamities won’t amount to a hill of beans. So he lingers on the moral implications of his actions and inaction as a writer, scholar, and worrier about our “broken world.”

There is a tug-of-war in Scranton’s essays between a rational pessimism and a willful hope. On occasion, the pessimism gets the last word, as when he relents: “If we are honest with ourselves and take a broad enough historical view, we must humbly submit that thought has never really been all that good for that much.” “Thought,” he maintains, “has never been able to save us in the past.” Scranton’s depressing gospel might be too much to bear were it not for his singular prose style, which makes reading about the disastrous mess our generation and our parents’ generation have made of our world exceedingly worthwhile.

Whereas historical and epistemological questions animate Rosenfeld’s book, ethical ones are the lifeblood of Scranton’s. Ironically, though, his ethics come to the fore when he is framing them epistemologically. For Scranton, embracing a pluralist conception of truth need not lead to the rowdy epistemic chaos Rosenfeld explores but may instead be the best first step to healing social divisions and our planet. To create a new “global order of meaning” based on shared governance, responsibility, and benefits, “we need to give up defending and protecting our truth, our perspective, our Western values, and understand that truth is found not in one perspective but in its multiplication, not in one point of view but in the aggregate, not in opposition but in the whole.” There is no better expression of the promise of a pluralist and anti-absolutist conception of truth, and nothing further from Rudy Giuliani’s “truth isn’t truth,” than Scranton’s insight here. Not all relativist truth claims are created equal.

Both Rosenfeld and Scranton acknowledge that contemporary capitalism is part of the story, though neither subjects it to sustained analysis. Rosenfeld recognizes that “it’s time again to think about alternatives to the logic of ‘the market always knows best’” and that “the story of modern democracy remains also the story of modern capitalism, and any real solution to our current ills probably requires addressing them in tandem.” No doubt, big businesses seeking lighter regulation and tax burdens have found that populist anger can be turned into lucrative profits if they can simply show that governmental interventions are done by and for the benefit of “Washington insiders” and “liberal elites.” For Scranton, “fossil-fueled capitalism” and “consumer capitalism” are never far out of view, though he doesn’t delve deeply into their structure and function. What he does do, however, is emphasize that in the short run, the suffering wrought by climate change will not be evenly distributed between the wealthy and the poor. “Money means you can flee, so you don’t get stuck in the Superdome.” But in the long run, when the center no longer holds and things fall apart, no amount of consolidated wealth will be able to shore up the globe’s ruins. “Money won’t stop the seas from rising,” nor will it “save the Arctic and it won’t save Miami.”

Rosenfeld and Scranton both offer some modest solutions to the crises we now face. For Rosenfeld, knowing the twinned histories of modern democracy and truth shows us the value of recommitting to a notion of truth as the product of human contestation and collaboration, and as something that democracy cannot do without. The quest for truth has no final goal, no finish line. But it requires calling out untruths, errors, and fabrications, again and again, and, if necessary, yet again. Scranton’s “solutions” are a mixed bag. At one point, he suggests suicide: “The only moral response to global climate change is to commit suicide . . . If you really want to save the planet, you should die.” If that seems like too big a commitment, then he has some others to try first: redistribute the wealth of the 1 percent; put women in charge; distinguish between the fatality of our circumstances and nihilism; accept failure as a path to freedom; slow down, focus, do less. He also offers a bit of timely and timeless wisdom: “it’s at just this moment of crisis that our human drive to make meaning reappears as our only salvation. . . . Because if it’s true that we make our lives meaningful ourselves and not through revealed wisdom handed down by God or the Market or History, then it’s also true that we hold within ourselves the power to change our lives—wholly, utterly—by changing what our lives mean.” Despite his effort to disparage the usefulness of thought, he makes the very best argument for it right here.

So where do we find ourselves in this age of cascading crises, in the relentless grip of our long now? With every new scandal we ask again in vain: have we hit rock bottom yet? Emerson surveyed his own turbulent times, concluded that the extremes were unknown, and warned that there may be none. Another way to put this is: it may be rock bottoms all the way down. But Emerson also offered some hope that thinking, the good, old-fashioned humanistic kind (the kind Scranton does despite his doubts of its efficacy), can make the world anew: “Beware when the great God lets loose a thinker on this planet. Then all things are at risk. It is as when a conflagration has broken out in a great city, and no man knows what is safe, or where it will end.” If the humanities can’t produce thinkers who can get us out of this mess, they are still producing some of the best commentators on where it has come from and where it threatens to take us.

Jennifer Ratner-Rosenhagen is the Merle Curti and Vilas-Borghesi Distinguished Professor of History at the University of Wisconsin–Madison.


Thank you for your support

Thanks to all who gave in 2018. Your generosity makes Dissent’s work possible. Bold text indicates donations at or above the Sustainer level.

Lynn & Betty Adelman Albert Shanker Institute Robert Ambaras Anonymous in memory of Andy Lewis in honor of Luke Rodeheffer

Nancy Aries & Elliott Sclar Kaavya Asoka Joerg Auberg Bernard Avishai Joanne Barkan in memory of Philip Levine

James Barnett III Kenneth Bauzon Miriam Jean Bensman in memory of Joseph & Marilyn Bensman

Alan Berlow & Suzy Blaustein Paul Berman Joseph Blasi


Barbara Bloomfield Brian Bock Richard A. Bower Michael Bradie Gerard Bradley Todd Breitbart Paul M. Brinich Ann Bristow Mr. Broder & Ms. Wallace Zelda Bronstein Kelly Burdick in memory of Andy Lewis

RenĂŠe Vera Cafiero in honor of Henry and Hedwig Pachter

Michael & Margaret Carey Bill Carney Luther Carpenter John Carroll Leo Edward Casey Marilyn Chilcote Jack Clark Augustus Bonner Cochran III Bruce Cohen Mitchell Cohen Avern Cohn Marcellus T. Coltharp Peter Connolly in memory of Leo Ribuffo

Stanley Alan Corngold Cultures of Resistance Network Foundation Peggy Deamer Steve Deatherage Andrew and Dawn Delbanco Michael Delozier Roberta De Monticelli


Peter Dreier Thomas Durkin and Janis Roberts Mark Egerman Elias Foundation Stephen Ellenburg Jan Ellis James Enright Sarah Fan in memory of Andy Lewis

Mark Ferraz Richard Flacks Peter Ford Gary Gerstle & Elizabeth Lunbeck Virginia M. Gibbons Owen Goldfarb Debbie Goldman Thomas Golodik Linda Gordon Peter Gourevitch John M. Grant Tom Greenwell Vartan Gregorian Katherine Gross Robert Andrew Grossman Bernt Hagtvet John & Renata Hahn-Francini Henry Hauptman Dan Hayes in memory of Robert F. Kennedy & Martin Luther King Jr.

Richard E. Healey Jay Herson Paul Howe William Hunt Eleanor Hutner Charles I. Jarowski Louisa Johnston


Ruth & Dan Jordan Stephen Kalberg in memory of Lewis Coser

David Kandel Franz Kasler Jacob Kay Ralph Kaywin & Lisa Buchberg Michael Kazin Joseph Kennedy Martin Kilson Jennifer Klein Gerd Korman William Kornblum David Kusnet Chris La Tray Luis Lainer Judy & Lewis Leavitt Jonathan S. Lee Steven Lee Nelson Lichtenstein Susie Linfield Adlyn and Ted Loewenthal Laura Longworth in memory of Andy Lewis

Robert E. Lucore Sue Lyon John MacIntosh Joe Maes Kanan Makiya Matthew Mancini Peter Mandler David Marcus Guillermo Marmol Jim Martin Benjamin Martin Eric Maskin


Kasia McBride in memory of Andy Lewis

Joseph A. McCartin David McCurdy John McKinney Jim McNeill Phillip Meade Jack Mendelson Richard Brian Miller Seymour Miller & Edward Miller Nicolaus C. Mills Brian Thomas Mitchell in memory of Gene Vanderport

Michael B. Moore Charlie Morgan in memory of Andy Lewis

Brian Morton Neil Olsen One World Fund David Ost Gustav & Hanna Papanek Robert Parsh Harry Parsons Mike Pattberg James S. Phillips Maxine Phillips in memory of Andy Lewis & Simone Plastrik

David Plotke Puffin Foundation Jedediah Purdy in memory of Bill Howley

Eric Reeves Donald Reid Stephen Retherford Kalpana Rhodes Alan Ritter


Frank Roosevelt Daniel Rose Irwin Rosenthal Benjamin Ross Valerie Rourke Miller James Rule William Sanderson in memory of Andy Lewis

Jeffrey Scheuer David Schultz James Z. Schwartz in memory of Robert & Judith Schwartz

Madeleine Schwartz in memory of Andy Lewis

Shellie Sclan in memory of Marshall Berman

Jed Sekoff Burton Shapiro Stephan Shaw in honor of Sunita Viswanath

Nancy Sherman Arlene & Jerome Skolnick Matthew Specter & Marjan Mashhadi Ann Snitow Leon Sompolinsky Mark Stern Laurent Stern Monroe Strickberger Alice Stride in memory of Andy Lewis

Jack Stuart Thomas J. Sugrue Peter M. Sutko Leighton Sweet Natan Szapiro Paul Gustav Tamminen


Steve Tarzynski Morton Tenzer Dayna Tortorici in memory of Andy Lewis

Gerald Veiluva Elsie Visel Joseph Volpe Charles Wall David Walls Michael & Judy Walzer Sarah E. Walzer in honor of Michael & Judy Walzer

Audrey E. Waysse Gene Weinstein Paul Wexler Stephen J. Whitfield Jonathan M. Wiener Sean Wilentz Ed Winstead in memory of Andy Lewis

Elaine & James Wolfensohn Nina Wouk Richard Yeselson William & Marsha Zimmer John Zuraw


An Opposing Force
Nick Serpe

Major media outlets have been covering the curious (at least to them) return of socialism for a few years now. But ever since Alexandria Ocasio-Cortez became an overnight media sensation, the socialist beat has boomed. In February, the radical centrists at The Economist boarded the bandwagon with a cover story on “millennial socialism.” More sophisticated than the typical road-to-Venezuela brief, the piece treats the resurgence of socialist politics as a fashion of the “hip, young and socially conscious” but also gives serious attention to policy proposals like the Green New Deal and to ongoing disagreements within the left. In other words, they take us seriously, but not too seriously.

A leftward drift in transatlantic public opinion does not a revolution make. But while the “left today sees the third way as a dead end,” The Economist’s counter-offers are of 1990s vintage: economic growth instead of redistribution, cap-and-trade instead of a massive reorientation of the economy to fight climate change, labor market liberalization instead of union democracy, fiscal restraint instead of ambitious public spending. These mantras betray the intellectual torpor of an establishment that has won so often it can barely imagine ever losing.

The Economist claims that the left is “too pessimistic about the modern world” while also admitting, with somewhat baffling generosity, that socialists “want to expand and fulfil freedoms yet to be obtained” and democratize our societies. Their only response is that these ideas are unrealistic. The road from critique to power is long and difficult, but the effervescence of the left-wing political imagination indicates how seriously socialists take this challenge. We shouldn’t confuse attention from the mainstream press with victory, but articles like these remind us that the status quo’s staying power has less to do with inspiring alternatives than with inertia. We’re building an opposing force.

Nick Serpe is a senior editor at Dissent.


Editors
Michael Kazin • Timothy Shenk

Editors Emeriti
Mitchell Cohen • Irving Howe (1920–1993) • Michael Walzer

Book Review Editor
Mark Levinson

Senior Editors
Natasha Lewis • Nick Serpe

Associate Editor
Joshua Leifer

Editors at Large
Kaavya Asoka • Tim Barker • Colin Kinniburgh • Sarah Leonard • David Marcus • Madeleine Schwartz

Circulation Manager
Flynn Murray

Art Direction
Rumors

Intern
Andrew Schwartz

Editorial Board
Atossa Araxia Abrahamian • Kate Aronoff • Joanne Barkan • Paul Berman • Sheri Berman • David Bromwich • Luther P. Carpenter • Leo Casey • Mark Engler • Cynthia Fuchs Epstein • Gary Gerstle • Todd Gitlin • Sarah Jaffe • Patrick Iber • William Kornblum • Susie Linfield • Kate Losse • Kevin Mattson • Deborah Meier • Harold Meyerson • Nicolaus Mills • Jo-Ann Mort • Julia Ott • Maxine Phillips • Jedediah Purdy • Ruth Rosen • James B. Rule • Arlene Skolnick • Jim Sleeper • Ann Snitow • Christine Stansell • Jeffrey Wasserstrom • Sean Wilentz

Contributing Editors
Bernard Avishai • David Bensman • Michelle Chen • Marcia Chatelain • Jean L. Cohen • Tressie McMillan Cottom • Jeff Faux • Agnès Heller • Jeffrey C. Isaac • Martin Kilson • Mike Konczal • Jeremy Larner • Laura Marsh • Brian Morton • George Packer • Anson Rabinbach • Alan Ryan • Rebecca Tuhus-Dubrow • Cornel West • Richard Yeselson

Publisher
The Foundation for the Study of Independent Social Ideas in cooperation with the University of Pennsylvania Press

Typefaces
Neutral (Atelier Carvalho Bernau) • Kozmos (David Rudnick)



