Off-Label
How tech platforms decide what counts as journalism
by Emily Bell
Illustration by Richard A. Chance
In the aftermath of the deadly Capitol insurrection, technology platforms were forced to acknowledge their role in poisoning the media atmosphere, as the principal distributors of digital news and the sources of so much misinformation.
Facebook, Twitter, and Google acted as they never had before: Twitter flagged Donald Trump’s incendiary lies, removed some posts, then suspended his account; Facebook banned him for inciting violence.
Overnight, Web hosting services dropped Parler, a social network popular among right-wing extremists. The platforms that had delivered and sustained a toxic presidency were now abandoning their most mendacious hitmaker.
The great deplatforming of January 2021 had an immediate effect: in addition to Trump, thousands of conspiracy-theory accounts disappeared from the internet. It felt like a turning point that technology companies had long resisted, until the pandemic gave them a first push: last March, Mark Zuckerberg, Facebook’s chief executive, announced a “coronavirus information center” that would place “authoritative information” at the top of news feeds. (“You don’t allow people to yell ‘Fire!’ in a crowded room, and I think that’s similar to people spreading misinformation in the time of an outbreak like this,” he told journalists on a conference call.) From there, platforms began rolling out new features and responding directly to misinformation flare-ups. In May, Twitter put a warning label on a Trump post for the first time, alerting users that it contained “potentially misleading information about voting processes.” Later that month, after police killed George Floyd, Trump made racist comments that Twitter hid behind a barrier; a warning label stated that the post had violated rules against glorifying violence. All of that came after a forty-year period of media deregulation that, as I recently told the House of Representatives, created an environment “optimized for growth and innovation rather than for civic cohesion and inclusion.” The result, as we’ve seen,
has been the unchecked spread of disinformation and extremism. But putting a stop to militarized fascist movements—and preventing another attack on a government building—will ultimately require more than content removal. Technology companies need to fundamentally recalibrate how they categorize, promote, and circulate everything under their banner, particularly news. They have to acknowledge their editorial responsibility.
The extraordinary power of tech platforms to decide what material is worth seeing—under the loosest possible definition of who counts as a “journalist”—has always been a source of tension with news publishers. Now these companies are being held accountable for developing an information ecosystem based in fact. It’s unclear how much they are prepared to do, or whether they will ever really invest in pro-truth mechanisms on a global scale. But it is clear that, after the Capitol riot, there’s no going back to the way things used to be.
Between 2016 and 2020, Facebook, Twitter, and Google made dozens of announcements promising to increase the exposure of high-quality news and get rid of harmful misinformation. They claimed to be investing in content moderation and fact-checking; they assured us that they were creating helpful products like the Facebook News Tab. Yet the result of all these changes has been hard to examine, since the data is both scarce and incomplete. Gordon Crovitz—a former publisher of the Wall Street Journal and a cofounder of NewsGuard, which applies ratings to news sources based on their credibility—has been frustrated by the lack of transparency: “In Google, YouTube, Facebook, and Twitter we have institutions that we know all give quality ratings to news sources in different ways,” he told me. “But if you are a news organization and you want to know how you are rated, you can ask them how these systems are constructed, and they won’t tell you.” Consider the mystery behind blue-check certification on Twitter, or the absurdly wide scope of the “Media/News” category on Facebook. “The issue comes down to a fundamental failure to understand the core concepts of journalism,” Crovitz said.
Still, researchers have managed to put together a general picture of how technology companies handle various news sources. According to Jennifer Grygiel, an assistant professor of communications at Syracuse University, “we know that there is a taxonomy within these companies, because we have seen them dial up and dial down the exposure of quality news outlets.” Internally, platforms rank journalists and outlets and make certain designations, which are then used to develop algorithms for personalized news recommendations and news products.
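To make that “dialing up and dialing down” concrete, here is a minimal sketch, in Python, of how an internal source designation could be folded into a feed-ranking algorithm. The tier names, weights, and scoring formula are hypothetical illustrations of the kind of taxonomy Grygiel describes, not any platform’s actual system.

```python
# A minimal sketch of how an internal source taxonomy might "dial up" or
# "dial down" exposure in a recommendation feed. The tier names, weights,
# and scoring formula are illustrative assumptions, not a real platform's.

# Hypothetical internal designations for news sources.
SOURCE_TIERS = {
    "established-newsroom": 1.3,   # boosted
    "unrated": 1.0,                # neutral
    "state-controlled": 0.7,       # demoted
    "repeat-misinformer": 0.4,     # heavily demoted
}

def rank_feed(posts):
    """Order posts by engagement score scaled by the source's tier weight."""
    def score(post):
        weight = SOURCE_TIERS.get(post["source_tier"], 1.0)
        return post["engagement"] * weight
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "engagement": 900, "source_tier": "repeat-misinformer"},
    {"id": 2, "engagement": 500, "source_tier": "established-newsroom"},
]
print([p["id"] for p in rank_feed(posts)])  # [2, 1]: the tier outweighs raw engagement
```

In a scheme like this, the designation never appears to users; it simply shifts what they see, which is why outside researchers can only infer it from exposure patterns.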
Very occasionally, these designations are used to apply labels. Grygiel was instrumental in identifying the problem and pushing platforms to label state-controlled media outlets such as Russia Today and China’s People’s Daily. In the summer of 2020, Facebook announced that it would flag state media on its platform and on Instagram and would block state media from targeting US residents with advertising. (Today, the RT page on Facebook is pinned with a label advising that the publisher “may be partially or wholly under the editorial control of a state.”) Soon, Twitter announced that it too would label state-controlled media. Yet the practice of doing so has been inconsistent: even if a page is flagged on Facebook, individual posts—RT videos, for example—continue to float around without a label. And Facebook has refused to identify Voice of America as state media—which posed a big problem when, last year, Trump decided to replace its staff with loyal propagandists.
Early attempts at labeling have also precipitated questions about what comes next: How far are social media platforms prepared to go in categorizing other pages that are just as manipulative but less glaring? Grygiel doesn’t like the notion of tech giants certifying journalists, but does feel a need to draw lines and to focus on misinformation-spewing websites that have ties to political funders or partisan think tanks. “We don’t want credentialing for news,” Grygiel told me, “but we can apply tests for what is definitely not news.”
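Grygiel’s negative-test idea could, in principle, be expressed as a handful of disqualifying checks rather than a credential. The sketch below is one hypothetical way to frame it; every criterion is an assumption for illustration, not a published platform policy.

```python
# A rough sketch of Grygiel's idea: instead of credentialing news, apply
# negative tests that flag what is "definitely not news." Each criterion
# below is a hypothetical illustration, not any platform's actual rule.

NOT_NEWS_TESTS = [
    ("funded by a political organization", lambda s: bool(s.get("political_funder"))),
    ("no masthead or corrections policy", lambda s: not s.get("corrections_policy")),
    ("repeatedly failed fact checks", lambda s: s.get("failed_fact_checks", 0) >= 3),
]

def definitely_not_news(source):
    """Return the list of failed tests; an empty list means no disqualifier fired."""
    return [reason for reason, test in NOT_NEWS_TESTS if test(source)]

site = {"name": "example-lobbyist-site", "political_funder": "a lobbying group",
        "corrections_policy": False, "failed_fact_checks": 4}
print(definitely_not_news(site))  # all three disqualifiers fire for this site
```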
Take the case of Texas Scorecard, which identifies on Facebook as a “Media/News Company.” On election night this past November, while the news cycle was dominated by the slow process of vote counting, false stories were circulating at an altogether faster pace. Texas Scorecard published one of the most viral—an easily debunked article about the “suspicious” movement of large cases into and out of a Detroit voting center. (“The ‘ballot thief’ was my photographer,” Ross Jones, an investigative reporter for WXYZ Detroit, tweeted.) Its inaccuracy was the product not of poor reporting but of political interest; Texas Scorecard is a project of Empower Texans, a right-wing lobbyist group, and the categorization as “Media/News” was self-applied—on Facebook, almost anyone is permitted to call themselves a publisher. That has allowed Texas Scorecard to effectively disguise itself as a legitimate local news source to its nearly two hundred forty-five thousand followers—almost a hundred thousand more than the highly reputable Texas Tribune.
Over on Google, by contrast, Texas Scorecard did not show up in the “News” tab. Unlike Facebook’s honor system, Google’s search engine deploys an algorithm to decide who falls into the “news source” category. This is an automated process whereby Google indexes news sources according to a number of criteria, including how frequently sources are linked to elsewhere on the internet; to assess how the algorithm is doing, a panel of human beings—“quality raters”—regularly checks in on Google’s search results. But that doesn’t mean Google has solved the disinformation problem: the “news source” label doesn’t consistently reflect veracity; even the Epoch Times, the conspiracy-driven pro-Trump Falun Gong–linked newspaper, meets the standard. And Google users are increasingly engaged with the “Discover” feature, introduced in 2018, which recommends links on an individual’s home screen and is so highly personalized that it’s hard to track as a reliable recommender of legitimate journalism.
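The link-frequency criterion described above can be sketched in miniature: count inbound links per source, index those above a cutoff, and route a sample to human raters. The cutoff, the sampling, and the example domains are assumptions for illustration; Google’s actual pipeline weighs many more signals, and, as noted, link popularity is no guarantee of veracity.

```python
# A toy version of the link-frequency criterion: score each source by how
# often other sites link to it, admit sources above a cutoff into the "news"
# index, and sample some for human review. All parameters and data here are
# illustrative assumptions, not Google's real system.
import random
from collections import Counter

def build_news_index(inbound_links, cutoff=2, review_sample=1):
    """inbound_links: list of (linking_site, linked_source) pairs."""
    counts = Counter(source for _, source in inbound_links)
    indexed = {source for source, n in counts.items() if n >= cutoff}
    # Hand a random sample to hypothetical human "quality raters."
    for_review = random.sample(sorted(indexed), min(review_sample, len(indexed)))
    return indexed, for_review

links = [("blog-a", "texastribune.org"), ("blog-b", "texastribune.org"),
         ("blog-c", "texastribune.org"), ("blog-a", "texasscorecard.com")]
indexed, review = build_news_index(links)
print(indexed)   # {'texastribune.org'}: only sources above the link cutoff
print(review)    # sources queued for human spot checks
```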
Politically funded local “news” sites like Texas Scorecard became a signature of the 2020 campaign cycle and represent a new model for using the trappings of journalism to wield dark-money influence. At the Tow Center for Digital Journalism, where I work, we conducted extensive research into this phenomenon, examining how platforms have struggled with these false purveyors of “journalism” in their labeling and flagging processes. Just last year, Facebook announced that it would prevent sites with “direct, meaningful ties” to political organizations from claiming to be news and using its platform for promotion. Yet Texas Scorecard, despite its connection to Empower Texans and its blatant spread of misinformation, remains “Media/News.”
In deciding where and how to apply labels, tech companies are, in an important sense, defining what journalism is. As Jillian York—the author of a new book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism—pointed out to me, this is not a novel concern: “It feels as though we had many intense discussions around the issue of ‘Who is a journalist?’ in around 2010, when we were considering how to think about organizations like WikiLeaks,” she said. In the years that followed, the Islamic State was on the rise, she recalled, and social media platforms were starting to experiment with more direct intervention in content moderation: misinformation whack-a-mole.
Since then, tech companies’ stubborn reluctance to get involved in editorial matters has provided us with a working definition of journalism—a confused and undermining one that offers a weak gesture toward “balance.” Facebook has practiced this kind of technological false equivalence as recently as 2018, when Mother Jones learned that it was subject to an algorithm change that weighted its site negatively and the Daily Wire, a right-wing site, positively. The difference between the two outlets comes down not merely to political orientation, but to quality: Mother Jones is a rigorously reported and fact-checked magazine with a track record of award-winning investigative journalism; the Daily Wire is dominated by the opinions of Ben Shapiro, a right-wing commentator with a track record of advancing untrue stories.
“The problem with all taxonomies is that even the ones that are useful are often wrong,” Ethan Zuckerman, a media scholar at the University of Massachusetts at Amherst, told me. But he hasn’t given up on labeling altogether. “We perhaps need new language for some of these digitally native, wildly popular disinformation sites,” he said. Zuckerman believes that tech platforms should make use of the work done by organizations like NewsGuard and the Trust Project, which develop standards for assessing the quality of news sources.
During his tenure as director of the Center for Civic Media at MIT, Zuckerman assisted in building Mediacloud, an open-source tool for examining the media ecosystem, which wrestled with how to categorize news-ish outlets such as Gateway Pundit and One America News Network. “We tried digital-native versus analog-native, but that was not very useful, and then we tried left, center-left, center, center-right, and right, which was more helpful,” he said. (Researchers used Mediacloud to demonstrate that right-wing sites were creating a “propaganda feedback loop” while presenting themselves as news.) “I don’t think it is the case that we need journalism to be licensed—and certainly not credentialed by platforms,” York told me. “But if individuals or organizations are going to identify themselves as journalists, then there needs to be an accountability process.”
Of course, the people who will be making these designations are tech executives, who tend to espouse both a profound faith in the idea of free speech and an extreme skepticism of journalists. How they settle on their approach to labeling matters; the proven harm of failing to distinguish between truth and fiction, or to account for the motivations and funders of those who deliberately aim to mislead, requires that platforms be more open with news producers. But much depends, too, on whether the platforms actually want to change. Unless they utterly transform their revenue system, the odds don’t look good. cjr