
Laurence Claus

Professor of Law

II. The Line Between Threats and Free Speech in Counterman v. Colorado

At first glance, Counterman v. Colorado looks like a stalking prosecution. Counterman had never met C.W., a singer and musician in Counterman’s hometown. Nevertheless, Counterman sent her hundreds of Facebook messages from 2014 to 2016. C.W. repeatedly blocked Counterman, but each time he evaded her efforts by creating a new Facebook account. Some of the messages suggested a nonexistent familiarity (“Good morning sweetheart”; “I am going to the store would you like anything?”). Others suggested that Counterman was watching C.W.’s public movements. And some “expressed anger” or “envisaged harm befalling her.”

Colorado charged Counterman under a statute titled “Stalking.” Such statutes are common. California was the first state to adopt one, after the killing of actress Rebecca Schaeffer by an obsessed fan in 1989. Colorado’s statute likewise criminalized a range of conduct. But since prosecutors had no evidence against Counterman other than his messages, the courts focused on the prohibition on “[r]epeatedly… mak[ing] any form of communication with another person” in “a manner that would cause a reasonable person to suffer serious emotional distress.” The Colorado courts held that this “objective” focus on “reasonable[ness]” was permissible under the First Amendment’s exception for true threats.

Writing for a five-justice majority, Justice Kagan disagreed. Language that would reasonably be understood as a threat is indeed unprotected speech, the majority concluded. However, the majority was concerned about the “chilling effect” on protected speech of using a solely objective test. Some protected speech might be self-censored as people became aware that others were being prosecuted for accidentally causing alarm. Accordingly, the Court required a showing of “recklessness” in true-threat cases to provide a “buffer zone” for protected speech. The Court quoted language from an earlier opinion as setting forth “the most common formulation” of “recklessness”: when a person “consciously disregard[s] a substantial [and unjustifiable] risk that the conduct will cause harm to another.”

Justice Sotomayor, joined by Justice Gorsuch, concurred in the judgment because she viewed the case as one about stalking. Criminalizing Counterman’s repeated conduct under a recklessness standard does not pose much risk to protected speech. Applying the same standard to isolated statements, on the other hand, could chill protected speech. Recklessness determinations invite juries to consider whether the risks of causing fear are justified by the “social utility” of the speaker’s activities—a “troubling standard for juries in a polarized nation to apply in cases involving heated political speech.” Justice Sotomayor’s approach would have eliminated the need to opine on the standards applicable to a one-off comment. If forced to do so, however, Justice Sotomayor would have required that a defendant “intend” the challenged speech to cause fear. She contended that this standard was already implicit in the Court’s 2003 opinion in Virginia v. Black. In that case, she argued, the Court had required a showing that the defendant had “intended to intimidate” as a predicate to criminal punishment for burning a cross at a Ku Klux Klan rally.

Justice Barrett, joined by Justice Thomas, dissented. Justice Barrett read the historical record as supporting an objective standard of liability at the time of the founding.

Apart from its clarification of the true-threats exception, Counterman will bring comfort to those worried that the Court might pull back on the protections journalists enjoy from libel suits when writing about public figures on matters of public concern, a standard first articulated in New York Times Co. v. Sullivan. The Counterman majority borrowed Sullivan’s recklessness standard in crafting its test for true threats. In a separate dissent in Counterman, Justice Thomas, a longtime critic of Sullivan, bemoaned “the majority’s surprising and misplaced reliance” on the case. That reliance suggests that he and Justice Gorsuch have not yet assembled the majority that would be needed to inter Sullivan.

If former President Trump is prosecuted for the violence at the U.S. Capitol on January 6, 2021, the Court’s treatment of the incitement doctrine will also be of interest. Every member of the Court endorsed the idea that only speech “intended” to incite crime escapes First Amendment protection. The majority resisted the temptation to water down that standard by applying recklessness as the single mental state across “chilling effect” settings, even though one court of appeals and an amicus brief had suggested the possibility.

The majority’s use of different standards in incitement and threat cases will require drawing lines between the categories. Justice Sotomayor suggested that the task may be difficult: “Speech inciting imminent and dangerous unlawful activity will … be threatening to those who would be harmed by that illegality.” Whether incitement cases will so readily be repackaged as threat cases—and prosecutable on the lower threat standard—remains to be seen. It seems plausible, however, to distinguish between a threat to engage in violence personally and an effort to induce others to violence, even if in some cases a speaker does both.


Jessica Heldman

Fellmeth-Peterson Associate Professor in Child Rights

III. Haaland v. Brackeen: Preserving Indian Families and Tribal Sovereignty

Congress passed the Indian Child Welfare Act (ICWA) in 1978 to “protect the best interests of Indian children and to promote the stability and security of Indian tribes and families.” (Consistent with the terminology in ICWA, I will refer to Native American and Alaska Native tribes and individuals as “Indian.”) Since its enactment, ICWA has faced a multitude of legal challenges. In Haaland v. Brackeen, a 7–2 opinion authored by Justice Barrett, the Supreme Court rejected the most recent and significant effort to invalidate the law, affirming ICWA’s constitutionality.

The United States has a tragic history of unnecessarily and forcibly removing Indian children from their parents and communities and placing them with non-Indian families or in abusive boarding schools. The goal of these practices was no secret; both public and private actors sought the eradication of tribal identity and the complete assimilation of Indians into the dominant culture. By the time Congress took notice, as many as one-third of Indian children had been removed from their families, with 85% placed in non-Indian homes, decimating Indian communities. To address this existential threat to tribes, Congress enacted ICWA.

ICWA establishes minimum federal standards for the removal and placement of Indian children. It requires that tribes be notified of any involuntary removal proceeding and provided the opportunity to intervene. Before removing a child, ICWA requires a showing that “active efforts” were made to provide services to keep the Indian family intact and that continuance in the home is likely to cause serious harm to the child. When removal is warranted, ICWA mandates a preference for foster care or adoptive placements with tribal affiliation—with priority given to a member of the child’s extended family. Importantly, the law does not bar non-Indian families from adopting or fostering Indian children. Such placement can occur if good cause is shown that the child cannot or should not be placed or adopted pursuant to the mandated preferences.

Child advocates praise ICWA for representing the “gold standard” in child welfare policy: emphasizing the preservation of family connections and reducing the potential of cultural bias in removal decisions. Critics of ICWA argue that the law discriminates against non-Indian individuals to the detriment of the Indian children they seek to adopt. Supporting this narrative and the effort to invalidate ICWA are special interests that would benefit from the weakening of tribal sovereignty.

In Haaland v. Brackeen, petitioners—a birth mother, non-Indian potential adoptive parents, and the State of Texas—presented several grounds for invalidating ICWA. They first claimed that Congress exceeded its authority in enacting ICWA. In rejecting this claim, the Court relied on extensive precedent recognizing Congress’s “plenary and exclusive” power to legislate regarding Indian tribes, which is broad enough to encompass issues involving family law. The majority explained that this power derives from several sources, including the Indian Commerce Clause, Article I, Section 8 of the Constitution, which gives Congress the authority to “regulate Commerce … with the Indian tribes.”

Petitioners next argued that ICWA’s requirement that child welfare agencies show “active efforts” commands states to implement federal law, violating the anti-commandeering principle of the Tenth Amendment. The Court disagreed, pointing out that ICWA’s “active efforts” requirement applies to any party seeking removal, including private individuals. As Justice Barrett explained, “Legislation that applies ‘evenhandedly’ to state and private actors does not typically implicate the Tenth Amendment.”

Following these rulings on the merits, the Court rejected the remaining equal protection and non-delegation claims on the basis that petitioners lacked standing. The claim that ICWA’s placement preferences unconstitutionally discriminate based on race was particularly concerning to tribes. Congress’s special treatment of Indians has long been defined as “political rather than racial in nature.” For now, this distinction remains. However, in his concurrence, Justice Kavanaugh clearly articulated his interest in reaching the equal protection issue in the future.

Justice Gorsuch also concurred, writing separately to add historical context, an important contribution given the stakes of this case. A narrow reading of the Indian Commerce Clause, as argued for by petitioners, would undermine Congress’s ability to legislate regarding Indian affairs, leaving states with that authority. Justice Gorsuch emphasized that this is wholly incompatible with long-standing precedent and the Constitution’s promise of tribal sovereignty.

In his dissent, Justice Thomas claimed, in contrast with the majority, that history supports a narrower view of Congress’s authority that would not encompass family law. In a separate dissent, Justice Alito argued that ICWA’s placement preferences displace the states’ “best interest” standard guiding child placement decisions. However, as explained in the amicus brief submitted by more than 30 child advocacy organizations, “The guiding principles that animate ICWA—promoting family integrity, placement with extended family, and maintaining community and culture—are the very same factors that state statutes already direct courts to consider when determining a child’s best interests.”

After decades of legal challenges, the Court unequivocally, and somewhat surprisingly, resolved highly contentious issues in favor of sustaining ICWA. This decision did not break new ground, and it may not be the final battle over ICWA, but it adhered to precedent and maintained an important status quo for Indian tribes and children. ICWA will continue to, as articulated by Justice Gorsuch, “secure the right of Indian parents to raise their families as they please; the right of Indian children to grow in their culture; and the right of Indian communities to resist fading into the twilight of history.”

Orly Lobel

Warren Distinguished Professor of Law; Director, Center for Employment and Labor Policy

IV. Platform Liability for Algorithmic Moderation in the Google and Twitter Decisions

Are social media platforms liable for offline harm caused by their users? In Twitter, Inc. v. Taamneh, the families of victims of a 2017 ISIS terrorist attack in Turkey filed suit against Twitter, alleging that the defendants knowingly allowed ISIS and its supporters to use their platforms. In Reynaldo Gonzalez v. Google LLC, the family of Nohemi Gonzalez, an American killed in a 2015 terrorist attack by ISIS in Paris, brought a similar suit against Google, the owner of YouTube. At the heart of both lawsuits was the claim that recommendation algorithms are used as tools for recruiting, fundraising, and spreading propaganda; that the companies profited from advertisements placed on ISIS’s posts; and that the defendants knew ISIS was uploading this content but did not take enough steps to ensure the content was removed.

And at the heart of this question are two federal statutes, the Justice Against Sponsors of Terrorism Act (JASTA) and Section 230 of the Communications Decency Act (CDA). JASTA permits United States nationals who have been injured by international terrorism to file a civil suit for damages. In 2016, Congress enacted JASTA to impose secondary liability on anyone who “aids and abets” by knowingly assisting or conspiring with a person or organization that committed an act of terrorism, as laid out in Section 2333 of the Antiterrorism and Effective Death Penalty Act of 1996 (AEDPA). In 1996, Congress enacted Section 230 to shield online platforms from liability for content posted by users. The scope and interpretation of Section 230, and the relationship between it and statutes like JASTA, continue to be the subject of great debate as the online world expands and shapes every aspect of our lives.

The opinion in Twitter v. Taamneh focused on Section 2333 of AEDPA. Relying on Halberstam v. Welch, the Court determined that Twitter did not meet the requirements for aiding and abetting. The Court rested on two reasons. First, there was no evidence that the social media platform was used to plan the attack. Second, Twitter was not considered to have knowingly and intentionally allowed the content, because it was the algorithms—rather than humans—that failed to restrict the content posted by ISIS. To determine whether Twitter “knowingly and substantially assisted the principal violation” under Section 2333, the Court considered six factors: (1) the nature of the act assisted; (2) the amount of assistance provided; (3) whether the defendant was present at the time of the principal tort; (4) the defendant’s relation to the tortious actor; (5) the defendant’s state of mind; and (6) the duration of the assistance given. Reversing the decision of the Ninth Circuit, the Court determined that the nexus between Twitter and the terror attack was far removed, and that the respondents had therefore failed to state a claim.

In Gonzalez v. Google, respondents similarly claimed that YouTube had become an essential part of ISIS’s terrorism program, used to recruit members, plan attacks, and issue threats. The theory of liability was that Google’s algorithm recommended personalized content to users, and specifically that it reviewed and approved ads posted by ISIS users and shared proceeds with ISIS through YouTube’s revenue-sharing system. The respondents’ core argument was that presentation of user-generated content, which receives immunity under Section 230, is different from recommendation of content, for which a platform may be liable. In a per curiam decision, the Court vacated the judgment and remanded the case to the Ninth Circuit, stating that the liability claims in the Gonzalez case were “materially identical” to those from the Twitter case: “Since we hold that the complaint in that case fails to state a claim for aiding and abetting under § 2333(d)(2), it appears to follow that the complaint here likewise fails to state such a claim.” The Court also noted that the Ninth Circuit had held the plaintiffs “plausibly alleged neither that ‘Google reached an agreement with ISIS,’ as required for conspiracy liability, nor that Google’s acts were ‘intended to intimidate or coerce a civilian population, or to influence or affect a government,’ as required for a direct-liability claim under § 2333(a).”

Section 230 has long been controversial. It provides what is often described as “Good Samaritan” protection to internet services and platforms that moderate user-generated content. The section distinguishes online providers from the publisher or speaker of information posted by users. It has routinely been interpreted as protecting platforms from liability for both content that is removed and content that is not removed from the platform.

At the same time, there has been a growing debate about whether today, nearly three decades after it was enacted, Section 230 is due for reform, given the robust ways in which platform engagement affects every aspect of our global communications. In Gonzalez alone, 78 organizations submitted amicus briefs representing a broad spectrum of interests on how to interpret the provision. There are also numerous legislative reform proposals to hold companies accountable for algorithmically recommended content. One such proposal is the SAFE TECH Act, which would provide that Section 230 does not apply to ads or other paid content by online service providers. Other proposals would distinguish between merely hosting user-generated posts and moderating or curating content, the latter being subject to liability in cases of online discrimination, harassment, misinformation, civil rights violations, and other harms. A new bill dubbed the No Section 230 Immunity for AI Act takes the path of differentiating between humans and machines, so that platforms would be liable for content generated by artificial intelligence based on large language models such as ChatGPT, Bard, and Copilot. Another path would be to retain Section 230’s immunity for the vast majority of harms but impose liability in the most dangerous contexts, such as terrorism.

Any reform to Section 230 must be done with caution so that legal policy incentivizes private platforms to engage in ethical content moderation . Moreover, any imposition of platform liability must consider the effects of such regulation on competition, and any changes to the law should be carefully evaluated to ensure they strike a balance between protecting free speech, fostering innovation, and addressing legitimate concerns related to harmful content and platform accountability .

