Richard Raysman and Peter Brown
The “Good Samaritan” provision of the Communications Decency Act of 1996 (CDA), codified at 47 U.S.C. §230(c)(1), shields social media platforms from liability for certain types of third-party content, including the user-generated content that is the hallmark of most social media websites (and that lately includes a rapid proliferation of so-called “fake news”). (“Good Samaritan” is the statutory language used by Congress.) More specifically, a “provider or user” of an “interactive computer service” cannot be treated as the “publisher or speaker” of any information provided by “another information content provider,” e.g., a user’s Tweets or video content uploaded to YouTube. This immunity means that, in most circumstances, social media providers can edit, screen or delete third-party content at their discretion and without fear of liability.
Congress enacted the “Good Samaritan” provision to promote the robust nature of online communication and to minimize government intrusion into development and use of the Internet. The expansive immunity has extended to protection of “interactive computer service” providers from claims of, inter alia, defamation (considered the prototypical cause of action in this area of law), discriminatory housing advertisements, negligence, violation of anti-sex-trafficking laws and public nuisance.
More recently, social media sites have been invoking §230 immunity to defend against claims alleging that the sites provide “material support” to terrorists in violation of the Anti-Terrorism Act (18 U.S.C. §2331 et seq.). The plaintiffs in these ATA suits often allege that social media platforms are “instrumental” to the rise and lethality of terrorist groups insofar as the platforms provide “material support” for terrorists because the platforms have allegedly become havens for hate speech, propaganda, fundraising and organizational coordination by terrorists, including the Islamic State of Iraq and Syria (ISIS). According to one Department of Justice official, many cases of domestic terrorism can now be traced to individuals who have viewed content of terrorist organizations on social media platforms.
The use of social media to weaponize and “crowdsource” terrorism has brought the platforms under intense scrutiny and criticism, particularly from European governments. For instance, after the London attack on the night of June 3rd, U.K. Prime Minister Theresa May alleged that the Internet has become a “safe space” for terrorist ideology.
Social media companies have responded. For instance, Twitter suspended nearly 400,000 accounts in the second half of 2016 for “violations related to promotion of terrorism.” Writ large, social media sites have spent at a minimum hundreds of millions of dollars in their efforts to stop terrorist communications on their platforms, including through use of spam bots, video “fingerprinting” technologies, reporting options for fellow users and human reviewers.
Although these efforts are intensifying, pressure on social media sites to thwart the facilitation of terrorism is likely to increase in upcoming months, given intrinsic philosophical, public relations, logistical and technological challenges. In numerous instances, social media sites have succeeded in defending against legal claims of providing “material support” to terrorists, principally by invoking the CDA immunity provision. This article focuses on those successes, including a recent case in federal court in New York in which the CDA immunity defense precluded liability under the ATA.
Facebook Successfully Invokes CDA Immunity to Defend Against ‘Material Support’ for Terrorism Claims. In Cohen v. Facebook, — F. Supp. 3d —, 2017 WL 2192621 (E.D.N.Y. 2017), a consolidated action filed in the Eastern District of New York, one set of plaintiffs (the citizen plaintiffs) alleged that Facebook assists terrorists affiliated with the Palestinian group Hamas in potentially perpetrating future terrorist acts against plaintiffs. The second set of plaintiffs includes the estates and family members of victims of past terrorist attacks in Israel. The claims of the citizen plaintiffs were dismissed without prejudice for lack of subject matter jurisdiction, as these plaintiffs’ allegations of imminent violent attacks did not establish the “non-speculative, future harm” required to satisfy the “irreducible constitutional minimum” of standing under recent Supreme Court precedents.
The estate plaintiffs’ primary argument was that Facebook provided “material support” to terrorists by providing account access, “coupled with Facebook’s refusal to use available resources … to identify and shut down Hamas [ ] accounts.” Facebook moved to dismiss under the “Good Samaritan” provision.
The court agreed with Facebook and rejected plaintiffs’ arguments against the applicability of CDA immunity, even after affording plaintiffs “the most generous reading of their allegations.” The court held that plaintiffs’ distinction between “policing accounts” and “policing content” by Facebook was a distinction without a difference in the context of §230(c)(1) immunity because “Facebook’s choices as to who may use its platform are inherently bound up in its decisions as to what may be said on its platform,” which is a “necessarily antecedent editorial decision” emblematic of the role of a “publisher or speaker.”
The court also rejected plaintiffs’ contention that Facebook merely provided content-neutral accounts to its users, because the causation element of their claims rested on the content Hamas posted on Facebook, not on Hamas’s mere use of Facebook accounts. Since the gravamen of plaintiffs’ complaint was not harm inflicted by Hamas’s ability to obtain Facebook accounts, but rather its use of Facebook’s platform for, among other things, “recruiting, planning [and] inciting terrorist attacks,” plaintiffs’ theory inherently sought to hold Facebook liable as the publisher or speaker of content provided by Hamas. Accordingly, Facebook’s motion to dismiss was granted.
Twitter Is Also Shielded From ATA Liability by CDA Immunity. On multiple occasions, the Cohen opinion cited the 2016 decision of a federal court in California in Fields v. Twitter, 217 F. Supp. 3d 1116 (N.D. Cal. 2016), which concerned materially similar factual allegations and relief premised partially on the ATA. The Fields plaintiffs are the family members of government contractors shot and killed by a Jordanian police officer, an attack for which ISIS claimed responsibility. Notably, plaintiffs did not allege any connection between Twitter and the police officer, who did not have a Twitter account. Plaintiffs instead claimed that Twitter provided “material support” to terrorists because it facilitated ISIS’s use of its platform to spread extremist ideology, raise funds and recruit new members.
The court dismissed plaintiffs’ first amended complaint under the CDA because the claims attempted to hold Twitter liable as a publisher or speaker of ISIS’s hateful speech. In their second amended complaint, as in Cohen, plaintiffs alleged that Twitter’s provision of accounts to ISIS in the first place was not a publisher or editorial function immune under the “Good Samaritan” provision.
Again, the court dismissed plaintiffs’ claims as barred by the CDA. First, plaintiffs’ “provision of accounts” theory did not hold water: refusing to let an ISIS member open an account would necessarily be a content-based decision, since Twitter could not determine that a prospective user was an ISIS member without evaluating some speech, content or idea expressed by that user.
The court likewise rejected plaintiffs’ claim that the structure and operation of Twitter’s website, i.e., its “hands-off” policy that allowed ISIS to obtain myriad accounts, fell outside the publisher functions protected by CDA immunity. The structure of Twitter’s site reflected choices about which third-party content to display and how to display it, and §230(c)(1) therefore applied. Finally, the court noted that the bulk of plaintiffs’ complaint was devoted to Twitter’s failure to prevent ISIS from using the site in an undoubtedly abhorrent manner. The court dismissed the complaint without leave to amend, on the ground that further amendment would be futile.
Lawsuits seeking to hold social media platforms liable for terrorists’ use of the platforms are unlikely to abate soon. The Fields decision has been appealed to the Ninth Circuit, and cases brought under similar theories are pending in Michigan and California, including a complaint filed in May 2017 alleging that certain social media platforms provided material support to ISIS and, by extension, to the terrorists who committed the San Bernardino attack in December 2015.