Mary Anne Franks, Professor of Law at the University of Miami School of Law.

Less than a week after the 1995 Oklahoma City bombing that left 168 people dead, Kenneth Zeran began receiving threatening phone calls at his home. He soon discovered the reason: without Zeran’s knowledge, an anonymous hoaxer had posted a message on an America Online (AOL) bulletin board advertising t-shirts and other paraphernalia glorifying the attack, providing Zeran’s home phone number for interested buyers to call. Although AOL complied with Zeran’s request that the message be removed, new messages with similar content continued to be posted to the site. At one point, Zeran was receiving threatening calls every two minutes. After an Oklahoma City radio station read the slogans on air and urged listeners to call Zeran, the phone calls became so threatening that Zeran’s house was placed under protective surveillance.

Zeran sued AOL for negligence, arguing that the company had failed to respond appropriately after being made aware of the nature of the posts. The case eventually made its way to the U.S. Court of Appeals for the Fourth Circuit, which held that Zeran’s claim was preempted by §230 of the Communications Decency Act (CDA). In reaching its decision, the court asserted that “Congress’ clear objective in passing §230 of the CDA was to encourage the development of technologies, procedures and techniques by which objectionable material could be blocked or deleted,” and that holding AOL liable as a distributor of offensive content would conflict with this objective. The court reasoned that the possibility of distributor liability, which applies when a distributor is aware of the unlawful nature of the content, would prompt intermediaries like AOL to refrain from monitoring content at all.

In effect, the court held that entities such as AOL could not be held liable for being nonresponsive to unlawful content, because holding them liable would encourage them to be nonresponsive to unlawful content. The court ignored the obvious point, suggested by Zeran’s own experience, that online intermediaries were already insufficiently motivated to address unlawful content. It provided no evidence for its claim that the prospect of distributor liability would deter intermediaries from monitoring content at all, and it failed to recognize that taking distributor liability for websites and ISPs off the table in fact “has the effect of discouraging self-policing of content,”[1] contrary to the very goal the court cited. As one commentator puts it, “[w]ebsites and ISPs know that no matter how inflammatory third-party postings are, complaints from aggrieved parties will be to no avail, even after notice to the website or ISP.”[2]

In economics, the lack of incentive to guard against risk when one is protected from its consequences is known as a “moral hazard.” Zeran’s interpretation of §230(c)(1), which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” creates a clear moral hazard. Twenty years on, there is no evidence that broad immunity from liability has done anything more than encourage websites and ISPs to be increasingly reckless with regard to abusive and unlawful content on their platforms. Today, the Internet is awash in threats, harassment, defamation, revenge porn, propaganda, misinformation, and conspiracy theories, which disproportionately burden vulnerable private citizens, including women, racial and religious minorities, and the LGBT community. They are the ones who suffer, while the websites, platforms, and ISPs that make it possible for these abuses to flourish are shielded from liability.

The moral hazard created by protecting interactive computer service providers from liability, even when they knowingly feature, aggregate, and distribute unlawful content, is compounded by the increasing corporate domination of the Internet. Amazon, Apple, Facebook, Google, and Microsoft are now the five largest firms in the world based on market value, and they exert outsized influence on Internet communication and commerce. The corporate structure itself creates its own moral hazard: “the nature of corporate action, where bureaucracy dictates that most of the actors are far removed from the actual harm that might occur as a result of their decisions, increases the likelihood of egregious conduct.”[3] The corporations that exert near-monopoly control of the Internet are thus doubly protected from the costs of their risky ventures even as they reap the benefits. The dominant business model of websites and social media services is based on advertising revenue, and “abusive posts still bring in considerable ad revenue… the more content that is posted, good or bad, the more ad money goes into their coffers.” As Astra Taylor writes in The People’s Platform, these Internet entities are “commercial enterprises designed to maximize revenue, not defend political expression, preserve our collective heritage, or facilitate creativity.”[4] As currently interpreted, §230 provides virtually no way to hold these increasingly powerful entities accountable for the harm they cause.

In a footnote, the Zeran court writes that the “CDA reflects Congress’ attempt to strike the right balance between the competing objectives of encouraging the growth of the Internet on one hand, and minimizing the possibility of harm from the abuse of that technology on the other.” While the court reiterates that Congress has the right to decide how to fulfill its own purposes, it notes “today’s problems may soon be obsolete while tomorrow’s challenges are, as yet, unknowable. In this environment, Congress is likely to have reasons and opportunities to revisit the balance struck in CDA.” Twenty years of moral hazards might be enough.

Endnotes:

[1] David Lukmire, Can the Courts Tame the Communications Decency Act?: The Reverberations of ‘Zeran v. America Online,’ 66 N.Y.U. Ann. Surv. Am. L. 371, 403 (2010).

[2] Id.

[3] David Niose, Fighting Back the Right: Reclaiming America from the Attack on Reason (2014), 45.

[4] Astra Taylor, The People’s Platform: Taking Back Power and Culture in the Digital Age (2014), 221.

Mary Anne Franks is Professor of Law at the University of Miami School of Law, where she teaches criminal law, criminal procedure, First Amendment law, family law, and Law, Policy, and Technology. She is the Vice-President and Legislative & Tech Policy Director of the Cyber Civil Rights Initiative, a nonprofit organization dedicated to combating online abuse and discrimination.

This essay is part of a larger collection about the impact of Zeran v. AOL curated by Eric Goldman and Jeff Kosseff.