Recently, I searched the Internet for two unrelated products, a pocket notary seal and a frost-free sillcock (an outdoor faucet device), using one of the so-called "big tech" search engines that millions of Americans use daily. For several weeks thereafter, I received ads for similar products in the middle of unrelated Internet surfing. What I and millions of others experienced is one of the filtering systems that the big tech companies employ to accomplish their advertisers' marketing goals and to profit handsomely in the process. Given the recent acquisitions, reach, and market size of the five big tech companies (Facebook, Amazon, Apple, Microsoft, and Google), which serve hundreds of millions of American customers among others, the breadth of tools at big tech's disposal is not surprising. See, e.g., Katie Jones, The Big Five: Largest Acquisitions by Tech Company, Visual Capitalist, Oct. 11, 2019 (last accessed Feb. 14, 2021).

A high-profile lawsuit filed last month against Twitter in the U.S. District Court for the Northern District of California, John Doe v. Twitter, 21-cv-00485 (N.D. Cal. filed Jan. 20, 2021), raises further questions concerning the filtering process utilized by Internet interactive service providers, otherwise known as "big tech platforms," and brings into sharp focus the scope of the immunity granted these entities under §230 of the Communications Decency Act (CDA), 47 U.S.C. §230. That section was enacted as part of Title V of the Telecommunications Act of 1996, and generally provides tech giants like Facebook and Twitter, among others, legal cover against lawsuits over material that appears on their platforms. The CDA, 47 U.S.C. §230(c)(1), bars civil liability for claims that "treat" a "provider or user of an interactive computer service … as the publisher or speaker of any information provided by another information content provider." The section essentially permits free expression, including inappropriate comments, on these social platforms without legal repercussion. Nevertheless, the companies publicly acknowledge that they regulate content through filtering processes and through policies limiting content such as hate speech, threats, and harassment.