Last week, I spoke at a technology law conference. Even though only one of the speakers was there to “officially” speak about ChatGPT, virtually every other speaker found some way to work in glowing references to ChatGPT and how artificial intelligence tools are going to transform the legal profession. As the last speaker of the day, I waited patiently for someone, anyone, to take off their rose-colored glasses and discuss some of the ethical problems with ChatGPT’s use. When no one did, I felt compelled to mention some of the myriad problems with this AI tool, including its occasional habit of “hallucinating”: giving convincingly worded answers built on fabricated information. “Mark my words,” I said, “before too long, lawyers will be in trouble for ChatGPT-researched briefs with made-up cases.” I had no idea just how prophetic those words would be.

The following day, I saw that Twitter and the national media were abuzz with the story of New York lawyer Steven Schwartz, who represents plaintiff Roberto Mata against Avianca Airlines in a suit stemming from a 2019 personal injury. Schwartz, a lawyer for more than 30 years, submitted a brief “supported” by six “bogus judicial decisions with bogus quotes and bogus internal citations,” according to Judge Kevin Castel’s order to show cause. Among the false cases cited were Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines. Noting that “the court is presented with an unprecedented circumstance,” Judge Castel has set the matter for a June 8 sanctions hearing.