Since OpenAI launched its much-anticipated chatbot ChatGPT last year, it has spread like wildfire—including in the legal field. The results have been mixed. While some have praised AI for potential efficiency gains in the delivery of legal services, others have raised concerns that relying on AI could produce incorrect or biased content, leading to legal errors.

This issue was brought into the spotlight recently when a U.S. lawyer made headlines for using ChatGPT to find supporting cases for a lawsuit he was pursuing against a Colombian airline. ChatGPT suggested cases and citations that did not exist, which the lawyer then used in legal filings, exposing him to the risk of sanctions.
