New York lawyers are facing court action themselves after unwittingly citing fake legal precedents generated by ChatGPT in a personal injury case brought for a client in federal court. When Judge P. Kevin Castel examined the cases, he concluded that 'six of the submitted documents appear to be bogus decisions with bogus quotes and bogus citations.' The story made the front page of the New York Times last month (May) and sent shockwaves around the legal world. The lawyers involved have been called to a hearing this week (8 June), with the possibility of disciplinary action. They will be asked to explain how they came to cite non-existent cases and why they should not face penalties for the violation.

Some lawyers are citing this story as evidence that the pace of technology adoption in law needs to slow because of the risk of machine error. But the reality is that many lawyers, both in private practice and in-house, are using AI highly successfully and without mishap. And let us not forget the raft of solicitor negligence cases on both sides of the Atlantic every year that demonstrate the risks of human error. The challenge in this newly technology-enabled legal world is to understand the benefits of both human and technological contributions, and the risks of each, and then to build strategies and systems that realize the best of both whilst managing the potential downsides.