One of the more important facets of adapting to machine learning in today’s legal sphere is knowing when not to use it. At the “Re-designing Legal Processes in the Age of Machine Learning” panel at the Corporate Legal Operations Consortium (CLOC) 2019 institute in Las Vegas, a group of law firm attorneys discussed how important it is not to develop a solution and then go looking for a problem.
But what exactly does that mean? While machine learning can be useful for repetitive, process-level work such as separating relevant documents from a larger set, not every problem merits the investment necessary to bring it to bear properly.
“Sometimes machine learning is not the right solution,” said Josias Dewey, the innovation partner at Holland & Knight.
Collecting the amount of data necessary to teach an AI system how to engage with a given task requires time and know-how. Law firms may well have a leg up in at least one of those areas.
“Law firms are usually in pretty unique positions where we have pretty large sets of data,” Dewey said.
But the amount of data required by a machine-learning tool can vary from task to task, something the panel urged the audience to consider before going too far down the road with a project, only to find they don't have the information they need.
Even if that data exists, odds are that it will have to be processed. In some cases that might entail removing stop words such as "the" or "and," along with other words that aren't particularly useful to the models being used to teach the AI.
But other vocabulary is crucial, especially considering the kind of specialized language that pervades legal documents.
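The tension the panelists describe can be sketched in a few lines of code: a generic stop-word filter cleans out filler, but a naive list would also discard legal terms of art. The stop-word list, the `KEEP` set, and the sample clause below are illustrative assumptions, not drawn from any real firm's pipeline.

```python
# Toy preprocessing sketch: drop generic stop words from a clause while
# preserving legal terms of art that a generic list would wrongly remove.

STOP_WORDS = {"the", "and", "of", "to", "a", "in", "shall", "be"}

# "shall" is filler in everyday English but carries obligation in contracts.
KEEP = {"shall"}

def preprocess(text: str) -> list[str]:
    """Lowercase, tokenize on whitespace, strip punctuation, drop stop words."""
    tokens = [t.strip(".,;").lower() for t in text.split()]
    return [t for t in tokens if t not in STOP_WORDS or t in KEEP]

clause = "The Tenant shall pay the Rent and the Deposit."
print(preprocess(clause))  # ['tenant', 'shall', 'pay', 'rent', 'deposit']
```

The point of the `KEEP` override is exactly Williamson's: lawyers don't speak like normal human beings, so off-the-shelf stop lists need domain review before use.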
“Lawyers don’t speak like normal human beings,” said Ziggy Williamson, lead software developer at Holland & Knight.
However, massive amounts of data aren't always necessary to perform repetitive tasks, so long as context isn't important. Named-entity recognition models, for example, can extract names and dates from documents without having to be fed piles of data first.
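As a rough illustration of what such extraction does, the toy sketch below pulls dates and capitalized name-like phrases out of text with regular expressions. It is a stand-in, not a real NER model; in practice one would reach for a pretrained pipeline (such as spaCy's), which works out of the box precisely because someone else already trained it on large corpora. The sample sentence and patterns are assumptions for illustration.

```python
# Toy stand-in for named-entity extraction using regular expressions.
import re

# Matches dates like "March 3, 2019".
DATE = re.compile(
    r"\b(?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+\d{1,2},\s+\d{4}\b"
)
# Matches runs of capitalized words, e.g. "Jane Doe".
NAME = re.compile(r"\b(?:[A-Z][a-z]+\s)+[A-Z][a-z]+\b")

def extract_entities(text: str) -> dict:
    """Return date strings and capitalized multi-word phrases found in text."""
    return {"dates": DATE.findall(text), "names": NAME.findall(text)}

doc = "This Agreement is made on March 3, 2019 between Jane Doe and Acme Holdings."
print(extract_entities(doc))
```

Running this finds the date and the parties, but the capitalization heuristic also misfires on the sentence-opening phrase "This Agreement", which is one reason trained models beat hand-written rules for this kind of work.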
Still, even if a solution can be deployed to handle a task, that doesn't mean that everyone can sit back and relax. Williamson cautioned that even with machine learning at the wheel, it's still important to have a human review at the end of the process. Dewey echoed that sentiment.
“Is it ‘bet the company’ stuff? If it is, then maybe you want to have a backup,” he said.