The general public’s notion of “artificial intelligence” (AI) as humanlike cyborgs or diabolical, hyper-intelligent machines has more to do with Hollywood than actual advanced computing. This perception, said Ray Thomas, Jr., an attorney and subject matter expert in data privacy and security at IBM, is wrong. Instead, he defines AI as “not just one thing,” but “an umbrella that covers a portfolio of technologies.”
How lawyers can leverage the increasingly large mix of tools lumped under the AI umbrella was the focus of the day 3 keynote at Legalweek: The Experience, titled “AI: Best Practices on the Challenges and Opportunities Within the Field & How It Can Benefit People and Society.” IBM’s Brian Kuhn, a co-creator of AI platform Watson Legal, said that AI technology learns within the context “of what you do and in the context of how you represent your clients.”
Technology that is able to “learn” is not exactly new in the legal space. For the past several years, technology billed as AI and reliant on some form of learning from user decisions, context and other variables has been available for tasks like e-discovery, contract review, and legal research. But the technology, the panelists agreed, is still in development.
“Artificial intelligence is in its early stages. It’s not going to show up, cut your back lawn, paint your porch,” said Andrew Arruda, co-founder of ROSS Intelligence, a legal research technology. Rather, Arruda said the present is “kind of like the early days of the Model T automobile,” as “AI” today is pushing breakthroughs in areas like data retrieval and cutting through massive amounts of unstructured data. As for the future, he added, it’s difficult to predict where things will head.
“These systems, like us, continue to learn. So we’re already making significant breakthroughs,” Arruda said.
AI, though, is a term that tends to rouse a diverse array of feelings among lawyers. Depending on where one falls on the spectrum, some may feel like AI will herald breakthroughs for them and their practice, while others think it may be coming for their jobs.
Further, given the relative complexity of the technology, many lawyers are somewhat averse to it due to a self-perceived inability to understand it. Zev Eigen, global director of data analytics at Littler Mendelson, thinks this viewpoint frames the challenge the wrong way. He advised that lawyers need to be educated consumers rather than engineers.
One way law firms can currently leverage “AI” technology (or technology that can learn from massive data sets) is in their hiring process. Eigen noted that most companies have a “status quo” for their hiring approach, a perceived model for what a candidate ought to be. By using advanced data analytics, he thinks employers can move past proxies like law school prestige, law school grades, and where someone is from, the factors that typically define what employers prefer in a hire. “These are all suboptimal things, [but] we’ve been doing them for so long.”
In building his own data team, Eigen said he doesn’t look at resumes until after using analytics to find his candidates, which has allowed him to diversify his practice by hiring “everyone from MIT PhDs” to candidates without degrees.
Arruda also noted that the cost savings from not having to throw more attorneys at data problems will help legal services reach people who are currently priced out of using them. Still, he said he thinks “there will be more opportunities for lawyers,” but perhaps “not all lawyers will make $350 an hour.”
Natalie Pierce, co-chair of Littler Mendelson’s robotics, AI and automation industry group, echoed these concerns, adding that 80 percent of Americans cannot afford legal services. And for lawyers, that model may not last into the future.
“To not use the technology available, that’s how lawyers and all other occupations are going to be left behind,” she added.