Baker & Hostetler, a law firm with 900 lawyers, recently announced that it has engaged an additional “lawyer.” The new “lawyer” is ROSS, a robot utilizing artificial intelligence that was added to Baker & Hostetler’s bankruptcy team. The firm has licensed the artificial intelligence product, developed by Ross Intelligence and powered by IBM’s Watson technology. ROSS, marketed as “the world’s first artificially intelligent attorney,” will perform certain limited and focused legal research for Baker & Hostetler’s 50-lawyer bankruptcy group. The “lawyer” can reportedly sort through unstructured data at a rate of over a billion documents per second; its mission is to sift legal documents to strengthen the firm’s cases. Users can also ask natural-language questions and receive answers. The system is designed to review the data “intelligently,” identifying the most relevant legal authority rather than producing raw keyword results. Lawyers can interact with the relevant passages of law that ROSS finds, and the robot will continuously improve its search results as it monitors the law for changes. ROSS will be billed as a subscription service.
This is another intriguing application of technology to legal practice. While we acknowledge that technology is here to stay, growing ever more ubiquitous and effective, from electronic discovery to real-time transcription in the courtroom, we nevertheless wonder to what extent too much reliance will be placed on such artificial intelligence. Without “human” attorney review, the robot may miss the subtleties and nuances of the law, and certain relevant documents or authority may go unnoticed. One obvious question is the assessment of credibility. Is ROSS capable of identifying the viability of defenses such as equitable estoppel, or of far more subtle but equally important causes of action, such as breach of the covenant of good faith and fair dealing? Provided the limitations of the technology are recognized, this kind of service does have its place.
At the end of the day, lawyers remain responsible for the ultimate work product, regardless of how it was prepared and by whom. The ethical norm requiring all lawyers to exercise “independent professional judgment” when representing a client, a bedrock principle underlying a number of our Rules of Professional Conduct, may actually be breached by lawyer overreliance on a machine. More significantly, the RPCs, at both the ABA level and the state level, simply do not address what may sometimes present as a serious problem in lawyering. Legal educators, especially clinical teachers, increasingly lament students’ inability to analyze complex problems and provide practical solutions to those problems because of overreliance on electronic research databases. To the extent that ROSS functions like a LEXIS or Westlaw search and presents appropriate documents for first review, there may still be a need for attorney “instinct” and awareness of other factors not necessarily programmable. On the other hand, human lawyers miss things that ROSS and its equivalents may catch. We wonder whether new standards of diligence for lawyers should include the proper use of such artificial intelligence.
The jury is still out as to whether current professional liability standards will need to be adapted to these new technologies, and even ROSS cannot predict that verdict with certainty. In the meantime, we caution lawyers to remember that ROSS is not subject to discipline and that lawyers remain responsible for the ultimate work product.