The rapid evolution of technology—especially the technologies we label “intelligent” or “autonomous”—presents a growing challenge to the legal profession. Charged with the ethical imperatives to remain competent, to communicate, and to be candid, counsel confront a reality in which the impact and efficacy of AI and many analytics tools—including those used for e-discovery—are hidden behind process opacity that defeats even the most digitally sophisticated. Algorithms and the data sets on which they are trained are at the heart of today’s advanced tools; assessing their impact (which underpins competent legal advice) requires specialized knowledge that was never the subject of a law school syllabus.
Rule 1.1 of the ABA Model Rules of Professional Conduct has long mandated that we remain competent in our areas of practice. Lawyers therefore necessarily engage in continual learning. We master enough about new subject matter to advise clients as their pursuits evolve (successfully or not) in new directions. Counsel often do that with the assistance of clients, who inform us about the services or products they are creating or buying or patenting. They explain, for example, how a specified static pressure determined the size of the fan blades they sold their customer (now litigation opponent) for the cooling tower it was building. Clients help clarify the meaning of the technical documents the opponent delivers in discovery and collaborate in uncovering probative facts. And counsel, in turn, are charged with knowing enough about the law applicable to the client’s pursuits to provide advice that is competent (or to recognize when we need to enlist the assistance of those with the necessary expertise). That paradigm has instilled in counsel a confidence to tackle new subject matter and tools.
With the rapid and pervasive rise in the use of complex technology across our society, however, lawyers should dispense with any belief that they can learn it all. This is increasingly consequential as technology generates more and more actionable decision guidance on which humans are prone to rely. Even lawyers who eschew technological areas of practice are feeling the undeniable influence of technology in their matters. Matrimonial lawyers are engaged by clients whose behavior on social media is in question. Real estate lawyers represent landlords who use algorithms to evaluate potential tenants. Criminal defense lawyers stand before sentencing judges who intend to consider algorithmic assessments of a defendant’s risk of recidivism. (See State v. Loomis, 881 N.W.2d 749 (Wis. 2016) for a chilling example.) Issues of bias emanating from the algorithms or training data embedded in today’s technologies are surfacing as unintended consequences across the spectrum of human activity. (Read Ph.D. mathematician Cathy O’Neil’s book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, for pointed explication.)
We are squarely in an era when our clients are using—and facing—applications with capabilities and implications they do not understand and cannot explain. So are we lawyers. The input of specialists, who understand how these technologies function and are “trained” and who can evaluate and explain the results of their operation, is increasingly necessary to arm counsel with the facts needed for competent legal representation. (For those who believe expertise doesn’t matter, recall In re Biomet M2a Magnum Hip Implant Products Liability Litigation (MDL 2391), No. 3:12-MD-2391, 2013 WL 6405156 (N.D. Ind. Aug. 21, 2013), in which a lack of statistical understanding led to the court’s belief that a discard pile containing only 0.55 percent to 1.33 percent relevant data was reasonable, when a full calculation reveals that some 60 percent of all relevant data likely remained in the discard pile.) And it is only with such facts in hand that a lawyer can accurately communicate to their client potential legal risks and choices—or speak with due candor to a court, or be fair to an opponent (ABA Model Rules 1.4, 3.3, 3.4).
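The arithmetic behind that critique is worth making concrete. The sketch below uses hypothetical pile sizes and rates (illustrative assumptions, not the actual Biomet record) to show how a seemingly tiny relevance rate in a very large discard pile can nonetheless represent the majority of all relevant documents:

```python
# Illustrative arithmetic only: these figures are hypothetical, not the
# actual Biomet numbers. The point: a small relevance RATE applied to a
# very large discard pile can still be most of the relevant documents.

discard_pile = 2_500_000     # documents culled before review (assumed size)
discard_rel_rate = 0.01      # ~1% of the discard pile is relevant (within
                             # the 0.55%-1.33% range discussed by the court)
relevant_found = 16_000      # relevant documents located in the reviewed
                             # set (assumed figure)

# Relevant documents sitting unreviewed in the discard pile.
relevant_missed = discard_pile * discard_rel_rate

# What share of ALL relevant documents was never reviewed?
share_missed = relevant_missed / (relevant_missed + relevant_found)

print(f"Relevant documents left in discard pile: {relevant_missed:,.0f}")
print(f"Share of all relevant documents missed: {share_missed:.0%}")
```

With these assumed inputs, roughly 25,000 relevant documents sit in the discard pile, about 61 percent of all relevant documents—the kind of result that a percentage figure in isolation conceals, and that a statistician would surface immediately.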
The problem is no less acute for lawyers engaged in e-discovery. First, in the case of discovery tools, our clients were not the creators and generally are not the technical experts in their use; unlike in other practice areas, our clients cannot elucidate the technology or its impact. Indeed, they often look to the lawyers for that guidance.
Second, perhaps because discovery used to be a manual process in which lawyers’ eyes were the tool of choice, lawyers have been slow to grasp that the many phases of discovery involve technical processes whose tools need to be operated by those with the right technical expertise. We know from studies conducted under the auspices of the National Institute of Standards and Technology a decade ago that the results of e-discovery review tools and methods can vary dramatically, and that the results vary in part with the expertise with which the tools are deployed (see https://trec-legal.umiacs.umd.edu/). The need for expertise is only increasing with the rise in the complexity of both data and tools.
There is growing recognition that new guidance in navigating this evolving technological terrain is necessary, and it is beginning to emanate from a variety of sources. The ABA has attempted to propel lawyers into this fast-changing era, calling out the need for lawyers to “keep abreast of the benefits and risks associated with relevant technology” (ABA Model Rule 1.1, comment 8). Being aware of benefits and risks is only one step; knowing how to evaluate and respond is another.
More recently, the International Organization for Standardization issued standards for e-discovery (ISO/IEC 27050-3:2017, Code of Practice for Electronic Discovery). Deferring in full to the requirements of the applicable jurisdiction and the legal advice of counsel, the standards walk the reader through the phases of discovery, including potentially applicable technologies, elucidating what occurs in each phase and the considerations for avoiding failure. The discussion is potentially very educational for lawyers. It provides useful information for overseeing and documenting the processes that unearth the facts at issue (and, at the lawyer’s discretion, for meeting the ethical obligation to supervise (ABA Model Rules 5.1-5.3)).
Currently, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is homing in on policies of measurement, competence, accountability, and transparency as fundamental to the ethically aligned design of tools based on artificial intelligence (see https://standards.ieee.org/industry-connections/ec/autonomous-systems.html); these same measures can arm counsel with the information they need to wisely use (or oppose another’s unwise or erroneous use of) automated or intelligent tools. Other organizations are considering these issues as well.
With the escalation in the choices and use of machine learning and other algorithmic tools, the divide between competent counsel and those who eschew calling upon specialists is widening. It is well past time for lawyers to put aside the notion that legal prowess alone is sufficient. Forward-thinking counsel must constantly inquire about the impact and efficacy of increasingly powerful technologies and align with those who have the requisite expertise—be it in security or statistics or computer science or data science or some other discipline. Until such technological education is included in a Juris Doctor, that is the only ethical choice.
Julia Brickell is executive managing director and general counsel of H5. She speaks and writes frequently on methods of evaluating and addressing the strategic challenges and risks posed to companies by the explosion of electronically stored information.