The Georgetown Law Advanced E-Discovery Institute, held December 6 and 7 in McLean, Va., was an excellent, well-attended event. As expected, it featured leading federal judges in the e-discovery world and many attorneys well versed in this field. Noteworthy about this program was how boldly it addressed the numerous issues that have surfaced this past year around technology-assisted review (TAR). TAR, computer-assisted review (CAR), and predictive coding are all variants of the legal industry's recent lurch toward more efficient ways to search and review data, and they are, by far, the topic du jour. Three 90-minute sessions attempted to clarify the underlying concepts, methodologies, benefits, and practical applications of these efficiency-enhancing technologies.
I am fairly confident that if you polled the 600 attendees for a definition of these terms, you would get at least 600 versions of the answer. By the end of the three sessions, attended by the small segment of the bar that is interested in electronic data discovery, it was clear that we are in the very early stages of using advanced technology on a broad basis to increase the efficiency of EDD.
There seemed to be very few attendees who believed that discovery could actually be provided in a box, or that it could possibly be anywhere near simple. By the conclusion of the last session, I was convinced that confusion and uncertainty were the most prevalent emotions. Why? Comedian Steven Wright, in a monologue a few decades back, asked, "If you were speeding through space at the speed of light, would you know if your headlights were on?" As of today, a small segment of the judiciary and the bar are indeed speeding along in attempting to use TAR on a broader basis, but even this group is a bit uncertain as to exactly where the spaceship is heading.
At what appears to be the speed of light, the old ways of searching for documents have seemingly been discredited. A small but expanding group of judges now claims that if you are not using some variation of TAR, you are, or soon may be, falling below professional standards.
There is a slight problem, however: there are no common standards or clear protocols governing how and when these technologies should be applied. I found it quite vexing that several of the judges at the conference made it clear that they do not know how these technologies actually function, and likely never will. This, of course, is not a critical defect, because they can rely on technology experts to clarify particular issues as they arise. But does this not potentially bring us back to a point where discovery reverts to a contest of experts, creating a new set of issues regarding the reliability of various technologies and workflows to occupy the court?
I am very much in favor of TAR. In most variations, if applied properly and to the right types of cases, it should increase the accuracy of data searches, and implementing sampling and workflow efficiencies should yield a cost-effective outcome for the client.
Unfortunately, judging from the response to TAR even within this select group of lawyers, we are not there yet. Underscoring the lack of understanding of these technologies and the confusion as to when they should be applied, only a very small segment of the bar currently relies upon TAR.
The first of the three sessions addressed some of the recent case law that has either permitted the use of TAR or, as in the "Hooters" case (EORHB Inc. v. HOA Holdings LLC, C.A. No. 7409-VCL), actually required the parties to use a predictive coding protocol unless they could show cause why they should not. This session highlighted that there is far from any consensus among the judiciary as to when and how these technologies should be applied. It further affirmed that we are at a very nascent point in the development of the case law. A clear message from this session is that an attorney should know the specific judge's interest, expertise, and assertiveness when it comes to the use of TAR.
The second TAR session highlighted the point that there is a lot of math, science, and statistics underlying any of these technologies. This area remains a moving target, with little common ground on how data should be collected, sampled, searched, or reviewed. Because most of us do not understand the complicated concepts "behind the curtain" of these methodologies, there is a strong reluctance to simply accept the "trust me, it's all (or enough of it) there" assurances of an adversary.
There are no accepted standards yet as to what constitutes a valid response to a discovery request using technology that is still so new to the legal profession. The discussion inevitably turned toward the right of an attorney to examine the adversary's seed (sampling) sets, workflow, and non-responsive document collections. That, in turn, raised the questions of how large a seed set must be, why that size, and how you can know, at the very outset of a case, how "rich" the collection is in responsive documents, so that you are implementing an adequate search of the data.
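To make the sample-size question concrete: the statistics "behind the curtain" here are the same ones used in any survey. A minimal back-of-the-envelope sketch (my own illustration, not any court-endorsed protocol) uses the standard normal-approximation formula for estimating a proportion, where the required sample depends on the margin of error and confidence level you want, not on the overall size of the collection:

```python
import math

def sample_size(margin_of_error: float,
                confidence_z: float = 1.96,
                est_prevalence: float = 0.5) -> int:
    """Normal-approximation sample size for estimating a proportion.

    est_prevalence=0.5 is the worst case (largest required sample),
    which is what you must assume when richness is unknown at the
    outset of a case -- the chicken-and-egg problem noted above.
    confidence_z=1.96 corresponds to a 95% confidence level.
    """
    n = (confidence_z ** 2) * est_prevalence * (1 - est_prevalence) \
        / (margin_of_error ** 2)
    return math.ceil(n)

# ~385 sampled documents give a +/-5% estimate of richness at 95%
# confidence, whether the collection holds 100,000 files or 10 million.
print(sample_size(0.05))  # 385
print(sample_size(0.02))  # 2401
```

The counterintuitive point, and part of why these debates are so contentious, is that tightening the margin of error is what drives sampling cost, not the volume of data collected.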