In our prior articles (Part 1 and Part 2), we looked at how the market's collective understanding of predictive coding must shift if the technology is going to continue to gain traction in 2014. But one issue we have not yet considered is how the technology must refocus lawyers' attention on the details. After all, details matter, in litigation or just about any other situation a lawyer confronts.
This inattention was understandable at first, as everyone initially focused on cheaper and faster review. The oft-cited 2012 RAND study makes cost reduction an easy target for e-discovery vendors, each entering the fray with the same basic claim: that their way is the most cost effective. And let's be clear: cheaper and faster discovery is certainly a laudable goal. But if cost and speed remain the primary focus of this powerful technology, we are not raising the bar; we are not driving innovation and imagining a better way to practice law. In short, we are resigning ourselves to the notion that good might be good enough.
In actuality, cheaper and faster are preferable goals only when the substantive quality of the process's output either doesn't matter at all or is of minimal importance. I cannot imagine that most lawyers, regardless of their cost consciousness, would embrace such a position. In fact, many lawyers initially resisted the use of advanced technologies in the review process because they believed (albeit mistakenly) that the quality of human review would have to be sacrificed in exchange for the efficiencies the technology could deliver. In other words, in their eyes, faster was less desirable than better. That is the thing about technology: new solutions evolve, and existing technologies are tweaked to meet new demands. We are now entering a new age of technology-assisted review, and in this age the details are finally getting the attention they deserve.
This approach to predictive coding, defined by an increased focus on interactive analytics to gain deep insight into the case earlier in the process, allows discovery counsel and merits counsel to serve the same master. Review teams need not be relegated to collating documents into three stacks: potentially privileged, non-responsive, and responsive. Instead, review teams work early in the life of the case to locate the most important category of documents: documents that might actually become exhibits at a deposition or in a motion (and no, I don't just mean "hot" documents). The most important metric should be how quickly the review team locates those critical documents. That metric tells us whether we are finding meaning in the world created by overbroad collection and imprecise discovery requests.
Consider the estimate offered in 2010 that the ratio of pages discovered to pages offered as exhibits was 1,000 to 1. Given the exponential growth of data, it is fair to assume that the ratio is even more lopsided as we look ahead to 2014. Moreover, even the proposed amendments to the Federal Rules of Civil Procedure are unlikely to do much, at least initially, to curb this waste. Technology must meet the demand until the law catches up with the problem.
If we embrace a view of review technology that goes beyond mere culling with predictive coding, and instead focus on leveraging technology to find the documents that actually matter, we might meaningfully reduce the waste associated with large-scale document review. Lawyers could abandon the practice of searching for merely responsive documents (i.e., collating) and instead devote their attention to finding the most important documents on the most important issues in the case. If we can make this our highest priority, then machine learning will prove its true worth in 2014, and we will move beyond predictive coding as we have come to know it.