On Feb. 8, Magistrate Judge Andrew Peck conducted a status conference regarding the discovery protocol in Da Silva Moore v. Publicis Groupe, including a detailed discussion on the appropriate use of technology-assisted review. During the conference, Peck opined, “It certainly works better than most of the alternatives, if not all of the alternatives. So the idea is not to make it perfect … [t]he idea is to make it significantly better than the alternatives without nearly as much cost.”

Shortly thereafter on Feb. 24, Peck issued a formal opinion addressing the efficacy and appropriate use of technology-assisted review. In this landmark opinion, Peck concludes: “What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review. Counsel no longer have to worry about being the ‘first’ or ‘guinea pig’ for judicial acceptance of computer-assisted review.”

For the multitudes of practitioners waiting for the bench to officially “bless” the use of technology-assisted review, the wait appears to be over. However, despite Peck’s opinion officially opening formal judicial discussion of technology-assisted review, Da Silva Moore is not a “one size fits all” endorsement that eliminates the need to adhere to existing best practices. Prospectively, courts will likely continue to follow the existing rubric—seeking to marry a defensible document review process with robust, innovative technology.

Best practices guidance begins with judicial scrutiny of keyword searching. Although many practitioners consider keyword searching the “gold standard” for culling a document review set, that notion did not originate with the courts. Numerous judicial opinions have been highly critical of the method, while also noting that the court’s role does not extend to endorsing any particular method or service provider. Da Silva Moore advances the same position, stating, “[N]or does this opinion endorse any vendor … nor any particular review tool.” Instead, as Judge Paul Grimm noted in Victor Stanley v. Creative Pipe, parties leveraging any type of ESI search methodology should “be aware of the strengths and weaknesses of various methodologies … and select the one most appropriate for its intended task.”

Before addressing technology-assisted review in Da Silva Moore, Judge Peck had opined on the importance of cooperation between the parties and proof of a method’s efficacy in William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co. Specifically, Peck identified the importance of “carefully craft[ing] appropriate keywords, with input from ESI custodians” while implementing a methodology that is “quality control tested to ensure accuracy and elimination of ‘false positives.’” Peck’s earlier opinions thus align with Da Silva Moore—particularly his emphasis on “making [the process] significantly better than the alternative without nearly as much cost.” There is also an abundance of reputable commentary from e-discovery authorities outside the courts condoning the use of technology-assisted review. For example, Da Silva Moore cites approvingly the Sedona Conference, which has noted that automated search methods may be “reasonable, valuable, and even necessary” to reduce costs and safely cull the volume of ESI requiring human review. Peck has likewise offered commentary outside the courtroom, echoing the case law by identifying the inefficiencies of keyword searching and the absence of any judicial opinion endorsing it, and asserting that analysis of any search technology requires evidence of adequate quality control, sampling, and validation of recall and precision.

This commentary culminates in the Da Silva Moore opinion, in which Peck contextualizes these best practices and identifies specific lessons learned in resolving the parties’ e-discovery dispute. First, Peck identifies the importance of presenting proof of extensive quality control testing and verification. Second, he notes the benefits of strategically staging discovery—starting with the documents most likely to be relevant—in order to control costs. Third, Peck emphasizes that counsel should talk openly with their clients to identify relevant custodians and document sources, and should share that information with opposing counsel to demonstrate thorough cooperation if the process is later scrutinized by the court.

Finally, Peck notes the usefulness of having e-discovery technology providers available to make “complicated e-discovery concepts … understandable to judges who may not be tech-savvy.” In other words, Peck urges counsel to develop a sound process for testing and validating results and to be prepared to explain the quality control measures used during the project. Developing a thorough process is the key to success, whether using technology-assisted review or more traditional linear review strategies.

Ultimately, technology-assisted review is best viewed as a complementary platform that aids human document review, a notion the Da Silva Moore opinion acknowledges. Without quality human training and monitoring, the technology is no more defensible than keyword searching or linear review. And even with proper training and monitoring, these “smart” review platforms aim to reduce the time humans spend reviewing documents—not to replace their legal analysis or advocacy skills. Going forward, parties that remain hesitant about technology-assisted review should closely monitor both existing and forthcoming commentary from the bench on the matter.