In Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), aff'd, 2012 U.S. Dist. LEXIS 58742 (2012), a much-written-about opinion, Magistrate Judge Andrew Peck, a jurist highly regarded for his knowledge of e-discovery, held that a party could be compelled to use predictive coding, over objections as to its reliability, to review electronically stored information for discovery production.

In In re Biomet M2a Magnum Hip Implant Products Liability Litigation, No. 3:12-md-02391-RLM-CAN (N.D. Ind. April 18, 2013), the court held that a party could not be compelled to use predictive coding, despite the argument that it would identify responsive ESI far more accurately than the keyword searching already employed, because doing so would simply cost too much money and yield too little in the way of additional responsive documents. The two opinions appear to reach opposite holdings, but they do not; in fact, they employ similar analyses. A close look at the two cases will help us understand predictive coding on a technical and practical level.

A Bit on Predictive Coding

While much has been written about predictive coding, it’s still worth a brief overview. Predictive coding applications use linguistic and statistical algorithms to identify, across data sets, files that resemble other files that have been identified as responsive. Typically, the reviewer(s) most knowledgeable regarding the matter will code a "seed" set of documents as responsive and nonresponsive.

The application will then code the next set of documents, "reading" the choices made in the seed set and using its algorithms to predict which documents in the next set should be tagged responsive or nonresponsive. The reviewer then reviews the second set and corrects any erroneous coding, thus "teaching" the application. The application codes a third set, and this cycle of machine coding followed by human review is repeated until the application "stabilizes," i.e., reaches an acceptable rate of correct coding choices. At that point, it is deployed across the entire dataset.
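For readers who want to see the mechanics, the following is a minimal sketch of that train-predict-correct loop in Python. It is not any vendor's actual engine; the classifier (TF-IDF plus logistic regression via scikit-learn), the toy documents and the 95 percent stabilization threshold are all illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set coded by the reviewers most knowledgeable about the matter
# (1 = responsive, 0 = nonresponsive). Documents are invented for illustration.
seed_docs = ["launch budget draft", "fantasy football pool"]
seed_tags = [1, 0]

# Each later round is a batch of documents plus the reviewer's corrected coding,
# which here stands in for the attorney review of the application's predictions.
rounds = [
    (["budget meeting re launch", "lunch menu", "launch budget approved", "holiday party"],
     [1, 0, 1, 0]),
    (["revised launch budget", "parking memo", "budget overrun on launch", "cafeteria hours"],
     [1, 0, 1, 0]),
]

STABLE = 0.95  # assumed "acceptable rate of correct coding choices"
labeled_docs, labeled_tags = list(seed_docs), list(seed_tags)

for batch_docs, reviewer_tags in rounds:
    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(labeled_docs), labeled_tags)
    predicted = model.predict(vectorizer.transform(batch_docs))
    agreement = sum(int(p) == t for p, t in zip(predicted, reviewer_tags)) / len(batch_docs)
    print(f"agreement this round: {agreement:.0%}")
    # Feed the reviewer's corrections back in, "teaching" the application.
    labeled_docs += batch_docs
    labeled_tags += list(reviewer_tags)
    if agreement >= STABLE:
        print("coding has stabilized; deploy across the entire dataset")
        break
```

Commercial review platforms use far more sophisticated classifiers, sampling and stabilization tests, but the feedback loop is the same: the application predicts, the reviewer corrects, and the corrections become training data for the next round.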

For all review datasets, the process should be at least as accurate as, if not considerably more accurate than, attorney review of the entire dataset. For large datasets, the process should also be considerably less expensive than paying lawyers to conduct first- and second-level reviews of large volumes of documents.

Da Silva Moore

In Da Silva Moore, the plaintiffs brought a "collective" action, alleging gender and pregnancy discrimination and related employment claims against the defendant, one of the world’s "big four" advertising conglomerates. The matter was assigned to Peck to supervise pretrial issues. The parties sought to agree to an e-discovery protocol to address review of the approximately 3 million electronic documents stored by the agreed-upon custodians (if additional custodians were later included, the document count would rise).

Given that much has been written about Peck requiring that the defendant use predictive coding, it is important to review the opinion in some detail. The parties met and conferred in an attempt to agree upon an ESI protocol, with the court getting involved only to monitor the status of the negotiations and to break stalemates, if necessary. Both parties favored predictive coding.

The defendant sought to create the seed set by coding a portion of it through judgment sampling, i.e., manual review by the reviewers, with another portion of the set created by first deploying keyword searches and then manually coding the top 50 hits from those searches. The defendant agreed to provide all nonprivileged seed-set documents to the plaintiffs so that they could review the defendant’s coding, i.e., check whether any relevant documents had been missed or irrelevant ones included.

Additionally, the plaintiffs provided the defendant with keywords, and the defendant repeated the process of keyword search followed by manual review, using the senior attorneys most familiar with the matter, to add 4,000 more documents to the seed set.
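As a rough illustration of the keyword-driven portion of seed-set construction, the sketch below runs a keyword search over a document set and returns the top 50 hits for manual coding. The "top 50" cutoff comes from the opinion; the term-frequency ranking and the sample documents are assumptions, since the opinion does not describe the review platform's actual ranking method.

```python
def top_hits_for_keyword(documents, keyword, limit=50):
    """documents: list of (doc_id, text) pairs. Returns up to `limit` document IDs,
    ranked by how often the keyword appears, queued for manual coding."""
    scored = [(text.lower().count(keyword.lower()), doc_id)
              for doc_id, text in documents]
    hits = [(score, doc_id) for score, doc_id in scored if score > 0]
    return [doc_id for _, doc_id in sorted(hits, reverse=True)[:limit]]

# Hypothetical documents and keywords; in the actual matter, the hits were coded
# by the senior attorneys most familiar with the case.
docs = [("D1", "pregnancy leave policy"),
        ("D2", "promotion denied after leave"),
        ("D3", "cafeteria menu")]
for kw in ["leave", "promotion"]:
    print(kw, "->", top_hits_for_keyword(docs, kw))
```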

The parties, however, did not agree as to how to review the coding for quality control.

One issue in predictive coding is whether the seeding has been sufficient to address all of the factual issues that underlie the matter. If a matter has 10 key factual issues, some with overlapping facts and others involving wholly separate time periods, personnel or subjects, the coding that identifies records responsive to one issue may differ somewhat or considerably from the coding for another. Indeed, a record that must be tagged responsive as to one issue might have to be tagged nonresponsive as to another for the search on that second issue to work.

Each set focusing on an issue can be thought of as a "concept cluster." To test whether the coding was identifying all concept clusters, the defendant proposed coding by choosing at least 500 records for each cluster, ranking them on a scale of relevance and coding seven rounds per cluster. At the conclusion of the seventh round, the defendant would review a random sample of just under 2,400 discarded records, i.e., those tagged as nonresponsive, to ensure that records with high relevance rankings were not discarded. The defendant would share both kept and discarded records (save privileged ones) with the plaintiffs to demonstrate the reliability of the coding.
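That discard-audit step can be pictured with the short sketch below: draw a random sample from the records tagged nonresponsive and flag any that nonetheless carry high relevance rankings. The sample size of 2,399 stands in for the "just under 2,400" figure in the opinion, and the relevance-score field and the 0.8 cutoff for "high relevance" are illustrative assumptions.

```python
import random

def audit_discards(discarded, sample_size=2399, high_relevance=0.8):
    """discarded: list of dicts like {"id": ..., "relevance": 0.0-1.0} for records
    the application tagged nonresponsive. Returns sampled records that look
    highly relevant and therefore warrant a second look."""
    sample = random.sample(discarded, min(sample_size, len(discarded)))
    return [rec for rec in sample if rec["relevance"] >= high_relevance]

# Hypothetical pile of 100,000 discarded records with random relevance rankings.
discards = [{"id": i, "relevance": random.random()} for i in range(100_000)]
flagged = audit_discards(discards)
print(f"{len(flagged)} of the sampled discards carry high relevance rankings")
```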

The plaintiffs, while agreeing in general to predictive coding, nevertheless had concerns that it was new technology that had yet to be proven reliable. The court accepted the defendant’s proposal, but with the caveat that, if after the seventh round the coding had not yet stabilized, additional rounds would be required.

The plaintiffs filed objections to the defendant’s final protocol, arguing that the defendant’s method allowed no way to check the reliability of its search methodology. The court found the plaintiffs’ concerns "at best, premature," given that the court would be closely supervising the discovery process, the plaintiffs could review how the defendant coded every record in the seed set, and the court could quickly resolve any objection the plaintiffs raised as a result of such coding. The court further noted that the objections were premature because the resolution of other factual issues, such as whether the plaintiffs would obtain collective action and/or class certification, could expand the dataset to be coded and so lead to potential coding errors (if, for example, new concept clusters arose but were not added to the coding), which would ripen objections for which the court would order relief.

A close reading of Da Silva Moore reveals that for all of the hype associated with it, it is a carefully reasoned opinion that did not order predictive coding. Rather, it simply found no merit to the plaintiffs’ objections to the defendant’s search strategy, reserving the power to revisit the issue as the strategy was executed across the full dataset.

Biomet

In Biomet, the defendant, like the defendant in Da Silva Moore, used keyword searching at the macro level to assemble the concept clusters it wanted to review closely, then used predictive coding to review those clusters granularly. Searching with keywords across 19.5 million records reduced the set to 3.9 million, which was further reduced to 2.5 million records with de-duplication. Predictive coding was then deployed across the 2.5 million records, costing Biomet $1.07 million. Biomet estimated that total discovery costs would be between $2 million and $3.25 million.
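De-duplication of the kind that cut the keyword hits from 3.9 million to 2.5 million records is conceptually simple: hash each document's content and keep one copy per hash. The sketch below shows the idea; the normalization step and the use of SHA-256 are assumptions, and real platforms typically also handle near-duplicates and email families rather than only exact copies.

```python
import hashlib

def dedupe(documents):
    """documents: iterable of (doc_id, text) pairs. Keeps one document per unique
    (normalized) content hash."""
    seen, unique = set(), []
    for doc_id, text in documents:
        digest = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((doc_id, text))
    return unique

docs = [("A1", "Launch budget approved."),
        ("A2", "launch budget approved."),   # duplicate once normalized
        ("A3", "Parking memo")]
print(len(dedupe(docs)))  # prints 2
```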

Biomet sought to persuade the plaintiffs of the accuracy of the searches by inviting them to suggest additional keywords for the initial search, and by offering to let them examine the nonprivileged records that predictive coding had culled, to demonstrate the accuracy of that process.

The plaintiffs rejected both offers and, going in the exact opposite direction of the plaintiffs in Da Silva Moore, moved to compel Biomet to deploy predictive coding across the 15.6 million records that had been subjected only to keyword searching and that had not ended up in the 2.5 million de-duplicated records searched using predictive coding.

The plaintiffs made several arguments in support of their objection, and the court rejected every one of them. The plaintiffs cited a recent article that discussed the putative inaccuracy of keyword searches and claimed that that inaccuracy made the subsequent predictive coding searches suspect, as they would not be starting from the proper place or covering the right records in the dataset. The plaintiffs made additional objections not relevant to our discussion.

The court reasoned that its task was not to decide whether predictive coding was "better … than keyword searching," but whether Biomet had discharged its discovery obligations. Finding that Biomet had taken reasonable steps and that the plaintiffs’ request was disproportionate in cost to the value of the matter, it overruled the plaintiffs’ objection. It found that Biomet’s practices complied "fully with the requirements of Federal Rules of Civil Procedure 26(b) and 34(b)(2)," i.e., that the disclosing party need not disclose when doing so would be disproportionate to the value at issue. The court further saw no inconsistency between Biomet’s procedures and "the Seventh Circuit principles relating to the discovery of electronically stored information," and saw Biomet’s approach as consistent with the Sedona Conference’s positions regarding proportionality and best practices.

Looking to the specifics of the matter, the court noted that tests of statistical samples randomly chosen from the 19.5 million-record dataset projected, "with a 99 percent confidence rate," that no more than 1.33 percent of the unselected documents would be responsive and that, in total, no more than 2.47 percent of the original 19.5 million documents were responsive.
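The court's statistical point can be made concrete with the hedged example below, which projects an upper bound on the responsive rate from a random sample using a one-sided normal approximation. The sample size and responsive count are invented for illustration; the opinion does not report them, and Biomet's experts may well have used a different method.

```python
import math

def upper_bound(sample_size, responsive_in_sample, z=2.326):
    """One-sided 99 percent upper confidence bound on the responsive rate,
    using a simple normal approximation (z = 2.326 for one-sided 99 percent)."""
    p_hat = responsive_in_sample / sample_size
    return p_hat + z * math.sqrt(p_hat * (1 - p_hat) / sample_size)

# Hypothetical sample: 4,000 unselected documents reviewed, 40 found responsive.
print(f"no more than {upper_bound(4000, 40):.2%} of unselected documents responsive")
```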

Biomet’s keyword/de-duplication approach had identified 16 percent of the original 19.5 million records. In other words, its keyword search may have produced over-inclusive clusters, adding expense for the defendant in ferreting out and discarding more false positives, but it did not result in missing responsive documents and so did not prejudice the plaintiffs. Thus, the court found the plaintiffs’ request that Biomet deploy predictive coding over the entire set of 19.5 million records, at the cost of additional millions of dollars, to run afoul of "the proportionality standard in Rule 26(b)(2)(C)." The small number of additional responsive records that such a search would yield was not sufficient justification to order Biomet to spend additional millions in discovery costs, given that Biomet had already produced millions of responsive documents and spent millions to do so.

Analysis and Conclusion

The practitioner should take away several lessons from Da Silva Moore and Biomet. One is that arguing about the benefits or deficiencies of predictive coding in the abstract will get you nowhere. The specifics of the matter control; the key to all review practice is reasonableness, which cannot be measured without knowing those specifics.

Discovery review until recently was exclusively a human, manual task, which meant that it was subject to all of the problems that make human practices less than perfect: mistakes, differences in opinion, lack of time and personnel, etc. Any technology-assisted review must be measured against real practice, with all of its imperfections, and not against a mythical discovery standard of perfection.

Moreover, those who object to discovery review methodology must begin by recognizing that, at bottom, any objection must always show that the cost of any proposed alternative would be proportionate to the value of the matter, taking into account the amount already spent, even if the alternative search might or would yield additional responsive records. Without making such a showing, arguments for requiring more predictive coding, or less of it, will be denied. •

Leonard Deutchman is general counsel and administrative partner of LDiscovery LLC, a firm with offices in New York City, Fort Washington, Pa., McLean, Va., Chicago, San Francisco and London that specializes in electronic digital discovery and digital forensics.