Ken Strutin
At a time when human laws keep the innocent from connecting with justice, Moore’s law might bring them together. Cognitive computing, deep learning and natural language processing can uncover mosaics of innocence concealed from human eyes. The supercomputer that helps law enforcement find the guilty is, when so inclined, poised to identify the unexonerated.
Computer programs, tirelessly and innovatively searching databases, hold the keys to unexplored justice. The hidden interstices of innocence locked away in files and mainframes might be teased out by algorithms. Indeed, exoneration is born from information that is often inaccessible, but when discovered, unassailable.
Consider the injustices brought to light by group exonerations, reported by the National Registry of Exonerations, and systemwide problems making headlines. See, e.g., “Massachusetts Supreme Court Dismisses 21,000 Drug Convictions Following Chemist Misconduct,” Paper Chase (Jurist), April 21, 2017.
A tranche of error-ridden convictions creates pressure to reinvestigate. See Caitlin M. Plummer and Imran J. Syed, “Shifted Science and Post-Conviction Relief,” 8 Stan. J. C.R. & C.L. 259 (2012). But the pace of serendipitous discovery is too slow for individual cases of innocence camouflaged by arm-twisted pleas, undeveloped records, and post-conviction finality. See Simon A. Cole, “Scandal, Fraud, and the Reform of Forensic Science,” 119 W. Va. L. Rev. 523 (2016).
Massive computer databases daily muster evidence connecting individuals to crimes but leave little technology for the exposition of innocence. Indeed, the proximity of biometric databases and criminal case histories creates a cyberhood of usual suspects.
And yet, artificial intelligence might penetrate the big data of closed cases to find innocence where humans cannot.
Legal research is mired in the quicksand of precedent and citation analysis. But there is an innovative approach in textual research that might be adapted to tackling the unstructured data of wrongful convictions.
Literature-Based Discovery (LBD) has been used to scour medical research via PubMed, surfacing connections no single reader had comprehended and leading to innovations. See Don R. Swanson, “Undiscovered Public Knowledge,” 56 Libr. Q. 103 (1986). It is a method for tethering distinct databases of scientific writings that seemingly have nothing to tie them together.
[I]t allows users to identify biologically meaningful links between any two sets of articles A and C in PubMed, even when A and C share no articles or authors in common and represent disparate topics or disciplines. This fundamental text mining strategy provides a service that cannot be carried out feasibly via standard PubMed searches.
Neil R. Smalheiser et al., “Arrowsmith Two-Node Search Interface,” 94(2) Computer Methods & Programs in Biomed. 190 (2009).
Two distinct literatures, not joined by footnotes or recursive citations, are conjugated through an intermediate set of semantically related terms and concepts (the “B” terms).
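As a minimal sketch of how such a two-node search works, the following fragment links two small document sets A and C through shared B-terms. The documents echo Swanson’s famous fish oil/Raynaud’s example, but the tokenizer, stopword list and ranking here are invented simplifications for illustration, not the actual Arrowsmith system.

```python
# Sketch of an Arrowsmith-style "two-node" search (Swanson's A-B-C model).
# All documents and the scoring heuristic are illustrative only.
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "and", "to", "with", "is"}

def terms(doc: str) -> set[str]:
    """Tokenize a document into a set of lowercase content terms."""
    return {w.strip(".,").lower() for w in doc.split()} - STOPWORDS

def bridging_terms(literature_a: list[str], literature_c: list[str]) -> list[str]:
    """Return B-terms: terms appearing in both literatures, ranked by
    how many documents on each side mention them."""
    count_a = Counter(t for doc in literature_a for t in terms(doc))
    count_c = Counter(t for doc in literature_c for t in terms(doc))
    shared = set(count_a) & set(count_c)
    return sorted(shared, key=lambda t: count_a[t] * count_c[t], reverse=True)

# Two hypothetical literatures that cite no common articles or authors:
A = ["Raynaud disease patients show high blood viscosity",
     "blood viscosity and platelet aggregation in Raynaud syndrome"]
C = ["dietary fish oil lowers blood viscosity",
     "fish oil reduces platelet aggregation"]

print(bridging_terms(A, C))  # bridging B-terms, strongest links first
```

The point is that neither literature mentions the other, yet the shared terms (“viscosity,” “platelet”) expose a testable hypothesis connecting them.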
Legal precedent demands that one case be cited by another, that footnotes anchor every assertion. LBD transcends this pedantic process through an unblinkered quest for new knowledge.
Online legal libraries with global searching produce results in multiple databases from a single query, but without meaningful synthesis. The narrow byways of law databases alone will not exonerate anyone. Needed are new ways of connecting the innocent with multi-layered knowledge.
The next step is to ask IBM’s Watson (see Ecosystem Program) to forge an analytics of innocence that can red flag as well as rectify exoneration cases. See, e.g., Jason Shueh, “As States Battle Opioid Addiction, IBM’s Watson Joins the Fray,” State Scoop, June 8, 2017.
The Analytics of Innocence
Something akin to Arrowsmith could be launched with data from criminal files filtered through the library of exonerations.
Picture two full-text databases: one holding the facts and formulations underlying guilt, the other a catalog of known and emergent errors. Compilations already exist as guideposts, such as the National Registry of Exonerations. In between, a thinking machine, without bias or politics, could pick out elusive patterns of innocence.
One subset of cases, heavy on forensics, might fill database A and run against the reports, studies and news of forensic lab failures, personnel misconduct and scientific revision cataloged in database C, generating a list (B) of promising investigations into actual innocence, alternate suspects, Brady violations, systemic problems and non-crimes.
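That screening step might be sketched as follows. Every record, field name and report here is hypothetical, invented to illustrate the matching logic, not drawn from any real registry; the only factual anchors are that the 2016 PCAST report questioned bite-mark comparison and that chemist misconduct has triggered lab-wide audits.

```python
# Hypothetical sketch: convictions resting on a forensic method (database A)
# are matched against documented failures of that method or lab (database C).
from dataclasses import dataclass

@dataclass
class Conviction:
    case_id: str
    forensic_methods: set   # techniques relied on at trial
    lab: str

@dataclass
class ErrorReport:
    source: str
    discredited_methods: set
    labs_implicated: set

def flag_for_review(convictions, reports):
    """Return (case_id, reasons) pairs where a conviction's forensic
    basis intersects a documented failure pattern."""
    flagged = []
    for c in convictions:
        reasons = []
        for r in reports:
            if c.forensic_methods & r.discredited_methods:
                reasons.append(f"{r.source}: method questioned")
            if c.lab in r.labs_implicated:
                reasons.append(f"{r.source}: lab implicated")
        if reasons:
            flagged.append((c.case_id, reasons))
    return flagged

convictions = [
    Conviction("2003-CR-0147", {"bite-mark comparison"}, "State Lab East"),
    Conviction("2011-CR-0892", {"DNA profile"}, "State Lab West"),
]
reports = [
    ErrorReport("PCAST 2016", {"bite-mark comparison"}, set()),
    ErrorReport("Chemist misconduct audit", set(), {"State Lab East"}),
]
for case_id, reasons in flag_for_review(convictions, reports):
    print(case_id, reasons)
```

A production system would need semantic matching rather than exact set intersection, but even this crude join shows how a discredited method and an implicated lab converge on a single case worth reinvestigating.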
In essence, this is what post-conviction projects and pro se prisoners do through painstaking human toil. See Emily West, “DNA Exonerations 1989-2014: Review of Data and Findings from the First 25 Years,” 79 Alb. L. Rev. 717 (2015/2016).
If we are to have justice that can be relied upon, a super-audit of all convictions, and their foundations, must be routine, supported by a formula for innocence and a computer to run it.
An innocence machine should be operated with the same frequency and verve as CODIS, AFIS and every other crime solving database. At a time when computer learning is applied to improve the accuracy of forensic analyses, it ought to scrutinize the rightness of criminal convictions. See “Scientists Automate Key Step in Forensic Fingerprint Analysis,” Nat’l Inst. Standards & Tech. (NIST) News, Aug. 14, 2017.
Already, doctors and medical researchers at the University of Texas MD Anderson Cancer Center through their Oncology Expert Advisor (OEA) are embracing this concept: “By understanding and analyzing data in a patient’s profile as well as information published in medical literature, the OEA can then work with a doctor to create evidence-based treatment and management options that are unique to that patient.” Howard Lee, “Paging Dr. Watson: IBM’s Watson Supercomputer Now Being Used in Healthcare,” 85 J. Am. Health Info. Mgmt. Ass’n 44 (May 2014).
Thus, exonerative technology has the potential to filter individual innocence from the machinery of guilt.
Supercomputers can be groomed to find meaningful connections between the unstructured data of conviction and the lessons of exoneration. Given that these machines already exist, the only question is why they have not been turned to the task.