Timothy M. Tippins

Justice Potter Stewart famously quipped that although he could not define pornography, “I know it when I see it.” Jacobellis v. Ohio, 84 S.Ct. 1676, 1683 (1964). Bias presents the opposite problem. It can be readily defined, but you do not necessarily know it when you see it. Given that custody evaluators are as susceptible to bias as anyone else and that bias can destroy the reliability of their conclusions, it is imperative that lawyers and judges be able to recognize its telltale signs. While “there is no blood test for evaluator bias” (J.P. Wittmann, “Evaluating Evaluations: An Attorney’s Handbook for Analyzing Child Custody Reports” (MatLaw Systems, 2013), p. 160), it can be inferred from the evaluator’s methods and behaviors. This article will delineate some key indicators that can help legal professionals spot bias in forensic assessments.

Key Indicator #1: Infidelity to Science

No small part of a psychologist’s education is devoted to understanding the limitations of psychological knowledge and the overarching importance of constricting professional conclusions to those that are supportable by the published knowledge-base of the profession.

This concept of constraint is inherent in the profession’s self-definition as a “scientific” field. Pick up any standard university Psych 101 textbook and you will find that psychology defines itself as “the scientific study of human behavior and mental processes.” C.G. Morris and A.A. Maisto, “Psychology: An Introduction” (Prentice Hall, 11th Edition, 2002), at p. xii. Remove the word “scientific” from that definition and you are no longer talking about psychology. When psychology emerged in the 19th century as a distinct discipline, independent of its intellectual cousin philosophy (defined by Merriam-Webster as “a search for a general understanding of values and reality by chiefly speculative rather than observational means”), its defining attribute was its commitment to the scientific method. The new discipline essentially guaranteed the public that “the conclusions about behavior that it produces derive from scientific methods” and that its work product has been “derived from and tested by scientific methods.” K.E. Stanovich, “How to Think Straight About Psychology,” 10th ed., p. 6 (Pearson, 2013).

Thus, while Socrates might speculate that behaviors “A” plus “B” warrant conclusion “C,” psychologists would assert such a conclusion only if it could be proven through scientifically conducted, replicable empirical research. As one university text aptly puts it: “In business, the motto is ‘show me the money.’ In science, it is ‘show me the evidence.’” D.G. Myers, “Psychology,” 7th Edition (Worth, 2004), p. 23. The text further directs that psychologists must “persistently ask … how do you know?” Id. That is precisely the question that attorneys and judges must invoke whenever confronting psychological testimony.

So just how does this mandate of professional constraint relate to the problem of bias in forensic evaluations? Because custody evaluations are mired in subjectivity (T.M. Tippins, “Bias in Custody Evaluations,” N.Y.L.J., July 6, 2017), they provide fertile ground within which bias, unless strictly checked, can flourish. Anchoring conclusions to empirical research reduces the range of subjectivity, and in so doing provides an important, constricting check on bias.

This nexus between constriction of professional conclusions and the quest for objective assessments, uncontaminated by bias, is evident in the AFCC Model Standards of Practice for Child Custody Evaluations (Family Court Review, Vol. 45, No. 1, January 2007, 70-91) (hereafter Model Standards):

Child custody evaluators shall strive to be accurate, objective, fair and independent in their work and are strongly encouraged to utilize peer-reviewed published research in their reports.

Model Standards, §4.6.

The Model Standards further provide that evaluators should not only utilize the published research of the discipline but should also “provide full references to the cited research” in their reports (Model Standards, §4.6[b]). Although the Frye rule (Frye v. United States, 293 F. 1013 (D.C. Cir. 1923)) “does not require that a forensic report cite specific professional literature in support of the report’s analyses and opinions” (Straus v. Strauss, 136 A.D.3d 419 (1st Dept. 2016)), conclusions expressed in the absence of such support are, in fact, merely personal, not expert, opinions. As clearly expressed by David A. Martindale, Ph.D.:

There is an important difference between an expert opinion and a personal opinion. When an expert has formulated an opinion, it is reasonably presumed that the expert has drawn upon information accumulated and published over the years. The defining attributes of an expert opinion relate not to the credentials held by the individual whose fingers type the words or from whose mouth the words flow; rather, the requisite characteristics relate to the procedures that were employed in formulating the opinion and the body of knowledge that forms the foundation upon which those procedures were developed. If the accumulated knowledge of the expert’s field was not utilized, the opinion expressed is not an expert opinion. It is a personal opinion, albeit one being expressed by an expert.

D.A. Martindale, “Cross-Examining Mental Health Experts in Child Custody Litigation,” The Journal of Psychiatry & Law, 29/Winter 2001, 483-511.

Thus, some key questions to consider when reviewing forensic work product: (1) Are the conclusions expressed in the report firmly anchored to peer-reviewed empirical research? (2) Are there citations to such research in the report? (3) Are there such references in the evaluator’s underlying file? (4) Does such empirical support exist at all? (5) If there are such references in the report or the file, do the studies say what the evaluator claims they say? (6) Are the evaluator’s premises contradicted by reported research? Unless the attorney is well versed in the behavioral science literature, the last three questions may best be answered by a knowledgeable forensic consultant. If empirical support is weak or non-existent, there is a strong probability that the gap between the case-specific data and the evaluator’s conclusions has been bridged by bias.

Key Indicator #2: Data Gathered vs. Data Reported

Sometimes evidence of bias will drip from the pages of the evaluator’s report. More often, however, the indicia of bias will lurk within the bowels of the evaluator’s underlying file. The only way to uncover this hidden bias will be by painstaking analysis of the entire forensic file, juxtaposing the data the evaluator gathered with that which he or she deigned to put before the court in the report.

Such a review is almost always productive because one of the most prevalent and pernicious forms of bias is confirmatory bias. This is “the tendency to seek information that supports one’s hypotheses about a family while neglecting the equally important step of seeking disconfirmatory information.” J.P. Wittmann, supra, at p. 161. The evaluator’s confirmatory bias is often exacerbated by a deliberate suppression of the data that does not support the evaluator’s conclusions. Dr. Martindale has termed this phenomenon “confirmatory distortion”:

Confirmatory distortion is the term that I herewith offer to describe the process by which an evaluator, motivated by the desire to bolster a favored hypothesis, intentionally engages in selective reporting or skewed interpretation of data, thereby producing a distorted picture of the family whose custody dispute is before the court.

D.A. Martindale, “Confirmatory Bias and Confirmatory Distortion,” Journal of Child Custody, Vol. 2, Issues 1-2, 2005.

Confirmatory distortion is not a rarity. For example, in one of this writer’s cases the evaluator recommended that Parent “A” be awarded custody. Parent “B” had had interim custody for nearly two years prior to trial, during which time the children had thrived academically and socially as evidenced by the report cards that Parent “B” provided to the evaluator. The evaluator never mentioned these facts in his report.

Another avenue to explore is any psychological testing that has been administered. Evaluators frequently do not interpret the test data themselves. Instead, they send the test responses to an outside service that produces an interpretive report generated by proprietary software. Evaluators then often incorporate large swaths of language from the computer-generated reports into their own written reports. Comparing the computer reports with the evaluator’s report often reveals that the evaluator has included only those statements that support his or her conclusions and has suppressed others that would contradict the favored conclusions.

Accordingly, it is essential to gain access to the entire forensic file to ensure that the court will not be misled by biased or otherwise flawed opinions. Although pre-trial disclosure of the evaluator’s file has a controversial and complicated history in New York—and only in New York—the more enlightened, less Jurassic judges among us have in recent years recognized that such disclosure is critical if hidden bias is to be uncovered and made known to the court:

Giving counsel and the parties’ access to the underlying notes and raw data is undoubtedly the surest means of uncovering any bias on the part of the evaluator and any deficiencies or errors in the report, particularly where such bias or deficiencies or errors may not be evident from the conclusions expressed in the report.

K.C. v. J.C., 50 Misc.3d 892 (Sup. Ct., Westchester Co., Marx, J., 2015).

Key Indicator #3: Inattention to Protocols

Several professional organizations have promulgated practice protocols, guidelines and model standards to promote sound practice in the custody evaluation arena. The most detailed and comprehensive of these are the AFCC Model Standards. It was noted above that these standards can help reduce the operation of bias, given their direction that evaluators should base their conclusions on the empirical research of the field. In addition, the standards direct specific methodologies that will safeguard against bias.

The standards direct that “custody evaluators shall use empirically-based methods and procedures of data collection.” Model Standards, §5.6. They emphasize that evaluators should use “a balanced process … to increase objectivity, fairness and independence.” Model Standards, §5.5. Section 5.5(a) sets forth specific elements that will contribute to achieving balance and objectivity, such as utilizing the same evaluative criteria and assessment instruments for each parent, ensuring that each parent is given an opportunity to respond to any allegation made by the other, and affording each essentially the same amount of interview time. The greater an evaluator’s departure from these safeguards, the greater the likelihood that bias has diminished the reliability of the opinion.

Conclusion

When bias taints a custody evaluation, it must be exposed so that the court’s decision will not rest on unreliable conclusions. The key indicators described above should aid counsel in uncovering evaluator bias and in presenting the court with a more accurate assessment of the forensic testimony, leading to better custody decisions for the family before the court.
