Andrew S. Kaufman
Suits against radiologists have become ubiquitous in our society.1 Because these claims frequently involve delays in the diagnosis and treatment of progressive diseases such as cancer, the potential monetary exposure tends to be high.2 In general, there are four reasons why radiologists get sued: errors in perception (not appreciating an abnormality), errors in interpretation (calling a finding benign when it is malignant), failure to suggest the next appropriate step or procedure, and failure to communicate with the patient or clinician in a timely and appropriate manner. The first of these, the error in perception, forms the basis for the vast majority of radiology claims. Significantly, such claims are susceptible to a pernicious and insidious form of bias on the part of the plaintiff’s expert as well as the jury.
In these types of cases, the focus of the inquiry is typically on whether a radiologist is liable for failing to appreciate an abnormal finding that is present on an imaging study. While the question of whether an abnormal finding is “present” would seem a straightforward one, for a variety of reasons discussed below, it often is not. The determination of whether an abnormality is present may be a function of the context in which the study is being interpreted. The standard to which a radiologist is held, in these types of cases, is whether an abnormal finding would have been appreciated by another radiologist in similar circumstances.3 The law of the state of New York, as evidenced by the Pattern Jury Instructions, indicates that a doctor is to be judged by comparison to the average practitioner in the community.4 Confounding the ability to make an accurate diagnosis is the fact that abnormalities may appear amid a multitude of normal variants in size, density and location, which can serve to camouflage an abnormality, creating a “Where’s Waldo” effect.5 Perhaps an accessible analogy is the ability to see a particular star in the night sky. When the air is clear and the sky is totally black because the moon is not visible, even the smallest star will be readily detectable to the naked eye. However, if the sky is partially illuminated by the moon, if there are small clouds present, or if many other stars surround it, a smaller star may be far more difficult to see amid a myriad of heavenly bodies.6 That surrounding background structures can make abnormalities difficult to identify is a fact of radiologic life, but one of which lay people, including potential jurors, are not necessarily aware. As a result, there are times when one radiologist may be able to identify an abnormality while another may not.
This disparity is perhaps euphemistically termed “inter-observer variability.” Most studies place inter-observer variability at the seemingly alarming rate of 20-50 percent.7 While this range may seem high, it may be that the process of radiologic interpretation is a bit more difficult than it would appear at first blush.
Context Is Key
Perhaps more important than inter-observer variability is the temporal context in which the inquiry is made. The law, as expounded by the PJI, focuses the jury on the facts existing at the time of the events without reference to after-acquired information.8 Why the temporal context is critical is suggested by a study performed by the Mayo Clinic in which up to 90 percent of missed abnormalities9 were reportedly perceptible in retrospect once a subsequent study or clinical information made the diagnosis clear. Can this astounding statistic truly be termed an “error rate”? Once a clinician is provided with after-acquired information, in general and certainly more specifically in the context of a lawsuit, human nature is such that there is a strong tendency to permit that information to influence one’s retrospective view. The more interesting aspect of this phenomenon is that this influence does not occur at a conscious level,10 meaning the clinician is not aware it is occurring. The clinician may even acknowledge the difference in the two perspectives as a general proposition, while at the same time denying that his or her perspective has been distorted in a particular case. This process has been termed “hindsight bias,” and it tends to be an insidious one. The psychological underpinnings of this process are not fully understood, but the most persuasive theory is simply that people prefer to feel capable rather than incapable or inferior. This is evidenced by an intriguingly simple study performed years ago in which drivers were asked whether they believed they were in the top 50 percent of drivers on the road.
An astounding 82 percent responded in the affirmative.11 It is not entirely unexpected, then, that when faced with a choice between admitting that their interpretation would have been inaccurate but for after-acquired information and the notion that they are extremely capable, an expert retained for litigation purposes would tend to select their own capability as the explanation.
How can it be demonstrated to a lay person such as a juror that the process of hindsight bias is at work and may affect a radiology expert? An accessible, even ingenious, way to demonstrate the effect of unknowingly being subject to hindsight bias has been designed. There is a video currently available on the Internet that can be accessed at https://www.youtube.com/watch?v=vJG698U2Mvo or by searching for “selective attention test.”12 The observer is asked to watch the video and to determine how many times people in white shirts (mixed in with people in black shirts) pass a basketball. I would recommend that the reader stop here for the time being and take the test before reading the balance of the article.
Given the subject matter of this article and the title of the video (“Selective Attention Test”), the reader may have been sufficiently forewarned of the incongruity, but absent any forewarning, about half of the people who watched the video did not see a gorilla prancing across the screen. Once advised of the incongruity (the gorilla), it is not surprising that 100 percent of the test takers in retrospect appreciated the presence of the gorilla. However, the vast majority who were advised of the presence of a gorilla prior to watching felt that they would have noticed it even had they not been tipped off in advance. The video demonstrates the difficulty in appreciating findings other than the one the viewer is anticipating or focusing on. The phenomenon has been termed “inattentional blindness,” an apt phrase if there ever was one.13 Simply put, it is easier to see an expected rather than an unexpected finding. Some of us may have had the experience of looking for and spotting an open seat or two in a crowded movie theater, only to be asked the next day by friends why we ignored them at the theater. Further study of this phenomenon ironically reveals that the more dissimilar the appearance of the abnormality from what is expected, the more likely it will be seen retrospectively, but the less likely it is to be appreciated prospectively.14 This is certainly a counterintuitive result, but one that might explain what appears to be a radiologist’s inability to identify an “obvious” finding that is easily appreciated by a lay person. A related study called the gaze test15 revealed that much the same is true for static images and that a longer duration of “gaze time” does not correlate with improved accuracy. The implication is that the cause of a radiologist’s inability to identify an abnormality is unlikely to be related to the number of studies he or she is attempting to interpret during any given period of time.
The implications for claims of negligent misinterpretation are profound. Because of hindsight bias, inattentional blindness is sometimes mistaken for prospective negligence, or at least that may be the basis of certain claims. Hindsight bias may cast doubt on the validity of the plaintiff’s expert’s retrospective testimony in the context of such claims. Not only is the plaintiff’s expert subject to hindsight bias, but the jury is subject to the same process as well; a jury’s own retrospective bias may unknowingly lead it to accept that what plaintiff’s expert espouses is indeed accurate. A similar phenomenon occurred in the late 1980s, when it was suspected that eyewitness identifications tended to be inaccurate as a result of being subject to a variety of biases. Subsequently, it became clear, based on DNA evidence, that eyewitness identifications were, indeed, subject to high rates of suggestibility and error.16 In both the eyewitness scenario and the retrospectively influenced interpretation of an imaging study, the jury, in a sense, may become the unwitting victim of a witness’ suggestion.
There is an approach to more reliably determining whether an average radiologist would have come to the same allegedly mistaken impression as the interpretation at issue. It is called a blind read. This would require that plaintiff’s expert be asked to view an image without being advised of what was found on a subsequent image or of the clinical disease entity. As one can imagine, as a practical matter there is an obvious stumbling block to this approach: the mere presence of a plaintiff’s (or defense) attorney in a radiologist’s office tends to suggest that there is an allegedly detectable abnormality on a film. Even if several sets of normal images are mixed with the set containing an abnormal finding, a heightened level of scrutiny of the films is triggered by the attorney’s presence and probably makes any such approach less than optimal. Consequently, the obligation to expose hindsight bias often falls upon the defense attorney, and it behooves the capable defense attorney to present and explain this concept to the jury. Some questions appropriate for cross-examination along those lines include whether the expert was advised of the subsequent radiologic findings or of the clinical result prior to viewing the film in question, or whether the expert suspected that the films contained an abnormality based upon the presence of the attorney. Questions concerning the validity, effect and extent of hindsight bias may be an effective avenue of cross-examination. Such questioning can expose the hidden bias of an expert and, in the process, encourage the jurors to question their own retrospective perception of an abnormality. Given the prevalence of suits based on alleged failure to properly interpret radiologic studies, there will, no doubt, be ample opportunity for the defense to present such arguments.
1. L. Berlin and R.W. Hendrix, “Perceptual Errors and Negligence,” AJR 1998; 170:863-867.
3. New York State Pattern Jury Instruction 2:150.
5. Martin Handford, “Where’s Waldo?” (Walker Books).
6. The phrase was first coined by J. Rhodes, Photo Research Director, Vanity Fair.
7. This rate is quoted in the context of imaging for lung cancer. Higher rates have been quoted for mammography. L. Berlin, “Hindsight Bias,” AJR 2000; 175:597-601.
8. Pattern Jury Instructions 2:150.
9. L. Berlin, “Hindsight Bias,” supra note 7; J.R. Muhm and W.E. Miller, “Lung Cancer detected during a screening program using four-month chest radiographs,” Radiology 1983;148:609-15.
10. W.K. Erly and M. Tran, “Impact of Hindsight Bias on Interpretation of Nonenhanced CT Head Scans for Acute Stroke,” J. Comput. Assisted Tomog. 2010;34:229-232.
11. A.S. Kaufman, Behavioral Finance PLUS Journal, March 2008, Vol. XXI, No. 3.
13. D.J. Simons and C.F. Chabris, “Gorillas in Our Midst: Sustained Inattentional Blindness,” Perception 1999;28:1059-1074; A. Mack and I. Rock, “Inattentional Blindness” (MIT Press 1998).
14. L. Berlin, “Hindsight Bias,” supra note 7.