Truth Machine: The Contentious History of DNA Fingerprinting

Michael Lynch, Simon A. Cole, and Ruth McNally

Print publication date: 2009

Print ISBN-13: 9780226498065

Published to Chicago Scholarship Online: March 2013

DOI: 10.7208/chicago/9780226498089.001.0001



(p.39) Chapter Two A Techno-Legal Controversy
University of Chicago Press

Abstract and Keywords

This chapter reviews a controversy about forensic DNA testing that ran for several years in the late 1980s and early 1990s and which was brought to a head in the legal domain, particularly in contested admissibility hearings in U.S. state and federal courts. The key issue was whether DNA profiling retained its scientific status when it was transferred to the domain of criminal investigation. The most active phases of dispute included nonscientists such as judges and lawyers, as well as scientists and technical specialists from different fields.

Keywords:   forensic DNA testing, admissibility hearings, federal courts, criminal investigation, scientific status

Technical controversy is an established topic for research in the history and sociology of science. In addition to exhibiting divergent theoretical assumptions, rival experimental designs, and contrary evidential interpretations, controversies exhibit collective procedures for reaching “closure” and restoring a sense of continuity and consensus. The expectation that scientific communities should be able to resolve their disputes through rational procedures and crucial tests, rather than through brute force or majority rule, dates back to the early-modern origins of experimental science (Shapin & Schaffer, 1985). Although key scientific terms—evidence, tests, matters of fact—parallel, and even derive from, legal uses of those terms (Shapiro, 1983), science is now widely held to be a source of more certain, and less arbitrary, procedures for making factual judgments and ending disputes. The role of experts, and especially “scientific” experts, has become increasingly prominent in legal, governmental, and administrative circles (Smith & Wynne, 1989; Jasanoff, 1990; 1992; 1995). At the same time, the role of the jury has diminished in the United Kingdom, the United States, and other nations that grant “ordinary” citizens and their “common sense” a primary role in the fact-finding tribunal.1 As we describe in chapter 8, with the ascendancy of expert evidence (and particularly DNA evidence), other, nonexpert forms of criminal evidence such as confessions and eyewitness testimony have undergone critical scrutiny. Nevertheless, in spite of these trends, judges often express skepticism about expert evidence and they do not readily (p.40) yield legal authority to experts. As we shall see when reviewing decisions about DNA profile evidence, the courts continue to place limits on expert evidence so that expert witnesses do not usurp the traditional province of judges and juries. 
The extent to which such traditional limits should be maintained in the face of the extraordinary power ascribed to DNA evidence is itself a major topic of controversy.

This chapter reviews a controversy about forensic DNA testing that ran for several years in the late 1980s and early 1990s. As we shall elaborate, this controversy had a number of distinctive features. First, while the scientific status of DNA evidence was a prominent issue, the controversy was brought to a head in the legal domain, and especially in contested admissibility hearings in U.S. state and federal courts. Disputes between scientists took place in courtrooms, as well as in scientific journals and advisory panels, and the existence of the controversy—including its parsing into phases (when it opened, and when it was more or less closed) and specifications of what it was about—were framed by legal procedures and criteria. Second, the most active phases of dispute included nonscientists (particularly judges and lawyers), as well as scientists and technical specialists from different fields. The identity of some participants as scientists or nonscientists involved a degree of ambiguity and contestation. And, third, the controversy was not only about the “scientific” status of a particular innovation (its acceptance as reliable and as a source of valid results within bona fide scientific fields), but also about whether that innovation retained its scientific status when it was transferred to the domain of criminal investigation.

Controversy Studies

Studies of controversies in the sociology and history of science challenge the idea that scientific disputes are resolved strictly through experimental testing and rational consideration of evidence. Without denying the central place of empirical experiments and observations, controversy studies emphasize that scientific disputes are more like political and legal disputes than is often assumed.

Controversy studies were inspired by Thomas Kuhn's influential Structure of Scientific Revolutions (1970 [1962]). Kuhn's title and central thesis drew strong parallels between political and scientific revolutions. Particularly significant for sociological purposes was Kuhn's argument (p.41) that revolutionary transformations of naturalistic understandings (for example, the Copernican revolution in the sixteenth century, the chemical revolution at the end of the eighteenth century, and the rise of relativity in early twentieth-century physics) involved the overthrow of an established paradigm by an incommensurable matrix of theory and practice. Kuhn emphasized that incommensurable paradigms involved asymmetrical commitments: to work within one paradigm invariably skewed one's vision of the competing paradigm or paradigms. Although arguments during revolutionary transitions focused on theory and experimental evidence, historical scientists embedded in controversy had no recourse to historical hindsight or transcendental rationality when interpreting evidence and reaching consensus. Although Kuhn (1991) later put distance between himself and his sociological and political interpreters,2 his writings were used to support arguments to the effect that a scientist's “choice” between competing paradigms (and, by extension, “choice” among competing theories in controversies of lesser scope) was no choice at all, because historical scientists did not simply compare the competing paradigms from a neutral standpoint. Instead, any choice already was embedded within a paradigm: a nexus of existential commitments, including naturalistic assumptions, a training regime, a network of colleagues and patrons, and a way of working with research materials.

Among the best-known controversy studies are H. M. Collins's analysis of disputes over gravity wave experiments and Andrew Pickering's history of the ascendancy of what he calls the “quark-gauge world view” in particle physics.3 Collins's studies were particularly influential for (p.42) S&TS research. His studies were based on interviews with active participants in the controversy and on-site visits to relevant experimental facilities. Collins identified a “core set” of scientists in the gravity wave field who performed the experiments that prosecuted the controversies and pursued their resolution. A key medium of interaction among members of the core set was furnished by published and unpublished reports of experiments, though members also engaged in more personal forms of conduct through which they established trust, communicated tacit knowledge, and built coalitions.

Collins noted how experimental designs and results varied remarkably, and yet predictably, among core-set rivals. He argued that, instead of being sources of independent evidence for resolving controversies, experiments tended to be extensions of the partisan arguments that sustained controversy. He observed that “closure”—the practical ending of controversy in a scientific community—was not based on any single crucial experiment. The eventual winners wrote retrospective histories of the dispute, while die-hard losers, some of whom left the field or became marginal figures, often remained dissatisfied with the alleged disproof of their claims. In other words, evaluations of experimental evidence were bound together with assignments of credibility and relevance to particular experiments and experimentalists. Over the lifetime of the controversy, the scientific evidence and the boundaries of the relevant scientific community were intertwined and coproduced.

For Collins and others who study scientific controversies (often associated with “constructionism” in the social sciences [Hacking, 1999]), closure does not result entirely from the accumulation of evidence, control of sources of error, or rational agreement about previously disputed facts. Although participants, and the sociologists who study them, cannot ignore possible sources of evidence and error, it sometimes happens that closure is announced, and even acted upon, despite the persistence of disagreement. Indeed, forecasting closure and even announcing it as a fait accompli is a common rhetorical tactic used by participants in ongoing controversies. A small, or in some cases a substantial, minority faction may continue to cite evidence supporting its views, while complaining of “political” machinations employed by the triumphant faction (Gilbert & Mulkay, 1984). A sociological analysis of such disputes is complicated by the fact that judgments about relevance and credibility—which disciplines are relevant, and who counts as a credible spokesperson for the consensus—can themselves be subjects of controversy. The (p.43) cartographical metaphor of “boundary work” (Gieryn, 1983; 1995; 1999) and an analogy with political gerrymandering (Woolgar & Pawluch, 1985) are widely used in case studies of controversy and closure. In part, these sociological concepts point to the way controversies about facts are bound up in disputes about who is competent to “represent” the facts.4 As consensus develops over the course of a controversy, communal alignments form and boundaries are drawn between who is deemed competent and credible and who is dismissed as incompetent or incredible.5

Hybrid Controversy

In his study of the gravity waves controversy, Collins (1999: 164) employs a diagram of the core set and various peripheral groups who consume the written products of the core set's experiments (fig. 2.1). The diagram resembles a target, with the most heavily involved “core” investigators at the center, and less directly involved scientists further out from the center. Further out are policymakers, journalists, and other interested onlookers and commentators, and way out on the edge are members of “the public.”6

The DNA fingerprint controversy took place on rougher terrain, and (p.44)


Figure 2.1. Core set diagram (target diagram of consumers of scientific papers). Source: H. M. Collins (1999: 164). Reprinted with permission from Sage Publications Ltd.

with less settled divisions among the participants. It was not a pure case of a scientific controversy, analogous to the gravity waves case. During the gravity waves controversy a relatively small number of experimental physicists in the core set battled over discrepant findings and interpretations. Although the core set was not a hermetically sealed group, a restricted group of laboratories had a central role in generating and resolving the controversy. Some other scientists and nonscientists may have become aware of the controversy, and a few may have had a peripheral role in it, but most had little interest or direct engagement in the esoteric dispute.7

The DNA fingerprint controversy also involved disputes among experts, but it was not confined to a clearly identified network of natural (p.45) scientists. The technical content of the dispute extended well beyond any single discipline, as key participants included scientists and mathematicians from several different specialties, including molecular biology, forensic science, population genetics, and statistics. Active participants also included lawyers, judges, police employees, legal scholars, government officials, and science writers. The core set thus was joined by a “law set” (Edmond, 2001) that participated in key legal decisions, as well as by an “administrative set” of review panels and advisory groups and a “literary set” of legal scholars, science journalists, and other scribes and chroniclers (including the authors of this book). This ecology of overlapping and sometimes contending sets was complicated by the fact that members of some sets (and to an extent all sets) faced the task of specifying who had a relevant and legitimate role in the dispute. Law courts, journalists, and scientific review panels faced the task of deciding which fields were relevant and which members counted as bona fide spokespersons for those fields. In other words, they mediated the dispute, translated its terms, and adjudicated its boundaries. Which specialties occupied the “core” of the dispute was not given from the outset, nor was it always clear which scientists and administrators counted as spokespersons for the relevant fields.8

The debates among the members of these sets covered technical questions about molecular biology, population genetics, and statistical procedure, but they also covered practical, legal, and ethical questions about (p.46) the way police and forensic organizations handle criminal evidence and implement laboratory protocols. At the heart of the dispute were legal procedures for deciding the admissibility of evidence. Closure was as much a legal and administrative matter as it was a technical or scientific issue. Technical “fixes” and the presentation of expert evidence were important for bringing about the eventual acceptance of DNA testing in criminal justice, but equally important were efforts to devise administrative standards for assuring the courts that DNA evidence was correctly handled and analyzed. Some of the most interesting and dramatic confrontations occurred when expert witnesses were cross-examined by attorneys in front of judges, juries and (during the O. J. Simpson trial) massive media audiences. Law, science, and the public understanding of science were deeply intertwined in these confrontations.

The Legal Framing of Scientific Controversy

The key players in the DNA fingerprinting controversy included specialized “expert lawyers” and scientists who actively participated in legal challenges and public policy debates. Legal standards for deciding the admissibility of evidence framed the controversy, and some of the landmark events and turning points in the history of DNA testing (and, later, fingerprinting) took place in the courtroom. The controversy was an episode in legal history as much as it was a chapter in the history of science. The courts, scientific review panels, and the media investigated and publicly discussed the controversy. Many of the themes that Collins and other social historians have used for analyzing controversies were themselves used by participants whose actions produced, or sought to end or foreclose, controversy. Commentators spoke openly about the controversy: they quoted contradictory claims about experimental results and the competence of the experimenters; and they made forecasts, and disputed others' forecasts, about the eventual resolution of controversy. These contentious commentaries were not side issues in the “real” controversy, because they were important for framing, conducting, and settling the dispute. The commentaries, syntheses, and speculations about closure were crucial for establishing public (and government) interest and disinterest in the continuing saga.

The technico-legal dispute about DNA fingerprinting addressed fundamental social and political questions about the role of technical expertise (p.47) in a democratic system.9 Participants raised questions about the meaning and authority of numbers, and about the participation of citizens and the role of “common sense” in legal decision-making. The courts were asked to resolve, in a case-by-case way, the problematic relationship between idealized norms of science (abstract principles, rules, and protocols) and day-to-day practices in the laboratory and at the crime scene. In effect, the courts provided a forum for publicly discussing topics of central importance for science and technology studies (Lynch, 1998). These debates had little input from S&TS research, and they were restricted in scope, as the courts sought to devise practical and administrative solutions rather than to develop novel ideas and sophisticated arguments. The common themes, if not common ground, between particular court cases and research in the history, philosophy, and social studies of science present us with a fascinating, if difficult and challenging, phenomenon. To put a name on it, we may call it “legal metascience”: legally embedded conceptions of the general nature of science, and of the scientific status of particular practices and forms of evidence.

A study of legal metascience cuts both ways: it can deploy insight from philosophy, history, and social studies of science to analyze and criticize legal discourse about science (Jasanoff, 1991; 1992), but it also can lend critical insight into some of the conceptual problems that come into play in S&TS debates about science and its place in modern societies (Lynch, 1998; Lynch & Cole, 2005). The very themes of controversy, consensus, and closure, which historians and sociologists address in studies of technical controversy, are explicit topics of deliberation and debate in courtroom hearings about the admissibility of DNA profile evidence. The U.S. courts have codified specific standards for assessing novel forms of expert evidence. As we elaborate in interlude B, the “general acceptance” standard, which was articulated in a federal court decision Frye v. U.S. (1923), presents a judge with the social-historical task of deciding whether or not a novel form of (alleged) scientific evidence is generally accepted in the relevant fields of science. In its landmark decision Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), the U.S. Supreme Court advised federal courts to use a broader set of standards when assessing the reliability of expert scientific evidence, but many state courts (p.48) continue to use the Frye standard. Criminal court systems in other nations place less stress on admissibility hearings, but debates about the “scientific” status of forensic evidence also arise in trials and appeals, as well as in science advisory panel inquiries. Our study of these debates allows us to treat controversy, consensus, and closure not only as substantive phases in the social history of innovation, but also as themes that are used by key players who produce and assess the current state of controversy and closure.

Admissibility Hearings and the Unraveling of DNA Evidence

Two landmark U.S. Supreme Court rulings in the 1990s, Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), and Kumho Tire Co. v. Carmichael (1999), articulated standards for the admissibility of evidence in U.S. federal courts. The Supreme Court's conception of science directly influenced U.S. federal courts and many state courts during the DNA fingerprinting controversy, and it had less direct influence on courts in states (and even other countries) that do not formally adhere to U.S. federal guidelines for admissibility.10 However, many of the U.S. cases discussed in this book occurred before the Daubert ruling, or occurred in state courts that continued to use variants of the Frye “general acceptance” standard after 1993 to assess the admissibility of expert evidence. Related questions about the “expert” and “scientific” status of evidence are also aired in trial and appeal courts, not only in the United States but also in other countries such as the United Kingdom. In the remainder of this chapter we focus on the emergence of controversy in connection with the most notable admissibility hearing in the history of DNA profiling: the 1989 case New York v. Castro. First, however, we provide a sketch of some earlier cases.11

DNA fingerprinting was used in criminal investigations soon after the published announcement of its invention by Alec Jeffreys (who was later (p.49) knighted for the achievement) and associates at the University of Leicester, England (Jeffreys et al., 1985b). In 1986 and 1987, Jeffreys assisted a police investigation of the notorious “Black Pad murders”—two rape-murder cases in nearby Leicestershire villages.12 The first of the murders took place in 1983 along a footpath in a location known as “Black Pad” in the village of Narborough, and the second occurred in 1986 in the village of Enderby. Both involved the rape and strangulation of teenaged girls by an unknown assailant. The investigation was chronicled by crime writer Joseph Wambaugh in his book The Blooding (1989), the title of which refers to the mass screening of “voluntary” blood samples taken by the police from thousands of men in the local area. A mentally imbalanced kitchen porter was initially suspected of the murders, and even gave an ambiguous confession, but he was excluded when his DNA evidence did not match the crime samples. A key part of the story was that Colin Pitchfork, who eventually was convicted of the crime, initially evaded detection by submitting a blood sample given to him by a friend.

In 1987, DNA evidence was used in the Florida murder trial of Tommie Lee Andrews (see Andrews v. State, 1988), and many other cases soon followed. The defense in Andrews and some other early trials such as New Jersey v. Williams (1991) challenged the admissibility of DNA evidence, but all of these early challenges were unsuccessful.13 In some (p.50) of the early trials, the prosecution mobilized an impressive roster of experts, ranging from prominent molecular biologists, who endorsed the scientific status of the techniques, to forensic case specialists, who performed and supervised laboratory work for Cellmark, Lifecodes, the FBI laboratories, and other forensic organizations.14 The defense in many of the early trials and admissibility hearings did not call any experts. According to Neufeld & Colman (1990: 25), in some instances, the presiding judge refused to authorize funds to retain expert witnesses for the (court appointed) defense: “A critical factor in the defense's successful challenge was the participation of several leading scientific experts—most of whom agreed to testify without a fee.”15

Neufeld and Colman also mention that even when defense counsel were able to use expert witnesses, they found it difficult to find experts who would agree to appear: “The defense counsel in one case explained that he had asked dozens of molecular biologists to testify but all had refused. Interviews with some of the scientists revealed that most of them, being familiar with scientific research involving DNA typing, assumed the forensic application of the technique would be equally reliable” (Neufeld & Colman, 1990: 24). The assumption of reliability, and even infallibility, was encouraged by early statements by Jeffreys stressing that the technique (referring to the multilocus probe technique—see interlude A) produced near certain results: “The pattern is so varied (hypervariable) that any particular combination of the segments is as (p.51) unique as a fingerprint”16; “Suppose we could test a million people every second. How long would it take to find one exactly the same? The answer is, the universe itself would die before we found one the same. It is simply an incomprehensible number” (Jeffreys, quoted in Grove, 1989). A Home Office spokesman stated, “The procedure is very complicated but it provides the scientists with a DNA fingerprint which has been shown to be specific to a particular individual.”17 Media statements through the 1980s also used the analogy with fingerprints to stress the (near) certainty of individual identification: “the perfect fingerprint: unfakeable, unique, and running in families”18; “This is the most important innovation in the fight against crime since the discovery of fingerprints.”19

The first report of the National Research Council noted, in retrospect, that “in the publications in 1985 by Jeffreys and colleagues, the term ‘DNA fingerprint’ carried the connotation of absolute identification. The mass-media coverage that accompanied the publications fixed in the general public's minds the idea that DNA typing could be used for absolute identification. Thus, the traditional forensic paradigm of genetic testing as a tool for exclusion was in a linguistic stroke changed to a paradigm of identification” (NRC, 1992: 27). Another retrospective account also noted the early emphasis on error-free procedure and virtually certain frequency estimates:

In the popular mind the test became confused with the mapping of human genes, whereas in fact it probes only a handful of points on the chromosomes and ones which have no known function in determining physical makeup. The often faint, fuzzy, and distorted bands produced on autoradiographs were likened to the precise and unambiguous patterns of supermarket bar-codes. The chances of an innocent, coincidental match were touted at figures as low as 738,000,000,000,000 to one. The process was stated to be incapable of yielding a false match. (McLeod, 1991: 583)

(p.52) In the first few years after the introduction of DNA typing, defense lawyers were ill equipped to cross-examine the prosecution's experts on the subject, and to challenge the extraordinary frequency estimates they gave.20 In Andrews v. State (1988) the defense challenged the admissibility of the DNA evidence, but all of the expert witnesses who testified during the pretrial admissibility hearing were called by the prosecution. The defense attorney cross-examined the expert witnesses, but the questions often seemed ill informed. For example, in the cross-examination of Lifecodes witness Michael Baird, the defense attorney asked open-ended questions that relied upon Baird to specify possible problems, and did not press him with specific questions about documented sources of error or doubtful population genetic and statistical assumptions. Not surprisingly, Baird did not come to the aid of his interrogator.


Q. Are there things other than the pH and the conductivity by which the reagent can cause the test to go afoul?

A. Those are the basic points that need to be in place in order for the test to work.

Q. Would any type of foreign substance contamination within the substance make a difference?

A. Not in our experience.21

The cross-examiner sought, rather than used, knowledge about how the procedures in question worked, and the questions sometimes pursued contingencies (such as variations in voltage applied to gels) that seemed not to trouble the witness. The questions sometimes furnished the witness with the opportunity to expose the questioner's ignorance, such as when Baird corrected the defense attorney for assuming that the restriction fragments used in Lifecodes' analysis are from coding regions of DNA:


Q. Do you know which ones they are? Is number one, for example, a predisposition for diabetes and number two blue eyes or can you tell us just that?

A. These probes do not recognize anything that is understandable in terms of those, those kinds of physical traits.

(p.53) Q. All right.

A. They are called, you know, anonymous DNA regions.

When attempting to discredit prosecution witnesses with summary arguments, the defense made general ascriptions of vested interest to scientists, such as a prominent molecular biologist from MIT who testified for the prosecution:

I would suggest by that while Doctor Houseman's credentials are impressive, to say the least, that he is not a totally dispassionate, totally disinterested member of the scientific community and may well have a career interest in having this test determined to be reliable by coincidence, since he also draws his paycheck by virtue of doing five to ten of these a week. And if the test were not found to be reliable, he might well suffer some career damage from that. (Andrews v. State, 1988: 66)

This hypothetical argument, stressing the lack of total disinterest on the part of the witness, was easily rebutted by the prosecutor.

The court heard from an independent witness from MIT, who I doubt seriously has any true vested interest in the outcome of this case. I don't think that his paychecks or his position on the faculty at MIT since 1975 would be severely damaged if this gets into evidence as Mr. Uhrig suggests. (67)

The defense brief in another early case (New Jersey v. Williams, 1991: 5) deployed an even more global use of an interest argument to discredit the prosecution experts' testimony about PCR (the defense called no expert witnesses):

The prosecution called nine (9) witnesses. All were qualified as experts in various fields of molecular biology, microbiology, genetics, immunology, population statistics, polymerase chain reaction, forensic serology, forensic science, forensic biology, DNA molecular biology etc. Each of these witnesses was infirm either by reason of close association and economic and professional reliance upon Cetus [Corporation] and the test in particular or they were not competent to give an opinion in the forensic context of PCR.

The defense attorney associates the fact that the witnesses were recognized as having specialized knowledge about PCR with a vested interest (p.54) in promoting a corporate product (which at the time was held under patents assigned to Cetus Corporation). Then, to dismiss the testimony by witnesses with academic credentials, the attorney adds that they were not competent to give an opinion about the forensic context of use. Referring to another notable witness—Henry Erlich of Cetus Corporation—the defense attorney associated the scientist's weighty curriculum vitae with his expensive suit (conveying a distinctive sense of “lawsuit,” in an attempt to wed class resentment with suspicion of expert authority): “If a juror cannot quite understand allele drop-out or mixed samples, the issue should not be admitted because Dr. Erlich wears a five hundred dollar suit and has a CV four pounds in weight.”22

Such arguments, which are commonplace in trials involving expert evidence, are variants of what S&TS scholars have called “interest arguments,” and the Williams attorney also invokes the word “context” in a familiar way to mark specific organizational differences in the configuration of an innovation. Attributing seemingly “objective” evidence to specific social interests and contexts is a well-known explanatory strategy in the sociology of scientific knowledge (see, for example, Barnes, 1977),23 but contrary to the general explanatory aims of the sociologist of knowledge, the attorneys in these early cases attempted to undermine the credibility of particular claims. Interest arguments used in a particular court case can be effective, depending upon the salience of the interests in question and the jury's receptivity to the attorney's line of attack. In the above instance, however, the attorney's argument did not specify just how the witness's alleged interest in promoting the technique biased the specific evidence he presented. Indeed, the attorney was reduced to complaining that Dr. Erlich's evidence was incomprehensible, and he attempted to transform the witness's CV from being a record of impressive expert credentials to being evidence of vested interests.

    (p.55) Not only did defense attorneys expose their ignorance in early cases; judges, too, sometimes fared poorly when they waded into dialogue with expert witnesses. During the admissibility hearing for New Jersey v. Williams (1991), the judge (the Court in the transcript below) played the part of a befuddled yet authoritative Simplicio confronted by a savant (Edward Blake, appearing for the prosecution) in a Galilean dialogue.24


  THE WITNESS: The PCR product is evaluated again with a test gel, to see whether or not this 242 base pair DNA fragment has been produced in the sample.

  THE COURT: Now when you say—when you reduce it to its pure form, it is about a drop.

  THE WITNESS: Well, one has about a drop of fluid. Now—

  THE COURT: Of pure DNA?

  THE WITNESS: No, no, no. No, no, no. This is, perhaps, the thing that is confusing the Court. The court apparently has the idea that you can see molecules. You can't see molecules. But you can test for their consequence. You simply—one has the idea that you have one of these cocktails. You have one of these cocktails and there is a lid on this thing. A little cap and that's probably about one hundred times larger than what we have. We have this fluid here and we stick it in a thermal cycler and after 30 or 40 cycles the stuff comes fuming out and all of a sudden your laboratory is taken over by these DNA molecules. That's not what we are talking about here, Judge. It is not like—it is not like in one of these things you see in science fiction movies. (43ff.)

  THE COURT: If I were to look at that test tube at the beginning and then look at it at the end of three hours, would I see anything?

  THE WITNESS: No.

  THE COURT: Would it change its color?

  THE WITNESS: No.

  THE COURT: It would be either—I'd see nothing different?

  (p.56) THE WITNESS: You'd see nothing different.

  THE COURT: But you would see something inside.

  THE WITNESS: No. I could show you how to visualize the consequences of what has happened in those three hours. And the way that you visualize the consequence of what has happened in the three hours is by taking a little bit of that fluid out and running a test gel on it. And when you run a test gel on a little bit of the fluid that is taken out, you will either see or not see a fragment of DNA that was not present in the fluid before that is of the size of this gene that is being amplified.

  THE COURT: When you say see, you mean with your own eyes?

  THE WITNESS: Yes.

  THE COURT: Not with the use of microscopes.

  THE WITNESS: I don't mean see with your own eyes in the sense you can see a molecule and you can sit there and count one, two, three, four.

  THE COURT: This is where I am having my trouble. How can you see something that you don't see?

  THE WITNESS: How do you know something is separating if you can't see it? Because, Judge, you are asking how do the tools of all science work in general when you ask a question like that, and the way you see it is with some technical procedure that allows you to see the consequence of the molecule with a particular set of properties. (46)

    This sequence could be described as a tutorial on how to use the verb “to see” in a particular technical context.25 The judge is conspicuously stuck in a conception of “seeing” that is tied to naked-eye visibility, whereas the expert witness, Edward Blake, deploys the verb in reference to analytical techniques that produce visible displays that implicate molecular processes that are well below the threshold of naked-eye perception. In the latter sense, “seeing” is bound up with demonstrating what is implicated by material evidence. Earlier in his tutorial, Blake referred to what was visibly shown by exhibits (which were not included in the transcript). From what he said, it is clear that the exhibits were schematic diagrams, much like those that are commonly presented in summary accounts of DNA profiling (see fig. A.1).26 The cartoon convention exhibits DNA with the standard icon, showing it being extracted from the initial (p.57) samples shown at the start of the scheme. The judge evidently mistook the visual demonstration to be a reference to the relative size of the molecular object in question.
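    Blake's point that nothing visible changes in the tube can be given a rough quantitative gloss. The following is a back-of-the-envelope sketch with assumed figures (the template copy number, amplification efficiency, and constants are illustrative, not numbers from the case): even ideal doubling over thirty thermal cycles turns a hypothetical 100,000 template copies of a 242 base pair fragment into roughly 10^14 copies, whose total mass is still only a few tens of micrograms of colorless DNA dissolved in a drop of fluid.

```python
# Back-of-the-envelope sketch of PCR yield (assumed inputs, ideal doubling).
AVOGADRO = 6.022e23  # molecules per mole
BP_MASS = 650        # approximate grams per mole per double-stranded base pair


def pcr_yield(template_copies, cycles, amplicon_bp, efficiency=1.0):
    """Return (copies, grams) after idealized exponential amplification."""
    copies = template_copies * (1 + efficiency) ** cycles
    grams = copies * amplicon_bp * BP_MASS / AVOGADRO
    return copies, grams


copies, grams = pcr_yield(template_copies=1e5, cycles=30, amplicon_bp=242)
print(f"{copies:.2e} copies, about {grams * 1e6:.0f} micrograms of DNA")
```

On these assumptions the run yields on the order of 10^14 copies but well under a tenth of a milligram of DNA, far too little to alter the look of the fluid; hence the witness's insistence that the amplified product can be “seen” only through a test gel.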

    In other early cases, courts, and even defense attorneys, took expert witness statements at face value, apparently because they had no access to contrary information. Eric Lander (1989: 505) quotes a Lifecodes scientist, Kevin McElfresh, who testified in Caldwell v. State (1990)—a death penalty rape-murder trial—that declaring a match is a “very simple straightforward operation … there are no objective standards about making a visual match. Either it matches or it doesn't. It's like you walk into a parking lot and see two blue Fords parked next to each other.” Lander presents this statement as an example of misleading testimony that courts accepted without effective challenge.

    Castro and the “DNA Wars”

    It was not until 1989, in the New York murder trial of José Castro, that the first successful challenge to the admissibility of DNA evidence occurred. Before then, DNA evidence had been used in hundreds of U.S. trials,27 and it was entrenched in many other national court systems. Castro was soon followed by many other admissibility challenges, some of which were successful, and it touched off a controversy that was dubbed, with a dose of journalistic hyperbole, the “DNA wars.”28 Difficulties also arose in the United Kingdom in connection with a series of Manchester-area rape cases, and also in an Australian case (McLeod, 1991). For the most part, the DNA wars took place in two venues: criminal courts, particularly admissibility hearings in U.S. courts, and the pages of the science press, especially Science, Nature, and lesser science magazines such as The Sciences. The two venues were deeply linked: news reports and (p.58) articles in the science press focused on forensic evidence in court cases, and these reports and articles fed back into courtroom dialogues and judicial rulings.

    The pretrial admissibility hearing in Castro was exceptional, because the court-appointed defense attorney came prepared with an arsenal of expertise. The attorney sought the help of Barry Scheck and Peter Neufeld—two young attorneys who later, through their involvement in the O. J. Simpson case and the Innocence Project, became famous for their expertise with DNA evidence. In the late 1980s Scheck and Neufeld were just beginning to learn about such evidence through informal seminars on the subject for defense lawyers.29 They, and a group of other defense lawyers, had become suspicious about the rapid acceptance of forensic DNA evidence and its unqualified presentation in the courts, and their suspicions were shared by a few biologists as well. Because of its timing and its location in New York City, the admissibility hearing in New York v. Castro turned into a highly visible forum for staging a challenge. Scheck and Neufeld prevailed upon Eric Lander, who had recently become concerned about the way molecular biology was being used in forensic science, to give evidence for the defense.30 Other molecular biologists also signed on, creating a rare situation in which the defense was able to counter the prosecution's expert firepower.

    The Castro case arose from the stabbing deaths of twenty-seven-year-old Vilma Ponce and her two-year-old daughter in 1987. José Castro, described as a local handyman, was arrested on the basis of an eyewitness report, and a small bloodstain was recovered from his wristwatch. Expert witnesses working for Lifecodes (one of the first private companies to perform forensic DNA analysis) testified that they extracted 0.5 μg of DNA from the bloodstain. Using the single-locus probe method, Lifecodes analyzed that sample and compared it with samples from the two victims. Probes for three RFLP loci were compared, along with a probe for the Y chromosome locus. According to Lander (1989: 502), “Lifecodes issued a formal report to the district attorney … stating that the (p.59) DNA patterns on the watch and the mother matched, and reporting the frequency of the pattern to be about 1 in 100,000,000 in the Hispanic population. The report indicated no difficulties or ambiguities.” Lander then went on to describe “fundamental difficulties” with the report. One problem had to do with the tiny amount and degraded quality of the DNA extracted from the bloodstain (or, as Lander describes it, “blood speck”) on the watch. This resulted in an extremely faint trace on the autoradiogram, and the DNA in the stain may also have been contaminated with DNA from bacteria. As Lander recounts, far from discounting the evidence, Lifecodes used the faint quality and potential contaminants as an interpretive resource for preserving the claimed match with the evidence from the mother.31

    Figure 2.2 presents autoradiographic evidence for one RFLP locus marked by a radioactive probe. The three lanes compare results from the analysis of the blood of the two victims (M = mother; D = daughter) with the results from the speck of blood on the defendant's watch (W). According to Lander, the prosecution witness (Michael Baird, Lifecodes' director of paternity and forensics) who presented the evidence “agreed that the watch lane showed two additional non-matching bands, but he asserted that these bands could be discounted as being contaminants ‘of a non-human origin that we have not been able to identify’” (Lander, 1989: 502). Lander went on to identify several other problems with the laboratory procedures and statistical analysis. During the lengthy admissibility hearing, expert witnesses on both sides agreed to meet without the lawyers present, because they were appalled by some of the evidential problems that emerged, and no less appalled by the disrespectful way the lawyers handled the questioning (Roberts, 1992; Thompson, 1993: 43). A “consensus about lack of consensus” came out of this ad hoc meeting. According to one prosecution witness, Richard Roberts, “We wanted to be able to settle the scientific issues through reasoned argument, to look at the evidence as scientists, not as adversaries” (quoted in Lewin, 1989: 1033). The two prosecution witnesses who took part in this meeting later retook the stand and recanted their earlier testimony supporting Lifecodes’ determination of a DNA match (Thompson, 1993: 43). The meeting also resulted in a recommendation to the National (p.60)


    Figure 2.2. Single-locus probe, from New York v. Castro. Reprinted from Lander (1989: 503).

    Research Council—the research arm of the National Academy of Sciences—to investigate forensic uses of DNA typing.

    During the extended pretrial hearing in Castro, the defense witnesses presented an array of problems that were not mentioned in the initial Lifecodes report. It seems likely that had Peter Neufeld and Barry Scheck not become involved, and had the defense not called in Lander and others to open up questions about controls, interpretation, sample custody, statistical representation, and so forth, the evidence would have been admitted. After the hearing, the judge excluded the DNA evidence, but in a qualified way. While acknowledging extensive and “fundamental” (p.61) problems raised by many of the expert witnesses, the judge ruled that forensic DNA profiling “can produce reliable results,”32 but that in this case the testing laboratory failed to use “the generally accepted scientific techniques and experiments.” The credibility of the technique itself was thus preserved, while blame was laid at the door of its implementation.33 As it turned out, Castro pled guilty even though the DNA evidence was excluded; the fate of the DNA evidence, however, received far more publicity.

    Despite the limited scope of its victory, the defense, aided by Lander and other expert witnesses and consultants, opened up a chamber of methodological horrors to disclose a litany of problems.34 Where DNA evidence had been presented, and for the most part accepted without effective challenge, it now seemed to unravel. Defense lawyers such as Scheck, Neufeld, and William Thompson held meetings and exchanged transcripts of admissibility hearings in which DNA evidence was effectively challenged.35

    The Castro hearing provided an object lesson for lawyers on how to attack DNA evidence. Defense lawyers began to learn more about DNA evidence and proponents no longer expected courts to accept their testimony without question. Even in cases in which the defense did not call expert witnesses, it was possible to use the record from Castro and other challenges to rebut the prosecution's witnesses. William Thompson described an amusing example of how an expert witness could be virtually transferred from one case to another:

    (p.62) [W]hat I did was I would pull quotes out of the transcripts or the published statements of these critical scientists and I would say to Bruce Weir [a prominent forensic analyst testifying for the prosecution], “Eric Lander says x, do you agree with that or do you not?” Or I would quote something that Lander said that sounded critical—“Do you agree with it or do you not?” And he'd say, that “Well, uh, I, I disagree.” So then I'd say that “At least with respect to this client there's disagreement in the scientific community, is there not?”36

    Publications by Lander (1989) in Nature and by Lewontin and Hartl (1991) in Science provided lawyers with authoritative sources they could cite to counter DNA evidence.37

    Lander (1989) and defense attorneys such as Neufeld, Scheck, and Thompson who took the lead in challenging DNA evidence developed two broad areas of criticism: first, elaborating an expansive array of technical problems with the collection, handling, and analysis of samples; and, second, citing a combination of statistical and population-genetic problems associated with quantifying and reporting upon the probative value of DNA matches.38 To this pair of problems, we can add a third that often came up in court cases and official evaluations: organizational and administrative problems.

    (1) Technical Problems and Contingencies.

    In Castro and other early cases, radioactive probes were used to mark polymorphic segments of DNA, and these were photographed on x-ray film. These techniques were often glossed with the acronym RFLP (restriction fragment length polymorphism). The autoradiographic images produced through these techniques displayed arrays of bands in adjacent lanes, which were visually inspected for matches. The “professional vision” (Goodwin, 1994) of forensic analysts was called into question during Castro and later cases, as defense attorneys and their expert witnesses made a point of the “subjectivity” of the work of handling samples, running equipment, and visually assessing the evidence.

    (p.63) When cross-examined about their practices, forensic scientists sometimes admitted to using methods to enhance the visibility of bands, both for analytic purposes and for showing evidence in court. Sometimes computer-enhanced images were used to produce “cleaner” images than those originally developed on autoradiographs, and computers also were used to analyze bands, which were sometimes said to “create a result where none was apparent before by, for instance, locating very light bands” (Office of Technology Assessment, 1990: 119).

    In the Castro case, when questioned closely about their procedures, Lifecodes analysts acknowledged that, in order to support their determination that there was a match, they found it necessary to discount particular discrepancies and to enhance the visibility and alignment of bands. Sometimes such enhancements were as crude as darkening a faint band with a marking pen, or hand-tracing an autoradiograph onto an acetate slide. The Castro defense argued that the forensic analysts assumed the existence of the match they set out to test, and treated discrepancies not as evidence of a mismatch, but as evidence of artifacts that partially obscured the match. For example, they could attribute such discrepancies to variations in the composition of a gel, or in the amount of electric current that “drove” samples in different lanes. When their ad hoc practices are considered at a more abstract level, the Lifecodes scientists faced a familiar dilemma in the history of experimental science:39 either they would have to throw out the evidence, despite an overall conviction that there was a match, or they would have to dismiss particular discrepancies in order to save the evidence. Weighing on both sides of the dilemma were considerations such as, on the one hand, the cost of freeing a murderer because imperfect evidence was discounted, and, on the other, prejudicing the case against the accused person by presuming evidence of guilt in the very analysis of that evidence. In response to criticisms in Castro and other early cases of ad hoc accounts of band shifting,40 forensic (p.64) organizations and scientific review panels began to develop standards for visually counting bands as aligned or not.

    Another common target of technical criticism was contamination, whereby “foreign” DNA from human or nonhuman sources (for example, bacteria) can mix with, or even replace, the “target” DNA in a sample. Several possible sources were mentioned in Castro and later cases, and virtually every step of the process, from collection and storage to transport and analysis, was cited as a source of possible contamination.41 Police and other agents with little or no training in laboratory procedures typically collect and handle criminal evidence samples, and defense attorneys often focused on the possibility that they could deliberately or inadvertently cross-contaminate evidence samples.

    The exercise of “subjective” (visual) judgment and the possibility of cross-contamination were among the most common themes in courtroom challenges, perhaps because they were relatively easy to grasp. However, many other possible sources of technical error, at virtually every stage of analysis, were aired during Castro and other contentious cases.

    (2) Statistics and Population Genetics.

    The most widely debated issues in Castro and other early cases had to do with the appropriateness of reference samples. In order to estimate the probability that a randomly chosen, unrelated individual's DNA profile would match a profile in evidence, it is necessary to specify the reference database. This is because the alleles marked with DNA probes occur at different rates, on average, in different human groups. Closely related people are more likely to share alleles—with identical twins being the limit case. DNA profiling was implemented for criminal investigations before extensive databases had been compiled. Early databases were cobbled together from blood (p.65) banks, volunteers from police forces, and other nonrandom samples, and they were divided into very rough “racial” subgroups. In early cases, databases were limited, especially for minority categories such as “Afro-Caribbean” (a category used in the United Kingdom).

    Questions about statistical estimation and population genetics were aired during Castro, and later became a subject of heated controversy in Science (the magazine). Lewontin and Hartl (1991; 1992) raised a thorny population genetic problem, which questioned the procedure of establishing allele frequencies for a general population and broad “racial” groups (or census categories): North American Caucasians, African Americans, Hispanics, Asians, and Native Americans in the United States; Caucasians, Afro-Caribbeans, and (South) Asians in the United Kingdom. Lewontin and Hartl argued that such groups were too crudely defined to accurately measure the probability of finding matching alleles in segregated urban areas, American Indian tribes, or isolated rural villages. This is because persons in such groups are likely to be more closely related to one another than to randomly chosen members of the general population or even of a broadly defined “racial” subgroup. Given patterns of residential segregation, and the fact that for street crimes the most likely suspects other than the defendant tend to live in the same local area and be similar in age, stature, ethnicity, and so forth, persons in the pool of most likely suspects would be more likely to share alleles than would randomly chosen persons from a general population or population subgroup. The problem of tailoring probability estimates to the demographics of the suspect pool in a particular case gets worse when one assumes that a close family member (such as a brother, or a “hidden half-brother” covertly sired by the suspect's father) is a possible suspect. In theory, at least, the relevant frequency estimates will vary with the constitution of the suspect pool, which, in turn, interacts with the substantive details of the case and assumptions about neighborhoods and crime patterns.
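    The force of the relatedness argument can be given a simple numerical gloss. The sketch below is hypothetical: the allele frequencies are invented, and the sibling formula is the standard identity-by-descent result from population genetics rather than anything presented in these cases.

```python
def unrelated_match(p, q):
    """Hardy-Weinberg frequency of a heterozygous genotype (alleles at freqs p, q)."""
    return 2 * p * q


def sibling_match(p, q):
    """Probability that a full sibling shares that heterozygous genotype
    (standard identity-by-descent formula)."""
    return (1 + p + q + 2 * p * q) / 4


p, q = 0.05, 0.10  # invented allele frequencies for one locus
print(f"unrelated person: {unrelated_match(p, q):.3f}")  # 0.010, i.e., 1 in 100
print(f"full sibling:     {sibling_match(p, q):.3f}")    # 0.290, i.e., roughly 1 in 3
```

On these invented figures, a sibling is nearly thirty times more likely than a random unrelated person to share the genotype at a single locus, which is why frequency estimates keyed to a general population can drastically overstate the probative value of a match when close relatives belong to the suspect pool.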

    Further debates concerned the common practice of generating forensic estimates by multiplying the estimated population frequency of each matching band (probe). Critics—most notably Lewontin and Hartl (1991; 1992)—questioned the Hardy-Weinberg and linkage equilibrium assumptions under which each matching allele is treated as statistically independent, and questioned whether very low population frequencies can be reliably calculated from relatively small samples.
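    The disputed multiplication (the so-called product rule) can be sketched in a few lines. This is a minimal illustration; the allele frequencies below are invented, not drawn from any forensic database.

```python
from math import prod


def het_freq(p, q):
    """Hardy-Weinberg frequency of a heterozygous genotype with allele freqs p, q."""
    return 2 * p * q


# invented allele frequencies at three single-locus probes
loci = [(0.05, 0.10), (0.08, 0.12), (0.03, 0.20)]
per_locus = [het_freq(p, q) for p, q in loci]

# the contested step: multiplying across loci assumes statistical independence
profile_freq = prod(per_locus)
print(f"per-locus frequencies: {[round(f, 4) for f in per_locus]}")
print(f"profile frequency: {profile_freq:.2e} (about 1 in {1 / profile_freq:,.0f})")
```

If independence fails, whether because of population substructure or relatedness within the suspect pool, the true frequency of the full profile can be substantially higher than the multiplied estimate; that, in brief, was the crux of Lewontin and Hartl's objection.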

    (p.66) (3) Organizational Contingencies.

    As noted earlier, evidence samples that forensic laboratories analyze do not originate in the laboratory: they are collected at crime scenes by police employees, many of whom have little or no training in biology or forensic science. The samples commonly are transported in police vehicles (or the vehicles of private services contracted to the police), and stored at police facilities before being housed in a laboratory. If one extends the contingencies of the laboratory to include the sources of the material samples analyzed, then “laboratory error” can arise at any point in the continuum of practices (or “chain of custody”) from crime scene to court. Chains of custody are shot through with organizational contingencies, and they are addressed through bureaucratic and administrative work, including work performed at crime scenes by police and pathologists, and in laboratories by administrators and staff scientists.

    The summary term “preanalytical error” is sometimes used to describe the array of mistakes that can result in the possible spoilage, mislabeling, and misplacement of evidence samples. One can even extend the preanalytical sources of problems, if not errors, to the condition of evidence samples at the scene of the crime: conditions such as burial in damp earth, freezing and thawing, baking and irradiation under the sun, and contamination by animal deposits, or bacterial or other microbial infestation. They also can include conditions under which samples are stored in a laboratory or police station.42

    A problem that emerged very early (and which resurfaced more recently in connection with a scandal about the handling of DNA samples in a Houston crime laboratory) involved mundane errors in the labeling and handling of samples.43 For example, in a case in Reading, England, the laboratory mixed up two samples and the suspect was convicted of rape on the basis of the analysis of another suspect's blood. At the time of the Castro case, and to some extent still, forensic organizations employed different local protocols, maintained proprietary secrets, and incompletely implemented recommended proficiency tests and controls (Thompson et al., 2003).

    In Castro, the defense charged that Lifecodes kept poor records of the probes and controls it used. The company's expert witnesses, when questioned about the identity of the control sample for the XY chromosome (p.67) test, first said the control was from HeLa cells (an “immortal cell line” named after Henrietta Lacks, the source of the lethal cancer from which the cell line was first developed in the 1950s), which was odd, since the profile showed no evidence of a Y chromosome. Then the control was traced to a male Lifecodes employee who, it was alleged, had an unusually short Y chromosome. Lander questioned this, and Baird (the Lifecodes witness) later testified that the source of the evidence was a woman identified via the resultant RFLP profile (which, Lander noted, was an odd way of treating a control sample). “The confusion had probably resulted from faulty recollections (by Baird and the technician) and faulty inferences (about the male scientist), but it underscored the need for meticulous record-keeping in DNA forensics, which may not originally have been so clear” (Lander, 1989: 503). As this quotation makes clear, mundane administrative practices (record-keeping, in this case) often framed the credibility of DNA data and interpretations of those data. And, as we shall see in chapter 7, recommendations for assuring the quality of administration often stood proxy for technical procedures and the credibility of analytical results.


    The problems discussed in this chapter barely scratch the surface of the often highly technical and circumstantial problems that were aired in court and in the science press in the late 1980s and early 1990s. Problems grew more numerous, and more technically complicated, the more that lawyers, expert witnesses, and critical analysts pursued them. Moreover, the pursuit of such problems, especially in the context of the adversary legal system, infused mundane police and laboratory protocols with monumental significance. The “deconstructive” operations of adversary interrogation peeled back the veneer of DNA matches and the impressive statistics that accompanied them, creating a set of problems that criminal justice organizations and science advisory panels set out to repair in the early 1990s.

    Despite the effectiveness of “deconstructive” strategies in some cases, DNA profiling continued to be used, for the most part effectively, in hundreds of cases in the United States and United Kingdom. And, within a few years after the Castro case, it became common to read that the controversy was over. By the late 1990s, it was widely assumed that (p.68) the most significant of the problems raised during the DNA wars of the early 1990s had been solved, due to technical and administrative efforts to standardize protocols, reorganize laboratory practices, and train personnel. The assumption that the most serious problems had been solved was an important element of the closure of the controversy, but we shall have occasion to revisit at least some of those problems in the aftermath of closure.

    Before delving into the closure of the controversy, and into the aftermath of closure, we shall examine specific themes—protocols, chains of custody, and the uses of statistical estimates—which were problematized in specific cases in the United States and United Kingdom. An appreciation of those themes and how they became problematic will enable us to better appreciate how the currently “unassailable” status of DNA evidence was achieved.


    (1.) A New York Times article on trends in the United States (Glaberson, 2001) reported a steep decline in the proportion of cases settled by jury trials in both civil and criminal courts.

    (2.) Kuhn was sparing in his comments on “Kuhnians” in social studies of science, and for the most part addressed criticisms arising from more traditional quarters in philosophy of science. In a conference presentation on “The road since ‘Structure,’” Kuhn (1991) makes a brief and disdainful remark about sociologists of science associated with the Edinburgh School “Strong Programme” in the sociology of scientific knowledge (e.g., Bloor, 1976; Barnes, 1977). This remark is not included in the published essay by the same title in Kuhn (2000).

    (3.) Collins has investigated two different phases of controversy, separated by twenty-five years (see Collins, 1975; 1985, chap. 4, for discussion of the early research, and Collins, 2004, for the later phase). The first phase, in the late 1960s and early 1970s, focused on Joseph Weber's claims to have detected gravity waves with a relatively simple apparatus. Weber's claims were widely dismissed by other physicists, though Collins argues that there was never a crucial test or a definitive disproof. The second phase involves a far more credible (and expensive) effort to detect gravity waves. The main publication of Pickering's study of controversies in particle physics is his Constructing Quarks (1984).

    (4.) Both the political sense of “representation” (speaking or acting on behalf of a community or group) and the referential sense of “representation” (a sign or picture standing for an object or idea) come into play here (Latour, 2004b; Pitkin, 1967: 221–22).

    (5.) Collins distinguishes a more coherent “core group” that emerges from the core set during the active period of controversy. Some participants in the controversy become increasingly marginalized as consensus develops among members of the core group—a group largely defined by such consensus—and thus become members of the “set” but not the “group.” Those at the very edge of the circle of participants (including some who may once have been central) may be denounced as cranks, purveyors of fraud, and pseudoscientists.

    (6.) Superficially, the target diagram resembles a schema developed by phenomenologist Alfred Schutz (1964a) to describe the structure of social relationships centered around an individual, with close intimates occupying the center, and expanding outward to less and less intimate associates and consociates, ending finally with complete strangers at the outer edge. The major difference is that Schutz's scheme is a map of personal relations from a first-person standpoint, while Collins's scheme is meant to describe a social network characterized by greater expertise at the center, and decreasing expertise as one moves outward. Borrowing from Schutz, we can imagine that a peripheral player in Collins's scheme can occupy the center of a network in which that player's scheme of relevance takes precedence over the technical skills of the core set that Collins identifies and privileges. And, if we endow this new scheme of relevance with institutional authority (rather than simple subject-relevance), we can get some idea of what is at stake in judicial administration of expert evidence.

    (7.) In later work, Collins and Robert Evans (2002) developed a conception of expertise—and of the study of expertise—that extends the notion of the core set to apply to broader public controversies. For a critical exchange on Collins and Evans's theory, see Jasanoff (2003), Wynne (2003), Rip (2003), and Collins & Evans (2003).

    (8.) Because of its public visibility, the DNA fingerprint controversy was more like the “cold fusion” affair, which drew massive international publicity for several months after Stanley Pons and Martin Fleischmann's announcement of the discovery (see Gieryn, 1992; Lewenstein, 1995; Collins & Pinch, 1998). One of the prominent questions during the cold fusion affair concerned which subfields of physics and chemistry were relevant for testing the contested claims. Nuclear physicists claimed the high ground in the hierarchy of science, and the press tended to accept their judgments. The fact that Pons and Fleischmann were chemists tended to count against their credibility when prominent physicists dismissed their discovery. Which scientists and fields of science counted as part of the core set was contingent upon public acceptance of the credibility of particular fields, laboratories, and spokespersons. It was not the case that a small set of experts who possessed the technical competence to examine and criticize Pons and Fleischmann's claims settled the dispute. Partly because of the low cost of the experimental apparatus, a large and confusing array of replications was attempted. The press, Internet newsgroups, and other media sources were used to monitor and disseminate the results, and the public mediation of the experiments and results fed into the performance of the experiments and the accountability of the results. See the introduction to the Cold Fusion Archive by Bruce Lewenstein, available at www.wpi.edu/Academics/Depts/Chemistry/Courses/CH215X/coldfusion.html.

    (9.) See Galbraith (1967) for the role of “technostructure” in industry, and Winner (1977) for a critical review of theories of technocratic politics.

    (10.) For example, even though New York continues to adhere to the Frye standard, the judge in New York v. Hyatt (a 2001 case involving fingerprint evidence discussed in Lynch & Cole [2005]) explicitly mentions the Daubert factors even while acknowledging that they do not formally apply. International influence is more difficult to trace, but our discussions with attorneys and forensic scientists in the United Kingdom indicated that they closely followed notable U.S. cases.

    (11.) For a much more detailed historical account of Castro and other early challenges to the admissibility of DNA evidence, see Aronson (2007).

    (12.) Also see Office of Technology Assessment (1990: 8) for a brief summary.

    (13.) Richard Charles Williams was indicted in 1983 and charged along with codefendant Thomas Manning with the 1981 murder of a New Jersey state policeman, Philip Lamonaco. According to the prosecution's brief, the defendants were both sitting in a blue Chevy Nova which had been stopped on Interstate Route 80 by Trooper Lamonaco. The prosecution alleged that Williams shot Lamonaco, but that the trooper was able to discharge his own weapon after being fatally wounded. The Nova was found abandoned a few hours later, and blood was recovered from the passenger's seat, headrest, and door panel. Ballistics evidence identified a gun that was recovered as the murder weapon, and other evidence indicated that it had been purchased by Williams on the same day as the murder. His fingerprints were found on items left in the car which also were purchased that day. Williams and Manning remained fugitives before being arrested in 1984 and 1985, respectively, and they were jointly tried in 1986–87. Tests for blood type and enzyme markers presented at the trial indicated that the blood in the Nova could have come from either defendant, but not from the victim. The trial resulted in a hung jury, and Williams was then tried separately from Manning in 1991. Prior to Williams's retrial, the prosecution commissioned a new set of tests on the blood samples. One test employed a newly developed method using the PCR DQ-alpha system, while others used older methods of blood analysis. The RFLP method was not used, because the blood samples taken from the Nova in 1981 were judged to be of insufficient quality.

    (14.) When DNA evidence was first introduced as evidence in criminal trials, it was not yet established which field, or fields, would furnish expert witnesses. In early cases, prosecutors called a combination of expert witnesses, ranging from spokespersons for forensic labs to “independent” experts from academic fields of molecular biology and population genetics. Aronson (2007: 42–54) provides a detailed account of the first admissibility hearing in which the defense called expert witnesses to challenge the prosecution: People of New York v. Wesley (1988). In that case, the presiding judge accepted the prosecution's experts from the various fields, but placed restrictions on the defense's experts (and specifically, on the testimony of molecular biologist Neville Colman) on the grounds that their expertise was not directly related to forensic DNA testing. Despite such restrictions, the case set a precedent for treating expertise in molecular biology and population genetics as relevant to the evaluation of DNA evidence.

    (15.) Both prosecution and defense can be deterred by the expense of DNA testing, but with the involvement of the FBI and the National Institute of Standards and Technology (NIST), and the allocation of large amounts of government funding to developing and testing DNA typing, forensic expertise is routinely available to prosecutors but only to defendants with sufficient funds and well-informed, energetic attorneys.

    (16.) Alec Jeffreys, quoted in the Times, London, 12 March 1985. In New York v. Wesley (1988), a similar statement was made by Michael Baird, a forensic scientist working for Lifecodes (the major competitor to Cellmark—the company with which Jeffreys was associated). Baird testified in direct examination “that the DNA patterns or prints that we get are very individualistic in that the pattern of DNA that you can obtain by using a series of DNA probes is as unique as a fingerprint” (quoted in Aronson, 2007: 46).

    (17.) Times, London, 27 November 1985.

    (18.) Economist, “Cherchez la gene,” January 1986: 68–69.

    (19.) M. McCarthy, quoted in the Times, 13 November 1987.

    (20.) See Aronson (2007: chap. 3) on the introduction of DNA analysis in the United States.

    (21.) Andrews v. State (1988).

    (22.) New Jersey v. Williams (1991).

    (23.) Interest explanations are both regularly used and regularly discredited in the sociology of scientific knowledge. The problem is connected with the “regress” problem: that the sociologist trades in a form of attribution that can easily be turned against her own account. This possibility of confusing (or the impossibility of separating) the discursive form of a social explanation from that of a social debunking strategy challenged the sociology of knowledge from the outset. Efforts to “strengthen” the sociology of knowledge by refusing to exempt science and mathematics from its purview remained vulnerable to the argument that “interest explanations” involved an inherent contradiction when familiar argumentative tropes were mobilized in a general, nonevaluative, program of explanation (Woolgar, 1981; Gilbert & Mulkay, 1984; Lynch, 1993).

    (24.) This sequence of trial transcript is also featured in Jordan (1997) and Jasanoff (1998). Similar expressions of judicial naïvety are quoted by Aronson (2007: 45) from the transcript of the admissibility hearing in New York v. Wesley (1988). In this case, Baird was testifying for the prosecution, and the judge asked him, “The term genes, what is the relationship with the term DNA?” and “What is a chromosome, Doctor?”

    (25.) See Coulter & Parsons (1991) for an illuminating discussion of the varied uses of verbs of visual perception.

    (26.) See Halfon (1998) for an analysis of an instance of this type of schematic diagram.

    (27.) Aronson (2007: 56) reports that estimates of the number of U.S. cases included an indefinite, but large, number of guilty pleas as well as at least eighty trials in which DNA evidence was admitted.

    (28.) Thompson (1993: 23) traces the “DNA war” or “wars” metaphor to a combination of press and participant commentaries. For example, science journalist Leslie Roberts (1991: 736) quoted John Hicks, head of the FBI Laboratories, as saying that the debate about DNA typing was no longer a “search for the truth, it is a war.” Although coincident with various other so-called culture and science “wars,” this particular struggle did not focus on trends in universities and the “culture industry,” but was a dispute among scientists played out in the science media and the criminal courts.

    (29.) Interview with Lynch and Cole, 21 May 2003. For further information about the situation with defense lawyers, and Scheck & Neufeld's involvement in challenging prosecution uses of novel forensic evidence, see Aronson (2007: chaps. 2 and 3). Also see Neufeld & Colman (1990); Neufeld & Scheck (1989); and Parloff (1989).

    (30.) Aronson (2007: 60) describes how Neufeld first met Lander at a meeting in Cold Spring Harbor, New York, and later persuaded him to review the evidence from Lifecodes that the prosecution put forward.

    (31.) A similar interpretive strategy was used by prosecution witnesses in the Regina v. Deen case discussed in chapter 5, and during an interview (18 October 2005) William Thompson also informed us of more recent cases, using very different techniques, in which a variant of it was deployed by prosecution experts.

    (32.) New York v. Castro (1989), quoted in Thompson (1993: 44).

    (33.) Although it was criticized for being too stringent with some of its recommendations, the first NRC (1992) report also affirmed the credibility of what the Castro court called “the generally accepted scientific techniques,” while criticizing the way the techniques were administered in forensic laboratories. Such partitioning is characteristic of official reviews of technological controversies (Perrow, 1984): criticism is ascribed to human error, or in this case corporate error, in a particular case, while preserving systematic features of the technology which, of course, would require a more massive and expensive effort to change. See chapter 9 for a striking instance of this sort of error account in connection with latent fingerprint examination.

    (34.) Steve Woolgar (1988: 30ff.) coined the expression “methodological horrors” to describe highly general interpretative problems both in empirical science and empirical science studies. Here, we use the term in a more particularized way to describe a field of contingencies and possible sources of error that adversary attacks on forensic DNA evidence have raised.

    (35.) William Thompson, interview (May 2003). See Aronson (2007: 78–79).

    (37.) Prosecutors also could turn to the pages of Science, as Lewontin and Hartl's arguments were rebutted by Chakraborty & Kidd (1991) in the same issue of Science, leading to further complaints about the way the Science editors commissioned the rebuttal after Lewontin and Hartl had submitted their article (see Roberts, 1991).

    (38.) Neufeld & Colman (1990) and Thompson & Ford (1988; 1991) also collaborated with scientists in early publications on the problems with DNA evidence.

    (39.) Although ad hoc practices are commonly associated with unsound scientific practice, a deep question remains as to whether they are a necessary part of the local, judgmental work of examining data. In a study of sociological efforts to code data, Garfinkel (1967) suggested that ad hoc practices were inescapable resources for making sense of, and classifying, data. Holton's (1978) analysis of Robert Millikan's oil drop experiment is the canonical study of an experiment that succeeded because the experimenter violated the canons of experimental practice by presuming the correct result when analyzing the data from the experimental runs that demonstrated that result. Daniel Kevles's (1998) study of the Theresa Imanishi-Kari/David Baltimore affair documents how badly things can go wrong when outsiders appeal to popular conceptions of experimental practice when investigating a charge of fraud.

    (40.) For example, band shifting was a major issue in Maine v. McLeod (1989). See Jasanoff (1992: 32–33).

    (41.) The possibility of cross-contamination became even more acute when PCR-based techniques were widely implemented. A 1990 report by the Office of Technology Assessment (OTA) noted that with techniques using PCR, cross-contamination was especially problematic: “Even flipping the top of a tube containing DNA can create an aerosol with enough sample to contaminate a nearby tube” (OTA, 1990: 69). The report added, “Should a suspect's sample accidentally contaminate a questioned sample containing degraded (or no) DNA, subsequent PCR amplification of the questioned sample would show that it perfectly matched the suspect” (70). (This possibility later provided a basis for argument in the O. J. Simpson trial.)

    (42.) See McLeod (1991: 585, note 15).

    (43.) See chapter 8.