It’s Science, Jim, but not as ... oh no, it’s not science after all.*
As the Appeal Court recognised in R v Holdsworth [1] (one of the ‘shaken baby’ cases in which we were instructed by the defence), “today’s scientific orthodoxy may become tomorrow’s outdated learning”. Perhaps the tide is now beginning to turn in favour of those who, like us, have raised serious doubts about some of the ‘science’ being advanced in courts to obtain prosecutions. Any satisfaction with this turn of events is dampened by the tragedy that most of this ‘new’ knowledge is not new to science at all; it has simply been an inconvenience to what has been termed the ‘unfettered’ opinion of forensic practitioners. As a critical appraisal of forensic practices pointed out in 2009:
“The bottom line is simple: In a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem.”[2]
Now yet another authoritative scientific report has concluded that many of the techniques used by forensic laboratories have no established scientific basis; the courts have simply taken the laboratories’ word and thereby enabled such bad practice. Indeed, reporting scientists of at least one English forensic science provider cite an Appeal Court judgement (R v Dlugosz [3]) to support what is now acknowledged to be a scientifically unfounded practice: subjective judgement regarding the potential contributors to DNA mixtures. In R v Reed & Anor [4] in 2009 the court endorsed the prosecution’s expert opining on how DNA came to be on items, a claim since undermined by scientific reviews and opinion.
The President’s Council of Advisors on Science and Technology (PCAST) is an advisory group of leading scientists and engineers, appointed by the President of the United States to provide scientific advice. In September 2016, PCAST released a critique of several methods used in ‘forensic science’, including the interpretation of mixed DNA profiles [5]. The findings should cause serious concern to everyone involved in the UK criminal justice system, although we suspect that some will try to wave them away on the basis that American science must somehow be different to UK science. Having worked, and continuing to work, on cases in the UK and the USA, we can say with confidence that this is an international problem, not one confined to the USA.
The PCAST report contains some worrying, if unsurprising, conclusions:
“expert witnesses have often overstated the probative value of their evidence, going far beyond what the relevant science can justify.”
This is no surprise to any scientifically literate observer of courts, but is apparently not so obvious to many influential participants, scientific and legal.
As the rest of the world appears to be realising that all is not well with forensic practices, and that the reliability of results emanating from laboratories needs better review, we have watched courts in the UK move further along a path that appears to endorse unscientific, and occasionally anti-scientific, opinion, particularly in matters involving ‘experience’ and ‘subjective’ judgement. We have written on these topics before. PCAST echoes our previous concern:
“Subjective methods require particularly careful scrutiny because their heavy reliance on human judgment means they are especially vulnerable to human error, inconsistency across examiners, and cognitive bias. ...
We note, finally, that neither experience, nor judgment, nor good professional practices (such as certification programs and accreditation programs, standardized protocols, proficiency testing, and codes of ethics) can substitute for actual evidence of foundational validity and reliability.”
We find it abhorrent that, in the light of so much evidence that these approaches are unscientific, there are so-called scientific firms and individual scientists who are prepared to cite legal judgements as support for such ‘subjective opinions’ rather than remain within the limits of science. As PCAST noted:
“Judges’ decisions about the admissibility of scientific evidence rest solely on legal standards; they are exclusively the province of the courts and PCAST does not opine on them. But, these decisions require making determinations about scientific validity.
It is the proper province of the scientific community to provide guidance concerning scientific standards for scientific validity ... ”.
It is science that gives the scientist legitimacy as an expert; to ignore science when providing expert opinion surely creates a strange legal environment in which the very expertise that enabled the witness to be considered an ‘expert’ is discarded when it comes to giving an opinion.
Similarly, we urge everyone to treat with the utmost scepticism experts’ apparently blind but frequently expressed belief that accreditation guarantees the accuracy and reliability of a result, or even of their opinion. Sometimes an expert report will refer to having been ‘peer-reviewed’, or give some other indication that the opinion in the report is shared by others, at least within the same firm as the expert. This common practice is also debunked by PCAST:
“Similarly, an expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies. For forensic feature-comparison methods, establishing foundational validity based on empirical evidence is thus a sine qua non. Nothing can substitute for it.” [our emphasis]
We have always considered it questionable for reports to claim authority for their opinions on the basis that they have been ‘peer-reviewed by other competent and suitably trained experts’: trained and deemed competent by whom? Usually the very same company being paid for the report. All of our statements are, in that sense, peer-reviewed, but we do not consider it appropriate to say so within the statement. The statement is the opinion of the signatory; if it requires a second opinion then it requires a second statement. In our opinion, peer review as presented in such statements is simply hearsay unless the peer reviewer also prepares a statement. Even then, what value would additional statements have, coming from the same company, where all staff have been trained in, possibly tested on, and have adopted the firm’s policy?
Turning to the specific example of DNA profiling, where we have a long history of challenging the failings of the orthodoxy as regards Low Template samples and the statistical calculations presented when a mixture of DNA is found, the PCAST report stated:
“The fundamental difference between DNA analysis of complex-mixture samples and DNA analysis of single-source and simple mixtures lies not in the laboratory processing, but in the interpretation of the resulting DNA profile. ...
probabilistic genotyping software programs clearly represent a major improvement over purely subjective interpretation. However, they still require careful scrutiny to determine
(1) whether the methods are scientifically valid, including defining the limitations on their reliability (that is, the circumstances in which they may yield unreliable results) and
(2) whether the software correctly implements the methods. This is particularly important because the programs employ different mathematical algorithms and can yield different results for the same mixture profile.
Appropriate evaluation of the proposed methods should consist of studies by multiple groups, not associated with the software developers, that investigate the performance and define the limitations of programs by testing them on a wide range of mixtures with different properties.”
We have challenged several of these ‘probabilistic genotyping’ software programs (e.g. STRmix, likeLTD); a simplified sketch of the kind of calculation they perform appears below. In our opinion they have not been tested in the manner described by PCAST. All have had varied success in courts around the world and each works in a different way.
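To illustrate why such programs require careful scrutiny and can disagree, the sketch below works through a likelihood ratio for a two-person mixture at a single locus. Everything in it is a toy assumption of ours, for illustration only: one invented locus, made-up allele frequencies, a single allele drop-out probability and no modelling of drop-in. It is not the algorithm of STRmix, likeLTD or any other product; it simply shows how the same observed alleles can produce different likelihood ratios as the modelling assumptions change.

from itertools import combinations_with_replacement
from collections import Counter

# Illustrative allele frequencies at a single, invented locus.
FREQS = {"A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}

def genotype_prior(g):
    # Hardy-Weinberg prior for an unordered genotype (a, b).
    a, b = g
    return FREQS[a] ** 2 if a == b else 2 * FREQS[a] * FREQS[b]

def p_observed(observed, alleles, d):
    # P(observed allele set | contributors' allele copies), where each copy
    # drops out independently with probability d and drop-in is not modelled,
    # so every observed allele must come from some contributor.
    counts = Counter(alleles)
    if not observed <= set(counts):
        return 0.0
    p = 1.0
    for allele, c in counts.items():
        p *= (1.0 - d ** c) if allele in observed else d ** c
    return p

def likelihood_hp(observed, suspect, d):
    # Prosecution hypothesis: the suspect plus one unknown contributor.
    return sum(genotype_prior(g) *
               p_observed(observed, list(suspect) + list(g), d)
               for g in combinations_with_replacement(FREQS, 2))

def likelihood_hd(observed, d):
    # Defence hypothesis: two unknown contributors.
    return sum(genotype_prior(g1) * genotype_prior(g2) *
               p_observed(observed, list(g1) + list(g2), d)
               for g1 in combinations_with_replacement(FREQS, 2)
               for g2 in combinations_with_replacement(FREQS, 2))

observed = {"A", "B", "C"}    # alleles detected in the mixed profile
suspect = ("A", "B")          # the person of interest's genotype

for d in (0.01, 0.10, 0.30):  # three different drop-out assumptions
    lr = likelihood_hp(observed, suspect, d) / likelihood_hd(observed, d)
    print(f"assumed drop-out {d:.2f}: likelihood ratio = {lr:.1f}")

Running this sketch shows the reported likelihood ratio changing as the assumed drop-out rate moves from 0.01 to 0.30, with nothing about the profile itself changing. Real programs differ from one another in far more respects than a single parameter, which is precisely why PCAST calls for independent, comparative evaluation.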
As we have pointed out in our challenges, and as PCAST now agrees:
“A number of papers have been published that analyze known mixtures in order to address some of these issues [with interpreting DNA mixtures]. Two points should be noted about these studies.
First, most of the studies evaluating software packages have been undertaken by the software developers themselves. While it is completely appropriate for method developers to evaluate their own methods, establishing scientific validity also requires scientific evaluation by other scientific groups that did not develop the method.
Second, there have been few comparative studies across the methods to evaluate the differences among them—and, to our knowledge, no comparative studies conducted by independent groups”.
These findings comport with at least some of the issues that we have taken up in casework. However, when faced with a challenge from what appear to be lone voices in the UK, the courts have generally sided with the apparent ‘orthodoxy’, despite the fact that, as PCAST has identified, there is no real orthodoxy in this area. Indeed, in one of our New York cases the developer of one software system was effectively rubbishing another.
The question remains how long it will be before the criminal justice system in the UK realises that this is not an isolated problem: it affects past and present cases, big and small. The system of Legal Aid rewards cost rather than quality of expertise. That system, combined with the fragmented nature of the ‘defence’ market, the inevitable inequality of arms of almost every kind between prosecution and defence experts, and the emerging propensity to smear any expert brave enough to challenge the orthodoxy, perpetuates the admission of flawed and dangerous expert testimony. How and when will the situation change in the light of the increasing evidence that there are serious problems with the claimed orthodoxy?
A recent request for defence funding for a case involving a serious challenge to probabilistic genotyping software was met with this response from the Legal Aid Agency:
“As far as we are aware probabilistic genotyping software is simply using software to produce an analysis of the likelihood of a genetic match between 2 DNA samples, which is often done by most DNA experts…”
So much for any chance of “appropriate evaluation”. Even in the laboratories that provide such evidence, the software is used only by a tiny handful of staff specifically trained in its use; it is not routine or simple.
As something of a footnote, having acted for the defence in the now widely discussed footwear case of R v T [6], we are pleased to see vindication from PCAST for part of our stance in that case (i.e. that footwear mark assessment is not science, a point actually agreed by the Appeal Court):
“Such claims for “identification” based on footwear analysis are breathtaking—but lack scientific foundation.
...
PCAST finds that there are no appropriate black-box studies to support the foundational validity of footwear analysis to associate shoeprints with particular shoes based on specific identifying marks. Such associations are unsupported by any meaningful evidence or estimates of their accuracy and thus are not scientifically valid.”
Ironically, despite an almost identical argument being made regarding ‘forensic gait analysis’ in R v Otway [7], the Appeal Court did not accept it. Gait analysis does not meet any of the criteria proposed by PCAST (or, indeed, the criteria of any other science).
PCAST has included helpful appendices intended to assist barristers and judges in assessing the quality of scientific claims. It is common experience that once methods find their way into court it becomes increasingly difficult to challenge them, and it seems few people are prepared to do so. We hope that the PCAST report, added to the increasing realisation that forensic science appears to be more forensic than science, will encourage barristers and judges to be more critical of the science that they deal with.
*This article was published in Barrister Magazine in February 2017.
[1] R v Holdsworth [2008] EWCA Crim 971
[2] National Academy of Sciences of the USA (2009), Strengthening Forensic Science in the United States: A Path Forward.
[3] R v Dlugosz [2013] EWCA Crim 2
[4] R v Reed & Anor [2009] EWCA Crim 2698
[5] President’s Council of Advisors on Science and Technology (2016), Report to the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, September 2016.
[6] R v T [2010] EWCA Crim 2439
[7] R v Otway [2011] EWCA Crim 3