In the wake of the tragedy at Sandy Hook, there has been increased interest in looking at ways to prevent such devastating crimes. Variations on the theme of gun control are obviously at the forefront of that discussion, judging by the national debate at the moment. But others are talking about psychological screening tools to assess future risk of violence in criminals.
With echoes of the fantastical PreCrime Unit in the sci-fi thriller Minority Report, the desire for a pre-screener for violence is nothing new. But a recent article published in the journal Behavioral Sciences & The Law details what the authors – preeminent forensic psychologists – conclude is a fatal flaw in actuarial risk assessment instruments used to predict recidivism rates. From the abstract:
“Consistent with past research, ARAI scores were moderately and significantly predictive of failure in the aggregate, but group probability estimates had substantial margins of error and individual probability estimates had very large margins of error.”
Described another way:
“[T]he researchers established through a traditional statistical procedure, logistic regression, that the margins of error around individual scores were so large as to make risk distinctions between individuals “virtually impossible.” In only one out of 90 cases was it possible to say that a subject’s predicted risk of failure was significantly higher than the overall baseline of 18 percent.”
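To see why individual estimates fare so much worse than group estimates, it helps to compare confidence intervals at the two levels. The sketch below uses the Wilson score interval for a binomial proportion, applied to the 18 percent baseline and the 90 cases mentioned above; treating the same proportion as a prediction for a single person (n = 1) is one simplified way of illustrating the individual-level uncertainty the authors describe, not a reproduction of their exact procedure.

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    z2 = z * z
    denom = 1 + z2 / n
    center = (p_hat + z2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z2 / (4 * n * n))
    return center - half, center + half

# Group level: an 18% failure rate observed across 90 cases
lo_g, hi_g = wilson_interval(0.18, 90)

# Individual level: the same 18% treated as a prediction for one
# person (n = 1) -- a deliberately simplified illustration
lo_i, hi_i = wilson_interval(0.18, 1)

print(f"group (n=90):     {lo_g:.2f} to {hi_g:.2f}")
print(f"individual (n=1): {lo_i:.2f} to {hi_i:.2f}")
```

The group interval stays in a usable range of a dozen or so percentage points, while the individual interval stretches across nearly the entire 0-to-1 scale, which is the "very large margins of error" problem in miniature: almost no individual score can be distinguished from the baseline.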
At issue is whether such actuarial surveys can be admissible in court. Researchers Stephen Hart and David Cooke say ‘no,’ arguing that any apparent accuracy in predictive ability is a statistical artifact: the “fundamental uncertainty” of individual predictions cannot be escaped, and the massive margin of error is the “reality.” Among their conclusions:
- The APA ethics code requires psychologists to inform clients of “the strengths and limitations of test results and interpretation” and to “indicate any significant limitations of their interpretations.”
- The fundamental uncertainty of actuarial risk assessment “cannot be overcome,” therefore Hart and Cooke recommend use of such statistical algorithms be stopped.
- The “image of certitude” projected by actuarial risk assessments is misleading and can result in cognitive biases, therefore their admissibility in court should be seriously questioned.
- Courts should not rely on any one assessment of an individual’s supposed traits or characteristics, but instead must look at all information in context.
As I see it, the implications for school psychologists are twofold. First, school psychologists should be wary of any assessment instrument purporting to determine a student’s risk of future violence. The legal implications alone should give pause to anyone considering administering – and interpreting – such an assessment in a school setting.
Second, Hart and Cooke’s final recommendation that legal professionals and the courts “recognize that their decisions ultimately require consideration of the totality of circumstances – not just the items of a particular test” is exactly in line with best practices for school psychologists. We are never supposed to make an academic or clinical decision based on the results of only one test.
The pressure is on right now, and in such a climate schools may be tempted to change or adopt practices quickly in order to ease political tension. Research like this reminds us of the importance of ethics codes and best practices guides: they provide a steady rudder to guide us through periodic storms.