The other night I was watching a Dateline episode on a false confession that landed someone in jail for a crime he didn’t commit. The story is quite similar to that of Brendan Dassey in Making a Murderer: a learning-disabled boy being coerced by detectives into falsely confessing to having committed heinous crimes. These are very upsetting stories. Even more upsetting is that false confessions are quite common in the US and have led to a great number of wrongful convictions.
It occurred to me that the false confession debate provides an intriguing analogy with the replication debate, which was recently reignited after the publication of a critique in Science of the Reproducibility Project. Many people have written great blogposts about this latest controversy already (e.g., Uri Simonsohn, Simine Vazire, Sanjay Srivastava, Michael Inzlicht, Dorothy Bishop, Daniël Lakens, and David Funder). This post approaches the debate from a different angle. I explore whether the false confession analogy holds a lesson for the reproducibility debate.
In a false confession, a suspect is pushed, cajoled, and bullied by one or more police detectives into confessing to having committed a crime. In the Dateline and Making a Murderer cases, it is plainly visible (the interrogations were recorded) that the heinous acts the suspect confesses to were all suggested by the detectives themselves. The suspect just guesses randomly until he produces the desired answer.
Maybe, as a result of confirmation bias, the detectives honestly believe that the suspect has committed the crime and that they are just forcing him to own up to the facts. Or maybe the detectives don’t really believe the suspect is guilty at all but need (or are pushed by those higher up the ladder) to make a quick arrest to meet some quota.
Dateline points out that in the UK (and presumably in many other countries) confessions elicited under duress are no longer admissible in court. The interview process has undergone a complete overhaul. This overhaul was initially resisted by veteran detectives. As an English police detective puts it: “Senior people thought that this was a draconian piece of legislation that was gonna prevent us from detecting anything ever again […], that it was going to tie our hands behind our back.” But they were wrong, as Dateline presenter Keith Morrison intones. Detection rates in homicide cases in the UK are over 90%. This makes sense, of course. The police are no longer wasting time on innocent suspects and can devote their resources to actually solving the crime.
The replication crisis is the product of a whole tradition of extracting false confessions from the data. Researchers push and cajole the data as long as it takes for the data to “give up” the effect (not reporting nonsignificant effects, optional stopping, selectively removing outliers, and so on). Whether the researchers really believed the data harbored the predicted effect is a question no one may be able to answer, but my guess is that the vast majority sincerely did, and do.
One side of the replication debate wants us to progress toward the UK situation. Eliciting false confessions from the data is no longer admissible in the court of science, and new policies are proposed or in place to curb their use (preregistration, open data, open code, open reviews). This is morally the right thing to do, of course, but it also makes great practical sense. Why commit further resources to pursuing “effects” that are likely false confessions? Better to direct our gaze elsewhere.
The other side of the replication debate seems to want to continue the tradition of extracting false confessions from the data. Like the senior police detectives in the UK, they bemoan policies that journals have put in place to curb reliance on false confessions. I suspect members of this side of the debate will turn out to be like those greybeards on the UK police force: on the wrong side of history.