Collabra: Psychology has a submission option called streamlined review. Authors can submit papers that were previously rejected by another
journal for reasons other than a lack of scientific, methodological, or ethical
rigor. Authors request permission from the original journal and then submit their revised manuscript with the original
action letters and reviews. Editors like me then make a decision about the revised
manuscript. This decision can be based on the ported reviews or we can solicit
further reviews.
One recent streamlined submission had previously been rejected by an APA journal. The paper reports a failed self-replication. In the original experiment, the authors had found that a certain form of semantic priming, forward priming, can be eliminated by working-memory load, which suggests that forward semantic priming is not automatic. This is informative because it contradicts theories of automatic semantic priming. When they tried to follow up on this work for a new paper, however, the researchers were unable to obtain this elimination effect in two experiments. Rather than relegating the study to the file drawer, they decided to submit it to the journal that had also published their first paper on the topic. Their submission was rejected. It is now out in Collabra: Psychology. The reviews can be found here.
[Side note: I recently conducted a little poll on Twitter asking
whether journals should publish self-nonreplications. A staggering 97% of
the respondents said journals should indeed publish self-nonreplications. However,
if anything, this is evidence of the Twitter bubble I’m in. Reality is more recalcitrant.]
I thought the other journal’s reviews were thoughtful. Nevertheless,
I reached a different conclusion than the original editor. A big criticism in
the reviews was the concern about “double-dipping.” If an author publishes a
paper with a significant finding, it is unfair to let that same author then
publish a paper that reports a nonsignificant finding, as this gives the researcher
two bites at the apple.

People are (still) rewarded for the number of articles they publish, so letting someone first publish a finding and then a nonreplication of this finding is unfair. It is as if in football (the real football, where you use your feet to propel the ball) you get a point for scoring a goal and then an additional point for missing a shot from the same position.
However understandable, this idea loses its persuasive power
once we take the scientific record into account. As scientists, we want to
understand the world and lay a foundation for further research. It is therefore
important to have good estimates of effect sizes and the confidence we should
have in them. A nonreplication serves to correct the scientific record. It
tells us that the effect is less robust than we initially thought. This is
useful information for meta-analysts, who can now include both findings in
their collection. And even more importantly, it is very useful for researchers
who want to build on this research. They now know that the finding is less
reliable than they previously thought. It might prevent them from wandering into
a blind alley.
As with anything in science, allowing the publication of
self-nonreplications opens the door to gaming the system. People could p-hack
their way to a significant finding, publish it and then fail to “replicate” the
finding in a second paper. As an added bonus, the self-nonreplication will also
give them the aura of earnest, self-critical, and ethical researchers.
Moreover, the self-nonreplication pretty much inoculates the finding against
“outside” replication efforts. Why try to replicate something that even the
authors themselves could not replicate?
That’s not two, not three, but four birds with one stone!
You might think that I’m making up the inoculation motive for dramatic effect.
I’m not. A researcher I know actually suspects another researcher of using the
inoculation strategy.
How worried should we be about the misuse of
self-nonreplications? I’m not sure. One potential safeguard is to have the
authors explain why they performed the replication. Did they think there was
something wrong with the original finding or were they just trying to build on
it and were surprised to discover they couldn’t reproduce the original finding?
And if a researcher makes a habit of publishing self-nonreplications, I’m sure
people would be on to them in no time and questions would be asked.
So I think we should publish self-nonreplications. (1) They
help to make the scientific record more accurate. (2) They are likely to
prevent other researchers from ending up in a cul-de-sac.
The concern about double-dipping is only a concern given our
current incentive system, which is one more indication that this system is detrimental
to good science. But that’s a topic for a different post.