In my last post, I described a (mostly) successful replication by Steegen et al. of the "crowd-within effect." The authors felt it would be nice to mention all the good replication research practices they had implemented, so here they are.
And indeed, positive psychologist that I am, I would be remiss if I didn’t extol the virtues of the approach in that exemplary replication paper, so here goes.
Make sure you have sufficient power.
We all know this, right?
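For a simple between-groups design, an a-priori power analysis takes only a few lines. Below is a minimal sketch in Python using statsmodels; the effect size, alpha level, and target power are placeholder values, not the ones Steegen et al. used.

```python
# A-priori power analysis for a two-sample t-test. The effect size
# (d = 0.5) is a placeholder; in a replication you would plug in the
# original study's estimate, ideally adjusted for publication bias.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.90, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")
```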
Preregister your hypotheses, analyses, and code.
I like how the replication authors went all out in preregistering their study. It is certainly important to have the proposed analyses and code worked out up front.
Make a clear distinction between confirmatory and exploratory analyses.
Here the authors did exactly as the doctor, A.D. de Groot in this case, ordered. It is very useful to perform exploratory analyses, but they should be clearly separated from the confirmatory ones.
Report effect sizes.
Use both estimation and testing, so your data can be evaluated more broadly by people of different statistical persuasions.
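To illustrate, here is a minimal sketch of what reporting both looks like for a two-group comparison: a t-test for the testers, and Cohen's d with a 95% confidence interval for the estimators. The data are simulated, and the interval uses a standard large-sample approximation.

```python
# Report a test (t, p) alongside an estimate (Cohen's d with 95% CI).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(0.5, 1.0, 40)   # simulated data
group2 = rng.normal(0.0, 1.0, 40)

t, p = stats.ttest_ind(group1, group2)

n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                     (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
d = (group1.mean() - group2.mean()) / pooled_sd
# Large-sample approximation to the standard error of d.
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```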
Use both frequentist and Bayesian analyses.
Yes, why risk being pulled over by a Bayes trooper or having a run-in with the Frequentist militia? Again, using multiple analyses allows your results to be evaluated more broadly.
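As a toy illustration of running both analyses on the same data, the sketch below pairs a frequentist t-test with the BIC approximation to the Bayes factor (Wagenmakers, 2007). That approximation is just one convenient Bayesian option, not necessarily the analysis the replication authors ran.

```python
# Frequentist t-test plus a BIC-approximated Bayes factor.
import numpy as np
from scipy import stats

def bf01_bic(x, y):
    """BIC-based Bayes factor for 'no group difference' (H0) over
    'group difference' (H1); BF01 > 1 favours the null."""
    data = np.concatenate([x, y])
    n = data.size
    sse_null = np.sum((data - data.mean()) ** 2)   # one grand mean
    sse_alt = (np.sum((x - x.mean()) ** 2) +
               np.sum((y - y.mean()) ** 2))        # two group means
    # BIC difference: H1 has one extra parameter.
    delta_bic = n * np.log(sse_alt / sse_null) + np.log(n)
    return np.exp(delta_bic / 2)

rng = np.random.default_rng(1)
x, y = rng.normal(0.4, 1, 50), rng.normal(0.0, 1, 50)  # simulated data
t, p = stats.ttest_ind(x, y)
print(f"Frequentist: t = {t:.2f}, p = {p:.3f}; "
      f"Bayesian: BF01 = {bf01_bic(x, y):.2f}")
```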
Adopt a co-pilot multi-software approach.
A mistake in data analysis is easily made, so it makes sense to have two or more researchers analyse the data from scratch. A co-author and I used a co-pilot approach as well in a recent paper (without knowing the cool name for this approach, otherwise we would have bragged about it in the article). We discovered tiny discrepancies between our analyses, each of us having made a small error here and there. The discrepancies were easily resolved, but the errors probably would have gone undetected had we not used the co-pilot approach. Using a multi-software approach seems a good additional way to minimize the likelihood of errors.
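In practice, the final step of such a check can even be automated: each analyst writes their results to a file, and a small script flags any discrepancy. A sketch, with hypothetical file names and tolerance:

```python
# Compare two independently produced results files from a co-pilot
# analysis. File names and tolerance are hypothetical.
import pandas as pd

results_a = pd.read_csv("results_analyst_a.csv", index_col=0)
results_b = pd.read_csv("results_analyst_b.csv", index_col=0)

try:
    pd.testing.assert_frame_equal(results_a, results_b,
                                  check_exact=False, rtol=1e-6)
    print("Analyses match.")
except AssertionError as err:
    print(f"Discrepancy found, time to compare notes:\n{err}")
```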
Make the raw and processed data available.
When you ask people to share their data, they typically send you the processed data, but the raw data are often more useful. The combination is even more useful, as it allows other researchers to retrace the steps from raw to processed data.
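Sharing the processing script alongside both data sets makes that retracing trivial. Here is a minimal sketch of what such a script might look like; the file names, column name, and exclusion rule are all hypothetical.

```python
# Fully scripted raw-to-processed pipeline: every processing decision
# lives in the script, not in someone's head.
import numpy as np
import pandas as pd

raw = pd.read_csv("raw_data.csv")

processed = (raw
             .dropna(subset=["estimate"])   # drop missing responses
             .query("estimate >= 0"))       # drop impossible values
processed["log_estimate"] = np.log1p(processed["estimate"])

processed.to_csv("processed_data.csv", index=False)
```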
Use multiple ways to assess replication success.
This is a good idea in the current climate where the field has not settled on a single method yet. Again, it allows the results to be evaluated more broadly than with a single-method approach.
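For concreteness, here is a sketch of three common criteria, computed from summary statistics alone. The estimates and standard errors are made up for illustration; the criteria can genuinely disagree, which is exactly why reporting several is informative.

```python
# Three criteria for replication success, from summary statistics.
import numpy as np

# Hypothetical effect estimates (e.g., Cohen's d) and standard errors.
d_orig, se_orig = 0.45, 0.15   # original study
d_rep, se_rep = 0.25, 0.10     # replication

z = 1.96  # two-sided 95% critical value

# 1. Significance criterion: replication significant, same direction?
sig = (abs(d_rep / se_rep) > z) and (np.sign(d_rep) == np.sign(d_orig))

# 2. CI criterion: replication estimate inside the original 95% CI?
in_orig_ci = (d_orig - z * se_orig) < d_rep < (d_orig + z * se_orig)

# 3. Consistency criterion: do the two estimates differ significantly?
z_diff = (d_orig - d_rep) / np.sqrt(se_orig**2 + se_rep**2)
consistent = abs(z_diff) < z

print(f"significant & same sign: {sig}")
print(f"inside original CI:      {in_orig_ci}")
print(f"estimates consistent:    {consistent}")
```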
"Maybe these methodological strengths are worth mentioning too?" the first author of the replication study, Sara Steegen, suggested in an email.