

Showing posts from June, 2013

The Tyrion Lannister Paradox: How Small Effect Sizes can be Important

There has been a lot of debate lately about effect sizes. On the one hand, there are effects in the social priming literature that seem surprisingly large given the subtlety of the manipulation, the between-subjects design, and the (small) sample size. On the other hand, some researchers can be heard complaining about small effect sizes in other areas of study (for example, cognitive psychology). Why would we want to study small effects? This is not a new question. We could go further back in history, but let's stop in 1992, the year in which an insightful article on small effect sizes appeared, authored by Deborah Prentice and Dale Miller. Prentice and Miller argue that there are two valid reasons why psychologists study small effects. The first reason is that researchers are trying to establish the minimal conditions under which an effect can be found. They accomplish this by minimally manipulating the independent variable. The second reason is that researc

Wacky Hermann and the Nonsense Syllables: The Need for Weirdness in Psychological Experimentation

Earlier this week I attended a symposium in Nijmegen on Solid Science in Psychology. It was organized mostly by social psychologists from Tilburg University, along with colleagues from Nijmegen. (It is heartening to see members of Stapel's former department step up and take such a leading role in the reformation of their field.) The first three speakers were Uri Simonsohn, Leif Nelson, and Joe Simmons of false-positive psychology fame. I enjoyed their talks (which were not only informative but also quite humorous for such a dry subject), but I already knew their recent papers and I agree with them, so their talks did not change my view much. Later that day the German social psychologist Klaus Fiedler spoke. He offered an impassioned view that ran somewhat contrary to the current replication rage. I didn't agree with everything Fiedler said, but I did agree with most of it. What's more, he got me to think, and getting your audience to think is what a good speaker wants.

The Diablog on Replications and Validity Continues

In the latest conversational turn in my ongoing dialog (diablog?) with Dan Simons about replications and validity, Dan provides some useful insights into what qualifies as a direct replication: a direct replication can be functionally the same if it uses the same materials, tasks, etc., and is designed to generalize across the same variations as the original. I agree completely. As Dan notes, no replication can be exact, and some changes are inevitable for the experiment to make sense. At Registered Replication Reports (RRR), Dan and his colleague Alex Holcombe have instituted some interesting procedures: Our approach with Registered Replication Reports is to ask the original authors to specify a range of tolerances on the parameters of the study. This is a great idea. What I like even more is that Dan not only talks the talk but also walks the walk. He is using this approach in his own papers by adding a paragraph to the method section in which he states th

More Thoughts on Validity and Replications

In my previous post I described how direct replications provide insight into the reliability of findings but not so much their validity. Dan Simons yesterday wrote an insightful post in response to this (I love the rapid but thoughtful scientific communication afforded by blogs). As Dan said, we are basically in agreement: it is very important to conduct direct replications, and it is also important to assess the validity of our findings. Dan's post made me think about this issue a little more, and I think I can articulate it more clearly now (even though I have just been sitting in the sun drinking a couple of beers). To clarify, let me first quote Dan when he describes my proposal: This approach, allowing each replication to vary provided that they follow a more general script, might not provide a definitive test of the reliability of a finding. Depending on how much each study deviated from the other studies, the studies could veer into the territory of conceptual