
Posts

Showing posts from 2014

ROCing the Boat: When Replication Hurts

"Though failure to replicate presents a serious problem, even highly-replicable results may be consistently and dramatically misinterpreted if dependent measures are not carefully chosen." This sentence comes from a new paper by Caren Rotello, Evan Heit, and Chad Dubé, to be published in Psychonomic Bulletin & Review. Replication hurts in such cases because it reinforces artifactual results. Rotello and colleagues marshal support for this claim from four disparate domains: eyewitness memory, deductive reasoning, social psychology, and studies of child welfare. In each of these domains, researchers make the same mistake by using the same wrong dependent measure. Common across these domains is that subjects have to make detection judgments: was something present or was it not? For example, subjects in eyewitness memory experiments decide whether or not the suspect is in a lineup. There are four possibilities. ...
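The four possibilities the post alludes to are the standard signal-detection outcomes: hits, misses, false alarms, and correct rejections. As a minimal illustration of why the choice of dependent measure matters, here is a Python sketch with made-up counts; d' is used as one standard sensitivity index, not necessarily the measure Rotello and colleagues recommend (their argument, as the title hints, concerns full ROC analysis).

```python
# Minimal signal-detection sketch with made-up counts (not data from the paper).
# Detection judgments have four outcomes:
#   signal present & "yes" -> hit          signal present & "no" -> miss
#   signal absent  & "yes" -> false alarm  signal absent  & "no" -> correct rejection
from scipy.stats import norm

def rates(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate, fa_rate

# Two hypothetical conditions that differ in response bias
cond_a = rates(hits=60, misses=40, false_alarms=20, correct_rejections=80)
cond_b = rates(hits=80, misses=20, false_alarms=40, correct_rejections=60)

for label, (h, f) in [("A", cond_a), ("B", cond_b)]:
    d_prime = norm.ppf(h) - norm.ppf(f)  # equal-variance Gaussian sensitivity index
    print(f"condition {label}: hit rate={h:.2f}, false-alarm rate={f:.2f}, d'={d_prime:.2f}")
```

In these toy numbers the two conditions have identical d' (about 1.09), yet a single-point summary such as the ratio of hit rate to false-alarm rate would declare condition A superior (3.0 vs. 2.0): the same data, two dependent measures, two different conclusions.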

The Diablog on Replication with Dan Simons

Last year, I had a very informative and enjoyable blog dialogue, or diablog, with Dan Simons about the reliability and validity of replication attempts. Unfortunately, there was never an easy way for anyone to access this diablog. It only occurred to me today (!) that I could remedy this situation by creating a meta-post. Here it is. In my first post on the topic, I argued that it is important to consider not only the reliability but also the validity of replication attempts, because it might be problematic if we try to replicate a flawed experiment. Dan Simons responded to this, arguing that deviations from the original experiment, while interesting, would not allow us to determine the reliability of the original finding. I then had some more thoughts. To which Dan wrote another constructive response. My final point was that direct replications should be augmented with systematic variations of the original experiment...

Verbal Overshadowing: What Can we Learn from the First APS Registered Replication Report?

Suppose you witnessed a heinous crime being committed right before your eyes. Suppose further that a few hours later, you're being interrogated by hard-nosed detectives Olivia Benson and Odafin Tutuola. They ask you to describe the perpetrator. The next day, they call you in to the police station and present you with a lineup. Suppose the suspect is in the lineup. Will you be able to pick him out? A classic study in psychology suggests Benson and Tutuola have made a mistake by first having you describe the perpetrator, because the very act of describing the perpetrator will make it more difficult for you to pick him out of the lineup. This finding is known as the verbal overshadowing effect and was discovered by Jonathan Schooler. In the experiment that is of interest here, he and his co-author, Tonya Engstler-Schooler, found that verbally describing the perpetrator led to a 25% decrease in identification accuracy. This is a sizeable difference with practical implications. B...

Developing Good Replication Practices

In my last post, I described a (mostly) successful replication by Steegen et al. of the "crowd-within effect." The authors of that replication effort felt that it would be nice to mention all the good replication research practices that they had implemented. And indeed, positive psychologist that I am, I would be remiss if I didn't extol the virtues of the approach in that exemplary replication paper, so here goes. Make sure you have sufficient power. We all know this, right? Preregister your hypotheses, analyses, and code. I like how the replication authors went all out in preregistering their study. It is certainly important to have the proposed analyses and code worked out up front. Make a clear distinction between confirmatory and exploratory analyses. The authors did exactly as the doctor, A. D. de Groot in this case, ordered. It is very useful to perform exploratory analyses, but they should be separated clearly from the ...
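On the first point, sufficient power: below is a minimal sketch of an a priori power calculation in Python. The effect size, design (two independent groups), alpha, and target power are all assumed for illustration and are not those of the Steegen et al. replication.

```python
# A priori power calculation sketch; all numbers are assumptions for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,            # assumed Cohen's d
    alpha=0.05,                 # two-sided significance level
    power=0.90,                 # desired probability of detecting the effect
    alternative='two-sided',
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

The point is simply that the sample size falls out of assumptions stated explicitly before data collection, which is also what makes it preregisterable.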

Is There Really a Crowd Within?

In 1907, Francis Galton (two years prior to becoming "Sir") published a paper in Nature titled "Vox populi" (voice of the people). With the rise of democracy in the (Western) world, he wondered how much trust people could put in public judgments. How wise is the crowd, in other words? As luck would have it, a weight-judging competition was carried on at the annual show of the West of England Fat Stock and Poultry Exhibition (sounds like a great name for a band) in Plymouth. Visitors had to estimate the weight of a prize-winning ox when slaughtered and "dressed" (meaning that its internal organs would be removed). Galton collected all 800 estimates. He removed thirteen (and nicely explains why) and then analyzed the remaining 787. He computed the median estimate and found that it was within 1% of the ox's actual weight. Galton concludes: "This result is, I think, more creditable to the trust-worthiness of a democratic judgment than might have been expected." ...
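A toy simulation makes the arithmetic concrete. The error distribution below is an assumption (Galton's real estimates were not this tidy or symmetric); only the dressed weight of 1,198 lb and the count of 787 estimates are taken as given.

```python
# Toy simulation of Galton's vox populi; the noise model is assumed, not his data.
import numpy as np

rng = np.random.default_rng(1907)
true_weight = 1198                      # dressed weight of the ox in pounds (Galton, 1907)
guesses = rng.normal(loc=true_weight, scale=75, size=787)  # assumed estimate noise

median_guess = np.median(guesses)
crowd_error_pct = 100 * abs(median_guess - true_weight) / true_weight
typical_individual_error_pct = 100 * np.median(np.abs(guesses - true_weight)) / true_weight

print(f"median of the crowd: {median_guess:.0f} lb ({crowd_error_pct:.2f}% off)")
print(f"typical individual guess: {typical_individual_error_pct:.1f}% off")
```

Even with fairly noisy individuals, the error of the median shrinks roughly with the square root of the number of independent estimates, which is the statistical core of the wisdom of the crowd; the question the crowd-within work asks is whether averaging two guesses from the same head buys you any of that benefit.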

Who’s Gonna Lay Down the Law in Psytown?

These are troubled times in our little frontier town called Psytown. The priest keeps telling us that deep down we're all p-hackers and that we must atone for our sins. If you go out on the streets, you face arrest by any number of unregulated police forces and vigilantes. If you venture out with a p-value of .065, you should count yourself lucky if you run into deputy Matt Analysis. He's a kind man and will let you off with a warning if you promise to run a few more studies, conduct a meta-analysis, and remember never to use the phrase "approaching significance" ever again. It could be worse. You could be pulled over by a Bayes Trooper. "Please step out of the vehicle, sir." You comply. "But I haven't done anything wrong, officer, my p equals .04." He lets out a derisive snort. "You reckon that's doin' nothin' wrong? Well, let me tell you somethin', son. Around these parts we don't care about p. We care about Bayes factors. And yours is way below the legal limit. Y...
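The trooper's complaint can be made concrete with a back-of-the-envelope bound. The sketch below uses the -e·p·ln(p) calibration of Sellke, Bayarri, and Berger (2001), which caps how much evidence against the null a given p-value can possibly represent; this is one common calibration, not whatever Bayes factor the trooper has on his radar gun.

```python
# Upper bound on the Bayes factor (alternative over null) implied by a p-value,
# via the -e * p * ln(p) calibration (Sellke, Bayarri, & Berger, 2001).
import math

def max_bf10(p):
    """Maximum Bayes factor for H1 over H0; the bound holds for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.04, 0.065, 0.005):
    print(f"p = {p:5.3f}  ->  evidence against the null is at most {max_bf10(p):4.1f} : 1")
```

Under that bound, p = .04 corresponds to odds of at most roughly 3 to 1 against the null, which is why a driver waving a just-significant p-value does not impress the Bayes patrol.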

My Take on Replication

There are quite a few comments on my previous post already, both on this blog and elsewhere. That post was my attempt to make sense of the discussion that all of a sudden dominated my Twitter feed (I'd been offline for several days). Emotions were running high and invective was flying left and right. I wasn't sure what the cause of this fracas was and tried to make sense of where people were coming from and to suggest a way forward. Many of the phrases in the post that I used to characterize the extremes of the replication continuum are paraphrases of what I encountered online rather than figments of my own imagination. What always seems to happen when you write about extremes, though, is that people rush in to declare themselves moderates. I appreciate this. I'm a moderate myself. But if we were all moderates, then the debate wouldn't have spiralled out of control. And it was this derailment of the conversation that I was trying to understand. But before (or more likely a...

Trying to Understand both Sides of the Replication Discussion

I missed most of the recent discussion on replication because I'm on vacation. However, the weather's not very inviting this morning in southern Spain, so I thought I'd try to catch up a bit on the fracas and try to see where both sides are coming from. My current environment induces me to take a few steps back from it all. Let's see where this goes. Rather than helping the discussion move forward, I might, in fact, inadvertently succeed in offending everyone involved. Basically, the discussion is between what I'll call the Replicators and the Replication Critics. I realize that the Replicators care about more than just replication. The Critics are opposing the replication movement. The Replicators and the Critics are the endpoints of what probably is close to a continuum. Who are the Replicators? As best as I can tell, they are a ragtag group of (1) mid-career-to-senior methodologi...