

Showing posts from January, 2014

Why Social-Behavioral Primers Might Want to be More Self-critical

During the investigation into the scientific conduct of Dirk Smeesters, I expressed my incredulity about some of his results to a priming expert. His response was: "You don’t understand these experiments. You just have to run them a number of times before they work." I am convinced he was completely sincere. What underlies this comment is what I’ll call the shy-animal mental model of experimentation: the effect is there; you just need to create the right circumstances to coax it out of its hiding place. But there is a more appropriate model: the 20-sided-die model (I admit, that’s pretty spherical for a die, but bear with me). A social-behavioral priming experiment is like rolling a 20-sided die, an icosahedron. If you roll the die enough times, a 20 will turn up at some point. Bingo! You have a significant effect. In fact, given what we now know about questionable and not-so-questionable research practices, it is fair to assume that the researchers are actually roll…
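The arithmetic behind the 20-sided-die model is easy to check. Each run of a true-null experiment "comes up 20" (i.e., crosses p < .05) with probability .05, so the chance of at least one hit in n independent runs is 1 − (0.95)^n. A minimal Monte Carlo sketch (my own illustration, not from the post):

```python
import random

def significant_under_null(alpha=0.05):
    # Under the null hypothesis, a test crosses p < alpha purely by
    # chance with probability alpha -- one roll of the "20-sided die".
    return random.random() < alpha

def prob_at_least_one_hit(attempts, alpha=0.05, trials=20_000):
    # Monte Carlo estimate of P(at least one "significant" result in
    # `attempts` independent null experiments).
    # Analytically: 1 - (1 - alpha) ** attempts.
    hits = sum(
        any(significant_under_null(alpha) for _ in range(attempts))
        for _ in range(trials)
    )
    return hits / trials
```

Since 1 − (0.95)^14 ≈ 0.51, running a null experiment about fourteen times already gives a coin-flip chance of obtaining a "significant" effect.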

Escaping from the Garden of Forking Paths

My previous post was prompted by a new paper by Andrew Gelman and Eric Loken (GL), but it did not discuss its main thrust because I had planned to defer that discussion to the present post. However, several comments on the previous post (by Chris Chambers and Andrew Gelman himself) leapt ahead of the game, so there is already an entire discussion in the comment section of the previous post about the topic of our story here. But I’m putting the pedal to the metal to come out in front again. Simply put, GL’s basic claim is that researchers often unknowingly create false positives. Or, in their words: "it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values." [Image: my copy of the Dutch translation] Here is one way in which this might work. Suppose we have a hypothesis that two groups differ from…
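GL’s point can be made concrete with a small simulation (my own sketch, with an invented scenario): a researcher measures an effect in two subgroups, then tests only the subgroup that happens to show the larger effect. Only one p-value is ever computed, no conscious fishing occurs, yet the analysis choice is contingent on the data and the false positive rate climbs well above the nominal 5%.

```python
import random
import statistics

def z_for_null_sample(n=30):
    # z statistic for a sample mean under the null (true mean 0, sd 1,
    # sd treated as known), so z ~ N(0, 1) exactly.
    xs = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(xs) / (1 / n ** 0.5)

def forking_paths_false_positive_rate(trials=20_000, n=30):
    # Hypothetical researcher: look at two subgroups, test ONLY the one
    # with the larger apparent effect -- a single data-contingent p-value.
    hits = 0
    for _ in range(trials):
        z_a, z_b = z_for_null_sample(n), z_for_null_sample(n)
        chosen = z_a if abs(z_a) >= abs(z_b) else z_b
        if abs(chosen) > 1.96:  # nominal two-sided p < .05
            hits += 1
    return hits / trials
```

Because the reported statistic is the larger of two independent |z| values, the true false positive rate is 1 − (0.95)² ≈ 0.0975, roughly double the nominal .05, even though the researcher never "examined multiple p-values."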

Donald Trump’s Hair and Implausible Patterns of Results

In the past few years, a set of new terms has become common parlance in post-publication discourse in psychology and other social sciences: sloppy science, questionable research practices, researcher degrees of freedom, fishing expeditions, and data that are too good to be true. An excellent new paper by Andrew Gelman and Eric Loken takes a critical look at this development. The authors point out that they regret having used the term fishing expedition in a previous article that contained critical analyses of published work. The problem with such terminology, they assert, is that it implies conscious actions on the part of the researchers, even though, as they are careful to point out, the people who have coined or are using those terms (this includes me) may not think in terms of conscious agency. The main point Gelman and Loken make in the article is that there are various ways in which researchers can unconsciously inflate effects. I will write more ab…