Comments on Drang naar Samenhang: "Escaping from the Garden of Forking Paths" (blog by Rolf Zwaan)

But if we are not doing confirmatory research and are instead performing exploratory work, then why would we want to artificially restrict ourselves to pre-registered hypotheses, methods, or analyses that are apparently made up by the researcher?

I understand the desire to restrict researcher degrees of freedom and to force researchers to generate real predictions; but if there is really no theoretical justification for the predictions, then the request is just a waste of time.

The best outcome I can foresee from an emphasis on pre-registration is that researchers will sit down, take a good look at their theoretical ideas, and carefully consider which ones generate predictions precise enough to support a well-designed experiment. In many cases, researchers will realise that they have no such ideas and thus will not pre-register anything. That realisation is valuable, but I hardly think we can praise an approach whose main benefit is that it is not implemented. Researchers should think carefully about their ideas and experimental designs, but that can be done without pre-registration.

Thanks for the link to Borsboom's piece. I liked it a lot, even though we come to different conclusions about the benefits of pre-registration.
— Greg Francis (2014-01-13, 18:00)

> To put it another way, if a researcher is doing fully
> confirmatory work, then pre-registration is not necessary.
> If a researcher is doing fully exploratory work, then
> pre-registration should not be done at all.

It's hard to argue with this. But I suspect that people are almost never actually doing fully confirmatory work in psychology, because (cf. Denny Borsboom's recent piece, http://osc.centerforopenscience.org/author/denny-borsboom.html) there is so little theory, and what there is is incomplete. We're generally about as far as you can get from the confirmatory paradigm exemplified by, say, Eddington's eclipse experiment. Pre-registration reduces researcher degrees of freedom, which is needed not just because some people are dishonest or lazy, but because the theory you're typically working with doesn't even begin to predict what might happen with a whole pile of variables that you hadn't thought of.

For example, consider the failure to reproduce Gailliot et al.'s work on ego depletion (see http://www.psychfiledrawer.org/replication.php?attempt=MTIw). This ought to be about as confirmatory as you can get, since the second group of researchers had all the materials, methods, and scripts from the original study. There are many possible explanations for their null result, but failure to account for a potentially large number of other (unimagined) variables seems like a plausible one. (Of course, that failure might have been on the part of the reproducing group. Over at http://www.psychfiledrawer.org/replication.php?attempt=MTQ1 there's a *successful* replication of Gailliot et al.'s original results. Who's to say who's right?
Does a single null replication constitute falsification if you know that your theory isn't "complete" and never can be?)

I suspect that people in disciplines considered "harder" sciences than psychology but "softer" than physics (that's pretty well everything done in a lab, I guess!) might do well to study how the better psychologists design their experiments; if you can successfully eliminate most of the effects of hidden variables in psychology, it should be a snap in genomics or neuroscience.

— Nick Brown (2014-01-13, 13:53)

Andrew, I had to think about this a little, but it makes perfect sense. True/false presupposes the very knowledge that we seek. I guess "unsupported positive" would be a better term. I've always had this impression about social priming as well. The notion itself strikes me as plausible; it is just that the experiments provide no support for it.

— Rolf Zwaan (2014-01-13, 13:16)

Since it was brought up in this post and the previous post by Rolf, Andrew, and Chris, I wanted to try out an argument against pre-registration. I should say upfront that I am not really opposed to pre-registration, but I think this argument suggests it is rather silly in many situations in experimental psychology.

My concern is about what should be inferred when a researcher sticks to the plan. Does success for a pre-registered strategy lend some extra confidence in the results or in the theoretical conclusion?
Does it increase belief in the process that produced the registered hypotheses? A consideration of two extremes suggests that it does not.

Extreme case 1. Suppose a researcher generates a hypothesis by flipping a coin. It comes up heads, so the researcher pre-registers the hypothesis that there will be a significant difference of means. The experiment is subsequently run and finds the predicted difference. Whether the observed difference is real or not, such an experimental outcome surely does not validate the process by which the hypothesis was generated. For the experiment to validate the prediction of the hypothesis (not just the hypothesis itself), there needs to be some justification for the prediction.

Extreme case 2. Suppose a researcher generates a hypothesis by deriving an effect size from a quantitative theory previously published in the literature. The researcher pre-registers this hypothesis, and the subsequent experiment finds the predicted difference. Such a finding may be strong validation of the hypothesis and of the quantitative theory, but pre-registration seems to have nothing to do with that validation. Since the theory has already been published, other researchers could follow the steps of the original researcher and derive the very same predicted effect size. In a situation such as this, it seems unnecessary to pre-register the hypothesis because it follows from existing ideas.

Most research problems are neither of these extremes, but I still cannot see a situation where pre-registration helps. If the predicted hypotheses (and methods and measures) are clearly derived from existing theory, then pre-registration does not add much to the investigation. On the other hand, if the hypotheses (and methods and measures) are not clearly defined by existing theory, then pre-registration cannot change that situation.
To put it another way, if a researcher is doing fully confirmatory work, then pre-registration is not necessary. If a researcher is doing fully exploratory work, then pre-registration should not be done at all. A problem we have in the field is that many people think only confirmatory work is proper and that exploratory work is unscientific. On the contrary, both processes are essential to science.

Moreover, it is not true that only confirmatory work can reject or validate theoretical predictions. The difference between confirmatory and exploratory work is mostly about the efficiency of the experimental design. Confirmatory work is focused on specific questions, so the design emphasises getting answers to those questions and is likely to give definitive answers. Exploratory work is less focused on specific questions, so the design is less likely to produce definitive answers to any question (though it might, just by happenstance).

For some of the specific cases where people have argued for pre-registration, the real problem was that the reported data did not provide a convincing argument for or against the presented theoretical ideas. If researchers just pay attention to the uncertainty in the measurements relative to the theoretical ideas under consideration, then it does not really matter whether the design is confirmatory or exploratory, or whether the experiment was pre-registered.

— Greg Francis (2014-01-13, 12:50)
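Greg Francis's "extreme case 1" can be made concrete with a small simulation. This is an illustrative sketch of my own, not anything from the thread, and all the numbers (effect size, sample size) are arbitrary assumptions: a researcher who pre-registers a direction chosen by coin flip still sees that prediction "confirmed" in roughly half of all studies of a real effect, so a single confirmed pre-registered prediction carries no information about the process that generated the hypothesis.

```python
import random

random.seed(0)

def run_study(true_effect=0.5, n=50):
    """Return the observed mean difference in a simple one-sample study
    of a real effect (arbitrary illustrative numbers)."""
    diffs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return sum(diffs) / n

N_STUDIES = 10_000
confirmed = 0
for _ in range(N_STUDIES):
    # The "hypothesis" is a direction chosen by coin flip, then pre-registered.
    predicted_sign = 1 if random.random() < 0.5 else -1
    observed = run_study()
    # The prediction is "confirmed" if the observed sign matches it.
    if (observed > 0) == (predicted_sign > 0):
        confirmed += 1

confirmed_rate = confirmed / N_STUDIES
# With a real effect, the coin-flipper is "confirmed" close to half the time:
# confirmation tracks the coin, not any theoretical justification.
print(f"coin-flip predictions confirmed: {confirmed_rate:.1%}")
```

Because the coin is independent of the data, the confirmation rate sits near 50% no matter how large the true effect is, which is the sense in which a confirmed prediction does not validate the hypothesis-generating process.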
Rolf:

Just to add to the mess, let me clarify that I don't like the terms "false positive" and "false negative." I think that many of the problems we describe arise because of the idea that the purpose of a scientific study is to figure out whether a hypothesis is "true." Sometimes this works out; for example, most of us assume that whatever Daryl Bem is trying to study is actually false.

But most of the time the true/false distinction does not really make sense. For example, consider the study that claimed to find a relation between men's arm circumference and an interaction between their socioeconomic status and their political attitudes. The correlation that the researchers found in their data: is it "real" in the sense of applying to the general population? Well, I don't think the true correlation is 0. What I do think is that they have a type M error (that is, their estimate from their sample is much higher than the correlation in the population) and that they are likely to have a type S error (that is, the sign of the association in the population is likely opposite to what they found in the sample).

But what of the researchers' more general hypothesis, that there is some relation between upper-body strength and political attitude, with some connection to evolution? Yes, I think this is true; in some sense it *has* to be true, in that there is no way these possible relations are exactly zero. But that doesn't mean that anything useful came out of the published paper. It's not that they had a "false positive" or "false negative"; that's not really the issue.

— Andrew Gelman (2014-01-13, 12:10)
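Gelman's type M / type S distinction can be illustrated with a short simulation. Again this is an illustrative sketch with made-up numbers, not an analysis of the arm-circumference study: when the true effect is small relative to the noise, the estimates that clear the significance threshold exaggerate the true effect many-fold (type M) and frequently have the wrong sign (type S).

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1   # small true effect, in the same units as the estimate
SE = 1.0            # standard error of the estimate (noisy study)
N_SIMS = 100_000

# Keep only the estimates that would pass a two-sided z-test at alpha = .05.
significant = []
for _ in range(N_SIMS):
    estimate = random.gauss(TRUE_EFFECT, SE)
    if abs(estimate) > 1.96 * SE:
        significant.append(estimate)

# Type M: how much the magnitude of a "significant" estimate overstates
# the true effect, on average. Type S: how often its sign is wrong.
exaggeration = statistics.mean(abs(e) for e in significant) / TRUE_EFFECT
sign_errors = sum(e < 0 for e in significant) / len(significant)

print(f"type M (exaggeration ratio): {exaggeration:.1f}x")
print(f"type S (wrong-sign rate):    {sign_errors:.1%}")
```

With these (arbitrary) settings the exaggeration ratio comes out on the order of twenty-fold and the wrong-sign rate well above a third, which is the pattern Gelman describes: the published estimate can be "real" in the sense of a nonzero population correlation and still be badly inflated and possibly of the wrong sign.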