Comments on "Premature Experimentation: Revaluing the Role of Essays and Thought Experiments" (Drang naar Samenhang, a blog by Rolf Zwaan)

Angel, I'm curious here. What group did you mean by "you guys"? Psychologists? Dutch people? Professors? Persons of the male gender? Bloggers?

(Anonymous, January 9, 2014, 14:57)

Thanks Angel. I'm glad you like the writing and are a faithful follower of the blog. I very much enjoy not having to write in an academic style. I hope you will stay in the social sciences. If anything, we need more people with a natural-sciences mindset, not fewer.

(Rolf Zwaan, October 31, 2013, 23:05)

I agree, Matthijs. We should better identify the different stages of the scientific process and realize that each has its role to play. At the top are confirmatory experiments performed in multiple labs. However, thought experiments, observations, and exploratory studies are also important; they just should not be presented as confirmatory studies. I could cite Wagenmakers et al.
again, but I've cited them so often in various posts already that I feel I'd have to charge them for it.

(Rolf Zwaan, October 31, 2013, 23:02)

Well put, Fred! I concur.

(Rolf Zwaan, October 31, 2013, 22:56)

I am obsessed with physics (predisposed to become disenchanted with the social sciences). I dropped out of grad school for reasons that many of your articles address. I am a faithful, albeit primitive, follower, Rolf. I didn't know guys like you knew how to write (I didn't see much evidence of that at the University of Pennsylvania). "Beauty is truth, truth beauty" (John Keats). Your essays prove that Keats was right.

(Anonymous, October 31, 2013, 22:13)

I agree with one of the above writers that psychology (especially during university training) takes a quite narrow selection from the toolbox that could be associated with experimentation and the scientific method in general. To put it coarsely: if it isn't a randomized controlled trial (or some non-clinical look-alike), it isn't worth looking at for an aspiring or established experimental psychologist. I find these types of experiments to be the capstone of the scientific method, but of course not the whole story.
It would be great if we could expand our view and pick up other rigorous scientific tools associated with experiments "below" the RCT (or the cognitive psychology experiment from which we generalize to the population). For example, teaching our students how to do statistics when they are running an intervention with just one person: how do you determine whether your treatment or advice resulted in a better outcome (above random variation) for THIS patient? Or teaching them the value of observational studies (for example, by showing what is done in neighbouring fields such as biology). By drawing on this broader part of the experimentation toolbox, we would arrive much better prepared at the moment when we do our big-sample, hypothesis-testing study. And by better prepared I mean with much better intuitions, theory, and predictions.

(Anonymous, October 31, 2013, 13:59)

To decide whether something is "true" or not is to appraise the truth-likeness of a prediction made by a theory. There is always a theory; we are not explorers looking for a new continent.

A failed replication must have consequences for the theory making the original claim. Hence my objection to: "Only that when others try to replicate and fail to get the same results, we should not automatically assume that this implies that the original effect was a false positive. Again, caution."

No, we should maybe not conclude a false positive, but there must be a consequence for the theory that produced the prediction, and I believe the claim by Cesario, that there basically is no theoretical account good enough to produce empirically precise and accurate predictions, is correct.
I prefer Rolf's conclusion: let's stop publishing "facts" that are basically just signs of correlations.

Failed replications mean we should hurry back to the drawing board and improve those theories or discard them altogether.

Measurement contextuality and individual differences may be at work here, but such problems are solvable, theoretically and mathematically, as they have been solved in many other fields of science. So blaming such confounders strikes me as settling prematurely for weak theories. Tackling such problems requires investing a lot of time and energy in a discussion about measurement in psychological science, complexity, and physics at the ecological scale, thought experiments included!

(Anonymous, October 30, 2013, 21:08)

As with any finding, you should explain: here's what I did and here's what I found. Where you need to be cautious is in speculating about what that means. When you don't know what parameters affected your results, you should say so, in priming and other research alike. Of course we often don't know what we don't know until later, but that's a good reason to be conservative.

I don't think Joe is saying that nobody other than the original authors should replicate. Only that when others try to replicate and fail to get the same results, we should not automatically assume that this implies that the original effect was a false positive. Again, caution.

I think progress in science can be a messy and time-consuming process. We are often overly eager to draw firm conclusions.
In the back and forth we should be careful not to lose sight of the fact that our goal is to figure out what's actually going on, not just to argue about whether one side jumped to a conclusion too quickly or the other side dismissed it prematurely.

Experimental data isn't the only way to resolve these questions, but it certainly can inform us (that's why I liked Joe's use of the term "ambiguous" but did not like "uninformative"). We just have to recognize that we need more than one or two data points (experiments) to say "this is true" or "no, it's not."

(Dave Nussbaum, October 30, 2013, 17:39)

This is true. The current incentive structure works against thought experiments. I'm hoping this will change. I'm already seeing more essayistic work in open-access journals like Frontiers.

(Rolf Zwaan, October 29, 2013, 22:49)

Thanks Dave. Good to hear we are in agreement. The findings of these types of studies are generally overstated. If Cesario is right that the experiments can only be replicated by the authors themselves, then it seems impossible to understate the findings: they only apply to this researcher's subject population, lab space, experimenters, equipment, campus, floor of the building, side of the building, time zone, presence or absence of a professional sports team and of a zoo, and so on.

(Rolf Zwaan, October 29, 2013, 22:47)

Thanks, Steve!
I agree. It's also the case that "significant" effects probably create a hindsight bias that causes people to overlook methodological weaknesses: "It looks like a crappy experiment, but because there was an effect, it must have been good after all." This form of hindsight bias (of which we are probably all guilty) will disappear with preregistration.

(Rolf Zwaan, October 29, 2013, 22:43)

I agree that researchers often lose sight of real-world behavior when they start experimenting and doing experiments about experiments, something my own area, cognitive psychology, is more guilty of than social psychology.

(Rolf Zwaan, October 29, 2013, 22:40)

Could some of the aversion to thought experiments in psychology be due to the fact that reviewers will always (at least in my experience!) punish any speculative sections toward the end of journal articles? Obviously I see why they do this: it's never good to go too far beyond one's data. But could this have the unintended consequence of engendering a somewhat thoughtless and atheoretical attitude among researchers?

(Anonymous, October 29, 2013, 19:58)

I agree with some of your points here, Rolf, and perhaps it's my youthful exuberance, but I think it's fine to run the studies and learn on the fly.
As long as you're not overstating your findings, or are willing to pare them back when that's called for, I don't see any necessary disconnect between thinking and experimenting. The problem is when you run one study and pretend you've bottled a phenomenon. That's understandably and rightly frustrating for many people. Certainly we should stay away from words like "uninformative" that (I think unintentionally) devalue the just-as-useful efforts to better understand the phenomenon (or lack thereof) in question.

I definitely agree with you that there should be a whole lot more emphasis on, and credit given for, thought experiments, essays, and the kind of exploratory work whose scarcity Paul Rozin bemoans.

(Dave Nussbaum, October 29, 2013, 19:50)

Excellent post, Rolf! Having read across the disciplines for years, I've often had the impression that "data" (and significant findings) are used as a replacement for critical or deep thinking on the topic at hand. So findings are promoted rather than clear elaborations of the possible mechanisms behind the data. Of course, this is what the system in place rewards, so it's not too surprising.

(Steve Fiore, October 29, 2013, 19:24)

Sorry if this is a tangent, but do you suppose the tendency to rush into experimentation has anything to do with a lack of emphasis on observation?

There are a couple of different models for how to do science. One is based on physics: try to put everything into a few rules, best represented as equations.
Another is based on Aristotle or Darwin: observe the world around you and look for patterns.

It seems to me that the more observations you collect, the more precise a hypothesis you can formulate, and since your observations will already be in terms of human behavior, it's a little easier to figure out how to translate from theory to experimental prediction.

Observations of real-world behavior might suggest, say, that priming is more likely to have an effect where people don't have a strong opinion, don't have an emotional stake in the outcome, are under social pressure, or have a particularly wide attentional window due to positive mood or some other factor. This is just off the top of my head as a non-expert in the field; somebody who thinks about priming regularly and observes the world through that lens would have a better set of predictions.

So isn't it telling that of dozens of high-ranked psychology graduate programs, not a single one claims to be looking for students with keen observational skills? Or that naturalistic/observational studies seem to be the province of social psychologists and anthropologists? Surely experiments don't tell us anything if we don't already have some idea of what we're looking for?

(E.M., October 29, 2013, 19:23)