
Wacky Hermann and the Nonsense Syllables: The Need for Weirdness in Psychological Experimentation


Earlier this week I attended a symposium in Nijmegen on Solid Science in Psychology. It was organized mostly by social psychologists from Tilburg University along with colleagues from Nijmegen. (It is heartening to see members of Stapel’s former department step up and take such a leading role in the reformation of their field.) The first three speakers were Uri Simonsohn, Leif Nelson, and Joe Simmons of false-positive psychology fame. I enjoyed their talks (which were not only informative but also quite humorous for such a dry subject) but I already knew their recent papers and I agree with them, so their talks did not change my view much.

Later that day the German social psychologist Klaus Fiedler spoke. He offered an impassioned view that ran somewhat contrary to the current replication rage. I didn’t agree with everything Fiedler said, but I did agree with most of it. What’s more, he got me to think, and getting your audience to think is what a good speaker wants.

Fiedler’s talk was partly a plea for creativity and weirdness in science. He likened the scientific process to evolution. There are phases of random generation and phases of selection. If we take social psychology, many would say that this field has been in a state of relatively unconstrained generation of ideas (see my earlier posts on this). According to Fiedler, this is perfectly normal.

Also perfectly normal is the situation that we find ourselves in now, a phase in which many people express doubts about the reliability and validity of much of this research. These doubts are finding their way into various replication efforts as ways to select the good ideas from the bad ones. As I’ve discussed earlier (and in the ensuing blogalog with Dan Simons, here, here, here, here, and here), direct replications are a good start, but somewhat less direct replications are also necessary to select the most valid ideas. 

So I’m glad we’re having this revolution. At the same time, I confess to having an uneasy feeling. During Fiedler’s talk, I had a Kafkaesque vision of an army of researchers dutifully but joylessly going about their business: generating obvious hypotheses guaranteed to yield large effect sizes, performing power analyses, pre-registering their experiments, reporting each and every detail of their experiments, storing and archiving their data, and so on. Sure, this is science. But where is the excitement? Remember, we’re scientists, not librarians or accountants. To be sure, I have heard people wax poetic about initiatives to archive data. But are these people for real? Archiving your data is about as exciting as filing your taxes.
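(For what it’s worth, the power analysis is the least painful of those duties. Below is a minimal sketch in Python, assuming the statsmodels package is available; the effect size, alpha, and power targets are placeholder values for illustration, not a recommendation.)

    # A minimal a priori power analysis: how many participants per group
    # for a two-sample t-test? (Placeholder values, purely for illustration.)
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5,        # assumed Cohen's d
                                       alpha=0.05,             # significance level
                                       power=0.80,             # desired power
                                       alternative='two-sided')
    print(f"Participants needed per group: {n_per_group:.1f}")  # roughly 64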

Wacky Hermann
Creativity and weirdness are essential for progress in science. This is what Fiedler argued, and I agree. Heck, people at the time must have found it pretty silly that Hermann Ebbinghaus spent hours each day memorizing completely useless information (nonsense syllables) by reciting them—with the same neutral voice inflection each time—to the sound of a metronome. Try telling that at a party when asked what you do for a living! And yet psychology would not have been the same if Ebbinghaus had decided to spend his time in a more conventional manner, for example by discussing the latest political issues in the local Biergarten, by taking his dog on strenuous walks, or by forming his own garage band (though Wacky Hermann and the Nonsense Syllables would have been a killer name).

So I agree with Klaus Fiedler. We need creativity and weirdness in science. We need to make sure that the current climate does not stifle creativity. But we also need to have mechanisms in place to select the most valid ideas. I think we can have our cake and eat it too by distinguishing between exploratory and confirmatory research, as others have already suggested.

It is perfectly okay to be wacky and wild (the wackier and wilder the better as far as I’m concerned), as long as you indicate that your research is exploratory (perhaps there should be journals or journal sections devoted to exploratory ideas). But if your research is confirmatory (and I think each researcher should do both exploratory and confirmatory research), then you do need to do all the boring things that I described earlier. Because boring as they might be, they are also unavoidable if we want to have a solid science.


Comments

  1. Nice post, Rolf. For those using fMRI I would add one caveat: know thy methods. I don't mean be able to explain to a colleague how BOLD works, or know how to get raw data off the scanner and through an SPM script into a GUI. I mean really understand each and every step that you claim you need to use. That means understanding how your acquisition is established - the stimuli, the control condition, subject selection/screening, the scanner parameter settings, everything - what processing steps you're applying and why - including slice timing correction, motion correction, filtering and so on - as well as the statistical limitations of the final grand analysis. When you do this you quickly realize that many parts of this complex puzzle are poorly validated for your particular application. Thus, you should recognize that you are essentially a test pilot attempting to get a novel device off the ground for the first time. (That is, unless you are attempting to replicate a previous study, in which case you may be the second pilot.)

    It is a fact that most of the steps used in a standard fMRI experiment have not been tested under the conditions in which they are now being used. An example: slice timing correction and motion correction for resting-state fMRI. Why is this? It's because we're more excited by attempting to understand the brain than we are by learning the limitations of all the steps in our methods, working through validation of our tools, and so on. That stuff just isn't much fun, rather like archiving your data.

    I can offer a silver lining. For those who do decide to dig deep into the limitations of fMRI, you will see a whole universe of opportunities generated by the limitations of so much of what we use in generating the colored blobs on a brain. In short, it will make you think. Really think. And that may just make you more creative and whacky, but whacky for the right reasons.

    1. Thanks. I agree that "know thy methods" is important for any type of study, but especially for fMRI, and that the limitations of this method in particular are not well understood. I like your initially counterintuitive notion that closer scrutiny of the limitations of the method will make you more creative.

    2. "Initially counter-intuitive notion" is exactly right. Just ask Jack Gallant about knowing the methods and subsequent creativity!

  2. Thanks for posting this, Rolf -- I'm looking forward to seeing the videos from the conference when they become available.

    One thought on weirdness and science has been bouncing around in my head for a while, but I haven't managed to get around to blogging about it. I was thinking about it in terms of counter-intuitiveness, but weirdness is a good label too. The main point is that there's a big difference between weirdness for its own sake and weirdness for a good reason.

    In recent years we've seen a big increase in the publication of weird and counter-intuitive results. Cynically speaking, many of these findings are not only designed to generate attention (from the media, but also from journals themselves, which have an incentive to publish "surprising" results), but are also frequently based on less-than-solid science.

    On the other hand, some of the best research in psychology is counter-intuitive (and weird), but it's counter-intuitive for a reason. For example, the classic dissonance experiment in which people were paid either $1 or $20 to lie and say that a boring task was interesting. It was extremely counter-intuitive, in that it violated the basic expectations of the dominant paradigms at the time, and weird in that its methods had nothing in common with rats pressing levers.

    But the counter-intuitiveness and weirdness were not arbitrary or gratuitous. They were theory-driven and highlighted the differences between theories in a dramatic and compelling way. I think this was vital not only to clearly demonstrate the validity of the theory, but also to make it compelling enough that people would take note.

    Without having seen Fiedler's talk yet, maybe I'm veering off on a tangent, but it seems to me that we should absolutely be encouraging weird research, and of course it should be conducted in a responsible way, as you describe. But we should not be promoting weirdness for its own sake; we should promote it because it can make an important contribution to the development of science in a way that boring, repetitive science may not be able to.

    1. Thanks Dave. This is an important qualification and maybe indicates where the evolution analogy breaks down. We don't want random generation of ideas. There needs to be some method to the madness. For example, the Psych Science study on thinking outside the box was on the wrong side of weird. The expression "thinking outside the box" doesn't even refer to real boxes! So we are in complete agreement. We don't want weirdness for weirdness' sake. We want theoretically motivated weirdness. Maybe I should have made this more clear.

  3. I wasn't there, but I read Ruud Abma's report (http://depublicatiefabriek.blogspot.nl/), and according to him Fiedler's argument hinged on the idea that in social psychology there are simply too many factors that could explain an experiment's result to put much weight on direct replication. So if a direct replication fails, 'it could be anything', and it's better to put your energy into conceptual replications. That's a common line of thinking, but it seems to imply that all those boring things would be pointless. If that was Fiedler's argument, I'm not sure he would agree with your call for combining exploratory and confirmatory research.

    1. I believe it is correct that this was Fiedler's more general point, and this is one of the things I disagree with. If there are too many factors that could explain an experiment's results, then it is a bad experiment.

  4. I admire the spirit, but 'whackiness' is the wrong criterion. Ebbinghaus wasn't being whacky; he was being theory- and hypothesis-driven first, which then made his method, whacky as it was, make sense.

  5. Good point. I agree. It echoes the point made by Dave Nussbaum. We don't want weirdness for weirdness' sake. But we should not shy away from things that might look weird at first sight.

  6. Hi Rolf, nice post. I always think Hermann Ebbinghaus and his nonsense syllables would have made a great scene in Monty Python’s Flying Circus. Perhaps in the Ministry of Silly Words…

    I completely agree that weirdness in science can be a good thing. However, one thing I especially admire about Hermann Ebbinghaus is the fact that he also seemed very reluctant to publish his findings and only did so after replicating himself numerous times. It took him years and years of doing his weird stuff over and over again, before he himself was convinced that it was worth publishing. Thus, for Hermann Ebbinghaus at least, weirdness and replication rage can go hand in hand.

    1. :) Michael Palin would have been great as Ebbinghaus.

      Good point about Ebbinghaus' drive to replicate before publishing--an example to us all.

  7. A while back on one of Dorothy Bishop's posts, I suggested making a distinction between Experiments and Observations. I think this maps on to your distinction between confirmatory and exploratory research.

    http://deevybee.blogspot.com.au/2011/10/accentuate-negative.html?showComment=1319783913618

    As Neuroskeptic pointed out in response, people may still claim that their study was an Experiment. But I think the idea of Registered Reports (a la Cortex) gets around that.

    1. Yep, that looks like the same distinction. I like the idea of Registered Reports (we're using it in our special issue on replication in Frontiers in Cognition), although I think it would be good to test-drive it first in a few journals before wholesale adoption (you can never fully anticipate how people will try to rig the system to their own advantage).

      I see nothing wrong with publishing exploratory studies or observations, so maybe there shouldn't be a bias against them, even though confirmatory studies obviously carry more weight. Journals could have a section titled Confirmation and another called Exploration.

