Friday, September 27, 2013

30 Questions about Priming with Science and the Department of Corrections

We know about claims that priming with “professor” makes you perform better on a general knowledge test but apparently the benefits of science don’t stop there. A study published earlier this year reports findings that priming with science-related words (logical, theory, laboratory, hypothesis, experiment) makes you more moral. Aren’t we scientists great or what? But before popping the cork on a bottle of champagne, we might want to ask some questions, not just about the research itself but also about the review and publishing process involving this paper. So here goes.

(1) The authors note (without boring the reader with details) that philosophers and historians have argued that science plays a key role in the moral vision of a society of “mutual benefit.” From this they derive the prediction that this notion of science facilitates moral and prosocial judgments. Isn’t this a little fast?
(2) Images of the “evil scientist” (in movies usually portrayed by an actor with a vaguely European accent) pervade modern culture. So if it takes only a cursory discussion of some literature to form a prediction, couldn’t one just as easily predict that priming with science makes you less moral? I’m not saying it does of course; I’m merely questioning the theoretical basis for the prediction.
(3) In Study 1, subjects read a date rape vignette (a little story about a date rape). The vignette is not included in the paper. Why not? There is a reference to a book chapter from 2001 in which that vignette was apparently used in some form (was it the only one by the way?) but most readers will not have direct access to it, which makes it difficult to evaluate the experiment. In other disciplines, such as cognitive psychology, it has been common for decades to include (examples of) stimuli with articles. Did the reviewers see the vignette? If not, how could they evaluate the experiments?
(4) The subjects (university students from a variety of fields) were to judge the morality of the male character’s actions (date rape) on a scale from 1 (completely right) to 100 (completely wrong). Afterwards, they received the question “How much do you believe in science?” For this a 7-point scale was used. Why a 100-point scale in one case and a 7-point scale in the other? The authors may have good reasons for this but they play it close to the vest on this one.
(5) In analyzing the results, the authors classify the students’ field of study as a science or a non-science. Psychology was ranked among the sciences (with physics, chemistry, and biology) but sociology was deemed a non-science. Why? I hope the authors have no friends in the sociology department. Communication was also classified as a non-science. Why? I know many communication researchers who would take issue with this. The point is, this division seems rather arbitrary and provides the researchers with several degrees of freedom.
(6) The authors report a correlation of r=.36, p=.011. What happens to the correlation if, for example, sociology is ranked among the sciences?
(7) Why were no averages per field reported, or at least a scatterplot? Without this information, the correlation seems meaningless at best. Weren't the reviewers interested in it? And how about the editor?
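To make the coding issue concrete, here is a minimal sketch, in Python, of how the science/non-science classification feeds straight into the reported point-biserial correlation. The field labels and ratings below are invented purely for illustration, since the paper does not report per-field data.
```python
# Illustrative only: the ratings and field labels are invented, since the
# paper does not report per-field data. The point is that the coding decision
# (which fields count as "science") directly changes r and p.
from scipy.stats import pointbiserialr

fields  = ["physics", "psychology", "sociology", "communication", "biology", "history"]
ratings = [92, 88, 90, 70, 95, 75]  # hypothetical morality ratings (1-100 scale)

def science_correlation(science_fields):
    """Point-biserial correlation between 'is a science' (0/1) and the rating."""
    is_science = [1 if f in science_fields else 0 for f in fields]
    return pointbiserialr(is_science, ratings)

# Coding 1: sociology and communication are non-sciences (as in the paper)
print(science_correlation({"physics", "psychology", "biology"}))
# Coding 2: sociology counts as a science; the correlation comes out differently
print(science_correlation({"physics", "psychology", "biology", "sociology"}))
```
Each defensible re-coding gives a different correlation, which is exactly the kind of flexibility questions (5) through (7) are getting at.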
(8) Isn’t it ironic that the historians and philosophers, who in the introduction were credited with having introduced the notion of science as a moral force in society, are now hypothesized to be less moral than others (after all, they were ranked among the non-scientists)? This may seem like a trivial point but it really is not when you think about it.
(9) Study 2 uses the vaunted “sentence-unscrambling task” to prime the concept of “science.” You could devote an entire blog post to this task, but I will confine myself to a brief observation and move on. The prime words were laboratory, scientists, hypothesis, theory, and logical. The control words were… well, what were they? The paper isn’t clear about it, but it looks like paper and shoes were two of them (there’s no way to tell for sure and apparently no one was interested in finding out).
(10) Why were the control words not low-frequency, long, low-imageability words like the primes (assuming shoe and paper are representative of the control set)? As it is, the primes stick out like a sore thumb among the other words from which a sentence has to be formed, whereas the control words are a much closer fit.
(11) Doesn’t this make the task easier in the control condition? If so, there is another confound.
(12) Were the control words thematically related, like the primes obviously were?
(13) If so, what was the theme? If not, doesn’t it create a confound to have salient, thematically related words in the prime condition that can never be used in the sentence, and non-salient, thematically unrelated words in the control condition?
(14) Did the researchers inquire about the subjects’ perceptions of the task? Weren't the reviewers and editor curious about this?
(15) Wouldn’t these subjects have picked up on the scientific theme of the primes?
(16) Wouldn’t this have affected their perceptions of the experiment in any way?
(17) What about the results? What about them indeed? Before we can proceed, we need to clear up a tiny issue. It turns out that there are a few booboos in the article. An astute commenter on the paper had noticed anomalies in the results of the study and some impossibly large effect sizes. The first author responded with a string of corrections. In fact, no fewer than 18 of the values reported in the paper were incorrect. You will not find the corrected values in the article itself; they can only be found in the comment section.
(18) It is a good thing that PLoS ONE has a comment section, of course. But the question is this: shouldn’t such extensive corrections have been incorporated in the paper itself? People who download the pdf version of the article will not know that pretty much all the numbers reported in the paper are wrong. That these numbers are wrong is the author’s fault, but at least she was forthcoming in providing the corrections. It would seem to be the editor's and publisher's responsibility to make sure the reader has easy access to the correct information. The authors would also be served well by this.
(19) In her correction (which is about 25% of the length of the original paper), the first author explains that the first three studies were rerun because a reviewer requested different, more straightforward dependent variables that directly assessed morality judgments, rather than the judgments related to punitiveness or blame, or too closely tied to the domain of science, that were used in the original submission. Apparently, many of the errors occurred because the manuscript was not properly updated with the new information. Why did the reviewers and editor miss all of these inconsistencies, though?
(20) And what happened to the discarded experiments? Surely they could have been included along with the new experiments? There are no word limitations at PLoS ONE.  Having authored a 14-experiment paper that was recently published in this journal, I'm pretty sure I'm right on this one.

Let’s return to the paper armed with the correct (or so we assume) results.

(21) The subjects in Study 2 were primed with “science” or read the neutral words (which were not provided to the reader) and then read the date rape vignette (which was not provided to the reader) and made moral judgments about the actions in the vignette (whatever they were). The corrected data show that the subjects in the experimental condition rated the actions as more immoral than did the subjects in the control condition. However, as the correction also states, the standard deviation was much higher in the control condition (28.02) than in the experimental condition (7.96). These variances are highly unequal (a ratio of more than 12 to 1); doesn’t this compromise the t-test that was reported?
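For what it is worth, the standard remedy when variances differ this much is Welch's t-test, which does not assume homogeneity of variance. Here is a minimal sketch in Python; the raw data are not available, so the two groups below are hypothetical stand-ins that merely mimic the reported standard deviations.
```python
# Hypothetical stand-in data: only the standard deviations (7.96 and 28.02)
# echo values from the correction; everything else is invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
science_prime = rng.normal(loc=90, scale=7.96, size=20)   # low-variance group
control       = rng.normal(loc=82, scale=28.02, size=20)  # high-variance group

t_student, p_student = ttest_ind(science_prime, control)                   # assumes equal variances
t_welch,   p_welch   = ttest_ind(science_prime, control, equal_var=False)  # Welch's correction
print(p_student, p_welch)  # the two p-values can differ when variances are this unequal
```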
(22) The corrections mention that the high variance in the neutral condition is caused by two subjects, one giving the date rape a 10 on the 100-point scale (in other words, finding it highly acceptable) and the other a 40. The average for that condition is 81.57, so aren’t these outliers, at least the 10 score? (By the way, was this date-rape approving subject reported to the relevant authorities?)
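To see how much leverage two such scores have in a small sample, here is a quick back-of-the-envelope sketch; the "ordinary" responses are invented and merely chosen to sit near the reported control-condition mean of 81.57.
```python
# Two extreme scores (a 10 and a 40) in an otherwise high-scoring group
# drag the mean down and inflate the standard deviation considerably.
# The "ordinary" responses below are hypothetical.
import numpy as np

ordinary = [85, 88, 90, 92, 86, 95, 89, 91, 84, 93, 87, 90]
with_outliers = ordinary + [10, 40]

for label, scores in [("without outliers", ordinary), ("with outliers", with_outliers)]:
    print(label, round(float(np.mean(scores)), 2), round(float(np.std(scores, ddof=1)), 2))
```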
(23) In Study 3 subjects received the same priming manipulation as in Study 2 and then rated the likelihood that they would engage in each of several activities in the following month, some of which were prosocial and some of which were not. The prosocial actions listed were giving to charity, giving blood, and volunteering. Were these all the actions that were used in the experiment? It is not clear from the paper.
(24) Were the values that were used in the statistical test the averages of the responses to the categories of items (e.g., the average rating for the three prosocial actions)?
(25) And what happened to the non-prosocial activities? Shouldn't a proper analysis have included those in a 2 (prime) by 2 (type of activity) ANOVA? 
(26) If this analysis is performed, is the interaction significant?
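As it happens, the interaction in such a 2 (prime: between subjects) × 2 (type of activity: within subjects) design can be tested very simply: compute each subject's prosocial-minus-non-prosocial difference score and compare the two prime groups on it. A minimal sketch in Python, with made-up numbers, since the item-level data are not reported:
```python
# The prime (between) x activity type (within) interaction is equivalent to
# comparing per-subject difference scores (prosocial minus non-prosocial)
# across the two prime conditions. All numbers below are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n = 20  # hypothetical group size

science_pro, science_non = rng.normal(5.5, 1.0, n), rng.normal(3.5, 1.0, n)
control_pro, control_non = rng.normal(4.5, 1.0, n), rng.normal(3.5, 1.0, n)

diff_science = science_pro - science_non  # prosocial advantage after science prime
diff_control = control_pro - control_non  # prosocial advantage after neutral prime

t, p = ttest_ind(diff_science, diff_control)
print(t, p)  # this is exactly the 2 x 2 interaction test
```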
(27) In the corrected data the effect size is .85. Doesn’t this seem huge? Readers of my previous post already know the answer: yes, to the untrained eye perhaps, but it is the industry standard (Step 7 in that post).
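For reference, a Cohen's d of .85 means the two group means sit almost a full pooled standard deviation apart. The arithmetic is simple enough to check yourself; the numbers in the sketch below are hypothetical, just to show the computation.
```python
# Cohen's d for two independent groups: difference in means divided by the
# pooled standard deviation. The inputs here are hypothetical illustrations.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

print(cohens_d(m1=5.5, s1=1.2, n1=20, m2=4.5, s2=1.2, n2=20))  # about 0.83
```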
(28) The corrections state that Study 4 originally contained a third condition but that it was left out at the behest of a reviewer who felt that it muddles rather than clarifies the findings (yes, we wouldn’t want the findings to be muddled, would we?). I appreciate the honesty but was everyone, including the editor, on board with this serious amputation?
(29) The initial version of the corrections (yes, I forgot to mention that there were two versions of corrections) mentioned that there were 26 participants in the control condition and 17 in the experimental condition. Where does this huge discrepancy come from? And does it affect the analyses?
(30) In the discussion it is mentioned that Study 2 investigated academic dishonesty. This was one of the experiments that was dropped, right? Another (minor) addition for the corrections perhaps.

I guess there are a great many more questions to ask but let me stop here. The article uses logical, hypothesis, theory, laboratory, and scientist as primes. I can make a sentence out of those: Absent a theory, it is logical that there is no basis for the hypothesis that was tested in the laboratory and (sloppily) reported by the scientist.

[Update, April 10, 2014. As I found out only recently (if you're forming a rapid response team, don't forget not to invite me), back in September of last year, the first author of the PLoS ONE article addressed (most of) these questions in the comments section of that article. The response provides more information and acknowledges some weaknesses of the study.]

Tuesday, September 17, 2013

How to Cook up Your Own Social Priming Article


1. Come up with an idea for a study. Don't sweat it. It's not as hard as it looks. All you need to do is take an idiomatic expression and run with it. Here we go: the glass is half full or the glass is half empty.
2. Create a theoretical background. Surely there is some philosopher (preferably a Greek one) who has said something remotely relevant about optimists and pessimists while staring at a wine glass. Include him. For extra flavor you might want to add an anthropologist or a sociologist into the mix; Google is your friend here. Top it off with a few social psychology references. There, you have your theoretical framework. That wasn’t so hard, was it?
3. Think of a manipulation. Again, this is nothing to get nervous about. All you need to do is take the expression literally. Imagine this scenario. The subject is in a room. In the half-full condition, a confederate comes in with an empty glass and a bottle of water. She then pours the glass half full and leaves the room. In the half-empty condition, she comes in with a full glass and a bottle. She then pours half the glass back into the bottle and leaves.
4. Think of a dependent measure. This is where the fun begins. As you may know, the dependent measure of choice in social priming research is candy. You simply cannot go wrong with candy! So let’s say the subjects get to choose ten differently colored pieces of candy from a container that has equal numbers of orange and brown M&Ms. Your prediction here is that people in the half-full condition will be more likely to pick the cheery orange M&Ms than those in the half-empty condition, who will tend to prefer the gloomy brown ones.
5. Get a sample. You don’t want to overdo it here. About 30 students from a nondescript university will do nicely. Only 30 in a between-subjects design? you worry. Worry no more. This is how we roll in social priming.
6. Run Experiment 1. Don’t fuss about issues like the age and gender of the subjects and details of the procedure; you won't be reporting them anyway.
7. Analyze the results. Normally, you’d worry that you might not find an effect. But this is social priming, remember? You are guaranteed to find an effect. In fact, your effect size will be around .8. That’s social priming for you!
8. Now on to Experiment 2. Come up with a new manipulation. What’s wrong with the glass and bottle from Experiment 1? you might wonder. Are you kidding? This is social priming research. You need a new manipulation. Just let your imagination run wild. How about balloons? In the half-full condition, the confederate walks in with an inflated balloon and lets half the air out in front of the subject. In the half-empty condition, she half-inflates a balloon. And bingo! You’re done (careful with the word bingo, by the way; it makes people walk real slow).
9. Think of a new dependent measure. Why not have the subjects list their favorite TV shows? Your prediction here is that the half-full condition will list more sitcoms like Seinfeld and The Big Bang Theory than the half-empty condition, which will list more crime shows like CSI and Law & Order (or maybe one of those stupid vampire shows). You could also include a second dependent measure. How about having subjects indicate how much they identify with Winnie the Pooh characters? Your prediction here is obvious: the half-full condition will identify with Tigger the most, while the half-empty condition will prefer Eeyore by a landslide.
10. Repeat Steps 5-7.
11. Now you are ready to write your General Discussion. You want to discuss the implications of your research. Don’t be shy here. Talk about the major implications for business, health, education, and politics that this research so evidently has.
12. For garnish, add a quirky celebrity quote. Don’t work yourself into a lather. Just go to www.goodreads.com to find a quote. Here, I already did the work for you: “Some people see the glass half full. Others see it half empty. I see a glass that's twice as big as it needs to be.” George Carlin. Just say something clever like: Unless you are like George Carlin, it does make a difference whether the glass is half empty or half full.
13. The next thing you need is an amusing title. And here your preparatory work really pays off. Just use the expression from Step 1 as your main title, describe your (huge) effect in the subtitle, and you're done: Is the Glass Half Empty or Half Full? The Effect of Perspective on Mood.
14. Submit to a journal that regularly publishes social priming research. They’ll eat it up.
15. Wax poetic about your research in the public media. If it wasn’t a good idea to be modest in the General Discussion, you really need to let loose here. Like all social priming research, your work has profound consequences for all aspects of society. Make sure the taxpayer (and your Dean, haha) knows about it.
16. If bloggers are critical of your work, just ignore them. They’re usually cognitive psychologists with nothing better to do.
17. Once you’ve worked through this example, you might try your hand at more advanced topics like coming out of the closet. Imagine all the fun you’ll have with that one!
18. Good luck!