Skeptical Inquirer, September/October 2017

P-Hacker Confessions: Daryl Bem and Me

Stuart Vyse is a psychologist and author of Believing in Magic: The Psychology of Superstition, which won the William James Book Award of the American Psychological Association. He is a fellow of the Committee for Skeptical Inquiry.

Cornell University psychologist Daryl Bem and I have something in common. Yes, we are both research psychologists, but that’s not what I mean.

For me, it started when I was just a young graduate student. Statistics courses are a standard part of graduate training in psychology, because statistical methods are still the coin of the realm in psychological research. Most graduate students are required to conduct empirical research as part of their doctoral dissertations, and if they go on to academic positions, they often continue to do quantitative studies throughout their careers. Training in statistics is important because statistical number-crunching techniques are how we determine whether our results mean anything. Most of my graduate school cohort hated anything that looked like math, but—to my surprise—I discovered that I liked statistics courses. I took more of them than were required, and my relatively strong background in stats was an important factor in landing an academic position. (Let that be a lesson to any psychology students who might be reading this.) In graduate school, I coached my math-phobic friends on how to enter data into the computer and analyze it, and in my academic life, I did the same with students and colleagues.

With all this background, I got to be pretty good at statistical consulting, and as a result, needy researchers often came knocking. Publishing trends are gradually changing, but even now, most studies need to report statistically significant results to have any chance of getting published. Journal editors are much less interested in studies in which nothing happened, so everyone is on a quest to achieve the vaunted p (for probability) < .05 that indicates the findings are unlikely to have happened by chance. When a friend’s research or my own appeared to have come up short, I was pretty good at salvaging something from the rubble. I might suggest altering the design of the study by combining data from previously separated groups of participants, or massaging the numbers in some way. These were techniques I’d learned at my mentors’ knees, and although we had some inkling that we were fudging the results a bit, we consoled ourselves by openly reporting the steps we’d gone through and supplying some plausible-sounding justification for each manipulation. We didn’t think we were doing anything wrong.
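Why does this kind of tinkering matter? A toy simulation makes the problem concrete (this is an illustration of the general principle, not a reconstruction of anyone's actual analyses). Below, two groups are drawn from the *same* distribution, so any "significant" difference is a false positive. An honest researcher tests once; a p-hacking researcher who comes up short adds more participants and retests, reporting success if any look clears p < .05. The hacked strategy's false-positive rate climbs well above the nominal 5 percent:

```python
import numpy as np

rng = np.random.default_rng(0)

def significant(a, b):
    # Two-sample z-test on means. The data are drawn from N(0, 1),
    # so the variance is known (= 1) and a z-test is exact here.
    n = len(a)  # groups are kept the same size below
    z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)
    return abs(z) > 1.96  # two-sided test at alpha = .05

def one_experiment(hack):
    # Both groups come from the same distribution: the null is true,
    # so every "significant" result is a false positive.
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    if significant(a, b):
        return True
    if not hack:
        return False
    # The p-hack: if the first test misses, keep adding participants
    # and retesting, stopping as soon as any look reaches p < .05.
    for _ in range(3):
        a = np.concatenate([a, rng.normal(size=10)])
        b = np.concatenate([b, rng.normal(size=10)])
        if significant(a, b):
            return True
    return False

trials = 20000
honest = sum(one_experiment(hack=False) for _ in range(trials)) / trials
hacked = sum(one_experiment(hack=True) for _ in range(trials)) / trials
print(f"honest false-positive rate: {honest:.3f}")  # close to .05
print(f"hacked false-positive rate: {hacked:.3f}")  # well above .05
```

The honest strategy produces false positives at roughly the advertised 5 percent rate, while the multiple-looks strategy roughly doubles it — and this simulates only one of the manipulations described above. Combining previously separated groups, dropping outliers, or trying several dependent measures compounds the inflation the same way: each extra analysis is another draw at the p < .05 lottery.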
