
The Quick Fix: Why Fad Psychology Can't Cure Our Social Ills

by Jesse Singal

To my parents, Sydney Altman and Bruce Singal, who gave me all the love and support I needed and then some, who taught me there are no quick fixes, and who let me skip school once in a while just for the hell of it

INTRODUCTION

If you’re the sort of person who buys and reads books about human behavior, then it is likely you have recently encountered an exciting, counterintuitive new psychological idea that seems as if it could help solve a pressing societal problem like educational inequality, race relations, or misogyny. Maybe you came across it in a TED Talk. Or, if not there, in an op-ed or blog post or book.

It is, after all, a golden age for popular behavioral science. As the University of Virginia law professor Gregory Mitchell, a keen critic and observer of the field, wrote in 2017, “With press releases from journals and universities feeding a content-hungry media, publishing houses looking for the next Gladwellian bestseller, governments embracing behavioral science, and courts increasingly open to evidence from social scientists, psychologists have more opportunities than ever to educate the public on what they have learned about why people behave as they do.”1

I should know. Starting in March 2014, I was the editor of Science of Us, New York magazine’s newly launched online social science section. It was my job, and the job of the very talented people I worked with, to find new, interesting behavioral science research to write about every day of the week and to do so in a rigorous, sometimes skeptical manner. Thanks to a fairly stats-heavy master’s program I had completed, I knew some of the differences between good and bad research, and some of the ways quantitative claims can mislead.

What I didn’t anticipate was the fire hose of overhyped findings that would fill my email in-box daily, the countless press releases from research institutions touting all-caps AMAZING results that would surely blow my mind, and the minds of our readers, bringing us impressive web traffic in the process. I treaded water as best I could, trying to resist the lure of bad science by writing and editing stories we could be proud of. But I don’t think I quite grasped the full scale of the problem.

That changed in September 2015, a year and a half or so into my job. That was when I met with Jeffrey Mosenkis, who does communications for the research organization Innovations for Poverty Action and who also happens to have a PhD in comparative human development (a blend of anthropology, social psychology, and other fields). Perhaps because he had noticed that I appreciated the debunking of overhyped research findings, he had decided to pass along a tip: I should look at the flaws in the implicit association test. Maybe you’ve heard of this test.
Commonly referred to as the IAT, it is seen by important people with impressive credentials as the most promising technological tool for attenuating the impact of racism. The idea is that by gauging your reaction time to various combinations of words and images—say, a black face next to the word “happy,” or a white one next to the word “danger”—the test can reveal your unconscious, or implicit, bias against various groups. The test, introduced in 1998, has been a blockbuster success. Anyone can take it on Harvard University’s website, and over the years its architects and evangelists, some of the biggest names in social psychology, have adapted it for all sorts of diversity-training uses. It would be hard to overstate its impact on schools, police departments, corporations, and many other institutions.

In his email, Jeff said there was a story waiting to be told by a journalist; the test was weak and unreliable, statistically speaking. A group of researchers had published work showing rather convincingly that the test barely measures anything of real-world import. Which, if true, would naturally raise some questions about the possibility that Harvard was apparently “diagnosing” millions (literally millions) of people as unconsciously biased on the basis of a flimsy methodology, and about all the money being spent on IAT-based trainings.

I was intrigued by Jeff’s email and soon got in touch with one of the skeptical researchers, Hart Blanton, who was then at the University of Connecticut. As I started looking into his claims, I realized that I had simply assumed the test did what its most enthusiastic proponents said it did, despite the rather audacious nature of their claim: that a ten-minute computer task with no connection to the real world could predict subtle forms of real-world discrimination. I had credulously accepted these claims because I had figured that if almost the entire discipline of social psychology had embraced this innovation as a cutting-edge tool in the fight against racism, and a multitude of organizations outside academia had followed suit, all these people must have known what they were doing. This brought a pang of shame. “I believe this thing because a lot of people say it is true” is not a great stance for a science writer and editor.

After a lot of reading and interviewing, I concluded Blanton was correct. As I wrote in a subsequent article, the statistical evidence for the sorts of claims the IAT’s creators were making was sorely lacking.2 There was a gap between what many people believed to be true about the test and what the evidence revealed. Part of what was going on was that the IAT told a good story that lined up with certain liberal anxieties about race relations, tinged with an optimistic note—Sure, everyone’s racist deep down, but this test can help us discover and undo those biases, one person at a time!—and people wanted to believe that story.

If millions of people believed in the IAT largely on the basis of good storytelling and the impressive credentials of its creators, what did that say about the manner in which we talked about these issues? Could it be that urgent tasks like police reform were being approached in the wrong way? And what about the broader current state of behavioral science? What else was I missing?
WHAT I SOON REALIZED is that our society’s fascination with psychology has a dark side: many half-baked ideas—ideas that may not be 100 percent bunk but which are severely overhyped—are being enthusiastically spread, despite a lack of hard evidence in their favor. The IAT is one example, but there are numerous others. The popularity of these ideas, as well as the breathless manner in which they are marketed by TED Talks and university press offices and journalists and podcasts, is not harmless. It misallocates resources to overclaiming researchers when others are experiencing a funding crunch, and it degrades the institution of psychology by blurring the line between behavioral science and behavioral pseudoscience.

Perhaps most important, these ideas are frequently being adopted by schools, corporations, and nonprofits eager to embrace the Next Big Thing to come out of the labs and lecture halls of Harvard or the University of Pennsylvania. As the decision-makers who work in these institutions have grown more fluent in science and cognizant of the need to look to behavioral scientists for guidance (a good thing), they’ve also become more susceptible to half-baked behavioral science (a bad thing).

And this explosion of interest in psychological science has occurred at a time when, if anything, people should be more cautious about embracing new and exciting psychological claims. As we’ll see, many findings in psychology—including those featured in introductory textbooks—are failing to replicate, meaning that when researchers attempt to re-create them with new experiments, they are coming up short. This so-called replication crisis has cast a giant shadow over the entire field of psychology, and the best available evidence suggests that a sizable chunk of published psychological findings may be false (though the size of that chunk is a source of heated debate). Many psychologists themselves may be unwittingly helping to promote half-baked science that seems to be built upon a solid foundation of published research. In all likelihood, we’ll look back wincingly at some of the popular theories being taught and developed today and, all too often, transformed into sleek interventions that promise to alleviate our ills.*

THIS BOOK IS AN ATTEMPT to explain the allure of fad psychology, why that allure is so strong, and how both individuals and institutions can do a better job of resisting it. It is important to improve our understanding of how behavioral scientific information circulates in the public realm, because only sound knowledge will earn us the improvements we wish for. Just as we can’t enact successful environmental and energy policies while denying global warming, and we can’t improve global public health without taking a stand against anti-vaccination myths, we will never solve the pressing social issues of the day—racism and inequality and the education gaps and so many others—while relying on claims about human behavior and how to change it that are half-true at best.
