Research Methods in Psychology
EVALUATING A WORLD OF INFORMATION
THIRD EDITION

Beth Morling
UNIVERSITY OF DELAWARE
W. W. NORTON & COMPANY, INC. NEW YORK • LONDON
W. W. Norton & Company has been independent since its founding in 1923,
when William Warder Norton and Mary D. Herter Norton first published
lectures delivered at the People’s Institute, the adult education division of
New York City’s Cooper Union. The firm soon expanded its program beyond
the Institute, publishing books by celebrated academics from America and
abroad. By midcentury, the two major pillars of Norton’s publishing program—
trade books and college texts—were firmly established. In the 1950s, the Norton
family transferred control of the company to its employees, and today—with
a staff of four hundred and a comparable number of trade, college, and
professional titles published each year—W. W. Norton & Company stands as
the largest and oldest publishing house owned wholly by its employees.
Copyright © 2018, 2015, 2012 by W. W. Norton & Company, Inc.
All rights reserved. Printed in Canada.
Editor: Sheri L. Snavely
Project Editor: David Bradley
Editorial Assistant: Eve Sanoussi
Manuscript/Development Editor: Betsy Dilernia
Managing Editor, College: Marian Johnson
Managing Editor, College Digital Media: Kim Yi
Production Manager: Jane Searle
Media Editor: Scott Sugarman
Associate Media Editor: Victoria Reuter
Media Assistant: Alex Trivilino
Marketing Manager, Psychology: Ashley Sherwood
Design Director and Text Design: Rubina Yeh
Photo Editor: Travis Carr
Photo Researcher: Dena Digilio Betz
Permissions Manager: Megan Schindel
Composition: CodeMantra
Illustrations: Electragraphics
Manufacturing: Transcontinental Printing
Permission to use copyrighted material is included in the Credits section beginning on page 603.
Library of Congress Cataloging-in-Publication Data
Names: Morling, Beth, author.
Title: Research methods in psychology : evaluating a world of information / Beth Morling, University of Delaware.
Description: Third Edition. | New York : W. W. Norton & Company, [2017] | Revised edition of the author’s Research methods in psychology, [2015] | Includes bibliographical references and index.
Identifiers: LCCN 2017030401 | ISBN 9780393617542 (pbk.)
Subjects: LCSH: Psychology—Research—Methodology—Textbooks. | Psychology, Experimental—Textbooks.
Classification: LCC BF76.5 .M667 2017 | DDC 150.72—dc23
LC record available at https://lccn.loc.gov/2017030401
Text-Only ISBN 978-0-393-63017-6
W. W. Norton & Company, Inc., 500 Fifth Avenue, New York, NY 10110 wwnorton.com W. W. Norton & Company Ltd., 15 Carlisle Street, London W1D 3BS
1 2 3 4 5 6 7 8 9 0
For my parents
Brief Contents
PART I Introduction to Scientific Reasoning
CHAPTER 1 Psychology Is a Way of Thinking 5
CHAPTER 2 Sources of Information: Why Research Is Best and How to Find It 25
CHAPTER 3 Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57
PART II Research Foundations for Any Claim
CHAPTER 4 Ethical Guidelines for Psychology Research 89
CHAPTER 5 Identifying Good Measurement 117
PART III Tools for Evaluating Frequency Claims
CHAPTER 6 Surveys and Observations: Describing What People Do 153
CHAPTER 7 Sampling: Estimating the Frequency of Behaviors and Beliefs 179
PART IV Tools for Evaluating Association Claims
CHAPTER 8 Bivariate Correlational Research 203
CHAPTER 9 Multivariate Correlational Research 237
PART V Tools for Evaluating Causal Claims
CHAPTER 10 Introduction to Simple Experiments 273
CHAPTER 11 More on Experiments: Confounding and Obscuring Variables 311
CHAPTER 12 Experiments with More Than One Independent Variable 351
PART VI Balancing Research Priorities
CHAPTER 13 Quasi-Experiments and Small-N Designs 389
CHAPTER 14 Replication, Generalization, and the Real World 425
Statistics Review Descriptive Statistics 457
Statistics Review Inferential Statistics 479
Presenting Results APA-Style Reports and Conference Posters 505
Appendix A Random Numbers and How to Use Them 545
Appendix B Statistical Tables 551
About the Author

BETH MORLING is Professor of Psychology at the University of Delaware. She attended Carleton College in Northfield, Minnesota, and received her Ph.D. from the University of Massachusetts at Amherst. Before coming to Delaware, she held positions at Union College (New York) and Muhlenberg College (Pennsylvania). In addition to teaching research methods at Delaware almost every semester, she also teaches undergraduate cultural psychology, a seminar on the self-concept, and a graduate course in the teaching of psychology. Her research in the area of cultural psychology explores how cultural practices shape people’s motivations. Dr. Morling has been a Fulbright scholar in Kyoto, Japan, and was the Delaware State Professor of the Year (2014), an award from the Council for Advancement and Support of Education (CASE) and the Carnegie Foundation for the Advancement of Teaching.
Preface
Students in the psychology major plan to pursue a tremendous variety of careers—not just becoming psychology researchers. So they sometimes ask: Why do we need to study research methods when we want to be therapists, social workers, teachers, lawyers, or physicians? Indeed, many students anticipate that research methods will be “dry,” “boring,” and irrelevant to their future goals. This book was written with these very students in mind—students who are taking their first course in research methods (usually sophomores) and who plan to pursue a wide variety of careers. Most of the students who take the course will never become researchers themselves, but they can learn to systematically navigate the research information they will encounter in empirical journal articles as well as in online magazines, print sources, blogs, and tweets.
I used to tell students that by conducting their own research, they would be able to read and apply research later, in their chosen careers. But the literature on learning transfer leads me to believe that the skills involved in designing one’s own studies will not easily transfer to understanding and critically assessing studies done by others. If we want students to assess how well a study supports its claims, we have to teach them to assess research. That is the approach this book takes.
Students Can Develop Research Consumer Skills

To be a systematic consumer of research, students need to know what to prioritize when assessing a study. Sometimes random samples matter, and sometimes they do not. Sometimes we ask about random assignment and confounds, and sometimes we do not. Students benefit from having a set of systematic steps to help them prioritize their questioning when they interrogate quantitative information. To provide that, this book presents a framework of three claims and four validities, introduced in Chapter 3. One axis of the framework is the three kinds of claims researchers (as well as journalists, bloggers, and commentators) might make: frequency claims (some percentage of people do X), association claims (X is associated with Y), and causal claims (X changes Y). The second axis of
the framework is the four validities that are generally agreed upon by methodol- ogists: internal, external, construct, and statistical.
The three claims, four validities framework provides a scaffold that is reinforced throughout. The book shows how almost every term, technique, and piece of information fits into the basic framework.
The framework also helps students set priorities when evaluating a study. Good quantitative reasoners prioritize different validity questions depending on the claim. For example, for a frequency claim, we should ask about measurement (construct validity) and sampling techniques (external validity), but not about random assignment or confounds, because the claim is not a causal one. For a causal claim, we prioritize internal validity and construct validity, but external validity is generally less important.
Through engagement with a consumer-focused research methods course, students become systematic interrogators. They start to ask more appropriate and refined questions about a study. By the end of the course, students can clearly explain why a causal claim needs an experiment to support it. They know how to evaluate whether a variable has been measured well. They know when it’s appropriate to call for more participants in a study. And they can explain when a study must have a representative sample and when such a sample is not needed.
What About Future Researchers?

This book can also be used to teach the flip side of the question: How can producers of research design better studies? The producer angle is presented so that students will be prepared to design studies, collect data, and write papers in courses that prioritize these skills. Producer skills are crucial for students headed for Ph.D. study, and they are sometimes required by advanced coursework in the undergraduate major.
Such future researchers will find sophisticated content, presented in an accessible, consistent manner. They will learn the difference between mediation (Chapter 9) and moderation (Chapters 8 and 9), an important skill in theory building and theory testing. They will learn how to design and interpret factorial designs, even up to three-way interactions (Chapter 12). And in the common event that a student-run study fails to work, one chapter helps them explore the possible reasons for a null effect (Chapter 11). This book provides the basic statistical background, ethics coverage, and APA-style notes for guiding students through study design and execution.
Organization

The fourteen chapters are arranged in six parts. Part I (Chapters 1–3) includes introductory chapters on the scientific method and the three claims, four validities framework. Part II (Chapters 4–5) covers issues that matter for any study: research
ethics and good measurement. Parts III–V (Chapters 6–12) correspond to each of the three claims (frequency, association, and causal). Part VI (Chapters 13–14) focuses on balancing research priorities.
Most of the chapters will be familiar to veteran instructors, including chapters on measurement, experimentation, and factorial designs. However, unlike some methods books, this one devotes two full chapters to correlational research (one on bivariate and one on multivariate studies), which help students learn how to interpret, apply, and interrogate different types of association claims, one of the common types of claims they will encounter.
There are three supplementary chapters, on Descriptive Statistics, Inferential Statistics, and APA-Style Reports and Conference Posters. These chapters provide a review for students who have already had statistics and provide the tools they need to create research reports and conference posters.
Two appendices—Random Numbers and How to Use Them, and Statistical Tables—provide reference tools for students who are conducting their own research.
Support for Students and Instructors

The book’s pedagogical features emphasize active learning and repetition of the most important points. Each chapter begins with high-level learning objectives—major skills students should expect to remember even “a year from now.” Important terms in a chapter are introduced in boldface. The Check Your Understanding questions at the end of each major section provide basic questions that let students revisit key concepts as they read. Each chapter ends with multiple-choice Review Questions for retrieval practice, and a set of Learning Actively exercises that encourage students to apply what they learned. (Answers are provided at the end of the book.) A master table of the three claims and four validities appears inside the book’s front cover to remind students of the scaffold for the course.
I believe the book works pedagogically because it spirals through the three claims, four validities framework, building in repetition and depth. Although each chapter addresses the usual core content of research methods, students are always reminded of how a particular topic helps them interrogate the key validities. The interleaving of content should help students remember and apply this questioning strategy in the future.
I have worked with W. W. Norton to design a support package for fellow instructors and students. The online Interactive Instructor’s Guide offers in-class activities, models of course design, homework and final assignments, and chapter-by-chapter teaching notes, all based on my experience with the course. The book is accompanied by other ancillaries to assist both new and experienced research methods instructors, including a new InQuizitive online assessment tool, a robust test bank with over 750 questions, updated lecture and active learning slides, and more; for a complete list, see p. xix.
Teachable Examples on the Everyday Research Methods Blog

Students and instructors can find additional examples of psychological science in the news on my regularly updated blog, Everyday Research Methods (www.everydayresearchmethods.com; no password or registration required). Instructors can use the blog for fresh examples to use in class, homework, or exams. Students can use the entries as extra practice in reading about research studies in psychology in the popular media. Follow me on Twitter to get the latest blog updates (@bmorling).
Changes in the Third Edition

Users of the first and second editions will be happy to learn that the basic organization, material, and descriptions in the text remain the same. The third edition provides several new studies and recent headlines. Inclusion of these new examples means that instructors who assign the third edition can also use their favorite illustrations from past editions as extra examples while teaching.
In my own experience teaching the course, I found that students could often master concepts in isolation, but they struggled to bring them all together when reading a real study. Therefore, the third edition adds new Working It Through sections in several chapters (Chapters 3, 4, 5, 8, and 11). Each one works through a single study in depth, so students can observe how the chapter’s central concepts are integrated and applied. For instance, in Chapter 4, they can see how ethics concepts can be applied to a recent study that manipulated Facebook newsfeeds. The Working It Through material models the process students will probably use on longer class assignments.
Also new in the third edition, every figure has been redrawn to make it more visually appealing and readable. In addition, selected figures are annotated to help students learn how to interpret graphs and tables.
Finally, W. W. Norton’s InQuizitive online assessment tool is available with the third edition. InQuizitive helps students apply concepts from the textbook to practice examples, providing specific feedback on incorrect responses. Some questions require students to interpret tables and figures; others require them to apply what they’re learning to popular media articles.
Here is a detailed list of the changes made to each chapter.
CHAPTER MAJOR CHANGES IN THE THIRD EDITION
1. Psychology Is a Way of Thinking
The heading structure is the same as in the second edition, with some updated examples. I replaced the facilitated communication example (still an excellent teaching example) with one on the Scared Straight program meant to keep adolescents out of the criminal justice system, based on a reviewer’s recommendation.
2. Sources of Information: Why Research Is Best and How to Find It
I simplified the coverage of biases of intuition. Whereas the second edition separated cognitive biases from motivated reasoning, the biases are now presented more simply. In addition, this edition aims to be clearer on the difference between the availability heuristic and the present/present bias. I also developed the coverage of Google Scholar.
3. Three Claims, Four Validities: Interrogation Tools for Consumers of Research
The three claims, four validities framework is the same, keeping the best teachable examples from the second edition and adding new examples from recent media. In response to my own students’ confusion, I attempted to clarify the difference between the type of study conducted (correlational or experimental) and the claims made about it. To this end, I introduced the metaphor of a gift, in which a journalist might “wrap” a correlational study in a fancy, but inappropriate, causal claim.
When introducing the three criteria for causation, I now emphasize that covariance is about the study’s results, while temporal precedence and internal validity are determined from the study’s method.
Chapter 3 includes the first new Working It Through section.
4. Ethical Guidelines for Psychology Research
I updated the section on animal research and removed the full text of APA Standard 8. There’s a new figure on the difference between plagiarism and paraphrasing, and a new example of research fabrication (the notorious, retracted Lancet article on vaccines and autism). A new Working It Through section helps students assess the ethics of a recent Facebook study that manipulated people’s newsfeeds.
5. Identifying Good Measurement
This chapter retains many of the teaching examples from the second edition. For clarity, I changed the discriminant validity example so the correlation is only weak (not both weak and negative). A new Working It Through section helps students apply the measurement concepts to a self-report measure of gratitude in relationships.
6. Surveys and Observations: Describing What People Do
Core examples are the same, with a new study illustrating the effect of leading questions (a poll on attitudes toward voter ID laws). Look for the new “babycam” example in the Learning Actively exercises.
7. Sampling: Estimating the Frequency of Behaviors and Beliefs
Look for new content on MTurk and other Internet-based survey panels. I updated the statistics on cell-phone-only populations, which change yearly. Finally, I added clarity on the difference between cluster and stratified samples and explained sample weighting.
I added the new keyword nonprobability sample to work in parallel with the term probability sample. A new table (Table 7.3) helps students group related terms.
8. Bivariate Correlational Research
This chapter keeps most of the second edition examples. It was revised to better show that association claims are separate from correlational methods. Look for improved moderator examples in this chapter. These new examples, I hope, will communicate to students that moderators change the relationship between variables; they do not necessarily reflect the level of one of the variables.
9. Multivariate Correlational Research
I replaced both of the main examples in this chapter. The new example of cross-lag panel design, on parental overpraise and child narcissism, has four time periods (rather than two), better representing contemporary longitudinal studies. In the multiple regression section, the recess example is replaced with one on adolescents in which watching sexual TV content predicts teen pregnancy. The present regression example is student-friendly and also has stronger effect sizes.
Look for an important change in Figure 9.13, aimed at conveying that a moderator can be thought of as a vulnerability. My own students tend to think something is a moderator when the subgroup is simply higher on one of the variables. For example, boys might watch more violent TV content and be higher on aggression, but that’s not the same as a moderator. Therefore, I have updated the moderator column with the moderator “parental discussion.” I hope this will help students come up with their own moderators more easily.
10. Introduction to Simple Experiments
The red/green ink example was replaced with a popular study on notetaking, comparing the effects of taking notes in longhand or on laptops. There is also a new example of pretest/posttest designs (a study on mindfulness training). Students sometimes are surprised when a real-world study has multiple dependent variables, so I’ve highlighted that more in the third edition. Both of the chapter’s opening examples have multiple dependent variables.
I kept the example on pasta bowl serving size. However, after Chapter 10 was typeset, some researchers noticed multiple statistical inconsistencies in several publications from Wansink’s lab (for one summary of the issues, see the Chronicle of Higher Education article, “Spoiled Science”). At the time of writing, the pasta study featured in Chapter 10 has not been identified as problematic. Nevertheless, instructors might wish to engage students in a discussion of these issues.
11. More on Experiments: Confounding and Obscuring Variables
The content is virtually the same, with the addition of two Working It Through sections. The first one is to show students how to work through Table 11.1 using the mindfulness study from Chapter 10. This is important because after seeing Table 11.1, students sometimes think their job is to find the flaw in any study. In fact, most published studies do not have major internal validity flaws. The second Working It Through shows students how to analyze a null result.
12. Experiments with More Than One Independent Variable
Recent work has suggested that context-specific memory effects are not robust, so I replaced the Godden and Baddeley factorial example on context-specific learning with one comparing the memory of child chess experts to adults.
13. Quasi-Experiments and Small-N Designs
I replaced the Head Start study for two reasons. First, I realized it’s not a good example of a nonequivalent control group posttest-only design, because it actually included a pretest! Second, the regression to the mean effect it was meant to illustrate is rare and difficult to understand. In its place, there is a new study on the effects of walking by a church.
In the small-N design section, I provided fresh examples of multiple baseline design and alternating treatment designs. I also replaced the former case study example (split-brain studies) with the story of H.M. Not only is H.M.’s story compelling (especially as told through the eyes of his friend and researcher Suzanne Corkin), the brain anatomy required to understand this example is also simpler than that of split-brain studies, making it more teachable.
14. Replication, Generalization, and the Real World
A significant new section and table present the so-called “replication crisis” in psychology. In my experience, students are extremely engaged in learning about these issues. There’s a new example of a field experiment, a study on the effect of radio programs on reconciliation in Rwanda.
Supplementary Chapters

In the supplementary chapter on inferential statistics, I replaced the section on randomization tests with a new section on confidence intervals. The next edition of the book may transition away from null hypothesis significance testing to emphasize the “New Statistics” of estimation and confidence intervals. I welcome feedback from instructors on this potential change.
Acknowledgments
Working on this textbook has been rewarding and enriching, thanks to the many people who have smoothed the way. To start, I feel fortunate to have collaborated with an author-focused company and an all-around great editor, Sheri Snavely. Through all three editions, she has been both optimistic and realistic, as well as savvy and smart. She also made sure I got the most thoughtful reviews possible and that I was supported by an excellent staff at Norton: David Bradley, Jane Searle, Rubina Yeh, Eve Sanoussi, Victoria Reuter, Alex Trivilino, Travis Carr, and Dena Digilio Betz. My developmental editor, Betsy Dilernia, found even more to refine in the third edition, making the language, as well as each term, figure, and reference, clear and accurate.
I am also thankful for the support and continued enthusiasm I have received from the Norton sales management team: Michael Wright, Allen Clawson, Ashley Sherwood, Annie Stewart, Dennis Fernandes, Dennis Adams, Katie Incorvia, Jordan Mendez, Amber Watkins, Shane Brisson, and Dan Horton. I also wish to thank the science and media specialists for their creativity and drive to ensure my book reaches a wide audience, and that all the media work for instructors and students.
I deeply appreciate the support of many col- leagues. My former student Patrick Ewell, now at Kenyon College, served as a sounding board for new examples and authored the content for InQuizitive. Eddie Brummelman and Stefanie Nelemans provided additional correlations for the cross-lag panel design in Chapter 9. My friend Carrie Smith authored the Test Bank for the past two editions and has made it
an authentic measure of quantitative reasoning (as well as sending me things to blog about). Catherine Burrows carefully checked and revised the Test Bank for the third edition. Many thanks to Sarah Ainsworth, Reid Griggs, Aubrey McCarthy, Emma McGorray, and Michele M. Miller for carefully and patiently fact-checking every word in this edition. My student Xiaxin Zhong added DOIs to all the references and provided page numbers for the Check Your Understanding answers. Thanks, as well, to Emily Stanley and Jeong Min Lee, for writing and revising the questions that appear in the Coursepack created for the course management systems. I’m grateful to Amy Corbett and Kacy Pula for reviewing the questions in InQuizitive. Thanks to my students Matt Davila-Johnson and Jeong Min Lee for posing for photographs in Chapters 5 and 10.
The book’s content was reviewed by a cadre of talented research method professors, and I am grateful to each of them. Some were asked to review; others cared enough to send me comments or examples by e-mail. Their students are lucky to have them in the classroom, and my readers will benefit from the time they spent in improving this book:
Eileen Josiah Achorn, University of Texas, San Antonio
Sarah Ainsworth, University of North Florida
Kristen Weede Alexander, California State University, Sacramento
Leola Alfonso-Reese, San Diego State University
Cheryl Armstrong, Fitchburg State University
Jennifer Asmuth, Susquehanna University
Kristin August, Rutgers University, Camden
Jessica L. Barnack-Tavlaris, The College of New Jersey
Gordon Bear, Ramapo College
Margaret Elizabeth Beier, Rice University
Jeffrey Berman, University of Memphis
Brett Beston, McMaster University
Alisa Beyer, Northern Arizona University
Julie Boland, University of Michigan
Marina A. Bornovalova, University of South Florida
Caitlin Brez, Indiana State University
Shira Brill, California State University, Northridge
J. Corey Butler, Southwest Minnesota State University
Ricardo R. Castillo, Santa Ana College
Alexandra F. Corning, University of Notre Dame
Kelly A. Cotter, California State University, Stanislaus
Lisa Cravens-Brown, The Ohio State University
Victoria Cross, University of California, Davis
Matthew Deegan, University of Delaware
Kenneth DeMarree, University at Buffalo
Jessica Dennis, California State University, Los Angeles
Nicole DeRosa, SUNY Upstate Golisano Children’s Hospital
Rachel Dinero, Cazenovia College
Dana S. Dunn, Moravian College
C. Emily Durbin, Michigan State University
Russell K. Espinoza, California State University, Fullerton
Patrick Ewell, Kenyon College
Iris Firstenberg, University of California, Los Angeles
Christina Frederick, Sierra Nevada College
Alyson Froehlich, University of Utah
Christopher J. Gade, University of California, Berkeley
Timothy E. Goldsmith, University of New Mexico
Jennifer Gosselin, Sacred Heart University
AnaMarie Connolly Guichard, California State University, Stanislaus
Andreana Haley, University of Texas, Austin
Edward Hansen, Florida State University
Cheryl Harasymchuk, Carleton University
Richard A. Hullinger, Indiana State University
Deborah L. Hume, University of Missouri
Kurt R. Illig, University of St. Thomas
Jonathan W. Ivy, Pennsylvania State University, Harrisburg
W. Jake Jacobs, University of Arizona
Matthew D. Johnson, Binghamton University
Christian Jordan, Wilfrid Laurier University
Linda Juang, San Francisco State University
Victoria A. Kazmerski, Penn State Erie, The Behrend College
Heejung Kim, University of California, Santa Barbara
Greg M. Kim-Ju, California State University, Sacramento
Ari Kirshenbaum, Ph.D., St. Michael’s College
Kerry S. Kleyman, Metropolitan State University
Penny L. Koontz, Marshall University
Christina M. Leclerc, Ph.D., State University of New York at Oswego
Ellen W. Leen-Feldner, University of Arkansas
Carl Lejuez, University of Maryland
Marianne Lloyd, Seton Hall University
Stella G. Lopez, University of Texas, San Antonio
Greg Edward Loviscky, Pennsylvania State University
Sara J. Margolin, Ph.D., The College at Brockport, State University of New York
Azucena Mayberry, Texas State University
Christopher Mazurek, Columbia College
Peter Mende-Siedlecki, University of Delaware
Molly A. Metz, Miami University
Dr. Michele M. Miller, University of Illinois Springfield
Daniel C. Molden, Northwestern University
J. Toby Mordkoff, University of Iowa
Elizabeth Morgan, Springfield College
Katie Mosack, University of Wisconsin, Milwaukee
Erin Quinlivan Murdoch, George Mason University
Stephanie C. Payne, Texas A&M University
Anita Pedersen, California State University, Stanislaus
Elizabeth D. Peloso, University of Pennsylvania
M. Christine Porter, College of William and Mary
Joshua Rabinowitz, University of Michigan
Elizabeth Riina, Queens College, City University of New York
James R. Roney, University of California, Santa Barbara
Richard S. Rosenberg, Ph.D., California State University, Long Beach
Carin Rubenstein, Pima Community College
Silvia J. Santos, California State University, Dominguez Hills
Pamela Schuetze, Ph.D., The College at Buffalo, State University of New York
John N. Schwoebel, Ph.D., Utica College
Mark J. Sciutto, Muhlenberg College
Elizabeth A. Sheehan, Georgia State University
Victoria A. Shivy, Virginia Commonwealth University
Leo Standing, Bishop’s University
Harold W. K. Stanislaw, California State University, Stanislaus
Kenneth M. Steele, Appalachian State University
Mark A. Stellmack, University of Minnesota, Twin Cities
Eva Szeli, Arizona State University
Lauren A. Taglialatela, Kennesaw State University
Alison Thomas-Cottingham, Rider University
Chantal Poister Tusher, Georgia State University
Allison A. Vaughn, San Diego State University
Simine Vazire, University of California, Davis
Jan Visser, University of Groningen
John L. Wallace, Ph.D., Ball State University
Shawn L. Ward, Le Moyne College
Christopher Warren, California State University, Long Beach
Shannon N. Whitten, University of Central Florida
Jelte M. Wicherts, Tilburg University
Antoinette R. Wilson, University of California, Santa Cruz
James Worthley, University of Massachusetts, Lowell
Charles E. (Ted) Wright, University of California, Irvine
Guangying Wu, The George Washington University
David Zehr, Plymouth State University
Peggy Mycek Zoccola, Ohio University
I have tried to make the best possible improvements from all of these capable reviewers.
My life as a teaching professor has been enriched during the last few years because of the friendship and support of my students and colleagues at the University of Delaware, colleagues I see each year at the SPSP conference, and all the faculty I see regularly at the National Institute for the Teaching of Psychology, affectionately known as NITOP.
Three teenage boys will keep a person both entertained and humbled; thanks to Max, Alek, and Hugo for providing their services. I remain grateful to my mother-in-law, Janet Pochan, for cheerfully helping on the home front. Finally, I want to thank my husband Darrin for encouraging me and for always having the right wine to celebrate (even if it’s only Tuesday).
Beth Morling
Media Resources for Instructors and Students
INTERACTIVE INSTRUCTOR’S GUIDE
Beth Morling, University of Delaware
The Interactive Instructor’s Guide contains hundreds of downloadable resources and teaching ideas, such as a discussion of how to design a course that best utilizes the textbook, sample syllabus and assignments, and chapter-by-chapter teaching notes and suggested activities.
POWERPOINTS
The third edition features three types of PowerPoints. The Lecture PowerPoints provide an overview of the major headings and definitions for each chapter. The Art Slides contain a complete set of images. And the Active Learning Slides provide the author’s favorite in-class activities, as well as reading quizzes and clicker questions. Instructors can browse the Active Learning Slides to select activities that supplement their classes.
TEST BANK
C. Veronica Smith, University of Mississippi, and Catherine Burrows, University of Miami
The Test Bank provides over 750 questions using an evidence-centered approach designed in collaboration with Valerie Shute of Florida State University and Diego Zapata-Rivera of the Educational Testing Service. The Test Bank contains multiple-choice and short-answer questions classified by section, Bloom’s taxonomy, and difficulty, making it easy for instructors to construct tests and quizzes that are meaningful and diagnostic. The Test Bank is available in Word RTF, PDF, and ExamView® Assessment Suite formats.
INQUIZITIVE
Patrick Ewell, Kenyon College
InQuizitive allows students to practice applying terminology in the textbook to numerous examples. It can guide the students with specific feedback for incorrect answers to help clarify common mistakes. This online assessment tool gives students the repetition they need to fully understand the material without cutting into valuable class time. InQuizitive provides practice in reading tables and figures, as well as identifying the research methods used in studies from popular media articles, for an integrated learning experience.
EVERYDAY RESEARCH METHODS BLOG: www.everydayresearchmethods.com
The Research Methods in Psychology blog offers more than 150 teachable moments from the web, curated by Beth Morling and occasional guest contributors. Twice a month, the author highlights examples of psychological science in the news. Students can connect these recent stories with textbook concepts. Instructors can use blog posts as examples in lecture or assign them as homework. All entries are searchable by chapter.
COURSEPACK
Emily Stanley, University of Mary Washington, and Jeong Min Lee, University of Delaware
The Coursepack presents students with review opportunities that employ the text’s analytical framework. Each chapter includes quizzes based on the Norton Assessment Guidelines, Chapter Outlines created by the textbook author and based on the Learning Objectives in the text, and review flashcards. The APA-style guidelines from the textbook are also available in the Coursepack for easy access.
Contents
Preface ix
Media Resources for Instructors and Students xix
PART I Introduction to Scientific Reasoning
CHAPTER 1
Psychology Is a Way of Thinking 5
Research Producers, Research Consumers 6
Why the Producer Role Is Important 6
Why the Consumer Role Is Important 7
The Benefits of Being a Good Consumer 8
How Scientists Approach Their Work 10
Scientists Are Empiricists 10
Scientists Test Theories: The Theory-Data Cycle 11
Scientists Tackle Applied and Basic Problems 16
Scientists Dig Deeper 16
Scientists Make It Public: The Publication Process 17
Scientists Talk to the World: From Journal to Journalism 17
Chapter Review 22
CHAPTER 2
Sources of Information: Why Research Is Best and How to Find It 25
The Research vs. Your Experience 26
Experience Has No Comparison Group 26
Experience Is Confounded 29
Research Is Better Than Experience 29
Research Is Probabilistic 31
The Research vs. Your Intuition 32
Ways That Intuition Is Biased 32
The Intuitive Thinker vs. the Scientific Reasoner 38
Trusting Authorities on the Subject 39
Finding and Reading the Research 42
Consulting Scientific Sources 42
Finding Scientific Sources 44
Reading the Research 46
Finding Research in Less Scholarly Places 48
Chapter Review 53
CHAPTER 3
Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57
Variables 58
Measured and Manipulated Variables 58
From Conceptual Variable to Operational Definition 59
Three Claims 61
Frequency Claims 62
Association Claims 63
Causal Claims 66
Not All Claims Are Based on Research 68
Interrogating the Three Claims Using the Four Big Validities 68
Interrogating Frequency Claims 69
Interrogating Association Claims 71
Interrogating Causal Claims 74
Prioritizing Validities 79
Review: Four Validities, Four Aspects of Quality 80
WORKING IT THROUGH Does Hearing About Scientists’ Struggles Inspire Young Students? 81
Chapter Review 83
PART II Research Foundations for Any Claim
CHAPTER 4
Ethical Guidelines for Psychology Research 89
Historical Examples 89
The Tuskegee Syphilis Study Illustrates Three Major Ethics Violations 89
The Milgram Obedience Studies Illustrate a Difficult Ethical Balance 92
Core Ethical Principles 94
The Belmont Report: Principles and Applications 94
Guidelines for Psychologists: The APA Ethical Principles 98
Belmont Plus Two: APA’s Five General Principles 98
Ethical Standards for Research 99
Ethical Decision Making: A Thoughtful Balance 110
WORKING IT THROUGH Did a Study Conducted on Facebook Violate Ethical Principles? 111
Chapter Review 113
CHAPTER 5
Identifying Good Measurement 117
Ways to Measure Variables 118
More About Conceptual and Operational Variables 118
Three Common Types of Measures 120
Scales of Measurement 122
Reliability of Measurement: Are the Scores Consistent? 124
Introducing Three Types of Reliability 125
Using a Scatterplot to Quantify Reliability 126
Using the Correlation Coefficient r to Quantify Reliability 128
Reading About Reliability in Journal Articles 131
Validity of Measurement: Does It Measure What It’s Supposed to Measure? 132
Measurement Validity of Abstract Constructs 133
Face Validity and Content Validity: Does It Look Like a Good Measure? 134
Criterion Validity: Does It Correlate with Key Behaviors? 135
Convergent Validity and Discriminant Validity: Does the Pattern Make Sense? 139
The Relationship Between Reliability and Validity 142
Review: Interpreting Construct Validity Evidence 143
WORKING IT THROUGH How Well Can We Measure the Amount of Gratitude Couples Express to Each Other? 145
Chapter Review 147
PART III Tools for Evaluating Frequency Claims
CHAPTER 6
Surveys and Observations: Describing What People Do 153
Construct Validity of Surveys and Polls 153
Choosing Question Formats 154
Writing Well-Worded Questions 155
Encouraging Accurate Responses 159
Construct Validity of Behavioral Observations 165
Some Claims Based on Observational Data 165
Making Reliable and Valid Observations 169
Chapter Review 175
CHAPTER 7
Sampling: Estimating the Frequency of Behaviors and Beliefs 179
Generalizability: Does the Sample Represent the Population? 179
Populations and Samples 180
When Is a Sample Biased? 182
Obtaining a Representative Sample: Probability Sampling Techniques 186
Settling for an Unrepresentative Sample: Nonprobability Sampling Techniques 191
Interrogating External Validity: What Matters Most? 193
In a Frequency Claim, External Validity Is a Priority 193
When External Validity Is a Lower Priority 194
Larger Samples Are Not More Representative 196
Chapter Review 198
PART IV Tools for Evaluating Association Claims
CHAPTER 8
Bivariate Correlational Research 203
Introducing Bivariate Correlations 204
Review: Describing Associations Between Two Quantitative Variables 205
Describing Associations with Categorical Data 207
A Study with All Measured Variables Is Correlational 209
Interrogating Association Claims 210
Construct Validity: How Well Was Each Variable Measured? 210
Statistical Validity: How Well Do the Data Support the Conclusion? 211
Internal Validity: Can We Make a Causal Inference from an Association? 221
External Validity: To Whom Can the Association Be Generalized? 226
WORKING IT THROUGH Are Parents Happier Than People with No Children? 231
Chapter Review 233
CHAPTER 9
Multivariate Correlational Research 237
Reviewing the Three Causal Criteria 238
Establishing Temporal Precedence with Longitudinal Designs 239
Interpreting Results from Longitudinal Designs 239
Longitudinal Studies and the Three Criteria for Causation 242
Why Not Just Do an Experiment? 242
Ruling Out Third Variables with Multiple-Regression Analyses 244
Measuring More Than Two Variables 244
Regression Results Indicate If a Third Variable Affects the Relationship 247
Adding More Predictors to a Regression 251
Regression in Popular Media Articles 252
Regression Does Not Establish Causation 254
Getting at Causality with Pattern and Parsimony 256
The Power of Pattern and Parsimony 256
Pattern, Parsimony, and the Popular Media 258
Mediation 259
Mediators vs. Third Variables 261
Mediators vs. Moderators 262
Multivariate Designs and the Four Validities 264
Chapter Review 266
PART V Tools for Evaluating Causal Claims
CHAPTER 10
Introduction to Simple Experiments 273
Two Examples of Simple Experiments 273
Example 1: Taking Notes 274
Example 2: Eating Pasta 275
Experimental Variables 276
Independent and Dependent Variables 277
Control Variables 278
Why Experiments Support Causal Claims 278
Experiments Establish Covariance 279
Experiments Establish Temporal Precedence 280
Well-Designed Experiments Establish Internal Validity 281
Independent-Groups Designs 287
Independent-Groups vs. Within-Groups Designs 287
Posttest-Only Design 287
Pretest/Posttest Design 288
Which Design Is Better? 289
Within-Groups Designs 290
Repeated-Measures Design 290
Concurrent-Measures Design 291
Advantages of Within-Groups Designs 292
Covariance, Temporal Precedence, and Internal Validity in Within-Groups Designs 294
Disadvantages of Within-Groups Designs 296
Is Pretest/Posttest a Repeated-Measures Design? 297
Interrogating Causal Claims with the Four Validities 298
Construct Validity: How Well Were the Variables Measured and Manipulated? 298
External Validity: To Whom or What Can the Causal Claim Generalize? 301
Statistical Validity: How Well Do the Data Support the Causal Claim? 304
Internal Validity: Are There Alternative Explanations for the Results? 306
Chapter Review 307
CHAPTER 11
More on Experiments: Confounding and Obscuring Variables 311
Threats to Internal Validity: Did the Independent Variable Really Cause the Difference? 312
The Really Bad Experiment (A Cautionary Tale) 312
Six Potential Internal Validity Threats in One-Group, Pretest/Posttest Designs 314
Three Potential Internal Validity Threats in Any Study 322
With So Many Threats, Are Experiments Still Useful? 325
WORKING IT THROUGH Did Mindfulness Training Really Cause GRE Scores to Improve? 328
Interrogating Null Effects: What If the Independent Variable Does Not Make a Difference? 330
Perhaps There Is Not Enough Between-Groups Difference 332
Perhaps Within-Groups Variability Obscured the Group Differences 335
Sometimes There Really Is No Effect to Find 342
WORKING IT THROUGH Will People Get More Involved in Local Government If They Know They’ll Be Publicly Honored? 344
Null Effects May Be Published Less Often 345
Chapter Review 346
CHAPTER 12
Experiments with More Than One Independent Variable 351
Review: Experiments with One Independent Variable 351
Experiments with Two Independent Variables Can Show Interactions 353
Intuitive Interactions 353
Factorial Designs Study Two Independent Variables 355
Factorial Designs Can Test Limits 356
Factorial Designs Can Test Theories 358
Interpreting Factorial Results: Main Effects and Interactions 360
Factorial Variations 370
Independent-Groups Factorial Designs 370
Within-Groups Factorial Designs 370
Mixed Factorial Designs 371
Increasing the Number of Levels of an Independent Variable 371
Increasing the Number of Independent Variables 373
Identifying Factorial Designs in Your Reading 378
Identifying Factorial Designs in Empirical Journal Articles 379
Identifying Factorial Designs in Popular Media Articles 379
Chapter Review 383
PART VI Balancing Research Priorities
CHAPTER 13
Quasi-Experiments and Small-N Designs 389
Quasi-Experiments 389
Two Examples of Independent-Groups Quasi-Experiments 390
Two Examples of Repeated-Measures Quasi-Experiments 392
Internal Validity in Quasi-Experiments 396
Balancing Priorities in Quasi-Experiments 404
Are Quasi-Experiments the Same as Correlational Studies? 405
Small-N Designs: Studying Only a Few Individuals 406
Research on Human Memory 407
Disadvantages of Small-N Studies 410
Behavior-Change Studies in Applied Settings: Three Small-N Designs 411
Other Examples of Small-N Studies 417
Evaluating the Four Validities in Small-N Designs 418
Chapter Review 420
CHAPTER 14
Replication, Generalization, and the Real World 425
To Be Important, a Study Must Be Replicated 425
Replication Studies 426
The Replication Debate in Psychology 430
Meta-Analysis: What Does the Literature Say? 433
Replicability, Importance, and Popular Media 436
To Be Important, Must a Study Have External Validity? 438
Generalizing to Other Participants 438
Generalizing to Other Settings 439
Does a Study Have to Be Generalizable to Many People? 440
Does a Study Have to Take Place in a Real-World Setting? 447
Chapter Review 453
Statistics Review Descriptive Statistics 457
Statistics Review Inferential Statistics 479
Presenting Results APA-Style Reports and Conference Posters 505
Appendix A Random Numbers and How to Use Them 545
Appendix B Statistical Tables 551
Areas Under the Normal Curve (Distribution of z) 551
Critical Values of t 557
Critical Values of F 559
r to z' Conversion 564
Critical Values of r 565
Glossary 567
Answers to End-of-Chapter Questions 577
Review Questions 577
Guidelines for Selected Learning Actively Exercises 578
References 589
Credits 603
Name Index 607
Subject Index 611
PART I
Introduction to Scientific Reasoning
“Your Dog Hates Hugs” NYMag.com, 2016
“Mindfulness May Improve Test Scores” Scientific American, 2013
CHAPTER 1

Psychology Is a Way of Thinking

THINKING BACK TO YOUR introductory psychology course, what do you remember learning? You might remember that dogs can be trained to salivate at the sound of a bell or that people in a group fail to call for help when the room fills up with smoke. Or perhaps you recall studies in which people administered increasingly stronger electric shocks to an innocent man although he seemed to be in distress. You may have learned what your brain does while you sleep or that you can’t always trust your memories. But how come you didn’t learn that “we use only 10% of our brain” or that “hitting a punching bag can make your anger go away”?
The reason you learned some principles, and not others, is that psychological science is based on studies—on research—by psychologists. Like other scientists, psychologists are empiricists. Being an empiricist means basing one’s conclusions on systematic observations. Psychologists do not simply think intuitively about behavior, cognition, and emotion; they know what they know because they have conducted studies on people and animals acting in their natural environments or in specially designed situations. Research is what tells us that most people will administer electric shock to an innocent man in certain situations, and it also tells us that people’s brains are usually fully engaged—not just 10%. If you are to think like a psychologist, then you must think like a researcher, and taking a course in research methods is crucial to your understanding of psychology.
LEARNING OBJECTIVES

A year from now, you should still be able to:

1. Explain what it means to reason empirically.

2. Appreciate how psychological research methods help you become a better producer of information as well as a better consumer of information.

3. Describe five practices that psychological scientists engage in.

This book explains the types of studies psychologists conduct, as well as the potential strengths and limitations of each type of study. You will learn not only how to plan your own studies but
also how to find research, read about it, and ask questions about it. While gaining a greater appreciation for the rigorous standards psychologists maintain in their research, you’ll find out how to be a systematic and critical consumer of psychological science.
RESEARCH PRODUCERS, RESEARCH CONSUMERS

Some psychology students are fascinated by the research process and intend to become producers of research. Perhaps they hope to get a job studying brain anatomy, documenting the behavior of dolphins or monkeys, administering personality questionnaires, observing children in a school setting, or analyzing data. They may want to write up their results and present them at research meetings. These students may dream about working as research scientists or professors.
Other psychology students may not want to work in a lab, but they do enjoy reading about the structure of the brain, the behavior of dolphins or monkeys, the personalities of their fellow students, or the behavior of children in a school setting. They are interested in being consumers of research information—reading about research so they can later apply it to their work, hobbies, relationships, or personal growth. These students might pursue careers as family therapists, teachers, entrepreneurs, guidance counselors, or police officers, and they expect psychology courses to help them in these roles.
In practice, many psychologists engage in both roles. When they are planning their research and creating new knowledge, they study the work of others who have gone before them. Furthermore, psychologists in both roles require a curiosity about behavior, emotion, and cognition. Research producers and consumers also share a commitment to the practice of empiricism—to answer psychological questions with direct, formal observations, and to communicate with others about what they have learned.
Why the Producer Role Is Important

For your future coursework in psychology, it is important to know how to be a producer of research. Of course, students who decide to go to graduate school for psychology will need to know all about research methods. But even if you do not plan to do graduate work in psychology, you will probably have to write a paper following the style guidelines of the American Psychological Association (APA) before you graduate, and you may be required to do research as part of a course lab section. To succeed, you will need to know how to randomly assign people to groups, how to measure attitudes accurately, or how to interpret results from a graph. The skills you acquire by conducting research can teach you how psychological scientists ask questions and how they think about their discipline.
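To make one of those skills concrete, here is what random assignment might look like as a short Python sketch. This is only an illustration (the participant names are hypothetical, and the textbook itself presents no code); the point is that chance alone, not the researcher’s judgment, determines each person’s group.

```python
import random

# Hypothetical participant list; any names would work the same way.
participants = ["Ana", "Ben", "Carla", "Dev", "Elena", "Farid"]

# Shuffling puts the participants in a chance-determined order.
random.shuffle(participants)

# Assign the first half to one condition and the rest to the other.
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("Treatment group:", treatment_group)
print("Control group:", control_group)
```

Flipping a fair coin for each person accomplishes the same goal; shuffling and splitting simply guarantees equal group sizes.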
As part of your psychology studies, you might even work in a research lab as an undergraduate (Figure 1.1). Many psychology professors are active researchers, and if you are offered the opportunity to get involved in their laboratories, take it! Your faculty supervisor may ask you to code behaviors, assign participants to different groups, graph an outcome, or write a report. Doing so will give you your first taste of being a research producer. Although you will be supervised closely, you will be expected to know the basics of conducting research. This book will help you understand why you have to protect the anonymity of your participants, use a coding book, or flip a coin to decide who goes in which group. By participating as a research producer, you can expect to deepen your understanding of psychological inquiry.
Why the Consumer Role Is Important

Although it is important to understand the psychologist’s role as a producer of research, most psychology majors do not eventually become researchers. Regardless of the career you choose, however, becoming a savvy consumer of information is essential. In your psychology courses, you will read studies published by psychologists in scientific journals. You will need to develop the ability to read about research with curiosity—to understand it, learn from it, and ask appropriate questions about it.
Think about how often you encounter news stories or look up information on the Internet. Much of the time, the stories you read and the websites you visit will present information based on research. For example, during an election year, Americans may come across polling information in the media almost every day. Many online newspapers have science sections that include stories on the latest research. Entire websites are dedicated to psychology-related topics, such as treatments for autism, subliminal learning tapes, or advice for married couples. Magazines such as Scientific American, Men’s Health, and Parents summarize research for their readers. While some of the research—whether online or printed—is accurate and useful, some of it is dubious, and some is just plain wrong. How can you tell the good research information from the bad? Understanding research methods enables you to ask the appropriate questions so you can evaluate information correctly. Research methods skills apply not only to research studies but also to much of the other types of information you are likely to encounter in daily life.
FIGURE 1.1 Producers of research. As undergraduates, some psychology majors work alongside faculty members as producers of information.
Finally, being a smart consumer of research could be crucial to your future career. Even if you do not plan to be a researcher—if your goal is to be a social worker, a teacher, a sales representative, a human resources professional, an entrepreneur, or a parent—you will need to know how to interpret published research with a critical eye. Clinical psychologists, social workers, and family therapists must read research to know which therapies are the most effective. In fact, licensure in these helping professions requires knowing the research behind evidence-based treatments—that is, therapies that are supported by research. Teachers also use research to find out which teaching methods work best. And the business world runs on quantitative information: Research is used to predict what sales will be like in the future, what consumers will buy, and whether investors will take risks or lie low. Once you learn how to be a consumer of information—psychological or otherwise—you will use these skills constantly, no matter what job you have.
In this book, you will often see the phrase “interrogating information.” A consumer of research needs to know how to ask the right questions, determine the answers, and evaluate a study on the basis of those answers. This book will teach you systematic rules for interrogating research information.
The Benefits of Being a Good Consumer

What do you gain by being a critical consumer of information? Imagine, for example, that you are a correctional officer at a juvenile detention center, and you watch a TV documentary about a crime-prevention program called Scared Straight. The program arranges for teenagers involved in the criminal justice system to visit prisons, where selected prisoners describe the stark, violent realities of prison life (Figure 1.2). The idea is that when teens hear about how tough it is in prison, they will be scared into the “straight,” law-abiding life. The program makes a lot of sense to you. You are considering starting a partnership between the residents of your detention center and the state prison system.

FIGURE 1.2 Scared straight. Although it makes intuitive sense that young people would be scared into good behavior by hearing from current prisoners, such intervention programs have actually been shown to cause an increase in criminal offenses.
However, before starting the partnership, you decide to investigate the efficacy of the program by reviewing some research that has been conducted about it. You learn that despite the intuitive appeal of the Scared Straight approach, the program doesn’t work—in fact, it might even cause criminal activity to get worse! Several published articles have reported the results of randomized, controlled studies in which young adults were assigned to either a Scared Straight program or a control program. The researchers then collected criminal records for 6–12 months. None of the studies showed that Scared Straight attendees committed fewer crimes, and most studies found an increase in crime among participants in the Scared Straight programs, compared to the controls (Petrosino, Turpin-Petrosino, & Finckenauer, 2000). In one case, Scared Straight attendees had committed 20% more crimes than the control group.
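To see what a result like "20% more crimes" means in practice, here is a minimal sketch in Python of how offense rates in two randomly assigned groups can be compared. The group sizes and offense counts are invented for illustration; they are not the data reported by Petrosino and colleagues.

```python
# Hypothetical illustration of comparing offense rates in a randomized study.
# The group sizes and offense counts below are invented for illustration;
# they are NOT the data reported by Petrosino et al. (2000).

scared_straight = {"participants": 100, "new_offenses": 48}
control = {"participants": 100, "new_offenses": 40}

rate_treatment = scared_straight["new_offenses"] / scared_straight["participants"]
rate_control = control["new_offenses"] / control["participants"]

relative_increase = (rate_treatment - rate_control) / rate_control

print(f"Offense rate, Scared Straight group: {rate_treatment:.0%}")  # 48%
print(f"Offense rate, control group:         {rate_control:.0%}")    # 40%
print(f"Relative increase:                   {relative_increase:.0%}")  # 20%
```

Because participants were randomly assigned to the two programs, a higher offense rate in the treatment group is evidence that the program itself caused the increase.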
At first, people considering such a program might think: If this program helps even one person, it's worth it. However, we always need empirical evidence to test the efficacy of our interventions. A well-intentioned program that seems to make sense might actually be doing harm. In fact, if you investigate further, you'll find that the U.S. Department of Justice officially warns that such programs are ineffective and can harm youth, and the Juvenile Justice and Delinquency Prevention Act of 1974 was amended to prohibit youth in the criminal justice system from interacting with adult inmates in jails and prisons.
Being a skilled consumer of information can inform you about other programs that might work. For example, in your quest to become a better student, suppose you see this headline: "Mindfulness may improve test scores." The practice of mindfulness involves attending to the present moment, on purpose, with a nonjudgmental frame of mind (Kabat-Zinn, 2013). In a mindful state, people simply observe and let go of thoughts rather than elaborating on them. Could the practice of mindfulness really improve test scores? A study conducted by Michael Mrazek and his colleagues assigned people to take either a 2-week mindfulness training course or a 2-week nutrition course (Mrazek, Franklin, Phillips, Baird, & Schooler, 2013). At the end of the training, only the people who had practiced mindfulness showed improved GRE scores (compared to their scores beforehand). Mrazek's group hypothesized that mindfulness training helps people attend to an academic task without being distracted. They were better, it seemed, at keeping their minds from wandering. The research evidence you read about here appears to support the use of mindfulness for improving test scores.
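As a rough, hypothetical sketch of the pretest-posttest, two-group logic described above, the comparison might be summarized as follows in Python. The scores are invented for illustration and are not Mrazek's data.

```python
# Hypothetical illustration of a pretest-posttest, two-group design.
# The scores below are invented for illustration; they are NOT the data
# reported by Mrazek et al. (2013).

groups = {
    "mindfulness training": {"gre_before": 460.0, "gre_after": 520.0},
    "nutrition course":     {"gre_before": 462.0, "gre_after": 461.0},
}

for name, scores in groups.items():
    change = scores["gre_after"] - scores["gre_before"]
    print(f"{name}: pretest-to-posttest change = {change:+.0f}")

# The theory is supported only if the mindfulness group improves while the
# nutrition (control) group does not.
```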
By understanding the research methods and results of this study, you might be convinced to take a mindfulness-training course similar to the one used by Mrazek and his colleagues. And if you were a teacher or tutor, you might consider advising your students to practice some of the focusing techniques. (Chapter 10 returns to this example and explains why the Mrazek study stands up to interrogation.) Your skills in research methods will help you become a better consumer of
studies like this one, so you can decide when the research supports some programs (such as mindfulness for study skills) but not others (such as Scared Straight for criminal behavior).
CHECK YOUR UNDERSTANDING
1. Explain what the consumer of research and producer of research roles have in common, and describe how they differ.
2. What kinds of jobs would use consumer-of-research skills? What kinds of jobs would use producer-of-research skills?
(Answers: 1. See pp. 6–7. 2. See pp. 7–8.)
HOW SCIENTISTS APPROACH THEIR WORK
Psychological scientists are identified not by advanced degrees or white lab coats; they are defined by what they do and how they think. The rest of this chapter will explain the fundamental ways psychologists approach their work. First, they act as empiricists in their investigations, meaning that they systematically observe the world. Second, they test theories through research and, in turn, revise their theories based on the resulting data. Third, they take an empirical approach to both applied research, which directly targets real-world problems, and basic research, which is intended to contribute to the general body of knowledge. Fourth, they go further: Once they have discovered an effect, scientists plan further research to test why, when, or for whom an effect works. Fifth, psychologists make their work public: They submit their results to journals for review and respond to the opinions of other scientists. Another aspect of making work public involves sharing findings of psychological research with the popular media, who may or may not get the story right.
Scientists Are Empiricists
Empiricists do not base conclusions on intuition, on casual observations of their own experience, or on what other people say. Empiricism, also referred to as the empirical method or empirical research, involves using evidence from the senses (sight, hearing, touch) or from instruments that assist the senses (such as thermometers, timers, photographs, weight scales, and questionnaires) as the basis for conclusions. Empiricists aim to be systematic and rigorous, and to make their work independently verifiable by other observers or scientists.
❯❯ For more on the contrast between empiricism and intuition, experience, and authority, see Chapter 2, pp. 26–31.
In Chapter 2, you will learn more about why empiricism is considered the most reliable basis for conclusions when compared with other forms of reasoning, such as experience or intuition. For now, we'll focus on some of the practices in which empiricists engage.
Scientists Test Theories: The Theory-Data Cycle
In the theory-data cycle, scientists collect data to test, change, or update their theories. Even if you have never been in a formal research situation, you have probably tested ideas and hunches of your own by asking specific questions that are grounded in theory, making predictions, and reflecting on data.
For example, let’s say you need to take your bike to work later, so you check the weather forecast on your tablet (Figure 1.3). The application opens, but you see a blank screen. What could be wrong? Maybe your entire device is on the blink: Do the other apps work? When you test them, you find your calculator is working, but not your e-mail. In fact, it looks as if only the apps that need wireless are not working. Your wireless indicator looks low, so you ask your roommate, sitting nearby, “Are you having wifi problems?” If she says no, you might try resetting your device’s wireless connection.
Notice the series of steps in this process. First, you asked a particular set of questions, all of which were guided by your theory about how such devices work. The questions (Is it the tablet as a whole? Is it only the wifi?) reflected your theory that the weather app requires a working electronic device as well as a wireless connection. Because you were operating under this theory, you chose not to ask other kinds of questions (Has a warlock cursed my tablet? Does my device have a bacterial infection?). Your theory set you up to ask certain questions and not others. Next, your questions led you to specific predictions, which you tested by collecting data. You tested your first idea about the problem (My device can’t run any apps) by making a specific prediction (If I test any application, it won’t work). Then you set up a situation to test your prediction (Does the calculator work?). The data (The calculator does work) told you your initial prediction was wrong. You used that out- come to change your idea about the problem (It’s only the wireless-based apps that aren’t working). And so on. When you take systematic steps to solve a problem, you are participating in something similar to what scientists do in the theory-data cycle.
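The troubleshooting episode can even be written out as a short sequence of predictions and tests. Here is a minimal sketch in Python; the apps and their observed states are hypothetical stand-ins for the checks you would actually perform by hand.

```python
# A minimal sketch of the tablet troubleshooting above as a theory-data cycle.
# The apps and their observed states are hypothetical stand-ins for the
# checks described in the text.

observed = {
    "calculator": True,   # works; does not need a wireless connection
    "email": False,       # fails; needs a wireless connection
    "weather": False,     # fails; needs a wireless connection
}

# Theory 1: the whole device is broken.
# Prediction: no app should work, not even offline ones.
theory_1_supported = not any(observed.values())
print("Theory 'whole device is broken' supported?", theory_1_supported)  # False

# The data contradict Theory 1 (the calculator works), so revise the theory.
# Theory 2: only the wireless connection is down.
# Prediction: offline apps work; wireless-based apps fail.
wireless_apps = ["email", "weather"]
offline_apps = ["calculator"]
theory_2_supported = (all(not observed[app] for app in wireless_apps)
                      and all(observed[app] for app in offline_apps))
print("Theory 'wireless is down' supported?", theory_2_supported)  # True

# Next turn of the cycle: collect more data (ask your roommate, reset the wifi).
```

Each theory licenses a specific prediction, and the observed data either support the theory or send you back to revise it, which is exactly the cycle scientists follow.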
THE CUPBOARD THEORY VS. THE CONTACT COMFORT THEORY
A classic example from the psychological study of attachment can illustrate the way researchers similarly use data to test their theories. You’ve probably observed that animals form strong attachments to their caregivers. If you have a dog, you know he’s extremely happy to see you when you come home, wagging his tail and jumping all over you. Human babies, once they are able to crawl, may follow their parents or caregivers around, keeping close to them. Baby monkeys exhibit similar behavior, spending hours clinging tightly to the mother’s fur. Why do animals form such strong attachments to their caregivers?
FIGURE 1.3 Troubleshooting a tablet. Troubleshooting an electronic device is a form of engaging in the theory-data cycle.
One theory, referred to as the cupboard theory of mother-infant attachment, is that a mother is valuable to a baby mammal because she is a source of food. The baby animal gets hungry, gets food from the mother by nursing, and experiences a pleasant feeling (reduced hunger). Over time, the sight of the mother is associated with pleasure. In other words, the mother acquires positive value for the baby because she is the "cupboard" from which food comes. If you've ever assumed your dog loves you only because you feed it, your beliefs are consistent with the cupboard theory.
An alternative theory, proposed by psychologist Harry Harlow (1958), is that hunger has little to do with why a baby monkey likes to cling to the warm, fuzzy fur of its mother. Instead, babies are attached to their mothers because of the comfort of cozy touch. This is the contact comfort theory. (In addition, it provides a less cynical view of why your dog is so happy to see you!)
In the natural world, a mother provides both food and contact comfort at once, so when the baby clings to her, it is impossible to tell why. To test the alternative theories, Harlow had to separate the two influences—food and contact comfort. The only way he could do so was to create "mothers" of his own. He built two monkey foster "mothers"—the only mothers his lab-reared baby monkeys ever had. One of the mothers was made of bare wire mesh with a bottle of milk built in. This wire mother offered food, but not comfort. The other mother was covered with fuzzy terrycloth and was warmed by a lightbulb suspended inside, but she had no milk. This cloth mother offered comfort, but not food.
Note that this experiment sets up three possible outcomes. The contact comfort theory would be supported if the babies spent most of their time clinging to the cloth mother. The cupboard theory would be supported if the babies spent most of their time clinging to the wire mother. Neither theory would be supported if monkeys divided their time equally between the two mothers.
When Harlow put the baby monkeys in the cages with the two mothers, the evidence in favor of the contact comfort theory was overwhelming. Harlow’s data showed that the little monkeys would cling to the cloth mother for 12–18 hours a day (Figure 1.4). When they were hungry, they would climb down, nurse from the wire mother, and then at once go back to the warm, cozy cloth mother. In short, Harlow used the two theories to make two specific predictions about how the monkeys would interact with each mother. Then he used the data he recorded (how much time the monkeys spent on each mother) to support only one of the theories. The theory-data cycle in action!
FIGURE 1.4 The contact comfort theory. As the theory hypothesized, Harlow’s baby monkeys spent most of their time on the warm, cozy cloth mother, even though she did not provide any food.
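Given the three possible outcomes described above, the decision rule for this study is simple enough to sketch in a few lines of Python. The hours below are illustrative values consistent with the 12–18 hours per day the text reports; they are not Harlow's raw data.

```python
# Illustrative decision rule for the three possible outcomes of Harlow's
# two-mother design. The hours are hypothetical values consistent with the
# text, not Harlow's raw data.

hours_on_cloth = 15.0  # time per day on the cloth (comfort) mother
hours_on_wire = 1.0    # brief visits to nurse from the wire (food) mother

TOLERANCE = 0.5  # hours; treat smaller differences as "roughly equal"

if hours_on_cloth - hours_on_wire > TOLERANCE:
    print("Data support the contact comfort theory.")
elif hours_on_wire - hours_on_cloth > TOLERANCE:
    print("Data support the cupboard theory.")
else:
    print("The data support neither theory: time is split about equally.")
```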
THEORY, HYPOTHESIS, AND DATA
A theory is a set of statements that describes general principles about how variables relate to one another. For example, Harlow’s theory, which he developed in light of extensive observations of primate babies and mothers, was about the overwhelming importance of bodily contact (as opposed to simple nourishment) in forming attachments. Contact comfort, not food, was the primary basis for a baby’s attachment to its mother. This theory led Harlow to investigate particular kinds of questions—he chose to pit contact comfort against food in his research. The theory meant that Harlow also chose not to study unrelated questions, such as the babies’ food preferences or sleeping habits.
The theory not only led to the questions; it also led to specific hypotheses about the answers. A hypothesis, or prediction, is the specific outcome the researcher expects to observe in a study if the theory is accurate. Harlow's hypothesis related to the way the baby monkeys would interact with the two kinds of mothers he created for the study. He predicted that the babies would spend more time on the cozy mother than the wire mother. Notably, a single theory can lead to a large number of hypotheses because a single study is not sufficient to test the entire theory—it is intended to test only part of it. Most researchers test their theories with a series of empirical studies, each designed to test an individual hypothesis.
Data are a set of observations. (Harlow's data were the amount of time the baby monkeys stayed on each mother.) Depending on whether the data are consistent with hypotheses based on a theory, the data may either support or challenge the theory. Data that match the theory's hypotheses strengthen the researcher's confidence in the theory. When the data do not match the theory's hypotheses, however, those results indicate that the theory needs to be revised or the research design needs to be improved. Figure 1.5 shows how these steps work as a cycle.
FIGURE 1.5 The theory-data cycle. Theory leads researchers to pose particular research questions, which lead to an appropriate research design. In the context of the design, researchers formulate hypotheses, then collect and analyze data, which feed back into the cycle. Supporting data strengthen the theory; nonsupporting data lead to revised theories or improved research design.
FEATURES OF GOOD SCIENTIFIC THEORIES
In scientific practice, some theories are better than others. The best theories are supported by data from studies, are falsifiable, and are parsimonious.
Good Theories Are Supported by Data. The most important feature of a scientific theory is that it is supported by data from research studies. In this respect, the contact comfort theory of infant attachment turned out to be better than the cupboard theory because it was supported by the data. Clearly, primate babies need food, but food is not the source of their emotional attachments to their mothers. In this way, good theories, like Harlow's, are consistent with our observations of the world. Just as important, scientists need to conduct multiple studies, using a variety of methods, to address different aspects of their theories. A theory that is supported by a large quantity and variety of evidence is a good theory.
Good Theories Are Falsifiable. A second important feature of a good scientific theory is falsifiability. A theory must lead to hypotheses that, when tested, could actually fail to support the theory. Harlow's theory was falsifiable: If the monkeys had spent more time on the wire mother than the cloth mother, the contact comfort theory would have been shown to be incorrect. Similarly, Mrazek's mindfulness study could have falsified the researchers' theory: If students in the mindfulness training group had shown lower GRE scores than those in the nutrition group, their theory of mindfulness and attention would not have been supported.
In contrast, some dubious therapeutic techniques have been based on theories that are not falsifiable. Here's an example. Some therapists practice facilitated communication (FC), believing they can help people with developmental disorders communicate by gently guiding their clients' hands over a special keyboard. In simple but rigorous empirical tests, the facilitated messages have been shown to come from the therapist, not the client (Twachtman-Cullen, 1997). Such studies demonstrated FC to be ineffective. However, FC's supporters don't accept these results. The empirical method introduces skepticism, which, the supporters say, breaks down trust between the therapist and client and shows a lack of faith in people with disabilities. Therefore, these supporters hold a belief about FC that is not falsifiable. To be truly scientific, researchers must take risks, including being prepared to accept data indicating their theory is not supported. Even practitioners must be open to such risk, so they can use techniques that actually work. For another example of an unfalsifiable claim, see Figure 1.6.
FIGURE 1.6 An example of a theory that is not falsifiable. Certain people might wear a tinfoil hat, operating under the idea that the hat wards off government mental surveillance. But like most conspiracy theories, this notion of remote government mindreading is not falsifiable. If the government is shown to read people's minds, the theory is supported. But if there is no physical evidence, that also supports the theory, because if the government does engage in such surveillance, it wouldn't leave a detectable trace of its secret operations.
Good Theories Have Parsimony. A third important feature of a good scientific theory is that it exhibits parsimony. Theories are supposed to be simple. If two theories explain the data equally well, most scientists will opt for the simpler, more parsimonious theory.
Parsimony sets a standard for the theory-data cycle. As long as a simple theory predicts the data well, there should be no need to make the theory more complex. Harlow's theory was parsimonious because it posed a simple explanation for infant attachment: Contact comfort drives attachment more than food does. As long as the data continue to support the simple theory, the simple theory stands. However, when the data contradict the theory, the theory has to change in order to accommodate the data. For example, over the years, psychologists have collected data showing that baby monkeys do not always form an attachment to a soft, cozy mother. If monkeys are reared in complete social isolation during their first, critical months, they seem to have problems forming attachments to anyone or anything. Thus, the contact comfort theory had to change a bit to emphasize the importance of contact comfort for attachment, especially in the early months of life. The theory is slightly less parsimonious now, but it does a better job of accommodating the data.
THEORIES DON’T PROVE ANYTHING
The word prove is not used in science. Researchers never say they have proved their theories. At most, they will say that some data support or are consistent with a theory, or they might say that some data are inconsistent with or complicate a theory. But no single confirming finding can prove a theory (Figure 1.7). New information might require researchers, tomorrow or the next day, to change and improve current ideas. Similarly, a single disconfirming finding does not lead researchers to scrap a theory entirely. The disconfirming study may itself have been designed poorly. Or perhaps the theory needs to be modified, not discarded. Rather than thinking of a theory as proved or disproved by a single study, scientists evaluate their theories based on the weight of the evidence, for and against. Harlow's theory of attachment could not be "proved" by the single study involving wire and cloth mothers. His laboratory conducted dozens of individual studies to rule out alternative explanations and test the theory's limits.
❮❮ For more on weight of the evidence, see Chapter 14, p. 436.