THIRD EDITION
Research Methods in Psychology EVALUATING A WORLD OF INFORMATION
Beth Morling UNIVERSITY OF DELAWARE
W. W. NORTON & COMPANY, INC. NEW YORK • LONDON
W. W. Norton & Company has been independent since its founding in 1923,
when William Warder Norton and Mary D. Herter Norton first published
lectures delivered at the People’s Institute, the adult education division of
New York City’s Cooper Union. The firm soon expanded its program beyond
the Institute, publishing books by celebrated academics from America and
abroad. By midcentury, the two major pillars of Norton’s publishing program—
trade books and college texts—were firmly established. In the 1950s, the Norton
family transferred control of the company to its employees, and today—with
a staff of four hundred and a comparable number of trade, college, and
professional titles published each year—W. W. Norton & Company stands as
the largest and oldest publishing house owned wholly by its employees.
Copyright © 2018, 2015, 2012 by W. W. Norton & Company, Inc.
All rights reserved Printed in Canada
Editor: Sheri L. Snavely
Project Editor: David Bradley
Editorial Assistant: Eve Sanoussi
Manuscript/Development Editor: Betsy Dilernia
Managing Editor, College: Marian Johnson
Managing Editor, College Digital Media: Kim Yi
Production Manager: Jane Searle
Media Editor: Scott Sugarman
Associate Media Editor: Victoria Reuter
Media Assistant: Alex Trivilino
Marketing Manager, Psychology: Ashley Sherwood
Design Director and Text Design: Rubina Yeh
Photo Editor: Travis Carr
Photo Researcher: Dena Digilio Betz
Permissions Manager: Megan Schindel
Composition: CodeMantra
Illustrations: Electragraphics
Manufacturing: Transcontinental Printing
Permission to use copyrighted material is included in the Credits section beginning on page 603.
Library of Congress Cataloging-in-Publication Data
Names: Morling, Beth, author.
Title: Research methods in psychology : evaluating a world of information / Beth Morling, University of Delaware.
Description: Third Edition. | New York : W. W. Norton & Company, [2017] | Revised edition of the author’s Research methods in psychology, [2015] | Includes bibliographical references and index.
Identifiers: LCCN 2017030401 | ISBN 9780393617542 (pbk.)
Subjects: LCSH: Psychology—Research—Methodology—Textbooks. | Psychology, Experimental—Textbooks.
Classification: LCC BF76.5 .M667 2017 | DDC 150.72—dc23
LC record available at https://lccn.loc.gov/2017030401
Text-Only ISBN 978-0-393-63017-6
W. W. Norton & Company, Inc., 500 Fifth Avenue, New York, NY 10110 wwnorton.com W. W. Norton & Company Ltd., 15 Carlisle Street, London W1D 3BS
1 2 3 4 5 6 7 8 9 0
For my parents
Brief Contents
PART I Introduction to Scientific Reasoning
CHAPTER 1 Psychology Is a Way of Thinking 5
CHAPTER 2 Sources of Information: Why Research Is Best and How to Find It 25
CHAPTER 3 Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57
PART II Research Foundations for Any Claim
CHAPTER 4 Ethical Guidelines for Psychology Research 89
CHAPTER 5 Identifying Good Measurement 117
PART III Tools for Evaluating Frequency Claims
CHAPTER 6 Surveys and Observations: Describing What People Do 153
CHAPTER 7 Sampling: Estimating the Frequency of Behaviors and Beliefs 179
PART IV Tools for Evaluating Association Claims
CHAPTER 8 Bivariate Correlational Research 203
CHAPTER 9 Multivariate Correlational Research 237
PART V Tools for Evaluating Causal Claims
CHAPTER 10 Introduction to Simple Experiments 273
CHAPTER 11 More on Experiments: Confounding and Obscuring Variables 311
CHAPTER 12 Experiments with More Than One Independent Variable 351
PART VI Balancing Research Priorities
CHAPTER 13 Quasi-Experiments and Small-N Designs 389
CHAPTER 14 Replication, Generalization, and the Real World 425
Statistics Review Descriptive Statistics 457
Statistics Review Inferential Statistics 479
Presenting Results APA-Style Reports and Conference Posters 505
Appendix A Random Numbers and How to Use Them 545
Appendix B Statistical Tables 551
About the Author

BETH MORLING is Professor of Psychology at the University of Delaware. She attended Carleton College in Northfield, Minnesota, and received her Ph.D. from the University of Massachusetts at Amherst. Before coming to Delaware, she held positions at Union College (New York) and Muhlenberg College (Pennsylvania). In addition to teaching research methods at Delaware almost every semester, she also teaches undergraduate cultural psychology, a seminar on the self-concept, and a graduate course in the teaching of psychology. Her research in the area of cultural psychology explores how cultural practices shape people’s motivations. Dr. Morling has been a Fulbright scholar in Kyoto, Japan, and was the Delaware State Professor of the Year (2014), an award from the Council for Advancement and Support of Education (CASE) and the Carnegie Foundation for the Advancement of Teaching.
Preface
Students in the psychology major plan to pursue a tremendous variety of careers—not just becoming psychology researchers. So they sometimes ask: Why do we need to study research methods when we want to be therapists, social workers, teachers, lawyers, or physicians? Indeed, many students anticipate that research methods will be “dry,” “boring,” and irrelevant to their future goals. This book was written with these very students in mind—students who are taking their first course in research methods (usually sophomores) and who plan to pursue a wide variety of careers. Most of the students who take the course will never become researchers themselves, but they can learn to systematically navigate the research information they will encounter in empirical journal articles as well as in online magazines, print sources, blogs, and tweets.
I used to tell students that by conducting their own research, they would be able to read and apply research later, in their chosen careers. But the literature on learning transfer leads me to believe that the skills involved in designing one’s own studies will not easily transfer to understanding and critically assessing studies done by others. If we want students to assess how well a study supports its claims, we have to teach them to assess research. That is the approach this book takes.
Students Can Develop Research Consumer Skills

To be a systematic consumer of research, students need to know what to prioritize when assessing a study. Sometimes random samples matter, and sometimes they do not. Sometimes we ask about random assignment and confounds, and sometimes we do not. Students benefit from having a set of systematic steps to help them prioritize their questioning when they interrogate quantitative information. To provide that, this book presents a framework of three claims and four validities, introduced in Chapter 3. One axis of the framework is the three kinds of claims researchers (as well as journalists, bloggers, and commentators) might make: frequency claims (some percentage of people do X), association claims (X is associated with Y), and causal claims (X changes Y). The second axis of
the framework is the four validities that are generally agreed upon by methodologists: internal, external, construct, and statistical.
The three claims, four validities framework provides a scaffold that is reinforced throughout. The book shows how almost every term, technique, and piece of information fits into the basic framework.
The framework also helps students set priorities when evaluating a study. Good quantitative reasoners prioritize different validity questions depending on the claim. For example, for a frequency claim, we should ask about measurement (construct validity) and sampling techniques (external validity), but not about random assignment or confounds, because the claim is not a causal one. For a causal claim, we prioritize internal validity and construct validity, but external validity is generally less important.
Through engagement with a consumer-focused research methods course, students become systematic interrogators. They start to ask more appropriate and refined questions about a study. By the end of the course, students can clearly explain why a causal claim needs an experiment to support it. They know how to evaluate whether a variable has been measured well. They know when it’s appropriate to call for more participants in a study. And they can explain when a study must have a representative sample and when such a sample is not needed.
What About Future Researchers?

This book can also be used to teach the flip side of the question: How can producers of research design better studies? The producer angle is presented so that students will be prepared to design studies, collect data, and write papers in courses that prioritize these skills. Producer skills are crucial for students headed for Ph.D. study, and they are sometimes required by advanced coursework in the undergraduate major.
Such future researchers will find sophisticated content, presented in an accessible, consistent manner. They will learn the difference between mediation (Chapter 9) and moderation (Chapters 8 and 9), an important skill in theory building and theory testing. They will learn how to design and interpret factorial designs, even up to three-way interactions (Chapter 12). And in the common event that a student-run study fails to work, one chapter helps them explore the possible reasons for a null effect (Chapter 11). This book provides the basic statistical background, ethics coverage, and APA-style notes for guiding students through study design and execution.
Organization

The fourteen chapters are arranged in six parts. Part I (Chapters 1–3) includes introductory chapters on the scientific method and the three claims, four validities framework. Part II (Chapters 4–5) covers issues that matter for any study: research
ethics and good measurement. Parts III–V (Chapters 6–12) correspond to each of the three claims (frequency, association, and causal). Part VI (Chapters 13–14) focuses on balancing research priorities.
Most of the chapters will be familiar to veteran instructors, including chapters on measurement, experimentation, and factorial designs. However, unlike some methods books, this one devotes two full chapters to correlational research (one on bivariate and one on multivariate studies), which help students learn how to interpret, apply, and interrogate different types of association claims, one of the common types of claims they will encounter.
There are three supplementary chapters, on Descriptive Statistics, Inferential Statistics, and APA-Style Reports and Conference Posters. These chapters provide a review for students who have already had statistics and provide the tools they need to create research reports and conference posters.
Two appendices—Random Numbers and How to Use Them, and Statistical Tables—provide reference tools for students who are conducting their own research.
Support for Students and Instructors

The book’s pedagogical features emphasize active learning and repetition of the most important points. Each chapter begins with high-level learning objectives—major skills students should expect to remember even “a year from now.” Important terms in a chapter are introduced in boldface. The Check Your Understanding questions at the end of each major section provide basic questions that let students revisit key concepts as they read. Each chapter ends with multiple-choice Review Questions for retrieval practice, and a set of Learning Actively exercises that encourage students to apply what they learned. (Answers are provided at the end of the book.) A master table of the three claims and four validities appears inside the book’s front cover to remind students of the scaffold for the course.
I believe the book works pedagogically because it spirals through the three claims, four validities framework, building in repetition and depth. Although each chapter addresses the usual core content of research methods, students are always reminded of how a particular topic helps them interrogate the key validities. The interleaving of content should help students remember and apply this questioning strategy in the future.
I have worked with W. W. Norton to design a support package for fellow instructors and students. The online Interactive Instructor’s Guide offers in-class activities, models of course design, homework and final assignments, and chapter-by-chapter teaching notes, all based on my experience with the course. The book is accompanied by other ancillaries to assist both new and experienced research methods instructors, including a new InQuizitive online assessment tool, a robust test bank with over 750 questions, updated lecture and active learning slides, and more; for a complete list, see p. xix.
Teachable Examples on the Everyday Research Methods Blog

Students and instructors can find additional examples of psychological science in the news on my regularly updated blog, Everyday Research Methods (www.everydayresearchmethods.com; no password or registration required). Instructors can use the blog for fresh examples to use in class, homework, or exams. Students can use the entries as extra practice in reading about research studies in psychology in the popular media. Follow me on Twitter to get the latest blog updates (@bmorling).
Changes in the Third Edition

Users of the first and second editions will be happy to learn that the basic organization, material, and descriptions in the text remain the same. The third edition provides several new studies and recent headlines. Inclusion of these new examples means that instructors who assign the third edition can also use their favorite illustrations from past editions as extra examples while teaching.
In my own experience teaching the course, I found that students could often master concepts in isolation, but they struggled to bring them all together when reading a real study. Therefore, the third edition adds new Working It Through sections in several chapters (Chapters 3, 4, 5, 8, and 11). Each one works through a single study in depth, so students can observe how the chapter’s central concepts are integrated and applied. For instance, in Chapter 4, they can see how ethics concepts can be applied to a recent study that manipulated Facebook newsfeeds. The Working It Through material models the process students will probably use on longer class assignments.
Also new in the third edition, every figure has been redrawn to make it more visually appealing and readable. In addition, selected figures are annotated to help students learn how to interpret graphs and tables.
Finally, W. W. Norton’s InQuizitive online assessment tool is available with the third edition. InQuizitive helps students apply concepts from the textbook to practice examples, providing specific feedback on incorrect responses. Some questions require students to interpret tables and figures; others require them to apply what they’re learning to popular media articles.
Here is a detailed list of the changes made to each chapter.
CHAPTER MAJOR CHANGES IN THE THIRD EDITION
1. Psychology Is a Way of Thinking
The heading structure is the same as in the second edition, with some updated examples. I replaced the facilitated communication example (still an excellent teaching example) with one on the Scared Straight program meant to keep adolescents out of the criminal justice system, based on a reviewer’s recommendation.
2. Sources of Information: Why Research Is Best and How to Find It
I simplified the coverage of biases of intuition. Whereas the second edition separated cognitive biases from motivated reasoning, the biases are now presented more simply. In addition, this edition aims to be clearer on the difference between the availability heuristic and the present/present bias. I also developed the coverage of Google Scholar.
3. Three Claims, Four Validities: Interrogation Tools for Consumers of Research
The three claims, four validities framework is the same, keeping the best teachable examples from the second edition and adding new examples from recent media. In response to my own students’ confusion, I attempted to clarify the difference between the type of study conducted (correlational or experimental) and the claims made about it. To this end, I introduced the metaphor of a gift, in which a journalist might “wrap” a correlational study in a fancy, but inappropriate, causal claim.
When introducing the three criteria for causation, I now emphasize that covariance is about the study’s results, while temporal precedence and internal validity are determined from the study’s method.
Chapter 3 includes the first new Working It Through section.
4. Ethical Guidelines for Psychology Research
I updated the section on animal research and removed the full text of APA Standard 8. There’s a new figure on the difference between plagiarism and paraphrasing, and a new example of research fabrication (the notorious, retracted Lancet article on vaccines and autism). A new Working It Through section helps students assess the ethics of a recent Facebook study that manipulated people’s newsfeeds.
5. Identifying Good Measurement
This chapter retains many of the same teaching examples as the second edition. For clarity, I changed the discriminant validity example so the correlation is only weak (not both weak and negative). A new Working It Through section helps students apply the measurement concepts to a self-report measure of gratitude in relationships.
6. Surveys and Observations: Describing What People Do
Core examples are the same, with a new study illustrating the effect of leading questions (a poll on attitudes toward voter ID laws). Look for the new “babycam” example in the Learning Actively exercises.
7. Sampling: Estimating the Frequency of Behaviors and Beliefs
Look for new content on MTurk and other Internet-based survey panels. I updated the statistics on cell-phone-only populations, which change yearly. Finally, I added clarity on the difference between cluster and stratified samples and explained sample weighting.
I added the new keyword nonprobability sample to work in parallel with the term probability sample. A new table (Table 7.3) helps students group related terms.
8. Bivariate Correlational Research
This chapter keeps most of the second edition examples. It was revised to better show that association claims are separate from correlational methods. Look for improved moderator examples in this chapter. These new examples, I hope, will communicate to students that moderators change the relationship between variables; they do not necessarily reflect the level of one of the variables.
9. Multivariate Correlational Research
I replaced both of the main examples in this chapter. The new example of cross-lag panel design, on parental overpraise and child narcissism, has four time periods (rather than two), better representing contemporary longitudinal studies. In the multiple regression section, the recess example is replaced with one on adolescents in which watching sexual TV content predicts teen pregnancy. The present regression example is student-friendly and also has stronger effect sizes.
Look for an important change in Figure 9.13, intended to convey that a moderator can be thought of as a vulnerability. My own students tend to think something is a moderator when the subgroup is simply higher on one of the variables. For example, boys might watch more violent TV content and be higher on aggression, but that’s not the same as a moderator. Therefore, I have updated the moderator column with the moderator “parental discussion.” I hope this will help students come up with their own moderators more easily.
10. Introduction to Simple Experiments
The red/green ink example was replaced with a popular study on notetaking, comparing the effects of taking notes in longhand or on laptops. There is also a new example of pretest/posttest designs (a study on mindfulness training). Students sometimes are surprised when a real-world study has multiple dependent variables, so I’ve highlighted that more in the third edition. Both of the chapter’s opening examples have multiple dependent variables.
I kept the example on pasta bowl serving size. However, after Chapter 10 was typeset, some researchers noticed multiple statistical inconsistencies in several publications from Wansink’s lab (for one summary of the issues, see the Chronicle of Higher Education article, “Spoiled Science”). At the time of writing, the pasta study featured in Chapter 10 has not been identified as problematic. Nevertheless, instructors might wish to engage students in a discussion of these issues.
11. More on Experiments: Confounding and Obscuring Variables
The content is virtually the same, with the addition of two Working It Through sections. The first one is to show students how to work through Table 11.1 using the mindfulness study from Chapter 10. This is important because after seeing Table 11.1, students sometimes think their job is to find the flaw in any study. In fact, most published studies do not have major internal validity flaws. The second Working It Through shows students how to analyze a null result.
12. Experiments with More Than One Independent Variable
Recent work has suggested that context-specific memory effects are not robust, so I replaced the Godden and Baddeley factorial example on context-specific learning with one comparing the memory of child chess experts to adults.
13. Quasi-Experiments and Small-N Designs
I replaced the Head Start study for two reasons. First, I realized it’s not a good example of a nonequivalent control group posttest-only design, because it actually included a pretest! Second, the regression to the mean effect it meant to illustrate is rare and difficult to understand. In its place, there is a new study on the effects of walking by a church.
In the small-N design section, I provided fresh examples of multiple baseline design and alternating treatment designs. I also replaced the former case study example (split-brain studies) with the story of H.M. Not only is H.M.’s story compelling (especially as told through the eyes of his friend and researcher Suzanne Corkin), but the brain anatomy required to understand this example is also simpler than that of split-brain studies, making it more teachable.
14. Replication, Generalization, and the Real World
A significant new section and table present the so-called “replication crisis” in psychology. In my experience, students are extremely engaged in learning about these issues. There’s a new example of a field experiment, a study on the effect of radio programs on reconciliation in Rwanda.
Supplementary Chapters

In the supplementary chapter on inferential statistics, I replaced the section on randomization tests with a new section on confidence intervals. The next edition of the book may transition away from null hypothesis significance testing to emphasize the “New Statistics” of estimation and confidence intervals. I welcome feedback from instructors on this potential change.
Acknowledgments
Working on this textbook has been rewarding and enriching, thanks to the many people who have smoothed the way. To start, I feel fortunate to have collaborated with an author-focused company and an all-around great editor, Sheri Snavely. Through all three editions, she has been both optimistic and realistic, as well as savvy and smart. She also made sure I got the most thoughtful reviews possible and that I was supported by an excellent staff at Norton: David Bradley, Jane Searle, Rubina Yeh, Eve Sanoussi, Victoria Reuter, Alex Trivilino, Travis Carr, and Dena Digilio Betz. My developmental editor, Betsy Dilernia, found even more to refine in the third edition, making the language, as well as each term, figure, and reference, clear and accurate.
I am also thankful for the support and continued enthusiasm I have received from the Norton sales management team: Michael Wright, Allen Clawson, Ashley Sherwood, Annie Stewart, Dennis Fernandes, Dennis Adams, Katie Incorvia, Jordan Mendez, Amber Watkins, Shane Brisson, and Dan Horton. I also wish to thank the science and media specialists for their creativity and drive to ensure my book reaches a wide audience, and that all the media work for instructors and students.
I deeply appreciate the support of many colleagues. My former student Patrick Ewell, now at Kenyon College, served as a sounding board for new examples and authored the content for InQuizitive. Eddie Brummelman and Stefanie Nelemans provided additional correlations for the cross-lag panel design in Chapter 9. My friend Carrie Smith authored the Test Bank for the past two editions and has made it an authentic measure of quantitative reasoning (as well as sending me things to blog about). Catherine Burrows carefully checked and revised the Test Bank for the third edition. Many thanks to Sarah Ainsworth, Reid Griggs, Aubrey McCarthy, Emma McGorray, and Michele M. Miller for carefully and patiently fact-checking every word in this edition. My student Xiaxin Zhong added DOIs to all the references and provided page numbers for the Check Your Understanding answers. Thanks, as well, to Emily Stanley and Jeong Min Lee, for writing and revising the questions that appear in the Coursepack created for the course management systems. I’m grateful to Amy Corbett and Kacy Pula for reviewing the questions in InQuizitive. Thanks to my students Matt Davila-Johnson and Jeong Min Lee for posing for photographs in Chapters 5 and 10.
The book’s content was reviewed by a cadre of talented research methods professors, and I am grateful to each of them. Some were asked to review; others cared enough to send me comments or examples by e-mail. Their students are lucky to have them in the classroom, and my readers will benefit from the time they spent in improving this book:
Eileen Josiah Achorn, University of Texas, San Antonio
Sarah Ainsworth, University of North Florida
Kristen Weede Alexander, California State University, Sacramento
Leola Alfonso-Reese, San Diego State University
Cheryl Armstrong, Fitchburg State University
Jennifer Asmuth, Susquehanna University
Kristin August, Rutgers University, Camden
Jessica L. Barnack-Tavlaris, The College of New Jersey
Gordon Bear, Ramapo College
Margaret Elizabeth Beier, Rice University
Jeffrey Berman, University of Memphis
Brett Beston, McMaster University
Alisa Beyer, Northern Arizona University
Julie Boland, University of Michigan
Marina A. Bornovalova, University of South Florida
Caitlin Brez, Indiana State University
Shira Brill, California State University, Northridge
J. Corey Butler, Southwest Minnesota State University
Ricardo R. Castillo, Santa Ana College
Alexandra F. Corning, University of Notre Dame
Kelly A. Cotter, California State University, Stanislaus
Lisa Cravens-Brown, The Ohio State University
Victoria Cross, University of California, Davis
Matthew Deegan, University of Delaware
Kenneth DeMarree, University at Buffalo
Jessica Dennis, California State University, Los Angeles
Nicole DeRosa, SUNY Upstate Golisano Children’s Hospital
Rachel Dinero, Cazenovia College
Dana S. Dunn, Moravian College
C. Emily Durbin, Michigan State University
Russell K. Espinoza, California State University, Fullerton
Patrick Ewell, Kenyon College
Iris Firstenberg, University of California, Los Angeles
Christina Frederick, Sierra Nevada College
Alyson Froehlich, University of Utah
Christopher J. Gade, University of California, Berkeley
Timothy E. Goldsmith, University of New Mexico
Jennifer Gosselin, Sacred Heart University
AnaMarie Connolly Guichard, California State University, Stanislaus
Andreana Haley, University of Texas, Austin
Edward Hansen, Florida State University
Cheryl Harasymchuk, Carleton University
Richard A. Hullinger, Indiana State University
Deborah L. Hume, University of Missouri
Kurt R. Illig, University of St. Thomas
Jonathan W. Ivy, Pennsylvania State University, Harrisburg
W. Jake Jacobs, University of Arizona
Matthew D. Johnson, Binghamton University
Christian Jordan, Wilfrid Laurier University
Linda Juang, San Francisco State University
Victoria A. Kazmerski, Penn State Erie, The Behrend College
Heejung Kim, University of California, Santa Barbara
Greg M. Kim-Ju, California State University, Sacramento
Ari Kirshenbaum, Ph.D., St. Michael’s College
Kerry S. Kleyman, Metropolitan State University
Penny L. Koontz, Marshall University
Christina M. Leclerc, Ph.D., State University of New York at Oswego
Ellen W. Leen-Feldner, University of Arkansas
Carl Lejuez, University of Maryland
Marianne Lloyd, Seton Hall University
Stella G. Lopez, University of Texas, San Antonio
Greg Edward Loviscky, Pennsylvania State University
Sara J. Margolin, Ph.D., The College at Brockport, State University of New York
Azucena Mayberry, Texas State University
Christopher Mazurek, Columbia College
Peter Mende-Siedlecki, University of Delaware
Molly A. Metz, Miami University
Dr. Michele M. Miller, University of Illinois Springfield
Daniel C. Molden, Northwestern University
J. Toby Mordkoff, University of Iowa
Elizabeth Morgan, Springfield College
Katie Mosack, University of Wisconsin, Milwaukee
Erin Quinlivan Murdoch, George Mason University
Stephanie C. Payne, Texas A&M University
Anita Pedersen, California State University, Stanislaus
Elizabeth D. Peloso, University of Pennsylvania
M. Christine Porter, College of William and Mary
Joshua Rabinowitz, University of Michigan
Elizabeth Riina, Queens College, City University of New York
James R. Roney, University of California, Santa Barbara
Richard S. Rosenberg, Ph.D., California State University, Long Beach
Carin Rubenstein, Pima Community College
Silvia J. Santos, California State University, Dominguez Hills
Pamela Schuetze, Ph.D., The College at Buffalo, State University of New York
John N. Schwoebel, Ph.D., Utica College
Mark J. Sciutto, Muhlenberg College
Elizabeth A. Sheehan, Georgia State University
Victoria A. Shivy, Virginia Commonwealth University
Leo Standing, Bishop’s University
Harold W. K. Stanislaw, California State University, Stanislaus
Kenneth M. Steele, Appalachian State University
Mark A. Stellmack, University of Minnesota, Twin Cities
Eva Szeli, Arizona State University
Lauren A. Taglialatela, Kennesaw State University
Alison Thomas-Cottingham, Rider University
Chantal Poister Tusher, Georgia State University
Allison A. Vaughn, San Diego State University
Simine Vazire, University of California, Davis
Jan Visser, University of Groningen
John L. Wallace, Ph.D., Ball State University
Shawn L. Ward, Le Moyne College
Christopher Warren, California State University, Long Beach
Shannon N. Whitten, University of Central Florida
Jelte M. Wicherts, Tilburg University
Antoinette R. Wilson, University of California, Santa Cruz
James Worthley, University of Massachusetts, Lowell
Charles E. (Ted) Wright, University of California, Irvine
Guangying Wu, The George Washington University
David Zehr, Plymouth State University
Peggy Mycek Zoccola, Ohio University
I have tried to make the best possible use of the suggestions from all of these capable reviewers.
My life as a teaching professor has been enriched during the last few years because of the friendship and support of my students and colleagues at the University of Delaware, colleagues I see each year at the SPSP conference, and all the faculty I see regularly at the National Institute for the Teaching of Psychology, affectionately known as NITOP.
Three teenage boys will keep a person both enter- tained and humbled; thanks to Max, Alek, and Hugo for providing their services. I remain grateful to my mother-in-law, Janet Pochan, for cheerfully helping on the home front. Finally, I want to thank my husband Darrin for encouraging me and for always having the right wine to celebrate (even if it’s only Tuesday).
Beth Morling
Media Resources for Instructors and Students
INTERACTIVE INSTRUCTOR’S GUIDE Beth Morling, University of Delaware The Interactive Instructor’s Guide contains hundreds of downloadable resources and teaching ideas, such as a discussion of how to design a course that best utilizes the textbook, a sample syllabus and assignments, and chapter-by-chapter teaching notes and suggested activities.
POWERPOINTS The third edition features three types of PowerPoints. The Lecture PowerPoints provide an overview of the major headings and definitions for each chapter. The Art Slides contain a complete set of images. And the Active Learning Slides provide the author’s favorite in-class activities, as well as reading quizzes and clicker questions. Instructors can browse the Active Learning Slides to select activities that supplement their classes.
TEST BANK C. Veronica Smith, University of Mississippi, and Catherine Burrows, University of Miami The Test Bank provides over 750 questions using an evidence-centered approach designed in collaboration with Valerie Shute of Florida State University and Diego Zapata-Rivera of the Educational Testing Service. The Test Bank contains multiple-choice and short-answer questions classified by section, Bloom’s taxonomy, and difficulty, making it easy for instructors to construct tests and quizzes that are meaningful and diagnostic. The Test Bank is available in Word RTF, PDF, and ExamView® Assessment Suite formats.
INQUIZITIVE Patrick Ewell, Kenyon College InQuizitive allows students to practice applying terminology in the textbook to numerous examples. It can guide the students with specific feedback for incorrect answers to help clarify common mistakes. This online assessment tool gives students the repetition they need to fully understand the material without cutting into valuable class time. InQuizitive provides practice in reading tables and figures, as well as identifying the research methods used in studies from popular media articles, for an integrated learning experience.
EVERYDAY RESEARCH METHODS BLOG: www.everydayresearchmethods.com The Research Methods in Psychology blog offers more than 150 teachable moments from the web, curated by Beth Morling and occasional guest contributors. Twice a month, the author highlights examples of psychological science in the news. Students can connect these recent stories with textbook concepts. Instructors can use blog posts as examples in lecture or assign them as homework. All entries are searchable by chapter.
COURSEPACK Emily Stanley, University of Mary Washington, and Jeong Min Lee, University of Delaware The Coursepack presents students with review opportunities that employ the text’s analytical framework. Each chapter includes quizzes based on the Norton Assessment Guidelines, Chapter Outlines created by the textbook author and based on the Learning Objectives in the text, and review flashcards. The APA-style guidelines from the textbook are also available in the Coursepack for easy access.
Contents
Preface ix
Media Resources for Instructors and Students xix
PART I Introduction to Scientific Reasoning
CHAPTER 1
Psychology Is a Way of Thinking 5
Research Producers, Research Consumers 6
Why the Producer Role Is Important 6
Why the Consumer Role Is Important 7
The Benefits of Being a Good Consumer 8
How Scientists Approach Their Work 10
Scientists Are Empiricists 10
Scientists Test Theories: The Theory-Data Cycle 11
Scientists Tackle Applied and Basic Problems 16
Scientists Dig Deeper 16
Scientists Make It Public: The Publication Process 17
Scientists Talk to the World: From Journal to Journalism 17
Chapter Review 22
CHAPTER 2
Sources of Information: Why Research Is Best and How to Find It 25
The Research vs. Your Experience 26
Experience Has No Comparison Group 26
Experience Is Confounded 29
Research Is Better Than Experience 29
Research Is Probabilistic 31
The Research vs. Your Intuition 32
Ways That Intuition Is Biased 32
The Intuitive Thinker vs. the Scientific Reasoner 38
Trusting Authorities on the Subject 39
Finding and Reading the Research 42
Consulting Scientific Sources 42
Finding Scientific Sources 44
Reading the Research 46
Finding Research in Less Scholarly Places 48
Chapter Review 53
CHAPTER 3
Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57
Variables 58
Measured and Manipulated Variables 58
From Conceptual Variable to Operational Definition 59
Three Claims 61
Frequency Claims 62
Association Claims 63
Causal Claims 66
Not All Claims Are Based on Research 68
Interrogating the Three Claims Using the Four Big Validities 68
Interrogating Frequency Claims 69
Interrogating Association Claims 71
Interrogating Causal Claims 74
Prioritizing Validities 79
Review: Four Validities, Four Aspects of Quality 80
WORKING IT THROUGH Does Hearing About Scientists’ Struggles Inspire Young Students? 81
Chapter Review 83
PART II Research Foundations for Any Claim
CHAPTER 4
Ethical Guidelines for Psychology Research 89
Historical Examples 89
The Tuskegee Syphilis Study Illustrates Three Major Ethics Violations 89
The Milgram Obedience Studies Illustrate a Difficult Ethical Balance 92
Core Ethical Principles 94
The Belmont Report: Principles and Applications 94
Guidelines for Psychologists: The APA Ethical Principles 98
Belmont Plus Two: APA’s Five General Principles 98
Ethical Standards for Research 99
Ethical Decision Making: A Thoughtful Balance 110
WORKING IT THROUGH Did a Study Conducted on Facebook Violate Ethical Principles? 111
Chapter Review 113
CHAPTER 5
Identifying Good Measurement 117
Ways to Measure Variables 118
More About Conceptual and Operational Variables 118
Three Common Types of Measures 120
Scales of Measurement 122
Reliability of Measurement: Are the Scores Consistent? 124
Introducing Three Types of Reliability 125
Using a Scatterplot to Quantify Reliability 126
Using the Correlation Coefficient r to Quantify Reliability 128
Reading About Reliability in Journal Articles 131
Validity of Measurement: Does It Measure What It’s Supposed to Measure? 132
Measurement Validity of Abstract Constructs 133
Face Validity and Content Validity: Does It Look Like a Good Measure? 134
Criterion Validity: Does It Correlate with Key Behaviors? 135
Convergent Validity and Discriminant Validity: Does the Pattern Make Sense? 139
The Relationship Between Reliability and Validity 142
Review: Interpreting Construct Validity Evidence 143
WORKING IT THROUGH How Well Can We Measure the Amount of Gratitude Couples Express to Each Other? 145
Chapter Review 147
PART III Tools for Evaluating Frequency Claims
CHAPTER 6
Surveys and Observations: Describing What People Do 153
Construct Validity of Surveys and Polls 153
Choosing Question Formats 154
Writing Well-Worded Questions 155
Encouraging Accurate Responses 159
Construct Validity of Behavioral Observations 165
Some Claims Based on Observational Data 165
Making Reliable and Valid Observations 169
Chapter Review 175
CHAPTER 7
Sampling: Estimating the Frequency of Behaviors and Beliefs 179
Generalizability: Does the Sample Represent the Population? 179
Populations and Samples 180
When Is a Sample Biased? 182
Obtaining a Representative Sample: Probability Sampling Techniques 186
Settling for an Unrepresentative Sample: Nonprobability Sampling Techniques 191
Interrogating External Validity: What Matters Most? 193
In a Frequency Claim, External Validity Is a Priority 193
When External Validity Is a Lower Priority 194
Larger Samples Are Not More Representative 196
Chapter Review 198
PART IV Tools for Evaluating Association Claims
CHAPTER 8
Bivariate Correlational Research 203
Introducing Bivariate Correlations 204
Review: Describing Associations Between Two Quantitative Variables 205
Describing Associations with Categorical Data 207
A Study with All Measured Variables Is Correlational 209
Interrogating Association Claims 210
Construct Validity: How Well Was Each Variable Measured? 210
Statistical Validity: How Well Do the Data Support the Conclusion? 211
Internal Validity: Can We Make a Causal Inference from an Association? 221
External Validity: To Whom Can the Association Be Generalized? 226
WORKING IT THROUGH Are Parents Happier Than People with No Children? 231
Chapter Review 233
CHAPTER 9
Multivariate Correlational Research 237
Reviewing the Three Causal Criteria 238
Establishing Temporal Precedence with Longitudinal Designs 239
Interpreting Results from Longitudinal Designs 239
Longitudinal Studies and the Three Criteria for Causation 242
Why Not Just Do an Experiment? 242
Ruling Out Third Variables with Multiple-Regression Analyses 244
Measuring More Than Two Variables 244
Regression Results Indicate If a Third Variable Affects the Relationship 247
Adding More Predictors to a Regression 251
Regression in Popular Media Articles 252
Regression Does Not Establish Causation 254
Getting at Causality with Pattern and Parsimony 256
The Power of Pattern and Parsimony 256
Pattern, Parsimony, and the Popular Media 258
Mediation 259
Mediators vs. Third Variables 261
Mediators vs. Moderators 262
Multivariate Designs and the Four Validities 264
Chapter Review 266
PART V Tools for Evaluating Causal Claims
CHAPTER 10
Introduction to Simple Experiments 273
Two Examples of Simple Experiments 273
Example 1: Taking Notes 274
Example 2: Eating Pasta 275
Experimental Variables 276
Independent and Dependent Variables 277
Control Variables 278
Why Experiments Support Causal Claims 278
Experiments Establish Covariance 279
Experiments Establish Temporal Precedence 280
Well-Designed Experiments Establish Internal Validity 281
Independent-Groups Designs 287
Independent-Groups vs. Within-Groups Designs 287
Posttest-Only Design 287
Pretest/Posttest Design 288
Which Design Is Better? 289
Within-Groups Designs 290
Repeated-Measures Design 290
Concurrent-Measures Design 291
Advantages of Within-Groups Designs 292
Covariance, Temporal Precedence, and Internal Validity in Within-Groups Designs 294
Disadvantages of Within-Groups Designs 296
Is Pretest/Posttest a Repeated-Measures Design? 297
Interrogating Causal Claims with the Four Validities 298
Construct Validity: How Well Were the Variables Measured and Manipulated? 298
External Validity: To Whom or What Can the Causal Claim Generalize? 301
Statistical Validity: How Well Do the Data Support the Causal Claim? 304
Internal Validity: Are There Alternative Explanations for the Results? 306
Chapter Review 307
CHAPTER 11
More on Experiments: Confounding and Obscuring Variables 311
Threats to Internal Validity: Did the Independent Variable Really Cause the Difference? 312
The Really Bad Experiment (A Cautionary Tale) 312
Six Potential Internal Validity Threats in One-Group, Pretest/Posttest Designs 314
Three Potential Internal Validity Threats in Any Study 322
With So Many Threats, Are Experiments Still Useful? 325
WORKING IT THROUGH Did Mindfulness Training Really Cause GRE Scores to Improve? 328
Interrogating Null Effects: What If the Independent Variable Does Not Make a Difference? 330
Perhaps There Is Not Enough Between-Groups Difference 332
Perhaps Within-Groups Variability Obscured the Group Differences 335
Sometimes There Really Is No Effect to Find 342
WORKING IT THROUGH Will People Get More Involved in Local Government If They Know They’ll Be Publicly Honored? 344
Null Effects May Be Published Less Often 345
Chapter Review 346
CHAPTER 12
Experiments with More Than One Independent Variable 351
Review: Experiments with One Independent Variable 351
Experiments with Two Independent Variables Can Show Interactions 353
Intuitive Interactions 353
Factorial Designs Study Two Independent Variables 355
Factorial Designs Can Test Limits 356
Factorial Designs Can Test Theories 358
Interpreting Factorial Results: Main Effects and Interactions 360
Factorial Variations 370
Independent-Groups Factorial Designs 370
Within-Groups Factorial Designs 370
Mixed Factorial Designs 371
Increasing the Number of Levels of an Independent Variable 371
Increasing the Number of Independent Variables 373
Identifying Factorial Designs in Your Reading 378
Identifying Factorial Designs in Empirical Journal Articles 379
Identifying Factorial Designs in Popular Media Articles 379
Chapter Review 383
PART VI Balancing Research Priorities
CHAPTER 13
Quasi-Experiments and Small-N Designs 389
Quasi-Experiments 389
Two Examples of Independent-Groups Quasi-Experiments 390
Two Examples of Repeated-Measures Quasi-Experiments 392
Internal Validity in Quasi-Experiments 396
Balancing Priorities in Quasi-Experiments 404
Are Quasi-Experiments the Same as Correlational Studies? 405
Small-N Designs: Studying Only a Few Individuals 406
Research on Human Memory 407
Disadvantages of Small-N Studies 410
Behavior-Change Studies in Applied Settings: Three Small-N Designs 411
Other Examples of Small-N Studies 417
Evaluating the Four Validities in Small-N Designs 418
Chapter Review 420
CHAPTER 14
Replication, Generalization, and the Real World 425
To Be Important, a Study Must Be Replicated 425
Replication Studies 426
The Replication Debate in Psychology 430
Meta-Analysis: What Does the Literature Say? 433
Replicability, Importance, and Popular Media 436
To Be Important, Must a Study Have External Validity? 438
Generalizing to Other Participants 438
Generalizing to Other Settings 439
Does a Study Have to Be Generalizable to Many People? 440
Does a Study Have to Take Place in a Real-World Setting? 447
Chapter Review 453
Statistics Review Descriptive Statistics 457
Statistics Review Inferential Statistics 479
Presenting Results APA-Style Reports and Conference Posters 505
Appendix A Random Numbers and How to Use Them 545
Appendix B Statistical Tables 551
Areas Under the Normal Curve (Distribution of z) 551
Critical Values of t 557
Critical Values of F 559
r to z' Conversion 564
Critical Values of r 565
Glossary 567
Answers to End-of-Chapter Questions 577
Review Questions 577
Guidelines for Selected Learning Actively Exercises 578
References 589
Credits 603
Name Index 607
Subject Index 611
PART I
Introduction to Scientific Reasoning
Your Dog Hates Hugs NYMag.com, 2016
Mindfulness May Improve Test Scores Scientific American, 2013
Psychology Is a Way of Thinking

THINKING BACK TO YOUR introductory psychology course, what do you remember learning? You might remember that dogs can be trained to salivate at the sound of a bell or that people in a group fail to call for help when the room fills up with smoke. Or perhaps you recall studies in which people administered increasingly stronger electric shocks to an innocent man although he seemed to be in distress. You may have learned what your brain does while you sleep or that you can’t always trust your memories. But how come you didn’t learn that “we use only 10% of our brain” or that “hitting a punching bag can make your anger go away”?
The reason you learned some principles, and not others, is that psychological science is based on studies—on research—by psychologists. Like other scientists, psychologists are empiricists. Being an empiricist means basing one’s conclusions on systematic observations. Psychologists do not simply think intuitively about behavior, cognition, and emotion; they know what they know because they have conducted studies on people and animals acting in their natural environments or in specially designed situations. Research is what tells us that most people will administer electric shock to an innocent man in certain situations, and it also tells us that people’s brains are usually fully engaged—not just 10%. If you are to think like a psychologist, then you must think like a researcher, and taking a course in research methods is crucial to your understanding of psychology.
This book explains the types of studies psychologists conduct, as well as the potential strengths and limitations of each type of study. You will learn not only how to plan your own studies but also how to find research, read about it, and ask questions about it. While gaining a greater appreciation for the rigorous standards psychologists maintain in their research, you’ll find out how to be a systematic and critical consumer of psychological science.

LEARNING OBJECTIVES

A year from now, you should still be able to:

1. Explain what it means to reason empirically.

2. Appreciate how psychological research methods help you become a better producer of information as well as a better consumer of information.

3. Describe five practices that psychological scientists engage in.
RESEARCH PRODUCERS, RESEARCH CONSUMERS Some psychology students are fascinated by the research process and intend to become producers of research. Perhaps they hope to get a job studying brain anatomy, documenting the behavior of dolphins or monkeys, administering personality questionnaires, observing children in a school setting, or analyzing data. They may want to write up their results and present them at research meetings. These students may dream about working as research scientists or professors.
Other psychology students may not want to work in a lab, but they do enjoy reading about the structure of the brain, the behavior of dolphins or monkeys, the personalities of their fellow students, or the behavior of children in a school setting. They are interested in being consumers of research information—reading about research so they can later apply it to their work, hobbies, relationships, or personal growth. These students might pursue careers as family therapists, teachers, entrepreneurs, guidance counselors, or police officers, and they expect psychology courses to help them in these roles.
In practice, many psychologists engage in both roles. When they are planning their research and creating new knowledge, they study the work of others who have gone before them. Furthermore, psychologists in both roles require a curiosity about behavior, emotion, and cognition. Research producers and consumers also share a commitment to the practice of empiricism—to answer psychological questions with direct, formal observations, and to communicate with others about what they have learned.