Preface
The most important thing I learned in almost thirty years of teaching experimental psychology is that the best way to learn, and the best way to teach, is by actively engaging in the research process. By "best" I don't just mean effective in terms of achieving learning goals. I mean best in terms of being less boring and more fun for both students and teachers. This is just as true for advanced graduate students as it is for undergraduates in their first methods class—indeed on their very first day in that class. Research methods are best learned by doing research, and modern technology makes it possible to provide this experience to students at all levels.
When I first started teaching in the 1990s, the music was better, but the opportunities for engaging students in real research were limited. Today, the landscape has fundamentally changed. With the advent of online participant recruitment, easy-to-use survey platforms, the Open Science Framework for sharing materials, and AI-assisted tools, it is possible to develop and launch a study, collect data from real participants, and download and analyze the results all within a single two-hour lab session. What once took an entire semester can now be accomplished in a typical class period.
The Three Goals of This Book
This book has three primary goals. First and foremost, it serves as a comprehensive textbook covering the core principles of behavioral research. Second, it aims to bring the most useful professional research tools into the classroom to create the best possible project-oriented learning experience. And third, it provides a practical guide to the modern methods and online platforms that define contemporary behavioral science.
In service of these goals, the book is designed for flexibility; it can be used as a standalone textbook covering all the material in an introductory or graduate methods course, as a lab manual, or as both.
In the sections that follow, we will review each of these goals in more detail.
Goal 1: A Comprehensive Research Methods Textbook
This book covers all the core topics in a typical introductory undergraduate or graduate course in behavioral science methods. Students will develop their critical thinking skills and learn to navigate the role of theory in science. They will also learn the fundamentals of measurement, sampling, and participant recruitment. The chapters are organized to progressively cover research designs of increasing complexity, from descriptive and correlational studies to more advanced correlational and experimental methods, including factorial designs, covariate analyses, and longitudinal methods.
The book is organized into two parts, aimed at students of different levels and research experience. Part I is geared toward a typical undergraduate curriculum, covering foundational topics in depth. Part II provides a comprehensive guide to conducting online research for readers at all levels, including graduate students and professional researchers. It offers a thorough overview of the online research landscape, addressing modern challenges like online participant recruitment, data quality, data cleaning, best practices for designing and launching studies, and cutting-edge tools such as AI in qualitative and mixed methods research.
However, this structure is flexible; many chapters in Part II can be easily incorporated into a first methods course, just as advanced readers may find sections of Part I to be a useful reference. Finally, dedicated chapters on ethics and writing in Part II cover topics that are essential components of both introductory and more advanced courses.
Therefore, the book can be used as a traditional textbook by itself, without the accompanying projects. For instructors who wish to incorporate a more project-oriented approach, we describe that model below.
Goal 2: A Project-Oriented Approach
The second purpose of this book is to bring professional research tools into the classroom to create the best possible project-oriented learning experience for both students and professors. To do so, my co-authors and I have developed several methodological innovations that reimagine how research methods can be taught. The chapters in this book are structured around CLABs (Classroom-Laboratory hybrids), which are integrated sessions where theoretical concepts are immediately followed by hands-on projects. CLABs foster Collaborative Learning About Behavioral Science.
To facilitate project-oriented learning, we have created numerous research projects with real data collected from over 2,500 online participants specifically for this book. Part I, Chapters 1-8, is geared toward the introductory student and combines traditional content with projects and assignments to bring the concepts to life and encourage independent learning. The content and projects are carefully scaffolded to support students towards incremental mastery of key concepts.
Chapters generally follow a four-step learning progression. First, chapters introduce a theoretical concept (e.g., correlation). Second, students work with data to see real-world applications of that concept (e.g., download an existing dataset from this book's OSF page, conduct a correlation analysis between anxiety and depression in a 500-participant online sample, and visualize the scatterplot). Third, students engage in a guided research project. As part of the guided project, they examine an existing research study for which the data have already been collected and the materials are available to work with. Students engage with these guided projects as a researcher would: they start by formulating a hypothesis prior to seeing the data, work with a Qualtrics or Engage file to create the study, download and analyze the data, examine whether the results support their hypotheses, and write a brief explanation of the findings. The final step is the creation of their own project, such as a correlational study created on Qualtrics. This four-step process builds skills incrementally and brings each methodological concept to life.
We have also incorporated the latest technological tools to facilitate students' ability to create their own projects. One of the most productive examples is having students use AI to create their own 7-to-10-item measurement scales while they learn about measurement. This assignment, covered in Chapter 4, enables students to create instruments that measure just about anything that interests them and allows them to explore and research a much wider set of topics than would otherwise be possible.
Finally, we have implemented a portfolio-based assessment approach where students build a cumulative record of their research accomplishments throughout the course. At the end of each small project, students copy and paste their SPSS output into their portfolio and then describe and interpret the findings. This creates a simple and effective way to assess participation and engagement in each activity and also creates a repository of the skills that students acquire over the semester.
All materials, including Qualtrics files, surveys, datasets, and analysis instructions, are available through this book's Open Science Framework page. Every project is also accompanied by short, engaging videos that guide students through each step of the activity.
A Flexible and Modular Approach
While we encourage using the hands-on activities to create a fun and engaging class, we designed the book with maximum flexibility in mind. The components are modular, allowing you to adapt them to your specific course goals and resources.
For example:
- You can use the book as a traditional textbook, assigning only the core readings and forgoing the hands-on components entirely.
- You can introduce data analysis without live data collection by using the guided projects, which rely on the pre-collected datasets available on the book's OSF page.
- You can pick and choose activities, assigning a guided project from one chapter while having students collect their own data in another.
- You can have students design their own research studies, collect their own data, analyze it, and present their findings as full APA-style reports, short write-ups, or as posters.
The result is a textbook that doesn't just tell students about research—it enables them to do research. In the first class session, students dive in with a simple and engaging data collection exercise. By mid-semester, they are typically designing and conducting original studies: they create their own measurement scales, program the studies in Qualtrics, collect data online, and analyze it using statistical software. And by the end of the course, they will have developed a comprehensive portfolio showcasing their ability to create their own measures, interpret the results of t-tests, multiple regression, and other statistical analyses, create randomized experiments on Qualtrics, formulate research questions, design studies, collect and analyze their own online data, communicate research findings, and much more.
This approach reflects my deepest conviction about learning: we master what we practice, not what we read. The students I've taught over the years have consistently confirmed this principle. Those who thrive—whether they become researchers, practitioners, or professionals in entirely different fields—are those who have experienced research firsthand.
Goal 3: A Guide to Modern Online Methods
Finally, the third goal of this book is to provide a comprehensive guide for online research. Part II, Chapters 9-16, is geared towards advanced undergraduates, graduate students, and researchers alike.
Part II addresses the real-world challenges and opportunities of online data collection. It begins with a detailed overview of the online participant ecosystem, from large-scale market research panels to researcher-centric platforms like Connect. It provides an in-depth, three-chapter treatment of data quality, one of the most pressing issues in the field today. It also shows readers how to identify and understand the sources of poor-quality data, how to implement practical data quality solutions, and how to carry out data cleaning procedures.
From there, the guide offers step-by-step best practices for the entire research process. This includes issues of representativeness in online samples, designing effective online studies, setting up and launching projects, determining fair payment, managing longitudinal data, and navigating the unique ethical considerations of online research. While geared toward advanced readers, many of these chapters can be easily incorporated into an introductory course to give all students a grounding in modern research practices.
My coauthors and I feel this book adheres to a model of transformational teaching (Slavich & Zimbardo, 2012), which involves dynamic relationships between students and teachers to facilitate not only learning but personal growth. In this approach, teachers function as "intellectual coaches" who provide a framework for learning and disseminating core concepts while allowing for independent and active learning from students. Mentoring and teaching should be transformational for both student and professor, with both parties learning from each other.
Research in the Cloud brings that opportunity to every student. I invite you to join us in this reimagined approach to research methods education. Whether you are a student encountering these concepts for the first time or an instructor looking to transform your teaching, I hope this book helps you experience the excitement and insights that come from doing real research from day one.
Housekeeping for Instructors
Before teaching the way we envision with this book, professors need to ensure students have access to the tools required for research. This means picking a survey platform, thinking through sources of participants, and selecting a statistical package to teach with, among other things.
Many colleges and universities have institutional access to the Qualtrics survey platform. As a result, each activity throughout the book comes with Qualtrics files (.qsf files). Many screenshots that guide students through various activities and the video series that supports each guided project also use Qualtrics. If you do not have access to Qualtrics, there are many other survey platforms, some of which offer a free student version. Engage by CloudResearch is one such platform, and it is available for free use with this book. Chapter 8 focuses on the AI capabilities of Engage, and other features, such as the ability to use AI to create an initial version of a survey, make it a good starting point for students.
Beyond survey platforms, students who engage in independent research will need access to research participants. Some activities may be conducted by sending surveys to friends and family or gathering data from the college or university student subject pool. In other instances, the most practical option will be online participant recruitment. For example, in Chapter 3 students learn about descriptive research and the guided project shows them how to gather a sample matched to the US Census for greater representativeness. Chapter 9 introduces students to the kinds of platforms available for online participant recruitment, and across the entire book, students learn about Connect. Some instructors who have already taught with the book have had students gather data both online and offline and used the differences between the two as a learning opportunity.
Instructors who want to collect online data can apply to the Spark Fund administered by CloudResearch. The Spark Fund provides several hundred dollars that teachers can use to collect online data within the classroom. Within Connect, the Teams feature makes it easy to manage student accounts and monitor the projects students launch.
After collecting data, students will need a statistical package to analyze it. We demonstrate how to conduct analyses using SPSS, as it is still one of the most widely used statistical packages in the behavioral sciences. Some undergraduate courses also use free and open-source software like JASP or Jamovi, and some use R. A guide to conducting the statistical analyses for each activity in the book with these different statistical packages is available on the resource page we have created. So, too, are PowerPoint slides, lecture outlines, test banks, and other materials teachers might find helpful when adopting the text.
Finally, given that this book encourages students to conduct independent research projects, a note about IRBs is in order. Research that is conducted solely for educational purposes and is not intended for external dissemination does not typically require IRB approval. Thus, students can generally conduct the projects outlined in this course without IRB approval. However, it is common for students to come up with creative and exciting projects that deserve to be showcased to a larger audience. In such cases, IRB approval will be warranted. Each instructor should speak with their institution's IRB to understand the requirements of their specific institution. Chapter 15 includes ethical considerations for online studies and best practices for writing consents and IRB proposals.
Leib Litman
Professor of Psychology, Lander College
Author Information
Aaron Moss, Ph.D., is a social psychologist who studies online research methods. He has taught for more than 10 years and has written over 20 articles and chapters, with his work covered in outlets like The Wall Street Journal and Harvard Business Review. Aaron lives in Albany, NY, and teaches at Siena University.
Jonathan Robinson, Ph.D., is a Professor of Computer Science at Touro University and the CTO of CloudResearch. His innovative work turns technology into tools that shape research methodology. Author of "Conducting Online Research on Amazon Mechanical Turk and Beyond," he challenges students to grow while living his motto: "Dream and make it happen."
Leib Litman, Ph.D., is Professor of Psychology at Touro University and Chief Research Officer at CloudResearch. He has almost 30 years of experience teaching research methods, and he has written close to 100 articles and book chapters in experimental and cognitive psychology.
Introduction
Leonardo da Vinci is famous for many things—his painting, drawing, and contributions to engineering, architecture, and scientific thought to name a few. Of all the noble pursuits he set his mind to, describing the tongue of a woodpecker is a puzzling one.
Walter Isaacson wrote a biography about Leonardo. After coming across the woodpecker entry in one of Leonardo's notebooks, Isaacson wondered: "Who on Earth would decide one day, for no apparent reason, that he wanted to know what the tongue of a woodpecker looks like?" The answer was Leonardo; his reason was curiosity.
Of all the motives behind scientific inquiry, curiosity may be the most virtuous. It's what led Orville and Wilbur Wright—two bicycle mechanics who lacked scientific training—to conduct systematic tests that produced the first machine-powered human flight. It's what Einstein credited for his ability to grasp the fundamentals of the cosmos, writing, "I have no special talents, I am only passionately curious" (as quoted in Isaacson, 2008). Curiosity is, in fact, the motor behind many scientific advancements, from Copernicus's heliocentric view of the universe to Newton's formulation of the laws of gravity, from Darwin's theory of natural selection to Pasteur's germ theory, and from Watson, Crick, and Franklin's discovery of DNA's structure to Jennifer Doudna's development of CRISPR for editing that DNA. As a catalyst for discovery, curiosity is hard to beat.
While some people are curious about germs, genes, or galaxies, everyone is curious about the human mind. Questions about why people think, feel, and act as they do are central to everyday life and the focus of research in the behavioral sciences. Even a cursory glance at this branch of science reveals much to be curious about.
Why, for example, do people get bored? What leads to a meaningful life? How well do we know ourselves? What is the connection between the mind and the brain? Do humans have free will? What qualities do people want in a romantic partner? What causes mental illness? How well do attitudes predict behavior? Does religion promote morality? When do people feel regret? Does personality change with age? The interesting questions are endless.
And yet, discovering things about the human mind or behavior requires more than curiosity. Anyone who wants to know why people do what they do must also understand the methods behavioral scientists use to conduct research. These include things like how to form a research question, design a study, test a hypothesis, measure psychological concepts, identify patterns in data, demonstrate cause-and-effect relationships, and communicate research findings to others. In today's world, it also means understanding how research methods are integrated with the digital tools that have become essential to how behavioral science operates. Teaching the fundamentals of behavioral science and how to use its newest tools is the goal of this book.
How to Use This Book
This book was designed with a two-part structure to serve learners at different stages of their research journey. Part I explains the basics of behavioral research; Part II guides you through how to apply these methods online.
Part 1: Foundations of Behavioral Research
The first eight chapters provide a hands-on introduction to behavioral research methods. For students new to behavioral research, this is the starting point. We take a friendly, step-by-step approach to core concepts like forming research questions, understanding different types of studies, and collecting data. Throughout these chapters, students learn by doing—working through carefully designed activities that make abstract concepts concrete and manageable. More than just reading about research methods, students conduct small studies, analyze real data, and develop research skills. We created an entire video series to accompany these projects.
Part 2: Online Research Methods
The second half of the book provides a guide to conducting online research and can be used as a standalone resource. It is valuable for students in advanced methods courses, for people working on an honors thesis or independent research project, for graduate students planning a master's thesis or doctoral dissertation, and for postdoctoral researchers or faculty members who have little experience with online research. The chapters in Part II describe the diverse ecosystem of online participant recruitment and look at topics like where to find online participants, how to manage data quality, best practices for setting up an online study, best practices for survey programming and data cleaning, and ethics.
The two-part structure of this book means different readers can approach the book in different ways. Students in an introductory methods course can focus on the research methods fundamentals covered in Part I while referencing Part II as necessary. Those who are specifically interested in online research methods can start with Part II's applications to online research, referencing Part I where needed.
Regardless of starting point, this book will help you develop practical skills that are valued across academic and professional settings. You will learn to design studies, collect and analyze data, use modern research platforms, manage projects, and communicate findings effectively. These skills are increasingly sought after in fields ranging from market research and public policy to technology, law, and healthcare.
Just as Leonardo da Vinci's curiosity led him to make amazing discoveries about the world, this book will equip you with the tools to systematically investigate questions about the human mind. Whether you are pursuing an academic career or planning to apply these skills in industry, you will learn methods that professional researchers use every day in universities, businesses, non-profits, and government agencies.
Our goal is to help you develop not just technical knowledge, but the ability to satisfy your curiosity about human nature through rigorous research. As you work through this book, remember that each method you learn and each project you complete brings you closer to understanding the fascinating complexities of human thought and behavior. Have fun!
Chapter 1 Introduction
Trying to learn about human behavior simply by observing people in your life is like trying to understand the ocean by standing on the beach. You might notice the waves and the tide, see a few creatures on the shore, and feel the water's temperature, but most marine life will remain hidden from view. Similarly, casual observations of the people around you might reveal interesting behaviors and noteworthy patterns, but these observations only scratch the surface. They can't tell you why people do what they do or how reliable your observations are. To truly understand people, you need a systematic approach.
Unlike traditional research methods textbooks, this book will introduce you to behavioral research through hands-on activities. Instead of just reading about research methods, we will use the tools and methods researchers use when they conduct scientific investigations. This will allow you to experience how behavioral scientists collect data, analyze results, and develop theories by participating in each step.
In Module 1.1, we will see how researchers transform abstract psychological characteristics into measurable variables. This module will contain your first activity and your first experience working with data. In Module 1.2, we will learn how researchers transform observations into theories, using the history of personality as an example. Then, in Module 1.3, we will explore the diversity of the behavioral sciences, examining how different disciplines approach the study of human behavior. Finally, in Module 1.4, we will consider how behavioral scientists reach cause and effect conclusions and how the critical thinking necessary for scientifically establishing causal relationships varies from how most people understand cause and effect in everyday life. Through the activities in this chapter, you will discover how behavioral scientists use systematic methods to move beyond casual observations toward scientific explanations.
Be prepared: this chapter introduces a new way of thinking about people and a new approach to learning. The hands-on activities might feel different from traditional coursework, but they are designed to help you develop the knowledge and skills of a behavioral scientist. By completing each exercise, you will begin to understand the research process from the inside. You will also build knowledge and skills that will serve you well throughout this course and beyond.
Chapter Outline
Doing Research, Not Just Reading About It
Learn how to turn concepts into variables and work with data
Have you ever wondered why your roommate waits until the night before an exam to start studying while you begin days in advance? Or why some people thrive in large social gatherings whereas others feel drained? These aren't just idle curiosities. They are the kinds of questions that have launched entire fields of scientific research.
Traditionally, research methods textbooks begin by describing how behavioral scientists think about theories, how they weigh evidence, and the methods they use to understand people. But this book takes a different approach. It lets you experience what research is like.
In just a minute, you will participate in an exercise that assesses personality. While personality is a fascinating topic, the goal of this exercise is to illustrate some fundamental ideas about behavioral science. These include how scientists measure psychological concepts, how they collect and organize data, and how they use that data to understand people's thoughts and behavior. After you complete the project, we will discuss the role of theories and evidence in behavioral science. Then, we will describe the academic disciplines that follow this approach to research. By the end of the chapter, we will have explored not only how behavioral research operates but some of the professions, disciplines, and settings where behavioral scientists study human behavior. Let's jump in!
Research Activity 1.1: Your First Research Project
If you read this book's Introduction—and, if you didn't, our question is why not?—you know this book aims to teach you about research methods by having you do research. Here's your first chance.
You will take a short personality test known as the TIPI (pronounced tip-ee). TIPI stands for Ten Item Personality Inventory (Gosling et al., 2003), and it was designed to quickly measure the Big Five personality traits described in Box 1.1.
After you complete the test, you will receive a score for each dimension of your personality. Write these scores down or take a screenshot. We will use them later in the chapter.
Your scores on the TIPI will be the first example in this book of how behavioral scientists collect data to measure human characteristics. As we will see, turning abstract psychological characteristics like extraversion or conscientiousness into numbers is the first step in scientifically studying human behavior.
The Big Five Personality Traits
Openness to Experience: This trait captures a person's curiosity, creativity, and willingness to try new things. People high in openness tend to be imaginative, intellectual, and adventurous. They appreciate art and beauty, and enjoy exploring new ideas. People low in openness are typically more conventional, practical, and prefer routine.
Conscientiousness: This trait reflects a person's level of organization, dependability, and self-discipline. Highly conscientious people are typically goal-oriented, reliable, and methodical. They pay attention to detail and meet deadlines. People low in conscientiousness are more spontaneous and may struggle with organization and follow-through.
Extraversion: This trait describes a person's social energy and tendency to seek out interactions. Extraverts are outgoing, energetic, and gain energy from being around others. Introverts are more reserved, prefer smaller social interactions, and need alone time to recharge their energy.
Agreeableness: This trait reflects a person's interpersonal style, particularly compassion and empathy. Highly agreeable people are kind, sympathetic, and value harmony in relationships. They tend to be trusting and concerned with others' feelings. Those low in agreeableness are more competitive and may prioritize their own needs over others'.
Neuroticism (or Emotional Stability): This trait measures emotional reactivity and stability. People high in neuroticism experience more frequent negative emotions like anxiety, depression, and vulnerability to stress. Those low in neuroticism (high in emotional stability) are typically calm, resilient, and better at managing stress.
Taking a Personality Test
To take the TIPI, go to: https://psytests.org/big5/tipien.html. Select the button that says "Begin," and then on the next page select "Begin" again. Once the test starts, you will be asked ten questions. They should take less than a minute to complete.
Your results will look like Figure 1.1. Scores on each trait will range between 1 and 7. For example, the agreeableness score in Figure 1.1 is 6.5.
You can toggle between raw scores and percentiles, as shown in Figure 1.2. A percentile represents the percentage of people whose scores fell below yours on each trait. Percentiles are interesting because they show where you stand relative to others. According to Figure 1.2, an agreeableness score of 6.5 is higher than 92% of people who have taken the test.
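If you are curious about the arithmetic behind a percentile, the short Python sketch below shows the basic idea: count how many scores fall below yours and express that count as a percentage of the group. The function name and the scores are made up for illustration and are not part of the TIPI website.

```python
def percentile_rank(your_score, all_scores):
    """Percentage of scores in all_scores that fall below your_score."""
    below = sum(1 for score in all_scores if score < your_score)
    return 100 * below / len(all_scores)

# Hypothetical agreeableness scores (1-7 scale) from earlier test-takers
previous_scores = [3.0, 4.0, 4.5, 5.0, 5.5, 5.5, 6.0, 6.0, 6.5, 7.0]
print(percentile_rank(6.5, previous_scores))  # 80.0 for this made-up group
```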
After reviewing your scores, remember to keep the results tab open, write down your average on each trait, or save a screenshot. You will need these soon.
Drawing Connections to Behavioral Research
Let's explore what this exercise reveals about how behavioral scientists make measurements that are useful for scientific research (Figure 1.3).
First, behavioral research often requires measuring abstract characteristics about people. Unlike physical characteristics such as height, weight, time, or distance, there is no ruler or scale that measures a psychological construct like agreeableness. Instead, researchers must develop questionnaires and other measurement instruments. But developing a measure raises interesting questions, such as: what evidence is there that this particular measure—called an operational definition—is a good measure, and how does the researcher know if the measure is accurate? The answer to these questions involves evaluating reliability and validity, two concepts we will learn about in Chapter 4.
Second, measuring personality is an example of quantitative research. In quantitative research, scientists convert complex characteristics, like agreeableness or openness to experience, into numbers that can be analyzed. Each measured characteristic represents a variable, which is simply a measure on which people have different scores. To see variability in personality, all you need to do is compare your scores to other people's. Are you higher or lower than others in extraversion, agreeableness, and emotional stability? Measuring variables makes it possible to systematically study how people differ and what those differences mean.
Third, once researchers can measure a characteristic, they can look for patterns in data. For instance, studies have found that people who score low in conscientiousness tend to receive more parking tickets, earn less money, and even die earlier than those who score high in conscientiousness (Alderotti et al., 2023; Bogg & Roberts, 2013). Conversely, people high in conscientiousness often experience greater career success, higher job satisfaction, and better physical health, including lower rates of chronic disease (Judge et al., 2013; Strickhouser et al., 2017; Wilmot & Ones, 2019). Research like this reveals how different variables are related, allowing scientists to discover fascinating patterns in human thought and behavior.
Finally, taking the TIPI gave you a glimpse of what it's like to be a research participant. Think about your experience: how seriously did you take the assessment? Did you have fun? Did the length of the test feel appropriate? Would your responses have been different if the test was longer? Your answers to these questions concern what is called participant engagement, which, as every seasoned researcher knows, is as important to the quality of a study as good measures or sound study design. Throughout this book, we will explore how researchers balance scientific rigor with practical considerations like participant engagement.
Now that you understand some basics of behavioral research, try developing your own question about personality. For instance, which behaviors do you think extraversion or openness to experience might predict? What kinds of outcomes do you think are common among people low in emotional stability? Once you can form these kinds of questions, you will be a step closer to designing a study that can provide an answer.
Research Activity 1.2: Your First Data Analysis
Now that you have taken the TIPI, let's dig into some data.
For this exercise, we created a Google Sheet where you—and everyone else who reads this book—can enter your personality scores. You can access the sheet here: https://bit.ly/4hojglZ.
Once you open the sheet, you will see five columns (one for each trait) and several rows of student data (we occasionally clean up the file so it's easy to work with). The structure of this spreadsheet reflects how behavioral data are typically organized. No matter how complex the project is, researchers enter their data into a spreadsheet just like this, where each row represents a person and each column represents a variable. Once the data are ready, you can conduct statistical analyses and create figures to visualize the results.
To add your scores to those of the thousands of students who have completed this exercise, enter them into each column, staying within a single row. When you enter a number in columns B through F, a new student ID will appear in column A. Figure 1.4 shows what the data should look like.
Row 2 in columns I through M contains a formula that computes the average for each trait (e.g., =AVERAGE(B:B)). When you add your data, the averages within each column and the bar chart will automatically update.
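For readers who like to see the same logic in code, here is a minimal sketch in Python using pandas. The scores below are invented for illustration only (they are not the class data); the point is the person-by-variable structure and the column averages that mirror the sheet's formulas.

```python
import pandas as pd

# Each row is a person and each column is a variable (a Big Five trait), just like the sheet.
# These scores are made up for illustration only.
data = pd.DataFrame({
    "Extraversion":        [4.5, 3.0, 6.0],
    "Agreeableness":       [6.5, 5.0, 5.5],
    "Conscientiousness":   [5.0, 6.5, 4.0],
    "Emotional Stability": [3.5, 4.0, 5.5],
    "Openness":            [6.0, 5.5, 6.5],
})

# Column averages, mirroring the =AVERAGE(B:B) formulas in columns I through M
print(data.mean())
```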
Even though the task here is simple, you are working with data by entering numbers, computing averages, and examining the visual patterns across many people. Behavioral scientists do something similar in each study they conduct.
As you examine the data, consider these questions: Which personality trait has the highest average? Which has the lowest? Do these patterns match what you expected? How might the pattern change if the data represented the entire U.S. population rather than students enrolled in research methods? These are the kinds of questions researchers might ask as they analyze these data.
Later in this book, we will learn more sophisticated ways to collect, organize, and analyze data, but the basic processes will remain the same. Behavioral research always involves converting abstract constructs into numbers, collecting data from people, entering the data into a spreadsheet, analyzing the data, and creating visualizations to understand patterns and communicate about the implications for human behavior.
Stop and Discuss!
Before you proceed, discuss the TIPI with a friend or group from class. If you cannot have these conversations in person, use a class discussion board or a group chat on something like Canvas or Blackboard. Talking about your ideas helps you think critically about what you've learned, gives you a chance to hear new perspectives, and allows you to practice communicating your ideas effectively. Communicating effectively is an important skill, whether you are working in a research team, solving problems in a professional setting, or just trying to understand the world better. So, give these questions a shot.
- Which personality traits would you look for in an ideal coworker, friend, or romantic partner? Do these traits change as you consider different relationships or do they remain stable? How do the traits you desire in each person compare to your own personality?
- How do your scores compare to how you see yourself? Do they align or are there surprises? Which personality trait would you change if you could?
- Would your scores on the test be the same if you retook it next week? What about ten years from now? Why might personality change? Why might it stay the same?
- Why do you think researchers developed a short measure like the TIPI? What trade-offs might they face compared to using a longer personality test?
- Are there any ethical concerns when collecting personality data online? If so, how might researchers address those concerns?
Research Portfolio
Throughout Part I of this book, the icon above will appear many times. Anytime you see this icon, add the assignment, discussion questions, or other activities to a Word or Google document that will serve as your research portfolio. This will be a record of your research activities and projects throughout the semester. The portfolio will allow you to reflect on what you have learned. In addition to being part of your grade for this class, your portfolio can be used when applying to graduate school or any relevant jobs. By creating a collection of your independent work, you can document your development as a student and researcher. Your first chance to create your portfolio begins now.
Portfolio Entry #1 – Personality and the TIPI
- Copy and paste your raw scores and percentile scores from the TIPI into your portfolio.
- Copy and paste the class's average scores into your portfolio.
- Thought Question: After reflecting on your scores and the discussion questions above, write 3-5 sentences answering one of the above questions.
From Curiosity to Theory: Building Scientific Knowledge
Explore how scientists develop theories to explain patterns in data
So far, you have taken a brief personality assessment, entered your data into a spreadsheet, analyzed the data by calculating average personality scores, and discussed what the results mean. But you may be wondering: where did the idea of five personality traits come from in the first place? And why these five traits? To answer those questions, let's examine how researchers developed modern theories of personality. In doing so, we will learn what scientific theories are and the role of theory in the scientific process.
The Making of a Scientific Theory: The Big Five Theory of Personality
In the mid-1930s, Gordon Allport and Henry Odbert began studying personality with a simple but powerful idea: the words people use to describe personality might reveal its underlying structure. This lexical approach, as it became known, assumed that the most important personality characteristics would be captured in everyday language. To investigate their idea, Allport and Odbert combed through Webster's New International Dictionary, recording every word that could be used to describe differences between people. They found about 18,000. Then, they shortened their list and grouped words into bins that described things like enduring traits, physical characteristics, and social or emotional states.
Following Allport and Odbert, several psychologists examined whether the original groups of personality words could be used to identify the building blocks of personality (e.g., Cattell et al., 1970; Goldberg, 1981; Tupes & Christal, 1992). To do this, they asked large groups of people to rate both themselves and people they knew on many of the words that Allport and Odbert had identified (e.g., Fiske, 1949). For example, a participant might rate themselves on how "talkative," "organized," or "sympathetic" they are. By gathering thousands of ratings, researchers were able to examine which words consistently appeared together.
Examining patterns between words revealed something remarkable: even though people used thousands of words to talk about personality, personality characteristics grouped together into just a few clusters. Eventually, these clusters came to be known as the Big Five theory of personality (Goldberg, 1981)—the idea that personality is best organized into five basic dimensions: openness, conscientiousness, extraversion, agreeableness, and neuroticism (emotional stability) (see Box 1.1). According to the Big Five theory, any personality-related word will correlate with at least one of the five basic traits, and these five traits are universal, which means they are consistent across cultures (John & Srivastava, 1999).
Once the Big Five theory was established, researchers created various questionnaires to measure the traits. These questionnaires include the ten-item personality inventory you completed earlier, as well as the Big Five Inventory (BFI), which contains 44 items, and the 240-item NEO Personality Inventory (Costa & McCrae, 2008). Each of these measures has been translated into different languages, and each presents a tradeoff between the time taken to administer the test and the precision of the results.
What Makes a Theory Good?
The idea that personality traits cluster into five dimensions is an example of a scientific theory. In everyday language, the word "theory" often means a guess or speculation, as in "I have a theory about why my roommate is always late." In science, however, a theory is a data-driven explanation for a set of observations. Theories make specific predictions about what should happen in different situations.
Good scientific theories share several important characteristics (see Figure 1.5). First, they are data-driven, which means they emerge from many measurements or observations. The Big Five theory is a good example; its five traits were derived from hundreds of thousands of personality ratings collected by researchers spanning from Allport and Odbert to today.
Second, scientific theories strive for parsimony. This means they take something seemingly complex and explain it in the simplest way possible. The Big Five achieves parsimony by showing how thousands of different words can be organized into just five traits.
For example, suppose we start with a single word people often use to describe themselves or others: meticulous. Our question is simple: which of the Big Five traits does "meticulous" belong to or does it belong to none of them? We can answer this by looking at patterns in data. Imagine we ask a few hundred students to rate themselves on a 1–7 scale on a list of personality words that includes meticulous plus representative markers for each Big Five domain—organized, reliable, careful (Conscientiousness); talkative (Extraversion); curious (Openness); warm (Agreeableness); and anxious (Neuroticism). Correlation analyses (statistical techniques that we will describe later) show that people who rate themselves as meticulous also rate themselves as organized and reliable, but not as sloppy or careless. The clustering provides evidence that being meticulous is part of the Conscientiousness trait. When researchers use this procedure with thousands of other words that describe people's personality, almost all the words tie to at least one of the Big Five factors, demonstrating parsimony, or how thousands of everyday descriptors efficiently reduce to a small set of theoretical constructs.
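To make the logic of this example concrete, here is a small simulation in Python. The data are generated under our own assumptions rather than taken from any real study: "meticulous," "organized," and "reliable" share a common conscientiousness component, while the markers for other traits are generated independently, so the correlation matrix shows "meticulous" clustering with the conscientiousness markers.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 300  # a hypothetical sample of a few hundred students

# Simulate 1-7 self-ratings: the first three words share a conscientiousness component,
# while "talkative" (Extraversion) and "curious" (Openness) are generated independently.
conscientiousness = rng.normal(4.5, 1.0, n)
ratings = pd.DataFrame({
    "meticulous": conscientiousness + rng.normal(0, 0.8, n),
    "organized":  conscientiousness + rng.normal(0, 0.8, n),
    "reliable":   conscientiousness + rng.normal(0, 0.8, n),
    "talkative":  rng.normal(4.0, 1.2, n),
    "curious":    rng.normal(5.0, 1.2, n),
}).clip(1, 7)

# "Meticulous" correlates strongly with the conscientiousness markers and near zero with the rest.
print(ratings.corr().round(2)["meticulous"])
```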
Over the years, scientists have tested whether any of the traits in the Big Five overlap enough to be condensed. If so, three or four traits may capture all the variation in personality, creating a more parsimonious theory. But the data have consistently shown that five dimensions—and no fewer—best account for how personality clusters together.
Third, and perhaps most importantly, scientific theories must be falsifiable. This means they must be capable of being proven wrong. The Big Five theory, for instance, makes specific predictions about what patterns researchers should find when studying personality. If these predictions turned out to be wrong, the theory would need to be revised or replaced.
Theory vs Hypothesis
A theory like the Big Five makes specific predictions—in this case, that five basic dimensions of personality should consistently emerge when studying people from different groups. Researchers translate these kinds of theoretical predictions into specific, testable statements called hypotheses.
A hypothesis is a precise claim about what researchers expect to find in a particular study. For example, the Big Five theory predicts that personality has five universal dimensions. Based on this theory, researchers might form the hypothesis that "personality ratings from German participants will reveal the same five dimensions found in U.S. participants." This hypothesis is testable because it is possible to collect empirical data that either supports or refutes it.
Theories are always subject to refinement. For that reason, scientists avoid saying they have "proven" a theory. Instead, they talk about evidence that supports or fails to support theoretical predictions. A pattern of supportive evidence across many studies increases scientists' confidence in a theory, allowing other researchers to build upon its ideas. Evidence that fails to support a theory suggests the ideas need to be modified.
Testing Hypotheses, Refining Theories
Let's look at how theories operate in the scientific process using the Big Five as an example (Figure 1.6). If personality really has five universal dimensions, then research should find the same five dimensions among people from different groups, regardless of language, age, culture, religion, or other factors.
Based on this theory, researchers might examine if people from Germany, Brazil, Japan, and Nigeria show a similar five-factor structure in personality, despite their differences in language and culture. To test the hypothesis that personality dimensions should be the same across cultures, researchers would gather data from people in each country.
To do that, they might assemble personality descriptors across different languages, gather ratings from local participants, and analyze whether the same five dimensions emerge. Each time the hypothesis is tested in a new country, the data would either support the Big Five theory by finding the predicted five-factor structure or challenge the theory by finding something else.
Researchers who have done this work have made some fascinating discoveries. Across more than 50 cultures and dozens of countries, the five-factor structure appears, regardless of whether people rate themselves or their peers (e.g., Allik & McCrae, 2004; Benet-Martínez & John, 1998; McCrae & Costa, 1997; McCrae & Terracciano, 2005). However, some studies in Asian cultures have found evidence for a sixth personality dimension related to interpersonal harmony (e.g., Ashton & Lee, 2007; Lee & Ashton, 2004). This sixth dimension has led to an alternative theory called the HEXACO model, which proposes that personality has six basic dimensions instead of five (HEX means six in Greek). When competing theories emerge, like the HEXACO model, researchers design new studies to test where the theories make different predictions, repeating the scientific process depicted in Figure 1.6.
And this brings us to another aspect of what makes for a "good" scientific theory. A theory doesn't have to be "true" to be useful or influential. Instead, theories are evaluated on how well they make predictions and contribute to scientific progress. Even if a theory is eventually falsified, it can still push scientific knowledge forward by forcing researchers to develop better explanations for the patterns they observe.
Overall, the story of personality theory illustrates how science advances through a continuous cycle of prediction, testing, and refinement. From a general theory like the Big Five, researchers formed specific, testable hypotheses. Then, they collected empirical data to test these hypotheses. Each study either supported the theory by finding the predicted result or challenged it by finding something different. When researchers discovered a sixth personality dimension in Asian cultures, it led to a competing theory. Then the cycle continued as researchers gathered more evidence to evaluate which theory better explained the patterns in the data.
Sometimes the evidence clearly supports one theory over another. Other times, competing theories both capture important aspects of reality. For instance, the debate between five versus six personality dimensions continues today (Thielmann et al., 2022), with different studies supporting different conclusions. It is, however, only through this ongoing process of theory development, empirical testing, and refinement that scientific knowledge grows more sophisticated over time.
Behavioral Science in the Real World
Examine where behavioral science is practiced
Earlier, you experienced how behavioral scientists measure variables and analyze data. While the project you completed was relatively simple, you did the same thing professional researchers do when they address real-world problems. For instance, organizations often use personality assessments to improve hiring decisions and build effective teams; clinical psychologists use them to understand clients and tailor approaches to treatment; the military uses personality testing to match recruits to roles where they are likely to succeed; and courts use personality assessments when making decisions about rehabilitation programs or evaluating people's mental competency. Even dating apps use personality tests to help people find compatible partners.
Yet, there is much more to behavioral science than the study of personality.
What is Behavioral Science?
Behavioral science is generally understood as the study of how people and animals behave, think, and interact with each other and their environments. It explores the causes of human thought, emotion, and behavior by examining the influence of biology, psychology, culture, and society. Behavioral scientists learn about people using methods like observations, experiments, and surveys, and they often apply their knowledge toward solving problems, improving well-being, or designing better systems and policies.
A behavioral scientist might, for instance, be a therapist helping people navigate mental health challenges, a professor researching and teaching consumer behavior, a neuroscientist analyzing brain activity in mice, or a social psychologist studying how group dynamics influence individual decision-making and conformity. They might also be a school psychologist helping students overcome learning difficulties, an economist studying home prices or inflation, a demographer analyzing population trends, or a researcher examining how people use technology. The world of behavioral research is big and diverse.
The Disciplines of Behavioral Science
Table 1.1 lists several academic disciplines that fall under the umbrella of "behavioral science." Examining the table gives a sense of the field's breadth. But this list is incomplete. Not every field that examines human behavior has been included, and within disciplines like psychology, there are often sub-disciplines such as cognitive psychology, social psychology, industrial and organizational psychology, and clinical psychology. Further complicating classification, the boundaries between disciplines are porous. Scientists trained in one discipline may pursue a topic traditionally associated with another discipline. Or, as is increasingly common, researchers with different areas of expertise may collaborate in what is called interdisciplinary research.
| Discipline | Description |
|---|---|
| Anthropology | Explores human cultures, societies, and their development over time. |
| Behavioral Neuroscience | Studies the relationship between the brain, nervous system, and behavior. |
| Communication Studies | Investigates human communication patterns and their effects on individuals and society. |
| Criminology | Examines the causes, consequences, and prevention of crime and deviant behavior. |
| Economics | Investigates how individuals, groups, and societies allocate resources and make decisions. |
| Linguistics | Studies language structure, usage, and its role in human communication and cognition. |
| Marketing/Consumer Behavior | Examines how individuals and groups make decisions about purchasing and using goods and services. |
| Political Science | Studies political processes, institutions, and behavior within social systems. |
| Psychology | Examines human thoughts, feelings, and behaviors through scientific methods. |
| Public Health | Focuses on protecting and improving population health by studying behaviors and social determinants. |
| Sociology | Studies social structures, relationships, and the patterns of human behavior within groups. |
Note: The behavioral sciences encompass a range of disciplines that explore human behavior, each with unique questions and methods.
Where Behavioral Science Happens: Labs, Businesses, Governments, and Beyond
Behavioral scientists work in a variety of organizations. While many work at colleges and universities, combining research with teaching, others work in industry helping organizations understand consumer behavior or improve employee performance. A significant number work at think-tanks and government agencies, evaluating policy or tracking vital population statistics. Others work in healthcare or technology, in non-profits and international organizations, applying their expertise to improve public health, design user-friendly systems, or address pressing social issues.
Given the diversity of behavioral science, you might wonder: what ties these fields together? The answer is a commitment to scientific methods. In fact, the emphasis on careful observation, systematic measurement, and the rigorous testing of ideas isn't bound by location or organization, which means behavioral research can pop up in unexpected places like professional baseball.
Perhaps you are familiar with Michael Lewis's 2003 book Moneyball—it was made into a movie starring Brad Pitt. Moneyball tells the story of the Oakland Athletics and how they revolutionized player evaluation using scientific principles. Rather than rely on traditional scouting wisdom or a gut feeling about players, the A's scientifically studied what helps teams win. When they applied this knowledge during the 2002 season, they won 103 games—including what was then an American League record of 20 in a row—and tied for the best record in baseball while spending about $100 million less than their competitors. Eventually, their approach was adopted by other teams, and then it spread to other sports.
What makes science such a powerful approach to human behavior is its ability to reveal unexpected insights and challenge common assumptions. Just as Moneyball challenged traditional wisdom in baseball, behavioral science frequently uncovers surprising patterns in human behavior that contradict what "everyone knows" to be true. As we will learn in the next module, this is thanks largely to the scientific way researchers think about cause and effect.
How Behavioral Scientists Think About Cause and Effect
Learn how everyday intuitions about cause and effect differ from behavioral research methods
In everyday life, people often learn about the world around them through a combination of personal experiences, learning from other people, intuition, and rational arguments. These ways of learning guide much of our daily decision-making about everything from who to trust, which career to pursue, whether a medical treatment is safe, and how to invest for the future. Yet, when behavioral scientists seek to understand the world, they add one more tool to the mix: empirical evidence.
Empirical evidence is information gathered through systematic methods of measurement and experimentation. Sometimes, empirical evidence leads to the same conclusions as personal experience or intuition. At other times, however, empirical evidence leads to an understanding of the world that is not only counterintuitive but unlikely to emerge from any other form of knowing except scientifically testing ideas. The Moneyball example from the preceding section illustrates this idea, as does the personal story in the next section. As we will see, it is precisely because experiments have the potential to reveal surprising and unexpected information that behavioral scientists prioritize empirical evidence for everything from evaluating mental health treatments to designing marketing materials.
How Do People Learn about Cause and Effect in Everyday Life?
When I (Leib Litman) was a little boy, I learned an important lesson about health: eating too many grapes causes diabetes. This was a fact, proven by direct experience, and witnessed by my family.
My great-grandmother survived one of history's darkest chapters. During World War II, when Nazi forces surrounded Leningrad (now St. Petersburg) in 1941, she lived through one of the deadliest intentional starvation events in modern history. For nearly two and a half years, the city was cut off from food supplies, and at least 800,000 people died of starvation. My great-grandmother survived.
After the blockade was lifted in the winter of 1944, my family traveled to Tashkent, a city in Uzbekistan known for its sweet fruits. There, for the first time in years, my great-grandmother had access to grapes. A lot of grapes. After surviving on minimal rations for so long, she ate them constantly, making up for years of deprivation.
Shortly thereafter, she developed diabetes.
To everyone around my great-grandmother, the cause-and-effect relationship was clear. Eating large quantities of grapes had caused her to develop diabetes. This was her lived experience, and I grew up hearing the story from everyone in my family. The lesson was simple: too many grapes are very bad for you because they can cause diabetes.
It was only later, when I began studying biology in college, that I learned diabetes doesn't quite work that way. It is extremely unlikely for someone to develop diabetes simply by eating grapes. What can happen, however, is that people who have a genetic predisposition to diabetes, people whose bodies are already struggling to process sugar, can begin experiencing symptoms when they consume large amounts of sugar. In all likelihood, the grapes didn't cause my great-grandmother to develop diabetes; they revealed a condition that was already developing.
Even though my family misunderstood the cause of my great-grandmother's diabetes, this story reveals a lot about how people understand cause and effect in everyday life. My great-grandmother's understanding was based on immediate personal experience. She ate the grapes and, soon afterward, developed diabetes. Her experience was crystal clear: one thing led to the other. As humans, we very often interpret one event following another as evidence of a cause-and-effect relationship.
For me as a child, my understanding of diabetes was shaped not only by my great-grandmother's personal experience but also by the authoritative opinions of people I trusted. Everyone in my family agreed that grapes caused diabetes. Therefore, my belief wasn't just based on one person's experience; it was reinforced by the social consensus of people I loved and respected.
In everyday life, personal experiences, authoritative opinions, and social consensus are often extremely valuable. We rely on them when forming opinions about friendships, career decisions, where to go on vacation, and whom to trust. But they can sometimes mislead us when it comes to understanding cause and effect. So, too, can rational arguments.
Let's consider a commonly held belief: cracking your knuckles causes arthritis. Many people are convinced this is true, and for reasons similar to those that convinced my family that eating grapes causes diabetes. Some people crack their knuckles throughout their lives and then, at some point, develop arthritis and joint pain. Their personal experience leads them to believe that knuckle-cracking caused the arthritis. When thousands of people have the same experience and share their stories, a powerful social consensus emerges. Surely all these people can't be wrong.
In addition to personal experience and social consensus, there is another factor at play: rational argument. It just stands to reason that cracking your knuckles over and over again can't be good for your joints. If someone cracks their knuckles their entire life and then develops knuckle pain, it makes logical sense that one caused the other. When multiple sources of knowledge all point to the same conclusion, they create a compelling cause-and-effect narrative: in this case, a narrative that cracking knuckles causes arthritis.
But just as with the grapes, there is an alternative explanation. Maybe people who cracked their knuckles all their lives would have developed arthritis anyway. And maybe there are many people who developed arthritis without ever cracking their knuckles. Examining how behavioral scientists think about cause and effect reveals how we can answer questions like these scientifically, and how different that approach is from people's everyday reasoning.
The Role of Behavioral Science in Understanding Cause and Effect
To properly answer questions about cause and effect, we need behavioral research methods that go beyond personal experience, social consensus, and rational argument.
In the case of knuckle cracking and arthritis, a study has addressed the question scientifically. In 2011, deWeber, Olszewski, and Ortolano examined 215 people between the ages of 50 and 89 who had received X-rays of their hands. The researchers divided the participants into two groups: 135 people who had confirmed arthritis in their hands and 80 people who did not.
The researchers contacted all the participants and asked them about their knuckle-cracking habits: whether they cracked their knuckles, which joints they cracked, how often they cracked them each day, and how many years they had been doing it.
Overall, 20% of the participants reported that they habitually cracked their knuckles. But here is where the findings get interesting: among people who had arthritis, 18.0% reported cracking their knuckles. Among people who did not have arthritis, 23.2% reported cracking their knuckles.
Think about what this means. If knuckle cracking caused arthritis, we would expect to find more knuckle crackers among the people with arthritis. Instead, the researchers found slightly fewer knuckle crackers in that group, although the difference wasn't statistically meaningful. In other words, roughly the same percentage of people cracked their knuckles whether or not they had arthritis. Some percentage of people will develop arthritis as they age, and some of those people will happen to crack their knuckles. But the knuckle-cracking itself doesn't cause the arthritis.
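If you are curious how researchers decide whether a difference like 18.0% versus 23.2% is statistically meaningful, the sketch below shows one common approach: a chi-square test of independence on a 2 x 2 table. To be clear, this is not the analysis reported by deWeber and colleagues; the counts are reconstructed only approximately from the percentages above, and the Python scipy library is used purely for illustration.

```python
# Illustrative sketch only: counts are reconstructed (approximately) from the
# percentages reported above, not taken directly from the published study.
from scipy.stats import chi2_contingency

n_arthritis, n_no_arthritis = 135, 80
crackers_arthritis = round(0.180 * n_arthritis)        # ~24 knuckle crackers
crackers_no_arthritis = round(0.232 * n_no_arthritis)  # ~19 knuckle crackers

# 2 x 2 table: rows = arthritis status, columns = cracker vs. non-cracker
table = [
    [crackers_arthritis, n_arthritis - crackers_arthritis],
    [crackers_no_arthritis, n_no_arthritis - crackers_no_arthritis],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A p-value well above .05 indicates no statistically meaningful difference in
# knuckle-cracking rates between the two groups.
```

On these reconstructed counts, the p-value comes out well above the conventional .05 threshold, consistent with the conclusion above that the small difference between the groups is not statistically meaningful.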
Stop and Discuss!
Now, it's your turn to consider how beliefs about cause and effect work. Think about a claim you have heard or believed. Something from your family, social media, advertising, or everyday conversation. Examples might include: "Drinking coffee stunts your growth," "Reading in dim light ruins your eyesight," "Cracking your neck causes strokes," "Going outside with wet hair causes colds," or "Eating carrots improves your vision."
Choose one claim and discuss these questions with a classmate, friend, or group:
- What is the cause-and-effect claim? State it clearly: "X causes Y."
- What role does personal experience play in why people believe this? Do people have stories about experiencing this connection themselves?
- What role does social consensus play? How widespread is this belief? Who typically shares or reinforces it (family members, friends, authority figures, media)?
- What role does rational argument play? Does the claim seem logical or intuitive? Why might it "make sense" even without scientific evidence?
- Can you think of alternative explanations? What else might explain the observed pattern besides a direct causal relationship?
- Can you think of a scientific study that would directly examine this claim? Describe the methodology of this study.
After your discussion, reflect on how confident you are that the cause-and-effect relationship is real. What would you need to see or know to be more confident in your conclusion?
The Power of Critical Thinking
Cause-and-effect claims are everywhere in society. Does playing violent video games cause aggression? Does listening to Mozart make babies smarter? Does playing a musical instrument increase IQ? Does a particular type of medicine or vaccine improve health outcomes?
Whenever we hear statements like "eating sweet food causes diabetes" or "cracking knuckles leads to arthritis," we should immediately ask: Where does this knowledge come from? Is it based on personal experience? Social consensus? Rational argument? Or has it been tested through systematic research that examines different groups of people?
Learning to think critically about different claims and the evidence for them is one of the most valuable skills that comes from studying behavioral research methods. In Chapter 7, we will explore methodologies and techniques for designing studies that can answer questions about cause and effect using experimental methods. When experimental methods are not available or practical, researchers use correlational methodologies. We will examine these in Chapters 5 and 6.
The methods of behavioral research are used to answer practical questions every day: Does a particular message on a cereal box improve sales? Does a specific website design enhance user experience? Which version of a song makes people more likely to enjoy it? Companies, governments, healthcare providers, and countless other institutions rely on behavioral research methods to make informed decisions about cause and effect.
For these reasons, understanding cause and effect relationships is a central goal of behavioral science. The story of my great-grandmother and the grapes taught me something important: even our most vivid personal experiences, reinforced by the people we trust and supported by seemingly rational arguments, can sometimes lead us astray. The antidote is not to dismiss experience or intuition entirely. Instead, we should aim to complement them with systematic, scientific approaches to understanding the world. That is what behavioral research methods offer: a way to test our beliefs, challenge assumptions, and discover truths that might otherwise remain hidden.
How Well Do People Understand Cause and Effect?
The sections above described how people understand cause and effect in daily life and why it is valuable to learn how behavioral scientists evaluate cause and effect claims. Merging these ideas, we might wonder: how well do most people understand the scientific requirements for establishing cause and effect?
In 2019, the Pew Research Center conducted a major study to assess Americans' understanding of basic scientific concepts. They surveyed over 4,000 adults and asked them eleven questions covering topics from biology to physics to scientific reasoning. One of the questions asked about causal inference. Here's the question:
A scientist is conducting a study to determine how well a new medication treats ear infections. The scientist tells the participants to put 10 drops in their infected ear each day. After two weeks, all participants' ear infections had healed.
Which of the following changes to the design of this study would most improve the ability to test if the new medication effectively treats ear infections?
The survey presented people with four options:
- Create a second group of participants with ear infections who use 15 drops a day
- Have participants use ear drops for only 1 week
- Create a second group of participants with ear infections who do not use any ear drops
- Have participants put ear drops in both their infected ear and healthy ear
Take a moment to think about the question. Which answer option would you choose?
Stop and Discuss!
Before reading further, think about these questions.
- Which option do you think would most improve the study's ability to determine whether the medication works?
- Based on what we've discussed about knuckle-cracking and how eating grapes might lead to diabetes, what's wrong with the original study design? What's missing?
- Why might someone looking at this study conclude the medication works, even if it doesn't?
- Can you think of alternative explanations for why all the participants' ear infections healed after two weeks?
What the Pew Study Revealed
When Pew examined the results of the survey, only 60% of people selected the correct answer: creating a second group of participants who don't use any ear drops. And even among people who chose correctly, many may struggle to articulate why this is the right answer.
Think about what this means. The hypothetical study Pew described has exactly the same problem as my great-grandmother's conclusion about grapes and diabetes, and as the belief that knuckle-cracking causes arthritis. When the study participants used the ear drops and their infections went away, it created a compelling narrative: the medication caused the healing. After all, people used the drops and they got better. One event followed the other. A caused B.
But what is missing? A control group. A control group is a comparison group of people who don't receive the treatment. As it turns out, many ear infections heal on their own within two weeks, without any treatment at all. The human immune system is quite effective at fighting off these infections. So when everyone in the study gets better after using the ear drops, we cannot know whether the medication caused the improvement or whether the participants would have recovered anyway.
The only way to find out is to compare two groups: one that receives the medication and one that doesn't. If people in both groups recover at similar rates, the medication probably isn't doing much. But if significantly more people recover in the treatment group than in the control group, that's evidence the medication works.
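To see why the comparison matters, here is a minimal simulation sketch with made-up numbers: it assumes that 80% of ear infections heal on their own within two weeks and that the drops add no benefit at all. The treated group still looks impressive on its own; only the control group reveals that the medication is not doing the work.

```python
# Minimal illustration with made-up numbers: assume 80% of ear infections heal
# on their own within two weeks and the medication adds no benefit at all.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_group(n_patients, heal_probability):
    """Return how many of n_patients recover within two weeks."""
    return sum(random.random() < heal_probability for _ in range(n_patients))

natural_heal_rate = 0.80   # assumed rate of recovery without any treatment
treatment_benefit = 0.00   # assume the drops add nothing

with_drops = simulate_group(100, natural_heal_rate + treatment_benefit)
without_drops = simulate_group(100, natural_heal_rate)

print(f"Recovered with drops:    {with_drops}/100")
print(f"Recovered without drops: {without_drops}/100")
# Without the second line (the control group), the first result alone would
# look like strong evidence that the medication works.
```

Because the two simulated groups recover at similar rates, the improvement in the treated group is clearly driven by natural recovery rather than by the drops, which is exactly the inference a control group makes possible.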
This is the fundamental logic of experimental research, and it will appear again and again throughout this book. Without a control group, our ability to make cause-and-effect inferences disappears almost entirely. We are essentially relying on the same reasoning that convinced my family that grapes caused my great-grandmother's diabetes: observing that B happened after A and assuming that A must have caused B.
The Bigger Picture
The Pew study tells us something important. Many people don't automatically question what they observe. When we see one thing follow another, especially when it aligns with our expectations, hopes, or intuitions, we tend to assume a causal connection.
Establishing cause and effect requires more than observing that one event followed another; it requires rigorously testing alternative explanations. One of the main goals of studying behavioral research methods is to develop the critical thinking skills necessary to evaluate claims about what causes what, whether you encounter them in scientific research, in the news, in advertising, or in everyday conversation.
The ability to reason clearly about cause and effect influences countless decisions we make throughout our lives. Learning to think like a behavioral scientist means learning to ask the right questions, to demand appropriate evidence, and to recognize when claims about causation are well-supported versus when they are based on the same kind of reasoning that once convinced an entire family that grapes cause diabetes. In future chapters, you will have the opportunity to develop these critical thinking skills.
Summary
In this chapter, you took the first steps into behavioral research by completing a personality test and learning how researchers measure human characteristics. We saw how researchers turn complex psychological traits into measurable variables, how they enter data into databases, and how their findings are applied in various real-world settings. Through the development of personality theory, we explored how scientific theories emerge from careful observation, make testable predictions, and evolve as new evidence emerges. While personality is just one small piece of behavioral science, it illustrates the field's broader mission: to systematically study human behavior using scientific methods.
We also learned about the essential characteristics of scientific theories. Good theories are data-driven, parsimonious, and falsifiable. Scientists evaluate theories based on how well they make useful predictions and contribute to the development of scientific knowledge. Through a continuous cycle of prediction, testing, and refinement, scientists develop an increasingly sophisticated view of human behavior.
In the next chapter, we will learn about the platforms researchers use to recruit participants, the software they use to design studies, and the methods they employ to analyze data. Most importantly, you will start using these tools yourself, moving one step closer to conducting your own behavioral research.
Frequently Asked Questions
What are the Big Five personality traits?
The Big Five personality traits are Openness to Experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (often described by its opposite pole, Emotional Stability). These five dimensions emerged from research showing that thousands of personality-related words cluster into just five broad traits that are consistent across cultures.
What makes a scientific theory good?
Good scientific theories share three important characteristics: they are data-driven (emerge from systematic observation and analysis), parsimonious (explain complex phenomena using the simplest explanation), and falsifiable (make specific predictions that could be proven wrong).
What is the difference between a theory and a hypothesis?
A theory is a data-driven explanation for a set of observations that makes specific predictions about what should happen in different situations. A hypothesis is a precise, testable claim about what researchers expect to find in a particular study, derived from a broader theory.
Why are control groups essential in behavioral research?
Control groups are essential because without them, we cannot distinguish true causal effects from coincidence. A control group provides a comparison of people who don't receive the treatment, allowing researchers to determine if observed changes are actually caused by the intervention or would have happened anyway.
Key Takeaways
- Behavioral research transforms abstract psychological characteristics into measurable variables that can be systematically analyzed
- Quantitative research converts complex characteristics into numbers, enabling researchers to discover patterns in human thought and behavior
- Operational definitions are specific, measurable ways to capture abstract concepts like personality traits
- Scientific theories are data-driven explanations that make specific, falsifiable predictions—not just guesses or speculation
- Good theories are parsimonious, explaining complex phenomena in the simplest way possible
- Hypotheses are specific, testable predictions derived from broader theories
- Behavioral science spans many disciplines united by commitment to scientific methods: careful observation, systematic measurement, and rigorous testing
- Empirical evidence gathered through systematic measurement can reveal truths that contradict personal experience, social consensus, and intuition
- Control groups are essential for establishing cause-and-effect relationships—without them, we cannot distinguish true causal effects from coincidence
- Critical thinking about cause and effect requires questioning whether claims are based on personal experience, social consensus, rational argument, or systematic research