Introduction
When you hear the word "experiment," you might think of a scientist in a white lab coat mixing chemicals or recording observations through a microscope. While some behavioral experiments take place in laboratories, the essence of an experiment isn't about location or equipment; it's about logic and method.
In Chapter 1, we learned about the logic behavioral scientists apply to understand cause and effect. You may recall the Pew Research Center survey that assessed scientific literacy among US adults. As a reminder, Pew presented people with the following scenario:
A scientist is conducting a study to determine how well a new medication treats ear infections. The scientist tells the participants to put 10 drops in their infected ear each day. After two weeks, all participants' ear infections had healed.
Then, Pew asked people what would most improve this study's ability to show that the drug caused the ear infections to go away. The answer, identified by just 60% of people, was to include a control group of participants with ear infections who did not receive the drops.
In daily life, few people think about cause and effect the way behavioral scientists do. Instead, people naturally assume a connection between actions and outcomes. After taking a new vitamin, they may attribute improved mood or energy to the supplement rather than to better weather, more sleep, or simply the passage of time. This tendency to see causation is deeply human—and sometimes inaccurate. The experimental method is valuable precisely because our intuitions about cause and effect can, at times, be misleading.
In this chapter, we will learn not only what an experiment is, but also how experiments allow researchers to draw conclusions about causation that would be impossible through any other method. In Module 7.1, we will explore the fundamentals of experimental design. We will learn how experiments address the directionality and third-variable problems that limit correlational research, and we will explain the logic that makes experiments the gold standard for establishing causality in scientific research.
Module 7.2 will then guide you through a hands-on project using the Heinz dilemma. We will learn about a common experimental manipulation known as perspective-taking and how it affects people's moral judgments. The project will show you how to manipulate perspective-taking by randomly assigning participants to conditions and how to analyze the resulting data with appropriate statistical tests. This module will give you hands-on experience with the steps of conducting a simple experiment.
In Module 7.3, we will explore variations on experimental design, focusing on repeated measures experiments. We will learn how within-subjects designs allow each participant to experience multiple conditions, enabling more powerful comparisons with fewer people. Using another guided project, we will examine how different possible outcomes of Heinz's actions affect people's moral judgments. In this module, you will practice implementing counterbalancing and analyzing repeated measures data.
Finally, Module 7.4 introduces factorial designs that allow researchers to examine multiple variables simultaneously. Through a factorial study of perspective-taking and wealth, you will learn to design and analyze more complex experiments.
Overall, working through each module will help you develop an understanding of what experiments are, how they establish causality, and the different ways they are used within the behavioral sciences. By the end of the chapter, you will have the knowledge and skills to design, conduct, and analyze your own experimental studies.
Chapter Outline
How Experiments Establish Causality
Learn what defines an experiment and how manipulation, control, random assignment, and replication help researchers isolate causal effects.
What is an Experiment?
You are probably familiar with the colloquial use of the word experiment: to test, to try out, to learn from experience. That's what scientists do when conducting experimental research. Unlike correlational studies, which measure variables and examine their association, experiments require researchers to act. To see how, let's examine the pieces of an experiment before we discuss the logic.
Imagine you are participating in a study about social attitudes. The researcher hands you a survey and asks you to answer a simple question: "What are your views about abortion?" You consider the issue and mark your response. For the second question, you are once again asked about abortion but this time the item reads: "What do you think God's view is about abortion?" Just like the first question, the response options range from strongly oppose to strongly support.
Now, consider a slightly different scenario. You walk into the lab and are presented with two questions about abortion. But this time, the order is reversed. The question about God's views on abortion comes first, followed by the question about your own views. Would your answers be the same in this scenario as in the previous one? For many people, they are not.
The scenarios above are from an experiment conducted by psychologists Benjamin Converse and Nicholas Epley (2007). They wanted to test whether considering God's perspective causes people's positions on abortion to change. They found that people who are asked to think about God's views before reporting their own often express less support for abortion than people who share their personal opinion first. Although this experiment raises many interesting questions about religion and morality, it also illustrates how simple an experiment can be.
Key Elements of the Experimental Method
First, notice how the researchers created two groups of participants by manipulating something specific: the order of the questions (Figure 7.1). Some participants considered their personal views first, while others considered God's views first. This simple manipulation is an example of an independent variable—the thing researchers deliberately change within an experiment to see what effect it has.
Second, consider what the researchers measured: people's views about abortion. This is the dependent variable because researchers hypothesize that it depends on, or is affected by, the independent variable. By measuring people's views about abortion after the manipulation, Converse and Epley were able to examine whether thinking about God's perspective first caused people to express less support for abortion compared to when people didn't consider God's perspective before reporting their own view.
Finally, and most importantly, participants were assigned to groups within the study using random assignment. Random assignment is like flipping a coin: heads the participant goes to condition A (asking about personal views first), tails they go to condition B (asking about God's views first).
Even though it is simple, random assignment is powerful. When people are randomly assigned to conditions in an experiment, pre-existing differences in things like age, education, religious beliefs, attitudes about abortion, and countless other factors are distributed about equally across conditions. In other words, random assignment solves the third-variable problem by ensuring that nothing that could influence people's attitudes about abortion systematically differs between conditions. Nothing, that is, except for the experimental manipulation.
The Rationale of Random Assignment
Random assignment is so important to experiments that it is worth examining in depth.
Let's imagine a researcher conducting a version of the study above. They recruit 500 participants: 250 people who strongly support abortion and 250 who strongly oppose abortion. These are people's pre-existing views, before the study begins.
As each person enters the study, the researcher flips a coin to determine whether the participant will see questions about God's view first (heads) or their own view (tails). When the coin is flipped for the 250 pro-choice participants, each person has a 50-50 chance of being in either condition. The same is true for each pro-life participant.
It is important for each person to have an equal chance of being in either condition because their pre-existing views about abortion could, obviously, affect the results. If more pro-life people wound up in the God-first condition, the researcher would not know whether to attribute the group's scores to the manipulation or their pre-existing views. The same thing is true in the opposite direction for pro-choice people. But, with random assignment, there is no need to worry about this problem. If the researcher recruits enough people for the study, the coin flip ensures people with strong pro-life views and people with strong pro-choice views are distributed about equally between conditions.
And herein lies the magic of random assignment: the same logic that applies to people's pre-existing views about abortion applies to every other characteristic that might affect attitudes about abortion, too. Some participants might be influenced by their religious upbringing, others by personal experiences, and still others by political ideology. Random assignment ensures that all these factors, and ones the researcher may not have even considered, have an equal chance of appearing in either condition (Figure 7.2).
Another way to think about random assignment is to realize that it turns every variable that might affect the results of an experiment into a constant across the two conditions. This means people in Condition A and Condition B will have about the same values, on average, across all potential third variables. In fact, the only thing that differs between the two conditions for certain is the experimental manipulation, which means any differences observed in the dependent variable must be caused by whether participants thought about God's views first or second.
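To make the coin-flip logic concrete, here is a minimal simulation in Python (the numbers and condition labels are hypothetical, not data from the actual study). It randomly assigns 250 supporters and 250 opponents to the two conditions and then checks how evenly the pre-existing views end up distributed.

```python
# Minimal simulation of random assignment with hypothetical participants:
# 250 who support abortion and 250 who oppose it, assigned by a coin flip.
import random

random.seed(1)

participants = ["support"] * 250 + ["oppose"] * 250
conditions = {"own_views_first": [], "gods_views_first": []}

for view in participants:
    # The coin flip: heads -> own views first, tails -> God's views first
    assigned = "own_views_first" if random.random() < 0.5 else "gods_views_first"
    conditions[assigned].append(view)

for name, group in conditions.items():
    share = group.count("support") / len(group)
    print(f"{name}: n = {len(group)}, {share:.0%} support abortion")
# Both conditions come out close to 50% supporters: the pre-existing views
# are spread about evenly, which is exactly what random assignment buys us.
```

Re-running the simulation with a different seed produces slightly different splits, but they hover around 50% in both conditions.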
Sample Size and Replication
Of course, random assignment is not perfect. If we flip a coin ten times, we might get seven heads and three tails just by chance. Similarly, if the researcher in the example above only had ten participants, they might end up with more pro-choice people in one condition than the other. But as more people are recruited and the sample size increases, the odds of large imbalances decrease. With 100 participants, it would be rare for random assignment to create large differences between conditions on any variable. With 500 or more participants, such differences become extremely unlikely. This is why sample size matters in experiments.
The larger the sample, the more confident researchers can be that random assignment has effectively controlled for all possible third variables. Most experiments aim for at least 50 participants per condition—enough that random assignment can work its magic. But even with large samples, any single experiment might get "unlucky" with its random assignment. This is why replication—conducting the same experiment multiple times—is important. Even if one study happens to have an uneven distribution of some important third variable, it is unlikely that multiple replications would have the same uneven distribution. When several studies show the same effect, researchers can be increasingly confident that the manipulation really causes a change in the dependent variable.
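If you want to see the effect of sample size for yourself, the short simulation below (a rough sketch, not a formal power analysis) repeatedly splits an evenly divided sample into two conditions at random and reports the typical imbalance in pre-existing views at each sample size. For simplicity it deals participants into two equal-sized groups rather than flipping a separate coin for each person.

```python
# Rough illustration of why sample size matters: the typical imbalance in
# pre-existing views between two randomly formed conditions shrinks as n grows.
import random

random.seed(2)

def typical_imbalance(n, simulations=2000):
    """Average gap (in percentage points) in 'pro-choice' rates between conditions."""
    gaps = []
    for _ in range(simulations):
        views = [1] * (n // 2) + [0] * (n // 2)   # 1 = pro-choice, 0 = pro-life
        random.shuffle(views)
        a, b = views[: n // 2], views[n // 2:]    # two equal-sized conditions
        gaps.append(abs(sum(a) / len(a) - sum(b) / len(b)) * 100)
    return sum(gaps) / len(gaps)

for n in (10, 100, 500):
    print(f"n = {n:3d}: typical imbalance ≈ {typical_imbalance(n):.1f} percentage points")
```

With ten participants the two conditions often differ by twenty or more percentage points; with several hundred, the typical gap drops to just a few.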
The Logic of Experimental Studies
Now that we understand the elements of an experiment, let's return to the logic of experimental design. Remember, one more time, the scenario Pew gave people to measure scientific literacy.
A scientist tests a new medication by instructing participants with an ear infection to use 10 drops every day for two weeks. The design element that most improves this study's ability to establish whether the medication causes the ear infections to go away is a control group. In fact, without this group it is impossible to tell whether the drug has any effect at all. Here's why.
First, while intuition may tell us that people who take a drug and experience relief probably got better because of the drug, it is possible that the drug actually had no effect. As it turns out, most people with ear infections feel better within 2-3 days and the issue is often resolved within 7-10 days even without medical treatment. This means people's ear infections may have gotten better not because the drug was effective, but because most ear infections get better within two weeks. Without the control group, we cannot tell what caused the infections to improve.
A second reason a control group is essential is that patients might feel better simply because they believed they were receiving an effective treatment. This is known as the placebo effect, and research shows it can create real physiological changes, ranging from reduced pain and improved mood to measurable changes in brain activity and immune function. The placebo effect is so powerful that without a control group it is impossible to determine whether any observed benefits came from the actual medication or from patients' beliefs about receiving treatment.
Finally, a third reason a control group is necessary is that, in real life, things other than the experimental manipulation change over time. People who are sick with an ear infection might take better care of themselves during the study period, getting more rest or staying hydrated. Without creating a second group of patients and randomly assigning people to conditions, these behavioral changes are conflated with the effect of the medication.
While control groups are essential to experimental research, they can vary from study to study. In some studies, such as the ear infection example, researchers use a true control group: a group of patients who receive no medication at all. In other experiments, like the God's view study, researchers use a comparison control group (a group that was asked to report its own beliefs first). Despite the differences, the essential principle remains the same: experiments require comparing what happens with the experimental manipulation to what happens without it. Without this baseline for comparison, there is no logical way to establish cause-and-effect relationships.
Where Experiments Appear in the Real World
Experimental methods have a big impact on society. One of the most consequential applications is in clinical trials that test new drugs and medical treatments. Consider how these trials work.
Just as the God's view study randomly assigned participants to different conditions, a clinical trial randomly assigns patients to receive either the experimental treatment, such as a new drug, or a placebo. A placebo is an inert pill that looks like real medication but lacks active ingredients. In a clinical trial, the placebo group acts as the control group—a baseline to compare against the effect of the drug. Even though clinical trials often contain several groups so researchers can test different doses of the drug, at least one treatment group and one control group are essential. When participants are randomly assigned to these conditions, every factor that might affect health besides the drug—age, stress, mental health, exercise, diet—is equally distributed between groups.
Clinical trials also contain another critical feature: they are typically "double-blind." In a double-blind study neither the patients nor the researchers interacting with them know who received the drug versus the placebo. This prevents people's expectations from influencing the study's results.
If patients knew they were receiving the real drug, for instance, they might expect to feel better and report improved health or reduced anxiety even if the drug was not actually effective. Recall the placebo effect discussed above: people report all kinds of changes in health when given a placebo, and the evidence suggests these changes are not just an illusion; people's expectations create real physiological changes (Price et al., 2008). A double-blind design also prevents the researchers who interact with participants from consciously or unconsciously treating people in one group differently than those in the other, potentially biasing the study's outcomes. For these reasons, a double-blind design is standard in clinical trials.
Experimental design also underlies progress in nearly every corner of society—from how farmers grow food to how businesses design websites, from policing strategies to political messaging. Any time someone wants to isolate the effect of one change—whether it's a new fertilizer, a redesigned app, or a public health campaign—they turn to the core logic of experiments: randomly assign people to groups, hold everything else constant, and observe what happens. These methods allow people to make causal claims with confidence, which is why they are so widely used.
In fact, once you start looking, you will see experiments everywhere. Companies test advertising strategies with what they call "A/B tests". Governments evaluate new policies using randomized trials. Sports scientists test training routines. Even dating apps run experiments to see which profiles people are most likely to swipe on. Although the tools may look different in each setting, the underlying logic is the same. Experimental methods strip away noise and uncover whether one thing causes another or not.
Experiments versus Correlational Studies
As a final word in this section, let's consider how different experiments are from correlational research using the God's view study as an example.
In a correlational study, researchers might ask whether people's religious views correlate with attitudes about abortion. The researchers would measure people's religious beliefs and attitudes about abortion. They might find that as people's religious beliefs increase, their support for abortion decreases. But, this correlation could not tell the researchers whether people's religious beliefs cause anti-abortion views, whether having anti-abortion views draws people to religion, or whether some other factor explains both.
The experimental approach, in contrast, manipulates which thoughts are salient when people express their views about abortion and controls for all possible alternative explanations. This is why experiments are considered the gold standard for establishing causation in behavioral science. By carefully controlling the research situation, manipulating specific variables, and holding others constant through random assignment, experiments allow researchers to draw strong conclusions about cause-and-effect relationships.
Research Activity 7.1: Design an Experiment
Now that we've learned about the fundamentals of experimental research, let's examine how to create an experiment.
The God's view study provides a perfect example of how to implement an experiment online. It also illustrates a valuable idea: experiments do not have to be complex to be important. An experiment can consist of nothing more than two sentences presented in a different order. As long as there is a manipulation and participants are randomly assigned to conditions, you can create an interesting experiment. Let's look at how to do this using Qualtrics and Engage.
The video for this activity will show you how to create the experiment: https://bit.ly/CH7gv. There are three important steps.
First, the content. In this case, that means writing the questions about people's personal view of abortion and God's view. Both questions should go into a single "block." We've seen how to create blocks and organize content within Qualtrics in previous chapters.
Second, you need to set up the experimental conditions. That means copying the first block you created and changing the order of the questions so there are two paths through the survey: one where participants see the question about their own views first and another where they see the question about God's views first. After copying the block, changing the order is as simple as dragging and dropping the items.
Third, you need to implement random assignment—the essential feature for making this an experiment rather than a correlational study. Random assignment in Qualtrics or Engage relies on a tool called the "Randomizer" within the Survey Flow. After you add the Randomizer, you can drag your two blocks into the randomization. Then, toggle the settings so the Randomizer selects just one condition for each participant (Figure 7.3).
With these three steps, you have created the core of an experiment. To this core, you can add a welcome message, some demographic questions, and an end of study message. Then you are ready to gather data. You could also easily tweak this project to study other questions you are curious about.
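If you are curious what the Randomizer's even presentation is doing conceptually, the sketch below mimics it in Python. This is only an analogy, not Qualtrics' actual implementation: conditions are dealt out like cards from a small shuffled deck, so the two groups stay about the same size as participants arrive.

```python
# A conceptual analogy for evenly presenting two conditions: shuffle a small
# "deck" containing one copy of each condition and deal from it, reshuffling
# whenever the deck runs out. (Not Qualtrics' actual code.)
import random

random.seed(3)

def even_assigner(conditions):
    """Yield condition labels so each appears equally often over time."""
    while True:
        deck = list(conditions)
        random.shuffle(deck)
        yield from deck

assigner = even_assigner(["own_views_first", "gods_views_first"])
for participant_id in range(1, 7):
    print(f"Participant {participant_id}: {next(assigner)}")
# Every consecutive pair of participants receives one of each condition, so
# the two groups remain balanced as responses come in.
```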
Creating Variations on the "God's View" Experiment
The God's view experiment works by activating a specific mental framework before asking people about their opinion. When people consider God's perspective on abortion before their own, they tend to express less support for abortion than when they answer in the reverse order. The activation of this religious or moral framework—what psychologists call priming—influences how people subsequently evaluate the issue.
To create your own version of this experiment, you could keep the structure but change the dependent variable. For example, you could ask people about God's views on climate change before measuring their environmental attitudes. You could inquire about God's views on wealth distribution or taxation before asking people about their perspectives on those issues. Or, you could ask about God's views on capital punishment before asking people about the death penalty. The point is: the experimental structure remains the same, but the topic changes.
As an alternative to changing the dependent variable, you could modify the independent variable. Instead of activating people's mental framework for religion, you might ask them to reflect on what their parents would think about an issue before stating their own opinion. You could also ask people to consider how future generations or members of their political party would evaluate an issue before giving their judgment. Although the group or entity people are asked to consider may change, the manipulations follow the same principle: change the mental context in which people form and express their attitudes. By systematically varying the context people are asked to consider, you can understand how different social contexts shape people's judgments.
Research Portfolio
Portfolio Entry #25: Programming the 'God's view' study as an Experiment
Once you have created your Qualtrics or Engage version of the God's view study, create an anonymous link and paste it in your portfolio. Do the same for the second study in which you change either the independent or dependent variable. Then write a few sentences describing what your experiment investigates and what you expect to find.
Make sure the experiment you created follows best practices of survey design: each experimental condition should be organized in its own block, you should use the Randomizer to randomly and evenly present conditions to participants, the first block should introduce the study, and the last block should gather demographic information.
In the next module, we will explore another variation of experimental design by studying how perspective-taking affects moral judgments in the Heinz dilemma.
Guided Project: Can Perspective-Taking Shift Moral Judgment?
Conduct an experimental study on perspective-taking and moral judgment.
In this chapter, we have learned how experiments establish cause-and-effect relationships. Now you can put this understanding to work in a guided project.
For this project, we will return to the Heinz dilemma. However, instead of describing people's responses (as you did in Chapter 3) or exploring which moral foundations correlate with people's judgments (as you did in Chapter 5), we are going to test whether people's judgments change when they engage in perspective-taking. This project will give you hands-on experience with the important elements of experimental research: implementing an experimental manipulation, randomly assigning participants to conditions, analyzing group differences, and interpreting causal effects.
Each step of the process is explained in the accompanying video for this project:
Project Goals
The primary goal of this project is to introduce you to the basics of experimental design. By working through a guided project, you will learn how to form your own experimental hypotheses, program a between-subjects experiment in Qualtrics, analyze a two-group design, and create figures to visualize the results of an experiment. Let's begin with the background of this study.
Part 1: Understanding Perspective-Taking
In Chapter 3, we conducted a descriptive study examining how people respond to the Heinz dilemma. You may find it helpful to review that study before beginning this project.
As a reminder, the findings in Chapter 3 revealed that when people were asked whether Heinz should steal the drug or not, they were split. Slightly more than half of people said Heinz should not have stolen the drug. Despite this tendency, people were also sympathetic to Heinz's situation, rating his behavior as somewhat morally acceptable. In this experiment, we will explore what might cause some people to support Heinz's actions and others to oppose them.
One factor that influences people's judgments across a range of moral situations is how much they empathize with others. Just as you might better understand a friend's decision after "putting yourself in their shoes," asking people to take Heinz's perspective might lead them to view his actions more sympathetically. Indeed, perspective-taking is a powerful psychological phenomenon that has been shown to increase empathy and compassion for others, reduce prejudice, lead people from different groups to see divisive issues more similarly, promote helping behavior, and increase recognition of others' hardships (Batson et al., 1997; Batson et al., 2005; Davis et al., 1996; Galinsky & Moskowitz, 2000; Simon et al., 2019; Todd et al., 2012).
The basic idea is that when people imagine another person's perspective, it's easier to consider that person's feelings and circumstances. This often leads to more empathetic judgments of their behavior. We will draw on this well-established area of research to test if perspective-taking causes people to be more likely to say it's okay for Heinz to steal the drug.
Before developing your hypotheses, explore some background research on perspective-taking. Take 15 minutes or so to read about perspective-taking and empathy on Google Scholar, looking at articles by researchers like Batson, Davis, or Galinsky (all cited above). Consider how perspective-taking might influence moral judgments in the Heinz dilemma.
After familiarizing yourself with the research, develop your hypotheses about how perspective-taking might affect people's responses to the Heinz dilemma. Consider these questions:
- How do you predict perspective-taking will affect whether people think Heinz should steal the drug (the yes/no decision)?
- How do you think perspective-taking will affect ratings of how morally acceptable it is for Heinz to steal the drug (the 1-7 scale)?
Research Portfolio
Portfolio Entry #26: Making Predictions for the Perspective-Taking Manipulation
For each prediction, write a few sentences in your portfolio explaining your reasoning. Consider which aspects of Heinz's situation might become more salient when people take his perspective. How might understanding his emotional state affect moral judgments? Remember, in experimental research, researchers make causal predictions (e.g., taking Heinz's perspective should cause people to judge his actions differently than remaining objective).
Part 2: Research Design, Materials, and Methods
Now that you have hypotheses, it is time to test them. Here is an overview of the study materials and experimental design.
As with all other experiments, the key is to create a different experience for participants across conditions. In this study, there are two conditions. In the Perspective-Taking condition, people are presented with the following instructions before they read the Heinz dilemma: "On the next page, you will be presented with a brief scenario. Please read the scenario carefully. As you are reading, try to visualize clearly and vividly what the main character, Heinz, is thinking, feeling, and experiencing. Look at the world through his eyes and walk in his shoes."
In the Objective condition, people are presented with similar but slightly different instructions before they read about Heinz: "On the next page, you will be presented with a brief scenario. Please read the scenario carefully. As you are reading the scenario, try to remain objective and emotionally detached. Try not to get caught up in what the main character, Heinz, might be thinking, feeling, and experiencing."
After reading these instructions, participants in both conditions are presented with the Heinz dilemma and asked to write for 2-3 minutes, following instructions from their assigned condition. Finally, the participants answer questions about Heinz's actions. Figure 7.4 presents an overview of the design.
Dependent Measures
The dependent variables in this study are the same items used in previous chapters to measure people's reactions to the Heinz dilemma. First, "Should Heinz have stolen the drug for his wife?" (Yes/No). Second, "How morally acceptable was it for Heinz to steal the drug?" (1 = Not at all acceptable to 7 = Completely acceptable).
Creating the Study
To examine how this experiment was implemented, download the Qualtrics survey file from the OSF project page: osf.io/a8kev. The file is named "RITC_SURVEY_CH07_HeinzPerspectiveTaking.qsf" and it is within the "Ch. 7 – Experimental Research" folder. Upload it to your Qualtrics or Engage account.
Once you have the survey open, review the questionnaires and measures. Examine how the manipulations and measures are structured. Then look at the survey flow. The instructional video highlights the key features of the design.
One indispensable part of the experiment is random assignment to conditions. When participants start the study, Qualtrics assigns them to either the perspective-taking or objective condition with equal probability (like flipping a coin). You can see the random assignment within the survey flow. Notice how each participant is randomly assigned to one of the two conditions (Figure 7.5).
Part 3: Data Collection
After designing the study, the next step is to gather data. As with all guided projects, we have done that for you.
We gathered data from 100 participants on Connect. We paid each person $0.75 for a ~5-minute study. To analyze the data, download the "RITC_DATA_CH07_HeinzBtwnSubjects.sav" file from the OSF page. Like the survey file, the data is in the folder labeled "Ch 7 – Experimental Research."
Part 4: Analyzing What You Found
This experiment has two outcome measures that require different statistical tests. For the yes/no decision about whether Heinz should steal the drug, you need to conduct a chi-square test. This test will compare the proportion of participants who said "yes" in each condition. The video for this activity will show you how to conduct the analysis and create a figure showing the percentage of "yes" responses by condition. You can also follow the instructions in HOW TO Box 7.1.
For the moral acceptability ratings (measured on a 1-7 scale), you need to conduct an independent samples t-test. This test compares the average ratings between the perspective-taking and objective conditions. Once again, you should conduct the test and create a figure for the results.
After running these analyses, interpret whether perspective-taking had a significant effect on either outcome measure. A significant result (p < .05) suggests the manipulation caused changes in participants' judgments. The direction of any differences tells you whether perspective-taking made participants more or less supportive of Heinz's actions. Pay attention to whether the results align with your hypotheses—did perspective-taking influence moral judgments in the way you predicted?
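If you would like to double-check the SPSS output, both tests can be reproduced in a few lines of Python. The sketch below assumes you have exported the data to a CSV file (the file name here is hypothetical) with the columns used in the HOW TO boxes below: Condition (1 = PerspectiveTaking, 0 = Objective), Steal (1 = Yes, 0 = No), and Acceptability (1-7).

```python
# Reproduce the chi-square test and the independent-samples t-test in Python.
import pandas as pd
from scipy import stats

df = pd.read_csv("heinz_between_subjects.csv")   # hypothetical CSV export of the .sav file

# Chi-square test on the yes/no decision
crosstab = pd.crosstab(df["Steal"], df["Condition"])
chi2, p_chi, dof, expected = stats.chi2_contingency(crosstab)
print(f"Chi-square({dof}) = {chi2:.2f}, p = {p_chi:.3f}")
print(crosstab / crosstab.sum())                 # column proportions, like SPSS column percentages

# Independent-samples t-test on the moral acceptability ratings
pt = df.loc[df["Condition"] == 1, "Acceptability"]
obj = df.loc[df["Condition"] == 0, "Acceptability"]
t, p_t = stats.ttest_ind(pt, obj)
print(f"t({len(pt) + len(obj) - 2}) = {t:.2f}, p = {p_t:.3f}")
print(f"Means: perspective-taking = {pt.mean():.2f}, objective = {obj.mean():.2f}")
```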
Analyze Yes/No Responses with Chi-Square Test
Open the dataset
- Open SPSS and load the "RITC_DATA_CH07_HeinzBtwnSubjects.sav" file
- Check that the "Steal" variable is coded (1 = Yes, 0 = No)
- Verify that "Condition" is coded (1 = PerspectiveTaking, 0 = Control/Objective)
Run the Chi-Square analysis
- Click on "Analyze" in the top menu
- Select "Descriptive Statistics > Crosstabs"
- Move "Steal" to the "Rows" box
- Move "Condition" to the "Columns" box
- Click the "Statistics" button
- Check "Chi-square"
- Click "Continue"
- Click the "Cells" button
- Check "Column percentages"
- Click "Continue"
- Click "OK" to run the analysis
Create a bar chart showing the percentage of "Yes" responses
- Click on "Graphs" in the top menu → select "Chart Builder"
- Choose "Bar" chart from the gallery
- Drag a Simple Bar chart to the canvas
- Drag "Condition" to the X-Axis
- Drag "Steal" to the Y-Axis
- Click "OK" to create the chart
Interpret the results
- In the Chi-Square Tests table, find the Pearson Chi-Square value
- Look at the "Asymptotic Significance (2-sided)" value (p-value)
- If p < .05, there is a significant difference between conditions
- In the Crosstab table, examine the column percentages
- Compare the percentage of "Yes" responses in each condition
- Determine if perspective-taking increased or decreased support for stealing
- Compare the findings to your original hypotheses
Analyze Moral Acceptability Ratings with a t-test
Open the dataset
- Open the "RITC_DATA_CH07_HeinzBtwnSubjects.sav" file if it is not already open
Run the Independent Samples t-Test
- Click on "Analyze" in the top menu
- Select "Compare Means > Independent-Samples T Test"
- Move "Acceptability" to the "Test Variable(s)" box
- Move "Condition" to the "Grouping Variable" box
- Click the "Define Groups" button
- Enter "1" for Group 1 (PerspectiveTaking)
- Enter "0" for Group 2 (Objective)
- Click "Continue"
- Click "OK" to run the analysis
Create a bar chart with error bars
- Click on "Graphs" in the top menu → Select "Chart Builder"
- Choose "Bar" chart from the gallery
- Drag a Simple Bar chart to the canvas
- Drag "Condition" to the X-Axis
- Drag "Acceptability" to the Y-Axis
- Click the "Element Properties" button
- Click on the "Error Bars" tab
- Select "Standard Error" with Multiplier = 1
- Click "OK" to create the chart
- Double-click the chart to edit in Chart Editor
- Add appropriate titles and labels
- Click "Close" when finished
Interpret the results
- In the "Independent Samples Test" table, locate the t-value, df, and Sig. (2-tailed)
- Look at the "Sig. (2-tailed)" value (p-value) for the t-test
- If p < .05, there is a significant difference between conditions
- Examine your bar chart to see the direction of the effect
- Which condition shows higher moral acceptability ratings?
- Determine if perspective-taking led to increased or decreased moral acceptability ratings
- Compare your findings to your original hypotheses
Research Portfolio
Portfolio Entry #27: Report a Two-Group Randomized Experiment
Once you have conducted the analyses, paste the output and the graphs you created into your portfolio. Using the templates for reporting t-tests and chi-square results you have seen in previous chapters, report and interpret the results of the experiment. Did the results align with your hypotheses?
Your Turn: Exploring Perspective-Taking Effects
Now that you have seen how perspective-taking can influence moral judgments, you are ready to investigate how this manipulation might affect other judgments and behaviors. The manipulation from this experiment—asking people to imagine someone else's thoughts and feelings versus asking them to remain objective—can be applied to many different scenarios.
For instance, you might examine how perspective-taking affects judgments in other ethical dilemmas. What happens when people take the perspective of someone who cheated on an exam because they needed a passing grade to keep their scholarship? Or does perspective-taking make people more sympathetic toward someone who lied to protect a friend? The same manipulation that influenced judgments about Heinz might shape how people view other moral transgressions.
Perspective-taking might also influence how people judge controversial policies. You could present participants with a story about someone affected by immigration policies, healthcare costs, or educational inequalities. Would taking that person's perspective change support for the associated policy? Answering these questions could shed light on how personal stories affect people's attitudes.
Another possibility is to investigate perspective-taking in interpersonal conflicts. You could describe a disagreement between roommates, coworkers, or romantic partners. Maybe you have a story from your own life. If so, you can explore whether taking one person's perspective changes how people assign blame in the situation.
Whatever topic you choose, you can use the same basic experimental structure. Some participants receive instructions to take the perspective of the person in your scenario, while others are told to remain objective. Then, you present participants with the scenario and measure some dependent variable that perspective-taking might plausibly affect.
You can use the Qualtrics or Engage survey from the Heinz experiment as a template. Simply replace the Heinz dilemma with your chosen scenario and modify the dependent measures to fit your research question. The video that accompanies this activity shows you how to adapt the materials while maintaining the essential structure of a two-group experiment.
Research Portfolio
Portfolio Entry #28: Design Your Own Two-Groups Between-Subjects Experiment
Create a Qualtrics perspective-taking experiment. It can relate to any question of interest where perspective-taking may cause a change in judgment. Paste the preview link to the experiment in your portfolio.
Variations on Experimental Design: Repeated Measures Experiments
Explore repeated measures designs by conducting a guided project on how different consequences influence moral decision-making.
We have discussed experiments where different groups of participants experience different conditions—what researchers call a between-subjects design. In the God's view study, for instance, each participant either answered questions about God's view first or their own views first. There is, however, another way to structure experiments—one where each participant experiences every experimental condition in the study.
To demonstrate how this works, consider a study examining how background music affects cognitive performance. In a between-subjects design, some participants are randomly assigned to listen to classical music while solving math problems and others to solve problems in silence. As we discussed, this design requires a lot of participants. Remember, with only a few participants in each condition, it is possible to get "unlucky" and end up with groups that differ in some important way, even after random assignment.
This is where within-subjects designs (also called repeated measures) offer an advantage. Instead of comparing different groups, researchers can have each person solve problems with music and in silence. Within this design, it is possible to see how music affects each person's performance relative to their own baseline ability. In other words, each person serves as their own control. This makes it possible to see how the same person performs under different conditions, rather than comparing different groups of people (Figure 7.6). As a result, fewer participants are needed.
However, within-subjects designs face their own challenges. The main challenge is order effects—the possibility that experiencing one condition affects how people respond to later conditions. In the music-and-math example, you might expect people to get better at the math problems with practice, regardless of the music condition. Or participants might get tired and perform worse later in the study than at the start. To address these concerns, repeated measures experiments counterbalance the order of conditions across participants: some people complete the music condition first and silence second, while others get the opposite order. Counterbalancing ensures that the order of the conditions cannot explain the study's outcomes.
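To make the counterbalancing logic concrete, here is a minimal Python sketch (an illustration of the idea, not part of the study materials). With two conditions there are only two possible orders, so we list them and deal them out evenly across participants.

```python
# Counterbalance the order of two conditions across participants.
from itertools import cycle, permutations
import random

random.seed(4)

conditions = ["music", "silence"]
orders = list(permutations(conditions))   # [('music', 'silence'), ('silence', 'music')]
random.shuffle(orders)

assignment = cycle(orders)                # deal the orders out evenly, one per participant
for participant_id in range(1, 7):
    print(f"Participant {participant_id}: {next(assignment)}")
# Half the participants complete music -> silence and half the reverse, so
# practice and fatigue effects are spread evenly across the two conditions.
```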
Repeated measures are not suited to all situations. For instance, consider Stanley Milgram's famous obedience experiments, where participants were instructed to deliver increasingly strong electric shocks to a stranger (actually an actor who received no real shocks). This study cannot work as a repeated measures design. Once a participant has experienced the study, they cannot forget the situation was staged or that the shocks were fake. Their experience in the first condition permanently alters how they respond in subsequent conditions.
Similarly, most studies that involve deception require a between-subjects design. This is because the knowledge people gain in one condition often creates a lasting change that would contaminate how they respond to subsequent conditions. Whenever this is the case, behavioral scientists must rely on between-subjects designs, despite their need for larger sample sizes.
Guided Research Project: How Consequences Shape Moral Judgments
In previous studies with the Heinz dilemma, stealing the drug always saved Heinz's wife. But in the real world, outcomes are often less certain. Sometimes, a novel treatment only eases people's pain rather than providing a cure. At other times, experimental treatments offer uncertain benefits. How do these different outcomes affect whether people think stealing is justified?
In this guided research project, you will gain experience with the key elements of a repeated measures design: creating multiple versions of a scenario, counterbalancing their presentation, and analyzing how the same people respond across conditions. The video that accompanies this project will guide you through each step:
Project Goals
This project has several goals.
First, you will generate hypotheses about people's judgments under different conditions in the Heinz dilemma. This will further your ability to think about research questions and develop testable predictions.
Second, you will program a repeated measures experiment with three conditions. Programming the experiment will give you further knowledge of how to use Qualtrics and hands-on experience with the practice of counterbalancing measures.
Third, this project will extend your knowledge of statistical analyses. You will learn how to analyze within-subjects data and report the results.
Finally, as in other guided assignments, you will practice creating a figure to visualize the results. Overall, this project will expand your knowledge of experimental design while allowing you to practice some of the basic skills required for experimental research.
Part 1: Understanding the Repeated Measures Design
In this study, participants were presented with three versions of the Heinz dilemma that differed only in the consequences of the drug Heinz considered stealing. In one version, the drug would save his wife. In another version, the drug was described as easing pain and suffering. Finally, in the third version, the drug was experimental with uncertain benefits.
Because this was a repeated measures design, each participant saw all three versions of the scenario. This design allows us to examine how the same person's moral judgments changed as the consequences of stealing varied.
Developing Your Hypotheses
How do you think people will react to the different versions of the scenario? Develop a few hypotheses about whether you expect differences between conditions and which conditions you think will lead to the most favorable ratings of Heinz's behavior.
Research Portfolio
Portfolio Entry #29: Making Predictions for a Repeated Measures Experiment
Write a paragraph in your portfolio explaining your predictions and the thinking behind them. Remember, participants see all conditions of the experiment, so you are predicting how the same person's judgments will change across different versions of the scenario.
Part 2: Research Design, Materials, and Methods
The key to this experiment is creating three versions of the dilemma that differ only in the consequences of stealing the drug. The basic scenario remains identical across versions—Heinz cannot afford the drug, the company will not lower the price, he considers stealing it. The only thing that varies is what the drug will do for his wife. Across conditions participants were told to imagine:
Life-Saving Version: "As a result of Heinz's behavior, his wife is cured and lives a long life."
Pain-Relief Version: "As a result of Heinz's behavior, his wife spends her last few weeks free of pain but the drug does not save her life."
Experimental Version: "As a result of Heinz's behavior, his wife will be able to take the experimental drug but its benefits are uncertain."
After reading each version, participants answered two questions: 1) Should Heinz steal the drug? (Yes/No), and 2) How morally acceptable would it be for Heinz to steal the drug? (1 = Not at all acceptable to 7 = Completely acceptable). These were the same dependent measures used in previous experiments.
Counterbalancing
Because this is a repeated measures design, each participant saw all three versions of the scenario. However, the order of the scenarios was randomly assigned to control for order effects.
Someone who saw the life-saving version first might judge subsequent versions more harshly by comparison. Or participants might become more accepting of stealing as they think repeatedly about Heinz's situation, regardless of the consequences. To avoid these and other order effects, we used the randomization tool to present the three versions in random order (Figure 7.7).
The randomization tool ensured that each version of the dilemma appeared equally often in the first, second, and third positions across all participants. The instructional video for this project shows you how to implement counterbalancing in Qualtrics.
Creating the Study
To see how this experiment was implemented in Qualtrics download the "RITC_SURVEY_CH07_HeinzWithinSubjects.qsf" file from the OSF page. It is in the folder labeled "Ch 7 – Experimental Research."
Once you upload this file into Qualtrics or Engage, look around. Notice how all participants read the same basic dilemma and were then asked to imagine each of the three outcomes before answering the dependent measures (Figure 7.6). Navigate to the survey flow and examine the randomizer. Notice how the three outcome blocks are presented evenly across participants. Once you are done reviewing the survey, it is time to get the data.
Part 3: Data Collection
We gathered data from 50 participants on Connect—half the number compared to the between-subjects design. We paid each person $0.50 for a ~3-minute study.
While the data collection itself was simple, preparing the file for analysis required one step that was not necessary in any study you have conducted until now: we had to export from Qualtrics the order in which each participant viewed the manipulations. Adding this information to the data file required just a few extra clicks while downloading the data, but without it we could not have created a variable in SPSS identifying the order each participant received while working through the study. The video that accompanies this assignment explains how to add the viewing order to a data file.
When you are ready to work with the data, find the folder labeled "Ch 7 – Experimental Research" on the OSF project page. Then, download the "RITC_DATA_CH07_WithinSubjects.sav" file.
Part 4: Analyzing What You Found
Within-subjects experiments require a different analytical approach than between-subjects designs because the same people respond across different conditions.
To analyze the data from a within-subjects experiment, it's common to use a repeated measures analysis of variance (ANOVA). The repeated measures ANOVA compares participants' responses across the different conditions to see if there are significant differences. HOW TO Box 7.3 provides step-by-step instructions for conducting this analysis, as does the instructional video online.
When examining the statistical output, you need to know what to look for and why.
First, examine the table labeled "Tests of Within-Subjects Effects" (Figure 7.8). Find the row with your factor name, in this case "Consequences", and check the significance value. In this dataset, you should see a p-value below .05. If p < .05, there is a significant overall effect, meaning that at least one condition differs from the others. You will also need the F statistic and degrees of freedom (df) from the "Sphericity Assumed" row when reporting the results.
To understand the differences between conditions, review the descriptive statistics table. This table provides the average response for each condition, allowing you to see the pattern of responses. To see which means are significantly different from one another, analyze the Pairwise Comparisons Table (Figure 7.9). This table shows which specific conditions differ from each other with a p < .05 indicating statistical significance. Note which conditions differ and which do not.
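The same analysis can also be run in Python with statsmodels, which is a handy way to check your SPSS output. The sketch below assumes the data have been exported to a CSV with one row per participant and one acceptability column per condition; the file name and column names are placeholders you would replace with your own.

```python
# Repeated measures ANOVA plus Bonferroni-corrected pairwise comparisons.
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

wide = pd.read_csv("heinz_within_subjects.csv")   # hypothetical CSV export of the .sav file
ratings = ["accept_lifesaving", "accept_painrelief", "accept_experimental"]  # placeholder names
wide["subject"] = wide.index

# Reshape from wide (one column per condition) to long (one row per rating)
long = wide.melt(id_vars="subject", value_vars=ratings,
                 var_name="consequences", value_name="acceptability")

# Within-subjects ANOVA: does acceptability differ across the three outcomes?
print(AnovaRM(long, depvar="acceptability", subject="subject",
              within=["consequences"]).fit())

# Pairwise comparisons with a Bonferroni correction (multiply each p by 3)
for a, b in combinations(ratings, 2):
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.2f}, p(Bonferroni) = {min(p * 3, 1):.3f}")
```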
Analyze Within-Subjects Experiment Data
Open the dataset
- Open SPSS and load the "RITC_DATA_CH07_HeinzWithinSubjects.sav" file
Run the Repeated Measures ANOVA
- Click on "Analyze" in the top menu
- Select "General Linear Model > Repeated Measures..."
- In the Define Factor dialog box:
- Type "Consequences" as the factor name
- Enter "3" for Number of Levels
- Click "Add" then "Define"
- Move the three acceptability variables (e.g., acceptable_SL) to the Within-Subjects Variables box
Request Estimated Marginal Means
- Click "Options" to open the Options dialog
- In the "Estimated Marginal Means" section:
- Select your factor "Consequences" and click "Add"
- Check "Compare main effects"
- Select "Bonferroni" from the dropdown for multiple comparisons
- Also check "Descriptive statistics" to get means and SDs
- Click "Continue" and then "OK" to run the analysis
Interpret the results
- First check the "Tests of Within-Subjects Effects" table for overall significance
- Then examine the "Pairwise Comparisons" table to see which specific conditions differ
- Use the mean differences and p-values to determine statistical significance
Creating a Figure
Your results should include a figure showing how moral judgments changed across the three conditions. HOW TO Box 7.4 provides instructions for creating this figure in SPSS. Include error bars and make sure the axes are clearly labeled. Your figure should look like Figure 7.10.
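If you prefer to build the figure in code, here is a matplotlib sketch that plots the condition means with standard-error bars. It reuses the hypothetical CSV export and placeholder column names from the ANOVA sketch above.

```python
# Bar chart of condition means with standard-error bars.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

wide = pd.read_csv("heinz_within_subjects.csv")   # hypothetical CSV export
ratings = ["accept_lifesaving", "accept_painrelief", "accept_experimental"]  # placeholder names
labels = ["Life-saving", "Pain relief", "Experimental"]

means = wide[ratings].mean()
sems = wide[ratings].std(ddof=1) / np.sqrt(len(wide))   # standard error of the mean

plt.bar(labels, means, yerr=sems, capsize=5)
plt.xlabel("Consequences of stealing the drug")
plt.ylabel("Moral acceptability (1-7)")
plt.title("Moral acceptability by drug outcome")
plt.savefig("within_subjects_figure.png", dpi=300)
```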
Create a Figure for Within-Subjects Results
Create a Bar Chart
- Click on "Graphs" in the top menu
- In the dropdown select the "Bar..." Option
- In the pop up, select "Simple" and under "Data in Chart Are" choose "Summaries of separate variables"
- Select "Define"
Set Up the Variables
- Drag all three variables to the "Bars Represent" box
- Make sure the variables represent MEANS
Add Error Bars
- Click on the "Options" button
- Select the "Display Error Bars" box
- Choose "Standard Error" and set the multiplier to "1"
- Click "Continue" then "OK" to add the error bars
Add Labels
- Double click on the Chart
- In the pop-up window, right-click → Show Data Labels
Results and Interpretation
After analyzing the data, interpret what the findings mean and write them up. Then, compare the results to your original hypotheses.
Reporting the Results of a Repeated Measures ANOVA
Here is an example of how to report the results of this analysis. You can use this as a template for your own results.
"A repeated measures ANOVA revealed that the drug's consequences significantly affected people's moral judgments, F(2, 104) = 7.65, p < .05.
A post-hoc analysis with Bonferroni corrections indicated that participants found stealing more acceptable when the drug saved Heinz's wife (M = 4.57, SD = 2.30) compared to when it eased her pain (M = 3.98, SD = 2.28), p < .01. They also judged stealing more acceptable when the drug was life-saving than when it was experimental (M = 4.19, SD = 2.21), p = .02. The difference between the pain-relief and experimental versions was not significant, p > .56."
Notice how this write-up includes both the overall F test from the ANOVA and the specific comparisons between conditions. The overall test information comes from the "Tests of Within Subjects Effects" table (Figure 7.8). The mean and standard deviation of each condition is taken from the descriptive statistics table, and the p values come from the Pairwise Comparisons table. With these statistics, you have the information necessary to make sense of the study.
Research Portfolio
Portfolio Entry #30: Reporting the Results of a Repeated-Measures Experiment: Heinz Dilemma and Different Drug Outcomes
Describe your original hypotheses. Then, report the results of the repeated measures ANOVA. Your results section should include the figure showing differences in condition means. After reporting the statistical results, explain what they mean in plain language. Are people more willing to justify breaking the law when the benefits are more certain? Consider how your results connect to broader questions about how people make moral decisions.
Beyond Simple Experiments: Factorial Designs
Investigate interactions by running a factorial experiment on perspective-taking and moral judgments.
Every experiment we have seen so far manipulated just one variable. Yet, in the real world, multiple factors often work together to influence people's thoughts, feelings, and behaviors. Factorial designs allow researchers to examine this kind of complexity. Instead of asking "Does X cause Y?" factorial designs ask questions like "Under what conditions does X cause Y?" or "For whom is the effect of X on Y strongest?"
Think about the finding from earlier showing that people express less support for abortion after considering God's perspective first. This effect probably is not the same for everyone. Someone who does not believe in God might be unaffected by the manipulation. But for a deeply religious person, considering God's perspective might have a big effect on the personal opinion they report.
To investigate this idea, a researcher would need to examine both variables together: thinking about God's views AND how religious people are. This could be as simple as adding a measure of religiosity to the experiment you viewed earlier. After the main task, participants could answer a few questions about their religious beliefs while completing other demographic information. Based on the responses, the researcher could classify each person as either high or low in religiosity.
Figure 7.11 shows what the results of this experiment might look like. The graph reveals several important patterns that illustrate why factorial designs are so important.
The most interesting pattern is how the effect of the God prime differs between highly religious and less religious participants. For highly religious people, thinking about God's views first reduces support for abortion—notice the difference between the blue and orange bars on the left side of the graph. But for less religious people, thinking about God's views has no effect—the bars on the right side are about the same.
This pattern is called an interaction. An interaction occurs when the effect of one variable (God prime) depends on the level of another variable (religiosity). This interaction indicates that thinking about God's views does not affect everyone equally. It decreases religious people's support for abortion but has no effect on less religious people's support. By using a factorial design such as this, researchers can specify for whom an effect occurs and how strongly.
Beyond the interaction, a factorial design also provides information about what are called main effects. A main effect is simply the effect one independent variable has on the dependent variable, ignoring the other independent variable in the design. We can see a main effect of religiosity in Figure 7.11 by noticing the overall difference between the left and right sides of the graph. Regardless of when they think about God's views (first or second), highly religious people express less support for abortion than less religious people. If you averaged the two bars on the left side of the graph and averaged the two bars on the right side, you would find that highly religious people support abortion less than less religious people. That is the main effect of religiosity.
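To see how a factorial design is analyzed in code, the sketch below simulates data that mimic the hypothetical pattern in Figure 7.11 (these are invented numbers, not real results) and then runs a two-way ANOVA, which reports both main effects and the interaction.

```python
# Two-way (2 x 2) ANOVA on simulated data: God prime x religiosity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n_per_cell = 50

# Hypothetical cell means: the prime lowers support only for highly religious
# participants (the interaction described in the text).
cell_means = {("god_first", "high"): 2.5, ("own_first", "high"): 3.5,
              ("god_first", "low"): 5.0, ("own_first", "low"): 5.0}

rows = []
for (prime, religiosity), mean in cell_means.items():
    for score in rng.normal(mean, 1.0, n_per_cell):
        rows.append({"prime": prime, "religiosity": religiosity, "support": score})
df = pd.DataFrame(rows)

# 'prime * religiosity' expands to both main effects plus their interaction
model = smf.ols("support ~ prime * religiosity", data=df).fit()
print(anova_lm(model, typ=2))   # F and p for each main effect and the interaction
```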
Guided Research Project: When Does Perspective-Taking Work?
In this guided project, we will examine how multiple factors work together to influence behavior using a factorial design. Once again, we will return to perspective-taking in the Heinz dilemma. But this time, we will investigate whether its effects depend on Heinz's economic circumstances. Does perspective-taking make people more sympathetic to Heinz's actions regardless of his wealth, or does his socio-economic status change how people respond to his situation?
This project will give you hands-on experience with the key elements of factorial design: manipulating multiple variables simultaneously and analyzing both main effects and interactions. As with previous projects, the accompanying video provides step-by-step instructions.
Project Goals
This project has several objectives.
First, you will form hypotheses regarding the interaction between two independent variables and their effects on participants' responses. Second, you will design and implement a 2 × 2 factorial experiment using Qualtrics. Third, this project will broaden your statistical expertise; you will learn how to conduct and interpret analyses appropriate for factorial designs, including assessing main effects and interactions between variables. Fourth, you will create a visualization that effectively communicates both main effects and interaction effects in your data. Overall, this project will expand your experimental design toolkit while reinforcing fundamental research skills, particularly in the context of multi-factor studies that allow for examining how variables may interact to influence outcomes.
Part 1: Understanding the Factorial Research Question
In the previous study, we saw that taking Heinz's perspective makes people more sympathetic to his actions. But does this effect work equally well in all situations? What if Heinz was wealthy enough to pay for the medication? Would taking his perspective still make people more understanding of his actions?
A factorial design can answer these questions. In addition to manipulating perspective-taking, it is possible to also manipulate Heinz's economic circumstances. By examining both perspective-taking and wealth simultaneously, we can ask several interesting questions: 1) Does perspective-taking increase support for Heinz's actions regardless of his wealth?, 2) Are people generally more accepting of stealing when someone is poor versus wealthy?, and 3) Does the effect of perspective-taking depend on Heinz's economic circumstances?
This last question addresses the interaction. Maybe perspective-taking is especially powerful when someone is poor but less effective when they have some wealth. Or perhaps perspective-taking helps people understand Heinz's choice regardless of his financial situation.
Research Portfolio
Portfolio Entry #31: Generate Hypotheses for a Factorial Experiment
To get started, generate some hypotheses about the main effects and interaction in this study and enter them in your portfolio. For each hypothesis, write a few sentences explaining why you expect those results.
Part 2: Research Design, Materials, and Methods
After you have developed hypotheses about how perspective-taking and wealth might influence moral judgments, let's examine how to test these predictions using a factorial design. The study requires four versions of the Heinz dilemma that combine the two manipulations.
The Experimental Manipulations
The factorial design requires manipulating two independent variables. This creates four distinct conditions that participants might be assigned to within the experiment (Figure 7.12).
The first variable is perspective-taking, which we manipulated through the following instructions.
Perspective-Taking Condition: "On the next page, you will be presented with a brief scenario. Please read the scenario carefully. As you are reading, try to visualize clearly and vividly what the main character, Heinz, is thinking, feeling, and experiencing. Look at the world through his eyes and walk in his shoes."
Objective Condition: "On the next page, you will be presented with a brief scenario. Please read the scenario carefully. As you are reading the scenario, try to remain objective and emotionally detached. Try not to get caught up in what the main character, Heinz, might be thinking, feeling, and experiencing."
The second variable was Heinz's wealth. We manipulated relative wealth within the scenario by telling participants one of two things:
Poor Condition: "Despite working extra jobs and asking everyone they knew for help, Heinz and his wife had only managed to gather about half the money needed for the drug."
Wealthy Condition: "Heinz and his wife had $200,000 in savings—enough to buy the drug, but using all their money would leave them nothing for her ongoing care and treatment."
Although the wealth condition did not represent truly lavish wealth, we wanted people to think Heinz had enough money that he did not need to steal the drug. When we combined the manipulations, it created the four possible conditions depicted in Figure 7.12.
Random Assignment
In a factorial design, random assignment means giving each participant an equal chance of experiencing any of the experimental conditions. After reading their assigned version of the dilemma, participants completed the dependent measure. In this study, we asked a single question: "How morally acceptable would it be for Heinz to steal the drug?" (1 = Not at all acceptable to 7 = Completely acceptable).
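Qualtrics handles this randomization for you, but the underlying logic is simple enough to sketch in a few lines of Python. The condition labels below mirror the four cells in Figure 7.12; the code itself is purely illustrative and is not part of the actual survey.

# Sketch of random assignment in a 2 x 2 factorial design: each participant
# independently receives one level of each factor, so all four cells are
# equally likely. Illustrative only; Qualtrics' randomizer does this in the survey.
import random
from collections import Counter

def assign_condition():
    perspective = random.choice(["perspective-taking", "objective"])
    wealth = random.choice(["poor", "wealthy"])
    return (perspective, wealth)

# Simulate assignments for 200 participants and check the cell counts.
counts = Counter(assign_condition() for _ in range(200))
print(counts)  # roughly 50 participants per cell, by chance alone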
Creating the Study
To see the factorial experiment in Qualtrics, download the "RITC_SURVEY_CH07_Factorial.qsf" file from the OSF page. It is in the folder labeled "Ch. 7 – Experimental Research."
Once you upload this file into Qualtrics or Engage, notice how the four versions of the scenario were created by combining the two manipulations. Then, examine how random assignment works with multiple variables. Do you notice anything different from past studies? After you review the survey, you are ready for the data.
Part 3: Data Collection
We gathered data from 200 participants on Connect, once again aiming for about 50 people per condition. We paid each person $0.75 for a ~5-minute study. To analyze the data, download the "RITC_DATA_CH07_HeinzFactorial.sav" file from the OSF page.
As described in the accompanying video, after downloading the data we created a variable identifying each condition. These variables are included in the data file, and you will use them in the analysis.
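If you are curious what this step looks like outside of SPSS, here is a rough Python sketch of building a combined condition label from the two factors. The variable names (Perspective, Wealth) follow the SPSS instructions later in this module, but the exact names and level labels in the .sav file may differ, so treat them as assumptions.

# Rough sketch of deriving a combined condition label from the two factors.
# Variable names are assumptions; in the provided .sav file the condition
# variables already exist, so this is for illustration only.
import pandas as pd

df = pd.read_spss("RITC_DATA_CH07_HeinzFactorial.sav")
df["Condition"] = df["Perspective"].astype(str) + " / " + df["Wealth"].astype(str)
print(df["Condition"].value_counts())  # about 50 participants per cell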
Part 4: Analyzing What You Found
In factorial experiments, the statistical analysis needs to examine how each variable affects responses independently (main effects) and how they work together (interactions). Both HOW TO Box 7.5 and the instructional video provide instructions for how to conduct the analysis.
Testing Perspective-Taking and Wealth
Generate Research Hypotheses
- Develop hypotheses about perspective-taking's effect when Heinz is wealthy vs. poor
- Consider three types of predictions:
  - Main effect of perspective-taking (more favorable moral judgments overall)
  - Main effect of wealth (wealthy Heinz judged more harshly)
  - Interaction effect (perspective-taking might help poor Heinz more)
- Write a paragraph explaining your predictions with theoretical rationale
Analyze the Data
- Download the "RITC_DATA_CH07_HeinzFactorial.sav" file from the OSF page
- Run a Two-Way ANOVA in SPSS:
  - Click "Analyze" → "General Linear Model" → "Univariate"
  - Move "Acceptability" to the Dependent Variable box
  - Move both "Perspective" and "Wealth" to the Fixed Factors box
  - In Options, select "Descriptive statistics" and "Estimates of effect size"
  - Click OK to run the analysis (a code-based version of this analysis is sketched after this list)
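If you would like to verify the SPSS output with code, here is a minimal sketch of the same two-way ANOVA in Python using statsmodels. The variable names match the SPSS steps above; the exact names in the .sav file may differ slightly, so treat them as assumptions.

# Minimal sketch of the 2 x 2 between-subjects ANOVA in Python (statsmodels).
# Variable names follow the SPSS steps above and are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_spss("RITC_DATA_CH07_HeinzFactorial.sav")

# C(...) treats each variable as categorical; the * crosses them, which
# estimates both main effects and their interaction.
model = smf.ols("Acceptability ~ C(Perspective) * C(Wealth)", data=df).fit()

# Type II sums of squares; with roughly equal cell sizes the F and p values
# closely match SPSS's default output.
print(anova_lm(model, typ=2))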
Analyzing Factorial Effects
The main analysis for factorial designs uses Analysis of Variance (ANOVA), which examines three pieces of information.
First, it tests for a main effect of perspective-taking: whether people judged Heinz's actions differently when taking his perspective versus remaining objective, ignoring both wealth conditions. Second, it tests for a main effect of wealth: whether people judge stealing differently when Heinz is poor versus wealthy, ignoring both perspective-taking conditions. Finally, and most importantly, it tests for an interaction between the variables: whether the effect of perspective-taking differs depending on Heinz's wealth.
The ANOVA provides F-statistics and p-values for each of these effects. A significant result (p < .05) for a main effect tells you that the variable affected judgments on its own. A significant interaction tells you that the variables work together in more complex ways. As shown in Figure 7.13, the main effect of perspective-taking was not significant (p > .05), but the main effect of wealth and the interaction both were (p < .05). A look at the means for the two wealth conditions shows that people found Heinz's behavior more acceptable when he was poor than when he was relatively wealthy. But the interaction was the more interesting finding.
To understand the interaction, we can examine what are called simple effects tests. These tests tell us whether one independent variable produces a difference in people's ratings of acceptability at just one level of the other independent variable. For example, does perspective-taking matter when Heinz is described as poor?
In the SPSS output, simple effects tests are labeled "pairwise comparisons" (Figure 7.14). As with the other tests we have seen, a significant result is indicated by a p < .05. From the table, we see that when people remained objective (emotionally detached), they judged wealthy Heinz much more harshly than poor Heinz. But when people took Heinz's perspective, his wealth didn't matter as much; people were more understanding of his actions regardless of his financial situation. In other words, perspective-taking had a big impact on how people viewed wealthy Heinz (making them more sympathetic), but it didn't change their judgments of poor Heinz very much (they were already fairly sympathetic to him).
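You can approximate these simple effects outside of SPSS by comparing the wealth conditions separately within each perspective condition. The sketch below uses independent-samples t-tests, which rely on each subset's own error term rather than the pooled ANOVA error, so it approximates rather than reproduces the SPSS pairwise comparisons; the variable names and level labels are assumptions.

# Sketch of simple effects: compare poor vs. wealthy Heinz separately within
# each perspective condition. Variable names and level labels are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_spss("RITC_DATA_CH07_HeinzFactorial.sav")

for level in df["Perspective"].unique():
    subset = df[df["Perspective"] == level]
    poor = subset.loc[subset["Wealth"] == "Poor", "Acceptability"]
    wealthy = subset.loc[subset["Wealth"] == "Wealthy", "Acceptability"]
    t, p = stats.ttest_ind(poor, wealthy)
    print(f"Effect of wealth when {level}: t = {t:.2f}, p = {p:.3f}")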
This interaction suggests something interesting about human psychology. When we stay emotionally detached, we use simple rules like "wealthy people shouldn't steal because they have other options." But when we try to understand someone's perspective, we look beyond surface characteristics like wealth and consider their emotional experience. We better understand the desperation, fear, and love that might drive someone to break the law to save their spouse.
Creating a Figure
The results of a factorial design are typically displayed in either a line or bar graph. These graphs allow researchers to show how the effect of one variable changes across levels of another. The hypothetical results in Figure 7.11 offer one example; Figure 7.15 offers another, showing how wealth and perspective-taking affect judgments in the Heinz dilemma.
Using the instructions in HOW TO Box 7.5, create a figure for the Heinz dilemma. Your figure should show moral acceptability ratings on the y-axis (vertical) and perspective on the x-axis (horizontal). There should also be separate lines for the poor versus wealthy conditions. The video that accompanies this project also demonstrates how to create these visualizations and interpret patterns of main effects and interactions.
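If you prefer to build the figure in code rather than in SPSS, here is a rough matplotlib sketch of the interaction plot described above, with perspective condition on the x-axis and one line per wealth condition. As in the earlier sketches, the variable names and level labels are assumptions.

# Rough sketch of an interaction plot: perspective condition on the x-axis,
# one line per wealth condition, with standard-error bars.
# Variable names and level labels are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_spss("RITC_DATA_CH07_HeinzFactorial.sav")

# Mean and standard error of acceptability for each of the four cells.
summary = df.groupby(["Perspective", "Wealth"])["Acceptability"].agg(["mean", "sem"])

for wealth_level, sub in summary.groupby(level="Wealth"):
    sub = sub.droplevel("Wealth")
    plt.errorbar(list(sub.index.astype(str)), sub["mean"], yerr=sub["sem"],
                 marker="o", capsize=4, label=str(wealth_level))

plt.xlabel("Perspective condition")
plt.ylabel("Moral acceptability (1-7)")
plt.legend(title="Wealth condition")
plt.tight_layout()
plt.show()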
Results and Interpretation
Now that you have analyzed the data, let's interpret what it means.
Start by comparing your results to your hypotheses. Did perspective-taking affect moral judgments overall? What about Heinz's level of wealth? More importantly, did these factors interact? Did perspective-taking have different effects depending on Heinz's economic circumstances?
When describing factorial results, researchers typically start with the main effects and then move to any interactions. Here's an example:
"A 2 × 2 ANOVA revealed a significant main effect of wealth, F(1, 196) = 9.80, p = .002. People rated stealing as more acceptable when Heinz was poor (M = 4.68, SD = 1.95) than when he was wealthy (M = 3.82, SD = 1.95). The main effect of perspective-taking was not significant, F(1, 196) = 2.09, p = .149. However, there was a significant interaction between wealth and perspective-taking, F(1, 196) = 6.63, p = .011."
After reporting that the interaction was significant, it is helpful to break down the effect of one variable at each level of the other. In this study, for instance, wealth affected people's judgments differently depending on whether they were taking Heinz's perspective or remaining objective.
"Among participants who remained objective, Heinz's behavior was judged much less acceptable when he was wealthy compared to when he was described as poor. Yet when participants engaged in perspective-taking, this difference disappeared. This suggests that perspective-taking increases people's understanding of Heinz's situation, regardless of his financial circumstances.
Remember that interactions can be complicated. Use clear language and concrete examples to help readers understand how your variables work together to influence moral judgments.
Research Portfolio
Portfolio Entry #32: Report the Results of a Factorial Experiment
Report the results of the experiment, using the template above as a guide. Your results section should include a figure showing the interaction, such as the line or bar graph you created following HOW TO Box 7.5, with perspective condition on the x-axis and separate lines for the poor and wealthy conditions. Make sure to include error bars and clear labels.
After reporting the statistical results, explain what they mean for understanding moral judgment, using the clear language and concrete examples recommended above so readers can follow how your variables work together.
Consider what questions your results raise for future research: are there other factors that might moderate the effects of perspective-taking on moral judgments?
Your Turn: Exploring Moderators of Perspective-Taking
Now that you have worked through a factorial design, you can explore how other factors might influence when perspective-taking works best. The 2 × 2 design you just used can be adapted to examine many different moderators—variables that strengthen, weaken, or even eliminate the effect of perspective-taking.
Consider which aspects of Heinz's situation might affect how people respond to perspective-taking. For instance, you might vary the relationship between Heinz and the person he's helping. Would perspective-taking have the same effect if Heinz was stealing medicine not for his wife (as in our example), but for a friend or a stranger? Do his actions perhaps become less understandable, and therefore less acceptable, as the distance between him and the person he's helping grows?
Using the factorial design, you could randomly assign participants to either take Heinz's perspective or remain objective, and to read about helping different targets. This design would allow you to examine whether perspective-taking is equally effective at increasing sympathy regardless of relationship, or whether it works better for some relationships than others.
Alternatively, you might manipulate information about the company selling the drug. Would perspective-taking increase people's acceptance of Heinz's behavior if the company was charging a reasonable markup for the drug versus price-gouging during a health crisis? Or what if people learned the company had spent hundreds of millions of dollars developing the drug versus acquiring the patent from another company? Does perspective-taking work regardless of the reason the drug costs so much?
Whatever you decide to manipulate, you can use the Qualtrics survey from our example as a template. To conduct your own experiment, all you need to do is replace the wealth manipulation with a variable of your choice while keeping the perspective-taking manipulation the same. Remember that factorial designs require larger samples, so aim for about 50 participants per condition; in a 2 × 2 design, that means gathering data from about 200 people.
Summary
Throughout this chapter, we explored the powerful methodology of experimental research—the method that allows behavioral scientists to establish cause-and-effect relationships. Experiments can establish causality thanks to three essential elements: manipulation of an independent variable, random assignment of participants to conditions, and measurement of a dependent variable. We learned that random assignment is the "magic" of experimentation. It ensures all potential third variables are equally distributed across conditions, allowing researchers to isolate the causal effect of their manipulation.
We have also explored several experimental approaches, beginning with a simple between-subjects design where different participants experience different conditions, such as in the God's views study. Then, we examined within-subjects designs, where the same participants experience all conditions in the study. These designs offer advantages like greater statistical power with fewer participants since each person serves as their own control, but they also present challenges like order effects that must be addressed through counterbalancing. The Heinz dilemma with different consequences (life-saving, pain relief, experimental drug) demonstrated this approach.
Finally, we examined factorial designs that manipulate multiple variables simultaneously, allowing researchers to examine both main effects and interactions. Each of these designs helps behavioral scientists understand cause and effect relationships.
Through guided research projects, we gained hands-on experience implementing experimental manipulations like perspective-taking, randomly assigning participants to conditions using Qualtrics, analyzing experimental data with appropriate statistical tests (t-tests, chi-square, ANOVA), and interpreting and visualizing results from different experimental designs. These practical applications reinforced our understanding of experimental methodology.
As important as the hands-on activities are, don't forget the conceptual information we learned in this chapter. In studying experiments, we learned how researchers address the directionality and third-variable problems that limit correlational research. Experimental design can be remarkably simple yet powerful, as demonstrated by the God's views study that changed only the order of two questions.
By mastering experimental techniques, we now have the tools to design studies that can move beyond describing relationships to understanding what causes what in human behavior—the ultimate goal of behavioral research. The ability to establish causality will serve as a foundation for your future work, whether you are conducting your own research or evaluating the claims made by others.
Frequently Asked Questions
What is an experiment in behavioral science?
An experiment is a research method that establishes cause-and-effect relationships through three essential elements: manipulation of an independent variable, random assignment of participants to conditions, and measurement of a dependent variable. Random assignment ensures all potential third variables are equally distributed across conditions, allowing researchers to isolate the causal effect of their manipulation.
What is random assignment and why is it important?
Random assignment is like flipping a coin to determine which condition a participant experiences. It is essential because it ensures that any pre-existing differences in characteristics like age, education, or attitudes are equally distributed across conditions. This solves the third-variable problem by making all potential confounding variables constants across conditions, so any observed differences must be caused by the experimental manipulation.
What is the difference between between-subjects and within-subjects designs?
In a between-subjects design, different participants experience different conditions. In a within-subjects (repeated measures) design, each participant experiences all conditions in the study. Within-subjects designs require fewer participants since each person serves as their own control, but they must use counterbalancing to control for order effects.
What is a factorial design and what are interactions?
A factorial design manipulates multiple independent variables simultaneously, allowing researchers to examine both main effects and interactions. An interaction occurs when the effect of one variable depends on the level of another variable, revealing when and for whom effects are strongest. For example, perspective-taking might soften harsh judgments of a wealthy person but have little effect on judgments of a poor person.
Key Takeaways
- Experiments establish cause-and-effect relationships through manipulation of an independent variable, random assignment of participants to conditions, and measurement of a dependent variable.
- Random assignment ensures all potential third variables are equally distributed across conditions, effectively making them constants rather than confounds.
- Control groups provide a baseline for comparison, allowing researchers to isolate the effect of the manipulation.
- The placebo effect demonstrates why control groups are essential—people may improve simply because they believe they are receiving treatment.
- Replication increases confidence in findings by demonstrating effects across multiple studies.
- Double-blind designs prevent expectations from influencing results by keeping both participants and researchers unaware of condition assignments.
- Between-subjects designs assign different participants to different conditions, avoiding carryover effects but requiring larger samples.
- Within-subjects designs (repeated measures) have each participant experience all conditions, requiring fewer participants but necessitating counterbalancing to control order effects.
- Counterbalancing randomly varies the order of conditions across participants to ensure order effects cannot explain results.
- Factorial designs manipulate multiple independent variables simultaneously, allowing researchers to examine both main effects and interactions.
- Main effects describe the effect of one independent variable while ignoring other variables in the design.
- Interactions occur when the effect of one variable depends on the level of another variable, revealing when and for whom effects are strongest.