Chapter 15

Ethics and the Participant Experience

Conducting ethical and respectful online research

Aaron J. Moss, PhD, Leib Litman, PhD, & Jonathan Robinson, PhD ~35 min read

Introduction

Participants sometimes contact researchers or the platform they are using to report a problem with a study. In 2023, the CloudResearch team received one of these messages. The person was upset because they had been rejected from a study. The rejection meant the participant would not be paid for their time. They felt the rejection was unwarranted, so they contacted us to voice their frustration.

Our team quickly replied. We informed the participant that CloudResearch would contact the researcher, ask them to double-check their records, and ask them to explain why the rejection was issued. Within minutes of receiving our reply, the participant wrote back:

"I feel better now, no matter what the outcome is! Thank you so much for listening to me and you have a great rest of the day...and THANK YOU SO MUCH for the nice reply."

Notice, nothing changed. Our email didn't overturn the rejection, nor did it promise payment. CloudResearch didn't take any sides, and the participant didn't instantly receive an explanation for the rejection. All those things came later. So, why did the participant feel better? We think it's because someone listened.

Stories like this are a reminder that behind every online data point there is a person. The people who take part in online studies invest their time and emotions into the work. When researchers treat participants with care, everyone benefits—scientists obtain quality data and participants feel valued for their contributions. But when the relationship between researcher and participant breaks down, participants feel dismissed and disrespected. In the worst cases, they lose their time, their money, or faith in the research process.

This chapter examines the ethical considerations relevant to conducting online research. Module 15.1 begins by examining the historical context that shaped current ethical guidelines, focusing on the Belmont Report and its three core principles: Respect for Persons, Beneficence, and Justice. It then illustrates how these principles translate into practical applications such as informed consent, risk/benefit analysis, and fair participant selection, specifically in online contexts. Finally, the module turns to the day-to-day conduct of online research, addressing practical questions such as how to set fair compensation, handle participant rejections responsibly, protect participant privacy and data confidentiality, and communicate clearly.

After discussing the basics of research ethics, Module 15.2 provides a guide for preparing a research ethics proposal. This module contains templated answers to common IRB questions and is intended to help anyone filling out an IRB application for online studies. Ultimately, this chapter aims to help you conduct online research that is not only scientifically rigorous but also respectful and considerate of the participants who make research possible.

Chapter Outline

Module 15.1

The Basis of Ethical Research

Explore the background of ethics in human subjects research

All colleges, universities, and research hospitals in the United States that accept money from the federal government are required by law to have an institutional review board (IRB) that ensures the ethical conduct of research involving human participants.

An IRB, or ethics board, is a committee of people who review research proposals to make sure studies meet ethical guidelines. Before a researcher can conduct a study, they must describe to the IRB their methods, strategy for recruiting participants, any potential risks to participants, and how risks will be mitigated. The IRB reviews the proposal and either approves the study, requests modifications, or rejects the proposal if the risks to participants are unacceptable.

The criteria IRBs use to evaluate research stem from the Belmont Report, a foundational document in research ethics created in the 1970s by a diverse commission of scientists, ethicists, and citizens. The Belmont Report was created in response to significant ethical failures in research such as the Tuskegee Syphilis Study, in which the U.S. Public Health Service observed Black men with syphilis for several decades, deliberately withholding effective treatments like penicillin. This unethical conduct led to preventable suffering, the continued spread of the disease, and loss of life. Such events underscored the need for formal ethical guidelines to protect research participants.

The Belmont Report established three core principles for human subjects research: respect for persons, beneficence, and justice. These three principles are translated into applied research practices, as illustrated in Figure 15.1.

For example, the ethical principle of respect for persons means that people should be treated as intelligent beings who can decide if they want to participate in a research project or not. A practical application of this principle is informed consent. Informed consent is the process of telling participants what they will be asked to do in a study, what the risks and benefits of participation are, and asking if they agree to participate.

[Image: Ethical Research Framework diagram showing how the Belmont Report's three core principles (Respect for Persons, Beneficence, and Justice) translate into key applications such as informed consent, systematic assessment of risks and benefits, and fair procedures, with practical requirements for online research listed below each principle.]
Figure 15.1. A depiction of how the core principles of the Belmont Report get applied in specific parts of the research process.

A second principle from the Belmont Report is beneficence. Beneficence involves protecting participants' well-being by weighing the risks of a study against its potential benefits. This is one of the main things IRBs consider when evaluating research. In online research, this principle includes protecting participants' data and privacy.

The third principle from the Belmont Report concerns justice, which calls for fairness in the selection of research participants and in the distribution of research benefits.

While the Belmont Report's principles are foundational across research, it is important to recognize that most contemporary online studies present minimal risk to participants. From the perspective of beneficence, these minimal risks are typically justified by the benefits in a risk/benefit analysis. For this reason, the overwhelming majority of online studies qualify for exempt review—the lowest level of scrutiny an IRB directs toward research projects. At the same time, some aspects of online research can be distressing to participants.

In this chapter we will explore how the ethical principles of research are applied in online studies. Our discussion will begin with what is currently understood about the risk of harm in online studies.

Risk of Harm in Online Research Studies

The principle of beneficence is a cornerstone of the Belmont Report. It obliges researchers to secure the well-being of participants by maximizing potential benefits and minimizing potential harms. This requires a careful assessment of the risks involved in any research study. The first step in this assessment is to identify potential sources of harm.

What might harm participants in the online research environment, and what level of risk do those harms pose? Historically, most online research has been considered to involve minimal risk (e.g., Kraut et al., 2004). Most online studies consist of questionnaires, vignettes, or tasks measuring cognitive processes like memory, attention, and reaction time. Engaging with such stimuli generally does not provoke more stress than what someone might encounter in everyday life, which is often the benchmark for evaluating harm in behavioral research.

Large studies conducted on platforms like Mechanical Turk have asked participants whether participating in online studies is more stressful than everyday life (Litman & Moss, 2020; Moss et al., 2023). One study with over 10,000 MTurk participants who had completed hundreds of thousands of studies found that just 4% of people indicated that participating in online studies was more distressing than daily experiences. Even among this 4%, most (75%) said that the benefits of participating in research outweighed the risks. These findings suggest that online research generally poses minimal risk for participants. Moreover, because these participants had collectively completed hundreds of thousands of studies conducted by thousands of different researchers, the findings suggest that the typical online study is benign.

At the same time, it is important to understand why a small percentage of participants reported that research was sometimes distressing. By understanding these participants and their experiences, researchers may be able to improve the experience of all participants.

To understand what people find distressing about research, the same study followed up with the ~4% of participants who reported that online studies were more distressing than daily life. It asked them to describe the sources of their distress. The answers fell into five categories: 1) unclear instructions, 2) unfair rejections, 3) uncommunicative researchers, 4) competition for studies, and 5) upsetting or inappropriate study materials. Table 15.1 lists these categories along with relevant quotes from participants.

In the next several sections, we will explore how researchers can structure studies to minimize each source of participant distress, starting with research on sensitive topics.

Table 15.1. Examples of challenges participants face in online research.

Fear of Rejections
"Avoiding rejections, getting rejections regardless of honest/good quality work."
"Finding requesters that don't reject for a simple mistake."
"Knowing you didn't miss an attention check, but getting a rejection anyway."

Unfair Treatment
"Most researchers are nice people, but some will just not respond or threaten people with blocks or rejections. Some researchers will lie about their pay rate."

Sensitive Study Content
"Occasionally I'm asked to watch and comment on a video that is upsetting. I find some of the videos upsetting for personal reasons, and I'm even more upset that they don't contain a content disclaimer before I try them."
"Sometimes I have to look at and react to graphic pictures."

Unclear Instructions
"Unclear instructions."

Intense Competition
"MTurk has hired so many workers that it takes as much time to find work as it does to actually do it. HITs literally disappear in milliseconds rather than hours."
"Finding surveys to do is nearly impossible due to intense competition from others wanting to grab the same surveys. So in the end you end up with less to do unless you catch it first."
"Good surveys being posted one at a time and disappearing instantly; not being able to make a living wage; underpaid work."

Research on Sensitive Topics Online

One area of concern within the risk/benefit assessment that IRBs conduct involves research on sensitive topics, such as trauma, sexual abuse, or other potentially upsetting experiences. A common assumption is that asking participants about such topics might increase the risk of harm or even retraumatize people, especially in an online setting where researchers cannot provide immediate support.

However, research, including studies conducted offline, suggests that for most people, participating in research that touches on sensitive or traumatic experiences is not significantly more distressing than everyday life (Newman & Kaloupek, 2004; Newman & Kaloupek, 2009; Cromer & Newman, 2011). Studies with survivors of trauma, for instance, show that people generally feel the benefits of the research outweigh the discomfort, often expressing willingness to participate even if they had foreknowledge of the study's content (Griffin et al., 2003). Indeed, as a literature review of this topic concluded, "the risks associated with asking about abuse are overstated and inconsistent with actual data" (Gleaves et al., 2007).

Nevertheless, when online participants report distress related to study content, it often involves exposure to graphic material such as violence, gore, sexually explicit content, or painful personal experiences. Such content is particularly upsetting when it is presented without adequate warning. A significant portion of participants who report distress from such content also indicate that clear warnings beforehand would mitigate their negative experience.

To conduct research on sensitive topics ethically and effectively in online environments, and thereby uphold the principle of beneficence, researchers should consider the four best practices outlined below, each aligned with the principles of ethical research.

Clearly communicate the nature of the study, both in and beyond the consent form. Provide participants with clear information about potentially disturbing content before they begin the study. This allows participants to make informed decisions about their participation. The notice should be included in the consent document, but also in the study description on the recruitment platform, because most people don't read consent forms. For example, researchers in one study embedded the phrase "some researchers wear yellow pants" within their consent form, and most participants did not notice it was there (Douglas et al., 2020). That study occurred in person in a laboratory, so the odds that people read online consent forms are likely even lower.

Allow participants to skip sensitive content. Letting participants skip sensitive questions is an easy and practical way to mitigate the possibility that some people will be upset at being forced to answer certain questions. Our experience shows that few participants use this option, but the mere presence of an option to skip questions reduces people's stress without compromising data quality.

Conduct thorough pilot testing. Testing study materials and gathering feedback from participants allows researchers to identify and address potentially distressing issues before conducting a full study.

Provide access to support. If a study may upset some participants, researchers can provide clear directions to support resources. While these resources are rarely needed, having them available fulfills the researcher's obligation to look out for participants' welfare.

Fair Payment

Some of the most common sources of frustration for participants are the fear of having studies rejected unfairly and low pay rates. For instance, participants often report that their work is rejected, and payment withheld, for a simple mistake, without clear reasons, or even when they believe they have provided quality work. Participants also report that they perceive some researchers to be dishonest, most commonly regarding stated pay rates.

How Much Should a Study Pay? The question of fair compensation has generated more debate than any other aspect of online research ethics (see Moss et al., 2023). This debate centers on two issues: whether online research requires paying people minimum wage, and what constitutes fair payment when participation is voluntary.

A majority of people who participate in survey studies report that taking studies is something they do in their leisure time (Moss et al., 2023). An analysis of tens of millions of tasks completed on Mechanical Turk over nearly a decade shows most researchers offered baseline compensation of 12-13 cents per minute, equivalent to $7.25-$7.80 per hour (Moss et al., 2023). More recent data from the Connect platform indicate higher rates, averaging $10 per hour. Notably, participants themselves report that hourly wages near $11 are fair compensation (Moss, 2024).

Based on this evidence and practical considerations, we recommend a baseline rate of payment of at least $8.50 per hour for standard survey research. This rate acknowledges the budget constraints many researchers face while remaining within established norms. For context, a typical 10-minute study with 300 participants would cost approximately $530 including platform fees.
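The arithmetic behind that estimate is straightforward. Assuming a platform fee of roughly 25% (fees vary by platform, so the exact percentage here is our assumption, chosen only for illustration):

\[
300 \times \left( \$8.50 \times \tfrac{10}{60} \right) \approx 300 \times \$1.42 = \$425, \qquad \$425 \times 1.25 \approx \$531
\]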

This baseline rate should be adjusted upward for studies that are longer, more complex, or require special participant engagement. Higher compensation for demanding tasks not only reflects greater participant effort but also tends to improve data quality and participant retention, particularly in intensive longitudinal studies (see Litman et al., 2020).

Rejecting Participants

Some people have argued that rejecting participants in online research is unethical (e.g., Damer, 2019; Milland, 2016; Semuels, 2018; Silberman et al., 2018). Others have disagreed (e.g., Agley et al., 2024; Litman & Robinson, 2020). How does this disagreement fit within the context of research ethics, and how can we know whether it is ethical to reject online participants or not?

The ethical principle most closely tied to rejecting participant submissions is autonomy, which falls under respect for persons. In upholding participant autonomy, researchers and IRBs have traditionally agreed that participants are free to skip questions they do not want to answer and that people can withdraw from a study at any time without penalty. Things become more complicated, however, when evaluating the quality of online data. Is it ethical to reject someone who misses multiple attention check questions? What about someone who gives random responses or copies and pastes text from the web? Where do researchers draw the line on quality?

Everyone agrees that participants should be treated with respect and that there must be some tolerance for mistakes (e.g., missing an occasional attention check). Not everyone agrees, however, that consenting to a study also commits participants to its goals. Yet as Edwards (2005) wrote, "Consent must conceptually involve some commitment to the project on which consent is based and, along with it, certain responsibilities to oneself and to others…There are at least minimal standards of decency and respect to uphold on both sides." In other words, autonomy does not imply that participants can act with indifference to others or the research.

Under this conception, participants who agree to participate in a study also agree to follow the instructions provided by the researcher and to abide by the rules of the platform where the study takes place. These rules generally preclude lying, cheating, misrepresenting oneself, responding randomly, and using AI or other forms of automation to complete the survey. When participants violate these rules, rejections are warranted.

In addition to concerns about individual behavior, there are collective reasons to issue rejections when participants violate the rules for research. Two of these reasons are the social interest in maintaining a source of high-quality data and protecting the integrity of complex (and therefore often high-paying) research studies (Agley et al., 2024; Litman & Robinson, 2020). As shown in Chapters 10 and 11, low-quality data has the potential to mislead not only researchers but also society in the form of false research findings. Because online research platforms are used by everyone from market research firms to non-profits and government agencies (see Chapters 1 and 2), there is value in ensuring the quality of data from online sources. When researchers fail to reject low-quality submissions, entire platforms can be spoiled, as the case of Mechanical Turk described in Chapter 10 demonstrates. To ensure rejections are issued fairly, it helps to use validated measures of data quality and to form a plan for evaluating participant responses, like those based on the advice presented in Chapter 11.

Protecting Privacy

Online research platforms offer participants anonymity. This anonymity is often an attractive feature for participants, and it increases people's willingness to take part in studies on sensitive topics. In most studies, researchers see only a random platform ID and otherwise anonymized participant responses.

Even though participant IDs are anonymous, it is best to treat them as if they were not. This might mean removing IDs from a datafile once the project is complete, instituting a policy of not sharing IDs between labs or colleagues, and ensuring that any files posted on a site like the Open Science Framework do not contain participant IDs. Sometimes the best policy is to create a separate file that pairs each participant ID with a random number or alphanumeric string. The main datafile then contains only the random strings, while the linking file, or "key," is stored on a separate computer, as sketched below.
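Here is a minimal sketch of that pseudonymization step, assuming pandas and a hypothetical column named participant_id (adapt the file and column names to your own project):

```python
import secrets
import pandas as pd

# Load the raw responses; "participant_id" is a hypothetical column name
# holding the platform-assigned ID.
df = pd.read_csv("raw_responses.csv")

# Build the linking "key": one random alphanumeric string per unique ID.
key = pd.DataFrame({"participant_id": df["participant_id"].unique()})
key["pseudonym"] = [secrets.token_hex(8) for _ in range(len(key))]

# Swap real IDs for pseudonyms in the main datafile, then drop the IDs.
df = df.merge(key, on="participant_id").drop(columns=["participant_id"])
df.to_csv("responses_pseudonymized.csv", index=False)

# Store the key separately (ideally on a different, secure machine) so the
# main datafile can be shared without exposing platform IDs.
key.to_csv("id_key.csv", index=False)
```

Deleting the key file once the project is complete severs the link entirely, which matches the practice of removing IDs when they are no longer needed.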

Other information that should be handled with similar care is participants' IP addresses and geolocation. Survey platforms often collect this information automatically, although the collection can be turned off. In the past, some IP addresses have been associated with fraud committed through virtual private networks (Dennis et al., 2020; Gautam et al., 2018). For that reason, tools exist to check whether a specific IP address has been associated with fraud (e.g., Waggoner et al., 2019). Although checking IP addresses can sometimes be helpful, it is not sufficient to protect data quality, nor is it recommended over the other practices outlined in Chapters 11 and 12. Like participant IDs, IP addresses and other data about participants' location should be removed from datasets that are shared with the public.

Finally, it is worth considering what to do when studies need to collect personally identifying information, such as email addresses, voice recordings, or video. As AI-powered survey tools emerge, voice and video recordings will likely become more common. Often, the best way to handle personally identifiable information is with a mix of the methods discussed so far. First, it is important to be transparent in the study description, consent form, and the survey itself about what data is being collected and how it will be used. Second, all data should be securely stored with identifying information stored separately from other responses whenever possible. Finally, the data should only be shared once it is de-identified, and researchers should consider deleting identifying information once it is no longer needed. These steps can protect participant privacy.

Other General Best Practices for Online Studies

The sections above address specific ethical issues. The following sections outline general practices that help researchers conduct more ethical and effective studies. Each section below translates the ethical principles we learned about earlier into effective research procedures.

Set Proper Expectations

Perhaps the most effective way to show respect for persons is to set proper expectations at the start of the study. This is done with transparent communication. As touched upon earlier, researchers should provide participants with sufficient information about the study before they commit their time and effort to the project.

Within the title, study description, and consent form, researchers should share what participants will be asked to do. This includes whether participants will be asked to download software, participate in follow-up surveys, interact with other participants, make audio or video recordings, view sensitive content, or share personal information. While studies using deception may limit these disclosures, researchers should still communicate details that might affect someone's willingness to engage with the project and, once participants know the study's full purpose, confirm that they want their data included in the analyses.

Pilot Test Materials

Pilot testing is an invaluable way to identify potential problems before they impact an entire study. Through pilot testing, researchers gain valuable insights into how participants interpret study materials and engage with experimental manipulations.

Beyond improving study materials, pilot testing provides practical benefits as well. A pilot study allows researchers to accurately gauge a study's duration, helping them set appropriate compensation rates. Technical issues such as broken links or malfunctioning survey elements can be identified and resolved. And, perhaps most valuably, pilot testing generates candid participant feedback about the research experience, enabling researchers to optimize their protocols before investing in full-scale data collection.

Before launching the full study, consider running it with a handful of people.

Use Demographic Screening Appropriately

Many online studies recruit a specific group of participants. Earlier in this book, we learned about different ways to conduct demographic screening. When using within-survey screeners across different participant recruitment sources (i.e., market research panels and researcher-centric platforms), there are two important things to remember.

First, participants should always be paid for screening questions. On sites like Connect, researchers can issue partial payments. On market research panels, compensation is handled by the panel, but participants receive something for completing screeners. Second, within-survey screening requires advance setup that should always be tested before launching the full survey. If there is a mistake in the skip logic or branching options used to qualify participants, researchers can waste time and money while exposing participants to a frustrating survey experience. A simple way to test the logic is sketched below.
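Survey platforms implement branching through their own interfaces, so the sketch below is not platform code; it is a minimal illustration, with hypothetical qualification criteria, of the routing logic worth exercising end to end before launch:

```python
def route_participant(age: int, owns_pet: bool) -> str:
    """Hypothetical screener: the study targets adult pet owners."""
    if age < 18:
        return "terminate"      # ineligible for the platform or study
    if not owns_pet:
        return "screen_out"     # eligible adult, but outside the target group
    return "main_survey"        # qualified; continue to the full study

# Exercise every branch so no participant lands in a broken path.
assert route_participant(17, True) == "terminate"
assert route_participant(45, False) == "screen_out"
assert route_participant(45, True) == "main_survey"
```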

Build Trust with Effective Communication

Clear communication demonstrates respect for participants and upholds the principle of justice. When researchers clearly communicate the study's requirements, what participants will be asked to do, when follow-up waves of longitudinal data will be collected, and other similar details, they not only show respect for participants but also establish trust. This trust can lead to better data collection.

Equally important is the communication that occurs when participants encounter problems. In these situations, clear communication can resolve technical issues, assuage participant frustrations, or explain why a rejection was warranted.

Regardless of the reason for communication, a clear and professional message can transform an impersonal transaction into a more meaningful one, as the example at the start of this chapter showed. Often, a brief note explaining a technical issue or acknowledging receipt of quality work is all it takes.

Issue Payments Promptly

After participants complete a project, researchers must decide who to approve and who to reject. During this time, participants wait. Waiting can be burdensome. Therefore, researchers should strive to review submissions and process payments as quickly as possible after a study is completed. Doing so shows respect for participants' contributions.

Module 15.2

Conducting Ethical Research: Writing an IRB Application

Examine a sample IRB application and learn how to complete your own research proposal

In Module 15.1, we learned about the ethical principles that guide research involving human participants. Now, we will transition to the practical application of these principles. We will learn how to write an IRB application.

Submitting an IRB proposal is a critical step for nearly every research project. This module aims to demystify the process. We will use an example from a real study, one that investigated the relationship between eating, sleep, and depression, to illustrate how to address the questions IRBs ask. By walking through how a researcher might answer these questions for a specific online study, we will gain a clearer understanding of how to prepare IRB applications. This understanding, and the answers to questions in this example, can later be applied to your own IRB proposals.

The Purpose of an IRB Review

Before we examine specific IRB questions, recall that the purpose of an IRB is to protect the rights and welfare of human research participants. The IRB reviews research applications to ensure projects align with the ethical principles of Respect for Persons, Beneficence, and Justice. They also ensure that research complies with federal law and relevant regulations.

Common Components of an IRB Proposal: An Example Study on Eating, Sleep, and Depression

Let's imagine we are preparing an IRB application for a study that aims to explore associations between when people eat meals, when they sleep, and symptoms of depression. The study will be conducted through an anonymous online survey on the Connect platform and administered on Qualtrics. We will look at key sections of an IRB application and discuss how to form responses to different questions an IRB might ask. Each of the questions and responses below come from an actual study that was approved by a university IRB.

Most IRB applications start by asking for general information about the project and the people conducting it. The application will often ask for a description of the study's aims, background, and significance. Following these questions, the IRB is likely to ask for a description of the research design and methodology, the recruitment methods that will be used, how informed consent will be gathered, what risks and benefits the research entails, and how researchers will ensure participants' privacy and confidentiality.

Research Design and Methodology

A typical IRB question will ask: "Please describe your research design (e.g., anonymous online survey, experiment, qualitative interviews). What procedures will participants undergo step-by-step? What instruments, questionnaires, or tasks will be used?"

The goal here is to give the IRB a clear picture of what a participant will do and experience from start to finish. It is best to be precise and provide all necessary study materials as appendices.

Sample response

"This study will employ a cross-sectional research design using an anonymous online survey. The survey will be created and hosted on the Qualtrics platform.

Procedures: Potential participants will be recruited from the CloudResearch Connect online participant recruitment platform. They will first see an advertisement for a study titled 'Survey on Behavior.' This advertisement will specify the study's approximate duration (18 minutes) and compensation ($2.50). Upon clicking to participate, people will be directed to the Qualtrics survey. The first page will be an informed consent document detailing the study's purpose, procedures, risks, benefits, confidentiality measures, voluntary nature of participation, right to withdraw, and researcher contact information.

Participants will indicate their consent by clicking a button to proceed. They will then complete several validated questionnaires assessing their sleep quality, chronotype (morningness/eveningness preference), eating timing, attitudes toward eating, binging behaviors, and depressive symptoms. Finally, participants will answer a set of standard demographic questions. No deception is involved in this study. Upon completion, participants will be provided with a completion code to receive their compensation via the Connect platform.

Instruments/Measures: The survey will include the following validated measures: the PROMIS Sleep Disturbance Short Form (Yu et al., 2012), the Munich Chronotype Questionnaire (Roenneberg et al., 2003), the Morningness-Eveningness Questionnaire (5-item version; Adan & Almiral, 1991), the Food Timing Screener (Chakradeo et al., 2022), the Eating Beliefs Questionnaire (EBQ-18; Burton et al., 2018), the Questionnaire on Eating and Weight Patterns-5 (QEWP-5; Yanovski et al., 2015), and the Center for Epidemiologic Studies Depression Scale-Revised (CES-D-R; Eaton et al., 2004). Demographic questions will include age, gender, ethnicity, and education level."

Subject Population

A typical IRB question will ask: "Who are your target participants? What are the inclusion and exclusion criteria? How many participants do you plan to recruit, and how did you determine this sample size? Will you be including any vulnerable populations?"

Answers to these questions should be specific about who can and cannot participate and why. They should also justify the sample size and clearly state if vulnerable populations are involved (and if so, what extra protections are in place). Online participants are not considered a vulnerable population except in rare circumstances, and it is okay to state that explicitly in the application.

Sample response

"The target participants for this study are adults aged 18 years and older who are registered members of the Connect online research platform.

Inclusion Criteria: Participants must be 18 years of age or older and able to read and understand English, as the survey is administered in English.

Exclusion Criteria: Individuals under the age of 18 will be excluded. This study will not specifically recruit or include known vulnerable populations such as prisoners, pregnant women, or people with known cognitive impairments that would preclude informed consent. The online platform (Connect) requires users to be 18 or older.

Sample Size: We aim to recruit 1,250 participants. This sample size was primarily determined by looking at past studies that have aimed to study relationships between the variables of interest."

Recruitment Methods

A typical IRB question will ask: "How will potential participants be identified and recruited? What will recruitment materials (e.g., flyers, online advertisements, email scripts) say?"

In response to this question, the IRB wants to see exactly what potential participants will see. Many IRB members may not be familiar with the specific platform being used, so it is helpful to provide details that make the recruitment process clear.

Sample response

"Participants will be recruited from Connect (connect.cloudresearch.com), an online platform where people voluntarily register to participate in research studies. Our study will be posted on the Connect participant dashboard, visible to registered users who meet basic eligibility (e.g., U.S. residents, 18+).

Recruitment Material: The study will be advertised on the Connect dashboard with the following information (it is helpful to include a screenshot of what this looks like, when possible):

[Image: A Connect dashboard study posting titled 'Survey on Behavior' offering $2.50 ($8.33 per hour) for an 18-minute study, with 100 spots available and a description mentioning eating habits, sleeping habits, and mood.]
Figure 15.2. A sample study posting on the Connect platform.

People who are interested in the study can then click to view more details and, if they choose, proceed to the informed consent page and the survey itself. No direct emails or personalized solicitations will be used for initial recruitment beyond the platform posting."

Informed Consent Process

A typical IRB question will ask: "How will informed consent be obtained and documented? (Attach the consent form/script, see below). How will you ensure participants understand the information? How will you ensure participation is voluntary?"

This section is important as it directly relates to the principle of Respect for Persons. The consent form is the primary document here. The IRB will want to see the entire form and often provides a template or specific sections of text they think should be included. (See the full consent form below.) The IRB's primary concern with consent is to be certain that participants are making a truly informed decision about whether to participate.

For most online studies, researchers request a waiver of documentation of informed consent. The reason for the waiver is simple: in online studies, participants are remote and anonymous. Requiring people to sign a consent form with their name would not only be impractical; it would also be the only thing identifying who participated in the study. For these reasons, most IRBs grant a waiver of documentation for the consent process.

Sample response

"Informed consent will be obtained via an online information sheet presented as the first page of the Qualtrics survey. Participants must indicate their consent to participate before proceeding to the survey questions. This is a 'clickwrap' consent method commonly used and accepted for minimal risk online research where a signed document is not feasible and could compromise anonymity. We are requesting a waiver of signed consent for this study."

Risks and Protections

A typical IRB question will ask: "What are all potential risks to participants (even if minimal)? How will these risks be minimized? How will participant privacy and data confidentiality be maintained?"

IRBs need assurance that researchers have thought through any potential negative impacts, however small, and how they will protect participants. For online surveys, psychological risks (like boredom or discomfort with questions) and privacy/confidentiality are typically minor but should still be explicitly stated on the IRB form.

Sample response

"Potential Risks: This study is considered to pose minimal risk to participants. Potential risks are no greater than those ordinarily encountered in daily life or during the completion of standard psychological questionnaires. These may include possible boredom or fatigue due to the length of the survey (approximately 18 minutes), minimal psychological discomfort when answering questions about mood, eating habits, or sleep patterns, as these can be personal topics.

Protections and Risk Mitigation:

Voluntary Participation & Withdrawal: The informed consent process will clearly state that participation is voluntary and participants can withdraw at any time without penalty by closing their browser window. They will also be informed they can skip any questions they prefer not to answer.

Anonymity/Confidentiality: No direct personally identifiable information (e.g., name, email address, specific date of birth) will be collected as part of the survey responses. Participants will be identified on the Connect platform via an anonymous alphanumeric string, which is used solely for awarding compensation and cannot be linked to their survey data by the researchers in any identifiable way in publicly shared datasets.

IP addresses collected by Qualtrics (if any, depending on settings) will be anonymized by the platform and removed from the dataset before any public sharing.

Data Storage: All electronic survey data will be stored on a password-protected computer accessible only to the Principal Investigator and the Faculty Advisor.

Data Retention and Sharing: The original de-identified dataset will be retained on the secure computer for a minimum of 3 years after study completion. After completion of the study and any publications, a modified, fully de-identified dataset, with any platform-specific alphanumeric identifiers and IP addresses removed, will be prepared and may be uploaded to the Open Science Framework (OSF), an online data repository, to promote scientific transparency and allow for secondary analysis by other researchers, in line with open science practices. No information that could reasonably identify an individual will be included in any publicly shared dataset.

Minimal Discomfort: Questions are standard and widely used. If any question causes discomfort, participants are free to skip it. The study does not involve deception or exposure to distressing stimuli beyond what might be encountered when reflecting on common life experiences.

Contact Information: Participants will be provided with contact information for the researchers and the IRB should they have any concerns or experience distress."

Benefits

A typical IRB question will ask: "What are the potential direct benefits to participants? What are the potential benefits to society or scientific knowledge?"

It's okay to be honest and realistic here. It is common for survey participants to receive no direct personal benefits beyond compensation. The best strategy is to focus on the scientific or societal value of the research.

Sample response

"Direct Benefits to Participants: Participants are not expected to receive any direct benefits from participating in this research, other than the monetary compensation provided for their time and effort. Some participants may find reflecting on their habits and mood to be interesting or informative, but this is not a guaranteed benefit.

Benefits to Society/Scientific Knowledge: The primary benefits of this study are scientific. The findings may contribute to a better understanding of the complex relationships between eating patterns, sleep quality, chronotype, and depressive symptoms. This knowledge could potentially inform the development of future public health recommendations or interventions aimed at improving mood and well-being by addressing modifiable lifestyle factors related to circadian rhythms."

Compensation/Payment

A typical IRB question may ask: "Will participants be compensated for their time and effort? If so, how much, in what form (e.g., money, gift card, course credit), and when will they receive it?"

In answering this question, clearly state the compensation. Explain that for online studies compensation is typically delivered via the platform. Justify the amount as fair for the estimated time, referencing platform norms if possible. In this section, it is important to distinguish between researcher-centric platforms like Connect where payment amounts are set by the researchers and market research panels where the researcher has no control over how much participants get paid. Below, we provide template answers for both types of data collection options.

Sample response for researcher-centric platforms:

"Yes, participants will be compensated for their participation. Upon completion of the 18-minute survey, participants will receive $2.50. This payment will be processed through the Connect online research platform directly to their Connect account. This compensation rate is approximately $8.33 per hour, which is consistent with current payment norms for similar online survey studies conducted on this platform. The payment is for completing the study; partial payment is not offered if the study is not completed, though participants can withdraw at any time."

Sample response for market research platforms:

"Yes, participants will be compensated for their participation. Upon completion of the 18-minute survey, participants will receive compensation in the amount that they have agreed to with the platform through which they enter the survey. While using this platform, we are not able to set the compensation rate. Most participants receive gift cards, rewards points, or cash equivalent to a few dollars or less. The payment will be commensurate with norms on the platform and for completing the entire study; partial payment is not offered if the study is not completed, though participants can withdraw at any time."

Below is a typical consent form that is included on the first page of the Qualtrics study.

This will appear at the start of the survey. Respondents will not be able to continue with the survey unless they click the arrow, indicating that they have read and understood the information.

Summary

In this chapter we explored the practical application of research ethics in online studies. All human subjects research rests on the three core principles established by the Belmont Report—respect for persons, beneficence, and justice—but online studies face special challenges in upholding them. While most online studies present minimal risk to participants, researchers must carefully consider how these principles translate into concrete practices.

When participants report distress in online studies, it typically stems from the process of finding studies and interacting with researchers more than from the content of the studies themselves. To improve participants' experience and uphold the practices of ethical research, researchers can design user-friendly studies, communicate clearly with participants, ensure data privacy and security, pay fairly, and offer access to support resources when necessary.

Before conducting an online study, the IRB application process serves as a crucial checkpoint for ensuring these ethical principles are embedded in research. By systematically addressing questions about methodology, participant protection, risks and benefits, and data management, researchers demonstrate their commitment to ethical conduct while creating a roadmap for responsible research.

In the next chapter, we will examine how to write about online research. In particular, we will describe how to write a Method section that adequately describes your source of participants and sampling strategy. Then, we will provide some general advice about writing scientific papers.

Frequently Asked Questions

What are the three core principles of the Belmont Report?

The Belmont Report established three core principles for human subjects research: Respect for Persons (treating people as intelligent beings who can decide if they want to participate), Beneficence (protecting participants' well-being by weighing risks against potential benefits), and Justice (fairness in participant selection and distribution of research benefits).

What percentage of online research participants find studies more distressing than daily life?

Research with over 10,000 MTurk participants found that just 4% indicated participating in online studies was more distressing than daily experiences. Even among this 4%, most (75%) said the benefits of participating in research outweighed the risks, suggesting online research generally poses minimal risk for participants.

What is a fair payment rate for online survey research?

Based on research evidence and practical considerations, a baseline rate of at least $8.50 per hour is recommended for standard survey research. Participants themselves report that hourly wages near $11 are fair compensation. This rate should be adjusted upward for longer, more complex studies or those requiring special participant engagement.

What are the main sources of distress for online research participants?

When participants report distress, it typically falls into five categories: unclear instructions, unfair rejections, uncommunicative researchers, intense competition for studies, and upsetting or inappropriate study materials presented without adequate warning. Most distress stems from interactions with researchers rather than study content itself.

Is it ethical to reject participants in online research?

Rejecting participants can be ethical when they violate research rules such as lying, cheating, misrepresenting themselves, responding randomly, or using AI to complete surveys. Consent involves commitment to the research project, and there are social interests in maintaining high-quality data sources. Researchers should use validated measures of data quality and form clear plans for evaluating responses.

Key Takeaways

  • The Belmont Report established three core principles for ethical research: respect for persons, beneficence, and justice
  • Informed consent is the practical application of respect for persons, ensuring participants understand what they will be asked to do
  • Most online studies present minimal risk to participants—only 4% report finding research more distressing than daily life
  • When participants report distress, it typically stems from unclear instructions, unfair rejections, uncommunicative researchers, competition for studies, or sensitive content without warning
  • For sensitive topics, researchers should clearly communicate study content, allow skipping questions, pilot test materials, and provide support resources
  • A baseline payment rate of at least $8.50 per hour is recommended for standard survey research
  • Rejecting participants is ethical when they violate research rules such as lying, cheating, or responding randomly
  • Participant IDs and IP addresses should be treated as sensitive information and removed from publicly shared datasets
  • Set proper expectations by clearly communicating what participants will be asked to do before they commit their time
  • Pilot testing helps identify problems, gauge study duration, and gather feedback before full-scale data collection
  • Most online studies qualify for exempt review—the lowest level of IRB scrutiny
  • IRB applications should include clear descriptions of methodology, recruitment, consent procedures, risks, benefits, and data protection measures