Introduction
Science is built on the idea that people will share their work. Sometimes that happens in a conference talk or an invited lecture. In some cases, scientists share their work in a fancy TED Talk, a book, or an address to policymakers. But the most common way to share scientific research is in a humble journal article. Why do scientists write these?
In the book Write It Up, psychologist Paul Silvia offers several reasons for publishing academic research. At the top of his list are noble reasons: to share knowledge, advance scientific understanding, and have a positive impact on the world (Silvia, 2015). Not all writing meets those lofty objectives, though. People also write for practical reasons: to get a job, complete a dissertation, or fulfill the requirements of an assignment. And, human nature being what it is, people sometimes write for unsavory reasons: vanity, spite, or to maintain a reputation. Regardless of your reason for writing, there is no reason not to do it well.
Even though the principles of good writing stretch across genres, scientific writing often follows certain conventions. Within the behavioral sciences, these conventions come from the American Psychological Association's (APA) Publication Manual. APA-style papers typically include Introduction, Method, Results, and Discussion sections (Figure 16.1). Within each section, readers expect certain pieces of information.
In a Method section, for instance, readers expect to find information about who served as participants, how they were recruited, what they did, how the researchers chose the measures participants completed, and anything else relevant to assessing the appropriateness of the sample and measures for addressing the research question.
The goal of this chapter is to show you how to write an effective Method section. We will learn about some unique details that should be part of the sample description when data are gathered online. We will also explore features that should be part of all Method sections and how to ensure the Method is complete. Near the end of the chapter, you will find a sample Method from a published paper, along with comments on how it meets the journal article reporting standards detailed by the APA (American Psychological Association, 2020). The chapter closes with some general tips about writing.
Chapter Outline
Writing a Method Section
Learn about the pieces of a Method section and how to describe online studies
The Purpose of the 'Method' Section
Each section of a scientific paper has a purpose. The Introduction frames the research question and describes why it is important; the Method tells readers what was studied, how it was studied, and who participated in the project; the Results describe what was found; and the Discussion summarizes what it all means. Even though the Method is sandwiched in the middle, many people write it first. That is because the Method is often straightforward: it tells people what the researchers did and how they did it.
While description is the practical goal of a Method, there is also a rhetorical purpose. The Method seeks to convince readers that the study's procedures were effective and fit for the research question. To do that, it must provide them with information to evaluate both the reliability and validity of the results. There is a catch, however: limited space.
Most articles have word limits. And even if they don't, readers have attention limits. Writing an effective Method requires deciding what information goes in, what gets left out, and what belongs in an Appendix or online Supplement. The Method must also be written well enough to engage readers in the minutiae of research methods. Let's look at how this is done.
Parts of a Method Section
In the book Scientific Writing for Psychology, Robert Kail (2015) says to "use subheadings freely" in the Method section.
Some subheadings are necessary. Everyone, for instance, expects a section describing "Participants and Design" or "Measures and Outcomes." But beyond that, a paper may contain sections describing the "Procedure," "Apparatus," or "Open Science, Data Sharing, and Transparency" practices.
The subheadings should reflect the unique elements of the research. For experimental studies, consider separate subsections for Design, Procedures, and Materials. For online studies, an additional subsection addressing Data Quality Measures may be warranted. When determining which subheadings to include, consider what information readers need to replicate the methodology, evaluate the appropriateness of the methods, and understand any methodological limitations.
Let's look at some subsections in detail.
Participants and Design
The participants section describes who participated in the project and how they were sampled.
Where did you recruit participants? How many people took part? How many dropped out early? What were the demographic characteristics of the people who completed the project? When were the data gathered? How much were participants compensated? How long did the study take participants to complete? These are some of the questions a "Participants" section should answer.
For online studies specifically, this section should also address:
- The platform used for recruitment (e.g., Connect, Mechanical Turk, a university participant pool)
- Any screening criteria or attention checks used
- Geographic restrictions on participation
- Browser or device requirements
- Completion rates and patterns of attrition
- Steps taken to ensure data quality
The norms for describing sample demographics vary by discipline. The APA's journal article reporting standards say to "Report major demographic characteristics...and important topic-specific characteristics (e.g., achievement level in studies on educational interventions)" (American Psychological Association, 2020). For most studies conducted online, researchers report the age, race, ethnicity, gender, and topic-specific characteristics of the sample. When a study collects a lot of demographic information, a table is a great way to present it because it packs much information into a small space, as shown in Table 16.1 (a brief sketch of how such percentages can be computed appears after the table).
Table 16.1. Selected demographic characteristics of the three samples from Robinson et al. (2019); values are percentages of each sample.

| Category | Standard | Open | Inexperienced |
|---|---|---|---|
| Annual Household Income | |||
| < 20k | 15 | 20.5 | 14.2 |
| 20-39k | 28.5 | 27.1 | 20 |
| 40-59k | 23 | 19.8 | 25.5 |
| Marital Status | |||
| Married | 35 | 33 | 42 |
| Divorced | 10 | 9 | 9.5 |
| Never married | 55 | 58 | 48 |
| Race | |||
| White | 70 | 77 | 82 |
| Black | 12 | 10 | 8 |
| Asian | 10 | 7 | 5 |
| Highest Degree | |||
| No college degree | 42 | 35 | 38 |
| College degree | 50 | 55 | 52 |
| Political Party | |||
| Republican | 22 | 20 | 21 |
| Democrat | 45 | 48 | 32 |
| Independent | 30 | 26 | 27 |
| Religion | |||
| Christian | 40 | 42 | 50 |
| Atheist | 25 | 22 | 18 |
| Other | 10 | 8 | 10 |
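The percentages in a table like Table 16.1 are simple breakdowns of each demographic variable within each sample. As a minimal sketch (not the procedure Robinson et al. actually used), the snippet below computes such a breakdown with pandas; the DataFrame and its column names (`sample`, `race`) are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: percentage breakdown of one demographic variable by sample.
# The DataFrame and the column names ("sample", "race") are assumed for illustration.
import pandas as pd

df = pd.DataFrame({
    "sample": ["Standard", "Standard", "Open", "Open", "Inexperienced", "Inexperienced"],
    "race":   ["White", "Black", "White", "Asian", "White", "White"],
})

# Cross-tabulate race by sample; normalize within each sample so columns sum to 100%.
table = pd.crosstab(df["race"], df["sample"], normalize="columns") * 100
print(table.round(1))
```

Whatever tool is used to build the table, the table itself should make the units clear (here, the percentage of each sample falling in each category).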
Beyond describing the sample, a "Participants" subsection often includes information about the study design and how sample size was determined. Where appropriate, researchers report the statistical power they had to detect the effect of interest. If a project contains an experiment, near the end of this section is a good place to describe the design and how participants were assigned to conditions.
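Where a formal power analysis informed the sample size, reporting its inputs lets readers reproduce the calculation. Below is a minimal sketch, assuming the Python statsmodels library and placeholder values for effect size, alpha, and power; it illustrates the kind of calculation a Participants subsection might summarize, not any particular study's analysis.

```python
# Minimal sketch of an a priori power analysis for a two-group comparison.
# Effect size, alpha, and power are placeholder values for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.40,        # assumed Cohen's d
    alpha=0.05,              # significance level
    power=0.80,              # desired statistical power
    alternative="two-sided",
)
print(f"Approximate sample size needed per condition: {n_per_group:.0f}")
```

A single sentence reporting these inputs and the resulting target sample size is usually enough for readers to evaluate the study's sensitivity.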
An example "Participants" subsection appears below. It comes from the paper on online research participants referenced in Table 16.1 (Robinson et al., 2019), which examined how participants' level of experience on sites like Mechanical Turk may present a problem for some types of research. As we will see, it covers many of the points outlined above; other points, like the sample source and why it was appropriate, were explained before the Method section.
Participants section from Robinson et al. (2019): "We aimed to collect data from 750 people—250 in each of our three samples (standard, open, inexperienced). We expected the study to take about 15 minutes and paid each participant $1.00. Although we did not conduct a formal power analysis, we aimed to recruit large samples in line with past work examining data quality on online platforms.
The final dataset included 768 responses. There were more responses than participants we aimed to collect data from because two participants entered the study more than once and 34 people dropped out of the study early. We retained data from all participants who completed all our measures of data quality (n = 758). This cutoff resulted in removing incomplete responses from the two people who entered the study more than once and eight people who completed less than 35% of the study. After exclusions, the sample included nearly equal numbers of men (n = 375) and women (n = 350), and the average age was 33.9 years (SD = 10.17) (see Table 1 for detailed demographic information)."
Procedure
The procedure describes what happened in the study and, when done well, why things occurred as they did. Some procedure sections are long. Most are short. The Procedure should contain enough detail for readers to evaluate the methods and potentially replicate the study.
Topics traditionally covered in the Procedure include: the source of participants, approach to sampling, inclusion and exclusion criteria, and other details needed to evaluate the study. Within online studies, researchers should also report the compensation participants received, how long the study took, anything special about the sampling process, and any ethical considerations.
As an example of how omitting details impedes a reader's ability to evaluate the research, consider "nationally representative" data. Across many behavioral science disciplines, nationally representative data are rare, as we learned in Chapter 9. Yet, over the years, several polling organizations and some participant recruitment companies have taken to calling data that were gathered using non-probability quotas "nationally representative." When behavioral scientists repeat this description in papers without detailing the sampling approach, readers have little chance of adequately understanding the research or its results.
Beyond sampling details, the procedure should inform readers about what participants did in the study. For short surveys or questionnaires, researchers may write something like, "Participants completed the following scales in a random order, with item order within each instrument also randomized. A full list of items and other related material can be found in the Supplementary Materials section" (Rivera et al., 2022). Following this sentence, another subsection may describe each scale in the study.
For longer or more complicated projects, the procedure section may be detailed. A daily diary study, longitudinal investigation, video interview, dyadic study in which participants interact with one another, or other similarly complex study would require more description than a short survey. In a longitudinal investigation, for instance, the researcher should detail how participants were recontacted, what attrition occurred between each wave, and what incentives were offered to increase retention.
Finally, if necessary, the Procedure is a good place to describe any agreements with the institutional review board, any unusual ethical standards the research adhered to, or any safety monitoring the study required.
As an example of what the Procedure section can look like, consider the paragraph below. It comes from the same paper as the Participant description section above (Robinson et al., 2019).
"To recruit participants, we created three separate studies on MTurk and varied the participant qualifications for each. All three studies were set up and managed using the TurkPrime platform [23]. In the first study (Standard), we used standard participant qualifications of at least a 95% approval rating and more than 100 HITs completed. In the second study (Open), we used no qualifications, meaning the study was open to all participants on MTurk. Finally, in the third study (Inexperienced), we required participants to be inexperienced by setting the qualification requirement to less than 50 HITs completed.
Data collection for all three studies started at the same time and ended after approximately one day (standard = 25 hours, open = 22 hours, inexperienced = 25 hours). After all three studies ended, we used the TurkPrime database to query participants' approval rating and number of HITs completed in the open sample.
Participants completed the Asian Disease experiment, Mt. Everest experiment, Trolley Dilemma experiment, Big Five Personality Inventory (BFI), Cognitive Reflection Test (CRT), and demographic questions. Each experimental manipulation—Asian Disease, Mt. Everest, Trolley Dilemma—had two conditions and participants were randomly assigned to conditions. The order of the experimental manipulations, the BFI, and the CRT was randomized across participants. After participants completed all tasks, they answered demographic questions. We included four attention check questions at various points in the survey—two in the BFI and two in the demographics section."
Apparatus
The Apparatus section describes any methods, technologies, or tools used to collect data.
In online research, an "Apparatus" is often some software, hardware (i.e., a device), website, or technique for collecting data. Whenever research relies on a new method or technique, researchers should give readers a detailed description and information about where to learn more. Examples of new methods that should be mentioned in the apparatus section are using voice recordings to learn about participants' reasoning processes (Ristow & Hernandez, 2023), using mobile devices to capture data about behavior (Harari et al., 2021), or even using passive recording technology to assess the quality of participants' responses (Permut et al., 2019). In each of these papers, the researchers described the specific apparatus, explained how it works, and made a case for its validity.
Measures and Outcomes
This subsection is easy to get right. All it does is describe how the researchers measured what they measured and make the case that those measures were appropriate (i.e., reliable and valid).
Just as with the Apparatus section, measures and outcomes (or sometimes just "Materials") that are commonly used within a field require less explanation than things that are new. For well-established measures, all that is needed is a description of the measure and a reference to it. The description should tell readers how many items the measure has, what kind of response scale participants were given (e.g., 1 to 7, -3 to 3), what the scale labels or anchors were, and what evidence there is for the measure's reliability and validity.
For new measures, more space is required to describe why the measure was appropriate for the situation and what evidence there is of its psychometric properties. Even if the evidence is preliminary and even if it's based on your own data (as is likely to be the case), readers will want to know why the measure is suited to the situation. It's this section's job to tell them.
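One common piece of psychometric evidence is internal consistency, usually reported as Cronbach's alpha. Below is a minimal sketch of that calculation, assuming item-level responses in a pandas DataFrame whose columns are the items of a single scale; the data are hypothetical, and the snippet illustrates the statistic rather than any particular paper's analysis.

```python
# Minimal sketch: Cronbach's alpha from item-level data (each column = one scale item).
# The responses below are hypothetical values, for illustration only.
import pandas as pd

items = pd.DataFrame({
    "item1": [5, 4, 6, 3, 5, 6],
    "item2": [4, 4, 5, 2, 6, 5],
    "item3": [5, 3, 6, 3, 5, 6],
})

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```

The α values reported for each subscale in the example below are exactly this kind of reliability evidence.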
We present the Measures subsection from a journal article that used online participants to investigate what motivates people to stay single as they move through life.
From Park et al., 2023:
Measures: Profile Indicators
Fundamental Social Motives. All participants completed eight subscales (each consisting of six items) from the Fundamental Social Motives Inventory (FMI; Neel et al., 2016). Note that among the 11 subscales of the FMI, three related to mate retention or parenting were not included in the survey as they were not relevant to (all) singles. The internal consistency was high for all subscales as follows: self-protection (e.g., "I think a lot about how to stay safe from dangerous people"; α = .88), disease avoidance (e.g., "I avoid places and people that might carry diseases"; α = .88), affiliation—group (e.g., "I enjoy working with a group to accomplish a goal"; α = .86), affiliation—exclusion concern (e.g., "I would be extremely hurt if a friend excluded me"; α = .89), affiliation—independence (e.g., "Having time alone is extremely important to me"; α = .84), status (e.g., "It's important to me that other people look up to me"; α = .81), mate seeking (e.g., "I spend a lot of time thinking about ways to meet possible dating partners"; α = .93), and kin care—family (e.g., "It is extremely important to me to have good relationships with my family members"; α = .92). All items were assessed on a 7-point scale, ranging from 1 (strongly disagree) to 7 (strongly agree).
Measures: Predictors
Background Variables. Four variables assessed at background were examined as predictors of profile membership. These include gender (men vs. women), age, dating history (have vs. have not been in a relationship before), and marital history (ever vs. never been married). Note that given the limited number of individuals belonging to the "other" category for gender (n = 6), we dropped them from the analysis including gender. The number of divorced and widowed individuals was also small (n = 30), thus we collapsed ever-married individuals into one category. We kept these individuals in the model given previous work suggesting potential differences in never-married versus ever-married individuals' social networks (Pinquart, 2003).
Attachment Insecurity. The Experiences in Close Relationships–Relationship Structures questionnaire (Fraley et al., 2011) was used to assess global (i.e., relationship-general) attachment insecurity. Participants responded to six items assessing attachment avoidance (e.g., "I don't feel comfortable opening up to others"; α = .86) and three items assessing attachment anxiety (e.g., "I often worry that other people do not really care for me"; α = .88) on a 7-point scale (1 = strongly disagree; 7 = strongly agree).
Fear of Being Single. Participants responded to the Fear of Being Single scale (Spielmann et al., 2013) which includes six items such as "I feel anxious when I think about being single forever" (α = .85). The items were rated using a 5-point scale (1 = not at all true; 5 = very true).
Measures: Outcomes
Satisfaction With Being Single. The Satisfaction With Relationship Status Scale (Lehmann et al., 2015) was used to measure satisfaction with being single. Participants were asked to think about their current relationship status (which, for all the participants, would be being single) and respond to questions such as "How happy are you with your current status?" (α = .92) using a 4-point scale (1 = not at all; 4 = to a great extent).
Life Satisfaction. Participants responded to the Satisfaction With Life Scale (Diener et al., 1985) using a 7-point scale (1 = strongly disagree; 7 = strongly agree). Items include five statements such as "In most ways my life is close to my ideal" (α = .89). Correlations among all study variables can be found in the Supplemental Material.
Reporting Data Cleaning
An important part of the Measures and Outcomes subsection is explaining how the data were cleaned and screened. These screening procedures should ideally be pre-registered. Even so, within the paper researchers need to explain how the screening measures were chosen, what evidence there is for their effectiveness, and how people performed on them.
If a study follows the advice for using attention check questions that we learned about in Chapters 10-12, the Measures section should include a heading that says "Data Screening." This section should describe the instructed-response, nonsense, or nearly non-existent event questions that were used to screen for data quality. It should also mention any open-ended items used to measure quality and what criteria constitute a passing or failing answer. Finally, the section should report how many participants were excluded based on screening.
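As an illustration of what that reporting summarizes, the sketch below flags and counts participants who fail attention checks. The column names and the "pass both checks" criterion are assumptions made for this example, not a standard prescribed by the APA manual.

```python
# Hypothetical sketch: flag participants who fail attention checks and count exclusions.
# Column names and the passing criterion are assumptions for illustration.
import pandas as pd

data = pd.DataFrame({
    "participant_id":    [1, 2, 3, 4, 5],
    "attention_check_1": [1, 1, 0, 1, 1],  # 1 = passed, 0 = failed
    "attention_check_2": [1, 0, 0, 1, 1],
})

checks = ["attention_check_1", "attention_check_2"]
data["n_passed"] = data[checks].sum(axis=1)

# Example criterion: retain only participants who passed every check.
retained = data[data["n_passed"] == len(checks)]
n_excluded = len(data) - len(retained)
print(f"Excluded {n_excluded} of {len(data)} participants for failing an attention check.")
```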
Open Science, Data Sharing, and Transparency
Science has changed a lot in the era of online research. Some changes are thanks to technology (e.g., Anderson et al., 2019; Buhrmester et al., 2018) and some are the result of shifts within the scientific community. One large shift is in the planning, reporting, and archiving practices of behavioral scientists, a change broadly known as "open science."
Many of the methodological changes that fall under the umbrella of open science are intended to make research more transparent, more cumulative, and more collaborative. These changes have been spurred by fraud, shoddy research practices, and an inability to replicate the published findings from various disciplines (Bhattacharjee, 2013; Klein et al., 2018; Simmons et al., 2011). In a sign of progress, many researchers have changed the way they conduct and report their studies (Nosek et al., 2022).
What this means for the Method section is that many readers will expect to see a description of open science practices. These statements typically include information about how the sample size was determined, which cases of data were removed from analyses, and whether all manipulations and measures are being reported in the study (and if not, why not; Simmons et al., 2012). Open practice statements also typically include information about whether the study was pre-registered (along with a link to the pre-registration), how interested parties can obtain the data files, study materials, and analysis code (usually with a link to the repository), and whether the authors have any conflicts of interest. Sometimes, researchers add other disclosures such as their position toward the data or research question (e.g., Ledgerwood et al., 2023), but overall, the aim of these statements is to make the reporting of research findings more clear and more open.
Statements about open science are often short. Because manuscripts typically have word limits, the disclosures researchers provide are sometimes scattered throughout the Method section. Whenever necessary, they can also be placed under their own heading labeled something like "Open Practices Statement" or "Data Sharing." Two examples of these kinds of statements appear below, reflecting how much they can vary in length and detail:
Data Sharing Statement - From Clifton & Kerry, 2023
"Data, study materials (including all measures administered but not relevant to this study), and code are publicly available: (https://osf.io/r3ksa/?view_only=f240306aede3473a8551729a6fb9bf34)."
Ethics and Open Practices Statement - From Sun et al., 2022
"We used data from three of our existing datasets. Data collection and coding procedures for Sample 1 were approved by Institutional Review Boards (IRBs) at Washington University in St. Louis (IRB ID: 201206090; Study Title: Personality and Intimate Relationships Study) and the University of California, Davis (IRB ID: 669518–15; Study Title: Personality and Interpersonal Roles Study). Data collection procedures for Samples 2 and 3 were approved by the IRBs at the University of Pennsylvania (Sample 2; IRB ID: 831767; Study Title: Moral Change Goals) and the University of California, Davis (Sample 3; IRB ID: 1328211-2). Data collection procedures for trait ratings (which we use for supplemental analyses; see Supplemental Material, Sections 6–7) were approved by the IRB at University of Pennsylvania (IRB ID: 844999; Study Title: Best and Worst Trait Ratings).
For Sample 1, we used data from the first wave of the longitudinal Personality and Interpersonal Roles Study (PAIRS). Other published articles have used the PAIRS dataset (for a full list of citations, see https://osf.io/3uag4/wiki/home/). A few articles used the self- and informant-reports of personality traits that we use in supplemental analyses, but none have used the best and worst trait measures included in this study. For Samples 2 and 3, we used data from a study on personality change goals. The previously published article using these samples (Sun & Goodwin, 2020) used the self- and informant-reports of personality traits that we use in supplemental analyses, but did not use the best and worst trait measures. Codebooks for all measures in these datasets are available at https://osf.io/jce7k/. Below, we describe the measures and procedures relevant to the current article.
The codebook, data (posted in a way that prevents targets from finding out what their friends said about them), and R scripts required to reproduce the analyses reported in this paper are available at https://osf.io/jce7k/. We did not preregister these analyses as we were already familiar with the datasets when we conceptualized this project. Instead, to limit the risk of overinterpreting potentially spurious effects, we highlight the findings that replicate across at least two samples (at a conventional p < .05 threshold) and are therefore more likely to be robust.
The effects reported in the results section that met this replication threshold also met an alternative standard of evidence for claims of new discoveries—whether the effects are significant at a p < .005 threshold (Benjamin et al., 2018)—in at least one sample. Note that we coded and analyzed a few additional variables in the Sample 1 data for an undergraduate research project (see Supplemental Material, Section 1). We later refined the scope of the current paper to the variables that are presented in this paper and coded only these variables in the Sample 2 and 3 data. Apart from the additional variables coded in Sample 1, we report all coded variables."
Once the Method section is complete, it's time to write the Introduction, Results, and Discussion sections. There are many great resources for learning to write these sections effectively. Below, we present a complete Method section from a recent paper with the hope that you see how it meets the criteria presented throughout this chapter. After presenting this method, we will examine some general advice for writing.
Example Method from Hartman et al., 2023:
Method
Participants and design
Study 1a. Three hundred and two adults from MTurk participated in Study 1a. We used CloudResearch's MTurk Toolkit (Litman et al., 2017) to target participants within the United States and to recruit participants in different age groups. Specifically, we recruited 50 participants in six separate groups, with each group corresponding to a different decade of age (20s through 70s). Participants were paid $0.50 to complete the study which we estimated would take 3 minutes. All data were collected in April 2019, and data collection ended after 3 days.
Study 1b. We recruited 350 adults from Prime Panels. As with Study 1a, we split the sample into six groups of approximately 50 participants each, with each group corresponding to a different decade of age. Because Prime Panels aggregates several panels to collect large samples, participants were compensated based on the platform they were recruited through. Some participants may have completed the study in exchange for flight miles, points, money, or other rewards. All data were gathered in April 2019; data collection closed after 3 hours.
Procedure
We presented participants with the AVI. Some of the questions had four response options and some had five. We instructed participants to answer to the best of their ability without using outside sources and to select "I don't know" when applicable. We also stressed that we would not penalize participants if they did not know the answers. As in the instrument development study, we asked participants to provide information about their age to verify that the database information was accurate. These included open-ended questions about participants' current age, the year they graduated from high school, and how old they were during Watergate (participants under 50 typically wrote "not born yet"). For exploratory purposes we also asked participants to select the decade of their life in which they were the happiest and to elaborate on what was positive about that time. They then selected the decade of their life that was most difficult and described what made it so. The results from these items are not reported here.
Analytic approach
We used the difference method to assess each person's relative knowledge of historical (questions about pop culture prior to the year 2000) and contemporary (questions about pop culture after the year 2000) culture. For each participant, we separately summed the number of correct responses on all items measuring historical and contemporary knowledge, then converted the sums to percentages. Finally, we calculated a difference score by subtracting the percentage of correct responses to contemporary questions from the percentage of correct responses to the historical questions. This yielded a difference score variable with a range of −100% (correctly answered all contemporary questions and no historical questions) to +100% (correctly answered all historical questions and no contemporary questions).

For Study 1a, in addition to using the CloudResearch database to target participants whom we expected to fall into six age groups, we asked participants to self-report their age. We opted to rely on self-reported age in our analyses because that is the data most researchers would have access to. There was a strong correspondence between self-reported age and database age (r = .965, p < .001).

In both studies we used linear regression, predicting the continuous self-reported age variable using performance on the AVI items as the predictor. We also assessed the utility of the instrument for distinguishing between decades of age. To do so, we split participants' self-reported age into six groups, with each group corresponding to a different decade, and tested differences between the groups using a one-way ANOVA. To assess the value of using the difference score, as opposed to just using the scores on the contemporary or historical questions, we also examined the correlations between self-reported age and each of these three measures.

Here and in later studies we tested both the full 19-item instrument (see Table S1) as well as a shorter six-item subset (AVI-S). In addition to being easier to implement, research indicates that much of the age-related differences in people's knowledge can often be captured by a few items rather than by multiple items (e.g., Schroeders et al., 2021). The six-item subset comprised the three historical items that older adults most often answered correctly and the three contemporary items that younger adults most often answered correctly (see Table 1). Historical items include Bonanza (1959–1973), The Way We Were (1974), and The First Time I Ever Saw Your Face (1969), while contemporary items include Somebody That I Used to Know (2011), How You Remind Me (2001), and Boom Boom Pow (2009). Most analyses showed the difference between the full 19-item scale and the shorter six-item version was insubstantial (e.g., the full scale predicted 68.2% of the variance in age, while the shorter version predicted 64.3%). Therefore, we report the results of the shorter scale. Analyses using the full version of the AVI are available in the supplemental materials.
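The difference score described in this example is simple arithmetic, and a small sketch can make the verbal description concrete. The snippet below assumes hypothetical item-level accuracy data (1 = correct, 0 = incorrect); it illustrates the computation rather than reproducing the authors' analysis code.

```python
# Minimal sketch of the difference-score computation described above.
# The responses are hypothetical (1 = correct, 0 = incorrect).
historical =   [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # accuracy on historical items
contemporary = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # accuracy on contemporary items

pct_historical = 100 * sum(historical) / len(historical)
pct_contemporary = 100 * sum(contemporary) / len(contemporary)

# Positive values indicate relatively better historical knowledge; range is -100 to +100.
difference_score = pct_historical - pct_contemporary
print(f"Difference score: {difference_score:.1f}")
```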
Writing Advice
Explore what makes writing good and how to improve your own writing
Beyond accurately describing your methods and reporting your results, there are some lessons you can learn to ensure people will want to read what you write. We have gathered some of those lessons here.
The Transaction
When setting out to write a scientific paper, it's important to remember that scientific writing, like all writing, is an act of communication. The writer tries to put their thoughts, and some part of themselves, into words. This transaction works best when people act like people.
Too often, writers feel pressured to adopt a formal or scholarly tone when they write about scientific topics. This is a mistake. The goal of writing is not to impress readers with jargon or convoluted sentences, but to convey research clearly and effectively. If you're passionate about the topic and you write well, readers will be carried in your wake.
Effective scientific writing bridges the gap between your study's findings and your audience's understanding of the issue. The aim should therefore be to express your thoughts and discoveries in a manner that is accessible, precise, and engaging. The best way to do this is to embrace simplicity, keep the reader's perspective in mind, and know that a little warmth and humanity go a long way in scientific writing.
The Opening
What is the purpose of a paper's first sentence? Answer: to entice the reader into the second sentence!
Nearly every book about writing contains the admonition to "Start strong." This advice is often repeated for a reason: readers who are unenthused by the first sentence seldom make it to the second. That means your opening, or lede, needs to grab the reader and convince them that your article is worth reading.
What are ways to grab attention? A surprisingly simple method is to ask a question. For example, here's how two researchers began a paper on a cognitive bias known as the anchoring and adjustment heuristic:
"In what year was George Washington elected president? What is the freezing point of vodka? Few people know the answers to these questions, but most can arrive at a reasonable estimate by tinkering with a value they know is wrong" (Epley & Gilovich, 2001).
With just two questions and one sentence, the reader is drawn into the paper.
Another effective strategy is to begin with an interesting statistic, fact, or observation. That is how a 2019 paper investigating the role of handshaking in promoting deal-making began:
"After years of negotiations between Prime Minister Shinzo Abe of Japan and President Xi Jinping of China, diplomats from both countries arranged for the two leaders of Asia's biggest economies to meet at a 2014 economic summit for a single purpose: to shake hands. The handshake took months of scheduling to arrange, with the news media noting that the "small gesture holds great importance" for future negotiations and would be "parsed for deeper meaning" (Schroeder et al., 2019)."
Finally, a third strategy is to directly comment on behavior:
"When children draw on walls, reject daily baths, or leave the house wearing no pants and a tutu, caretakers may reasonably doubt their capacity for rational decision-making. However, recent evidence suggests that even very young children possess sophisticated decision-making capabilities for reasoning about physical causality (e.g., Gopnik et al., 2004, Gweon and Schulz, 2011), social behavior (e.g., Gergely, Bekkering, & Király, 2002), future events (e.g., Denison and Xu, 2010, Kidd et al., 2012, Téglás et al., 2011), concepts and categories (e.g., Piantadosi et al., 2012, Xu et al., 2009), and word meanings (e.g., Xu & Tenenbaum, 2007)." (Kidd et al., 2013).
Each of these methods is effective because each involves people thinking, feeling, and behaving in ways that behavioral scientists care about. There are, of course, other ways to begin a paper, such as telling a story (Neel & Lassetter, 2019), using a quote (Gray et al., 2014), or pointing out a problem or contradiction (Kerry et al., 2023). Regardless of how you open your paper, do it in a way that grabs the reader's interest and sets the stage for your research question.
Imitation
No one is born knowing how to write. Learning takes practice, practice, and more practice. Yet one way to accelerate this process is to imitate.
Imitating good writing doesn't mean you try to sound like someone else; you have your own voice that only you can bring to the page. But imitation does mean you should try to examine what good writers do and do some of it in your writing.
The ways of opening a paper described above are examples of something that can be imitated. Everyone can try to pose an interesting rhetorical question about the topic of their research. Similarly, if you read good writing, you will notice other things worth imitating, such as simple word choices, ways to frame an introduction or report results, and ways to end a paper on a strong note. Many good writers learned what they know by imitating others. You should do the same.
Actions and Ideas
At the heart of good writing is a subject → verb → object structure. This structure works because it helps readers visualize who is doing what, and active verbs drive the writing forward.
Unfortunately, the subject → verb → object structure gets easily buried under ideas, especially in scientific writing. For example, consider this sentence: "Does a nuanced understanding of the normative interplay between architectural design elements and cultural connotations among seasoned professionals predict the subjective aesthetic evaluations of architectural compositions by novice observers?"
Who is doing what in this sentence? What is the study about? It's hard to tell because the writer has buried the action beneath the ideas. A more direct way to say the same thing is: "Do people who know a lot about building design have similar opinions about architecture as those who are new to it?"
What makes a lot of scientific writing hard to read is something the writer Helen Sword calls nominalizations, or zombie nouns (Sword, 2012). Zombie nouns are words that began life as verbs, adjectives, or other nouns but appear within a sentence as abstract nouns. For example, the word expect is a verb that often appears in print as "expectation." The word crony is a noun that appears as cronyism. It isn't necessarily a problem when one or two of these nominalizations appear in a sentence—they can help express complex ideas. The problem is when too many nominalizations pile up in the same sentence—then they kill your writing. Sentences loaded with nominalizations bury your concepts and ideas in abstractions, preventing readers from following who is doing what.
Scientific writing leans toward nominalizations because writers need to express abstract ideas. As you may remember from Chapter 1, the variables that behavioral scientists study—life satisfaction, self-efficacy, happiness, regret—are not physical things but abstractions. As a result, it's easy to get lost talking about the concepts within a study and forget that there are supposed to be people doing the thinking, feeling, and behaving behind the data points. Limiting nominalizations is thus one key to clear writing.
Jargon
Science is full of jargon, and not all of it is bad. Jargon helps experts communicate complex ideas (jargon is often a special case of the nominalizations we talked about above). Much of it, however, is bad because it impedes clear communication.
The most common type of academic jargon is the nominalizations described above, but a close second is acronyms. Some acronyms are useful. But when acronyms are unnecessary or overused, they can make writing hard to read.
For instance, if you are a social psychologist, you likely know that if someone starts talking to you about how WEIRD their sample of participants is, WEIRD has a specific meaning other than unusual or strange (it stands for Western, Educated, Industrialized, Rich, and Democratic). WEIRD is a useful acronym because it helps researchers remember that explanations for human behavior often need to apply beyond the small number of Western developed nations where most behavioral research has been conducted in the past (Henrich et al., 2010).
In many other instances, acronyms are not helpful. Many researchers, for instance, have probably found themselves reading a paper in which the authors created an acronym to refer to the measure or scale used in the study, but the acronym is not intuitive or easy to remember. It may simplify the writer's task, but it makes more work for the reader. This is a bad acronym.
In many cases, the best policy will be to eliminate as much jargon as possible. This will also make your work accessible to people beyond your field or area of expertise.
Ideas First (Scientists Second)
Scientific writing is part of an ongoing conversation. Your work should contribute to what has come before. Nowhere is this connection clearer than in the introduction.
In the Introduction to a scientific paper, your task is to describe the relevant research that motivated your study and set the stage for how your research will address the question you were interested in. In describing previous research, your writing will be better if you focus on what is and isn't known about your topic rather than on what past studies or researchers have shown. To see the difference, consider the examples below.
An Example with Scientists and Studies First. Kahneman (1995) conducted foundational work on counterfactual thinking. By asking participants to think about actions they have taken and actions they could have taken but didn't (inaction), he found that regrets of action tend to be stronger and more common than those of inaction because people often find it easier to imagine undoing an action they took (and mentally returning to the status quo) than to imagine what would have resulted from something they never did in the first place. However, research by Gilovich and Medvec (1994; 1995) shows a more complicated picture. Using surveys, interviews, and two experiments, they had participants think about both the short and long-term effects of both actions they took and regretted and things they never did but regretted not acting upon later. Their results indicated a temporal shift—namely, that people's regrets of action are more intense in the short term but regrets of inaction gain prominence in the long run.
An Example with Ideas and Findings First. Foundational work on counterfactual thinking indicated that regrets of action tend to be stronger and more common than those of inaction because it's typically easier to imagine undoing an action taken (and mentally returning to the status quo) than to imagine what would have resulted from an unchosen option (Kahneman, 1995). Other research paints a more complex picture, documenting a temporal shift in people's regrets over actions and inactions—namely, that regrets of action are more intense in the short term, but regrets of inaction gain prominence and stand out in the long run (Gilovich & Medvec, 1994, 1995; for an exception, see Morrison & Roese, 2011).
What you write in your introduction will be more effective and more enjoyable to read if it is framed in terms of ideas and findings rather than studies and scientists. It will also probably be shorter.
Punctuation
While you're learning to write, learn to punctuate. Several punctuation marks are the friends of any serious writer: the em dash, the colon, the semicolon, the comma, and—perhaps surprisingly—the period.
In On Writing Well (2006), William Zinsser says, "There's not much to be said about the period except that most writers don't reach it soon enough." Everyone can learn from that.
Ending Strong
Just as you want to start strong, you should end strong. The best way to do that is to identify the take home message from your work. What has your study uncovered? What does the reader need to remember? Restating that idea at the end is a good way to finish.
One technique you can use to add a little flair to your ending is to return to whatever theme, idea, or technique you used to grab the reader's interest in the first place. If you posed a rhetorical question at the start of your paper, maybe you can answer that question at the end. If you observed something about behavior, perhaps you have something more to say about that behavior in light of your results. Tying your ending to the hook you used to grab readers' attention is a nice way to round out your paper and send readers on their way with a satisfying flourish.
Recommended Books on Writing
Learning to write well is a process. The books below discuss many of the ideas above in more depth and offer a good starting point for students who are serious about learning to write non-fiction.
- Write It Up by Paul Silvia (2015)
- Scientific Writing for Psychology by Robert V. Kail (2015)
- On Writing Well by William Zinsser (2006)
- The Sense of Style by Steven Pinker (2014)
- Writing to Learn by William Zinsser (1988)
Frequently Asked Questions
What is the purpose of the Method section in a scientific paper?
The Method section serves two purposes: it describes what was studied, how it was studied, and who participated in the project. It also has a rhetorical purpose—to convince readers that the study's procedures were effective and fit for the research question by providing information to evaluate both reliability and validity of the results.
What information should be included when describing online study participants?
For online studies, the participants section should address the platform used for recruitment, any screening criteria or attention checks used, geographic restrictions on participation, browser or device requirements, completion rates and patterns of attrition, and steps taken to ensure data quality, in addition to standard demographic information.
What are open science practices and why should they be reported?
Open science practices are methodological changes intended to make research more transparent, cumulative, and collaborative. Statements typically include information about how sample size was determined, which cases were removed from analyses, whether the study was pre-registered, how to obtain data files and materials, and any conflicts of interest. These practices emerged in response to fraud, shoddy research practices, and replication failures.
What makes scientific writing effective?
Effective scientific writing bridges the gap between study findings and audience understanding by being accessible, precise, and engaging. Key principles include embracing simplicity, keeping the reader's perspective in mind, using subject-verb-object structure with active verbs, limiting nominalizations, avoiding unnecessary jargon, and starting and ending strong.
What are nominalizations and why should writers avoid overusing them?
Nominalizations, or 'zombie nouns,' are words that began life as verbs, adjectives, or other nouns but appear as abstract nouns (e.g., 'expectation' instead of 'expect'). While one or two can help express complex ideas, too many in a sentence bury concepts in abstractions, preventing readers from following who is doing what. Scientific writing leans toward nominalizations because it deals in abstract concepts, making it important to consciously limit them.
Key Takeaways
- The Method section serves both descriptive and rhetorical purposes—it tells readers what was done and convinces them the procedures were appropriate
- Online studies require additional details including platform used, screening criteria, geographic restrictions, and data quality measures
- Participants subsections should describe who participated, how they were sampled, demographics, compensation, study duration, and design
- Procedure sections should contain enough detail for readers to evaluate methods and potentially replicate the study
- Measures and outcomes descriptions should include number of items, response scales, anchors, and evidence of reliability and validity
- Data cleaning procedures should be pre-registered and reported, including attention checks and exclusion criteria
- Open science statements should address sample size determination, pre-registration, data availability, and conflicts of interest
- Effective scientific writing is accessible, precise, and engaging—not overly formal or filled with jargon
- Strong openings grab reader attention through questions, interesting facts, or observations about behavior
- Nominalizations (zombie nouns) should be limited to keep writing clear and focused on who is doing what
- Ideas should come first in introductions, with scientists and studies cited parenthetically rather than leading sentences
- End strong by restating the take-home message and connecting back to your opening hook