In this blog, we highlight some subtle and not-so-subtle features of the CloudResearch Dashboard that can make navigating the site and completing study-related tasks easier.
Did you know that you can search studies on the CloudResearch Dashboard by survey ID, Amazon HIT ID, study title, project name, or tags? While most people probably rely on the project name by default, tags are an efficient and convenient way to organize and search your studies. Six months from now, you may not remember a project’s name, but you will probably remember its theme, and that is where tags come in handy.
Tags allow you to associate up to five keywords with a study and then use those words to quickly call up a list of related studies. Because most researchers pursue multiple lines of research at the same time, tags can help you organize your streams of research on CloudResearch.
To add a tag to your study, simply click the tag indicator under the Project Name and Title.
Then, type your tag and click Add Tag.
Another way to make searching studies more efficient is by archiving old studies. Although CloudResearch does not currently have an option to completely delete a study, archiving achieves much the same objective: it deactivates all study-related actions and hides the study so it is visible only when you choose to view Archived studies on the Dashboard. Archiving removes clutter, especially as the number of studies you run grows.
One Dashboard detail you may have never noticed is that the background of each study is color-coded. Studies that are set up and ready to launch have a beige background, live studies have a green background, and completed studies have a blue background. A study’s background color can help you pick out the project you are looking for in a list, especially when you have copied multiple studies in the past.
If you’d like to preview your survey, see what your HIT looks like on MTurk, or see the CloudResearch HIT window before launching your study, you can do so from the CloudResearch Dashboard. If you click on your study, extra details will appear below. In the upper left corner of the extra details is an External Survey Link you can use to preview your survey as programmed in your survey platform (e.g., Qualtrics, SurveyGizmo).
Below that is an MTurk HIT Preview link you can use to view your study as it appears on MTurk, provided you log in with an MTurk account and have the worker qualifications necessary for the HIT. Finally, the CloudResearch Survey Preview link will allow you to see what the HIT looks like to workers after it is routed through CloudResearch.
From the Dashboard, you can use a number of tools to directly modify your HIT and manage workers. For example, when your study is live, you can choose to pause your HIT (if you need to make a mid-study adjustment) or restart your HIT if data collection has slowed (see our blog on the Restart feature).
In addition to modifying your HIT, you can also manage workers from the Dashboard. If your study uses manual approval, you can choose Manage pending responses to accept or reject workers’ submissions. And if you would like to reward workers, you can select Grant Bonuses.
When you use an Include list on your study, you can email workers on the list directly from the Dashboard. Once your study is live, the Email Included Workers button will appear, allowing you to email all Included workers to let them know the study is available. While this little button seems simple, it can save you the time of emailing each worker individually or learning to write the code necessary to email all workers at once.
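For readers curious about what that button replaces, here is a minimal sketch of emailing workers programmatically through MTurk’s NotifyWorkers operation with the boto3 library. NotifyWorkers accepts at most 100 worker IDs per call, so a list of included workers has to be sent in batches; the client setup and worker IDs shown are illustrative.

```python
def batch(worker_ids, size=100):
    """Split a worker ID list into chunks NotifyWorkers will accept (max 100)."""
    return [worker_ids[i:i + size] for i in range(0, len(worker_ids), size)]

def notify_all(client, worker_ids, subject, message):
    """Send the same notification to every worker, 100 IDs per API call."""
    for chunk in batch(worker_ids):
        client.notify_workers(
            Subject=subject,
            MessageText=message,
            WorkerIds=chunk,
        )

# Usage (requires AWS credentials with MTurk access):
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# notify_all(mturk, included_ids, "Study available", "Our new HIT is now live.")
```

Even in this stripped-down form, you can see the bookkeeping (batching, credentials, error handling in a real script) that the Email Included Workers button handles for you.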
A new feature on the Dashboard is the option to see the demographics of workers who have taken your study as responses come in. Once your study is live, you can click View Study Demographics to see the demographic breakdown of people who have submitted your study.
Please note: the demographic information presented in this window is based on demographic data CloudResearch has about each worker. This means any worker who has not answered our demographic qualification questions will not be included in this report. It is also possible for our data to be slightly different from what you see in your final dataset. Because we do not have access to the data workers provide in your survey, any discrepancy between the data we have and the way workers answer questions within your survey would cause the numbers to differ.
Under the Actions column of any study, you will see the option to Copy HIT. Copying a HIT does exactly what you would expect: it creates a copy of the HIT, mirroring all the details of the original (e.g., Panel options, method of worker approval, Pro Features, etc.). One less obvious aspect of the Copy HIT function is that copying automatically excludes workers who took the original HIT from the copy. This makes copying useful when you run a subsequent study in the same line of inquiry: by copying the HIT, you keep all the relevant settings and automatically exclude everyone who took your first study from the second.
After setting up a study, you may want to review all of the details before you launch. Instead of navigating through each tab one at a time, you can click the See Survey Specs button. A window will open with all of your study details listed on one page, which you can use to review your study.
Finally, when you run a study with our MTurk Toolkit, we provide several stats you can use to make important study-related decisions. For example, when you click on your study title from the main Dashboard, you will see a dropdown box that displays stats like the average, median, and expected time to complete the study. With these numbers you can see whether it is taking workers longer to complete the study than you estimated and adjust the pay accordingly.
Another important piece of information in the dropdown box is the number of workers who have started your HIT, the number who have been approved or rejected, and the study’s bounce rate. This information can help you determine whether something about your study is unattractive to workers (using the bounce rate) or whether an aspect of your study setup is making it hard for workers to complete the study (using the dropout and completion rates; see our blog on dropout and completion rates for more details).