Citizen scientists, volunteers who help with data collection and other tasks, play an increasingly important role in science research. But this research requires skilled hands. How do we know that a citizen scientist has the necessary skills to support robust science findings?
Dr. Cathlyn (Cat) Stylinski, a researcher at the University of Maryland Center for Environmental Science’s Appalachian Laboratory, is leading a team of collaborators to address this question using “embedded assessments.” The project is supported by a $1.98 million four-year National Science Foundation (NSF) grant, and the findings will inform citizen science across the U.S. and beyond.
Just like in school, the abilities of volunteers in citizen science projects need to be assessed. “But most assessments are formal tests, and are not going to be very appealing to folks who are contributing their free time to support science,” said Stylinski, an expert in public engagement with science. So, she and her colleagues have been exploring embedded assessments, which can be seamlessly integrated into volunteers’ regular activities. “They might play a game as part of the training, which is fun and helps them gain confidence, but also shows us how well they can perform their science task,” explained Stylinski.
The problem is that developing embedded assessments requires time and resources that many citizen science efforts lack. Stylinski and her colleagues received the new NSF grant to explore innovative strategies to give citizen science leaders easier access to this assessment tool.
Under a previous NSF grant, the team learned that many citizen science projects do not assess their volunteers’ skills, which can include identifying birds, counting butterflies, measuring plant height, building proteins, classifying gravitational waves—even posing science questions and hypotheses for research. And those that do assess often only use casual observations or conversations, which can lead to an inaccurate view of volunteers’ abilities. The team developed embedded assessments for several pilot projects and discovered that they were very useful—unearthing weaknesses in volunteers’ skills and in the training that sought to support these skills. But they also learned that embedded assessments took significant time and expertise to create, well beyond the capacity of many citizen science staff.
Stylinski and her fellow researchers will use the current NSF grant to try to develop common embedded assessment tools that could be used by many different projects. They will develop these tools with the help of 10 citizen science project leaders from around the country. The team will also work with five additional leaders to change the way volunteers’ data is analyzed to see if their skills are improving over time. “Our collaborative approach has the benefit of building capacity and community among the projects, while also creating resources that consider many different needs and perspectives,” said Stylinski.
She and her colleagues are interested in learning whether either approach will streamline the embedded assessment process, and, if so, what each might tell us about the skills of citizen scientists. Ultimately, their research seeks to make embedded assessment accessible to more projects, which in turn can help improve science literacy among volunteers while supporting the scientific integrity of citizen science efforts.
Stylinski’s partners on this NSF grant are Karen Peterman, Karen Peterman Consulting; Rachel Becker-Klein, Two Roads Consulting; Andrea Wiggins, University of Nebraska Omaha; Tina Phillips, Cornell Lab of Ornithology; and Amy Grack-Nelson, Science Museum of Minnesota.