Development of Civic Learning Measures


Traditional metrics focused on civic knowledge are insufficient to accurately evaluate the impacts of deeper civic learning across its three domains: (a) civic attitudes and dispositions; (b) civic knowledge; and (c) civic skills. With few exceptions, extant measures of civic attitudes and, especially, civic skills are insufficiently backed by evidence of their validity or are not fully aligned with the DKP’s learning goals. Although efforts in the field have led to an expansion of the instruments and scales available to measure students’ civic learning (Tedeschi et al., 2021), compelling validity evidence in support of the intended uses and interpretations of scores remains scarce (Flake, 2021). Researchers and practitioners need valid ways to measure student learning to build a credible evidence-based foundation for improving civic education.

DKP researchers have developed an assessment framework and initial measurement toolkit, and they are presently gathering additional evidence to comprehensively address the criteria for validity described in the 2014 Standards for Educational and Psychological Testing (American Educational Research Association et al., 2014).

The additional evidence will ensure that items in the DKP’s current measures are appropriately understood by students and that the measures are sensitive to variation in the targeted learning outcomes. In addition, results of these studies will clarify sources of error affecting scores on the outcome measures, making it possible to modify the measures and measurement protocols to attain sufficient accuracy to support rigorous evaluation of civics curricula. This work will raise the bar for measurement in civic education research, and will equip a wider community of researchers and educators with the high-quality measurement tools they need to advance the field. 

Studies currently underway include:

  • Cognitive interviews, a research method that explores and analyzes students’ thought processes as they engage with test instruments, to assess the alignment between students’ understanding of the items and the targeted constructs.
  • A “generalizability study” (Brennan, 2001), a well-established research framework in educational and psychological measurement that provides empirical evidence about the optimal numbers of items, raters, and occasions the measurement toolkit should include.
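To illustrate the logic behind a generalizability study, the sketch below estimates variance components for the simplest crossed design, persons × items, and computes how the generalizability coefficient changes with test length. This is a minimal, generic illustration of G-theory calculations (Brennan, 2001), not the DKP's actual analysis; the function names and simulated scores are hypothetical.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a persons x items (p x i) crossed design.

    scores: 2-D array, rows = persons, columns = items.
    Returns (var_p, var_i, var_pi_e), estimated from expected mean squares.
    """
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    # Sums of squares for the two-way layout without replication.
    ss_p = n_i * ((person_means - grand) ** 2).sum()
    ss_i = n_p * ((item_means - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_i

    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_pi_e = ms_res                         # interaction + error (confounded)
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person (universe-score) variance
    var_i = max((ms_i - ms_res) / n_p, 0.0)   # item-difficulty variance
    return var_p, var_i, var_pi_e

def g_coefficient(var_p, var_pi_e, n_items):
    """Relative generalizability coefficient for a test of n_items items."""
    return var_p / (var_p + var_pi_e / n_items)
```

Comparing the coefficient at different values of `n_items` is what lets a G-study answer the practical question of how many items (or raters, or occasions, in richer designs) are needed for scores accurate enough to support rigorous evaluation.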

Funded by the Spencer Foundation, Carnegie Corporation of New York, the Mellon Foundation, and Lucas Education Research

Contact: Dr. David Kidd (