AuPS Programme

Summative Assessments: when is enough too much?

D.A. Saint and A. Elliott, School of Medical Sciences, University of South Australia, Adelaide, SA 5000, Australia.

Introduction: In most of our courses, we subject the students to a variety of summative assessments, often with the assumption that this gives a better (or fairer) “grading” of the students. Here, we consider this assumption, and discuss how many summative assessments are needed to properly assess a student’s learning.

Methods: We examined the grade history of 130 students in 2nd year physiology. The course includes six summative assessments, plus additional tutor and peer assessment. We examined student performance in each summative assessment worth 3.5% or more of the course grade [i = individual score, g = group score]: a research application form (g) worth 3.5%, a literature review (i) worth 8.75%, a final report (i) worth 7%, tutorials (i) worth 15%, a group presentation (g) worth 5.25%, and a final exam (i) worth 50%. We measured the degree of correlation between grade scores in each of these assessment tasks and the overall result for the semester.
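The correlation analysis described above (a Pearson correlation between each assessment score and the overall semester result) can be sketched as follows; the student scores here are invented for illustration and are not the study data.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

# Hypothetical scores for five students (illustration only, not the study data):
lit_review = [6.0, 7.5, 5.0, 8.0, 6.5]       # literature review, out of 8.75
overall    = [62.0, 78.0, 55.0, 85.0, 70.0]  # overall semester result, out of 100
r = pearson_r(lit_review, overall)
```

The same function applied pairwise across all six assessment scores would reproduce the kind of correlation matrix summarised in the Results.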

Results: Group assessments had the lowest correlation (r < 0.2) with overall course score. Moderate to strong correlations with overall course score were observed for the individual literature review (r = 0.54; P < 0.0001) and the final report (r = 0.60; P < 0.0001). The strongest correlate of final course score was exam performance (r = 0.94; P < 0.0001). Comparison of exam score and final course score showed a mean bias of −8.8%, with limits of agreement from −20.3% to 2.6%. Moderate correlations were observed between tutorials and final report (r = 0.47; P < 0.0001), tutorials and final exam (r = 0.57; P < 0.0001), and final report and exam (r = 0.43; P < 0.0001). Only weak correlations (r < 0.3) were observed between all other pairs of assessments.
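The mean bias and 95% limits of agreement reported above are the quantities produced by a Bland–Altman-style agreement analysis (bias ± 1.96 SD of the paired differences). A minimal sketch, again with invented scores rather than the study data:

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """Mean bias and 95% limits of agreement between two paired score lists."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired exam and final course scores (percent), illustration only:
exam   = [55.0, 70.0, 62.0, 80.0, 45.0]
course = [63.0, 76.0, 71.0, 84.0, 58.0]
bias, lower, upper = limits_of_agreement(exam, course)  # negative bias: exam below course
```

A negative bias, as in the results, indicates that exam scores sat systematically below final course scores for these students.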

Discussion: If the objective of summative assessment is to provide a “ranking” of students, then multiple assessments appear unnecessary, since scores on the larger assessments are significantly correlated with one another and with the overall result. Moreover, final exam score is so strongly correlated with final course score that the exam alone might serve as a single assessment for ranking students. On the other hand, one must consider the principle that assessment drives learning; on this basis, perhaps several assessments are indeed needed. The balance between purely “ranking” students and facilitating student learning needs to be considered when setting multiple assessments.