Using EMQs and MCQs to assess higher order thinking and knowledge in physiology

C.P. Sevigny, Department of Physiology, The University of Melbourne, Parkville, VIC 3052, Australia.

Teaching and assessing ever-larger cohorts is a challenge faced by a majority of universities in the current climate. Academics must respond to this challenge while maintaining quality of teaching, student engagement, the integrity and relevance of assessment components, and higher-level learning outcomes in an environment where student-teacher ratios often exceed 500:1.

Within the Department of Physiology at the University of Melbourne, we face class sizes ranging from 440 (Bachelor of Biomedicine) to 1200 (Bachelor of Science) students per year at second-year level, and 250-400 students (combined degrees) at third-year level. Marking written assessment tasks for these cohorts can require over 250 hours for a single subject, a load that often falls to one academic and cannot be reconciled with university marking deadlines. The alternative of setting machine-scanned multiple-choice exams is often dismissed as testing only basic factual recall, offering insufficient challenge to assess higher-level understanding and learning outcomes.

In response, we use an exam format of extended multiple-choice questions (EMQs), which test students' knowledge, understanding, and capacity for application. EMQs provide options A-Z on the scan sheet, and these 26 options can be used for labelling graphs (e.g. lettered points, durations, curves, axis values), labelling diagrams (e.g. “inhibition of which neuron (A-H) would result in…”), supplying the answer list for embedded-answer (cloze) questions, and a variety of other formats that challenge students yet can still be marked by machine.
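
To make the marking mechanics concrete, scoring such a sheet reduces to comparing each student's column of A-Z responses against an answer key. The Python sketch below is a minimal illustration under assumed data structures; the key and sample responses are invented for demonstration, not drawn from our actual scanning pipeline.

    # Minimal sketch of scoring A-Z EMQ responses against a key.
    # ANSWER_KEY and the sample responses below are hypothetical.
    ANSWER_KEY = ["C", "H", "A", "Q", "F"]  # one letter (A-Z) per question

    def score_student(responses, key=ANSWER_KEY):
        """Return (total score, per-question 0/1 correctness list)."""
        correct = [int(r == k) for r, k in zip(responses, key)]
        return sum(correct), correct

    # Example: one student's scanned responses
    total, per_question = score_student(["C", "H", "B", "Q", "F"])
    print(total, per_question)  # 4 [1, 1, 0, 1, 1]
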
By using a mixture of these question types, we have successfully assessed higher-level learning outcomes in large cohorts while minimising marking time at second- and third-year level. The format also ensures objectivity of marks, and hence consistency across cohorts, which is difficult to achieve when marking 700 essay questions in a row. Scanning further yields detailed reports on question quality (point-biserial correlation) and difficulty. Students enjoy the format: they spend the majority of their time thinking instead of writing; they feel they have been assessed fairly and accurately; they receive their results more quickly; and they can receive feedback as quantitative statistical comparisons with the rest of the cohort on individual questions or topic areas.
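
For the question-quality reports mentioned above, the point-biserial correlation relates correctness on one item (0/1) to each student's overall performance; a high value marks an item that discriminates well between stronger and weaker students. The sketch below uses SciPy's pointbiserialr on an invented 0/1 response matrix; the data are illustrative only.

    # Sketch of item analysis: difficulty and point-biserial discrimination.
    # The response matrix is invented for illustration; in practice it
    # comes from the scanned answer sheets.
    import numpy as np
    from scipy.stats import pointbiserialr

    # rows = students, columns = questions; 1 = correct, 0 = incorrect
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 0, 1],
    ])

    totals = responses.sum(axis=1)

    for q in range(responses.shape[1]):
        item = responses[:, q]
        difficulty = item.mean()  # proportion correct (facility index)
        # Correlate item correctness with the rest-of-test score;
        # excluding the item itself avoids inflating the correlation.
        r, p = pointbiserialr(item, totals - item)
        print(f"Q{q + 1}: difficulty={difficulty:.2f}, point-biserial={r:.2f}")
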

While this format cannot assess some learning outcomes, such as the capacity to write scientifically or to work collaboratively, we have made an effort to teach those skills in parallel subjects that enjoy more modest cohort sizes (200-300 students).