Do Multiple-Choice Exams Really Measure Learning?
- Yusuf Öç
Outstanding Paper Award and Why This Matters for Assessment Design
I am delighted to share that our paper, Comparing the effectiveness of multiple answer and single answer multiple choice questions in assessing student learning, received an Outstanding Paper Award. If you would like to read the original paper, you can access it here (Link) free of charge thanks to Bayes Business School and City St George's University of London.
Why “Multiple-Answer” Questions May Lead to Deeper Understanding
Multiple choice questions remain one of the most widely used assessment tools in higher education. They are efficient, scalable, and easy to grade, especially in large classes and online programmes. Yet they are often criticised for encouraging surface learning, rewarding memorisation, and failing to capture deeper understanding. These concerns have become even more pressing in the age of generative AI. If a tool can instantly provide the correct answer, what exactly are we assessing when students sit a multiple choice exam?
In our study, we explored whether changing the format of multiple choice questions can meaningfully change what and how students learn. Specifically, we compared traditional single answer multiple choice questions with multiple answer questions, where more than one option can be correct.

How the study was set up
The research was conducted in a fully online MSc Marketing programme. Students were randomly split into two groups and completed two summative quizzes during the module. The key feature of the design was that the questions themselves were identical, but the answer format differed. In one version, students selected a single correct answer. In the other, the same question required students to identify multiple correct answers. Three weeks later, the formats were reversed across the two groups. We also collected data on study time, perceived workload, perceived difficulty, course interest, and students’ experiences, supported by focus groups to understand what was happening behind the numbers.
What we found
Multiple answer questions produced lower average grades and were perceived as more difficult, with less favourable initial attitudes. However, study time and perceived workload did not differ significantly between the two formats. Students did not work less. Instead, qualitative evidence showed they studied differently. They reported being pushed away from memorisation and toward understanding, integration, and careful evaluation of alternatives. In short, the multiple answer format increased the depth of students' engagement with the material rather than the amount of time they spent on it.
Why this matters for practice
Assessment shapes learning behaviour. Students study strategically based on what they expect to be tested. Single answer questions can unintentionally encourage a recognition mindset, where students focus on spotting one correct option. Multiple answer questions shift the task toward evaluation and judgment, requiring students to think through why several options might be correct and why others are not. This is particularly valuable at postgraduate level where we want students to analyse, apply, and integrate concepts rather than only recall them. The format can also help respond to academic integrity concerns, since questions that require interpretation and comparison are harder to outsource to quick lookup strategies.
How Can This Be Used in Practice?
The key takeaway from the study is not that one format should replace the other. Rather, the evidence supports a hybrid assessment approach.
A well-designed exam might:
Use single-answer questions to assess foundational knowledge and core concepts
Use multiple-answer questions to assess deeper understanding, application, and critical thinking
Crucially, students need preparation. When multiple-answer questions are introduced without guidance, anxiety increases. However, formative quizzes, clear instructions, and worked examples dramatically reduce this effect.
From a practical perspective, educators can:
Introduce multiple-answer questions gradually
Use low-stakes formative assessments to familiarise students with the format
Clearly communicate how answers are scored and what the questions are designed to test
When students understand the purpose of the assessment, they are far more likely to engage with it productively.
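To make the scoring transparent, it helps to show students the exact rule in advance. The paper does not prescribe a particular scheme, but a common choice for multiple-answer questions is "right minus wrong" partial credit with a floor at zero, so that selecting everything never pays off. The sketch below is one illustrative implementation of that scheme; the function name and scoring weights are assumptions, not the study's method:

```python
def score_multiple_answer(selected, correct):
    """One possible partial-credit rule for a multiple-answer question.

    Each correctly selected option earns an equal share of the mark;
    each incorrect selection deducts the same share. The result is
    floored at zero so indiscriminate guessing cannot gain marks.
    (Illustrative scheme only, not the scoring rule from the paper.)
    """
    selected, correct = set(selected), set(correct)
    per_item = 1.0 / len(correct)          # value of each correct option
    hits = len(selected & correct)         # correct options chosen
    false_picks = len(selected - correct)  # incorrect options chosen
    return max(0.0, (hits - false_picks) * per_item)

# Example: two correct options, "a" and "c"
full = score_multiple_answer({"a", "c"}, {"a", "c"})   # 1.0
half = score_multiple_answer({"a"}, {"a", "c"})        # 0.5
none = score_multiple_answer({"a", "b"}, {"a", "c"})   # 0.0 (hit cancelled by miss)
```

Publishing a rule like this alongside the formative quizzes gives students a concrete answer to "how are these marked?", which is exactly the kind of clear communication the study found reduces anxiety.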
A simple implementation blueprint
If you want to try this quickly, here is a practical structure that works well in many modules:
Start with a short formative quiz that includes both formats
Use single answer questions for core definitions and key principles
Use multiple answer questions for scenarios, diagnostics, and applied decision making
Provide brief feedback explaining why each correct option is correct and why distractors are wrong
This keeps efficiency while increasing the likelihood that assessment supports deeper learning.
Feel free to reach out to me if you want to try this in your own teaching.