Top 10 stupid mistakes in design of Multiple Choice questions

http://donaldclarkplanb.blogspot.co.uk/2015/08/top-10-stupid-mistakes-in-design-of.html

Nice little tutorial from Donald Clark. Mostly very sound advice on good multiple-choice question design, with a very clear focus on formative rather than summative assessment, and with a good emphasis on providing useful feedback rather than just testing. Also, some sensible recommendations about avoiding dumb questions.

I'm far from convinced that MCQs have any value outside of a few quite limited learning contexts. In anything interesting, there is a risk of focusing on things that are easily tested rather than things that actually matter. For instance:

A streetcar is heading out of control towards a fork in the tracks, and only you have control over the points that determine which way it will go. If it goes down one track, it will kill a baby. If it goes down the other, it will kill a famous philosopher. What do you do?

a) kill the baby

b) kill the philosopher

c) do nothing

d) a type of ice cream

Like any learning intervention, an MCQ should always be a deliberately and carefully chosen activity made to fit a clearly identified and constructively aligned need. Often, particularly if you are hoping to teach something really useful, they just don't work for that, especially if you have to work hard to come up with discerning questions. The question above would actually be made even worse if the learner were (more plausibly and 'correctly') asked to identify an ethical system that would imply one or the other of the alternatives. While it might show that they knew about those philosophical stances, it would reduce critical, ethical and aesthetic issues to trivial matters of fact, which could easily lead learners to false beliefs about their own knowledge while bypassing the thinking process that would actually be evidence of genuine competence and understanding. At least as bad, it would close off the possibility of thinking laterally about the problem (e.g. what if you stepped into its path yourself, or pushed the baby at the philosopher, and what about the people on the streetcar?), which, though not what the setter of the test might intend, would be really interesting to explore.

I'm similarly bothered by one of Donald's examples, on the price of a baseball bat. This one is an old chestnut that anyone who has seen it before will 'solve' in a second, but that anyone who hasn't will almost certainly get wrong. What is being tested here? Much more importantly, what is it teaching? The only answers I can come up with lead me to worry greatly about the motivation of the test-setter, and to worry much more about the effect it will have on the learner. Cognitive dissonance can be useful, sure. But this is just a cheap power play. It's very much like the Socratic Method, and it is wrong for the same reasons: it's bullying, plain and simple. A bad lesson to learn.
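
(For anyone who hasn't seen it, the chestnut in question is, I assume, the classic bat-and-ball problem: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball, so how much does the ball cost? The intuitive answer is ten cents, and it is wrong: if the ball costs x, the bat costs x + 1.00, so x + (x + 1.00) = 1.10, which gives 2x = 0.10 and x = 0.05. Five cents.)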

That said, if you do genuinely need to provide MCQs, this is good advice on how to do it.


Comments

  • Mary McNabb November 19, 2015 - 5:31pm

    I guess multiple choice questions are an efficient way of "measuring knowledge". There was a prof from U. Alberta who studied the MCQs on Alberta Education's standardized tests (Grades 3, 6, 9 and diploma exams in Grade 12) and came to the conclusion that they tested reading more than content. To prove his point, he showed us the patterns in the tests and then gave us questions from a Math 31 test. Even though most of the audience had not taken Math 31 (calculus), we were able to answer the questions and get about 60% based on our knowledge of how the test was constructed. The strategies were very similar to the strategies on the TV show Who Wants to Be a Millionaire, so that's how we taught test-taking for achievement tests.

  • Jon Dron November 19, 2015 - 7:20pm

    Nice example, Mary. It suggests to me that they are often just a good way to measure knowledge of how to do MCQs!

    I think objective tests are mostly OK for some basic fact-oriented subjects, as long as they are used solely as a formative learning tool and not for accreditation. Even when you use tricks like confidence weightings to reduce the benefits of guessing to a minimum, they still tell us very little about what people can actually do, but they can in principle be a useful way for learners to figure out for themselves what they know. Unfortunately, even when the marks count for nothing and they are the only people who will ever see the results, many learners still tend to try to game the system. I'm not sure whether that is in our nature or something we have learned, but it's weird.
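
    To illustrate what I mean by confidence weighting, here is a minimal sketch of one possible scheme (the linear reward and penalty are my own illustrative choices, not any particular standard): a correct answer earns the confidence the learner staked on it, a wrong answer loses the same amount, and so confident guessing has a negative expected payoff.

        def confidence_weighted_score(correct: bool, confidence: float) -> float:
            """Score one MCQ response: earn the staked confidence if the
            answer is correct, lose it if wrong. confidence is the
            learner's self-rated certainty, in [0, 1]."""
            if not 0.0 <= confidence <= 1.0:
                raise ValueError("confidence must be between 0 and 1")
            return confidence if correct else -confidence

        # With four options, a pure guesser is right 25% of the time, so
        # guessing at full confidence has expected score
        # 0.25 * 1.0 + 0.75 * (-1.0) = -0.5: guessing no longer pays.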