
The death of the exam: Canada is at the leading edge of killing the dreaded annual ‘final’ for good | National Post

By Jon Dron, 3 July 2015 @ 12:41pm
http://news.nationalpost.com/news/canada/the-death-of-the-exam-canada-is-at-the-leading-edge-of-killing-the-final-for-good

Good news!

There's not much to disagree with in this article, which reports on some successful efforts to erode the monstrously ugly blight of exams in Canada and beyond, along with some of the more obvious reasoning behind the initiatives to kill them. They don't work, they're unfair, they're antagonistic to learning, they cause pain, etc. All true.

Comments

  • Hongxin Yan July 3, 2015 - 2:49pm

    As long as we have a good way to evaluate students' performance, it doesn't necessarily have to be a final exam. It all depends on the learning outcomes of the course. For example, if you want students to remember something and you don't use an exam, you'd better provide other effective alternatives for students to show it; otherwise, don't design such learning outcomes into the course.

    So, the main question might not be whether we need a final exam, but rather what our students really need to learn or master from our courses.


  • Jon Dron July 3, 2015 - 5:05pm

    Thanks Hongxin,

    It is absolutely true that memorization of facts is an important aspect of quite a lot of skills. There are occasions where it is impractical or impossible to rely on others for help or information, where we just need to remember stuff: surgery, sailing, even making a philosophical argument, for instance. There are likely many more occasions where I would consider it a sign of gross incompetence not to talk with others or look things up online. But, even if it's memory we are assessing, it's the application of what students remember that we need to assess, and that has to be in an authentic context, or at least the nearest thing to it that we can afford or safely manage. There is less than no point in assessing people's ability to regurgitate facts in a setting so far removed from any realistic context that it tells us nothing about their actual competence at all. Quite apart from their terrible effects on intrinsic motivation, the ways they prevent smart people from being recognized, and their outlandish costs (always additional to learning and always detrimental to it - a double whammy), exams don't even do what they are supposed to do most of the time.

    There are plenty of exam-like assessments that do provide quite authentic conditions - doing things in labs, talking about the subject with an expert, operating simulations, writing code on a net-connected computer, writing to a deadline, doing things in studios, building stuff in workshops, etc - which of these are appropriate in any given context depends on the subject. There are even some traditional written exam formats that make a bit of sense - my colleague Richard Huntrods's exemplary use of exams to reflect on the work done in the course, for example, which is a useful learning activity in itself and which is relatively unthreatening and so reduces some of the more obvious harmful effects. My target is the typical unseen, proctored, written exam of which we are unaccountably over-fond in both higher education and schools. We have to get rid of those or, at the very least, make them justified rare exceptions rather than the default method of assessment.

  • Hongxin Yan July 7, 2015 - 10:38am

    Thanks Jon.

    Your points make a lot of sense to me. I like the idea of assessing memorization through application. Actually, I think competency-based learning is the way of the future for education, and a few universities are starting to do it or trying it (e.g. Western Governors University). This might be a good model for AU, since the majority of AU's students are adults who have more immediate opportunities or needs to apply what they have learned. So, project-based work, problem-solving, case studies, research papers, product design, portfolios, presentations, discussions, ... all look good to me for this purpose.

    One thing still bothers me. In some subjects, we need students to reach a certain level of skill. For example, in math, we might want students to solve a problem in 5 minutes, not a whole hour. So, if we evaluate how skillful a student is, timing should be an important factor (?). I am not sure what the alternative to an exam or quiz would be for doing that. Thoughts?

  • Jon Dron July 7, 2015 - 12:03pm

    I strongly agree - WGU has an extremely sound basic model that clearly works, economically and pedagogically. We do have most of the pieces - challenge processes, independent study courses, PLAR, etc - and have been doing this in pockets for decades, but we should seriously consider building whole programs and perhaps even the whole university this way.

    If (and only if) timing matters in an authentic setting, then by all means time it! It's absolutely fine as long as the conditions are realistic, the problems are genuine and situated, and the task and timing match closely what competent individuals will actually need to do in the real world. There are lots of ways you could assess that, from lightweight reflections (just ask how long it took), to webcam-recorded evidence, to oral questions via a webinar, and so on. But it would be necessary to think carefully about its authenticity and about what it actually shows. Speed only matters in an applied context, so students should be doing this under real-world conditions or the nearest thing we can provide to that. And, if a problem should be solved in 5 minutes, then that's roughly how long the assessment should take. And unless, in real life, people have to solve a succession of such questions in 3 hours, we should not require it of them in an exam. Equally, if a problem should take days (many programming tasks are like that), then that's how long we should allow.

    I recall an interview a long time ago where I was given a real-life programming task the company needed done, to be completed in a few minutes using MS Access, a system I had not touched for years and had barely played with even then. I failed the task abysmally but I was offered the job anyway, because they were observing my problem-solving approach while I did it - which included chatting with them, looking stuff up online, making intelligent use of help files, etc - and that is what they were actually testing. I suspect that is the kind of thing we would want to assess in a mathematician, rather than the results on an exam paper.

    As long as it is authentic to the task and/or helps people to learn (it is constructively aligned), I have no problems with any particular form of assessment. The trouble with sit-down written exams of the sort we habitually use is that, with very few exceptions, they are not authentic and they make a negative contribution to the learning process.

  • Hongxin Yan July 7, 2015 - 12:47pm

    Very good points, Jon.