Personalization in Lumen’s “Next Gen” OER Courseware Pilot

http://opencontent.org/blog/archives/3965

I always enjoy reading posts by David Wiley. This is a good one on the progress of Lumen Learning, but the main reason I am bookmarking it is that it offers one of the clearest explanations I have seen of the central problem with far-too-common naive approaches to personalized learning. David uses the example of Google's seldom-used 'I'm feeling lucky' button to explain why having a machine (or, as he puts it, 'a passionless algorithm') make learning choices for you is normally a bad idea, even if those choices are pretty likely to be good ones from a short-sighted, objectives-based perspective.

I'd go a bit further. Having a human make those choices for you can be equally bad for learning. While human judgement might lead to better choices than a dispassionate algorithm, the problem in learning is not so much one of making the best choices to reach an objective as one of learning how to make those choices yourself. There is a risk that careless use of analytics by teachers to lead students in a particular direction might simply substitute a human for a machine. Beyond the most trivial of skills (not to trivialize trivial skills), effective teaching - the stuff that persists and transforms - is not about making choices on behalf of a learner. It is much more about provoking and responding (and a host of other things like caring, nurturing, challenging, soothing, and inspiring, none of which can be done well by machines).

Having teachers make choices is not what David is talking about, though. He rightly emphasizes the importance of engaging in 'good old-fashioned conversations', which are the very opposite of teacher control, and of simply using models from the machine to help inform those conversations. This is great. The more you know about someone, the richer the conversations can be and, as an expert with a good understanding of the model, a teacher should be able to interpret it wisely - an aid to decision-making, not a decision-maker in itself.

I'm not so sure about feeding the model back to the learner directly, though. In all but the most trivial of models there are some big risks of misapprehensions, misdirection, missing parts, and misattributions. Any model is just that - a simplification and abstraction of a much more complex whole. As long as the learner understands it that way, you would think all should be fine, but it is not so simple.

For example, I was given one of those dreadful fitness tracker devices that uses just such a simple model. It miscounts steps, fails to understand the concept of cycling, sailing, swimming, playing a guitar, or even a standing desk, but nonetheless continues to present believable-looking statistics about my health and even tells me, in pure Skinner fashion, to get up and jog, without having the slightest idea about the state of my knees or ankles, let alone my distaste for jogging. I completely understand the crude and ugly behaviourist reward/punishment pedagogy it attempts to inflict on me, I am fully aware that it is often hundreds of percent wrong about my activity, and I completely get the limitations of the model. But it still draws me in. No matter how much I can intellectually explain that there is nothing inherently meaningful about it counting 500 or 15,000 steps in a day, those reassuring graphs affect me, and not in a good way. Sometimes I have found myself walking places in order to reach the machine's target when I would otherwise have cycled (a much healthier alternative), and congratulating myself on a nice-looking graph when I know that all I have been doing is playing the guitar (which the machine identifies as walking - maybe it's my foot tapping). It's a sure sign of extrinsic motivation when, even though I am the only one who knows or cares, I cheat. Being aware of limitations is not enough.