Landing : Athabascau University

Knewton is cool, but it is very dangerous

At the Edtech Innovation 2013 conference last week I attended an impressive talk from Jose Ferreira on Knewton, a tool that does both large-scale learning analytics and adaptive teaching. Interesting and ingenious though the tool is, its implications are chilling. 

Ferreira started out his talk with a view of the history of educational technology that somewhat mirrors my own, starting with language as the seminal learning technology that provided the foundation for the rest (I would also count other thinking tools like drawing, dance and music as important here, but language is certainly a huge one). He then traced innovations like writing, printing and so on and, a little inaccurately, mapped these to their reach within the world population. So, printing reached more people than writing, for instance, and formal schooling opened up education to more people than earlier cottage-industry approaches. That mapping was a bit selective, as it ignored the near-100% reach of language as well as the high penetration of broadcast technologies like TV, radio and cinema. But I was OK with the general idea - that educational technologies offer the potential for more people to learn more stuff. That is good.

The talk continued with a commendable dismissal of the industrial model of education that developed a couple of hundred years ago. This model made good economic sense at the time and made much of the improvement to the human condition since then possible (and the improvements are remarkable), but it relies on a terrible process: a necessary evil back then that, with modern technologies and needs, no longer makes sense. From a learning perspective it is indeed ludicrous to suggest that groups of people of a similar age should learn the same way at the same time. But there is more. Ferreira skipped over an additional, and crucial, concern with this model of education. A central problem with the industrial model, when used for more than basic procedural knowledge, is not just that everyone is learning the same way at the same time but that they are (at least if it works, which it thankfully doesn't) learning the same things. That is a product of the process, not its goal. No one but a fool would deliberately design a system that way: it is simply what happens when you have to find a solution to teaching a lot of people at once, with only simple technologies like timetables, classrooms and books to help, and a very limited set of teaching resources to handle it. It is not something to strive for, unless your goal is cultural and socio-economic subjugation. Although, as people like Illich and Freire eloquently demonstrated a long time ago, such oppression may be the implicit intent, most of us would prefer that not to be the case. Thankfully, what and how we think we teach is very rarely, if ever, precisely what and how people actually learn. At least, that has been the case till now. The Knewton system might actually make that process work.

Knewton has two distinct functions that were not clearly separated in Ferreira's talk but that are fundamentally different in nature. The first is the feedback on progress that the system provides to teachers and learners. With a small proviso that bad interpretations of such data may do much harm, I think the general idea behind that is great, assuming a classroom model and the educational system that surrounds it remain much as they are now. The technology provides information about learner progress and teaching effectiveness in a palatable form that is genuinely useful in guiding teachers to better understand both how they teach and the ways that students are engaging with the work. It is technically impressive and visually appealing - little fleas on an ontology map showing animated versions of students' learning paths are cool. Given the teaching context that it is trying to deal with, I have no problems with that idea and applaud the skill and ingenuity of the Knewton team in creating a potentially useful tool for teachers. If that were all it did, it would be excellent. However, the second and far more worrying broad function of Knewton is to channel and guide learners themselves in the 'right' direction. This is adaptive hypermedia writ large, and it is emphatically not great. This is particularly problematic as it is based on a (large) ontology of facts and concepts that represent what is 'right' from an expert perspective, not on the values of such things nor on the processes for achieving mastery, which may be very different from their ontological relationships with one another.

There is one massive problem with adaptive hypermedia of this nature, notwithstanding the technical problems that come with the inordinate complexity of the algorithms and mass of data points used here, and ignoring the pedagogical weaknesses of treating expert understanding as a framework for teaching. The big problem is more basic: it assumes there is a right answer to everything. This is a model of teaching and learning (in that order) that is mired in an objectives-driven approach. But my own reaction to Ferreira's talk (formed even as he was speaking), which I assume was meant to teach me about Knewton, self-referentially shows that reaching the right answer is not always the main value in effective teaching and learning. Basically, what he wanted to tell me is clearly not, for the most part, what I learned. And that is always the case in any decent learning experience worthy of the name. In fact, the backstories, interconnections, recursive, iterative constructions and reconstructions of knowledge that go on in most powerful learning contexts are typically the direct result of what might be perceived, by those seeking efficient mastery of learning outcomes, as inefficiency. In educational transactions that work as they should, some of what we learn can be described by learning outcomes, but the real big learning that goes on is usually under the waterline and goes way beyond the defined objectives. While skill acquisition is a necessary part of the process and helps to provide foci and tools to think with, meaningful learning is also transformative, creative and generative, and it hooks into what we already know in unpredictable ways.

So Knewton is reinforcing a model that deals with a less-than-complete subset of the value of education. So what? There's nothing wrong with that in principle, and that's fine if that is all it does. We don't have to listen to its recommendations, the whole Web is just a click away and, most importantly, we can construct our own interpretations and make our own connections based on what it helps to teach us. It gives us tools to think with. If Knewton is part of a learning experience, surely there is nothing wrong with making it easier to reach certain objectives? If nothing else, teaching should make learning less painful and difficult than it would otherwise have been, and that's exactly what the system is doing. The problem, though, is that if Knewton works as advertised, the paths it provides probably are the most efficient way to learn whatever fact or procedure the system is trying to teach us. This leads to the crucial problem: assuming it works, Knewton reinforces our successful learning strategies (as measured by the narrow objectives of the teacher) and encourages us to take those paths again and again. Because it adapts to us, rather than making us adapt ourselves, we are not stretched to find our own way through something confusing or vague, and we don't get to explore the less fruitful paths that sometimes lead to serendipity and, less commonly but more importantly, to transformation - paths that stretch us to learn differently. Knewton, if it works as intended, creates a filter bubble that restricts the range of ways that we have to learn, creating habits of behaviour that send us ever more efficiently to our goals. Fundamentally, learning changes people, and learning how to learn in different ways, facing different problems differently, is precisely what it is all about: mechanical skills just give us better tools for doing that. The Knewton model does not encourage change and diversity in how we learn: it encourages reinforcement.
That is probably fine if we want to learn (say) how to operate a machine, perform mathematical operations, or remember facts, as part of a learning process intended to achieve something more. However, though important, this is not the be-all and end-all of what effective education is all about and is arguably the lesser part of its value. Effective education is about changing how we think. Something that reinforces how we already think is therefore very bad. Human teachers model ways of knowing and thinking that open us up to different ways of thinking and learning - that's what makes Knewton a useful tool for teachers, because it helps to better reveal how that happens and allows them to reflect and adapt.

None of this would matter all that much if Knewton remained simply one of an arsenal of educational weapons against ignorance in a diverse ecosystem of tools and methods. However, that does not match Ferreira's ambitions for it: he wants it to reach and teach 100% of the world's population. He wants it to be freely available and used by everyone, to be the Google of education. That makes it far more dangerous, and that's why it worries me. I am pleased to note that Ferreira is not touting the tool as having value in the teaching of softer subjects like art, literature, history, philosophy, or education, and that's good. But there are those, and I hope Ferreira is not among them, who would like to analyse development in such learning contexts and build tools that make learning in such areas easier in much the same way as Knewton currently does in objectives-driven skill learning. In fact, that is almost an inevitability, an adjacent possible that is too tempting to ignore. This is the thin end of a wedge that could, without much care, critical awareness and reflection about the broader systemic implications, be even more disastrous than the industrial model that Ferreira rightly abhors. Jose Ferreira is a likeable person with good intentions and some neat ideas, so I hope that Knewton achieves modest success for him and his company, especially as a tool for teachers. But I hope even more that it doesn't achieve the ubiquitous penetration that he intends.

Comments

  • Bruno Kavanagh July 9, 2013 - 11:55am

    Glassy-eyed 'let's change the world' types like Ferreira make me physically queasy. Perhaps I just fear (or resent) their ruthless efficiency and effectiveness. God help us all.

    It seems to me the $1m question lies between the terms 'adaptive' and 'adaptable'. If the user's in charge (as in the latter case) I can stomach any suggestions a cyborg might make. Why not? As JD says I can always ignore them (although, in practice, how many would actually do that?)

    An anecdote to illustrate: I went on Amazon the other day to look for some relatively obscure classical CDs. I had a handwritten list of discs I wanted to research, possibly buy. When I logged in the top three recommendations from Amazon were exactly the titles that headed my back-of-a-napkin list. This was the result of no previous search I'd made on these specific titles. Just algorithms, assessing - with ruthless accuracy - my preferences based on previous browsings.

    Efficiency? Convenience? Or a world we don't want to live in? Here's the point: What about the titles I might have found by chance if I'd had to search for myself?

    And, at an upcoming concert, I'll be finding myself talking to fellow enthusiasts who have exactly the same recordings. A bland world - ripe for exploitation by the unscrupulous. Freire would be revolving at CD-speed in his grave (as would Neil Postman and numerous others).

    I think this is the point JD is making (very effectively in my view) about Knewton. The take-home? Adaptive - beware. Adaptable - ok (maybe) if you must. Possibly good. Still thinking...

    Thanks for a great post.


  • Jon Dron July 9, 2013 - 12:56pm

    Thanks Bruno

This is indeed dangerous territory. I recommend Eli Pariser's The Filter Bubble (his site links to the book from various resellers, as well as to other related presentations and resources). It's one of the more balanced and readable variants on the theme, from someone who is quite familiar with what is happening under the hood as well as the broader systemic issues.

I think that adaptable plus adaptive, such as Amazon's recommendations, is not a bad way to go, as long as it still allows for serendipity, doesn't actively stop you from searching further, and is not the only channel that you use. Adaptable adaptivity is better. I am fond of Judy Kay's concept of scrutable adaptation, and similar ideas underpin much of my work in the area. However, as her own work and the explicit collaborative filters used on Amazon (where you can shape your recommendations more carefully) show, it is very hard indeed to get the right balance of control and ease of use. For my PhD I invented a system that solves the problem in almost every way, apart from the minor inconvenience that it is virtually unusable, so the effort outweighs the benefits.

Nothing much beats a real, knowledgeable human being to help guide your learning journey/journey of discovery, but it is very hard both to find the right human(s) and to deal with the cost, errors and inefficiencies that inevitably arise. Tools like Knewton and Amazon, and for that matter Google Search, are powerful and effective, leveraging the wisdom of the crowd pretty well, putting aside problems like the Matthew Effect and preferential attachment for now. I think that filter bubbles are here to stay and they are going to get bigger so, though we should try to curb their wilder excesses, we must learn to live with them and learn to fool them into not fooling us.

    Jon