Landing : Athabascau University

learner-teaching-learning analytics

I've been having some interesting discussions in Banff this week with folks interested in 'learning analytics'. I put it in quotes because I'm not convinced that it is a) a distinct field or b) one thing.

Ignoring issues of massive overlaps and shared values with other fields (such as data mining, collaborative filtering, adaptive hypermedia, natural language processing, learning design and evaluation, and so on), which make it hard to distinguish at times, it seems to me that there are at least three subfields:

learner analytics: used by admins, policy makers, governments and so on to see what learners are doing with a view to taking some action at a pragmatic or policy level as a result. May also be used by teachers to monitor and understand learners and their needs. Rarely, but potentially, of use to learners.

teaching analytics: looking at the success or otherwise of teaching interventions - courses, assessments, teaching acts, content construction, learning design, etc, with a view to changing the teaching process to make it better. Pretty much exclusively the domain of those involved in the teaching process like teachers and instructional designers.

learning analytics: looking at how people are learning, including construction of artefacts, interactions with others, progression, etc, with a view to taking direct action to improve it, usually (but by no means necessarily) by and for the learner.

I care about learning analytics and see great practical value in teaching analytics. Analysing learning and teaching is almost entirely about helping people to learn and, while it may be poorly done, the intentions are almost all aimed at making learners' lives better. Analysing learners involves some murkier areas: it may have many motivations, including potentially risky ones like implementing efficiencies, targeting for marketing, allocating resources and so on as well as clearly good things like identifying under-represented groups or at-risk learners. I suspect that it may become the most popular analytics domain in education but, because of the dangers, it demands more serious cross-disciplinary and ethically well-considered research than the others. 

Comments

  • Tanya Elias March 4, 2011 - 6:17pm

    Hi Jon,

    Originally I had thought that 'learning analytics' meant the much smaller definition you've noted above, but it is clear to me that it is going to be used to describe all of the above related, but very different, aspects.

    I also agree that learner analytics will be the most appealing among public ed admins and corporate heads alike.  When I heard the promise of "being able to identify the students most at risk of dropping out in the next five days" touted by Phil Ice from American Public University, I knew exactly what my company would want to explore... but I also have questions: How do you know you've identified the right students, and do interventions make a difference? (I guess if you don't intervene and the students you predicted would drop really do, it would prove you were right...)

    This line of thinking has already led me to ask some questions: If you know which students (or employees) are likely to be unsuccessful, isn't the shortest path to improved stats to refine the recruitment process?  If we identify the people likely to drop, does it become a self-fulfilling prophecy?  Or do we dump so many resources into them, only to allow other students to fall through the cracks?  I don't have any answers at this point, but I do get the feeling that it is the type of analytics that could definitely be used for either good or evil, depending on the underlying theories, assumptions and motivators.

  • Jon Dron March 15, 2011 - 1:00am

    I fear the hard machine. Respect it too. Maybe love it a little. Those are really good points. We need to be so, so careful about where the hard machine leads us. I am much keener on machines that give us flexibility of use and transparency of operation as a rule, despite the fact that they also involve harder work to make them do what we want. My big fear here is that such learner analytics will be used unreflectively or (worse, because harder to fix) automatically. I'm also a little worried about what those in charge might do with teaching analytics, especially if they get aggregated and lead to university analytics. Even if the machine is soft and results are treated with caution, each higher level of aggregation and abstraction has a tendency to lead to dangerous and unwarranted generalisations - at least, that seems to be a general rule that I have aggregated from lower levels of abstraction. What started as a useful tool for guidance can so easily become an instrument of control.