Instructional quality of Massive Open Online Courses (MOOCs)

http://www.sciencedirect.com/science/article/pii/S036013151400178X
This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs - the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.  
I like this paper but I have some fairly serious concerns with the way this study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in using prescriptive criteria to identify 'good' instructional design practice, then treating them as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al. use David Merrill's well-accepted abstracted principles for instructional design to identify the kinds of activities that should be present in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process - for example, in an apprenticeship model, integration might be a natural part of learners' lives, while in an open 'by negotiated outcome' course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it's not bad at all, even though it might show some courses as 'bad' when they are in fact 'good'.
The authors add five further criteria, abstracted from literature relating rather loosely to 'resources': expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good. Intrinsic feedback (e.g. falling off a bike or succeeding in staying upright when learning to ride one) is almost always better than any expert feedback, though it can be enhanced by expert advice. None of the rest of these 'resources' criteria is essential to an effective learning design. They can be very useful, for sure, although that depends a great deal on context and on how they are used, and there are often other things that matter as much or more in a design: support for reflection, for example, or scope for caring or passion to be displayed, or design that ensures personal relevance. It is worth noting that Merrill himself observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than the principles identified as common.

It ain't what you do...

Like all things in education, it ain't what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post hoc because they ignore all this. They are very reasonable starting frameworks for a designer, encouraging focus on things that matter, and they can make a big difference in the design process, but real-life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word 'instructional' because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds, colour theory, etc. can help the novice painter produce something acceptable, but the fact that a painting disobeys these 'rules' does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and the context in which it will be delivered. Thanks to their closer resemblance to traditional courses (from which, it should be clearly noted, the design criteria are derived), this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research (and Athabasca University's long practice) have shown, great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson's well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. These do not fall naturally out of the design principles at all, so I don't understand why they are even mentioned; it seems an arbitrary criterion with no validity or justification beyond the fact that learning outcomes are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiating the two kinds of MOOC was sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not 'at the end of this course you will be able to x'. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It's a process thing: helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization as a criterion - cMOOC content is organized bottom-up by learners, so it is not very surprising that such courses lack careful top-down planning; that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn't incorporate use of 'collective knowledge' (the authors' term for cooperative sharing and knowledge construction) because, without that, it simply isn't a cMOOC: it's there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options and paths, and choosing those that suit their needs. The big point in both cases is that, in a cMOOC, the teacher-designed course does not contain the content: beyond the process support needed to build and sustain a network, any content provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content is created. With that in mind, it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated, so of course a problem focus is often simply not there in the static materials supplied by the people running it. cMOOCs do not tell learners what to do or how to do it, beyond the very broad process support needed to help those networks accrete. It would therefore be more than a little weird if their designed content adhered to instructional design principles derived from teacher-led, face-to-face courses because, if it did, they would not be cMOOCs. Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few participants will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I'd have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and are actively avoiding. It's like criticizing cheese for not being chalky enough.

It's still a good paper though

For all that I find the conclusions of this paper very arguable and the methods highly open to criticism, it does provide an interesting portrait of MOOCs through an unconventional lens. We need more research along these lines because, whatever one makes of the conclusions, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and this is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like 'cMOOC' being defined as 'constructivist MOOC' at one point (I'm sure it's just a slip of the keyboard, as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you'd think they might offer better value for the money they cream off for their services.

Comments

  • Apostolos Koutropoulos November 18, 2014 - 1:53pm

    Thanks for sharing! What's your position on MOOC-related research being published in closed-access sources? For me it seems a bit incongruous :)

  • Marti Cleveland-Innes November 18, 2014 - 2:04pm

    Thanks for the insights on this, Jon. A small group of us at AU are researching ID principles for MOOCs, which don't really count as formal education but fit more closely with public education à la museums and libraries. This is not to be sneezed at, but it will require attention to design, and clear articulation of the relationship between MOOC participation and credits in formal learning environments.

     

    Cheers,

    MCI

  • Jon Dron November 18, 2014 - 2:16pm

    Thanks @Apostolos - I'm opposed in principle to closed journals, at least when they make use of the outputs of authors whose work is already funded from the public purse, use the free labour of reviewers and editors, then sell the results back at a whacking profit, denying access not only to the people who paid for the work but, surprisingly often, even to the writers. It made sense in times of information scarcity, when there was genuine value to be gained from printing and distributing paper journals, which demanded substantial resources and expertise. It's insane now.

    I'm still on the fence about author-pays versions of 'open'. On the whole I think it is an awful idea akin to vanity publishing - at the very least, it discriminates against those with less funding, no matter how good their research, and it accounts for the huge number of predatory emails I receive every day asking me to submit papers or join editorial committees of shady, for-profit, fly-by-night but vaguely 'proper'-looking journals. On the other hand, there is a significant cost involved, even for fully online journals using open source software: at the very least they need admin assistance, technical support, and hosting. I'm very sad indeed to see that one of my favourite open journals, JIME, has gone down that path now, but they make a compelling case that they cannot afford to run it for free any more. It seems to me that this is a place where alternative funding, whether through governments/research councils, crowd-sourcing or even voluntary contributions, might make a lot of sense. Some journals (including our own IRRODL) already make use of such things.

  • Jon Dron November 18, 2014 - 2:44pm

    Thanks @Marti - yes, I think alternative models akin to those of museums and libraries do make sense, at least for xMOOCs. Nice way of seeing it. I'm not sure about the credits, though: it seems to me that this should be entirely disaggregated rather than articulated because, the moment such things are introduced, you wind up with fundamentally irresolvable conflicts between learning and accreditation. Of course, to support learning, you might ask for portfolios or similar outputs, which might later serve as very good evidence for gaining certification.

    cMOOCs are quite a different matter. Because the learning design (such as it is) essentially comes from the participants, they are less easily dealt with from a traditional ID perspective. We have quite a lot to say that is relevant to this in our recent book. Personally, I think most of the answers lie in the design of environments - including pedagogies and other techniques as well as virtual spaces and algorithms - that enhance the ability of the crowd to teach itself, rather than in superimposing traditional models that emerge out of mediaeval physical constraints on top of them. It's about ways of learning through networks, sets and collectives, not through the kinds of learning design that work in traditional groups, most of which just don't fit with big-scale social learning (though they can work at a big scale if a largely asocial, objectivist model is used). This is relatively new territory, though we have a lot to learn from earlier constructivist and informal learning research, especially in the areas of distributed cognition and communities of practice.

    It would also be very useful to get away from the whole notion of objectives-driven, fixed-length courses altogether - I like the library/museum framing for that. I'm quite a fan of JIT (just-in-time) small-chunk methods that can be picked and assembled, much as one might pick and assemble books, or views of exhibits in a museum - Khan Academy, YouTube, Instructables, Q&A sites, StackExchange, etc. - where good ID can definitely be of very great value.