Great commentary by Tony Bates on an interesting and informative article on the future of virtual learning tools and spaces (the article is at http://net.educause.edu/ir/library/pdf/eli3035.pdf - Tony's link may not work, but this one should).
The environment proposed in the article sounds much like earlier initiatives such as OKI (a bit) and ELF (a lot), with a bunch of analytics and adaptivity thrown in. It's essentially about mashups, and presents a modest update of what I wrote about building learning environments in the later chapters of my 2007 book, Control and Constraint in E-Learning: Choosing When to Choose, incorporating more modern standards and tools than were around then. As I wrote then, making such services assemblable by learners and teachers, rather than by administrators, is an extremely good idea, certainly from an architectural viewpoint, and I am glad that it is becoming a bit more mainstream and is supported by more standards. Hopefully the latest ones will have more luck than their forebears. I am much saddened that ELF more or less bit the dust when it got too big for its boots and tried to do too much, and that OKI's finest moment became the mediocre Sakai. Whatever the fate of earlier attempts, we should no longer be thinking of monolithic applications that do everything. I also like the article's emphasis on breaking free of walled gardens, the importance of universal design, and its recognition of alternative forms of learning and accreditation.
Tony's comments are, as always, great. I am particularly taken with the importance of modelling edtech on human behaviour rather than vice versa, though I think it's important to allow some push and shove: we can do more with virtual learning environments than humans can do in physical spaces, with quite a different set of constraints and freedoms, so it's not about replicating so much as it is about accommodating. We shape our dwellings; our dwellings shape us. I think that is implicit in what Tony writes and is clear from his earlier writings, but maybe doesn't come out clearly enough in this post.
I also agree with Tony that Lego is not the right metaphor, despite its intuitive appeal. I reckon 'glue' or 'velcro' (these things have to be reassemblable and replaceable) are better analogies but, actually, a more flexible metaphor is needed, because the glue and velcro in such systems play an active, transmissive role as well as sticking things together. It's more like Arduino kits than Lego. We should be making tools that can be attached firmly to one another, not tools that conform to a preset set of shapes, and we should be making them capable of passing signals across boundaries, likely through semi-permeable membranes that organize and filter those signals differently for different needs. The connection matters.
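To make the 'active glue' idea a little more concrete, here is a minimal sketch in Python. Everything in it (the Connector class, its subscribe and emit methods, the event fields) is hypothetical, invented purely for illustration: the point is just that each attached tool supplies its own filter and its own reshaping of the shared signal, so the membrane is semi-permeable, and differently permeable for each tool.

```python
from typing import Callable

class Connector:
    """Active 'glue': routes signals between tools, filtering and
    reshaping them differently for each subscriber."""

    def __init__(self) -> None:
        self._subscribers = []  # (wants, reshape, deliver) triples

    def subscribe(self,
                  wants: Callable[[dict], bool],
                  reshape: Callable[[dict], dict],
                  deliver: Callable[[dict], None]) -> None:
        # Each tool plugs in its own filter (the semi-permeable membrane)
        # and its own translation of the shared signal.
        self._subscribers.append((wants, reshape, deliver))

    def emit(self, event: dict) -> None:
        # Pass the signal across the boundary, filtered and reshaped
        # per subscriber, rather than delivered in one preset shape.
        for wants, reshape, deliver in self._subscribers:
            if wants(event):
                deliver(reshape(event))

# Usage: a gradebook only wants assessment events, in its own format.
bus = Connector()
bus.subscribe(
    wants=lambda e: e.get("type") == "assessment",
    reshape=lambda e: {"student": e["who"], "score": e["score"]},
    deliver=lambda e: print("gradebook received:", e),
)
bus.emit({"type": "assessment", "who": "alice", "score": 0.9})
bus.emit({"type": "chat", "who": "bob", "text": "hi"})  # filtered out
```

The sketch shows only that the connective tissue is active: it routes, filters and translates, rather than requiring every tool to share one preset shape, which is exactly where the Lego metaphor breaks down.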
Tony goes a little far in his critique of standardization, I think. Standards are important and necessary - we just have to be really careful about what we standardize and where we celebrate diversity, and we should not become too beholden to them. Standards like HTML, SMTP, CSS, TCP, RSS, UTF-8, MIME and so on are actually pretty critical: the Internet would be a sorry place without them. When the alternative is some market-driven rubbish like Facebook's oxymoronic 'open' graph, or Microsoft's numerous attempts to pretend something is a standard (as long as they control it), I would prefer a standards-driven approach every time. The same goes for so-called 'open' APIs that are only open insofar as they let a bit of data in and out and allow a bit of manipulation along the way, but keep hold of the data for themselves: a fact you generally only notice when you want to move it somewhere else or repurpose it in ways the APIs do not support. I am pleased to see the trend now shifting away from SaaS (we knew decades ago that it was a dangerous idea to give someone else your data) towards containers-as-a-service, which give most of the advantages with almost no loss of control.

Standards like IMS-LD, SCORM, learning object metadata and so on are definitely overkill: they attempt to standardize the wrong things at the wrong level. They are like standards built for horses and carriages when we have already invented helicopters. Perhaps even LTI, QTI, xAPI, OpenBadges and Caliper occupy dubious territory, inasmuch as they standardize, to a greater or lesser extent, an implicit set of assumptions about the nature of education that might be (and have been) challenged. I personally think those five are actually extremely useful and potentially extraordinarily valuable, though, and should be encouraged: the assumptions they embed are not, on the whole, closed to different organizational and pedagogical models. That doesn't mean that we should avoid a tool or method simply because it doesn't use them, nor that we should religiously adhere to them when we need to do more than they allow. What we should avoid are tools that lock us in, and standards are a great way of avoiding that. Standards are particularly useful when they become part of the velcro, able to translate one standard (or closed format) to another. Only one side of a connection needs to support a standard, as long as whatever links the tools can translate effectively. Even screenscrapers can work, at a pinch, if they can liberate data into a standards-based format.
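As a sketch of what that liberation might look like in practice: the fragment below takes an imaginary record scraped from a closed tool (its field names, user_email, action and item_url, are invented for illustration) and reshapes it into a minimal actor/verb/object statement of the kind xAPI defines. A toy, not a reference implementation.

```python
# Sketch: liberating a closed tool's activity log into a standards-based
# format. The proprietary record fields are hypothetical; the output
# follows the basic xAPI statement structure (actor / verb / object).

def to_xapi(record: dict) -> dict:
    """Translate a proprietary activity record into a minimal xAPI statement."""
    return {
        "actor": {
            "mbox": f"mailto:{record['user_email']}",
            "name": record.get("user_name", ""),
        },
        "verb": {
            # ADL publishes a registry of common verb IRIs such as this one.
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": record["action"]},
        },
        "object": {
            "id": record["item_url"],
            "objectType": "Activity",
        },
    }

# A record screenscraped from some walled-garden tool...
statement = to_xapi({
    "user_email": "alice@example.com",
    "user_name": "Alice",
    "action": "completed",
    "item_url": "https://example.edu/quiz/42",
})
print(statement)
```

Once the data is in a standard shape like this, only the velcro needs to understand the proprietary side; everything downstream can speak the standard.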
The report is, I think, a good attempt at recommending an effective way of building learning environments, even though it mostly reflects cutting-edge practice (much of which has fully functional antecedents stretching back more than a decade) rather than looking to the pedagogical future, as the title suggests. It pulls together many of the good ideas that have been doing the rounds in edtech research circles for the past few years and presents a coherent picture of what they mean when you operationalize them. There are some weaknesses because of that, though: not only is it a view from the inside out (it is still largely thinking about how to use these systems in a traditional teaching context), it is also a bit caught up in trends rather than deep patterns of change.

I am particularly concerned about the roles given to analytics, recommender systems and adaptive 'personalized' systems. They do have a place in that future, but maybe not the critical, transformative or ubiquitous one that evangelists claim for them. They are just a few more 'bricks' (or connectors) to throw into the mix: useful tools, not a backbone of future learning. This is particularly true of adaptive (personalized) tools. Most of those on this bandwagon have not paid enough attention to the decades of work that preceded their (re)discovery of adaptive and recommender systems, and are not yet well attuned to their copious limitations. Analytics, combined with human interpretation, is more promising, but the blind spots are currently huge and, until we have a better idea of both inputs and measurements, analytics may ossify contingent irrelevances: improving attendance or grades without improving learning, for instance, or assuming that, because most students do better when they submit work steadily, all should be advised to do so (a toy sketch of that trap follows below).

Much more important are the tools for filtering, organizing and managing the social flow, for connecting us with the right people in the right way at the right time, wherever and whoever they might be, as well as (as the article rightly points out) for building flexible and open systems. As Tony suggests, what matters is the human part of this, and the big changes are those that affect our ability to learn from one another more easily and more effectively, that celebrate and support diversity, and that make learning more personal, not more personalized. Enhancement is where it's at, not replacement: we need the Six-Million-Dollar Learner, not HAL. Some tools and standards that have little to do with learning itself, particularly those for accreditation and for recording progress, matter too, both for the threats and opportunities they present to educational institutions and for the ways they, and the processes they support, will force organizational and, consequently, pedagogic change.
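Here, purely for illustration (every number below is fabricated), is the kind of trap I mean: an aggregate pattern favouring steady submission can coexist with a minority of students for whom the opposite holds, and advice tuned to the average mis-serves exactly them.

```python
# Toy illustration (all numbers fabricated) of turning a pattern that
# holds for most students into advice for all.

grades = {
    "steady": [85, 82, 88, 90, 60],   # most steady submitters do well...
    "bursty": [70, 68, 75, 92, 95],   # ...but the two top grades are bursty
}

for style, gs in grades.items():
    print(style, "mean:", sum(gs) / len(gs))

# steady mean (81.0) > bursty mean (80.0), so a dashboard might recommend
# steady submission to everyone -- including the bursty students who are
# already outperforming the steady average.
```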
Final thought: NGDLE is an appalling acronym! By definition, it is unlikely to catch on.
Comments
Great comments, Jon.
You are right - I'm not against standards per se. For instance, I'm driven crazy by the lack of standards when trying to integrate apps or tools such as Deezer, Apple TV or Sonos within an existing home entertainment system. How many remote controls do we need? One! We absolutely need various apps and tools to work together as seamlessly as possible in education too.
My concern with the NGDLE approach is, as you suggest, that it doesn't have a strong digital pedagogy around which to build standards. You could, of course, argue that standards are independent of applications, but I don't agree. Standards should allow you to do what you want with technology, and hence you should have some idea of what you want or, more importantly, of what you don't want. What I don't want is a complex technological system that both teachers and learners find increasingly difficult to navigate or apply, which is what I fear the NGDLE approach will result in. But I also accept I could be wrong on this, and I welcome the discussion the EDUCAUSE paper is generating.
Lastly, good luck on your discussions of where Athabasca is going or should go. It's such an important institution, but it does need to change.
- Tony Bates