Amazon's recommender algorithm works pretty well: if people start to gather the ingredients needed to make a thermite bomb, Amazon helpfully suggests other items they may need, including hardware like ball bearings, switches, and battery cables. What a great teacher!
It is disturbing that this seems to imply there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the work of a slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It's a bit worrying, though, that the 'wisdom' of the crowd might suggest uses of, and improvements to, some stupid kids' already dangerous backyard experiments, making them far more risky, and potentially deadly.
Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one deliberately tries to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even, remarkably, better than humans at identifying from a picture whether someone is gay. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as last week's report of Facebook offering adverts targeted at haters of Jews. It's crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.
Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there's an interesting question to explore as to whether they could evolve a different kind of machine-oriented wisdom, but let's not go there - SkyNet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than ones that automate and diminish them. It might not be a bad idea, for instance, for Amazon's algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that are way too easy to cross), but it would definitely be a terrible idea for Amazon to preemptively automate the prevention of such recommendations.
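To make the informate/automate distinction concrete, here is a minimal sketch of the two design stances. Everything in it - the watchlist, the threshold, the review queue - is my invention for illustration, not anything Amazon actually does:

```python
# Hypothetical sketch: the same signal handled two ways.
# WATCHLIST, the threshold, and the queue are invented for illustration.
WATCHLIST = {"magnesium ribbon", "iron oxide powder"}

def informate(basket, review_queue):
    """Flag a worrying basket for a human reviewer; nothing is blocked."""
    if len(WATCHLIST & basket) >= 2:
        review_queue.append(basket)  # a person judges meaning and context

def automate(basket, recommendations):
    """The preemptive alternative: suppress silently, no human judgement."""
    if len(WATCHLIST & basket) >= 2:
        return []  # errors and biases go unexamined
    return recommendations

queue = []
informate({"magnesium ribbon", "iron oxide powder", "switches"}, queue)
print(len(queue))  # -> 1: one basket now awaits human judgement
```

The first keeps a human in the loop to supply the wisdom the algorithm lacks; the second bakes the algorithm's ignorance into policy.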
There are lessons here for those working in the field of learning analytics, especially those trying to use its results to automate the learning process, like Knewton and its kin. Learning, and that subset of learning addressed by the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It's not about learning to do, it's about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.
Comments
Jon,
I agree with the points you are raising here.
There is a pervasive (and generally inaccurate) notion that learning is the acquisition of simple sets of skills. We seem to believe that, out of that acquisition of skill sets, higher order thinking and problem solving simply emerge magically.
If that were the case, then the recommender engine model would work great. Students would have the much-heralded (of late) playlists of lessons to build those skills and voilà!
However, this model excludes the most basic truth about learning, which is that it is labor intensive, experience dependent, and therefore not really programmable in the way that Sal Khan and other ed tech gurus seem to believe.
As in all things, it ain't what you do, it's the way that you do it.
Having worked on recommender systems, especially adaptive ones, for my PhD and for some years afterwards, I do see that there are many ways they can have a place. But there are also enormous dangers and, as you suggest, having them drive a teacher-determined learning agenda is not smart. Collaborative filters of the sort used by Amazon, Netflix, etc., turn out to be less promising than you might at first think, sadly: either too crude to work (it's not about relatively static preferences, as in books or music, but about evolving learning needs that change as you learn) or too difficult to use (e.g. my PhD systems!).
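For anyone who hasn't peeked under the hood, a collaborative filter of the 'people who bought X also bought Y' variety really is remarkably simple at heart. The sketch below is mine, not Amazon's (the data, names, and scale are invented), but it shows the essential mechanism:

```python
# Minimal item-based collaborative filtering over binary purchase data.
# Baskets and item names are invented; real systems add scale and tuning.
from collections import defaultdict
from math import sqrt

baskets = {
    "u1": {"magnesium ribbon", "iron oxide powder", "ball bearings"},
    "u2": {"magnesium ribbon", "iron oxide powder", "battery cables"},
    "u3": {"iron oxide powder", "ball bearings", "switches"},
}

def item_vectors(baskets):
    """Invert baskets into item -> set of users who chose it."""
    items = defaultdict(set)
    for user, chosen in baskets.items():
        for item in chosen:
            items[item].add(user)
    return items

def cosine(a, b):
    """Cosine similarity between two sets of users."""
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def also_bought(target, baskets, top_n=3):
    """Rank other items by how often the target's buyers also chose them."""
    items = item_vectors(baskets)
    users = items[target]
    scores = {i: cosine(users, u) for i, u in items.items() if i != target}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(also_bought("iron oxide powder", baskets))
```

Nothing in it knows or cares what the items mean: the matrix captures what co-occurred, not why, nor what the buyer (or learner) needs next - which is exactly why such filters are too crude for evolving learning needs.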
I find small chunks of stuff to learn from (YouTube videos, StackExchange dialogues, etc.) can be immensely useful when used by learners to achieve goals they have set for themselves: I have learned a great many skills that way, which are a necessary part of (but only part of) learning. And there is great value within a small, known community to sharing and ranking the stuff the group uses - the objects that bind, the ideas that connect, the shared cognitive artefacts - which can be greatly enriched with added visualization, analytics, and rich qualitative metadata, as long as these simply support, rather than drive, learning, and reflect rather than dominate the group's dynamics.
As an interesting addendum to my post, Amazon's recommendations turn out to be far more benign than the media at first suggested: the recommendations come about as a result of people making backyard fireworks and doing science experiments. Context is everything, and context gets lost in large-scale recommender systems whose purpose is to sell stuff, not to support learning!
Jon,
I think that recommender systems can be good (and I know yours were/are). I was responding to the corporate instantiations in education in particular.
Mike Caulfield had a nice piece about Netflix recommender engines not really recommending things for you, but rather recommending things you might like that they have the rights to.
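Caulfield's point is easy to make concrete. In this hypothetical sketch (titles, scores, and catalogue invented), the rights filter runs before the ranking, so the system surfaces the best thing it is licensed to show, not the best thing for you:

```python
# Hypothetical: scores are what the recommender thinks you would enjoy;
# only titles in the licensed catalogue are allowed to surface.
scores = {"The Wire": 0.95, "In-House Show A": 0.60, "In-House Show B": 0.55}
licensed = {"In-House Show A", "In-House Show B"}

recommendable = {t: s for t, s in scores.items() if t in licensed}
print(max(recommendable, key=recommendable.get))
# -> "In-House Show A", even though "The Wire" scored far higher
```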
Like you, I have sought out YouTube and Stack Overflow and other similar places to support my own learning, and they have been immensely helpful. The difference there is the self-directed piece, I think.