Landing : Athabasca University

More on Connectivism: a response to Stephen Downes

This is my (lengthy) response to Stephen Downes's responses to my previous post on connectivism. For those catching up with this, my original post was an attempt to make sense of Downes's view of connectivism as a distinctive learning theory, in which I compared it with other related theories, then examined it on its own terms before concluding that connectivism was better seen as a theory for learning than a theory of learning. I saw this as a good thing because the more open formulation as a theory for learning allows us to treat it as a transformative set of related ideas, a fairly tight theme around which we can explore a range of adjacent possibles and move forward more effectively as a set of people with shared interests, working in very closely related areas, on how to better support learning in the networked age. Though a functionally complete uber-theory of learning that negates or renders pointless any and all others would be a pretty useful thing (which, as David Wiley astutely observes, as George Siemens agrees, and as I will again argue, is not something Stephen's version of connectivism comes close to yet, and is probably not even what it aims for), the lack of one does not negate the value of the connectivist model of the world as a catalyst for positive change, nor does it diminish connectivism's power.

In his reply to my post, Downes provides some useful clarifications that eliminate a couple of my negative interpretations of his theory which, depending on what he meant, might have rendered it inconsistent or unoriginal. He makes it clear that he really does mean this as a novel learning theory, as opposed to an amalgam of connectionism and one or more other ideas, and presents some arguments to explain this position. This is a great example of how we learn in a network and helps to validate both of our perspectives on what makes networks valuable. There are, however, many things in my post that Downes finds unclear and, now that he has further clarified what he means, several things that I still find problematic in his version of the theory, so this post is intended to explore a few of the issues in greater depth than my last post, in an attempt to build a clearer understanding of where the agreements and disagreements between us lie and, perhaps, to move along to a richer understanding of the value and meaning of the connectivist model of learning in a networked world.

Things I will not respond to in detail

I will largely ignore the first half of Downes's riposte because most of it seems more a specious attempt at sophistry than a serious attempt to further understanding. As well as the odd bit of ad hominem nonsense, Downes spends a great deal of time making pedantic arguments against each part of my sense-making discussion of how connectivism differs from other theories, treating it as though it were a syllogistic argument against his own views. As the point of this discussion was precisely to show why these various related theories are not what he means by connectivism, to thereby move towards what he does mean through a process of elimination, and it was very blatantly not a syllogistic argument, let alone one against connectivism, I am a little bewildered as to why he does this. I was making an attempt to make sense of something quite subtle and complex by explaining why very similar and overlapping theories that sound very much like it are not, in fact, the same. In the process I hoped to help both myself and others understand the theory in the context in which it resides and to see how and why it is important. I have since rediscovered a post in which George (as usual) did a better job of explaining the relationship between connectivism and similar theories but, if any of our various slightly differing interpretations of connectivism are even partly valid, such slightly redundant sense-making is valuable. Furthermore, if the related theories I describe are a part of what connectivism means and they are valid theories (which I think they are on both counts) then it strengthens the theory, it doesn't weaken it. Unless, of course, it turns out to be nothing more than the sum of a few other theories with nothing new to add but, as Downes clarifies in his response and as I concluded (a little differently) in my post, that is not the case here.
Downes's line-by-line approach to argument seems only intended to inflame, but he of all people should realize how much that weakens and dilutes the genuine and decent arguments that he does make, especially when so few actually hit home. Another pedantic cherry-picking response would serve no useful purpose in furthering this discussion.  We should argue about real differences, and celebrate real similarities. This is what I will aim for here.

A couple of materially relevant points

I will pick up on a couple of points Downes makes early on that do seem material. The first is that Downes does not care whether or not his variation of connectivism is a learning theory. He claims:

"Honestly, I don't care whether it's a learning theory. I also don't care whether it is original to me, whether it borrows from someone else's work, or any of the usual academic trappings."

I think I was confused by things like the title of the piece ('Connectivism as a learning theory', for those catching up with this) and sentences like "So in this post, let me be clear, first, about what a theory actually is, and then let me outline the ways in which connectivism can be thought of as a learning theory," or "This is one reason we say connectivism is a learning theory," or "These are the foundations of connectivism as a learning theory."

Had I realized it didn't matter to Downes at all whether or not it was a theory then it would have saved us all a lot of time! Apart from anything else, the main conclusion of my post was precisely that, indeed, it does not matter that it is a learning theory. In fact, a big part of my point was and remains that describing it as a learning theory is actually counter-productive because it imposes constraints in places that no constraints should be and, more worryingly, makes it a hostage to fortune when it already has substantial legs of its own to stand on. However, Downes's arguments that follow do imply that whether or not it is a learning theory is important to him, and that also matters to me for the aforementioned reasons, so I will continue.

Downes also complains that I am trying to put a wedge between his view and George's view. That is true for the reasons I mention in the previous paragraph. I think that George's view better retains a clear focus on the role of the network in both supporting and being an extension of the individual learner, without demanding that all social networks should themselves learn or be capable of things like perception and recognition. This makes it far more likely to be of value to others and far more capable of adaptation and growth.  In a recent post to OLDaily Downes himself sums it up nicely:

"The Siemens answer is multimodal extension. The networks reach out and integrate with each other. Thus, for example, a concept might be contained partially in a human brain but the full extension of the concept might extend beyond the network of neurons to include, and interact with (as part of the same network) extra-neural entities (like computers, other people, and the like).

The Downes answer is pattern recognition (yes yes I know William Gibson wrote a book of that name, and that the concept is widely discussed by others). One network perceives patterns in another network and interprets or recognizes these patterns as something. So, for example, a social network might recognize 'genius' in a person via the presentation of patterns of behaviour by that person that cause responses typical of recognition of genius in society."

Exactly so. Though I have minor quibbles about the relevance and value of needing to describe knowledge using a single vocabulary of networks, I substantially share a perspective on knowledge as being at least representable in a useful way as connections between things, viewed from a perspective of an entity that thinks, learns, uses and is deeply entangled with such connections. This equally encompasses connections between emergent and designed artefacts in and arising from those networks, including models and theories, and this in turn means other explanations and theories remain as relevant and meaningful as ever. Incommensurable theories of learning are often useful and valuable, and remain valid in a connectivist context. There is no contradiction between thinking of meaning as socially constructed and, at the same time, as being describable in network terms, for instance. Emergent and designed entities can behave in ways that a networked theory fails to adequately and/or efficiently describe or explain. Passion, motivation, reflective practice, the role of ritual, the broad patterns reflecting a dynamic between structure and dialogue and so on might be described in network terms, but would lose impact and significance in the process, so I remain a fan of plural models. Like those earlier theorists in the field of socially distributed cognition that Downes finds irrelevant but that, on re-reading, I find profoundly similar, I see knowledge as not just occurring in our heads but both extended by and encompassing our social, conceptual and material networks. We think with our networks, not just about them. I also agree that networks have dynamic behaviours that, in some cases (though only some), can be usefully seen as emergent and active self-organized systems that are distinct from the parts of which they are constituted, and that these can play a role in supporting or inhibiting the learning process.  There are some small but significant differences though. 
A view that I think George and I broadly share is essentially ego-centric, where the focus of attention and central node of the network that we are concerned with is the learner, as opposed to Downes's more abstract and generalized bundle of interacting and probably overlapping networks, each of which is itself learning, and exhibiting behaviours that go along with that. It's a small difference, but I think it is the thin end of a wedge that separates our perspectives. I prefer to talk about systems than networks, because it provides us with a richer vocabulary and toolset, but systems can be represented in network terms too. I believe that systems teach us and we teach them; systems embody knowledge and we feed them with information; and systems are not passive receptacles but active transformative agents that are far from neutral, that affect our learning and our behaviour in ways that do not always resemble what we feed them with, and that can work both for and against us. But what is of interest is how people learn within and as part of those systems. I do not believe that systems recognize genius in any meaningful way at all, though there may be identifiable patterns of behaviour that we can see in a system and plentiful ways that the designed or emergent configuration of networks can affect how perceptions of individuals change and can be affected by the views of others in that system. 

The relevance of collective intelligence

While Downes does not care if his own ideas are original, he does seem to care if someone else uses them. Downes claims that I (actually, it was Terry Anderson and I) acquired the group-network distinction from him. We did not. Casting aspersions on my academic credibility through an ad hominem argument is surprising coming from someone who has a background in philosophy. However, it also provides an inroad into my main arguments as to why Downes's account of connectivism bothers me as well as where we agree, so I will start with this as it is as good a place as any.

Though we did not realize it at the time, the distinction between networks and groups actually probably originated with Barry Wellman, who distinguishes between group-centred and networked-individualist perspectives in very much the same way we (and Downes) do. Wellman himself credits others such as Freeman who, in 1992, shared prototypical if less distinct views of the differences. Pavitt, back in 1994, wrote in a textbook of the distinction between networks and structured groups in a way that suggests it was not a new idea even then. Tschan and von Cranach, in the mid-90s, investigated roles of structures and tasks within groups in contrast to network analysis methods, while sociologist Randall Collins has made a big thing of the relevance of structural groups distinct from individual networks in his ritual-oriented explanations. Others spoke of almost identical ideas in terms of formal vs informal networks. Similar ideas, with a greater range of differentiations than the simple group/network distinction but none-the-less encompassing it, can be found in the work of Jan Van Dijk. The same sorts of distinction were being made by people in other or overlapping fields, such as Ross Mayfield, Kevin Kelly, John Seely Brown and Clay Shirky, though not all used precisely the same vocabulary in the same way. In short, it was an idea that was in the air throughout the 1990s if not before, and that was in common use across several academic communities by the early 2000s. Downes also failed (and continues to fail) to attribute it to Wellman or to anyone else. I mention this not just because it irks me that he feels the need to claim ownership of a common idea he did not originate but, more usefully, to introduce the concept of collectives.
Our paper's central point was that, in addition to the distinction that had already been made between groups and networks, it was also important to consider a third kind of entity that we referred to then (and now, though in a slightly more refined way) as 'collectives'. Our understanding of collectives in many ways resembles Downes's view of learning occurring in networks. A collective, like Downes's concept of a network, could be seen as doing, behaving, perceiving, recognizing and learning, at least in some of its many possible forms. I hope that you will therefore bear with me a while as I expand more fully on what I think 'collective' means because I think it demonstrates the substantial similarities between our views, the small but significant differences which are of material importance to my critique of Downes's Connectivism as a learning theory, and because it partly explains why I would prefer connectivism to be seen as a uniting and generative theory of how to learn rather than a narrower theory of learning.

Defining collectives

Downes rightly observes that Wikipedia's definition of 'collective intelligence' includes things that no one would describe as emergent. It is fine to call me out on that although, as Downes is familiar with what I actually mean because he has read at least one of my papers on the topic (see above), it is a little pedantic. Rather than the Wikipedia page, I should have linked to something more appropriate, such as Toby Segaran's great practical guide to programming collective intelligence, Joe Gregorio's nice post on stigmergy and the Web, most of Francis Heylighen's work or (still a bit broad in places but mostly in the right area) MIT's The Handbook of Collective Intelligence. 'Collective intelligence' is a term whose use has spread across many disciplines and that means different things to different people. My own use of the term comes mainly from the area of systems and cybernetics. To clarify, I meant it solely and exclusively in the sense of behaviour arising from independent and/or dependent interactions of many individual entities, algorithmically combined to act as a single agent. Such algorithms can be enacted by the individual actors within the collective and/or by some other authority, such as a computer program or human mediator. It is, of course, possible and very common indeed for one or more of the agents contributing to a collective to itself be a collective. Complex systems are built this way so, at some point, the broader field of collective intelligence begins to blur with its cybernetic subset.

Normally, when talking about such things, I tend to avoid using the 'intelligence' part of the term, simply referring to such collective entities as 'collectives', in part because they can be collectively very dumb. However, in truth, I mainly use the term because a student, who had used a couple of the systems I had built that were intended to support self-organized learning in groups and networks, described the idea as being like the Borg collective in Star Trek TNG (in its initial though not its later incarnation). He meant that it was an example of a kind of a hive mind. I liked it, and the name stuck.  There may be a better word. Hutton started it all in 1789 by talking about 'superorganisms' and others see it as a sub-branch of cybernetics, but I will stick with 'collectives' because it is more familiar.

Collectives in more detail

Examples of collectives very non-exhaustively include those arising from stigmergic behaviours (such as in ant trails, termite mounds, ant nest tidying, movements of money markets, positions of Google search results, tag cloud behaviours, rating/voting systems employing feedback loops, money in street performers' hats, etc.), from other complex group coordination behaviours (like bee dancing, bird flocking, cattle herding, fish schooling, coordination of cells in human bodies, plant growth, neural networks, traffic flows, etc.), and from parts of what is involved in more complex human systems (meme spread, city growth, technology evolution, organizational norming, etc.). Though the concept does very much encompass emergent behaviours in networks, there are also collectives that are better, though not uniquely, described using the language of sets. Moreover, even those collectives that are unequivocally best described in network terms often employ simpler networks than those that fit Downes's criteria for a network that learns, though they fit George's view of networks as extensions of cognition. A neural network is a particular form of collective, but so is any multi-cellular organism, and so is the crowd+algorithm (in this case a simple average) that correctly estimates the number of candies in a jar or the position of a lost sunken ship. In brief, any algorithmically combined collection of individual entities that can thereafter be viewed as a single actor within a system may be described as a collective. Collectives tend to lead to systems that are self-organizing, though this often depends upon the scale at which you observe them and the context: it is not a defining characteristic but it is a common feature. They can be and, in human systems, often are the result of intentionally applied algorithms.
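The candy-jar case above can be sketched in a few lines: the 'collective' is nothing more than many individual guesses plus an averaging algorithm, yet together they act as a single estimating agent. This is only an illustrative toy (the jar size, noise model and crowd size are all invented):

```python
import random

random.seed(42)

TRUE_COUNT = 750  # hypothetical number of candies in the jar

# Each individual guesses noisily: some far too high, some far too low.
# No individual is trying to produce a group estimate.
guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(500)]

# The collective algorithm is a simple arithmetic mean: the crowd plus
# this algorithm can be treated as a single actor within the system.
collective_estimate = sum(guesses) / len(guesses)

error = abs(collective_estimate - TRUE_COUNT) / TRUE_COUNT
print(f"collective estimate: {collective_estimate:.0f} (error: {error:.1%})")
```

Note that nothing here is self-organizing: the combining algorithm is applied from outside the crowd, which is precisely why such a collective sits at the simple end of the spectrum described above.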
However, the fact that collectives in turn affect the actors of which they consist, and thus change the overall behaviour in a constant iterative cycle, means that feedback loops can become important and the overall system may reach self-organized criticality even when someone or something has deliberately intervened, usually using software, to make the process happen. A simple example of this is voting behaviour in some electoral systems. The first collective algorithm here is as simple as it gets: someone counts votes and the person with the largest number wins. The combined decisions of many individuals are treated by the counter as a single decision made by a single entity, conventionally labelled as the 'electorate'. This is about the simplest way to use crowd wisdom that I know of and is at the very edge of what 'collective' means. It is a long way from the self-organized networks Downes is talking about. If there is a sequential cycle (e.g. US presidential primaries) then the announced results of those vote-counts play a significant role in determining the behaviour of future individual voters, whose personal algorithms include assessing the probability of candidate success as a weighting factor when deciding for whom to vote. This means that early voters have (according to Knight and Schiff) up to five times the influence over the result as later voters. The aggregated behaviours of the crowd start to affect future behaviours of individuals within the crowd. In this particular case, though self-organizing principles start to come into play, because each crowd is different and voting occurs in a linear sequence, it is not a sustainable self-organizing process. It just explains a pattern of behaviour.
Only when the same crowd influences its own members, as we see for instance in the up-down voting on sites like Reddit, Stack Exchange or Slashdot, or the stigmergic behaviours in Google Search, fed with energy or information from outside the collective (all complex adaptive systems need an external 'energy' source to drive them), can it become truly self-organizing. The linked paper from Heylighen in the previous sentence, incidentally, contains an interesting account of how intelligence and learning may occur on the Web that may be of interest to those seeking such explanations and that echoes some of Downes's thinking on the subject.
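The bandwagon dynamic described above can be sketched as a toy simulation. This is emphatically not Knight and Schiff's model, just a minimal illustration of the feedback loop: when some fraction of voters simply follow the announced running tally, an early random lead gets amplified into a lopsided, path-dependent outcome (all parameters are invented):

```python
import random

def sequential_election(n_voters, bandwagon_weight, seed):
    """Two-candidate sequential vote. With probability bandwagon_weight a
    voter follows the announced running leader; otherwise they vote their
    private 50/50 preference. Returns the winner's final vote share."""
    rng = random.Random(seed)
    tallies = [0, 0]
    for _ in range(n_voters):
        if rng.random() < bandwagon_weight and tallies[0] != tallies[1]:
            choice = 0 if tallies[0] > tallies[1] else 1  # follow the leader
        else:
            choice = rng.randrange(2)  # private preference
        tallies[choice] += 1
    return max(tallies) / n_voters

# Without feedback the result hovers near 50/50; with strong feedback the
# aggregated behaviour of the crowd reshapes the behaviour of later voters.
independent = sequential_election(2000, 0.0, seed=1)
bandwagon = sequential_election(2000, 0.8, seed=1)
print(f"no feedback: {independent:.2f}, strong feedback: {bandwagon:.2f}")
```

The point of the sketch is the structural one made above: once the collective's announced output is fed back to its own members, early signals carry disproportionate influence over the final result.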

The relationship between connectivist accounts and collectives

Collective accounts explain how there is not only knowledge held by individual people and artefacts in a set or network (and occasionally in a group), but how the collective can itself become an active agent which collects, processes, stores and re-represents knowledge, transforming it for the benefit (or not) of its members or others. Especially when emergence and self-organized criticality come into play, this typically seems to possess a kind of knowledge that differs from that of the individual parts of which it is comprised. Downes's archetypal example of a neural network demonstrates this well. The algorithms relating to how unintelligent and unconscious neurons interact with and respond to one another, though very simple and not resembling thought in the slightest at an individual level, collectively lead to our own intelligence and probably to consciousness itself. Less spectacularly but none-the-less quite interestingly, an automated collaborative filter, such as those lying behind most of Amazon's recommendation engines, can mine similarities in implicit or explicit user preferences in order to discover novel things (including novels, neatly enough) that an individual user might like by combining the explicit and implicit recommendations of others that have behaved similarly before. The individuals making up a crowd of potential or actual book-buyers are, on the whole, not intentionally recommending books to other people. On the whole they are buying books, searching or browsing catalogues, or expressing their likes for books in order to get better recommendations from the system. But, combined with appropriate algorithms, they together become part of a collective that recommends books. 
Observing that this is not unlike one of the roles of a conventional teacher, some years ago I built a series of systems that were intended to use collective methods such as this to support learning; that enabled the crowd to act in some ways as its own teacher (see the vast majority of my papers up to around 2007 for examples of this). 
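The recommender described above can be sketched as a minimal user-based collaborative filter. This is not Amazon's actual algorithm, just an illustration of the principle that individual purchase behaviours, algorithmically combined, become a collective that recommends (all names and titles are invented):

```python
# Hypothetical purchase histories: nobody here is deliberately
# recommending anything to anyone else.
purchases = {
    "ann":   {"Dune", "Neuromancer", "Foundation"},
    "bob":   {"Dune", "Neuromancer", "Snow Crash"},
    "carol": {"Emma", "Middlemarch"},
}

def similarity(a, b):
    """Jaccard overlap between two purchase sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user):
    """Score books bought by similar users but not by `user`, weighted by
    user-user similarity: the crowd plus this algorithm acts as a single
    recommending agent."""
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for book in theirs - mine:
            scores[book] = scores.get(book, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # bob is ann's closest match, so his books rank highest
```

Even at this toy scale the collective discovers something no single member expressed: a recommendation assembled from behaviours that were never intended as recommendations.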

If they work as he claims, Downes's learning networks are a species of collective. The differences are, however, telling. Collectives for me are interesting phenomena that can be used to help people to learn - I do not need nor wish to treat all of them as necessarily knowing anything, as long as they can act in some way that helps people to learn or where knowledge of their emergent behaviours can improve learning. Collectives are diverse and non-uniform entities and, relatedly, arise from multiple mechanisms that do not rely solely on the existence of a particular subset of network dynamics to be of value. I tend to avoid treating everything as a network because set operations are often far more efficient and are common in naturally occurring collectives, though you could, if you wish, describe all of them in the language of networks.  Collectives are agents that, in some cases, can help people to learn. Just as importantly, in other cases they can be a hindrance and, in some cases, can have little or no value at all. Perhaps more significantly to the general thrust of the argument, in making use of such collectives to support learning I am bringing my own understanding of what helps me and others to learn to bear. I am making deliberate choices as to what might be of value to a learner, such as selection of useful learning resources, their order of presentation, support for connecting with other people, identification of expertise, evaluation of resources, and often drawing on other theories that relate to learning like what motivates people to learn in the first place or the need for reflection in the learning process. This is different in granularity, scope and purpose from what Downes wants to achieve, I think, but I hope to soon show that he is similarly bound by such assumptions and prior beliefs.

These fairly subtle distinctions between my views and those of Downes show that there are far more similarities than differences between our perspectives. We are, in almost all cases, talking about much the same things, seeking the same ends, sharing similar attitudes and describing the same kinds of phenomena in not dissimilar ways. With that in mind, I will move on to why I think Downes's view, though clarified by his response to my last post, remains problematic for me.

Why I have problems with Downes's theory

Once we get beyond the arguments Downes makes against things that I too argue against, we finally reach the beginnings of a riposte that does address the small syllogistic argument found near the end of my initial post and that does clarify his actual argument. It is helpful that he clarifies:

"And yes, I say that these connections are the learning. In humans, neurons are just the tools we use to make connections. In societies, humans are just the tools we (they?) use to make connections. "

He also helpfully clarifies:

" We can say 'a change of state in one neuron can result in a change of state in the second neuron'. And in the same way, 'a change of state in one human can result in a change of state in the second human'. The nature of these states is different; in a neuron, it might be a difference in the concentration of potassium ions, in a human it might be the acquisition of a social disease (or an idea). The physical instantiation of the connection can be different, but the fact of the connection can be the same."

Finally, Downes does get round to launching a substantial challenge, in which he claims that his model of network learning is self-consistent, that it does apply to social networks, and that it is an empirically observable phenomenon.

" It is a matter of empirical fact whether or not neurons and humans associate and form connections with each other according to the same underlying principles."

As he claims to be entirely unable to understand the paragraph in which I actually argue against this assertion, I will use this as a launchpad to explain my arguments in a slightly different way that I hope will be clearer to him and to others who remain unpersuaded by my initial post.

The forthcoming argument in brief

In the arguments that follow, I will observe that the empirical facts surrounding social networks and those surrounding neural networks are of a quite different order. Specifically, I will argue that an analysis of social networks demands that we invoke prior models, theories and symbolisms that are not based on the innate structure of an observable network as would be the case for a neural network, but on what we choose to observe and analyze. This leads to circularity if we wish to think of them as things that learn because we must start with a theory of what matters in learning (whether as individual learners or researchers of networks) that our analysis is intended to confirm. I will then argue that, even if my previous argument were flawed, the differences between neural networks and social networks are so great that applying similar principles to describe learning within them demands much more than Downes provides because, on the face of it, the principles' application to a real social network in a similar fashion to their application in a neural network would operationally lead to nothing but a chaotic storm of signals resembling an epileptic fit. This does not mean social networks cannot exhibit self-organized regularities but casts doubt on whether they can be seen as perceiving, learning or acting in any way.  I will then observe that, thanks to their unbounded nature, the only meaningful way to describe the output of a social network is to treat individuals within it as functionally equivalent to its own output neurons. This means that every individual experiences a different network, rather than a single consolidated whole, which returns the emphasis to the meaning-making of the individual, does not demand that the fuzzy-boundaried network itself learns and, even if it did, would not be able to provide us with useful information to help us with learning design or pedagogy. 
By the time I get to the end of the arguments I will eventually wind up agreeing that Downes's account of learning in social networks makes sense to me, but that it does so for the same reason that it makes sense to think of the broader category of all self-organizing systems as learning systems, including evolutionary processes, thermostatic regulators and traffic systems, whether or not they conform to the particular set of theories that Downes describes as learning theories. I present an alternative, simpler but arguably equally valid theory that has explanatory value but that is equally inadequate to the task of helping people to learn for the same reasons. I will conclude with a reiteration that connectivism does not need to be a theory of learning to be of value, which is self-evident from the empirical evidence that, despite some fuzziness and disagreement about the learning theory aspect of it, it actually is and has been of great value. If all of that seems obvious to you already, you can probably skip the next few sections and move on to the end without missing much. If not, I commend you to what follows.

Empirical facts are not all created equal

Downes felt that my account of the differences between neural and social networks was irrelevant to his argument. In this section I share a few of the ways in which it is not.

Boundaries

The boundary that separates a neural network from its external context and stimuli makes it easy, at least in principle if not entirely in practice, to examine the network as a single unified entity in its entirety (notwithstanding the blurriness that socially distributed cognition introduces to the equation). A person's skin, say, provides a clear and unequivocal delineation of the neural network in a person. In contrast, when we look at social networks, we must always choose which connections to include and which to exclude because they always extend indefinitely. Sometimes we do encounter what superficially resemble 'natural' boundaries, such as when we analyze patterns of tweets from within Twitter, but this is a boundary of convenience: qualitatively the same kinds of interactions that we are observing within the Twitter network are also occurring beyond that network in very much the same way, and we are not observing those (as a point of interest, nor, for that matter, are we observing the whole of Twitter, because its API only gives us a pre-selected 1% of the network or only allows us to trace our own ego networks). This is not true of neural networks where, qualitatively, the connections become substantially different once we examine connections outside them. Neural networks are discrete systems. If we are trying to examine a social network as a system, we have to make an active choice about its perimeter. I will explain why that matters soon, but first I will describe some other choices we have to make apart from where to draw the boundaries.

Diversity of edges and nodes

In a neural network there is no choice at all as to what to measure in an edge. The edges in neural networks (except when things go wrong) behave in consistent and predictable ways, with weights and values that vary across a well-defined range of self-similar signals. As for edges, likewise for nodes: the nodes in a neural network are easily described using algorithms that accurately predict their reaction to those similar stimuli. Downes misleadingly claims that neurons are very diverse. This is only trivially true, in the sense that fingerprints are very diverse. For all their dissimilarities, fingerprints, like neurons, can only usefully be described in a very limited number of ways across a very limited set of dimensions.

When we model a social network as a system we have to choose what constitutes an interaction within that system - what an interaction means and what an effect means. We always use proxies for interaction such as number of message exchanges, self-reported closeness, 'following' or 'friend' status on a social network, membership of shared groups, use of shared tags, reading of similar posts, and so on. Except for binary values (e.g. friending) a decision normally has to be made as to what 'intensity' means. More sophisticated variants may use content analysis to gain some insight into the kinds of signals passing between nodes as well, and there are methods that can be used to make sense of two-or-more mode networks (i.e. where nodes and edges are a mix of qualitatively different kinds). These proxies barely scratch the surface of the possible range of signals and effects that we might choose to observe, including the infinite possible word combinations, pheromones, photos, buildings, manufactured artefacts, touch, facial expressions, sighs, postures, etc, etc, etc we might examine. Nor do they take into account the complexities of pace or the distance in time between cause and effect. It is even more difficult to identify the effects of a signal on a node although, because we are describing bounded entities, this is less a theoretical problem than an intractable practical one.

When analyzing social networks we always, and often deliberately, use incomplete datasets (social networks are not bounded). We make decisions like ignoring anything beyond second-order connections, or using levels to isolate clusters. We typically 'clean' the network data by removing outliers. We filter, shape and simplify. We make broad abstractions, ignoring anything that we do not consider relevant to our particular description, that does not seem material to our current intent. We actively seek things like forbidden triads and structural holes because we know what we are looking for in advance of any analysis and we know what they mean for our purposes.  No matter how much smarter we become at doing this it will always be an infinitesimal fraction of what we could describe. Note again that this is entirely different from our neural network where the decision is largely made for us or, at least, it is pretty obvious what constitutes a signal and exactly what constitutes a reaction to it.
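To make this kind of filtering concrete, here is a minimal sketch of one such choice: keeping only nodes within two hops of a chosen ego and simply discarding everything beyond that boundary. The network and names are invented purely for illustration.

```python
from collections import deque

def ego_network(adjacency, ego, max_order=2):
    """Return the sub-network within max_order hops of an ego node.

    Anything beyond the chosen order is discarded - an illustration of
    the boundary-drawing choices described above.
    """
    kept = {ego}
    frontier = deque([(ego, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_order:
            continue
        for neighbour in adjacency.get(node, ()):
            if neighbour not in kept:
                kept.add(neighbour)
                frontier.append((neighbour, depth + 1))
    # Keep only edges whose endpoints both survived the cut
    return {n: [m for m in adjacency.get(n, ()) if m in kept] for n in kept}

# A toy network: 'dora' is three hops from 'alice', so she is cut
links = {
    'alice': ['bob'],
    'bob': ['alice', 'carol'],
    'carol': ['bob', 'dora'],
    'dora': ['carol'],
}
print(sorted(ego_network(links, 'alice')))  # ['alice', 'bob', 'carol']
```

The point of the sketch is that 'dora' vanishes entirely: nothing in the data itself dictates the two-hop cut-off, only our prior decision about what matters.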

Choices

Complexity alone does not prove Downes's theory wrong in the slightest. The problem is that, in making those choices, we necessarily make assumptions about what matters that are prior to our examination of the network itself. This is not bottom-up analysis of the same kind that we use when examining neural networks: in a social network we observe what we choose to observe from an infinite range of possibilities. The choice of connections and responses will thus always be determined by our existing models (or putative models) of what knowledge is and which kinds of knowledge matter to the kind of description or effect we are looking for. This is true whether we are performing 'traditional' social network analysis, whether we are trying to observe a self-organized network to examine how it 'learns' or what knowledge it contains, or whether and how we as individuals respond to the 'network' inputs we receive as individual nodes in a social network. This is not just an academic point about the difficulty of meaningful social network analysis but a feature of what it means to be a human being interacting with the world. The constructivists recognize this and they have a point. We are not tabulae rasae, even if we are more helpless when we come into the world than most species. Our prior models that we have constructed over our lifespans and those that are innate in our biological make-up play an enormous role in determining how we respond to future stimuli. To summarize, what we describe as a social network is deeply dependent on what we choose to observe. The choices we make are not arbitrary but influenced by and formed from our theories, models, beliefs and symbol systems, which is exactly what we were trying to get away from by using this description. Conversely, what we describe as a neural network is a bounded phenomenon in which all the relevant signals and reactions are in principle knowable.
This does not mean that Downes is necessarily wrong in principle to suggest that learning may be found in the interactions of many entities, as I will kind-of agree by the end of this post, but it does raise some big questions as to the practical application of the theory in the real world, or what might possibly count as evidence that any instantiation of it is true. It is inherently vulnerable to circularity, because of the choices we have to make in identifying boundaries, signals and reactions. This speaks to David Wiley’s concerns about defining entities in terms of connections - it begs the question.

Inputs and outputs

Keeping boundaries in focus, it is worth noting that Downes repeatedly refers to 'society' as something that learns. It appears then that this is not an ego-centric network that he is describing but one in which we are looking at the whole system as an abstraction. As Latour and others have amply demonstrated, 'society' is a very elusive and incredibly fuzzy term that means very many different things to many different people. An initial problem here, if Downes wishes to avoid circularity, is not to explain how society learns, but to explain what society is (if anything). ANT theorists and practitioners have evolved a range of powerful methodologies to do this in ways that are quite useful when the target is to understand a particular socio-technical system but that would be hopelessly inadequate to address Downes's intent. This highlights a deeper problem though. Networks that exhibit the four characteristics that Downes mentions have another important pair of characteristics - they have inputs and they have outputs. A neural network, for example, always has input neurons that lead to a change in one or more output neurons, passing through one or more (usually many) hidden layers along the way which transform the signals into patterns that we might reasonably describe as learning (or, more directly, that translate into behaviours).
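For readers unfamiliar with that input/hidden/output flow, a toy sketch may help. The weights below are arbitrary numbers chosen purely for illustration, not a trained or biologically plausible model:

```python
import math

def layer(inputs, weights):
    """One fully connected layer: weighted sums squashed by a sigmoid."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Arbitrary illustrative weights: 2 inputs -> 3 hidden neurons -> 1 output
hidden_weights = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
output_weights = [[1.0, -1.0, 0.5]]

def network(inputs):
    # Input neurons feed hidden neurons, which feed the output neuron;
    # 'learning' in such a network is the adjustment of these weights.
    return layer(layer(inputs, hidden_weights), output_weights)

print(network([1.0, 0.0]))  # a single output value between 0 and 1
```

The hidden layer is what gives the network somewhere to form intermediate patterns; the question in what follows is what, if anything, plays that role in a social network.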

Inputs are fairly easy to grasp in a social network - they must be things that people do that affect other people, directly or indirectly. As Downes puts it:

 'a change of state in one human can result in a change of state in the second human'. 

Though this is clear, it is a little difficult to understand operationally because there is no hidden layer along the way of the sort found in neural networks. Every single 'neuron' (or analogue) in the entire network is getting stimulation from 'outside' the network, whatever 'outside' means in this context - more on that soon. Moreover, many of the connections in a social network are bidirectional or symmetric, which adds significant complexity to the range of possible interactions. This would lead to the functional equivalent of epilepsy in a neural network, with all neurons firing in what would appear to an outside observer to be some kind of fit, even if we have already chosen a limited set of edges, values and responses to consider. It is possible, though unlikely and certainly demanding significant further proof, that there are configurations of social networks which might not result in a chaotic Red Queen regime of this nature but that could still exhibit learning as Downes describes it. It certainly complicates things, but it may be that this is just another aspect of the infinitely greater complexity of a social network compared with a neural network rather than a profound argument against social networks being capable of learning per se. Moreover, it does not mean that self-organization in social networks does not and cannot occur - it clearly does. But it needs a clearer mechanism to explain it, and it leads to a further intriguing issue, one that will eventually lead me to a kind of acceptance of Downes's point of view even though it raises some notable concerns about what the hell it means in practical terms, and whether it has any deep significance.

People as output neurons

As soon as we introduce the concept of an outside observer we are led to consider what the equivalent of output neurons might be in this network. As we cannot (I think) be describing an external object or system on which 'society' (or any social black box we choose to consider) acts, I think this means that we must be talking about its interpretation by individual people, each of whom processes the inputs he or she receives via topologically local or broadcast routes from others in a vaguely defined network who are, in their own different ways, all doing the same kind of thing too. This recursive definition, in which a processing node is also the summary output of the entire network (unlike the behaviour of a human body as an output of a neural network), is a little tricky to grasp, but sort of makes sense. The overall network of stimuli outside a person feeds into the neural network inside a person through that person's input neurons, mediated through sensory nerve endings in eyes, ears, skin, nasal cavities, tongue, stomach, etc, most of which themselves pre-process the inputs in different ways. Note, however, that the 'overall' network is not some independently observable entity that some god-like being could point at, nor one that we could isolate as anything separate from an individual's perspective. Whereas we can see the output of a neural network as relating to and embedded in a unified body and having meaning in that unified context, a social network can only be defined relative to a given node. Every single node is part of a qualitatively and quantitatively different network with different boundaries from every other. We might point to and intentionally isolate clusters, deliberately organized entities like groups, nations, organizations, Twitter, etc, and emergent collectives.
We might equally treat these as higher-level distinct nodes in the network, but there is no God's-eye view here, no single thing like 'society' that makes use of this network for some other purpose or that is interacting with other things of a similar or even of a different nature. So, each node in the network (i.e. each person) learns something different as a result of being a unique output neuron receiving massive recombinant inputs from those people and objects in his or her immediate and/or broadcast network, which includes, incidentally, not just humans and their shared artefacts but also other things that could be described as being part of their sensed network like wind speed, colour temperature, chemicals in the air, hormone balance, contents of their stomachs, sounds of raindrops falling, itching in their feet, etc, etc, etc, each of which will have varying degrees of salience in any learning context. This ego-centric model of an individual human as an output neuron of a massively complex super-network (at least global and perhaps galactic) has some credibility if we are trying to understand the role that this mass of inputs plays in helping someone to learn, inasmuch as it is then (loosely speaking) up to the individual to determine which if any of the signals he or she receives matters, reducing the potentially infinite range of possibilities to something that is still massively complex but that is at least finite. In effect, individual learning could be looked upon as a very complicated form of network analysis of an ego network done by people, and this is arguably what is meant by sense-making. This still places the onus on an individual to make sense of a complex mass of inputs and still demands some kind of model on the part of the recipient of the data (which may be held in a combination of emergent neural networks plus the results of evolutionary shaping).
But, and it is a big 'but', this doesn't demand that the extrinsic relativistically defined network exhibits any kind of learning in and of itself, even if it does have regularities and dynamics that we may use and observe from our unique position in relation to it. As that network is by definition different for every individual, it is hard to see how the concept of a network itself learning something has any useful explanatory value beyond the rather trivial observation that we make sense of regularities and patterns in the world we encounter - that the network is part of our thinking apparatus, as George, I and others suggest. Making a bridge between that philosophically interesting account of individual learning and learning design remains an unresolved issue that still focuses on the individual. In terms of whether a network itself learns, it begs the question.

A problem for Downes, not for Siemens

It is worthwhile reiterating at this point that deliberate selection of the regularities that we reckon to be significant is not a problem at all if our intent is simply to use that knowledge, in combination with a model of what matters, to further our individual learning through understanding of how networks behave and observing regularities in complex systems. It is absolutely OK to prejudge what is significant if you are thinking of an ego network as a means to help individual people to learn and/or an extension of our own cognition with its own part to play. Accepting that self-organization yields discernible patterns in the world around us does not require that the networks themselves have to learn or perceive.  It is part of the 'how' not of the 'what' aspect of connectivism as a theory of how to learn. There are some really useful things that can be gleaned through network analysis, formal and informal, global and ego-centric without having to accept any theory of networks themselves learning. We can see patterns, discover things like bottlenecks, social connections, popular themes, ways to improve information flow through networks, unexpected clusters and cliques, triads, directionality, and much much more. Knowing how networks behave means that we can use them better. Moreover, knowing that other people are doing the same thing means that we can better understand how we negotiate and come to common understandings of meaning and value. Indeed, what Downes identifies as things his theory can predict or explain,

"such as: societies organized using parliaments rather than mobs will be more stable and will last longer; such as: too much extraneous neural noise, such as a loud buzzing sound, will make it difficult to learn; such as: increasing social resistance through immunization protects society against disease; and on and on and on"

are entirely explicable using network analysis techniques that do not rely on a concept of networks containing knowledge in some, however loosely defined, brain-like way.
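For instance, finding open ('forbidden') triads - the pattern Granovetter made famous - requires nothing more than counting over an adjacency structure, with no assumption that the network itself knows anything. A sketch, using a made-up toy graph:

```python
from itertools import combinations

def open_triads(adjacency):
    """Find open triads: b connects a and c, but a and c are not
    themselves connected (Granovetter's 'forbidden triad' when both
    of b's ties are strong)."""
    triads = []
    for b, neighbours in adjacency.items():
        for a, c in combinations(sorted(neighbours), 2):
            if c not in adjacency.get(a, set()):
                triads.append((a, b, c))
    return triads

# Toy network chosen purely for illustration
graph = {
    'ann': {'bea', 'cal'},
    'bea': {'ann', 'cal'},
    'cal': {'ann', 'bea', 'dev'},
    'dev': {'cal'},
}
print(open_triads(graph))  # [('ann', 'cal', 'dev'), ('bea', 'cal', 'dev')]
```

The analysis is useful precisely because we decided in advance that triads matter; the knowledge sits in the analyst's model, not in the graph.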

A believable theory? Actually, yes it is. But...

Whether or not it can be empirically instantiated without appealing to prior symbol systems, there is a broadly philosophical sense in which I find Downes's model to be quite compelling. Downes wishes to claim that the combination of signals between nodes and their effects on other nodes, configured in a particular set of ways, constitutes emergent learning of some kind in the network as a whole, and implies that this remains true whether or not we are actually able to empirically identify more than a tiny fraction of such interactions or make sense of them without recourse to prior models. As a broad view of the universe as a self-organizing information system that can, in some sense, be seen to have knowledge and to change through co-adaptation of its parts, I like it. I like it for the reasons that I like William Olaf Stapledon's Star Maker, Last and First Men, and Sirius: because it plays with what it means to think, to learn, to be conscious (Stapledon was, interestingly, a philosophy professor as well as a science fiction pioneer). Likewise it resembles Gregory Bateson's view of the entire world as a cohesive cybernetic mind (not totally dissimilar to Lovelock's Gaia hypothesis but much subtler and richer), which is a useful thinking tool. It also directly implies that traffic flows, ecosystems, bodies, cities and galaxies can be thought of as entities that learn too, assuming that the pattern of connections between their parts obeys the rules that Downes lays out (it does). It resembles other accounts that I am very fond of, such as Stewart Brand's 'How Buildings Learn' and Kevin Kelly's 'What Technology Wants', in which we can see an emergent pattern that can be looked upon in some ways as intentional in a collective system, emerging as a result of apparently free and conscious acts of individual people. If this is all that Downes is claiming, then that’s OK. However, Downes wants more than that:

"I care precisely and only about the following:
- whether I can describe what learning is and explains why learning occurs. 
- whether I can use this knowledge to help people learn and make their lives better
"

Downes has indeed described something about what some learning is, whether or not it has practical import for the purposes he intends and whether or not it applies to all learning. This is, however, necessarily true because, assuming that we can in some (not all) ways describe social systems as self-organizing, any and every self-organizing system learns: it achieves meta-stability or actual stability in relation to the things around it. Even a simple thermostat learns to adapt a system to reach the correct temperature. Given that we are considering not thermostats but complex adaptive systems with inherently open-ended behaviours, it also makes sense to talk about networks. As Kevin Kelly claimed in his brilliant 1994 book "Out of Control" , "The only organization capable of unprejudiced growth, or unguided learning, is a network". As Kelly demonstrates, this is a perspective that underpins how ecosystems, evolution, organisms, single cells, crowds, ant colonies, galaxies, geological formations, beats of a human heart and financial systems can be described as learning systems. I agree that networks that conform to Downes's description might indeed be seen as learning in this broad sense, but so do many others. I think that Downes means something more precise than that. I think from what he has written in our dialogue and elsewhere so far that Downes is describing learning in terms of four distinct network processes or features, rather than describing the entire set of all self-organized systems. Downes's account is, however, just one of a quite large class of self-organizing systems, all of which can be meaningfully described as learning in this broad sense.
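The thermostat example can be made concrete: a trivial negative-feedback loop that converges on a set point, 'learning' only in this weakest self-organizing sense. The values are arbitrary:

```python
def thermostat_step(temperature, set_point, gain=0.5):
    """Nudge the temperature toward the set point (negative feedback)."""
    return temperature + gain * (set_point - temperature)

temp = 10.0
for _ in range(20):
    temp = thermostat_step(temp, set_point=21.0)
print(round(temp, 3))  # converges on 21.0
```

The system reliably reaches stability in relation to its environment, yet nobody would claim the thermostat knows anything - which is exactly the sense in which 'learning' here is doing very little work.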

I can come up with a compelling theory of learning in social networks too, that equally explains learning in brains, and that I think fits with a connectivist account like a glove and that is incommensurate with other theories for much the same reasons. In 'Out of Control', Kelly makes the thought-provoking observation that, in any system that evolves, death is the best teacher. From this perspective, species of organism (or connections in brains, or more complex entities like cities, or memes in social networks, or technologies) 'learn' through natural selection, either evolving to be fit to survive or dying, in a complex web of interactions in which other similar and dissimilar entities are in competition with them and also learning, which leads to higher levels of emergent behaviours, increased complexity and changes to the entire environment in which it all happens. Throw in a bit more evolutionary theory such as the need for parcellation between ecosystems along with some limited isthmuses and occasional disruptions, and we are heading towards a theory that sounds to me like a pretty plausible theory of networked learning in both social networks and brains, as well as other self-organizing systems. We could use it to describe how ideas come and go, theories develop, arguments resolve and much much more. It works, I think. Others have run with this idea and explored it in greater depth, of course, such as in neuro-evolutionary methods of machine learning and accounts of brain development following a neural Darwinist model. While probably true at some level, and providing a pretty good and fairly full explanation that is consistent with a connectivist account, this is only of practical import if we can use it to actually help people to learn - a theory of how to learn, not of learning itself. It actually doesn't matter at all even whether it is a full and complete explanation of all learning in all systems or not. 
Downes's theory and 'mine' (I make no claims at all that this is novel) both beg the question of how they might help people to learn and make their lives better. These accounts only have legs if we can put them to use, and doing so invokes models and purposes that are prior to the accounts themselves - most notably connectivism as a theory of how to learn - so we have not really addressed the issue at all.
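The 'death is the best teacher' account sketched above can be illustrated with a toy selection loop (every parameter here is invented for illustration): the less fit half of a population dies each generation, survivors reproduce with random variation, and the population as a whole drifts toward fitness without any individual intending it.

```python
import random

random.seed(42)

def evolve(population, fitness, generations=100, mutation=0.5):
    """'Death is the best teacher': each generation, the less fit half
    dies and the survivors reproduce with small random variation."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:len(population) // 2]
        children = [p + random.uniform(-mutation, mutation) for p in survivors]
        population = survivors + children
    return population

# Fitness: closeness to an arbitrary 'environmental optimum' of 10
fitness = lambda x: -abs(x - 10)
final = evolve([random.uniform(0, 1) for _ in range(20)], fitness)
best = max(final, key=fitness)
print(best)  # the population 'learns' to approach 10
```

Nothing in the loop represents the optimum; the 'knowledge' of where 10 lies exists only in the pattern of who survived, which is the sense in which such systems can be said to learn.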

Does it matter?

All of this is nothing more than nit-picking if we simply accept that connectivism is a broad family of ideas and theories about how to learn in a networked society, all of which adopt a systems view, all of which recognize the distributed nature of knowledge, all of which embrace the role of mediating artefacts, all of which recognize that more is different, all of which describe or prescribe ways to engage in this new ecology. We do not have to agree on the details for it to perform useful work. From this perspective connectivism can be a broad church that accepts the theories I mentioned in my previous post and that George refers to in his earlier post on the subject as being related and meaningful, not to mention later ideas like Dave Cormier's appealing notion of rhizomatic learning that are probably somewhat influenced by connectivist ways of thinking and that I would certainly want to include as one of the family. It can self-consistently accept a theory such as evolution to explain learning, such as I present above, but it does not need to be or contain a unifying theory of learning to be very useful indeed. In fact, shoe-horning it into such a theory, especially a single theory that must conform to a particular pattern, makes it all too easy to ignore or underplay the value of things like that broader class of collectives, like the other ways self-organized systems learn, like the often invaluable role of intentional structures (including traditional groups) in the network, and even like those incommensurable other theories of learning (actually, behaviourism is not really incommensurable because it makes no claims about mechanism even though it is mostly dumb, and instructivism is not a theory of learning though it may imply a primitive version of such a theory, but we will let that pass - sorry, I let a little pedantry slip back in there!).
All of these and more can play a valuable role in supporting and enabling learning. Indeed, if Downes's theory were a true model of learning in social networks, it would presumably have to allow that such theories are themselves emergent actors with a part to play in such networks, albeit at a higher structural level than those provided through his connectionist account. 

My worry is that if, instead of seeing connectivism as a family of ideas, theories and approaches that offer value in helping us to see outside the box of traditional educational methods, we see it as a single cohesive theory of learning, then that theory had better be fairly unassailable or someone coming to it afresh will likely observe its flaws and move on to the next. That's fine in some ways and represents a big part of how knowledge evolves - we do not have to be wedded to theories - but it massively detracts from connectivism's value as a stable centre and catalyst for action, which is a role that it has played very well for very many people over the past few years. I'm not suggesting for a moment that we should not aspire to finding such an all-encompassing theory, but to tie connectivism inextricably to one, when it already works well enough to inspire many thousands of people around the world to see the world differently, seems worse than pointless to me. It is precisely that fear that inspired me to write my previous post and it is why, beyond the learning value of thinking about it in such detail, I have just written another very long post on the subject. If I were the one presenting Downes's theory then it would not matter so much, as it would just be one voice among many, but his name is often associated with George's as one of the theory's central authorities, so his perspective matters more than mine. He is far better known in educational circles than I, he has written a lot about it, and he has worked with George in high-visibility events like CCK08, so his opinions carry significantly greater weight.

As always, I welcome correction, clarification, elucidation, confirmation and debate. This is how learning evolves in networks, and demonstrates their value far better than the content of my posts or those of Downes. The process of writing this, including considering the arguments, the influences, the comments and the reactions to it as well as the actual process of writing the words, has been a learning experience for me and I certainly don't think it has come close to closing down any particular learning path yet. As always when learning happens, it opens more opportunities for learning than it closes, creates confusion as it resolves confusion, and tilts the balance of chaos and order in a slightly different direction. And, I have no doubt, at least some of what I have written is wrong, and I will recognize it as wrong as the conversation continues. We - you, me and all learners - are explorers on the edge of the adjacent possible and that is always expanding. I celebrate that.

 

Jon Dron


Comments

  • Anonymous May 8, 2014 - 7:46pm

    "All of this is nothing more than nit-picking if we simply accept that connectivism is a broad family of ideas and theories about how to learn in a networked society, all of which adopt a systems view, all of which recognize the distributed nature of knowledge, all of which embrace the role of mediating artefacts, all of which recognize that more is different, all of which adopt a systems perspective, all of which describe or proscribe ways to engage in this new ecology"

    -I think this is probably as far as one needs to go in understanding/stating the case.


    - Ken Anderson

  • Jon Dron May 9, 2014 - 9:59am

@Ken - agreed, that's the big point. To call a shift to connectivist thinking a paradigm shift might be pushing it a bit far, but there has definitely been a shift in emphasis over the past few decades from individual assimilation of knowledge to social construction of knowledge to distributed networks of connected knowledge. The older models do not die or lose their original value, but are augmented and enriched by those that follow. With that in mind I think it is useful that we can treat such diverse thinkers as Dewey, Vygotsky, Knowles, and Jonassen as working within the spectrum of social constructivist theories, just as it is useful to think of Salomon, Wenger, Siemens, Downes, Cormier (and me) as working within the spectrum of connectivist theories. It allows us to see patterns and commonalities, make associations and highlight differences more easily than were we to treat each as simply an independent theorist with antecedent influences. This helps to give an emergent shape and shared purpose to an otherwise complex collection of overlapping and connected ideas, and that is useful.