
In games, brains work differently when playing vs. a human

By Jon Dron, February 10, 2009 - 6:06am

http://community.brighton.ac.uk/jd29/weblog/41219.html

Full story at: http://jondron.cofind.net:80/frshowresource.php?tid=5325&resid=1394

Reporting on a study comparing functional MRI scans of people playing the generalised form of the Prisoner's Dilemma. Half were told they were playing against a machine and half that they were playing against a human; in fact, all were playing a machine. If the participants were playing the game logically, there should have been no difference in their behaviour whether the other party was a machine or a human. But there was. Parts of the brain that are more active when trying to comprehend another person's mental state were busier in those who believed they were playing against a human than in those who thought they were playing a machine.
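For the sake of illustration, here is a minimal sketch of why a purely logical player's choice should not depend on who the opponent is. The payoff values are the conventional textbook ones and I am sketching the ordinary one-shot game rather than the generalised form used in the study, so treat it as an assumption-laden toy, not a description of the experiment.

# Toy one-shot Prisoner's Dilemma: the best response is the same whichever
# move the opponent makes, so a purely logical player's choice cannot depend
# on whether the opponent is a human or a machine. Payoff values are the
# standard textbook ones, not those used in the study.

PAYOFFS = {
    # (my_move, their_move): my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move):
    """Return the move that maximises my payoff against a given opponent move."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFFS[(my, their_move)])

for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))
# cooperate -> defect
# defect -> defect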

This seems to point at something very important about how we learn with others and why it matters to learn things together. One of the distinctive and important roles of a teacher (who may be a peer, an expert, a pedagogue, etc) is to help learners to think differently. When we adjust our responses according to who we think we are talking to, we create mental models of that person and, in doing so, open opportunities to change how we think - to think more like them. If that propensity is stronger when interacting with a real person than with a machine, then it presents a good case for real people in online learning. It is not impossible to learn on our own (mediated through technologies like computers and books), but this gives some useful supporting evidence as to why it is often much harder than learning through interaction with a real person.

The study tells us little about how our brains work when we are talking to many people at once but that would seem to be a fruitful area for further study. From my perspective, it would be particularly interesting to find out whether these brain areas are more or less active when interacting with the collective or the network, where it is much harder to identify another's mental state because you (usually) don't know who you are talking to. I would hypothesise that groups, networks and collectives would form a continuum of activity between inter-personal interaction and interaction with a machine, but it may be subtler than that. For instance, in some circumstances it is possible that those sociable areas of the brain might be more active when trying to work out the mental states of a whole group than when trying to do it for an individual.

It would also be interesting to explore ways to compensate for this weakness in systems with no people to talk with. Would it help to include exercises that require us to think like other people (scenario construction, play-acting, poetry, etc.)? Or could we find ways to fool people into thinking they were talking to a real person? Ideally that would involve some kind of Turing-test-passing AI, but maybe there are halfway houses. For instance, an FAQ system where questions unanswered by an automated engine are passed on to real people might feel sufficiently human to have the right effect (a rough sketch follows).
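As a rough sketch only: the matching heuristic, the confidence threshold and every name below are my own illustrative assumptions, not a description of any existing system.

from difflib import SequenceMatcher

# Hypothetical FAQ store: a few canned questions and answers.
FAQ = {
    "How do I reset my password?": "Use the 'forgot password' link on the login page.",
    "Where do I submit my assignment?": "Upload it through the course dropbox.",
}

CONFIDENCE_THRESHOLD = 0.6  # below this, a real person answers instead

def forward_to_human(question):
    # Placeholder: in practice this would queue the question for a tutor or peer.
    return "Passed to a real person: " + question

def answer(question):
    """Answer automatically if a known question matches closely enough,
    otherwise hand the question over to a human."""
    best_question, best_score = None, 0.0
    for known in FAQ:
        score = SequenceMatcher(None, question.lower(), known.lower()).ratio()
        if score > best_score:
            best_question, best_score = known, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return FAQ[best_question]      # the automated engine can handle it
    return forward_to_human(question)  # escalate to a real person

print(answer("How can I reset my password?"))      # answered automatically
print(answer("Why does group work feel harder?"))  # escalated to a human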

On a similar note, it would also be intriguing to analyse differences between questions asked of peers and questions asked of experts. If we are really trying to think like the person we are talking to, then we might expect different kinds of question, which would suggest that we are opening up new opportunities to think like that person. The same would apply in group learning, where we might be dealing with multiple models and accepting or rejecting them as part of the learning process. There's a theory or two in here somewhere!