Landing : Athabascau University

Guesses and Hype Give Way to Data in Study of Education - NYTimes.com

http://www.nytimes.com/2013/09/03/science/applying-new-rigor-in-studying-education.html?_r=0

This is a report on the What Works Clearinghouse, a set of 'evidence-based' experimental studies of things that affect learning outcomes in US schools, measured in the traditional 'did they do better on the tests' manner. It's a great series of reports.

I have a number of big concerns with this approach, however, quite apart from the simplistic measurements of learning outcomes that ignore what is arguably the most important role of education: it is about changing how you think, not just about knowing stuff or acquiring specific skills. There is not much measurement of that, apart from, indirectly, the acquisition of the metaskill of passing tests, which seems counter-productive to me. What bothers me more, though, is the naive analogy between education and clinical practice. The problem is an old one that Checkland expressed quite nicely when talking of soft systems:

“Thus, if a reader tells the author ‘I have used your methodology and it works’, the author will have to reply ‘How do you know that better results might not have been obtained by an ad hoc approach?’ If the assertion is: ‘The methodology does not work’ the author may reply, ungraciously but with logic, ‘How do you know the poor results were not due simply to your incompetence in using the methodology?’”

Not only can good methodologies be used badly, but bad methodologies can be used well. Teaching and learning are creative acts, each transaction unique and unrepeatable. The worst textbook in the world can be saved by the best teacher; the best methodology can be wrecked by an incompetent or uncaring implementation. Viewed by statistical evidence alone, lectures are rubbish, but most of us who have been educated for long enough using such methods can probably identify at least the odd occasion when our learning has been transformed by one. Equally, if we have been subjected to a poorly conducted active learning methodology, we may have been untouched or, worse, put off learning about the subject. It ain't what you do, it's the way that you do it.

Comparing education with medicine is a category mistake. It would be better to compare it with music or painting, for instance. 'Experimental studies show that children make better art with pencils than with paints' might be an interesting finding as a statistical oddity, but it would be a crass mistake to therefore no longer allow children to have access to paintbrushes. 'On average, children playing violins make a horrible noise' would not be a reason to stop children from learning to play the violin, though it is undoubtedly true. But it is no more ridiculous than telling us that 'textbook X leads to better outcomes than textbook Y', that a particular pedagogy is more effective than another, or that a particular piece of educational software produces no measurable improvement over not using it.

Interestingly, the latter point is made in a report from the 'What Works Clearinghouse' site at http://ies.ed.gov/ncee/pubs/20094041/pdf/20094041.pdf which, amongst other interesting observations, makes the point that the only thing that does make a statistical difference in the study is teacher/student ratios. Low ratios allow teachers to exhibit artistry, to adapt to learners' needs, to demonstrate caring for individuals' learning more easily. This is not about a method that works - it is about enabling multiple methods, adapted to needs. It is about allowing the teacher to be an artist, not an assembly worker implementing a fixed set of techniques.

I am not against experimental studies as long as we are very clear and critical in our interpretation of them and do not over-generalize the results. It would be very useful to know that something really does not ever work for anyone, but I'm not aware of many unequivocal examples of this. Even reward and punishment, which fails in the overwhelming majority of cases, has at least some evidence of success in some cases for some people - very few, but enough to show it is not always wrong.

Even doing nothing, which, surely, must be a prime candidate for universal failure, sometimes works very well. I was once in a maths class at school taken by a teacher who, for the last few months of the two-year course, was taken ill. His replacements (for some time we had a different teacher every week, most of whom were not maths teachers and knew nothing of the syllabus) did very little more than sit at the front of the class and keep order while we studied the textbook and chatted amongst ourselves. The average class grade in the national exams sat at the end of it all was considerably higher than had ever been achieved in that school previously - over half of us got A grades where, in the past, twenty percent would have been a good showing. Of course, 'nothing' does not begin to describe what actually happened in the class in the absence of a teacher. The textbook itself was a teacher and, more importantly, we were one another's teachers. Our sick teacher had probably inspired us, and the very fact that we were left adrift probably pulled us closer together and made us focus differently than we would have done in the presence of a teacher. Maybe we benefited from the diversity of stand-in teachers. We were probably the kind of group that would benefit from being given more control over our own learning - we were the top set in a school that operated a streaming policy so, had it happened to a different group, the results might have been disastrous. Perhaps we were just a statistically improbable group of maths geniuses (not so for me, certainly, so we might rule that one out!). Maybe the test was easier that year (unlikely, as about half a dozen other groups didn't show such improvement, but perhaps we just happened to have learned the right things for that particular test). I don't know. And that is the point: the process of learning is hugely complex, multi-faceted, influenced by millions of small and large factors. Again, this is more like art than medicine.

The difference between a great painting and a mediocre one is, in many cases, quantitatively small, and often a painting that disobeys the 'rules' may be far greater than one that keeps to them. The difference between a competent musician and a maestro is not that great, viewed objectively. In fact, many of my favourite musicians have objectively poor technique, but I would listen to them any day rather than a 'perfect' rendition of a MIDI file played by an unerring computer. The same is true of great teaching, although this doesn't mean it is necessarily the result of a single great teacher - the role may be distributed among other learners, creators of content, designers of education systems, etc. I'm fairly sure that, on average, removing a teacher from a classroom at a critical point would not be the best way to ensure high grades in exams, but in this case it appeared to work, for reasons that are unclear but worth investigating. An experimental study might have overlooked us and, even if it did not, would tell us very little about the most important thing here: why it worked.

We can use experimental studies as a starting point for exploring how and why things fail and how and why they succeed. They are the beginning of a design process, or steps along the way, but they are not the end. It is useful to know that low teacher/student ratios are a strong predictor of success, but only because it encourages us to investigate why that is so. It is even more interesting to investigate why it does not always appear to work. Unlike clinical studies, the answer is seldom reducible to science and definitely not to statistics, but knowing such things can make us better teachers.

I look forward to the corollary of the What Works Clearinghouse - the Why it Works Clearinghouse.