They make you think
A couple of days ago Susie Mallett's Conductor blog carried a long posting, called 'Let us be careful about aphorisms, principles and proverbs', expressing concern over some of the simplistic and potentially misleading terms in which conductors' work is often described. This was couched in a characteristically complex account of actual conductive practice. I commented under this iconoclastic and challenging piece yesterday, but it would not let me go.
It made me think. I do hope that some who read my words here will take the time to read this posting. You too may find it thought-provoking, as I did:
One of the things that it had me thinking about was the (to me) ever-interesting problem of 'Conductive Education research'.
CE-research: find the paradigm
What implication might the complexity of the mechanisms of conduction hold for empirical outcome-evaluation (which is all that many mean when they talk about 'CE-research')?
One of the problems in adopting the sort of research methodology so often demanded under the call for 'evidence-based practice' is how to ensure that would-be comparative groups receive comparable input. In the past, taking the lead from the critical research review by Sue Ludwig and her colleagues in Alberta, I have often (somewhat mischievously, I admit) advocated the use of 'treatment manuals' as a step towards achieving this. I am not altogether sure, but I rather suspect that Susie Mallett's posting referred to here rather explodes this possibility – not that anyone has tried out treatment manuals in CE-research in the more than ten years since that research review was published.
I doubt that there is anything new in what I am writing here. This problem must have been apparent since the dawn of research into teaching processes of any kind. Of course it must be widely known (and probably deeply theorised too) that every instance of teaching and learning is unique, of itself, never to be repeated. I do not consider there to be anything profound in expecting that it could ever be anything else, given circumstances that are of their nature so multifactorial, interactive and dynamic. I really wonder whether a 'treatment manual' can ever match the continually flexing nature of good conductive pedagogy – so why bother to try? Why try to jam conduction into the Procrustean bed of clinical-style comparative evaluation? And if one cannot manage a treatment manual, how can one pretend to be comparing treatments?
So what is a poor empirical evaluator to do about this? This is not immediately my problem – but I do not consider a satisfactory answer to be 'Ignore it'. Perhaps a start would be 'Admit it', then dip deeply into the phenomenon to be studied, and come up with something altogether new, sufficient to the task. Perhaps this might involve asking some very different questions.
As usual, I wonder what others think.
Ludwig, S. and others (2000) Conductive Education for children with cerebral palsy, Edmonton, Alberta Heritage Foundation for Medical Research.
Specifically relevant here are their mentions of 'treatment manuals', see pp. 27, 30, 31 and 36.
Mallett, S. (2011) Let us be careful about aphorisms, principles and proverbs, Conductor, 10 April