Evidence in teaching – notes from the reading group

Suzi read Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research (Biesta, G., 2007) and a chapter by Alberto Masala from the forthcoming book From Personality to Virtue: Essays in the Philosophy of Character, ed. Alberto Masala and Jonathan Webber (OUP, 2015).

Biesta gives what is broadly an argument against deprofessionalisation, in the context of government literacy and numeracy initiatives at primary school level. I found the main argument somewhat unclear. It was most convincing on the difficulty of defining what education is for, which in turn makes it difficult to test whether an intervention has worked. Biesta talks at length about John Dewey, his description of education as a moral practice, and his account of learning as reflective, experimental problem solving.

“A democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.”

Masala’s chapter is on virtue/character education but is of wider interest as it talks very clearly about educational theory. Particularly useful in this context is the distinction between skill as competence (defined by performance, and so easily testable) and skill as mastery (defined by a search for superior understanding, and less easily tested), and the warning about the danger of emphasising competence.

Hilary read Version Two: Revising a MOOC on Undergraduate STEM Teaching, a blog post which briefly outlines some key approaches and intended developments in a Coursera MOOC aimed at STEM graduates and postdocs interested in developing their teaching.

The author of the blog post is Derek Bruff, director of the Vanderbilt University Center for Teaching and senior lecturer in the Vanderbilt Department of Mathematics, with interests in agile learning, social media and student response systems (SRS), amongst other things (see http://derekbruff.org/).

Two key points:

  1. MOOC-centred learning communities – the MOOC adopted a facilitated blended approach, building on the physical groupings of graduate-student participants by supporting 42 learning communities across the US, UK and Australia, which used face-to-face activities to augment the course materials and improve completion rates.
  2. Red Pill: Blue Pill – adopting the metaphor used by George Siemens in the Data, Learning and Analytics MOOC, the course offered two routes to completion: an instructor-led approach, which was more didactic and focused on the ability to understand and apply a broad spectrum of knowledge, or a student-directed approach, which used peer-graded assignments and gave students the opportunity to pick the materials that most interested them, and so to gain a deeper but less comprehensive understanding of the topic.

Final takeaway – networked learning is hard, as would be the logistics of offering staff/student development opportunities as combined online and face-to-face modules with different pathways through the materials. But interesting …

Steve read Building evidence into education, a 2013 report by Ben Goldacre for the UK government.

A very accessible summary of the case for evidence-based pedagogy in the form of large-scale randomised controlled trials (RCTs). Compares current ‘anecdote/authority’ education research with medicine’s pre-evidence past – lots of interesting analogies. Focused on primary/secondary education, but some ideas could transfer to higher education – although that would be more challenging.

Presents counterarguments to a number of common objections to the RCT approach – it IS ethical if you are comparing methods where you don’t know which is best (and if you do know, why bother trialling?!). Difficulty in measuring outcomes is not a reason to discount the approach; RCTs are a way to remove noise. Talks about the importance of being aware of context and applicability. Uses some good medical examples to illustrate points.

Sketches out an initial framework – teachers don’t need to be research experts (doctors aren’t); instead, a research-focused team should lead and guide, working with stats/trials experts etc.

Got me thinking – definitely worth a read.

Roger read Using technology for teaching and learning in higher education: a critical review of the role of evidence in informing practice (2014) by Price and Kirkwood.

This study explores the extent to which evidence informs teachers’ use of technology-enhanced learning (TEL) in higher education. It involved a literature review, an online questionnaire and focus groups. The authors found that there are differing views on what constitutes evidence, which reflect differing views on learning and may be characteristic of particular disciplines; as an example, they suggest a preference for large-scale quantitative studies in medical education.
In general, evidence is under-used by teachers in HE, with staff influenced more by their colleagues and more concerned with what works than with why it works. Educational development teams have an important role as mediators of evidence.

This was a very readable and engaging piece, although the conclusions didn’t come as much of a surprise! The evidence framework they used (page 6) was interesting, with impact categorised as micro (e.g. an individual teacher), meso (e.g. within a department) or macro (across multiple institutions).

Mike read Evidence-based education: is it really that straightforward? (2013) by Marc Smith, a Guardian Education response to Ben Goldacre.

This is a thoughtful and well-argued response to Goldacre’s call for educational research to learn from medical research, particularly in the form of randomised controlled trials. Smith is not against RCTs, but suggests they are not a silver bullet.

Smith applauds the idea that teachers should drive the research agenda and agrees that we need more evidence. His argument that it will be challenging to change the culture of teaching to achieve this seems valid, but is not necessarily a reason not to try. The thrust of his argument is that RCTs, whilst effective in medicine, are harder to apply to education due to the complexity of teaching and learning. He believes (and I tend to agree) that cause and effect are harder to determine in the educational context. Smith argues that in medicine there is a specific problem (an illness or condition) and a predefined intended outcome (a change to that condition); measuring this can be problematic even in the medical context, but it is harder still in education. I would add that the educational environment as a whole is harder to control and interventions are more difficult to replicate: different teachers could attempt to deliver the same set of interventions but actually deliver radically different sessions, to learners who will interact with the learning in a variety of ways. Can education be thought of as a change of state caused by an intervention, in the same way we would prescribe a drug for a specific ailment?

All this is not to say that RCTs cannot play a role, but that you have to think about what you are trying to research before choosing your methodology (some of the interventions Goldacre addressed related to specific, quantitatively measurable things like teenage pregnancy rates or criminal activity). Perhaps it is my social scientist bias, but I would still want to triangulate using a range of methods, both quantitative and qualitative.

From a personal perspective, I sometimes think that ideas translated from science to a more social-scientific context can lose some scientific validity in the process (though this is perhaps more true at the level of theory than of scientific practice). For example, Dawkins translated selfish genes into the concept of cultural memes, suggesting cultural traits are transmitted in the same way as genetic code. Malcolm Gladwell’s tipping point is a metaphor from epidemiology which he applies to the spreading of ideas, bringing much metaphorical baggage in the process. Perhaps randomised controlled trials could provide better evidence for the validity of these theories too?