Evidence in teaching – notes from the reading group

Suzi read Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research (Biesta, G., 2007) and a chapter by Alberto Masala from the forthcoming book From Personality to Virtue: Essays in the Philosophy of Character (ed. Alberto Masala and Jonathan Webber, OUP, 2015).

Biesta gives what is broadly an argument against deprofessionalisation, in the context of government literacy and numeracy initiatives at primary school level. I found the main argument somewhat unclear. It was most convincing on the difficulty of defining what education is for, which makes it hard to test whether an intervention has worked. The paper talks at length about John Dewey and his description of education as a moral practice and of learning as reflective, experimental problem solving.

“A democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.”

Masala’s paper is on virtue/character education but is of wider interest as it talks very clearly about educational theory. Particularly useful in this context was the distinction between skill as competence (defined by performance, so easily testable) and skill as mastery (defined by a search for superior understanding, so less easily tested), and the danger of emphasising competence.

Hilary read Version Two: Revising a MOOC on Undergraduate STEM Teaching, which briefly outlined some key approaches and intended developments in a Coursera MOOC aimed at STEM graduates and postdocs interested in developing their teaching.

The author of the blog post is Derek Bruff (director of the Vanderbilt University Center for Teaching and senior lecturer in the Vanderbilt Department of Mathematics, with interests in agile learning, social media and student response systems, amongst other things: see http://derekbruff.org/).

Two key points:

  1. MOOC-centred learning communities – the MOOC adopted a facilitated blended approach, building on the physical groupings of graduate student participants by supporting 42 learning communities across the US, UK and Australia, which used face-to-face activities to augment the course materials and improve completion rates.
  2. Red Pill: Blue Pill – adopting the metaphor used by George Siemens in the Data, Learning and Analytics MOOC, the course offered two ways to complete: an instructor-led approach, which was more didactic and focused on the ability to understand and apply a broad spectrum of knowledge, or a student-directed approach, which used peer-graded assignments and gave students the opportunity to pick the materials that most interested them, and so focus on gaining a deeper but less comprehensive understanding of the topic.

Final takeaway – networked learning is hard, as would be the logistics of offering staff/student development opportunities as combined online and face-to-face modules with different pathways through the materials, but interesting …

Steve read Building evidence into education, a 2013 report by Ben Goldacre for the UK government.

Very accessible summary of the case for evidence-based pedagogy in the form of large-scale randomised controlled trials (RCTs). Compares current ‘anecdote/authority’ education research with past medical work – lots of interesting analogies. Focused on primary/secondary education, but some ideas could transfer to higher education, although that would be more challenging.

Presents counterarguments to a number of common objections to the RCT approach – it IS ethical if comparing methods where you don’t know which is best (and if you do know, why bother trialling?!). Difficulty of measurement is not a reason to discount the approach; RCTs are a way to remove noise. Talks about the importance of being aware of context and applicability. Uses some good medical examples to illustrate points.

Sketches out an initial framework – teachers don’t need to be research experts (doctors aren’t); instead a research-focused team should lead and guide, with stats/trials experts etc.

Got me thinking – definitely worth a read.

Roger read Using technology for teaching and learning in higher education: a critical review of the role of evidence in informing practice (2014) by Price and Kirkwood.

This study explores the extent to which evidence informs teachers’ use of technology-enhanced learning (TEL) in higher education. It involved a literature review, an online questionnaire and focus groups. The authors found that there are differing views on what constitutes evidence; these reflect differing views on learning and may be characteristic of particular disciplines. As an example, they suggest a preference for large-scale quantitative studies in medical education.
In general, evidence is under-used by teachers in HE, with staff influenced more by their colleagues and more concerned with what works than why. Educational development teams have an important role as mediators of evidence.

This was a very readable and engaging piece, although the conclusions didn’t come as much of a surprise! The evidence framework they used (page 6) was interesting, with impact categorised as micro (e.g. an individual teacher), meso (e.g. within a department) or macro (across multiple institutions).

Mike read Evidence-based education: is it really that straightforward? (2013) by Marc Smith, a Guardian Education response to Ben Goldacre.

This is a thoughtful and well-argued response to Goldacre’s call for educational research to learn from medical research, particularly in the form of randomised controlled trials. Smith is not against RCTs, but suggests they are not a silver bullet.

Smith applauds the idea that we need teachers to drive the research agenda and agrees that we do need more evidence. His argument that it will be challenging to change the culture of teaching to achieve this seems valid, but is not necessarily a reason not to try. The thrust of his argument is that RCTs, whilst effective in medicine, are harder to apply to education because of the complexity of teaching and learning. He believes (and I tend to agree) that cause and effect are harder to determine in the educational context. Smith argues that in medicine there is a specific problem (an illness or condition) and a predefined intended outcome (a change to that condition); measuring this can be problematic even in the medical context, and it is harder still in education. I would add that the environment as a whole is harder to control and interventions are more difficult to replicate. Different teachers could attempt to deliver the same set of interventions but actually deliver radically different sessions, to learners who will interact with the learning in a variety of ways. Can education be thought of as a change of state caused by an intervention, in the same way we would prescribe a drug for a specific ailment?

All this is not to say that RCTs cannot play a role, but that you have to think about what you are trying to research before choosing your methodology (some of the interventions Goldacre addressed related to specific, quantitatively measurable things like teenage pregnancy rates or criminal activity). Perhaps it is my social scientist bias, but I would still want to triangulate using a range of methods, quantitative and qualitative.

From a personal perspective, I sometimes think that ideas translated from science to a more social-scientific context can lose some scientific validity in the process (though this is perhaps more true at the level of theory than of scientific practice). For example, Dawkins translated selfish genes into the concept of cultural memes, suggesting cultural traits are transmitted in the same way as genetic code. Malcolm Gladwell’s tipping point is a metaphor from epidemiology which he applies to the spreading of ideas, bringing much metaphorical baggage in the process. Perhaps randomised controlled trials could provide better evidence for the validity of these theories too?

53 powerful ideas (well, 4 of them at least) – notes from the reading group

This month we picked articles from SEDA’s 53 powerful ideas all teachers should know about blog.

Mike read Students’ marks are often determined as much by the way assessment is configured as by how much students have learnt

Many of the points made in this article are hard to dispute. Institutions and subject areas vary so widely that the way marks are determined differs not only between, say, Fine Art and Medicine, but also between similar subjects at the same institution, and between the same subject at different institutions. This may reflect policy or process (e.g. dropping the lowest mark before calculating the final grade). In particular, Gibbs argues that coursework tends to encourage students to focus on certain areas of the curriculum rather than testing knowledge of the whole curriculum. Gibbs also feels these things are not always clear to external examiners, and he does not feel that the QAA’s emphasis on learning outcomes addresses these shortcomings.

The article (perhaps not surprisingly) does not come up with a perfect answer to what is a complex problem. Would we expect Fine Artists to be assessed in the same way as doctors? How can we ensure qualifications from different institutions are comparable? Some ideas are explored, such as asking students to write more coursework essays to cover the curriculum and then marking a sample; this is, however, rejected as something students would not tolerate. The main thing I take from this is that thinking carefully about what you really need to assess when designing the assessment is important (nothing new really). For example, is it important that students take away a breadth of knowledge of the curriculum, or develop a sophistication of argument? Design the assessment to reflect the need.

Suzi read Standards applied to teaching are lower than standards applied to research and You can measure and judge teaching

The first article looks at the difference between the way academics receive training for teaching and for research, and the way teaching and research are evaluated and accredited. Teaching, as you might imagine, comes off worse in all cases. There aren’t any solutions proposed, though the author muses on what would happen if the two were treated in the same way:

“Imagine a situation in which the bottom 75% of academics, in terms of teaching quality, were labelled ‘inactive’ as teachers and so didn’t do it (and so were not paid for it).”

The second argues that students can evaluate courses well if you ask them the right things: to comment on behaviours which are known to affect learning. There didn’t seem to be enough evidence in the article to really evaluate the author’s conclusions.

The argument put forward at the end seemed sensible: that evaluating for student engagement works well (while evaluating for satisfaction, as we do in the UK, doesn’t).

The SEEQ (Students’ Evaluation of Educational Quality), a standardised (if long) list of questions for evaluating teaching by engagement, looks like a useful resource.

Roger read Students do not necessarily know what is good for them.

This describes three examples where students and/or the NUS have demanded or expressed a preference for certain things which may not actually be to their benefit in the longer term. Gibbs believes these cases can be due to a lack of sophistication in learners (“unsophisticated learners want unsophisticated teaching”) or a lack of awareness of what the consequences of their demands might be (in policy or practice).

The first example is class contact hours. Gibbs asserts that there is a strong link between total study hours (including independent study) and learning gain, but no such link between class contact hours and learning gain. Increasing contact hours often means increasing class sizes, which generally means a dip in student performance levels. Secondly, he looks at assessment criteria, saying that students are demanding “ever more detailed specification of criteria for marking”, which he states is ineffective in itself for helping students get good marks, as people interpret criteria differently. A more effective mechanism would be discussion of a range of examples where students have approached a task in different ways, and how these meet the criteria. Thirdly, he says that students want marks for everything, but evidence suggests that they learn more when receiving formative feedback with no marks, as otherwise they can focus more on the mark than on the feedback itself.

The solution, he suggests, is to make evidence-based judgements which take into account student views but are not entirely driven by them, to try to help students develop their sophistication as learners, and to explain why you are taking a certain approach. This article resonated with me in a number of ways, especially with regard to assessment criteria and feedback. There is an excellent example of practice in the Graduate School of Education, where the lecturer provides a screencast in which she goes through an example of a top-level assignment, explaining what makes it so good. She has found that this has greatly reduced the number of student queries along the lines of “What do I need to do to get a first / meet the criteria?”. I also strongly agree with his point about explaining to students the rationale for taking a particular pedagogic approach. Sometimes we assume that students know why a certain teaching method is educationally beneficial in a particular context, but in reality they don’t. And sometimes students resist particular approaches (peer review, anyone?) without necessarily having insight into how these may be helpful for their learning.

6 very good things about MIT’s #medialabcourse MOOC

I started taking the MIT Media Lab’s Learning Creative Learning MOOC (often referred to as #medialabcourse or LCL) at the beginning of February. It’s something I’ve done in my spare time rather than directly for work, but it’s been a great experience and I wanted to reflect on what has worked so well for me.

1. Google+ communities. Google+ turns out to be really rather good for groups and group discussions. The combination of threaded discussion (with email notifications of responses) and a microblogging-style front page (making it easy to scan through new posts) has certainly promoted impressively engaging and lively discussion. It’s even (and I can’t believe I’m saying this about a Google product) nice to look at.

2. Small groups. People who enrolled in time were placed into small groups, each with its own email list and each encouraged to set up its own Google+ group. These small groups (my own included) have largely petered out – but others have survived, often by picking up refugees from the less active groups, and I joined one of those. They provide a safer, less public arena for discussion – especially for people who are perhaps less confident, or for material that doesn’t seem important / relevant / polished enough to share with the world.

3. Openness. LCL was designed to be almost entirely open, based on P2PU’s Mechanical MOOC. Course reading is published on a public website and the main community is an open Google+ group. Weekly emails are sent out to remind people about that week’s activity and reading. Even with the small groups, I get the impression it’s those who left their Google+ communities open that have survived, because they could pick up new members. As well as being a Good Thing in itself, this openness makes it easier to navigate the course and to access the materials from a range of computers and devices.

4. Variety. Each week there are suggested readings, an activity and further resources. There’s also a video panel discussion, and of course there’s continuous activity and discussion on the Google+ community. Early on in the course, the course leaders stated explicitly that people should engage with what they can / what interests them and not feel they have to do everything. The variety of tasks and materials (some of the “readings” are short videos) makes it possible to stay engaged even when you have little time to spare.

5. Events. There are live-broadcast panel discussions each week, directly relating to the week’s reading and activity. The video stream for these is embedded within a chat forum so that you can chat with your fellow students while you watch, and submit questions for the Q&A section at the end. These broadcasts feel very personal and inclusive; they are relaxed and conversational in tone. Course moderators join the chat rooms, providing helpful information, support with technical issues and (maybe more than anything else) a real sense that the online participants do matter. As a teaching device, I’m not sure how well they work – I find myself picking up fragments of the video and fragments of the chat and not properly engaging with either. But they can be a useful place to reflect on and refine my ideas, and they help give the course a nice pace.

6. Enthusiasm. Mitch Resnick, Natalie Rusk and the rest of the course team exude enthusiasm for their subject, excitement about the course, and an openness that makes you feel like a real student. They seem friendly and genuinely interested in what online participants are saying. I think their attitude sets the tone for the community as a whole.

Active learning – notes from the reading group

Active learning might be an unhelpfully broad topic but there are some very helpful ideas in these papers.

  • Bonwell, C. (1991), Active learning: creating excitement in the classroom, ERIC Digest – The article starts by defining what active learning is, the key factor being that students must do more than just listen: they should read, write, discuss and problem-solve. It identifies the main barrier to the use of active learning as risk – for example, that students will not participate or that the teacher loses control – and suggests ways to address this, such as trying low-risk strategies: short, structured, well-planned activities.
  • Prince, M. (2004), Does Active Learning Work? A Review of the Research, Journal of Engineering Education, 93(3), 223-232 – Splits active learning into constituent parts and looks at the evidence for (often relatively minor) interventions covering each of these parts, in an attempt to identify what really works. A useful reference for anyone looking for quantitative evidence for active-learning-type interventions, with a useful discussion of what leads to successful (or unsuccessful) problem-based learning.
  • Jenkins, M. (2010), Active Learning Typology: a case study of the University of Gloucestershire – The paper describes how an ‘active learning’ strategy has been implemented at the University of Gloucestershire. In the first paragraph Jenkins provides some references on active learning that unpack its meaning, which helped us better understand the term and put it into context – for example: “the role of the teacher is not to transmit knowledge to a passive recipient, but to structure the learner’s engagement with the knowledge, practising the high-level cognitive skills that enable them to make that knowledge their own” (Laurillard, 2008, p. 527). This is compared with the understanding of ‘active learning’ among staff at the university, who were asked through a survey to identify their conceptions of active learning. The results fell into three ‘families’: 1) external (students are active when they learn by doing), 2) internal (students are active when they are engaged in cognitive processes) and 3) holistic (a composite of the two, where active learning is generally investigative, developmental and creative). An interesting perspective is the distinction in interpretation depending on whether the emphasis is placed on the student or the teacher: is active learning what the teacher gets the students to do, or what learning is done by students? The data showed a split between some staff practising ‘active teaching’ and others practising ‘active learning’. The outcome of the project is a framework for staff to work with, which is very useful and identifies common elements of active learning in five categories: co-learning opportunities, authenticity, reflection, skills development and student support.

Applying the Mumford method to report-writing

Philosopher Stephen Mumford has developed a process for writing academic papers, known as the Mumford method. It involves producing a summary of your argument in a very particular format, using this summary when speaking (both as notes for yourself and as a handout for the audience), refining it after feedback each time you present, and eventually writing up. It has been used by everyone from professors to A-level students, and it always sounded like a convincing idea.

I decided to try it out when working on a recent internal report on Open Education at Bristol, in collaboration with my colleague Jane Williams, and it worked well. We initially produced a handout, roughly in the format Mumford describes. After several iterations of this handout we used it as our plan for the final briefing paper.

Although we started with the Mumford method instructions, I made some small refinements for our slightly different circumstances. My summary was (see the layout sketch after this list):

  • single-sided
  • landscape with 4 columns of 10pt text (as the points being made tended to be relatively brief)
  • sub-divided into section headings (these did not neatly fit with the 4 columns but that was fine)
  • produced in Google Drive to allow collaboration (this involved using a table for the columns – a little fiddly but workable)
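For anyone wanting to recreate the layout outside Google Drive, here is a minimal LaTeX sketch of the same format. It is an illustration rather than the template we used (ours was a Google Drive table), and the margin and column-gap values are my own guesses:

  % Sketch of a single-sided, landscape handout with 4 columns of 10pt text.
  % The margin and \columnsep values are assumptions, not the original spec.
  \documentclass[10pt]{article}
  \usepackage[a4paper,landscape,margin=1.5cm]{geometry}
  \usepackage{multicol}
  \setlength{\columnsep}{18pt}

  \begin{document}
  \begin{multicols}{4}
  \section*{Background}
  Brief numbered points of the argument go here\ldots
  \section*{Proposal}
  Headings simply run on within the columns, so the section
  divisions need not fit neatly into the four columns.
  \end{multicols}
  \end{document}

One nice property of the multicols environment is that headings fall wherever the text does, which matches the point above about the section headings not aligning with the columns.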

We used this handout both for meetings with individuals and when presenting the paper at larger meetings for consultation, and it was very effective as an aid to discussion.

I was tasked with the write-up and found I could produce the report relatively quickly from the outline (which I had talked through many times by that point). Each of the four columns produced almost exactly one A4 page of relatively spare prose, more than I had anticipated. But the argument remained very clear, and it was extremely easy to produce a summary of the key points, drawing almost directly from the handout. It’s definitely something I’ll use again.

Event: Re-imagining open education, published works and social media, 16 Oct 2012, London

By Suzi Wells

Having booked at the last minute I was a little unsure what to expect at this one-day workshop. It was publicised through the MEDEV website. Not having a background in medical education I wasn’t sure how relevant it would be to me.

I’m very glad I managed to go along. Two main projects were discussed: Oxford’s Open Spires (and especially Great Writers Inspire and the World War I centenary); and Newcastle’s PublishOER (working with Elsevier to investigate the use of publishers’ materials in OERs).

There’s lots I’d like to follow up from the day, and it was fantastic timing for our new project on OERs at Bristol. My full (and rather rough) notes are below. A few of the key things for me were these:

  • OERs are not new, but it feels like we’re at the beginning of something
  • it is not an area that universities can ignore, and this seems to be increasingly well-recognised
  • if Elsevier are anything to go by, publishers are also recognising that this is something they need to engage with (though Elsevier may have more reason to engage than most, because of the academic boycott against them)
  • the issues around licensing (and, especially in the case of medical content, consent) are complex – they can be made more manageable but they will still be non-trivial

Workshop details on MEDEV website


Smart pens for worked examples – a new case-study

By Suzi Wells

There’s a new case study on our main site: worked examples using smart pens in biochemistry. As part of our e-pens pilot project, Gus Cameron in Biochemistry has been providing animated PDFs with audio commentary for his students, showing how to work through questions.

The pens have proved easy to use and the materials have been well received by students. It’s unfortunate that the PDFs produced are, because of their audio content, strictly speaking not standards-compliant. But they work fine in Adobe’s PDF viewer, which is freely available and standard on computers at Bristol. I’m not sure we’d recommend them for creating a bank of re-usable learning materials, but as a quick and easy way to create just-in-time materials, especially those containing diagrams or notation, they seem very good.

Maths in the bio sciences, and elsewhere

By Suzi Wells

Last week I attended the HEA BioMaths Challenges workshop at the (rather grand) Nuffield Foundation building in London. It was an excellent event, and I came away with more interesting things to follow up than I will have time for.

As well as the usual drivers – mixed student ability, experience and confidence in maths and statistics – several speakers mentioned the changing requirements for maths in the bio sciences. More and bigger data sets are available, which you can’t begin to understand without quite advanced statistical techniques. And maths is required to create predictive models of biological systems – models which have become possible as our understanding of these systems has become more complete.

Confidence was also mentioned, with one speaker declaring war on maths phobia. Toby Carter (Anglia Ruskin University) demonstrated the impressively fun-looking StarLogo TNG, which they have used very successfully with third-year students and postgraduates as a way into simulations that does not require programming experience. It also got me wondering how much the Code Club initiative – after-school programming clubs for 10-11 year olds – might be indirectly addressing maths confidence and helping to open up advanced maths to students.

Hearing about how people used open educational resources (OERs) within this context was also interesting. Not much mention of repurposing them and incorporating them into an institution’s own material. But students would use them, sometimes being directed to them within problem classes, and one person mentioned using them as lecture notes: printing them out and talking around the maths.

Following this event there will be more information on the Biomaths Education Network website, set up by Jenny Koenig (Cambridge) and Dawn Hawkins (Anglia Ruskin University). At Bristol I’ll add more to our Maths and Stats Teaching (UoB only) section on Blackboard. I believe the slides from the day will also be available in the future.

(Note to speakers / attendees – as we were told the Chatham House Rule applied, and my notes were somewhat disorganised, I haven’t attributed anything that wasn’t in people’s slides as they appear on the data stick. Drop me an email at suzi.wells@bristol.ac.uk if you’d like me to change anything.)

Reading group notes: MOOCs

Characteristics of MOOCs (from Wikipedia) – (Roger) – participants are distributed, course materials are available on the web, the course is built on a connectivist approach, it is typically free but may charge for accreditation, and typical components might be a weekly presentation, discussion questions, suggested further resources, and personal reflection and sharing of resources. I also tried registering for Stephen Downes’s Change.mooc.ca, and was interested that the four types of activity suggested for the course reflect important aspects of the way I work quite well: these are 1) aggregate, 2) remix, 3) repurpose and 4) feed forward.

Disrupting College, Clayton M. Christensen, Michael B. Horn, 2011 (Suzi) – Policy paper arguing that we are in for a massive change (a disruption) in the way HE works and that (amongst other things) the only way for existing institutions to take advantage of this is to create autonomous business units to work in this area.

What Can We Learn From Stanford University’s Free Online Computer Science Courses?, Seb Schmoller, 2011 (Suzi) – Seb’s experiences on the Stanford AI course and his thoughts about what this means for the sector. Stanford will be learning a lot and getting well ahead of the game by running these courses. Other institutions will not be able to match their numbers – collaboration may be the only way to compete.

Suggested reading