Suzi read Real geek: Measuring indirect beneficiaries – attempting to square the circle? from the Oxfam Policy & Practice blog. I was interested in the parallels with our work:
- They seek to measure indirect beneficiaries of their work, as we do
- Evaluation is used to improve programme quality (rather than organisational accountability)
- In both cases there’s a pressure for “vanity metrics”
- The approaches they talk about sound like an application of “agile” to fundamentally non-technological processes
The paper is written at an early point in the process of redesigning their measurement and evaluation of influencing. Their aim is to improve the measurement of indirect beneficiaries at different stages of the chain, adjust plans, and “test our theory of change and the assumptions we make”. Evaluation is different when you are a direct service provider from when you are a “convenor, broker or catalyst”. They are designing an evaluation approach that will be integrated into the day-to-day running of any initiative – there is a balance to strike between rigour and the amount of work needed to make it happen.
The approach they are looking at – which is something that came up in a number of the papers other people read – is sampling: identifying groups of people whom they expect their intervention to benefit and evaluating its impact for them.
Linked to from this paper was Adopt, Adapt, Expand, Respond – a framework for managing and measuring systemic change processes. This paper presents a set of reflection questions (and gives some suggested measures) which I can see being adapted to an educational perspective:
- Adopt – If you left now, would partners return to their previous way of working?
- Adapt – If you left now, would partners build upon the changes they’ve adopted without you?
- Expand – If you left now, would pro-poor outcomes depend on too few people, firms, or organisations?
- Respond – If you left now, would the system be supportive of the changes introduced (allowing them to be upheld, grow, and evolve)?
Roger read “Technology and the TEF” from the 2017 Higher Education Policy Institute (HEPI) report “Rebooting learning for the digital age: What next for technology-enhanced higher education?”.
This looks at how technology-enhanced learning (TEL) can support the three components of the Teaching Excellence Framework (TEF), which evidence teaching excellence.
For the first TEF component, teaching quality, the report highlights the potential of TEL in increasing active learning, employability (especially the development of digital capabilities), formative assessment, different forms of feedback and electronic management of assessment (EMA) more generally, and personalisation. In terms of evidence of how TEL is making an impact in these areas, HEPI emphasises the role of learning analytics.
For the second component, learning environment, the report focusses on access to online resources, the role of digital technologies in disciplinary research-informed teaching, and again learning analytics as a means to provide targeted and timely support for learning. In terms of how to gather reliable evidence, it mentions the JISC student digital experience tracker, a survey currently being used by 45 HE institutions.
For the third component, student outcomes and learning gain, the report once again highlights the development of students’ digital capabilities, whilst emphasising the need to support the development of digitally skilled staff to enable this. It also mentions the potential of TEL in developing authentic learning experiences, linking and networking with employers, and showcasing student skills.
The final part of this section of the report covers innovation in relation to the TEF. It warns that “It would be a disaster” if the TEF stifled innovation and increased risk-averse approaches in institutions, and it describes the inclusion of ‘impact and effectiveness of innovative approaches, new technology or educational research’ in the list of possible examples of additional evidence as a “welcome step” (see the Year 2 TEF specification, Table 8).
Mike read Sue Watling – TEL-ing tales, where is the evidence of impact? and In defence of technology by Kerry Pinny. These blog posts reflect on an email thread started by Sue Watling in which she asked for evidence of the effectiveness of TEL. The evidence is needed if we are to persuade academics of the need to change practice. In response, she received lots of discussion, including what she perceived to be some highly defensive posts, but very little by way of well-researched evidence. Watling, after Jenkins, ascribes ‘Cinderella status’ to TEL research, which I take to mean based on stories rather than fact. She acknowledges the challenges of reward, time and space for academics engaged with TEL. She nevertheless makes a plea that we be reflective in our practice and look to gather a body of evidence we can use in support of the impact of TEL. Watling describes some fairly defensive responses to her original post (including the blog post from James Clay that Hannah read for this reading group). By contrast, Kerry Pinny’s post responds to some of the defensiveness, agreeing with Watling – if we can’t defend what we do with evidence, then this in itself is evidence that something is wrong.
The problem is clear; how we get the evidence is less clear. One point from Watling that I think is pertinent is that it is not just TEL research, but HE pedagogic research as a whole, that lacks evidence and has ‘Cinderella status’. Is it then surprising that TEL research in HE, as a subset of HE pedagogic research, reflects the same lack of proof and rigour? This may in part be down to the lack of research funding. As Pinny points out, the school or academic often has little time to evaluate their work with rigour. I think it also relates to the nature of TEL as a set of tools or enablers of pedagogy, rather than a singular approach or set of approaches. You can use TEL to support a range of pedagogies, both effective and ineffective, and a variety of factors will affect its impact. Additionally, I think it relates to the way Higher Education works – the practice, and the evidence that results from it, tends to be very localised, for example to a course, teacher or school. Drawing broader conclusions is much, much harder. A lot of the evidence is at best anecdotal. That said, in my experience, anecdotes (particularly from peers) can be as persuasive as research evidence in persuading colleagues to change practice (though I have no rigorous research to prove that).
Suzanne read Mandernach, J. (2015), “Assessment of Student Engagement in Higher Education: A Synthesis of Literature and Assessment Tools”, International Journal of Learning, Teaching and Educational Research, Vol. 12, No. 2, pp. 1–14, June 2015.
This text was slightly tangential, as it didn’t discuss the ideas behind evidence in TEL specifically, but it was a good example of an area in which we often find it difficult to find or produce meaningful evidence to support practice. The paper begins by recognising the difficulties in gauging, monitoring and assessing engagement as part of the overall learning experience, despite the fact that engagement is often discussed within HE. Mandernach goes back to the idea of ‘cognitive’, ‘behavioural’ and ‘affective’ criteria for assessing engagement, particularly in relation to Bowen’s ideas that engagement happens with the learning process, the object of study, the context of study, and the human condition (or service learning). Interestingly for our current context of building MOOC-based courses, a lot of the suggestions for how these engagement types can be assessed are mainly classroom-based – for example the teacher noticing the preparedness of the student at the start of a lesson, or the investment they put into their learning. On a MOOC platform, where there is little meaningful interaction on an individual level between the ‘educator’ and the learner, this clearly becomes more difficult to monitor, and self-reporting becomes increasingly important. In terms of how to go about measuring and assessing engagement, student surveys such as the Student Engagement Questionnaire and the Student Course Engagement Questionnaire are discussed. The idea of experience sampling – where a selection of students are asked at intervals to rate their engagement at that specific time – is also discussed as a way of measuring the overall flow of engagement across a course, which may be an interesting idea to discuss for our context.
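As a purely illustrative aside, here is a minimal sketch of what experience sampling could look like on a MOOC platform: a handful of learners are prompted in randomly chosen weeks to rate their engagement, and the ratings are averaged per week to show how engagement flows across the course. The student identifiers, weekly prompt schedule and 1–5 rating scale are all assumptions made for the example, not details from Mandernach’s paper.

```python
import random

def schedule_experience_samples(student_ids, course_weeks, prompts_per_student=3, seed=1):
    """Pick, for each sampled student, a few random weeks in which to send a
    'how engaged do you feel right now?' prompt (rating scale assumed to be 1-5)."""
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    return {
        student: sorted(rng.sample(range(1, course_weeks + 1), prompts_per_student))
        for student in student_ids
    }

def mean_engagement_by_week(responses):
    """responses: iterable of (week, rating) tuples collected from the prompts.
    Returns the mean self-reported engagement for each week, giving a rough
    picture of how engagement rises and falls across the course."""
    by_week = {}
    for week, rating in responses:
        by_week.setdefault(week, []).append(rating)
    return {week: sum(ratings) / len(ratings) for week, ratings in sorted(by_week.items())}

# Example usage with made-up data
print(schedule_experience_samples(["s001", "s002", "s003"], course_weeks=6))
print(mean_engagement_by_week([(1, 4), (1, 3), (3, 5), (6, 2)]))
```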
Suggested reading
- Evidence in teaching – notes from the reading group – Nov 2015
- HEPI report (2017) Rebooting learning for the digital age: What next for technology-enhanced higher education? – section on TEL and the TEF
- Sue Watling – TEL-ing tales, where is the evidence of impact?
- In defence of technology by Kerry Pinny
- Show me the evidence by James Clay
- Real geek: Measuring indirect beneficiaries – attempting to square the circle? (about non-profits and campaigning orgs but possibly relevant to TEL)
- Advocates of randomised controlled trials in education should look more closely at the differences between medical research and education research
- Increasing the Use of Evidence-Based Teaching in STEM Higher Education: A Comparison of Eight Change Strategies
- Evidence-based Higher Education at UC Davis’ iAMSTEM Hub
- Lessons from the Digital Classroom – “In four small schools scattered across San Francisco, a data experiment is under way. That is where AltSchool is testing how technology can help teachers maximize their students’ learning.”
- Mandernach, J. (2015), “Assessment of Student Engagement in Higher Education: A Synthesis of Literature and Assessment Tools”, International Journal of Learning, Teaching and Educational Research, Vol. 12, No. 2, pp. 1–14, June 2015
- Various things on evaluation:
  - Zero Correlation Between (student course) Evaluations and Learning
  - Do the best teachers get the best ratings? (Spoiler: No, they get bad ratings because they make their students do difficult things)
  - Do students know what’s good for them? Of course they do, and of course they don’t.