National Institute for Digital Learning good reads from 2019 – notes from the reading group

Do MOOCs contribute to student equity and social inclusion? A systematic review by Sarah R Lambert (read by Suzi Wells)

This study is a large literature review looking at both empirical research and practitioner guidance on using MOOCs and other open (as in free) learning to promote student equity and social inclusion. The review covers literature from 2014 onwards – the second wave of MOOCs, when more stable and sustainable practice began to emerge.

The aims of the MOOCs were broken into two broad areas: those focused on student equity (making education fairer for tertiary learners and those preparing to enrol in HE) – this was the most common aim – and those that sought to improve social inclusion (education of non-award holders and lifelong learners). The literature came predominantly from the US, UK and Australia – perhaps unsurprising, given that only articles written in English were reviewed. There was a near 50/50 split between empirical research and policy/practice recommendations, and the studies focused slightly more on MOOCs than on other open learning. One notable finding was that the success rate (among published studies at least) was high – more often than not, projects met or exceeded their aims.

Lambert includes lots of useful detail about factors that may have led to successful projects. MOOC developers should learn about their learners and make content relevant and relatable to them. Successful projects often had community partners involved in the design, delivery and support – in particular, the very successful initiatives with large cohorts (~100) all had this. Designing for the learners meant things like designing for mobile-only and offline access, teaching people in their own language (or at least providing mother-tongue facilitation) and, where relevant, mixing practitioners with academics in the content.

Facilitating and supporting the learning was also key to success. Some projects used local study groups or face-to-face workshops to provide localisation and contextualisation. Facilitators would ideally be drawn from existing community networks.

A related point was to design content from scratch – recycling existing HE materials was not as successful. This should be done in an interdisciplinary team and/or community partnership. Being driven entirely by an IT or digital education team was an indicator that a project would not meet its aims: projects need technical expertise, but education and/or widening-participation expertise too. Open as in free-to-use is fine; the licence didn’t seem to have an impact.

In short:

  • Work with the people you intend to benefit.
  • Create, don’t recycle.
  • Don’t expect the materials to stand by themselves.

If you’re interested in social justice through open learning, think OU not OERs.

What does the ‘Postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice by Jeremy Knox (read by Suzanne Collins)

This article explores what ‘post-digital’ education means, specifically thinking about human-technology relationships. It begins with an analysis of the term ‘post-digital’, embracing the perspective of ‘post’ as a critical appraisal of our understanding of the digital, rather than simply meaning a different stage after the digital. This initial analysis is worth a read, but it was not my main focus for this reading group, so here I’ll jump straight to the main discussion, which is based on three critical perspectives on the digital in education.

The first is “Digital as Capital”. Here, Knox talks about the commercialisation and capitalisation of digital platforms, such as social media. This platform model is increasingly based on the commodification of data, and so inevitably students/teachers/learners become seen as something which can be analysed (eg learning analytics), or something under surveillance. If surveillance is equated to recognition, this leads to further (perhaps troubling?) implications. Do you need to be seen to be counted as a learner? Is learning always visible? Does this move away from the idea of the web and digital being ‘social constructivist’?

Secondly, Knox looks at “Digital as Policy”. This (for me) slightly more familiar ground discusses the idea that ‘digital’ education is no longer as separate or distinct from ‘education’ as it once was. In a ‘post-digital’ understanding, it is in fact mainstream rather than alternative or progressive. The digital in education, however, often manifests as metrification in governance – eg schools are searchable in rankings based on algorithms. In this sense, ‘digital education’ moves away from ‘classroom gadgets’ (as Knox puts it) and becomes something intrinsic and embedded in policy, with strategic influence.

Lastly, he discusses “Digital as Material”, which focuses on surfacing the hidden material dimensions of a sector which was often seen as ‘virtual’ and therefore ‘intangible’. The tangible, material aspects of digital education include devices, servers, and other physical elements which require manual labour and material resources. On one hand there is efficiency, but on the other there is always labour. As education, particularly digital education, often comes from a sense of social egalitarianism and social justice, this is a troubling realisation, and one which leads to a rethink of the way digital education is positioned in a post-digital perspective.

In conclusion, Knox suggests that ‘post-digital’ should be understood as a ‘holding to account of the many assumptions associated with digital technology’, which I feel sums up his argument and is probably something we should try to do more of, regardless of whether we’re a ‘digital’ or ‘post-digital’ education office.

What’s the problem with learning analytics? by Neil Selwyn (read by Michael Marcinkowski)

For this last session I read Neil Selwyn’s ‘What’s the Problem with Learning Analytics’ from the Journal of Learning Analytics. Appearing in a journal published by the Society for Learning Analytics Research, Selwyn’s socio-technical approach toward the analysis of learning analytics was a welcome, if somewhat predictable, take on a field that too often seems to find itself somewhat myopically digging for solutions to its own narrow set of questions.

Setting learning analytics within a larger social, cultural, and economic field of analysis, Selwyn lays out an orderly account of a number of critical concerns, organized around the implications and values present in learning analytics.

Selwyn lists these consequences of learning analytics as areas to be questioned:

  1. A reduced understanding of education: instead of a holistic view, education is reduced to simple numerical metrics.
  2. Ignoring the broader social contexts of education: there is a danger that, by limiting our understanding of education, we ignore important contextual factors affecting it.
  3. Reducing students’ and teachers’ capacity for informed decision-making: the results of learning analytics come to overtake other types of decision-making.
  4. A means of surveillance rather than support: in their use, learning analytics can have more punitive rather than pedagogical implications.
  5. A source of performativity: students and teachers each begin to focus on achieving results that can be measured by analytics rather than other measures of learning.
  6. Disadvantaging a large number of people: like any data-driven system, decisions about winners and losers can be unintentionally baked into the system.
  7. Servicing institutional rather than individual interests: analytics has more direct benefit for educational institutions and analytics providers than it does for students.

He goes on to list several questionable values embedded in learning analytics:

  1. A blind faith in data: There is a risk of a contemporary over-valuation of data.
  2. Concerns over the data economy: What are the implications when student data is monetized by companies?
  3. The limits of free choice and individual agency: Does a reliance on analytic data remove the ability of students and educators to have a say in their education?
  4. An implicit techno-idealism: Part of learning analytics is a belief in the beneficial impacts of technology.

To address these concerns, Selwyn proposes a few broad areas for change designed to improve learning analytics’ standing within a wider field of concern:

  1. Rethink the design of learning analytics: allow for more transparency and customization for students.
  2. Rethink the economics of learning analytics: give students ownership of their data.
  3. Rethink the governance of learning analytics: establish regulatory oversight for student data.
  4. A better public understanding of learning analytics: educate the wider public about the ethical implications of applying learning analytics to student data.

Overall, Selwyn’s main point remains the most valuable: the idea of learning analytics should be examined within the full constellation of social and cultural structures within which it is embedded. Like any form of data analytics, learning analytics does not exist as a perfect guide to action, and the insights it generates need to be understood as only partial and implicated by the mechanisms designed to generate the data. In the end, Selwyn’s account is a helpful one – it is useful to have such critical voices welcomed into SoLAR – but the level at which he casts his recommendations remains too broad for anything other than a starting point. Setting clear policy goals and fostering a broad public understanding of learning analytics are hopeful external changes that can be made to the context within which learning analytics is used. In the end, though, what is necessary is for those working in the field – those constructing systems of data generation and analysis – to alter the approaches they take, both in the ‘ownership’ and the interpretation of student data. This reinforces the need to change how we understand what ‘data’ is and how we think about using it. Following Selwyn, the most important change might be to re-evaluate the ontological constitution of data and our connection to it, coming to understand it not as something distinct from students’ education, but as an integral part of it.

Valuing technology-enhanced academic conferences for continuing professional development: a systematic literature review, published in Professional Development in Education, by Maria Spilker (read by Naomi Beckett)

This literature review analyses the different approaches taken to technologically enhance academic conferences for continuing professional development. Although advances and new practices have been emerging, a coherent approach was lacking: conferences were being evaluated in specific ways that did not consider all perspectives.

‘Continued professional development for academics is critical in times of increased speed and innovation, and this intensifies the responsibilities of academics.’ 

This makes it all the more important to ensure that, when academics come together at a conference, there is a systematic approach to what they should be getting out of the time spent there. The paper suggests this is something that needs to be looked at when first starting to plan a conference: what are the values?

The paper talks about developing different learning experiences at a conference to engage staff and build their professional development. There is often little time for reflection, and the paper suggests looking at more ways to include this; technology is one way it could be done. Engagement on Twitter, for example, gives users another channel to discuss and network, taking them away from traditional conference formats. Moving more conferences online also gives users the opportunity to reach out to further networks.

The paper mentions a value creation framework, looking at what value we should be taking out of conferences. These values include immediate value, potential value, applied value, realised value and re-framing value. Looking at these to begin with is a good start to thinking about how we can frame academic conferences, so that delegates get the most out of the time spent there and work on their own professional development too.

We asked teenagers what adults are missing about technology. This was the best response by Taylor Fang (read by Paddy Uglow)

Some thoughts I took away:

  • Traditionally a “screen” was a thing to hide or protect, and to aid privacy. Now it’s very much the opposite.
  • Has society changed so much that social media is the only place that young people can express themselves and build a picture of who they are and what their place is in the world?
  • Adults have a duty to help young people find other ways to show who they are to the world (and see the subsequent reflection back to themselves).
  • Digital = data = monetisation: everything young people do online goes into a money-making system which doesn’t have their benefit as its primary goal.
  • Young people are growing up in a world where their importance and value are quantified by stats, likes, shares etc. All these numbers demonstrate to them that they’re less important than other people, and encourage desperate measures to improve their metrics.
  • Does a meal/holiday/party/etc really exist unless it’s been published and Liked?
  • Does the same apply to Learning Analytics? Are some of the most useful learning experiences those which don’t have a definition or a grade?


Online communities – notes from the reading group

Amy read Professors share ideas for building community in online courses. The over-arching narrative of this piece was that ‘humanizing learning’ is the most effective way to build online learning communities – this occurs when students connect on an emotional and social level when engaging with the community. The author, Sharon O’Malley, suggests six methods for achieving this:

  1. Let students get to know you – instructors need to present themselves as ‘real people’ – this can be done by appearing goofy or telling genuine anecdotes in videos, for example. Students should also be encouraged to reveal their non-academic lives, in order for others to feel more like they know them personally, rather than just in the learning context
  2. Incorporating video and audio resources and feedback
  3. Meet in real time – students can talk to each other in real time and make instant connections
  4. Work in small groups – students get connected with others in their group – instead of feeling like they’re in a class of fifty, they feel they are in a class of 5, 10 etc.
  5. Require constant interaction – group projects and collaborative writing assignments force students to engage with each other out of the session
  6. Rise to the challenge – building community takes time – it takes planning and experimentation. Stick with it if it doesn’t immediately work!

Roger introduced a Building learning communities card activity. This is an activity from QAA Scotland, designed to stimulate discussion about what helps an effective learning community. The activity cards suggest the following factors:

  • Clearly defined and inclusive values
  • A clearly articulated and shared purpose
  • Clearly articulated and shared purpose goals
  • Active and vibrant interaction
  • Owned and managed by its people
  • Dedicated structure
  • Collaboration
  • Adequate and appropriate support
  • Understood and respected expectations
  • Adequate and appropriate resources
  • Built in evaluation

The instructions ask the group to consider which of these are essential and which are “nice to haves”. The activity was certainly effective in stimulating discussion in the reading group.

Suzi watched Building Community: A Conversation with Dave Cormier – a video recording of an edX webinar from 2014. Here Cormier, who coined the term MOOC, talks to edX about how they could and should use online learning communities.

Cormier talks about four models of learning that you could scale up online:

  • One-to-one (adaptive learning, tutoring on Skype?)
  • One-to-many (video lectures on MOOCs)
  • Cooperative learning: many-to-many, all working on the same thing
  • Collaborative learning: many-to-many, shared interest but each with own project

Collaborative learning is the one which he thinks is particularly – perhaps only – served by online communities, the real-life equivalent being chaos, or maybe conferences (which, arguably, don’t work well for learning).

He draws the distinction between mastery learning (where skills can be ticked off a list as you progress) and complexity. Communities are not a particularly useful tool for mastery, or for checking who has learnt what. They are much better suited for complexity. This seemed to echo discussions we’d had about the difference between using gamification and using playfulness in learning – gamification being more for mastery, playfulness for complexity.

Cormier offers some tips on building a successful community.

  • A community should have – or should move people towards building – shared values and a shared language.
  • Drive participation by having a famous person (but this can become one-to-many) or by asking annoying questions that people can’t resist engaging with (eg “how do we recognise cheating as a valuable part of education?”).
  • Shape participation by assigning roles to people and having course leader presence to set the tone.
  • Give people ways to get to know each other and make connections: recognising who people are and recognising aspects of yourself in them.

His view on evaluation and measuring success might be more specific to the MOOC context. He suggests borrowing techniques from advertising to demonstrate their value (but he doesn’t give details). The outcomes he suggests you might hope for are things like building more interest in your research area, or building the brand of an academic / department / institution.

He also asks some interesting questions. About the authenticity of work we give to students – how will their work persist? Can it be right that so much of students work is destined to be thrown away? About life beyond the community – how will the community persist? Communities are emotional – you shouldn’t just pull the plug at the end.

Lots of this is challenging in an educational context. For instance, communities take time to build but we generally work with units that last for a teaching block at most. Our online Bristol Futures courses only last four weeks. I wonder if this is to do with setting expectations. Perhaps we need thin and thick communities: the thin communities being time-bound but with much more scaffolding and a narrower purpose, the thick communities being more what Cormier is talking about here.

I also read The year we wanted the internet to be smaller (on the growth of niche communities in 2017) and 11 tips for building an engaged online community (practical advice aimed at NGOs). Both are interesting in their own right and worth a read. In both, the ideas of shared values, shared language and a sense of purpose came up. They also talk about recognition: communities as a place where you find “your people”. This resonates with my positive experiences of online communities but is, again, challenging in an education context. As Suzanne pointed out (I think), if the tone and being among “your people” are important, you must be able to walk out and find something different if you don’t feel comfortable. And it may be far better that you work with people who aren’t just “your people”, or at least who don’t start that way.


Evidence – notes from the reading group

Suzi read Real geek: Measuring indirect beneficiaries – attempting to square the circle? From the Oxfam Policy & Practice blog. I was interested in the parallels with our work:

  • They seek to measure indirect beneficiaries of their work
  • Evaluation is used to improve programme quality (rather than organisational accountability)
  • In both cases there’s a pressure for “vanity metrics”
  • The approaches they talk about sound like an application of “agile” to fundamentally non-technological processes

The paper is written at an early point in the process of redesigning their measurement and evaluation of influencing work. Their aim is to improve the measurement of indirect beneficiaries at different stages of the chain, adjust plans, and “test our theory of change and the assumptions we make”. Evaluation is different when you are a direct service provider than when you are a “convenor, broker or catalyst”. They are designing an evaluation approach that will be integrated into the day-to-day running of any initiative – there’s a balance between rigour and the amount of work needed to make it happen.

The approach they are looking at – something that came up in a number of the papers other people read – is sampling: identifying groups of people who they expect their intervention to benefit, and evaluating it for them.

Linked to from this paper was Adopt adapt expand respond – a framework for managing and measuring systemic change processes. This paper presents a set of reflection questions (and gives some suggested measures) which I can see being adapted for an educational perspective:

  • Adopt – If you left now, would partners return to their previous way of working?
  • Adapt – If you left now, would partners build upon the changes they’ve adopted without us?
  • Expand – If you left now, would pro-poor outcomes depend on too few people, firms, or organisations?
  • Respond – If you left now, would the system be supportive of the changes introduced (allowing them to be upheld, grow, and evolve)?

Roger read “Technology and the TEF” from the 2017 Higher Education Policy Institute (HEPI) report “Rebooting learning for the digital age: What next for technology-enhanced higher education?”.

This looks at how TEL can support the three TEF components that evidence teaching excellence.

For the first TEF component, teaching quality, the report highlights the potential of TEL in increasing active learning; employability, especially development of digital capabilities; formative assessment; different forms of feedback and EMA generally; and personalisation. In terms of evidence for how TEL is making an impact in these areas, HEPI emphasises the role of learning analytics.

For the second component, learning environment, the report focusses on access to online resources, the role of digital technologies in disciplinary research-informed teaching, and again learning analytics as a means to provide targeted and timely support for learning. In terms of how to gather reliable evidence, it mentions the Jisc student digital experience tracker, a survey currently used by 45 HE institutions.

For the third component, student outcomes and learning gain, the report once again highlights student digital capabilities development, whilst emphasising the need to support the development of digitally skilled staff to enable this. It also mentions the potential of TEL in developing authentic learning experiences, linking and networking with employers, and showcasing student skills.

The final part of this section of the report covers innovation in relation to the TEF. It warns that “it would be a disaster” if the TEF stifled innovation and increased risk-averse approaches in institutions, and describes the inclusion of ‘impact and effectiveness of innovative approaches, new technology or educational research’ in the list of possible examples of additional evidence as a “welcome step” (see Year 2 TEF specification, Table 8).

Mike read Sue Watling – TEL-ing tales: where is the evidence of impact? and In defence of technology by Kerry Pinny. These blog posts reflect on an email thread started by Sue Watling in which she asked for evidence of the effectiveness of TEL – evidence that is needed if we are to persuade academics of the need to change practice. In response, she received lots of discussion, including what she perceived to be some highly defensive posts, but very little by way of well-researched evidence. Watling, after Jenkins, ascribes ‘Cinderella status’ to TEL research, which I take to mean based on stories rather than fact. She acknowledges the challenges of reward, time and space for academics engaged with TEL, but nevertheless makes a plea that we are reflective in our practice and look to gather a body of evidence we can use in support of the impact of TEL. Watling describes some fairly defensive responses to her original post (including the blog post from James Clay that Hannah read for this reading group). By contrast, Kerry Pinny’s post responds to some of the defensiveness, agreeing with Watling: if we can’t defend what we do with evidence, then this in itself is evidence that something is wrong.

The problem is clear; how we get the evidence is less clear. One point from Watling that I think is pertinent is that it is not just TEL research, but HE pedagogic research as a whole, that lacks evidence and has ‘Cinderella status’. Is it then surprising that TEL research in HE, as a subset of HE pedagogic research, reflects this lack of proof and rigour? This may in part be down to the lack of research funding: as Pinny points out, the school or academic often has little time to evaluate their work with rigour. I think it also relates to the nature of TEL as a set of tools or enablers of pedagogy, rather than a singular approach or set of approaches – you can use TEL to support a range of pedagogies, both effective and ineffective, and a variety of factors will affect its impact. Additionally, I think it relates to the way higher education works: practice, and the evidence that results from it, tends to be very localised, for example to a course, teacher or school, and drawing broader conclusions is much, much harder. A lot of the evidence is at best anecdotal. That said, in my experience, anecdotes (particularly from peers) can be as persuasive as research evidence in persuading colleagues to change practice (though I have no rigorous research to prove that).

Suzanne read Mandernach, J. (2015), “Assessment of Student Engagement in Higher Education: A Synthesis of Literature and Assessment Tools”, International Journal of Learning, Teaching and Educational Research, Vol. 12, No. 2, pp. 1-14.

This text was slightly tangential, as it didn’t discuss the ideas behind evidence in TEL specifically, but it was a good example of an area in which we often find it difficult to find or produce meaningful evidence to support practice. The paper begins by recognising the difficulties in gauging, monitoring and assessing engagement as part of the overall learning experience, despite the fact that engagement is often discussed within HE. Mandernach goes back to the idea of ‘cognitive’, ‘behavioural’ and ‘affective’ criteria for assessing engagement, particularly related to Bowen’s ideas that engagement happens with the learning process, the object of study, the context of study, and the human condition (or service learning). Interestingly for our current context of building MOOC-based courses, a lot of the suggestions for how these engagement types can be assessed are mainly classroom-based – for example, the teacher noticing the preparedness of the student at the start of a lesson, or the investment they put into their learning. On a MOOC platform, where there is little meaningful interaction on an individual level between the ‘educator’ and the learner, this clearly becomes more difficult to monitor, and self-reporting becomes increasingly important. In terms of how to go about measuring and assessing engagement, student surveys are discussed – such as the Student Engagement Questionnaire and the Student Course Engagement Questionnaire. The idea of experience sampling – where a selection of students are asked at intervals to rate their engagement at that specific time – is also discussed as a way of measuring the overall flow of engagement across a course, which may also be an interesting idea to discuss for our context.
