National Institute for Digital Learning good reads from 2019 – notes from the reading group

Do MOOCs contribute to student equity and social inclusion? A systematic review by Sarah R Lambert (read by Suzi Wells)

This study is a large literature review looking at both empirical research and practitioner guidance around using MOOCs and other open (as in free) learning to promote student equity and social inclusion. The review covers literature from 2014 onwards because that marks the second wave of MOOCs, when more stable and sustainable practice began to emerge.

The aims of the MOOCs were broken into two broad areas: those focused on student equity (making education fairer for tertiary learners and those preparing to enrol in HE) – this was the most common aim – and those that sought to improve social inclusion (education of non-award holders and lifelong learners). The literature was predominantly from the US, UK and Australia – possibly unsurprising given that only articles written in English were studied. There was a near 50/50 split between empirical research and policy/practice recommendations, and the studies focused slightly more on MOOCs than on other open learning. One notable finding was that the success rate (among published studies at least) was high – more often than not the initiatives met or exceeded their aims.

Lambert includes lots of useful detail about factors that may have led to successful projects. MOOC developers should learn about their learners and make content relevant and relatable to them. Successful projects often had community partners involved in the design, delivery & support – in particular, initiatives with large cohorts (~100) that were very successful all had this. Designing for the learners meant things like: designing for mobile-only and offline access, teaching people in their own language (or at least providing mother-tongue facilitation) and, where relevant, mixing practitioners with academics in the content.

Facilitating and supporting the learning was also key to success. Local study groups or face-to-face workshops were used by some projects to provide localisation and contextualisation. Facilitators would ideally be drawn from existing community networks.

A related point was to design content from scratch – recycling existing HE materials was not as successful. This should be done in an interdisciplinary team and/or community partnership. Being driven entirely by an IT or digital education team was an indicator that a project would not meet its aims. Projects need technical expertise, but education and/or widening participation expertise too. Open as in free-to-use is fine; the licence didn’t seem to have an impact.

In short:

  • Work with the people you intend to benefit.
  • Create, don’t recycle.
  • Don’t expect the materials to stand by themselves.

If you’re interested in social justice through open learning, think OU not OERs.

What does the ‘Postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice by Jeremy Knox (read by Suzanne Collins)

This article explores the idea of what ‘post-digital’ education means, specifically thinking about human-technology relationships. It begins with an analysis of the term ‘post-digital’, embracing the perspective of ‘post’ as a critical appraisal of our understanding of the digital rather than simply a different stage that comes after the digital. This initial analysis is worth a read, but it was not my main focus for this reading group, so here I’ll jump straight to the main discussion, which is based on three critical perspectives on the digital in education.

The first is “Digital as Capital”. Here, Knox talks about the commercialisation and capitalisation of digital platforms, such as social media. This platform model is increasingly based on the commodification of data, and so inevitably students/teachers/learners become seen as something which can be analysed (eg learning analytics), or something under surveillance. If surveillance is equated to recognition, this leads to further (perhaps troubling?) implications. Do you need to be seen to be counted as a learner? Is learning always visible? Does this move away from the idea of the web and digital being ‘social constructivist’?

Secondly, Knox looks at “Digital as Policy”. This (for me) slightly more familiar ground discusses the idea that ‘digital’ education is no longer as separate or distinct from ‘education’ as it once was. In a ‘post-digital’ understanding, it is in fact mainstream rather than alternative or progressive. The digital in education, however, often manifests as metrification in governance – eg schools are searchable in rankings based on algorithms. In this sense, the ‘digital’ in education moves away from ‘classroom gadgets’ (as Knox puts it) and becomes something intrinsic and embedded in policy, with strategic influence.

Lastly, he discusses “Digital as Material”, which focuses on surfacing the hidden material dimensions of a sector often seen as ‘virtual’ and therefore ‘intangible’. The tangible, material aspects of digital education include devices, servers, and other physical elements which require manual labour and material resources. On one hand there is efficiency, but on the other there is always labour. As education, particularly digital education, often comes from a sense of social egalitarianism and social justice, this is a troubling realisation, and one which leads to a rethink in the way digital education is positioned in a post-digital perspective.

In conclusion, Knox suggests that ‘post-digital’ should be understood as a ‘holding to account of the many assumptions associated with digital technology’, which I feel sums up his argument and is probably something we should try and do more of regardless of whether we’re a ‘digital’ or ‘post-digital’ education office.

What’s the problem with learning analytics? by Neil Selwyn (read by Michael Marcinkowski)

For this last session I read Neil Selwyn’s ‘What’s the Problem with Learning Analytics’ from the Journal of Learning Analytics. Appearing in a journal published by the Society for Learning Analytics Research, Selwyn’s socio-technical approach toward the analysis of learning analytics was a welcome, if somewhat predictable, take on a field that too often seems to find itself somewhat myopically digging for solutions to its own narrow set of questions.

Setting learning analytics within a larger social, cultural, and economic field of analysis, Selwyn lays out an orderly account of a number of critical concerns, organized around the implications and values present in learning analytics.

Selwyn lists these consequences of learning analytics as areas to be questioned:

  1. A reduced understanding of education: instead of a holistic view of education, it is reduced to a simple numerical metric.
  2. Ignoring the broader social contexts of education: there is a danger that, by limiting our understanding of education, we ignore important contextual factors affecting it.
  3. Reducing students’ and teachers’ capacity for informed decision-making: the results of learning analytics come to overtake other types of decision making.
  4. A means of surveillance rather than support: in their use, learning analytics can have more punitive than pedagogical implications.
  5. A source of performativity: students and teachers each begin to focus on achieving results that can be measured by analytics rather than other measures of learning.
  6. Disadvantaging a large number of people: like any data-driven system, decisions about winners and losers can be unintentionally baked into the system.
  7. Servicing institutional rather than individual interests: the analytics have more direct benefit for educational institutions and analytics providers than for students.

He goes on to list several questionable values embedded in learning analytics:

  1. A blind faith in data: There is a risk of a contemporary over-valuation of the importance of data.
  2. Concerns over the data economy: What are the implications when student data is monetized by companies?
  3. The limits of free choice and individual agency: Does a reliance on analytic data remove the ability of students and educators to have a say in their education?
  4. An implicit techno-idealism: Part of learning analytics is a belief in the benefits of the impacts of technology.

Toward this, Selwyn proposes a few broad areas for change designed to improve learning analytics’ standing within a wider field of concern:

  1. Rethink the design of learning analytics: allow for more transparency and customization for students.
  2. Rethink the economics of learning analytics: give students ownership of their data.
  3. Rethink the governance of learning analytics: establish regulatory oversight for student data.
  4. A better public understanding of learning analytics: educate the wider public about the ethical implications of applying learning analytics to student data.

Overall, Selwyn’s main point remains the most valuable: the idea of learning analytics should be examined within the full constellation of social and cultural structures within which it is embedded. Like any form of data analytics, learning analytics does not exist as a perfect guide to any action, and the insights it generates need to be understood as only partial and implicated in the mechanisms designed to generate the data. In the end, Selwyn’s account is a helpful one — it is useful to have such critical voices welcomed into SOLAR — but the level at which he casts his recommendations remains too broad for anything other than a starting point. Setting clear policy goals and fostering a broad understanding of learning analytics are hopeful external changes that can be made to the context within which learning analytics is used, but in the end, what is necessary is for those working in the field of learning analytics, who are constructing systems of data generation and analysis, to alter the approaches that they take, both in the ‘ownership’ and the interpretation of student data. This reinforces the need to change how we understand what ‘data’ is and how we think about using it. Following Selwyn, the most important change might be to re-evaluate the ontological constitution of data and our connection to it, coming to understand it not as something distinct from students’ education, but as an integral part of it.

Valuing technology-enhanced academic conferences for continuing professional development. A systematic literature review (Professional Development in Education) by Maria Spilker (read by Naomi Beckett)

This literature review analyses the different approaches taken to technologically enhance academic conferences for continuing professional development. Although advances and new practices are emerging, a coherent approach was lacking, and conferences were being evaluated in narrow ways that did not consider all perspectives.

‘Continued professional development for academics is critical in times of increased speed and innovation, and this intensifies the responsibilities of academics.’ 

This makes it more important to ensure that, when academics come together at a conference, there is a systematic approach to looking at what they should be getting out of the time spent there. The paper suggests this is something that needs to be looked at when first starting to plan a conference: what are the values?

The paper talks about developing different learning experiences at a conference to engage staff and build their professional development. There is often little time for reflection, and the paper suggests looking at more ways to include this; using technology is one example of how this could be done. Engagement on Twitter, for example, gives users another channel to discuss and network, taking them beyond traditional conference formats. Moving more conferences online also gives users the opportunity to reach out to further networks.

The paper mentions their Value Creation Network, looking at what values we should be taking out of conferences. These include immediate value, potential value, applied value, realised value, and re-framing value. Looking at these to begin with is a good start to thinking about how we can frame academic conferences so that delegates get the most out of the time spent there and work on their own professional development too.

We asked teenagers what adults are missing about technology. This was the best response by Taylor Fang (read by Paddy Uglow)

Some thoughts I took away:

  • Traditionally a “screen” was a thing to hide or protect, and to aid privacy. Now it’s very much the opposite.
  • Has society changed so much that social media is the only place that young people can express themselves and build a picture of who they are and what their place is in the world?
  • Adults have a duty to help young people find other ways to show who they are to the world (and the subsequent reflection back to themselves).
  • Digital = data = monetisation: everything young people do online goes into a money-making system which doesn’t have their benefit as its primary goal.
  • Young people are growing up in a world where their importance and value are quantified by stats, likes, shares etc; all these numbers demonstrate to them that they’re less important than other people, and encourage desperate measures to improve their metrics.
  • Does a meal/holiday/party/etc really exist unless it’s been published and Liked?
  • Does the same apply to Learning Analytics? Are some of the most useful learning experiences those which don’t have a definition or a grade?

Suggested reading

JISC Horizon Report on wellbeing and mental health – notes from reading group

Suzi read the first part of the mental health and wellbeing section of the JISC Horizon Report. This talked about the increasing demands on mental health services and discussed some possible causes, including worries about money and future prospects, diet, use of social media, and reduced stigma around talking about mental health.

Many institutions are increasing their efforts around student wellbeing. The report mentioned a new task force looking at the transition to university and support in first year: Education Transitions Network.

Four technologies are mentioned as currently being used within HE:

  • Learning analytics to identify students in need of being checked in on
  • Apps and online mood diaries, online counselling
  • Peer support (overseen by counsellors) via Big White Wall
  • Chatbots

The report didn’t have a great amount of detail on how these are used. Using learning analytics to see who’s not engaging with online content seems like the simplest application and is doable in many systems, but even this would require care. Revealing that you are keeping students under surveillance in a way they might not expect may cause them to lose trust in the institution and retreat further (or game the system to avoid interventions). Then again, maybe it’s just helping us know the sort of things a student might expect us to know. Universities can be quite disjointed – in a way that may not seem natural or helpful to students – and analytics could provide much-needed synaptic connections.
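
As a very rough illustration of that “simplest application”, the sketch below flags students with little or no recent activity in a VLE export so that a human can decide whether to check in. The file name and column names (student_id, last_login, weekly_logins) are hypothetical; a real export would differ by system and would need the care described above.

```python
# Minimal sketch: flag students with little or no recent activity in a VLE
# export so that a person (personal tutor, wellbeing team) can decide what,
# if anything, to do. The file and column names are hypothetical.
from datetime import datetime, timedelta

import pandas as pd

activity = pd.read_csv("vle_activity_export.csv", parse_dates=["last_login"])

cutoff = datetime.now() - timedelta(days=14)
flagged = activity[
    (activity["last_login"] < cutoff) | (activity["weekly_logins"] == 0)
]

print(flagged[["student_id", "last_login", "weekly_logins"]])
```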

It also struck me that using technology to support wellbeing (and even mental health) is in some ways similar to teaching: what you’re trying to achieve is not simple to define and is open to debate.

Johannes read the blog post Learning Analytics as a tool for supporting student wellbeing and watched a presentation by Samantha Ahern. Samantha Ahern is a data scientist at UCL who researches the implications of learning analytics for student wellbeing.

In her presentation, she outlined the HE sector’s current problem with student wellbeing and provided some alarming numbers about the increase in reported mental disorders among young adults (16–24 years old). According to the NHS survey on mental health in the UK, around 8% of male and 9% of female participants had diagnosed mental health issues in 1992. These numbers increased to more than 19% of males and 26% of females in 2014. Interestingly, females are much more likely to report mental health issues than males, who, unfortunately, are the ones doing the most harm to themselves.

In her opinion, HE institutions have a great responsibility to act when it comes to tackling mental health problems. However, not all activities actually support students. She argues that too many university policies put the onus to act on the student, but the ones who need help the most often do not report their problems. Universities should therefore take a much more active role, and some rethinking needs to take place.

Her main argument is that although learning analytics is still in its beginnings and might sound like a scary and complicated topic, it is worth doing research in this field, as it has the capability to really improve student wellbeing when done correctly.

It was very interesting to read and listen to her arguments, although the piece was meant as an introduction to learning analytics and did not provide any solutions to the issues.

Roger read “AI in Education – Automatic Essay Scoring”, referenced on page 27 of the JISC Horizons report. Is AI ready to give “professors a break”, as suggested in a 2013 article from the New York Times referring to work by edX on developing software that automatically assigns a grade (not feedback) to essays? If so, then surely this would improve staff wellbeing?

Fortunately for the Mail Online, who responded to the same edX news in outraged fashion (“College students pulling all-nighters to carefully craft their essays may soon be denied the dignity of having a human being actually grade their work”), it doesn’t seem that this is likely any time soon.

Recent work from two Stanford researchers built on previous results from a competition to develop an automatic essay scoring tool, increasing the alignment of the software with human scorers from 81% in the previous competition to 94.5%. This immediately raised the question for me: how consistent are human scorers? The article did at least acknowledge this, saying “assessment variation between human graders is not something that has been deeply scientifically explored and is more than likely to differ greatly between individuals.”
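
The article doesn’t say how its “alignment” figure was calculated, and agreement between human markers is exactly the open question raised above. Purely as an illustration, the toy sketch below computes two common agreement measures between a pair of scorers (human vs human, or human vs machine) on invented scores.

```python
# Toy illustration of measuring agreement between two essay scorers.
# The scores are invented; the article does not describe its metric.
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]
machine_scores = [3, 4, 3, 5, 4, 2, 1, 4, 4, 2]

# Percentage of essays where the two scorers gave exactly the same mark.
exact_match = sum(h == m for h, m in zip(human_scores, machine_scores)) / len(human_scores)

# Quadratic weighted kappa penalises big disagreements more than near-misses;
# it was the metric used in the earlier Kaggle automated essay scoring contest.
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")

print(f"exact agreement: {exact_match:.0%}, quadratic weighted kappa: {qwk:.2f}")
```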

Apparently the edX system is improving as more schools and universities get involved, giving it more data to work with, but on their website they state it is not currently available as a service. The article acknowledges the scepticism in some quarters, in particular the work of Les Perelman, and concludes that there is still “a long way to go”.

Chrysanthi read Learning analytics: help or hindrance in the quest for better student mental wellbeing?, which discusses the data learners may want to see about themselves and what should happen if the data suggests they are falling behind.

Learning analytics can detect signs that may indicate a student is facing mental health issues and/or may drop out. When using learning analytics to detect these signs, the following issues should be considered:

  • Gather students’ data ethically and focus on the appropriate metrics to see whether a student is falling behind and what factors may be contributing to this.
  • Give students a choice about the data they want to see about themselves and its format, especially when comparisons with their cohort are involved.
  • Support students at risk, bearing in mind they may prefer to be supported by other students or at least by members of staff they know.
  • Talk to students about how to better use their data and how best to support them.

Chrysanthi also read the “What does the future hold” section in JISC Horizon Report Mental Health and Wellbeing, which attempts to predict how wellbeing may be handled in the next few years:

  • Within 2 years, students will have a better understanding of their mental health, more agency, increased expectations for university support and will be more likely to disclose their mental health conditions, as they become destigmatised. Institutions will support them by easing transitions to university and providing flexible, bite-sized courses that students can take breaks from. The importance of staff mental health will also be recognised. New apps will attempt to offer mental wellbeing support.
  • In 3-5 years, institutions will manage and facilitate students supporting each other. Students’ and staff wellbeing will be considered in policy and system design, while analytics will be used to warn about circumstances changing. We may see companion robots supporting students’ needs.
  • In 5 years, analytics may include data from the beginning of students’ learning journey all the way to university to better predict risks.

The Horizon group then gives suggestions to help with the wellbeing challenge, including providing guidance, offering education on learning, personal and life skills to students, and regularly consulting the student voice. Next steps will include exploring the possibility of a wellbeing data trust to enable organisations to share sensitive student data with the aim of helping students; a wellbeing bundle of resources, apps and the like; and more work on analytics, their use to help students and staff, and the ethical issues involved.

Naomi read ‘Do Online Mental Health Services Improve Help-Seeking for Young People? A Systematic Review’.

This article from 2014 talks about young people using online services to look for help and information about mental health. The review investigates the effectiveness of online services but does state that a lot more research needs to be done in this area. The article moves between the ideas of help-seeking and self-help and talks about the benefits of both. It mentions how young people now feel they should problem-solve for themselves, so providing an online space for them to access useful information is a great way for them to seek help.

The review mentions that ‘only 35% of young people experiencing mental health problems seek professional face to face help’. This statistic strengthens the case that online services are needed to provide help and assistance to those in need. It does add that young people have improved mental health literacy and are better at recognising that they or someone they know may need help. With face-to-face professional help becoming increasingly hard to access, more are turning to online information. It has to be said, however, that online help has no follow-up, and young people can often be given information online with no way to continue gaining assistance.

One interesting part of the article talked about structured and unstructured online treatment programmes. Although effective at reducing depression and anxiety, structured programmes had poor uptake and high drop-out, with no way for help to be maintained. Unstructured programmes are more useful in the sense that the user can select links that appear useful and disregard information that seems irrelevant.

This article wasn’t student-focused and only covered data collected from younger people, but the ideas behind the review are pertinent to a higher education setting.

Suggested reading

Jisc Horizon Report mental health and wellbeing section

Or investigate / try out one or more of the online services listed here (or any other – these just seem like helpful lists):

Or related articles

Near future teaching – notes from reading group

For our latest reading group, following Sian Bayne’s fascinating Near Future Teaching seminar for BILT, we wanted to look in more depth at the project materials and related reading.

Michael read ‘Using learning analytics to scale the provision of personalized feedback,’ a paper by Abelardo Pardo, Jelena Jovanovic, Shane Dawson, Dragan Gasevic and Negin Mirriahi. Responding to the need to provide individual feedback to large classes of students, this study presented and tested a novel system for using learning analytics data generated by student activity within a learning management system to deliver what the authors called ‘personalized’ feedback. As designed, the system allowed instructors to create small, one- or two-sentence pieces of feedback for each activity within a course. Based on these, each week students would receive a set of ‘personalized’ feedback that responded to their level of participation. In the study, the authors found an improvement in student satisfaction with the feedback they received, but only a marginal improvement in performance compared to previous years. There were limits to the methodology — the study only made use of at most three years of student data for comparison — and the authors’ definition of ‘personalized feedback’ seemed in practice to be little more than a kind of customized boilerplate, but nevertheless the study did have a few interesting points. First, it was admirable in the way it sought to use learning analytics techniques to improve feedback in large courses. Second, the authors took the well-thought-out step of not making the feedback about the content of the course, but instead focusing it on student study habits. That is, the feedback might encourage students to make sure they did all the reading that week if they weren’t doing well, or might encourage them to be sure to review the material if they had already reviewed it all once. Third, the article offered an interesting recounting of the history of the concept of feedback as it moved from focusing only on addressing the gap between targets and actual performance to a more holistic and continuous relationship between mentor and student.
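
This is not the authors’ code, but a rough sketch of the kind of rule-based ‘personalised’ feedback the paper describes: instructors write one or two sentences per activity and engagement level, and each student receives the messages matching their recorded participation that week. Activity names, thresholds and messages are all invented.

```python
# Rough sketch of rule-based "personalised" feedback: instructor-authored
# snippets keyed by (activity, engagement level), assembled per student from
# their VLE activity counts. Names, thresholds and messages are invented.

FEEDBACK = {
    ("week3_reading", "low"): "You haven't opened this week's reading yet - try to before the lecture.",
    ("week3_reading", "high"): "Good work on the reading - review your notes before the quiz.",
    ("week3_quiz", "low"): "You haven't attempted the practice quiz; it's a quick way to check your understanding.",
    ("week3_quiz", "high"): "You've completed the practice quiz - revisit any questions you got wrong.",
}

def level(event_count: int, threshold: int = 3) -> str:
    """Crude engagement banding from activity counts in the VLE logs."""
    return "high" if event_count >= threshold else "low"

def weekly_feedback(student_events: dict[str, int]) -> list[str]:
    """Assemble the weekly message set for one student."""
    return [
        FEEDBACK[(activity, level(count))]
        for activity, count in student_events.items()
        if (activity, level(count)) in FEEDBACK
    ]

print(weekly_feedback({"week3_reading": 0, "week3_quiz": 5}))
```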

Suzi read Higher education, unbundling, and the end of the university as we know it by Tristan McCowan. This paper starts with a thorough guide to the language of unbundling and the kinds of things we talk about when we talk about unbundling, followed by an extensive discussion of what this means for higher education. My impression from the article was that “unbundling” may be slightly unhelpful terminology, partly because it covers a very wide range of things, and partly because – if the article is to be believed – it’s a fairly neutral term for activities which seem to include asset-stripping and declawing universities. As an exploration of the (possible) changing face of universities it’s well worth a read. You can decide for yourself whether students are better off buying an album than creating their own educational mixtape.

Roger read “Future practices”. For world 1, human-led and closed, I was concerned that a lot was only available to “higher paying students” and there was no mention at all of collaborative learning. For world 2, human-led and open, I liked the idea of the new field of “compassion analytics”, which would be good to explore further, along with lots of challenge-based learning and open content. World 3, tech-led and closed, was appealing in its emphasis on wellbeing in relation to technology and a move away from traditional assessment, with failure recognised more as an opportunity to learn, and reflection and the ability to analyse and synthesise prioritised. From world 4 I liked the emphasis on lifelong learning and individual flexibility for students, eg to choose their own blocks of learning.

Chrysanthi read Future Teaching trends: Science and Technology. The review analyzes 5 trends:

  • datafication – e.g. monitoring students’ attendance, location, engagement, real-time attention levels,
  • artificial intelligence – e.g. AI tutoring, giving feedback, summarizing discussions and scanning for misconceptions, identifying human emotions and generating its own responses rather than relying only on past experience and data,
  • neuroscience and cognitive enhancement – e.g. brain-computer interfaces, enhancement tools like tech that sends currents to the brain to help with reading and memory or drugs that improve creativity and motivation,
  • virtual and augmented realities – e.g. that help to acquire medical skills for high-risk scenarios without real risk, or explore life as someone else to develop empathy, and
  • new forms of value – enabling e.g. the recording and verification of all educational achievements and accumulation of credit over one’s lifetime, or the creation of direct contracts between student and academic.

I liked it because it gave both pros and cons in a concise way. It allows you to understand why these trends would be useful and could be adopted widely, at the same time as you are getting a glimpse of the dystopian learning environment they could create if used before ethical and other implications have been considered.

Suggested reading

Miscellany – notes from the reading group

No theme this month – just free choice. Here’s what we read (full notes below):

Naomi read Stakeholders’ perspectives on graphical tools for visualising student assessment and feedback data.

This paper from the University of Plymouth looks at the development and progression of learning analytics within Higher Education. Luciana Dalla Valle, Julian Stander, Karen Gretsey, John Eales, and Yinghui Wei all contributed. It covers how four graphical visualisation methods can be used by different stakeholders to interpret assessment and feedback data, the stakeholders being external examiners, learning developers, industrialists (employers), academics and students.

The paper discusses how there is often difficulty pulling information from assessments and feedback, as there can be a lot of data to cover. Having graphical visualisations means information can be shared and disseminated quickly, as there is one focal point to concentrate on. It’s mentioned that some can include ‘too much information that can be difficult for teachers to analyse when limited time is available.’ But it is also discussed how it is important to evaluate the visualisations from the point of view of the different stakeholders who may be using them.

The paper looks at how learning analytics can be seen as a way to optimise learning and allow stakeholders to fully understand and take on board the information they are provided with. For students it was seen as a way to get the most out of their learning whilst also flagging students facing difficulties. The paper also talks about how it brings many benefits to students who are described as the ‘overlooked middle’. Students are able to easily compare their assessments, attainment, and feedback to see their progression. Students agreed that the visualisations could assist with study organisation and module choice, and it’s also suggested that taking these analytics into account can improve social and cultural skills. For external examiners, analytics was seen as a real step forward in their learning and development: a quick way to assimilate information and improve their ‘knowledge, skills and judgement in Higher Education Assessment’. Having to judge and compare academic standards over a diverse range of assessment types is difficult, and visual graphics bring a certain simplicity to this. For learning developers too, using these images and graphics is suggested to help in ‘disseminating good practice’.

The paper goes on to explain how it does improve each stakeholder’s evaluation of assessment. It goes into a lot of detail about the different visualisations suggested, commenting on the benefits and drawbacks of each, which is worth taking a more detailed look at. It should also be noted that the paper suggests there could be confidentiality or data protection issues involved with sharing or releasing data such as this, as in most cases this data is only seen at faculty or school level. Student demoralisation is also mentioned near the end of the paper as a contributing factor to why these graphics may not always work in the best ways. It finishes by suggesting it would be interesting to study changes in students’ confidence and self-esteem due to assessment data sharing. It’s an interesting idea that needs to be carefully thought out and analysed to ensure it produces a positive and constructive result for all involved.
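
The four visualisations themselves aren’t reproduced here, so purely as an illustration of the general idea (not one of the paper’s methods), the sketch below plots one invented student’s marks against the cohort spread for each assessment.

```python
# Illustration only: one invented student's marks against the cohort spread
# per assessment. Not one of the paper's four visualisation methods.
import matplotlib.pyplot as plt

assessments = ["Essay 1", "Lab report", "Exam"]
cohort_marks = [
    [52, 58, 61, 64, 67, 70, 72, 75, 80],   # Essay 1
    [45, 50, 55, 60, 62, 66, 70, 74, 78],   # Lab report
    [40, 48, 55, 60, 63, 68, 71, 76, 82],   # Exam
]
student_marks = [68, 58, 73]

fig, ax = plt.subplots()
ax.boxplot(cohort_marks)                      # cohort distribution per assessment
positions = list(range(1, len(assessments) + 1))
ax.plot(positions, student_marks, "o", label="This student")
ax.set_xticks(positions)
ax.set_xticklabels(assessments)
ax.set_ylabel("Mark (%)")
ax.legend()
plt.show()
```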

Suzanne read: Social media as a student response system: new evidence on learning impact

This paper begins with the assertion that social media is potentially a “powerful tool in higher education” due to its ubiquitous nature in today’s society, while also recognising that to date the role of social media in education has been a difficult one to pin down. There have been studies showing that it can both enhance learning and teaching and be a severe distraction for students in the classroom.

The study sets out to answer these two questions:

  • What encourages students to actively utilise social media in their learning process?
  • What pedagogical advantages are offered by social media in enhancing students’ learning experiences?

To look at these questions, the researchers used Twitter in a lecture-based setting with 150 accounting undergraduates at an Australian university. In the lectures, Twitter could be used in two ways: as a ‘backchannel’ during the lecture, and as a quiz tool. As a quiz tool, the students used a specific hashtag to tweet their answers to questions posed by the lecturer at regular intervals during the session, related to the content that had just been covered. These lectures were also recorded, and a proportion of the students only watched the recorded lecture as they were unable to attend in person. Twitter was used for two main reasons. First, the researchers assumed that many students would already be familiar and comfortable with Twitter. Secondly, using Twitter wouldn’t need any additional tools, such as clickers, or software (assuming that students already had it on their devices).
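
The paper doesn’t describe any tooling for collating the answers, so the following is just a toy sketch of how hashtag quiz responses might be tallied once the tweets for a session have been exported as text; the hashtag and answer format are invented.

```python
# Toy sketch: tally hashtag quiz answers from a session's exported tweets.
# The hashtag and the "Q1 B" answer format are invented; the paper does not
# describe the lecturers' actual tooling.
import re
from collections import Counter

HASHTAG = "#acct101quiz"
pattern = re.compile(rf"{re.escape(HASHTAG)}\s+(Q\d+)\s+([A-D])", re.IGNORECASE)

tweets = [
    "#acct101quiz Q1 B",
    "#acct101quiz Q1 C",
    "pretty sure it's B ... #acct101quiz Q1 B",
    "off-topic tweet about lunch",
]

tally = Counter()
for tweet in tweets:
    match = pattern.search(tweet)
    if match:
        question, answer = match.group(1).upper(), match.group(2).upper()
        tally[(question, answer)] += 1

print(tally)  # Counter({('Q1', 'B'): 2, ('Q1', 'C'): 1})
```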

Relatively early on, several drawbacks to using Twitter were noted. There was an immediate (and perhaps not surprising?) tension between the students’ and lecturers’ public and private personas on Twitter. Some students weren’t comfortable tweeting from their own personal accounts, and the researchers actually recommended that lecturers make new accounts to keep their ‘teaching’ life separate from their private lives. There was also a concern about the unpredictability of tapping into students’ social media, in that the lecturer had no control over what the students wrote in such a public setting. It also turned out (again, perhaps unsurprisingly?) that not all students liked or used Twitter, and some were quite against it. Finally, it was noted that once students were on Twitter, it was extremely easy for them to get distracted.

In short, the main findings were that the students on the whole liked and used Twitter for the quiz breaks during the lecture. Students self-reported being more focused, and said that the quiz breaks made the lecture more active and helped with their learning as they could check their understanding as they went. This was true for students who actively used Twitter in the lecture, those who didn’t use Twitter but were still in the lecture in person, and those who watched the online recording only. During the study, very few students used Twitter as a backchannel tool, instead preferring to ask questions by raising a hand, or in breaks or after the lecture.

Overall, I feel that this supports the idea that active learning in lectures is enhanced when students are able to interact with the material presented and the lecturer. Breaking up content and allowing students to check their understanding is a well-known and pedagogically sound approach. However, this study doesn’t really demonstrate any benefit of using Twitter, or social media, specifically. The fact that students saw the same benefit regardless of whether they used Twitter to participate or just watched the recording (pausing it to answer the questions themselves before continuing to the answers) seems to back this up. In fact, not using Twitter in any kind of ‘social’ way, and trying to hive off a private space for lecturers and students to interact in such a public environment, seems to be missing the point of social media altogether. For me, the initial research questions therefore remain unanswered!

Suzi read Getting things done in large organisations

I ended up with a lot to say about this so I’ve put it in a separate blog post: What can an ed techie learn from the US civil service?. Key points for me were:

  • “Influence without authority as a job description”
  • Having more of a personal agenda, and preparing for what I would say if I got 15 minutes with the VC.
  • Various pieces of good advice for working effectively with other people.

Chrysanthi read Gamification in e-mental health: Development of a digital intervention addressing severe mental illness and metabolic syndrome (2017). This paper talks about the design of a gamified mobile app that aims to help people with severe chronic mental illness in combination with metabolic syndrome. While the target group is quite niche, I love the fact that gamification is used in a context that considers the complexity of the wellbeing domain and the interaction between mental and physical wellbeing. The resulting application, MetaMood, is essentially the digital version of an existing 8-week long paper-based program with the addition of game elements. The gamification aims to increase participation, motivation and engagement with the intervention. It is designed to be used as part of a blended care approach, combined with face to face consultations. The game elements include a storyline, a helpful character, achievements, coins and a chat room, for the social element. Gamification techniques (tutorial, quest, action) were mapped to traditional techniques (lesson, task, question) to create the app.

The specific needs of the target group required the contributions of an interdisciplinary team, as well as relevant game features; eg the chat room includes not only a profanity filter but also automatic intervention when keywords like suicide are used (informing the player of various resources available to help in these cases). Scenarios, situations and names were evaluated for their potential to trigger patients, and changes were made accordingly; eg the religious-sounding name of a village was changed, as it could have triggered delusions.
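
As an illustration only (not MetaMood’s actual implementation), a minimal keyword-triggered chat intervention of the kind described above might look like the sketch below; the keyword list and the message are placeholders, not clinical guidance.

```python
# Illustration only: keyword-triggered chat intervention of the kind
# described above. Keywords and message are placeholders, not clinical advice.
RISK_KEYWORDS = {"suicide", "self-harm", "overdose"}

SUPPORT_MESSAGE = (
    "It sounds like you might be going through a difficult time. "
    "You can contact your care team, or see the in-app list of support services."
)

def moderate(message: str):
    """Return an automatic intervention message if a risk keyword appears."""
    text = message.lower()
    if any(keyword in text for keyword in RISK_KEYWORDS):
        return SUPPORT_MESSAGE
    return None

print(moderate("I've been thinking about suicide lately"))
```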

The 4 clinicians that reviewed the app said it can proceed to clinical trial with no requirement for further revision. Most would recommend it to at least some of their clients. The content was viewed as acceptable and targeted by most, the app interesting, fun & easy to use. I wish there had been results of the clinical trial, but it looks like this is the next step.

Roger read “Analytics for learning design: A layered framework and tools”, an article from the British Journal of Educational Technology.

This paper explores the role analytics can play in supporting learning design. The authors propose a framework called the “Analytics layers for learning design (AL4LD)”, which has three layers: learner, design and community analytics.

Types of learner metrics include engagement, progression and student satisfaction while experiencing a learning design. Examples of data sources are VLEs or other digital learning environments, student information systems, sensor-based information collected from physical spaces, and “Institutional student information and evaluation (assessment and satisfaction) systems”. The article doesn’t go into detail about the latter, for example to explore and address the generic nature of many evaluations (eg the NSS), which are unlikely to provide meaningful data about the impact of specific learning designs.

Design metrics capture design decisions prior to the implementation of the design. Examples of data include learning outcomes, activities and tools used to support these. The article emphasises that “Data collection in this layer is greatly simplified when the design tools are software systems”. I would go further and suggest that it is pretty much impossible to collect this data without such a system, not least as it requires practitioners to be explicit about these decisions, which otherwise often remain hidden.

Community metrics are around “patterns of design activity within a community of teachers and related stakeholders”, which could be within or across institutions. Examples of data include types of learning design tools used and popular designs in certain contexts. These may be shared in virtual or physical spaces to raise awareness and encourage reflection.
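
To make the three layers a little more concrete, here is one way the framework’s metrics could be represented as data; the class and field names are my own invention rather than a schema from the article.

```python
# One possible data representation of the three AL4LD layers, purely to make
# the framework concrete. Class and field names are invented, not the authors'.
from dataclasses import dataclass, field

@dataclass
class LearnerAnalytics:
    """Layer 1: metrics about learners experiencing a learning design."""
    engagement_events: int = 0
    progression: float = 0.0      # e.g. proportion of activities completed
    satisfaction: float = 0.0     # e.g. from an evaluation survey

@dataclass
class DesignAnalytics:
    """Layer 2: design decisions captured before delivery."""
    learning_outcomes: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)

@dataclass
class CommunityAnalytics:
    """Layer 3: patterns of design activity across a community of teachers."""
    designs_shared: int = 0
    popular_tools: list[str] = field(default_factory=list)
```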

The layers inter-connect, eg learning analytics could contribute to community analytics by providing evidence for the effectiveness of a design. The article goes on to describe four examples. I was particularly interested in the third one, which describes the “experimental Educational Design Studio” from the University of Technology Sydney. It is a physical space where teachers can go to explore and make designs, ie it also addresses the community analytics layer in a shared physical space.

This was an interesting read, but in general I think the main challenge is collection of data for the design and community aspects. For example, Diana Laurillard has been working on systems to do this for many years – eg the Learning Design Support Environment and the Pedagogical Patterns Collector – but there seems to have been little traction.

Amy read: Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers’ perspectives. Student authentication and authorship systems are becoming increasingly widely used in universities across the world, with many believing that cheating is on the rise across a range of assessments. This paper looks at two universities (University A in Turkey and University B in Bulgaria) that have implemented the TeSLA system (an Adaptive Trust-based eAssessment System for Learning). The paper doesn’t review the effectiveness of the TeSLA system, but rather the views of the teachers on whether the system will affect the amount of cheating taking place.

The research’s main aim is to explore the basic rationale for the use of student authentication and authorship systems, and within that, to look at four specific issues:

  1. How concerned are teachers about the issue of cheating and plagiarism in their courses?
  2. What cheating and plagiarism have teachers observed?
  3. If eAssessment were introduced in their courses, what impact do the teachers think it might have on cheating and plagiarism?
  4. How do teachers view the possible use of student authentication and authorship checking systems, and how well would such systems fit with their present and potential future assessment practices?

Data was collected across three different teaching environments: face-to-face teaching, distance learning and blended learning. Data was collected via questionnaires and interviews with staff and students.

The findings, for the most part, were not hugely surprising: the main type of cheating that took place at both universities was plagiarism, followed by ghost-writing (or the use of ‘essay mills’). These were the most common methods of cheating in both exam rooms and online. The reasons staff believed students cheated and the reasons students themselves gave also varied widely. Both teachers and students believed that:

  • Students wanted to get higher grades
  • The internet encourages cheating and plagiarism, and makes it easy to do so
  • There would not be any serious consequences if cheating and plagiarism was discovered

However, teachers also believed that students were lazy and wanted to take the easy way out, whereas students blamed pressure from their parents and the fact that they had jobs as well as studying.

Overall, staff were concerned about cheating and believed it was a widespread and serious problem. The most common problems were plagiarism and ghost-writing, followed by copying and communicating with others during assessments. When asked about ways of preventing cheating and plagiarism, teachers were most likely to recommend changes to educational approaches, followed by assessment design, technology and sanctions. Teachers across the different teaching environments (face-to-face, blended and distance learning) were all concerned about the increase in cheating that may take place with online/eAssessments. This was especially the case for staff who taught on distance learning courses, where students currently take an exam under strict conditions. Finally, all staff believed that the use of student authentication and authorship tools enabled greater flexibility, both in access for those who find it difficult to travel and in forms of assessment. However, staff believed that cheating could still take place regardless of these systems, but that technology could be used in conjunction with other tools and methods to reduce cheating in online assessments.