National Institute for Digital Learning good reads from 2019 – notes from the reading group

Do MOOCs contribute to student equity and social inclusion? A systematic review by Sarah R Lambert (read by Suzi Wells)

This study is a large literature review looking at both empirical research and practitioner guidance around using MOOCs and other open (as in free) learning to promote student equity and social inclusion. The review covers literature from 2014 onwards – the start of the second wave of MOOCs, when more stable and sustainable practice began to emerge.

The aims of the MOOCs were broken into two broad areas: those focused on student equity (making education fairer for tertiary learners and those preparing to enrol in HE), which was the most common aim; and those that sought to improve social inclusion (education of non-award holders & lifelong learners). The literature reviewed was predominantly from the US, UK and Australia – perhaps unsurprising, given that only articles written in English were studied. There was a near 50/50 split between empirical research and policy/practice recommendations, and the studies focused slightly more on MOOCs than on other open learning. One notable finding was that the success rate (among published studies at least) was high – more often than not, projects met or exceeded their aims.

Lambert includes lots of useful detail about factors that may have led to successful projects. MOOC developers should learn about their learners and make content relevant and relatable to them. Successful projects often had community partners involved in the design, delivery & support – in particular, initiatives with large cohorts (~100) that were very successful all had this. Designing for the learners meant things like: designing for mobile-only and offline access, teaching people in their own language (or at least providing mother-tongue facilitation) and, where relevant, mixing practitioners with academics in the content.

Facilitating and supporting the learning was also key to success. Local study groups or face-to-face workshops were used by some projects to provide localisation and contextualisation. Facilitators would ideally be drawn from existing community networks.

A related point was to design content from scratch – recycling existing HE materials was not as successful. This should be done in an interdisciplinary team and/or community partnership. Being driven entirely by an IT or digital education team was an indicator that a project would not meet its aims: projects need technical expertise, but education and/or widening-participation expertise too. Open as in free-to-use is fine; the licence didn’t seem to have an impact.

In short:

  • Work with the people you intend to benefit.
  • Create, don’t recycle.
  • Don’t expect the materials to stand by themselves.

If you’re interested in social justice through open learning, think OU not OERs.

What does the ‘Postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice by Jeremy Knox (read by Suzanne Collins)

This article explores what ‘post-digital’ education means, specifically thinking about human-technology relationships. It begins with an analysis of the term ‘post-digital’, embracing the perspective of ‘post’ as a critical appraisal of our understanding of the digital, rather than simply a different stage that comes after the digital. This initial analysis is worth a read, but it wasn’t my main focus for this reading group, so here I’ll jump straight to the main discussion, which is based on three critical perspectives on the digital in education.

The first is “Digital as Capital”. Here, Knox talks about the commercialisation and capitalisation of digital platforms, such as social media. This platform model is increasingly based on the commodification of data, and so inevitably students/teachers/learners become seen as something which can be analysed (eg learning analytics), or something under surveillance. If surveillance is equated to recognition, this leads to further (perhaps troubling?) implications. Do you need to be seen to be counted as a learner? Is learning always visible? Does this move away from the idea of the web and digital being ‘social constructivist’?

Secondly, Knox looks at “Digital as Policy”. This is (for me) slightly more familiar ground: the idea that ‘digital’ education is no longer as separate or distinct from ‘education’ as it once was. In a ‘post-digital’ understanding, it is in fact mainstream rather than alternative or progressive. The digital in education, however, often manifests as metrification in governance – eg schools are searchable in rankings based on algorithms. In this sense, ‘digital education’ moves away from ‘classroom gadgets’ (as Knox puts it) and becomes something intrinsic and embedded in policy, with strategic influence.

Lastly, he discusses “Digital as Material”, which focuses on surfacing the hidden material dimensions of a sector often seen as ‘virtual’ and therefore ‘intangible’. The tangible, material aspects of digital education include devices, servers, and other physical elements which require manual labour and material resources. On one hand there is efficiency, but on the other there is always labour. As education, particularly digital education, often comes from a sense of social egalitarianism and social justice, this is a troubling realisation, and one which leads to a rethink of the way digital education is positioned in a post-digital perspective.

In conclusion, Knox suggests that ‘post-digital’ should be understood as a ‘holding to account of the many assumptions associated with digital technology’, which I feel sums up his argument and is probably something we should try to do more of, regardless of whether we’re a ‘digital’ or ‘post-digital’ education office.

What’s the problem with learning analytics? by Neil Selwyn (read by Michael Marcinkowski)

For this last session I read Neil Selwyn’s ‘What’s the Problem with Learning Analytics?’ from the Journal of Learning Analytics. Appearing in a journal published by the Society for Learning Analytics Research (SoLAR), Selwyn’s socio-technical approach to the analysis of learning analytics was a welcome, if somewhat predictable, take on a field that too often seems to find itself somewhat myopically digging for solutions to its own narrow set of questions.

Setting learning analytics within a larger social, cultural, and economic field of analysis, Selwyn lays out an orderly account of a number of critical concerns, organized around the implications and values present in learning analytics.

Selwyn lists these consequences of learning analytics as areas to be questioned:

  1. A reduced understanding of education: instead of a holistic view, education is reduced to a simple numerical metric.
  2. Ignoring the broader social contexts of education: there is a danger that by limiting our understanding of education we ignore important contextual factors affecting it.
  3. Reducing students’ and teachers’ capacity for informed decision-making: the results of learning analytics come to overtake other types of decision-making.
  4. A means of surveillance rather than support: in their use, learning analytics can have punitive rather than pedagogical implications.
  5. A source of performativity: students and teachers each begin to focus on achieving results that can be measured by analytics rather than other measures of learning.
  6. Disadvantaging a large number of people: like any data-driven system, decisions about winners and losers can be unintentionally baked into the system.
  7. Servicing institutional rather than individual interests: the analytics have more direct benefit for educational institutions and analytics providers than for students.

He goes on to list several questionable values embedded in learning analytics:

  1. A blind faith in data: there is a risk of a contemporary over-valuation of data.
  2. Concerns over the data economy: what are the implications when student data is monetized by companies?
  3. The limits of free choice and individual agency: does a reliance on analytic data remove the ability of students and educators to have a say in their education?
  4. An implicit techno-idealism: part of learning analytics is a belief in the benefits of the impacts of technology.

To address these concerns, Selwyn proposes a few broad areas for change designed to improve learning analytics’ standing within a wider field of concern:

  1. Rethink the design of learning analytics: allow for more transparency and customization for students.
  2. Rethink the economics of learning analytics: give students ownership of their data.
  3. Rethink the governance of learning analytics: establish regulatory oversight for student data.
  4. A better public understanding of learning analytics: educate the wider public about the ethical implications of applying learning analytics to student data.

Overall, Selwyn’s main point remains the most valuable: the idea of learning analytics should be examined within the full constellation of social and cultural structures within which it is embedded. Like any form of data analytics, learning analytics does not exist as a perfect guide to action, and the insights it generates need to be understood as only partial and implicated by the mechanisms designed to generate the data. In the end, Selwyn’s account is a helpful one – it is useful to have such critical voices welcomed into SoLAR – but the level at which he casts his recommendations remains too broad for anything other than a starting point. Setting clear policy goals and fostering a broad understanding of learning analytics are promising external changes that can be made to the context within which learning analytics is used, but in the end, what is necessary is for those working in the field – those constructing systems of data generation and analysis – to alter the approaches they take, both in the ‘ownership’ and in the interpretation of student data. This reinforces the need to change how we understand what ‘data’ is and how we think about using it. Following Selwyn, the most important change might be to re-evaluate the ontological constitution of data and our connection to it, coming to understand it not as something distinct from students’ education, but as an integral part of it.

Valuing technology-enhanced academic conferences for continuing professional development: a systematic literature review, Professional Development in Education, by Maria Spilker (read by Naomi Beckett)

This literature review analyses the different approaches taken to technologically enhance academic conferences for continuing professional development. Although advances have been made and new practices are emerging, a coherent approach was found to be lacking: conferences were being evaluated in narrow ways that did not consider all perspectives.

‘Continued professional development for academics is critical in times of increased speed and innovation, and this intensifies the responsibilities of academics.’ 

This makes it more important to ensure that when academics come together at a conference, there is a systematic approach to what they should be getting out of the time spent there. The paper suggests this is something that needs to be looked at when first starting to plan a conference: what are the values?

The paper talks about developing different learning experiences at a conference to engage staff and build their professional development. There is often little time for reflection, and the paper suggests looking at more ways to include this; technology is one way this could be done. Engagement on Twitter, for example, gives users another channel to discuss and network, and takes them beyond the traditional conference formats. Making conferences available online also gives users the opportunity to reach wider networks.

The paper draws on the value creation framework, looking at what values we should be taking out of conferences: immediate value, potential value, applied value, realised value, and re-framing value. Looking at these to begin with is a good start to thinking about how we can frame academic conferences, so delegates get the most use out of the time spent there and work on their own professional development too.

We asked teenagers what adults are missing about technology. This was the best response by Taylor Fang (read by Paddy Uglow)

Some thoughts I took away:

  • Traditionally a “screen” was a thing to hide or protect, and to aid privacy. Now it’s very much the opposite.
  • Has society changed so much that social media is the only place that young people can express themselves and build a picture of who they are and what their place is in the world?
  • Adults have a duty to help young people find other ways to show who they are to the world (and the subsequent reflection back to themselves).
  • Digital = data = monetisation: everything young people do online goes into a money-making system which doesn’t have their benefit as its primary goal.
  • Young people are growing up in a world where their importance and value are quantified by stats, likes, shares etc, and all these numbers demonstrate to them that they’re less important than other people, encouraging desperate measures to improve their metrics.
  • Does a meal/holiday/party/etc really exist unless it’s been published and Liked?
  • Does the same apply to Learning Analytics? Are some of the most useful learning experiences those which don’t have a definition or a grade?

Suggested reading

Curriculum design – notes from the reading group

Exploring curriculum design approaches (report by Suzanne Collins)

Suzanne talked about her work exploring curriculum design approaches (separate blog post), where she looks at methodologies such as ABC, Carpe Diem, CAIeRO and ELDeR.

ABC Learning Design (notes by Suzi Wells)

ABC learning design is a rapid design/review methodology developed by Clive Young and Nataša Perović in 2014, drawing on Laurillard’s ‘Conversational Framework’.

The method is centred around a 90-minute workshop, during which participants:

  • Describe their unit in a tweet
  • Map out the types of activity, currently undertaken or planned, against Laurillard’s six learning types
  • Storyboard the unit using the ABC cards

In the DEO a few of us – Roger Gardner, Suzanne Collins, and I – have trialled this approach. Initially this was with a small number of academics interested in redesigning their credit-bearing units. We made much fuller use of it when supporting the design of the FutureLearn courses, following which Suzanne and I presented on this at a UCL conference in 2018: our presentation on using ABC at Bristol.

One advantage of the methodology is that you could run a single 90-minute workshop looking at an entire programme, allowing potential links between the units to become apparent. The short length of the workshop gives at least some chance of getting everyone from the unit together in one place.

The cards are Creative Commons licensed and have been widely adapted, with adopters adding activities and terminology more relevant to their contexts. On the ABC site you can download ABC cards for MOOCs, designed for FutureLearn and EdX. At the conference we heard how people have used stickers in the storyboarding stage to surface things they are interested in: employability skills, alignment with the education strategy, and university-wide themes (such as Bristol’s global citizenship, innovation & enterprise, and sustainable futures themes).

Obviously a 90-minute workshop is not going to give you time to finalise many details, but ABC is quick to learn, very adaptable, and sparks good conversations. It’s remarkable how much can be done in a short time.

Beyond podcasting: creative approaches to designing educational audio (notes by Chrysanthi Tseloudi)

This paper describes a pilot that aimed to encourage academics to use podcasts in their teaching through a tool in their VLE. The pilot included initial workshops and various types of support for the 25 participants who decided to try it out. All participants used audio, apart from one team, which used video podcasts. Nine of them shared their experience for this paper. They had produced various types of resources: videos about clinical techniques (nursing), audio based on research projects which also received audio feedback from the academic (sport), “questions you’re afraid to ask” (art & design), answers to distance learning students’ questions to reduce the sense of isolation in the VLE (communications), etc.

Academics enjoyed using audio for learner-centred pedagogies, but they also encountered some barriers. Expectations of high quality can be a barrier for both staff and students, while assessing student work in this format is time-consuming. Unfamiliarity with the technology can be frustrating and off-putting for staff, as they would rather not ask students to do something they didn’t feel confident they could do well themselves. And students are not necessarily any more confident than staff in using this technology. Following the pilot, the institution’s capacity to support such activities was evaluated and some solutions to support staff were devised.

This was a nice paper with a variety of ideas on using audio for teaching. I found the point about voice on the VLE increasing connectivity and reducing isolation particularly interesting, and would love to see any relevant research on this.

Suggested reading

Approaches to curriculum design


Exploring Curriculum Design Approaches

Recently at the University of Bristol, we’ve all been thinking a lot about learning design, developing curriculum and ways of assessing. BILT’s focus on TESTA for transforming assessment is one way you can see this in action. In higher education, learning design can quickly get complicated – for example, redesigning a whole programme – and increasingly involves new and exciting elements: online or blended aspects, new assessment methods, or innovative pedagogies. A method of working when approaching curriculum, programme or learning design can speed up the process and make it much more enjoyable for everyone involved. Helpfully, there are several working methods based on storyboarding which provide a way to navigate this process, and which focus on a team approach to designing learning.

The Digital Education Office have mainly used an approach based on UCL’s ABC: you can read more about our use of this method in a series of blog posts by Suzi Wells and me on a previous ABC conference held at UCL.

Such curriculum design approaches all facilitate discussion and evaluation of current and future learning designs by bringing together relevant stakeholders, learning design specialists and support staff. In the Sway presentation embedded here, we’ll have a quick look at a few, in order to get a taste of what these approaches involve, and how they’ve been used by others. Follow this link to open the Sway in a new tab or window.

Notes from the reading group – free choice

Digital wellbeing toolkit from BBC R&D (notes from Suzi Wells)

This toolkit from BBC R&D is designed for addressing wellbeing in digital product development. Given the increasing focus on student wellbeing, I was interested in whether it could be useful for online & blended learning.

The key resources provided are a set of values cards along with some exploration of what these mean, particularly for young people (16-34 year olds). These were developed by the BBC through desk research, focus groups and surveys.

The toolkit – the flashcards especially – could certainly be used to sense-check whether very digital teaching methods are really including and supporting the kinds of things we might take for granted in face-to-face education. It could also be useful for working with students to discuss the kinds of things that are important to them in our context, and identifying the kinds of things that really aren’t.

Some of the values seem too obvious (“being safe and well”, “receiving recognition”) or baked into education (“achieving goals”, “growing myself”), and I worry that could be off-putting to some audiences. The language also seemed a little strange – “human values”, as though “humans” were an alien species. It can feel like the target audience for the more descriptive parts is someone who has never met a “human”, much less a young one. Nonetheless, the flashcards in particular could be a useful way to kick off discussions.

[Image: three example flashcards showing the values ‘being inspired’, ‘expressing myself’ and ‘having autonomy’]

New studies show the cost of student laptop use in lecture classes (notes from Michael Marcinkowski)

The article I read for this session highlighted two new-ish articles which studied the impact that student laptop use had on learning during lectures. In the roughly ten years that laptops have been a common sight in lecture halls, a number of studies have looked at what impact they have on notetaking during class. These previous studies have frequently found a negative association with laptop use for notetaking in lectures, not only for the student using the laptop, but also for other students sitting nearby, distracted by the laptop screen.

The article took a look at two new studies that attempted to tackle some of the limitations of previous work, particularly addressing the correlative nature of previous findings: perhaps low performing students prefer to use laptops for notetaking so that they can do something else during lectures.

What bears mentioning is that there is something somewhat quaint about studying student laptop use. In most cases, it seems a foregone conclusion: there is no putting the technology back in the box. Students will use laptops and other digital technologies in class – there’s almost no other option at this point. Nevertheless, the studies proved interesting.

The first of the highlighted studies featured an experimental set-up, randomly assigning students in different sections of an economics class to different conditions: notetaking with laptop, without laptop, or with a tablet lying flat on the desk. The last condition was designed to test the effect of students being distracted by seeing other students’ screens, the supposition being that if the tablet was laid flat on a desk, it wouldn’t be visible to other students. The students’ performance was then measured based on a final exam taken by students across the three conditions.

After controlling for demographics, GPA, and ACT entrance exam scores, the researchers found that performance was lower for students using digital technologies for notetaking. However, while performance was lower on the multiple-choice and short-answer sections of the exam, performance on the essay portion was the same across all three conditions.

While the study did address some shortcomings of previous studies (particularly with its randomized experimental design), it also introduced several others. Importantly, it raised questions about how teachers might teach differently when faced with a class of laptop users, or what effect forcing a student who isn’t comfortable using a laptop might have on their performance. Also, given that multiple sections of an economics class were the subject of the study, what role does the discipline being lectured on play in the impact of laptop use?

The second study attempted to address these through a novel design which inferred students’ propensity to use or not use laptops in optional-use classes from whether they were required to use, or prohibited from using, laptops in another class on the same day. Researchers looked at institution-wide student performance at an institution with a mix of classes which required, forbade, or had no rules about laptop use.

By looking at student performance in classes in which laptop use was optional, and linking that performance to whether students’ laptop choices would be influenced by other classes held on the same day, the researchers were able to measure student performance when students had a chance not to use a laptop in class. That is, the design allowed researchers to understand in general how many students might be using a laptop in a laptop-optional class, while still allowing individual students to make a choice based on preference.

What they found was that student performance worsened in classes that shared a day with laptop-mandated classes, and improved in classes that shared a day with laptop-prohibited classes. This is in line with previous studies but, interestingly, the negative effects were seen more strongly in weaker students and in quantitative classes.

In the end, even while these two new studies reinforce what had been previously demonstrated about student laptop use, is there anything that can be done to counter what seem to be the negative effects of laptop use for notetaking? More than anything, what seems to be needed are studies looking at how to boost student performance when using technology in the classroom.

StudyGotchi: Tamagotchi-Like Game-Mechanics to Motivate Students During a Programming Course & To Gamify or Not to Gamify: Towards Developing Design Guidelines for Mobile Language Learning Applications to Support User Experience; two poster/demo papers in the EC-TEL 2019 Proceedings (notes from Chrysanthi Tseloudi)

Both papers talk about the authors’ findings on gamified applications related to learning.

The first concerns the app StudyGotchi, based on the Tamagotchi (a virtual pet the user takes care of), which aims to encourage first-year Java programming students to complete tasks on their Moodle platform in order to keep a virtual teacher happy. Less than half the students downloaded the app, with half of those receiving the control version without the game functions (180) and half receiving the game version (194). According to the survey, some of those who didn’t download it reported that this was because they weren’t interested in it. Of those that replied to whether they used it, less than half said they did. According to data collected from students who used either version of the app, there was no difference in either the online behaviour or the exam grades of the students between the groups that used the game and non-game versions. The authors attribute this to the lack of interaction, personalisation options and immediate feedback on the students’ actions on Moodle. I also wonder whether a virtual teacher to be made happy is the best choice of “pet”, when hopefully there is already a real teacher supporting students’ learning. Maybe a virtual brain with regions that light up when a quiz is completed, or any non-realistic representation connected to the students’ own development, would be more helpful in increasing students’ intrinsic motivation, since ideally they would be learning for themselves, and not to make someone else happy.

The second paper compares two language-learning apps, one of which is gamified. The non-gamified app (LearnIT ASAP) includes exercises where students fill in missing words, feedback showing whether the answer is correct or incorrect, and statistics to track progress. The gamified app (Starfighter) includes exercises where students steer through an asteroid field by selecting answers to given exercises, and a leaderboard to track progress and compete with peers. The evaluation involved interviewing eleven individuals aged 20-50. The authors found that younger and older participants had different views about the types of interactions and aesthetics of the two apps. Younger participants would have preferred swiping to tapping; older participants seemed to find the non-gamified app comfortable because it looked like a webpage, but were not so sure about the gamified app. The game mechanics of Starfighter were thought to be more engaging, while the pedagogical approach of LearnIT ASAP was thought to be better in terms of instructional value and effectiveness. While the authors mention that the main difference between the apps is gamification, considering the finding that the pedagogical approach of one of the apps is better, I wonder if that is actually the case. Which game elements actually improve engagement, and how, is still being researched, so I would really like to see comparisons of language-learning apps where the presence or absence of game elements is indeed the only difference. Using different pedagogical approaches between the apps is less likely to shed light on the best game elements to use, but it does emphasize the difficulty of creating an application that is both educationally valuable and fun at the same time.

Accessibility, inclusivity, universal design – notes from the reading group

Naomi looked at the OU course Accessibility of e-learning, and read the 10 key points from the UCL blog

The summary comments written by Jessica Gramp summed up the OU course and gave a good overview of what the course covered, as well as an idea of how wide the disability scope is. It was an interesting read for someone whose knowledge of accessibility in e-learning is quite limited.

The post explains that there are two views of disability. The Medical Model describes ‘the problem of disability as stemming from the person’s physical or mental limitation’, while the Social Model ‘sees disability as society restricting those with impairments in the form of prejudice, inaccessible design, or policies of exclusion.’

The idea of society restricting those with impairments through inaccessible design was interesting, as it is something most people have done but often give little thought to. We often like to design things to look ‘pretty’ but give little thought to those using screen readers, or to how we would describe an image, for example. The post also mentions that accessibility is about both technical and usable access for people with disabilities. Jessica gives the example of a table of data: although it may be technically accessible to someone who is blind, the meaning of the data could be lost when read by a screen reader, making it unusable. The post and course both talk about evaluating accessibility, but for me it’s something that needs to come right at the beginning of the design. There is no point designing something that uses spreadsheets, for example, if screen readers won’t convey the correct data and meanings to the users.
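To make the table example concrete, here is a minimal sketch of my own (not from the post or the course – the function and data are hypothetical) showing how a table can be marked up so that a screen reader announces each value together with its column heading, rather than as a bare stream of numbers:

```typescript
// Hypothetical sketch: marking up a data table so it is usable, not just
// technically readable, with a screen reader. The key points are the
// <caption> (announces what the table is about) and <th scope="col">
// (associates every data cell with its column heading).
function renderAccessibleTable(
  caption: string,
  headers: string[],
  rows: string[][]
): HTMLTableElement {
  const table = document.createElement("table");

  // Screen readers announce the caption before entering the table.
  const cap = document.createElement("caption");
  cap.textContent = caption;
  table.appendChild(cap);

  // Header cells marked with scope="col" let assistive technology read
  // "Assignment 2: 64%" instead of just "64%".
  const headRow = document.createElement("tr");
  for (const heading of headers) {
    const th = document.createElement("th");
    th.scope = "col";
    th.textContent = heading;
    headRow.appendChild(th);
  }
  table.createTHead().appendChild(headRow);

  // Plain data cells; their meaning now comes from the headers above.
  const body = table.createTBody();
  for (const row of rows) {
    const tr = document.createElement("tr");
    for (const value of row) {
      const td = document.createElement("td");
      td.textContent = value;
      tr.appendChild(td);
    }
    body.appendChild(tr);
  }
  return table;
}

// Illustrative use: the same data pasted in as an image or an unlabelled
// grid would be technically present but meaningless to a screen-reader user.
document.body.appendChild(
  renderAccessibleTable(
    "Unit marks by assignment",
    ["Student", "Assignment 1", "Assignment 2"],
    [["A. Jones", "72%", "64%"]]
  )
);
```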

The last point Jessica makes, which I really liked, was that accessible learning environments help everyone, not just those with disabilities.

“This last point reflects my own preference for listening to academic papers while running or walking to work, when I would be otherwise unable to “read” the paper. As a student and full-time employee, being able to use this time to study enables me to manage my time effectively and merge my fitness routine, with study time. This is only possible because my lecturers, and many journals these days too, provide accessible documents that can be read out loud using my mobile smartphone.” – Jessica Gramp

A thought-provoking blog post that gave me a lot to think about and made me put more thought into the work I create online.

Whilst reading this I also came across an article on Twitter from Durham’s student paper, The Palatinate. This talks about how Durham University have introduced lecture capture for their lectures. However, the English department have opted out, citing changes to teaching relationships and a ‘lack of credible evidence that lecture capture improves academic attainment.’ In the department’s email, they talk about the ‘danger of falling attendance, and the potential compromise of the classroom as a safe place, where controversial material can be discussed.’

These are all good points, but the writer of the article points out that accessibility needs may be more important than these factors. With such a wide range of disabilities, lecture capture could provide help to those that need it. And if the department isn’t going to use lecture capture, the question remains: what is it doing to help its students with disabilities?

It was an interesting article that makes us think about how much weight accessibility carries in teaching and learning. It should be at the front of our minds when we first start designing how we are going to teach or present data. But there is often a stigma, and it can also cause tensions and challenges. Going forward, these need to be addressed rather than ignored.

Suzi read Applying Universal Design for Learning (UDL) Principles to VLE design from the UCL blog. A short but very thorough and clear post, written as part of UCL’s Accessible Moodle project. For the most part this is, reassuringly enough, a re-framing of things we know make for good accessible web design (resizing text, designing for screen readers, etc). However, it did include the following:

“The VLE should also offer the ability to customise the interface, in terms of re-ordering frequently accessed items, placement of menus and temporarily hiding extraneous information that may distract from the task at hand.”

These are not suggestions I have seen before in an accessibility context, possibly because they are more difficult to implement. In particular, the idea that limiting distracting information is an accessibility issue seems obvious once it’s been said. It’s something that would be welcome for a wide range of our students and staff.

Suzi also read Advice for making events and presentations accessible from GOV.UK. Again this is very clear, straightforward advice, well worth being aware of. The advice is for face-to-face events but covers points on supporting a partially remote audience. Some of the points I had not thought of included:

  • Ask your participants an open question about their requirements well before the event. Their wording is “Is there anything we can do to enable you to be able to fully participate in this event?”
  • Don’t use white slide backgrounds because of the glare. For example, GOV.UK slide decks use black text on grey or white text on dark blue.
  • Give audio or text descriptions of any video in your presentation.

There are also some interesting suggestions in the comments. I found the comments particularly interesting as they seem to be individuals speaking directly about their own needs (or possibly those of people they work with) and what they would find most useful. Suggestions include ensuring there is good 3G or 4G coverage, as wifi might not be enough to support assistive technologies, and opening with a roll call (because as a blind person you can’t just glance around the room to see who is there). One commenter suggests you should always sing the key points from your presentation (to an existing tune, no need to compose especially) – an idea I love but not one I’m up to implementing.

Chrysanthi watched two videos from the list 15 inspiring inclusive design talks:

When we design for disability, we all benefit | Elise Roy

In this talk, Elise Roy gives examples of inventions that were initially inspired by/ for people with disabilities, but turned out to be useful for people without as well. These include:

  1. Safety glasses (her own invention) that visually alert the user to changes in pitch coming from a tool (which can mean the tool will kick back) before the human ear can pick them up.
  2. A potato peeler that was designed for people with arthritis but was so comfortable that others used it.
  3. Text messaging, which was originally conceived for deaf people.

Her suggestion is to design for people with disabilities first, rather than for the norm. The solution may then be not only inclusive, but potentially better than if it had been designed for the norm. So rather than “accommodating” people with disabilities, use that energy to come up with innovative solutions that are beneficial to all.

Derek Featherstone: Accessibility is a Design Tool

Derek Featherstone makes a similar point to Elise Roy, that designing for accessibility can help everyone. Looking at how outliers/ people at the ends of a spectrum will be influenced by a design decision can also help understand how the average person will be affected. “If we look at the extremes, everybody else is going to be somewhere in the middle”. Between no vision and perfect vision, between no hearing and perfect hearing etc.

The main points to consider for accessibility as a design tool:

  1. People with disabilities may have needs for specific types of content, on top of the content everyone else gets, in order to make decisions: e.g. to choose a health provider, they don’t just need to know how far away the provider is, but perhaps where the wheelchair ramp is at the practice, as that might affect whether they choose to go to this one or choose a different one. Designers should find out what kind of extra content they need. Other examples: Are there captions for this film I am considering watching?
  2. When trying to make something accessible, it is important to consider why it is included in the first place, rather than just what it is. That could be the difference between providing a confusing textual description of an element, and a clear one of how the information the element portrays affects the people accessing it. E.g. instead of trying to textually describe a change of boundaries on a map, give someone the ability to look up their post code and see if they are affected by that change.
  3. Proximity: this well-known design principle of grouping related items together (e.g. images with their textual explanations, instructions with the parts they refer to, etc) is even more important for people with certain types of disability, like low vision. This is because it is much easier for them to lose the context, as they see much less of the interface at a time. Derek suggests getting a feel for this by examining an interface through your fist, as if looking through a straw. Actions, buttons etc should be placed so that the desired action is located where the person would expect, according to the patterns of use that have already been established. If, so far, the action has been on a specific part of the screen, changing that will be confusing. Buttons should also be distinguishable from each other without reading: for ‘previous’ and ‘next’ buttons, using exactly the same colours, font and size means the user has to read them to tell them apart.

Finally, it is important to not get so caught up in the technical requirements of making something accessible on paper, that we forget what it is we are trying to achieve.

Suzanne read New regulations for online learning accessibility (WonkHE, 23 Sept 2018)

Published in WonkHE in September 2018, this article by Robert McLaren outlines the new regulations for online learning accessibility. McLaren works for the think tank Policy Connect, which published a report in collaboration with Blackboard Ally after the government ratified the EU Web Accessibility Directive on 23 September. The directive clarifies the position of HE institutions as public sector bodies and thus includes them in the requirements for web accessibility. This means that VLEs, online documents, video recordings etc all count as web content, and need to meet the four principles of accessible web design: that it is perceivable, operable, understandable, and robust. Additionally, VLEs will also have to include an accessibility statement outlining the accessibility compliance of the content, directing students to tools to help them get the most from the content (such as browser plugins), and explaining how students can flag any inaccessible content. As McLaren notes, this has long been considered good practice and isn’t really anything new, but it is now a legal duty.

The article then outlines several areas which may still need addressing in VLE content. The first is ensuring content is usable. The example he uses is the prevalence of scanned PDFs, which are hard or impossible to work with (as they appear as images rather than text) for disabled students, but also for non-disabled students and those working on mobile devices. From this point, McLaren moves on to discuss briefly the idea of universal design, which he defines as “educational practice that removes the barriers faced by disabled students and thereby benefits all students.” He claims that the rise of universal design has in part been fuelled by cuts to Disabled Students’ Allowances and an increasing shift in focus towards universities removing barriers for disabled students, rather than the DSA and other measures which mitigate those barriers once they are in place.

The article then suggests a model for bringing about the change required to meet these needs: “We recommended a cascading approach. Government should work with sector organisations to provide training for key staff such as learning technologists, who can in turn train and produce guidance for teaching staff.” As the report was sponsored by Blackboard Ally, it is perhaps not surprising that another side of their solution is providing a range of usable and flexible resources, which Ally helps institutions ensure they are doing. The final remarks, however, surely stand true however this is achieved (through Ally or other means): “An inclusive approach allows all students to learn in the ways that suit them best. If the sector can respond effectively to these regulations, all students, disabled and non-disabled, will benefit from a better learning experience.”

Suggested reading

Miscellany – notes from the reading group

No theme this month – just free choice. Here’s what we read (full notes below):

Naomi read Stakeholders’ perspectives on graphical tools for visualising student assessment and feedback data.

This paper from the University of Plymouth looks at the development and progression of learning analytics within Higher Education. Luciana Dalla Valle, Julian Stander, Karen Gretsey, John Eales, and Yinghui Wei all contributed. It covers how four graphical visualisation methods can be used by different stakeholders to interpret assessment and feedback data, the stakeholders being external examiners, learning developers, industrialists (employers), academics and students.

The paper discusses how there is often difficulty pulling information from assessments and feedback, as there can be a lot of data to cover. Graphical visualisations mean information can be shared and disseminated quickly, as there is one focal point to concentrate on. It’s mentioned that some can include ‘too much information that can be difficult for teachers to analyse when limited time is available’, but the paper argues it is then important to evaluate the visualisations from the point of view of the different stakeholders who may be using them.

The paper looks at how learning analytics can be seen as a way to optimise learning and allow stakeholders to fully understand and take on board the information they are provided with. For students it was seen as a way to get the most out of their learning whilst also flagging students facing difficulties. The paper also talks about how it brings many benefits to students who are described as the ‘overlooked middle’. Students are able to easily compare their assessments, attainment, and feedback to see their progression. Students agreed that the visualisations could assist with study organisation and module choice, and it’s also suggested that taking these analytics into account can improve social and cultural skills. For external examiners, analytics was seen as a real step forward in their learning and development: a quick way to assimilate information and improve their ‘knowledge, skills and judgement in Higher Education Assessment’. Having to judge and compare academic standards over a diverse range of assessment types is difficult, and visual graphics bring a certain simplicity to this. For learning developers too, these images and graphics are suggested to help in ‘disseminating good practice’.

The paper goes on to explain how the visualisations improve each stakeholder’s evaluation of assessment. It goes into a lot of detail on the different visualisations suggested, commenting on the benefits and drawbacks of each, which is worth a more detailed look. It should also be noted that the paper suggests there could be confidentiality or data protection issues involved in sharing or releasing data like this, as in most cases such data is only seen at faculty or school level. Student demoralisation is also mentioned near the end of the paper as a contributing factor to why these graphics may not always work in the best ways. It finishes by suggesting that it would be interesting to study changes in students’ confidence and self-esteem due to assessment data sharing. It’s an interesting idea that needs to be carefully thought out and analysed to ensure it produces a positive and constructive result for all involved.

Suzanne read: Social media as a student response system: new evidence on learning impact

This paper begins with the assertion that social media is potentially a “powerful tool in higher education” due to its ubiquitous nature in today’s society, while also recognising that to date the role of social media in education has been a difficult one to pin down. There have been studies showing that it can both enhance learning and teaching, and be a severe distraction for students in the classroom.

The study sets out to answer these two questions:

  • What encourages students to actively utilise social media in their learning process?
  • What pedagogical advantages are offered by social media in enhancing students’ learning experiences?

To look at these questions, the researchers used Twitter in a lecture-based setting with 150 accounting undergraduates at an Australian university. In the lectures, Twitter could be used in two ways: as a ‘backchannel’ during the lecture, and as a quiz tool. As a quiz tool, the students used a specific hashtag to tweet their answers to questions posed by the lecturer at regular intervals during the session, related to the content that had just been covered. These lectures were also recorded, and a proportion of the students only watched the recorded lecture, as they were unable to attend in person. Twitter was chosen for two main reasons. First, the researchers assumed that many students would already be familiar and comfortable with it. Secondly, using Twitter wouldn’t need any additional tools, such as clickers, or software (assuming that students already had it on their devices).

Relatively early on, several drawbacks to using Twitter were noted. There was an immediate (and perhaps not surprising?) tension between the students’ and lecturers’ public and private personas on Twitter. Some students weren’t comfortable tweeting from their own personal accounts, and the researchers actually recommended that lecturers make new accounts to keep their ‘teaching’ life separate from their private lives. There was also a concern about the unpredictability of tapping into students’ social media, in that the lecturer had no control over what the students wrote in such a public setting. It also turned out (again, perhaps unsurprisingly?) that not all students liked or used Twitter, and some were quite against it. Finally, it was noted that once students were on Twitter, it was extremely easy for them to get distracted.

In short, the main findings were that the students on the whole liked and used Twitter for the quiz breaks during the lecture. Students self-reported being more focused, and that the quiz breaks made the lecture more active and helped with their learning as they could check their understanding as they went. This was true for students who actively used Twitter in the lecture, those who didn’t use Twitter but were still in the lecture in person, and those who watched the online recording only. During the study, very few students used Twitter as a backchannel tool, instead preferring to ask questions by raising a hand, or in breaks or after the lecture.

Overall, I feel that this supports the idea that active learning in lectures is enhanced when students are able to interact with the material presented and the lecturer. Breaking up content and allowing students to check their understanding is a well-known and pedagogically sound approach. However, this study doesn’t really demonstrate any benefit of using Twitter, or social media, specifically. The fact that students saw the same benefit regardless of whether they used Twitter to participate or were just watching the recording (where they paused the recording to answer the questions themselves before continuing to the answers) seems to back this up. In fact, in not using Twitter in any kind of ‘social’ way, and trying to hive off a private space for lecturers and students to interact in such a public environment, the study seems to miss the point of social media altogether. For me, the initial research questions therefore remain unanswered!

Suzi read Getting things done in large organisations

I ended up with a lot to say about this so I’ve put it in a separate blog post: What can an ed techie learn from the US civil service?. Key points for me were:

  • “Influence without authority as a job description”
  • Having more of a personal agenda, and preparing for what I would say if I got 15 minutes with the VC.
  • Various pieces of good advice for working effectively with other people.

Chrysanthi read Gamification in e-mental health: Development of a digital intervention addressing severe mental illness and metabolic syndrome (2017). This paper talks about the design of a gamified mobile app that aims to help people with severe chronic mental illness in combination with metabolic syndrome. While the target group is quite niche, I love the fact that gamification is used in a context that considers the complexity of the wellbeing domain and the interaction between mental and physical wellbeing. The resulting application, MetaMood, is essentially the digital version of an existing 8-week long paper-based program with the addition of game elements. The gamification aims to increase participation, motivation and engagement with the intervention. It is designed to be used as part of a blended care approach, combined with face to face consultations. The game elements include a storyline, a helpful character, achievements, coins and a chat room, for the social element. Gamification techniques (tutorial, quest, action) were mapped to traditional techniques (lesson, task, question) to create the app.

The specific needs of the target group required the contributions of an interdisciplinary team, as well as relevant game features; eg the chat room includes not only a profanity filter, but also automatic intervention when keywords like suicide are used (informing the player of the various resources available to help in these cases). Scenarios, situations and names were evaluated for their potential to trigger patients, and changes were made accordingly; eg the religious-sounding name of a village was changed, as it could have triggered delusions.

The four clinicians who reviewed the app said it can proceed to clinical trial with no requirement for further revision, and most would recommend it to at least some of their clients. Most viewed the content as acceptable and well targeted, and the app as interesting, fun & easy to use. I wish there had been results of the clinical trial, but it looks like this is the next step.

Roger read “Analytics for learning design: A layered framework and tools”, an article from the British Journal of Educational Technology.

This paper explores the role analytics can play in supporting learning design. The authors propose a framework called the “Analytics layers for learning design (AL4LD)”, which has three layers: learner, design and community analytics.

Types of learner metrics include engagement, progression and student satisfaction while experiencing a learning design. Examples of data sources are VLEs or other digital learning environments, student information systems, sensor-based information collected from physical spaces, and “institutional student information and evaluation (assessment and satisfaction) systems”. The article doesn’t go into detail about the latter – for example, to explore and address the generic nature of many evaluations (eg the NSS), which are unlikely to provide meaningful data about the impact of specific learning designs.

Design metrics capture design decisions prior to the implementation of the design. Examples of data include learning outcomes, activities and the tools used to support these. The article emphasises that “Data collection in this layer is greatly simplified when the design tools are software systems”. I would go further and suggest that it is pretty much impossible to collect this data without such a system, not least because it requires practitioners to be explicit about these decisions, which otherwise often remain hidden.

Community metrics are around “patterns of design activity within a community of teachers and related stakeholders”, which could be within or across institutions. Examples of data include types of learning design tools used and popular designs in certain contexts. These may be shared in virtual or physical spaces to raise awareness and encourage reflection.

The layers inter-connect: eg learning analytics could contribute to community analytics by providing evidence for the effectiveness of a design. The article goes on to describe four examples. I was particularly interested in the third one, which describes the “experimental Educational Design Studio” at the University of Technology Sydney. It is a physical space where teachers can go to explore and make designs, ie it also addresses the community analytics layer in a shared physical space.
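To make the three layers more tangible, here is a rough sketch of how they might be represented as data – my own illustration in TypeScript, where the field names are assumptions rather than the AL4LD paper’s actual schema:

```typescript
// Hypothetical sketch of the three AL4LD layers as data types.
// Field names are illustrative assumptions, not the paper's schema.

// Learner analytics: metrics captured while students experience a design.
interface LearnerAnalytics {
  designId: string;      // ties the metrics back to a specific design
  engagement: number;    // eg share of activities attempted (0..1)
  progression: number;   // eg share of the design completed (0..1)
  satisfaction?: number; // eg evaluation score, where one exists
}

// Design analytics: decisions recorded *before* implementation - which,
// as the article notes, is only practical when the design tool itself
// is a software system.
interface DesignAnalytics {
  designId: string;
  learningOutcomes: string[];
  activities: string[]; // eg Laurillard learning types used
  tools: string[];      // eg forum, quiz, video
}

// Community analytics: patterns of design activity across teachers.
interface CommunityAnalytics {
  toolUsage: Record<string, number>; // design tool -> number of adopters
  popularDesigns: string[];          // designs widely reused in a context
}

// The inter-connection described in the article: learner evidence can
// feed the community layer by surfacing designs that appear effective.
function effectiveDesigns(
  learners: LearnerAnalytics[],
  threshold = 0.7
): string[] {
  return learners
    .filter((record) => record.progression >= threshold)
    .map((record) => record.designId);
}
```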

This was an interesting read, but in general I think the main challenge is the collection of data in the design and community layers. Diana Laurillard, for example, has been working on systems to do this for many years – eg the Learning Design Support Environment and the Pedagogical Patterns Collector – but there seems to have been little traction.

Amy read: Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers’ perspectives. Student authentication and authorship systems are becoming increasingly well-used in universities across the world, with many believing that cheating is on the rise across a range of assessments. This paper looks at two universities (University A in Turkey and University B in Bulgaria) who have implemented the TeSLA system (an Adaptive Trust-based eAssessment System for Learning). The paper doesn’t review the effectiveness of the TeSLA system, but rather the views of the teachers on whether the system will affect the amount of cheating taking place.

The research’s main aim is to explore the basic rationale for the use of student authentication and authorship systems, and within that, to look at four specific issues:

  1. How concerned are teachers about the issue of cheating and plagiarism in their courses?
  2. What cheating and plagiarism have teachers observed?
  3. If eAssessment were introduced in their courses, what impact do teachers think it might have on cheating and plagiarism?
  4. How do teachers view the possible use of student authentication and authorship checking systems, and how well would such systems fit with their present and potential future assessment practices?

Data was collected across three different teaching environments: face-to-face teaching, distance learning and blended learning. Data was collected via questionnaires and interviews with staff and students.

The findings, for the most part, were not hugely surprising: the main type of cheating at both universities was plagiarism, followed by ghost-writing (or the use of ‘essay mills’). These were the most common methods of cheating both in exam rooms and online. The reasons staff believed students cheated and the reasons students themselves gave also varied widely. Both teachers and students believed that:

  • Students wanted to get higher grades
  • The internet encourages cheating and plagiarism, and makes it easy to do so
  • There would not be any serious consequences if cheating and plagiarism were discovered

However, teachers also believed that students were lazy and wanted to take the easy way out, whereas students blamed pressure from their parents and the demands of holding down a job alongside their studies.

Overall, staff were concerned about cheating, and believed it was a widespread and serious problem. The most common problems were plagiarism and ghost-writing, followed by copying and communicating with others during assessments. When asked about ways of preventing cheating and plagiarism, teachers were most likely to recommend changes to educational approaches, followed by assessment design, technology and sanctions.

Teachers across the different teaching environments (face-to-face, blended and distance learning) were all concerned about the increase in cheating that might come with online eAssessments. This was especially the case for staff who taught on distance learning courses, where students currently take an exam under strict conditions. Finally, all staff believed that student authentication and authorship tools enabled greater flexibility, both in access for those who find it difficult to travel and in forms of assessment. However, staff believed that cheating could still take place regardless of these systems, and that technology is best used in conjunction with other tools and methods to reduce cheating in online assessments.

Flexible and inclusive learning – notes from reading group

Amy read: Why are we still using LMSs, which discusses the reasons LMSs have not advanced dramatically since they came onto the market. The key points were:

  • There are five core features that all major LMSs share: they’re convenient; they offer a one-stop shop for all university materials, assessments and grades; they have many accessibility features built in; they’re well integrated with other institutional systems; and there is a great deal of training available for them.
  • Until a new system with all these features comes onto the market, the status quo with regard to LMS systems will prevail.
  • Instructors should look to use their current LMS more creatively.

Mike read: Flexible pedagogies: technology-enhanced learning HEA report

This paper provided a useful overview of flexible learning, including explanations of what it might mean and the dilemmas and challenges it poses for HE. The paper is interesting to consider alongside Bristol’s Flexible and Inclusive Learning paper. For the authors, flexible learning gives students choice in the pace, place and mode of their learning. This is achieved through pedagogical practice, with TEL positioned as an enabler or way of enhancing that practice. Pace is about schedules (faster or slower), or allowing students to work at their own pace. Place is about physical location and distance. Mode includes notions of distance and blended learning.

Pedagogies covered include personalised learning, flexible learning (which the paper suggests is similar to adaptive learning, in which materials adapt to individual progress), gamification, and fully online and blended approaches. The paper considers the implications of offering choice to students, for example over the kind of assessment. An idealised form would offer a very individualised choice of learning pathway, but with huge implications for stakeholders.

In the reading group, we had an interesting discussion as to whether students are always best equipped to understand and make such choices. We also wondered how we would resource the provision of numerous pathways. Other risks include the potential for information overload for students, and the challenge of ensuring systems and approaches work with quality assurance processes. Barriers include interpretations of KIS data that favour contact time.

We would have a long way to go in achieving the idealised model set out here. Would a first step be to change the overall diet of learning approaches across a programme, rather than offering choice at each stage? Could we then introduce some elements of flexibility in certain areas of programmes, perhaps a bit like the Medical School’s Self Selected Components, giving students choice in a more manageable space within the curriculum?

Suzanne read: Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. The main points were:

  • Self-regulated learning is something which happens naturally in HE, as students will assess their own work and give themselves feedback internally. This paper suggests this should be harnessed and built on in feedback strategies in HE.
  • The paper calls for a shift in focus to see students as having a proactive rather than reactive role in feedback practices, particularly in deciphering, negotiating and acting on feedback.
  • The paper suggests seven principles for good feedback practice, which encourage this self-regulation: 1. clarifying what good performance is; 2. facilitating self-assessment; 3. delivering high quality feedback information; 4. encouraging dialogue; 5. encouraging self-esteem and motivation; 6. giving opportunities to close the gap between where the student is now and where they need/want to be; 7. using feedback to improve teaching.
  • For our context, this gives some food for thought in terms of the limitations of a MOOC environment for establishing effective feedback practices (dialogue with every student is difficult if not impossible, for example), and emphasises the importance of scaffolding or training effective peer and self-assessment, to give students the confidence and ability to ‘close the gap’ for themselves.

Suzanne also read: Professional Development Through MOOCs in Higher Education Institutions: Challenges and Opportunities for PhD Students Working as Mentors

This paper reports on a small-scale (20 participants), qualitative study into the challenges and opportunities for PhD students acting as mentors in the FutureLearn MOOC environment. As a follow-on from the above reading, using mentors can be a way to help students with peer and self-assessment practices, which is why I decided to read it in parallel. However, the paper also focuses on the learning experiences of the PhD students themselves as they perform the mentor role, also giving these students a different (potentially more flexible and inclusive) platform on which to develop skills.

Overall, the paper is positive about the experiences of PhD MOOC mentors, claiming that they can develop skills in various areas, including:

  • confidence in sharing their knowledge and interacting with people outside their own field (especially for early career researchers, who may not yet have established themselves as ‘expert’ in their field);
  • teaching skills, particularly related to online communication, the need for empathy and patience, and tailoring the message to a diverse audience of learners. It’s noteworthy here that many of these mentors had little or no teaching experience, so this is also about giving them teaching experience generally, not teaching in MOOCs specifically;
  • subject knowledge, as discussion with the diverse learning community (of expert and non-expert learners) helped them consolidate their understanding, and in some cases pushed them to find answers to questions they had not previously considered.

Roger read: Authentic and Differentiated Assessments

This is a guide aimed at school teachers. Differentiated assessment involves students being active in setting goals, including choosing the topic and how and when they want to be evaluated. It also involves teachers continuously assessing student readiness in order to provide support and to judge when students are ready to move on in the curriculum.

The first part of the article describes authentic assessment, which it defines as asking students to apply knowledge and skills to real-world settings, which can be a powerful motivator for them. A four-stage process for designing authentic assessment is outlined.

The second part of the article focuses on differentiated assessment. We all have different strengths and weaknesses in how we best demonstrate our learning, and multiple and varied assessments can help accommodate these. The article stresses that choice is key, in learning activities as well as assessment. Project- and problem-based learning are particularly useful. Learning activities should always consider multiple intelligences and the range of students’ preferred ways of learning, and there should be opportunities for individual and group tasks, as some students will perform better in one or the other.

Hannah read: Research into digital inclusion and learning helps empower people to make the best choices, a blog by the Association for Learning and Teaching about bridging the gap between digital inclusion and learning technology. The main points were:

  • Britain is failing to exploit opportunities to give everyone fair and equal access to learning technology, because not enough research is being done to identify the best ways of tackling digital exclusion
  • Learning technology will become a much more inclusive way of learning once the digital divide is addressed
  • More must be done to ensure effective intervention; lack of human support and lack of access to digital technology are cited as two main barriers to using learning technology in a meaningful way
  • We need to broaden understanding of the opportunities for inclusion, look into how to overcome obstacles, develop a better understanding of the experiences of those who are excluded, and understand why technological opportunities are often not taken up

Suzi read: Disabled Students in higher education: Experiences and outcomes, which discusses the experience of disabled students, based on surveys, analysis of results, interviews, and case studies at four relatively varied UK universities. Key points for me were:

  • Disability covers a wide range of types and severities of issue, but adjustments tend to be formulaic, particularly for assessment (e.g. 25% extra time in exams)
  • Disability is a problematic label: not all students who could identify as disabled will choose to do so
  • Universal design is the approach they would advocate where possible

Suzi also read: Creating Better Tests for Everyone Through Universally Designed Assessments, a paper written for the context of large-scale tests for US school students, which nonetheless contains interesting background and useful (if not earth-shattering) advice. The key messages are:

  • Be clear about what you want to assess
  • Only assess that – be careful not to build barriers (cognitive, sensory, emotional, or physical) into the assessment that mean other things end up being measured
  • Apply basic good design and writing approaches – clear instructions, legible fonts, plain language