Horizon Report 2017 – notes from the reading group

This time we looked at the NMC Horizon Report 2017 and related documents.

Amy read ‘Redesigning learning spaces’ from the 2017 Horizon report and ‘Makerspaces’ from the ELI ‘Things you should know about’ series. The key points were:

  • Educational institutions are increasingly adopting flexible and inclusive learning design and this is extending to physical environments.
  • Flexible workspaces, with access to peers from other disciplines, experts and equipment, reflect real-world work and create social environments that foster cross-discipline problem-solving.
  • For projects created in flexible environments to be successful, the facilitator must allow the learners to shape the experience – much of the value of a makerspace lies in its informal nature, with learning being shaped by the participants rather than the facilitator.
  • There are endless opportunities for collaboration with makerspaces, but investment – both financial and strategic – is essential for successful projects across faculties.

Roger read ‘Blended learning designs’. This is listed as a short-term key trend, driving ed tech adoption in HE for the next one to two years. It claims that the potential of blended learning is now well understood, that blended approaches are widely used, and that the focus has moved to evaluating impact on learners. It suggests that the most effective uses of blended learning are in contexts where students can do something they would not otherwise be able to, for example via VR. In spite of highlighting this change in focus, it provides little detailed evidence of impact in the examples mentioned.

Suzi read the sections on Managing Knowledge Obsolescence (which seemed to be around how we in education can make the most of / cope with rapidly changing technology) and Rethinking the Role of Educators. Interesting points were:

  • Educators as guides / curators / facilitators of learning experiences
  • Educators need time, money & space to experiment with new technology (and gather evidence), as well as people with the skills and time to support them
  • HE leaders need to engage with the developing technology landscape and build infrastructure that supports technology transitions

Nothing very new, and I wasn’t sure about the rather business-led examples of how the role of the university might change, but still a good provocation for discussion.

Hannah read ‘Achievement Gap’ from the 2017 Horizon Report. It aimed to talk about the disparity in enrolment and performance between student groups, as defined by socioeconomic status, race, ethnicity and gender, but only really tackled some of these issues. The main points were:

  • Overwhelming tuition costs and a ‘one size fits all’ approach in Higher Education are a problem, with more flexible degree plans needed. The challenge here is catering to all learners’ needs, as well as aligning programmes with deeper learning outcomes and 21st-century problems.
  • A degree is becoming increasingly vital for liveable wages across the world, with even manufacturing jobs now often requiring post-secondary training and skills.
  • There has been a growth in non-traditional students, with online or blended offerings and personalised and adaptive learning strategies being implemented as retention solutions.
  • Some universities across the world have taken steps towards developing more inclusive offerings: Western Governors University offers competency-based education in which students develop concrete skills relating to specific career goals; Norway, Germany and Slovenia offer free post-secondary education; under the Obama administration, students were enabled to secure financial aid three months earlier, helping them make smarter enrolment decisions; and in Scandinavian countries there is a lot of flexibility in transferring between subjects, something that isn’t widely accepted in the UK but could help to limit the drop-out rate.
  • Some countries are offering different routes to enrolment in higher education. An example is Australia’s Fast Forward programme, which provides early information to prospective students about alternative pathways to tertiary education, even if they have not performed well in high school. Some of these alternative pathways include online courses to bridge gaps in knowledge, as well as the submission of e-portfolios to demonstrate skills gained through non-formal learning.
  • One thing I thought the article didn’t touch on was the issue of home learning spaces for students. Some students will share rooms and IT equipment, or may not have access to the same facilities as others.

Flexible and inclusive learning – notes from reading group

Amy read: ‘Why are we still using LMSs?’, which discusses the reasons LMSs have not advanced dramatically since they came onto the market. The key points were:

  • There are five core features that all major LMSs share: they’re convenient; they offer a one-stop shop for all university materials, assessments and grades; they have many accessibility features built in; they’re well integrated with other institutional systems; and there is a great deal of training available for them.
  • Until a new system with all these features comes onto the market, the status quo for LMSs will prevail.
  • Instructors should look to use their current LMS in a more creative way.

Mike read: Flexible pedagogies: technology-enhanced learning HEA report

This paper provided a useful overview of flexible learning, including explanations of what it might mean, and dilemmas and challenges for HE. The paper is interesting to consider alongside Bristol’s Flexible and Inclusive Learning paper. For the authors, flexible learning gives students choice in the pace, place and mode of their learning. This is achieved through the application of pedagogical practice, with TEL positioned as an enabler or a way of enhancing this practice. Pace is about schedules (faster or slower), or allowing students to work at their own pace. Place is about physical location and distance. Mode includes notions of distance and blended learning.

Pedagogies covered include personalised learning (suggested to be similar to adaptive learning, in which materials adapt to individual progress), gamification, and fully online and blended approaches. The paper considers the implications of offering choice to students, for example over what kind of assessment they take. An idealised form would offer a very individualised choice of learning pathway, but with huge implications for stakeholders.

In the reading group, we had an interesting discussion as to whether students are always best equipped to understand and make such choices. We also wondered how we would resource the provision of numerous pathways. Other risks include the potential for information overload for students and the need to ensure systems and approaches work with quality assurance processes. Barriers include interpretations of KIS data, which favour contact time.

We would have a long way to go in achieving the idealised model set out here. Would a first step be to change the overall diet of learning approaches across a programme, rather than offering choice at each stage? Could we then introduce some elements of flexibility in certain areas of programmes, perhaps a bit like the Medical School’s Self Selected Components, giving students choice in a more manageable space within the curriculum?

Suzanne read: Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. The main points were:

  • Self-regulated learning is something which happens naturally in HE, as students will assess their own work and give themselves feedback internally. This paper suggests this should be harnessed and built on in feedback strategies in HE.
  • There is a shift towards seeing students as having a proactive rather than reactive role in feedback practices, particularly in deciphering, negotiating and acting on feedback.
  • The paper suggests seven principles for good feedback practice which encourage this self-regulation: 1. clarifying what good performance is; 2. facilitating self-assessment; 3. delivering high quality feedback information; 4. encouraging dialogue; 5. encouraging self-esteem and motivation; 6. giving opportunities to close the gap between where the student is now and where they need/want to be; 7. using feedback to improve teaching.
  • For our context, this gives some food for thought in terms of the limitations of a MOOC environment for establishing effective feedback practices (dialogue with every student is difficult if not impossible, for example), and emphasises the importance of scaffolding or training effective peer and self-assessment, to give students the confidence and ability to ‘close the gap’ for themselves.

Suzanne also read: Professional Development Through MOOCs in Higher Education Institutions: Challenges and Opportunities for PhD Students Working as Mentors

This paper reports on a small-scale (20 participants) qualitative study into the challenges and opportunities for PhD students acting as mentors in the FutureLearn MOOC environment. As a follow-on from the above reading, using mentors can be a way to help students with peer and self-assessment practices, which is why I decided to read it in parallel. However, it also focuses on the learning experiences of the PhD students themselves as they perform the mentor role, giving these students a different (potentially more flexible and inclusive) platform on which to develop skills.

Overall, the paper is positive about the experiences of PhD MOOC mentors, claiming that they can develop skills in various areas, including:

  • confidence in sharing their knowledge and interacting with people outside their own field (especially for early career researchers, who may not yet have established themselves as ‘expert’ in their field);
  • teaching skills, particularly related to online communication, the need for empathy and patience, and tailoring the message to a diverse audience of learners. It’s noteworthy here that many of these mentors had little or no teaching experience, so this is also about giving them teaching experience generally, not teaching in MOOCs specifically;
  • subject knowledge, as having to discuss with the diverse learning community (of expert and not expert learners) helped them consolidate their understanding, and in some cases pushed them to find answers to questions they had not previously considered.

Roger read Authentic and Differentiated Assessments

This is a guide aimed at school teachers. Differentiated assessment involves students being active in setting goals, including the topic and how and when they want to be evaluated. It also involves teachers continuously assessing student readiness in order to provide support and evaluate when students are ready to move on in the curriculum.

The first part of the article describes authentic assessment, which it defines as asking students to apply knowledge and skills to real world settings, which can be a powerful motivator for them. A four stage process to design authentic assessment is outlined.

The second part of the article focuses on differentiated assessment. We all have different strengths and weaknesses in how we best demonstrate our learning, and multiple and varied assessments can help accommodate these. The article stresses that choice is key, including of learning activity as well as assessment. Project- and problem-based learning are particularly useful. Learning activities should always consider multiple intelligences and the range of students’ preferred ways of learning, and there should be opportunities for individual and group tasks, as some students will perform better in one or the other.

Hannah read: Research into digital inclusion and learning helps empower people to make the best choices, a blog post by the Association for Learning Technology about bridging the gap between digital inclusion and learning technology. The main points were:

  • Britain is failing to exploit opportunities to give everyone fair and equal access to learning technology through not doing enough research into identifying the best way to tackle the problem of digital exclusion
  • Learning technology will become a much more inclusive way of learning once the digital divide is addressed
  • More must be done to ensure effective intervention; lack of human support and lack of access to digital technology are cited as two main barriers to using learning technology in a meaningful way
  • We need to broaden understanding of the opportunities for inclusion, look into how to overcome obstacles, develop a better understanding of the experiences felt by the excluded and understand why technological opportunities are often not taken up

Suzi read: Disabled Students in higher education: Experiences and outcomes, which discusses the experience of disabled students, based on surveys, analysis of results, interviews, and case studies at four relatively varied UK universities. Key points for me were:

  • Disability covers a wide range of types and severity of issues but adjustments tend to be formulaic, particularly for assessment (25% extra time in exams)
  • Disability is a problematic label; not all students who could do so will choose to identify as disabled
  • Universal design is the approach they would advocate where possible

Suzi also read: Creating Better Tests for Everyone Through Universally Designed Assessments, a paper written for the context of large-scale tests for US school students, which nonetheless contains interesting background and useful (if not earth-shattering) advice. The key messages are:

  • Be clear about what you want to assess
  • Only assess that – be careful not to include barriers (cognitive, sensory, emotional, or physical) in the assessment that mean other things are being measured
  • Apply basic good design and writing approaches – clear instructions, legible fonts, plain language


Teaching at scale: engagement, assessment and feedback – notes from reading group

Chris read #53ideas 27 – Making feedback work involves more than giving feedback – Part 1: the assessment context. A great little paper full of epithets that perfectly describe the situation I find myself in. ‘You can write perfect feedback and it still be an almost complete waste of time’. ‘University policies to ensure all feedback is provided within three weeks seem feeble’. ‘On many courses no thought has been given to the purpose of the learning other than that there is some subject matter that the teacher knows about’. ‘Part-time teachers are seldom briefed properly about the course and its aims and rationale, and often ignore criteria’. The take-home message, for me, was that the OU is an exemplar in the area of giving good, useful, consistent feedback even when the marking load is spread over a number of people: ‘If a course is going to hire part-time markers then it had better adopt some of the Open University’s practices or suffer the consequences.’

Jane recommended: Sea monsters & whirlpools: Navigating between examination and reflection in medical education. Hodges, D. (2015). Medical Teacher 37: 3, 261-266. An interesting paper on how the diverse forms of reflective practice employed by medical educators are compatible with assessment. She also mentioned “They liked it if you said you cried”: how medical students perceive the teaching of professionalism.

Suzi read E-portfolios enhancing students’ self-directed learning: a systematic review of influencing factors

This 2016 paper is based on a systematic literature review of the use of online portfolios, with most of the studies taking place in an HE context. They looked at what was required for portfolio use to foster self-directed learning. Their conclusions were that students need the time and motivation to use them, and also that portfolios must:

  • Be seamlessly integrated into teaching
  • Use appropriate technology
  • Be supported by coaching from staff (this is “important if not essential”)

Useful classification: purpose (selection vs learning) and volition (voluntary vs mandated) from Smith and Tillema (2003). Useful “Practical implications” section towards the end.

Suzi read How & Why to Use Social Media to Create Meaningful Learning Assignments

A nice example of a hypothetical (but well thought-through) Instagram assignment for a history of art course, using hashtags and light gamification. Included good instructions and motivation for students.

Has some provocative claims about the use of social media:
“It’s inevitable if we want to make learning relevant, practical and effective.”
“social media, by the behaviours it generates, lends itself to involving students in learning”
Also an interesting further reading section.

Suzi read #53ideas 40 – Self assessment is central to intrinsic motivation

Feeling a sense of control over learning leads to higher levels of engagement and persistence. Ideally this would cover the what, how, where and when. But “taking responsibility for judgements about their own learning” – that is, good self- and peer assessment – may be enough. Goes through an example of self- and peer assessment at Oxford Polytechnic. Challenging for our context, as this was highly scaffolded, with the students practising structured self-assessment for a year before engaging in peer assessment. Draws on Carl Rogers’ principles for significant learning. Interesting with regard to the need to create a nurturing, emotionally supportive space for learning.

Suggested reading

Engagement and motivation

Social media and online communities

Assessment and feedback

More general, learning at scale

Resilience – notes from reading group

There seems to be a lot of interest in resilience in higher education at the moment. For myself, while I know we can all learn how to better cope with the stuff life throws at us, my initial reaction to the topic was along these lines:

My impression from these papers is that resilience is not well-defined and interventions, although often very plausible, are not evidence-based. Putting that concern aside, the techniques which seemed most suited to be incorporated in university education were:

  • building nurturing social networks,
  • fostering a sense of purpose, and
  • encouraging reflection.

I read Resilience: how to train a tougher mind (BBC Future) and Jackson, D., Firtko, A. and Edenborough, M. (2007) ‘Personal resilience as a strategy for surviving and thriving in the face of workplace adversity: a literature review’, Journal of Advanced Nursing, 60, 1: 1-9.

Resilience is broadly: the ability to keep going in the face of adversity and to get back to normal functioning afterwards. It can mean different things in different situations and might not always be wholly positive. For example, one study looked at at-risk youths for whom self-reported resilience meant disconnection and the ability to go it alone – not necessarily something to foster.

Both papers talked predominantly about quite extreme situations: children whose schools were close to the twin towers on 9/11, and nurses who work in high-pressure and traumatic environments. In both a lot of the conclusions seem to be based on self-report, for example how people say that they coped under extreme stress.

There are lots of traits, attitudes, and techniques mentioned as helpful for resilience and most of these are thought to be things which can be learned or developed. They include:

  • social support, especially nurturing relationships (including mentoring)
  • faith, spirituality, sense of purpose
  • positive outlook, optimism, humour, seeking the positive
  • emotional insight, for example through reflective journaling
  • life balance

There are several programmes seeking to develop these traits in school children through mindfulness, sometimes mixed with other techniques. These programmes include: Mindfulness in Schools Project (UK), Inner Resilience Programme (US), Penn Resiliency Training (US). The nursing paper does not mention mindfulness, focusing more on hardiness, optimism, repressive coping, and journaling (more stereotypical activities for middle-aged women, perhaps?).

Both papers touch on the idea that you can’t help others to be calm and resilient if you are not resilient yourself, and so on the importance of promoting resilience in those with caring responsibilities (nurses, teachers).

There are no magic bullets, though, and nobody is claiming large or long-lasting effects for any intervention (once it’s finished). What we have is a bag of techniques and ideas.

Threshold concepts – notes from the reading group

Suzi read Before and after students “get it”: threshold concepts by James Rhem (2013)

This relatively short article is part general discussion but mostly practical advice. The points I found most interesting were:

  • “Learning thresholds” might have been a better name, according to Ray Land.
  • There’s been success using threshold concepts as a way to get academics talking about their subject from an education point of view. They are something that people “get” and often enjoy engaging with, though they might struggle to agree on a definitive list of concepts for their subject.
  • To get through the liminal space takes “recursive, deep learning” (which I take to mean an immersive experience). This can be difficult to achieve.
  • We need to help students become more resilient and more optimistic, to help them make it through (there was little idea of how to do this though).
  • Trying to simplify the concepts for students may be counter-productive as it may encourage mimicry.

It made me reflect on conversations I’ve had about students’ mathematical ability when they arrive at university: they might make it through A-level but not really understand or be able to apply the concepts. This seems very similar to the contrast between mimicry and crossing the threshold.

Mike read Demystifying threshold concepts by Darrell Rowbottom (2007), a critique of the concept from a philosophy professor.

Threshold concepts, as an idea, appeal to me, but I have found them to be a slippery/troublesome concept in themselves. It was interesting to read this critique of Meyer and Land’s ideas, and of those who state they have found examples of them in particular subject areas. The paper took issue with:

  • the interpretation of ‘concept’ and the application of the theory, which Rowbottom states is closer to ability
  • whether these things are bounded in the way the term ‘threshold’ implies – thresholds will be relative (different for different people)
  • the woolly language used, eg that they are ‘significant’ in terms of the transformation that occurs
  • the suggestion that they are not definable and not measurable – you cannot empirically isolate them or test for them (the latter is part of a wider issue for education, for me).

Whilst much of this is valid, and, as Suzi mentioned, Land would use a different name if starting from scratch, I still think the idea has some use. I suggest most theories of education are difficult to isolate or prove, and thinking about the most troublesome and transformative concepts can still help design curricula and focus teaching and learning.

Gem read What’s the matter with Threshold Concepts? by Lori Townsend, Amy Hofer and Korey Brunetti, a guest post on the ACRLog blog (blogging by and for academic and research librarians, posted Jan 2015). This short piece was a response to some of the arguments against threshold concepts. The authors attempted a reasonable rebuttal of seven main arguments against threshold concepts (listed below for interest) and made some good counter-arguments, some with respect to information literacy instruction (discipline-specific).

Arguments against Threshold Concepts

  1. Threshold concepts aren’t based on current research about teaching
  2. Everything is a threshold concept
  3. Threshold concepts are unproven
  4. Threshold concepts don’t address skill development
  5. Threshold concepts ignore the diversity of human experience
  6. Threshold concepts are hegemonic
  7. Threshold concepts require us to agree on all the things

The authors (I felt) successfully argued that there is theoretical value in using these concepts, and helped me appreciate the usefulness of this theory as a pedagogic model (this was discussed further with the reading group). Jargon and woolly language are a real barrier to comprehension and to being able to critically appraise different educational theories (for me at least, coming from a science background). I have struggled with some theoretical approaches to pedagogy, but the threshold concept model, or at least my understanding of it, is one approach that I find useful and comprehensible from the point of view of both teacher and learner, having related my experiences of both to this model.

Their conclusion “it’s useful to think of threshold concepts as a model for looking at the content we teach in the context of how learning works” was very thought provoking.

For me, traversing the liminal space is like acquiring a new, albeit difficult, skill (ability, idea) and then consolidating this new acquisition. The application of this new skill occurs only once I have passed through the threshold and am on the other side (thus able to apply this new knowledge successfully to a task).

Roger read ‘Threshold concepts: implications for game design’. This paper describes a project to develop an educational game covering threshold concepts in information literacy. The authors give an account of the lessons learnt through the process of designing and testing the game. They conclude that their original idea of a single-player game did not reflect the team-based nature of research, the individual competitive game structure did not match the collaborative educational approach they were trying to model, and opportunities were needed for expert input in the game process. They suggest strategies for future improvements, including using more open game structures, incorporating debriefing and offering social as well as individual learning contexts.

Other suggested reading

Games and gamification – notes from the reading group

Suzi read Do points, levels and leaderboards harm intrinsic motivation?

This study attempted to shed light on when and why common gamification techniques (points, levels, leaderboards) harm intrinsic motivation, as measured by the intrinsic motivation inventory (IMI). They found that, for this image-tagging task, intrinsic motivation was not harmed and the number of tags increased with all three interventions. They conclude that these techniques could be useful for some tasks. There are limitations, which the authors acknowledge: in this situation the leaderboards etc don’t mean anything and don’t create stress, whereas in other situations they might well.

Suzi watched FOTE12: Nicola Whitton ‘What is the Future of Digital Games and Learning’. This was an interesting short talk, covering some good examples.

Whitton argues that a key idea from games that’s overlooked is play. She talks about the idea of creating a “magic circle” – a safe space to practice, have fun, and make mistakes. Her suggestions for considering gamification include: implement some mystery, do something unexpected, be playful, and create a safe space to make mistakes.

Chris read about the Reading Game from Macquarie. This is basically exactly the same as Peerwise, and appears to be defunct – probably because Peerwise has cornered the market. So, I then talked about my recent experiences of Peerwise. We’ve just used it with our first years, with mixed results because they didn’t engage as much as I would have liked, and many people only did the minimum required for credit. However, Peerwise contains a scoring system that rewards students for various kinds of participation, and some people have reported that using this to introduce an element of competition can motivate students to participate. So next year, rather than asking students to do a certain amount of work for credit, they will be asked to achieve a certain score. Watch this space….

Mike looked at Evoke, an online multiplayer game with grand ambitions to help people ‘change the world’ by collectively addressing problems. Elements seem relevant to HE and Bristol Futures in particular, whilst parts of the approach would (I suspect) alienate some potential participants. The idea of coming up with ‘Evokations’ (grand challenges people can respond to) has been used successfully elsewhere. The use of mentors to facilitate and prizes to incentivise seems sound. Evoke had a time-based (weekly) structure with people being drip-fed the stages, which reminded me of the Twelve Days of Twitter course. The thing that might be off-putting to some is the suggestion that people take on superhero-like personas. The point-scoring part looked complicated, but may have worked to motivate some.

Roger read Lameras (2015), Essential Features of Serious Games Design in Higher Education. This paper provides some useful scaffolding for teachers thinking about using games or gamification techniques. Particularly useful were:

  • the game design planner, which provides some prompts for teachers considering using games, eg around learning outcomes, feedback, and the teacher’s role, as well as which types and characteristics of games might be most appropriate in the context, eg types of player choice and challenge, nature of any collaboration or competition, and rules
  • the mapping of learning attributes to game attributes, eg ways in which games can support information transmission, collaboration, and discussion. Key game attributes include rules, goals, choices, tasks, challenges, competition, collaboration, and feedback, which are evidenced in game features such as missions, puzzles, scoring, progress indicators, leaderboards, branching tasks, gaining/losing lives and team activities

It is evident from reading the paper that there is a strong overlap between game design and good learning design in general, for example in the importance of feedback, challenge, choice and social learning.

MOOCs: what have we learnt? – notes from the reading group

Steve read HEA: Liberating learning: experiences of MOOCs

MOOCs are increasing in popularity. Will this continue? Registrations, drop outs, completions. Will they disrupt HE?

10-person sample size, people who completed Southampton MOOC. Want to understand motivations, opportunities, problems. Discussed findings with five academics who taught/led it. Aware of small scale, so no recommendations – but reflections and suggestions.

Themes from findings:
1 Flexible, fascinating and free – can fit into lives, customise pace, no financial commitment.
2 Feeling part of something – social & international aspect, even for passive ‘lurkers’
3 Ways of learning – prefer sequential over dipping in/out.
4 A bit of proof? – cost sensitivity to purchasing accreditation. Only 1 wanted this.

Four-quadrant model for MOOC engagement, suggests stuff to include. Two axes:
personal enjoyment vs work/education
studying alone vs social learning

Steve also read What are MOOCs Good For?

MOOC boom and bust? A high-profile implementation at San Jose failed, including a backlash from academics. General completion/dropout rates (SB: do we care about drop-outs? Most are window shoppers). Experiments and options/opportunities are still expanding. In summary, more data is needed and expectations should be moderated – there is still a place for innovation, and for integrating with traditional teaching – take the best bits of both?

Roger read: Practical Guidance from MOOC Research: Students Learn by Doing

This is one of a series of blog posts by Justin Reich, who is Executive Director of the Teaching Systems Lab at MIT, which “investigates the complex, technology-mediated classrooms of the future and the systems we need to develop to prepare teachers for those classrooms”.
In this post from July 2015, Justin’s main point is that when developing MOOCs it is better for student learning to focus on developing interactive activities rather than high-production videos. He particularly mentions the value of formative peer assessment, synchronous online discussion and simulations “that create learning experiences that students may not have in other contexts”.
If making videos, focus on the early parts of the course, as watching tends to drop off later in courses. There is some evidence that students prefer Khan Academy-style screencasts with pen animations to talking over slides.

Suzi read Why there are so many video lectures in online learning, and why there probably shouldn’t be

The article argues that video is expensive, particularly if you aim for higher production values (which many people do). Their methodology was a literature review, interviews with experts, and studying the use of video in over 20 MOOCs. There’s no evidence that video does (or doesn’t) work as a learning tool, and little or none that high production values add much. Learners wrongly self-report that they learn well from video (cf the study of physics videos – Saying the wrong thing: improving learning with multimedia by including misconceptions).

They argue that people should:

  • think twice before using video
  • use video where it really does add value (virtual field trips, creating rapport, manipulating time and space, telling stories, motivating learners, showcasing historical footage, conducting demonstrations, visual juxtaposition)
  • focus on media-literacy for the content experts and DIY approaches (eg filming on mobile phones)

Suzi also read 10 ways MOOCs have forced universities into a rethink

Broadly an argument that MOOCs are changing HE. MOOCs have given universities the impetus to experiment with pedagogy (notably, fewer lectures), assessment, accreditation, and course structure. They have made it more common to think in terms of a digital education strategy. They are also disrupting universities: HEIs are no longer the only providers of HE and cheaper degrees are becoming available. They’ve highlighted an unmet demand (for something like evening classes?), particularly in vocational and practical subjects. Clark talks about global networks of universities being like airline consortia – the passenger buys one ticket but makes their journey over several airlines.

Mike read ‘7 ways to make MOOCs Sticky’, a blog post by Donald Clark, and ‘Bringing the Social back to MOOCs’ by Todd Bryant in an EDUCAUSE Review article.

The former looked at design to keep a MOOC audience coming back. The latter looked at how MOOCs can encompass social learning (rather than just provide content). A point of contention between the two is the importance of social learning – overemphasised if you believe Clark, missing from many MOOCs if you believe Bryant.

Clark, drawing on data from Derby’s Dementia MOOC, listed seven ways to retain learners. For me, his seven points divide into three related areas: audience, structure and the value of social. He framed the discussion in the recognition that we cannot apply metrics from campus courses to courses that are free, open and massive. Clark is often a provocative commentator though, and his downplaying of the social is interesting.

An overarching theme of Clark’s post is audience sensitivity, though at times the audience he is most sensitive to seems to be himself. In my experience, this is a tough challenge for MOOCs. To Clark this is about not treating MOOC learners like undergraduates who are ‘physically and psychologically at University’. He rightly states they have different needs and interests. As someone who has helped design MOOCs, I know it is hard to make something that is all things to all people; often it is about providing a range of activities, levels and opportunities for learners to engage.

Related to audience sensitivity, Clark sees a value in keeping MOOCs shorter (definitely wise), modular (allowing people to dip into bits), with less reliance on a weekly structure and coherent whole. This is maybe less about keeping learners, and more about allowing them to get what they want from parts of a course. It would be great to come up with ways to evaluate MOOCs for learners who want to take bits of courses. Post-course surveys are self-selecting and largely made up of completers. It is also a tough design challenge to appeal to such learners whilst also trying to deliver depth and growth through a course. Clark is involved in some companies that develop adaptive learning systems, perhaps reflecting a similar philosophy. Adaptive approaches may provide some answers in the future.

Clark is also not a fan of the weekly structure, at least in terms of following through with a cohort. I think many learners like both the structure and the social, and these are the main differentiating factors that mean MOOCs are not just a set of online materials. Many learners find the event-driven, weekly structure motivating, and it is evident that many enjoy and learn from the social element of MOOCs more than the content. I was always keen to draw out the social elements, to give learners the chance to contribute to the course and learn from each other. Clark is somewhat scathing of social constructivism and the kind of learning emphasised in C-MOOCs.

This is in contrast to Bryant’s article. For Bryant, too many MOOCs are ‘x-MOOCs’ – largely about content and neglecting the social. Interestingly, he does cite features of EdX and Coursera that have the potential to change this by allowing learners to work in groups and buddy up during courses. We would have really valued such features when I was working on a MOOC about Enterprise. FutureLearn is not currently well equipped in this area. He goes on to explore other ways of helping people collaborate off platform through user groups and crowdsourcing/knowledge-building tools. This would work well for some, but doubtless exclude others. He considers simulations, virtual worlds and ‘alternate reality games’ – simulations played in the real world. These could all play a role, but for me, alongside a core MOOC structure. Bryant sees MOOCs as a potential ‘bridge between open content and collaborative learning’. I suspect Bryant and Clark would value very different kinds of MOOC. Should we try to appeal to both extremes (and all in between) or pitch the MOOC at a particular audience? Probably the latter, but it still isn’t easy.

Psychology and education – notes from the reading group

Chris read Is it time to rethink the way university lectures are delivered?, a short article about a Science paper from 2011. A class of Canadian physics-major freshmen was split into two, and one week of material was delivered differently to the two halves of the class. The first half stuck to the tried and tested lecture-using-powerpoint format, whilst the other half used a more ‘interactive’ approach termed ‘deliberate practice’: discussion groups, preclass reading assignments, in-class clicker-questions, online quizzes. Lo and behold, in a test on the material the following week the second cohort scored 74% whilst the other half got only 41%, thus illustrating that three days later they could remember the material better. The study has come in for a lot of criticism about methodology – only 211 of 271 students actually took the test (how would the others have altered the results?), and the people who designed it were also the ones who delivered the intervention, so may well have been ‘teaching to the test’. However, the general feeling seems to be that though the study is flawed, the conclusions are broadly correct. It also illustrates that having a Nobel Prize allows you to publish anything you like anywhere you want.

Chris also read A better way to practice, 2012. Written by Noa Kageyama, a Juilliard School of Music violinist turned performance psychologist. His argument is that it is better to practice smart than practice hard – the take-home aphorisms from this article are Practice makes permanent and Perfect practice makes perfect, the implication being that unless you practice correctly you can reinforce bad habits. That seems logical enough. He also argues that more thoughtful study can reduce the time needed for practice and increase the likelihood of successful performance, but I (and many of the commenters below the fold) disagree with him about this. Whilst this might be true at the highest levels, at lower levels when it’s all about training muscle memory there’s simply no substitute for doing it over and over again.

Steve watched The key to success? Grit and read True Grit, Angela Lee Duckworth & Lauren Eskreis-Winkler, 2013. I’d phrase ‘grit’ as perseverance – effort and stamina to achieve something difficult over an extended period of time. In the Tortoise and the Hare, the hare has talent, but the tortoise has grit and achieves more in the end. This summary indicates that talent and grit are often orthogonal, or negatively correlated. In the past persistence was assessed against physical challenges, but this may not relate to long-term mental grit. Modern assessment is by questioning against traits e.g. ‘I finish whatever I begin’. ((to complete)).

Suzi read Stereotype threat and women’s math performance and Mindsets and Math/Science Achievement

Both papers discuss how mindset might affect learning.

Stereotype threat is a stress-induced threat of self-fulfilling a negative and well-known stereotype. For example, an elderly man looking for his keys may worry about looking senile, become stressed, and so find it harder to find his keys. The paper puts forward evidence that women’s performance in difficult maths tests can be affected by the threat of fulfilling a negative stereotype: that maths is not a girls’ subject. Other studies have looked at stereotype threat in relation to racial stereotypes.

Growth mindset is the belief that intelligence can be improved. Not everyone has it; others have a “fixed mindset”. Many people will tell you that they are just not a maths person. The paper states that mindsets can predict maths/science performance over time, and can mitigate negative effects such as stereotype threat.

Both are interesting and seem plausible. Some of the suggested strategies for reducing stereotype threat and/or increasing growth mindset are:

  • feedback should emphasise the high standards of the test, and that the student has the potential to meet them
  • frame high-stakes tests as “assessing current skills and not long-term potential to learn”
  • praise effort and process, not intelligence
  • describe great mathematicians and scientists as people who loved and devoted themselves to the subject (not born geniuses)

Evidence in teaching – notes from the reading group

Suzi read Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Biesta, G 2007 and a chapter by Alberto Masala from the forthcoming book From Personality to Virtue: Essays in the Philosophy of Character, ed Alberto Masala and Jonathan Webber, OUP, 2015

Biesta gives what is broadly an argument against deprofessionalisation, in the context of government literacy and numeracy initiatives at primary school level. I found the main argument somewhat unclear. It was most convincing when talking about the difficulty of defining what education is for, which makes it difficult to test whether an intervention has worked. It talks at length about John Dewey and his description of education as a moral practice and learning as reflective experimental problem-solving.

“A democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.”

Masala’s paper is on virtue/character education but is of wider interest as it talks very clearly about educational theory. I found particularly useful in this context the distinction between skill as a competence (defined by performance, so easily testable) and skill as mastery (defined by a search for superior understanding and less easily tested), and the danger of emphasising competence.

Hilary read Version Two: Revising a MOOC on Undergraduate STEM Teaching, which briefly outlined some key approaches and intended developments in a Coursera MOOC aimed at STEM graduates and post docs interested in developing their teaching.

The author of the blog post is Derek Bruff (director of the Vanderbilt University Center for Teaching, and senior lecturer in the Vanderbilt Department of Mathematics with interests in agile learning, social media and SRS – amongst other things: see http://derekbruff.org/)

Two key points:

  1. MOOC centred learning communities – the MOOC adopted a facilitated blended approach, building on the physical groupings of graduate student participants by facilitating 42 learning communities across the US, UK and Australia to use face-to-face activities to augment the course materials and improve completion rates.
  2. Red Pill: Blue Pill – adopting the metaphor used by George Siemens in the Data, Learning and Analytics MOOC to give two ways to complete the course – either an instructor-led approach which was more didactic and focussed on the ability to understand and apply a broad spectrum of knowledge OR a student-directed approach which used peer graded assignments and gave the students the opportunity to pick the materials which most interested them, and so focus on gaining a deeper but less comprehensive understanding of the topic.

Final takeaway – networked learning is hard, as would be the logistics of offering staff / student development opportunities as online and face-to-face modules, with different pathways through the materials, but interesting …

Steve read Building evidence into education, 2013 report by Ben Goldacre for the UK government

Very accessible summary of the case for evidence-based pedagogy in the form of large-scale randomised controlled trials. Compares current ‘anecdote/authority’ edu research with past medical work – lots of interesting analogies. Focused on primary/secondary education but some ideas can transfer to higher – although would be more challenging.

Presents counterarguments to a number of common arguments against the RCT approach – it IS ethical if comparing methods where you don’t know which is best (and if you do know, why bother trialling?!). Difficulty in measuring is not a reason to discount, RCTs are a way to remove noise. Talks about importance of being aware of context and applicability. Uses some good medical examples to illustrate points.

Sketches out an initial framework – teachers don’t need to be research experts (doctors aren’t), should be research-focused team leading and guiding with stats/trials experts etc.

Got me thinking – definitely worth a read.

Roger read “Using technology for teaching and learning in higher education: a critical review of the role of evidence in informing practice” (2014) by Price and Kirkwood

This study explores the extent to which evidence informs teachers’ use of TEL in Higher Education. It involved a literature review, online questionnaire and focus groups. The authors found that there are differing views on what constitutes evidence which reflect differing views on learning and may be characteristic of particular disciplines. As an example they suggest a preference for large-scale quantitative studies in medical education.
In general, evidence is under-used by teachers in HE, with staff influenced more by their colleagues and more concerned about what works than why. Educational development teams have an important role as mediators of evidence.

This was a very readable and engaging piece, although the conclusions didn’t come as much of a surprise! The evidence framework they used (page 6) was interesting, with impact categorised as micro (e.g. individual teacher), meso (e.g. within a department) or macro (across multiple institutions).

Mike read Evidence-based education: is it really that straightforward?, 2013, Marc Smith, Guardian Education response to Ben Goldacre

This is a thoughtful and well argued response to Goldacre’s call for educational research to learn from medical research, particularly in the form of randomised controlled trials. Smith is not against RCTs, but suggests they are not a silver bullet.

Smith applauds the idea that we need teachers to drive the research agenda and that we do need more evidence. His argument that it will be challenging to change the culture of teaching to achieve this seems valid, but is not necessarily a reason not to try. The thrust of his argument is that RCTs, whilst effective in medicine, are harder to apply to education due to the complexity of teaching and learning. He believes (and I tend to agree) that cause and effect are harder to determine in the educational context. Smith argues that in medicine there is a specific problem (an illness or condition) and a predefined intended outcome (change to that condition). This can be problematic in the medical context, but is even harder to measure in education. I would add that the environment as a whole is harder to control and interventions more difficult to replicate. Different teachers could attempt to deliver the same set of interventions, but actually deliver radically different sessions to learners who will interact with the learning in a variety of ways. Can education be thought of as a change of state caused by an intervention in the same way we would prescribe a drug for a specific ailment?

All this is not to say that RCTs cannot play a role, but that you have to think about what you are trying to research before choosing your methodology (some of the interventions Goldacre addressed related to specific quantitative measurable things like teenage pregnancy rates, or criminal activity). Perhaps it is my social scientist bias, but I would still want to triangulate using a range of methods (quantitative and qualitative).

From a personal perspective, I sometimes think that ideas translated from science to a more social scientific context can lose some scientific validity in the process (though this is maybe more true at the level of theory than of scientific practice). For example, Dawkins translated selfish genes into the concept of cultural memes, suggesting cultural traits are transmitted in the same way as genetic code. Malcolm Gladwell’s tipping point is a metaphor from epidemiology which he applies to the spreading of ideas, bringing much metaphorical baggage in the process. Perhaps randomised controlled trials could provide better evidence for the validity of these theories too?

53 powerful ideas (well, 4 of them at least) – notes from the reading group

This month we picked articles from SEDA’s 53 powerful ideas all teachers should know about blog.

Mike read Students’ marks are often determined as much by the way assessment is configured as by how much students have learnt

Many of the points made in this article are hard to dispute. Institutions and subject areas vary so widely that the way marks are determined differs not only between, say, Fine Art and Medicine, but also between similar subjects at the same institution, and between the same subject at different institutions. This may reflect policy or process (eg dropping the lowest mark before calculating the final grade). In particular, Gibbs argues that coursework tends to encourage students to focus on certain areas of the curriculum, rather than testing knowledge of the whole curriculum. Gibbs also feels these things are not always clear to external examiners, and he does not feel that the QAA emphasis on learning outcomes addresses these shortcomings.

The article (perhaps not surprisingly) does not come up with a perfect answer to what is a complex problem. Would we expect Fine Artists to be assessed in the same way as doctors? How can we ensure qualifications from different institutions are comparable? Some ideas are explored, such as asking students to write more coursework essays to cover the curriculum, and then marking a sample. This is, however, rejected as something students would not tolerate. The main thing I take from this is that thinking carefully about what you really need to assess when designing the assessment is important (nothing new really). For example, is it important that students take away a breadth of knowledge of the curriculum, or develop a sophistication of argument? Design the assessment to reflect the need.

Suzi read Standards applied to teaching are lower than standards applied to research and You can measure and judge teaching

The first article looks at the difference between the way academics are trained for teaching and for research, and the way teaching and research are evaluated and accredited. Teaching, as you might imagine, comes off worse in all cases. There aren’t any solutions proposed, though the author muses on what would happen if they were treated in the same way:

“Imagine a situation in which the bottom 75% of academics, in terms of teaching quality, were labelled ‘inactive’ as teachers and so didn’t do it (and so were not paid for it).”

The second argues that students can evaluate courses well if you ask them the right things: to comment on behaviours which are known to affect learning. There didn’t seem to be enough evidence in the article to really evaluate his conclusions.

The argument put at the end seemed sensible: that evaluating for student engagement works well (while evaluating for satisfaction, as we do in the UK, doesn’t).

The SEEQ, a standardised (if long) list of questions for evaluating teaching by engagement, looks like a useful resource.

Roger read Students do not necessarily know what is good for them.

This describes three examples where students and/or the NUS have demanded or expressed a preference for certain things which may not actually be to their benefit in the longer term. He believes that these cases can be due to a lack of sophistication of learners (“unsophisticated learners want unsophisticated teaching”) or a lack of awareness of what the consequences of their demands might be (in policy or practice). The first example is class contact hours. Gibbs asserts that there is a strong link between total study hours (including independent study) and learning gain, but no such link between class contact hours and learning gain. Increasing contact hours often means increasing class sizes, which generally means a dip in student performance levels. Secondly, he looks at assessment criteria, saying that students are demanding “ever more detailed specification of criteria for marking”, which he states are ineffective in themselves for helping students get good marks, as people interpret criteria differently. A more effective mechanism would be discussion of a range of examples where students have approached a task in different ways, and how these meet the criteria. Thirdly, he says that students want marks for everything, but evidence suggests that they learn more when receiving formative feedback with no marks, as otherwise they can focus more on the mark than the feedback itself.

The solution, he suggests, is to make evidence-based judgements which take into account student views but are not entirely driven by them, to try to help students develop their sophistication as learners, and to explain why you are taking a certain approach. This article resonated with me in a number of ways, especially with regard to assessment criteria and feedback. There is an excellent example of practice in the Graduate School of Education where the lecturer provides a screencast in which she goes through an example of a top-level assignment, explaining what makes it so good. She has found that this has greatly reduced the number of student queries along the lines of “What do I need to do to get a first / meet the criteria?”. I also strongly agree with his point about explaining to students the rationale for taking a particular pedagogic approach. Sometimes we can assume that students know why a certain teaching method is educationally beneficial in a particular context, but in reality they don’t. And sometimes students resist particular approaches (peer review, anyone?) without necessarily having the insight into how they may be helpful for their learning.