A student voice in the DEO: What have the Student Digital Champions found so far?

Since the 2019 Digital Experience Insight Survey, which revealed so much about students’ experiences of the digital learning environment at Bristol even before the pandemic, the DEO have been keen to channel student voices straight into our work. With 2020 turning out the way that it did, it was even more crucial to make that a reality, and so we worked with Bristol SU to recruit 12 Student Digital Champions (SDCs) from across all faculties in the University. They don’t have a lot of time each week with us, but they’ve definitely been making the most of that time so far!  

You can get to know them a little better by viewing this introduction video, also found on the DEO Student Digital Champion project page. 

What have they been up to? 

Since joining the DEO team, the SDCs have been actively getting out into their faculties, going to course rep meetings and faculty meetings, and talking to staff Digital Champions and other key staff. They’re reporting that even just being in meetings with their ‘digital champion’ hat on has been sparking interesting conversations with course reps and students about the student experience of digital learning in 2020 so far. 

They’ve already worked with us on two new DEO guides, instigated by student feedback in the Pulse surveys: the guides on Interactivity in large sessions and Breakout rooms. They’ve also worked to co-create and give feedback on the Assessment Checklist and Troubleshooting guide, and other areas of the new Digitally Ready online space on assessment, which launched on 5th January to support students during this assessment period.

What have they found?

The remit of the SDCs is to look for patterns emerging in the student experience across faculties and schools, and work together on the key themes of student engagement in learning and community building. They’ve been tasked with getting students to talk about solutions to their problems too: we want to hear ideas for what could be done differently, or what is working really well and how that could be expanded.

So far they’ve noticed…

Some of the common themes which seem to be emerging across the student experience include:

The cohort conundrum  

Students are feeling disconnected, lacking a sense of belonging and of shared experience. Many are reporting that this is partly due to other students not being active and engaged in online sessions, particularly in not turning on their videos. On the other hand, students also said they feel anxious themselves about being in online sessions, particularly breakout sessions, and about turning on their own mics and video. In the Engineering faculty, students actually felt there was an increase in engagement between students when using the general discussion forum to ask questions. Students seem to be asking more questions and sharing information with each other.

‘I don’t wanna be just a guy on the screen. I want us to be more like a cohort.’  [Year 1 Student, Centre for Innovation] 

Clarity and simplicity make good online course spaces 

Echoing student feedback in previous years, students are now more than ever keen on things being concise, clear, and easy to navigate. Videos of around 20-30 minutes seem to be the maximum that students feel they can engage with, with most preferring 10-15 minutes. Our SDCs are also reporting that a messy Blackboard course space can be pretty discouraging, especially to first year students!

Group work online is brilliant/impossible (delete as appropriate)  

We’re hearing loud and clear that the tools of online learning – shared documents, MS Teams, BB Collaborate and BB journals – are potentially great for making group work easier to manage and coordinate. Students are getting to grips with what these systems can offer, and love the flexibility (when the technology allows – internet connection problems are frequently mentioned too!). But they would like more guidance on these tools, and how to use them effectively. At the same time, the lack of group identity, and the fact that they may not have actually met their peers in person, is making things difficult.

‘Only few people are turning up. How can I trust someone to do their work when we’ve never met?’ [Year 1 Student, Arts]

And they’ve suggested…

There are already several projects in the pipeline, ideas for what might be possible, and pilots in progress. A snapshot of these include: 

A Breakout Room toolkit – A toolkit for staff, made by students, on how to plan and deliver the best breakout room experience. This is broken down by year, recognising that first years have different needs and situations than returning students. It includes ideas for group sizes and permanence (3-5 week rotations for groups seem popular), and establishing group identity, as well as how to encourage students to actively participate. More on this soon…

‘Online mingle’ pilot – In partnership with the Centre for Innovation, creating a template for how to run ‘speed dating’ type welcome sessions for students, where they can get to know each other and practice speaking online in a safe and fun environment.  

Motivation Panels – Here, more experienced students support first years involved in team/group work, sparking a sense of what their degree is about and helping them feel motivated by the subject. Led by course reps and students, this is a way to feel part of something bigger than your own unit or programme.

Shared spaces – using tools like MS Teams to explore ways for students to meet regularly and informally. This could include news and inspiration, notices of events, a ‘Help me out’ forum, and introductions to different people within their programme or school. 

Groupwork toolkits – Deliverables to help students choose the best tools to use, and how to use them, for group work, as well as how to maximise group work as a way to meet people, and gain the sense of social interaction often missing online.  

School assemblies – Regular school-wide live sessions, to give a sense of belonging and motivation across a school, rather than just within a unit or programme. These are already being run in the School of Psychological Science, and the SDCs are working to find out what it is about them that is so engaging, and how that might be replicated across the university.


 

National Institute for Digital Learning good reads from 2019 – notes from the reading group

Do MOOCs contribute to student equity and social inclusion? A systematic review by Sarah R Lambert (read by Suzi Wells)

This study is a large literature review looking at both empirical research and practitioner guidance on using MOOCs and other open (as in free) learning to promote student equity and social inclusion. The review starts from 2014 because that marks the second wave of MOOCs, when more stable and sustainable practice began to emerge.

The aims of the MOOCs were broken into two broad areas: those focused on student equity (making education fairer for tertiary learners and those preparing to enrol in HE) – this was the most common aim – and those that sought to improve social inclusion (education of non-award holders & lifelong learners). The literature was predominantly from the US, UK and Australia – perhaps unsurprising, given that only literature written in English was studied. There was a near 50/50 split between empirical research and policy/practice recommendations, and the studies focused slightly more on MOOCs than on other open learning. One notable finding was that the success rate (among published studies at least) was high – more often than not projects met or exceeded their aims.

Lambert includes lots of useful detail about factors that may have led to successful projects. MOOC developers should learn about their learners and make content relevant and relatable to them. Successful projects often had community partners involved in the design, delivery & support – in particular, initiatives with large cohorts (~100) that were very successful all had this. Designing for the learners meant things like: designing for mobile-only and offline access, teaching people in their own language (or at least providing mother-tongue facilitation) and, where relevant, mixing practitioners with academics in the content.

Facilitating and supporting the learning was also key to success. Local study groups or face-to-face workshops were used by some projects to provide localisation and contextualisation. Facilitators would ideally be drawn from existing community networks.

A related point was to design content from scratch – recycling existing HE materials was not as successful. This should be done in an interdisciplinary team and/or community partnership. Being driven entirely by an IT or digital education team was an indicator that a project would not meet its aims. Projects need technical expertise, but education and/or widening participation expertise too. Open as in free-to-use is fine; the licence didn’t seem to have an impact.

In short:

  • Work with the people you intend to benefit.
  • Create, don’t recycle.
  • Don’t expect the materials to stand by themselves.

If you’re interested in social justice through open learning, think OU not OERs.

What does the ‘Postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice by Jeremy Knox (read by Suzanne Collins)

This article explores what ‘post-digital’ education means, specifically thinking about human-technology relationships. It begins with an analysis of the term ‘post-digital’, embracing the perspective of ‘post’ as a critical appraisal of our understanding of the digital, rather than simply a different stage after the digital. This initial analysis is worth a read, but it isn’t my main focus for this reading group, so here I’ll jump straight to the main discussion, which is based on three critical perspectives on digital in education.

The first is “Digital as Capital”. Here, Knox talks about the commercialisation and capitalisation of digital platforms, such as social media. This platform model is increasingly based on the commodification of data, and so inevitably students/teachers/learners become seen as something which can be analysed (eg learning analytics), or something under surveillance. If surveillance is equated to recognition, this leads to further (perhaps troubling?) implications. Do you need to be seen to be counted as a learner? Is learning always visible? Does this move away from the idea of the web and digital being ‘social constructivist’?

Secondly, Knox looks at “Digital as Policy”. This (for me) slightly more familiar ground discusses the idea that ‘digital’ education is no longer as separate or distinct from ‘education’ as it once was. In a ‘post-digital’ understanding, it is in fact mainstream rather than alternative or progressive. The digital in education, however, often manifests as metrification in governance – eg schools are searchable in rankings based on algorithms. In this sense, ‘digital education’ moves away from ‘classroom gadgets’ (as Knox puts it) and becomes something intrinsic and embedded in policy, with strategic influence.

Lastly, he discusses “Digital as Material”, which focuses on surfacing the hidden material dimensions of a sector often seen as ‘virtual’ and therefore ‘intangible’. The tangible, material aspects of digital education include devices, servers, and other physical elements which require manual labour and material resources. On one hand there is efficiency, but on the other there is always labour. As education, particularly digital education, often comes from a sense of social egalitarianism and social justice, this is a troubling realisation, and one which leads to a rethink of the way digital education is positioned in a post-digital perspective.

In conclusion, Knox suggests that ‘post-digital’ should be understood as a ‘holding to account of the many assumptions associated with digital technology’, which I feel sums up his argument and is probably something we should try and do more of regardless of whether we’re a ‘digital’ or ‘post-digital’ education office.

What’s the problem with learning analytics? by Neil Selwyn (read by Michael Marcinkowski)

For this last session I read Neil Selwyn’s ‘What’s the Problem with Learning Analytics’ from the Journal of Learning Analytics. Appearing in a journal published by the Society for Learning Analytics Research, Selwyn’s socio-technical approach toward the analysis of learning analytics was a welcome, if somewhat predictable, take on a field that too often seems to find itself somewhat myopically digging for solutions to its own narrow set of questions.

Setting learning analytics within a larger social, cultural, and economic field of analysis, Selwyn lays out an orderly account of a number of critical concerns, organized around the implications and values present in learning analytics.

Selwyn lists these consequences of learning analytics as areas to be questioned:

  1. A reduced understanding of education: instead of a holistic view of education it is reduced to a simple numerical metric.
  2. Ignoring the broader social contexts of education: there is a danger that, by limiting our understanding of education, we ignore important contextual factors affecting it.
  3. Reducing students’ and teachers’ capacity for informed decision-making: the results of learning analytics come to overtake other types of decision-making.
  4. A means of surveillance rather than support: in their use, learning analytics can have more punitive rather than pedagogical implications.
  5. A source of performativity: students and teachers each begin to focus on achieving results that can be measured by analytics rather than other measures of learning.
  6. Disadvantaging a large number of people: like any data driven system, decisions about winners and losers can be unintentionally baked into the system.
  7. Servicing institutional rather than individual interests: the analytics have more direct benefit for educational institutions and analytics providers than for students.

He goes on to list several questionable values embedded in learning analytics:

  1. A blind faith in data: There is a risk of a contemporary over-valuation of the importance of data.
  2. Concerns over the data economy: What are the implications when student data is monetized by companies?
  3. The limits of free choice and individual agency: Does a reliance on analytic data remove the ability of students and educators to have a say in their education?
  4. An implicit techno-idealism: Part of learning analytics is a belief in the benefits of the impacts of technology.

Toward this, Selwyn proposes a few broad areas for change designed to improve learning analytics’ standing within a wider field of concern:

  1. Rethink the design of learning analytics: allow for more transparency and customization for students.
  2. Rethink the economics of learning analytics: give students ownership of their data.
  3. Rethink the governance of learning analytics: establish regulatory oversight of student data.
  4. A better public understanding of learning analytics: educate the wider public about the ethical implications of applying learning analytics to student data.

Overall, Selwyn’s main point remains the most valuable: the idea of learning analytics should be examined within the full constellation of social and cultural structures within which it is embedded. Like any form of data analytics, learning analytics does not exist as a perfect guide to any action, and the insights that are generated by it need to be understood as only partial and implicated by the mechanisms designed to generate the data. In the end, Selwyn’s account is a helpful one — it is useful to have such critical voices welcomed into SOLAR — but the level at which he casts his recommendations remains too broad for anything other than a starting point. Setting clear policy goals and fostering a broad understanding of learning analytics are hopeful external changes that can be made to the context within which learning analytics is used, but in the end, what is necessary is for those working in the field of learning analytics who are constructing systems of data generation and analysis to alter the approaches that they take, both in the ‘ownership’ and interpretation of student data. This reinforces the need to change how we understand what ‘data’ is and how we think about using it. Following Selwyn, the most important change might be to re-evaluate the ontological constitution of data and our connection to it, coming to understand it not as something distinct from students’ education, but an integral part of it.

Valuing technology-enhanced academic conferences for continuing professional development: a systematic literature review (Professional Development in Education) by Maria Spilker (read by Naomi Beckett)

This literature review analyses the different approaches taken to technologically enhance academic conferences for continuing professional development. Although there have been advances and new practices emerging, a coherent approach was lacking: conferences were being evaluated in narrow ways that did not consider all perspectives.

‘Continued professional development for academics is critical in times of increased speed and innovation, and this intensifies the responsibilities of academics.’ 

This makes it all the more important to ensure that when academics come together at a conference, there is a systematic approach to what they should be getting out of the time spent there. The paper suggests this is something that needs to be looked at when first starting to plan a conference: what are the values?

The paper talks about developing different learning experiences at a conference to engage staff and build their professional development. There is often little time for reflection, and the paper suggests looking at more ways to include this; technology is one way this could be done. Engagement on Twitter, for example, gives users another channel to discuss and network, taking them away from traditional conference formats. Moving more conferences online also gives users the opportunity to reach wider networks.

The paper mentions its Value Creation Network, looking at what values we should be taking from conferences: immediate value, potential value, applied value, realised value, and re-framing value. Looking at these to begin with is a good start to thinking about how we can frame academic conferences, so delegates get the most out of the time spent there, and work on their own professional development too.

We asked teenagers what adults are missing about technology. This was the best response by Taylor Fang (read by Paddy Uglow)

Some thoughts I took away:

  • Traditionally a “screen” was a thing to hide or protect, and to aid privacy. Now it’s very much the opposite.
  • Has society changed so much that social media is the only place that young people can express themselves and build a picture of who they are and what their place is in the world?
  • Adults have a duty to help young people find other ways to show who they are to the world (and see the subsequent reflection back to themselves).
  • Digital = data = monetisation: everything young people do online goes into a money-making system which doesn’t have their benefit as its primary goal.
  • Young people are growing up in a world where their importance and value are quantified by stats, likes, shares etc. All these numbers demonstrate to them that they’re less important than other people, and encourage desperate measures to improve their metrics.
  • Does a meal/holiday/party/etc really exist unless it’s been published and Liked?
  • Does the same apply to Learning Analytics? Are some of the most useful learning experiences those which don’t have a definition or a grade?

Suggested reading

Notes from the reading group – free choice

Digital wellbeing toolkit from BBC R&D (notes from Suzi Wells)

This toolkit from BBC R&D is designed for addressing wellbeing in digital product development. Given the increasing focus on student wellbeing, I was interested in whether it could be useful for online & blended learning.

The key resources provided are a set of values cards along with some exploration of what these mean, particularly for young people (16-34 year olds). These were developed by the BBC through desk research, focus groups and surveys.

The toolkit – the flashcards especially – could certainly be used to sense-check whether very digital teaching methods are really including and supporting the kinds of things we might take for granted in face-to-face education. It could also be useful for working with students to discuss the kinds of things that are important to them in our context, and identifying the kinds of things that really aren’t.

Some of the values seem too obvious (“being safe and well”, “receiving recognition”) or baked in to education (“achieving goals”, “growing myself”), and I worry that could be off-putting to some audiences. The language also seemed a little strange – “human values”, as though “humans” were an alien species. It can feel like the target audience for the more descriptive parts is someone who has never met a “human”, much less a young one. Nonetheless, the flashcards in particular could be a useful way to kick off discussions.

[Image: three example flashcards showing the values ‘being inspired’, ‘expressing myself’ and ‘having autonomy’]

New studies show the cost of student laptop use in lecture classes (notes from Michael Marcinkowski)

The article I read for this session highlighted two new-ish articles which studied the impact that student laptop use had on learning during lectures. In the roughly ten years that laptops have been a common sight in lecture halls, a number of studies have looked at what impact they have on notetaking during class. These previous studies have frequently found a negative association with laptop use for notetaking in lectures, not only for the student using the laptop, but also for other students sitting nearby, distracted by the laptop screen.

The article took a look at two new studies that attempted to tackle some of the limitations of previous work, particularly addressing the correlative nature of previous findings: perhaps low performing students prefer to use laptops for notetaking so that they can do something else during lectures.

What bears mentioning is that there is something somewhat quaint about studying student laptop use. In most cases, it seems to be a foregone conclusion and there is no getting it back into the box. Students will use laptops and other digital technologies in class — there’s almost no other option at this point. Nevertheless, the studies proved interesting.

The first of the highlighted studies featured an experimental set up, randomly assigning students in different sections of an economics class to different conditions: notetaking with laptop, without laptop, or with tablet laying flat on the desk. The last condition was designed to test the effect of students’ being distracted by seeing other students’ screens; the supposition being that if the tablet was laid flat on a desk, it wouldn’t be visible to other students. The students’ performance was then measured based on a final exam taken by students across the three conditions.

After controlling for demographics, GPA, and ACT entrance exam scores, the researchers found that performance was lower for students using digital technologies for notetaking. However, while performance was lower on the multiple choice and short answer sections of the exam, performance on the essay portion was the same across all three conditions.

While the study did address some shortcomings of previous studies (particularly with its randomized experimental design), it also introduced several others. Importantly, it raised questions about how teachers might teach differently when faced with a class of laptop users, or what effect forcing a student who isn’t comfortable using a laptop might have on their performance. Also, given that multiple sections of an economics class were the subject of the study, what role does the discipline being lectured on play in the impact of laptop use?

The second study attempted to address those questions through a novel design which linked students’ propensity to use or not use laptops in optional-use classes to whether they were required to use them, or prohibited from using them, in another class on the same day. Researchers looked at student performance across an institution that had a mix of classes which required, forbade, or had no rules about laptop use.

By looking at student performance in classes in which laptop use was optional, and linking that performance to whether students’ laptop choices would be influenced by other classes held the same day, researchers could measure student performance when students had a chance not to use a laptop in class. That is, the design allowed researchers to estimate in general how many students might be using a laptop in a laptop-optional class, while still allowing individual students to make a choice based on preference.

What they found was that student performance worsened in classes that shared a day with laptop-mandated classes, and improved in classes that shared a day with laptop-prohibited classes. This is in line with previous studies but, interestingly, the negative effects were seen more strongly in weaker students and in quantitative classes.

In the end, even while these two new studies reinforce what had been previously demonstrated about student laptop use, is there anything that can be done to counter what seem to be the negative effects of laptop use for notetaking? More than anything, what seems to be needed are studies looking at how to boost student performance when using technology in the classroom.

StudyGotchi: Tamagotchi-Like Game-Mechanics to Motivate Students During a Programming Course & To Gamify or Not to Gamify: Towards Developing Design Guidelines for Mobile Language Learning Applications to Support User Experience; two poster/demo papers in the EC-TEL 2019 Proceedings (notes from Chrysanthi Tseloudi)

Both papers talk about the authors’ findings on gamified applications related to learning.

The first regards the app StudyGotchi, based on the Tamagotchi (a virtual pet the user takes care of), which aims to encourage first year Java programming students to complete tasks on their Moodle platform in order to keep a virtual teacher happy. Less than half the students downloaded the app, with half of those receiving the control version without the game functions (180) and half receiving the game version (194). According to the survey, some of those who didn’t download it reported that this was because they weren’t interested in it. Of those that replied to whether they used it, less than half said they did. According to data collected from students that used either version of the app, there was no difference in either the online behaviour or the exam grades of the students between the game and non-game groups. The authors attribute this to the lack of interaction, personalisation options and immediate feedback on the students’ actions on Moodle. I also wonder whether a virtual teacher to be made happy is the best choice of “pet”, when hopefully there is already a real teacher supporting students’ learning. Maybe a virtual brain with regions that light up when a quiz is completed, or some other non-realistic representation connected to the students’ own development, would be more helpful in increasing students’ intrinsic motivation, since ideally they would be learning for themselves, and not to make someone else happy.

The second paper compares two language learning apps, one of which is gamified. The non-gamified app (LearnIT ASAP) includes exercises where students fill in missing words, feedback to show whether the answer is correct/incorrect, and statistics to track progress. The gamified app (Starfighter) includes exercises where students steer through an asteroid field by selecting answers to given exercises, and a leaderboard to track progress and compete with peers. The evaluation involved interviewing 11 individuals aged 20-50. The authors found that younger and older participants had different views about the types of interactions and aesthetics of the two apps: younger participants would have preferred swiping to tapping, while older participants seemed to find the non-gamified app comfortable because it looked like a webpage, but were not so sure about the gamified app. The game mechanics of Starfighter were thought to be more engaging, while the pedagogical approach of LearnIT ASAP was thought to be better in terms of instructional value and effectiveness. While the authors mention that the main difference between the apps is gamification, considering the finding that the pedagogical approach of one of the apps is better, I wonder if that is actually the case. Which game elements actually improve engagement, and how, is still being researched, so I would really like to see comparisons of language learning apps where the presence or absence of game elements is indeed the only difference. Using different pedagogical approaches between the apps is less likely to shed light on the best game elements to use, but it does emphasize the difficulty of creating an application that is both educationally valuable and fun at the same time.

Digital Accessibility Events for 2019/20

The Digital Education Office are hosting a series of events focusing on Digital Accessibility. AbilityNet are running four sessions on individual accessibility needs. Speakers will share their lived experience of various conditions and impairments and discuss how these influence the way they access and consume digital content.

They will share their professional experience as Accessibility and Assistive Technology Professionals in supporting Disabled Learners in the context of accessing digital platforms and content.

The sessions will engage participants by developing their understanding of potential pitfalls when creating digital content and will include easy to consume guidance on creating accessible content for all audiences. Additional support videos and guidance will be provided after the events.

With new legislation requiring the University to ensure that all content published on websites, intranets or mobile apps is accessible, these talks offer a chance to learn how to improve the materials and content you create to support students’ learning.

 

You can find out more and book tickets for the individual sessions via the following links:

Digital Accessibility and Sight Impairment 30th October 2pm – 4pm

Digital Accessibility and Mental Health 13th November 2pm – 4pm

Digital Accessibility and Physical Impairment 4th December 2pm – 4pm

Digital Accessibility and Neurodiversity 15th December 2pm – 4pm

 

 

Curriculum theories – notes from reading group

Thanks to Sarah Davies for setting us some fascinating reading!

Connected curriculum chapter 1 (notes from Chris Adams)

The connected curriculum is a piece of work by Dilly Fung from UCL. It is an explicit attempt to outline how departments in research-intensive universities can develop excellent teaching by integrating their research into it; the ‘connected’ part of the title is the link between research and teaching. At its heart is the idea that the predominant mode of learning for undergraduates should be active enquiry, but that rather than students discovering for themselves things which are well established, they should be discovering things at the boundaries of what is known, just as researchers do.

It has six strands:

  • Students connect with researchers and with the institution’s research. Or, in other words, the research work of the department is explicitly built into the curriculum.
  • A throughline of research activity is built into each programme. Properly design the curriculum so that research strands run through it, and it builds stepwise on what has come before.
  • Students make connections across subjects and out to the world. Interdisciplinarity! Real world relevance.
  • Students connect academic learning with workplace learning. Not only should we be teaching them transferable skills for a world of rapid technological change, but we need to tell them that too.
  • Students learn to produce outputs – assessments directed at an audience. Don’t just test them with exams.
  • Students connect with each other, across phases and with alumni. This will create a sense of community and belonging.

This last point is then expanded upon. Fung posits that the curriculum is not just a list of what should be learned, but the whole experience as lived by the student. Viewing the curriculum as a narrow set of learning outcomes does not produce the kind of people that society needs, but is a consequence of the audit culture that pervades higher education nowadays. Not all audit is bad – the days when ‘academic freedom’ gave people tenure and the freedom to teach terribly and not do any research are disappearing, and peer review is an integral part of the university system – but in order to address complex global challenges we need a values-based curriculum, ‘defined as the development of new understandings and practices, through dialogue and human relationships, which make an impact for good in the world.’

I liked it sufficiently to buy the whole book. It addresses a lot of issues that I see in my own department – the separation of research from teaching, the over-reliance on exams, and the lack of community, for example.

Connected curriculum chapter 2 (notes from Suzi Wells)

As mentioned in chapter 1, the core proposition is that the curriculum should be ‘research-based’ – ie most student learning “should reflect the kinds of active, critical and analytic enquiry undertaken by researchers”.

Fung gives this useful definition of what that means in practice. Students should:

  • Generate new knowledge through data gathering and analysis
  • Disseminate their findings
  • Refine their understanding through feedback on the dissemination

All of it seems fairly uncontroversial in theory and tends to reflect current practice, or at least what we aspire to in current practice. There’s some discussion of the differences in what research means to different disciplines, and how that filters through into assessment of students, and potentially some useful studies on just how effective this all is.

Fung mentions the Boyer Commission (US 1998) and its proposed academic bill of rights, including (for research intensive institutions): “expectation of and opportunity for work with talented senior researchers to help and guide the student’s efforts”. Given increasing student numbers, this is possibly a less realistic expectation to meaningfully meet than it once was.

There’s some useful discussion about what is needed to make research-based-teaching work.

I was particularly interested in the idea that providing opportunity for this form of learning isn’t everything. Socio-economic factors mean that students may have differing beliefs about their own agency. Fung cites Baxter Magolda (2004) on the importance of students having ‘self-authorship’, which includes ‘belief in oneself as possessing the capacity to create new knowledge’ and ‘the ability to play a part within knowledge-building communities’. You can’t assume all students arrive with the same level of this, and it will affect their ability to participate.

This part of the chapter also talks about the importance of not just sending students off “into the unknown to fend for themselves” – imagine a forest of ivory towers – but to give them support & structure. Activities need to be framed within human interactions (including peer support).

Towards the end there is a nod to it being anglo-centric – African and Asian educational philosophy and practice may be different – but little detail is given.

How Emotion Matters in Four Key Relationships in Teaching and Learning in Higher Education (notes from Roger Gardner)

This is a 2016 article by Kathleen Quinlan, who is now Director of the Centre for the Study of Higher Education and Reader in Higher Education at University of Kent, but was working at Oxford when this was written.

She writes that while historically there has been less focus on Bloom’s affective domain than the cognitive, interest in the relation of emotions to learning has recently been growing, although it is still under-researched. The article comes out of a review of the existing literature and conversations with teachers at the National University of Singapore in August 2014.

The paper focusses on four relationships: students with the subject matter, teachers, their peers and what she calls “their developing selves”. For each section Quinlan includes a summary of implications for teaching practice, which provide some very useful suggestions, ranging from simple things such as encouraging students to introduce each other when starting activities to help foster peer relationships, to advocating further research and exploration into when it is appropriate and educationally beneficial for teachers to express emotions and when not.

Quinlan says “discussions about intangibles such as emotions and relationships are often sidelined”, but it now seems essential to prioritise this if we are to support student wellbeing, and this paper provides some helpful prompts and suggestions for reflection and developing our practice. If you are short of time, I recommend looking at the bullet-pointed “implications for practice” sections.

What is “significant learning”? (notes from Chrysanthi Tseloudi)

In this piece, Dr. Fink talks about the Taxonomy of Significant Learning; a taxonomy that refers to new kinds of learning that go beyond the cognitive learning that Bloom’s taxonomy addresses. The taxonomy of significant learning – where significant learning occurs when there is a lasting change in the learner that is important in their life – is not hierarchical, but relational and interactive. It includes six categories of learning:

Foundational knowledge: the ability to remember and understand specific information as well as ideas and perspectives, providing the basis for other kinds of learning.

Application: learning to engage in a new kind of action (intellectual, physical, social, etc) and develop skills that allow the learner to act on other kinds of learning, making them useful.

Integration: learning to see, understand, and make new connections between different things, people, ideas, realms of ideas or realms of life. This gives learners new (especially intellectual) power.

Human Dimension: learning about the human significance of things they are learning – understanding something about themselves or others, getting a new vision of who they want to become, understanding the social implications of things they have learned or how to better interact with others.

Caring: developing new feelings, interests, and values, and/or caring more about something than before; caring about something feeds the learner’s energy to learn about it and make it a part of their lives.

Learning how to learn: learning about the learning process; how to learn more efficiently, how to learn about a specific method or in a specific way, which enables the learner to keep on learning in the future with increasing effectiveness.

The author notes that each kind of learning is related to the others, and achieving one kind helps achieve the others. The more kinds of learning involved, the more significant the learning that occurs – with the most significant learning encompassing all six categories of the taxonomy.

Education Principles: Designing learning and assessment in the digital age (notes from Naomi Beckett)

This short paper is part of a guide written by Jisc. It covers what education principles are and why they are such a vital part of any strategy. As someone unspecialised in this area, I found it an interesting read on how principles can bring staff together to engage with and develop different education strategies. The guide talks about how principles can ‘provide a common language, and reference point for evaluating change’.

The paper talks about having a benchmark against which everyone can check their progress. I like this idea. So often projects become too big, and the ideas and values first agreed as a team are lost. Having a set of principles is a way to bring everything back together, and a useful way to enable a wide variety of staff to engage with each other. The guide mentions how having these principles means there is a ‘common agreement on what is fundamentally important.’

Having these principles developed at the beginning of a project puts the important ideas and values into motion and is a place to look back to when problems arise. Principles should be action oriented, and not state the obvious. Developing them in this way allows for a range of staff members to bring in different ideas and think about how they want to communicate their own message.

I also followed up by reading ‘Why use assessment and feedback principles?’ from Strathclyde’s Re-Engineering Assessment Practices (REAP) project.

Suggested reading

Near future teaching – notes from reading group

For our latest reading group, following Sian Bayne’s fascinating Near Future Teaching seminar for BILT, we wanted to look in more depth at the project materials and related reading.

Michael read ‘Using learning analytics to scale the provision of personalized feedback,’ a paper by Abelardo Pardo, Jelena Jovanovic, Shane Dawson, Dragan Gasevic and Negin Mirriahi. Responding to the need to provide individual feedback to large classes of students, this study presented and tested a novel system for using learning analytics data generated by student activity within a learning management system to deliver what the authors called ‘personalized’ feedback. As designed, the system allowed instructors to create small, one or two sentence pieces of feedback for each activity within a course. Based on these, each week students would receive a set of ‘personalized’ feedback that responded to their level of participation. In the study, the authors found an improvement in student satisfaction with the feedback received, but only a marginal improvement in performance compared to previous years. There were limits to the methodology — the study only made use of at most three years of student data for comparison — and the authors’ definition of ‘personalized feedback’ seemed in practice to be little more than a kind of customized boilerplate, but the study nevertheless had a few interesting points. First, it was admirable in the way it sought to use learning analytics techniques to improve feedback in large courses. Second, the authors took the well-considered step of focusing the feedback not on the content of the course but on student study habits. That is, the feedback might encourage students to make sure they did all the reading that week if they weren’t doing well, or to review the material again if they had already been through it all once. Third, the article offered an interesting recounting of the history of the concept of feedback, as it moved from focusing only on the gap between targets and actual performance to a more holistic and continuous relationship between mentor and student.
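To make the mechanism concrete, here is a minimal sketch in Python of how rule-based ‘personalized’ feedback of this kind could work. This is not the authors’ actual implementation: the activity names, participation bands and thresholds below are all invented for illustration.

    # Hypothetical sketch: instructors write short canned messages per activity,
    # and each student receives the message matching the participation band
    # computed from their activity counts in the LMS logs.
    # Activity names, bands and thresholds are invented examples.

    FEEDBACK_RULES = {
        "week3_reading": {
            "low": "You haven't opened much of this week's reading - try to catch up before the lecture.",
            "mid": "You've made a start on the reading; aim to finish it before your tutorial.",
            "high": "You've worked through the reading - consider revisiting the trickier sections.",
        },
        "week3_quiz": {
            "low": "You haven't attempted the practice quiz yet - it's a quick check of your understanding.",
            "high": "Well done on finishing the quiz; review any questions you got wrong.",
        },
    }

    def participation_band(events, low_cutoff=1, high_cutoff=5):
        """Map a raw activity count from the logs to a coarse band."""
        if events < low_cutoff:
            return "low"
        if events < high_cutoff:
            return "mid"
        return "high"

    def weekly_feedback(student_events):
        """Assemble one student's feedback messages for the week."""
        messages = []
        for activity, events in student_events.items():
            band = participation_band(events)
            # If the instructor wrote no message for this band, skip it.
            message = FEEDBACK_RULES.get(activity, {}).get(band)
            if message:
                messages.append(message)
        return messages

    # Example: a student who ignored the reading but attempted the quiz often.
    print(weekly_feedback({"week3_reading": 0, "week3_quiz": 7}))

Note that, as in the paper, the messages target study habits rather than course content.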

Suzi read Higher education, unbundling, and the end of the university as we know it by Tristan McCowan. This paper starts with a thorough guide to the language of unbundling and the kinds of things we talk about when we talk about unbundling, followed by an extensive discussion of what this means for higher education. My impression from the article was that “unbundling” may be slightly unhelpful terminology, partly because it covers a very wide range of things, and partly because – if the article is to be believed – it’s a fairly neutral term for activities which seem to include asset-stripping and declawing universities. As an exploration of the (possible) changing face of universities it’s well worth a read. You can decide for yourself whether students are better off buying an album than creating their own educational mixtape.

Roger read “Future practices”. For world 1, human-led and closed, I was concerned that lots was only available to “higher paying students” and there was no mention at all of collaborative learning. For world 2, human-led and open, I liked the idea of the new field of “compassion analytics”, which would be good to explore further, along with lots of challenge-based learning and open content. World 3, tech-led and closed, was appealing in its emphasis on wellbeing in relation to technology, and a move away from traditional assessment, with failure recognised more as an opportunity to learn, and reflection and the ability to analyse and synthesise prioritised. From world 4 I liked the emphasis on lifelong learning and individual flexibility for students, eg to choose their own blocks of learning.

Chrysanthi read Future Teaching trends: Science and Technology. The review analyzes 5 trends:

  • datafication – e.g. monitoring students’ attendance, location, engagement, real-time attention levels,
  • artificial intelligence – e.g. AI tutoring, giving feedback, summarizing discussions and scanning for misconceptions, identifying human emotions and generating its own responses rather than relying only on past experience and data,
  • neuroscience and cognitive enhancement – e.g. brain-computer interfaces, enhancement tools like tech that sends currents to the brain to help with reading and memory or drugs that improve creativity and motivation,
  • virtual and augmented realities – e.g. that help to acquire medical skills for high-risk scenarios without real risk, or explore life as someone else to develop empathy, and
  • new forms of value – enabling e.g. the recording and verification of all educational achievements and the accumulation of credit over one’s lifetime, or the creation of direct contracts between student and academic.

I liked it because it gave both pros and cons in a concise way. It allows you to understand why these trends would be useful and could be adopted widely, at the same time as you are getting a glimpse of the dystopian learning environment they could create if used before ethical and other implications have been considered.

Suggested reading

Feedback, NSS & TEF – notes from reading group

Chrysanthi read “Thanks, but no-thanks for the feedback”. The paper examines how students’ implicit beliefs about the malleability of their intelligence and abilities influence how they respond to, integrate and deliberately act on the feedback they receive. It does so based on a set of questionnaires completed by 151 students (113 female and 38 male), mainly from the social sciences.

Mindset: There are two kinds of mindset regarding the malleability of one’s personal characteristics. People with a growth mindset believe that their abilities can grow through learning and experience; people with a fixed mindset believe they have a fixed amount of intelligence which cannot be significantly developed. “If intelligence is perceived as unchangeable, the meaning of failure is transformed from an action (I failed) to an identity (I am a failure)” (p851).

Attitudes towards feedback: Several factors that influence whether a person accepts a piece of feedback – e.g. how reflective it is of their knowledge, and whether it is positive or negative – were measured, as well as two outcome measures.

Defence mechanisms: Defence mechanisms are useful in situations we perceive as threatening, as they help us control our anxiety and protect ourselves. But if we are very defensive, we are less able to perceive the information we receive accurately, which can be counterproductive; e.g. a student may focus on who has done worse, to restore their self-esteem, rather than on who has done better, which could be a learning opportunity.

The results of the questionnaires showed that more students had a fixed mindset (86) than a growth mindset (65), and that their mindset indeed affected how they responded to and acted on feedback.

  • Growth mindset students are more likely to challenge themselves and see the feedback giver as someone who can push them out of their comfort zone in a good way that will help them learn. They are more motivated to change their behaviour in response to the received feedback, engage in developmental activities and use the defence mechanisms considered helpful.
  • Fixed mindset students are also motivated to learn, but they are more likely to go about it in an unhelpful way. They make choices that protect their self-esteem rather than help them learn, they are not as good at using the helpful defence mechanisms, and they distort the facts of the feedback or think of an experience as all good or all bad. The authors seemed puzzled by the indication that fixed-mindset students are motivated to engage with the feedback, but do so by reshaping reality or dissociating themselves from the thoughts and feelings surrounding said feedback.

Their recommendations?

  • Academics should be careful in how they deliver highly emotive feedback, even if they don’t have the time to make it good and individualised.
  • Lectures & seminars early in students’ studies, teaching them about feedback’s goal and related theory and practice, as well as action-orientated interventions (eg coaching), so they learn how to recognize any self-sabotaging behaviours and manage them intelligently.
  • Strategies to help students become more willing to experience – and stay with – the emotional experience of failure. Eg, enhance the curriculum with opportunities for students to take risks, so they become comfortable with both “possibility” and “failure”.

I think trying to change students’ beliefs about the malleability of their intelligence would go a long way. If one believes their abilities are fixed and therefore if they don’t do well, they are a failure, a negative response to feedback is hardly surprising. That said, the responsibility of managing feedback should not fall entirely on the student; it still needs to be constructive, helpful and given in an appropriate manner.

Suzi read: An outsider’s view of subject level TEF, A beginner’s guide to the Teaching Excellence Framework, and Policy Watch: Subject TEF year 2 – by the end of which she was not convinced anyone knows what the TEF is or how it will work.

Some useful quotes about TEF 1

Each institution is presented with six metrics, two in each of three categories: Teaching Quality, Learning Environment, and Student Outcomes and Learning Gain. For each of these measures, they are deemed to be performing well, or less well, against a benchmarked expectation for their student intake.

… and …

Right now, the metrics in TEF are in three categories. Student satisfaction looks at how positive students are with their course, as measured by teaching quality and assessment and feedback responses to the NSS. Continuation includes the proportion of students that continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA). And employment outcomes measures what students do (and then earn) after they graduate, as measured by responses to the Destination of Leavers from Higher Education survey – which will soon morph into Graduate Outcomes.

Points of interest re TEF 2

  • Teaching intensity (contact hours) won’t be in the next TEF
  • All subjects will be assessed (at all institutions), with results available in 2021
  • Insufficient data for a subject at an institution could lead to “no award” (so you won’t fail for being too small to measure)
  • Resources will be assessed
  • More focus on longitudinal educational outcomes, not (binary) employment on graduation
  • It takes into account the incoming qualifications of the students (so it does something like the “value add” thing that school rankings do) but some people have expressed concern that it will disincentivise admitting candidates from non-traditional backgrounds.
  • There will be a statutory review of the TEF during 2019 (reporting at the end of the year) which could change anything (including the gold / silver / bronze rankings)

Suzi also read Don’t students deserve a TEF of their own which talks about giving students a way in to play with the data so that, for example, if you’re more interested in graduate career destinations than in assessment & feedback you can pick on that basis (not on the aggregated data). It’s an interesting idea and may well happen but as a prospective student I can’t say I understood myself — or the experience of being at university — well enough for that to be useful. There’s also a good response talking about the kind of things (the library is badly designed, lectures are at hours that don’t make sense because rooms are at a premium, no real module choice) you might find out too late about a university that would not be covered by statistics.

Roger read “How to do well in the National Student Survey (NSS)”, an article from Wonkhe written in March 2018. The author, Adrian Burgess, Professor of Psychology at Aston University, offers some reflections based on an analysis of NSS results from 2007 to 2016.

Whilst many universities have placed great emphasis on improving assessment and feedback, this has “brought relatively modest rewards in terms of student satisfaction” and remains the area with the lowest satisfaction.

Burgess’ analysis found that the strongest predictors of overall satisfaction were “organisation and management” closely followed by “teaching quality”.

Amy read Feedback is a two-way street. So why does the NSS only look one way?, an article by Naomi Winstone and Edd Pitt. This piece highlighted the issue that the NSS questions on feedback are framed as if feedback should be a passive experience – as if students should simply be given their feedback. In 2017, the question was changed from “I have received detailed comments” to “I have received useful comments”. Both the old and the new question frame feedback as something that is received – a ‘transmission-focussed mindset’ – whereas Winstone and Pitt argue that feedback should be a two-way relationship, with the student working with the feedback and their tutor to develop.
The authors do not believe that changing the NSS question will solve all of the problems with students’ perception of feedback (though it will certainly help!), but they do believe that by promoting feedback as something that individuals work with, take responsibility for, and seek out when they feel they need to develop in a certain area, the mindset will gradually change and feedback will become a more sustainable form of learning for students.

Suggested reading

From WonkHE

From the last time we did assessment & feedback, which was July 2017 (I’ve left in who read what then)