Feedback, NSS & TEF – notes from reading group

Chrysanthi read “Thanks, but no-thanks for the feedback”. The paper examines how students’ implicit beliefs about the malleability of their intelligence and abilities influence how they respond to, integrate and deliberately act on the feedback they receive. It does so based on a set of questionnaires completed by 151 students (113 female and 38 male), mainly from the social sciences.

Mindset: There are two kinds of mindset regarding the malleability of one’s personal characteristics. People with a growth mindset believe that their abilities can grow through learning and experience; people with a fixed mindset believe they have a fixed amount of intelligence which cannot be significantly developed. “If intelligence is perceived as unchangeable, the meaning of failure is transformed from an action (I failed) to an identity (I am a failure)” (p851).

Attitudes towards feedback: Several factors that influence whether a person accepts a piece of feedback – e.g. how reflective it is of their knowledge and whether it is positive or negative – were measured, as well as two outcome measures.

Defence mechanisms: Defence mechanisms are useful in situations we perceive as threatening, as they help us control our anxiety and protect ourselves. But if we are very defensive, we are less able to perceive the information we receive accurately, which can be counterproductive; e.g. a student may focus on who has done worse, to restore their self-esteem, rather than on who has done better, which could be a learning opportunity.

The results of the questionnaires measuring the above showed that more students had a fixed mindset (86) than growth (65) and that their mindset indeed affected how they responded to and acted on feedback.

  • Growth mindset students are more likely to challenge themselves and see the feedback giver as someone who can push them out of their comfort zone in a good way that will help them learn. They are more motivated to change their behaviour in response to the received feedback, engage in developmental activities and use the defence mechanisms considered helpful.
  • Fixed mindset students are also motivated to learn, but they are more likely to go about it in an unhelpful way. They make choices that protect their self-esteem rather than help them learn, they are less adept at using the helpful defence mechanisms, and they distort the facts of the feedback or think of an experience as all good or all bad. The authors seemed puzzled by the indication that fixed-mindset students are motivated to engage with the feedback, yet do so by reshaping reality or dissociating themselves from the thoughts and feelings surrounding said feedback.

Their recommendations?

  • Academics should be careful in how they deliver highly emotive feedback, even if they don’t have the time to make it good and individualised.
  • Lectures & seminars early in students’ studies, teaching them about feedback’s goals and related theory and practice, as well as action-orientated interventions (e.g. coaching), so they learn how to recognise any self-sabotaging behaviours and manage them intelligently.
  • Strategies to help students become more willing to experience – and stay with – the emotional experience of failure. E.g. enhance the curriculum with opportunities for students to take risks, so they become comfortable with both “possibility” and “failure”.

I think trying to change students’ beliefs about the malleability of their intelligence would go a long way. If one believes their abilities are fixed, and therefore that not doing well makes them a failure, a negative response to feedback is hardly surprising. That said, the responsibility for managing feedback should not fall entirely on the student; it still needs to be constructive, helpful and given in an appropriate manner.

Suzi read: “An outsider’s view of subject level TEF”, “A beginner’s guide to the Teaching Excellence Framework” and “Policy Watch: Subject TEF year 2”, by the end of which she was not convinced anyone knows what the TEF is or how it will work.

Some useful quotes about TEF 1

Each institution is presented with six metrics, two in each of three categories: Teaching Quality, Learning Environment, and Student Outcomes and Learning Gain. For each of these measures, they are deemed to be performing well, or less well, against a benchmarked expectation for their student intake.

… and …

Right now, the metrics in TEF are in three categories. Student satisfaction looks at how positive students are with their course, as measured by teaching quality and assessment and feedback responses to the NSS. Continuation includes the proportion of students that continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA). And employment outcomes measures what students do (and then earn) after they graduate, as measured by responses to the Destinations of Leavers from Higher Education survey – which will soon morph into Graduate Outcomes.

Points of interest re TEF 2

  • Teaching intensity (contact hours) won’t be in the next TEF
  • All subjects will be assessed (at all institutions), with results available in 2021
  • Insufficient data for a subject at an institution could lead to “no award” (so you won’t fail for being too small to measure)
  • Resources will be assessed
  • More focus on longitudinal educational outcomes, not (binary) employment on graduation
  • It takes into account the incoming qualifications of the students (so it does something like the “value add” thing that school rankings do) but some people have expressed concern that it will disincentivise admitting candidates from non-traditional backgrounds.
  • There will be a statutory review of the TEF during 2019 (reporting at the end of the year) which could change anything (including the gold / silver / bronze rankings)

Suzi also read Don’t students deserve a TEF of their own which talks about giving students a way in to play with the data so that, for example, if you’re more interested in graduate career destinations than in assessment & feedback you can pick on that basis (not on the aggregated data). It’s an interesting idea and may well happen but as a prospective student I can’t say I understood myself — or the experience of being at university — well enough for that to be useful. There’s also a good response talking about the kind of things (the library is badly designed, lectures are at hours that don’t make sense because rooms are at a premium, no real module choice) you might find out too late about a university that would not be covered by statistics.

Roger read “How to do well in the National Student Survey (NSS)”, an article from Wonkhe written in March 2018. The author, Adrian Burgess, Professor of Psychology at Aston University, offers some reflections based on an analysis of NSS results from 2007 to 2016.

Whilst many universities have placed great emphasis on improving assessment and feedback, this has “brought relatively modest rewards in terms of student satisfaction” and remains the area with the lowest satisfaction.

Burgess’ analysis found that the strongest predictors of overall satisfaction were “organisation and management”, closely followed by “teaching quality”.

Suggested reading

From Wonkhe

From the last time we did assessment & feedback, which was July 2017 (I’ve left in who read what then)

Thoughts from a recent GW4 meeting at University of Bath


On Friday 23rd March, Mike, Naomi, Robyn, Han and I headed over to Bath for the latest GW4 meeting of minds. As decided at the previous meeting, the main topics for discussion were e-assessment and portfolios, but we also discussed MOOC development and learning analytics. Unfortunately, no one from Exeter could make it this time, so it was us from Bristol, along with colleagues from Bath and Cardiff. As before, we used Padlets to pool ideas and discussion points as we talked in smaller groups.

Portfolios 

Portfolios seem to be a common focus (dare I even say, headache). Bath and Cardiff have been using Mahara, and have been trying to overcome some of its limitations in-house. There was a strong feeling that none of us have found a portfolio which delivers what we need, and that if we ganged up on the providers they might be able to find a solution. The next step is to try to define what it is we do need from a portfolio, which tools we use (or have already investigated), and what we can do to find a common solution. Some immediate themes were e-portfolios as assessment tools (and how they integrate with current systems), GDPR implications, students being able to share parts of portfolios externally and internally, and how long students can have access to their portfolio.

MOOCs

As something we all have experience of, to a greater or lesser degree, there was inevitably quite a bit of discussion around MOOCs. We talked about the processes we follow to develop MOOCs, and the different support we provide to academics. For example, Gavin from Bath showed us how he uses Camtasia to produce videos in-house; in fact, he was able to knock up an example of such a video in 20 minutes during the session, with mini interviews and shots from the day. We also discussed the data we get from FutureLearn, and how we all find it difficult to do anything with that data. With so much information, and not much time, it tends to become something we’d all like to do more with but never quite find the time for.

The discussion also returned to an idea we’ve been kicking around GW4 for a while: that of a collaborative MOOC. We discussed the idea of perhaps making courses for pre-entry undergrads, or students embarking on PhDs, or perhaps staff development and CPD courses for new academics (of which Cardiff are already building a bank in FutureLearn). The idea of creating small modular courses or MOOCs, where each of us could provide a section based on our own expertise and interests, was also popular… let’s see how this develops!

E-assessment

Tools and systems around e-assessment were also a common theme. As well as thinking about Blackboard assignments and the use of Turnitin and QMP, there was also talk about peer assessment tools and practice, and adopting a BYOD approach. It seemed that our experiences of e-assessment were all very mixed, with huge disparity in adoption and approach within our institutions. We’re all working on e-assessment, it seems – for example our EMA project, which is quite similar to that of Bath. However, other trials are also going ahead, such as Cardiff’s trial of ‘Inspera’. I think we’re all keen to hear about their experiences of that project, as the Scandinavian approach to e-exams has often been heralded as the future!

What next?

For the future, we discussed more of a ‘show and tell’ approach, where we could get a closer look at some of the things we’re up to. There was also talk of upping our use of communication channels between in-person meetings, particularly using the Yammer group more frequently, and perhaps having smaller virtual meetings on specific topics.

It wasn’t decided who would host the next session, particularly as Exeter weren’t represented, although we did tentatively offer to host here at Bristol. But seeing as Bath really did set the bar high for lunch expectations – with special mention to the excellent pies and homemade cake – if we do host, I think we’d better start planning the food already…!


Flexible and inclusive learning – notes from reading group

Amy read: “Why are we still using LMSs”, which discusses the reasons LMSs have not advanced dramatically since they came onto the market. The key points were:

  • There are five core features that all major LMSs share: they’re convenient; they offer a one-stop shop for all university materials, assessments and grades; they have many accessibility features built in; they’re well integrated with other institutional systems; and there is a great deal of training available for them.
  • Until a new system with all these features comes onto the market, the status quo with regard to LMSs will prevail.
  • Instructors should look to use their current LMS in a more creative way.

Mike read: Flexible pedagogies: technology-enhanced learning HEA report

This paper provided a useful overview of flexible learning, including explanations of what it might mean, and dilemmas and challenges for HE. The paper is interesting to consider alongside Bristol’s Flexible and Inclusive Learning paper. For the authors, flexible learning gives students choice in the pace, place and mode of their learning. This is achieved through the application of pedagogical practice, with TEL positioned as an enabler or way of enhancing this practice. Pace is about schedules (faster or slower), or allowing students to work at their own pace. Place is about physical location and distance. Mode includes notions of distance and blended learning.

Pedagogies covered include personalised learning (suggested to be similar to adaptive learning, in which materials adapt to individual progress), gamification, and fully online and blended approaches. The paper considers the implications of offering choice to students, for example over the kind of assessment. An idealised form would offer a very individualised choice of learning pathway, but with huge implications for stakeholders.

In the reading group, we had an interesting discussion as to whether students are always best equipped to understand and make such choices. We also wondered how we would resource the provision of numerous pathways. Other risks include the potential for information overload for students, and ensuring systems and approaches work with quality assurance processes. Barriers include interpretations of KIS data which favour contact time.

We would have a long way to go in achieving the idealised model set out here. Would a first step be to change the overall diet of learning approaches across a programme, rather than offering choice at each stage? Could we then introduce some elements of flexibility in certain areas of programmes, perhaps a bit like the Medical School’s Self Selected Components, giving students choice in a more manageable space within the curriculum?

Suzanne read:  Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. The main points were:

  • Self-regulated learning is something which happens naturally in HE, as students will assess their own work and give themselves feedback internally. This paper suggests this should be harnessed and built on in HE feedback strategies.
  • It argues for a shift in focus to see students as having a proactive rather than reactive role in feedback practices, particularly in deciphering, negotiating and acting on feedback.
  • The paper suggests seven principles for good feedback practice which encourage this self-regulation: 1. clarifying what good performance is; 2. facilitating self-assessment; 3. delivering high quality feedback information; 4. encouraging dialogue; 5. encouraging self-esteem and motivation; 6. giving opportunities to close the gap between where the student is now and where they need/want to be; 7. using feedback to improve teaching.
  • For our context, this gives some food for thought in terms of the limitations of a MOOC environment for establishing effective feedback practices (dialogue with every student is difficult if not impossible, for example), and emphasises the importance of scaffolding or training effective peer and self-assessment, to give students the confidence and ability to ‘close the gap’ for themselves.

Suzanne also read: Professional Development Through MOOCs in Higher Education Institutions: Challenges and Opportunities for PhD Students Working as Mentors

This paper reports on a small-scale (20 participants) qualitative study into the challenges and opportunities for PhD students acting as mentors in the FutureLearn MOOC environment. As a follow-on from the above reading, using mentors can be a way to help students with peer and self-assessment practices, which is why I decided to read it in parallel. However, it also focuses on the learning experiences of the PhD students themselves as they perform the mentor role, giving these students a different (potentially more flexible and inclusive) platform on which to develop skills.

Overall, the paper is positive about the experiences of PhD MOOC mentors, claiming that they can develop skills in various areas, including:

  • confidence in sharing their knowledge and interacting with people outside their own field (especially for early career researchers, who may not yet have established themselves as ‘expert’ in their field);
  • teaching skills, particularly related to online communication, the need for empathy and patience, and tailoring the message to a diverse audience of learners. It’s noteworthy here that many of these mentors had little or no teaching experience, so this is also about giving them teaching experience generally, not teaching in MOOCs specifically;
  • subject knowledge, as having to discuss with the diverse learning community (of expert and not expert learners) helped them consolidate their understanding, and in some cases pushed them to find answers to questions they had not previously considered.

Roger read Authentic and Differentiated Assessments

This is a guide aimed at school teachers. Differentiated assessment involves students being active in setting goals, including the topic, and how and when they want to be evaluated. It also involves teachers continuously assessing student readiness in order to provide support and evaluate when students are ready to move on in the curriculum.

The first part of the article describes authentic assessment, which it defines as asking students to apply knowledge and skills to real world settings, which can be a powerful motivator for them. A four stage process to design authentic assessment is outlined.

The second part of the article focuses on differentiated assessment. We all have different strengths and weaknesses in how we best demonstrate our learning, and multiple and varied assessments can help accommodate these. The article stresses that choice is key, including of learning activity as well as assessment. Project- and problem-based learning are particularly useful. Learning activities should always consider multiple intelligences and the range of students’ preferred ways of learning, and there should be opportunities for individual and group tasks as some students will perform better in one or the other.

Hannah read: Research into digital inclusion and learning helps empower people to make the best choices, a blog post by the Association for Learning Technology about bridging the gap between digital inclusion and learning technology. The main points were:

  • Britain is failing to exploit opportunities to give everyone fair and equal access to learning technology through not doing enough research into identifying the best way to tackle the problem of digital exclusion
  • Learning technology will become much more inclusive a way of learning once the digital divide is addressed
  • More must be done to ensure effective intervention; lack of human support and lack of access to digital technology are cited as two main barriers to using learning technology in a meaningful way
  • We need to broaden understanding of the opportunities for inclusion, look into how to overcome obstacles, develop a better understanding of the experiences felt by the excluded and understand why technological opportunities are often not taken up

Suzi read: Disabled Students in higher education: Experiences and outcomes, which discusses the experience of disabled students, based on surveys, analysis of results, interviews and case studies at four relatively varied UK universities. Key points for me were:

  • Disability covers a wide range of types and severity of issues, but adjustments tend to be formulaic, particularly for assessment (e.g. 25% extra time in exams)
  • Disability is a problematic label; not all students who could do so will choose to identify as disabled
  • Universal design is the approach they would advocate where possible

Suzi also read: Creating Better Tests for Everyone Through Universally Designed Assessments, a paper written in the context of large-scale tests for US school students, which nonetheless contains interesting background and useful (if not earth-shattering) advice. The key messages are:

  • Be clear about what you want to assess
  • Only assess that – be careful not to include barriers (cognitive, sensory, emotional, or physical) in the assessment that mean other things are being measured
  • Apply basic good design and writing approaches – clear instructions, legible fonts, plain language


Teaching at scale: engagement, assessment and feedback – notes from reading group

Chris read #53ideas 27 – Making feedback work involves more than giving feedback – Part 1: the assessment context. A great little paper full of epithets that perfectly describe the situation I find myself in: ‘You can write perfect feedback and it still be an almost complete waste of time’; ‘University policies to ensure all feedback is provided within three weeks seem feeble’; ‘On many courses no thought has been given to the purpose of the learning other than that there is some subject matter that the teacher knows about’; ‘Part time teachers are seldom briefed properly about the course and its aims and rationale, and often ignore criteria’. The take-home message, for me, was that the OU is an exemplar in the area of giving good, useful, consistent feedback even when the marking load is spread over a number of people: ‘If a course is going to hire part-time markers then it had better adopt some of the Open University’s practices or suffer the consequences.’

Jane recommended: Sea monsters & whirlpools: Navigating between examination and reflection in medical education. Hodges, D. (2015). Medical Teacher, 37(3), 261-266. An interesting paper on how the diverse forms of reflective practice employed by medical educators are compatible with assessment. She also mentioned “They liked it if you said you cried”: how medical students perceive the teaching of professionalism.

Suzi read E-portfolios enhancing students’ self-directed learning: a systematic review of influencing factors

This 2016 paper is based on a systematic literature review of the use of online portfolios, with most of the studies taking place in an HE context. They looked at what was required for portfolio use to foster self-directed learning. Their conclusions were that students need the time and motivation to use them, and also that portfolios must:

  • Be seamlessly integrated into teaching
  • Use appropriate technology
  • Be supported by coaching from staff (this is “important if not essential”)

Useful classification: purpose (selection vs learning) and volition (voluntary vs mandated) from Smith and Tillema (2003). Useful “Practical implications” section towards the end.

Suzi read How & Why to Use Social Media to Create Meaningful Learning Assignments

A nice example of a hypothetical (but well thought-through) Instagram assignment for a history of art course, using hashtags and light gamification. Included good instructions and motivation for students.

Has some provocative claims about the use of social media:
“It’s inevitable if we want to make learning relevant, practical and effective.”
“social media, by the behaviours it generates, lends itself to involving students in learning”
Also an interesting further reading section.

Suzi read #53ideas 40 – Self assessment is central to intrinsic motivation

Feeling a sense of control over learning leads to higher levels of engagement and persistence. Ideally this would cover the what, how, where and when of learning, but “taking responsibility for judgements about their own learning” – i.e. good self- and peer assessment – may be enough. Goes through an example of self- and peer assessment at Oxford Polytechnic. Challenging for our context, as this was highly scaffolded, with students practising structured self-assessment for a year before engaging in peer assessment. Draws on Carl Rogers’ principles for significant learning. Interesting wrt the need to create a nurturing, emotionally supportive space for learning.

Suggested reading

Engagement and motivation

Social media and online communities

Assessment and feedback

More general, learning at scale