Chrysanthi read “Thanks, but no-thanks for the feedback”. The paper examines how students’ implicit beliefs about the malleability of their intelligence and abilities influence how they respond to, integrate and deliberately act on the feedback they receive. It does so based on a set of questionnaires completed by 151 students (113 female, 38 male), mainly from the social sciences.
Mindset: There are two kinds of mindset regarding the malleability of one’s personal characteristics: people with a growth mindset believe that their abilities can grow through learning and experience; people with a fixed mindset believe they have a fixed amount of intelligence which cannot be significantly developed. “If intelligence is perceived as unchangeable, the meaning of failure is transformed from an action (I failed) to an identity (I am a failure)” (p851).
Attitudes towards feedback: Several factors that influence whether a person accepts a piece of feedback – e.g. how reflective it is of their knowledge and whether it is positive or negative – were measured, as well as two outcome measures.
Defence mechanisms: Defence mechanisms are useful in situations we perceive as threatening, as they help us control our anxiety and protect ourselves. But if we are very defensive, we are less able to perceive accurately the information we receive, which can be counterproductive; e.g. a student may focus on who has done worse than them, to restore their self-esteem, rather than on who has done better, which could be a learning opportunity.
The results of the questionnaires measuring the above showed that more students had a fixed mindset (86) than a growth mindset (65), and that their mindset did indeed affect how they responded to and acted on feedback.
- Growth mindset students are more likely to challenge themselves and to see the feedback giver as someone who can push them out of their comfort zone in a way that helps them learn. They are more motivated to change their behaviour in response to the feedback they receive, to engage in developmental activities, and to use the defence mechanisms considered helpful.
- Fixed mindset students are also motivated to learn, but they are more likely to go about it in an unhelpful way. They make choices that protect their self-esteem rather than help them learn; they are not as good at using the helpful defence mechanisms; and they may distort the facts of the feedback or think of an experience as all good or all bad. The authors seemed puzzled by the finding that fixed-mindset students are motivated to engage with the feedback, yet do so by reshaping reality or dissociating themselves from the thoughts and feelings surrounding it.
Their recommendations?
- Academics should be careful in how they deliver highly emotive feedback, even if they don’t have the time to make it good and individualised.
- Lectures & seminars early in students’ studies, teaching them about feedback’s goal and related theory and practice, as well as action action-orientated interventions (eg coaching), so they learn how to recognize any self-sabotaging behaviours and manage them intelligently.
- Strategies to help students become more willing to experience – and stay with – the emotions of failure. E.g. enhance the curriculum with opportunities for students to take risks, so they become comfortable with both “possibility” and “failure”.
I think trying to change students’ beliefs about the malleability of their intelligence would go a long way. If someone believes their abilities are fixed, and therefore that not doing well makes them a failure, a negative response to feedback is hardly surprising. That said, the responsibility for managing feedback should not fall entirely on the student; feedback still needs to be constructive, helpful and given in an appropriate manner.
Suzi read An outsider’s view of subject level TEF, A beginner’s guide to the Teaching Excellence Framework and Policy Watch: Subject TEF year 2, by the end of which she was not convinced anyone knows what the TEF is or how it will work.
Some useful quotes about TEF 1
Each institution is presented with six metrics, two in each of three categories: Teaching Quality, Learning Environment and Student Outcomes and Learning Gain. For each of these measures, they are deemed to be performing well, or less well, against a benchmarked expectation for their student intake.
… and …
Right now, the metrics in TEF are in three categories. Student satisfaction looks at how positive students are with their course, as measured by teaching quality and assessment and feedback responses to the NSS. Continuation includes the proportion of students that continue their studies from year to year, as measured by data collected by the Higher Education Statistics Agency (HESA). And employment outcomes measures what students do (and then earn) after they graduate, as measured by responses to the Destination of Leavers from Higher Education survey – which will soon morph into Graduate Outcomes.
Points of interest re TEF 2
- Teaching intensity (contact hours) won’t be in the next TEF
- All subjects will be assessed (at all institutions), with results available in 2021
- Insufficient data for a subject at an institution could lead to “no award” (so you won’t fail for being too small to measure)
- Resources will be assessed
- More focus on longitudinal educational outcomes, not (binary) employment on graduation
- It takes into account the incoming qualifications of the students (so it does something like the “value add” thing that school rankings do) but some people have expressed concern that it will disincentivise admitting candidates from non-traditional backgrounds.
- There will be a statutory review of the TEF during 2019 (reporting at the end of the year) which could change anything (including the gold / silver / bronze rankings)
Suzi also read Don’t students deserve a TEF of their own, which talks about giving students a way to play with the underlying data so that, for example, if you’re more interested in graduate career destinations than in assessment & feedback you can pick on that basis (rather than on the aggregated data). It’s an interesting idea and may well happen, but as a prospective student I can’t say I understood myself — or the experience of being at university — well enough for that to have been useful. There’s also a good response talking about the kinds of things you might find out about a university too late, and that would not be covered by statistics (the library is badly designed, lectures are at hours that don’t make sense because rooms are at a premium, no real module choice).
Roger read “How to do well in the National Student Survey (NSS)”, an article from Wonkhe written in March 2018. The author, Adrian Burgess, Professor of Psychology at Aston University, offers some reflections based on an analysis of NSS results from 2007 to 2016.
Whilst many universities have placed great emphasis on improving assessment and feedback, this has “brought relatively modest rewards in terms of student satisfaction” and remains the area with the lowest satisfaction.
Burgess’ analysis found that the strongest predictors of overall satisfaction were “organisation and management” closely followed by “teaching quality”.
Suggested reading
From WonkHE
- An outsider’s view of subject level TEF
- Don’t students need a TEF of their own?
- The NSS: a survey – local boy David Kernohan
- What have we learned from the new (and improved) NSS?
- NSS 2018 – feeling the heat (I think you can explore the data a bit on this one)
- How to do well in the National Student Survey (NSS)
- A beginner’s guide to the TEF
- THES – Learning gain: political expedient or meaningful measure (downloaded as it’s a pain accessing it via the library, but we do seem to be able to do that)
From the last time we did assessment & feedback, which was July 2017 (I’ve left in who read what then)
- (Jul 2017 – Suzanne) Enhancing assessment feedback practice in higher education: The EAT framework, Carol Evans
- Making Sense of Assessment Feedback in Higher Education, Carol Evans (very long)
- Cognitive Style as Environmentally Sensitive Individual Differences in Cognition: A Modern Synthesis and Applications in Education, Business, and Management, Maria Kozhevnikov, Carol Evans, and Stephen M. Kosslyn (very long)
- A Learning Patterns Perspective on Student Learning in Higher Education: State of the Art and Moving Forward, Vermunt (mentioned by Evans when she came to talk) (very long)
- (Jul 2017 – Roger ) Learning the Language of Assessment: assessment literacy and assessment guidance for students, Edinburgh teaching matters blog
- (Jul 2017 – Suzi) Sadler, D. R. (2013) ‘Opening up feedback: Teaching learners to see’ (mentioned by Evans when she came to talk)
- Exploring value co-creation (CCV) in the Law Feedback Project at ESLTIS 2016 by Imogen Moore and Laura Bennett, School of Law
- Feedback: Encouraging Engagement and Dialogue – notes on a seminar, Hannah’s notes on Imogen Moore’s recent talk
- Thanks, but no-thanks for the feedback, Alex Forsythe & Sophie Johnson
- Managing dialogic use of exemplars (just a related-sounding journal article)
- What lies beneath: exploring the deeper purposes of feedback on student writing through considering disciplinary knowledge and knowers (just a related-sounding journal article)
- The implications of programme assessment patterns for student learning (just a related-sounding journal article)
- Or you could have a look at the SEDA blog for related posts (53 powerful ideas, etc)
- Feedback is a two-way street. So why does the NSS only look one way?