Suzi read Handbook from UMass – PROGRAM-Based Review and Assessment: Tools and Techniques for Program Improvement
A really clear and useful guide to the process of setting up programme-level assessment. The guide contains well-pitched explanations, along with activities, worksheets, and concrete examples for each stage of the process: understanding assessment, defining programme goals and objectives, designing the assessment, selecting assessment methods, and analysing and reporting. Even the “how to use this guide” section struck me as helpful, which is unheard of.
The proviso is that your understanding of what assessment is for would need to align with theirs, or you would need to be mindful of where it doesn’t. As others do, they talk about assessment to improve, to inform, and to prove, and they also nod to external requirements (QAA, TEF, etc. in our context). However, their focus is on assessment as part of a project of continual (action) research into, and improvement of, education in the context of the department’s broader mission. This is a more holistic approach that might bring in a wide range of measures, including student evaluations of units, data about attendance, and input from employers. I like this focus but it might not be what people are expecting.
During the group discussion we talked about combining some of the ideas from this with the approach Suzanne read about (see below). A central team would collaborate with academic staff within the department in what is essentially a research project: supporting conversations between staff, bringing in the student voice, and leaving them with the evidence base and tools to drive conversations about education in their context – empowering staff.
(Side note – on reflection I’m pretty sure this is the reason this particular reading appealed to me.)
Chrysanthi read Characterising programme‐level assessment environments that support learning by Graham Gibbs & Harriet Dunbar‐Goddet.
The authors propose a methodology for characterising programme-level assessment environments, so that these environments can later be studied alongside students’ learning.
In a nutshell, they selected 9 characteristics that are considered important either in quality assessment or for learning (e.g. variety and volume of assessment). Some of these were similar to the TESTA methodology Suzanne described. They selected 3 institutions that differed in structure (e.g. more or less fixed, with less or more choice of modules, traditional or varied assessment methods, etc.). They selected 3 subject areas, the same in all institutions. They then collected data about the assessment in these and coded each characteristic into 3 categories: low, medium, high. Finally, they classified each characteristic for each subject in each institution according to this coding. They found that the characteristics were generally consistent within each institution, suggesting a cultural approach to assessment rather than a subject-related one. They also identified patterns, e.g. that assessment aligned well with goals correlates with variety in methods. While the methodology is useful, their coding of characteristics as low-medium-high is arbitrary and their sample small, so the stated quantities for the 3 categories are not necessarily good guidelines.
Chrysanthi also watched a video by the author of the article Suzanne read: Tansy Jessop: Improving student learning from assessment and feedback – a programme-level view (video, 30 mins).
The video compared two contrasting case studies: one that seemed like a “model” assessment environment, but where the students did not put in much effort, were unclear about the goals, and were unhappy; and one that seemed problematic in terms of assessment, but where students knew the goals and were satisfied. The conclusion was that rather than having a teacher plan a course perfectly and transmit large amounts of feedback to each student, it might be worth encouraging students to construct meaning themselves in a “messy” context, extending constructivism to assessment as well.
Additionally, since students are more motivated by summative assessment, the speaker suggested staged assessment, in which students are required to complete some formative assessment that feeds into their summative assessment. Amy & Chris noted that this has already started happening in some courses.
Finally, the speaker noted that making formative assessment publicly available, such as in blog posts, motivates students; that it would be better if assessment encouraged working steadily throughout the term, rather than mainly at peak times around examinations; and that feedback is important for goal clarity and overall satisfaction.
Both the paper and the video emphasised the wide variety in assessment characteristics between different programmes. In the paper’s authors’ words, “one wonders what the variation might have been in the absence of a quality assurance system”.
The discussion turned to the marking system and the importance students attach to the numbers, even when these are often irrelevant to the bigger picture and their future careers.
Amy presented a summary she had created after attending a Chris Rust assessment workshop at the University. The workshop focussed on the benefits of programme-level assessment, looking at the current problems with assessment in universities and offering practical solutions and advice on creating programme-level assessments. It started by looking at curriculum sequencing – its benefits and drawbacks – illustrated with examples where it had been successful.
Chris then discussed ‘capstone and cornerstone’ modules as a model for programme-level assessment, and explained where this had been a success at other universities. He discussed the pseudo-currency of marks and looked at ways we can alter our marking systems to improve students’ attitudes to assessment and feedback. He ended the session by looking at ways to engage students with feedback effectively, and workshop attendees shared their own advice on this with colleagues. You can find the summary here.
Suzanne read Transforming assessment through the TESTA project by Tansy Jessop (who will be the next Education Excellence speaker) and Yaz El Hakim, which briefly describes the TESTA project, the methods it uses and the outcomes noted so far. There are also references within the text to more detailed publications on specific areas of the methods, or on specific outcomes, if you want more detail.
In brief, the TESTA project started in 2009 and has now expanded to 20 universities in the UK, Australia and the Netherlands, with 70 programmes having used TESTA to develop their assessment. The article begins with a pretty comprehensive overview of the reasons why programme assessment is so high on the agenda, including the recognition that assessment affects student study behaviours, and that assessment demonstrates what we value in learning, so we should make sure it really is focused on the right things. There is also a discussion of how the ‘modularisation’ of university study has left us with very separated assessments, which makes it difficult to really see the impact of assessment practices across a programme, particularly for students who take a slower approach to learning. Ultimately the TESTA project is about getting people to talk about their practices on a ‘big picture’ level, identify areas which could be improved, and then work from a base of evidence to make those improvements. There is a detailed system of auditing current courses, including sampling, interviews with teaching staff and programme directors, student questionnaires, and focus groups. The information from this is then used as a catalyst for discussion and change, which will manifest differently in each programme and context.
The final paragraph of the report sums it up quite well: “The value of TESTA seems to lie in getting whole programmes to discuss evidence and work together at addressing assessment and feedback issues as a team, with their disciplinary knowledge, experience of students, and understanding of resource implications. The voice of students, corroborated by statistics and programme evidence has a powerful and particular effect on programme teams, especially as discussion usually raises awareness of how students learn best.”
Suggested reading
- Integration of assessment within a programme (video, 3 mins) by Oxford Brookes
- Programme Focused Assessment: A Short Guide by Liz McDowell and members of the PASS project team (www.pass.brad.ac.uk)
- Programme Assessment Strategies (PASS) Evaluation Report of the project at Bradford
- Literature review from the project at Bradford
- Case studies (summaries from Birkbeck’s page on PLA)
- Medical School: This case study is important to include as it is for a medical degree undertaking a quite radical redesign of the programme. The programme was remodelled in discussion with the GMC, so it could serve as a template for degrees where the outcomes are monitored by an external body. http://www.pass.brad.ac.uk/wp4medschoolcasestudy.pdf
- Employability: This is about how an ‘employability’ module is integrated into a programme and could be a template for the integration of ‘generic’ skills within a specific programme. http://www.pass.brad.ac.uk/case-studies/10-teesside-engineering.pdf
- Maths/statistics & Biomedical Science: This maths example introduces some interesting ideas, such as differentiating study blocks from assessment blocks. This allows students to consolidate learning over time and is especially useful for techniques/skills that can be forgotten if not used, e.g. SPSS. http://www.pass.brad.ac.uk/case-studies/2-brunel-maths.pdf
- This example gives a little more detail about how the break between study blocks and assessment blocks could be implemented in maths/biomedical sciences & physiotherapy: http://www.pass.brad.ac.uk/case-studies/11-brunel.pdf
- Business Management: In this project marks were taken from the modules and migrated into an overarching project/task specific to the subject area, which then provided a final mark to sit alongside the other individual module marks. http://www.pass.brad.ac.uk/case-studies/6-coventry.pdf
- An example of a postgraduate programme where four functional business areas are used in each module to relate to the programme outcomes http://www.pass.brad.ac.uk/case-studies/14-northumbria-business.pdf
- Postgraduate Design: This example includes having programme assessment criteria which different modules use with a different emphasis. http://www.pass.brad.ac.uk/case-studies/9-northumbria-design.pdf
- Nottingham’s assessment framework (for wider context see their website)
- Transforming assessment through the TESTA project by Tansy Jessop (next Education Excellence speaker) and Yaz El Hakim
- Tansy Jessop: Improving student learning from assessment and feedback – a programme-level view (video, 30 mins)
- Characterising programme‐level assessment environments that support learning by Graham Gibbs & Harriet Dunbar‐Goddet
- Handbook from UMass – PROGRAM-Based Review and Assessment: Tools and Techniques for Program Improvement
- Programme level assessment strategies (video, 45mins) “In this presentation from the 2013 Westminster Learning Futures webinar series, Professor Peter Hartley of the University of Bradford talks about how strategic consideration of assessment at programme level can improve curriculum design.”
- The effects of programme assessment environments on student learning, February 2007, Graham Gibbs, Harriet Dunbar-Goddet, Oxford Learning Institute
- Professor Graham Gibbs at the Learning @ City Conference 2012 – “Improving student learning through assessment and feedback in the new higher education landscape” (video, 45 mins)