Notes from the reading group – free choice

Digital wellbeing toolkit from BBC R&D (notes from Suzi Wells)

This toolkit from BBC R&D is designed for addressing wellbeing in digital product development. Given the increasing focus on student wellbeing, I was interested in whether it could be useful for online & blended learning.

The key resources provided are a set of values cards along with some exploration of what these mean, particularly for young people (16-34 year olds). These were developed by the BBC through desk research, focus groups and surveys.

The toolkit – the flashcards especially – could certainly be used to sense-check whether heavily digital teaching methods really include and support the kinds of things we might take for granted in face-to-face education. It could also be useful for working with students to discuss the kinds of things that are important to them in our context, and to identify the kinds that really aren’t.

Some of the values seem too obvious (“being safe and well”, “receiving recognition”) or baked into education (“achieving goals”, “growing myself”), and I worry that could be off-putting to some audiences. The language also seemed a little strange – “human values”, as though “humans” were an alien species. It can feel like the target audience for the more descriptive parts is someone who has never met a “human”, much less a young one. Nonetheless, the flashcards in particular could be a useful way to kick off discussions.

Image: three example flashcards showing the values “being inspired”, “expressing myself” and “having autonomy”.

New studies show the cost of student laptop use in lecture classes (notes from Michael Marcinkowski)

The article I read for this session highlighted two new-ish studies of the impact that student laptop use has on learning during lectures. In the roughly ten years that laptops have been a common sight in lecture halls, a number of studies have looked at their impact on notetaking in class. These have frequently found a negative association between laptop use for notetaking and learning, not only for the student using the laptop, but also for other students sitting nearby, distracted by the laptop screen.

The two new studies attempted to tackle some of the limitations of previous work, particularly the correlative nature of earlier findings: perhaps low-performing students prefer to use laptops for notetaking so that they can do something else during lectures.

It bears mentioning that there is something quaint about studying student laptop use. In most cases it seems a foregone conclusion, and there is no putting it back in the box: students will use laptops and other digital technologies in class – there’s almost no other option at this point. Nevertheless, the studies proved interesting.

The first of the highlighted studies used an experimental set-up, randomly assigning students in different sections of an economics class to one of three conditions: notetaking with a laptop, without a laptop, or with a tablet lying flat on the desk. The last condition was designed to test the effect of students being distracted by seeing other students’ screens, the supposition being that a tablet laid flat on a desk wouldn’t be visible to other students. Performance was then measured using a final exam taken by students across the three conditions.

After controlling for demographics, GPA, and ACT entrance exam scores, the researchers found that performance was lower for students using digital technologies for notetaking. However, while performance was lower on the multiple-choice and short-answer sections of the exam, performance on the essay portion was the same across all three conditions.
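To picture the kind of analysis behind “controlling for”, here is a minimal sketch of estimating condition effects on exam scores with prior-attainment controls. Everything here is invented (data, column names, effect sizes); the paper’s actual specification may well differ.

```python
# Hypothetical sketch only: invented data and effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "condition": rng.choice(["laptop", "no_laptop", "flat_tablet"], size=n),
    "gpa": rng.normal(3.0, 0.4, size=n),
    "act": rng.normal(25, 3, size=n),
})
# Simulate slightly lower scores for the two device conditions.
effect = df["condition"].map({"no_laptop": 0.0, "laptop": -2.0, "flat_tablet": -1.5})
df["exam_score"] = 70 + 5 * df["gpa"] + 0.5 * df["act"] + effect + rng.normal(0, 5, size=n)

# Dummy-code the condition with 'no_laptop' as the reference category;
# the condition coefficients then estimate the gap after controls.
model = smf.ols(
    "exam_score ~ C(condition, Treatment(reference='no_laptop')) + gpa + act",
    data=df,
).fit()
print(model.params)
```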

While the study did address some shortcomings of previous studies (particularly with its randomized experimental design), it also introduced several of its own. Importantly, it raised questions about how teachers might teach differently when faced with a class of laptop users, and about what effect forcing a student who isn’t comfortable with a laptop to use one might have on their performance. Also, given that multiple sections of an economics class were the subject of the study, what role does the discipline being lectured on play in the impact of laptop use?

The second study attempted to address these through a novel design, which inferred students’ propensity to use laptops in laptop-optional classes from whether another class on the same day required or prohibited them. Researchers looked at institution-wide student performance at an institution with a mix of classes that required, forbade, or had no rules about laptop use.

By looking at student performance in classes where laptop use was optional, and linking that performance to whether students’ laptop choices were likely to be influenced by other classes held the same day, the researchers could measure performance when students had a genuine choice about using a laptop. That is, the design allowed them to estimate how many students overall might be using a laptop in a laptop-optional class, while still letting individual students choose based on preference.
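As a toy illustration of this linking idea (all names and numbers invented), the comparison boils down to grouping grades in laptop-optional classes by the laptop policy of another class the student attends the same day:

```python
# Toy data: grades in a laptop-optional class, tagged with the laptop
# policy of another class each student takes on the same day.
import pandas as pd

records = pd.DataFrame({
    "student": ["a", "b", "c", "d", "e", "f"],
    "optional_class_grade": [72, 65, 80, 58, 77, 61],
    "same_day_policy": ["mandated", "mandated", "prohibited",
                        "mandated", "prohibited", "none"],
})

# If the same-day policy shifts whether a student brings a laptop to the
# optional class, these group means hint at the effect of laptop use.
print(records.groupby("same_day_policy")["optional_class_grade"].mean())
```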

What they found was that student performance worsened in optional classes that shared a day with laptop-mandated classes, and improved in those that shared a day with laptop-prohibited classes. This is in line with previous studies, but interestingly, the negative effects were stronger for weaker students and in quantitative classes.

In the end, even as these two new studies reinforce what had previously been demonstrated about student laptop use, is there anything that can be done to counter what seem to be the negative effects of laptop use for notetaking? More than anything, what seems to be needed are studies looking at how to boost student performance when using technology in the classroom.

StudyGotchi: Tamagotchi-Like Game-Mechanics to Motivate Students During a Programming Course & To Gamify or Not to Gamify: Towards Developing Design Guidelines for Mobile Language Learning Applications to Support User Experience; two poster/demo papers in the EC-TEL 2019 Proceedings. (notes from Chrysanthi Tseloudi)

Both papers talk about the authors’ findings on gamified applications related to learning.

The first regards the app StudyGotchi, based on the Tamagotchi (a virtual pet the user takes care of), which aims to encourage first-year Java programming students to complete tasks on their Moodle platform in order to keep a virtual teacher happy. Fewer than half the students downloaded the app, with half of those receiving the control version without the game functions (180) and half the game version (194). According to the survey, some of those who didn’t download it reported that this was because they weren’t interested in it. Of those who replied to whether they used it, fewer than half said they did. According to data collected from students who used either version of the app, there was no difference in either online behaviour or exam grades between the groups using the game and non-game versions. The authors attribute this to the lack of interaction, personalisation options and immediate feedback on the students’ actions on Moodle.

I also wonder whether a virtual teacher to be made happy is the best choice of “pet”, when hopefully there is already a real teacher supporting students’ learning. Maybe a virtual brain with regions that light up when a quiz is completed – or any non-realistic representation connected to the students’ own development – would do more to increase students’ intrinsic motivation, since ideally they would be learning for themselves, and not to make someone else happy.
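As an aside, the core mechanic is simple enough to sketch in code. This is purely hypothetical – not the authors’ implementation – with invented decay and reward rates:

```python
# Hypothetical StudyGotchi-style mechanic: mood decays while the student
# is idle and recovers when a Moodle task is completed.
from dataclasses import dataclass

@dataclass
class VirtualTeacher:
    happiness: float = 100.0  # 0 (unhappy) to 100 (happy)

    def tick(self, days_idle: int) -> None:
        """Mood decays for each day without completed tasks."""
        self.happiness = max(0.0, self.happiness - 10.0 * days_idle)

    def complete_task(self) -> None:
        """Completing a task on Moodle cheers the teacher up."""
        self.happiness = min(100.0, self.happiness + 25.0)

teacher = VirtualTeacher()
teacher.tick(days_idle=3)    # a quiet long weekend: 100 -> 70
teacher.complete_task()      # student submits a quiz: 70 -> 95
print(teacher.happiness)
```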

The second paper compares two language learning apps, one of which is gamified. The non-gamified app (LearnIT ASAP) includes fill-in-the-missing-word exercises, feedback showing whether an answer is correct or incorrect, and statistics to track progress. The gamified app (Starfighter) has students steer through an asteroid field by selecting answers to exercises, with a leaderboard to track progress and compete with peers. The evaluation involved interviewing 11 individuals aged 20 to 50. The authors found that younger and older participants had different views about the two apps’ types of interaction and aesthetics: younger participants would have preferred swiping to tapping, while older participants seemed to find the non-gamified app comfortable because it looked like a webpage, but were not so sure about the gamified app. The game mechanics of Starfighter were thought to be more engaging, while the pedagogical approach of LearnIT ASAP was thought to be better in terms of instructional value and effectiveness.

While the authors state that the main difference between the apps is gamification, given the finding that one app’s pedagogical approach is better, I wonder if that is actually the case. Which game elements actually improve engagement, and how, is still being researched, so I would really like to see comparisons of language learning apps where the presence or absence of game elements is indeed the only difference. Using different pedagogical approaches between the apps is less likely to shed light on the best game elements to use, but it does emphasize the difficulty of creating an application that is both educationally valuable and fun at the same time.

Miscellany – notes from the reading group

No theme this month – just free choice. Here’s what we read (full notes below):

Naomi read Stakeholders’ perspectives on graphical tools for visualising student assessment and feedback data.

This paper from the University of Plymouth looks at the development and progression of learning analytics within Higher Education. Luciana Dalla Valle, Julian Stander, Karen Gretsey, John Eales, and Yinghui Wei all contributed. It covers how four graphical visualisation methods can be used by different stakeholders to interpret assessment and feedback data, the stakeholders being external examiners, learning developers, industrialists (employers), academics and students.

The paper discusses how there is often difficulty pulling information from assessments and feedback, as there can be a lot of data to cover. Having graphical visualisations means information can be shared and disseminated quickly, as there is one focal point to concentrate on. It’s mentioned that some can include ‘too much information that can be difficult for teachers to analyse when limited time is available’. But it is also discussed how important it is, then, to evaluate the visualisations from the point of view of the different stakeholders who may be using them.

The paper looks at how learning analytics can be seen as a way to optimise learning and allow stakeholders to fully understand and take on board the information they are provided with. For students it was seen as a way to get the most out of their learning whilst also flagging students facing difficulties. The paper also talks about how it brings many benefits to students, who are described as the ‘overlooked middle’. Students are able to easily compare their assessments, attainment, and feedback to see their progression. Students agreed that the visualisations could assist with study organisation and module choice, and it’s also suggested that taking these analytics into account can improve social and cultural skills. For external examiners, analytics was seen as a real step forward in their learning and development: a quick way to assimilate information and improve their ‘knowledge, skills and judgement in Higher Education Assessment’. Having to judge and compare academic standards over a diverse range of assessment types is difficult, and visual graphics bring a certain simplicity to this. For learning developers too, using these images and graphics is suggested to help in ‘disseminating good practice’.

The paper goes on to explain how it does improve each stakeholder’s evaluation of assessment. It goes into a lot of detail about the different visualisations suggested, commenting on the benefits and drawbacks of each, which is worth a more detailed look. It should also be noted that the paper suggests there could be confidentiality or data protection issues involved with sharing or releasing data such as this, as in most cases such data is only seen at faculty or school level. Student demoralisation is also mentioned near the end of the paper as a reason these graphics may not always work in the best ways. It finishes by suggesting it would be interesting to study changes in students’ confidence and self-esteem due to assessment data sharing. It’s an interesting idea that needs to be carefully thought out and analysed to ensure it produces a positive and constructive result for all involved.

Suzanne read: Social media as a student response system: new evidence on learning impact

This paper begins with the assertion that social media is potentially a “powerful tool in higher education” due to its ubiquitous nature in today’s society, while also recognising that to date the role of social media in education has been a difficult one to pin down. There have been studies showing that it can both enhance learning and teaching and be a severe distraction for students in the classroom.

The study sets out to answer these two questions:

  • What encourages students to actively utilise social media in their learning process?
  • What pedagogical advantages are offered by social media in enhancing students’ learning experiences?

To look at these questions, the researchers used Twitter in a lecture-based setting with 150 accounting undergraduates at an Australian university. In the lectures, Twitter could be used in two ways: as a ‘backchannel’ during the lecture, and as a quiz tool. As a quiz tool, students used a specific hashtag to tweet their answers to questions posed by the lecturer at regular intervals during the session, related to the content that had just been covered. The lectures were also recorded, and a proportion of the students only watched the recording as they were unable to attend in person. Twitter was chosen for two main reasons. First, the researchers assumed that many students would already be familiar and comfortable with it. Second, using Twitter wouldn’t require any additional tools, such as clickers, or software (assuming that students already had it on their devices).
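To make the quiz mechanic concrete: collecting answers just means searching for tweets carrying the session hashtag. A hedged sketch using the current Twitter API v2 via tweepy (which postdates the study; the bearer token and hashtag are placeholders, and pagination and rate limits are ignored):

```python
# Sketch only: collect quiz answers tweeted with the session hashtag.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder

# Recent tweets with the (hypothetical) session hashtag, excluding retweets.
response = client.search_recent_tweets(
    query="#acct101quiz -is:retweet",
    tweet_fields=["author_id", "created_at"],
    max_results=100,
)

for tweet in response.data or []:
    print(tweet.author_id, tweet.created_at, tweet.text)
```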

Relatively early on, several drawbacks to using Twitter were noted. There was an immediate (and perhaps not surprising?) tension between students’ and lecturers’ public and private personas on Twitter. Some students weren’t comfortable tweeting from their own personal accounts, and the researchers actually recommended that lecturers make new accounts to keep their ‘teaching’ life separate from their private lives. There was also a concern about the unpredictability of tapping into students’ social media, in that the lecturer had no control over what the students wrote in such a public setting. It also turned out (again, perhaps unsurprisingly?) that not all students liked or used Twitter, and some were quite against it. Finally, it was noted that once students were on Twitter, it was extremely easy for them to get distracted.

In short, the main findings were that the students on the whole liked and used Twitter for the quiz breaks during the lecture. Students self-reported being more focused, and that the quiz breaks made the lecture more active and helped with their learning as they could check their understanding as they went. This was true for students who actively used Twitter in the lecture, those who didn’t use Twitter but were still in the lecture in person, and those who watched the online recording only. During the study, very few students used Twitter as a backchannel tool, instead preferring to ask questions by raising a hand, or in breaks or after the lecture.

Overall, I feel that this supports the idea that active learning in lectures is enhanced when students are able to interact with the material presented and the lecturer. Breaking up content and allowing students to check their understanding is a well-known and pedagogically sound approach. However, this study doesn’t really demonstrate any benefit of using Twitter, or social media, specifically. The fact that students saw the same benefit whether they used Twitter to participate or just watched the recording (pausing it to answer the questions themselves before continuing to the answers) seems to back this up. In fact, in not using Twitter in any kind of ‘social’ way, and in trying to hive off a private space for lecturers and students to interact in such a public environment, the study seems to miss the point of social media altogether. For me, the initial research questions therefore remain unanswered!

Suzi read Getting things done in large organisations

I ended up with a lot to say about this so I’ve put it in a separate blog post: What can an ed techie learn from the US civil service? Key points for me were:

  • “Influence without authority as a job description”
  • Having more of a personal agenda, and preparing for what I would say if I got 15 minutes with the VC.
  • Various pieces of good advice for working effectively with other people.

Chrysanthi read Gamification in e-mental health: Development of a digital intervention addressing severe mental illness and metabolic syndrome (2017). This paper talks about the design of a gamified mobile app that aims to help people with severe chronic mental illness combined with metabolic syndrome. While the target group is quite niche, I love the fact that gamification is used in a context that considers the complexity of the wellbeing domain and the interaction between mental and physical wellbeing. The resulting application, MetaMood, is essentially the digital version of an existing 8-week paper-based program, with the addition of game elements. The gamification aims to increase participation, motivation and engagement with the intervention. It is designed to be used as part of a blended care approach, combined with face-to-face consultations. The game elements include a storyline, a helpful character, achievements, coins and a chat room for the social element. Gamification techniques (tutorial, quest, action) were mapped to traditional techniques (lesson, task, question) to create the app.
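That mapping is easy to picture as a simple lookup. Only the three pairs come from the paper; the helper function is my own illustration:

```python
# The paper's mapping of traditional intervention steps to game equivalents.
GAME_EQUIVALENT = {
    "lesson": "tutorial",
    "task": "quest",
    "question": "action",
}

def gamify(step: str) -> str:
    """Translate a step of the paper-based programme into its game form."""
    return GAME_EQUIVALENT.get(step, step)  # unmapped steps pass through

print(gamify("lesson"))    # -> tutorial
print(gamify("question"))  # -> action
```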

The specific needs of the target group required the contributions of an interdisciplinary team, as well as tailored game features; eg the chat room includes not only a profanity filter, but also automatic intervention when keywords like suicide are used (pointing the player to various resources available to help in these cases). Scenarios, situations and names were evaluated for their potential to trigger patients, and changes were made accordingly; eg the religious-sounding name of a village was changed, as it could have triggered delusions.

The four clinicians who reviewed the app said it can proceed to clinical trial with no requirement for further revision. Most would recommend it to at least some of their clients. The content was viewed by most as acceptable and well targeted, and the app as interesting, fun and easy to use. I wish there had been results of the clinical trial, but it looks like this is the next step.

Roger read “Analytics for learning design: A layered framework and tools”, an article from the British Journal of Educational Technology.

This paper explores the role analytics can play in supporting learning design. The authors propose a framework called the “Analytics layers for learning design (AL4LD)”, which has three layers: learner, design and community analytics.

Types of learner metrics include engagement, progression and student satisfaction while experiencing a learning design. Examples of data sources are VLEs or other digital learning environments, student information systems, sensor-based information collected from physical spaces, and “Institutional student information and evaluation (assessment and satisfaction) systems”. The article doesn’t go into detail about the latter, for example to explore and address the generic nature of many evaluations (eg the NSS), which are unlikely to provide meaningful data about the impact of specific learning designs.

Design metrics capture design decisions prior to the implementation of the design. Examples of data include learning outcomes, activities and tools used to support these. The article emphasises that “Data collection in this layer is greatly simplified when the design tools are software systems”. I would go further and suggest that it is pretty much impossible to collect this data without such a system, not least as it requires practitioners to be explicit about these decisions, which otherwise often remain hidden.

Community metrics are around “patterns of design activity within a community of teachers and related stakeholders”, which could be within or across institutions. Examples of data include types of learning design tools used and popular designs in certain contexts. These may be shared in virtual or physical spaces to raise awareness and encourage reflection.
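To summarise the three layers, here is a speculative sketch of each as a record type. The field names are my own reading of the article’s examples, not the authors’ schema:

```python
# Speculative data model for the three AL4LD layers.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearnerMetrics:           # layer 1: data about learners
    engagement: float           # eg VLE activity while experiencing a design
    progression: float          # eg completion of designed activities
    satisfaction: float         # eg evaluation survey scores

@dataclass
class DesignMetrics:            # layer 2: decisions captured before delivery
    learning_outcomes: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    tools: List[str] = field(default_factory=list)

@dataclass
class CommunityMetrics:         # layer 3: design patterns across a community
    design_tools_used: List[str] = field(default_factory=list)
    popular_designs: List[str] = field(default_factory=list)
```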

The layers inter-connect: eg learner analytics could contribute to community analytics by providing evidence for the effectiveness of a design. The article goes on to describe four examples. I was particularly interested in the third one, which describes the “experimental Educational Design Studio” from the University of Technology Sydney. It is a physical space where teachers can go to explore and make designs, ie it also addresses the community analytics layer in a shared physical space.

This was an interesting read, but in general I think the main challenge is the collection of data in the design and community layers. For example, Diana Laurillard has been working on systems to do this for many years – eg the Learning Design Support Environment and the Pedagogical Patterns Collector – but there seems to have been little traction.

Amy read: Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers’ perspectives. Student authentication and authorship systems are becoming increasingly widely used in universities across the world, with many believing that cheating is on the rise across a range of assessments. This paper looks at two universities (University A in Turkey and University B in Bulgaria) that have implemented the TeSLA system (an Adaptive Trust-based eAssessment System for Learning). The paper doesn’t review the effectiveness of the TeSLA system, but rather the views of teachers on whether the system will affect the amount of cheating taking place.

The research’s main aim is to explore the basic rationale for the use of student authentication and authorship systems, and within that, to look at four specific issues:

  1. How concerned are teachers about the issue of cheating and plagiarism in their courses?
  2. What cheating and plagiarism have teachers observed?
  3. If eAssessment were introduced in their courses, what impact do the teachers think it might have on cheating and plagiarism?
  4. How do teachers view the possible use of student authentication and authorship checking systems, and how well would such systems fit with their present and potential future assessment practices?

Data was collected across three different teaching environments: face-to-face teaching, distance learning and blended learning. Data was collected via questionnaires and interviews with staff and students.

The findings, for the most part, were not hugely surprising: the main type of cheating at both universities was plagiarism, followed by ghost-writing (or the use of ‘essay mills’). These were the most common methods of cheating both in exam rooms and online. The reasons staff believed students cheated and the reasons students gave also varied widely. Both teachers and students believed that:

  • Students wanted to get higher grades
  • The internet encourages cheating and plagiarism, and makes it easy to do so
  • There would not be any serious consequences if cheating and plagiarism were discovered

However, teachers also believed that students were lazy and wanted to take the easy way out, whereas students blamed pressure from their parents and the fact that they had jobs alongside their studies.

Overall, staff were concerned about cheating, and believed it was a widespread and serious problem. The most common issues were plagiarism and ghost-writing, followed by copying and communicating with others during assessments. When asked about ways of preventing cheating and plagiarism, teachers were most likely to recommend changes to educational approaches, followed by assessment design, technology and sanctions. Teachers across the different teaching environments (face-to-face, blended and distance learning) were all concerned about the increase in cheating that might come with online/eAssessments. This was especially the case for staff who taught on distance learning courses, where students currently take an exam under strict conditions. Finally, staff believed that the use of student authentication and authorship tools enabled greater flexibility, both in access for those who find it difficult to travel and in forms of assessment. However, they also believed that cheating could still take place regardless of these systems, and that technology is best used in conjunction with other tools and methods to reduce cheating in online assessments.

Playful learning – notes from the reading group

Suzi read Playful learning: tools, techniques, and tactics by Nicola Whitton

This is a useful scene-setting article, suggesting ways of framing discussions on playful learning and pointing the way to unexplored territory suitable for future research.

There are three ideas about how to talk about playful learning:

  • The magic circle – a socially constructed space in which play can happen
  • A mapping of aspects of games onto playful learning: surface structures of playful learning <> mechanics of games, deep structures <> activities of play, implicit structure <> philosophy of playfulness
  • Tools / techniques / tactics of playful learning: objects, artefacts & technologies / approaches / mechanics and attributes – these could serve as prompts for getting playfulness into teaching

Whitton suggests three characteristics of the magic circle which make it pedagogically useful: “the positive construction of failure; support for learners to immerse themselves in the spirit of play; and the development of intrinsic motivation to engage with learning activities.” In playful activities, failure is framed positively, participants suspend disbelief (which can encourage creativity), and participation is voluntary, so there is intrinsic motivation (the difficulty of this last point in a formal education setting is acknowledged).

There’s a lot of acknowledgement that playfulness may not be an easy fit in higher education. Obstacles include the inescapability of real-world power relationships, confusing gamification with true playfulness, the need for things to be mandatory and assessed, existing attitudes to failure, prejudice about play being for children, and lack of time, confidence and social capital.

I wasn’t certain about the point about play being a privilege. While certain types of play might attract a relatively narrow demographic (escape rooms, real-world games) and it’s important not to assume that everyone would want to engage in these, adults seem to play to learn in a range of contexts. I thought about the kinds of spaces where you see playful learning: cooking, karaoke, parkour, getting dressed up, new social media platforms (when FB started, everyone was poking each other and biting each other and throwing bananas; hashtags came from playing with the way Twitter worked), and other adult pursuits. There is playfulness in higher education too, although it’s often not explicitly described as such. Maybe there is a danger of rarefying play, almost making it by definition a domain for geeks alone, and not recognising play that has not been made explicit.

This got me thinking about why we play, and why we might want to play in HE, and about one of my favourite quotes:

“The things we want are transformative, and we don’t know or only think we know what is on the other side of that transformation. Love, wisdom, grace, inspiration – how do you go about finding these things that are in some ways about extending the boundaries of the self into unknown territory, about becoming someone else?”

— Rebecca Solnit, A field guide to getting lost

This sums up a lot of what university could and should be. Playfulness has to have a key role in that: a place to play with possible selves, both academic and social.

Chrysanthi read Gamifying education: what is known, what is believed and what remains uncertain: a critical review by Christo Dichev and Darina Dicheva.

This is a review aiming to establish what is known about gamification in educational contexts based on empirical evidence, rather than beliefs. The authors find that much more is believed or uncertain than known. Their main findings are that (a) gamification has been adopted at a pace much faster than researchers can understand how it works, (b) there is very little knowledge about how to apply it effectively in specific contexts, and (c) there is not enough evidence about its long-term benefit.

While the understanding of how to engage, motivate and aid learning through gamification is inadequate, researchers are still praising the practice, thus inflating expectations about its effectiveness. The frequent use of performance-centric game elements like points, levels, badges and leaderboards is noteworthy; in the absence of justification from the researchers implementing them, the authors hypothesise that this happens because these elements resemble traditional classroom practices and are easy to implement. But this leaves other major game elements out; the authors note – among others – role play, narrative, choice and low-risk failure. These small elements are then expected to affect broad concepts like motivation, with researchers often concluding that they do, without enough evidence to support the claim.

This implies a somewhat blind application of the easiest-to-implement elements of gamification, with the belief that it will be enough to motivate students to perform better. But how are points different to marks and levels different to grades and chapters?

Perhaps gamification can’t be a canned, one-size-fits-all-learning-contexts solution. Perhaps researchers and practitioners need to put in time and at least a bare minimum of imagination to create something that will be engaging enough for students, so that the evidence supporting it isn’t stamped “inconclusive” when under scrutiny.

David read Playful learning in Higher Education: developing a signature pedagogy by Rikke Toft Nørgård, Claus Toft-Nielsen & Nicola Whitton (2017)

This paper starts off with a bit of a rant about the commercialisation of higher education and the focus on metrics to measure performance, and how this creates an assessment-driven environment focused on goal-oriented behaviours characterised by avoidance of risk and fear of failure. The authors see recent gameful approaches in higher education as a response to this, but warn that while gamification may increase motivation, games often rely on extrinsic motivational drivers and the results may be short-lived. They also cite research which points to issues around perceived appropriateness and students manipulating points-based incentive systems (my colleagues and I have encountered examples of this in our teaching).

In contrast to gamification, they regard playful learning as something which encourages intrinsic and longer-term motivation by offering the chance to explore and experiment without fear of being judged for failure, and therefore being able to learn from it. They use the ideas of the ‘magic circle’ and ‘lusory attitude’ to describe the environment in which this can occur. The concept of the magic circle is used a lot in gaming and is a metaphor for the ‘space’ we enter into when we fully engage with a game, accepting its different norms and codes of practice (or actively constructing them with other ‘players’). This can refer to physical spaces (e.g. sports), virtual/imaginary ones (e.g. computer games), or a combination of both (e.g. a child’s tea party). For this to work, we need to assume a ‘lusory attitude’. This gives participants a shared mindset in which they are free to play, experiment and fail in a safe place.

Video: The Magic Circle – How Games Transport Us to New Worlds

The authors then turn to the question of how to implement such an approach. Using the results of two studies about what students report (a) makes their learning enjoyable and (b) disengages them, they develop a ‘signature pedagogy’ for playful learning in higher education. The notion of ‘signature pedagogy’ they adopt is split into three levels:

  • The foundation is formed by Implicit (playful) structures, which are the necessary assumptions and attitudes (values, habits, ethics)
    • Lusory attitude
    • Democratic values and openness
    • Acceptance of risk-taking and failure
    • Intrinsic motivation
  • Deep (play) structures represent the nature of the activities which the implicit structures facilitate
    • Active and physical engagement
    • Collaboration with diverse others
    • Imagining possibilities
    • Novelty and surprise
  • Surface (game) structures are the ‘mechanics’ of an activity, including the materials, tools and actions involved
    • Ease of entry and explicit progression
    • Appropriate and flexible levels of challenge
    • Engaging game mechanics
    • Physical or digital artefacts

The authors see the implicit (playful) structures as the necessary starting point for their ‘signature pedagogy’ but do not say how students get to this point. Indeed they acknowledge the inherent paradox in their model:

“…for many students to view learning as valuable then it must be valued by the system (assessed), yet it is simultaneously this assessment that makes learning stressful and undermines the creation of a safe and comfortable environment.”

For me, then, this article leaves three interrelated questions to be discussed:

  1. For playful learning to be successful, do students need to have the implicit structures already in place or can students acquire these through the playful activity itself?
  2. If these implicit structures are prerequisite, how do we get students to acquire them?
  3. As this involves a change in students’ attitudes which the authors argue are reinforced by the current assessment-driven environment, does this pedagogical approach have any chance of success without change at the programme or institutional level?

Suzanne read Unhappy families: using tabletop games as a technology to understand play in education by John Lean, Sam Illingworth, Paul Wake, published in the ALT Journal special issue

In this article, the authors take a step back when considering the ‘future’ of digital technologies in relation to playful learning, by considering traditional tabletop games as a form of technology. They aimed to better understand the affordances of digital game tools by looking at tabletop games as an analogue, in order to reflect critically on the pedagogical uses of games and playful learning. Their hypothesis was that tabletop games (see the article for a full definition of how they classify a game as ‘tabletop’) are successful because: 1) they provide an immediate and accessible shared space, which is also social; 2) this space and the game are both easily modified by players and educators; and 3) they provide a tactile, sensory experience. So, in essence, they are social, modifiable and tactile, all of which could be transferred into digital games in education, but are often overlooked.

To explore this hypothesis, they used a specific game, ‘Gloom’, which was played by participants at the 2017 ALT Playful Learning Conference. In relation to the first hypothesis, they found that the game encouraged a lot of social interaction. Firstly, the game encourages players to talk about their recent lived experiences as a means of deciding who gets to go first (ie, who has had the worst day thus far). Additionally, there is a storytelling element to the game, which encourages an empathetic interaction between the game and the players, as well as between players.

Regarding how modifiable the game is, the players found it was easy to change and adapt the game, even during play. They also explored the ways of playing around with and stretching the rules, to create different rules or games within the game play. The authors note that this is often not as easily achieved in digital games, where rules can be more fixed and more difficult to circumvent. Thirdly, the players did undoubtedly find the game tactile, as the cards provide a physical element, further enhanced by the way the cards are played. The cards themselves have transparent elements, so as you stack cards you create different versions of them, allowing for the storytelling element.

In conclusion, the authors used this game play experience to revisit some preconceptions about what ‘play’ or ‘playfulness’ is in a game context. They felt that the ‘true’ play seemed to happen when the players had modified the rules to the point where the game itself was almost no longer required. The players were exploring and testing their new game playfully, in the way that they were interacting with each other and the environment. In terms of education, they felt that this playfulness had great potential for learning. The process of negotiating the play, and working out how to play with others who might have different ideas to you (for example, either wanting to stick to the rules or wanting to break them) is potentially a powerful social learning opportunity.

However, they also noted that this very character of playful learning – that it is negotiated and created by the context and participants – makes it extremely difficult to categorise or understand pedagogically. If we need to allow for such variety of outcomes in playful learning, it can be difficult to work out how we can situate it within other educational structures, like lesson plans or learning objectives.

Suggested reading