Digital wellbeing toolkit from BBC R&D (notes from Suzi Wells)
This toolkit from BBC R&D is designed to address wellbeing in digital product development. Given the increasing focus on student wellbeing, I was interested in whether it could be useful for online & blended learning.
The key resources provided are a set of values cards along with some exploration of what these mean, particularly for young people (16-34 year olds). These were developed by the BBC through desk research, focus groups and surveys.
The toolkit – the flashcards especially – could certainly be used to sense-check whether very digital teaching methods are really including and supporting the kinds of things we might take for granted in face-to-face education. It could also be useful for working with students to discuss the kinds of things that are important to them in our context, and identifying the kinds of things that really aren’t.
Some of the values seem too obvious (“being safe and well”, “receiving recognition”), or baked into education (“achieving goals”, “growing myself”), and I worry that could be off-putting to some audiences. The language also seemed a little strange – “human values” – as though “humans” were an alien species. It can feel like the target audience for the more descriptive parts would be someone who has never met a “human”, much less a young one. Nonetheless, the flashcards in particular could be a useful way to kick off discussions.
New studies show the cost of student laptop use in lecture classes (notes from Michael Marcinkowski)
The article I read for this session highlighted two new-ish articles which studied the impact of student laptop use on learning during lectures. In the roughly ten years that laptops have been a common sight in lecture halls, a number of studies have looked at their impact on notetaking in class. These previous studies have frequently found a negative association between laptop use for notetaking and learning, not only for the student using the laptop, but also for other students sitting nearby, distracted by the laptop screen.
The article took a look at two new studies that attempted to tackle some of the limitations of previous work, particularly the correlative nature of earlier findings: perhaps low-performing students prefer to use laptops for notetaking so that they can do something else during lectures.
What bears mentioning is that there is something somewhat quaint about studying student laptop use. In most cases, it seems to be a foregone conclusion and there is no getting it back into the box. Students will use laptops and other digital technologies in class — there’s almost no other option at this point. Nevertheless, the studies proved interesting.
The first of the highlighted studies featured an experimental setup, randomly assigning students in different sections of an economics class to one of three conditions: notetaking with a laptop, without a laptop, or with a tablet lying flat on the desk. The last condition was designed to test the effect of students being distracted by seeing other students’ screens, the supposition being that a tablet laid flat on a desk wouldn’t be visible to other students. Performance was then measured with a final exam taken by students across the three conditions.
After controlling for demographics, GPA, and ACT entrance exam scores, the researchers found that performance was lower for students using digital technologies for notetaking. However, while performance was lower on the multiple choice and short answer sections of the exam, performance on the essay portion was the same across all three conditions.
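To make the analysis concrete, here is a minimal sketch of the kind of estimation involved: ordinary least squares with dummy variables for the two digital conditions, controlling for a covariate like GPA. The data are simulated and all the numbers (effect sizes, sample size) are invented for illustration; this is not the study’s actual code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Randomly assign students to one of three notetaking conditions.
condition = rng.integers(0, 3, size=n)  # 0=longhand, 1=laptop, 2=flat tablet
gpa = rng.normal(3.0, 0.4, size=n)

# Simulated exam scores: a GPA effect plus a hypothetical penalty
# for the two digital conditions, plus noise.
penalty = np.array([0.0, -3.0, -2.0])
score = 60 + 8 * gpa + penalty[condition] + rng.normal(0, 5, size=n)

# OLS with dummies for the laptop and tablet conditions, controlling
# for GPA (longhand notetaking is the baseline category).
X = np.column_stack([
    np.ones(n),
    (condition == 1).astype(float),
    (condition == 2).astype(float),
    gpa,
])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta)  # [intercept, laptop effect, tablet effect, GPA effect]
```

Because assignment to conditions is random, the condition dummies are uncorrelated with unobserved student characteristics, which is what lets the design support a causal reading that purely observational comparisons cannot.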
While the study did address some shortcomings of previous studies (particularly with its randomized experimental design), it also introduced several others. Importantly, it raised questions about how teachers might teach differently when faced with a class of laptop users, or what effect forcing a laptop on a student who isn’t comfortable using one might have on their performance. Also, given that multiple sections of an economics class were the subject of the study, what role does the discipline being lectured on play in the impact of laptop use?
The second study attempted to address those through a novel design, which inferred students’ propensity to use laptops in optional-use classes from whether they were required to use, or prohibited from using, laptops in another class on the same day. Researchers looked at student performance across an institution that had a mix of classes which required, forbade, or had no rules about laptop use.
By looking at student performance in classes where laptop use was optional, and linking that performance to whether students’ laptop choices would be influenced by other classes held the same day, the researchers could measure performance when students had a genuine chance not to use a laptop. That is, the design allowed them to estimate in aggregate how many students might be using a laptop in a laptop-optional class, while still allowing individual students to make a choice based on preference.
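This design resembles an instrumental-variable approach: the same-day policy shifts laptop use but plausibly has no other route to grades. A minimal two-stage least squares sketch of that idea, with entirely invented data and effect sizes (not the study’s actual model), shows why it helps when unobserved ability drives both laptop use and performance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Instrument: does another class that day mandate (+1), prohibit (-1),
# or say nothing (0) about laptops? Assumed unrelated to ability.
policy = rng.choice([-1, 0, 1], size=n)

# Unobserved ability affects both laptop preference and grades.
ability = rng.normal(0, 1, size=n)

# Laptop use in the optional class: driven by same-day policy and ability.
laptop = ((0.8 * policy - 0.5 * ability + rng.normal(0, 1, size=n)) > 0).astype(float)

# Grades: ability helps, laptop use (hypothetically) hurts by 3 points.
grade = 70 + 5 * ability - 3 * laptop + rng.normal(0, 3, size=n)

# Naive OLS of grade on laptop is biased: ability is omitted, and
# lower-ability students use laptops more in this simulation.
naive = np.polyfit(laptop, grade, 1)[0]

# Two-stage least squares: regress laptop use on the policy instrument,
# then regress grades on the predicted laptop use.
Z = np.column_stack([np.ones(n), policy])
predicted = Z @ np.linalg.lstsq(Z, laptop, rcond=None)[0]
X2 = np.column_stack([np.ones(n), predicted])
iv = np.linalg.lstsq(X2, grade, rcond=None)[0][1]

print(naive, iv)  # the IV estimate lands closer to the true effect of -3
```

The naive estimate overstates the harm because laptop users are, by construction here, weaker students; the instrument strips out that self-selection.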
What they found was that student performance worsened in optional-use classes that shared a day with laptop-mandated classes, and improved in those that shared a day with laptop-prohibited classes. This is in line with previous studies, but interestingly, the negative effects were stronger for weaker students and in quantitative classes.
In the end, even as these two new studies reinforce what had previously been demonstrated about student laptop use, is there anything that can be done to counter what seem to be the negative effects of laptop use for notetaking? More than anything, what seems to be needed are studies looking at how to boost student performance when using technology in the classroom.
StudyGotchi: Tamagotchi-Like Game-Mechanics to Motivate Students During a Programming Course & To Gamify or Not to Gamify: Towards Developing Design Guidelines for Mobile Language Learning Applications to Support User Experience; two poster/demo papers in the EC-TEL 2019 Proceedings. (notes from Chrysanthi Tseloudi)
Both papers talk about the authors’ findings on gamified applications related to learning.
The first concerns StudyGotchi, an app based on the Tamagotchi (a virtual pet the user takes care of), which aims to encourage first-year Java programming students to complete tasks on their Moodle platform in order to keep a virtual teacher happy. Less than half the students downloaded the app, with half of those receiving the control version without the game functions (180) and half receiving the game version (194). According to the survey, some of those who didn’t download it reported that this was because they weren’t interested in it. Of those who replied to whether they used it, less than half said they did. According to data collected from students who used either version of the app, there was no difference in either online behaviour or exam grades between the groups using the game and non-game versions. The authors attribute this to the lack of interaction, personalisation options and immediate feedback on the students’ actions on Moodle.

I also wonder whether a virtual teacher to be made happy is the best choice of “pet”, when hopefully there is already a real teacher supporting students’ learning. Maybe a virtual brain with regions that light up when a quiz is completed, or any non-realistic representation connected to the students’ own development, would do more to increase students’ intrinsic motivation, since ideally they would be learning for themselves, and not to make someone else happy.
The second paper compares two language learning apps, one of which is gamified. The non-gamified app (LearnIT ASAP) includes exercises where students fill in missing words, feedback to show whether the answer is correct/incorrect, and statistics to track progress. The gamified app (Starfighter) includes exercises where students steer through an asteroid field by selecting answers to given exercises, and a leaderboard to track progress and compete with peers. The evaluation involved interviewing eleven individuals aged 20–50. The authors found that younger and older participants had different views about the types of interactions and the aesthetics of the two apps. Younger participants would have preferred swiping to tapping; older participants seemed to find the non-gamified app comfortable because it looked like a webpage, but were not so sure about the gamified app. The game mechanics of Starfighter were thought to be more engaging, while the pedagogical approach of LearnIT ASAP was thought to be better in terms of instructional value and effectiveness.

While the authors claim that the main difference between the apps is gamification, given the finding that the pedagogical approach of one of the apps is better, I wonder if that is actually the case. Which game elements actually improve engagement, and how, is still being researched, so I would really like to see comparisons of language learning apps where the presence or absence of game elements is indeed the only difference. Using different pedagogical approaches between the apps is less likely to shed light on the best game elements to use, but it does emphasize the difficulty of creating an application that is both educationally valuable and fun at the same time.