AI and education – notes from reading group

Will Higher Ed Keep AI in Check? (notes from Chrysanthi Tseloudi)

In this article, Frederick Singer argues that whether the future of AI in education is a positive or a dystopian one depends not on a decision to use or not use AI, but on retaining control over how it is used.

The author starts by mentioning a few examples of how AI is or may be used outside of education – both risky and useful. They then move on to AI's use in educational contexts, with examples including an AI chatbot for students' queries regarding enrolment and financial issues, as well as AI-powered video transcription that can help accessibility. The area they identify as having the most potential for both risk and benefit is AI helping educators address individual students' needs and indicating when intervention is needed; there are concerns about data privacy, and about achieving the opposite of the intended results if educators lose control.

The final example they mention is using AI in the admissions process to sidestep human biases and help identify promising applicants, but without automatically rejecting students who are not identified as promising by the AI tool.

I think this is something to be cautious about. Using AI for assessment – whether for admissions, marking activities, tracking progress, etc. – certainly has potential, but AI is not free of human biases. In fact, there have been several examples of AI systems that are full of them. The article 'Rise of the racist robots – how AI is learning all our worst impulses' and Cathy O'Neil's TED talk 'The era of blind faith in big data must end' report that AI algorithms can be racist and sexist because they rely on datasets that already contain biases. A dataset of successful people, for example, is essentially a dataset of past human opinions about who can be successful, and human opinions are biased: if only a specific group of people has been culturally allowed to be successful, a person who doesn't belong to that group will not be seen by the AI as equally (or more) promising than those who do. AI algorithms can also be opaque – it is not necessarily obvious what they are picking up on to make their judgements – so it is important to be vigilant, and for the people who build them to implement ways to counteract the potential discrimination arising from them.

It's not hard to see how this could apply in educational contexts. For example, algorithms that use datasets from disciplines that are currently more male-dominated might rank women as less likely to succeed, and algorithms that have been trained on data consisting overwhelmingly of students of a specific nationality, with very few international students, might mark international students' work lower. There are probably ways to prevent this, but awareness of the potential bias is needed for that to happen. All this considered, educators seeing AI as a tool that is free of bias would be rather worrying. Understanding the potential issues here is key to retaining control.
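The mechanism described above, a model trained on biased historical labels simply reproducing that bias, can be sketched in a few lines of Python. The data here is entirely made up for illustration: two groups, 'A' and 'B', where past human judgements labelled group A as 'successful' far more often.

```python
# Toy illustration (hypothetical data): a naive "success predictor" that
# learns from historically biased labels reproduces the bias.

# Each record: (group, was_labelled_successful) -- past human judgements.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 20 + [("B", False)] * 80)

def success_rate(group):
    """Rate at which past records of this group were labelled successful."""
    records = [ok for g, ok in history if g == group]
    return sum(records) / len(records)

# A "model" that scores applicants by their group's historical success rate
# simply encodes the old bias: otherwise identical applicants from the two
# groups receive very different scores.
print(success_rate("A"))  # 0.9
print(success_rate("B"))  # 0.2
```

A real system would use a trained classifier rather than a lookup, but the effect is the same: if group membership (or a proxy for it, such as a postcode or school name) correlates with the historical labels, the model learns that correlation.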

How Artificial Intelligence Can Change Higher Education (notes from Michael Marcinkowski)

For this meeting, I read 'How Artificial Intelligence Can Change Higher Education,' a profile of Sebastian Thrun. The article detailed Thrun's involvement with the popularization of massive open online courses and the founding of his company, Udacity. Developed out of Thrun's background working at Google in the field of artificial intelligence, Udacity looks to approach the question of education as a matter of scale: how can digital systems be used to teach vast numbers of people all over the world? For Thrun, the challenge for education is how it can be possible to develop student mastery of a subject through online interactions, while at the same time widening the pathways for participation in higher education.

The article, unfortunately, focused mostly on the parallels between Thrun's work in education and his involvement with the development of autonomous vehicles, highlighting the potential that artificial intelligence technologies have for both, while avoiding any discussion of the particulars of how this transformational vision might be achieved.

Nevertheless, the article still opened up some interesting concerns around questions of scale and how best to approach the question of how education might function at a scale larger than traditionally conceived. At the heart of this question is the role that autonomous systems might have in helping to manage this kind of large-scale educational system. That is, at what point and for what tasks is it appropriate to take human educators out of the loop, or to place them at a further remove from the student? In particular, areas such as the monitoring of student well-being and one-on-one tutoring came out as areas ripe for both innovation and controversy.

While it was disappointing that the article largely avoided the actual issues of the uses of artificial intelligence in education, it did offer an unplanned-for lesson about AI in education. As with the hype surrounding self-driving cars, the promises for a new educational paradigm that were put forward in this 2012 article still seem far off. While the mythos of the Silicon Valley innovator might cast Thrun as a rebel who is singularly able to see the true path forward for education, most of his propositions for education, when they were not pie-in-the-sky fantasies, repeated well-worn opinions present throughout the history of education.
