At Cloud Academy, we are keen to stay up to date with the latest technologies in education, and to bring some of them into our platform. For instance, we have been experimenting with recent techniques from Natural Language Processing (NLP) for scaling and automating quiz creation, and for calibrating newly generated questions.
I recently had the opportunity to attend the 21st International Conference on Artificial Intelligence in Education (AIED 2020), which was a great chance to hear about some of the latest research, as well as to present a contribution of our own. The conference was supposed to be held in Ifrane, Morocco from July 6-10, 2020, but due to the COVID-19 pandemic it took place entirely online in a virtual format.
Organizing an international conference is no easy task, and having to move everything online with only a few weeks' notice makes it even harder. Nevertheless, the organizers put together a great event that could be attended by people from all over the world, across different time zones. Several tracks ran in parallel, and every presentation was held as a Zoom meeting: twenty minutes for the talk and ten minutes for Q&A.
Here are some of the key takeaways from the AIED conference, focusing on just a few of the many interesting topics that were presented.
Recent technological advances have made it possible to replicate many aspects of traditional education online, from university classes to general courses for acquiring new skills. In most cases, however, the approach adopted in online education is a carbon copy of what is done in person, and this limits what online learning could achieve. The issue became particularly visible in the last few months, when education was forced to move (almost completely) online even though it was not ready to do so.
This concern came up several times in the keynote presentations at the conference and, in particular, in the panel discussion held on the third day. Tellingly titled “COVID-19 and the Future of AI in Education and Training,” it focused heavily on the challenges of online education, which should be approached differently from the challenges of on-site education. While this is true for all aspects of educational activity, it deserves special attention when developing AI systems for education.
Many talks at AIED20 focused on techniques for building student models: modeling the learning process, both at the individual and at the classroom level, with the goal of understanding how students learn. Such information can be leveraged to understand which aspects lead to better academic results, and possibly to personalize the learning experience to improve students’ outcomes.
This is a very broad category, so I will mention only a few of the works presented at the conference.
One work, titled “The sound of inattention,” tried to predict students’ mind-wandering using features of the instructor’s speech as input. In practice, the authors investigated whether it is possible to recognize which aspects of the instructor’s speech (e.g., voice frequency, speaking rate) cause students to lose attention. Experimental results show that students’ inattention can indeed be predicted from such features, suggesting that instructors could pay attention to these aspects while giving their lectures, especially in the case of pre-recorded video lectures.
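To give a feel for the kind of model involved, here is a minimal sketch in Python: a toy logistic classifier mapping a few speech features to a mind-wandering probability. The feature names, weights, and bias are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical per-minute speech features (names and units invented for
# illustration): pitch variance, speaking rate (words/s), pause ratio (0-1).
# Weights and bias are hand-picked, NOT estimated from the paper's data.
WEIGHTS = {"pitch_variance": -0.02, "speaking_rate": -0.8, "pause_ratio": 2.5}
BIAS = 1.0

def inattention_probability(features):
    """Logistic model: flat, slow, pause-heavy speech -> higher risk."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A monotone, slow delivery with long pauses scores as riskier...
risky = inattention_probability(
    {"pitch_variance": 10.0, "speaking_rate": 1.5, "pause_ratio": 0.4})
# ...than a lively, varied delivery.
engaging = inattention_probability(
    {"pitch_variance": 80.0, "speaking_rate": 2.5, "pause_ratio": 0.1})
```

In a real system the weights would be learned from labeled data rather than hand-picked, but the shape of the model is the same.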
Another work focused on a technique to detect “wheel-spinning” in an online examination platform. Wheel-spinning is what happens when a student keeps attempting a problem without ever solving it. The proposed technique is interesting because it lets the platform decide whether to intervene and free the student from wheel-spinning, for instance by suggesting that they review some content or attempt other exercises before trying the current one again. Indeed, some research works, including one presented at AIED20, showed that if a student is stuck on a problem-solving task, taking a break to perform other tasks and later returning to the original problem increases the probability that the student manages to solve it.
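A minimal detector along these lines could simply look for a streak of failed attempts on the same skill; the streak length below is an arbitrary illustration, not the criterion used in the paper.

```python
def is_wheel_spinning(attempts, streak=5):
    """Flag wheel-spinning when the last `streak` attempts on a skill
    were all incorrect. `attempts` is a chronological list of booleans
    (True = correct answer). The streak length of 5 is illustrative."""
    return len(attempts) >= streak and not any(attempts[-streak:])
```

A platform could call this after every submission and, when it returns True, suggest a review of prerequisite content instead of letting the student keep grinding.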
Natural Language Processing techniques for supporting education
Natural Language Processing (NLP) focuses on the interactions between computers and natural human languages, particularly how to program computers to process and analyze large amounts of natural language data. Recent years have witnessed tremendous advances in NLP, and it is increasingly being used in many different domains. For instance, at Cloud Academy we have been experimenting with it for scaling and automating quiz creation and for calibrating newly generated questions. Several papers on NLP techniques were presented at the AIED conference, including our contribution.
From a high-level perspective, the NLP papers presented at this conference focused on three areas of high interest to Cloud Academy: automated grading of students’ answers, question generation, and quality control for new questions.
Automatic grading of students’ answers and essays is a challenging but extremely relevant task, especially for online education. Indeed, one of the issues of online learning is the large number of students each instructor has to deal with, which makes manual correction of exams almost impossible. For this reason, multiple-choice questions (MCQs) are often the format of choice for online examinations, since they can be graded automatically. However, using open-ended questions alongside MCQs would improve the accuracy of student grading, so being able to reliably grade open-ended answers automatically would be a huge help for online examinations.
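To give an idea of the simplest possible baseline, here is a sketch that grades a short answer by bag-of-words cosine similarity to a reference answer. Real systems use far more sophisticated models; the threshold and the example answers are invented for illustration.

```python
import math
import re
from collections import Counter

def _bag(text):
    """Lowercased bag-of-words vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(a, b):
    """Cosine similarity between the two texts' word-count vectors."""
    va, vb = _bag(a), _bag(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def auto_grade(student_answer, reference_answer, threshold=0.4):
    """Return (score, passed); the pass threshold is illustrative."""
    score = similarity(student_answer, reference_answer)
    return score, score >= threshold

reference = "A load balancer distributes incoming traffic across multiple servers"
good_score, good_pass = auto_grade(
    "it spreads incoming traffic across several servers", reference)
bad_score, bad_pass = auto_grade(
    "the capital of France is Paris", reference)
```

Lexical overlap obviously misses paraphrases and penalizes synonyms, which is exactly why the research presented at the conference goes well beyond it.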
Question generation, too, can be very useful, helping instructors create effective questions from a given corpus of documents, such as all the lectures of a course. Indeed, question creation is a very time-consuming process, and providing automatically generated candidate questions, which the instructor can accept, modify, or discard, can speed it up considerably.
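A toy version of this idea is cloze-style generation: blank out a known keyword in a sentence to obtain a candidate fill-in-the-blank question for the instructor to review. Here the keyword list is assumed to come from a course glossary or a keyphrase extractor; everything in this sketch is illustrative.

```python
import re

def cloze_questions(sentence, keywords):
    """Generate fill-in-the-blank candidates by blanking each keyword
    that occurs in the sentence. Every candidate is meant to be
    reviewed (accepted, edited, or discarded) by a human instructor."""
    candidates = []
    for kw in keywords:
        pattern = rf"\b{re.escape(kw)}\b"
        if re.search(pattern, sentence, re.IGNORECASE):
            stem = re.sub(pattern, "_____", sentence, flags=re.IGNORECASE)
            candidates.append({"stem": stem, "answer": kw})
    return candidates

questions = cloze_questions(
    "The TCP protocol guarantees ordered delivery of packets.",
    ["TCP", "ordered delivery", "UDP"])
```

Modern question-generation research uses neural models that produce full interrogative sentences, but the pipeline shape (generate candidates, then let the instructor filter) is the same.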
New questions (whether automatically or manually generated) also need to be assessed before being used in actual exams, and this was the focus of other works at the conference. Indeed, not all questions are suited for scoring students, and some form of quality assurance is necessary to find the ones that should not be used; NLP techniques make it possible to detect and discard them automatically.
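As a sketch of what such quality assurance might check, here are a few cheap heuristics that flag multiple-choice questions for human review. The rules and thresholds are invented for illustration and are not taken from any of the presented works, which use statistically grounded (e.g., item response theory) and NLP-based criteria.

```python
def question_flags(question):
    """Return a list of human-readable reasons to review a question.
    All rules and thresholds here are illustrative."""
    stem = question["stem"]
    answer = question["answer"].lower()
    distractors = [d.lower() for d in question["distractors"]]
    flags = []
    if len(stem.split()) < 5:
        flags.append("stem too short")
    if answer in stem.lower():                   # rough substring check
        flags.append("answer appears in the stem")
    if len(set(distractors + [answer])) != len(distractors) + 1:
        flags.append("duplicate options")
    lengths = [len(answer)] + [len(d) for d in distractors]
    if max(lengths) > 3 * max(1, min(lengths)):  # one option stands out
        flags.append("options have very uneven lengths")
    return flags

good = {"stem": "Which transport protocol guarantees ordered delivery?",
        "answer": "TCP", "distractors": ["UDP", "ICMP", "ARP"]}
bad = {"stem": "What is TCP?",
       "answer": "TCP",
       "distractors": ["UDP", "udp", "a connection-oriented protocol"]}
```

An empty flag list does not mean the question is good; it only means these particular cheap checks found nothing.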
Supporting teachers and students
Some talks presented techniques for supporting teachers and students, focusing on facilitating human-computer interaction in the educational domain. The main interest was in Virtual Teaching Assistants (VTAs): chat-based or speech-based applications that can automatically answer some of the students’ requests, reducing the number of emails and messages instructors have to handle. Indeed, VTAs can answer the most common questions, those asked many times by different students, letting instructors focus on the requests that really require human-to-human interaction.
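A very rough sketch of the routing idea, with word overlap standing in for real language understanding: the assistant answers when a question matches a stored FAQ entry closely enough, and otherwise hands the request off to a human. The FAQ entries and the overlap threshold are made up for illustration.

```python
# A made-up FAQ; a real VTA would draw on course-specific data and a
# proper language model rather than raw word overlap.
FAQ = {
    "when is the final exam": "The final exam opens on Friday at 9am.",
    "how do i reset my password": "Use the 'forgot password' link on the login page.",
}

def answer_or_escalate(question, faq=FAQ, min_overlap=2):
    """Answer from the FAQ when enough words match a stored question;
    return None to hand the request off to a human instructor."""
    asked = set(question.lower().split())
    best_answer, best_overlap = None, 0
    for known, answer in faq.items():
        overlap = len(asked & set(known.split()))
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer if best_overlap >= min_overlap else None
```

The escalation path is the important design choice: a VTA should fail over to a human rather than guess, which is precisely what keeps instructors in the loop for the requests that need them.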
This was the second conference that our team attended in a virtual format this year, after taking part in the Learning Analytics and Knowledge Conference in March.
Once again the schedule was quite intense, but the event was very well organized, especially considering that researchers from many different time zones presented their contributions.
The location and dates for next year’s conference have not been announced yet, but I am already looking forward to seeing how research in this community progresses and how the many interesting ideas presented this year are developed further.