I am in the process of finalising a second edition of Teaching in a Digital Age. (For more on this, see Working on the second edition of Teaching in a Digital Age: some lessons learned about open publishing.) I will post over the next few weeks the relatively few new sections in the book for comments and feedback.
For the former Chapter 7 (now Chapter 8), on Pedagogical differences between media, I am adding a new section in three parts on emerging media, in particular serious games, virtual/augmented/mixed reality, and artificial intelligence.
This is my draft of the section on artificial intelligence. I am indebted to Linda Harasim for her great arguments and ideas on this topic, and many others for their help in providing material for this chapter. However, further suggestions, criticisms and constructive feedback will be truly appreciated.
8.7c.1 Focusing on AI’s affordances for teaching and learning
Artificial intelligence (AI) is a daunting topic as there are so many issues with respect to its use in education. AI is also currently going through yet another period of extreme hype as a panacea for education, sitting at the peak of inflated expectations, but this hype is driven mainly by successful applications outside the field of education, such as in finance and marketing. Furthermore, the term ‘AI’ is increasingly being used (incorrectly) as a general term for any computational activity.
Even in education, there are very different possible areas of application of AI. Zeide (2019) makes a very useful distinction between institutional, student support and instructional applications (Figure 8.7c.1 below).
Although AI applications for institutional or student support purposes are very important, this chapter is focused on the pedagogical affordances of different media and technologies (what Zeide calls ‘instructional’ applications). In particular, the focus in this section will be on the role of AI as a form of media or technology for teaching and learning, its pedagogical affordances, and its strengths and weaknesses in this area.
Moreover, AI is really a subset of computing. Thus all the general affordances of computing in education set out in Section 5 of this chapter will apply to AI. This section aims to tease out the extra potential that AI can offer in teaching and learning. This means focusing particularly on its role as a medium rather than a general technology in teaching, looking at a wider context than just the computational aspects of AI, in particular its pedagogical role.
8.7c.2 What is artificial intelligence?
There are many different definitions of artificial intelligence, most of which are operational, in terms of what AI can do. For example, here are two general definitions:
artificial intelligence is the attempt to create machines that can do things previously possible only through human cognition
a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation
Zawacki-Richter et al. (2019), in a review of the literature on AI in higher education, report that those authors that defined artificial intelligence tended to describe it as:
intelligent computer systems or intelligent agents with human features, such as the ability to memorise knowledge, to perceive and manipulate their environment in a similar way as humans, and to understand human natural language.
Klutka et al. (2018) also define AI in terms of what it can do in higher education (Figure 8.7c.3 below):
There are three basic computing requirements that set ‘modern’ AI apart from other computing applications:
- access to massive amounts of data;
- computational power on a large scale to manage and analyze the data;
- powerful and relevant algorithms for the data analysis.
8.7c.3 Why use artificial intelligence for teaching and learning?
There are two somewhat different goals for the general use of artificial intelligence. The first is to increase the efficiency of a system or organization, primarily by reducing the high costs of labour, namely by replacing relatively expensive human workers with relatively less costly machines (automation). In education in particular, the main cost is that of teachers and instructors. Politicians, entrepreneurs and policy makers increasingly see a move to automation as a way of reducing the costs of education.
The second is to increase the effectiveness of teaching and learning, in economic terms to increase outputs: better learning outcomes and greater benefits for the same or more cost. With this goal, AI would be used alongside or supporting teachers and instructors.
Klutka et al. (2018) provide a general statement of the potential of AI in higher education ‘instruction’ through Figure 8.7c.4:
These are understandable goals, but we shall see later in this section that such goals to date are mainly aspirational rather than real.
In terms of this book, a key focus is on developing the knowledge and skills required by learners in a digital age. The key test then for artificial intelligence is to what extent it can assist in the development of these higher level skills.
8.7c.4 Affordances and examples of AI use in teaching and learning
Zawacki-Richter et al. (2019) in a review of the literature on AI in education initially identified 2,656 research papers in English or Spanish, then narrowed the list down by eliminating duplicates, limiting publication to articles in peer-reviewed journals published between 2007 and 2018, and eliminating articles that turned out in the end not to be about the use of AI in education. This resulted in a final 145 articles which were then analysed. Zawacki-Richter et al. then classified these 145 papers into different uses of AI in education. This section draws heavily on this classification. (It should be noted that within the 145 articles, only 92 were focused on instruction/student support. The rest were on institutional uses such as identifying at-risk students before admission.)
The Zawacki-Richter study offers one insight into the main ways that AI has been used in education for teaching and learning over the ten years between 2007 and 2018, the closest we can come to ‘affordances’. First, three main general categories (with considerable overlap) from the study are provided below, followed by some specific examples.
8.7c.4.1 Intelligent tutoring systems (29 out of 92 articles reviewed by Zawacki-Richter et al.)
Intelligent tutoring systems:
- provide teaching content to students and, at the same time, support them by giving adaptive feedback and hints to solve questions related to the content, as well as detecting students’ difficulties/errors when working with the content or the exercises;
- curate learning materials based on student needs, such as by providing specific recommendations regarding the type of reading material and exercises done, as well as personalised courses of action;
- facilitate collaboration between learners, for instance, by providing automated feedback, generating automatic questions for discussion, and the analysis of the process.
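The present/diagnose/feedback loop at the heart of such systems can be sketched very simply. The following is a toy illustration only: the items, hints and levelling rule are invented, not taken from any actual tutoring system.

```python
# Toy sketch of the adaptive-feedback loop an intelligent tutoring
# system performs: check the student's answer, give a hint on error,
# and adjust the difficulty level. Items and hints are invented.

ITEMS = {
    1: {"q": "2 + 3", "a": "5", "hint": "Count up from 2 by 3."},
    2: {"q": "12 * 4", "a": "48", "hint": "12 * 4 = 12 * 2 * 2."},
}

def tutor_step(level, answer):
    """Return (feedback, next_level) for a student's answer."""
    item = ITEMS[level]
    if answer == item["a"]:
        # correct: move the student up a level (capped at the hardest item)
        return "Correct!", min(level + 1, max(ITEMS))
    # wrong: give a hint and keep the student at the same level
    return "Hint: " + item["hint"], level

print(tutor_step(1, "5"))
print(tutor_step(2, "46"))
```

Note how little ‘intelligence’ a loop like this actually requires; much of what is published as AI-based tutoring is rule-based branching of this kind.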
8.7c.4.2 Assessment and evaluation (36 out of 92)
AI supports assessment and evaluation through:
- automated grading;
- feedback, including a range of student-facing tools, such as intelligent agents that provide students with prompts or guidance when they are confused or stalled in their work;
- evaluation of student understanding, engagement and academic integrity.
8.7c.4.3 Adaptive systems and personalization (27 out of 92)
AI enables adaptive systems and the personalization of learning by:
- teaching course content then diagnosing strengths or gaps in student knowledge, and providing automated feedback;
- recommending personalized content;
- supporting teachers in learning design by recommending appropriate teaching strategies based on student performance;
- supporting representation of knowledge in concept maps.
Klutka et al. (2018) identified several uses of AI for teaching and learning in universities in the USA. ECoach, developed at the University of Michigan, provides formative feedback for a variety of mainly large classes in the STEM field. It tracks students’ progress through a course and directs them to appropriate actions and activities on a personalized basis. Other applications listed in the report include sentiment analysis (using students’ facial expressions to measure their level of engagement in studying), an application to monitor student engagement in discussion forums, and organizing commonly shared mistakes in exams into groups so the instructor can respond once to the group rather than individually.
8.7c.4.4 Chatbots
A chatbot is programming that simulates the conversation or ‘chatter’ of a human being through text or voice interactions (Rouse, 2019). Chatbots in particular are a tool used to automate communications with students. Bayne (2014) describes one such application in a MOOC with 90,000 subscribers. Much of the student activity took place outside the Coursera platform within social media. The five academics teaching the MOOC were all active on Twitter, each with large networks, and Twitter activity around the MOOC hashtag (#edcmooc) was high across all instances of the course (for example, a total of around 180,000 tweets were exchanged around its first run). A ‘Teacherbot’ was designed to roam the tweets using the course Twitter hashtag, using keywords to identify ‘issues’ then choosing pre-designed responses to these issues, which often entailed directing students to more specific research on a topic. For a review of research on chatbots in education, see Winkler and Söllner (2018).
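The kind of keyword matching Bayne describes can be sketched in a few lines. This is a toy illustration only; the keywords and canned responses below are invented, not taken from the actual Teacherbot.

```python
# Toy sketch of a keyword-triggered 'teacherbot': scan a message for
# known issue keywords and return a pre-designed response.
# Keywords and responses are invented for illustration.

RESPONSES = {
    "deadline": "Assignment deadlines are listed on the course page.",
    "video": "Try the alternative video link posted in week 1.",
    "reading": "See the suggested further reading for this topic.",
}

def bot_reply(tweet):
    """Return a canned response if the tweet mentions a known issue."""
    text = tweet.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return None  # no known issue detected: the bot stays silent

print(bot_reply("When is the essay deadline? #edcmooc"))
```

A bot like this scales human-designed answers rather than generating understanding, which is precisely why it worked alongside, not instead of, the five instructors.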
8.7c.4.5 Automated essay grading
Natural language processing (NLP) artificial intelligence systems – often called automated essay scoring engines – are now either the primary or secondary grader on standardized tests in at least 21 states in the USA (Feathers, 2019). According to Feathers:
Essay-scoring engines don’t actually analyze the quality of writing. They’re trained on sets of hundreds of example essays to recognize patterns that correlate with higher or lower human-assigned grades. They then predict what score a human would assign an essay, based on those patterns.
Feathers, though, claims that research from psychometricians and AI experts shows that these tools are susceptible to a common flaw in AI: bias against certain demographic groups (see Ongweso, 2019).
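The pattern-matching approach Feathers describes can be sketched as follows. This toy example predicts a grade from crude surface features (length, average word length, vocabulary size) by copying the score of the most similar training essay. The features and training data are invented, and real engines are far more elaborate, but the principle is the same: no judgement of meaning, only correlation with past human grades.

```python
# Toy sketch of a pattern-based essay scoring 'engine': it never judges
# writing quality, only matches surface features against essays that
# humans have already graded. Features and data invented for illustration.

def features(essay):
    words = essay.split()
    return (
        len(words),                                       # essay length
        sum(len(w) for w in words) / max(len(words), 1),  # avg word length
        len(set(w.lower() for w in words)),               # vocabulary size
    )

def predict_score(essay, training_set):
    """Predict the human score of the nearest training essay."""
    f = features(essay)
    def distance(item):
        g = features(item[0])
        return sum((a - b) ** 2 for a, b in zip(f, g))
    return min(training_set, key=distance)[1]

training = [
    ("Good essay with varied and sophisticated vocabulary herein.", 5),
    ("Bad short one.", 2),
]
print(predict_score("Strong essay with broad and elaborate vocabulary shown.", training))
```

Any bias encoded in the human-assigned training scores is reproduced, and scaled, by the predictor, which is the flaw Feathers and Ongweso point to.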
Lazendic et al. (2018) offer a detailed account of the plan for machine grading in Australian high schools. They state:
It is …crucially important to acknowledge that the human scoring models, which are developed for each NAPLAN writing prompt, and their consistent application, ensure and maintain the validity of NAPLAN writing assessments. Consequently, the statistical reliability of human scoring outcomes is fundamentally related to and is the key evidence for the validity of NAPLAN writing marking.
In other words, the marking must be based on consistent human criteria. However, it was announced later (Hendry, 2018) that Australian education ministers agreed not to introduce automated essay marking for NAPLAN writing tests, heeding calls from teachers’ groups to reject the proposal.
Perelman (2013) developed a computer program called the BABEL generator that patched together strings of sophisticated words and sentences into meaningless gibberish essays. The nonsense essays consistently received high, sometimes perfect, scores when run through several different scoring engines. See also Mayfield, 2013, and Parachuri, 2013, for thoughtful analyses of the issues in the automated marking of writing.
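Perelman’s trick works because such engines reward surface sophistication. A toy version of the idea, with invented word lists (the real BABEL generator is far more elaborate), simply strings impressive-sounding words together:

```python
import random

# Toy illustration of the BABEL idea: assemble impressive-sounding
# words into grammatically empty 'sentences'. Word lists are invented.

FANCY = ["paradigm", "epistemology", "hegemony", "dialectic",
         "ontological", "synergistic", "normative", "discursive"]
CONNECTIVES = ["of", "within", "notwithstanding", "vis-a-vis"]

def babel_sentence(rng, length=6):
    """Build one meaningless 'sentence' of the given word count."""
    words = [rng.choice(FANCY)]
    for _ in range(length - 1):
        words.append(rng.choice(CONNECTIVES))
        words.append(rng.choice(FANCY))
    return " ".join(words).capitalize() + "."

rng = random.Random(42)  # seeded for reproducibility
essay = " ".join(babel_sentence(rng) for _ in range(3))
print(essay)
```

Output like this scores well on the surface features (long words, varied vocabulary) that pattern-based engines reward, while communicating nothing.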
At the time of writing, despite considerable pressure to use automated essay grading for standardized exams, the technology still has many questions lingering over it.
8.7c.5 Strengths and weaknesses
There are several ways to assess the value of the teaching and learning affordances of particular applications of AI in teaching and learning:
- is the application based on the three core features of ‘modern’ AI: massive data sets, massive computing power; powerful and relevant algorithms? Does the application have clear benefits in terms of affordances over other media, and particularly general computing applications? If not, it is probably not AI but just a general computing application;
- does the application facilitate the development of the skills and knowledge needed in a digital age?
- is there unintended bias built into the algorithms? Does it appear to discriminate against certain categories of people?
- is the application ethical in terms of student and teacher/instructor privacy and their rights in an open and democratic society?
- are the results of the application ‘explainable’? For example, can a teacher or instructor or those responsible for the application understand and explain to students how the results or decisions made by the AI application were reached?
These issues are addressed below.
8.7c.5.1 Is it really a ‘modern’ AI application in teaching and learning?
Looking at the Zawacki-Richter et al. study and many other research papers published in peer-reviewed journals, very few so-called AI applications in teaching and learning meet the criteria of massive data, massive computing power and powerful and relevant algorithms. Much of the intelligent tutoring within conventional education is what might be termed ‘old’ AI: there is not a lot of processing going on, and the data points are relatively few. Many so-called AI papers focused on intelligent tutoring and adaptive learning are really just general computing applications.
Indeed, so-called intelligent tutoring systems, automated multiple-choice test marking, and automated feedback on such tests have been around since the early 1980s. The closest to modern AI applications appear to be the automated essay grading of standardized tests administered across an entire education system. However, there are major problems with the latter. More development is clearly needed to make automated essay grading a valid exercise.
The main advantage that Klutka et al. (2018) identify for AI is that it opens up the possibility for higher education services to become scalable at an unprecedented rate, both inside and outside the classroom. However, it is difficult to see how ‘modern’ AI could be used within the current education system, where class sizes or even whole academic departments, and hence data points, are relatively small, in terms of the numbers needed for ‘modern’ AI. It cannot be said to date that modern AI has been tried, and failed, in teaching and learning; it’s not really even been tried.
Applications outside the current formal system are more realistic, for MOOCs, for instance, or for corporate training on an international scale. The requirement for massive data does suggest that the whole education system could be massively disrupted if the necessary scale could be reached by offering modern AI-based education outside the existing education systems, for instance by large Internet corporations that could tap their massive markets of consumers.
However, there is still a long way to go before AI makes that feasible. This is not to say that there could not be such applications of modern AI in the future, but at the moment, in the words of the old English bobby, ‘Move along, now, there’s nothing to see here.’
However, for the sake of argument, let’s assume that the definition of AI offered here is too strict and that most of the applications discussed in this section are examples of AI. How do these applications of AI meet the other criteria above?
8.7c.5.2 Do the applications facilitate the development of the skills and knowledge needed in a digital age?
This does not seem to be the case in most so-called AI applications for teaching and learning today. They are heavily focused on content presentation and testing for understanding and comprehension. In particular, Zawacki-Richter et al. make the point that most AI developments for teaching and learning – or at least the research papers – are by computer scientists, not educators. Because AI tends to be developed by computer scientists, it tends to be built around models of learning based on how computers or computer networks work (since, of course, it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present/test/feedback. Lynch (2017) argues that:
If AI is going to benefit education, it will require strengthening the connection between AI developers and experts in the learning sciences. Otherwise, AI will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning.
Comprehension and understanding are indeed important foundational skills, but AI so far is not helping with the development of higher order skills in learners of critical thinking, problem-solving, creativity and knowledge-management. Indeed, Klutka et al. (2018) claim that AI can handle many of the routine functions currently done by instructors and administrators, freeing them up to solve more complex problems and connect with students on deeper levels. This reinforces the view that the role of the instructor or teacher needs to move away from content presentation, content management and testing of content comprehension – all of which can be done by computing – towards skills development. The good news is that AI used in this way supports teachers and instructors, but does not replace them. The bad news is that many teachers and instructors will need to change the way they teach or they will become redundant.
8.7c.5.3 Is there unintended bias in the algorithms?
It could be argued that all AI does is encapsulate the existing biases in the system. The problem, though, is that this bias is often hard to detect in any specific algorithm, and that AI tends to scale up or magnify such biases. These are issues more for institutional uses of AI, but machine-based bias can also discriminate against students in a teaching and learning context, especially in automated assessment.
8.7c.5.4 Is the application ethical?
There are many potential ethical issues arising from the use of AI in teaching and learning, mainly due to the lack of transparency. What data are being collected, who owns or controls them, how are they being interpreted, how will they be used? Policies will need to be put in place to protect students and teachers/instructors (see for instance the U.S. Department of Education’s student data policies for schools), and students and teachers/instructors need to be involved in such policy development.
8.7c.5.5 Are the results explainable?
The biggest problem with AI generally, and in teaching and learning in particular, is the lack of transparency. How did it give me this grade? Why am I directed to this reading rather than that one? Why isn’t my answer acceptable? Lynch (2017) argues that most data collected about student learning is indirect, inauthentic, lacking demonstrable reliability or validity, and reflecting unrealistic time horizons to demonstrate learning:
‘current examples of AIEd often rely on … poor proxies for learning, using data that is easily collectable rather than educationally meaningful.’
8.7c.6.1 Dream on, AI enthusiasts
The first thought is how far we still have to go from what is possible in the present to what the expectations of AI are, with regard to teaching and learning. What works well in finance or medicine does not necessarily translate to teaching and learning contexts.
One reason is that there is a strong affective or emotional influence in learning. Students often learn better when they feel that the instructor or teacher cares. In particular, students want to be treated as individuals, with their own interests, ways of learning, and some sense of control over their learning. Although at a mass level human behaviour is predictable and to some extent controllable, each student is an individual and will respond slightly differently from other students in the same context. Because of these emotional and personal aspects of learning, students need to relate in some way to their instructor. Learning is not only a complex activity of which only a relatively minor part can be effectively automated; it is an intensely human activity that benefits enormously from personal relationships and social interaction. We shall see that this relational aspect of learning can be handled equally well online as face-to-face, but it means using computing to support communication as well as delivering and testing content acquisition.
8.7c.6.2 Not fit for purpose
Above all, AI has not progressed to the point yet where it can support the higher levels of learning required in a digital age or the teaching methods needed to do this, although other forms of computing or technology, such as simulations, games and virtual reality, can.
In particular, AI developers have been largely unaware that learning is developmental and constructed, and have instead imposed an old and less appropriate method of teaching based on behaviourism and an objectivist epistemology. However, to develop the skills and knowledge needed in a digital age, a more constructivist approach to learning is needed. There is no evidence to date that AI can support such an approach to teaching, although it may become possible.
8.7c.6.3 AI’s real agenda
AI advocates often argue that they are not trying to replace teachers but to make their life easier or more efficient. This should be taken with a grain of salt. The key driver of AI applications is cost-reduction, which means reducing the number of teachers, as this is the main cost in education. In fact, the key lesson from all AI developments is that we will need to pay increased attention to the affective and emotional aspects of life in a robotic-heavy society, so teachers will become even more important.
Another problem with artificial intelligence is that the same old hype keeps going round and round. The same arguments for using artificial intelligence in education go back to the 1980s. Millions of dollars went into AI research at the time, including into educational applications, with absolutely no payoff.
There have been some significant developments in AI since then, in particular pattern recognition, access to and analysis of big data sets, and formalized decision-making within limited boundaries. The trick though is to recognise exactly what kind of applications these new AI developments are good for, and what they cannot do well. In other words, the context in which AI is used matters, and needs to be taken account of.
8.7c.6.4 Defining AI’s role in teaching and learning
Nevertheless, there is plenty of scope for useful applications of AI in education, but only if there is continuing dialogue between AI developers and educators as new developments in AI become available. But that will require being very clear about the purpose of AI applications in education and being wide awake to the unintended consequences.
In education, AI is still a sleeping giant. ‘Breakthrough’ applications of AI for teaching and learning are probably not going to come from the mainstream universities and colleges, but from outside the formal post-secondary system, through organizations such as LinkedIn, lynda.com, Amazon or Coursera, that have access to large data sets that make the applications of AI scalable and worthwhile (to them). However, this would pose an existential threat to public schools, colleges and universities. The issue then becomes: who is best placed to protect and sustain the individual in a digital age: multinational corporations or a public education system?
The key question then is whether technology should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Above all, who should control AI in education: educators, students, computer scientists, or large corporations? These are indeed existential questions if AI does become immensely successful in reducing the costs of teaching and learning: but at what cost to us as humans? Fortunately AI is not yet in a position to provide such a threat; but it may well do so soon.
There is a short conclusion and summary to the section on emerging technologies to come.
Over to you
This section on artificial intelligence was one of the hardest parts of the book for me to write. I am not a specialist in AI and I am conscious that I may have missed some significant work in the use of AI for teaching and learning.
In particular do you agree that most ‘AI’ applications in teaching and learning are more ‘old’ AI than ‘modern’ AI and that we have yet to see a really significant application of modern AI in teaching and learning?
Bayne, S. (2014) Teacherbot: interventions in automated teaching, Teaching in Higher Education, Vol. 20, No. 4
Feathers, T. (2019) Flawed Algorithms Are Grading Millions of Students’ Essays, Motherboard: Tech by Vice, 20 August
Hendry, J. (2018) Govts dump NAPLAN robo marking plans, itnews, 30 January
Klutka, J. et al. (2018) Artificial Intelligence in Higher Education: Current Uses and Future Applications, Louisville, KY: Learning House
Lazendic, G., Justus, J.-A., and Rabinowitz, S. (2018) NAPLAN Online Automated Scoring Research Program: Research Report, Canberra, Australia: Australian Curriculum, Assessment and Reporting Authority
Lynch, J. (2017) How AI will destroy education, buZZrobot, 13 November
Mayfield, E. (2013) Six ways the edX Announcement Gets Automated Essay Grading Wrong, e-Literate, April 8
Ongweso Jr., E. (2019) Racial Bias in AI Isn’t Getting Better and Neither Are Researchers’ Excuses, Motherboard: Tech by Vice, July 29
Parachuri, V. (2013) On the automated scoring of essays and the lessons learned along the way, vicparachuri.com, July 31
Perelman, L. (2013) Critique of Mark D. Shermis & Ben Hamner, Contrasting State-of-the-Art Automated Scoring of Essays: Analysis, Journal of Writing Assessment, Vol. 6, No. 1
Rouse, M. (2019) What is chatbot?, TechTarget Network Customer Experience, 5 January
Winkler, R. and Söllner, M. (2018) Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis, Academy of Management Annual Meeting (AOM), Chicago, Illinois
Zawacki-Richter, O. et al. (2019) Systematic review of research on artificial intelligence applications in higher education – where are the educators?, International Journal of Educational Technology in Higher Education (in press – details to be added)
Zeide, E. (2019) Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions, EDUCAUSE Review, Vol. 54, No. 3, August 26
The overview seems fair, with a reasonable escape clause at the end saying ‘but this could change.’
Large Data Sets
One doesn’t have to have thousands of students to generate large data sets that might be useful. Understanding the behavior (e.g. eye movements and emotional indicators) of an individual learner can generate very large data sets.
At least in STEM education there is a considerable amount of patterned information to be ingested. Skinner demonstrated in the 1950s that such information can be effectively and economically learned with the ancestors of today’s machines. As a student in his class I learned that he made a meaningful distinction between what was appropriate for machine-based instruction and what wasn’t. That same distinction seems valid today, although it is often not recognized. This doesn’t dispute Tony’s analysis, only reinforces it.
Many thanks, James, for excellent comments. Much appreciated.
This is great, Tony, and a very good intro to the unending hype around AI in education.
One suggestion I could make (you may deal with this elsewhere but perhaps somewhere around the conclusion section?) is around the knowledge and understanding of AI by today’s teaching professionals. I personally feel that there are not nearly enough faculty (those without digital teaching as their core discipline) who know enough about AI and can have a serious debate around something that, potentially, may have a very serious effect on their professional teaching careers in the years to come.
Yes, it may not be part of their subject knowledge, but there seems to be an inordinate amount of sticking heads in the sand and hoping the hype can just be ignored for longer. As you allude to, it’s only a matter of time before we see some real progress. Teachers/lecturers and the like need to be having more informed, robust debates so they can push back on the corporate agenda that will inevitably dominate the conversation, no? Teaching professionals should use this opportunity to define the pathway for AI’s growing influence and not allow themselves to fall into the same hype and ultimately be bullied by it later.
Looking forward to the finished publication
Great comments, Miles. I agree that we all need to understand more about how AI actually works. Unfortunately, by its nature (especially through patented ‘no-see’ algorithms) it is not transparent at all. This is not helped by the way computer scientists write about its educational use. The focus in journal articles is always on the statistical analysis and choice of algorithms, and never on the intended learning outcomes (which are usually the same as for a face-to-face class). What we want to know is what AI can do well in teaching that is different from, or more effective than, just ‘replacing’ a teacher teaching content. More dialogue between computer scientists and educators is essential.
AI has been used for decades in learning – Google, Google Scholar, YouTube etc. – and there are literally dozens of examples that are absent here: work on adaptive learning at ASU, chatbots for many different purposes, content creation, face recognition, ID and so on. I also find it rather puritan on ethics. If what you require is complete transparency, then you’d better block Google and Google Scholar.