October 28, 2016

Automation or empowerment: online learning at the crossroads

Image: AppLift, 2015

You are probably, like me, getting tired of the different predictions for 2016. So I’m not going to do my usual look forward at individual developments in online learning for the year ahead. Instead, I want to raise a fundamental question about which direction online learning should be heading in the future, because the next year could turn out to be very significant in determining the future of online learning.

The key question we face is whether online learning should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Of course, the answer will always be a mix of both, but getting the balance right is critical.

An old but increasingly important question

This question, automation or human empowerment, is not new. It was raised by B.F. Skinner (1968) when he developed teaching machines in the early 1960s. He thought teaching machines would eventually replace teachers. On the other hand, Seymour Papert (1980) wanted computing to empower learners, not to teach them directly. In the early 1980s Papert got children to write computer code to improve the way they think and to solve problems. Papert was strongly influenced by Jean Piaget’s theory of cognitive development, and in particular that children constructed rather than absorbed knowledge.

In the 1980s, as personal computers became more common, computer-assisted learning (CAL) or computer-assisted instruction (CAI) became popular, using computer-marked tests and early forms of adaptive learning. Also in the 1980s the first applications of artificial intelligence appeared, in the form of intelligent math tutoring. Great predictions were made then, as now, about the potential of AI to replace teachers.

Then along came the Internet. Following my first introduction to the Internet in a friend’s basement in Vancouver, I published an article in the first edition of the Journal of Distance Education, entitled ‘Computer-assisted learning or communications: which way for IT in distance education?’ (1986). In this paper I argued that the real value of the Internet and computing was to enable asynchronous interaction and communication between teacher and learners, and between learners themselves, rather than as teaching machines. This push towards a more constructivist approach to the use of computing in education was encapsulated in Mason and Kaye’s book, Mindweave (1989). Linda Harasim has since argued that online collaborative learning is an important theory of learning in its own right (Harasim, 2012).

In the 1990s, David Noble of York University attacked online learning in particular for turning universities into ‘Digital Diploma Mills’:

‘universities are not only undergoing a technological transformation. Beneath that change, and camouflaged by it, lies another: the commercialization of higher education.’

Noble (1998) argued that

‘high technology, at these universities, is often used not to … improve teaching and research, but to replace the visions and voices of less-prestigious faculty with the second-hand and reified product of academic “superstars”.’

However, contrary to Noble’s warnings, for fifteen years most university online courses followed the route of interaction and communication between teachers and students rather than that of computer-assisted learning or video lectures, and Noble’s arguments were easily dismissed or forgotten.

Then along came lecture capture and with it, in 2011, Massive Open Online Courses (xMOOCs) from Coursera, Udacity and edX, driven by elite, highly selective universities, with their claims of making the best professors in the world available to everyone for free. Noble’s nightmare suddenly became very real. At the same time, these MOOCs have resulted in much more interest in big data, learning analytics, a revival of adaptive learning, and claims that artificial intelligence will revolutionize education, since automation is essential for managing such massive courses.

Thus we are now seeing a big swing back to the automation of learning, driven by powerful computing developments, Silicon Valley start-up thinking, and a sustained political push from those that want to commercialize education (more on this later). Underlying these developments is a fundamental conflict of philosophies and pedagogies, with automation being driven by an objectivist/behaviourist view of the world, compared with the constructivist approaches of online collaborative learning.

In other words, there are increasingly stark choices to be made about the future of online learning. Indeed, it is almost too late – I fear the forces of automation are winning – which is why 2016 will be such a pivotal year in this debate.

Automation and the commercialization of education

These developments in technology are being accompanied by a big push in the United States, China, India and other countries towards the commercialization of online learning. In other words, education is being seen increasingly as a commodity that can be bought and sold. This is not through the previous and largely discredited digital diploma mills of the for-profit online universities such as the University of Phoenix that David Noble feared, but rather through the encouragement and support of commercial computer companies moving into the education field, companies such as Coursera, Lynda.com and Udacity.

Audrey Watters and EdSurge both produced lists of EdTech ‘deals’ in 2015, totalling between $1 billion and $2 billion. Yes, that’s right: between $1 billion and $2 billion of investment in private ed tech companies in the USA (and China) in one year alone. At the same time, entrepreneurs are struggling to develop sustainable business models for ed tech investment, because with education funded publicly, a ‘true’ market is restricted. Politicians, entrepreneurs and policy makers on the right in the USA increasingly see a move to automation as a way of reducing government expenditure on education, and one means by which to ‘free up the market’.

Another development that threatens the public education model is the move by very rich entrepreneurs such as the Gates, the Hewletts and the Zuckerbergs to move their massive personal wealth into ‘charitable’ foundations or corporations and use this money for their pet ‘educational’ initiatives that also have indirect benefits for their businesses. Ian McGugan (2015) in the Globe and Mail newspaper estimates that the Chan Zuckerberg Initiative is worth potentially $45 billion, and one of its purposes is to promote the personalization of learning (another name hi-jacked by computer scientists; it’s a more human way of describing adaptive learning). Since one way Facebook makes its money is by selling personal data, forgive my suspicions that the Zuckerberg initiative is a not-so-obvious way of collecting data on future high earners. At the same time, the Chan Zuckerberg Initiative enables the Zuckerbergs to avoid paying tax on their profits from Facebook. Instead, then, of paying taxes that could be used to support public education, these immensely rich foundations enable a few entrepreneurs to set the agenda for how computing will be used in education.

Why not?

Technology is disrupting nearly every other business and profession, so why not education? Higher education in particular requires a huge amount of money, mostly raised through taxes and tuition fees, and it is difficult to tie results directly to investment. Surely we should be looking at ways in which technology can change higher education so that it is more accessible, more affordable and more effective in developing the knowledge and skills required in today’s and tomorrow’s society?

Absolutely. It is not so much the need for change that I am challenging, but the means by which this change is being promoted. In essence, a move to automated learning, while saving costs, will not improve the learning that matters, and particularly the outcomes needed in a digital age, namely the high-level intellectual skills of critical thinking, innovation, entrepreneurship, problem-solving, high-level multimedia communication, and above all, effective knowledge management.

To understand why automated approaches to learning are inappropriate to the needs of the 21st century we need to look particularly at the tools and methods being proposed.

The problems with automating learning

The main challenge for computer-directed learning (information transmission and management through Internet-distributed video lectures, computer-marked assessments, adaptive learning, learning analytics, and artificial intelligence) is that it is based on a model of learning with limited applications. Behaviourism works well in assisting rote memory and basic levels of comprehension, but does not enable or facilitate deep learning, critical thinking and the other skills that are essential for learners in a digital age.

R. and D. Susskind (2015) in particular argue that there is a new age in artificial intelligence and adaptive learning, driven primarily by what they call the brute force of more powerful computing. The reason AI failed so dramatically in the 1980s, they argue, was that computer scientists tried to mimic the way that humans think, and computers then did not have the capacity to handle information in the way they do now. When, however, we use the power of today’s computing, it can solve previously intractable problems through the analysis of massive amounts of data in ways that humans had not considered.

There are several problems with this argument. The first is that the Susskinds are correct that computers operate differently from humans. Computers are mechanical and work basically on a binary operating system. Humans are biological and operate in a far more sophisticated way, capable of language creation as well as language interpretation, and using intuition as well as deductive thinking. Emotion as well as memory drives human behaviour, including learning. Furthermore, humans are social animals, and depend heavily on social contact with other humans for learning. In essence, humans learn differently from the way machine automation operates.

Unfortunately, computer scientists frequently ignore or are unaware of the research into human learning. In particular they are unaware that learning is largely developmental and constructed, and instead impose an old and less appropriate method of teaching based on behaviourism and an objectivist epistemology. If, though, we want to develop the skills and knowledge needed in a digital age, we need a more constructivist approach to learning.

Supporters of automation also make another mistake in over-estimating or misunderstanding how AI and learning analytics operate in education. These tools reflect a highly objectivist approach to teaching, where procedures can be analysed and systematised in advance. However, although we know a great deal about learning in general, we still know very little about how thinking and decision-making operate biologically in individual cases. At the same time, although brain research is promising to unlock some of these secrets, most brain scientists argue that while we are beginning to understand the relationship between brain activity and very specific forms of behaviour, there is a huge distance to travel before we can explain how these mechanisms affect learning in general or how an individual learns in particular. There are too many variables (such as emotion, memory, perception, communication, as well as neural activity) at play to find an isomorphic fit between the firing of neurons and computer ‘intelligence’.

The danger then with automation is that we drive humans to learn in ways that best suit how machines operate, and thus deny humans the potential of developing the higher levels of thinking that make humans different from machines. For instance, humans are better than machines at dealing with volatile, uncertain, complex and ambiguous situations, which is where we find ourselves in today’s society.

Lastly, both AI and adaptive learning depend on algorithms that predict or direct human behaviour. These algorithms though are not transparent to the end users. To give an example, learning analytics are being used to identify students at high risk of failure, based on correlations of previous behaviour online by previous students. However, for an individual, should a software program be making the decision as to whether that person is suitable for higher education or a particular course? If so, should that person know the grounds on which they are considered unsuitable and be able to challenge the algorithm or at least the principles on which that algorithm is based? Who makes the decision about these algorithms – a computer scientist using correlated data, or an educator concerned with equitable access? The more we try to automate learning, the greater the danger of unintended consequences, and the more need for educators rather than computer scientists to control the decision-making.
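To make the transparency concern concrete, here is a minimal sketch of how such an at-risk prediction score is typically computed. Every feature name, weight and cut-off below is hypothetical, invented purely for illustration; none comes from any real learning-analytics product:

```python
import math

# Hypothetical weights derived from correlations in PREVIOUS students' online
# behaviour. The student being scored never sees these numbers.
WEIGHTS = {
    "logins_per_week": -0.35,   # more logins -> lower predicted risk
    "forum_posts": -0.20,
    "late_submissions": 0.60,   # more late work -> higher predicted risk
    "avg_quiz_score": -0.04,
}
BIAS = 1.5

def risk_of_failure(student: dict) -> float:
    """Return a logistic risk score in [0, 1] from behavioural features."""
    z = BIAS + sum(WEIGHTS[k] * student.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

student = {"logins_per_week": 2, "forum_posts": 1,
           "late_submissions": 3, "avg_quiz_score": 55}
score = risk_of_failure(student)
flagged = score > 0.5  # cut-off chosen by the analyst, not by an educator
```

The point of the sketch is that the weights and the cut-off together encode the decision, yet neither is visible or explainable to the student who gets flagged, and both were fitted to correlations in other students’ data.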

The way forward

In the past, I used to think of computer scientists as colleagues and friends in designing and delivering online learning. I am now increasingly seeing at least some of them as the enemy. This is largely to do with the hubris of Silicon Valley, which believes that computer scientists can solve any problem without knowing anything about the problem itself. MOOCs based on recorded lectures are a perfect example of this, being developed primarily by a few computer scientists from Stanford (and unfortunately blindly copied by many people in universities who should have known better).

We need to start with the problem, which is: how do we prepare learners for the knowledge and skills they will need in today’s society? I have argued (Bates, 2015) that we need to develop, in very large numbers of people, high-level intellectual and practical skills that require the construction and development of knowledge, and that enable learners to find, analyse, evaluate and apply knowledge appropriately.

This requires a constructivist approach to learning which cannot be appropriately automated, as it depends on high quality interaction between knowledge experts and learners. There are many ways to accomplish this, and technology can play a leading role, by enabling easy access to knowledge, providing opportunities for practice in experientially-based learning environments, linking communities of scholars and learners together, providing open access to unlimited learning resources, and above all by enabling students to use technology to access, organise and demonstrate their knowledge appropriately.

These activities and approaches do not easily lend themselves to massive economies of scale through automation, although they do enable more effective outcomes and possibly some smaller economies of scale. Automation can be helpful in developing some of the foundations of learning, such as basic comprehension or language acquisition. But at the heart of developing the knowledge and skills needed in today’s society, the role of a human teacher, instructor or guide will remain absolutely essential. Certainly, the roles of teachers and instructors will need to change quite dramatically, teacher training and faculty development will be critical for success, and we need to use technology to enable students to take more responsibility for their own learning, but it is a dangerous illusion to believe that automation is the solution to learning in the 21st century.

Protecting the future

There are several practical steps that need to be taken to prevent the automation of teaching.

  1. Educators – and in particular university presidents and senior civil servants with responsibility for education – need to speak out clearly about the dangers of automation, and the technology alternatives available that still exploit its potential and will lead to greater cost-effectiveness. This is not an argument against the use of technology in education, but the need to use it wisely so we get the kind of educated population we need in the 21st century.
  2. Computer scientists need to show more respect to educators and be less arrogant. This means working collaboratively with educators, and treating them as equals.
  3. We – teachers and educational technologists – need to apply in our own work and disseminate better to those outside education what we already know about effective learning and teaching.
  4. Faculty and teachers need to develop compelling technology alternatives to automation that focus on the skills and knowledge needed in a digital age, such as:
    • experiential learning through virtual reality (e.g. Loyalist College’s training of border service agents)
    • networking learners online with working professionals, to solve real world problems (e.g. by developing a program similar to McMaster’s integrated science program for online/blended delivery)
    • building strong communities of practice through connectivist MOOCs (e.g. on climate change or mental health) to solve global problems
    • empowering students to use social media to research and demonstrate their knowledge through multimedia e-portfolios (e.g. UBC’s ETEC 522)
    • designing openly accessible, high-quality, student-activated simulations and games, monitored by experts in the subject area.
  5. Governments need to put as much money into research into learning and educational technology as they do into innovation in industry. Without better and more defensible theories of learning suitable for a digital age, we are open to any quack or opportunist who believes he or she has the best snake oil. More importantly, with better theory and knowledge of learning disseminated and applied appropriately, we can have a much more competitive workforce and a more just society.
  6. We need to educate our politicians about the dangers of commercialization in education through the automation of learning and fight for a more equal society where the financial returns on technology applications are more equally shared.
  7. We need to become edupunks and take back the web from powerful commercial interests, using open source, low cost, easy-to-use tools in education that protect our privacy and enable learners and teachers to control how they are used.

That should keep you busy in 2016.

Your views are of course welcome – unless you are a bot.


Bates, A. (1986) Computer assisted learning or communications: which way for information technology in distance education? Journal of Distance Education Vol. 1, No. 1

Bates, A. (2015) Teaching in a Digital Age Victoria BC: BCcampus

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Mason, R. and Kaye, A. (eds) (1989) Mindweave: communication, computers and distance education. Oxford: Pergamon

McGugan, I. (2015) Why the Zuckerberg donation is not a bundle of joy, Globe and Mail, December 2

Noble, D. (1998) Digital Diploma Mills, Monthly Review http://monthlyreview.org/product/digital_diploma_mills/

Papert, S. (1980) Mindstorms: Children, Computers and Powerful Ideas New York: Basic Books

Skinner, B.F. (1968) The Technology of Teaching New York: Appleton-Century-Crofts

Susskind, R. and Susskind, D. (2015) The Future of the Professions: How Technology will Change the Work of Human Experts Oxford UK: Oxford University Press

Watters, A. (2015) The Business of EdTech, Hack Edu, undated http://2015trends.hackeducation.com/business.html

Winters, M. (2015) Christmas Bonus! US Edtech Sets Record With $1.85 Billion Raised in 2015 EdSurge, December 21 https://www.edsurge.com/news/2015-12-21-christmas-bonus-us-edtech-sets-record-with-1-85-billion-raised-in-2015


  1. Thanks for another insightful and allusive post on the direction that education seems to be taking.

    Re: “Automation can be helpful in developing some of the foundations of learning, such as basic comprehension or language acquisition.” — As a former second and foreign language teacher who got into elearning through using CD-ROM and online automated tutoring software, and having read as much research as I’ve been able to, I have to disagree with even this concession.

    Adaptive testing and spaced repetition are great to get candidates to memorise vocabulary and grammatical constructs and pass standardised multiple choice tests but those test scores correlate poorly with candidates’ abilities to actually use language spontaneously and productively in the real world, e.g. for studying at college and university or in the workplace. (Even ETS are struggling to justify the lack of correlation between their TOEFL tests and candidates’ competencies in the student world).

    Here’s a quick example of a study examining what happens with automated teaching with very basic levels of foreign language acquisition: Nielson, K. B. (2011). Self-Study with Language Learning Software in the Workplace: What Happens? Language Learning & Technology, 15(3), 110–129. Retrieved from: http://llt.msu.edu/issues/october2011/nielson.pdf

    • Matt, I think that your comment is right. But, on the other hand, several years ago I had the experience of learning to type with an automated, screen-based system (“type a-s-d-f, pause, a-s-d-f…” and so on). And I really learned to type… at the beginning.

      Perhaps there is a time for everything in these low-level competences: some initial automated work, moving later to other learning activities.

      Perhaps the only things that you can learn from a machine are the kind of things that a machine can do itself.

  2. P.S. Re: your recommendations for educators spreading their knowledge to cultivate a better informed educational IT world, Ben Shapiro gave a nice talk at UoT a couple of years ago: http://kmdi.utoronto.ca/kmdi-talk-dr-ben-shapiro/

    Something that I’d also add is that universities putting the words “social constructivism” in their curricula and mission/values statements is insufficient. I’ve had conversations with online tutors at such institutions who have no idea what social constructivist methods even look like let alone use them in their own teaching (And why should they? They’re not given the time or paid to learn them either). In short, institutions need to do more than pay lip-service to alternative learning and teaching approaches.

  3. Ewout ter Haar says:

    “Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. ”

    Skinner, B. F. “Teaching Machines; from the Experimental Study of Learning Come Devices Which Arrange Optimal Conditions for Self Instruction.” Science (New York, N.Y.) 128.3330 (1958): 969–977. Print.

    The article is actually quite interesting.

  4. Tony
    Best for 2016. Thank you for this really insightful and well referenced post that captures many of the things I have been feeling and expresses them better than I ever could. Much appreciated and much food for thought.

  5. Tony

    Thank you for your incisive commentary, which invites all of us in the sector to engage in a meaningful dialogue.

    The future of learning is indeed a focused conversation that we need to have. We need to have it since higher education is in flux for a range of reasons – mainly financial, demographic and structural. As technology continues to develop, its role in this conversation matters, but we need to keep it in perspective.

    Learners are not demanding technology enhanced, automated learning. What they are looking for is affordable, flexible, relevant and appropriate learning when and where they need it. While technology will help with this work, it is not driving the work. We can see this in terms of strong demand for blended and classroom learning coupled with ongoing strong demand for online learning. Most of this online learning is not automated, but involves both peer-to-peer and learner-faculty interaction. What learners want is the combination of well designed learning resources and quality, engaging interactions. The National Survey of Student Engagement shows this very clearly.

    It is interesting that many of the private investors have failed to understand this need for meaningful relationships and genuine engagement. Pearson Corporation – one of the largest corporate players in education world-wide – is struggling. Its reported annual organic revenue growth over recent years is poor: 1 per cent, minus 1 per cent, 1 per cent and 0 per cent. Its sale of the Financial Times and of its interest in The Economist in 2015 was undertaken in part to shore up the balance sheet, but also to provide new capital to sustain its strategy of converting from being the world’s largest publisher to being the world’s largest provider of educational assessment, resources and learning opportunities. Other education players are also finding their bet on educational technology problematic. Shares in Apollo Education Group, which runs the University of Phoenix, have fallen nearly 90 per cent since early 2012. Rupert Murdoch’s company News Corp has sold Amplify, its digital education unit, after investing nearly $1bn and losing money. While investors continue to pour money into the sector, returns on this investment are either low or non-existent.

    Philanthropic players – Gates and Hewlett, for example – are spreading their bets widely. While some are focused on technology enabled learning, others are more about infrastructure and support for quality initiatives focused on relationships.

    Public policy is also shifting away from technology enabled strategies towards strategies which give emphasis to flexibility, access and equity. While there is a growing awareness of how technology can increase access, flexibility and affordability in education, few jurisdictions are rushing to technology as a driver for public policy. Indeed, if anything, many have realized that technology does not mean cost-reduction or increased learning outcomes, better equity or learner effectiveness. “No significant difference” means just that.

    Many have raised red flags about machine intelligence and artificial intelligence – with the alarm bell being rung most loudly by such names as Stephen Hawking, Elon Musk and Bill Gates. Their concern is that we will drift towards what Ray Kurzweil has called “the singularity”, where the power of AI exceeds human capacity for decision making and becomes, de facto, how decisions are made. Extrapolating this to learning, the singularity world would suggest that learners could simply plug themselves in (Kurzweil sees humans physically interfacing with technology through implants) and use an AI network for language, ideation and learning. While many regard this as simply too far-fetched, Hawking, Musk and Gates think that these developments are closer than many of us realize.

    The good news is that learners use the “rule of two feet” – their behaviour tells us how they react to these kinds of developments. The Gartner Hype Cycle is a useful reminder of the difference, over time, between vendor promises and reality. MOOCs have not transformed universities and colleges, and neither has big data. While new developments may do so, it is more likely that learners will look towards relational processes and engagement rather than an Xbox-like solution to their learning needs; at least, that is what it looks like for the next decade or so.

    Technology can aid this work. The wearable simultaneous translation engine showcased at CES in Las Vegas in January 2016 is an aid to interaction; adaptive assessments for learning help learners track their progress and may trigger the need for interaction with a peer or an instructor; new forms of learning design are focused on increasing engagement, not reducing it. Technology is increasingly focused on collaborative and engagement tools and less on automation when it comes to learning.

    So, while we need to be cognizant of developments in machine learning and artificial intelligence, we need to focus our energy on leveraging them as opportunities to increase interaction and engagement, rather than reduce the relational quality of learning.

  6. Thanks for the blog post – enjoyed reading this and it does reflect work we are doing here in Alberta.

    You might find this article from my colleague Phil McRae interesting:

    Also, you may be heartened by Pearson’s struggle and the general lack of return on capital from the investments made in technology-driven educational investments.

    Three years ago I had a long discussion with the author of The Singularity is Near – Ray Kurzweil. While he took the view that implants and devices were how people would know things, I took the view that learning is fundamentally about relationships between individuals and knowledge. It was a lively public debate between us.

  7. Just to keep this lively, there is an interesting review in the current edition of Prospect which speaks to Tony’s theme as well. See http://www.prospectmagazine.co.uk/arts-and-books/are-only-poets-safe-book-review-the-future-of-the-professions-richard-susskind-daniel-susskind-robotisation

  8. A recent study finds that nearly one half of young people fear that their jobs will be automated within the next ten years.

    Nearly half of young people fear jobs will be automated in 10 years – report

    Economists assume that no matter the rate of technological advance, humans will be able to keep up with automation. More specifically, they assume that with education, humans will be able to keep pace with automation.

    Economists also assume that the necessary factors of production are: Land (Material Resources), (Human) Labor, and Capital (Machines).

    It would be more correct to say that these are the factors of distribution (of income), whereas the real factors of production are: Know-how, Matter, and Energy.

    Kenneth E. Boulding calls economic progress based on these factors the KEM saga.

    For more on the KEM saga, see:
    The Evolutionary Philosophy of Kenneth E. Boulding

    There is no a priori economic theory that predicts that humans will always be a necessary factor of production. This is unlike the case in physics, where theory predicts that no one will be able to invent a perpetual motion machine, or a spaceship capable of faster-than-light travel.

    Humans are just one kind of KEM bundle.

    So, if we buy Boulding’s thesis, human labor could be supplanted by some other combination of KEM in the future. This has happened already to one class of employee: the horse.

    For more, see: The Goal of the Future Should Be Full Unemployment

    Given the rapid advance of technology, educators need to radically re-think their approach to automation. The traditional approach is to facilitate transformations of brain matter by presenting students with well crafted learning experiences. A radical new approach will be to help humans create intelligent agents to represent their interests.

    As Artificial General Intelligence emerges, and is able to pass the Turing Test, AGI agents with a comprehensive model of the student’s personality will become mind clones of the student.

    These AGI agents will be able to pass an enhanced form of the Turing Test where the AGI agent can convince the loved ones of the student that they really are the student.

    In fact, some authors believe that mind clones will become a digital version of the person they simulate. They see mind cloning as one path to digital immortality.

    For more on digital immortality, see:
    Review of Virtually Human: the promise – and peril – of digital immortality

    For more on the role educational technology can play, see:
    A Synthesis of the Personal Learning Environment and the Intelligent Agent Development Tool

    To follow the links in this comment, go to my blog post at:
    A Radical Way For Education To Help Humans Keep Pace With Automation
