December 16, 2017

Automation or empowerment: online learning at the crossroads

Image: AppLift, 2015

You are probably, like me, getting tired of the different predictions for 2016. So I’m not going to do my usual look forward for the year for individual developments in online learning. Instead, I want to raise a fundamental question about which direction online learning should be heading in the future, because the next year could turn out to be very significant in determining the future of online learning.

The key question we face is whether online learning should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Of course, the answer will always be a mix of both, but getting the balance right is critical.

An old but increasingly important question

This question, automation or human empowerment, is not new. It was raised by B.F. Skinner (1968) when he developed teaching machines in the 1950s. He thought teaching machines would eventually replace teachers. On the other hand, Seymour Papert (1980) wanted computing to empower learners, not to teach them directly. In the early 1980s Papert got children to write computer code to improve the way they think and to solve problems. Papert was strongly influenced by Jean Piaget’s theory of cognitive development, and in particular the idea that children construct rather than absorb knowledge.

In the 1980s, as personal computers became more common, computer-assisted learning (CAL or CAI) became popular, using computer-marked tests and early forms of adaptive learning. Also in the 1980s the first applications of artificial intelligence appeared, in the form of intelligent math tutoring. Great predictions were made then, as now, about the potential of AI to replace teachers.

Then along came the Internet. Following my first introduction to the Internet in a friend’s basement in Vancouver, I published an article in the first edition of the Journal of Distance Education, entitled ‘Computer-assisted learning or communications: which way for IT in distance education?’ (1986). In this paper I argued that the real value of the Internet and computing was to enable asynchronous interaction and communication between teacher and learners, and between learners themselves, rather than as teaching machines. This push towards a more constructivist approach to the use of computing in education was encapsulated in Mason and Kaye’s book, Mindweave (1989). Linda Harasim has since argued that online collaborative learning is an important theory of learning in its own right (Harasim, 2012).

In the 1990s, David Noble of York University attacked online learning in particular for turning universities into ‘Digital Diploma Mills’:

‘universities are not only undergoing a technological transformation. Beneath that change, and camouflaged by it, lies another: the commercialization of higher education.’

Noble (1998) argued that

‘high technology, at these universities, is often used not to … improve teaching and research, but to replace the visions and voices of less-prestigious faculty with the second-hand and reified product of academic “superstars”.’

However, contrary to Noble’s warnings, for fifteen years most university online courses followed more the route of interaction and communication between teachers and students than computer-assisted learning or video lectures, and Noble’s arguments were easily dismissed or forgotten.

Then along came lecture capture and with it, in 2011, Massive Open Online Courses (xMOOCs) from Coursera, Udacity and edX, driven by elite, highly selective universities, with their claims of making the best professors in the world available to everyone for free. Noble’s nightmare suddenly became very real. At the same time, these MOOCs have resulted in much more interest in big data, learning analytics, a revival of adaptive learning, and claims that artificial intelligence will revolutionize education, since automation is essential for managing such massive courses.

Thus we are now seeing a big swing back to the automation of learning, driven by powerful computing developments, Silicon Valley start-up thinking, and a sustained political push from those that want to commercialize education (more on this later). Underlying these developments is a fundamental conflict of philosophies and pedagogies, with automation being driven by an objectivist/behaviourist view of the world, compared with the constructivist approaches of online collaborative learning.

In other words, there are increasingly stark choices to be made about the future of online learning. Indeed, it is almost too late – I fear the forces of automation are winning – which is why 2016 will be such a pivotal year in this debate.

Automation and the commercialization of education

These developments in technology are being accompanied by a big push in the United States, China, India and other countries towards the commercialization of online learning. In other words, education is being seen increasingly as a commodity that can be bought and sold. This is not through the previous and largely discredited digital diploma mills of the for-profit online universities such as the University of Phoenix that David Noble feared, but rather through the encouragement and support of commercial computer companies moving into the education field, companies such as Coursera, Lynda.com and Udacity.

Audrey Watters and EdSurge both produced lists of EdTech ‘deals’ in 2015 totalling between $1-$2 billion. Yes, that’s right, that’s $1-$2 billion in investment in private ed tech companies in the USA (and China) in one year alone. At the same time, entrepreneurs are struggling to develop sustainable business models for ed tech investment, because with education funded publicly, a ‘true’ market is restricted. Politicians, entrepreneurs and policy makers on the right in the USA increasingly see a move to automation as a way of reducing government expenditure on education, and one means by which to ‘free up the market’.

Another development that threatens the public education model is the move by very rich entrepreneurs such as the Gates, the Hewletts and the Zuckerbergs to move their massive personal wealth into ‘charitable’ foundations or corporations and use this money for their pet ‘educational’ initiatives that also have indirect benefits for their businesses. Ian McGugan (2015) in the Globe and Mail newspaper estimates that the Chan Zuckerberg Initiative is worth potentially $45 billion, and one of its purposes is to promote the personalization of learning (another name hi-jacked by computer scientists; it’s a more human way of describing adaptive learning). Since one way Facebook makes its money is by selling personal data, forgive my suspicions that the Zuckerberg initiative is a not-so-obvious way of collecting data on future high earners. At the same time, the Chan Zuckerberg Initiative enables the Zuckerbergs to avoid paying tax on their profits from Facebook. Instead, then, of paying taxes that could be used to support public education, these immensely rich foundations enable a few entrepreneurs to set the agenda for how computing will be used in education.

Why not?

Technology is disrupting nearly every other business and profession, so why not education? Higher education in particular requires a huge amount of money, mostly raised through taxes and tuition fees, and it is difficult to tie results directly to investment. Surely we should be looking at ways in which technology can change higher education so that it is more accessible, more affordable and more effective in developing the knowledge and skills required in today’s and tomorrow’s society?

Absolutely. It is not so much the need for change that I am challenging, but the means by which this change is being promoted. In essence, a move to automated learning, while saving costs, will not improve the learning that matters, and particularly the outcomes needed in a digital age, namely, the high level intellectual skills of critical thinking, innovation, entrepreneurship, problem-solving, high-level multimedia communication, and above all, effective knowledge management.

To understand why automated approaches to learning are inappropriate to the needs of the 21st century we need to look particularly at the tools and methods being proposed.

The problems with automating learning

The main challenge for computer-directed learning such as information transmission and management through Internet-distributed video lectures, computer-marked assessments, adaptive learning, learning analytics, and artificial intelligence is that they are based on a model of learning that has limited applications. Behaviourism works well in assisting rote memory and basic levels of comprehension, but does not enable or facilitate deep learning, critical thinking and the other skills that are essential for learners in a digital age.

R. and D. Susskind (2015) in particular argue that there is a new age in artificial intelligence and adaptive learning driven primarily by what they call the brute force of more powerful computing. The reason AI failed so dramatically in the 1980s, they argue, was that computer scientists tried to mimic the way humans think, and computers then did not have the capacity to handle information in the way they do now. When, however, we use the power of today’s computing, it can solve previously intractable problems by analysing massive amounts of data in ways that humans had not considered.

There are several problems with this argument. The first is that, while the Susskinds are right that computers operate differently from humans, this difference is exactly why automated approaches fall short. Computers are mechanical and work basically on a binary operating system. Humans are biological and operate in a far more sophisticated way, capable of creating language as well as interpreting it, and of using intuition as well as deductive thinking. Emotion as well as memory drives human behaviour, including learning. Furthermore, humans are social animals, and depend heavily on social contact with other humans for learning. In essence, humans learn differently from the way machine automation operates.

Unfortunately, computer scientists frequently ignore or are unaware of the research into human learning. In particular, they are unaware that learning is largely developmental and constructed, and instead impose an old and less appropriate method of teaching based on behaviourism and an objectivist epistemology. If, though, we want to develop the skills and knowledge needed in a digital age, we need a more constructivist approach to learning.

Supporters of automation also make another mistake in over-estimating or misunderstanding how AI and learning analytics operate in education. These tools reflect a highly objectivist approach to teaching, where procedures can be analysed and systematised in advance. However, although we know a great deal about learning in general, we still know very little about how thinking and decision-making operate biologically in individual cases. At the same time, although brain research is promising to unlock some of these secrets, most brain scientists argue that while we are beginning to understand the relationship between brain activity and very specific forms of behaviour, there is a huge distance to travel before we can explain how these mechanisms affect learning in general or how an individual learns in particular. There are too many variables (such as emotion, memory, perception, communication, as well as neural activity) at play to find an isomorphic fit between the firing of neurons and computer ‘intelligence’.

The danger then with automation is that we drive humans to learn in ways that best suit how machines operate, and thus deny humans the potential of developing the higher levels of thinking that make humans different from machines. For instance, humans are better than machines at dealing with volatile, uncertain, complex and ambiguous situations, which is where we find ourselves in today’s society.

Lastly, both AI and adaptive learning depend on algorithms that predict or direct human behaviour. These algorithms though are not transparent to the end users. To give an example, learning analytics are being used to identify students at high risk of failure, based on correlations of previous behaviour online by previous students. However, for an individual, should a software program be making the decision as to whether that person is suitable for higher education or a particular course? If so, should that person know the grounds on which they are considered unsuitable and be able to challenge the algorithm or at least the principles on which that algorithm is based? Who makes the decision about these algorithms – a computer scientist using correlated data, or an educator concerned with equitable access? The more we try to automate learning, the greater the danger of unintended consequences, and the more need for educators rather than computer scientists to control the decision-making.
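To make the transparency problem concrete, here is a minimal sketch, in Python, of how such an at-risk flag might be computed. Everything in it is hypothetical: the features (logins per week, forum posts, late submissions), the weights standing in for correlations drawn from previous cohorts, and the cut-off; no real analytics product is being described.

import math

# Hypothetical weights standing in for correlations learned from past cohorts
WEIGHTS = {"logins_per_week": -0.4, "forum_posts": -0.3, "late_submissions": 0.9}
BIAS = 0.5
THRESHOLD = 0.7   # above this probability the student is flagged as 'at risk'

def risk_probability(activity):
    # Simple logistic score: higher means more similar to students who previously failed
    z = BIAS + sum(WEIGHTS[k] * activity.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

student = {"logins_per_week": 1, "forum_posts": 0, "late_submissions": 2}
p = risk_probability(student)
print(f"risk = {p:.2f}, flagged = {p > THRESHOLD}")

The student, and often the instructor, never sees the weights or the threshold, which is precisely the accountability question raised above: who sets them, and on what grounds?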

The way forward

In the past, I used to think of computer scientists as colleagues and friends in designing and delivering online learning. I am now increasingly seeing at least some of them as the enemy. This is largely to do with the hubris of Silicon Valley, which believes that computer scientists can solve any problem without knowing anything about the problem itself. MOOCs based on recorded lectures are a perfect example of this, being developed primarily by a few computer scientists from Stanford (and unfortunately blindly copied by many people in universities who should have known better.)

We need to start with the problem, which is how to prepare learners with the knowledge and skills they will need in today’s society. I have argued (Bates, 2015) that we need to develop, in very large numbers of people, high level intellectual and practical skills that require the construction and development of knowledge, and that enable learners to find, analyse, evaluate and apply knowledge appropriately.

This requires a constructivist approach to learning which cannot be appropriately automated, as it depends on high quality interaction between knowledge experts and learners. There are many ways to accomplish this, and technology can play a leading role, by enabling easy access to knowledge, providing opportunities for practice in experientially-based learning environments, linking communities of scholars and learners together, providing open access to unlimited learning resources, and above all by enabling students to use technology to access, organise and demonstrate their knowledge appropriately.

These activities and approaches do not easily lend themselves to massive economies of scale through automation, although they do enable more effective outcomes and possibly some smaller economies of scale. Automation can be helpful in developing some of the foundations of learning, such as basic comprehension or language acquisition. But at the heart of developing the knowledge and skills needed in today’s society, the role of a human teacher, instructor or guide will remain absolutely essential. Certainly, the roles of teachers and instructors will need to change quite dramatically, teacher training and faculty development will be critical for success, and we need to use technology to enable students to take more responsibility for their own learning, but it is a dangerous illusion to believe that automation is the solution to learning in the 21st century.

Protecting the future

There are several practical steps that need to be taken to prevent the automation of teaching.

  1. Educators – and in particular university presidents and senior civil servants with responsibility for education – need to speak out clearly about the dangers of automation, and the technology alternatives available that still exploit its potential and will lead to greater cost-effectiveness. This is not an argument against the use of technology in education, but the need to use it wisely so we get the kind of educated population we need in the 21st century.
  2. Computer scientists need to show more respect to educators and be less arrogant. This means working collaboratively with educators, and treating them as equals.
  3. We – teachers and educational technologists – need to apply in our own work and disseminate better to those outside education what we already know about effective learning and teaching.
  4. Faculty and teachers need to develop compelling technology alternatives to automation that focus on the skills and knowledge needed in a digital age, such as:
    • experiential learning through virtual reality (e.g. Loyalist College’s training of border service agents)
    • networking learners online with working professionals, to solve real world problems (e.g. by developing a program similar to McMaster’s integrated science program for online/blended delivery)
    • building strong communities of practice through connectivist MOOCs (e.g. on climate change or mental health) to solve global problems
    • empowering students to use social media to research and demonstrate their knowledge through multimedia e-portfolios (e.g. UBC’s ETEC 522)
    • designing openly accessible, high-quality, student-activated simulations and games, designed and monitored by experts in the subject area.
  5. Governments need to put as much money into research into learning and educational technology as they do into innovation in industry. Without better and more defensible theories of learning suitable for a digital age, we are open to any quack or opportunist who believes he or she has the best snake oil. More importantly, with better theory and knowledge of learning disseminated and applied appropriately, we can have a much more competitive workforce and a more just society.
  6. We need to educate our politicians about the dangers of commercialization in education through the automation of learning and fight for a more equal society where the financial returns on technology applications are more equally shared.
  7. Become edupunks and take back the web from powerful commercial interests by using open source, low cost, easy to use tools in education that protect our privacy and enable learners and teachers to control how they are used.

That should keep you busy in 2016.

Your views are of course welcome – unless you are a bot.

References

Bates, A. (1986) Computer assisted learning or communications: which way for information technology in distance education? Journal of Distance Education Vol. 1, No. 1

Bates, A. (2015) Teaching in a Digital Age Victoria BC: BCcampus

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Mason, R. and Kaye, A. (eds.) (1989) Mindweave: communication, computers and distance education. Oxford: Pergamon

McGugan, I. (2015) Why the Zuckerberg donation is not a bundle of joy, Globe and Mail, December 2

Noble, D. (1998) Digital Diploma Mills, Monthly Review http://monthlyreview.org/product/digital_diploma_mills/

Papert, S. (1980) Mindstorms: Children, Computers and Powerful Ideas New York: Basic Books

Skinner, B. (1968) The Technology of Teaching New York: Appleton-Century-Crofts

Susskind, R. and Susskind, D. (2015) The Future of the Professions: How Technology will Change the Work of Human Experts Oxford UK: Oxford University Press

Watters, A. (2015) The Business of EdTech, Hack Edu, undated http://2015trends.hackeducation.com/business.html

Winters, M. (2015) Christmas Bonus! US Edtech Sets Record With $1.85 Billion Raised in 2015 EdSurge, December 21 https://www.edsurge.com/news/2015-12-21-christmas-bonus-us-edtech-sets-record-with-1-85-billion-raised-in-2015

More thoughts on artificial intelligence and human learning

Several events have prompted this reflection.

Man shoots computer

I have a new hero and his name is Lucas Hinch. Frustrated with his Dell PC, he took it into the street in Colorado Springs and shot it eight times. The police were summoned and later in court his firearm was confiscated, but Hinch is reported as saying: ‘It was worth it. It was glorious. Angels sung on high.’

Ex Machina

I have just seen a very good new movie, Ex Machina, about Caleb, a carefully selected young programmer at a large Internet search engine company, who is charged with Turing-testing, i.e. evaluating the capabilities, and ultimately the consciousness, of an alluring female robot called Ava. Ava’s artificial intelligence has been developed mainly by structuring search engine data, and she was designed by Nathan, the reclusive CEO of the company.

Now this is a movie, and a very good one, so it has to be entertaining, and in movies anything is possible, but it is worth seeing for the intelligent script and in particular the interactions between Caleb and the robot, and between Caleb and Nathan, where they discuss the nature and the (possible) potential of AI.

Thoughts prompted by these events

1. Dream on, AI enthusiasts

The first thought is how far we still have to go from what is possible in the present to what the expectations of AI are in the future. AI is nowhere close to achieving the kinds of thinking and emotional intelligence demonstrated in Ex Machina. (I am sure there will be someone who will want to correct me on this – go for it.)

Although we are increasingly dependent on computers, they are still frustratingly incapable of doing what seem to humans to be the simplest procedures. Take corporate voice messaging, for instance. Now if I had a gun, and I could find a physical embodiment of telephone companies’ voice messaging, I would take it out into the street and shoot it. The only reason I am calling a company is because I can’t get standard information or a resolution to a matter through the corporate web site. If the best AI can do (and these companies have the money and motivation to have the best) is to provide set answers to a limited number of predetermined questions that rarely if ever address the reason you have called, then we are not just at the pre-human stage of AI development but at the stage of creating primordial bacteria (which is no mean feat in itself).

Have you ever tried Siri? It is pathetically limited (although the voice is very nice). However, if anyone is stupid enough to fall in love with Siri, as in the movie ‘Her’ (one of the most sentimentally awful movies I have ever seen) then they really do deserve whatever comes to them.

2. Current models of AI are just wrong

Since AI tends to be developed by computer scientists, they tend to use models of the brain based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). Ex Machina got it right, though, in suggesting that a completely different kind of hardware (the ‘wetware’ referred to in the film) will be needed that better matches the way human brains actually work. Thus the basis of AI needs to reflect the biological rather than the mechanical foundation of human behaviour.

However, I am not convinced that Nathan’s software solution of modelling human behaviour through the analysis of big data captured through search engines will work either, because despite the wide range of uses of search engines by humans, they still come nowhere near capturing the full range of human behaviour. People do behave differently on the Internet than in other areas of their lives. While hundreds of thousands play violent games or use online pornography, for example, this is not reflected (despite impressions given by the media) in actual behaviour in real world contexts. Most humans have the ability to separate reality from fantasy, and online behaviour is different from behaviour in other contexts.

3. Do we want robots to be like people?

This is the question that really needs to be answered, and my view is that the answer is unequivocally ‘no.’ Several excellent movies, such as 2001: A Space Odyssey as well as Ex Machina, indirectly raise this question, and the answer is always negative for the future of human life. There are aspects of human life that are better done by machines, such as coal mining, booking airline tickets or even housework, but decision-making and ethics, for example, are best left to admittedly imperfect human beings, because decision-making and ethics need to privilege the interests of humans, self-interested as they may be, not those of robots (or, more likely, large corporations).

One reason of course that there is so much interest in AI is that corporations want to reduce the costs of human workers by replacing them with machines. There comes a point though where naked free market interests work against the general human condition. It is no coincidence that the growing gap between the richest 1% and the rest of the world parallels the increased use of automation. The benefits of automation are not shared equally.

When we come to education in particular, the main cost is that of teachers and instructors. But learning is not only a complex activity of which only a relatively minor part can be effectively automated; it is an intensely human activity that benefits enormously from personal relationships and social interaction.

4. What should we use AI for?

We need to treat technology as tools, not autonomous systems. Technology is a means to an end and the end must be determined by human beings. If we take education as an example, technology can be immensely helpful in supporting teachers, learners and learning. It can be used to make teaching more efficient, so long as it does not attempt to replace the relational aspects of teaching and learning. What it should not be used for is to replace the human element in teaching, or even, in the long term, learners themselves.

5. The need to establish rules and guidelines for AI

Although we are already seeing some of the social consequences of an unequal distribution of wealth resulting from the automation of human activities, we have been lucky so far in a sense that AI has proved to be so difficult to extend beyond very simple procedures. We have not yet had to face some of the ethical, social and security issues that will arise if AI becomes more successfully developed. (The first area is likely to be in transportation, with the automation of driving.)

However, as many science fiction writers have predicted, we are probably getting to the point where we now need some controls, rules, guidelines and procedures that will help determine the limits of AI in general, and computer-based learning in particular, in terms of where and how AI-controlled automation should be applied. In education, this means using computers to support teachers in the design and delivery of teaching and learning, in assessment of ‘routine’ and predictable forms of learning, and in indicating students at risk, and possible causes and actions to be taken. In all cases, though, these applications of computing need to be under the direct control of either learners or teachers, or increasingly by both.

What I foresee is something like a Charter of Rights for humans in a world where AI is not only prevalent but also powerful (but then I’m an incorrigible optimist).

In the meantime, go and see Ex Machina, and enjoy it as a very interesting movie, even if some of the assumptions about the future are likely to be wrong – and some horribly right. For some interesting discussion of the morality of Ava, go to: IMDb

References

Rad, C. (2015) Man Shoots Dell Computer 8 Times After Getting Blue Screen of Death IGN, 22 April

A short history of educational technology

Charlton Heston as Moses. Are the tablets of stone an educational technology? (See Selwood, 2014, for a discussion of the possible language of the Ten Commandments)

The first section of my chapter on ‘Understanding Technology in Education’ for my open textbook on Teaching in a Digital Age was a brief introduction to the challenge of choosing technologies in education. This section aims to provide a little historical background. This will not be anything new to most readers of this blog, but remember that the book is not aimed at educational technologists or instructional designers, but at regular classroom teachers, instructors and professors.

Particularly in recent years, technology has changed from being a peripheral factor to becoming more central in all forms of teaching. Nevertheless, arguments about the role of technology in education go back at least 2,500 years.  To understand better the role and influence of technology on teaching, we need a little history, because as always there are lessons to be learned from history. Paul Saettler’s ‘The Evolution of American Educational Technology’ (1990) is one of the most extensive historical accounts, but only goes up to 1989. A lot has happened since then. I’m giving you here the postage stamp version, and a personal one at that.

Technology has always been closely linked with teaching. According to the Bible, Moses used chiseled stone to convey the ten commandments, probably around the 7th century BC. But it may be more helpful to summarise educational technology developments in terms of the main modes of communication.

Oral communication

One of the earliest means of formal teaching was oral – through human speech – although over time, technology has been increasingly used to facilitate or ‘back up’ oral communication. In ancient times, stories, folklore, histories and news were transmitted and maintained through oral communication, making accurate memorization a critical skill, and the oral tradition is still central in many aboriginal cultures. For the ancient Greeks, oratory and speech were the means by which people learned and passed on learning. Homer’s Iliad and Odyssey were recitative poems, intended for public performance. To be learned, they had to be memorized by listening, not by reading, and transmitted by recitation, not by writing.

Nevertheless, by the fifth century B.C., written documents existed in considerable numbers in ancient Greece. If we believe Socrates, education has been on a downward spiral ever since. According to Plato, Socrates caught one of his students (Phaedrus) pretending to recite a speech from memory that in fact he had learned from a written version. Socrates then told Phaedrus the story of how the god Theuth offered the King of Egypt the gift of writing, which would be a ‘recipe for both memory and wisdom’. The king was not impressed. According to the king,

‘it [writing] will implant forgetfulness in their souls; they will cease to exercise memory because they will rely on what is written, creating memory not from within themselves, but by means of external symbols. What you have discovered is a recipe not for memory, but for reminding. And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them many things without teaching them anything, you will make them seem to know much, while for the most part they will know nothing. And as men filled not with wisdom but the conceit of wisdom, they will be a burden to their fellow men.’

Phaedrus, 274c-275, translation adapted from Manguel, 1996

I can just hear some of my former colleagues saying the same thing about social media.

The term ‘lecture’, which comes from the Latin ‘to read’, is believed to originate from professors in medieval times reading from the scrolled manuscripts handwritten by monks (around 1200 AD). Because the process of writing on scrolls was so labour intensive, the library would usually have only one copy, so students were usually forbidden direct access to the manuscripts. Thus scarcity of one technology tends to drive the predominance of other technologies.

Slate boards were in use in India in the 12th century AD, and blackboards/chalkboards came into use in schools around the turn of the 19th century. At the end of World War Two the U.S. Army started using overhead projectors for training, and their use became common for lecturing, until they were largely replaced by electronic projectors and presentation software such as PowerPoint around 1990. This may be the place to point out that most technologies used in education were not developed specifically for education but for other purposes (mainly business).

Although the telephone dates from the late 1870s, the standard telephone system never became a major educational tool, not even in distance education, because of the high cost of analogue telephone calls for multiple users, although audio-conferencing has been used to supplement other media since the 1970s. Video-conferencing using dedicated cable systems and dedicated conferencing rooms has been in use since the 1980s. The development of video compression technology and relatively low cost video servers in the early 2000s led to the introduction of lecture capture systems for recording and streaming classroom lectures in 2008. Webinars are now used largely for delivering lectures over the Internet.

None of these technologies though changes the oral basis of communication for teaching.

Written communication

The role of text or writing in education also has a long history. Even though Socrates is reported to have railed against the use of writing, written forms of communication make analytic, lengthy chains of reasoning and argument much more accessible, reproducible without distortion, and thus more open to analysis and critique than the transient nature of speech. The invention of the printing press in Europe in the 15th century was a truly disruptive technology, making written knowledge much more freely available, very much in the same way as the Internet has done today. As a result of the explosion of written documents resulting from the mechanization of printing, many more people in government and business were required to become literate and analytical, which led to a rapid expansion of formal education in Europe. There were many reasons for the development of the Renaissance and the Enlightenment, and the triumph of reason and science over superstition and beliefs, but the technology of printing was a key agent of change.

Improvements in transport infrastructure in the 19th century, and in particular the creation of a cheap and reliable postal system in the 1840s, led to the development of the first formal correspondence education, with the University of London offering an external degree program by correspondence from 1858. This first formal distance degree program still exists today in the form of the University of London International Program. In the 1970s, the Open University transformed the use of print for teaching through specially designed, highly illustrated printed course units that integrated learning activities with the print medium, based on advanced instructional design.

With the development of web-based learning management systems in the mid-1990s, textual communication, although digitized, became, at least for a brief time, the main communication medium for Internet-based learning, although lecture capture is now changing that.

Broadcasting and video

BBC television studio and radio transmitter, Alexandra Palace, London
Image: © Copyright Oxyman and licensed for reuse under a Creative Commons Licence

The British Broadcasting Corporation (BBC) began broadcasting educational radio programs for schools in the 1920s. The first adult education radio broadcast from the BBC in 1924 was a talk on Insects in Relation to Man, and in the same year, J.C. Stobart, the new Director of Education at the BBC, mused about ‘a broadcasting university’ in the Radio Times (Robinson, 1982). Television was first used in education in the 1960s, for schools and for general adult education (one of the six purposes in the current BBC’s Royal Charter is still ‘promoting education and learning’).

In 1969, the British government established the Open University (OU), which worked in partnership with the BBC to develop university programs open to all, using a combination originally of printed materials specially designed by OU staff, and television and radio programs made by the BBC but integrated with the courses. It should be noted that although the radio programs involved mainly oral communication, the television programs did not use lectures as such, but focused more on the common formats of general television, such as documentaries, demonstration of processes, and cases/case studies (see Bates, 1985). In other words, the BBC focused on the unique ‘affordances’ of television, a topic that will be discussed in much more detail later. Over time, as new technologies such as audio- and video-cassettes were introduced, live broadcasting, especially radio, was cut back for OU programs, although there are still some general educational channels broadcasting around the world (e.g. TVOntario in Canada; PBS, the History Channel, and the Discovery Channel in the USA).

The use of television for education quickly spread around the world. In the 1970s it was seen by some, particularly in international agencies such as the World Bank and UNESCO, as a panacea for education in developing countries, but those hopes quickly faded when the realities of lack of electricity, cost, security of publicly available equipment, climate, resistance from local teachers, and local language and cultural issues became apparent. Satellite broadcasting started to become available in the 1980s, and similar hopes were expressed of delivering ‘university lectures from the world’s leading universities to the world’s starving masses’, but these hopes too quickly faded for similar reasons. However, India, which had launched its own satellite, INSAT, in 1983, used it initially for delivering locally produced educational television programs throughout the country, in several indigenous languages, using Indian-designed receivers and television sets in local community centres as well as schools. India is still using satellites for tele-education in the poorest parts of the country at the time of writing (2014).

In the 1990s the cost of creating and distributing video dropped dramatically due to digital compression and high-speed Internet access. This reduction in the costs of recording and distributing video also led to the development of lecture capture systems. Lecture capture technology allows students to view or review lectures at any time and place with an Internet connection. The Massachusetts Institute of Technology (MIT) started making its recorded lectures available to the public, free of charge, via its OpenCourseWare project, in 2002. YouTube started in 2005 and was bought by Google in 2006; it is increasingly being used for short educational clips that can be downloaded and integrated into online courses. The Khan Academy started using YouTube in 2006 for recorded voice-over lectures using a digital blackboard for equations and illustrations. Apple Inc. in 2007 created iTunes U, which became a portal or site where videos and other digital materials on university teaching could be collected and downloaded free of charge by end users.

Until lecture capture arrived, learning management systems had integrated basic educational design features, but this required instructors to redesign their classroom-based teaching to fit the LMS environment. Lecture capture, on the other hand, required no changes to the standard lecture model, and in a sense reverted to primarily oral communication supported by PowerPoint or even writing on a chalkboard. Thus oral communication remains as strong today in education as ever, but has been incorporated into or accommodated by new technologies.

Computer technologies

Computer-based learning

In essence, programmed learning aims to computerize teaching by structuring information, testing learners’ knowledge, and providing immediate feedback, without human intervention other than in the design of the hardware and software and the selection and loading of content and assessment questions. B.F. Skinner started experimenting with teaching machines that made use of programmed learning in 1954, based on the theory of behaviourism (see Chapter 3, Section 3.2). Skinner’s teaching machines were one of the first forms of computer-based learning. There has been a recent revival of programmed learning approaches as a result of MOOCs, since machine-based testing scales much more easily than human-based assessment.
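By way of illustration only, and not Skinner’s actual machines, the core loop of programmed learning can be sketched in a few lines of Python: present an item, test the response, give immediate feedback, and move on only when the answer is correct. The frames below are invented examples.

frames = [
    {"prompt": "7 x 8 = ?", "answer": "56", "hint": "Think of 7 x 10 minus 7 x 2."},
    {"prompt": "9 x 6 = ?", "answer": "54", "hint": "9 x 6 is one 6 less than 10 x 6."},
]

def run(frames):
    # Present each frame in turn, giving immediate automatic feedback with no human intervention
    for frame in frames:
        while True:
            reply = input(frame["prompt"] + " ").strip()
            if reply == frame["answer"]:
                print("Correct - moving to the next frame.")
                break
            print("Not quite. " + frame["hint"])

if __name__ == "__main__":
    run(frames)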

PLATO was a generalized computer assisted instruction system originally developed at the University of Illinois, and, by the late 1970s, comprised several thousand terminals worldwide on nearly a dozen different networked mainframe computers (Wikipedia). It was in fact a highly successful system, lasting almost 40 years, and incorporated key on-line concepts: forums, message boards, online testing, e-mail, chat rooms, instant messaging, remote screen sharing, and multi-player games.

Attempts to replicate the teaching process through artificial intelligence (AI) began in the mid-1980s, with a focus initially on teaching arithmetic. Despite large investments of research in AI for teaching over the last 30 years, the results generally have been disappointing. It has proved difficult for machines to cope with the extraordinary variety of ways in which students learn (or fail to learn.) Recent developments in cognitive science and neuroscience are being watched closely but at the time of writing the gap is still great between the basic science, and analysing or predicting specific learning behaviours from the science.

More recently we have seen the development of adaptive learning, which analyses learners’ responses then re-directs them to the most appropriate content area, based on their performance. Learning analytics, which also collects data about learner activities and relates them to other data, such as student performance, is a related development. These developments will be discussed in further detail in Section 8.7.
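A minimal sketch, with invented topic names and cut-off scores, of the kind of re-direction rule adaptive learning systems apply: performance on a short diagnostic determines which content area the learner is sent to next. Real systems use far richer models, but the branching principle is the same.

def next_content(score, topic):
    # Route the learner on the basis of quiz performance (score out of 100); thresholds are hypothetical
    if score < 50:
        return "remedial material on " + topic
    if score < 80:
        return "further practice exercises on " + topic
    return "advanced material beyond " + topic

for score in (35, 65, 90):
    print(score, "->", next_content(score, "fractions"))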

Computer networking

Arpanet in the U.S.A. was the first network to adopt the Internet protocol, in 1982. In the late 1970s, Murray Turoff and Roxanne Hiltz at the New Jersey Institute of Technology had been experimenting with blended learning, using NJIT’s internal computer network. They combined classroom teaching with online discussion forums, and termed this ‘computer-mediated communication’ (CMC) (Hiltz and Turoff, 1978). At the University of Guelph in Canada, a software system called CoSy was developed in the 1980s that allowed for online threaded group discussion forums, a predecessor of the forums contained in today’s learning management systems. In 1988, the Open University in the United Kingdom offered a course, DT200, that, as well as the OU’s traditional media of printed texts, television programs and audio-cassettes, also included an online discussion component using CoSy. Since this course had 1,200 registered students, it was one of the earliest ‘mass’ open online courses. We see here the emerging division between the use of computers for automated or programmed learning, and the use of computer networks to enable students and instructors to communicate with each other.

The World Wide Web was formally launched in 1991. The World Wide Web is basically an application running on the Internet that enables ‘end-users’ to create and link documents, videos or other digital media, without the need for the end-user to transcribe everything into some form of computer code. Mosaic, the first widely used graphical web browser, was made available in 1993. Before the Web, loading text and finding material on the Internet required lengthy and time-consuming procedures. Several Internet search engines have been developed since 1993, with Google, created in 1998, emerging as one of the primary search engines.

Online learning environments

In 1995, the Web enabled the development of the first learning management systems (LMSs), such as WebCT (which later became Blackboard). LMSs provide an online teaching environment, where content can be loaded and organized, as well as providing ‘spaces’ for learning objectives, student activities, assignment questions, and discussion forums. The first fully online courses (for credit) started to appear in 1995, some using LMSs, others just loading text as PDFs or slides. The materials were mainly text and graphics. LMSs became the main means by which online learning was offered until lecture capture systems arrived around 2008.

By 2008, George Siemens, Stephen Downes and Dave Cormier in Canada were using web technology to create the first ‘connectivist’ Massive Open Online Course (MOOC), a community of practice that linked webinar presentations and/or blog posts by experts to participants’ blogs and tweets, with just over 2,000 enrollments. The courses were open to anyone and had no formal assessment. In 2011, two Stanford University professors launched a lecture-capture based MOOC on artificial intelligence, attracting more than 100,000 students, and since then MOOCs have expanded rapidly around the world.

Social media

Social media are really a sub-category of computer technology, but their development deserves a section of its own in the history of educational technology. Social media cover a wide range of different technologies, including blogs, wikis, YouTube videos, mobile devices such as phones and tablets, Twitter, Skype and Facebook. Andreas Kaplan and Michael Haenlein (2010) define social media as

a group of Internet-based applications that …allow the creation and exchange of user-generated content, based on interactions among people in which they create, share or exchange information and ideas in virtual communities and networks.

Social media are strongly associated with young people and ‘millennials’ – in other words, many of the students in post-secondary education. At the time of writing, social media are only just being integrated into formal education, and to date their main educational value has been in non-formal education, such as fostering online communities of practice, or around the edges of classroom teaching, such as ‘tweets’ during lectures or ratings of instructors. It will be argued, though, that they have much greater potential for learning.

A paradigm shift

It can be seen that education has adopted and adapted technology over a long period of time. There are some useful lessons to be learned from past developments in the use of technology for education, in particular that many claims made for a newly emerging technology are likely to be neither true nor new. Also new technology rarely completely replaces an older technology. Usually the old technology remains, operating within a more specialised ‘niche’, such as radio, or integrated as part of a richer technology environment, such as video in the Internet.

However, what distinguishes the digital age from all previous ages is the rapid pace of technology development and our immersion in technology-based activities in our daily lives. Thus it is fair to describe the impact of the Internet on education as a paradigm shift, at least in terms of educational technology. We are still in the process of absorbing and applying the implications. The next section attempts to pin down more closely the educational significance of different media and technologies.

Over to you

1. Given the target audience, is this section necessary or useful?

2. Given also the need to be brief, what would you add, change, or leave out?

Next

We start getting to the meat of the chapter in the next section, which examines more closely differences between media and technologies and the concept of educational affordances of technology.

References

Hiltz, R. and Turoff, M. (1978) The Network Nation: Human Communication via Computer Reading MA: Addison-Wesley

Kaplan, A. and Haenlein, M. (2010) Users of the world, unite! The challenges and opportunities of social media, Business Horizons, Vol. 53, No. 1, pp. 59-68

Manguel, A. (1996) A History of Reading London: Harper Collins

Robinson, J. (1982) Broadcasting Over the Air London: BBC

Saettler, P. (1990) The Evolution of American Educational Technology Englewood CO: Libraries Unlimited

Selwood, D. (2014) What does the Rosetta Stone tell us about the Bible? Did Moses read hieroglyphs? The Telegraph, July 15