November 18, 2017

Scary tales of online learning and educational technology

The Centre for Digital Media, Vancouver BC

The Educational Technology Users Group (ETUG) of British Columbia held an appropriately Halloween-themed get-together today, called ‘The Little Workshop of Horrors’, at which participants were encouraged to share tales of failure and horror stories in the use of learning technologies.

This seemed to me a somewhat risky strategy, but it actually worked really well. First, the workshop was held in ‘the Hangar’, a large, covered space in (or rather beside) the Centre for Digital Media, a shared building used by UBC, Simon Fraser University, BCIT and the Emily Carr University of Art and Design. The Centre itself is a good example of collaboration and sharing in developing media-based programs, such as its Master of Digital Media. The Hangar lent itself to a somewhat spooky atmosphere, enhanced by a DJ who often accompanied presenters with ghoulish music.

Audrey’s Monsters

The workshop got off to an excellent start with a brilliant keynote from Audrey Watters on the Monsters of Educational Technology (The link will take you to her book on the subject). She identified a range of monsters (the examples are partly Audrey’s, partly mine):

  • Frankenstein’s monster, which went wrong because its (hir?) master failed to provide it (em?) with love or social company (teaching machines?): in Audrey’s words, ‘a misbegotten creature of a misbegotten science’;
  • vampires that suck the blood of students, e.g. by using their personal data (learning analytics?);
  • zombies, i.e. technologies or ed tech ideas that rise, die, then rise again (e.g. the idea that technology will remove the need for schools, which goes back to the early 1900s);
  • giants that become obsolete and die (Skinner, Merrill);
  • the Blob, which grows bigger and bigger and invades every nook and cranny (MOOCs?);
  • and the dragons: the libertarian, free-market, Silicon Valley types who preach the ‘destruction’ and ‘re-invention’ of education.

Audrey Watters’ larger point is that if we are not careful, educational technology easily turns itself into a monster that drives out all humanity in the teaching and learning process. We need to be on constant watch, and, whenever we can, we need to take control away from large technology corporations whose ultimate purpose is not educational.

Not only was it a great, on-topic presentation, but it was also a pleasure at last to meet Audrey in person, as I am a huge fan of her blog.

He was a monster, not because he was a machine, but because he wasn't loved

Confessions

Then came the confessional, at which a series of speakers confessed their sins – or rather, classic failures – in educational technology, often in very funny ways. What was interesting about most of the tales, though, was that although there was a disaster, in most cases a lot of good came out of it. (As one speaker said, ‘Success is failing many times without losing your optimism’; or, ‘A sailor gets to know the sea only after he has waded ashore.’)

One presenter reported going to a university to ‘sell’ Blackboard, but was so nervous that her presentation fell flat and the university ended up going with Canvas (you see what I mean about some good coming out of these disasters!). Another described how, over 20 years, she has been trying to move faculty towards more interactive and engaging technology than learning management systems, yet here she is, still spending most of her time supporting faculty using an LMS.

One talked about spending years trying to promote IMS-based learning objects, only to find that Google’s search engine made meta-data identification redundant. Revealingly, he felt he knew at the time that the meta-data approach to learning objects was too complex to work, but he had to do it because that was the only way he could get funding. More than one speaker noted that Canada in the past has spent millions of dollars on programs that focused heavily on software solutions (anyone remember EduSource?) but almost nothing on evaluating the educational applications of technology or on research on new or even old pedagogies.

Another spoke about the demise of the Technical University of British Columbia, a new university purpose-built around an “integrated learning” approach that combined heavy use of online learning with mixed face-to-face course structures – in 1999. However, by 2002 it had only about 800 FTEs, and a new incoming provincial government, desperate to save money and eager to diminish the previous government’s legacy, closed the university and transferred the students (but not the programs) to Simon Fraser University. Nevertheless, the legacy did live on, with many of the learning technology staff later moving into senior positions within the Canadian higher education system.

I see instructional designers, educational technologists or learning ecology consultants (which was a new title for me) as the Marine Corps of the educational world. They have seen many battles and have (mostly) survived. They have even learned how to occasionally win battles. That’s the kind of wisdom of which academic leaders and faculty and instructors should make much better use.

One participant had such a bad ed tech experience at Simon Fraser University that she thinks of it as ‘the haunted house on the hill.’

Happy Halloween, everyone!

Online learning for beginners: 5. When should I use online learning?

Knowledge-based industries include entertainment, such as video games design

Most subject disciplines now require students to know how technology influences their field of study

This is the fifth of a series of a dozen blog posts aimed at those new to online learning or thinking of possibly doing it. The other four are:

This question, ‘When should I use online learning?’, is difficult to answer in a short post because there are many possible reasons and, as always in education, the answer depends heavily on the specific context in which you are working. The reasons can, however, be classified under three main headings: academic, market, and policy/administrative.

Academic reasons

These boil down to relevancy and the changing nature of knowledge in a digital age.

Curriculum requirements

Technology is affecting the content of curriculum in nearly all subject disciplines. It is increasingly difficult to think of an academic area that is not undergoing profound changes as a result of information and communications technologies (ICTs). For instance, any business program now needs to look at the impact of social media and the Internet on marketing and on the delivery of goods. How are ICTs going to change financial investments and advising? In science and engineering, to what extent would animation, simulations or the use of virtual reality enable better understanding of three-dimensional phenomena, equations or formulae? In humanities and fine arts, to what extent are ICTs changing the way we express ourselves? How do we ensure our students are digitally literate and responsible? How do we prepare our students for a world controlled by massive technology companies who track our every movement and expression? It is difficult to think how these issues can be addressed without students themselves going online to study such issues.

Skills development

Also, the skills that our students will need to develop in a digital age will often best be achieved through the use of ICTs. In Chapter 1.2 of Teaching in a Digital Age, I give more detailed examples of such skills. Many of these skills are not only best developed by, but may not even be possible without, students spending an extensive period studying online.

However, I want to focus on two ‘core’ 21st century skills: independent learning and knowledge management. In a knowledge-based society, students will need to go on learning throughout life and outside the formal academic curriculum. Jobs are constantly changing as the knowledge base changes, and even our social lives are increasingly dominated by technological change. Independent learning – or self-learning – is a skill that itself can be taught. Online learning in particular requires self-discipline and independent learning, because the instructor is often not physically ‘there’. Thus gradually introducing learners to online learning can help build their independent learning skills.

Perhaps the overarching ‘21st century skill’, though, is knowledge management: how to find, analyse, evaluate, apply and communicate knowledge, especially when so much of this knowledge is located on the Internet and constantly changing. Students therefore need many opportunities to practice such skills, and online learning often provides a cost-effective means of doing so.

Whether we like it or not, an understanding and management of the use of ICTs is becoming critical in almost any subject area. Students will need to go online to study such phenomena, and to practice core 21st century skills. To do this students will need to spend much more time than at present studying online. (Again, though, we need to ensure that the balance between online and face-to-face time is also properly managed.)

Market reasons

Not only is knowledge undergoing rapid change, so are demographics. In most economically advanced societies, the population is aging. Over time, this will mean fewer younger students coming straight from high school, and more lifelong learners, perhaps already with post-secondary qualifications, but wanting to upgrade or move to a new profession or job and hence needing new knowledge and skills.

Also, with mass education, our students are increasingly diverse, in culture, languages and prior knowledge. One size of teaching does not fit all. We need ways then to individualise our programs. In particular, there are many pedagogical problems with very large lecture classes. They do not meet the needs of an increasingly diverse student population. Online learning is one way to allow students to work at different speeds, and to individualise the learning with online options enabling some choice in topics or level of study.

The changing population base offers opportunities as well as challenges. For instance, your area of research may be too specialised to offer a whole course or program within your current catchment area, but by going online you can attract enough students nationally or globally to make the effort worthwhile. These will be new students bringing in extra tuition revenues that can cover the full costs of an online masters degree, for instance. At the same time, online learning will enable critically important areas of academic development to reach a wider audience, helping create new labour markets and expand new areas of research.

Policy/administrative

We all know the situation where a President or Vice Chancellor has gone to a conference and come back ‘converted’. Suddenly the whole ship is expected to make an abrupt right turn and head off in a new direction. Unfortunately, online learning often leads to enthusiastic converts. MOOCs are a classic example of how a few elite universities suddenly got the attention of university leaders, who all charged off in the same direction.

Nevertheless, there can also be good policy reasons for institutional leadership wanting to move more to blended or flexible learning, for instance. One is to improve the quality of teaching and learning (breaking up large lecture classes is one example); another reason is to expand the reach of the university or college beyond its traditional base, for demographic and economic reasons; a third is to provide more flexibility for full-time students who are often working up to 15 hours a week to pay for their studies.

These policy shifts then provide an excellent opportunity to meet some of the academic rationales mentioned earlier. It is much easier to move into online learning if there is institutional support for it. This will often include extra money for release time for faculty to develop online courses, extra support in the way of instructional and media design, and even better chances of promotion or tenure.

Implications

  1. It can be seen that while market and policy reasons may be forcing you towards online learning, there are also excellent and valid academic reasons for moving in this direction.
  2. However, the extent to which online learning is a solution will depend very much on the particular context in which it will be used. It is essential that you think through carefully where it best fits within your own teaching context: blended learning for undergraduate students; masters programs for working professionals; skills development for applied learning; or all of these?
  3. Online learning is not going to go away. It will play a larger role in teaching in even the most campus-based institutions. Most of all, your students can benefit immensely from online learning, but only if it is done well.

Follow-up

Chapter 1, Fundamental Change in Education, of Teaching in a Digital Age is basically a broader rationale for the use of online learning.

Chapters 3 and 4 look at ways to individualise learning; see in particular:

Up next

‘How do I start?’

Your turn

If you have comments, questions or plain disagree, please use the comment box below.


Technology and alienation: symptoms, causes and a framework for discussion

Edvard Munch’s The Scream (public domain)
Location: National Gallery, Norway

This is the second post on the topic of technology, alienation and the role of education, with a particular focus on the consequences for teaching and learning. The first post was a general introduction to the topic. This post focuses on how technology can lead to alienation, and provides a framework for discussing the possibility of technology alienation in online learning and how to deal with it.

What do I mean by ‘alienation’?

Alienation is a term that has been around for some time. Karl Marx described alienation as the perception by people that they are becoming increasingly unable to control the social forces that shape their lives. Ultimately, highly alienated workers come to lose the sense that they can control any aspect of their lives, whether at work or at home, and become highly self-estranged. Such people are profoundly discontent, prone to alcohol and drug abuse, mental illness, violence, and the support of extreme social and political movements (Macionis and Plummer, 2012). Although Marx had an industrial society in mind, the definition works equally well to describe some of the negative effects of a digital society, as we shall see.

Causes

There are of course many different but related causes of alienation today:

  • the increasing inequality in wealth, and in particular the perception by unemployed or low-paid workers that they are being ‘passed by’ or not included in the wealth-generating economy. The feeling is particularly strong among workers who previously had well-paid jobs (or expectations of well-paid jobs) in manufacturing but have seen those jobs disappear in their lifetime. However, there are now also growing numbers of well-educated younger people struggling to find well-paid work while at the same time carrying a large debt as a result of increasingly expensive higher education;
  • one reason for the loss of manufacturing jobs is the effect of globalization: jobs going abroad to countries where the cost of labour is lower;
  • dysfunctional political systems are another factor, where people feel that they have little or no control over decisions made by government, that government is controlled by those with power and money, and political power is used to protect the ‘elites’;
  • lastly, and the main consideration in these posts, the role of technology, which operates in a number of ways that create alienation:
    • the most immediate is its role in replacing workers, originally in manufacturing, but now increasingly in service or even professional areas of work, including education;
    • a more subtle but nevertheless very powerful way in which technology leads to alienation is in controlling what we do, and in particular removing choice or decision-making from individuals. I will give some examples later;
    • lastly, many people are feeling increasingly exploited by technology companies collecting personal data and using it for commercial purposes or even to deny services such as insurance; in particular, the benefits to the end-user of technology seem very small compared to the large profits made by the companies that provide the services.

Symptoms

Here are some examples of how technology leads to alienation.

There have been several cases where intimate images of people have been posted on the Internet without permission, and yet it has been impossible for the victims to get the images removed, at least until well after the damage has been done. The Erin Andrews case is the most recent, and the suicide of the 15-year-old Amanda Todd is another example. These are extreme cases, but they illustrate the perception that we have less and less control over social media and its potentially negative impact on our lives.

Sometimes the alienation comes from decisions made by engineers that pre-empt or deny human decision-making. I have always driven BMWs. Even when I had little money, I would buy a second-hand BMW, mainly because of its superb engineering. However, I am driven crazy by my latest purchase. The ignition switches off automatically when I stop the car, and switches on again when I take my foot off the brake. One day I drove into my garage, stopped the car, and turned round to get something off the back seat. I took my foot off the brake and the car lurched forward and hit the freezer we have in the garage. If I had done that on the street, I could well have hit another car or even a pedestrian. The car also automatically locks the passenger doors. I have parked the car and started to walk away, only to see my passengers pounding on the window to get out. I could cite nearly a hundred instances from this one car of decisions made by engineers that I don’t want made for me. In most cases (but not all) these default conditions can be changed, but that requires going through a 600-page printed manual. Furthermore, these ‘features’ all cost money to install, money I would rather not pay if I had a choice.

We are just starting to see similar decisions by engineers creeping into online learning. One of the most popular uses of data analytics is to identify students ‘at risk’ of non-completion. As with the features in a car, there are potential benefits in this. However, the danger is that decisions based on correlations between previous students’ behaviour and course completion may end up denying access to a program for a student considered ‘at risk’ who might nevertheless succeed. In particular it could negatively profile black students in the USA, aboriginal students in Canada, or students from low income families.
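The structural problem with such a correlational flag can be sketched in a few lines of code. Everything here – the login counts, the ‘similarity’ band, the 50% threshold – is invented for illustration; real learning analytics systems are far more elaborate, but the underlying logic is the same: the rule sees only what past students did, not what this student might do.

```python
# Hypothetical sketch of a correlational 'at risk' flag.
# All data, bands and thresholds are invented for illustration.

# Historical records: (logins_per_week, completed_course)
history = [
    (1, False), (1, False), (2, False), (2, False),
    (3, True), (8, True), (9, True), (10, True),
]

def at_risk(logins_per_week, records, threshold=0.5):
    """Flag a student if 'similar' past students (within one login
    per week of this student) mostly failed to complete."""
    similar = [done for logins, done in records
               if abs(logins - logins_per_week) <= 1]
    if not similar:
        return False  # no comparable history: no basis for a flag
    return sum(similar) / len(similar) < threshold

# A student who logs in rarely but studies effectively offline is
# flagged anyway: the rule sees only the correlation, not the person.
print(at_risk(2, history))   # True
print(at_risk(9, history))   # False
```

The flag is not ‘wrong’ about the historical averages; it is wrong about the individual whenever the individual differs from the average, which is precisely the profiling risk described above.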

A framework for discussion

I am dealing here with a highly emotive issue, and one where there will be many different and often contradictory perspectives. Let’s start with the ‘moral’ or ‘value’ issues. I start from the position that alienation is to be avoided if at all possible. It leads to destructive forces. In education in particular, alienation is the opposite of engagement, and for me, engagement is critical for student success. On the other hand, if people are really suffering, then alienation may well be a necessary starting point on the road to change or revolution. So it is difficult to adopt an objective stance to this topic. I want therefore to focus the discussion around the following issues:

  • what are the main developments in online learning that are occurring or will occur over the next few years?
  • who are the main drivers of change in this area?
  • what is the main value proposition? Why is this area being promoted? Who stands to benefit most from this development?
  • what are the risks or what is the downside of these developments? In particular, what is the risk that such developments may actually increase alienation in learners?

Within this framework, I will look in the next series of blog posts at each of the following developments in online learning – developments that have great promise but could, if not carefully managed, end up increasing alienation:

  • competency-based learning;
  • personalised and adaptive learning;
  • learning analytics;
  • online assessment methods (badges, machine marking, e-proctoring, e-portfolios, etc.);
  • unbundling of educational services.

I will then end this series of posts with a discussion of ‘defensive’ strategies for learners and educators to deal with the negative impact of technology in a digital age.

References

Macionis, J. and Plummer, K. (2012) Sociology: A Global Introduction Don Mills ON: Pearson Education

Automation or empowerment: online learning at the crossroads

Image: AppLift, 2015

You are probably, like me, getting tired of the different predictions for 2016. So I’m not going to do my usual look forward for the year for individual developments in online learning. Instead, I want to raise a fundamental question about which direction online learning should be heading in the future, because the next year could turn out to be very significant in determining the future of online learning.

The key question we face is whether online learning should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Of course, the answer will always be a mix of both, but getting the balance right is critical.

An old but increasingly important question

This question, automation or human empowerment, is not new. It was raised by B.F. Skinner (1968) when he developed teaching machines in the early 1960s. He thought teaching machines would eventually replace teachers. On the other hand, Seymour Papert (1980) wanted computing to empower learners, not to teach them directly. In the early 1980s Papert got children to write computer code to improve the way they think and to solve problems. Papert was strongly influenced by Jean Piaget’s theory of cognitive development, and in particular that children constructed rather than absorbed knowledge.

In the 1980s, as personal computers became more common, computer-assisted learning (CAL) became popular, using computer-marked tests and early forms of adaptive learning. Also in the 1980s the first developments in artificial intelligence were applied, in the form of intelligent math tutoring. Great predictions were made then, as now, about the potential of AI to replace teachers.

Then along came the Internet. Following my first introduction to the Internet in a friend’s basement in Vancouver, I published an article in the first edition of the Journal of Distance Education, entitled ‘Computer-assisted learning or communications: which way for IT in distance education?’ (1986). In this paper I argued that the real value of the Internet and computing was to enable asynchronous interaction and communication between teacher and learners, and between learners themselves, rather than as teaching machines. This push towards a more constructivist approach to the use of computing in education was encapsulated in Mason and Kaye’s book, Mindweave (1989). Linda Harasim has since argued that online collaborative learning is an important theory of learning in its own right (Harasim, 2012).

In the 1990s, David Noble of York University attacked online learning in particular for turning universities into ‘Digital Diploma Mills’:

‘universities are not only undergoing a technological transformation. Beneath that change, and camouflaged by it, lies another: the commercialization of higher education.’

Noble (1998) argued that

‘high technology, at these universities, is often used not to … improve teaching and research, but to replace the visions and voices of less-prestigious faculty with the second-hand and reified product of academic “superstars”.’

However, contrary to Noble’s warnings, for fifteen years most university online courses followed the route of interaction and communication between teachers and students rather than that of computer-assisted learning or video lectures, and Noble’s arguments were easily dismissed or forgotten.

Then along came lecture capture and with it, in 2011, Massive Open Online Courses (xMOOCs) from Coursera, Udacity and edX, driven by elite, highly selective universities, with their claims of making the best professors in the world available to everyone for free. Noble’s nightmare suddenly became very real. At the same time, these MOOCs have resulted in much more interest in big data, learning analytics, a revival of adaptive learning, and claims that artificial intelligence will revolutionize education, since automation is essential for managing such massive courses.

Thus we are now seeing a big swing back to the automation of learning, driven by powerful computing developments, Silicon Valley start-up thinking, and a sustained political push from those that want to commercialize education (more on this later). Underlying these developments is a fundamental conflict of philosophies and pedagogies, with automation being driven by an objectivist/behaviourist view of the world, compared with the constructivist approaches of online collaborative learning.

In other words, there are increasingly stark choices to be made about the future of online learning. Indeed, it is almost too late – I fear the forces of automation are winning – which is why 2016 will be such a pivotal year in this debate.

Automation and the commercialization of education

These developments in technology are being accompanied by a big push in the United States, China, India and other countries towards the commercialization of online learning. In other words, education is being seen increasingly as a commodity that can be bought and sold. This is not through the previous and largely discredited digital diploma mills of the for-profit online universities such as the University of Phoenix that David Noble feared, but rather through the encouragement and support of commercial computer companies moving into the education field, companies such as Coursera, Lynda.com and Udacity.

Audrey Watters and EdSurge both produced lists of EdTech ‘deals’ in 2015 totalling between $1 billion and $2 billion. Yes, that’s right: $1–$2 billion of investment in private ed tech companies in the USA (and China) in one year alone. At the same time, entrepreneurs are struggling to develop sustainable business models for ed tech investment, because with education funded publicly, a ‘true’ market is restricted. Politicians, entrepreneurs and policy makers on the right in the USA increasingly see a move to automation as a way of reducing government expenditure on education, and one means by which to ‘free up the market’.

Another development that threatens the public education model is the move by very rich entrepreneurs such as the Gates, the Hewletts and the Zuckerbergs to move their massive personal wealth into ‘charitable’ foundations or corporations and use this money for pet ‘educational’ initiatives that also have indirect benefits for their businesses. Ian McGugan (2015) in the Globe and Mail newspaper estimates that the Chan Zuckerberg Initiative is worth potentially $45 billion, and one of its purposes is to promote the personalization of learning (another name hijacked by computer scientists; it’s a more human way of describing adaptive learning). Since one way Facebook makes its money is by selling personal data, forgive my suspicion that the Zuckerberg initiative is a not-so-obvious way of collecting data on future high earners. At the same time, the Chan Zuckerberg Initiative enables the Zuckerbergs to avoid paying tax on their profits from Facebook. Instead, then, of paying taxes that could be used to support public education, these immensely rich foundations enable a few entrepreneurs to set the agenda for how computing will be used in education.

Why not?

Technology is disrupting nearly every other business and profession, so why not education? Higher education in particular requires a huge amount of money, mostly raised through taxes and tuition fees, and it is difficult to tie results directly to investment. Surely we should be looking at ways in which technology can change higher education so that it is more accessible, more affordable and more effective in developing the knowledge and skills required in today’s and tomorrow’s society?

Absolutely. It is not so much the need for change that I am challenging as the means by which this change is being promoted. In essence, a move to automated learning, while saving costs, will not improve the learning that matters, and particularly the outcomes needed in a digital age: the high-level intellectual skills of critical thinking, innovation, entrepreneurship, problem-solving, high-level multimedia communication, and above all, effective knowledge management.

To understand why automated approaches to learning are inappropriate to the needs of the 21st century we need to look particularly at the tools and methods being proposed.

The problems with automating learning

The main challenge for computer-directed learning – information transmission and management through Internet-distributed video lectures, computer-marked assessments, adaptive learning, learning analytics, and artificial intelligence – is that it is based on a model of learning with limited applications. Behaviourism works well in assisting rote memory and basic levels of comprehension, but does not enable or facilitate deep learning, critical thinking and the other skills that are essential for learners in a digital age.

R. and D. Susskind (2015) in particular argue that there is a new age in artificial intelligence and adaptive learning, driven primarily by what they call the brute force of more powerful computing. AI failed so dramatically in the 1980s, they argue, because computer scientists tried to mimic the way that humans think, and computers then did not have the capacity to handle information in the way they do now. When, however, we use the power of today’s computing, it can solve previously intractable problems through analysis of massive amounts of data in ways that humans had not considered.

There are several problems with this argument. The first is that the Susskinds are correct: computers do operate differently from humans. Computers are mechanical and work basically on a binary operating system. Humans are biological and operate in a far more sophisticated way, capable of language creation as well as language interpretation, and using intuition as well as deductive thinking. Emotion as well as memory drives human behaviour, including learning. Furthermore, humans are social animals, and depend heavily on social contact with other humans for learning. In essence, humans learn differently from the way machine automation operates.

Unfortunately, computer scientists frequently ignore or are unaware of the research into human learning. In particular they are unaware that learning is largely developmental and constructed, and instead impose an old and less appropriate method of teaching based on behaviourism and an objectivist epistemology. If though we want to develop the skills and knowledge needed in a digital age, we need a more constructivist approach to learning.

Supporters of automation also make another mistake in over-estimating or misunderstanding how AI and learning analytics operate in education. These tools reflect a highly objectivist approach to teaching, where procedures can be analysed and systematised in advance. However, although we know a great deal about learning in general, we still know very little about how thinking and decision-making operate biologically in individual cases. At the same time, although brain research is promising to unlock some of these secrets, most brain scientists argue that while we are beginning to understand the relationship between brain activity and very specific forms of behaviour, there is a huge distance to travel before we can explain how these mechanisms affect learning in general or how an individual learns in particular. There are too many variables (such as emotion, memory, perception, communication, as well as neural activity) at play to find an isomorphic fit between the firing of neurons and computer ‘intelligence’.

The danger then with automation is that we drive humans to learn in ways that best suit how machines operate, and thus deny humans the potential of developing the higher levels of thinking that make humans different from machines. For instance, humans are better than machines at dealing with volatile, uncertain, complex and ambiguous situations, which is where we find ourselves in today’s society.

Lastly, both AI and adaptive learning depend on algorithms that predict or direct human behaviour. These algorithms, though, are not transparent to the end users. For example, learning analytics are being used to identify students at high risk of failure, based on correlations with the previous online behaviour of earlier students. But should a software program be making the decision as to whether an individual is suitable for higher education or for a particular course? If so, should that person know the grounds on which they were judged unsuitable, and be able to challenge the algorithm, or at least the principles on which it is based? Who makes the decisions about these algorithms: a computer scientist using correlated data, or an educator concerned with equitable access? The more we try to automate learning, the greater the danger of unintended consequences, and the greater the need for educators rather than computer scientists to control the decision-making.
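The transparency problem can be made concrete with a deliberately simplified sketch. The feature names, weights and threshold below are all invented for illustration; real learning analytics systems fit such coefficients from correlations in previous students’ activity data, and it is exactly these fitted numbers, invisible to the student, that decide who gets flagged.

```python
import math

# Hypothetical coefficients, standing in for values a learning-analytics
# system would fit from the online behaviour of previous cohorts.
WEIGHTS = {"logins_per_week": -0.40, "forum_posts": -0.25, "late_submissions": 0.90}
BIAS = 1.0            # baseline risk (invented for illustration)
RISK_THRESHOLD = 0.5  # cut-off chosen by... whom? That is the governance question.

def risk_of_failure(activity: dict) -> float:
    """Logistic score: a probability-like estimate that the student fails."""
    z = BIAS + sum(WEIGHTS[k] * activity[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(activity: dict) -> bool:
    """The only output most stakeholders ever see: a yes/no flag."""
    return risk_of_failure(activity) >= RISK_THRESHOLD

# A student who logs in rarely and submits late is flagged, with no
# explanation of which feature drove the decision or why.
quiet_student = {"logins_per_week": 1, "forum_posts": 0, "late_submissions": 2}
print(flag_at_risk(quiet_student))  # True for these invented weights
```

Even in this toy version the flag compresses everything into a single yes/no; the questions an educator would ask (which factor mattered, whether the correlation is fair to this particular student, who chose the threshold) are buried in the weights.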

The way forward

In the past, I used to think of computer scientists as colleagues and friends in designing and delivering online learning. I now increasingly see at least some of them as the enemy. This is largely to do with the hubris of Silicon Valley, which believes that computer scientists can solve any problem without knowing anything about the problem itself. MOOCs based on recorded lectures are a perfect example, developed primarily by a few computer scientists from Stanford (and unfortunately blindly copied by many people in universities who should have known better).

We need to start with the problem: how do we prepare learners with the knowledge and skills they will need in today’s society? I have argued (Bates, 2015) that we need to develop, in very large numbers of people, high-level intellectual and practical skills that require the construction and development of knowledge, and that enable learners to find, analyse, evaluate and apply knowledge appropriately.

This requires a constructivist approach to learning which cannot be appropriately automated, as it depends on high quality interaction between knowledge experts and learners. There are many ways to accomplish this, and technology can play a leading role, by enabling easy access to knowledge, providing opportunities for practice in experientially-based learning environments, linking communities of scholars and learners together, providing open access to unlimited learning resources, and above all by enabling students to use technology to access, organise and demonstrate their knowledge appropriately.

These activities and approaches do not easily lend themselves to massive economies of scale through automation, although they do enable more effective outcomes and possibly some smaller economies of scale. Automation can be helpful in developing some of the foundations of learning, such as basic comprehension or language acquisition. But at the heart of developing the knowledge and skills needed in today’s society, the role of a human teacher, instructor or guide will remain absolutely essential. Certainly, the roles of teachers and instructors will need to change quite dramatically, teacher training and faculty development will be critical for success, and we need to use technology to enable students to take more responsibility for their own learning, but it is a dangerous illusion to believe that automation is the solution to learning in the 21st century.

Protecting the future

There are several practical steps that need to be taken to prevent the automation of teaching.

  1. Educators – and in particular university presidents and senior civil servants with responsibility for education – need to speak out clearly about the dangers of automation, and about the technology alternatives that still exploit its potential and can lead to greater cost-effectiveness. This is not an argument against the use of technology in education, but for using it wisely, so that we get the kind of educated population we need in the 21st century.
  2. Computer scientists need to show more respect to educators and be less arrogant. This means working collaboratively with educators, and treating them as equals.
  3. We – teachers and educational technologists – need to apply in our own work and disseminate better to those outside education what we already know about effective learning and teaching.
  4. Faculty and teachers need to develop compelling technology alternatives to automation that focus on the skills and knowledge needed in a digital age, such as:
    • experiential learning through virtual reality (e.g. Loyalist College’s training of border service agents)
    • networking learners online with working professionals, to solve real world problems (e.g. by developing a program similar to McMaster’s integrated science program for online/blended delivery)
    • building strong communities of practice through connectivist MOOCs (e.g. on climate change or mental health) to solve global problems
    • empowering students to use social media to research and demonstrate their knowledge through multimedia e-portfolios (e.g. UBC’s ETEC 522)
    • designing openly accessible, high-quality, student-activated simulations and games, designed and monitored by experts in the subject area.
  5. Governments need to put as much money into research into learning and educational technology as they do into innovation in industry. Without better and more defensible theories of learning suitable for a digital age, we are open to any quack or opportunist who believes he or she has the best snake oil. More importantly, with better theory and knowledge of learning disseminated and applied appropriately, we can have a much more competitive workforce and a more just society.
  6. We need to educate our politicians about the dangers of commercialization in education through the automation of learning and fight for a more equal society where the financial returns on technology applications are more equally shared.
  7. We need to become edupunks and take back the web from powerful commercial interests, by using open-source, low-cost, easy-to-use tools in education that protect our privacy and enable learners and teachers to control how they are used.

That should keep you busy in 2016.

Your views are of course welcome – unless you are a bot.

References

Bates, A. (1986) Computer assisted learning or communications: which way for information technology in distance education? Journal of Distance Education Vol. 1, No. 1

Bates, A. (2015) Teaching in a Digital Age Victoria BC: BCcampus

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Mason, R. and Kaye, A. (Eds.) (1989) Mindweave: communication, computers and distance education Oxford: Pergamon

McGugan, I. (2015) Why the Zuckerberg donation is not a bundle of joy, Globe and Mail, December 2

Noble, D. (1998) Digital Diploma Mills, Monthly Review http://monthlyreview.org/product/digital_diploma_mills/

Papert, S. (1980) Mindstorms: Children, Computers and Powerful Ideas New York: Basic Books

Skinner, B. (1968) The Technology of Teaching New York: Appleton-Century-Crofts

Susskind, R. and Susskind, D. (2015) The Future of the Professions: How Technology will Change the Work of Human Experts Oxford UK: Oxford University Press

Watters, A. (2015) The Business of EdTech, Hack Edu, undated http://2015trends.hackeducation.com/business.html

Winters, M. (2015) Christmas Bonus! US Edtech Sets Record With $1.85 Billion Raised in 2015 EdSurge, December 21 https://www.edsurge.com/news/2015-12-21-christmas-bonus-us-edtech-sets-record-with-1-85-billion-raised-in-2015

That was the year, that was: main trends in 2015

Image: http://goodbye2015welcome2016.com/

Well, here we are at the end of another year. Doesn’t time fly! So here is my look back on 2015. I’ll do this in three separate posts. This one focuses on what I saw as the main trends in online learning in 2015.

Gradual disengagement

It was in April 2014 that I decided to stop (nearly) all professional activities in order to complete my book, Teaching in a Digital Age, which came out in April this year. A year and eight months later, though, I haven’t stopped completely, as you will see. However, most of my activities this year were related to the publication of, or follow-up from, the book. As a result I have considerably reduced my professional activities, and this reduction will continue into 2016. Because I was less engaged this year with other institutions, I don’t have a good grip on everything that happened in the world of online learning during 2015. For a thorough review, see Audrey Watters’ excellent Top Ed-Tech Trends of 2015.

Nevertheless, I’m not dead yet: I have been doing some work with universities (see next post), and I have been following the literature and talking to colleagues, so here’s what I took away from 2015.

1. The move to hybrid learning

This is clearly the biggest and most significant development of 2015. More and more faculty are now almost routinely integrating online learning into their campus-based classes. The most common way this is being done (apart from using an LMS to support classroom teaching) still remains ‘flipped’ classrooms, where students watch a lecture online then come to class for discussion.

There are lots of problems with this approach, in particular the failure to make better pedagogical use of video and the failure of many students to view the lecture before coming to class, but for many faculty it is an obvious and important first step towards blended learning, and more importantly it has the potential for more active engagement from learners.

As instructors gain more experience with this, though, they start looking for better ways to combine the video and classroom experiences. The big challenge then becomes how best to use students’ time on campus, which is by no means always obvious. The predominant model of hybrid learning is still the (recorded) lecture model, adapted somewhat to allow for more discussion in large classes.

In most flipped classroom teaching, the initiative tends to come from the individual instructor, but some institutions, such as the University of British Columbia and the University of Ottawa, are putting in campus-wide initiatives to redesign completely the large lecture class, involving teams of faculty, teaching assistants and instructional and web designers. I believe this to be the ‘true’ hybrid approach, because it looks from scratch at the affordances of online and face-to-face teaching and designs around those, rather than picking a particular design such as a flipped lecture. I anticipate that university or at least program-wide initiatives for the redesign of large first and second year classes will grow even more in 2016.

UBC’s flexible learning initiative focuses on re-design to integrate online and classroom learning

2. Fully online undergraduate courses

Until fairly recently, the only institutions offering whole undergraduate programs fully online were either for-profit institutions, such as the University of Phoenix, or specialist open universities, such as the UK Open University or Athabasca University in Canada.

Most for-credit online programs in conventional universities were at the graduate level, and even then, apart from online MBAs, fully online master’s programs were relatively rare. At the undergraduate level, online courses were mainly offered in the third or, more likely, the fourth year, and on an individual course rather than a program basis, enabling regular on-campus students to take extra courses or catch up so they could finish their bachelor’s degree within four years.

However, this year I noticed some quite distinguished Canadian universities building up to full undergraduate degrees available fully online. For instance, McMaster University is offering an online B.Tech (mainly software engineering) in partnership with Mohawk College. Students can take a diploma program from Mohawk, then take the third and fourth years fully online from McMaster. Similarly, Queen’s University, in partnership with the Northern College Haileybury School of Mines, is developing a fully online B.Tech in Mining Engineering. Queen’s is also developing a fully online ePre-Health Honours Bachelor of Science, using competency-based learning.

Fully online undergraduate programs will not be appropriate for all students, particularly those coming straight from high school. But the programs from Queen’s and McMaster recognise the growing market of people with two-year college diplomas, who are often already working and want to go on to a full undergraduate degree without giving up their jobs.

3. The automation of learning

Another trend I noticed growing particularly strongly in 2015, and one that I don’t like, is the tendency, particularly but not exclusively in the USA, to automate learning through behaviourist applications of computer technology. This can be seen in the use of computer-marked assignments in xMOOCs, the use of learning analytics to identify learners ‘at risk’, and adaptive learning that controls the way learners can work through materials. There are some elements of competency-based learning that also fit this paradigm.

This is a big topic which I will discuss in more detail in the new year in my discussion of the future of learning, but it definitely increased during 2015.

4. The growing importance of open source social media in online learning design

I noticed that more and more instructors and instructional designers were incorporating social media into the design of online learning in 2015. In particular, more instructors are moving away from learning management systems and using open-source social media, such as blogs, wikis and mobile apps, to provide flexibility and more learner engagement.

One important reason for this is a move away from commercially owned software and services, partly to protect student (and instructor) privacy. In a sense, this is also a reaction to the automation and commercialization of learning, reflecting a difference in fundamental philosophy as well as in technology. Again, the increased use of social media in online learning is discussed in much more detail by Audrey Watters (see Social Media, Campus Activism and Free Speech).

5. More open educational materials – but not enough use

For me, the leader in OER in 2015 was the BCcampus open textbook project, and not just because I published my own book this way. This is proving to be a very successful program, already saving post-secondary students over $1 million from a total post-secondary student population of under 250,000. The only surprise is that many BC instructors are still resisting the move to open textbooks and that more jurisdictions outside Western Canada are not moving aggressively into open textbooks.

The general adoption of OER indeed still seems to be struggling. I noticed that some institutions in Ontario are beginning to develop OER that can be shared across different courses within the same institution (e.g. statistics). However, it would be much more useful if provincial or state articulation committees came together and agreed on the production of core OER that could be used throughout the same system within a particular discipline (and also, of course, made available to anyone outside). This way instructors would know the resources have been peer validated. Other ways to encourage faculty to use OER – in particular, ensuring the OER are of high quality both academically and in production terms – need to be researched and applied. It doesn’t make sense for online learning to be a cottage industry with every instructor doing everything themselves.

Is that it?

Yup. As I said, this is a much narrower view of online learning trends than I have offered in the past. You will note that I have not included MOOCs among my key trends for 2015. They are still there and still growing, but much of the hype has died down, and they are gradually easing into a more specialist niche or role in the wider higher education market. My strategy with MOOCs: if you can’t beat them, ignore them. They will eventually go away.

Next

The next two posts will:

  1. provide a summary of my activities in 2015
  2. provide a statistical analysis of the most popular posts on my blog in 2015

In the new year I will write a more general post on the future of online learning. In the meantime, have a great holiday season and see you in 2016.