July 20, 2018

How serious should we be about serious games in online learning?

An excerpt from the video game ‘Therapeutic Communication and Mental Health Assessment’ developed at Ryerson University

In the 2017 national survey of online learning in post-secondary education, and indeed in the Pockets of Innovation project, serious games were hardly mentioned as being used in Canadian universities or colleges. Yet there was evidence from the Chang School Talks in Toronto earlier this month that there is good reason to be taking serious games more seriously in online learning.

What are serious games?

The following definition from the Financial Times Lexicon is as good a definition as any:

Serious games are games designed for a purpose beyond pure entertainment. They use the motivation levers of game design – such as competition, curiosity, collaboration, individual challenge – and game media, including board games through physical representation or video games, through avatars and 3D immersion, to enhance the motivation of participants to engage in complex or boring tasks. Serious games are therefore used in a variety of professional situations such as education, training, assessment, recruitment, knowledge management, innovation and scientific research.

So serious games are not solely educational, nor necessarily online, but they can be both.

Why are serious games not used more in online learning?

Well, partly because some see serious games as an oxymoron. How can a game be serious? This may seem trivial, but many game designers fear that a focus on education risks killing the main element of a game, its fun. Similarly, many instructors fear that learning could easily be trivialised through games or that games can cover only a very limited part of what learning should be about – it can’t all be fun. 

Another, more pragmatic, reason is cost and quality. The best-selling video games, for instance, cost millions of dollars to produce, on a scale similar to mainstream movies. What is the compelling business plan for educational games? And if games are produced cheaply, won’t the quality – in terms of production standards, narrative/plot, visuals, and learner engagement – suffer, making them unattractive to learners?

However, probably the main reason is that most educators simply do not know enough about serious games: what exists, how they can be used, or how to design them. For this reason, this year’s ChangSchoolTalks, organised annually by the School of Continuing Studies at Ryerson University, focused on serious games.

The conference

The conference, held on May 3rd in Toronto, featured nine speakers with extensive experience of serious games, organised around three themes:

  • higher education
  • health care
  • corporate

The presentations were followed by a panel debate and a question and answer session.

This proved to be an exceptionally well-chosen group of speakers on the topic. In one session, run by Sylvester Arnab, the audience invented games on the spot. Teams of two were given an existing game or game concept (such as Dictionary or Jeopardy) and a topic (such as international relations), and had up to two minutes to create an educational game. The winning team, in less than 30 seconds, devised a game requiring online students in political science to represent a country and suggest how it should respond to selected tweets from Donald Trump.

I mentioned in an earlier blog that I suffered such information overload from recent conferences that I had to go and lie down. This was the conference where that happened! It has taken three weeks for me even to begin fully processing what I learned.

What did I learn?

Probably the most important thing is that there is a whole, vibrant world of serious games outside of education, and at the same time there are many possible and realistic applications for serious games in education, and particularly in online learning. So, yes, we should be taking serious games much more seriously in online learning – but we need to do it carefully and professionally.

The second lesson I learned is that excellent online serious games can be developed without spending ridiculous amounts of money (see some examples below). At the same time, there is a high degree of risk. There is no sure way of predicting in advance that a new game will be successful. Some low-cost, simple games can work well; some expensively produced games can easily flop. This means careful testing and feedback are essential during development.

For these and other reasons, research being conducted at Ryerson University and funded by eCampus Ontario is particularly important. Naza Djafarova and colleagues at Ryerson’s Chang School of Continuing Education are conducting research to develop a game design guide to enhance the process by which multidisciplinary teams, engaged in the pre-production stage, approach the design of a serious game. They have developed a process, called the Art of Game Design methodology, for multidisciplinary teams involved in the design of serious games, which has been appraised in participatory workshops.

The Chang School has already developed a few prototype games, including:

  • Lake Devo, a virtual learning environment enabling online role-play activity in an educational context. Learners work synchronously, using visual, audio, and text elements to create avatars and interact in online role-play scenarios.
  • Skills Practice: A Home Visit, which promotes the application of knowledge and skills related to establishing a therapeutic nurse-client relationship and completing a mental health assessment. Students assume the role of a community health nurse assigned to complete a home visit. Working with nurses and professors from George Brown College and Centennial College, the project team is also working to establish a ‘virtual hospital’ with several serious games focused on maternity issues.

Thus serious games are a relatively high-risk, high-return activity for online learning. Success requires building on best practices in game design, both within and outside education, together with sharing and collaboration. However, as we move more and more towards skills development, experiential learning, and problem-solving, serious games will play an increasingly important role in online learning. Best to start now.

Towards an open pedagogy for online learning

Image: © University of Victoria, BC

The problems with OER

I was interviewed recently by a reporter writing an article on OER (open educational resources), and I found myself being much more negative than I expected, since I very much support the principle of openness in education. In particular, I pointed out that OER, while slowly growing in acceptance, are still used for only a tiny minority of teaching in North American universities and colleges. For instance, open textbooks are a no-brainer, given the enormous savings they can bring to students, but even in the very few state or provincial jurisdictions that have an open textbook program, take-up is still very slow.

I have written elsewhere in more detail about why this is so, but here is a summary of the reasons:

  • lack of suitable OER: finding the right OER for the right context. This problem is slowly disappearing as more OER become available, but in too many instances it is still difficult to find exactly the right kind of OER for a particular teaching context. It is, though, a limitation that I believe will not last much longer (for the reasons, read on).
  • the poor quality of what does exist. This is not so much the quality of content as the quality of production. Most OER are created by an individual instructor working alone, or at best with an instructional designer. This is the cottage-industry approach to design. I have been on funding review committees where institutions throughout a province bid for funds for course development or OER production. In one case I reviewed requests from about eight different institutions for funds to produce OER for statistics. Each institution (or rather, each faculty member) made its proposal in isolation from the others. I strongly recommended that the eight faculty members get together and design a set of OER jointly, benefiting from a larger input of expertise and resources. That way, all eight institutions would be likely to use the combined OER, and the OER would likely be of much higher quality as a result.
  • the benefits are less for instructors than students. Faculty for instance set the textbook requirement. They don’t have to pay for the book themselves in most cases. With the textbook often comes a whole package of support materials from the publisher, such as tests, supplementary materials, and model answers (which is why the textbook is so expensive). This makes life easier for instructors but it is the students who have to pay the cost.
  • OER take away the ‘ownership’ of knowledge from the instructor. Instructors do not see themselves as mere distributors of information, a conveyor belt along which ‘knowledge’ passes, but as constructors of knowledge. They see their lecture as unique and individual, something the student cannot get from someone else. And often it is unique, with an instructor’s personal spin on a topic. OER take away from instructors what they see as being most important about their teaching: their unique perspective on a topic.
  • and now we come to what I think is the main problem with OER: OER do not make much sense out of context. Too often the approach is to create an OER then hope that others will find applications for it. But this assumes that knowledge is like a set of bricks. All you have to do is to collect bricks of knowledge together, add a little mortar, and lo, you have a course. The instructor chooses the bricks and the students apply the mortar. Or you have a course but you need to fill some holes in it with OER. I suggest these are false metaphors for teaching, or at least for how people learn. You need a context, a pedagogy, where it makes sense to use open resources.

Towards an open pedagogy

I am making three separate but inter-linked arguments here:

  • OER are too narrowly defined and conceptualized
  • we need to design teaching in such a way that it is not just sensible to use OER but unavoidable
  • we should start by defining what we are trying to achieve, then identify how OER will enable this.

So I will start with the last argument first.

Developing the knowledge and skills needed in the 21st century

Again I have written extensively about this (see Chapter 1 of Teaching in a Digital Age), but in essence we need to focus specifically on developing core ‘soft’ or ‘intellectual’ skills in our students, and especially the core skills of independent learning and knowledge management. Put in terms of learning outcomes, in a world where the content component of knowledge is constantly developing and growing, students need to learn independently so they can continue to learn after graduation, and students also need to know how to find, analyse, evaluate, and apply knowledge.

If we want students to develop these and other ‘soft’ skills such as problem-solving, critical thinking, evidence-based argumentation, what teaching methods or pedagogy should we adopt and how would it differ from what we do now?

The need for teaching methods that are open rather than closed

The first thing we should recognise is that in a lecture-based methodology, it is the instructor doing the knowledge management, not the student. The instructor (or his or her colleagues) decides the curriculum, the required reading, what should be covered in each lecture, how it should be structured, and what should be assessed. There is little independence for the learner – either do what you are instructed to do, or fail. That is a closed approach to teaching.

I am suggesting that we need to flip this model on its head. Ultimately it should be the students who decide what content is important, how it should be structured, and how it can be applied. The role of the instructor then would not be to choose, organise and deliver content, but to structure the teaching to enable students to do this effectively themselves.

Nor should this be a sudden process, with students switching abruptly from a lecture-based format as undergraduates to a more open structure as post-graduates; rather, it should be developed slowly and progressively throughout an undergraduate program, or a two-year college program where soft skills are considered important. One way – although there are many others – of doing this is through project- or problem-based learning, where students start with real challenges then develop the knowledge and skills needed to address such challenges.

This does not mean we no longer need subject specialists or content experts. Indeed, a deep understanding of a subject domain is essential if students are to be steered and guided and properly assessed. However, the role of the subject specialist is fundamentally changed. He or she is now required to set their specialist knowledge in a context that enables student discovery and exploration, and student responsibility for learning. The specialist’s role now is to support learning, by providing appropriate learning contexts, guidance to students, criteria for assessing the quality of information, and quality standards for problem-solving, knowledge management and critical thinking, etc.

A new definition of open resources

Here I will be arguing for a radical change: the dropping of the term ‘educational’ from OER.

If students are to develop the skills identified earlier, they will need access to resources: research papers, reports from commissions, case-study material, books, first-hand reports, YouTube videos, a wide range of opinions or arguments about particular topics, as well as the increasing amount of specifically named open educational resources, such as recorded lectures from MIT and other leading research universities.

Indeed, increasingly all knowledge is becoming open and easily accessible online. All publicly funded research in many countries must now be made available through open access journals, increasingly government and even some commercial data (think government commission reports, environmental assessments, public statistics, meteorological models) are now openly accessible online, and this will become more and more the norm. In other words, all content is becoming more free and more accessible, especially online.

With that, of course, comes more unreliable information, more false truths, and more deliberate propaganda. What better preparation for our students’ future is there than equipping them with the knowledge and skills to sift through this mass of contradictory information? What better than making them really good at distinguishing the true from the false, at evaluating the strength of an argument, and at assessing the evidence used to support it, whatever the subject domain? To do this, though, means exposing them to a wide range of openly accessible content, and providing the guidance, the criteria, and the necessary prior knowledge that they will need to make these decisions.

But we cannot do this if we restrict our students to already ‘approved’ OER. All content eventually becomes an educational resource, a means to help students to differentiate, evaluate and decide. By naming content as ‘educational’ we are already validating its ‘truth’ – we are in fact closing the mind to challenge. What we want is access to open resources – full stop. Let’s get rid of the term OER and instead fight for an open pedagogy.

Automation or empowerment: online learning at the crossroads

Image: AppLift, 2015

You are probably, like me, getting tired of the different predictions for 2016. So I’m not going to do my usual look forward for the year for individual developments in online learning. Instead, I want to raise a fundamental question about which direction online learning should be heading in the future, because the next year could turn out to be very significant in determining the future of online learning.

The key question we face is whether online learning should aim to replace teachers and instructors through automation, or whether technology should be used to empower not only teachers but also learners. Of course, the answer will always be a mix of both, but getting the balance right is critical.

An old but increasingly important question

This question, automation or human empowerment, is not new. It was raised by B.F. Skinner (1968) when he developed teaching machines in the early 1960s. He thought teaching machines would eventually replace teachers. On the other hand, Seymour Papert (1980) wanted computing to empower learners, not to teach them directly. In the early 1980s Papert got children to write computer code to improve the way they think and to solve problems. Papert was strongly influenced by Jean Piaget’s theory of cognitive development, and in particular that children constructed rather than absorbed knowledge.

In the 1980s, as personal computers became more common, computer-assisted learning (CAL or CAI) became popular, using computer-marked tests and early forms of adaptive learning. Also in the 1980s the first developments in artificial intelligence were applied, in the form of intelligent math tutoring. Great predictions were made then, as now, about the potential of AI to replace teachers.

Then along came the Internet. Following my first introduction to the Internet in a friend’s basement in Vancouver, I published an article in the first edition of the Journal of Distance Education, entitled ‘Computer-assisted learning or communications: which way for IT in distance education?’ (1986). In this paper I argued that the real value of the Internet and computing was to enable asynchronous interaction and communication between teacher and learners, and between learners themselves, rather than as teaching machines. This push towards a more constructivist approach to the use of computing in education was encapsulated in Mason and Kaye’s book, Mindweave (1989). Linda Harasim has since argued that online collaborative learning is an important theory of learning in its own right (Harasim, 2012).

In the 1990s, David Noble of York University attacked online learning in particular for turning universities into ‘Digital Diploma Mills’:

‘universities are not only undergoing a technological transformation. Beneath that change, and camouflaged by it, lies another: the commercialization of higher education.’

Noble (1998) argued that

‘high technology, at these universities, is often used not to … improve teaching and research, but to replace the visions and voices of less-prestigious faculty with the second-hand and reified product of academic “superstars”.’

However, contrary to Noble’s warnings, for fifteen years most university online courses followed the route of interaction and communication between teachers and students rather than that of computer-assisted learning or video lectures, and Noble’s arguments were easily dismissed or forgotten.

Then along came lecture capture and with it, from 2011 onwards, Massive Open Online Courses (xMOOCs) from Coursera, Udacity and edX, driven by elite, highly selective universities, with their claims of making the best professors in the world available to everyone for free. Noble’s nightmare suddenly became very real. At the same time, these MOOCs have generated much more interest in big data, learning analytics, a revival of adaptive learning, and claims that artificial intelligence will revolutionize education, since automation is essential for managing such massive courses.

Thus we are now seeing a big swing back to the automation of learning, driven by powerful computing developments, Silicon Valley start-up thinking, and a sustained political push from those that want to commercialize education (more on this later). Underlying these developments is a fundamental conflict of philosophies and pedagogies, with automation being driven by an objectivist/behaviourist view of the world, compared with the constructivist approaches of online collaborative learning.

In other words, there are increasingly stark choices to be made about the future of online learning. Indeed, it is almost too late – I fear the forces of automation are winning – which is why 2016 will be such a pivotal year in this debate.

Automation and the commercialization of education

These developments in technology are being accompanied by a big push in the United States, China, India and other countries towards the commercialization of online learning. In other words, education is being seen increasingly as a commodity that can be bought and sold. This is not through the previous and largely discredited digital diploma mills of the for-profit online universities such as the University of Phoenix that David Noble feared, but rather through the encouragement and support of commercial computer companies moving into the education field, companies such as Coursera, Lynda.com and Udacity.

Audrey Watters and EdSurge both produced lists of EdTech ‘deals’ in 2015 totalling between $1 billion and $2 billion. Yes, that’s right: $1–$2 billion of investment in private ed tech companies in the USA (and China) in one year alone. At the same time, entrepreneurs are struggling to develop sustainable business models for ed tech investment, because with education funded publicly, a ‘true’ market is restricted. Politicians, entrepreneurs and policy makers on the right in the USA increasingly see a move to automation as a way of reducing government expenditure on education, and one means by which to ‘free up the market’.

Another development that threatens the public education model is the move by very rich entrepreneurs such as the Gates, the Hewletts and the Zuckerbergs to move their massive personal wealth into ‘charitable’ foundations or corporations and use this money for their pet ‘educational’ initiatives that also have indirect benefits for their businesses. Ian McGugan (2015) in the Globe and Mail newspaper estimates that the Chan Zuckerberg Initiative is worth potentially $45 billion, and one of its purposes is to promote the personalization of learning (another name hi-jacked by computer scientists; it’s a more human way of describing adaptive learning). Since one way Facebook makes its money is by selling personal data, forgive my suspicion that the Zuckerberg initiative is a not-so-obvious way of collecting data on future high earners. At the same time, the Chan Zuckerberg Initiative enables the Zuckerbergs to avoid paying tax on their profits from Facebook. Instead, then, of paying taxes that could be used to support public education, these immensely rich foundations enable a few entrepreneurs to set the agenda for how computing will be used in education.

Why not?

Technology is disrupting nearly every other business and profession, so why not education? Higher education in particular requires a huge amount of money, mostly raised through taxes and tuition fees, and it is difficult to tie results directly to investment. Surely we should be looking at ways in which technology can change higher education so that it is more accessible, more affordable and more effective in developing the knowledge and skills required in today’s and tomorrow’s society?

Absolutely. It is not so much the need for change that I am challenging, but the means by which this change is being promoted. In essence, a move to automated learning, while saving costs, will not improve the learning that matters, and particularly the outcomes needed in a digital age, namely the high-level intellectual skills of critical thinking, innovation, entrepreneurship, problem-solving, high-level multimedia communication, and above all, effective knowledge management.

To understand why automated approaches to learning are inappropriate to the needs of the 21st century we need to look particularly at the tools and methods being proposed.

The problems with automating learning

The main challenge for computer-directed learning such as information transmission and management through Internet-distributed video lectures, computer-marked assessments, adaptive learning, learning analytics, and artificial intelligence is that they are based on a model of learning that has limited applications. Behaviourism works well in assisting rote memory and basic levels of comprehension, but does not enable or facilitate deep learning, critical thinking and the other skills that are essential for learners in a digital age.

R. and D. Susskind (2015) argue in particular that a new age of artificial intelligence and adaptive learning is being driven primarily by what they call the brute force of more powerful computing. AI failed so dramatically in the 1980s, they argue, because computer scientists tried to mimic the way that humans think, and computers then did not have the capacity to handle information in the way they do now. When, however, we use the power of today’s computing, it can solve previously intractable problems through analysis of massive amounts of data in ways that humans had not considered.

There are several problems with this argument. The first is that, as the Susskinds themselves note, computers operate differently from humans. Computers are mechanical and work basically on a binary operating system. Humans are biological and operate in a far more sophisticated way, capable of language creation as well as language interpretation, and using intuition as well as deductive thinking. Emotion as well as memory drives human behaviour, including learning. Furthermore, humans are social animals, and depend heavily on social contact with other humans for learning. In essence, humans learn differently from the way machine automation operates.

Unfortunately, computer scientists frequently ignore or are unaware of the research into human learning. In particular they are unaware that learning is largely developmental and constructed, and instead impose an old and less appropriate method of teaching based on behaviourism and an objectivist epistemology. If though we want to develop the skills and knowledge needed in a digital age, we need a more constructivist approach to learning.

Supporters of automation also make another mistake in over-estimating or misunderstanding how AI and learning analytics operate in education. These tools reflect a highly objectivist approach to teaching, where procedures can be analysed and systematised in advance. However, although we know a great deal about learning in general, we still know very little about how thinking and decision-making operate biologically in individual cases. At the same time, although brain research is promising to unlock some of these secrets, most brain scientists argue that while we are beginning to understand the relationship between brain activity and very specific forms of behaviour, there is a huge distance to travel before we can explain how these mechanisms affect learning in general or how an individual learns in particular. There are too many variables (such as emotion, memory, perception, communication, as well as neural activity) at play to find an isomorphic fit between the firing of neurons and computer ‘intelligence’.

The danger then with automation is that we drive humans to learn in ways that best suit how machines operate, and thus deny humans the potential of developing the higher levels of thinking that make humans different from machines. For instance, humans are better than machines at dealing with volatile, uncertain, complex and ambiguous situations, which is where we find ourselves in today’s society.

Lastly, both AI and adaptive learning depend on algorithms that predict or direct human behaviour. These algorithms though are not transparent to the end users. To give an example, learning analytics are being used to identify students at high risk of failure, based on correlations of previous behaviour online by previous students. However, for an individual, should a software program be making the decision as to whether that person is suitable for higher education or a particular course? If so, should that person know the grounds on which they are considered unsuitable and be able to challenge the algorithm or at least the principles on which that algorithm is based? Who makes the decision about these algorithms – a computer scientist using correlated data, or an educator concerned with equitable access? The more we try to automate learning, the greater the danger of unintended consequences, and the more need for educators rather than computer scientists to control the decision-making.
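To make the transparency problem concrete, here is a deliberately simplified sketch of how such an at-risk flag is typically produced. This is not any institution's actual system: the data, the feature names and the model (a tiny logistic regression fitted to an invented past cohort) are all illustrative, chosen to show how a risk score is derived purely from correlations with previous students' behaviour.

```python
# Hypothetical sketch of a learning-analytics "at-risk" flag.
# All data and feature names are invented for illustration.
import math

# Toy historical cohort: (logins per week, assignments submitted) -> failed (1) or passed (0)
history = [
    ((1, 0), 1), ((2, 1), 1), ((1, 1), 1), ((2, 0), 1),
    ((8, 5), 0), ((9, 6), 0), ((7, 5), 0), ((10, 6), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a simple logistic regression by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(2000):
    for (x1, x2), y in history:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def risk(logins, submitted):
    """Predicted probability that a student with this behaviour will fail."""
    return sigmoid(w[0] * logins + w[1] * submitted + b)

# A new student is flagged purely on correlation with past cohorts:
print(round(risk(2, 1), 2))   # low engagement: high predicted risk
print(round(risk(9, 5), 2))   # high engagement: low predicted risk
```

Note that the only ‘explanation’ such a model can offer the flagged student is its learned weights, which is precisely the transparency problem raised above: the student sees a flag, not the grounds for it.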

The way forward

In the past, I used to think of computer scientists as colleagues and friends in designing and delivering online learning. I am now increasingly seeing at least some of them as the enemy. This is largely to do with the hubris of Silicon Valley, which believes that computer scientists can solve any problem without knowing anything about the problem itself. MOOCs based on recorded lectures are a perfect example of this, developed primarily by a few computer scientists from Stanford (and unfortunately blindly copied by many people in universities who should have known better).

We need to start with the problem, which is how do we prepare learners for the knowledge and skills they will need in today’s society. I have argued (Bates, 2015) that we need to develop, in very large numbers of people, high level intellectual and practical skills that require the construction and development of knowledge, and that enable learners to find, analyse, evaluate and apply knowledge appropriately.

This requires a constructivist approach to learning which cannot be appropriately automated, as it depends on high quality interaction between knowledge experts and learners. There are many ways to accomplish this, and technology can play a leading role, by enabling easy access to knowledge, providing opportunities for practice in experientially-based learning environments, linking communities of scholars and learners together, providing open access to unlimited learning resources, and above all by enabling students to use technology to access, organise and demonstrate their knowledge appropriately.

These activities and approaches do not easily lend themselves to massive economies of scale through automation, although they do enable more effective outcomes and possibly some smaller economies of scale. Automation can be helpful in developing some of the foundations of learning, such as basic comprehension or language acquisition. But at the heart of developing the knowledge and skills needed in today’s society, the role of a human teacher, instructor or guide will remain absolutely essential. Certainly, the roles of teachers and instructors will need to change quite dramatically, teacher training and faculty development will be critical for success, and we need to use technology to enable students to take more responsibility for their own learning, but it is a dangerous illusion to believe that automation is the solution to learning in the 21st century.

Protecting the future

There are several practical steps that need to be taken to prevent the automation of teaching.

  1. Educators – and in particular university presidents and senior civil servants with responsibility for education – need to speak out clearly about the dangers of automation, and the technology alternatives available that still exploit its potential and will lead to greater cost-effectiveness. This is not an argument against the use of technology in education, but the need to use it wisely so we get the kind of educated population we need in the 21st century.
  2. Computer scientists need to show more respect to educators and be less arrogant: this means working collaboratively with them and treating them as equals.
  3. We – teachers and educational technologists – need to apply what we already know about effective learning and teaching in our own work, and to disseminate it better to those outside education.
  4. Faculty and teachers need to develop compelling technology alternatives to automation that focus on the skills and knowledge needed in a digital age, such as:
    • experiential learning through virtual reality (e.g. Loyalist College’s training of border service agents)
    • networking learners online with working professionals, to solve real world problems (e.g. by developing a program similar to McMaster’s integrated science program for online/blended delivery)
    • building strong communities of practice through connectivist MOOCs (e.g. on climate change or mental health) to solve global problems
    • empowering students to use social media to research and demonstrate their knowledge through multimedia e-portfolios (e.g. UBC’s ETEC 522)
    • building openly accessible, high-quality, student-activated simulations and games, designed and monitored by experts in the subject area.
  5. Governments need to put as much money into research into learning and educational technology as they do into innovation in industry. Without better and more defensible theories of learning suitable for a digital age, we are open to any quack or opportunist who believes he or she has the best snake oil. More importantly, with better theory and knowledge of learning disseminated and applied appropriately, we can have a much more competitive workforce and a more just society.
  6. We need to educate our politicians about the dangers of commercialization in education through the automation of learning and fight for a more equal society where the financial returns on technology applications are more equally shared.
  7. We need to become edupunks and take back the web from powerful commercial interests, by using open-source, low-cost, easy-to-use tools in education that protect our privacy and enable learners and teachers to control how they are used.

That should keep you busy in 2016.

Your views are of course welcome – unless you are a bot.


Reading between the lines: the ‘intangibles’ in quality online teaching and learning

Teaching needs empathy, intuition and imagination, as well as technical competence.

Contact North has organised a series of four webinars highlighting the practical advice and guidelines offered in my online, open textbook, Teaching in a Digital Age. The first webinar took place last week on September 29. It covered the first five chapters in the book, which discuss:

  • the implications of the major changes taking place in education
  • epistemologies that drive approaches to teaching and learning
  • different teaching methods and their appropriateness for developing the knowledge and skills needed in a knowledge-based society.

The aim of the webinar was not to cover the same ground as the book, but to give participants an opportunity to raise questions or comments about these issues – which they did. I received and answered nearly 30 different questions in the one hour. You can access the recording here: https://contactnorth.webex.com/contactnorth/lsr.php?RCID=67ca245af5fa7a21546ba37e10f306ba

In particular, there were questions about the importance of passion in teaching, whether learners today are really different, how to engage passive learners or introverts online, how to get students to take responsibility for learning, how to get students to collaborate online, and lastly whether cognitivism is an epistemology or a learning theory. I did answer all these questions briefly within the webinar.

On listening again to the recording, though, I was struck by the interest or concern of participants for what I would call the intangibles or the more human aspects in teaching and learning, such as the importance of passion in teaching and learning, dealing with learners’ ‘readiness’ or motivation to learn, building relationships between online learners and instructors, and how to encourage/develop interaction, discussion and collaboration between learners.

This brought home to me that for most instructors, teaching is not just a technical activity that can be categorized, systematised and computerised, but is a fundamentally human practice that requires empathy, intuition, and imagination. These are qualities that cannot be automated.

The next webinar, which will cover chapters 6-9 on media and technology selection, will be on November 3, 2015. For more details, click here.

Book review: Teaching and Learning in Digital Worlds

Workspace in the EVEA3D platform

Gisbert, M. and Bullen, M. (Eds.) (2015) Teaching and Learning in Digital Worlds: Strategies and Issues in Higher Education Tarragona, Spain: Publicacions Universitat Rovira i Virgili (pdf version available online for 2.84 euros).

What the book is about

From the Introduction

[The book] examines the teaching and learning process in 3D virtual learning environments from both the theoretical and practical points of view. It is divided into four sections:

  • the first section discusses education in the 21st century from the perspective of learners in a digital society and examines the basic competences students need to respond to the personal and professional challenges they are likely to face. It also explores the issue of quality…
  • the second section focuses on the educational and teaching strategies higher education professionals must take into account when developing educational processes in technology environments…in such environments simulation will be our best teaching strategy and evaluation our greatest challenge.
  • the third section explores the use of 3D virtual environments in education in general and in higher education in particular…
  • the fourth section examines the range of experiences we consider to be good practice when applying 3D technological environments to the teaching of competences at secondary and tertiary levels of education, both nationally and internationally.

However, this doesn’t quite capture for me what the book is really about, so I will discuss a little more closely below some of the themes addressed by individual chapters.

As a point of clarification, I will use the term ‘immersive environments’ as a shorthand to describe simulations, games and virtual reality, a point I will come back to in my comments at the end of this post.

Who wrote it

The book is edited by Mercè Gisbert of the Universitat Rovira i Virgili in Catalonia, Spain, and by Canadian Mark Bullen, formerly of the University of British Columbia and the Commonwealth of Learning. The majority of chapters are based on a study (Simul@) funded by the Spanish Ministry of Education and coordinated by the Universitat Rovira i Virgili, but involving universities in Spain, Germany and Portugal, thus providing valuable insight into European thinking about immersive environments for education.

Full disclosure: I wrote a short prologue for the book.

Themes covered in the book

Rather than a chapter-by-chapter summary, I have selected certain themes that re-occur through the book.

1. Digital learners

There is a lot of discussion in the book about the nature of digital learners and their ‘readiness’ for learning through digital technologies. In particular, Bullen and Morgan summarise the conflicting views and the research around digital natives and digital immigrants, and provide a more ‘nuanced’ profile of categories of digital learners.  Martinez and Espinal in their chapter provide a detailed description of digital competence and how to assess it. Throughout the book there is emphasis on the need to ensure that learners have the necessary ‘digital competences’ to benefit fully from the use of immersive technologies for learning purposes (although the same applies to teachers, of course). For instance, de Oliveira et al., in their chapter, identify various components of digital competences.

2. Competences

One of the strengths of the book is that several authors make the point that the main educational value of immersive learning environments is for the development of ‘general competences’ such as learning to learn, teamwork, communication, problem solving and decision-making. Astigarraga provides a very good overview of the definition, identification and evaluation of competences, and Isus et al. develop this further with a chapter on evaluating the competences of teamwork and self-management. Larraz and Esteve devote their whole chapter to evaluating digital competence in immersive environments. These chapters will be valuable for anyone interested in competency-based learning, whether or not using immersive learning environments.

3. Key educational principles and affordances of immersive technologies

Another strength of the book is that several authors relate the features of immersive environments to their possible educational affordances, and to the educational principles needed to exploit them. Camacho and Esteve-Gonzáles list 14 educational reasons for using immersive environments for learning, and Cervera and Cela-Ranilla have collated from the general research literature some 15 key pedagogical principles ‘to be observed during learning processes’ when using immersive technologies for learning purposes.

4. Planning and implementing virtual learning environments

Towards the end of the book there are several chapters focusing on more practical issues. Marqués et al. describe the planning and implementation of a virtual world built in Sloodle, which combines OpenSim with Moodle, for educating both physical education and business management students. Estevez-González et al. take this further with a chapter on the tools used in Sloodle and the steps needed to integrate OpenSim and Moodle, and Cela-Ranilla and Estevez-Gonzàlez provide an educational rationale for the design of the project. Lastly, Garcia and Martin set out a design methodology for an immersive learning environment.

5. Experiences and good practices

The book ends with five chapters that describe actual applications of immersive learning environments: PolyU, developed at Hong Kong Polytechnic University (hotel and tourism management); a review of applications in economics and business courses; Virt-UAM, an educational platform developed at the Universidad Autónoma de Madrid, with applications in law and psychology; and lastly a review of applications in secondary/high school education.

Critique

First, this is a very welcome and timely publication for several reasons:

  • it sets out very clearly the pedagogical rationale for the use of immersive learning environments;
  • it links immersive technologies very strongly to the development of competences;
  • it provides practical advice on the planning and implementation of immersive learning environments;
  • it provides a welcome European perspective on the topic.

From a personal perspective, it complements very nicely my own open, online textbook, Teaching in a Digital Age, in which, because of space and time constraints, I was unable to give this topic the treatment it deserves. Although not an open textbook, it is very affordable, available online for less than three euros ($3-4).

Given that the book is mostly written by people for whom English is a second language, the chapters are remarkably clear and well written, mostly free of the ‘European English’ associated with European Commission projects.

Nevertheless, the European Commission has adopted the term ‘competence’ rather than ‘competency’, and this usage – which really irritates me – runs throughout the book, when what the authors are really talking about are skills. ‘Competent’ is an adjective describing a minimal capacity to do something (its negative, ‘incompetent’, is the more common form in British English, used to describe inadequacy). Skills have no upper limit, whereas competence tends to be categorical: you either have it or you don’t, which is why competency-based learning often requires 100% pass rates. But skills such as problem-solving can get better and better, and that is what we should be striving for in higher education, not a minimal pass requirement.

The editors have done a good job in ensuring that there is a coherence and progression between the different chapters, always a challenge in a multiple-authored book. However, I would have liked a summary chapter from the editors that pulled all the threads together, and also some more information about the authors.

The book’s strength, and also its weakness, is its academic nature, with more focus on theory, competences and affordances, and less on actual technology design issues, although to be fair these do start to appear towards the end of the book. I would have liked to see theory and practice more closely integrated throughout.

The main omission is any discussion of costs in planning and developing immersive learning environments, which demand a great deal of time from both learners and teachers. Clear economies of scale are needed to justify the high cost of initial design: if a virtual world and its allied teaching strategies can be shared across several courses or even disciplines, the cost becomes more acceptable. There is also a high cost for students in the time needed to master the technology and its educational applications if they take only one course in a virtual world. So it is a pity that there was so little discussion of costs and time in the book, or of the transfer of innovation into mainstream practice, both of which are significant challenges for the wider adoption of immersive technologies in education.

Nevertheless, this is a book I would highly recommend to all concerned about the implications of technology for learning design. Virtual learning environments hold great promise. We need more concerted efforts in higher education to use immersive learning environments, and this book is an essential guide.