October 31, 2014

Adapting student assessment to the needs of a digital age



The story so far

Chapter 5 of my open textbook, ‘Teaching in a Digital Age’, which I am currently writing and publishing as I go, is about the design of teaching and learning.

I started Chapter 5 by suggesting that instructors should think about design through the lens of constructing a comprehensive learning environment in which teaching and learning will take place. I have been working through the various components of a learning environment, focusing particularly on how the digital age affects the way we need to look at some of these components.

I have briefly described some of the key components of an effective learning environment in a series of earlier blog posts.

In this post, I examine the assessment of students as a key component, and how assessment methods need to be adapted to meet the needs of a digital age. This is the last component I’m discussing, but it will be followed by a final post that discusses the value of designing teaching and learning through the lens of a comprehensive learning environment.

Learner assessment

‘I was struck by the way assessment always came at the end, not only in the unit of work but also in teachers’ planning….Assessment was almost an afterthought…

Teachers…are being caught between competing purposes of …assessment and are often confused and frustrated by the difficulties that they experience as they try to reconcile the demands.’

Earle, 2003

Learner assessment in a digital age

Because assessment is a huge topic, it is important to be clear that the purpose of this section is (a) to look at one of the components that constitute an effective and comprehensive learning environment, and (b) briefly to examine the extent to which assessment is or should be changing in a digital age. Assessment will be a recurring theme in this book, so in this section the treatment is deliberately cursory.

Probably nothing drives the behaviour of students more than how they will be assessed. Not all students are instrumental in their learning, but given the competing pressures on students’ time in a digital age, most ‘successful’ learners focus on what will be examined and how they can most effectively (i.e. in as little time as possible) meet the assessment requirements. Therefore decisions about methods of assessment will in most contexts be fundamental to building an effective learning environment.

The purpose of assessment

There are many different reasons for assessing learners. It is important to be clear about the purpose of the assessment, because it is unlikely that one single assessment instrument will meet all assessment needs. Here are some reasons (you can probably think of many more):

  1. to improve and extend students’ learning
  2. to assess students’ knowledge and competence in terms of desired learning goals or outcomes
  3. to provide the teacher/instructor with feedback on the effectiveness of their teaching and how it might be improved
  4. to provide information for employers about what the student knows and/or can do
  5. to filter students for further study, jobs or professional advancement
  6. for institutional accountability and/or financial purposes.

I have deliberately listed these in order of importance for creating an effective learning environment. In terms of the needs of a digital age, assessment needs to focus on both developing and assessing skills. This means that continuous or formative assessment will be as important as summative or ‘end-of-course’ assessment.

A question to be considered is whether there is a need for assessment of learning in the first place. There may be contexts, such as a community of practice, where learning is informal, and the learners themselves decide what they wish to learn and whether they are satisfied with what they have learned. In other cases, learners may not want or need to be formally evaluated or graded, but do want or need feedback on how they are doing with their learning: ‘Do I really understand this?’ or ‘How am I doing compared to other learners?’

However, even in these contexts, some informal methods of assessment by experts, specialists or more experienced participants could help other participants extend their learning by providing feedback and indicating the level of competence or understanding that a participant has achieved or has yet to accomplish. Lastly, students themselves can extend their learning by participating in both self-assessment and peer assessment, preferably with guidance and monitoring from a more knowledgeable or skilled instructor.

Methods of assessment

The form the assessment takes, as well as the purpose, will be influenced by the instructors’ or examiners’ underlying epistemology: what they believe constitutes knowledge, and therefore how students need to demonstrate their knowledge. The form of assessment should also be influenced by the knowledge and skills that students need in a digital age, which means focusing as much on assessing skills as knowledge of content.

There is a wide range of possible assessment methods. I have selected just a few to illustrate how technology can change the way we assess learners in ways that are relevant to a digital age:

  • computer-based multiple-choice tests: good for testing ‘objective’ knowledge of facts, ideas, principles, laws, and quantitative procedures in mathematics, science and engineering, and cost-effective for these purposes. However, this form of testing tends to be limited in assessing high-level intellectual skills, such as complex problem-solving, creativity, and evaluation, and is therefore less likely to be useful for developing or assessing many of the skills needed in a digital age.
  • written essays or short answers: good for assessing comprehension and some of the more advanced intellectual skills, such as critical thinking, but labour intensive, open to subjectivity, and not good for assessing practical skills. Experiments are taking place with automated essay marking, using developments in artificial intelligence, but so far automated essay marking still struggles with reliably identifying valid semantic meaning (for a balanced and more detailed account of the current state of machine grading, see Mayfield, 2013; Parachuri, 2013; a toy illustration of the problem follows this list).
  • project work: either individual but more commonly group-based, project work encourages the development of authentic skills that require understanding of content, knowledge management, problem-solving, collaborative learning, evaluation, creativity and practical outcomes. Designing valid and practical project work needs a high level of skill and imagination from the instructor.
  • e-portfolios (an online compendium of student work): enable self-assessment through reflection, knowledge management, recording and evaluation of learning activities, such as teaching or nursing practice, and recording of an individual’s contribution to project work (as an example, see the use of e-portfolios in Visual Arts and Built Environment at the University of Windsor); usually self-managed by the learner, but can be made available or adapted for formal assessment purposes or job interviews.
  • simulations, educational games (usually online) and virtual worlds: facilitate the practice of skills, such as complex and real-time decision-making, operation of (simulated or remote) complex equipment, the development of safety procedures and awareness, risk taking and assessment in a safe environment, and activities that require a combination of manual and cognitive skills (see the training of Canadian Border Service officers at Loyalist College, Ontario). Currently expensive to develop, but cost-effective with multiple use, where they replace the use of extremely expensive equipment, where operational activities cannot be halted for training purposes, or where available as open educational resources.
Virtual world border crossing, Loyalist College, Ontario
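To make the difficulty with automated essay marking concrete, here is a minimal sketch of the kind of surface-level scoring a naive grader might use. It is purely illustrative (it is not how the systems discussed by Mayfield or Parachuri actually work): it rewards word overlap with a model answer, which is precisely why such approaches struggle with semantic meaning.

```python
# A toy "essay scorer" based on word overlap with a model answer.
# Purely illustrative: real research systems are far more sophisticated,
# but reliance on surface features remains at the heart of the criticism.
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

model = "Formative assessment gives learners feedback during the course."
paraphrase = "Students receive ongoing comments on their progress while studying."
keyword_stuffing = "Formative assessment feedback course learners feedback assessment."

for answer in (paraphrase, keyword_stuffing):
    score = cosine_similarity(bag_of_words(model), bag_of_words(answer))
    print(f"{score:.2f}  {answer}")
```

A human marker would rate the paraphrase highly and the keyword list poorly; the overlap score does the reverse. Capturing meaning, rather than surface similarity, is the open problem.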

It can be seen that some of these assessment methods are both formative, in helping students to develop and increase their competence and knowledge, and summative, in assessing knowledge and skill levels at the end of a course or program.

In conclusion

Nothing is likely to drive student learning more than the method of assessment. At the same time, assessment methods are rapidly changing and are likely to continue to change. Assessment of skills development needs to be continuous as well as summative. There is an increasing range of digitally based tools that can enrich the quality and range of student assessment. Therefore the choice of assessment methods, and their relevance to the other components, are vital elements of any effective learning environment.

Over to you

Your views, comments and criticisms are always welcome. In particular:

  • are there other methods of assessment relevant to a digital age that I should have included?
  • there is still a heavy reliance on computer-based multiple-choice tests in much teaching, mainly for cost reasons. However, although there are exceptions, in general these really don’t assess the high-level conceptual skills needed in a digital age. Are there other methods that are equally economical, particularly in terms of instructor time, and more suitable for assessment in a digital age? For instance, do you think automated essay grading is a viable alternative?
  • would it be helpful to think about assessment right at the start of course planning, rather than at the end? Is this feasible?

Or any other comments on assessment as a critical component of a learning environment, please!

Next up

Why thinking in terms of a comprehensive learning environment is necessary but not sufficient when designing a course or program.

 

References

Earle, L. (2003) Assessment as Learning, Thousand Oaks CA: Corwin Press

Mayfield, E. (2013) Six ways the edX Announcement Gets Automated Essay Grading Wrong, e-Literate, April 8

Parachuri, V. (2013) On the automated scoring of essays and the lessons learned along the way, vicparachuri.com, July 31

 

Game-based and immersive courses in a community college system


Bradley, P. (2014) Getting in the game: Colorado colleges develop game-based, immersive courses, Community College Weekly, March 3

The Colorado Community College System (CCCS) is one of the leading community college systems in exploring new online technologies. I have already reported on their use of remote labs for teaching introductory science courses at a distance. This article looks at the extensive use of immersion and game-based learning in the CCCS:

CCCS set aside $3 million through its Faculty Challenge Grant Program to encourage the development of courses and curriculum focusing on immersion and game-based learning (IGBL). Grants were awarded to 15 projects. The intent was that they would be “lighthouse projects,” illuminating the way for others to follow. Each solution would be scalable, shared with other institutions throughout the 13-college system.

Some of the 15 projects

Projects from this investment include the following:

  • CSI Aurora (Aurora CC) teaches the reality of forensic work through an immersive learning exercise involving a mock crime scene and mock criminal trial, with student participation from the archaeology, forensic anthropology, criminal justice, paralegal and science departments.
  • the Auto Collision Repair program at Morgan Community College purchased a SimSpray immersive virtual reality painting simulation unit, designed to assist in the teaching of spray painting and coating fundamentals. Using SimSpray decreases the expense of paint used to teach spray painting and prevents exposure to potentially dangerous fumes. The 3D SimSpray experience allows students to practice painting before ever stepping into the paint bay (I think in this case the real thing would be more fun!)
  • at Front Range Community College, Project Outbreak is a series of augmented reality scenarios in which microbiology students track and follow a potential epidemic in their local area to its source across international borders. Students use their mobile devices, the TagWhat geolocation app, Google Hangout and Google Maps. Scenarios are designed to meet core competencies, promote global connectedness and give students a global perspective in solving real-world problems.
  • the Community College of Aurora’s film school is in the process of using a $100,000 grant to create a virtual economy designed to mirror the reality of the studio system, from writing scripts to luring investors to screening the film in front of a real-life audience. Over the past seven years, the film school has developed proprietary software that allows students to experience — virtually — every aspect of the filmmaking experience. The cost of rental housing in Los Angeles, New York and Denver can be accessed with a few clicks of a mouse. The cost of obtaining equipment can easily be calculated. Students working within a set budget can see how much to devote to paying actors and directors, producers and key grips.
  • an instructor at the Community College of Denver is using ACCESS, a web-based game modelled after the board game “Life”, which simulates a person’s travels through his or her life, from college to retirement, with jobs, marriage, and possible children along the way. The instructor teaches the course in a flipped format, allowing students to receive information through videos, podcasts, downloadable lectures and social media, and then discuss the materials in class. The course is designed to help students successfully complete remedial coursework.

Results

The article offers the following results from a ‘consultant’s report’ but I couldn’t find any corroboration:

  • where the ACCESS game was used, scores on quizzes jumped 14 percent, and 71 percent of students completed the course, compared to 60 percent of those enrolled without the gaming component
  • students exhibited nearly identical pass/fail rates to those in non-IGBL courses
  • 69 percent of students across semesters indicated that they were either more or much more satisfied with their IGBL course, as compared to other courses; 85 percent of students indicated that they were either more or much more satisfied with their IGBL instructor, as compared to other instructors.
  • students indicated that their IGBL course did a better or much better job (as compared to non-IGBL courses) of helping them achieve a variety of learning outcomes, including: having fun while learning (83 percent/73 percent); applying learning to new situations (81 percent/72 percent); staying engaged in learning (79 percent/73 percent); feeling involved in the college (69 percent/60 percent); working well with other students (67 percent/61 percent).

Over to you

Contact North has descriptions of a number of immersive learning projects under its ‘Pockets of Innovation’ series, such as Loyalist College’s Border Simulation in Second Life.

See also:

Games and learning in digital worlds (en français)

More news of video games

Games to defeat obesity, Napoleon, and students’ learning, and other games’ news

I’d be interested to hear from others who are using game-based immersive learning in the two year college system.

Game-based learning: special edition of the ETS journal


Forge FX’s Heifer Village: Nepal

Bellotti, F. et al. (2014) Guest editorial: Game-based learning for 21st century transferable skills: Challenges and Opportunities, Educational Technology and Society, Vol. 17, No. 1

The Journal of Educational Technology and Society has a special issue on ‘Game-based learning for 21st century transferable skills: Challenges and Opportunities’.

This special issue focuses on analysing how digital SGs [serious games] can contribute to the knowledge society’s higher demand towards acquiring transferable, transversal skills, that can be applied in different contexts, dealing with various scientific disciplines and subjects. Examples of such skills, often referred to as 21st century transferable skills, include, for example, collaboration, critical thinking, creative thinking, problem solving, reasoning abilities, learning to learn, decision taking, digital literacy (Voogt & Pareja Roblin, 2010).

Five papers have been selected covering the following topics:

  • a study that identifies a relationship between learning outcomes and physiological measurements of mental workload
  • an evidence model for assessing persistence
  • two studies on pedagogical models …developed to support the effective use of serious games in formal education settings
  • an empirical investigation aimed at examining the interplay between learners’ motivation, engagement, and complex problem-solving outcomes in game-based learning
  • a large case-study of four formal education programs exploiting serious games based on multiuser virtual environments.

There are also a large number of papers on other topics in this issue. The focus is mainly on the K-12 sector, but the papers on serious games also have implications and potential for post-secondary education.

e-learning trends from South Africa



Chadwick, K. (2014) e-Learning Trends for 2014, Bizcommunity.com

This is an interesting perspective on corporate e-learning trends from Kirsty Chadwick in South Africa. I’ve focused on this, because trends in Africa are likely to be somewhat different from those here in North America, due to differences in access to the Internet and mobile phones. Here are her 10 picks:

  1. From textbook to tablet: the government of South Africa has launched a tablet program for high schools. ‘In 2014, 88,000 Huawei tablets will be distributed to 2200 public schools in Gauteng as part of a new e-learning initiative.’
  2. The shift to mobile: ‘Smartphone growth in Africa has increased by 43% annually since 2000, and experts predict that 69% of mobiles in Africa will have internet access by 2014.’
  3. More gaming
  4. MOOCs: ‘While MOOCs currently don’t have standardised quality assurance in place, this will likely change in the near future.’
  5. Social media: students’ success relies heavily on their ability to participate in study groups; those who engage in these groups learn significantly more than students who don’t.
  6. Classes online: ‘2014 is likely to see a large number of businesses moving over to online training. Recent studies have projected that by 2019, 50% of all classes taught, will be delivered online.’
  7. Trading desktop for mobile: ‘2014 will be the year in which the number of mobile users will exceed the number of desktop users.’
  8. More learning for everyone: 47% of online learners are over the age of 26, a significantly older profile than a few years ago
  9. HTML5: ‘improved JavaScript performance will begin to push HTML5 and the browser as a mainstream enterprise application development environment.’
  10. More interactivity: ‘courseware is likely to be more immersive and interactive ….the use of animations and games within learning environments keeps the tech-savvy generation engaged and entertained, leading to increased knowledge retention.’

Comment

How can I argue with someone in Africa on this? It looks pretty good to me from the other side of the world. However, I think there are some unique developments in online learning that will come out of Africa. So here are my very tentative suggestions for e-learning in Africa in 2014.

I agree that in Africa generally, mobile learning, cheap tablets and open textbooks will become driving forces, saving on expensive and often hard-to-get foreign textbooks, and ensuring more locally adaptable learning materials.

The big growth though will be in non-formal education, where major strides have already been made in supporting small farmers and small business development for women, the development of entrepreneurs, and of IT competencies and skills, using mobile phones, social networking, and direct links to university and government agencies in the field.

Corporate education will not be far behind, but e-learning will be focused mainly in large and/or multinational companies.

Unfortunately, in many African countries the penetration of online learning into formal education will be much slower, due to bureaucratic barriers, lack of investment, failure by established institutions to recognize the importance of technology in education, and governments giving less consideration to teacher training in technology use than to investment in the technology itself.

One or two African universities though will become world leaders in online learning through the use of local wi-fi networks and becoming commercial ‘hubs’ for global connections to the Internet, enabling them to cross-subsidize their online teaching activities.

Whatever the eventual outcome, what strikes me about Africa is the hope and the potential for major breakthroughs in online learning and e-learning. Necessity is the mother of invention.


 

Are we right to fear computers in education – or in life?


In this post, I’m going to look at some fun fiction about computers, then raise some questions about whether our fears are rational, or whether we really do need to question much more closely our addiction to technology, especially in education. This is not so much focused on specific new developments such as MOOCs (see: My Summer Paranoia) but on what it is reasonable to expect computers to do in education, and what we should not be trying to do with them.

Computers in film and print

There was an interesting article in the Globe and Mail on October 20 about IBM’s supercomputer, WATSON, being used to ‘help conquer business world challenges.’ Dr. Eric Brown of IBM described how WATSON was being used to help with medical diagnosis, or what he called ‘clinical-decision support,’ and how this approach could be extended to other areas in business, such as call-centre support, or financial services, to identify ‘problems’ where large amounts of data need to be crunched (did he mean derivatives?).

Just after reading the article, I accidentally came across an old 1970 movie on TVO last night, called ‘Colossus: The Forbin Project‘. It was based on the 1966 novel Colossus, by Dennis Feltham Jones, about a massive American defense computer, named Colossus, becoming sentient and deciding to assume control of the world. It does not have a good ending (at least for mankind’s freedom).

Colossus was the name given to the first large electronic computer, used to break the German High Command’s Lorenz cipher in the Second World War. It was located at Bletchley Park, England, not far from where the Open University’s headquarters are located.

The date of the movie is interesting: it was made at the height of the Cold War, yet when challenged by the power of two supercomputers (Colossus in the USA and Guardian in the Soviet Union), which decide to communicate with each other and combine their power, the Americans and the Communists come together to fight – unsuccessfully – the mutual threat from the computers, suggesting there is more in common across humanity than there is between humanity and machines.

Of course, this movie came two years after Stanley Kubrick’s masterful 2001: A Space Odyssey, in which HAL, the spaceship’s computer, begins to malfunction, kills nearly all the crew, and is finally shut down by the last remaining crew member, Dave Bowman. So we now have a score: humans 1, computers 1.

Then there is my personal favourite, The Matrix (1999). The film depicts a future in which reality as perceived by most humans is actually a simulated reality, or cyberspace, created by sentient machines to pacify and subdue the human population, while their bodies’ heat and electrical activity are used as an energy source. Upon learning this, computer programmer “Neo” is drawn into a rebellion against the machines, involving other people who have been freed from the “dream world” into reality. I put this one down as a draw, since there have been two sequels and the battle continues.

Lastly, a new film is coming out in March 2013, based on Orson Scott Card’s wonderful book ‘Ender’s Game‘, first published in 1985 and slightly updated in 1991. (If you have teenage boys, this is a must for a Christmas present, especially if they generally hate reading.) In preparation for an anticipated third invasion by an insectoid alien species, an international fleet maintains a school to find and train future fleet commanders. The world’s most talented children, including the novel’s protagonist, Ender Wiggin, are taken at a very young age to a training center known as the Battle School. There, teachers train them in the arts of war through increasingly difficult games, including ones undertaken in zero gravity in the Battle Room, where Ender’s tactical genius is revealed. Again, the book explores the intersection between virtuality and reality.

Computers: promise and reality

It is interesting to look at these old science fiction movies and novels and today’s computer world, and see where progress has been made, and where it hasn’t. Colossus in some ways anticipated the Internet, as the two computers searched for ‘pathways’ through which to communicate with each other. We certainly have much more remote surveillance, especially in the United Kingdom, where almost every public space is now under video surveillance, and where governments are increasingly monitoring the Internet, both for protecting individual freedoms, such as monitoring sexual exploitation of minors, and for more insidious purposes, such as industrial and political espionage. Claims have been made that 2001: A Space Odyssey predicted the iPad. Ender’s Game comes very close to representing the complexity and depth of many computer games today, and conspiracy theorists will tell you that the first moon landing was filmed in Hollywood, so close do movies come to presenting fiction as reality.

However, despite Watson and distributed computing, many of the developments in this early science fiction have proved to be much more difficult to implement. In particular, although all these early movies assumed voice recognition, we are still a long way from having the fluency depicted in them, even after more than 40 years of research and development. For instance, try communicating with WestJet’s or Telus’s automated answering systems (in WestJet’s case, the system frequently fails to recognize the spoken language of even native English speakers – such as myself!). These ‘voice recognition’ systems manage simple algorithmic decisions (yes or no; options 1-5) but cannot deal with anything that is not predictable, which is often the very reason why you need to communicate with these organizations (see the sketch below). In addition to the difficulties of voice recognition, these systems are clearly designed by computer specialists who do not take into account how humans behave, or the reasons they are likely to use the phone to communicate rather than the Internet.
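A toy sketch of the menu-driven logic behind such systems makes the point (this is entirely illustrative; no real IVR is a lookup table this simple). Anything outside the scripted options falls straight through to the ‘sorry, please repeat’ branch, and those unpredictable cases are often the very reason people phone in the first place.

```python
# Illustrative menu-driven call routing: a fixed set of recognized
# utterances maps to destinations; everything else is a dead end.
MENU = {
    "1": "bookings",
    "2": "cancellations",
    "3": "baggage",
    "yes": "confirm",
    "no": "main_menu",
}

def route(utterance: str) -> str:
    """Route a recognized utterance; unscripted input gets the fallback."""
    return MENU.get(utterance.strip().lower(), "sorry_please_repeat")

print(route("2"))    # -> cancellations
print(route("yes"))  # -> confirm
print(route("my flight was rebooked twice and my bag is missing"))
# -> sorry_please_repeat: the system has no branch for the real problem
```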

As Dr. Eric Brown of IBM admits, ‘When you try to create computer systems that can understand natural language, given all the nuance and ambiguity, it becomes a very significant problem.’ As he rightly says, human language is often implicit and tacit, using signs and meanings which humans have learned to interpret almost automatically, and usually correctly, but which are very difficult for computers to interpret. Indeed, in recent years more progress seems to have been made on face recognition than on voice recognition, no doubt driven by security concerns.


The biggest challenge that computers face, though, is in the field of artificial intelligence, and in particular how humans think and make decisions. As already noted, computers can handle algorithms very well, but algorithms are a comparatively small component of human decision-making. Humans tend to be inductive or intuitive thinkers, rather than deductive or algorithmic thinkers. Computers tend to operate in absolute terms: if part of the algorithm fails, the computer is likely to crash. Humans, however, are more qualitative and probabilistic in their thinking. They handle ambiguity better, are willing to make decisions on less than perfect information, and continue to operate even though they may be wrong in their thinking or actions – they tend to be much more self-correcting than computers (a contrast caricatured in the sketch below).
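The contrast can be caricatured in a few lines of code – a sketch of the distinction only, not a claim about how any real AI system is built: the algorithmic path fails outright on input it has no rule for, while the probabilistic path makes a best guess from imperfect evidence and carries on.

```python
# Brittle algorithmic decision vs. probabilistic decision under uncertainty.
RULES = {"sunny": "go out", "raining": "stay in"}

def algorithmic(weather: str) -> str:
    """Exact rule lookup: anything unlisted raises an error ('crashes')."""
    return RULES[weather]

def probabilistic(evidence: dict) -> str:
    """Weigh whatever evidence is available and commit to a best guess."""
    belief_rain = (evidence.get("cloud_cover", 0.0) * 0.6
                   + evidence.get("humidity", 0.0) * 0.4)
    return "stay in" if belief_rain > 0.5 else "go out"

print(probabilistic({"cloud_cover": 0.9}))  # decides despite missing data
try:
    algorithmic("drizzle")
except KeyError:
    print("algorithmic path failed on unanticipated input")
```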

Can we and should we?

This raises two important questions:

  • will it be possible to design machines that can think like humans?
  • And more importantly, if we can do this, should we?

These questions have particular significance for education, because as Dr. Brown of IBM said, ‘to build these kinds of systems you actually need to leverage learning, automatic learning and machine learning in a variety of ways.’

At the moment, even though IBM’s computers can beat experts at chess (Deep Blue) and outperform humans in quiz games such as Jeopardy (WATSON), and can support certain kinds of decision-making, such as medical diagnosis, they still struggle with non-algorithmic thinking. One human brain has many more nodes and networks than the largest computers today. According to Dharmendra Modha, director of cognitive computing at the IBM Almaden Research Center:

We have no computers today that can begin to approach the awesome power of the human mind. A computer comparable to the human brain would need to be able to perform more than 38 thousand trillion operations per second and hold about 3,584 terabytes of memory. (IBM’s BlueGene supercomputer, one of the world’s most powerful, has a computational capability of 92 trillion operations per second and 8 terabytes of storage.)
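Taking Modha’s figures at face value, the gap is easy to quantify:

```python
# Back-of-the-envelope comparison using only the numbers quoted above.
brain_ops = 38_000e12   # operations per second ("38 thousand trillion")
brain_mem = 3_584       # terabytes
bluegene_ops = 92e12    # operations per second
bluegene_mem = 8        # terabytes

print(f"compute gap: ~{brain_ops / bluegene_ops:.0f}x")  # ~413x
print(f"memory gap:  ~{brain_mem / bluegene_mem:.0f}x")  # ~448x
```

On those figures the brain is roughly 400 times beyond BlueGene on both compute and memory, which helps explain scepticism about short timetables.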

However, research and development in psychology probably will lead to developments in artificial intelligence that will enable very powerful computers, probably using networked distributed computing, to eventually outperform humans in more intuitive and less certain forms of thinking. Dr. Modha went on to predict that we’ll be able to simulate the workings of the brain by 2018. I’m not so sure. If we still haven’t satisfactorily cracked voice recognition after 40 years, it may take a little more than six years to tackle intuitive thinking. Nevertheless, I do believe eventually it will be possible to replicate in machines much of what is now performed by human brains. The issue then becomes whether this is practical or cost-efficient, compared with using humans for similar tasks, who in turn often have to be educated or trained at high cost to do these activities well.

Answering the second question – whether we should replace human thinking with computers – though is much more difficult. Machines have been replacing human activity since at least the Renaissance. The printing press put a lot of monks out of business. So won’t computers start making teachers redundant?

This assumes, though, that teaching and learning are purely about logic and reasoning. If only they were. So much of learning requires understanding of emotion and feelings, and the ability of students to relate to their teachers and their fellow students; above all, it is about fostering, developing and supporting values, especially freedom, security, and well-being. Indeed, even some computer scientists such as Dr. Brown argue that computers are most valuable when they are used to support rather than replace human activities: ‘It’s technology to help humans do their jobs better, faster, more effectively, more efficiently‘. And, as in films such as Colossus and The Matrix, it’s about computers supporting humanity, not the other way round.

The implications for teaching and learning

Thus my belief (how will a computer handle that?) is that computers are wonderful tools for supporting teaching and learning, and as cognitive and computer scientists become more knowledgeable, computers will increase in value for this purpose as time goes on. However, this means that these scientists need to work collaboratively, and more importantly as equals, with teachers and indeed learners, to ensure that computers are used in ways that respect not only the complexity of teaching and learning, but also the value systems that underpin a liberal education.

And it is here that I have the most concerns. There is, especially in the United States of America, a growing ideology that considers teachers to be ineffective or redundant, and which seeks ways to replace teachers with computers. Coursera-style MOOCs are just one example. Multiple-choice testing and open educational resources in the format of iTunes and OpenCourseWare are other examples. Once it’s ‘up there’, there are some who believe that the recorded lecture is the ‘teacher’. It is not: it is a transmitter of content, which is not the same as a teacher.

Another concern for us, as humans, is to be continually aware of the difference between virtuality and reality. This is not to criticize the use of virtual reality for teaching, but it is to ensure that learners understand the significance of their actions when they transfer skills from a virtual to a real world, and to be able to distinguish which world they are in. This is not yet a major problem because virtual reality is disappointingly under-used in education, but it is increasingly a feature of the lives of young people. This sensitivity to the difference between virtuality and reality will become an increasingly important life skill, as we begin to merge them, for instance in the remote control of robot welders in pipelines. It’s important to know the difference between training (virtual reality) and life, when a mistake can lead to an explosion or an oil leak, which has very real consequences.

Lastly, I also have some concerns about the ‘open culture’ of web 2.0. In general, as readers will know, I am a great supporter of web 2.0 tools in education, and of open access in particular. However, this does not apply to all web 2.0 tools, or all the ways in which they are used. Jaron Lanier, one of the founders of virtual reality, says:

“I know quite a few people … who are proud to say that they have accumulated thousands of friends on Facebook. Obviously, this statement can only be true if the idea of friendship is reduced.”

Also, while in general Lanier supports the use of crowdsourcing and the ‘wisdom of the crowd’ that underlies moves towards cMOOCs and Siemens’s theory of connectivism, he criticizes:

‘the odd lack of curiosity about the limits of crowd wisdom. This is an indication of the faith-based motivations behind such schemes. Numerous projects have looked at how to improve specific markets and other crowd wisdom systems, but too few projects have framed the question in more general terms or tested general hypotheses about how crowd systems work.’

None of these concerns undermine my belief that computers, when used appropriately, can and do bring enormous benefits to teaching and learning. We shouldn’t anthropomorphize computers (they don’t like it) but, as I learned from ‘Downton Abbey’, like all good servants, they need to know their place.

Questions

1. Do you believe that ‘we’ll be able to simulate the workings of the brain by 2018’? I’d like to hear from brain scientists if they agree – too often what’s reported in science is not what the majority of scientists think.

2. If we could ‘simulate the workings of the brain’, what impact would it have on teaching and learning?

3. Do you believe that there is a desire in some countries to replace teachers with computers? Do you see Coursera and xMOOCs as part of this conspiracy?

4. Do you think I am being irrational in my concerns about computers in teaching?

Further reading

HAL 9000 (2012) Wikipedia

Houpt, S. (2012) IBM hones Watson the supercomputer’s skills to help conquer business world challenges, The Globe and Mail, October 20

Lanier, J. (2010) You Are Not a Gadget New York: Alfred A. Knopf

Card, O. S. (1994) Ender’s Game New York: Tor

Colossus: The Forbin Project