In this post, I’m going to look at some fun fiction about computers, then raise some questions about whether our fears are rational, or whether we really do need to question much more closely our addiction to technology, especially in education. This is not so much focused on specific new developments such as MOOCs (see: My Summer Paranoia) but on what it is reasonable to expect computers to do in education, and what we should not be trying to do with them.

Computers in film and print

There was an interesting article in the Globe and Mail on October 20 about IBM’s supercomputer, Watson, being used to ‘help conquer business world challenges.’ Dr. Eric Brown of IBM described how Watson was being used to help with medical diagnosis, or what he called ‘clinical-decision support,’ and how this approach could be extended to other areas in business, such as call-centre support, or financial services, to identify ‘problems’ where large amounts of data need to be crunched (did he mean derivatives?).

Just after reading the article, I accidentally came across an old 1970 movie on TVO last night, called ‘Colossus: The Forbin Project’. It was based on the 1966 novel Colossus, by Dennis Feltham Jones, about a massive American defense computer, named Colossus, becoming sentient and deciding to assume control of the world. It does not have a good ending (at least for mankind’s freedom).

Colossus was the name given to the first large-scale electronic computer, used at Bletchley Park, England, in the Second World War to break the German Lorenz cipher (often confused with Enigma, which was attacked with other machines). Bletchley Park is not far from where the Open University's headquarters are located.

The date of the movie is interesting: it was made at the height of the Cold War, yet when confronted by the power of not one but two supercomputers (Colossus in the USA and Guardian in the Soviet Union), which decide to communicate with each other and combine their power, the Americans and the Soviets come together to fight – unsuccessfully – the mutual threat from the machines, suggesting there is more in common across humanity than there is between humanity and machines.

Of course, this movie came two years after Stanley Kubrick’s masterful 2001: A Space Odyssey, in which HAL, the spaceship’s computer, begins to malfunction, kills nearly all the crew, and is finally shut down by the last remaining crew member, Dave Bowman. So we now have a score: humans 1, computers 1.

Then there is my personal favourite, The Matrix (1999). The film depicts a future in which reality as perceived by most humans is actually a simulated reality, or cyberspace, created by sentient machines to pacify and subdue the human population, while their bodies’ heat and electrical activity are used as an energy source. Upon learning this, computer programmer “Neo” is drawn into a rebellion against the machines, alongside other people who have been freed from the “dream world” into reality. I put this one down as a draw, since there have been two sequels and the battle continues.

Lastly, a new film is coming out in March 2013, based on Orson Scott Card’s wonderful book ‘Ender’s Game‘, first published in 1985 and slightly updated in 1991. (If you have teenage boys, this is a must for a Christmas present, especially if they generally hate reading.) In preparation for an anticipated third invasion by an insectoid alien species, an international fleet maintains a school to find and train future fleet commanders. The world’s most talented children, including the novel’s protagonist, Ender Wiggin, are taken at a very young age to a training centre known as the Battle School. There, teachers train them in the arts of war through increasingly difficult games, including ones undertaken in zero gravity in the Battle Room, where Ender’s tactical genius is revealed. Again, the book explores the intersection between virtuality and reality.

Computers: promise and reality

It is interesting to compare these old science fiction movies and novels with today’s computer world, and to see where progress has been made, and where it hasn’t. Colossus in some ways anticipated the Internet, as the two computers searched for ‘pathways’ through which to communicate with each other. We certainly have much more remote surveillance, especially in the United Kingdom, where almost every public space is now under video surveillance, and where governments are increasingly monitoring the Internet, both to protect individual freedoms, such as combating the sexual exploitation of minors, and for more insidious purposes, such as industrial and political espionage. Claims have been made that 2001: A Space Odyssey predicted the iPad. Ender’s Game comes very close to representing the complexity and depth of many computer games today, and conspiracy theorists will tell you that the first moon landing was filmed in Hollywood, so close do movies come to presenting fiction as reality.

However, despite Watson and distributed computing, many of the developments in this early science fiction have proved much more difficult to implement. In particular, although all these early movies assumed voice recognition, we are still a long way from the fluency they depict, even after more than 40 years of research and development. For instance, try communicating with WestJet’s or Telus’s automated answering systems (in WestJet’s case, it frequently fails to recognize the spoken language of even native English speakers – such as myself!). These ‘voice recognition’ systems manage simple algorithmic decisions (yes or no; options 1-5) but cannot deal with anything that is not predictable, which is often the very reason why you need to communicate with these organizations in the first place. In addition to the difficulties of voice recognition, these systems are clearly designed by computer specialists who do not take into account how humans behave, or the reasons they are likely to use the phone to communicate rather than the Internet.
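To make the point concrete, here is a toy sketch in Python of the kind of rigid, menu-driven logic these systems follow. The options and wording are entirely invented for illustration – this is not based on any real airline or telecom system – but it shows why anything outside the anticipated script falls through to a dead end.

```python
# A toy sketch of a menu-driven phone system (purely illustrative;
# the options and wording are invented, not from any real company).

MENU = {
    "1": "bookings",
    "2": "cancellations",
    "3": "baggage",
    "4": "loyalty program",
    "5": "speak to an agent",
}

def handle_caller_input(utterance: str) -> str:
    """Map a caller's utterance onto one of five predefined options."""
    utterance = utterance.strip().lower()
    # The system copes only with input it has anticipated...
    if utterance in MENU:
        return f"Routing you to {MENU[utterance]}."
    if utterance in ("yes", "no"):
        return f"You said '{utterance}'."
    # ...anything unanticipated - often the very reason for the call -
    # falls through to a dead end.
    return "Sorry, I didn't understand. Please choose an option from 1 to 5."

if __name__ == "__main__":
    print(handle_caller_input("2"))                                     # works
    print(handle_caller_input("my flight was cancelled, I must rebook"))  # fails
```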

As Dr. Eric Brown of IBM admits, ‘When you try to create computer systems that can understand natural language, given all the nuance and ambiguity, it becomes a very significant problem.’ As he rightly says, human language is often implicit and tacit, using signs and meanings that humans have learned to interpret almost automatically, and usually correctly, but which are very difficult for computers to interpret. Indeed, in recent years more progress seems to have been made on face recognition than on voice recognition, no doubt driven by security concerns.

The biggest challenge that computers face, though, is in the field of artificial intelligence, and in particular in replicating how humans think and make decisions. As already noted, computers can handle algorithms very well, but algorithms are a comparatively small component of human decision-making. Humans tend to be inductive or intuitive thinkers, rather than deductive or algorithmic thinkers. Computers tend to operate in absolute terms: if part of the algorithm fails, the computer is likely to crash. Humans, however, are more qualitative and probabilistic in their thinking. They handle ambiguity better, are willing to make decisions on less than perfect information, and continue to operate even though they may be wrong in their thinking or actions – they tend to be much more self-correcting than computers.
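A rough illustration of that contrast, again sketched in Python with invented ‘intents’ (not drawn from any real system): a strictly algorithmic lookup fails outright on imperfect input, whereas a more probabilistic approach makes a best guess, admits it may be wrong, and carries on.

```python
# Contrasting a brittle, exact-match lookup with a probabilistic best-guess
# lookup. The "intents" are invented for the example.

import difflib

KNOWN_INTENTS = ["book a flight", "cancel a booking", "check baggage rules"]

def rigid_lookup(request: str) -> str:
    # Algorithmic: either the input matches exactly, or the program fails.
    index = {intent: intent for intent in KNOWN_INTENTS}
    return index[request]          # raises KeyError on anything unexpected

def probabilistic_lookup(request: str) -> str:
    # More human-like: accept less-than-perfect information, make a best
    # guess, and stay self-correcting by admitting uncertainty.
    matches = difflib.get_close_matches(request, KNOWN_INTENTS, n=1, cutoff=0.4)
    if matches:
        return f"Best guess: '{matches[0]}' (may be wrong - please confirm)."
    return "Not sure what you meant - could you rephrase?"

if __name__ == "__main__":
    print(probabilistic_lookup("cancle my booking"))   # copes with the typo
    try:
        rigid_lookup("cancle my booking")
    except KeyError:
        print("Rigid lookup crashed on imperfect input.")
```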

Can we and should we?

This raises two important questions:

  • Will it be possible to design machines that can think like humans?
  • And, more importantly, if we can do this, should we?

These questions have particular significance for education, because as Dr. Brown of IBM said, ‘to build these kinds of systems you actually need to leverage learning, automatic learning and machine learning in a variety of ways.’

At the moment, even though IBM’s computers have beaten experts at chess (Deep Blue) and outperformed human champions at quiz games such as Jeopardy (Watson), and can support certain kinds of decision-making, such as medical diagnosis, they still struggle with non-algorithmic thinking. One human brain has many more nodes and networks than the largest computers today. According to Dharmendra Modha, director of cognitive computing at the IBM Almaden Research Center:

We have no computers today that can begin to approach the awesome power of the human mind. A computer comparable to the human brain would need to be able to perform more than 38 thousand trillion operations per second and hold about 3,584 terabytes of memory. (IBM’s BlueGene supercomputer, one of the world’s most powerful, has a computational capability of 92 trillion operations per second and 8 terabytes of storage.)
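Simply dividing the figures quoted above gives a sense of the gap Modha is describing (the numbers are his; the back-of-the-envelope arithmetic is mine):

```python
# Back-of-the-envelope comparison using only the figures quoted above.

brain_ops_per_sec = 38_000e12     # 38 thousand trillion operations per second
brain_memory_tb   = 3_584         # terabytes

bluegene_ops_per_sec = 92e12      # 92 trillion operations per second
bluegene_memory_tb   = 8          # terabytes

print(f"Compute gap: ~{brain_ops_per_sec / bluegene_ops_per_sec:.0f}x")  # ~413x
print(f"Memory gap:  ~{brain_memory_tb / bluegene_memory_tb:.0f}x")      # ~448x
```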

However, research and development in psychology will probably lead to developments in artificial intelligence that enable very powerful computers, probably using networked, distributed computing, eventually to outperform humans in more intuitive and less certain forms of thinking. Dr. Modha went on to predict that we’ll be able to simulate the workings of the brain by 2018. I’m not so sure. If we still haven’t satisfactorily cracked voice recognition after 40 years, it may take a little more than six years to tackle intuitive thinking. Nevertheless, I do believe it will eventually be possible to replicate in machines much of what is now performed by human brains. The issue then becomes whether this is practical or cost-efficient, compared with using humans for similar tasks – humans who in turn often have to be educated or trained at high cost to do these activities well.

Answering the second question – whether we should replace human thinking with computers – is much more difficult. Machines have been replacing human activity since at least the Renaissance. The printing press put a lot of monks out of business. So won’t computers start making teachers redundant?

This assumes, though, that teaching and learning are purely about logic and reasoning. If only they were. So much of learning requires an understanding of emotion and feelings, and the ability of students to relate to their teachers and their fellow students; above all, it is about fostering, developing and supporting values, especially freedom, security, and well-being. Indeed, even some computer scientists such as Dr. Brown argue that computers are most valuable when they are used to support rather than replace human activities: ‘It’s technology to help humans do their jobs better, faster, more effectively, more efficiently.’ And, as in films such as Colossus and The Matrix, it’s about computers supporting humanity, not the other way round.

The implications for teaching and learning

Thus my belief (how will a computer handle that?) is that computers are wonderful tools for supporting teaching and learning, and, as cognitive and computer scientists become more knowledgeable, computers will become increasingly valuable for this purpose as time goes on. However, this means that these scientists need to work collaboratively, and more importantly as equals, with teachers and indeed learners, to ensure that computers are used in ways that respect not only the complexity of teaching and learning, but also the value systems that underpin a liberal education.

And it is here that I have the most concerns. There is, especially in the United States of America, a growing ideology that considers teachers to be ineffective or redundant and that seeks ways to replace them with computers. Coursera-style MOOCs are just one example. Multiple-choice testing and open educational resources in the format of iTunes U and OpenCourseWare are others. Once a lecture recording is ‘up there’, there are some who believe that it is the ‘teacher.’ It is not: it is a transmitter of content, which is not the same as a teacher.

Another concern for us, as humans, is the need to be continually aware of the difference between virtuality and reality. This is not to criticize the use of virtual reality for teaching, but to ensure that learners understand the significance of their actions when they transfer skills from a virtual to a real world, and are able to distinguish which world they are in. This is not yet a major problem, because virtual reality is disappointingly under-used in education, but it is increasingly a feature of the lives of young people. This sensitivity to the difference between virtuality and reality will become an increasingly important life skill as we begin to merge the two, for instance in the remote control of robot welders on pipelines. It is important to know the difference between training (virtual reality) and life, when a mistake can lead to an explosion or an oil leak, which has very real consequences.

Lastly, I also have some concerns about the ‘open culture’ of web 2.0. In general, as readers will know, I am a great supporter of web 2.0 tools in education, and of open access in particular. However, this does not apply to all web 2.0 tools, or to all the ways in which they are used. Jaron Lanier, one of the founders of virtual reality, says:

 “I know quite a few people … who are proud to say that they have accumulated thousands of friends on Facebook. Obviously, this statement can only be true if the idea of friendship is reduced.”

Also, while in general Lanier supports the use of crowdsourcing and the ‘wisdom of the crowd’ that underlies moves towards cMOOCs and Siemens’ theory of connectivism, he criticizes:

‘the odd lack of curiosity about the limits of crowd wisdom. This is an indication of the faith-based motivations behind such schemes. Numerous projects have looked at how to improve specific markets and other crowd wisdom systems, but too few projects have framed the question in more general terms or tested general hypotheses about how crowd systems work.’

None of these concerns undermine my belief that computers, when used appropriately, can and do bring enormous benefits to teaching and learning. We shouldn’t anthropomorphize computers (they don’t like it) but, as I learned from ‘Downton Abbey’, like all good servants, they need to know their place.

Questions

1. Do you believe that ‘we’ll be able to simulate the workings of the brain by 2018’? I’d like to hear from brain scientists if they agree – too often what’s reported in science is not what the majority of scientists think.

2. If we could ‘simulate the workings of the brain’, what impact would it have on teaching and learning?

3. Do you believe that there is a desire in some countries to replace teachers with computers? Do you see Coursera and xMOOCs as part of this conspiracy?

4. Do you think I am being irrational in my concerns about computers in teaching?

Further reading

HAL 9000 (2012) Wikipedia

Houpt, S. (2012) IBM hones Watson the supercomputer’s skills to help conquer business world challenges The Globe and Mail, October 20

Lanier, J. (2010) You Are Not a Gadget New York: Alfred A. Knopf

Card, O. S. (1994) Ender’s Game New York: Tor

Colossus: The Forbin Project 

Comments

  1. Tony, you ask brain scientists if they think we’ll be able to simulate the workings of the brain by 2018. This is not a testable question because different scientists will have different interpretations of what it means to “simulate the workings of the brain.” Another untestable question is “Can machines think?”. Obviously, different researchers and philosophers will have different definitions of the word “think”.

    Turing proposed a test to determine whether a machine could fool scientists into thinking that it was a human capable of carrying on a dialogue in natural language [1]. Also, Hugh Loebner has underwritten a competition with prize money going to anyone who has programmed a computer that can pass the Turing test. So far, no one has. Searle claims that even if a machine is capable of passing the Turing test, there is no way of knowing whether it is really conscious or not [2].

    Of course, if brain scientists and AI researchers are someday able to program a computer that can pass the Turing test, that would have a big impact on education. Even if we know (somehow) that the machine isn’t conscious, it will be good enough for government work.

    Ray Kurzweil has made a $25,000 bet with Mitch Kapor (designer of the Lotus 1-2-3 spreadsheet) that by 2029 a machine will be able to pass a version of the Turing test that is much harder than the one proposed by Loebner. In drawing up the contract specifying the terms of their bet, one of the sticking points they had to hammer out was a definition of what it means to be “human”.

    Kurzweil contends that by 2029 some (many?) “humans” will have computer chips implanted in their brains, so Mitch and Ray had to determine if folks with chips in their brains should, for the purposes of the bet, be defined as human or not.

    Your second question – “If we could ‘simulate the workings of the brain’, what impact would it have on teaching and learning?” – is also one that I would like to recast in terms of the Turing test.

    I would ask: (1) “If a machine could be programmed to pass the Turing test, would it have sufficient intelligence to improve its own programming and hardware design?” (2) “If a machine can improve its own programming and hardware, would that lead to an intelligence explosion?” I tend to agree with I. J. Good:

    “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” [3]

    So, if/when a machine is able to pass the Turing test, it will begin to have an impact on education. And, if/when an ultraintelligent machine comes into being, it will have a major impact on a lot more than just education.

    So, getting back to Kurzweil’s predictions: in The Singularity Is Near he predicts that a machine will pass his version of the Turing test by 2029 and that a Technological Singularity (his version of an ultraintelligent machine) will come into being by 2045. [4]

    Kurzweil is an optimist about the feasibility of creating a computer that can pass the Turing test, and so on.

    He’s also optimistic that AI and brain research will turn out for the good. But, even though he’s optimistic regarding the desirability of AI, he’s probably written more than anyone else about the potential dangers. And he goes out of his way to invite critics of AI research (such as Bill Joy [5]) to conferences sponsored by organizations such as Singularity University.

    You also ask: “Do you believe that there is a desire in some countries to replace teachers with computers? Do you see Coursera and xMOOCs as part of this conspiracy?”

    I think it is perfectly reasonable for the citizens of a country to want to use technology to both (a) improve the quality of instruction and (b) reduce the cost of instruction. Now, if we find ways to use computers to do things that were once done by teachers, is that the same as “replacing teachers with computers”?

    In some cases, teachers would much rather let the computer do things that they once had to do. For example, before the advent of computers teachers had to keep track of grades with pencil and paper. If they taught a large class, they generally tried to find someone else to tally up the grades. But if they couldn’t, they would have to do the arithmetic themselves. When personal computers and spreadsheets came into being, teachers with any sense used the spreadsheet to tally up the grades. In this case, they loved the idea of being replaced by a computer.

    On the other hand, college professors have been resisting rather simple and quite reasonable ideas for substituting recordings of lectures for live lectures. Way back in the early seventies, Stanford engineering professor (and later Dean of Engineering) J. F. Gibbons came up with tutored video instruction [6], and faculty have been resisting the idea ever since. IMO, this is so for two reasons: (1) they fear that they might be innovated out of a job, and (2) they prefer lecturing over other tasks they might end up doing (e.g. grading papers), even if they do not lose their job.

    As I see it, MOOCs are just the latest chapter in the long history of faculty putting up resistance to technological change. Some claim that this story goes all the way back to ancient Greece, when the poet societies (the educators of that era) put up resistance to the advent of writing. Before writing, storytellers used poetry to help them remember long stories (which was one way to pass the culture down from one generation to the next). So, with the slow advent of writing, the poet societies thought they were being innovated out of a job, and they put up a fuss which lasted for quite a long time.

    If Kurzweil is right, I can easily see faculty continuing to resist change, but as we approach 2029 (much less 2045) computers will become capable of doing more and more things that humans think they alone can do. So, more and more jobs will be lost to technology. So, it’s an issue for everyone, not just teachers.

    For a good read on the near-term effect of automation on the job market, check out Race Against the Machine by Erik Brynjolfsson and Andrew McAfee [7].

    As for your last question: “Do you think I am being irrational in my concerns about computers in teaching?”

    I think your concerns are quite rational. If you are concerned that some teacher jobs might be lost to technology (or might not come into being in developing countries), then your concern is quite rational. However, I would encourage you and your readers to consider this issue in the larger context of how advancing computer technology holds both promise and peril for all of us.

    Personally, I’m an optimist, but I do think it’s very important to discuss the potential dangers of advancing AI, not to mention the profound questions it raises for the future of humanity.

    Best,
    Fred

    [1] http://en.wikipedia.org/wiki/Turing_test

    [2] http://en.wikipedia.org/wiki/Chinese_room

    [3] http://en.wikipedia.org/wiki/I._J._Good

    [4] http://en.wikipedia.org/wiki/Technological_singularity

    [5] http://en.wikipedia.org/wiki/Bill_Joy

    [6] Tutored Video Instruction (TVI) is a collaborative learning methodology in which a small group of students studies a videotape of a lecture. We constructed a fully virtual version of TVI called Distributed Tutored Video Instruction (DTVI), in which each student has a networked computer with audio microphone-headset and video camera to support communication within the group. In this report, we compare survey questionnaires, observations of student interactions, and grade outcomes for students in the face-to-face TVI condition with those of students in the DTVI condition. Our analysis also includes comparisons with students in the original lecture. This two and a half year study involved approximately 700 students at two universities. Despite finding a few statistically significant process differences between TVI and DTVI, the interactions were for the most part quite similar. Course grade outcomes for TVI and DTVI were indistinguishable, and these collaborative conditions proved better than lecture. We conclude that this kind of highly interactive virtual collaboration can be an effective way to learn.

    Link: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.73.1868

    [7] http://raceagainstthemachine.com/

    • Hi, Fred

      Many thanks for your excellent comment on my post on ‘Are we right to fear computers?’ This was exactly what I was hoping for when I asked for comments.

      As someone who has worked nearly all his life promoting the use of technology, of course I support the use of computers in teaching. However, too often over-enthusiastic or unthoughtful claims are made for computing. That has always been true, but today I do have a concern about what limits we should put on computer applications in education, as computing power and the spread of the Internet increase. Just because we can do some things with computers doesn’t necessarily mean we should. However, this means setting down, or agreeing on, some criteria for judging whether technology applications are beneficial or harmful, and there is not, in my view, enough discussion or consideration given to this issue in education (or computing) – which is why I raised the issue in my blog post.

      The other concern I have is that the value of computers in education depends very much on how we believe humans learn. Too often computer scientists (and unfortunately too many faculty) see education as a transfer of knowledge from those who know to those who don’t. Teachers, though, generally see learning as a process that requires the development of skills, particularly cognitive skills, but social and ethical skills as well. I’m not saying that computers can’t help with skills development – for instance through simulations, the use of the Internet for interpersonal communication, or repeated practice in math – but too often (as in Coursera MOOCs) teaching is seen merely as knowledge transmission, filling empty buckets, so that instead of teachers transferring knowledge, computers do it. As someone once said, if a teacher can be replaced by a computer, they should be.

      So yes, I agree with you that computers can and have helped enormously in supporting teaching and learning, but it is also essential that the broader qualities we expect in teachers are not lost just because computers can replace parts of the process. In the end, skilled teachers should retain control of the teaching and learning process, and not be thoughtlessly replaced simply because it is possible; otherwise we will lose our humanity.

      Many thanks once again,
