5 Responses
  1. John Larkin
    October 28, 2012 - 10:46 pm

    Hi Tony, computers may have access to almost infinite databases of knowledge, be capable of interpreting data, and even learn, yet I do not think they will actually “understand” or “imagine” what they are actually doing. Check out the Chinese Room thought experiment by John Searle.
    Cheers, John.

  2. Fred Beshears
    November 16, 2012 - 12:46 pm

    Tony, you ask brain scientists if they think we’ll be able to simulate the workings of the brain by 2018. This is not a testable question, because different scientists will have different interpretations of what it means to “simulate the workings of the brain.” Another untestable question is “Can machines think?” Obviously, different researchers and philosophers will have different definitions of the word “think”.

    Turing proposed a test to determine whether a machine could fool scientists into thinking that it was a human capable of carrying on a dialog in natural language [1]. Also, Hugh Loebner has underwritten a competition with prize money going to anyone who has programmed a computer that can pass the Turing test. So far, no one has. Searle claims that even if a machine is capable of passing the Turing test, there is no way of knowing whether it is really conscious or not [2].

    Of course, if brain scientists and AI researchers are someday able to program a computer that can pass the Turing test, that would have a big impact on education. Even if we know (somehow) that the machine isn’t conscious, it will be good enough for government work.

    Ray Kurzweil has made a $25,000 bet with Mitch Kapor (creator of the Lotus 1-2-3 spreadsheet) that by 2029 a machine will be able to pass a version of the Turing test that is much harder than the one proposed by Loebner. In drawing up the contract specifying the terms of their bet, one of the sticking points they had to hammer out was a definition of what it means to be “human”.

    Kurzweil contends that by 2029 some (many?) “humans” will have computer chips implanted in their brains, so Mitch and Ray had to determine if folks with chips in their brains should, for the purposes of the bet, be defined as human or not.

    Your second question – “If we could ‘simulate the workings of the brain’, what impact would it have on teaching and learning?” – is also one that I would like to recast in terms of the Turing test.

    I would ask: (1) “If a machine could be programmed to pass the Turing test, would it have sufficient intelligence to improve its own programming and hardware design?” (2) “If a machine can improve its own programming and hardware, would that lead to an intelligence explosion?” I tend to agree with I. J. Good:

    “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” [3]

    So, if/when a machine is able to pass the Turing test, it will begin to have an impact on education. And if/when an ultraintelligent machine comes into being, it will have a major impact on a lot more than just education.

    So, getting back to Kurzweil’s predictions: in The Singularity Is Near, he predicts that a machine will pass his version of the Turing test by 2029 and that a Technological Singularity (his version of an ultraintelligent machine) will come into being by 2045. [4]

    Kurzweil is an optimist about the feasibility of creating a computer that can pass the Turing test, and so on.

    He’s also optimistic that AI and brain research will turn out for the good. But even though he’s optimistic regarding the desirability of AI, he’s probably written more than anyone else about its potential dangers. And he goes out of his way to invite critics of AI research (such as Bill Joy) to conferences sponsored by organizations such as Singularity University.

    You also ask: “Do you believe that there is a desire in some countries to replace teachers with computers? Do you see Coursera and xMOOCs as part of this conspiracy?”

    I think it is perfectly reasonable for the citizens of a country to want to use technology to both (a) improve the quality of instruction and (b) reduce the cost of instruction. Now, if we find ways to use computers to do things that were once done by teachers, is that the same as “replacing teachers with computers”?

    In some cases, teachers would much rather let the computer do things that they once had to do. For example, before the advent of computers teachers had to keep track of grades with pencil and paper. If they taught a large class, they generally tried to find someone else to tally up the grades. But if they couldn’t, they would have to do the arithmetic themselves. When personal computers and spreadsheets came into being, teachers with any sense used the spreadsheet to tally up the grades. In this case, they loved the idea of being replaced by a computer.
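    The grade-tallying arithmetic described above is easy to make concrete. Here is a minimal sketch in Python of what the spreadsheet automated; the component weights and student scores are invented purely for illustration:

```python
# Minimal sketch of the grade tallying a spreadsheet automated:
# compute each student's weighted course total from component scores.
# All names, weights, and scores below are invented for illustration.

weights = {"homework": 0.4, "midterm": 0.25, "final": 0.35}

scores = {
    "Alice": {"homework": 92, "midterm": 85, "final": 90},
    "Bob": {"homework": 78, "midterm": 80, "final": 74},
}

def course_total(components, weights):
    """Weighted average of component scores -- the arithmetic
    teachers once did by hand with pencil and paper."""
    return sum(weights[name] * score for name, score in components.items())

for student, components in scores.items():
    print(f"{student}: {course_total(components, weights):.2f}")
```

    The point of the anecdote stands either way: once this arithmetic could be delegated to a machine, teachers were happy to delegate it.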

    On the other hand, college professors have long resisted rather simple and quite reasonable ideas for substituting recordings of lectures for live lectures. Way back in the early seventies, J. F. Gibbons of Stanford came up with tutored video instruction [6], and faculty have been resisting the idea ever since. IMO, this is so for two reasons: (1) they fear that they might be innovated out of a job, and (2) they prefer lecturing over other tasks they might end up doing (e.g. grading papers), even if they do not lose their job.

    As I see it, MOOCs are just the latest chapter in the history of faculty resistance to technological change. Some claim that this story goes all the way back to ancient Greece, when the poet societies (the educators of that era) resisted the advent of writing. Before writing, storytellers used poetry to help them remember long stories, which was one way to pass the culture down from one generation to the next. So, as writing slowly spread, the poet societies felt they were being innovated out of a job, and they put up a fuss that lasted for quite a long time.

    If Kurzweil is right, I can easily see faculty continuing to resist change, but as we approach 2029 (much less 2045), computers will become capable of doing more and more things that humans think they alone can do. More and more jobs will be lost to technology, so it’s an issue for everyone, not just teachers.

    For a good read on the near-term effect of automation on the job market, check out Race Against the Machine by Erik Brynjolfsson and Andrew McAfee [7].

    As for your last question: “Do you think I am being irrational in my concerns about computers in teaching?”

    I think your concerns are quite rational. If you are concerned that some teacher jobs might be lost to technology (or might not come into being in developing countries), then your concern is quite rational. However, I would encourage you and your readers to consider this issue in the larger context of how advancing computer technology holds both promise and peril for all of us.

    Personally, I’m an optimist, but I do think it’s very important to discuss the potential dangers of advancing AI, not to mention the profound questions it raises for the future of humanity.

    [1] Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind, 59(236), 433–460.

    [2] Searle, J. R. (1980). “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 3(3), 417–424.

    [3] Good, I. J. (1965). “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers, 6, 31–88.

    [4] Kurzweil, R. (2005). The Singularity Is Near. New York: Viking.
    [6] Tutored Video Instruction (TVI) is a collaborative learning methodology in which a small group of students studies a videotape of a lecture. We constructed a fully virtual version of TVI called Distributed Tutored Video Instruction (DTVI), in which each student has a networked computer with audio microphone-headset and video camera to support communication within the group. In this report, we compare survey questionnaires, observations of student interactions, and grade outcomes for students in the face-to-face TVI condition with those of students in the DTVI condition. Our analysis also includes comparisons with students in the original lecture. This two and a half year study involved approximately 700 students at two universities. Despite finding a few statistically significant process differences between TVI and DTVI, the interactions were for the most part quite similar. Course grade outcomes for TVI and DTVI were indistinguishable, and these collaborative conditions proved better than lecture. We conclude that this kind of highly interactive virtual collaboration can be an effective way to learn.


    [7] Brynjolfsson, E., & McAfee, A. (2011). Race Against the Machine. Digital Frontier Press.

    • Tony Bates
      November 16, 2012 - 1:24 pm

      Hi, Fred

      Many thanks for your excellent comment on my post on ‘Are we right to fear computers?’ This was exactly what I was hoping for when I asked for comments.

      As someone who has worked nearly all his life promoting the use of technology, of course I support the use of computers in teaching. However, too often over-enthusiastic or thoughtless claims are made for computing. That has always been true, but today, as computing power and the spread of the Internet increase, I do have a concern about what limits we should put on computer applications in education. Because we can do some things with computers doesn’t necessarily mean we should. However, this means setting down or agreeing on some criteria for judging whether technology applications are beneficial or harmful, and there is not, in my view, enough discussion or consideration given to this issue in education (or computing) – which is why I raised the issue in my blog post.

      The other concern I have is that the value of computers in education depends very much on how we believe humans learn. Too often computer scientists (and unfortunately too many faculty) see education as a transfer of knowledge from those who know to those who don’t. Teachers, though, generally see learning as a process that requires the development of skills – particularly cognitive skills, but social and ethical skills as well. I’m not saying that computers can’t help with skills development – for instance through simulations, the use of the Internet for inter-personal communication, or repeated practice in math – but too often (as in Coursera MOOCs) teaching is seen merely as knowledge transfer, filling empty buckets, so that instead of teachers transferring knowledge, computers do it. As someone once said, if a teacher can be replaced by a computer, they should be.

      So yes, I agree with you that computers can and have helped enormously in supporting teaching and learning, but it is also essential that the broader qualities we expect in teachers are not lost just because computers can replace parts of the process. In the end, skilled teachers should retain control of the teaching and learning process and not be thoughtlessly replaced simply because it is possible; otherwise we will lose our humanity.

      Many thanks once again,

  3. edtech - issues | Annotary
    January 17, 2013 - 10:26 am

    [...] Look At How Open Source Textbooks May Actually Work – Edudemic [...]

  4. […] post by +Tony Bates with some final questions for further reflection. Are we right to fear computers in education – or in life? In this post, I’m going to look at some fun fiction about computers, then raise some questions […]
