Several events have prompted this reflection.
Man shoots computer
I have a new hero and his name is Lucas Hinch. Frustrated with his Dell PC, he took it into the street in Colorado Springs and shot it eight times. The police were summoned and later in court his firearm was confiscated, but Hinch is reported as saying: ‘It was worth it. It was glorious. Angels sung on high.’
Ex Machina
I have just seen a very good new movie, Ex Machina, about Caleb, a carefully selected young programmer at a large Internet search-engine company, charged with Turing-testing, i.e. evaluating the capabilities, and ultimately the consciousness, of an alluring female robot called Ava. Ava's artificial intelligence was developed mainly by structuring search-engine data, and she was built by Nathan, the company's reclusive CEO.
Now this is a movie, and a very good one, so it has to be entertaining, and in movies anything is possible. It is worth seeing, though, for its intelligent script, and in particular for the interactions between Caleb and the robot, and between Caleb and Nathan, as they discuss the nature and the (possible) potential of AI.
Thoughts prompted by these events
1. Dream on, AI enthusiasts
The first thought is how large the gap remains between what AI can actually do at present and what is expected of it in the future. AI is nowhere close to achieving the kinds of thinking and emotional intelligence demonstrated in Ex Machina. (I am sure there will be someone who will want to correct me on this – go for it.)
Although we are increasingly dependent on computers, they are still frustratingly incapable of doing what seem to humans to be the simplest procedures. Take corporate voice messaging, for instance. Now if I had a gun, and I could find a physical embodiment of telephone companies’ voice messaging, I would take it out into the street and shoot it. The only reason I am calling a company is because I can’t get standard information or a resolution to a matter through the corporate web site. If the best AI can do (and these companies have the money and motivation to have the best) is to provide set answers to a limited number of predetermined questions that rarely if ever address the reason you have called, then we are not just at the pre-human stage of AI development but at the stage of creating primordial bacteria (which is no mean feat in itself).
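The "set answers to a limited number of predetermined questions" pattern is easy to see in miniature. Here is a toy sketch (the keywords and replies are entirely hypothetical, not any real company's system) of why such a system dead-ends the moment your reason for calling falls outside its tiny script:

```python
# A canned-answer "support bot": it matches a few predetermined
# keywords; anything else falls through to a useless default.
CANNED_ANSWERS = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "billing": "Please see the billing section of our website.",
    "password": "Use the 'forgot password' link to reset it.",
}

def respond(query: str) -> str:
    # Return the first canned answer whose keyword appears in the query.
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in query.lower():
            return answer
    # No keyword matched: the caller's actual problem goes unanswered.
    return "Sorry, I didn't understand. Please choose from the menu."

print(respond("What are your hours?"))          # a scripted question: works
print(respond("Your driver damaged my fence"))  # the real reason you called: dead end
```

The second call illustrates the point: the one question that made you pick up the phone is precisely the one the script never anticipated.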
Have you ever tried Siri? It is pathetically limited (although the voice is very nice). However, if anyone is stupid enough to fall in love with Siri, as in the movie ‘Her’ (one of the most sentimentally awful movies I have ever seen) then they really do deserve whatever comes to them.
2. Current models of AI are just wrong
Since AI tends to be developed by computer scientists, they tend to use models of the brain based on how computers or computer networks work (since, of course, it will be a computer that has to run the AI). Ex Machina gets this right in suggesting that a completely different kind of hardware (what Nathan called wetware) will be needed, one that better matches the way human brains actually work. The basis of AI thus needs to reflect the biological rather than the mechanical foundations of human behaviour.
However, I am not convinced that Nathan’s software solution of modelling human behaviour through the analysis of big data captured by search engines will work either, because despite the wide range of ways humans use search engines, such data comes nowhere near capturing the full range of human behaviour. People behave differently on the Internet than in other areas of their lives. While hundreds of thousands play violent games or use online pornography, for example, this is not reflected (despite impressions given by the media) in actual behaviour in real-world contexts. Most humans can separate reality from fantasy, and online behaviour differs from behaviour in other contexts.
3. Do we want robots to be like people?
This is the question that really needs to be answered, and my view is that the answer is unequivocally ‘no.’ Several excellent movies, such as 2001: A Space Odyssey as well as Ex Machina, indirectly raise this question, and the answer is always negative for the future of human life. There are aspects of human life that are better done by machines, such as coal mining, booking airline tickets or even housework, but decision-making and ethics, for example, are best left to admittedly imperfect human beings, because decision-making and ethics need to privilege the interests of humans, not those of robots (or, more likely, large corporations).
One reason of course that there is so much interest in AI is that corporations want to reduce the costs of human workers by replacing them with machines. There comes a point though where naked free market interests work against the general human condition. It is no coincidence that the growing gap between the richest 1% and the rest of the world parallels the increased use of automation. The benefits of automation are not shared equally.
When we come to education in particular, the main cost is that of teachers and instructors. But learning is not only a complex activity of which only a relatively minor part can be effectively automated; it is an intensely human activity that benefits enormously from personal relationships and social interaction.
4. What should we use AI for?
We need to treat technology as a set of tools, not as autonomous systems. Technology is a means to an end, and the end must be determined by human beings. If we take education as an example, technology can be immensely helpful in supporting teachers, learners and learning. It can be used to make teaching more efficient, so long as it does not attempt to replace the relational aspects of teaching and learning. What it should not be used for is to replace the human element in teaching or, in the long term, learners themselves.
5. The need to establish rules and guidelines for AI
Although we are already seeing some of the social consequences of an unequal distribution of wealth resulting from the automation of human activities, we have been lucky so far in the sense that AI has proved so difficult to extend beyond very simple procedures. We have not yet had to face some of the ethical, social and security issues that will arise if AI becomes more successfully developed. (The first area is likely to be transportation, with the automation of driving.)
However, as many science fiction writers have predicted, we are probably getting to the point where we need some controls, rules, guidelines and procedures to help determine the limits of AI in general, and of computer-based learning in particular, in terms of where and how AI-controlled automation should be applied. In education, this means using computers to support teachers in the design and delivery of teaching and learning, in the assessment of ‘routine’ and predictable forms of learning, and in identifying students at risk, along with possible causes and actions to be taken. In all cases, though, these applications of computing need to be under the direct control of learners or teachers or, increasingly, both.
What I foresee is something like a Charter of Rights for humans in a world where AI is not only prevalent but also powerful (but then I’m an incorrigible optimist).
In the meantime, go and see Ex Machina, and enjoy it as a very interesting movie, even if some of its assumptions about the future are likely to be wrong – and some horribly right. For some interesting discussion of the morality of Ava, go to IMDb.
Rad, C. (2015) ‘Man shoots Dell computer 8 times after getting blue screen of death’, IGN, 22 April