Do you find yourself worried by the implications of Humans, Channel 4’s new drama about the exploits of near-human intelligent robots? Have you ever fretted over the apocalyptic warnings of Stephen Hawking and Elon Musk about the threat of superintelligent artificial intelligence? Have your children ever lain awake wide-eyed thinking about robot drone armies, such as those in Marvel’s film Avengers: Age of Ultron?
If you find any of this creepy, or have answered “yes” to any of these questions, you should immediately watch footage from the recent DARPA Robotics Challenge.
The DARPA Robotics Challenge is unusual in that it requires bipedal robots to do only the everyday things humans do: getting out of cars, walking into buildings, climbing stairs, negotiating uneven ground, turning valves, and picking up and using a saw to cut a hole in a wall. Hardly skills worthy of Ninja Warrior UK, but for KAIST, the winning team that walked away with the US$2m prize, and for all the teams that failed, it was tough going.
The winning robot completed only eight of the nine tasks, many of which would not trouble a seven-year-old. In fact, all but three teams failed the rather basic challenge of getting out of a stationary car, even with no door to complicate matters.
Even simple things are hard
What makes this competition footage so funny is how mercilessly it punctures the myth of the supreme power of artificial intelligence. We’ve evolved – over millions of years – to live and move in the physical world. As such, we tend to discount the sophistication necessary to do the simplest of things. We falsely ascribe simplicity to acts such as walking through doors and picking up power tools because we find them simple. In the same way, we find certain things – such as multiplying 82 by 17 in our heads – difficult, even though for a computer this is trivial.
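To make the asymmetry concrete, here is a minimal, purely illustrative Python sketch (not part of the original argument): the multiplication that slows us down is a single line that a machine can evaluate millions of times per second, while no comparably short program exists for getting out of a car or picking up a saw.

    import timeit

    # The "hard" human task: multiply 82 by 17.
    product = 82 * 17
    print(product)  # 1394

    # Time a million evaluations of the same expression.
    seconds = timeit.timeit("82 * 17", number=1_000_000)
    print(f"1,000,000 multiplications took {seconds:.4f} seconds")

    # There is no equivalent one-liner for "get out of a car"
    # or "cut a hole in a wall" - which is the point of the DRC footage.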
This creates a cognitive bias: if a machine can do something we find hard, we tend to assume it can easily do the simple stuff as well. Like all biases, this assumption isn’t necessarily true.
We also fall for a generality bias: since we can do many different things, we assume that a machine that can do one of them can do the others as well. This conflicts with the way computing research actually happens, which tends to focus on getting a computer to do one thing (partly because there’s no easy way to research “doing everything”). Machines have grown up in a completely different environment from us, so it shouldn’t come as a surprise that they are good at doing different things.
Science fiction still … fiction
The notion that “artificial intelligence” equals “computers (or non-humans) are people” stretches back to antiquity. The poet Ovid’s character Pygmalion falls in love with a statue he has carved, Galatea, so lifelike it (she) comes alive. The idea is still a powerful one. Hollywood, and fiction in general, loves robots. From The Terminator to A.I., from Her to Humans, a “machine person” is an easy trope with which to explore complex issues of embodied identity.
In fact robots (from the Czech robota, meaning forced labour) emerged not from research but from the 1920s play R.U.R. by Czech writer Karel Čapek, which played upon universal fears of the servants – the working class – taking over. So it’s the equivalent of fearing what would happen if orcs took over London, or of planning how to cope with a zombie apocalypse: it’s fun, but unrelated to reality.
Computers aren’t people
Computer scientist Jaron Lanier says the problem lies with the myth of computers as people, which survives due to a domineering subculture in the technical world. Visions of robots drive researchers on, generating new achievements that feed back into myth-making in fiction, which in turn encourages funding and further research.
In the 1960s, the film 2001: A Space Odyssey saw full artificial intelligence as only ten or 20 years away, an estimate that has remained remarkably constant among experts before and since. Our reactions are channelled by the computers-as-people myth, pushing us to think of it as a choice between stopping Skynet, Terminator-style, or welcoming our new mechanical overlords. At heart, these fears expose the parallel and competing visions of what computing should be.
Early AI pioneer Alan Turing articulated a vision of the computer as the beginnings of a synthetic human being: his Turing test defines artificial intelligence as behaviour indistinguishable from that of a human being.
Douglas Engelbart, on the other hand, pioneered an alternative vision: computing as a means to “augment human intellect” (Engelbart also gave us the mouse, bitmapped screens, and the graphical user interface). The closest Hollywood ever got to Engelbart’s vision was Neil Burger’s film Limitless, in which a pill lets humans tap the full power of their brains. But as mere augmentation doesn’t raise the kind of philosophical questions demanded by fiction, it’s unlikely to create a mythology juggernaut.
If you’re worried about AI and the rise of the machines, Lanier points out that while computer power has improved, reliability has not – the time between failures hasn’t changed much in the last 40 years – so a conquered human race need wait only until the next system crash. And in any case, if DARPA’s challenge is anything to go by, shutting your door seems to be very effective at keeping robots out.