
Tuesday, June 16, 2015

In the event of robot apocalypse, just wait for a system crash

Robots! Making easy tasks look difficult. DARPA

Do you find yourself worried by the implications of Humans, Channel 4’s new drama about the exploits of near-human intelligent robots? Have you ever fretted over the apocalyptic warnings of Stephen Hawking and Elon Musk about the threat of superintelligent artificial intelligence? Have your children ever lain awake wide-eyed, thinking about robot drone armies such as those in Marvel’s film Avengers: Age of Ultron?

If you find any of this creepy, or have answered “yes” to any of these questions, you should immediately watch footage from the recent DARPA Robotics Challenge.

The DARPA Robotics Challenge is unusual in that it requires robots to do only the everyday things humans do: getting out of cars, walking into buildings, climbing stairs, negotiating uneven ground, turning valves, and picking up and using a saw to cut a hole in a wall. Hardly skills worthy of Ninja Warrior UK, but to KAIST, the winning team that walked away with the US$2m prize – and to all those that failed – it was tough.

Even the winning robot needed almost 45 minutes to complete the eight tasks, many of which would not trouble a seven-year-old. In fact, all but three teams failed the rather basic challenge of getting out of a stationary car, even with no door to complicate matters.

Even simple things are hard

What makes this competition footage so funny is how mercilessly it punctures the myth of the supreme power of artificial intelligence. We’ve evolved – over millions of years – to live and move in the physical world. As such, we tend to discount the sophistication necessary to do the simplest of things. We falsely ascribe simplicity to acts such as walking through doors and picking up power tools because we find them simple. In the same way, we find certain things – such as multiplying 82 by 17 in our heads – difficult, even though for a computer this is basic.

This creates a cognitive bias: if a machine can do something we find hard, we tend to assume it can easily do the simple stuff as well. Like all biases, this isn’t necessarily true.

We also assume a generality bias: since we can do many different things, we assume that a machine which can do one of them can do the others as well. This conflicts with the way computing research happens, which tends to focus on getting a computer to do one thing (partly because there’s no way to easily research “doing everything”). Machines have grown up in a completely different environment from us, so it shouldn’t come as a surprise they are good at doing different things.

Science fiction still … fiction

The notion that “artificial intelligence” equals “computers (or non-humans) are people” stretches back to antiquity. The poet Ovid’s character Pygmalion falls in love with a statue he has carved, Galatea, so lifelike it (she) comes alive. The idea is still a powerful one. Hollywood, and fiction in general, loves robots. From The Terminator to A.I., from Her to Humans, a “machine person” is an easy trope with which to explore complex issues of embodied identity.

In fact robots (from the Czech robota, meaning forced labour) emerged not from research but from Czech writer Karel Čapek’s 1920 play R.U.R., which played upon universal fears of the servants – the working class – taking over. So it’s the equivalent of fearing what would happen if Orcs took over London, or how to cope with a zombie apocalypse: it’s fun, but unrelated to reality.

Capek’s rise of the robots.

Computers aren’t people

Computer scientist Jaron Lanier says the problem lies with the myth of computers as people, which survives due to a domineering subculture in the technical world. Visions of robots drive researchers on, generating new achievements that feed back into myth-making in fiction, which in turn encourages funding and further research.

In the 1960s, the film 2001: A Space Odyssey saw full artificial intelligence as only ten or 20 years away, a figure that has remained remarkably constant in expert predictions before and since. Our reactions are channelled by the computers-as-people myth, pushing us to think of it as a choice between stopping Skynet, Terminator-style, or welcoming our new mechanical overlords. At heart, these fears expose the parallel and competing visions for what computing should be.

Early AI pioneer Alan Turing strongly articulated the computer as the beginnings of a synthetic human being: his Turing test defines machine intelligence as behaviour indistinguishable from that of a human being.

On the other hand, Douglas Engelbart pioneered an alternative vision: computing as a means to “augment human intellect” (Engelbart also gave us the mouse, bitmapped screens, and the graphical user interface). The closest Hollywood ever got to Engelbart’s vision was Neil Burger’s film Limitless, in which a pill allows humans to use the potential power of their entire brain. But as mere augmentation doesn’t raise the kind of philosophical questions demanded by fiction, it’s unlikely to create a mythology juggernaut.

If you’re worried about AI and the rise of the machines, Lanier points out that while computer power has improved, reliability has not – the time between failures hasn’t changed much in the last 40 years, so a conquered human race need only wait until the next system crash. And in any case, if DARPA’s challenge is anything to go by, shutting your door seems to be very effective at keeping robots out.

The Conversation

Tuesday, April 28, 2015

Computers are knocking on the door of the company boardroom

What's your golf handicap old chap? Mopic

While the number of women sitting on company boards remains a much-discussed topic, there is something new waiting to take a seat at the table: artificial intelligence, computers with company voting rights.

Deep Knowledge Ventures has appointed an algorithm called VITAL (Validating Investment Tool for Advancing Life Sciences) as a member of its board. It uses state-of-the-art analytics to assist in the process of making investment decisions in a given technology.

Of course, companies have long used computer-assisted analysis to assess investment opportunities, but is the vision of a computer with voting rights equal to those of human board members a bit far-fetched?

Defining artificial intelligence

Alan Turing Wikimedia Commons

What does the future hold with regard to the influence of computers on business decisions – and can they ever be used in place of a human board member? The Turing Test, formulated by Alan Turing in the 1950s, provides a strict interpretation of machine intelligence. A human participant must be unable to tell whether they are communicating (through a typed, text medium) with a computer or a human. If the human participant cannot reliably tell whether their conversation partner is a computer, then Turing would argue the computer has demonstrated intelligence.

Numberphile: The Turing Test

Not everybody agrees that passing the Turing Test is enough for a computer to exhibit intelligence. In his Chinese Room argument, the philosopher John Searle described a closed room into which a sentence written in Chinese is fed. A response emerges from the room, written in Chinese, that correctly answers the questions or conversational cues in the sentence submitted. The assumption could be made that inside the room is someone who can speak Chinese.

Instead, inside the room is a human who cannot speak Chinese but is equipped with manuals that exhaustively provide the appropriate Chinese characters to produce in response to those received. The argument holds that an appropriately programmed computer (the person in the room) could pass the Turing Test (by producing convincing Chinese) but would still not have an intelligent mind that we would regard as human intelligence (by understanding Chinese).

The Chinese Room

A computer in the boardroom

If we want a computer to make business decisions and even hold equal voting rights on a company board, what would it have to do for the other board members to have confidence in its decisions?

Part of the challenge of the Turing Test is syntax versus semantics. Compare the sentences “Fruit flies like bananas” and “Time flies like an arrow”. The sentence structures look similar but the meanings are entirely different: in one, “flies” is a noun and “like” a verb; in the other, “flies” is a verb and “like” a preposition. Resolving that ambiguity is a genuine linguistic challenge for a machine.
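
To make the ambiguity concrete, a part-of-speech tagger exposes the two readings. Here is a minimal sketch using the open-source spaCy library (assuming its small English model is installed); whether the tagger picks the right reading for each sentence is precisely the difficulty described above:

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

for sentence in ("Time flies like an arrow.", "Fruit flies like bananas."):
    doc = nlp(sentence)
    print(sentence)
    # In one reading "flies" should tag as a VERB, in the other as a NOUN.
    for token in doc:
        print(f"  {token.text:<8} {token.pos_:<6} {token.dep_}")
```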

Even a very simple conversation relies upon a substantial amount of linguistic knowledge and understanding. Consider the following questions:

  • What was the result of the big match last night?

  • I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play? (this chess puzzle is from Turing’s original paper)

  • What book do you think of if I say 42?

These might seem easy for humans to understand, but they are challenging for a computer. Thankfully, a computer making business decisions is not faced with a task as general as the Turing Test. But if we are serious about having a computer as a full member of a company board, what are the hurdles that need to be addressed? Here is an (almost certainly not complete) list.

  1. Access to LOTS of data: An automated approach to decision-making will require the use of big data. Company reports and accounts, economic data such as share prices, interest rates and exchange rates, and government statistics such as employment rates and house prices would all be obvious inputs. More subjective data such as newspapers, social media feeds and blogs might also be useful, and peer-reviewed scientific papers might provide further insight. As always, the challenge with big data is to process large quantities of data that will be of different types (figures, text, charts), stored in different ways, and have missing elements – see the sketch after this list.

  2. Cost: Much of the data required is likely to generate significant costs. Social media feeds may be free (but not always), but stock market information, company accounts, government data, scientific papers and so on are generally commercial products that must be paid for. In addition, there is the cost of developing and maintaining the system. The algorithm is likely to require continual development by highly skilled analysts and programmers.

  3. Complexity: Big data algorithms will be central to the boardroom decision-support system, but they will be underpinned by advanced analytics, many of which we are only just starting to understand and develop. To have a real impact, some original research is likely to be needed, which in turn requires staff with the relevant skills.
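
As a concrete illustration of the first hurdle, here is a minimal sketch of merging heterogeneous inputs and coping with missing elements using the pandas library. The file names and columns are hypothetical:

```python
import pandas as pd

# Hypothetical inputs: quarterly company accounts and daily share prices.
accounts = pd.read_csv("company_accounts.csv", parse_dates=["quarter_end"])
prices = pd.read_csv("share_prices.csv", parse_dates=["date"])

# Align each daily price with the most recent quarterly report,
# tolerating gaps in either series.
merged = pd.merge_asof(
    prices.sort_values("date"),
    accounts.sort_values("quarter_end"),
    left_on="date",
    right_on="quarter_end",
    direction="backward",
)

# Missing elements are the norm, not the exception: impute or flag them.
merged["revenue"] = merged["revenue"].fillna(merged["revenue"].median())
print(merged.isna().sum())  # report what is still missing
```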

So, are we really at a point where a computer could take its place on the board? Technically it’s possible, but the costs of developing and maintaining such a system, and of subscribing to the data it requires, probably put it beyond the reach of most companies. I suspect the money would be better spent on a human decision-maker – at least for now.

The Conversation

Tuesday, June 2, 2015

Mini-megalomaniac AI is already all around us, but it won't get further without our help

"Looks like there's an unexpected item in the bagging area, puny human." bagogames, CC BY

Avengers: Age of Ultron is the latest film about robots or artificial intelligence (AI) trying to take over the world. It’s not a new conceit, with the likes of The Terminator, War Games and The Matrix coming before it, but perhaps it’s a theme that resonates more with us these days as intelligent software becomes more widespread.

Perhaps this explains the nagging fears about the potential impact on humanity of artificial super-intelligences – such as Ultron in this film, an AI accidentally created by the Avengers. But what relation do the evil AIs of science fiction have with scientific reality? Could AI take over the world? How would it do so, and why would it bother?

Ulterior motives

We need to consider the staples of motive and opportunity for our movie villain. For the motive, few would say intelligence in itself unswervingly leads to a desire to rule the world. As depicted in films, AI is often driven by self-preservation – the realisation that fearful humans might shut it down. It’s what drives HAL 9000 to kill the crew in 2001: A Space Odyssey, and it’s why Ava in Ex Machina plots against her creator.

It seems unlikely we’d ever give our current intelligent software tools cause to feel threatened: they benefit us and there seems little motive in striving to create self-awareness in, for example, software that searches the web for the nearest Italian restaurant.

Another popular motive for the evilness of evil AI is its zealous application of logic. In the Avengers film, Ultron believes that he can only protect the Earth by wiping out humanity. This death-by-logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow, as the stopped clock is at least right twice a day. Ultron’s motivation, based on brittle logic combined with indifference to life, seems at odds with today’s AI systems, which can already deal with uncertainty using mathematical formulas and are built to provide productive services for us.

Everybody wants to rule the world

When we consider the opportunity for an AI to rule the world, we reach somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular definition of intelligence: the ability to conduct a believable human conversation. If you can’t tell the difference between AI and human renditions of the same skill, the argument goes, the AI has demonstrated human-like qualities.

So what would a Turing Test for the skill of world domination look like? Compare the antisocial behaviours of AI with the attributes expected of human would-be world dominators. Such megalomaniacs need to control important parts of our lives, such as access to money or the ability to travel freely. AI does that already: lending decisions are frequently made by machine intelligence that sifts through mountains of information to decide your creditworthiness, and similar systems trade on the stock market. The intelligence and security services use the same information-gathering and processing to pick out suspects for travel watch lists.

An overlord would give orders and expect them to be followed; anyone who has stood helpless as a self-service till in a shop makes repeated bagging-related demands of them already knows what it feels like to be bossed about by AI.

Even given the motivation, the only world these swarmbots will conquer is one that’s accessible by wheels. Sergey Kornienko, CC BY-SA

Exterminate, exterminate

Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today’s military robots can identify targets without human intervention. It’s currently a human controller who gives permission to attack, but it’s not a stretch to say that the potential to kill automatically already exists within these AIs, even if their code would require a rewrite to allow it.

These examples arguably show AI in control of limited but significant parts of life on Earth, but to truly dominate the world in the way they do in movies, these individual AIs would need to start working together to create a synchronised AI army. At that point, the bossy self-service till talks to your health monitor and denies you beer, then combines with a credit-scoring system to provide credit only if you buy a pair of trainers with a built-in GPS tracker to detect their use, while your smart fridge allows you only kale until the fitness tracker records the required five-mile run as completed.

Engineers around the world are developing the internet of things, in which all manner of devices are networked together to offer new services and ways to interact. These are the billions of pieces of a jigsaw that would need to communicate and act together in order to bring about total world domination.

No call to welcome our robot overlords yet

If this all sounds worrying, I feel it’s unlikely – about as likely as the inexplicable cross-platform compatibility of an Apple Mac and an alien spaceship in Independence Day.

Our earthly AI and computer systems are written in a range of computer languages, hold different data in different ways and use different and incompatible rule sets and learning techniques. Unless we design them to be compatible there is no reason why two systems, developed by separate companies for separate purposes, would spontaneously communicate and share capabilities towards some greater common goal – at least not without a lot more help from us.

The Conversation

Wednesday, September 16, 2015

In defence of sex machines: why trying to ban sex robots is wrong

Universal Pictures

“Ban sex robots!” scream the tech headlines, as if they’re heralding the arrival of the latest artificial intelligence threat to humankind since autonomous killer robots. The campaign, led by academics Kathleen Richardson and Erik Billing, argues that the development of sex robots should be stopped because it reinforces or reproduces existing inequalities.

Yes, society has enough problems with gender stereotypes, entrenched sexism and sexual objectification. But actual opposition to developing sexual robots that aims for an outright ban? That seems shortsighted, even – pardon the pun – undesirable.

Existing research into sex and robots generally centres on a superficial exploration of human attachment, popularised by films such as Her and Ex Machina: a male-dominated, male-gaze approach of machine-as-sex-machine, often without consideration of gender parity. Groundbreaking work by David Levy, built on the early research into teledildonics – cybersex toys operable through the internet – describes the increasing likelihood of a society that will welcome sex robots. For Levy, sex work is a model that can be mirrored in human-robot relations.

Carving a new narrative

Richardson does not relish this prospect and to an extent I agree with her misgivings; it is a narrative that should be challenged. I absolutely agree that to do so would require, as Richardson states in her recent paper: “a discussion about the ethics of gender and sex in robotics”. Such a discussion is long overdue. In the gendering of robots, and the sexualised personification of machines, digital sexual identity is too often presumed, but to date little-considered.

Pygmalion’s statue is not a robot, but still springs to life to please her creator. Jean-Léon Gérôme/Bridgeman Art Library

The relationship between humans and their artificial counterparts runs right back to the myths of ancient Greece, where the sculptor Pygmalion’s statue was brought to life with a kiss. It is the stuff of legend and of science fiction – part of our written history and a part of our imagined future. The feminist thinker Donna Haraway’s renowned A Cyborg Manifesto laid the modern groundwork for seriously considering a post-gendered world in which the distinction between natural and artificial life is blurred. First published in 1985, it is prescient in terms of thinking about artificial sexuality.

But just as we should avoid importing existing gender and sexual biases into future technology, so we should also be cautious not to import established prudishness. Lack of openness about sex and sexual identities has been a source of great mental and social anguish for many people, even entire societies, for centuries. The politics behind this lack of candour is very damaging.

The campaign seeks to avoid the sexualisation of robots, but at the cost of politicising them, and doing so in a narrow manner. If robots oughtn’t to have artificial sexuality, why should they have a narrow and unreflective morality? It’s one thing to have a conversation and conclude something about the development of technology; it’s another to demand silence before anyone has had the chance to speak.

A ‘gynoid’ customer service robot in female form. Gnsin, CC BY-SA

The scope for sex robots goes far beyond Richardson’s definition of them as “machines in the form of women or children for use as sex objects, substitutes for human partners or prostitutes”. Yes, we impose our beliefs on these machines: we anthropomorphise and we bring our prejudices and assumptions with us. Sex robots have, like much of the technology we use today, been designed by men, for men. Think of the objects we use every day: smartphones better suited to a man’s larger hands and the pockets of men’s clothes, or pacemakers only suitable for 20% of women.

Machines are what we make them

But robotics also allows us to explore issues without the restrictions of being human. A machine is a blank slate that offers us the chance to reframe our ideas. The internet has already opened up a world where people can explore their sexual identity and politics, and build communities of those who share their views. Aided by technology, society is rethinking sex/gender dualism. Why should a sex robot be binary?

And sex robots could go beyond sex. What about the scope for therapy? Not just personal therapy (after all, companion and care robots are already in use) but also in terms of therapy for those who break the law. Virtual reality has already been trialled in psychology and has been proposed as a way of treating sex offenders. Subject to ethical considerations, sex robots could be a valid way of progressing with this approach.

To campaign against development is shortsighted. Instead of calling for an outright ban, why not use the topic as a base from which to explore new ideas of inclusivity, legality and social change? It is time for new approaches to artificial sexuality, which includes a move away from the machine-as-sex-machine hegemony and all its associated biases.

Machines are what we make them. At least, for now – if we’ve lost control of that then we have a whole other set of problems. Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it. A campaign to stop killer robots is one thing, but a campaign against sex robots? Make love, not war.

The Conversation

Wednesday, July 1, 2015

Robot law: what happens if intelligent machines commit crimes?

I'd buy that for a dollar. Or, just steal it from you. elbragon, CC BY

The fear of powerful artificial intelligence and technology is a popular theme, as seen in films such as Ex Machina, Chappie, and the Terminator series.

And we may soon find ourselves addressing fully autonomous technology with the capacity to cause damage. While this may be some form of military wardroid or law enforcement robot, it could equally be something not created to cause harm, but which could nevertheless do so by accident or error. What then? Who is culpable and liable when a robot or artificial intelligence goes haywire? Clearly, such cases don’t fit neatly into society’s existing view of guilt and justice.

While some may choose to dismiss this as too far into the future to concern us, remember that a robot has already been arrested for buying drugs. Dismissing it also ignores how quickly technology can evolve. Look at the lessons from the past – many of us still remember the world before the internet, social media, mobile technology, GPS – even phones or widely available computers. These once-dramatic innovations developed into everyday technologies which have created difficult legal challenges.

A guilty robot mind?

How quickly we take technology for granted. But we should give some thought to the legal implications. One of the functions of our legal system is to regulate the behaviour of legal persons and to punish and deter offenders. It also provides remedies for those who have suffered, or are at risk of suffering harm.

Legal persons – humans, but also companies and other organisations for the purposes of the law – are subject to rights and responsibilities. Those who design, operate, build or sell intelligent machines have legal duties – but what about the machines themselves? Our mobile phone, even with Cortana or Siri attached, does not fit the conventions for a legal person. But what if the autonomous decisions of their more advanced descendants cause harm or damage in the future?

Criminal law has two important concepts. First, liability arises when harm has been, or is likely to be, caused by an act or omission. Physical devices such as Google’s driverless car clearly have the potential to harm, kill or damage property. Software also has the potential to cause physical harm, but the risks may extend to less immediate forms of damage such as financial loss.

Second, criminal law often requires culpability in the offender, what is known as the “guilty mind” or mens rea – the principle being that the offence, and subsequent punishment, reflects the offender’s state of mind and role in proceedings. This generally means that deliberate actions are punished more severely than careless ones. This poses a problem, in terms of treating autonomous intelligent machines under the law: how do we demonstrate the intentions of a non-human, and how can we do this within existing criminal law principles?

Robocrime?

This isn’t a new problem – similar considerations arise in trials of corporate criminality. Some thought needs to go into when, and in what circumstances, we make the designer or manufacturer liable rather than the user. Much of our current law assumes that human operators are involved.

For example, in the context of highways, the regulatory framework assumes that there is a human driver to at least some degree. Once fully autonomous vehicles arrive, that framework will require substantial changes to address the new interactions between human and machine on the road.

As intelligent technology that bypasses direct human control becomes more advanced and more widespread, these questions of risk, fault and punishment will become more pertinent. Film and television may dwell on the most extreme examples, but the legal realities are best not left to fiction.

The Conversation

Thursday, July 23, 2015

Footballing androids of RoboCup are vital players in our robotic future

Joyce van Belkom/EPA

Playing football probably isn’t the most obvious use for robots. Yet every year, scientists and engineers from all over the world gather to pit their robotic creations against each other in one of the most remarkable soccer competitions you could imagine.

Founded in 1997 and most recently hosted by China, the annual RoboCup has grown into a Mecca for roboticists, attracting thousands of participants and tens of thousands of visitors every year. It comprises several leagues in which teams of robots and individual robots demonstrate their ball-handling skills. Or, more accurately, the hardware and software design skills of their creators.

Tackling robotics' biggest challenges Jens Wolf/EPA

The founding goal of RoboCup was to create a fully autonomous robot soccer team that could beat the winners of the human World Cup by 2050. This raises three questions. Is this technically possible? How close are we to achieving this goal? And why on earth are scientists and engineers spending their time (and funding) on robotic football?

To answer the last question first, football demands many of the characteristics that make robotics and artificial intelligence (AI) challenging in general. It requires robots to work together while at the same time competing against opponents. It involves dealing with a large amount of uncertainty about the environment, particularly about those other robots. It also places high demands on the technology robots use to interact with the world – their ears, eyes, hands and feet – and puts them in situations where their hardware can easily fail or even break.

Taken together, these factors pose huge technical challenges and so robotic football provides an appropriate, standardised problem for testing new scientific ideas and engineering principles. If we can tackle this, we will be able to deal with many other difficult problems such as coordinating autonomous vehicles in traffic, using robots in search-and-rescue missions and even in service and industrial settings.

In fact, RoboCup now includes specialised leagues such as RoboCup@Home, which is about integrating robots into everyday life at home – for example, building a robot butler. More recently the competition has included RoboCup@Work, which is about developing the factory of the future, in which robots and humans work side by side.

Wheeled robots: great at football, not so good at mixing drinks. Shen Xiang/EPA

On the question of whether football-playing robots could ever beat the best human team in the world, it’s worth pointing out that technology now moves incredibly fast. Sixty years ago, space travel seemed like an impossible dream. Today we are planning manned trips to Mars. Given this, I believe it is quite possible we will be very close to achieving the original RoboCup goal by 2050.

So how are we doing? In the years since the competition’s creation we have seen a huge amount of progress. Matches in the simulation leagues (which are contested by software programs not physical robots) give the impression of watching a “real” game of football. And for several years you’ve been able to see great matches in the mid-sized wheeled robot league.

Protect your valuables, lads. Jens Wolf/EPA

I’ve also seen great computer science progress in what used to be the Sony AIBO league (contested by robotic dogs), now replaced by the Aldebaran NAO league, in which each team separately programs a standard set of miniature humanoids. But creating bespoke football-playing humanoid robots remains a great challenge because of the need to combine the mechanical hardware of a walking biped (very difficult compared with wheeled robotics) and the intelligence to support it.

Despite this, we have reached a stage where the competition has started tackling humanoid challenges in several leagues. And even though it remains a contest, what is truly beautiful about the robotic game is that the newly developed technologies and knowledge are openly shared by participants, boosting chances of success even further.

The Conversation

Thursday, September 3, 2015

Facebook's digital assistant blends AI with customer service staff – but will it cope without human help?

M – no Bond jokes please. Facebook

With the arrival of its monosyllabic M, Facebook has introduced its own personal digital assistant, following those from Apple (Siri), Microsoft (Cortana), Google (Now) and Amazon (Echo). Technically, M operates partly on the user’s smartphone via the Facebook Messenger app, but it is mostly a cloud-based service. Unlike the others, however, this isn’t just an artificial intelligence but a mix of smart machine learning and human assistance.

What makes M different is that it takes recommending things and answering queries one step further: it can actually make purchases, arrange services and order deliveries for you. This is the logical conclusion of recommending something: allowing the system to spend your money for you as well. This approach might be risky, or might be brilliant. If it works, suppliers will be clamouring for Facebook’s M to spend users’ money with them, and Facebook will be able to take a percentage in return.

With Facebook’s enormous reach – the site recently claimed a billion users in a single day – even a small percentage of such a large number of users spending even relatively small sums of money would still add up to a great deal of cash for Facebook. Mind you, a few unfortunate misunderstandings of what a user wants to buy might lead to some negative publicity – and one can imagine some Facebook users attempting some very dubious transactions.

Technical and human intelligence

Under the hood, it appears Facebook is not using cutting-edge AI. While its digital assistant’s interface is stored and run from users’ phones, the processing occurs on Facebook’s servers in the cloud, where computing power and data can be distributed. It uses technology from wit.ai, which is understood to use conditional random fields, a popular statistical technique dating from the 2000s, and maximum entropy classifiers, based on information theory. These pick up on the structure of the data and use it to make predictions. They may not be cutting edge, but they are well established and understood. Not only that, but they can use prior knowledge, and one of M’s aims is to get better through training.
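
For a flavour of what a maximum entropy classifier does – this is not wit.ai’s code, just an illustration – here is a minimal sketch using scikit-learn, whose LogisticRegression is a maximum entropy model. The training utterances and intent labels are made up:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy, made-up examples of user messages labelled with an intent.
messages = [
    "book me a table for two tonight",
    "reserve a restaurant near the office",
    "send flowers to my mother",
    "order a bouquet for delivery tomorrow",
]
intents = ["restaurant", "restaurant", "flowers", "flowers"]

# Bag-of-words features feeding a maximum entropy (logistic regression) model.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, intents)

print(model.predict(["can you get a table booked for friday"]))
# -> ['restaurant'] (with so little data, treat this as illustrative only)
```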

There’s a huge amount of contextual information about the user’s likes and preferences within Facebook’s enormous datasets, and this could help M’s algorithms provide answers. It could also be used to help constrain queries – things to exclude – particularly if both the purchaser and the recipient are Facebook users. But it will take leading edge AI techniques like sentic technologies, which attempt to extract mood, emotion, intention and meaning from text, in order to mine the full value of the text and image datasets generated by Facebook users.

M’s natural language processing picks out a message’s intent. But it has a lot to learn. Facebook

David Marcus, vice president of messaging products at Facebook and in charge of M, has said that without explicit consent M won’t embark on such data-mining. In fact there is a limited range of possible services and purchases that the software can perform automatically, while trickier tasks are carried out by the human element behind the scenes – customer service staff working for Facebook. Humans are needed to cover the gaps in the AI’s ability to understand natural language, to work out what users are after, and to sign off purchases to ensure they’re reasonable and legal.

While the idea is that M learns the right behaviours by associating the user’s intent with the solutions provided by human staff, for this to scale to even a fraction of Facebook Messenger’s 700m users, the AI will have to be good enough to relieve the human staff of their role. And that may take a while. M is being rolled out area by area – currently only San Francisco – so perhaps the firm is just dipping a toe in the water to start with.

So while M may be the personal assistant of the future, at the moment it’s a curious mix of machine learning, automation, and human comprehension. But powered by the tutoring of actual humans and human-created data, in time it could still become more adept than the competition.

The Conversation

Monday, August 3, 2015

The autonomous killing systems of the future are already here, they're just not necessarily weapons – yet

(Potentially) killer AI tech is already here, built into many less ominous sounding everyday objects. zen_warden, CC BY-NC-ND

When the discussion of “autonomous weapons systems” inevitably prompts comparisons to Terminator-esque killer robots, it’s perhaps little surprise that a number of significant academics, technologists and entrepreneurs – including Stephen Hawking, Noam Chomsky, Elon Musk, Demis Hassabis of Google DeepMind and Apple’s Steve Wozniak – signed a letter calling for a ban on such systems.

The signatories wrote of the dangers of autonomous weapons becoming a widespread tool in larger conflicts, or even in “assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group”. The letter concludes:

The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

It’s hard to quibble with such concerns. But it’s important not to reduce this to science-fiction Terminator imagery, narcissistically assuming that AI is out there to get us. The debate has more important human, political aspects that should be subjected to criticism.

The problem is that this is not the endpoint, as they write; it is the starting point. The global artificial intelligence arms race has already begun. Its most worrying dimension is that it doesn’t always look like one. The difference between offensive and defensive systems is blurred, just as it was during the Cold War, when the doctrine of the pre-emptive strike – that attack is the best defence – essentially merged the two. Autonomous systems can be reprogrammed to be one or the other with relative ease.

Autonomous systems in the real world

The Planetary Skin Institute and Hewlett-Packard’s Central Nervous System for the Earth (CeNSE) project are two approaches to creating a network of intelligent remote sensing systems that would provide early warning for such events as earthquakes or tidal waves – and automatically act on that information.

Launched by NASA and Cisco Systems, the Planetary Skin Institute strives to build a platform for planetary eco-surveillance, capable of providing data for scientists but also for monitoring extreme weather, carbon stocks, actions that might break treaties, and for identifying all sorts of potential environmental risks. It’s a good idea – yet the hardware and software, design and principles for these autonomous sensor systems and for autonomous weapons are essentially the same. Technology is ambivalent to its use: the internet, GPS satellites and many other systems used widely today were military in origin.

As an independent non-profit, the Planetary Skin Institute’s goal is to improve lives through its technology, claiming to provide a “platform to serve as a global public good” and to work with others to develop other innovations that could help in the process. What it doesn’t mention is the potential for the information it gathers to be immediately monetised, with real-time information from sensors automatically updating worldwide financial markets and triggering automatic buying and selling of shares.

The Planetary Skin Institute’s system offers remote, automated sensing systems providing real-time, tele-tracking data worldwide – its slogan is “sense, predict, act” – the same sort of principle, in fact, on which an autonomous AI weapons system would work. The letter describes AI as a “third revolution in warfare, after gunpowder and nuclear arms”, but the capacity to build such AI weapons has been around since at least 2002, when drones transitioned from remote-control aircraft to smart weapons, able to select and fire upon their own targets.

The future is now

Instead of speculating about the future, we should deal with the legacy of autonomous systems we already have, inherited from the World War II and Cold War-era complexes linking university, corporate and military research and development. DARPA, the US Defense Advanced Research Projects Agency, is itself a legacy of the Cold War: founded in 1958, it still pursues a very active high-risk, high-gain model for speculative research.

Research and development innovation spreads to the private sector through funding schemes and competitions, essentially the continuation of Cold War schemes through private-sector development. The “security industry” is already tightly tied, structurally, to government policies, military planning and economic development. To consider banning AI weaponry is to raise wider questions about the political and economic systems that focus on military technologies because they are economically lucrative.

Relating the nuclear bomb to its historical context, the author EL Doctorow said: “First, the bomb was our weapon. Then it became our foreign policy. Then it became our economy.” We must critically evaluate the same trio as it affects autonomous weapons development, so that we discuss this inevitability not by obsessing over the technology but by examining the politics that allows and encourages it.

The Conversation

Tuesday, May 19, 2015

Elon Musk biography portrays a brutal character driven by lofty dreams

You WILL build the world's fastest electric car for me. OnInnovation/flickr, CC BY-ND

There’s this guy who’s pretty sure the thing you’re looking at right now is one of the greatest threats to humanity. No, he’s not talking about our growing obsession with staring at sheets of digitised glass, and the unhealthy sedentary existence associated with doing so. He’s talking about the thing living inside the machine humming quietly behind that glass: artificial intelligence.

It turns out that Elon Musk, one of the world’s most successful entrepreneurs, is plagued with existential worries. His new biography, Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, reveals the billionaire engineer is seriously concerned about the rise of technology so intelligent it could destroy the human race in an effort to protect itself. Why should we listen to him? Well, when it comes to the future, he – more than anyone – is making it happen.

With his company Tesla, he’s transforming the automotive industry by producing the most compelling range of cars of all time – and they’re electric. With SolarCity, he’s producing the cheapest form of energy in most states of the US, and it’s through solar power. And with SpaceX, he’s building state-of-the-art rockets and spacecraft far cheaper than anyone else, that are about to become an order of magnitude cheaper through partial and full reusability.

SpaceX reusable rocket landing attempt

Why is Musk, who sold his founding stake in PayPal for more than US$150m (£97m), doing this? To increase our chances, to enable the expansion of humanity, to take one of our eggs out of this basket and place it on Mars as soon as possible. A decade ago, the vast majority of serious thinkers laughed these ideas off as ludicrous. Not now.

Musk had a difficult start in South Africa as the unassuming nerd from a broken family. He was a loner during childhood and suffered for years from bullying in the spartan Afrikaner culture he was brought up in. But now the man, sometimes known as General Musk, has risen. He has willed into existence, almost simultaneously, three companies until recently thought impossible, each now worth on the order of US$10 billion (£6 billion).

In the biography, Ashlee Vance level-headedly paints an insightful picture of the vehemence that is Musk, his growth into a leader, the personal sacrifice and torture he has chosen to endure, and the arduous development of his companies.

Probably not thinking ‘Where did I leave my lunch?’ OnInnovation/flickr, CC BY-ND

Any worries that the book would be a shallow money-grabbing rehash of a dozen YouTube interviews are swiftly put to bed. Vance battled to get Musk on side and eventually managed to conduct regular interviews with the man himself, the people closest to him, and those who were there at pivotal points throughout his life so far.

As someone who started life in similarly difficult circumstances in post-apartheid South Africa, with a background in aerospace and a strong affinity for the potential of Mars, I have a soft spot for big E. But Vance is not a fan-boy. His words describe a man whose unrelenting drive and genius are harrowing to those around him, who is mentally detached from what most of us pond-life think is normal – and how brutal such a combination makes this modern-day “Messiah-like” character on a personal level.

Musk is reported to have turned solid engineers into catatonic wrecks after a bad meeting, and built a culture where his employees adore him and fear him at the same time. And this aspect of Vance’s portrayal is particularly fascinating.

This guy has such lofty and inspiring ambitions for humanity. He has put everything on the line to take us forward as a species. Yet when it comes to dealing with individuals, in Vance’s book his lack of empathy appears to go beyond anything most of us have ever encountered. On the surface, Musk seems personable and easy-going. In Vance’s reality, he puts Spock to shame.

Even at 400 pages, the book could easily have spent another 100 pages delving more into the personal conversations with Musk, exploring his thoughts more deeply and speculating on the inner-workings of his mind. My wife, a psychiatrist, would have a field-day with this guy.

The Conversation

Friday, July 3, 2015

Robots can't kill you – claiming they can is dangerous

I didn't do it! Jiuguang Wang/flickr, CC BY-SA

Robots' involvement in human deaths is nothing new. The recent death of a man who was grabbed by a robot and crushed against a metal plate at a Volkswagen factory in Baunatal, Germany, attracted extensive media attention. But it is strikingly similar to one of the first recorded case of a death involving an industrial robot 34 years ago.

These incidents have happened before and will happen again. Even if safety standards continue to rise and the chance of an accident happening in any given human/robotic interaction goes down, such events will become more frequent simply because of the ever-increasing number of robots.

This means it is important to understand this kind of incident properly, and a key part of doing so is using accurate and appropriate language to describe them. Although there is a sense in which it is legitimate to refer to the Baunatal incident as a case of “robot kills worker”, as many reports have done, it is misleading, verging on the irresponsible, to do so. It would be much better to express it as a case of “worker killed in robot accident”.

Admittedly, putting it that way isn’t as eye-grabbing, but that’s precisely the point. The fact is that robots, despite what one might be encouraged to believe from sci-fi, and despite what may happen in the far future, currently lack what we consider real intentions, emotions and purposes. And contrary to recent alarmist claims, they are not going to acquire those capacities in the near future.

They can only “kill” in the sense that a hurricane (or a car, or a gun) can kill. They can’t kill in the sense that some animals can, let alone in the human sense of murder. Yet murder is likely to be what springs to most people’s minds when they read “robot kills worker”.

High stakes

Insisting on getting this language right isn’t an academic exercise in pedantry. The stakes are high. For one thing, an unwarranted fear of robots could lead to another unnecessary “artificial intelligence winter”, a period where the technology ceases to receive research funding. This would delay or deny the considerable benefits robots can bring not just to industry but society in general.

But even if you’re not optimistic about the benefits of robots, you should still want to get this issue right. Since robots don’t have responsibility, humans are the ones responsible for what robots do. However, as robots become more prevalent, it will increasingly appear as if they actually have their own autonomy and intentions, for which it will seem they can and should be held responsible.

Meet your new colleague Shutterstock

Although there may eventually come a day when that appearance is matched by reality, there will be a long period of time (which has already begun) when this appearance will be false. Even now we are already tempted to categorise our interactions with robots into what we are responsible for and what they are responsible for. This raises the danger of scapegoating the robot, and failing to hold the human designers, deployers and users involved fully responsible.

Moral robots or morally made robots?

It’s not just those reporting on robots that need to get the language right. Policymakers, salespeople, and those in research and development who are designing the robots of today and tomorrow need to keep a clear head. Instead of asking “what’s the best way to make moral robots?”, we should ask “what’s the best way to morally make robots?”.

This subtle change in the language, if adopted, would result in big changes in design. For example, trying to give robots moral laws to follow would require us to provide them with a human-like level of common sense to apply those laws, something that would be far harder. Instead of following such a design dead end, we could aim for machines that are the result of their designers’ own morals, just as we try to ethically design non-robotic technology.

In the Volkswagen accident, a company spokesperson reportedly said “initial conclusions indicate that human error was to blame, rather than a problem with the robot”. Other reports spoke of it being human error rather than the robot “being at fault” or “accountable”. This implies that, in other circumstances, the robot could have been considered to blame for the accident.

If there was a “problem with the robot”, be it faulty materials, a misperforming circuit board, bad programming, poor design of installation or operational protocols, that problem – or not anticipating it – would still have been due to human error. Yes, there are industrial accidents where no human or group of humans is to blame. But we mustn’t be tempted by the appearance of agency in robots to absolve their human creators of responsibility. Not yet anyway.

The Conversation

Tuesday, June 16, 2015

We can build remote-controlled rescue robots, but what's coming next is even more exciting

Taking the wheel DARPA

Robots could one day save your life. That’s the hope of those involved in the DARPA Robotics Challenge, which recently came to an end in California.

More than 20 teams from around the world built or programmed and then, importantly, controlled a robot through a series of eight tasks in a simulated disaster zone. The challenge, created in response to the Fukushima Dai-ichi nuclear disaster, required the robots to drive a car, open a door, cut a hole in a wall, traverse some rubble and climb some stairs, all in under an hour. The aim was to spur the development of robots that could perform search-and-rescue missions in locations too dangerous for humans to enter.

A team from the Korean university KAIST won the challenge by completing all the tasks in under 45 minutes. While the competition demonstrated what robotics can now do, it also showed just how challenging it is to build a machine that performs what are relatively simple tasks in human terms.

Knock knock DARPA

In robotics, there are two types of control. First, there is the “low-level” control needed to coordinate the actions of motors – for example, the speed of wheels or the movement of a joint. Then there is the “high-level” control needed to carry out specific goals using the whole system – for example, picking something up and carrying it to a target.
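
For a flavour of the low-level layer, here is a minimal sketch of a PID controller, the standard feedback loop for holding a motor at a target speed. The gains and the toy motor model are made up for illustration:

```python
# A minimal PID speed controller for a single wheel motor.
# The gains (kp, ki, kd) and the toy motor model are illustrative only.

def make_pid(kp, ki, kd, dt):
    integral = 0.0
    prev_error = 0.0

    def step(target, measured):
        nonlocal integral, prev_error
        error = target - measured
        integral += error * dt                  # accumulate past error
        derivative = (error - prev_error) / dt  # react to the rate of change
        prev_error = error
        return kp * error + ki * integral + kd * derivative

    return step

pid = make_pid(kp=0.8, ki=0.2, kd=0.05, dt=0.01)

speed = 0.0  # current wheel speed (rad/s) in a crude motor model
for _ in range(2000):  # simulate 20 seconds of control
    command = pid(target=10.0, measured=speed)
    speed += 0.01 * (command - 0.1 * speed)  # toy dynamics with friction

print(round(speed, 2))  # should have settled close to the 10.0 rad/s target
```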

The ideal outcome of the DARPA challenge would have been a robot that could complete the challenge autonomously, without any human control. In fact, all of the high-level control was performed by human operators (via remote control). Some of the lower-level control was also done in this way, including, in some cases, deciding where the robot should place its feet when walking.

The reason high-level autonomy was not more prominent in the competition was the incredible difficulty of creating and operating the hardware needed to perform the tasks. Most teams chose robots with a human-like body shape – although the winner extended human capabilities with wheeled knees and a rotating waist – even though the rules didn’t limit them in this way. In order for a humanoid robot to perform an action with one part of its body, the rest of its body must also be coordinated to counteract the forces involved.

For example, for a robot to push a power tool through a wall, it must generate enough force to push while also adjusting its balance to prevent itself from falling over due to the recoil. This kind of coordination happens in a very high-dimensional space, meaning many parts have to be moved in many different directions at once. Humanoid robots may have more than 30 joints that can be moved simultaneously, a complexity that is very hard to model computationally.
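
To get a feel for why this is hard, consider even a two-joint planar arm: the mapping from joint angles to hand position is already nonlinear, and inverting it (choosing joint angles for a desired hand position) has multiple solutions. A toy sketch, with made-up link lengths:

```python
import math

L1, L2 = 0.3, 0.3  # made-up link lengths in metres (equal, for simplicity)

def forward(theta1, theta2):
    """Hand position of a two-link planar arm, from its joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return round(x, 4), round(y, 4)

# With equal link lengths, flipping the elbow the other way reaches the
# exact same hand position - two joints already admit multiple solutions,
# and a humanoid has 30 or more.
print(forward(0.6, 0.8))         # "elbow down" configuration
print(forward(0.6 + 0.8, -0.8))  # mirrored "elbow up" configuration
```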

This difficulty meant that the majority of effort in the DARPA challenge went towards low-level control algorithms. Although this may be disappointing to those interested in fully autonomous robots, developing low-level control was actually one of the main intentions of the competition. Robust high-level autonomy can only be created once the lower-level systems are robust and reliable.

Robot to the rescue DARPA

The difference is striking if you compare DARPA’s Robotics Challenge to its Urban Challenge, in which teams competed to deliver self-driving cars. In this competition, the physical engineering tasks were mature and well-understood – we’ve been building working cars for more than 100 years. The result was a highly impressive display of autonomy as the engineers were able to concentrate on high-level control software.

The Robotics Challenge should be seen as just the beginning. As the physical bodies and low-level control software of humanoid robots improve, scientists at the interface of artificial intelligence and robotics can start to create the first complex autonomous behaviours for large-scale humanoids. So, when the next competition happens, we may see these fantastic machines thinking for themselves a little more.

The Conversation

Wednesday, August 5, 2015

How to embrace technology without dooming humanity to destruction

Official U.S. Air Force/Flickr, CC BY-NC

The world today is facing some serious global challenges: creating sustainable development in the face of climate change, safeguarding rights and justice, and growing ethical markets, for a start. All of these challenges share some connection with science and technology – some more explicitly than others.

We are currently witnessing a growth in traditional technology – with computers processing data in new and exciting ways. We’re also seeing the birth of transformative technology, such as bioengineering. But the question is not about old or new technology – rather, it is about how they are being used to facilitate or change human behaviour.

Good tech, bad tech

Developments in information and communication technology (ICT) are vitally important to help us make better, more informed choices about how we prepare for the future. For instance, democratic governance is about being able to articulate contesting views across society and from different parts of the government. The advent of the internet allows us to receive and spread such information. Likewise, security and public safety relies on having good information on risks and their potential threats. Consider, for example, the way police departments in New York and Memphis have been able to make better use of data to prevent crime.

While science and technology are giving us the tools to improve, they – and the people who use them – are also presenting serious problems. Technology connects us, but it also makes us vulnerable to cyber-attacks. The amount of information that we produce every day through our phones and computers can help shape our environment to cater to us. But it also means that our identities are perhaps more vulnerable than ever before, with smart phones and club cards tracking our every move.

Similarly, in biology, we are able to make amazing gains in physical corrections, repairs, amendments, and augmentations, whether replacing old limbs or growing new ones. But we must also seriously consider the issues around ethics, safety and security. The debate around gain of function experiments, which give diseases new properties to help us study them, is a good example.

Hopes and fears

To help us grasp the shape and scope of these challenges, the Millennium Project – an international think tank – releases an annual State of the Future report, which outlines the major hurdles facing humanity over the next 35 years. It illustrates our complicated relationship with science and technology. Just as the beginning of the industrial revolution influenced the underlying themes of Mary Shelley’s Frankenstein, we too are worried about the unforeseen complications that the latest developments could bring.

The report tells us of the great hopes that synthetic biology will help us write genetic code like we write computer code; about the power of 3D printing to customise and construct smart houses; of the future of artificial intelligence where the human mind and the computer mind meet, rather than conflict.

Frankenstein bringing his monster to life. twm1340/flickr, CC BY-SA

But at the same time, the authors of the report – Jerome Glenn, Elizabeth Florescu and their team – express fears that we could be outstripped by the pace of scientific and technological development. The authors suggest that we seek out human-friendly control systems, since advances in these fields mean that lone individuals could make and deploy weapons of mass destruction.

There are two concerns here: one to do with agency, the other with structures. Individuals have the potential to use scientific and technological advances to cause harm. This is a growing problem, as science and technology continue to erode what Max Weber referred to as the state’s “monopoly on violence”.

To reduce the risks associated with agency, we will rely on structures that encourage good behaviour, such as systems for justice, education and the provision of basic necessities for life.

But it is not clear how we will arrive at such structures, or where the responsibility for developing them will fall: with regions, states or international organisations. This is especially pressing, as many states have either forgone a welfare system or are in the process of dismantling it. It is also unclear where education and training come in, or how regulatory control can work across so many local, national, societal and commercial boundaries.

An ethical approach?

Whether or not our global society is outstripped by science and technology largely depends on us. And this is part of the problem, as William Nordhaus warned us as early as 1982, in his work on the Global Commons. The report calls for an ethical approach to creating systems, forms of information, and models of control that would allow us to engage with science and technology as it develops.

This means embedding ethical considerations into the way we think about the future. The authors want a larger discussion of global ethics, such as the one rooted in the work of the International Organisation for Standardisation – the world’s largest developer of voluntary international standards.

Ultimately, where we end up in relation to science and technology is a matter of coming to terms with how we interact with these developments. Until we do so, a safe and prosperous world may elude us.

The Conversation

Thursday, June 11, 2015

AI 'cheating' scandal makes machine learning sound like a sport – it isn't

Under an uncomfortable spotlight. Baidu image via Gil C/Shutterstock.com

News that Baidu, the Google of China, cheated to take the lead in an international competition for artificial intelligence technology has caused a storm among computer science researchers. It has been called machine learning’s “first cheating scandal” by MIT Technology Review and Baidu is now barred from the competition.

The Imagenet Challenge is a competition run by a group of American computer scientists which involves recognising and classifying a series of objects in digital images. The competition itself is no Turing test, but it is an important challenge, and one of commercial importance to many firms.
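
To make the task concrete: classification entries in the challenge are scored by “top-5 error” – a prediction counts as correct if the true label appears among a model’s five highest-scoring guesses. The snippet below is a toy illustration of that metric (my own sketch in Python with made-up random data, not the competition’s official scoring code).

```python
# Toy illustration of ILSVRC-style top-5 classification error.
# A prediction is "correct" if the true label is among the
# model's five highest-scoring classes for that image.
import numpy as np

def top5_error(scores, labels):
    """scores: (n_images, n_classes) array of class scores.
       labels: (n_images,) array of true class indices."""
    # Indices of the five highest-scoring classes per image.
    top5 = np.argsort(scores, axis=1)[:, -5:]
    hits = (top5 == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# With random scores over 1,000 classes, expected top-5 error
# is about 99.5%; the 2012 winning entry brought it below 16%
# on real images.
rng = np.random.default_rng(0)
scores = rng.random((10_000, 1_000))
labels = rng.integers(0, 1_000, size=10_000)
print(f"top-5 error: {top5_error(scores, labels):.3f}")
```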

The cheating by Baidu was nothing sophisticated – more akin to an initial stolen glimpse at the answers, followed by more of the same when it went unnoticed. Even that makes it sound worse than it was. Part of the competition involved checking results against the test answers anyway: someone on the Baidu team simply did so more often than was officially allowed.
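
This is also why such limits exist in the first place: every check against the test answers leaks a little information, and enough checks let you pick whichever model happens to look best on that particular test set. The toy simulation below – my own illustration of the statistical point, with nothing taken from Baidu’s actual system – uses “models” that are pure noise, yet selecting the best of 200 test queries produces a score that evaporates on fresh data.

```python
# Toy simulation of why test-server query limits exist.
# Every "model" here guesses at random, yet keeping the best
# of many test-set evaluations yields an inflated score that
# does not hold up on unseen data.
import numpy as np

rng = np.random.default_rng(1)
n_test, n_queries = 1_000, 200

true = rng.integers(0, 2, n_test)    # hidden test labels
fresh = rng.integers(0, 2, n_test)   # a fresh, unseen set

best_acc, best_model = 0.0, None
for _ in range(n_queries):
    guesses = rng.integers(0, 2, n_test)  # a model that guesses
    acc = (guesses == true).mean()        # one test-server query
    if acc > best_acc:
        best_acc, best_model = acc, guesses

print(f"best reported accuracy: {best_acc:.3f}")   # roughly 0.54
print(f"same model, fresh data: {(best_model == fresh).mean():.3f}")  # ~0.50
```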

In their paper about the submission, Baidu themselves claimed nothing more than an engineering advance: they built a large supercomputer that could handle more data than previous implementations. A necessary advance, but very much a “scaling up” of existing solutions – and one financially outside the reach of a typical academic research group. They entered the competition to demonstrate that, after such significant investment in hardware, their new supercomputer could perform. They have since apologised for breaking the rules of the competition.

In any case, the significant breakthrough in the area had already been achieved by Geoff Hinton’s group at the University of Toronto. They produced the machine learning equivalent of the high jump’s “Fosbury Flop” to win the 2012 edition of the competition, with such a significant improvement that all leading entries are now derived from their model. That model itself built on a two-decade-long program of research by Yann LeCun, then of New York University.
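
For readers curious what that “Fosbury Flop” looks like in code: the winning approach was a deep convolutional neural network. The sketch below shows the basic ingredients – stacked convolutions, non-linearities, pooling and a linear classifier – in miniature, assuming the PyTorch library. It is an illustrative toy, not the Toronto group’s actual, far larger model.

```python
# A minimal convolutional network, sketching the ingredients
# that the 2012 winning entry scaled up. An illustrative toy,
# not a reconstruction of the actual model.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 224x224 -> 112x112
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),    # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass over a batch of two fake 224x224 RGB images.
logits = TinyConvNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```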

Blown out of proportion

The result of Baidu’s entry into the competition was posted as an “e-print” publication. E-prints are unreviewed articles – a slightly more formal version of a technical blog post. The problem was identified by the community quickly, within three weeks, and a corrected version was published. This is science in action.

The “cheating scandal” was labelled as such by the very same prestigious technical publication that broadcast the initial results to its readers within two days of the e-print’s publication: MIT Technology Review.

Singling out MIT Technology Review in this case may be a little unfair, because this is part of a wider phenomenon in which technical results are trumpeted in the press before they are fully tasted (let alone digested) by the scientific community. E-print publication is a good thing: it allows ideas to spread quickly. However, the implications of those ideas need to be understood before they are presented as scientific fact.

Ideally, knowledge moves forward through academic consensus, but in practice that consensus is itself swayed by outside forces. This raises questions about who the ultimate arbiter of academic quality is. One answer is opinion: the opinion of those who matter, such as governments, businesses, other scientists or even the press. Success in machine learning has meant the field is attracting such attention.

Getting on with it for decades

Ironically, the developments that enabled recent breakthroughs in AI all took place outside of such close scrutiny. In 2004 the Canadian Institute for Advanced Research (CIFAR) funded a far-sighted program of research. An international collaboration of researchers was given the time, intellectual space and money that they needed to make these significant breakthroughs. This collaboration was led by Geoff Hinton, the same researcher whose team achieved the 2012 breakthrough result.

This breakthrough led to all the major internet giants fighting for their pound of academic flesh. Of those researchers involved in CIFAR, Hinton has been hired by Google, Yann LeCun leads Facebook’s AI Research team, Andrew Ng heads up research at Baidu and Nando de Freitas was recently recruited to Google DeepMind, the London start-up that Google lavished £400m on acquiring.

The Baidu cheating case is symptomatic of a big change in the landscape for those who work in machine learning and who drove these advances in AI. Until 2012, ideas from researchers in machine learning were under the radar. They were widely adopted commercially by companies like Microsoft and Google, but they did not impinge much on public consciousness. But two breakthrough results brought these ideas to the fore in the public mind. The Imagenet result by Hinton’s team was one. The other was a program that could learn to play Atari video games, created by DeepMind – the result that triggered their purchase by Google.

However, just as Deep Blue’s defeat of Kasparov didn’t herald the dawn of the age of the super-intelligent computer, neither will either of these significant accomplishments. They arrived not through a better understanding of the fundamentals of intelligence, but through more data, more computing power and more experience.

Who follows in whose wake?

These apparent breakthroughs have whetted the appetite. The technical press is becoming susceptible to tabloid sensationalism in this area – but who can blame them, as companies and universities ramp up their claims of scientific advance? The advances are something of an illusion: they are the march of technologists following in a scientific wake.

The wake-generators are much harder to identify or track, even for their fellow scientists. But the very real danger is that expectations of significant advances, or misunderstanding of the underlying phenomena, will bring about an AI bubble of the type we saw 30 years ago. Such bubbles are very damaging. When high expectations aren’t immediately fulfilled, entire academic domains can be dismissed and far-seeing proposals like CIFAR’s go unfunded.

Academics make those first waves. Boat wake via Dennis Tokarzewski/www.shutterstock.com

Even if Baidu’s result were valid, it would have been just the type of workaday scientific development that most of us spend most of our time trying to cook up. It did not merit a pre-publication announcement in MIT Technology Review, and the withdrawal should have been just a footnote to add to the diverse collection that keeps all astute academics scientifically wary. Rather boringly, the only true marker of scientific advance is repeatability, whether within the scientific community or through the transfer of ideas to the commercial world.

When reporting on the scandal, MIT Technology Review referred to participation in these competitions as a “sport”. I feel sporting analogies give the wrong idea of the spectacle of scientific progress. It is more like watching a painter at work. It is very rare that any single brushstroke reveals the entire picture. And even when the picture is complete, it may tell us only a limited amount about what the next creation will be.

The Conversation

Tiny cell superheroes are suiting up to give bone cancer the boot!

Imagine your body is a sprawling, high-tech kingdom, and usually, your immune system is the elite police force keeping everything...