If the idea of robot ethics sounds like something out of science fiction, think again, writes Dylan Evans.

Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.

This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Research Network (Euron) has even started lobbying governments for legislation.

At the top of their list of concerns is safety. Robots were once confined to specialist applications in industry and the military, where users received extensive training on their use, but they are increasingly being used by ordinary people. Robot vacuum cleaners and lawn mowers are already in many homes, and robotic toys are increasingly popular with children.

As these robots become more intelligent, it will become harder to decide who is responsible if they injure someone. Is the designer to blame, or the user, or the robot itself? Software robots - basically, just complicated computer programmes - already make important financial decisions. Whose fault is it if they make a bad investment?

[Image caption: Robots have become a lot more intelligent over the decades]
Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous "three laws of robotics". He argued that intelligent robots should all be programmed to obey the following three laws:

A robot may not injure a human being, or, through inaction, allow a human being to come to harm

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

These three laws might seem like a good way to keep robots from harming people. But to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult.

For a start, the robot would need to be able to tell humans apart from similar-looking things such as chimpanzees, statues and humanoid robots. This may be easy for us humans, but it is a very hard problem for robots, as anyone working in machine vision will tell you.

[Image caption: David Hanson's K bot can mimic human expressions]

Similar problems arise with rule two, as the robot would have to be capable of telling an order apart from a casual request, which would involve more research in the field of natural language processing.
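To see just how much the laws take for granted, consider a minimal sketch of what a "three laws" checker might look like in code. This is purely illustrative: every helper function here (detect_humans, predict_harm, parse_order) is a hypothetical placeholder standing in for one of the unsolved research problems described above, not an existing robot API.

```python
# Purely illustrative: a naive "three laws" checker. Each helper is a
# hypothetical placeholder for a capability no robot yet has.

def detect_humans(camera_image):
    # Law one presupposes reliably telling humans apart from
    # chimpanzees, statues and humanoid robots (machine vision).
    raise NotImplementedError("open problem in machine vision")

def predict_harm(action, humans):
    # Law one also requires foreseeing the consequences of an action,
    # and of inaction, for every human present.
    raise NotImplementedError("requires predicting real-world consequences")

def parse_order(utterance):
    # Law two presupposes telling a genuine order apart from a casual
    # request (natural language processing).
    raise NotImplementedError("open problem in natural language processing")

def action_permitted(action, camera_image, utterance):
    humans = detect_humans(camera_image)      # who counts as a human?
    if predict_harm(action, humans):          # law one: no harm
        return False
    order = parse_order(utterance)            # law two: obey orders
    if order is not None and action != order:
        return False
    return True                               # law three would be checked last
```

Even this toy version shows that the difficulty lies not in the laws themselves but in the perception and language capabilities they silently assume.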
Jaron Lanier, an internet pioneer, has warned of the dangers such technology poses to our sense of our own humanity. If we see machines as increasingly human-like, will we come to see ourselves as more machine-like? Lanier talks of the dangers of "widening the moral circle" too much. If we grant rights to more and more entities besides ourselves, will we dilute our sense of our own specialness?
This kind of speculation may miss the point, however. More pressing moral questions are already being raised by the increasing use of robots in the military.

The US military plans to have a fifth of its combat units fully automated by the year 2020. Asimov's laws don't apply to machines which are designed to harm people. When an army can strike at an enemy with no risk to lives on its own side, it may be less scrupulous in using force.
If we are to provide intelligent answers to the moral and legal questions raised by developments in robotics, lawyers and ethicists will have to work closely alongside the engineers and scientists developing the technology.