
“If not yet the world, robots are starting to dominate the news headlines,” writes Patrick Lin in his introduction to Robot Ethics: The Ethical and Social Implications of Robotics. For years, robots and other forms of artificial intelligence have been performing tasks in factories, making mass production easier than ever. Automation has slowly spread into other areas as well. Robots are now used by militaries to attack enemies and serve as caregivers for infants and the elderly. There are robots used as sex toys, and robots that assist surgeons in performing difficult operations.

With new capabilities and new responsibilities come new ethical questions. Who is responsible for the actions carried out by a robot? What happens when something goes wrong? Are there laws that prevent humans from abusing robots, and vice versa? What happens when robots start making ethical decisions? And where does the ethical boundary lie between the tasks robots can perform and those they cannot?

To prevent any misunderstanding about these questions, it is worth defining “robot ethics.” Wendell Wallach, a lecturer at Yale University’s Interdisciplinary Center for Bioethics and co-author of Moral Machines: Teaching Robots Right from Wrong, defines robot ethics the following way:

“Robot Ethics tends to break down into two different fields. One looks at the societal and ethical issues that arise in the adoption of robots by humans, and the other looks at the prospect that the robots themselves may be capable of factoring ethical sensitivities and legal concerns into the very actions and choices that they make. Many scholars distinguish the two fields by calling the latter Machine Ethics.”

While the first field deals with the appropriate use of robots in social contexts, the second field goes beyond mere programming. Instead, it addresses “whether increasingly autonomous robots will in some circumstances be able to engage in explicit ethical decision-making,” Wallach says.

To this day, there are “no laws, and no need for laws, about how humans should treat robots,” Wallach notes.

With robots entering a variety of new fields and taking over roles previously performed by humans, academics are investigating human-robot relationships in more detail. In their article “The crying shame of robot nannies: an ethical appraisal,” Noel and Amanda Sharkey examine whether robots should be used as nannies, noting that “The whole idea of robot childcare is a new one and has not had time to get into the statute books. There have been no legal test cases yet and there is little provision in the law” (180). In the case of robot nannies, Sharkey & Sharkey explain,

“The various international nanny codes of ethics (e.g. FICE Bulletin 1998) do not deal with the robot nanny but require the human nanny to ensure that the child is socialised with other children and adults and that they are taught social responsibility and values. These requirements are not enforceable by the law” (180).

In fact, in their letter “Robot Rights,” Guo and Zhang argue that “because different cultures may disagree on the most appropriate uses for robots, it is unrealistic and impractical to make an internationally unified code of ethics” (Science, 323, 876).

So if there are no laws for robots, who is ethically responsible for them? The “people who create and deploy them [the robots] for specific purposes are responsible,” Wallach states. Even though robots are beginning to make moral decisions, they are still simple machines, and those who build, design, and deploy them are responsible for their actions. Wallach points out that “the robots we have today are just migrating beyond being very simple machines and they have no intelligence, no smarts of their own.” He adds that “they have no rights either as moral agents, but more importantly as moral patients, as someone to whom we should give ethical regard or give any ethical concern.”

Nevertheless, Wallach recognizes that recent advancements in robotics are adding intricacies to the question of responsibility. “It is becoming more and more difficult for those who design and build semi-autonomous robotic systems to predict how those systems will act in new situations with new inputs,” he says. It is this unpredictability that “makes the ethical question [of who bears responsibility when something goes wrong or someone is harmed] more difficult,” Wallach explains. Still, he is convinced that “that does not mean that the robots are in any way shape or form responsible for their actions.”

The responsibility still lies with the humans; it is “the same kind of responsibility we have for any other tool we use,” Wallach says. Each time a robot assists in performing a surgery, you can still thank, or blame, a human. A similar situation arises when robots care for children. “We could say in absolute terms that it is ethically unacceptable to create a robot that appears to have mental states and emotional understanding,” Sharkey & Sharkey claim. “However, if it is the child’s natural anthropomorphism that is deceiving her, then it could be argued that there are no moral concerns for the roboticist or manufacturer” (172).

In fact, Sharkey & Sharkey raise a point that takes the question further still: one of their article’s sections is entitled “Is robot care better than minimal care?” In asking this, they raise an important point: whose ethical responsibility is it when you don’t create or deploy a robot to perform a certain action?

Given the evidence that children benefit from the use of robots in the home, in the classroom, and in therapeutic applications (Sharkey & Sharkey, 162), it appears that humans in fact have an ethical responsibility to use robots in certain situations.

Despite this responsibility, however, the question of robot rights is not as pressing as reports like those in Discovery News suggest. Wallach concludes, “those of us working in the field of robotics think that it is going to be a long time, if ever, before we will cross thresholds that we would be giving robots any kind of rights.”

Learn more about Isabel Eva Bohrer at www.isabelevabohrer.com.
