I have been studying the whole range of issues and opportunities in the commercial rollout of robotics for many years now, and I’ve spoken at a number of conferences about the best way for us to look at regulating robotics. In the process I’ve found that my guidelines most closely match the EPSRC Principles of Robotics, although I add a focus on potential solutions. And I’m calling them the 5 Laws of Robotics because it’s so hard to avoid Asimov’s Laws of Robotics in the public perception of what needs to be done.
The first and most obvious point about these “5 Laws of Robotics” is that I’m not suggesting actual laws, and neither, really, was Asimov with his famous 3 Laws (technically 4 of them). Asimov proposed something that was hardwired or hardcoded into the very existence of robots, and of course it didn’t work perfectly, which gave him the material for his books. Interestingly, Asimov believed, as did many others at the time (symbolic AI, anyone?), that it would be possible to define effective yet global behavioral rules for robots. I don’t.
My 5 Laws of Robotics are:
- Robots should not kill.
- Robots should obey the law.
- Robots should be good products.
- Robots should be truthful.
- Robots should be identifiable.
What exactly do these laws mean?
Firstly, people should not be legally able to weaponize robots, although there may be lawful exclusions for use by defense forces or first responders. Some people are completely opposed to Lethal Autonomous Weapon Systems (LAWS) in any form, whereas others draw the line at robot weapons being ultimately under human command, with accountability to law. Currently in California there is proposed legislation to introduce fines for individuals building or modifying weaponized robots, drones or autonomous systems, with an exception for ‘lawful’ use.
Secondly, robots should be built so that they comply with existing laws, including privacy laws. This implies some form of accountability for companies on compliance in various jurisdictions, and while that is technically very complex, successful companies will be proactive, because otherwise there will be a lot of court cases and insurance claims keeping lawyers happy while badly damaging the reputation of all robotics companies.
Thirdly, although we are continually developing and adapting standards as our technologies evolve, the core principle is that robots are products, designed to do tasks for people. As such, robots should be safe and reliable, and should do what they claim to do, in the manner in which they claim to operate. Misrepresentation of the capabilities of any product is universally frowned upon.
Fourthly, robots should not lie, and this problem is fairly unique to robots. Robots create the illusion of emotions and agency, and humans are very susceptible to being ‘digitally nudged’ or manipulated by artificial agents. Examples include robots or avatars claiming to be your friend, but the deception could be as subtle as a robot using a human voice as if there were a real person listening and speaking, or failing to explain that a conversation you’re having with a robot may have many listeners at other times and locations. Robots are potentially amazingly effective advertising vehicles, in ways we are not yet expecting.
Finally, and this extends the principles of accountability, transparency and truthfulness, it should be possible to know who the owner and/or operator of any robot is when we interact with it, even if we’re just sharing a sidewalk. Almost every other vehicle has to comply with some registration law or process, allowing ownership to be identified.
What can we do to act on these laws?
- Robot Registry (license plates, access to a database of owners/operators; see the sketch after this list)
- Algorithmic Transparency (via Model Cards and Testing Benchmarks)
- Independent Ethical Review Boards (as in biotech industry)
- Robot Ombudspeople (to liaise between the public, policy makers and the robotics industry)
- Rewarding Good Robots (design awards and case studies)
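To make the registry and transparency items above a little more concrete, here is a minimal sketch in Python of what a public robot registry record and a pared-down model card might contain. Everything here is a hypothetical illustration: the class names (RobotRegistryEntry, ModelCard), the fields, and values like “SVR-0042” are assumptions for the sake of the example, not an existing schema, standard, or API.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class RobotRegistryEntry:
    """Hypothetical record in a public robot registry (analogous to a license plate lookup)."""
    robot_id: str       # unique identifier displayed on the robot, like a plate number
    owner: str          # legal owner of record
    operator: str       # entity currently operating the robot
    contact: str        # public point of contact for complaints or incidents
    jurisdiction: str   # where the robot is registered and which laws apply


@dataclass
class ModelCard:
    """Hypothetical, pared-down model card summarizing what the robot's software claims to do."""
    intended_use: str
    known_limitations: str
    benchmark_results: Dict[str, float] = field(default_factory=dict)


def lookup(registry: Dict[str, RobotRegistryEntry], robot_id: str) -> Optional[RobotRegistryEntry]:
    """Return the registry record for a visible robot ID, if one exists."""
    return registry.get(robot_id)


# Example: a sidewalk delivery robot that a pedestrian could look up by its visible ID.
entry = RobotRegistryEntry(
    robot_id="SVR-0042",
    owner="Example Delivery Co.",
    operator="Example Delivery Co. (Bay Area fleet)",
    contact="incidents@example.com",
    jurisdiction="California, USA",
)

card = ModelCard(
    intended_use="Sidewalk package delivery on mapped routes",
    known_limitations="Not rated for unpaved surfaces or dense crowds",
    benchmark_results={"pedestrian_detection_recall": 0.97},
)

registry = {entry.robot_id: entry}
print(lookup(registry, "SVR-0042"))
```

The point is not the specific fields but that identifiability and algorithmic transparency can reduce to small amounts of structured, publicly queryable information, which is much easier to mandate and audit than abstract ethical commitments.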
There are many organizations releasing guides, principles, and suggested laws. I’ve surveyed most of them and looked at the research. Most of them are just ethical hand-wringing and accomplish nothing, because they don’t factor in the real-world conditions: what the goals are, who would be responsible, and how to make progress towards those goals. I wrote about this issue ahead of giving a talk at the ARM Developer Summit in 2020 (video included below).
Silicon Valley Robotics announced the first winners of our inaugural Robotics Industry Awards in 2020. The SVR Industry Awards consider responsible design as well as technological innovation and commercial success. There are also some ethical checkmark or certification initiatives in preparation, but, like the development of new standards, these can take a long time to do properly, whereas awards, endorsements and case studies can be available immediately to foster discussion of what constitutes a good robot and of the social challenges that robotics needs to solve.
The Federal Trade Commission recently published “The Luring Test: AI and the engineering of consumer trust”, describing the ways AI tools can be used to steer people’s beliefs, emotions, and behavior, and warning companies against deploying them deceptively.
For those not familiar with Isaac Asimov’s famous Three Laws of Robotics, they are:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a fourth (called the Zeroth Law, as in 0, 1, 2, 3):
Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Robin R. Murphy and David D. Woods have updated Asimov’s laws to be more similar to the laws I proposed above, and they provide a good analysis of what Asimov’s Laws meant and why they changed them to deal with modern robotics: “Beyond Asimov: The Three Laws of Responsible Robotics” (2009).
Some other selections from the hundreds of principles, guidelines and surveys of the ethical landscape that I recommend come from one of the original EPSRC authors, Joanna Bryson.
The Meaning of the EPSRC Principles of Robotics (2016)
And the 2016/2017 update from the original EPSRC team:
Principles of robotics: regulating robots in the real world, Connection Science, 29:2, 124-129 (2017). DOI: 10.1080/09540091.2016.1271400
Another survey worth reading is on the Stanford Plato site: https://plato.stanford.edu/entries/ethics-ai/