Alex Stearns

Peters

12/3/2017

English 121

The term "robotics" was first coined by the legendary science fiction writer Isaac Asimov in his 1941 short story "Liar!".
He was one of the first to see the vast potential of up-and-coming technologies that had yet to win public approval or interest in his time. Since then, however, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technology. While robotics has brought many benefits to modern humanity, it is also the subject of endless heated debate. Humanity is on the verge of a robot revolution.
And while many see it as a gateway to progress not seen since the Renaissance, it could just as easily result in the end of humanity. With the ever-present threat of accidentally creating humanity's unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies.

"As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values," said Stuart Russell, a professor of computer science at UC Berkeley and co-author of the university's textbook on artificial intelligence. Russell is a strong believer that the survival of humanity may well depend on instilling morals in our AIs, and that doing so could be the first step to ensuring a peaceful and safe relationship between people and robots, especially in simpler settings. "A domestic robot, for example, will have to know that you value your cat," he says, "and that the cat is not something that can be put in the oven for dinner just because the fridge is empty." This raises the obvious question: how on Earth do we convince these potentially godlike beings to conform to a system of values that benefits us?

While experts from several fields around the world attempt to work through the ever-growing list of problems involved in creating more obedient robots, others caution that doing so could be a double-edged sword. While it may lead to machines that are safer and ultimately better, it may also introduce an avalanche of problems regarding the rights of the intelligences we have created.
The notion that human/robot relations might prove tricky is far from a new one. In his short story collection I, Robot, Asimov introduced his Three Laws of Robotics, designed to be a basic set of laws that all robots must follow to ensure the safety of humans: 1) a robot may not harm a human being; 2) a robot must obey orders given to it unless they conflict with the First Law; and 3) a robot must protect its own existence unless doing so conflicts with either of the first two laws. Asimov's robots adhere strictly to the laws and yet, limited by their rigid robot brains, become trapped in unresolvable moral dilemmas. In one story, a robot lies to a woman, falsely telling her that a certain man loves her, because the truth might hurt her feelings, which the robot interprets as a violation of the First Law. To not break her heart, the robot breaks her trust, traumatizing her and ultimately violating the First Law anyway. The conundrum ultimately drives the robot insane. Although fictional, Asimov's Laws have remained a central and basic entry point for serious discussions about the nature of morality in robots, and they act as a reminder that even clear, well-defined rules may fail when interpreted by individual robots on a case-by-case basis.
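Read as a program, Asimov's Laws amount to a strict priority ordering: each law only applies when no higher law is violated. The sketch below is purely illustrative and is not drawn from the essay's sources; every name and field in it is a hypothetical stand-in for judgments a real robot could not make so simply.

```python
# Asimov's Three Laws as a strict priority ordering (illustrative only).
# An "action" is a dict of hypothetical flags a robot could never actually
# compute this easily -- that difficulty is the essay's point.

def permitted(action):
    """Return True only if the action survives all three laws in priority order."""
    # First Law: a robot may not harm a human being.
    if action.get("harms_human"):
        return False
    # Second Law: obey human orders, unless that conflicts with the
    # First Law (already handled above).
    if action.get("disobeys_order"):
        return False
    # Third Law: protect its own existence, unless self-sacrifice is
    # required by the first two laws (here, by a human order).
    if action.get("endangers_self") and not action.get("ordered"):
        return False
    return True

print(permitted({"harms_human": True}))                      # blocked by the First Law
print(permitted({"endangers_self": True, "ordered": True}))  # Second Law outranks the Third
```

Even this toy version shows where "Liar!" goes wrong: the flag `harms_human` hides the hard question of what counts as harm, which is exactly the ambiguity that drives Asimov's robot insane.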
Accelerating advances in AI technology have recently spurred increased interest in the question of how newly intelligent robots might navigate our world. With a future of highly intelligent AI seemingly close at hand, robot morality has emerged as a growing field of discussion, attracting scholars from ethics, philosophy, human rights, law, psychology, and theology. There has also been public concern, as many noteworthy minds in the scientific and robotics communities have cautioned that the rise of the machines could well mean the end of the world.

Public concern has centered on "the singularity," the theoretical moment when machine intelligence surpasses our own. Such machines could defy humanity's attempts to control them, the argument goes, and, lacking proper morality, could use their superior intellects to extinguish the human race. Ideally, AI with human-level intelligence will need a matching level of morality as a check against potential bad behavior. However, as Russell's example of the feline-roasting domestic robot illustrates, machines would not necessarily need to be superintelligent to create problems. Soon, we are likely to interact with smaller-scale, simpler robots.
And those, too, will benefit from increased moral awareness. The immediate issue is not perfectly replicating humanlike morality, but rather making robots that are more sensitive to the ethically relevant aspects of their individual jobs.

Ethical sensitivity could make robots better, more effective tools. Imagine an automated car programmed never to break the speed limit. On paper this appears to be a sound idea, until a passenger is bleeding out in the back seat. They would shout at the car to break the speed limit and get them necessary medical attention, but the car would respond, "Sorry, I can't do that." A machine that always follows its programming is useful, but limited.
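The speeding-car scenario can be sketched as a rule with an escape hatch, where a lesser rule yields to a greater one. The code below is a hypothetical illustration invented for this essay, not any real vehicle's logic; the speed limit and the emergency flag are assumed values.

```python
# Illustrative sketch of the essay's automated-car dilemma: a hard rule
# (the speed limit) that a more ethically sensitive machine may override.
SPEED_LIMIT = 65  # mph; an assumed, illustrative value

def choose_speed(requested, emergency=False):
    """Cap speed at the limit, unless a passenger emergency justifies breaking it."""
    if emergency:
        # Break the lesser rule (the limit) to serve the greater one
        # (getting a bleeding passenger to medical attention).
        return requested
    return min(requested, SPEED_LIMIT)

print(choose_speed(80))                  # ordinary trip: capped at the limit
print(choose_speed(80, emergency=True))  # emergency: the rule bends
```

A car running only the `min()` line is the rigid machine that answers "Sorry, I can't do that"; the `emergency` branch is the fine-tuned moral capability the next paragraph argues for.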
A far more useful robot is one that can break the rules when something even worse will happen if it doesn't.

As machines get smarter and more independent, they will require increasingly fine-tuned moral capabilities. The end goal is to develop robots "that extend our will and our capability to realize whatever it is we dream." But before machines can support the realization of our dreams, they must be able to understand our values, or at least act in accordance with them.

This leads to the major hurdle of robot ethics: there is no universal, or even largely agreed-upon, set of human morals. Morality is often specific to cultures, continually evolving, and eternally debated. If robots are to live by an ethical code, who will decide it? Where will it come from? This one question will make or break the field of advanced robotics. When, not if, we create these machines, what inevitably imperfect morals will we bestow upon our collective technological child?
The rules could be set in stone, such as Asimov's Three Laws of Robotics or the Ten Commandments, or they could be more flexible and open to interpretation. What is important is that the machine is given solid guidelines upon which to base its decisions. The future is uncertain, ever-changing, and as impossible to predict as ever, with several new ideas on the verge of being realized.
Robotics is a fascinating as well as terrifying field of study. Real dangers face roboticists as they strive to create a utopia for humanity and to avoid the all-too-often-pictured dystopia that could occur should they fail. Humanity has built a long and storied past on defying what was previously considered impossible, and with the help of our robot friends we will hopefully continue to do so for a long time yet. Dangerous? Yes. Worth it? Absolutely.
Works Cited

Asimov, Isaac. Foundation: I, Robot. Octopus Books, 1947.

"The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be 'Moral'." Cal Alumni Association, 8 June 2015, alumni.berkeley.edu/california-magazine/just-in/2015-06-08/good-bad-and-robot-experts-are-trying-make-machines-be-moral.

Partridge, Kenneth. Robotics. H.W. Wilson Co., 2010.