
I, robot, need ethical guidance

Scholars say the rise of automated, and increasingly autonomous, robotic technologies requires greater ethical scrutiny.

BY TIM JOHNSON | JUN 10 2015


It’s a dilemma so old, even its name sounds antiquated – and indeed the so-called “trolley problem” has been a hot item of debate for ethicists for nearly half a century. The quandary has various iterations, but the basic plot always remains the same: a runaway trolley roars down a track, out of control.

Ahead, five people, completely unaware of the imminent threat, stand on the track (in some versions, they’re tied down and unable to escape). To the right is a spur that ends at a solid wall (sometimes with a lone person tied to it). In that one panicked moment, the trolley driver must make a life-and-death decision. Should he plough into the five helpless souls on the track, saving himself and anyone on the trolley? Or should he take the spur, killing himself, the riders and that one lone soul?

Now, make the trolley a self-driving car – one that’s programmed to react to this very situation – and you’ll have an idea of the dilemma facing engineers and ethicists today.

“Highly automated technologies are doing things that humans used to do on our own – and the more you automate, the more decisions are automated. In the case of a self-driving car, the moral question that emerges is: who makes these design decisions? Who decides whether the car goes this way or that?” says Jason Millar, a trained engineer who also teaches robot ethics at Carleton University and is a doctoral candidate in the department of philosophy at Queen’s University. “These are questions that have never come up in the past with traditional technologies.”
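
To see why this is a design decision rather than an abstract puzzle, consider a minimal sketch, in Python, of what the automated choice might look like. Everything in it is hypothetical (the names, the inputs, the hard-coded policy constant); the point is only that the value of that one constant is a moral judgment an engineer has to make long before any emergency occurs.

```python
# Purely hypothetical sketch, not drawn from any real vehicle software.
# Once the emergency reaction is automated, the moral choice becomes a
# constant that someone had to set long before the emergency occurs.

PROTECT_OCCUPANTS_FIRST = True  # who gets to choose this value, and on what grounds?

def choose_maneuver(pedestrian_ahead: bool, swerve_endangers_occupants: bool) -> str:
    """Return the car's action in a drastically simplified emergency."""
    if not pedestrian_ahead:
        return "continue"   # no dilemma, nothing to decide
    if PROTECT_OCCUPANTS_FIRST and swerve_endangers_occupants:
        return "continue"   # spares the occupants, harms the pedestrian
    return "swerve"         # spares the pedestrian, risks the occupants
```

Whether that constant should exist at all, and who gets to set it, is precisely the question Mr. Millar is raising.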

Robots, the ultimate automated machines, are increasingly replacing humans in some of the most commonplace functions of everyday life. They have already been involved in manufacturing for decades, and some people worry about the degree to which they will someday fill many human jobs (although that is as much an economic issue as a moral one). Nevertheless, in the next 10 to 20 years, experts say, these automated technologies will be doing everything from driving our cars to fighting our wars and will be taking the place of humans in the most intimate of roles.

As a consequence, emerging alongside the growing pervasiveness of robots in our society is the field of robot ethics, a discipline that brings together strange bedfellows from disparate departments. A number of the top global experts are Canadian, and these engineers, ethicists and philosophers are asking some of the key questions that will mould and shape our collective future.

Mr. Millar notes that while ethics has long been part of engineering instruction, the scope has always been rather limited. Traditional engineering ethics, he says, deals mostly with basic legal questions – not with the plethora of issues that come freighted with the rise of robots. “I know the kind of ethics training I got as an engineer, and the sophistication just isn’t sufficient,” he says. “We’re not building bridges; we’re creating robots that are caring for people’s lives. The stakes are much higher.”

While noting that science-fiction writer Isaac Asimov’s famous Three Laws of Robotics remain relevant (that a robot should not harm humans, should obey human commands except where they conflict with the first law, and should preserve its own existence only when that doesn’t conflict with the first two laws), he observes that we now need more. On a university campus, the two buildings that are often the furthest apart, both literally and figuratively, are the engineering faculty and the philosophy department – a gap, he says, that’s not just physical but fundamental. “We need a cultural shift in engineering and philosophy to understand the depth to which these two fields overlap in the case of automated technologies and robotics.”
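
Asimov framed the laws as a strict priority ordering, which is easy to caricature in code and just as easy to see through. The sketch below is illustrative only: it assumes tidy predicates such as harms_human that no real robot can evaluate, which is roughly why the laws alone don’t carry designers very far.

```python
from typing import Callable, Iterable, List

Action = str  # stand-in for whatever a robot's planner produces

def rank_by_three_laws(actions: Iterable[Action],
                       harms_human: Callable[[Action], bool],
                       disobeys_order: Callable[[Action], bool],
                       endangers_self: Callable[[Action], bool]) -> List[Action]:
    """Order candidate actions lexicographically: violating a higher law always
    costs more than violating any combination of lower laws (False sorts first)."""
    return sorted(actions, key=lambda a: (harms_human(a),      # First Law dominates
                                          disobeys_order(a),   # then the Second
                                          endangers_self(a)))  # then the Third
```

The preferred action is simply the first element of the returned list; all of the genuine difficulty hides inside the three predicates.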

The urgency of developing a good process for making ethical design decisions increases when one considers the range of roles that robots surely will occupy in the coming years. Neil McArthur, an associate professor of philosophy and associate director of the Centre for Applied and Professional Ethics at the University of Manitoba, says robots are becoming more and more lifelike, both physically and mentally. “Artificial intelligence is making their personalities much more realistic, and the manufacturing technologies are more biologically correct. Robots don’t look like C-3PO anymore,” he says, referring to the Star Wars character.

Accordingly, these machines will replace (and are already replacing) humans in a variety of roles, from “family robots” that will tell stories to our kids and take group portraits, to machines serving as companions to the elderly, to mechanical slaves built to satisfy our sexual desires. Dr. McArthur observes that in Japan – a country we often look toward to see our own future – sex robots are already a reality, and these automated ladies can be found in brothels and homes from Tokyo to Yokohama. He estimates that so-called “sexbots” will become commonplace in North America in as little as a decade.

Unsurprisingly, says Dr. McArthur, the ethical issues in this area are many. One important moral issue is that these developments will almost certainly lead to serious economic disruption. “Robots are going to get to the point where they can do almost everything that humans get paid to do. And we have to ask: Is that good or bad?” This applies even in the area of replacing sex workers, some of whom, he argues, choose this profession as an empowering, lucrative option.

The intimate coming together of man and machine also has serious implications for human behaviour. Dr. McArthur says young men who turn to sex robots – which are stylized in pornographic proportions and given passive personalities – might move into adulthood with a skewed vision of how a woman should look and act. Especially for those with less confidence in interacting with the opposite sex, there’s a real possibility of retreating into solitude with just their phone, computer and sexbot for companionship. These new servants may even have an impact on our motivational drive more generally, leading to a heightened withdrawal from human relationships.

On the opposite end of the spectrum from passive sexbots, some machines will increasingly have agency – that is, the ability to act independently. Ian Kerr, who holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, says the day will soon come when we cede control over major life-and-death decisions to robots.

Dr. Kerr, who holds degrees in law and biology along with a doctorate in philosophy, points to IBM’s supercomputer, nicknamed Watson, which gained celebrity status by winning on the television game show Jeopardy! and has now been tasked with diagnosing cancer using genomic data and trial-based evidence. According to some, Watson is now better at diagnosing the disease than human doctors. Dr. Kerr worries about the inevitable dependence that will grow out of this situation, especially since the machine arrives at its conclusions through processes and algorithms that are far beyond our own understanding.

“We’ll have no good reason not to defer to the machine, because the machine is better than the human at predicting,” he observes. “But at the same time, we’re giving up any knowledge or understanding about how the machine is making these decisions.”
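
That worry can be made concrete with a small, entirely hypothetical deferral rule (it bears no relation to how Watson is actually deployed): once the machine’s measured track record beats the clinician’s, the numerically “rational” policy is to defer, and nothing in the rule ever asks how the prediction was reached.

```python
# Hypothetical deferral rule, for illustration only.
# Note what is absent: no step asks how or why the model reached its prediction.

def final_diagnosis(model_prediction: str, clinician_diagnosis: str,
                    model_accuracy: float, clinician_accuracy: float) -> str:
    """Defer to whichever party has the better historical accuracy."""
    if model_accuracy > clinician_accuracy:
        return model_prediction     # the machine wins on the numbers
    return clinician_diagnosis
```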

Dr. Kerr is one of the founders of We Robot, an annual international conference that concentrates on the intersection of law and robotics. He is most concerned about the movement in advanced weapons systems toward delegating the ultimate decision – whether or not to kill – to machines. On the one hand, he admits, it makes perfect sense to have robots fight our battles. These automatons, after all, don’t suffer from the fog of war, can be programmed to respect certain moral codes, and will save us from sacrificing our sons and daughters.

But fighting a war involves all sorts of complex and constantly shifting factors. Soldiers have to distinguish between members of traditional tribes carrying ceremonial weapons and actual enemy combatants. They must make complicated risk assessments. They need to react with a proportional response when engaged. These are all things that Dr. Kerr worries so-called “killer robots” will have difficulty doing.

“We should be careful before we relinquish such moral decision-making to machines,” he says. “And even if they had the sophistication, relinquishing the decision to kill to machines crosses a fundamental moral line.”

These human-machine interactions become even more complicated when one considers that humans have a proven tendency to put too much faith in expert systems, says Darren Abramson, an associate professor in Dalhousie University’s department of philosophy, who holds an MSc in computer science and a PhD in philosophy. “Our natural intuitive sense is to impute agency to systems that show expertise at one human-level task,” he observes. This is even more disconcerting when we consider that machines do fail, even in areas within their own expertise.

“We have these things that seem to be intelligent, because they’re doing human activities. But if we’re using these systems, we must be educated in the ways these fail, and in ways that are unintuitive to us.”

A self-driving car, for example, feels no social responsibility. It does not necessarily have the ability to prevent careless teenagers from misusing it (say, by using the car to pull them on a skateboard). Dr. Kerr also worries that we will reach a day when we’ve sunk far too much confidence into these autonomous, but ultimately incomplete, machines.

“I’m not worried about the car taking over, like KITT on [the 1980s TV series] Knight Rider, or the automated operating system taking over the ship in 2001: A Space Odyssey,” he explains. “I’m worried that humans will no longer have the skills to drive on a complex roadway.”

Which brings us back to the trolley problem. Experts in the field agree that these increasingly vital programming and design decisions need to be handled by more than a single engineer working alone in a lab. AJung Moon, a robot engineer, Vanier Scholar and PhD candidate studying human-robot interaction and robo-ethics at the University of British Columbia, helped found a group, the Open Robot Initiative, to address these various questions, including specifically the trolley problem.

Working with Carleton’s Mr. Millar and using an updated version of the dilemma – where the vehicle in question is now a self-driving car coming out of a tunnel, and the choice is between going straight ahead and hitting a child or slamming into a solid wall – the two engineers put the question to the public and got a wide range of opinions. “Quite a few people felt that the car should hit the child, because it’s my car, and it should protect me,” she says. Older respondents and females tended to choose self-sacrifice, while younger people and males more often chose the child.
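
One way to read that split is that no single factory default will satisfy everyone, so the choice could be surfaced to the person in the car rather than buried in the firmware. The sketch below is hypothetical (the survey gathered opinions; it did not produce vehicle software) and simply illustrates what such an owner-selectable “ethics setting” might look like.

```python
from enum import Enum

# Hypothetical owner-selectable setting for the tunnel dilemma; unlike the
# hard-coded constant in the earlier sketch, the owner makes the call.

class TunnelPolicy(Enum):
    PROTECT_OCCUPANT = "continue straight"   # the car protects its owner
    PROTECT_CHILD = "swerve into the wall"   # the owner accepts self-sacrifice

def tunnel_response(setting: TunnelPolicy) -> str:
    """Resolve the tunnel dilemma according to the owner's declared preference."""
    return setting.value

# For example, an owner who answers like many of the older respondents:
print(tunnel_response(TunnelPolicy.PROTECT_CHILD))  # -> swerve into the wall
```

Where the earlier sketch hard-coded the choice, this one hands it back to the owner, which is roughly the trade-off the survey was probing.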

While involving a greater number of people in the decision-making process may be messy, Ms. Moon feels that it’s essential, which is why her own lab, UBC’s Collaborative Advanced Robotics and Intelligent Systems Laboratory, or CARIS, created the Open Robot Initiative. “These decisions go beyond the personal. They’ll have implications for society down the road,” she says. And ethics aren’t necessarily top of mind when a single engineer is creating specs for a new robot in the lab. “They’re much more interested in solving all the technical issues they’re facing.”

The Open Robot Initiative is therefore moving ahead to create a framework for integrating ethics into the design process. Ms. Moon would love to see each sub-field of robotics adopt a codified set of values, and the ORI is working on a consensus document that integrates societal values and that will be accessible to those in decision-making positions. She would also like to see a higher level of government and professional regulation.

It is absolutely vital that all stakeholders are involved in the process, says Ms. Moon – a point on which Dr. Kerr at U of Ottawa strongly concurs. While the future can be difficult to predict and technology seems to move faster every day, a collective approach can help ensure responsible and ethical robotics, he says.

“We need collaboration in the machines we build. We need lawyers thinking about liability and philosophers mindful of all these ethical ramifications, and most importantly we need engineers who are trained with a sensitivity and a skillset to deal with at least the basics of these ethical and broader policy and legal questions.”

COMMENTS

  1. George Tillman / June 25, 2015 at 10:06

    The updated trolley dilemma has me wondering why in the design of a self-driving car, there is no option simply to stop. MUST the car always move? I don’t see why. This constraint in the thought experiment ignores reality. In the original dilemma, the trolley (or train) was on a track, so the idea of stopping seemed to be subliminally excluded. Designing self-driving cars makes many assumptions about options available that should be clarified.

  2. Greg Nacu / July 8, 2015 at 12:57

    I completely disagree. We don’t need philosophers to be involved in the construction of self-driving cars. The thought experiment scenarios presented are extremely rare. Meanwhile, the benefits from self-driving cars would lower the risk of death by automobile accident across the board. Automated cars don’t speed, don’t get distracted, don’t get drunk, don’t listen to music too loudly, don’t text or make phone calls while they drive and they don’t have an ego. If automated cars are as good as they promise to be, thousands and thousands of lives will be saved every year from accidents caused by “ethical” but stupid humans making stupid human mistakes. The cost is that maybe, every once in a while, a car would make a decision that a human might have made differently. It shouldn’t take a philosopher to see that self-driving cars would make us better off regardless of a few edge cases that philosophers, who have too much time on their hands, can dream up.

  3. dontexpectmetobenicejustcusimawoman / November 6, 2016 at 05:42

    “Older respondents and females tended to choose self-sacrifice, while younger people and males more often chose the child.” This quote worries me.

    What if engineers generalize this to all women and make an unfair and sexist assumption?
    What if engineers program the automated cars that way: if it’s a male driver, save him; if it’s a female driver, sacrifice her?? THAT WOULD BE SO UNFAIR.

    I am a woman but I would choose to save my life over the child because it’s my car and it should protect me, and obviously from my perspective, my life is more important than anything.

    I hope engineers don’t ASSUME that all female drivers will want to sacrifice themselves over the child. CERTAINLY I AM NOT!!!
