The Question of Robot Ethics

If your grandmother were in pain and asking for help, how would you want her robot caretaker to treat her? Society may have to confront the thorny question of robot ethics sooner than you think.

Technically, it’s a shorter leap than you might think from a Roomba vacuum cleaner to a robot that acts as an autonomous home-health aide, and so experts in robot ethics feel a particular urgency about these challenges. The choices that count as “ethical” range from the relatively straightforward — should a caretaker robot like Fabulon give its patient, Sylvia, the painkiller she asks for? — to matters of life and death: military robots that must decide whether or not to shoot; self-driving cars that must choose whether to brake or to swerve. These situations can be difficult enough for human minds to wrestle with; when ethicists think through how robots might handle them, they sometimes get stuck, as we do, between unsatisfactory options.

Among the roboticists I spoke to, the favorite example of an ethical, autonomous robot is the driverless car, which is still in the prototype stage at Google and other companies. Wendell Wallach, chairman of the technology-and-ethics study group at Yale’s Interdisciplinary Center for Bioethics, says that driverless cars will no doubt be more consistently safe than cars are now, at least on the highway, where fewer decisions are made and where human drivers are often texting or changing lanes willy-nilly. But in city driving, even negotiating a four-way stop sign might be hard for a robot. “Humans try to game each other a little,” Wallach says. “They rev up the engine, move forward a little, until finally someone says, ‘I’m the one who’s going.’ It brings into play a lot of forms of intelligence.” He paused, then asked, “Will the car be able to play that game?”

Source: www.nytimes.com

