My 8/1/2015 WSJ Letter to the Editor, reprinted here:
[My proposed title was “Lame Robotic Strawmen”, but the WSJ ignored that.]
Benjamin W. Slivka
Clyde Hill, Wash.
Here is the original WSJ opinion piece I was responding to:
Can We Create an Ethical Robot?
Without our social sense, an android will buy that last muffin, and a driverless car might run over a child
By JERRY KAPLAN
July 24, 2015 1:21 p.m. ET
As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?
Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.
How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?
In the suburb where I live, downtown parking is limited to two hours during the day. The purpose of this rule is to broadly allocate a scarce resource and to promote the customer turnover critical to local businesses. Now imagine that I’m the proud owner of a fancy new autonomous car, capable of finding a spot and parking by itself. You might think that my car should be permitted to do anything that is legal for me to do—but in this case, should I be allowed to instruct it to repark itself every two hours?
Delegating my authority to the car undermines the intent of the law, precisely because it circumvents the cost intentionally imposed on me for the community’s greater good. We can certainly modify the rule to accommodate this new invention, but it is hard to see any general principles that we can apply across the board. We will need to examine each of our rules and adjust them on a case-by-case basis.
Then there is the problem of redesigning our public spaces. Within the next few decades, our stores, streets and sidewalks will likely be crammed with robotic devices fetching and delivering goods of every variety. How do we ensure that they respect the unstated conventions that people unconsciously follow when navigating in crowds?
A debate may erupt over whether we should share our turf with machines or banish them to separate facilities. Will it be “Integrate Our Androids!” or “Ban the Bots!”?
And far more serious issues are on the horizon. Should it be permissible for an autonomous military robot to select its own targets? The current consensus in the international community is that such weapons should be under “meaningful human control” at all times, but even this seemingly sensible constraint is ethically muddled. The expanded use of such robots may reduce military and civilian casualties and avoid collateral damage. So how many people’s lives should be put at risk waiting for a human to review a robot’s time-critical kill decision?
Even if we can codify our principles and beliefs algorithmically, that won’t solve the problem. Simply programming intelligent systems to obey rules isn’t sufficient, because sometimes the right thing to do is to break those rules. Blindly obeying a posted speed limit of 55 miles an hour may be quite dangerous, for instance, if traffic is averaging 75, and you wouldn’t want your self-driving car to strike a pedestrian rather than cross a double-yellow centerline.
People naturally abide by social conventions that may be difficult for machines to perceive, much less follow. Finding the right balance between our personal interests and the needs of others—or society in general—is a finely calibrated human instinct, driven by a sense of fairness, reciprocity and common interest. Today’s engineers, racing to bring these remarkable devices to market, are ill-prepared to design social intelligence into a machine. Their real challenge is to create civilized robots for a human world.
—This essay is adapted from Mr. Kaplan’s new book, “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence,” which will be published August 4 by Yale University Press.