Machines have long helped us kill. From catapults to cruise missiles, mechanical systems have allowed humans to better destroy each other. Despite the increased sophistication of killing machines, one thing has remained constant—human minds are always morally accountable for their operation. Guns and bombs are inherently mindless, and so blame slips past them to the person who pulled the trigger.
But what if machines had enough of a mind that they could choose to kill all on their own? Such a thinking machine could absorb the blame itself, keeping clean the consciences of those who benefit from its work of destruction. Thinking machines may better the world in many ways, but they may also let people get away with murder.
Humans have long sought to distance themselves from acts of violence, reaping the benefits of harm while keeping their hands clean. Machines not only increase destructive power, but also physically obscure our harmful actions. Punching, stabbing and choking have been replaced by the more distant, and more tasteful, acts of button pressing and lever pulling. However, even with the physical distance that machine intermediaries allow, our minds continue to ascribe blame to the people behind them.
Studies in moral psychology reveal that humans have a deep-seated urge to blame someone or something in the face of suffering. When others are harmed, we search not only for a cause, but for a mental cause: a thinking being who chose to inflict the suffering. This thinking being is typically human, but need not be. In the aftermath of hurricanes and tsunamis, people often blame the hand of God, and in some historical cases people have even blamed livestock; French peasants once placed a pig on trial for murdering a baby.
Generally, our thirst for blame requires only a single thinking being. When we find one thinking being to blame, we are less motivated to blame another. If a human is to blame, there is no need to curse God. If a low-level employee is to blame, there is no need to fire the CEO. And if a thinking machine is to blame for someone's death, then there is no need to punish the humans who benefit.
Of course, for a machine to absorb blame it must actually be a legitimate thinker, one that acts in new, unpredicted ways. Perhaps machines could never do something "truly" new, but the same argument applies to humans "programmed" by evolution and culture. Consider children, who are undoubtedly "programmed" by their parents and yet, through learning, develop novel behavior and moral responsibility. Like children, modern machines are adept at learning, and it seems inevitable that they will behave in ways their programmers never predicted. Already, algorithms have discovered things that the humans who created them never guessed.
Thinking machines may make many decisions on their own, but they will shield humans from blame only when the decision to kill is theirs, standing between our minds and the destruction we desire. Robots already play a large role in modern combat: drones have killed thousands in the past few years, but they are currently fully controlled by human pilots. For drones to deflect blame, they must be governed by machine minds rather than human ones; machines must learn to fly Predators all on their own.
This scenario may send shivers down spines (including mine), but it makes cold sense from the perspective of policy makers. If "collateral damage" can be blamed on the decisions of machines, then military mistakes are less likely to hurt election chances. Moreover, if minded machines can be overhauled or removed (a kind of machine "punishment"), then people will feel less need to punish those in charge, whether for wartime fatalities, botched (robotic) surgeries or (autonomous) car accidents.
Thinking machines are complex, but the human urge to blame is relatively simple. Death and destruction compel us to find a single mind to hold responsible. Sufficiently smart machines, if placed between ourselves and the destruction we cause, should absorb the weight of wrongdoing, shielding our own minds from the condemnation of others. We should all hope that this prediction never comes true, but when advancing technology collides with our modern understanding of moral psychology, dark possibilities emerge. To keep our consciences clean, we need only create a thinking machine, and then vilify it.