[T]he project of designing artificial moral agents has the potential to revolutionize moral philosophy in the same way that philosophers' engagement with science continuously revolutionizes human self-understanding. New insights can be gained from confronting the question of whether and how a control architecture for robots might utilize (or ignore) general principles recommended by major ethical theories. Perhaps ethical theory is to moral agents as physics is to outfielders -- theoretical knowledge that isn't necessary to play a good game. Such theoretical knowledge may still be useful after the fact for analyzing performance and adjusting future play. [...]
Does this talk of artificial moral agents overreach, contributing to our own dehumanization, to the reduction of human autonomy, and to lowered barriers to warfare? If so, does it grease the slope to a horrendous, dystopian future? I am sensitive to the worries, but optimistic enough to think that this kind of techno-pessimism has, over the centuries, been oversold. Luddites have always come to seem quaint, except when they were dangerous. The challenge for philosophers and engineers alike is to figure out what should and can reasonably be done in the middle space that contains somewhat autonomous, partly ethically sensitive machines.