A new study offers interesting results suggesting that resistance to AI stems from moral, and not just “practical,” concerns. Pre-publication article here.
While I might disagree that moral concerns are separable from “practical” or “pragmatic” ones, since the practical results of one’s actions have a moral valence, I do think this is an interesting set of findings, and I hope to see more research on moral, ethical, and even religious reactions to advanced technologies.
I’ll offer one way to think about this. Tools (technologies, things) aren’t moral actors in their own right, but their creation and use have moral implications. The more specific a tool, the more the morality that led to its development comes into focus.
Anthropologist Margaret Mead is widely (and probably incorrectly) credited with saying that the first sign of civilization in an ancient culture is a healed femur. Whoever actually made this claim should’ve taken credit, because they created an interesting thought experiment. There’s a moral claim being made that places the beginning of human civilization at the ability and willingness to share someone else’s burden and care for a community member who, left to the course of nature, would surely die before their bone could heal. There is certainly something appealing in the implicit suggestion that the dawn of human civilization began with a very specific kind of tool: a splint. This new technology made such healing possible. The splint, a tool whose only purpose is to heal an injury and which, on top of that, assumes someone else is able and willing to take on the survival load of the injured person, has moral urgency. As a fable of community, civilization, and technology, this is quite compelling. For the purposes of morality and technology, this story marks one pole: the morality of the splint is one of community, healing, and shared endeavor.
The morality of the sword is quite different and a good example of the other pole. As much as I like swords (I was a collegiate fencer, and I enjoy DnD as well as medieval books and movies), it is hard to avoid understanding their purpose. The morality of the sword stems from the idea that killing other humans is permissible and even sometimes desirable. At inception, a sword had no other purpose: there are far better, cheaper, and easier tools for building a house, excavating valuable resources, chopping wood, or any other creative endeavor. Even for defense against animals or people, spears, clubs, and similar simpler tools work better for most folks. The sword requires specialization to produce and to use effectively. It would seem to encourage, if not outright demand, hierarchy. To put it simply, swords are for specialists to use to kill people. The morality of the sword is one that elevates aggression in a zero-sum game of survival and conquest.
These are stark contrasts for single-purpose, specialized tools. What does that mean for us today?
Many of our digital tools have multiple purposes, and even their developers had multiple aims in creating them. The morality, then, shifts to the use of the tool. Just as with humans, tools of general utility can vary widely in their moral implications. We, as people, project our morality onto these tools through how we use them. The most effective users will, intentionally or not, imprint their moral agendas onto these tools. Use, over time, will also create specializations in these tools that show off the morality that “won” among their various users. As the tools seem to do more “thinking,” or at least have the autonomy to execute decisions in ambiguous situations, the moral programming that guided their actual programming begins to have more “practical” or “pragmatic” import.
It’s no wonder, then, that as we create tools that are more and more like us, more autonomous and wide-ranging, we struggle with the moral implications of why our tools get created and what they will be allowed to do.

