Making a Mimicry of Morality: An Argument Against Artificial Moral Agency
Rionna Sparrow
Abstract
This thesis opposes incorporating artificial intelligence (AI) into our responsibility practices by developing AIs as artificial moral agents, on the grounds that AIs lack a necessary condition for moral agency: sentience. First, I explain how AI autonomy complicates attributions of moral responsibility. Second, I support the claim that sentience is necessary for moral agency by discussing the substantial conceptual and practical challenges of developing artificial moral agents. Third, I argue that even if such agents are possible, we ought not to consider them responsible or punishable, because doing so yields counterintuitive responsibility practices. Fourth, I raise the ethical concern that creating AIs that apparently meet the demands of moral agency would confer substantial obligations on their developers and users. Lastly, I suggest that, rather than seeking to create artificial moral agents, we should develop AIs as tools for human use and retain human responsibility in high-stakes settings such as medicine and warfare.
