Assessing Agency, Moral and Otherwise: Beyond the Machine Question

Joel Parthemore

Robotization and other forms of automation are increasingly among the most-heard buzzwords in the manufacturing sector and beyond. Beyond the mundane assembly-line robots, one hears about self-driving cars, “killer” battlefield robots, sex robots, prototype care robots, and so on. Laypersons and researchers alike talk about the more sophisticated examples in ways that appear to ascribe them agency and, in some cases, stop little short of personhood: describing them as having feelings, weighing choices, and making decisions both right and wrong. How much is hype, and how much is meant as substance? To what extent are people speaking metaphorically, and aware of doing so, and to what extent do they really mean what they are saying? Are existing artefacts agents in any substantial sense; if not, could potential future artefacts be? Can they be moral agents, capable of making moral decisions and being held responsible for the consequences? Most importantly, how do the answers to these questions shape our ethical interactions with machines that, in at least some important ways, remind us of ourselves? How do they inform our assignments of moral responsibility? This paper takes as its starting point that questions of artefactual agency and machine ethics are red herrings. What matters is what qualifies any purported agent as an actual agent, and what qualifies certain agents, whatever their origins, as moral agents.