When human beings and computers are both involved in accomplishing a task and something goes wrong, we tend to blame the computer. There is little question about the existence of software bugs, but can all real-world problems involving computers (such as erroneous bank transactions, military training incidents, or privacy breaches) be attributed to errors in the software program? Today, do you think it is more likely that a computer or a human being would be responsible for a serious system problem? Why?
Answers
Moral responsibility is about human action and its intentions and consequences (Fisher 1999, Eshleman 2016). Generally speaking, a person or a group of people is morally responsible when their voluntary actions have morally significant outcomes that would make it appropriate to blame or praise them. Thus, we may consider it a person’s moral responsibility to jump into the water and try to rescue another person when she sees that person drowning. If she manages to pull the person from the water, we are likely to praise her, whereas if she refuses to help, we may blame her. Ascribing moral responsibility establishes a link between a person or a group of people and someone or something that is affected by their actions. The person or group that performs the action and causes something to happen is often referred to as the agent. The person, group, or thing that is affected by the action is referred to as the patient. Establishing a link in terms of moral responsibility between the agent and the patient can be done both retrospectively and prospectively. That is, ascriptions of responsibility sometimes involve giving an account of who was at fault for an accident and who should be punished. They can also prospectively determine the obligations and duties a person has to fulfill in the future and what she ought to do.
However, the circumstances under which it is appropriate to ascribe moral responsibility are not always clear. On the one hand, the concept has varying meanings, and debates continue on what sets moral responsibility apart from other kinds of responsibility (Hart 1968). The concept is intertwined with, and sometimes overlaps with, notions of accountability, liability, blameworthiness, role-responsibility, and causality. Opinions also differ on which conditions warrant the attribution of moral responsibility: whether it requires an agent with free will, and whether humans are the only entities to which moral responsibility can be attributed (see the entry on moral responsibility).
On the other hand, it can be difficult to establish a direct link between the agent and the patient because of the complexity involved in human activity, particularly in today’s technological society. Individuals and institutions generally act with and within sociotechnical systems, in which tasks are distributed among human and technological components that mutually affect each other in contingent ways (Bijker, Hughes and Pinch 1987). Increasingly complex technologies can exacerbate the difficulty of identifying who or what is ‘responsible’. When something goes wrong, a retrospective account of what happened is expected, and the more complex the system, the more challenging the task of ascribing responsibility becomes (Johnson and Powers 2005). Indeed, Matthias argues that there is a growing ‘responsibility gap’: the more complex computer technologies become, and the less human beings can directly control or intervene in the behavior of these technologies, the less we can reasonably hold human beings responsible for them (Matthias 2004).