Mary Shelley’s 1818 novel Frankenstein is one of English literature’s most compelling horror stories. It is the tale of Dr. Frankenstein’s obsession with creating life and his subsequent abandonment of his creation. The creation, the monster, then hunts its creator. Does this prescient novel have any relevance to the rise of moral machines, AI, and robots? Is Frankenstein’s equation being revisited?
Rise of the Moral Machine: Exploring Virtue Through A Robot’s Eyes by Nigel Crook, Professor of AI and Robots, Oxford Brookes University, presents a unique scientific and Christian theological perspective on morality and machines. A key question: are machines moral agents in their own right, and what are the implications for society?
Crook argues that “there is an inevitability to the pursuit of evermore humanlike AI products that I believe is pushing us firmly towards the creation of moral machines” [24]. He attributes this inevitability to three factors: increasing robot autonomy, the increasing integration of robots into society, and the increasing human likeness of robots [69].
That raises the question: what, or whose, morals? Of the many moral theories (or ethical reasoning systems), Crook reviews consequentialism (utilitarianism), deontology, and virtue ethics. He identifies two ways to teach robots right from wrong. The “top-down approach” is a process of explicitly defining and encoding moral knowledge directly into the machine [80]. A “bottom-up approach” is to enable the robot to develop its moral capacity through a process of gradual adaptation, deploying machine learning technologies [84]. Crook concludes that machines are still a long way from achieving human-level moral capabilities [92].
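To make the contrast concrete, here is a minimal illustrative sketch in Python. None of it comes from Crook’s book: the rule set, feedback signal, and learning rate are all hypothetical. The top-down agent consults hand-coded moral rules, while the bottom-up agent gradually adapts a learned judgment from repeated feedback.

```python
from collections import defaultdict

# Top-down (hypothetical): moral knowledge is explicitly defined
# and encoded as a fixed rule base.
FORBIDDEN_ACTIONS = {"deceive", "harm", "steal"}

def top_down_permissible(action: str) -> bool:
    """Check an action against hand-coded moral rules."""
    return action not in FORBIDDEN_ACTIONS

# Bottom-up (hypothetical): moral capacity is acquired gradually.
# A toy learner nudges a per-action score toward observed feedback.
class BottomUpLearner:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.scores = defaultdict(float)  # learned moral valence per action

    def observe(self, action: str, feedback: float) -> None:
        """Adapt gradually: blend new feedback into the running score."""
        self.scores[action] += self.learning_rate * (feedback - self.scores[action])

    def permissible(self, action: str) -> bool:
        return self.scores[action] >= 0.0

if __name__ == "__main__":
    print(top_down_permissible("deceive"))  # False: the rule forbids it outright
    learner = BottomUpLearner()
    for _ in range(20):
        learner.observe("deceive", -1.0)    # repeated negative feedback
    print(learner.permissible("deceive"))   # False: learned, not encoded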
What is a Christian perspective on “moral agency”? Crook highlights four principles. First, “moral living is only achievable within the realm of the Kingdom of God. Second, the inner life of the individual is the source of all moral behaviour. Third, God’s law defines the quality of moral life which is to be found in His Kingdom. Fourth, the focus of this moral life is the individual’s relationship first with God and then with others” [26-7].
Crook explains that for robots to develop human-level moral capabilities, they would need to mirror the essential features of human moral agency that facilitate the development of moral competence [117]. Crook leverages the work of Dallas Willard, a Christian philosopher, who provides a clearly defined functional description of the essential elements of the human self that facilitate human character [118]: choice (heart, will, spirit), thought (concepts, reasoning, judgments, images), feeling (emotions, sensations), the body (center of action and interaction with the material world), the social context (interpersonal relationships), and the soul (which integrates all the other parts) [119].
This raises further questions. For example, what is “the soul”? Crook suggests that “the soul in a human being is very similar to an operating system in this respect: it integrates all of the different dimensions of the self” [124]. Another question: if a machine is to have some moral capacity, which is part of being human, then to what extent is the machine human or human-like? A related question, of course, is what it means to be human, or a person.
Crook explains that “from a theological perspective, each human is a unique person, an individual singularity emerging from a combined biological and spiritual reality” [199]. How? Genesis 2:7 records that God “breathed into his [Adam’s] nostrils the breath of life, and the man became a living being.” Crook argues that this is an important theological point because this spiritual dimension allows people to rise above their natural instincts and be moral agents [203].
Machines differ in this respect. While machines may be developed that exceed human cognitive capacities, “the occurrence of the technical singularity that is feared, and the associated moral singularity, are both highly improbable. In my view, no artificial intelligence algorithm will ever have the power of the spirit that is within human beings that enables them to rise above their natural inclinations and act in ways that are consistently intelligent, creative and moral” [203-4].
What about the notion of “consciousness”? This sets human beings apart: they can reflect on their actions. Crook argues that machines are very different: “Computation is a poor analogy for thinking” [210]. Further, “conscious awareness seems to be fundamental to what it means to have a thought. Computation, on the other hand, does not require any form of consciousness, not even simulated consciousness” [210].
From a theological perspective, Crook argues that “machines do not have the ability for conscious thought or free will as humans experience it, and so are unable to meet two of the inner life focussed criteria for moral agency” [212], which are to “possess an enduring, conscious, inner life” and to “possess (libertarian) free will” [212]. Further, “the capacity to reflect, though, implies a capacity for conscious deliberation, which we have already ruled out for machines” [213].
Crook concludes that there is a need for some robots to possess a degree of moral competence, especially those robots that engage in social interactions with people [225]. Yet, “robots will always fall short of the capacity for human-level moral agency, no matter how hyper-real they are as simulations of humans” [225].
He adds that “we will be working towards a future in which humans and machines can collaborate where machines can know and respect the moral boundaries of our societies, and can actively seek the genuine good and well being of humankind and all of God’s creation” [225].
Overall, Crook addresses important issues from both scientific and theological perspectives. His is not a voice commonly heard, as the development of AI and robots seems to outstrip our ability to establish moral underpinnings or regulatory parameters.
What are the implications of machines without morals? Dr. Frankenstein’s monster explains in his own words: “’Hateful day when I received life!’ I exclaimed in agony. ‘Cursed creator! Why did you form a monster so hideous that even you turned from me in disgust? …Satan had his companions, fellow-devils, to admire and encourage him; but I am solitary and detested!’” [122] Are these the words of a future robot without a moral compass?