Machine Autonomy Skepticism
1. Taking autonomous machines seriously
According to the US Department of Defense, as of October 2008 unmanned aircraft had flown over 500,000 hours and unmanned ground vehicles had conducted over 30,000 missions in support of troops in Iraq and Afghanistan. Over the past few years a number of government and military agencies, professional societies, and ethics boards have released reports suggesting policies and ethical guidelines for designing and employing autonomous war machines. In these reports, the word ‘autonomous’ is used more or less uncritically to refer to a variety of technologies, including automated systems, unmanned teleoperated vehicles, and fully autonomous robots. Describing such artifacts as ‘autonomous’ is meant to highlight a measure of independence from their human designers and operators. However, the very idea of an autonomous artifact has an air of paradox, and little philosophical work has been done to provide a general account of machine autonomy that is sensitive to both philosophical concerns and the current state of technological development. Without a framework for understanding the role human designers and operators play in the behavior of autonomous machines, the legal, ethical, and metaphysical questions that arise from their use will remain murky.
My project is to lay the groundwork for an account of autonomous machines that systematically captures the range of behavior demonstrated by our best machines and their relative dependence on humanity. Pursuing this project requires that we take autonomous machines seriously and not treat them as wide-eyed speculative fictions. As a philosophical project, taking autonomous machines seriously requires addressing the skeptic, who unfortunately occupies the majority position with respect to this technology. The skeptic of machine autonomy holds that any technological machine designed, built, and operated by human beings is dependent on its human counterparts in a way that fundamentally constrains its possibilities for freedom and autonomy in all but the most trivial senses of those words.
In this chapter I respond to the skeptic in order to clear the ground for an account of machine autonomy. I will treat the Lovelace objection, cited in Turing’s famous 1950 discussion of thinking machines, as the clearest and strongest statement of machine autonomy skepticism (MAS). I argue that the Lovelace objection is best understood as a version of technological essentialism; thus, a rejection of technological essentialism entails a rejection of MAS. In section 3 I survey some recent attempts to discuss the problem of autonomy as it arises in the robotics literature, and I argue that these treatments fail to adequately address the essentialism that grounds MAS. In section 4 I argue that the received interpretation of the Lovelace objection, which treats machine autonomy as an epistemological issue, likewise misses the force of her radical skeptical argument. I then argue that Turing’s original response to the Lovelace objection is best understood as an argument against artifact essentialism, and that it provides a uniquely adequate answer to the challenge. A proper understanding of Turing’s positive account will, I claim, provide a framework for thinking about autonomous machines that may generate practical, empirical methods for productively discussing the machines that inhabit our technological world. I conclude by arguing that developments in machine learning techniques within computer science accord with Turing’s understanding of the social integration of humans and machines.
2. Machine Autonomy Skepticism (MAS)
According to an essentialist theory of artifacts, artifacts depend for their natures on human mental activities. While explicit statements of this view are rare in the artificial intelligence literature, they pervade recent metaphysics, philosophy of mind, and philosophy of technology. Consider, for instance, the following formulations of what appears to be a general consensus view: Baker (2006): “Unlike natural objects, artefacts have natures, essences, that depend on mental activity”; Kroes and Meijers (2004): “Physical objects, with the exclusion of biological entities, have, as physical objects, no function and exhibit no ‘for-ness’: they acquire a teleological element and become technical artefacts only in relation to human intentionality”; McLaughlin (2001): “The function of an artefact is derivative from the purpose of some agent in making or appropriating the object; it is conferred on the object by the desires and beliefs of an agent. No agent, no purpose, no function”; Emmeche (2007): “Machines are extensions of human capacities and intrinsic parts of human sociocultural systems; they do not originate ex nihilo or fall readymade from heaven”; see also Searle (1995).
Machine Autonomy Skepticism finds a clear home in artifact essentialism. MAS can appeal to artifact essentialism to argue that even the most sophisticated performances of machines ultimately depend on human intentions and control. This is true even in cases where the machine appears to be engaged in independent performances, as with a self-driving vehicle, since such performances betray the careful human design that makes them possible. Thus, since all so-called ‘autonomous machines’ are artifacts, there are no genuinely autonomous machines. The essentialist theory of artifacts is invariant with respect to technological advance, since any potential advance presumably continues to generate artifacts of this essentially dependent sort. MAS inherits this invariance as well.
Besides its intuitive appeal, MAS has a strong rhetorical advantage in the debate over autonomous machines. Since MAS asserts that there are no autonomous machines, the use of unmanned drones and other advanced technologies poses no special ethical, legal, or metaphysical problems. On MAS, such devices ought to be treated like any other tool, where the behavior of the machine is attributed to, and ultimately the responsibility of, its operator or designer. Maintaining a categorical distinction between artifacts and minds has the rhetorical advantage of clarity and simplicity. To be clear, one need not be skeptical of autonomous machines to deny that they pose a special philosophical problem. Many hold that autonomous machines may be possible, and that in the event they are created they ought to be treated like any other autonomous agent, i.e., as a person, with all the traditional legal, ethical, and metaphysical privileges this status entails. On this view, autonomous machines pose no special philosophical problem, but merely raise the standard problems of autonomy that pertain to all other persons, human or otherwise. Although this view implicitly rejects artifact essentialism, it would nevertheless reject the proposal that unmanned drones are “autonomous”, since they are clearly not persons in the relevant sense. This response, however, fails to take machine autonomy seriously as a live challenge, and is therefore not the target of my argument. The skeptic I’m interested in doesn’t merely argue that no machine available today is autonomous; MAS is the view that no artifact can in principle be considered autonomous.
This position is most forcefully stated by Ada Lovelace, as quoted in Turing (1950): “The [machine] has no pretensions to originate anything. It can do whatever we know how to order it to perform” (her italics). If the machine originates nothing and merely carries out our orders, then it does not enjoy the requisite independence to be considered autonomous. Lovelace’s objection might be made explicit as follows:
L1: Machines can only do whatever we order them to perform.
L2: Therefore, machine performances derive entirely from human orders.
L3: Therefore, machines are not autonomous.
This argument appears compelling across a variety of positive accounts of autonomy, which will be treated more carefully in the next section. Some particular account of autonomy is needed to license the move from L2 to L3. My goal in addressing the skeptic is not to defend (or attack) any particular account, but rather to undermine the categorical restriction on machines that motivates L1, and the inference from L1 to L2 above, which will be common to any version of MAS regardless of its particular view of autonomy.
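The structure of the argument can be rendered schematically as follows. This is only a reconstruction: the predicate names and the bridge premise that licenses the final inference are mine, not Lovelace’s, with m ranging over machines and p over performances.

```latex
\begin{align*}
\text{(L1)} \quad & \forall m\,\forall p\,\big[\mathrm{Performs}(m,p) \rightarrow \mathrm{Ordered}(p)\big]\\
\text{(L2)} \quad & \forall m\,\forall p\,\big[\mathrm{Performs}(m,p) \rightarrow \mathrm{DerivesFromHumans}(p)\big]\\
\text{(Bridge)} \quad & \forall m\,\big[\mathrm{Autonomous}(m) \rightarrow \exists p\,\big(\mathrm{Performs}(m,p) \wedge \neg\mathrm{DerivesFromHumans}(p)\big)\big]\\
\text{(L3)} \quad & \forall m\,\neg\mathrm{Autonomous}(m)
\end{align*}
```

Stated this way, it is clear that the bridge premise encodes whatever positive account of autonomy the skeptic favors, while L1 and its generalization to L2 carry the categorical restriction that my argument targets.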
Distinguishing the skeptic of machine autonomy from the traditional skepticism of artificial intelligence requires situating the latter form of skepticism within what I call the classic philosophical discussion of artificial intelligence, which occupied philosophers from Turing’s paper to the turn of the century. Central to this debate was an incredibly compelling analogy between the paradigms of human intelligence and the formal operations of computers. As Dennett says,
"Computers are mindlike in ways that no earlier artifacts were: they can control processes that perform tasks that call for discrimination, inference, memory, judgment, anticipation; they are generators of new knowledge, finders of patterns–in poetry, astronomy, and mathematics, for instance–that heretofore only human beings could even hope to find."
Moreover, computers appeared to solve a problem that had plagued philosophy since we began working out modern materialist science four hundred years ago: the relationship between mind and matter. Computers demonstrated that it was possible to have complex, intelligent, meaningful activity carried out by a purely mechanical system. Dennett continues:
"The sheer existence of computers has provided an existence proof of undeniable influence: there are mechanisms–brute, unmysterious mechanisms operating according to routinely well-understood physical principles–that have many of the competences heretofore assigned only to minds."
This existence proof not only provided philosophers with a satisfying demonstration of how mental processes could be realized in physical matter, but has also generated a practical, empirical method for studying the mind: in order to understand how the mind performs some task, build an artificial system that performs the task and compare its performance with our own. Thus, we were able to turn an abstract conceptual problem into a positive scientific research agenda, and the force of the old philosophical puzzles slowly dissolved as scientists took the reins.
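To make the method concrete, here is a minimal sketch of the comparative logic: build a system that performs the task, then score its outputs against human performance on the same items. Everything in it, including the toy categorization task, the response data, and the scoring rule, is a hypothetical stand-in rather than any actual experiment.

```python
# A minimal, illustrative sketch of the comparative method described
# above: the task, data, and scoring rule are hypothetical stand-ins.

def accuracy(responses, answers):
    """Return the fraction of items answered correctly."""
    return sum(r == a for r, a in zip(responses, answers)) / len(answers)

# Ground-truth labels for a toy categorization task.
answers = ["bird", "fish", "bird", "mammal", "fish"]

# Responses collected from human subjects (hypothetical data).
human_responses = ["bird", "fish", "bird", "mammal", "bird"]

# Responses produced by an artificial system built to perform the task
# (a stand-in here; imagine the output of some working model).
machine_responses = ["bird", "fish", "fish", "mammal", "fish"]

print(f"human:   {accuracy(human_responses, answers):.0%}")
print(f"machine: {accuracy(machine_responses, answers):.0%}")

# Where the scores (and errors) converge, the artificial system becomes
# a candidate model of how the mind performs the task; where they
# diverge, the divergence itself is data for further theorizing.
```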
The classic skeptics of artificial intelligence, like Dreyfus (1972, 1992) and Searle (1980), remained unimpressed by this analogy, and proposed a variety of activities that computing machines simply could not do: computers did not understand the meaning of sentences or the significance of the tasks they performed, they had no conscious access to their environment or their internal states, and they did not approach the world with the emotion or affect characteristic of our merely human natures. Central to their attacks was an emphasis on the relevant disanalogies between the operations of formal computing machines and the complex behaviors of dynamic, embodied biological organisms. Partially due to the incredibly difficult task of clarifying exactly what they thought was missing from our best machines, and partially due to the slow, plodding progress in computing machinery, the artificial intelligence debate stalemated in the early 90s with both camps claiming a de facto victory and little consensus achieved. There has since been relatively little movement in the debate despite the terrific theoretical and technological advances within cognitive science and related fields. At the moment, the issue of artificial intelligence is more widely recognized as a pedagogically useful thought experiment raised briefly in introductory philosophy classes than as a live philosophical challenge.
Turing’s 1950 treatment of the issue includes a number of objections to the possibility of artificial intelligence, many of which anticipate the objections more thoroughly explored in the classic debate. But Lovelace’s objection is unique among the objections Turing considers, and is explicitly singled out for a second treatment in the concluding remarks of that famous paper. Implicit in Lovelace’s objection is a fundamental concession to the proponents of artificial intelligence that restructures the nature of the debate and motivates Turing’s positive suggestions in that final section. Lovelace’s objection does not entail that any particular feature of the mind is in principle off limits to a machine qua machine; on her view, the performances of the machine are limited only by the creativity and ingenuity of its human designers and operators. For this reason, Lovelace concedes a broad range of possible computer performances, including those that may appear indistinguishably human. For instance, if “understanding” is properly analyzed at the level of the causal interactions between neurons, Lovelace can fully admit that this same causal structure can be realized by a computer or some other artifact designed by appropriately clever scientists and engineers, such that it would count as a performance of that mental event. Nevertheless, Lovelace contends that the machine is only capable of such performances because of the “ordering” performed by those clever designers, and is therefore still fundamentally dependent on human input and control. In other words, the Lovelace objection is not an argument that machines cannot have minds. Lovelace’s objection does not turn on denying any particular performative activity of the machine, or on identifying some dissimilarity between that performance and its human counterpart. Instead, Lovelace’s objection is an argument that technical artifacts cannot perform independently of our orders at all, regardless of the quality of that performance. It is a skeptical argument against the possibility of machine autonomy.
This focus on the dependence of machines on human activity reorients the debate in two important ways. First, Lovelace’s objection does not necessarily endorse any particular view of the operations of the mind, and therefore her objection is out of place in a debate over the foundations of cognitive science, where much of the classic artificial intelligence debate took place. Lovelace’s concession obviates the need to provide any criteria or working definition of thinking (or any other cognitive or mental process) in order to evaluate the performances of the machine. Lovelace is not arguing about the limitations of the formal processes of computation in rendering mental events, or about the ineffable nature of mental activity itself, but instead about the unique dependence of technology on humanity, and the implications for any potential machine performance. While the classic skeptic is interested in the nature of the mind, and therefore considers the technology as such irrelevant to this investigation, Lovelace appeals to a more general theory about the nature of technology that outstrips the local concerns about the nature of minds. Placing the debate outside the framework of a theory of mind sets the Lovelace objection apart from all forms of classic skepticism.
This change of focus implies a second, and perhaps more important, difference between Lovelace and traditional AI skeptics. Lovelace’s focus on the issue of autonomy undermines the compelling but overemphasized analogy between humans and computers central to the classic debate. Since Lovelace appeals directly to the relationship between intelligent designers and the artifacts they design, her objection targets a broader class of technical artifacts than just those claimed to be mind-like due to some structural or behavioral similarity between their performances and those of a human. No one mistakes unmanned military drones or the Mars rovers for human beings, or thinks they have 'minds' in anything but the most abstract and philosophically controversial senses. Their purported autonomy does not rest on an analogy with human minds, but rather on the relationship between their behavior and the intervention (or lack thereof) of their designers and operators. These machines invite the challenge of Lovelace’s objection quite independently of the issues central to the classic debate.
Together, these differences make clear that Lovelace’s argument for machine autonomy skepticism is best understood as a version of artifact essentialism, not as a claim about the limits of machine performance: no possible performance of an artifact is sufficient to undermine its essential nature as an artifact. Artifacts by their very nature depend on the mental activity of their designers, and are therefore not candidates for autonomy in principle. While the assumption of artifact essentialism was often left implicit in the classic debate, I suspect that it is the source of much of the hard-nosed skepticism and chauvinism that, in part, gave rise to the eventual stalemate. Making this assumption explicit suggests that a satisfying answer to the skeptic cannot depend on an analogy between the performances of the machine and their human counterparts, and therefore will not mirror the familiar arguments from the classic debate. Instead, developing a satisfying account of machine autonomy requires an explicit analysis of the relationships between artifacts and their designers and operators. I turn in the next section to analyze this relationship.