1.15.2014

Rethinking Machines part 1: The "dual natures" theory of artifacts

/* I recently completed the first draft of my Ph.D. thesis in philosophy. As I prepare and polish the final draft for my defense in July, I'll be posting a series of articles that systematically present the thesis on my blog. I'm including full citations and sources from my collection when available, so hopefully others find this as useful as I will.

I started this project in 2007 with the working title Rethinking machines: artificial intelligence beyond the philosophy of mind. The core of the thesis is that the primary philosophical challenge presented by artificial intelligence pertains not to our understanding of the mind, as it was overwhelmingly treated by philosophers in the classic AI debates of the 70's and 80's (see Dreyfus, Searle, Dennett, etc.), but instead to our understanding of technology, in the sense of the non-mindlike machines with which we share existence (see Latour). Half of my dissertation involves a unique interpretation of Turing's discussion of "fair play for machines," an idea he develops in the course of his treatment of thinking machines, which I argue underlies his approach to artificial intelligence and represents the alternative view I'm endorsing. I've posted aspects of my interpretation of Turing in other posts on this blog if you'd like a preview of the more systematic presentation to come.

The other half of my thesis is a critique of the so-called "dual natures" view of artifacts. This is where my thesis and these blog posts will begin. */

Artifacts are ordinary material objects, and as such have physical descriptions that exhaustively explain their physical attributes and behaviors. Artifacts are also instruments of human design and creation, and as such admit of descriptions in intentional terms that describe their functional nature in relation to human intentionality and purposes. The fact that artifacts can be given descriptions in both physical, mind-independent terms and intentional, mind-dependent terms has motivated a strong consensus in the literature around the so-called "dual natures" view, most explicitly defended by Kroes and Meijers (2006):
“technical artefacts can be said to have a dual nature: they are (i) designed physical structures, which realize (ii) functions, which refer to human intentionality. [...] In so far as technical artefacts are physical structures they fit into the physical conception of the world; in so far as they have intentionality-related functions, they fit into the intentional conception.”
The functional aspect of technical artifacts depends on human mental activity, and this mind-dependence is characteristic of and essential to the nature of artifacts as such. Kroes and Meijers continue:
“Physical objects, with the exclusion of biological entities, have, as physical objects, no function and exhibit no ‘for-ness’: they acquire a teleological element and become technical artefacts only in relation to human intentionality.”
This view of artifacts is widely held in the literature. Consider:
Baker (2006): “Unlike natural objects, artefacts have natures, essences, that depend on mental activity.”
McLaughlin (2001): “The function of an artefact is derivative from the purpose of some agent in making or appropriating the object; it is conferred on the object by the desires and beliefs of an agent. No agent, no purpose, no function.”
Emmeche (2007): “Machines are extensions of human capacities and intrinsic parts of human sociocultural systems; they do not originate ex nihilo or fall readymade from heaven.”
On the dual natures view of artifacts, an artifact has the functional character it does in virtue of some essential and necessary connection to the activity of some human mind.

Although I find many reasons to be dissatisfied with this view of artifacts, my central claim is that the development of artificially intelligent machines gives us strong reason for thinking the dual natures view an incomplete account of artifacts. Indeed, many of the central concepts in the philosophical literature around artificial intelligence turn precisely on the distinction between treating machines as tools and treating them as minds. Consider, for instance, Searle's (1980) distinction between "weak" and "strong" AI:
“According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.”
Searle is skeptical of strong AI and denies that digital computers can think in anything like the way human brains think. But he has no problem whatsoever with the use of speedy digital computers, qua artifacts, as instrumental aids in the study of the mind, and so supports so-called "weak AI". If the machine is not a mind, then it is merely a tool: a functional artifact of human creation. If the machine, contra Searle, suddenly springs into consciousness and becomes a genuine mind, then presumably its status as an artifact drops out and becomes irrelevant. Haugeland (1980) makes the point explicit (emphasis original):
“If [the product is specified in terms of a computational structure], then a working model could probably be manufactured much more easily by means of electronics and programming; and that’s the only relevance of the technology.”
A working computer model of a cognitive "engine" would be a technical artifact only incidentally, presumably the way you and I are bags of meat only incidentally, in the sense that you can trade out significant portions of my meat body for artifact substitutes (my heart, my liver) without changing my essential nature. In other words, Haugeland draws a line in the sand: if the machine is an artifact then we treat it as an artifact, and if the machine is a mind then we treat it as a mind, and its status as an artifact becomes irrelevant. I'll call this the technological irrelevancy thesis.

If we take the possibility of artificial intelligence seriously, the technological irrelevancy thesis is difficult to square with the dual natures account of artifacts, since it suggests that some artifacts (the mindlike ones) might change their nature so as to no longer be essentially artifacts but instead essentially minds. The dual natures account gives no suggestion of how or why this might be the case.

There's lots of careful work to do to put together the pieces necessary to make a convincing argument that the dual natures account is inadequate. We're now at a place where I can introduce some terms I'll use in a technical sense, so that I can make the thesis explicit and lay out the structure of the argument.

artifact: any product of human construction (including nonfunctional products, like art, waste, atmospheric carbon, etc).

machine: any functional artifact (cars, hammers, bridges, etc).

tool: any functional artifact whose functional character depends on human mental activity.

The dual natures account of artifacts can easily accept the claim that some objects of human construction are nonfunctional, like human waste, which is merely an artifactual byproduct of human activity and does not typically itself serve an instrumental function. However, the dual natures account insists that all machines are tools, in the sense given above. In other words, all functional artifacts have their function in virtue of the activity of some human mind.

My thesis is that this claim, that all machines are tools, is false, and that the dual natures account of artifacts is therefore, if not outright false, at least incomplete and in need of revision. Instead, I will argue that some machines have functional characters that cannot be explained in terms of their dependence on any human mind, and therefore cannot be considered tools in the sense the dual natures account requires. I call these artifacts "machine participants", and argue that they provide a counterexample to the dual natures account.

My strategy is to discuss in turn the various ways in which artifacts might depend on a mind for their functional character. There are two primary ways: either they are produced with explicit functional intentions, or they are used with specific functional intentions that might not have been explicit during their production. I call these the design and use relationships, respectively. Over the next few posts I'll look at these relationships in some detail, and argue that neither can provide an adequate account of machine participants. I'll also argue that the classic artificial intelligence debates were too exclusively focused on the small subset of machine participants that structurally mirror or reproduce some important feature of human minds. These debates therefore largely ignored the impact that semi-autonomous, socially embedded machine agents might have on human social and mental activity, and have left us rather dramatically underequipped to handle the increasingly complex challenges that machine participants present in our current age.

My goal is to correct this deficiency by reviving interest in a discussion of machine participants as an important class of artifacts whose natures are unlike those typically assigned to artifacts on the dual natures account. My position is hardly unique; in fact, I'll provide an interpretation of Turing's test that takes machine participation to be a central concern in his discussion of "thinking machines". But I want to take this slowly and systematically, because in the process we'll be constructing a view of artifacts that might stand as an alternative to the dual natures account, and help us better understand machines, minds, and the environments we share.
