
Episteme
A Journal of Undergraduate Philosophy

ep•i•ste•me \ep' i ste' mé\ n. [Gk. epistém(é)]: knowledge; specif., intellectually certain knowledge

Volume XV • September 2004

Denison University, Granville, Ohio



Episteme is published under the auspices of the Denison University Department of Philosophy, Granville, Ohio. ISSN 1542-7072.


Editors: Andrew Hupp, Matthew Tipping
Assistant Editors: Jason Stotts, Marc Anderson
Editorial Board: Simon Kasper, Kevin Connor, Robert Wyllie
Faculty Advisor: Mark Moller

Episteme is published annually by a staff of undergraduate philosophy students at Denison University. Please send all inquiries to: The Editors, Episteme, Department of Philosophy, Blair Knapp Hall, Denison University, Granville, Ohio 43023.

Episteme aims to recognize and encourage excellence in undergraduate philosophy by providing examples of some of the best work currently being done in undergraduate philosophy programs around the world. Episteme intends to offer undergraduates their first opportunity to publish philosophical work. It is our hope that the journal will help stimulate philosophical dialogue and inquiry among students and faculty at colleges and universities. Episteme will consider papers written by undergraduate students in any area of philosophy; throughout our history we have published papers on a wide array of thinkers and topics, ranging from Ancient to Contemporary, and philosophical traditions including Analytic, Continental, and Eastern.

Submissions should not exceed 4,000 words. All papers undergo a process of blind review by the editorial staff and are evaluated according to the following criteria: quality of research, depth of philosophical inquiry, creativity, original insight, and clarity. Final selections are made by consensus of the editors and the editorial board. Please provide three double-spaced paper copies of each submission and a cover sheet including: author's name, mailing address (current and permanent), email address, telephone number, college or university name, and title of submission, as well as one (electronic) copy formatted for Microsoft Word on a CD or a 3.5″ disk. The deadline for submissions for Volume XVI is February 15, 2005.


Episteme: A Journal of Undergraduate Philosophy
Volume XV • September 2004

CONTENTS

Statement of Purpose and Editorial Board

Table of Contents

Kant's Proof of a Universal Principle of Causality: A Transcendental Idealist's Reply to Hume
Reza Mahmoodshahi, Cornell University

Afraid of the Dark: Nagel and Rationalizing the Fear of Death
Jennifer Lunsford, Hartwick College

The Ghost is The Machine: A Defense of the Possibility of Artificial Intelligence
Matt Carlson, Oberlin College

Functionalism and Artificial Intelligence
Kevin Connor, Denison University

Call For Papers, Vol. XVI (2005)

The editors express sincere appreciation to the Denison University Research Foundation, the Denison Honors Program, Pat Davis, and Faculty Advisor Mark Moller for their assistance in making the publication of this journal possible. We extend special gratitude to the Philosophy Department Faculty: Barbara Fultner, David Goldblatt, Tony Lisska, Jonathan Maskit, Mark Moller, Ronald E. Santoni, and Steven Vogel for their support.


Kant’s Proof of a Universal Principle of Causality: A Transcendental Idealist’s Reply to Hume

REZA MAHMOODSHAHI

In his famous dictum, Lord Russell remarked: "The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm."1 Russell took the principle of causality to be entirely incoherent, and it's no wonder: since Hume, philosophers have thought of 'causality' as a metaphysically dubious concept, one which purports a mysterious necessary connection between an event A and its respective effect B. Hume's momentous critique of the rationalist principle spawned a contemporary debate, one which undoubtedly motivated the entire Kantian enterprise, but one to which Kant also directly contributed in the Second Analogy of the Transcendental Analytic. In the introduction to the Prolegomena, Kant summarized Hume's accomplishment: "he proved incontrovertibly that it is entirely impossible for reason to think such a combination a priori and from concepts, for such a combination contains necessity; but it absolutely cannot be conceived why, because something is, something else must also necessarily be, and thus how the concept of such a connection can be introduced a priori" (4:257). Hume demonstrated that the rationalist a priori principle of causality is groundless, for "when we look about us towards external objects, and consider the operation of causes, we are never able, in a single instance, to discover any power or necessary connexion; any quality, which binds the effect to the cause."2 Causation, for Hume, has mere inductive status; as such, it is not determinate and only succeeds in establishing a contingent connection between two events. The occurrence of an event A immediately and regularly followed by an event B is not an instantiation of the rationalist notion of necessary connection; rather, the mistaken construal of A and B's close arrangement as a necessary one is a consequence of mere habit of mind brought about by the constant 'conjunction' of the two events in experience.3 In short, event A does not cause B, but merely precedes it in occasion.



Kant thought that the only way to vindicate any principle of causality was to abandon attempts to derive its necessity through experiential grounds; "it must either be grounded completely a priori in the understanding or be entirely abandoned as a mere chimera" (B123). As Hume demonstrated, the explanatory efficacy of experience is necessarily limited to the observation of customary occurrences, through which at best I might be able to affirm that in all formerly observed instances of A, B subsequently follows. Such grounds fall short of what's needed, that is, some grounds through which to derive a necessary law to which all future, hitherto unknown, experience must conform. In recognizing that the objective reality of an a priori principle of causality can only be established through a priori means, Kant's reply to Hume must therefore be understood as an attempt to positively establish the concept through an appeal to the understanding, where "the effect is not merely joined to the cause, but rather is posited through it and results from it" in accordance with a universal rule (B124).

In the Transcendental Analytic, Kant treats the understanding as the source of a priori concepts, which, along with the forms of intuition, give rise to a priori cognition. Kant derives the pure concepts of the understanding, or the categories, from twelve logical functions or forms of judgment. These twelve logical functions are supposed to serve as 'clues' to the corresponding ways in which we form concepts of objects. On the supposition that the "understanding is completely exhausted and its capacity entirely measured by these [logical] functions" (B107), Kant derives his Table of Categories: twelve categories for conceiving of the quantity, quality, relation, and modality of objects (B106). Kant goes on to argue in the transcendental deduction that all twelve pure concepts of the understanding apply universally and necessarily to the objects of experience. His argument here relies on the "transcendental unity of apperception": a single unitary consciousness or continuous string of experiences is possible if and only if our intuitions, procured through the sensibility, are synthesized via thought through the categories so as to present us with the objects of experience.



The application of the categories to what we might call our 'sense-data' is a necessary condition for the representation of the objects of experience. In the second analogy, the category of interest, derived from the hypothetical form of judgment, purports to explain causal relations and dependencies (B106) amongst the objects of experience, for "only thereby can I be justified in saying of the appearance itself, and not merely of my [own subjective] apprehension, that an [objective] sequence is to be encountered in it…" (B238).

SUBJECTIVE-OBJECTIVE SUCCESSION

The "Analogies of Experience", of which the Second Analogy is a part, concern the class of categories Kant calls relations. The relational category of causality, once applied to what's given to us in space and time, necessarily grounds "the real upon which, whenever it is posited, something else always follows" (B183). The argument for causality relies on a distinction between an objective and a subjective succession of representations, since Kant takes judgments concerning the objective alterations of the states of a substance to be justified if and only if every objective alteration behaves according to a necessary rule of succession, viz. causality.

The analogies of experience, broadly speaking, rely on two assumptions: (i) the unity of apperception and (ii) the application of schematized categories. Again, "the unity of apperception" requires the necessary connection of perceptions and the synthetic unity of appearances in a single time. This ensures one, and not many, temporal intervals. The second assumption arises out of the need to place events along a temporal interval despite an inability to perceive time in itself. Time understood in abstraction from its phenomenal content tells us that we must pass through T1 before we reach T2. We cannot experience T1 after or at the same time as T2. It is through this trivial precept of time-relations that we avoid the contradictory notion of T1 as both present and future. That is, T1, which is prior to T2, cannot be both simultaneous and subsequent to T2, for "successive periods of time constitute a series in which no one period can bear the same relation to that which precedes and that which follows."4



Accordingly, objective time-relations are of two sorts: successive and simultaneous (co-existent). The Second Analogy turns to the successive order of our subjective perceptions and asks whether these successive perceptions of the states of a substance could have been ordered differently. To put it more precisely, given that private perceptions of the objects of experience constitute a successive sequence, are there sequences of perceptions such that the temporal-order is irreversible? Kant's thought is that if the temporal-order of a sequence of perceptions is irreversible (and certain other conditions hold), then our objective experience is possible only through the application of an a priori concept of the understanding. In other words, our experience of objective events presupposes the application of the causal category. Alternatively, if our apprehension of the manifold yields a sequence of perceptions such that the temporal-order is reversible, then in virtue of the reversibility of the subjective succession of representations, we know that no objective event has occurred. The absence of an objective event implies an indeterminate, wholly subjective temporal-order. An object that is not successive in itself is apprehended in some unique temporal order merely because our apprehension of the manifold of appearances is always successive (B234). In the absence of an objective event, we know that the states of the substance itself are co-existent; though our perceptions of it might occur in some other temporal order, such an order is contingent upon our assorted perceptual freedoms, e.g. scanning left-to-right, right-to-left, top-to-bottom, and not determined by succession in the object itself.

"Thus, e.g., the apprehension of the manifold in the appearance of a house that stands before me is successive. Now the question is whether the manifold of this house itself is also successive, which certainly no one will concede" (B235). Let's call our perception of the roof of a house A_R and our perception of the doorway B_R, and let's assume that A_R and B_R are independently perceptible. The house is meant to exemplify an object in which A and B do not succeed one another. Rather, they are co-existent, since it is possible to experience either A_R or B_R prior to the other.



A_R and B_R possess what Strawson calls "order-indifference"5, in view of the fact that [A_R B_R]-irreversibility does not hold. To use Beck's terminology6, [A_R B_R] does not imply the objective event [AB], which symbolizes a state A in an object which precedes a state B in an object. Nothing has happened; no objective event has occurred; no state has come to be in a substance that formerly was not (B237).

Kant contrasts this sequence of successive perceptions of a house, which does not constitute an objective event (given that the manifold is not apprehended in a necessary order), with successive perceptions of a ship driven downstream. A moving ship is meant to serve as an obvious example of a sequence of successive perceptions that lacks "order-indifference", and hence constitutes an objective event. "My perception of its position downstream follows the perception of its position upstream, and it is impossible that in the apprehension of this appearance the ship should first be perceived downstream and afterwards upstream" (B237). The subject's various perceptual freedoms, e.g. scanning left-to-right, right-to-left, top-to-bottom, etc., have no bearing on the temporal-order of the successive perceptions; the order is objectively determined. Let's call our perception of the ship upstream A_R and our perception of the ship downstream B_R. As a result of the successiveness of the object itself, it is not possible to view B_R prior to A_R; all subjects necessarily apprehend A_R prior to B_R, i.e. [A_R B_R]-irreversibility holds. Apprehension is "bound to" the order of the sequence of perceptions.

Causality figures into Kant's objective-subjective distinction through the claim that a subject's conception of an objective event, i.e. [AB], necessitates or presupposes the application of a causal principle to the relevant objects of perception. In the absence of such a principle, we'd lack the ability to comprehend a determinate, necessary temporal-ordering. The successive perceptions of an objective event are necessarily connected according to a rule (B238). For otherwise, "if one were to suppose that nothing preceded an occurrence that it must follow in accordance with a rule, then all sequence of perception would be determined solely in apprehension, i.e., merely subjectively, but it would not thereby be objectively determined which of the perceptions must be the preceding one and which the succeeding one" (B239).



Again, Kant's argument relies on a crucial objective-subjective distinction, since an irreversible sequence of perceptions would require that one perception succeed another in the object of experience and not merely in the subject's apprehension of the manifold of appearances. Conceived in this manner, objectivity is effectively a form of inter-subjectivity: any subject must apprehend such an irreversible sequence of perceptions in a determinate order. The understanding, according to the universal law of cause and effect, imputes a temporal order to phenomena by attributing to each phenomenon a place in a temporal interval in relation to antecedent and subsequent phenomena. In the Transcendental Deduction Kant established that we must employ concepts of objects in order to have objective experience. Here, in the Second Analogy, Kant affirms that "[we] render [our] subjective synthesis of apprehension objective only by reference to a rule in accordance with which the appearances in their succession, that is, as they happen, are determined by the preceding state" (B240).

A NON SEQUITUR OF NUMBING GROSSNESS

In the classic The Bounds of Sense, P.F. Strawson famously assessed the merits of Kant's argument: "the order of perceptions is characterized not only as necessary, but as a determined order, an order in which our apprehension is bound down, or which we are compelled to observe. These may all perhaps be admitted as legitimate ways of expressing the denial of order-indifference. But from this point the argument proceeds by a non sequitur of numbing grossness."7 As Strawson recognized, [A_R B_R]-irreversibility does not imply [AB]-irreversibility, since this would require an A-type state of substance to necessarily give way to a B-type state of substance. No such necessity has been established. We cannot infer, from the irreversibility of perceptions of the states of a substance, the irreversibility of the objects themselves.



Thus, what Lovejoy similarly deemed to be "one of the most spectacular examples of the non sequitur…to be found in the history of philosophy"8 is as follows:

1. [A_R B_R]-irreversibility → [AB]
2. [AB] → [AB]-irreversibility
3. Therefore, [A_R B_R]-irreversibility → [AB]-irreversibility

Strawson's charge denies the validity of (1) and a fortiori the validity of (2), which together amount to the implausible claim that [A_R B_R]-irreversibility → [AB]-irreversibility. The non sequitur is rooted in Kant's failure to account for two conditions that must be satisfied if [A_R B_R]-irreversibility is to imply [AB]-irreversibility. The first of these must be satisfied in order to know simply whether an objective event has occurred. Recall Kant's example of the house, where A_R (the roof) co-exists with B_R (the doorway). The principle of opposites or contraries, a metaphysical offshoot of the principle of non-contradiction, implies that incompatible conditions cannot co-exist. A static state of substance cannot logically suffer contrary things at the same time in the same part of itself.9 A house's roof and doorway certainly are not incompatible states of a substance, and as such, they are co-existent. Alternatively, Kant's example of a boat being driven downstream satisfies the non-coexistence condition, as it cannot be both upstream and downstream (at the same time) in relation to some point along the river. (Hereinafter, [AB] symbolizes an objective event, i.e. an objective succession in the substance itself; [A_R B_R] symbolizes our subjective representations of the states of substance. This notation is borrowed from Lewis White Beck; see references.) Therefore, at the very least, we know that the movement of the boat constitutes an objective event, but this does not tell us whether [AB] or [BA] occurs. If we suppose non-coexistence, premise (1) should be reformulated as:

i. [A_R B_R]-irreversibility → [AB] or [BA] (given non-coexistence)

To know that [AB] and not-[BA] has occurred, we must know that perceptual isomorphism, i.e. "the condition that there be no relevant difference in the modes of causal dependence of A_R on A and B_R on B", holds. Perceptual isomorphism requires that the causal process that connects A with its perceptual effect A_R occur prior to the causal process that connects B with its perceptual effect B_R.



There are a number of ways perceptual isomorphism can fail to hold. "A cunning arrangement of mirrors, designed to reflect some of the light over large distances before it reached my eyes might ensure that I saw later events before the earlier."10 Or, to give a more concrete example, given that light travels faster than sound, we might see Cornell University's McGraw clock tower strike midnight before we hear its bells chime, despite the fact that McGraw strikes and its bells begin to ring at exactly the same time, i.e. midnight. Nonetheless, if this condition holds, A_R will necessarily precede B_R; that is, the objective event [AB] will compel us to observe A_R and B_R in one and only one order, viz., [A_R B_R]. Therefore:

ii. [AB] → [A_R B_R]-irreversibility (given perceptual isomorphism)

In light of (i) and (ii), Kant's causal schema, i.e. [AB]-irreversibility, derived in premise (3), is valid if and only if we know that A and B are not co-existent and perceptual isomorphism holds. If we know non-coexistence, as we do in Kant's own boat example, the crux of Strawson's objection has to do with the invalid move from the plausible objective temporal claim that B succeeds A in the object, i.e. [AB], to the objective causal schema, i.e. [AB]-irreversibility, which makes the stronger claim that A never succeeds B in the object, i.e. never-[BA]. To make the move from (1) to (2), we must know or have sufficient reason to believe that perceptual isomorphism holds, but to know this, we must know or have sufficient reason to believe [AB]-irreversibility. Alas, this is the very causal schema Kant is seeking!
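To see the structure of the objection at a glance, the premises and conditions above can be restated schematically. The notation is mine (writing Irr(X) for the irreversibility of a sequence X); it adds nothing beyond (1)-(3) and (i)-(ii):

\[
\begin{aligned}
&\text{(1) } \mathrm{Irr}(A_R B_R) \rightarrow [AB] \\
&\text{(2) } [AB] \rightarrow \mathrm{Irr}(AB) \\
&\text{(3) } \therefore\ \mathrm{Irr}(A_R B_R) \rightarrow \mathrm{Irr}(AB) \\
&\text{(i) } \mathrm{Irr}(A_R B_R) \rightarrow [AB] \vee [BA] \quad \text{(given non-coexistence)} \\
&\text{(ii) } [AB] \rightarrow \mathrm{Irr}(A_R B_R) \quad \text{(given perceptual isomorphism)}
\end{aligned}
\]

The repaired inference from Irr(A_R B_R) to Irr(AB) thus goes through only given knowledge of non-coexistence and of perceptual isomorphism, and knowledge of the latter presupposes Irr(AB) itself: exactly the circle just described.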



LEWIS WHITE BECK: SAVIOR OF THE SECOND ANALOGY?

Now, there are some who would like to save Kant from Strawson by claiming that a general causal law is the only thing that could ground objective succession; they try to avoid the non sequitur by avoiding appeals to particular causal laws. They argue that Kant is setting out to establish a universal principle of causality, a principle he treats as distinct from any empirical instantiations that might ground a causal connection between a particular A and a particular B. Although Kant's proof aims to provide the a priori basis for the relationship between successive objective states of substance, he says nothing about any particular causal law or what might constitute a proper antecedent condition. As he puts it himself, "there are thus certain laws, in fact a priori laws, that first make a nature possible. Empirical laws can obtain, and be discovered, only by means of experience, and indeed in virtue of these original laws through which experience itself first becomes possible" (B263). Yet while a transcendental principle of the understanding is established entirely independent of all experience, particular causal laws are not solely derived through empirical means, since they are "grounded in or made possible by [a] transcendental principle of the understanding."11 Hence, a defense against the classical charge of non sequitur that relies on a strong distinction between the transcendental principle and its empirical instantiations is unavailable. A particular causal law is necessarily subsumed under the transcendental principle; consequently, the separation is not sufficiently strong. The two are not logically exclusive, as they must be if the proof is to survive Strawson's charge. It therefore seems that we must look elsewhere for an adequate defense of Kant.

If Strawson is correct to interpret Kant as inferring as he takes him to infer, then the charge of non sequitur is fitting. There's no doubt that simply inferring [AB]-irreversibility from [A_R B_R]-irreversibility would qualify as a non sequitur, but as Lewis White Beck argues in his defense of Kant's proof, this is not Kant's inference. In a short essay entitled "A Non Sequitur of Numbing Grossness?", Beck interprets Kant as follows12:

1. [A_R B_R]-irreversibility → [AB]-irreversibility if and only if (i) A and B are not coexistent and (ii) perceptual isomorphism does not fail.



2. To know [AB], given [A_R B_R]-irreversibility, requires: (i) knowledge that A and B are opposite states of a substance; and (ii) knowledge of [AB]-irreversibility to ensure that perceptual isomorphism does not fail.
3. Knowledge of 2(i) is sufficient to know that A and B are not coexistent, i.e. there is an objective event, but knowledge of 2(i) is not sufficient to know whether [AB] or [BA] has occurred.
4. In virtue of Hume's conception of causality, I can know that [AB] occurs.
5. If I know [AB], then I know not-[BA], which implies knowledge of [AB]-irreversibility.
6. Given knowledge of [AB]-irreversibility, I know perceptual isomorphism does not fail, in virtue of 2(ii).
7. [AB]-irreversibility is the schema of causation.
8. Therefore, to know that [AB] occurs, I must know that A contains the causal condition of B.

Beck's interpretation differs from Strawson's in that premise (2) attributes to Kant's proof recognition of the two conditions outlined above. Beck takes Kant to acknowledge these conditions at B234: "The objective relation of appearances [that is, of A and B] that follow upon one another is not to be determined through mere perception [that is, from the sequential relation of A_R and B_R]."13

However, one might be skeptical of Beck's defense. Premise (4), in particular, seems problematic, since to know perceptual isomorphism holds, I must know [AB]-irreversibility, but to know this, I must know that [AB] and not-[BA] has occurred. But can I know [AB] has occurred from mere experience? It appears, at first glance, that Beck is begging the question, since all that experience can offer is knowledge of [A_R B_R]. And this would be a valid objection to Beck, as Kant is not entitled to (4) insofar as his argument is a general proof of the universal principle of causality. But as Beck points out, insofar as the Second Analogy is meant to serve as a reply to Hume, Kant is entitled to claim he knows [AB] occurred, "for Hume knows [AB] but has skeptical doubts about [AB]-necessarily."14



Meaning, if we're to grant Kant knowledge of [AB], then perceptual isomorphism holds, thereby yielding sufficient grounds for inferring [AB]-irreversibility from [A_R B_R]-irreversibility. If we treat the Second Analogy as a direct reply to Hume, in which case Kant can make use of Hume's own assumptions, Beck would have us believe that Strawson's charge is misplaced.

But what did Hume mean when he said that we can know [AB]? In anticipation of Kant's later analytic/synthetic distinction, Hume maintained a two-pronged conception of reason, where on the one hand, it served to discover the pure relations between ideas, while on the other, it served to discover matters of fact in sensory experience.15 Hume's epistemic criterion therefore says that statements of relations of ideas are 'either intuitively or demonstratively certain', where by 'certain' Hume means that we are justified 'by the mere operation of thought' in not questioning a statement's truth.16 Statements about matters of fact, on the other hand, depend on evidence gained through experience. Accordingly, Hume rejects 'obscure and uncertain'17 metaphysical concepts such as power, force, or necessary connection, since they don't fall under either function of reason:

Let an object be presented to a man of ever so strong natural reason and abilities; if that object be entirely new to him, he will not be able, by the most accurate examination of its sensible qualities, to discover any of its causes or effects.18

So, then, how can we know or have sufficient reason to believe that [AB] occurs within the Humean architectonic? To believe, says Hume, is simply to judge a proposition to be true. Causal inference is simply the product of belief in cause-effect relations:

Belief is nothing but a more vivid, lively, forcible, firm, steady conception of an object, than what the imagination is ever able to attain. This variety of terms, which may seem so unphilosophical, is intended only to express that act of mind, which renders realities, or what is taken for such, more present to us than fictions.19



And so, while our causal inferences are not justified, we nonetheless come to believe them, and in some cases, even come to know them. Once Kant knows [AB] has occurred and not [BA], he can then justifiably know [AB]-irreversibility occurs. By falling back on his grand doctrine of transcendental idealism, it then follows that A must contain the causal conditions of B, for otherwise, Kant would argue, there'd be no grounds in the understanding for experiencing the objective event [AB]-irreversibly. To know any such objective event necessitates the application of the causal category, since the effect, i.e. B, is joined to the cause, i.e. A, in our understanding by thinking through the causal category.

But one is now stricken by the utter dependence of the second analogy on Kant's underlying doctrine of transcendental idealism. If Kant intends for [AB]-irreversibility to serve as the causal schema, then it can only do so if we presuppose Kant's conception of the world of appearances as given to us in the sensibility and brought under the categories. For otherwise, we'd have yet another whopping non sequitur: it doesn't follow from Kant's premises in the second analogy that we must apply the pure concepts of the understanding to the manifold of appearances. We needn't, at least not in virtue of Kant's objective/subjective distinction, posit a causal law. We might very well posit some other doctrine to account for irreversible objective succession in objects. We might be sympathizers of the early Cartesian Nicolas Malebranche, and account for irreversible objective succession by attributing it to God's will. If we obtain ideas of external things by viewing them within God himself, then there's no need (nor are we justified) to treat [AB]-irreversibility as the causal schema. Again, if we abandon the force of transcendental idealism, Kant's reply to Hume fails. To establish the principle of causality, we must read the second analogy through a transcendental idealist's spectacles, since once those are removed, a Malebranchian theory will do just as well. Kant, of course, wouldn't have it any other way.



Notes

1. Russell, 1.
2. Hume, 136.
3. Hume, 141.
4. Ewing, 75-76.
5. Strawson, 133.
6. Beck, "A Non Sequitur of Numbing Grossness?"
7. Strawson, 137.
8. Lovejoy, 303.
9. See Plato, Republic, Book IV.
10. Wilkerson, 80.
11. Friedman, 172.
12. My reconstruction here discards Beck's first premise and fills in premise 5.
13. Beck, "A Non Sequitur of Numbing Grossness?", quoted with Beck's addendums, 149.
14. Beck, "A Non Sequitur of Numbing Grossness?", 150.
15. Hume, 14.
16. Hume, 24.
17. Hume, 135.
18. Hume, 4.6.
19. Hume, 5.12.

Bibliography

Beck, Lewis White. "A Non Sequitur of Numbing Grossness?" Kant-Studien 67 (1976); reprinted in Essays on Kant and Hume (New Haven: Yale University Press, 1978).
Beck, Lewis White. "Six Short Pieces on the Second Analogy of Experience," in Essays on Kant and Hume (New Haven: Yale University Press, 1978).
Clatterbaugh, Kenneth. The Causation Debate in Modern Philosophy (New York: Routledge, 1999).
Ewing, A.C. Kant's Treatment of Causality (London: 1924).



Friedman, Michael. "Causal Laws and the Foundations of Natural Science," in The Cambridge Companion to Kant, ed. Paul Guyer (Cambridge University Press, 1992).
Hume, David. "The Idea of Necessary Connexion," An Enquiry Concerning Human Understanding (Oxford University Press; New Ed edition, 1999).
Lovejoy, Arthur O. "On Kant's Proof of the Causal Principle," Archiv für Geschichte der Philosophie (1906), p. 303.
Russell, Bertrand. "On the Notion of Cause," Proceedings of the Aristotelian Society 13 (1913), pp. 1-26.
Strawson, P.F. The Bounds of Sense (London, 1966).
Van Cleve, James. "Another Volley at Kant's Reply to Hume," in Kant on Causality, Freedom, and Objectivity, ed. William Harper and Ralf Meerbote (U. of Minnesota P., 1984).
Wilkerson, T.E. Kant's Critique of Pure Reason (London: Oxford University Press, 1976).

Translation: The Cambridge Edition of the Works of Immanuel Kant, translated and edited by Paul Guyer and Allen W. Wood (Cambridge University Press; New Ed edition, 1995).


Afraid of the Dark: Nagel and Rationalizing the Fear of Death

JENNIFER LUNSFORD

Thomas Nagel, in his article "Death" (1994), sets out to examine what it is about death that a person finds so objectionable. He begins by assigning value statements to life and death, those being good and evil respectively, and determines that death is no evil for the person who dies and therefore is nothing to fear. He contends that what one objects to when thinking about "death" is not death itself, but rather the loss of life. In a short paragraph, with an almost dismissive tone, Nagel touches on the idea that many people fear death because of a misunderstanding of what it is "like" to be dead, in that many people view death as a state, where Nagel, like Epicurus, sees death as a non-state. This is an idea very closely related to Nagel's inquiry, yet he gives it short shrift. In the course of this paper I examine this neglected concept of a "misunderstanding" of the experience of death, showing that this misunderstanding is in fact a rational, if not the most fundamental, cause of our fear of death. Throughout, I draw on both Nagel and Green's (1982) response to Nagel.

To understand the position being taken by myself and that being held by Nagel and Green, we must first define what we mean by death and show how it is distinct from dying. Both Nagel and Green use the Epicurean idea of death, that being death as the end of existence. When one dies, they no longer are; they cease to be. Death is a non-state where one does not exist. There is no afterlife, no heaven or hell, not even a mind trapped in a decaying corpse in a box. When one dies, s/he ends. Kaput! However, this non-state of death is different from the experience or state of dying. For when one is dying they are still alive, no matter how close to death s/he is. Life and death are distinct opposites, like black and white: either one exists or one does not exist. But when dying, one is still existing and experiencing and feeling pleasure and pain. Essentially then, the difference is that dying is a state while death is a non-state.



Nagel begins his article by trying to discover what is "evil" in or about death.1 Note, he is examining death, not dying. He begins this way because he feels that if something is being feared, it must be because there is something evil or objectionable about it. For instance, one may fear bees because s/he is allergic to them and being in the presence of a bee could cause extreme displeasure. Thus, according to Nagel, this possibility for displeasure is what is "evil" about the bee. Therefore, it is rational for this person to fear that which may cause him/her evil, that being the bee. But what is evil about death? What in death, a non-state, can cause displeasure to the person who is dead to the point where s/he would be justified in fearing it? In death one cannot experience or opine about things. One is capable neither of feeling displeasure nor of having an experience which could cause displeasure. Since there is nothing for the dead person to feel or experience that would be in any way evil, then there is nothing, according to Nagel, for him/her to fear. If one does not exist then no harm can come to him/her. Why would one fear something that could not cause him/her harm? (Nagel 1994)

Nagel comes to the conclusion that it is not death, the non-state, that one objects to, but rather the deprivation of life. He uses the example of non-existence prior to birth. One does not typically object to his/her not existing before birth, but one does object to not existing after life. This is because before birth nothing was lost, but after death there is the deprivation of life. Because there is something in life to compare to non-existence, death becomes objectionable, because existing is clearly better than not existing. He points out that when value is attached to life experiences, such as "eating is good", "jogging is bad", and the bad outweighs the good, one would still choose to live for the small amount of good s/he was experiencing, because the alternative would be no experience at all. The idea of ceasing to experience is that which is objectionable for Nagel, not the prospect of nonexistence, because there would be no subject to opine about that state (or rather, non-state). (Nagel 1994)

It is important to distinguish here that the objection to the loss of life is still an objection related to death, not dying.



Nagel is not saying that one objects to the process or experience of losing life, such as how one might object to being suffocated as a process for losing life. He is saying that, since death is the absence of life, and being in the dead non-state (in as much as a subject or object can be in a non-state) deprives someone of life, then that deprivation of life is that which is objectionable. O.H. Green argues that since being dead deprives someone of life, then death and the loss of life are the same. (Green, 1982) Nagel rejects this notion, arguing that the loss of life is a mere side-effect or by-product of death. It just so happens that one must lose life to be dead, in the same way that one must be injected with a needle in order to get a vaccination. One may fear vaccinations because of the pain involved in the shot, but it is not the vaccination that s/he fears but rather the pain of being injected. The pain is a by-product of the injection, and the injection is only necessary as a vehicle for the vaccination. The loss of life is a by-product of becoming dead (note becoming dead, not dying; becoming dead is the disappearance from the live state, the ceasing to exist), and becoming dead is what must happen in order to be dead.

Green, as I've already stated, disagrees with Nagel on this point. He argues that the fear of the loss of life, which Nagel accepts as rational, translates to a fear of death. Green begins also with the idea of good and evil as a way to examine the fear of death. He embraces the Epicurean notion, as does Nagel, that death is not an evil for the one who dies. He argues that there are two kinds of good and evil, subjective and objective. Subjective good and evil require consciousness of the good and evil. Because the dead are not conscious, clearly death is not a subjective evil. Objective good and evil are things which temporally impede normal function. It can be said that death is the ultimate impediment of normal function and therefore must be an objective evil. But Green argues that objective good and evil must have a spatio-temporal subject to affect, and since death deprives one of his/her status as a subject, there is nothing for the objective good or evil to affect. But death's status as something not good or evil does not mean it is not something to fear. (Green 99-105)



Green defines fear as "an emotional response to expected disutility under conditions of subjective uncertainty." (Green 105) Since death is the ultimate expected disutility, and certainly it can be said that there is a subjective uncertainty surrounding death and one's meditations on death, then by Green's definition death is most definitely something to fear. But, he notes, the fear of death is rational only as the desirability to live, not as the undesirability to die. He argues that men fear not living longer, and since death is that which causes someone to not live longer, then it is death that one fears, not just the deprivation of life. It is only when one examines death and its relation to life, discovering that death is the end of life, that one begins to fear this thing called "death". (Green 105)

Both Nagel and Green discuss the experience of death as a reflection of the loss of life. The only difference is that Green argues that the two are the same. Yet I feel Green's overall argument is weak and that he occasionally misinterprets Nagel, though he presents an interesting, methodical approach to defending the Epicurean good and evil argument. His article actually doesn't even deal with rationalizing the fear of death until the last page. Because of the compelling arguments presented by Nagel and Green, I find no justifiable way to argue that death, in the Epicurean sense, is an evil. But I do not see Green's argument, that it is rational to fear death solely because it is a loss of life, to be sufficient. I am inclined to agree with Nagel that the loss of life is merely a by-product of death, so the two cannot be equated. I am now charged with the lofty task of justifying the rationale behind fearing death. The key to this is in the Nagel passage I mentioned earlier, where he dismisses the notion of fearing death due to a misunderstanding of it as a state. Because I will be focusing so closely on this passage, I feel it is necessary to quote it:

The point that death is not regarded as an unfortunate state enables us to refute a curious but very common suggestion about the origin of the fear of death. It is often said that those who object to death have made the mistake of trying to imagine what it is like to be dead.



It is alleged that the failure to realize that this task is logically impossible (for the banal reason that there is nothing to imagine) leads to the conviction that death is a mysterious and therefore terrifying prospective state. But this diagnosis is evidently false, for it is just as impossible to imagine being totally unconscious as to imagine being dead…Yet people who are averse to death are not usually averse to unconsciousness (so long as it does not entail a substantial cut in the total duration of waking life). (Nagel 23)

Here Nagel acknowledges the "common suggestion" that many fear death because they cannot logically comprehend what it is like to be dead. One is incapable even of comprehending the reality of unconsciousness. Thus, when thinking about what it would be like to be unconscious, what s/he is really doing is imagining what it would be like to be conscious while one's body was in a state similar to that of an unconscious person. Perhaps it would be that the mind would function but the body would be in paralysis, without any functioning sensing faculties, thereby trapping the mind in a dark box for all of eternity. This prospect alone is terrifying, yet the uncertainty and mystery of the myriad of possibilities after death amplifies that terror.

For Nagel, this is absurd. Obviously, since death is not a state, then this idea of fearing the state of death is foolish and should be cast aside. He feels that this misunderstanding causes the common suggestion to be "evidently false" because it is "impossible to imagine being totally unconscious." (Nagel 23) One's "failure to realize that this task is logically impossible" (Nagel 23) leads them to fear death because it is mysterious. So, Nagel argues that it is because one does not realize that imagining oneself dead is logically impossible that s/he sees death as mysterious, and therefore terrifying. Nagel does not reject the notion that things that are mysterious are terrifying; rather, he feels that if one understood the logical impossibility of understanding the situation then it would no longer be mysterious.



But isn't the logical impossibility of understanding something the very root of its mystery? People tend to be at the very least wary of things they do not understand. For instance, let's say a human went to another planet, in a far distant galaxy. When she reached this planet she found that the entire civilization lived under water. She was greeted upon arrival by a creature from the planet who looked human. From her studies before arriving she knew that their physiological make-up was exactly the same as a human's. She saw no gills or breathing apparatus to aid the creature in breathing underwater. Upon asking how it was that the entire civilization breathed underwater, the creature seemed confused and said that they breathed the same way as they did on land, through their mouths and noses and into their lungs. The human could not understand this, because on earth, if you breathe water into your lungs you will drown. The creature invited the human to the capitol of the city to meet their mayor, but the only way to get there was by a long, tubular elevator that ended at the bottom of the sea. Were the human to take the elevator she would be so far below the surface that she would likely not be able to withstand the pressure, and if she found that she couldn't breathe, she would be too deep to reach the surface before the air in her lungs ran out.

It would seem that this would be a frightening concept to the human. Logically, she knows that her body is not made to breathe water, but the creatures on this planet, who are physiologically identical to humans, seem to have no problem. The human is aware that she is logically incapable of understanding how these creatures breathe underwater. But that does not make the situation any less mysterious. If anything, it makes it more mysterious. The mystery, in conjunction with the frustration of being logically incapable of understanding something, will likely result in the human being too afraid to go to the capitol building at the bottom of the sea. If the human were not aware that the situation was logically impossible to understand, then perhaps she would decide to follow the creature, since it seemed from observation that he was physiologically identical to her, and he could breathe underwater, so maybe she could too.



Perhaps there was something about the functioning of the lungs that she did not understand. The mere possibility of a logical explanation would be, at least, something. This failure to realize the logical impossibility of the situation would, in a way, provide a possible explanation for the situation. It would create a reasonable solution to help alleviate the fear of the human. So, Nagel's contention that the common suggestion is evidently false because one cannot logically comprehend true unconsciousness or death is itself evidently false, because it is this logical impossibility that causes one to fear death.

So, the fear of death comes from a fear of not understanding, a fear of the unknown. This is quite different from a fear of the loss of life. I would not even go so far as to call the objection to the loss of life a fear as much as simply an objection. The loss of life is lamentable, not terrifying. One is not frightened of not existing and not experiencing; s/he is angry and sad. It does not scare me that, once dead, I will no longer be able to eat chocolate; it depresses me, sincerely depresses me. The thing that is frightening is the unknown state that will replace the state that I know, that I live in, that allows me to eat chocolate. Not only do I not know what that state is, but I cannot even imagine what it could be. This holds regardless of whether one believes in an afterlife or not. Whether death is a non-state or some sort of metaphysical existence, it is still entirely different from anything one has ever even remotely experienced. In that we are existing, physical beings, the idea of not existing is not even within the realm of things our mind can comprehend. How can I, sitting here at my keyboard, existing, possibly understand what it would be like to not exist? It isn't like being asleep or unconscious; it is not being. The very idea violates the law of non-contradiction. I cannot be and not be; and since I have always been, and only know what it is like to be, I cannot know what it is like to not be. And if our death is something metaphysical, I cannot know that metaphysical being in the same way that I cannot know not being.



Since my pre-natal existence I have been a physical creature. I have lived and experienced the world through a body that smells and touches and sees. I have no idea what it is like to be not physical. A metaphysical existence would not be a glorified physical existence, up in the sky, sitting on a cloud, eating all that you want without gaining weight, which is a common misunderstanding of the metaphysical afterlife, as expounded by multiple religions. It would be something beyond the comprehension of my physical brain in the same way that not existing is beyond my comprehension.

Since the state or non-state of death is completely incomprehensible, it is uncomfortable to try to make sense of it. And once one realizes that it is logically impossible to make sense of, but also completely inevitable that it will occur, there is a panic and fear associated with making that leap. So, unlike fearing merely the implications of death, that being the loss of life, death itself, the experience or non-experience of death, is actually justifiably feared. Nagel contends that at the time one is dead s/he will have no opinion of it so one should not fear it, but that does not stop one from fearing it beforehand. Not all things must be feared in the moment. They can be feared in anticipation. Nagel acknowledges this and says that one fears in anticipation the loss of life. But the loss of life is not what one is fearing; lost life is merely being lamented. It is the unknown, logically incomprehensible death state (or non-state) that incites fear.

So many of life's fears are rooted in a fundamental fear of the unknown, like being afraid of the dark, or afraid of strange places and people, or of the boogieman. In childhood many fears result from fearing the unknown, because one hasn't experienced enough to know what to expect of different experiences. As we grow older and experience more, we replace magic and mystery with science and fact, and our fears are alleviated. Death is not something we can try or test and then know about. We will never have the benefit of others' experience to shepherd us through the valley of darkness. And that is what is frightening: that unknown, incomprehensible abyss beyond the light of life. In the end we are still afraid of the dark.


Notes

1. Note here that Nagel is discussing what is "wrong with death". By the term "evil" he is referring to that which causes the objection or harm of death.

Bibliography

Nagel, Thomas. "Death," in Language, Metaphysics, and Death (New York: Fordham University Press, 2nd ed., 1994), pp. 21-29.
Green, O.H. "Fear of Death," Philosophy and Phenomenological Research, Vol. 43, No. 1 (Sep. 1982), pp. 99-105.


The Ghost is The Machine: A Defense of the Possibility of Artificial Intelligence

MATT CARLSON

In "Minds, Brains and Science," John Searle attempts to show that the mind is necessarily more than simply an instantiation of a computer program. Parts of Searle's argument are quite persuasive; indeed some of his conclusions are both valid and, I believe, sound. However, his overall conclusion is somewhat misleading. The thesis that Searle refutes (strong artificial intelligence, or AI, as he terms it) is essentially the view that the brain is a sort of hardware and the mind is a sort of software. Thus, according to this thesis, if we could program the correct software, we could program a mind. By refuting this thesis, Searle seems to have refuted all claims to the possibility of artificial intelligence. However, I will attempt to show that Searle's argument misses the mark and that, while the mind is not like a program running on certain hardware, the brain is. In so doing, I will also address the adequacy of the Turing test as a test of AI, as Searle's argument does show the worth of this test to be suspect.

The Turing Test and Artificial Intelligence

The Turing test is the first scientific attempt to test the abilities of AI; that is, to determine whether or not a machine demonstrates intelligence. The test essentially works as follows: the test administrator asks a series of questions to two different interlocutors, A and B. However, A is a human being, while B is a machine. Their responses to the questions are returned as text to the test administrator. According to Turing, if a skilled administrator cannot determine which interlocutor is a human and which is a machine (based on the appropriateness of their responses), then the machine is exhibiting artificial intelligence. Now, given that the test administrator can ask anything of the machine (she could, in theory, just type gibberish), it seems that the standards for passing this test are sufficiently high. After all, it is unlikely that even a modern computer, which can execute billions of instructions per second, would be able to consistently come up with appropriate responses to the queries in a timely fashion.



Additionally, the program required to handle all of this language processing would have to be incredibly complex. All things considered, the Turing test sets a high standard for artificial intelligence.

However, the Turing test is inherently flawed in such a way that it cannot truly measure the intelligence of a machine. The Turing test is essentially a behavioral test; that is, it measures the degree to which a machine succeeds in behaving like a human being. As such, it sets itself up for relatively straightforward counterexamples. One such counterexample, borrowed from Ned Block1, runs as follows: suppose that we build a machine that is essentially an incredibly complex jukebox. For any given input, the program on this machine searches its database for the appropriate output. Now, if we could build a database of billions of input statements, each with their appropriate outputs, it is likely that this machine would pass the Turing test. However, this machine operates on the same principle as a jukebox, a machine to which we would be loath to ascribe intelligence. Thus, the Turing test, stringent as it is, is not a sufficient test by which to judge whether a machine is exhibiting intelligence. The simple fact that a machine can behave like a human in certain, limited circumstances is not sufficient to show that it is actually intelligent.

A more sweeping criticism of AI in general comes from John Searle, in the form of his famous 'Chinese Room' argument. The essential idea is this: imagine yourself (assuming that you do not understand any Chinese) in a room that has only the necessary items to create a rule-based input/output system. These items are: several baskets of funny shaped symbols (Chinese characters, unbeknownst to you), a very complex rulebook, and input and output slots. The rulebook contains rules governing which symbols to put in the output slot, given the appearance of certain symbols in the input slot, and your internal state (e.g. whether or not you are in a state of having already received a certain input).



Now, imagine that the rulebook is thorough enough so that you can always pass out an appropriate Chinese answer to an input question. This machine (you) would clearly pass the Turing test, but it would not understand Chinese. The notion of understanding is critical here, because Searle ultimately wants to claim that the Chinese room program cannot exhibit intelligence: it has only syntax (e.g. formal rules), and it cannot ascribe any meaning to the symbols that it confronts.

Like Block, Searle creates an example in which there is a machine that can pass the Turing test, but it clearly does not exhibit intelligence. However, Searle's argument cuts deeper because it seems to show that intelligence could not possibly be programmed. The essential conclusion of this argument is that a digital computer is simply a computational machine and, as such, it can only 'interpret' syntax, but not semantics. The human mind, by contrast, interprets and makes extensive use of semantic claims in addition to syntactic ones. The ability to attach meaning (semantics) to strings of data (syntax) is one of the key features of the mind. Thus, since a digital computer does not have access to semantics, it cannot be a mind, regardless of the complexity of the program that it runs. This, Searle claims, refutes the central claim of what he refers to as 'strong AI': the view that an appropriate program, with the correct inputs and outputs, constitutes a mind, regardless of the sort of hardware on which it is run (e.g. whether it is a program run by a brain or a microprocessor).

The Chinese room example certainly does provide additional reason to believe that the Turing test is not an adequate measure of machine intelligence. As Searle's example shows, a machine (or person, in this sort of scenario) could act as if they understood Chinese and thus pass the Turing test when there is clearly no such understanding present. But this is ultimately a problem for the Turing test, and not a problem for the prospects of AI.
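Block's jukebox and Searle's rulebook share a single formal architecture: a finite lookup from (internal state, input) to (output, next state), with no appeal to meaning anywhere. The toy sketch below, in Python, is purely illustrative; the table entries are invented, and a machine that passed the Turing test this way would need a table of astronomical size:

```python
# A Block-style "jukebox" conversationalist: every reply is a literal
# table lookup keyed on (internal state, input message). No meaning is
# consulted; only the shapes of the strings matter, as in Searle's rulebook.
RULEBOOK = {
    ("start", "Hello"): ("Hi there. What shall we talk about?", "chatting"),
    ("chatting", "Do you understand me?"): ("Of course I do.", "chatting"),
    ("chatting", "What did I just ask you?"): ("Whether I understand you.", "chatting"),
}

def respond(state: str, message: str) -> tuple[str, str]:
    """Return (reply, next_state) by pure lookup; the fallback row keeps
    the conversation going when a key is missing from the table."""
    return RULEBOOK.get((state, message), ("Hmm, tell me more.", state))

state = "start"
for msg in ["Hello", "Do you understand me?", "What did I just ask you?"]:
    reply, state = respond(state, msg)
    print(f"> {msg}\n{reply}")
```

Everything the machine "knows" is exhausted by the table, which is precisely why, on Block's and Searle's shared intuition, behavioral success of this kind cannot certify understanding.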



The Chinese Room and the Possibility of Artificial Intelligence

The question that Searle really wants to answer with the Chinese room argument is the following: "Is instantiating or implementing the right computer program with the right inputs and outputs, sufficient for, or constitutive of, thinking?"2 While the Chinese room argument certainly shows that implementing the right program is not constitutive of thinking, it does not show that this implementation is insufficient for thinking. Thinking is not simply running the appropriate program; rather, running such a program creates sufficient artificial brain activity to give rise to thinking.

According to Searle, all mental phenomena are caused by processes and states within the brain. Searle states this more explicitly as follows: "Mental phenomena, all mental phenomena whether conscious or unconscious, visual or auditory, pains, tickles, itches, thoughts, indeed, all of our mental life, are caused by processes going on in the brain."3 For the sake of simplicity, I will follow Searle and abbreviate this idea with the phrase 'brains cause minds.' Searle is committed to this idea because it helps him to offer a solution to the mind-body problem. This theory has the consequence that the mind is essentially a sort of internal appearance of brain functions (that is, the way that we are aware of processes in the brain). While we are not directly conscious of the workings of the brain (e.g. I am not aware of how a certain dendrite is acting), we are conscious of the output of these processes, and it is this consciousness that forms the mind. However, it is this very idea that also makes plausible a claim about the possibility of AI.

Searle claims: "It is essential to our conception of a digital computer that its operations can be specified purely formally…"4 Further, "a typical computer 'rule' will determine that when a machine is in a certain state and it has a certain symbol on its tape, then it will perform a certain operation…"5 That is, given a certain input and a certain discrete internal state, the machine will act in a certain way (that is, produce a certain output). But can't the operations of the brain be specified purely formally as well?
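Before answering, it is worth seeing how little a "purely formal" specification involves. A transition rule of the kind Searle describes can be written out in a few lines; in the sketch below the states, symbols, and rules are invented for illustration, and nothing turns on the particular choices:

```python
# A purely formal rule table: (state, symbol under head) -> (new state,
# symbol to write, head movement). Nothing refers to what the symbols
# mean; the machine acts on their shapes alone.
DELTA = {
    ("s0", "1"): ("s0", "1", +1),  # scan rightward over 1s
    ("s0", "0"): ("s1", "1", +1),  # rewrite the first 0 as a 1, then halt
}

def step(state, tape, head):
    new_state, write, move = DELTA[(state, tape[head])]
    tape[head] = write
    return new_state, head + move

tape, state, head = list("1101"), "s0", 0
while head < len(tape) and (state, tape[head]) in DELTA:
    state, head = step(state, tape, head)
print("".join(tape))  # prints "1111"
```

Every operation of the machine is fixed by such shape-governed rules; that is all "purely formal" means here.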



But can't the operations of the brain be specified purely formally as well? The brain takes a certain input (say, particular sensory data) and, given a certain internal state, produces some output, of which we are conscious in the form of an appearance or thought. The operation of the brain is thus strongly analogous to the operation of a digital computer. But, Searle claims, "Minds are semantical in the sense that they have more than a formal structure, they have a content."6 Thus, bearing in mind Searle's commitment to the idea that 'brains cause minds,' it follows that he would have to accept one of the following claims: "Since brains cause minds, and the brain has no semantic content, the mind can't have any either," or "Since brains cause minds, and the brain essentially is a digital computer, an appropriately complex digital computer can cause a mind as well." Notice that if the computer, coupled with its program, caused a mind (as opposed to simply constituting one), this mind could have semantic content in the way that our minds do. While the artificial brain is defined wholly by its syntax, the artificial mind that it causes is able to add meaning (or at least the appearance of it) to this formal structure, just as the human mind does.

Consider the first two premises that Searle employs in deriving his overall conclusion about AI. First, there is the claim that 'brains cause minds.' Second, Searle posits as a conceptual truth the claim that 'syntax is not sufficient for semantics.' Considered together, these two premises give rise to the following question: what is the source of semantics? It is relatively clear that the mind has access to semantics, but it is not so clear that the brain does. The brain can be described as having syntax: as having discrete states of molecules and electrons interacting so as to produce a mental process. These interactions are governed by formal rules (namely, the rules of biochemistry and physics). Thus, this syntax makes the brain a sort of formal system. But could the brain, a lump of wet, grey, biological matter, actually have a semantics as well? In other words, how could those discrete states of molecular and electronic interaction have a meaning in addition to their formal structure? If Searle's argument is to make sense, it would have to assume that there is some sort of meaning lodged in the particles of the brain, which seems rather difficult to believe.



It seems more plausible that the brain, like any other organ, is simply a syntactic system. Since, as Searle claims, syntax is insufficient for semantics, and the brain just is a syntactic system, it seems to follow that the mind could not have a grasp of semantics, as, according to Searle, it is completely caused by the workings of the brain. But the mind clearly does have access to semantics and meanings, so at least one of the premises considered here must be false. I believe that the false claim is the idea that syntax is insufficient for semantics. The universe is, in a sense, a rule-based system governed by syntax (e.g., the laws of physics), but semantics can be created within it (in the case of human minds). In most cases, it is true that syntax is not sufficient for semantics; one would be hard pressed to defend the claim that a stone, for example, has an understanding of meaning simply because it is governed by syntactic rules. However, I believe the solution here is that certain types of syntax, running on appropriate hardware, are sufficient for semantics. The complexity of the program running in the brain is sufficient to produce a conscious mind.

Further, Searle's argument seems to hinge on the assumption that there is only one sort of mind. Indeed, he seems to believe that a human mind and a machine mind (if one were possible) would have to be the same sort of mind. Searle summarizes the view that he criticizes, 'strong AI' as he terms it, by stating: "According to the most extreme version of this view, the brain is just a digital computer and the mind is just a computer program."7 However, Searle later claims that proponents of strong AI believe it is only a matter of time before technology develops artificial brains and minds, and that "These will be artificial brains and minds which are in every way the equivalent of human brains and minds."8 In the first case, Searle claims that strong AI holds that an artificial mind and a human mind would be identical, because they would be running the same program (albeit on different hardware). In the second case, however, strong AI merely seems to claim that an artificial mind and a human mind would be equivalent (presumably in the sense that they both exhibited the ability to think).



In his argument, Searle only concerns himself with refuting the first claim, which is a very bold claim and not easily defended. By contrast, the second claim, that a human mind and an artificial mind can be equivalent but not identical, is much more plausible. However, by refuting the first claim, Searle also seems to believe that he has refuted the second (perhaps because he does not acknowledge a difference between them). This confusion between the sorts of minds in which AI might be manifested helps Searle produce his misleading conclusion concerning the Chinese room.

The Chinese room argument is misleading because it asks us to compare the workings of an artificial brain with the workings of a human mind. The Chinese room is a completely formal, rule-bound system, as is the brain, whereas the workings of the human mind are not rule-bound. Given this unfair comparison, it is clear why it is so intuitive to claim that the Chinese room is not an example of intelligence. Brains, after all, are not intelligent, and the Chinese room is not even sufficiently complex to be a functioning brain. In short, the Chinese room does not represent a mind, which is, of course, what Searle intended to show with his example. However, this does not show that a digital computer cannot think. It only shows that a digital computer whose rules are not suitably complex to produce a mind cannot think. Thus, it is consistent to agree with parts of Searle's conclusion and still hold that AI is possible. Searle's fourth conclusion is especially interesting, as it even hints at this possibility: "For any artifact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather, the artifact would have to have powers equivalent to the powers of the human brain."9 I take it that Searle means 'equivalent' here in the sense that it functions to produce consciousness. So, the artifact would simply have to function to produce consciousness. The human brain does this, presumably, through a complex system of electrons and molecules interacting with one another. The artifact would do this via a complex system of electrons interacting with digital logic switches.



But this artifact could still be made out of anything (including 'old beer cans powered by windmills,' to borrow Searle's derisive example). Searle is right to say that the mind is not a computer program, and that a computer program cannot be a mind. However, this is not the real issue. At the heart of the notion of AI is the idea that the mind is caused by a computer program, of sorts, in the brain; thus, it is clear that a computer program can cause a mind. This mind would not necessarily have to be like the human mind (indeed, how can we even know that there is a single 'type' of human mind that this new mind could be like?). In short, the mind is not simply complex hardware running a suitably complex program, but the brain is. Thus, if all mental activity is simply the internal appearance of brain activity, an artificial brain could produce a mind in much the same way that a real brain does.

Notes

1. Block, 268-305.
2. Searle, 36.
3. Searle, 18.
4. Searle, 30.
5. Searle, 30-31.
6. Searle, 31.
7. Searle, 28.
8. Searle, 29.
9. Searle, 41.

Bibliography

Block, Ned. Readings in Philosophy of Psychology. Cambridge, Mass.: Harvard University Press, 1980.
Searle, John. Minds, Brains and Science. Cambridge, Mass.: Harvard University Press, 1984.


Functionalism and Artificial Intelligence

Kevin Connor


One of the most potentially important and least successful projects in computing during the previous half-century has been human-level artificial intelligence.1 This project has been daunting not because we do not understand the capabilities of the technology; indeed, it has been proven that all computers, though some may be faster or more efficient than others, are no more powerful in terms of the sorts of things they can compute than Alan Turing's abstract notion of a computing device, which he first presented in 1936 (Stanford). Instead, the difficulty in programming artificially intelligent machines stems in large part from our lack of understanding of how our own minds function. AI research has tended to follow our uncertain and certainly unproven models of human intelligence, and hence its failures have tended to rest upon the failures of these models. My purpose here is to illustrate the decisive failure of one of the most important of these models, and to present an alternative view that offers hope for the ultimate viability of strong AI.

In Representation and Reality, Hilary Putnam attacks what is, for computer scientists working on AI, the most promising theory of human intelligence: functionalism. Putnam's definition of the functionalist model says that "…psychological states ('believing that p,' 'desiring that p,' 'considering whether p,' etc.) are simply 'computational states' of the brain. The proper way to think of the brain is as a digital computer. Our psychology is to be described as the software of this computer, its 'functional organization'" (Putnam 73). Putnam's arguments in Representation and Reality are convincing, and I will discuss them in the first portion of this paper. In the latter portion, I will present an idea from Roger Penrose: that the promise of a model of the mind based in quantum physics may allow us to rethink the nature of the relationship between computers and the mind, providing a new research paradigm for strong AI in a world without the functionalist model.

The simplest version of functionalism that Putnam describes is known as Single Computational State functionalism.



In this conception, each possible propositional attitude is describable in terms of a single state, which remains static across all physically possible organisms ("physically possible organisms" include machines). That is, "believing that snow is white" is supposed to be the same computational state for all organisms capable of having that belief (Putnam 80). Putnam then conceives the sort of model that is necessary for an organism to function in this manner. Obviously, some sort of thinking language or "mentalese" is necessary, and also some kind of function which determines whether new sentences are sufficiently understood to be added to the language (a "c-function," for Putnam). This organism will also require what Putnam calls a "rational preference function" (Putnam 80) in order to decide how to act in any given situation, together with the c-function described above. The rational preference function would need some variables to mark the particular desires of the organism (e.g., when it is raining outside, to allow for the possibility that I am sad because I want to play baseball, or that I am happy because I am a farmer). To illustrate how this conception would work, suppose that I am such an organism and I am presented with a new propositional state, "kittens are small and fuzzy," and let us further suppose that this is a perfectly adequate definition of the essence of "kitten": that all organisms who had a complete understanding of kittens would agree that they are best described as small and fuzzy. In order to process this state, my c-function would check my degrees of understanding of "small" and "fuzzy" to see whether I know enough about the component parts of the propositional state to allow it into my thinking language. If I allow the attitude into my language, then I assign it a degree of understanding based on my degrees of understanding of its component parts, and I can now access thoughts and judgments about kittens with the aid of my rational preference function.
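The shape of this model can be sketched in a few lines of code. Everything here (the stored degrees of understanding, the admission threshold, the utilities) is an invented placeholder; Putnam specifies no such details, so this is only a caricature of the architecture being described.

```python
# A toy caricature of the Single Computational State model: a "mentalese"
# store, a c-function that admits new sentences based on how well their
# components are understood, and a rational preference function driven by
# desires ("utilities"). All values and thresholds are invented.

class Organism:
    def __init__(self):
        self.mentalese = {"small": 0.9, "fuzzy": 0.8}  # degrees of understanding
        self.utilities = {"pet the kitten": 0.7, "ignore the kitten": 0.1}

    def c_function(self, sentence, components):
        # admit the sentence only if every component is understood well enough
        degrees = [self.mentalese.get(c, 0.0) for c in components]
        if min(degrees) > 0.5:  # the threshold is an arbitrary assumption
            self.mentalese[sentence] = sum(degrees) / len(degrees)
            return True
        return False

    def rational_preference(self, options):
        # choose the desire-maximizing option among the available actions
        return max(options, key=lambda o: self.utilities.get(o, 0.0))

o = Organism()
o.c_function("kittens are small and fuzzy", ["small", "fuzzy"])  # admitted
print(o.rational_preference(["pet the kitten", "ignore the kitten"]))
```

Putnam's complaint, pursued below, is that nothing in such a machine fixes what its states mean: two organisms could run this architecture with different stored degrees and desires while we would still want to say they share the same belief.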



What Putnam finds troubling is that when we attempt to figure out meaning in this model, "all we are given to go on is the current subjective probability metric (the current degrees of confirmation), the current desires (the current 'utilities'), and the underlying c-function by which the current subjective probability metric was formed on the basis of experience" (Putnam 80). He says that at least the first two of these things might be totally different, even for meanings that we would like to say are the same, in different organisms: organisms will undoubtedly have different degrees of understanding of different sentences in the mental language, and will feel slightly different desires about those words. The result is that when you and I say "kitten," we can never mean the same thing, which is plainly unworkable. The problem is not solved even if we assume that there are sentences ("kittens are small and fuzzy" might be one of them) that are analytic and universal across all organisms. Putnam says that this is not going to work because we could not say that "small" and "fuzzy" have the same meaning for us analytically: he has shown that meanings cannot exist solely in the mind, that there is a linguistic division of meaning for words.2 If these words do not have analytic meaning for us, then it seems that we cannot come to the same analytic meaning for the whole propositional state "kittens are small and fuzzy," or, further, for any propositional state. As Putnam sums up this line of reasoning, "there is no way to identify a computational state that is the same whenever any two people believe that there are a lot of cats in the neighborhood (or whatever). Even if the two people happen to speak the same language, they may have different stereotypes of a cat, different beliefs about the nature of cats, and so on (imagine two ancient Egyptians, one of whom believes cats are divine while the other does not)" (Putnam 82).

Another form of functionalism that he briefly considers is what he calls sociofunctionalism: "Why not think of the entire society of organisms together with an appropriate part of its physical environment as analogous to a computer, and seek to describe functional relations within this larger system?" (Putnam 74) For example, the state of "thinking that there are a lot of cats in the neighborhood" may be describable in terms of the thoughts that each person in the neighborhood has about the cats; each of their individual states builds together the full functional state quoted above.



This is obviously a complicating move: we would have to draw functional relations across many different types of organisms and environments to create the full functionalist picture, which would perhaps be possible in principle. Putnam, in short, says that defining this complete system is a pipe dream. He says that when different people speak the same word, they inevitably have at least slightly different mental conceptions of that word, as with the cat example from our discussion of simple functionalism above. We need some way of arbitrating these meanings, of deciding whether a particular conception fits a criterion that he calls "reasonableness." In a society of millions of people, each with her own definition of cat, there must be some way of deciding which definitions are more correct or more complete; the "real" definition would be a synthesis of those that are most "reasonable." He explains, "…this, I have argued, would be no easier to do than to survey human nature in toto. The idea of actually constructing such a definition of synonymy or coreferentiality is totally utopian" (Putnam 75). That is, the project would involve listing uncountably many (in the mathematical sense) possible definitions in uncountably many languages; there is no reducible formula for "reasonableness." Putnam concedes that such a system may be possible in principle, noting that "Few philosophers are afraid of being utopian…" (Putnam 76). But my purpose here involves the implications of the demise of functionalism for artificial intelligence, and we need a reducible formula to program machines; the question of whether such a listing is in principle possible is moot to AI programmers.

Putnam supposes yet another way to reconceive the functionalist argument. This argument shifts the burden from computational states to computational relations, specifically equivalence relations. For example, we could try to figure out whether, when I say the word "cat" in my particular environment X and a Thai speaker says the word "meew" (which means "cat" in Thai) in her particular environment Y, we are in fact talking about the same concept.



If we can enumerate all of the physical details involved in the Thai conception of meew and the English conception of cat (admittedly a difficult project), then we can create an equivalence relation of the form "cat as used in English in this particular situation X is synonymous with meew as used in Thai in this particular situation Y." As Putnam explains, this relation (and ones like it) "…must be a predicate that a Turing machine can employ: a recursive predicate or at worst a 'trial and error' predicate" (Putnam 85). Since all computers have been proven to be as powerful as Turing machines (and therefore as each other), and recursive and "trial and error" (which we might call exponential) algorithms are computable, if slow, this gives great hope to a functionalist model of artificial intelligence.
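The distinction Putnam invokes can be sketched directly. A recursive predicate halts with its answer; a 'trial and error' predicate may revise its verdict any finite number of times, and only the verdict it eventually settles on counts. The synonymy 'evidence' below is an invented placeholder, not anything Putnam supplies.

```python
# A sketch of the two kinds of predicate Putnam mentions. The evidence
# stream (reports of whether "cat" and "meew" were used alike) is invented.

def recursive_predicate(n: int) -> bool:
    # decidable outright: halts immediately with the correct answer
    return n % 2 == 0

def trial_and_error_verdicts(evidence):
    # emit a revisable verdict on "coreferential?" after each observation;
    # the predicate's answer is whatever the verdicts eventually settle on
    matches = mismatches = 0
    for agrees in evidence:
        matches += agrees
        mismatches += not agrees
        yield matches >= mismatches

evidence = [False, True, True, True]             # hypothetical usage reports
print(list(trial_and_error_verdicts(evidence)))  # [False, True, True, True]
```

The verdict here flips once and then stabilizes; Putnam's point is that even this weaker, limit-style predicate would still have to be computable from a description of the two environments, which is the assumption he goes on to attack.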



Obviously, this argument rests on the assumption that our minds function in a way very similar to Turing machines. But Putnam does not need to refute that claim to make his objections. First, he notes that in order to make the difficult decision mentioned above, regarding whether "cat" and "meew" actually refer to the same extension, we have to know a great deal about the linguistic and environmental conventions in the two situations. Without careful consideration of how the Thai language is used, it might appear that meew refers only to "Siamese cat" (it in fact refers to all cats), as those are the only sorts of cats that one encounters in Thailand. And there are uncountably many variables like this that need to be considered, even just for our example. As Putnam says, "What is at stake…is the interpretation of the two discourses as wholes" (Putnam 86). In any given discourse, it is necessary to learn something of the discourse before we can understand its terms. One cannot know what "existentialism" means without knowing some philosophy, or what "adverb" means without knowing some English, for example. Putnam finds two critical problems for this theory relating to this idea.

Putnam imagines a situation in which there are two scientific theories from two different cultures, one from Mars and one from Venus. These theories are about the same phenomenon, and are so similar that an outside observer would regard their meanings as identical, once he had discerned that their environments were such that particular terms in the theory had identical meanings. If we are to analyze how we would make this claim about particular terms in the theories, we need to answer the question of what each term actually refers to in each culture, which will likely involve determining whether the theories are true for each culture. But if the theories are about large enough cosmological concepts, we need to know information about the whole universe before we can judge whether the theories are true, which is an awfully large reference set. As Putnam concludes, "…the assumption that in principle one can tell what is being referred to by a term used in an environment from a sufficiently complete description of that environment in terms of some standardized set of physical and computational parameters is false unless we widen the notion of the speaker's environment to include the entire physical universe" (Putnam 87). Considering the entire universe is obviously going to make the problem incomputable, which brings down this theory of functionalism as far as AI is concerned.

The second problem is related: "any theory that 'defines' coreferentiality and synonymy must, in some way, survey all possible theories" (Putnam 87). For example, there are many different theories of functionalism, some of which we have looked at thus far in this paper and some of which we have not. If we are ever to write the equivalence relation that defines how we can tell whether a particular functionalist theory, or an element of one, is synonymous with another, we have to consider not only all theories of functionalism in existence, but also all possible functionalist theories. The trouble with this is that human societies are by their nature progressive in how they conceptualize the world, so it is unclear how a human being living in any given society could account for theories that have yet to be invented. As Putnam says, "To ask a human being in a time-bound culture to survey all modes of human linguistic existence-including those that will transcend his own-is to ask for an impossible Archimedean point" (Putnam 89).



Putnam's defeat of these and other forms of functionalism, and his conclusion that our minds are not in any way reducible to machine states or functional languages, seems convincing to me. Since these computable states are precisely what traditionalist conceptions of AI require, the downfall of functionalism seems a deadly blow for the future of AI research. A new research paradigm for strong AI is sorely needed.

In Shadows of the Mind, Roger Penrose suggests how such a paradigm might be conceived. Penrose explores how the young science of quantum mechanics might be brought to bear on our conceptions of consciousness. Quantum mechanics is a reduction of empirical realities to probabilities. When we attempt to understand subatomic particles, it turns out that we cannot say as much as classical Newtonian physics says we ought to be able to about each individual particle. Indeed, basic facts about particles, like position and velocity, are inevitably altered depending on the sorts of measurements we take. In short, quantum mechanics can tell us probabilistically how a large number of particles will behave in a certain situation, but it cannot ever predict precisely what any given particle will do.

Penrose says that scientists have been generally unwilling to consider modeling the mind on a large scale using quantum mechanics. They might admit that on a small scale there may be quantum interactions taking place between the atoms of the brain, but "…it seems to be generally assumed that it is quite adequate to model the behavior of neurons themselves, and their relationships with one another, in a completely classical way" (Penrose 348). Once we have committed ourselves to modeling small parts of a system in a Newtonian fashion, accepted scientific practice necessitates that we model the system itself in a classical way as well. The result is a standard Newtonian model of brain function. Penrose argues that it may be possible to define a theory of the mind as a whole that is based on quantum physics: the whole brain itself could be described as an example of quantum coherence, which refers to "…circumstances when large numbers of particles can collectively cooperate in a single quantum state which remains essentially unentangled with its environment" (Penrose 351).



This coherence would allow particle-level quantum interactions to have an effect on a system as large and complex as the brain. If quantum coherence could be demonstrated, it could act as a bridge between the concepts of brain and mind: the chemical and physical functioning of neurons in the brain could follow a Newtonian model, while the functioning of the mind as we experience it could be explained by quantum coherence.

How might the mind exhibit this quantum coherence, then? Penrose notes that our understanding of the brain has led to a classical picture "…in which neurons and their connecting synapses seem to play a role essentially similar to those of transistors and wires (printed circuits) in the electronic computers of today" (Penrose 352). Given this understanding, we can and must use a classical computational model for this part of the structure. However, Penrose also says that research shows that the strength of these connections, and even the physical connections themselves, change over time, almost as though the silicon and steel in your personal computer were to rearrange themselves on a regular basis. The classical model attempts to explain this computationally, but Penrose finds, as Putnam has, that computational models inevitably fail at explaining such behavior. Penrose concludes, "…we must look for something different, as the appropriate type of controlling 'mechanism'-at least in the case of synaptic changes that might have some relevance to actual conscious activity" (Penrose 354). Large-scale quantum coherence in the brain between individual neurons is a promising candidate for that mechanism. Furthermore, the young science of quantum computing, which uses principles of quantum mechanics and classical computing together to store data and perform operations simultaneously and flexibly on many particles, may eventually produce machines efficient enough to simulate this coherence.3
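The idea of many particles cooperating in a single quantum state can be given a concrete, if toy, illustration with a two-qubit state-vector simulation. The construction below (a standard Bell state) is a generic textbook example of entanglement, not Penrose's proposed neural mechanism.

```python
# A toy state-vector simulation of two qubits: one Hadamard gate and one
# CNOT put the pair into a single entangled state in which neither qubit
# has an independent state of its own. A generic illustration only.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # both qubits start in |0>
state = np.kron(H, I2) @ state                 # superpose the first qubit
state = CNOT @ state                           # entangle the two qubits

print(np.round(state.real, 3))  # [0.707 0. 0. 0.707]: (|00> + |11>)/sqrt(2)
```

Measuring either qubit immediately fixes the other, however the pair is realized physically; it is this sort of collective behavior, scaled up enormously, that Penrose's quantum-coherence proposal would require of the brain.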



Penrose admits that the obstacles to constructing a quantum theory of the mind are large. He says, "[a human-level device] would have to incorporate the same kind of physical action that is responsible for evoking our own awareness. Since we do not yet have any physical theory of that action, it is certainly premature to speculate on when or whether such a putative device might be constructed" (Penrose 393). The difficulty arises because once we have our physical theory, we will also need a corresponding breakthrough in psychology that explains the connection between the quantum model and consciousness. This might seem a dubious proposition; Penrose admits that he has no idea how it might come about. However, it is in principle possible to model the mind in a quantum fashion, while Putnam has decisively ruled out modeling the mind in a classically functionalist way. If we could come up with these theories, we could then construct a machine whose physical states corresponded to the way physical states work in our minds, and which would be functionally equivalent to humans. Although the prospect of truly artificially intelligent machines looks grim in the near term, we should not yet give up hope. Penrose says, "…in a clear sense, these are still early days in the physical understanding of our universe-particularly in relation to mental phenomena" (Penrose 393-4). However, two- and three-qubit quantum computers have already been built, which are capable of data sorting and simple arithmetic. As brilliant a mind as Richard Feynman believed that advances in quantum computing would stimulate advances in quantum physics.4 What Penrose has offered us is a research paradigm: strong AI researchers who have been treading water with functionalism can turn to a quantum model and begin solving these difficult problems.

Notes

1. Or "strong AI"; see Searle, "Minds, Brains, and Programs" (1980); also Dreyfus, What Computers Still Can't Do (1992).
2. See the Twin Earth examples in Representation and Reality; also Putnam's article "Meaning and Reference."



3. See www.qubit.org; Arrighi, P., "Quantum Computation Explained to My Mother," EATCS, June 2003; Steane, A.M., "Quantum Computing," Reports on Progress in Physics, vol. 61 (1998).
4. See www.cs.caltech.edu/~westside/quantum-intro.html.

Bibliography

Penrose, Roger. Shadows of the Mind. Oxford: Oxford University Press, 1994.
Putnam, Hilary. Representation and Reality. Cambridge, Mass.: The MIT Press, 1988.
The Stanford Encyclopedia of Philosophy, "Turing Machine." http://plato.stanford.edu/entries/turing-machine/


Episteme Announces the Scheduled Publication of Volume XVI • September 2005

CALL FOR PAPERS

Episteme is a student-run publication that aims to recognize and encourage excellence in undergraduate philosophy by providing students and faculty examples of some of the best work currently being done in undergraduate philosophy programs. Episteme will consider papers written by undergraduate students in any area of philosophy. Papers are evaluated according to the following criteria: quality of research, depth of philosophical inquiry, creativity, original insight and clarity. Submissions to be considered for the sixteenth issue (September 2005) should adhere to the following stipulations:

1. A maximum of 4,000 words.
2. Combine research and original insight.
3. Provide a cover sheet that includes the following information: author's name, mailing address (current and permanent), email address, telephone number, college or university name, and title of submission.
4. Include a Works Cited page in MLA bibliographic format. Please use endnotes as a supplement.
5. The title page should bear the title of the paper only; the author's name should not appear on the submission itself.
6. Provide three double-spaced paper copies with numbered pages and one (electronic) copy formatted for Microsoft Word on a CD or a 3.5" disk.

Submissions must be postmarked by February 18, 2005, and addressed to: The Editors • Episteme • Department of Philosophy, Blair Knapp Hall • Denison University • Granville, OH 43023. Questions should be submitted to Jason Stotts (stotts_w@denison.edu).

