Hard problem of consciousness

The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences—how sensations acquire characteristics such as colors and tastes.[1] The philosopher David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts it with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, and so on. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by philosophers such as Daniel Dennett[4] and cognitive neuroscientists such as Stanislas Dehaene.[5] Clinical neurologist and skeptic Steven Novella has dismissed it as "the hard non-problem".[6]
Formulation of the problem

Chalmers' formulation

In "Facing Up to the Problem of Consciousness" (1995), Chalmers wrote:[3]

    It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

In the same paper, he also wrote:

    The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

The hard problem can be stated in terms of two facts:

1. [PQ] Physical processing gives rise to experiences with a phenomenal character.
2. [Q] Our phenomenal qualities are thus-and-so.

The first fact concerns the relationship between the physical and the phenomenal, whereas the second concerns the very nature of the phenomenal itself. Most responses to the hard problem are aimed at explaining either one of these facts or both.

Easy problems

Chalmers contrasts the hard problem with a number of (relatively) easy problems that consciousness presents. He emphasizes that what the easy
problems have in common is that they all represent some ability, or the performance of some function or behavior. Examples of easy problems include:

- the ability to discriminate, categorize, and react to environmental stimuli;
- the integration of information by a cognitive system;
- the reportability of mental states;
- the ability of a system to access its own internal states;
- the focus of attention;
- the deliberate control of behavior;
- the difference between wakefulness and sleep.

Other formulations

Other formulations of the "hard problem" include:

- "How is it that some organisms are subjects of experience?"
- "Why does awareness of sensory information exist at all?"
- "Why do qualia exist?"
- "Why is there a subjective component to experience?"
- "Why aren't we philosophical zombies?"

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers in the late 20th century and Alfred North Whitehead earlier in the 20th century, argued that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism. Chalmers
argued that a "rich inner life" is not logically reducible to the functional properties of physical processes. He states that consciousness must be described using nonphysical means: a fundamental ingredient capable of clarifying phenomena that have not been explained using physical means. Use of this fundamental property, Chalmers argues, is necessary to explain certain functions of the world, much as other fundamental features, such as mass and time, are needed to explain significant principles in nature.

The philosopher Thomas Nagel posited in 1974 that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, he argued, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[13]

New mysterianism, such as that of the philosopher Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[15]

Deflationary accounts

Some philosophers, such as Daniel Dennett[4] and Peter Hacker,[16] oppose the idea that there is a hard problem. These theorists have argued that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the "easy" ones (which, as he has clarified, he does not consider "easy" at all).[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and will instead eventually be fully explained by natural phenomena.
Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems as though it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[17]

The Hard Problem of Consciousness
The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is “something it is like” for a subject in conscious experience, why conscious mental states “light up” and directly appear to the subject. The usual methods of science involve explanation of functional, dynamical, and structural properties—explanation of what a thing does, how it changes over time, and how it is put together. But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness therefore presents a hard problem for science, or perhaps it marks the limits of what science can explain.

Explaining why consciousness occurs at all can be contrasted with so-called “easy problems” of consciousness: the problems of explaining the function, dynamics, and structure of consciousness. These features can be explained using the usual methods of science. But that leaves the question of why there is something it is like for the subject when these functions, dynamics, and structures are present. This is the hard problem.

In more detail, the challenge arises because it does not seem that the qualitative and subjective aspects of conscious experience—how consciousness “feels” and the fact that it is directly “for me”—fit into a physicalist ontology, one consisting of just the basic elements of physics plus structural, dynamical, and functional combinations of those basic elements. It appears that even a complete specification of a creature in physical terms leaves unanswered the question of whether or not the creature is conscious. And it seems that we can easily conceive of creatures just like us physically and functionally that nonetheless lack consciousness.
This indicates that a physical explanation of consciousness is fundamentally incomplete: it leaves out what it is like to be the subject, for the subject. There seems to be an unbridgeable explanatory gap between the physical world and consciousness. All these factors make the hard problem hard.

The hard problem was so named by David Chalmers in 1995. The problem is a major focus of research in contemporary philosophy
of mind, and there is a considerable body of empirical research in psychology, neuroscience, and even quantum physics. The problem touches on issues in ontology, on the nature and limits of scientific explanation, and on the accuracy and scope of introspection and first-person knowledge, to name but a few. Reactions to the hard problem range from an outright denial of the issue to naturalistic reduction to panpsychism (the claim that everything is conscious to some degree) to full-blown mind-body dualism.

Table of Contents

1. Stating the Problem
   a. Chalmers
   b. Nagel
   c. Levine
2. Underlying Reasons for the Problem
3. Responses to the Problem
   a. Eliminativism
   b. Strong Reductionism
   c. Weak Reductionism
   d. Mysterianism
   e. Interactionist Dualism
   f. Epiphenomenalism
   g. Dual Aspect Theory/Neutral Monism/Panpsychism
4. References and Further Reading

3. Responses to the Problem

a. Eliminativism

Eliminativism holds that there is no hard problem of consciousness because there is no consciousness to worry about in the first place. Eliminativism is most clearly defended by Rey 1997, but see also Dennett 1978, 1988, Wilkes 1984, and Ryle 1949. On the face of it, this response sounds absurd: how can one deny that conscious experience exists? Consciousness might be the one thing that is certain in our epistemology. But eliminativist views resist the idea that what we call experience is equivalent to consciousness, at least in the phenomenal, “what it’s like” sense. They hold that consciousness so-conceived is a
philosopher’s construction, one that can be rejected without absurdity. If it is definitional of consciousness that it is nonfunctional, then holding that the mind is fully functional amounts to a denial of consciousness. Alternately, if qualia are construed as nonrelational, intrinsic qualities of experience, then one might deny that qualia exist (Dennett 1988). And if qualia are essential to consciousness, this, too, amounts to an eliminativism about consciousness.

What might justify consciousness eliminativism? First, the very notion of consciousness, upon close examination, may not have well-defined conditions of application—there may be no single phenomenon that the term picks out (Wilkes 1984). Or the term may serve no use at all in any scientific theory, and so may drop out of a scientifically-fixed ontology (Rey 1997). If science tells us what there is (as some naturalists hold), and science has no place for nonfunctional intrinsic qualities, then there is no consciousness, so defined. Finally, it might be that the term ‘consciousness’ gets its meaning as part of a falsifiable theory, our folk psychology. The entities posited by a theory stand or fall with the success of the theory. If the theory is falsified, then the entities it posits do not exist (compare P.M. Churchland 1981). And there is no guarantee that folk psychology will not be supplanted by a better theory of the mind, perhaps a neuroscientific or even quantum mechanical theory, at some point. Thus, consciousness might be eliminated from our ontology. If that occurs, obviously there is no hard problem to worry about. No consciousness, no problem!

But eliminativism seems much too strong a reaction to the hard problem, one that throws the baby out with the bathwater. First, it is highly counterintuitive to deny that consciousness exists. It seems extremely basic to our conception of minds and persons. A more desirable view would avoid this move.
Second, it is not clear why we must accept that consciousness, by definition, is nonfunctional or intrinsic. Definitional, “analytic” claims are highly controversial at best, particularly with difficult terms like ‘consciousness’ (compare Quine 1951, Wittgenstein 1953). A better solution would hold that consciousness still exists, but it is
functional and relational in nature. This is the strong reductionist approach.

b. Strong Reductionism

Strong reductionism holds that consciousness exists, but contends that it is reducible to tractable functional, nonintrinsic properties. Strong reductionism further claims that the reductive story we tell about consciousness fully explains, without remainder, all that needs to be explained about consciousness. Reductionism, generally, is the idea that complex phenomena can be explained in terms of the arrangement and functioning of simpler, better understood parts. Key to strong reductionism, then, is the idea that consciousness can be broken down and explained in terms of simpler things. This amounts to a rejection of the idea that experience is simple and basic, that it stands as a kind of epistemic or metaphysical “ground floor.” Strong reductionists must hold that consciousness is not as it prima facie appears, that it only seems to be marked by immediacy, indescribability, and independence, and therefore that it only seems nonfunctional and intrinsic. Consciousness, according to strong reductionism, can be fully analyzed and explained in functional terms, even if it does not seem that way.

A number of prominent strongly reductive theories exist in the literature. Functionalist approaches hold that consciousness is nothing more than a functional process. A popular version of this view is the “global workspace” hypothesis, which holds that conscious states are mental states available for processing by a wide range of cognitive systems (Baars 1988, 1997; Dehaene & Naccache 2001). They are available in this way by being present in a special network—the “global workspace.” This workspace can be functionally characterized, and it can also be given a neurological interpretation. In answer to the question “why are these states conscious?” it can be replied that this is what it means to be conscious.
If a state is available to the mind in this way, it is a conscious state (see also Dennett 1991). (For more neuroscientifically-focused versions of the functionalist approach, see P. S. Churchland 1986; Crick 1994; and Koch 2004.)
Another set of views that can be broadly termed functionalist is “enactive” or “embodied” approaches (Hurley 1998, Noë 2005, 2009). These views hold that mental processes should not be characterized in terms of strictly inner processes or representations. Rather, they should be cashed out in terms of the dynamic processes connecting perception, bodily and environmental awareness, and behavior. These processes, the views contend, do not strictly depend on processes inside the head; rather, they loop out into the body and the environment. Further, the nature of consciousness is tied up with behavior and action—it cannot be isolated as a passive process of receiving and recording information. These views are cataloged as functionalist because of the way they answer the hard problem: these physical states (constituted in part by bodily and worldly things) are conscious because they play the right functional role; they do the right thing.

Another strongly reductive approach holds that conscious states are states representing the world in the appropriate way (Dretske 1995, Tye 1995, 2000). This view, known as “first-order representationalism,” contends that conscious states make us aware of things in the world by representing them. Further, these representations are “nonconceptual” in nature: they represent features even if the subject in question lacks the concepts needed to cognitively categorize those features. But these nonconceptual representations must play the right functional role in order to be conscious. They must be poised to influence the higher-level cognitive systems of a subject. The details of these representations differ from theorist to theorist, but a common answer to the hard problem emerges. First-order representational states are conscious because they do the right thing: they make us aware of just the sorts of features that make up conscious experience, features like the redness of an apple, the sweetness of honey, or the shrillness of a trumpet.
Further, such representations are conscious because they are poised to play the right role in our understanding of the world—they serve as the initial layer of our epistemic contact with reality, a layer we can then use as the basis of our more sophisticated beliefs and theories.
A further point serves to support the claims of first-order representationalism. When we reflect on our experience in a focused way, we do not seem to find any distinctively mental properties. Rather, we find the very things first-order representationalism claims we represent: the basic sensory features of the world. If I ask you to reflect closely on your experience of a tree, you do not find special mental qualities. Rather, you find the tree, as it appears to you, as you represent it. This consideration, known as “transparency,” seems to undermine the claim that we need to posit special intrinsic qualia, seemingly irreducible properties of our experiences (Harman 1990, though see Kind 2003). Instead, we can explain all that we experience in terms of representation. We have a red experience because we represent physical red in the right way. It is then argued that representation can be given a reductive explanation. Representation, even the sort of representation involved in experience, is no more than various functional/physical processes of our brains tracking the environment. It follows that there is no further hard problem to deal with.

A third type of strongly reductive approach is higher-order representationalism (Armstrong 1968, 1981; Rosenthal 1986, 2005; Lycan 1987, 1996, 2001; Carruthers 2000, 2005). This view starts with the question of what accounts for the difference between conscious and nonconscious mental states. Higher-order theorists hold that an intuitive answer is that we are appropriately aware of our conscious states, while we are unaware of our nonconscious states. The task of a theory of consciousness, then, is to explain the awareness accounting for this difference. Higher-order representationalists contend that the awareness is a product of a specific sort of representation, a representation that picks out the subject’s own mental states.
These “higher-order” representations (representations of other representations) make the subject aware of her states, thus accounting for consciousness. In answer to the hard problem, the higher-order theorist responds that these states are conscious because the subject is appropriately aware of them by way of higher-order representation. The higher-order representations themselves are held to be nonconscious. And since representation can plausibly be reduced to functional/physical
processes, there is no lingering problem to explain (though see Gennaro 2005 for more on this strategy).

A final strongly reductive view, “self-representationalism,” holds that troubles with the higher-order view demand that we characterize the awareness subjects have of their conscious states as a kind of self-representation, where one complex representational state is about both the world and that very state itself (Gennaro 1996, Kriegel 2003, 2009, Van Gulick 2004, 2006, Williford 2006). It may seem paradoxical to say that a state can represent itself, but this can be dealt with by holding that the state represents itself in virtue of one part of the state representing another, and thereby coming to indirectly represent the whole. Further, self-representationalism may provide the best explanation of the seemingly ubiquitous presence of self-awareness in conscious experience. And, again, in answer to the question of why such states are conscious, the self-representationalist can respond that conscious states are ones the subject is aware of, and self-representationalism explains this awareness. And since self-representation, properly construed, is reducible to functional/physical processes, we are left with a complete explanation of consciousness. (For more details on how higher-order/self-representational views deal with the hard problem, see Gennaro 2012, chapter 4.)

However, there remains considerable resistance to strongly reductive views. The main stumbling block is that they seem to leave unaddressed the pressing intuition that one can easily conceive of a system satisfying all the requirements of the strongly reductive views but still lacking consciousness (Chalmers 1996, chapter 3). It is argued that an effective theory ought to close off such easy conceptions. Further, strong reductivists seem committed to the claim that there is no knowledge of consciousness that cannot be grasped theoretically.
If a strongly reductive view is true, it seems that a blind person can gain full knowledge of color experience from a textbook. But surely she still lacks some knowledge of what it’s like to see red, for example? Strongly reductive theorists can contend that these recalcitrant intuitions are merely a product of lingering confused or erroneous views of consciousness. But in the face of such
worries, many have felt it better to find a way to respect these intuitions while still denying the potentially unpleasant ontological implications of the hard problem. Hence, weak reductionism.

c. Weak Reductionism

Weak reductionism, in contrast to the strong version, holds that consciousness is a simple or basic phenomenon, one that cannot be informatively broken down into simpler nonconscious elements. But according to the view, we can still identify consciousness with physical properties if the most parsimonious and productive theory supports such an identity (Block 2002, Block & Stalnaker 1999, Hill 1997, Loar 1997, 1999, Papineau 1993, 2002, Perry 2001). What’s more, once the identity has been established, there is no further burden of explanation. Identities have no explanation: a thing just is what it is. To ask how it could be that Mark Twain is Sam Clemens, once we have the most parsimonious rendering of the facts, is to go beyond meaningful questioning. And the same holds for the identity of conscious states with physical states.

But there remains the question of why the identity claim appears so counterintuitive, and here weak reductionists generally appeal to the “phenomenal concepts strategy” (PCS) to make their case (compare Stoljar 2005). The PCS holds that the hard problem is not the result of a dualism of facts, phenomenal and physical, but rather a dualism of concepts picking out fully physical conscious states. One concept is the third-personal physical concept of neuroscience. The other concept is a distinctively first-personal “phenomenal concept”—one that picks out conscious states in a subjectively direct manner. Because of the subjective differences in these modes of conceptual access, consciousness does not seem intuitively to be physical. But once we understand the differences in the two concepts, there is no need to accept this intuition. Here is a sketch of how a weakly reductive view of consciousness might proceed.
First, we find stimuli that reliably trigger reports of phenomenally conscious states from subjects. Then we find what neural processes are reliably correlated with those reported experiences. It can then be argued on the basis of parsimony
that the reported conscious state just is the neural state—an ontology holding that two states are present is less simple than one identifying the two states. Further, accepting the identity is explanatorily fruitful, particularly with respect to mental causation. Finally, the PCS is appealed to in order to explain why the identity remains counterintuitive. And as to the question of why this particular neural state should be identical to this particular phenomenal state, the answer is that this is just the way things are. Explanation bottoms out at this point, and requests for further explanation are unreasonable.

But there are pressing worries about weak reductionism. There seems to be an undischarged phenomenal element within the weakly reductive view (Chalmers 2006). When we focus on the PCS, it seems that we lack a plausible story about how it is that phenomenal concepts reveal what it’s like for us in experience. The direct access of phenomenal concepts seems to require that phenomenal states themselves inform us of what they are like. A common way to cash out the PCS is to say that the phenomenal properties themselves are embedded in the phenomenal concepts, and that alone makes them accessible in the seemingly rich manner of introspected experience. When it is asked how phenomenal properties might underwrite this access, the answer given is that this is in the nature of phenomenal properties—that is just what they do. Again, we are told that explanation must stop somewhere. But at this point, it seems that there is little to distinguish the weak reductionist from the various forms of nonreductive and dualistic views cataloged below. They, too, hold that it is in the nature of phenomenal properties to underwrite first-person access. But they hold that there is no good reason to think that properties with this sort of nature are physical. We know of no other physical property that possesses such a nature.
All that we are left with to recommend weak reductionism is a thin claim of parsimony and an overly strong fealty to physicalism. We are asked to accept a brute identity here, one that seems unprecedented in our ontology given that consciousness is a macro-level phenomenon. Other examples of such brute identity—of electricity and magnetism into one force, say—occur at the foundational level of physics. Neurological and phenomenal properties do not seem to be basic in this way. We are left with
phenomenal properties inexplicable in physical terms, “brutally” identified with neurological properties in a way that nothing else seems to be. Why not take all this as an indication that phenomenal properties are not physical after all?

The weak reductionist can respond that the question of mental causation still provides a strong enough reason to hold onto physicalism. A plausible scientific principle is that the physical world is causally closed: all physical events have physical causes. And since our bodies are physical, it seems that denying that consciousness is physical renders it epiphenomenal. The apparent implausibility of epiphenomenalism may be enough to motivate adherence to weak reductionism, even with its explanatory shortcomings. Dualistic challenges to this claim will be discussed in later sections.

It is possible, however, to embrace weak reductionism and still acknowledge that some questions remain to be answered. For example, it might be reasonable to demand some explanation of how particular neural states correlate with differences in conscious experience. A weak reductionist might hold that this is a question we at present cannot answer. It may be that one day we will be in a position to do so, due to a radical shift in our understanding of consciousness or physical reality. Or perhaps this will remain an unsolvable mystery, one beyond our limited abilities to decipher. Still, there may be good reasons to hold at present that the most parsimonious metaphysical picture is the physicalist picture. The line between weak reductionism and the next set of views to be considered, mysterianism, may blur considerably here.

d. Mysterianism

The mysterian response to the hard problem does not offer a solution; rather, it holds that the hard problem cannot be solved by current scientific method and perhaps cannot be solved by human beings at all. There are two varieties of the view.
The more moderate version of the position, which can be termed “temporary mysterianism,” holds that given the current state of scientific knowledge, we have no explanation of why some physical states are conscious (Nagel 1974, Levine 2001). The gap between experience and the sorts of things dealt with in
modern physics—functional, structural, and dynamical properties of basic fields and particles—is simply too wide to be bridged at present. Still, it may be that some future conceptual revolution in the sciences will show how to close the gap. Such massive conceptual reordering is certainly possible, given the history of science. And, indeed, if one accepts the Kuhnian idea of shifts between incommensurate paradigms, it might seem unsurprising that we, pre-paradigm-shift, cannot grasp what things will be like after the revolution. But at present we have no idea how the hard problem might be solved.

Thomas Nagel, in sketching his version of this idea, calls for a future “objective phenomenology” which will “describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences” (1974, 449). Without such a new conceptual system, Nagel holds, we are left unable to bridge the gap between consciousness and the physical. Consciousness may indeed be physical, but we at present have no idea how this could be so.

It is of course open for both weak and strong reductionists to accept a version of temporary mysterianism. They can agree that at present we do not know how consciousness fits into the physical world, but the possibility is open that future science will clear up the mystery. The main difference between such claims by reductionists and by mysterians is that the mysterians reject the idea that current reductive proposals do anything at all to close the gap. How different the explanatory structure must be to count as truly new, and not merely an extension of the old, is not possible to gauge with any precision. So the difference between a very weak reductionist and a temporary, though optimistic, mysterian may not amount to much.
The stronger version of the position, “permanent mysterianism,” argues that our ignorance in the face of the hard problem is not merely transitory, but is permanent, given our limited cognitive capacities (McGinn 1989, 1991). We are like squirrels trying to understand quantum mechanics: it just is not going to happen. The main exponent of this view is Colin McGinn, who argues that a solution to the hard problem is “cognitively closed” to us. He
supports his position by stressing consequences of a modular view of the mind, inspired in part by Chomsky's work in linguistics. Our mind just may not be built to solve this sort of problem. Instead, it may be composed of dedicated, domain-specific “modules” devoted to solving local, specific problems for an organism. An organism without a dedicated “language acquisition device” equipped with “universal grammar” cannot acquire language. Perhaps the hard problem requires cognitive apparatus we just do not possess as a species. If that is the case, no further scientific or philosophical breakthrough will make a difference. We are not built to solve the problem: it is cognitively closed to us.

A worry about such a claim is that it is hard to establish just what sorts of problems are permanently beyond our ken. It seems possible that the temporary mysterian may be correct here, and what looks unbridgeable in principle is really just a temporary roadblock. Both the temporary and permanent mysterian agree on the evidence. They agree that there is a real gap at present between consciousness and the physical, and they agree that nothing in current science seems up to the task of solving the problem. The further claim that we are forever blocked from solving the problem turns on controversial claims about the nature of the problem and the nature of our cognitive capacities. Perhaps those controversial claims will be made good, but at present, it is hard to see why we should give up all hope, given the history of surprising scientific breakthroughs.

Facing Up to the Problem of Consciousness

1 Introduction

Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted.
Many have tried to explain it, but the explanations always seem to fall short of the
target. Some have been led to suppose that the problem is intractable, and that no good explanation can be given. To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that such methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance and a double-aspect view of information.

2 The easy problems and the hard problem

There is not just one problem of consciousness. "Consciousness" is an ambiguous term, referring to many different phenomena. Each of these phenomena needs to be explained, but some are easier to explain than others. At the start, it is useful to divide the associated problems of consciousness into "hard" and "easy" problems. The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods. The easy problems of consciousness include those of explaining the following phenomena:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
All of these phenomena are associated with the notion of consciousness. For example, one sometimes says that a mental state is conscious when it is verbally reportable, or when it is internally accessible. Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that
information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behavior. We sometimes say that an action is conscious precisely when it is deliberate. Often, we say that an organism is conscious as another way of saying that it is awake. There is no real issue about whether these phenomena can be explained scientifically. All of them are straightforwardly vulnerable to explanation in terms of computational or neural mechanisms. To explain access and reportability, for example, we need only specify the mechanism by which information about internal states is retrieved and made available for verbal report. To explain the integration of information, we need only exhibit mechanisms by which information is brought together and exploited by later processes. For an account of sleep and wakefulness, an appropriate neurophysiological account of the processes responsible for organisms' contrasting behavior in those states will suffice. In each case, an appropriate cognitive or neurophysiological model can clearly do the explanatory work. If these phenomena were all there was to consciousness, then consciousness would not be much of a problem. Although we do not yet have anything close to a complete explanation of these phenomena, we have a clear idea of how we might go about explaining them. This is why I call these problems the easy problems. Of course, "easy" is a relative term. Getting the details right will probably take a century or two of difficult empirical work. Still, there is every reason to believe that the methods of cognitive science and neuroscience will succeed. The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. 
When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience. It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that
when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. If any problem qualifies as the problem of consciousness, it is this one. In this central sense of "consciousness", an organism is conscious if there is something it is like to be that organism, and a mental state is conscious if there is something it is like to be in that state. Sometimes terms such as "phenomenal consciousness" and "qualia" are also used here, but I find it more natural to speak of "conscious experience" or simply "experience". Another useful way to avoid confusion (used by e.g. Newell 1990, Chalmers 1996) is to reserve the term "consciousness" for the phenomena of experience, using the less loaded term "awareness" for the more straightforward phenomena described earlier. If such a convention were widely adopted, communication would be much easier; as things stand, those who talk about "consciousness" are frequently talking past each other. The ambiguity of the term "consciousness" is often exploited by both philosophers and scientists writing on the subject. It is common to see a paper on consciousness begin with an invocation of the mystery of consciousness, noting the strange intangibility and ineffability of subjectivity, and worrying that so far we have no theory of the phenomenon. Here, the topic is clearly the hard problem - the problem of experience. In the second half of the paper, the tone becomes more optimistic, and the author's own theory of consciousness is outlined. 
Upon examination, this theory turns out to be a theory of one of the more straightforward phenomena - of reportability, of introspective access, or whatever. At the close, the author declares that consciousness has turned out to be tractable after all, but the reader is left feeling like the victim of a bait-and-switch. The hard problem remains untouched.

3 Functional explanation

Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the
easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Here "function" is not used in the narrow teleological sense of something that a system is designed to do, but in the broader sense of any causal role in the production of behavior that a system might perform.) To explain reportability, for instance, is just to explain how a system could perform the function of producing reports on internal states. To explain internal access, we need to explain how a system could be appropriately affected by its internal states and use information about those states in directing later processes. To explain integration and control, we need to explain how a system's central processes can bring information contents together and use them in the facilitation of various behaviors. These are all problems about the explanation of functions. How do we explain the performance of a function? By specifying a mechanism that performs the function. Here, neurophysiological and cognitive modeling are perfect for the task. If we want a detailed low-level explanation, we can specify the neural mechanism that is responsible for the function. If we want a more abstract explanation, we can specify a mechanism in computational terms. Either way, a full and satisfying explanation will result. Once we have specified the neural or computational mechanism that performs the function of verbal report, for example, the bulk of our work in explaining reportability is over. In a way, the point is trivial. It is a conceptual fact about these phenomena that their explanation only involves the explanation of various functions, as the phenomena are functionally definable. All it means for reportability to be instantiated in a system is that the system has the capacity for verbal reports of internal information. 
All it means for a system to be awake is for it to be appropriately receptive to information from the environment and for it to be able to use this information in directing behavior in an appropriate way. To see that this sort of thing is a conceptual fact, note that someone who says "you have explained the performance of the verbal report function, but you have not explained reportability" is making a trivial conceptual mistake about reportability. All it could possibly take to explain reportability is an explanation of how the relevant function is performed; the same goes for the other phenomena in question. Throughout the higher-level sciences, reductive explanation works in just this way. To explain the gene, for instance, we needed to specify the mechanism that stores and transmits hereditary information from one generation to the next. It turns out
that DNA performs this function; once we explain how the function is performed, we have explained the gene. To explain life, we ultimately need to explain how a system can reproduce, adapt to its environment, metabolize, and so on. All of these are questions about the performance of functions, and so are well-suited to reductive explanation. The same holds for most problems in cognitive science. To explain learning, we need to explain the way in which a system's behavioral capacities are modified in light of environmental information, and the way in which new information can be brought to bear in adapting a system's actions to its environment. If we show how a neural or computational mechanism does the job, we have explained learning. We can say the same for other cognitive phenomena, such as perception, memory, and language. Sometimes the relevant functions need to be characterized quite subtly, but it is clear that insofar as cognitive science explains these phenomena at all, it does so by explaining the performance of functions. When it comes to conscious experience, this sort of explanation fails. What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience - perceptual discrimination, categorization, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open. There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says "I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene", then they are making a conceptual mistake. 
All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says "I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced", they are not making a conceptual mistake. This is a nontrivial further question. This further question is the key question in the problem of consciousness. Why doesn't all this information-processing go on "in the dark", free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it
arises is the central mystery. There is an explanatory gap (a term due to Levine 1983) between the functions and experience, and we need an explanatory bridge to cross it. A mere account of the functions stays on one side of the gap, so the materials for the bridge must be found elsewhere. This is not to say that experience has no function. Perhaps it will turn out to play an important cognitive role. But for any role it might play, there will be more to the explanation of experience than a simple explanation of the function. Perhaps it will even turn out that in the course of explaining a function, we will be led to the key insight that allows an explanation of experience. If this happens, though, the discovery will be an extra explanatory reward. There is no cognitive function such that we can say in advance that explanation of that function will automatically explain experience. To explain experience, we need a new approach. The usual explanatory methods of cognitive science and neuroscience do not suffice. These methods have been developed precisely to explain the performance of cognitive functions, and they do a good job of it. But as these methods stand, they are only equipped to explain the performance of functions. When it comes to the hard problem, the standard approach has nothing to say.

6 Nonreductive explanation

At this point some are tempted to give up, holding that we will never have a theory of conscious experience. McGinn (1989), for example, argues that the problem is too hard for our limited minds; we are "cognitively closed" with respect to the phenomenon. Others have argued that conscious experience lies outside the domain of scientific theory altogether. I think this pessimism is premature. This is not the place to give up; it is the place where things get interesting. When simple methods of explanation are ruled out, we need to investigate the alternatives.
Given that reductive explanation fails, nonreductive explanation is the natural choice. Although a remarkable number of phenomena have turned out to be explicable wholly in terms of entities simpler than themselves, this is not universal. In physics, it occasionally happens that an entity has to be taken as fundamental. Fundamental entities are not explained in terms of anything simpler. Instead, one takes them as basic, and gives a theory of how they relate to everything else in the world. For example, in the nineteenth century it turned out that electromagnetic processes could not be explained in terms of the wholly mechanical processes that
previous physical theories appealed to, so Maxwell and others introduced electromagnetic charge and electromagnetic forces as new fundamental components of a physical theory. To explain electromagnetism, the ontology of physics had to be expanded. New basic properties and basic laws were needed to give a satisfactory account of the phenomena. Other features that physical theory takes as fundamental include mass and space-time. No attempt is made to explain these features in terms of anything simpler. But this does not rule out the possibility of a theory of mass or of space-time. There is an intricate theory of how these features interrelate, and of the basic laws they enter into. These basic principles are used to explain many familiar phenomena concerning mass, space, and time at a higher level. I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time. If we take experience as fundamental, then we can go about the business of constructing a theory of experience. Where there is a fundamental property, there are fundamental laws. A nonreductive theory of experience will add new principles to the furniture of the basic laws of nature. These basic principles will ultimately carry the explanatory burden in a theory of consciousness. Just as we explain familiar high-level phenomena involving mass in terms of more basic principles involving mass and other entities, we might explain familiar phenomena involving experience in terms of more basic principles involving experience and other entities.
In particular, a nonreductive theory of experience will specify basic principles telling us how experience depends on physical features of the world. These psychophysical principles will not interfere with physical laws, as it seems that physical laws already form a closed system. Rather, they will be a supplement to a physical theory. A physical theory gives a theory of physical processes, and a psychophysical theory tells us how those processes give rise to experience. We know that experience depends on physical processes, but we also know that this dependence cannot be derived from physical laws alone. The new basic principles
postulated by a nonreductive theory give us the extra ingredient that we need to build an explanatory bridge. Of course, by taking experience as fundamental, there is a sense in which this approach does not tell us why there is experience in the first place. But this is the same for any fundamental theory. Nothing in physics tells us why there is matter in the first place, but we do not count this against theories of matter. Certain features of the world need to be taken as fundamental by any scientific theory. A theory of matter can still explain all sorts of facts about matter, by showing how they are consequences of the basic laws. The same goes for a theory of experience. This position qualifies as a variety of dualism, as it postulates basic properties over and above the properties invoked by physics. But it is an innocent version of dualism, entirely compatible with the scientific view of the world. Nothing in this approach contradicts anything in physical theory; we simply need to add further bridging principles to explain how experience arises from physical processes. There is nothing particularly spiritual or mystical about this theory - its overall shape is like that of a physical theory, with a few fundamental entities connected by fundamental laws. It expands the ontology slightly, to be sure, but Maxwell did the same thing. Indeed, the overall structure of this position is entirely naturalistic, allowing that ultimately the universe comes down to a network of basic entities obeying simple laws, and allowing that there may ultimately be a theory of consciousness cast in terms of such laws. If the position is to have a name, a good choice might be naturalistic dualism. If this view is right, then in some ways a theory of consciousness will have more in common with a theory in physics than a theory in biology. 
Biological theories involve no principles that are fundamental in this way, so biological theory has a certain complexity and messiness to it; but theories in physics, insofar as they deal with fundamental principles, aspire to simplicity and elegance. The fundamental laws of nature are part of the basic furniture of the world, and physical theories are telling us that this basic furniture is remarkably simple. If a theory of consciousness also involves fundamental principles, then we should expect the same. The principles of simplicity, elegance, and even beauty that drive physicists' search for a fundamental theory will also apply to a theory of consciousness. (A technical note: Some philosophers argue that even though there is a conceptual gap between physical processes and experience, there need be no metaphysical gap, so that experience might in a certain sense still be physical (e.g. Hill 1991; Levine 1983; Loar 1990). Usually this line of argument is supported by
an appeal to the notion of a posteriori necessity (Kripke 1980). I think that this position rests on a misunderstanding of a posteriori necessity, however, or else requires an entirely new sort of necessity that we have no reason to believe in; see Chalmers 1996 (also Jackson 1994 and Lewis 1994) for details. In any case, this position still concedes an explanatory gap between physical processes and experience. For example, the principles connecting the physical and the experiential will not be derivable from the laws of physics, so such principles must be taken as explanatorily fundamental. So even on this sort of view, the explanatory structure of a theory of consciousness will be much as I have described.)

The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?

Contents
1 Hard problems and easy problems
2 Relation to arguments against physicalism and the explanatory gap
3 Reductionism
4 Nonreductionism
5 Psychophysical Theories
6 Mysterianism
7 References
8 Further reading
9 External links
10 See also

Hard problems and easy problems

The hard problem contrasts with so-called easy problems, such as explaining how the brain integrates information, categorizes and
discriminates environmental stimuli, or focuses attention. Such phenomena are functionally definable. That is, roughly put, they are definable in terms of what they allow a subject to do. So, for example, if mechanisms that explain how the brain integrates information are discovered, then the first of the easy problems listed would be solved. The same point applies to all other easy problems: they concern specifying mechanisms that explain how functions are performed. For the easy problems, once the relevant mechanisms are well understood, there is little or no explanatory work left to do. Experience does not seem to fit this explanatory model (though some reductionists argue that, on reflection, it does; see the section on reductionism below). Although experience is associated with a variety of functions, explaining how those functions are performed would still seem to leave important questions unanswered. We would still want to know why their performance is accompanied by experience, and why this or that kind of experience rather than another kind. So, for example, even when we find something that plays the causal role of pain, e.g. something that is caused by nerve stimulation and that causes recoil and avoidance, we can still ask why the particular experience of hurting, as opposed to, say, itching, is associated with that role. Such problems are hard problems. Cognitive models of consciousness (Baars 1988) are sometimes described as potential solutions to the hard problem. However, it is unclear that any such model could achieve that goal. For example, consider global workspace theory, according to which the contents of consciousness are globally available for various cognitive processes such as attention, memory, and verbal report. Even if this theory is correct, the connection between such processes and experience—e.g., why they are accompanied by experience at all—might well remain opaque.
For similar reasons, discovering neural correlates of consciousness might leave the hard problem unsolved: the question as to why those correlations exist would remain unanswered. Nevertheless, scientific advances on cognitive models and neural correlates of consciousness might well play important roles in a comprehensive solution. Relation to arguments against physicalism and the explanatory gap
The hard problem is often discussed in connection with arguments against physicalism (or materialism), which holds that consciousness is itself a physical phenomenon with solely physical properties. One of these arguments is the knowledge argument (Jackson 1982), which is based on thought experiments such as the following. Mary is a super-scientist with limitless logical acumen, who is raised far in the future in an entirely black-and-white room. By watching science lectures on black-and-white television, she learns the complete physical truth—everything in completed physics, chemistry, neuroscience, etc. Then she leaves the room and experiences color for the first time. It seems intuitively clear that upon leaving the room she learns new truths about what it is like to see in color. Advocates of the knowledge argument take that result to indicate that there are truths about consciousness that cannot be deduced from the complete physical truth. It is inferred from that premise that the physical truth fails to completely determine the truth about consciousness. And the latter result, most agree, would undermine physicalism. The hard problem relates closely to the claim that Mary learns new truths about color experiences when she first has such experiences. Arguably, if she learns new truths at that time, this is because the nature of color experiences cannot be fully explained in purely physical terms; otherwise, the reasoning runs, she would have already known the relevant truths. If such experiences are fully explicable in physical terms, then they should be objectively comprehensible, and Mary seems well positioned to grasp all objectively comprehensible properties. The general idea here is sometimes expressed as the claim that there is an explanatory gap (Levine 1983) between the physical and the phenomenal. A second argument often associated with the hard problem is the conceivability argument (Kripke 1972, Chalmers 1996).
According to one version of the conceivability argument, also called the zombie argument, one can conceive of a micro-physical duplicate of a human that lacks conscious experiences. Given this, it is argued, such a micro-physical duplicate is possible, which entails that the physical facts do not necessitate the phenomenal or experiential facts. This, most philosophers agree, would indicate that physicalism is false.
While many philosophers doubt that the conceivability of these zombie duplicates is indicative of their possibility, the hard problem primarily concerns the first step of the argument. If we can conceive of micro-physical duplicates of ourselves that lack consciousness, then we lack a complete explanation for why the physical facts give rise to the experiential or phenomenal facts. This again shows the existence of an explanatory gap.

Reductionism

There is no consensus about the status of the explanatory gap. Reductionists deny that the gap exists. They argue that the hard problem reduces to a combination of easy problems or derives from misconceptions about the nature of consciousness. For example, Daniel Dennett (2005) argues that, on reflection, consciousness is functionally definable. On his view, once the easy problems are solved, there will be nothing about consciousness and the physical left to explain. Reductionists often appeal to analogies from the history of science. These philosophers compare nonreductionists, who accept the existence of the explanatory gap, to 17th-century vitalists concerned about the hard problem of life. Comparisons are also made to the scientifically ignorant concerned about hard problems of heat or light (Churchland 1996). Science has shown that the latter concerns are overblown: life, heat, and light can be physically explained. Likewise, say reductionists, for consciousness. Nonreductionists usually reject such analogies. Part of the analogy is usually accepted: the vitalists doubted that how organisms reproduce, move, self-organize, etc., could be explained in purely physical terms, in much the same way that nonreductionists doubt that consciousness can be explained in purely physical terms. However, what the vitalists sought to explain was how certain functions are performed. By contrast, consciousness does not seem to consist in the performance of functions.
Nonreductionists take that difference to undermine the analogy between the hard problem of consciousness and the alleged hard problem of life. They reject the reductionists’ other analogies on similar grounds. Reductionism is entailed by influential theories in the philosophy of mind, including philosophical behaviorism, analytic functionalism, and eliminative materialism. Some philosophers take the merits of those
positions, such as their relative parsimony, to provide grounds for a reductionist approach to the hard problem. Other philosophers accept the existence of the explanatory gap and thus regard the hard problem as evidence against those theories.

Nonreductionism

All nonreductionists believe that the explanatory gap is genuine, but some nonreductionists argue that the gap is compatible with physicalism (Loar 1990/97). For nonreductionist physicalists, the gap reflects something about our perspective on the world, not the world itself. These philosophers hold that consciousness is an entirely physical phenomenon, and thus that phenomenal truths are nothing over and above physical truths, even though phenomenal truths cannot be deduced from micro-physical truths or the sorts of truths that Mary learns from her lectures. Nonreductionists must explain how to reconcile physicalism with the explanatory gap. (Reductionists do not share this burden, since they reject the gap.) Here nonreductionists sometimes invoke analogies to Kripkean (1972) empirical necessities. According to Kripke, the fact that heat is (decoherent) molecular motion is absolutely necessary—there is no possible situation in which there is one without the other—even though that fact was discovered empirically. One might object on the grounds that we can easily imagine a situation in which there is heat but, it turns out, no molecular motion. Against this, Kripke argues that on reflection such a situation is inconceivable. What we imagine existing without molecular motion is the sensation of heat—an experience typically caused in us by molecular motion—and not heat itself. Nonreductionists sometimes argue that similar reasoning could be used to explain why, in spite of the explanatory gap, the physical truth necessitates the truth about consciousness.
However, as Kripke himself argues, in the case of consciousness there does not appear to be a distinction corresponding to that between heat and the sensation of heat. For example, anything that feels like pain is ipso facto pain. So, Kripke’s reasoning does not straightforwardly extend to the empirical necessities entailed by nonreductionist physicalism. Many nonreductionists acknowledge that more is required to reconcile physicalism with the explanatory gap. Here it is common to appeal to distinctive features of phenomenal concepts. Some propose that
phenomenal concepts are distinctive in that their referents—phenomenal states—are constituents of those very concepts. For example, David Papineau (2002) suggests that phenomenal concepts have the form ‘that state: ___’, where the blank is filled in by an embedded phenomenal state, in something like the way a word may be embedded within quotation marks. He argues that the quotational structure of phenomenal concepts will produce a distinctive phenomenal/physical epistemic gap even if the embedded state is physical. But whether any such proposal can meet the nonreductionist’s burden remains controversial (Chalmers 2007). Some nonreductionists take the hard problem as a reason to reject physicalism. On most nonphysicalist views, consciousness is regarded as an irreducible component of nature. These views tend to differ primarily on how they characterize the causal relationship between consciousness and the physical world. According to interactionist dualism, for example, consciousness has both physical causes and physical effects; according to epiphenomenalism, consciousness has physical causes but no physical effects; and according to neutral monism, phenomenal properties are the categorical bases of physical properties, which are dispositional (neutral monism might or might not count as a version of physicalism, depending on whether the categorical bases of physical properties are considered physical).

Psychophysical Theories

Some believe that solving the hard problem will require constructing a psychophysical theory that includes fundamental laws. No such theory has been developed in great detail, but some speculative proposals have been advanced. Certain interactionist dualists argue that phenomenal properties affect brain processes by filling in gaps resulting from quantum indeterminacy (Eccles 1986). Theories emerging from that sort of argument may involve positing psychophysical laws.
And David Chalmers (1995), a leading nonreductionist, tentatively proposes that the basic link between the phenomenal and the physical exists at the level of information. He formulates a double aspect principle, on which phenomenal states realize informational states that are also realized in physical, cognitive systems such as the brain. Either proposal might provide a kind of solution to the hard problem: the laws would enable deductions of specific instances of experience from underlying physical structures. An important vestige of the
hard problem would, of course, remain: there would still be the question of why these psychophysical laws exist and not others. Such theorists are likely to argue that these laws are primitive, just like the basic laws of physics, and so the vestigial hard problem is neither more nor less puzzling than the question of why the physical constants are what they are. Reductionists will argue that such proposals are misconceived, either because they depend on confused notions of consciousness or because they presuppose that solutions to the easy problems will not yield a solution to the hard problem. Nonreductionist physicalists will reject those reductionist arguments, but they also tend to reject the need for a fundamental psychophysical theory. That said, not all such theories conflict with nonreductionist physicalism. Indeed, these philosophers might accept something like Chalmers’ proposal and regard it as a way to bridge the explanatory gap. Unlike Chalmers, however, they will regard phenomenal information as a special sort of physical information—special in that its connection to other sorts of physical information will remain opaque without appropriate psychophysical laws.

Mysterianism

Some argue that we are unable to solve the hard problem. This view is sometimes called mysterianism, and its best-known champion is Colin McGinn (1989). McGinn argues that our minds are simply not constructed to solve the hard problem; we are cognitively closed to it, in something like the way rats are cognitively closed to calculus problems. But unlike the rats, we can grasp the nature of the problem that, according to McGinn, we cannot solve. McGinn locates the source of our cognitive closure not in the hard problem’s intrinsic complexity—he allows that the solution may be simple—but rather in how we form theoretical concepts. In his view, we form such concepts by extending concepts associated with perception of macroscopic objects.
And he argues that any concepts produced by this mechanism will, like familiar physical concepts, inevitably leave the hard problem unsolved. This argument—both the premise about concept formation and the mysterian inference—is controversial (Stoljar 2006). And there are versions of mysterianism that do not rely on the argument. These include less
pessimistic versions on which scientific advances may one day enable us to solve the hard problem (Nagel 1998, Stoljar 2006). Mysterians differ on both reductionism and physicalism. McGinn and Thomas Nagel, a less pessimistic mysterian, reject reductionism. Daniel Stoljar, another less pessimistic mysterian, is officially neutral on reductionism. And whereas McGinn and Nagel are officially neutral on physicalism, Stoljar accepts it; indeed, his defense of mysterianism assumes physicalism.

………………

What shapes our subjective experiences? What is the relationship between the way science and philosophy approach the matter of consciousness? Why are some problems “easy”, and others “hard”? Professor of Philosophy David Chalmers describes the different views on the problem of consciousness. The hard problem of consciousness is the problem of how physical processes in the brain give rise to the subjective experiences of the mind and of the world. If you look at the brain from the outside you see this extraordinary machine – an organ consisting of 84 billion neurons that fire in synchrony with each other. When visual inputs come to my eyes, photons hit my retina and send a signal that goes up the optic nerve to the back of my brain. This sends neural firings propagating throughout my brain, and eventually I might produce an action. From the outside I look like a complicated mechanism – a robot. The hard problem, by contrast, is the problem of how it is that all these processes give rise to subjective experience. And what’s distinctive about this is that it doesn’t seem to be a problem about objective mechanisms – about, for example, behaviors that the brain produces. We could tell a complete story about the objective mechanisms in the brain, the neurons that fire, the behaviors they produce and so on, explaining all these functions: wakefulness, responsiveness, discrimination, monitoring.
And there might still be a further question – why is all that associated with subjective experience?
Consciousness is actually present at a fundamental level inside the brain and inside all physical processes. This is the traditional philosophical view known as panpsychism: maybe all of physics involves some element of consciousness at the basic level. Somehow all of this composes to yield my consciousness. It’s a beautiful, unified, attractive view of the world where consciousness and the physical world might be integrated all the way down.

………………

“Right now you have a movie playing inside your head,” says philosopher David Chalmers. It’s an amazing movie, with 3D, smell, taste, touch, a sense of body, pain, hunger, emotions, memories, and a constant voice-over narrative. “At the heart of this movie is you, experiencing this, directly. This movie is your stream of consciousness, experience of the mind and the world.” This is one of the fundamental aspects of existence, Chalmers says: “There’s nothing we know about more directly…. but at the same time it’s the most mysterious phenomenon in the universe.” What is the difference between us and robots? Nobody knows the answers. Chalmers believes the questions answered so far — mainly, about what parts of the brain do which bits of processing — are the “easy” (in comparison) problems. The hard problem is why all that processing should be accompanied by this movie at all. So Chalmers’ own first crazy idea: consciousness is fundamental. “Physicists sometimes take parts of the universe as fundamental building blocks — space or time, or mass.” These are taken as primitive and the rest is built up from there. Sometimes the list of fundamentals expands, such as when James Clerk Maxwell realized that electromagnetism couldn’t be explained from other known laws of physics, and so he postulated electric charge as a new fundamental idea. Chalmers thinks that’s where we are with consciousness.
Importantly, “This doesn’t mean you suddenly can’t do science with it. This opens up the way to do science with it.” He thinks we need to connect this fundamental with the other fundamentals. Chalmers’ second crazy idea: every system might be conscious at some level. Consciousness might be universal, an idea called panpsychism. The idea is not that photons are intelligent or thinking, or wracked with angst. Rather, it’s that “Photons have some element of raw subjective feeling, a precursor to consciousness.” Pause. “This might seem crazy to us,” he says, “but not to people from other cultures.” But also, he goes on, a simple way to link consciousness to fundamental laws is to link it to information processing. It’s possible that wherever information is being processed, there is some consciousness. Chalmers put that idea forward about twenty years ago, but at the time it wasn’t well developed. Now a neuroscientist, Giulio Tononi, has created a measure, phi, that counts the amount of information integration. In a human, there is a lot of information integration. In a mouse, still quite a lot. As you go down to worms, microbes and photons it falls off rapidly, but never goes to zero. “I don’t know if this is right, but right now it’s the leading theory.” If true, this theory has many implications. “I used to think I shouldn’t eat anything that’s conscious. If you’re a panpsychist, you’ll be pretty hungry.” It’s also natural to ask about other systems, like computers. If consciousness is integrated information, and computers do integrate information, that raises ethical issues about developing intelligent computer systems, and turning them off.

The hard problem of consciousness (for physicalism or materialism) is the problem of explaining in physical terms the simple fact that there is first-person subjective awareness of anything at all instead of just dead matter blindly interacting.
The hard problem presupposes a distinction between consciousness in this first-person sense and consciousness defined as an objective cognitive function or neural process. First-person, subjective consciousness is the fact of phenomenal experience, as distinct from the ability to report about inner mental states.
The hard problem of consciousness is the problem of explaining or understanding how this first-person subjective experience can possibly arise within materialism or physicalism. One can imagine, for example, a physical universe identical to ours in every way except that the humans are zombies, i.e., have no subjective experience. There is thus no physical reason or explanation why phenomenal consciousness should exist as opposed to not exist. Physicalism or materialism, as philosophical positions, thus fail to account for this fact of phenomenal consciousness. As it is defined, however, phenomenal consciousness does not appear as an objective fact at all. For example, it would be impossible for there to be a physical "consciousness meter" to detect the existence of phenomenal consciousness. It would thus seem unreasonable to expect a physical explanation of it, as it is in effect defined to be non-physical. Consequently, some physicalists have simply denied its existence altogether and declared the hard problem a pseudo-problem. Others view this as unsatisfactory, since nothing exists with more certainty than their own conscious experience. For more discussion, see Tom McFarlane's answer to “Why do some neuroscientists have a hard time seeing the hard problem of consciousness?” David Chalmers has also written an accessible article on this topic.

The hard problem of consciousness is that there is nothing in the known physical world or physical laws that can explain its basic quality called experience. If you analyse the physical elements that are assumed to create an experience (like neurons, neurotransmitters, electrical impulses in the brain), you will eventually find that they are only matter and physical force. Those do not contain any experience.
Chalmers is not writing so much about 'the ability to recall or imagine' a red image or the smell of mothballs. He is talking about explaining why a red image or the smell of mothballs (either directly experienced, or I suppose you could also be recalling or imagining or dreaming them) should accompany their associated physical/neurophysiological processes, because given our current scientific understanding these processes do not seem to entail any phenomenal conscious experience – it is conceivable that they
could occur without it. It is not likely that Chalmers is experiencing something you don't, unless you are unable to experience red images, the smell of mothballs, etc. If you respond to stimuli such as red images or the smell of mothballs without experiencing them in a qualitative, phenomenal sense, then you would be a philosophical zombie. By definition, there is really no way to tell whether someone else is a philosophical zombie, i.e. does not experience phenomenal consciousness. This could simply be due to our current ignorance of how consciousness emerges from physical processes, or it could be that we need to consider panpsychism and posit that consciousness is a fundamental property of matter, or some other option (e.g. that phenomenal consciousness is an illusion). But the "Hard" Problems of Consciousness all require the ability to imagine the possible interior life of another person, aka possessing a Theory of Mind.

Introduction

"I know I'm conscious, but I can't be sure others are." "What's the probability that insects are conscious?" "How is it that neural firing gives rise to the qualia I feel?" These are probably the most common ways to think about consciousness among science-minded people. However, some thinkers, like Daniel Dennett and Marvin Minsky, contest these statements as embodying a residual dualism: such ideas reify consciousness as more than the functional operations that brains perform.

What Hard Problem? Our philosophical science correspondent Massimo Pigliucci asks. The philosophical study of consciousness is chock full of thought experiments: John Searle’s Chinese Room, David Chalmers’ Philosophical Zombies, Frank Jackson’s Mary’s Room, and Thomas Nagel’s ‘What is it like to be a bat?’ among others. Many of these experiments and the endless discussions that follow them are
predicated on what Chalmers famously referred to as the ‘hard’ problem of consciousness: for him, it is ‘easy’ to figure out how the brain is capable of perception, information integration, attention, reporting on mental states, etc., even though this is far from being accomplished at the moment. What is ‘hard’, claims the man of the p-zombies, is to account for phenomenal experience, or what philosophers usually call ‘qualia’: the ‘what is it like’, first-person quality of consciousness. I think that the idea of a hard problem of consciousness arises from a category mistake. I think that in fact there is no real distinction between hard and easy problems of consciousness, and the illusion that there is one is caused by the pseudoprofundity that often accompanies category mistakes. A category mistake occurs when you try to apply a conceptual category to a given problem or object when in fact that conceptual category simply does not belong to the problem or object at hand. For instance, if I were to ask you about the color of triangles, you could be caught off guard and imagine that I have some brilliant, perhaps mystical, insight into the nature of triangles that somehow makes the category ‘color’ relevant to their description as geometrical figures. But of course this would be a mistake (on my part as well as on yours): triangles are characterized by angles, dimensions, and the ratios among their sides, but definitely not by colors. The same, I am convinced, goes for Chalmers’ hard problem (or Nagel’s question, and so on). The hard problem is often formulated as the problem of accounting for how and why we have phenomenal experience. Chalmers and Nagel think that even when all the scientific facts are in (which will take a lot more time, by the way) we will still be missing something fundamental. This led Chalmers to endorse a form of dualism, and Nagel to reject the current scientific understanding (which amounts to pretty much the same thing, really).
Let’s unpack this. First, why phenomenal consciousness exists is a typical question for evolutionary biology. Consciousness is a biological phenomenon, like blood circulation, so its appearance in a certain lineage of hominids seems to be squarely a matter for evolutionary biologists to consider (they also have a very nice story to tell about the evolution of the heart). Not that I expect an answer any time soon, and possibly ever. Historical questions about behavioral traits are notoriously difficult to tackle, particularly when there are so few (any?) other species to adequately compare ourselves with, and when there isn’t much that the fossil record can tell us about it, either. Second, how phenomenal consciousness is possible is a question for cognitive science, neurobiology and the like. If you were asking how the heart works, you’d be turning to anatomy and molecular biology, and I see no reason things should be different in the case of consciousness. But once you have answered the how and the why of consciousness, what else is there to say? “Ah!” exclaim Chalmers, Nagel and others, “You still have not told us what it is like to be a bat (or a human being, or a zombie), so there!” But what it is like is an experience – which means that it makes no sense to ask how and why it is possible in any other senses but the ones just discussed. Of course an explanation isn’t the same as an experience, but that’s because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you. To ask for that explanation to also somehow encompass the experience itself is both incoherent and an illegitimate use of the word ‘explanation’. At this point the gentle reader may smell echoes of Daniel Dennett’s or Patricia Churchland’s ‘deflationary’ or ‘eliminativist’ responses to Chalmers & co.
That, however, would be a mistake. Unlike Dennett, I don’t think for a moment that consciousness is an ‘illusion’; and unlike Churchland I reject the idea that we can (or that it would be useful to) do away with concepts such as
consciousness, pain, and the like, replacing them with descriptions of neurobiological processes. On this I’m squarely with Searle when he said that “where consciousness is concerned, the existence of the appearance is the reality” (chew on that for a bit, if you don’t mind). Consciousness as we have been discussing it is a biological process, explained by neurobiological and other cognitive mechanisms, and whose raison d’être can in principle be accounted for on evolutionary grounds. To be sure, it is still largely mysterious, but (contra Dennett and Churchland) it is no mere illusion (it’s too metabolically expensive, and it clearly does a lot of important cognitive work), and (contra Chalmers, Nagel, etc.) it does not represent a problem of principle for scientific naturalism.