The Human Condition


Home This Human Condition Wiki is an open collaboration. We are working toward a new explanation of the human condition, one that comes out of the convergence of science with other fuzzier strands of thought, a view that honors the subjective and spiritual aspect of human experience. Many other sites touch on the Human Condition, but we are unique in our goal of developing a consensus limited by certain premises. Our starting point is Puzzling Evidence from Evolutionary Psychology and other psychology areas (social, positive, personality and cognitive), as well as neuroscience, anthropology, Economics and artificial intelligence. Science and engineering offer many insights relevant to the Human Condition, but little of this knowledge has made it into more general awareness, and when it has it often degenerates to cartoon simplicity. Our method is reinterpretation: not a scientific theory, but a web of meaning encompassing many theories. Our ultimate goal is a compelling Human Story, but what we have now is a body of Analysis that refers to scientific results and other interpretive efforts, especially books. You will see that our synthesis demands nuanced straddles between opposing philosophical positions such as Determinism vs. Free Will and Nature Versus Nurture, and also between traditionally hostile disciplines such as Evolutionary Psychology and Sociology. Some of the “why” answers that we uncover touch on spiritual issues such as morality, our place in the universe and the meaning of life. We see truth in many traditional teachings about human nature, but we are also exploring how our beliefs need to change as a response to modern culture, worldview and economics. We believe that our intuitive and cultural understandings of the human condition are greatly distorted. It is humbling to appreciate this, but we strongly believe that a humble human condition need not be meaningless or futile. Developing this realistic self-understanding is worthwhile.
As individuals, we can act more effectively, with less frustration and disappointment. Together, we can intelligently strive for progress.

Biases We use bias without any negative connotation. We've chosen this term mainly because many of the human behaviors we discuss under this topic are technically known as biases, and also because the normal meaning of “bias” refers to our noticing these sorts of behavior in someone else. When a behavioral economist says that people in general have some specified bias, he is saying that people tend to behave in a way that is wrong according to the theory of the field. Normally, when we say that someone is biased, we mean that they tend to act in a particular way (when all right-thinking people know better.) In other words, a bias is an unacceptable truth about how people actually act. We believe that both kinds of biases are a natural consequence of The Way Things Work. Physics, brain structure and evolution have all conspired to design humans so that we frustrate each other's desires and expectations.


Rules to Live By Understanding demands that we reconceptualize biases as rules to live by. What is a rule? A rule is a structured basis for behavior with two parts: the action (what to do) and the conditions (when to do it): If rain is predicted, take an umbrella. Some rules are constraints on behavior: Never park there on Sunday. Constraints have the form of an ordinary rule: what to do (park there) and when (not on Sunday), but they modify behavior by constraining other rules rather than generating behavior. These rules are precise, with unambiguous conditions and actions. It is easy to get carried away with the beauty of rule-based behavior and to argue that intelligent behavior is (or should be) based on clear rules. Our concern here is with broad vague rules like: If everyone else is doing X, do X. A philosophical digression: we are not interested in discussing whether human behavior is really based on rules, only in pointing out that people act as though it is. These rules are a sort of Story describing common behavior patterns.
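The two-part structure of rules and constraints can be made concrete in a few lines of code. This is purely our own illustrative sketch, not a theory of cognition; the situations, rules and constraints are invented for the example:

```python
# A minimal sketch of rules as (condition, action) pairs, plus constraints
# that veto actions rather than generating behavior of their own.

def rain_predicted(situation):
    return situation.get("forecast") == "rain"

rules = [
    # (condition, action): when to act, and what to do
    (rain_predicted, "take an umbrella"),
    (lambda s: s.get("errand") == "dinner out", "pick a restaurant"),
    (lambda s: s.get("errand") == "dinner out", "park there"),
]

constraints = [
    # "Never park there on Sunday": same condition/action form,
    # but it removes an action instead of producing one.
    (lambda s: s.get("day") == "Sunday", "park there"),
]

def decide(situation):
    """Fire every rule whose condition holds, then drop vetoed actions."""
    actions = [act for cond, act in rules if cond(situation)]
    vetoed = {act for cond, act in constraints if cond(situation)}
    return [a for a in actions if a not in vetoed]
```

With `{"errand": "dinner out"}` both dinner actions fire; adding `"day": "Sunday"` lets the constraint veto parking while the other rule still fires.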

Like for example? There is a huge literature on cognitive bias, including a list of over a hundred biases that are substantiated by research. This is the motherlode of puzzling evidence! Unfortunately when interpretation is given, it is often with the misguided intention of explaining why people behave in this obviously wrong “biased” way, rather than understanding that these rules normally serve us well. If humans are seen to behave in a certain way over and over again, in many different situations, then instead of wailing and wringing our hands, we should consider that there may be a good reason why people do that.

Explanations for Bias After you've drunk a few gallons of our special kool-aid the question “Why do we have biases?” will sound as meaningless as “Why is blue?”, but for the moment we'll humor you by offering some explanations on behalf of those misguided humans.

These Rules are Best In a world that is pathologically unpredictable, where reliable information with clear implications is a thing of rare beauty, we can do no better. Much of the research on cognitive biases, especially in behavioral economics, involves artificial situations that have been contrived to be clear. This is so rare in everyday life that the frequent failure of our rules goes unnoticed, and we attribute the occasional lucky success to our own cleverness.


We quote from Gut Feelings: With this book, I invite you on a journey into a largely unknown land of rationality, populated by people just like us, who are partially ignorant, whose time is limited and whose future is uncertain. This land is not one many scholars write about. They prefer to describe a land where the sun of enlightenment shines down in beams of logic and probability, whereas the land we are visiting is shrouded in a mist of dim uncertainty. In my story, what seem to be “limitations” of the mind can actually be its strengths. We highly recommend this book, which goes on to convincingly argue that numerous cognitive biases are normally highly effective rules for living.

Cognitive Limitations Even when reliable information is available, and (for some odd reason) the consequences of our actions are predictable, it may be that we just aren't smart enough to make that decision optimally. Perhaps it is just too much work to make all of our decisions in a “rational” way. We have to make many, many decisions, and we can't spend much time deciding whether to go out for Chinese or Mexican. Instead we use fast and frugal rules (otherwise known as cognitive biases.)
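One well-known fast and frugal rule from this literature is “take the best”: compare two options on cues in order of validity and decide on the first cue that discriminates, ignoring everything else. A minimal sketch; the restaurant cues and values are invented for illustration:

```python
# "Take the best" heuristic: decide on the first cue that discriminates,
# ignoring all remaining cues. Cue names/values invented for illustration.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue,
    or None if no cue discriminates."""
    for cue in cues:                 # cues ordered from most to least valid
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:
            return option_a if a > b else option_b
    return None

# Deciding between restaurants on binary cues, best cue first:
cues = ["recommended_by_friend", "busy_tonight", "close_to_home"]
chinese = {"name": "Chinese", "recommended_by_friend": 1, "close_to_home": 0}
mexican = {"name": "Mexican", "recommended_by_friend": 1, "busy_tonight": 1}

choice = take_the_best(chinese, mexican, cues)
# The first cue ties, so the second cue settles it; the third is never consulted.
```

The rule is cheap precisely because it refuses to integrate all the evidence, which is the sense in which a “bias” can be an efficient strategy.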

Evolutionary Contingency It just so happens that our brains were put together to work in this way. In particular, we come from a long line of ancestors who, while undeniably successful in life, were not so bright. Mice and cockroaches can make decisions just fine without even knowing what “rational” means. Consciousness is a bag on the side of a brain plan that was established a hundred million years ago.

Intentional Opacity It may be that we are behaving in this way for a good reason, but we don't know the reason because it is instinctive human behavior that can only be understood from an evolutionary perspective (see Intentional Design.) It could be simply that we don't have a “need to know”, but sometimes it seems important that we not know (see Intentional Opacity and Positive Illusions.)

Our Favorite Biases In this section of the wiki we discuss in detail a number of biases for which we have particularly interesting interpretations.

Above Average Effect


The above average effect is the prevalent positive illusion that one is above average in most ways. You, dear reader, are surely above average in most ways, but it is mathematically impossible for a majority of people to be above the median (and, for roughly symmetric traits, above the average). See Illusory superiority.

Confirmation Bias Rule: When recalling information relevant to a subject, don't bother recalling information that doesn't support your viewpoint. When interpreting information (making Story), don't think of interpretations that don't support your viewpoint. Confirmation bias is a tendency of people to favor information that confirms their beliefs or hypotheses. As a result, people gather evidence and remember information selectively, and interpret it in a biased way. See Confirmation Bias. What is noteworthy about confirmation bias is that it supports The Argumentative Theory. Although confirmation bias is almost universally deplored as a regrettable failing of reason in others, the argumentative theory of reason explains that this bias is Adaptive Behavior because it aids in forming persuasive arguments by preventing us from being distracted by useless evidence and unhelpful stories. Interestingly, Charles Darwin made a practice of recording evidence against his theory in a special notebook, because he found that this contradictory evidence was particularly difficult to remember.

Conformity Bias Rule: When deciding what to do, look around and see what others most commonly do in this situation and imitate them. Aphorisms: When in Rome, do as the Romans do. The nail that sticks up will be hammered down. ⇒ These aphorisms don't capture the primary idea, which is that you gain valuable information about “what works” by copying those around you. Instead, they emphasize the social cost of nonconformity. Conformity bias is a particular interpretation of social Conformity that comes from the Boyd and Richerson theory of Cultural Evolution. Almost everyone who has considered the issue of social conformity acknowledges that people conform strongly to social behavioral norms. Interestingly, in most academic disciplines, the primary emphasis has been on the harmful effects of conformity, both as a constraint on individual freedom, and also as a pathology of decision making, where Groupthink or Herd Mentality leads to decisions that (in hindsight) were “obviously wrong.” However, recently the idea that social decision making can give superior outcomes has been getting increasing attention, such as in Wisdom of the Crowd and Gut Feelings. From an evolutionary perspective, we would expect that such a pervasive decision bias must usually be strongly adaptive, especially given that it is also noted to have fairly frequent harmful effects. Cultural Evolution researchers have shown that in computer simulations of cultural evolution, conformity bias is necessary for cumulative cultural evolution to take place. Conformity bias is the cultural analog of DNA Repair, see Evolutionary Conservation. An interesting aspect of conformity bias is how surprising and nonintuitive we find it when we see an illustration of the power of social Conformity, such as in the Asch Conformity Experiments, where the subjects frequently conformed with the majority view, even when it was obviously wrong. Solomon Asch thought that the majority of people would not conform to something obviously wrong, but only 24% of the participants never conformed: 75% conformed at least once, and 5% conformed every time (37% conformity averaged over subjects across the critical trials). Other experiments involving authority figures have even more surprising results, such as the Stanford Prison Experiment and the Milgram Experiment. Although not directly relevant to conformity bias, which doesn't consider authority, people's responses to these results show that our self-concept is highly inaccurate when we consider our willingness to conform. A common response to seeing people doing foolish or reprehensible things under social pressure is to think “I would never do that”, and yet everyone conforms to social influences to a high degree, and most people will show astonishing levels of conformance in an experimental setting. This failure of self-awareness is an example of Intentional Opacity.
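The claim that conformist transmission stabilizes cultural knowledge can be illustrated with a toy simulation in the spirit of Boyd and Richerson's models. This sketch is our own and its parameters are arbitrary: a trait copied with conformist bias survives transmission error, while unbiased copying decays toward noise.

```python
import random

def step(population, conform_strength, error_rate):
    """One round of cultural transmission: each learner samples three
    models; a conformist learner copies the majority trait, an unbiased
    learner copies a random model; copying errors flip the trait."""
    new = []
    for _ in range(len(population)):
        models = random.sample(population, 3)
        majority = 1 if sum(models) >= 2 else 0
        if random.random() < conform_strength:
            trait = majority                 # conformist: copy the majority
        else:
            trait = random.choice(models)    # unbiased: copy a random model
        if random.random() < error_rate:     # transmission error
            trait = 1 - trait
        new.append(trait)
    return new

def run(conform_strength, generations=200, n=500, error_rate=0.05):
    pop = [1] * n                     # start with an adaptive trait universal
    for _ in range(generations):
        pop = step(pop, conform_strength, error_rate)
    return sum(pop) / n               # fraction still holding the trait
```

With full conformity the majority trait is re-established every generation despite the 5% copying error; with unbiased copying, error steadily erodes the trait toward a 50/50 mix, the cultural analog of mutational decay without DNA Repair.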

Positive Illusions It is fairly well established that (at least in western countries) people have these unrealistic positive self-favoring views:
● That they are unusually capable and virtuous (Illusory superiority),
● That they have more control over events than they do (Illusion of control), and
● That they are optimistic, believing misfortune unlikely and good outcomes likely (Optimism bias).
Furthermore, it seems that these illusions are associated with mental health. See Illusion and Well-Being. It may be that these tendencies are less pronounced or absent in some cultures (see Is there a Universal Need for Positive Self-Regard?.) There is also some reason for methodological concern due to the heavy reliance on asking people to compare themselves to others in this literature (see Biases in Social Comparative Judgments.) Also, it may be possible to be mentally healthy without positive illusions, but this is rare, perhaps because it requires a great deal of effort to achieve this perspective. It is an interesting question why this bias exists. First of all, why isn't this illusion harmful? Shouldn't this bias lead people to make bad decisions that would be avoided by unbiased analysis? If the bias were harmful we would expect it to be selected away, regardless of whether it is a biological behavioral tendency or a cultural construct. Also, even if the harm is less than one might suppose, there must be some practical benefit of this bias that overwhelms the negative effect.

Mostly Harmless? The evidence on the harmfulness of positive illusion is mixed. There is considerable evidence that unwarranted optimism is a major factor in leading to bad decisions: business, medical or political (see Optimism bias), however there seems to be considerable Retrospective bias here: unreasonable optimism has undoubtedly contributed to most successful decisions as well. There is also evidence that positive illusion varies according to the situation in ways that reduce the harm. Most significantly, Effects of Mindset on Positive Illusions argues that when we debate a difficult problem the bias largely disappears. Interestingly, it also seems that positive bias is generally enhanced when we are pursuing a goal already decided on, which suggests that the benefit of positive illusions may be in aiding motivation during the implementation of the decision. Also, there seems to be consistent time variation in optimism, with optimism being high for future events, but decreasing as the moment of truth approaches, then once again increasing as the event passes.

Psychological Benefits The most obvious payoff from positive illusions is that they can directly make us feel good. This payoff provides a motivation to believe things that are unrealistically positive, and this motivation may distort our perceptions. Whether Positive Illusions are “motivated” or not has been a substantial controversy in social psychology. A belief is said to be “motivated” if we believe it because the belief is in some way desirable or psychologically helpful rather than because it is true. In social psychology it seems that the general assumption is that positive illusions are motivated by the desire to maintain a positive self-image and self-esteem, though if the belief is adopted because it is socially approved or socially helpful it would also be “motivated.” A somewhat similar psychological explanation is that positive illusions are necessary for mental health, and the need for emotional regulation outweighs the negative effects. The psychological explanation that optimism is needed to avoid depression is unsatisfactory because it begs the question of why our minds work this way. In fact, a much more convincing evolutionary explanation relating optimism and depression reverses the direction of causation, saying that the purpose of depression is to reduce optimism when things aren't going well (and a change in direction may be necessary.) It is plausible that realism is particularly valuable when change is needed, but this doesn't explain why optimism is the correct default for normal situations. You could argue that for unknown functional reasons positive illusions are just as necessary to mental functioning as positive blood pressure is to physiological functioning—it is a contingent fact of the way the brain works rather than an adaptation in the evolutionary sense. 
However, we feel that it is fruitful to pursue the Evolutionary Psychology approach of suspecting that mental phenomena have some practical benefit in terms of survival and reproduction.

Social Benefits


Perhaps we have positive illusions because these beliefs give social benefits. The most common social explanation (favored by sociologists and evolutionary psychologists) is that positive illusions are socially self-serving because they aid in Impression management and persuasion. It is easier to argue that we are superior or deserving if we believe it ourselves. The belief that we are more virtuous than others is particularly suspect in this regard, however believing that you are virtuous might help you in behaving virtuously. Is there a Universal Need for Positive Self-Regard? offers an intriguing hybrid explanation: positive illusions are prevalent in western culture because they are indirectly reinforced by western cultural values emphasizing the importance of independence, confidence and personal happiness. These illusions help westerners to function in their culture because they support culturally desired characteristics.

Behavioral Benefits Perhaps positive illusions lead to more beneficial behavior than harmful behavior. Optimists persist longer in problem-solving than pessimists. For example, optimists persist twice as long in trying to solve an insoluble puzzle. This is intuitive, but why is generalized persistence good? After all, in the puzzle experiment, the optimists were only wasting their time (because the puzzle was insoluble.) Persistence is good because Prediction is Intractable; in the real world (as in the experiment) you have no idea whether a significant undertaking is possible or not. You've got to make your best guess about whether this is a good course, then stick with it until either you succeed or repeated failure leads you to conclude that it is difficult, perhaps impossible. Positive illusions are clearly related to cognitive biases concerning the future and planning such as Sunk Costs and the incorrigible human failure to learn from the failure of past predictions (see Prediction is Intractable).

Prestige Bias Rule: Identify other people around you who seem to be doing particularly well in life. Study their behavior carefully and imitate them. Aphorisms: Imitation is the sincerest form of flattery. Prestige bias is a particular interpretation of Social Comparison that comes from the Boyd and Richerson theory of Cultural Evolution. While Conformity Bias is the basic mechanism that protects the integrity of cultural knowledge, prestige bias is crucial for permitting new best practices to take hold. As the practice is adopted more widely, conformity bias takes over, and the decision doesn't even need to be considered. This is an evolutionary theory of why we are fascinated by the behaviors of high status people and sometimes adopt these behaviors (think “lifestyles of the rich and famous”). Together with Conformity Bias, these are innate inclinations toward behaviors that promote cultural evolution, genetic adaptations to cultural evolution, a consequence of Genetic-Cultural Coevolution.


Prestige is not necessarily exactly the same as social status, and may often differ from political power. What sort of thing is prestigious is determined by your culture, social class, and group membership, but that doesn't mean that it is completely arbitrary. If a culture assigns prestige to activities that harm the ability of that society to compete with other polities (such as by increasing internal conflict), then that culture would lose out. Similarly, a subculture can self-destruct if it adopts values that cause it to lose Mind Share. It is unsurprising to find cultures that assign prestige to economic productivity or effective use of political power, because those are things that aid in competition between cultures. But it may be that prestige has an innate bias toward wealth and power. All animals have a sense of quality and amount of food (see Value), and social mammals usually have some sort of dominance ranking (see Hierarchy). Humans are outliers, in that until 5000-10000 years ago, we mostly lived in egalitarian tribal groups (see Human Origins and Original Sin). It seems there is also a drive toward displaying status. That is, people make costly displays of their productivity, perhaps even reducing their personal genetic fitness. If you grow a 2000 pound yam and serve it up at a big party, then clearly this is a costly status display. It isn't even efficient; surely it is easier to grow 2000 pounds of smaller yams (they probably taste better too). That's the whole point. This example is from the book Not by Genes Alone, where the authors say that they think of 2000 pound yams whenever they see someone driving a Hummer. In our view, status display can be an altruistic behavior. Wanting to let everyone know how productive you are (status displays and bragging) drives people to broadcast culturally valuable information which genes-alone evolutionary psychology might want to keep in the family.

Sunk Costs Rule: Before abandoning some effort already underway, consider how many resources (time, money, political capital) you have already invested. Aphorisms: Don't change horses in the middle of the stream. There ain't no such thing as a free lunch. See Sunk costs for the economic critique of this rule. Your decision to continue a course of action should only depend on the current Value of doing so, and not on how much you have invested. Yet in a world that is pathologically unpredictable, nobody knows what the future value will be (least of all economists.) Valuable results usually come only from effort. Though our efforts may be wasted, given that we cannot know the true value, we use the amount we have already invested as a proxy for the value of continuing.

Design Space Design space is a concept from engineering design: the collection of all possible designs for a thing, considered as a high-dimensional space parametrized by design decisions. This concept is intimately related to design Optimization, since a space of possible designs must be identified before an optimal design can be selected. The use of a spatial metaphor allows evolution to be visualized as a trajectory in design space. See Evolution as Algorithm and Darwin's Dangerous Idea. Design space also implies the idea of a Tradeoff because a design with one or more desirable properties may be feasible (in the design space), but any single design cannot be optimized for all qualities simultaneously.
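A design space can be enumerated directly as the Cartesian product of design decisions, with constraints carving out the feasible region. The bridge parameters and the constraint below are invented purely for illustration:

```python
from itertools import product

# Toy design space for a bridge: each point in the space is one
# combination of design decisions (all parameters invented).
materials = ["steel", "concrete", "wood"]
spans = [1, 2, 3]           # number of spans
deck_widths = [10, 20]      # meters

design_space = [
    {"material": m, "spans": s, "width": w}
    for m, s, w in product(materials, spans, deck_widths)
]

def feasible(d):
    # An invented constraint: a single-span wooden bridge can't carry
    # a 20 m deck, so that corner of the space is infeasible.
    return not (d["material"] == "wood" and d["spans"] == 1 and d["width"] == 20)

candidates = [d for d in design_space if feasible(d)]
# 3 * 3 * 2 = 18 points in the space; one is infeasible, leaving 17.
```

Even this tiny example shows why the space metaphor matters: optimization is a search over `candidates`, and the search can only find what the parametrization allows to exist.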

Evolution as Algorithm The idea of evolution by natural selection was originally developed by Charles Darwin to explain the change in form and function of plants and animals across generations, but evolution is a more general concept, something that happens whenever the conditions are right:
● Agents can reproduce, with offspring fairly accurately inheriting traits from their parent(s).
● Variation is introduced somehow, either by mutation or by inheritance of a mixture of parent traits.
● Environmental limits prevent all of the offspring from successfully reproducing.
● An agent's inherited form and function affects its ability to survive and reproduce.
Before Darwin, evolution only meant some sort of change over time, often a trend with an implied direction or even a purpose (see Orthogenesis.) Evolution is an iterative interaction between organism and environment. The result of evolution is Adaptation – that organisms become better fitted to their place in the environment (their niche). Because the evolutionary system evaluates organisms by their ability to survive and reproduce, this increase in fitness is nothing other than a relative increase in successful reproduction. In this abstract view, evolution is an Optimization algorithm, where the evaluation function is fitness. In mathematical global optimization we are only interested in peaks higher than the one we are currently on. In the evolutionary landscape the relative height of peaks is less clearly defined because escaping into a new part of design space is often associated with exploiting a new evolutionary niche. In one view, you can say that a new niche has a very high fitness value for the pioneer organisms because they have no competition. The problem with this view is that they reproduce exponentially, rapidly filling the niche and restoring individual reproductive success to more normal levels.
Fitness is then seen as declining (due to population pressure) even as the organism continues to optimize its design to better exploit the niche. Of course, fitness does depend on the environment, but so far as understanding the evolutionary virtue of radiating into new niches, it may make more sense to say that the relative fitness of organisms that do not compete is ill-defined. So the power of the evolutionary algorithm is defined by its ability to exploit new niches, without any emphasis on finding the highest peak in all Design Space.
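The conditions listed above are enough to run evolution in code. This minimal sketch uses a toy fitness function (count of 1-bits) and arbitrary parameters, but it exhibits all four ingredients: inheritance, variation, environmental limits, and fitness-dependent reproduction:

```python
import random

def evolve(generations=100, pop_size=50, genome_len=20, mutation_rate=0.02):
    """Minimal evolutionary loop: reproduction with inheritance, mutation
    for variation, a fixed population size as the environmental limit,
    and selection weighted by a toy fitness function."""
    def fitness(genome):
        return sum(genome)              # toy environment: more 1-bits is fitter

    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter agents are more likely to reproduce.
        weights = [fitness(g) + 1 for g in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        # Inheritance with variation: copy the parent, occasionally flip bits.
        pop = [[bit ^ (random.random() < mutation_rate) for bit in g]
               for g in parents]
    return max(fitness(g) for g in pop)
```

Starting from random genomes (mean fitness around 10 of 20), repeated rounds of biased reproduction push the population toward the peak, settling at a mutation-selection balance rather than perfection.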


Levels It's a characteristic of human thought that we divide the world into categories, and often into sequences or hierarchies where one level builds upon another, or is in some sense more abstract. Though these are human social constructions, these conceptual organizations do often derive from underlying physical Reality (see Modularity), so behaving in this way is adaptive. What are important kinds of levels?

Emergent levels Emergence is when complex unpredictable behavior arises from the interaction of simpler entities. Consider these levels of existence: subatomic particles → atoms → molecules → living cells → organisms → social groups. Every later (higher level) grouping is strictly dependent on the existence of all the lower levels, and for any change visible at a high level there must be some change in the arrangement or state of the components at each lower level. See Supervenience.

Analytic levels In our effort to understand the world, humans have developed methods and bodies of thought related to understanding particular emergent levels. Paralleling the above levels of existence, parts of physics concern themselves with the behavior of subatomic particles and atoms, while chemistry concerns itself with general properties governing the construction of molecules, cell biology studies the internal operations of cells, organisms in general are studied by biomechanics and evolutionary biology, with sub-specialties for particular species and organs (psychology, neuroscience), and behavior of social groups is studied in social psychology, sociology, Economics and ethology.

Abstraction levels Abstraction and classification are powerful tools that we use to make sense of the world. The above analytic levels are an important special case of abstraction levels that we deliberately attach to the underlying physical reality of the corresponding emergent levels. Our Level Map of human reality shows some ways in which analytic levels may combine with more arbitrary classifications. This diagram does emphasize the emergence of mind from the physical world, but the layering also represents other kinds of relationships:
● part-of: The body is part of the world, and the brain and other organs are parts of the body.
● representation: The lowest levels, perception and Body Model, represent the state of the world and of the body. Higher levels of unconscious processing (Emotion, etc.) represent the meaning of the situation relevant to the person's interest, while the highest level, storytelling, is a representation of the state of the unconscious mind (see The Interpreter Theory).

Multi-level systems Pulling back to a higher conceptual level, we may go beyond developing abstract categories at a certain analytic level. We can instead propose new hierarchical organizations, with the understanding that multiple organizations and interpretations are useful, fluently switching between them as appropriate. This is precisely what we are doing here with this categorization of levels. When we adopt this sort of multi-level conceptual argument, some categories may be entirely unfamiliar, and take some getting used to before their value is appreciated. In Dennett's analysis of the intentional stance, the physical stance is well understood, but the design and intentional stances are more distinctive. See Intentional Design.

Modularity Modularity refers to the tendency of complex systems (either natural or artificial) to be composed of subunits that are tightly coupled internally but only loosely coupled to one another. This decoupling of subsystems has such profound advantages that it has been discovered both by Evolution and by pragmatic social practices such as government and engineering design. See Modularity. The division of the body into organs and the division of the mind into functional subsystems is not purely for intellectual convenience. When you start pulling a body apart you find tissue planes and abrupt changes in cell type. When you watch a body take shape during embryogenesis, you find that different organs spring from different developmental pathways. Similarly, neuroscience has shown that certain regions and networks in the brain are associated with particular functions: the amygdala with fear, the hippocampus with memory, and so on.

Optimization Everyone is doing the best they can at any given moment, and when they can do differently, they will. Optimization is the search through a space of alternatives for the maxima (or minima) of some evaluation function.
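The distinction between a local maximum and the global one, which matters throughout the discussion of evolution and Design Space, can be made concrete with a few lines of hill climbing. The function here is invented for illustration:

```python
def hill_climb(f, x, step=0.01, max_iters=10_000):
    """Greedy local search: move uphill while a neighbor improves.
    Finds a local maximum, which need not be the global one."""
    for _ in range(max_iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break              # neither neighbor is better: a local peak
    return x

# f has peaks at x = 1 (height 1) and x = 4 (height 2); where we end up
# depends entirely on where we start.
def f(x):
    return max(1 - (x - 1) ** 2, 2 - (x - 4) ** 2)

low_peak = hill_climb(f, 0.0)    # climbs the nearer, lower peak
high_peak = hill_climb(f, 3.0)   # climbs the higher peak
```

A climber starting near the low peak is stuck there; this is the sense in which evolution, like hill climbing, optimizes locally rather than finding the highest peak in all Design Space.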


Representation Representation is a technique that is necessary in any system that processes real-world signals or any other sort of information. In your brain, the firing rate of a particular neuron is your mind's representation of the temperature of a certain tiny spot on your left index finger. In language we make use of sounds or signs to communicate aspects of our thoughts. But a system doesn't have to have anything even vaguely resembling a mind in order to exploit representation. Notably, all known life uses DNA to represent the structure of proteins. Likewise, in your house's digital thermostat, a certain register represents the current temperature (68) as the binary integer 1000100, and these bits are in turn represented by electronic signals. Representation is ever-present, but it often proves difficult to pin down just what we mean by representation. Semiotics is one effort to systematically study representation and related concepts (symbol, sign), but the results often seem obscure. We suffer from the dual problem that representation (in the form of language) is natural and effortless, but some metarepresentation (Story) is also a necessary part of any study of representation. This creates vast potential for Level Confusion. The representations that make up our own minds are also completely inaccessible to conscious awareness (see The User Interface Analogy), inclining us to Naive Realism and Mind/Body Dualism. It is like a fish studying water.
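The thermostat example can be written out directly. This illustrative sketch shows the same fact, a room temperature of 68 degrees, at two representational levels, and the convention that connects them:

```python
# A temperature represented first as a number, then as a bit pattern.
temperature = 68
bits = format(temperature, "08b")   # 68 as eight binary digits
recovered = int(bits, 2)            # the same convention run in reverse

# Nothing about the bit pattern is intrinsically "68 degrees"; the
# thermostat's circuitry supplies the interpretation, just as a reader
# supplies the interpretation of these comments.
```

The round trip from `temperature` to `bits` and back only works because encoder and decoder share a convention, which is the general point about representation: the mapping lives in the system, not in the symbol.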


The field that has had far more impact on our understanding of representation and symbol is Computer Science, especially in the sub-disciplines Artificial Intelligence, Machine Learning and Digital Signal Processing. See also Representational Opacity.

Tradeoff The general idea of a tradeoff is well known, but it becomes very prominent in design, including in the designs that are the result of evolution. When considering the space of all possible designs, it is possible to locate designs that have one or more desirable properties, but trying to maximize too many qualities simultaneously is problematic. If qualities don't conflict, then it is a no-brainer to choose both. In evolution, sexual reproduction is an effective means of combining non-conflicting features. But in practical designs there are always resource constraints that may force tradeoffs even when features don't conflict.
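The idea that not all qualities can be maximized at once is captured by Pareto dominance. A sketch with invented designs and scores: a design is dominated if some other design is at least as good on every quality and different somewhere, and the tradeoff lives on the frontier of undominated designs:

```python
# Designs scored on two qualities (all values invented for illustration).
designs = {
    "A": (9, 2),   # (strength, cheapness)
    "B": (7, 7),
    "C": (2, 9),
    "D": (6, 6),   # dominated by B: worse on both qualities
}

def dominated(d, others):
    """True if some other design is at least as good on every quality."""
    return any(all(o[i] >= d[i] for i in range(len(d))) and o != d
               for o in others)

frontier = {name for name, d in designs.items()
            if not dominated(d, designs.values())}
# On the frontier, improving one quality forces giving up some of the other.
```

Design D is simply a worse choice than B, but among the frontier designs no option is best outright; choosing among them is exactly a tradeoff.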

Economics Economics has much to say about The Human Condition because economics dominates modern life. Important themes:
● How markets work because of human nature: how competitive status-striving and social cooperation interact synergistically to create cultural evolution (both social and technical), which has entirely transformed the human environment (see Genetic-Cultural Coevolution). This differs in several ways from standard economic assumptions, most obviously because economists have chosen a wildly inaccurate model of human behavior, but also because economic behavior needs to be understood as a social process (see Market as Culture). There's a lot more going on than just the market and the individual.
● How economic life is also deceptively unsatisfying, because our instincts entice us into pouring our efforts into paid work, but the expected increase in happiness is not forthcoming. Once we satisfy our basic needs for food, clothing and shelter, we pour most of our money into trying to win the status game, which is a zero-sum game that only a few can win. See Value, happiness and Hedonic treadmill.
● A particular area of interest is the idea of sustainable economics. The idea that we should reduce our consumption and not strive so hard for greater happiness is compatible with the idea that we should reduce our consumption so as to not deplete our resources or degrade our environment.

The basis of the Dollar


What is the U.S. Dollar based on, and what does this have to do with human happiness? In an important sense, the dollar is based on our collective public opinion. But then again, so is the rest of the economy. One way to understand this is to look at how the amount of money in circulation is controlled, to the degree that it is controlled. Why control the money supply? Because the amount of money in circulation can speed up or slow down economic activity – too much money leads to inflation, too little to deflation and depression. How this happens is related to interest rates, but that connection is a separate topic from the money supply itself. One way to understand this is to follow the money, so to speak. A new Federal Reserve Note (a dollar) can be created as a replacement for an old bill, or as the paper version of a dollar that already exists as digits in an account. In these instances, banks buy currency from the government. That doesn't change the overall money supply, but that doesn't mean it's not important. We'll come back to it in a moment. When not a replacement for an old bill, a dollar can be introduced into the banking system by the Federal Reserve when the Fed wants to increase the money supply. The Fed can just create money by, in essence, printing it, but it does that only very rarely. Much more typically the Fed works with money it has held back from the system – money on the Fed's balance sheet but kept out of the economy. And no matter whether the new money has been freshly printed or kept in reserve, the Fed raises (or lowers) the money supply in a way that depends on the cooperation and confidence of the private economy. Let's look at this in more detail. This can be confusing, but the point is to affect the amount of money at work, because that's when it can help speed or slow economic activity.
When the central bank wants to increase the money supply in the banking system (and the economy at large) it will buy Treasury Bonds from private banks, paying in cash or the equivalent. Both the bond and the currency have been issued by the government; with one hand the government has issued this debt, which another part of the government buys up. Both are similarly trustworthy. But this is more than moving coins from one hand to the other, because the private bank now has more money to put to active work. Think about it this way – the bond was not liquid, and neither was the money the Fed had in reserve. But when one is traded for the other, the Fed still has assets that are not working in the economy, while the bank now has money it can lend. And since banks like to lend money (it's how they normally make their profit) the cash gets pushed out into the world to work. A larger money supply leads to more economic activity. If the Fed wishes to tighten the availability of money (and slow the economy), it simply reverses the process. The rate and direction at which this happens controls the amount of money in the supply. Let's look again for a second at where the Fed gets the money to buy the bonds. Typically that money is cash accrued as part of its role as the central bank and banker of last resort – interest on loans, mandatory deposits from large banks, money appropriated by the Treasury Department. Some of this is money taken out of the economy when the Fed wants to reduce the money supply. Sometimes, very rarely, the Fed will simply print money. But the key again is that this is money that has been out of the banking system and the economy. The effect is the same whether it is newly minted or just held in reserve.
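The mechanics described above can be reduced to a toy balance-sheet sketch: the central bank trades idle reserve cash for a bank's bond, the total number of dollars is conserved, but the amount of money able to be lent out grows. This is a deliberately simplified model under invented figures, not real Fed accounting.

```python
# A toy balance-sheet sketch of an open-market purchase. The total number
# of dollars doesn't change, but money at work in the economy does.
# (All figures and field names are illustrative, not real accounting.)

fed = {"idle_cash": 100, "bonds": 0}       # money held out of the economy
bank = {"lendable_cash": 20, "bonds": 50}  # a private bank's holdings

def open_market_purchase(amount):
    """Fed buys `amount` of bonds from the bank, paying with idle cash."""
    fed["idle_cash"] -= amount
    fed["bonds"] += amount
    bank["bonds"] -= amount
    bank["lendable_cash"] += amount

before_active = bank["lendable_cash"]
open_market_purchase(30)

# Total dollars are conserved...
assert fed["idle_cash"] + bank["lendable_cash"] == 100 + 20
# ...but the money available to be lent has grown.
print(bank["lendable_cash"] - before_active)  # 30 more to lend
```

Reversing the trade (selling bonds back to the bank) models the tightening case: the same conservation holds, but lendable cash shrinks.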


It seems like a zero-sum game, as if the total number of dollars is not changed. But keep in mind that the money the Fed has held back has not been having an impact on the economy. One interesting way to think about this is to consider the way debt can duplicate money. In this case the bond and the dollars used to pay for it can exist in the economy at the same time. That's an important point, because a similar thing happens when the government (in this case Congress) increases its bonded indebtedness. That also increases the money supply (more on that below), and that also works through the credit markets. And as we saw, Congress or the Fed can retire some of the debt to reduce the money supply, because the process is easily reversible. It would be possible for Congress to ask the Fed and the Treasury Department to print money to pay wages, contracts, etc. But it doesn't normally do that. Instead it also goes through the credit market. Which brings us to another important point. This strange-seeming idea of the government buying and selling its own debt to control the money in the economy has an important aspect related to feedback. The mechanism includes a private link, something out of government control; by monitoring the willingness of the bond market to trade dollars for treasury notes, the Fed and Congress get regular information about the health of both. When we speak about a fiat currency, based on the full faith and credit of the government, this is a way to test what that means. If investors lack confidence in the money introduced by the government, they would demand more of a premium to accept it, or they could conceivably refuse to take part entirely. In this way the dollar is linked to their impression of the government's ability to pay its bills, which is in turn based on their opinion of its ability to tax, which is in turn based on their opinion of the health of the economy as a whole.
The Fed no longer emphasizes its measurements of the money supply, in part because new forms of credit make the money supply hard to define and control. More on that below as well. Instead, as central banks have done for centuries, the Fed works by making marginal changes in interest rates. The looser money is and the lower borrowing costs are, the more economic growth should be stimulated, with the inverse for tighter money. In other words, in good neo-classical economic fashion, the central bankers are not so much interested in what the total supply and demand numbers are for their product (in this case that product being money); they only have to monitor the change in price (the shift in the interest rate) that comes from adding or subtracting units at the margins. In this sense, control of the money supply can be seen as a form of the marginal analysis described by Alfred Marshall, and in that it resembles most other modern markets. So the Federal Reserve makes money more or less available in order to change interest rates. When news reports say the Fed is setting a target interest rate, what it is actually doing to reach that rate is putting money into or taking money out of the system (via the network of large banks it works with, the institutions the Fed buys and sells bonds with) until the cost of borrowing (to be exact, the fed funds rate) matches the number selected. As referenced above, this way of stimulating or slowing economic growth has a direct parallel in fiscal or Keynesian stimulus. Through that mechanism, Congress will go into debt (or pay off debt) to increase (or decrease) spending. When the economy has slowed, one classic form of fiscal stimulus is to make additional Food Stamp or unemployment compensation payments, the
point being that payments made through those programs can go out quickly and are very likely to be used almost entirely and almost immediately. Cutting (or if the economy is overheating, raising) taxes would have a similar impact, but generally takes longer. The point is it all works the same way. The government going into debt increases the money supply because both the debt and the money Congress has to spend exist at the same time. In some situations this has an advantage over central bank interest rate cuts as a form of economic stimulus, because if an economy is depressed enough, lowering interest rates might not spark any more economic activity, while Congress can decide to spend, or not, at will. If all of this sounds inflationary, as if it should be constantly driving up prices by adding currency worth less and less, remember that the economy is almost always growing. Unless the country is in recession it will require an ever-larger money supply to match the added goods and services being produced. And all of the above mechanisms are at least intended to be used in a non-inflationary manner, with an awareness that too much currency, like too little, can have a negative economic impact. There is yet another way for new money to be introduced. Let's back up and review – fiscal stimulus puts new money into the economy through government borrowing – Congress appropriates new spending and new debt, the Treasury Department in essence adds to the federal checking account, and the checks are written. If there is a need for paper money to cash the checks, dollars are printed and introduced along with the replacement currency mentioned above. As we have seen, the central bank system of buying or selling government debt does something similar. But a far more important way of adding to the money supply is not centrally controlled in that way, although in an important sense it may be unconsciously driven by some of the same kinds of measurements.
The private sector also adds to the money supply, much more dramatically than the government. Simply put, banks and other lenders create money by lending out more than they have. Think about it this way. A bank or credit card company sends you a credit card application with a five thousand dollar limit. You fill out the application, are accepted, and go out and buy five thousand dollars worth of stuff. You now have five thousand dollars more stuff, and five thousand dollars in debt, plus interest. The merchants get their money from your bank and deposit it in their bank. Their bank now has five thousand dollars more than it had before. And if it needs currency, it can go to the government and buy five thousand one-dollar bills. So now there are five thousand new dollars in the economy. Where did they come from? When were they created? They were created by the interaction of you, the merchants and the banks. There is an unspoken confidence that you, and millions of other people just like you, will produce enough (when the bills come due) to pay for the goods and services you now enjoy. In a broader sense you, the merchants, and the banks are all betting that the economy will grow enough to support the debt it is adding on. People working for the banks, credit card companies and the financial industry are charged with the task of evaluating how good a risk that is. But as a whole the system is not designed to pay attention to the size of the money supply it is increasing, only to the marginal utility of adding more credit in one particular instance. It operates by what we might call collective subjective action, much as Adam Smith's invisible hand does in another context. If the central bank raises or lowers interest rates, that influences those
moves. But ultimately the decision to lend or borrow is made by thousands of separate individuals, each answering to a separate set of demands. Just as it is in most large markets. All of which suggests the question - why do interest rates have such an impact on the rate of economic growth?
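The credit-card story above is the textbook fractional-reserve multiplier: each deposit is partly re-lent, the loan is spent and redeposited, and so on, so the same initial dollars support many times their amount in deposits. The sketch below is a hedged illustration of that textbook mechanism; the 10% reserve ratio and dollar figures are assumptions for the example, not a claim about actual banking regulation.

```python
# A sketch of private money creation via the textbook fractional-reserve
# multiplier: deposits are repeatedly re-lent, minus a reserved fraction.
# (The reserve ratio and amounts are invented for illustration.)

def money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Total deposits generated by repeated lending of an initial deposit."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the re-lendable portion
    return total

# With a 10% reserve requirement, $5,000 of new credit supports roughly
# $50,000 of deposits across the system (the geometric limit of 1/ratio).
print(round(money_created(5000, 0.10)))  # 50000
```

The limit 1/ratio is why small changes in how much banks hold back (or in their willingness to lend at a given interest rate) have outsized effects on the effective money supply.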

Broken Dialectics, Or Paradise Lost Marx wasn't known for writing satire, but if he were, he might restate his dialectic today to go something like this… A center-right party comes to power promising to increase prosperity by reducing regulations on industry, including the financial sector. Eventually the financial businesses take advantage of the deregulation to create an economic bubble. Which bursts, bringing a center-left party to power. They reregulate, making themselves targets for criticism from the right. And the cycle starts all over. All joking aside, it could be argued we've seen two turns of the wheel in the last twenty years. The deregulation of the savings and loan industry led to a crisis in the late eighties that helped get Bill Clinton elected, then the banking crisis of 2008 helped elect Barack Obama. Of course, traditionally an economic dialectic was not thought to be dependent on something as ephemeral as the ideological nuances of a particular administration. The great tide of history was thought to be pushed by a much bigger watch-spring: the aspirations of entire classes of people, say, or the creation of transformative technology. But we seem to have entered a post-industrial era which doesn't fit the grand patterns as described. Consider two dialectics, one by Marx and the other by one of his more important conservative critics. For our purposes here we can take a dialectic to mean the arc of economic history driven along a path by opposing forces, or more broadly any grand recurrent pattern of economic development. Of course Marx saw class struggle as the essential push behind his dialectic. The dominant class (the thesis) conflicts with a rising class under it (the antithesis), and their struggle eventually creates a new dominant class (the synthesis, which becomes the new thesis). And so on.
One of the most valuable notions in Marx is the observation that capitalist economies go through regular, periodic crises (see also The Irrational Markets Theory). Marx saw these as part of the rising conflicts between the thesis and antithesis then current, and he said the crises would grow in severity until they resulted in revolution. Which fits, except that our recent downturns seem more like troubling repetitions than reinforcing crescendos building into an apocalypse. Consider this – after a sharp economic downturn sparked by a post-Civil War financial panic, a railroad strike shut down half a dozen cities. The strike was only put down by federal troops after a month of rioting and bloodshed. In spite of sometimes heated rhetoric about radicalism, our current political and economic fights are comparatively tame affairs. One thing that does seem reliable about history is that someone is always predicting its end, and they will always be wrong. Götterdämmerung turned into a debate over marginal tax rates. The grand ideological battle of the 19th and 20th centuries was settled in a compromise.


In fact, until the Great Depression the global economic crises did seem to get worse with each cycle. Since then the crises have been significantly softened by government action – as we shall see, an important sign that we are no longer in Marx's world. So have we achieved the bright, shining Capitalist future? Hardly. That future was to be provided by technological innovation, and the one thing technological innovation hasn't given us is uniform, reliable prosperity. Marx's ideas about the role of technology in the dialectic were interesting. Marxism suggests that technology will determine important social patterns – the need for factory labor causing rural people to move into swelling cities, for example – but that the technology is deployed and directed by the capitalist class. Which has to answer to its own set of inescapable imperatives, being as much locked into the dialectic as the workers. But for the dialectic described by Walt Rostow in The Stages of Economic Growth: A Non-Communist Manifesto, the driving force seems to be technology itself. Rostow says that technological innovation, in the pursuit of material wealth, pushes economies through a set pattern – the eponymous stages of growth. The third stage Rostow describes is what he calls take-off – the point at which mass industrialization becomes a self-reinforcing process. He calls it "the great watershed" in human affairs. The fourth and fifth stages follow within a few decades, culminating in what he calls the age of mass consumption. That is the period during which production simultaneously provides the wealth ordinary wage workers need to become middle-class consumers, along with the goods they desire to consume. Prosperity becomes a self-reinforcing cycle; more workers making more goods earn more and spend more, requiring more goods.
And we’ve seen this happen repeatedly; Rostow himself describes watching factories in post WWII Europe shift from having bicycle stands out front to opening up parking lots for employees’ cars. Rostow says what will come after mass consumption is “impossible to predict,” although he implies a great deal by citing Thomas Mann’s Buddenbrooks, a novel in which one generation makes money, the next rises in society and the third creates art. But recently something unexpected has happened to Rostow’s dialectic – the technology continued to improve and the factories didn’t need the workers. In short, deindustrialization. Wikipedia quotes the Organization for Economic Co-operation and Development (OECD) as saying that U.S. industrial production and manufacturing output rose from the eighties to the present. But “total industrial employment has been roughly constant at around 30 million people since the late 1970s…. (and) increasing labor productivity has led to higher levels of output without increases in the total number of workers [emphasis added]” It goes on to say that since the total number of workers has increased dramatically, there has been “a massive reduction in the percent of the labor force engaged in industry (from over 35% in the late 1960s to under 20% today). Industry (and specifically manufacturing) is thus less prominent in American life and the American economy now than in over a hundred years.” So just as Marx predicted, the management has had to use the technology to remain competitive, even when it means eroding the aggregate purchasing capacity of consumers. For his part Rostow accurately predicted that technology would tend to move to less developed economies. But he never properly dealt with what would happen to the countries maturing into this late stage of mass consumption while the industries migrate in pursuit of lower wages.


One suggestion is that the more developed countries should move up the chain of production and specialize in higher and higher value-added products. That way they use their comparative advantage in education and sophisticated technology to make things that cannot be made in an economy at an earlier stage of growth. And to a certain extent that has happened, although as time goes on it's harder and harder to do, and it doesn't require the same size of workforce. It's not a rising tide that is going to lift all boats. So goods are still plentiful, and cheaper than ever. And what we might call our potential aggregate material productivity – our ability to make stuff – is greater than at any time in history. But we have a distribution problem. This aggregate material wealth is no longer connected to as great a level of mass wage-fueled consumption. In fact, overcapacity in the factories has led to slack employment, which in turn spirals into increased overcapacity, and so on. The mirror image of the mass consumption cycle. Our economy continues to be vulnerable, when it should be robust. As we said, that inversion of mass consumption does fit Marx's predictions. But rather than provoking cataclysm, it's provoking stagnation. He's not any more useful than Rostow at the moment. It's hardly breaking new ground to point out that we seem to be in some kind of dialectical interregnum – all our grand economic plans are crumbling and shabby, empty cities of magnificent design where no one can stand to live. Let's take a moment to glance at a couple of differing critiques of our current situation before looking for a working dialectic. For one thing, saying the contemporary American economy has problems distributing the results of its immense productivity is a drastic simplification, with ideological implications. Defenders of laissez-faire economics have argued that the distribution by the market may not be equitable, but it obeys its own powerful logic.
The keystone of that logic is described by the efficient-market hypothesis. That theory has taken a terrible beating of late, as the market has demonstrated once again that as a system for distributing resources an unregulated market is anything but consistently logical. Or efficient. But setting aside the technical and ideological arguments, the question remains: why is the richest and most productive economy in history still haunted by the threat of recurrent high unemployment, in spite of being justly famous for regularly creating hundreds of thousands of jobs a month? This is obviously a problem that the marketplace hasn't solved, or we wouldn't be talking about it. Some critics have argued that the problem is not a lack of productivity, but the pursuit of lower wages. It appeals to our physiocratic instincts to say the reason this country's middle class is in trouble is that we no longer make as many consumer products here, and that we can't address our situation without considering the changing relationship between what were once quaintly called the core economies and those in the rapidly developing countries. Clearly much of the mass production has moved to new territory, taking with it the vitality and the rapid creation of value it's known for. Marxists have described that as another sign of capitalism's approaching end – deindustrialization as a pathology built into late capitalism. There are some things about that idea that make sense, for example the notion that rising labor power forces management to look overseas just as it forces management to look to new technology. But it would also be too much to try to deal with the breadth of those theories here. Let's just take time for two quick notes. One, the theory of neocolonialism suggests that the system exists to keep the 3rd world from developing, that the nations on the periphery are trapped in
underdevelopment so they remain dependent. But now, one of the big problems for the United States is serious competition from some of those countries. The U.S. is as dependent on China as vice versa. And two, if late capitalism is on the verge of impending doom, it's a doom that's been impending for quite some time now. As we saw, for their part neo-liberals like Rostow argue that the solution is to move into higher value-added products, which has happened to some degree, although that's not the solution for most workers. So the distribution problem remains in spite of all the dialectics. Why are our technology and productivity not bringing a more permanent and well-distributed kind of wealth? Something odd is happening in this economy. It's as if we're not moving up along the path of history so much as we are moving sideways, and our economy has become erratic and fragile while we're doing it. The idea of a dialectic still works, in that we do have repeated patterns of economic development that appear to follow a set course. To return to the Marxist observation about migration to the cities, only ten percent of Americans live in rural areas now, as opposed to 1920 when it was closer to 50%. The same kind of shift happened in Victorian England, and is happening in China today. When the factories mushroom, poor people move from the country to the cities to work in them. But inherent in both dialectics is the notion of progress of a particular kind, the idea that the material condition of the population would improve over time. And it has. Consumer goods are cheap and sophisticated enough that people can get things their parents and grandparents only dreamed of. But social critics have long noticed that material abundance does not make us feel prosperous, and it hasn't made us secure. Make no mistake, this is not an aesthetic critique; the problem is deeper than the often-discussed modern feelings of unease.
In spite of our aggregate material prosperity, massive dislocations are only a downturn away, there are large numbers of people who are homeless and even more who are permanently unemployed, and one regularly cited study by the Institute of Medicine found that as of 2002, eighteen thousand Americans were dying prematurely every year for lack of health insurance. So if our economy is not doing what we want, why no revolution? For one thing, Marx never conceived of capitalists (with overwhelming political power rooted in their wealth) allowing governments to take the kind of action necessary to rescue the economic system. But that's exactly what's happened. Often with considerable struggle, the economically powerful in America have in the past been made to recognize moments that threaten the survival of the economic system, and have supported governments that take appropriate action. The classic example being the New Deal. In the end we have more metaphorical blood spilled on the floor of Congressional committee rooms than real blood on the factory floors. And the missing capitalist paradise? One problem with Rostow is that he allowed for no limitations to capitalism's ability to solve problems through technological innovation. As he put it, the only resource with no limit is the human imagination. He assumed a non-zero-sum game. The problem being that just because the pie is always getting bigger does not mean everyone is getting a bigger piece. Parenthetically we might say he made the opposite mistake from Marx, appropriately enough. Marx assumed the
shortsightedness and greed of capital. Rostow assumed capitalism's wisdom and generosity, or at least that its productivity had a natural tendency to make everyone rich. So both writers made mistakes about the emotional content of their systems. They described the economy as a set of interlinking, impersonal forces. Machines. Which suddenly produce profound human happiness when a threshold is passed. In Marx that threshold is a kind of revolutionary progress, in Rostow it's a level of productivity, but in both cases it's a sudden paradise dependent on a quantity of material prosperity. The problem being that emotional content made the systems move, and material prosperity didn't stop them moving. To see this more clearly, reconsider the forces the two economists described as pushing their dialectics forward. Rostow never states it simply, but in his case the motion of the machine is driven by technological innovation. Marx does describe his in detail – it's driven by the conflict between competing social classes. Both of those are important forces, but they are really secondary. The real watch-spring is much more personal in scale, in that social competition and technological innovation are both driven by human desires. That means the real forces pushing these historical movements are the psychological wants and needs of individuals. Why is this important? Because that push does not disappear because of a new level of productivity. Economists largely assume people's desires to be uniform and consistent, in other words, to be factored into the equation and forgotten. At least until utopia is achieved, when all human desires are fulfilled – in the capitalist context, when the economy is productive enough to make all people feel prosperous. But if we know anything about human psychology, we know that it is not simple or predictable. Prosperity is an elusive and slippery feeling.
It seems more based on immaterial judgments – comparisons we make with our neighbors, competition between ourselves and others – which in an important sense cannot really be addressed by material prosperity at all. Mainstream economists have tended to assume that sufficiency of aggregate material goods is synonymous with a rich nation, or to be more precise, a society with a large stockpile of value. They might recognize individual variations from that pattern, but they make no allowance for mental complexity in their systems. But some have painted a more complex picture. Veblen pointed out that material goods are important to individuals for their symbolic value, something that is deeply subjective. Others have said that what we want to make us feel wealthy can change over time, and some have even gone so far as to say what we value (in the economic sense) follows Maslow's hierarchy of needs. The argument being that once a lower need such as physical safety is satisfied, we pursue higher needs, such as the respect of peers. How can this influence a dialectic? For example, we can say that there is something eternal in ambition. No economic theory can work for long without taking it into account. But it doesn't operate the same way in every case. An 18th century Dutch merchant and a 21st century New York bond trader might both be motivated to impress their wives, but the house the earlier man might have bought to do so might be laughably inadequate in the eyes of his later counterpart. In a sense, material goods can never provide substantial prosperity for everyone, because the point of having them is to have more than someone else. That's hardly a revelation. But what happens when some kinds of material prosperity begin to be taken for granted?


Veblen's theories suggested that under some circumstances the same goods that provided psychological value could lose that ability. For example, two cars in a family was once taken as a sign of status, but no longer, simply because cars have become cheap. And adding a third one doesn't help that much; the same level of psychological value can now only be achieved through a car that is substantially different or technologically superior, and much, much more expensive. It might be easy to make a moral judgment about this pattern, to say conspicuous consumption by buying a new car is wasteful. But that argument is hard to maintain in a rigorous economic sense; we simply have no uniform or objective way of judging what consumption is wasteful and what is not. But the impacts for the economy as a whole are fascinating. Reconsider Rostow's picture of the last few decades of American economic growth. Technology was driving an increase in material wealth. But there were two processes at work – one technological and one psychological. For their part, consumers bought larger and larger amounts of consumer goods, in the hope that material prosperity would translate into psychological prosperity – chasing a goal that eluded them because the goods had become inexpensive. At the same time, even as the factories were being shipped overseas, something that looked like value was still being created in the stock, real estate and derivatives markets. Of course much of that proposed value proved to be false, but it did come into existence to satisfy a psychological need, not a physical one. Of course the shift to psychological value is slow. And most of what we're calling proposed value is fungible, in that it can be translated into material prosperity, at least until it is proven false. Both kinds of value can coexist, and economic transformations are rarely total – people didn't stop farming even after their neighbors moved to the city.
And it’s worth pointing out that we are dealing with trends in the average attitudes of a large number of people. Just because the center of gravity is moving one way does not mean another part of the population is not moving another. But the point is that the crux of activity shifted, and that shift distorted the working of the economy. Excess consumption led to excess debt and a low savings rate. Reliance on rapidly growing financial and real estate markets led to excess speculation. The high levels of these two things, debt and speculation, ultimately proved to be unsustainable and damaging to the economy as a whole. The aggregate psychological needs of individuals can put an economy out of balance. And they helped bring the predictions of dialectic paradise to a crashing end. But the story doesn’t end there, because (as for Rostow’s economic Buddenbrooks family) the satisfactions achieved through debt and speculation also lose their appeal over time. We enter a stage of growth where consumers begin to move away from mass-produced goods, precisely because mass-produced goods are so available. And that promises to be a very different world.

Economy as Network

If we assume that value is subjective, and that consumers accumulate it (or the things that give it) around themselves, and that businesses expand by providing it (or the tools for others to provide it), then we begin to get a picture of the economy as existing, in the most fundamental way, within a network of social relations. Every person can be seen as occupying a node in this network - buying, selling, borrowing, lending, and laboring for wages - and significant long-term changes take place only when large numbers of these nodes are affected. Thus the economy could be said to grow and store value in a permanent way by spreading it across a broad expanse of that network, in other words by distributing it widely through society. That makes the web of interactions, the economy, healthier and more robust, and makes fertile ground for innovation. For example, historian Eric Hobsbawm argued that the industrial revolution might not have happened if there were not a large number of skilled artisans and “mechanics” ready to become entrepreneurs (and turn their fabrication shops into small factories) when the opportunity presented itself. This does not mean the theory rules out some small parts of the network growing rapidly while others stagnate - consider the financial markets. But as we have seen, this kind of volatile growth tends to be insecure, for the same reasons that made it rapidly expansive in the first place. If value is proposed, but then fails to be confirmed, on a small or localized scale within the network, the web of the economy can more easily absorb the contraction (see Value proposed and confirmed). Compare that with what this theory tells us happens when an important market crashes. When a speculative bubble bursts, a large amount of proposed value, money that had existed only on paper, suddenly vanishes. This creates a strain that spreads out into the network of the economy, as each individual and each enterprise attempts to avoid falling into the hole. If there is not enough real value stored in the neighboring parts of the network, the crisis can spread until it threatens the entire system.
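The contrast between a well-cushioned network and a thin one can be sketched with a toy simulation. Everything here is our own illustration, not an established model: the three-node graph, the reserve figures, and the rule that a failed node passes its unabsorbed loss evenly to its not-yet-failed neighbors are all assumptions made for the sake of the sketch.

```python
from collections import deque

def propagate_shock(neighbors, reserves, shock, start):
    """Spread a loss through the network: each node absorbs what its
    reserves allow and passes any remainder evenly to neighbors that
    have not already failed."""
    reserves = dict(reserves)  # don't mutate the caller's data
    queue = deque([(start, float(shock))])
    failed = set()
    while queue:
        node, loss = queue.popleft()
        absorbed = min(reserves[node], loss)
        reserves[node] -= absorbed
        remainder = loss - absorbed
        if remainder > 0:
            failed.add(node)
            others = [n for n in neighbors[node] if n not in failed]
            for n in others:
                queue.append((n, remainder / len(others)))
    return failed

# A small web: the 'market' node holds proposed value but thin reserves.
net = {"market": ["a", "b"], "a": ["market", "b"], "b": ["market", "a"]}
# Neighbors with deep reserves absorb the burst bubble...
deep = propagate_shock(net, {"market": 1, "a": 50, "b": 50}, 30, "market")
# ...while thin reserves let the same shock take down the whole system.
thin = propagate_shock(net, {"market": 1, "a": 5, "b": 5}, 30, "market")
```

With deep reserves only the market node fails; with thin reserves the failure spreads to every node, which is the contagion pattern the paragraph describes.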

The Freelance Economy

Since about 1980, there has been a trend in developed countries toward increased use of independent contractors (freelancers) instead of full-time employees. This coincides with the availability of cheap networked computers. The standard explanation for this trend, that it “reduces overhead costs”, is clearly an oversimplification, since if you look up advice on how to price your services as a consultant or contractor, the first advice is “figure out your overheads and add them in.” The advantages for a company of hiring exactly the labor they want, exactly when they want it, are so obvious that economists have had to go to some effort to explain why the concept of full-time employment exists at all; see Theory of the firm. The basic conclusion is that hiring as needed has overheads and risks too, and that companies have to balance these tradeoffs. Will you hire some guy off the street to run your multi-million-dollar metal-bashing machine? It may not be a highly skilled job, but if he's too stoned to push the stop button when something goes wrong, then those few dollars you saved start to seem like a pretty foolish economy. One part of the answer lies in the fact that many people prefer the security of a regular paycheck, and will accept a lower hourly rate in exchange for avoiding the risks of being self-employed.
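The “figure out your overheads and add them in” advice amounts to some back-of-envelope arithmetic. The figures below are purely illustrative assumptions, not real benefit costs; the point is only that the costs a salary hides get added back into the hourly rate.

```python
def freelance_rate(target_salary, overheads, billable_hours):
    """Hourly rate needed to match a salary once the worker, not the
    employer, carries the overheads -- spread over billable hours only."""
    return (target_salary + sum(overheads.values())) / billable_hours

# Hypothetical annual figures for a US freelancer (illustration only).
overheads = {
    "health_insurance": 8000,
    "retirement_savings": 6000,
    "unpaid_vacation": 5000,
    "equipment_and_office": 3000,
}
# Freelancers rarely bill a full 2000 hours a year; assume 1200.
rate = freelance_rate(60000, overheads, 1200)
print(round(rate, 2))  # 68.33
```

Against a salaried worker's nominal $30/hour, the overheads and the unbillable time more than double the rate a freelancer has to charge to break even.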


Especially in the US, another way that freelancing reduces costs is that it becomes the worker's responsibility to pay for medical insurance, to save for retirement, and to accept the lost income of taking vacation time. These costs are hidden in the “benefits” portion of the traditional full-time employment package, which conceals their real size. It is likely that freelancers reduce their consumption of benefits once they pay for them directly, which may or may not be to their long-term advantage. It's certainly easier to save for retirement when your employer sets up a plan where the money disappears before it ever gets into your bank account. Beyond whatever genuine productivity benefits may come from matching workers to needs, it is clearly also true that tax and labor regulations in developed countries indirectly encourage employers to favor contract workers. When you see unskilled workers such as janitors being paid as contract workers, this is probably the main reason. Contract workers often aren't covered by labor regulations, and the responsibility for paying taxes is shifted to the contractor. Some taxes may be avoided entirely, and it is also much more difficult for the government to make sure that all those independent contractors aren't padding their business expenses or hiding income. In large corporations with centralized management there are comparable organizational incentives for the use of contract workers. Full-time employees often have to be approved by management at the main office, and by an independent human resources department. When one division of the organization has work that it needs done, this central management is a nuisance. Often it can be avoided by hiring a contractor. It's a common story among freelancers that they felt forced into the decision, and were afraid at first, but came to value the freedom to work whenever and however they wanted. So is freelance work exploitative or liberating? It depends.
On one hand, it's a solid principle of positive psychology that people tend to adjust to whatever comes their way, and find the silver lining in the cloud. This is what Daniel Gilbert refers to as the “psychological immune system” in Stumbling on Happiness. Since people who have become paralyzed will often say that it was the best thing that ever happened to them, this pattern of reluctant enthusiasm in freelancers is somewhat suspicious. On the other hand, there are quite a few people who genuinely value the chance to get off the Hedonic Treadmill, to work less, consume less, and have more time to do whatever they want to do. See for example http://www.mrmoneymustache.com/.

The Return of the Gold Standard

Who knows what causes the eternal appeal of gold as a basis for money? Some ideas seem to return only by becoming mirror reflections of themselves. A century ago William Jennings Bryan rallied rural and small-town populists to his presidential campaign by saying the country should leave the gold standard. In the 2008 presidential election two populists on the right made a bid for similar votes by suggesting we return to it. There is probably a psychological reason for this. But first the history: In a speech to the Democratic convention of 1896, Bryan argued that farmers, small business people and other borrowers were being crucified on a “cross of gold.” He said that by limiting the availability of currency that could be used for economic growth, the gold standard was causing a period of deflation that had lasted for more than twenty years. With prices for the products they had to sell constantly falling, those borrowers had in essence to pay their loans back with money that was worth more than it was worth when they took out the loans. In 2008 two of the more populist Republicans in the primary, former Arkansas Governor Mike Huckabee and Texas Congressman Ron Paul, both said the U.S. dollar should be based on gold or some other precious metal. They argued that federal deficit spending would inevitably lead to inflation and the devaluing of the dollar. They cited statistics suggesting that gold has maintained its buying power compared to our current fiat currency. Ron Paul, Mike Huckabee, and others who call for the return to a gold and silver standard also have a conservative ideological reason for backing the idea – it would limit the amount of debt the federal government could issue and therefore limit the size and power of that government. It’s interesting to realize that economic libertarians seem to be backing a form of currency that would be by definition deeply restrictive of commerce. And few economists consider returning to a precious metal standard a serious proposal. For one thing, the amount of federal debt compared to the size of the economy as a whole is actually fairly small, as is the current inflation rate. And many argue that what inflation we have is not due to an increase in the money supply, but to rising prices for important commodities like oil – which was also the cause of the inflation during the 1970s, when the dollar did lose a lot of its purchasing power. It’s not much of an exaggeration to say the dollar is really on a crude oil standard, given the importance of that commodity to our economy. The dollar has fallen of late, and at times it has risen as well. One sign that the fall in the dollar may not be linked to government borrowing is the continuing ease with which the government borrows; if the U.S. were debauching the currency by issuing debt, treasury bonds should get harder to sell. And they’re not.
A possible reason for the difference between how the gold standard is viewed now and a hundred years ago has to do with the vested interests of the groups the politicians hope to represent. In 1900 the bulk of the populists were farmers or otherwise tied to the agricultural economy. Farmers require a steady supply of credit, if for no other reason than that their main income pays off only at harvest time. They have to fund their farms somehow during the rest of the year. Conversely, the current Republican populists tend to be older people, many of them retired, or people with some savings. Inflation is a worry for someone living on savings or a fixed income, because interest payments and other returns on investments might not keep up with rising prices. But it seems to us there is a psychological element at play here as well. One of the most unsettling things about our often very unsettling economic environment is the fact that we can’t put our hands on our worth. It can be disturbing to have to rely on a fiat currency, a piece of paper that is supposed to be a safe store of value only because we as a society say it is. Precious metals seem more real, closer to something permanent – a very comforting idea when the bulk of someone’s net worth might exist only as a string of digits in a mutual fund’s computer. It’s an instinct that recalls the Physiocrats, a group of European economists before Adam Smith who held that all economic value comes from the land and agriculture. In the American context these fears at times dovetail with a streak of paranoia about banks (and now the Federal Reserve) that has roots going back at least to the Know Nothings.


The notion of returning to a precious metal standard for the dollar seems to regain currency (as it were) every time we have an economic downturn. Never mind that a gold or silver standard would be profoundly deflationary – as in depression-causing, just as it was in the 1890s – we couldn’t even be sure it would do what it’s supposed to. Gold and silver are supposed to be eternal and therefore safe. But it seems to us there is nothing inherent in the metals themselves that ensures value, other than the fact that they don’t corrode. If quartz crystals were rare it would make just as much sense to designate them as specie. And in fact the supply of gold and silver can fluctuate quite dramatically. There are documented cases of sharp inflation caused by gold and silver strikes in North America and in South Africa. Historians have argued that during the 16th century much of Europe was gripped by inflation because of the huge amounts of gold and silver brought back from the new colonies in Latin America. As recently as 1980, speculators attempted to corner the silver market, provoking a bubble that burst on what became known as Silver Thursday. If our fiat currency is not based on precious metals, what is it based on? We say the dollar is in a sense based on our confidence, specifically our confidence in the ability of the U.S. Government to pay its bills. This is what we mean when we say that the dollar is backed by the full faith and credit of the United States.

Interest Rates and Economic Growth

Why do interest rates have such an impact on the rate of economic growth? Our short answer is that sometimes they don’t – usually, but not always. According to standard models, a capitalist economy is a dynamic system. The possibility of designing a steady state economy notwithstanding, an economy should always be growing or shrinking, never static for any period of time. Economists call the growth “recovery” and the shrinking “recession” or “depression.” When the economy is growing, it is producing more goods and services, adding more value to the Gross Domestic Product and the total national wealth. When it is shrinking, just the opposite is happening. One of the functions of money is to act as a store of value. In other words, part of the total worth of the economy exists in the form of currency, be it in a bank account or a billfold, where it will remain (we hope) until called on. Another function of money is to provide an easy way to measure value – to enumerate the worth of assets. (This whole concept of Value is important, but we’ll have to come back to it.) Not all assets are in a form as liquid as money, but most are at least assigned a number that tries to match their financial significance. To make assets easier to buy, sell or trade, people assign a figure for the amount of currency each asset is worth. So when the economy is growing, the total sum of these numbers is growing – the amount of currency plus the total money value of all assets. Which is why the money supply is normally growing as well.


But that relationship does not always work if we flip it over. In other words, just because the amount of currency and the total worth of assets are increasing does not mean the economy is growing. Money describes the value and it stores the value, but it is not the same thing as the value. The correlation can get out of whack - one definition of inflation is more money chasing fewer goods. Deflation, the mirror image, where prices are falling and there may be too little money for the amount of goods and services, can be even more destructive. What does this have to do with interest rates? We can say that the way those numbers grow is by thousands of separate people each making decisions – whether to buy something, whether to open or close a business – and their decisions are dependent on their expectations about the economic situation. It’s generally not articulated this way, but people are constantly making predictions about whether their part of the economy will expand, and how quickly. This is a form of what we’ve been calling Collective Subjective action. In the aggregate, maybe the most important predictions are those made by lenders and borrowers. For their part, lenders are constantly risking capital on the economic prospects of others - making loans to individuals, families and businesses in the hope of earning a return in the form of interest payments. That applies to traditional bankers, but also to credit card companies and other financial players. It’s worth remembering that under the system known as Fractional Banking, loans can actually create money. If a bank gets a deposit of $100, it may keep only $20 on hand, lending the other $80 out at interest. But the depositor still has the right to ask for that $100; it hasn’t disappeared. So the deposit and the new loan exist at the same time, and new money has entered the economy. Other kinds of credit, including government borrowing and credit cards, have a similar impact.
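The $100 example generalizes: as each loan is redeposited and re-lent at the next bank, the original deposit supports a geometric series of new money. A minimal sketch follows; the assumption that every loan is fully redeposited is the textbook simplification, not a claim about how real banks behave.

```python
def money_created(deposit, reserve_ratio, rounds):
    """Sum the deposits created as each bank keeps reserve_ratio in
    reserve and lends out the rest, which is redeposited elsewhere."""
    total, current = 0.0, float(deposit)
    for _ in range(rounds):
        total += current
        current *= (1.0 - reserve_ratio)
    return total

# The series 100 + 80 + 64 + ... converges to deposit / reserve_ratio,
# so a 20% reserve ratio lets $100 support about $500 of money.
print(round(money_created(100, 0.20, 100), 2))  # 500.0
```

This limiting ratio, 1 / reserve_ratio, is usually called the money multiplier; lowering the reserve requirement raises it, which is one lever a central bank has over the money supply.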
The rate at which this happens is controlled by interest rates, which are guided by the central banks. In the U.S. the Federal Reserve puts money into the banking system or takes money out in order to raise or lower the rate of interest. The more money in the system, the lower the interest rate. And, most importantly, the lower the interest rate, the more likely individuals are to make business decisions geared towards growth. As for the borrowers, individuals are more likely to make large purchases and businesses are more likely to expand if borrowing costs are lower. We can put it this way – if interest rates are low, individuals and businesses are more likely to attempt behaviors that could result in the creation of value, or of more value, in the economy as a whole. And this new value, once created, is stored in money or in assets whose worth is described as equivalent to an amount of currency. So we can say the lender creates the money and makes a loan of it to the individual or business, which (ideally) uses it to create value, enough (the borrower hopes) to pay the lender back. And when the debt is retired, some of the new money is returned to non-existence and the rest is solidified as it is attached to the newly created value - we might say the loop closes, with the new parcel of value added to the economy as a whole, increasing the Gross Domestic Product, and the parties freed to go out and do it again. All right. All of what we’re describing so far is well understood and not controversial, even though it’s being described from a somewhat unconventional viewpoint. But the part to understand is that none of this happens automatically; it might feel automatic, because we normally don’t think about and articulate all the choices we are making.


This only becomes clear when the normal processes break down. The match between the amount of money and the amount of value works when the economy is growing enough to support this creation of new money. But what if it doesn’t? What if the individuals have miscalculated, the business expansion is a failure, and the new assets are not worth what the borrower and the lender thought? In other words, what happens if the value created is not real, or at least not solid enough to support the amount of currency it has been matched to? Of course that happens all the time. Assets don’t sell and their prices have to be cut, people and businesses miscalculate and default, things don’t always work out. If these problems are rare enough, maybe the borrower and the lender between them can absorb the loss. But what if it happens to a lot of people at the same time? Then you have a problem, maybe even an economic crisis. Markets are not perfectly rational. They make mistakes, sometimes big ones. And sometimes they get so badly out of alignment that tools which ordinarily work stop having the expected effect. One classic example of lowered interest rates failing to spark economic growth is Japan in the 1990s - the crash of the real estate bubble and the deflation that followed were so profound that the central bank cut interest rates to nearly zero, as low as they could go, without much result for years. This situation is called a Liquidity trap, and it teaches us two things. One, governments have to keep open the option of directly stimulating demand through deficit spending. This bothers some people because it looks too much like socialist-style command and control economics, and in fact many of those people object to central banks controlling interest rates for the same reason. But we can see that government action softens economic decline by looking at the history of U.S. inflation and deflation. Before 1945, deflationary periods and depressions were a regular occurrence.
But there have been almost no serious examples (serious by 19th century standards) since the institution of Keynesian principles during the Great Depression. The other thing we see from the occasional failure of interest rate cuts reinforces a point made above - currency (and worth enumerated as an amount of currency) describes economic value, but it is not the same thing as ultimate economic value. If it were, we could raise our collective wealth by printing more money. Or, if we didn't want to do that, we could simply lower interest rates until rapid economic expansion was stimulated. To put it another way, say our economy is described by a powerful equation, and one of the numbers in that equation, one of the factors, is the interest rate. We should be able to change the result of the equation by changing the interest rate number, but it seems we can't always do it. This suggests that value creation is also dependent on other things - in fact on a complex web of social and physical interactions taking place in the society at large. Let's tease this out a bit farther. Keynesians describe the situation where interest rates don't stimulate growth as pushing on a string - you can get results by pulling, but you can't by pushing. That's because the demand for the loans isn't there. Keynes noted that an economy could settle into a new equilibrium after shrinking, a state of lingering depression. The advantage government spending has in that case is that it can create its own demand; it doesn't have to wait for the market at large. But the people haven't changed, the technology hasn't changed, so why should the economy be unwilling to respond to an interest rate cut, as it had before? To oversimplify, we can say that a change had taken place in the context, in the web of interactions in which the economy rests, in the millions of unconscious (a better word might be unspoken) estimations of the future that guide people's economic decisions. To make this clearer let's define two things. We can say that the Collective Subjective can approach the Objective, but at the most basic level is not the same thing. What is the collective subjective? In this case it says that crowds, or markets, are capable of making intelligent, rational choices, but are not infallible. The other term we need to define here is one we’ve been using as if it were already understood. Maybe because most people think they do understand it. But it’s actually a slippery devil that wriggles out of your hand as soon as you think you have it. That term is value, specifically economic value.
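The "powerful equation" intuition can be caricatured in a few lines. This is strictly a toy of our own devising: investment demand falls linearly with the interest rate, but is scaled by a confidence factor that stands in for the web of unspoken estimations of the future, and which the rate itself cannot move.

```python
def investment_demand(rate, confidence):
    """Toy model: cheap credit raises demand for loans, but only in
    proportion to borrowers' confidence about the future."""
    return confidence * max(0.0, 100.0 - 800.0 * rate)

# In normal times, cutting rates works: demand responds to the lever.
normal_times = investment_demand(0.05, confidence=1.0)
# In a liquidity trap, even a zero rate can't restore demand,
# because the confidence factor -- outside the lever -- has collapsed.
trap = investment_demand(0.00, confidence=0.3)
```

With full confidence a 5% rate yields demand of 60; with collapsed confidence even a 0% rate yields only 30. That is "pushing on a string" in miniature: the interest-rate term of the equation moves, but the result barely does.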

The Irrational Market Theory

Late in life economist John Kenneth Galbraith addressed how he had been able to write so many books. He said he didn’t really produce that much that was new; he just waited until people made the same mistakes they always made and reworked his old material. We can see that market based economies are dynamic systems; an economy is normally either growing or shrinking, but never static. This plays out as a regular cycle of booms and busts in the financial systems, which historically would quickly spread to the economy at large. Between 1865 and 1929, there were five major “Panics” (as they were called in the 19th century), each more dramatic than the last: 1873, 1884, 1893, 1907, and 1929. The last one of course led to the Great Depression, which in turn led to the New Deal government interventions in the financial markets. Followers of the economist John Maynard Keynes (including Galbraith) credit government intervention with lessening the number and size of market crashes. But interventions have not eliminated crashes, and the debate over their causes and treatments might be the single most important question in economics over the last hundred and fifty years. Since Adam Smith, economists have argued that markets are rational decision making machines, and that the best way to decide how scarce resources should be distributed is by letting an informed aggregation of individuals freely set the prices for goods and services. In some ways we can see this theory works well; supply and demand curves normally can predict with some accuracy how much of a product will sell in a particular market at a particular price. But to a careful observer the activities of markets at the macro level remain unpredictable. The aggregate worth of the companies listed in the Dow Jones Industrial Average is not going to change by five percent in one business day, but that index has often moved by that much or more. Wikipedia lists more than forty occasions.
And in fact between mid-September and mid-October of 2008 all three of the major U.S. stock indexes lost between twenty and forty percent. The gross product of the economy as a whole may have declined during that period, but if so it was by less than a single percentage point. We might simply say that the stock exchanges do not reflect the true worth of the listed companies, or of the economy as a whole. But if so, why not? The main reason a stock market exists is to rationally assign a market price to a public company and to let that firm use its worth to raise capital. Large rises or falls in an entire stock market require rash, and it could be argued irrational, actions by a large number of people at the same time. And over the last century economists have spent an enormous amount of blood, sweat and tears trying to explain why markets suddenly jolt in one direction or another. One argument is that the brokers and investors are acting on incomplete or erroneous (or flat-out falsified) information. The argument goes that when the truth is learned, the newly informed move quickly to adjust their positions. But improvements in communications technology have not smoothed out the movements of the stock markets. Another theory suggests that monopoly or oligopoly power causes problems by restraining free trade or the flow of information. But movements in the U.S. stock markets reflect the decisions of thousands, possibly millions, of people; arguments that a small group of individuals can consistently determine the direction of the stock markets smack of conspiracy theories. We shouldn’t underestimate the complexity and sophistication of the debates over these questions; a few lines here should not be mistaken for an encapsulation of decades of economics research. But in fact some studies suggest markets have an inherently irrational element (1), that even when all of the participants have equal power and are fully informed about what is happening, they still participate in bubbles and panics. This argument pushes economists uncomfortably into the realm of collective psychology and the behavior of crowds. But this isn’t new territory for experienced stock followers. Many use mathematical models to try to predict market behavior, but others turn to psychological analysis. One axiom states simply that stock markets are driven by either of two emotions – greed or fear.
Legendary investor Warren Buffett went so far as to say the secret to making money in a stock market is to be fearful when others are greedy and greedy when others are fearful. That philosophy is well enough recognized to have a name – contrarianism. Questions about what causes speculative bubbles and crashes have not stopped people from effectively using market-style mechanisms in all kinds of situations. This Wiki, for example, is based on the theory of Crowd sourcing – the idea that a group of people can effectively evaluate information when given the right opportunity. Which is a form of Collective Subjective action. We could say that markets work in spite of not being totally rational. Interestingly, this picture of markets and crowd sourcing resembles a longstanding argument about the nature of scientific fact. According to mathematician and philosopher of science Charles Sanders Peirce, the scientific method does reach what we could call experimental truth, but not by any simple or automatic process. He argued that a theory may be proposed in an instant, but its value is arrived at only over time, as individuals attempt to confirm or reject the assertion. All efforts to describe truth may be fallible, he said, and they have to undergo an experimental process that includes repeated attempts to confirm or deny them before what he called a fixation of belief takes place. He has been described as using the scientific method as a form of pragmatic epistemology. As he put it in 1877, “few persons care to study logic, because everybody conceives himself to be proficient enough in the art of reasoning already. But I observe that this satisfaction is limited to one's own reasoning, and does not extend to that of other men.”(2)


Market as Culture

If all markets in all economies are constructs of human culture, then by definition they cannot determine an a priori objective value for any commodity. So we might say they can accurately find workable prices for goods and services, but that is not the same thing. (See price vs worth vs value.) In short, we can see that prices are determined by the opinions of individuals. These opinions are therefore subjective. A sensible position is that although aggregating those opinions can correct for their errors, it can also magnify them. And going from one subjective opinion to a collective subjective opinion does not change this. In fact, we can say subjective economic perceptions (well founded or not) often become indistinguishable from economic reality, simply by existing. Real estate prices rise or fall (often quite sharply) based on the fact that people think they will rise or fall. The rumor becomes solidified and substantiates itself into an actual price. So prices are subjective, and subject to too many influences from too many people to be otherwise. From the viewpoint of the individual, another way to see this is as a problem of epistemology. Even if we assume that prices have an objective (not subjective) meaning, that meaning can be too elusive or too quickly shifting for an individual to make use of without the support of an elaborate context. It would be impossible, for example, for any one person to obtain and digest all the information available about any good-sized market on a given day. To act sensibly in that market we have to specialize, and learn to trust the judgment of others. Everyone, even the people buying and selling the goods and services we depend on (and therefore setting their prices), is doing the same thing.
What people may think of as objective economic facts are actually data from a series of culturally established shortcuts, ways for us to use information gathered and digested by others. This data can be communicated to individuals in countless ways – government reports, news items, statistical fragments, company statements, ads, gut feelings, rumors, discussions with friends, examples from people we trust – some more reliable than others. We can say that people absorb and evaluate this mass of material, combining it with their own perspectives (however those were established) before making (possibly unconscious) decisions. Maybe as economic animals people operate in this environment easily enough to take it for granted. And it seems to work, well enough - things get bought and sold, people work and get paid, problems show up and (we hope) are fixed. But overall it is surprising the degree to which they trust all this subjective data. For example, people might like to think that when a large sum of money is at stake in a stock market, reliable information would be required for decision-making. But the actual truth may be that in an effort to get data before anyone else, players in those markets come to rely on unsubstantiated information. As the Wall Street saying goes, buy on the rumor and sell on the news. In other words, if a rumor is going around that a stock is going to rise because of some announcement by the company, buy before the announcement, because waiting until the information is confirmed means acting too late to make a profit.


So again, we return to a state of epistemological uncertainty – we can never be truly sure of where prices in a market should be in an absolute sense. Since the market is itself a social construct, thinking there can be some a priori reality found through a market pricing mechanism is absurd. But while we can't be sure what a price should be, we can be sure of what it is. The entire mechanism of pricing (putting monetary numbers on the worths of things) exists so we can be certain. When something is bought and sold at a particular price, no matter the thing and no matter the price, we as a society have just taken a snapshot of a transaction – a record of a moment, which can be used as a reference point. And this could be said to be the strength of the system, because that bit of freeze-dried information is compared with others just like it, and the feedback from the comparison is also put in the mix. It may be one data point, but it's part of a series. Which returns us to the notion of a market as collective subjective action. And also, in an interesting way, to the notion of a market as a form of human culture. Buying and selling require two or more people. This means setting a successful price is an inherently social act. The people involved may lie and they may withhold information, but the parties in the transaction have to reach a fixed price point or the transaction is meaningless. It doesn't take place. By using numbers to demarcate patterns in a constantly shifting dynamic system, in our theory the price-setting mechanism comes to resemble the evolutionary metaphysics described by Robert Pirsig in his novel Lila. In this context, consider the limits to how reliable data from even a large collective subjective can be. Groups of individuals can shift erratically, which is why early economists often found themselves looking to the literature on the behavior of crowds when trying to explain financial panics.
This could help explain the irreducibly non-rational elements in the behavior of markets. We can say they are the result of the market's most important components – the human beings. But the value of the goods and services to those human beings is the key – an economy grows by adding value, and it adds value from the viewpoint of individuals.

Positional Goods Positional Goods are products or services that individuals use (buy and consume, we might say, and of course display) for the sake of demonstrating their status. Modern consumption is a significant part of the exceedingly complex, generally unspoken language of status which modern individuals engage in, generally without thinking much about it. So while it might be tempting to say many things about these goods, perhaps we should at least initially limit our observations to a few points. Since very little we buy is purely for the sake of totally private enjoyment or to satisfy an entirely private need, we could argue that most goods have at least a positional element. In fact, it is possible to buy a positional good for the sake of demonstrating our status to no one but ourselves – we might buy a name-brand food to eat alone, for example, instead of the store brand, because we don't want to think of ourselves as the kind of person who buys the knock-off.


There are entire categories of products where the entire basis of the appeal seems to be positional. For example, there are what we might call fake-signal positional goods, such as cheap copies of designer products, which generally come complete with a falsified label. Another example is Veblen Goods, named for the economist Thorstein Veblen, who first described conspicuous leisure and consumption. Veblen Goods are those for which the demand paradoxically rises as the price goes up. This is in clear contradiction to traditional ideas of supply and demand, but makes sense if we realize that the whole point of owning Veblen Goods is to demonstrate the capacity to spend.
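A toy model can show how the signaling motive bends the demand curve. Everything here is hypothetical – the population, the valuations, and the threshold at which a price becomes an effective signal – but it captures the mechanism: if status value attaches only to goods expensive enough to demonstrate the capacity to spend, demand can be higher at a higher price.

```python
def quantity_demanded(price, population, signal_threshold=100.0):
    """Toy Veblen demand. Each hypothetical buyer is a pair
    (intrinsic_value, status_gain); the status gain only applies if the
    good is priced high enough to function as a signal of wealth.
    A buyer purchases when total perceived value covers the price."""
    buyers = 0
    for intrinsic, status_gain in population:
        value = intrinsic + (status_gain if price >= signal_threshold else 0.0)
        if value >= price:
            buyers += 1
    return buyers

# 50 status seekers (low intrinsic valuation, high status gain)
# and 50 ordinary buyers (moderate intrinsic valuation, no status motive).
population = [(30.0, 120.0)] * 50 + [(60.0, 0.0)] * 50

print(quantity_demanded(70.0, population))    # 0  -- too cheap to signal
print(quantity_demanded(110.0, population))   # 50 -- demand rose with price
```

At a price of 70 nobody buys: the good is too expensive for its intrinsic value and too cheap to carry any status. Raise the price past the signal threshold and the status seekers come in – an upward-sloping stretch of the demand curve, exactly the Veblen paradox.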

Post-industrial Economy A Post-industrial economy is one in which technology has advanced to the point that the economy is no longer characterized by the need for large numbers of factory workers. In other words, productive capacity and efficiency have increased to the point of mass lay-offs. This is similar to the process which occurs as an economy moves from an agricultural base to an industrial one – when huge portions of the population leave the countryside for new work in the cities. Some descriptions say a post-industrial economy means a shift to the service sector and information-based jobs. While this is accurate to a degree, it does not capture the level of dislocation – mass factory work and the mass consumption it allowed for are the two halves of the self-reinforcing cycle that has defined modern economic life. The comparison to the dislocation of the rural population at the beginning of industrialization is better. One issue that does not seem to be well described and dealt with, at least in much of the public policy world, is what we might call the distribution problem – as technology advances, the same or a larger aggregate amount of industrial products is being produced, but the simple system by which the aggregate mass of workers who produced the products made enough to buy them no longer works. Karl Marx accurately described this as the source of the recurrent crises in capitalism, although in recent years it seems they might be described as a state of chronic malaise. Some economists have argued that this is a source of structural unemployment – unemployment that will not go away as an upturn in the business cycle increases aggregate demand. But whatever its nature, it is clearly a new stage in economic development.

Cost vs Price vs Worth vs Value In everyday language, cost, price, worth and value are often interchangeable, but their usage differs somewhat, and they also have specialized meanings in Economics, business and Philosophy.

Everyday language

value The usefulness or desirability of a good or service, how much you love it, or what it is "worth to me." Value is not a number, but often we can compare the values of two things, especially if they are similar in use. Intuitively we see value as being intrinsic and stable over time, but analysis shows it must be somewhat dependent on individual preferences and social context. This stability of value stands in contrast to the fluctuations of market prices, and even when the price is stable, it may seem out of line with our perception of the value.

cost, price The amount of money required to purchase something (a good or a service). Cost is from the purchaser's viewpoint, so it has a negative connotation (cost is bad). The difference is clearer when these words are used as verbs: Customer: "How much does that cost?" Clerk: "I don't know, I'll get my boss to price it."

worth An expected selling price of some form of property, an appraisal. When we talk about worth, we are taking the viewpoint of the owner, and speculating about a possible sale, or what it might cost to replace our property. Worth is a long-term perspective. As with value, we think of worth as being stable across market price fluctuations, and so as somewhat intrinsic in the thing. We tend to imagine an ideal buyer, one who values the property at least as much as we do. If sale prices refuse to align with our expectation, we may have to adjust our idea of worth, but we can do this without having to change our judgment of value, what it is "worth to me". Price and worth are similar, but a true price only exists when we are actually selling, whereas we can speculate about worth at any time. Also, if we actually do sell, and prices are negotiable, we will set our price higher than what we think the thing is worth. Of course these generalizations about usage have many exceptions. Expressions of worth that are vague (worth a great deal) or dramatic (worth the world) refer to non-monetary value.
We aren't anticipating the selling price of our civil rights or our children. A clever economist could probably calculate a cash value of civil rights, perhaps by comparing the social health of a place where they are common with one that lacks them to some degree, but by law and tradition it’s profoundly wrong to assign a price to them. Similarly, when we say we are counting the cost, or ask At what price?, we are not actually expecting a discussion of money.

In Economics Economics offers theories (see Story) that explain the observed behavior of buyers and sellers in markets, exchanging money for goods and services. In the real world we see that price, value, etc. are similar, so it is unsurprising that economic theory offers explanations of why they should tend to be the same. Economists have also examined various sorts of mismatch and offered theories to explain them, but equilibrium theories predict that, in the long run, value, price and worth will converge. Equilibrium theories are still popular with economists who are ideologically opposed to government intervention (see Austrian school and Chicago school), but the major dividing line between Classical economics and modern thought is appreciation of how poorly the premises of equilibrium and rational individual choice describe actual economic behavior. The economy is a Dynamical system that not only never reaches equilibrium, but is also chaotic. Our judgements
of value and worth are consistently biased in ways that do not correspond to economic rationality. This is the essence of Keynes's Animal Spirits, the mood of the market. The market price is subjective value rendered specific by the market. The market does indeed average individual value judgments, but our behavior deviates from the economic ideal in consistent ways, and is strongly influenced by our social network, so what the market converges toward is The Collective Subjective. Modern economics also recognizes that there are cases (such as environmental pollution) where an unregulated market does not lead to a desirable outcome. This Market failure can be understood as a long-term mismatch between price and value. Prices vary even when the theoretical value of the products, and the number and wants of the potential buyers assigning worth to the goods, do not. Similarly, if the demand for something goes up – if its worth to the people making up the collective subjective rises – the price should also go up, but not immediately. The disconnect may be small, but it exists. The existence of this lag can be described as supply and demand in action. True enough. But that does not change the fact that supply and demand are not static, and therefore can be more broadly described as a demonstration of a process in action, proof of one situation becoming another situation, rather than instantly jolting into a new stasis.

value How much usefulness or pleasure an individual gets from a commodity or service. In economics, Value is still stable and subjective (see utility), but it is also supposed that value has all the properties of a number – that you can not only compare, but also add, divide and do calculus on values. Economists suppose that there is no problem with unlike categories of value, so you can express the value to a person of having one year of free speech as being equal to some specific number of boxes of shredded wheat.
Though it is theoretically a number, economic value can't be given in dollars or any other currency, because value would then vary over time (due to inflation) and by country (according to exchange rate). Nominal value is another name for cost.

price, cost Unless the mechanics of the Market are being studied, economists take the after-the-fact view, assuming that cost and the (final) price are identical numbers, and they also average price across the market, so that it doesn't depend on the psychology of any particular buyer and seller. Real cost is adjusted for inflation, as in constant 1970 dollars. This better represents the underlying value because it doesn't change over time.

worth In economics, worth is related to the theory of capital. It doesn't make sense to talk about the worth of a service or other forms of labor, because we can't accumulate labor, expecting to sell it later. Economics predicts that we will sell something only when its worth in the market exceeds its value to us.
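The lag between a demand shift and the price response described above can be sketched as a simple partial-adjustment process – a standard toy model, not a claim about any real market: each period the observed price moves only a fraction of the way toward the new market-clearing level, so the market is always chasing a target rather than sitting at equilibrium.

```python
def adjust_price(price, target, speed=0.3, steps=10):
    """Partial-adjustment sketch: each period the market price moves only
    part of the way toward the level implied by current supply and demand,
    so a demand shift shows up in prices with a lag rather than instantly."""
    history = [price]
    for _ in range(steps):
        price += speed * (target - price)
        history.append(price)
    return history

# Demand rises, pushing the hypothetical market-clearing price from 100
# to 120; the observed price gets there only gradually, never all at once.
history = adjust_price(price=100.0, target=120.0)
print([round(p, 1) for p in history])
```

After ten periods the price is still short of the new level – the disconnect is small but real, which is the point: supply and demand describe a process unfolding, not an instantaneous jolt into a new stasis.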

In Business Economic theories often describe the behavior of entire markets or economies fairly accurately, but success in business is almost entirely dependent on details that are hidden when
economists average across all sellers (competitors) and buyers (customers). Businesses thrive by exploiting imperfections in the market (see Arbitrage) and exploiting behaviors of customers and competitors that economists consider irrational (that is, not conforming to their theory; see Biases, Story and Homo Economicus).

value, worth (accounting) In accounting and finance, value is the same as what we have called worth (an expected sale price). Value is still somewhat subjective because of the difficulty of determining the value of something without actually selling it (see Prediction is Intractable). Accounting estimates the Book value by depreciation and other corrections, but the true value may be more, or (often) much less.

value (marketing) In marketing, and many other aspects of business, value is understood to be highly subjective. Marketing in particular attempts to increase the perceived value of a product (so that it can be sold at a higher price) without making any costly changes, ideally without any change at all. This is done through branding and Product differentiation.

price, cost The price of a widget is what you can sell it for, while the cost is how much you have to spend to make one. Profit is the difference between the two, determining the success or failure of a business.

Ricardian Equivalence (A Case Study In How Not To Look At Economic Behavior) There are assumptions about human nature built into the study of economics, assumptions that sometimes escape examination simply because they are built in so deeply. Take for example the notion of rational decision-making in economics, a topic we've looked at in other places. When describing people's behavior, economists often employ a generalized model known as economic man – homo economicus. Who is economic man? To simplify a simplification, homo economicus is rational (acting on all available information to the best of his or her ability) and wants to maximize utility (gain as much financially, or in terms of consumption, leisure or pleasure, as possible – a notion rooted in the concept of utilitarianism). Clear enough, but maybe too simple. Take the debate over what is known as Ricardian Equivalence. Ricardian Equivalence is the idea that deficit spending by the government will not expand the economy, because consumers and business people will see the increasing deficits and will cut back on their own spending in expectation of higher taxes to pay for the government debt. It was named for the late 18th/early 19th century English economist David Ricardo, who first proposed it and then rejected it. It seems Ricardo was ahead of those who came after, and not for the last time. Ricardo put it this way: "the people who paid the taxes never so estimate them, and therefore do not manage their private affairs accordingly. We are too apt to think that the (deficit) is burdensome only in proportion to what we are at the moment called to pay for it in taxes, without
reflecting on the probable duration of such taxes. It would be difficult to convince a man possessed of £20,000, or any other sum, that a perpetual payment of £50 per annum was equally burdensome with a single tax of £1000." Many consumers and business owners probably don't even realize deficits inevitably lead to higher taxes, or if they do know it intellectually they somehow feel it doesn't apply to them. To suggest that consumers who might be swayed by the smell of a new car are rational enough actors to put off a purchase based on government fiscal policy seems, shall we say, unreasonable in the extreme. This is not to say business people and consumers are never aware of deficits and never take them into account. But for them to become aware, something else has to happen. For example, excess expansion of the money supply could cause a sharp rise in prices, or could force a devaluation that in turn causes inflation. The ordinary men and women in the economy then become aware, and act according to their understanding. But they are not reacting to the deficits themselves; they are reacting to the results of the deficits. If the deficits are properly managed, they may well remain invisible. And that seems like it should be the end of it – Ricardian Equivalence doesn't work because people just don't make decisions that way. But maybe that's too obvious for some economists. The debate over Ricardian Equivalence has at times devolved quickly into questions such as whether families passing on government bonds represents a permanent form of wealth passed down through families, and whether an expanding population would have a long-term effect on tax revenues. Interesting, appealingly technical questions. But also possibly beside the point. And as the issue becomes tangled in a series of these technical debates, observations of consumer decision making are set aside. Ricardian Equivalence becomes another in a series of similar macroeconomic debates about deficit spending.
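Ricardo's numbers are not arbitrary. At the 5% interest rate implied by his example, the present value of a perpetual £50-a-year payment is exactly £1000, which is why the two tax schemes are formally equivalent even though, as he observes, taxpayers never feel them that way.

```python
def perpetuity_present_value(payment, rate):
    # Present value of a payment made every year, forever: payment / rate.
    return payment / rate

# Ricardo's example: at a 5% rate, the perpetual £50-per-annum tax and
# the single £1000 tax have the same present value.
pv = perpetuity_present_value(50.0, 0.05)
print(pv)
```

The equivalence is an accounting identity, not a psychological one – which is precisely the distinction the doctrine named after Ricardo goes on to ignore.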
Which is not to say there is nothing to think about. For example, there is the question of whether government deficit spending is really creating growth or simply moving it from one place (or one time) to another. By going into debt to finance fiscal stimulus, the government is, after all, borrowing investment money to spend on expanding the economy – money that could stimulate economic growth if it were spent by any borrower, not just the government. This suggests crowding out: the possibility that government deficits take away from private sector investment by soaking up saved money. Many economists think some crowding out is inevitable. Others say that if the economy is slack, the government is not taking investment from any business, because that investment would not have been made anyway, due to the poor economic climate. An interesting question, and actually not all that easily answered. In Keynesian terms we could consider government deficit spending as a way to force savings back into action in the economy. Keynes wrote that depressions start because savings slow the spiral of activity, by reducing the amount of consumption and investment. This is the famous paradox of savings: in a time of a slow economy, what is good for the individual may be bad for the nation. Keynes seemed to regard this as an issue of timing – fiscal policy being used, in essence, to borrow economic activity from the future, a time (one hopes) when growth is plentiful, or even excessive. This in turn suggests the possibility that, depending on the interest rate, a deficit could be paid off at a time when economic growth makes the taxes less of a burden than they
would have been when the money was borrowed. Of course that could also work the other way – the economy could decline, making future taxes more of a burden. Another possibility regarding crowding out is that government deficits probably have some kind of impact on private investment, but in most cases the rise in interest rates that might result from crowding out would be overwhelmed by the direct, conscious results of central bank policies controlling interest rates. But one thing that is clear is that few, if any, people make day-to-day consumption decisions because of their expectations about future tax rates. And economists are missing something if they think otherwise. Of course all economic decisions are the result of learned behaviors, because economics is a social construct. So maybe people could be educated to make consumer decisions based on expected future taxes. But probably not. Besides, if consumers really started behaving rationally, what would the marketing industry do for a living?

Sustainability Problems with sustainability as it is currently understood:
● The idea of limits to growth and resource exhaustion doesn't have much credibility in economic circles, because catastrophe has been predicted over and over again and has failed to appear. Advocates of limits to growth would do well to study this long history of failed predictions before blithely claiming that "this time it's different."
● The whole concept of what sustainable use of nonrenewable resources would mean bears some thought. How long are we planning to sustain life on earth?
● It's hard to see how the sort of change some envision could come about without either unlikely changes in human nature or enviro-totalitarianism.
● There seems to be some tendency to suppose that some government hierarchy of eco-mandarins must take over from the markets, directing what must be produced and how, for the good of the masses (who would otherwise choose something else). This fails to consider something that economists know well, which is that central planning is inevitably less efficient and less innovative because it can only draw on a small fraction of the knowledge and skills present in the society as a whole.
● Economics has some things to say about conservation, especially the Jevons paradox: increases in efficiency that pay off quickly rarely reduce energy consumption by as much as you would suppose, and may even increase consumption.
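The Jevons paradox in the last point can be made concrete with a toy model (the functional form and all numbers are illustrative assumptions, not empirical estimates): greater efficiency lowers the effective price of an energy service such as light or travel, demand for the service responds, and whether total energy use falls or rises depends on the elasticity of that response.

```python
def energy_use(efficiency, demand_elasticity, base_service=100.0):
    """Toy rebound-effect model. The cost per unit of energy service falls
    in proportion to efficiency; demand for the service responds with a
    constant elasticity; the energy actually consumed is the service
    demanded divided by the efficiency."""
    effective_price = 1.0 / efficiency
    service_demanded = base_service * effective_price ** (-demand_elasticity)
    return service_demanded / efficiency

# Doubling efficiency when demand is elastic (elasticity > 1):
# total energy use rises above the baseline of 100 -- Jevons' paradox.
print(round(energy_use(2.0, 1.5), 1))

# Doubling efficiency when demand is inelastic: energy use falls,
# but by less than half -- the classic partial rebound.
print(round(energy_use(2.0, 0.5), 1))
```

The design choice here is deliberate: the model contains nothing but price response, yet it already shows why an efficiency gain is not the same thing as a consumption cut.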

Towards A New Dialectic Two basic questions: Do economies evolve? And if they do, what drives them? Clearly they do change over time, in patterns that seem to repeat themselves in various circumstances. Karl Marx (among many others – although he was not the first, he was one of the first to try to describe it in a thorough way) noticed that there is a regularity in how
economies go from being based in agriculture to being based on mass industry. We can even call economic evolution a dialectic, after Marx's use of the term. But Marx's dialectic, and that of others who criticized him, lacks something basic. As Thorstein Veblen pointed out when criticizing the economic wisdom of his day (a criticism that still largely holds), the standard theory assumes that people, as economic actors, do not change, and are not changed by their participation in the economy. Veblen said that according to the standard theory, consumers exist to consume, and they consume because it gives them pleasure. When they are done consuming they return to the exact state they were in before they started, and pursue more opportunities to consume. This rigid view of human nature does not at all match what psychologists have told us about the complex dance of consumption, status, positional goods, pleasure, the fluidly adaptive nature of the marketplace, the subjective nature of value and the subjective process that creates prices. That failing becomes massively debilitating once we consider how an economy is changed when the bulk of its participants change – once the center of psychological gravity shifts among a group of economic actors, much more than a change in taste has taken place. And we may be experiencing an important shift of this kind, as many manufactured goods become so common as to make them unimportant as markers of status.

Virtual Currency (BITCOINS, UTILS AND DIGITAL CURRENCIES) The kind of money an economy uses tends to reflect the ideals and desires of the people who run that economy. In his book Debt: The First 5000 Years, David Graeber argues convincingly that empires such as Rome gravitated to gold and silver coins because coinage allowed for paying soldiers who might not trust their paymasters, and might not be trusted by them. Graeber contrasts this with economies that used various systems of obligation – formal or informal records of who owed what to whom – which by their nature required ongoing interpersonal relationships. He cites cases where participants in an economy go to great lengths to avoid reaching a final resolution of accounts, because that would end the relationship. One classic example of this is the gift economies of the Trobriand Islanders, which Marcel Mauss wrote about. We might even compare the relation of a currency to an economy with the famous statement by Mauss's uncle Emile Durkheim that "god is society, writ large" – that a society chooses a god that reflects its judgments; a currency is an economy, writ small. We could say that the currency reflects the values of a community, but that risks confusing the economic definition of value with the general definition. Graeber might argue that the two terms are not as distinct as they seem, but that is an issue to be discussed separately. But as Graeber noticed, hard currency provides for a greater depersonalization of transactions – if you have someone in your debt, you have to deal with them; if you can be paid in specie, you don't. New technologies – the internet, faster processing speeds – are allowing for the creation of a new kind of money, a fiat currency not issued by a large financial or political institution. The most prominent of these post-industrial virtual currencies is Bitcoin, although it's not the only one.
Virtual or digital currencies offer two great advantages–the potential for an extreme level of anonymity and depersonalization, and freedom from the influence of large political and
economic institutions. And in fact these two advantages are closely related, because one of the reasons to seek anonymity is to be free from controls imposed from above. But we should not think that virtual currencies could ever be free of ideals, ethics and assumptions – the cultural weight a currency carries is too deeply embedded to avoid. In fact, in this case, the advantages are the cultural values of the currency. Separate from, but related to, the preference for decentralization and anonymity in the virtual currencies is the way they express the most basic difference of the post-industrial economy from its industrial predecessor – a de-emphasis on material, industrial products in favor of products that exist in cyberspace or as experiences. But that is a much larger point. Do virtual currency believers want a money free of cultural baggage? Some clearly do. In fact, the slogan on the Bitcoins themselves is now the wonderfully ambiguous Latin phrase "Vires in Numeris," meaning "Strength in Numbers." And major Bitcoin investor Tyler Winklevoss has said much more clearly that he was backing Bitcoin in part because he wanted to put his "money and faith in a mathematical framework that is free of politics and human error." Free of the errors of a central bank, certainly. And that's what many virtual currency supporters say they want most of all – to be liberated from the power behind traditional fiat currencies, the centralized institutions that do the fiating. It's hard not to feel some sympathy, given how much trust we are asked to maintain in organizations like the Federal Open Market Committee, which largely determines interest rates (the price to borrow money) and the dollar money supply itself. This is possibly the most important cultural assumption underlying virtual currencies, as well as the most obvious – that we don't need centralized institutions.
This appeals to the decentralizing ethos of the post-industrial age, but as often before, it seems that the most important cultural biases are the organic ones that grow unnoticed. The people who develop and support virtual currencies are almost assuredly the kind of financially sophisticated and computer-literate people who would naturally regard large economic institutions (public and private) as more of a hindrance than a help. As others have noticed, the kind of cell phone apps being developed seem naturally designed to address the needs of the kind of people who develop cell phone apps – more help in finding a good Thai restaurant than a pharmacy that sells generic medicines. Supporters of virtual currency argue that they can be used to make international money transfers cheaper and easier, and this is true. But we might expect the currencies to be more likely to facilitate the transfer of money from a Dutch spammer to a Russian hacker than from a California domestic worker to her grandmother in Bolivia, at least for the immediate future. From a more theoretical standpoint, it could be argued that decentralizing control of the money supply could be a way to let it more closely mimic the network nature of the economy. But while expressing a distrust of central bankers, a virtual currency places its trust in another group of sophisticated technocrats. We can see this best in the way the Bitcoin system makes a crucial decision – how it determines the money supply (a question of huge and constant debate when central bankers attempt to guide the fate of conventional fiat currencies). Other virtual currencies are less careful about the amount of currency they put in circulation, but the builders of the Bitcoin system have clearly thought through the issue, so we can take theirs as the most sophisticated example. In short, the number of Bitcoins in circulation is preset by formula.
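That formula is public and simple enough to state in a few lines. The sketch below reproduces Bitcoin's actual emission schedule – an initial reward of 50 BTC per block, cut in half every 210,000 blocks, with the halving done by integer division on satoshis (the smallest unit, one hundred-millionth of a bitcoin) – and shows that it converges on a hard cap just under 21 million coins.

```python
def total_bitcoin_supply():
    """Sum the Bitcoin emission schedule: the per-block reward starts at
    50 BTC and is halved (integer division on satoshis) every 210,000
    blocks until it rounds down to zero."""
    reward = 50 * 100_000_000     # initial block reward, in satoshis
    total = 0
    while reward > 0:
        total += 210_000 * reward
        reward //= 2              # the "halving"
    return total / 100_000_000    # convert back to BTC

print(total_bitcoin_supply())     # just under 21 million
```

Because the halving is integer division, the series terminates rather than running forever, and the cap is slightly below the round 21 million figure usually quoted.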
The human network that is the economy would trade a dictatorship of a centralized human institution for a dictatorship of
decisions previously made by a group of software engineers and mathematicians who, for all intents and purposes, could be dead. The process by which Bitcoins are created is called "mining." Note that it's not called printing or minting. We might suspect that the term mining was chosen in the hope that some of the sense of stability and reliability questionably assigned to specie might transfer to the Bitcoin system – the founders perhaps wanted to metaphorically sprinkle the new virtual money with gold dust. Bitcoins are mined by solving an increasingly difficult series of mathematical puzzles, a process that requires sophisticated computer processing and which produces diminishing returns. In fact the math is designed to make it impossible to mine more Bitcoins after a certain limit has been reached. The point is to keep the mining process from being too easy, and to limit the number of Bitcoins in circulation. Here the framers of Bitcoin were clearly concerned that the system not be debauched by excessive amounts of the currency coming into existence – a fairly realistic concern for any newly invented currency not backed by a big government. So the diminishing returns to the mining process mean that there is a functional cap on the number of Bitcoins that can be created. This means Bitcoin could never become the dominant currency in an economy, because the supply of a dominant currency would have to grow to match the growth of the economy. Although critics of how market economies externalize issues such as their environmental limitations would disagree, classical capitalist economics would argue that an economy should be able to grow endlessly, and its currency should be able to match that. The limit on the number of Bitcoins will also have what is probably a separate and unanticipated impact on its viability as a currency. The small number of Bitcoins will mean it will always be a small player compared to other currencies, with a very thin market.
This means it will be vulnerable to speculation, and with it, wild swings in prices and exchange rates. That might not bother people in the Bitcoin world, but it should. It may make Bitcoins more valuable as a commodity, but it will limit their appeal as a currency, by making it too unreliable for merchants and consumers to use as an everyday medium of exchange. The mathematical destiny that determines the Bitcoin supply is a direct descendant of Milton Friedman's position that the money supply should simply increase by a steady percentage, with the power to print money taken away from a central bank and given to an impersonal formula. This assumes a level of market rationality that many would argue does not exist, and ignores the need for an institution that can react to the animal spirits of the times. It assumes that the economy is inherently stable and self-correcting, which critics would argue it clearly is not. But we're not about to settle that old argument here. What kind of cultural impact would likely be expressed by a currency of this kind? Here we take up the other part of that equation – the anonymity. To see how this might play out, it's worth comparing virtual currencies to an older ancestor in the quest for utopian, value-neutral systems of accounting. The English philosopher Jeremy Bentham argued that to pick the best action from a set of choices, one simply needed to add up the amount of happiness created by the various options and choose the one that meant the most happiness (or the least pain) for the largest number. To this end he theorized a unit of measurement – a util – which represented a quantity of happiness created. Choose the option that creates the largest number of utils, he said, and you can never go wrong.
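Bentham's procedure is simple enough to write down, and writing it down makes the later criticism easy to see (the numbers are invented purely for illustration): the calculus maximizes a sum, and a sum is blind to distribution.

```python
def best_option(options):
    """Bentham's felicific arithmetic, sketched: each option is a list of
    the utils (positive or negative) it creates for each person affected;
    choose the option with the largest total. Note what the sum discards:
    it never asks WHO receives the utils, or who bears the pain."""
    return max(options, key=sum)

# Hypothetical numbers. Option B wins on total utils even though it
# imposes a loss on one person -- the critics' objection in miniature.
option_a = [5, 5, 5, 5]      # total 20, evenly spread
option_b = [9, 9, 9, -2]     # total 25, one person harmed

print(best_option([option_a, option_b]))   # [9, 9, 9, -2]
```

The anonymity is built into the arithmetic itself: nothing in the maximization records who enjoys the happiness or who pays for it, which is exactly the depersonalization the next paragraph takes up.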


Many have pointed to problems they see in that system, but it does have the advantage of an implied anonymity: it doesn't matter who enjoys the utils, so long as someone enjoys them. One important criticism that philosophers have leveled at Bentham and his utils is inextricably tied to this depersonalization. They argue that in theory Bentham's kind of Utilitarianism would allow for horrific abuses of people, if the abuse created enough happiness for a large enough number to more than offset the pain imposed. Obviously this is not what Bentham had in mind. He was, in fact, strikingly progressive for his times on issues of human equality. But something about the criticism sticks. Critics argue that if enough people enjoyed the benefits, even the misery of slavery would be allowed under Bentham's system. He might argue that the math doesn't work, that any slave's bound condition would outweigh the pleasure or utility created. Perhaps. But as was common with the moral arguments for and against American slavery before the Civil War, slave masters can always find a way to cook the books, to weigh the damage they do as less than the greater good the bondage serves. And who is the final judge, after all? Who's running the big abacus that totals the utils?

With virtual currencies, many of these questions of ultimate moral accounting are left to the machinery. Given the argument that there is no absolute privacy on-line, it's worth pointing out that it is far from clear exactly how much anonymity Bitcoin's users could actually expect. How this will work in the real world has yet to be fully tested in court, or in the markets. And in many ways the atmosphere of anonymity surrounding Bitcoin and the other virtual currencies is only different in degree from the strong trend towards amorality in capitalism, a tendency in line with the expressed values of libertarianism. 
But that tendency (especially its high-tech manifestation) is deeply congruent with the support for virtual currencies, and much of that support has to do with the appeal of the currencies' offer of anonymity. The framers of virtual currencies other than Bitcoin quite consciously attempt to offer absolute anonymity, and attempt to be much less subject to the limitations imposed by the law and the moral judgment imposed by a polity. And in fact the most important criticism of them is that they could facilitate money laundering and illegal activity. It is no accident, then, that after drug trafficking, human trafficking and modern slavery are among the enterprises some fear would flourish under a virtual currency system.

The broadest point? Perhaps that economies, and the currencies that express them, are inescapably human systems, and that we as economic humans attempt to operate entirely beyond human judgment at our peril. And in fact this matches the notion that the system that determines prices is a collective subjective one, and that the economic value we pursue is a subjective one. Which implies a good deal about what will happen to the economy as it moves more into the realm of non-manufactured, non-material goods.
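The capped, diminishing issuance described earlier can be illustrated numerically. Under Bitcoin's actual schedule the block reward starts at 50 coins and halves every 210,000 blocks; summing that geometric series shows why the total supply approaches a hard ceiling near 21 million. This is a rough sketch; rounding rules in the real protocol make the exact figure slightly lower:

```python
# Sketch of Bitcoin's diminishing issuance: the block reward halves
# every 210,000 blocks, so the total supply converges on a hard cap.
def total_supply(initial_reward=50.0, halving_interval=210_000):
    supply, reward = 0.0, initial_reward
    while reward >= 1e-8:  # 1 satoshi, the smallest representable unit
        supply += reward * halving_interval
        reward /= 2.0      # the "diminishing returns" step
    return supply

print(round(total_supply()))  # ~21 million coins
```

The geometric halving is the whole trick: no matter how long mining continues, the sum of rewards can never exceed twice the first era's issuance.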

Emotion Charles Darwin theorized in The Expression of the Emotions in Man and Animals that emotions were biologically determined and universal across human cultures. However, the prevailing belief during the 1950s was that facial expressions and their meanings were culturally


determined through behavioural learning processes. More recently, beginning with Paul Ekman's work on facial expressions in the 1970s, it has become clear that the expression of emotion is culturally universal – evidence that emotions are innate. Other streams of evidence also point strongly in this direction.

Facial expression research found high agreement across members of diverse cultures in selecting emotional labels that fit facial expressions. Expressions found to be universal included those for anger, disgust, fear, happiness, sadness, and surprise. However, some emotions are exhibited according to culture-specific prescriptions about who can show which emotions to whom and when. In the 1990s, Ekman expanded his list of basic emotions, including a range of positive and negative emotions that are not all encoded in facial muscles. The newly included emotions are: Amusement, Contempt, Contentment, Embarrassment, Excitement, Guilt, Pride in achievement, Relief, Satisfaction, Sensory pleasure, and Shame. [above text adapted from Paul Ekman]

There are other strong reasons for believing that emotions are innate, notably the similar expressions of emotion in our relatives the social primates, and the clear anatomical and functional similarities between the brain regions associated with some emotions (such as fear) in humans and more distantly related animals such as rats. While the continuity of emotion with other animals is consistent with the traditional view of emotions as primitive, many of our emotions relate to regulating social interactions, and are likely to be evolutionarily recent in origin. Several brain regions associated with emotion are significantly enlarged in humans compared to our primate relatives. Neurological evidence from humans who are unable to experience emotion (see Descartes' Error) shows that emotion, rather than being an undesirable evolutionary relic, is essential to normal human functioning. 
This is consistent with theories of the smart unconscious.

Adaptive Behavior We say that a behavior is adaptive if it leads to improved survival and reproductive success. All organisms that have behavior (i.e. animals) are under evolutionary pressure to choose adaptive behaviors over maladaptive ones. In simple animals such as insects there is little dispute that behavior is primarily instinctual (innate or genetically determined). In humans, much behavior is clearly partially or entirely learned. The Nature Versus Nurture dispute is largely about whether humans have any important behavioral instincts. Yet whether behavior is learned or innate, human behavior must be reasonably adaptive or we would die off. We also say that the organism adapts to its environment, evolving toward greater fitness. As a noun, an Adaptation is an evolved anatomic feature or other trait of an organism that increases its fitness. Although we could speak of a behavioral adaptation, it is often more convenient to say that “Behavior X is adaptive.” For example, it is adaptive for squirrels to store nuts by burying them. Insofar as Truth is akin to usefulness, we can say that in evolutionary reasoning, showing that a trait is adaptive is the primary way of establishing truth (see Evolutionarily Stable Strategy.)


Evolutionary Conservation From observation of either rocks or living things, we might think that not changing is normal and needs no explanation. Rocks don't seem to change, and living things seem largely unchanged, generation after generation. This similarity is deceptive, because the unchangingness of living things is a complex dynamic process that only incidentally depends on physical stability. Consider that a rock weathers away significantly after only thousands of years, while the cockroach has kept a recognizably unchanging form for 350 million years.

Evolution is necessarily a very conservative process. It is a precondition for evolution that replication be highly accurate. If a random mutation has any effect it is extremely likely to be harmful, so mutation must be rare enough that most offspring aren't impaired. Sexual reproduction makes more rapid change possible by gene recombination, but this is still based on reliable copying of genes. Rather than being a natural consequence of physics, life's unchangingness is actually a triumph over the physical tendency for things to wear down, deteriorate and randomize. The effort to remain unchanged begins at the level of the DNA structure that all living things share. DNA is itself redundant because it is formed out of complementary base-pairs. In addition to making reproduction simpler, this duplication also allows DNA repair. Every living thing from bacteria up expends considerable energy in keeping its DNA from mutating.

There are two themes here:
● Not changing is generally good. If you're doing o.k., then there is a considerable risk that any change will be bad (especially if the change is random.) If it ain't broke, don't fix it.
● Not changing is unnatural; it must be caused by some mechanism.

In some sense, not changing is both unsurprising and surprising. 
We can understand that, due to the demands of survival, organisms have many ways of not changing, but this tendency to remain unchanged is an emergent property of life, not a direct consequence of physical stability or inertia. We're talking about two different kinds of not changing:
● Organisms have evolved abilities to resist change, to maintain constant functioning despite changes in the environment (Homeostasis), and to reproduce without mutation. This ability to resist corruption by environmental influences is essential to life, and is both a precondition for and consequence of evolution.
● Species or genes may remain little changed over long spans of time, when the baseline mutation rates would be more than adequate to cause large amounts of change. Something that refuses to evolve (like the cockroach) is evolutionarily conserved. This unchangingness is actually evolution's response to a relatively unchanging environment. The design is in some way the best possible (a local maximum in the Fitness Landscape, see Evolution as Algorithm); any change is less successful, so evolution keeps pushing back to the same point.
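This "pushing back to the same point" dynamic can be sketched with a toy mutation-selection loop. The fitness function and parameters below are invented for illustration: random mutations scatter a value away from a single fitness peak, and selection keeps returning it there.

```python
import random

def fitness(x):
    # A single smooth peak at x = 0, standing in for a local
    # maximum in the fitness landscape.
    return -x * x

random.seed(1)
x = 2.0  # population starts off-peak
for generation in range(10_000):
    mutant = x + random.gauss(0, 0.1)   # random mutation
    if fitness(mutant) > fitness(x):    # selection keeps only improvements
        x = mutant

print(abs(x) < 0.05)  # True: selection pushed the population to the peak
```

Once at the peak, almost every mutation is rejected; the population's apparent stillness is the product of constant selective correction, not of physical inertia.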

Evolutionary Psychology


Why do we act the way that we do, so typically human? Without applying evolutionary theory there is no scientific way to say whether a behavior is adaptive (serves a purpose) or not (see Intentional Design.) Ordinary psychology is to evolutionary psychology as geography is to geology. Geography describes the shape of the land, while geology is concerned with the processes that shaped the land—how it got the way that it is. Evolutionary psychology attempts to explain human motivations and behavior as being the consequence of evolution. Behaviors and capacities are assumed to be adaptive: to enhance survival and reproductive success. Evolutionary psychology is a large and rapidly growing field, and we won't attempt to summarize it here. See Evolutionary psychology and books such as The Happiness Hypothesis and The Tangled Wing that apply the perspective of evolutionary psychology to understanding the human condition. What we will try to do is provide some high level context for understanding evolutionary psychology, especially concerning criticisms from outside the field, controversies within the field, and the scientific reasons for the apparent preoccupation with sensitive issues such as sex differences in behavior or unpleasant behaviors such as selfishness and deception.

What We Think Some sort of evolutionary psychology is required to understand the subjective human experience, but we believe that aspects of the evolutionary psychology first proposed in the 1990s (see History of evolutionary psychology) are incomplete or have misplaced emphasis:
● The assumption that human evolution effectively stopped 10,000 or more years ago, and the associated idea of mismatch: that puzzling and non-adaptive current behaviors may once have been adaptive in that ancient environment. The emergence of genetic/cultural coevolution theory, in combination with genetic evidence and the surprising effectiveness of animal breeding experiments, strongly suggests some degree of recent human evolution. At the same time, coevolution theories have found that some behaviors make a great deal more sense when people are viewed in their context as cultural animals.
● A general neglect of culture and the importance of Cultural Evolution, resulting in a strong tendency toward genetic determinism. Evolutionary psychology does propose that our psychological commonalities are a consequence of our genetic heritage. There is no blank slate, but this doesn't create as tight a “leash” on human behavior as some evolutionary psychologists suppose.

This strong position on the Nature Versus Nurture debate has burdened evolutionary psychology with heartfelt opposition. Evolutionary psychology has also been hampered by an interpretive gap. Some proponents (such as Richard Dawkins) have undermined acceptance by coming across as arrogant iconoclasts (see Smarty-Pants Critique.) The sort of mechanistic evolutionary explanations offered by evolutionary psychology seem to demean and deny the Reality of fundamental human motivations and feelings. We believe that a more nuanced story can help by explaining the relationship between Mind and the brain through concepts such as Emergence (see Level Map).

So it's all about Sex?


Why are evolutionary psychologists so obsessed with sex? Isn't that rather juvenile? And why have they put forward those politically awkward arguments about the innateness of behavior differences between men and women? The answer is that evolutionary theory tells us that if there is anywhere we would expect to find a strong selective influence on behavior, it will be in behavior related to reproduction itself, and sex is a crucial part of human reproductive behavior. Humans have quite a few peculiarities in their reproductive strategies, which evolutionary psychologists have connected to differences in male and female behavior (see Sex Differences).

To many it seems nonsensical to propose that having lots of grandchildren is the intention underlying all human behavior because:
● Especially in the modern world, most of our daily activities don't advance our supposed reproductive goal in any obvious way (reading People magazine), and in many cases clearly reduce the prospects for our continued survival (binge drinking, running for president.)
● When you ask people what their intention is they will hardly ever mention reproductive success. If you press someone for the ultimate top-level goal it will usually be either emotional (because I love her), moral (it was the right thing to do) or spiritual (it gives meaning to my life.)

Evolutionary psychologists reply:
● That many apparently unproductive activities either do advance our reproductive success (in ways that are either subtle or politically incorrect), or were adaptive up until recently, or can be regarded as an incidental side-effect of important adaptive behaviors, and
● That we behave as though reproductive success were our intention even though this is not an intention that we are conscious of (see Intentional Opacity).

Evolutionary psychologists are particularly interested in explaining why we have the emotions and motivations that we do.

Selfishness, Violence and Prejudice Another objection to evolutionary psychology is that it paints such a bleak picture of the human condition. Why are evolutionary psychologists so interested in unpleasant behaviors such as competitiveness, selfishness, deception, self-promotion, cheating and violence? Most people are kind, decent, peaceful and law-abiding, yet evolutionary psychologists explain self-serving and violent behavior as just another strategy for reproductive success. Furthermore, evolutionary psychology has had big problems with altruism, and at times seems to say that everything we do either has some hidden selfish motive (such as favoring relatives) or is basically a mistake. The convergence of evolutionary psychology with anthropology and social psychology has resulted in a tentative solution to the “problem of altruism”, but only by pushing the violence and competition up a level, so that it is now between social groups. Paradoxically, one of the clearest examples of altruistic behavior is risking death in battle to defend the community. Furthermore, evolutionary social psychology argues that humans are not only innately selfish, but also innately groupish, tending to favor whatever group we find ourselves part of. This is unpleasantly like racism and other forms of prejudice.


The question of morality and whether humans are basically good or evil is big enough that it must be discussed elsewhere (see Good Or Evil?). Here we ask why evolutionary psychology is so obsessed with the unsavory side of human behavior. Some degree of selfishness is necessary to sustain human life; with no selfishness there is no life, and the moral discussion is cut short. Given selfish motivation, there is no need to suppose that lying, cheating and stealing are innate—the advantages are so obvious that they could be rediscovered by each generation. The truly interesting question is why beneficial cooperation is the norm and selfish abusive behavior is so rare; this is a major question that evolutionary psychology seeks to answer. If people are basically good, then that demands an evolutionary explanation.
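One classic formal handle on why cooperation can be the norm is reciprocity in the iterated prisoner's dilemma. The payoff values below are the conventional ones from Axelrod's tournaments; the strategies and round count are illustrative. A reciprocating strategy prospers with another reciprocator and quickly cuts its losses against a pure defector:

```python
# Iterated prisoner's dilemma with the standard payoffs:
# both cooperate: 3 each; both defect: 1 each;
# lone defector: 5; exploited cooperator: 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's last move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    moves_a, moves_b = [], []   # each player's own past moves
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (99, 104): the defector gains little
```

Over repeated encounters, pairs of reciprocators accumulate far more than exploiters can extract, which is one evolutionary route to cooperation as the default.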

Internal Controversies The most fundamental weakness of evolutionary psychology is that it often relies on speculations about what might have happened in the distant past. It is at risk of incorrect explanations of the status quo, and these can be misinterpreted as justifications for the status quo (see Just-So Stories). Even within evolutionary psychology there is considerable dispute about whether behaviors such as music and religion are adaptive or not, and to what degree they are hard-wired. This distinction is in fact somewhat ill-founded, because as Daniel Dennett points out in Darwin's Dangerous Idea, all adaptation is exaptation. This means every feature or behavior of an organism must have its origin in a feature that was purely accidental, or served a different purpose. While it is clearly true that many of an organism's structures and behaviors serve purposes, there are also many traits with no clear purpose. Such a trait may serve no purpose at all, or might have multiple minor benefits.

Just-So Stories Master Pangloss taught the metaphysico-theologo-cosmolonigology. He could prove to admiration that there is no effect without a cause; and, that in this best of all possible worlds, the Baron's castle was the most magnificent of all castles, and My Lady the best of all possible baronesses. “It is demonstrable,” said he, “that things cannot be otherwise than as they are; for as all things have been created for some end, they must necessarily be created for the best end. Observe, for instance, the nose is formed for spectacles, therefore we wear spectacles. The legs are visibly designed for stockings, accordingly we wear stockings. Stones were made to be hewn and to construct castles, therefore My Lord has a magnificent castle; for the greatest baron in the province ought to be the best lodged.”

We use Just-So Story in the sense that Stephen Jay Gould used it to criticize Evolutionary Psychology arguments. We admire Gould because he has a humility that many Evolutionary Psychology advocates seem to lack; however, the term is not an all-purpose refutation of evolutionary arguments.


A “Just-so story”, as we see it, is the basic error of lazy evolutionary reasoning. That is, if we propose a mechanism for how something could be adaptive, we feel we have explained that thing, when it remains quite possible that:
● Dysfunction: The proposed mechanism doesn't work (it is not an Evolutionarily stable strategy), or
● Contingency: It could have been that way, but it just wasn't. The assumed conditions did exist at the critical evolutionary event, but some other unthought-of mechanism predominated. Or the assumed conditions did not exist, so the mechanism didn't work in practice, and some other working mechanism did the job. Likely the actual path was less direct than we suppose.

Dysfunction is the main error that evolutionary theorists concern themselves with. They have devoted great effort to mathematically proving that certain mechanisms work, given seemingly plausible (but inevitably greatly oversimplified) assumptions. A problem that we have with some authors of the smarty-pants school is that they argue by means of round-trip error (see The Black Swan) that the lack of a proven-functional mechanism for altruism, etc., proves that it is theoretically impossible for the world to be as it appears to be, so we are deceiving ourselves. For example: true altruism doesn't work evolutionarily, so it doesn't exist. “No proof for the evolution of altruism” is not proof of “no altruism”, and is also not proof that altruism arose via a nonevolutionary skyhook (see Darwin's Dangerous Idea.) We take both altruism and evolution as givens, so we must conclude that we just don't have a working theory yet. Any story with either of those weaknesses is a “Just-so story”, so the caution to avoid “Just-so stories” is no more than the caution to avoid invalid or inadequately supported evolutionary arguments. Gould, though, was primarily concerned with the Contingency error, as it is not given as much consideration as it deserves.

Scientific Hypotheses Both Daniel Dennett and the Not by Genes Alone authors address the “just-so story” with the moderately valid objection that spinning off plausible theories is a large part of how we make progress, and also that the historical facts to address Contingency error are usually difficult or impossible to come by, so we are faced with either abandoning any attempt at evolutionary explanation or casting caution to the wind and making speculations that are consistent with what we do know. In order to be a productive scientific hypothesis a story should not only be consistent with the facts as we know them, but also make testable predictions.

Missing Historic Evidence A particular weakness common in evolutionary just-so stories is the speculation of convenient past conditions. One sure defense against this weakness is to only call on conditions that apply right now or documentably applied in historical times. This is the uniformitarian principle which was productively applied in geology. It can be regarded as a form of Occam's razor. A uniformitarian evolutionary argument should win points even if it lacks the simplicity of an argument based on hypotheticals. We believe that sociobiologists and evolutionary psychologists have been too quick to give up on uniformitarian arguments. Being able to speculate any conditions is just way too much rope. But geological uniformitarian dogma led to


the rejection for years of ideas that seem clear in hindsight, such as that landscape has been shaped by colossal deluges of a magnitude vastly exceeding historical observations. It seems likely that evolution has often depended on stressful unusual circumstances, especially when it comes to cooperating to achieve a higher level of organization. jared_diamond notes that politically, states arise in opposition to a common enemy, and not from enlightened self-interest in a time of peace. Similarly, we can imagine that the eukaryotic cell arose in a desperate situation where neither of the parent cells was viable on its own, so that the inevitable bugs in the new arrangement were not a devastating competitive disadvantage. See Meta-Evolution.

Narrative Fallacy The just-so story is an instance of what nassim_taleb calls the “narrative fallacy” (in The Black Swan): the Story is so beautiful, explains so much, that it must be true. Beauty is truth, truth beauty. But at first glance scientific truths about our place in the universe have often been ugly; their beauty is at best an acquired taste. “Beauty is truth” is just another way of looking at the unconscious “Gestalting” process that we use to evaluate Truth in all matters, great and small. Clearly our judgment of a “good story” is right often enough that we shouldn't ignore our intuitions, but some humility and appreciation of our cognitive frailty is also called for, especially given the strong evidence for self-deceptive overestimation of our powers (see Positive Illusions.) Another take on just-so stories is that the “rightness” of a story depends on the consistency of the story with the other stories our culture tells us, its mythic power and summary of the human condition. Once we cast off the burden of evidence and spin stories based primarily on their beauty, then it's no surprise when we prove that things naturally end up the way that we see them (through the lens of our culture.) The rightness and beauty of a story is only a poor guide to its truth, but often it's all we've got.

Meta-Evolution By meta-evolution, we mean the evolution of mechanisms that assist evolution. These changes can be thought of as optimizing the evolutionary algorithm. Once we adopt the view of Evolution as Algorithm, we can unify our understanding of a number of events in human history and ancestry that are generally studied in isolation and often not even understood as being instances of a more general pattern. Since evolution is an optimization algorithm, it makes sense to evaluate meta-evolutionary change by how it optimizes the performance of evolution according to the same criteria by which we evaluate artificial optimization algorithms: efficiency and generality. Efficiency is the speed with which the optimum is discovered. One of the hallmarks of a meta-evolutionary innovation is an increase in the speed of evolution. Generality refers to the class of problems to which the algorithm is applicable. The Fitness landscape contains local peaks, so evolution faces a Global optimization problem. The power of the evolutionary algorithm is defined by its ability to find new peaks in the fitness landscape. Meta-evolutionary advances are also associated with spreading into new evolutionary niches, filling the world with life.
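The local-peaks point can be made concrete with a toy one-dimensional fitness landscape (the landscape and parameters here are invented for illustration). Mutation-plus-selection is a purely local search, so it climbs whichever peak it starts near and cannot cross the valley to a higher one; a more general evolutionary algorithm is one that can escape such traps:

```python
import random

def fitness(x):
    # An invented landscape with a local peak at x = -1 (height 1)
    # and the global optimum at x = +2 (height 3).
    return max(1 - (x + 1) ** 2, 3 - (x - 2) ** 2)

def hill_climb(start, step=0.05, generations=5000, seed=0):
    # Purely local search: small random mutations plus selection.
    rng = random.Random(seed)
    x = start
    for _ in range(generations):
        mutant = x + rng.gauss(0, step)
        if fitness(mutant) > fitness(x):
            x = mutant
    return x

print(round(hill_climb(-1.0)))  # -1: stuck on the local peak
print(round(hill_climb(1.5)))   #  2: finds the global optimum from its own basin
```

In this framing, a meta-evolutionary innovation is anything that widens the search: it either speeds the climb (efficiency) or lets the process reach peaks that local mutation alone could never find (generality).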


Meta-evolutionary timeline We identify these major meta-evolutionary events:

Origin of life: 3.5 billion years The origin of life itself can be seen as a meta-evolutionary event, since before then there was no evolution at all. It is likely that life did not spring forth with the slick gene / RNA / DNA / ribosome architecture that is now universal. Arriving at the point of a recognizable bacterial cell required a great deal of meta-evolutionary change, so much so that the now-lost history of origin is best understood as primarily meta-evolutionary, though there was undoubtedly also metabolic innovation associated with radiation into new niches.

Eukaryotes: 2 billion years Meta-evolutionary innovation is also the defining feature of the top level of classification of life: the distinction between kingdoms of eukaryote and prokaryote. Because the eukaryote nucleus can organize much more DNA, this change was crucial in enabling the evolution of more complex organisms, including all animals.

Sex: 1.5 billion years Sex is the most important meta-evolutionary innovation in biological evolution. The power of sex is its ability to mix-and-match successful organisms with the reasonable expectation of creating a viable new organism. Before sex, the only mechanism of change was random mutation, which is usually harmful, and can at best only create a slightly different child organism. Sex greatly increased both the speed and power of evolution. Evolution is faster with sex because much more variation can be created without an excess of non-viable children harming reproductive success. Sex also allows the discovery of new peaks in the fitness landscape because when the children “parachute into” intermediate points in the design space they may land near a new peak. Though bacteria have various interesting sex lives, only in eukaryotes do we see what we normally think of as sex, with 50%/50% mixing of genes, coupled to reproduction. An animal's body, or even a tree, is primarily made up of non-reproductive cells. In order to have these sorts of specialized (somatic) tissues that we and other animals rely on, it is necessary for sex to be linked to reproduction (in germline organs). Complex organisms may reproduce without having sex (parthenogenesis), but these organisms never could have evolved without sexually reproducing ancestors, and if they reproduce exclusively by parthenogenesis, then they are stuck in an evolutionary dead end. Sex is also a clear example of the inadequacy of current evolutionary theory to explain meta-evolutionary change (see the analysis page.)
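The mix-and-match power of recombination can be sketched with toy bit-string genomes (the genomes, target and fitness function are invented for illustration). Each parent is fit in a different respect, and a single crossover produces a child fitter than either, something random point mutation could only reach through many lucky independent steps:

```python
# Toy genomes: bit strings scored against an arbitrary target.
TARGET = "1111111111"

def fitness(genome):
    # Count the positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(mom, dad, point=5):
    # Single-point recombination: splice two proven genomes together.
    return mom[:point] + dad[point:]

mom = "1111100000"   # fit in the first half
dad = "0000011111"   # fit in the second half
child = crossover(mom, dad)

print(fitness(mom), fitness(dad), fitness(child))  # 5 5 10
```

This is the "parachute" effect in miniature: the child lands at a new point in the design space assembled from two already-successful designs, rather than one mutation away from a single parent.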

Origins of culture: 2 million years


The evolution of hominids capable of cultural evolution was a major meta-evolutionary milestone, kicking off the whole process of Genetic-Cultural Coevolution. If sex creates a challenge for the selfish_gene theory, then cultural evolution is much worse. Some evolutionists offered up meme_theory, a straightforward (and inadequate) generalization of selfish gene theory to cultural evolution. Others argue that cultural change is so different that it makes no sense to call it evolution.

Cultural diversification: 20 thousand years There was another meta-evolutionary change deep in prehistory, associated with a sudden increase in the rate of cultural change, as shown by the diversification in archeological artifacts. It seems likely that there are correlated genetic changes, the product of Genetic-Cultural Coevolution.

Agriculture and state: 5000 years Yet another turning point was at the onset of history, with the roughly coincident development of the state, grain agriculture and writing. This created a strong selection pressure on such civilized humans, and is likely one of the strongest drivers of recent human genetic evolution. In this process, humans have domesticated themselves, almost certainly with considerable change in human behavioral instincts. This view is in some conflict with the usual understanding of the Environment of evolutionary adaptedness in Evolutionary Psychology, which is generally assumed to correspond to tribal living more than 12,000 years ago.

Modern times: 250 years (to now) Things have changed yet again in modern times (for some value of modern.) There was clearly a major innovation in both rate of change and of radiation into new niches that occurred around the time of the industrial revolution. It is mostly a matter of taste whether to identify multiple waves of change (as in The Third Wave) or a single process of exponential speeding and intensification, but there is no denying that we live in interesting times.

Ongoing Genetic Evolution From the viewpoint of meta-evolution we can clarify some common misunderstandings about human evolution. There are three different mechanisms of human evolution: genetic mutation, sexual reassortment, and cultural evolution. Each builds on the forms that came before, but does not replace or stop the other forms of evolution. Behavior drives evolution, which means that when an organism behaves in a way that changes the environment it lives in, this creates a selection pressure that drives genetic change. This is a standard idea in evolutionary theory (associated with Ernst W. Mayr) that applies to any organism that has behavior, but is clearly relevant to humans, given the large changes we have caused in our own environment. When cultural change drives behavioral change, then culture can drive genetics. This is what Genetic-Cultural Coevolution is all about.


One thing that creates confusion is the general failure to distinguish between sexual genetic reassortment and genetic mutation, lumping both together as “genetic evolution.” We believe that two serious scientific problems underlie this:
● Theoretically, it has proven difficult to show how the almost universal prevalence of sex makes any evolutionary sense (see Evolution of sexual reproduction.) Possibly because of this embarrassment, there is a tendency to minimize the importance of sexual reassortment, and to neglect its study.
● Practically, DNA technology has greatly transformed the methods of understanding evolution, allowing much greater rigor and precision. But these methods can only examine the mutation of individual genes, and the flow of individual genes through populations. It has been possible to do interesting science without really understanding what genes do, but to understand the importance of sexual reassortment we must not only understand how individual genes work, but also how a constellation of related genes and their non-genetic regulatory sequences interact to create structure and behavior. This is a vastly more difficult problem.

Unfortunately, these difficulties combine to create an If all you have is a hammer... problem. We lack the tools to understand the importance of sexual reassortment, so the more tractable problem of genetic mutation gets most of the attention. Although there is good evidence that a few genetic mutations arose as a consequence of agriculture (such as the ability to digest milk as an adult), it is undoubtedly true that, due to its greater speed, sexual reassortment is the primary mechanism of recent genetic evolution. Yet this change is largely invisible using current tools. Note that the general pattern is that the later forms of evolution are faster than the earlier forms, but the ultimate range of adaptation is limited by the materials bequeathed to us by the earlier forms of evolution. 
So cultural evolution is limited by our genetic makeup, including our gene assortments and regulatory sequences. And the power of sexual reassortment is in turn limited by the palette of genes available to work with. Rather than cultural evolution superseding sexual reassortment, or sexual reassortment superseding genetic mutation, each builds on the layer below, and often tends to increase the selective pressure, speeding up the evolution of its component parts.

Progress in Evolution? If one is inclined toward teleological explanations then it is easy to see the hand of God in the theoretically inexplicable and oh-so-convenient appearance of these evolutionary mechanisms that are essential to our human state. We have no need for that hypothesis. These mechanisms have manifestly proved themselves stable now that they are in place, so must be adaptive even in the narrow selfish-gene view. If it is difficult to develop plausible situations in which these mechanisms evolve by ordinary gradual micro-evolution, then we are forced to fall back to the position that improbable non-normative conditions led to an environment where evolution could proceed in the “correct” direction. This is not quite the same as supposing there was a Hopeful Monster that arose by mutation in a single generation, but close. Insofar as we conclude that meta-evolutionary innovations were improbable we give weight to the belief in Evolutionary contingency, but are not left entirely without a natural explanation. In particular, if we can
convincingly argue that a meta-evolutionary innovation was necessary for intelligent life to arise, then we can use the Anthropic principle to turn the Argument from Design on its head. Although they have mainly been applied to cosmology, anthropic arguments provide a subtle tautological explanation of why we might find ourselves in the best of all possible universes. See also silent_witnesses and retrospective bias. Technical note: Additive Genetic Interaction.

Nature Versus Nurture What causes poverty? What do we mean by cause? Clarify innate vs. nurture. Nurture implies action by a parent, teacher, etc. Randomness isn't nurture, though it may be someone's responsibility to protect from random effects, if possible. Acts of the child himself or of other children also aren't nurture, since they aren't properly responsible. Environmental multiplier. What is the right way to do causal accounting when causes act in parallel?
- Intuition is that there is no clearly right answer.
- If the crucial property of causality is the possibility of other outcomes in the absence of these factors, and all are preconditions (necessary), then it does make sense to say that both are causes.
- I'd say that the ease of modification of a precondition isn't usually relevant to whether it's a cause, though that's an obvious practical consideration.
- If you had an “or” precondition (multiple sufficient causes), then that is a bit odd, but if you consider moral responsibility, of two people who shoot the victim, we say both are equally responsible.
- Delta nurture + delta nature is “or”-like, but the outcome isn't discrete. This quantitative aspect doesn't show up in the simple moral thinking of a criminal case, but could be seen like civil damages. If A and B were supposed to do some amount of work, and didn't deliver, then we would usually think it fair if their liability corresponded to what they were supposed to have done. Morally they're equally at fault, but one owes more than the other. When additive, it does seem to make sense to say cause is proportional to contribution.
But does the multiplier theory mean a nature*environment interaction? It's a positive feedback. Gene-environment correlation.
- Has heritability looked at genes*environment? But this would be non-shared environment?
- Moral responsibility isn't exactly the same as causality, anyway.
What does the environment need to be like to have a multiplier?
Competition, or even just being nonuniform. The only thing that would clearly prevent a multiplier is an environment where the outcome would be completely unaffected by any sort of behavior. In other words, behavior has no consequence at all, at least w.r.t. the outcome. Consider f(g, e) = Ag + Be + Cge Mathematically, we'd be happy to say that if one term dominates, then f is mainly that term. We might drop the others, as an approximation. In a sensitivity analysis, given particular levels of variance in g and e, we could attribute output variance to one factor. Of course with more general nonlinearities it gets complicated, but here it's just variation / mean, for each factor. We are really talking about the variance for the additive terms too, but the A/B ratio comes into play
then. Of course models like these are ludicrously simple compared to the thing modeled, but that doesn't mean they can't give some insight. Is heritability interpretation tricky because it has to do with accounting for variability, and not the mean? Caring about difference is intuitive. It isn't even clear to me what this critique supposes the misleading naive interpretation is. Obviously there's a genetic explanation for the mean too (genetic similarity). Another incorrect critique of heritability is that it entirely fails to account for SES effects such as wealth and parental education. There's room for dispute over whether the accounting is correct, but this is exactly what heritability is trying to do. Generalizing beyond the sample does require an argument of sufficient similarity. This is the “between group comparison” problem. Cultural differences could also affect heritability. For example, individualistic western (WEIRD) cultures could increase heritability of behavior because social conformity pressure is reduced. A liberal culture allows individuals to pursue their behavioral inclinations to a greater degree, increasing the diversity of outcomes, and increasing multiplier effects. If you just gave poor people enough to bump them above the threshold, would that end poverty? Not to get carried away here, but you do need to consider our fuzzy intuitions about causation. Suppose we made a causal budget for poverty:
% interaction with intentional adults (nurture)
% interactions with children and irresponsible adults
% economic opportunity
% personal decisions (free will)
% other bad luck
But aren't your personal decisions influenced by your genes, how you were raised, and community norms? The same is true of your parents, of course. So at some level this gets into free will. Of course social determinism is no more plausible than genetic determinism, but it is interesting to see the connection with free will, and the nearby issue of responsibility.
But that's blaming the victim, of course, and so pretty much off limits. Events don't happen for a reason, but things survive for a reason. You can see why poverty survives. We ask the cause because we want to know whom to blame, or we wish to change it by breaking the causal chain. But any enduring system has a mesh of positive and negative feedbacks that maintain it. Poverty isn't a new problem. An attractor. History of social class? Always present in states. Maybe in some tribal agrarian and herding groups. Either a herd or land is capital, so you get wealth variation, even without hierarchy. People who are unemployed or underemployed aren't effectively exploited by the economy. Then there are those working poor who often have more than one job, and are “exploited”. How do we get to be who we are? Taken literally, “nature versus nurture” asks whether something about us was within us even from when we were a child, or if it was caused by nurturing or negligence by the adults who were responsible for our care. To what degree is human behavior either genetically determined or established by culture? This point is strongly contested because it has political implications. If human behavior is entirely a consequence of culture, then bad behavior such as violence, greed and competitiveness is a consequence of bad culture, and humanity can be arbitrarily perfected by altering culture somehow. If humans are instinctively selfish, greedy, competitive and violent, then perhaps we live in the best of all possible worlds. A “nature” stance is seen as conservative and a “nurture” stance is felt to be progressive or liberal.
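The toy interaction model f(g, e) = Ag + Be + Cge from the notes above can be sanity-checked numerically. This is a minimal sketch: the coefficients, the unit-normal distributions of g and e, and the sample size are arbitrary illustrations, not fitted to any data.

```python
import random
import statistics

# Toy interaction model from the text: f(g, e) = A*g + B*e + C*g*e.
# A, B, C and the unit-normal g, e are made up for illustration.
A, B, C = 1.0, 1.0, 0.5

def f(g, e):
    return A * g + B * e + C * g * e

random.seed(0)
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

# Variance contributed by each term, and total output variance.
var_g = statistics.variance(A * g for g, e in samples)
var_e = statistics.variance(B * e for g, e in samples)
var_ge = statistics.variance(C * g * e for g, e in samples)
var_total = statistics.variance(f(g, e) for g, e in samples)

# With g and e independent and zero-mean, the three terms are
# uncorrelated, so Var(f) ≈ Var(Ag) + Var(Be) + Var(Cge), and each
# factor's share of the output variance is well defined.
```

With independent factors the additive accounting works out cleanly; the tricky cases discussed in the notes arise exactly when gene-environment correlation breaks that independence.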


What Do We Think? Since 1900, a view widely promoted by intellectuals is that (in effect) there is no such thing as human nature. That is, if there are any human behavioral instincts, they play so little role in modern life as to be irrelevant. The view that human behavior is entirely socially constructed peaked during the 1970s. Since then, threads of research have come together to make clear that the human mind is not a blank book at birth. For some aspects of mind the innate structure is sort of a first draft, while for others it may only be an outline.
● Twin and adoption studies have shown that important differences between individuals (such as personality and intelligence) are largely hereditary, with the family environment having significantly less effect. See Behavioral Genetics.
● Clever experiments on infants have shown that many capabilities that we all share in common, such as basic physics understanding (objects don't pass through each other), are present from a very young age, and individual differences such as anxiety level can also be detected. See Descartes' Baby.
So it does make sense to say that humans (in general) naturally behave in certain ways and that each individual differs from others in how they naturally behave. Furthermore, behavioral researchers in psychology, anthropology and economics have developed theories about these human universals and individual differences that clearly show these ideas are extremely important for human self-understanding. They are important because evolution offers scientific narratives relating to basic human behaviors that concern every living human: issues such as cooperation, selfishness, competition, the conditions that can be expected to promote mutually beneficial cooperation, and the many faces of social inequality. See Cheating, Social Conflict, Cultural Evolution, Individual Differences and Fairness and Hierarchy.
We now know that the theories that all humans have equal mental potential and that pretty much everything is determined by the environment are wrong. But… so what?
● The theory that all humans are equal never made any biological sense, because evolution needs variation to work.
● This supports pre-modern popular understandings of intelligence and personality that were never really displaced in popular awareness.
● The equality theory is substantially correct in practice, and morally correct. The fact remains that all humans are very similar to each other, and we should strive to give everyone equal opportunity to develop whatever their potential is.

Smarty-Pants Critique We have an unclearly articulated objection to works from the Evolutionary Psychology viewpoint such as The Moral Animal. Though we like many of the ideas, there seems to be something wrong with the attitude. It seems like we are being lectured by an insufferable smarty-pants know-it-all who takes iconoclastic glee in dashing our naive understandings of our behavior. If there are Icons of human understanding that have outlived their usefulness, then disposal must be done reverently, using approved techniques.


The best explanation of the problem with this literature is willful Level Confusion with denial of the reality of emergent phenomena such as emotion and culture. These authors also seem to disagree with our view that life can and should have a spiritual dimension, and some of them are evangelical atheists. The smarty-pants crowd has a particular problem with Stephen Jay Gould. Though they fairly criticize some of his more speculative theories, his resolute opposition to evolutionary psychology, and his dalliance with Richard Lewontin (purveyor of Marxist biology), what really bugs them is how his spin on evolution successfully competes with their own.

Mind It isn't technically correct to say that the relation of brain to mind is that of hardware to software, because even a purely hardwired system such as an analog computer has the crucial property of using physical phenomena to represent something. We consider the brain as a signal processing system, and say that brain is to mind as hardware is to signal. We prefer the term signal to the common information because a signal is something that by design represents something, whereas in information theory the technical meaning of “information” is unpredictability (basically randomness.) Under this counterintuitive technical definition you receive more information listening to radio static between stations than you do when you listen to the news broadcast. See The User Illusion for an excellent popular introduction to information theory.
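The static-versus-news point can be made concrete with a small Shannon entropy calculation. This is a sketch; the two 8-symbol probability distributions are made up for illustration.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical 8-symbol sources, for illustration only.
static = [1 / 8] * 8        # radio static: every symbol equally likely
news = [0.93] + [0.01] * 7  # a broadcast: one symbol dominates

print(entropy(static))  # 3.0 bits per symbol (the maximum for 8 symbols)
print(entropy(news))    # well under 1 bit per symbol
```

The unpredictable static carries the maximum information rate, while the highly predictable broadcast carries far less, which is exactly the counterintuitive point in the text.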

Mind is by Design Distinct from Brain The reason that it is so effective to consider the mind to be distinct from the brain is that the brain is designed to make this possible. Although mind is not software, there is a precise analogy between the design of the body and the way we design computer hardware so that the software will function in the same way regardless of the precise details of the hardware construction, the local physical conditions (temperature, vibration) and random occurrences (cosmic rays.) In a computer the most important way this is achieved is by digital representation: we choose two physical states to represent each bit such that there is sufficient noise margin to keep disturbing influences from disrupting the intended operation. The brain is not nearly as deterministic as a digital computer because it is constructed out of imprecise biochemical goop and because it is fundamentally an analog computer. Since the brain is more susceptible to disturbance the body is designed to protect the brain from environmental disturbances much more thoroughly than a computer needs to be. The brain is on shock absorbers inside a hard shell, is kept at a constant temperature, and has its inputs filtered at the molecular level (the blood-brain barrier.)
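The noise-margin idea above can be illustrated with a toy encoder/decoder. The 0 V / 5 V levels, the 2.5 V threshold, and the noise level are arbitrary choices, picked so the noise stays well inside the margin.

```python
import random

# Toy digital encoding: two voltage levels with a wide noise margin.
# The 0 V / 5 V levels and 0.3 V noise figure are arbitrary choices.
V_LOW, V_HIGH, THRESHOLD = 0.0, 5.0, 2.5

def encode(bit):
    return V_HIGH if bit else V_LOW

def decode(voltage):
    return voltage > THRESHOLD

random.seed(1)
bits = [random.random() < 0.5 for _ in range(10_000)]
noisy = [encode(b) + random.gauss(0, 0.3) for b in bits]

# The noise stays far inside the 2.5 V margin, so every bit survives.
assert all(decode(v) == b for v, b in zip(noisy, bits))
```

Because the two states are far apart relative to the disturbance, the represented bit is unaffected by the messy physical details, which is the sense in which software is insulated from hardware.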

Mind Represents Reality


Paradoxically, although the design of the body goes to great lengths to insulate mental processing from physical reality, the vast majority of mental processing concerns fairly direct representations of physical reality. Because a neuron is maintained in a controlled environment (with a fixed temperature, among other things), it can accurately represent the temperature of your fingertip. The brain defies reality so that it can more accurately represent reality. The first mental level (see Level Map) is primarily concerned with the mind's interface to reality. The functioning of this layer is unconscious, so we are unaware of how much work the brain is doing to maintain the user illusion that is our consciously accessible model of reality. Because of this inaccessibility, we greatly underestimate the indirectness, implicit assumptions and possibility for error in perception, and unless we are athletes, performers or roboticists, have little appreciation for the complexity and contextual nuance in the motions that we so effortlessly make when we try to change the world.

Mind Represents Unreality Because the physicality of the mind is applied to the representation of other things, the mind can easily represent things that have no physical reality. We can imagine the climate on Tatooine just as easily as we recall the climate last fall in Pittsburgh. This is why we do not learn anything new about the reality of a subjective mental state when we detect a physical pattern in the brain associated with that state (see X proved real.) The reality of mental phenomena is a philosophical tarpit that we intend to tread lightly around. Let's just say that it would be foolish not to act as though some mental states are real, but that the potential for unreality also increases as we move to higher mental levels.

Philosophical Digression In order to talk about mind we must move beyond the physical stance to the design stance so that we can see what mental entities a physical structure represents. We observe that the mind has some behavior related to solving a real-world problem, and then we can reverse-engineer the physical structure of the brain. We can say that the optic nerve carries the visual signal from the eye to the brain and that the brain stem manages important body functions. These structures represent mental functions because they evolved to do so—we couldn't apply this sort of analysis to a random tangle of neurons in a petri dish. The distinction between mental phenomena and their physical substrate is one of Representation. Although the behavior of a given transistor in your computer's graphics card can be understood entirely from a physical perspective, we must adopt the design stance to understand that the output voltage is representing the color of a particular tiny region on Michael Jackson's nose. Once we understand this relationship we have explained the voltage and we can also predict what is likely to happen in the future far more accurately than we could based purely on local physical considerations.


Action Moving our bodies through space and interacting with objects is remarkably complex and subtle (see Motor coordination.) We do these things so effortlessly and unconsciously that we are unaware of how much is going on behind the scenes. One way to see this is when something goes wrong, as in Apraxia. In some forms of Blindsight, people who have no conscious vision are still able to do tasks such as putting a letter into a mail slot or walking through a maze. These actions are done equally unconsciously in normal people, but the very independence and cleverness of the unconscious motion planning becomes clear in blindsight victims because they can still perform these complex actions even though brain damage prevents any conscious visual perception. See Phantoms in the Brain for more neurological evidence. The complexity of algorithms that attempt to duplicate human action is also evidence of the hidden complexity. See Motion planning and Planning Algorithms (online book). The simple feat of moving in a controlled way is a considerable challenge, especially when we consider highly dynamic actions such as running or jumping. Though recent walking robots are impressive (and creepy), earlier robots shuffled like invalids; it took 40 years of work, and substantial complexity. While we emphasize the innate talent for motion and action that we take for granted, consider also the unconscious nature of skilled performance such as dance, athletics or playing a musical instrument. Considerable practice is required to reach high levels of performance, but even ordinary amateur competence requires sufficient practice that the appropriate motion can be made unconsciously (see Muscle memory.) From an evolutionary perspective, we can see that it is unremarkable that action is an Unconscious process.
Not only is consciousness poorly suited to this sort of fast-fuzzy-subtle undertaking, there was also no need for consciousness to become involved because action and motion at this level was already well developed in ancestors that lacked conscious thought. See Representational Opacity.

The Argumentative Theory Two people are caught in some highly compromising situation, and one says to the other: “Think of something quick!” Why is this funny? It captures something about the human condition—there were surely lots of times in our evolutionary past where the ability to think of something quick was a vital survival skill. We should not be surprised to find that humans are endowed with an excellent ability to make up explanations after the fact. We should not confuse the ready generation of explanations for motivation with actual understanding of our motivations. The argumentative theory of consciousness is similar to The Interpreter Theory in proposing that the function of consciousness is primarily verbal (and therefore social). While the interpreter theory explains puzzling neurological evidence, and is plausible from an evolutionary perspective, the argumentative theory goes farther by explaining puzzling evidence from cognitive science, social psychology, and behavioral economics, especially relating to the phenomenon of cognitive biases.


In short, the argument for the argumentative theory is primarily that the human mind seems well suited to generating persuasive arguments (stories) and not so well suited to unbiased rational decision making. In particular, the notorious Confirmation Bias ensures that information that supports our position is easily recalled, whereas useless opposing evidence rarely comes to mind unless we make a strong effort to recall it. Another form of evidence comes from how people change their decisions when they know they will have to justify their choice. People then often make worse decisions because they choose actions based on how easily justified they are. See Arguments for an Argumentative Theory.

Body Model As discussed in Mind, all mental processing of reality depends on creation of mental models, rather than some sort of direct apprehension of reality (Naive Realism.) More familiar examples of this relate to visual perception, such as of color, but this is equally true of all perceptible aspects of our body: our pose, our motion, touch sensation, etc. The generation of conscious perceptions is almost an incidental aspect of the body model, which is a crucial part of the unconscious control system we use to regulate our movement. This control depends on an Internal model of our body's behavior as a Dynamical system. Although we directly sense our body pose and motion through Proprioception and vision, controlled motion relies to a large degree on Feed-forward control using a predictive model (analogous to the Kalman filter) that estimates body motion by fusing sensory feedback information with the expected sensory response to the last motion command (the Efference copy.) Because of this feedback relation, the body model is tightly integrated with Action. This ability to control our body is learned, but the result of this learning is fundamentally unconscious Procedural memory that we can only access as a demonstrated (or visualized) motion. While we can describe the motion that must be done, the only way to learn is to practice making the motion, training the body model and Action to work together with smooth coordination. See Representational Opacity. Our tendency to visualize motion in an embodied manner gives evidence that the body model can operate in relative independence from actual motion; however, the most dramatic demonstration of the fundamental autonomy of the body model from the actual body is the phenomenon of phantom limbs, where the perception of an existing, functioning, feeling limb persists even after the limb is gone.
Phantoms in the Brain discusses phantom limbs at length, and also other puzzling aspects of body model, such as the relative ease with which our body model extends outside ourselves in the rubber-hand illusion. In the Level Map, the body model is shown to be in two-way communication with Emotion and the viscera. Our emotions influence our internal body state, and our perception of this “gut feeling” influences our emotion. Regulation of the interior milieu to implement Homeostasis can be considered an aspect of the body model. This mind/body connection is well established by science, and the effectiveness of manipulating the mind to affect the body or of manipulating the body to affect the mind has been exploited in alternative therapies, but the lack of appreciation for how this connection is mediated by the
body model has led some to suppose that mind is somehow actually implemented in the body as a whole, and to look for examples of information processing outside the brain (such as in the Enteric nervous system.) This is an example of Level Confusion.
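The feed-forward scheme described above, in which an efference copy of the motor command is fused with noisy proprioceptive feedback, can be sketched as a toy one-dimensional estimator. The gain, noise level, and movement command are illustrative values, not physiological ones.

```python
import random

# Toy 1-D feed-forward estimator: fuse the expected effect of the last
# motor command (efference copy) with noisy proprioceptive feedback,
# loosely analogous to a Kalman filter with a fixed gain.
random.seed(2)
GAIN = 0.3      # trust placed in sensory feedback vs. the prediction
true_pos = 0.0  # actual limb position
est_pos = 0.0   # the body model's estimate

for _ in range(50):
    command = 0.1                              # motor command: move +0.1
    true_pos += command                        # the body obeys
    predicted = est_pos + command              # efference copy prediction
    sensed = true_pos + random.gauss(0, 0.05)  # noisy proprioception
    est_pos = predicted + GAIN * (sensed - predicted)

# The estimate tracks the true position closely despite sensor noise.
```

Because the prediction carries most of the weight, the estimate stays smooth and accurate even with noisy sensing; and because the estimator can run on predictions alone, it also suggests how a body model could persist in the absence of the limb itself, as in phantom limbs.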

Consciousness In philosophy consciousness has long been considered the central mystery of the mind. We use the concept of Story to encompass the more uniquely human aspects of consciousness. Semantically, this avoids confusion with other meanings of the word, especially the common sense of “not being unconscious” (being aware of our surroundings and responding.) More importantly, preferring “story” gives a distinctive spin that is consonant with our adoption of The Interpreter Theory and The User Interface Analogy and also with the argumentative theory of the evolution of consciousness. Intuitive views of consciousness and reason are based on introspection, and vastly underplay the creative and inferential nature of self-understanding (see Naive Realism) because most of what goes on in our minds is inaccessible to us (see Unconscious).

Intentional Opacity See Intentional Design for a discussion of the design stance and the intentional stance. In plain language, people act in an intentional way. For example, a person will consistently pursue some goal by whatever means available. In addition, we are often consciously aware of our intentions. This has led people to suppose that intention is a conscious process: if we observe a person acting in an intentional manner, we suppose that he has consciously arrived at that intention, and is now pursuing it. Yet there is a great deal of puzzling evidence that conflicts with this model of conscious intention. We often perceive people's intentions as being different than the intention that they report, especially when they appear to be acting in a self-interested way. Also, people often struggle to explain why they are doing what they are doing, and may admit the importance of subjective motivations such as emotions and gut feelings. Intentional Design says that we can regard people as being designed by evolution, so we can also regard their intentional behavior and their conscious awareness of their intentions as being designed to pursue the ultimate intention of survival and successful reproduction. If we are consciously aware of all of our intentional behavior, then we say that we have been designed for intentional transparency. If this is not always so, then there is intentional opacity.

Human Intentions What about the intentional design of humans under the evolutionary design stance? Is the intentional design transparent or opaque? On one hand, most people say they would like to have children someday, or if it's too late, have some regret that they didn't have children. On the
other hand, humans spend almost all of their time pursuing intentions that they don't see as related to having children, and there are huge blobs of motivational opacity in the human psyche. Although people say that they do something “because I love her” or “because it's the right thing to do”, when pressed for the reason why they love her or why it's the right thing to do, they stop, and think, and generate some sort of plausible explanation. They have applied themselves to the intention of explaining their action (see The Interpreter Theory). In short, we think there's a good case that humans are the victims of some perplexingly opaque intentional design. It is a form of Level Confusion to argue that humans are not designed to implement the intention of reproductive success because humans don't have that intention. In fact, people's intentional motivations are so opaque that we can often predict people's behavior better using the intentional stance (taking reproductive success as their primary intention) than we can if we listen to what they are saying. For example, in mate choice the evolutionary intentional stance is better at predicting people's behavior than they themselves are at predicting their own behavior.

Why this Mismatch We observe humans as behaving in a certain way that seems adaptive, yet when we ask people about that sort of behavior, they are unaware of it, or deny it. This distortion of self-perception is similar to that in Positive Illusions. In the context of influence by mass media, this distortion has received some attention as the Third-Person Effect, where people expect that others are more influenced by media than they themselves are. See also The Third Person Effect. How can there be this mismatch between conscious intention and behavior, and why is it this way? First, we must understand that (as The Interpreter Theory proposes) intentional behavior is not primarily caused by conscious intention. Although non-intuitive (see The User Interface Analogy), it is clearly possible for intentional (goal-oriented) behavior to exist without conscious intention. For one thing, non-human animals show intentional behavior, though few suppose that they are conscious in the human sense. For another, humans spend a great deal of their time “running on autopilot”, engaging in complex goal-oriented behaviors (such as driving) without any associated conscious intention. The key point is that as long as people do behave adaptively it isn't necessary that they be aware of behaving adaptively. It is only necessary that their actual behavior be adaptive. When your dominant motivation is socially unacceptable (self-serving) it can even be adaptive to misunderstand yourself.

The Argumentative Theory But if conscious intention doesn't cause behavior, then why are we conscious at all? The Argumentative Theory proposes that the purpose of consciousness is social coordination. Consciousness is a largely verbal process that exists to explain and justify our own behavior (see The Interpreter Theory) and to persuade others to behave in ways that are beneficial to us (and often to them as well). Our conscious awareness of our intention may differ from what others might reasonably infer from our behavior. From an evolutionary perspective, it is deeply unsurprising to find that people
conform strongly to social influences (see Conformity Bias) and engage in self-serving behavior. It is what we would expect because these behaviors are adaptive. The evolutionary pressure on consciousness is different because the purpose of consciousness is to generate persuasive socially acceptable arguments. Social conformity and self-interest are not generally considered valid arguments in favor of a position. These behaviors are adaptive (and therefore necessary), but are not a legitimate basis for argument. Since it is not a legitimate basis for argument, it is useless or harmful for the interpreter to generate such an explanation, so we don't. This mismatch between behavior and conscious intention is then an evolutionary adaptation to individual/group conflict. See also The Evolution and Psychology of Self-deception for the Evolutionary Psychology interpretation of this sort of behavior.

The Interpreter Theory The interpreter theory says that decision-making, judgment, perception, and virtually everything else that takes place in the brain is unconscious, and that what we understand as conscious thought is a distinct process that after the fact generates explanations for our actions and our experiences. Similar claims sometimes presented are that consciousness is an illusion or that consciousness is out of the loop in decision-making. At first this idea seems nonsensical, but there is good evidence from neurology and psychology that the mind does in fact frequently function this way in the presence of neurological and psychological stressors (Confabulation and rationalization). Michael Gazzaniga developed this theory during his study of people with surgically split brains. It is only a small step to suppose that after-the-fact explanation is the norm. We argue that this structure of mind is unsurprising given the way the brain works and how the human brain has evolved from the brains of simpler animals that lack language and conscious thought. See Representational Opacity, but in short, the simpler brains of other animals easily make decisions without requiring the generation of explanations. Evolution follows an “if it ain't broke, don't fix it” strategy, so we make decisions in the same unconscious way. The unique human need to generate explanations for our actions or to argue in favor of possible group actions has been addressed by the addition of a distinct new capability, the interpreter. Our concept of Story is closely related to the interpreter theory, while Jonathan Haidt uses the metaphor of the elephant and the rider. The Argumentative Theory goes on to explain how this separation of explanation from decision-making frees our internal story-teller to engage in biased stories intended to give others the same intuitive understanding that we've already intuitively arrived at. 
One objection to the interpreter theory is “Wouldn't this mean there is no free will?” While this theory is not consistent with conscious free will as traditionally understood in philosophy, Unconscious free will is still quite possible.


Naive Realism In the basic meaning of Naïve realism, someone mistakes their perception of reality for reality itself. It's easy to fall for the user interface illusion because our perceptions seem really real and because perception does tend to be fairly accurate. We are only aware of the failings of perception when we consider optical illusions or conflicting eyewitness testimony. Such naivete may extend to worldview, judgment and emotion (The Happiness Hypothesis discusses this.) Some believe that their worldview is “just the way the world is,” and that their judgments and emotions are the only way to respond to situations and events. This level of naivete clearly leads to conflict with others because they frequently do disagree with our interpretations. The belief that everyone has the same worldview is clearly untenable, but when we are angry most of us believe that we (and all right-thinking people) have the correct worldview, and anyone who disagrees “just doesn't get it”, or is biased, crazy or evil.

Personality As cultural animals, humans have come up with many ways to describe how particular individuals tend to behave, and have also devised many ways of categorizing individuals (personality types). Since the social world is often the most important aspect of the human environment, from an evolutionary perspective it is not surprising that people are highly motivated to understand any regularities in the social environment that could be exploited, just as we are interested in useful properties of the material environment (what is good to eat, and so on.) This is a sort of theory building (Story), which is useful descriptively (in communicating our understandings about people) and predictively (in anticipating how they tend to behave.) Given that people do have behavioral regularities, these must somehow be manifested in the physical structure and connectivity of their brains, which we would also expect to be under strong genetic influence. That is, personality is real not only in the sense that it is useful, but also in having identifiable relations to measurable quantities. See heritability of the big five. If significant variation persists in a population, then an evolutionary approach primes us to look for situational advantages of those differences. If there were one best way to be (a most Adaptive Behavior), then we would expect everyone to converge to that ideal. The persistence of behavioral differences allows individuals to exploit different behavioral niches (different strategies.) See also Personality Psychology and The Nature of Psychological Maturity. The reality of personality and its significant hereditary component is also inconsistent with philosophical conceptions of free will as an uncaused cause.

Types vs. Traits


Whenever someone says “there's two kinds of people…” they are proposing a personality type. This has presumably been going on since prehistory, and early recorded theories of personality take the form of types (see the Four temperaments). Attempts to quantify personality using rating scales and statistical analysis are a modern elaboration that is more rigorous, but does not supersede personality typing. Formally, the obvious difference between a type and a trait is that a type is discrete, whereas a trait is continuous. For example, in the Myers-Briggs Type Indicator, a person is either introverted or extroverted, whereas in the Big Five personality traits, extroversion can take an in-between value. There is a more subtle yet far more important difference. The quantitative trait approach allows us to say in a rigorous way that, of the hundreds or thousands of personality characteristics, certain characteristics go together. For example, this research tells us that if a person is friendly and likes to be with people, then they are also rather likely to be cheerful and assertive. That is, trait research is oriented toward finding general regularities across the entire population. The correlation between these more specific traits suggests that there is some underlying common cause in the brain, and this interpretation is supported by the heritability of traits. While personality typing does imply that traits go together, typing is particularly valuable as a way to appreciate the specifics of an individual. A person is far more than the sum of their parts; particular combinations of personality traits are meaningful in themselves because they predict certain sorts of behavior patterns and aptitudes, both in intimate relationships and in broader social contexts. That is, personality types describe the social relevance of particular kinds of personalities.
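The formal difference can be made concrete with a small sketch (our own toy illustration; the names, scores and the 0.5 cutoff are all invented) of what happens when a continuous trait score is collapsed into a discrete type:

```python
# Hypothetical extraversion scores on a 0-1 scale (invented for illustration).
scores = {"Ana": 0.52, "Ben": 0.48, "Cho": 0.95}

def as_type(score, cutoff=0.5):
    """Collapse a continuous trait score into a discrete type at a cutoff."""
    return "Extravert" if score >= cutoff else "Introvert"

for name, s in scores.items():
    print(f"{name}: trait={s:.2f}  type={as_type(s)}")

# Ana and Ben differ by only 0.04 yet land in opposite types, while Ana and
# Cho share a type despite a 0.43 gap: that is the information a cutoff
# throws away.
```

The point is not that typing is wrong, only that the two representations answer different questions, as the text argues.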
For example, a discussion of a personality typology such as Myers-Briggs will often mention typical occupations that are chosen by or are suitable for a particular type, and personality typing is also frequently proposed as a way for psychologically savvy managers to appreciate how to best motivate and direct their employees. Although quite a few personality type systems have tests that can be used to assign a personality type, outside of the management context we can't ask people to take a test so that we can better manipulate them, so the practical application of personality type theories is usually based on the user's subjective impression of a person's type.

The Big Five The big five is a quantitative (trait) personality system. This approach dominates personality psychology research because it has a sound statistical basis and is the result of a convergence of many different-seeming trait schemes. Personality type tests are created top-down by taking the already formed type theory and then designing questions that “get at” the desired personality dimensions. In contrast, the big five took a bottom-up approach, asking people to rate how well each word on a list of personality-related adjectives described them, then finding which words hung together. One of the quirks of this approach is that Factor Analysis gives a statistically rigorous way to define “hanging together”, but it doesn't tell us what to call the dimensions it identifies. The names below are generally used in the research literature, but there is plenty of room for disagreement about whether those names best capture the implied category.


Openness to Experience (vs. closed-mindedness) describes the breadth, depth, originality, and complexity of an individual's mental and experiential life. Associated traits: Imagination, Artistic interests, Emotionality, Adventurousness, Intellect, Liberalism.
Conscientiousness describes socially prescribed impulse control that facilitates task- and goal-directed behavior, such as thinking before acting, delaying gratification, following norms and rules, and planning, organizing, and prioritizing tasks. Associated traits: Self-efficacy, Dutifulness, Achievement-Striving, Self-Discipline, Cautiousness, Orderliness.
Extraversion implies an energetic approach toward the social and material world and includes traits such as sociability, activity, assertiveness, and positive emotionality. Associated traits: Friendliness, Gregariousness, Assertiveness, Activity level, Excitement-Seeking, Cheerfulness.
Agreeableness contrasts a prosocial and communal orientation toward others with antagonism and includes traits such as altruism, tender-mindedness, trust, and modesty. Associated traits: Trust, Morality, Altruism, Cooperation, Modesty, Sympathy.
Neuroticism contrasts emotional stability and even-temperedness with negative emotionality, such as feeling anxious, nervous, sad, and tense. Associated traits: Anxiety, Anger, Depression, Self-Consciousness, Immoderation, Vulnerability.
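The bottom-up procedure can be illustrated with a toy simulation (entirely invented data; real studies use thousands of raters and adjectives, and full factor analysis rather than the bare principal-components sketch below). Adjectives driven by the same hidden factor "hang together" statistically, and the analysis finds the dimensions without naming them:

```python
import numpy as np

# Toy simulation (invented data): two latent factors generate ratings on six
# personality adjectives for 1000 simulated raters.
rng = np.random.default_rng(0)
n = 1000
sociable = rng.normal(size=n)  # latent "extraversion-like" factor
orderly = rng.normal(size=n)   # latent "conscientiousness-like" factor

def noisy(latent):
    # each adjective rating = its latent factor plus individual noise
    return latent + rng.normal(scale=0.5, size=n)

adjectives = ["friendly", "talkative", "cheerful",
              "organized", "careful", "thorough"]
ratings = np.column_stack([noisy(sociable), noisy(sociable), noisy(sociable),
                           noisy(orderly), noisy(orderly), noisy(orderly)])

# Words driven by the same latent factor correlate strongly: they "hang together".
corr = np.corrcoef(ratings, rowvar=False)
for word, r in zip(adjectives, corr[0]):
    print(f"friendly vs. {word}: {r:+.2f}")

# An eigendecomposition (the core of principal-components/factor methods)
# finds two dominant dimensions, but naming them is left to the analyst.
eigvals = np.linalg.eigvalsh(corr)
top_two = eigvals[::-1][:2]
print("share of variance in top two dimensions:",
      round(top_two.sum() / eigvals.sum(), 2))
```

Notice that the output tells us "friendly" and "talkative" co-vary and that two dimensions dominate, but nothing in the mathematics says to call them Extraversion and Conscientiousness; that naming step is exactly the judgment call described above.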

Representational Opacity Representational opacity is the idea that the workings of the unconscious are necessarily invisible to conscious awareness because the way the unconscious mind works is fundamentally different. The conscious mind is largely verbal (see The Interpreter Theory), so its workings are necessarily discrete, symbolic (or qualitative) and somewhat logical, whereas the unconscious is quantitative, approximate, profusely connected and semantically ambiguous.

Artificial Intelligence Artificial Intelligence (AI) research has succeeded in creating usefully human-like cognitive abilities in areas such as speech recognition. More interesting for our purposes is what these efforts have revealed about the mind. In our view, the most important result has been a correction of misunderstandings about the mind that had resulted from the limitations of introspection. Broadly, one of the puzzles raised by AI research is that many things that seem easy to humans (such as recognizing an object in a picture) turn out to be quite difficult, while other things that seem hard (like logic and chess) turn out to be relatively straightforward. From the beginnings of AI in the 1950's, AI researchers over and over again predicted rapid progress in specific areas such as natural language understanding, but failed to deliver. In contrast, progress was rapid in solving well-defined problems with a logical structure, especially games such as chess, where computers soon rivaled all but the very best humans. Significant progress in fuzzier areas such as Natural Language Understanding and object recognition began only when (in the 1990's) researchers started to abandon logical symbolic AI in favor of sub-symbolic approaches that emphasized quantitative (continuous or relative) aspects of situations, often using statistical formalisms or models that roughly approximated what was known about brain structure. In fairness to early AI researchers, these methods were only made practical by the approximately million-fold increase in computer power from 1955 to 1995, but AI researchers had also been blinded by the user illusion presented by the human mind. We fall for the compelling intuition that the important action in the mind is conscious, failing to appreciate that what is unconscious we, uh, well… don't know about. They can't be faulted for this misunderstanding, because at the time the only available model of the unconscious was the dark, primitive Freudian version.
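The symbolic/sub-symbolic contrast can be sketched with a deliberately tiny example (our own illustration, not drawn from the AI literature discussed here): a crisp symbolic rule versus a perceptron that learns the same boundary statistically, storing it only as continuous weights:

```python
import random

# Toy task: decide whether a 2-D point lies above the line y = x.

def symbolic_rule(x, y):
    # Classic symbolic-AI style: one crisp, human-readable condition.
    return y > x

# A perceptron learns the same boundary from labeled examples and encodes
# it only as continuous weights -- nothing in it is readable as a "rule".
random.seed(1)
w = [0.0, 0.0, 0.0]  # weights for (x, y, bias)
for _ in range(2000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    target = 1 if y > x else -1
    predicted = 1 if w[0] * x + w[1] * y + w[2] > 0 else -1
    if predicted != target:  # nudge the weights after each mistake
        w[0] += 0.1 * target * x
        w[1] += 0.1 * target * y
        w[2] += 0.1 * target

test_points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
agree = sum((w[0] * x + w[1] * y + w[2] > 0) == symbolic_rule(x, y)
            for x, y in test_points)
print(f"perceptron agrees with the symbolic rule on {agree}/500 points")
```

The learned version is approximate and opaque where the rule is exact and transparent, which is the trade the sub-symbolic turn accepted in exchange for handling fuzzy problems that resist crisp rules.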

What Does this Tell Us? The difficulties of AI didn't arise primarily from failures in implementing the methods the researchers came up with, but in the failure to understand how natural intelligence actually works. For crucial capabilities such as Visual Perception, evolution had already found a solution that was approximate, ad-hoc, complex, brute-force, Massively Parallel, analog, and rather error-prone. The evolved solution was inelegant, but it was reasonably thrifty in its brain requirements, and (most of all) it was fast. This was important for keeping our rodent-like ancestors from being stepped on by dinosaurs. Our mammal ancestors had to solve many of the same problems that we do, including locating the appropriate environment, avoiding hazards, getting food, finding suitable mates, and offering some degree of parental care. This they did without any need to offer a Story about what they were doing. No need for story, so no need for consciousness, symbols or logic. Doing just fine without them, thankyouverymuch. These animals weren't unconscious in the sense of being knocked out, but they were unconscious in the sense of not continuously observing what they were doing and coming up with explanations for it in terms of their (socially acceptable) intentions. When the engine of Genetic-Cultural Coevolution got started, and it was time for a verbal capacity that was capable of generating and evaluating explanations, evolution didn't rebuild the mind from scratch, giving it a new architecture with logico-symbolic pattern detection and decision making. Instead, the design philosophy was, well… evolutionary. In other words, a hack. A new facility was designed with the needed behavior, but all that old stuff was left pretty much as it was. 
Some new connections were added so that our existing intuitive understandings could guide our stories for explaining those intuitions, and other new connections could inhibit behavior when our inner interpreter couldn't find an acceptable justification. Language is valuable both as a means of social coordination: “You go this way and I'll go that way…” and for cultural information transmission “Ya see, you push down here, and Pow!”. It's hard to know what the relative adaptive values of these forms of communication were to our ancestors in different eras, but there are a number of puzzling quirks of our consciousness and associated verbal abilities which suggest that effective persuasion (social coordination) has been weighed rather heavily, so that unbiased reasoning and impartial communication are much harder than telling a Story from your own perspective (see The Argumentative Theory).


This after-the-fact architecture of consciousness creates a highly useful disconnect between the stories that we use to explain what we've done and our actual processes of motivation and judgment that generated those behaviors. Although most of the time our stories are a good-faith effort to distill some communicable essence out of what we know and understand, the disconnect means that our communications can become biased and even downright deceptive without us forming any conscious intention to deceive. This is one of the areas where the tension of individual/group conflict is subtly expressed.

Reprogramming the Mind “The word is EPTIFY. Don't look in the dictionary. It's too new for the dictionary. But you'd better learn what it implies. EPTIFY. We do it to you.” (Stand on Zanzibar, John Brunner) The idea of mind as computer leads to the obvious question: why can't we reprogram the mind? The reprogramming metaphor comes in two major flavors: reprogramming someone else (Mind control), and reprogramming yourself (presumably for greater happiness or success.) Outside of science fiction this proves challenging because of several fundamental difficulties.

How To Direct approach: Reprogramming the mind by talking to someone (possibly yourself) is like trying to reprogram a mall information kiosk through the touch screen (see The User Interface Analogy.) Perhaps we can also exploit other seemingly intended interfaces such as our appreciation of music, our spiritual sensibilities and yearning for social connection, and the semi-articulate symbolic languages of art, poetry and myth.
Interface hacks: Another possibility is that we can hack the system by exploiting a seemingly accidental interface, a bug or misfeature that is ordinarily mostly harmless. Meditation, and perhaps Hypnosis, fall into this category. Meditation somewhat resembles a Denial-of-service attack. This category fades over into the next with body work such as massage, yoga or Tai Chi, and tricks such as sweat lodges and fasting.
Hardware hacks: The third major thread in thought on reprogramming is that we should get out our screwdriver and open up the back of the mall kiosk. That is, we may need a hardware intervention such as drugs or genetic engineering. Though mainstream psychopharmacology carefully distances itself from such radical views, the goals and the means are basically the same.
Given the limitations and side-effects of all these approaches, a comprehensive effort at reprogramming the mind should make use of all three. This is the trinity advocated in The Happiness Hypothesis: cognitive therapy, meditation and antidepressants.

Why it is Hard


Most discussion of mental reprogramming concentrates on the “how to” above, perhaps operating under an assumption that with the right techniques, based on understanding of how the mind works, it will all become straightforward — that we will be able to feel and act more or less however we please, and (more ominously) our controllers could “do it to us.” In reality, there are several serious obstacles faced by would-be mind programmers.

Undocumented Interfaces What if the only problem with programming the mind is that we don't know how? In the computer analogy, the mind's interface definitely is undocumented; this is a serious, but ultimately surmountable obstacle. This difficulty is more serious than the computer analogy implies, because (unlike any software) the structure of the mind is evolved, so need not make any sort of sense in a human way. As Marvin Minsky noted, “The brain is a hack.” Yet there are several other intractable challenges in mind programming that arise from the mind's design, some accidental and some intentional.

Misfeatures The mind was not designed to run multiple programs, with a computer's easy context switching, so it has neither suitable high-bandwidth interfaces nor the necessary internal signal paths to directly modify the details that would need to be changed. Some behavior is hardwired (though the extent of this is a subject of heated debate, see Nature Versus Nurture), and some important things also seem to be difficult to alter (write once), such as fear learning in the amygdala and the neural remodellings that happen during developmental critical periods. Metaphor notwithstanding, the brain lacks a precise analog of software. There is a clear distinction between the hardware of the brain and the nonphysical stuff that it represents, but the precise engineering analogy here is between signal processor and signal (see Mind.) Even a worm can adapt its behavior in response to the environment, but learning is not the same as programming. Minds are designed to learn, but they are not designed to be (easily) reprogrammable.

Security Consider the mall kiosk again. The designer could easily have put a USB port below the screen and added an “advanced settings” menu with a “boot from external media” option. This would surely be convenient for software upgrades, so why didn't they do that? Instead they impose security by passwords and physical locks. The abuse potential of user-reprogrammable mall kiosks is obvious, and humans are in basically the same situation. We must strike some sort of balance between easy access and security. There is a lot of conceptual malware, so we need to be careful, but we also gain vastly from our communication and collaboration with others. In fact, mind control, or at least behavior control, is not only possible, it is everywhere in our lives. Social psychology experiments such as the Milgram experiment and the Stanford prison experiment show the surprising ease with which most people can be induced to do seemingly unreasonable things merely by being asked (in the appropriate social context.) These results are often interpreted as showing our (regrettable) lack of proper moral judgment, which should be remedied by ethics classes, sensitivity training, or Bible study. What we see instead is the workings of social coordination. Humans normally function in a social context, and our continued existence has often depended on our willingness to comply with requests that we do not understand, asking us to do things that we would not normally do. In the current context of mind reprogramming, the point is that our ability to be influenced is subject to security measures: we must be presented with appropriate cues such as symbols of authority or the appearance of compliance by others. People have paid much attention to the possibility of mind control by political authorities, but all organized groups depend on our potential for being influenced into coordinated action. Religious groups such as churches and cults make particularly broad and heavy use of all known reprogramming techniques, though it is rarely articulated what they are doing and why.

Evolutionary Contingency So we can see why we wouldn't want others to be able to easily reprogram us, but why shouldn't we be able to reprogram ourselves? Why can't we simply consciously choose to do whatever we will, and to feel however we would wish? And why does feeling matter at all? A simple (and largely correct) answer is that (as noted above) we just don't happen to have been built that way; however useful such flexibility of our minds might be, the architecture of our brain is inherited from much simpler animals that entirely lacked conscious thought, and for whom reprogrammability made no sense. While evolution can produce remarkably well-optimized designs, it cannot entirely transcend the historical accidents of our origin. Representational Opacity is the related idea that we cannot have any direct control over (or clear insight into) unconscious processes such as our feelings, intuitions and motivations. We cannot do so because the unconscious is not structured as a logico-symbolic computer with discrete belief-states. It is instead an ad-hoc kludge of semi-regular neural networks and semi-adaptive hardwired subsystems. This is why we can effortlessly make gut judgments about complex situations, yet struggle to explain our conclusions.

Out of the Loop In some cases our inability to control unconscious processes may also be adaptive: it may help us to succeed in life. We argue above that we are not designed to be programmable. What we now consider is that we are designed not to be programmable. This is most obviously true with life-critical processes such as visceral regulation (see Body Model.) Also fairly obvious from the perspective of Evolutionary Psychology is that our unconscious perceptions, our emotions and our judgments all function the way that they do because they motivate us to act in ways that are adaptive. That is, they cause us to be successful, having children who may in turn be even more successful. The goal of evolution is not to make us happy; quite the opposite, we are evolved to be neutral or miserable much of the time. If by some drug or other means of reprogramming we were deprived of all that carefully designed misery, then we might well choose to abandon the arduous tasks of reproduction and social engagement, and might instead withdraw from the world, selecting ourselves out of the gene pool, like Tennyson's Lotos-Eaters, or the Shakers. We do not claim that such a choice would be wrong, only that such views will always have a minority mind share, simply because moderately discontented people are so much more motivated to propagate their ideas and their genes. Less obviously, and more controversially, there are many life decisions that are far too important to be left to conscious whim. Evolutionary Psychology predicts that mate selection is one such area, and there have been some suggestive results. This is a large area of Intentional Opacity in the human psyche.

Philosophic Digression Free Will Note that the idea of self-reprogramming makes no sense without the understanding that there are some constraints on conscious free will. We definitely cannot feel as we choose, and frequently have difficulty acting as we choose. The target of self-reprogramming is clearly the Unconscious layers of mental processing, which do almost all of the work in the mind (see Level Map.)

History All of these reprogramming techniques have been in use for thousands of years, though there have been some recent advances in hardware interventions (synthetic drugs, direct brain stimulation.) So reprogramming is largely a groovy re-branding of an old bag of tricks (see NLP.) This long history makes it seem rather unlikely that there will be any sudden dramatic advances in non-hardware interventions. The troublesome nature of emotions and of the unconscious has also been a major concern of the world-wide wisdom literature, and of early philosophy. See, for example, the chariot allegory in Plato's Phaedrus.

Categorizing Techniques There is really no clear division between the three categories of reprogramming techniques presented above, but such distinctions are still useful because they reflect the position of a particular intervention on a scale of design intent. When we say that a human characteristic is intentional, we mean that it is useful to treat it as a feature designed for some purpose, and that this is coherent even in the absence of an actual intelligent creator because the action of evolutionary adaptation creates this purposefulness (see Intentional Design.) In a high-level direct approach like talk therapy, using a clearly intended interface, we expect that the going is likely to be slow, and the effects modest. With a low-level hardware hack such as a drug, effects may be sudden and dramatic, but there can also be serious side-effects. Interface hacks occupy an intermediate position. It is unlikely that they will create fast, easy change (or they would have been selected against), but it is possible that by diligent practice some individuals may make dramatic changes. The distinction between the direct approach and interface hacks can't be appreciated without Evolutionary Psychology, because without applying evolutionary theory one cannot say whether a behavior is adaptive (appears intentional) or not. While this distinction does have some validity, remember that (just as in programming) today's hack, if proven useful enough, may be selected for, and become tomorrow's recognized interface. If the distinction between direct interventions and hacks goes unrecognized, then the difference between mind and body approaches is often exaggerated. This probably comes in part from the Level Confusion in an assumed Mind/Body Dualism. Once we concede that mind arises from the body, then it is clear that any intervention that has any effect whatsoever must necessarily change the physical state of the body. This is equally true of talk therapy and drugs.

Story We use story very broadly to mean any sort of explanation, theory, prediction, justification or verbal description. Any narrative inevitably contains these elements, whether it is a myth, a story intended to entertain, a persuasive political speech, or a scientific publication. Important characteristics of stories are that they are inherently verbal (we could speak them if we chose to), and that they have an ambiguous relation to the Truth. We argue that creation of story is nearly synonymous with Consciousness. According to The Interpreter Theory, the only function of consciousness is making story. This is at odds both with intuitive and philosophical concepts of conscious free will (see Determinism vs. Free Will), but is consistent with many streams of puzzling evidence from neuroscience, psychology and behavioral economics, and with evolutionary theories of the origin of consciousness. See Representational Opacity. Because story is fundamentally verbal, it is also fundamentally social. The ability to speak is useless without someone to communicate with. See The Argumentative Theory and The User Interface Analogy. Fictional stories are a natural outgrowth of the necessary ability to explain our actions to others and to convince them to agree with us in practical matters. Looking at the actions of the interpreter as story-telling gives a more nuanced way of viewing those times when the interpreter says something that isn't exactly true.

Truth and Social Influence Unless we're consciously manipulative, we tell stories either to get people to think the way that we do, or to establish empathy by getting people to think that we are thinking the same as they do. In storytelling, it is understood that the ends justify the means. You can say whatever you need to in order to carry the story payload. It is understood that you won't muddy the message with conflicting evidence. Of course, getting people to think like you is self-serving, especially when your thoughts are self-serving. But getting people to think like you is also crucial for the transmission of culture. Trying to get people to think like you is fundamental to communication, and thus to being human. You could say it is a moral imperative. The question is where truth and deception come into storytelling. A good story carries truth, which normally happens only when the teller has a true belief. Can we recognize some stories as deceptive “lies” independent of whether they happen to be true? The canonical lie is making a statement which you believe to be false in order to influence others' behavior to your self-serving ends, but a lie doesn't have to be false, it only has to be a deliberately deceptive story. In Evolutionary Psychology there is much investigation of deceptive, self-promoting behavior, and in social psychology the related concept of Motivated Reasoning. Our more positive spin is that, first of all, none of us knows whether we are right or not, and we don't even know most of what we think. All we know is that we can generate a story that is a useful summary of some of our understandings. Second, presenting our thoughts in a persuasive way is a creative act, the fundamental mechanism of cultural transmission, and hence Cultural Evolution.

Unconscious From the information-processing perspective of the Level Map, we can see that the vast majority of what goes on in the mind consists of things that we are not conscious of. The lower levels in the level map are things that are automated, that happen without any conscious intervention or introspection. For example, part of the puzzle of an Optical illusion is that visual perception is automated, so our knowledge of the inaccuracy of the perception has no effect. This emphasis on the overwhelming predominance of unconscious processing leads to a view of the unconscious mind as necessarily highly effective and in some sense “smart”, as in the Adaptive unconscious. There are a variety of theories acknowledging that the mind has effective unconscious means of accomplishing various tasks, which are in addition to, and complement, conscious mental processing. See Dual process theory. Broadly, the idea is that some tasks (such as visual perception) are both important and ill-suited to conscious management, so are implemented unconsciously. A skill (such as driving) may also require a great deal of conscious attention at first, but with practice becomes largely automated. Note that mental capabilities which existed in non-conscious human predecessors will tend to also be unconscious in humans because of Evolutionary Conservation. This meaning differs somewhat from the usage in Depth psychology (see Psychoanalytic unconscious and Shadow (psychology)). See also Unconscious mind. These older concepts of the unconscious are not actually invalidated by the newer information-processing theories of the Adaptive unconscious. The unconscious of depth psychology arises from introspective examination, and necessarily deals with that subset of mental phenomena that can come into our awareness, and particularly relates to the sometimes-fluid nature of the interface between consciousness and intuitive processes.

The User Interface Analogy In short, consciousness is the brain's user interface. Like a computer user interface, it hides a great deal of ad-hoc complexity inside a smooth conceptual surface that is designed to be as intuitive as possible. A software designer goes to considerable effort to come up with an appealing idea for how the user wants the computer to behave, and then does whatever is necessary to preserve this user illusion in the face of the peculiarities and limitations of the software and hardware platform that he is working with. See The User Illusion for a version of this viewpoint. See also the complementary interpreter theory.

It is difficult for someone who is not a programmer to fully appreciate this point. For a sample of how ugly it gets how quickly, consider the idea of a “file”. It seems simple: type some stuff into a word processor, say what you want to call the file and which folder to put it in, then save it. Later you can come back, navigate to the file, open it again, and your text is still there. Now consider the FAT filesystem developed for the now obsolete and very unsophisticated MSDOS operating system. Just look at the description of how it works and the layers of complexity that have accreted. This complexity exists only to remember the bits on disk; what goes on behind the scenes when you actually create those bits and then navigate to find them again using your mouse is astronomically more complex. The FAT filesystem resembles the brain in another way: it also evolved. A considerable amount of the complexity and disorganization in the brain comes from the same process that gave us long filenames in MSDOS while preserving the ability of old programs to use short (but funny-looking) names.

Some may object to this metaphor on the grounds that it is just the latest in a long series of arguments that the brain is like whatever new piece of technology comes along. When people first started to think of the mind as a physical rather than magical or divine process, they said the brain was like a clock. When the telegraph came along, we were told that nerves were like telegraph wires. When mechanical calculators were invented, the brain was just like that. When servomechanisms were invented that had goal-seeking behavior, the new field of Cybernetics promised that understanding of the brain was imminent.
When the first computers came along they were “giant brains.” Now we see that saying the brain is like a clock is simplified and distorted to the point of absurdity, and the telegraph and numerous other analogies have fallen from use. Aren't these computer analogies wrong too? Well, no. As technology has advanced it has acquired more and more mind-like aspects, so these analogies have become more and more powerful. When we replace the clock and other mechanisms with the computer and software, we can see that a sufficiently complex system, even though it is deterministic or mechanical in every detail, can nonetheless exhibit complex and unpredictable behavior somewhat similar to a mind. Mind has emerged from the bits. More is different.

X proved real

One of our favorite rants concerns the currently common science news headline of the form “Scientists show X is real”, where X could be a disorder such as:
● Dyslexia,
● Fibromyalgia,
● Depression,
● Attention deficit/hyperactivity, etc.
Or a subjective experience such as:
● The pain of rejection, or
● The taste of expensive wine.


On reading, we find that a brain scan such as fMRI or PET, or EEG brain waves or a neurochemical test has shown that the disorder or experience commonly considered to be psychological or subjective is in fact measurably neuro-electro-chemically different. Clearly this interpretation is based on the belief that anything psychological is “not real”. Anything that is measurable is real, and therefore not psychological. This can only make sense if the speaker believes in Mind/Body Dualism.
The belief that mind is a primarily nonphysical process that takes place in the soul is an intuitively appealing idea. Descartes famously attempted to reconcile this with the manifest importance of the physical brain by postulating some sort of connection between soul and brain. If we acknowledge that mind is a behavior that emerges directly from the brain's architecture and electro-chemical processes, then all psychological phenomena are in principle measurable. Whether a phenomenon is measurable or not has only to do with how good our instruments are, and says nothing about whether it is “real” or not. Clearly the science of psychology is doomed to waste away to nothing if any measurable phenomenon is out of bounds. See also Stephen Pinker on how Experience Changes the Brain.
As well as exhibiting the pervasiveness of Level Confusion, this interpretation leads to an entirely incorrect conclusion about what to do about these disorders. Since the problem is “real” and not psychological, then clearly a “body” intervention such as a drug is the only therapy that has a hope of success, and quite likely the flaw is intrinsic to the brain and nothing can be done to help. Demonstrating a correlation between mind and brain says nothing about the direction of causation. All experience changes the brain, so it is quite possible that the brain organization has been caused by the person's experience, rather than the other way around.
Like all other experiences, psychotherapy changes the brain, and might be exactly what is needed here. In the case of global disorders of mood or arousal, it is likely that the causation runs both ways, creating a circular feedback, so it is productive to intervene through both the mind and the body simultaneously. In order to function properly, mood and arousal must stay within narrow ranges, so there must be a homeostatic feedback mechanism to push things to the desired setpoint.
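The setpoint idea can be pictured with a toy negative-feedback loop. This is a sketch with invented constants (the setpoint, gain, and shock size are all hypothetical), not a model of any real neural circuit: random perturbations push mood around, and a corrective term proportional to the distance from the setpoint pulls it back.

```python
import random

random.seed(0)

SETPOINT = 0.0   # desired mood level (hypothetical units)
GAIN = 0.3       # strength of the corrective feedback (hypothetical)

mood = 0.0
history = []
for day in range(365):
    shock = random.gauss(0, 0.5)               # daily ups and downs
    mood += shock + GAIN * (SETPOINT - mood)   # feedback nudges mood back
    history.append(mood)

# With the feedback term, mood wanders but stays in a bounded band around
# the setpoint; without it, mood would drift as an unbounded random walk.
print(f"max deviation from setpoint over a year: {max(abs(m) for m in history):.2f}")
```

The key design point is that the correction is proportional to the error, which is exactly the negative-feedback structure the text describes.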

Human Nature, Human Diversity

It is impossible to give a completely precise explanation of how any person gets to be the way that they are. Even if we could measure the position of every atom in your body, this wouldn't tell us much useful about who you are, only really obvious things like how tall you are or how much you weigh. We humans are particularly interested in people's minds. What is your personality like? Do you have a good sense of humor? Can you play ping-pong? Our premise of reducibility and emergence says that all those things that are part of your mind must exist at the atomic level in some form, but even if we knew how those things are represented (which we don't), the complexity would be more than a human could understand. Science is never about finding the entire precise truth (which is impossible anyway). Instead, it is about finding some level of analysis where we can discover an approximation of the truth that we can understand.


One thing that most people are interested in is the ways in which people are similar or different. It is easy to see this Social Comparison all around us, and Evolutionary Psychology offers explanations of why we are interested in these comparisons. Two broad questions we can ask about similarity and difference between people are:
● In what ways do all people everywhere tend to be the same? What is our common Human Nature?
● In what ways do people tend to differ? What is the range of Human Diversity?
In our people-watching, we sometimes wonder: Why is that person like that? Did someone deliberately teach them to act that way? Did they pick it up by imitating people they knew? Their parents or children in their neighborhood? Or were they just born that way?
In science, “why” is the most important question because science is about finding out how things work. We want to know why. Why do we humans have in common the things that we do, and why do we differ in so many ways?

Human Nature

It's obvious that many of the things that we have in common with other humans are already that way when we are born. No one would say that a baby learned to have ten fingers–that trait is innate.
Even before scientific investigations of why people are the way they are, we already knew that parents are a very important influence. Human parents always have a human baby rather than a starling or a fern, and there is often a noticeable family resemblance between the child and both parents. Something about the parents' individual natures is inherited by the child. We have also long used selective breeding of farm crops and livestock to dramatically transform the innate nature of these living things, even without knowing anything about how this inherited essence is passed on.
We now know that this heritable essence is encoded in the structure of DNA molecules, and we know a great deal about how DNA is copied from parent to child and how this DNA is used to generate the particular structure of individual cells. Although we can describe many details of prenatal development, our understanding of the way in which DNA shapes the resulting person is very sketchy. Developmental biology often studies simple organisms such as worms and fruit flies because even these much simpler developmental processes are poorly understood. So we don't know how our DNA directs our development, but we have good reasons for saying that this is true (see Genetic Causes).
Now that we know the mechanism of heredity, what can we say about the causes of human similarity and human diversity? When we directly measure genetic diversity, we find that DNA is on average 99.9% similar between two modern humans. All human genetic variation falls in that remaining 0.1% of genetic diversity. Human DNA is also 99.7% similar between modern humans and Neanderthals and 96% similar between humans and chimpanzees, so small DNA differences can have large effects.
Because all living humans are descended from a much smaller founding population only about 150,000 years ago (see Out of Africa), all humans are relatively genetically similar, when compared to the diversity of other species that have been around for longer. This genetic similarity is an obvious explanation for the similarities in appearance and behavior of people all around the world. Of course today we have global travel and communication which


spread culture around the world, and even in pre-modern times people still did move around. Many similarities of European languages are due to the common influence of the Proto-Indo-European language, not genetics. But it is likely that the Cultural universals we see around the world arise as the best solutions to common problems faced in the human condition. In addition to our shared DNA, the human condition includes other universals, such as the physical world we live on. Human societies must find lifeways that “work” given these constraints, which tends to guide human behavior down certain paths.
When we look at the 0.1% of DNA that varies between people around the world, we find that 85% of this genetic variation is present within every population, while only 8% occurs between continents. Given our ignorance about which changes in DNA matter and which ones don't, we have to make do with the assumption that on average all DNA changes are equally relevant to the kinds of differences that we care about. Then we'd expect the genetic differences between people in any region to be about ten times larger than the average differences between populations in different regions. So it is much easier to uncover useful approximate truths about how “people are innately different” than it is to say approximately true things about how “Africans are innately different from south Asians”.
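The “about ten times” figure is just the ratio of the two percentages quoted above; a back-of-the-envelope check, under the stated working assumption that all DNA changes matter equally:

```python
# Apportionment of the 0.1% of human DNA that varies (figures from the text):
within_population = 0.85   # share of genetic variation found within every population
between_continents = 0.08  # share of variation between continental groups

# Under the working assumption that all DNA changes are equally relevant,
# within-region variation exceeds between-region variation by roughly:
ratio = within_population / between_continents
print(round(ratio, 1))  # → 10.6
```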

Human Diversity

We know that humans have a good bit of genetic diversity, but does this diversity matter? We can't answer this question unless we pick out a specific thing that matters and have some way to measure that trait. For example, height certainly matters if you want to be a basketball player, and height is easy to measure. Creativity is also important in many things we do, but the only measures we have are very crude and indirect, such as the number of books an author publishes. In order to get to “step 1” in a scientific understanding of human diversity you have to come up with ways to turn human differences into numbers.
All human genetic variation amounts to only a 0.1% difference in our genetic code. In genetics we sometimes say that siblings “share half of their genes”, but what that really means is that they share half of their diversity. So siblings are expected to be at least 99.95% genetically similar. Identical twins are clones. Their DNA is identical (except for a scattering of new mutations.)
It is obvious that identical twins are quite similar in many ways, including height and facial features. When we say that identical twins are similar, we need to say: “compared to what?” We need to compare the differences between identical twins to the differences between people who aren't twins. For example, we could compare the similarity of identical twins (DNA 100% similar) to the similarity of two randomly chosen people (DNA 99.9% similar). By comparing these two measurements of similarity, we can estimate the heritability of that trait. Heritability is the percentage of individual trait diversity caused by inherited genetic diversity.
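The definition above can be illustrated with a toy simulation (a sketch with invented numbers, not a real study design): generate a trait as the sum of an inherited genetic component and an independent environmental component, then check that the variance ratio recovers the heritability we built in.

```python
import random

random.seed(42)
N = 100_000

# Toy model: trait = genetic component + environmental component, independent.
# We build in a genetic standard deviation of 0.8 and environmental of 0.6,
# so the true heritability is 0.64 / (0.64 + 0.36) = 64%.
genetic = [random.gauss(0, 0.8) for _ in range(N)]
environment = [random.gauss(0, 0.6) for _ in range(N)]
trait = [g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Heritability = share of trait variance attributable to genetic variance.
h2 = variance(genetic) / variance(trait)
print(f"estimated heritability: {h2:.2f}")  # close to the built-in 0.64
```

In real research the genetic component is not directly observable, which is why the indirect twin and adoption comparisons described below are needed.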

Family Environment

Of course, most children are raised by their parents, and parents give their children far more than just their DNA. The similarity between siblings is partly caused by their having been raised in the same family. Is there any way we can separate the (sub-)cultural and economic effects of


the shared family environment from the genetic effects of shared DNA? Both effects could explain the fact that siblings are more similar than people taken at random.
One way to directly test the effect of shared family environment in comparison to genetics is to look at children where the connection between family and genetics has been broken. Adults often adopt unrelated children into their families. If differences in how families raise children were much more important in causing diversity than genetics, then we would expect adopted children to be no more different from the other children in the family than the related children are from each other. In fact, for most measurable behavior traits, adopted children are much more similar to their biological relatives than to their adopted families.
Identical twins are also sometimes adopted into different families. This gives a direct test of the relative power of genetics and of family environment to shape behavior patterns. Identical twins being adopted apart is rare, which reduces the amount we can learn from these natural experiments, but heritability estimates based on measurements of twins raised apart are similar to the estimates from other sources.
A third way to estimate heritability (the most commonly used) is to compare the similarity between identical twins to the similarity of fraternal twins. This approach, the classical twin study, requires larger assumptions about how genetics and family environment interact, but allows the use of the much larger population of non-adopted identical twins, and also avoids weaknesses specific to adoption studies. Once again, this approach gives similar heritability estimates to the other techniques.
The specific way that heritability is estimated in a twin study gets complicated, but the idea is that if the diversity of family environments has a much bigger effect on trait diversity than the genetic diversity between families does, then there will be little difference between being identical twins and being fraternal twins. If we see greater similarity between identical twins than between fraternal twins, then this is the effect of genetics. One assumption is that the effect of the genetic difference between fraternal twins is analogous to the effect of the (different) genetic difference between families.
As children grow they also enter into a larger society, and learn from other people in addition to their parents. Many similarities between individuals are caused by shared Culture.
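The twin-study logic above can be made concrete with Falconer's classic formulas, a standard textbook approximation (not specific to this wiki): heritability is estimated as twice the difference between the identical-twin and fraternal-twin trait correlations. The example correlations below are invented for illustration.

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Rough ACE decomposition from twin correlations (Falconer's formulas).

    Assumes purely additive genetics, equal shared environments for both
    twin types, and fraternal twins sharing half the genetic variation:
        r_mz = A + C
        r_dz = A/2 + C
    """
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic share (heritability)
    c2 = 2 * r_dz - r_mz     # C: shared family environment
    e2 = 1 - r_mz            # E: non-shared environment + measurement error
    return {"heritability": a2, "shared_env": c2, "nonshared_env": e2}

# Illustrative (hypothetical) twin correlations for an IQ-like trait:
result = falconer_ace(r_mz=0.85, r_dz=0.45)
print(result)  # heritability 0.80, shared_env 0.05, nonshared_env 0.15
```

Note how the numbers embody the argument in the text: if family environment dominated, r_mz and r_dz would be nearly equal and the heritability estimate would be near zero.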

Behavioral Genetics

There are two major kinds of research that go under this name:
● Heritability is a statistical measure of the tendency of traits to “run in the family”, and does not require any understanding of the way that inherited genes actually cause this variation. This is an attempt to scientifically resolve the Nature Versus Nurture controversy. See below.
● Genetic association studies search a subject's DNA for sequences that “go along” (are correlated with) a trait. One recent technique is the Genome-wide association study (GWAS), which tests the DNA of subjects for millions of genetic variants, searching for the variants correlated with a trait.
Heritability is a mature technique, with results going back to the 1920's. Confidence in heritability of intelligence took a big hit in the 1970's when it was suspected that one of the


field's founders (Cyril Burt) had fabricated some research results. Since then, the availability of large databases of subjects and computer analytic techniques has allowed wide replication of findings of substantial heritability for traits such as height and IQ, and also a broadening of these heritability findings to an almost embarrassing breadth of behavior traits, such as likelihood to divorce (40%).
Genetic association is a much newer type of research that has only really gotten going since 2000 and is still rapidly developing. As of 2013, techniques such as GWAS have generated disappointing results. It had been hoped that such genome-level tests would be able to pick out genes that would predict the variation in many traits already known to be heritable, especially disease susceptibility, as well as behavioral traits such as intelligence and personality. This effort has largely failed. For example height is known to be 80% heritable, but GWAS struggles to explain more than 10%. See Still Missing for a more technical summary of this “missing heritability” controversy.
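The “missing heritability” pattern can be reproduced in a toy simulation. This is a sketch under invented, simplified assumptions (effect-size distribution, detection threshold), not a real GWAS pipeline: when a trait is driven by thousands of tiny-effect variants, a study that only counts the variants crossing a stringent detection threshold captures only a small fraction of the true genetic variance.

```python
import random

random.seed(1)

# Hypothetical trait architecture: 10,000 causal variants with effect sizes
# drawn from an exponential distribution, so most effects are tiny and a
# few are larger.
M = 10_000
effects = [random.expovariate(1.0) for _ in range(M)]
total_genetic_var = sum(b * b for b in effects)

# Stand-in for limited statistical power: only variants whose individual
# variance contribution is in the top 1% are "detected" as significant.
threshold = sorted(b * b for b in effects)[int(0.99 * M)]
detected_var = sum(b * b for b in effects if b * b >= threshold)

print(f"share of genetic variance explained by detected variants: "
      f"{detected_var / total_genetic_var:.0%}")
```

Even though the detected variants are individually the largest, collectively they explain a minority of the built-in genetic variance, echoing the height example where GWAS recovers far less than the 80% heritability.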

Genetic Variation

How much of personality is caused by genes? What we humans find interesting about personality is the differences in personality. The Nature Versus Nurture question is how much of the differences between us are “caused by genes”, or (more precisely) caused by genetic differences. While human DNA is 96% similar to chimpanzees and 99.7% similar to Neanderthals, modern humans are on average 99.9% similar to each other. All human genetic variation falls in the remaining 0.1% of genetic diversity. In genetics we sometimes say that siblings “share half of their genes”, but what that really means is that they share half of their diversity. So siblings are expected to be at least 99.95% similar. We say “at least” because parents do not choose each other at random from the entire world population, and tend to favor some degree of similarity (Assortative Mating). See Human genetic variation.

Heritability

Heritability is a measure of how much of the variation in a trait between individuals is due to genetic differences between individuals. Heritability is a number between 0 (no genetic influence) and 1 (complete genetic determinism), and is also often written as a percentage. Consider the trait of personality extraversion (being outgoing and energetic). If extraversion is 54% heritable, then 54% of the variation in extraversion is due to genetic differences, and the rest to other causes.
Heritability indirectly measures the effect of genetics on behavioral traits such as intelligence and personality by comparing the trait similarity to the degree of relatedness of family members. A model of the expected genetic differences between siblings, parents and other relatives is compared to the measured trait differences. The big problem with using relatedness to find heritability is that the people in a family have a lot in common besides their genes. They live in the same place, and are rich or poor together. Also, most behavior is learned, and what young children learn from their parents is clearly important.
Identical twins have (nearly) identical genes, and are highly similar for many traits. If the effect of the twins' shared family environment could be removed, then all of the remaining similarity


would be heritability. One way to do this is to compare identical twins that have been adopted into different families, but this is rare, so it is difficult to get a large enough sample for reliable statistics. The classical twin study (see Twin Study) gets around this problem by comparing the similarity of identical twins to the similarity of fraternal twins. The assumption is that in a family with fraternal twins, the shared family environment will cause the same degree of additional similarity between the fraternal twins as is (on average) caused by the (different) environment shared by a pair of identical twins in another family. Using the assumption that the genetic effect on the trait is proportional to the genetic similarity, it is possible to solve for the heritability. The remaining variation (not explained by genetic difference) can be further broken down into contributions from the shared family environment and other unknown non-genetic factors.
More recently, heritability and environment effects have been estimated by fitting computer models to the data. These models can make use of pedigree information from several generations, can estimate more parameters than just heritability, shared and non-shared environment, and can relax some of the assumptions made in the classical twin study, such as allowing non-additive effects of genes. See Reconsidering the Heritability of Intelligence in Adulthood for an interesting example of current research in this area.
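The “fit a model to the data” step can be sketched as a least-squares fit. This is a minimal illustration with invented correlations, not the structural-equation software actually used in this research: the additive ACE model predicts each relative-pair correlation as (relatedness × a²) + (reared-together × c²), and we solve for the variance components that best match the observations.

```python
import numpy as np

# Hypothetical observed correlations for several relative types:
#   (genetic relatedness, reared together?, observed correlation)
observations = [
    (1.0, 1.0, 0.84),   # identical twins reared together
    (0.5, 1.0, 0.46),   # fraternal twins reared together
    (1.0, 0.0, 0.74),   # identical twins reared apart
    (0.5, 0.0, 0.37),   # full siblings reared apart
]

# ACE expectation: r = relatedness * a2 + reared_together * c2
X = np.array([[rel, together] for rel, together, _ in observations])
y = np.array([r for _, _, r in observations])

(a2, c2), *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = 1 - a2 - c2   # whatever variance is left over is non-shared environment
print(f"heritability a2={a2:.2f}, shared env c2={c2:.2f}, non-shared e2={e2:.2f}")
```

With more pair types than parameters, the fit is overdetermined, which is what lets real models also test their own assumptions rather than merely solve two equations in two unknowns.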

Heritability of Behavior

For the vast majority of behavioral traits for which there is a reliable test, substantial heritability has been found. From the table in Genetic Influence on Human Psychological Traits:

Personality                    40% – 60%
IQ (age 5)                     22%
IQ (age 18)                    82%
Schizophrenia                  80%
Major depression               37%
Alcoholism                     50% – 60%
Conservatism (over age 20)     45% – 65%
Religiousness (adults)         30% – 45%

The major findings are that for a wide range of traits examined:
1. The inherited (presumably genetic) component of individual variation is 50% to 70%.
2. The component of variation due to family environment (parenting, social status, etc.) is 0% to 20%.
3. The rest is other unknown causes or measurement error. Though this is more or less by definition the environmental contribution, it may be the prenatal environment rather than the social environment.
4. Heritability may be reduced in low social classes or in poorer countries. See below.


This is true for a wide range of traits. Intelligence (as defined by IQ) and personality (as defined by personality tests) have gotten the most attention because they most directly relate to the sociopolitical heart of the nature-nurture controversy, but basically the same results have been seen for many other traits such as religiosity, hours of TV watched, and even whether when you cross your arms you put the left or right on top.
This body of research seems quite solid, but was intellectually marginalized and demonized for decades due to its direct connection with the controversies over IQ testing. The researcher who dominated early work in this area was Cyril Burt, who was successfully discredited for research fraud shortly after his death in the 70's. Whatever the truth of this matter is (many of the accusations didn't hold up), it seems clear that his early results were real because they have been reproduced with close agreement.

Heritability and Environmental Deprivation

The bulk of heritability studies have been done in developed countries, where the average levels of nutrition and education are reasonably high, and where homes with abusive or neglectful parents are a minority. If you look specifically at those living their lives in the worst environments, then your findings of the contribution from genetics, family environment and non-shared effects may differ. In fact, this has been found to some degree, and is an ongoing research area (see Heritability of IQ). In particular, the heritable contribution tends to decrease, with increases in contributions from either the family or non-shared environments. The mixed results may reflect variations in the badness of environments between studies, and also the amount of variation of the badness of the bad environment within an individual study.
But why should a deprived environment reduce heritability? If you have a genetic advantage, then wouldn't that appear in any environment? Not necessarily (see The Best Kind Of Person), but a likely explanation is that there is a “good enough” environment which allows you to reach your genetic potential, and that sort of environment is already available to most people in developed countries. Variations in the environments seen by identical twins or between adoptive families mostly don't take the environment out of the “good enough” range, so heritability is high. But if (for example) food is scarce, then the difference between one child getting enough to eat and the other suffering periods of malnutrition can cause significant differences in outcomes such as IQ or even physical height.
There is not yet a consensus on how big the effect of deprived environments is on heritability, but this effect is a big reason for caution in “between group” comparisons. Poor people have in common the fact that they are poor, so you have to consider this as a possible cause for lower IQ scores in this group.
If there is any actual difference in the average environments between two groups, then this shared environment (and the shared cultural adaptations that go along with it) may be causing some of the difference in outcomes.
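The “good enough” threshold idea can be illustrated with a toy simulation (a sketch with invented numbers, not fitted to any study): model the realized trait as genetic potential dragged down when the environment falls below a threshold, then compare how much variance genes explain in a well-off population versus a deprived one.

```python
import random

random.seed(7)

def simulate(env_mean: float, n: int = 50_000) -> float:
    """Share of trait variance explained by genes when the environment
    only matters below a 'good enough' threshold of 1.0 (all hypothetical)."""
    genes, traits = [], []
    for _ in range(n):
        g = random.gauss(0, 1)             # genetic potential
        env = random.gauss(env_mean, 0.5)  # quality of environment
        shortfall = min(env, 1.0)          # above 1.0, extra environment adds nothing
        t = g + 2.0 * (shortfall - 1.0)    # deprivation drags the trait down
        genes.append(g)
        traits.append(t)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(genes) / var(traits)

print(f"well-off population: genes explain {simulate(env_mean=2.0):.0%} of variance")
print(f"deprived population: genes explain {simulate(env_mean=0.5):.0%} of variance")
```

In the well-off population almost everyone is above the threshold, so nearly all remaining variation is genetic; in the deprived population, environmental shortfalls add variance and the measured heritability drops, as Turkheimer's findings suggest.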

What Does it Mean?

These results were correctly seen by both sides as being incompatible with extreme pro-nurture positions popular in intellectual circles during the 60's and 70's — that all humans have equal potential, and that any difference is due to environment. Though some may still hold these


views, the science is not on their side. However, there are subtleties even in this interpretation, see Nature Versus Nurture.
The other interesting conclusion is that the family environment doesn't seem to have much effect on these things, so perhaps it hardly matters what (if anything) parents do beyond providing food, clothing and shelter. Judith Rich Harris attracted considerable attention for this interpretation in The Nurture Assumption. We don't buy this extreme interpretation, but these results do call into question ideas such as “Buying the right baby toys will increase my kid's IQ”.

What Do the Numbers Mean?

So it's been shown that some traits that can be measured numerically are more influenced by our biological inheritance than by any other cause, but how should we understand those numbers? A major degree of interpretive freedom comes from the meanings we assign to the things being measured. Perhaps IQ is not the same as “intelligence”, so it could still be that intelligence is strongly influenced by parenting. Clearly personality tests don't measure everything that we would colloquially call personality, so perhaps important aspects of personality are strongly influenced by parenting. We feel this verges on a semantic quibble, and is not very productive. For one thing, many of the heritable traits such as the self-reported importance of religion in one's life seem to have obvious meaning and importance. They don't depend on the semantically troublesome procedure of giving names to the anonymous results that pop out of factor analysis.

Semantic Circularity

One interesting point is that, by their very design, intelligence and personality tests measure something that is relatively stable over a person's life, so will end up measuring the completely stable genetic contribution. The designers set out to measure something that was stable because it is the common understanding of these words that they describe something stable. They chose questions that gave consistent results over a lifetime, and therefore measure something that is not much affected by social experience. So it is not at all surprising that these tests measure tendencies present at birth, and hardly surprising that they measure the genetic contribution so precisely.
In other words, if we accept that these tests define intelligence and personality, then it is almost true by definition that intelligence and personality are highly heritable. We're willing to more-or-less accept this because it is in line with common-sense popular understandings of these terms. By definition, personality and intelligence are things that are innate (stable over a lifetime) and little influenced by social experience (I'm just a glass-half-empty kind of guy.)
It was a plausible idea that personality and intelligence are largely determined by early social experience (in the family) that takes place before verbal tests can be given. If this were true, we would replace the common belief of stability with the more refined one of early plasticity followed by stability, but the evidence has gone the other way. Clever preverbal tests in infants (see Descartes' Baby) have found stability of traits such as anxiety from a very early age. The behavioral genetics results are another nail in the coffin. Parenting has little effect on these things.


Genetic or Random

Innate individual differences may be random rather than genetic. There is not nearly enough information in the genome to encode the detailed structure of the brain. Rather than being a blueprint, the genome is more like a recipe for an organism, unfolding through a process of development. In heritability studies, the non-heritable component is broadly referred to as “environment”, but this can't be assumed to be caused entirely by visible, meaningful and potentially controllable factors such as nutrition, parenting, peers and education. An unknown (but surely substantial) part of individual behavior variation is basically random. These developmental variations are caused by minuscule, meaningless and uncontrollable details of the brain's internal environment.

What Does Parenting Affect?

This does not mean that parenting has no effect, just that it affects other things. One thing that the family environment and parenting style clearly do affect is the kind of family environment the children will create and the parenting style that the children will use when they grow up. This has important implications for children's future life choices and happiness, and evolutionarily significant effects on the number of grandchildren.

Conclusions

All behavior must ultimately have a genetic explanation, just as all behavior must have an electrochemical explanation. This does not mean that we must abandon psychological and sociocultural explanations for behavior.

Causal Accounting

Innate: in the child implies not malleable, but there's a range:
● developmental chance
● genetic
● epigenetic
This fades into factors that would normally be considered environment:
● prenatal environment
● ubiquitous environmental factors
● self selected environment
Nurture implies action by parent, teacher, etc. Randomness isn't nurture, though it may be someone's responsibility to protect from random effects, if possible. Acts of the child himself or of other children also aren't nurture, since they aren't properly responsible.
Is heritability interpretation tricky because it has to do with accounting for variability, and not the mean? Caring about difference is intuitive.

Environmental Multiplier


What is the right way to do causal accounting when causes act in parallel?
● Intuition is that there is no clearly right answer, but this might not be true if we concede that practical causality is always about differences (manipulation) and not “cause of existence” (sufficient cause).
● We do have an emergent (non-additive) interaction, but we can still say how much output variance is contributed by the two terms.
● Other intuition is that I prefer to attribute to genetics. One story is that the kind of flat environment that could completely undo the multiplier is unreasonable or undesirable in modern life. If we're serious about reducing wealth inequality then we need social control via norms and laws. Competition is unavoidable without huge social control, and the resulting society would be outcompeted by others more tolerant of inequality.
- If the crucial property of causality is the possibility of other outcomes in the absence of these factors, and all are preconditions (necessary), then it does make sense to say that both are causes.
- I'd say that the ease of modification of a precondition isn't usually relevant to whether it's a cause, though that's an obvious practical consideration.

The “gene for basketball”

What happens if you make the environment uniform? Heritability increases and all causation becomes innate. Yet variation in outcomes decreases (?) because there is no longer any environmental multiplier (GxE interaction). But if the environment is uniform then there is only one niche. Then there *is* a best kind of person. The winners will be those adapted to the particular chosen environment.
In fact, the assumption that flattening the environment will reduce outcome variation presupposes that the GxE term is smaller than the additive E. Isn't that a contradiction? How can GxE interaction both increase and decrease heritability, both increase and decrease outcome variation? I think that this might depend on the nature of the interaction, eg.
convexity. Simulate? Flynn is definitely saying that environment variation is increasing heritability, magnifying small differences. So flattening would have to reduce it. Competition can certainly magnify small differences.
What is the significance of shared vs. non-shared environment for the multiplier effect? Shared environment is defined by family, but is larger. If the environment is flat, then it's all shared? You could say that, but the shared environment seen by family studies is the variance of families from the norm. If something is constant then it's part of the “all other things” that are equal, and is invisible.
Specifically in twin studies, the multiplier says that the MZ non-shared environment is in fact partially shared because MZ end up selecting into similar environments. This increases their similarity. But if the environment were flat, then DZ would have no option of different environments, so would be more similar too. Plausibly this would reduce the difference between MZ and DZ. It does seem likely that E flattening would reduce outcome variation too. I guess that is the Flynn theory.
How do we square this with Turkheimer's observation that poverty reduces heritability? What does that look like? It seems that poor kids have greater variation in shared effects. Possibly not that variation in shared environment is greater, but that the mean has decreased so that variation dips below “good enough”. This increases correlation of DZ (and presumably MZ to a lesser degree). At the extreme it swamps the genetic effect. Presumably also greater total variation. The paper I found was 7-year-olds. It seems likely heritability will be greater for older poor.
So increased variation in the shared environment reduces heritability (Turkheimer), but increased variation in the non-shared environment could increase heritability if MZ can select into similar environments (Flynn). Presumably non-shared could reduce heritability too by swamping the genetic effect.
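The notes above ask “Simulate?”; here is one minimal sketch of the multiplier idea, with all numbers and the “tracking” mechanism invented for illustration: genes nudge people toward gene-matched environments, the environment has a large direct effect on the trait, and the twin-correlation comparison then attributes the amplified variance to genes.

```python
import random

random.seed(3)

def trait(g: float, tracking: float) -> float:
    """Phenotype when the environment partially 'tracks' genetic endowment.

    tracking = 0 gives a flat (purely random) environment; tracking near 1
    means people fully self-select into gene-matched environments.
    """
    env = tracking * g + (1 - tracking) * random.gauss(0, 1)
    return g + 1.5 * env   # environment has a large direct effect

def twin_correlation(shared_g: bool, tracking: float, n: int = 20_000) -> float:
    xs, ys = [], []
    for _ in range(n):
        g1 = random.gauss(0, 1)
        # Fraternal twins share half the genetic variation:
        g2 = g1 if shared_g else 0.5 * g1 + (0.75 ** 0.5) * random.gauss(0, 1)
        xs.append(trait(g1, tracking))
        ys.append(trait(g2, tracking))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

for tracking in (0.0, 0.8):
    r_mz = twin_correlation(True, tracking)
    r_dz = twin_correlation(False, tracking)
    print(f"tracking={tracking}: Falconer h2 = {2 * (r_mz - r_dz):.2f}")
```

With a flat environment the Falconer estimate is modest; with strong self-selection the environment's direct effect rides on the genes and the estimated heritability jumps toward 1, which is the Dickens/Flynn-style magnification the notes are reaching for.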


Though we may think of self-selection as voluntary, that isn't necessary. MZ twins could select into jail. All that is required is a reproducible GxE interaction. Suppose half of all people are given a lobotomy. If this is random, it reduces heritability; if it is based on hair color, it increases heritability. Either way, this is a non-shared effect. Note that a capricious environment (either shared or non-shared) reduces heritability and increases outcome variation above the flat baseline.

The effect of a responsive environment depends on how it responds to individual variation. Responsive doesn't mean nurturing, just consistent. In the West, the environment likely responds so as to increase heritability and outcome variation. But consider a reverse dominance culture with strong conformity pressures. Outcome variation and individual differences are both suppressed. Every individual is constantly being pressured according to their actions and traits (responsive), but the socially controlled outcome variation is far less than in the flat environment. Even for outcomes that aren't socially controlled, the flatness of the environment will minimize any multiplier effect.

One way to look at the responsive environment is whether it provides positive feedback or negative feedback. Social control is negative feedback, trying to maintain behavior at a desired setpoint. In contrast, social reinforcement is positive feedback, trying to maximize a desired behavior. All cultures control some behavior, but there is a lot of variation in the encouragement given to high achievers and the degree to which behavior can be neutral and voluntary. WEIRD cultures maximize variation by minimizing social control and tolerating positive feedbacks. Competition creates positive feedback: the rich get richer, and there is also the negative cycle of failure perpetuating itself. There is a question here of whether self-selection into niches is a zero-sum game. Is all life merely competition for the desirable niches?
But if people are innately diverse, then they don't entirely agree on what the desirable niche is. IMO this tends toward sophistry, but it is good for provoking thought about non-zero-sum games and diversity.

The flat environment may be a useful thought experiment, but as a policy proposal it's a straw man. Not only would we have to impose uniform material wealth, we'd need strict social controls to prevent behavioral variation from causing nonuniformity in the social environment. There are cultures that do that, but most people in the modern world wouldn't really want to live there.

So would increasing environmental homogeneity increase heritability or not? The whole environmental multiplier theory assumes that an inhomogeneous environment creates an illusory increase in heritability. Or at least, insofar as the GxE term dominates, it may be hard to assign cause. But this isn't true when the additive differences are significant, I think. One way to think about the GxE term is as an emergent effect. Pearl's rules allow a non-causal correlation with emergence.

Consider movie stars. Why do they even exist? All value is socially constructed, so it isn't possible to truly level outcomes without social control. What does the environment need to be like to have a multiplier? Competition, or even just being nonuniform. The only thing that would clearly prevent a multiplier is an environment where the outcome is completely unaffected by any sort of behavior. In other words, behavior has no consequence at all, at least with respect to the outcome.

Consider: f(g, e) = A·g + B·e + d(g, e). Mathematically, we'd be happy to say that if one term dominates, then f is mainly that term. We might drop the others as an approximation. In a sensitivity analysis, given particular levels of variance in g and e, we could attribute output variance to one factor. Of course with more general nonlinearities it gets complicated, but with a g·e term it's just variation over mean, for each factor. We are really talking about the variance for the additive terms too, but then the A/B ratio comes into play. Of course models like these are ludicrously simple compared to the thing modeled, but that doesn't mean they can't give some insight.

How is the environmental multiplier different from the search for authenticity? It could be that increasing heritability with age isn't so much a matter of locating a supportive environment (in the face of competition) as simply figuring out what pleases you, despite the environment. A more supportive environment would just speed this process. Of course these two would both operate to some degree. A question is whether they would make different predictions. This might matter in the case of environmental interventions, but wouldn't you do different interventions? This is the flat vs. responsive environment again.

But suppose you can eliminate scarcity without eliminating diversity of options, or at least eliminate some practical constraints: free college, all school teachers good, etc. So the question is whether the outcome variation is due to unfair or avoidable scarcity, or just individual diversity. (Both.) The answer is that there *are* different predictions. If diversity of outcome were purely due to individual diversity, then increasing the availability of education would have no effect. That is a classic naturist argument. Of course no effect at all is hardly likely. Reducing the user cost of education will increase usage, and this will surely have multiple effects, some of which even resemble social capital theory. Of course, if everyone were the same, then a responsive environment would respond the same to all, and the environment would be flat.
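The variance attribution for f(g, e) = A·g + B·e + d(g, e) discussed above can be sketched numerically. For independent zero-mean g and e with a multiplicative interaction d(g, e) = C·g·e, the three terms are mutually uncorrelated, so each term's share of output variance is well defined even though the interaction is "emergent" (the coefficients here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
A, B, C = 1.0, 1.0, 0.5                 # illustrative weights
g = rng.normal(size=n)
e = rng.normal(size=n)
f = A * g + B * e + C * g * e

# With independent zero-mean g and e, the three terms are uncorrelated,
# so their variances add up to var(f) and each share is meaningful.
terms = {"A*g": A * g, "B*e": B * e, "C*g*e (GxE)": C * g * e}
for name, t in terms.items():
    print(f"{name:12s} {t.var() / f.var():.2f} of output variance")
# shares are ~0.44, ~0.44, ~0.11
```

So even with a genuine non-additive interaction, one can still say "how much" each term contributes, at least in this idealized case; the ambiguity arises when the factors are correlated or the interaction dominates.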

Diversity and Inequality Although heritable diversity is bad from an egalitarian perspective, the goal is presumably levelling of outcomes, not of heritability in itself. Turkheimer says a bad environment reduces heritability, which makes sense. Then improving the worst environments both increases heritability and reduces outcome variation. This is an improvement of the worst shared environment, not the worst self-selected environment. Is that different? The shared environment is what children in the same family have in common. This can't include the self-selected environment unless everyone has the same preferences. Similar issues probably arise with GxE interaction in the twin study. But to say that we "improve" the environments available for self-selection implies a ranking of environments that may be less plausible for free choices.

There's also the argument that people may need to be forced or nudged into investing in education because it's "for their own good". Many people may be motivationally poorly suited to the modern world, especially children. This is the classic coercive aspect of education. That is, social control over self-selection is already accepted to some degree. This reduces the contribution from innate motivation, so it would reduce the heritability of educational level. Can you really make the horse drink? Not deeply, but the reduction in SES heritability is real, as is some reduction in outcome variation, simply because the rising mean pushes the distribution up against a wall. Motivation plus ability has a bigger effect than ability alone.
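Turkheimer's pattern can be reproduced with a deliberately crude threshold model (my assumption for illustration, not his model): below a "good enough" level, deprivation swamps genetic differences entirely, so the genetic share of outcome variance is near zero among the deprived and substantial among the adequately provided for.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
g = rng.normal(size=n)              # genetic propensity
e = rng.normal(size=n)              # quality of the shared environment
adequate = e > 0                    # toy "good enough" threshold
# below the threshold the environment swamps genes completely (extreme toy case)
y = np.where(adequate, g + e, e)

def genetic_share(mask):
    return np.corrcoef(g[mask], y[mask])[0, 1] ** 2

print(f"deprived:  genetic share ~ {genetic_share(~adequate):.2f}")  # ~0
print(f"adequate:  genetic share ~ {genetic_share(adequate):.2f}")
```

Improving the worst environments in this model moves people across the threshold, which simultaneously raises measured heritability and reduces the spread of outcomes, matching the argument above.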


The Flynn Effect The Flynn effect is a substantial increase in average IQ over the 100+ years since IQ testing began, especially in the Fluid intelligence component. Because of the large change over only a few generations, it is generally supposed that this could not be due to genetic change in the population. This argument (that the effect must be environmentally caused) does seem plausible, though it would be interesting to see someone seriously engage with the possibility of genetic influences, in light of emerging evidence supporting recent (and presumably ongoing) human evolution. So the effect does demand explanation, and Flynn's general explanation is plausible. (See Heritability estimates: IQ paradox resolved and Beyond the Flynn Effect). The change might be caused by broad changes in the culture (experienced by everyone) which change the mental world, demanding different capabilities. The period spanned by IQ tests was a time of rapid economic and social change. Farm mechanization drastically increased farm productivity, reducing the need for rural labor, at the same time that ongoing industrialization increased the need for urban labor. Before this period, even in "industrial" countries such as the US and England, the majority of the population were still rural agricultural workers. Urbanization brought the bulk of the population into a far more complex and abstract material, economic and social environment. To say the modern world is more complex than that of an 1800s farmer or a prehistoric hunter-gatherer does not entirely capture the nature of the change. People's social worlds have always been complex. Traditional lifeways of tribal peoples are regulated by complex systems of social obligations and taboos. For example, kinship systems often described many more gradations of relationship than we now use. See Vengeance Is Ours. Even when you look at the material world, it is not so straightforward.
What is more complex, a ball of mud or an electronic microchip? The surprising answer is that by the usual technical definitions of complexity, the ball of mud is far more complex. That complexity is not fully accessible to human senses, but peoples for whom mud is an important structural material and plaything will likely recognize many qualities of mud that we might not. What is probably more important about the modern world for the Flynn effect is the increasing use of abstraction. Manufactured objects are actually simpler in the sense that they are more ideal in their structure, having less randomness (Entropy). Artifacts such as a plastic cup or a toy block are far closer to a Platonic ideal than any natural or handcrafted object. In the modern world we also have complex artifacts (such as cars) that are visibly composed of distinct functional parts, and we frequently discuss those parts as separable things.

Genetic Causes Is DNA the “Master Molecule” or just “a valuable resource for the cell”?


The Parable of the General Contractor Consider the general contractor, hired to build a building. But how does he know which building to build? Will it be Bilbao or McDonald's? The contractor works from a blueprint drawn up by the architect. Does this drawing cause the building? Something like the blueprint is necessary, but we usually look to other causes as being more critical: the availability of funds or the need for the building. Is the blueprint just another valuable resource, then? It is a resource, but of a unique kind: information. While many resources are necessary (ore that can be turned to steel, carpenters, contractors that can read blueprints), the blueprint has a unique role in determining what we end up with. This story sheds some light on the particular importance of DNA in making us who we are. It is true that DNA is more like a recipe than a blueprint, but DNA is entirely like the blueprint in being an information resource. Other resources, like the iron ore, are refined and standardized so that they can be shaped according to the information in the blueprint; they lose all their own information. The contractor and his associated ecosystem of tradesmen are also essential parts of the system. Their constructive efforts, and their understanding of how the language of blueprints calls for standard parts such as doors and windows, are necessary for the building to come about. But if the resulting building isn't what the architect had in mind, then the system has malfunctioned. In much the same way, living things digest proteins, breaking them down into their constituent amino acids, stripping out the information. Then we stick them back together again according to the patterns written in our DNA. The blueprint doesn't cause there to be something instead of nothing, but it does determine what we end up with.
Because the blueprint was made to be understood by humans, we can look at it and see which particular symbols caused a window to be installed there, and why that wall has no window. Our DNA was not made to be understood by anyone, but we have figured out some things, like the code for proteins. We don't understand at all well how the information in our DNA directs our growth from a single cell, developing complex structures such as skeletons, muscles, and brains. Yet even not knowing that, we can be quite confident that our DNA (and perhaps epigenetic annotations) does determine all of the consistently reproduced aspects of our developed structure. This is simply because there is no other information resource available to do that patterning. After we are born, our experiences are another source of information, and the causes of human patterning become less clear. A good bit of our uniqueness likely also comes from minor decisions that emerged as "good enough" solutions to local structural problems in the self-organizing process of development. The particular solution used could not be predicted from DNA alone. It is a consequence of similar developmental decisions made earlier on and the constant random buffeting of thermal vibration. Similar things happen during the construction of a building. The electrician has to route his wires around the plumbing pipes, and this isn't going to be done in exactly the same way twice, even though the lights do go on and the toilet flushes. Does the light switch go on the left, or the fan switch? The blueprint doesn't say. See Wiring the Brain: Nature, nurture and noise.


DNA as a Cause Is DNA the "master molecule" or just an "important resource"? It's true that DNA by itself isn't a sufficient cause for life; it isn't a "cause for existence". DNA only functions as part of a cell, and in the case of human cells (or even fungus cells) this is a quite complex system with subparts like organelles, membranes, enzymes and receptors. But all of these parts are either made out of protein and RNA or are made by proteins (enzymes). And we do understand at a high level of detail how RNA is created from DNA and how proteins are built according to a recipe that is written in the DNA. Not only is DNA a necessary cause of the cell structure; cellular biologists usually proceed with the assumption that all of the essentially cell-like aspects of the cell are determined by the DNA. What do we mean by "essentially cell-like"? No two cells are exactly the same. The pressures of the surrounding environment affect the cell shape, and the random processes of self-organization mean that the precise locations of membrane molecules are left to chance. The biologist proceeds on the assumption that these differences between cells don't matter. When you look at cells in detail, you see that DNA exists within a complex regulatory structure. Not all DNA is transcribed in all cells at all times. This control is exercised by transcription factors that bind to the DNA and by various kinds of epigenetic markings on the DNA. But all these control processes operate using proteins assembled according to DNA instructions and regulatory sequences directly coded in the DNA (promoters and other regulatory sites). These regulatory processes are how the cell responds to the challenges and opportunities that the surrounding environment presents. The cell can't exist in isolation from the surrounding environment, but there is a genuine modularity to cell structure.
The membrane is a physical boundary, and the cell goes to considerable effort to control its internal environment, pulling in chemical resources and expelling wastes. In a multicellular organism like a human, the cell's activity is controlled by many different kinds of messenger molecules. Proper functioning of our body depends on our cells maintaining their boundaries and respecting these external signals. In other words, the design of our bodies assumes that cells "of the same type" are interchangeable. This means that the biologist too can consider cells interchangeable. And insofar as those cells are interchangeable, it is because of what is in common between those cells (DNA and epigenetic markers placed by genetically determined processes). So that is what "essentially cell-like" means. It's true that without ribosomes and tRNA that "speak the language" of DNA, the DNA wouldn't generate those structural proteins and enzymes, but we can be pretty confident that our DNA wouldn't be a "valuable resource" to a cell that didn't speak this genetic code, which is shared by all life on earth. Given only DNA it would be tricky to reconstruct what the organism looks like, especially without knowing the genetic code, but no other functioning life form could be constructed using that DNA as its genetic code. So all the other necessary causes inside the cell and in the external environment are highly constrained. There are many, many other things that need to be correctly aligned for the cell to survive, but if they aren't there, all you have is a dead cell and a boring story. Those other causes can't contribute meaningful variation because all they can do is harm the cell.


Circular Causation One way to understand this puzzle is to see that the cell (as we know it) depends on circular causation. We don't know how the cell got the way it is, but there is some reason to consider the DNA as a sufficient cause for the cell. Certainly the cell is a sufficient cause for another cell (all other things being equal). This is just the chicken and egg problem. We can understand how chickens and eggs work without knowing how the chicken-and-egg system came into being. A chicken farmer doesn't need to solve the problem of “why is there anything instead of nothing at all”. Is an egg a sufficient cause for a chicken or not? We say the cell “as we know it” depends on circular causation because this is indeed how we understand cells, based on our assumption that cells are interchangeable. In reality there is no circular causation. The first egg that a chicken hatched from was laid by a bird that wasn't a chicken, way back in a long series of sequential causes, where each intervening chicken and egg weren't actually identical, any more than two chickens or eggs today are truly identical. Where does this leave the idea that an egg is a sufficient cause for a chicken, or that DNA is a sufficient cause for a cell? In the cloudy limbo-land where all sufficient causes live, based on induction and impossible-to-completely-specify assumptions of “all other things being equal”.

Genetics and Heritability

● GCTA does not assume that SNPs cause the trait, only that they are linked with the causal genetic material. This could even be cross-generation-heritable epigenetic annotations. GCTA does not assume any sort of atomic gene concept, only that individuals share inherited stretches of DNA.
● It is true that the realities of DNA turned out to be more complex than the structure inferred by classical genetics. What "gene" means is now rather up in the air, but this doesn't in any way undermine the idea that inherited genetic information explains the great similarity between all humans and the greater similarity of closely related humans. There is an emerging consensus that changes in non-coding DNA are extremely important to the differences between humans, other primates, and plants. Is this a "gene" or not? Dunno.
● Do epigenetics, mosaicism, and so on free humans from the tyranny of genetic determinism? They do mean the picture is more complicated than one might have supposed, but all along we knew that organisms adapt to their environments, in part by regulating gene expression. Insofar as these are adaptive mechanisms, they are mechanisms of gene regulation. They are keyed off of non-coding DNA, mediated by proteins coded by other DNA, and by functional RNAs expressed from DNA. It's true that it's a huge mess and will resist understanding, but that doesn't mean we should give up.
● Identical twins are conspicuously similar to each other in many ways, more similar than ordinary siblings or "unrelated" people. Twin studies attempt to quantify this, and in doing so make use of assumptions of varying plausibility. The conclusion from these studies of low gene x gene interaction (epistasis) is particularly puzzling, because at the micro scale biochemical processes are strongly interacting dynamic systems, and even classical genetics has non-additive dominance.
● Genome technologies will continue to cast light on the mechanisms by which our heritage underlies human diversity. GCTA is certainly not the last word, but of the current technologies it seems to be the most relevant comparison to classical heritability techniques. Important next steps include using full-sequence data to unpick the assumptions of linkage between SNPs and unknown nearby causal DNA.
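The classical twin logic mentioned above can be sketched with a Falconer-style estimate, h² ≈ 2(rMZ − rDZ), under the standard additive-genetics and equal-environments assumptions (all parameters here are illustrative, not empirical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
h2_true = 0.5                       # assumed additive genetic share of outcome variance

def twin_corr(genetic_r):
    """Outcome correlation for twin pairs whose genotypes correlate at genetic_r."""
    shared = rng.normal(size=n)
    g1 = np.sqrt(genetic_r) * shared + np.sqrt(1 - genetic_r) * rng.normal(size=n)
    g2 = np.sqrt(genetic_r) * shared + np.sqrt(1 - genetic_r) * rng.normal(size=n)
    a, s = np.sqrt(h2_true), np.sqrt(1 - h2_true)
    y1 = a * g1 + s * rng.normal(size=n)    # remainder is non-shared environment
    y2 = a * g2 + s * rng.normal(size=n)
    return np.corrcoef(y1, y2)[0, 1]

r_mz = twin_corr(1.0)               # identical twins share all genes
r_dz = twin_corr(0.5)               # fraternal twins share half, additively
print(f"Falconer h2 = 2*(rMZ - rDZ) = {2 * (r_mz - r_dz):.2f}")  # recovers ~0.5
```

When the assumptions hold, the estimate recovers the true value; the multiplier arguments earlier in these notes are precisely about what happens when they don't (e.g. when MZ twins self-select into more similar environments than DZ twins).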

Good Or Evil? Are humans basically good or evil? The question might be unsound, but an evolutionary view of the human condition needs a coherent story about the moral character of human nature. Why? Because people mean something when they ask this question, and when they encounter evolution-based narratives of the human condition, they often find those visions of human nature to be morally unacceptable. For example, a critic of The Better Angels of Our Nature said that if human nature truly were as the author described, then humans would be depraved.

What is Human Nature? See Human Nature for a summary of what we believe human nature is like, and Human Origins and Original Sin for how we got that way. Humans are cooperative and competitive, peace-loving and violent, friendly and suspicious, and all for reasons that make sense from an evolutionary perspective. Human behavior can be understood as Adaptive Behavior, but it is common for the connection to reproductive success to be somewhat subtle. This is partly because complex and overtly pointless behavior (like a symphony orchestra) is a hard-to-fake signal of individual fitness in competition for mates and social status.

What is Good? Is this evolved nature good or bad? See Evolutionary Ethics for a more in-depth analysis, but the evolutionary perspective is that humans have evolved to be both cooperative and competitive, both generous and self-serving. These forces must remain in balance for a society to function, but people need little encouragement to watch out for their own interests, so most moral and legal guidance aims at promoting beneficial cooperation. Furthermore, our moral senses are the product of evolution, a sort of rough-and-ready summary of "what works" in relationships between individuals and also between individual interests and the group interest.

Myth Any story about humans being inherently good or evil is a myth. That means we can't say whether the story is true or not, but it also means that we think these stories are very important.


Perhaps our legends of the Golden Age waft up from our Collective Unconscious as memories of the Dreamtime, when we were all hunter-gatherers (see Human Origins and Original Sin). Almost all religions agree that we are both good and bad, and have many stories explaining why, such as the Fall of Man, Pandora's box, the Apple of Discord, and so on. With the lens of Cultural Evolution we can see particular myths, such as the Christian myth of Total Depravity, as being both adaptive for the religion (by encouraging converts who want God's help in being a better person) and adaptive for groups that adopt that religion (by reminding members that they have to constantly work at cooperating better). We wouldn't say that humans are depraved, but we rather like saying that we are "weak in every part". It nicely captures the truth that we are only strong when we work together, and summarizes a great many scientific findings as well. In particular, our Positive Illusions: we are not as smart or as virtuous as we think we are. Our socially approved stories of how morality works are incomplete. We work neither the way it subjectively seems that we do, nor the way we are taught we should behave (and strive to appear to behave). This is for both implementation and adaptive reasons. Although the evolutionary perspective is new, the resulting human failings were well known to Jesus, Buddha and the other authors of the wisdom literature. We frequently fail to follow moral rules, and often act in self-serving ways. But what would it really be like if we followed rules without considering the context, or always sacrificed our own interests? We think there is much to Aristotle's idea that moral behavior depends on finding a satisfactory Tradeoff between goals, what he called the Golden Mean. Doing so invariably depends on the specifics of the situation, and it simply would not work to always favor the other's interests over our own or those of our own group.

Conclusion We are the way we are. Without humans there is no morality, so asking whether humans are moral is either meaningless or obviously true (by definition). A humanist says we must define what it means to be good, not because we are so good, but because there is no alternative. Without us there is no evil either. Morality is neither the free-standing world of reasoning imagined in ethics, nor the ideological certainty of theology. It's a human achievement with ancient evolutionary roots (see Human Origins and Original Sin). A far more meaningful question is: given human nature, what environment best reinforces the good? That is, what promotes human flourishing? Evolutionary thinking about human nature has been driven mainly by a desire to understand how we got to where we are, so it doesn't come with a pre-packaged political program. However it is safe to say that, just as evolution is anathema to religious conservatives, traditional social progressives will also find much to dislike. See Evolutionary Politics. In The Better Angels of Our Nature, Steven Pinker argues at length that violence has declined both from prehistoric conditions and during historic times. Great change is possible in the human condition, even though human nature has likely changed little during this period.


Heritability Controversy Is heritability an "indirect" measure? This is an intuitive judgment based on whether you think the causal connections are (intuitively) obvious. Do you put it in the "billiard ball" category or the "unwarranted assumption" category? This will be greatly influenced by how much time you have spent thinking about biological causation.

Philosophy and heritability: What do we mean by cause?
● Clarify innate vs. nurture.
● Criticism: trait causes vs. trait differences.
● Bucket model: series or parallel, additive or interacting. Area of a rectangle.
● Heritability is a statistical measure applying only to populations, not individuals. Somehow it emerges without being to any degree true of individuals?
● Causal impact of genes vs. causal impact of mutations [seems to be a restatement of "cause of existence vs. cause of difference"]. Thought experiments in the "cause of existence" case often seem to depend on intuitions about division of labor. This is an example of how causality is a complex and slippery concept that is heavily rooted in our nature as social animals. Change faucet/hose to faucet/teaspoon.
● Heritability paradoxes: number of hands, PKU.
● Heritability mediated by racism (non-causal correlation).

How much the drummer and how much the drum? This is a general critique of reductive analysis, not specific to nature/nurture. Yes, scientific theories have unwarranted assumptions, and may be wrong, but they allow progress in understanding. All holism can say is "that's how it is, I think I'll write a poem about it." What if (instead of being two drummers heard in the distance) you're one of the drummers? It doesn't seem such a pointless distinction then, does it?

Is it true that "there's no such thing as talent"? Is talent a peculiar obsession of our culture? Certainly other cultures value effort more. From the perspective of our individualistic culture, we expect motivation to be more internal, so we see effort as evidence of intrinsic motivation.
I think that our individualistic culture pumps up individual differences, rather than playing them down.

What causes poverty? Another incorrect critique of heritability is that it entirely fails to account for SES effects such as wealth and parental education. There's room for dispute over whether the accounting is correct, but this is exactly what heritability is trying to do. Generalizing beyond the sample does require an argument of sufficient similarity. This is the "between group comparison" problem. Cultural differences could also affect heritability. For example, individualistic Western (WEIRD) cultures could increase the heritability of behavior because social conformity pressure is reduced. A liberal culture allows individuals to pursue their behavioral inclinations to a greater degree, increasing the diversity of outcomes and increasing multiplier effects.

If you just gave poor people enough to bump them above the threshold, would that end poverty? How much effect can we expect from interventions like free preschool and higher education? Evidence is that means-tested interventions can work because the big benefit is to the poorest. Mandatory interventions are likely to have the biggest effect because the worst environments are in dysfunctional families.

Consider our fuzzy intuitions about causation. Suppose we made a causal budget for poverty:
● % interaction with intentional adults (nurture)
● % interactions with children and irresponsible adults
● % economic opportunity
● % personal decisions (free will)
● % other bad luck
But aren't your personal decisions influenced by your genes, how you were raised, and community norms? The same is true of your parents, of course. So at some level this gets into free will. Of course


social determinism is no more plausible than genetic determinism, but it is interesting to see the connection with free will, and the nearby issue of responsibility. Events don't happen for a reason, but things survive for a reason. You can see why poverty survives. We ask for the cause because we want to know whom to blame, or we wish to change it by breaking the causal chain. But any enduring system has a mesh of positive and negative feedbacks that maintain it. Poverty isn't a new problem; it is an attractor. History of social class? Always present in states, and maybe in some tribal agrarian and herding groups. Either a herd or land is capital, so you get wealth variation even without hierarchy. People who are unemployed or underemployed aren't effectively exploited by the economy. Then there are the working poor, who often have more than one job, and are "exploited".

It's hard to find the right balance with skepticism. On one hand, the skeptics are right: we're far too inclined to think we know when we don't really. On the other hand, it's worth continuing the scientific approach to understanding. There is something to the argument that heritability or evolutionary psychology are an advance over non-scientific approaches to these questions. Similarly, we should cut some slack for people trying to use correlational methods to show the effectiveness of interventions. Much of the heat related to nature/nurture is around the classic social Darwinist argument about poverty. We know so much more than we did 100 years ago, but that has had little effect on the political debate. Is this question beyond rational investigation? Both sides have gathered evidence, and that is progress. Though there can't be a control, it does seem that randomized trials could provide good evidence about the effectiveness of particular interventions.

I'd like to figure out a positive spin on heritability and innateness. Authenticity is one facet, diversity another. It's easy to see why people see innateness as bad, though.
It's a constraint on the human spirit. I can't say that I think heritability is important because it's such good news, more because of its explanatory power and scientific support. I also think it's important to defend because it shows you can use science to learn about perennial disputes. For me, the good news is the possibility of moving beyond everyone having their opinion. The good news is that we can understand our world. EP is especially fascinating because it opens the possibility of “why” answers about human nature.
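The "area of a rectangle" bucket model mentioned in the notes above can be made concrete. Asking what fraction of the area the length "causes" is ill-posed, because area is a purely multiplicative interaction; but attributing a small *change* in area is well defined, which is the sense in which practical causality is about differences rather than existence (the numbers here are arbitrary):

```python
# Area = length * width: a purely multiplicative "interaction" of two causes.
l, w = 4.0, 3.0
area = l * w
dl, dw = 0.4, 0.15                  # arbitrary small changes to each factor
d_area = (l + dl) * (w + dw) - area
# To first order, dA/A ~ dl/l + dw/w, so each factor's share of the change
# is its fractional change relative to the total fractional change.
length_share = (dl / l) / (dl / l + dw / w)
print(f"length accounts for ~{length_share:.0%} of the change in area")  # ~67%
```

So "how much the drummer and how much the drum" has no answer for the sound's existence, but a perfectly good answer for how the sound would differ under a given manipulation.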

Human Diversity In comparison to other species, humans are not unusually diverse in their appearance and other obvious physical characteristics, yet we can easily recognize someone from small differences in appearance or the sound of their voice. Human behavior, however, is astoundingly diverse. This diversity of human behavior is partly because our unusually complex minds enable us to come up with unpredictable new behaviors, but much of it is caused by there being so many different specialized skills and ways of life that we can learn from other people.


Human Nature Recent evolutionary theories based on Genetic-Cultural Coevolution say that humans naturally act very much the way we observe them acting today. See Nature Versus Nurture for a summary of why we believe these features are inherent in the human condition. See also Good Or Evil?. We cooperate with each other a great deal, but we also compete for status and resources. Though the tension between getting along and getting ahead is not something that we talk about much, or even think about, this conflict is inescapable in the human condition. Evolutionary Psychology has much to say about the clever (and often unconscious) ways that we negotiate this issue. Evolution has decided for us that believing what you say is more important than knowing what you think (see The Happiness Hypothesis and this paper). People have innate emotions and motivations that guide them in navigating the world. Though the physical world takes its toll in premature death, the human world is primarily a social construct where success is determined in social interactions, so there is considerable cultural variation in the exact triggers for emotions or the way in which motivations play out. People have a moral sense, or as the Genesis story explains, knowledge of the difference between good and evil. This is an intuitive sense of proper social behavior which primarily concerns itself with maintaining productive cooperation within the group, minimizing conflict between individuals and conflict between individual interests and the group interest. See The Righteous Mind. Although our moral sense is innate, there is a great deal of cultural variation in exactly what is considered moral. People exist in a context of cultural rules for appropriate behavior (such as laws), and this system is contrived to approximately mesh with the innate framework of moral emotions.
Almost everyone mostly follows the rules, but (when nobody is looking) most people succeed in coming up with justifications for why it is o.k. to bend the rules a bit. Some people (mostly men) live as criminals or bandits on the margins of a society or between settlements. These people gain at least part of their living by taking from others, usually involving at least threats of violence.

Group membership is tremendously important to humans because it is a matter of life or death. We participate in various social groups, and learn to act appropriately in those contexts, displaying badges of membership, enforcing norms, and showing solidarity against outsiders. In-group/out-group dynamics are prominent. Groups also compete with each other. If the groups are within a single polity (tribe, country), this competition is usually non-violent, but the rewards in terms of status, political power, access to resources, and winning converts (Mind Share) are very real. Groups may cooperate, but when there is official cooperation (not just overlapping memberships) this often involves competition with other coalitions. Conflict between polities can turn into warfare. Almost all cultures have established practices where men form groups to fight on behalf of the polity.

We have inherited from our social primate ancestors an attunement to dominance or status rankings, but because we participate in multiple groups and can achieve in many different ways, these rankings are multidimensional and context dependent. We have an innate drive for Social comparison, and this is one of the ways that we have become genetically adapted to participate


in Cultural Evolution. We also naturally coordinate group actions through the dynamic of leadership/followership (see Evolutionary Leadership Theory). There is considerable cultural variation in how prestige and power are assigned, the degree of inequality, and the fixedness or fluidity of one's status. See Prestige Bias. We also have a strong interest in sexual relationships and high motivation to find desirable sexual partners. One of evolutionary psychology's greatest successes is in explaining how the differences in characteristic male and female sexual behavior arise out of the peculiarities of human reproductive strategies. Modest but statistically solid differences have been found between men and women in areas such as sensory perception. These differences arose because specialization of men and women in different roles allowed evolution to separately optimize the design tradeoffs for each sex. Although there is considerable overlap between male and female abilities and motivations, Genetic-Cultural Coevolution resulted in the social construction of gender roles which reinforce this behavioral specialization.

Innateness and Difference One area of controversy about the human condition concerns the interactions between the largely independent concepts of innateness (vs. learned or environmentally caused) and similarity (or difference) of individuals and groups. The controversy primarily concerns differences in behavior and the mental capacities and dispositions that underlie it. This is partly because the presence of physical differences is obvious when we compare individuals, the sexes, and (to some degree) ethnic groups. Behavior is also clearly highly influenced by experience, and our cognitive abilities are uniquely important in determining our success in the human-created environment that we live in.

                          Innate                                 Learned
Similar for all           Human universals                       Cultural universals
Individual difference     Genetic and random                     Experience/environment
Group difference          Common ancestors (family or ethnic)    Cultural diversity

In its simplest form, the Nature Versus Nurture dispute concerns the "individual difference" row in this table, although there is also substantial concern related to the possibility of innate group differences. See Harald Eia: Brainwash for an interesting series of videos examining the nature/nurture dispute.

Innateness When we say that something about a person is innate, we mean that it is a stable trait or inclination that was present in them even as a child. Let's take this definition literally and say a trait is innate if it was caused by anything that was inside us before we were born. Our genome


is clearly the primary internal cause, but our development is controlled by the interaction between our genes and everything else. It can be hard to tell whether something is innate or not (see Nature Versus Nurture):
● Prenatal causes: A trait may be present from birth, but be caused by something outside our developing body. There are known harmful influences such as drug consumption, but often these are hard to tell apart from …
● Developmental randomness: We know the detailed structure of the body (including the brain) can't be directly specified in the genome, because there just isn't enough information there. Development is highly dependent on self-organization to keep things on track (see Randomly growing an embryo). We don't know how big a factor this is, but it surely contributes to the non-shared environment in heritability estimates.
● Delayed appearance: Most people would accept that secondary sexual characteristics, such as voice change in men and broadening of hips in women, are innate differences, but they don't appear until adolescence. Humans are born quite immature, and development continues for about 20 years. In particular, we now know that significant irreversible remodeling happens in the brain at adolescence. One theory is that this consolidates experience, gaining efficiency at the cost of future flexibility (watch Sarah-Jayne Blakemore on the adolescent brain), but some of these changes, such as the development of sexual behavior, are clearly innate.
As well as these practical difficulties in determining the cause of a trait, even knowing the cause, we still might not be sure. Is developmental randomness an innate cause? But random development continues even after birth. We also can't ignore free will. As soon as we are born we begin to influence our experience by where we look and what we pay attention to.
If we consistently choose to behave in certain ways, then that could develop into a stable behavioral trait, and can also affect our physical development. What do we want "innate" to mean? Why do we care? In the context of the Nature Versus Nurture debate, much of the concern is that innate traits are hard to change. How hard are these things to change?
● Developmental randomness: can't be controlled at all.
● Genetics: we can make genetically engineered animals by starting with a single cell, and this could almost certainly be done in humans too. But even ignoring the issue of what changes (if any) would be desirable or morally acceptable, the fact is that (aside from repairing specific genetic defects) we currently have no idea what genetic changes we would need to make to affect behavior, or even to shift simple physical traits such as height.
● Transgenerational epigenetics: this information is heritable, just as genetic information is. Not much is known yet, but there is fairly convincing evidence that malnutrition can cause physiologic changes that persist across generations. This raises the possibility that improvements in the environment may take more than one generation to appear.
● Irreversible development: Our bodies (and brains) undergo successive irreversible changes during development. Malnutrition, abuse and neglect clearly have lasting effects, and learned behaviors and attitudes can also be frozen in by developmental changes.
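The heritability estimates mentioned above usually come from twin studies. As a rough illustration only (a sketch, not part of this wiki's analysis), Falconer's classic formula partitions trait variance using the correlations of identical (MZ) and fraternal (DZ) twins; the correlation values below are invented for the example, not results for any real trait.

```python
# Falconer's formula: a classic (and deliberately simplified) twin-study
# decomposition of trait variance.
#   h2 = heritability, c2 = shared (family) environment,
#   e2 = non-shared environment (where developmental randomness ends up).

def falconer(r_mz: float, r_dz: float) -> dict:
    """Partition trait variance from MZ and DZ twin correlations."""
    h2 = round(2 * (r_mz - r_dz), 3)  # genetic component
    c2 = round(r_mz - h2, 3)          # shared environment
    e2 = round(1 - r_mz, 3)           # non-shared environment + randomness
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical correlations for some behavioral trait:
print(falconer(r_mz=0.70, r_dz=0.45))  # {'h2': 0.5, 'c2': 0.2, 'e2': 0.3}
```

Note how the non-shared term is simply whatever makes even identical twins differ, which is why developmental randomness gets counted there.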


From our individual perspective, all these things are very difficult to change, but they fade over into factors that would normally be considered environment:
● prenatal environment
● ubiquitous environmental factors
● self-selected environment
Because of the fairly recent dispersal of humans out of Africa, followed by population growth, humans have a relatively low level of genetic diversity (in comparison to other species): 85% of the existing genetic variation is present within every population, while only 8% occurs between continents. Stereotypes about sub-Saharan Africans are particularly dubious, since that subcontinent contains the greatest genetic diversity; see Human Genetic Variation. Although humans are relatively genetically homogeneous, genetic differences do underlie individual differences in talent, motivation and behavior that humans happen to consider highly important (see Behavioral Genetics). Because of the relative similarity of all human populations, we would expect the variation between geographically defined groups to be smaller than individual variation. Even small differences can be detected with confidence using the large samples available, but any behavioral differences could be explained by culture and other aspects of the local environment, so it is hard to know what component might be innate.
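The within-population vs. between-population comparison above is an instance of the law of total variance: total variation splits exactly into a within-group and a between-group component. A toy sketch (the numbers are invented, not real genetic data) of that decomposition:

```python
# Law of total variance: total = within-group + between-group.
# Toy data: two hypothetical groups measured on some trait.
group_a = [1, 2, 3, 4, 5]
group_b = [2, 3, 4, 5, 6]

def mean(xs):
    return sum(xs) / len(xs)

n = len(group_a) + len(group_b)
grand = mean(group_a + group_b)

# Within-group: average squared deviation from each group's own mean.
within = sum((x - mean(g)) ** 2 for g in (group_a, group_b) for x in g) / n
# Between-group: squared deviation of group means from the grand mean.
between = sum(len(g) * (mean(g) - grand) ** 2 for g in (group_a, group_b)) / n

total = sum((x - grand) ** 2 for x in group_a + group_b) / n
assert abs(total - (within + between)) < 1e-9

print(round(within / total, 2))  # 0.89 -- most variation is within groups
```

Even here, where the two groups have visibly different means, almost 90% of the variation is within groups, which is the same qualitative picture as the human genetic data cited above.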

Diversity vs. Equality How can it simultaneously be true that "All people are created equal" and that "No two people are alike"? It is true that no two people are the same (although identical twins have the same genes). If there is a form of equality that we share, it isn't mathematical equality, because humans are not indistinguishable and interchangeable. In fact, a crucial feature of the human condition is diversity. When the American Declaration of Independence proposed that "All men are created equal", the concern was with hereditary nobility, not with the idea that our personal genetic heritage causes us to vary in important ways. One of our main challenges in life (especially as an adolescent and young adult) is to figure out what we are good at in comparison to others. It might have been the case that humans were substantially equal in their abilities, interests and motivations; if that were true, then a simple argument could be made for the naturalness of social equality. Our culture is founded on institutionally formalized forms of equality, such as "one person, one vote" and "equality before the law". Aspiration toward greater social equality is also seen as progressive, leading to ideals such as "equality of opportunity." Yet is naturalness even the strongest argument in favor of social equality? To say that we ought to behave in some way because it is natural has been criticized as the naturalistic fallacy (see Is vs. Ought).

Human Nature What does it mean to say that any sort of human behavior is natural, given that all human creations (and only human creations) are artificial? The concept of a “state of nature” was developed as a thought experiment to investigate forms of government, but modern thinking on the evolution of human nature is based on the understanding that it is natural for humans to exist in an artificial environment.


That is, humans (as we have come to be) have always existed in an environment that is substantially human created. Although our ability to control the material environment has greatly increased in modern times, our ability to function in the social environment has always been crucial in determining our success in life (our ability to raise children similar to ourselves.) The evolution of a human mind capable of culture created a new environment for humans to evolve into: the cognitive_niche. This created a positive feedback where greater mental abilities enabled a more complex social environment, which in turn created pressure for the mental abilities to thrive in that social environment (see Genetic-Cultural Coevolution).

Universals One area of investigation in social sciences such as psychology and anthropology has been identification of universals: what do all people and all cultures have in common? Broad generalizations can be made, but these must be made against a backdrop of diversity. Do we exclude individuals who are mentally ill or disabled? Are we willing to accept that cultural universals must admit some exceptions? Although innate human behavioral inclinations surely directly underlie many cultural universals, a cultural universal can also arise as a cultural adaptation to some non-cognitive aspect of the human condition. For example, all peoples go through childhood, adolescence, adult independence, and ultimately old age and death.

Cultural diversity implies cultural inequality Cultural diversity is also profound. Our modern way of life emphasizes the importance of particular abilities, and the cultural forms that we use to regulate interaction (market economy, literacy, democracy) represent only a tiny portion of the range of known cultural variation, let alone the presumably infinite space of possible human cultures. These western cultural forms increasingly dominate the world, displacing other cultures. Why? One answer is that modern cultures have shown an ability to support human productivity, allowing mutually beneficial cooperation to be extended to a far larger scale. Another answer is that our culture has evolved specifically to preserve and propagate itself, in competition with other cultures. It is incoherent to celebrate cultural diversity without acknowledging that cultures vary in important ways. These differences affect both what it is like to live in a particular culture and also to what degree cultures gain or lose "mind share". From the perspective of Cultural Evolution we can say that a cultural variant is adaptive in the sense of propagating itself, but there is no straightforward connection between this descriptive statement and any claim that a cultural variant is in some absolute or moral sense superior (see Is vs. Ought). In particular, it is not necessarily the case that a culture which out-competes others will be more pleasant to live in (see the state).

Human Origins and Original Sin


One approach to deciding whether humans are basically good or evil is to infer what extremely early human behavior was like. What was the behavior and social structure of the earliest common ancestors of modern humans, about 150,000 years ago? What about our earlier social primate ancestors? Were these ancestors violent or peaceful? Did everyone have equal power within the social group, or did might make right? Of course, no one knows what the behavior of prehistoric humans and proto-humans was. Since behavior doesn't fossilize, it is hard to see how we ever could know for sure. What scientists have done is study our closest non-human relatives, the Chimpanzee and Bonobo, and also living humans whose way of life is similar to our best guess of how early humans lived. Why is early human behavior better evidence for human nature than how people behave now? Perhaps by looking at the behavior of ancestors we are getting more directly at the essence of human nature. In particular, culture has a huge role in shaping our behavior and social structure. It is very difficult to say, looking at human life today, which things are human nature and which are arbitrary cultural conventions.

Social Primates When we look at our social primate cousins we can see an aspect of our natures without being confused by culture. While there is evidence that non-human primates can adopt useful behaviors and socialize others into these patterns, culture clearly plays a far smaller role than it does for humans. We argue that being The Cultural Animal is the most important defining characteristic of humanity, so any animal that lacks complex culture is not exactly human, but their behavior is the best evidence we have of proto-human behavior. Since evolution works by tinkering with what went before, a great deal of our common ancestors' nature is still in us. Especially fascinating is evidence that other social primates have moral emotional responses, such as reactions to unfairness. Yet humans are neither chimpanzees nor bonobos; we see ourselves in them, but they can't tell us who we are. Our social primate nature has been augmented and overlaid by new mental structures.

Anthropology Anthropology gives fascinating evidence about the vast diversity of human behavior and about the vast diversity of ways in which cultures shape our behavior into functioning social patterns. Any way that humans actually live today is clearly a possible way of living (at least under the right conditions). But how did our earliest ancestors live? There is convincing archeological evidence that farming and herding don't date back more than about 10,000 years, so early humans weren't farmers or herders. They must have gotten their food by some combination of gathering plant foods, scavenging the carcasses of dead animals, and hunting. People who live this way are called hunter-gatherers. Among hunter-gatherer peoples that have been described by anthropologists, these groups tend to be:


Egalitarian: There is no chief, and group decisions are made by consensus. Although men and women have different roles (see Sex Differences), they have roughly equal power. Some groups do have competition for status as a "big man".
Freely sharing: There is a strong cultural value on everyone being satisfied with the sharing of scarce resources (especially meat). There is little personal property, and children are socialized to share.
Nomadic: They are prepared to pick up and move whenever it is useful. As well as allowing them to exploit different seasonal food sources, this is also important as a way to resolve conflict.
Relatively non-violent (within the group): The preferred solution to conflict is group discussion. If they can't agree, then the group may split up. In some areas between-group raiding is common; other groups avoid raids by moving away.
Although there is no entirely convincing theory for why hunter-gatherer cultures should have these things in common, we can see how these behaviors and values are consistent with their lifeways. They form a synergistic whole, where each reinforces the others. Nomadism works well with hunting and gathering because it allows people to move on when food is exhausted. A nomad can't have many possessions, because they have to be carried. Sharing works well with hunting, because kills are unpredictable and meat must be eaten before it goes bad. Minimizing in-group conflict benefits any group. Sharing, egalitarianism, and lack of possessions reduce serious within-group conflict, and nomadism makes it easier for groups to split when there is conflict. Why are hunter-gatherers egalitarian? You might think the answer is obvious: we humans would just as soon not have any big man lording it over us. But why do we feel that way? Evolutionary Psychology is largely about explaining why we have the emotions and motivations that we do.
Since being anything other than a hunter-gatherer is relatively recent (on the time scale of genetic evolution), our innate emotions and motivations should be well adapted to that way of life. The simplest answer is that hunter-gatherers don't need a leader (to resolve internal conflicts or lead war parties). In the hunter-gatherer life, everyone has to work to get enough to eat, and often has to work independently and take initiative. Egalitarianism is one way that a culture can manage Social Conflict. Evolutionary psychology predicts that individuals will be motivated to get more than an equal share of food or of sexual partners, but also predicts that nobody will want to be on the losing side of within-group competition. An egalitarian social system reduces this conflict by setting the standard to be equality. Yet a standard of equality does nothing if the rule isn't followed. Evolutionary psychology predicts that we will be watchful for signs that we are losing out. If we have to enforce fairness by fighting for our rights (as a monkey will when it sees another getting a better reward), then we will only be treated fairly by those we can defeat in a fight. Since some are better fighters than others, this creates the Dominance Hierarchy usually seen in social primates. The solution that egalitarian human societies have hit on is social control: the group cooperates to enforce equality (see reverse dominance hierarchy). Although less drastic solutions are preferred, in hunter-gatherer


groups there are always many hunters with excellent skills in using lethal weapons. Aspiring big men know that.

What About Morality? So are the behaviors of hunter-gatherer groups more virtuous than what we see in other societies? Certainly it is common enough for state societies (based on farming) to have high levels of social inequality, including hereditary nobility, to practice slavery and human sacrifice, to value fighting ability, and to frequently attempt empire-building military conquest. That does create a lot of suffering. Yet anthropologists are quite reluctant to declare one culture to be superior to another in any way, let alone to say that it is morally superior. We know what we think is moral, but other cultures don't entirely agree. Is it fair to judge them by our standards? Cultural Relativism says no. Evolutionary Psychology provides a tricky way of generating an answer by turning this question around. It is likely that our innate moral senses and motivations evolved to help us function in hunter-gatherer groups with egalitarian social structures. As long as our social natures have not changed since then, living in a small egalitarian group would be the most natural way to live; it would "feel right" to us. For the past 5,000 to 10,000 years almost everyone has been living in much larger, less equal groups, where our moral sense doesn't quite line up with the social rules. This evolutionary mismatch has created millennia of nagging frustration. The moral superiority of hunter-gatherer life isn't quite that clear-cut, though. First, although philosophers who study Ethics don't agree on much, they do agree that "feeling right" is not a sound way to decide what truly is right (see Evolutionary Ethics). Second, it is likely that peoples who have been through the meat-grinder of civilization have endured some fine-tuning of their motivational structures. Although evolution does tend to move slowly, if you are living in a despotic city-state you'd better develop some acceptance of social inequality, or you're going to die.
This creates strong Selection Pressure, which can shift things pretty dramatically in 10 generations (300 years), or even less. What about war? Are humans naturally violent, or peaceful? We don't really know how often tribes of early humans got into battles, or whether they attacked other hominids (such as Neanderthals). We would expect that nomadic hunter-gatherers would rather move on than risk death in battle, as long as there was somewhere to move on to. Once we adopted farming, moving on was no longer an attractive option, so inter-group conflict increased. Inter-group conflict is also common in recent hunter-gatherers. Even before the rise of the state, most of us were worried about raids from neighboring tribes, and had to be prepared to fight. See The Better Angels of Our Nature for a book-length study of human violence.
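The claim above that selection can shift a trait noticeably within ten generations can be made concrete with the breeder's equation (R = h²S), a standard result from quantitative genetics. The numbers below are purely illustrative, not estimates for any real human trait:

```python
# Breeder's equation: response per generation R = h2 * S, where
# h2 is narrow-sense heritability and S is the selection differential
# (how far, in standard deviations, the reproducing individuals differ
# from the population mean). Values below are purely illustrative.

def cumulative_shift(h2: float, s: float, generations: int) -> float:
    """Total shift of the population mean, in standard deviations,
    assuming h2 and S stay constant across generations."""
    return h2 * s * generations

# A moderately heritable trait (h2 = 0.4) under modest selection
# (S = 0.5 SD per generation) over 10 generations (~300 years):
print(cumulative_shift(0.4, 0.5, 10))  # 2.0 standard deviations
```

A two-standard-deviation shift in a behavioral disposition would be a dramatic change in a population, which is why a few hundred years of strong selection cannot be dismissed out of hand.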

Good or Evil? So are humans basically good or evil? What has the study of human origins told us? We evolved as egalitarian hunter-gatherers, so that kind of social structure feels right to us. In recent millennia we've learned to tolerate some social inequality, gone on to invent new types of hierarchical organizations with overwhelming military and economic power, and picked up some


new moral sentiments such as loyalty, respect for authority, and patriotism. We don't know how much of this change in the moral landscape is recent genetic evolution and how much reflects the flexibility of our moral mechanisms. The significant heritability of political attitudes and personality traits suggests that at least part of the story is genetic. This explains how we got to be the way we are, and why we are fascinated by hunter-gatherer lifeways, but what has looking at the beginning told us about ourselves, as we are now? Whether humans are innately good or bad today depends on our current nature, which is at least slightly different from how we were back then. Were we corrupted by civilization? Or has the change between then and now been almost entirely Cultural Evolution? If so, and if we could find an empty corner of the planet with adequate abundance for hunting and gathering, we could go back to living the way that humans were designed to live. Stories about whether humans are inherently good or evil are myths: fascinating, but neither true nor false. See Good Or Evil?.

Sex Differences (Notes: people orientation vs. thing orientation; Simon Baron-Cohen; neonatal sex differences.) Why do people care?
● Only mental differences are controversial.
● Political reasons (inequality and nature/nurture).
● Scientific investigation. EP assumes differences are genetic, although that assumption may not be entirely necessary. Cultural evolution plausibly also seeks successful reproduction, so the presence of such adaptations doesn't prove primacy of the genetic process. The "smart unconscious" and motivational opacity are also useful parts of the EP toolkit that help to understand subjectively puzzling aspects of the human condition.
● Personal understanding. The source of differences does not matter, but EP implicitly draws this in. The popular assumption is that differences are innate.
How important is it to make the point that, regarded as animals, we are in many ways outliers? High reproductive investment necessitated help from the father and others; EP says fathers help most, then relatives. There are many differences between men and women that are easily noticeable (appearance) or directly measurable (body size.) The reality of these differences is clear, and it is widely accepted that such anatomic differences are the result of a genetically programmed developmental sequence (Sexual differentiation.) Men and women also show clear behavior differences, and every culture has different standards for how men and women should behave (gender roles.) Are these behavior differences learned? Are they genetically determined? There is an ongoing Nature Versus Nurture debate about sex differences in the human Mind; see Individual Differences and Fairness. Knowing typical sex differences, it is possible to make weak predictions of how a person will behave, but it's pretty hard to predict anyone's behavior, male or female (Prediction is Intractable.) We can scientifically investigate sex differences in thinking and behavior without knowing what causes these differences.
What differences are there, and how big are the differences?


Numerous statistically significant differences have been identified by psychological tests and surveys, such as the finding that men tend to have superior spatial skills. The widespread interest in scientific investigation of sex differences comes mainly from a desire to explain or understand the behavior of someone of the opposite sex, especially lovers. That is, beliefs about sex differences are theories that people use to explain and predict the behavior of others.
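The practical size of such a difference is conventionally reported as Cohen's d, the gap between the two group means in standard-deviation units. A small sketch (an illustration, not from this wiki) of why even a "medium" difference still leaves the two distributions mostly overlapping, assuming both groups are normally distributed with equal variance:

```python
import math

def overlap_coefficient(d: float) -> float:
    """Shared area of two unit-variance normal distributions whose
    means differ by d standard deviations (Cohen's d)."""
    # Standard normal CDF via the error function.
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return 2 * phi(-abs(d) / 2)

# A "medium" effect size (d = 0.5) still means ~80% overlap:
print(round(overlap_coefficient(0.5), 2))  # 0.8
```

This is why statistically solid group differences coexist with the considerable overlap between male and female abilities noted earlier: knowing someone's sex licenses only weak predictions about the individual.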

Evolutionary Theories Most obvious are the differences in reproductive organs and reproduction-related behavior. As in all placental Mammals, the baby is gestated internally by the mother and then nursed after birth. In humans the approach of giving birth to a relatively immature infant is taken to an extreme. The typical age at maturity of a mammal the size of a human is 3 years, while human sexual maturity is at about 13 years. Human dependency on adult care also extends beyond sexual maturity, which is quite unusual for any animal. Because of extreme immaturity at birth, a human infant requires intensive care, including breastfeeding. In addition, the extended parental dependency means all of a woman's offspring will be dependents for a large part of the mother's life. In most mammals, the mother provides all parental care, feeding and protecting her children, as well as continuing to support herself. In contrast, humans are cultural animals who cooperate when raising children (and in most other tasks of living.) Humans also form emotional attachments with their sexual partners and with children. These love attachments are crucial to the maintenance of the family, the most basic unit of human social organization. Family structure and behavior is greatly influenced by cultural norms, but a family usually contains at least a mother, her children and a man who is her sexual partner (the social father.) The father sometimes assists by providing direct child care, often assists by helping to obtain food, clothing and shelter, and almost always participates in the larger community in ways that may indirectly benefit the family, such as public advocacy of family interests, production and consumption in the economy, and military service. Because the bonds between parents are sexual, cultural standards of appropriate male and female behavior cannot be separated from the norms about family structure.
This means that the mother must invest considerable time and energy in raising young, while the father's contribution might be as little as the sperm itself. This difference in parental investment is common to all mammals, but as described above, human fathers and other helpers typically invest far more than that minimum. Family structures are diverse, and assistance comes from relatives and from other community members (friends, neighbors, paid caregivers.) See also: Evolutionary Psychology, and Sex Differences In Human Mate Preferences.

A Priori/A Posteriori An evolutionary perspective gives insight into a traditional philosophical problem of the relation between a priori knowledge (which can be known to be true without reference to evidence from the world) and a posteriori knowledge (which can only be evaluated by examination of the world to see if it is in fact that way.) This is closely related to the duality of necessary/contingent (see A priori and a posteriori). Our point here is that this somewhat dusty philosophical debate can be seen as related to still-vital concerns of human nature and how the mind works. A trivial way that something might be known a priori is if the statement is a tautology (“All bachelors are unmarried”), but we find more interesting Kant's original speculations about the a priori structure of the mind (see Categories of Understanding.) As we see it, Kant was basically right that the mind contains particular capabilities for perceiving kinds of order in the world, such as causation. His argument was that these capacities of perception were a priori, somehow innate in man, perhaps God-given, or at least a necessary precondition for rational thought. This is in a narrow sense correct. When the philosopher sits down in his armchair, he does possess all of these capacities. However, this neglects both the philosopher's life-history and the evolutionary history of his species. From the evolutionary perspective, we see that our capacities of mind are a consequence of the usefulness of those capacities in the world we happen to live in. Kant's argument that the mind places a basically arbitrary structure on the world raises the possibility that our perceptions may be deluded, or, more likely, at best grossly oversimplified. This is to some degree true, in that we can only perceive a tiny fraction of what is “out there” (see Sensory Limitations) and our understandings of the world may be quite wrong at times (see Naive Realism), but evolution has guaranteed some alignment between the relevant properties of the actual world and our mental capacities of perception. 
Although Kant's understanding of the problem of mind is firmly rooted in philosophical Idealism (a belief that important truths can be discovered by thought alone), from an empirical scientific perspective, we argue that he made the mistake of assuming that whatever he could determine by introspection was necessarily a priori. He accurately introspected important aspects of the mind, but such capacities are only necessary from an anthropic perspective. They are a precondition for the existence of philosophers, but philosophers didn't have to exist. The question of Nature Versus Nurture can be seen as another related aspect of the philosophical debate between Idealism and empiricism. Kant's proposal of the a priori structure of the mind was a reaction to claims by empiricist philosophers such as John Locke that the mind is a blank slate at birth (Tabula rasa, The Blank Slate.)

Causality Kinds of causation: ● Social causation: intention, agents and responsibility. ● Billiard ball causation: we see an intuitively clear mechanism or causal chain. (Tacit knowledge) ● Inferred causation (correlation, experiment and induction). If I do this, then that will happen, all other things being equal. Although causation by induction seems the least sound of the three, the other kinds are based on it. We are confident that balls behave in a certain way because we have seen them interact and have played with them. We also seem to have innate intuitions about causal interactions between solid objects. Even babies know that solid objects don't pass through one another.
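The gap between noticing a correlation and establishing inferred causation by manipulation can be sketched in a few lines of code. This is a toy illustration with invented variables, not real data: a hidden common cause drives two outcomes, so observed together they correlate strongly, but randomly setting one of them (as in an experiment) makes the spurious association vanish.

```python
import random

def correlation(xs, ys):
    """Pearson correlation, computed directly from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Hidden common cause: daily temperature drives both variables.
temps = [random.uniform(0, 30) for _ in range(1000)]

# Observation only: both rise with temperature, so they correlate strongly,
# even though neither causes the other.
ice_cream = [t + random.gauss(0, 3) for t in temps]
drownings = [t + random.gauss(0, 3) for t in temps]

# Manipulation: we *set* ice cream sales at random, breaking the link to
# temperature; the spurious association disappears.
forced_ice_cream = [random.uniform(0, 30) for _ in temps]

print(correlation(ice_cream, drownings))         # strong, but spurious
print(correlation(forced_ice_cream, drownings))  # near zero
```

Note that even the experiment rests on induction: it assumes "all other things" (here, the noise terms) behave tomorrow as they did today.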


But why is the ball solid? Well, the ball is only solid under a limited range of conditions. Too hot and the ball will melt or vaporize. Solidness is an emergent property of the interactions of a large number of atoms. In a certain temperature range the ball is solid because of the interactions between electrons in the atoms. On one view, an atom is mostly empty space. Electrons don't actually bounce off each other; the electromagnetic force between them is mediated by the exchange of virtual photons. But that wouldn't happen if electrons didn't have electric charge. Why do they? So far as we know, they just do. It's an induction. Physics has given us some insight into why objects can be solid or liquid, but we still don't have a complete causal mechanism free from inductive assumptions. So in general, examining a causal mechanism at a higher level of detail can give us some insight into when the mechanism might break down, but it never eliminates all doubt. That is, we learn something about the nature of “all other things”. Agent causation relies on the particularly unsound process of inferring intention. Usually we see intention as a subjective mental state, but we freely infer intention based on actions and assumed motivations, and we tend to see our inferred intentions as being more reliable than the reported subjective intentions of the other. This is in fact reasonable on several levels, even leaving aside the social-level explanation “he's lying”. We can act in an intentional way without forming a conscious intention (catching a ball unexpectedly thrown at us), and most animals probably act this way all the time. There is a good argument that our intention is actually generated unconsciously, based on weighing countless factors, most of which we never consciously consider. For this reason, we don't accurately know our own intention. 
The intuitive “cause of existence” is like the ideal of a sufficient cause, but on close examination we see that this sort of cause isn't actually knowable at all. All we can do is change some things while trying to keep everything else equal. We can never specify what “everything else” is in full detail. This is as true in counterfactual interpretations as in experimental manipulation or statistical inference. In an experiment the control is supposed to capture “everything else”. What is the “cause of existence” of a marble? The idea of sufficient cause is driven by the social need to assign blame and award praise. Once you've found sufficient cause for an event, then you can identify all the culprits. ● In the specifically social context, the ambiguity of the appropriate level of analysis isn't such a big problem, and ● Assigning blame and awarding praise is adaptive, but only if the investigation is kept inexpensive enough that the costs don't outweigh the benefits. In reality we must settle for an imperfect process, sometimes merely the identification of a scapegoat. This fuels our false intuition that sufficient cause can be correctly understood. ● Responsibility is a moral intuition closely akin to causation, and it isn't a simple unitary concept either. Consider the common tension between punishing leaders who didn't understand that mistakes were being made and punishing underlings who were more directly causally connected to the outcome. This is like the distinction of ultimate and efficient cause. We admire a leader who accepts responsibility even when he is only connected to the misfortune in a counterfactual way (failing to act). We also award leaders credit for good outcomes even when their contribution was mainly in getting out of the way.


These social meanings of causation related to division of labor and moral responsibility are quite different from the pragmatic mechanisms of a manipulation theory of causation. We can reason in a precise way about necessary causes (preconditions) and can often convincingly demonstrate that a causal connection is real. This sort of understanding begins with being constantly on the watch for meaningful coincidences (correlations). A coincidence doesn't prove a causal connection, but making a change in the world (an experimental manipulation) can. Because of the big payoff from being able to manipulate the world, we are highly attuned to noticing coincidences and are motivated to come up with and to act on causal stories. It is easy to fall for the narrative fallacy, so skepticism is always called for. Scientists are human, and their theories aren't exempt. Yet skepticism has no inherent limit; as the ancient Greek skeptics understood, the critic never has to concede defeat, while a theory's supporter can be defeated. There is a fundamental asymmetry in knowledge. Things can be proven false, but (outside of purely artificial worlds such as mathematics) nothing can ever be proven true, and indeed the impossibility of specifying “all other things” means that real-world causal rules are prone to unpredictable breakdowns. Skepticism is infinite and rigorously founded; causal knowledge is fragile, scarce and valuable.

Determinism vs. Free Will What does it mean for our traditional conceptions of free will and moral responsibility that all our thoughts and all our behavior are caused by physical processes in the brain? There is a long-standing philosophical debate (see Free will.) Determinism is a fairly clearly defined term, whereas Free will is not. In Freedom Evolves, Daniel Dennett finds considerable philosophical recreation in considering different sorts of free will and whether they are “worth having.” Our position is that because Prediction is Intractable, the threat to any sort of free will is overblown. We can define determinism in two similar ways: 1. If we could somehow rewind the world and replay it, exactly the same things would happen the second time around, or 2. If we could exactly measure the entire state of the world and had the right physics, then we could predict what will happen next. Right away we see that determinism also has a rather philosophical character, since neither of these things could ever actually be done. The first requires time travel, and the second requires a privileged observer outside the universe (as well as some really impressive measurement technology.)
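The first definition can be made concrete with a toy sketch, in which a seeded pseudo-random number generator stands in for "the entire state of the world" (an assumption for illustration only): rewinding to exactly the same initial state and applying the same laws yields an identical replay.

```python
import random

# A toy "universe": a seeded pseudo-random walk. Rewinding means resetting
# to the same initial state (the seed); determinism means the replay is
# identical in every detail.

def run_universe(seed, steps=100):
    rng = random.Random(seed)   # the complete "state of the world"
    x = 0.0
    history = []
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)   # the fixed "laws of physics"
        history.append(x)
    return history

first_run = run_universe(seed=42)
replay    = run_universe(seed=42)   # rewind and replay
other     = run_universe(seed=43)   # a slightly different initial state

print(first_run == replay)   # True: same state + same laws, same history
print(first_run == other)    # False
```

Of course, the sketch only works because a program really can be rewound and its state really can be measured exactly, which is precisely what the physical universe denies us.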

Physical Nondeterminism For the concept of determinism to have any practical relevance we must limit the discussion to small parts of the universe that can be isolated in a controlled environment so that they can be subjected repeatedly to the same stimulus. If we look at Atomic physics (likely the smallest scale relevant in brain function), then we find that atoms are in many ways highly deterministic. Each atom seems to be utterly interchangeable with other atoms of the same isotope, and properties such as excitation energy are so invariant that they are now used to define fundamental measurement units such as the second. However, if we look at phenomena such as the timing of Spontaneous emission, we find only randomness, and Quantum mechanics tells us that this non-determinism is fundamental. So then we're done, right? No determinism, so having a physical brain does not constrain free will. Hold on… First of all, as Dennett points out, randomness is not a “form of free will worth having.” Second, as discussed in Mind, the brain is designed to overcome the fuzzy imprecise behavior of the goop it is made out of, keeping our hearts beating at an appropriate rate in spite of atomic-scale randomness such as Brownian motion. Even so, since the brain is basically an Analog computer there is surely some level of true random nondeterminism that leaks into our decisions. But even without this, human decisions would still be highly unpredictable due to the sheer complexity of the brain and the unpredictable inputs that each person constantly receives from the larger environment (which includes other unpredictable people reacting to our own behavior, adding to the Physical Chaos.)
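The claim that complexity alone defeats prediction, even without quantum randomness, can be illustrated with the logistic map, a standard textbook example of deterministic chaos. The rule below is trivial and perfectly deterministic, yet a measurement error of one part in a billion in the initial state is amplified until long-range prediction fails, which is the same reason weather forecasts degrade after a few days.

```python
def logistic_traj(x0, r=3.9, steps=80):
    """Iterate x -> r*x*(1-x): fully deterministic, chaotic for r near 3.9."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_traj(0.2)
b = logistic_traj(0.200000001)   # initial "measurement" off by one part in 2e8

# Compare divergence early vs. late in the run:
early = max(abs(x - y) for x, y in zip(a[:10], b[:10]))
late  = max(abs(x - y) for x, y in zip(a[60:], b[60:]))
print(early)  # still microscopic: short-term prediction succeeds
print(late)   # typically of order one: long-term prediction has failed
```

This is why predictions of chaotic systems can be short-term or statistical, but never both precise and long-range.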

Behavioral Regularities In spite of underlying randomness and irreducible complexity, in our common experience we do see that people are somewhat predictable, showing a Personality or temperament that is reasonably consistent over time, as well as habits or other characteristic behaviors. This differs from the sort of precise “I know what you will do next” determinism of philosophy thought experiments because it is only true in a statistical sense: “Joe tends to go and work out at the gym when he's had a bad day at work.” This sort of human predictability is real, and can at times seem like a sort of suffocating mechanical control that defeats our free will, especially when we are consciously trying to change how we behave. Though this sort of human determinism may be frustrating, it is familiar and unthreatening. See Reprogramming the Mind.

New Developments What is new is that fMRI and other improved measurement techniques are revealing more of the physical workings of the brain, undermining our naive dualism and suggesting the possibility that human behavior could be predicted much more accurately using these new means. Indeed, in the lab it has been found possible to predict simple decisions some seconds ahead of the apparent time of conscious decision. Due to the fundamental intractability of complex systems, predictions of individual behavior will always be short-term, statistical, or both (as are weather predictions, and for the same reason.) However, improved instrumentation and analysis may allow more accurate and less subjective predictions of behavior. The most dramatic area of practical application of behavior prediction is in the control of socially unacceptable behavior, especially in the legal system. See The Law and Neuroscience Project and Neuroethics. There is also considerable interest in exploiting regularities in human behavior for financial and political gain (marketing, spin control.)


Since free will is traditionally understood as being a conscious process, see also Consciousness.

Evolutionary Ethics There's a sense in which what is good or evil is remarkably simple. When we consider a List of Virtues or the Seven Deadly Sins from an evolutionary perspective, our moral senses can be understood as adaptations for negotiating the realm of individual/group conflict. What is good is that which promotes productive cooperation, both within our family and our larger social groups. What is bad is that which is destructive of existing resources, divisive in our relationships, and not oriented toward overall productivity.

Moral Behavior A good starting point for understanding morality is to look at how people normally behave in situations where moral issues are at stake. While virtues are often thought of as free-standing characteristics of behavior, when we look at concrete behavior at a fine level of detail we find that morality is highly context-dependent, and that the context is social.

Situationism There exists a long experimental tradition in social psychology—often cited, for reasons that will become obvious, under the title of “situationism”—that unsettles the globalist notions of character central in much philosophical theorizing. For example: ● Isen and Levin (1972: 387) discovered that subjects who had just found a dime were 22 times more likely to help a woman who had dropped some papers than subjects who did not find a dime (88% v. 4%). ● Darley and Batson (1973: 105) report that passersby not in a hurry were 6 times more likely to help an unfortunate who appeared to be in significant distress than were passersby in a hurry (63% v. 10%). ● Mathews and Canon (1975: 574–5) found subjects were 5 times more likely to help an apparently injured man who had dropped some books when ambient noise was at normal levels than when a power lawnmower was running nearby (80% v. 15%). ● Haney et al. (1973) describe how college students role-playing as “guards” in a simulated prison subjected student “prisoners” to intense verbal and emotional abuse. ● Milgram (1974) found that subjects would repeatedly “punish” a screaming “victim” with realistic (but simulated) electric shocks at the polite request of an experimenter. These experiments are not aberrational, but representative: social psychologists have repeatedly found that the difference between good conduct and bad appears to reside in the situation more than the person; both disappointing omissions and appalling actions are readily induced through seemingly minor situational features. What makes these findings so striking is just how insubstantial the situational influences effecting troubling moral failures seem to be; it is not that people fail to adhere to standards for good conduct, but that they can be induced to do so with such ease. (Think about it: a dime [50 cents, inflation adjusted] may make the difference between compassionate and callous behavior.) At the same time, research predicated on the attribution of character and personality traits has enjoyed limited success in the prediction of behavior; standard measures of personality have very often been found to be tenuously related to behavior in particular situations where the expression of a given trait is expected. (Moral Psychology: Empirical Approaches)

Attitude-Behavior Gap Another durable and puzzling result from Social Psychology is the Attitude Behavior Gap. Although this can be seen as a more general mismatch between story and behavior, it has been studied most with moral behavior, where we often do one thing and say another. See Attitude Behavior Gap.

Self Interest Animals complex enough to have behavior tend to behave in adaptive ways, those which benefit their personal survival and reproduction, even if this places them in direct conflict with other members of their species. In evolutionary theory, this behavior is freely referred to as “selfish” without any negative moral connotation. Social species do exhibit cooperative behavior as well as competition, though when altruism is present it usually favors close relatives. Social science research shows that humans are no exception. A large part of the complex situation dependency of moral behavior seems to relate to judgments about what degree of self-serving behavior is acceptable in this particular situation, and yet (because of the Attitude-Behavior Gap) we are not aware that we are doing this. See also Story, Intentional Opacity and Representational Opacity.

In-group favoritism Humans are unique in the degree to which we cooperate with others who are substantially unrelated to us, and yet we are also often hostile to people from other groups. For humans, it makes sense to cooperate with people who share our goals and values, to do favors for neighbors who may someday return the favor, and to seek alliances with others who seem “like us”. Yet social psychologists have found that we still tend to favor any group we find ourselves part of, even when the experiment has been constructed so that the group is entirely random, and you will never meet your other group members even once; the group has no purpose and no future. See The Cultural Animal (book).

Moral Theory Many philosophers have abandoned Normative Ethics, that is, the attempt to determine how people ought to act.

Empirical Ethics


Some philosophers have been showing an encouraging interest in this sort of actual moral behavior. See Moral Psychology: Empirical Approaches for a readable overview. See also: Quasi-realism, Ethical Egoism, Enlightened Self-interest.

Toward a Science of Morality Cellular Morality It's interesting to consider how single cells control their behavior in ways that (to a human) have moral aspects, even though cells clearly have no awareness of these implications. One example is the way that bacteria colonize a surface by forming a Biofilm. This slimy film often contains a host of unrelated types of bacteria that can form a cooperative community, where different bacteria eat different nutrients in the environment, with their excretions often being further digested by other bacteria. The bacteria live in a slimy extracellular matrix secreted by some of the bacteria. This helps the community to stick to the surface and can also protect them from some environmental hazards. Bacteria use chemical communication known as Quorum Sensing to activate their biofilm-building behavior. Another example from multicellular organisms (including humans) is Apoptosis or programmed cell death. This is when a cell that is diseased or no longer necessary disassembles itself for easy cleanup, killing itself in the process. It is easy to see this as a sort of altruism, traditionally considered a highly moral quality of behavior.
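The logic of Quorum Sensing can be sketched as a toy model (the numbers are invented for illustration, not biological measurements): each cell secretes a signal molecule into a well-mixed environment, and the colony switches on its individually costly biofilm-building behavior only when the accumulated signal shows that enough neighbors are present to make cooperation pay.

```python
# Toy parameters, invented for illustration only.
SECRETION_PER_CELL = 1.0   # signal released by each bacterium
THRESHOLD = 50.0           # concentration that triggers the switch

def signal_level(population):
    """Well-mixed approximation: concentration scales with cell count."""
    return population * SECRETION_PER_CELL

def biofilm_active(population):
    """The 'quorum': commit to cooperation only when enough neighbors exist."""
    return signal_level(population) >= THRESHOLD

# A lone colonist stays solitary; a crowd commits to building the matrix.
for n in (1, 10, 50, 500):
    print(n, biofilm_active(n))
```

The interesting design feature is that no individual cell counts its neighbors; the "moral" decision to invest in shared infrastructure emerges from a purely chemical threshold.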

The Evolution of Human Morality The Bible tells us that humans learned the difference between good and evil in the garden of Eden when they ate the forbidden fruit from the Tree of knowledge of good and evil. This displeased God, so he ejected Adam and Eve from Eden, and with this loss human suffering began. Evolutionary thinking about the origins of human morality carries a somewhat similar mixed message, that human moral intuition is an imperfect adaptation to the imperfect human condition. If we see morality as productive cooperation, then moral failures result in Social Conflict, both at the individual and group levels. We almost always cooperate with others in our close social groups, but we also have to be wary of letting these same people take advantage of us. Even as we are cooperating, we are also competing to get ahead (individual/individual conflict). Individuals and societies also need to consider the oppressive and exploitative potentials, where we may be forced to make an unreasonably great sacrifice of our own interests to get a benefit that goes mainly to other group members, and also the opposing problem of individuals choosing to favor their own interests at the cost of group productivity (individual/group conflict). Passions Within Reason examines how cooperative moral feelings and motivations could have evolved in the inevitable presence of competition. In particular, its author Robert Frank argues that the human condition causes not only the evolution of “good” moral emotions such as loyalty, but also “bad” emotions such as our disproportionate anger toward those who have harmed our interests.


We also need to consider the costs and benefits of the ways in which our group chooses to cooperate and compete with other groups. This group-level interaction can be more or less productive, just as is the case with individual cooperation and competition. Trade and cooperation toward shared goals can be win-win, where everyone benefits. At the other extreme, war is highly destructive, yet behavior during war undeniably has moral aspects such as voluntary self-sacrifice. In The Righteous Mind, Jonathan Haidt argues that we have a mechanism he calls “the hive switch” where, when our group interests are threatened, we rally to support the group, and temporarily reduce our striving to get ahead through individual competition.

Neuro-Chemical Morality Humans aren't disembodied brains (see Embodied Cognition), but the complexity of the human condition comes largely from our having complex brains that are adapted to life in a complex socially constructed world. This social world is populated by many other such beings, each with their own complex life of the mind. Human behaviors, including the moral ones, must emerge from the structure and behavior of the brain, which we can understand at the Levels of biology, chemistry and physics. If evolutionary reasoning speculates about how we came to have our moral capacities, analysis at these lower levels seeks to explain the underlying moral mechanisms. One research area is the use of functional imaging to locate the neural basis of moral reasoning (sometimes called Neuroethics). The most interesting conclusion so far is that people arrive at predictably different answers to moral dilemmas according to which brain areas they use (which type of reasoning.) The chemical basis of neural operation is also being investigated, such as in work on the social effects of Oxytocin, see Paul Zak on Trust, Morality and Oxytocin and Oxytocin changes political preferences.

Is and Ought A central issue in evolutionary ethics is the Is vs. Ought problem. Evolutionary Psychology offers plausible explanations about how our moral emotions could have come to be, as part of a story of Human Nature, but this doesn't tell us what we ought to do, only what we're inclined to do. Evolution provides a theoretical framework for explaining and predicting many human behaviors: “good”, “bad” and morally neutral. The Righteous Mind offers a convincing psychological view of the foundations of our moral intuition, but this has also been the foundation of human social behavior throughout history, in all its wonder and tragedy. Can't we do any better than this? Our intuitive morality tends to accept: ● Destructive competition between groups. It's clear that, when averaged across both sides, war is a losing proposition, at least in the short term. Seeking to favor your own group at the cost of other groups is Groupishness. For the world as a whole, groupishness is destructive in the same way that individual selfishness is destructive to that individual's social group.


● Suffering of others who are “out of sight, out of mind”. We favor our families and our friends over those we don't know. This is adaptive, and also makes practical sense, since our knowledge and influence are greatest in our nearby world, but it is at odds with a universal sense of individual worth or rights. All societies also confront issues of fairness and justice. Does it seem that the rewards and punishments given are proportionate to the contributions or offenses? Problems with fairness and justice come not so much from holes in our moral senses as from social tradeoffs. Opposing views on these tradeoffs are backed by conflicting moral intuitions, leading to political conflict. If there is a regrettable moral blindness here, it is our tendency (via in-group/out-group dynamics) to see perfect solutions when none exist. Yet to say there is a tradeoff is not to say that all answers are equally good. The quality of a particular answer depends on the specifics of the challenges that the society is facing. Unfortunately Prediction is Intractable, so progress is mostly by trying things more-or-less at random and seeing if they “work”. There is evidence we have been doing better over time (see The Better Angels of Our Nature), so there is reason for hope.

Intentional Design Daniel Dennett has developed a framework that helps to explain the philosophical basis of our approach. He observes three different viewpoints or stances from which we can try to understand the behavior and organization of a thing or person: ● We may apply physics, chemistry, etc., to understand any arbitrary object: rock, toaster or person. “This object is flat because the 10 ton weight fell on it.” This is the physical stance. ● We may realize that an object (toaster) has been designed to perform some function, and so assume that it will indeed perform as designed. “If I put in bread and push this lever, I will get toast.” This is the design stance. ● We may realize that the object (person) is an agent with a mind of its own, and has some goals that it intends to achieve. We may then expect the agent to apply whatever resources it has to realizing its intentions. “He wants to go to Wrestlemania, and he just got a new bowling ball, so he may sell his old ball to get money for a ticket.” This is the intentional stance. We'll put our spin on these ideas, and then fuse the intentional and design stances to create intentional design.

Reverse Engineering the Design Stance Dennett presents the stances as theories for predicting behavior. He observes that adopting the design stance allows a considerable reduction in complexity because we can simply assume that our toaster will make toast without having to re-derive its behavior from physics each morning, and he considers the main risk in adopting the design stance to be our disappointment when the promised behavior is not forthcoming. “Oh, no! The toaster is broken!” But for our (and his) purposes, the primary interest of the design stance is that it justifies reverse engineering: we can assume that each component is a partial solution to a design problem, can determine the purpose of that part, and intelligently speculate why the part takes this form rather than some other. “When the electricity runs through this wire, it will get hot, and the heat causes the desired browning reaction in the bread. The wire is made out of a nickel-chrome alloy because it combines high resistance, high melting point, and resistance to oxidization. A tin wire would melt before it got hot enough to make toast.” We will see that when we use the design stance to apply reverse engineering, we create risks that are much more philosophically troublesome than burnt toast. We feel it is productive to divide the design stance in two: the user stance and the engineer stance. The user takes the designer's stated intent at face value, and if the performance disappoints, then either he's using it wrong or the product is defective; in neither case does the user have much concern with how the designer applied the physical stance in his design. In contrast, the engineer is primarily concerned with how the components and organization achieve the intent and how the components function within physical and practical design constraints. This is true regardless of whether the engineer originally designed the toaster or not. Interestingly, evolution actually creates good design purely from the user stance. Evolution has no understanding of why designs work or not, it just randomly tweaks the toaster design until it finds one that works better, then makes a lot of them. Though not at all what users are hoping for, and contrary to what engineering professors profess, evolutionary design is common in the real world of engineering — not just in the sense of taking a good design and knowledgeably improving it, but also in the sense of favoring some solutions over others because they work better for unknown reasons. 
But even if the toaster designer didn't know why some decision was right, you can still profitably apply reverse engineering by assuming that he did: you may figure out a reason that the designer himself never knew. The design stance is still productive because we know the designer's intent was to produce a functional toaster. We have exactly the same situation when we apply the design stance to the evolved designs of biological species.

The Devious Intent of the Design Stance Although it is most clearly explained using human-designed objects, the design stance is intended to be applicable to the results of evolution, and this is our primary interest. Why is it productive to apply the design stance to objects that don't have an intelligent designer? Our argument is that the applicability of the design stance doesn't require an intelligent designer, it only requires that there is a definite goal and a reasonably effective design process. Evolution does have a definite goal (survival and reproduction), and it has clearly been reasonably effective in achieving this goal. In reverse engineering we are trying to infer the designer's subsidiary intentions from his overall intention by applying our assumption that the designer pursued his intention in an effective way. If we use the design stance to justify reverse-engineering of the products of evolution we may be led into error. Evolution is reasonably effective, but it does not generate optimal design, so it may be that there is no functional reason why something is the way it is—it need only work well enough. There has been considerable controversy over the pervasiveness and harmfulness of this error in evolutionary theorizing, and there is a clear political subtext to this dispute, especially with respect to cultural evolution (see Just-So Stories.) However, this problem is intrinsic to reverse-engineering in general, and not unique to the process of evolution. Just as the true explanation for an evolved feature may be historic or developmental, features in a human design may also be there for historic or manufacturing reasons. You might suppose that those two holes in the heater-plate have something to do with managing convective heat flow, but they're really there so that the assembler can put his screwdriver through them. In an important way, reverse-engineering evolved objects is easier than reverse-engineering man-made objects, since in the case of evolution we are sure of the overall goal, whereas an antique kitchen gadget may be a complete mystery.

The Devious Design of the Intentional Stance The intentional stance is designed to be applicable to objects that it would be philosophically controversial to attribute “real intentions” to. In particular, it can be applied to chess-playing computers, fish or even plants. It is so productive to regard these things as having intentions that even a philosopher realizes that to function in the world he must act as though these intentions were just as real as a rock. Dennett says that these intentions are just as real as the physical center of gravity. Though there is no unique material property at this particular location, the center of gravity summarizes important physical properties of the object as a whole, providing a simpler way to make correct predictions about the object's behavior.

A Classification of Objects? It would be silly to apply the design stance to an object that was not constructed to have a particular function or to apply the intentional stance to an object that doesn't show any purposeful behavior. We don't need to infer that a rock has an intention to sit still to predict its behavior, and unless we know that a rock has been cunningly carved and artificially weathered to look its best in a rock garden we can't apply the design stance to explain its structure with respect to a design intent. So we can tentatively divide objects into three types according to the highest stance that can be usefully applied to them: an object is intentional, designed, or merely physical. Furthermore, since we consider evolved objects designed, and we will likely never run into a Boltzmann brain, there is a subset relation: all intentional objects are designed, and all designed objects are physical. So intentional objects can be examined on three levels, while designed objects can be examined on two. Clearly there is some gray area between intentional and designed objects, which is the core of the philosophical controversy over the reality of machine intelligence. We can regard a designed object that appears to exhibit intentional behavior as merely exhibiting a crystallized form of the designer's intention, and not having “real intention of its own”. Many would regard thermostats and trees in this way.

Interactions Between the Design and Intentional Stances In both the design and intentional stances we may be wrong about the designer's or agent's intentions. When we adopt evolutionary analysis we are assuming that successful reproduction is the ultimate goal. This is particularly controversial when (as in Evolutionary Psychology) we say that evolutionary success is the intention underlying almost all human activity. In the intentional stance (as in the design stance) there is an issue with effectiveness. If the agent prefers ineffective means for achieving its intentions then we will make false predictions of its behavior. This is not as bad as it seems because we can appeal to the design stance to argue that the agent has been designed to be effective in pursuing its intentions:
● If the agent is artificial, we might have the designer's word that he intended the agent to effectively pursue its artificial intention, and his advertisements that he succeeded. If we believe him, then applying the intentional stance to the artifact is likely to be productive.
● If the agent is evolved, then we have reason to believe that the agent's species has evolved a tendency toward intentions that promote survival and reproduction and a reasonably effective capacity for achieving those intentions. We can't rule out non-adaptive intentions or the ineffective pursuit of those non-adaptive intentions, but it is reasonable to assume that this non-adaptive behavior is not overly harmful to reproductive success.
There is some potential for confusing meta-circularity here, because all intentional agents are designed and all designed objects have an intentional designer (or, with evolution, a process that acts as though it has an intention, so we can productively apply the intentional stance and speak of evolution's intention.) Something particularly interesting happens when we apply the evolutionary design stance to humans, because humans can report their subjective impression of their intentions. If we ask what they intend to do and why, we are almost always told that their intentions originate from subjective motivations such as emotions, morality, beauty or self-improvement, and not from the presumed design intention of reproductive success.
Does this mean the evolutionary design stance doesn't apply to humans?

Intentional Design 101 Suppose you are the designer of an intentional agent. What intentions would you design the agent to have? The most obvious approach is transparent intentional design: give your intention to your agents. However, if you have a good idea about the best way for the agents to achieve your intention, it would make sense to give the agents a multi-part strategy, where each part of the strategy has a specific intention and hardwired capacities for evaluating whether that intention is being successfully pursued. That is, instead of relying on the agent's brilliance to figure out a satisfactory solution, we can sketch out the shape a solution will take. For example, the intention of a chess-playing program is to put the opposing king in checkmate while protecting its own king from the same fate. But we could give our agent a head start by telling it to “avoid material loss” and “gain positional advantage”, and give it instincts that a knight is worth three pawns and that the center of the board is a good place to be. We would likely find that this chess program with a hardwired strategy would beat the pants off a program that started out with a blank slate and had to figure everything out from first principles. We needn't explicitly give our intention to the agent at all. We could just say that it's a really bad position when your king can't move and a really good position when their king can't move. The intentional process just chugs along avoiding material loss and gaining positional advantage, then at some point a hardwired test detects checkmate and declares the game over. This is opaque intentional design. See Intentional Opacity.
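The hardwired strategy above can be sketched in a few lines of Python. This is a hypothetical toy, not any real chess engine: the evaluator never mentions the designer's actual goal (checkmate); it only scores the hardwired proxies of material and central position.

```python
# Hardwired "instincts": piece values (a knight is worth three pawns) and a
# bonus for occupying the center. The true design intention -- checkmate --
# appears nowhere in this evaluation; that is the opacity.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}  # the four central squares

def evaluate(board):
    """Score a position for White: material balance plus a small center bonus.

    `board` maps (rank, file) -> piece letter; uppercase = White, lowercase = Black.
    """
    score = 0.0
    for square, piece in board.items():
        value = PIECE_VALUES[piece.upper()]
        bonus = 0.25 if square in CENTER else 0.0
        score += (value + bonus) if piece.isupper() else -(value + bonus)
    return score

board = {(0, 4): "K", (3, 3): "N", (7, 4): "k", (6, 0): "p"}
print(evaluate(board))  # 2.25: a centralized knight (3 + 0.25) against a pawn (1)
```

An agent maximizing this score "chugs along" gaining material and position, while a separate hardwired terminal test would detect checkmate and end the game.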

Is vs. Ought You may have noticed how easy it is to slide from “there is such a thing as human nature” to “humans naturally behave in a certain way”. These statements seem semantically equivalent, but for many people, to say that something is “natural” is almost the same as saying that it is virtuous (see the Appeal to nature.) In this worldview, saying that “selfishness is natural” is a logical contradiction. It cannot be true. Philosophers have long noted the difference between saying that this is the way things are (these are the facts) and this is what we should do (see the Is and Ought problem). But is it entirely unsound to move from is to ought? If we can't use what we know about the world, about ourselves, and about how we got this way, then we have no rational basis at all for moral argument. Consider the observation that the human hand happens to be peculiarly well-formed to be used as a weapon (by making a fist, see Fighting Shaped Human Hands). This has the obvious evolutionary explanation that at some point in our history there was a significant advantage to having a powerful punch. People who could form a closed fist where the thumb buttressed the fist were more successful at surviving and reproducing than those who (like non-human primates) couldn't punch properly. You might question the validity of this sort of reasoning in general (see Just-So Stories) or based on the details of this specific claim, but for the moment suppose that you accept this as true. Where does this leave the moral status of punching people? “Your honor, people of the jury, this man's hand was made for punching.” This is ridiculous as a legal defense of aggressive behavior, and is equally suspect as a moral argument. Yet to deny the importance of violence in the formative environments for human nature is equally unreasonable. A capacity for violence and an inclination to be violent when our interests are threatened is part of our behavioral legacy. 
The appeal of taking what you want by force is also sufficiently obvious that it can be rediscovered by each generation of toddlers.

Level Confusion Level confusion is confused thinking that arises from examining a system at different Levels without realizing that you are doing so or what the implications might be. Here are some common areas of level confusion:
● Mind vs. Body and reality: the X proved real fallacy. “I can imagine whatever I want, so my thoughts aren't real, but brain scanners can tell what I'm thinking by real changes in my brain. It's all so confusing…”
● Determinism vs. Free Will and moral responsibility. It is an intuitively appealing argument that if mind arises from a deterministic physical process, then there is no free will, and no moral responsibility. “If my behavior is determined, then I should stop worrying about making decisions because I don't have any choice, and I can do whatever I want without taking any responsibility, but then I must have been destined to believe in determinism, and if I were destined to believe in free will, would my responsibility be different? It's all so confusing…”
● Subjective perception vs. the physical world and Naive Realism. “I just crashed into a car that wasn't there when I looked before, but it must have been there because I crashed into it. It's all so confusing…”

Sources of Level Confusion Clearly this thinking is confused, and something must be wrong here. Confusion can result from the failure to understand that:
● The map is not the territory! (Reality vs. Representation and hardware vs. signal) The brain is real and we can make factual statements about its functioning, but this tells us nothing about whether thoughts are true or real, or whether there is free will. Yet we must use abstraction (maps and categories) because the world has such a wealth of irrelevant particular details, most of which require considerable patience and sophisticated measurement merely to observe.
● More is different! Because of Emergence, simple thought experiments with commonsense interpretations can lead us far astray. Reality and the brain are not simple. Any system that consists of a huge number of complexly interacting parts can behave in unpredictable, novel ways.
● You don't know what you think! The nature of Consciousness depends on both Representation and Emergence, but the design of our minds is evolutionary, proceeding by incremental modifications from an existing plan, and optimized to generate Adaptive Behavior. For The Cultural Animal it became necessary to understand ourselves to some degree, but it wasn't necessary (or even desirable) for our self-understanding to be entirely accurate, so it isn't. See Intentional and Representational opacity and The User Interface Analogy.

Catataxis Catataxis is a name for level confusion coined by John Brodie Donald (see Catataxis.) He offers these observations about emergent levels and their associated analytic levels:
1. Virtue reverses at a catataxic boundary: Another way of saying this is that what is good for the individual may be bad for the collective (and vice versa). [See Evolutionary Ethics]
2. Conflict below creates stability above: The more disagreement there is on one level, the more likely there is to be calm and stability on the level above. This is best summed up in the saying “still waters run deep”. A calm surface often masks a roiling torrent underneath. A stock market is a good example of this.
3. In the end, quantitative change becomes qualitative change: This is a more sophisticated way of saying more of the same is different. [See Emergence]


4. Today's groups are tomorrow's individuals: Over time, things tend to get bigger and clump together. The river of history is a grouping vector. [See Meta-Evolution]
5. Categorization destroys information: Once the scale increases, the only way a human brain can function is by categorizing things. [See Levels, Modularity]

What to Do? While confusion can easily result from crossing levels without noticing it, we don't say that crossing levels is always a mistake. What takes place at those boundaries is often quite interesting, both practically and theoretically:
● Interactions between individuals and social groups (emergent levels) are particularly important to humans. Mismatched interests create individual/group conflict, and our moral sense has evolved as a response to this conflict.
● The relationship between the physical world, our perceptions, and our conscious thoughts has fascinated humans ever since we became conscious (see Mind/Body Dualism, Level Map).
● The brute physical world is often very predictable: what goes up must come down, and so on. In contrast, living things, and especially people, are quite unpredictable. This is a big reason why Mind/Body Dualism is so intuitively appealing, and why determinism seems inconsistent with free will.
● Crossing the analytic levels between sciences is often highly productive, and is the entire foundation of this Wiki.

Mind/Body Dualism Since the rise of science we've come to see the soul as a purely religious concept, but it arises naturally from how we perceive the world around us (see Descartes' Baby). This Mind-body problem has been debated since the beginning of philosophy. Dualism sees the mind (or soul) as a real thing, distinct from the body. A major reason why there has been little progress in understanding (until recently) is that how the mind actually works is very different from how our mind seems to work (see The User Interface Analogy, Unconscious). Mind/body dualism just makes sense. Descartes' Baby argues that young children instinctively develop a dualistic theory of mind. Why is this? You could say that dualism is the design philosophy of the user illusion. By means of Level Confusion, dualism has become thoroughly entwined with views about whether mental entities such as thoughts, feelings and intentions are real or not. Our mind is implemented by our brain (and body), so it cannot exist without them, but mind is an emergent property which is not nearly as constrained by its physical substrate as traditional arguments have supposed. This ultimate dependence of mind on a real physical substrate proves nothing about the reality of mental phenomena.


Obscurantism David Stove: What is Wrong with our Thoughts? LessWrong https://en.wikipedia.org/wiki/Obscurantism#Lacan To his students' complaint about the deliberate obscurity of his lectures, Lacan replied: “The less you understand, the better you listen.” In the 1973 seminar Encore, he said that his Écrits (Writings) were not to be understood, but would effect a meaning in the reader, like that induced by mystical texts. Pseudophilosophy#Romanticism According to Bunge, pseudophilosophy is nonsense parading as deep philosophy. It may have existed since Lao-Tzu, but it was not taken seriously until about 1800, when the Romantics challenged the Enlightenment. By giving up rationality, they generated a lot of pseudophilosophy.

Reality Reality is that which, when you stop believing in it, doesn't go away. —Philip K. Dick
We need some understanding of what it means for something to be “real”, even though this is a philosophical tar-pit (see Reality.) We'll take the reality of physical phenomena as a given, but the reality of mental phenomena is unclear—if they are real at all, they aren't as real as physical phenomena. Mental phenomena don't automatically become real simply because they arise from the physical structure of the brain. There is a close relationship between Truth and Reality. Truth (or falsehood) is a property of statements, while reality is that which actually is. In the most intuitive definition, a statement is true if it corresponds to or is consistent with reality. We discuss reality more in relation to subjective experience, while we emphasize the formal and contingent nature of truth and our process of assessing or seeking truth, especially through Science. See also Phenomenology and Ontology in philosophy.
Somewhat along the lines of the Dick quote above, we feel that a key characteristic of reality is its failure to conform to our wishes. It is something we have to “deal with”, either by engaging with it according to its rules, or by just learning to live with it. Real phenomena also tend to have some sort of consistent rules of behavior; we can often predict what will happen, or failing that, explain what caused an event after the fact (but see Prediction is Intractable.) A philosophy of reality that meshes well with our position is Pragmatism, which more or less says: instead of trying to invent some criteria for what is really real, let's just consider something to be real if it is useful to do so. Dennett argues that the self and intentions are as real as the physical center of gravity (see Intentional Design.)
There is a reality gradient from the physical world up to the sometimes fanciful level of storytelling (see Level Map), and this parallels the emergence axis. We could say that unreality is an emergent phenomenon. Our evolved instincts are more real than some other mental phenomena because they are something that we have to “deal with”, and can't change simply by wishing (see Reprogramming the Mind.) These instincts manifest both as cognitive biases and emotions.


The reality, legitimacy and validity of our emotional responses is a key part of our interpretive position (see Smarty-Pants Critique.) This is what it is like to be human, and we cannot be any other way. Though our feelings and motivations are ultimately explicable by our evolutionary heritage, this makes little human sense because our emotions and motivations are opaque to us (see Intentional and Representational opacity.) Our own emotions are something that we have to “deal with”—we can't choose how to feel, though we can manipulate our mood and feelings by what we do and how we think. Emotions are socially real: we have to accept that everyone's behavior is greatly influenced by their emotions and subjective judgments. We have no direct control at all over other people's subjective experience, but we must consider how they will judge and emotionally respond to our behavior.

Skepticism Is this a productive exercise of skepticism? Skepticism is always appropriate concerning one person's claim. People are fallible. Scientific and empirical methods have shown that they allow progress in the development of knowledge in spite of individual fallibility. Paradoxically, it's likely that on any given issue there's some individual who's closer to the truth than the consensus is. This is why individual diversity is valuable, especially when it comes to long tails. Extreme views are almost always wrong, but when they're right, they're very very right in comparison to the middle path. It's important to understand particular common failures of human reason, and to base skepticism on this knowledge. We strongly tend to over-perceive pattern and causality. We also (with good reason) greatly value a story that explains what we know. We're really bad at assessing a story against all possible stories we haven't heard and all the evidence we don't know. We do tend to falsely think that we understand, based on quite flimsy evidence. Science isn't immune to the narrative fallacy, but a good scientific theory is falsifiable. Science is interested in determining causal relations, and has had some pretty good success in doing so. When we consider a unique historical event (World War I), the skeptical position that such events don't have a cause is defensible. We can imagine causal theories and the associated counterfactuals, but the “all other things equal” condition can never be met. For complex (but less unique) things, like murder, it becomes possible to infer patterns and test what causal relations are consistent with what we see. Perhaps we can even make useful theories about war in general. For science, it's natural to consider causality on attributes of potentially isolated systems. This allows a bit more precision in what “all other things” means and gives a strong suggestion of how an experiment could be structured.
When something important happens, it's natural and adaptive to award praise and assign blame. We'd like a story about how we're going to bring it under control, and that requires causal explanation. It's adaptive even if we often don't get it right. People are selective in their use of skepticism, of course (confirmation bias). I can come up with many stories of why I'm skeptical about school and not about heritability. One point about skepticism, or criticism in general, is that criticism without positive suggestion is much weaker than suggesting a superior alternative. Early theories are certainly incomplete, possibly entirely wrong. That a theory can be criticised doesn't invalidate the field.


The success or failure of a scientific field isn't determined by its success in answering criticism of its philosophical foundations; it's whether the theory “works”. Physics is highly successful even though it's founded on the unwarranted and unprovable hypothesis that there are universal physical laws. It's mainly when a new theory gains mind share that approaches are replaced and theories abandoned. Sometimes an entire field or sub-area can fade if it ceases to be productive. It's unlikely that this will be caused by a new criticism, but old criticisms that dogged the field all along may seem compelling in hindsight. This could happen to string theory in physics, which has so far failed to generate testable predictions, and likewise to heritability research or evolutionary psychology. Some useful criteria for assessing theories:
● Occam's razor: a preference for simple theories, as simple as possible (and no simpler).
● Preference for approaches that have been productive in the past, or in other areas (analogy).
● Science: testable theories and research programs. Explanatory and predictive power.
● Causality: can we manipulate? Absence of correlation does disprove a theory of substantial, simple causation. To oversimplify, ~correlation → ~causation.

Truth There is a close relationship between Truth and Reality. Truth (or falsehood) is a property of statements, while reality is that which actually is. In the most intuitive definition, a statement is true if it corresponds to or is consistent with reality (see Correspondence Theory of Truth.) In our discussion of truth we emphasize the formal and contingent nature of truth and our process of assessing or seeking truth (see Epistemology), especially through Science, while we refer to reality more in relation to subjective experience. Our philosophy of truth is empirical and pragmatic. Because truth is a property of statements, it assumes the existence of language and meaning (see Linguistics and Semantics.) The truth of a statement is implicitly dependent on many things left unsaid, which we must fill in from our knowledge about the world. See The Tree of Talking. In addition, like most human activities, our search for truth is a social activity. One of the characteristics of social in-groups is the general acceptance of particular truths and of rules for evaluating truth. If knowing the truth is useful (as Pragmatism says), then we would expect that groups with true beliefs would be more successful than those with false beliefs. That is, finding the truth is Adaptive Behavior in Cultural Evolution. A plausible Story is almost always a component of human beliefs, scientific or otherwise. As soon as an effect is proposed, people offer explanations. It is highly subjective which story we judge to have “the ring of truth”, and this judgment is shaped by the values of the communities we participate in. Humans are compulsive creators of meaning and story, and once we notice a pattern, we are rarely satisfied by the explanation that it is a meaningless coincidence. It is likely that this bias is Adaptive Behavior because the cost of tentatively holding a belief is low, and this belief motivates us to watch for evidence that tests the belief.
A correct story is valuable because it may allow us to predict or even manipulate what happens. The cost of an incorrect story is often low because people usually conform to social norms of prudent behavior (see Conformity Bias.) Implicitly we acknowledge that our stories may only be relevant in some limited context. See also: Story, A Priori/A Posteriori, Just-So Stories.

Virtue Ethics Virtue ethics is the idea that rather than trying to understand virtue or morality as some sort of static intellectual rule-base, and studying it through examination of abstract moral dilemmas, we should strive to become virtuous, improving our character so that it becomes our inclination to act as a virtuous person would act. We find this approach congruent with our emphasis on personal wisdom and action over academic discourse and our understanding of virtue as arising from our innate non-verbal capacities for moral judgment. See Virtue ethics. We feel that the idea of cultivating personal virtue is a worthy form of self-development, and that it is much more consistent with reality than most of the ethical theories developed by academic philosophy. Two difficult realities of workaday morality vs. philosophic ethics are that psychologically our moral sense is intuitive (see The Righteous Mind) and that our moral character is constituted of countless small acts, few of which rise to the level of the moral dilemmas studied in philosophical ethics. By concentrating on what we can talk about, ethics risks confusing moral argumentation with actual moral behavior. See Evolutionary Ethics.

Why? Why is the sky blue? Why do I have to do my homework? Why do bad things happen? Why asks for a particular kind of Story, an explanation of behavior:
● Properties coming out of the substance or nature of the thing,
● The chain of events that caused the thing to come about,
● The intentions related to causing the thing.
Why is a very important word in this Wiki.

Physical Chaos Physical chaos is a body of ideas and theory related to the observation that when we model real world phenomena with realistic levels of complexity and nonlinearity, we find that the behavior over time of the system is unpredictable even though it is in principle deterministic. It is unpredictable because as time progresses behavior becomes exquisitely dependent on what were initially negligible influences. The canonical example of this is the Butterfly effect, where the flap of the wings of a butterfly in Africa can cause a tornado in Kansas two months later.
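This sensitive dependence on initial conditions is easy to demonstrate numerically. Here is a small illustration (our own, not from any cited source) using the logistic map x → r·x·(1−x) in its chaotic regime (r = 4): two trajectories that start a billionth apart soon disagree completely, even though every step is perfectly deterministic.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a negligibly different start

# The initially invisible difference roughly doubles each step, so after a few
# dozen iterations the two deterministic trajectories are unrelated.
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.6f}")
```

The "butterfly" here is the 1e-9 perturbation: negligible at first, dominant within a few dozen iterations.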

Emergence


Emergence means various things to various people, but the basic idea is that complex behavior can emerge from simple rules or physical laws when enough of these simple processes are running in parallel and interacting, and that even the general form of this complex behavior is not at all obvious from examination of the rules. For example, from atomic physics emerges chemistry, from chemistry life, from life mind, and from mind culture. Another characteristic of emergence is that order emerges out of chaos, which thermodynamically requires an energy input. This much is not in dispute. The question is: what does it mean? In particular, what does it mean for the reductionistic approach to developing scientific theory and insight? Is all of science a big mistake, and do we need a new holistic theory of everything? Our take on this issue is that:
● Reductionistic explanations clearly have great power, and are by no means wrong, but their power is limited by emergence.
● It is also fruitful to develop sciences of complexity such as biology, psychology and sociology/anthropology, but results inevitably become messier and more tentative: soft science.
● It is a grievous error to say that a lower-level explanation is “more real” than the higher-level phenomenon that it explains.
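A standard toy demonstration of this (our illustration, not from the wiki's sources) is Conway's Game of Life. The rules mention only a cell and its eight neighbors, yet a "glider" emerges: a five-cell shape that travels across the grid, a behavior you would never guess from reading the rules.

```python
from collections import Counter

def step(live):
    """One Game of Life step. `live` is a set of (x, y) live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and is already live.
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

# The glider: after four steps the same shape reappears, shifted one cell
# diagonally -- motion that exists only at the level of the whole pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

"Motion" is nowhere in the rules; it is a property of the pattern as a whole, which is the sense in which more is different.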

Reality of Emergent Phenomena It is intuitive, and seems to be a human cognitive bias, that when anything has been explained (even wrongly so), the explained phenomenon becomes less significant and less real than the explanation. Although this tendency is the cause for and justification of theory-building, and seems to serve us well in daily life, it has become troublesome as we understand the world at ever-finer levels of detail. This problem is particularly clear in Evolutionary Psychology and the reactions to it. As we start to understand human motivations and feelings as unconsciously pursuing reproductive success, we explain, and therefore seem to demean, something which is extremely real to us. Is love real or not? Love is real, but it predictably appears in particular ways which are well explained as strategies for pursuing reproductive success.

Homeostasis Homeostasis is the way in which living things maintain their function by controlling their internal environment in the face of changes in the external environment (see Homeostasis). See also Evolutionary Conservation.

Prediction is Intractable It is an awkward fact that, except for extremely simple or routine tasks, it is impossible to make accurate predictions of the difficulty of achieving a goal. It usually takes more time and money than expected, and often we give up having achieved nothing. Similarly, when predicting things that result from human social behavior, such as stock market moves or movie sales, we find that we can do no better than predicting that the stock price will be the same as yesterday or that the movie's sales will be the same as a similar movie's. In these areas there are people who claim to be experts, but as a whole they do no better than the trivial prediction of sameness, and often somewhat worse. The practical intractability of prediction is a straightforward consequence of Physical Chaos and the dynamics of complex systems. The Black Swan has much to say on the practical difficulties of prediction and the failure of experts to do any better than cabdrivers in predicting economic or political events. Gut Feelings has some wonderful examples of how financial experts do worse than chance and how we can do better than experts in sports or finance by averaging man-in-the-street opinions. A subjectively similar phenomenon is that when we do achieve our goal we usually find that our resulting happiness is much less than we imagined, and our personal interpretation of the meaning is much different than we expected. We consider these paradoxes of Affective Forecasting elsewhere because they result in part from our failure to understand how our motivational systems work.
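Why averaging man-in-the-street opinions can beat individual judgment has a simple statistical core: independent errors cancel. The following is a hypothetical toy simulation (not an example from the books cited), where each of 500 guessers estimates a true value of 100 with large independent errors.

```python
import random

random.seed(1)                 # fixed seed so the demonstration is repeatable
truth = 100.0
# 500 independent, noisy individual estimates of the true value.
guesses = [truth + random.gauss(0, 30) for _ in range(500)]

crowd = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - truth) for g in guesses) / len(guesses)
print(f"crowd error:              {abs(crowd - truth):.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```

The crowd's error shrinks roughly as 1/√N while the typical individual's error stays large; the catch, of course, is that real opinions are often correlated, and correlated errors do not cancel.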

Positive Illusions We have a difficult relationship with the fact of predictive intractability. On one hand, it is something that “everyone knows”, but on the other hand, we persist in acting as though it isn't so. This is closely tied to our Positive Illusions. The beliefs that we are unusually competent, can beat the odds, and have more control than we do encourage us to believe that we can make accurate predictions even if nobody else can. It seems that we cling to our positive illusions because they foster an unjustified sense of optimism which helps us to persist in the face of uncertainty. This clueless persistence is the best we can do because we really don't know whether we will succeed or not (see Sunk Costs.) In other words, positive illusions are adaptive because they aid in risk-taking.

Incorrigibly Confident We are incorrigibly persistent in making and believing predictions in spite of the repeated failure of past predictions. Nassim Nicholas Taleb gives a wonderful analysis of this habit in The Black Swan. In particular, we like his idea of the narrative fallacy. When an unexpected event happens, we come up with a Story (theory) explaining why this event happened and why it was not expected, and then we somehow feel that we truly understand why the event happened. Also, because we understand where we went wrong and have learned our lesson, that error will never happen again, so our predictions will never be wrong again. Of course, nobody says that they follow this silly reasoning, but most people act this way over and over again. In our retrospective analysis we need to find an explanation and determine blame; then we are satisfied and go back to business as usual. Taleb is clearly exasperated that his repeated cries that “the Emperor has no clothes” seem to fall on deaf ears.

The Best We Can Do?


So perhaps all this is true, but what should we do about it? In many areas our predictions are better than chance, so it is worth making some effort at predicting. In unpredictable areas with potential for dire consequences such as economic policy, politics and the environment, it would be good to cultivate some humility and a sense of caution. In pathologically unpredictable areas such as investment decisions we should reduce our expectations and not pay for lots of expensive advice.

Sensory Limitations Our senses disregard almost all of the virtually infinite amount of information that could be sensed. How do we know this, and why is this so? Let's consider only vision. People have estimated the human visual bandwidth at around 10 megabits/sec by considering things such as how many nerve fibers there are in the optic nerve and how much information each fiber can transmit. How much is there out there to sense that we are missing? Let's consider only seeing things by reflected sunlight, and not thermal night vision, radio or gamma rays. Sunlight brightness falls to less than 10% of the peak below 300nm and above 1500nm. The response of the human eye is about 380nm to 780nm, which is really a pretty good match. This is not a coincidence, because the eye evolved to sense reflected sunlight. We could push a bit into the ultraviolet and a full octave into the infrared, giving a potential bandwidth of 1.6×10^15 bits/sec. This bandwidth allows for all possible color and any changing over time (such as motion). We could potentially receive this much at each distinct point that we can see. While it would sometimes be useful to be able to see miles away, let's limit ourselves to an eye the size of the human eye. The optical (diffraction limited) resolution for a 1cm lens at these wavelengths over a 90 degree field of view is about one gigapixel. Multiplying the spatial and time/spectrum resolution gives about 1×10^24 bits/sec. It'd also be useful to have at least ten eyes for seeing behind your back, etc., so let's make that 1×10^25 bits/sec, which is 1×10^18 times more than the actual human visual bandwidth, a vast difference. Why are our eyes so much weaker than this? Biological possibility is one constraint: this is a theoretical limit that far exceeds the capabilities of the best camera made, let alone what could be achieved using biological goop.
But by making the eye a bit bigger, adding more different color receptors, and packing the entire field of view as densely as human central vision, it would be possible to increase the bit-rate 100 times or more. The reason that we have the eyes that we do is that Evolution has made a tradeoff between what it would cost to have better vision and what the benefit would be. We have pretty much the best eye that we can afford. Better eyes didn't give any significant advantage for staying alive on the savannah. A large part of the cost of better eyes would be in the need for more brain to process the data. A particularly clever economy in the human visual system is giving high resolution in the center (the fovea), and then moving the eye around to build up a larger mental image. The eye doesn't work at all like a camera, taking in an entire scene in one go. Instead it is more like a paintbrush used to fill in the holes in a virtual canvas in our mind. Visual perception and attention are extremely complex brain processes that take place almost entirely automatically and unconsciously. The result is a user illusion so compelling that most people are naive realists who imagine that they directly see the real world.
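As a rough check on the figures above, the arithmetic can be spelled out. This is a sketch using the assumptions already stated in the text (a 300–1560 nm band, Nyquist-rate sampling at one bit per sample, a diffraction-limited gigapixel, ten eyes, and a 10 megabit/sec actual visual bandwidth):

```python
c = 3.0e8                          # speed of light, m/s
lam_lo, lam_hi = 300e-9, 1560e-9   # band: a bit of UV to one octave of infrared

# Frequency span of the band; Nyquist gives ~2 samples/sec per Hz of
# bandwidth, so at 1 bit/sample the per-point rate is about twice the
# bandwidth in Hz.
band_hz = c / lam_lo - c / lam_hi
per_point = 2 * band_hz            # ~1.6e15 bits/sec per resolvable point

pixels = 1e9                       # diffraction-limited gigapixel over 90 degrees
one_eye = per_point * pixels       # ~1.6e24 bits/sec
ten_eyes = 10 * one_eye            # ~1.6e25 bits/sec

actual = 1e7                       # estimated human visual bandwidth, bits/sec
print(f"per point: {per_point:.1e} bits/sec")
print(f"ten eyes:  {ten_eyes:.1e} bits/sec")
print(f"ratio vs actual: {ten_eyes / actual:.0e}")
```

The ratio comes out around 10^18, matching the "vast difference" quoted in the text.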

Education Why does our education system work the way it does? How can we improve it?
● Why do we teach so much material that we forget because it's useless? (Arithmetic, history.)
● Why the emphasis on academic subjects over vocational or practical ones (personal money management)?
● Why so much emphasis on memorization and content over skills?
● We say we value critical, creative thinking, but we mostly don't directly teach it.

Rational Analysis Whether we are analyzing an existing educational system or designing one, there are two primary questions:
● What do we teach? (curriculum)
● How do we teach? (methods)
Any explanations or justifications of our answers to the primary questions will rest on our answer to the ultimate question:
● What is the function or aim of education? (policy, philosophy)
While this framework is a necessary part of any rational argument about education, education is so enmeshed in history and in other aspects of a culture's lifeways (economy and social structure) that it is very difficult to choose educational practices based on abstract principles. Main article: Educational Philosophy See also: Education and Social Structure, Methods of Education

History and Cross-Cultural Comparison Comparison of current western educational systems to those in other times and places is very valuable. Where there are similarities, it suggests a widespread need or constraint. Differences are especially important because they help us to move beyond unthinking acceptance of the current system. History is also an important explanation, since education tends to adopt and perpetuate practices of other cultures. Main article: Anthropology and History of Education

Cultural Evolution Education is a fascinating topic for this wiki because it seems like a wonderful example of the process of Cultural Evolution. The basic concept of the Darwinian cultural evolution that we endorse is that cultural institutions (such as education) evolve by a process which is a generalization of biological Evolution.


Cultural evolution sits somewhere between history and rational analysis. It understands practices such as education as having arisen by a historical process of descent-with-modification. Modifications are imposed by humans, who presumably had their reasons, but the ultimate justification is that the practice has to work–it has to implement a social function. It is quite possible for a useful practice to be justified using a weak or incorrect explanation.

Education Economics Education economists have proposed two models of education: the screening/signaling model and the human capital model. It's obvious that intelligent, articulate and skillful people tend to have spent a long time in school, but it could either be that completing an education builds one's ability, or that having innate ability enables one to complete an education. Although the human capital model is far more popular with educators and the general public, screening and signaling roles can't be entirely ruled out, and some clever economic studies, making use of natural experiments such as scholarship rule changes, support some screening and signaling functions. Inconsistent results may also mean that the importance of these effects varies by country and academic specialization. What is the cost of education? Not just the substantial dollar costs, but the opportunity cost of having the first 21 or more years of life devoted to something that isn't itself productive. At the same time as these concerns about primary and secondary education, people have been noticing that a degree is no guarantee of a good job, and that the increasing number of people going to college and getting degrees is reducing the value of the credential (see Higher Education Bubble). From the sorting viewpoint, this is unsurprising. The Economist periodically assesses whether higher education pays off when you consider the direct costs and lost wages. They still find a benefit, but it isn't as large as it used to be, partly because higher education costs have been increasing at well above inflation for decades. If there is any social inequality, and educational success has any effect on access to elite roles, then education will always have a sorting function, and will be competitive. This purely competitive aspect is the “I win/you lose” logic of a Zero Sum Game.
Whether competition is good or bad is another argument (see competition), but this does cast doubt on the simple story that increasing education is a win-win situation (or positive sum game). If a society encourages parents to pour their resources into education fever to help their children compete, then any overall benefits are side effects of the status competition. Education fever is more pronounced in South Korea and China than in the US (see The Hidden Cost of Education Fever), but it exists in the US as well. Of course, a story of general benefit arising from individual competition is a common motif in economics. You could argue that the belief that there is a shortage of “good jobs” is like the Lump of Labor Fallacy. If education actually increases practical competence, then you would expect that increasing education would lead to economic growth, creating more “good jobs” for the newly educated. Yet this argument ignores (as economists tend to) the fact that people clearly care about their relative wealth and status more than their absolute wealth (see Reasons For Valuing Relative Wealth). A rising tide lifts all boats, but the yachts are still taller than the rowboats. People will still assign winners and losers and want to be the winner.
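The "does it pay off" question is, at bottom, a present-value comparison: the wage premium earned after graduation against tuition plus wages forgone while studying. The sketch below illustrates the shape of that comparison; the function, the discount rate, and every dollar figure are hypothetical placeholders, not The Economist's actual method or numbers.

```python
def degree_net_value(tuition_per_year, forgone_wage, wage_premium,
                     years_of_study=4, working_years=40, discount=0.03):
    """Net present value of a degree: discounted wage premium earned
    after graduation, minus discounted tuition and forgone wages."""
    cost = sum((tuition_per_year + forgone_wage) / (1 + discount) ** t
               for t in range(years_of_study))
    benefit = sum(wage_premium / (1 + discount) ** t
                  for t in range(years_of_study, years_of_study + working_years))
    return benefit - cost

# Hypothetical example: $30k/yr tuition, $25k/yr wages forgone while
# studying, $20k/yr graduate wage premium over a 40-year career.
print(f"net value: ${degree_net_value(30_000, 25_000, 20_000):,.0f}")
```

Even this toy version shows why the answer is sensitive to rising tuition: every dollar of cost lands up front, while the premium is discounted over decades.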


Two economic trends that reinforce prolonged education with sorting and signaling functions are:
● Outsourcing: in paternalistic and apprenticeship systems, employers took on a risk that their investment in the practical education of an employee might not be repaid. Given higher job mobility today, employers are increasingly intolerant of any on-the-job learning curve. This likely explains increasing vocational higher education such as business degrees. Even though it's not equivalent to actual experience, it's sufficiently relevant that it makes sense for employers to demand it.
● Large employers are bureaucracies facing much the same problem as the Chinese emperor. That is, they also need a formal sorting mechanism to identify competent employees that isn't easily subverted by favoritism and other sorts of self-serving collusion.

Higher Education Managing a premier university has notable similarities to managing a luxury brand such as Rolls-Royce or Rolex. A great deal of the value of a premier degree lies simply in its rarity (a positional good), but it is important to maintain a perception that the quality is superior. A university does indeed sell to students, and this has (for example) clearly driven grade inflation and increasing student-life amenities. However the university is also marketing its image as turning out a high-quality student product. Although this can be achieved to a large degree simply by selective admissions, it is necessary to give some attention to the educational process, especially in technical areas where some significant fraction of the subject material may actually turn out to be relevant in later life. There is presumably room in the market for schools to differentiate themselves by how hard they make their students work, but it is also quite possible that a school might follow short-term financial incentives to admit a lot of students and show them a good time, and then blow any reputation they might have had for the worth of their degree. That is, they have debased their brand. In luxury goods this most often happens when the item is sold too widely at too low a price, so exclusivity is lost, but it could also happen if the perception of quality is lost. Moving away from a narrow view of brand management, the massive increase in Bachelor's degrees since the 1950s has clearly debased the worth of having one. While you do still need a Bachelor's to prove you aren't stupid or lazy, for many the real exclusivity is now in graduate degrees such as the MBA. It may be that there is a niche in the market for an easy Bachelor's degree as a fun social interlude between your SATs and your GREs.
There is some conflict of short-term interest between the student (paying the bill) and the end user of the process (the employer or society at large), because learning is not always fun, and young adults have other things on their minds than learning. To understand the social function of education you need to get beyond the obvious view that people take courses on X, Y and Z because they need to know X, Y and Z in life. This is true to some degree, but leaves considerable puzzles about why we are teaching the subjects that we are. For fundamentals, we need to look at the input/output model of a degree program. The desire is for high quality at the output of the program: people who not only have relevant skills and knowledge, but are also smart, can communicate with others, and can structure their own time to get someone else's problems solved. There is no denying that part of higher education is a sorting process, whether it happens in admissions or through dropouts. The people who are not admitted or drop out “contribute” to the result quality. If possible, it's better to weed out the students that “can't hack it” at admissions, because that spares them a costly failure that could derail their career, but so far as the end user of the graduate is concerned, it doesn't much matter. For universities, selective admissions is a shortcut to adding value, but it's important for there to be unselective options, because the SAT and similar tests are only moderately effective at predicting further academic success. Students who are admitted and enter the program are changed in various ways:
● In the overt curriculum, they learn how to remember various content material for days or weeks, and if the same skills come up again and again in their courses, they are likely to become enduring competencies. It is also possible that being exposed to the content and to the ways of thinking of a variety of subjects has some important “broadening” effect, even when you can't consciously recall the details.
● There is also the “hidden curriculum”, which is a very important contributor to the value of higher education, especially in non-technical areas. Most importantly, students must learn to put out satisfactory levels of intellectual effort on command, with almost no direct supervision, only the looming threat of a bad grade.

Functions of Education Why teach math? Clearly how we teach math has a great deal to do with how it has always been done, and the “why” was therefore not often questioned. Arithmetic has been part of the school curriculum since ancient times. Land has to be surveyed, taxes assessed, and accounts figured. These were done by the educated elites, while ordinary laborers had little use for numeric skills. With the rise of market economies, almost all daily needs were met through the market, so everyone needed some number sense, and benefited from some arithmetic skills. So far as daily living and workplace needs go, manipulating money and quantities of goods to be bought or sold, profits, interest and so on are the main practical needs. At one time, pencil and paper arithmetic was clearly a useful skill, while today this is something that is hardly done outside of school. It is still important to be able to understand practical arithmetic problems so that you know which key or spreadsheet symbol to use. Traditional elementary math was moderately effective in conveying this sort of understanding, but it isn't really known how important practice with manual arithmetic is for conveying this sort of understanding or for developing “number sense”. Cultural institutions evolve, presumably meeting some cultural need. Education is an interesting case because the overt function of school (teaching of useful subjects) does obviously have some validity, but doesn't seem sufficient to explain the curriculum. Math education being in excess of obvious need is not a new thing. Since classical times, geometry and logic were part of the curriculum, often with various odd numerological accretions, such as (during the middle ages) extensive classification of special kinds of numbers, going well beyond square, prime, … . Education has often been highly circular, with its function being to prepare you for further education, then at some point to qualify you for entry into the elite. 
The connection between educational goals (curriculum) and fitness for privilege has never been entirely clear. This may have worked in ancient China or in the industrial west, both as a way of sorting for intelligence, diligence, and so on, and as a way of legitimizing the social structure: the opportunity for upward mobility creates an argument that privilege is deserved. But the educational enterprise has been shaped by deep tradition as a gateway to privilege, and this conflicts with the goals of universal education.

Theories: The disconnect that often exists between curriculum and what we use in life (what we remember) is one of the most obvious puzzles. Some possible reasons (several may apply):
● It's just stupid, and doesn't serve any useful function, though perhaps it once did.
● With sorting, the content doesn't matter. Mastering any content can show fitness, and that content can be prized as being cultured. A historic view does show wide variation in what is considered valuable.
● Especially with education for aristocratic elites, it may be a social good sublimating their competitive drives into a non-destructive pursuit, even if the end is mere refinement, and not any actual good.
● Traditional education may “work” in a genuinely educational sense of promoting abilities, relevant cultural knowledge or wisdom, in spite of the apparent disconnect. We may not have conscious access to how our experiences have shaped our abilities.
● Babysitting: children aren't ready to enter the adult world, but need to be engaged in some simulacrum of it in order to absorb needed cultural knowledge, self-discipline and social skills. In large part, the Hidden Curriculum.
● Maturation: school spans the immature years. Even without school, knowledge, self-control and cognitive ability improve. (Overlaps with babysitting.)
● Students are mashing ideas together to create understanding. This is just how the brain works. You can't teach skills without some sort of content. Exactly what the content is may not matter that much.
● Cultural conservatism. Why did Babylonians have to learn Sumerian?
When you consider the social function of school as a whole you appreciate that it can have value largely independent of the usefulness of the subjects taught:
● Liberal education: we don't know whether it's true or not, but it's an ancient educational belief that learning useless stuff improves your mind. This could be considered a sort of learning transfer, but it's so ill-defined as to probably be impossible to ever quantify.
● The “hidden curriculum”: controlling yourself, sitting quietly, being motivated to do what someone else wants you to do, even if it's difficult and not intrinsically rewarding.
● Sorting: determining who's both smart enough to master content and diligent enough to master the hidden curriculum. These are people who will also do well in yet more of this sort of education, but more importantly, this intelligence and diligence is actually useful in our civilized world. On this basis alone, it makes sense for employers to favor high achievers, regardless of what subjects they took. This sorting may also give the illusion that liberal education works.


Anthropology and History of Education Why does our education system work the way it does? What are the purposes of education? Does what we teach and the way we teach it serve these purposes? What role does education play in creating or eradicating social inequality? The deep history of education is one way to understand some of the possibilities for interplay between education and status. In China, for over a thousand years, a high-stakes test was used to assign government jobs all the way up to the emperor's advisers. Oddly, a key part of the test was writing a poem. But rather than seeking truth or beauty, this poem was judged on how well it fit a highly constrained form that was not used in literature: the “exam poem”.
See also: http://en.wikipedia.org/wiki/Oral_tradition, literacy, Critical Thinking, http://en.wikipedia.org/wiki/Enculturation, http://en.wikipedia.org/wiki/Socialization, apprenticeship.

History of Education In Babylonian times there were libraries in most towns and temples; an old Sumerian proverb averred that “he who would excel in the school of the scribes must rise with the dawn.” There arose a whole social class of scribes, mostly employed in agriculture, but some as personal secretaries or lawyers.[13] Women as well as men learned to read and write, and for the Semitic Babylonians, this involved knowledge of the extinct Sumerian language and a complicated and extensive syllabary. Vocabularies, grammars, and interlinear translations were compiled for the use of students, as well as commentaries on the older texts and explanations of obscure words and phrases. Massive archives of texts were recovered from the archaeological contexts of Old Babylonian scribal schools, through which literacy was disseminated. The Epic of Gilgamesh, an epic poem from ancient Mesopotamia, is among the earliest known works of literary fiction. The earliest Sumerian versions of the epic date from as early as the Third Dynasty of Ur (2150–2000 BC) (Dalley 1989: 41-42). …
In ancient Egypt, literacy was concentrated among an educated elite of scribes. Only people from certain backgrounds were allowed to train to become scribes, in the service of temple, pharaonic, and military authorities. The hieroglyph system was always difficult to learn, but in later centuries was purposely made even more so, as this preserved the scribes' status. The rate of literacy in Pharaonic Egypt during most periods from the third to first millennium BC has been estimated at not more than one percent,[15] or between one half of one percent and one percent.[16]
https://en.wikipedia.org/wiki/Mencius According to Mencius (372–289 BC), education must awaken the innate abilities of the human mind. He denounced memorization and advocated active interrogation of the text, saying, “One who believes all of a book would be better off without books” (尽信书,则不如无书, from 孟子.尽心下). One should check for internal consistency by comparing sections and debate the probability of factual accounts by comparing them with experience. During the Han Dynasty (206 BC – 221 AD), boys were thought ready at age seven to start learning basic skills in reading, writing and calculation.[28] Earlier, during the Ch'in dynasty (246–207 BC), a hierarchy of officials was set up to provide central control over the outlying areas of the empire.
To enter this hierarchy, both literacy and knowledge of the increasing body of philosophy were required: “….the content of the educational process was designed not to engender functionally specific skills but rather to produce morally enlightened and cultivated generalists”.[30] The early Chinese state depended upon literate, educated officials for operation of the empire. In 605 AD, during the Sui Dynasty, an examination system was for the first time explicitly instituted for a category of local talents. The merit-based imperial examination system for evaluating and selecting officials gave rise to schools that taught the Chinese classic texts, and continued in use for 1,300 years.
In the city-states of ancient Greece, most education was private, except in Sparta. For example, in Athens, during the 5th and 4th centuries BC, aside from two years of military training, the state played little part in schooling.[31][32] Anyone could open a school and decide the curriculum. Parents could choose a school offering the subjects they wanted their children to learn, at a monthly fee they could afford.[31] Most parents, even the poor, sent their sons to schools for at least a few years, and if they could afford it from around the age of seven until fourteen, learning gymnastics (including athletics, sport and wrestling), music (including poetry, drama and history) and literacy.[31][32] Formal oral education systems also existed, as in ancient Israel and the Hindu tradition.

Educational Philosophy

Educational Essentialism Depending on the social needs, and also arbitrary tradeoffs between testing and effort at different educational stages, it is possible to come up with various educational systems that “work”. I've found a fascinating paper on the education system in ancient China, and the rather close parallels with current practice in China, Korea and Japan. In these countries, high school students are expected to devote extreme effort to study in the years leading up to the university entrance exams, but the university education itself has a reputation for being rather undemanding. http://suen.educ.psu.edu/~hsuen/pubs/KEDI Yu.pdf The ancient Chinese system is particularly interesting because it persisted largely unchanged for 1500 years, and in significant ways resembles what we expect education to look like, but in other ways seems very oddly balanced. Consider that high ministers directly answerable to the Emperor were chosen largely on the basis of their ability to write poems, and not actual beautiful poems, but instead a special highly constrained form, the “exam poem”. The content of the curriculum was studying the ancient Confucian classics (rather like holy books, heavy on moral guidance), with the aim of being able to write poems and essays about that topic. It seems pretty clear that if this system worked at all (which it clearly did), then its function was largely sorting.

Methods of Education There is a lot of continuity between ancient and modern practices. A Sumerian from 3500 years ago would have no difficulty recognizing today's primary school classroom. Economic constraints are clearly important.
● There is an element of coercion.
● Motivation by interest is preferred, but interesting all students in all topics won't happen.
● Some skills, like arithmetic and handwriting, are boring for almost everyone.
● Reading may become rewarding, but is hard at first.
● Teachers must teach to the average student, which harms both high and low performers.
Coercion is a big element, especially in primary education. For millennia, educational authorities have argued that the best way to motivate students is to arouse their interest, yet beating students has always been common. If we take the curriculum and mass instruction as a given, then the need for coercion is pragmatic. A modern alternative view is to question the necessity of the fixed curriculum.

Progressive Education Reform

Educators have been saying for 2500 years that the best motivation comes from within the student, that simple memorization is not enough, and that you need to learn to think critically. A major aspect of the educational reform movement beginning around 1850 was an attempt to eliminate coercion from school, which had to involve some transfer of control from the teacher to the students. Ideally education would start from real-world experiences such as going for a walk, rather than learning from books. This was justified in various ways:
● A humanistic view growing out of the romantic movement held that the experience of childhood should be an end in itself; unpleasant educational practices could not be justified as a necessary means of preparing for life.
● If school was more pleasant for children, it was expected that this would increase motivation, resulting in higher achievement.
● Authoritarian teaching and rote learning were seen as inconsistent with the principles of democratic society and with the needs of the modern world for flexible, creative thinkers.
This can be seen as part of an overall gentling of western society. Reformers such as John Dewey had hoped to reform society by changing the schools. Though this grand goal was not achieved, disciplinary beatings of students are no longer considered acceptable, which is a major departure from the ancient tradition of western education. The goals and methods of romantic educational reform have by no means disappeared, but the extremes of flexible student-centered curriculum never made it into general educational practice, and since 1950 the goal of student-centered reform has met increasing opposition. The “back to basics” movement argued that educators had lost sight of the actual educational goals of learning and skill mastery (see Educational

Education Today
In the past 200 years the big trend in education has been expansion in the number of people being educated and in the amount of education. While timing varied somewhat throughout the industrializing world, in the US primary education became universal during the 1800's, public high schools became widespread between 1900 and 1950, and college enrollment greatly increased in the 1960's and 70's. In contrast, the Methods of Education and content of education didn't change all that much, other than increasing adoption of industrial-scale techniques, such as the replacement of single-room schools by large buildings with age-segregated classrooms. While this expansion was clearly influenced by the needs and opportunities of the industrial revolution, national ideology and international comparisons also played a big role. The universal primary education and the research university model adopted by the US during the 1800's were largely based on German practices. See History of Education in the United States. It is very important to keep in mind this rapid expansion of the educational system. It has two major consequences:
● The system is nowhere near equilibrium. A series of demographic and economic shocks from the larger economy worked their way through the school system, creating changes, but never really giving any “new normal” time to get established. This churn largely makes meaningless any claims that education is worse than it used to be.
● Because existing educational practices were developed to teach pre-modern cultural elites, there has likely not yet been sufficient time to optimize education for modern needs.

The US Today
Is the system working? 21st century skills



● Most US students fail to achieve standards for proficiency in English and math; the failure is worse among minorities. This is not a decline, and school-age US literacy is average for developed countries (math and science are below international averages).
● There has been heavy emphasis, beginning in the 90's, on bringing up the performance of the worst students. This has produced significant improvements in math, but literacy improvements have been limited to the very worst performers.
● Ethnic gaps have decreased, but the SES gap has increased. The correlation between student performance, income, parental education and parental skills has increased. (Increasing heritability?)
● There is evidence that literacy has been pushed down to an earlier age, without improvement at high school graduation. Possibly this is because poor students lack general knowledge, or maybe they just hit their limit.

Educational Philosophy
From the Stanford Encyclopedia of Philosophy entry on the philosophy of education (http://plato.stanford.edu/entries/education-philosophy/):
There is a large—and ever expanding—number of works designed to give guidance to the novice setting out to explore the domain of philosophy of education; most if not all of the academic publishing houses have at least one representative of this genre on their list, and the titles are mostly variants of the following archetypes: The History and Philosophy of Education, The Philosophical Foundations of Education, Philosophers on Education, Three Thousand Years of Educational Wisdom, A Guide to the Philosophy of Education, and Readings in Philosophy of Education. The overall picture that emerges from even a sampling of this collective is not pretty; the field lacks intellectual cohesion, and (from the perspective taken in this essay) there is a widespread problem concerning the rigor of the work and the depth of scholarship—although undoubtedly there are islands, but not continents, of competent philosophical discussion of difficult and socially important issues of the kind listed earlier. On the positive side—the obverse of the lack of cohesion—there is, in the field as a whole, a degree of adventurousness in the form of openness to ideas and radical approaches, a trait that is sometimes lacking in other academic fields. … The debate between liberals and communitarians is far more than a theoretical diversion for philosophers and political scientists. At stake are rival understandings of what makes human lives and the societies in which they unfold both good and just, and derivatively, competing conceptions of the education needed for individual and social betterment.
(Callan and White 2003, 95–96) … The final complexity in the debates over the nature of educational research is that there are some respected members of the philosophy of education community who claim, along with Carr, that “the forms of human association characteristic of educational engagement are not really apt for scientific or empirical study at all” (Carr 2003, 54–5). His reasoning is that educational processes cannot be studied empirically because they are processes of “normative initiation”—a


position that as it stands begs the question by not making clear why such processes cannot be studied empirically. … The different justifications for particular items of curriculum content that have been put forward by philosophers and others since Plato's pioneering efforts all draw, explicitly or implicitly, upon the positions that the respective theorists hold about at least three sets of issues. First, what are the aims and/or functions of education (aims and functions are not necessarily the same)? Alternatively, as Aristotle asked, what constitutes the good life and/or human flourishing, such that education should foster these? (Curren, forthcoming) … How students should be helped to become autonomous or develop a conception of the good life and pursue it is of course not immediately obvious, and much philosophical ink has been spilled on the matter. … Second, is it justifiable to treat the curriculum of an educational institution as a vehicle for furthering the socio-political interests and goals of a ruler or ruling class; and relatedly, is it justifiable to design the curriculum so that it serves as a medium of control or of social engineering? … Third, should educational programs at the elementary and secondary levels be made up of a number of disparate offerings, so that individuals with different interests and abilities and affinities for learning can pursue curricula that are suitable? Or should every student pursue the same curriculum as far as each is able—a curriculum, it should be noted, that in past cases nearly always was based on the needs or interests of those students who were academically inclined or were destined for elite social roles. Mortimer Adler and others in the late twentieth century sometimes used the aphorism “the best education for the best is the best education for all”.

Education and Social Structure
Talking about education plunges you into the Nature Versus Nurture debate. Stereotypically, the nurture side argues that social inequality is caused by unequal education, while an extreme nature position says that education functions mainly by sorting out the smart people. Until roughly the industrial revolution, school was for elites. High school has only been universal in the US since around 1900. It's odd to argue that universal education is designed to perpetuate the underclass; that has more traditionally been done by not educating. Education (or anything else) can never make everyone an elite, but of the levers that are accessible to policy, it does seem like one of the more plausible ways to increase social mobility. One problem with this program is that in countries with historic class inequality, lower classes have culturally differentiated, and to some degree reject the norms of the ruling elite. Current hand-wringing about US primary and secondary education is not about any deterioration in education; it's about increased expectations. Partly this is “other countries have higher scores, and that shouldn't be”, and partly a good intention to raise all students up to the level that would enable them to go to college, which is now seen as a minimum credential for a good job. In contrast, many aspects of the system are largely unchanged from the pre-1900 era, when higher education was overtly elitist. While the desire to ensure that “no child is left behind” is commendable, the particular methods (frequent standardized tests, grading teachers and schools) are a huge experiment that isn't


founded on science or any actual understanding of how education works. This isn't because reformers are ignorant about how the current system works (though that may also be true), but because nobody really understands how the institutions of education are currently contributing to overall good (see puzzles.) These efforts largely take the current curriculum for granted, yet necessarily also trivialize it because of the limitations of testing. It is indeed a moral failing that we for years largely ignored the fate of children from poor neighborhoods, and local government was perhaps somewhat complicit in not holding students to higher standards, but simply blaming schools and teachers isn't going to solve the problem, and may indeed make it worse. These schools are now highly motivated to improve test results, but given the loose coupling between curriculum and whatever the actual function of education is, this may not help. Academic achievement is not just a matter of classes and teachers. Peer attitudes, home environment and sense of opportunity affect motivation. Learning does happen all day long, but what is learned depends on what is on offer. Clearly a large part of the problem is the effects of poverty on the community, creating stresses at home and in school that aren't conducive to learning. Another problem is (sub-)cultural adaptations to poverty and to working class status. If you don't believe you can get ahead, then you weigh things differently, and solidarity with your class mates (both senses) becomes more important.

Modern Times
One of the core ideas of the 90's EP synthesis is mismatch between the current environment and the environment we evolved in (the EEA). I agree with Boyd and Richerson's critique of this “big mistake” approach; traditional EP pays far too little attention to the power of culture to shape behavior, supposing that any behavior patterns seen in American psychology students are the result of an innate mental module. However, I think that a great many people feel stressed and alienated by the modern world because we have a strong instinct that having many relationships is a form of security, and that those relationships have to be cemented by face-to-face conversation. The combination of rule of law and monetized social interaction gives the modern person huge scope to do what they please, and this can be both fulfilling and highly productive, but the cost is alienation and an overwhelming range of choices. We have no idea of who to conform to. We have no idea of what measure of prestige to use. It's true that income is the default prestige metric in the US, and the default status display is buying uselessly costly houses, cars, home decorations, etc. But since at least the 1950's there has been much exploration of alternate value systems, especially in urban populations and the upper middle classes: consider the idea of “cool” or “hip”, with its strong value on non-conformity and creative improvisation. More recently, as boomers settled down and had families, there has been a shift towards less overtly rebellious visions of authenticity, but still reacting against consumerist culture. The picture is confused because useless non-materialistic activities are also displays which win us status according to their cost in time and Opportunity cost. As for the elites in the late 1800's, leisure is itself a status display. Today, having a day job and then being a Maker or an artist is a status display. Even from a pure genetic EP perspective, once


you have enough wealth for basic needs, it may make sense to optimize your status by engaging in non-paid activity. Saying these things are status displays doesn't mean they aren't also authentic self-expressions, but Intentional Opacity leads us to give a beady eye to the reported motivations for these life choices. Is choosing to give up your job as a high-paid professional to be a stay-at-home parent an authentic expression of love? A way of showing that your spouse is stinking rich? A way to optimize your genetic fitness by maximizing your investment in your offspring? It can be all of these. What gives prestige is culturally determined, but we suspect an innate bias toward wealth and power (see Prestige Bias). Wealth and power are not purely socially constructed. All animals have a sense of quality and amount of food, and social mammals usually have some sort of dominance ranking. Humans are outliers, in that until 5000-10000 years ago, we mostly lived in egalitarian tribal groups (see Human Origins and Original Sin). This is probably partly why many people find today's high levels of social inequality frustrating and morally repugnant. Yet cultures clearly can vary widely in what is considered prestigious, and at the level of cultural evolution, this choice is presumably an important contributor to the culture's fitness (winning increasing mind share over time). Modern times are a huge puzzle for Evolutionary Psychology (EP), because of the Demographic transition. People have been choosing to have fewer children with higher individual investment, in order for their children to have higher status. This is very odd under genes-alone EP theory, since it seems from the numbers that high status people have fewer children rather than more. All genes-alone EP can say is that this is another big mistake: people have been tricked into sacrificing their reproductive potential because in the EEA status did win higher reproduction.
Genetic/cultural coevolution opens the possibility of other answers, which, through the lens of group selection, might be seen to be increasing the inclusive fitness of the demographic transition strategy. Western cultural innovations are steamrollering the rest of the world, even though those peoples are far more numerous. For the culture's adaptive fitness, adequate education and economic differentiation are far more important than the total number of people. It is unknown whether western elites are actually sacrificing their long-term genetic fitness or not, but even in pre-modern times there have been practices of reproductive sacrifice in exchange for status (such as celibate priests and Chinese eunuchs).

Prestige Thanks for the reference on conformity and prestige bias. I didn't know those terms. Not By Genes Alone looks great and I look forward to getting into it. As much as I have learned and deduced, I'm still fuzzy on many things. I am still wrapping my mind around the nature of culture, as it seems many others are as well. Debating the nature of culture is something anthropologists have done a lot of. So there's no one right answer. I think the idea of Darwinian cultural evolution may help to refine thinking, since one discussion in anthropology has been about realizing that culture is not completely uniform within a cultural group, and this is the very diversity that natural selection needs in order to work (cultural variants). As societies become more complex, subcultures also increasingly emerge, so cultural variation is not purely at the individual level (there is stratification, to use the


genetic term). In the modern world we typically participate in multiple social contexts with their own subcultures, such as our place of work, at home, church or sports events, etc., and we also have many values and expectations that come from our social class and ethnic history. This means that, as well as individuals prospering or struggling depending on what cultural variants they adopt, identifiable social groups also gain or lose mind share according to whether people feel belonging to that group benefits them and their interests. Religions are one of the clearest examples of cultural subgroup evolution, but if you use a skeptical eye you can see the same things going on in business, education and politics.
> I wanted to clarify a couple points you make. "One of the core ideas of the 90's EP synthesis is mismatch between the current environment and the environment we evolved in (EEA). I agree with Boyd and Richerson's critique of this “big mistake” approach; traditional EP pays far too little attention to the power of culture to shape behavior, supposing that any behavior patterns seen in American psychology students are the result of an innate mental module." The conclusions drawn from psychology experiments are certainly debatable, but I'm not sure of the greater point. If our predispositions, inclinations, drives, and emotions were selected for small h/g groups, and if there hasn't been enough time for biological change, then aren't these qualities quite possibly (probably) mismatched to our current environment? Are you saying that it's possible that significant brain evolution has occurred since agriculture in response to the selective pressures of culture?
I do think that mismatch is an important idea, especially for things like obesity and social alienation.
Evolution has an amazing ability to produce working designs, and even highly optimized designs, but it is a key principle of evolutionary theory that flaws in the design tell us more about the process of evolution, and in particular provide the strongest evidence that the design is indeed the result of mindless evolution, and not an all-knowing god, or even a wise human. So when people started to apply evolutionary theory to human behavior, it was natural that they would look at places where our behavior seems dumb (not adaptive). Boyd and Richerson's critique of the “big mistake” does not rely on recent evolution. Instead, the argument mainly emphasizes the idea that humans have broad instinctual adaptations to cultural living (such as conformity and prestige bias), and apparently maladaptive behaviors may be taking advantage of our adaptations to culture construction. These behaviors may be harmful and yet still persist because our adaptations to cultural evolution are not sophisticated enough to tell the difference. Individual fitness and group fitness can also often be in conflict, in which case cultural evolution has the upper hand because it is so much faster. Some other criticisms that I have of the EEA, which overlap both with a common anti-EP criticism and with the Boyd and Richerson criticism:
[1] Speculating about the long-ago environment and the special characteristics it might have had is an error-prone way of doing science (see just-so stories). We should only do this when the uniformitarian hypothesis fails to explain the data.
[2] It may be that a seemingly bad behavior truly does harm fitness, but is a side-effect of a more general behavioral adaptation which is still highly beneficial overall. Boyd and Richerson's argument above is one case. So a behavior may be maladaptive without ever having been adaptive. The simple existence of non-adaptive behavior is not in itself a very strong argument for a hypothetical EEA.
This overlaps with the literature on cognitive bias, and in particular the


idea that apparently irrational behavior may still be adaptive in the real world. See Gut Feelings and Passions Within Reason.
[3] Another risk is that we simply accept cultural behavior norms as "good", without actually evaluating whether they are genetically adaptive or not. For example, is abusive parenting or poverty that persists generation after generation maladaptive? Anything that persists across generations is not too maladaptive, and we should be open to the evolutionary prediction that persistent behavior is the place to look for adaptations. There are far more poor people now than there were 100 years ago, simply because there are far more people than before (and "poor" is usually defined as relative rather than absolute wealth).
[4] The possible importance of recent human evolution is another weakness of the EEA theory. The idea that humans are adapted primarily to the environment before agriculture is quite plausible, but that only takes us back about 10,000 years. How much genetic evolution could happen in that time depends completely on the strength of selection pressure. People have offered various stories about how human natural selection might have stopped at some point: there has been increasing human control over causes of premature death such as infectious illness and accidents, but there remains considerable variation in reproductive success. Unless genetics has no effect on this remaining variation, evolution will march on. The question is only how fast. If we could know that adaptation proceeded at the same steady rate from the origin of modern humans about 200,000 years ago until now, then the change in the past 10,000 years would be only 1/20 of the total. But the very evolutionary mismatch that EP observes creates selection pressure. IMO it is likely that selection pressure has been much higher in the past 5,000 to 10,000 years.
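To put a rough number on "how fast", here is a back-of-envelope sketch using the standard breeder's equation from quantitative genetics, R = h²S (per-generation response equals heritability times the selection differential). All of the specific numbers below are illustrative assumptions, not estimates from the literature.

```python
# Back-of-envelope sketch using the breeder's equation R = h^2 * S.
# Every number here is an illustrative assumption, not an empirical estimate.

h2 = 0.4        # assumed narrow-sense heritability of some behavioral trait
S = 0.05        # assumed selection differential, in trait standard deviations
years = 10_000
generation_time = 25
generations = years // generation_time   # ~400 generations in 10,000 years

response_per_generation = h2 * S                  # trait shift per generation (SD units)
total_shift = response_per_generation * generations

print(generations)    # 400
print(total_shift)    # about 8 standard deviations, if such pressure were sustained
```

The point is only directional: with a selection differential near zero the cumulative shift is negligible, while even weak sustained selection compounds over hundreds of generations, which is why everything hinges on the (unknown) strength of selection.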
Selection pressure was likely higher in recent millennia not only because of changes in material lifeways such as food, but also because of intense competition both within and between neolithic city-states. Genocidal conquests and execution of social trouble-makers could both create strong selection pressures on behavior. I've read some history of EP, and there are suggestions that the EP community around 1990 (the generation of Leda Cosmides and John Tooby) had thought about the political advantage of the African savannah EEA: that it supported the ideal of human equality. From http://anthro.vancouver.wsu.edu/media/PDF/Buss2.2.pdf:
Because racists and eugenicists typically justify discrimination (and worse) by claiming that one population is biologically superior to another, EP has taken great pains to ground itself in theory and evidence of a universal human nature that evolved, or was maintained by stabilizing selection, during the roughly 2 million years of the Pleistocene. If EP is correct, then there are no fundamental biological differences among human populations, let alone any notion of `biological superiority.'
Only 15 years before, E. O. Wilson had been hugely criticized for proposing basically the same program of evolutionary thinking about human behavior (Sociobiology), and was accused of racism simply for saying that humans have innate behavior, without touching on race at all. EP was a deliberate rebranding and relaunch, now with the powerful idea of mismatch as an explanation for non-adaptive behaviors. Whatever the technical merits of the African EEA idea, it can also be seen as an effort to find morally solid ground from which the EP program could be promoted. If any evolution took place since the out-of-Africa migration (60k–100k years ago),


then sub-populations might be more mismatched to the modern world, and perhaps even “more primitive”, which would sound a lot like the scientific racism common in the 1800's. In 1990 we had very little evidence about genetic change since Africa, beyond ethnic physical differences, and there was a plausible story that physical appearance might have been under particularly strong sexual selection, which could have caused appearance to diverge without any group being more fit according to any morally relevant part of the environment. Being beautiful by local standards helps your success within your ethnic group without implying that one group is superior to another. Since then there have been a few clear examples of adaptive mutation and selective sweeps, such as lactose (milk) metabolism in Europeans and hemoglobin (blood) in Tibetans. It is likely that much of the change has not been due to new mutations but rather to sexual reassortment of existing variants. When you look at the genome as a whole, two random individuals will typically differ at about 3 million positions (base pairs). It is likely that the vast majority of these differences have no effect, either because they are in a context in the genome which has no current effect (junk DNA), or because the change they cause is unimportant. Since we have very little understanding of what the effects of these differences are, any scientific inferences have to rely on hugely oversimplified assumptions. Assumptions such as that all differences are equally important, or that gene effects are additive, are not only unproven, they are clearly false. In science we often work with known-to-be-inaccurate numerical models simply because no accurate model is known. In the case of human biology, it's likely that there is no precise model that's humanly comprehensible. The question is how much truth we can squeeze out of our necessarily crude models.
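As a toy illustration of why the additive assumption is a simplification (all variant names, effect sizes, and trait units here are invented for illustration): an additive model predicts a trait by summing independent per-variant effects, and even a single interaction between variants breaks that prediction.

```python
# Toy model (invented numbers, arbitrary trait units): the additive
# assumption treats a trait as the sum of independent per-variant effects.
additive_effects = {"v1": 5, "v2": 3, "v3": -2}

def additive_prediction(genotype):
    """Trait value predicted under the additive assumption."""
    return sum(additive_effects[v] for v in genotype)

def true_trait(genotype):
    """A hypothetical 'true' trait with one epistatic interaction:
    v1 and v2 together contribute less than the sum of their parts."""
    value = additive_prediction(genotype)
    if "v1" in genotype and "v2" in genotype:
        value -= 4   # interaction term the additive model cannot represent
    return value

g = ["v1", "v2", "v3"]
print(additive_prediction(g))  # 6
print(true_trait(g))           # 2
```

With millions of variant positions and unknown interactions among them, the gap between the additive prediction and the true value cannot be known in advance, which is exactly the problem with squeezing truth out of crude models.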
> I know that our brains have a great capacity to adapt to many different situations–that seems the point of our capacity for culture. But what exists today in terms of population size and social complexity is so far beyond our EEA (thanks for that term btw), that I can't help but think the mismatch is significant and can account for problems we see today such as alienation, depression, anxiety, etc.
I agree with you here. But re: point [3] above, we should try to keep clear in our minds the distinction between “alienation, depression and anxiety suck” and “alienation, depression and anxiety are maladaptive”. One nonintuitive catch phrase from EP is “It's not about happiness”. Though there are some points of overlap, evolutionary fitness is quite different from the “good life” of a philosopher or self-help guru. Take obesity for example. How maladaptive is obesity? Though there are some genuine public health issues, IMO 90% of our concern about obesity is driven by sexual and prestige competition. Individually we care because we want to look sexy, we want to look like the rich and famous. It was only yesterday in evolutionary terms that being fat and physically inactive was the privilege of kings. Now the prestige standard has switched, and we want to look more like we've been working in the field all day. From Fear and Loathing of Evolutionary Psychology in the Social Sciences:
With a global population rapidly approaching six billion, it looks as though our current environments are quite congenial to our traits. Certainly, there is mortality associated with overindulgence, but most of it occurs in postreproductive years and therefore is only weakly selected against. Certainly there are many unhappy people in the world, but there is no way of knowing whether they would have been


happier in a forager lifestyle, and in either case, natural selection is not about happiness; it is about reproductive success.
> Cooperation is one of the biggest of those problems which evolution solved. But cooperation today has a very different appearance than it did originally. The essence is the same, but the execution has never been so specialized, institutionalized, and impersonal. This seems like a very sound place to discover the source of our modern complaints. Do you disagree?
It is now common to live your life largely in the marketplace. Although there are still options for relationships based on mutual trust and commitment, in marriage, the workplace, or other social groups, such attachments are now avoidable, and some combination of the pressures from the system and personal choices is causing people to live in a far more socially detached way. This is a new stress, because we have instincts that personal attachments are a source of security and life meaning. It seems adult attachment is built on top of child attachment, and for a child being alone is a severe hazard. Modern life is indeed particularly complicated, but we also tend to fall for a mythic view that once upon a time life was bare to the heart and blissfully simple. The most complicated thing in the human environment has always been other humans. See Vengeance Is Ours for an example of how non-simple life can be in small-scale societies. Since there is no way to conceptually encompass the true complexity of the modern world, we live in social networks that aren't much bigger than prehistoric tribes. Since we can only know so much, we individually don't know much more than we did long ago (though what we know is different). Human life has always involved relationship drama, political intrigue, moral ambiguity, and environmental crises. As I was alluding to, I see some key reasons why we experience stress and alienation, including our need for face-to-face loyalties.
But do you think this need not be so? Do we not need secure face-to-face relationships to feel content?
I'm reading an interesting book right now, “Loneliness”, by John Cacioppo. He does make basically this argument, with the important qualification that there seems to be considerable individual difference in the need for social connection. This is a good argument, but I'd note that contentment is not a naturally stable emotional state. The human genius is to be discontented with anything. We only notice that we actually had it pretty good when things start to fall apart. Though I'm sure there are once again individual differences, a large part of the prestige or success motivation is endless optimization, always trying to make it better, whatever it is. Prestige is based on social comparison; there is no “good enough”.
> The combination of rule of law and monetized social interaction gives the modern person huge scope to do what they please, and this can be both fulfilling and highly productive, but the cost is alienation and an overwhelming range of choices. We have no idea of who to conform to. We have no idea of what measure of prestige to use.
I totally agree that we are overwhelmed by choices, and that freedom is a double-edged sword in that it creates many big decisions and a lot of stress over choosing correctly. We are often adrift. It is not easy to find our own unique way. If I understand correctly, that is the point of our innate drives toward conformity. And this is why most people are happy doing what others in their groups do. But do you make this point in relation to the previous comment? Is this more of the source of alienation as you see it? Or are you simply setting up the next thought about the extreme fracture in our measure of prestige?


Well, I'm not sure exactly what alienation means, except that it's supposed to be a uniquely modern form of psychological distress, and has something to do with feelings of detachment either from specific other people, or from more vague sources of life meaning. I think that, specifically, lack of close social ties is stressful, anxiety-producing, and perhaps leads to depression and feelings of meaninglessness. Social conformity is one thing that genuinely used to be a lot simpler, because of the lack of much variety in possible life roles, and strong conservative intuitions about any kind of innovation. Lack of freedom, in other words. We want freedom, we want control over our lives, because our intuition is that this will help us to optimize, to promote ourselves and our descendants. This is probably true, but the sudden dramatic loss of cultural guidance and constraint can be distressing too. The usual modern response is to choose some subgroup value system that appeals to us; we will be a punk or a hippie or a yuppie or whatever. There's still a huge number of groups with strong behavior norms, so take a plunge and let all those distressing possibilities melt away. I also had some thoughts about your comments on status and prestige…
My guess is that displays of status have become important in post-agricultural times where populations are too big to personally know who everyone is and what they contribute. So we have been forced to try to ascertain status by external means, such as wealth, power, and influence. This has shifted our behavior from trying to earn status by means of contributing actual value to the group toward gaining status by obtaining its indicators.
One thing that Boyd and Richerson don't get into is how prestige is determined. I think they specifically chose prestige over status because status has connotations of a generally accepted ranking, whereas prestige seems to be more flexible in admitting that this judgment is context-sensitive.
I'd be surprised to find a context where prestige is determined purely by altruistic contribution to the group, without regard to self-interest. Many cultural practices such as figuring out a new food source do not benefit the larger group unless they are adopted. “Grog eat stinky root and say it good. I wait and see. Grog still not die, but there are plenty of berries, I eat them. Grog have three strong children. Maybe something to this stinky root thing.” Something that benefits individuals still benefits the group as long as it doesn't harm the group in some other way. With EP, it is the seemingly altruistic behavior that demands special explanation, not the self-serving behavior. This is one of the reasons that EP is a bitter pill to swallow, because it seems to say that self-interested behavior is natural, even though it is contrary to our pro-social moral rhetoric. The group can't survive and prosper unless a majority of the individuals survive and prosper. Each individual is uniquely qualified to promote their own success, but it is not necessary to have moral guidance promoting self-interested behavior, since people do that automatically. However, it is easy, as you point out, to see the extreme plasticity in what constitutes status and in how it is earned. What matters, in the end, is that an individual provide some perceived benefit to the group in order to gain status. Cultures will be successful if they award prestige to individuals who act in ways that increase the culture's mind share. If watching “Lifestyles of the Rich and Famous” makes subsistence farmers move to cities and adopt western lifestyles, then Rich and Famous lifestyles are adaptive for the culture. They create desire that makes people want to adopt the culture. Whether this makes them happy, and whether they have more or fewer descendants, those are two additional questions. Cultural fitness is not individual fitness is not individual happiness.


What we see in modern times is the emergence of a super-fit cultural framework (state, rule of law, economics) that is devouring existing cultural diversity. The situation with respect to individual fitness is confusing because of demographic transition. It is hard to say whether we are closer or farther from “the good life”. Philosophy quickly bogs down in intractable ambiguity, and the new science of subjective happiness isn't very clear either. To first order, people vary in their “happiness setpoint”, and tend to return to that happiness level no matter what happens, good or bad. People tend to be satisfied if their situation has been improving (upward mobility) or if they are doing all right in comparison with whomever they compare themselves to. Because we are optimizing, we respond to relative comparisons far more than to absolute material wealth. However there is some scientific evidence for the common-sense idea that extreme poverty such as is still common globally does lead to reduced subjective happiness, so the continuing global wealth increase, now pushing into India, China, and even Africa, does seem like a good thing. Even if the concern is with relative income inequality rather than the well-being of the worst-off, income inequality has been decreasing in the world as a whole (if not in the US and other developed nations). For example, there is already a pretty tangible sense that the moguls of finance and banking are not necessarily friends to society, and are not adding to our collective good. And there is also the sense that there is too much inequity. If the amount of inequity that exists now is truly damaging to society at large, as I believe it is, there is no reason this could not also continue to work itself into our collective computation of what and who deserves status. If people collectively looked down on individuals who hold excessive fortunes, it would diminish and even erase the benefit of holding such wealth.
After all, it seems fairly obvious that the reason to obtain and hoard wealth is for status. Take away the status, and there isn't much left. How many homes and boats can you enjoy, especially while others look on scornfully? In this way, we collectively hold the power to prevent financial manipulation for personal gain. In this way, we hold the power to right the whole ship. Robert Frank has an interesting argument in Luxury Fever that we should regard conspicuous displays of wealth as socially toxic pollution, in analogy with the economic theory of Market Failure through externalities in pollution of the physical environment. I think that displays of wealth are indeed distressing for those who are less wealthy, mainly because (as I keep harping on), our striving motivation is based on relative comparison rather than any sense of what is necessary for the “good life”. In the modern world, we have astounding concentrations of wealth, and also astounding communication media which make everyone aware of every unattainable wealth display anywhere in the world. Even the super-rich are not immune. If you own the world's tallest skyscraper, you're only going to have that distinction for a couple of years, and meanwhile you still don't have the biggest yacht, the most houses or the best custom jet. Taking this line of thinking one step further, it is also fairly obvious that there is little correlation between wealth (above a moderate income) and happiness. We are not selected to anonymously and solitarily enjoy the spoils of status. We have presumably been selected, instead, to seek and want the respect, affection, esteem, and protection of people we know and with whom we directly interact. If loss of wealth would not necessarily result in a loss of happiness, the whole concept seems tenable. The idea of a new definition of status and success might well be a meme waiting and ready to be spread.


Alternate paths to prestige are already well underway, with the whole counterculture/alternative thing. In the west people have been vigorously exploring alternatives since at least the '50s, with the beat generation, and strains of rejection of materialism in the intelligentsia go back to the romantic movement in the early 1800's. One issue is that because prestige is culturally dependent, you can have a lot of different prestige models going at the same time in different subgroups. The problem is that the boring old wealth and power block is hard to ignore because they have a lot of wealth and power. We already see many stories along these lines. In fact, it is the expected theme of most stories. We are repeatedly told that real connection is far more important than riches. We just don't yet fully believe it. This story goes back at least to King Midas. IMO these stories are common because we like to hear them, and not because they have ever been entirely accurate. The emotional reward of the moral in a story isn't that it changes our understanding of what is moral; it's that it reassures us that the moral world is indeed in proper order. We might suppose that the rich would benefit most from the moral, but the story is always more popular with the non-rich. I suspect that it may be a fine and tricky line to differentiate since wealth does have some real rewards, and in a world that offers little in other forms of security and fulfillment, it seems easy to miscast the value of money. AND because we don't yet understand better ways of establishing status and knowing who to follow. I know there is nothing simple about my hope. It's much more of a preposterous pipe dream than a realistic and actionable concept. But I cannot help myself. The trick is to make the meme clear, simple, convincing and thus sticky, and to put it in the hands of the influential. That's my best guess. Do you give any thought to “saving the world?” How do you think it might be done?
We live in interesting times. I'm certainly interested in getting people to question their social striving instincts and to be careful in who they choose to compare themselves to. I do think there is a valuable story here about our human nature as ultra-social animals. Building social connection is a path to meaning which is a win-win proposition, whereas status striving is zero-sum. As you say, new ideas can work their way through a culture, though this often takes a generation or more. In my personal politics, I think it's a big problem how dependent US politicians have become on political donations, and I have made some contributions to the Mayday PAC, which is working on getting the money out of politics. I'd certainly support tax changes in the US aimed at reducing inequality, but my read on this is that the idea will have to gain a lot more mind share before anything will happen. In the intellectual world inequality has been getting a lot of attention, and living in the coastal intellectual establishment it is easy to get the impression that everyone is concerned about inequality and wants to “eat the rich”, but there is nothing like a political majority. IMO technological change and structural economic innovations such as globalization have been a huge driver of economic trends in modern times, and there is just no credible prediction of what is going to happen.

Social Behavior


This section in the Wiki analyzes social behavior. Although other organisms are social, we humans are uniquely social, both in living in large groups of distantly-related yet cooperating individuals and in our reliance on culture for survival. Theories of Cultural Evolution are the extension of evolutionary theory to cultural change. Since our capacity for culture evolved genetically, and genetic evolution has not stopped, genetic and cultural evolution operate simultaneously, demanding a theory of Genetic-Cultural Coevolution. Of course, the main advantage in living in a social group is the opportunity for Cooperation. Traditionally this is regarded as so obvious as to often go unstated. A controversial aspect of evolutionary theory as applied to culture is its use to explain and predict various forms of conflict (failure to cooperate.) We do not see this viewpoint as in any way undermining the virtue or predominance of cooperation; what we do see is a subtle argument that the maintenance of cooperation is more difficult than often supposed, and that numerous aspects both of human instinct and culture work together to make this cooperation possible.

Attitude Behavior Gap A durable and puzzling result from Social Psychology is the Attitude-Behavior Gap. If situationism refers to how behavior is strongly context-dependent, the attitude-behavior gap refers to the disconnect between what we say about our attitudes and what we actually do. Although this has been studied most with moral behavior, where we often do one thing and say another, in this wiki we use the concept more broadly to describe any persistent mismatch between Story and behavior.

The Puzzling Case of the Chinese Guest Attitudes toward racism are one place where we find this gap. Recent research has emphasized that most people who claim not to be racist still have racist implicit attitudes and still tolerate or fail to perceive racism. Interestingly, the area was founded on the observation that in a time and place where racism was accepted (the US in the 1930's), people failed to be racist in an everyday situation, even though (on being asked) they claimed to be racist. See Attitude-Behavior Gap.

Why? The attitude-behavior gap is puzzling. Is this proof of yet another failure to live up to our ideals of rationality? Our spin is different. We take an evolutionary perspective to behavior, starting with an assumption that however we do behave is likely to be adaptive, meaning it has tended over the ages to aid human success in life, leaving more descendants. Simply being puzzling (not being consistent with a well-known Story) is not enough to prove that a behavior is not adaptive.


In the example linked above of a motel clerk checking in a guest (even if he happens to be Asian), this is his job, and we would normally never question that doing his job is adaptive. So, behavior adaptive: (✓). Then, later on, the clerk answers a question about how he would treat an Asian guest. Whatever he happens to say, this mouthing of words is a distinct behavior, under different selective pressure. Speech is fundamentally a social act. There is quite a bit of evidence from social psychology that we are continuously adjusting both our behavior and our reported attitudes according to subtle social cues which we are almost certainly not consciously aware of. See for example Dirty Liberals!. We suspect that the largest single source of attitude-behavior gap is that many attitudes are adopted for social reasons (see membership_badge), and are therefore mainly guides about what to say in a particular social context, rather than rules for actual practical decision making. In our day-to-day life we hardly ever have cause to refer to our attitudes. For the clerk, back in the 1930's, in the rural US, having racist attitudes was indeed likely to be adaptive. Certainly his attitude did not seem to do him any harm. Tentatively, then, another adaptive behavior (✓). Headline: clerk consistently behaves adaptively in multiple situations. Where has the puzzle gone?

The Advantages of Doing one thing and Saying Another The puzzle lies in this flexible opportunistic behavior not being consistent with the socially approved stories of moral behavior. Saying one thing and doing something else is called hypocrisy. Because language exists for social coordination, we have to rely on people speaking accurately about what they have done, about what they intend to do, and about what goals they think would be best for the group as a whole. Speaking inaccurately is called lying, and is a moral offense because it does undermine effective social coordination. Freed from this restriction, the advantages of doing one thing and saying another are quite obvious. The surprise is that normal law-abiding, morally upstanding people do this kind of thing all the time, in carefully calibrated and subtle ways. This is not at all surprising from an evolutionary perspective–it is in fact exactly what we would expect people to do. See individual/group conflict. We also argue in Representational Opacity that it just so happens that the human mind is constructed in a way that we can't report entirely reliably why it is that we behave as we do. It is hard to know how much of the attitude-behavior gap is due to opportunistic hypocrisy and how much is simply due to our failure to correctly understand ourselves (and the social necessity of denying that this problem of incomplete self-awareness even exists.)

Social Conformity and Scripts The ambiguity of the clerk's particular situation, with the unexpected challenge of a Chinese guest, is clearly also an important factor, since the clerk's racism likely originated in discrimination against African heritage. Even so, it seems likely that the vast majority of these clerks had never actually had to turn away any black guests (simply because there weren't any around), and even if presented with dark brown skin and frizzy hair, would quite likely either check the guest


in like always, or go seek someone else for guidance about what to do. And if this clearly black person sent “white” signals in their dress and way of speaking, failing to conform to stereotype, then the clerk would be even more cautious about deviating from the default “customer” script. In situations that could have serious consequences, even if a strong signal does evoke an attitude, people will seldom “wing it” when they don't have a script. After being presented with an unexpected challenge it is indeed common to be upset and to work on deciding “what I'd do next time”, that is, making up an actionable behavioral script. The failed racism study doesn't rule out the possibility that the next Chinese guest might be turned away. See Conformity Bias.

Cheating As humans, almost everything we do is a cooperative undertaking, and we always have to be alert to the possibility that we will be cheated out of our fair share of the rewards. Because of this, Evolutionary Psychology predicts that humans will be highly attuned to detecting and handling cheating in social interactions. What is our response to being cheated? We may try to recover what we feel we are owed. Most cultures have some sort of judicial mechanism to support recovering damages, and to do so is economically rational. But our initial (and primary) response is emotional. We feel an anger which is much the same as if we had been physically harmed. In addition, this anger is in some sense disproportionate and irrational, in that it may lead us into some confrontation with the person we feel has wronged us, where the possible cost to us far exceeds the loss we suffered (we could be killed.) “It's not the money, it's the principle of the thing.” Robert Frank considers such a disproportionate “irrational” response to be exactly the sort of behavior that is needed to encourage cooperation, especially when there is no rule of law (see Passions Within Reason).

Genetic-Cultural Coevolution Genetic-cultural coevolution is the idea that once pre-humans acquired basic abilities for cultural learning and cumulative culture, this created a positive feedback (see Positive Feedback in Evolutionary Transitions) where culture became more sophisticated and this drove genetic evolution to create new people who could function better in the cultural environment, who created even more complex culture, and so on. The cultural/social environment quickly became a more critical determinant of individual success than the natural environment, and ever greater brainpower was required to navigate this new world. See Cultural Evolution, Adaptive Behavior, Dual Inheritance Theory, Group Selection and Not by Genes Alone. Because this is a fairly new idea, there isn't complete agreement on what to call this phenomenon. “Dual Inheritance Theory” seems the most common, but doesn't highlight the critical feedback interaction. We prefer “genetic-cultural” to “gene-culture” for the technical


reason that many important genetic changes were likely in non-coding sequences and in sexual reassortment of existing genes, rather than in mutations to genes.
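The runaway dynamic described above can be sketched as a toy simulation. This is purely illustrative and not from any model in the wiki: the variable names, growth rules, and rate constants are all invented here to show the shape of a positive feedback loop in which cultural complexity and genetic capacity for cultural learning each amplify the other.

```python
# Illustrative toy model (assumptions, not an established model):
# "culture" = cultural complexity, "capacity" = genetic capacity for
# cultural learning. Each generation, culture grows in proportion to
# both current capacity and current culture, and selection nudges
# capacity upward in proportion to how much culture there is to
# exploit -- a positive feedback loop.

def coevolve(generations, culture=1.0, capacity=1.0,
             culture_rate=0.05, gene_rate=0.01):
    """Return the trajectory of (culture, capacity) pairs."""
    history = [(culture, capacity)]
    for _ in range(generations):
        culture = culture + culture_rate * capacity * culture
        capacity = capacity + gene_rate * culture
        history.append((culture, capacity))
    return history

trajectory = coevolve(100)
# Because each quantity feeds the other's growth rate, both accelerate
# over time rather than growing at a fixed exponential rate.
```

The point of the sketch is only the coupling: remove either feedback term and growth flattens out, which mirrors the claim that neither cumulative culture nor the genetic capacity for it could have ratcheted up alone.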

The Collective Subjective epistemology (the study of how we come to know what we know, and the limits of that knowledge) in the European tradition has largely focused on whether and how humans, limited by their subjective nature, can come to know with confidence an objective world. Of course this assumes that there is an objective world we can perceive. To Plato, the material world we might hope to know is little more than shadows of some more important realm of forms. The reality is in the pattern, not in the thing itself. By the time of the early philosophers of science, this notion was being transformed. Galileo said we perceive the characteristics of material things, but can only infer their essence. Thus he says beeswax may be yellow, soft when heated by the hand, and smell of honey, but none of that tells us what wax is. Descartes and Hume took this argument further, saying we could be sure about little (Descartes) or nothing (Hume) of the material world without some kind of divine intervention. Kant provided an interesting alternative, suggesting (like Galileo) that we perceive information coming from the material world, but must evaluate it by category before assembling it into a cogent picture. In fact this does seem a lot like what current brain studies suggest. But the point of course is that something separates the individual subject from objective knowledge. Followers of Ayn Rand notwithstanding, no one claims that an individual can have total, accurate knowledge of the world. Even when that quality of knowledge is available through rigorous scientific study, no one carries a laboratory around just to check the weather. This problem is not limited to philosophical debates. Since the discovery of quantum mechanics, physicists have been troubled by a difficulty separating the influence of their presence from the results of their experiments. Some argue that it is impossible to make that separation. Social scientists have long faced a similar dilemma.
If that sounds like we are doomed to uncertainty, consider that we seem to function without giving it a second thought. How is that? Without getting too deeply into it, we think that one answer might be that we learn to make assumptions. By trial and error an infant growing into a child growing into an adult comes to expect consistency – we only need to experience ice a few times to know it’s likely to always be cold. Further, we think it follows that people come to rely on their parents, family members, teachers and peers, even strangers. It gets beyond strict epistemology, but we might say one mechanism people use to achieve a working knowledge of the world is human culture. Society allows them to compare evidence within the group, and to receive information without having to test it. So (for example) people might feel more confident about taking an action if it is supported by the judgment of others. It may be difficult to guarantee that the information is accurate, but people do the best they can with what they have. And don’t think about it very much. We might say that much of what individuals do on any given day either depends on or is a part of some kind of collective subjective action, in other words subjective decisions supported by


the subjective opinions of others. In fact it could be argued that human culture is just another way of saying the same thing, that people exist as social animals within that collective subjective. But it doesn’t have to be that general; take two examples. For one, people delegate the job of deciding justice to twelve proxies typically chosen at random from the list of registered voters. While the justice system is far from perfect it is fairly reliable, and normal expectations of fairness don’t require our constant personal involvement to confirm. Another basic form of collective subjective action is the economy. More specifically, we can regard the price-setting mechanism of markets as a form of collective subjective.

Cooperation People are the way they are because of the benefits of cooperation. People are both selfish and groupish. Humans aren't the only kind of animal to get these benefits. E. O. Wilson observes that social insects account for just 2% of all insect species, but more than half the total insect biomass. Humans and social insects also show altruism, sometimes sacrificing themselves for the good of the group. Watch E. O. Wilson on “The Social Conquest of the Earth”:

The Cultural Animal When compared to other animals, humans are in a category of animal that happens to have only one member: us. Many animals are solitary, coming together only for mating. Quite a few kinds of animals are social, living in groups, with some degree of coordination and cooperation, and often having social structures such as a dominance hierarchy. Human collective behavior is indeed social, but we are cultural animals, and differ from other social animals in ways that are quite important to us. In particular, we have a unique reliance on cumulative culture mediated by language. We also show uniquely high levels of cooperation between unrelated individuals. See the book The Cultural Animal (book).

Cultural Evolution But though this progress of human affairs may appear certain and inevitable, and though the support which allegiance brings to justice, be founded on obvious principles of human nature, it cannot be expected that men should beforehand be able to discover them, or foresee their operation. Government commences more casually and more imperfectly. See cultural evolution, but note that for most of its history the term has referred to any defined process of cultural change or progress, and not specifically to Darwinian evolution. We understand cultural evolution as a facet of Gene-Culture Coevolution, and largely agree with Richerson and Boyd in Not by Genes Alone.


Culture We use culture in the broad sense to mean the knowledge, beliefs, learned behavior, technology, membership signals and bonding rituals of a particular group. Any social group will have unique or characteristic culture.
● The human capacity for culture allowed a new way of life, changing the environment, causing humans to evolve to be better suited to this environment, creating a positive feedback (self domestication.)
● Cultures work to suppress some problematic motivations, while co-opting other motivations to socially useful ends, sometimes sacrificing the interests of the individual.
Historically culture has been closely allied with the concept of language and ethnicity. In addition to providing our language, culture also provides our practical lifeways: how we get food, clothing and shelter, our social norms of the proper ways to behave towards others, and our political structure.

Individual Differences and Fairness Our sense of fairness is an evolved part of our response to cooperative group living. Cooperation is very much a win-win proposition, but often some individuals will contribute more to a particular undertaking, and some individuals will receive more of the benefits. Our sense of fairness is an intuitive gut sense of whether the rewards are appropriate. This is closely related to the ideas of Cheating (breaking the rules) and freeloading (gaining a benefit that goes to all without having contributed.) The most obvious way that rewards can be fair is if they are proportionate to the contribution, but other cultural rules are used, and can make a great deal of sense (be adaptive) given the means of subsistence of that particular people.

There's this Tribe In a culture where hunting for large animals is an important food source, a kill is usually shared across the entire tribe. This is partly because it is impossible for the successful hunter to eat or preserve the entire animal before it spoils, but is also important as a way to protect the entire tribe from the risk of unpredictable and “lumpy” rewards. If there is a successful kill every week or so, then there is enough to go around. The best hunter may be two or three times more productive than the least competent, but even the best hunter might starve if they had to rely purely on their own resources. There is a substantial element of luck in large-game hunting. This risk may only be tolerable if it is spread over the entire tribe. The scheme of sharing kills is analogous to “income redistribution” from the best hunters to the worst hunters. Although the best hunters may grumble about freeloaders, a successful culture will find ways to keep their high achievers sufficiently happy that they remain motivated.

What About Us?


What does the parable of this tribe illustrate about the human condition? Though there may be incidental fringe benefits, the most important benefit that goes to high achievers is probably the benefit that goes to all members of the society, which is the benefit of being in a viable polity that can successfully compete with other polities and which can undertake the large-scale projects that may be necessary for mere survival in the natural environment. Then, as now, it remains largely a fantasy that high achievers could flourish by casting off the burden of supporting freeloaders. People are highly motivated to compare themselves to others, both better off and worse off (see Social Comparison Theory). People care a great deal about their relative standard of living (compared to others), not so much about their absolute standard, beyond a certain minimum of food, clothing and shelter. People differ in their abilities and interests, and some occupations and behaviors are socially highly rewarded (wealth, power and prestige.) Why these things? In different cultures, different things are rewarded. For example, in a pre-modern pacific island culture, it might be prestigious to be a good fisherman or navigator, or to be the best drummer. Fishing and other subsistence activities might be self-rewarding, but other roles are rewarded in ways that may seem arbitrary to us, but that make sense in that culture. Our understanding of human nature should inform our political and economic ideas of fairness and distributive justice. This debate is politically important because the laws and cultural norms that govern current societies award most positions of wealth, prestige and power to men. If there were no innate sex differences that were relevant to achieving these social rewards, then any inequality of outcomes must be caused purely by arbitrary cultural conventions. That would be obviously unfair.
This inequality persists even in western countries where women legally have the same rights and opportunities as men, where sex discrimination is banned in the workplace, and where politicians that advocate sexist views rarely get elected. Persistent inequality is surely partly because many people in influential positions still believe in distinct gender roles, and judge people as being more or less suited to some job or office based on their sex (sexual discrimination.) However, some of the difference in outcomes is clearly due to choices that women make, such as what career to pursue and whether to suspend their career to raise children. Even if a woman decides on a highly rewarded career and relies on others to take care of her children, she may make different decisions in work–life balance that put her at a competitive disadvantage in getting promotions.

Evolutionary Politics The whole modern world has divided itself into Conservatives and Progressives. The business of Progressives is to go on making mistakes. The business of the Conservatives is to prevent the mistakes from being corrected.


We are politically homeless: we have no political party and don't occupy any recognized political position. To most intellectuals, what we say will seem conservative, since we see the Human Condition as setting limits to the progressive program of social improvement through reason and moral insight. To conservatives, our desire for shaping a new better world may seem like leftist utopianism. The biggest difficulties that social progress faces are: 1. Due to unpredictability and cognitive limitations, it is hard to design a policy that will do only what we want (unintended_consequences), and 2. What we want may be physically or evolutionarily impossible, so we will fail. In hindsight we may come up with a plausible story, but that doesn't mean we actually understand the real obstacle. If loose evolutionary reasoning can really be used to prove anything (so its claims are meaningless), then where are all of the uses of neo-Darwinian reasoning to justify socially progressive theories? The application of evolution to the study of human behavior has led to many results that are just not politically neutral:
● There are powerful human behavioral instincts that social structures must either cater to or work around.
● Self-promoting, selfish, and competitive behavior is ubiquitous.
● Reproduction and family structure are vitally important.
● There are innate differences in male and female behavior resulting from different reproductive roles.
● Punishment and intimidation are vitally important for maintaining mutually beneficial social cooperation.
In all these cases evolutionary argument points toward traditional views being correct, or at least implies that traditional views are likely to be more consistent with innate human motivations than some cleverly devised new way of living and behaving.
The less-developed theory of Darwinian Cultural Evolution has other predictions that seem not so much repugnant as puzzling or nonsensical:
● Existing policies and behavioral norms may be “good” (culturally adaptive) for reasons that are not and were never understood by anyone, any more than a fish understood why it wanted to grow feet.
● Conflict between cultural and biological evolution creates a huge disconnect between what is a good story (justification or explanation) and what people actually do.
Because of this coupling to politics, we have adopted the tactic of not advocating public policy, instead recommending personal policies that you and your friends can adopt to make the world a better place. This reduces the tendency for our views to be pigeon-holed as being motivated by a political viewpoint, but (more subtly) also avoids our being dismissed due to confusion. Our views are confusing because we don't fall at a recognized political position. Evolutionary thinking leads to unfamiliar justifications for recognized policies. For example, in Luxury Fever, Robert Frank argues that work-safety and overtime regulations are desirable not because they prevent employers from exploiting their workers but because they prevent workers from compromising their own interests in a futile desire to get ahead in the rat race.


Groupishness
Groupishness is the social analog of selfishness (Jonathan Haidt's term). Selfish motivations such as greed and lust promote individual survival and reproduction, while groupish motivations such as loyalty and respect promote the survival of our group (and hence of our selves.) See The Righteous Mind.

Hierarchy
The concept of social hierarchy is almost inseparable from the sort of large-scale organization that became possible with the origin of the state. In the modern world, most of us participate in at least two organizational hierarchies (our state and our employer), and often more. While many seem to feel that hierarchy is undesirable because it causes individual oppression and organizational inflexibility, and while newer communication technologies and increased reliance on a specialized market of services (Outsourcing) may make flatter and more fluid organizations possible, we think that some degree of hierarchy is necessary for coordinating most large-scale activities. Although hierarchies are socially constructed in rather arbitrary ways, there are nonetheless good reasons why societies have constructed hierarchies. Our basic argument is that there are inherent semantic benefits of hierarchical organization that explain the prevalence of this pattern.

Who's the Boss?
It is generally agreed that a fundamental aspect of social hierarchy is a clear understanding about who must obey whom—a formal power relation, as in the military Chain of command. The related mathematical concept is the partial ordering. Formally, this says that the can-boss relation is a Transitive relation, like numerical >=. If Joe can-boss Fred, and Fred can-boss Bill, then Joe can-boss Bill. In addition, aside from the fact that everyone can-boss themselves, nobody that we can boss can also boss us (A can-boss B implies B cannot-boss A.) An important subtlety lies in the term partial ordering. There may be people A and B such that neither A can-boss B nor B can-boss A, and in fact this is true for most pairs of individuals in an organizational hierarchy.


[Figure: org_chart_example.png, an example organizational chart]
For example, neither Kelly can-boss Carl nor Carl can-boss Kelly. Usually organizational hierarchy has an additional property not required by the mathematical definition of partial ordering: everyone has exactly one immediate boss, except for a single top boss. Mathematically this says that the organizational chart is a tree. This relates to the vital organizational function of the hierarchy, which is to provide a Decision procedure in any case where it is unclear how to proceed according to standard procedures: “We'd better ask the boss.” In organizational hierarchies it is not true that information is constrained to flow only in the hierarchy. For example, Ed can send a purchasing order to Betty, even though there is no hierarchical relation between them (and Betty might seem to outrank him.) Purchasing is Standard operating procedure, and functions in parallel to the hierarchy.
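These properties can be sketched in code. The org chart below is hypothetical, chosen only to be consistent with the names used in the text (the actual chart in the figure may differ): in a tree-structured chart, A can-boss B exactly when A is an ancestor of B.

```python
# Hypothetical org chart as a tree: each person maps to their one
# immediate boss; the single top boss (Larry) appears in no entry.
boss = {
    "Kelly": "Larry",
    "Carl": "Larry",
    "Betty": "Carl",
    "Ed": "Kelly",
}

def can_boss(a, b):
    """A can-boss B iff A is B or an ancestor of B in the tree."""
    while b is not None:
        if a == b:
            return True
        b = boss.get(b)  # walk up toward the top boss
    return False

# Transitivity: Larry can-boss Carl and Carl can-boss Betty, so
# Larry can-boss Betty.
assert can_boss("Larry", "Betty")
# Partial ordering: most pairs are incomparable.
assert not can_boss("Kelly", "Carl") and not can_boss("Carl", "Kelly")
```

Because the relation is derived from a tree, transitivity and the no-mutual-bossing property hold automatically; the incomparable pairs (like Kelly and Carl, or Betty and Ed) are exactly what make the ordering partial rather than total.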

Rank vs. Organizational Hierarchy
To see the importance of the partial ordering aspect of hierarchy, consider what would happen if Carl came into Kelly's office shouting demands. She might be pretty nervous, but likely she'd clear it with Larry before doing anything. Though Carl is head of purchasing, he cannot-boss Kelly. In the military this is clearer as the distinction between rank and the Chain of command. Formal rank is found mainly inside organizational hierarchies, and is ordinarily subordinated to the chain of command. Rank is a simpler, more impoverished concept that is usefully unsubtle at times. When possible, the military functions through its organizational hierarchy (the chain of command), but rank is also well-defined, and in urgent situations where kicking the decision up the chain of command is not reasonable, soldiers are taught to obey anyone with a higher rank. The term “hierarchy” is often used to mean any formal system for determining can-boss. The rank system does define a partial ordering, and it has the usefully different property of more closely approximating a Total order. Rank defines the can-boss relation between many more pairs of
individuals (any two who differ in rank.) These two hierarchies coexist in the military, and are made compatible by ensuring that rank increases as you go up the chain of command. Even in organizations that lack formal rank there may be some sense of unease with having a boss who seems in some way inferior (such as being younger or having lower seniority.) Note that the Wikipedia Military rank article is itself rather loose about the relationship between rank and hierarchy: “Military rank is a system of hierarchical relationships in armed forces or civil institutions organized along military lines.” This confusion between rank hierarchy and organizational hierarchy is long-standing, and the association of rank with violent oppression and animal dominance leads many to think that hierarchy is bad, and that we should strive to move beyond it. But this ignores the tremendous power of the uniquely human organizational hierarchy to coordinate large-scale cooperative behavior.
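The compatibility condition between the two systems can be illustrated with a small sketch. The names, ranks, and chain of command below are invented for illustration; the check simply verifies that rank never decreases as you move up the chain of command.

```python
# Invented example: rank approximates a total order (any two soldiers of
# different rank are comparable), while can-boss follows the chain of
# command, which is a tree. Higher number = higher rank.
rank = {"Ann": 3, "Bob": 2, "Cal": 2, "Dee": 1}
commander = {"Bob": "Ann", "Cal": "Ann", "Dee": "Bob"}  # immediate commander

def compatible(rank, commander):
    """True if rank never decreases going up the chain of command."""
    return all(rank[commander[x]] >= rank[x] for x in commander)

assert compatible(rank, commander)
# Rank compares many more pairs: Cal outranks Dee even though Cal is
# nowhere in Dee's chain of command.
assert rank["Cal"] > rank["Dee"]
```

The check uses >= rather than > because two people of equal rank can sit at different positions in the chain; what matters is that no one is ever commanded, through the chain, by someone of strictly lower rank.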

Rank and Dominance
A rank hierarchy much more closely matches the concept of an animal Dominance hierarchy than does the partially ordered organizational hierarchy, and this rooting in our ancestry as social animals explains our emotional reaction to rank differences. Although an organizational hierarchy may lack a formal rank system, as social animals we are highly aware of dominance relations, and people gauge their dominance by how high they are in the hierarchy, both in terms of how many levels they are away from the boss, and by how many people they command. Dominance is an ancient system which Genetic-Cultural Coevolution has seized upon as a mechanism to enable the creation of the much more powerful organizational hierarchy. While many have noticed the uncanny similarity of chimpanzee politics to human political maneuvering, this superficial similarity conceals a profound difference. Dominance isn't organizational hierarchy, and it isn't even rank. In a real organization, asserting control is not always as simple as issuing a command, but when a boss is displeased by the organizational response and talks about “bashing heads all the way down the line”, he is speaking figuratively. For chimpanzees, the alpha male can do pretty much whatever he wants, but getting any other individual to obey still rests on the threat of violence, and other than leading by example, commanding any coordinated action is beyond his power.

But wait, there's more
An important aspect of the partial ordering in hierarchy is that the ambiguity of individuals with no can-boss relation (such as Betty and Ed) relates to the economic phenomenon of Division of labour and the conceptual principle of Modularity. These are inextricably tied to the practice of organizational hierarchy, and are the primary reason that hierarchy adds value beyond that achieved by mere rank. Betty and Ed don't need any can-boss relationship because they perform distinct functions in the division of labor. The company has been divided into sub-organizations with distinct functions, and although the organizations do interact in nonhierarchical ways (such as by purchase orders), these interfaces are clearly defined to ensure modularity. So, even though Ed may know Betty ordinarily handles purchase orders, he doesn't just call her up and say what he needs. Instead, he fills out a form and sends it to “the purchasing department.” If Betty happens to be on vacation, it's Carl's job to make sure that the P.O. still gets handled somehow, and Ed doesn't even need to know.

Incentives and Consequences
Should we protect people from the consequences of their actions? Somewhat. The unpredictability of social and economic change (Prediction is Intractable) means that people who made sensible and socially approved choices may find themselves out of luck. Consider workers in the US manufacturing industries. We are also not nearly as perfect in our mental functioning as we imagine (see Positive Illusions), and tiny lapses of attention can have drastic consequences for ourselves or others (consider driving.) We may also choose government policies that protect people from the consequences of others' actions. For example, programs that benefit children of the poor (such as TANF) can be seen as protecting children.
(Topics to develop: positive illusions, unpredictability, consequences to others, compassion, marginal utility of wealth, health costs of inequality, safety net analogy.)

Mind Share
We use mind share to mean the number of people who have adopted a particular cultural variant or Meme in theories of Darwinian Cultural Evolution. In genetic evolution, a genetic variant maximizes its fitness by increasing the number of descendants having that variant (Inclusive fitness), whereas a cultural variant maximizes its fitness by increasing its appeal. This may be an intrinsic appeal of the variant (like a catchy tune), but most often the appeal is context dependent. With Prestige Bias we choose a behavior or belief because it is favored by prestigious early adopters. As a variant becomes common, Conformity Bias becomes important.

Procrastination
An evolutionary theory of procrastination must recognize that procrastination and conscientiousness have to exist in some kind of balance. There's a word for someone who does whatever they're told as soon as they are told they have to do it, and the word is “sucker”. Nearly all of the infinite number of things that it is possible for you to do would waste resources or be counter-productive. This is one reason why time management people tell you to consider long-term goals, and short-term importance and urgency. We like to think of procrastination as yet another example of the Attitude Behavior Gap. A person who is procrastinating is behaving in a way that is inconsistent with some story about
how they “should” be behaving. If they themselves buy into this story, then this can be very upsetting. Not behaving in ways that you “should” behave can also have serious social consequences. Even so, we must consider that someone who “suffers” from procrastination could be acting in an adaptive way. First, there's Social Conflict, especially individual/individual and individual/group conflict. People may come to you and say that you should do something, and even truthfully say that it would benefit the group as a whole, but at some cost to yourself. Social cooperation is overall a win/win situation, and we see it as the fundamental basis of human nature (we are The Cultural Animal). And yet… There is still the awkward matter of your individual interest. You have been given innate motivations to protect your individual interests, and it isn't necessary for you to even be aware how those bits operate (see Intentional Opacity). Another important way of understanding procrastination is in our theory of mind, especially The Interpreter Theory and The Argumentative Theory. Someone who is procrastinating is behaving in a way for which they have no Story as justification. Perhaps this troubles them, and perhaps it doesn't, but in neither case does this lack of story necessarily prevent adaptive behavior. They are doing what they are doing. Although it is a convenient social fiction that people do things for reasons that they properly understand, that doesn't happen to be how the mind actually works. This is socially troublesome, so we have developed moral taboos against hypocrisy and lying. But acting without generating any convincing story doesn't prevent goal-oriented behavior, it only cuts off the possibility of gaining social support. While sometimes we may not bother to generate a story because the real story is unacceptable (individual/group conflict), it can also be useful to act without the burden of having to come up with a story.
We do tend to compulsively make story about whatever we are doing as we go about our day, and this is a useful reflex–a check for whether we are about to do something socially awkward (that cannot be explained in an acceptable way.) But, aside from this peril, making story can also be a waste of time and energy, a distraction from the thing itself that you actually do choose to do (even if you can't explain why, or your explanation is lame). Procrastination is poorly understood by psychology partly because psychology in general fails to appreciate that motivations are a distinct class of mental/behavioral entity, not at all the same as a cognitive ability or emotion. Motivations drive emotions and decide when and where you're going to use your abilities. While it is adaptive to keep your boss happy, that in itself is not going to get your genes passed on. Think of procrastination as a motivation in service of work-life balance. We don't mean to say that procrastination is just about doing what you're told, or that it can't be a genuine problem. The point is that procrastination rests on a motivational foundation that keeps whispering in our ear “I can't believe I have to do this. There's got to be a better way.” People who hear that voice have (often enough) gotten out of doing things or found a better way. Taking a better way when it presents itself can be called “impulsivity”, at least until that other way is so well-proven that it becomes common sense.

Social Conflict


The human condition contains a fundamental tension between the interests of the individual and those of the group. This tension is a consequence of human nature (from our evolutionary history) and of the need for a polity (country, tribe, …) that fosters internal cooperation for mutual benefit, encourages people to follow the rules, and promotes the interests of the polity in competition with other polities.

Individual/Individual Conflict
Evolutionary theory has traditionally emphasized conflict between individuals (nature red in tooth and claw), either between species (predator/prey) or within species (competition for mates.) From a social perspective, the most interesting thing is the development of social structures and behavioral norms that promote within-group cooperation and minimize within-group conflict.

Individual/Group Conflict
It is, of course, foolish for an individual to physically come to blows with a group of individuals, which is why humans seek allies and then fight as groups. The interesting forms of conflict under this heading are the more subtle individual responses to situations where selfish interest may not coincide with the group interest. It is then that the pressures of biological evolution and cultural evolution conflict, and the outcome becomes particularly unclear. One of the human responses to these conflicting pressures is hypocrisy: publicly advocating pro-social behavior (virtue) while privately engaging in selfish or antisocial behavior (vice.) This is more effective than simple cheating (private vice alone) because it simultaneously disarms your potential competitors. All public speech and writings are subject to this pressure toward hypocrisy. This is especially true of the wisdom literature arising from the philosophy and religion of successful cultures. While there is indeed considerable wisdom in traditional values, we must remember that this is in effect pro-social propaganda. It is then no surprise to find that, for example, emotions are classified into “good” and “bad”, with good emotions being those that promote cooperation and self-sacrifice (love your neighbor, courage in battle), and bad emotions being those that advocate for the individual and his or her need to survive and reproduce (anger, fear, pride, greed, lust.) See the seven deadly sins. Humans, and so human society, cannot function without the full range of emotions, but the forces of individual and group selection must remain in some acceptable equilibrium, and social regulatory mechanisms have little need to advocate for the individual, since individuals naturally do that without encouragement. This is why moral education is so one-sided in support of prosocial behavior.
Hypocrisy is often the result, because even the moral leaders are unable to live up to these ideals. Evolutionary discussions of social behavior violate this taboo because evolutionary theory emphasizes the importance of conflict, and also because it predicts that people will find ways to promote their self-interest at the cost of the group. Some find this normalization of selfish behavior shocking. Perhaps so, but evolution describes the world that we actually live in, not the one that we desire.


Group/Group Conflict
Although we will not discuss conflict between groups at great length, it clearly must underlie cultural evolution. Many aspects of the cultures we find ourselves in can only be explained as the result of competition between cultures and polities. We happen to find ourselves in cultures which to some degree value and glorify military prowess, and we happen to find ourselves as individuals curiously well-disposed to the idea of risking death in battle. This is because cultures with those values and technologies, composed of such courageous individuals, out-competed peace-loving cultures of cowardly individuals. Group/group conflict need not take the form of a decisive war. In many cases an aggressive culture with more productive lifeways gradually displaces or absorbs a competing culture.

Social Organization
By definition a social animal has some organized interaction between individuals. In the human condition there are complex social interactions within and between several organizational categories: individual, in-group, family, polity and ethnic culture. See also Hierarchy. We prefer to avoid the term “society” as being too vague, but it can be understood as the sum total of social interactions that an individual participates in. In this definition, no two individuals live in the same society. Part of the reason for deprecating the concept of society is that due to its vagueness, society has no real organizational integrity, therefore can't be said to survive or not survive, and therefore only evolves as a consequence of the evolution of its component parts.

Individual
The individual is the highest level of biological organization, and the smallest visible unit in social organization.

In-Group
We'll use the term in-group to refer to all of the many different social groupings that an individual participates in and identifies with. Family, polity and culture can be considered in-groups, but we will use these more specific terms when we can. Out-group is defined by negation to be everything outside the in-group. In-group is a real social organization—in-groups can thrive or fail, and so are subject to evolutionary pressure. The out-group is a construct of the in-group, and doesn't have any actual social reality; however, the dynamics of in-group/out-group conflict and social enforcement of boundaries between the in-group and out-group seem to be a basic aspect of human nature present at all levels of social organization.

Family


Family is the smallest unit of social organization. Because of the close biological relationships in the family and the crucial role that the family plays in human reproduction, family organization is much more tightly controlled by evolved biological instincts than other social organizations.

Polity
Political organization has to do with power, authority, law, territory, and negotiation or conflict with other political organizations. If we use the term “polity” without further qualification, we mean the highest level of effective political organization, which in ancient times would be the village or tribe, and in modern times is the nation-state.

The State
The state is a type of polity that first arose about 6000 years ago, and is characterized by territorial acquisitiveness, militarism, strict chain of command with a supreme leader, rule of law, taxes, money, grain as the dietary staple, written language and bureaucracy. As such, the state is a cultural idea. States that discovered the power of these cultural practices quickly overwhelmed neighboring non-state polities with their military power and economic productivity, so we now all live in states. This is cultural evolution in action. No individual chose to subordinate him or herself to the state out of enlightened self-interest.

Ethnic Culture
See Culture for a discussion of the broad sense of culture as the sea of evolving ideas and behaviors that individuals navigate. We use ethnic culture to mean “a culture” in the anthropological sense of “there's this tribe where…” From the origin of modern man perhaps 100,000 years ago to about 6000 years ago (when the state appeared), culture was a complete package that you inherited from your tribe. It told you everything you needed to know about how to live and explained everything that mattered. It gave you your language, your dress, your music, your spiritual beliefs, your worldview, your tribal and family organization, and your tools and techniques for getting food, clothing and shelter. Also, due to common ancestry and intermarriage within the culture, you were more related to those in your culture, and so physically resembled them. Because ethnic culture was so comprehensive and so closely aligned with important political and ecological boundaries, it had clear social and geographic boundaries. You knew who you were, and you knew which tribes were “like us” and which tribes weren't. Because of this, ethnic culture formed the largest unit of social organization.

Multi-Polity Cultures
It is important to distinguish between the contributions of culture and polity to society because the boundaries of culture and polity overlap in complex ways. Culture frequently spans multiple polities. In non-state cultures, adjacent villages may have basically the same language, lifeways, behavioral norms, and ethnic traditions in dress, music, etc., but have no higher authority enforcing cooperative behavior. In these cases we often see that cultures evolve behavioral norms such as stylized warfare that minimize the damaging effects of conflict
between polities sharing the same culture. However, conflict with polities from another culture quickly becomes total warfare: there are no shared social norms and the other tribe is demonized as the out-group.

Ethnic Inhomogeneity of States
Polities can also span multiple ethnic cultures, which is particularly likely in states. It is the nature of states to expand and engulf adjacent polities. The state may subordinate existing political structures or replace them entirely, but this is not possible with culture. Political alliances are intrinsically fluid and opportunistic, but we don't choose our culture. There is no cultural chain-of-command to coopt. States impose an official culture from the homeland, invariably emphasizing state-centric cultural values of the rightness, honor and glory of military conquest and obedience to the state, but this is slow going. It is characteristic of the evolution from empire to state that people increasingly learn the state language, identify with the state, adopt new lifeways and state-centric values, and downgrade their previously vital culture to a historic ethnic tradition. Ethnicity is what is left when a culture degenerates to an origin myth. Based on their own experience, Americans, Japanese or Australians may suppose that it is normal for a state to be culturally homogeneous, but this is a peculiar property of states that arose recently from military conquest and the marginalization or genocide of indigenous cultures. The degree of cultural homogeneity seen in Europe is partially a result of the much earlier Indo-European cultural invasion (displacing almost all indigenous languages and imposing military and pastoral values), and also of remnants of the Roman empire: the Catholic church, etc.

Spirit
Up until modern times, almost everyone experienced life in a way that was at least partly spiritual. Cultures settled on beliefs about the origins of the world, the purpose of life, and the causes and reasons for things beyond human control, and these beliefs underlay and reinforced general attitudes about the nature of the human condition. Then, with the Scientific Revolution, science began to offer other explanations of phenomena beyond human control, such as the motion of the planets, and with the Industrial Revolution, the human condition itself was radically transformed by technology, as human control was extended into new realms. Scientific explanations met some of the needs that had been met by religion, and important changes in social conditions and economic lifeways undermined the appeal of religious traditions. At the beginning of the 20th century, many intellectuals thought that God had died and that religion would fade away, no longer being needed. Yet at the same time, there were many signs of ongoing belief in something beyond the merely physical. Especially in the U.S., Christianity adapted, changing emphasis and developing new beliefs and practices. Other movements such as Spiritualism could either complement traditional religions or stand on their own as partial answers. Then we saw the rise of religious fundamentalism and the New Age movement. Even if there weren't always the churches, theology and weekly meetings of traditional western religion, there was clearly no rush to embrace science as a source of life meaning.


Psychological Buddhism
Many practitioners of Buddhist Vipassana meditation refer to it as “a science of the mind.” Buddhist meditators have been practicing a systematic and rigorous examination of the mind for 2500 years, and over this time have arrived at many useful observations about how the mind functions. Buddhist observations of the mind are now being supported by scientific experiments (see contemplative neuroscience). We believe such a long history of consistent experimental evidence has great potential value, particularly as there are many commonalities with the conclusions of science and psychology, such as Cognitive Behavioral Therapy (CBT) and Mindfulness Based Cognitive Therapy (MBCT). It is very common for practitioners of Vipassana to also work with CBT. Buddhism has a great deal to say specifically about “why do people do things that make them unhappy” and “how to overcome suffering”. The term “suffering” is often used interchangeably with “unhappiness” in Buddhist terminology. A famous Buddhist scholar once wrote, “People are in love with the causes of suffering.” The Buddha chose to teach after his enlightenment when he saw that the very things people were doing to make themselves happy were the things that were making them unhappy. The Buddha said, “So long as people cling to the pleasant, reject the unpleasant, and are ignorant of the neutral, there will be no freedom.” It is the clinging to the things that are pleasant, which we will inevitably lose, and the rejection of the things that are unpleasant, which will inevitably occur, that cause us to be unhappy. Only by a simple acceptance of the pleasant and unpleasant alike can we be happy.

Level Map

This diagram is a map of the human condition that emphasizes the levels of reality and of mental processing. The general idea is that adjacent blocks connect in an intimate way and that non-adjacent blocks must communicate through intervening layers unless there is a connector arrow. The large-scale division is into the Physical World, the Body and four layers of Mind. Except for the topmost Storytelling layer this map also describes all mammals (though the functioning of the mental layers 2 and 3 is considerably richer in humans.)


Do not read too much into the specific dividing lines given here, supposing that the implied communication directly corresponds to neural pathways. The brain has complex and diffuse connectivity which (insofar as it is even known) does not lend itself to simple graphical representation. For example, Eye–hand coordination integrates perception, Body Model, Action and intention. The primary goal of this map is to show an organization of mind from the viewpoint of consciousness, and to emphasize the magnitude of unconscious processing. The vertical axis represents a transition from the real and deterministic physical world to the arbitrary and unpredictable mental world, with each layer becoming less real and more unpredictable in its behavior. This appearance of arbitrary and unpredictable behavior with increasing complexity is the process of Emergence. There are also two gradients that (not coincidentally) happen to vary across the mental layers. We are completely unconscious of the workings of the first mental layer, and our conscious awareness gradually increases as the layers go up. Similarly, the data rate decreases from megabits per second at the raw sense data and muscle command interface down to bits per second at the level of conscious symbolic processing. Only one bit in a million makes it into our conscious awareness. See Unconscious.

Physical World
The physical world is the common substrate of everyone's existence, and is the gold standard of what is real. The Reality of any higher levels is a matter of taste and convenience. The physical world is also pretty deterministic at the human scale—we usually know what mindless objects are going to do next (but see Physical Chaos).

Body
The body is part of (submerged in) the physical world, but it does have a well-defined boundary that is defended mechanically, biologically and mentally. We normally think of ourselves and other animals as agents that sense the surrounding environment, form intentions, then act, modifying the environment to achieve their goals (see Intentional Design.) Humans are fundamentally social, so other humans are one of the most important parts of the surrounding environment.

Senses
Though physics may ultimately tell us some limit, the physical world presents us with what is effectively an infinite amount of information, of which our senses encode a vanishingly small part into nerve impulses for mental processing. For example, human vision senses approximately 10 megabits per second, but the physical limit for a human-sized eye is about 1×10^18 times greater (more on sensory limitations.)

Muscles


We use our muscles to modify the environment, and our proprioceptive sense gives us feedback on how we are moving.

Viscera
Via the Autonomic nervous system our brain is constantly sensing and controlling the state of our body's life-support systems. This activity is almost entirely unconscious, but is closely integrated with our emotions.

Brain
In humans the brain is a relatively large and costly organ, so it must be doing something pretty important in keeping us alive and helping us to reproduce. The brain is the physical substrate of the mind, which we show as levels stacked on top.

Mind
We consider mind in a broad sense, meaning any processing of signals or representations done by the nervous system. Once we distinguish mental phenomena from the physical, we create some sort of Mind/Body Dualism. Although all mental processing is a direct consequence of the physical structure of the brain, there is undeniable value in considering mind to be distinct from its physical implementation and in saying that mental events can cause other mental events. We can do this because by design the brain insulates mental events from unwanted physical influences (see Mind.) In the level map we have picked out a handful of mental capabilities and organized them into four levels of increasing conscious awareness (and decreasing bit-rate.) The division into discrete capabilities and levels is of course rather arbitrary; the point is the general trend that our more controllable, more conscious mental capabilities necessarily operate on highly processed and reduced digests of perception.

Layer 1 The first layer (Action, Body Model, Memory and perception) comprises capabilities that are largely a black box to us. We can form an intention to initiate Action, but we ordinarily have very little understanding of how we use our muscles to move or to accomplish tasks. The functioning of our Body Model is usually so intuitive and accurate that we easily fall into the trap of Naive Realism, supposing that we somehow directly know that we have limbs and how they are positioned. But in disorders such as phantom limb syndrome, we see that the sense of having a limb, feeling it move, and feeling it in pain are all perceptions mediated by the Body Model, which can exist in the absence of any actual limb. Although memory and perception may be in some ways more elaborated in humans than in other animals, the general structure of the first layer of mind is largely inherited from our animal ancestors. With both Memory and perception it is important to understand that while we can try to remember something, or direct our attention to something, memory and perception function all the time, and it is well established by numerous lines of cognitive and behavioral research that perceptions or memories we are not consciously aware of can still strongly influence our behavior. Early applications of an information-processing view of mind in cognitive psychology regarded the unconscious information reduction from perception into consciousness as a filter, where that which did not become conscious was simply lost, but it is now clear that unconscious perceptions and memories can influence our behavior, largely through their effects on the second layer.

Layer 2 The second layer of mind (Emotion, judgment, belief) is in many ways the unconscious substrate of consciousness. We could never accomplish anything useful by conscious thought if we did not have our emotions to tell us what is important, or our judgment to reduce diffuse experience into definite opinions and to tell us when we have gone far enough in conscious consideration and it is time to decide. We also rely greatly on our subjective assessments of what is true and of how reliable our knowledge is. The functioning of this level can be described as intuitive: we can be aware of how we feel, we can know what we think should be done, and we can know what we think is true, but our attempts to explain these things are a Story. We do not know all of the countless subtle factors that are weighed by our intuitions. If we are a good storyteller, then we will carefully test our stories against our intuitions, refining them so that they best express our wisdom and experience, but this construction of meaning from experience is a creative process, not a simple matter of introspection. The theory of Representational Opacity goes some way toward explaining why this is so.

Layer 3 Intention and attention are generally regarded as conscious processes in humans, but nonhuman animals also act as though they have intentions (see Intentional Design) and clearly have limited attention which must be directed somehow. While animal experience is surely much different from human experience, it isn't a meaningless anthropomorphism to say: “The raccoon was trying to open the garbage can, and didn't notice when I came outside.” We can consciously direct our attention, but there is clearly a significant bottom-up factor in attention, where a perception “seizes our attention”, and this mechanism surely derives from our unconscious forebears. The situation with intention is particularly confusing because nonconscious animals, computer programs, and even plants can act in an intentional way, yet we see our intention as being the result of consciousness, and may feel that any agent that isn't conscious can't have true intention. This transitions to the next layer, by way of The Interpreter Theory.

Layer 4 The highest layer of mind is the part which seems most clearly conscious, that of narrative: what we are saying out loud, or thinking in some internal dialog. The Interpreter Theory of consciousness accepts that yes, this is indeed conscious, but also that this is largely a Story that we create to explain workings of the mind we don't truly understand (because we have no direct introspection into the intuitive processes the mind depends on.) It is only in the layer of storytelling that we reach a uniquely human capability, and this is above all a social capability (see The Argumentative Theory.) Humans have the capability to explain why they did something or to argue why something should be done because they function in a social context where this is necessary. Non-human animals do need to form intentions to direct their effective action, but have no need to explain these intentions. The human need to explain and persuade has been addressed by evolution through the addition of a new capability, the storyteller (or interpreter.) The storyteller observes what we are doing and comes up with a story about why we are doing it. This story is not entirely a fabrication, because the storyteller does have some privileged access to our mind (mediated by layer 3), but the intuitive and representationally opaque nature of layer 3 means that the storyteller doesn't have all the information. In fact, by a principle of “need to know”, cognitive biases such as confirmation bias ensure that the information made available is precisely the information that is useful for constructing a convincing story. Unhelpful details such as contradictory evidence or conflicts due to self-interest are unconsciously excluded from our awareness.

Other Models This model corresponds in some ways to Dennett's physical, design and intentional stances. Of course, the physical world can in general only be understood from the physical stance. The body is part of the physical world, so the physical stance applies, but as the product of evolution it can also be examined from the design stance. The mind has also evolved, so all mental layers can likewise be examined from the design stance. The intentional stance only applies above the unconscious bottom layer.

