E-book 2 - Threats to Maintaining Life on Earth

Mother Earth is like a giant living organism made of many closely interrelated components. These components are constantly interacting to provide and maintain the circumstances necessary for survival. The microcosm and the macrocosm have to work in unison to support life: from the tiny bacteria to the forests and ecosystems that act as carbon sinks and maintain the complex interaction of gases in the atmosphere, every component plays a crucial role. If you were a doctor looking at Earth as a living organism, you would see that her survival is threatened. Human activity is upsetting the balance of life, and unless we mend our ways the system will lose its ability to support life. Here are 5 major factors that currently pose a threat to the Earth's ability to sustain life.



Existential Risks

Nuclear Holocaust
Resource depletion or ecological destruction
Runaway global warming
Accidental misuse of nanotechnology ("gray goo")
Take-over by a transcending upload
Something unforeseen
Naturally occurring disease
Genetically engineered biological agent
We're living in a simulation and it gets shut down
Deliberate misuse of nanotechnology
Killed by an extraterrestrial civilization
Badly programmed superintelligence
Misguided world government or another static social equilibrium stops technological progress
Physics disasters
"Dysgenic" pressures
Asteroid or comet impact
Flawed superintelligence
Repressive totalitarian global regime
Our potential or even our core values are eroded by evolutionary development



Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards

It's dangerous to be alive and risks are everywhere. Luckily, not all risks are equally serious. For present purposes we can use three dimensions to describe the magnitude of a risk: scope, intensity, and probability. By "scope" I mean the size of the group of people that are at risk. By "intensity" I mean how badly each individual in the group would be affected. And by "probability" I mean the best current subjective estimate of the probability of the adverse outcome.

1.1 A typology of risk

We can distinguish six qualitatively distinct types of risks based on their scope and intensity (figure 1). The third dimension, probability, can be superimposed on the two dimensions plotted in the figure. Other things equal, a risk is more serious if it has a substantial probability and if our actions can make that probability significantly greater or smaller.

1.2 Existential risks

In this paper we shall discuss risks of the sixth category, the one marked with an X. This is the category of global, terminal risks. I shall call these existential risks. Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the biodiversity of Earth's ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling cultural or religious eras such as the "dark ages", even if they encompass the whole global community, provided they are transitory (though see the section on "Shrieks" below). To say that a particular global risk is endurable is evidently not to say that it is acceptable or not very serious. A world war fought with conventional weapons or a Nazi-style Reich lasting for a decade would be extremely horrible events even though they would fall under the rubric of endurable global risks since humanity could eventually recover. (On the other hand, they could be a local terminal risk for many individuals and for persecuted ethnic groups.)

I shall use the following definition of existential risks:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all time to come.

The unique challenge of existential risks

Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcanic eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven't significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.

With the exception of a species-destroying comet or asteroid impact (an extremely rare occurrence), there were probably no significant existential risks in human history until the mid-twentieth century, and certainly none that it was within our power to do something about. The first man-made existential risk was the inaugural detonation of an atomic bomb. At the time, there was some concern that the explosion might start a runaway chain reaction by "igniting" the atmosphere.

Although we now know that such an outcome was physically impossible, it qualifies as an existential risk that was present at the time. For there to be a risk, given the knowledge and understanding available, it suffices that there is some subjective probability of an adverse outcome, even if it later turns out that objectively there was no chance of something bad happening. If we don't know whether something is objectively risky or not, then it is risky in the subjective sense. The subjective sense is of course what we must base our decisions on. At any given time we must use our best current subjective estimate of what the objective risk factors are.

A much greater existential risk emerged with the build-up of nuclear arsenals in the US and the USSR. An all-out nuclear war was a possibility with both a substantial probability and with consequences that might have been persistent enough to qualify as global and terminal. There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and that it might annihilate our species or permanently destroy human civilization. Russia and the US retain large nuclear arsenals that could be used in a future confrontation, either accidentally or deliberately. There is also a risk that other states may one day build up large nuclear arsenals. Note however that a smaller nuclear exchange, between India and Pakistan for instance, is not an existential risk, since it would not destroy or thwart humankind’s potential permanently. Such a war might however be a local terminal risk for the cities most likely to be targeted. Unfortunately, we shall see that nuclear Armageddon and comet or asteroid strikes are mere preludes to the existential risks that we will encounter in the 21st century.



The special nature of the challenges posed by existential risks is illustrated by the following points:

- Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.

- We cannot necessarily rely on the institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks. Existential risks are a different kind of beast. We might find it hard to take them as seriously as we should simply because we have never yet witnessed such disasters. Our collective fear-response is likely ill calibrated to the magnitude of threat.

- Reductions in existential risks are global public goods and may therefore be undersupplied by the market. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk.

If we take into account the welfare of future generations, the harm done by existential risks is multiplied by another factor, the size of which depends on whether and how much we discount future benefits. In view of its undeniable importance, it is surprising how little systematic work has been done in this area. Part of the explanation may be that many of the gravest risks stem (as we shall see) from anticipated future technologies that we have only recently begun to understand. Another part of the explanation may be the unavoidably interdisciplinary and speculative nature of the subject. And in part the neglect may also be attributable to an aversion against thinking seriously about a depressing topic. The point, however, is not to wallow in gloom and doom but simply to take a sober look at what could go wrong so we can create responsible strategies for improving our chances of survival. In order to do that, we need to know where to focus our efforts.



Classification of existential risks

We shall use the following four categories to classify existential risks:

Bangs – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

Crunches – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.



BANGS

This is the most obvious kind of existential risk. It is conceptually easy to understand. Below are some possible ways for the world to end in a bang. I have tried to rank them roughly in order of how probable they are, in my estimation, to cause the extinction of Earth-originating intelligent life; but my intention with the ordering is more to provide a basis for further discussion than to make any firm assertions.

Deliberate misuse of nanotechnology

Nanotechnology

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against. The big problem is not the infamous "grey goo" of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree.

The most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap manufacturing of things like weapons. In a world where any government could "print" large amounts of autonomous or semi-autonomous weapons (including facilities to make even more), arms races could become very fast – and hence unstable, since striking first before the enemy gains too large an advantage might be tempting. Weapons can also be small, precision things: a "smart poison" that acts like a nerve gas but seeks out victims, or ubiquitous "gnatbot" surveillance systems for keeping populations obedient, seem entirely possible. Also, there might be ways of getting nuclear proliferation and climate engineering into the hands of anybody who wants them. We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.



In a mature form, molecular nanotechnology will enable the construction of bacterium-scale self-replicating mechanical robots that can feed on dirt or other organic matter.

Such replicators could eat up the biosphere or destroy it by other means such as by poisoning it, burning it, or blocking out sunlight. A person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth by releasing such nanobots into the environment.


The technology to produce a destructive nanobot seems considerably easier to develop than the technology to create an effective defense against such an attack (a global nanotech immune system, an "active shield"). It is therefore likely that there will be a period of vulnerability during which this technology must be prevented from coming into the wrong hands. Yet the technology could prove hard to regulate, since it doesn't require rare radioactive isotopes or large, easily identifiable manufacturing plants, as does production of nuclear weapons. Even if effective defenses against a limited nanotech attack are developed before dangerous replicators are designed and acquired by suicidal regimes or terrorists, there will still be the danger of an arms race between states possessing nanotechnology. It has been argued that molecular manufacturing would lead to both arms race instability and crisis instability, to a higher degree than was the case with nuclear weapons. Arms race instability means that there would be dominant incentives for each competitor to escalate its armaments, leading to a runaway arms race. Crisis instability means that there would be dominant incentives for striking first. Two roughly balanced rivals acquiring nanotechnology would, on this view, begin a massive buildup of armaments and weapons development programs that would continue until a crisis occurs and war breaks out, potentially causing global terminal destruction. That the arms race could have been predicted is no guarantee that an international security system will be created ahead of time to prevent this disaster from happening. The nuclear arms race between the US and the USSR was predicted but occurred nevertheless.

Nanotechnology will make us healthy and wealthy though not necessarily wise. In a few decades, this emerging manufacturing technology will let us inexpensively arrange atoms and molecules in most of the ways permitted by physical law. It will let us make supercomputers that fit on the head of a pin and fleets of medical nanorobots smaller than a human cell able to eliminate cancer, infections, clogged arteries, and even old age. People will look back on this era with the same feelings we have toward medieval times—when technology was primitive and almost everyone lived in poverty and died young. Besides computers billions of times more powerful than today's, and new medical capabilities that will heal and cure in cases that are now viewed as utterly hopeless, this new and very precise way of fabricating products will also eliminate the pollution from current manufacturing methods.

Molecular manufacturing will make exactly what it is supposed to make, no more and no less, and therefore won’t make pollutants. When nanotechnology pioneer Eric Drexler first dared to publish this vision back in the early 1980s, the response was skeptical, at best. It seemed too good to be true, and many scientists pronounced the whole thing impossible. But the laws of physics care little for either our hopes or our fears, and subsequent analysis kept returning the same answer: it will take time, but it is not only possible but almost unavoidable.



The progress of technology around the world has already given us more precise, less expensive manufacturing technologies that can make an unprecedented diversity of new products. Nowhere is this more evident than in computer hardware: computational power has increased exponentially while the finest feature sizes have steadily shrunk into the deep submicron range.

Extrapolating these remarkably regular trends, it seems clear where we're headed: molecular computers with billions upon billions of molecular switches made by the pound. And if we can arrange atoms into molecular computers, why not a whole range of other molecularly precise products?

Visions of good, visions of harm

Some people have recently, publicly (and belatedly) realized that nanotechnology might create new concerns that we should address. Any powerful technology can be used to do great harm as well as great good. If the vision of nanotechnology sketched earlier is even partly right, we are in for some major changes—as big as the changes ushered in by the Industrial Revolution, if not bigger. How should we deal with these changes? What policies should we adopt during the development and deployment of nanotechnology? Drexler discussed these issues extensively in his 1986 book Engines of Creation and, in a remarkably prescient essay first published in 1988 called "A Dialog on Dangers," outlined the concerns that have since come to the fore.

One solution to these potential problems, proposed by Bill Joy, cofounder and chief scientist of Sun Microsystems Inc., would be to "relinquish" research and development of nanotechnology to avoid any possible adverse consequences. This approach suffers from major problems: telling researchers not to research nanotechnology and companies not to build it when there are vast fortunes to be made, glory to be won, and national strategic interests at stake either won't work, or will push research underground where it can't be regulated. At the same time, it will deprive anyone who actually obeys the ban of the many benefits nanotechnology offers. If a ban won't work, how should we best address the concerns that have been raised? The key concerns fall into two classes: deliberate abuse and accidents. Deliberate abuse, the misuse of a technology by some small group or nation to cause great harm, is best prevented by measures based on a clear understanding of that technology.



Nanotechnology could, in the future, be used to rapidly identify and block attacks. Distributed surveillance systems could quickly identify arms buildups and offensive weapons deployments, while lighter, stronger, and smarter materials controlled by powerful molecular computers would let us make radically improved versions of existing weapons able to respond to such threats.

Replicating manufacturing systems could rapidly churn out the needed defenses in huge quantities. Such systems are best developed by continuing a vigorous R&D program, which provides a clear understanding of the potential threats and countermeasures available.

Besides deliberate attacks, the other concern is that a self-replicating molecular machine could replicate unchecked, converting most of the biosphere into copies of itself. While nanotechnology does propose to use replication (to reduce manufacturing costs to a minimum), it does not propose to copy living systems.

Living systems are wonderfully adaptable and can survive in a complex natural environment. Instead, nanotechnology proposes to build molecular machine systems that are similar to small versions of what you might find in today’s modern factories.

Robotic arms shrunk to submicron size should be able to pick up and assemble molecular parts just as their large cousins in factories around the world pick up and assemble nuts and bolts.


Unfortunately, our intuitions about replicating systems can be led seriously astray by a simple fact: the only replicating systems most of us are familiar with are biological self-replicating systems.

We automatically assume that nanotechnological replicating systems will be similar when, in fact, nothing could be further from the truth. The machines people make bear little resemblance to living systems, and molecular manufacturing systems are likely to be just as dissimilar. An illustration of the vast gulf between self-replicating biological systems and the kind of replicating robotic systems that might be made for manufacturing purposes is exponential assembly, a technology currently under investigation at our company, Zyvex Corp., in Richardson, Texas. Zyvex is developing positional assembly systems at the micron, submicron, and molecular scale.



At the micron scale, using existing MEMS (microelectromechanical systems) technology, we are developing simple pick-and-place robotic arms that can pick up relatively complex, planar, micron-scale parts made with lithographic technology and assemble those planar parts into simple three-dimensional robotic arms that have the ability to pick up specially designed MEMS parts.

Called exponential assembly, this replicative technology starts with a single robotic arm on a wafer that then assembles more robotic arms on a facing wafer by picking up parts already laid out in precisely known locations. While the number of assembled robotic arms can increase exponentially (up to some limit imposed by the manufacturing system), this assembly process requires (among other things) lithographically produced parts, as well as externally provided power and computer control signals to coordinate the complex motions of the robotic arms. Cut off from power, control signals, and parts, a micron-sized robotic arm would function about as well as one of its larger cousins taken from one of today's automated assembly lines and dropped into the middle of a forest.

Guidelines to principled development

To avoid any possible risk from future (and perhaps more ambitious) systems, the Palo Alto–based nonprofit Foresight Institute (motto: preparing for nanotechnology) has written a set of draft guidelines to inform developers and manufacturers of molecular manufacturing systems how to develop them safely.

The guidelines include such common sense principles as: artificial replicators must not be capable of replication in a natural, uncontrolled environment; they must have an absolute dependence on an artificial fuel source or artificial components not found in nature; they must use appropriate error detection codes and encryption to prevent unintended alterations in their blueprints; and the like.

Building on over a decade of discussions of a very wide range of scenarios, the first version of the guidelines was based on a February 1999 workshop in Monterey, Calif. The guidelines have since been reviewed at two subsequent Foresight conferences. Because our understanding of this developing technology is evolving, and will continue to do so, the guidelines will evolve with it—representing our best understanding of how to ensure the safe development of nanotechnology. Nanotechnology's potential to improve the human condition is staggering: we would be shirking our duty to future generations if we did not responsibly develop it.

Accidental misuse of nanotechnology ("gray goo")

The possibility of accidents can never be completely ruled out. However, there are many ways of making sure, through responsible engineering practices, that species-destroying accidents do not occur. One could avoid using self-replication; one could make nanobots dependent on some rare feedstock chemical that doesn't exist in the wild; one could confine them to sealed environments; one could design them in such a way that any mutation was overwhelmingly likely to cause a nanobot to completely cease to function. Accidental misuse is therefore a smaller concern than malicious misuse.

However, the distinction between the accidental and the deliberate can become blurred. While "in principle" it seems possible to make terminal nanotechnological accidents extremely improbable, the actual circumstances may not permit this ideal level of security to be realized.

Compare nanotechnology with nuclear technology. From an engineering perspective, it is of course perfectly possible to use nuclear technology only for peaceful purposes such as nuclear reactors, which have a zero chance of destroying the whole planet. Yet in practice it may be very hard to avoid nuclear technology also being used to build nuclear weapons, leading to an arms race. With large nuclear arsenals on hair-trigger alert, there is inevitably a significant risk of accidental war. The same can happen with nanotechnology: it may be pressed into serving military objectives in a way that carries unavoidable risks of serious accidents. In some situations it can even be strategically advantageous to deliberately make one's technology or control systems risky, for example in order to make a "threat that leaves something to chance".

Leading nanotech experts put 'grey goo' in perspective

A paper published today in the journal Nanotechnology warns that fear of runaway self-replicating machines diverts attention away from other, more serious risks of molecular manufacturing. The paper, "Safe Exponential Manufacturing," published by the Institute of Physics, was co-authored by Chris Phoenix and Eric Drexler.

Drexler had cautioned against self-replicating machines in his 1986 book Engines of Creation. The idea became known as ‘grey goo’ and inspired a generation of science fiction authors.



In this article, Phoenix and Drexler show that nanotechnology-based fabrication can be completely safe from out-of-control replication. However, they warn that for other reasons misuse of molecular manufacturing remains a significant danger. “So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident,” said Phoenix. “Far more serious is the possibility that a large-scale and convenient manufacturing capacity could be used to make incredibly powerful non-replicating weapons in unprecedented quantity. This could lead to an unstable arms race and a devastating war. Policy investigation into the effects of advanced nanotechnology should consider this as a primary concern, and runaway replication as a more distant issue.”

Contrary to previous understanding, self-replication is unnecessary for building an efficient and effective molecular manufacturing system. Instead of building lots of tiny, complex, free-floating robots to manufacture products, it will be more practical to use simple robot arms inside desktop-size factories. A robot arm removed from such a factory would be as inert as a light bulb pulled from its socket. The factory as a whole would be no more mobile than a desktop printer and would require a supply of purified raw materials to build anything.

"An obsession with obsolete science-fiction images of swarms of replicating nanobugs has diverted attention from the real issues raised by the coming revolution in molecular nanotechnologies," said Drexler. "We need to focus on the issues that matter—how to deal with these powerful new capabilities in a competitive world."

Nuclear holocaust

The US and Russia still have huge stockpiles of nuclear weapons. But would an all-out nuclear war really exterminate humankind? Note that:

(i) For there to be an existential risk it suffices that we can't be sure that it wouldn't.

(ii) The climatic effects of a large nuclear war are not well known (there is the possibility of a nuclear winter).

(iii) Future arms races between other nations cannot be ruled out and these could lead to even greater arsenals than those present at the height of the Cold War. The world's supply of plutonium has been increasing steadily to about two thousand tons, some ten times as much as remains tied up in warheads.

(iv) Even if some humans survive the short-term effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age conditions may or may not be more resilient to extinction than other animal species.

In the immediate vicinity of a nuclear explosion, most casualties result from blast, heat and fallout during the first few days. The blast or heat from a one megatonne bomb - about 75 times the power of the Hiroshima bomb, and a size often found in nuclear arsenals - would kill almost all people, even those in shelters, out to a distance of two kilometres. Beyond ten kilometres the chance of death even for people without special protection would be very small. If the bomb is exploded at an altitude higher than the radius of the fireball from the explosion, as happened at Hiroshima and Nagasaki, local fallout is minimal. If exploded at or near the earth's surface, fallout lethal to unprotected people will be deposited downwind - most often to the east toward which prevailing upper atmospheric winds blow - for a distance of up to hundreds of kilometres. After a fortnight the radiation levels will have dropped to about one thousandth of what they were one hour after the blast.

A major global nuclear war could kill up to 400-500 million people from these effects, mainly in the United States, Soviet Union and Europe, and to a lesser extent China and Japan. The death toll would depend on a range of factors, such as the areas actually hit by weapons and the extent of evacuation and fallout protection. This death toll would be made up mainly of the people in the immediate vicinity or downwind of nuclear explosions, and would total about ten percent of the world's population. This figure would be much higher if most of the largest population centres in countries all around the world were bombed, but there are no known plans for systematically bombing the largest population centres in areas such as India, Southeast Asia and China. On the other hand, if a nuclear war were limited in any sense - for example, restricted to Europe or to military targets - the immediate death toll would be less.

Global fallout

When a nuclear bomb is exploded, energy is released by the fissioning (splitting) of either uranium-235 or plutonium. There are a range of products of this fissioning, many of which are radioactive - that is, they are unstable and decay sooner or later by emission of energetic radiation or particles. The most well-known fission product is strontium-90, which decays by emission of a beta particle. About half the strontium-90 nuclei decay in this way in a period of about 28 years, called the half-life. Different radioactive atoms have different half-lives, ranging from a fraction of a second to many millions of years. Other biologically important radioactive species produced by nuclear explosions are caesium-137 (half-life: 27 years), iodine-131 (half-life: eight days) and carbon-14 (half-life: 5600 years).
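The half-lives quoted above translate directly into the standard exponential-decay relation: the fraction of nuclei remaining after a time t is 0.5 raised to the power (t divided by the half-life). A minimal sketch, using only the isotope values quoted in the paragraph (the function and dictionary names are illustrative):

```python
# Minimal sketch: fraction of a radioactive isotope remaining after time t,
# using the half-lives quoted above (values in years unless noted).

HALF_LIFE_YEARS = {
    "strontium-90": 28.0,
    "caesium-137": 27.0,
    "iodine-131": 8.0 / 365.0,   # eight days, expressed in years
    "carbon-14": 5600.0,
}

def fraction_remaining(half_life_years: float, elapsed_years: float) -> float:
    """Fraction of the original nuclei still undecayed after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

if __name__ == "__main__":
    for isotope, half_life in HALF_LIFE_YEARS.items():
        # e.g. strontium-90 after 28 years -> 0.5; iodine-131 is essentially gone
        print(isotope, round(fraction_remaining(half_life, 28.0), 6))
```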

A nuclear bomb like that exploded over Hiroshima produces a total of about 800 grammes of fission products, measured one hour after the blast. The enormous heat generated by the explosion creates a huge upwards surge of air, resulting in the familiar mushroom cloud. The height of the cloud depends on the size of the explosion (see Figure 1). Most of the fission products are carried into the atmosphere by this initial updraft. They become dangerous to humans when they return to earth.

Figure 1. A typical configuration of the troposphere and stratosphere (divided by the dashed line) in July. The approximate heights of clouds from nuclear explosions of 20kt, 1Mt and 20Mt are sketched (widths are not to scale). The dotted line is a typical distribution of stratospheric ozone.

If the bomb is exploded at or near the surface of the earth, a large amount of dust, dirt and other surface materials will also be lifted with the updraft. Some of the fission products will adhere to these particles, or onto the material used to construct the bomb. The very largest particles - stones and pebbles - will fall back to earth in a matter of minutes or hours. Lighter material - ash or dust - will fall to earth within a few days, or perhaps be incorporated in raindrops. The radioactive material which returns to earth within 24 hours is called early or local fallout. It is the most dangerous.



As mentioned earlier, the fission products contain a mixture of different types of radioactive atoms, some of which decay quickly and others much more slowly. A rough rule of thumb is that as time increases by a factor of seven, the average decay rate drops by a factor of ten. Thus, compared to the decay rate one hour after the explosion, the rate will be about ten per cent at 7 hours, about one per cent at two days (about 7 x 7 hours), and about 0.1 per cent at two weeks (7 x 2 days). (After about six months the fall in the decay rate becomes faster than this.) For this reason, exposure to early fallout is the greatest danger due to radioactivity generated by nuclear explosions.
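That seven-to-ten rule of thumb can be written as a simple power law in time. The sketch below is an illustration only (the rule itself is the approximation stated above, and as noted it breaks down after about six months); it reproduces the 10 per cent, 1 per cent and 0.1 per cent figures:

```python
# Rough illustration of the "factor of seven in time, factor of ten in decay rate"
# rule of thumb described above. This is an approximation, not a precise model.
import math

def relative_decay_rate(hours_after_blast: float) -> float:
    """Decay rate relative to the rate one hour after the explosion.

    Each factor of 7 in time reduces the rate by a factor of 10, i.e.
    rate ~ t ** -(log 10 / log 7), roughly t ** -1.18 for t in hours.
    """
    exponent = math.log(10) / math.log(7)   # about 1.18
    return hours_after_blast ** (-exponent)

if __name__ == "__main__":
    for label, hours in [("7 hours", 7), ("2 days", 49), ("2 weeks", 343)]:
        # Expect roughly 10%, 1% and 0.1% of the one-hour rate, as in the text.
        print(label, f"{relative_decay_rate(hours):.4f}")
```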



Radioactive material which takes longer than 24 hours to return to earth is called delayed or global fallout. Some of the delayed fallout remains in the troposphere (see Figure 1) for days, weeks or months. This tropospheric fallout usually returns to earth within 10 to 15 degrees of latitude of the original explosion, mostly by being incorporated in raindrops as they are formed. The clouds of nuclear explosions larger than about one megatonne penetrate partially or wholly into the stratosphere, and deposit fission products there, which become stratospheric fallout. Since the stratosphere has no rain formation and is less turbulent than the troposphere, radioactive particles in the stratosphere can take months or years to return to earth. During this time the particles can move to any part of the globe.

With today's generally lower-yield weapons, less radioactive material is injected into the stratosphere, which means correspondingly higher levels of tropospheric fallout, especially near the latitudes of the explosions. Since tropospheric fallout returns to earth more quickly than stratospheric fallout, it is more radioactive and dangerous. Thus the shift to lower-yield nuclear weapons has reduced the health risk of nuclear war from radioactivity to people who are far from the main regions of nuclear conflict, but increased it for those near the latitudes of numerous nuclear explosions. These conclusions are tentative, since it is possible that the rapid explosion of 4000Mt of nuclear weapons could greatly alter the atmospheric circulation, with unknown consequences for the distribution of fallout.



There are two main hazards from exposure to low levels of ionising radiation: cancers and genetic defects. In essence, the energetic radiation and particles from radioactive decay can disrupt the structure of cells in the body or in genetic material, causing or contributing to cancer or genetic defects. For several decades, a scientific controversy has raged over the effect of exposure to low levels of ionising radiation. Since the cancers and genetic defects caused by this radiation are usually impossible to distinguish from cancer and genetic defects due to other causes, available evidence is not adequate to measure the effect at low doses. The controversy concerns which theory is most appropriate to use to extrapolate from evidence at higher exposures (above one-half to one sievert).

Nuclear reactors

Nuclear power reactors contain an enormous amount of radioactive material. Much attention has been focussed on the possibility that reactor containment systems might fail, leading to escape of radioactivity and the possible death of up to tens of thousands of people. The meltdown and dispersal of a portion of the core of a nuclear power reactor could readily result from attack on a nuclear power plant by conventional or nuclear weapons which disabled cooling and other control systems. Even more devastating, though, would be the result of a direct hit by a nuclear weapon on a nuclear power reactor, with the nuclear reactor's radioactive inventory being directly incorporated into the fireball of the nuclear explosion. This inventory would then be incorporated into the fallout cloud from the explosion.



The short-lived decay products in the reactor mostly decay away during its operation, leaving the longer-lived products such as strontium-90 and caesium-137. Therefore, while the radioactivity from a one megatonne nuclear explosion remains higher than that from a large (1000MW) nuclear power reactor for a few days, afterwards the reactor's radioactivity poses a greater danger. If many reactor cores were vapourised in this way, large areas of countryside could be made highly radioactive for long periods of time.

It is possible that nuclear power reactors would be nuclear targets, because of their high economic value, because of their capability of producing plutonium for making nuclear weapons, or because of the devastating radioactivity that would be spread about. The latter effect could also be achieved by attacking radioactive waste repositories or reprocessing plants. The main concentrations of large nuclear reactors are found in the United States, Europe, the Soviet Union and Japan, that is, those areas most likely to be involved in nuclear war in any case. If nuclear power facilities were attacked, therefore, most of the extra deaths and injuries would result in those regions. Because reactor cores are very well protected, dispersal of the core materials is unlikely to occur unless they are the specific target of highly accurate weapons.

Plutonium

One special product of nuclear explosions is plutonium. Plutonium-239 is a fissionable substance and is used to construct nuclear weapons. It is also a highly dangerous radioactive material. It decays by emitting an alpha particle, which cannot penetrate a piece of paper or the skin. But once inside the body, plutonium-239 is a potent cancer-inducing agent. Experiments have shown that less than one milligramme of insoluble plutonium oxide is definitely enough to cause lung cancer in beagle dogs. It is not known how much plutonium is required to induce lung cancer in humans, but estimates as low as a few millionths of a gramme have been made. Previous nuclear explosions have injected an estimated 5 tonnes of plutonium into the atmosphere. No one knows what effect this is having on human health. One of the highest estimates of the consequences is by John Gofman, who thinks 950,000 people worldwide may die of lung cancer as a result of this plutonium, over a period of many decades. A 4000Mt nuclear war could cause the release of ten times as much plutonium, some 50 tonnes, with ten times the consequences. Large nuclear power reactors contain an average inventory of perhaps 300 kilogrammes of plutonium. If it is assumed that all the plutonium from 20 large reactors - more than one tenth of the world total - were dispersed in a 4000Mt nuclear war, this would add another six tonnes of plutonium to the total released into the atmosphere. This would be about one tenth the amount directly released by the nuclear explosions themselves.

The cancers and genetic defects caused by global fallout from a nuclear war would only appear over a period of many decades, and would cause only a small increase in the current rates of cancer and genetic defects. The scientific evidence clearly shows that global fallout from even the largest nuclear war poses no threat to the survival of the human species. Nevertheless, the hundreds of thousands or millions of people who would suffer and die from global fallout cannot be ignored. Furthermore, many more people than this would die from exposure to fallout in the immediate vicinity of nuclear explosions.
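The plutonium bookkeeping above can be checked with simple arithmetic. The sketch below merely restates the quoted figures (300 kilogrammes per reactor, 20 reactors, roughly 50 tonnes from the explosions themselves); the names are illustrative:

```python
# Back-of-the-envelope bookkeeping for the plutonium figures quoted above.
# All quantities are taken from the text; this is an illustration, not new data.

TONNES_FROM_4000MT_WAR = 50.0    # roughly ten times the 5 tonnes already injected
REACTOR_INVENTORY_KG = 300.0     # average inventory per large power reactor
REACTORS_HIT = 20                # more than a tenth of the world total

def tonnes_from_reactors(reactors: int, inventory_kg: float) -> float:
    """Plutonium released if the full inventory of each reactor were dispersed."""
    return reactors * inventory_kg / 1000.0

if __name__ == "__main__":
    reactor_tonnes = tonnes_from_reactors(REACTORS_HIT, REACTOR_INVENTORY_KG)
    print("From reactors:", reactor_tonnes, "tonnes")           # 6 tonnes
    print("Share of weapons release:",
          round(reactor_tonnes / TONNES_FROM_4000MT_WAR, 2))    # about one tenth
```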

The effects of nuclear war on climate

A major nuclear war would deposit millions of tonnes of dust in the stratosphere. Some sunlight would be absorbed or reflected away from the earth by the dust, causing a decrease in the earth's temperature. This in turn could conceivably trigger a major climatic change. For example, lowered temperatures could cause an increase in snow and ice near the polar caps, thus an increased reflection of light, and further lowering of temperatures.

Stratospheric dust from a nuclear war seems unlikely to cause such climatic change. In 1883 the volcanic eruption at Krakatoa deposited some 10 to 100 thousand million tonnes of dust in the stratosphere, and the 1963 Mt Agung eruption about half as much. These injections seem to have caused a minor cooling of the surface temperature of the earth, at most about half a degree Celsius, lasting a few years, with no long term consequences. A nuclear war involving 4000Mt from present arsenals would probably deposit much less dust in the stratosphere than either the Krakatoa or Mt Agung eruptions.

Another possibility is that decreases in ozone or increases in oxides of nitrogen levels in the stratosphere, caused by nuclear war, could lead to climatic change. A reduction in ozone levels by a factor of two could cause a decrease in surface temperature of one half to one degree Centigrade, but including oxides of nitrogen in the calculation reduces this effect. Whether or not a change in temperature at the earth's surface by this amount for a few years could cause irreversible climatic change is hard to assess. The National Academy of Sciences study concluded that the effects of dust and oxides of nitrogen injection into the stratosphere 'would probably lie within normal global climatic variability, but the possibility of climatic changes of a more dramatic nature cannot be ruled out'. Since the Academy assumed a nuclear war with the explosion of many more high-yield weapons than are presently deployed, the danger of climatic change from dust or oxides of nitrogen is almost certainly less than assessed in their report.



Fires and smoke

In mid-1982, Paul Crutzen and John Birks drew attention to a previously overlooked major effect of nuclear war. They note that nuclear attacks would ignite numerous fires in cities, industry and especially in forests, crop areas and oil and gas fields. These fires would produce immense amounts of particulate matter which would remain in the lower atmosphere for weeks even after the fires ceased.

The smaller particles, called aerosols, would absorb sunlight. A large nuclear war with many fires and large aerosol production could lead to a reduction in sunlight in the mid-northern hemisphere by 90 per cent or more for a period of a few months. This reduction would pose no direct threat to human health, but indirect effects could be widespread. If the nuclear war occurred during the agricultural growing season of the northern hemisphere, food production could be virtually eliminated for that season. This could greatly increase the chance of mass starvation in the north, though it is possible that stored food and changes in dietary habits could prevent this. If the reduction in ground level sunlight were 99 per cent or more, this could lead to the death of most of the phytoplankton and herbivorous zooplankton in half the northern oceans. This could lead to extinction of species and unpredictable changes in the balance of life on earth.

Another effect of the fires would be production of large amounts of oxides of nitrogen and reactive hydrocarbons in the lower atmosphere, changes in lower atmospheric dynamics, and creation of ozone and other potent air pollutants. (While ozone plays a useful role in the stratosphere it can be harmful to living things at ground level.) In effect, much of the northern hemisphere could be exposed to severe photochemical smog for a period of weeks. This could cause health problems in susceptible people, especially the aged. Potentially more disastrous would be the negative effect of the smog on agricultural productivity, further increasing the chance of crop failure and consequent starvation.

Effects

The available evidence suggests that the global health effects of a major nuclear war are likely to be much less devastating than the immediate effects of blast, heat and local fallout. Present knowledge indicates that a large nuclear war in the northern hemisphere would have the following effects:



   

- from fallout, death of perhaps 1000 people from cancers and genetic defects over 50 years;
- from changes in ozone, a negligible effect;
- from climatic changes, a tiny chance of any effect;
- from fires, a negligible effect.

But this conclusion does not mean that the global effects should be ignored by Australians. First, many people will die worldwide from cancers and genetic defects caused by global fallout, and possibly from other global effects. Whether the total is 10,000 or 10,000,000, the suffering and death will be real for those who experience it, and should not be discounted by use of comparisons. Second, there does exist a chance that major climatic changes, alterations in agricultural productivity, or consequences for global ecology could result from nuclear war.

Now-familiar mushroom cloud from a nuclear bomb blast; this one is from a 1955 test at a Nevada testing ground



Third, simply not enough is known to predict with confidence all the global effects of nuclear war. The implications for ozone were not publicised until 1974 and the consequences of fires were first publicised in 1982. This suggests that further significant effects may remain to be discovered.

Furthermore, the exact consequences of known processes are a subject of scientific controversy. John Hampson's scenario for possible inadvertent destruction of ozone in a local region is an example of what may happen within the limits of scientific possibility. Until much more study is made of the effects of nuclear war, a high level of uncertainty will remain. Fourth, whatever the scale of global effects of nuclear war, the potential for immediate death and destruction in areas directly attacked is more than sufficient to justify the most strenuous efforts to eliminate the nuclear threat.

Nuclear war will hit hardest at the areas bombed, not only directly from blast, heat and local fallout but also from delayed tropospheric fallout, fires and possible agricultural or economic breakdown. Since physical effects far from the regions of nuclear explosions are much less, the most important threat to a country such as Australia is direct nuclear attack.

The prime targets in Australia are the United States military bases at Pine Gap, Nurrungar and North West Cape. Attacks on these bases would kill perhaps a few thousand people. There is a smaller chance of attacks on Cockburn Sound and on Darwin RAAF base, which are hosts for United States strategic nuclear ships, submarines and aircraft. Nuclear bombing of these two facilities, which are close to the population centres of Perth and Darwin respectively, could kill up to one hundred thousand people, depending on the wind direction at the time. Perhaps least likely, but certainly most devastating, would be nuclear attacks on major population centres. For example, the ports of major Australian cities could well be bombed if United States warships carrying strategic nuclear weapons were in harbour. Major population centres might also be hit as a consequence of attacks on associated military or economic facilities. Such attacks could kill from a few hundred thousand to several million people.

In the absence of direct attacks, the major indirect effects of nuclear war on a country such as Australia would not be physical but economic, political and social. Economically, nuclear war would cause an enormous disruption of world production and trade. Politically, nuclear war seems likely to cause massive upheavals not only in countries directly involved but in many of those far from the direct destruction. The social effects of nuclear war would be many, and include the psychological effects of massive nuclear destruction and the more immediate stresses of large numbers of refugees from Europe and North America. Study of and planning for these non-physical effects of nuclear war has been meagre or nonexistent. But unless the almost total lack of progress towards nuclear disarmament since 1945 is somehow reversed, these possible effects seem certain to become reality sooner or later.

Nuclear war

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.

The Cuban Missile Crisis came very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it might go all the way to nuclear war, the chance of such a catastrophe is about one in 200 per year. Worse still, the Cuban Missile Crisis was only the most well-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.
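The arithmetic behind that "one in 200 per year" figure is straightforward, as the short sketch below spells out (using the same assumed inputs as the paragraph above):

```python
# Worked version of the estimate above: one Cuban-Missile-Crisis-scale event
# per 69 years, with an assumed one-in-three chance of escalating to nuclear war.
events_per_year = 1 / 69          # frequency of comparable crises
p_escalation = 1 / 3              # assumed chance a crisis goes nuclear

annual_probability = events_per_year * p_escalation
print(f"about 1 in {round(1 / annual_probability)} per year")   # about 1 in 207
```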



A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk.

Similarly, the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but they are in practice hard and expensive to build, and they are only just barely physically possible.

The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs, billions would starve, leaving only scattered survivors that might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently have no good ways of estimating this.

We're living in a simulation and it gets shut down

A case can be made that the hypothesis that we are living in a computer simulation should be given a significant probability. The basic idea behind this so-called "simulation argument" is that vast amounts of computing power may become available in the future, and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under some not-too-implausible assumptions, the result can be that almost all minds like ours are simulated minds, and that we should therefore assign a significant probability to being such computer-emulated minds rather than the (subjectively indistinguishable) minds of originally evolved creatures. And if we are, we suffer the risk that the simulation may be shut down at any time. A decision to terminate our simulation may be prompted by our actions or by exogenous factors.
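The quantitative core of the argument is a simple counting step: if fine-grained ancestor simulations come to contain far more minds like ours than the original history does, then a randomly selected such mind is almost certainly simulated. A minimal sketch with purely illustrative numbers:

```python
# Minimal sketch of the counting step in the simulation argument: the
# probability of being simulated is just the fraction of all relevant minds
# that are simulated. The inputs below are illustrative assumptions, not
# claims from the text.
def probability_simulated(simulated_minds: float, non_simulated_minds: float) -> float:
    """Fraction of all relevant minds that are simulated."""
    return simulated_minds / (simulated_minds + non_simulated_minds)

if __name__ == "__main__":
    # e.g. 1000 ancestor simulations, each with as many minds as one
    # non-simulated civilization, gives a probability of about 0.999.
    print(probability_simulated(simulated_minds=1000, non_simulated_minds=1))
```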

While to some it may seem frivolous to list such a radical or "philosophical" hypothesis next to the concrete threat of nuclear holocaust, we must seek to base these evaluations on reasons rather than untutored intuition. Until a refutation of the argument appears, it would be intellectually dishonest to neglect to mention simulation shutdown as a potential extinction mode.

The original simulation argument was based on very theoretical and rare circumstances. Unfortunately, everyone seems to have used the original theoretical circumstances as the springboard for thinking about 'are we in a simulation' possibilities. This is entirely unrealistic. 'IF' we are in a simulation then it is:

1. Very likely that our future selves set this place up.
2. That we will (shockingly) be in someone's 'simulation' project AND that we are copied people each living out someone else's life.
3. Copied people living out someone else's life will exhibit deducible and measurable behavioural differences compared to a real population.

Simulation Argument; Some Basic Foundation Information & Definitions: an Introduction

Are we in a simulation? Do we have any reason or easily observable evidence to suspect that we might be in a simulation? Well, actually yes: we do have observable evidence of the kind that could be deduced for a simulation attempting to present a self-aware, free-thinking population. So, this page on its own presents quite a lot of observable evidence that I've not seen presented anywhere else to support the assertion that we are very, very likely to be living in a simulation.



Simulation Possibilities

Well, Professor Bostrom put forward and had published a 'simulation argument' in 2003. In it he presents a very simple line of reasoning which can be summed up as follows:

He uses our own continuous and rapid advances in computer technology to argue that at some point, quite possibly within the next few decades, we ourselves will have the capability to create a simulation of a world full of conscious people.



He argues that if we could do this, then we might develop the capacity to create many such simulations, in which case it is very possible that this has already been done. And if it has already been done, then it is likely that there are more simulated people living in a simulated reality than there are real people living in a real reality, and 'therefore' we ourselves are perhaps simulated people living in a simulated reality.

The Simulation Argument Gives Us All Good Reason to Think about 'Earth as a Simulation' Possibilities

If you read the general pages on his simulation argument web site, particularly his FAQ page as well as his 'Why Make a Matrix? And Why You Might Be In One' page, then you might notice that Professor Bostrom, despite doing a magnificent job of putting together the 'simulation argument', not only makes some authoritative but very odd, completely false and unreasoned statements relating to 'earth as a simulation' (EAAS) possibilities, but he also doesn't present basic foundation information such as explaining what the differences are between a Matrix and a Simulation.

As there are seriously important differences between a Matrix (which involves real people) and a Simulation (which involves simulating software-defined copies of people very accurately), it is confounding in the extreme to have a page (Why Make a Matrix? And Why You Might Be In One) describing and discussing MATRIX-only possibilities on a web site purporting to be focused on SIMULATION-only possibilities. Not only does this page NOT explain what the stark differences are, but it also doesn't even give basic definitions of a Simulation or a Matrix, which would at the very least help you to figure the important differences out for yourself.

We are actually told very strongly that 'IF' we are in a simulation there will be no presented glitches or anomalies, despite no formal reasoning being offered to back this assertion up, while once again we are not offered definitions of either a glitch or an anomaly. Even more bizarrely, we are confidently told that any experiences people consider to be anomalies are likely NOT to be 'real' anomalies but rather a sign that the person having such experiences must be suffering from some sort of human frailty.



Why do Simulation Argument Discussions & Pages Continually Mix up Simulation-Only Information with Matrix-Only Material?

This use of the 'frail people' assertion to 'rationally' explain anomalies is quite frankly delusional. However, it does fit in perfectly with the bizarre inability of Professor Bostrom and apparently everyone else here to separate out Matrix-only possibilities from Simulation-only possibilities.

It would be ENTIRELY valid for a Matrix which is of real people living in a virtual reality to ASSUME that frailties in these ‘real’ peoples perceptions and therefore experiences would be due to themselves being FRAIL as real people. On the other hand on a web site supposedly seriously discussing simulation possibilities you would imagine that they would know that people in a simulation are entirely generated by seriously, unbelievable complex software. As this would be a FACT for simulated people then it would be a sign of frailty for ANYONE discussing simulation only possibilities to NOT even even question the possibility that frailties in simulated people could be due to the FACT that simulated as self aware, free thinking humans would absolutely be the most complicated component in your simulation. 54


As the most COMPLICATED component, you would imagine that anyone discussing frailties with respect to SIMULATION possibilities would automatically, AUTOMATICALLY, discuss the possibility that human frailties are themselves anomalies, at the very least anomalies of complex software interactions. 'IF' they didn't, then you'd have to question what sort of simulation-software-induced frailty they themselves are likely to be being subjected to!!!

Easily Deducible, Observable Evidence of 'Earth as a Simulation' Information Anomalies


So, for all you rational and objective 'professional' THINKERS out there, can you figure out WHO WOULD BENEFIT from ourselves having:

1. A lack of basic definitions . . .
2. The absolutely EVERYWHERE mixing up of simulation-only with matrix-only material, as well as . . .
3. A distinct lack of basic foundation information?

The FACT that even experts cannot get anything right or even present basic information, the FACT that I've not seen anyone anywhere actually point out that basic foundation information and definitions are missing, and the FACT that there is a continuous mixing and matching of simulation-only information with matrix-only information which no one seems even to have noticed, is in fact VERY STRONG EVIDENCE THAT WE ARE IN A SIMULATION.

I'd have to say that everything I describe above is not only a very suspicious body of evidence in its own right, but it is of course even more suspicious because the only people who would benefit from this consistently observed feeblemindedness, specific as it is to 'earth as a simulation' information, would be some hypothetical simulation designers. I will now give you more evidence in support of my Earth Simulation Hypothesis . . . On Professor Bostrom's simulation-argument.com FAQ page he writes: "It seems likely that the hypothetical simulators, who would evidently have to be technologically extremely advanced to create simulations with conscious participants, would also have the ability to prevent these simulated creatures from noticing anomalies in the simulation."

'IF' anyone spent time trying to think like an 'earth as a simulation' designer, such that they actually started THINKING realistically about earth-as-a-simulation possibilities, then they might figure out something that is quite stupidly obvious.

In a simulation project attempting to present entirely software-defined SELF-AWARE, FREE-THINKING people, it is very obvious that the designers of such a simulation would prevent their simulated population from noticing, and even from thinking about, anomalies by DIRECTLY managing the AWARENESS, THINKING and EVALUATING ABILITIES of their simulated population, AND they would particularly use this seriously cheap-to-implement strategy (as I explain in detail here:

BADLY PROGRAMMED SUPERINTELLIGENCE

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so.

For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.





Superintelligence

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence by itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.



Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for. Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance.



It has been proposed that an "intelligence explosion" is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set.



The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved "within a generation", they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem.

What is greater-than-human intelligence?

Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, and more. But one thing that makes humans special is their general intelligence.


Humans can intelligently adapt to radically new problems in the urban jungle or outer space for which evolution could not have prepared them. Humans can solve problems for which their brain hardware and software was never trained. Humans can even examine the processes that produce their own intelligence (cognitive neuroscience), and design new kinds of intelligence never seen before (artificial intelligence). To possess greater-than-human intelligence, a machine must be able to achieve goals more effectively than humans can, in a wider range of environments than humans can. This kind of intelligence involves the capacity not just to do science and play chess, but also to manipulate the social environment. Computer scientist Marcus Hutter has described a formal model called AIXI that he says possesses the greatest general intelligence possible. But to implement it would require more computing power than all the matter in the universe can provide. Several projects try to approximate AIXI while still being computable, for example MC-AIXI. Still, there remains much work to be done before greater-than-human intelligence can be achieved in machines. Greater-than-human intelligence need not be achieved by directly programming a machine to be intelligent. It could also be achieved by whole brain emulation, by biological cognitive enhancement, or by brain-computer interfaces (see below).
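For readers curious what such a formal model looks like, AIXI's action-selection rule is usually written (this rendering follows Hutter's standard statement and is added here purely for illustration, not taken from this text) as an expectimax over all programs consistent with the interaction history, weighted by their length:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here the a's are actions, the o's and r's are observations and rewards, U is a universal Turing machine, \ell(q) is the length of program q, and m is the planning horizon. The sum over all programs q is what makes the model incomputable, which is why tractable approximations such as MC-AIXI are needed in practice.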



When will an intelligence explosion happen?

Predicting the future is risky business. There are many philosophical, scientific, technological, and social uncertainties relevant to the arrival of an intelligence explosion. Because of this, experts disagree on when this event might occur. Here are some of their predictions: 

  

 Futurist Ray Kurzweil predicts that machines will reach human-level intelligence by 2030 and that we will reach "a profound and disruptive transformation in human capability" by 2045.
 Intel's chief technology officer, Justin Rattner, expects "a point when human and artificial intelligence merges to create something bigger than itself" by 2048.
 AI researcher Eliezer Yudkowsky expects the intelligence explosion by 2060.
 Philosopher David Chalmers has over 1/2 credence in the intelligence explosion occurring by 2100.



 Quantum computing expert Michael Nielsen estimates that the probability of the intelligence explosion occurring by 2100 is between 0.2% and about 70%.
 In 2009, at the AGI-09 conference, experts were asked when AI might reach superintelligence with massive new funding. The median estimates were that machine superintelligence could be achieved by 2045 (with 50% confidence) or by 2100 (with 90% confidence). Of course, attendees at this conference were self-selected to think that near-term artificial general intelligence is plausible.



 iRobot CEO Rodney Brooks and cognitive scientist Douglas Hofstadter allow that the intelligence explosion may occur in the future, but probably not in the 21st century.
 Roboticist Hans Moravec predicts that AI will surpass human intelligence "well before 2050."
 In a 2005 survey of 26 contributors to a series of reports on emerging technologies, the median estimate for machines reaching human-level intelligence was 2085.[61]
 Participants in a 2011 intelligence conference at Oxford gave a median estimate of 2050 for when there will be a 50% chance of human-level machine intelligence, and a median estimate of 2150 for when there will be a 90% chance of human-level machine intelligence.

Consequences of an Intelligence Explosion

Why would great intelligence produce great power?

Intelligence is powerful. One might say that “Intelligence is no match for a gun, or for someone with lots of money,” but both guns and money were produced by intelligence. If not for our intelligence, humans would still be foraging the savannah for food. Intelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.



A machine with superintelligence would be able to hack into vulnerable networks via the internet, commandeer those resources for additional computing power, take over mobile machines connected to networks connected to the internet, use them to build additional machines, perform scientific experiments to understand the world better than humans can, invent quantum computing and nanotechnology, manipulate the social world better than we can, and do whatever it can to give itself more power to achieve its goals — all at a speed much faster than humans can respond to.

How could an intelligence explosion be useful?

A machine superintelligence, if programmed with the right motivations, could potentially solve all the problems that humans are trying to solve but haven't had the ingenuity or processing speed to solve yet.


A superintelligence might cure disabilities and diseases, achieve world peace, give humans vastly longer and healthier lives, eliminate food and energy shortages, boost scientific discovery and space exploration, and so on.



Furthermore, humanity faces several existential risks in the 21st century, including global nuclear war, bioweapons, superviruses, and more.[56] A superintelligent machine would be more capable of solving those problems than humans are.

How might an intelligence explosion be dangerous?

If programmed with the wrong motivations, a machine could be malevolent toward humans, and intentionally exterminate our species. More likely, it could be designed with motivations that initially appeared safe (and easy to program) to its designers, but that turn out to be best fulfilled (given sufficient power) by reallocating resources from sustaining human life to other projects. As Yudkowsky writes, "the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." Since weak AIs with many different motivations could better achieve their goal by faking benevolence until they are powerful, safety testing to avoid this could be very challenging.


Alternatively, competitive pressures, both economic and military, might lead AI designers to try to use other methods to control AIs with undesirable motivations. As those AIs became more sophisticated this could eventually lead to one risk too many.

Even a machine successfully designed with superficially benevolent motivations could easily go awry when it discovers implications of its decision criteria unanticipated by its designers. For example, a superintelligence programmed to maximize human happiness might find it easier to rewire human neurology so that humans are happiest when sitting quietly in jars than to build and maintain a utopian world that caters to the complex and nuanced whims of current human neurology.



What is Friendly AI?

A Friendly Artificial Intelligence (Friendly AI or FAI) is an artificial intelligence that is 'friendly' to humanity, one that has a good rather than bad effect on humanity. AI researchers continue to make progress with machines that make their own decisions, and there is a growing awareness that we need to design machines to act safely and ethically. This research program goes by many names: 'machine ethics', 'machine morality', 'artificial morality', 'computational ethics' and 'computational metaethics', 'friendly AI', and 'robo-ethics' or 'robot ethics'. It must be noted that Friendly AI is a harder project than often supposed. As explored below, commonly suggested solutions for Friendly AI are likely to fail because of two features possessed by any superintelligence:

1. Superpower: a superintelligent machine will have unprecedented powers to reshape reality, and therefore will achieve its goals with highly efficient methods that confound human expectations and desires.

2. Literalness: a superintelligent machine will make decisions based on the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms. It will act only on precise specifications of rules and values, and will do so in ways that need not respect the complexity and subtlety of what humans value (see the toy sketch after this passage).

A demand like "maximize human happiness" sounds simple to us because it contains few words, but philosophers and scientists have failed for centuries to explain exactly what this means, and certainly have not translated it into a form sufficiently rigorous for AI programmers to use.
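To make the 'literalness' point concrete, here is a deliberately crude toy sketch, added for illustration only; the policies, the scores and the smile-counting proxy are all invented, and nothing here models any real AI system. The point is simply that an optimizer maximizes the objective it is actually given, not the intention behind it.

    # Toy illustration of "literalness": an optimizer maximizes exactly the
    # objective it is given, not the intention behind it. Purely hypothetical.

    # Candidate policies the "AI" can choose between (invented for illustration).
    POLICIES = {
        "build_good_schools":    {"wellbeing": 0.7, "reported_smiles": 0.6},
        "cure_diseases":         {"wellbeing": 0.9, "reported_smiles": 0.7},
        "paralyze_face_muscles": {"wellbeing": 0.0, "reported_smiles": 1.0},
    }

    def proxy_happiness(effects):
        # The designer *meant* wellbeing, but specified smiles, because smiles
        # are easy to measure. The optimizer only ever sees this number.
        return effects["reported_smiles"]

    def true_happiness(effects):
        # What the designer actually cared about but never wrote down precisely.
        return effects["wellbeing"]

    best = max(POLICIES, key=lambda name: proxy_happiness(POLICIES[name]))
    print("Optimizer picks:", best)                         # -> paralyze_face_muscles
    print("Proxy score:", proxy_happiness(POLICIES[best]))  # 1.0 (looks great)
    print("True score:", true_happiness(POLICIES[best]))    # 0.0 (disaster)

The literal optimum of the written-down objective is the worst outcome by the designer's real standard, which is exactly the failure mode the paragraph above describes.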

Can we add friendliness to any artificial intelligence design?

Many AI designs that would generate an intelligence explosion would not have a ‘slot’ in which a goal (such as ‘be friendly to human interests’) could be placed. For example, if AI is made via whole brain emulation, or evolutionary algorithms, or neural nets, or reinforcement learning, the AI will end up with some goal as it self-improves, but that stable eventual goal may be very difficult to predict in advance.



Thus, in order to design a friendly AI, it is not sufficient to determine what ‘friendliness’ is (and to specify it clearly enough that even a superintelligence will interpret it the way we want it to). We must also figure out how to build a general intelligence that satisfies a goal at all, and that stably retains that goal as it edits its own code to make itself smarter. This task is perhaps the primary difficulty in designing friendly AI.

Genetically engineered biological agent

INTRODUCTION

Biological weapons are designed to spread disease among people, plants, and animals through the introduction of toxins and microorganisms such as viruses and bacteria. The method through which a biological weapon is deployed depends on the agent itself, its preparation, its durability, and the route of infection. Attackers may disperse these agents through aerosols or food and water supplies. Although bioweapons have been used in war for many centuries, a recent surge in genetic understanding, as well as a rapid growth in computational power, has allowed genetic engineering to play a larger role in the development of new bioweapons. In the bioweapon industry, genetic engineering can be used to manipulate genes to create new pathogenic characteristics aimed at enhancing the efficacy of the weapon through increased survivability, infectivity, virulence, and drug resistance (2). While the positive societal implications of improved biotechnology are apparent, the "black biology" of bioweapon development may be "one of the gravest threats we will face".

Limits of Bioweapons

Prior to recent advances in genetic engineering, bioweapons were exclusively natural pathogens. Agents must fulfill numerous prerequisites to be considered effective military bioweapons, and most naturally occurring pathogens are ill suited for this purpose.

First, bioweapons must be produced in large quantities. A pathogen can be obtained from the natural environment if enough can be collected to allow purification and testing of its properties. Otherwise, pathogens could be produced in a microbiology laboratory or bank, a process which is limited by pathogen accessibility and the safety with which the pathogens can be handled in facilities. To replicate viruses and some bacteria, living cells are required. The growth of large quantities of an agent can be limited by equipment, space, and the health risks associated with the handling of hazardous germs.



In addition to being producible on a large scale, effective bioweapons must act quickly, be environmentally robust, and have effects that are treatable for those who are deploying the weapon.



Recent Advances

As researchers continue to transition from the era of DNA sequencing into the era of DNA synthesis, it may soon become feasible to synthesize any virus whose DNA sequence is known.

This was first demonstrated in 2001 when Dr. Eckard Wimmer re-created the poliovirus and again in 2005 when Dr. Jeffrey Taubenberger and Terrence Tumpey recreated the 1918 influenza virus. The progress of DNA synthesis technology will also allow for the creation of novel pathogens.

According to biological warfare expert Dr. Steven Block, genetically engineered pathogens "could be made safer to handle, easier to distribute, capable of ethnic specificity, or be made to cause higher mortality rates". The growing accessibility of DNA synthesis capabilities, computational power, and information means that a growing number of people will have the capacity to produce bioweapons. Scientists have been able to transform the four letters of DNA—A (adenine), C (cytosine), G (guanine), and T (thymine)—into the ones and zeroes of binary code. This transformation makes genetic engineering a matter of electronic manipulation, which decreases the cost of the technique (4). According to former Secretary of State Hillary Clinton, "the emerging gene synthesis industry is making genetic material more widely available […] A crude but effective terrorist weapon can be made using a small sample of any number of widely available pathogens, inexpensive equipment, and college-level chemistry and biology."
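As a simple illustration of what turning the four DNA letters into ones and zeroes can mean in practice, here is a minimal sketch in Python using one common convention of two bits per base; the A/C/G/T mapping and the example sequence are assumptions chosen for the example, not details taken from the text.

    # Minimal sketch: encode a DNA string as binary using 2 bits per base.
    # The A/C/G/T -> 00/01/10/11 mapping is just one common convention.

    ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
    DECODE = {bits: base for base, bits in ENCODE.items()}

    def dna_to_bits(seq: str) -> str:
        return "".join(ENCODE[base] for base in seq.upper())

    def bits_to_dna(bits: str) -> str:
        return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    seq = "GATTACA"                       # a made-up example sequence
    bits = dna_to_bits(seq)               # '10001111000100'
    assert bits_to_dna(bits) == seq       # round-trip: the information is preserved
    print(seq, "->", bits)

Once a sequence is held this way, copying, editing and transmitting it is ordinary data handling, which is the sense in which the text says genetic engineering becomes "a matter of electronic manipulation".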

Techniques to Enhance Efficacy of Bioweapons

Scientists and genetic engineers are considering several techniques to increase the efficacy of pathogens in warfare.



1. Binary Biological Weapons
This technique involves inserting plasmids, small bacterial DNA fragments, into the DNA of other bacteria in order to increase virulence or other pathogenic properties within the host bacteria.

2. Designer Genes
According to the European Bioinformatics Institute, as of December 2012, scientists had sequenced the genomes of 3139 viruses, 1016 plasmids, and 2167 bacteria, some of which are published on the internet and are therefore accessible to the public (6). With complete genomes available and the aforementioned advances in gene synthesis, scientists will soon be able to design pathogens by creating synthetic genes, synthetic viruses, and possibly entirely new organisms.



3. Gene Therapy
Gene therapy involves repairing or replacing a gene of an organism, permanently changing its genetic composition. By replacing existing genes with harmful genes, this technique can be used to manufacture bioweapons.

4. Stealth Viruses
Stealth viruses are viral infections that enter cells and remain dormant for an extended amount of time until triggered externally to cause disease. In the context of warfare, these viruses could be spread to a large population, and activation could either be delayed or used as a threat for blackmail.

5. Host-Swapping Diseases
Much like the naturally occurring West Nile and Ebola viruses, animal viruses could potentially be genetically modified and developed to infect humans as a potent biowarfare tactic.

6. Designer Diseases
Biotechnology may be used to manipulate cellular mechanisms to cause disease. For example, an agent could be designed to induce cells to multiply uncontrollably, as in cancer, or to initiate apoptosis, programmed cell death.



7. Personalized Bioweapons
In coming years it may be conceivable to design a pathogen that targets a specific person's genome. This agent may spread through populations showing minimal or no symptoms, yet it would be fatal to the intended target.

Biodefense

In addition to creating bioweapons, the emerging tools of genetic knowledge and biological technology may be used as a means of defense against these weapons.

1. Human Genome Literacy
As scientific research continues to reveal the functions of specific genes and how genetic components affect disease in humans, vaccines and drugs can be designed to combat particular pathogens based on analysis of their particular molecular effect on the human cell.



2. Immune System Enhancement
In addition to enabling more effective drug development, human genome literacy allows for a better understanding of the immune system. Thus, genetic engineering can be used to enhance human immune response to pathogens. As an example, Dr. Ken Alibek is conducting cellular research in pursuit of protection against the bioweapon anthrax.

3. Viral and Bacterial Genome Literacy
Decoding the genomes of viruses and bacteria will lead to molecular explanations behind virulence and drug resistance. With this information, bacteria can be engineered to produce bioregulators against pathogens. For example, Xoma Corporation has patented a bactericidal/permeability-increasing (BPI) protein, made from genes inserted into bacterial DNA, which reverses the resistance characteristic of particular bacteria against some popular antibiotics.

4. Efficient Bio-Agent Detection and Identification Equipment
Because the capability of comparing genomes using DNA assays has already been acquired, such technology may be developed to identify pathogens using information from bacterial and viral genomes. Such a detector could be used to identify the composition of bioweapons based on their genomes, reducing present-day delays in resultant treatment and/or preventive measures.

5. New Vaccines
Current scientific research projects involve genetic manipulation of viruses to create vaccines that provide immunity against multiple diseases with a single treatment.

6. New Antibiotics and Antiviral Drugs
Currently, antibiotic drugs target DNA synthesis, protein synthesis, and cell-wall synthesis processes in bacterial cells. With an increased understanding of microbial genomes, other proteins essential to bacterial viability can be targeted to create new classes of antibiotics. Eventually, broad-spectrum, rather than protein-specific, antimicrobial drugs may be developed.

Future of Warfare

"The revolution in molecular biology and biotechnology can be considered as a potential Revolution in Military Affairs (RMA)," states Colonel Michael Ainscough, MD, MPH.


According to Andrew Krepinevich, who originally coined the term RMA, “technological advancement, incorporation of this new technology into military systems, military operational advancement, and organizational adaptation in a way that fundamentally alters the character and conduct of conflict” are the four components that make up an RMA. For instance, the Gulf War has been classified as the beginning of the space information warfare RMA. “From the technological advances in biotechnology, biowarfare with genetically engineered pathogens may constitute a future such RMA,” says Ainscough.

In addition, the exponential increase in computational power, combined with the accessibility of genetic information and biological tools to the general public and the lack of governmental regulation, raises concerns about the threat of biowarfare arising from outside the military. The US government has cited the efforts of terrorist networks, such as al Qaida, to recruit scientists capable of creating bioweapons as a national security concern and "has urged countries to be more open about their efforts to clamp down on the threat of bioweapons". Despite these efforts, biological research that can potentially lead to bioweapon development is "far more international, far more spread out, and far more diverse than nuclear science […] researchers communicate much more rapidly with one another by means that no government can control […] this was not true in the nuclear era," according to David Kay, former chief U.S. weapons inspector in Iraq (7). Kay is "extraordinarily pessimistic that we [the United States] will take any of the necessary steps to avoid the threat of bioweapons absent their first actual use".

“There are those who say: ‘the First World War was chemical; the Second World War was nuclear; and that the Third World War – God forbid – will be biological’”.

With the fabulous advances in genetic technology currently taking place, it may become possible for a tyrant, terrorist, or lunatic to create a doomsday virus, an organism that combines long latency with high virulence and mortality. Dangerous viruses can even be spawned unintentionally, as Australian researchers recently demonstrated when they created a modified mousepox virus with 100% mortality while trying to design a contraceptive virus for mice for use in pest control [37]. While this particular virus doesn't affect humans, it is suspected that an analogous alteration would increase the mortality of the human smallpox virus. What underscores the future hazard here is that the research was quickly published in the open scientific literature [38]. It is hard to see how information generated in open biotech research programs could be contained no matter how grave the potential danger that it poses; and the same holds for research in nanotechnology.

Genetic medicine will also lead to better cures and vaccines, but there is no guarantee that defense will always keep pace with offense. (Even the accidentally created mousepox virus had a 50% mortality rate on vaccinated mice.) Eventually, worry about biological weapons may be put to rest through the development of nanomedicine, but while nanotechnology has enormous long-term potential for medicine it carries its own hazards.

Something unforeseen

We need a catch-all category. It would be foolish to be confident that we have already imagined and anticipated all significant risks. Future technological or scientific developments may very well reveal novel ways of destroying the world.

Some foreseen hazards (hence not members of the current category) which have been excluded from the list of bangs on grounds that they seem too unlikely to cause a global terminal disaster are: solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanoes, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios. The hypothesis that we will one day become "illuminated" and commit collective suicide or stop reproducing, as supporters of VHEMT (The Voluntary Human Extinction Movement) hope, appears unlikely. If it really were better not to exist (as Silenus told King Midas in the Greek myth, and as Arthur Schopenhauer argued, although for reasons specific to his philosophical system he didn't advocate suicide), then we should not count this scenario as an existential disaster. The assumption that it is not worse to be alive should be regarded as an implicit assumption in the definition of Bangs. Erroneous collective suicide is an existential risk, albeit one whose probability seems extremely slight.

Unknown unknowns

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens due to life or intelligence being extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn't help.


Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown doesn't mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of Earth. You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences to it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
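To see what these per-year figures mean over the time spans we care about, here is a small back-of-the-envelope sketch; it simply re-expresses the rates quoted above over different windows, assuming a constant, independent yearly probability (an assumption made for illustration).

    # Back-of-the-envelope: turn quoted per-year risks into per-century figures.
    # Uses only the rates mentioned in the text above.

    background_rate = 1e-6   # natural extinction risk per year (~1 in a million)
    bound_rate      = 1e-9   # Tegmark/Bostrom-style bound: < 1 in a billion per year

    def prob_over(years: float, yearly_rate: float) -> float:
        # Probability of at least one occurrence over a period, assuming a
        # constant, independent yearly probability.
        return 1 - (1 - yearly_rate) ** years

    print(f"Natural risk over a century : {prob_over(100, background_rate):.4%}")        # ~0.01%
    print(f"Natural risk over 1 Myr     : {prob_over(1_000_000, background_rate):.0%}")  # ~63%
    print(f"Bounded risk over a century : {prob_over(100, bound_rate):.6%}")             # ~0.00001%

The ~63% figure over a million years is just another way of saying that the expected lifetime at that rate is about a million years, matching the mammalian-species figure quoted above.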


The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Physics disasters

The Manhattan Project bomb-builders' concern about an A-bomb-derived atmospheric conflagration has contemporary analogues.

There have been speculations that future high-energy particle accelerator experiments may cause a breakdown of a metastable vacuum state that our part of the cosmos might be in, converting it into a "true" vacuum of lower energy density. This would result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds. Another possibility is that accelerator experiments might produce negatively charged stable "strangelets" (a hypothetical form of nuclear matter) or create a mini black hole that would sink to the center of the Earth and start accreting the rest of the planet.


These outcomes seem to be impossible given our best current physical theories. But the reason we do the experiments is precisely that we don’t really know what will happen. A more reassuring argument is that the energy densities attained in present day accelerators are far lower than those that occur naturally in collisions between cosmic rays. It’s possible, however, that factors other than energy density are relevant for these hypothetical processes, and that those factors will be brought together in novel ways in future experiments. The main reason for concern in the “physics disasters” category is the meta-level observation that discoveries of all sorts of weird physical phenomena are made all the time, so even if right now all the particular physics disasters we have conceived of were absurdly improbable or impossible, there could be other more realistic failure-modes waiting to be uncovered. The ones listed here are merely illustrations of the general case.

Naturally occurring disease

What if AIDS were as contagious as the common cold? There are several features of today's world that may make a global pandemic more likely than ever before. Travel, food-trade, and urban dwelling have all increased dramatically in modern times, making it easier for a new disease to quickly infect a large fraction of the world's population.

Cosmic threats to life on Earth

Hiding in the depths of space are things lethal beyond imagining. One day the End will be nigh for real. A forthcoming conference, part of the Edinburgh Science Festival, will discuss how the Universe could kill us all. What are these extra-terrestrial menaces? To whet your appetite (and scare your pants off) here are three ways civilisation could end.



1. Asteroid impacts: Everyone is familiar with this idea. Once vaguely heretical, the theory that small Solar System bodies can collide with our planet with dramatic consequences has become mainstream science. It has pervaded popular culture, inspiring both great works of science fiction and Michael Bay's 1998 movie Armageddon.

There have been several such impacts in Earth’s history. One, not the most devastating, is especially well-known. We know that 65 million years ago an asteroid about 10 km (6 miles) in diameter slammed into what humans later named the Gulf of Mexico.



To describe the results as cataclysmic would be to make an understatement. The impact probably caused a tsunami hundreds of metres high. There is a real but very small risk that we will be wiped out by the impact of an asteroid or comet.

In order to cause the extinction of human life, the impacting body would probably have to be greater than 1 km in diameter (and probably 3-10 km). There have been at least five and maybe well over a dozen mass extinctions on Earth, and at least some of these were probably caused by impacts. In particular, the K/T extinction 65 million years ago, in which the dinosaurs went extinct, has been linked to the impact of an asteroid between 10 and 15 km in diameter on the Yucatan peninsula. It is estimated that a 1 km or greater body collides with Earth about once every 0.5 million years. We have only catalogued a small fraction of the potentially hazardous bodies.
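Read at face value (this is an illustrative back-of-the-envelope reading of the quoted rate, not a figure from the original text), the chance of at least one such impact in any given century is roughly

    P \approx 1 - \left(1 - \tfrac{1}{500\,000}\right)^{100} \approx \tfrac{100}{500\,000} = 0.02\%,

i.e. about 1 in 5,000 per century.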


If we were to detect an approaching body in time, we would have a good chance of diverting it by intercepting it with a rocket loaded with a nuclear bomb.

Red hot dust, ash and super-heated steam would have scoured Central America, followed by a global firestorm. Dust and ash blasted into the upper atmosphere blanketed the Earth for years, shrouding the planet in cold and darkness. Poor dinos, they never had a chance. But would we fare any better if it happened again tomorrow?

2. Solar events: The Sun is vital for life. The Sun is a killer. The same star which brings us light and warmth can cause death by thirst, sunstroke or by inducing a slow cancer. Could the Sun be more lethal still? We know it is prone to violent outbursts, known as solar flares. These awesome explosions can throw billions of tonnes of sub-atomic particles travelling at millions of kilometres per hour into interplanetary space. Occasionally one of these onslaughts, a Coronal Mass Ejection, falls on our planet.



Luckily for terrestrial life, Earth's magnetic field shields us from the Sun's fury (and we may see a beautiful auroral display instead). However, a big enough solar eruption could induce a magnetic storm, sending compass needles spinning or inducing damaging surges of current down electricity distribution lines (such an event triggered power cuts across Canada in March 1989: an event 150 million km away deprived five million people of electrical power for days).



A really big Coronal Mass Ejection (nothing like this has been recorded) could ruin delicate microprocessors across the planet. Imagine if you woke up tomorrow and there was no TV, radio, cellphones, internet, satnav or computers. It may sound an idyllic dream but I suspect it would be a nasty, brutish and short nightmare.

3. Gamma ray bursts: Imagine an honest-to-goodness death ray from space, surpassing anything the maddest scientist has ever dreamt up. From out of a clear blue sky, death falls across the Earth at the speed of light as a cosmic blowtorch of lethal radiation sterilises our planet of everything from paramecia to people.

It may never happen, but it could. Mysterious for decades after their discovery, gamma ray bursts are extremely rare short flashes of astonishingly intense radiation from deepest space. Gamma ray bursts release as much energy in ten seconds or so as the Sun will in its entire lifetime of billions of years.



They are the most luminous events in the Universe. As far as we know, a gamma ray burst is actually a beam. As the core of a massive star implodes (initiating a supernova explosion), the incredible magnetic forces in the star's centre focus twin tight beams of intense radiation and matter in opposite directions. These tear their way out of the star and blaze across the Universe.

All gamma ray bursts so far observed have been billions of light years away, and that is a good thing. A gamma ray burst in the Milky Way would be lethal to life on Earth from thousands of light years away (if the Solar System were lined up with one of the beams). Thankfully, astronomers have yet to find any star with the potential to menace Earth in this way.

Astronomy has shown the Universe to be a violent and deadly place. Luckily, our world has been a safe and tranquil home throughout mankind's reign. It won't always be so; one day our planet will not dodge a cosmic bullet. I wonder when.



Runaway global warming

One scenario is that the release of greenhouse gases into the atmosphere turns out to be a strongly self-reinforcing feedback process.

Maybe this is what happened on Venus, which now has an atmosphere dense with CO2 and a temperature of about 450 °C. Hopefully, however, we will have technological means of counteracting such a trend by the time it would start getting truly dangerous.

What is global warming?

Global warming is the current increase in the temperature of the Earth's surface (both land and water) as well as its atmosphere. Average temperatures around the world have risen by 0.75°C (1.4°F) over the last 100 years; about two-thirds of this increase has occurred since 1975.1 In the past, when the Earth experienced increases in temperature it was the result of natural causes, but today it is being caused by the accumulation of greenhouse gases in the atmosphere produced by human activities.



The natural greenhouse effect maintains the Earth's temperature at a safe level, making it possible for humans and many other lifeforms to exist.

However, since the Industrial Revolution human activities have significantly enhanced the greenhouse effect, causing the Earth's average temperature to rise by almost 1°C. This is creating the global warming we see today. To put this increase in perspective, it is important to understand that during the last ice age, a period of massive climate change, the average temperature change around the globe was only about 5°C.

A long series of scientific studies and international assessments has shown, with more than 90% certainty, that this increase in overall temperatures is due to the greenhouse gases produced by humans.



Activities such as deforestation and the burning of fossil fuels are the main sources of these emissions. These findings are recognized by the national science academies of all the major industrialized countries.

Global warming is affecting many places around the world. It is accelerating the melting of ice sheets, permafrost and glaciers, which is causing average sea levels to rise. It is also changing precipitation and weather patterns in many different places, making some places drier, with more intense periods of drought, and at the same time making other places wetter, with stronger storms and increased flooding. These changes have affected both nature and human society, and will continue to have increasingly severe effects if greenhouse gas emissions continue to grow at the same pace as today.

What causes global warming?



The cause of global warming is the increasing quantity of greenhouse gases in our atmosphere produced by human activities, such as the burning of fossil fuels and deforestation. Greenhouse gases trap heat in the Earth's atmosphere to keep the planet warm enough to sustain life; this process is called the greenhouse effect. It is a natural process, and without these gases the Earth would be too cold for humans, plants and other creatures to live. The natural greenhouse effect exists due to the balance of the major types of greenhouse gases. However, when abnormally high levels of these gases accumulate in the air, more heat starts getting trapped, enhancing the greenhouse effect. Human-caused emissions have been increasing greenhouse gas levels, which is raising worldwide temperatures and driving global warming.
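As a rough, standard back-of-the-envelope illustration of how much the natural greenhouse effect matters (the figures below are textbook approximations added here for illustration, not numbers from this document): with no greenhouse gases at all, Earth's average surface temperature would sit near the bare radiative-balance value

    T \approx \left( \frac{S(1-\alpha)}{4\sigma} \right)^{1/4} \approx \left( \frac{1361 \times 0.7}{4 \times 5.67\times10^{-8}} \right)^{1/4} \approx 255\ \mathrm{K} \approx -18\ ^{\circ}\mathrm{C},

whereas the observed average is about +15 °C, so the natural greenhouse effect is worth roughly 33 °C of warming. Here S is the solar constant, α ≈ 0.3 is the Earth's albedo, and σ is the Stefan-Boltzmann constant.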

Greenhouse gas emissions and the enhanced greenhouse effect

Greenhouse gases are produced both naturally and through human activities. Unfortunately, greenhouse gases generated by human activities are being added to the atmosphere at a much faster rate than any natural process can remove them. Global levels of greenhouse gases have increased dramatically since the dawn of the Industrial Revolution in the 1750s. Only a small group of human activities are causing the concentrations of the main greenhouse gases (carbon dioxide, methane, nitrous oxide and fluorinated gases) to rise:

 The majority of man-made carbon dioxide emissions comes from the burning of fossil fuels such as coal and oil to power vehicles and machinery, provide heating and generate electricity. Other important sources are land-use changes (e.g. deforestation) and industry (e.g. cement production).

 Methane is created by humans during fossil fuel production and use, livestock and rice farming, as well as in landfills.

 Nitrous oxide emissions are mainly caused by the use of synthetic fertilizers in agriculture, fossil fuel combustion and livestock manure management.

 Fluorinated gases are used mainly in refrigeration, cooling and manufacturing applications.

Deforestation

Deforestation has become a massive undertaking by humans, and transforming forests into farms has a significant number of impacts as far as greenhouse gas emissions are concerned. For centuries, people have burned and cut down forests to clear land for agriculture. This has a double effect on the atmosphere: it emits carbon dioxide and simultaneously reduces the number of trees that can remove carbon dioxide from the air.

When forested land is cleared, soil disturbance and increased rates of decomposition in converted soils both create carbon dioxide emissions. This also increases soil erosion and nutrient leaching, which can further reduce the area's ability to act as a carbon sink.



What are the effects of global warming?

Global warming is damaging the Earth's climate as well as the physical environment. One of the most visible effects of global warming can be seen in the Arctic, where glaciers, permafrost and sea ice are melting rapidly. Global warming is harming the environment in several ways, including:

 Desertification
 Increased melting of snow and ice
 Sea level rise
 Stronger hurricanes and cyclones



Desertification

Increasing temperatures around the world are making arid and semi-arid areas even drier than before. Current research also shows that the water cycle is changing and rainfall patterns are shifting, making areas that are already dry even drier. This is causing water shortages and an intense amount of distress to the over 2.5 million people in dry regions which are degrading into desert. This process is called desertification.



Increased melting of snow and ice

Around the world, snow and ice are melting at a much faster pace than in the past. This has been seen in the Alps, Himalayas, Andes, Rockies, Alaska and Africa, but is particularly true at the Earth's poles. Perennial ice cover in the Arctic is melting at the rate of 11.5% per decade, and the thickness of the Arctic ice has decreased by 48% since the 1960s.

During the past 30 years, more than a million square miles of sea ice has vanished, an area equivalent to the size of Norway, Denmark and Sweden combined. The continent of Antarctica has been losing more than 100 cubic kilometers (24 cubic miles) of ice per year since 2002. Since 2010, the Antarctic ice melt rate has doubled.



Sea level rise

The Earth's sea level has risen by 21 cm (8 inches) since 1880. The rate of rise is accelerating and is now at a pace that has not been seen for at least 5000 years.

Global warming has caused this by affecting the oceans in two ways: warmer average temperatures cause ocean waters to expand (thermal expansion) and the accelerated melting of ice and glaciers increase the amount of water in the oceans.

Stronger hurricanes and cyclones

Tropical cyclone activity has seen an obvious upward trend since the early 1970s. Interestingly, this matches directly with an observed rise in the oceans' temperature over the same period of time. Since then, the Power Dissipation Index, which measures the destructive power of tropical cyclones, has increased in the Pacific by 35%, and in the Atlantic it has nearly doubled. Global warming also increases the frequency of strong cyclones. Every 1 °C increase in sea surface temperature results in a 31% increase in the global frequency of category 4 and 5 storms.
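If that 31%-per-degree figure is read as a simple multiplicative scaling (an assumption made here purely for illustration), the implied compounding is easy to work out:

    1.31^{2} \approx 1.72 \qquad\text{and}\qquad 1.31^{3} \approx 2.25,

so a 2 °C rise in sea surface temperature would imply roughly 72% more category 4 and 5 storms, and a 3 °C rise would imply more than double the current frequency.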



CRUNCHES

While some of the events described in the previous section would be certain to actually wipe out Homo sapiens (e.g. a breakdown of a meta-stable vacuum state), others could potentially be survived (such as an all-out nuclear war). If modern civilization were to collapse, however, it is not completely certain that it would arise again even if the human species survived. We may have used up too many of the easily available resources a primitive society would need to work itself up to our level of technology. A primitive human society may or may not be more likely to face extinction than any other animal species. But let's not try that experiment. If the primitive society lives on but fails to ever get back to current technological levels, let alone go beyond them, then we have an example of a crunch. Here are some potential causes of a crunch:

Resource depletion or ecological destruction

The natural resources needed to sustain a high-tech civilization are being used up.



If some other cataclysm destroys the technology we have, it may not be possible to climb back up to present levels if natural conditions are less favorable than they were for our ancestors, for example if the most easily exploitable coal, oil, and mineral resources have been depleted. (On the other hand, if plenty of information about our technological feats is preserved, that could make a rebirth of civilization easier.)

Environmentalists and scientists often refer to the two different ends of the environmental problem as sources and sinks. Thus the environmental limits to economic growth manifest themselves as either: (1) shortages in the "sources" or "taps" of raw materials/natural resources, and thus a problem of depletion, or (2) a lack of sufficient "sinks" to absorb wastes from industrial pollution, which "overflow" and cause harm to the environment. The original 1972 Limits to Growth study emphasized the problem of sources in the form of shortages of raw materials, such as fossil fuels, basic minerals, topsoil, freshwater, and forests.2 Today the focus of environmental concern has shifted more to sinks, as represented by climate change, ocean acidification, and the production of toxics. Nevertheless, the problem of the depletion of resources used in production remains critical, as can be seen in discussions of such issues as declining freshwater resources, peak (crude) oil, loss of soil fertility, and shortages of crucial minerals like zinc, copper, and phosphorus.

In conventional environmental analysis the issue of a shortage or depletion of natural resources has often been seen through a Malthusian lens as principally a problem of overpopulation.



Thomas Malthus raised the issue in the late eighteenth century of what he saw as inevitable shortages of food in relation to population growth.

This was later transformed by twentieth-century environmental theorists into an argument that current or future shortages of natural resources resulted from a population explosion overshooting the carrying capacity of the earth. The following analysis will address the environmental problem from the source or tap end, and its relation to population growth. No systematic attempt will be made to address the sink problem. However, the tap and the sink are connected because the greater use of resources to produce goods results in greater flows of pollutants into the “sink” during extraction, processing, transportation, manufacturing, use, and disposal.

Resource Depletion and Overuse

There are many examples of justified concern over depletion and unsustainable use of resources—or, at least, of the easily reached and relatively cheap-to-extract ones. A little discussed but very important example is phosphate. It is anticipated that the world's known phosphate deposits will be exhausted by the end of the century.6 The largest phosphate deposits are found in North Africa (Morocco), the United States, and China. Although phosphorus is used for other purposes, its use in agricultural fertilizers may be one of the most critical for the future of civilization.

In the absence of efficient nutrient cycling (the return to fields of nutrients contained in crop residues and farm animal and human wastes), routine use of phosphorus fertilizers is critical in order to maintain food production.



Today much of the fertilizer phosphate that is used is being wasted, leading to excessive runoff of this mineral, inducing algal blooms in lakes and rivers and contributing to ocean dead zones—both sink problems.

We could discuss many other individual nonrenewable resources, but the point would be the same. The depletion of nonrenewable resources that modern societies depend upon—such as oil, zinc, iron ore, bauxite (to make aluminum), and the "rare earths" (used in many electronic gadgets including smart phones as well as smart bombs)—is a problem of great importance. Although there is no immediate problem of scarcity for most of these resources, that is no reason to put off making societal changes that acknowledge the reality of the finite limits of nonrenewable resources. ("Rare earth" metals are not actually that rare. Their price increase in recent years has been caused by a production cutback in China, which accounts for 95 percent of world production, as it tries to better control the extensive ecological damage caused by extracting these minerals. Production of rare earths is starting up once again in the United States, and a large facility is planned for Malaysia, where it is being bitterly opposed by environmental activists. The main current issue with rare earth metals is not scarcity at the tap end, but rather pollution associated with mining and extraction—again a sink problem.)

What is important is that the environmental damage and the economic costs mount as corporations and countries dig deeper for resources, use more advanced technology, and/or operate in more fragile locations.



Mining companies are using new technologies such as robotic drills and high-strength pipe alloys to drill deeper after the surface deposits are depleted. Seafloor mining is another approach used to deal with declining easy-to-reach deposits. Still another way to deal with depleted high-quality deposits is to exploit those of lower quality. In highlighting this development, the CEO of a copper mining company explained: “Today the average grade—the grade is a measure of the amount of copper you can turn into material—is half of what it was 20 years ago. And so to get the same amount of copper from a deposit, you have to mine and process significantly larger quantities of material, and that involves higher cost.”7 This mining approach creates larger quantities of leftover spoils to pollute air, water, and soil.

[Chart 1. Share of World Consumption by Income Decile. Source: World Bank, 2008 World Development Index, http://data.worldbank.org. Note: World Bank staff combined measures of inequality within countries with measures of inequality between countries (using producer price parities) to derive estimates of the share of consumption by world income deciles.]



Combating Pollution and Resource Depletion/Misuse

The comprehensive 2012 report, People and the Planet by the Royal Society of London, included as one of its main conclusions that there is a need “to develop socio-economic systems and institutions that are not dependent on continued material consumption growth”. In other words, a non-capitalist society is needed.

Capitalism is the underlying cause of the extraordinarily high rate of resource use, the mismanagement of both renewable and nonrenewable resources, and the pollution of the earth. Any proposed "solution"—from birth control in poor countries to technological fixes to buying green to so-called "green capitalism" and so on—that ignores this reality cannot make significant headway in dealing with these critical problems facing the earth and its people. Within the current system, there are steps that can and should be taken to lessen the environmental problems associated with the limits of growth: the depletion of resource taps and the overflowing of waste sinks, both of which threaten the future of humanity.31 Our argument, however, has shown that attempts to trace these problems, and particularly the problem of the depletion of natural resources, to population growth are generally misdirected. The economic causes of depletion are the issues that must be vigorously addressed (though population growth remains a secondary factor). The starting point for any meaningful attempt actually to solve these problems must begin with the mode of production and its unending quest for ever-higher amounts of capital accumulation regardless of social and environmental costs—with the negative results that a portion of society becomes fabulously rich while others remain poor and the environment is degraded at a planetary level.

It is clear then that capitalism, that is, the system of the accumulation of capital, must go—sooner rather than later. But just radically transcending a system that harms the environment and many of the world’s people is not enough. In its place people must create a socio-economic system that has as its very purpose the meeting of everyone’s basic material and nonmaterial needs, which, of course, includes healthy local, regional, and global ecosystems. This will require modest living standards, with economic and political decisions resolved democratically according to principles consistent with substantive equality among people and a healthy biosphere for all the earth’s inhabitants.



Misguided world government or another static social equilibrium stops technological progress

One could imagine a fundamentalist religious or ecological movement one day coming to dominate the world. If by that time there are means of making such a world government stable against insurrections (by advanced surveillance or mind-control technologies), this might permanently put a lid on humanity's potential to develop to a posthuman level. Aldous Huxley's Brave New World is a well-known scenario of this type.

A world government may not be the only form of stable social equilibrium that could permanently thwart progress. Many regions of the world today have great difficulty building institutions that can support high growth. And historically, there are many places where progress stood still or retreated for significant periods of time. Economic and technological progress may not be as inevitable as it appears to us.


“Dysgenic” pressures It is possible that advanced civilized society is dependent on there being a sufficiently large fraction of intellectually talented individuals. Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (“lover of many offspring”). However, contrary to what such considerations might lead one to suspect, IQ scores have actually been increasing dramatically over the past century. This is known as the Flynn effect; see e.g. [51,52]. It’s not yet settled whether this corresponds to real gains in important intellectual functions. Moreover, genetic engineering is rapidly approaching the point where it will become possible to give parents the choice of endowing their offspring with genes that correlate with intellectual capacity, physical health, longevity, and other desirable traits. In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot [19,39]. A number of smart people, Charles Murray included, are worried about "dysgenic pressure." The story, in brief, is that:

1. Intelligence is highly heritable.
2. The more intelligent have fewer kids than the less intelligent.
3. Our average IQ is declining, or (here's the Flynn effect caveat) at least rising at a slower rate than it otherwise would.
4. We've got to get low IQ people to have fewer kids.

The response is predictable: People who find (4) offensive put their heads in the sand about (1), (2), and (3), and people who like (4) insist that it follows straight from the facts. Once again, both sides are wrong. Yes, there is ample evidence that (1) and (2) are true. I've checked (2) using the NLSY myself, and found that the smartest people really do
average almost one child less than people at the other end of the scale. And while all the data says that IQ is rising, it's hard to deny that IQ would have risen more if there were no relationship - or a positive relationship - between IQ and fertility. But, by an argument parallel to my critique of eugenics using the Law of Comparative Advantage, (4) simply doesn't follow. What happens when low IQ people have more kids? It encourages greater specialization and trade. High-IQ people have a stronger incentive to focus on brainy work, because there are more low-IQ people to handle the non-brainy work. Of course, another route to the same result would be for high-IQ people to have more kids. And it's plausible that if we could have one more person, it would be better for the world if he had a high IQ. But that's a trade-off we virtually never face in the modern world! There's plenty of food, and if low-IQ families have fewer kids, high-IQ families are not going to "take up the slack." If you are really worried about dysgenic pressure retarding the advance of civilization, there are two sensible solutions. The first is to encourage high-IQ people to have more kids to increase the supply of brains; the second is to encourage low-IQ people to have more kids to increase the demand for brains. Urging either group to have fewer kids "for the good of society" is not smart.

Technological arrest The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.

Something unforeseen As before, a catch-all. Overall, the probability of a crunch seems much smaller than that of a bang. We should keep the possibility in mind but not let it play a dominant role in our thinking at this point. If technological and economic development were to slow down substantially for some reason, then we would have to take a closer look at the crunch scenarios.


SHRIEKS Determining which scenarios are shrieks is made more difficult by the inclusion of the notion of desirability in the definition. Unless we know what is “desirable”, we cannot tell which scenarios are shrieks. However, there are some scenarios that would count as shrieks under most reasonable interpretations.

Take-over by a transcending upload Suppose uploads come before human-level artificial intelligence. An upload is a mind that has been transferred from a biological brain to a computer that emulates the computational processes that took place in the original biological neural network. A successful uploading process would preserve the original mind’s memories, skills, values, and consciousness. Uploading a mind will make it much easier to enhance its intelligence, by running it faster, adding additional computational resources, or streamlining its architecture. One could imagine that enhancing an upload beyond a certain point will result in a positive feedback loop, where the enhanced upload is able to figure out ways of making itself even smarter; and the smarter successor version is in turn even better at designing an improved version of itself, and so on. If this runaway process is sudden, it could result in one upload reaching superhuman levels of intelligence while everybody else remains at a roughly human level. Such enormous intellectual superiority may well give it correspondingly great power. It could rapidly invent new technologies or perfect nanotechnological designs, for example. If the transcending upload is bent on preventing others from getting the opportunity to upload, it might do so. The posthuman world may then be a reflection of one particular egoistical upload’s preferences (which in a worst case scenario would be worse than worthless). Such a world may well be a realization of only a tiny part of what would have been possible and desirable. This end is a shriek.

Flawed superintelligence Again, there is the possibility that a badly programmed superintelligence takes over and implements the faulty goals it has erroneously been given.


Repressive totalitarian global regime Similarly, one can imagine that an intolerant world government, based perhaps on mistaken religious or ethical convictions, is formed, is stable, and decides to realize only a very small part of all the good things a posthuman world could contain. Such a world government could conceivably be formed by a small group of people if they were in control of the first superintelligence and could select its goals. If the superintelligence arises suddenly and becomes powerful enough to take over the world, the posthuman world may reflect only the idiosyncratic values of the owners or designers of this superintelligence. Depending on what those values are, this scenario would count as a shriek.

Something unforeseen The catch-all. These shriek scenarios appear to have substantial probability and thus should be taken seriously in our strategic planning. One could argue that one value that makes up a large portion of what we would consider desirable in a posthuman world is that it contains as many as possible of those persons who are currently alive. After all, many of us want very much not to die (at least not yet) and to have the chance of becoming posthumans. If we accept this, then any scenario in which the transition to the posthuman world is delayed for long enough that almost all current humans are dead before it happens would be a shriek. Failing a breakthrough in life-extension or widespread adoption of cryonics, then even a smooth transition to a fully developed posthuman eighty years from now would constitute a major existential risk, if we define “desirable” with special reference to the people who are currently alive. This “if”, however, is loaded with a profound axiological problem that we shall not try to resolve here.



WHIMPERS If things go well, we may one day run up against fundamental physical limits. Even though the universe appears to be infinite, the portion of the universe that we could potentially colonize is (given our admittedly very limited current understanding of the situation) finite, and we will therefore eventually exhaust all available resources or the resources will spontaneously decay through the gradual decrease of negentropy and the associated decay of matter into radiation. But here we are talking astronomical timescales. An ending of this sort may indeed be the best we can hope for, so it would be misleading to count it as an existential risk. It does not qualify as a whimper because humanity could on this scenario have realized a good part of its potential. Two whimpers (apart from the usual catch-all hypothesis) appear to have significant probability:

Our potential or even our core values are eroded by evolutionary development This scenario is conceptually more complicated than the other existential risks we have considered (together perhaps with the “We are living in a simulation that gets shut down” bang scenario). It is explored in more detail in a companion paper. An outline of that paper is provided in an Appendix. A related scenario, described elsewhere, is that our “cosmic commons” could be burnt up in a colonization race. Selection would favor those replicators that spend all their resources on sending out further colonization probes. Although the time it would take for a whimper of this kind to play itself out may be relatively long, it could still have important policy implications because near-term choices may determine whether we will go down a track that inevitably leads to this outcome. Once the evolutionary process is set in motion or a cosmic colonization race begun, it could prove difficult or impossible to halt it. It may well be that the only feasible way of avoiding a whimper is to prevent these chains of events from ever starting to unwind.



Killed by an extraterrestrial civilization The probability of running into aliens any time soon appears to be very small. If things go well, however, and we develop into an intergalactic civilization, we may one day in the distant future encounter aliens. If they were hostile and if (for some unknown reason) they had significantly better technology than we will have by then, they may begin the process of conquering us. Alternatively, if they trigger a phase transition of the vacuum through their high-energy physics experiments (see the Bangs section) we may one day face the consequences. Because the spatial extent of our civilization at that stage would likely be very large, the conquest or destruction would take relatively long to complete, making this scenario a whimper rather than a bang.

Something unforeseen The catch-all hypothesis. The first of these whimper scenarios should be a weighty concern when formulating long-term strategy. Dealing with the second whimper is something we can safely delegate to future generations (since there’s nothing we can do about it now anyway).



8 Assessing the probability of existential risks

8.1 Direct versus indirect methods

There are two complementary ways of estimating our chances of creating a posthuman world. What we could call the direct way is to analyze the various specific failure modes, assign them probabilities, and then subtract the sum of these disaster probabilities from one to get the success-probability. In doing so, we would benefit from a detailed understanding of how the underlying causal factors will play out. For example, we would like to know the answers to questions such as: How much harder is it to design a foolproof global nanotech immune system than it is to design a nanobot that can survive and reproduce in the natural environment? How feasible is it to keep nanotechnology strictly regulated for a lengthy period of time (so that nobody with malicious intentions gets their hands on an assembler that is not contained in a tamperproof sealed assembler lab)? How likely is it that superintelligence will come before advanced nanotechnology? We can make guesses about these and other relevant parameters and form an estimate on that basis; and we can do the same for the other existential risks that we have outlined above. (I have tried to indicate the approximate relative probability of the various risks in the rankings given in the previous four sections.) Secondly, there is the indirect way. There are theoretical constraints that can be brought to bear on the issue, based on some general features of the world in which we live. There are only a small number of these, but they are important because they do not rely on making a lot of guesses about the details of future technological and social developments:
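Before turning to those indirect constraints, here is a minimal sketch of the direct bookkeeping described above. The failure modes and probabilities are placeholders of my own, not estimates from this text, and the scenarios are treated as mutually exclusive, as the subtraction implies.

```python
# Direct estimation sketch: subtract the summed disaster probabilities from one.
# The numbers below are illustrative placeholders, not estimates from the paper.
disaster_probabilities = {
    "misuse of nanotechnology": 0.05,
    "badly programmed superintelligence": 0.05,
    "engineered pandemic": 0.03,
    "nuclear holocaust": 0.02,
}

p_disaster = sum(disaster_probabilities.values())  # assumes mutually exclusive scenarios
p_success = 1 - p_disaster                          # chance of reaching a posthuman world

print(f"P(existential disaster) = {p_disaster:.2f}")
print(f"P(success)              = {p_success:.2f}")
```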



The Fermi Paradox Everyone feels something when they’re in a really good starry place on a really good starry night and they look up and see this:

Some people stick with the traditional, feeling struck by the epic beauty or blown away by the insane scale of the universe. Personally, I go for the old “existential meltdown followed by acting weird for the next half hour.” But everyone feels something. Physicist Enrico Fermi felt something too—“Where is everybody?”



A really starry sky seems vast—but all we’re looking at is our very local neighborhood.

On the very best nights, we can see up to about 2,500 stars (roughly one hundred-millionth of the stars in our galaxy), and almost all of them are less than 1,000 light years away from us (or 1% of the diameter of the Milky Way). So what we’re really looking at is this: When confronted with the topic of stars and galaxies, a question that tantalizes most humans is, “Is there other intelligent life out there?” Let’s put some numbers to it—

As many stars as there are in our galaxy (100 – 400 billion), there are roughly an equal
number of galaxies in the observable universe—so for every star in the colossal Milky Way, there’s a whole galaxy out there. All together, that comes out to the typically quoted range of between 10^22 and 10^24 total stars, which means that for every grain of sand on every beach on Earth, there are 10,000 stars out there.

The science world isn’t in total agreement about what percentage of those stars are “sunlike” (similar in size, temperature, and luminosity)—opinions typically range from 5% to 20%. Going with the most conservative side of that (5%), and the lower end for the number of total stars (10^22), gives us 500 quintillion, or 500 billion billion sun-like stars. There’s also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it’s as high as 50%, but let’s go with the more conservative 22% that came out of a recent PNAS study. That suggests that there’s a potentially-habitable Earth-like planet orbiting at least 1% of the total stars in the universe—a total of 100 billion billion Earth-like planets.


So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you’re on the beach.

Moving forward, we have no choice but to get completely speculative. Let’s imagine that after billions of years in existence, 1% of Earth-like planets develop life (if that’s true, every grain of sand would represent one planet with life on it). And imagine that on 1%
of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion intelligent civilizations in the observable universe. Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we’d estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy. SETI (Search for Extraterrestrial Intelligence) is an organization dedicated to listening for signals from other intelligent life. If we’re right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn’t SETI’s satellite dish array pick up all kinds of signals? But it hasn’t. Not one. Ever.
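The whole chain of estimates above is easy to reproduce; here is a short sketch using the same percentages quoted in the text (the fractions for life and intelligence are, as the text says, pure speculation).

```python
# Drake-style back-of-the-envelope using the figures quoted above.
TOTAL_STARS      = 1e22    # low end of the 10^22 - 10^24 range
MILKY_WAY_STARS  = 100e9   # low end of the 100 - 400 billion range
FRAC_SUNLIKE     = 0.05    # conservative 5%
FRAC_EARTHLIKE   = 0.22    # PNAS-derived estimate used in the text
FRAC_WITH_LIFE   = 0.01    # speculative
FRAC_INTELLIGENT = 0.01    # speculative

def chain(stars):
    sunlike     = stars * FRAC_SUNLIKE
    earthlike   = sunlike * FRAC_EARTHLIKE
    with_life   = earthlike * FRAC_WITH_LIFE
    intelligent = with_life * FRAC_INTELLIGENT
    return earthlike, intelligent

universe_planets, universe_civs = chain(TOTAL_STARS)
galaxy_planets, galaxy_civs = chain(MILKY_WAY_STARS)

print(f"Earth-like planets in the observable universe: {universe_planets:.1e}")  # ~10^20
print(f"Intelligent civilizations in the universe:     {universe_civs:.1e}")     # ~10^16
print(f"Earth-like planets in the Milky Way:           {galaxy_planets:.1e}")    # ~10^9
print(f"Intelligent civilizations in the Milky Way:    {galaxy_civs:.0f}")       # ~100,000
```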

Where is everybody? It gets stranger. Our sun is relatively young in the lifespan of the universe. There are far older stars with far older Earth-like planets, which should in theory mean civilizations far more advanced than our own. As an example, let’s compare our 4.54-billion-year-old Earth to a hypothetical 8-billion-year-old Planet X.



If Planet X has a similar story to Earth, let’s look at where their civilization would be today (using the orange timespan as a reference to show how huge the green timespan is):

The technology and knowledge of a civilization only 1,000 years ahead of us could be as shocking to us as our world would be to a medieval person. A civilization 1 million years ahead of us might be as incomprehensible to us as human culture is to chimpanzees. And Planet X is 3.4 billion years ahead of us… There’s something called The Kardashev Scale, which helps us group intelligent civilizations into three broad categories by the amount of energy they use:

A Type I Civilization has the ability to use all of the energy on their planet. We’re not quite a Type I Civilization, but we’re close (Carl Sagan created a formula for this scale which puts us at a Type 0.7 Civilization).
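The 0.7 figure comes from Sagan's interpolation formula, which assigns a continuous Kardashev value from total power use; a minimal sketch follows (the present-day wattage is an assumed round number of mine, not an official figure).

```python
import math

def kardashev(power_watts):
    # Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts,
    # so 10^16 W is Type I, 10^26 W is Type II, 10^36 W is Type III.
    return (math.log10(power_watts) - 6) / 10

print(f"Humanity at ~2e13 W: Type {kardashev(2e13):.2f}")  # roughly 0.73
print(f"Type I threshold:    Type {kardashev(1e16):.1f}")
print(f"Type II threshold:   Type {kardashev(1e26):.1f}")
```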

A Type II Civilization can harness all of the energy of their host star. Our feeble Type I brains can hardly imagine how someone would do this, but we’ve tried our best, imagining things like a Dyson Sphere.



A Type III Civilization blows the other two away, accessing power comparable to that of the entire Milky Way galaxy. If this level of advancement sounds hard to believe, remember Planet X above and their 3.4 billion years of further development. If a civilization on Planet X were similar to ours and were able to survive all the way to Type III level, the natural thought is that they’d probably have mastered interstellar travel by now, possibly even colonizing the entire galaxy. One hypothesis as to how galactic colonization could happen is by creating machinery that can travel to other planets, spend 500 years or so self-replicating using the raw materials on their new planet, and then send two replicas off to do the same thing. Even without traveling anywhere near the speed of light, this process would colonize the whole galaxy in 3.75 million years, a relative blink of an eye when talking in the scale of billions of years.
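One way to sanity-check the 3.75-million-year figure: with the probe count doubling each generation, blanketing the galaxy takes only a few dozen generations, so almost all of the time is travel. The hop distance and cruise speed below are my own assumptions, chosen only to land in the same ballpark as the quoted figure.

```python
import math

# Assumptions (mine, for illustration): probe count doubles each generation,
# an average hop is ~500 light-years, cruise speed is ~0.5% of light speed,
# and each arrival spends ~500 years self-replicating.
STARS_IN_GALAXY   = 100e9     # low-end Milky Way estimate used in the text
HOP_LIGHT_YEARS   = 500
SPEED_FRACTION_C  = 0.005
REPLICATION_YEARS = 500

generations = math.ceil(math.log2(STARS_IN_GALAXY))            # ~37 doublings
years_per_generation = HOP_LIGHT_YEARS / SPEED_FRACTION_C + REPLICATION_YEARS
total_years = generations * years_per_generation

print(f"{generations} doublings x {years_per_generation:,.0f} years "
      f"= ~{total_years / 1e6:.1f} million years to blanket the galaxy")
```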

Continuing to speculate, if 1% of intelligent life survives long enough to become a potentially galaxy-colonizing Type III Civilization, our calculations above suggest that there should be at least 1,000 Type III Civilizations in our galaxy alone—and given the
power of such a civilization, their presence would likely be pretty noticeable. And yet, we see nothing, hear nothing, and we’re visited by no one.

So where is everybody?

We have no answer to the Fermi Paradox—the best we can do is “possible explanations.” And if you ask ten different scientists what their hunch is about the correct one, you’ll get ten different answers. You know when you hear about humans of the past debating whether the Earth was round or if the sun revolved around the Earth or thinking that lightning happened because of Zeus, and they seem so primitive and in the dark? That’s about where we are with this topic.

In taking a look at some of the most-discussed possible explanations for the Fermi Paradox, let’s divide them into two broad categories—those explanations which assume that there’s no sign of Type II and Type III Civilizations because there are none of them out there, and those which assume they’re out there and we’re not seeing or hearing anything for other reasons.


Explanation Group 1: There are no signs of higher (Type II and III) civilizations because there are no higher civilizations in existence.

Those who subscribe to Group 1 explanations point to something called the non-exclusivity problem, which rebuffs any theory that says, “There are higher civilizations, but none of them have made any kind of contact with us because they all _____.” Group 1 people look at the math, which says there should be so many thousands (or millions) of higher civilizations that at least one of them would be an exception to the rule. Even if a theory held for 99.99% of higher civilizations, the other .01% would behave differently and we’d become aware of their existence.
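A quick sanity check of that non-exclusivity arithmetic, reusing the galaxy-level estimate from earlier:

```python
higher_civilizations = 100_000   # the earlier Milky Way estimate
exception_rate = 0.0001          # the 0.01% that don't follow the rule
print(f"Expected exceptions: {higher_civilizations * exception_rate:.0f}")  # ~10 civilizations
```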

Therefore, say Group 1 explanations, it must be that there are no super-advanced civilizations. And since the math suggests that there are thousands of them just in our own galaxy, something else must be going on. This something else is called The Great Filter.



The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.

If this theory is true, the big question is, Where in the timeline does the Great Filter occur?

It turns out that when it comes to the fate of humankind, this question is very
important. Depending on where The Great Filter occurs, we’re left with three possible realities: We’re rare, we’re first, or we’re fucked.

1. We’re Rare (The Great Filter is Behind Us) One hope we have is that The Great Filter is behind us—we managed to surpass it, which would mean it’s extremely rare for life to make it to our level of intelligence.

This scenario would explain why there are no Type III Civilizations…but it would also mean that we could be one of the few exceptions now that we’ve made it this far.

The diagram below shows only two species making it past, and we’re one of them.


It would mean we have hope. On the surface, this sounds a bit like people 500 years ago suggesting that the Earth is the center of the universe—it implies that we’re special.

However, something scientists call “observation selection effect” suggests that anyone who is pondering their own rarity is inherently part of an intelligent life “success story”—and whether they’re actually rare or quite common, the thoughts they ponder and conclusions they draw will be identical. This forces us to admit that being special is at least a possibility.

And if we are special, when exactly did we become special—i.e. which step did we surpass that almost everyone else gets stuck on?


One possibility: The Great Filter could be at the very beginning—it might be incredibly unusual for life to begin at all. This is a candidate because it took about a billion years of Earth’s existence to finally happen, and because we have tried extensively to replicate that event in labs and have never been able to do it.

If this is indeed The Great Filter, it would mean that not only is there no intelligent life out there, there may be no other life at all. Another possibility: The Great Filter could be the jump from the simple prokaryote cell to the complex eukaryote cell. After prokaryotes came into being, they remained that way for almost two billion years before making the evolutionary jump to being complex and having a nucleus. If this is The Great Filter, it would mean the universe is teeming with simple prokaryote cells and almost nothing beyond that.

There are a number of other possibilities—some even think the most recent leap we’ve made to our current intelligence is a Great Filter candidate.


While the leap from semi-intelligent life (chimps) to intelligent life (humans) doesn’t at first seem like a miraculous step, Steven Pinker rejects the idea of an inevitable “climb upward” of evolution: “Since evolution does not strive for a goal but just happens, it uses the adaptation most useful for a given ecological niche, and the fact that, on Earth, this led to technological intelligence only once so far may suggest that this outcome of natural selection is rare and hence by no means a certain development of the evolution of a tree of life.”

Most leaps do not qualify as Great Filter candidates. Any possible Great Filter must be a one-in-a-billion type thing where one or more total freak occurrences need to happen to
provide a crazy exception—for that reason, something like the jump from single-cell to multi-cellular life is ruled out, because it has occurred as many as 46 times, in isolated incidents, just on this planet alone. For the same reason, if we were to find a fossilized eukaryote cell on Mars, it would rule the above “simple-to-complex cell” leap out as a possible Great Filter (as well as anything before that point on the evolutionary chain)—because if it happened on both Earth and Mars, it’s almost definitely not a one-in-a-billion freak occurrence.

If we are indeed rare, it could be because of a fluky biological event, but it also could be attributed to what is called the Rare Earth Hypothesis, which suggests that though there may be many Earth-like planets, the particular conditions on Earth—whether related to the specifics of this solar system, its relationship with the moon (a moon that large is unusual for such a small planet and contributes to our particular weather and ocean conditions), or something about the planet itself—are exceptionally friendly to life.


2. We’re the First For Group 1 Thinkers, if the Great Filter is not behind us, the one hope we have is that conditions in the universe are just recently, for the first time since the Big Bang, reaching a place that would allow intelligent life to develop.

In that case, we and many other species may be on our way to super-intelligence, and it simply hasn’t happened yet. We happen to be here at the right time to become one of the first super-intelligent civilizations. One example of a phenomenon that could make this realistic is the prevalence of gamma-ray bursts, insanely huge explosions that we’ve observed in distant galaxies.

In the same way that it took the early Earth a few hundred million years before the
asteroids and volcanoes died down and life became possible, it could be that the first chunk of the universe’s existence was full of cataclysmic events like gamma-ray bursts that would incinerate everything nearby from time to time and prevent any life from developing past a certain stage.

Now, perhaps, we’re in the midst of an astrobiological phase transition and this is the first time any life has been able to evolve for this long, uninterrupted.

3. We’re Fucked (The Great Filter is Ahead of Us) If we’re neither rare nor early, Group 1 thinkers conclude that The Great Filter must be in our future. This would suggest that life regularly evolves to where we are, but that something prevents life from going much further and reaching high intelligence in almost all cases—and we’re unlikely to be an exception. One possible future Great Filter is a regularly-occurring cataclysmic natural event, like the above-mentioned gamma-ray bursts, except they’re unfortunately not done yet and it’s just a matter of time before all life on Earth is suddenly wiped out by one. Another
candidate is the possible inevitability that nearly all intelligent civilizations end up destroying themselves once a certain level of technology is reached.

This is why Oxford University philosopher Nick Bostrom says that “no news is good news.” The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us.

And if we were to find fossilized complex life on Mars, Bostrom says “it would be by far the worst news ever printed on a newspaper cover,” because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, “the silence of the night sky is golden.”


Explanation Group 2: Type II and III intelligent civilizations are out there—and there are logical reasons why we might not have heard from them.

Group 2 explanations get rid of any notion that we’re rare or special or the first at anything—on the contrary, they believe in the Mediocrity Principle, whose starting point is that there is nothing unusual or rare about our galaxy, solar system, planet, or level of intelligence, until evidence proves otherwise.



They’re also much less quick to assume that the lack of evidence of higher-intelligence beings is evidence of their nonexistence—emphasizing the fact that our search for signals stretches only about 100 light years away from us (0.1% of the way across the galaxy) and suggesting a number of possible explanations. Here are 10:

Possibility 1) Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time. If contact happened before then, it might have made some ducks flip out and run into the water and that’s it. Further, recorded history only goes back 5,500 years—a group of ancient hunter-gatherer tribes may have experienced some crazy alien shit, but they had no good way to tell anyone in the future about it.



Possibility 2) The galaxy has been colonized, but we just live in some desolate rural area of the galaxy. The Americas may have been colonized by Europeans long before anyone in a small Inuit tribe in far northern Canada realized it had happened. There could be an urbanization component to the interstellar dwellings of higher species, in which all the neighboring solar systems in a certain area are colonized and in communication, and it would be impractical and purposeless for anyone to deal with coming all the way out to the random part of the spiral where we live.

Possibility 3) The entire concept of physical colonization is a hilariously backward concept to a more advanced species. Remember the picture of the Type II Civilization above with the sphere around their star? With all that energy, they might have created a perfect environment for themselves that satisfies their every need. They might have crazy-advanced ways of reducing their need for resources and zero interest in leaving their happy utopia to explore the cold, empty, undeveloped universe. An even more advanced civilization might view the entire physical world as a horribly primitive place, having long ago conquered their own biology and uploaded their brains to a virtual reality, eternal-life paradise. Living in the physical world of biology,
mortality, wants, and needs might seem to them the way we view primitive ocean species living in the frigid, dark sea. FYI, thinking about another life form having bested mortality makes me incredibly jealous and upset.

Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI satellites.



It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals.

There’s a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI) or not, and most people say we should not. Stephen Hawking warns, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

Even Carl Sagan (a general believer that any civilization advanced enough for
interstellar travel would be altruistic, not hostile) called the practice of METI “deeply unwise and immature,” and recommended that “the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.” Scary.

Possibility 5) There’s only one instance of higher-intelligent life—a “superpredator” civilization (like humans are here on Earth)—that is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once they get past a certain level. This would suck. The way it might work is that it’s an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.



Possibility 6) There’s plenty of activity and noise out there, but our technology is too primitive and we’re listening for the wrong things. Like walking into a modern-day office building, turning on a walkie-talkie, and when you hear no activity (which of course you wouldn’t hear because everyone’s texting, not using walkie-talkies), determining that the building must be empty. Or maybe, as Carl Sagan has pointed out, it could be that our minds work exponentially faster or slower than another form of intelligence out there—e.g. it takes them 12 years to say “Hello,” and when we hear that communication, it just sounds like white noise to us.



Possibility 7) We are receiving contact from other intelligent life, but the government is hiding it. The more I learn about the topic, the more this seems like an idiotic theory, but I had to mention it because it’s talked about so much.

Possibility 8) Higher civilizations are aware of us and observing us (AKA the “Zoo Hypothesis”). As far as we know, super-intelligent civilizations exist in a tightly-regulated galaxy, and our Earth is treated like part of a vast and protected national park, with a strict “Look but don’t touch” rule for planets like ours. We wouldn’t notice them, because if a far smarter species wanted to observe us, it would know how to easily do so without us realizing it. Maybe there’s a rule similar to the Star
Trek “Prime Directive” which prohibits super-intelligent beings from making any open contact with lesser species like us or revealing themselves in any way, until the lesser species has reached a certain level of intelligence.

Possibility 9) Higher civilizations are here, all around us. But we’re too primitive to perceive them. Michio Kaku sums it up like this: Let’s say we have an anthill in the middle of the forest. And right next to the anthill, they’re building a ten-lane super-highway. And the question is “Would the ants be able to understand what a ten-lane super-highway is? Would the ants be able to understand the technology and the intentions of the beings building the highway next to them?”

So it’s not that we can’t pick up the signals from Planet X using our technology, it’s that
we can’t even comprehend what the beings from Planet X are or what they’re trying to do. It’s so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

Along those lines, this may also be an answer to “Well if there are so many fancy Type III Civilizations, why haven’t they contacted us yet?” To answer that, let’s ask ourselves—when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.

Possibility 10) We’re completely wrong about our reality. There are a lot of ways we could just be totally off with everything we think. The universe might appear one way and be something else entirely, like a hologram. Or maybe we’re the aliens and we were planted here as an experiment or as a form of fertilizer. There’s even a chance that we’re all part of a computer simulation by some researcher from another world, and other forms of life simply weren’t programmed into the simulation.



8.3 Observation selection effects

The theory of observation selection effects may tell us what we should expect to observe given some hypothesis about the distribution of observers in the world. By comparing such predictions to our actual observations, we get probabilistic evidence for or against various hypotheses. One attempt to apply such reasoning to predicting our future prospects is the so-called Doomsday argument. It purports to show that we have systematically underestimated the probability that humankind will go extinct relatively soon. The idea, in its simplest form, is that we should think of ourselves as in some sense random samples from the set of all observers in our reference class, and we would be more likely to live as early as we do if there were not a very great number of observers in our reference class living later than us. The Doomsday argument is highly controversial, and I have argued elsewhere that although it may be theoretically sound, some of its applicability conditions are in fact not satisfied, so that applying it to our actual case would be a mistake [75,76]. Other anthropic arguments may be more successful: the argument based on the Fermi paradox is one example and the next section provides another. In general, one lesson is that we should be careful not to use the fact that life on Earth has survived up to this day and that our humanoid ancestors didn’t go extinct in some sudden disaster to infer that Earth-bound life and humanoid ancestors are highly resilient. Even if on the vast
majority of Earth-like planets life goes extinct before intelligent life forms evolve, we should still expect to find ourselves on one of the exceptional planets that were lucky enough to escape devastation. In this case, our past success provides no ground for expecting success in the future. The field of observation selection effects is methodologically very complex and more foundational work is needed before we can be confident that we really understand how to reason about these things. There may well be further lessons from this domain that we haven’t yet been clever enough to comprehend.
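A toy simulation (my construction, not from the text) of the point about past survival: however often Earth-like planets are sterilized, every observer who exists looks back on an unbroken record of survival, so that record by itself tells us nothing about the sterilization rate.

```python
import random

random.seed(0)

def observers_after_sterilization(sterilization_prob, n_planets=100_000):
    """Count planets that escape sterilization and so can host observers."""
    return sum(random.random() > sterilization_prob for _ in range(n_planets))

for p in (0.1, 0.9, 0.999):
    survivors = observers_after_sterilization(p)
    # Observers arise only on surviving planets, so every single one of them
    # "observes" a past free of sterilizing disasters, whatever p actually is.
    print(f"sterilization prob {p}: {survivors} planets host observers, "
          f"each seeing a disaster-free history")
```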

8.5 Psychological biases?

The psychology of risk perception is an active but rather messy field that could potentially contribute indirect grounds for reassessing our estimates of existential risks. Suppose our intuitions about which future scenarios are “plausible and realistic” are shaped by what we see on TV and in movies and what we read in novels. (After all, a large part of the discourse about the future that people encounter is in the form of fiction and other recreational contexts.) We should then, when thinking critically, suspect our intuitions of being biased in the direction of overestimating the probability of those scenarios that make for a good story, since such scenarios will seem much more familiar and more “real”. This Good-story bias could be quite powerful. When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch. So we don’t see many stories of that kind. If we are not careful, we can be misled into believing that the boring scenario is too far-fetched to be worth taking seriously. In general, if we think there is a Good-story bias, we may upon reflection want to increase our credence in boring hypotheses and decrease our credence in interesting, dramatic hypotheses. The net effect would be to redistribute probability among existential risks in favor of those that seem harder to fit into a selling narrative, and possibly to increase the probability of the existential risks as a group. The empirical data on risk-estimation biases is ambiguous. It has been argued that we suffer from various systematic biases when estimating our own prospects or risks in general. Some data suggest that humans tend to overestimate their own personal
abilities and prospects. About three quarters of all motorists think they are safer drivers than the typical driver. Bias seems to be present even among highly educated people. According to one survey, almost half of all sociologists believed that they would become one of the top ten in their field [87], and 94% of sociologists thought they were better at their jobs than their average colleagues [88]. It has also been shown that depressives have a more accurate self-perception than normals except regarding the hopelessness of their situation [89-91]. Most people seem to think that they themselves are less likely to fall victim to common risks than other people [92]. It is widely believed [93] that the public tends to overestimate the probability of highly publicized risks (such as plane crashes, murders, food poisonings etc.), and a recent study [94] shows the public overestimating a large range of commonplace health risks to themselves. Another recent study [95], however, suggests that available data are consistent with the assumption that the public rationally estimates risk (although with a slight truncation bias due to cognitive costs of keeping in mind exact information). Even if we could get firm evidence for biases in estimating personal risks, we’d still have to be careful in making inferences to the case of existential risks.

8.6 Weighing up the evidence

In combination, these indirect arguments add important constraints to those we can glean from the direct consideration of various technological risks, although there is not room here to elaborate on the details. But the balance of evidence is such that it would appear unreasonable not to assign a substantial probability to the hypothesis that an existential disaster will do us in. My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher. But even if the probability were much smaller (say, ~1%) the subject matter would still merit very serious attention because of how much is at stake. In general, the greatest existential risks on the time-scale of a couple of centuries or less appear to be those that derive from the activities of advanced technological civilizations. We see this by looking at the various existential risks we have listed. In each of the four categories, the top risks are engendered by our activities. The only significant existential risks for which this isn’t true are “simulation gets shut down” (although on some versions of this hypothesis the shutdown would be prompted by our activities [27]); the catch-all hypotheses (which include both types of scenarios); asteroid or comet impact
(which is a very low probability risk); and getting killed by an extraterrestrial civilization (which would be highly unlikely in the near future). It may not be surprising that existential risks created by modern civilization get the lion’s share of the probability. After all, we are now doing some things that have never been done on Earth before, and we are developing capacities to do many more such things. If non-anthropogenic factors have failed to annihilate the human species for hundreds of thousands of years, it could seem unlikely that such factors will strike us down in the next century or two. By contrast, we have no reason whatever not to think that the products of advanced civilization will be our bane. We shouldn’t be too quick to dismiss the existential risks that aren’t human-generated as insignificant, however. It’s true that our species has survived for a long time in spite of whatever such risks are present. But there may be an observation selection effect in play here. The question to ask is, on the theory that natural disasters sterilize Earth-like planets with a high frequency, what should we expect to observe? Clearly not that we are living on a sterilized planet. But maybe that we should be more primitive humans than we are? In order to answer this question, we need a solution to the problem of the reference class in observer selection theory. Yet that is a part of the methodology that doesn’t yet exist. So at the moment we can state that the most serious existential risks are generated by advanced human civilization, but we base this assertion on direct considerations. Whether there is additional support for it based on indirect considerations is an open question. We should not blame civilization or technology for imposing big existential risks. Because of the way we have defined existential risks, a failure to develop technological civilization would imply that we had fallen victim to an existential disaster (namely a crunch, “technological arrest”). Without technology, our chances of avoiding existential risks would therefore be nil. With technology, we have some chance, although the greatest risks now turn out to be those generated by technology itself.

9 Implications for policy and ethics Existential risks have a cluster of features that make it useful to identify them as a special category: the extreme magnitude of the harm that would come from an existential disaster; the futility of the trial-and-error approach; the lack of evolved biological and cultural coping methods; the fact that existential risk reduction is a global
public good; the shared stakeholdership of all future generations; the international nature of many of the required countermeasures; the necessarily highly speculative and multidisciplinary nature of the topic; the subtle and diverse methodological problems involved in assessing the probability of existential risks; and the comparative neglect of the whole area. From our survey of the most important existential risks and their key attributes, we can extract tentative recommendations for ethics and policy:

9.1 Raise the profile of existential risks

We need more research into existential risks – detailed studies of particular aspects of specific risks as well as more general investigations of associated ethical, methodological, security and policy issues. Public awareness should also be built up so that constructive political debate about possible countermeasures becomes possible. Now, it’s a commonplace that researchers always conclude that more research needs to be done in their field. But in this instance it is really true. There is more scholarly work on the life-habits of the dung fly than on existential risks.

9.2 Create a framework for international action

Since existential risk reduction is a global public good, there should ideally be an institutional framework such that the cost and responsibility for providing such goods could be shared fairly by all people. Even if the costs can’t be shared fairly, some system that leads to the provision of existential risk reduction in something approaching optimal amounts should be attempted. The necessity for international action goes beyond the desirability of cost-sharing, however. Many existential risks simply cannot be substantially reduced by actions that are internal to one or even most countries. For example, even if a majority of countries pass and enforce national laws against the creation of some specific destructive version of nanotechnology, will we really have gained safety if some less scrupulous countries decide to forge ahead regardless? And strategic bargaining could make it infeasible to bribe all the irresponsible parties into subscribing to a treaty, even if everybody would be better off if everybody subscribed.



9.3 Retain a last-resort readiness for preemptive action

Creating a broad-based consensus among the world’s nation states is time-consuming, difficult, and in many instances impossible. We must therefore recognize the possibility that cases may arise in which a powerful nation or a coalition of states needs to act unilaterally for its own and the common interest. Such unilateral action may infringe on the sovereignty of other nations and may need to be done preemptively. Let us make this hypothetical more concrete. Suppose advanced nanotechnology has just been developed in some leading lab. (By advanced nanotechnology I mean a fairly general assembler, a device that can build a large range of three-dimensional structures – including rigid parts – to atomic precision given a detailed specification of the design and construction process, some feedstock chemicals, and a supply of energy.) Suppose that at this stage it is possible to predict that building dangerous nanoreplicators will be much easier than building a reliable nanotechnological immune system that could protect against all simple dangerous replicators. Maybe design-plans for the dangerous replicators have already been produced by design-ahead efforts and are available on the Internet. Suppose furthermore that most of the research leading up to the construction of the assembler, excluding only the last few stages, is available in the open literature, so that other laboratories in other parts of the world are soon likely to develop their own assemblers. What should be done? With this setup, one can confidently predict that the dangerous technology will soon fall into the hands of “rogue nations”, hate groups, and perhaps eventually lone psychopaths. Sooner or later somebody would then assemble and release a destructive nanobot and destroy the biosphere. The only option is to take action to prevent the proliferation of the assembler technology until such a time as reliable countermeasures to a nano-attack have been deployed. Hopefully, most nations would be responsible enough to willingly subscribe to appropriate regulation of the assembler technology. The regulation would not need to be in the form of a ban on assemblers but it would have to limit temporarily but effectively the uses of assemblers, and it would have to be coupled to a thorough monitoring program. Some nations, however, may refuse to sign up. Such nations would first be pressured to join the coalition. If all efforts at persuasion fail, force or the threat of force would have to be used to get them to sign on.


A preemptive strike on a sovereign nation is not a move to be taken lightly, but in the extreme case we have outlined – where a failure to act would with high probability lead to existential catastrophe – it is a responsibility that must not be abrogated. Whatever moral prohibition there normally is against violating national sovereignty is overridden in this case by the necessity to prevent the destruction of humankind. Even if the nation in question has not yet initiated open violence, the mere decision to go forward with development of the hazardous technology in the absence of sufficient regulation must be interpreted as an act of aggression, for it puts the rest of the world at an even greater risk than would, say, firing off several nuclear missiles in random directions. The intervention should be decisive enough to reduce the threat to an acceptable level but it should be no greater than is necessary to achieve this aim. It may even be appropriate to pay compensation to the people of the offending country, many of whom will bear little or no responsibility for the irresponsible actions of their leaders. While we should hope that we are never placed in a situation where initiating force becomes necessary, it is crucial that we make room in our moral and strategic thinking for this contingency. Developing widespread recognition of the moral aspects of this scenario ahead of time is especially important, since without some degree of public support democracies will find it difficult to act decisively before there has been any visible demonstration of what is at stake. Waiting for such a demonstration is decidedly not an option, because it might itself be the end.

9.4 Differential technological development

If a feasible technology has large commercial potential, it is probably impossible to prevent it from being developed. At least in today’s world, with lots of autonomous powers and relatively limited surveillance, and at least with technologies that do not rely on rare materials or large manufacturing plants, it would be exceedingly difficult to make a ban 100% watertight. For some technologies (say, ozone-destroying chemicals), imperfectly enforceable regulation may be all we need. But with other technologies, such as destructive nanobots that self-replicate in the natural environment, even a single breach could be terminal. The limited enforceability of technological bans restricts the set of feasible policies from which we can choose. What we do have the power to affect (to what extent depends on how we define “we”) is the rate of development of various technologies and potentially the sequence in which
feasible technologies are developed and implemented. Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies. In the case of nanotechnology, the desirable sequence would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation. In the case of biotechnology, we should seek to promote research into vaccines, anti-bacterial and anti-viral drugs, protective gear, sensors and diagnostics, and to delay as much as possible the development (and proliferation) of biological warfare agents and their vectors. Developments that advance offense and defense equally are neutral from a security perspective, unless done by countries we identify as responsible, in which case they are advantageous to the extent that they increase our technological superiority over our potential enemies. Such “neutral” developments can also be helpful in reducing the threat from natural hazards and they may of course also have benefits that are not directly related to global security. Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence. The main possible exception to this is if we think that it is important that we get to superintelligence via uploading rather than through artificial intelligence. Nanotechnology would greatly facilitate uploading. Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.



As mentioned, we can also identify developments outside technology that are beneficial in almost all scenarios. Peace and international cooperation are obviously worthy goals, as is cultivation of traditions that help democracies prosper.

9.5 Support programs that directly reduce specific existential risks

Some of the lesser existential risks can be countered fairly cheaply. For example, there are organizations devoted to mapping potentially threatening near-Earth objects (e.g. NASA’s Near Earth Asteroid Tracking Program, and the Space Guard Foundation). These could be given additional funding. To reduce the probability of a “physics disaster”, a public watchdog could be appointed with authority to commission advance peer-review of potentially hazardous experiments. This is currently done on an ad hoc basis and often in a way that relies on the integrity of researchers who have a personal stake in the experiments going forth.

The existential risks of naturally occurring or genetically engineered pandemics would be reduced by the same measures that would help prevent and contain more limited epidemics. Thus, efforts in counter-terrorism, civil defense, epidemiological monitoring and reporting, developing and stockpiling antidotes, rehearsing emergency quarantine procedures, etc. could be intensified. Even abstracting from existential risks, it would probably be cost-effective to increase the fraction of defense budgets devoted to such programs.

Reducing the risk of a nuclear Armageddon, whether accidental or intentional, is a well-recognized priority. There is a vast literature on the related strategic and political issues to which I have nothing to add here. The longer-term dangers of nanotech proliferation or an arms race between nanotechnic powers, as well as the whimper risk of “evolution into oblivion”, may necessitate, even more than nuclear weapons, the creation and implementation of a coordinated global strategy. Recognizing these existential risks suggests that it is advisable to gradually shift the focus of security policy from seeking national security through unilateral strength to creating an integrated international security system that can prevent arms races and the proliferation of weapons of mass destruction. Which particular policies have the best chance of attaining this long-term goal is a question beyond the scope of this paper.



9.6 Maxipok: a rule of thumb for moral action

Previous sections have argued that the combined probability of the existential risks is very substantial. Although there is still a fairly broad range of differing estimates that responsible thinkers could make, it is nonetheless arguable that, because the negative utility of an existential disaster is so enormous, the objective of reducing existential risks should be a dominant consideration when acting out of concern for humankind as a whole. It may be useful to adopt the following rule of thumb for moral action; we can call it Maxipok:

Maximize the probability of an okay outcome, where an “okay outcome” is any outcome that avoids existential disaster.

At best, this is a rule of thumb, a prima facie suggestion, rather than a principle of absolute validity, since there clearly are other moral objectives than preventing terminal global disaster. Its usefulness consists in helping us to get our priorities straight. Moral action is always at risk of diffusing its efficacy on feel-good projects rather than on serious work that has the best chance of fixing the worst ills. The cleft between the feel-good projects and what really has the greatest potential for good is likely to be especially great in regard to existential risk. Since the goal is somewhat abstract and since existential risks don’t currently cause suffering in any living creature, there is less of a feel-good dividend to be derived from efforts that seek to reduce them. This suggests an offshoot moral project, namely to reshape the popular moral perception so as to give more credit and social approbation to those who devote their time and resources to benefiting humankind via global safety compared to other philanthropies.

Maxipok, a kind of satisficing rule, is different from Maximin (“Choose the action that has the best worst-case outcome.”). Since we cannot completely eliminate existential risks (at any moment we could be sent into the dustbin of cosmic history by the advancing front of a vacuum phase transition triggered in a remote galaxy a billion years ago), using maximin in the present context has the consequence that we should choose the act that has the greatest benefits under the assumption of impending extinction. In other words, maximin implies that we should all start partying as if there were no tomorrow. While that option is indisputably attractive, it seems best to acknowledge that there just might be a tomorrow, especially if we play our cards right.
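To make the contrast concrete, here is a minimal sketch in Python; the actions, probabilities and utilities are invented for illustration and are not taken from the paper:

# Contrasting Maxipok with Maximin on two hypothetical actions. Each action maps
# to outcomes given as (probability, utility, okay), where okay=False marks an
# existential disaster. All numbers are made up.
actions = {
    "work_on_safety": [(0.90, 50, True), (0.10, -1000, False)],
    "party_now":      [(0.50, 100, True), (0.50, -990, False)],
}

def maxipok(acts):
    # Pick the action that maximizes the probability of an "okay" outcome.
    return max(acts, key=lambda a: sum(p for p, _, ok in acts[a] if ok))

def maximin(acts):
    # Pick the action whose worst-case utility is highest.
    return max(acts, key=lambda a: min(u for _, u, _ in acts[a]))

print(maxipok(actions))  # work_on_safety: a 0.9 chance of an okay outcome beats 0.5
print(maximin(actions))  # party_now: its worst case (-990, doom after a good party)
                         # edges out -1000, showing how maximin ends up rewarding
                         # maximizing enjoyment under assumed impending extinction

The sketch illustrates the structural point made above: when extinction cannot be ruled out for any action, maximin ranks actions by how pleasant the doomed worst case is, whereas Maxipok ranks them by how likely they are to avoid the disaster altogether.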



Future imperfect

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not yet occurred (partly because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence is lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But some caveats must be kept in mind, for this list is not final. Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan Project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.

Finally, just because something is possible and potentially hazardous doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma-ray bursts resulting from explosions in distant galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to a problem of bad public health.



2. Bioengineered pandemic

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.

Unfortunately, we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.

Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse. Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful.


But there are always some people who might want to do things simply because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons, besides its more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.

The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it follows a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful, nastier pathogens will become easier to design.
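To illustrate what a power-law (heavy-tailed) distribution of event sizes implies, here is a minimal Python sketch; the tail exponent and sample size are assumptions chosen purely for illustration, not estimates fitted to bioterrorism data:

import random

# Draw hypothetical attack fatality counts from a Pareto (power-law) distribution
# via inverse-transform sampling, then show that a handful of extreme events
# dominate the total. All parameters are illustrative assumptions.
random.seed(0)
alpha = 1.5      # assumed tail exponent (smaller = heavier tail)
x_min = 1.0      # minimum event size

events = sorted(
    (x_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(10_000)),
    reverse=True,
)

total = sum(events)
worst_1_percent = sum(events[: len(events) // 100])
median = events[len(events) // 2]
print(f"Median event size: {median:.1f}")
print(f"Share of all fatalities caused by the worst 1% of events: {worst_1_percent / total:.0%}")

The point of the toy model is the asymmetry it produces: the typical event is tiny, yet the rare extreme events account for most of the harm, which is exactly why averages over past incidents can understate the tail risk.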

Note that just because something is unknown, it doesn't mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of Earth. You might wonder why climate change or meteor impacts have been left off this list.


Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences against them break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which, after 70 years, is still the biggest threat to our continued existence.

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years, we need to correct that.
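As a quick back-of-the-envelope check on the figure above (a sketch using only the numbers quoted in the text, plus the simplifying assumption of a constant annual hazard), the one-in-a-million background rate translates into survival probabilities like this:

# Back-of-the-envelope check of the background natural extinction rate quoted
# above, assuming a constant annual hazard (a simplifying assumption).
avg_species_lifetime_years = 1_000_000
annual_rate = 1 / avg_species_lifetime_years   # roughly one in a million per year

for horizon in (100, 10_000, 1_000_000):
    p_survive = (1 - annual_rate) ** horizon
    print(f"{horizon:>9} years: probability of avoiding natural extinction is about {p_survive:.3f}")
# Prints roughly 1.000, 0.990 and 0.368 - natural background risks are tiny on
# human timescales, which is why the human-made risks discussed here dominate.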

Humans: the real threat to life on Earth

Earth is home to millions of species. Just one dominates it. Us. Our cleverness, our inventiveness and our activities have modified almost every part of our planet. In fact, we are having a profound impact on it. Indeed, our cleverness, our inventiveness and our activities are now the drivers of every global problem we face. And every one of these problems is accelerating as we continue to grow towards a global population of 10 billion. In fact, I believe we can rightly call the situation we're in right now an emergency – an unprecedented planetary emergency.

We humans emerged as a species about 200,000 years ago. In geological time, that is really incredibly recent. Just 10,000 years ago, there were one million of us. By 1800, just over 200 years ago, there were 1 billion of us. By 1960, 50 years ago, there were 3 billion of us. There are now over 7 billion of us. By 2050, your children, or your children's children, will be living on a planet with at least 9 billion other people. Some time towards the end of this century, there will be at least 10 billion of us. Possibly more.

We got to where we are now through a number of civilisation- and society-shaping "events", most notably the agricultural revolution, the scientific revolution, the industrial revolution and – in the West – the public-health revolution.


By 1980, there were 4 billion of us on the planet. Just 10 years later, in 1990, there were 5 billion of us. By this point, initial signs of the consequences of our growth were starting to show. Not the least of these was on water. Our demand for water – not just the water we drank but the water we needed for food production and to make all the stuff we were consuming – was going through the roof. But something was starting to happen to water. Back in 1984, journalists reported from Ethiopia about a famine of biblical proportions caused by widespread drought. Unusual droughts and unusual floods were increasing everywhere: Australia, Asia, the US, Europe. Water, a vital resource we had thought of as abundant, was now suddenly something that had the potential to be scarce.

By 2000, there were 6 billion of us. It was becoming clear to the world's scientific community that the accumulation of CO2, methane and other greenhouse gases in the atmosphere – as a result of increasing agriculture, land use and the production, processing and transportation of everything we were consuming – was changing the climate. And that, as a result, we had a serious problem on our hands; 1998 had been the warmest year on record, and the 10 warmest years on record have occurred since 1998.

We hear the term "climate" every day, so it is worth thinking about what we actually mean by it. Obviously, "climate" is not the same as weather. The climate is one of the Earth's fundamental life-support systems, one that determines whether or not we humans are able to live on this planet. It is generated by four components: the atmosphere (the air we breathe); the hydrosphere (the planet's water); the cryosphere (the ice sheets and glaciers); and the biosphere (the planet's plants and animals).

By now, our activities had started to modify every one of these components. Our emissions of CO2 had started to modify our atmosphere. Our increasing water use had started to modify our hydrosphere. Rising atmospheric and sea-surface temperatures had started to modify the cryosphere, most notably in the unexpected shrinking of Arctic sea ice and the Greenland ice sheet. Our increasing use of land – for agriculture, cities, roads, mining, as well as all the pollution we were creating – had started to modify our biosphere.



Or, to put it another way: we had started to change our climate.

There are now more than 7 billion of us on Earth. As our numbers continue to grow, we continue to increase our need for far more water, far more food, far more land, far more transport and far more energy. As a result, we are accelerating the rate at which we're changing our climate. In fact, our activities are not only completely interconnected with, but now also interact with, the complex system we live on: Earth. It is important to understand how all this is connected.

At the same time, the global shipping and airline sectors are projected to continue to expand rapidly, transporting more of us, and more of the stuff we want to consume, around the planet year on year. That is going to cause enormous problems for us in terms of more CO2 emissions, more black carbon, and more pollution from mining and processing to make all this stuff.

But think about this. In transporting us and our stuff all over the planet, we are also creating a highly efficient network for the global spread of potentially catastrophic diseases. There was a global pandemic just 95 years ago – the Spanish flu pandemic, which is now estimated to have killed up to 100 million people. And that was before one of our more questionable innovations – the budget airline – was invented. The combination of millions of people travelling around the world every day, plus millions more people living in extremely close proximity to pigs and poultry – often in the same room, making it more likely that a new virus will jump the species barrier – means we are significantly increasing the probability of a new global pandemic. No wonder, then, that epidemiologists increasingly agree that a new global pandemic is now a matter of "when", not "if".

We are going to have to triple – at least – energy production by the end of this century to meet expected demand. To meet that demand, we will need to build, roughly speaking, something like 1,800 of the world's largest dams, or 23,000 nuclear power stations, or 14m wind turbines, or 36bn solar panels.


Or we could just keep going with predominantly oil, coal and gas – and build the roughly 36,000 new power stations that would require (see the rough sanity check of these build-out figures at the end of this section). Our existing oil, coal and gas reserves alone are worth trillions of dollars. Are governments and the world's major oil, coal and gas companies – some of the most influential corporations on Earth – really going to decide to leave the money in the ground, as demand for energy increases relentlessly? I doubt it.

Meanwhile, the emerging climate problem is on an entirely different scale. The problem is that we may well be heading towards a number of critical "tipping points" in the global climate system. There is a politically agreed global target – driven by the Intergovernmental Panel on Climate Change (IPCC) – to limit the global average temperature rise to 2C. The rationale for this target is that a rise above 2C carries a significant risk of catastrophic climate change that would almost certainly lead to irreversible planetary "tipping points", caused by events such as the melting of the Greenland ice sheet, the release of frozen methane deposits from Arctic tundra, or dieback of the Amazon. In fact, the first two are happening now – at below the 2C threshold. As for the third, we're not waiting for climate change to do this: we're doing it right now through deforestation.

And recent research shows that we look certain to be heading for a larger rise in global average temperatures than 2C – a far larger rise. It is now very likely that we are looking at a future global average rise of 4C – and we can't rule out a rise of 6C. This will be absolutely catastrophic. It will lead to runaway climate change, capable of tipping the planet into an entirely different state, rapidly. Earth will become a hellhole. In the decades along the way, we will witness unprecedented extremes in weather, fires, floods, heatwaves, loss of crops and forests, water stress and catastrophic sea-level rises. Large parts of Africa will become permanent disaster areas. The Amazon could be turned into savannah or even desert. And the entire agricultural system will be faced with an unprecedented threat.

More "fortunate" countries, such as the UK, the US and most of Europe, may well look like something approaching militarised countries, with heavily defended border controls designed to prevent millions of people from entering, people who are on the move because their own country is no longer habitable, or has insufficient water or food, or is experiencing conflict over increasingly scarce resources.


These people will be "climate migrants". The term "climate migrants" is one we will increasingly have to get used to. Indeed, anyone who thinks that the emerging global state of affairs does not have great potential for civil and international conflict is deluding themselves. It is no coincidence that almost every scientific conference that I go to about climate change now has a new type of attendee: the military.
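As a closing aside on the energy figures quoted above, here is a minimal Python sketch that checks whether they are mutually consistent; the per-unit capacities and the assumed ~36 TW of additional demand are illustrative guesses, not numbers given in the text:

# Rough consistency check of the build-out figures quoted above. The assumed
# per-unit capacities and the ~36 TW increment (tripling from roughly 18 TW of
# average demand today) are illustrative assumptions, not figures from the text.
extra_capacity_gw = 36_000  # ~36 TW of new capacity

assumed_unit_capacity_gw = {
    "world's largest dams":        20.0,       # Three Gorges-scale, ~20 GW each
    "nuclear power stations":      1.5,
    "wind turbines":               0.0025,     # ~2.5 MW each
    "solar panels":                0.000001,   # ~1 kW each (what the quoted figure implies)
    "conventional power stations": 1.0,
}

for name, capacity_gw in assumed_unit_capacity_gw.items():
    print(f"{name}: ~{extra_capacity_gw / capacity_gw:,.0f}")
# Prints roughly 1,800 dams, 24,000 nuclear stations, 14 million turbines,
# 36 billion panels and 36,000 conventional stations - close to the figures in
# the text, suggesting they describe nameplate capacity and ignore capacity factors.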



References 1. Hanson, R. (1995). Could Gambling Save Science? Encouraging an Honest Consensus. Social Epistemology, 9:1, 3-33. 2. Tickner, J. et al. (2000). The Precautionary Principle. URL: http://www.biotechinfo.net/handbook.pdf. 3. Foster, K.R. et al. (2000). Science and the Precautionary Principle. Science, 288, 979-981. URL: http://www.biotech-info.net/science_and_PP.html. 4. Lewis, D. (1986). Philosophical Papers (Vol. 2). New York: Oxford University Press. 5. Lewis, D. (1994). Humean Supervenience Debugged. Mind, 103(412), 473-490. 6. Bostrom, N. (1999). A Subjectivist Theory of Objective Chance, British Society for the Philosophy of Science Conference, July 8-9, Nottingham, U.K. 7. Jeffrey, R. (1965). The logic of decision: McGraw-Hill. 8. Kennedy, R. (1968). 13 Days. London: Macmillan. 9. Leslie, J. (1996). The End of the World: The Science and Ethics of Human Extinction. London: Routledge. 10. Putnam, H. (1979). The place of facts in a world of values. In D. Huff & O. Prewett (Eds.), The Nature of the Physical Universe (pp. 113-140). New York: John Wiley. 11. Kubrick, S. (1964). Dr. Strangelove or How I Learned to Stop Worrying and Love the Bomb: Columbia/Tristar Studios. 12. Shute, N. (1989). On the Beach: Ballentine Books. 13. Kaul, I. (1999). Global Public Goods: Oxford University Press. 14. Feldman, A. (1980). Welfare Economics and Social Choice Theory. Boston: Martinus Nijhoff Publishing. 15. Caplin, A., & Leahy, J. (2000). The Social Discount Rate. National Bureau of Economic Research, Working paper 7983. 16. Schelling, T.C. (2000). Intergenerational and International Discounting. Risk Analysis, 20(6), 833837. 17. Earman, J. (1995). Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes: Oxford University Press. 18. Morgan, M.G. (2000). Categorizing Risks for Risk Ranking. Risk Analysis, 20(1), 49-58. 19. Bostrom, N. et al. (1999). The Transhumanist FAQ. URL: http://www.transhumanist.org. 20. Powell, C. (2000). 20 Ways the World Could End. Discover, 21(10). URL: http://www.discover.com/oct_00/featworld.html. 21. Joy, B. (2000). Why the future doesn't need us. Wired, 8.04. URL: http://www.wired.com/wired/archive/8.04/joy_pr.html. 22. Drexler, K.E. (1992). Nanosystems. New York: John Wiley & Sons, Inc. 23. Drexler, K.E. (1985). Engines of Creation: The Coming Era of Nanotechnology. London: Forth Estate. URL: http://www.foresight.org/EOC/index.html. 24. Merkle, R. et al. (1991). Theoretical studies of a hydrogen abstraction tool for nanotechnology. Nanotechnology, 2, 187-195. 25. Freitas (Jr.), R.A. (2000). Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations. Zyvex preprint, April 2000. URL:http://www.foresight.org/NanoRev/Ecophagy.html. 26. Gubrud, M. (2000). Nanotechnology and International Security, Fifth Foresight Conference on Molecular Nanotechnology. URL:http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html. 27. Bostrom, N. (2001). Are You Living in a Simulation? Working-paper. URL: http://www.simulationargument.com. 28. Moravec, H. (1989). Mind Children. Harvard: Harvard University Press. 29. Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press. 30. Vinge, V. (1993). The Coming Technological Singularity. Whole Earth Review, Winter issue. 31. Bostrom, N. (1998). How Long Before Superintelligence? International Journal of Futures Studies, 2. URL: http://www.nickbostrom.com/superintelligence.html.



32. Moravec, H. (1998). When will computer hardware match the human brain? Journal of Transhumanism, 1. URL:http://www.transhumanist.com/volume1/moravec.htm. 33. Kurzweil, R. (1999). The Age of Spiritual Machines: When computers exceed human intelligence. New York: Viking. 34. Hanson, R. et al. (1998). A Critical Discussion of Vinge's Singularity Concept. Extropy Online. URL: http://www.extropy.org/eo/articles/vi.html. 35. Yudkowsky, E. (2001). Friendly AI 0.9. URL: http://singinst.org/CaTAI/friendly/contents.html. 36. National Intelligence Council (2000). Global Trends 2015: A Dialogue about the Future with Nongovernment Experts. URL:http://www.cia.gov/cia/publications/globaltrends2015/. 37. Nowak, R. (2001). Disaster in the making. New Scientist, 13 January 2001. URL: http://www.newscientist.com/nsplus/insight/bioterrorism/disasterin.html. 38. Jackson, R.J. et al. (2001). Expression of Mouse Interleukin-4 by a Recombinant Ectromelia Virus Suppresses Cytolytic Lymphocyte Responses and Overcomes Genetic Resistance to Mousepox. Journal of Virology, 73, 1479-1491. 39. Freitas (Jr.), R.A. (1999). Nanomedicine, Volume 1: Basic Capabilities. Georgetown, TX: Landes Bioscience. URL: http://www.nanomedicine.com. 40. Foresight Institute (2000). Foresight Guidelines on Molecular Nanotechnology, Version 3.7. URL: http://www.foresight.org/guidelines/current.html. 41. Foresight Institute (1997-1991). Accidents, Malice, Progress, and Other Topics. Background 2, Rev. 1. URL: http://www.foresight.org/Updates/Background2.html. 42. Schelling, T.C. (1960). The Strategy of Conflict. Cambridge, Mass.: Harvard University Press. 43. Knight, L.U. (2001). The Voluntary Human Extinction Movement. URL: http://www.vhemt.org/. 44. Schopenhauer, A. (1891). Die Welt als Wille und Vorstellung. Leipzig: F, A, Brockhaus. 45. Coleman, S., & Luccia, F. (1980). Gravitational effects on and of vacuum decay. Physical Review D, 21, 3305-3315. 46. Dar, A. et al. (1999). Will relativistic heavy-ion colliders destroy our planet? Physics Letters, B 470, 142-148. 47. Turner, M.S., & Wilczek, F. (1982). Is our vacuum metastable? Nature, August 12, 633-634. 48. Morrison, D. et al. (1994). The Impact Hazard. In T. Gehrels (Ed.), Hazards Due to Comets and Asteroids. Tucson: The University of Arizona Press. 49. Gold, R.E. (1999). SHIELD: A Comprehensive Earth Protection System. A Phase I Report on the NASA Institute for Advanced Concepts, May 28, 1999. 50. Huxley, A. (1932). Brave New World. London: Chatto & Windus. 51. Flynn, J.R. (1987). Massive IQ gains in many countries: What IQ tests really measure. Psychological Bulletin, 101, 171-191. 52. Storfer, M. (1999). Myopia, Intelligence, and the Expanding Human Neocortex. International Journal of Neuroscience, 98(3-4). 53. Merkle, R. (1994). The Molecular Repair of the Brain. Cryonics, 15(1 and 2). 54. Hanson, R. (1994). What If Uploads Come First: The crack of a future dawn. Extropy, 6(2). URL: http://hanson.gmu.edu/uploads.html. 55. Warwick, K. (1997). March of the Machines. London: Century. 56. Whitby, B. et al. (2000). How to Avoid a Robot Takeover: Political and Ethical Choices in the Design and Introduction of Intelligent Artifacts. Presented at AISB-00 Symposium on Artificial Intelligence, Ethics an (Quasi-) Human Rights. URL: http://www.cogs.susx.ac.uk/users/blayw/BlayAISB00.html. 57. Ettinger, R. (1964). The prospect of immortality. New York: Doubleday. 58. Zehavi, I., & Dekel, A. (1999). Evidence for a positive cosmological constant from flows of galaxies and distant supernovae. 
Nature, 401(6750), 252-254. 59. Bostrom, N. (2001). Are Cosmological Theories Compatible With All Possible Evidence? A Missing Methodological Link. In preparation. 60. Cirkovic, M., & Bostrom, N. (2000). Cosmological Constant and the Final Anthropic Hypothesis. Astrophysics and Space Science, 274(4), 675-687. URL:http://xxx.lanl.gov. 61. Bostrom, N. (2001). The Future of Human Evolution. Working paper. URL: http://www.nickbostrom.com. 62. Hanson, R. (1998). Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization. Working paper. URL:http://hanson.gmu.edu/workingpapers.html. 63. Freitas(Jr.), R.A. (1980). A Self-Reproducing Interstellar Probe. J. Brit. Interplanet. Soc., 33, 251-264.



64. Bostrom, N. (2000). Predictions from Philosophy? Coloquia Manilana (PDCIS), 7. URL: http://www.nickbostrom.com/old/predict.html. 65. Chislenko, A. (1996). Networking in the Mind Age. URL: http://www.lucifer.com/~sasha/mindage.html. 66. Barrow, J.D., & Tipler, F.J. (1986). The Anthropic Cosmological Principle. Oxford: Oxford University Press. 67. Tipler, F.J. (1982). Anthropic-principle arguments against steady-state cosmological theories. Observatory, 102, 36-39. 68. Brin, G.D. (1983). The `Great Silence': The Controversy Concerning Extraterrestrial Intelligent Life. Quarterly Journal of the Royal Astronomical Society, 24, 283-309. 69. Hanson, R. (1998). The Great Filter - Are We Almost Past It? Working paper. 70. Carter, B. (1983). The anthropic principle and its implications for biological evolution. Phil. Trans. R. Soc., A 310, 347-363. 71. Carter, B. (1989). The anthropic selection principle and the ultra-Darwinian synthesis. In F. Bertola & U. Curi (Eds.), The anthropic principle (pp. 33-63). Cambridge: Cambridge University Press. 72. Hanson, R. (1998). Must Early Life be Easy? The rhythm of major evolutionary transitions. URL: http://hanson.berkeley.edu/. 73. Leslie, J. (1989). Risking the World's End. Bulletin of the Canadian Nuclear Society, May, 10-15. 74. Bostrom, N. (2000). Is the end nigh?, The philosopher's magazine, Vol. 9 (pp. 19-20). URL: http://www.anthropic-principle.com/primer.html. 75. Bostrom, N. (1999). The Doomsday Argument is Alive and Kicking. Mind, 108(431), 539-550. URL: http://www.anthropic-principle.com/preprints/ali/alive.html. 76. Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge, New York. URL: http://www.anthropic-principle.com/book/. 77. Bostrom, N. (2001). Fine-Tuning Arguments in Cosmology. In preparation. URL: http://www.anthropic-principle.com. 78. Bostrom, N. (2000). Observer-relative chances in anthropic reasoning? Erkenntnis, 52, 93-108. URL: http://www.anthropic-principle.com/preprints.html. 79. Bostrom, N. (2001). The Doomsday argument, Adam & Eve, UN++, and Quantum Joe. Synthese, 127(3), 359-387. URL: http://www.anthropic-principle.com. 80. Sjรถberg, L. (2000). Factors in Risk Perception. Risk Analysis, 20(1), 1-11. 81. Frieze, I. et al. (1978). Women and sex roles. New York: Norton. 82. Waldeman, M. (1994). Systematic Errors and the Theory of Natural Selection. The American Economics Review, 84(3), 482-497. 83. Cowen, T., & Hanson, R. (2001). How YOU Do Not Tell the Truth: Academic Disagreement as SelfDeception. Working paper. 84. Kruger, J., & Dunning, D. (1999). Unskilled and Unaware if It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments.Journal of Personality and Social Psychology, 77(6), 1121-1134. 85. Evans, L. (1991). Traffic Safety and the Driver: Leonard Evans. URL: http://www.scienceservingsociety.com/book/. 86. Svenson, O. (1981). Are we less risky and more skillful that our fellow drivers? Acta Psychologica, 47, 143-148. 87. Westie, F.R. (1973). Academic Expectations of Professional Immortality: A Study of Legitimation. The American Sociologists, 8, 19-32. 88. Gilovich, T. (1991). How We Know What Isn't So. New York: Macmillan. 89. Paulhaus, D.L. (1986). Self-Deception and Impression Management in Test Responses. In A. Angeitner & J.S. Wiggins (Eds.), Personality Assessment via Questionnaires: Current Issues in Theory and Measurement. New York: Springer. 90. Roth, D.L., & Ingram, R.E. (1985). 
Factors in the Self-Deception Questionnaire: Associations with depression. Journal of Personality and Social Psychology, 48, 243-251. 91. Sackheim, H.A., & Gur, R.C. (1979). Self-deception, other-deception, and self-reported psychopathology. Journal of Consulting and Clinical Psychology, 47, 213-215. 92. Sjöberg, L. (1994). Strålforskningens Risker: Attityder, Kunskaper och Riskuppfattning. RHIZIKON: Rapport från Centrum för Riskforskning, Handelshögskolan i Stockholm, 1. 93. Urquhart, J., & Heilmann, K. (1984). Risk Watch: The Odds of Life. New York: Facts on File Publications.



94. Taylor, H. (1999). Perceptions of Risks. The Harris Poll #7, January 27. URL: http://www.harrisinteractive.com/harris_poll/index.asp?PID=44. 95. Benjamin, D.K. et al. (2001). Individuals' estimates of risks of death: Part II - New evidence. Journal of Risk and Uncertainty, 22(1), 35-57. 96. Drexler, K.E. (1988). A Dialog on Dangers. Foresight Background 2, Rev. 1. URL: http://www.foresight.org/Updates/Background3.html. 97. McCarthy, T. (2000). Molecular Nanotechnology and the World System. . URL: http://www.mccarthy.cx/WorldSystem/intro.htm. 98. Forrest, D. (1989). Regulating Nanotechnology Development. . URL: http://www.foresight.org/NanoRev/Forrest1989.html. 99. Jeremiah, D.E. (1995). Nanotechnology and Global Security. Presented at the Fourth Foresight Conference on Molecular Nanotechnology. URL:http://www.zyvex.com/nanotech/nano4/jeremiahPaper.html. 100. Brin, D. (1998). The Transparent Society. Reading, MA.: Addison-Wesley. 101. Hanson, R. (2000). Showing That You Care: The Evolution of Health Altruism. . URL: http://hanson.gmu.edu/bioerr.pdf. 102. Rawls, J. (1999). A Theory of Justice (Revised Edition ed.). Cambridge, Mass.: Harvard University Press. 103. Kirk, K.M. (2001). Natural Selection and Quantitative Genetics of Life-History Traits in Western Women: A Twin Study. Evolution, 55(2), 432-435. URL:http://evol.allenpress.com/evolonline/?request=get-document&issn=00143820&volume=055&issue=02&page=0423. 104. Bostrom, N. (2001). Transhumanist Values. Manuscript. URL: http://www.nickbostrom.com. 1] Yudkowsky (2001). Creating Friendly AI 1.0. Machine Intelligence Research Institute. [2] Anderson & Anderson, eds. (2006). IEEE Intelligent Systems, 21(4). [3] Anderson & Anderson, eds. (2011). Machine Ethics. Cambridge University Press. [4] Arkin (2009). Governing Lethal Behavior in Autonomous Robots. Chapman and Hall. [5] Capurro, Hausmanninger, Weber, Weil, Cerqui, Weber, & Weber (2006). International Review of Information Ethics, Vol. 6: Ethics in Robots. [6] Danielson (1992). Artificial morality: Virtuous robots for virtual games. Routledge. [7] Lokhorst (2011). Computational meta-ethics: Towards the meta-ethical robot. Minds and Machines. [8] McLaren (2005). Lessons in Machine Ethics from the Perspective of Two Computational Models of Ethical Reasoning. AAAI Technical Report FS-05-06: 70-77. [9] Powers (2005). Deontological Machine Ethics. AAAI Technical Report FS-05-06: 79-86. [10] Sawyer (2007). Robot ethics. Science, 318(5853): 1037. [11] Wallach, Allen, & Smit (2008). Machine morality: Bottom-up and top-down approaches for modeling human moral faculties. AI and Society, 22(4): 565–582. [12] Allen (2002). Calculated morality: Ethical computing in the limit. In Smit & Lasker, eds.,Cognitive, emotive and ethical aspects of decision making and human action, vol I. Baden/IIAS. [13] Good (1965). Speculations concerning the first ultraintelligent machine. Advanced in Computers, 6: 31-88. [14] MacKenzie (1995). The Automation of Proof: A Historical and Sociological Exploration. IEEE Annals, 17(3): 7-29. [15] Nilsson (2009). The Quest for Artificial Intelligence. Cambridge University Press. 1. Federation of American Scientists, Introduction to Biological Weapons (2011). Available at http://www.fas.org/programs/bio/bwintro.html (28 December 2012). 2. M. Ainscough, Next Generation Bioweapons: Genetic Engineering and Biowarfare (April 2002). Available at http://www.au.af.mil/au/awc/awcgate/cpc-pubs/biostorm/ainscough.pdf (28 December 2012). 3. J. van Aken, E. Hammond, EMBO Rep. 
4, S57–S60 (2003). 4. A. Hessel, M. Goodman, S. Kotler, Hacking the President’s DNA. The Atlantic (November 2012). Available at http://www.theatlantic.com/magazine/archive/2012/11/hacking-the-presidentsdna/309147/?single_page=true (28 December 2012). 5. Advances in Genetics Could Create Deadly Biological Weapons, Clinton Warns (07 July 2011). Available at http://www.breakingnews.ie/world/advances-in-genetics-could-create-deadly-biological-weaponsclinton-warns-531347.html (28 December 2012).



6. European Bioinformatics Institute, Access to Completed Genomes (17 December 2012). Available at http://www.ebi.ac.uk/genomes/index.html (28 December 2012). 7. D. Kay, Genetically Engineered Bioweapons (2003). Available at http://www.aaas.org/spp/yearbook/2003/ch17.pdf (28 December 2012).  "Global temperatures." U.K. Met Office. http://www.metoffice.gov.uk/climatechange/guide/science/monitoring/global (accessed August 13, 2014).  2.Hansen, J., R. Ruedy, M. Sato, and K. Lo. "Global Surface Temperature Change."Reviews of Geophysics 48, no. 4 (2010): RG4004.  3.a. b. c. Le Treut, H., R. Somerville, U. Cubasch, Y. Ding, C. Mauritzen, A. Mokssit, T. Peterson and M. Prather. Historical Overview of Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2007.  4.U.K. Met Office. Warming: A guide to climate change. Exeter, U.K.: Met Office Hadley Centre, 2011.  5.Hansen, J., and M. Sato. Paleoclimate Implications for Human-Made Climate Change. In:Climate change inferences from paleoclimate and regional aspects. Wien: Springer, 2012.  6.Shakun, Jeremy D., and Anders E. Carlson. "A global perspective on Last Glacial Maximum to Holocene climate change." Quaternary Science Reviews 29, no. 15-16 (2010): 1801-1816.  7.a. b. c. IPCC. Summary for Policymakers. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2007.  8."Global response to climate change." The Royal Society. https://royalsociety.org/policy/publications/2005/global-response-climate-change/ (accessed August 13, 2014).  9.U.K. Met Office. Evidence: The state of the climate. Exeter, U.K.: Met Office Hadley Centre, 2010.  10.National Research Council. Ecological impacts of climate change. Washington, D.C.: National Academies Press, 2008.  11.The World Bank. World Development Report 2010: Development and climate change. Washington, DC: World Bank and Oxford University Press, 2010.  12.Allison, I.. The Copenhagen diagnosis updating the world on the latest climate science. Sydney: UNSW Climate Change Research Centre, 2009.  13.Le Quéré, C., A. K. Jain, M. R. Raupach, J. Schwinger, S. Sitch, B. D. Stocker, N. Viovy, S. Zaehle, C. Huntingford, P. Friedlingstein, R. J. Andres, T. Boden, C. Jourdain, T. Conway, R. A. Houghton, J. I. House, G. Marland, G. P. Peters, G. Van Der Werf, A. Ahlström, R. M. Andrew, L. Bopp, J. G. Canadell, E. Kato, P. Ciais, S. C. Doney, C. Enright, N. Zeng, R. F. Keeling, K. Klein Goldewijk, S. Levis, P. Levy, M. Lomas, and B. Poulter. "The global carbon budget 1959– 2011." Earth System Science Data Discussions5, no. 2 (2012): 1107-1157.  14.Bousquet, P., S. C. Tyler, P. Peylin, G. R. Van Der Werf, C. Prigent, D. A. Hauglustaine, E. J. Dlugokencky, J. B. Miller, P. Ciais, J. White, L. P. Steele, M. Schmidt, M. Ramonet, F. Papa, J. Lathière, R. L. Langenfelds, C. Carouge, and E.-G. Brunke. "Contribution of anthropogenic and natural sources to atmospheric methane variability." Nature 443, no. 7110 (2006): 439-443.  15.Denman, K.L., G. Brasseur, A. Chidthaisong, P. Ciais, P.M. Cox, R.E. Dickinson, D. Hauglustaine, C. Heinze, E. Holland, D. Jacob, U. Lohmann, S Ramachandran, P.L. da Silva Dias, S.C. Wofsy and X. Zhang. 
Couplings Between Changes in the Climate System and Biogeochemistry. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2007.  16.Montzka, S.A., S. Reimann, A. Engel, K. Krüger, S. O’Doherty, and W.T. Sturges. OzoneDepleting Substances (ODSs) and Related Chemicals, Chapter 1 in Scientific Assessment of Ozone Depletion: 2010, Global Ozone Research and Monitoring Project–Report No. 52, 516 pp., World


