HURJ Volume 06 - Fall 2006


mind, brain

&

consciousness

HURJ Fall 2006 Issue 6


“Can the brain understand the brain? Can it understand the mind? Is it a giant computer, or some other kind of giant machine, or something more?” – David Hubel, 1979


Table of Contents
fall 2006 focus: mind, brain & consciousness

10   Cogito, Ergo, Busted? / Jason Liebowitz
14   Neural Stem Cells: Just What the Doctor Ordered? / Ishrat Ahmed
16   The Mind-Machine Interface / Kevin Chen
20   So You Want To Sleep With Your Mom? / Adam Canver
23   Capital Punishment: Organic Basis of Criminal Behavior / Jocelyn Fields
26   Can Machines Be Conscious? / Stephen Berger
28   Frequent Mental Activity Reduces the Risk of Dementia / Harsha Prabhala
31   Leptin and CCK: The Alternative To Atkins / Nancy Tray

research spotlights: Questions That Plague a Bioethicist ... Focus on Dr. Jennifer Elisseeff: Using Stem Cells to Produce Bone for Osteoarthritis Patients ... The Pen is Mightier Than the Parabola: Science in Literature ... Five Questions for Mikhail Pletnikov, MD, PhD


HURJ 2006-2007 Editorial Board:
Editor-in-Chief of Operations: Sravisht Iyer
Editor-in-Chief of Content: Priya Puri
Editors-in-Chief of Layout: Nik Krumm, Bryce Olenczak
Focus Editor: Sadajyot Brar
Spotlight Editor: Daria Nikolaeva
Copy Editor: Defne Arslan

HURJ Staff: Ishrat Ahmed, Stephen Berger, Adam Canver, Kevin Chen, Eric Cochran, Manuel Datiles IV, Jocelyn Fields, Preet Grewal, Jason Liebowitz, Krisztina Moldovan, Nancy Tray, Winnie Tsang, Harsha Prabhala, Julia Zhou

About HURJ: The Hopkins Undergraduate Research Journal provides undergraduates with a valuable resource for learning about research done by their peers and about current issues of interest. The journal comprises three sections: original research, a current focus topic, and student and faculty spotlights. Students are encouraged to contribute their work either by submitting their research or by writing for our focus or spotlight sections. The tremendous interest in our focus section has necessitated an application process for its writers, while our research and spotlight sections remain open to all contributors.

About the Covers: On Inside Front: The IBM Bluegene/L is the world’s fastest supercomputer, consisting of 65,536 dual-processor nodes, functionally arranged in a three-dimensional torus network. On Inside Back: Drawing by Robert Fludd from the 17th Century, depicting the mind’s senses as the relationship between the world and the brain.

Hopkins Undergraduate Research Journal Johns Hopkins University Mattin Center, Suite 210 3400 N. Charles St. Baltimore, MD 21218 hurj@jhu.edu http://www.jhu.edu/hurj


Letter from the editors


On behalf of the staff, we would like to welcome you to the Fall 2006 issue of the Hopkins Undergraduate Research Journal (HURJ). HURJ is an entirely student-directed enterprise, written, edited, and designed by undergraduates at Hopkins. HURJ was the first undergraduate research journal on campus, formed in 2001 with the intention of keeping the undergraduate community informed of research and intellectual achievements on campus. Since its inception, the Hopkins campus has seen the birth and growth of several research-themed journals. That HURJ has continued to expand and to attract quality submissions is a testament to the quality and quantity of the work undergraduates at Hopkins actively engage in.

The current, and first ever, Fall issue of the journal features two sections, the student-written focus and the spotlights, that were not part of the seminal Spring 2002 journal. The birth of the Fall issue marks a major expansion for HURJ and represents a tremendous milestone for the journal and all those who have participated in it over the years. The HURJ staff has been among the most disciplined and committed an editor could hope for, and this issue is an excellent showcase of their dedication and abilities.

In this issue, HURJ takes a closer look at the human mind, attacking the topic from all perspectives. Articles in this section examine the role of mental activity in modulating dementia, the role of particular genes in obesity, and neural stem cells. Other authors examine Freudian theories, the makeup of the criminal mind, and the new (and somewhat frightening) paradigm of brain fingerprinting. Adding to the depth are articles that examine how advances in the neurosciences and bioengineering have affected the mind-machine interface and whether conscious machines are a possibility in the future. The breadth and quality of these articles marks the best focus section HURJ has ever produced.

In Spotlights, HURJ chronicles the experiences of several Hopkins professors and students. Professors profiled in this issue include a bioethicist, a bioengineer, a neuroscientist, and a professor of literature with groundbreaking views on human sexuality.

This issue of HURJ would not have been possible without the dedication and support of the HURJ staff, Hopkins faculty, and HURJ sponsors. We would like to thank the students and professors who have contributed their time by reviewing articles, researching topics, and sharing their experiences with the Hopkins community. We also thank the Student Activities Commission and the Office of Student Life for their continued support. Finally, we are indebted to the Digital Media Center for the use of their facilities, equipment, and their undying support.

We welcome your ideas, comments, and questions, and we hope this journal serves as encouragement or inspiration to begin your own project and contribute to the intellectual advancement of this university.

Sravisht Iyer

Priya Puri


hurj@jhu.edu

UNDERGRADUATE RESEARCH SYMPOSIUM
Sponsored by the Nu Rho Psi Honor Society
See and Be Seen: Present your research. Dates and applications forthcoming, or email urs2007@jhu.edu


research spotlights

Questions That Plague a Bioethicist
Ishrat Ahmed / HURJ Staff Writer

Is military-enforced quarantine a feasible and ethical solution to pandemics? Is the use of antidepressants such as Prozac wrong? Is it possible to reconcile moral agency with the advances of science? These are just a few of the questions that concern Dr. Hilary Bok, a professor in the department of philosophy at JHU. As her research indicates, Dr. Bok focuses on ethics, bioethics, freedom of the will, and Kant.

Dr. Bok's interest in the feasibility and ethical nature of military-enforced quarantine stems from the AIDS hysteria of the 1980s, during which even Congress discussed quarantining individuals with AIDS or at risk of contracting AIDS. More recently, President Bush mentioned that quarantine is a viable option that ought to be considered if the avian flu ever spread to humans. Dr. Bok, as well as epidemiologists, believes that military-enforced quarantine in most cases, including an avian flu pandemic, is ethically wrong and ineffective.

The rate of contagion is an essential factor in predicting the efficacy of quarantine. If the number of new cases of a disease is lower than the number of infected people, the disease will die out and quarantine may be effective. The avian flu, however, would be extremely contagious if it spread to humans; thus quarantine would not control the pandemic. Another important factor involves the course of the disease. Quarantine is much more effective if the disease becomes symptomatic before it becomes contagious. Otherwise, the only way to identify the infected is through contact tracing or testing, both of which are difficult and time-consuming. The avian flu, which is contagious before it is symptomatic, is the least likely scenario for quarantine.

In addition to these epidemiological reasons, Dr. Bok believes military-enforced quarantine is ethically wrong. The government would need to deploy tremendous military resources, there could be shoot-to-kill orders, and, most importantly, people would lose their freedom. All these measures are futile if the quarantine is ineffective. According to Dr. Bok, "it's a waste of people's time, people's lives, and people's constitutional liberties." One of the roles of bioethicists is advance public education; Dr. Bok believes it is essential for people to understand the concept of military-enforced quarantine and to be prepared before a pandemic occurs.

Dr. Bok is also investigating ethical issues that arise out of mind-brain relationships. She is interested in understanding why people object to taking antidepressants and other personality-, mood-, or character-altering pharmaceutical drugs. Several writers have presented the argument that individuals should overcome depression through strength of character: taking drugs is wrong because it is a physical way of altering one's mood. Dr. Bok found this objection interesting, but inadequate.

Continued on Page 7...
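The die-out reasoning above (quarantine can help only when each infected person passes the disease, on average, to fewer than one new person) can be made concrete with a toy branching-process simulation. The sketch below is not from the article; the reproduction numbers, case counts, and function name are invented purely for illustration.

import numpy as np

def simulate_outbreak(r_eff, initial_cases=10, generations=20, seed=0):
    # Toy branching process: each infected person causes, on average,
    # r_eff new infections in the next generation.
    rng = np.random.default_rng(seed)
    cases = initial_cases
    history = [cases]
    for _ in range(generations):
        cases = int(rng.poisson(r_eff, size=cases).sum())
        history.append(cases)
    return history

# When each case produces fewer than one new case on average (r_eff < 1),
# the outbreak shrinks toward zero; above one, it tends to keep growing.
print(simulate_outbreak(0.8))   # typically dwindles to zero
print(simulate_outbreak(1.5))   # typically keeps growing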


Focus on Dr. Jennifer Elisseeff: Using Stem Cells to Produce Bone for Osteoarthritis Patients
Adam Canver / HURJ Staff Writer

[Photo: Dr. Jennifer H. Elisseeff, Department of Biomedical Engineering]

As the prospects of stem cell usage and tissue engineering become more prominent within the scientific world, so too does the work done by Dr. Jennifer H. Elisseeff become increasingly significant. A top young researcher in the Department of Biomedical Engineering, Elisseeff runs a dynamic and successful lab with over a dozen graduate students and nearly as many undergraduates. Her research deals with identifying and understanding the mechanisms which control the differentiation of stem cells into functional tissues, primarily cartilage and bone. Some big-picture goals of the research include producing a permanent source of cells which could be used to repair cartilage via injection of the cells into patients with osteoarthritis.

Elisseeff grew up under the influence of engineering, as her father was a professor of ocean engineering. She did her undergraduate education at Carnegie Mellon University in Pittsburgh, Pennsylvania. She came in as a biology major because she was interested in medicine, but was conflicted by her similar interest in physics. To accommodate the two in a way that made sense, she ended up with a B.S. in Chemistry and worked in a research laboratory involving polymer chemistry. After college, she joined Dr. Robert Langer's biomedical engineering group at the Massachusetts Institute of Technology (MIT) as a graduate PhD student. Elisseeff was enrolled in the Harvard-MIT Division of Health Sciences and Technology program, jointly governed by MIT and Harvard University. Just like at Hopkins, she did lab work as necessary for PhD completion and took all courses at the medical school. This was a significant factor in her interest in medical applications of her current research. After earning her PhD degree, she worked for one year as a post-doctoral fellow at the National Institutes of Health in developmental biology. This was short-lived because she soon after got a job offer from the Johns Hopkins University Department of Biomedical Engineering.

As an assistant professor at Hopkins for five years, Elisseeff has made her presence known. In addition to intense research in her field, she teaches courses at Hopkins, such as Cellular and Tissue Engineering. Few courses directly related to the field of tissue engineering are offered, since it is a relatively new and emerging field of science, but she plans on teaching more courses in the future. Outside of the classroom, she teaches undergraduates in the lab. This is very important because it allows undergraduates to gain a sense of what it means to be in a lab. Important, but often overlooked, skills are developed by just being in a lab, such as the ability to help and accommodate others. Elisseeff's emphasis on undergraduate training helps make the transition from undergraduate to graduate school easier. She finds the Hopkins undergraduate body to be especially eager to learn and scientifically curious.

What lies ahead for Elisseeff is expertise in the field of tissue and polymer engineering. She has collaborations with clinicians as well as many other scientists. She focuses on advancing her lab and instructing more graduate and undergraduate students.


The Questions of the Bioethicist

Prozac and other antidepressants behave similarly to the endorphins that result from physical exercise, such as running. However, people do not have objections to exercise as a physical method of altering one's mood. Dr. Bok disagrees with the idea that the dissimilarity in effort between running and taking drugs should make a difference. She believes that depression often obstructs goals. Prozac and other antidepressants cannot get individuals to their goals, but can allow them to try harder. For example, there is never an end to the goal of being a good person. Prozac can simply help one strive for the goal, but will never get the individual to fulfill it. Therefore, the use of antidepressants ought to be acceptable in overcoming depression.

This mind-brain relationship issue is part of the greater question of reconciling moral agency with advances in science. As Dr. Bok explains, "the more science explains about our human behavior, the more we question our will." In Freedom and Responsibility, Dr. Bok provides an account of the compatibility of mechanism, such as causal determinism, with the ideas of freedom of the will and moral responsibility. In short, mechanism rests on theoretical reasoning, which simply provides descriptive information about the real world. Freedom of the will and moral responsibility, however, emerge from practical reasoning, which assists the moral agent in deciding on a course of action. Since practical reasoning and theoretical reasoning have different purposes, they do not conflict. As a result, mechanism and freedom of the will, which rely on theoretical and practical reasoning respectively, are compatible.

Dr. Bok enjoys bioethics because it requires familiarity with science, public health, and medicine. She is currently investigating antidepressants, moral responsibility, and Kant and animals, and she looks forward to more interesting projects through the bioethics institute.

The Pen is Mightier than the Parabola: Science in Literature Jason Liebowitz / HURJ Staff Writer Sigmund Freud’s groundbreaking views on human sexuality caught many people in the early twentieth century quite off-guard. In a departure from classical ideology, Freudian doctrine suggests that aesthetic experience (i.e. creating physical, visual, or auditory art) is simply a means of expressing one’s sexual urges in a more socially acceptable manner. However, according to Dr. Richard Halpern, the English Department’s Director of Undergraduate Studies, conventional wisdom concerning Freud’s dogma may be misleading, especially when taken out of context. In his book Shakespeare’s Perfume: Sodomy and Sublimity in the Sonnets, Wilde, Freud and Lacan, Dr. Halpern observes that Freud, Shakespeare, Oscar Wilde, and other authors built many of their writings upon the idea that sexuality arises out of aesthetic experience and not vice versa. This thesis finds its origins in the first chapter of St. Paul’s Epistle to the Romans, in which Paul declared that the Lord afflicted the Greeks with homosexuality as punishment for being too attached to statues. While Paul’s statement was mostly a condemnation of excessive idolatry, the Greeks viewed statues as both objects of worship and of aesthetic appeal, a fact taken to heart by writers of later generations. Thus, the idea of sexuality emerging from aesthetics has been carried on in literature over the years, including in Freud’s famous study of Leonardo da Vinci.

Dr. Halpern’s path to becoming a professor of literature is almost as complex as Freud’s writings. Fascinated by elementary particle physics since he was a child, Dr. Halpern entered college as a physics major only to discover that the labs did not suit him. Dr. Halpern then dabbled in math for some time, but his experiences reading poetry by Yeats allowed him to discover that literature could both challenge and move him. His interests grew to include science in literature, psychoanalytic theories, aesthetics, and literary drama. Dr. Halpern has taught and continues to teach courses on Shakespeare, great writers of the Continental Renaissance, and the works of Karl Marx, and he hopes to introduce a class that would examine the psychoanalytic and social implications of revenge tragedy. Among the most exciting topics in science and literature are those that are just recently emerging. Dr. Halpern explains, “What is exciting about literary theory right now is that there has been a new surge of interest in science on the part of the humanities, so that a number of the paradigms of the so-called ‘new sciences,’ such as chaos theory, complexity theory, self-organizing systems, and autopoietic systems, have been very suggestive for people who think about literature and culture. These theories offer models for thinking through problems in new ways.” So, whether discussing limaçons or Lacan, Dr. Halpern is able to combine his passions for science and the humanities into a literary labor of love.


Five Questions for Mikhail Pletnikov, MD, PhD Krisztina Moldovan / HURJ Staff Writer

Q: Could you tell me a bit about your educational background and briefly describe your research?
My earliest research interests involved the physiology of higher brain activity. I was involved in Cognitive and Behavioral Neuroscience research, including studies based on Pavlov's conditioned response, and memory studies in animal models through learning and memory tasks. I also used lesion techniques and infections to block neurons in specific brain regions, looking to investigate what roles areas of the brain played through the various periods of brain growth and development, eventually applying this information to the study of pathologic conditions. Later I went on to investigate immediate early response genes and to look specifically at the development of the hippocampus and cerebellum in animals, looking to see how these regions contribute to brain development at different ages. It was at this point that I became particularly interested in developmental physiology, and also when I first started working with Borna Disease Virus. BDV is a neurotropic virus that causes Borna disease, an infectious neurological syndrome that produces abnormal behavior and can be fatal. When rats are neonatally infected with BDV, the virus causes a pattern of neurodegeneration that mirrors neurodevelopmental disorders in humans. Through BDV infection I started to study neurotoxicity in the brain and the effects of toxicity on neuronal survival. One of the major questions that we're trying to answer is how the immune system of the brain reacts to CNS insult, and in particular what the role of microglial cells is in this immune reaction. Later on, I also added genetic work to the research conducted at my lab, and began to work with DISC1 (Disrupted in Schizophrenia 1) mutants, using DISC1 knockout mice as genetic models associated with schizophrenia.

Q: How did you come to choose this field of study? What influenced you in your career path; what makes neuroscience so interesting to you?
I was always interested in the biological and medical aspects of psychology. I wanted to learn more about biology-based psychology. Since there were no neurobiology courses available when I was going to school, and going fully into the study of biology didn't appeal to me, I compromised by going to medical school. At present, even though I don't practice as a clinician, I work close to human situations. My medical education serves as a bridge between the research I do in the lab and the medical problems the research can be applied to. Even though I don't see patients, I can apply what I am doing to medical problems. In the past, psychology wasn't considered a true part of biology or medicine. Fortunately, this is not the case anymore, and nowadays if you want to understand human psychology fully you have to study neuroscience.


Q: There have been so many advances in the field of neuroscience recently. In your opinion, which of its areas is still unknown and worth investigating?
When looking at all the recent advances in neuroscience, I feel like there is a lot of knowledge out there but not a lot of understanding. We know a lot, but we don't know how to put that knowledge together to form a full picture. For example, we have identified so many proteins, but often we do not know how they interact or what their exact roles are. I think it would be very interesting if we could try to bring together elements of brain activity and apply such knowledge to the understanding of psychological syndromes. Consciousness is our last frontier.

Q: What advice would you give to an undergraduate wanting to pursue a career in the sciences or medicine?
I would tell them that, in my opinion, as far as research is concerned, systems biology is the way to go. I think some of the next great developments will come in this field. If it is not put in the context of biological systems, a lot of our new knowledge, like the discovery of new genes, cannot be applied or put to use.

Q: Looking back at your academic career, what achievement would you say you were most proud of?
I always thought of myself as able to connect dots, to take a look at a problem from an unexpected angle. The Borna virus model is a good case in point. I knew the important characteristics of autism and those of BDV infection as it affects neurons, and also the reaction of kids to vaccinations, and was able to connect these dots and realize that neonatal BDV infection in rats could be used as an animal model for autism and other neurodevelopmental disorders. It's partially my educational background, studying medicine and physiology, which helped me to see the big picture and realize how different disciplines or areas of study can be put together and interrelated. Some would say that it is more advantageous in one's career to just focus deeply on one field and learn as much as you can about it, in order to have the deepest possible knowledge of that specific discipline. I, however, have always been most interested in connecting different fields and areas of study, trying to find the bridges that connect them and relate one to the other. I also think it is very important to learn from others' mistakes, not merely your own. Human society is built this way: one shouldn't only look at one's own achievements and failures but also at those of others to advance and gain understanding.




Cogito, Ergo, Busted? Scientific and Ethical Qualms Concerning Brain Fingerprinting
Jason Liebowitz / HURJ Staff Writer

With the past year's controversy surrounding the Bush Administration's eavesdropping and wire-tapping activities, many people have begun to wonder where the line of privacy is drawn. As unsettling as it is that federal intelligence agencies may be listening to private phone calls or reading personal email, at least there is always the comforting thought that nobody can penetrate the sanctity of mind…or can they? Following the tragic events of September 11th, a form of interrogation called brain fingerprinting has gained considerable attention amid claims that it is more effective at indicating guilt than traditional polygraph tests. As the name of the technique implies, the efficacy of brain fingerprinting lies in the fact that the suspect's mind is essentially being read by questioners who use electrodes placed around the person's scalp to measure brain activity. Instinctively, many shudder at the thought of their psyches being probed in such an invasive manner, but what makes brain fingerprinting different from a lie detector examination? Are the electrochemical processes of the brain inviolably distinct from the vital functioning of the rest of the body? If brain fingerprinting is proven most effective in catching criminals, then should emotional reservations hinder the protection of society? These, as well as other scientific and ethical questions, must be examined in order to arrive at some conclusion concerning how brain fingerprinting and related technologies ought to be used in the present and into the future.

Current Technological Limitations: It is important to note that, so far, brain fingerprinting is effective only in cases in which explicit details surrounding a crime are known only to investigators and potential perpetrators. This scenario is necessary because the technique is essentially testing whether or not a person subconsciously reacts to sensitive information, such as a code word or image, which conjures up memories of the event in question. That being said, this would still imply that the interrogation method would be applicable to terrorist plots, which are highly secretive and esoteric. The examination begins by affixing electrodes to the scalp of the suspect. These electrodes are used to produce an electroencephalogram (EEG), a record of neural activity in the cortex that is translated into waveforms analyzed by a computer. Next, the subject is shown words and images pertinent to the case, as well as some information purposefully unrelated to the matter, and the EEG measures the neurological reactions to the evidence. If the subject is a collaborator in the crime, the EEG should show some signature of familiarity with the stimulus. On the other hand, an individual who is not involved should show a pattern of unfamiliarity or surprise when presented with the words or images. Based on the magnitude of mental responses, investigators can determine whether or not the suspect is privy to details of the crime and, thus, involved in some way with its planning or execution.

The Case for Brain Fingerprinting: The most enthusiastic advocacy for the use of brain fingerprinting comes, not surprisingly, from Dr. Lawrence Farwell, a former Harvard University research associate and the inventor of brain fingerprinting technology. According to Dr. Farwell, one important use of the technique can actually involve proving the innocence of the wrongly convicted. An example of such a case is that of Terry Harrington, who was convicted of murder charges in 1978 but requested to be tested through brain fingerprinting in 2000. The test was carried out and indicated that Harrington was, in fact, not guilty, a finding substantiated by his accuser, who later admitted to committing perjury during the original trial in order to avoid prosecution. The Iowa District Court ruled that brain fingerprinting is "generally accepted" as reliable in the scientific community and, thus, the test was admitted. As a result, the court overturned Harrington's conviction, the state prosecution elected not to retry Harrington, and he was set free. Dr. Farwell goes on to claim that brain fingerprinting has the potential to apprehend serial killers, identify would-be terrorists, and even branch into other sectors of society by detecting the onset of Alzheimer's and testing the effectiveness of advertising campaigns. In the report "Federal Agency Views on Brain Fingerprinting," a number of FBI agents also said they envision brain fingerprinting as very useful in interviewing suspects of premeditated crimes and for testing suspected terrorists for recognition of known al-Qaeda code words.

The Case Against Brain Fingerprinting: As mentioned before, some of the main justifications for not using brain fingerprinting are technological constraints and limited real-life applications. In the same report cited above, the CIA, FBI, Department of Defense, and Secret Service unanimously agreed that most cases do not involve specific information that is known only to the investigator and to the criminal (a prerequisite for the use of brain fingerprinting) and that the technique cannot be used as a general screening tool. Moreover, these agencies fear that brain fingerprinting may produce a sizable number of false positives and false negatives, meaning that the computer would misinterpret brain activity and incorrectly indicate the presence or absence of memories in the person being questioned. These potential flaws would only be accentuated by the effects of drug and alcohol abuse on the brain. In other cases, natural limitations on memory and the fact that a suspect may not have seen or heard the selected information in the first place would make accurate interrogation impossible. Additionally, the false memory phenomenon (a person wrongfully believing that he had a certain experience which he did not, perhaps due to the intimidating nature of the interrogation process) further complicates the technology's accuracy. In more extreme cases, mental illnesses such as schizophrenia, bipolar disorder, depression, neurodegenerative disease, dementia, or even epilepsy can skew the results of brain fingerprinting.

Current Conclusions on Brain Fingerprinting: Bearing in mind both the arguments for and against brain fingerprinting, it appears that the technology is not presently acceptable for widespread use. While select scientists, entrepreneurs, and interrogation analysts point to a number of successful applications of the technology and foresee future potential, in-depth research conducted by academic committees and federal agencies cites the extremely specific parameters necessary for the proper use of brain fingerprinting and the flaws in the technological design as the main reasons not to utilize the technique. The shortcomings of the technology should come as no great surprise to researchers because of the nature of EEGs. Unlike functional magnetic resonance images (fMRIs), which can track the blood flow and oxygenation of the brain to pinpoint specific regions that are activated at a given time, EEGs measure the brain's general activity. Therefore, if an innocent person being interrogated is shocked by a question or must think very hard to try to remember information, then brain activity will increase and the computer may wrongly interpret this response as an indication of guilt. With these shortcomings in mind, it makes sense that a number of companies are in the process of creating fMRI lie detectors; it is, however, too early in the research and development stage to comment on these products.

Ethical Issues to Consider: Thus far, an evaluation of the practical applications of brain fingerprinting has served to show that, although the technique should not be used now, there is the possibility that brain fingerprinting can be refined and used in the future. However, a number of philosophical and neuroethical arguments have been put forth rejecting the use of brain fingerprinting on the grounds that it violates the basic tenets of cognitive liberty, which Dr. Wrye Sententia, Director of the Center for Cognitive Liberty and Ethics and a contributor to the 2002 President's Council on Bioethics, defines as "[…] every person's fundamental right to think independently, to use the full spectrum of his or her mind, and to have autonomy over his or her own brain chemistry" (Sententia 221). Although brain fingerprinting is not the kind of mind reading in which individual thoughts are spelled out in sentences for everyone to hear, the tremendous amount of ongoing research concerning which areas of the mind are active in speech and imagery may one day make such "picking of the brain" possible.

In order to understand the ethical debate concerning techniques like brain fingerprinting, imagine that there exists technology to read a person's mind, word for word. Is the use of such machinery unethical? Some people would say that law enforcement should be able to use any available mechanisms to evaluate the guilt of suspects. However, it is necessary to remember that justice is dealt under the guidance of the Constitution and, although not stated explicitly, the right to privacy has been interpreted by the Supreme Court as implicitly protected by the Bill of Rights. Thus, one must consider the following question: If a person's thoughts do not belong to him, then what does? Internal monologue is what provides humans with a sense of individuality, the freedom to explore any subject or express any opinion without fear of judgment by others. Allowing for a violation of this sanctuary of thought represents an unacceptable infringement on cognitive liberty. Put simply, mind reading technology poses challenges to liberty that cannot be overcome by any number of rules or good intentions. The administration of lawful search and seizure requires a warrant to enter a house or office, but no warrant can truly justify the invasion of the mind, since the search for information cannot be bounded. Essentially, the probing of people's thoughts is not limited to the individual pieces of information needed to prove guilt, but instead would reveal every idea that enters someone's head. Thus, there exists no mechanism by which to legitimately consent to mind reading because, once the process begins, there is no way to avoid the exposure of separately important thoughts.

Some individuals may claim that mind reading represents a natural progression of interrogative technology that is very much in line with polygraph tests. However, there is a qualitative difference between measuring the signs of increased pulse or respiration and delving into the subconscious workings of an individual. The polygraph test is merely a mechanical version of the way in which humans peer into the eyes of another or look for sweaty palms in the hopes of identifying the visible manifestations of dishonesty. Mind reading, on the other hand, goes beyond the scope of human faculties and crosses into the realm of personal autonomy. Accepting brain fingerprinting and similar technologies with the hopes of reducing false convictions or preventing crime may appear reasonable, but following such a course of action would pose long-term ethical and philosophical problems too serious to be worth the potential benefits. When balancing the good of society against the protection of civil liberties, the separation between body and mind ought to be recognized and the line must be drawn at the sanctity of thought.
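The recognition test described in this article amounts to comparing averaged EEG responses to crime-relevant (probe) and irrelevant stimuli. The sketch below illustrates only that comparison; it is not Dr. Farwell's actual algorithm, and the analysis window, threshold, variable names, and synthetic data are invented for the example.

import numpy as np

def mean_late_response(epochs, sfreq, window=(0.3, 0.6)):
    # Average EEG epochs (trials x samples) and return the mean amplitude
    # in a late post-stimulus window, where recognition-related deflections
    # such as the P300 are typically sought.
    evoked = epochs.mean(axis=0)
    start, stop = (int(t * sfreq) for t in window)
    return evoked[start:stop].mean()

def looks_familiar(probe_epochs, irrelevant_epochs, sfreq, threshold=2.0):
    # Crude familiarity check: is the probe response more than 'threshold'
    # microvolts larger than the response to irrelevant items? A real system
    # would use far more careful statistics than a fixed cutoff.
    diff = mean_late_response(probe_epochs, sfreq) - mean_late_response(irrelevant_epochs, sfreq)
    return diff > threshold, diff

# Illustrative use with synthetic data: 40 trials, 1 second of EEG at 250 Hz.
rng = np.random.default_rng(0)
probe = rng.normal(size=(40, 250)) + 4.0        # pretend 'familiar' response
irrelevant = rng.normal(size=(40, 250))
print(looks_familiar(probe, irrelevant, sfreq=250))

As the article notes, the weakness of such a comparison is that any source of heightened brain activity, not just recognition, can push the measured difference above the cutoff.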

References:
1] "Brain Fingerprinting Laboratories Summary Information." Brain Fingerprinting Laboratories. 29 Apr. 2006 <http://www.brainwavescience.com/ExecutiveSummary.php>.
2] Sententia, Wrye. "Neuroethical Considerations: Cognitive Liberty and Converging Technologies for Improving Human Cognition." Annals of the New York Academy of Sciences 1013 (2004): 221-228. 24 Apr. 2006 <http://www.annalsnyas.org/cgi/content/full/1013/1/221>.
3] United States. United States General Accounting Office. Federal Agency Views on "Brain Fingerprinting." Oct. 2001. 3 Apr. 2006 <http://purl.access.gpo.gov/GPO/LPS46073>.

Further Readings:
1] Bryant, John, Linda Baggott La Velle, and John Searle. Introduction to Bioethics. England: John Wiley & Sons, Ltd, 2005. 149-161.
2] Jenkins, John J. Understanding Locke: An Introduction to Philosophy Through John Locke's Essay. Edinburgh: Edinburgh UP, 1983. 103-132.
3] Teichman, Jenny. Social Ethics. Oxford: Blackwell, 1996.



Neural Stem Cells: Just what the doctor ordered?
Ishrat Ahmed / HURJ Staff Writer

Neurons are essential to our functioning. They make up the circuitry that stores and conveys information. The human brain was once believed to be incapable of regenerating these cells, meaning humans were born with all of the neurons that they would ever have. However, stem cell research has indicated that a small population of neural stem cells with regenerative capabilities does in fact exist in certain areas of the brain. This finding has changed the way that we perceive nervous system damage and diseases such as Parkinson's.

Stem cells share three main properties: the capacity for cell division and self-renewal, an unspecialized status, and the potential to give rise to specialized cell types, a process initiated by genes and chemicals in the cellular environment. Researchers are currently investigating this interaction between genes and the environment, since it is central to developing effective cell-based therapies, also known as regenerative or reparative medicine. There are two types of stem cells: embryonic and adult. Human embryonic stem cells were first isolated from blastocysts in 1998 by James Thomson at the University of Wisconsin (UW) in Madison. Human embryonic stem cells are pluripotent – they can develop into any specialized cell type – and they can proliferate over long periods of time. Furthermore, scientists can manipulate the environment of these cells to induce differentiation into specific cell types. While embryonic stem cells are currently controversial, recent advancements may allow for the isolation of such stem cells without harming the embryo. Adult stem cells, alternatively, have been found in several tissue types including bone marrow, muscle, and brain tissue. Although adult stem cells were first discovered in the 1960s, it was not until the 1990s that the adult brain was shown to contain stem cells that can generate the three major brain cell types: astrocytes, oligodendrocytes, and neurons. Adult stem cells generally produce replacements for cell types of the tissue in which they are located. Recently, however, hematopoietic cells, which are blood-forming cells in the bone marrow, have been shown to exhibit plasticity, the capability of generating other cell types, including neurons.

The ability of embryonic and adult stem cells to give rise to specialized cells is the central idea in cell-based therapy. Cell-based therapy requires a large number of stem cells, which is not yet possible for adult stem cells since they are present in small quantities in the human body and do not proliferate rapidly. Once viable, such stem cell transplants would be extremely beneficial since adult stem cells are known to repair tissue damaged from normal wear and tear. Researchers are hoping to extend this ability to heal more extensive damage, including that created by neurodegenerative disorders.

Stem cells have, in fact, started a new path towards treating neurodegenerative diseases, such as Parkinson's and Lou Gehrig's. Conventional treatment is geared towards relieving symptoms and limiting further damage. However, with the advent of neural stem cells, there is hope to actually restore lost function. Neural stem cells are present in the adult primate brain in two main locations: the subventricular zone located under the ventricles and the dentate gyrus of the hippocampus. Research in the mid-1990s indicated that stem cells from these two regions proliferate and migrate towards damaged brain tissue. Since the discovery of these neural stem cells, researchers have begun investigating cell-based therapy for neurodegenerative disorders.

Currently, most neural stem cell research on neurodegenerative disorders focuses on Parkinson's disease. This disorder affects more than 2% of the population aged 65 and older. Its symptoms include tremors, rigidity, and hypokinesia, or decreased mobility. It is caused by the degeneration and loss of dopamine-producing neurons (DA neurons). Parkinson's disease is well-suited for current stem cell research because there is only one cell type that must be regenerated: DA neurons.

"Stem cells [are] a new path towards treating neurodegenerative diseases... [Whereas] conventional treatment is geared towards relieving symptoms [...] stem cells might actually restore lost function."

Researchers have successfully induced embryonic stem cells to differentiate into cells that exhibit several functions of DA neurons. After transplantation of the differentiated cells into a rat model of Parkinson's disease, neurons released dopamine, and motor function improved. Further research by James Fallon and colleagues at the University of California - Irvine has shown that a protein called transforming growth factor alpha (TGFα), which activates normal repair processes in several organs, may be used to treat Parkinson's disease. The study suggested that the degeneration of brain cells during Parkinson's is so gradual that the brain's normal repair processes do not turn on. However, TGFα injected into a rat model of Parkinson's resulted in neural cell proliferation and migration towards the damaged area in the substantia nigra, located in the midbrain, and the striatum, which connects the two cerebral hemispheres. Furthermore, the Parkinson's disease symptoms ceased.

More recently, stem cells have been used as drug-delivery vehicles in treating Parkinson's disease in rat and monkey models. A specific growth factor, glial cell line-derived neurotrophic factor (GDNF), was shown to make some progress in limiting Parkinson's disease in patients. However, GDNF cannot cross the blood-brain barrier. Clive Svendsen and colleagues at UW-Madison have introduced genetically modified GDNF-secreting progenitor cells derived from stem cells into the striatum of monkey and rat brains. This resulted in new nerve fiber growth in the striatum and a migration of GDNF to the substantia nigra, which contains DA neurons in a healthy brain. The transplanted cells survived and were shown to produce GDNF for up to three months. However, before this research can be extended to humans, it is necessary to control the delivery vehicle by switching the cells on or off. The use of stem cells as drug-delivery vehicles will eventually prove useful in treating neurodegenerative diseases beyond Parkinson's, since over 70% of all drugs for brain disorders are unable to cross the blood-brain barrier.

Similar research has also been performed towards the treatment of Krabbe's disease, in which oligodendrocytes fail to maintain the myelin sheath around neural axons. The symptoms exhibited include arrested motor and mental development, seizures, paralysis, and death. In this study, galactocerebrosidase, which promotes myelin sheath formation, was introduced to the brain via progenitor cells. It is necessary to note that, unlike most other cells, neural stem cells are immune privileged, meaning that they do not trigger an immune system response that may cause rejection. Therefore, the use of neural stem cell transplants in treating diseases of the eye, brain, and spinal cord may eliminate the need for tissue typing and immunosuppressive drugs.

Research has barely begun to scratch the surface of stem cell potential in treating neurodegenerative diseases. Neural stem cells may eventually help treat not only Parkinson's disease, but also a myriad of much more complex neurodegenerative diseases such as Alzheimer's, as well as extensive damage to the central nervous system and the spinal cord. The brain's own repair mechanism may one day be controlled to effectively heal the damage. As the average life expectancy increases, older people are finding themselves afflicted with more neurodegenerative diseases. Neural stem cell technology is undoubtedly essential in improving the quality of life of the elderly and their loved ones. For now, it is evident that neural stem cells have the ability to generate both brain cells and hope.

References:
1] Stem Cells Basics. NIH. 12 Aug. 2005. National Institutes of Health. <http://stemcells.nih.gov/info/basics/basics2.asp>.
2] Engineered stem cells show promise for sneaking drugs into the brain. 15 Dec. 2005. Embryonic Stem Cells Research at the University of Wisconsin-Madison. <http://www.news.wisc.edu/packages/stemcells/>.
3] Study suggests treatment for fatal nervous system disorder. 12 Dec. 2005. Embryonic Stem Cells Research at the University of Wisconsin-Madison. <http://www.news.wisc.edu/packages/stemcells/>.
4] Brain Stem Cells are Not Rejected When Transplanted. 14 July 2003. EurekAlert. <http://www.eurekalert.org/pub_releases/2003-07/seri-bsc071103.php>.



The Mind-Machine Interface “Gentlemen, we can rebuild him. We have the technology. […] Better than he was before. Better, stronger, faster.” –Oscar Goldman, The Six Million Dollar Man

Kevin Chen / HURJ Staff Writer

No longer the realm of science fiction, the mind and the machine are becoming increasingly enmeshed and complementary. The brain is often conceived and described as the most complicated computer system ever realized. Indeed, the integration of millions of neurochemical signals is not unlike the transistors and diodes of the computer. Currently, both the fields of neural science and microelectronics are expanding at furious paces. But it is at the juncture of these two fields that fantastic possibilities emerge. Can we use massive neural connectivity to engineer the next generation of computers? Can we use implanted electrodes to cure everything from Alzheimer's disease to paralysis? Or dare we go so far as to enhance the brain, give humans increased cognitive power and even find new ways to cheat death?

The merging of computer science and neuroscience aims to create devices that can translate neural signals into external electrical systems and vice versa. These devices are known as brain-computer interfaces, or BCIs. One major goal for BCIs is to artificially generate systems that can compensate for damaged or underperforming areas of the brain. Eventually, BCIs may offer a treatment option for conditions, like stroke or trauma, that result in the "locked-in syndrome," where a patient is conscious but unable to produce meaningful movement (Laureys et al., 2005; Kübler and Neumann, 2005). However, the development and engineering of BCIs is hampered by difficulties in working at the cellular scale, in fully mapping out complex neural systems, and in fabricating nanoscale microelectronics compatible with biological function. Even between institutions and laboratories, there is disagreement over how to capture electrical signals as signatures of certain behaviors and which signals provide the most accurate input translatable into electrical output (Andersen, Musallam and Pesaran, 2004). Nevertheless, research into applying artificial technology to repair, and perhaps enhance, neural tissue is accelerating at breathtaking speed.

Already, we are quite familiar with the merging of man and machine. Scientists have long used technology and probes to implicate specific areas of the brain involved in various activities, even those as abstract as language and memory. Electrophysiological techniques involve inserting hair-thin electrodes into the brain tissue of living animals; subsequently, behavioral observations can be correlated with single-cell recordings of individually firing neurons. In humans, some patients planning to undergo neurosurgery consent to have a sheet with embedded electrodes placed on the surface of the brain before their operation. During subsequent cognitive testing sessions, the electrode sheet can record cortical electrical activity and give some sense of what areas are involved in tasks such as mathematics, language, navigation and memory.

In terms of clinical treatments, cochlear implants are examples of mechanical technologies used to enhance neural function and were introduced in the mid-1980s. These implants have been shown to significantly restore auditory sensation in the deaf, even allowing them to understand telephone conversations (Rizzo et al., 2001). In a more direct approach, devices for deep brain stimulation (the "brain pacemaker") are FDA approved and increasingly used. The idea behind deep brain stimulation is to implant microelectrodes that provide stimulation to specific nuclei in the brain. This bolstered activity can regulate neuronal firing, make up for hypometabolism or even compensate for neural degeneration, thereby correcting homeostatic imbalances in patients with epilepsy, depression and Parkinson's disease (Mayberg et al., 2005; Grafton et al., 2006; Yeomans, 1990). Clearly, the use of microelectronics has greatly augmented both research in the neurosciences and also the ability to devise effective treatments for neurological disease.

Now, there are new headlines regularly appearing that describe the next implant or the newest neural technology emerging from cooperation between the neurosciences and computer sciences. A sampling of some of the recent trends shows the wide applicability and interconnected nature of these two fields:

Many research teams are part of an effort to produce an artificial retina. The strategies are varied: some groups are working at the level of the retina and photoreceptors, with implanted photosensitive electrodes or devices to stimulate cells downstream of rods and cones (retinal ganglion cells, for example, are preserved in retinitis pigmentosa and amenable to stimulation) (Chow et al., 2002). Others are opting for more direct cranial prostheses implanted at the level of the optic nerve or visual cortex (Dobelle WH, 2000; Veraart et al., 1998). The technology is in its infancy, grappling with problems associated with producing appropriate levels of electrical stimulation, minimizing electrical and heat damage, locating implantation sites that produce the most robust signal, and securing funding to even begin engineering a biocompatible prosthesis. But with more and more tests of new devices, technology is sure to advance far enough to provide meaningful improvement to quality of life in blind patients.

A team of scientists at Duke University has had success in translating neuronal signals from a monkey brain to induce movement in a computer-controlled mechanical arm (Carmena et al., 2003). First, a monkey with electrodes implanted in motor cortex was trained to actively reach and grasp a pole to move a cursor on a computer screen. Second, after extracting the electrical patterns in motor cortex associated with this action and using these waveforms to direct cursor control, the monkey learned that actual arm motion was not necessary to move the cursor. Finally, the brain patterns controlling the cursor were used to control the motion of a robotic arm. This groundbreaking research, while preliminary, showed promise for prostheses controlled directly by the brain and even suggested some mechanisms by which a prosthesis could "become a part of [a patient's] own bodies" by processes of feedback and neuroplasticity. A BCI for the motor system holds great promise for patients afflicted by paralysis, amputation, end-stage amyotrophic lateral sclerosis or any condition hampering motor control.
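The decoding step in experiments like the Duke study amounts to learning a mapping from recorded neural activity to intended movement. The sketch below illustrates the idea with a simple linear decoder fit by least squares; it is not the algorithm used by Carmena et al., and the array sizes, function names, and synthetic data are invented for the example.

import numpy as np

def fit_linear_decoder(firing_rates, cursor_velocity):
    # Fit a linear map from binned firing rates (time_bins x neurons)
    # to cursor velocity (time_bins x 2) by ordinary least squares.
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])  # add a bias column
    weights, *_ = np.linalg.lstsq(X, cursor_velocity, rcond=None)
    return weights

def decode(firing_rates, weights):
    # Predict cursor velocity from new neural activity.
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])
    return X @ weights

# Illustrative use with synthetic data: 1000 time bins, 64 recorded neurons.
rng = np.random.default_rng(0)
rates = rng.poisson(5, size=(1000, 64)).astype(float)
true_map = rng.normal(size=(64, 2))
velocity = rates @ true_map + rng.normal(scale=0.5, size=(1000, 2))
weights = fit_linear_decoder(rates, velocity)
predicted_velocity = decode(rates, weights)

In practice such decoders are retrained as the animal or patient adapts, which connects to the feedback and neuroplasticity mentioned above.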


The interdisciplinary nature of neurotechnology works the other way as well, with the brain's natural processing power employed to bolster computational power. Researchers at the University of Florida were able to interface a dish of cultured neurons with electrodes connected to a computer flight simulator. Eventually, feedback loops and learning circuits could be established, and the dish of neurons "learned" to operate the flight simulation. While certainly a long way from Kubrick's HAL 9000, the possibilities are endless, with applications to pilot-less aircraft and computers that actively learn as they are used (Potter, Wagenaar and DeMarse, in press).

This is merely a sampling of the new trends and technologies being researched. But with all these new advances, the actual implementation of new brain-computer interfaces must be carefully deliberated. The question of cognition and mind, when confronted with computers, enters an uncharted and ethically murky area of artificial intelligence. How do we define sentience as we begin enhancing our computer systems with increasingly independent and "self-aware" neural networks? Could the paperclip animation in Microsoft Word gain a real kind of consciousness? While the philosophy of mind is an entire field that cannot be covered here, the author refers the reader to John R. Searle's thought experiment about the Chinese Room (Searle, 1980) and responses to this argument against the possibility of consciousness in computer programs.

As scientists, engineers and physicians confront these new technologies and possibilities, it is necessary to take a step back and discuss the ethics and propriety of using machines to enhance the brain and using neural principles in technology. Is it possible that human sensory modalities could adapt to new input, for example the sensation of magnetic fields? What happens when our soldiers become an enhanced breed, engineered from birth to have retinas perceiving across the electromagnetic spectrum, cochleas that are sensitive to the quietest rumbling, and cognitive abilities enhanced by implanted microarrays? Could we advance to a point where the disintegrating body is disposable, replaced by custom-machined parts? All these questions and more need to be considered, though answers are not easy to come by. Researchers, universities, governments and the public at large require a careful definition of what is acceptable BCI research and what is deemed unethical. Similar to the topic of stem cells, BCIs hold great promise but also a large capacity for abuse and moral violations.

The prospects of BCI technology are indeed exciting and are expanding the frontiers of technology and the life sciences. However, as with any new technology, our advancement must be made carefully and with full awareness of the pitfalls and interdictions associated with scientific progress, social culture and individual lives.

References: 1] Andersen RA, Musallam S, Pesaran B. (2004) Selecting the signals for a brain-machine interface. Curr Opin Neurobiol Dec;14(6):720-6. 2] Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA (2003) Learning to control a brainmachine interface for reaching and grasping by primates. PLoS Biol Nov;1(2):E42. 3] Chow AY, Pardue MT, Perlman JI, Ball SL, Chow VY, Hetling JR, Peyman GA, Liang C, Stubbs EB Jr, Peachey NS (2002) Subretinal implantation of semiconductor-based photodiodes: durability of novel implant designs. J Rehabil Res Dev May-Jun;39(3):313-21. 4] Dobelle WH (2000) Artificial vision for the blind by connecting a television camera to the visual cortex. ASAIO J Jan-Feb;46(1):3-9. 5] Grafton ST, Turner RS, Desmurget M, Bakay R, Delong M, Vitek J, Crutcher M (2006) Normalizing motor-related brain activity: subthalamic nucleus stimulation in Parkinson disease. Neurology. Apr 25;66(8):1192-9. 6] Kübler A, Neumann N (2005) Brain-computer interfaces--the key for the conscious brain locked into a paralyzed body. Prog Brain Res; 150:513-25. 7] Laureys S, Pellas F, Van Eeckhout P, Ghorbel S, Schnakers C, Perrin F, Berre J, Faymonville ME, Pantke KH, Damas F, Lamy M, Moonen G, Goldman S (2005) The locked-in syndrome : what is it like to be conscious but paralyzed and voiceless? Prog Brain Res; 150:495-511. 8] Mayberg HS, Lozano AM, Voon V, McNeely HE, Seminowicz D, Hamani C, Schwalb JM, Kennedy SH (2005) Deep brain stimulation for treatment-resistant depression. Neuron. Mar 3;45(5):651-60. 9] Potter SM, Wagenaar DA, DeMarse TB (in press) “Closing the Loop: Stimulation Feedback Systems for Embodied MEA Cultures.” In Advances in Network Electrophysiology Using Multi-Electrode Arrays. Eds. M Taketani and M Baudry. <http://www.bme.ufl.edu/documents/demarse_tb_cv.pdf> 10] Veraart C, Raftopoulos C, Mortimer JT, Delbeke J, Pins D, Michaux G, Vanlierde A, Parrini S, Wanet-Defalque MC (1998) Visual sensations produced by optic nerve stimulation using an implanted self-sizing spiral cuff electrode. Brain Res Nov 30;813(1):181-6. 11] Yeomans, Johns Stanton. Principles of Brain Stimulation. Oxford: Oxford University Press, 1990.

Further reading: 1] Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA (2003) Learning to control a brainmachine interface for reaching and grasping by primates. PLoS Biol Nov;1(2):E42. 2] Cummins, Robert and Denise Dellarosa Cummins. Minds, Brains, and Computers: The Foundations of Cognitive Science, An Anthology. Malden, MA: Blackwell Publishers Inc., 2000. 3] Friehs GM, Zerris VA, Ojakangas CL, Fellows MR, Donoghue JP (2004) Brain-machine and brain-computer interfaces. Stroke Nov;35(11 Suppl 1):2702-5. 4] Illes, Judy, ed. Neuroethics : defining the issues in theory, practice, and policy. Oxford: Oxford University Press, 2006. 5] Lowe, E.J. An Introduction to the Philosophy of Mind. Cambridge: Cambridge University Press, 2000. 6] Searle, JR (1980) Minds, brains, and programs. Behav Brain Sci 3:417-457.



So You Want to Sleep With Your Mom? Adam Canver/ HURJ Staff Writer

Sigmund Freud is a very familiar name.
The native Austrian is well-known for characterizing Freudian slips and theories about young men desiring to sleep with their mothers. He is one of the greatest contributors to modern psychology, credited as the creator of the still-prominent technique of psycho-analysis. Freud formed a kind of philosophical psychology, which he dubbed the psychical apparatus, detailing the inner workings of the brain and the mind. He coined terms such as the id, the ego, and the super-ego to describe the vast unconscious of the human mind. His ideas, published in the early 1900s, dominated the field up until mid-century when newer psychologists tried to create alternatives to a somewhat radical view of the unconscious. Gilles Deleuze and Felix Guattari are two who broke free from Freudian ideas. They devote their collaborative writings to creating a philosophy that differs from the traditionally accepted Freudian theories. It is interesting to investigate the theories from both sides, illuminating the advantages and disadvantages of subscribing to one philosophy over the other. To understand Freud’s seemingly outrageous claims, one must first understand the basis for his arguments. In his writings, most notably An Outline of Psycho-Analysis, he makes a point to describe the nature of psychology as inherently different from traditional science. Unlike physics and chemistry, experiments of the mind have



quantitative limits. At some point, there needs to be some theorizing that cannot necessarily be proven or disproven. Instead, people will choose to support whichever idea appears to be the most explanatory. One must remember this fact when discussing the organization of the mind, since as of yet there are no scientific means to confirm these conjectures.

Freud set up his diagram of the

mind to contain six major components. The first three are the conscious, the pre-conscious, and the unconscious. The conscious is that which people can control; thoughts, hopes, and memory exist here as the readily-accessible aspects of the mind. The unconscious is the polar opposite, harboring all of the inaccessible desires. The pre-conscious is a kind of intermediary, bridging the two together. It can bring ideas from the conscious to the unconscious (i.e. forgotten information) and vice versa (i.e. information that is suddenly remembered). These basic components are overlaid with three more: the ego, the id, and the super-ego. The id, located in the unconscious, is a major contributor to the behaviors that are ultimately expressed and represents human innate drives and basic desires. It encompasses the desire to eat when hungry, to attack when threatened, and to participate in some type of sexual activity when aroused. The ego spans from the unconscious, to the pre-conscious, and finally to the conscious; it is what actually happens. That is, the ego is the conscious mind that is influenced by the id and by the super-ego, which people most tangibly experience. The superego is responsible for providing a sense of right and wrong by acting as a kind of moral and ethical compass to counteract the primitive drives of the id. Whenever there is civility, the superego is in play. It is the reason people do not tend to

attack each other constantly or express sexual desires freely. The conscious, pre-conscious, and unconscious can be placed over the id, the ego, and the super-ego to create the psychical apparatus, a relatively complete model of the mind and a basis for Freud’s subsequent theories.

Using this model, Freud came up

with a theory on psycho-sexual development. The most radical notion in this theory was that sexual development begins at birth. He rejected the idea that people spontaneously realize they have sex organs at the time of puberty. According to Freud, there are separate phases through which everyone goes, the first of which is the oral phase. This phase stems from being breast-fed, where the infant is satisfied by the action of sucking. Soon after, the child enters the anal phase, where satisfaction is attained from defecation. Freud elucidates that feces is something the child finds pleasure in creating, despite its offense to most senses. The final child phase, the phallic phase, occurs when the child is aware of his or her sex organs and the potential for pleasure with them. Human development ends with the genital phase, when sexuality has matured and stabilized.

The timeline for these phases is

more continuous than it is discrete. The oral phase occurs from approximately the age of six months to two years. The anal phase continues until around age five, and the phallic phase then lasts until puberty. Freud also discusses the possibility of never progressing from a phase, a process he calls fixation. One with an oral fixation, for example, would never have advanced past the oral phase and consequently tends to use the mouth for pleasure. Such people often chew gum and like to suck on things.

It is at the end of the phallic phase

that the child becomes subject to the Oedipus complex, a period in which the child fears castration by his father. According to Freud, the child wants to kill his father in order to sleep with his mother, just as in the Greek play Oedipus Rex. In the play, Oedipus unknowingly desired his biological mother, after having been sent away as a child. Over the course of the play, Oedipus kills his father and has sex with his mother, only to impale both of his eyes with sharp needles after discovering his mother’s true identity. Freud says that Oedipus should not have felt the need to blind himself because mother-loving is a more “normal” process than society recognizes. There exists a similar Electra complex for girls, in which the daughter wants to kill the mother in order to have sex with the father. Freud did not elaborate on women because he considered the clitoris to be inferior, a realization he believed girls have during the phallic phase. The Oedipus complex continues to be met with disbelief and is often the most controversial of all his ideas. It is an extreme example of tendencies he believed present within a family.

Some of Freud’s dissenters, Gilles

Deleuze and Felix Guattari, collaborated in an attempt to explicate the levels of consciousness differently. Their main idea is that there exists an abstract force that causes everyone to desire, called a desiring machine. Within the unconscious mind, there are three stages, or syntheses. The connective synthesis of production involves the connection of two desiring machines. The mouth of an infant desires the breast of its mother, for example, which demonstrates the various connections and associations that are forged with the desiring machine.


The disjunctive synthesis of recording is when the “body without organs” is introduced. This abstract body serves as a memory location for the connections made in the first synthesis. The conjunctive synthesis of consummation and corruption involves the creation of identity and individualism, where connections are broken to form a separate entity. Thus, a cycle is formed in which a connection is made in the connective synthesis and is then recorded onto the “body without organs” in the disjunctive synthesis to form an individual in the conjunctive synthesis.

Freud talked about Eros (loving) and Thanatos (destructive) as basic human instincts which stem from the id. Similarly, Deleuze and Guattari explained Eros as the result of the connective synthesis of production and Thanatos as the result of the conjunctive synthesis of consummation and corruption, or anti-production. The greatest difference between the theorists, however, is in their goals. Freud tried to explain the individual's behavior to reveal information about the behavior of society, while Deleuze and Guattari tried to explain the overall behavior of society to shed light onto the individual.

One should choose to subscribe to Freud's ideas more readily because they are tangible. It is easy to get caught up in metaphysical explanations that are radically abstract and too distant from the conscious world. Freud's theories, although not bulletproof, provide explanations for the behaviors that are consciously seen and experienced in daily life. His ability to produce such theories against the majority opinion is admirable. While some may claim not to want to kill their fathers or sleep with their mothers, who really knows? A complete rejection of what he said could prove to be a mistake, as one cannot be sure of his or her unconscious workings. So I ask you to look deep inside yourself and ask one simple question: do I want to sleep with my mom?

References:
1] Freud, Sigmund. (1964). An Outline of Psycho-Analysis. London: The Hogarth Press and the Institute of Psycho-Analysis.
2] Freud, Sigmund. (1955). Beyond the Pleasure Principle. London: The Hogarth Press and the Institute of Psycho-Analysis.
3] Freud, Sigmund. (1961). Civilization and its Discontents. London: The Hogarth Press and the Institute of Psycho-Analysis.
4] Deleuze, Gilles and Guattari, Felix. (1983). Anti-Oedipus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.


Capital Punishment: Organic Basis of Criminal Behavior
Jocelyn Fields / HURJ Staff Writer

We sometimes forget that humans are biological creatures whose behavior is controlled by the brain. Behavior is learned by experiences of reward, love, punishment, and the like, imposing permanent changes in the cerebral cortex. From childhood, our actions, attitudes, and moods develop by discipline, or lack thereof, into what will ultimately become our personality. From these attributes, the criminal mind develops from very early on in the cerebral cortex.

Defining the Criminal Mind

By examining the neurobiological facts, it will become clear that criminals are rational, calculating, and deliberate in their actions, not mentally ill, although many use outlets such as mood disorders, underlying psychological reasons, or the insanity plea in order to escape a dire verdict. Most importantly, insanity is a legal term defined as a lack of responsibility for one's actions at the time of the crime due to mental disease or defect, which diminishes criminal intent. However, for the purposes of this article, the definition of a criminal mind does not include insanity, as we are examining the criminal mind in the aspect of the criminal being deliberate and rational in his or her actions. As such, it is important to define what the criminal mind consists of, as there are many definitions that are scientific, medical, and personal. Additionally, insanity is often considered an inability to control one's own mind. In the case of this article, the criminal minds discussed are fully capable of self-control. Hence, insanity will be ruled out for the sake of this discussion so as to focus, instead, on the neurobiological factors and anatomical features of the criminal mind.


Figure: the limbic system, showing the amygdala, hippocampus, hypothalamus, and dopaminergic nuclei.

Of interest here are the criminals who feed off of wrongdoing. Those who repeat violent acts of murder for pleasure or power possess a true criminal element. Their emotions are shallow, as they go from one relationship to another while manipulating and using people. A man who kills women by chopping them up and wearing their skin is not displaying "normal" behavior. Jeffrey Dahmer, a known cannibal, is a prime example of one who possessed this true criminal element. Psychiatrists debated whether necrophilia (a mental disease marked by an erotic attraction to corpses) caused his abnormal behavior, or if he was, in fact, fully aware of his wrongdoings and able to prevent himself from acting. In the end, it was proven that he did, indeed, have necrophilia but that his mental illness did not control his behavior. He was convicted and sentenced to 15 consecutive life sentences.

The Neurobiological Processes of a Criminal Mind

The criminal mind is a combination of many things, such as gender, age, chemical imbalances, hormone imbalances, experience, and circumstance. It includes both neurobiological and environmental factors. Claims about the link between cognitive deficits and biological anomalies on the one hand and aggressive behavior on the other must be taken with care. Still, studies of aggression and cognition are relevant to the extent that impairment of cognitive processes is a sign of cerebral impairment. Many studies have shown that aggressive boys have difficulty with working memory, especially as the amount of information increases. The limbic system largely deals with memory processing. As neuropsychological research suggests, aggression is most likely to be associated with frontal lobe dysfunction. It is also important to note that there are monosynaptic connections between the brain stem and the frontal lobe, as illustrated in the accompanying diagram. More specifically, many parts of the limbic system lie at the medial border of the telencephalon and diencephalon. Regions such as the

hippocampus and amygdala, and parts of the hypothalamus and regions of cerebral cortex on the medial surface of the hemisphere, make up the largest chunks of the limbic system. The amygdala is responsible for emotional memory and is known as the fear conditioning area. The criminal mind is aware of its actions and fears no repercussions, so one could conclude that the amygdala plays an important role in criminal thinking. As many changes occur in the development of the cerebral cortex, behavior involves mainly the neural processes concerned with learning. There are other processes in other parts of the brain that contribute greatly to patterns of behavior. There are neural systems in the lower brain which result in feelings of anger or irrationality. Aggressive behavior increases when these areas are active, and systems of the brain interact to determine the resulting behavior. The brain processes that control learning and memory are constantly interacting with those processes. Still, there are many types of aggression, which makes it impossible to create a single model in which all types can fit without sacrificing detail. However, there exist a few common mechanisms found in most criminal minds, which most often involve the hippocampus and amygdala. According to a study done by King in 1961, a very mild-mannered woman who was submissive and friendly had an electrode placed in her amygdala with a current of 4 milliamperes. There was no observable change in her demeanor. However, when the stimulus was increased to 5 milliamperes she became very hostile and aggressive and shouted angrily as she tried to hit the experimenter. This study shows that the amygdala plays an important role in aggression. In the experiment, the woman's aggression could be triggered only by complex machinery and an electric current; in the case of criminal minds, aggression and violence are much more easily triggered. The point at which these neural processes for aggression are triggered is called the threshold.


Diagram A: the limbic system

It is very common to see very low thresholds, with very little provocation, in criminal minds. To address the issue at hand more directly, aggression and rage were also studied by Griffith et al. (1987). Seizures were induced by unilateral microinjections of kainic acid into the dorsal hippocampus of cats. The kainic acid induced acute periods of intense seizure activity in the first few days, which then continued as partial seizures lasting up to 4 months. The defensive rage effects of the medial hypothalamus were observed as follows: rage response thresholds were lowered during the interictal period (the period of time between seizures), and rage-like responses resulted from mild stimulation in otherwise normally affectionate animals. These findings provide even more evidence that the dorsal hippocampal formation affects predatory attack and defensive rage associated with the septal area and amygdala. Another notable attribute of the cortical and limbic system nuclei, giving special attention to the hippocampus and amygdala, is the crucial role they play in regulating the hypothalamic-pituitary axis, which in turn controls the target endocrine organs.

Diagram B

The endocrine system regulates the secretion of hormones. Nuclei within the hypothalamus secrete releasing factors, usually peptides such as CRF and TRH, or biogenic amines such as dopamine (known as a "feel good" hormone, as it is secreted when someone feels happy), at the median eminence, into the pituitary portal circulation. The neuroendocrine system thus represents a dynamic, highly regulated physiological system which responds to internal or external perturbations in such a way as to maintain homeostasis. This homeostasis is the very essence of a "level-headed" or "normal" person. An ideal, balanced brain means no criminal mind due to hormonal imbalance, low aggression threshold, or overactive or underactive areas of the brain. There are endless studies showing that the cerebral cortex, along with experience and teachings, does in fact control some of what we do and some of how we react. Aside from the small input that experience and teachings contribute to the shaping of our mind, the amygdala and hippocampal formation regions

seem to regulate the "choice of action" system.

Capital Punishment: avoiding the we-they syndrome

Given that the amygdala and the hippocampal formation play an important role in aggression and rage, and given the mental threshold which every person possesses, no one can run from his or her mind's capabilities. Can people who are not insane but criminally inclined be held accountable for their actions? Should their punishment be harsher due to the nature of their conditions? If so, is capital punishment the answer? When applying the death penalty, two things must be taken into account: 1) the nature and circumstances of the crime, and 2) the character and background of the offender. In the end, how much of being a criminal is biological? Does the distinction between being a criminal and not being a criminal lie in the limbic system? According to Austin Porterfield, "…we go on punishing the offender without developing the capacity to imagine ourselves in his place, to see that he is made like us..."

References:
1] Eysenck, Hans J. and Gudjonsson, Gisli H. The Causes and Cures of Criminality. New York: Plenum Press, 1989.
2] Ganten, D. and Pfaff, D. Current Topics in Neuroendocrinology: Neuroendocrinology of Mood. Volume 8. Germany: Springer-Verlag Berlin Heidelberg, 1988.
3] Glicksohn, Joseph. The Neurobiology of Criminal Behavior. Boston: Kluwer Academic Publishers, 2002.
4] Jeffery, C.R. Biology of Crime: Volume 10. Beverly Hills: Sage Publications, 1979.
5] Latzer, Barry. Death Penalty Cases: Leading U.S. Supreme Court Cases on Capital Punishment. Boston: Butterworth-Heinemann, 1998.
6] Nathanson, Stephen. An Eye for an Eye? Second Edition. Maryland: Rowman and Littlefield Publishers, 2001.
7] Ramsland, Katherine. The Criminal Mind: A Writer's Guide to Forensic Psychology. Cincinnati: Writer's Digest Books, 2002.
8] Rogers, Joseph W. Why Are You Not a Criminal? New Jersey: Prentice-Hall, 1977.
9] Samenow, Stanton. Inside the Criminal Mind. New York: Crown Publishers, 2004.
10] Siegel, Allan. The Neurobiology of Aggression and Rage. New York: CRC Press, 2005.
11] Vincent, Jean-Didier. Translated by Hughes, John. The Biology of Emotions. Massachusetts: Basil Blackwell, 1990.



Can machines be conscious?

Stephen Berger/ HURJ Staff Writer

Traditionally, many believe that only humans possess a mind and are capable of thought, but, in principle, there is no reason to assume consciousness is limited to humans: it may exist in some non-human animals, and might even be possible in silico, or through computer simulation. Developments in computer technology make it likely that a computer will be capable of the same sorts of “mental” activities as humans in the near future. Will these machines be capable of “thought” or merely mathematical processing? Can such a computer ever be conscious or sentient? A variety of approaches from philosophy, science, and engineering suggest that machine consciousness is at least theoretically possible.



The Turing test: intelligence as imitation

The classical test for machine intelligence was formulated in 1950 by Alan Turing, a British mathematician considered by many to be the founder of modern computer science. In the Turing test, a person "converses with" a second person and a computer, both out of sight, using teletype or, more recently, instant messaging. (The computer Turing envisions is similar to the automated "bots" offered by some websites today.) The questioner attempts to determine which of his interlocutors is a computer and which is a person. If the questioner is unable to correctly identify the computer better than 50% of the time, the computer is said to have passed the test: it can successfully "act like a person." To date, no computer has ever passed the Turing test. Because his test centers on the degree to which a machine is able to simulate or mimic human cognition, Turing implies that the question of whether machines can think is fairly meaningless. When a computer reaches a certain level of sophistication, whether we say the computer can or cannot think is moot – it is capable of acting as if it were thinking. Implicit in Turing's notion of an intelligent machine is the idea that intelligence, and thought itself, are functional measures, not internal properties, of a system. Put another way, the cognitive processes going on inside the "mind" – whether human, animal, or computer – are not important. Someone or something is intelligent only to the extent that it acts intelligent. For Turing, an intelligent machine is driven entirely by programs or algorithms that take inputs and convert them predictably into outputs. Machines are fundamentally calculators; the machines we call "intelligent" are merely the most efficient calculators, those with the largest memory banks and the most comprehensive programs for manipulating data. The ability to "learn," or change the algorithmic response to an input, can also be programmed into the machine, which adds to its apparent intelligence. One may infer that this principle applies to the human mind as well: the human brain has a large but finite processing capacity, which may theoretically be matched by computers in the future. As technology develops, Turing argues, a certain line will be crossed, beyond which computers may be considered to be intelligent, purely as a function of their processing capabilities.
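To make the pass criterion concrete, here is a minimal sketch in Python (the setup, function names, and numbers are illustrative assumptions of this article's editor, not anything from Turing): if the questioner's guesses about which respondent is the machine are no better than a coin flip, the machine is said to pass.

```python
import random

def questioner_accuracy(guess, rounds: int = 10_000) -> float:
    """Fraction of rounds in which the questioner correctly identifies the machine."""
    correct = 0
    for _ in range(rounds):
        machine_is_a = random.random() < 0.5   # the machine is randomly seated as respondent A or B
        correct += guess() == machine_is_a
    return correct / rounds

# A questioner who cannot tell the two interlocutors apart is reduced to guessing at random,
# so accuracy hovers near 0.5 and the machine passes the test.
print(questioner_accuracy(lambda: random.random() < 0.5))
```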


The Chinese room: thought as meaning

The Turing test explains a great deal about the potential abilities of computers, and it leaves open the possibility that non-biological processors might one day be considered intelligent. But many have argued that the test fails to capture what we mean when we invoke the idea of consciousness. It seems that a conscious being does not merely perform calculations but rather thinks in some meaningful sense obscured by Turing's approach. Some psychologists, such as Turing's contemporary B.F. Skinner, have attempted to equate human thought with Turing-type algorithms. But a widely-shared intuition is that thought and consciousness imply something else, something greater than input-output processing. The modern philosopher of mind John Searle attempts to capture some of these concerns with his example of the Chinese room. Imagine a huge room filled with shelves and stacks of books. On each page of each book, there are two phrases printed in Chinese. A person standing outside the room hands a slip of paper with a Chinese phrase on it (the input) to a person inside the room, whom we suppose does not understand Chinese. She consults an index and sorts through the books until she finds the page with the same phrase printed on it. She then copies down the second phrase on the page (the output) and hands it back to the person standing outside of the room. Searle's question is, does this room understand Chinese? The room has been "programmed" with all possible combinations of inputs and outputs: it has syntax. It is therefore able to respond algorithmically to any question posed with an appropriate answer that means something to the person standing outside the room. But for the room, there is no meaning to any of the exchanges of paper: the system lacks semantics. Searle argues that, just like the Chinese room, a computer attaches no semantic content to any of the calculations it performs – unlike the human mind, for which each input and output has some sort of personal meaning. In our experience of consciousness, the ability to calculate is not the same as the ability to understand.
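A toy version of the thought experiment (an editor's illustration, not Searle's own formulation): the "room" is just a lookup table pairing input phrases with scripted replies, so it produces sensible-looking answers while attaching no meaning to either side of the exchange.

```python
# Rule book: each Chinese input phrase is paired with a scripted Chinese reply.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thank you."
    "你懂中文吗？": "当然懂。",        # "Do you understand Chinese?" -> "Of course I do."
}

def chinese_room(phrase: str) -> str:
    """Look up the scripted reply: pure symbol manipulation, syntax without semantics."""
    return RULE_BOOK.get(phrase, "对不起，我不明白。")  # default reply: "Sorry, I don't understand."

# To the person outside, the answer looks fluent; inside, nothing understands Chinese.
print(chinese_room("你懂中文吗？"))
```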

Complex systems: consciousness as an emergent property

It is theoretically possible for a computer to attach meaning to the inputs and outputs swirling around its processors, but it is difficult to interpret what this might mean and how it might be accomplished. One suggestion is that meaning arises from the interrelationships of a large number of individual ideas in the mind. As an individual gains experiences and knowledge, "networks" of thoughts and facts are formed in the mind. When we read or hear the word "dog," a mental image of a dog is conjured, as are a host of memories of dogs we have personally encountered and facts learned about dogs from a variety of sources. The input "dog" is not just a single raw datum, but rather an idea with a complex and unique meaning for the mind it inhabits. Mental processing can exist in the absence of meaning, but true thought can only come when sufficient experience allows for meaning. The kind of mechanistic intelligence envisioned by Turing is therefore not true intelligence, and is certainly not consciousness. There is a certain level of complexity required, both of the processor and of the information processed, if the computer's actions are to be considered "thought" – something which may be technologically possible at some point in the future, but which has not been engineered today. From this consideration arises the theory that any sufficiently complex information processing system will be conscious. Consciousness is an emergent property of complexity, just as wetness is an emergent property of large groups of water molecules. An abacus is a computational device, capable of processing information, but it is insufficiently complex to be conscious. Neurons in the human brain work on a binary system, just as computers do – either they fire or they do not fire; there is no "in between." However, the connections among neurons in the human brain are many orders of magnitude more complex than in the most powerful computer today. Until recently, computers have only been capable of relatively simple calculations due to restrictions in their processing ability. Even the "smartest" computers, which can calculate many times faster than the most talented human, cannot perform the sort of global, abstract analysis typical of human thought. But advances in computer hardware and software are poised to allow for true machine intelligence. For instance, quantum computing, which may be practical in the near future, replaces the binary system of ones and zeroes with quantum bits that can exist in superpositions of states, dramatically increasing the amount of information that can be encoded in a computer's memory. Massively parallel processing allows multiple calculations to be performed simultaneously by the computer, much as the human brain is able to process multiple inputs at the same time. When a computer can quickly process a tremendous amount of information, can attach meaning to this information, and is capable of learning from a totally blank slate, it may be capable of actual thought. It will be conscious, rather than merely mimicking such a state. Self-aware machines have the potential to provide a dazzling array of computing applications in the future and we appear to be approaching that technological brink today.

Further reading: 1] The Quest for Consciousness by Christof Koch 2] Consciousness Explained by Daniel Dennett 3] Conversations on Consciousness by Susan Blackmore

References: 1] (Books listed above) 2] Turing, A.M. “Computing machinery and intelligence.” Mind 59: 236 (1950). 3] Searle, John R. “Minds, brains, and programs.” Behavioral and Brain Sciences 3 (1980).



Frequent Mental Activity Reduces the Risk of Dementia
Harsha Prabhala / HURJ Staff Writer
Individuals suffering from dementia, a progressive neurodegenerative condition, experience a deterioration of intellectual abilities, such as memory and the ability to reason. Dementia, in and of itself, is not a disease, but rather a group of symptoms. The most common disease that results in dementia is Alzheimer's disease. According to the National Center for Health Statistics, a subdivision of the Centers for Disease Control and Prevention, Alzheimer's disease is the number eight killer in the United States, claiming 58,866 victims in 2001. The biological mechanism through which Alzheimer's disease arises is still uncharted territory. Researchers have concluded that a buildup of β-amyloid plaques interferes with neurons, causing damage to neural networks and cell death. Beyond this scientific observation, scientists have failed to develop drugs that halt the pathological effects of dementia. As a result, people have been resorting to alternative ways to reduce their chances of dementia. One common preventative method is based on the concept that the brain is like a muscle; the more it is worked
and exercised, the stronger it becomes. In other words, participating in mentally stimulating activities may reduce a person's susceptibility to dementia. Studies in the early nineties introduced the notion that those with more education are less likely to get Alzheimer's disease. It was not until 1993 that Katzman et al conducted a study directly linking more years of education with reduced risks of dementia and Alzheimer's disease. The effect of cognitively-stimulating activities on the relative risk of dementia has yet to be fully investigated. Beyond the common belief that using one's mind can help preserve it, there exists sound clinical evidence regarding the correlation between mental activity and the probability of getting dementia. The purpose of this review is to analyze these studies to determine whether participating in cognitive mental activities several times a week can significantly reduce the relative risk of dementia, which most commonly results from Alzheimer's disease.

Scientific Evidence:

In February 2002, Wilson et al conducted a longitudinal cohort study to investigate the effect of cognitively stimulating activities on the risk of incident Alzheimer's disease. The study involved 801 Catholic nuns, priests, and brothers from 40 groups across the United States. All of the subjects approved to participate in the study were over the age of 65 and did not have dementia at the study's inception. The study incorporated seven common cognitive activities: reading books, viewing television, reading newspapers, reading magazines, playing games, listening to the radio, and going to museums. The frequency of participation was scored using a five-point scale: 5 points, approximately every day; 4 points, several times a week; 3 points, several times a month; 2 points, several times a year; and 1 point, once a year or less.
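The composite activity score used below appears consistent with a simple average of the seven item ratings (for instance, the maximum score reported in the next paragraph, 4.71, is exactly 33/7). A minimal sketch of that scoring scheme, with a hypothetical respondent invented purely for illustration:

```python
# Frequency-to-points mapping from the five-point scale described above.
POINTS = {
    "every day": 5,
    "several times a week": 4,
    "several times a month": 3,
    "several times a year": 2,
    "once a year or less": 1,
}

def composite_activity_score(frequencies):
    """Average the seven item ratings (assumes the composite is a simple mean)."""
    return sum(POINTS[f] for f in frequencies) / len(frequencies)

# Hypothetical answers for the seven activities (books, TV, newspapers, magazines,
# games, radio, museums) - purely illustrative.
answers = ["every day", "every day", "several times a week", "several times a month",
           "several times a week", "several times a week", "once a year or less"]
print(round(composite_activity_score(answers), 2))  # 3.71, the 50th-percentile score in Wilson et al.
```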


The Mini-Mental State Examination (MMSE) was used to test the cognitive function of the subjects. The data collected by Wilson et al indicated that the composite cognitive activity score ranged from 1.57 to 4.71 (mean, 3.57; SD, 0.55). In addition, cognitive activity had an interesting negative correlation with age (r, -0.08; P<0.05). Subjects were followed for an average of 4.5 years, and a total of 111 individuals developed Alzheimer's disease after an average of 3 years. Adjusted appropriately for age, sex, and education, the relative risk (RR) of developing Alzheimer's disease decreased by 33% (RR, 0.67; 95% CI, 0.49-0.92) for each point increase in the composite cognitive activity score. Furthermore, compared with an activity frequency score in the 10th percentile (score=2.86), a cognitive activity score in the 90th percentile (score=4.29) was associated with a 47% decrease in RR, and a 28% decrease in RR was correlated with a score in the 50th percentile (score=3.71). Thus, engaging in cognitive activity several times a week or more was associated with a significant decrease in the relative risk of Alzheimer's disease.

A longitudinal study conducted by Wang et al examined the effects of late-life involvement in mental activities on the risk of dementia. The study was conducted in Stockholm, Sweden, and it involved 776 inhabitants. None of the subjects had dementia prior to the study, and all were over the age of 75. Mental activities were categorized as reading books/newspapers, writing, studying, solving crossword puzzles, and painting or drawing. Similar to the previous studies, mental functionality was determined using the MMSE. Wang et al found that participation in mental activity, compared to none at all (RR=1), was associated with a 0.67 relative risk for dementia (95% CI, 0.45-1.01). A RR of 0.81 (95% CI, 0.52-1.26) was linked to individuals who partook in mental activities less than once a day. On the other hand, those who took part in mental activities every day had a RR of 0.54 (95% CI, 0.34-0.87). In other words, the study indicated that the more frequently individuals engaged in mental activity, the lower their risk of dementia.
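As a rough check on the Wilson et al figures above, here is a minimal back-of-the-envelope sketch (an editor's assumption, not the authors' model) that treats the reported per-point relative risk as multiplying across score differences; the published estimates come from fitted regression models, so the percentages differ slightly.

```python
# Approximate the risk reduction between two composite activity scores by
# compounding the reported per-point relative risk (an assumption, not the fitted model).

def approx_reduction(rr_per_point: float, low_score: float, high_score: float) -> float:
    """Return the approximate fractional risk reduction going from low_score to high_score."""
    return 1.0 - rr_per_point ** (high_score - low_score)

# Wilson et al (2002): RR of 0.67 per point; 10th/50th/90th percentile scores 2.86, 3.71, 4.29.
print(f"{approx_reduction(0.67, 2.86, 4.29):.0%}")  # ~44%, in the neighborhood of the reported 47%
print(f"{approx_reduction(0.67, 2.86, 3.71):.0%}")  # ~29%, close to the reported 28%
```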

A study to find the consequence of leisure activity on the incidence of Alzheimer's disease was conducted by Scarmeas et al. A total of 1,772 nondemented subjects over the age of 65 from the northern Manhattan communities of Washington Heights and Inwood in New York City were chosen for the study.4 Leisure activities included: reading magazines or newspapers or books, playing cards or games, knitting or listening to music, walking for pleasure or excursion, visiting friends or relatives, being visited by relatives or friends, going to movies or restaurants or sporting events, watching television or listening to the radio, doing unpaid community volunteer work, physical conditioning, going to a club or center, going to classes, and going to church or synagogue or temple. One point was given for having taken part in each of the 13 activities listed above. The amount of participation in leisure activities was quantitatively analyzed by using the median value for dichotomization ("low" is less than or equal to 6 activities and "high" is greater than 6 activities). The data that Scarmeas et al collected showed that a "high" score was associated with a 12% reduction in the risk of dementia (RR, 0.88; 95% CI, 0.83-0.93). Furthermore, the intellectual factor was correlated with the greatest decrease in the risk of dementia (RR, 0.76; 95% CI, 0.61-0.94). The intellectual factor was a subset of the 13 leisure activities, specifically reading newspapers or magazines, playing cards or games, and going to classes. Even though participating in more leisure activities reduces one's risk of dementia, intellectually stimulating activities had the greatest beneficial effect, reducing the risk of dementia by 24%. Thus, Scarmeas et al indicate that taking part in more than six activities is recommended in order to considerably reduce the risk of Alzheimer's disease.

In a second study in 2002, Wilson et al focused on cognitive activity and the incidence of Alzheimer's disease. As many as 6,158 individuals of a biracial community in Chicago participated in this study.



Of those six thousand, a dementia-free cohort of exactly 3,838 was established. As before, only the effect of seven common leisure activities (reading books, viewing television, reading newspapers, reading magazines, playing games, listening to the radio, and going to museums) was investigated. Similar to Wilson's previously conducted study, the five-point system was once again used to quantify the frequency of participation in the seven leisure activities. The composite measure of cognitive activity ranged from 1.28 to 4.71 (mean, 3.30; SD, 0.59). After an average follow-up interval of 4.1 years, the relative risk, adjusted for sex, age, and education level, associated with each point increase in the activity score was 0.36 (95% CI, 0.20-0.65). In other words, for every one-point increase in the cognitive activity score, the odds of getting Alzheimer's disease are reduced by 64%. Furthermore, a person with a cognitive activity score in the 90th percentile (score=4.00) is half as likely to develop Alzheimer's disease as a person with a cognitive activity score in the 10th percentile (score=2.43). Interestingly enough, years of education alone was related to a RR of 0.88 (95% CI, 0.79-0.97). In essence, individuals who partook in mentally stimulating activities several times a week or more had the most substantial decrease in relative risk of Alzheimer's disease.

Verghese et al conducted a very comprehensive investigation into the risk of dementia associated with leisure activities in the elderly. This study involved a cohort of 469 subjects, all of whom did not have dementia and were over the age of 75. The following were the six cognitive activities that were analyzed: writing for pleasure, playing board games or cards, reading books or newspapers, doing crossword puzzles, participating in organized group discussions, and playing a musical instrument. For each activity, subjects received no points for monthly or no participation, one point for weekly participation, four points for participation several days a week, and seven points for daily participation. After a median follow-up time of 5.1 years (mean, 6.6; SD, 4.9), 124 subjects developed dementia and 361 subjects died. Adjusting the data for sex, age, education level, and the presence of an illness, the hazard ratio associated with a one-point increment in the cognitive activity score was 0.93 (95% CI, 0.89-0.96).

Furthermore, the hazard ratio for individuals with activity scores in the highest third compared to those in the lowest third was 0.37 (95% CI, 0.23-0.61), which is a risk reduction of 63%. As a result, increased participation in cognitive activities several times a week or more was significantly associated with a reduced risk of dementia.

Conclusion:

All five comprehensive studies clearly associate more frequent participation in cognitively-stimulating activities with a reduced risk of dementia compared to less frequent participation. Specifically, evidence from four out of the five studies suggests that taking part in such mental activities several times a week or more will significantly decrease the likelihood of developing dementia. The evidence that strongly associates cognitive activities with a reduced risk of dementia implies that stimulation of the brain results in physiological changes that make a person less susceptible to dementia. The molecular mechanisms of how β-amyloid deposits can pathologically stimulate dementia are still very unclear. A theory called "cognitive reserve" is one possible rationalization of how effortful mental activity can reduce the likelihood of dementia or Alzheimer's disease. According to this theory, mental activity has the ability to generate new synaptic connections in addition to strengthening existing ones. Furthermore, mental activity may also be connected with neurogenesis, particularly in the hippocampus. Increased neural plasticity results in significant changes that allow an individual to bypass the pathology of Alzheimer's disease. Moreover, the β-amyloid peptide is a crucial component of the Notch pathway, which regulates neuron creation. β-amyloid, one of the major factors contributing to Alzheimer's disease, is regulated during neurogenesis, a process that can reduce the likelihood of the onset of the disease. By creating new neural connections and making existing connections even stronger, cognitive activity gives individuals a better chance of avoiding the occurrence of dementia.

References:
1] Verghese J et al. Leisure activities and the risk of dementia in the elderly. New England Journal of Medicine. 2003; 348(25): 2508-2516.
2] Wilson RS et al. Cognitive activity and incident AD in a population-based sample of older persons. Neurology. 2002; 59: 1910-1914.
3] Coyle, Joseph. Use it or lose it - do effortful mental activities protect against dementia? New England Journal of Medicine. 2003; 348(25): 2489-2490.
4] Scarmeas N et al. Influence of leisure activity on the incidence of Alzheimer's disease. Neurology. 2001; 57: 2236-2242.
5] Wilson RS et al. Participation in cognitively-stimulating activities and the risk of incident Alzheimer's disease. JAMA. 2002; 287(6): 742-748.
6] Wang H et al. Late-life engagement in social and leisure activities is associated with a decreased risk of dementia: a longitudinal study from the Kungsholmen project. American Journal of Epidemiology. 2002; 155(12): 1081-1087.
7] Alzheimer's Association. July 2002. Online source, http://www.alz.org/internationalconference/Pressreleases/2002releases/PR_072202_E.htm.
8] National Center for Health Statistics. October 2004. Online source, http://www.cdc.gov/nchs/fastats/lcod.htm.
9] Lodish et al. Molecular Cell Biology. New York, NY: Freeman; 2004.
10] Katzman R. Education and the prevalence of dementia and Alzheimer's disease. Neurology. 1993; 43: 13-20.
11] Folstein M et al. Mini-mental state: a practical method for grading the mental state of patients for the clinician. Journal of Psychiatric Research. 1975; 12: 189-198.


Leptin and CCK: The Alternative to Atkins
Nancy Tray / HURJ Staff Writer

Jay Leno once joked, "Now there are more overweight people in America than average-weight people. So overweight people are now average. Which means you've met your New Year's resolution." Though this quote is rather amusing, it becomes more startling once its inner meaning is dissected. Obesity is one of the fastest growing epidemics facing Americans today, a crisis that has escalated due to the expansion of fast food chains and technological advances that make physical activity less necessary. Not only does obesity mar body image, but even worse, it induces many health hazards including diabetes, stroke, heart disease, and cancer - all among the top ten killers of Americans. Unlike other health problems such as smoking, America has been unable to curb the prevalence of obesity. In 2002, the Centers for Disease Control and Prevention (CDC) released a statistic stating that nearly two-thirds of U.S. adults 20 years and older are overweight, and over half of these people are clinically diagnosed as obese. The study also stated that within the past four decades, overweight and obesity cases have increased steadily, especially in the past twenty years. For example, overweight cases increased from 31.5 to 33.6 percent in U.S. adults ages 20 to 74, whereas obesity more than doubled from 13.3 to 30.9 percent. Current projections indicate that this health problem is only going to get worse. So what exactly do the terms overweight and obese mean? These words are used to describe excess weight, in particular a high percentage of body fat. Overweight and obesity are typically measured by the Body Mass Index (BMI), which compares height and weight using the equation weight (kg) / height squared (m²). According to the National Institutes of Health (NIH), overweight individuals have a BMI ≥ 25, while obese people have a BMI ≥ 30. The progression from overweight to obesity holds increasing health risks. Most people instinctively attribute obesity to the failure of individual willpower; however, willpower is certainly not the only factor. Obesity is caused by a combination of influences – personal, environmental, and genetic. It was not until recently that scientists were able to delve into the genetic aspect that makes individuals more prone to overeating. Studies have shown that susceptibility to becoming overweight or obese can be attributed in part to inherited genetic variations and hormonal imbalances in the brain. In the past few decades, researchers have made significant findings that narrow the genetic basis for obesity down to a few culprits. One of

these is the obesity gene (Ob),

identified in humans, rats and mice. The Ob gene produces a hormone called leptin, which acts in the hypothalamus (in particular, the arcuate, ventromedial and lateral nuclei) to regulate satiation and appetite. In some cases, lesions to these hypothalamic regions or resistance to leptin signaling resulted in obesity and diabetes. Other cases were due to the lack of the hormone altogether. For example, one nine-year-old girl in England weighed over 200 pounds by consuming 1100 calories per meal. During medical investigation, it was found that she lacked the hormone leptin. Upon injections of leptin, however, her food intake rapidly diminished to 180 calories per meal, an 84% reduction. Eventually, her body weight went down to normal. Some obesity cases lack these genetic mutations, indicating that there are other genetic factors linked to obesity. Other hormones could also play key roles in the obesity epidemic. One of these is cholecystokinin (CCK), found in both the hypothalamus and the intestine. It has been suggested that CCK controls appetite by acting in satiation signaling. Experiments done by Smith and Gibbs in the 1970s found that injections of CCK reduced food intake and increased physical activity in food-deprived rats. Low levels of CCK have also purportedly been found in obese mice. Another hormone found in the hypothalamus, Neuropeptide Y (NPY), has opposing roles to CCK and leptin. Whereas CCK and leptin act to moderate food consumption, NPY has been found to stimulate feeding, and has direct implications in the obesity epidemic. Hormonal imbalances in the brain involving leptin, CCK, and NPY compose the main foundation for obesity in terms of genetics. However, each new discovery leads to countless proteins and genes that could also play possible roles, illustrating the massive implications of genetics in the obesity epidemic. Though the environment is a factor in obesity, researchers have shown that genetics plays a role, as well. In order to properly confront the obesity epidemic, we must recognize that obesity may be a hereditary condition, rather than a personal flaw in character.
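For reference, a minimal sketch of the BMI classification quoted earlier (the function and the example figures are illustrative; the cutoffs are the NIH values given above):

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify weight status from BMI = weight (kg) / height (m) squared."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 30:
        return f"obese (BMI {bmi:.1f})"
    if bmi >= 25:
        return f"overweight (BMI {bmi:.1f})"
    return f"not overweight (BMI {bmi:.1f})"

# Two hypothetical adults of the same height, 1.75 m tall.
print(bmi_category(95.0, 1.75))  # obese (BMI 31.0)
print(bmi_category(80.0, 1.75))  # overweight (BMI 26.1)
```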

References:
1] Current Science & Technology Center. Hormones Controlling Obesity. April 2, 2004. http://www.mos.org/cst-archive/article/6120/1.html
2] Friedman, Jeffrey M (2003) A War on Obesity, Not the Obese. Science 7: 856-858
3] Gibbs J, Young RC, Smith GP (1973) Cholecystokinin decreases food intake in rats. J Comp Physiol Psychol 84(3): 488-495
4] International Obesity Task Force. Managing the global epidemic of obesity. Report of the WHO Consultation on Obesity, Geneva, June 5-7, 1997. World Health Organization: Geneva
5] Kalra SP (1991) Neuropeptide Y secretion increases in the paraventricular nucleus in association with increased appetite for food. PNAS 88: 10931-5
6] Sturm R (2002) The effects of obesity, smoking, and drinking on medical problems and costs. Health Aff 21(2): 245-253




“Before brains, there was no color or sound in the universe, nor was there any flavor or aroma and probably little sense and no feeling or emotion.” – Roger W. Sperry, 1964


