FALL 2016
Cognitive Training: Fact or Fiction? MOLLY MAGID // PG 2
RETHINKING RACE IN MEDICINE KAVIA KHOSLA // PG 8
NEEDLES AND HAY: THE AI SAFETY DEBATE CONNOR FLEXMAN // PG 12
MEDITATION: THE NEW ANCIENT MEDICINE KARINE LIU // PG 18
brown · asu · gwu · berkeley · cambridge · harker · harvard · jhu · cmu · nus · cornell · georgia tech · georgetown · osu · ucdavis · uchicago · melbourne · yale
about
staff
president: Hye In Sarah Lee
editors-in-chief, 14-15: Jordan Deloach & Oliver Lyman
editors-in-chief, 15-16: Lucy Van Kleunen &
managing editors, 14-15: Russell Shelp
managing editor, 15-16: Adrija Darsha
editors: Methma Udawatta, Adam Horowitz, Owen Leary, Russell Shelp, Mark Sikov, Shannon McCarthy, Elena Weissman
writers: Karine Liu, Lucy Van Kleunen, Kavia Khosla, Shannon McCarthy, Frances Chen, Maggie Rowe, Molly Magid, Kurt Pianka, Connor Flexman
head of design: Kaley Brauer
layout and art: Mae Heitmann, Alec Davidson, Yixuan Wang, Helen Situ, Montana Fowler, Georgianna Stoukides, Ben Wilson
The Triple Helix is an independent, student-run organization committed to exploring the intersection of science and society. We seek to examine the socioeconomic, moral and environmental implications of scientific advances, and to highlight the surprising ways that science can affect our ideas of humanity. The Triple Helix has an international scope, with chapters all over the globe, ranging from Berkeley to Yale, to Cambridge, Melbourne and the National University of Singapore. You are currently holding a copy of one of the Brown chapter’s biannual magazines. Inside, you will find a collection of articles written, edited & designed by students here at Brown.
contents FALL 2016

8 Rethinking Race in Medicine KAVIA KHOSLA
12 Needles and Hay: The AI Safety Debate CONNOR FLEXMAN
18 Meditation: The New Ancient Medicine KARINE LIU

ON THE COVER
2 Cognitive Training: Fact or Fiction? Molly Magid

FEATURES
24 Fish Farming Sustainability SHANNON MCCARTHY
28 Touching What Isn’t There: Virtual Reality in Medicine and Culture LUCY VAN KLEUNEN
34 How We Teach About How We Got Here MAGGIE ROWE
38 The Biosocial Model of Crime KURT PIANKA
42 The Truth about the Truth: the concerning rise of faulty science FRANCES CHEN
46 Are we all WEIRD? Behavioral scientists seem to think so. KARINE LIU
ARTWORK & LAYOUT Kaley Brauer ‘17
EDITOR Methma Udawatta ‘16
Cognitive Training: Fact or Fiction? MOLLY MAGID ‘17
Super intelligence, the idea that humans can boost their intelligence to a genius level, is one that many science fiction stories have explored. Ted Chiang’s “Understand,” for example, describes people who gain super intelligence by taking neuron-healing drugs. Characters in other stories employ their own methods to improve intelligence: surgery, genetic modification, or alien technology, to name a few. These methods, and even the idea of improving intelligence itself, seem far-fetched and unlikely to be achieved. Nevertheless, online cognitive training sites claim that playing their brain games will improve the intelligence of their users. These sites advertise improved mental performance as a result of engaging in their programs, but the evidence suggests that they don’t deliver on their promises. That failure, however, does not discredit the entire discipline of cognitive training, and specifically how it may help to prevent cognitive decline in older adults.
Many cognitive training sites, like Lumosity, NeuroNation, and BrainHQ, advertise themselves as “gyms for your brain” [1]. Generally, these sites promise to improve cognitive skills like memory, attention, and problem solving, as well as overall intelligence. They provide a variety of games for this “workout,” each game designed to improve a specific cognitive skill. For example, Lumosity training includes a game called Memory Matrix, which is built around a series of square grids. The first round starts with a three-by-three grid of squares and a lit-up pattern that appears in the squares for a few seconds. After the squares go dark, the user must click on the correct squares to recreate the pattern flashed on the screen. With each new level, the grid becomes larger and the number of squares in the pattern increases. Lumosity asserts that this game helps users improve their spatial recall, a type of memory that “helps you track location and position,” and research by Lumosity concludes that participants improved greatly on assessments of spatial recall after playing [2].
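The round structure just described is simple enough to sketch in a few lines of Python. This is a hypothetical toy version of the mechanic as the article describes it, not Lumosity’s actual implementation; the grid-growth rule in particular is an invented stand-in:

```python
import random

def play_round(grid_size, pattern_size):
    """One round of a Memory Matrix-style game: 'flash' a pattern of cells,
    then ask the player to reproduce it from memory."""
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    pattern = set(random.sample(cells, pattern_size))
    print(f"Memorize these {pattern_size} cells: {sorted(pattern)}")
    # In a real game the lit squares would now disappear; here we just read guesses.
    guesses = set()
    for _ in range(pattern_size):
        r, c = map(int, input("cell as 'row col': ").split())
        guesses.add((r, c))
    return guesses == pattern

# Difficulty ramps as described: the grid and the pattern grow with each level.
level, grid, pattern = 1, 3, 3
while play_round(grid, pattern):
    level += 1
    grid += level % 2          # enlarge the grid every other level (arbitrary choice)
    pattern += 1               # one more lit square per level
    print(f"Correct! Level {level}: {grid}x{grid} grid, {pattern} cells.")
print("Pattern missed; game over.")
```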
Lumosity and other brain training sites advertise that playing fun brain games every day will gradually improve cognitive functioning. But have these programs really unlocked the secret to achieving greater intelligence, or are they just as fictional as brain-enhancing alien technology? Like any science-based product, brain training sites reference scientific studies to justify their claims of cognitive enhancement. However, studies performed using the specific games included in a site are often carried out by scientists affiliated with that organization [3]. Research from a site’s own labs may be strongly influenced by experimenter bias, because those conducting the study presumably benefit from the success of the business. Indeed, many question the validity of certain findings, like Lumosity’s assertion that its brain games increased working memory and even improved IQ by one point per hour spent training [4].
Many others, generally university researchers, have tried and failed to replicate these effects [5]. One scientist involved in a replication study asserted that the brain training tasks used by the original study “may have lacked appropriate controls” because the control group was not given another, unrelated task to perform instead of the training [6]. In the replication study, participants in the control condition completed visual search tasks unrelated to working memory, and their performance was compared to that of participants who did complete working memory tasks [5]. Other supporting research, done outside the sites by independent labs, is often only loosely related to the training these sites provide. It is conducted with activities different from the online games and often uses specific research populations. For example, Lumosity, NeuroNation, and BrainHQ all cite articles with data from populations like stroke patients, children with ADHD, or patients with schizophrenia [6, 7, 8]. It is unclear whether the findings from these studies can be generalized to the broader population that brain training sites attract. The studies these sites use to legitimize their claims seem either flawed or inapplicable for determining whether the training improves cognitive functioning in their actual user bases. Performing a brain training game improves one’s skill at playing that game and games like it, but does not produce far transfer. Far transfer is the concept that “knowledge acquired in one situation applies (or fails to apply) to another,” a very important part of learning [9]. An example of far transfer is learning to read a bus schedule and later applying this knowledge to read an airline schedule [10]. In terms of cognitive training, one might assume that playing a game targeted at improving short term memory would help a person remember where a car is parked in a crowded lot. However, a meta-analysis of online training concluded that the “tailored” games offered by sites only help users improve on that specific game and do not have a lasting impact [11]. For the studies this analysis deemed “robust,” those that included controls and randomization, the long term effects were very low or nonexistent. “The implication on the part of the companies is that somehow you’re going to get better at everything that is mental, and there is no evidence to show that,” says David Meyer, Director of the University of Michigan’s Brain, Cognition, and Action Lab [6]. Nevertheless, people sign up for subscriptions, spending around $15 per month, and expect more than just getting better at games. Users believe that playing these games will help them improve overall cognition, but there is no evidence that far transfer to real-life cognitive tasks occurs. Even worse, some sites like CogniFit insist that their games can help people with multiple sclerosis, ADHD, or Alzheimer’s disease, among other disorders and diseases [12]. It is particularly disheartening that the activities provided by online training sites lack far transfer to real cognition, considering that some people use these sites to try to improve impaired cognitive functioning or to prevent cognitive decline. Fortunately, there is a use for cognitive training beyond online training, as a way to help both healthy older
adults and older adults with dementia reduce cognitive decline. In this context, cognitive training is composed of many repetitive tasks intended to target one or more cognitive abilities: attention, memory, reasoning, language use, fluid intelligence, etc. [13]. Unlike the games used by brain training sites, these tasks are tailored to a specific population. For example, tasks are aimed at improving cognitive functions, like memory, that decline with age. In addition, these tasks are more likely to be backed by unbiased and rigorous research, because researchers are looking to further scientific understanding and determine whether cognitive training has any influence on cognitive decline in the elderly. This contrasts with the research validating brain training sites, which is more of an afterthought. One review of cognitive training in older adults compared different types of training and found that the most effective were administered in a group setting, included a variety of tasks targeting multiple areas of cognition, and consisted of tasks that became progressively more difficult [13]. This research anticipates cognitive training becoming a commonplace intervention to help prevent cognitive decline in older adults. Such an intervention relies on the concept of neuroplasticity, the theory that an aging brain can still be reorganized and changed as a result of experience [14]. In this case, the experience is engaging in tasks designed to improve cognitive function. One significant cognitive training trial, ACTIVE, reported that healthy older adults who participated in training experienced
less cognitive decline in everyday activities, and that the effects have lasted up to ten years after the training [15, 16]. In this study, participants’ cognitive abilities were evaluated both through their scores on the Mini-Mental State Examination (MMSE), a standard evaluation of cognitive state, and through patients’ own reports about performing daily activities. As for adults with Alzheimer’s disease, a meta-analysis of cognitive training studies concluded that cognitive training improves learning, memory, and executive function, as well as functioning in daily life as assessed by self-report [17]. Even though research assessing this type of cognitive training is less influenced by bias than that performed to support online brain training, most studies in this review were limited by small sample sizes. This suggests that more well-designed studies need to be conducted to reach a clear conclusion.

Though these results regarding cognitive training in older adults are noteworthy, it is important to remember that engaging in cognitive training cannot completely prevent cognitive decline. In the aging process, neurons that are no longer in use are removed. This happens to all aging people, though those with dementia experience more neuron loss. Neuroplasticity can only improve cognitive processing by reorganizing neurons and changing the available brain structure, not through the creation of new neurons [15]. Consequently, cognitive training cannot completely prevent decline, because neurons will continue to die despite training. Decline in cognitive ability is an inevitable part of aging, so cognitive training is not a panacea for cognitive impairment and can only help to the extent that it slows this decline. Another flaw in cognitive training is that there is not a lot of evidence that far transfer occurs. To determine far transfer in the ACTIVE study, participants were asked to describe their performance in everyday cognitive tasks [15]. All participants in the three experimental conditions (training to improve reasoning, memory, or processing speed) reported less difficulty in everyday tasks than the control group, but the long term effects were only significant for the reasoning and speed groups [16]. Other studies report that cognitive training only improves participant scores on evaluations of cognition, but does not improve cognition used in daily activities [15, 16]. However, a recent review concluded that far transfer does occur in cognitive training in all cognitive domains except memory [18]. If these conclusions are true, it would certainly be a disadvantage for cognitive training in older adults with dementia, given that memory is most affected by the disease. Nevertheless, because cognitive training is a relatively recent area of inquiry, more research on far transfer still needs to be conducted to discover how well it can be achieved. Despite potential pitfalls, the use of cognitive training in controlled environments to help with cognitive decline is becoming an increasingly viable option for the aging population. With the rapidly growing population of older adults, cognitive training could help the elderly remain independent. The decrease in cognitive skills among older adults is correlated with an increase in both the use of health services and health expenses, so increasing independence for older adults may help lessen health costs. Cognitive training is also especially promising for those with dementia who are looking for an alternative to drugs that may cause adverse effects. Some researchers, like the Director of the Division of Behavioral
and Social Research at the National Institute on Aging, believe that “these sorts of interventions are potentially enormously important. The effects [of cognitive training interventions] were substantial. There isn’t a drug that will do that yet” [19]. This kind of training also has interesting applications in a real world health context. It presents a distinct opportunity for people from many different medical disciplines like psychology, occupational therapy, recreational therapy, and gerontology to
all work together with primary care physicians [20]. This interdisciplinary approach could lead to interventions that integrate brain training with activities like exercise, healthy eating, medication, and treatment for depression. For example, a team of health workers from many disciplines could design a schedule of activities and treatments for participants to follow every week. Since this intervention combines methods and approaches for treating dementia and
cognitive decline in general from a variety of angles, it may be the best way to lessen symptoms and prevent further decline. Perhaps in the future, in addition to prescribing medication, doctors may recommend cognitive tasks or brain training programs. This certainly is not the future of engineered super intelligence that science fiction authors once dreamed of, but it is a future that could help the elderly train their brains and recapture the capabilities of their youth.

[1] Lumosity. Lumosity Games Overview. [YouTube] Science and Technology video.
[2] Hardy JL, Drescher D, Sarkar K, Kellett G, Scanlon M. Enhancing visual attention and working memory with a web-based cognitive training program. Mensa Research Journal. 2011;42(2):13-20.
[3] Hardy J, Nelson R, Thomason M, Sternberg D, Katovich K, Farzin F, Scanlon M. Enhancing Cognitive Abilities with Comprehensive Training: A Large, Online, Randomized, Active-Controlled Trial. PLOS. 2015 Sep;1-17.
[4] Jaeggi S, Buschkuehl M, Jonides J, Perrig W. Improving fluid intelligence with training on working memory. PNAS. 2008 March;1-5.
[5] Redick T, Shipstead Z, Harrison T, Hicks K, Fried D, Hambrick D, Kane M, Engle R. Working Memory Training May Increase Working Memory Capacity But Not Fluid Intelligence. Journal of Experimental Psychology: General. May 2013;142(2):359-379.
[6] Olena A. Does Brain Training Work? The Scientist [Internet]. April 21, 2014 [cited October 1, 2015]. Available from: http://www.the-scientist.com/?articles.view/articleNo/39768/title/Does-Brain-Training-Work-/
[7] NeuroNation. ADHD symptoms may be alleviated with brain training. [Internet] [cited October 13, 2015]. Available from: http://www.neuronation.com/science/adhd-symptoms-may-be-alleviated-brain-training
[8] BrainHQ. Physical Brain Change. [Internet] [cited October 26, 2015]. Available from: http://www.brainhq.com/world-class-science/published-research/physical-brain-change
[9] Lumosity. Bibliography. [Internet] [cited October 13, 2015]. Available from: http://www.lumosity.com/hcp/research/bibliography
[10] Singley M, Anderson J. The Study of Transfer. Cambridge: Harvard University Press; 1989.
[11] Melby-Lervag M, Hulme C. Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology. 2013;49(2):270-291.
[12] CogniFit. Neurology, Cognitive Science And Brain Research. [Internet] [cited October 26, 2015]. Available from: https://www.cognifit.com/neurology-brain-research-cognitive-science
[13] Kueider A, Bichay K, Rebok G. Cognitive Training for Older Adults: What Is It and Does It Work? Issue Brief. Center for Aging. October 2014;1-8.
[14] Guglielman E. The Ageing Brain: Neuroplasticity and Lifelong Learning.
[15] ACTIVE: A Cognitive Intervention Trial to Promote Independence in Older Adults.
[16] Rebok G, Ball K, Guey L, Jones R, Kim H, King J, Marsiske M, Morris J, Tennstedt S, Unverzagt F, Willis S. Ten-Year Effects of the ACTIVE Cognitive Training Trial on Cognition and Everyday Functioning in Older Adults. Journal of the American Geriatrics Society. January 2014;62(1):16-24.
[17] Sitzer D, Twamley E, Jeste DV. Cognitive Training in Alzheimer’s Disease: A Meta-Analysis of the Literature. Acta Psychiatrica Scandinavica. 2006;114(2):75-90.
[18] Zelinski E. Far Transfer of Cognitive Training in Older Adults. Restor Neurol Neurosci. 2009;27(5):455-471.
[19] Dembner A. Cognitive training helps elderly keep mental sharpness / but value in daily life questioned [Internet]. December 13, 2002 [cited October 13, 2015]. Available from: http://www.sfgate.com/health/article/Cognitive-training-helps-elderly-keep-mental-2711624.php
[20] Yu F, Rose K, Burgener S, Cunningham C, Buettner L, Beattie E, Bossen A, Buckwalter K, Fick D, Fitzsimmons S, Kolanowski A, Specht J, Richeson N, Testad I, McKenzie S. Cognitive training for early-stage Alzheimer’s disease and dementia. Journal of Gerontological Nursing. 2009;35(3):23-29.
Pillars of Creation // Michelle Miller ‘18 student art spotlight
RETHINKING RACE IN MEDICINE
KAVIA KHOSLA ‘16
ARTWORK & LAYOUT Yixuan Wang ‘18, Kaley Brauer ‘17
EDITOR Owen Leary ‘18
Race is often the first characteristic people notice about someone: “that woman is black,” “that man is Asian.” Doctors are not exempt from this way of thinking, and often perceive a new patient in this way. Medical charts and dictations often begin along the lines of “45-year-old Hispanic man with acute angina.” Along with age and gender, race is one of the most significant markers health professionals use to interpret an individual’s health, whether consciously or subconsciously. Many hope that using race to prescribe treatment will save money and improve health outcomes by tailoring medicine to individuals.
However, race is not an accurate representation of individuals. Humans have culturally and socially constructed racial categories over centuries. Racial categorization has greatly influenced human interactions throughout history, and researchers should try to understand this process, but problems arise when social definitions of race become so deeply entrenched in our subconscious that we begin using them to understand people’s health. In the 19th century, academics began to accept the theory of racial determinism, which proclaims that race is linked to genetic traits and thus predetermined and fixed for each individual. The theory popularized the creation of stringent racial categories, later used to assert that some races were superior to others. This was a factor in justifying the Holocaust, when the Nazi party launched a genocide campaign aimed at “ethnic cleansing.” Almost 20 years later, the debate over anti-miscegenation laws brought race back to the forefront of political discussion. These laws banned interracial marriage and sexual relations; many argued that African Americans (and other nonwhites) were so biologically different from whites that interracial sex could not be considered natural. Proponents used “scientific evidence” that races are distinct to proclaim that some have
less preferable genes. Activists of the 1960s fought anti-miscegenation laws as part of racial equality movements and argued that America needed to reconsider whether racial boundaries were scientifically accurate. Because atrocities had been committed against particular races on the basis of “science” in the 20th century, scientists began to critically examine our perception of race. In 2003, a Human Genome Project report stated that “[t]here is no scientific basis for race,” and that “‘races’ cannot be distinguished genetically.” The U.S. Department of Energy and the National Institutes of Health launched the Human Genome Project in 1990; by 2003 they had mapped the entire human genome and begun surveying the results. That researchers could not discern an individual’s race from his or her genome suggested that the theory of racial determinism could not be accurate. This conclusion is strengthened by Harvard geneticist Richard Lewontin’s research from 1972, which showed that 85 percent of human genetic variability occurs within geographically defined races rather than between them. Subsequent research found even higher numbers, attributing at most 6-10 percent of genetic differences to differences between races. This tells us that humans are far more genetically homogenous than previously imagined; throughout natural history, there has been too much genetic interaction between geographic populations to develop pure races. Even though human races are generally homogenous, we cannot deny that people of recent common ancestry share genetic trends. In 2005, evolutionary developmental biologist Dr. Armand Marie Leroi published an op-ed called “A Family Tree in Every Gene.” It argued that recent research offered “fresh evidence that racial difference is genetically identifiable.” Some geneticists argue that interracial genetic differences may be relatively small, but that they account for the physical differences we see between races and potentially other differences we cannot see. For this reason, Dr. Leroi and some of his contemporaries argue that race is still a significant source of scientific information.
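To see what a statistic like Lewontin’s means in practice, consider a small simulation. The sketch below is illustrative only: it invents allele frequencies for three populations that differ slightly, then partitions the total genetic variance into within-group and between-group components. The printed share depends on the invented parameters, not on real human data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1,000 loci with species-wide allele frequencies, plus small per-population shifts.
base = rng.uniform(0.2, 0.8, 1000)
pop_freqs = np.clip(base + rng.normal(0, 0.1, (3, 1000)), 0.01, 0.99)

# 200 diploid individuals per population; genotype = allele count (0, 1, or 2).
genotypes = np.stack([rng.binomial(2, f, (200, 1000)) for f in pop_freqs])

total_var = genotypes.reshape(-1, 1000).var(axis=0)   # variance pooled across everyone
within_var = genotypes.var(axis=1).mean(axis=0)       # mean variance inside each group

# When group-level shifts are small, most variation is within groups.
# (Lewontin's estimate for real human data was about 85 percent.)
print("share of variation within groups:",
      round(float(within_var.sum() / total_var.sum()), 2))
```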
The problem with these arguments lies in the incongruence between genetically similar ‘races’ and socially constructed ones. The continent of Africa contains the greatest amount of genetic diversity worldwide, yet we generically label everyone in Africa “African” or “black”. If one were to ask different populations around the world to categorize the same person into a racial category, responses would greatly vary depending on social and cultural beliefs. In the US, if someone has any African ancestry, no matter how small, they are considered African American and often self-identify as such. In this social phenomenon called black exceptionalism, anyone with Ethiopian lineage in America is grouped with all other African-Americans. Wilson et al. were able to delineate general racial groups by grouping individuals together by similar genetic trends. The results confirmed that our social definitions of race do not accurately describe the biological truth. The study showed that 62% of the Ethiopian participants were in the same genetic cluster as most Jewish, Armenian, and Norwegian participants. Therefore, simply considering Ethiopian-Americans “African-American” would be a genetic fallacy. The “Asian” label was also debunked by the placement of Afro-Caribbeans in the same genetic cluster as West Eurasians, with Chinese participants in a completely separate group. In summary, the racial determinism theory does not get the same level of support that it did a century ago. Biologists now understand that races are much more genetically complex and
variable than we once thought. However, there are clear genetic trends among people who share recent ancestry from a specific geographic location. As technology advances, we are developing the capacity to identify these trends. How we use this capacity will be crucial to the future of medicine. Research in the late 20th century began to illuminate the immense disparities in health between non-minorities and minorities in America. According to a CDC report, between 2001 and 2009 the rate of preventable hospitalizations in blacks was nearly double that in whites. In 2010, the rates of uninsured Hispanics and blacks, 41% and 26.2% respectively, significantly exceeded the rate of uninsured whites, 16.1%. As this research accumulates, so does the pressure on research institutions to understand these differences. The result is an institutionalized push to study disease by race. Medical anthropologist Duana Fullwiley says, “you can’t get a grant from the NIH unless you recruit in racial groups, label people by census category, and then report back the data in terms of outcomes by racial type.” This recent trend is complex and often problematic. When the CDC monitors disease prevalence or insurance rates by race, it describes the experience of socially constructed categories of race. These statistics do not claim anything biological about those races and are therefore suitable. However, using race in clinical trials to find innate biological differences between races implicitly assumes that those categories are biological rather than socially constructed. An example of this misconstruction that received substantial criticism is BiDil, a combination of two generic drugs commonly used to treat congestive heart failure. Dr. Jay Cohn realized the potential clinical benefits of combining the two drugs in a clinical trial in the 1980s. However, in 1997 the FDA refused to give BiDil
its own patent because a combination of two generic drugs was not considered novel. Dr. Cohn reanalyzed the data from the original clinical trial by race and realized it worked more effectively in African-American patients. In 2000, Cohn got a new patent for BiDil as an “ethnic drug” and sold the rights to a pharmaceutical company called NitroMed. NitroMed conducted a relatively small clinical trial: “A-HeFT,” the African-American Heart Failure Trial. In the study, 1,050 self-identified African Americans either took their normal heart medications or BiDil. BiDil subjects exhibited a 43% lower mortality rate than the African Americans taking other drugs. In June of 2005, the FDA finally approved BiDil, this time with a race-specified label for black patients. There are several troubling aspects to BiDil’s approval. Firstly, the African-American patients were recruited through self-reporting. As we saw from earlier evidence, self-reporting is a rudimentary way of defining race because an individual’s perception of his or her race does not necessarily line up with his or her genetic makeup. Secondly, the only comparison done between African-Americans and other races was the retrospective analysis, used to give BiDil “fresh” branding so it would be approved. NitroMed did not conduct a proper comparison study, as they only recruited black patients,
yet the drug received a patent targeted only at African-Americans. In fact, NitroMed has confirmed that BiDil is beneficial in patients regardless of race. This suggests that the motivations for seeking a race label were economic, but the implications of the FDA’s label go far beyond economics. Because of the race label, many medical professionals do not think to give BiDil to patients who are not African-American. Even if they do want to prescribe it, some insurance companies will not cover BiDil for non-black patients because such use is considered off-label. Patients who could be benefitting from the drug are not receiving it. BiDil’s race label also implies that the pill targets a genetic peculiarity intrinsic to African-Americans. This explanation may overpower other social factors contributing to African American patients’ health. It gives the illusion that BiDil can solve health disparities between blacks and whites. Meanwhile, important underlying causes of disparities like social
and economic inequality go unaddressed. In 2001, Exner et al. showed inferior outcomes of a drug called enalapril, an ACE inhibitor, in African-American patients with a particular heart defect when compared with Caucasian patients. Subsequently, physicians stopped prescribing ACE inhibitors as often to African American patients with hypertension. Multiple studies since Exner’s have demonstrated equal efficacy of ACE inhibitors in both racial groups when prescribed in a particular combination and dosage. Black patients were denied a beneficial drug because of a misleading study based on race. Health disparities research may be done in good faith, yet as currently practiced, it manages to reinforce racial difference as the main explanation for systemic health inequality. When studies measure clinical outcomes first and foremost by race, we can lose the bigger picture. Fullwiley notes that there are epigenetic factors that may be more influential than the actual genes we carry. Epigenetic factors are external influences, such as prenatal nutrition or environmental exposures, that affect gene expression. Sickle cell is traditionally considered a “black” disease, but a large portion of the Senegalese population experiences a very mild version of it. Scientists assumed that Senegalese people have a gene that makes sickle cell milder. Eventually they found that the Senegalese cultural practice of ingesting a particular plant root encouraged growth of blood cells that do not sickle as easily. Reducing evolutionary history to race and ethnicity can cause scientific errors. Acknowledging that race is not as good a predictor of health as someone’s actual genetic code, Dr. Leroi argues, “Ideally, we would all have our genomes sequenced before swallowing so much as an aspirin. Yet until that is technically feasible, we can expect racial classifications to play an increasing part in health care.” On the other hand, one could argue that the stakes are too high to use race in clinical practice, and that it should be used strictly to monitor the health experiences of socially defined races. And still others claim race could be used in a field called “4th generation” health disparities research. To eliminate health disparities, scholars from several disciplines must work synergistically “through scientific, ethical, and moral dialogue on the relationship between genes, race, and
disease.” Beyond rethinking our approach to health disparities research, we could also postpone some research until genetics catches up to the task of sequencing any patient who could benefit from it. Already in many hospitals, oncologists are required to genotype patients who are at risk for cancer. Sequencing every patient’s genome is not plausible in the immediate future, but genetic technology becomes cheaper and more advanced each year. Soon enough, we may be able to group research participants accurately by their genetic variants rather than by race. In many breast cancer trials, it is now mandatory to save biopsy tissue from patients in case researchers want to revisit the trial and analyze the results by genetics. If we similarly use biobanks in health disparities research, we may find a research method that does not jeopardize the very individuals it originally sought to empower.
needles & hay: THE AI SAFETY DEBATE
CONNOR FLEXMAN ‘16
ARTWORK & LAYOUT Mae Heitmann ‘19, Kaley Brauer ‘17
EDITOR Mark Sikov
It is hard to state the truth, the whole truth, and nothing but the truth. That is the lesson being learned, not in a court of law, but in recently flaring debates over the risks of Artificial General Intelligence, or AGI. One side says that AGI will be our savior, giving us the technological capacity to cure cancer, manufacture food at will, and turn Earth into a paradise. The other camp says that there is a large likelihood it will destroy us, inadvertently or not. Many hold views somewhere in the middle. Thus, it may be rather important to get to the bottom of this eternal life-or-death question. AGI is different from Artificial Intelligence (or AI), the focus of most current academic research. AI includes such programs as Siri, Watson (the Jeopardy contestant), and chess-master Deep Blue, as well as things like speech recognition and directions in Google Maps. These algorithms use a variety of tools, like decision trees, machine learning, or brute-force search. Many of these programs are termed “intelligent” partly because of their ability to outperform humans (Deep Blue, for example), but they can only do so in a narrow domain. In contrast, AGI would use similar learning methods to reason generally about the world. Models of AGI have been proposed, but the real thing is far
in the future; we need either a much better conceptual framework or much more powerful computers in order to attain it, and probably both. Surveys of experts cluster nicely around specific dates for the first human-level AGI, which is a good benchmark for what we care about. They estimate a 10% chance of AGI by 2025, 50% by 2045, and 90% by 2080 [1]. While risks from AI have been encountered before, the larger risks posed to humanity by AGI are only now beginning to get widespread public attention. In 2014, Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom of the Future of Humanity Institute, made the New York Times bestseller list. In January 2015, the similarly-inspired Future of Life Institute published a much-circulated open letter calling for greater research into “maximizing the societal benefit of AI” while “avoiding potential pitfalls” [2]. This set off a wide-ranging debate that included many AI researchers and many more members of the media and public. Unfortunately, as with many contentious topics, the arguments have largely been aimed at strawmen [3,4]. The public debate needs to be grounded in true academic discourse, which requires understanding the theoretical issues as proposed by leading scholars.
Current AI has its own risks, which can inform us about failure modes for AGI. The root of the risk in most AI is that we don’t know explicitly what an algorithm will do, almost by the definition of artificial intelligence [5]. In typical computer programs, programmers write procedural code that tells the computer exactly what action to take, so it’s fairly easy to peruse the source file and determine that action. AI is more opaque; this is why Deep Blue could beat any human at chess, including its creators. Instead of specifying which moves it should make and when, the creators simply specified how it should evaluate potential moves, and it could then search a move further ahead and with greater speed than they ever could. The canonical example of AI gone wrong is the role of seemingly safe trading algorithms at financial firms in the flash crash of 2010, in which many algorithms began selling securities en masse by the millisecond [6]. Unfortunately, this isn’t the only time AI has acted in a manner that we didn’t predict and could only figure out in retrospect. It is common for other methods of generating AI, like machine learning or evolutionary selection, to “outwit” their creators. One military algorithm trained to recognize tanks in surveillance photos
initially behaved extraordinarily well, even generalizing from the training set to new photos from the same batch. However, before implementation, the developers ran a final test with new data from a different day. The algorithm was entirely unable to recognize the tanks. All of the pictures of tanks used in training had been taken on sunny days, while pictures without tanks had been taken on cloudy days. The algorithm had creatively used this as its criterion for discerning tanks [7]. Other similar examples abound [8]. Not all failures are quickly discovered, either; like much software, many algorithms work fine until they malfunction when presented with an edge case.
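This failure mode, a model latching onto a spurious correlate of its label, is easy to reproduce. The sketch below is a hypothetical analogue of the tank anecdote, not data from the cited study: each “photo” is reduced to a noisy tank-shape feature and a brightness feature, and brightness tracks the label perfectly in training but not at test time:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, tanks_are_sunny):
    """Each 'photo' is two numbers: a noisy tank-shape score and a brightness score."""
    tank = rng.integers(0, 2, n)                 # 1 = tank present
    shape = tank + rng.normal(0, 1.5, n)         # weak, noisy signal for the real object
    if tanks_are_sunny:
        bright = tank + rng.normal(0, 0.1, n)    # training days: sunshine tracks the label
    else:
        bright = rng.normal(0.5, 0.1, n)         # test day: weather no longer tracks tanks
    return np.column_stack([shape, bright]), tank

X_train, y_train = make_photos(500, tanks_are_sunny=True)
X_test, y_test = make_photos(500, tanks_are_sunny=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near perfect: brightness is an easy shortcut
print("test accuracy:", clf.score(X_test, y_test))     # falls sharply once the shortcut breaks
print("weights [shape, brightness]:", clf.coef_[0])    # most weight lands on brightness
```

The learned weights make the pathology visible: the model invests almost everything in the feature that happened to separate the training data, just as the tank detector invested in the weather.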
Where narrow AI has real examples of these failures, AGI risk is purely theoretical but has the same problems, exacerbated in multiple ways. An AGI may be significantly stronger than us, leading any malfunction to become catastrophic. Where does all this power come from?
Many experts think that human-level artificial intelligence will be possible by the end of this century. If that timeline holds, at some point a computer will become as competent at computer programming as a human. Shortly thereafter, it will be better than most human programmers, and at that point it can begin to program itself. This will make it even better at thinking and programming, and the cycle will continue until the AI reaches some physical boundary on intelligence that slows its growth [9]. Unless this happens early in its development, the AI may become vastly smarter than humans in a relatively short period of time, a condition referred to as “superintelligence.” Naysayers contend that an AI may take a long time to become much smarter than humans, either because intelligence is difficult to develop or because it simply turns out to require a giant resource input [10]. For now, either argument appears justifiable.
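The feedback loop sketched in this paragraph is sometimes modeled, very crudely, as capability growth proportional to current capability, damped by a physical ceiling. The recurrence below is purely illustrative; the constants are arbitrary assumptions, not estimates from the literature:

```python
# Toy 'intelligence explosion' dynamics: growth proportional to capability I,
# damped as I approaches a hypothetical physical bound K.
K = 1000.0   # assumed hard ceiling (hardware, energy, physics)
r = 0.5      # assumed self-improvement rate per generation
I = 1.0      # start at roughly human level
for step in range(40):
    I += r * I * (1 - I / K)   # logistic recurrence: explosive early, flat near K
    if step % 5 == 0:
        print(f"generation {step:2d}: capability = {I:7.1f}")
```

Whether the real curve looks explosive or gradual depends entirely on the values of r and K, which is exactly the disagreement between the two camps described above.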
But what of an AGI’s morals? Some believe that the outcome may be bad by default, unless careful planners intervene. Researchers have put much effort into studying the Value Alignment Problem, the difficulty of programming computer morality to resemble human morality [11]. To state it simply, we don’t have a good form of ethics for an AGI to use. Human values are complex and fragile; there are many necessary components of a fulfilling life, and if one of them is lost, we can be exquisitely unhappy [12]. For example, a pampered life without challenge, while good in some naïve ethical theories, would be considered by many to be a negative lasting outcome for humanity, as boredom would destroy us. Similarly, a reality in which we were dosed with drugs to make us happy but denied the option to connect with other humans might appear good to the AGI and yet fail our most basic intuitions of value. If you are in favor of this “wireheading” for happiness, that’s fine too; other examples abound. The exact narrative is unimportant, only the idea that it’s very easy to get almost everything right and still fail. Not only is it hard to specify all the important aspects, but missing one variable could have dramatic consequences. As Stuart Russell, co-author of the standard AI textbook, says, “A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable” [4].
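Russell’s point can be seen in a toy optimization. In this hypothetical sketch, two variables share a fixed resource budget but only one appears in the objective; a standard linear-programming solver then pushes the ignored variable to an extreme:

```python
from scipy.optimize import linprog

# Variables: x = [power_to_factory, power_to_homes]  (invented toy model)
# Objective: maximize paperclip output = power_to_factory
#            (linprog minimizes, so we negate the coefficient).
c = [-1, 0]              # power_to_homes never appears in the objective
A_eq = [[1, 1]]          # both uses draw on one fixed power budget
b_eq = [100]
bounds = [(0, 100), (0, 100)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # -> [100., 0.]: the variable nobody rewarded or constrained is sacrificed
```

An AGI’s situation is the n-variable version of this: anything not explicitly represented in its objective is fair game to be driven to whatever extreme best serves the goals that are represented.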
Many people intuitively think that an AGI could not be so dumb as to implement a world like the drug-utopia above. Unfortunately, Nick Bostrom argues that agents with any combination of intelligence and morality can exist, a proposition referred to as the Orthogonality Thesis [13]. This theory concludes that having high intelligence does not necessarily imply high morality. While evolution has given rise to the human version of morality, selecting for those who can work well in groups and enable survival, an intelligent computer program we create would not have morals unless we programmed them in. With the arrival of superintelligence, the intelligence that led us to the top of the food chain will have been bested, and such an AGI could manipulate us almost at will. Then the question of value alignment becomes crucial: a randomly picked AI has no reason to share our values about the world. Aside from being dangerously ambivalent toward human morality, an AGI will in many circumstances be in direct conflict with it. Bostrom’s Instrumental Convergence Thesis [13] states that any AGI attempting to optimize for some end goal (utility function) will develop intermediate goals that help achieve the end goal; some of these may be dangerous, such as self-improvement, preservation of its initial end goals, self-protection, and resource acquisition [13,14]. To see the danger, consider an agent hellbent on self-protection (or anything else). Unless we solve the Value Alignment Problem, an AGI may go to any lengths to satisfy this goal, up to and including eliminating possible obstacles (like humans). In theory, any arbitrary goal could give rise to this behavior, even
some “satisficing” goal, such as making ten paper clips. In this example, the AGI might obsessively count them to make sure the number was exactly right, using any methods allowed by its morality to obtain more resources so that it could defend itself and count them more [15]. Again, the specific narrative isn’t important, only that an AGI doesn’t have “common sense” and may do things very differently from what we intend. Preserving its own utility function is an especially worrisome goal, for it will cause the AGI to directly impede any human attempt to change its goals [16]. Thus we may only have one shot at specifying these goals in the initial programming.
While some argue that these scenarios could be avoided with proper time to test the AGI, if it becomes a superintelligence quickly, then this is certainly not an option. Another solution is to allow companies to create limited AGIs but control them in some way. These are commonly referred to as “boxing” techniques, or limited implementations such as building only an “oracle” that answers questions. However, despite a large fraction of AGI safety work focusing on these methods, it turns out to be vastly difficult to contain a system that is more intelligent than yourself, even if it can only answer your questions [13]. One could argue that making AGI provably safe is impossible, so we should give up and hope for the best, but surely, in the face of human extinction, it is silly to throw up your hands and sit idly by [17]. Further, the Machine Intelligence Research Institute has made solid mathematical progress in recent years, and while an ember of hope is alive, we should fan the flames [18].
We should also mention the great opportunities an AGI would offer us, with solutions to many of the world’s scientific and medical problems attainable, and possibly a utopian future [9]. None of this is meant to promote fear of AI research; we shouldn’t (and probably couldn’t) stop any human from ever building an AGI. The pro-AGI side has its eyes open too. The only argument here is that we should be extremely careful about safety as well.
While we would like to believe that AGI cannot be an impediment to humanity’s future, arguments against preparing for that eventuality are flawed. One can point to many uncertainties, argue that we won’t get AGI in any given time frame, or argue that it won’t be able to ascend quickly to superintelligence; but lowering the probability of catastrophe is not enough, and these arguments ignore the disastrous consequences in the event that they are wrong.
The AGI-safety leaders have staked their claim: Bostrom’s book, FLI’s open letter, Elon Musk’s Twitter comments, and Stephen Hawking’s public statements all advocate for more AI safety research [2,13,19,20]. These have catalyzed intense debate in this arena between laypersons and researchers over the past year. Unfortunately, many of the responders in the public sphere
seem not to have read Bostrom’s arguments. Edge.org hosted a discussion of the issue by many prominent thinkers, but unfortunately, as Stuart Russell noted, “Many of the contributors to this conversation seem to be responding to [other poor] arguments and ignoring the more substantial arguments proposed by Omohundro, Bostrom, and others” [4, 21]. Reading through the responses: some say biotech is more worrisome, some say narrow AI is more worrisome, some say AGI isn’t coming for a hundred years, some say current AI can’t “think”, some say Moore’s Law won’t hold, some say progress is slow, some say intelligence isn’t an algorithm, some say doomsday prophecies are all silly, some say artificial intelligence doesn’t have emotions, and so on. These arguments all have one thing in common: they do not shoot down the AGI safety argument at all. From those who entirely miss what is being discussed (“emotions”, “not an algorithm”), to those who simply attack strawmen (“doomsday”, many irrelevant metaphors), to those who raise questions already treated (“it’s far off”, “it isn’t that bad yet”), it is highly unfortunate that so many jump to confidently dismiss the issue without even directing their thoughts toward the right argument. This is a textbook example of communication failure in science, and it may prove an important factor in our future. The pro-AGI camp has the important point that AGI could be glorious, and the AGI-safety camp has the important point that we must tread lightly. These positions are not at all incompatible, but much of the discourse makes it look as if they are. Of course, the AGI-safety camp also had many less rigorous arguments put forth from the public sphere and other futurists, some of whom go too far in arguing against AGI [22]. But opponents argued only against this weakest of the evidence put forth, and concluded that the whole worry was invalid. This is emblematic of the tendency to strawman, or engage only the points that seem weakest. The rationality community has for a few years now advocated “steelmanning” instead: engaging with the very strongest points of one’s opponents, and even attempting to find stronger points for their side, so as to find the truth of the matter rather than falsely portray a debate to make your own side look stronger than it is [23]. It’s true that some statements from Musk and others may promote unnecessary fear-mongering and a Luddite stance among the populace. Some were undoubtedly meant to be inflammatory merely to provoke discussion of this important issue, and scaring people away from the potentially vast benefits of artificial intelligence is probably wrong. But it’s also wrong to caricature Bostrom’s case for more investigation into safety as science fiction. Maybe as this field matures, the strawman’s hay will gradually sift away, the steel needles of truth finally glinting in the sun.
[1] AI Impacts. A Summary of AI Surveys [Internet]. 2015 Jan 1 [cited 2015 May 7]. Available from: http://aiimpacts.org/a-summary-of-ai-surveys/
[2] Future of Life Institute. Research priorities for robust and beneficial artificial intelligence: an open letter [Internet]. 2015 [cited 2015 May 7]. Available from: http://futureoflife.org/misc/open_letter
[3] Bensinger R. Brooks and Searle on AI volition and timelines. Machine Intelligence Research Institute; 2015 Jan 8 [cited 2015 May 7]. Available from: https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/
[4] Russell S. Of myths and moonshine [Internet]. Edge Foundation; 2014 Nov 11 [cited 2015 May 7]. Available from: http://edge.org/conversation/the-myth-of-ai#26015
[5] Harris D. Why you can’t program intelligent robots, but you can train them. 2015 Mar 2 [cited 2015 May 7]. Available from: https://gigaom.com/2015/03/02/you-cant-program-intelligent-robots-but-you-can-train-them/
[6] Newitz A. The unexpected places where artificial intelligence will emerge [Internet]. 2013 Sep 20 [cited 2015 May 7]. Available from: http://io9.com/the-unexpected-ways-that-artificial-intelligence-might-1355337058
[7] Dreyfus H L, Dreyfus S E. What artificial experts can and cannot do. AI & Soc [Internet]. 1992 Jan-Mar [cited 2015 May 7];6(1):18-26. Available from: http://link.springer.com/article/10.1007%2FBF02472766
[8] Bellows A. On the origins of circuits [Internet]. 2007 Jun 27 [cited 2015 May 7]. Available from: http://www.damninteresting.com/on-the-origin-of-circuits/
[9] Yudkowsky E. Artificial Intelligence as a positive and negative factor in global risk [Internet]. In: Bostrom N, Cirkovic M M, editors. Global Catastrophic Risks. 2008 [cited 2015 May 7]. Available from: https://intelligence.org/files/AIPosNegFactor.pdf
[10] Hanson R. I still don’t get foom [Internet]. 2014 [cited 2015 May 7]. Available from: http://www.overcomingbias.com/2014/07/30855.html
[11] Soares N, Fallenstein B. Aligning superintelligence with human interests: a technical research agenda [Internet]. Machine Intelligence Research Institute; 2014 [cited 2015 May 7]. Available from: https://intelligence.org/files/TechnicalAgenda.pdf
[12] Yudkowsky E. Complex value systems are required to realize valuable futures. In: Schmidhuber J, Thórisson K R, Looks M, editors. Proceedings of Artificial General Intelligence: 4th International Conference, AGI; 2011 Aug 3-6; Mountain View, CA. Berlin: Springer; Lecture Notes in Computer Science Volume 6830; c2011. p. 388-93. doi:10.1007/978-3-642-22887-2_48.
[13] Bostrom N. The superintelligent will [Internet]. Minds and Machines. 2012;22(2):71-85. Available from: http://www.nickbostrom.com/superintelligentwill.pdf
[14] Omohundro S. The basic AI drives. In: Wang P, Goertzel B, Franklin S, editors. Proceedings of the First AGI Conference; 2008 Feb. Amsterdam: IOS Press; Frontiers in Artificial Intelligence and Applications, Vol. 171; c2008. p. 483-92. Available from: https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
[15] Bostrom N. Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press; 2014.
[16] Soares N, Fallenstein B, Yudkowsky E, Armstrong S. Corrigibility [Internet]. Machine Intelligence Research Institute; 2014 [cited 2015 May 7]. Available from: https://intelligence.org/files/Corrigibility.pdf
[17] Goertzel B. The Singularity Institute’s Scary Idea (and why I don’t buy it). 2010 Oct 29 [cited 2015 May 7]. Available from: http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html
[18] Machine Intelligence Research Institute. All MIRI publications. 2015 [cited 2015 May 7]. Available from: https://intelligence.org/all-publications/#books
[19] Gibbs S. Elon Musk: artificial intelligence is our biggest existential threat. The Guardian; 2014 Oct 27 [cited 2015 May 7]. Available from: http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
[20] Cellan-Jones R. Stephen Hawking warns artificial intelligence could end mankind [Internet]. BBC; 2014 Dec 2 [cited 2015 May 7]. Available from: http://www.bbc.com/news/technology-30290540
[21] Lanier J. The myth of AI [Internet]. Edge Foundation; 2014 Nov 11 [cited 2015 May 7]. Available from: http://edge.org/conversation/the-myth-of-ai
[22] Havens J. You should be afraid of artificial intelligence. Mashable; 2013 Aug 3 [cited 2015 May 7]. Available from: http://mashable.com/2013/08/03/artificial-intelligence-fear/
[23] Messinger C. Knocking down a steel man: how to argue better. 2012 Dec 7 [cited 2015 May 7]. Available from: https://themerelyreal.wordpress.com/2012/12/07/steelmanning/
MEDITATION: THE NEW ANCIENT MEDICINE
KARINE LIU ‘18
When you hear the word “meditation,” perhaps your mind wanders thousands of years and thousands of miles to a time and land unknown to you. Perhaps you conjure images of robed monks, sitting cross-legged in ancient temples tucked away in the deep mountainous regions of Asia. Or perhaps you think of the plethora of links on your Facebook newsfeed leading to blog posts raving over how just two minutes of meditation a day can heal all illnesses and transform your life.
Or maybe you’re simply confused. In recent years, the popularity of meditation has risen astronomically; yet amid the fanfare, we may find ourselves asking: where is the evidence for all of these claims? By examining the history of meditation, both in ancient times and in the modern day, we may come to find that meditation is far more complex than the typical health fad that can be reduced to a 140-character tweet.
ARTWORK & LAYOUT Kaley Brauer ‘17
EDITOR Elena Weissmann ‘17
A Brief History of Meditation

The illustrious history of meditation begins in ancient India. The Hindu text Yoga sutra, by the physician Patañjali, gives us a glimpse into the ancient meditative practices that today serve as the foundation of meditation. As described in his text, meditation focuses upon the achievement of samadhi, a Sanskrit term with a literal meaning of “putting together” [1]. It refers to “an empty state of pure consciousness” and is the ultimate goal of meditation [2]. The teachings of Patañjali also colored the beginnings of Buddhist meditation, a distinct tradition from which stem several modern branches of meditation. Siddhartha Gotama sought a state higher than samadhi. That state was called nibbana, and it could only be reached if active insight and recognition (as opposed to the nothingness of the Yoga sutra subtype) were maintained during meditation.
This active use of the mind has acquired another name in today’s lexicon: “mindfulness.” As described by Jon Kabat-Zinn, a Professor Emeritus at the University of Massachusetts Medical School and a leading scientist in the research of meditation, mindfulness refers to the “non-elaborative, non-judgmental awareness” of the present moment. It consists of three key components:

1. Careful attention to present experience
2. Recognition of the transient nature of such experience
3. Lack of an emotional reaction to such experience [3]

Achieving mindfulness can be done through two types of technique based on the Buddhist meditation tradition discussed before: focused attention (samatha) and open monitoring (vipassana). Samatha focuses on bringing the
concentration of the mind to a single object or point for a period of time. Conversely, vipassana draws upon the non-directed recognition of events as they happen in passing. Both require a certain discipline of the mind fundamentally unique to meditation. The idea of conducting such mental acrobatics then provokes the question: how is the brain affected? For the past several decades, meditation researchers have been trying to answer this crucial question. And while this inquiry has remained essentially the same in its focus, the researchers’ modes of investigation have changed considerably, incorporating varying techniques from as simple patient observation to shockingly vivid brain imaging.
From the Outside…

Prior to modern neurotechnology, research on meditation was conducted through clinical studies based largely on observations of human participants. Of the preliminary studies, Kabat-Zinn’s work in the 1980s was considered revolutionary: it was the first to demonstrate the use of meditation as a therapy for chronic pain, a common yet poorly understood condition [3]. In his experiment, a control group of chronic pain patients received standard treatments such as physical therapy or analgesic/antidepressant regimens, while the experimental subjects underwent an intensive ten-week “Mindfulness-Based Stress Reduction” (MBSR) program [4]. Each week included a single two-hour class on the various forms of meditation, along with at least 45 minutes of meditation per day, six days per week.
The effects of such a program were substantial. The experimental patients reported large reductions in six different aspects of their pain, ranging from present-moment pain to psychological symptoms of pain such as anxiety and depression. What’s more is the surprising longevity of these reductions: improved pain conditions on five of six indices were maintained from 2.5 to 7 months after completion of the program, with continued meditation. The potential for meditation as a form of alleviation for chronic pain was clearly shown, especially when compared to the meager improvements in pain measures for the control patients.

Of course, such great improvement could seem questionable at first, especially with the possibility of a placebo effect. However, given that the average chronicity, or period of continual pain, was eight years for the patient population, all of whom had previously received extensive medical treatment without success, the likelihood of a simple placebo effect was considered minute. Additionally, the negligible improvement in the control group, whose members knowingly received the standard treatment and were thus equally vulnerable to a placebo effect, further minimizes the probability of such an effect playing a large role. There is, however, a limit to analysis based on patient report and observation: the exact mechanism through which meditation produces such healthful outcomes remains unclear. The plot thickens as we consider the reports of some MBSR patients who claimed their emotional relationships with pain, including the fear and anxiety arising from it, had changed. How this compares to actual physical pain alleviation is still unknown. Regardless, Kabat-Zinn’s seminal work showed that meditation can lessen the burden of pain both physically and emotionally, opening the door to the studies of meditation that would follow.
Specifically, Kabat-Zinn’s MBSR program was used as a model in a study of patients suffering from fibromyalgia, a condition known for chronic, extensive pain. After completion of the program, there was a marked improvement in patients’ engagement with pain-related threat words, a commonly used measure of emotional reaction and thus tolerance to pain [3]. Similarly, scientist Eric Garland’s work incorporated the MBSR program into an examination of meditation’s effects upon irritable bowel syndrome (IBS). The conclusion echoed Kabat-Zinn’s: meditation was found to reduce the pain from IBS symptoms and to improve the quality of life for these patients [5]. Patients experienced diminished anxiety and emotional reactions to pain symptoms, which is critical in the treatment of chronic pain.

Since meditation has a significant effect on the emotional aspect of pain, perhaps it could benefit even more nebulous and often chronic conditions: mental disorders. Could meditation heal a sickness contained within the brain? Well known for its unresponsiveness to traditional therapies, depression is one such illness often targeted by researchers. One standard treatment, cognitive behavioral therapy, was combined with the principles of meditation to form mindfulness-based cognitive therapy (MBCT). Educating patients about their depression while also introducing them to the ideas of mindfulness and meditation was a novel, interdisciplinary approach to improving the quality of their lives.

Once again, the addition of meditation brought relief to many patients suffering from depression. Several studies showed that patients who had previously suffered three or more major depressive episodes had a reduced risk of another major depressive episode for a full year following participation in MBCT, in comparison to patients who received only cognitive behavioral therapy [5]. Yet some mystery remains, as clinical outcomes have been mixed in other populations. In particular, when looking at meditation as a potential treatment for depression arising from bipolar disorder, the palliative effects are less robustly substantiated. Moreover, MBCT has not proven as effective for anxiety disorders, specifically generalized anxiety disorder. Interestingly enough, MBCT has been formally accepted as a treatment for post-traumatic stress disorder (PTSD) among military veterans, who have reported significant reductions in the severity of their anxiety. This spectrum of responses to meditation may seem puzzling at first, but the differing neural structures and neural activity associated with each mental disorder may explain the mixed results. Peeking into the intricacies of arguably our most complex organ has only become possible with recent improvements in neurotechnology, which have revolutionized how we view meditation.
Looking In…

Researchers were quick to make use of these technologies. A 2005 study used “magnetization-prepared rapid gradient echo” imaging to investigate the Buddhist meditation technique of vipassana and its effects on brain structure. Comparing experienced practitioners of this tradition against those without experience revealed an increase in cortical thickness among the practitioners [6]. Thicker brain tissue is thought to indicate a greater number of neurons and thus greater cognitive abilities. The scientists of this study also postulated that meditation may prevent age-related thinning of the cortex, as the average thickness of Brodmann area 9/10, an area associated with emotional cognition, was similar between 40-50 year old meditation practitioners and the 20-30 year old meditators and controls. To say that meditation could lessen the inevitable consequences of aging is undoubtedly controversial and, as such, motivating to researchers.

The findings of this study deepen when we examine the functions of the specific areas of the cortex that grew in thickness. Notably, all were contained in the right hemisphere of the brain, the side essential for sustained attention. Further analysis revealed that the right anterior insula, a portion of the cortex related to bodily attention and visceral awareness, as well as Brodmann area 9/10, had both grown in thickness. Such changes, especially in these specific areas, are not as unlikely as they may seem: past research on what is known as cortical plasticity has found that tasks requiring attention to a specific stimulus produce profound alterations in the related areas of cortex [7]. Knowing that these brain areas are associated with awareness and emotional cognition, two mental states integral to meditation, their growth in cortical thickness suggests enhanced cognition and cognitive processing.

Furthermore, the brain has been shown to be altered by meditation via changes in its folding, which produces its characteristic grooves and ridges. The pattern and degree of such folding, known as gyrification, was measured using magnetic resonance imaging (MRI) for two groups: regular meditation practitioners and those without any experience [8]. While pronounced differences existed in several areas of the brain, the most pronounced were within the previously mentioned right anterior insula. The complexity of gyrification was also found to correlate positively with the number of years meditation had been practiced. Folding correlates positively with the number of neurons in a cortical structure; once again, we see a likely enhancement in neural processing and function. So while the effects of meditation were initially lauded because of patient-reported improvements, physical images demonstrating actual brain alterations have bolstered the case for meditation’s likely benefits.
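To make the folding measure concrete: the classic two-dimensional gyrification index compares the length of the fully folded (pial) contour of a cortical slice with the length of a smooth outer contour enclosing it. The study cited above used a more sophisticated three-dimensional local measure, so the Python sketch below, with hypothetical contour arrays, is only an illustration of the underlying idea.

```python
import numpy as np

def contour_length(points):
    """Perimeter of a closed 2-D contour given as an (N, 2) array of points."""
    closed = np.vstack([points, points[:1]])        # repeat first point to close the loop
    return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()

def gyrification_index(pial_contour, outer_contour):
    """Classic 2-D gyrification index: folded (pial) contour length
    divided by the length of a smooth outer contour enclosing it."""
    return contour_length(pial_contour) / contour_length(outer_contour)
```

A perfectly smooth cortex would score 1.0; deeper and more numerous folds push the ratio higher.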
The Reach of Meditation

The road ahead is not a simple one. Several factors have prevented the inclusion of meditation in the standard protocol of care. Primarily, confusion among researchers over exactly what defines “meditation” and its various subtypes is a huge obstacle. Without clear delineations of which specific type of meditation is used, researchers across the field have difficulty sharing and comparing their findings. Additionally, the current literature on meditation and its health effects is overwhelmingly cross-sectional, or snapshot; such short-term studies cannot bring much causality to light. Long-term studies of the effects of meditation on the brain would help substantiate a potential cause-and-effect relationship.

Finally, even with the neurotechnology available to us today, there is an insatiable need for advancement. Providing researchers with the devices necessary for high-resolution spatial imaging of cortex structure will catalyze the next new insights into meditation. And while the verdict isn’t clear on whether or not meditation can radically change lives, this technology will expedite the work that needs to be done. Elucidating the numerous ways that meditation affects the brain will take time, but with ongoing improvements in neural imaging technology, the discovery of an enlightening as well as healing force in neuroscience may be waiting for us.
[1] Tomasino B, Chiesa A, Fabbro F. Disentangling the neural mechanisms involved in Hinduism- and Buddhism-related meditations. Brain and Cognition. 2014;90:32–40.
[2] Mishra R. Fundamentals of yoga. Houston: The Julian Press; 1959.
[3] Zeidan F, Grant JA, Brown CA, McHaffie JG, Coghill RC. Mindfulness meditation-related pain relief: evidence for unique brain mechanisms in the regulation of pain.
[4] Kabat-Zinn J, Lipworth L, Burney R. The clinical use of mindfulness meditation for the self-regulation of chronic pain. Journal of Behavioral Medicine. 1985;8(2):163–90.
[5] Metcalf CA, Dimidjian S. Extensions and mechanisms of mindfulness-based cognitive therapy: a review of the evidence. Australian Psychologist. 2014;49(5):271–9.
[6] Lazar SW, Kerr CE, Wasserman RH, Gray JR, Greve DN, Treadway MT, et al. Meditation experience is associated with increased cortical thickness. Neuroreport. 2005;16(17):1893.
[7] Merzenich MM, DeCharms RC. Neural representations, experience and change. Boston: MIT Press; 1996.
[8] Luders E, Kurth F, Mayer EA, Toga AW, Narr KL, Gaser C. The unique brain anatomy of meditation practitioners: alterations in cortical gyrification. Frontiers in Human Neuroscience. 2012;6 [cited 2014 Oct 10]. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3289949/
Nebula Rise // Michelle Miller ’18
student art spotlight
FISH FARMING SUSTAINABILITY
SHANNON MCCARTHY ’15.5
ARTWORK & LAYOUT Kaley Brauer ’17, Helen Situ ’19
EDITOR Adam Horowitz ’16
At farmers’ markets throughout Boston, Red’s Best, a local fish vendor, offers fresh seafood caught within the region. Red’s accentuates its product with “traceability labels,” scannable tags that come with every fillet sold and give a short biography of the fisherman who caught it, where he fishes, and the name of his boat [9]. Red’s customers know their fisherman by name and picture, which lends a sense of quaintness to the experience of buying fish, a feeling that has largely been lost in an era of large supermarket chains.
Sadly, Red’s seems to be among the last of its kind as more and more fish are grown instead of caught, a trend that has begun to eliminate fishermen from the business. Farming fish seems inevitable, as the ocean has become unable to provide enough fish to feed the world’s growing population. The rise of fish farming has fish farmers and scientists alike tinkering with aquaculture practices to find a means of producing quality, healthy fish through environmentally sustainable methods. Avenues for improving fish farming have focused on both the environment of farming and the fish themselves: scientists have studied how the food that fish receive affects farming, and how genetic modifications to the fish themselves can make them more suitable for farming.

Broadly speaking, the term “fish farming,” also known as aquaculture, refers to the practice of raising fish in an enclosed container for the purpose of being sold [10]. According to Stephen Hall, the director general of the WorldFish Center, society needs fish farming
because “there just isn’t enough seafood in the seas” [4]. Due to both dwindling wild fish stocks and increased demand for seafood, the ocean cannot adequately supply the market [4]. The amount of wild fish caught globally has remained at 90 million tons per year since the 1990s, and 32% of the world’s fish populations have become “overexploited or depleted,” a statistic that is even higher for large fish like tuna [4,1]. Such depletion indicates that continuing to fish at this level is unsustainable [1]. Further, global seafood consumption has increased since the 1960s, so even the unsustainable supply of 90 million tons cannot keep up with growing demand [4]. Since diminishing fish stocks have prevented fishing companies from keeping up with demand, fish farms have filled in the gaps [1]. With this opportunity, the fish farming industry more than doubled between 2000 and 2012 [1]. For comparison, aquaculture produced one million tons of fish in 1950 and 53 million tons in 2008 [4]. This enormous growth has made fish farming the fastest growing area of food production over the past 60 years [4]. As Ned Daly of Seafood Choices Alliance, an organization that promotes sustainable methods of producing or catching fish, has said, “it’s no longer a question about whether aquaculture is something we should or shouldn’t embrace. It’s here. The question is how we’ll do it” [4].

While fish farms play an essential role in providing the world with fish, they are notorious for their harmful environmental impact. Shrimp farms have decimated mangrove forests, which previously provided food, shelter and other resources to the impoverished coastal populations of Asia in which they are situated [2]. Further, China, which accounts for 61% of the global aquaculture business, has overcrowded its farms to meet demand [4].
Such overcrowding engenders disease among the fish, which leads to pollution of local water [4]. This pollution creates “dead zones” along the coast, making it hard for other aquatic life to survive [4]. Disease in fish populations also requires farmers to use antibiotics that harm the environment and other fish populations [2]. For instance, a 2004 U.S. Geological Survey found a high occurrence of male bass with eggs in their testes [12]. The study linked this abnormal “intersex” condition to high levels of harmful chemicals in the water, including antibiotics, pesticides and arsenics, which alter a fish’s ability to regulate its hormone levels [12]. In addition to approved antibiotics, other, more dangerous antimicrobials are also found in fish populations. For example, traces of malachite green have been found in Chinese farmed fish, even though the Chinese government has banned this potential carcinogen [4,13]. Given both the inevitability of fish farming and the damaging effects of current methods, concern should be focused on identifying and overcoming the barriers to environmentally sustainable fish farming.

One of the largest barriers is feeding the fish in a way that doesn’t harm the environment but also doesn’t deprive the fish of their nutritional and health value to humans. This has become a problem because the most marketable fish, at least within the United States, are high on the aquatic food chain, which means they eat other fish [4]. Producing marketable fish therefore requires producing many unmarketable fish as their food [4]. The current solution is to give farmed fish feed pellets, conventionally made of ground-up “fishmeal and fish oil” [4,7]. To make fish farming more environmentally friendly, farmers are trying to decrease the percentage of their pellets that comes from fishmeal [4]. They have achieved this by increasing the percentage that comes from plants [4]. While this seems beneficial, changing this ratio has the potential to detrimentally change the nutrient content of the fish grown. Researchers from Wake Forest University have found that farmed tilapia, which ranks among the most highly consumed fish in America, has a significantly different nutritional content than wild tilapia [14]. Tilapia is one of the most easily farmed fish partly because it can survive on little fish protein and high levels of carbohydrates from vegetables, but feeding tilapia mostly carbohydrates increases the amount of omega-6 fatty acids in their diets, which translates to a higher level of omega-6 fatty acids for those who eat these fish [14]. Most of the health benefits of eating fish come from high levels of omega-3 fatty acids, yet with this carbohydrate-rich diet there are more omega-6 fatty acids and fewer omega-3s [14]. Therefore, those who eat farmed tilapia for the traditional benefits associated with omega-3s will not only miss out on those omega-3s but will actually consume a large quantity of omega-6 [14]. This contrast can be particularly detrimental considering that patients with heart disease are often encouraged to eat more omega-3s; a diet with more omega-6s could actually induce the very inflammation the diet was intended to reduce [14]. Fish farmers also tend to choose higher-energy feed pellets so that they can raise fish faster [7]. However, these pellets often produce fish with higher fat content, which is less healthy for consumers [7]. The increased fat also leaves farmed fish vulnerable to common “fat-soluble pollutants” that tend to travel up the food chain, like mercury and lead [7]. Overall, feeding fish in a sustainable way that still produces healthy fish remains one of the largest barriers to effective fish farming.
An aquaculture company called Verlasso, which raises salmon, believes that it has managed to manipulate fish feed to make farming more sustainable without compromising the health benefits of its fish. The company has decreased the amount of fishmeal in its feed, but instead of adding carbohydrates rich in omega-6, it has added a yeast abundant in omega-3s [16]. These fish, therefore, should not have the same adverse health effects as other farmed fish. The company claims that raising one pound of salmon traditionally requires raising four pounds of feeder fish, but its yeast-based feed allows it to grow one pound of salmon with one pound of feeder fish, reducing its dependence on the smaller fish by 75% [16]. Manipulating fish feed has thus proved a viable option for making fish farming more environmentally sustainable, producing healthy fish with less environmental impact. Companies like Verlasso are not necessarily the norm, but they provide proof that this system can produce fish that are safe to eat.
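The 75% figure follows directly from the two feed ratios the company quotes; a quick Python sketch, with the ratios hard-coded purely for illustration, makes the arithmetic explicit.

```python
def feed_dependence_reduction(old_lb_feeder, new_lb_feeder):
    """Percent reduction in feeder fish needed per pound of salmon."""
    return 100 * (old_lb_feeder - new_lb_feeder) / old_lb_feeder

# Four pounds of feeder fish per pound of salmon, down to one pound:
print(feed_dependence_reduction(4.0, 1.0))  # 75.0
```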
Another approach to making fish farming more environmentally sustainable is genetic modification. A company called AquaBounty aims to produce a breed of Atlantic salmon genetically engineered to grow twice as fast as its wild counterparts [4]. Labeled AquAdvantage Salmon, these fish have been given a gene from the Chinook salmon, a species that lives in colder northern waters [4]. Wild salmon usually gain weight rapidly in the spring and summer, when food is abundant, but more slowly during the winter. The Chinook gene activates a growth-promoting hormone year round, allowing AquAdvantage Salmon to reach market weight faster [4].

AquAdvantage Salmon would constitute the first genetically modified food animal to reach the marketplace, which has some people unnerved [4]. Some worry that if AquAdvantage Salmon escaped into the sea, they could outcompete wild species into extinction [4]. The company has responded that the salmon will be kept sterile and bred in captivity [4]. However, opponents remain afraid, as the sterilizing procedure only needs to be 95% effective, which would leave 5% of the fish fertile [5]. Furthermore, salmon have been known to escape from farms; there have been 2 million escaped salmon in the past 10 years from Scotland alone [5]. Critics remain skeptical that AquaBounty’s containment systems will ensure salmon do not escape [5].
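To see why the 95% figure worries critics, combine it, purely hypothetically, with an escape count on the scale of the Scottish figure above. The sketch below is illustrative arithmetic, not a projection for AquaBounty’s actual operations.

```python
def fertile_escapees(total_escapes, sterilization_effectiveness=0.95):
    """Escaped fish expected to remain fertile if sterilization succeeds
    only `sterilization_effectiveness` of the time."""
    return total_escapes * (1.0 - sterilization_effectiveness)

# Two million escapes at 95% sterilization still leaves 100,000 fertile fish:
print(fertile_escapees(2_000_000))  # 100000.0
```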
The concern that AquAdvantage Salmon could drive wild varieties towards extinction was supported by a recent study in which the two species were allowed to coexist [5]. The experiment found that the AquAdvantage Salmon ate the majority of the food, which led to smaller wild fish [5]. In terms of nutritional value, the U.S. charity Ocean Conservancy found several differences between AquAdvantage Salmon and other fish, including lower levels of omega-3, folic acid, zinc, magnesium and phosphorus, yet higher levels of niacin and vitamin B6 [5]. However, the FDA did not find these differences significant enough to be problematic [5]. While this difference in quality may be a disadvantage for AquAdvantage Salmon, it is analogous to the differences usually observed between farmed fish and wild varieties. For example, farmed salmon that are not genetically modified have lower levels of omega-3 [4]. Further, fillets from farmed salmon are actually gray; a chemical dye turns them pink [4]. In terms of tangible differences, then, AquaBounty’s salmon may be no worse than other farmed salmon when compared to wild species.

A final fear surrounding AquAdvantage Salmon concerns the genetic modifications themselves. A general mistrust shrouds conversations about genetic modification and genetically modified organisms (GMOs). AquAdvantage Salmon has been called “Frankenfish” by some of its opponents, a nickname that embodies the public’s pervasive skepticism towards GMOs [5]. One of the largest risks inherent in GMOs is that taking genes from one species and expressing them in another has the potential for unpredictable consequences [15]. The changes inherent in a GMO could also expose humans to new allergens and could transfer antibiotic-resistant genes to consumers [15]. However, GMO advocates counter that humans have been genetically modifying organisms for thousands of years through breeding, and that current technology simply expedites the process [5].

Placing the image of Frankenfish next to the smiling picture of the Red’s Best fisherman can make the future look bleak for fish-lovers. However, the shift from the romantic sea-faring fisherman to the scientifically regulated farmed fish appears inevitable. The battle has now become making farmed fish as environmentally sustainable as possible while maintaining an acceptable quality. The current solutions to the sustainability question (feeding fish more vegetables and genetically modifying fish to grow faster) come with advantages and harrowing disadvantages. Unfortunately, if these solutions are used incorrectly, they could potentially sink America’s fish industry to the low level of current land-animal farms, which are notorious for their lack of standards and terrifying conditions. To avoid a dismal fate, the fish farming industry will need continued scientific research to find the best, healthiest methods for raising fish, and governmental regulation to ensure that farms live up to these standards. With these measures, the farmed fishing industry will hopefully be able to provide fresh, healthy fish into the future, even if it isn’t the long-loved wild fish.
[1] Waite R, Phillips M, Brummett R. Sustainable fish farming: 5 strategies to get aquaculture growth right [Internet]. World Resources Institute; 2014 Jun 4. Available from: http://www.wri.org/blog/2014/06/sustainable-fish-farming-5-strategies-get-aquaculture-growth-right
[2] Donovan TW. 9 surprising fish farming facts [Internet]. Huffington Post; 2010 May 31. Available from: http://www.huffingtonpost.com/2010/03/31/9-surprising-fish-farming_n_518724.html
[3] Burstein A. Welcome to ArtisanFish [Internet]. ArtisanFish; 2010. Available from: http://www.artisanfish.com/
[4] The End of the Line. Time.
[5] The Frankenfish: GM super-salmon muscling its way to the plate. Daily Mail. Available from: http://www.dailymail.co.uk/news/article-2517137/The-Frankenfish-GM-super-salmon-muscling-way-plate.html
[6] Available from: http://news.nationalgeographic.com/news/special-features/2014/04/140430-other-white-meat-fish-aquaculture-cobia/
[7] Available from: http://www.farmedanddangerous.org/salmon-farming-problems/environmental-impacts/fish-feed/
[8] Available from: http://www.wcvb.com/chronicle/summer-jobs-boston-public-market/26835424#!baXHC0
[9] Available from: https://www.redsbest.com/shopreds/index.php/traceability?___store=default
[10] Available from: http://www.sciencedaily.com/articles/f/fish_farming.htm
[11] Available from: http://www.nytimes.com/2011/02/01/business/global/01fish.html
[12] A reconnaissance for emerging contaminants in the South Branch Potomac River, Cacapon River and Williams River Basins, West Virginia, April–October 2004.
[13] Negative effects of malachite green and possibilities of its replacement in the treatment of fish eggs and fish: a review.
[14] The content of favorable and unfavorable polyunsaturated fatty acids found in commonly eaten fish.
[15] Available from: http://www.nature.com/scitable/topicpage/genetically-modified-organisms-gmos-transgenic-crops-and-732
[16] Available from: http://www.verlasso.com/our-story/faq/
Facial Reconstruction, Solving Ancient Murders, and Virtual Shopping:
The Medical, Cultural, and Commercial Applications of Simulating Touch
LUCY VAN KLEUNEN ’17
ARTWORK & LAYOUT Georgianna Stoukides ’18, Kaley Brauer ’17
EDITOR Methma Udawatta ’16

As a modern people, we know that our senses can be easily influenced by illusion: we watch movies in which we see people transforming into hideous creatures and listen to explosions that never happened. We smell perfumes and eat foods chemically formulated to smell and taste like naturally occurring flavors. Advances in technology have largely relied on the development of better ways to trick the senses. How can we make it seem like two people talking from countries across the world are in the same room? How can we make video game players truly embrace the idea that they are in battle rather than sitting on their couch?

But what about tricking our sense of touch? We use touch throughout our daily lives to express our emotional connections to one another, work with tools, and understand the nuances of our physical environment. Touch is so fundamentally related to the way we experience reality that its virtual reproduction presents a unique challenge: as much as a fake fur coat can be made to look authentic, its pelt will never quite feel real. Yet that is what researchers in the field of ‘haptics’ are working hard to do. Their work is inspiring a class of smarter technology that integrates immersive sensation into traditional software. In the next decade, this integration could bring about revolutionary changes in arenas as diverse as medical practice,
archaeological research, and commercial enterprise.

Haptic perception is the process of recognizing virtual sensations or objects through the simulation of touch. Typically, a user of haptic technology experiences this simulation by holding a handle that vibrates in patterns determined by its embedded software. Some applications of this technology are probably familiar, such as a video game joystick vibrating to indicate a nearby explosion, or the steering handle in a flight simulator indicating the resistant force present while landing a plane. Amazingly, the same technology you are probably used to experiencing while playing arcade games is beginning to
revolutionize the field of medicine [1].

The connection between haptics and medicine is a natural one. For centuries, human-to-human touch has been an essential component of clinical practice. Through physical exams, experienced doctors can diagnose illnesses and determine the nuances of an individual case rather than relying on expensive and generalizing tests. Some physicians have lamented the way that technological advances have led to less actual human contact in current clinical practice and training [2]. The development of touch-based training systems that can simulate human contact could shift medical education to address this imbalance by embracing, rather than attempting to stem, rapid advances in medical technology. In some medical fields that deal with complex surgeries or rare conditions, hands-on practice for medical students is difficult to facilitate or unethical. Students often practice techniques on each other or on plastic models. Haptic training systems could provide a more realistic experience, and a greater variety of ages and conditions, than students could typically encounter using current techniques [1].

Haptic technology has proved useful not only in education, but also in the execution of minimally invasive surgeries. In these surgeries, surgical instruments are inserted into the body through several small incisions rather than a single large opening. Surgeons are able to view internal organs via small cameras inserted during the procedure. Compared to traditional “open” surgeries, patients often experience less pain and a shorter recovery time [3]. However, because of the surgery’s shielded nature and the minuteness of the instruments, surgeons receive less tactile feedback than in traditional surgery. Using haptic technology, a sensory array attached to the end of the instruments can send tactile information to the surgeon’s fingers so that he or she can feel the shape or hardness of internal tissue [4].

Applications of touch simulation have also proved useful in rehabilitation settings. After a stroke, spinal cord injury, or other neural impairment, patients must go through intensive therapy in order to regain motor control. To improve hand movement, they often perform repetitive tasks holding and interacting with small objects. Studies have shown that haptic systems in which patients are presented with a series of simulated tactile tasks are extremely effective at delivering the controlled, repetitive therapy these patients need. Along with helping a patient regain motor control more quickly, such devices can record data about how a patient improves at a task and track their progress [5]. Similarly, arm and leg amputees using prostheses that employ tactile feedback have been able to more quickly achieve natural control of their prosthetic devices [6].
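At the core of most force-feedback devices of this kind is a high-rate control loop, conventionally run near 1 kHz, that reads the tool position and pushes back when it penetrates a virtual surface. The sketch below shows the simplest penalty-based version of that idea in Python; the `device` object and its methods are hypothetical stand-ins, since real haptic SDKs each define their own API.

```python
import time

STIFFNESS = 800.0   # N/m: rendering stiffness of the virtual wall (assumed value)
WALL_Z = 0.0        # wall surface at z = 0; free space is z > 0
RATE_HZ = 1000      # haptic loops conventionally run near 1 kHz

def render_wall(device):
    """Penalty-based haptic rendering of a rigid wall: each cycle, read the
    stylus position and push back in proportion to penetration depth."""
    period = 1.0 / RATE_HZ
    while True:
        _, _, z = device.position()                    # hypothetical API: stylus position in meters
        depth = max(0.0, WALL_Z - z)                   # how far the stylus is inside the wall
        device.set_force(0.0, 0.0, STIFFNESS * depth)  # hypothetical API: restoring force in newtons
        time.sleep(period)                             # crude pacing; real SDKs use a scheduled callback
```

Stiffer virtual walls feel more convincing but demand faster, more stable loops, which is one reason haptic rendering is computationally demanding.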
Despite these useful applications, according to Ingrid Carlbom, a haptics researcher in the Computer Science department at the University of Uppsala in Sweden, the field of haptics is “truly still in its infancy.” Carlbom works primarily at the intersection of haptics and medicine. Her most recent work is a haptics-based system for planning head and neck surgeries [9]. Currently, most head and neck surgeons do not plan these surgeries themselves. Instead, they outsource surgery planning to other companies, which are often located in different countries. In some cases, the communication with a remote technician and the planning and manufacturing of a plate for use during surgery can take around two weeks. This makes it impossible to plan surgeries intensively for trauma patients who must be operated on within a few days. Using the haptics-based system Carlbom is developing, however, a surgeon ideally “plans in an hour, prints saw guides and plates in-house, and then operates the next day” [7].

Pre-operative planning is extremely important because it allows surgeons to test various techniques and anticipate problems that could arise during surgery. In complex trauma cases with many bone fragments, proper planning is essential because a small error in alignment during the initial phases of the surgery could lead to a situation where the rest of the bone fragments cannot be fit back together. Most current surgical planning systems display scans of a fracture on a two-dimensional screen. This puts extra demands on the user, who must analyze the scans and translate them to the three-dimensional problem at hand.
3-D virtual reality system with haptic feedback in Carlbom’s lab. Image source: [8]
The system Carlbom is developing allows for faster planning because it simulates a three-dimensional environment, thereby mirroring the environment actually experienced by surgeons in the operating room [8]. In a recent paper released by Carlbom’s lab, a retrospective study of four complex facial surgeries showed that in at least one case, some of the difficulties in the surgery could have been anticipated beforehand using the haptic software, thereby reducing the duration of some of the riskiest stages of the operation [9].

The haptic component of the system helps enhance the similarity between the planning and surgical environments. Once users have uploaded scans of a head or neck fracture onto the system, they can view the scans from above and manipulate bone fragments via a haptic handle positioned at waist level. Users can move and rotate individual fragments or groups of fragments and experience the sensation of two segments coming into contact via the handle. Bone fragments can be connected manually by manipulating the fragments and interpreting their contact forces until they feel like the right fit, a task reconstructive surgeons are adept at. Fragments can also be joined using the system’s “Snap-to-fit” feature, where users indicate two surfaces and the system algorithmically determines their most stable fit. In some cases, such as compression fractures or those in which fragments are missing, there is not a clean break and the “Snap-to-fit” suggestions must be overridden manually. In this way, the expertise of man and the computational power of machine come together to plan a successful surgery [8]. By using a 3-D virtual reality system with haptic feedback rather than simply staring at a screen, users are given the impression that they are manipulating a group of physical objects in a shared space. It’s no surprise that this method of planning is extremely efficient: a senior surgeon given only 45 minutes of training was able to plan a complicated surgery in 22 minutes [8]. The difference between 22 minutes and two weeks could mean much less risk, and possibly years more of comfort for a patient, as a result of the quick response.
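The paper does not spell out how “Snap-to-fit” is implemented, but a standard building block for this kind of rigid alignment is the Kabsch algorithm: given paired points sampled from the two fracture surfaces, it finds the rotation and translation that bring them into least-squares agreement. A minimal NumPy sketch, assuming the point correspondences have already been found by some surface-matching step:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Kabsch algorithm: rotation R and translation t minimizing the
    least-squares error ||R @ p_i + t - q_i|| over paired points.

    P, Q: (N, 3) arrays of corresponding points on the two fragments."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice the hard part is establishing those correspondences on irregular fracture surfaces; iterative schemes such as ICP alternate between guessing correspondences and re-running an alignment step like this one.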
haptic systems, it can accurately convey the sensation of forces- such as two bone fragments coming into contact or catching a soccer ball. However, it cannot, for example, let a user actually “grip” a bone fragment or run their hands across the ball’s surface. According to Carlbom, the development of gloves or body suits that would be able to accurately convey the continuous texture and weight of a virtual object might be decades away. The computational burden alone of rendering such data is enormous [1,8]. Picture a simple scenario of a goalie standing on a field holding a soccer ball. The soccer ball in her hand has a certain weight and give based on its air level. Its surface is smooth and taut yet crossed with seams made of tiny ridges of string. Perhaps there are caked-on sections of mud on the ball or flecks of grass. At the same time, she is feeling wind or drops of rain on her hands. All of these tiny details add up to create a recognizable touch “image”. Conveying such data at sufficient resolution might take thousands of computations per second [9]. Thus, the advance of haptic technology is dependent on both an increase in computational power and the development of sufficient ways
to translate the data. Luckily, computational power has been increasing steadily for decades. Researchers have also already started developing hardware that could eventually translate haptic feedback across surfaces rather than at a single contact point. Many of these advances have come out of industry. One company, Dexta Robotics, has released a line of haptic exoskeleton gloves, available for preorder as of November 2014 for around $200 each. These ‘Dexmo’ gloves both capture hand motion and provide force feedback to the wearer [10]. Although robotics companies are making advances in haptic technology, the majority of publications on haptics released since the 1970s have come out of academia. The relationship between industry and university research nevertheless remains important, since haptic hardware is expensive to build and few final products are likely to be developed in a university setting [7].

Beyond the numerous and important applications of haptic technology in medicine, there are many interesting cultural applications. The developers of the Dexmo gloves have suggested using them for tasks ranging from robot manipulation and gaming to music production and virtual drawing.
Technology from Fujitsu Laboratories that can simulate smooth or rough surfaces on touchscreen devices. Image source: [13]

Another relevant application is in the field of archaeology. Archaeologists looking to fit together the broken fragments of an ancient clay plate face a fairly similar problem to that of surgeons reconstructing a fragmented skull. In fact, the Uppsala system has proven very effective for artifact reconstruction. Scans of ancient artifact fragments have been uploaded into the system and manipulated in the same way as bone fragments. The technology provides archaeologists a way to experiment with manipulating and connecting pieces of broken artifacts without directly handling the fragile pieces. The opportunity to feel force feedback between the pieces allows archaeologists who are experts in restoration to combine their knowledge with the power of the software in much the same way as surgeons.

Archaeologists working with ancient bone fragments have also been interested in haptic reconstructive software. One group of such researchers is studying the ruins of a fort in Öland, Sweden, dating back to the Migration Period (AD 400-550). During the initial site excavations, completed between 2011 and 2014, human skeletons with indications of trauma were found unburied in four of the fort’s houses. These skeletons were from the same era as various treasures and jewelry hoarded in the houses, suggesting the scene of a mass murder, perhaps an attempt to silence all the fort’s inhabitants rather than allow them to divulge the secrets of their treasure to an invading army [11]. The researchers in this investigation, which has received media coverage in Sweden due to its novelty and scale, were able to work with the haptics-based system to put together some of the skeletal fragments found at the ruins. Although the software is not ideally configured for this type of reconstruction, in which the bones have been significantly damaged by erosion and the environment, it is possible that with continued collaboration the technology could be modified to better cater to the needs of anthropologists in addition to surgeons [9].
‘Dexmo’ haptic exoskeleton gloves. Image source: [10]
The collaboration between computer scientists and archaeologists could eventually reach past the reconstructive phase to completely change interaction with ancient artifacts in both research and museum settings. Scholars could manipulate and virtually examine fragile objects that would otherwise remain locked up in temperature-controlled vaults. Using virtual systems and haptic gloves, museum visitors might be able to ‘feel’ the textures and weights of ancient artifacts. Allowing this level of interaction with objects typically kept behind glass could make the museum experience both more personal for the average visitor and more accessible for the visually impaired [12].

The issue of rendering texture has begun to be addressed by Fujitsu Laboratories, another industry pioneer in the field. In the past year, Fujitsu announced the development of a technology that can simulate smooth or rough surfaces on touchscreen devices. Ultrasonic vibrations on the tablet’s surface create a high-pressure layer of air between the screen and the user’s fingertip to simulate a smooth surface. Alternatively, the vibrations can provide the sensation of roughness by varying to simulate alternating areas of high and low friction. On a prototype tablet unveiled in March 2014 utilizing these technologies, users could strum a Japanese harp or feel the rough skin of an alligator. Fujitsu claims that the technology is efficient enough on mobile devices not to reduce battery life, and the company plans to commercialize it within the next fiscal year [13].
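As an illustration of the principle (not Fujitsu’s actual control scheme, which is not public), roughness can be faked by switching the vibration drive on and off as a function of finger position, so a sliding finger crosses alternating slick and grippy bands. A toy sketch:

```python
def vibration_drive(finger_x_mm, stripe_mm=2.0, full_drive=1.0):
    """Toy texture model: drive the ultrasonic actuator at full strength
    (low friction) over one stripe and not at all (normal friction) over
    the next, so alternating bands read as 'roughness' under a moving finger."""
    in_slick_stripe = int(finger_x_mm // stripe_mm) % 2 == 0
    return full_drive if in_slick_stripe else 0.0
```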
Since the 1970s, the number of papers on haptics published each year has grown exponentially [7]. With continued research by academics such as Carlbom and developments from companies such as Fujitsu Laboratories and Dexta Robotics, the field of haptics is gradually gaining prominence. Someday in the distant future, we might be able to virtually comb through a store’s catalog on a touchscreen kiosk and “feel” the textures of articles of clothing. We might be able to run our hands over the words etched in the Rosetta Stone, or hold the hand of a friend halfway across the globe. Doctors in New York might be able to virtually examine patients in the Congo. Simulation of touch is the next great step in the creation of realistic virtual experiences. Integrating touch feedback into technology will serve to bring us closer together and enhance sensory experiences in our continually technological lives.
[1] Kapoor S, Arora P, Kapoor V, Jayachandran M, Tiwari M. Haptics: touch-feedback technology widening the horizon of medicine. J Clin Diagn Res. 2014;8(3):294-299.
[2] Knox R. The fading art of the physical exam [Internet]. 2010 Sep 20 [cited 2014 Nov 30]. Available from: http://www.npr.org/templates/story/story.php?storyId=129931999
[3] American Society of Colon and Rectal Surgeons. Laparoscopic surgery: what is it? [Internet]. 2014 [cited 2014 Nov 30]. Available from: http://www.fascrs.org/patients/treatments_and_screenings/laparoscopic_surgery/
[4] Ottermo MV, Stavdahl O, Johansen TA. Design and performance of a prototype tactile shape display for minimally invasive surgery. Haptics-e: The Electronic Journal of Haptics Research. 2008;4(4).
[5] Metzger J, Lambercy O, Califfi A, Conti FM, Gassert R. Neurocognitive robot-assisted therapy of hand function. IEEE Transactions on Haptics. 2013 Dec 12 [cited 2014 Nov 30];7(2):140-149.
[6] Jones L. News from the field. IEEE Transactions on Haptics. 2014;7(2):271-272.
[7] Carlbom I. Interviewed by Lucy Van Kleunen. 2014 Nov 29 [cited 2014 Nov 30].
[8] Olsson P, Nysjö F, Hirsch J, Carlbom IB. A haptics-assisted cranio-maxillofacial surgery planning system for restoring skeletal anatomy in complex trauma cases. International Journal of Computer Assisted Radiology and Surgery. 2013 Nov [cited 2014 Sep 21];8(6):887-894.
[9] Carlbom I. Interviewed by Lucy Van Kleunen. 2014 Oct 25 [cited 2014 Nov 9].
[10] Dexta Robotics. Dexmo [Internet]. 2014 [cited 2014 Nov 9]. Available from: http://www.dextarobotics.com/products/Dexmo
[11] Museiarkeologi Sydost. Sandby Borg [Internet]. [cited 2014 Nov 9]. Available from: http://www.sandbyborg.se/in-english.html
[12] Brewster SA. Impact of haptic ‘touching’ technology on cultural applications. In: Hemsley J, Cappellini V, Stanke G, editors. Digital Applications for Cultural Heritage Institutions. Aldershot, England: Ashgate; 2005. pp. 273-284.
[13] Fujitsu Limited, Fujitsu Laboratories Ltd. Fujitsu develops prototype haptic sensory tablet [Internet]. 2014 Feb 24 [cited 2014 Nov 9]. Available from: http://www.fujitsu.com/global/about/resources/news/press-releases/2014/0224-01.html
Geometry in Bloom, Orion Nebula // Michelle Miller ’18
student art spotlight
THE TRUTH ABOUT THE TRUTH: THE CONCERNING RISE OF FAULTY SCIENCE
FRANCES CHEN ’17
ARTWORK & LAYOUT Ben Wilson ’19, Kaley Brauer ’17
Science is the search for truth about the natural world. Many people choose to become scientists out of curiosity about how the world works. Yet while the purpose of science is to seek the truth, there are issues within the scientific research community that hinder these truth-seeking efforts. Recent studies have shown that only 10-20% of the scientific publications tested were replicable. Since adhering to the scientific method, at least in principle, leads to replicable experiments, the fact that the vast majority are not replicable may indicate that bad scientific practice is common. Furthermore, in the past decade, the number of retractions in research journals has increased more than 15-fold, even though the number of papers published has risen only 44% [1]. This implies that bad science is becoming increasingly prevalent. The direction science is moving in is extremely concerning, as most research builds on previous research and is eventually used to develop products used by the public. As more researchers enter the field, research is also becoming more competitive. This competitive environment skews scientists’ incentives, which can lead to the publication of corrupt science and can ultimately harm the public. This article will discuss why this is happening, its consequences, and what we can do about it.

Recent attempts to replicate biomedical experiments found that most results could not be reproduced. In 2012, Amgen, a biotech firm, tried to replicate 53 landmark studies in cancer research. Despite the fact that these papers came from reputable journals, its scientists could replicate only 6 of the 53, or just 11% [2]. In a separate study, pharmaceutical giant Bayer found that it could replicate only 25% of 67 similarly important papers [3]. The inability to replicate means that the original results may not have been achieved legitimately, and it hints at a trend in the larger scientific community.
Looking deeper at cases of individual researchers may give us an idea of why so few experiments are replicable these days. One contributing factor may be errors in experimental design. One study identified errors in the experimental design of 50 publications on therapeutic agents that claimed to extend the lifespan of mice with Lou Gehrig’s disease. The authors then redid the experiments with a corrected design, giving the animals the same dosages as the original studies, but achieved similar results in only one of the 50 [4]. Failing to adhere to the scientific method allows bias and chance to skew the data, leading to results such as these.

Another factor contributing to bad science is the manipulation of data. An analysis of individual cases reveals that this trend is driven by the desire to publish exciting results. When the researchers behind the publications examined in the Amgen study were contacted, some of them required the Amgen scientists to sign a confidentiality agreement preventing them from publishing the contradicting evidence, so the public will never learn which of those 53 experiments were not replicable. This means that many researchers are willing to hide the truth if it protects their reputation. One researcher was contacted after Amgen failed to replicate his study in 50 attempts [5]. The researcher admitted that he had gotten the result in only one of his six trials, but chose to publish it “because it made the best story” [6]. People who did not know about his five failed trials would assume that the published result would occur in every trial, when in reality it was just an artifact of statistics.
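The article does not report the underlying p-values, but the general phenomenon is easy to simulate: if a lab runs several independent trials of an effect that does not exist and reports only a “significant” one, false positives become likely. A short Monte Carlo sketch in Python, with the conventional 0.05 threshold assumed purely for illustration:

```python
import random

def chance_of_one_false_positive(trials=6, alpha=0.05, n_sim=100_000):
    """Probability that at least one of `trials` experiments on a
    nonexistent effect crosses the p < alpha threshold by luck alone."""
    hits = sum(
        any(random.random() < alpha for _ in range(trials))
        for _ in range(n_sim)
    )
    return hits / n_sim

print(chance_of_one_false_positive())  # ~0.26, i.e. 1 - 0.95**6
```

With six tries at a 5% threshold, the chance of at least one spurious “hit” is roughly 26%, which is why unreported failed trials matter so much.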
In life science studies, researchers also hide inconvenient data, sometimes by excluding trials. In one experiment testing the effect of a drug as a stroke therapy, researchers treated 10 mice with the drug. However, their graph showed the effect for only the 7 mice that were successfully treated [7]. When questioned about it, they admitted that the other 3 had died because the drug made the stroke worse. According to Ulrich Dirnagl, the scientist who was reviewing the paper, “this isn’t fraud…You look at the data, there are no rules. … People exclude animals at their whim, they just do it and they don’t report it.” He explains that this has become an entrenched, accepted part of the culture. This mindset, in which exciting results matter more than scientific integrity, is the driving force behind the widespread publication of non-replicable science.

Scientists usually become scientists because they want to discover the truth and explain the world, so it is surprising that data manipulation is so widespread. One contributing factor is that the scientists who do hide or distort data are overrepresented in publications. By tampering with the data, they are able to reach more “exciting” conclusions, which are more likely to get published, especially in top journals.

The competitiveness of academia is a driving force behind this data manipulation. There are six freshly graduated Ph.D.s for every academic post that opens up each year, and researchers must receive funding if they want to survive in academia. And since the top journals are extremely selective, with rejection rates above 90%, only new, groundbreaking studies make the cut. Results that do not support the scientist’s hypothesis, which are equally important in advancing science, now account for only 14% of published research, down from 30% in 1990 [8]. This trend, along with the “publish or perish” mentality that pervades academia, incentivizes scientists to create positive results. According to Ferric Fang, a researcher at the University of Washington, “The surest ticket to getting a grant or job is getting published in a high-profile journal.” When
the system forces them to choose between the future of their career and scientific rigor, it makes sense that researchers may be tempted to manipulate evidence for their own survival. Furthermore, there is minimal institutional enforcement of, and no incentive for, scientific quality and rigor. According to Brian Nosek, a psychologist at the University of Virginia, “There is no cost to getting things wrong. The cost is not getting them published” [9]. This incentivizes scientists to ignore the scientific method and publish as soon as possible without questioning their results. The problem is compounded by the fact that there is little incentive to replicate. Journals are not interested in publishing replication studies, as they are not as exciting as new results, and replication is not as respected among scientists as novel findings are. In an evaluation of 370 genetic association studies, it was found that while studies reporting disease associations were published in high-impact journals, their subsequent refutations with better-quality data did not receive such favorable treatment [10]. Less rigorous but more “exciting” studies in high-impact journals are thus rewarded with promotion and funding, while their rigorous refutations are not. This skewed incentive system contributes to what we see today because it puts political and economic pressure on scientists to engage in sensationalism and sometimes even dishonest behavior.

The prevalence of bad science has vast negative implications for both scientific advancement and public welfare. Nearly all studies use previous research as reference, and studies are used for drug development and new inventions. When studies based on previous research fail because the earlier studies were flawed, it wastes scientists’ time and funding that could have been used for other purposes. Because pharmaceutical companies rely on basic studies to identify new targets for drug development, it is critical that those studies be reliable. Large companies like Amgen need to recruit hundreds or even
thousands of patients to participate in their drug trials. Organizing scientists, materials, and patients for a trial is extremely expensive; Glenn Begley, the researcher who led the Amgen study, compared it to “placing a $1 million or $2 million or $5 million bet on an observation.” Not only is it a waste of money, it is a waste of time for the patients and researchers involved. Between 2000 and 2010, roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties [11]. This is something that could, and should, have been largely prevented. Worldwide, over $100 billion is invested in biomedical research to produce around 1 million publications every year, but according to one paper, up to 85% of that research is wasted because of correctable problems. This means tens of billions of dollars are spent annually on
unproductive research while exposing patients to unnecessary risk, and scientific advancement slows when resources are spent on studies that should never have been started.

The effects of bad science are even more concerning when they threaten people’s health. In 2010, patients sued AstraZeneca, a large pharmaceutical company, after it was discovered that the company had buried one of its trials. The study showed AstraZeneca’s schizophrenia drug quetiapine to be less effective than the control, and showed that patients gained an average of 5 kg over the course of the study [12]. For some patients, this caused diabetes and worsened their schizophrenia symptoms. When bad science allows a drug to reach the public, it can even cost lives. In 2004, New York State Attorney General Eliot Spitzer sued GlaxoSmithKline, another drug company, for persistently concealing and failing to disclose to physicians information about Paxil, an antidepressant. Of the five studies found to have been hidden, two failed to show that Paxil was more effective than a placebo, and three showed that Paxil more than doubled the likelihood
of behavior that included “suicidal thinking and acts.”[13] For patients who are already depressed, this is a life-or-death matter. A person takes a drug to get better, so when that drug worsens their condition or even kills them, the danger is extreme. And many times, this danger could have been prevented if drug companies had been more honest about their results. Drug companies could still sell drugs with side effects, but it would benefit everyone if consumers knew about those side effects. Being more transparent would allow drug companies, doctors, and patients to work together to minimize the impact of side effects. It would also prevent public distrust of medicine and science. The scientific community is moving in a dangerous direction. Increased competition in academia has, ironically, led to a decline in the quality of studies. Bad science is dangerous, wastes money, time, and resources, and will eventually lead to public distrust of science. However, there are steps that we can take to change the current system that rewards “quick and dirty
science.” This system can be changed by implementing procedures that encourage rigor and replication. Replication can be made more respectable by setting aside a section in journals dedicated to replication studies. Adherence to the scientific method needs to be enforced by publishing only studies that are well controlled for bias and use rigorous procedures, even if the results are less exciting. It is essential that we change the direction science is heading before it causes even more severe consequences. The pressures put on scientists and the skewed incentive system have pushed science in this direction, but it can be redirected if the scientific community is willing to work together to revamp the incentive system to benefit both itself and the public in the long run.

[1] Naik, Gautam. Mistakes in Scientific Studies Surge [Internet]. The Wall Street Journal. 10 Aug. 2011 [cited 02 Apr. 2014]. Available from: <http://online.wsj.com/article/SB10001424052702303627104576411850666582080.html>
[2] Begley, Glenn, and Lee M. Ellis. Drug Development: Raise Standards for Preclinical Cancer Research [Internet]. Nature. 28 Mar. 2012 [cited 06 Apr. 2014]. Available from: <http://www.nature.com/nature/journal/v483/n7391/full/483531a.html>
[3] Prinz, Florian, Thomas Schlange, and Khusru Asadullah. Believe It or Not: How Much Can We Rely on Published Data on Potential Drug Targets? [Internet]. Nature. 31 Aug. 2011 [cited 06 Apr. 2014]. Available from: <http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html>
[4] Scott, Sean, Janice E. Kranz, Jeff Cole, John M. Lincecum, Kenneth Thompson, Nancy Kelly, Alan Bostrom, Jill Theodoss, Bashar M. Al‐Nakhala, Fernando G. Vieira, Jeyanthi Ramasubbu, and James A. Heywood. Design, Power, and Interpretation of Studies in the Standard Murine Model of ALS [Internet]. Amyotrophic Lateral Sclerosis 9.1 (2008): 4-15 [cited 04 Apr. 2014]. Available from: <http://www.researchals.org/uploaded_files/ALS%202008%209%204.pdf>
[5,6] Begley, Sharon. In Cancer Science, Many Discoveries Don’t Hold Up [Internet]. Reuters. 28 Mar. 2012 [cited 02 Apr. 2014]. Available from: <http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328>
[7] Couzin-Frankel, Jennifer. When Mice Mislead [Internet]. Science Magazine. Nov. 2013 [cited 03 Apr. 2014]. Available from: <http://www.sciencemag.org/content/342/6161/922.full>
[8,11] The Economist Newspaper. How Science Goes Wrong [Internet]. The Economist. 19 Oct. 2013 [cited 03 Apr. 2014]. Available from: <http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong>
[9] The Economist Newspaper. Trouble at the Lab [Internet]. The Economist. 19 Oct. 2013 [cited 04 Apr. 2014]. Available from: <http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble>
[10] Ioannidis, John P.A., Evangelia E. Ntzani, Thomas A. Trikalinos, and Despina G. Contopoulos-Ioannidis. Replication Validity of Genetic Association Studies [Internet]. Nature Genetics. 2001 [cited 04 Apr. 2014]. Available from: <http://www.nature.com/ng/journal/v29/n3/full/ng749.html>
[12] Goldacre, Ben. Drug Firms Hiding Negative Research Are Unfit to Experiment on People [Internet]. The Guardian. 13 Aug. 2010 [cited 04 Apr. 2014]. Available from: <http://www.theguardian.com/commentisfree/2010/aug/14/drug-companies-bury-negative-research>
[13] Martinez, Barbara. Spitzer Charges Glaxo Concealed Paxil Data [Internet]. The Wall Street Journal. 3 June 2004 [cited 04 Apr. 2014]. Available from: <http://online.wsj.com/news/articles/SB108618482620826827>
How We Teach About How We Got Here
MAGGIE ROWE ‘17
ARTWORK & LAYOUT Alec Davidson ‘19 Kaley Brauer ‘17
EDITOR Russell Shelp ‘16
Hundreds of Americans filled the Creation Museum in Kentucky to hear Bill Nye and Ken Ham debate evolution and creationism in early 2014. While the affair garnered much media attention and hundreds of thousands of viewers, unsurprisingly there was no clear consensus as to who, if anyone, won the debate. An event like this serves as a reminder of the uniqueness of the evolution debate: no other scientific debate has so permeated the public sphere. The legislative debate over evolution in public schools is even more publicized than the scientific debate over the validity of evolutionary theory. The prevalence of the evolution debate in America affects evolution education in ways beyond whether or not evolution is included in public school curriculums. The public nature of the debate causes the evolution education in place today to suffer in quality and efficacy due to poorly prepared students
and ineffective standardization at the state board of education level. The creationism and evolution debate, occurring in the form of state curriculum and biology textbook selection controversies, is often publicized in national media. Yet the pro-evolution outcomes of court cases like Edwards v. Aguillard, a 1987 Supreme Court decision, and Kitzmiller v. Dover, a 2005 federal district court ruling, which banned the teaching of creationism and intelligent design, respectively, in public schools, seem to have done little to change the landscape of public opinion on the issue [1]. A Gallup Poll taken in 1982 showed that 44% of Americans said that the statement “God created human beings pretty much in their present form at one time within the last 10,000 years or so” most closely identified their views on the origins of human life [2]. Gallup asked the same question of Americans in 2012 and found that 46% of respondents identified with this creationist view of human origins. This demonstrates that public acceptance of the science behind evolution did not change significantly from 1982 to 2012 and suggests that public opinion is relatively immune to advancements in evolutionary science. This data, in part, reflects the fact that the United States is a very religious country. According to a 2012 Pew Research survey report, “half of Americans deem religion very important in their lives; fewer than a quarter in Spain (22%), Germany (21%), Britain (17%) and France (13%) share this view” [3]. A 2006 study reported that 70% or more of the population of these relatively secular European countries believe that evolution is true [4]. The high percentage of Americans reporting
religion as very influential in their lives suggests that much of the American population considers religion when forming opinions on scientific and political issues. Much of the controversy surrounding the evolution debate stems from what some people interpret as a contradiction between science and the origin of life described in Christian religious texts, even though the leaders of many religious institutions, including the Catholic Church, the United Church of Christ, the United Methodist Church, and the Presbyterian Church, have stated that evolution and faith are not inherently contradictory [5]. However, the importance of religion to Americans certainly contributes to a relatively unchanging national view of evolution. Despite the stated positions on evolution of some of the country’s most prominent religious institutions, “a few studies have shown that religious beliefs appear to interfere with the understanding or acceptance of scientific views, especially for evolution” [6]. MIT physicist Max Tegmark explored this disconnect between the scientific attitudes of faith leaders and their communities, suggesting that this belief gap “may have less to do with intellectual disputes and more to do with an epic failure of science education” [7]. The persistent belief that science and religion are incompatible may affect the way religious students perceive science, and educational and religious institutions are not effectively communicating their positions on the issue. Instead of hearing productive dialogue, students encounter media whose casual discussion of the issue leads to confusion. For example, “antievolutionists tend to incorrectly intertwine
evolution and origins and use the terms interchangeably” [8]. The media can also fail to accurately define the term ‘scientific theory’ [9]. Ambiguities and misconceptions can arise from word choices such as “Charles Darwin’s theory of evolution,” which “[presents] evolution as a relic of the 19th century” [9]. Instead of contributing to productive dialogue on the issue, coverage of the evolution debate confuses students and adults alike. These misconceptions permeate society through the media and, while public opinion on evolution remains relatively unchanged, the ability of students to learn about evolution suffers. In 2002, Brian Alters and Craig Nelson published an analysis of the role of students’ scientific preconceptions in their ability to learn about evolution. Unsurprisingly, it concluded that “students’ prior conceptions are a major factor affecting how and if students learn” [6]. Incorrect preconceptions about science and evolution, such as those arising from confusion of terms, prevent students from benefitting from evolution education. A major issue in the debate is the misconception about what is and is not science. This distinction is key in understanding evolution, its place in the public school science curriculum, and why creationism should not be taught as a scientific alternative.
Alters and Nelson point out that the “juxtaposition of science and supernatural causation in articles about biological evolution may lead to confusion among casual readers” [6]. Newspapers often give a ‘balanced’ depiction of both sides of the issue such that “a lay reader could hardly avoid drawing the erroneous conclusion that there is some genuine controversy here between rival scientific camps” [9]. When little or no distinction is drawn between science and creationism in communication with people lacking expertise in the debate, the media inadvertently complicates the argument. A student’s inability to distinguish between what is science and what is not has impacts beyond the evolution debate when considering how science becomes incorporated into that student’s belief system. It affects critical thinking, as a study showed that “introductory nonmajor biology students…who were less skilled in reasoning were more likely to hold nonscientific beliefs than were students more skilled in reasoning” [6]. Included in the nonscientific beliefs were special creation, the soul, and teleology, among others [10]. The study identified 954 students’ beliefs through questionnaires. The students then took a modified version of the Lawson Classroom Test of Scientific Reasoning, which involves demonstrations and predictions, to classify
them as intuitive, transitional, or reflective thinkers. Reflective thinkers have the ability to “reflect on the adequacy or inadequacy of intuitively generated problem solutions” [10]. In contrast, intuitive thinkers will “respond incorrectly with the intuitively generated response” [10]. It was shown that the reflective thinkers, those who scored highest on the reasoning test, more frequently held accepted scientific beliefs than the intuitive thinkers, who scored lowest on the test. However, the study also showed that neither group was more or less likely to change their nonscientific or scientific beliefs after a semester of biology, indicating that the quality of education received may be more important than any scientific preconceptions [10]. The structure of evolution education in a state is outlined by its science standards, and when the media reports on the evolution debate, it is often recounting state board of education battles over evolution in public school curriculums. Controversy over the wording of the state science curriculum usually dominates media coverage, and the problems associated with using state standards to dictate an effective evolution education program are overlooked. The simple inclusion of evolution in state science standards does not guarantee adequate evolution education due to the complexity of the
topic and its associated controversy. In some states, the rapid fluctuation of state standards has rendered them almost meaningless as a true benchmark for appropriate education. In Kansas, for example, the state school board’s evolution curriculum has flip-flopped from one extreme to the other as the board cycles in and out of having a religiously conservative majority. In 1999, the Kansas Board of Education voted “to delete virtually any mention of evolution from the state’s science curriculum” [11]. The change was soon reversed after unhappy constituents voted the members of the board out of office. Over time, the board regained a conservative majority, and controversy erupted again in 2005 when “science groups…[were] concerned about a section in the standards that [changed] the definition of science” to include the supernatural [12]. Again, the decision was eventually reversed. Most recently, “an anti-evolution group filed a federal lawsuit…to block Kansas from using new, multistate science standards…arguing the guidelines promote atheism” [13]. The seemingly constant challenges to and reversals of Kansas’ state science standards contribute to unproductive publicity of the
issue. Fluctuations like these also call into question the scientific legitimacy of the body deciding on the standards and the true effectiveness of the standards themselves. Even when states have standards calling for comprehensive evolution education, that education must be carried out by teachers statewide in order to reach students. According to an analysis of state science standards, even states with the highest-rated evolution education exhibit a large number of biology teachers who “spend little time teaching it, believe that creationism should be included in science classes, and question the scientific validity of evolution” [14]. Teachers are, of course, permitted to hold personal beliefs about evolution and creationism, but those who allow personal beliefs to lessen the quality of their evolution instruction put students at a disadvantage in their education. Such teachers also demonstrate that even the best state standards do not assure adequate evolution instruction. The complexity of evolution education cannot be overstated, and the problems it faces have no easy solution. The prevalence of the debate in our culture engenders confusion about its
facts, yet its publicity has failed to significantly change public opinion in the recent past. Americans’ strong religiosity contributes in part to this phenomenon but does not condemn the country to generations of students who are uneducated in evolution and science in general. Scientists at the forefront of this issue should argue that religion and evolution are not mutually exclusive; only then will real change that prevents false scientific preconceptions occur. Evolution education is a multifaceted issue that depends on properly prepared students and teachers rather than just state guidelines. By refusing to sensationalize fights over state science standards, the media can contribute to a more productive dialogue about the most effective way to educate our children rather than entrenching difficulties in science education. As a community of experts and laypeople alike, we must work to redeem the evolution debate from its detrimental state and strive to make it an example of productive scientific dialogue that betters the world for current and future generations of science students.

[1] Molleen Matsumura, Louise Mead. Ten Major Court Cases about Evolution and Creationism [Internet]. February 14th, 2001 [cited April 1st, 2014]. Available from: http://ncse.com/taking-action/ten-major-court-cases-evolution-creationism
[2] Frank Newport. In U.S., 46% Hold Creationist View of Human Origins [Internet]. June 1st, 2012 [cited April 1st, 2014]. Available from: http://www.gallup.com/poll/155003/hold-creationist-view-human-origins.aspx
[3] The American-Western European Values Gap [Internet]. February 29th, 2012 [cited April 1st, 2014]. Available from: http://www.pewglobal.org/2011/11/17/the-american-western-european-values-gap/
[4] James Owen. Evolution Less Accepted in U.S. Than Other Western Countries, Study Finds [Internet]. August 10th, 2006 [cited April 1st, 2014]. Available from: http://news.nationalgeographic.com/news/2006/08/060810-evolution.html
[5] Religious Groups’ Views on Evolution [Internet]. February 3rd, 2014 [cited April 1st, 2014]. Available from: http://www.pewforum.org/2009/02/04/religious-groups-views-on-evolution/
[6] Alters B, Nelson C. Perspective: Teaching Evolution in Higher Education. Evolution 2002; 56(10): 1891-1901.
[7] Max Tegmark. Celebrating Darwin: Religion And Science Are Closer Than You Think [Internet]. February 12th, 2013 [cited April 15th, 2014]. Available from: http://www.huffingtonpost.com/max-tegmark/religion-and-science-distance-between-not-as-far-as-you-think_b_2664657.html
[8] Skoog G, Bilica K. The emphasis given to evolution in state science standards: A lever for change in evolution education? Science Education 2002; 86(4): 445-462.
[9] Rosenhouse J, Branch G. Media Coverage of “Intelligent Design.” BioScience 2006; 56(3): 247-252.
[10] Lawson A, Weser J. The Rejection of Nonscientific Beliefs about Life: Effects of Instruction and Reasoning Skills. The Journal of Research in Science Teaching 1990; 27(6): 589-606.
[11] Pam Belluck. Kansas Votes to Delete Evolution From State’s Science Curriculum [Internet]. August 12th, 1999 [cited April 1st, 2014]. Available from: http://www.nytimes.com/library/national/081299kan-evolution-edu.html
[12] Greg Allen. New Kansas School Standards Question Evolution [Internet]. November 9th, 2005 [cited April 1st, 2014]. Available from: http://www.npr.org/templates/story/story.php?storyId=5005321
[13] John Hanna. Lawsuit filed in Kansas to block science standards [Internet]. September 26th, 2013 [cited April 1st, 2014]. Available from: http://www.kansascity.com/2013/09/26/4510571/lawsuit-filed-in-kansas-to-block.html
[14] Moore R. Teaching Evolution: Do State Standards Matter? BioScience 2002; 52(4): 378-381.
The Biosocial Model of Crime: Linking the Biological and the Socio-Spatial to Understand Behavior and Criminalization
KURT PIANKA ‘17
The criminal justice system in America has come under increased public scrutiny over the past decade. Neurocriminology, the application of neuroscience principles and techniques to the study of crime, has much to contribute to this emotionally charged and vital conversation. Current neuroscience research indicates that structural and functional brain deficits are related to antisocial, aggressive, and violent behavior. These insights
have pushed the interdisciplinary field of neurocriminology towards a biosocial model for crime. This model links a neurobiological understanding of behavior with the study of how social forces shape our surroundings. It accounts for how socially produced environments contribute to brain deficits that are implicated in antisocial, aggressive and often criminalized behaviors.
LAYOUT Kaley Brauer ‘17 Tim Valicenti ‘17
EDITOR Shannon McCarthy ‘15.5
Antisocial behavior, in contrast to the colloquial use of the term, refers to unruly, inappropriate behavior that sometimes neglects or even violates the rights of others [1]. Often, this manifests as rule breaking and a general failure to adhere to socially acceptable guidelines of conduct [2]. Unsurprisingly, then, antisocial tendencies often result in criminalized acts. Several promising markers in the brain have been shown to strongly predict future antisocial behavior, including low overall arousal, especially in the frontal lobe; reduced amygdala volume; and certain chemical imbalances [3]. It is important to note, however, that the social reading of antisocial behaviors as criminal is not objective, but is informed by the social, political, and spatial relations of our society. The biosocial model depends on understanding how brain functions relate to behavior. The brain deficits associated with antisocial behavior are experimentally and empirically measurable. One of the most documented and studied deficits is an impaired or damaged frontal lobe. A low-functioning prefrontal cortex (PFC), the region at the very front of the brain, is convincingly linked to violent, antisocial, and psychopathic behavior [4,5]. The PFC plays a role in a wide range of vital processes, including emotional and behavioral regulation, personality, social judgment, and intellectual function. Studies of antisocial populations found that, on average, subjects displayed impaired PFC function, and often showed evidence of head trauma affecting the PFC and other critical brain areas [2]. These results are not unexpected considering the role the PFC plays in our
decision-making and social behaviors. Studies show that head trauma affecting the frontal lobe can trigger a dramatic shift in personality and increase impulsive, violent, and aggressive behaviors [4]. It seems logical that ineffective modulation of emotional decisions and poor moral and social reasoning could lead to the commission of crime. Another measurable marker of antisocial behavior is low amygdala volume. The amygdala plays a vital role in emotional response, and a smaller amygdala has been associated with aggression and poor impulse control. Lower amygdala volume in male subjects in particular is associated with childhood aggression, early psychopathic traits, and future violence [6]. A third marker is low arousal in the brain as measured by electroencephalography (EEG), which is associated with increased antisocial behaviors. Studies show that lower arousal states in children are strongly predictive of aggressive antisocial behaviors in adolescence [7]. Beyond PFC function, amygdala volume, and arousal states, the literature documents several other structural and functional brain deficits with predictive and correlative relationships to future antisocial, aggressive, and violent behaviors. The second component of the biosocial model is the question of how these deficits occur. There is still a need for further research on the sources of structural brain deficits aside from the relatively clear-cut case of physical trauma. However, it is clear that deficits can arise from a complex interaction between genes and environment. The environment may contribute to
brain deficits directly or through epigenetic mechanisms (non-genetic influences on gene expression). Some of the most powerful environmental risk factors include malnutrition, exposure to man-made toxins, and physical and emotional trauma [3]. There is a clear link, for example, between malnutrition in childhood and the inhibition of healthy brain development. Longitudinal studies link malnutrition to persistently lower cognitive function [9]. Conversely, targeted dietary supplements, particularly of micronutrients like iron, can raise cognitive performance [10].
What does any of this have to do with crime? The same scientists who linked malnutrition with impaired cognitive function also found that early malnutrition was correlated with persistent aggressive behaviors and conduct disorder years later [11]. Access to proper nutrition may manifest economically and spatially. Many lower-income neighborhoods in the United States, especially communities of color, are “food deserts”: they lack ready access to healthy food due to spatial inequalities in the food retail environment, namely cost and the prohibitive distance to larger, well-stocked supermarkets [12]. This simple observation of the geographic distribution of healthy foods implicates socio-spatial factors as potentially contributing to the link between brain development and crime rates [13]. Another powerful environmental risk factor for antisocial behaviors is toxin exposure, particularly to heavy metals like lead. A study of pregnant women and their children found that the children’s prenatal and postnatal blood-lead levels closely correlated with their likelihood of committing violent and criminal acts as adults [14]. This relationship between blood-lead levels and adult crime holds across decades and countries [15]. That this phenomenon was observed repeatedly is strong evidence that heavy metal poisoning during childhood impairs healthy brain development, with corresponding effects on behavior. Exposure to toxins, like access to healthy food, occurs unequally along spatial and social lines. For example, the location of trash incinerators and other infrastructure that contributes to health hazards
is based on political decision-making and land-use policies. Such sources of concentrated pollution are most often found in politically and economically disenfranchised communities that possess less political and social capital: predominantly communities of color, and African-American communities in particular [13]. Moreover, such communities also house older buildings that are often contaminated with lead and other heavy metals that have since been phased out of wealthier neighborhoods. The socially produced environment, therefore, is directly linked to the biological roots of antisocial behavior. A third environmental influence on cognitive function and antisocial behavior is exposure to chronic stress and violence during childhood. A study published in 2010 demonstrated that a recent, local homicide significantly reduces the vocabulary and reading performance of students living in the affected neighborhood [16]. Over time, chronic direct and indirect exposure to stress and violence takes a cumulative toll on local youths’ school performance. These students struggle, fall behind, and may eventually be excluded and left behind entirely, entering what social scientists call the school-to-prison pipeline [17]. Our lived environments are not produced in a social and political vacuum. Strongly predictive risk factors for antisocial behavior are produced by a complex interplay between social and political actors navigating the power structures we create. Understanding that antisocial behaviors arise at least
in part out of both genetic and environmental factors, we must accept collective responsibility for the socially produced environment’s contribution to those behaviors. It is the interaction of “nature and nurture” that underpins observable deficits in vital brain regions related to decision-making, emotion, stress response, impulse control, and other behavioral factors [4]. The third component of a biosocial model for crime is how antisocial behaviors become crimes. While many social scientists have argued that the criminalization of behaviors is a subjective exercise [13], neurocriminology studies often treat “crime” as if it were an objective measure or outcome of behavior. In fact, what constitutes a “crime” is not just a certain action, but how that particular behavior is interpreted in a given social, spatial, and temporal context by others [13]. For instance, a crime like loitering is identified according to heavily subjective readings of behavior by law enforcement officers, who may interpret a relatively benign act such as standing in public space as antisocial or criminal. An individual may be criminalized depending on how a police officer reads their style, their mannerisms, and their right to be in that particular space. In many cases, appearances and perceptions dictate whether police officers and other members of a community view and treat an individual as a criminal [13]. These readings, where specific people are deemed out of place and threatening in certain contexts, are further complicated when the targeted individual suffers from the functional brain deficits
outlined above. Being confronted by a police officer for standing on a public sidewalk is a challenge to one’s right to occupy that space. Given that the functional impairments associated with antisocial behaviors include inhibited emotional regulation and impulsivity, it is feasible that someone thus impaired would be more likely to respond to such a challenge in a manner interpreted as aggressive. An escalation of the situation then retroactively justifies the police officer’s decision to
confront that individual based on appearances alone. Thus the subjective reading of identity, and of the right of individuals to occupy particular spaces, is a key component of the biosocial model of crime. Taken as a whole, the biosocial model underscores the fundamental flaws in our criminal justice system. The cross-disciplinary approach discussed here indicates that we need to dismantle the overly punitive institutions and
“tough on crime” approaches and develop informed support programs and rehabilitative policies that work to reduce biosocial risk factors [18, 19]. In light of the understanding that behaviors arise out of a complex interaction of biological and social factors, it is time to leave behind the simplistic age of punitive justice in favor of programs acting at the individual, community, and institutional levels to foster healthy environments for all people.

[1] Glenn A, Johnson A, Raine A. Antisocial personality disorder: a current review. Curr Psychiatry Rep (2013); 15: 427-435.
[2] Raine A, Yang Y. Neural foundations to moral reasoning and antisocial behavior. Soc Cogn Affect Neurosci (2006); 1: 203-213.
[3] Raine A. The anatomy of violence: The biological roots of crime. New York: Vintage Books; 2013.
[4] Brower M, Price B. Neuropsychiatry of frontal lobe dysfunction in violent and criminal behaviour: a critical review. J Neurol Neurosurg Psychiatry (2001); 71: 720-726.
[5] Yang Y, Raine A. Prefrontal structural and functional brain imaging findings in antisocial, violent, and psychopathic individuals: a meta-analysis. Psychiatry Res: Neuroimaging (2009); 174: 81-88.
[6] Pardini D, Raine A, Erickson K, Loeber R. Lower amygdala volume in men is associated with childhood aggression, early psychopathic traits, and future violence. Biol Psychiatry (2014); 75(1): 73-80.
[7] Niv S, Ashrafulla S, Tuvblad C, Joshi A, Raine A, Leahy R, Baker L. Childhood EEG frontal alpha power as a predictor of adolescent antisocial behavior: A twin heritability study. Biol Psychiatry (2014); 105: 72-76.
[8] Tuvblad C, Gao Y, Wang P, Raine A, Botwick T, Baker L. The genetic and environmental etiology of decision-making: A longitudinal twin study. J Adolescence (2013); 36(2): 245-255.
[9] Liu J, Raine A, Venables P, Dalais C, Mednick S. Malnutrition at age 3 years and lower cognitive ability at age 11 years: independence from psychosocial adversity. Arch Pediatr Adolesc Med (2003); 157(6): 593-600.
[10] Bruner A, Joffe A, Duggan A, Casella J, Brandt J. Randomised study of cognitive effects of iron supplementation in non-anaemic iron-deficient adolescent girls. Lancet (1996); 348: 992-996.
[11] Liu J, Raine A, Venables P, Mednick S. Malnutrition at age 3 years and externalizing behavior problems at age 8, 11, and 17 years. Am J Psychiatry (2004); 161: 2005-2013.
[12] Beaulac J, Kristjansson E, Cummins S. A systematic review of food deserts, 1966-2007. Prev Chronic Dis (2009); 6(3): 1-10.
[13] Stefano Bloch. Interviewed by Kurt Pianka. 2015 Feb 27 [cited 2015 April 16].
[14] Wright J, Dietrich K, Ris M, Hornung R, Wessel S, Lanphear B, Ho M, Rae M. Association of prenatal and childhood blood lead concentrations with criminal arrests in early adulthood. PLoS Med (2008); 5(5): 732-739.
[15] Nevin R. Understanding international crime trends: the legacy of preschool lead exposure. Environ Res (2007); 104: 315-336.
[16] Sharkey P. The acute effect of local homicides on children’s cognitive performance. Proc Natl Acad Sci U S A (2010); 107(26): 11733-11738.
[17] Wald J, Losen D. Defining and redirecting a school-to-prison pipeline. New Dir Youth Dev (2003); 99: 9-15.
[18] Dekovic M, Slagt M, Asscher J, Boendermaker L, Eichelsheim V, Prinzie P. Effects of early prevention programs on adult criminal offending: A meta-analysis. Clin Psychol Rev (2011); 31: 532-544.
[19] Vaske J, Galyean K, Cullen F. Toward a biosocial theory of offender rehabilitation: why does cognitive-behavioral therapy work? J Crim Justice (2011); 39: 90-102.
Are We All WEIRD?
Behavioral Scientists Seem to Think So.
KARINE LIU ‘18
ARTWORK & LAYOUT Georgianna Stoukides ‘18 Kaley Brauer ‘17
EDITOR Mark Sikov
Everyone has quirks: our idiosyncrasies, our guilty pleasures, and our stories are products of unique combinations of lived experiences. The environments we inhabit shape each of us distinctly, suggesting that there is some truth in calling a person “one in a million” (or rather, one in seven billion). Yet how are such differences considered in the study of human behavior? Behavioral scientists, a term covering psychologists, cognitive scientists,
and economists, focus heavily on the commonalities in behavior across members of Homo sapiens; many of their overarching principles and theories are meant to ring true for people across the globe. However, as inclusive as these fields initially seem, recent studies have revealed an alarming trend of exclusivity: an overwhelming majority of experimental subject samples are surprisingly homogeneous, consisting of participants from Western, Educated, Industrialized, Rich, Democratic countries. In other words, they’re WEIRD.
WEIRD—Is it an Insult?
Coined by anthropologist Joseph Henrich and psychologists Steven Heine and Ara Norenzayan, the classification “WEIRD” is a fairly recent one, contemporary with the release of studies that have raised questions about the validity of much behavioral research. A comprehensive 2008 review of publications (known as a meta-analysis) in six major American psychology journals from 2003 to 2007 revealed that 95% of the publications used samples that were WEIRD [1]: 65% of subjects were from the United States, with a further 13% from other English-speak-
ing countries, and 14% were European. This first-world dominance, particularly by the United States, is reinforced by a 1997 survey, which found that 70% of the psychological citations published that year were American [2]. In addition to the focus on Western countries, which are by and large homogeneous, the “Educated” qualification has overwhelmingly become the standard as well. In that same 2008 meta-analysis, the majority (67%) of publications by the American Psychological Association’s Journal of Personality
and Social Psychology in 2007 drew their conclusions from samples of undergraduate psychology students at American research universities. The trend extends even to samples taken from non-Western countries, with 80% drawn from a nearly identical pool of undergraduate psychology students. Psychology aside, the other behavioral sciences fare no better: Westerners remain the most common subject type, with a specific predilection for undergraduates [1].
And the problem is…
Although Western college students may be the most readily available pool of subjects for university researchers, the 2008 meta-analysis notes that the benefits of using such a biased sample generally don’t outweigh the inherent costs. “If the goal of APA journals is to promote psychology as a human science and not just a science of Americans, their content and their editorial
leadership should reflect this” [1]. It is worth noting that there are studies in which the similarity of constituents in a sample helps to control unpredictable variables, but these are far fewer than those claiming grandiose phenomena about the “human mind and human behavior.” Studies that qualify their conclusions by establishing the limitations of such an
unrepresentative sample, or that explicitly mention the homogeneity of their samples, are rare [3]. But is there a need for such specificity? Can we correctly expand our understanding of human consciousness through observations of WEIRD twenty-somethings?
Frankly, is it weird to be WEIRD?
Putting aside the fact that WEIRD subjects are a minority worldwide (comprising only 12% of the world’s population), published investigations strongly suggest measurable differences between individuals considered WEIRD and non-WEIRD. Such disparities were highlighted in an extensive review by Henrich, Heine, and Norenzayan. Their 2010 publication compiled studies that compared the two groups across a variety of “human” areas, exhibiting differences that range from complex social behaviors to basic perception. This meta-analysis spans research dating as far back as the 1960s. A study by Marshall Segall, Donald Campbell, and Melville Herskovits demonstrated cognitive differences between subjects of industrialized societies and those of “small-scale societies” in what had been believed to be a basic, universal human function: visual perception [4]. Nearly 1,900 participants from 14 WEIRD and non-WEIRD countries were presented with the same five visual illusions, the most famous being the Müller-Lyer Illusion (see Figure 1). Although individual responses to these figures could differ, the distribution was expected to be similar for both groups if visual perception were truly universal. However, their findings supported the opposite conclusion: South African, European, and American subjects were more likely to give “illusion-supported” responses, while non-Western
subjects were significantly less susceptible. Remarking on this unforeseen distinction, Segall et al. posited that the results were likely due to cultural influences. WEIRD subjects, having lived in industrialized and urbanized areas, tended to seek out rectangularity, a predilection known as the “carpentered world” effect. Their greater exposure to straight lines and right angles, according to the authors, made WEIRD subjects vulnerable to the illusions.

[Figure 1. The Müller-Lyer Illusion used in the 1966 Segall et al. study. Participants were expected to contrast the lengths of the horizontal segments, when they are in fact all equal in length. Westerners were overall more likely to give the “illusion-supported” response that some were longer than others.]

This hypothesis, however, is far from widely supported: subsequent studies have both confirmed and denied the influence of “the carpentered world,” with some finding instead that differences in retinal pigmentation cause the differing perception [5]. One study describes perception variance between Zambian children who lived in urban and rural areas, with results similar to Segall’s [6]. Although this does not concretely confirm Segall’s idea, there is reason to believe that the environment plays a role in shaping human perception.

This raises the question: if how we see and interpret what we see is not consistent across diverse groups, then how do more intricate psychological processes fare under comparable inspection? In a series of studies over several years, Henrich sought to answer this question. He examined the ways in which WEIRDness could be distinguished in behavioral values, such as fairness and economic decision-making. These characteristics were assessed through constructed activities that the researchers called the “ultimatum game” and the “dictator game” [7]. Related to the much better-known “prisoner’s dilemma,” the ultimatum game gives one subject a sum of money, from which he or she may offer a portion to a second subject. If the second subject accepts, the money is split according to the offer; if he or she refuses, neither receives any money. The dictator game differs in that the second participant is not allowed to reject the offer, so the offering subject acts solely out of fairness or out of self-interest.

Pooling from thousands of subjects from the United States as well as 23 “small-scale human societies, including foragers, horticulturalists…drawn from Africa, Amazonia, Oceania, Siberia, and New Guinea,” Henrich again found startling differences between these groups. Participants from the United States were at the extreme end of the spectrum in three areas. American proposers consistently gave the highest offers in the dictator game and the second-highest offers in the ultimatum game. Even when given the history of deals rejected by the responder, American proposers still offered more money than proposers from other countries, suggesting a warped view of what offers their peers would accept. While it would be rash to jump to conclusions about an entirely unique WEIRD population, it holds that there may be measurable, subtle differences among groups depending on upbringing and environment. It is feasible that the WEIRD population may produce a spectrum of data slightly but distinctly altered from that of the non-WEIRD population. However, further evidence points to differences in thinking across populations, making an even stronger case that it is unwise to generalize about human behavior from small subsets of the human race.
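To make the two protocols concrete, here is a minimal sketch of their payoff rules as described above. It is an illustration only: the function names and the $10 pot are invented for the example, not drawn from the studies themselves, which involved real stakes and careful anonymity protocols.

def ultimatum_game(pot, offer, responder_accepts):
    # The responder holds a veto: a rejected offer leaves both players
    # with nothing, so even a self-interested proposer must anticipate rejection.
    if responder_accepts:
        return pot - offer, offer  # (proposer payoff, responder payoff)
    return 0.0, 0.0

def dictator_game(pot, offer):
    # No veto: the split always stands, so any positive offer reflects
    # a fairness norm rather than strategic caution.
    return pot - offer, offer

# A proposer splits a hypothetical $10 pot, offering $4:
print(ultimatum_game(10.0, 4.0, responder_accepts=True))   # (6.0, 4.0)
print(ultimatum_game(10.0, 4.0, responder_accepts=False))  # (0.0, 0.0)
print(dictator_game(10.0, 4.0))                            # (6.0, 4.0)

The asymmetry is visible in the payoffs: only the ultimatum game gives the responder leverage, which is why generous dictator-game offers are read as evidence of internalized fairness norms rather than fear of rejection.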
For example, research in the early 2000s uncovered a dichotomy in reasoning between the Eastern and Western worlds: the Eastern system of thought was found to be comparatively more holistic, while the Western was more analytic. This was best seen in a study of categorization between two samples, Chinese participants and European American participants. Through tasks that asked subjects to choose sets of words based on greatest similarity, the authors could learn about the subjects’ thought processes. What they discovered was that the European Americans were more likely to rely on rules for their categories, rather than on function or contextual relationship, as was typical for the Chinese subjects. For instance, of the three words mon-
key-banana-panda, a European American subject would be more likely to categorize monkey and panda together as animals, while a Chinese subject would instead group monkey and banana because “monkeys eat bananas” [8]. Again, in these subtle but distinct differences in cognitive processes, we see hints of variation between East and West in simple categorization alone, suggesting that whatever dissonance exists between the two groups may exist to an even greater extent in more complex behavioral domains. But perhaps the subpopulation of the world that should be scrutinized most closely is American undergraduates, the behavioral research participants du jour. While differences in social psychology between American undergraduates and non-WEIRD populations might seem obvious by now, several studies point to the peculiarity of American college students in comparison with the rest of America. Evidence indicates that these individuals are more individualistic, less likely to conform, and more isolated from offline social networks than their non-college counterparts, rendering them distinct even from their fellow Americans [9, 10, 11, 12]. Can the conclusions of many behavior and cognition studies be expected to hold for individuals across the globe when they may not even prove true for individuals across the street?
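To make the categorization contrast above concrete, here is a toy sketch of the two strategies. The triad and the lookup tables are illustrative inventions, not the study’s actual stimuli or scoring.

# Toy illustration of the two grouping strategies described above.
taxonomic = {"monkey": "animal", "panda": "animal", "banana": "fruit"}
relations = {("monkey", "banana"): "monkeys eat bananas"}

def group_by_rule(words):
    # Analytic strategy: pair words that share a taxonomic category.
    return [(a, b) for i, a in enumerate(words) for b in words[i + 1:]
            if taxonomic[a] == taxonomic[b]]

def group_by_relation(words):
    # Holistic strategy: pair words linked by a functional relationship.
    return [pair for pair in relations if set(pair) <= set(words)]

words = ["monkey", "banana", "panda"]
print(group_by_rule(words))      # [('monkey', 'panda')]  -> "both are animals"
print(group_by_relation(words))  # [('monkey', 'banana')] -> "monkeys eat bananas"

A rule-based grouper lands on monkey and panda (“both are animals”), while a relation-based grouper lands on monkey and banana (“monkeys eat bananas”), mirroring the tendencies reported for the two groups.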
A New Normal
While concretely defining the differences between these societies is difficult in itself, combatting such pervasive sampling homogeneity in the literature is likely to be even harder. As Henrich, Heine, and Norenzayan suggest in their 2010 review, the responsibility to diversify participant pools falls heavily on the editors and reviewers of journals as well as on the authors them-
selves. Editors and reviewers of journals should press authors to be explicit about how far their claims can feasibly be generalized and to be specific in describing their subject samples. This would give readers an unequivocal understanding of the limitations of the conclusions and would likely bring sweeping generalizations about
mankind to an end. Moreover, in the pre-publication phase, journals should provide incentives for authors who actively study diverse participant pools. With improving technology and greater access to the Internet, incorporating individuals from countries
other than the West is increasingly feasible (although this does limit participation to those with Internet access). Future research may even turn to crowdsourcing via online services such as Mechanical Turk, a website where individuals across the globe can sign up to complete tasks and answer questionnaires created by companies and scientists alike. In fact, some social psychology studies have already been repeated using this website in particular; notably, some found significant variations from the data gathered in the original research [13]. However, it stands to reason that as much as one can press major journals and research institutions to police their publications for such sample homogeneity, we may find a faster and
more efficient solution through increased vigilance on the part of the audience. If readers became alert to the persistent and widespread use of the same narrow subset of the population across studies, the demand for more inclusive science would grow exponentially. Such scrutiny would not only improve research in the behavioral sciences but could also lead to the discovery and revision of similarly homogeneous sampling in other sciences. Yet the road ahead, though clear, is not without obstacles. Addressing the homogeneity is likely the first and easiest step: not to invalidate studies of specific WEIRD populations, but to put them into context and to recognize that the ways they differ from other non-WEIRD populations
may stem from the influential forces of their unique environments. We could even begin to elucidate how culture affects human cognition, opening an expanse of research within the field. The inclusion of more participants from non-WEIRD countries would be the subsequent step, which may spell bad news for college students seeking extra credit in their psychology classes. Ultimately, there are difficult and likely expensive obstacles to overcome, but awareness of such homogeneity has already sparked discourse and change. There is work to be done in the pursuit of ever improving the integrity of our research, but we are motivated by an unyielding adherence to what we all know well to be true: that while human psychology and behavior can be many things (complex, confusing, and downright strange), in no way is it WEIRD.

[1] Arnett JJ. The neglected 95%: Why American psychology needs to become less American. American Psychologist. 2008; 63(7): 602–14.
[2] May RM. The Scientific Wealth of Nations. Science. 1997 Feb 7; 275(5301): 793–6.
[3] Henrich J, Heine SJ, Norenzayan A. The weirdest people in the world? Behavioral and Brain Sciences. 2010 Jun; 33(2-3): 61–83.
[4] Segall MH, Campbell DT, Herskovits MJ. The influence of culture on visual perception. Indianapolis: Bobbs-Merrill; 1966 [cited 2015 Feb 27]. Available from: http://198.24.168.220/Collection/Net/allanmc/web/socialperception14.pdf
[5] Pollack RH. Mueller-Lyer Illusion: Effect of Age, Lightness Contrast, and Hue. Science. 1970; 170(3953): 93–95. http://doi.org/10.1126/science.170.3953.93
[6] Stewart VM. Tests of the “Carpentered World” Hypothesis by Race and Environment in America and Zambia. International Journal of Psychology. 1973; 8(2): 83–94. http://doi.org/10.1080/00207597308247065
[7] Henrich J, Boyd R, Bowles S, Camerer CF, Fehr E, Gintis H, McElreath R, Alvard M, Barr A, Ensminger J, Henrich NS, Hill K, Gil-White F, Gurven M, Marlowe FW, Patton JQ, Tracer D. “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences. 2005; 28(6): 795–815; discussion 815–55.
[8] Ji L-J, Zhang Z, Nisbett RE. Is It Culture or Is It Language? Examination of Language Effects in Cross-Cultural Research on Categorization. Journal of Personality and Social Psychology. 2004; 87(1): 57–65.
[9] Kusserow AS. De-homogenizing American individualism: Socializing hard and soft individualism in Manhattan and Queens. Ethos. 1999; 27: 210–34.
[10] Snibbe AC, Markus HR. You can’t always get what you want: Social class, agency, and choice. Journal of Personality and Social Psychology. 2005; 88: 703–20.
[11] Stephens NM, Markus HR, Townsend SSM. Choice as an act of meaning: The case of social class. Journal of Personality and Social Psychology. 2007; 93: 814–30.
[12] Lamont M. The dignity of working men. Russell Sage Foundation; 2000.
[13] The roar of the crowd; Experimental psychology. The Economist. 2012 May 26; 403(8786): 77–9.
student art spotlight
Galaxy Collisions // Michelle Miller ‘18
DID YOU LIKE WHAT YOU READ?
find us online:
www.browntth.com
like us on facebook:
www.facebook.com/BrownTripleHelix
check back for more:
Our next issue will be coming out soon!
find our blog:
ursa.browntth.com
brown@thetriplehelix.org