Columbia Science Review Vol. 14, Issue 2: Spring 2018
Cocktail Science

Eating Ink

About 50% of those tattooed in the US eventually seek removal of their body ink, and some 20% regret their tattoos within a year of inking. Before the advent of laser technology, most tattoo removal techniques involved abrading the top layers of skin with acid or salt scrubs, or complete surgical excision followed by skin grafts. Traditional laser tattoo removal is ineffective, costly, and painful. Procedures can cost upwards of $10,000, since multiple treatments are needed to remove most of the ink, and patients can endure scarring, severe blistering, bruising, hyperpigmentation, and even delayed anaphylaxis. The laser emits an intense beam of light that degrades the tattoo pigments; the word "laser" originated as an acronym for "light amplification by stimulated emission of radiation." A coherent beam is focused on the removal site, with specific wavelengths chosen for differently colored inks. The light breaks the pigmented regions into fragments, which are then targeted by the body's immune system. The cells responsible for this immune response are macrophages, the body's supreme "big eaters," which engulf and digest debris, microbes, cancerous cells, and other foreign bodies through a process called phagocytosis. Recent studies in mice suggest that tattoo pigments persist in the skin precisely because of macrophage action: the cells store the ink pigments and remain stationary until they die, at which point they are replaced by other macrophages. On their own, the pigments are too large to drain away from the tattoo site through the lymphatic vessels. These studies also suggest ways to make laser tattoo removal more effective. By delivering genetic instructions, it may be possible to temporarily block macrophage uptake of the laser-fragmented pigments, giving the fragments enough time to evacuate the removal
site; however, inhibiting macrophage function could lead to improper wound healing. Thus, these techniques must be further investigated before they can be applied in any procedure. Still, the potential involvement of immune cells in tattoo removal is a promising discovery worth exploring in the pursuit of artless skin. -- Alice Sardarian
“Skype on Wheels”
Before the twentieth century, couples torn apart by distance pined over letters; the 1930s brought the era of anxious lovers spinning the rotary dial; and couples in the 1990s rationed out their precious minutes on flip phones, scrimping on text messages full of bad acronyms. Nowadays, long-distance lovers are spoiled for choice, with an arsenal of apps and software at their disposal. But what if constant Facebook messaging and nightly Skype calls just aren't enough? Introducing the telepresence robot. Otherwise known as "Skype on wheels," this gangly, Segway-esque robot can be maneuvered by a remote user, whose face appears on a tablet attached at eye level. Though it was originally intended to facilitate operations at businesses and hospitals, researchers at Simon Fraser University saw promise in the robot's ability to nurture long-distance relationships. Long-distance couples who were given telepresence robots reported an enhanced "natural pattern of communication," citing the ability to control the robot's movement as a form of "body language" that helped facilitate conflict resolution. Moreover, couples
appreciated the newfound "unpredictability" that came with not knowing how the partner would move the robot. Most importantly, couples reported that the robot allowed them to integrate more closely with their partner's life: one partner could perform chores or cook with both hands while continuing to communicate, rather than having to pause their tasks. The concept is indeed intriguing. But with the possibility of robots knocking themselves over, poor camera resolution, hefty price tags, and even the off-putting appearance itself, the idea of a robotic significant-other stand-in hasn't quite emerged from science fiction. Until then, it looks like long-distance lovers are better off sticking to nightly Skype calls and the occasional Snapchat. -- Jane Pan
The AI Chemist
Every student who has taken organic chemistry painfully remembers memorizing countless synthesis reactions for dreaded exam questions. After all, the only reliable way to devise a route to a given product molecule is to mix and match memorized aromatic, substitution, and pericyclic reactions. Today's synthetic chemists face the same issue on a far larger scale: they must repeatedly scour databases of recorded reactions and rely on their training to produce a synthesis method. Naturally, this strategy is inefficient and leads to disagreements among chemists. A German research group led by Marwin Segler recently published a paper in Nature describing its development of an artificial-intelligence-driven program that devises synthesis routes for any target molecule. Using deep neural networks, the program taps into its database of 12.4 million known single-step organic chemistry reactions to find the most efficient production pathway. Notably, the program trains itself without any human input, through a process called unsupervised learning. Compared to previous programs that relied on the manual input of numerous reaction principles, Segler's software can self-improve at a far greater pace. In a blind study, Segler showed 45 chemists from institutes in Germany and China two synthesis pathways for each of nine molecules: for each molecule, he presented the pathway devised by the program and another drawn up by humans. The surveyed chemists showed no preference between them, demonstrating that the program can generate reaction steps as well as trained chemists do. Segler says the tool has garnered the interest of several pharmaceutical companies, and he hopes it will accelerate molecule discovery in synthetic chemistry. As much of a boon as this program will be to chemists around the world, it might also be a savior to all the despondent students of organic chemistry. -- Young Joon Kim
Nails: Humanity's Loyal Companions

Fingernails are the body's confidantes. Each nail has a unique pattern of contours in the nail plate, the hard surface made of alpha-keratin protein. Like fingerprints, our fingernails are unique to us, and in 2015, researchers proposed a recognition system that takes advantage of these features. Instead of faces and fingerprints, which are permanent biometrics that can be misused, companies could use the distinct patterning of our fingernails as an alternative, transient biometric. Fingernails reveal more than who we are; they reveal our innermost secrets. Nail abnormalities are often signs of systemic disease: deep grooves in the nail plate, also known as Beau's lines, can indicate pulmonary disease, while pigmented bands in nails suggest autoimmune disease. Nail shape, curvature, color, thickness, and attachment are all closely tied to an individual's overall health. Indeed, doctors often examine a patient's fingernails to nail the diagnosis.

Fingernails may also have originated as more than just confidantes for one's health. Anthropologists contend that fingernails evolved from claws, as seen in other primates; we used to fight tooth and nail! When human ancestors migrated out of the trees and eventually developed stone tools, claws were no longer needed for climbing or grabbing, and they flattened into the tiny stubs they are now. Nails persisted to protect and support our fingertips, and they also served as markers for selecting healthy mates. Nails are important indicators of our identity and health, and they remind us of the tree-climbing and fur-grooming abilities of our ancestors. They protect our fingertips, are useful for scratching, and can be stress-relief outlets (but beware: biting your fingernails is a great way for pathogens to enter your body!). Healthy nails are desirable for all, and if there is one thing to nail down, it is that our nails deserve a little more appreciation for being loyal companions to humanity, in sickness and in health. -- Sirena Khanna
Letters From the EIC & President While procrastinating on Wikipedia several weeks ago, I came across a phenomenon known as “Poe’s Law.” In short, the law states that, unless a writer utilizes some clear indicator of humor, it is impossible for a reader to determine if a written work is serious or a parody. While the law was introduced in the context of Creationism, it reminded me of a trend I was seeing both in person and on the Internet: an increased belief in a flat Earth. While I was initially convinced that the support I saw for a flat Earth was a gag, my conviction changed when I watched a feature-length documentary supporting a flat Earth, made by people calling themselves “flat Earthers.” I asked myself several questions. Why would someone disregard physical evidence indicating that the Earth is round? How could someone explain scientific evidence to a flat Earther to show them that their view is flawed? The latter question resonated with me. I wanted to understand how people discussed science in the context of a flat Earth, and see if I could take away anything applicable to the Columbia Science Review. To research this question, I decided to read several debates between flat Earthers and people who support the spherical model of the Earth. I noticed several approaches that supporters of the traditional view of Earth used to debate with flat Earthers. Some spherical model supporters simply scoffed at the opposing view, saying it wasn’t worth the time to discuss such pointless arguments. Others attempted to use complicated mathematics to demonstrate the necessity of a round Earth. The only successful approach I saw in convincing a flat Earther that the Earth was a sphere was when someone took the time to simplify complicated scientific results into simple, yet accurate text. In essence, these debaters did what we do at Columbia Science Review: make science accessible to all who are willing to read. 
Here at Columbia Science Review, we recognize that it is necessary to describe complicated scientific results in understandable terms. Moreover, no topic is unworthy of discussion. Since different people have different scientific backgrounds, it is probable that something that is obvious for one person may not be clear to another. Throughout this issue, we discuss a variety of subjects that are as contentious as the idea of a flat Earth. For example, we discuss controversial topics such as physician assisted suicide and the use of dietary supplements. We hope you, the reader, are inspired by these articles to spread your own scientific ideas in accurate, yet accessible, terms.
Science is a story, despite what anyone says. Unlike most stories, there are strict rules about adding to the script. The narrative is open-sourced, with plenty of critics and deceivers. There are general guidelines for the many authors, and the characters are often of little to no importance. The many lessons are hidden in the fine print of the pages, even though readers tend to focus on the prominent chapters. Bertrand Russell defined philosophy as the space between science and faith. But I believe that science is a philosophy. Intellectuals (and pseudo-intellectuals) praise science as the bastion of facts and the keeper of truth. Scientific word is law that governs the universe. And in many ways this can seem true. But then there is no room for interpretation, and the stories with only one reading are the most boring. Those who cannot understand will simply reject. And suddenly science is the hand-waving of the elites, of the powerful, and not the philosophy that it was meant to be. Science has been, and always will be, the story that comes from questioning the world around us. It is the tool through which we have best been able to deconstruct the vast complexity around us, from the tiny "little animals" that build our own bodies to the shimmering islands of dust that fill the night sky. We are all made of these little animals, and we have all enjoyed the night sky. Science, therefore, must be a story for everyone. It is the shared understanding of our failures that leads to our greatest achievements, and to contribute to such a story is perhaps the most important thing one can do. I hope that the Columbia Science Review can help share this story with someone who hasn't had the chance to listen. We do this in as many ways as we can: in our events, outings, outreach, and publication, we hope to inspire discussion about science.
And yes, we have marched in Washington D.C., we have hosted Nobel laureate speakers, and we have visited some of the finest research facilities in the world. But what I am most proud of are the discussions we’ve had, of the perspectives we’ve gathered, and the looks of curiosity and wonder that we’ve inspired. I hope that this is the focus of our organization moving forward as we continue on the exciting, important mission we’ve set out on. Best, Aunoy Poddar, President
Best, Justin Whitehouse, Editor in Chief
Editorial Board The Editorial Board biannually publishes the Columbia Science Review, a peer-reviewed science publication featuring articles written by Columbia students.
Editor-in-Chief Justin Whitehouse
Chief Content Editor Young Joon Kim
Chief Content Reviewer Nikita Israni
Blog Content Manager Yameng Zhang
Chief Illustrator Jennifer Fan
Editors Kelly Butler Serena Cheng Lalita Devadas Sarah Ho Enoch Jiang Briley Lewis Timshawn Luh Heather Macomber Cheryl Pan Jane Pan Alice Sardarian Emily Sun Naazanene Vatan Tina Watson Adrienne Zhang Joyce Zhou
Reviewers Benjamin Greenfield Jessy (Xinyi) Han I-Ji Jung Bryan Kim Mona Liu Prateek Sahni Bilal Shaikh Kamessi (Jiayu) Zhao
Blog Columnists Sophia Ahmed Gitika Bose Sean Harris Tanvi Hisaria Kanishk Karan Audrey Lee Maria MacArdle Sonia Mahajan Shasta Ramachandran Mariel Sander Kayla Schiffer Manasi Sharma Sean Wang Kendra Zhong
Illustrators Christopher Coyne Cecile Marie Farriaux Sirena Khanna Yuxuan Mei Kyosuke Mitsuishi Natalie Seyegh Stefani Shoreibah Eliana Whitehouse
Layout Editor Tiffany Li
Layout Designers Amanda Klestzick Vivienne Li Alice Styczen Katie Long Joyce Zhou
Spread Science Director Michelle Vancura
Spread Science Team Makena Binker Cosen Benjamin Ezra Kepecs Alex Maddon Kshithija KJ Mulam Coco (Kejia) Ruan Janine Sempel Stephanie Zhu
Administrative Board The Executive Board represents the Columbia Science Review as an ABC-recognized Category B student organization at Columbia University.
Noah Goss, President Aunoy Poddar, Vice President Ayesha Chhugani, PR Chair Keefe Mitman, Treasurer Marcel Dupont, Secretary Harsimran Bath, Lead Web Developer Cindy Le, Lead Web Developer
Maham Karatela, MCM Chase Manze, MCM Lillian Wang, MCM Urvi Awasthi, OCM Sophie Bair, OCM Aziah Scott Hawkins, OCM Amir Lankarani, OCM
Alana Masciana, OCM Anu Mathur, OCM Jason Mohabir, OCM Kush Shah, OCM Abhishek Shah, OCM Winni Yang, OCM Catherine Zhang, OCM
The Columbia Science Review strives to increase knowledge and awareness of science and technology within the Columbia community by presenting engaging and informative articles, in forms such as: • Reviews of contemporary issues in science • Faculty profiles and interviews • Editorials and opinion pieces
Sources for this issue can be found online at www.columbiasciencereview.com.
Contact us at csr.spreadscience@gmail.com.
Visit our blog at www.columbiasciencereview.com.
"Like" us on Facebook at www.facebook.com/columbiasciencereview to receive blog updates, event alerts, and more.
Contents

Cocktail Science
Overdosing on Dietary Supplements
How Lucky Are We?
Is Physician-Assisted Suicide Ethical?
Neandertals "R" Us?
Overdosing on Dietary Supplements
Alice Sardarian Illustration By Sirena Khanna
In 1994, the government granted the dietary supplement industry the ability to flood the market with vitamins, minerals, and other derived or synthetically generated substances. Companies were freed from the scrutiny of the Food and Drug Administration (FDA) and were only expected to sell unadulterated and properly labeled products [1]. These circumstances were the result of extensive lobbying for the Dietary Supplement Health and Education Act (DSHEA), which permitted the sale of supplements to millions of customers without thorough risk and efficacy assessment or proper dosage guidelines. In support of the DSHEA, Paul Boler, Vice President of Pharmavite LLC (the parent company of the popular Nature Made brand), argued that "the industry was under attack from the FDA, which was seeking to impose unnecessary restrictions on nutrient dosages, claims and ingredients" [2]. According to recent studies and analyses, such restrictions may actually be necessary for consumers' health and well-being. The $37 billion [3] dietary supplement industry is fueled by 68% [4] of Americans, most of whom are satisfied with the products. Due to perceived positive experiences, compounded by relentless advertising campaigns, many consumers are unaware that they may be overdosing on supplements, exposing themselves to more harm than good.
In 2014, the top ten leading supplement brands collectively spent about $260 million on advertising [5]. By appealing to a vast population of consumers, the brands seemingly provide viable strategies to combat colds and flus, the effects of aging, and a lack of energy in an overworked and badly nourished American population. While the list of maladies seems infinite, supplements claim to have the right remedies for them all. Moreover, most brands' claims are founded on limited
studies funded by the industry itself. Consequently, these studies offer "supportive, but not conclusive research," according to Schiff Vitamins, the parent company of the brands MegaRed, Digestive Advantage, Airborne, and Move Free [6]. We must evaluate the dangerous perception that supplements are benign additives. Consumers attentive to the nutrition labels of most supplement products will recognize the undeniable overdosage of vitamins and minerals in each serving. Schiff Vitamins and other companies provide considerably higher doses than the U.S. Recommended Dietary Allowance. Vitamins B6 and B12, for example, which are marketed to improve energy levels, can be consumed at over 20 times the recommended 1.7 milligrams per day of vitamin B6 and 2.4 micrograms per day of vitamin B12 [7]. More concerning still, most supplement retailers are likely to offer much higher doses; some inflated dosages include "daily 100-milligram B6 pills [and] B12 … in doses of 5,000 micrograms" [8]. Supplement overdose matters because an influx of certain vitamins and minerals can interfere with bodily processes and lead to diseases like cancer.

Of course, vitamins are beneficial to those who are nutrient deficient. Vitamin B deficiency, specifically, afflicts about 10.5% of the population, who may need additional vitamin B to prevent anemia, weakness, and diminished immunity [9]. However, about 51% of Americans take multivitamins that contain B vitamins [10], meaning a significantly higher percentage of the American population takes B vitamins without being nutrient deficient. Indeed, if an individual is not nutrient deficient, taking supplements may be unnecessary and potentially harmful. The Journal of Clinical Oncology reported the results of a study on the long-term use of vitamin B supplements and its correlation with lung cancer. Ten years of vitamin B6 and B12 consumption, at common multivitamin dosages, was linked to "an almost two-fold increase in lung cancer risk among men in the highest categories of vitamin B6 … and B12 … compared with non users" [11]. Doses greater than 20 milligrams per day of vitamin B6 and greater than 55 micrograms per day of B12 were considered the highest dosages. Furthermore, two researchers at the Fred Hutchinson Cancer Research Center in Seattle demonstrated that the risk increased substantially for men who were also smokers: the highest category of vitamin B12 intake quadrupled their cancer risk [12]. It is likely that, when combined with carcinogen exposure, excessive vitamin intake causes or contributes to damage of fundamental cellular functions.

Research on B vitamins and their link to cancer is still being analyzed, though the primary hypothesis remains that these vitamins are integral parts of "a metabolic pathway that breaks down folate" [13]. This pathway provides the bases that make up DNA and is thus involved in gene expression. Faults within the pathway caused by overactive B vitamin processing can produce mutations at the DNA level that prompt cancerous growths. The risks and negative side effects of supplements are not at all limited to B vitamins. Most melatonin and vitamin C products provide high, ineffective doses to consumers. Professor Richard Wurtman, director of MIT's Clinical Research Center, found that 0.3 milligrams of melatonin was a sufficient dose to promote rest [14]. Most supplements, however, provide ten times that amount, undermining the melatonin's effectiveness. Melatonin overdoses have been associated with hypothermia, as the hormone affects peripheral blood vessels and interacts with thermoregulation receptors in the brain; it can also disrupt menstrual cycles and puberty in adolescents [15]. Adults are advised to consume between 75 and 90 milligrams of vitamin C per day [16]; however, supplement brands like Emergen-C provide as much as 1,000 milligrams per dose. The fine print on Emergen-C products warns against consuming over 2,000 milligrams of vitamin C [17], and an overdose of this vitamin can cause nausea, insomnia, vomiting, and even kidney stones, among other symptoms [18]. Ultimately, supplements often provide vitamins and minerals in excess, which can cause disease and harmful side effects in unsuspecting consumers. Since a substantial percentage of the U.S. population relies on supplements, it is important to conduct extensive and thorough research on regulating their dosage.
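To put those dose figures in perspective, here is a minimal back-of-the-envelope sketch in Python. The RDA values and retail doses are simply the numbers quoted in this article, not an authoritative nutritional reference:

```python
# Fold-excess of common retail supplement doses over the recommended
# daily allowances (RDAs) quoted in this article.
rda = {
    "vitamin B6 (mg)":  1.7,   # recommended per day
    "vitamin B12 (ug)": 2.4,
    "vitamin C (mg)":   90,    # upper end of the 75-90 mg range
}
retail_dose = {
    "vitamin B6 (mg)":  100,   # "daily 100-milligram B6 pills"
    "vitamin B12 (ug)": 5000,  # "doses of 5,000 micrograms"
    "vitamin C (mg)":   1000,  # a single Emergen-C serving
}

for nutrient, allowance in rda.items():
    fold = retail_dose[nutrient] / allowance
    print(f"{nutrient}: {fold:.0f}x the RDA")
```

Even the mildest case here is an order of magnitude above the recommended intake, and the B12 dose exceeds it more than two-thousand-fold.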
Industry or regulatory entities should also treat supplement potency like that of prescription medications, rather than labeling supplements as benign and "necessary" additions to our diet. Furthermore, it may be helpful to encourage healthy nutritional eating habits in an effort to reduce supplement use; eating a
variety of nutrient-rich foods such as fish, eggs, mushrooms, nuts, and leafy greens can easily address a lack of B vitamins [19]. There has been a huge push from proponents of the fresh produce and healthy eating movements to promote better nutritional habits, but it has been an uphill battle against the powerful and lucrative supplement industry. Additional research, such as the work on B vitamins and cancer, will further our understanding of supplement risks and lead to well-informed consumers who understand what they are ingesting and whether these supplements are worth the risk.
References:
1: "Dietary Supplements". FDA, 2018, https://www.fda.gov/food/dietarysupplements/.
2: Runestad, Todd. "How DSHEA Changed Lives, From Those Who Fought For And Won Its Passage". New Hope Network, 2014, http://www.newhope.com/regulatory/howdsheachanged-lives-those-who-fought-and-won-its-passage.
3: "Multivitamin/Mineral Supplements". NIH ODS, 2015, https://ods.od.nih.gov/factsheets/MVMS-HealthProfessional/.
4: "CRN 2015 Consumer Survey On Dietary Supplements". CRN USA, 2015, http://www.crnusa.org/CRNconsumersurvey/2015/.
5: "U.S. Leading Vitamin Brands By Ad Spend 2014". Statista, https://www.statista.com/statistics/452459/leadingvitamins-ad-spend-usa/.
6: "Health & Wellness Blog". Schiff Vitamins, https://www.schiffvitamins.com/news/article/how-megared-works.
7, 8, 12, 13: Hamblin, James. "Vitamin B6 And B12 Supplements Appear To Cause Cancer In Men". The Atlantic, 2017, https://www.theatlantic.com/health/archive/2017/08/b12-energy/537654/.
9: "American Nutrient Gap". Medical Economics, http://medicaleconomics.modernmedicine.com/medical-economics/news/american-nutrient-gap-and-how-vitamin-and-mineral-supplements-can-help-fill-it.
10: "Supplement Use Among Younger Adult Generations Contributes To Boost In Overall Usage In 2016". CRN USA, 2016.
11: Brasky, Theodore M. et al. "Long-Term, Supplemental, One-Carbon Metabolism-Related Vitamin B Use In Relation To Lung Cancer Risk In The Vitamins And Lifestyle (VITAL) Cohort". Journal of Clinical Oncology, vol. 35, no. 30, 2017, pp. 3440-3448. doi:10.1200/jco.2017.72.7735.
14: Thomson, Elizabeth A. "Rest Easy: MIT Study Confirms Melatonin's Value As Sleep Aid". MIT News, 2005, http://news.mit.edu/2005/melatonin.
15: Van Winkle. "The Dark Side Of Melatonin". HuffPost, 2015, https://www.huffingtonpost.com/van-winkles/the-darkside-of-melatoni_b_8855998.html.
16: "Office Of Dietary Supplements - Vitamin C". NIH ODS, 2011, https://ods.od.nih.gov/factsheets/VitaminC-Consumer.
17: "Products". Emergen-C, https://www.emergenc.com/products/everyday.
18: Zeratsky. "How much vitamin C is too much?". Mayo Clinic, 2018, https://www.mayoclinic.org/healthy-lifestyle/nutrition-and-healthy-eating/expert-answers/vitamin-c/faq20058030.
19: McDermott, Nicole. "The Benefits Of Vitamin B Complex". Life, 2017, http://dailyburn.com/life/health/benefitsvitamin-b-complex/.
HOW LUCKY ARE WE?
Exploring the luck behind our past, present, and future
Sean Harris | Illustration by Sirena Khanna
Arthur C. Clarke once said, "We are either alone in the universe or we are not. Both are equally terrifying" [1]. Such terror often plagues discussion of the possibility of alien life, whether due to our fear of the unknown, the spooky nature of things fundamentally different from us, or the human aversion to the great expanse of darkness that lies above and below. Everyone seems to understand that solving this mystery would be exciting, though perhaps deeply distressing. But beyond the awe and public hysteria, discovering even a single alien organism would have one very tangible consequence: it would reveal the likely future of humanity. Not in some abstract, philosophical sense, but literally. Consider, for example, if humanity's tentative missions to Mars or other nearby planets were to reveal evidence of something as simple as multicellular life. Then, while the rest of the world celebrated, many scientists
would immediately know that humanity would likely go extinct in the next few hundred years. To explain this seemingly non-sequitur conclusion, we need to take everything back a few steps. Say you are someone who has no idea how poker works, and you are playing a game of Texas Hold'em. The dealer gives you cards, and you end up with an ace, king, queen, jack, and a ten: a royal flush. That being your first hand ever, and with no reference to understand your position, you would likely consider this a common sequence, never knowing the chances of that happening are about 1 in 650,000 [2]. Similarly, you could be given a measly pair and justifiably guess that such a combination has never occurred in recorded history. We can look back at human history as a series of events, just like a sequence of cards, without knowing how probable or improbable they were. How likely was it for Kennedy and Khrushchev to avoid nuclear war in 1963? What were the chances of life developing mitochondria, agriculture, or the modern computer? One can open a history textbook and know all these moments occurred, but there is no probability value attached to each event. We have been given a single hand of life on Earth, and from it we are trying to figure out whether such an existence is as common as a pair or as rare as a royal flush. A scientific individual with the famed Copernican Principle in mind would suggest that our history is likely an average one. After all, we are not the center of the universe, and what happens to us should roughly happen everywhere else as well. That is fine, until you realize that if what happened on Earth happened everywhere, then we would be living in a Star Trekian cosmic zoo of extraterrestrials (who would likely be more convincingly alien than Roddenberry's dubious creations). Yet our telescopes and satellites pointed up are silent. Since 1984, organized government programs have scanned the skies, but not a single photon has been detected that indicates anything other than lifeless convulsions of plasma and rock. If the Enlightenment killed the ancient gods, then modern science, too, seems to have expunged much of the hope of finding life that transcends Earth. And yet a new take on the prototypical alien abduction, encounter, or invasion movie comes out every year. The idea seems to be stuck in our heads, at least since the advent of sci-fi in the 1950s, or even since the second century, when the Greek satirist Lucian, albeit comedically, explored the idea of visiting the moon and sun to meet warring tribes [3]. The possibility of aliens seems too alluring to ignore.

Where Is Everyone?

Such was the topic of a casual lunchtime discussion among a few brilliant scientists working at Los Alamos National Laboratory in 1950. One of them was Enrico Fermi, who won the Nobel Prize and has an entire class of particles named after him [4]. He asked his colleagues, "Where are they?" To those who haven't dedicated their lives to studying the universe, Fermi's question is surprising in that he doesn't ask if aliens exist; he assumes instead that they do and that they should be somewhere. The difference is that Fermi knew the odds. Consider an average galaxy containing 200 billion stars, with each star having an average of two planets, each planet having a 1% chance of being capable of supporting life, and 1% of those potential planets evolving life over their roughly five-billion-year existence. Of the planets that possess life, suppose only 1% develop intelligence and technology. These statistics make life seem incredibly rare, but a galaxy of this composition would still host 400,000 advanced civilizations over the course of its existence. Cutting each of those 1% odds down to 0.1%, a tenth of one percent, still gives you 400 alien races.
While these numbers are obviously estimates, the math has been somewhat formalized by the so-called Drake Equation, which takes into account our most up-to-date scientific models of the universe. For example, recent observations with the Kepler telescope revealed that up to 22% of stars have Earth-like planets that orbit in the so-called "Goldilocks zone," the range of distances at which a planet can have liquid water and thus support life [5]. As a side note, our assumption that life requires liquid water may be biased by our own biology. However, water does have special properties that make it crucial for any organism we could imagine: Sushil Atreya, who studies planet formation at the University of Michigan in Ann Arbor, explains in a Scientific American article that water "acts as a solvent" and, vitally, as both a "medium and as a catalyst for … proteins" [6]. Along with the prevalence of planets similar to our own, we can look to our own history to better understand the possible timelines of alien species. Consider that the Earth formed roughly 4.5 billion years ago (bya) and that the oceans formed between 4.2 and 3.8 bya [7]. The earliest remnants of life that we have found are dated conservatively to 3.5 bya, and new analysis of
ancient microfossils in Canada suggests that our original ancestors could have formed as early as 4.29 bya, which means life could have been evolving as soon as the oceans were settled.8 “So what?” you might ask. This analysis is important because if archaic bacteria sprang into existence as soon as conditions were ripe, the origin of life was probably not an unlikely event. If, however, it had taken billions and billions of years, we would know that it was a rarer occurrence. So Earth-like planets are in abundance, and life seems to have had no problem developing here. Finally, the nature of life and the time scale of its development suggest that once intelligent life evolves, it will eventually develop technology and spread into space. Our own first civilizations developed in Mesopotamia around 3000 BCE, the cutting-edge technology of the time being agriculture and stone tools. Only 5,000 years later, the same species would be launching nuclear-powered vehicles into geostationary orbit. Five millennia seems long to us, but it is nothing on a geological, or even biological, timeframe. Our species split from chimpanzees about 6 million years ago,9 and sharks have been around for 450 million years.10 Furthermore, the universe is 13.8 billion years old. From a cosmic perspective, humanity showed up and immediately developed advanced technology: the timescales involved are of totally different orders of magnitude. Our development of consciousness
Of those planets that possess life, only 1% have life that develops intelligence and technology. These statistics make life seem incredibly rare, but a galaxy of this composition would have 400,000 advanced civilizations over the course of its existence. Cutting each of those 1% odds down to 0.1%, a tenth of one percent, still gives you 400 alien races.
and technology essentially occurred in the same cosmic moment. What this means is that we are unlikely to find any Iron Age or industrial-era aliens out there. Considering how fast those periods flew by for us, anyone we find will either be totally undeveloped or completely developed, with computers, space flight, advanced quantum mechanics, and general relativity (or beyond). All those incredible advances took us only 0.000036% of the age of the universe—imagine what explosive growth could occur in another measly fraction of a percent.
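Both back-of-envelope figures in this section are easy to reproduce. In the Python sketch below, the 22% Goldilocks fraction and the 13.8-billion-year age of the universe come from the article; the star count is a rough consensus value, and the per-planet odds of life and intelligence are purely illustrative placeholders, not values from any study cited here:

```python
# (1) Drake-style head count: multiply a chain of fractions down from
# the number of stars in the galaxy. Only the 22% Goldilocks figure is
# from the Kepler result cited above; the rest are assumed placeholders.
n_stars = 400e9          # rough star count for the Milky Way
f_goldilocks = 0.22      # stars with an Earth-like planet (Kepler)
f_life = 0.01            # assumed: habitable planets where life arises
f_intelligent = 0.01     # assumed: life that evolves intelligence

civilizations = n_stars * f_goldilocks * f_life * f_intelligent
print(f"{civilizations:,.0f} intelligent species")

# (2) The timescale claim: 5,000 years of civilization against a
# 13.8-billion-year-old universe.
fraction = 5_000 / 13.8e9
print(f"{fraction * 100:.6f}% of cosmic history")
```

With these placeholder odds the product is about 8.8 million species, and shrinking either assumed 1% tenfold scales the count down tenfold, which is the sensitivity the sidebar's 400,000-versus-400 comparison illustrates.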
The Filter.
What we know for sure is that there is some unaccounted-for variable keeping life from evolving and spreading through the universe. This missing piece of the puzzle is called the “Great Filter.”11 As Robin Hanson, the economist who coined the term, points out, there are nine milestones or conditions that must be met in an organism’s developmental history in order for it to reach the space age: a suitable star system, the transition from static to reproductive molecules (e.g., RNA), simple (prokaryotic) single-cell life, the transition to complex (eukaryotic) single-cell life, sexual reproduction, multi-cell life, tool-using animals with big brains, a technological explosion (where we are now), and finally off-world colonization. The question is which of these steps is the filter that prevents life from spreading into the cosmos. There are many possible scenarios: perhaps the near-impossible step is the transition from inert chemicals to perpetual, self-reproducing protein structures. Maybe single-cell life coming together to form more complex multicellular organisms is the filter—we don’t know. But the answer to this question will tell us something profound about human existence: it will tell us how lucky we are, the odds of getting our hand of cards. If the filter is the generation of reproductive chemicals, we have already passed that step and can be reasonably sure it is mostly smooth sailing from here on out. However, if the filter is a later phase of development that we have yet to reach, we will likely not make it. Such a hypothetical step in evolution has seemingly prevented all life from surviving anywhere else in the universe, and there’s no reason to believe we would fare differently.
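Hanson's nine-step framing is, at bottom, a product of probabilities, which makes the filter logic easy to see numerically. In the sketch below every probability is an invented placeholder; the point is only that a single near-zero step dominates the product no matter how easy the other steps are:

```python
# The Great Filter as arithmetic: the probability of a biosphere reaching
# the space age is the product of its per-step probabilities, so one
# near-impossible step dominates regardless of how easy the rest are.
# Every number below is an invented placeholder, not a measured value.

HANSON_STEPS = [
    "suitable star system", "reproductive molecules", "prokaryotic life",
    "eukaryotic life", "sexual reproduction", "multicellular life",
    "tool-using big brains", "technological explosion", "colonization",
]

def p_space_age(step_probs):
    """Multiply independent per-step probabilities into one overall chance."""
    total = 1.0
    for p in step_probs:
        total *= p
    return total

easy = [0.5] * len(HANSON_STEPS)        # every step a coin flip
one_filter = [0.5] * 8 + [1e-9]         # eight coin flips plus one filter

print(f"{p_space_age(easy):.2g}")       # ~0.002: space-faring life common
print(f"{p_space_age(one_filter):.2g}") # ~3.9e-12: effectively nobody
```

Under the "easy" assumption a galaxy of billions of habitable planets would teem with colonizers; with a single one-in-a-billion step, essentially nobody makes it, which is why identifying which step is the filter matters so much.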
“We are either alone in the universe or we are not. Both are equally terrifying.” -Arthur C. Clarke
For this reason, and to return to the original hypothetical, the discovery of simple, multicellular life on Mars or a nearby planet would show that life isn’t some universal jackpot, but rather something as common and mundane as water: common enough to happen independently on not just one, but two planets in our solar system. We would immediately know the great filter was something beyond multicellular life, something we have yet to encounter. It would foretell our collective fate. But it is not so easy to imagine how exactly our society could come crashing down. Hundreds of years of social development and “enlightenment” fool us into thinking history is a strictly linear advancement from nature to civilization, barbarism to sophistication. And yet the Romans, Soviets, Germans, and Byzantines have all seen their once established and advanced nations fall into ruin. Human civilization has highs and lows, rises and falls. Could we face a fall that prevents us from continuing our trajectory towards the stars? What event could halt the seemingly unstoppable march of history, not just for a decade or century, but forever? The lessons of the 20th century might cause one to immediately think of nuclear weapons. At the height of the Cold War, mankind possessed over 64,000 nuclear warheads,12 each enough to vaporize a city. Many were on permanent standby, ready to launch nuclear fire at predetermined cities with a single phone call. Government research into potential casualties estimates that even a relatively minuscule launch of 100 warheads at major U.S. cities would cause 50 million casualties.13 Studies by the International Physicians for the Prevention of Nuclear War determined that even a minor nuclear exchange between powers like India and Pakistan would cause global climate disturbances that could threaten the food supply of two billion people,14 over a quarter of humanity.
Maybe an understanding of the power of the atom is inherently too dangerous for life to possess, maybe scientific advancement inevitably leads every species to its own destruction. But the range of possible extinction events spans far beyond simple nuclear exchange. Nick Bostrom, a philosopher who studies these existential risks, identifies a multitude of outcomes: everything from drastic climate change, asteroid impacts, wars fought with future weapons beyond our comprehension,
to rogue artificial intelligence.15 Other theories suggest that the competitive nature of evolution drives any species to become inherently predatory and violent, and thus unable to peacefully coexist in the cosmos. The corollary is that planet-bound civilizations like ours, realizing that space may be filled with aggressive, hyper-advanced creatures, isolate themselves from what they perceive as a possibly dangerous universe, leaving no obvious evidence of their existence. Ultimately, this is all a game of speculation. There could be unaccounted-for variables that answer Fermi’s infamous question in full, perhaps something we have yet to discover about biology, chemistry, or physics. But as long as our sample size consists of only one species, we have nothing to rely on but our educated guesses. !

Footnotes
1 http://www.economist.com/node/10918055
2 https://en.wikipedia.org/wiki/Poker_probability
3 Grewell, Greg (2001). “Colonizing the Universe: Science Fictions Then, Now, and in the (Imagined) Future”. Rocky Mountain Review of Language and Literature. 55 (2): 25–47.
4 1938 Nobel Prize in Physics, Fermions
5 http://www.pnas.org/content/110/48/19273
6 https://www.scientificamerican.com/article/water-lust-why-all-the-ex/
7 https://serc.carleton.edu/NAGTWorkshops/earlyearth/questions/formation_oceans.html
8 https://www.nature.com/articles/nature21377
9 https://www.livescience.com/3996-humans-chimps-split.html
10 http://www.sharksavers.org/en/education/biology/450-million-years-of-sharks1/
11 https://www.webcitation.org/5n7VYJBUd?url=http://hanson.gmu.edu/greatfilter.html
12 https://ourworldindata.org/nuclear-weapons
13 https://www.ncbi.nlm.nih.gov/books/NBK219165/
14 http://www.ippnw.org/pdf/nuclear-famine-two-billion-at-risk-2013.pdf
15 https://www.technologyreview.com/s/409936/where-are-they/
Is Physician-Assisted Suicide Ethical?
Examining the Oregon Death with Dignity Act
SONIA MAHAJAN ILLUSTRATION BY STEFANI SHOREIBAH
In 1994, Oregon passed the Oregon Death With Dignity Act (ODWDA), which “exempts from civil or criminal liability state-licensed physicians who, in compliance with ODWDA’s specific safeguards, dispense or prescribe a lethal dose of drugs upon the request of a terminally ill patient.” [1] In essence, ODWDA legalizes physician-assisted suicide via prescription drugs. However, due to controversy, the law did not go into effect until 1997. [2] Physician-assisted suicide is now legal in California, Vermont, Oregon, Washington, Colorado, Montana, and the District of Columbia. [3] Thirty more state governments are currently debating similar laws. [4] Yet, since the 1950s, physician-assisted suicide legislation has remained a point of contention. [7] Views on physician-assisted suicide have not changed
much since 2006, when the United States Supreme Court decided the case of Gonzales v. Oregon. In the initial legal challenge to ODWDA that led to Gonzales, John Ashcroft, then the Attorney General of the United States, argued that physician-assisted suicide was not “a legitimate medical practice.” [13] Additionally, prior to the Gonzales ruling, physicians could be prosecuted by the Drug Enforcement Administration (DEA) under the Controlled Substances Act (CSA) for prescribing lethal doses of drugs to aid a patient’s suicide, because the drugs involved are controlled substances. [8] However, the Supreme Court determined that the CSA does “not […] define general standards of state medical practice, which was the sort of medical matter historically entrusted to the states.” [9] Furthermore, patients who qualify for physician-assisted suicide must
be diagnosed by two physicians as terminally ill with less than six months to live. The physician authorizing the prescription must also ensure that the “patient has made a voluntary request, ensure the patient’s choice is informed, and refer patients to counseling if they might be suffering from a psychological disorder or depression causing impaired judgement” [10]. Moreover, the physician cannot directly administer the pills, as this would be considered euthanasia, which is not legal and is very different from physician-assisted suicide. [12]
On the other hand, the Supreme Court stated that Attorney General Ashcroft, by virtue of not being a medical practitioner, had no authority to decide what was or was not “a legitimate medical practice.” [14] As far as medical regulation goes, “the role of the federal government is to prevent drug abuse” and nothing more, according to an article by Professor David Brushwood, a member of the American Society for Pharmacy Law and lecturer at the University of Wyoming. [16] Thus, it is up to the medical community to inform states about whether or not physician-assisted suicide is “a legitimate medical practice.” The New England Journal of Medicine published a 1996 poll of Oregon doctors reporting that only “31 to 54 percent of physicians polled have expressed neutral or positive attitudes towards” Death With Dignity laws. [29] However, only 31% of respondents claimed to oppose physician-assisted suicide “because of moral objections.” [30] Other physicians were opposed because they feared personal retribution in the form of legal action from the patient’s family, the loss of their medical “license in another state,” and disciplinary measures from their hospital. [31] While the 2006 Supreme Court case may have alleviated physicians’ fears of losing their medical licenses or facing criminal prosecution, lawsuits from families and disciplinary measures from private hospitals that may have policies against physician-assisted suicide remain significant concerns. A higher proportion of Oregonians than of Americans overall approve of physician-assisted suicide and believe “it is ethical and should be legal in some cases.” [32] However, one study found that in most cases cancer “patients and the [general] public” were more supportive of physician-assisted suicide, and viewed it as more ethical, than did physicians. [34] The Pew Research Center found that 47% of U.S. adults approved of physician-assisted suicide while 49% opposed it. [35] In states where physician-assisted suicide is illegal, many medical professionals advise qualified patients to “stop eating,” a suggestion nowhere near as controversial as the alternative. [26] Widely-publicized cases of individuals who have sought physician-assisted suicide cite patients’ desire “to be able to transition out of this life with their dignity.” [25] Death with Dignity, which lobbies for physician-assisted suicide legalization, describes the practice as “an end-of-life option that allows certain eligible individuals to legally request and obtain medications from their physician to end their life in a peaceful, humane, and dignified manner.” [37] The organization claims the Death with Dignity Act is crafted to “[give] you the freedom and empowerment” to end your own life. As many as “[1 in 3] never take the medication” once it is prescribed, and patients can back out of physician-assisted suicide at any time. [38] Sarah Witte, whose son passed away in 2010, explained in an op-ed for the organization Death With Dignity that physician-assisted suicide “[provides] peace (and peace of mind) and choice.” She firmly holds that “it’s an individual choice at an extremely personal time of one’s life.” [36]
However, writing on behalf of the American College of Physicians—American Society of Internal Medicine (ACP—ASIM), Lois Snyder and Daniel Sulmasy contend that physician-assisted suicide is not ethical and should not be legalized because “most individuals who contemplate or succeed at suicide are depressed or have other psychiatric comorbid conditions,” and because the desire to proceed with physician-assisted suicide “fluctuates significantly over time.” [18] Thus, Snyder and Sulmasy recommend palliative care (end-of-life care that seeks to alleviate pain) over physician-assisted suicide. They also cite the Hippocratic Oath, a binding pledge in which doctors swear to do no harm to their patients. [19] [20] They worry, too, that physician-assisted suicide could later provide a basis for the legalization of euthanasia against a patient’s will, or be applied to “non-terminally ill persons.” [22] As of now, ODWDA places strict limits on who can qualify for physician-assisted suicide. [39] Ultimately, the question of whether physician-assisted suicide should be allowed boils down to a single question: is it safe and ethical? Both physicians and the public remain divided over the ethics of this issue. While physician-assisted suicide laws are beginning to appear in many state legislatures, the Pew Research Center has found that “views on doctor-assisted suicide are little changed since 2005.” [40]
Perhaps one explanation for the lack of legal advancement in the debate surrounding assisted suicide is that legal experts are not entirely sure whether medical ethics and the views of medical practitioners even matter. In his dissent to the Supreme Court’s decision in Gonzales v. Oregon, Justice Antonin Scalia claimed that it makes more sense “for the Attorney General occasionally to make judgements about the legitimacy of medical practices” than for medical practitioners “to get into the business of law enforcement.” Two other Supreme Court justices joined Scalia in his dissent. [42] However, most of the literature surrounding medically-assisted suicide is
co-written by a J.D. Some physicians cite the Hippocratic Oath in their opposition to medically-assisted suicide and Death With Dignity laws. Others worry about the potential misuse of such responsibility. Still others believe that these laws allow suffering patients relief in a time of turmoil. Each side’s argument has its merits. Ultimately, there is no clear choice, and there may well never be a right one. !

References:
[1] Gonzales v. Oregon, 546 U.S. ___ (2006)
[2] http://assets.pewresearch.org/wp-content/uploads/sites/11/2007/10/Gonzales-vs-Oregon.pdf
[3] http://www.cnn.com/2014/11/26/us/physician-assisted-suicide-fast-facts/index.html
[4] https://www.deathwithdignity.org
[6] http://assets.pewresearch.org/wp-content/uploads/sites/11/2007/10/Gonzales-vs-Oregon.pdf
[7] http://www.dailytexanonline.com/2017/11/27/banning-physician-assisted-suicide-is-unethical
[9] https://www.oyez.org/cases/2005/04-623
[15] http://www.doctordeluca.com/Library/WOD/DefiningLegitMedPurpose05.pdf
[17] http://annals.org/aim/fullarticle/714672/physician-assisted-suicide?year=2001
[19] http://www.pbs.org/wgbh/nova/body/hippocratic-oath-today.html
[24] http://www.nejm.org/doi/full/10.1056/NEJMlim060731
[25] https://www.nytimes.com/2015/09/12/us/california-legislature-approves-assisted-suicide.html?_r=0
[26] http://www.berkeleywellness.com/healthy-community/health-care-policy/article/physician-assisted-suicide-ethical
[29] http://www.nejm.org/doi/full/10.1056/NEJM199602013340507#t=articleResults
[33] https://ac.els-cdn.com/S0140673696916219/1-s2.0-S0140673696916219-main.pdf?_tid=961a9e96-d6dc-11e7-9c09-00000aab0f27&acdnat=1512163036_ac6c4404b8aa4bbacfbf913c64e9f889
[35] http://www.pewresearch.org/fact-tank/2014/10/22/americans-of-all-ages-divided-over-doctor-assisted-suicide-laws/
[36] https://www.deathwithdignity.org/stories/sarah-witte-individual-choice/
[40] http://www.pewresearch.org/fact-tank/2014/10/22/americans-of-all-ages-divided-over-doctor-assisted-suicide-laws/
Neandertals “R” Us?
Nikita Israni Illustration by Christopher Coyne
There is perhaps no other extinct group that has had so much negativity chucked its way as Homo neanderthalensis. Today, the word “Neandertal” is applied to everything from the misogynistic attitudes of Hollywood producers to the diplomacy of Trump. But when science itself has failed to give adequate representation to our distant cousins, can we really blame popular culture for doing the same? The first Neandertal was discovered in 1856 in the Neander Valley (Neandertal), Germany, and is the specimen on which the description and name of the species are based. An 1864 analysis of Neandertal 1 by Anglo-Irish geologist William King identified tangible features of the skullcap, some of which are still seen as characteristic of Neandertals today, but described these traits as being “approached by some savage races” and more similar to those of apes than of humans. Additionally, scientific practices of the time like phrenology, which attempted to infer cognitive ability from bumps and lumps on the skull, painted Neandertals as primitive, bestial wild men. Such views, rooted in Eurocentric social traditions and the early attachment of the word “primitive” to 19th-century Neandertal remains, would not be revised until the study of remains shifted toward an objective investigation of the distance between Homo neanderthalensis and Homo sapiens. Neandertals were archaic humans that lived primarily in Western Europe but extended into Eastern Europe, the Levant, and Central Asia. They existed from 130 Kya to 28 Kya, at which point it is suggested that they were replaced by, and likely hybridized with, Anatomically Modern Humans (AMHs) emerging from Africa. Traditionally, the most diagnostic anatomical region of the Neandertal is their
facial skeleton. Over thirty plesiomorphies, or ancestral characters, and apomorphies, or derived characters, have been identified to define the Neandertal skull. The most prevalent, distinctive characters include a well-defined suprainiac fossa, found in all European, or “classical,” Neandertal skulls to date, and an occipitomastoid crest. The suprainiac fossa is an elliptical depression on the occipital bone, and the occipitomastoid crest is an indentation along the suture between the occipital and temporal bones. Other defining craniofacial traits are a large and robust face, low and wide skullcap, projecting midface, large nose, continuous arched brows, occipital bun, lack of a chin, swept-back cheekbones, and a gap behind the last molar. The main debate is how these traits arose and which of them are autapomorphies: distinctive traits unique to a single taxon that could clarify whether Neandertals should be identified as a subspecies of H. sapiens or as a separate species altogether. Three debated autapomorphies are found in the Neandertal mandible, ear, and nose. The human mandible, or lower jawbone, has two prominences at each end of almost equal elevation, separated by a deep notch at the midpoint between the two. In Neandertals, the anterior process is larger and higher than the posterior process, the notch is shallower, and its deepest point is off center. Meanwhile, Homo erectus, the direct ancestor of AMHs, which existed from 1200 Kya to 600 Kya in Europe and Africa, shows a mandible identical to that of modern H. sapiens, suggesting the Neandertal form is a derived character. If so, this feature is indicative of a side branch that evolved separately from AMHs.
Morphological differences are often statistically resolved through principal component analysis (PCA), in which the measured size and shape values of a specimen are combined and grouped into different principal components (PCs). These components reduce redundancy in the data and make it easier to detect variability in the data set. For example, PCA has been used to analyze the Neandertal ear ossicles: the malleus, incus, and stapes. As opposed to craniofacial bones, which are shaped over the course of the Neandertal lifetime, the stapes is fully developed at birth and only shows signs of thinning
as one reaches old age. Therefore, any difference in the structure of the stapes would indicate that morphological differences between Neandertals and AMHs were present at birth, and likely genetically encoded. Generally, external casts of inner ear structures have shown that Neandertals had smaller ear canals than AMHs, but that the two ears were functionally similar. Based on the PC mean scores of chimps, gorillas, AMHs, and Neandertals, researchers from Germany determined that AMH and Neandertal mean shapes are similarly derived for the malleus and incus, but that the Neandertal stapes is more derived. However, the ear is one of the least studied portions of the Neandertal skull, and there has yet to be any published response arguing against treating the shape of the Neandertal stapes as an autapomorphy.
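The PCA step described above can be sketched in a few lines. This is a generic, minimal implementation on synthetic data; the measurement matrix and what its columns stand for are invented for illustration, not taken from any study cited here:

```python
import numpy as np

# Minimal PCA sketch in the spirit of the morphometric analyses above.
# The "measurements" are synthetic, correlated columns standing in for
# hypothetical skull variables; none of this is real specimen data.
rng = np.random.default_rng(42)

n_specimens = 30
base = rng.normal(size=(n_specimens, 1))          # shared "size" factor
measurements = np.hstack([
    w * base + rng.normal(scale=0.3, size=(n_specimens, 1))
    for w in (1.0, 0.8, 0.5, 0.2)                 # four correlated variables
])

# PCA = center the data, then SVD; the rows of Vt are the principal axes.
centered = measurements - measurements.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of total variance per PC
scores = centered @ Vt.T          # specimen coordinates in PC space

print("variance explained per PC:", np.round(explained, 3))
```

Because most of the synthetic variance here is shared through the common factor, the first component absorbs the bulk of it, which is exactly the redundancy-reduction behavior the text describes.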
Perhaps the most hotly debated facial feature of Neandertals is their large nasal aperture. In 1996, physical anthropologists Jeffrey Schwartz and Ian Tattersall identified what they believed to be three unique specializations of the nasal structure of Neandertals, most clearly observed in the Gibraltar 1 skull (45–70 Kya), discovered in 1848. The proposed apomorphies include a bluntly pointed medial (toward center) projection of the interior margin of the nasal cavity and a swelling of the lateral (away from center) nasal walls into the posterior nasal cavity. Robert Franciscus of the University of Iowa argues that Schwartz and Tattersall used an extremely small sample size of five specimens, all with incomplete
nasal apertures. He also believes the medial projection was identified over a century ago as the crista turbinalis, and is also found in some AMH fossils. He further notes that despite their swollen lateral walls, Neandertal noses actually have a wider internal breadth than early and late modern European human samples, and a breadth identical to some North African AMH samples. Issues of sample size and specimen incompleteness are to be expected in the field, and the problem is intensified by the lack of any soft tissue in the fossil record. However, improvements in technology have allowed for digital reconstruction of soft nasal tissue, allowing researchers to test respiratory performance rather than evaluate changes in bone that may or may not have any adaptive explanation. A 2017 study conducted by a team of international researchers is one of the more recent developments in testing the theory of the cold-adapted Neandertal face, as it found differences in the degree to which Neandertals and modern humans humidified inhaled air. American anthropologist Carleton Coon spearheaded the idea that Neandertal faces were cold-adapted in a 1962 book entitled The Origin of Races. In the book he compared prevalent Neandertal traits with the anatomy of groups living at high latitudes and observed that the size and shape of Neandertal bodies would have worked to minimize the loss of heat and provide protection against cold injury in
the considerably cooler climates of northern Europe. For example, a larger opening in the maxilla just below the eye socket, an area known as the infraorbital foramen, can accommodate bigger or more blood vessels, allowing for more blood flow to the cheeks. Theoretically, greater midfacial projection would separate the brain from inspired cold air, and a wider nose would be useful in humidification. This projection would also give rise to other traits identified in “classic” Neandertals, such as an occipital bun, retromolar gap, and well-developed brow ridge. Based on the theory of climatic adaptation, it would be expected that Neandertal populations maintained such traits at the highest frequencies in areas of Europe where climatic selection was strongest. Although this entire suite of traits generally applies to “classical” Neandertals, further analysis by Holton and Franciscus revealed that wider nasal apertures are more often found in equatorial recent humans, and that the Inuit people inhabiting Arctic regions have not developed large sinuses. However, a morphometric study in 2011 found that the amount of pneumatized space, i.e., air space within the facial bones, is fairly similar for Neandertals and modern humans. This study suggests that certain functional benefits of the distinctive Neandertal face may not have existed at all, and that something other than cold stress is responsible for their facial features. Ultimately, the debate over whether Neandertal bones reflect
adaptations that arose due to selective climatic pressure remains unresolved, especially considering the now widely accepted theory of genetic drift.
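Genetic drift is easy to demonstrate with a toy simulation. Below is a minimal Wright-Fisher sketch; the population size and run length are illustrative choices meant to reproduce the qualitative effect, not any particular study's model:

```python
import numpy as np

# Wright-Fisher sketch of genetic drift. Each generation, the number of
# copies of a neutral allele is a binomial draw from the previous
# frequency (no selection term anywhere), so in a small population the
# allele drifts to loss (0.0) or fixation (1.0) by chance alone.
rng = np.random.default_rng(0)

def drift(pop_size=100, start_freq=0.5, generations=2000):
    """Track one neutral allele until it is lost, fixed, or time runs out."""
    freq = start_freq
    for gen in range(generations):
        copies = rng.binomial(2 * pop_size, freq)  # 2N gene copies per generation
        freq = copies / (2 * pop_size)
        if freq in (0.0, 1.0):
            return freq, gen
    return freq, generations

outcomes = [drift()[0] for _ in range(20)]
print(outcomes.count(1.0), "runs fixed,", outcomes.count(0.0), "runs lost")
```

Nearly every run ends at 0 or 1 within a few hundred generations even though no trait confers any advantage, which is the sense in which "distinctive" features can become fixed in small, isolated populations without adaptive significance.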
The early rendition of the genetic drift theory was the accretion model, proposed by French paleoanthropologist Jean-Jacques Hublin in 1998. The model divides Neandertal finds into four stages and identifies the specimens and derived features present at each stage. It includes environmental influences, but also emphasizes demographic fluctuations and genetic drift in explaining human evolution. The model suggests that Neandertal features did not appear all at once, but instead accumulated gradually over a period of 300,000 years. The initial colonization of Europe by small populations of pre-Neandertals would have produced episodes of genetic drift resulting in the fixation of certain features, meaning some could have developed without any clear adaptive significance. A 2006 study that sampled 37 cranial measurements from 20 Neandertal specimens and 2,524 recent humans found that Neandertals and modern humans would have diverged from each other some 300–400 Kya even in the absence of natural selection. This divergence would have occurred through the random fluctuations in allele, or gene variant, frequencies that happen in all real populations. This accretion of Neandertal features is exactly the
expected pattern if genetic drift were responsible, and is less compatible with the climatic adaptation theory. A review of the literature on the Neandertal skull only highlights the difficulty of categorizing Neandertals as their own separate taxon, or of placing them directly alongside AMHs. However, considering the arguments against many presumed autapomorphies and the variation already seen in the species, it is necessary that the parameters for between- and within-species variation either be set out more clearly for H. sapiens, or else be widened to achieve a greater degree of inclusivity. The predominant theory of genetic drift makes it all the more likely that such “distinctive” cranial morphology can arise by chance within a population, especially when a certain level of geographical isolation is
at play. Ultimately, though the suite of traits characterizing “classical” Neandertals does present itself as distinct, it is heavily fragmented outside of western Europe, making a convincing argument for the stance that Neandertals are more representative of a regional variant than of a different taxon altogether. Of course, alongside archaeology, ethnography, and genetics, skull morphology is only a small piece of the puzzle when it comes to determining just how close we are to our distant cousins. !
References:
1. http://www.newsweek.com//
2. http://deadline.com/
3. William King, The Reputed Fossil Man of the Neandertal (1864)
4. http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1002947
5. Cartmill and Smith 2009, Ch. 7, Talking Apes: The Neandertals
6. http://onlinelibrary.wiley.com/doi/10.1002/ajpa.10131/epdf
7. http://www.pnas.org/content/early/2016/09/21/1605881113/tab-article-info
8. Schwartz and Tattersall 1996
9. http://www.pnas.org/content/96/4/1805, Franciscus 1999
10. http://www.pnas.org/content/114/47/12442
11. Coon 1962, The Origin of Races
12. https://www.ncbi.nlm.nih.gov/pubmed/18842288
13. https://www.ncbi.nlm.nih.gov/pubmed/21183202