Columbia Science Review
Vol. 14, Issue 1: Fall 2017

C.R.I.S.P.R-Cas9: Beginning of the End?



Cocktail Science
Sophie Bair, Enoch Jiang, Abhishek Shah, Maham Karatela

Consciousness is one of the most tantalizing and elusive topics in modern neuroscience, and a recent scientific discovery may open a new window of understanding into the structures responsible for it. Researchers led by Christof Koch, president of the Allen Institute for Brain Science, found that three neurons connect throughout much of the entire mouse brain, and that one in particular envelops the outer layer. The neurons appear to be interconnected with most of the parts of the brain that receive sensory inputs and produce behavioral responses; as the report states, Koch “has never seen neurons extend so far across brain regions.” Significantly, these cells can be traced back to the claustrum, a thin sheet of neurons that has been hypothesized to drive consciousness. Koch believes this result supports the idea that the claustrum integrates the brain’s various stimuli and responses and thus plays an integral role in consciousness, but Columbia University’s Rafael Yuste urges caution: as the scientific maxim “correlation does not imply causation” reminds us, the fact that the claustrum is connected to neurons that span most of the brain does not unequivocally demonstrate that it is responsible for, or even a part of, what makes up consciousness. However, it does provide an outlet for future investigations, as Koch and his team hope to map more of the neurons that connect back to the claustrum. And above all, the discovery provides hope that the question of consciousness can someday be disentangled.


Sir Edward Elgar, best known for writing Pomp and Circumstance, was reportedly friends with a bulldog named Dan, who would attend choir practices with his master and growl at choristers who sang out of tune. Similarly, Richard Wagner reserved a special stool in his study for his Cavalier King Charles Spaniel, where he could watch how the dog reacted to particular keys or passages. Previous scientific analyses give some support to the notion that dogs can discern pitch: recordings of wolves have shown that each will change its tone when others join the chorus, so as to preserve its own distinctive pitch and create an overall discordant sound. But what does science have to say about how much your puppy would like your Spotify playlist? A study performed by the Scottish SPCA and the University of Glasgow claims that when dogs were played five different genres of music (soft rock, Motown, pop, reggae, and classical), the greatest decrease in stress levels, as measured by factors such as heart rate, was seen with soft rock and reggae. Across all genres, dogs spent significantly more time lying down than standing when music was played. And although soft rock and reggae stood out on average, the study also suggested that, just like humans, each dog had its own taste in music. Previous studies have shown that playing classical music in shelters both calms the dogs and encourages visitors to stay longer, resulting in increased adoption rates. In contrast, playing heavy metal was associated with the canines spending more time shaking, barking, and using other forms of vocalization such as howling. So if you want to make the perfect mixtape for Fido, mix some Beethoven with some Bob Marley and leave out the Black Sabbath.

On August 17th at 12:41 PM GMT, a gravitational wave rippled across the Earth, passing over the LIGO gravitational wave detectors. Arriving from nearly 130 million light-years away, this wave marked the collision of two dense neutron stars, remnants of the deaths of billion-year-old stars. Both neutron stars were roughly 20 kilometers across yet weighed more than the sun – in fact, a teaspoon of their surfaces would weigh more than 10 million tons. The neutron stars spiraled together, orbiting ever faster and teetering on the edge of annihilation. At their closest and fastest, the stars were circling each other hundreds of times each second and travelling at close to a third of the speed of light. This immense orbital energy was radiated away from the stars as gravitational waves, a special type of radiative energy predicted by Einstein’s general theory of relativity. What was even more interesting than the collision itself, however, was scientists’ ability to spot it visually in the sky. While LIGO has seen many black holes collide, such collisions emit no light. A neutron star collision, on the other hand, is one of the brightest events in the sky. Excited astronomers therefore pointed telescopes at the sky to look for a large burst of light. Soon, scientists in Chile spotted it – a bright spot in the constellation Hydra. In less than 24 hours, scientists had, for the first time in human history, observed an astronomical collision both gravitationally and visually. Analyzing the spectra of the light emitted after the collision, scientists found traces of heavy elements such as gold and silver, providing information about their creation. These elements are only produced in such explosive events: that’s right – gold is literally stardust! With many more gravitational wave detectors being built over the next few years, it is likely that a new age of astronomy is beginning. Perhaps one day, all of us will be able to look to the sky and see the invisible world revealed to us through gravity.

The recent popularization of Santiago Ramón y Cajal’s detailed sketches of brain neurons has drawn fresh attention to the intersection of science and art. The two disciplines have more in common than not, retaining a unique mutual relationship. Ramón y Cajal’s sketches bridge them in an extraordinary way, bringing the beauty of human life and interpretation to the images we pore over through a microscope; the same intricate neurons of Ramón y Cajal’s brain sketched the cellular life we are constantly striving to understand. Art and science both necessitate the same curiosity and inquiry, and countless scientists have mastered each field since Da Vinci and his sketches of human anatomy. Science is very much as visual a discipline as art is, and recent school curricula and initiatives that integrate the two reflect the population’s growing recognition of their mutual importance. The eye trained by both scientific and artistic inquiry has the potential to envision the incredible feats of humankind, from discovering the structure of a DNA molecule to visualizing the starry universe as it expands into infinity.



Letters From the EIC & President

To me, science can be called a multitude of things. It can be a lifesaver, as it is central to designing today’s drugs and surgical techniques. It can be beautiful, since it can offer elegant ways to express the fundamental laws of nature. But, in my opinion, science at this point in time cannot be considered accessible. To be more specific, science at the university and graduate level is not accessible. I remember getting together with friends as a child to watch Bill Nye or The Magic School Bus explain simple scientific concepts, such as the structure of ant colonies or the way storms form. Now, however, I hear people in college say they don’t plan to take any math or science courses beyond the minimum requirements because science is intimidating, boring, or even useless. While I don’t agree with these opinions, I do agree that the way science is presented in college can be dry, unappealing, and outright scary. Too often, lecturers casually present their classes with PowerPoint slides recycled through the years, or with blackboard presentations copied directly from textbooks describing proofs of complicated claims. This ineffective pedagogy can dissuade even the most scientifically minded from pursuing further study. In fact, one of our own writers recently published an online article noting how the portrayal of scientific principles as “trivial” or “obvious” can dissuade students from further study.

I joined Columbia Science Review because I wanted to show the Columbia community that science can indeed be interesting and accessible if presented in the right way. In our publication, we aim to present even the most complicated topics in ways understandable to readers with any degree of scientific familiarity. While it can be difficult to deconstruct the linguistic barriers that are unique to each area of science, I think doing so is absolutely necessary to help every person understand the significance of every field. I hope you enjoy the written works in this publication and that you, too, can work to spread our love for science.

Sincerely,
Justin Whitehouse, Editor-in-Chief

This last year has been a trying time for the scientific community. From the erasure of all mentions of climate change from the EPA’s website to scientific posts at the highest levels of government being filled with anti-science politicians, science has perhaps never in modern history had so little say in our political process. Troublingly, the immense font of wisdom that is our collective scientific knowledge seems to be standing idly by. Modern science is the combined effort of innumerable generations, and while the history of science is filled with lessons about the dangers of its use for ignoble deeds, science has clearly established itself as a remarkable driving force for progress. Our current political moment is not the first time that the powers that be have opposed the truths, especially the inconvenient ones, that science reveals. Over time, however, incontrovertible scientific truths make themselves unavoidable, regardless of the might of the powers that stand in their way. While the immediacy of the dangers posed by a lack of scientific perspective concerns me deeply, I remain faithful that good science will ultimately shine through and illuminate a proper path forward, as it has time and time again.

This past year, however, has not been all gloom and doom in the scientific world. Astounding scientific discoveries and achievements have continued at an impressive, sometimes dizzying rate. In physics, the Nobel Prize was awarded to the leaders of LIGO (the Laser Interferometer Gravitational-Wave Observatory), who provided the world with its first direct detection of gravitational waves. Amazingly, the LIGO project has opened up an entirely new field in astronomy through stunningly accurate observations of the ripples in space-time created by massive objects. Even here at Columbia, Professor Joachim Frank won the Nobel Prize for his role in pioneering single-particle cryo-electron microscopy, a method for determining the structures of biomolecules at high resolution. The state of scientific discovery throughout the world remains strong.

At the Columbia Science Review, our mission is to bring the science of LIGO and Professor Frank to our community, jargon-free and as accessible as possible. We recognize that this epoch has a need for a scientifically literate populace, not just a technically sound scientific community. Through this publication, our continual outreach through events on campus, and our online content, we will continue to do our part in spreading science, and we hope you will join us in this undertaking.

Sincerely,
Noah Goss, President



Editorial Board
The Editorial Board biannually publishes the Columbia Science Review, a peer-reviewed science publication featuring articles written by Columbia students.

Editor-in-Chief Justin Whitehouse

Chief Content Editor Young Joon Kim

Chief Content Reviewer Nikita Israni

Blog Content Manager Yameng Zhang

Chief Illustrator Jennifer Fan

Editors Kelly Butler, Serena Cheng, Lalita Devadas, Sarah Ho, Enoch Jiang, Briley Lewis, Timshawn Luh, Heather Macomber, Cheryl Pan, Jane Pan, Alice Sardarian, Emily Sun, Naazanene Vatan, Tina Watson, Adrienne Zhang, Joyce Zhou

Reviewers Benjamin Greenfield, Jessy (Xinyi) Han, I-Ji Jung, Bryan Kim, Mona Liu, Prateek Sahni, Bilal Shaikh, Kamessi (Jiayu) Zhao

Blog Columnists Sophia Ahmed, Gitika Bose, Sean Harris, Tanvi Hisaria, Kanishk Karan, Audrey Lee, Maria MacArdle, Sonia Mahajan, Shasta Ramachandran, Mariel Sander, Kayla Schiffer, Manasi Sharma, Sean Wang, Kendra Zhong

Illustrators Christopher Coyne, Cecile Marie Farriaux, Sirenna Khana, Yuxuan Mei, Kyosuke Mitsuishi, Natalie Seyegh, Stefani Shoreibah, Eliana Whitehouse
Layout Editor Tiffany Li
Layout Designers Amanda Klestzick, Vivienne Li, Alice Styczen, Jessica Velasco, Elizabeth Wiita, Joyce Zhou

Spread Science Director Michelle Vancura
Spread Science Team Makena Binker Cosen, Benjamin Ezra Kepecs, Alex Maddon, Kshithija “KJ” Mulam, Coco (Kejia) Ruan, Janine Sempel, Stephanie Zhu

Administrative Board
The Executive Board represents the Columbia Science Review as an ABC-recognized Category B student organization at Columbia University.

Noah Goss, President
Aunoy Poddar, Vice President
Ayesha Chhugani, PR Chair
Keefe Mitman, Treasurer
Marcel Dupont, Secretary
Harsimran Bath, Lead Web Developer
Cindy Le, Lead Web Developer

Maham Karatela, MCM
Chase Manze, MCM
Lillian Wang, MCM
Urvi Awasthi, OCM
Sophie Bair, OCM
Aziah Scott Hawkins, OCM
Amir Lankarani, OCM

Alana Masciana, OCM
Anu Mathur, OCM
Jason Mohabir, OCM
Kush Shah, OCM
Abhishek Shah, OCM
Winni Yang, OCM
Catherine Zhang, OCM

The Columbia Science Review strives to increase knowledge and awareness of science and technology within the Columbia community by presenting engaging and informative articles, in forms such as:
• Reviews of contemporary issues in science
• Faculty profiles and interviews
• Editorials and opinion pieces

Sources for this issue can be found online at www.columbiasciencereview.com
Contact us at csr.spreadscience@gmail.com
Visit our blog at www.columbiasciencereview.com
“Like” us on Facebook at www.facebook.com/columbiasciencereview to receive blog updates, event alerts, and more.



Contents

2 Cocktail Science
6 Emerging Understanding of America’s Peanut Allergy Epidemic
10 Climbing Masculinity: Men, Mountain Climbing, and Gender Relations During the Golden Age of Mountaineering
13 A Brief Review of Neurogenetic Tools in Drosophila
16 C.R.I.S.P.R-Cas9: Beginning of the End?
19 Running, Shaving, and Reasoning: An Exploration of Mathematical Paradoxes
20 Let’s Not Sugarcoat It
22 A case for more science in the Core



Emerging Understanding of America’s Peanut Allergy Epidemic
Kelly Butler
Illustration by Yuxuan Mei

Peanuts are widely regarded as a childhood staple and healthful snack, but the well-loved legume poses a major health risk to the growing number of Americans with peanut allergies. More severe than seasonal allergies or food intolerances, peanut allergy is a potentially life-threatening condition in which the immune system falsely labels peanut proteins as harmful and launches a dangerous attack on the body. Peanut allergy is known to be an Immunoglobulin E (IgE)-mediated food allergy, meaning the immune system produces abnormal, peanut-specific IgE antibodies that erroneously flag peanut proteins as pathogenic. When even a trace amount of peanut protein enters the body, these IgE antibodies bind to white blood cells that subsequently release symptom-causing biomolecules. Though microscopic, these biomolecules have massively dangerous effects on the body and can catalyze a potentially fatal form of allergic reaction called anaphylaxis. An increasing number of Americans have these peanut-specific IgE antibodies lurking in their bloodstreams. In fact, the prevalence of peanut allergy in the United States increased threefold between 1997 and 2010, and an estimated four million Americans now live with the life-threatening immunological condition. Unlike the molecular process of peanut-induced allergic reactions, the factors behind the development of peanut allergy remain poorly understood. However, recent research suggests that a combination of genetic, environmental, and dietary factors may be fueling America’s peanut allergy epidemic. Data demonstrating the familial aggregation of peanut allergy within a single generation provided the first clue that genes play an

important role in the development of peanut allergy. In 1996, a team of researchers determined that children with peanut-allergic siblings were over five times more likely to have a peanut allergy than the general population. Given the similar genetic makeup of siblings, the authors hypothesized that individuals with certain genes are at greater risk of developing peanut allergy. Single generation familial aggregation was likely measured instead of intergenerational heritability because the incidence of peanut allergy among adults was too low at the time to gather statistically significant data. However, the rapid rise of peanut allergy allowed subsequent research teams to study heritability between two generations. For example, a 2009 study analyzed 581 nuclear families with at least one allergic parent and estimated that peanut allergy is inherited at a rate of 15 percent. Shedding more light on how genetic factors influence



the development of peanut allergy, recent advances in genomic screening technology have enabled researchers to identify specific genes associated with peanut allergy. In 2011, a study identified loss-of-function FLG mutations as a strong and independent risk factor for peanut allergy, with an odds ratio of 5.3; that is, individuals carrying an FLG loss-of-function mutation had more than five times the odds of developing a peanut allergy. FLG genes code for filaggrin proteins that help keep the outermost layer of skin moist and compact. When these genes have loss-of-function mutations, filaggrin proteins cannot properly develop, causing the skin to be an ineffective barrier. FLG mutations are known to cause a skin condition called atopic dermatitis (eczema) that often coexists with peanut allergy, which has led many scientists to hypothesize that exposure to peanuts via compromised skin somehow leads to allergic sensitization.
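To unpack what an odds ratio of that size means, here is a quick worked example (the 2 percent baseline risk below is a hypothetical number chosen for illustration, not a figure from the study):

\[
\mathrm{OR} \;=\; \frac{\text{odds of allergy with an FLG mutation}}{\text{odds of allergy without one}} \;=\; 5.3 .
\]

If the baseline risk were 2 percent, the baseline odds would be \(0.02/0.98 \approx 0.0204\); carriers would then have odds of \(5.3 \times 0.0204 \approx 0.108\), corresponding to a risk of \(0.108/1.108 \approx 9.8\) percent.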

Genetics alone cannot explain the rapid rise of peanut allergy in the United States.... there must be non-genetic factors that allow these genes to phenotypically manifest in peanut allergy.

FLG loss-of-function mutations are not the only genotype associated with increased risk of peanut allergy. In a 2015 study, researchers conducted the first genome-wide analysis of peanut-allergic individuals and determined that certain sequences of human leukocyte antigen (HLA) genes on chromosome six are associated with an increased risk of developing peanut allergy. A close relationship between HLA genes and peanut allergy is highly plausible, as HLA genes code for proteins that allow the immune system to distinguish foreign invaders from the body’s natural proteins, and allergic reactions involve the immune system misidentifying harmless substances as dangerous and attacking the body’s own biomolecules. The repeated presence of FLG loss-of-function mutations and certain HLA genomic sequences among peanut-allergic individuals provides strong evidence that genetic factors influence the development of peanut allergy, but genetics alone cannot explain the rapid rise of peanut allergy in the United States. It is also implausible that the causes of peanut allergy are solely genetic, as not all individuals with FLG loss-of-function mutations or certain HLA gene sequences develop peanut allergies. Thus, there must be non-genetic factors that allow these genes to phenotypically manifest in peanut allergy. Recent research suggests that some of these non-genetic factors may include

early microbe exposure, exposure to peanut via skin before oral ingestion, and infant diet. The so-called “hygiene hypothesis” presents one theory as to how environmental factors influence the development of peanut allergy. According to the hygiene hypothesis, the excessive cleanliness of Western homes eliminates microbes that are necessary for proper development of the immune system. Without exposure to these microbes during infancy and early childhood, the immune system cannot develop properly, and allergic diseases like peanut allergy occur. David Strachan (St. George’s, University of London) initially proposed the hygiene hypothesis in 1989 as an explanation for the increased prevalence of hay fever and atopic dermatitis in industrialized countries relative to unindustrialized ones. Observing that individuals born and raised in industrialized countries reported higher rates of hay fever and atopic dermatitis, Strachan hypothesized that the cleanliness of industrialized homes causes allergic diseases by eliminating beneficial microbes and by prohibiting proper development of the immune system. Providing further support for this hypothesis, Strachan also determined that children with older siblings were less likely to develop hay fever and atopic dermatitis, attributing this difference to beneficial microbe exposure from older siblings. The hygiene hypothesis received fresh support from a 2015 study finding that homes with dishwashers reported a significantly higher rate of allergic diseases than homes without dishwashers. In accordance with Strachan’s hygiene hypothesis, this study argues that dishwashers eliminate microbes that would otherwise enter the gut and help prevent the development of allergies. Providing further support for the hygiene hypothesis, multiple research teams have revealed a lack of gut microbe diversity in infants with atopic dermatitis—a major risk factor for peanut allergy. Since the hygiene hypothesis was developed to explain a broad range of allergic diseases (i.e., hay fever, asthma, and eczema), there is insufficient data to support a causal relationship between cleanliness and peanut allergy specifically. However, many scientists have argued that the hygiene hypothesis is also applicable to peanut allergy, noting that such a hypothesis explains the increased prevalence of peanut allergy in developed countries relative to undeveloped ones.

According to the hygiene hypothesis, excessive cleanliness of Western homes eliminates microbes that are necessary for proper development of the immune system.

Like early microbe contact, route of initial exposure



to peanuts may be related to the development of peanut allergy. Specifically, multiple studies have found that infants who were exposed to peanuts through their environments before peanuts were introduced into their diets faced a higher prevalence of peanut allergy. This finding agrees with, and largely foreshadowed, a recent groundbreaking study that reversed a decades-old guideline to avoid feeding peanuts to at-risk infants, with risk defined as presence of atopic dermatitis or family history of peanut allergy. Published in 2015, the Learning Early About Peanut (LEAP) study investigated the effects of early oral introduction of peanut on allergy prevalence in at-risk infants. The researchers found something completely unexpected: at-risk infants who were fed peanuts were 11.8 percentage points less likely to develop peanut allergy than their peers who avoided peanuts. Collectively, these studies reveal that infants who are genetically predisposed to peanut allergy are more likely to develop the condition if they are exposed to peanut through compromised skin before the potential allergen is introduced orally.

At-risk infants who were fed peanuts were 11.8 percentage points less likely to develop peanut allergy than their peers who avoided peanuts.

While scientists have pinpointed genes associated with peanut allergy and have begun to gather significant evidence about non-genetic factors, much remains unknown about the causes of America’s peanut allergy epidemic. The process by which exposure to peanut via skin before oral introduction increases the risk of peanut allergy remains elusive, and many genes associated with peanut allergy are likely still waiting to be discovered. With significant further research, it is possible to uncover the exact causes of peanut allergy, and this knowledge could lead to effective preventative measures, and maybe even a cure for the millions of people already living with peanut allergies. !



Climbing Masculinity:

Men, Mountain Climbing, and Gender Relations during the Golden Age of Mountaineering
Emily Felsen

Illustration by Helen Jin

“I confess—for it would be useless to conceal—that I am a fanatic. I believe that man ought to climb mountains, and that it is wrong to leave any district without setting foot on its highest peak.” Starting with Englishman Alfred Wills’s 1854 ascent of the Wetterhorn and ending in the 1870s, the golden age of mountaineering established mountain climbing as an activity of leisure pursued by a dashing, well-educated man whose physical exploits and mental fortitude catapulted him to the summits of mountains and the front pages of newspapers. These exploits not only captured the attention of the populace, but also led to the development of high-altitude physiology, and even opened the door for the discussion of gender in mountain climbing. This golden age of mountain climbing was not just about the relation of men to the mountains; it was also about the relation of men to fatigue, men to women, and men to nature.

Bodies in the Victorian Age were viewed as motors that were burdened by the problem of fatigue. Because of many breakthroughs in the field of thermodynamics at the time, society viewed fatigue as a form of entropy, or lack of order and organization. Therefore, it was prudent to maximize the utility of the human body with as little entropy as possible. Consequently, this created a divide between the body and the mind. For example, Stephen described his guide, Ulrich Lauener, not in terms of his technical knowledge, but as “square shouldered, gigantic, the most picturesque of guides…the very model of a true mountaineer.” Tall and of wiry build himself, Stephen almost seems to subvert his role as a mountaineer. How, exactly, would he see himself within the narrative of mountaineering?

Stephen, like many mountaineers, was also a follower of “muscular Christianity,” an English philosophical movement rooted in masculinity and discipline that attempted to unify both the body and the mind. Mountaineer John Tyndall, in his climbing notes, presented two concepts: climbing as an allegorical representation of Christ’s suffering, and climbing as a method for unifying the mind and body for the middle class. Part of the reason why “muscular Christianity” was so popular amongst climbers was that Christ’s death and resurrection mirrored their climbs; indeed, many first-hand accounts are structured after this model. The cooperation of the physical and the mental was striking for the middle class, since the idea of physical exertion was typically reserved for the working class. However, this cooperation ultimately led to the development of physiological research on the effects of high altitudes on the body.

“What were man’s limits? More so, were the limits of mountaineers different from those of ‘ordinary men?’”

Edward Whymper, an engraver turned mountaineer, was vastly interested in using climbing as a starting point for conducting scientific research. Like Tyndall, he obsessively noted his physical abilities, including his average walking pace of eleven-minute miles and his ability to climb over 100,000 vertical feet in eighteen days. After completing the successful summiting of the Matterhorn, he turned to his body and its workings; of particular interest was altitude sickness. What were man’s limits? More so, were the limits of mountaineers different from those of “ordinary men?”

Whymper’s solution was to climb the Andes mountains with his two trusted guides, Jean-Antoine Carrel and Louis Carrel, in an effort to determine humankind’s physical limits. Over the course of 17 days, Whymper meticulously took down his and his guides’ pulses, their breathing patterns, and their physical responses to changes in altitude. He soon realized that “mountain sickness” was a physical response to high altitudes with symptoms that included muscle weakness, slowness of thought, and fatigue; coincidentally, these were the same physical traits that, at the time, were ascribed to women. Nor does it help that Whymper tested his hypothesis on



solely male subjects. Science and gender found themselves at a crossroads. Women were mountaineers and active in the climbing communities. However, their roles as mountaineers could go one of two ways: they could cater to their prescribed gender norms, or they could go on climbing adventures much like their male counterparts. One of the earliest climbing narratives geared towards women was A Lady’s Tour Round Monte Rosa by Mrs. Henry Warwick Cole. Writing in the hope that her work would encourage other women to climb as well, she tells of the invaluable use of riding skirts and the disappointing lack of sidesaddles available for use. She further goes on to disparage the local guides, saying that “These men are, with few exceptions, very indifferently acquainted with their own country, and much disposed to magnify all its dangers, especially when they have to conduct a lady.” In light of her viewpoint, it makes sense that she would not even climb Monte Rosa; rather, her book describes the many walking paths a lady could take in the scenic countryside by the mountain.

The women who climbed not for recreation but for competition, however, had much in common with their male counterparts. They competed against each other, shamelessly promoted themselves, and always chased mountaineering glory. In the book A Summer Tour in the Grisons and Italian Valleys of the Bernina, Mrs. Henry Freshfield writes of the new trails she blazes as she journeys far above the glaciers and closely evades the dangers of ice-laden edges. Interestingly enough, her husband is not a central figure but rather a nuisance; rather than joining his wife on her climbs, he is intentionally left at their hotel. It should be noted, however, that while women were active in climbing, they were not exactly approved of. While Mrs. Warwick Cole never climbed Monte Rosa, critics of her book claimed that she was giving up the better characteristics of her gender and that “In daring, in physical strength, and in closeness and accuracy of thought she seems as much a man as Semiramis or Lady Macbeth.” The critic echoes Sir Leslie Stephen’s opinion of women climbers. “The number of persons who possess



the necessary independence of character [to ascend summits] is rare indeed…when I speak of ‘persons,’ I at present exclude the female sex.” While Cole had stayed within the “feminine” domain of traversing temperate zones and glaciers, her very presence threatened the “masculine” act of mountain climbing. On the whole, female mountaineers were never as popular as their male counterparts. This is partially because John Tyndall, Sir Leslie Stephen, and Edward Whymper were part of the Alpine Club, a mountaineering group whose exploits were followed ravenously by an adoring populace. The Alpine Club was originally founded to facilitate exploring the Alps. In addition, it served as a place where men of respectable breeding and competent climbing ability could gather to discuss their experiences on the mountains and to test new types of climbing equipment. Whymper, Tyndall, and Stephen all produced best-selling books on their exploits, which were dangerous but never dangerous enough for them. Yet, for Whymper, this bravado only went so far. He was caught in a disastrous and deadly accident on the Matterhorn in the Alps in 1865. After eight previous attempts, Whymper and his group became the first to summit the mountain, coming in only moments before an Italian group. However, on the way down, one of the group members slipped and took the three people attached to his rope down to the mountain’s glacier below. It was a sobering moment for a man who thought himself immune to death. “Every night, do you understand, I see my comrades of the Matterhorn slipping on their backs, their arms outstretched, one after the other, in perfect order at equal distances—Croz the guide, first, then Hadow, then Hudson, and lastly Douglas. Yes, I shall always see them...” Man, it seemed, could only dominate nature so much.

Nature, and the Alps by extension, had been viewed as an object of terror for centuries before the advent of the Golden Age. They were frigid wastelands that represented hell on earth. It was popularly thought that witches and other sub-human creatures populated the peaks, and even that there were dragons. Setting out to analyze the old folk tales and determine if they were true, physics professor Johann Scheuchzer took it upon himself in 1702 to climb the Alps and determine the nature of the fauna that populated the mountain range. In his 1708 work Ouresiphoites Helveticus, sive Itinera Alpina tria, his fear-inducing depictions of dragons were used as representations of the terrors that awaited those on the Alps. Yet that was not his intention; hoping that his work would be treated as one of many in the realm of natural theology, he proposed that God’s existence could be found through nature, as evidenced by the fantastical creatures that existed within the mountain range.

The way was soon paved for the Romantic viewpoint of the Alps. Romanticism emphasized the unity of man in nature, and held that this balance was “sublime.” The Swiss soon caught on to the word’s usage and used it to draw tourists to the base of the mountains. Initially this idea worked, but the sheer number of tourists drawn in soon made it backfire. By the time the Golden Age dawned, the concept of the sublime was rapidly shrinking. There were many reasons for this; for example, too many people visited the Alps for the range to remain the remote abstraction it had so consistently been portrayed as, especially to mountain climbers. Furthermore, the Romantic idea of the dashing explorer barely escaping the jaws of death becomes less idealistic and more immediate when pursuing the summit, as evidenced by Whymper’s experience summiting the Matterhorn.

“The number of persons who possess the necessary independence of character [to ascend summits] is rare indeed…when I speak of ‘persons,’ I at present exclude the female sex.”

The self-aggrandizing qualities that climbers were well known for complicated matters as well. If they were skilled enough physically and mentally to conquer the mountains, then the mountains weren’t sublime; they, the climbers, were. Indeed, the famous woman mountaineer Elizabeth Le Blond said upon her summit of the Aiguille du Midi, “For twenty minutes I enjoyed the magnificent view. Then my thoughts turned to more commonplace subjects. We waved ‘au revoir’ to Chamonix, and the descent began.” The “sublime” goal of successfully climbing the Alps no longer lay in the mountains but in their climbers. Accounts of the Alps did not feature the landscape as the central piece but rather the mountaineers and their resourcefulness. With the mystique and romantic notions of mountain climbing gone after the advent of the superior and dominating mountain climber, the activity was soon called “killing dragons.” In essence, mountain climbing in the Golden Age did not exist in a vacuum. Rather, new conceptual models of physiology, the feminization of climbing itself, and the ability to dominate nature influenced those who undertook the challenge. All of these ideas together coalesced into the brave and capable mountain climber exemplified by Edward Whymper: “Climb if you will, but remember that courage and strength are nought without prudence, and that a momentary negligence may destroy the happiness of a lifetime. Do nothing in haste; look well to each step; and from the beginning think what may be the end.” !



A Brief Review of Neurogenetic Tools in Drosophila
Linnie Jiang
Illustration by Kyosuke Mitsuishi

There are few organisms more annoying than a fruit fly. Found everywhere and seemingly indestructible as a nuisance, the common fruit fly is as trite as it is tiny. Yet, among the myriad model organisms biologists utilize to study life, one of the most widely studied is the fruit fly, Drosophila melanogaster. These model organisms are species that have much simpler physiologies than our own; hence, we study them to simplify the questions we have about ourselves. If we can find answers about a similar organism, those answers may be close to the truth in humans. Part of the power of working with flies is the large variety of genetic tools developed for experimental manipulations. Neuroscientists want to understand how information flows through the brain, from sensory reception to output as behavior. What does each neuron in the brain do, and how are these neurons linked together? How do the connections in such a network affect their function? By changing the activity of one neuron at a

time and observing the behavioral effects, scientists can piece together the inner workings of the brain. In order to do so, however, scientists require powerful and specific molecular tools. Luckily, in Drosophila, already-developed genetic techniques allow scientists to be spatially precise in their experiments, as they can pinpoint the exact neurons of interest. Furthermore, these tools let scientists control the neurons they’ve handpicked with nothing more than light or temperature, changing the activity of specific neurons at will. Transgenes are the crux of this power. These are genes that don’t naturally belong to the organism but have been inserted into its genome. Transgenes are useful because genes are “transcribed” into mRNA, which is then translated into the proteins that carry out most specialized cellular functions. In 1993, geneticists Brand and Perrimon published a paper describing the adaptation of a yeast transcription factor (a protein)



for use in Drosophila to express a transgene of choice in a particular subset of neurons. This protein, called GAL4, is extremely powerful because lines of flies can be engineered in which GAL4 is synthesized in a particular pattern of neurons. GAL4 acts to control whether another protein is created: when GAL4 is expressed, it binds a DNA sequence known as the Upstream Activation Sequence (UAS) and drives expression of whatever responder gene sits downstream of it. The responder can be anything, ranging from a fluorescent marker for the cell to a protein that blocks the neuron’s activity. With this information, researchers can specifically manipulate neurons based on their GAL4 expression (Duffy 2002). The GAL4/UAS system has thus allowed researchers to achieve spatial precision in controlling which neurons express what. New tools have even been developed as a consequence of this GAL4/UAS expression system in which the intersection of GAL4 expression from two lines narrows down the targeted neurons, as sketched below (Luan et al. 2006).
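In spirit, the GAL4/UAS system works like a logical AND between “where GAL4 is made” and “which UAS responder is present,” and intersectional drivers add a second AND. The toy Python sketch below illustrates only that targeting logic; the driver-line names and neuron labels are invented for the example:

# Toy model of GAL4/UAS targeting logic (illustrative only, not laboratory software).

gal4_line_a = {"PN-3", "KC-7", "MBON-2"}   # neurons where hypothetical driver line A makes GAL4
gal4_line_b = {"KC-7", "MBON-2", "LH-9"}   # neurons where hypothetical driver line B makes GAL4

def expressed(gal4_neurons, responder="UAS-GFP"):
    """A UAS responder is expressed only in cells where GAL4 is present to activate it."""
    return {(neuron, responder) for neuron in gal4_neurons}

# A single driver line labels its whole expression pattern:
print(expressed(gal4_line_a))

# Intersectional (split-GAL4-style) targeting: requiring both hemi-drivers
# narrows expression to the overlap of the two patterns.
print(expressed(gal4_line_a & gal4_line_b))   # only KC-7 and MBON-2 remain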

Transgenes are the crux of this power. These are genes that don’t naturally belong to the organism but have been inserted into its genome.

But what do we want from this spatial specificity of genetic expression in the brain? Neuroscientists want to understand the circuitry of information processing in the Drosophila brain. While progress has been made in delineating the function of certain types of neurons, many gaps still exist in mapping how those neurons interact with others and what roles they play in the neural network—what neuroscientists call the connectome of the brain. By creating a model of neuronal connections, researchers can investigate each link by inhibiting or activating one neuron in the web and examining the effects. That is, in loss-of-function experiments, researchers eliminate the action of some neuron to determine what the neuron is necessary for; conversely, in gain-of-function experiments, researchers activate the neuron to investigate what the neuron may be sufficient for. Techniques such as thermogenetics and optogenetics are especially useful in determining the function of neurons in loss-of-function (inhibition) and gain-of-function (activation) experiments. While the GAL4/UAS system provides spatial precision over which particular neurons are affected, these tools allow for temporal precision over when these neurons are influenced. Thermogenetics grew to be useful with the transgene shibire-ts1. Neurons send electrical signals as well as chemical signals to local neighbors. Shibire-ts1 works by blocking these chemical signals and thus preventing

the neuron from continuing to be active. The catch, though, is that the protein coded for by this transgene only creates such inhibition at temperatures above 29°C (Gonzalez-Bellido et al. 2009). When experimenters express the shibire-ts1 transgene in a fly, changing the temperature of the fly’s environment in effect flips a switch that controls whether the neuron is inhibited. Doing so allows for great temporal control of neural activity. Though shibire-ts1 is mostly useful for inhibition experiments because it prevents neural activity, other transgenes exist for use in activation experiments. For example, in Drosophila the gene dTrpA1 codes for a temperature-sensitive ion channel that is blocked at lower temperatures but open at temperatures above 25°C. At the higher temperature, the channel allows positively charged cations to flow through. This flow of cations causes an influx of positive charge into the cell that depolarizes the membrane. As a result, the neuron fires an action potential and sends an electrical signal (Owald et al. 2015). With this mechanism, scientists can conduct gain-of-function experiments that involve stimulating particular neurons and observing the behavior that follows. These observations may help establish whether the neurons are sufficient to induce an action. The two temperature thresholds are sketched below.
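A minimal sketch of that switch logic, using the two thresholds quoted above (29°C for shibire-ts1, 25°C for dTrpA1); this illustrates only the decision rule, not the underlying biophysics:

# Toy model of thermogenetic switching in Drosophila (illustrative only).

def neuron_state(transgene: str, temperature_c: float) -> str:
    """Return the expected effect on a targeted neuron at a given temperature."""
    if transgene == "shibire-ts1":
        # Above ~29 C the temperature-sensitive protein blocks chemical
        # signaling, silencing the neuron; below that, the neuron behaves normally.
        return "inhibited" if temperature_c > 29 else "normal"
    if transgene == "dTrpA1":
        # Above ~25 C the channel opens, cations flow in, the membrane
        # depolarizes, and the neuron fires.
        return "activated" if temperature_c > 25 else "normal"
    raise ValueError("unknown transgene")

for t in (22.0, 27.0, 31.0):
    print(t, neuron_state("shibire-ts1", t), neuron_state("dTrpA1", t))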

...by changing the temperature of the fly’s environment, researchers can in effect flip a switch to change whether the neuron is inhibited.

Like thermogenetics, optogenetic tools are also popular for activation experiments. Borrowing from microbial biology, neuroscientists Zemelman and Miesenbock have employed light-activated channel proteins known as channelrhodopsins. When light of a certain wavelength strikes one of these proteins, the channel opens and allows positively charged ions to flow through, depolarizing the membrane and allowing an action potential to fire. Particular use of these channelrhodopsins has been made with the development of new variants that respond to red light, which is beneficial because flies cannot see light in this range of wavelengths. This precaution prevents extraneous variables, such as light visible to the fly (which may provoke a response), from affecting behavior, so that the true effects of neuron manipulation can be correctly quantified (Owald et al. 2015). With this tool, specific neurons in flies can be precisely excited by simply turning on a corresponding wavelength of light. While optogenetics has so far been well developed only for activation experiments, new tools



have also emerged for its use in inhibition experiments (Mohammad et al. 2017). In optogenetics, the cation channels traditionally utilized for activation serve their purpose well because they allow an influx of positive charge into the cell, which in turn depolarizes the cell membrane. Historically, optogenetic inhibition has instead been achieved via chloride or proton pumps, which hyperpolarize the neuron by pumping negatively charged ions in or positively charged ions out, but these pumps require massive amounts of light. Newer developments include anion channels that allow negatively charged ions to flow into the cell, lowering the likelihood of the neuron sending electrical signals and thus inhibiting the neuron. This new class of anion channelrhodopsins from the alga Guillardia theta (GtACRs) transports ions across the membrane more effectively than traditional inhibitory optogenetic tools. This is an important development because the signal can be transmitted more quickly and requires less light for activation (Mohammad et al. 2017). The reversibility of thermogenetic and optogenetic options is key to their utility. That is, researchers can in effect switch neurons from activated to silenced and back again by simply changing one variable—temperature or light. This is a great advantage over other traditional methods of silencing neurons, which operate constitutively, or throughout the fly’s lifetime. For example, one widely used transgene in inactivation experiments is the Kir2.1 gene, which codes for a potassium channel that sits open in the membrane, allowing positively charged potassium ions to leak out of the cell. The resulting hyperpolarization of the membrane potential leads to inhibition of the neuron, making it less receptive to signals from its neighbors (Hodge 2009). Other irreversible methods involve the use of RNA interference, a genetic technique that involves the introduction of RNA that pairs with its complementary mRNA (the precursor to protein)

Researchers can in effect switch neurons from activated to silenced and back again by simply changing one variable—temperature or light.

and hence blocks protein synthesis (Hodge 2009). The traditionally used methods of this type are all inhibitory (and in some cases, though not quite as often, stimulatory), with effects that persist in the fly from the first second of its lifetime to the last; thus, the fly’s development and behavior could also be impaired as a result of RNA interference. While all the methods so far discussed are extremely

useful, they each have suboptimal aspects. Problems with optogenetic methods involve the effects of light on the tissue surrounding the neuron(s) of interest, especially if the wavelength of light used is not in the red range. Thermogenetics, on the other hand, may not be as temporally precise as optogenetics, because a temperature change takes time, unless an infrared laser beam can be used with precision. A few years ago, molecular biologist Bath and colleagues published a new device called FlyMAD that allows very fast thermogenetic manipulation of freely walking flies via an infrared laser, in an attempt to conquer the poor temporal precision of most thermogenetic approaches (Bath et al. 2014). Furthermore, while these methods are advantageous for studying the function of each neuron, researchers still want to understand how, mechanistically, information is transmitted within the overall circuitry of the fly brain. Doing so requires recording signaling between neurons, which normally necessitates tethering the fly to a support so that instruments can precisely record a given neuron. Currently, further research is being done to hone our ability to record neural activity in a moving fly (Owald et al. 2015). Loss-of-function and gain-of-function experiments are only some of the useful ways to manipulate neurons; for example, to obtain images of what particular neurons involved in Drosophila olfaction look like in the brain, neuroscientists Aso and colleagues expressed fluorescent markers in certain neurons via the GAL4/UAS expression system to depict the neuronal architecture of brain structures involved in olfaction (Aso et al. 2014). While these methods of neurogenetic manipulation in Drosophila may not be brand-new, they continue to play an important role in scientific advancement and continually benefit the enhancement and development of new tools like optogenetic inhibition. As new methods for neuronal control proliferate, neuroscientists have a greater chance than ever at truly understanding what a fly’s brain is doing as she buzzes around the world. It is clear that methods for optogenetic manipulation, while established, are still progressing and may yield greater power in the future. We can only hope that further advances will be made closer to mammalian systems, so that the questions we answer in more tractable organisms can also find answers in systems closer to humans. !



C.R.I.S.P.R-Cas9: Beginning of the End?

Aunoy Poddar
Illustration by Eliana Whitehouse

Oh, wonder!
How many goodly creatures are there here!
How beauteous mankind is! O brave new world,
That has such people in’t!
-Shakespeare, The Tempest

The intersection of science and morality almost always evokes the fear of the inevitable. Often realized in literature, science-gone-wrong never fails to stoke the flames of our imagination. Mary Shelley gave us Frankenstein’s monster, a synthetic living creature that caused us to question the boundaries of where science should extend its reach. Not only that, the stitched-up monstrosity conjured a brand of fear many authors have stamped on their works. Aldous Huxley, who borrowed the title of “Brave New World” from Shakespeare’s The Tempest, carved up a dystopian society mired in extreme implementations of psychological conditioning and genetic editing. These themes have not gone away, as seen in Kazuo Ishiguro’s 2005 Booker Prize-shortlisted novel Never Let Me Go. His characters are clones, unfortunate individuals born to be harvested for their organs. Think these examples are too esoteric? You will be glad to hear J-Lo is producing a TV series on biological warfare called C.R.I.S.P.R. Perhaps now would be a good time to take stock of current, and accurate, scientific progress. Never mind a synthetic human; a synthetic yeast chromosome is the best we have been able to do. Cloning has not quite become the terrifying reality that the ’90s made us believe it would be, and our world has become more fearful than brave. Nevertheless, when it comes to genetic editing technologies like CRISPR/Cas9, the advancement of science has gotten to the point at which fear might be appropriate. The CRISPR/Cas9 genetic editing technology has taken the scientific world by storm, and rightly so. First used to edit human cells in 2013 by the Broad Institute’s Feng Zhang, the biochemical tool has revolutionized every biology laboratory that can get its hands on the new technology. CRISPR/Cas9 is a two-part system: a guide sequence derived from DNA repeats called Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) directs an enzyme called Cas9 to cut a matching DNA target. While this may not sound too attractive, the system has enabled scientists to precisely alter sequences

in the DNA of a whole host of eukaryotic organisms, such as mice and humans. This has been enormously useful in characterizing the behavior of specific genes. What does hypothetical gene E on chromosome 16, position 125,306, do? With CRISPR, scientists can play God with the genetic code and delete gene E, tweak gene E, replace gene E, or even make gene E’s proteins glow in the dark. While the tool is immeasurably useful, ethical considerations have arisen. The controversy with CRISPR began when scientists attempted to manipulate human embryos.

With CRISPR, scientists can play God with the genetic code and delete gene E, tweak gene E, replace gene E, or even make gene E proteins glow in the dark. In 2015, Chinese scientists attempted to edit human embryos with the CRISPR/Cas9 technology (Liang). While these embryos were non-viable, meaning they were from the start unable to become real humans, it incited an intense debate regarding the use of CRISPR/ Cas9 and the experimentation on human embryos. Their subsequent Nature publication has led to multiple calls for a moratorium on any further research on embryonic genetic editing until some safe, ethical standard can be established (Lanphier). Additionally, the United States refuses federal funding to “research in which a human embryo is intentionally created or modified to include a heritable genetic modification” (Meštrovic). Although it seemed to many this deterrent would prove effective, in August 2nd of this year, Shoukhrat Mitalipov’s research group became the first U.S. team to breach the moratorium. Attempting to demonstrate that germ line editing could cure the blood condition beta-thalassemia, Mitalipov’s group inserted a recombinant CRISPR Cas9 system at the moment of fertilization. This recent publication has unleashed a similar wave of renewed criticism and worry. The ethical dilemma of the CRISPR/Cas-9 system, as it stands now, are two fold. One, the tool is not completely accurate. While this is mostly irrelevant in many research pursuits, a tiny margin of error becomes greatly magnified in the context of human embryos. The persistent fear of off target effects are essential to address before any form



of clinical trial should ever be attempted. These off-target effects could manifest in nasty, and more importantly, unpredictable forms in seemingly viable subjects. And by “viable subjects,” I mean living, conscious human beings. Second, the social implications of an unprecedented form of artificial selection are enormous. While researchers like Mitalipov are interested in correcting rare heritable disorders, the line in the sand will face harsh waves in the future. Such a powerful correcting tool will force us to decide what constitutes an error in the first place. Lethal genetic disorders are straightforward. Height, skin tone, and eye color are not. So what to do? Stopping the inevitable progress of science has always been an extremely difficult endeavor. However, the problems that lie ahead may require an extraordinary effort. A moratorium has been called for; it should also be enforced. Marcy Darnovsky, Executive Director of the Center for Genetics and Society, worries that “[we’re] creating a world in which the already privileged and affluent can use these high-tech procedures to make children who have some biological advantages” (Stein). And if that day comes, we will be facing a crisis. A crisis not only of science, policy, or economics, but a much more unsettling crisis of classification. While science can never answer a moral question, literature’s prophylaxis has long been underway. A motif that has worked its way into many dystopian societies is the damage done by a genetic hierarchy. Gattaca, a poignant dystopian film made in 1997, follows the story of two men broken by the genetically engineered system that controls them. One is the result of a natural birth who dreams of a job given only to the genetically qualified ubermensch; the other is an ubermensch designed for first place who perpetually ends up in second. Theatrics aside, the movie harrowingly imagines the inequality prevalent in a future society divided between genetically “superior” and “inferior” humans. Superior humans get better health, better looks, better jobs, and better lives. The rich and poor live worlds apart, as natural-birth citizens are profiled and discriminated against. Neither the protagonists nor the audience comes away thinking this world is for the better. Huxley envisioned his own hierarchy in “Brave New World,” with the superior Alphas lounging on top, Betas below, and Gammas, Deltas, and Epsilons trailing down the list. Each group is appropriately conditioned for its future environment and occupation. The Alphas are appropriately Adonises, par excellence in mind and body. The Epsilons are stunted, conditioned by fetal alcohol syndrome and deafening noise to enjoy the noxious fumes of the janitorial stations they will occupy. Ominous, and a century before its time. Regardless of implementation, the technology to mold the genetic code of future children

carves a slope, one so slippery it may be impossible ever to recover from a fall. And to what end? The World Controller Mustapha Mond argues that genetic segregation and conditioning are essential to abrogate suffering: “We prefer to do things comfortably.” This is neither naive nor malicious. Mitalipov echoes Mond in an interview, asserting, “We have intelligence to understand diseases, eliminate suffering. And that’s what I think is the right thing to do” (NPR). But Huxley’s Savage disagrees:

But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin. Not to mention the right to grow old and ugly and impotent; the right to have syphilis and cancer; the right to have too little to eat; the right to be lousy; the right to live in constant apprehension of what may happen tomorrow; the right to catch typhoid; the right to be tortured by unspeakable pains of every kind. - Huxley

Rather than a question of ends, we are left with a question of freedom. If the past is any indication, the future of CRISPR and genetic editing technologies should inspire more optimism than fear. As early as the ’70s and ’80s, the ethical and moral dilemmas of cloning dominated the scientific and cultural landscape. Dolly the clone, the most famous sheep since Polyphemus’ food-turned-Odysseus-savior, became a cultural icon of scientific advancement. Arnold Schwarzenegger was in a terrible movie about clones in the 2000s, and Ishiguro’s novel has even spawned movies following the clones who are glorified organ sacs for their originals. Yet, now in 2017, the fears about cloning have almost completely disappeared from the public consciousness. In his book The Lives to Come, Dr. Philip Kitcher of Columbia University discusses the unlikelihood of the future becoming the genetic dystopia authors are quick to imagine. The viability of cloning was so poor, and the pressure to advance the technology so slight, that the feared consequences of its realization never came to fruition. He sees CRISPR in the same way and chalks the fear up to misinformation and alarmism. So while we are admittedly very far from any of the dystopias conjured by the literary world, CRISPR has introduced a new wrinkle into the fold of the conversation. CRISPR, without editing any babies, is changing the world and having an impact, and we can be content with that progress for now. Still, the far-off implications may come sooner than expected, and conversations to gently guide us away from outcomes that disturb even the bravest of us remain important as CRISPR/Cas9 becomes more advanced and widely implemented. As for the societies impregnated with designer babies, we will leave them to the writers to figure out for now. !



The following three articles have been reprinted from the Columbia Science Review blog.

Running, Shaving, and Reasoning: An Exploration of Mathematical Paradoxes By Tanvi Hisaria

Consider the following situation: a runner is competing against a tortoise in a race, and the tortoise is given a head start of 1 meter. Now the runner starts running. In the time it takes him to run that 1 m, the tortoise has moved 0.5 m. In the time it takes him to cover that 0.5 m, the tortoise has moved another 0.25 m. In the time it takes him to cover the 0.25 m, the tortoise has moved another 0.125 m, and so on. In whatever time the runner takes to cover the distance just moved by the tortoise, the tortoise moves a little bit more. It would seem that the runner can never overtake the tortoise, only reduce its lead. If you actually perform such an experiment, however, the runner simply runs past the tortoise with no regard for the mathematics involved. How can this be explained?

This is a puzzle that troubled Zeno, an ancient Greek thinker famous for pointing out paradoxes in logic and mathematics. Perhaps his most famous paradox, the problem described above led to a revolution in mathematical thinking about the concept of infinity. From the above example, consider the sum 1 + (1/2) + (1/4) + (1/8) + .... This value gets closer and closer to 2, which is mathematically stated as "the sum tends to 2 as the number of terms tends to infinity". Most people understand this result as: the value gets closer and closer to 2 but never actually reaches it, because each new term covers only half the remaining distance. Considering what happens in real life, however, one does actually reach 2 and overtake the tortoise. This illustrates a counterintuitive aspect of infinity: in the limit, you are not just infinitely close to the value, you are actually there. Thus, infinite sums can have a definite value, and infinity is attainable! (A one-line derivation follows this article.)

Now, let's consider another mathematical puzzle: Russell's paradox. In a town there exists only one barber, and he shaves all the men who do not shave themselves. Thus, there are men who shave themselves, and men who are shaved by the barber. But who shaves the barber? If he shaves himself, then he cannot be the barber, as the barber shaves only the men who do not shave themselves. If he does not shave himself, then he also cannot be the barber, as the barber shaves all men who do not shave themselves. A paradox arises, and the barber seems unable either to shave or not to shave himself.

This story is an example of a broader problem in set theory (the mathematical study of collections of objects). Let S be the set of all sets that do not contain themselves; does S contain itself? If yes, then S contradicts its own definition, since S contains only sets that do not contain themselves. If not, then S should contain itself, as S is a set that does not contain itself. While the barber paradox has a solution in popular lore (the barber is a woman and hence does not need to shave!), Russell's original paradox is not so simple. Early proposed solutions questioned every aspect of set theory: the definition of sets, hierarchies of sets, and even the nature of logic itself. Consequently, the paradox introduced new conditions and axioms that have strengthened the foundations of set theory.

These two examples demonstrate the utility of paradoxes in mathematics. Such deconstructions of logic invite mathematicians to delve deeper into a problem and find flaws in reasoning, demonstrating a good strategy for solving any problem in general: to find the right answers, you have to first ask the right questions. There are various other examples, such as the Bertrand paradox, that have led to a clearer definition of the term "random" in probability problems. These paradoxes have questioned previously unchallenged areas of mathematics. In the process of resolving a paradox, every term of the problem is examined meticulously, and that scrutiny has produced the extremely precise and well-defined discipline that exists today. !
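For the symbolically inclined, both arguments above compress to a line each; the following is a standard textbook sketch. The runner's successive catch-up distances form a geometric series with ratio 1/2, whose closed form is

$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{k=0}^{\infty} \left(\frac{1}{2}\right)^{k} = \frac{1}{1 - \frac{1}{2}} = 2,$$

so the runner draws level after exactly 2 meters, a finite distance covered in finite time. Russell's set is equally compact: define $S = \{\, x : x \notin x \,\}$, and membership immediately yields

$$S \in S \iff S \notin S,$$

a contradiction on either horn, which is why naive set theory had to be rebuilt on stricter axioms.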

No Professor, It Isn’t Obvious. How reductive language limits students’ learning in STEM classrooms at Columbia.

By Maria MacArdle

"But obviously, the rest is self-explanatory." Your professor puts down the chalk, turns to their notes, and prepares to move to the next topic. Your stomach drops. You look down at the unfinished derivation in your notes and at the QED scribbled on the blackboard. Does everyone else know what is happening here? Did you miss a step? Is this based on something from last class? You're confused, frustrated with yourself for being confused, and mostly wondering why nothing about this seems obvious at all.



That’s because it’s not. Realistically what you are learning in any given STEM class is the culmination of the life’s work of many scientists. It wasn’t obvious to Faraday that electricity and magnetism were linked, let alone the complex math that accompanied his theories. So why would his years of work, or the work of any other prominent scientist, appear obvious to anyone else? The language we use in STEM classrooms can have a distinct effect on the way material is learned. Beyond the clarity of the lecture, the tone an instructor takes when introducing difficult topics is something we don’t often think of, but it can have a distinct effect on students’ confidence and in turn, their ability to learn. Barnard’s own president, Dr. Sian Leah Beilock, has done extensive research on how math anxiety affects a student’s ability to learn the subject. Math anxiety is a perceived predisposition that you are inherently inferior in a subject, which has a clear and destructive effect on your ability to learn it. One way this dangerous correlation is harbored in Columbia’s classes is in the reductive language used to discuss challenging topics. Words and phrases such as “obviously”; “clearly”; “easy”; “self-explanatory”; and other diminishing terms we use without thinking can have a real effect on learning. When a professor describes a topic as “easy” or “straightforward,” we can often assume that everyone else in the room is at that level, discouraging us from asking questions of both our teachers and fellow classmates. Beyond this, language that makes students feel insecure can result in a pseudo-elitist attitude among students. With this attitude, we pretend to understand what we do not in order to appear on-board with what is expected to be “obvious.” In doing so, we put down those students who expose their confusion openly, or worse, we can use this language to exclude people from STEM fields altogether. More than anything else, students I interviewed were concerned about how it turns people off from studying STEM fields. Many of these students are deeply committed to an education in a hard science; 20

however, this kind of language has made them question their ability to do so, making it even more limiting for those who don’t see themselves as “math people” to hear language that further supports an assumption that if these topics don’t make sense to you automatically, you just don’t have the mind for them. There is a lot of power in the way we, as students, choose to speak to each other about our classes. With this in mind, we can try to avoid words and phrases that close doors to further discussion. We can make a point to indicate to ourselves and others when something is difficult, and understanding it is a non-trivial accomplishment. We can respect the scientists that came before us and the intellectual leaps they took that are in no way “self-explanatory.” In this age especially, the last thing we want to do is discourage people from studying the sciences. The stereotype of the lone genius – someone who indeed would find these topics “obvious” – remains a leading idea of how scientific discoveries happen. However, this is in no way the truth. STEM fields have always been a collaborative effort, and any form of exclusivity, be it active or passive, in the language we choose, is counterproductive. While STEM classes do not have a tendency to welcome every identity into the room equally, it is vital to understand that a diversity of perspectives is necessary for great feats of learning to occur. Therefore, it is on us, as students, faculty, and members of an academic institution, to understand and counteract the effects our language can have on the learning of ourselves and others. !

Let’s Not Sugarcoat It

By Mariel Sander

Columbia students are no strangers to sugar. Sometimes it seems like not a week goes by without a club or company giving out free Insomnia cookies on Low Beach or selling Krispy Kremes in Lerner. So when my roommate Amelia told me she'd gone "sugar-free" over the summer, I laughed. We had consumed ridiculous amounts of snack foods together throughout our freshman year, and the idea that she had suddenly replaced cookies with carrot sticks and brownies with broccoli seemed absurd. But she wasn't joking. She'd stopped consuming ice cream, bubble tea, and even foods like bacon and marinara sauce, things I hadn't even realized contained added sugars. As someone with a huge sweet tooth (deep-fried cheesecake with whipped cream and sprinkles is my all-time favorite dessert), I thought it couldn't hurt to do a little more research before following my roommate's example. Yes, we all know eating a lot of sugar is unhealthy—but was it really as bad as my roommate seemed to think?

According to the World Health Organization, the answer is an emphatic yes. A 2015 report stated that added sugars should sit at a measly 5% of our total energy intake. For context, if you eat 2,000 calories a day, one can of soda at 150 calories already puts you at 7.5% (the arithmetic is spelled out at the end of this article). This means that even if you eat salad for the rest of the day, you've already exceeded your recommended daily added sugar intake.

So why exactly is added sugar so bad for you? To start, the word "sugar" covers three main types of sweetener. The first, "added sugars," includes high fructose corn syrup, sucrose, and dextrose. There are also "natural sugars," such as the fructose in honey or the lactose in milk. Finally, there are artificial sweeteners marketed as "sugar replacements," like aspartame, stevia, or sucralose. As it turns out, not all sugars are created equal.

Several animal studies have investigated the effects of different sugars on memory, anxiety, and weight gain. In an experiment at Waikato University in New Zealand, 45 rats were fed either sucrose or honey (which contains naturally occurring fructose) for 13 months. Scientists assessed memory through an object recognition test and a Y maze. Additionally, they studied the anxiety levels of the rats using an elevated plus maze, a cross-shaped contraption with open and enclosed arms, designed to play rats' fear of open areas against their desire to explore. Throughout the 13-month period, the scientists found that the honey-fed rats displayed significantly less anxiety than rats on the sucrose diet. Furthermore, by 9 months, assessments indicated that honey-fed rats had better spatial memory than both the sucrose-fed rats and the control (sugar-free) group. Honey, which contains numerous antioxidants and bioactive compounds, seemed to be the key factor here, since honey-fed rats tended to outperform both the rats on added sugar and the rats who consumed no sugar. The researchers then reran this experiment, this time measuring the rats' weight gain. Overall, they found that rats on a honey or sugar-free diet had 23.4% less weight gain and 9.2% less body fat than those fed sucrose. This may be explained by the different metabolic pathways that sucrose and fructose enter, which elicit different hormonal reactions. The disparity in weight gain could also stem from the fact that digestion of honey produces hydrogen peroxide, which imitates the hormone insulin and affects glucose processing in the body.

And what about "sugar replacements"? If you look at the label on a bottle of Coke Zero, you'll see—as the name suggests—a string of zeroes. Zero calories, zero sugars. However, if you look at the ingredients, you'll spot an artificial sweetener called aspartame. It's not technically sugar, so does that make it a guilt-free way to satisfy your sweet tooth? Earlier this year, an online review published findings on the effects of artificially sweetened beverages (ASBs) compared to those of sugar-sweetened beverages (SSBs) on people. The article cited an experiment proposing that consumption of ASBs adversely affected the essential microbes in the digestive tract, which in turn led to the development of glucose intolerance. In addition, it was hypothesized that ASBs trigger "compensatory mechanisms": the artificial sweeteners in ASBs seem to interact with our sweet taste receptors in a way that causes our bodies to crave sugar and our appetites to increase. On a psychological level, ASBs can also be trouble—if we believe that we're consuming fewer calories, we may feel more inclined to indulge in more desserts.

It is important to keep in mind that the data from many of these experiments may also have been subject to cherry-picking. Historically, organizations with a stake in the result, such as the Sugar Research Foundation, preferentially funded studies that downplayed the negative health effects of sugar in order to support the sugar industry. A similar problem arises in many studies of ASBs and SSBs, as much of the research is industry-sponsored, leading to conflicts of interest that may bias the results.

With all this in mind, for the month of October, I decided to give myself a challenge: I went on an added-sugar-free diet. After the first week, I noticed a marked change in how often I craved sugary snacks throughout the day. That said, come November 1st, I still plan on going to Duane Reade and stocking up on discounted Halloween candy. But instead of binge-eating it, I'll be following the tried and true motto of eating in moderation. There's no point in trying to sugarcoat it: the amount of added sugar we're accustomed to consuming far exceeds the amount we actually need. !
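As promised above, the arithmetic behind the WHO figure is quick, using the article's own numbers (a 2,000-calorie diet and a 150-calorie can of soda):

$$0.05 \times 2000 \text{ kcal} = 100 \text{ kcal of added sugar per day}, \qquad \frac{150 \text{ kcal}}{2000 \text{ kcal}} = 7.5\% > 5\%,$$

so a single can already overshoots the entire daily allowance by half again.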


A case for more science in the Core By Keefe Mitman, Anu Mathur, Noah Goss, Aunoy Poddar

What is Columbia College known for if not its Core Curriculum? In our four years here, we all dig into an impressive array of texts from the Western canon in Literature Humanities and Contemporary Civilization and learn sophisticated writing techniques in University Writing. Beyond the world of Western humanities, students at Columbia are obliged to conquer a foreign language, engage with the history and theory of art and music, endure a physical education requirement, familiarize themselves with non-Western cultures, and delve into the beauty of science. While these ancillary requirements succeed in breadth, their depth leaves something to be desired. It seems that these "add-on" requirements aren't always as comprehensive or educational as they intend to be.

The science requirement, as it currently stands, does not guarantee that Columbia College students graduate as well-versed in the sciences as they are in the humanities. This is problematic and antithetical to the college's stated goal for the science requirement. As per the official Columbia College reasoning, "The science component is intended specifically to provide students with the opportunity to learn what kinds of questions are asked about nature, how hypotheses are tested against experimental or observational evidence, how results of tests are evaluated, and what knowledge has been accumulated about the workings of the natural world." To address these substantial goals, Columbia College requires that students take two science classes, in addition to Frontiers of Science. Just two science classes, compared with the full eight semesters' worth of rigorous humanities courses and the four semesters of foreign language that are deemed sufficient to bring students to a benchmark proficiency. Can proficiency in science be taught in just two classes?

What is especially concerning are the peripherally scientific classes that satisfy this already marginal science requirement: Food and the Body, Computing in Context, Symbolic Logic, and Mind, Brain and Behavior, to name a few. While psychology, statistics, computer science, and the like are important and complex fields of study that play a prominent role in the governance of our society, they do not constitute fundamental sources of knowledge regarding the underlying structure of our universe. They should not be considered sufficient substitutes for foundational classes in physics, biology, and chemistry.

When students fulfill the science requirement, they should be studying the molecular structure of cells and their evolution into more complex organisms, or the underlying equations governing the innate details of electricity and magnetism. They shouldn't just study the relevant applications of these ideas—Columbia students should also gain a sense of how such discoveries occur. They should learn the spirit of scientific inquiry. Understanding the underlying ideas that drive scientific innovation and environmental phenomena is inextricably linked with the spirit of the Core and should be reflected in the science requirement.

The consequence of offering numerous surface-level, applied STEM courses that fulfill the science requirement is that most students look to complete that requirement with minimal effort and tend to take the less fundamentally scientific classes. Many non-STEM majors end up enrolling in Weapons of Mass Destruction, Physics for Poets, Introduction to Statistical Reasoning, or Science of Psychology. And, more often than not, these classes serve as remedial courses for those who took AP Statistics or AP Physics in high school. While Columbia does require that students take science classes, there are no rules ensuring Columbia students expand their scientific knowledge beyond their high school curriculum.

The Core demands an understanding of Plato, Nietzsche, and Kant because these authors critically inform discussions of modern-day society and government. Columbia College students receive a rigorous education in the foundational elements of literature and philosophy because this is part of the explicit mission of the Core. That very same mission should serve as a directive for a more foundational education in the sciences. In Contemporary Civilization, we don't talk on a surface level about incomprehensibly advanced topics in econometrics; we read the Wealth of Nations. Why should Weapons of Mass Destruction, a narrow application of physics, be considered a substitute for learning fundamental physics?

An education in the fundamental tenets of science and its methods equips students with effective tools to be persuasive advocates of truth in any field they choose to enter. The Core Curriculum should not treat scientific inquiry as an abstract historical concept, as the "stimulus" for the Enlightenment or a philosophy embedded in Aristotle, Hume, and Locke. Scientific inquiry itself should be etched into the minds of our students as a modus operandi for deconstructing the vast complexity of the physical world around us and for speaking authoritatively as voices for scientific truth.

The science requirement cannot be an obstacle that students skirt around; it must be Sisyphus' boulder: met head-on and rolled up the hill. Scientific knowledge is the endless end in itself. As we move further and further into the "post-truth" era, an era mired in scientific inaccuracies and a prioritization of convenience and feeling over fact, there exists no better defense of scientific truth than knowing where that truth comes from. We hope that the Core Curriculum will address these issues, and we at the Columbia Science Review will continue to strive to spread science throughout our community.

This article was originally printed as an op-ed in the Columbia Daily Spectator.


COLUMBIA SCIENCE REVIEW www.columbiasciencereview.com
