Sigma Volume 5


Volume V


Cover design by Jocelyn Hayes ’23. “When creating this drawing, I began by examining my own views of science. Although I’ve never really considered myself an enthusiast of STEM, I can’t help but acknowledge my own scientific fascination with the human form. Specifically–especially for those who know me or have seen my artwork–my love of drawing skeletons. Therefore, I worked on taking a bit of my own scientific interest while also brainstorming the wide variety of ways the word “science” can exist. However, since I’m in biology this year, I went with several different themes of biology that I recognize. To try to break out of that box, I added measurements, chemistry, atoms, and more. I was really into the idea of creating some sort of character that is a collection of different parts of science. In the end, all these ideas resulted in this creature: a wacky yet somehow entertaining collage made up of various parts of “science.””

Copyright © Sigma Journal, Volume V. All Rights Reserved.

Published by Knepper Press. LaTeX template adapted from The Legrand Orange Book, Version 2.3, from LaTeXTemplates.com.

Volume V, May 2022

Table of Contents Image by Vanessa Gonzalez-Rychener ’24.


Contents

Editors’ Section (α)

1 Editors’ Section . . . . . . . . . . . . . . . . . . . . . 4
1.1 Meet the Editors 4
1.2 Letter from the Editors 6
1.3 Mission Statement 6
1.4 Acknowledgements 6

Articles (β)

2 Biographies . . . . . . . . . . . . . . . . . . . . . 11
2.1 Brown. Ada Lovelace 11
2.2 Rosenberg. Percy Lavon Julian 12

3 Life Science . . . . . . . . . . . . . . . . . . . . . 15
3.1 Apostolopoulos. The Effect of Cobalt Ions on the Activity of Catalase 16
3.2 Chang, Hung, Lee, Li, Lucena, Mardjoko, Tsarnakova, Wang, Wang, Zheng. Microbial Forensics: Identifying Bacteria and Yeast Using Ribosomal DNA Fingerprints 17
3.3 Gordon. Designing A Rubber Band-Powered Plane To Uncover Aerodynamics 22
3.4 Khan. The Inhibitive Effect of Salicylic Acid on Martian Catalase 28
3.5 Lamitina. How to Photograph a Trillion Stars 29
3.6 Nourbakhsh. The Kuwait Oil Fires: an Environmental Disaster 31
3.7 Sleet. The Dangers of Microplastics 33

4 Social Science . . . . . . . . . . . . . . . . . . . . . 37
4.1 Alarcon. To Err is Human, to Forgive Divine 38
4.2 Gonzalez-Rychener. Rebels’ Gradient: A Comparison of Different Kinds of Rebels and Rebellion in Epics 39
4.3 Kennard. No One Cares That You Ran a Marathon 41
4.4 McAllister. Project MK-Ultra 44
4.5 Salipante. The Aversion Project: South Africa’s Attempt to Cure Homosexuality 46

5 Mathematics . . . . . . . . . . . . . . . . . . . . . 49
5.1 Hollingshead, Burton. Proving the Formula for the Fibonacci Sequence 49
5.2 Cardenes, Stern. Doomsday Algorithm 52
5.3 Lamitina, Porco. Using Schrödinger’s Equation to Calculate the Position of an Electron in 4–dimensional Space 54
5.4 Loh. Constructing a Very Special Circle through Six Very Special Points 55
5.5 Noaman, Zhang. Fractals: Finding Perimeters and Areas 60
5.6 Simhan, Bandi. The Fermi Estimate 63
5.7 Sinha, Sayette. Triangular Duel 65
5.8 Wagner-Oke, Anderson-Jussen. All Horses Are The Same Color: Proof by Induction 67

6 Computer Science and Engineering . . . . . . . . . . . . . . . . . . . . . 69
6.1 Anderson-Jussen, Emerick, Harrison. The Gear Chair 69
6.2 Bandi. Engineering Lead Portfolio 71
6.3 Chang. Machine Learning and the Art of Persuasion: Creating Digital Assistant for COVID-19 Vaccine Hesitant Users 74
6.4 Myers, Hopper. Condensed Design Proposal AE&D 77
6.5 Stern. An Ethical Future for Tech 78


α

Editors’ Section

1 Editors’ Section . . . . . . . . . . . . . . . . . . . . . 4
1.1 Meet the Editors
1.2 Letter from the Editors
1.3 Mission Statement
1.4 Acknowledgements



1.1

Meet the Editors

Hannah Chang Editor-in-Chief Slow Walker

Favorite subjects: Computer Science Favorite song: Cariño Favorite movie: Twin Peaks Wants to learn: How to walk faster

Kate McAllister Executive Artistic Editor Plaid Co-Senior Editor

Favorite subject: English Favorite song: Amergio by Patti Smith or Mad as a Hatter by Larkin Poe Favorite TV show: The Walking Dead Wants to learn: How to do a headstand

Harry Burton Executive Article Editor Sushi Enthusiast

Favorite subject: Chemistry Favorite song: Brahms Symphony No. 2 Favorite TV show: Breaking Bad Wants to learn: How to whistle



Vivian Loh Executive Technical Editor Pro Crastinator

Favorite subject: Geometry Favorite song: Fire by Elektronomia Favorite TV show: Don’t watch any Wants to learn: Better time management

Vik Sinha Executive Mechanics Editor

Favorite subject: Discrete Maths Favorite song: Thunder by Imagine Dragons Favorite movie: The Big Bang Theory

You Future Sigma Editor Sigma’s Secret Admirer

Favorite subjects: Anything! Favorite song: Sumthing by the Sigma Editors Wants to learn: How to be a Sigma Editor Fun fact: I’m a Sigma editor for the 2022-2023 school year!



1.2

Letter from the Editors

Welcome to the fifth edition of Sigma! Our name, Sigma, is by definition the 18th letter of the Greek alphabet and is used as the mathematical notation for a sum. Though this name has come to unify a diverse array of student work in the fields of STEM, this year we have extended Sigma to encompass academic writing from all fields of study, to recognize our students’ talented work in a wider range of material and to present the advancing practice of interconnection between STEM and the social sciences. This year, as we mobilize in the process of recovery from the COVID-19 pandemic, we have witnessed a shift in the spirit of approaching different academic disciplines. While last year was inspired by the incredible contributions of our nation’s scientists who allowed us to persevere through the pandemic, this year we also witnessed collaboration among the humanities, social sciences, and STEM disciplines in analyzing and discovering vital truths about our changing world—for instance, the reflection of socioeconomic inequalities in the face of COVID-19, or the dangers of perpetuating existing social biases in artificial intelligence and autonomous technology. Our own classrooms saw the flourishing of interdisciplinary course material, with teachers crossing departments to create new courses that widen the academic lens through which students conduct critical thinking. This inspiration led many of us to embrace the magazine in a new light, emphasizing interdisciplinary collaboration for the sake of improving the world around us. In the legacy of Dr. Keith Bemer – a lifelong believer in the pursuit of knowledge for all and the power that STEM education can hold – we chose this mission of impact as our fifth publication’s theme, as we truly see the seed he planted grow and continue to blossom for years to come. As you read this diverse array of student work – from complex aerophysics research papers to proposals of machine learning algorithms to critical essays on cultural appropriation – we hope that you, too, will learn something that will inspire you to change the world through academics. From our LaTeX files to your hands, we hope you enjoy the fifth issue of Sigma.

- The Sigma Editorial Team

1.3

Mission Statement

In Dr. Keith Bemer’s vision, Sigma is Winchester Thurston’s student-run academic journal. The goal of our annual publication is to showcase exceptional student work at all academic-experience levels to a broad and diverse audience while also providing WT community members with the experience of publishing in a professional-style journal. Staff members and editors work diligently year-round encouraging submissions, selecting work for publication, aiding in the revision process, and LaTeX’ing and formatting articles to present you with this edition of Sigma.

1.4

Acknowledgements

We would like to thank the following people who helped make this issue of Sigma successful:
• Mr. Nassar, our faculty advisor, for providing us with all of the support, knowledge, and motivation we could have ever asked for in creating this issue of Sigma.
• Dr. Olshefski, our new addition to the team as a co-faculty advisor, for providing professional insight and inspiration to expand the magazine to other fields of academic writing.
• Student submitters, for having the drive to continue to research despite the loss of a traditional school year, the courage to submit their work to be published, and the perseverance to work with staff and editors to perfectly polish the articles you see today.
• The WT Faculty, for tirelessly supporting our publication.
• Sigma’s dedicated staff (Marco Cardenes, Jerry Zhang, Daniel Kochupura, Luke Lamitina, Brynne McSorely, Alex Sayette, Delia Brown, Helen Zhang, Tommy Gordon, Hannah Hammons, Felix Gamper) for aiding greatly in the submission, revision, formatting, and publication processes.
• The incredible leading Sigma editors of years past (Anna Nesbitt ’21, Christopher Porco ’20, Aria Eppinger ’20, and Harrison Grodin ’18) for providing us with the foundation for an incredibly successful fifth edition.


• The WT Fund, for supporting our journal and the Sigma club as a whole.
• David Gilbreath and Knepper Press, for aiding in printing our fifth edition of Sigma.
• Dr. Keith Bemer, for conceiving the idea for Sigma and enabling our publication to become a reality.




Articles

β

2 Biographies . . . . . . . . . . . . . . . . . . . . . 11
2.1 Brown. Ada Lovelace
2.2 Rosenberg. Percy Lavon Julian

3 Life Science . . . . . . . . . . . . . . . . . . . . . 15
3.1 Apostolopoulos. The Effect of Cobalt Ions on the Activity of Catalase
3.2 Chang, Hung, Lee, Li, Lucena, Mardjoko, Tsarnakova, Wang, Wang, Zheng. Microbial Forensics: Identifying Bacteria and Yeast Using Ribosomal DNA Fingerprints
3.3 Gordon. Designing A Rubber Band-Powered Plane To Uncover Aerodynamics
3.4 Khan. The Inhibitive Effect of Salicylic Acid on Martian Catalase
3.5 Lamitina. How to Photograph a Trillion Stars
3.6 Nourbakhsh. The Kuwait Oil Fires: an Environmental Disaster
3.7 Sleet. The Dangers of Microplastics

4 Social Science . . . . . . . . . . . . . . . . . . . . . 37
4.1 Alarcon. To Err is Human, to Forgive Divine
4.2 Gonzalez-Rychener. Rebels’ Gradient: A Comparison of Different Kinds of Rebels and Rebellion in Epics
4.3 Kennard. No One Cares That You Ran a Marathon
4.4 McAllister. Project MK-Ultra
4.5 Salipante. The Aversion Project: South Africa’s Attempt to Cure Homosexuality

5 Mathematics . . . . . . . . . . . . . . . . . . . . . 49
5.1 Hollingshead, Burton. Proving the Formula for the Fibonacci Sequence
5.2 Cardenes, Stern. Doomsday Algorithm
5.3 Lamitina, Porco. Using Schrödinger’s Equation to Calculate the Position of an Electron in 4–dimensional Space
5.4 Loh. Constructing a Very Special Circle through Six Very Special Points
5.5 Noaman, Zhang. Fractals: Finding Perimeters and Areas
5.6 Simhan, Bandi. The Fermi Estimate
5.7 Sinha, Sayette. Triangular Duel
5.8 Wagner-Oke, Anderson-Jussen. All Horses Are The Same Color: Proof by Induction

6 Computer Science and Engineering . . . . . . . . . . . . . . . . . . . . . 69
6.1 Anderson-Jussen, Emerick, Harrison. The Gear Chair
6.2 Bandi. Engineering Lead Portfolio
6.3 Chang. Machine Learning and the Art of Persuasion: Creating Digital Assistant for COVID-19 Vaccine Hesitant Users
6.4 Myers, Hopper. Condensed Design Proposal AE&D
6.5 Stern. An Ethical Future for Tech



2. Biographies

Section Header by Hannah Chang ’22.

2.1

Ada Lovelace

By Delia Brown ’25

Ada Lovelace is a woman who is celebrated as the world’s first computer programmer, for work done a hundred years before the field of computer science even existed. Ada Lovelace was born Augusta Ada Byron on December 10, 1815, to Lord Byron, a famous poet, and Lady Annabella Byron, a mathematician. Lord Byron engaged in scandalous behavior with other women, so Ada’s mother left him on January 15, 1816, when Ada was only five weeks old, and fled to Ada’s grandparents. Ada’s father went overseas and died when she was eight. She was not shown a picture of him until she was twenty. Lovelace did not get along well with her mother. She had a wild imagination that worried her mother; her mother did not want her to end up like her father, who was an emotional poet. Ada’s mother tried to tame her wild imagination by teaching her mathematics and logic, but Ada’s creativity could not be contained. When she was eight, she decided she wanted to fly and set out to build a flying machine, devouring books about birds to learn how their wings worked. As a young child, Ada was often ill. At 13 she got measles, which then turned into a more serious illness, leaving her mostly bedridden for multiple years. This gave her plenty of time to learn more mathematics and let her imagination soar. Ada’s mother once took her on a tour of a textile

factory where she saw the Jacquard loom, which could weave any design using patterns of holes punched in paper. These papers were fed through the loom, with the holes telling the loom where to stitch. Ada was interested in mathematics and was tutored by many prominent scientists and mathematicians of the 1800s, including William Frend, William King, Mary Somerville, and Augustus De Morgan, who introduced her to advanced calculus topics such as the Bernoulli numbers. When she was 17, Ada was introduced to Charles Babbage at a private party in June 1833. Babbage invented the Difference Engine, which could calculate and print limitless values of formulas. To get answers, the engine took the previous solution and used repeated addition, so it only needed to know how to add and subtract, not to multiply. Once started, the engine completed the steps on its own, which was unique among calculating machines at the time. Ada saw a demonstration of the small section of the Difference Engine that had been built and became fascinated with the machine. Babbage also created the Analytical Engine, which was a more complex machine than the Difference Engine, so complex that parliament refused to grant Babbage the additional money required to build it. Ada’s creativity led her to see that the way patterned papers were used in the Jacquard loom was similar to cards used in the analytical


Chapter 2. Biographies

engine that told it when to perform different calculations. Ada famously said, “the Analytical Engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves.” Interestingly, until the 1980s many computers still relied on punch cards, which were very similar to the papers used for the Jacquard loom and the Analytical Engine. Ada believed that math and music were related and was the first person to think about how the Analytical Engine could be applied not just to computations, but also to the more creative fields of music and art. Ada’s most famous work was her translation of an article about the Analytical Engine by the Italian engineer Luigi Menabrea from French to English. She added her own notes at the end, which were labeled A to G and were longer than the original article. In Note G, she wrote an algorithm for the Analytical Engine that could be used to calculate the Bernoulli numbers. This is considered the world’s first computer program. She also wrote about artificial intelligence, saying that the engine was not able to have original ideas. Alan Turing, the father of modern computer science, later disputed this claim. Ada’s notes were published in Taylor’s Scientific Memoirs in 1843, when she was only 27 years old, under her initials, A.A.L. In 1835, Ada married William King, the 8th Baron King, and they had three children. In 1838, he was made Earl of Lovelace while Ada became the Countess of Lovelace. Not only was Ada able to achieve great scientific success, but she also had a family; she really did it all. This pio-

2.2

neer of computer science unfortunately died at the young age of 36, on November 27, 1852, from uterine cancer. However, her legacy still lives on today. Ada Lovelace Day was created in 2009 to celebrate women in STEM. It is celebrated on the second Tuesday of October, with people all over the world holding celebrations in Ada’s honor. There have also been many books written about Ada Lovelace, which continue to spread her creative ideas. There is even a computer language, Ada, named after her. Ada Lovelace was a creative and imaginative child, and she used her mind to learn mathematics and then to write the first computer program using this knowledge. Her legacy has lasted to this day, and it is with good reason that she was called by her friend “the enchantress of numbers.”

References

[1] Christopher Hollings, Ursula Martin, and Adrian Rice. Ada Lovelace: The Making of a Computer Scientist. Bodleian Library, 2019.

[2]

Emily Arnold McCully. Dreaming in Code. Candlewick Press, 2019.

[3]

Diane Stanley. Ada Lovelace Poet of Science: The First Computer Programmer. Simon & Schuster, 2016.

[4]

Laurie Wallmark. Ada Byron Lovelace and the Thinking Machine. Creston Books, 2015.

Percy Lavon Julian

By Reed Rosenberg ’24

Percy Lavon Julian was born in 1899 and would live a decorated 76 years full of discovery and innovation. He was an American chemist who was born the grandson of enslaved people in Montgomery, Alabama. Julian grew up in a time when African-Americans faced prejudice not just in school but in all aspects of life. He graduated from DePauw University in Indiana as valedictorian in 1920, and he then received his master’s degree in organic chemistry from Harvard University, followed by a doctorate in the chemistry of medicinal plants at the University of Vienna in Austria. Julian was one of the chemists who first discovered how to synthesize and produce large amounts of steroids from plant compounds. Prior to this, steroids were extracted from animal tissue and fluids, which was very expensive. Julian and his team were able to create these same steroids in the lab, making the process much cheaper and therefore more widely accessible. One of his first major successes was a total synthesis of physostigmine, which is the active principle of the Calabar bean.

Physostigmine eases the constriction of the outflow channels of the eye’s aqueous humor, relieving the high pressure inside the eye that causes glaucoma. If this condition is not treated, it can cause blindness. Percy Lavon Julian is also credited with being one of the chemists who discovered how to synthesize cortisone (a cortical hormone produced in the adrenal gland) and hydrocortisone inexpensively. Julian’s research is important to understanding chemistry today because he discovered how to synthesize medicinal compounds from plant sources. This enabled many medicinal compounds to be mass produced at affordable prices. Through a by-product of his research on the physostigmine synthesis, German chemists discovered the steroid stigmasterol. Julian developed a method for converting stigmasterol into progesterone, which allowed it to be available on a large scale. Today, progesterone is used for many medical reasons, including decreasing the risk of uterine cancer and also for hormone replacement therapy.



Julian also developed a new synthesis for Substance S, which differs from cortisone by one oxygen atom. Discovering this allowed Julian to synthesize cortisone as well as hydrocortisone. This discovery has helped with many medical conditions such as rheumatoid arthritis and made for more affordable treatments for arthritis patients. Another major impact that Julian had on chemistry today was his ability to refine a soya protein. This protein became the main base of Aero-Foam, a foam fire-extinguishing product that was used by the Navy to fight fires during World War II. Julian’s work and research on soybeans launched a huge growth in the soybean industry. His work discovering new uses for the chemicals that are found in soybeans has helped to eliminate all kinds of suffering in the world.

References

[1] Britannica. Percy Julian. Apr. 2022. URL: https://www.britannica.com/biography/Percy-Julian.

[2] Science History Institute. Percy Lavon Julian. URL: https://www.sciencehistory.org/historical-profile/percy-lavon-julian.

[3] pbs.org. Who Was Percy Julian? URL: https://www.pbs.org/wgbh/nova/julian/lrk-whowasjulian-exp.html.



3. Life Science

Section Header by Jocelyn Hayes ’23.



3.1

The Effect of Cobalt Ions on the Activity of Catalase

By Zoe Apostolopoulos ’22

Introduction:

Catalase is an antioxidant enzyme present in all aerobic organisms that catalyzes the reaction in which hydrogen peroxide (H2 O2 ) is decomposed into water and oxygen. It is an extremely important enzyme because, without it, hydrogen peroxide could accumulate to toxic levels in certain cells and potentially be harmful or even deadly to the organism. We can determine if Extraterrestrius Martianii contains catalase by measuring the absorbance levels of H2 O2 as the reaction progresses through spectrophotometer readings. We should expect to see a continuous regression in the concentration of H2 O2 throughout the experiment as the enzyme catalyzes the reaction within the first few minutes of coming in contact with hydrogen peroxide. In the case of this experiment, cobalt ions are a known inhibitor of catalase, so we should expect to see some inhibition of its activity through higher concentrations of H2 O2 . Therefore, if it is determined that Extraterrestrius Martianii contains catalase, then the addition of cobalt ions will decrease the enzyme’s activity, producing a greater concentration of H2 O2 (M) throughout the entirety of the reaction.

Measuring Catalase Activity:

Catalase activity was measured with a spectrophotometer, which measures light absorption. After the progression of the catalyzed reaction was observed at 30-second intervals ranging from 0–3 minutes, a few minutes were allowed for the color to develop before transferring the different samples into cuvettes. The cuvettes were then individually placed in the spectrophotometer, where their absorbance values (or colors) were determined. The earlier in the reaction, the greater the absorbance level and the darker the color will be. The further into the reaction, the lower the absorbance level and the lighter the color will be.
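To make the later concentration figures concrete, each absorbance reading has to be converted into an H2O2 concentration. The short sketch below assumes a simple Beer–Lambert-style calibration; the constant EPSILON, the path length, and the sample readings are hypothetical placeholders rather than values from this experiment.

```python
# Illustrative sketch only: the article does not state its calibration, so
# EPSILON (an assumed constant relating absorbance to H2O2 concentration)
# and the example readings below are hypothetical placeholders.
EPSILON = 0.5          # assumed absorbance per (M * cm)
PATH_LENGTH_CM = 1.0   # standard cuvette path length

def h2o2_concentration(absorbance):
    """Convert an absorbance reading to an H2O2 concentration (M),
    assuming a Beer-Lambert-style relation A = epsilon * l * c."""
    return absorbance / (EPSILON * PATH_LENGTH_CM)

# Absorbance readings taken every 30 seconds from 0 to 3 minutes (hypothetical)
readings = [0.82, 0.74, 0.66, 0.59, 0.52, 0.46, 0.41]
concentrations = [h2o2_concentration(a) for a in readings]
print(concentrations)  # a steadily decreasing series indicates H2O2 is being consumed
```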

Results:

In the graph above, we see the concentration of H2O2 (M) measured in Trial 1 and Trial 2 over the course of 3 minutes. Both trials follow a similar line of regression, with the highest concentration levels of H2O2 at 0 minutes. Overall, Trial 2 had greater concentrations of H2O2 throughout the entirety of the reaction compared to Trial 1.

Discussion:

The clear linear regression relationship between the concentration of H2O2 and the time allotted for the reaction to occur proves that Extraterrestrius Martianii contains catalase. The highest concentrations of H2O2 were recorded at 0 minutes because the catalase had not had time to initiate the catalysis of the reaction, leaving a 100% concentration of H2O2. The following regressing points at each 30-second interval prove the activity of catalase, as the more time the enzyme is given to react, the greater the reduction of H2O2 will be. Furthermore, the data from Trial 2 consistently displays a greater concentration of H2O2 at each 30-second interval compared to the data from Trial 1. This greater concentration of H2O2 throughout Trial 2 supports the hypothesis that the addition of cobalt ions will decrease the enzyme’s activity, therefore producing a greater concentration of H2O2 (M) throughout the entirety of the reaction, given that the cobalt ions are the only variable that changed from Trial 1 to Trial 2.
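One way to make this comparison quantitative is to fit a line to each trial and compare the slopes: the slope approximates the average rate at which H2O2 is consumed, so a shallower slope in the cobalt trial points to inhibition. The numbers below are made up purely to illustrate the calculation; the real values would come from the spectrophotometer data described above.

```python
import numpy as np

# Hypothetical concentration values (M) at 30-second intervals; the real numbers
# would come from the Trial 1 and Trial 2 spectrophotometer data.
time_min = np.arange(0, 3.5, 0.5)                              # 0, 0.5, ..., 3.0 minutes
trial1 = np.array([1.00, 0.85, 0.71, 0.58, 0.47, 0.38, 0.30])  # no cobalt
trial2 = np.array([1.00, 0.92, 0.84, 0.77, 0.70, 0.64, 0.58])  # with cobalt ions

# The slope of the best-fit line approximates the average rate of H2O2 consumption.
slope1, _ = np.polyfit(time_min, trial1, 1)
slope2, _ = np.polyfit(time_min, trial2, 1)
print(f"Trial 1 rate: {slope1:.3f} M/min,  Trial 2 rate: {slope2:.3f} M/min")
# A shallower (less negative) slope for Trial 2 would be consistent with inhibition by cobalt.
```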



3.2 Microbial Forensics: Identifying Bacteria and Yeast Using Ribosomal DNA Fingerprints

By Hannah Chang ’22, along with Cassandra K. Hung, Hyunkyung K. Lee, Katherine W. Li, Alejandro J. Lucena, C. Zora Mardjoko, Rositsa Tsarnakova, Jason Wang, Lisa Wang, Claire Zheng

Note from the Editors

This paper was granted permission to reside in the Sigma Journal courtesy of the PA Governor’s School for the Sciences 2021 Journal.

Abstract

Microorganism identification, specifically by using the 16S rRNA genomic region shared by bacteria, is applicable in various areas of our lives. Current methods that exist for identification are subject to various limitations. Our project focused on developing a computational tool that could aid in the identification of bacteria by analyzing their 16S rRNA gene sequence via PCR and restriction enzyme digests. Biopython was used to load data from the Ribosomal Database Project (RDP) for use in our comparison program. A possibility reducer algorithm identified matching bacteria sequences between the dataset and the test set based on the unique ribosomal DNA restriction fragment lengths. Our match results indicated functionality of the program, in addition to obtaining bacteria that had no fragment matches. The output data was manually validated to find false positive and false negative fragments that helped explain the results.

Introduction

Background

Microorganisms can be found ubiquitously in a variety of settings, and their identification enables the analysis of their function, structure, and uses in a comprehensible manner. This plays a crucial part in a variety of fields, including food contamination, food production, environmental cleanup, ecosystem biodiversity, and disease control and prevention, to name a few. In recent years, Salmonella has been found as a contaminant of a number of supermarket foods, including salad, ground turkey, and other frozen poultry [1]. Being able to quickly and accurately identify disease-causing agents like Salmonella is imperative to treating and monitoring the spread of such pathogens. Conversely, some microbes are critical to maintaining life. They modulate energy flow within ecosystems by acting as decomposers and are responsible for almost half the photosynthesis that occurs on Earth [6]. Microbes are also important in the production of food products such as cheeses, yogurts, and breads; for instance, modern-day yogurt production involves culturing the milk with live bacteria, including Streptococcus thermophilus and Lactobacillus bulgaricus, which produce lactic acid to thicken the yogurt [7]. How-

ever, understanding these phenomena would not have been possible without first knowing the identity of the microorganism being worked with. Whether it is studying infectious diseases, producing foods, or engineering sustainable technology, being able to identify unknown microorganisms is important for a number of scientific applications. Current methods for identification include denaturing gradient gel electrophoresis (DGGE), fluorescence in situ hybridization (FISH), clonal libraries, full genome sequencing, amplified ribosomal DNA restriction analysis (ARDRA), and terminal restriction fragment length polymorphism (T-RFLP). These techniques employ a variety of procedures and often have different efficacies depending on the microorganism that is being analyzed. For example, denaturing gradient gel electrophoresis (DGGE) relies on melting points of DNA fragments to track separation while clone libraries require additional phylogenetic comparison between the foreign organism and available data. Full genome sequencing is another alternative to identification, but it can take a while to produce. One of the more widely available procedures, amplified ribosomal DNA restriction analysis (ARDRA), allows for the identification of organisms through creating a DNA fingerprint of their rRNA [5]. Although ARDRA has the benefits of being relatively quick, inexpensive, simple to use, and available in most labs, there are drawbacks in that it is time-intensive since it relies on the manual matching of DNA fingerprints to a database. Additionally, organism identification is often a tedious task that may not always require the simplest of techniques; therefore, it is crucial that steps be taken in order to introduce an unchallenging and straightforward approach to discerning unknown microorganisms. 16S rRNA Gene

A 70S prokaryotic ribosome is composed of a large 50S subunit and a small 30S subunit, and the 30S subunit can be further divided into 21 proteins and a 16S rRNA molecule [7]. The associated 16S rRNA gene is of particular interest in studying bacteria taxonomy because it is highly conserved both structurally and functionally across prokaryotic organisms; however, there is still variation within 16S rRNA gene sequences that can be used as markers to differentiate between species [3]. Furthermore, the 16S rRNA gene is approximately 1,500 base pairs, which is sufficiently large enough to be used for research in computational informatics [2]. The entire prokaryotic


Chapter 3. Life Science

18

ribosome in addition to a diagram of the 16S rRNA gene illustrating conserved and variable regions is shown in Figure 1.

broader scale, ARDRA and T-RFLP are essentially the same process; however, ARDRA will not use labeled primers while T-RFLP will implement such primers into the process. T-RFLP also uses a fluorescence detector to detect the labeled primers, contributing to the increased efficiency and expense.

Purpose

Figure 1: Prokaryotic Ribosome and the 16S rRNA Gene

Analysis of 16S Gene DNA using Restriction Fragment Length Polymorphism (RFLP)

Restriction fragment length polymorphism (RFLP) is a technique that allows for genetic fingerprinting. In RFLP analysis, a DNA sequence is first digested into fragments using one or more restriction enzymes. Next, these fragments are separated through agarose gel electrophoresis, which separates mixtures of DNA fragments by length. However, due to the substantial time it takes to complete an analysis (up to one month) as well as the large amount of DNA needed in the original sample, RFLP is less widely used now [18]. However, PCR has been used in conjunction with RFLP in cleaved amplified polymorphic sequence (CAPS) assays. This technique is more efficient than traditional RFLP since PCR can amplify a small amount of DNA to levels sufficient for RFLP in two to three hours, meaning that samples can be analyzed in less time. Regardless, RFLP analysis acted as a stepping stone for the development of current techniques like ARDRA and T-RFLP.

Current methods for identifying bacteria using ribosomal DNA fingerprints require substantial time and resources, creating barriers to scientific research since costly equipment is required that may not be accessible to everyone. Additionally, in a lab setting, this process would be time-consuming and require several steps. First, the full genome would have to undergo PCR to isolate and amplify the target DNA region. In bacteria, the Uni331F and 1492R primers would be used in this step to target the 16S rRNA gene. Then, the PCR products would be run through a series of three restriction enzyme digests. Once the restriction fragments are obtained, they can be run through a gel electrophoresis to separate the fragments by size. Because the length of the restriction fragments produced will differ from species to species, the gel would be analyzed and compared to a reference or a database to determine the identity of the organism. Therefore, to overcome this obstacle and make identification more accessible, we aim to design a computational tool that will identify an unknown microorganism by analyzing its 16S rRNA gene sequence using the more inexpensive ARDRA method while still producing results comparable to that of T-RFLP.

Methodology

Current Methods of Bacteria Identification Using Ribosomal DNA: Terminal Restriction Fragment Length Polymorphism (T-RFLP)

Test Data Generation

In order to establish a baseline for the behavior of our virtual digest program, test data was collected to compare

Terminal restriction fragment length polymorphism utilizes both PCR and RFLP analysis. The primers used in T-RFLP are labeled with fluorescent molecules and occasionally fluorescent dyes are needed for tagging, the most commonly used one being 6-carboxyfluorescein (6-FAM). After the PCR amplification, the amplicons are then cut by restriction enzymes. A capillary electrophoresis machine is then needed to separate the resulting fragments and their sizes are determined by a fluorescence detector. The main advantage of T-RFLP is that the first fragment in a sequence is able to be identified due to the primers being bound to the 5’ end of the fluorescent molecule. T-RFLP provides more potential for statistical analysis as compared to other techniques due to the grouping of fragments into operational taxonomic units (OTUs). Although T-RFLP is relatively fast, the capillary electrophoresis machine required for the process is expensive, making it a costly option when compared to other methods. On a

the results of our program with the results of a pre-existing web-based restriction enzyme digest. The overall process for test data generation is shown in Figure 3.

Figure 3: Overall Steps for Test Data Generation

A full bacterium genome was downloaded from the GenBank database, then run through a virtual PCR simulation from the Sequence Manipulation Suite using the Uni331F and 1492R primers (Figure 4). This allowed us to target and amplify the desired 16S rRNA gene sequences in the DNA so that only this region would be used in the virtual digest. After the fragment was obtained through the virtual PCR, a virtual restriction enzyme test was performed.


The fragment was copied into the Sequence Manipulation Suite’s restriction digest, and the test was performed three times. Each trial used only one restriction enzyme, and the fragment was digested with the enzymes AluI, HaeIII, and MboI. The program displayed the number of fragments generated and their lengths for each individual enzyme, and the results of the virtual digest were recorded and compared to the results of our program. Test data was successfully collected from twenty bacteria in this manner.


for each enzyme was calculated by subtracting the last site cut by each enzyme from the length of the original sequence (Figure 5). These fragment lengths were then appended to the list in ‘length_dict’ that was keyed by the enzyme that produced the fragments. Finally, the dictionary ‘length_dict’ was returned with the fragment lengths.
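The sketch below is a simplified version of this virtual digest step, assuming Biopython is available; the FASTA filename is a placeholder, and the helper mirrors the ‘find_lengths’ logic in reduced form (it omits the manual PCR/primer check and the ‘BacteriaInfo’ class) rather than reproducing the authors’ exact code.

```python
# A simplified sketch of the virtual digest, assuming Biopython is installed.
# "bacteria_16s.fasta" is a placeholder filename.
from Bio import SeqIO
from Bio.Restriction import RestrictionBatch, AluI, HaeIII, MboI

rb = RestrictionBatch([AluI, HaeIII, MboI])

def find_lengths(sequence, seq_dict):
    """Turn a dict of cut sites (enzyme -> positions) into fragment lengths."""
    length_dict = {}
    for enzyme, sites in seq_dict.items():
        lengths, previous = [], 0
        for site in sorted(sites):
            lengths.append(site - previous)       # distance from the previous cut site
            previous = site
        lengths.append(len(sequence) - previous)  # last fragment: final cut site to the end
        length_dict[enzyme] = lengths
    return length_dict

for record in SeqIO.parse("bacteria_16s.fasta", "fasta"):
    cut_sites = rb.search(record.seq)             # {enzyme: [cut positions]}
    fragment_lengths = find_lengths(record.seq, cut_sites)
    print(record.id, {str(enzyme): lengths for enzyme, lengths in fragment_lengths.items()})
```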

Bacteria Identification Process

The virtual digest found the restriction enzyme cut sites of AluI, HaeIII, and MboI for each genome sequence in the bacteria database. Then, the lengths of the fragments cut by the restriction enzymes were calculated. Information about each bacterium in the bacteria database was stored together in a class. We read the FASTA database file using the SeqIO reading function, parse(). We also iterated through the dataset and performed a manual PCR to check whether the sequences were identified between the forward and reverse primers. A BioPython RestrictionBatch object, ‘rb’, was initiated in order to use the three restriction enzymes, AluI, HaeIII, and MboI, simultaneously when performing the virtual digest. The BioPython search function could then be applied to the restriction batch to determine the restriction sites of a given sequence, which were returned as a dictionary with the three enzymes as keys and lists of restriction cut sites as values. A ‘BacteriaInfo’ class was created to concisely store the identifying information of a given bacterium in one object. The class contained three instance variables, ‘name’, ‘id’, and ‘lengths’, with ‘name’ representing the bacteria name, ‘id’ representing the bacteria’s accession number, and ‘lengths’ containing the dictionary of fragment lengths. In iterating through the bacteria database, each bacterial sequence was first virtually digested and its fragment lengths were calculated; then a ‘BacteriaInfo’ object was created to store the information of the bacterium. The function ‘find_lengths’ used the parameters ‘sequence’, representing the original genome sequence that was virtually digested and preprocessed to remove all extraneous space and newline characters, and ‘seq_dict’, representing the dictionary of restriction cut sites that was returned by the BioPython virtual digest search function. This function created a new dictionary, ‘length_dict’, with the same restriction enzyme keys as ‘seq_dict’, but with an empty list to store fragment lengths as values. By iterating through each key in ‘seq_dict’ and each cut site in the corresponding list for that key, fragment lengths were calculated as the difference between a particular cut site and its previous cut site. The last fragment length

Figure 5: Visual Representation of Fragment Length Calculation

Possibility Reducer

The possibility reducer was created to identify known bacteria from the database of unnamed bacteria sequences. The algorithm finds possible bacteria matches from the fragment lengths calculated from the cut sites produced by the virtual digests. As mentioned before, after the virtual digest PCR was performed, each bacterium would result in unique ribosomal DNA fragment cut sites and number of cuts. Therefore, the identity of the bacteria can be determined by analyzing the fragment lengths. We created an algorithm to compare fragment lengths from two individual bacteria records. It analyzed two lists of fragment lengths and determined whether the numbers in these lists are close enough in value. Preconditions were first examined. To prevent the comparison between a record with no fragment lengths for a particular enzyme, empty lists that did not contain fragment lengths were deemed invalid for comparison with filled lists. The fragment lengths below the minimum base pair (100) were filtered out of the fragment length list and were disregarded in comparison. This minimum base pair value denotes the lowest cutoff number for the most effective range of DNA sequence. In Figure 5, the two example lists of fragment lengths each contain fragment lengths below 100, and in the ‘filter lists’ step all of those numbers are removed from the list.


20

Chapter 3. Life Science

After the lists were filtered, the algorithm checked if each sequence contained the same number of fragment cuts. If each sequence contained a different number of fragment cuts, the algorithm determined the two sequences as non-matching and did not execute a comparison. If the two sequences contained the same number of fragment lengths, the fragment lengths were then sorted in increasing value. During the comparison, the algorithm took the difference between the corresponding lengths and evaluated whether this difference was within 10% (the use of this percentage will be expanded on in the percent difference number section).

Figure 7: Comparing Bacteria Fragments from Each Enzyme

Results

Out of the 18 tested bacteria, there were 8328 fragment matches with respect to the reference database. While our fragments were compared within a 10% differentiation interval based on size, these results are nevertheless indicative of the functionality of the program. They indicate that the fragment sizes obtained from the PCR and digest are able to be matched with established and proven fragment data. Looking more closely at the matches, it was noted that for 15 particular bacteria there was a match, and three bacteria were found to have no matches within the fragments from the database. The exact names of the bacteria and the number of matches are shown in Table 5.

Figure 6: Matching Decision Process Diagram

We iterated through the records in the database, and for each bacteria sequence record a virtual digest was performed and the subsequent fragment lengths were calculated from the cut sites. Comparisons were made between the bacteria from the database and the lab dataset. Each bacterium contained fragment lengths from three enzyme cuts (AluI, HaeIII, MboI). In the comparison process, the fragment length lists from each corresponding enzyme would be compared using the aforementioned algorithm. If all three enzymes were deemed as a match from the fragment comparing algorithm, then the bacteria sequences would be added to a list of possible matches. Figure 6 shows an example of this process and Figure 7 illustrates what a potential match would look like.
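A rough sketch of that comparison logic, under the assumptions noted in the comments (in particular, reading “within 10%” as relative to the larger of the two fragments), might look like the following; it is illustrative rather than the authors’ implementation.

```python
# A rough sketch of the possibility-reducer comparison. The 100 bp minimum and
# the 10% tolerance come from the text; the function names and data structures
# are illustrative placeholders.
MIN_BP = 100
TOLERANCE = 0.10

def fragments_match(lengths_a, lengths_b):
    """Compare two lists of fragment lengths produced by a single enzyme."""
    a = sorted(x for x in lengths_a if x >= MIN_BP)
    b = sorted(x for x in lengths_b if x >= MIN_BP)
    if not a or not b or len(a) != len(b):      # empty list or different cut counts: no match
        return False
    return all(abs(x - y) <= TOLERANCE * max(x, y) for x, y in zip(a, b))

def bacteria_match(record_a, record_b, enzymes=("AluI", "HaeIII", "MboI")):
    """Two records match only if the fragments agree for all three enzymes."""
    return all(fragments_match(record_a[e], record_b[e]) for e in enzymes)

# Example with made-up fragment lengths:
unknown   = {"AluI": [240, 410, 95], "HaeIII": [300, 600], "MboI": [150, 700]}
candidate = {"AluI": [250, 400, 90], "HaeIII": [310, 590], "MboI": [145, 720]}
print(bacteria_match(unknown, candidate))  # True: every enzyme's fragments agree within 10%
```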

Table 5: Number of Matches for Each Bacteria in the Test Dataset

Discussion

Our results had to be validated in order to see whether or not our program was in fact correctly identifying our code-obtained fragments with the fragments from the database. There were multiple matches with the database fragments. It can be assumed that the multiple fragment match is a result of the prevalence of the 16S region. It is highly conserved across many different strains of bacteria; therefore, it is likely that some bacteria types in the


database may yield very similar fragment sizes, producing multiple matches to our test fragments. For the three bacteria that did not produce matches, this indicates that the fragments obtained from our program were not matching any bacteria from the database within the 10% allowable range. Most of the sixteen bacteria that had multiple matches, such as Campylobacter coli, likely had such results because, similar to the other four bacteria, there were multiple strains of each bacterium in the database. Since the 16S region is highly conserved across bacterial organisms, it is a possibility that certain sequences, and thus fragments, appear across more than one strain, outputting more than one match. Some of the matches and non-matches were manually validated after running the program. We verified the resulting matches and legitimacy of the program by checking for the occurrences of each test bacteria in the bacteria database, as shown in Table 6. We found false positive matches for four of our test bacteria: Shigella dysenteriae, Obesumbacterium proteus, Yersinia kristensenii, and Enterobacter cloacae. Three false negative results were determined as well: Helicobacter pylori, Haemophilus influenzae, and Haemophilus parainfluenzae.

Table 6: Number of Occurrences of Test Bacteria Species in the Bacteria Database It can be determined that the sequences could have had small errors such as single nucleotide insertions or deletions. The virtual PCR and restriction enzyme digest do not have tolerance to those small errors and therefore may result in incorrect results. Also, the limitations of the gel electrophoresis done in the lab was not fully taken into account. If the cut fragments were too small to be resolved by gel electrophoresis or if there were more fragments in the wet work than expected, the algorithm was not tolerant to this. For all of these errors, the tool can further be programmed to be more tolerant, whether it be regarding gaps, sequence inaccuracies, or the number of fragments from the wet work that are to be compared. The aim would be to still give the correct bacteria identification without giving too many possible bacteria choices.

21

Table 7: Comparison of Lab Data to Test Data for E. coli

To further ensure the validity of our test data that was obtained from the program, we compared the test fragments with real laboratory-derived data after a PCR and restriction digest, as seen in Table 7. Looking at the fragments from MboI, the fragments were relatively similar. However, for the fragments from AluI and HaeIII, it was clear that the test data produced more fragments than were actually produced by a traditional restriction digest. As with the limitations behind gel electrophoresis, this deviation of fragment number was most likely due to our program’s simplified matching approach. We used a restricted set of enzymes and primers; therefore, when we consider the possibility of errors in the initial input sequence for the test data, it is inevitable that the resulting fragment sizes would vary such that a single fragment from the lab data is seen as multiple fragments from our program output. Finding ways to efficiently identify unknown microorganisms provides a stronger understanding of the natural world and allows researchers to pursue novel innovations. With further development, our program could assist in producing affordable high-quality identifications of unknown bacteria and yeasts. The identification of microorganisms is critical to the scientific community in numerous ways. Recent research has investigated how microbes could even be engineered for use as biofuels, in therapies, and more. For example, it has been proposed that the bacterium Alcanivorax borkumensis could be used to clean up oil spills after its genome was sequenced and the enzymes it uses to break down oil were identified [4]. The enzymes, hydroxylases, were efficient at breaking down oil both in water and soil, and broke down around 80 percent of various crude oil compounds. As a result of these enzymes, A. borkumensis could be used for efficient cleanup of oil spills [4]. Another application of 16S rDNA fingerprinting is in the seafood industry, specifically, the traceability of seafood. Fingerprinting techniques (specifically, PCR-DGGE) were used in one study in 2017 to determine the geographic origin of sea bass, by performing analyses on samples of sea bass skin mucus. The data show that fish from different geographical locations had unique operational taxonomic units (OTUs), and that PCR-DGGE was



able to be used to discriminate between fish from different geographic regions.

Acknowledgements

We would like to express our gratitude to our project advisor, Andrew McGuier, for his guidance and useful critiques of our work, as well as Johnathan Stephenson for his encouragement and supervision of our project. Our special thanks are extended to Dr. Natalie McGuier, who assisted throughout the project. We would also like to thank the contributors to the NIH GenBank, as well as the maintainers of the Ribosomal Database Project and the Sequence Manipulation Suite. Lastly, we wish to thank Dr. Barry Luokaala and the sponsors of PGSS, Carnegie Mellon University and the Mellon College of Science, and the PGSS Alumni Association for their part in making our research possible.

References

[1] Centers for Disease Control and Prevention. Salmonella Outbreak Linked to BrightFarms Packaged Salad Greens. 2021. URL: https://www.cdc.gov/salmonella/typhimurium-07-21/index.html (cited on page 17).

[2]

J. M. Janda and S.L. Abbott. 16S rRNA Gene Sequencing for Bacterial Identification in the Di-

agnostic Laboratory: Pluses, Perils, and Pitfalls. 2007. URL: https://doi.org/10.1128/JCM.01228-07 (cited on page 17).

[3]

Z.J. Jay and W.P. Inskeep. The Distribution, Diversity, and Importance of 16S rRNA Gene Introns in the Order Thermoproteales. 2015. URL: https://doi.org/10.1186/s13062-015-0065-6 (cited on page 17).

[4]

I. Slav. Oil-Eating Bacteria Could Help Clean Up the Next Oil Spill. 2018. URL: https://www.businessinsider.com/oil-eating-bacteria-could-help-clean-up-the-next-oil-spill-2018- (cited on page 21).

[5]

Spectrum Compact CE System. Promega Corporation. URL: https://www.promega.com/products/sequencing/sanger-sequencing/spectrum-cce-instrument/?catNum=CE1304 (cited on page 17).

[6]

L.A. Stark. Beneficial Microorganisms: Countering Microbephobia. 2010. URL: doi:10.1187/cbe.10-09-0119 (cited on page 17).

[7]

N. Tsevdos. Yogurt. Food Source Information. 2020. URL: https://fsi.colostate.edu/yogurt/ (cited on page 17).


3.3

Designing A Rubber Band-Powered Plane To Uncover Aerodynamics

By Tommy Gordon ’23

Although putting together a small (about 50 cm long) rubber band-powered plane may seem like a trivial science class activity or a simple individual project, it is an endeavor that is made surprisingly tricky due to the numerous forces and phenomena in action during flight. Throughout the past few months, I have been in the process of constructing several aircraft for an event named “Wright Stuff”, one of many that make up the Science Olympiad competition. Planes constructed for this event generally follow a standard design with a nose-mounted propeller, the main wing, and the empennage (the tail structure consisting of the vertical stabilizer and horizontal stabilizers). Below are some diagrams of my initial aircraft designs (Figures 1-3):

Figure 1



Figure 2

Figure 4

Figure 3

Starting from the front of the aircraft, we have the propeller. Ideally, the center of mass would be located about 1/3 of the way down the main wing (starting at the leading edge), and therefore, like all other parts added to the aircraft, the weight must be taken into account. In addition to weight, the nose-mounted propeller’s centered position can cause the aircraft to turn slightly to the left or right (depending on what direction the propeller is moving). Both the torque effect and the slipstream/corkscrew effect illustrate this left/right turning tendency. The torque effect explains that as the propeller rotates in one direction, the aircraft is pulled in the other direction (on the horizontal axis, or yaw). This phenomenon can also be explained by Newton’s third law of motion: for every action, there is an equal and opposite reaction [1]. The slipstream/corkscrew effect is also related to the spinning of the propeller. When the propeller spins at high speeds, it acts like a wing, pushing accelerated air backward to propel the plane forward. This backward air, however, travels in a spinning pattern matching that of the propeller down the fuselage until it hits the vertical stabilizer (the aircraft component with the highest amount of surface area from the side), causing the plane to turn. Figure 4 illustrates the torque effect and Figure 5 the slipstream effect.

Figure 5

The torque and slipstream effects are also the primary reason for deciding to have the aerofoil positioned above the fuselage, as seen in Figure 2. The aerofoil can avoid the majority of the disruption caused by the slipstream and torque effects with its positioning further above the fuselage. Following the propeller is the wing, or aerofoil. This is the most critical part of the plane, as it is responsible for producing lift, the force that pushes the aircraft upwards.



values taken at two different points (at the leading edge and the trailing edge of the wing in this situation). Assuming the plane is flying in a forward direction and there is no change in elevation, the terms ρgh1, ρgh2, P1, and P2 cancel out. The terms ρgh1 and ρgh2 account for ρ (air density), g (acceleration due to gravity), and h (the elevation at each point). Since all of these values are equivalent, they can therefore be canceled out. The terms P1 and P2 account for the pressure of air at the elevation of both ends of the wing; however, as stated earlier, the plane’s elevation is not changing. Therefore, the air pressure is the same at both points [3]:
\[
  P_1 + \tfrac{1}{2}\rho V_1^{2} + \rho g h_1 = P_2 + \tfrac{1}{2}\rho V_2^{2} + \rho g h_2
\]

Figure 8: Bernoulli’s Equation After the ρgh and P terms have been canceled out, Bernoulli’s equation is stating that one-half the density of air (ρ) times velocity squared (V 2 ) is equivalent at both points. This equation also illustrates the conservation of energy as velocity and pressure will always be inversely proportional. When on one side of the equation the pressure is high, the velocity is low and vice versa for the other side. Apart from the shape of the aerofoil shape, the wing has several other key design aspects that increase lift and stability while minimizing drag. These design aspects include an appropriate angle of attack, dihedral, and winglets. As seen in Figure 2, the angle of attack of the plane is approximately 14 degrees. The angle of attack is the angle formed by the chord (straight line between the leading and trailing edge of the wing) and the oncoming flow of air as the plane moves forward.

Figure 7 Note: • Green: Air molecule position at t = 1 • Red: Air molecule position at t = 2 • Yellow: Air molecule position at t = 3 This situation can also be proven with Bernoulli’s equation (Figure 8). The subscripts 1 and 2 denote the

Figure 9


3.3 Gordon. Designing A Rubber Band-Powered Plane To Uncover Aerodynamics The angle of attack works to generate additional lift by directing the flow of air further downwards producing an upwards force. It takes into account the Coanda effect. The Coanda effect is the tendency of air to remain close to or attached to the upper surface of the wing, even during the downwards slope near the trailing edge (represented by Figure 10).

25

[2]

Figure 12 In reference to Figure 3, the aircraft has a slight dihedral. This means that both wings are angled slightly upwards. Adding a dihedral to an aircraft increases its roll stability. In other words, it reduces the tendency of an aircraft to bank from side to side in an uncontrolled manner. Figure 10 Whenever a plane with a dihedral sideslips (moves left As seen in the figure above, as the angle of attack or right and down in a certain direction), it will gradually gradually increases, so does the direction of airflow and roll back to a level flight. therefore lift. With this said, when the angle of attack becomes too great, the Coanda effect wears off, and the plane stalls. Figures 11 and 12 show how beyond an approximate angle of attack, airflow will separate from the wing’s upper surface and the plane will start to stall. This separation is called cavitation, and if the angle of attack exceeds approximately 16 degrees, the plane will slowly approach a stall and lose lift.

Figure 13

Figure 11

As seen in Figure 13, as the plane rolls towards one side, the wing that has moved downwards has a higher angle of attack in relation to the upwards wing. Since the plane has sideslipped, the relative airflow is also approaching from the right side, and therefore, in relation to that airflow, the right-wing has a higher angle of attack. When meeting the airflow from the right, the greater angle



of attack and lift generation of the right wing will level the plane back out on the roll axis [5]. The final major design element of the wing is the winglets at the end of each wing. These are the tips of the wings of the aircraft, and they are configured at a more extreme upwards angle in comparison to the dihedral.

The winglets reduce the surface area of the tips of the wings and introduce a sharp upward slope, which reduces the amount of high-pressure air mixing with low-pressure air. In turn, the vortices are greatly reduced in size. The shape of the winglet also allows for the high-pressure air to accelerate to a lower-pressure state by the time it reaches the tip of the winglet. Another way to decrease the size of vortices (induced drag caused by lift) is to have wings with a larger aspect ratio. The aspect ratio of a wing is the ratio of the wingspan to the chord, that is, the wingspan divided by the chord [5].
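As a quick illustration with assumed dimensions (not the measurements of this particular plane), a wing with a 45 cm span and a 7.5 cm chord gives

\[
  AR = \frac{\text{wingspan}}{\text{chord}} = \frac{45\ \text{cm}}{7.5\ \text{cm}} = 6,
\]

so lengthening the span or narrowing the chord raises the aspect ratio, which is why high-endurance gliders have long, slender wings.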

Figure 16

Although many types of aircraft have structural limitations when it comes to increasing wing size dramatically, larger aspect ratios are found on planes such as gliders, which attempt to maximize flight time by minimizing drag (achieved by a high aspect ratio).

The final major component of the aircraft is the empennage, or tail structure. The tail structure is composed of vertical and horizontal stabilizers: the vertical stabilizer assists with stability about the yaw axis (left and right), and the horizontal stabilizers about the pitch axis (up and down). While the empennage has less complex design aspects than the wing, it still plays a critical role in aircraft stability and control. The airplane diagrammed in Figures 1-3 uses a conventional tail design (a single vertical stabilizer between two horizontal stabilizers). When designing a tail, one of the most important things to take into account is weighting: if the tail is too light, the plane may go into a nosedive, and if it is too heavy, it may enter a stall and produce no lift. In addition, the horizontal stabilizers also have aerofoil profiles, meaning they produce lift.



Figure 17


Properly balancing the aircraft requires knowing the tail arm, or the distance from the center of gravity to the aerodynamic center (center of lift) of the tail.

Figure 18

The tail arm works like the fulcrum on a lever [6]. The fulcrum is positioned below the center of gravity; at one end is the weight of the main wing and the forces that act upon it, and at the other end is the weight of the empennage and the forces that act upon it.

Figure 19

This situation resembles a lever. The two sides of the lever differ in length, so mechanical advantage comes into play. The shorter side carries more weight, due to the main wing, and a significant amount of force is produced by that wing, while the longer side of the lever carries less weight (the lighter empennage) and has less force acting upon it. In an ideal situation, the plane therefore balances on the fulcrum.
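A rough sketch of that balance condition, idealizing the wing and the tail as single forces acting at distances d_wing and d_tail from the fulcrum (an idealization, not a detail given in the article): the moments on the two sides must cancel,

\[ F_{\text{wing}}\, d_{\text{wing}} = F_{\text{tail}}\, d_{\text{tail}}, \]

so the lighter empennage can balance the heavier wing provided its arm is proportionally longer.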

From the slipstream effect created by the propeller to the way Bernoulli's principle describes how an aircraft creates lift, there are many complex forces interacting with an aircraft when it flies. Although these are only some of the basic ones, it is crucial that they all be taken into account, even when designing a small rubber band-powered aircraft. If you ever find yourself constructing one of these small airplanes for a science project or just for fun, take a minute to think about how all of the seemingly insignificant design choices can affect an aircraft's performance. Or, if you find yourself traveling somewhere on a much larger aircraft, think about how all of these concepts are at play during your flight.

References

[1] URL: ol-january-iap-2019/class-videos/lecture-2-airplane-aerodynamics/ (cited on page 23).

[2] Angle of Attack. URL: https://www.researchgate.net/figure/Coefficient-of-Lift-versus-critical-AoA-Source-wwwaviation-historycom_fig3_336286742 (cited on page 25).

[3] Bernoulli's Equation. URL: https://www.quora.com/What-are-the-limitations-of-the-Bernoulli-Equation (cited on page 24).


[4] Bernoulli's Principle. URL: https://www.princeton.edu/~asmits/Bicycle_web/Bernoulli.html (cited on page 24).

[5] Dihedral. URL: https://www.grc.nasa.gov/www/k-12/airplane/geom.html (cited on page 26).

[6] Tail Arm. URL: https://www.amaflightschool.org/getstarted/how-do-i-understand-basic-aerodynamics (cited on page 27).

3.4 The Inhibitive Effect of Salicylic Acid on Martian Catalase
By Ilyas Khan '22

Background: Catalase is a crucial enzyme for all aerobic life on Earth. By breaking down H2O2, or hydrogen peroxide, into water and oxygen, catalase prevents the creation of hydroxyl groups which can destroy DNA and other building blocks of cellular life. How catalase behaves can also be an indication of how closely two organisms are related, or if they are related at all. This experiment tests a recently discovered Martian form of catalase with Terrestrial catalase inhibitors. It is well documented that at a concentration between 100 µM and 10 mM, salicylic acid will inhibit most varieties of Terrestrial catalase. If this holds true in Martian catalase, its similarities could help determine the relatedness of Terrestrial and Martian species.

Concentration (M) and absorbance of the control reaction solutions as measured over the three-minute trial:

Time (min)   [H2O2] (M)   A500
Blank        0            0
0            0.00246      1.3
0.5          0.00116      0.614
1            0.00121      0.638
1.5          0.00085      0.448
2            0.000417     0.22
2.5          0.000239     0.126
3            0.00011      0.058

Concentration (M) and absorbance of the treatment reaction solutions as measured over the three-minute trial:

Time (min)   [H2O2] with acid (M)   A500
Blank        0.0000114              0.006
0            0.00164                0.866
0.5          0.00137                0.72
1            0.00152                0.8
1.5          0.000736               0.388
2            0.00088                0.464
2.5          0.000364               0.192
3            0.000228               0.12

Discussion: The data collected displays a clear lack of understanding around the functioning of the Martian catalase enzyme. With little difference between control and treatment trials, this form of catalase may not function the same way as the catalase found on Earth. However, there is not sufficient data to rule out the possibility that all differences can be accounted for by various errors in the process of the experiment. This is particularly plausible because of the discrepancies between the initial data points and the trendlines.

References

[1] Uwe Conrath. Two Inducers of Plant Defense Responses, 2,6-Dichloroisonicotinic Acid and Salicylic Acid, Inhibit Catalase Activity in Tobacco. May 1995. URL: https://www.pnas.org/content/pnas/92/16/7143.full.pdf.

[2] J. Durner and D. F. Klessig. Salicylic Acid Is a Modulator of Tobacco and Mammalian Catalases. Nov. 1996. URL: https://pubmed.ncbi.nlm.nih.gov/8910477/.
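A brief note on the trendlines mentioned in the Discussion above: one way to quantify them is to fit a simple decay curve to each table's [H2O2] values and compare the fitted rates. The sketch below assumes first-order kinetics (ln[H2O2] falling linearly with time), which is an assumption of this sketch rather than something stated in the article.

import numpy as np

# [H2O2] (M) at each time point (min), copied from the two tables above (blank rows omitted).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
control = np.array([0.00246, 0.00116, 0.00121, 0.00085, 0.000417, 0.000239, 0.00011])
treated = np.array([0.00164, 0.00137, 0.00152, 0.000736, 0.00088, 0.000364, 0.000228])

def apparent_rate(time, conc):
    """Least-squares slope of ln[H2O2] vs. time; its negative is an apparent first-order rate constant (1/min)."""
    slope, _intercept = np.polyfit(time, np.log(conc), 1)
    return -slope

print("control:", round(apparent_rate(t, control), 2), "per minute")
print("treated:", round(apparent_rate(t, treated), 2), "per minute")

Comparing the two fitted constants, together with how far the early points sit from each trendline, is one way to judge whether the treatment data genuinely differ from the control or fall within experimental error.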



3.5


How to Photograph a Trillion Stars
By Luke Lamitina '22

Background
Many people have seen the awe-inspiring photographs taken over the years by telescopes such as the Hubble Space Telescope [3]. These canvases of ethereal structures that span such vast distances have inspired generations and sparked social movements [6]. Astrophotography has a long and significant history spanning all the way back to 1839, when Louis Jacques Mandé Daguerre attempted to take the first long exposure of the moon. Due to technical problems with his telescope and tracking, his image ended up coming out fuzzy and distorted. Nevertheless, he had started the ball rolling on astrophotography, and it would only be a year before a man named John William Draper took the first successful photograph of the moon [1]. Although Daguerre wasn't successful in his first attempt, he invented a new method called the Daguerreotype Process which allowed for much crisper images—not only revolutionizing the field of astrophotography, but photography as a whole [5]. Since John William Draper's first picture of the moon, leaps in technology and innovation have allowed us to photograph some of the faintest objects in the cosmos. For example, in 1994 Robert Williams, the director of the Space Telescope Science Institute in Baltimore, Maryland, decided to use his director's discretionary time on the Hubble Space Telescope in a way that seemed preposterous to many scientists at the time. His idea was to point the telescope at a place in the sky where there were thought to be no galaxies or nebulae—somewhere in the sky that was believed to have nothing in it. After 10 days of exposure time, the Hubble returned what is now known as the Hubble Deep Field image [2]. This one picture had effects that completely revolutionized astrophysics as a whole. As an avid astronomer who partakes in exoplanetary research, I recently embarked, together with a friend, on a new project: photographing a trillion stars. To do this, we photographed the galaxy known as M31, commonly referred to as the Andromeda Galaxy. At a distance of 2.5 million light years from Earth, Andromeda is the closest major galaxy to the Milky Way. In 4.5 billion years, the two galaxies are predicted to collide and merge into a new galaxy deemed Milkdromeda [4]. Prior to this collision, my exoplanetary research partner and I set out to photograph Andromeda using the Allegheny Observatory. In this article I will overview our process and show the final result.

Materials and Methods
There are three major considerations when conducting astrophotography. The first is the weather.

It is very difficult to take good pictures of space if it is cloudy, even if the cloud level is low. The ideal conditions are a completely clear, cold winter night: on cold winter nights the air does not hold as much moisture as it does in the summer, leaving us with clearer images. Secondly, we must choose a target that is bright enough for our specific telescope to actually see. The system for giving objects in space a value that correlates with their apparent brightness is called visual magnitude. There are different scales that use different comparison stars as a reference, but the one that we used uses the star Vega as a reference. Objects dimmer than Vega are given a Vmag value that is positive, and objects brighter than Vega are given a Vmag value that is negative. Another important consideration is the size and distance of the object we are photographing. Depending on the camera one is using, some objects are either too small or too large to photograph. This can be due to the object's physical size or its distance from the Earth.
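The magnitude scale mentioned above is logarithmic: each step of 5 magnitudes corresponds to a factor of 100 in brightness. A minimal sketch of that relation (the standard Pogson formula; the example values are illustrative and not taken from the article):

def flux_ratio(m_obj, m_ref=0.0):
    """How many times fainter an object of magnitude m_obj is than a reference of magnitude m_ref (Vega is ~0)."""
    return 10 ** (0.4 * (m_obj - m_ref))

print(flux_ratio(5.0))   # 100.0 -> a Vmag 5 object is about 100 times fainter than Vega
print(flux_ratio(-1.5))  # ~0.25 -> a Vmag -1.5 object is about 4 times brighter than Vega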

Figure 1: Locating an object with RA and DEC

Lastly, it is important to choose a target with a good right ascension (RA) and declination (DEC). RA and DEC form a coordinate system for the sky, similar to our coordinate system on the Earth—they give a way of knowing where a star, galaxy, or nebula is located in the sky (see figure 1). In order to choose a good RA and DEC, multiple factors must be taken into account. The most important of these factors is how much atmosphere the light from our target must pass through before it reaches our detectors. One thing you must know is that as light passes through any type of medium such as air, small variations in temperature, pressure, etc. cause the light to scatter.



If we are taking long exposures of the Andromeda Galaxy, this scattering effect, caused by Earth's atmosphere, will give us a less crisp image. Therefore, we want the light to travel through as little atmosphere as possible. For example, a star located near an observer's horizon would not be a good target because its light would have to travel through much more air mass than the light of a star located directly above the observer. When an object is at the spot in the sky where its light travels through as little atmosphere as possible, we say that the object is at zenith (see figure 2).
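A rough way to quantify "how much atmosphere" the light crosses is the airmass, which is about 1 at zenith and grows toward the horizon. A minimal sketch using the plane-parallel approximation (airmass ≈ sec of the zenith angle; this simple formula is an assumption of the sketch and breaks down within a few degrees of the horizon):

import math

def airmass(altitude_deg):
    """Approximate airmass for a target at the given altitude above the horizon (degrees)."""
    zenith = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(zenith)

print(airmass(90))  # 1.0 at zenith
print(airmass(30))  # 2.0 -> a target 30 degrees above the horizon shines through twice as much air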

Figure 2: Object at Observer's Zenith

Once we have chosen our target and checked the conditions, we are ready to observe. For the observations that my partner and I did, we needed a pretty powerful camera. Luckily, we had some time over winter break to use the Allegheny Observatory's Keeler 16-inch telescope. Interestingly, we didn't even use the telescope to take the exposures; for that we used a DSLR camera mounted on top of the telescope specifically designed for this kind of project. Before we took any exposures, we needed to complete three main steps: 1) focus the telescope, 2) calibrate the telescope at Andromeda, and 3) set up tracking so that Andromeda didn't shift out of our telescope's view as the earth rotated. Focusing the telescope is fairly easy. Imagine you are taking a picture with your phone and the screen is blurry when you first go to take it. You then tap the screen or point it at an object, and the thing that you are photographing comes into focus. It's the same idea for astrophotography, just with some more technical processes that differ based on the telescope you are using. Most observatories will actually have a telescope guide, which is a set of instructions for how to carry out certain steps using that specific telescope and accompanying software. Once we have the telescope focused, we can now actually point it at Andromeda. This second step can sometimes be the most frustrating. When we put our coordinates into the telescope, it almost never points perfectly where we want it to. There is almost always a degree of error. In other words, the telescope thinks that it is pointed at Andromeda when it is actually pointing to a patch of sky close to Andromeda. To correct this, we must take a quick exposure and compare the stars we see to the stars near Andromeda. You can use any kind of sky map, but in our case we used a program called Starry Night. Once we figure out where we are actually pointing, we just keep moving the telescope and taking quick exposures (roughly 5 seconds) until we are in the right spot. Lastly, we set the telescope to tracking so that we can take exposures without our galaxy shifting out of frame as the night goes on. Once everything is set up, we can take our exposures. Depending on the stability of the telescope you are using and the natural noise of the camera, these exposures should be anywhere from 1-30 minutes. In our case, we started by taking one or two five-minute exposures before moving on to ten-minute exposures when our data looked good. Then it just becomes a waiting game. The goal is to get as much exposure time as possible. For our image, we didn't really feel like staying up until 6 am, so we took about two and a half hours of total exposure time. Finally, we were left with 15-20 raw frames that we then used in our post processing to render our final image.
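Those numbers are self-consistent — a quick check using the lower end of the quoted frame count:

frames = 15           # lower end of the 15-20 raw frames mentioned above
minutes_each = 10     # the ten-minute exposures we settled on
print(frames * minutes_each / 60, "hours")  # 2.5 hours of total exposure time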

Post Processing and Results
The first two steps in post processing are generating a master flat and a master bias. It is very common to have dust particles on the lens of our camera. By generating a master flat, we are given a frame that is only dust particles (see figure 3). This way, we can apply it to our Andromeda image and subtract everything in our master flat from our image, effectively removing any spots. A master bias is a little more complicated, but just as relevant. Any kind of electronics gives off heat as it runs. When taking long exposures, this inherent heat from running our camera can affect the whole frame. This is what we call a camera's 'noise.' It is similar to black and white static on an old TV screen. To remove this, all we do is take a zero-second exposure to see the noise of the camera. By having nothing in our frames, we are left with just the noise that the camera gives off. Now we are able to take our two master frames and subtract them from our original exposures of Andromeda.
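A minimal numpy sketch of that calibration arithmetic. The arrays below are random stand-ins rather than real frames, and one hedge on the wording above: in conventional CCD reduction the master bias is subtracted, while the master flat is divided out after being normalized, which is what accomplishes the spot removal the article describes.

import numpy as np

# Random arrays stand in for real frames, which would be loaded from the camera's image files.
raw = np.random.rand(512, 512) * 4000.0           # one long exposure of Andromeda
master_bias = np.random.rand(512, 512) * 10.0     # combined zero-second exposures (camera noise)
master_flat = 1000.0 + np.random.rand(512, 512)   # combined flat frames (dust and vignetting pattern)

flat_norm = master_flat / np.median(master_flat)  # scale the flat so it averages ~1
calibrated = (raw - master_bias) / flat_norm      # noise subtracted, dust pattern divided out

# Repeating this for every exposure and median-combining is one common way to stack;
# in practice the list below would hold all 15-20 calibrated frames.
stack = np.median(np.stack([calibrated]), axis=0)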




Figure 3: Raw Data From An Actual Image

Once we have calibrated our frames to our specific camera, we are ready to render our final image. To do this, we first stack all of our frames and run them through an image subtraction software program, allowing us to take out any inconsistencies in our data from things such as atmospheric pressure and temperature deviations, giving us a super crisp calibrated image. Once we have calibrated and stacked our image, we are given some artistic freedom. Depending on the software you use, there are many different functions you can use to make your final image more aesthetically appealing (see figure 4).

Figure 4: Final image

Conclusion
While astrophotography can seem like a very technical and scientific process, it can also be viewed as an artistic one. As our technology has become more advanced, astronomers have relied less on astrophotography as a form of data collection. Due to this, astrophotography has shifted from a strictly scientific endeavor to one in which science and art mesh. I think that there is an inherent beauty found in the cosmos, and what makes it beautiful is its distance and invisibility from us. Our primitive biology was never meant to travel to space or see these nebulas and galaxies thousands of light years away, but through technology and science we are able to explore realms not even thought to exist. Perhaps this image does not contribute to the furthering of our scientific knowledge, but I think it has vast implications for the human psyche. It causes us to ask new questions and, most importantly, to wonder. When I look at this image, I like to think that maybe around one of those trillion stars there is a planet, and on this planet there are billions of creatures such as ourselves. Maybe one day we will realize the reality of this fantasy, and our understanding of our place in the universe will forever be changed.

References

[1] Erica Fahr Campbell. "The First Photograph Of The Moon". In: Time (Dec. 2013). URL: https://time.com/3805947/the-first-photograph-of-the-moon/ (cited on page 29).

[2] Berta Margalef-Bentabol et al. "Observations of the initial formation and evolution of spiral galaxies at 1 < z < 3 in the CANDELS fields". In: Monthly Notices of the Royal Astronomical Society 511.1 (Mar. 2022), pages 1502–1517. DOI: 10.1093/mnras/stac080. arXiv: 2201.06334 [astro-ph.GA] (cited on page 29).

[3] NASA. Hubble Space Telescope Images. Edited by Karl Hille. URL: https://www.nasa.gov/mission_pages/hubble/multimedia/index.html (cited on page 29).

[4] Riccardo Schiavi et al. "Future merger of the Milky Way with the Andromeda galaxy and the fate of their supermassive black holes". In: Astronomy & Astrophysics 642, A30 (Oct. 2020), A30. DOI: 10.1051/0004-6361/202038674. arXiv: 2102.10938 [astro-ph.GA] (cited on page 29).

[5] "The Daguerreotype Medium". In: Daguerreotypes. Library Of Congress. URL: https://www.loc.gov/collections/daguerreotypes/articles-and-essays/the-daguerreotype-medium/ (cited on page 29).

[6] Mike Wall. "Earth Day At 50: How Apollo 8's 'Earthrise' photo helped spark the first celebration". In: Space (Apr. 2020). URL: https://www.space.com/earthrise-image-apollo-8-earth-day-50th-anniversary.html (cited on page 29).

3.6 The Kuwait Oil Fires: an Environmental Disaster
By Mitra Nourbakhsh '22

In January 1991, Iraqi forces were in the midst of a rushed retreat from Kuwait at the end of the Arabian Gulf War. Embarrassed by Iraq's defeat at the hands of a US-led coalition, Saddam Hussein ordered his armies to carry out one last act of war: they set on fire over 700

of Kuwait’s oil wells. Hussein hoped to inflict a blow on Kuwait’s oil production infrastructure, a huge money maker for the country and the cause of the Arabian Gulf War, and his plan worked almost too well. The wells blazed, columns of toxic smoke blocked


the sunlight, and oil leaked into the desert and the Persian Gulf. International firefighting crews could not be sent into the area until after the war because it was too dangerous. To complicate matters, when firefighters arrived they discovered that there were land mines around the oil wells that had to be removed before they could actually begin firefighting. In the end, the fires burned for ten months, and they were finally put out at a cost of $1.5 billion to Kuwait. Rated in 2010 as the third worst environmental disaster in history, the Kuwait Oil Fires caused devastation and disruption to the desert ecosystem. 11 million barrels of crude oil poured into the Persian Gulf, some of which ended up on the beaches of Iran, Bahrain, Qatar, and Saudi Arabia. 25 to 40 million barrels were spread across the desert, creating hundreds of oil lakes (see Fig. 1), destroying fragile habitats and endangering many species (see Fig. 2). The CO2 emissions from the burning oil were about 130 million tons. These changes, along with soot in the air, affected the weather: temperatures in Kuwait and neighboring countries were about 10 degrees Celsius lower than in a regular year. In addition, seawater temperatures dropped 5-8 degrees Celsius, a change that was more devastating to fish and prawns than the oil itself. The Kuwait oil fires created a far-reaching environmental catastrophe that has persisted to this day. The effects of these oil fires on the biodiversity of marine and land ecosystems have been disastrous. In Saudi Arabia, many of the mangroves were damaged by the oil, killing between 50 and 90 percent of the fauna: crabs, amphipods, and mollusks. Seabirds had oiled feathers and ingested the oil when preening; most birds were contaminated by the oil. About 100,000 waders were directly killed by the oil fire disaster. 80% of Kuwait's livestock died after inhaling the noxious smoke. Dugongs and dolphins were found dead on beaches. Plant life was equally affected, as exposure to petroleum hydrocarbons damaged plant growth and seed germination and soil clogged by oil prevented plants from accessing light, water and nutrients. Species that relied on that vegetation became noticeably absent, the habitat's carrying capacity decreasing without sufficient food sources. Today, the desert is still contaminated, and many of the species affected are still suffering 30 years later. 90% of the unprotected contaminated soil is still in the environment, the desert still marred by hardened black oil sludge. There have been scores of bioremediation efforts, and there is plenty of money available from the UN to make it happen, but there have been bureaucratic delays and issues with the efficacy of remediation. As of June 2021, just 10% of contaminated soil had been removed and buried in landfills, as was the original method of remediation. It doesn't help that, as NASA says, "the sand

and gravel on the land's surface combined with oil and soot to form a layer of hardened 'tarcrete' over almost 5 percent of the country's area," which makes the oil extremely hard to remove. Over concerns of intensifying the problem later on with that method, bioremediation efforts of using microorganisms to break down and decay hydrocarbons have been employed. There have also been efforts to use landfarming, meaning that contaminated soil is tilled periodically and controlled in terms of moisture and pH to degrade contaminants. Still, due to the "high presence of petroleum hydrocarbons and the concentration of salt content in Kuwait's contaminated soil" [4] there is no guarantee that these efforts will pay off. Still, efforts by Kuwait to remedy the damage done by the fires are ongoing and have widespread international support.

Figure 1: Kuwait Oil Fires

From the drilling of oil wells, to the war, to Saddam Hussein's decision to carry out one last vengeful act, the fires were unnecessary, avoidable, and manmade; they are just one example of the many consequences of war. The disastrous effects have persisted to this day, and will continue to be seen for years to come. The cost in terms of plant and animal life is just unfathomable; numbers cannot truly encapsulate how devastating the fires were for the ecosystem, and the cost in human life and health is also significant. What was once a beautiful desert has been permanently damaged. Kuwait will never again be as pristine as it once was.


3.7 Sleet. The Dangers of Microplastics

Figure 2: Kuwaiti Land

References

[1] International Institute for Applied Systems Analysis. The Environmental Impacts of the Gulf War 1991. 2004. URL: http://pure.iiasa.ac.at/id/eprint/7427/.

[2] CCK. Oil Well Fires in Kuwait. Nov. 2018. URL: https://cck-law.com/blog/oil-well-fires-in-kuwait/.

[3] Association for Diplomatic Studies & Training. Towering Infernos – The Kuwait Oil Fires. URL: https://adst.org/2016/04/towering-infernos-the-kuwait-oil-fires/.

[4] The Guardian. 'Gushing oil and roaring fires': 30 years on Kuwait is still scarred by catastrophic pollution. Dec. 2021. URL: https://www.theguardian.com/environment/2021/dec/11/the-sound-of-roaring-fires-is-still-in-my-memory-30-years-on-from-kuwaits-oil-blazes (cited on page 32).

[5] Oceana. Mangrove Forest. URL: https://oceana.org/marine-life/mangrove-forest/.

[6] Taylor & Francis Online. Bioremediation of oil-contaminated soil in Kuwait. I. Landfarming to remediate oil-contaminated soil. Dec. 2008. URL: https://www.tandfonline.com/doi/abs/10.1080/15320389609383528?journalCode=bssc19.

3.7

The Dangers of Microplastics By Lorin Sleet ’22 Microplastics, defined as extremely small pieces of plastic debris in the environment resulting from the disposal and breakdown of consumer products and industrial waste, are significant causes of pollution and environmental damage. For example, they can affect marine ecology and cause water pollution. However, their full impact on the environment and ecosystems is still under study. The first documentations of marine plastic debris, found in the Sargasso Sea, were published in the journal of Science in 1972 [1]. In 1996, Captain Charles Moore discovered the “Great Pacific Garbage Patch” located in the middle of the North Pacific Subtropical Gyre [3]. However, it was not until 2004 that Richard Thompson, a professor of marine biology from the University of Plymouth, UK, coined the term “microplastic” and called for more research to be done on the subject. Following his call to action, the scientific evidence regarding contamination, fate, and effects of plastic debris in the oceans increased at an exponential rate. The crisis was introduced to the public and popularized two years later in 2006, when a five-part series called Altered Oceans by Ken Weiss of the Los Angeles Times won the Pulitzer prize. His fourth essay in the series, “Plague of Plastic Chokes the Seas,” gave a devastating account of the size

and effects of the problem on marine life and birds. Back in 2006, when Ken Weiss wrote the series, an estimated 1 million seabirds and about 100,000 seals, whales, dolphins, and other marine mammals per year choked on or got tangled in plastic nets or other debris. The numbers since then have only grown. The garbage patch was repeatedly described in articles as being “an island of floating plastic litter twice the size of Texas” [4]. Skeptical of yet intrigued by this claim, Miriam Goldstein at the Scripps Institution of Oceanography, along with a group of graduate students, organized an expedition to survey and sample the Great Pacific Garbage Patch in 2009 [5]. It was the first organized study of the Subtropical Gyre. The group spent three days throwing surface skimming nets into the ocean expecting to dredge up pieces of plastic, but got no significant results. However, on the fourth day, observers on the deck had to call for assistance as thousands of pieces of plastic debris “smaller than a pencil eraser” (< 5 mm in size) came into view. The crew had not brought equipment with the ability to quantify the soup of plastic before them. As they studied their observations upon their return, they advocated for investment into specialized research on quantifying the effect of the pollution in oceans and other waterways




such as shallow bays, coastal waters, and estuaries. Their findings also demonstrated a need to shift from cleanup to prevention methods to confront the pollution problem.

The full extent of the distribution and impact of microplastics is still being studied. What is known, however, is that microplastics are extremely detrimental to biotic organisms. Plastic is not biodegradable and therefore ends up in oceans where abiotic factors, such as sunlight, begin a process called photodegradation. This process breaks plastic into smaller and smaller particles, making it more likely for them to be mistaken by animals as food. Birds such as albatrosses often mistake plastic resin pellets for fish eggs; in doing so they feed the plastic to their hatchlings, who ultimately die of starvation or organ ruptures. Seals and other marine mammals frequently become entangled in discarded fishing nets and drown. This phenomenon is known as "ghost fishing." The damage is not limited to the deaths of marine animals and birds. The debris also has impacts, more broadly, on food webs. As trash accumulates within the gyres, its sheer density blocks sunlight from reaching the plankton and algae below. This is a cause for concern, as plankton and algae are the most common autotrophs in the marine food web, meaning they can produce their own nutrients from carbon and sunlight. As a result of the plankton and algae communities being threatened, there could be major, long-term shifts in the biodiversity of the ocean, since they are a keystone species. These dangers are compounded by the fact that plastics absorb and excrete harmful pollutants (density-independent factors) such as colorants and chemicals like PVC (polyvinyl chloride). PVC has been recognized as a leading inhibitor of chemical cycling, specifically nitrogen's cycling. Nitrogen is one of the most important limiting nutrients for photosynthetic organisms such as algae and marine bacteria. Should the harmful effects of microplastics in our oceans continue, millions more animals will die and the effect on humans, due to the food web, will progress. It is projected that the continued ingestion of microplastics, due to their toxins, will cause reproductive issues, damage to organs, developmental issues in children and possibly death.

There is no simple solution to the "Great Pacific Garbage Patch". Attempting to design a net with the capabilities to scoop up such minuscule pieces of plastic would be fruitless. Not only would this mistakenly extract marine life, it is estimated that it would take 67 ships one year to clean less than 1% of the North Pacific Ocean [5]. As the pollution is located primarily in bodies of water, the plastics are able to travel with the currents, so setting up nature or zoned reserves would be impossible. Scientists have come to the conclusion that currently the best way to move forward is reducing plastic emissions. Single-use plastics have become an integral part of society since the 1950s, when the lifestyle of "Throwaway Living" trended. From cutlery, straws, plates, and cups to packaging, a 2021 estimate showed that the average American throws away approximately 110 pounds of plastic annually. To provide a sustainable solution and reduce plastic pollution, there has been a push to use biodegradable resources in the place of single-use resources. While options like compostable cutlery, containers, and straws are available to improve the usage of plastic, they alone cannot reduce plastic emissions. One source of microplastics in the oceans is emission from modern textiles. Our consumerism has promoted the rise of fast fashion: "inexpensive clothing produced rapidly by mass-market retailers in response to the latest trends" [2]. Textiles, such as polyester, nylon, acrylic, and other synthetic fibers, make up 34.8% of both primary and secondary microplastic pollution, the largest source by far. Approximately 2.2 million tons of microfibers pollute our oceans yearly. Each time laundry is done, nearly 9 million microfibers enter wastewater treatment plants that do not have the facilities to filter the fibers out; as a result, the pollution is released into the ocean. There has been development of home-based systems that prevent some of this pollution from being released into the environment. Filtration systems that can remove 97% of microfibers exist; however, they are expensive, not widely available, and there is little to no incentive for consumers or larger companies to install these technologies. Achieving such drastic changes needs to be a collective effort on the part of consumers as well as the plastic industry, waste managers, scientists, and all levels of government from all over the world to develop and implement sustainable plans for the future. Stricter legislation regarding microplastics is crucial, unless the average human prefers to continue ingesting 20 kg of plastic throughout their lifetime, with serious health and environmental repercussions.

References

[1] J. Edward Carpenter and K.L. Smith. Plastics on the Sargasso Sea Surface. URL: https://www.science.org/doi/abs/10.1126/science.175.4027.1240 (cited on page 33).

[2] Jambeck Research Group. URL: https://tos.org/oceanography/article/the-story-of-plastic-pollution-from-the-distant-ocean-gyres-to-the-global-policy-stage (cited on page 34).

[3] Madeleine Smith et al. Microplastics in Seafood and the Implications for Human Health. Sept. 2018. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6132564/ (cited on page 33).


[4] National Geographic Society. Great Pacific Garbage Patch. Oct. 2012. URL: https://www.nationalgeographic.org/encyclopedia/great-pacific-garbage-patch/ (cited on page 33).

[5] National Geographic Society. Ocean Gyre. Oct. 2012. URL: https://www.nationalgeographic.org/encyclopedia/ocean-gyre/ (cited on page 33).

[6] Kenneth R Weiss. Altered Oceans: Part Four: Plague of Plastic Chokes the Seas. URL: https://www.latimes.com/world/la-me-ocean2aug02-story.html.



4. Social Science

Section Header by Jocelyn Hayes ’23.




4.1

To Err is Human, to Forgive Divine By Alexis Alarcon ’23 The limits of forgiveness must be observed and revered as much as the action itself. In the cases where total absolution is not achievable or permissible, the situation commands the attention of any passerby. The notion of condoning the Nazis has been, and likely always will be, a highly contested issue worldwide. By questioning the restrictions of remission in any instance it begs the question if pardoning is possible. While forgiveness and repentance are essential to life for practical purposes, on behalf of a race or community it is unacceptable and impossible. Forgiveness is defined as “the action or process of forgiving or being forgiven” by the Oxford Dictionary. This definition leaves out a crucial part of the process: repentance. What good is leniency if the offending party is not remorseful and therefore not deserving of being pardoned? In everyday life, seeking and granting mercy can be therapeutic for both parties; salvaging a relationship is likely worth more than the crime. It is generally viewed as the morally correct thing to do, for interpersonal interactions. A challenge arises in determining if the offending party does wish to come to terms with their wrongdoing, urging the question of when is it acceptable to not pardon the offender? Who gets to determine how many or what type of wrong is too hurtful? Revenge is generally frowned upon by polite society, however, in a case as massive and disturbing as the genocide of the European Jews, an issue so traumatizing to millions of people, the overwhelmingly emotional response is legally supported by the nation of Israel. It is arduous, borderline disrespectful, to suggest bringing civility and logic into the reactions of survivors and their families when the foundations of the world order were shaken. Kenneth Feinberg, a Washington D. C. attorney was tasked with assigning a dollar amount of reparations the federal government had to pay to the families of the three thousand people who lost their lives during the September 11 attacks on the United States. The families and much of the public nearly revolted over this callous proposal. Yet the immense torment perpetrated by the Nazis demands a price, some way to attempt to repair the lasting damage felt throughout the world today. To begin to consider the possibility of reconciliation, the perpetrator must show signs of and be readily willing to wholeheartedly commit contrition for their sins. Simon Wiesenthal proposes the impossible question of the possibility of forgiveness in his book, The Sunflower, having been challenged to do the same by a dying Nazi. The scale of evildoing during the Holocaust is hard to fathom, and to suggest the notion of empathy is abhorrent to some survivors, understandably

so. Taking a look at the unique story illustrated in The Sunflower [1], it is easier to consider the consequences of purging the Nazis of sin, and how one interaction can demonstrate the large difference of opinions on the topic. In this story specifically, Wiesenthal and hundreds of other prisoners from their labor camp are brought to work at a military hospital for the day. A nurse comes to greet the prisoners, tasked with selecting a Jew to hear the confession of an SS soldier on his deathbed. Wiesenthal is chosen out of the group and is led inside, where he listens for the rest of this day and the next to hear more of the story. Once the dying man finishes, Wiesenthal walks out of the room without a word, and returns to his fellow prisoners. The prisoners return yet again, but the nurse informs Wiesenthal that the man passed away that night. The book summarizes the rest of the war and Wiesenthal’s movements between camps until he is eventually liberated. He goes to visit the Nazi’s mother, and spares her the graphic details of her son’s life and crimes. The story ends, and we are implored to consider how we would react in the same circumstance, and influential people from around the globe were asked to write responses. Two of those responses were Edward H. Flannery, a Catholic priest on the executive committee of the National Christain Leadership Conference for Israel, and Primo Levi, an Italian writer and antifascist held at Auschwitz. Flannery, from the Catholic perspective, provides two possible approaches to this dilemma: psychological or emotive, and ethical or religious. He writes, “The psychological or emotive factors are of importance and should have an influence on the decision to be made, but when they are in serious conflict with ethical or religious principles they must give way. . . ” (page 136). He continues, “The dying SS man did not ask him to speak on behalf of all Jews or, for that matter, for the harm done to all Jews but only for what he had done” (pg 137). Flannery maintains if he were in Wiesenthal’s position he would have pardoned the dying Nazi. While his stance is understandable, and supported with scripture on forgiveness from the Bible, his argument for why the SS man should have been absolved is not entirely how it was told in the story. The SS soldier tells Wiesenthal, “‘Yes, it is a year,’ he continued, ‘a year since the crime I committed. I have to talk to someone about it, perhaps that will help’” (pgs 29 - 30). It is evident here that the Nazi is confessing his sins as a Catholic, whose confession is required to go to Heaven, not as a human desiring to correct a past misdeed. Illustrated above, the Nazi is not concerned with Wiesenthal’s reaction, he simply needs someone to listen and absolve him of his


sins. Primo Levi has a different viewpoint; he writes, "...I can affirm that you did well, in this situation, to refuse your pardon to the dying man. You did well because it was the lesser evil: you could only have forgiven him by lying or by inflicting upon yourself a terrible moral violence" (pg 191). This is apparent from Wiesenthal's restlessness; he did not listen to the story in complacent silence, he was deeply uncomfortable the entire time. He is already wracked with guilt, to the point of visiting the Nazi's mother after the war. If Wiesenthal went the route of forgiving the man, the reaction would cause him further pain, which is the greater evil. Levi continues, "The act of 'having a Jew brought to him' seems to me at once childish and impudent. . . Did Himmler not believe something similar when he ordered the suspension of the Lager massacres. . . ?" (pg 192). This connection is the strongest point of the argument, solidifying the Nazi's utter disrespect for human life. The demand to talk to anyone about his experience hints at the idea that he is motivated by a desire to repent for fear of imminent death. Would he acknowledge his crimes if he continued living?


While I have not personally met a survivor, my generation will likely be the last to have that opportunity. Nonetheless, I have a personal connection to the Holocaust. I recently learned that my great uncle was killed in a concentration camp. My mom always says he was in the wrong place at the wrong time whenever I bring up the topic, but I've never been satisfied with that. I do firmly believe my great uncle was intentionally sent to a death camp for being a Polish Catholic. In the grand scheme of things, it is unrealistic, borderline nonviable, to hold one person or nation responsible in the legal sense. However, it is well understood among citizens of Germany that they are responsible for what happened during the Holocaust. This generation is not guilty of any crime, but bears the burden of reckoning with its past to come to terms with the atrocities committed by its forebears. I don't hold any German personally responsible, but I maintain that this genocide was perpetrated by an entire country, with many other complicit countries. A question brought up in a past discussion stuck with me: how was an entire nation and people readily willing to kill? For crimes as hatefully motivated and as systematically perpetrated as the ones committed by the Nazis, forgiveness is not an option. The time to reprieve was years ago, before the first act of genocide was committed, before it was conceived. It is unfair to appeal to the descendants of survivors to take decisive action on behalf of their relatives. The time for repentance is now, and it is a forever-continuous undertaking. As a community, the world is rightfully beyond forgiving. There can only be understanding of the past, of how Germans today are responsible, and a desire to uphold the legacy of those who fought back and to condemn those who murdered for a living.

Reference

[1] Simon Wiesenthal. The Sunflower. Schocken (cited on page 38).

4.2 Rebels’ Gradient: A Comparison of Different Kinds of Rebels and Rebellion in Epics By Vanessa Gonzalez-Rychener ’24 Nothing monumental ever happens without someone doing something against the rules or norms, and because this is such a vital element of change, it is also a vital element of stories. The novels Signs Preceding the End of the World by Yuri Herrera [1] and The Odyssey [2] by Homer are very different, yet their protagonists are both rebellious. In The Odyssey, the main character, Odysseus, is on a journey to return home from the Trojan war. On his way, he encounters many challenges and creates quite a few challenges for himself. The way he combats these obstacles is through rebellion (in the form of trespassing on property, killing people, etcetera) and enlisting the help of the gods – who favor him greatly. In Signs, Makina is on a journey from Mexico to the United States illegally to find her brother and bring him home. The story is also an allegory for the Aztec underworld, where each level corresponds to one chapter in the book. In this book, the journey itself is an act of rebellion, but there are many specific cases where Makina’s rebellion shows in other

ways too – such as standing up to sexual harrassment and helping those who have previously hurt her. According to the Cambridge English dictionary, the definition of rebellion is “action against those in authority, against the rules, or against normal and accepted ways of behaving” [3]. While rebellion is often used to describe the breaking of rules and the disregard for authority (both of which are blatantly present in Signs especially), the third part of the definition – going against societal norms – is displayed again and again as Makina navigates the macho crime world. The definition of rebellion almost every dictionary has very closely matches Makina’s journey, but Odysseus’ journey is a less close fit. These definitions, however, leave those in power in charge of setting the standard for rebellion. How can authority rebel from authority, or the norm-maker rebel from norms? In order to fully hold powerful people accountable, I will be using a slightly different definition: “action against those in authority, against the rules, against morals, or against


40 normal and accepted ways of behaving.” By making this small change, Odysseus’ reckless actions go from being compliant (a statement which does not seem to fit at all) to being rebellious. Odysseus’ and Makina’s rebelliousness is vital to their respective epics, but their rebelliousness shows up in profoundly different ways because of their respective levels of power, individual journeys, and personalities. The difference between Makina and Odysseus’ social status causes a world of difference between the ways in which they can be disobedient and still get away with it. When Odysseus wants something done, all he has to do is round up his army, tell a fantastic story of his adventures, or call on Athena – the Greek goddess of wisdom and warfare – to help. In the final battle against the suitors who have taken over Odysseus’ house when he returns home, the war hero and his three sidekicks have a great advantage over the pack of over a hundred suitors they are facing. His men throw their spears and all hit, thanks to Athena. Then, again with her help, the suitors throw and all miss (Homer 484-485). Because of his fame and power, Odysseus is able to employ powerful allies, including the gods themselves. He gets to openly murder hundreds by plotting a very preliminary plan, and not have to worry about details or consequences, because the gods do the rest. Even if Odysseus did not have the gods’ help, his great riches would get him quite far anyway. Because this superhuman hero has so many supporters, and norms of the day encouraged such actions, only a modern, critical eye would even call it rebellion. In contrast to Odysseus’ highly subsidized recklessness, when Makina wants to get things done, she often has to resort to much more backhanded, sneaky strategies. She reacts assertively – but not as destructively – when harassed by a young man on the bus. Herrera writes, “Makina turned to him, stared into his eyes so he’d know that her next move was no accident, . . . and with the other hand yanked the middle finger of the hand he’d touched her with all the way back” (31). Though Makina’s act also shows her thirst for revenge, she teaches a lesson by causing temporary pain, not ending anyone’s life. However, even if Makina was as bloodthirsty as Odysseus, she simply could not get away with a huge massacre. She has no political clout or weaponry, and her only way to get around the law is by hiding. In addition to being of a lower social status, being a woman forces Makina to take the law into her own hands when the macho culture and governing powers refuse to do anything about it. For this reason, too, she has to discreetly bend the boy’s finger back to get her point across. Another reason Makina’s rebelliousness differs from that of Odysseus is because her journey simply requires a different kind of unlawfulness. Even if the two were

of the same social class and had the same power, their journeys are in many ways quite different, and different types of rebellion must be used to achieve their final goals. Because he had been told to be wary of murderous plans upon returning home, Odysseus goes to great lengths to hide his identity until the last moment. At one point, when his childhood nurse discovers who he is, the savage hero threatens death. "Nanny!" he whispers angrily, "Why are you trying to destroy me? . . . Be silent; / no one must know, or else I promise you, . . . I will not spare you when I kill the rest" (Homer 440). On Odysseus' journey back home, he is set on keeping his identity secret from his wife in order not to be killed. If that part was not so trivial to his journey, Odysseus might not be as set on killing loved ones to keep with his plan. If Odysseus had not been gone as long as he was, threatening the woman who raised him would not be necessary and therefore this rebelliousness against bonds would not either. Though as a slave she is technically property, Odysseus' nanny is close enough to be loved when it is convenient. Threatening her life must require some serious resolve. If it weren't for Odysseus' lofty situation in life, he likely could not afford to be as bold as he is here either. He has the ability to choose this strategy, which in some ways is easier. Makina, on the other hand, does not get that kind of choice. In Signs, the only way Makina can fulfill her journey is to let other, powerful men like Odysseus make the rules by which she must play. Though even if she had the option, Makina most likely would not choose to be so reckless (this will be covered later), she simply never gets that as an option. After her arrival in the United States, Makina has to deliver the mysterious, illegal package that has financed her journey to a shady underlord. Even as he tells her she has nothing to fear, the man pats his knife possibly as a subtle threat (Herrera 61). Makina's journey does not involve taking over governments or even a household; instead her job is to get around pre-existing boundaries. Because of this, she has to work with people for whom sneaking around the law is second nature, and this makes her rebellion (which is actually breaking laws in this case) a lot more stealthy, too. It is clear that Odysseus and Makina's lives and journeys are set in vastly different contexts, but in truth the characters have very different dispositions too. Out of all the differences between Makina and Odysseus, the most significant difference is probably their personalities. In The Odyssey, while Odysseus is recalling his voyage to King Alcinous of Phaeacia, he shamelessly tells of sacking the town of Cicones as if it were a typical day. Boastfully he states, "I sacked the town and killed the men. We took their wives and shared their riches equally among us" (Homer 241). The fact that Odysseus still shows no regret for his actions in the war many years


later shows how bloodthirsty rebellion is simply part of his nature. While much of The Odyssey is narrated in the moment, this part is a rarer example of him reflecting on past crimes. This moment when Odysseus could choose to redeem himself by showing a purer true character ends up proving the exact opposite. On the other hand, when Makina arrives in Mexico City, she sees some coyotes trying to take advantage of the boys who had harassed her. Instead of walking on and letting them get what they had coming, she warns the boys of the danger they are facing. "Watch it, they're out to screw you," she says and then continues on her way (Herrera 36). This ability to switch between vengeful and empathetic illustrates how Makina has simply learned to adapt to a hostile world, but inside she has a kind heart. Being kind in the cruel world she is living in is a kind of rebellion too, even though it technically breaks no laws. These two glimpses into the characters' inner personalities really bring home the monumental difference between the two, even if they have many similarities. Though the main characters of Signs Preceding the End of the World and The Odyssey are both defiant and unruly, their individual journeys, social status, and character make their acts of rebellion profoundly different. Makina must use stealthy ways of showing her worth, like pulling a transgressor's finger back, whereas Odysseus gets to be very open and murderous to his opponents just because of their respective levels of power. In fact, Odysseus' actions might not even constitute rebellion under a traditional def-



inition. Odysseus must keep other people quiet in order to be kept secret upon returning home, while Makina must herself comply with powerful figures in order to get her job done. At the heart, though, Makina chooses to be much more empathetic, even to her foes, than the great hero Odysseus does to people who have done nothing wrong. Though the rebelliousness of many of the so-called rebellious acts of these characters could be disputed (Is the act rebellion if that rebellion is a societal norm? Is it rebellious to stop an act of generally accepted rebellion?), they reflect how even people who share the characteristics of a rebel can choose to and be forced to use that disobedience in different ways. Everyone has to start somewhere, and that’s never a choice, but what makes the biggest difference between rulebreakers – and between anyone – is how they choose to move forward. References [1]

Yuri Herrera. Signs Preceding the End of the World. Translated by Lisa Dillman. 2015 (cited on page 39).

[2]

Homer. The Odyssey. Translated by Emily Wilson. W. W. Norton Company, 2018 (cited on page 39).

[3]

REBELLION | Definition in the Cambridge English Dictionary. URL: https : / / dictionary . cambridge . org / us / dictionary / english / rebellion (cited on page 39).

4.3 No One Cares That You Ran a Marathon
By Cyd Kennard '23

"Running in a tank top and shorts when there's frost on the ground is a wearisome existence" is the paraphrased thought of one of my teammates. She came with us to the last meet of the season but wasn't able to run, so was ironically anointed "bagman" for carrying the layers we discarded in a large yellow sack. I agreed, and wondered who in their right mind thought that long distances should be run in thirty degrees Fahrenheit—certainly spectators can't enjoy our discomfort. For that matter, certainly spectators can't enjoy running to any degree. But that thought is partly a fallacy; short-distance races have their moments. I would stand in the cold to watch three minutes of an 800-meter run. Longer distances, on the other hand, seem a lost cause. Who would wait for hours to watch a half-marathon? A full marathon? That brings up a seemingly simple, yet layered question: why are people so much more drawn to watching sprints than marathons?

The first response that you might pose to this question will likely be that short distances just take less time to watch. This is entirely valid: even the longest short-distance runs take a few minutes at most, giving spectators fast and easy entertainment. With long distances, the reward is much more gradual. It's not difficult to imagine why any accomplishment might lose its appeal to viewers (that is, after said viewers spend hours watching a competitor, only to witness the grand climax of them crossing a bright line on the ground).

But it’s more than the brevity that draws us into sprinting and away from marathoning, isn’t it? After all, some of society’s favorite sports take an hour or more to watch (soccer, basketball, football). And additionally, while sprints themselves may be brief, the meets that they are a part of last at least twice a marathon’s duration—a track and field meet of only four hours is not in many athletes’ vocabularies. The next, most obvious answer is the pain and tedium that simultaneously characterize a The first response that you might pose to this ques- marathon and drive viewers away. Even the best long-


Even the best long-distance runners in the world finish their races in delirious pain—something far less appealing than the burst of semi-exhausted joy which comes from sprinters' finishes. But let's be honest, that isn't really the problem either. No matter what we might choose to believe, humans don't always object to watching other people experience pain. Think again of other sports—boxing, wrestling, football, rugby, soccer—where the most exciting moments are when the underdog pounds their opponent to the ground, or when your home team delivers a particularly sharp kick to a player's shin. This type of pain is expected, even welcomed by viewers. So why is the pain of marathon running any different? Well, for one thing, long-distance running is a lot like life. People tend to associate the pain and tedium of this sport with the arduous times of their own existence—subconsciously, at least. Imagine one of the world's most efficient long-distance runners, Eliud Kipchoge of Kenya. As the first and only person to break the two-hour barrier in the marathon (although largely assisted by pacers and enhanced footwear), Kipchoge is additionally famous for maintaining a calm, even leisurely expression throughout all twenty-six point two miles [6]. This makes him a sort of outlier in the category of distance runners—some appear to be in excruciating unrest; others manage to look worse. In his lifetime, Czechoslovakia's Emil Zátopek was observed to be "bobbing, weaving, staggering, gyrating, clutching his torso . . . [running] like a man with a noose around his neck. He seemed on the verge of strangulation" [2]. About a year ago I went for one of my longest runs, paced off of three older boys who were considerably faster than I was. Their recovery (slow or easy pace throughout the run) was my workout. In the seven-and-a-half miles we ran, they were able to maintain a conversation the whole time, whereas I was breathing so hard that it came out as a pant. But this is how these things tend to go—the young, inexperienced runner (me) follows in the footsteps of the veteran athletes (them). I felt my pace slipping as time passed, while they seemed to gain speed with each stride. By the last half-mile they had slipped from my sight. In a depressing sense, I remember seeing that run as a reflection of how my life could be: an endless stream of time spent trying to catch up to people who are, plainly, just better than I am. Conversely, if long-distance running reflects a dull reality, then short-distance running embodies a sanguine yet unrealistic message: you're only ever a step or two behind those who truly excel beyond you. Take it from one of the fastest runners in the world, Sha'Carri Richardson of the United States. Placing first in the Olympic Trials for the 100-meter dash, Richardson earned a time of 10.86.

Her teammate Javianne Oliver finished second at 10.99 [3]. This 0.13 of a second is a considerable separation in sprinting, yet the ordinary person would see it as virtually no time at all. Richardson was the clear victor of the competition (though she was barred from racing in the Olympics because of arguably nonsensical marijuana restrictions) [4], and yet Oliver could still reason that she might have beaten Richardson if conditions were ideal. Indeed, Oliver was only a footstep behind; maybe less. In a race where all finishers come in within seconds of each other, sprinters see their competitors as always within their grasp. Even Jamaica's acclaimed Usain Bolt, who won Rio's 200-meter dash in 19.78, was only 0.65 seconds faster than the slowest runner [5]. These short distances of sprinting inflate runners with the idea that they are always exceedingly close to beating their competitors, and by extension communicate to spectators that in their own lives, they are mere footsteps away from overcoming impossible odds—a thought which is often improbable, and irresistibly attractive all the same. As explained by Bolt: "I just imagine all the other runners are big spiders, and then I get super scared" [7].

To move with such explosive strides, sprinters need to temporarily minimize their pain, and increase their breathing, heart rate, and strength. This is achieved through adrenaline—often a result of an exciting or frightening situation [2]. For many, the quick, intense atmosphere of a sprint is the perfect stimulus to produce a wave of this hormone (though for Bolt, the thought of spiders incites more adrenaline). These athletes spend such a brief amount of time competing, with the time that they do spend being buttressed by this adrenaline rush. As a result, in the moment that they compete, sprinters find themselves fueled by the fantasy of having infinite strength. In truth, sprinters can conquer anything with adrenaline in their system—maybe even life itself.

But life is no fantasy. Short-distance running might provide a tempting facade of immortality, but as the saying goes: all good things must come to an end. And life is a truly good thing, isn't it? When watching the tortured runner struggle through the last mile of twenty-six, some might disagree with this statement. But behind the pained mask, there is joy to be found through distance. There is joy, and there is relief. There is sorrow and suffering as well. In short, there is life.

If you're lucky, there are also goats. As in one of my favorite runs, which involved four of my teammates and a small cluster of farm animals. In the group of brown and grey, there was a single white goat; it had black eyes, though we claimed they were red. We called it a devil because every time we ran past it, one of us would trip (the trail was slanted to the left with rocks sticking out of the ground, but we still blamed it on the goat). We


ran three one-mile-long loops around the goat pen (the oldest runner fell first, the youngest second). I stumbled on the third—it was my fault, I looked the goat in the eyes and missed the tree root beneath my shoes—and fell to the ground with scuffs on my knees and dirt up my nose. I felt more pain than joy in that half-hour, I know, but the happiness it left me with outweighed any ache. Because that's the moment I remember the run by. Not by the strain of my calves, or by the air that couldn't quite inflate my lungs, but by the white goat. The white goat, and the way my teammates laughed as we ran away—a sound that couldn't help but echo and spread—something which hovered in the air and mixed with the crinkle of our footsteps as we rushed across the dead and dying leaves.

In terms of long-distance running, Zátopek describes the atmosphere as lying along the "borders of pain and suffering . . . [that] the men are separated from the boys" [1]. Many would see this sentiment as an adequate metaphor, though in reality it couldn't be more flawed. Beyond speaking exclusively of male athletes, Zátopek characterizes a marathon as a race that can only be completed by the strongest of body and mind. But if a marathon is life, then every person who lives must run—every person must be running now. One mortal life, one long-distance run. Each marathon does more than just remind us of our most painful and tedious days; it reminds us of the instances weaved between, those which bring meaning to our larger picture. Though the finish line comes slowly, it arrives all the same. Marathons prompt the idea that all those moments which compose life will one day come to an end, an end which, nevertheless, we dread. We are far less drawn to watching marathons than sprints because the extended pain of long-distance running reminds us that we are mortal—eventually, we will have to end the amalgam of emotions, from suffering to joy, that is running and, simultaneously, is life itself. And of this, we are terrified.

Though that fear doesn't need to impede our experience of life. In fact, it's not able to. "I am human," tweets Richardson after the win of her race, the death of her biological mother, and a one-month suspension from competition. "I'm you," she says, "I just happen to run a little faster" [4]. Richardson's speed has caused many to call her remarkable; but she is still no supreme being. She is as mortal as me, as human as you. Her life contains laughs and laments, lively ecstasy on top of unimaginable loss. That is why the fifteen-second view of the world which is internalized from a sprint is not reality: you cannot fill each moment of life purely with strength and joy and adrenaline and success. There's a reason people say that life is a marathon, not a sprint. But this isn't for the cliché


that you should pace yourself through life; it's because at the end of the day, your life is just that: one life. It goes on for the long run, with all of its trials: tedium, exhaustion, joy, pain, adrenaline, strength, weakness. Marathons remind you that, regardless, you run on—you can never stop. Run until the end.

References

[1] Simon Burnton. 50 Stunning Olympic Moments No 41: Emil Zatopek the Triple-gold Winner. June 2012. URL: https://www.theguardian.com/sport/blog/2012/jun/22/50-olympic-stunning-moments-emil-zatopek (cited on page 43).

[2] Jacquelyn Cafasso and Debra Sullivan. Adrenaline Rush: Everything You Should Know. Nov. 2018. URL: https://www.healthline.com/health/adrenaline-rush#symptoms (cited on page 42).

[3] Taylor Dutch. Sha'Carri Richardson Wins the Women's 100 Meters at the Olympic Track and Field Trials. June 2021. URL: https://www.runnersworld.com/news/a36772234/2021-olympic-trials-womens-100-meter-results/ (cited on page 42).

[4] Adam Kilgore and Rick Maese. Sprinter Sha'Carri Richardson Suspended One Month after Marijuana Test, Putting Olympics in Doubt. July 2021. URL: https://www.washingtonpost.com/sports/olympics/2021/07/02/shacarri-richardson-drug-test/ (cited on pages 42, 43).

[5] Olympic Channel Services. Rio 2016 Athletics 200m Men Results. 2016. URL: https://olympics.com/en/olympic-games/rio-2016/results/athletics/200m-men (cited on page 42).

[6] Amy Tikkanen. Eliud Kipchoge. Nov. 2021. URL: https://www.britannica.com/biography/Eliud-Kipchoge (cited on page 42).

[7] Cameron Tomarchio. 9.58 Reasons Usain Bolt Is the World's Fastest Man. July 2014. URL: https://www.news.com.au/sport/958-reasons-usain-bolt-is-the-worlds-fastest-man/news-story/33edf559786bf1a4c64b4a272b9e51a8 (cited on page 42).



4.4

Project MK-Ultra
By Kate McAllister '22

Beginning in the 1950s, Project MK-Ultra started as a seemingly guiltless research program of the United States' Central Intelligence Agency, but it soon became secretive, unethical, and dangerous. As American soldiers returned from the Korean War, it was evident that some individuals had changed in unexplainable ways. It was as if soldiers had been "brainwashed" by the foreign "Communist brainwashers" — the Soviet Union, Korea, China — through the execution of mind control techniques [2]. Perplexed, the CIA director of the time, Allen Dulles, appointed Dr. Sidney Gottlieb to begin experimentation centered around behavior modification as a means of understanding mind control. The main goal was to master the execution of mind control so that the United States might manipulate foreign leaders through the implementation of these techniques [5].

The research conducted by Dr. Sidney Gottlieb was supposed to be experimental. He was to introduce an intervention and then study the direct effects of said intervention. To do so, he would use intrusions such as electro-shock therapy, hypnosis, radiation, and a variety of drugs, toxins, and chemicals. Seemingly, this means of study was reliable: the "X," an experimental intervention applied to the subject, would directly cause the "Y," successful mind control. However, due to Gottlieb's failure to implement research practices such as random assignment, control groups, and a double-blind research design, the data he collected became unreliable as well as unjustifiable. The pressing question arose: why were these sadistic experiments being conducted? The data was skewed, and the researcher was knowledgeable of his subjects and the data produced by them. These experiments became a means of personal vindication which strayed far from their original intention. Dr. Gottlieb became the mad scientist who was free to objectify and control his subjects to the extreme.

Although he experimented with many means of behavioral alteration, Dr. Gottlieb became heavily reliant upon the use of lysergic acid diethylamide, LSD. LSD affects the user through intensified thoughts, heightened emotions, and elevated sensory awareness. When ingested in high dosages, such effects may manifest as auditory or visual hallucinations. Gottlieb's experiments became increasingly sadistic. They were no longer a means of identifying how "X" causes "Y," for victims of Gottlieb experienced situations such as being locked in sensory deprivation chambers and restrained in a straitjacket while dosed with LSD [5]. Funded by many research centers and universities, Dr. Gottlieb conducted the majority of his experiments in American prisons. However, he also enacted secret experimentation in detention centers

throughout Europe and East Asia. By doing so, he was able to capture enemy agents and suspicious individuals and test his drug "potions" on them as a means of escaping legal implications [1]. Despite his horrid secret behavior overseas, Gottlieb, overall, drew data from a range of test subjects. Some individuals freely volunteered, some were coerced through incentives to volunteer, and some were involved in experimentation without knowledge or consent [2]. Despite the presence of the small few who volunteered freely, the majority of his subjects were either given improper informed consent, not debriefed, or placed in harm far beyond reason. Gottlieb wished to study reactions without the subject's knowledge, and to do so, he placed fellow CIA employees, military personnel, doctors, government agents, prostitutes, mentally ill patients, and members of the general public in great danger of being drugged without their knowledge [5].

James "Whitey" Bulger, a seasoned criminal and victim of Project MK-Ultra, was dosed with LSD more than 50 times without his consent. In letters written by Bulger, he notes that he and fellow inmates were provided incentives in exchange for their participation. Not only were they incentivized through reduced jail time, but they were additionally misinformed about the true nature of the research they were to take part in — they had been told they were taking part in medical research aimed at finding a cure for schizophrenia [6]. Bulger reports he experienced "hours of paranoia and feeling violent. We experienced horrible periods of living nightmares...I felt like I was going insane" [2]. Following his participation in this study, Bulger began to turn to more harmful criminal activity—including murder. Some have begun to correlate this change in behavior with the awful treatment he experienced throughout the study. James "Whitey" Bulger is just one case in which participants were harmed beyond reason, experimented upon without consent, and deceived about the nature of the study.

Frank Olson, a United States Army biochemist and biological weapons researcher, died suddenly following his proclamation that he would leave the CIA. Seen as a security threat to the secret nature of MK-Ultra, Gottlieb arranged for Olson to be drugged with LSD. Frank Olson, ignorant of his participation and an individual previously diagnosed with suicidal tendencies, was dosed and experienced a psychotic episode leading to his death [5]. In fear of a confidentiality breach, Dr. Gottlieb once again subjected an individual to experimentation without voluntary consent, ultimately leading to the death of Frank Olson.


The Central Intelligence Agency of the United States began the search for mind control as a defensive tactic, but Dr. Sidney Gottlieb had different goals. He was to find a means of mind control, but to do so, he felt one must first wipe away the current human mind so that a new one could be implanted in its place. Gottlieb both succeeded and failed. He destroyed the human mind, successful in finding various ways to do so, but he left only "voids" in humans, with no way to implant the new mind he desired [4]. He ultimately destroyed lives, and with the removal of the active head of the CIA, the details of the experiments and the records of Project MK-Ultra were destroyed too [1].

Project MK-Ultra now serves mainly to emphasize the necessity of research procedures which safeguard an individual's right to be properly informed, guaranteed privacy, not deceived, not coerced to participate, and properly debriefed, including being made aware of how the data collected will be used. Because Dr. Gottlieb violated almost all of these principles, those involved in the study were placed in unreasonable and unpredictable harm. "Volunteers" were deceived, never debriefed, or simply sentenced to the study without consent. As seen in the case of Frank Olson, even researchers were placed in unpredictable harm, with the threat of becoming a non-consensual subject looming. Third parties were placed in danger by those altered by the experiments which took place, as seen in the case of James "Whitey" Bulger, and society became threatened by the introduction and popularization of LSD. Project MK-Ultra began justifiably in the pursuit of a means of national defense, but the manner in which Dr. Gottlieb executed the study is inexcusable.

To address the concerns of an Institutional Review Board: through the original execution of Project MK-Ultra, Dr. Gottlieb was trying to find a direct causation for mind control by an experimental approach. However, an approach which guarantees that ethical practices will be followed may be more properly carried out through an observational study. In this way, the participant is not under the direct control of the researcher, but is rather observed for the effect of the intervention. In an altered research design, there would not be unnecessary exposure to risk for subjects, and where risk is necessary, it would be reasonable in relation to anticipated benefits. In the selection of subjects, voluntary participation with proper informed consent, no attempts at deception, and ensured privacy should be followed. For the means of initial experimentation, the subject pool should not include individuals susceptible to coercion, such as children, prisoners, or those with impaired decision making. The study should follow a large randomized cohort of individuals following a double-blind procedure, ensuring no data will be skewed while obtaining a large, representative data set.

Informed consent should be appropriately documented, and provisions to protect the privacy of subjects and data should be made.

Regarding the necessity of such a study, I find the pursuit of "mind control," even with the implementation of ethical standards, to be inherently unethical and to bring forth no benefit. The original intention of such a study was to implement harm. The CIA of the United States wished to find a way to terrorize its wartime enemies as a result of the horrific treatment of our troops. I believe that a study founded with the purpose of "revenge" can bring forth no true benefit, only participants' suffering, for they are surrendering themselves and their minds to complete control, a power which should never be given to any researcher. Rather than pursuing further study of the human mind with the goal of dominance and control, I feel it is important to observe the mind and its reaction to certain interventions in hopes of furthering the development of brain-reading technology. Dr. Gottlieb wished to find the "X" variable which directly caused the "Y" variable, or mind control. Instead of attempting to find direct causation, it may be more beneficial to conduct similar research through observation. In other words, how does the "X" variable affect the "Y" variable, or the behavior of the subject? We should invest in further exploration to understand the human mind through current technology such as fMRI and electroencephalography [3]. The goal: measure brain activity and further technological advancements in the pursuit of understanding the human brain, not to obtain control.

References

[1] Terry Gross. The CIA's Secret Quest for Mind Control: Torture, LSD and a 'Poisoner in Chief'. Sept. 2019. URL: https://www.npr.org/2019/09/09/758989641/the-cias-secret-quest-for-mind-control-torture-lsd-and-a-poisoner-in-chief (cited on pages 44, 45).

[2] History.com. The CIA's Appalling Human Experiments with Mind Control. URL: https://www.history.com/mkultra-operation-midnight-climax-cia-lsd-experiments (cited on page 44).

[3] Queensland Brain Institute. How to Measure Brain Activity in People. Mar. 2018. URL: https://qbi.uq.edu.au/brain/brain-functions/how-measure-brain-activity-people (cited on page 45).

[4] Stephen Kinzer. The Secret History of Fort Detrick, the CIA's Base for Mind Control Experiments. Sept. 2019. URL: https://www.politico.com/magazine/story/2019/09/15/cia-fort-detrick-stephen-kinzer-228109/ (cited on page 45).

[5] Project MKULTRA. URL: https://www.cs.mcgill.ca/~rwest/wikispeedia/wpcd/wp/p/Project_MKULTRA.htm (cited on page 44).

[6] AP News. After Learning of Whitey Bulger LSD Tests, Juror Has Regrets. Feb. 2020. URL: https://apnews.com/article/us-news-ap-top-news-whitey-bulger-crime-weekend-reads-8dff185e1324cb7079b8a86c48c2ec56 (cited on page 44).


4.5

The Aversion Project: South Africa's Attempt to Cure Homosexuality
By Brian Salipante '22

Apartheid lasted in South Africa from 1948 to 1994; nestled squarely inside this period was one of the most unethical tests in the nation's history. In 1968, a man named Dr. Aubrey Levin claimed that he had a "cure" for homosexuality, and in 1969 he was given a chance to test his theory with immunity. Taking a step back to gain a broader understanding of the world: other countries had already tried to "cure" homosexuality, and neither the patients nor the therapists found it helpful. Homosexuality was even removed as an illness from the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders in 1973.

Universal forced conscription began in South Africa in 1967. When the soldiers first joined, they were examined by a doctor and chaplain to see if they were gay. The soldiers were threatened with harsh punishments and imprisonment if the government learned that they had lied about being heterosexual. Homosexual conscripts were sent to Ward 22 of a secret military hospital, which was led by Dr. Aubrey Levin starting in 1969 [2]. When the soldiers arrived, they found themselves grouped in with lifelong drug addicts and people with severe mental illness. The first test conscripts were forced into was Dr. Levin's main idea, and it became known as the Aversion Project. Subjects would be strapped down into chairs and forced to view gay pornographic material. As the subjects viewed the images, Dr. Levin would electrically shock them over and over. These shocks were so strong it was reported that one subject's shoes went flying off during one. Dr. Levin's hypothesis was that the subjects would begin to associate their homosexual thoughts with the pain from the electric shocks. In between these tests, soldiers were also subjected to the horrors of Narco Analysis, where they were pumped full of drugs so their brain would be in a malleable state. Almost all of the soldiers were chemically castrated by the time they left Ward 22; this resulted in many of them committing suicide shortly after.

The most horrific "cure" still had not been attempted, though: Dr. Levin began to order forced gender reassignment for some of his subjects. He believed that changing their genders would turn them straight, because they would then be women attracted to men. After the surgery, the soldiers were issued fake IDs and told that they could not contact anyone from their old lives. During the surgeries, there was a large casualty rate, and the ones who survived often had medical issues afterwards. Even if the subjects survived these tests, the horrors were not over. Dr. Levin created a farm named Greefswald where he sent subjects who he deemed had not been cured. Although Greefswald was called a farm, it was more like a forced labor camp. Soldiers at the camp had to build barracks, march through brush for hours, hunt wildlife, and were deprived of food and sleep. Dr. Levin ended the tests in 1975 when he left the military and went on to become the Director of Mental Health in the Department of Health Services and Welfare in the Eastern Cape.

There are countless ethical issues with the Aversion Project, and they all build on the foundation that is the basis of the tests. The Aversion Project was billed as trying to cure homosexuality, which is the first major issue, because being gay is not an illness; therefore it cannot and does not need to be cured. The first ethical issue that arises is that there was no informed consent: the subjects were conscripted gay soldiers who had no choice in whether or not they would participate. These men were forced to endure torture, and they would be sent to an even worse labor camp if they ever fought back. Connected to the lack of consent, there's also the issue of subjects not being allowed to choose to stop the tests. The largest ethical issue is the fact that the tests were designed to harm the subjects. The subjects were gravely hurt in every test, whether it was from the large electrical shocks they endured or from dying in a surgery gone wrong that never should have happened in the first place. There are so many ethical issues in these tests, but one of the more egregious ones is the fact that even if the tests had shown the results Dr. Levin was expecting, it wouldn't have mattered, because it would be corrupt data. Sending subjects who were not cured to a labor camp incentivized them to pretend that they had been "cured". When you incentivize subjects to give you the results you want, none of


your results matter. The Aversion Project was a heinous set of experiments that lacked ethics in any manner, built on a corrupt foundation of fundamental misunderstanding of homosexuality and transgenderism [1].

The aftermath of the Aversion Project is rather depressing. In 1994, Apartheid ended in South Africa and a committee to try crimes against humanity was formed. Following the creation of this committee, Dr. Levin and his family fled to Canada. Dr. Levin ingrained himself comfortably in Canadian culture as a professor of psychology. He was never tried for his crimes against humanity, but in 2010 he was accused of sexual assault by one of his victims, who had video evidence. In 2015,


Dr. Levin was charged with three counts of sexual assault and sentenced to 5 years of jail time. However, he was let out on parole after only eighteen months and now lives as a free man.

References

[1] Robert M. Kaplan. Treatment of Homosexuality during Apartheid. Dec. 2004. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC535952/ (cited on page 47).

[2] South African History Online. Aubrey Levin. URL: https://www.sahistory.org.za/people/aubrey-levin (cited on page 46).



5. Mathematics

Section Header by Kate McAllister ’22.

5.1

Proving the Formula for the Fibonacci Sequence
By JohnIrv Hollingshead '22 and Alexander Burton '22

Abstract

This article will discuss the Fibonacci sequence, phi (also known as the golden ratio), and a formula for finding the nth term of the sequence. Induction was used to prove that this formula works for all terms of the Fibonacci sequence. This work has multiple practical applications in STEM and several other fields.

Introduction

The Fibonacci sequence is a series of numbers where the nth integer term is equal to the sum of the two numbers preceding it, and where the first two terms are 0 and 1. In other words, the Fibonacci sequence is:

F0 = 0, F1 = 1, and Fn = Fn−1 + Fn−2, where n ∈ N.

The golden ratio is defined by numbers a and b where (a + b)/a = a/b = φ. This equation only works with the ratio φ = (1 + √5)/2. In the Fibonacci numbers, Fn+1/Fn approaches φ = (1 + √5)/2 as n approaches ∞ [3, 8]; that is, the Fibonacci sequence gives a better and better approximation of φ through the equation (a + b)/a = a/b as n approaches ∞ [3]. Knowing this, we can prove through induction a general formula to find any term in the Fibonacci sequence.
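To see this limiting behaviour concretely, here is a short Python sketch. It is our own illustration rather than part of the original article: it simply prints the ratio Fn+1/Fn for increasing n next to φ = (1 + √5)/2.

```python
# Illustration: the ratio of consecutive Fibonacci numbers approaches phi.
import math

phi = (1 + math.sqrt(5)) / 2

def fib_pairs(count):
    """Yield (F_n, F_(n+1)) for n = 1 .. count."""
    a, b = 1, 1
    for _ in range(count):
        yield a, b
        a, b = b, a + b

for n, (fn, fn1) in enumerate(fib_pairs(20), start=1):
    print(f"n={n:2d}  F(n+1)/F(n) = {fn1 / fn:.10f}   phi = {phi:.10f}")
```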

Proving the Function

Proposition: The function

Fn = (1/√5) [φ^n − (−1/φ)^n]   (1)

gives the nth term of the Fibonacci sequence for all Fibonacci numbers, where φ = (√5 + 1)/2 [5].

Proof. We will prove that if the function (1) is true for the Fibonacci numbers at indices k and k − 1, then it also holds for Fibonacci number k + 1. Thus, we need a double base case.

Since in generating the Fibonacci numbers we set the first 2 numbers in the sequence as 0 and 1 – the rest of the sequence follows from these lowest index terms – we supply the first few Fibonacci numbers after 0 and 1 to certify that we are expressing the Fibonacci sequence and not an arbitrary choice of natural numbers:

F0 = (1/√5) [1 − 1] = 0.   (2)

F1 = (1/√5) [φ + 1/φ] = 1.   (3)

F2 = (1/√5) [φ^2 − (−1/φ)^2] = 1.   (4)

F3 = (1/√5) [φ^3 − (−1/φ)^3] = 2.   (5)

Now we have shown that this function (1) gives Fibonacci values for the first 2 terms that follow from the formula Fn+1 = Fn + Fn−1, verifying the basis step. Let us assume that the function (1) gives the correct Fibonacci numbers for terms k and k − 1. Proving Fk+1 = Fk + Fk−1 will assert that the general case Fn works for every Fibonacci number.

Fk = (1/√5) [φ^k − (−1/φ)^k].   (6)

Fk−1 = (1/√5) [φ^(k−1) − (−1/φ)^(k−1)].   (7)

Because x^(k−1) = x^k · x^(−1), we get:

Fk−1 = (1/√5) [φ^k · (1/φ) − (−1/φ)^k · (−φ)].   (8)

Therefore, from equations (6) and (8):

Fk + Fk−1 = (1/√5) [φ^k · (1/φ) − (−1/φ)^k · (−φ)] + (1/√5) [φ^k − (−1/φ)^k].   (9)

Fk + Fk−1 = (1/√5) [φ^k · (1/φ) − (−1/φ)^k · (−φ) + φ^k − (−1/φ)^k].   (10)

Rearranging, we find:

Fk + Fk−1 = (1/√5) [φ^k + φ^k · (1/φ) − (−1/φ)^k · (−φ) − (−1/φ)^k].   (11)

Factoring out φ^k from the first two terms and (−1/φ)^k from the second two terms, we get:

Fk + Fk−1 = (1/√5) [φ^k · (1 + 1/φ) − (−1/φ)^k · (1 − φ)].   (12)

Using our definition of phi as φ = a/b = (a + b)/a,

φ = (a + b)/a = a/a + b/a = 1 + b/a = 1 + 1/φ.   (13)

Similarly,

1 − φ = a/a − (a + b)/a = (a − a − b)/a = −b/a = −1/φ.   (14)

Substituting into equation (12), we find:

Fk + Fk−1 = (1/√5) [φ^k · φ − (−1/φ)^k · (−1/φ)].   (15)

Since x^k · x = x^(k+1), we get:

Fk + Fk−1 = (1/√5) [φ^(k+1) − (−1/φ)^(k+1)] = Fk+1.   (16)

We have now shown that if we assume the function (1) gives the appropriate Fibonacci numbers at locations k and k − 1, then Fk+1 = Fk + Fk−1, which is the definition of the Fibonacci sequence, as desired. Since the function works for the first numbers of the sequence, by induction it must work for all numbers of the sequence. ■



Fig. 1: Leonardo Pisano Fibonacci, the founder of the Fibonacci sequence [2].

Conclusion

Through induction, we have proven a general algorithm for determining the nth term of the Fibonacci numbers. We started by proving that formula (1) is true for the lowest natural number values of the Fibonacci numbers. Then, by assuming that Fk and Fk−1 are true, we proved that Fk+1 is true, thus certifying that the formula works for all values of k ∈ N. We were able to do this through substitution and algebraic manipulation.

Fig. 2: Proof by induction is like dominoes. Showing that Fn being true asserts that Fn+1 is true means F is true for all n [7].

This formula is important, as the Fibonacci numbers have multiple applications throughout STEM and other subjects. Being able to determine any term of the sequence is needed in computer science (for Fibonacci trees, pseudorandom number generators, Fibonacci heaps), optics (quantifying beam paths of light passing through multiple layers of varying refractive index), and even financial market trading (Fibonacci retracement).
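As a quick numerical sanity check of the formula proved above (a sketch we added; it is not part of the original proof), the following Python snippet compares the closed form Fn = (1/√5)[φ^n − (−1/φ)^n] with the defining recurrence for the first twenty terms.

```python
# Check the closed form proved above against the recurrence F(n) = F(n-1) + F(n-2).
import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2

def fib_closed_form(n: int) -> int:
    """nth Fibonacci number from equation (1), rounded to the nearest integer."""
    return round((PHI ** n - (-1 / PHI) ** n) / SQRT5)

def fib_recurrence(n: int) -> int:
    a, b = 0, 1          # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(20):
    assert fib_closed_form(n) == fib_recurrence(n)
print("Closed form matches the recurrence for n = 0 .. 19.")
```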

References

[1] T. von Brasch, J. Byström, and L. P. Lystad. Optimal Control and the Fibonacci Sequence. Journal of Optimization Theory and Applications, 2012.

[2] Deb Russell. Biography of Leonardo Pisano Fibonacci, Noted Italian Mathematician. https://www.thoughtco.com/leonardo-pisano-fibonacci-biography-2312397. Accessed: 2019-24-07 (cited on page 51).

[3] Estefan Marquina. The Golden Ratio. https://www.fullpotentialtutor.com/the-golden-ratio/. Accessed: 2019-03-01. 2019 (cited on pages 49, 50).

[4] Georgy Adelson-Velsky and Evgenii Landis. An algorithm for the organization of information. Russia, 1962.

[5] Hypergeometricx. Inductive proof of a formula for Fibonacci numbers. https://math.stackexchange.com/questions/886978/inductive-proof-of-a-formula-for-fibonacci-numbers. Accessed: 2019-03-01. 2014 (cited on page 50).

[6] Mario Livio. The Golden Ratio: The Story of PHI, the World's Most Astonishing Number. New York, NY: Crown, 2008.

[7] Math Vault. The Definitive Glossary of Higher Mathematical Jargon. https://mathvault.ca/math-glossary/. Accessed: 2019-03-01 (cited on page 51).

[8] Stephan C. Carlson. Golden Ratio. https://www.britannica.com/science/golden-ratio. Accessed: 2019-14-09. 2006 (cited on page 49).



5.2

Doomsday Algorithm By Marco Cardenes ’23 and Julia Stern ’22 Abstract

The Doomsday algorithm is a mathematical method for determining the day of the week. In order to use the algorithm quickly, one must memorize weekday numbers and certain anchor dates. From these dates, it is relatively easy to calculate the weekday of any date. Though the Doomsday algorithm is not the only way to find a date's day of the week, it is an interesting use of the perpetual structure that applies to the Gregorian and Julian calendars.

Introduction

What day of the week will Halloween be in the year 2098? What day of the week was Queen Elizabeth's 2nd birthday? With the Doomsday algorithm, we can answer these questions in our head. The algorithm provides a way to mentally calculate the day of the week for any date. The Gregorian calendar is perpetual [3]—it operates in cycles of 400 years—so the Doomsday algorithm works both for the far past and the far future. Though our smartphones can provide the same information in seconds, the Doomsday algorithm is still useful for spontaneous calculations (and it's a great party trick!).

Background

John Conway created the Doomsday algorithm in 1973 [2], but he was not the first to work with perpetual calendar algorithms. Conway was specifically inspired by Lewis Carroll's puzzle method for determining the day of the week. Compared to other algorithms, the Doomsday algorithm is notable for its relative simplicity; this advantage, however, is contingent on one's ability to memorize the Doomsday dates.

The Algorithm

For the purpose of explanation, we will be using September 3rd, 1967 as an example. Because the Doomsday algorithm depends on the concept of modulation, we need to assign every weekday a number in mod 7. Figure 5.2 provides an assigned number for each weekday as well as a mnemonic device to remember each one. Some days are more intuitive than others, but these are important to remember for successful completion of the Doomsday algorithm [4].

Day         Number   Mnemonic
Sunday      0        Sansday
Monday      1        Oneday
Tuesday     2        Twosday
Wednesday   3        Treblesday
Thursday    4        Foursday
Friday      5        Fiveday
Saturday    6        Six-a-day

The next step is to memorize each century's "anchor day". The anchor day is a day where we already know the day of the week. To find the anchor day, we can memorize the following table [4]:

Centuries             Anchor day
1500, 1900, 2300      Wednesday
1600, 2000, 2400      Tuesday
1700, 2100, 2500      Sunday
1800, 2200, 2600      Friday

If you want to find the anchor day of a century not listed above, you can take advantage of the 400-year cycle of the Gregorian calendar. For example, if we want to find the century anchor day of 1100, we know that 1100 + 400 = 1500, so the century anchor day of 1100 must be a Wednesday. Similarly, we can deduce that 3000's century anchor day is a Friday.

Now that we know the century's anchor day, we can calculate the year's anchor day by plugging our year into the following equation:

Year Anchor = (⌊n/12⌋ + n (mod 12) + ⌊(n (mod 12))/4⌋) (mod 7) + century anchor,

where n is the last two digits of our year. Using our example date, the year anchor can be solved algebraically as follows:

1967 Anchor = (⌊67/12⌋ + 67 (mod 12) + ⌊(67 (mod 12))/4⌋) (mod 7) + Wednesday
            = (5 + 67 (mod 12) + 1) (mod 7) + Wednesday
            = (5 + 7 + 1) (mod 7) + Wednesday
            = 13 (mod 7) + Wednesday
            = 6 + Wednesday
            = Tuesday [5].

To get from "6 + Wednesday" to "Tuesday", we can think about the week as modulo 7. Adding 6 to Wednesday (day 3) results in 3 + 6 = 9. Notice that 9 (mod 7) is 2, which corresponds with Tuesday.
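The year-anchor arithmetic above is easy to automate. The few lines of Python below are our own sketch of that single step; the weekday numbers and century anchors are taken from the two tables given earlier in this article.

```python
# Year anchor = (floor(n/12) + n mod 12 + floor((n mod 12)/4)) mod 7, added to the century anchor.
CENTURY_ANCHOR = {1500: 3, 1600: 2, 1700: 0, 1800: 5,
                  1900: 3, 2000: 2, 2100: 0, 2200: 5}   # weekday numbers, Sunday = 0

WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]

def year_anchor(year: int) -> int:
    century = (year // 100) * 100
    n = year % 100
    offset = (n // 12 + n % 12 + (n % 12) // 4) % 7
    return (CENTURY_ANCHOR[century] + offset) % 7

print(WEEKDAYS[year_anchor(1967)])   # -> Tuesday, matching the worked example
```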


5.2 Cardenes, Stern. Doomsday Algorithm

53

We can write this in two ways: as an equation, and as a The closest known date to our example date is series of steps. The equation is as follows: September 5th. Our example date is two days before, therefore if we know that September 5th is a Tuesday, we j n k can deduce that our example day is a Sunday. (mod 7). (2) n+ 4 Conclusion The good news: Halloween 2098 will be on a Friday! Thanks to the Doomsday algorithm, we can look forward to a crazy Halloweekend in our retirement homes. To find the weekday of any date, we can follow the following steps: 1. Memorize weekday numbers 2. Calculate the century’s anchor date 3. Calculate the year’s anchor date 4. Find the nearest known date to your input date 5. Find the difference between the two dates 6. Based on the difference, calculate the day of the week The Doomsday algorithm is one of many methods Odd+11 flowchart [5] that use math to determine the day of the week. Though we no longer need mental algorithms to calculate the day By applying these two algorithms, you will end up of the week, the Doomsday algorithm reveals the intricate with the doomsday for a given year. The doomsday of a structure of the Gregorian calendar, another way that math given year means that certain dates within a year will all enriches our everyday lives. fall on the same weekday, so if we know the doomsday of a year, we already know 11 dates and their corresponding References day of the week. The dates below always fall on "doomsday." For the purposes of this article, we will call them [1] Konstantin Bikos. Doomsday Rule. https://www. timeanddate . com / date / doomsday - rule . known dates. html. Feb. 2022 (cited on page 53). where n is the last two digits of the year. Another way to represent this is by a series of steps in flow chart.

Month        Day   Note
February     28    The 29th if it's a leap year
March        7
April        4     (4/4)
May          9
June         6     (6/6)
July         11
August       8     (8/8)
September    5
October      10    (10/10)
November     7
December     12    (12/12)

For our example year, 1967, all of the known dates above fall on a Tuesday. The closest known date to our example date is September 5th. Our example date is two days before; therefore, if we know that September 5th is a Tuesday, we can deduce that our example day is a Sunday.

Conclusion

The good news: Halloween 2098 will be on a Friday! Thanks to the Doomsday algorithm, we can look forward to a crazy Halloweekend in our retirement homes. To find the weekday of any date, we can follow these steps:
1. Memorize weekday numbers
2. Calculate the century's anchor date
3. Calculate the year's anchor date
4. Find the nearest known date to your input date
5. Find the difference between the two dates
6. Based on the difference, calculate the day of the week

The Doomsday algorithm is one of many methods that use math to determine the day of the week. Though we no longer need mental algorithms to calculate the day of the week, the Doomsday algorithm reveals the intricate structure of the Gregorian calendar, another way that math enriches our everyday lives.
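Putting the six steps together, here is a compact Python sketch of the whole method. It is our own implementation, not the authors' code: the century anchors and known dates come from the tables above, and the January anchor (the 3rd, or the 4th in a leap year) is the standard extra doomsday date that the table does not list.

```python
# A small day-of-week calculator following the Doomsday steps described above (years 1500-2599).
CENTURY_ANCHOR = {15: 3, 16: 2, 17: 0, 18: 5, 19: 3, 20: 2, 21: 0, 22: 5, 23: 3, 24: 2, 25: 0}
WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def doomsday(year):
    """Weekday number of the year's 'doomsday' (the weekday all known dates fall on)."""
    n = year % 100
    offset = (n // 12 + n % 12 + (n % 12) // 4) % 7
    return (CENTURY_ANCHOR[year // 100] + offset) % 7

def day_of_week(year, month, day):
    # Known dates from the table above; January 3 (4 in leap years) is the
    # standard extra anchor, added here as an assumption for completeness.
    known = {1: 3, 2: 28, 3: 7, 4: 4, 5: 9, 6: 6, 7: 11,
             8: 8, 9: 5, 10: 10, 11: 7, 12: 12}
    if is_leap(year):
        known[1], known[2] = 4, 29
    return WEEKDAYS[(doomsday(year) + (day - known[month])) % 7]

print(day_of_week(1967, 9, 3))    # Sunday, as in the worked example
print(day_of_week(2098, 10, 31))  # Friday: Halloween 2098
```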

References

[1] Konstantin Bikos. Doomsday Rule. https://www.timeanddate.com/date/doomsday-rule.html. Feb. 2022 (cited on page 53).

[2] Jamie Ekness. Doomsday Algorithm. https://web.williams.edu/Mathematics/sjmiller/public_html/hudson/Ekness____Doomsday%20Presentation.pdf. Feb. 2022 (cited on page 52).

[3] The Editors of Encyclopaedia Britannica. Gregorian calendar. https://www.britannica.com/topic/Gregorian-calendar. Feb. 2022 (cited on page 52).

[4] S.W. Graham. http://people.se.cmich.edu/graha1sw/pub/doomsday/doomsday.pdf. 1995 (cited on page 52).

[5] Various. Doomsday Rule. https://en.wikipedia.org/wiki/Doomsday_rule (cited on pages 52, 53).




5.3 Using Schrödinger's Equation to Calculate the Position of an Electron in 4–dimensional Space
By Luke Lamitina '22 and Andrew Porco '22

Background

The Schrödinger equation is a linear partial differential equation that governs the wave function of a quantum-mechanical system. The equation describes the fluctuation over time of a wave function, the quantum-mechanical characterization of an isolated physical system. The idea that matter could behave like waves emerged in 1924, first proposed by French physicist Louis de Broglie. It later became known that the wave behavior of matter depends on the mass of the system; in particular, the wavelength of the matter wave is inversely proportional to the mass. For larger systems, or collections of particles, such as humans, this wave behavior has no observable effect. But for systems with much smaller masses, such as electrons, this wave function has vast implications. For this paper, we will be focusing on the wave function of an electron.

One way to imagine the wave function is as an enclosed wave around the particle. For the wave function to be stable, there must be an integer number of waves; otherwise the function collapses on itself, as seen in Figure 1. Interestingly, this trait of wave functions results in what are called the energy levels of atoms. In other words, electrons can only exist at certain distances from the nucleus where their wave functions have an integer number of waves. To calculate the specific nature of a particle's wave function we can use what is called the Schrödinger Equation.

Figure 1: A particle's wave function must have an integer number of waves for it not to collapse on itself.

Method

To calculate the exact wave function of an electron we can use Schrödinger's time-dependent equation, as seen in Equation 1:

H(t) |ψ(t)⟩ = iħ (∂/∂t) |ψ(t)⟩.   (1)

H(t) describes what is called the Hamiltonian Operator. As seen in Equation 2, this is a set of mathematical operations that describes the interactions of the state of the complete system. In other words, the Hamiltonian Operator describes the total energy of the system, in our case an electron:

H_operator = −(ħ²/2m) ∂²/∂x² + V(x).   (2)

Figure 2: Energy density model of six electrons [1].

Conclusion

We cannot conclude much just from our solution, but as proven by Max Born in 1928, if we square our function it yields a new function, based on probabilities, that describes the location of our electron in 4-dimensional space. Visually, this squared equation produces an energy-density model where the densest areas have the highest probability of containing the electron (Figure 2). This fact of not knowing exactly where the particle is, but rather where it is most likely to be, gives rise to many new questions. As was famously proposed by Schrödinger himself, imagine there is a cat in a sealed box. Imagine an outside observer knows nothing about what is happening in the box. Lastly, there is a vial of poison in the box that is going to break at any point in time. To the outside observer, there is no way of knowing if the cat is alive or dead until the box is opened. This is similar to our energy-density model because we never know exactly where the particle is until we actually test it. This notion of probabilities in Quantum Mechanics has given rise over the years to ideas such as the Many Worlds interpretation and produced some pretty interesting science fiction.
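To make the Born-rule picture above concrete, here is a small numerical sketch that we added; it is not the authors' calculation. It uses NumPy and a simplified one-dimensional particle-in-a-box potential in natural units (ħ = m = 1): the Hamiltonian H = −(ħ²/2m) d²/dx² + V(x) is discretized on a grid, diagonalized, and the squared ground-state wave function gives a probability density for the particle's position.

```python
# Discretize H = -(hbar^2 / 2m) d^2/dx^2 + V(x) and square the ground state
# to get a position probability density (Born rule), in units hbar = m = 1.
import numpy as np

N = 400                      # grid points
L = 1.0                      # box width
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

V = np.zeros(N)              # infinite square well: V = 0 inside the box

# Kinetic term: second-derivative matrix via central finite differences.
main = np.full(N, -2.0)
off = np.full(N - 1, 1.0)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
H = -0.5 * laplacian + np.diag(V)

energies, states = np.linalg.eigh(H)          # eigenvalues/eigenvectors of H
psi0 = states[:, 0]
psi0 /= np.sqrt(np.sum(psi0**2) * dx)         # normalize so total probability is 1

density = psi0**2                             # |psi|^2: where the particle is likely to be
print("Ground-state energy:", energies[0])    # close to pi^2 / 2 for a unit box
print("Total probability:", np.sum(density) * dx)
```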


References

[1] Quantum Physics Lady. What is wave function collapse? Is it a physical event? 2020. URL: http://www.quantumphysicslady.org/what-is-wave-function-collapse-is-it-a-physical-event/ (cited on page 54).

5.4 Constructing a Very Special Circle through Six Very Special Points (ft. weird angle chasing and magical tornadoes)
By Vivian Loh '24

The following is a proof of Lemma 3.11, "The Nine-Point Circle," in the book EGMO [1].

We claim that there is a circle passing through the 3 blue points AND the 3 red points, like this: (Not all sets of 6 points can have a circle drawn through them; for example, these 6 points do not.)

What the heck even is that diagram? First of all, given a triangle, these three blue lines are the altitudes, AKA the lines from opposite corners forming 90 degree angles with the sides. And these three red points are the midpoints of the sides.



First, we can draw a circle through the vertices of the triangle, like this: This is called the circumcircle of the big triangle. Now we make the following claim:

Claim 1: The reflections of the intersection of the blue lines over the 3 sides lie on the circumcircle.

"Intersection of the blue lines" is just a fancy name for this purple point in the middle. So we just want to show that the yellow triangles, which are just reflections of the purple triangles, also have vertices lying on the circumcircle.

Now we'll do a bit of geometry, using a technique called angle chasing. It is known that if you have 4 points such that the blue angles are equal, then we can draw a circle through these 4 points:



This is because of the Inscribed Angle Theorem, which says that the arc inscribed by an angle has twice the measure of the angle. And both of these angles inscribe the same arc of the circle.

Now back to the original problem. Because the altitudes create right triangles, the blue angles have measures equal to 90 degrees minus the red angles, correct? And since the yellow triangle is just a reflection of the purple triangle, this third angle is also blue:



Now we just use the fact we proved earlier: since these two angles are both blue, there is a circle drawn through the 3 vertices of the triangle and one of the yellow reflection points!

Applying this to the other 2 reflection points, we have shown that all three yellow reflection points lie on the circumcircle! Good news: we're basically done with the geometry.

Claim 2: The reflections of the intersection of the blue lines over the red points also lie on the circumcircle.

This is much easier now that we know Claim 1 is true. When you reflect a triangle over the midpoint of one side, you get a parallelogram. So, the reflection of the original triangle over the midpoint (purple triangle) is just the reflection of the green triangle over the perpendicular bisector of the horizontal segment. And clearly, if the 3rd vertex of the green triangle lies on the circumcircle, then so must the purple triangle's, because of symmetry.



So, the reflections of the intersection of the blue lines over the midpoints of the sides lie on the circumcircle, too, and Claim 2 is proven!

Now for the punchline. Look at this diagram here: the reflections of the intersection of the blue lines over all six pink points are the purple points, which lie on the green circle. Thus the pink segments and their corresponding purple segments are the same length, as shown.

Which means we can "shrink" the green circle down by a factor of two, AT the intersection of the blue lines, almost like a magical tornado-like vortex. The purple points will "travel" to the pink points, and we end up with the following overcomplicated diagram: because every point in the diagram is "mapped" to a point half as far from the "origin", which is the intersection of the blue lines, the purple circle (the circumcircle) "maps" to a circle passing through all six pink points, so indeed, the six points lie on a circle, as desired!!!
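The two claims can also be checked numerically. The short Python sketch below is our own addition (not part of the original write-up): for an arbitrary triangle it computes the circumcenter and the intersection of the altitudes (the orthocenter, i.e. the "intersection of the blue lines"), reflects that point over the three sides and the three midpoints, and verifies that all six reflections are the same distance from the circumcenter.

```python
# Numeric check: reflections of the orthocenter over the sides and over the
# midpoints of an arbitrary triangle all lie on the circumcircle.
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def circumcenter(a, b, c):
    # Solve |x - a|^2 = |x - b|^2 = |x - c|^2 (two linear equations).
    m = 2 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(m, rhs)

def reflect_over_line(p, a, b):
    # Reflect point p over the line through a and b.
    d = (b - a) / np.linalg.norm(b - a)
    foot = a + ((p - a) @ d) * d
    return 2 * foot - p

O = circumcenter(A, B, C)
H = A + B + C - 2 * O            # orthocenter (vector identity with the circumcenter)
R = np.linalg.norm(A - O)        # circumradius

points = [reflect_over_line(H, B, C), reflect_over_line(H, C, A), reflect_over_line(H, A, B),
          (B + C) - H, (C + A) - H, (A + B) - H]   # over the sides, then over the midpoints

print([round(float(np.linalg.norm(p - O)), 6) for p in points], "all equal", round(float(R), 6))
```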

Reference

[1] Evan Chen. Euclidean Geometry in Mathematical Olympiads. MAA Press, 2016 (cited on page 55).



5.5

Fractals: Finding Perimeters and Areas By Eesa Noaman ’22 and Siyuan Zhang ’22 Abstract

A fractal is a geometric figure which contains parts that have a similar statistical character to the whole, making fractals infinitely complex in nature. We attempt to stretch a fractal to its limit and study its nature as the number of iterations approaches infinity. We use the Koch snowflake and the Sierpinski Triangle as examples of such fractals to guide our endeavor, and with them we present how the perimeter can be infinite while the area remains finite. We also attempt to symbolically demonstrate the mathematical calculations in the fractal's limit.

Introduction

Fractals have been around for centuries in the natural environment; the study of them, however, did not start until later in the twentieth century [5]. The geometric structures of fractals are characterized by the repetition of a self-similar figure, referred to as an initiator; such structures can be created by "an infinite generation of an increasing number of smaller and smaller copies of [an initiator]" (see Figure 5.5). Amongst the first mathematically described fractals is the Koch snowflake, introduced by Helge von Koch in 1904 [2].

Figure 1: Generating the Koch snowflake from a triangle [7].

We define the number of iterations of a fractal, n, such that n = 0 is its initiator and any increment of n is a subsequent generation. In the case of Figure 5.5, n = 0 is the top-left triangle, n = 1 is the top-right star-like shape created by the overlapping of two triangles, and n = 3 is the bottom-right shape. We can only produce numerical values for calculations regarding fractals, such as perimeters and areas, when a fractal has a finite number of iterations. Fractals obtained as n → ∞, like the Koch snowflake, are known to have infinite perimeters and finite areas in the limit [5]. In this article, we study how the perimeter and area of a fractal can be demonstrated in the limit.

Perimeter

We see in Figure 5.5 that for every iteration of the Koch snowflake, each side is substituted with 4 new sides of length 1/3 the original length. We let Nn be the total number of sides and Ln be the length of each side, where the number of iterations is n ≥ 1; then

Nn = 4Nn−1 = 3(4^n),   Ln = (1/3)Ln−1 = l/3^n,   (5.1)

where l is the side length of the initiator triangle [4]. With this, we can calculate the perimeter P of the Koch snowflake at iteration n with the equation

Pn = Nn·Ln = 3(4^n) · l/3^n = (4^n/3^(n−1)) l.   (5.2)

We can also rewrite Equation 5.2 as

Pn = 4Nn−1 · (1/3)Ln−1 = (4/3) Pn−1.   (5.3)

As we have established, it is not possible to obtain quantitative perimeters of fractals when n → ∞ with the traditional methods. However, there exists a way to demonstrate the relationships of perimeters in the limit with calculations. In a 2016 research paper, Yaroslav Sergeyev, a Distinguished Professor at the University of Calabria, Italy, created a new numerical system and computational methodology to conduct "a more precise quantitative analysis of the Koch snowflake . . . at infinity" [5]. Sergeyev defines a new numeral, ①, which he calls a "grossone", as "the infinite integer being the number of elements of the set, N, of natural numbers." Sergeyev further reinforces the definition of ① with mathematical expressions in the paper. We reference several instances of the expressions Sergeyev described for the purpose of illustrating how such reinforcement allows practical computation:



0 · ① = ① · 0 = 0,   ①/① = 1,   ①^0 = 1,   ① · ①^(−1) = 1,

(5 + ①^(−3.1)) / ①^(−3.1) = 5①^(3.1) + 1,

|odd numbers ∼ O| = ①/2,   |even numbers ∼ E| = ①/2,   |Z| = 2① + 1.


The extensive mathematical definition of ① ensures that the numerical system and the numeral itself are complete and rational, thus allowing calculations and derivations to be made from the definition. We can now substitute the number of iterations n for ① in Equation 5.2 to study the Koch snowflake in the limit and "execute easily arithmetical operations with infinite numbers" [5]:

P① = N① L① = 3(4^①) · l/3^① = (4^①/3^(①−1)) l.   (5.4)

Thus, by dividing the two infinite numbers P① and P①−1,

P①/P①−1 = [(4^①/3^(①−1)) l] / [(4^(①−1)/3^(①−2)) l] = 4/3,   (5.5)

we obtain a finite number that can also be found in Equation 5.3. Furthermore, calculating the difference of two perimeters provides the following equation:

P① − P①−1 = (4^①/3^(①−1)) l − (4^(①−1)/3^(①−2)) l = (4/3 − 1)(4^(①−1)/3^(①−2)) l,

which is an infinite number. This new numerical system and computational methodology now allows us to accurately calculate results involving infinite numbers and to compare the perimeters of fractals whose numbers of iterations approach infinity.

Area

The construction of a Koch snowflake is based upon a simple principle. Starting with an equilateral triangle, divide each side length into three parts and then use the middle of the three segments as the base of a new equilateral triangle. Upon repeating this process, patterns start to occur. When observing the number of additional triangles, it can be seen that the pattern begins with 3 additional triangles, then 12, then 48. Although not applicable from the initial equilateral triangle to the first iteration of the snowflake, a ratio of 4 appears between each iteration in regard to the number of new triangles. The second pattern that occurs is the area of each new triangle. Each new triangle has 1/9 the area of the prior triangles. This value is derived from the equation for the area of an equilateral triangle:

(√3/4) a^2, where a is the side length.   (5.6)

When calculating the area of the next triangles added, a is replaced with a/3:

(√3/4) (a/3)^2 = (√3/4) (a^2/9).   (5.7)

Comparing the initial equation for an equilateral triangle and the equation formed with a third of the side length, the ratio of 1/9 is seen. Taking the ratio of 1/9 and the increase in the number of triangles going from 3 to 12 to 48 and so on, we can create a function that models the area as follows:

A = (√3/4) s^2 [1 + 3/9 + (3·4)/9^2 + (3·4^2)/9^3 + ...].   (5.8)

Recognizing this equation as a geometric series, the formula to calculate the sum is a/(1 − r), where a is the first term and r is the common ratio. Using 1/3 as the first term and 4/9 as the common ratio,

A = (√3/4) s^2 [1 + (1/3)/(1 − 4/9)] = (2√3/5) s^2 [1].   (5.9)
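To see the infinite-perimeter, finite-area behaviour numerically, here is a short Python sketch of our own (not part of the original article) that iterates the perimeter recurrence (5.3) and the area series (5.8) with initiator side length l = s = 1, and also prints the Sierpinski-triangle area ratio used later in this article.

```python
# Koch snowflake: the perimeter grows by a factor 4/3 each iteration (diverges),
# while the area converges to (2*sqrt(3)/5) * s^2; the Sierpinski ratio (3/4)^n goes to 0.
import math

s = 1.0                                  # side length of the initiator triangle
perimeter = 3 * s                        # P_0
area = math.sqrt(3) / 4 * s**2           # A_0, the initiator's area
new_triangles, piece_area = 3, area / 9  # first generation of added triangles

for n in range(1, 31):
    perimeter *= 4 / 3                   # equation (5.3)
    area += new_triangles * piece_area   # add one more term of the series (5.8)
    new_triangles *= 4
    piece_area /= 9

print("perimeter after 30 iterations:", perimeter)           # grows without bound
print("area after 30 iterations:     ", area)
print("limit 2*sqrt(3)/5 * s^2:      ", 2 * math.sqrt(3) / 5 * s**2)
print("Sierpinski area ratio (3/4)^30:", (3 / 4) ** 30)      # tends to 0
```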

Based on the result in the area-of-a-snowflake equation, the area of a Koch snowflake converges to a single value, unlike its perimeter, which diverges to ∞. In contrast with the Koch snowflake, which gains area with each iteration, the opposite case must be observed too: the Sierpinski Triangle, named after Waclaw Sierpinski, who first described it back in 1915 [3]. The pattern is formed by taking the midpoints of the segments composing the largest triangle, then connecting them to each other. This process creates four triangles within the larger one. The final step is to remove the center triangle of the four created, as can be seen in the figure below.

Figure 2: Initial as well as first four iterations of the Sierpinski Triangle [6].

To calculate the area of the Sierpinski Triangle, a pattern is formulated based on the first few iterations. Going from a full triangle to the first iteration, the area is reduced to 3/4 of the original. In the second iteration, the area reduces to 9/16, and in the third, 27/64. This pattern that defines the area can be rewritten as

An = A0 (3/4)^n,   (5.10)

where n is the iteration and A0 is the initial area. A simple way to represent this equation in the limit is as follows:

A = lim (n → ∞) A0 (3/4)^n = 0.   (5.11)

As n increases towards infinity, the area decreases, approaching a value of zero. This means that the Sierpinski Triangle has an area equivalent to zero. The area of fractals is most easily calculated by finding patterns in terms of the rate of increase or decrease, then applying them to existing methods such as limits and series, before finally solving.

Conclusion

Fractals are extraordinarily complex shapes, and the ones studied here all have an infinite perimeter. While the perimeter behaves the same way for these fractals, the area can vary greatly between them. The simplest way to define the area of a fractal is through finding ratios between iterations and applying them to known mathematical expression types such as series and limits. The Koch snowflake was found to have an infinite perimeter whilst having an area that increases to a value of (2√3/5) s^2, where s is the side length. In contrast, the Sierpinski Triangle also has an infinite perimeter, while its area is expressed as a limit approaching zero. By analyzing the increasing or decreasing nature of a fractal, the patterns created, and then finally solving for a long-run value, the areas of infinitely complex shapes begin to take form in numbers that the human mind can begin to fathom.

References

[1] Fractals. https://slidetodoc.com/fractals-the-koch-snowflake-first-iteration-after-2/. Accessed: 2022-2-28 (cited on page 61).

[2] Helge von Koch. "Sur une courbe continue sans tangente, obtenue par une construction géométrique élémentaire." French. In: Ark. Mat. Astron. Fys. 1 (1904), pages 681–702. ISSN: 03654133 (cited on page 60).

[3] Marianne Parsons. Pascal's Triangle and Modular Exploration: Sierpinski Triangle. http://jwilson.coe.uga.edu/EMAT6680/Parsons/MVP6690/Essay1/sierpinski.html. Accessed: 2022-2-28 (cited on page 61).

[4] Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe. Chaos and Fractals. New York: Springer-Verlag, 1992 (cited on page 60).

[5] Yaroslav D. Sergeyev. "The exact (up to infinitesimals) infinite perimeter of the Koch snowflake and its finite area". In: Communications in Nonlinear Science and Numerical Simulation 31.1 (2016), pages 21–29. ISSN: 1007-5704. DOI: https://doi.org/10.1016/j.cnsns.2015.07.004. URL: https://www.sciencedirect.com/science/article/pii/S1007570415002518 (cited on pages 60, 61).

[6] Sierpiński triangle. https://en.wikipedia.org/wiki/Sierpinski_triangle. Accessed: 2022-2-28 (cited on page 62).

[7] Xavier Snelgrove. File:KochFlake.svg. https://commons.wikimedia.org/wiki/File:KochFlake.svg. Licensed under CC BY-SA 3.0. Accessed 25 February 2022. 2007 (cited on page 60).




5.6


The Fermi Estimate By Jay Simhan ’23 and Kush Bandi ’22 Abstract

Many questions have been proposed which are physically unanswerable with our current technology. Often, scientists and mathematicians have no basis or starting point for how to proceed in answering a certain question. However, having the ability to make educated and justified guesses is extremely significant in nearly all fields involving numbers. These are Fermi Problems, and their solutions are counter-intuitive: the more estimates one makes regarding a certain proposition, the closer one approaches the true answer. The solutions to Fermi Problems involve several calculus-based concepts. Specifically, functions and equations that fluctuate around a certain value, and then converge to that value, exemplify the premise behind why Fermi Problems are answerable. Through our analysis of converging functions representing error in estimations, we exhibit how the nature of educated estimations concentrates to a single value as extra guesses are added. Though most Fermi Problems are generally not provable with current data, those which have been mathematically computed result in values extremely close to the estimates made with minimal real data. These types of accurate estimations can help further humanity's knowledge of abstract concepts we have yet to understand, and can potentially allow us to progress further in all regards.

Introduction

Have you ever wondered how many piano tuners there are in Chicago? Probably not – but how would one go about answering such a proposition? An approximation could be severely incorrect, acutely precise, or anywhere in between. These types of indeterminable problems, which cannot be settled by direct mathematical or scientific deduction, are known as Fermi Problems [3].

Enrico Fermi, after whom these questions are named, was known for his ability to make remarkably accurate approximate calculations with minimal data [3]. He solved these problems by making numerous justified and educated guesses about quantities.

Figure 1: Enrico Fermi [4]

The reason Fermi’s estimations work in these problems is that the approximations of the individual terms are generally close to correct, and the overestimates and underestimates help cancel each other out. Broadly, if bias is excluded, a Fermi calculation that combines several levels of different approximations will continually become more accurate, and will eventually be more accurate than first supposed.

This paper will discuss the theory behind Fermi Problems, famous examples of and solutions to these problems, and finally, the mathematical notation and calculations showing why many Fermi estimates result in accurate determinations of seemingly unanswerable questions.

Methods/Problems

The most noted example of Fermi’s Estimate comes from the question posed above: how many piano tuners are there in Chicago [1]? To take on this broad yet oddly specific inquiry, let us start with something a little easier to grasp. How many people are in Chicago? We don’t know the exact number, but we could assume somewhere between 2 and 3 million, since it is probably less than one-third of the New York population (which is somewhere around 8.5 million). Let’s assume there are 2.5 million people in Chicago. An average household has about 4 people, so we can say there are about 625,000 households in Chicago. Of course, not all households have pianos, so let us assume 1 in 5 houses has a piano. This gives us about 125,000 pianos in Chicago. Pianos need to be tuned about once a year, so we need to find how many people it takes to service all 125,000 pianos per year. Consider a piano tuner who works full time. They could likely tune about 3 pianos a day and work a 5-day week. That is about 15 pianos tuned per week; multiplied by 50 weeks (let us take two off for vacation days, sick days, or other absences), this gives 750 pianos tuned per year. Divide the number of pianos in Chicago that we calculated earlier (125,000) by the number of pianos serviced per year by each piano tuner (750), and we get about 167 piano tuners in Chicago. We have no way of checking whether this is exactly correct, but the goal is to understand the magnitude and proportion of the answer using common sense alone.

To summarize the calculation described above, first pre-calculate the number of pianos a piano tuner could service each year:

3 pianos/day × 5 days/week × 50 weeks = 750 pianos tuned per year

Next, using the calculation above with the other estimates supposed earlier:

2,500,000 × 1/4 (number of households) × 1/5 (households with pianos) × 1/750 (pianos tuned per year) ≈ 167 piano tuners in Chicago

We used a series of parameters increasing in specificity to narrow our estimate into something more realistic and likely to occur. As seen above, the more restrictions we put on the number, the closer to correct (or to the correct magnitude) our estimate became. This phenomenon seems counter-intuitive – how can more numbers result in a more accurate estimate?
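The chain of estimates above can be written out as a few lines of arithmetic. The sketch below is ours; every quantity in it is one of the article’s assumed values, not measured data.

```python
# A minimal sketch of the chain of estimates above; every number is an
# assumption from the article, not measured data.
population       = 2_500_000         # people in Chicago (guess)
household_size   = 4                 # people per household (guess)
piano_fraction   = 1 / 5             # households with a piano (guess)
tunings_per_year = 1                 # each piano tuned about once a year
pianos_per_tuner = 3 * 5 * 50        # 3/day * 5 days/week * 50 weeks = 750

pianos = population / household_size * piano_fraction * tunings_per_year
tuners = pianos / pianos_per_tuner
print(round(pianos), round(tuners))  # ~125000 pianos, ~167 tuners
```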

Figure 2: Positive and Negative Errors Converging to 0 [2]

The curve seen in Figure 2 is known as a decaying sine wave, and it helps represent the errors accumulated while solving a Fermi Problem. Taking the integral from 0 to infinity of the decaying sine wave, we get 0. The individual waves in the curve represent positive or negative error in an estimated guess, and the graph exemplifies that infinite guesses result in an overall error of 0, equating to a hyper-accurate result.

We see this occurrence in calculus as well, in something called a converging series. In such a series, an infinite sum of positive and negative values adds to approximately 0. If we call each of these values an “error” in our estimate, the converging series shows that the more numbers contribute to the overall sum, the more accurately the function approaches its true value. Say we assume 1 million people live in Chicago – an underestimate. Then we assume 6 people live in each household on average – an overestimate. Eventually, as modeled by the series pictured above, these positive and negative errors will even out to an approximation of the correct magnitude and a near-correct value [3]. The ability to balance out errors by increasing the number of parameters seems illogical, but with calculus we can see that Fermi’s Estimate follows a similar pattern [5]. Fermi’s method of understanding bypasses complex mathematical calculations for “unsolvable” scientific questions and instead uses conceptual theories to narrow down guesses to a fairly accurate estimation.
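As a rough illustration of this cancellation (our own sketch, not the authors’ calculation), the simulation below assumes each factor in a Fermi estimate carries an independent, unbiased multiplicative error between 0.5× and 2×; the typical per-factor error of the combined estimate then shrinks toward 1× as more factors are chained together.

```python
import math
import random
import statistics

# Assumption: each factor's multiplicative error is independent and unbiased,
# uniform between 0.5x and 2x in log space. Positive and negative log-errors
# then tend to cancel, so the per-factor (geometric-mean) error shrinks
# toward 1 as more factors are chained together.
def geometric_mean_error(n_factors, rng):
    logs = [rng.uniform(-math.log(2), math.log(2)) for _ in range(n_factors)]
    return math.exp(sum(logs) / n_factors)

rng = random.Random(1)
for n in (1, 4, 16, 64):
    errs = [geometric_mean_error(n, rng) for _ in range(20_000)]
    typical = statistics.median(abs(math.log(e)) for e in errs)
    print(f"{n:3d} factors: typical per-factor error = x{math.exp(typical):.2f}")
```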

Conclusion

While you may not need to know how many piano tuners are in Chicago at any given moment, Fermi’s Estimate can be helpful for any excessively difficult calculation at hand in which data and measurements are not an option. Although Fermi calculations are not acutely precise (since only finitely many estimations are made), this analysis often produces results that are beneficial for many purposes. For example, if you wanted to pursue piano tuning in Chicago, the general estimate of competing businesses may be enough information. If the lack of precision is too risky, the value attained using this estimate at least provides a better understanding of where to look for more precise answers – perhaps our errors were made in the approximate number of pianos in the city, and that is something we could look into further for details. Despite the “unsolvable” nature of Fermi Problems, the ability to calculate a value that would be impossible to obtain with exact mathematical methods is significant because of the accuracy achieved with minimal real data and limited substantial knowledge of the subject. Now, with the knowledge of this fascinating concept, consider a question that could have true environmental impact (if it were possible to plant that many trees): how many trees would need to be planted to lower the average global temperature by one degree?


References

[1] Caroline Chen. How to Solve Any Problems Using Just Common Sense. https://www.nytimes.com/2021/08/31/magazine/fermi-problems.html. Accessed: 2022-02-24. 2021 (cited on page 63).


[2] Erik Cheever. Laplace Transform of Functions. https://lpsa.swarthmore.edu/LaplaceXform/FwdLaplace/LaplaceFuncs.html. Accessed: 2022-02-24. 2021 (cited on page 64).

[3] Fermi Problem. https://en.wikipedia.org/wiki/Fermi_problem. Accessed: 2022-02-22. 2022 (cited on pages 63, 64).

[4] Manhattan Project Spotlight: Enrico Fermi. https://www.atomicheritage.org/article/manhattan-project-spotlight-enrico-fermi. Accessed: 2022-02-24. 2015 (cited on page 63).

[5] Mark D. Normand. Expanded Fermi Solution for Estimating a Complaint’s Probability. https://demonstrations.wolfram.com/ExpandedFermiSolutionForEstimatingA... Accessed: 2022-02-22. 2011 (cited on page 64).

5.7 Sinha, Sayette. Triangular Duel

Triangular Duel
By Vik Sinha ’22 and Alex Sayette ’23

Abstract

The triangular duel problem – where three men (A, B, and C) with three different levels of shooting accuracy (A never misses, B hits 80 percent of the time, and C hits half the time) draw lots to determine who shoots first, second, and third, subsequently taking turns shooting at each other until only one man is left standing – can be solved according to the principles of mathematical counting and probability theory, allowing us to determine an ideal strategy for each player and their odds of winning. Assuming all three men play with the best possible strategy, as outlined in Figure 1 below, the man with the lowest accuracy (C, with a hit rate of 50 percent) has the best odds of survival, at 52.2%, followed by the best shot (A, who never misses), with a chance of survival of 30%. B, with a hit rate of 80%, has the worst odds of winning, with a chance of survival of only 17.8%. These probabilities can be determined according to the addition and multiplication principles of probability.

Introduction

Suppose you and two friends are hosting a backyard barbeque. One thing leads to another, and you find yourself wielding a bottle of ketchup, facing two friends, one with a bottle of yellow mustard and the other with barbeque sauce, both with the intent to fire in order to prove, in a life-or-death all-out condiment war, whose choice is truly superior. In such a situation (not an unlikely picture at all), one must first have a strategy, lest they fall at the hand of a friend recently turned foe.

To demonstrate the scenario, suppose man A (call him Arnold), wielding ketchup, never misses a shot; man B (call him Barney), wielding barbeque sauce, makes 80 percent of his shots; and man C (call him Charlie), wielding mustard, makes half his shots (each man’s condiment weapon of choice is alluded to in Figures 1 and 2). The men decide to draw lots to determine who shoots first, second, and third, taking turns firing at each other until one man is left standing. Finally, each man, on his turn, can target whichever opponent he desires. Assuming that all three men employ their best possible strategy, who has the best chance of survival? Now imagine that instead of condiments, the three men hold pistols. This arguably higher-stakes backyard game is commonly known as the triangular duel problem. In the following paper, we will discuss each man’s best strategy for survival and determine the individual chances of survival for each of the three men.

Solution to the Triangular Duel Problem

By drawing lots, each man has a one-in-three chance of shooting first. Note that it is in Charlie’s best interest to fire into the air on his first turn. If he were to hit Barney, it would be Arnold’s turn, and Charlie would surely be defeated. If Charlie were to hit Arnold, then it would be Barney’s turn, and Charlie would have a 20% chance of survival (as we will see, this is a lower chance of survival than if he chooses to throw away his first shot).

We can use the multiplication principle to determine the probability of any man’s individual path to victory by multiplying the odds listed below each step in Figure 1. This is shown in Figure 2, in which we multiply together each of the probabilities from one point to the next until we reach the final outcome. Notice that as we go further up the tree in Figure 2, the probability of each scenario occurring decreases.


Figure 1: A tree showing the probabilities of every possible outcome

We can then utilize the addition principle to determine the overall probability of any man winning the duel by adding up the probabilities of reaching each of that man’s winning outcomes. Note that adding up Barney’s and Charlie’s paths to victory can be represented by an infinite sum, or the addition of infinitely many possible (and increasingly unlikely) paths. For example, Arnold can win the duel in one of two ways:

1. Arnold shoots first (0.5 chance), eliminating Barney (1.0 chance), then Charlie misses his shot (0.5 chance), and finally Arnold shoots Charlie, as represented by the equation

1/2 × 1 × 1/2 × 1 = 1/4    (5.12)

and as expressed by the value written under the leftmost red branch in Figure 1.

2. Arnold may also win in the scenario where Barney gets the first shot (0.5 chance) and promptly misses Arnold (0.2 chance), allowing Arnold to take out Barney. Charlie then misses his shot (0.5 chance), allowing Arnold to take out Charlie, declaring Arnold the winner. The odds of this path occurring are

1/2 × 1/5 × 1 × 1/2 = 1/20    (5.13)

as shown in Figure 2.

Figure 2: A similar tree showing the probabilities of every possible outcome

Adding up these probabilities, 1/4 + 1/20 = 3/10 = 0.3, which yields the total probability of Arnold winning the duel.

We then move on to Barney’s odds. In order to win, Barney must shoot first (0.5 chance) and successfully shoot Arnold (0.8 chance), since Arnold would surely eliminate him otherwise. Then, Charlie must miss Barney (0.5 chance). Eventually, in order to win, Barney must hit Charlie (0.8 chance), although this may occur after any non-negative integer number of repetitions of Barney missing his shot (0.2 chance) followed by Charlie missing his shot (0.5 chance). To represent Barney’s chances of winning, we can consider the following sum:

(1/2 × 4/5 × 1/2) × 4/5
+ (1/2 × 4/5 × 1/2) × (1/5 × 1/2) × 4/5
+ (1/2 × 4/5 × 1/2) × (1/5 × 1/2)² × 4/5
+ (1/2 × 4/5 × 1/2) × (1/5 × 1/2)³ × 4/5 + · · ·

Factoring out 1/2 × 4/5 × 1/2 × 4/5, we get

1/2 × 4/5 × 1/2 × 4/5 × (1 + 0.1 + 0.01 + 0.001 + · · ·),

which can be written as

1/2 × 4/5 × 1/2 × 4/5 × Σ_{n=0}^{∞} (1/10)^n.

This simplifies to 4/25 × 1.111. . .. Observe that 10/9 = 1.111 . . ., so the total probability that Barney wins the duel is 4/25 × 10/9 = 8/45 ≈ 0.178.

Since all three men’s chances of winning must add to 1, Charlie’s chance of winning must be 1 − 3/10 − 8/45 = 47/90, accounting for a win around 52.2% of the time, while Arnold has a 30% chance of winning and Barney has a 17.8% chance of winning.
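These results can be checked independently. The sketch below (ours, not the authors’) first recomputes the three probabilities exactly, then simulates the duel many times under the strategies described above: a random shooting order drawn by lot, Charlie firing into the air while both rivals live, and everyone else aiming at the most accurate surviving opponent.

```python
from fractions import Fraction as F
import random

# Exact totals from the paths described above.
arnold = F(1, 4) + F(1, 20)                                  # Arnold's two winning paths
barney = F(1, 2) * F(4, 5) * F(1, 2) * F(4, 5) * F(10, 9)    # lead terms x geometric series
charlie = 1 - arnold - barney
print(arnold, barney, charlie)                               # 3/10, 8/45, 47/90

# Monte Carlo check of the same strategies.
ACCURACY = {"Arnold": 1.0, "Barney": 0.8, "Charlie": 0.5}

def play_truel(rng):
    order = list(ACCURACY)
    rng.shuffle(order)                                       # draw lots for shooting order
    alive = set(ACCURACY)
    while len(alive) > 1:
        for shooter in order:
            if shooter not in alive or len(alive) == 1:
                continue
            targets = [p for p in alive if p != shooter]
            if shooter == "Charlie" and len(targets) == 2:
                continue                                     # Charlie throws away his shot
            target = max(targets, key=lambda p: ACCURACY[p])  # aim at the best shot left
            if rng.random() < ACCURACY[shooter]:
                alive.remove(target)
    return alive.pop()

rng = random.Random(0)
trials = 200_000
wins = {name: 0 for name in ACCURACY}
for _ in range(trials):
    wins[play_truel(rng)] += 1
print({name: round(count / trials, 3) for name, count in wins.items()})
# Expected: Arnold ≈ 0.300, Barney ≈ 0.178, Charlie ≈ 0.522
```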


Conclusion

What makes the three-way duel problem so enticing is the fact that, even in a situation shrouded by human decision-making, mathematics leads us toward the best possible strategy for each player. In a group of three friends (certainly not the best of friends) who each decide to participate in a “truel,” one can rest assured that, whatever their shooting accuracy, by following the outlined strategy they are maximizing their odds of survival; any deviation is sure never to increase one’s odds of success.

It turns out that the person with the highest chance of winning is not the person with the highest accuracy, but the person with the lowest accuracy, which is Charlie. Because Arnold is the most accurate of the three, he is also the most targeted, as both Barney and Charlie wish to eliminate him in order to face the weaker opponent in the final round. Similarly, Charlie is the least targeted, as Arnold and Barney will fire at each other until one of them is eliminated. Since Charlie is guaranteed the first shot at his remaining opponent (whoever he may be), he has at least a 50 percent chance of winning the duel.

The biggest takeaway from this problem is that “survival of the fittest” does not always mean “survival of the most powerful.” For example, in a large, free-for-all sports competition, such as dodgeball, or a heated game of Mario Kart, the one most likely to win is not necessarily the one with the highest skill, but rather the one who is least targeted by the group, slipping by unnoticed to take a final stab at victory when the time is right [1].

References

[1] Andrew M. Coleman. Game Theory and its Applications. New York, NY: Psychology Press, 2017 (cited on page 67).

5.8 Wagner-Oke, Anderson-Jussen. All Horses Are The Same Color: Proof by Induction

All Horses Are The Same Color: Proof by Induction
By Neil Wagner-Oke and Jack Anderson-Jussen

Abstract

Are all horses the same color? While some people would be quick to say no, mathematicians might be more hesitant to give a quick answer. Using modern tactics of proof by induction, it may be possible to show that all horses ARE the same color. With a keen knowledge of induction, mathematicians proceed cautiously with this statement. On the surface it may seem crazy to think this is true, but with some mathematical reasoning, it can be “proven” with false logic. This false logic might fly under the radar for some, but with careful analysis we can say, beyond a shadow of a doubt, that all horses are NOT the same color. Let’s start by looking at induction itself, and then see how it falls apart in this case of the horses.

Introduction

Have you ever met someone who doesn’t like brownies? We certainly haven’t, and we would bet that nobody in the world dislikes them either. We could start proving this by saying that we like them, my dad likes them, my cousins all like them, and so on and so on. Unfortunately for us, we can’t ask every single person on the planet, and thus it is nearly impossible for us to prove this claim. We could boil the population down to a sample size of maybe 100 people, which makes the likelihood of them all liking brownies higher, but as this isn’t the whole population, we can’t prove our statement true beyond a reasonable doubt. Problems such as this can be approached using a method of proof known as induction.

Induction

Induction is a technique of mathematical proof. The strength of proof by induction is its ability to prove large sequences of statements. To prove by induction, you first take an arbitrary number n and prove that the statement is true when n is as low as possible; we call this the base case. You then assume the statement is true for n = k, and using this you prove that it is true for n = k + 1; this is the induction step. The principle behind induction can be boiled down to this simplification: if you want to prove that someone can climb an infinite staircase, all you need to do is show that they can climb the first step (n = 1) and that they know how to climb from any step (n = k) to the next step (n = k + 1). If you can prove those two things, they can climb infinitely.

The Horses

The Proof

For the base case, we look at all possible herds of horses that have only one horse. We know that each of these herds only has one color of horse, because it only has one horse, and a horse can only be one color. Now we assume that all herds of size n contain only one color of horse. Finally, we must extend the assumption to all herds of horses that have n + 1 horses. We can do this because we know that the first n horses in a set of n + 1 horses must be the same color, by our assumption. Likewise, the last n horses must be the same color. Therefore the first horse, which is included in the first set of n horses, must be the same color as the rest of the horses. Those horses are also the same color as the horse at position n + 1, since the last n horses are the same color. This is illustrated in Figure 1.

Figure 1: The general case for the horse problem.
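Before turning to what goes wrong, here is a small sketch of our own that makes the hidden assumption explicit: the argument needs the “first n” and “last n” horses of a herd of n + 1 to overlap in at least one horse, so that both groups inherit the same color. Running it previews the failure discussed in the next section.

```python
# Our illustration, not part of the original proof: compute the overlap that
# the inductive step silently relies on.
def overlap(herd):
    first_n, last_n = herd[:-1], herd[1:]
    return set(first_n) & set(last_n)

print(overlap(["h1", "h2", "h3"]))   # {'h2'}  -> the step works for n = 2 -> 3
print(overlap(["h1", "h2"]))         # set()   -> no overlap for n = 1 -> 2
```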

What’s Wrong

This all seems reasonable, following the form of proof by induction, but all horses are not the same color. In addition, using the same method we can also prove other false statements about all members of a group being the same – a second example being that all numbers have the same value [1]. So, what went wrong? The problem with this proof is that in the inductive step we didn’t take into account every possibility. Looking at the transition between n = 1 and n = 2, we see that there is no overlap (Figure 2), which is required for the inductive step. This results in the proof not working for any herd larger than one horse.

Figure 2: The case n = 1, where the argument breaks down.

Conclusion

This poor use of proof by induction shows why we need to be careful when writing proofs. In this case the failure of the proof is obvious, because we can see that the proof “proves” a clearly false statement true. However, this is not always the case. When trying to prove statements that are not clearly true or false, it is vital to make certain that a proof has no flaws. This fun thought problem gives us a great way to experience induction in a way anybody can think about, regardless of mathematical prowess.

References

[1] All horses are the same color. https://en.wikipedia.org/wiki/All_horses_are_the_same_color. Accessed: 2022-02-22. 2022 (cited on page 68).

[2] Laura Pennington. Proof by Induction: Steps and Examples. https://study.com/academy/lesson/proof-by-induction-steps-examples.html. Accessed: 2022-02-22. 2022.

[3] Prof. Sormani. Proof by Induction. http://comet.lehman.cuny.edu/sormani/teaching/induction.html. Accessed: 2022-02-22. 2015.


6. Computer Science and Engineering

Section Header by Sophia Nicholls ’22.

6.1

The Gear Chair
By Jack Anderson-Jussen ’22, Andrius Emerick ’23 and Thomas Harrison ’23

Originally reserved for the upper class – the wealthiest and most powerful individuals, such as kings and queens – wheelchairs eventually became common for those who needed them. But wealth still limits users’ options. A manual wheelchair is affordable, costing between 75 and 250 dollars, but motorized wheelchairs are not; this makes manual wheelchairs the more popular option for many. According to a 2005 study by the Agency for Healthcare Research and Quality (AHRQ), 91% of wheelchair users used manual wheelchairs and only 9.3% used electric wheelchairs. Power wheelchair prices range from $1,500 to $4,000 and can exceed $15,000. In the United States alone, 3.3 million people use wheelchairs, and 2 million of those are elderly. Wheelchairs have been helping users with disabilities access areas and go places that were previously out of reach.

Wheelchair users depend on their chairs for freedom and independence. Many designs, however, lack the leverage that a person might need to conquer a steep hill, a slippery slope, or an uneven surface. Manual wheelchairs typically weigh between 15 lbs. and 50 lbs.; power wheelchairs weigh between 50 lbs. and 250 lbs. These weights alone can make it difficult for individuals with disabilities to navigate with the devices, as it can often be hard to generate the force needed to propel the weight of the chair, plus their own body weight, up an inclined surface. Motorized chairs can help with this by using motors to turn the wheels, but again, the cost of such motorized wheelchairs is dramatically higher than that of a manual wheelchair.

Individuals with disabilities also face environmental limitations; areas that may be hard to navigate in ideal conditions can be nearly impossible in rain or winter months. “I wish I could live a normal life,” one user said when asked about the struggles they face navigating the world [2]. Additionally, it can be hard for wheelchair users to navigate steep hills without tearing up their hands. Individuals who once were able to use a manual wheelchair may no longer be able to due to medical conditions such as arthritis. Manually rolling the chair by the wheels can cause dirty hands, blisters, cramps, and carpal tunnel syndrome, all of which limit one’s ability to use a wheelchair. Carpal tunnel syndrome occurs when a nerve in the hand (the median nerve) is pinched or compressed. It can be caused by repetitive hand motions such as gripping, which is essential for wheelchair users. In fact, up to 73% of wheelchair users with spinal cord injuries suffer from carpal tunnel syndrome. People with carpal tunnel syndrome can experience numbness, pain, tingling, weakness, and lack of coordination, all of which can be major impediments for wheelchair users. Pushing up slopes also requires arm strength, which many users may lack. The older you are, the more susceptible you are to ailments and weakness, and the more likely you are to need a wheelchair; this means that many of the people using wheelchairs may not be able to easily navigate tricky terrain. The Department of Housing and Urban Development estimates that around 40% of the homeless population in the United States is burdened with some form of disability, many of them being wheelchair users or people who would benefit from a wheelchair [1]. Thus it becomes important to create a solution that is inexpensive and accessible to people who don’t have the financial means to buy a $3,500 motorized wheelchair, and for those who would like to stay active.

Individuals who use wheelchairs may find themselves in difficult situations when navigating terrain that isn’t conducive to their disabilities. Modern advancements have been made to address these problems of not being able to navigate steep hills or slippery slopes, but the main solutions are far more expensive than the initial buy-in of a wheelchair; with motorized wheelchairs costing on average 33.33 times more than a manual wheelchair, accessibility to these solutions is limited to those with more money [3]. Therefore a more equitable and affordable solution to this problem is necessary. We hope to create a solution that is inexpensive, can enable wheelchair users to go up and down hills or slopes where traction is an issue, is lightweight and maneuverable, and, most importantly, doesn’t utilize electricity. Our end goal is to enable individuals in wheelchairs to have more freedom in their lives and the places they can go, by giving them the ability to reach places that were previously inaccessible to them.


Figure 6.1: Gear chair prototype

By the end of the year we hope to have a design and bring to fruition a functioning prototype. We hope that this prototype will include a geared system to assist wheelchair users going up and down hills, as well as over difficult terrain. Currently, we are working with a local bike shop, Kindred Cycles, to manufacture an internal gear hub to allow for the implementation of a gear mechanism. In the coming weeks we hope to create a functional prototype using inexpensive materials (as seen in Figure 6.1) so that we can progress product development in a more final and streamlined manner.

References

[1] Michelle Diament. “More Than Two-Fifths Of Homeless Have Disabilities”. In: Disability Scoop (July 2009). URL: https://www.disabilityscoop.com/2009/07/16/homeless-report/4153/ (cited on page 70).

[2] David Oliver. “‘I live a beautiful life’: What wheelchair users wish you knew – and what to stop asking”. In: USA Today (July 2021). URL: https://www.usatoday.com/story/life/health-wellness/2021/07/21/wheelchair-users-talk-disturbing-questions-what-they-wish-you-knew/8017662002/ (cited on page 70).

[3] Margaret Sellars. “How Much do Electric Wheelchairs Cost in 2021?” In: Mobility Deck (Dec. 2020). URL: https://mobilitydeck.com/how-much-do-electric-wheelchairs-cost/ (cited on page 70).


6.2 Bandi. Engineering Lead Portfolio



Engineering Lead Portfolio
By Kush Bandi ’22 & The Giant Diencephalic BrainSTEM Robotics Team

Note from the Editors

Below, the Giant Diencephalic BrainSTEM Robotics Team describes the robot design they used in the FIRST Tech Challenge competition. They won 2nd place at nationals.

Final Design Iterations

When starting the process of integrating all of the subsystems, we decided to go for a modular approach – every subsystem is its own piece inside the robot’s shell. This allowed for the construction of the robot to be quite simple, and we didn’t have to worry about subsystems attaching to each other. That said, with anything that has a simplistic design, a lot of effort is put in behind the scenes.

First, we have the drivetrain. Behind the final design lies a great deal of brainstorming. At the beginning of the season, we wanted to be able to fit within the 13.7-inch space between the barriers and the wall. This drivetrain is 13 inches wide, allowing for quick, easy cycles from the warehouse to the shipping hub. The wheels on this drivetrain were tested many times, but we ultimately decided on four-inch mecanum wheels, which allow the best multi-directional mobility for the challenge. As shown in the image, the motors are placed in the back two-thirds of the robot, leaving an opening for the collector to sit in. Finally, the drivetrain was significantly reinforced with the use of carbon fiber rods throughout the entire design. This is a robust, compact, and easy-to-assemble design.

Next is the collector. The first aspect of this design that allows our robot to excel in collection is its inability to collect more than one piece of freight at a time – the distance from the intake to the storage is set to carry only a single piece. To collect the freight, the system uses surgical tubing that rapidly spins inward to propel the freight into storage. When the time comes to transfer freight into the depositor, a powerful Long Robotics servo uses a spring to rotate the entire collector with little strain on the servo. When the collector is in its highest position, the gate is lifted and the freight is propelled into the depositor by the tubing.

Then, the depositor takes control. When the freight is transferred from the collector into the depositor, it stays inside until the lift is ready to work. When the system is raised, a servo attached to the back of the depositor closes a lid on the box, and another servo flips the entire depositor 180°. Next, a third servo extends a slide out seven inches using a linkage, the maximum distance the system is capable of moving. From this point, the flap that traps the freight in is opened, releasing it into the intended target.

In addition, the capping system is located on the depositor. A fourth and final servo is connected to a hook-shaped piece which holds our Team Shipping Element (TSE), a rectangular prism with netting on the top. Because it is compactly mounted to our depositor-flipping servo, when the depositor is flipped, the TSE is oriented so that it is over the top of the shipping hub. Then, through a series of precise controls, we smoothly and accurately position the TSE above the shipping hub. Finally, the hook is released, and the TSE is placed perfectly on the hub.


Our lift system is imperative to quicker scoring cycles, and to make it as fast as possible, we implemented several strategies to make the entire subsystem significantly more efficient. First, we wired the linear sliders to be cascading, meaning that each layer moves twice as much as the previous one but requires more torque. Then, to reduce this torque, we use an innovative constant-force spring. This spring neutralizes the weight of the lift, reducing its effective weight from 13 lbs to only 4 lbs and allowing us to use much faster motors to raise a lift that would otherwise need a motor of higher torque. Next, a REV touch sensor mounted at the bottom of the lift allows us to recognize when the lift is at its lowest position and localize the motor’s encoders. This makes our lifting motion extremely accurate, even over the course of an entire match. Finally, the entire component is made of our signature FR4, an epoxy glass laminate, making the subsystem very rigid and consistent over countless robot runs.
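To see roughly why the spring matters, here is a back-of-the-envelope sketch of ours; only the 13 lb and 4 lb effective weights come from the description above, while the 1-inch spool radius and the 2:1 cascade load factor are illustrative assumptions, not the team’s actual numbers.

```python
# Rough illustration only: assumed spool radius and cascade factor, with the
# 13 lb -> 4 lb effective-weight change taken from the text above.
LB_TO_N = 4.448   # pounds-force to newtons
IN_TO_M = 0.0254  # inches to meters

def spool_torque_nm(effective_weight_lb, spool_radius_in=1.0, cascade_factor=2.0):
    # Cascading sliders double the carriage travel, so the string at the spool
    # sees roughly double the load force (friction ignored).
    force_n = effective_weight_lb * LB_TO_N * cascade_factor
    return force_n * (spool_radius_in * IN_TO_M)

print(f"lift without spring: {spool_torque_nm(13):.2f} N*m of motor torque")
print(f"lift with spring:    {spool_torque_nm(4):.2f} N*m of motor torque")
```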


The turret then utilizes a 180-tooth gear to rotate the lift and depositor through up to 290° of rotation. One main challenge of turning this entire system was the point of rotation. Finding a turntable that had little play yet was also compact was a challenge, but we ultimately discovered an IGUS slew ring, which was donated to our team as part of our sponsorship with IGUS. Mounting the top plate and the gear to this ring allowed extremely little play in a small amount of space, which is essential for lifting the depositor to the highest level of the shipping hub. We also utilize a limit switch to orient the turret at the beginning of every match, allowing for extremely precise movement.

Finally, our carousel system is utilized in both the autonomous and end-game periods. To allow for the maximum amount of surface area possible, our robot has two-inch compliant wheels lined across the full front side of the robot. Each wheel has a gear above it, allowing for the transfer of power. There are also two layers of these wheels, compensating for possible variation in carousel height. Our system is also motor-driven, allowing for precise tuning to find the optimal speed to turn the carousel without knocking the ducks over.

6.3 Machine Learning and the Art of Persuasion: Creating Digital Assistant for COVID-19 Vaccine Hesitant Users
By Hannah Chang ’22

Objective

This proposal outlines the basic structure and principles of a “Digital Assistant” that not only responds to requests for vaccine-related information but also tries to persuade people to change their attitudes and behaviors – specifically, to convince the vaccine hesitant to receive the vaccine.

PART I: COVID-19 Digital Assistant Design Overview

What this Digital Assistant sets out to do is to provide factual information on the COVID-19 vaccines and persuade vaccine-hesitant users to reconsider not getting vaccinated. The Digital Assistant will be connected to medical websites to answer basic factual questions from users, including information for people with specific medical conditions. Since many skeptics base their hesitancy on mistrust, the Digital Assistant will also operate with an emphasis on transparency. A disclosure of the vaccine development process will be provided, as well as a scientific explanation of what happens to the body during a vaccination. Statistics on the percentage of the population that is vaccinated and commonly reported reactions to each type of vaccine will also be reported.

Representatives from local communities can volunteer to speak about their experiences taking the vaccine, as specific communities might have particular concerns about it. Many African-Americans distrust the medical system because of the racism embedded in it, leading to suspicion about COVID vaccines [5]. Therefore, vaccinated people should share their experiences to convey the safety and effectiveness of vaccines, which are critical for survival. A list of vaccinated public officials, including politicians and religious figures, will also be displayed. Members of Hispanic communities reported that there is not enough information about the COVID-19 vaccination in Spanish [5]. To ensure that all communities are receiving valuable information, this Digital Assistant will be offered in multiple languages. In addition to these basic functions, the Digital Assistant will also have three specific features, discussed below.

Feature 1: Providing Personal Stories

Another powerful method of persuasion is uniting an idea with emotion. For example, an empirical study with experienced judges and attorneys showed that stories which evoked emotional responses actually created more credibility for the legal claims being made, which in turn created empathy in their judicial thinking and decision making, affecting their rulings and decisions [4].

The government can affect public sentiment toward vaccination through an app that provides accounts of personal experiences with COVID-19, the vaccination process, and reactions to vaccination, both positive and negative, reported by people of various ages, races, occupations, locations, and political ideologies. This way, users can gain insight from people in the same community who may have had the same concerns that they currently have. This feature can be presented in text, audio, or video format in a casual manner to create the most authenticity.

Feature 2: Resolving Misconceptions

With the current usage of social media platforms, people have been creating and sharing massive amounts of information – fake or real – about COVID-19 vaccines. Vaccine skeptics reported that one reason for their hesitancy about getting vaccinated is that they are unable to identify which information is correct. Conspiratorial thinking is a major contributor to vaccine skeptics’ hesitancy, as it can provide comfort and stand as “a way to get one’s bearings during a rapid change in the culture or the economy, by providing narratives that bring order” [5]. One way to counter false information and misconceptions about COVID vaccination in online resources is to use the AI Digital Assistant. When the user reads an article related to the COVID-19 vaccines, the Digital Assistant can scan the text and give a pop-up notification if it detects false information such as incorrect statistics or conspiracy theories; it can also provide information approved by governmental health agencies, with source references. Through the Digital Assistant, people, including vaccine skeptics, will be able to weigh their decisions based on factual information and will be less likely to be dissuaded from vaccination.

Machine learning classification methods will be employed to enable the Digital Assistant to distinguish between factual and fake information. I will first compile a dataset that includes a number of public articles, posts, and chat threads from Google and popular social media platforms (i.e., Instagram, Facebook, and Reddit) that contain incorrect information about COVID-19 vaccinations. Unwanted variables such as URL, authors, usernames, date posted, and category will be filtered out, and the format and structure of these articles will be adjusted to maintain consistency. I will then extract linguistic features (e.g., word sentiment, percentage of stop words, informal language, and certain keywords relating to well-known vaccine myths) using the Linguistic Inquiry and Word Count (LIWC2015) software. Up to 90 features will be extracted from each text, and each text will be classified into one of the categories of psychological impact. These input features will then be used to train machine-learning models. Each dataset will be divided into training and testing sets with a 70/30 split, and each set will have a similar distribution of articles, posts, and threads, with each shuffled to ensure a fair allocation of false and true information across the training and testing instances. Since these models will be more complex in nature, I will use more of the data in training for cross-validation. To build the classifier, I will use an ensemble of methods including logistic regression, random forest (RF), and multilayer perceptron (MLP) learning models. Logistic regression will be used to classify fake/true information because the text is being classified from a wide feature set into binary classes. Since the features are high-dimensional and represent different categories (calculated from LIWC2015), I will also use multi-layer decision trees (RF) and MLPs. Furthermore, RF has a lower error rate in comparison to other models, due to low correlation among trees. Each model will be trained multiple times with different sets of parameters using grid search, to optimize the model and to prevent over-fitting or under-fitting the data.
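A minimal sketch of this training pipeline is given below, using scikit-learn. The LIWC feature extraction is proprietary, so the feature matrix and labels here are random placeholders, and the parameter grid is only an example, not the proposal’s final configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 90))           # stand-in for the ~90 LIWC2015 features
y = rng.integers(0, 2, size=500)         # stand-in for fake (0) / true (1) labels

# 70/30 split, shuffled and stratified so fake and true items are fairly allocated.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, shuffle=True, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
    ],
    voting="soft",
)
# Grid search with cross-validation helps avoid over- or under-fitting.
grid = GridSearchCV(ensemble, {"rf__max_depth": [None, 10, 20]}, cv=5)
grid.fit(X_tr, y_tr)
print("held-out accuracy:", grid.score(X_te, y_te))
```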


Feature 3: Scary COVID Statistics

The usage of fear is not a new method of persuasion in public health; many medical advertisements use this method, emphasizing the potential health risks individuals might experience if they do not adopt the message’s health recommendations. The third feature of the Digital Assistant focuses on the dangers of COVID for all groups of people and the vaccine’s potential benefit in reducing the rate of death. The user will provide information such as age, specific health conditions, and location to determine a personalized COVID risk score before and after vaccination. Two scores will be calculated: risk of contraction and risk of death.

Again, machine learning classification methods will be employed to enable the Digital Assistant to formulate the two risk calculations based on trends in public health data. The dataset will include medical records of COVID-positive patients with their traits (age, medical history, health conditions, exposure to outbreak environments, etc.). For the development of the score and the ML models, patients will be classified according to their disease severity: non-severe (patients who tested positive for COVID-19 but were neither admitted to the ICU nor died of any cause during their hospital stay), severe (patients who tested positive for COVID-19 and required ICU admission at any stage during the disease), and extremely severe (death from any cause during the hospital stay). Demographic data will be extracted from the records, including age at the time the COVID-19 test was conducted, sex, weight, height, and body mass index (BMI), as well as specific health conditions – specifically, substance use (nicotine, alcohol, drugs), cardiovascular diseases, pulmonary diseases, type II diabetes, cancer, etc. The data will then be randomly divided 80/20 into training and test sets, stratified for severe and non-severe cases in each set. The total score will be calculated from all of these parameters for each patient in the training and test sets. For the training set, I will use the local regression fitting function (LOESS) to plot parameters against severity. The probability of a severe outcome can be determined by fitting the total multivariable score to the observed outcome using logistic regression. The area under the receiver operating characteristic curve (AUROC) can be used to evaluate these classification models and quantify, specifically, the predictive value of the score. For building the classification models, I will use an ensemble combination of models including logistic regression, decision tree induction (DTI) using a variation of classification and regression trees (CART), random forest (RF), k-nearest neighbors (kNN), and multilayer perceptrons (MLP). Parameter values will be scaled to the range between 0 and 1, except for DTI and RF, where the original parameter values will be used. All models will be trained using repeated k-fold cross-validation for model evaluation and revision.
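The model-evaluation step might look roughly like the following sketch (ours, with placeholder patient data); scikit-learn’s DecisionTreeClassifier stands in for the CART-based DTI, and inputs are scaled to 0–1 for every model except the tree-based ones, as described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))            # stand-in for patient parameters (age, BMI, ...)
y = rng.integers(0, 2, size=400)          # 0 = non-severe, 1 = severe

models = {
    "logistic": make_pipeline(MinMaxScaler(), LogisticRegression(max_iter=1000)),
    "cart":     DecisionTreeClassifier(random_state=0),      # original-scale inputs
    "rf":       RandomForestClassifier(n_estimators=200, random_state=0),
    "knn":      make_pipeline(MinMaxScaler(), KNeighborsClassifier()),
    "mlp":      make_pipeline(MinMaxScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUROC = {auc.mean():.2f} ± {auc.std():.2f}")
```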

Adaptive Trial Design: Determining the Most Effective Methods

A trial will be developed to find out which of the multiple persuasion tactics are effective in getting the public vaccinated. I will first recruit trial participants of both genders and of various races, ages, occupations, and religions, then randomize the participants into a number of groups in which the participants may share one characteristic, e.g., race or religion, with the other characteristics randomized. I will subject each group to a persuasion tactic and then determine the best and worst persuasive features based on users’ responses over time. One round of the trial will last 4 weeks. At the end of each round, the participants will fill out a short survey asking if they are willing to get vaccinated, and a sentiment analysis will be conducted on the survey responses to determine whether the persuasion tactic has moved the sentiment of the participants toward getting vaccinated. In the next round of the trial, a bigger portion of the participants will be assigned to the methods with higher effectiveness as determined in the previous round, and their sentiments will be measured again after 4 weeks. Methods with lower effectiveness will be dropped over several iterations, and the methods that are particularly effective will emerge. User responses can also be grouped by age, race, or political ideology to examine the effect of these characteristics on the trends.
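A simplified sketch of the adaptive reallocation idea follows; the tactic names and response rates are invented placeholders, and the rule for dropping and reweighting arms is only one possible choice, not the proposal’s exact procedure.

```python
import random

# Placeholder tactics and response rates, for illustration only.
rng = random.Random(0)
true_effect = {"stories": 0.35, "myth_busting": 0.25, "risk_stats": 0.15}
share = {t: 1 / len(true_effect) for t in true_effect}       # round 1: equal allocation
participants_per_round = 300

for round_no in range(1, 5):
    observed = {}
    for tactic, frac in share.items():
        n = max(1, int(frac * participants_per_round))
        wins = sum(rng.random() < true_effect[tactic] for _ in range(n))
        observed[tactic] = wins / n                           # post-round survey result
    # Reallocate proportionally to observed success; drop clearly weaker arms.
    best = max(observed.values())
    kept = {t: r for t, r in observed.items() if r >= 0.5 * best}
    total = sum(kept.values()) or 1.0
    share = {t: r / total for t, r in kept.items()}
    print(round_no, {t: round(s, 2) for t, s in share.items()})
```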

References

[1] David Kestenbaum. The Elephant in the Zoom. URL: www.thisamericanlife.org/736/the-herd/act-two-5.

[2] Jianchang Lin et al. A General Overview of Adaptive Randomization Design for Clinical Trials. URL: www.hilarispublisher.com/open-access/a-general-overview-of-adaptive-randomization-design-for-clinical-trials-2155-6180-1000294.pdf.


6.4 Myers, Hopper. Condensed Design Proposal AE&D [3]

[4]

6.4

Lauren Neeragard and Hannah Fingerhut. Vaccine Wariness Dips; Obstacles Remain. May 2021. URL: digital . olivesoftware . com / Olive / ODN / PhiladelphiaInquirer/shared/. James Sudakow. A Good Story Is Always Far More Persuasive Than Facts and Figures. Aug. 2017. URL : www . inc . com / james - sudakow / why -

77 a- good- story- is- far- more- persuasivethan-facts.html (cited on page 74).

[5]

Sabrina Tavernise. Vaccine Skepticism Was Viewed as a Knowledge Problem. It’s Actually About Gut Beliefs. Apr. 2021. URL: www . nytimes . com / 2021 / 04 / 29 / us / vaccine - skepticism beliefs.html (cited on pages 74, 75).

Condensed Design Proposal AE&D By Dan Myers ’22 and Miranda Hopper ’22 On August 11th, 2021, near Orlando, FL, a woman named Shamaya Lynn was shot during a Zoom call in her home. Someone on the call called 911 and reported that they had seen a toddler before hearing a loud noise and witnessing Shamaya falling backwards out of her chair. Investigators concluded that Shamaya’s young daughter had gotten ahold of an unsecured handgun and discharged it, fatally wounding her [3]. On April 17th, 2021, in Baker, LA, an unsupervised three-year-old got ahold of their father’s newly purchased semi-automatic pistol (purchased for self-defense) while he was making lunch in the other room. The child was pronounced dead at the scene, having pulled the trigger, fatally shooting themself [1]. Teenagers aged 14-17 were the largest group affected, followed by children aged five and under. Seven in ten of these unintentional shootings occured in the child’s home. (https://everytownresearch.org/report/notanaccident/) In 2017 and 2020, there was a noticeable surge in unintentional child shootings. This aligns with the surge in the number of guns in the United States experienced in 2017. Of the shootings where information on the gun used could be obtained, 85% of the incidents involved a handgun, rifles and shotguns made up 7%, and assault-style rifles contributed less than 1% [2]. So where does the problem lie? It’s a complicated answer. Gun culture in America isn’t going away anytime soon, or likely ever. What can we do to reduce the rate of incidents in a country so obsessed with their firearms? To attempt to come to a conclusion, you first have to look at what’s already been done. According to the American Academy for Pediatrics (AAP), the safest home for a child is a home without guns. AAP states that “the most effective way to prevent unintentional gun injuries, suicide, and homicide to children and adolescents, research shows, is the absence of guns from homes and communities” [4]. This, of course, isn’t a realistic solution. Some of the most common safety measures in households that do have guns are gun safes/lockboxes, gun trigger locks, and ammunition lockboxes. It is also recommended that guns are not just hidden but properly stored

It is also recommended that guns be not just hidden but properly stored and locked while unloaded. In addition, it is suggested that ammunition be stored separately. The AAP advises gun owners to keep the safety catch in place at all times and to not allow children to handle any weapons, no matter whether the gun is unloaded, the safety is on, etc. [4]. Maintaining gun safety in its multifaceted forms is one task the individual home or gun owner can accomplish, but children tend to socialize in and around environments outside of their own home. As a result, their safety cannot be guaranteed, so the AAP recommends parents determine whether there are unlocked guns in a house or building prior to allowing their child to visit. More than a third of all unintentional shootings of children take place in the homes of their friends, neighbors, or relatives. Lastly, and arguably most importantly, the AAP strongly urges parents to educate their children about gun safety and inform them that guns are a serious danger to them if mishandled. Parents must remind their children that what they see in media such as movies is not reality, and that firearms are weapons with very real dangers.

While there is no valid reason for not owning a gun safe, many people across the United States present one particular reason for not purchasing one, which can be summarized as wasting time in a self-defense situation. Generally speaking, gun owners do not want to be fiddling with a lock or keypad once, or even twice if they have a separate ammunition safe, in a life-or-death situation. There is also the problem of cost. Neither a gun safe nor an ammunition safe is cheap. Gun owners of lower income may not feel the need, or have the means, to invest in both or either.

The bottom line is that children should not have to be in danger over a parent’s or an adult’s choice to own a personal firearm. The fact that there are statistics specifically on children being involved in accidents with firearms should speak for itself. The overarching hope is to reduce the incidents of young children, typically from six months old to roughly eight years old, unintentionally harming themselves or others. In the United States, roughly five percent of annual gun deaths are unintentional shootings by individuals under the age of 18. Roughly 91 percent of these victims are also under 18, making for a tragedy that is uniquely American [2]. The solution has to be functionally childproof, yet still fulfill and address the wants and concerns of the adult who owns the gun. It must be uninteresting and challenging enough that it is difficult for a child to unlock it, but simple enough that an adult could quickly remove it if need be.

References

[1] The Advocate. Toddler gets ahold of gun, dies in accidental shooting while dad was making lunch. Apr. 2021. URL: https://www.theadvocate.com/baton_rouge/news/crime_police/article_6870e4aa-97c2-11eb-9942-5bc77d4fa12f.html (cited on page 77).

[2] Everytown. Preventable Tragedies. Aug. 2021. URL: https://everytownresearch.org/report/notanaccident/ (cited on pages 77, 78).

[3] NBC News. Toddler shoots, kills mom during video call after finding gun, Florida police say. Aug. 2021. URL: https://www.nbcnews.com/news/us-news/toddler-shoots-kills-mom-during-video-call-after-finding-gun-n1276722 (cited on page 77).

[4] American Academy of Pediatrics. URL: https://www.aap.org/ (cited on page 77).


6.5 Stern. An Ethical Future for Tech

An Ethical Future for Tech
By Julia Stern ’22

The past decade has been fraught with cases of race-based algorithmic bias, lawsuits over reckless data collection [10], and job losses caused by automation [4] – artificial intelligence has brought a new wave of uncertainty to the world of tech, and its rapid expansion will amplify these risks in coming years. There have been many efforts to combat the ethical risks of technological development, but few have been successful. The only way to create fair, safe, and equitable AI systems is to overhaul the values and practices of the tech world. A brighter future for tech must involve these key efforts.

Ethics-Sensitive Computing

Ethics should be a priority, not an afterthought – this belief will drive ethics-sensitive computing. It is a conceptual practice more than a concrete one, and it hinges on the widespread recognition of ethics in the tech world, in both formal and informal spheres.

Any computer science education lacking an ethical component is incomplete [7]. In middle-school and high-school computer science curricula, there must be an underlying emphasis on human-centered design. At every level of education, students must recognize that technological development does not exist in a vacuum – transformation in the digital world leads to change in the physical world. Ethics education becomes most critical at the university level, especially for students who intend to work in technology and related sectors. Computer science programs are the optimal spot for the introduction of ethics-sensitive computing, as students have yet to encounter the economic pressures of the tech world. It is a simple step forward. Students must enroll in an ethics course as part of their degree requirements – these courses will specifically target the ethics of computing – and a new generation of computer scientists, engineers, data scientists, etc. will be familiar with ethics-sensitive computing before they enter the workforce. To a modest extent, this effort will encourage self-correction among the future leaders of tech.

For ethics-sensitive computing to work, however, having employees dedicated to the ethical concerns of technology is necessary. For smaller companies, this initiative could be an Ethics Specialist; for larger companies, it could be an Ethics Team or an entire department devoted to ethics. Similar to medical professionals, lawyers, or teachers, these individuals must be licensed, a process that involves rigorous initial preparation by an outside institution and yearly training. The goal of an Ethics Specialist or Team is to actively promote ethical behavior rather than reprimand harmful behavior. Some of their responsibilities are as follows.

1. Ethics Specialists/Teams carefully review, test, and assess a new technology before its release, and they are able to make suggestions or raise concerns without pressure from other parts of the company. This action also facilitates accountability in artificial intelligence. If ethical concerns about a new technology are ignored, then the culpable entity is clearer – the people who ignored them. If an Ethics Specialist or Team fails to predict ethical consequences, however, then accountability remains a tricky undertaking.

2. Ethics Specialists/Teams oversee the selection and cleaning of data, and after the completion of a new product or technology, they use standardized anti-bias metrics to assess its “fairness factor” and identify algorithmic flaws (a simple example of such a metric is sketched after this list). Another component of ethics-sensitive computing is the identification and deconstruction of bias, a joint effort between tech employees and ethics experts. Simply put, “labels matter” [5]. Their biases are often hidden, impossible to detect when designing an algorithm, but clear as day when the algorithm functions. Artificial intelligence is dependent on data collection, so ethics experts must assess data before it trains machines.

3. Ethics Specialists/Teams must reflect diversity and inclusion on multiple levels. This commitment means diversity of demographic features like race and gender, but it also requires a diversity of background, experience, and knowledge. It is important to note that ethics experts can come from any background – professionals from non-tech fields can perform this job, namely academics, lawyers, or mathematicians, but a solid understanding of technology is always necessary. On a multi-person team, it is preferable to achieve a mix of different professionals. Thorough and legitimate diversity ensures the proper execution of ethics-sensitive computing.

4. Ethics Specialists/Teams promote the practice of ethics-sensitive computing. They are responsible for the continued education of company employees, and they ensure that ethics is a priority at every level of a company. They are proactive, not reactive. They are in tune with the community around them, and they think big picture, grasping the full impact of their work in different communities.

Education and supervision are two factors that will further ethics-sensitive computing. Bottom-up change is the first step towards equitable and human-centered tech.
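As one concrete (and deliberately simple) example of the kind of standardized anti-bias metric mentioned in item 2, the sketch below computes a demographic-parity gap from hypothetical model decisions; a real audit would combine several such metrics (equalized odds, calibration, and so on) on real model outputs.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = defaultdict(list)
    for pred, group in zip(predictions, groups):
        rates[group].append(pred)
    positive_rate = {g: sum(p) / len(p) for g, p in rates.items()}
    return max(positive_rate.values()) - min(positive_rate.values()), positive_rate

preds  = [1, 0, 1, 1, 0, 1, 0, 0]        # hypothetical model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, "gap =", gap)               # {'a': 0.75, 'b': 0.25} gap = 0.5
```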

Rethinking FAT

As expert Dr. Aarti Singh explained in a guest lecture, FAT stands for the three factors that ethics should always consider: fairness, accountability, and transparency. FAT [8] is a good starting point for the creation of an industry-wide ethics code, but to further protect consumers from hidden abuse, I propose FAAT. The additional A stands for Autonomy.

Autonomous consumers can act in accordance with their beliefs, desires, and morals, and they are free from the control of outside influences. The tech field must honor people’s right to choose; citizens can choose to safeguard their data, to prefer the ‘un-optimized, un-efficient’ [7] option, to trust humans over machines, etc., even if these decisions are technically “irrational.” Coders and corporations must acknowledge that they don’t know what’s best for individuals and that to believe they do is both dangerous and arrogant. Even the addictive, deceptive designs of social media networks undermine the autonomy of the individual. These practices are flagrant abuses of power. Human-centered technology strives to improve standards of living, not diminish them. Respecting the autonomy of consumers is essential for ethical computing, and it complements the values already set forth by FAT. FAAT should inspire the development of an industry standard of ethics.

An Industry Standard of Ethical Conduct

There have been previous attempts to create “honor codes” [2] for the tech industry, but they have largely failed for one reason: “self-regulation is not enough” [1]. This is the blaring reality of Big Tech – as long as ethical concerns are caught in the crossfire between economic and social interests, pressure can never derive from the internal workings of Big Tech. But if pressure comes from outside sources, mainly consumers and institutions concerned with public well-being, then an industry-wide “honor code” could work. A legitimate, trustworthy, and neutral organization must establish an industry standard of ethics for the tech field. We socially regulate other institutions – why not technology? We have expectations for the moral conduct of doctors, lawyers, and teachers – why not apply the same standards to the leaders of tech? If an ethics code reaches sufficient recognition and validity, it is likely that technology professionals will respect it even without legal reinforcement. With an established benchmark for ethical conduct, regulation becomes more straightforward, as individuals can be suspended, fined, or fired if they breach the expectations for ethical tech.

Tech giants continue to evade accountability for their abuse of privacy, and as artificial intelligence expands tech’s reliance on data collection [3], the asymmetry that already exists in consumer-corporation power relations will grow. Consumers have to restructure their engagement with Big Tech. Social norms are a powerful tool, and we can use them to bolster the ethical expectations established by a formal code. An ethics code will only work if citizens recognize that deception, data mishandling, and privacy abuse are never acceptable [9]. As we march towards a data economy [6], it is necessary that consumers regain the digital power hidden behind the elusive doors of Silicon Valley.

References

[1] J. Buolamwini. Announcing the Sunset of the Safe Face Pledge. Feb. 2021. URL: https://medium.com/@Joy.Buolamwini/announcing-the-sunset-of-the-safe-face-pledge-36e6ea9e0dc5 (cited on page 79).

[2] J. Buolamwini. The Algorithmic Justice League, Joy Buolamwini. URL: https://www.ajl.org/about (cited on page 79).

[3] M. Burgess. What is the Internet of Things? WIRED explains. Feb. 2018. URL: https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot (cited on page 79).

[4] W. Knight. China Wants to Replace Millions of Workers with Robots. Dec. 2015. URL: https://www.technologyreview.com/2015/12/07/164672/china-wants-to-replace-millions-of-workers-with-robots/ (cited on page 78).

[5] R. Benjamin. Assessing risk, automating racism. 2019. URL: https://winchesterthurston.myschoolapp.com/ftpimages/1531/download/download_6420554.pdf (cited on page 79).

[6] Gabe Scelta. Data Economy: Radical transformation or dystopia? 2019. URL: https://www.un.org/development/desa/dpad/wp-content/uploads/sites/45/publication/FTQ_1_Jan_2019.pdf (cited on page 79).

[7] J. Shaw. Artificial Intelligence and Ethics. Jan. 2019. URL: https://www.harvardmagazine.com/2019/01/artificial-intelligence-limitations (cited on pages 78, 79).

[8] Taken from Dr. Aarti Singh’s lecture (cited on page 79).

[9] C. Véliz. Privacy Matters Because It Empowers Us All. Sept. 2019. URL: https://aeon.co/essays/privacy-matters-because-it-empowers-us-all (cited on page 79).

[10] Z. Wichter. 2 Days, 10 Hours, 600 Questions: What Happened When Mark Zuckerberg Went to Washington. Apr. 2018. URL: https://www.nytimes.com/2018/04/12/technology/mark-zuckerberg-testimony.html (cited on page 78).



All outstanding work, in art as well as in science, results from immense zeal applied to a great idea. – Santiago Ramón y Cajal


