Broad Street Scientific
Volume 4 | 2014-2015

The North Carolina School of Science and Mathematics Journal of Student STEM Research

Table of Contents

Words from the Editors ........ v
Broad Street Scientific Staff ........ vi
A Letter from the Chancellor ........ 1

Essay
Big Bang Theory: The Misconceptions of Our Expanding Universe (Keilah Davis, 2015) ........ 2

Biology and Chemistry
Emerging marine diseases: Variation in response to climate change conditions among strains of Serratia marcescens (Rachel Cohn, 2015) ........ 4
Effects of Ocean Acidification on Calcification of the Estuarine Mud Crab Rithropanopeus harrisii (Steven Tulevech, 2015) ........ 13
Biome Enrichment and Broad-Spectrum Immunization Enhance the Concentration and Range of Specificity of the Rattus Natural Antibody Repertoire (Daniel Ren, 2015) ........ 21
Modeling How Intervention Can Limit the Spread of Ebola Virus (Treena Chaudhuri, NCSSM Online 2016) ........ 30

Physics, Math, Engineering, CompSci
Using van der Waals Heterostructures to Create p-n Junctions in Thin Film Solar Cells (Shreyas Kolavennu, 2015) ........ 37
Novel Synthesis and Characterization of Porous Thin Film ZnCo2O4 for Advanced Photocathodic Applications (Danuh Kim, 2015) ........ 44
A Computational and Statistical Analysis Examining the Impact of Polymers, Orientations, and Structure on Organic Solar Cell Performance using a semi-Empirical Monte Carlo Model (Pranav Kemburu, 2015) ........ 51
A Mathematical Analysis of the Molecular Energy of Cyclopropane at Varying Geometries (Guy Blanc, 2015) ........ 58
Development of Novel Methods for Monitoring Aging of the ATLAS TRT Straws (Rohit Das, 2015) ........ 64
Creating a Hybrid Agent/Grid Model of Contact-Induced Force (Uday Uppal, 2015) ........ 71
Engineering, Programming and Testing the Efficacy of a Novel Single Cell Array (Aaron Sartin, 2015) ........ 78
Number Game (Kevin Chen, 2015; Jay Iyer, 2015; Sandeep Silwal, 2015) ........ 85

Interview
Feature Article: An Interview with Jud Bowman and Taylor Brockman ........ 92



Words from the Editors

Welcome to the Broad Street Scientific: NCSSM's journal of student research in science, technology, engineering, and mathematics. In the fourth edition of the Broad Street Scientific, we aim not only to showcase student research, but also to increase public awareness of the importance of student scientific participation by demonstrating the scientific aptitude of our students to readers both in and outside of the NCSSM community. We hope you enjoy this year's issue.

The theme for this year's volume of Broad Street Scientific is based on an elegant mathematical pattern found in nature – the Fibonacci sequence. The Fibonacci sequence is closely related to the golden ratio. We see it in the spiral of the shell of the Nautilus, branching plants, our own fingers, and especially in flower petals. The flowers used in this edition include Leucanthemum, Rosa, Helianthus, and Cineraria. Each of these species has a petal number equal to a number from the Fibonacci sequence: 1, 1, 2, 3, 5, 8, and so on. We thank the following photographers for allowing us to use their images: Ryan Kaldari, Yannis, Timothy Valentine, Xosé Arsenio Coto, Yuki Sasaki, and Friedrich Böhringer.

We would like to thank the administration, faculty, and staff of NCSSM for the opportunity to pursue our research goals in the science, technology, engineering and mathematics fields. The support for student research at this school is unparalleled by any other high school in the state, and the student body would like to recognize the significance of such an investment in our, and the state's, future. We would like to specifically thank our faculty advisor, Dr. Jonathan Bennett, for his advice and guidance through the fourth edition of the Broad Street Scientific. We would also like to thank our Chancellor, Dr. Todd Roberts, Dean of Science, Dr. Amy Sheck, and Research/Mentorship Coordinator, Dr. Sarah Shoemaker for their active support of this publication. Lastly, the Broad Street Scientific is extremely grateful to NCSSM alumni Jud Bowman and Taylor Brockman for their participation in this year's interview and insight for the next generation of scientist-entrepreneurs.

BroadStreetSci Online

www.ncssm.edu/bss



Broad Street Scientific Staff

Chief Editors

Jenny Wang, 2015 Justin Yang, 2015

Publication Editors

Richard Ong, 2015 Vibha Puri, 2016 Sicheng Zeng, 2016 Chichi Zhu, 2015

Biology Editors

Nimit Desai, 2016 Robert Fisher, 2016 Neeraj Suresh, 2015

Physics Editors

Daniel Lee, 2016 Chase Roycroft, 2016

Chemistry Editors

Abhi Kulgod, 2015 Caroline Liu, 2015 Rishi Sundaresan, 2016

Engineering Editors

Grace Xiong, 2015 Larry Zhang, 2016

Math and Computer Science Editors

Adithya Iyengar, 2015 Sarah Wu, 2016

Webmasters

Abhimanyu Pintoo Deora, 2015 Andrew Spencer, 2016

Faculty Advisor

Dr. Jonathan Bennett



Letter from the Chancellor

"I believe things cannot make themselves impossible." ~ Stephen Hawking

I am proud to introduce the fourth edition of the North Carolina School of Science and Mathematics' (NCSSM) scientific journal, Broad Street Scientific. Each year students at NCSSM conduct significant scientific research, and Broad Street Scientific is a student-led and -produced showcase of some of their best work. Providing students with opportunities to apply their learning through research is not only vitally important in preparing and exciting students to pursue STEM degrees and careers after high school, but essential to encouraging innovative thinking that allows students to scientifically address major challenges and problems we face in the world today and will face in the future.

Opened in 1980, NCSSM was the nation's first public residential high school where students study a specialized curriculum emphasizing science and mathematics. Teaching students to do research and providing them with opportunities to conduct high-level research in biology, chemistry, physics, the applied sciences, math, and the social sciences are critical components of NCSSM's mission to educate academically talented students to become state, national and global leaders in science, technology, engineering and mathematics. Thus, I am thrilled that each year we are increasing the outstanding opportunities NCSSM students have to participate in research.

The works showcased in this publication are examples of the significant research that students conduct each year at NCSSM under the direction of the outstanding faculty at our school and in collaboration with researchers at major universities. For twenty-nine years, NCSSM has showcased student research through our annual Research Symposium each spring and at major research competitions such as the Siemens Competition in Math, Science and Technology, the Intel Science Talent Search, and the International Science and Engineering Fair to name a few. The publication of Broad Street Scientific provides another opportunity to highlight the outstanding research being conducted by students each year at the North Carolina School of Science and Mathematics.

I would like to thank all of the students and faculty involved in producing Broad Street Scientific, particularly faculty sponsor Dr. Jonathan Bennett and senior editors Jenny Wang and Justin Yang. Explore and enjoy!

Sincerely,
Dr. Todd Roberts, Chancellor
North Carolina School of Science and Mathematics



Essay

Big Bang Theory: The Misconceptions of Our Expanding Universe

Keilah Davis

Keilah Davis was selected as the winner of the 2014-2015 Broad Street Scientific Essay Contest. Her award included the opportunity to interview Jud Bowman and Taylor Brockman as part of the Featured Scientist section of the journal.

The "Big Bang" Theory. Upon hearing those words, many think of the popular TV show, but the phrase also refers to the proposed explosion that created our universe. This theory is one of the most popular modern theories for the origin of the universe, and yet it is poorly understood. In essence, the Big Bang Theory describes how the universe has changed over time: objects in space were once much closer together than they are now. One cannot help but wonder, what is the universe expanding into? Does the universe have a center, and are we near it?

The Universe Is Expanding?

Light allows astronomers to make important observations about our universe. When the expected emission spectrum of an object (i.e. a galaxy) is known, astronomers compare emission lines at a specific rest wavelength (λ0) to the observed wavelength (λ) and calculate the redshift (z), or change in wavelength, for those lines. Wavelength and redshift are related by the equation

z = (λ − λ0) / λ0.

Redshift is also related to velocity by the equation

v = cz,

where v is the speed of an object and c is the speed of light. Thus, the velocity of distant galaxies can be measured by measuring their redshifts. When these velocities are plotted against distances on a graph, we see the relationship known as the Hubble Law, named after Edwin Hubble. As shown in Figure 1, the velocities of objects increase as you look farther out in the universe. This was the first clue that the universe was expanding. Recent observations with higher precision have confirmed Hubble's observations and this linear relationship. These observed galaxies are moving away from Earth in every direction.

Figure 1. Hubble's Law. This graph shows the linear relationship between the distances and velocities of studied galaxies. Source: http://firedrake.bu.edu/CC105/2007/hubble.html

Our Unexceptional Place

The cosmological principle is the "assumption that the universe is homogeneous and isotropic" (Freedman, Geller, Kaufmann 697). This means that the universe is the same in every region and direction. Thus, it implies that our location in the universe is in no way unique or special. Due to advances in technology, astronomers have been able to measure the velocities of distant objects from other galaxies. The results show that the same linear relationship between velocity and distance is present in other galaxies. "The expansion of the universe looks the same from the vantage point of any galaxy" (Freedman, Geller, Kaufmann 694). Everything seems to be moving away in all directions. This direct evidence proves that the Hubble Law does not imply we are at the center of the universe. Thus, the question that follows is "does the universe have a center?" One way to think about this question is through the cosmological principle. If every region in the universe is the same, then an observer in every region sees expansion in every direction. Therefore, any region can appear to be the center, but no region is the true center of the universe.

The Edge and Beyond

Finally, we have reached the main focus of this article. If the universe is expanding, then what is it expanding into? It is easy to imagine the universe as a balloon that is being inflated. As the balloon expands, it takes up more space. However, this would suggest that the universe has an edge and that it is expanding into something. Both are erroneous conclusions. Our ability to search through the observable part of the universe is limited by technology. If there is an edge, then we cannot see it. As Dr. Edward L. Wright at UCLA explains, the question of the universe having an edge "involves the external geometry of the object, which can only be measured by an observer outside the object." He states that since we are inside the universe, we can only study the internal geometry of the universe (Wright). The idea here is that our view is limited by our condition in the very universe we wish to observe. If the universe does not have an edge, then the question "what is it expanding into" becomes meaningless. The universe, by definition, "encompasses all of space and time as we know it" ("Foundations of Big Bang Cosmology"). Asking the question "what lies beyond the universe?" is "as meaningless as asking 'What on Earth is north of the North Pole?'" (Freedman, Geller, Kaufmann 694). In this scenario, the North Pole is defined as the north-most point on Earth. Therefore, nothing is outside of the universe. As Dr. Feuerbacher and Dr. Scranton stated, the expansion of the universe is "completely self-contained."
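To make the redshift relations introduced earlier concrete, the short sketch below computes z and the implied recessional velocity in Python. The observed emission line and its wavelengths are hypothetical, chosen only to illustrate the arithmetic; they are not values from this essay.

```python
# Illustrative only: redshift and recessional velocity from rest and observed wavelengths.
C = 299_792.458  # speed of light in km/s

def redshift(lambda_obs_nm, lambda_rest_nm):
    """z = (lambda_observed - lambda_rest) / lambda_rest"""
    return (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

def recession_velocity(z):
    """Low-redshift approximation v = c * z, in km/s."""
    return C * z

# Hypothetical H-alpha line (rest wavelength 656.3 nm) observed at 662.9 nm.
z = redshift(662.9, 656.3)
print(f"z = {z:.4f}, v ≈ {recession_velocity(z):.0f} km/s")
```

Dividing a velocity obtained this way by the object's measured distance gives the slope of the linear relation plotted in Figure 1.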

Conclusion

Ultimately, there are many misconceptions surrounding the expansion of the universe. The public does not usually interpret data in the way astronomers do. Predictions and observations show that the universe is in fact expanding, but also that it has no center or edge. The homogeneous nature of the universe and the cosmological principle do not allow for a true center or edge to exist. Hubble's Law, along with more recent observations, continues to support this. Even though these questions have been answered, others still remain. Will the universe expand forever? Why is the expansion accelerating? What is the fate of our universe? Astronomers are currently exploring these open questions and developing models that attempt to explain them. As science and technology progress, we can only hope to have some of these questions answered in the near future.

References

[1] Feuerbacher, Björn, and Ryan Scranton. "Evidence for the Big Bang." Evidence for the Big Bang. The Talk Origins Archive, 25 Jan. 2006. Web. 05 Jan. 2015.
[2] Freedman, Roger A., Robert M. Geller, and William Kaufmann III. Universe. 9th ed. New York: W.H. Freeman and Company, 2011. Print.
[3] NASA/WMAP Science Team. "Foundations of Big Bang Cosmology." WMAP Big Bang Concepts. National Aeronautics and Space Administration, 24 Jan. 2014. Web. 13 Jan. 2015.
[4] Rothstein, Dave. "What Is the Universe Expanding Into?" Curious About Astronomy. Cornell University, Apr. 2003. Web. 05 Jan. 2015.
[5] Wright, Edward L. "Frequently Asked Questions in Cosmology." Frequently Asked Questions in Cosmology. N.p., n.d. Web. 06 Jan. 2015.


Biology and Chemistry Research

Emerging marine diseases: Variation in response to climate change conditions among strains of Serratia marcescens

Rachel Cohn

ABSTRACT

Caribbean elkhorn coral populations have been decimated by white pox, an emerging marine disease caused by Serratia marcescens (S.m.), within the past two decades. S.m., a common freshwater bacterium and facultative pathogen of humans, recently shifted into marine environments and hosts. Climate change conditions are expected to affect the geographic and host specificity of pathogens, and several marine disease outbreaks are associated with environmental changes in temperature, pH, and salinity. This study contrasts the two marine strains (PDL100 and PDR60), which vary in their dates of emergence, to determine whether the more recent strain (PDR60) is better adapted to extreme climate conditions. Two experiments were conducted: one manipulating temperature and pH, and the other manipulating salinity, with optical density as an indicator of cell density and population growth. The results indicated a significant, positive effect of temperature on cell density (p<0.001) as well as a significant interaction (p<0.05) between temperature and strain, with strain PDL100 exhibiting the most growth at a moderate temperature and the more recent strain PDR60 at the highest temperature. There was no significant effect of pH. The effect of salinity was significant (p<0.0001); however, whether it affected cell count remains to be determined. These results indicate that both strains of S.m. respond to changing environmental factors, but that the more recent strain PDR60 is better adapted to continuously increasing temperatures, providing evidence for its more recent emergence in the marine environment.

Introduction

Recent research has tied changes in the oceanic environment to many specific marine disease outbreaks, such as the human-to-coral zoonotic bacterium Serratia marcescens (S.m.) [16, 17]. Anthropogenic climate change encompasses a wide variety of environmental effects, including an increase in global temperature and CO2 levels, but also has implications tied more directly to the marine environment, such as a decrease in ocean pH and an increase in variability of salinity due to extreme weather patterns [17]. The direct relationship between these anthropogenic issues and emerging marine diseases is cause for alarm. Some climatologists predict an increase in mean atmospheric temperature of up to 2.0°C by 2100 [17]. As a result, and even as a result of current short-term warming, surface temperatures of the ocean have seen significant increases [18]. Increased temperature specifically is thought to be a key factor in the emergence or re-emergence of many water-based disease outbreaks, such as toxic algae blooms [15], trematodes and other microparasites [11], and bacteria, among others [17]. This warming trend could be responsible for several recent outbreaks of gram-negative bacterial strains and shifts in host species over the past few years. Increased temperatures have been known to cause increases in zooplankton that serve as stores for bacterial pathogens, such as cholera [17], indirectly causing bacterial emergence as a result of the environmental shifts. Host shifts are also cause for alarm, such as the human-to-coral reverse bacterial zoonosis of Serratia marcescens, which has caused massive mortalities of elkhorn coral since 1996 [16].

The anthropogenic increase in atmospheric carbon dioxide is also having a detrimental and distinctive effect on the oceanic environment, which acts as a sink for carbon dioxide, due to the decrease of oceanic pH, a phenomenon known as ocean acidification [13]. It has been found that many recently emerging marine bacteria, such as many strains of cyanobacteria (Ma et al., 2014) and Brucella bacteria [19], benefit from a more acidic environment, which, a priori, may be a causative factor in their emergence. Smaller, more adaptable pathogens like bacteria and plankton are optimizing in response to environmental factors that put larger organisms under too much stress [12], leading to an increase in the emergence of marine disease epidemics. Ocean acidification also has detrimental effects on many host species' immune responses, such as the blue mussel, Mytilus edulis [1], and salmon [9], which has ultimately resulted in mass disease outbreaks.

Severe weather patterns that disturb salinity, including an increase in the intensity of tropical storms, hurricanes, and the El Niño effect, have also arisen due to anthropogenic climate change [17]. Heavy rainfall and severe storms not only disturb the salinity by dilution but also create upwelling, increasing overall particulate concentrations in the marine environment [13]. Due to the comparatively short lifespans of many pathogens, bacteria have a relatively quick adaptability to environmental change, as compared to their host organism [2].



Therefore, quickly adaptable strains get a higher return from environmental changes, which leads to disease emergence, especially when attacking a compromised host. Changes in salinity have been tied indirectly to the emergence of disease due to immunosuppressant effects in host species too, as was seen in multiple species of fish and seagrass [3, 2, 22]. Coincidentally, elkhorn coral killed by S.m. inhabits the same fluctuating ecosystem of the Florida Keys as the seagrass cited by Trevathan (2011). The connection between immune susceptibility and pathogen tolerance is an interesting effect of environmental change that may have a larger impact than salinity alone.

White pox of Acropora palmata

There are many marine epidemics for which little data exist on environmental correlates, such as white pox, a bacterial infection of elkhorn coral by the human pathogen Serratia marcescens (S.m.) [16]. The disease manifests itself through whitish lesions that lead to tissue loss and eventually death of the coral. Since 1996, two separately isolated strains of this pathogen have caused mass mortalities of this coral species throughout the Caribbean (Fig. 1): strain PDL100 in 1996 and strain PDR60 in 2003 [16, 21]. The mortality of such a crucial part of the coral reef ecosystem poses a huge ecological threat, so research must be done to determine the causes of such a large marine epidemic.

Figure 1. The percent cover of elkhorn coral of Eastern Dry Rocks reef, Key West, 1994-2001, first documented outbreak (Patterson et al., 2002).

The primary study providing insight directly into the response of the pathogen to its new marine environment was conducted in 2010 on strain PDL100. It investigated the reaction of the strain to elevated temperatures [10], finding that increased temperature had a positive correlation with survival of the pathogen. More research is needed to investigate both of the new marine strains in response to temperature, to determine whether genetic variability in response to temperature between the strains has a trend over time. The effects of ocean acidification on the emerging pathogen S.m. have not yet been elucidated, and should be studied as well in order to determine whether or not there is a direct correlation.


Extreme weather phenomena such as hurricanes have been experienced more frequently and intensely in the Caribbean coral reef ecosystems, and these storms have the power to alter the salinity and nutrient concentrations of the ocean due to the upwelling that they create [6]. Discovering whether the white pox pathogen thrives in this type of extreme environment could help to elucidate the threat that future storms and weather patterns could have on the prevalence of the disease. The lack of information on specific environmental correlates for white pox is concerning, as it is a prevalent issue in the field of emerging marine diseases and is therefore a field where there is much room for expansion. Discovering the differences in adaptability and tolerance of each emerging strain in response to temperature change, ocean acidification, and salinity changes will provide insight into the effects of climate changes on this emerging disease. In addition, identifying strain-specific responses in the context of when strains PDL100 and PDR60 were isolated will provide insight into how quickly and in what ways the pathogen is adapting to these climate change conditions. While the environmental factors correlating with anthropogenic climate change in the marine environment are widely known, many of the fallouts, such as emerging and quickly adapting pathogens, are not.

Materials and Methods

Microorganism Background

Experimental strains of S.m. were obtained from Dr. Erin Lipp at the University of Georgia Department of Environmental Health Science. Three unique isolates were used: PDL100, the marine strain emerging in 1996, and EL77 and EL95, both substrains of the marine strain PDR60, which emerged later, in 2002, all referred to hereafter as 'strains'. The strains were received in deep agar stabs and subsequently streaked onto nutrient agar plates for isolation via sterile technique. Plates were stored at 25°C [21] (Fig. 2). All research was conducted at the North Carolina School of Science and Mathematics.

Figure 2. The three experimental strains of S.m. Clockwise from top: PDR60 substrain EL77, PDR60 substrain EL95, and PDL100 (Photo credit: author).




Preliminary Experiments and Growth Curves

Growth curves for each S.m. strain were determined through an optical density (OD) assay. Glycerol artificial seawater liquid media cultures [16] were inoculated via sterile loop for each strain and incubated in a shaking water bath at 30°C and 100 r.p.m. for the duration of the preliminary experiment. At hour 2, and every 0.5 hours after, optical density measurements were made from two aliquots (n=2) of each culture, and the resulting growth curve was modeled over hour 0 to hour 7.5. Two measurements were also taken at hour 27 to have a reading for optical density of the pathogen in the stationary growth phase. Colony-forming units (CFU) were also measured in conjunction with the preliminary growth curves at hours 3, 4, and 5. From the same cultures, aliquots were diluted through a serial dilution, and 200 μL of the diluted cultures were transferred to sterile nutrient agar plates and kept at room temperature overnight. CFUs were counted to be compared to OD600 at that time point. All optical density measurements were taken using the Novaspec Plus Spectrophotometer with its Cell Density application. The Cell Density application takes an optical density measurement from an aliquot in a cuvette at 600 nm with an autocorrection at 800 nm. Samples were blanked with a reference cuvette of sterile glycerol artificial seawater media prior to reading. Morphological and qualitative differences between the strains may exist, and clearly pigmentation differences exist (see Fig. 2). However, by diluting inoculant cultures to the same initial optical density within each strain (PDL100, EL77, and EL95), treatments in subsequent procedures all initiated from the same quantity of the pathogen.

Temperature and pH Experimental Conditions

The effects of temperature and pH on the growth of three strains of S.m. were determined with three treatment levels of each variable: strain, pH, and temperature (Fig. 3). The levels for pH consisted of a pH within the current bounds of the ocean (8.0), a pH lower than the current bounds (within the projected ocean acidification range) (7.8), and a pH much lower than the current bounds (7.5) [1]. Likewise, the levels for temperature were a temperature within the bounds of current summertime Caribbean coral reef temperatures, 30°C, a temperature higher than the current bounds (within the projected ocean surface temperature rise), 33°C, and a temperature much higher than the current bounds, 36°C [10].

Figure 3. Experimental design to model the effects of temperature and pH on strain-specific growth response for S.m.

Temperature and pH Experimental Protocol

The experiment was divided into two equal and randomized blocks due to shaking water bath availability. Block A consisted of two replicates of all treatments at the baseline temperature (30°C) and all four replicates of all treatments at the elevated temperature (33°C). Block B consisted of the additional two replicates of all treatments at the baseline temperature (30°C) and all four replicates of all treatments at the extremely elevated temperature (36°C). Glycerol artificial seawater liquid medium was used for all treatments, and the pH of the media at 30°C was adjusted up or down using dilute NaOH or HCl, respectively. pH measurements were taken using the Accumet Basic pH Meter (Fisher Scientific). Beakers were inoculated and the pathogen was grown overnight in glycerol artificial seawater liquid media at pH 8.0, in a shaking water bath at 30°C and 100 r.p.m. After an overnight, at hour 0, cultures were diluted to a target optical density much less than the maximum OD based on preliminary growth curves (Table 1). For Block B, optical densities of the overnight cultures were adjusted at this point until they were within ±0.01 OD600 of the initial OD for each strain in Block A, thereby matching initial inoculant OD between blocks. At this point, 9 mL of the pH-adjusted media for each experimental unit was inoculated with 1 mL of the overnight culture in cell culture tubes (15 mL Fisherbrand) and relocated to shaking water baths, again at a temperature predetermined by the treatment.

Table 1. Initial OD600 cell density measurements by S.m. strain for each time block of the temperature and pH experiment. Differences were ±0.01 OD600.

At hour 8 in the experiment, a point outlined by the growth curves in the preliminary experiments at the exponential growth phase of the bacteria, OD600 measurements were made again with the spectrophotometer.
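The dilution step described above (bringing each overnight culture to a common starting OD600 before inoculation) follows the usual C1V1 = C2V2 relation. The snippet below is a minimal Python illustration of that arithmetic; the function name and the example numbers are hypothetical, not taken from the paper's protocol.

```python
def overnight_volume_needed(od_overnight, od_target, final_volume_ml):
    """Volume of overnight culture (mL) to use so that
    od_overnight * V = od_target * final_volume_ml  (C1*V1 = C2*V2)."""
    if od_target > od_overnight:
        raise ValueError("Target OD cannot exceed the overnight culture's OD.")
    return od_target * final_volume_ml / od_overnight

# Hypothetical example: overnight culture at OD600 = 1.20,
# target starting OD600 = 0.05 in a 10 mL culture tube.
v_culture = overnight_volume_needed(1.20, 0.05, 10.0)
print(f"Add {v_culture:.2f} mL culture to {10.0 - v_culture:.2f} mL fresh media")
```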


Salinity Experimental Conditions

The response of S.m. to changes in environmental salinity levels, both long-term and short-term, was modeled with a similar experimental protocol but differing treatments. Three treatments maintained constant salinity over the entire experiment, and consisted of an oceanic baseline salinity (35 parts per thousand), a salinity higher than the current bounds (45 parts per thousand), and a salinity lower than the current bounds (25 parts per thousand) [8]. Short-term salinity change was also modeled with two different treatments in which the initial media salinity was 35 ppt but was later adjusted either down to 25 ppt or up to 45 ppt. All three strains (PDL100, EL77, and EL95) were tested, providing a second variable.

Salinity Experimental Protocol

In order to manipulate the precise salinity of the media, a non-seawater TP Soy Broth medium with a known initial salinity (5 g/L) was used. This liquid media was adjusted by adding various calculated masses of NaCl to obtain the salinities needed for each of the five salinity treatments. Despite being unrealistic with respect to the true oceanic salt composition, NaCl was used as a model to represent total salinity of the ocean because artificial seawater salts confound the OD readings by clouding the medium. After growing overnights of each strain in TP Soy Broth at 30°C and 100 rpm, culture tubes of 5 mL medium were inoculated with 500 μL of the overnight cultures, with 4 replications per strain, per salinity treatment (N=60). At hour 4, each culture tube was adjusted to a final volume of 10 mL and a different, or the same, salinity as designated by the treatment, and returned to the shaking water bath. At hour 8, OD600 measurements were made via the Novaspec Plus Cell Density application and recorded as the response variable for the treatment.

Data Analysis

Data analysis, including ANOVA and mean separation (Student's T test), was conducted using JMP (version 10.0.0).

Results

Optical density of S. marcescens over time

Optical density over a growth period of 27 hours in S.m. was measurable using the Cell Density OD600 spectrophotometer assay, especially from hour 6 on (Fig. 4). The three strains had a wide range of OD measurements (difference of 0.527) at hour 7.5 but by hour 27 had a smaller range of OD measurements (difference of 0.066), indicating that populations were nearing a maximum population cell density. A large change in slope observable from approximately hour 5.5 to hour 8, depending on strain, indicated exponential growth and an appropriate time period from which to sample growth aliquots. Hour 8 was chosen to allow for growth of EL95, which appeared to be the slowest growing strain given the initial inoculant conditions.

Figure 4. Optical density (600 nm) of the three experimental S.m. strains over a period of 7.5 hours.

CFUs were counted at hours 3, 4, and 5 at a variety of dilutions and compared to the OD measurements at each time point. While all three strains showed an increase in mean CFU counts over that time period at the ideal serial dilution (10^-4), OD was not significantly readable at that point, as was evidenced by several negative OD readings. However, CFU counts demonstrated growth of all three strains over the time interval, as well as OD measurements comparable to the initial inoculant volume, with EL95 at the lowest OD by hour 8 and also the lowest CFUs at hours 3, 4, and 5 (see Table 2 and Fig. 4).
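Plate counts at a known dilution are commonly converted to a concentration estimate as CFU/mL = colonies / (plated volume x dilution factor). The sketch below illustrates that conversion in Python; the colony count is hypothetical and is not a value reported in Table 2, though the 0.2 mL plated volume matches the 200 μL described in the methods.

```python
def cfu_per_ml(colony_count, plated_volume_ml, dilution_factor):
    """Estimate CFU/mL of the original culture from a plate count.

    dilution_factor is the fraction of the original concentration plated,
    e.g. 1e-4 for a 10^-4 serial dilution.
    """
    return colony_count / (plated_volume_ml * dilution_factor)

# Hypothetical example: 85 colonies from 0.2 mL of a 10^-4 dilution.
print(f"{cfu_per_ml(85, 0.2, 1e-4):.2e} CFU/mL")  # ~4.25e+06 CFU/mL
```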

Table 2. Mean CFU counts (n=2) for strains at 10^-4 dilution from original cultures. > indicates an extremely high number of CFUs (uncountable).

Effects of temperature and pH on S. marcescens growth

Temperature had a significant effect on all three S.m. strains (p < 0.001), and there was also a significant temperature by strain interaction (p < 0.05) (Table 3, Figure 5). Between strains, temperature differed in its effect on growth. For both PDR60 substrains – EL77 (Fig. 5a) and EL95 (Fig. 5b) – growth increased with increasing temperature, with the most growth occurring at the highest temperature. While there was a positive effect of temperature for strain PDL100 between 30°C and 33°C for all pH values, there was a negative effect between 33°C and 36°C, indicating that PDL100 has an optimal temperature zone at 33°C (Fig. 5c). This significant difference between the strains correlates with the emergence time of the strains, with strain PDL100 emerging in 1996 and both substrains EL77 and EL95 emerging in 2002.

Table 3. ANOVA with analysis of temperature and pH versus strain effects. Statistically significant responses between temperatures, and between strains at each temperature.

Figure 5 a-c. S.m. growth (as optical density) after 8 hours, showing strain-specific response to temperature only. Mean separation by temperature effect on strain (Student's T test) shown above bars. ±1 SE shown.

Trends were difficult to detect due to differences in pH treatments across all three strains (Fig. 5 a-c), and no statistically significant differences between pH levels (p = 0.3220) were found, either between pH treatment levels or between strains at varying pH levels. The large amount of variance surrounding many means indicates that statistical significance between these means is unlikely. Such a result indicates that there was probably no effect of pH on S.m. growth.

Effects of salinity on S. marcescens growth

The effect of salinity on S.m. growth was seen in several trends across the strains. For all three strains, a very distinct and statistically significant decrease (p < 0.0001) was seen between salinity treatments (Table 4). The OD of cultures grown at 45 ppt was significantly lower than in any other treatment, including the control treatment at typical ocean salinity levels, as well as the treatment where salinity was increased to 45 ppt at the halfway point (Fig. 7). In addition to this trend, suggested trends (indicated by mean separations in Fig. 8) were seen at two other points in the data: in strain EL77 between the control salinity treatment and the treatment modeling a sudden increase in salinity to 45 ppt (Fig. 7a), as well as in strain EL95 between the control treatment and the cultures grown at 25 ppt for the duration of the experiment (Fig. 7b). With four replications (n=4), variance from the mean was still highly visible but more stable between all treatments.
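The ANOVA and mean-separation analysis described above was run in JMP. Purely as an illustration of the same kind of model (a two-factor ANOVA with an interaction term), the sketch below uses Python with pandas and statsmodels on made-up OD600 data; the column names and values are hypothetical and are not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical OD600 readings for a temperature x strain design (3 replicates per cell).
data = pd.DataFrame({
    "strain":      ["PDL100", "PDL100", "EL77", "EL77", "EL95", "EL95"] * 3,
    "temperature": [30, 33, 30, 33, 30, 33] * 3,
    "od600":       [0.41, 0.55, 0.38, 0.60, 0.30, 0.52,
                    0.43, 0.57, 0.36, 0.63, 0.28, 0.50,
                    0.40, 0.53, 0.39, 0.61, 0.31, 0.54],
})

# Two-way ANOVA with interaction, analogous to the temperature-by-strain model in the paper.
model = ols("od600 ~ C(temperature) * C(strain)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```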

Figure 6 a-c. S.m. growth (as optical density) after 8 hours with varying temperature and pH treatments. ±1 SE shown.


Figure 8. S.m. growth (as optical density) after 8 hours, showing strain-specific response to salinity treatment only. Mean separation by salinity (Student's T test) shown above bars. ±1 SE shown.



Table 4. ANOVA of salinity versus strain effect. Statistically significant responses between salinities for each strain.


Figure 7 a-c. S.m. growth (as optical density) after 8 hours with varying salinity. Treatment labels indicate the initial treatment/the changed (or same) salinity at hour 4.

Discussion

Optical density of S. marcescens over time

Increased optical density of cultures over time when compared to a sterile control indicates that optical density of S.m. was a reasonable response variable for growth. While it is visible that strain EL95 grew at a slower rate over the first 7.5 hours, initial CFUs per inoculation were not controlled for between strains. Due to this, it is probable that all three strains increase in optical density at a similar rate, but only under the same CFUs per inoculation, as all three strains approached the same carrying capacity (approximately 0.900 OD600) by hour 27. Similar growth in later experiments from the same inoculant OD confirmed this a priori hypothesis, but OD between strains must be compared with caution, as differences in OD due to varying cell size, color, etc., either alone or in combination, are unknown. In this way, statistical significance between strains was only found for one of the treatment variables (temperature), but it is possible that strain-specific OD differences could account for more. By using a point in time (hour 8) as close to the exponential growth phase for all three strains as was possible given the different inoculant volumes, differences in growth dynamics could be seen in different treatments when started from the same inoculant volume within strain. In this way, significant differences could be assigned to the effect of the treatment and not the amount of inoculant.

Effects of temperature and pH on S. marcescens growth

While much is known about the effects of temperature on other emerging marine diseases [6, 4], this study is the first to examine the effects of temperature on both marine strains of S.m. as well as the compounded effects of temperature and pH. Temperature is seen as a growth-favoring factor in many diseases [17], and the strictly positive trend between temperature and growth of S.m. strain PDR60 after 8 hours supports my a priori hypothesis that increased temperature provides an environment more conducive to bacterial growth. Human sewage, the likely source of S.m. to the marine environment [20], has been making contact with the marine environment since well before 1996, when S.m. was initially identified as a coral pathogen. Because S.m. has been in the marine environment much longer than white pox has been around, a recent environmental change, in this case temperature, logically must have a positive interaction with pathogen growth. The significant difference between strains in response to temperature (PDR60 strain OD increasing across all temperatures, in comparison to PDL100 strain OD increasing only between 30°C and 33°C) is an interesting result, suggesting that PDR60, the later emerging strain, is in fact better adapted to a wider range of increased temperatures than PDL100 is. Such an indication could provide clues as to why PDR60 has emerged more recently, and also indicates the adaptability of the pathogen to changing extreme temperatures, even in the short term of approximately 18 years. The fact that two distinct strains of S.m. have both emerged over the past two decades and that the more recent strain (PDR60) is significantly better adapted to grow at higher temperatures indicates just how adaptable this disease, and likely others, are to climate change conditions.

No significant effects were seen in response to varied pH treatments, or even in response to pH compounded with temperature.



While this result indicates that pH has no effect on S.m. growth for any of the strains, it says nothing directly concerning either host susceptibility or pathogen virulence. It has been seen in multiple other species that pH affects immune response [1, 9], and little to no data yet exist concerning the effects of these specific environmental changes on Acropora palmata, let alone the effects of these changes on its susceptibility to white pox disease. No change in growth of S.m. at varying pH levels also indicates that the bacterium is not affected by decreased pH, which is a beneficial adaptation in an environment where the pH is decreasing relatively rapidly. In this way, no change to bacterial growth at decreased pH can still be seen as an effective factor in the emergence of S.m. in the marine environment. S.m. tolerance, combined with growing host susceptibility, is thus a likely cause of emergence, and one that is easily translatable to other organisms and environments.

Effects of salinity on S. marcescens growth

The effects of salinity on S.m. OD are clearly visible through the results, but disentangling growth of the bacterial population from other factors that salinity is known to affect (i.e. cell size) is somewhat harder. The data show a significant decrease in OD for all strains when the salinity was higher for the entire growth period (treatment 45/45 ppt). However, attributing this decrease simply to cell density is more difficult. While all environments in the experiment were hypertonic, some (the 45 ppt treatments) were clearly more hypertonic than others. This provides a possible explanation (shriveled cells) for the decreased OD at this higher salinity. Cell size could be worked around by measuring CFUs at an extremely high dilution. However, this does not account for the fact that a short-term change to 45 ppt did not cause a significant change from the control, nor does it explain why 25 ppt, which is significantly less hypertonic than 35 ppt, did not display significantly higher OD than the 35 ppt control in either the short term or the long term, both of which the shriveled-cell explanation suggests it should. Such a response indicates that salinity may have had an effect on cell size, but likely also had an effect on cell density at 45 ppt over 8 hours. Decreased cell population growth at a significantly increased salinity has some interesting implications. Seasonal upwelling causes increased salinity and increased concentrations of other nutrients, which has been shown to be a factor in the emergence of many cyanobacteria and other pathogens [15]. However, these results indicate the opposite association in S.m. No significant differences were observed at all other salinity treatments, though, indicating that S.m. is adaptable and tolerant to a wide range of salinities, especially more dilute ones. While salinity specifically may not have a positive effect on cell density, it is likely that other changes that correlate with changes in salinity, such as nutrient changes, could have an effect on S.m. population growth.

Conclusions and Future Work

It is evident from the results of these experiments that interactions between Serratia marcescens (S.m.) strains and multiple environmental factors are in fact present. In addition, the earlier emerging strain PDL100 differs in its response to temperature, growing best at a moderately high temperature, while both later emerging substrains of PDR60 grew best at the highest temperature. Such results indicate the need for further research to determine to what extent these environmental factors have an effect on the emergence of these strains, and what other environmental factors could have similar impacts on the pathogen's growth and adaptation. Not only could these environmental factors be affecting the pathogen population, but it has also been seen that environmental factors can impact pathogen virulence toward the host species, effectively activating a previously unaggressive pathogen [5]. Studies on the virulence of S.m. could also shed light on the impacts that anthropogenic climate changes have directly on the bacteria. While pH had no effect on pathogen growth, this study did not investigate possible effects on the bacteria's cell processes.

The opposite assumption, that environmental factors could be having an effect on the host, and not the pathogen, also provides an interesting perspective in view of the emergence of white pox disease. Host susceptibility to disease due to environmental factors is an increasingly common reason for disease emergence [12]. While pH as an environmental factor appeared to have no significant or distinct effect on S.m. population growth, it is feasible that pH could be affecting host susceptibility, providing an indirect advantage to the pathogen. Further research, done on Acropora palmata, would be needed to determine host sensitivity to changes in pH, and therefore susceptibility to the pathogen. Such an interaction could also have a compounded effect with temperature or salinity, or even other environmental factors.

Of course, temperature, pH, and salinity are only a few simplified effects of anthropogenic climate change as a whole. Other factors such as pollution, seasonal changes, turbidity changes, etc., have also been connected to emerging marine diseases [6]. It is likely that temperature, pH, and salinity are not the only environmental factors causing the emergence of S.m. as a marine pathogen, and investigating the other factors that could contribute to pathogen proliferation or host susceptibility is a field where there is much room for expansion. Increased salinity due to upwelling is intrinsically tied to increased turbidity and nutrient concentrations, so it is likely that these environmental factors could all play a role in increased S.m. growth, but more research would need to be done to confirm such an indication.


Knowing the impacts of projected climate changes on S.m. could help uncover the cause of the emergence of this pathogen, but also the probable impacts for other, similarly emerging marine pathogens [6]. This research indicates that S.m. has adapted to specific environmental changes, and also that further evolution of the pathogen into novel strains, even better adapted to the changing marine environment, is likely in the future, given the short timespan in which these significantly different strains emerged. Such an indication could have devastating effects on the ecosystem if S.m. continues to thrive in new strains as the environment changes. If other pathogens behave the same way, the environmental consequences would be devastating. The field of emerging disease has shown direct correlations to anthropogenic changes time and time again, and understanding the severity of our environmental impacts is important to evaluating the consequences of human activity for the future.

Acknowledgements

I thank A. Sheck of the North Carolina School of Science and Mathematics (NCSSM) for her mentorship and support, F. Bullard of NCSSM for statistical mentorship, E. Lipp of the University of Georgia for providing the study organisms, C. Harms of North Carolina State University for guidance in selecting a study system, and J. Allen and A. Feng of NCSSM for their peer mentorship. This study used funding from the Glaxo Endowment to NCSSM.

References

[1] Bibby, R., S. Widdicombe, H. Parry, J. Spicer, and R. Pipe. 2008. Effects of ocean acidification on the immune response of the blue mussel Mytilus edulis. Aquatic Biology 2: 67-74.
[2] Birrer, S.C., T.B.H. Reusch, and O. Roth. 2012. Salinity change impairs pipefish immune defense. Fish and Shellfish Immunology 33: 1238-1248.
[3] Bowden, T.J. 2008. Modulation of the immune system of fish by their environment. Fish & Shellfish Immunology 25: 373-383.
[4] Falenski, A., A. Mayer-Scholl, M. Filter, C. Göllner, B. Appel, and K. Nöckler. 2011. Survival of Brucella spp. in mineral water, milk and yogurt. International Journal of Food Microbiology 145: 326-330.
[5] Feehan, C., R.E. Scheibling, and J.-S. Lauzon-Guay. 2012. An outbreak of sea urchin disease associated with a recent hurricane: Support for the "killer storm hypothesis" on a local scale. Journal of Experimental Marine Biology and Ecology 413: 159-168.
[6] Harvell, C.D., K. Kim, J.M. Burkholder, R.R. Colwell, P.R. Epstein, D.J. Grimes, E.E. Hofmann, E.K. Lipp, A.D.M.E. Osterhaus, R.M. Overstreet, J.W. Porter, G.W. Smith, and G.R. Vasta. 1999. Emerging marine diseases - Climate links and anthropogenic factors. Science 285: 1505-1510.


1505-1510. [7] JMP®, Version <10.0.0>. SAS Institute Inc., Cary, NC, 1989-2007. [8] Kelble, C. R., E.M. Johns, W.K. Nuttle, T.N. Lee, R.H. Smith, and P.B. Ortner. 2007. Salinity patterns of Florida Bay. Estuarine, Costal and Shelf Science 71: 318-334. [9] Kroglund, F., B. Finstad, K. Pettersen, H.-C. Teien, B. Salbu, B.O. Rosseland, T.O. Nilsen, S. Stefansson, L.O.E. Ebbesson, R. Nilsen, P.A. Bjørn, and T. Kristensen. 2012. Recovery of Atlantic salmon smolts following aluminum exposure defined by changes in blood physiology and seawater tolerance. Aquaculture 362-363: 232-240. [10] Looney, E.E., K.P. Sutherland, and E.K. Lipp. 2010. Effects of temperature, nutrients, organic matter and coral mucus on the survival of the coral pathogen, Serratia marcescens PDL100. Environmental Microbiology 12: 2479-2485. [11] Mas-Coma, S., M.A. Valero, and M.D. Bargues. 2009. Climate change effects on trematodiases, with emphasis on zoonotic fascioliasis and schistosomiasis. Veterinary Parasitology 163: 264-280 [12] McMichael, A.J. 2004. Environmental and social influences on emerging infectious diseases: past, present and future. Philosophical Transactions of the Royal Society of Biological Sciences 359. [13] O’Neil, J.M., T.W. Davis, M.A. Burford, and C.J. Gobler. 2012. The rise of harmful cyanobacteria blooms: The potential roles of eutrophication and climate change. Harmful Algae 14: 313-334. [14] Paerl, H.W., and V.J. Paul. 2012. Climate change: Links to global expansion of harmful cyanobacteria. Water Research 46:1349-1363. [15] Paerl, H.W., N.S. Hall, and E.S. Calandrino. 2011. Controlling harmful cyanobacterial blooms in a world experiencing anthropogenic and climatic-induced change. Science of the Total Environment 409: 1739-1745. [16] Patterson, K.L., J.W. Porter, K.B. Ritchie, S.W. Polson, E. Mueller, E.C. Peters, D.L. Santavy, and G.W. Smith. 2002. The etiology of white pox, a lethal disease of the Caribbean elkhorn coral, Acropora palmata. PNAS 99: 8725-8730. [17] Patz, J.A., P.R. Epstein, T.A. Burke, and J.M. Balbus. 1996. Global climate change and emerging infectious diseases. JAMA 275: 217-223. [18] Patz, J.A., T.K. Graczyk, N. Geller, and A.Y. Vittor. 2000. Effects of environmental change on emerging parasitic diseases. International Journal for Parasitology 30: 1395-1405. [19] Seleem, M.N., S.M. Boyle, and N. Sriranganathan. 2010. Brucellosis: A re-emerging zoonosis. Veterinary Microbiology 140: 392-398 [20] Sutherland, K.P., J.W. Porter, J.W. Turner, B.J. Thomas, E.E. Looney, T.P. Luna, M.K. Meyers, J.C. Futch, and E.K. Lipp. 2010. Human sewage identified as likely source of white pox disease of the threatened Caribbean elkhorn coral, Acropora palmata. Environmental Microbiology 12: 1122-1131. Volume 4 | 2014-2015 | 11



[21] Sutherland, K.P., S. Shaban, J.L. Joyner, J.W. Porter, and E.K. Lipp. 2011. Human pathogen shown to cause disease in the threatened elkhorn coral Acropora palmata. PLoS ONE 6: e23468.
[22] Trevathan, S.M., A. Kahn, and C. Ross. 2011. Effects of short-term hypersalinity exposure on the susceptibility to wasting disease in the subtropical seagrass Thalassia testudinum. Plant Physiology and Biochemistry 49: 1051-1058.





Effects of Ocean Acidification on Calcification of the Estuarine Mud Crab Rithropanopeus harrisii

Steven Mark Tulevech, Jr.

ABSTRACT

Ocean acidification, the decline of aquatic pH due to mounting atmospheric CO2 levels, is beginning to wreak havoc on many of the world's most sensitive fisheries and productive ecosystems. In this experiment, I investigated how Rithropanopeus harrisii, the Estuarine Mud Crab, might react to such a future threat by subjecting the crab to ocean-like conditions at various pH values. This was done by analyzing the rates of calcification and survival over a two-week period among pH treatments 7.40, 7.90 (control), and 8.40. I observed the average change of wet weight, post mortem dry weight, ash weight, and mortality rate for each treatment in order to determine calcification and survival rates; the calcium content of each crab was obtained by looking at the percentage of ash weight to dry weight, which gives the rough percentage of calcium in the body. Contrary to what I expected, no difference among treatments was detected when observing the change of wet weight, the percentage of ash weight to dry weight, or mortality. These results suggest that Rithropanopeus harrisii is capable of both survival and calcification at a variety of pH values, which is especially interesting given the extremity of the 7.40 and 8.40 pH values. Furthermore, the Estuarine Mud Crab's uncommon tolerance for a variety of pH values may be evidence of an uncommon adaptation to acidic waters, or of an ability to influence pH through excretion of certain chemicals. Speculations aside, this experiment's findings illustrate the unpredictable yet costly effects of ocean acidification, and seek to improve our understanding of this irreversible process's effects within our increasingly inhospitable world.
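The calcium-content metric described in the abstract is a simple ratio of ash weight to dry weight. The sketch below is a minimal Python illustration of that calculation; the function name and the weights are hypothetical, not measurements from this study.

```python
def percent_ash_of_dry(ash_weight_g, dry_weight_g):
    """Ash weight as a percentage of dry weight, used here as a rough
    proxy for the mineral (largely calcium) content of a crab."""
    if dry_weight_g <= 0:
        raise ValueError("Dry weight must be positive.")
    return 100.0 * ash_weight_g / dry_weight_g

# Hypothetical example: 0.18 g of ash remaining from a 0.42 g dried crab.
print(f"{percent_ash_of_dry(0.18, 0.42):.1f}% ash of dry weight")  # ~42.9%
```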

Introduction

Since the dawn of the Industrial Revolution, atmospheric carbon dioxide concentrations have risen at the most rapid rates in Earth's history [19]. Most of this increase, however, has transpired not from natural occurrences but from human activities, particularly the addition of fossil fuels and the reduction of organic soils, grasslands, and forests [17]. Increased levels of CO2 present in the atmosphere are well-known to directly alter climate, weather, and life systems worldwide; considerably less attention has been given, however, to ocean acidification [3, 19], the complex, global phenomenon that is causing ongoing, extreme changes for an array of organisms within the planet's major bodies of water, particularly in coastal environments [5, 19]. This is in addition to the significant role that the world's bodies of water play in mitigating climate: over one third of all CO2 emitted in the last 200 years has been absorbed by the oceans [5, 17]. Over time, the process of CO2 absorption into the oceans has altered ocean chemistry through the seawater carbonate balance [6]. The concentration of hydrogen ions in the oceans, which determines overall pH, has risen 30-33% in just 200 years; carbonate ion concentrations, moreover, have fallen 16% [6]. Seawater pH currently ranges from 7.8 to 8.2 around the world, though this global average has already declined 0.1 units since 1850 due to accelerating shifts in seawater chemistry [1, 10]. Predictions estimate that global pH will decline an additional 0.3-0.4 pH units by 2100, assuming that the "business as usual" CO2 emissions model is perpetuated [3]. It is necessary, therefore, to pursue more in depth the effect that ocean acidification

will have on marine organisms, which is essential to the survival of ocean and coastal ecosystems, in addition to their commercial indispensability. Crustaceans, bivalves, and other marine calcifiers are especially at risk because of their sensitivity to fluctuations in ocean pH, which affect their ability to build the calcium exoskeletons necessary to survive and to function in their environment. In addition, crabs have shown variance in their responses to ocean acidification. An inquiry comparing the effects of CO2-induced ocean acidification on a variety of different ocean organisms concluded that "marine calcifiers exhibit mixed responses to CO2-induced ocean acidification" [14]. Coral reefs and bivalves suffered severe losses, while some crustaceans such as the blue crab, in complete contrast, experienced "modest" to "average" wet weight gains [14]. In another study, however, it was found that even after 200 days in experimental conditions there was no change in the calcium content of red king crabs (though there was 100% mortality in the 7.5 pH treatment), but there was a significant decrease of calcium content in the Tanner Crabs [10]. The lack of a conclusive answer for the effect of ocean acidification on crabs, in particular, has further instigated curiosity into the little work that has already been done on crabs. The amount of work done looking at the effect of ocean acidification on crabs is disproportionately small compared to the overwhelming importance of these organisms in coastal ecosystems and economies. The few studies that have been done have concluded that crabs react very differently to acidification than do other kinds of marine shellfish.



Instead of suffering the high mortality rates, significant reductions in mass, and large decreases in calcium content experienced by their bivalve counterparts, crabs in some cases calcify better in more acidic conditions than in control ocean conditions, granted that only a few survive [14]. This is not the case for all crabs, however, and it is certainly an exception throughout the ocean [14, 19]. Calcification, too, is not always an accurate indicator of an organism's health. Other species have been observed to develop better shells at very high metabolic cost to their tissue and muscle mass, one reason why my experiment used wet weight as a core response variable. Other consequences of more acidic waters include an inhibited ability to forage, develop, and reproduce [5], not to mention the added cost of regulating pH at the outer shell [14]. When more harm is inflicted on one member of the marine food web, conditions often become more complicated for another, which can jeopardize the structure of the estuarine or ocean food web as a whole [9]. Crabs often serve as scavengers and as predators of small animals, an important role in their ecosystems for recycling nutrients through the system and helping to maintain the ecosystem's vitality. Crabs are also very important for their commercial value. In 2011, for example, crab was the eighth most consumed type of seafood worldwide and the most expensive species in the top ten [9]; in the United States, per capita consumption of crab was 0.512 pounds in 2011 [9]. This demand must be supplied either by productive natural fisheries or through aquaculture, both of which are subject to future disruption by acidification [9]. Along the Eastern Seaboard of the United States, fisheries in areas such as the Chesapeake Bay have declined over the years due to both ocean acidification and overfishing. This, in turn, hurts a number of coastal communities that rely on shellfish like crabs to support themselves [9]. Keeping the ecological and economic importance of estuarine crabs in mind, I chose a small estuarine crab local to tidewater North Carolina as my study organism. The species chosen was Rithropanopeus harrisii, known as the "Harris Mud Crab" or the "Estuarine Mud Crab." Aside from its similarity to other estuarine crabs along the Western Atlantic, Rithropanopeus harrisii was chosen because of its local abundance and the high level of local expertise in working with the species. Productive estuaries in the area, additionally, supported this crab in multiple locations and offered the opportunity for the species to be obtained on site. Finally, the species is known to inhabit a substantial range of waters from the Western Atlantic to Brazil, which could make the findings of this experiment applicable outside North Carolina [18].

Using Rithropanopeus harrisii, this experiment seeks to understand the effect of varying pH on calcification rates, growth, and survival. Throughout the experiment, the guiding question was: what effect does significant alteration of pH have on calcification rates, calcium content, overall crab mass, and mortality? My a priori hypothesis predicted that crabs kept at the lowest experimental pH would have reduced calcification rates, lower percent calcium content, substantially less mass, and a higher mortality rate. Consequently, I predicted that crabs kept at the control pH or a higher-than-control pH would experience the opposite.

Materials and Methods

Four 30 x 30 x 10-centimeter Rithropanopeus harrisii traps were constructed from a large roll of galvanized steel cloth with 0.5-inch openings. This sharp, cage-like material was measured, cut, folded, and assembled into traps. No larger openings were needed; the 0.5-inch openings were large enough to accommodate even the largest Estuarine Mud Crabs that were caught. Traps were tied together at the sides and corners using stainless steel wire, but not before each trap was filled about 75% full with cleaned, dried shells of deceased oysters. Oyster shells were retrieved from a local oyster recycling center. The shells are rather effective in attracting Mud Crabs [16], which are drawn to submerged organic and inorganic surfaces such as underwater roots, oyster shell beds, and rocky jetties [18]. After the traps were constructed and baited, they were laid along the sandy shore in the downstream portion of a large local estuary. One mistake was the failure to realize that the traps needed to be submerged below the low-tide water level, both because Rithropanopeus harrisii lives only below the surface and to prevent human tampering with the traps. After leaving the traps out for ten days, I revisited them to find three of the four torn apart and tampered with. After reconstruction and an improved site choice, the crab traps were all placed in a submerged root bed. Traps were embedded in and around the roots of this log mass to attract crabs hiding along the roots, which would quickly move to the oysters after the traps were placed. During subsequent trips to this site, a total of over 500 crabs, ranging from the very small to nearly the maximum size of 20 millimeters, were caught and returned to the marine laboratory where the remainder of the experiment was carried out. To extract the crabs from their traps, a cooking pan was placed under each trap as it was removed from the water to catch any crabs that might fall, and the trap was shaken lightly so that crabs nestled among the oyster shells would fall through the small holes of the galvanized steel cloth. After crab collection, a sample of site water was taken back to the marine laboratory in order to establish the control pH for the experiment, which was ~7.89 pH units.



At the lab, crabs were distributed between two 40-liter buckets, to which sand, oyster shells, and an aerator were added to simulate the crabs' estuarine environment and to provide dissolved oxygen, respectively. The crabs were reared with water from the capture site in this controlled environment for 2-4 weeks before being subjected to experimental conditions. The experimental design divided crabs, chosen at random from the buckets, into three treatments of twenty-five crabs each. Each crab was placed at random into a self-contained fingerbowl (about 7.5 cm in diameter) within its treatment and was given about 40 mL of a 40% seawater/60% well water mixture, created to mimic the average salinity of the water at the collection site, which was 15 ppt. Seawater was especially valuable in the mixture because of its large carbonate and bicarbonate content, which gives it a high buffering capacity and made it significantly easier to stabilize pH during the pH-altering process. pH was manipulated by adding droplets of 0.1 molar hydrochloric acid or sodium hydroxide. Even in a volume as large as a 2-liter beaker, this was enough to shift the pH to the highest and lowest values required by this experiment. In the 2-liter beakers, the low pH treatment was held at 7.40 +/- 0.20, the control treatment at 7.90 +/- 0.20, and the high pH treatment at 8.40 +/- 0.20. By design, the allowed error margin prevented pH values from crossing over or coming too close to the pH of another treatment. To keep the pH within this margin of error, the 40 mL of water was changed twice a day for two weeks. This process required the twice-daily creation of seawater solutions, pH manipulation, and solution distribution, with the objective of reducing pH variation as much as possible. The core idea of the experiment was to measure the change in the crabs' masses over time while they were subjected to pH 7.40, 7.90, and 8.40. Keeping the crabs alive through the experiment was critical to the final outcome; therefore, crabs were fed carefully and observed closely. Keeping the crabs alive was particularly challenging at first: in two instances the experiment was significantly set back when a majority of the crabs were killed. Crabs were first killed by food left in the water for several hours, which caused a sharp decline in dissolved oxygen. Crabs were killed a second time by exposure to tap water that contained chlorine. After learning from these experiences, the methods were refined and the guidelines for feeding and solution preparation were strictly defined. Crabs were fed Friskies SeaFood Sensations on days 0, 4, and 11 [16]; this was done by grinding up the meal and depositing a pinch into each bowl, where it was left for exactly one hour before the water was changed.
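As a rough illustration of the solution preparation just described, the sketch below estimates the salinity of the 40/60 seawater/well-water mixture and checks a measured pH against a treatment target. It is illustrative only: the full-strength seawater salinity of about 35 ppt and the assumption that well water contributes negligible salinity are mine, not values reported here.

```python
# Illustrative bookkeeping for the mixture and pH tolerances described above.
# Assumes full-strength seawater at ~35 ppt and negligible salts in the well water;
# these are assumptions for illustration, not values reported in the study.

SEAWATER_SALINITY_PPT = 35.0       # assumed open-ocean salinity
SEAWATER_FRACTION = 0.40           # 40% seawater / 60% well water
PH_TOLERANCE = 0.20                # +/- 0.20 pH units allowed per treatment
TARGETS = {"low": 7.40, "control": 7.90, "high": 8.40}

def mixture_salinity(fraction=SEAWATER_FRACTION, seawater_ppt=SEAWATER_SALINITY_PPT):
    """Approximate salinity of the seawater/well-water mixture."""
    return fraction * seawater_ppt

def within_tolerance(measured_ph, treatment):
    """True if a measured pH is still within the allowed margin for its treatment."""
    return abs(measured_ph - TARGETS[treatment]) <= PH_TOLERANCE

print(f"Mixture salinity: {mixture_salinity():.1f} ppt")   # ~14 ppt, close to the 15 ppt site value
print(within_tolerance(7.95, "low"))                       # False: the bowl has drifted out of range
```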


Before the experiment began, each crab was removed briefly from its bowl, dried thoroughly, and an initial blotted wet weight was taken to the one-thousandth of a gram using a weigh boat and balance. The crabs were then exposed to water of their respective pH values for two weeks, long enough that a significant physiological change should manifest itself [16]. After the time interval of the experiment, a final blotted wet weight was taken for each crab. The carapace length, or shell length from left to right tip, was recorded to the nearest hundredth of a millimeter, and the crabs were sexed based on the presence or absence of wide horizontal stripes on the abdomen [16]. After these observations, an image was taken of each crab in its fingerbowl using a cell phone camera, and then each crab was placed, alive, into an individual scintillation vial marked with a diamond scribe. These vials were topped with Parafilm and placed in a freezer overnight in order to humanely terminate the crabs. The next morning, the vials were removed and a hole was poked in every Parafilm seal. The sixty-seven vials were divided among four larger jars, which were sealed with an airtight cap and attached to the Virtis Sentry Condenser Vacuum [16]. The vacuum was left active overnight to remove any remaining moisture from the crab bodies so that a very accurate dry weight could be taken for each crab. The following morning, the vials were removed and measured immediately (after the Parafilm was removed) using the same balance as before. Following the measurement of the dry weights, the vials were placed on a glass plate and inserted into a Thermolyne oven pre-heated to 500 degrees Celsius [16]. The dried bodies of the crabs were cooked in this furnace for a ten-hour period, after which they were removed and a final ash weight, or "cooked weight," was measured. This weight was taken to represent the calcium content of each crab, including the exoskeleton but not the internal organs or other "soft" parts of the crab. Analysis was done by comparing treatment data graphically for change of wet weight, dry weight, ash weight, carapace length, and crab sex. Change of wet weight and the percentage of ash weight to dry weight were used to find the calcium content and any change in calcification observed in both individuals and groups. Mortality was tracked over the course of the experiment, and mortality by group was analyzed using a chi-square test to check for statistical significance.

Results and Illustrations

All treatments showed a net loss of wet weight, though no difference from treatment to treatment was detected. The 7.40 (most acidic) treatment lost 2 µg on average (SE: 1.74), the 7.90 treatment lost 4 µg (SE: 1.99), and the 8.40 treatment lost 8 µg (SE: 5.66), as shown in Figure 1.




In contrast to what was expected, there appears to have been a trend toward more weight lost, on average, in the treatments with higher pH. The high pH treatment, in fact, lost nearly twice as much weight on average as the control, which in turn lost twice as much as the low pH group. There was not a statistically significant difference in the change of weight among treatments, however. This can be attributed to the individual variation in how crabs respond to ocean acidification. Each treatment contained several outliers that grew or lost a substantial amount (more than 10 µg); on average, one crab per treatment changed by more than +/- 30 µg. Crabs likely vary in their responses to ocean acidification because of variation in the ratio of shell mass to surface area, which affects how much of the crab is exposed to the seawater at a given time. Furthermore, crabs also vary in their ability to tolerate external changes in pH and in their ability to compensate metabolically for the added costs. These individual variations help to explain the diversity of responses to acidification as well. Having detected no significant trends in the average change of wet weight for all crabs together, the next logical step was to separate the individuals that gained weight from those that lost weight. After separating the two groups, I observed that, among the gainers, crabs in the 7.40 treatment grew considerably more (average = +10.02 µg; SE = 5.08) than those in the 7.90 (control) treatment (average = +2.81 µg; SE = 0.47) (Figure 2). The 8.40 treatment also experienced more growth than the control, but less than the low pH treatment. Among the relatively few crabs that gained weight (N ranged from 5 to 9), there was a single statistical difference, at one standard error, between the low pH group and the control group (Figure 2).

Figure 1: Average change of wet weight (final minus initial measurement), in µg, over the two-week time interval of the experiment. Note that averages are organized by pH and that all average changes are negative. Error readings are to one standard error. (N=24 for 7.40; N=20 for 7.90; N=21 for 8.40.)

Figure 2: Change of wet weight for those crabs (across treatments) that had a positive net change of weight during the two-week period. At one standard error, there was a significant difference between the 7.40 and control (7.90) groups, indicating that crabs in the 7.40 group grew more than those in the control group. (N=5 for 7.40; N=9 for 7.90; N=6 for 8.40.)


In the other group of crabs, the individuals that lost weight, I found that the control treatment actually experienced the greatest decline (average = -8.04 µg; SE = 2.70), while the low pH treatment had a more modest decline (average = -5.47 µg; SE = 0.83) and the high pH treatment fell in between (average = -6.52 µg; SE = 7.57) (Figure 3). The fact that the control treatment had so many crabs lose weight after outlier removal, given the large decline and the control group's having the largest number of crabs that declined in wet weight (N=17 vs. N=12, N=13), suggests that the control pH was itself a metabolically stressful level for the crabs. It is also important to note that the total number of crabs that lost weight across treatments (N ranged from 12 to 18) was considerably larger than the number that gained weight (N ranged from 5 to 9), which may in part explain why all three groups experienced a negative average change of wet weight when all crabs were grouped together.

Figure 3: Average change of wet weight for those crabs (across treatments) that had a negative net change of weight during the two-week period. Note that the control treatment had the greatest average decline, while the low pH treatment had the smallest. Error readings are to one standard error. (N=18 for 7.40; N=12 for 7.90; N=13 for 8.40.)



Returning to the question of how, if at all, the Estuarine Mud Crab was affected by ocean acidification: these two separate analyses confirm that no change was detected, but they do provoke interesting questions about individual treatments and the general effects of pH. Analysis of the change of wet weight (Figure 1) indicated that no significant difference between treatments existed, despite the large surviving sample size (20+) per treatment. Ash weight as a percentage of dry weight (Figure 4) was then needed to determine whether there was any difference in calcium content (and therefore calcification rate) among treatments. Dry weight varied by only +/- 1 µg between groups, indicating very small differences among the weights of the crabs. Expressing the average ash weight of each treatment as a percentage of its dry weight gives the proportion of calcium in the remaining dry mass, and this value answers the question of whether varying the pH of the water has any effect on the calcium content and calcification rates of the crabs. Interpretation of these data shows that there was no significant difference among treatments in calcium content, as roughly revealed by ash weight as a percentage of dry weight (see Figure 4).
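The calcium-content metric just described is a simple ratio of two measured weights. The sketch below shows that bookkeeping for a single crab record; the field names and example weights are hypothetical placeholders, not measurements from this study.

```python
# Illustrative calculation of the response variables described above.
# The example record and its numbers are hypothetical, not data from the study.

def percent_ash_of_dry(ash_weight, dry_weight):
    """Ash weight as a percentage of dry weight (rough calcium-content proxy)."""
    return 100.0 * ash_weight / dry_weight

def change_in_wet_weight(initial_wet, final_wet):
    """Net change of blotted wet weight over the two-week exposure."""
    return final_wet - initial_wet

crab = {"initial_wet": 152.0, "final_wet": 148.0, "dry": 61.0, "ash": 24.0}

print(f"Change of wet weight: {change_in_wet_weight(crab['initial_wet'], crab['final_wet']):+.1f}")
print(f"Ash/dry (% calcium proxy): {percent_ash_of_dry(crab['ash'], crab['dry']):.1f}%")
```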

Figure 4: Average ash weight as a percentage of dry weight by pH treatment. Ash and dry weights were measured separately, and ash weight (µg) was then expressed as a percentage of dry weight, which gives the amount of calcium left in the dry body after exposure to the experimental waters. Error readings are to one standard error. (N=24 for 7.40; N=20 for 7.90; N=21 for 8.40.)

Crab survival rates by treatment in response to ocean acidification were another major focus of this experiment. Mortality was tracked throughout the experiment by treatment group, and included crabs weakened by molting, crabs that simply died, and crabs that went missing over the course of the experiment. The date, treatment, and number within the treatment were recorded along with the suspected cause of death, and are given in Table 1.


Actual mortality across all treatments remained low: one crab perished in the low pH treatment, three in the control treatment, and four in the high pH treatment, as shown in Table 1. A chi-square test was run on these counts; the chi-squared value of 1.75 was lower than the critical value of 5.99, indicating that a significant dissimilarity in mortality did not exist among treatments.
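A minimal sketch of this mortality comparison is shown below using SciPy. Taking the pooled mean mortality as the expected count for every treatment reproduces the reported statistic of 1.75, but whether that is exactly how the original expected values were chosen is my assumption, not something stated in the text.

```python
# Hedged reconstruction of the mortality chi-square test described above.
# Using the pooled mean (8/3 deaths per treatment) as the expected count reproduces
# the reported 1.75; the original choice of expected values is an assumption here.
from scipy.stats import chisquare, chi2

observed_deaths = [1, 3, 4]            # low pH (7.40), control (7.90), high pH (8.40)

result = chisquare(observed_deaths)    # default expected value = mean of the observed counts
critical = chi2.ppf(0.95, df=2)        # 5.99 for 2 degrees of freedom

print(f"chi-square = {result.statistic:.2f} (critical = {critical:.2f}, p = {result.pvalue:.2f})")
# chi-square = 1.75 < 5.99, so mortality does not differ significantly among treatments
```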

Table 1: Mortality by treatment and the timing of mortalities during the experiment. Day 10 was the date of death for multiple crabs, a likely result of the dissolved O2 restrictions that feeding causes. The 8.40 treatment had the highest number of mortalities and the 7.40 treatment the lowest, a surprising result. Most mortalities occurred in the second seven-day period of the experiment. All treatments had an initial N=25.

The control treatment had a total mortality of three crabs, the value against which the high and low pH mortalities were compared. In other words, according to the data, it would be expected under normal "control" conditions in the estuary that three crabs out of twenty-five would die over a period of two weeks, which is reasonable, if not a low estimate [16]. The data support the conclusion that no significant difference in mortality was detected from one treatment to the next, despite the non-significant trend of increasing mortality from low pH to high pH. The data do, however, indicate that nearly all of the crabs perished during the second half of the experiment, particularly on Day 10. A likely reason is that Day 10 was the second feeding day of the experiment, after Day 3. Feeding was a possible cause of death because dissolved oxygen levels can decrease significantly during the one-hour feeding period, when bacterial growth is high. This does not account for the crabs that perished after Day 10, nor is it likely that the mortalities on Day 10 were due solely to feeding. It is possible, then, that the mortality rate increased over the course of the experiment as crabs were exposed to deviating pH values for longer periods of time. This possibility, among others, is addressed below in the Discussion.

Conclusions and Discussion

The results of this experiment were both counterintuitive and not in agreement with the a priori hypothesis.



The hypothesis was that crabs kept in the most acidic conditions would end up with the greatest decline of average wet weight, the lowest calcium content or percent ash weight (which indicates a slower rate of calcification), and the lowest rate of survival. A statistically significant difference was not detected between the low pH treatment and the control treatment for any of these predictions, nor did the high pH data support any suggestion that the opposite may be true. After the experiment, additional reading of the literature was done to try to make sense of these results. What was found is that the lack of net change observed in my experiment over a two-week time interval is in agreement with current publications. These studies of ocean acidification's effect on crustaceans include the aforementioned studies on the Red King and Tanner Crabs [10], as well as the Blue Crab and American Lobster [14]. The literature indicates that the short- to medium-term effect of ocean acidification is either an increase in calcification or no net effect [7, 11, 14, 19]. Each major study was done over a significantly longer time interval than my own: 60 days [14] and 200 days [10]. Both studies found that with a moderate decline of pH (0.1-0.3 units), crabs experienced increased calcification and had relatively low mortality rates [10, 14]. After 100 days at 7.8 pH, Tanner Crabs, for example, had only a 20% mortality rate [10]. This rate of mortality was higher (40%) for a treatment kept at 7.5 pH over the same time period [10], supporting the idea that lower pH values result in higher rates of mortality. Even more intriguing is that at 100 days at 7.5 pH, the mortality rate of adult Red King Crabs was 100% [10]. The mortality rate increased from 40% to 100% due only to the length of time the crabs were kept there. Clearly, both pH and time affect mortality by species, as they might for calcification, assuming the time interval is long enough. For all of the crabs examined by Long et al. [10] and by Ries et al. [14], the mortality rate after just two weeks was less than 5% at all pH levels, which is on par with, if not lower than, the mortality I saw during my experiment. Why, then, did my experiment not show a similar trend in calcification? One clear explanation for the failure to detect a difference was the shorter time interval over which acidification was applied. Due to lab time constraints and setbacks along the way, two weeks was the maximum amount of time over which this experiment could be carried out. Two weeks, short as it may be, should still be enough time to detect a physiological change [16]. There are other possible explanations for why no difference was observed. To ensure that the target pH is maintained in each bowl at all times of day and night, I would ideally have used carbon dioxide aerators in this experiment, which mix precise amounts of carbon dioxide with seawater in several chambers to produce water at a target pH.

Water, using this apparatus, is supplied at all times, ensuring that pH is constant during the experiment. In my experiment, however, the pH was kept as stable as possible by changing out the water in all fingerbowls twice a day, a painstaking process. Keeping the pH within the allowed margin of error (+/- 0.2 pH units) was not always possible, because the pH rose extremely rapidly within just 2-3 hours of the water being added to each fingerbowl. On average, the pH of the low pH bowls would be about 0.5 units higher after 12 hours, which could mean that crabs kept at the low pH were in fact experiencing "control" values. The control group's pH also tended to rise about 0.3 units, to about the 8.20 range, while the 8.40 group stayed approximately constant. This tendency of the pH to rise over the course of the experiment was noted, but there was little that could be done about it. The question persists, however, as to why this phenomenon occurred. One explanation for the continuous rise is that Rithropanopeus harrisii, like other crustaceans of the estuarine community, excretes a large quantity of ammonia (up to 86% of what is excreted), a chemical that has a major effect on raising pH [2]. This constant excretion of ammonia by the crabs may have affected the pH of all treatments during the experiment, and might explain why the pH would increase as soon as a crab was added to a bowl, especially given the low volume of the bowls. It is interesting, however, that crabs are able to involuntarily regulate the pH of the water around them, simply by functioning as they do in their environment and producing ammonia. Though the experiment had several (uncontrollable) setbacks, it also had several strengths. I was able to work with three treatments of twenty-five crabs, a sample size large enough to detect any trends that might exist. The fact that crabs were kept in individual fingerbowls meant that it would be very hard for two crabs to be mixed up or misidentified, and this separation ensured that each crab was subjected to the same acidification as any other crab. No other study has looked at acidification's effect on crustaceans with a sample size of this magnitude, nor have any studies been centered on an estuarine-specific crab. It was observed that over a 14-day time interval, Rithropanopeus harrisii can survive and tolerate a variety of pH values that may be significantly different from the pH of its estuarine environment. The explanation could very well be that the time interval was too short or that the pH was too variable; or it might be that Rithropanopeus harrisii is more adaptable to ocean acidification than had previously been expected. The third explanation stems from the crab's relationship with its environment.



I gathered Estuarine Mud Crabs from an area along the shores of the Lower Neuse River at the foot of the Croatan National Forest. It is possible that the Estuarine Mud Crab can handle low pH waters because it has been continuously challenged with low pH as a result of pine and cypress tree run-off during the past ten thousand years that the crab has lived there [16]. There are, then, several plausible reasons why effects that might otherwise have been visible were hidden in this experiment.

Future Work

Much has been learned from this study about how ocean acidification affects individual organisms exposed to it. While demonstrating the Estuarine Mud Crab's counterintuitive resilience against even acutely acidic conditions (the 7.40 pH treatment), it was discovered that, on a larger scale, the effects of ocean acidification cannot be generalized. Many particular circumstances factor into the work at hand; in this field, a gradually developed adaptation to ocean acidification or a better mechanism for regulating pH outside the body can make all the difference for one species, a reason we must be very careful when looking at ocean acidification. What can be done, however, is to take a species-by-species approach to studying the effects of ocean acidification, all the while making sure that each experiment is done at pH values and over experimental periods consistent with other published works. The length of time clearly plays a large part in the results of an experiment, as evident with the Tanner crabs in Long et al. [10]. If there were a chance to redo this experiment, I would most certainly try to extend the time length, while performing a larger experiment with Estuarine Mud Crabs captured from multiple locations, not just the estuary where these crabs are abundant. It would be ideal to choose crabs from an estuary that had not been exposed to pine and cypress run-off for the last ten thousand years. Other areas that the crabs inhabit in lower abundance, such as the open ocean and freshwater environments much farther upstream, could be possible harvesting areas. Perhaps looking across the range from which these crabs originate, while still keeping them at various levels of acidification, might indicate the extent to which these animals can not only survive but flourish. Another idea is to compare multiple species of crab at the same pH value, which could very clearly illustrate the ability of different species of crab to react and adapt to more challenging pH conditions. In addition, I could compare crabs that have been exposed to pine and cypress runoff to those that have not (over a longer time interval, of course) to see the effect of lower pH conditions. Completing this experiment has left only more, and more interesting, questions to be answered.


Performing this experiment on another species of crab, or on another type of shellfish entirely, such as a clam or whelk, could yield entirely different results. If the pH values were much more extreme, could a significant change of wet weight then be detected in Rithropanopeus harrisii in just a two-week period? I wonder whether the theoretical mechanism crabs use to increase calcification at the epicuticle could be applied to other problems. What if the organic layer could be strengthened not only to help crabs fortify their shells but also to help them simply survive more acidic conditions? Finally, I wonder whether making the modifications described above could have altered the results of the experiment. In the future, I would like to revisit this experiment with those modifications, because there is more still to be learned about the Estuarine Mud Crab; until then, my curiosity rages on, my work unfinished.

Acknowledgments

Dr. Amy L. Sheck, Dean of Science and Instructor of Biology, North Carolina School of Science and Mathematics; instructor of the Research in Biology course, project mentor, and advisor in all respects.
Dr. Daniel Rittschof, Professor of Marine Science and Conservation, Duke University Marine Laboratory; director of the laboratory where the research was carried out. Assisted in the direction and instruction of trap construction and placement, provided possible explanations for unexpected results during and after the experiment, helped to place the experiment in context, facilitated understanding of typical physiological problems, and provided expertise on biological and ecological processes related to Rithropanopeus harrisii.
Beatriz Orihuela, Research Specialist, Duke University Marine Laboratory; instructed and supervised the use of specialized lab equipment.
Zhuying Zhang, volunteer and high school sophomore, West Carteret High School; served as lab and field research assistant and assisted in carrying out lab work.
Mei Zhang, post-doctorate student, Duke University Marine Laboratory; provided lab space and equipment for carrying out the experiment in the laboratory.
Julie Graves, Instructor of Mathematics, North Carolina School of Science and Mathematics; reviewed data from the experiment and provided guidance, support, and suggestions for further analysis.
Carol Tulevech, mother; provided advice, transportation, and guidance.
Steve Tulevech, father; provided advice, transportation, and guidance; assisted in recovering crab traps; provided some materials.
Peggy Tulevech, grandmother; provided advice, transportation, and some materials.



Grace Tulevech, sister; assisted in recovering crab traps.
My Research in Biology peers, for providing motivation and assistance at the times when it was needed most, and for all that we have learned together and from each other.
My Research in Biology seniors, who left us an amazing legacy to continue, for all of the lessons learned from you and through you; you have done so much to make this work possible.
The Glaxo Foundation Endowment to NCSSM, for supporting research like mine and for recognizing the importance of young people getting involved in research to make a difference.
Thank you all.

References

[1] Caldeira, K., and M.E. Wickett. 2003. Anthropogenic carbon and ocean pH. Nature 425:365.
[2] Chu-Chen, J., and P.G. Chia. 1996. Oxygen uptake and nitrogen excretion of juvenile Scylla serrata at different temperature and salinity levels. Journal of Crustacean Biology 16:437-442.
[3] Doney, S.C., V.J. Fabry, R.A. Feely, and J.A. Kleypas. 2009. Ocean acidification: the other CO2 problem. Annual Review of Marine Science 1:169-192.
[4] Dupont, S., O. Ortega-Martinez, and M. Thorndyke. 2010. Impact of near-future ocean acidification on echinoderms. Ecotoxicology 19:449-462.
[5] Fabry, V.J., B.A. Seibel, R.A. Feely, and J.C. Orr. 2008. Impacts of ocean acidification on marine fauna and ecosystem processes. ICES Journal of Marine Science 65:414-432.
[6] Feely, R.A., C.L. Sabine, K. Lee, W. Berelson, J. Kleypas, V.J. Fabry, and F.J. Millero. 2004. Impact of anthropogenic CO2 on the CaCO3 system in the oceans. Science 305:362-366.
[7] Findlay, H.S., M.A. Kendall, J.I. Spicer, and S. Widdicombe. 2009. Future high CO2 in the intertidal may compromise adult barnacle Semibalanus balanoides survival and embryonic development rate. Marine Ecology Progress Series 389:193-202.
[8] Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, and D. Xiaosu. 2001. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge. 944 pp.
[9] Kleypas, J.A., R.A. Feely, V.J. Fabry, C. Langdon, C.L. Sabine, and L.L. Robbins. 2006. Impacts of ocean acidification on coral reefs and other marine calcifiers: a guide for future research. Report of a workshop held 18-20 April 2005, St. Petersburg, FL, sponsored by NSF, NOAA, and the US Geological Survey. 88 pp.
[10] Long, W.C., K.M. Swiney, and R.J. Foy. 2013. Effects of ocean acidification on the embryos and larvae of the red king crab, Paralithodes camtschaticus. Marine Pollution Bulletin 69:38-47.

[11] McDonald, M.R., J.B. McClintock, C.D. Amsler, D. Rittschof, R.A. Angus, B. Orihuela, and K. Lutostanski. 2009. Effects of ocean acidification over the life history of the barnacle Amphibalanus amphitrite. Marine Ecology Progress Series 385:179-187.
[12] Orr, J.C., V.J. Fabry, O. Aumont, L. Bopp, and others. 2005. Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms. Nature 437:681-686.
[13] Pane, E.F., and J.P. Barry. 2007. Extracellular acid-base regulation during short-term hypercapnia is effective in a shallow-water crab, but ineffective in a deep-sea crab. Marine Ecology Progress Series 334:1-9.
[14] Ries, J.B. 2009. Marine calcifiers exhibit mixed responses to CO2-induced ocean acidification. Geology 37:1131-1134.
[15] Ries, J.B. 2011. A physicochemical framework for interpreting the biological calcification response to CO2-induced ocean acidification. Geochimica et Cosmochimica Acta 75:4053-4064.
[16] Rittschof, D. Assistance through personal communication (see Acknowledgments above).
[17] Sabine, C.L., R.A. Feely, N. Gruber, R.M. Key, K. Lee, and J.L. Bullister. 2004. The oceanic sink for anthropogenic CO2. Science 305:367-371.
[18] Turoboyski, K. 1973. Biology and ecology of the crab Rhitropanopeus harrisi ssp. tridentatus. Marine Biology 23:303-313.
[19] Whiteley, N.M. 2011. Physiological and ecological responses of crustaceans to ocean acidification. Marine Ecology Progress Series 430:257-271.





Biome Enrichment and Broad-Spectrum Immunization Enhance the Concentration and Range of Specificity of the Rattus Natural Antibody Repertoire

Daniel Ren

ABSTRACT

Humans living in industrialized society have alarmingly high rates of cancer, which may be partially attributed to impaired immune function. Biome depletion, the loss of microbial and eukaryotic species diversity from the human biome, is known to cause allergic, autoimmune, and inflammatory diseases, and is increasingly suspected as a cause of cancer. To investigate the role biome diversity plays in mediating immune responses to cancer, the natural antibody repertoire, which provides an innate, humoral mechanism of tumor surveillance, was evaluated in sera from rats with varying degrees of biome richness. Similarly, broad-spectrum immunization, a form of artificial biome enrichment, was examined as a possible stimulant of natural antibody enhancement. The results indicate that increasing biome diversity enhances the robustness, efficacy, and specificity of natural IgM and IgG binding to a comprehensive range of autologous antigens, while immunization has a limited, yet still profound, impact on antigen recognition. This provides evidence that immunogenic stimulation – through biome enrichment and immunization – improves humoral prevention of tumorigenesis. This study supports population-wide implementation of biome reconstitution (the deliberate reintroduction of species into the human biome) and broad-spectrum immunization, as a two-pronged approach to significantly reduce the pandemics of cancer and inflammatory-related diseases in post-industrial society.

1. Introduction

Cancer is a leading health problem in the world, and is a prominent issue on the forefront of current medical research [1]. Epidemiology and other disciplines of biology suggest that many types of cancer are related to chronic inflammation caused by certain aspects of an industrialized Western lifestyle, such as vitamin D deficiency, inflammation-inducing diets, lack of exercise, and prolonged psychological stress [2-4]. These are the same factors that induce the same type of chronic inflammation in allergic and autoimmune diseases [2-4]. However, another factor appears to be a leading cause of immune disease. Biome depletion, the loss of species diversity from the ecosystem within the human body, is perhaps the most important factor contributing to the pandemics of non-infectious inflammatory diseases, and may contribute to an increased cancer risk as well [5-8]. Part I (Research Question I) of this study explores the role biome depletion may play in mediating cancer development. Biome depletion is uniquely and especially present in industrialized culture and society [9]. The organisms lost include many microorganisms and eukaryotes that aid in important physiological functions. For example, helminths are multicellular intestinal worms that have existed and coevolved with vertebrate immune systems for over 100 million years, benefiting humans by secreting immune regulatory molecules that discourage allergic and autoimmune reactions [5]. This regulates and modulates the immune system and prevents overreaction to benign antigens, though it is not immunosuppressive and actually enhances normal immune capabilities [10].

These regulatory molecules may also have significant effects on other immune functions, such as regulating the complement system and establishing a balance between the innate and adaptive immune systems [8,11-14]. Other depleted organisms, such as saprophytic bacteria, regulate the immune system by secreting factors that modulate mood and other emotional behavior [8]. However, features of post-industrial culture, such as running water, toilets, and water treatment facilities, have essentially eradicated helminths and many beneficial microorganisms from the biome of people living in modern, industrialized societies, leading to an evolutionary quandary of immune imbalance [5,8,15,16]. Other practices, such as replacing breast milk with infant formulas and overusing broad-spectrum antibiotics, have also profoundly disrupted the ecological balance of these organisms in human biomes, leading to an evolutionary change in the ability of our immune systems to combat diseases [8,9,15-20]. This is alarming because the loss of helminths and other microbes in the human body reduces species diversity and disrupts many of the important immune interactions between humans and their symbionts [9]. One significant ramification of biome depletion is the development of greatly overreactive and hypersensitive immune systems, which has resulted in a staggering increase in the rates of allergic, autoimmune, and inflammatory diseases in post-industrial societies [5,9]. One such disease is autism, which may be caused by aberrant neuroinflammatory factors that disrupt normal nervous system development [6].



Other disorders, including asthma, food allergies, multiple sclerosis, schizophrenia, and type 1 diabetes, may also be associated with biome depletion [21-27]. While the dynamics of each of these diseases are quite different, they share an interesting commonality: the epidemic, widespread nature of these diseases exists only in industrialized countries and not in developing nations [9]. The shared underpinning of chronic inflammation between non-infectious immune diseases and cancer indicates that biome depletion may also be implicated in cancer development, through dysregulation of the immune system [28]. Much evidence connects developing tumor cells to improper regulation of the immune system, particularly the humoral immune system, and biome depletion may contribute to a weakened immune system that leads to an increased risk of cancer [28-31]. Part I of this study aims to determine whether biome depletion affects the natural antibody repertoire, a key part of the humoral immune system. Natural antibodies are present without sensitization to an antigen, and play an important role in preventing tumor development and growth [32]. Natural antibodies, many of which are polyreactive, confer an innate resistance to tumor severity and provide the host organism with complex tumor surveillance and regulation capabilities [32]. Natural antibodies recruit NK cells to tumorigenic sites, which decreases the frequency and malignancy of tumor cells [32,33]. Natural antibodies may also stimulate increased phagocytic activity in macrophages, leading to the engulfment of tumor cells [34]. Since the binding of syngeneic natural antibodies to tumors causes host cells to reject and destroy the tumor, natural antibody levels are inversely related to tumor frequency [32,35]. Thus, examining the natural antibody repertoire of an organism may provide an indication of the immune system's ability to prevent tumorigenesis, and may allow us to better understand how biome depletion may contribute to the development of cancer. Part II (Research Question II) of this report investigates a similar method of enhancing the natural antibody repertoire. Immunization, the stimulation of the immune system with immunogens (antigens capable of eliciting an immune response), has been known to spur the production of antibodies specific to the target immunogens, as well as cross-reactive antibodies, which allows the immune system to swiftly and effectively recognize these antigens upon subsequent encounters [36,37]. Considering this, immunization can be viewed as essentially an artificial and highly controlled method of biome enrichment, in which specific immunogenic agents are introduced to the immune system in order to increase the biodiversity of the human biome. In Part II of the present study, I investigate the potential role broad-spectrum immunization plays in mediating the development of the natural antibody repertoire.


Research Question I: Biome Diversity

The impact of biome richness on the natural IgM and IgG antibody repertoire will be tested by comparing the binding of natural antibodies found in "biome depleted," "biome enriched," and wild rat sera to various autologous antigens.

Research Question II: Immunization

The effect of immunization on the natural IgM and IgG antibody repertoire will be examined by comparing the binding of natural antibodies found in immunized and non-immunized laboratory rat sera to various autologous antigens.

2. Materials and Methods

Standard conditions for laboratory rats

Laboratory housing conditions were approved by the University Institutional Animal Care and Use Committee, and all animal husbandry was done prior to my experimentation. Male (n = 4) and female (n = 8) Sprague Dawley rats obtained from Harlan Sprague Dawley were housed in a standard hygienic laboratory environment. Rats were bred after acclimatization for 62 days, yielding 31 female F1 rats (male rats were not used in this experiment, to eliminate gender as a potential confounding variable). Twenty of the 31 females were immunized using the protocol described later in this report, and are referred to as the "biome depleted" rats in the biome diversity experiment and the "immunized" rats in the immunization experiment. The remaining 11 rats represent the "non-immunized" group in the immunization experiment, and were weighed, euthanized with CO2, and had blood drawn from the posterior vena cava. Blood was centrifuged in BD Vacutainer Plus blood collection tubes (BD Biosciences) at 2390 x g at 22 °C for 10 minutes, and the sera were aliquoted, flash frozen in liquid nitrogen, and stored at -80 °C.

Research Question I: Biome Diversity

"Biome enriched" conditions

In addition to the 12 laboratory rats described above, 12 genetically identical Sprague Dawley rats (4 male and 8 female, from Harlan Sprague Dawley) were placed into a biome enrichment facility prior to my experimentation. General housing conditions were identical to standard conditions. However, these F0 rats were exposed to the following two forms of potential biome enrichment. 1. Nine days before the arrival of the F0 rats, four female Rattus norvegicus wild rats were caught and housed in the same facility. Upon arrival, the F0 laboratory rats were acclimatized for 62 days before breeding, during which used bedding from the wild rat cages was introduced weekly into the F0 laboratory rat cages. During this time, the F0 laboratory rats were also exposed to bedding from rats housed in unregulated rodent facilities.



2. Three Hymenolepis diminuta cysticercoids (the larval stage of the rat tapeworm, a type of helminth) were fed to each of the F0 female rats 56 days before breeding. Colonization of Hymenolepis diminuta was confirmed using a modified version of the McMaster technique. Breeding of the F0 rats yielded 15 female F1 rats, which were immunized using the protocol described below and represent the "biome enriched" group of rats.

Wild rats

Wild rats (different from the rats from which bedding was taken for biome enrichment) were caught by my Principal Investigator using live traps set in various urban, suburban, and rural areas as part of a study unrelated to my project. Rats were then weighed and euthanized. Blood was drawn, and the sera were collected, aliquoted, flash frozen, and stored at -80 °C.

Research Question II: Immunization

Immunization with peanut extract, FITC-KLH, and DNP-Ficoll

Prior to my project, biome depleted (n = 20) and biome enriched (n = 15) F1 rats were immunized with peanut extract, fluorescein isothiocyanate coupled to keyhole limpet hemocyanin (FITC-KLH; Sigma-Aldrich), and DNP-Ficoll (2,4-dinitrophenyl conjugated to AminoEthylCarboxyMethyl-Ficoll; Biosearch Technologies) according to the following protocol. Peanut extract, FITC-KLH, and DNP-Ficoll were mixed at 2 mg/mL, 1 mg/mL, and 2 mg/mL, respectively, to create an immunization "cocktail" that would provide a broad range of immunogenic stimulation. Peanut extract was used to provoke an IgE response upon sensitization; FITC-KLH was prepared to provide a T-dependent antigen for immunization; and DNP-Ficoll, a high molecular weight polysaccharide, was utilized as a T-independent antigen. Once each rat was 43 days old, the cocktail was combined with Imject Alum (Pierce) in a 1:1 (v/v) ratio, mixed for 30 minutes at room temperature, and injected intraperitoneally at a dose of 5 mg/kg. On days 45, 47, 50, and 52, the peanut extract alone was injected at 2 mg/kg. On day 57, the rats were injected with the entire cocktail at 5 mg/kg, but without the alum conjugate. F1 rats were euthanized at 71 days old, and sera were collected from the blood.
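As a rough sanity check on the dosing described above, the sketch below converts the 5 mg/kg dose into an injection volume. The assumption that the dose refers to total immunogen in the cocktail (5 mg/mL before the 1:1 alum dilution, 2.5 mg/mL after) and the example rat mass are mine; neither is stated explicitly in the protocol.

```python
# Illustrative dosing arithmetic for the immunization protocol above.
# Assumes the 5 mg/kg dose refers to total immunogen (2 + 1 + 2 = 5 mg/mL before
# the 1:1 alum dilution); the rat body mass below is a made-up example.

COCKTAIL_MG_PER_ML = 2.0 + 1.0 + 2.0   # combined immunogen concentration
ALUM_DILUTION = 0.5                    # 1:1 (v/v) mixing with Imject Alum

def injection_volume_ml(rat_kg, dose_mg_per_kg=5.0,
                        conc_mg_per_ml=COCKTAIL_MG_PER_ML * ALUM_DILUTION):
    """Volume of alum-adjuvanted cocktail needed to deliver the stated dose."""
    return dose_mg_per_kg * rat_kg / conc_mg_per_ml

print(f"{injection_volume_ml(0.25):.2f} mL for a hypothetical 0.25 kg rat")   # 0.50 mL
```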


3. Experimentation

Preparation of rat organ extracts

To provide autologous antigens that would be recognized by natural antibodies in rat serum, rat tissue extracts were prepared. The following solid rat organs were harvested from male WKY rats euthanized for a purpose other than my project: 1 brain, 4 kidneys, 1 liver, 2 lungs (right), 9 prostates, and 7 spleens. The liver was perfused in saline (0.9% NaCl) to wash away excess blood. Similarly, the lungs were perfused via the pulmonary artery in Perfadex® (Vitrolife), a preservation solution containing the colloid dextran 40, to wash blood from the lung. The other organs were not perfused. Each organ was flash frozen and stored at -80 °C, and the individual organs of the same type were pooled together. Each pool of organs was then pulverized into a powder using a BioPulverizer (Biospec Products). The resulting tissue powders were thawed and then washed twice at 4 °C, except for the kidney powder, which was washed four times, and the spleen powder, which was washed five times, due to the high blood content in these two organs. Each wash was performed by suspending the tissue powder in 4.35 mL PBS/g tissue, mechanically agitating and mixing the suspension, centrifuging the suspension at 90 x g for approximately 1 minute, and discarding the supernatant. After washing was completed, additional PBS was added to bring each suspension to a total volume of 5.56 mL/g of tissue powder. Each suspension was homogenized individually using a homogenizer (OMNI GLH), with two pulses of approximately 45 seconds each, to lyse cell membranes and expose intracellular antigens. Each homogenized suspension was spun once at 9,710 x g for 30 minutes at 4 °C, and the supernatant was collected and mixed with 10% glycerol for storage at -80 °C. A DC Protein Assay Kit (Bio-Rad Laboratories) was used to calculate the total protein concentration present in each organ extract, using the microplate assay protocol (Table 1) [38].
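The wash and resuspension volumes above scale with tissue mass; the small helper below simply illustrates that bookkeeping. The example tissue mass is hypothetical, and only the per-gram volumes come from the protocol text.

```python
# Illustrative helper for the per-gram PBS volumes quoted in the protocol above.
# The example tissue mass is hypothetical; only the mL/g figures come from the text.

WASH_ML_PER_G = 4.35              # PBS used for each wash, per gram of tissue powder
FINAL_SUSPENSION_ML_PER_G = 5.56  # total suspension volume after washing, per gram

def wash_volume_ml(tissue_g, washes):
    """Total PBS needed for all washes of one tissue pool."""
    return WASH_ML_PER_G * tissue_g * washes

def final_volume_ml(tissue_g):
    """Target suspension volume before homogenization."""
    return FINAL_SUSPENSION_ML_PER_G * tissue_g

# Hypothetical 12 g spleen pool, washed five times as described above
print(f"Wash PBS: {wash_volume_ml(12.0, 5):.1f} mL")        # 261.0 mL
print(f"Final suspension: {final_volume_ml(12.0):.1f} mL")  # 66.7 mL
```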

Table 1. Total Protein Concentrations of Organ Extracts. Lung, prostate, and spleen organ extracts were loaded for gel electrophoresis at the listed concentrations. Brain, kidney, and liver were diluted in 1x phosphate buffered saline (PBS) and loaded at respective concentrations of 5.53, 6.48, and 6.22 mg/mL. Volume 4 | 2014-2015 | 23



Binding of immunoglobulins to antigens as determined by immunoblotting

Lung, prostate, and spleen antigen mixtures for immunoblotting were prepared by mixing 280 µL organ extract, 108 µL LDS sample buffer (4x) (Novex NuPAGE®, Life Technologies), and 43 µL sample reducing agent (10x) (Novex NuPAGE®). Brain, kidney, and liver antigen mixtures were prepared similarly, except that the organ extracts were diluted 1:1.3, 1:2, and 1:3, respectively, in 1x phosphate buffered saline (PBS) before being mixed. Each mixture was vortexed, boiled for 7 minutes at 100 °C, vortexed again, and centrifuged for 7 minutes at 15,996 x g. 200 µL of each antigen mixture was loaded into the large well of each of two 4 to 12% acrylamide gradient preparation gels (Novex NuPAGE® 4-12% Bis-Tris ZOOM™ Gel), and 5 µL of a molecular weight standard (PageRuler Plus) was loaded into the standard well of each gel. Proteins were then separated by electrophoresis using SDS-PAGE. Brain and liver extracts were run at 45 V for 4 hr 50 min; kidney extract was run at 55 V for 4 hr; lung and prostate extracts were run at 65 V for 3 hr 15 min; and spleen extract was run at 70 V for 3 hr. Antigens were transferred to PVDF membranes for 7 minutes at 20 V using an iBlot™ gel transfer device (Ethrog Biotechnologies Ltd., Invitrogen). PVDF membranes were blocked for 70 minutes at room temperature using 1.0% bovine serum albumin, 0.1% Tween 20, and 1x sodium azide in Tris buffered saline (blocking buffer). Each membrane was then cut into 16 strips (excluding the standard strip), each 4 mm wide. 33 strips were incubated overnight at 4 °C with specific rat sera diluted 1/400 in blocking buffer, while 1 strip was incubated overnight in blocking buffer to act as a control for anti-IgG and anti-IgM conjugate binding to organ-derived antigens. Rat sera were selected randomly from rats weighing over 250 grams. The experimentation for both the biome diversity and immunization studies was limited to 33 total rats (8 wild, 8 biome enriched, 8 biome depleted/immunized, and 7 non-immunized) due to size constraints of the gel. After overnight incubation in rat sera, blot strips were washed 3 x 10 minutes with Tris buffered saline. Strips were then incubated for 1 hour at room temperature either in alkaline phosphatase-conjugated, affinity-purified goat antibody specific for the Fc region of rat IgG (Sigma-Aldrich), diluted 1/1000 in blocking buffer, to detect natural IgG antibody binding, or in alkaline phosphatase-conjugated, affinity-purified goat antibody specific for the rat μ-chain (Sigma-Aldrich), diluted 1/1000 in blocking buffer, to detect natural IgM binding. Strips were then washed 3 x 10 minutes with Tris buffered saline and developed with 1-Step™ NBT-BCIP (nitro blue tetrazolium and 5-bromo-4-chloro-3-indolyl-phosphate; Thermo Scientific). Strips that had been incubated in goat anti-rat IgG were developed for 28 minutes, with fresh developer added after 14 minutes, and strips that had been incubated in goat anti-rat IgM were developed for 7 minutes.

Finally, all strips were washed with distilled water twice for 6 minutes each and air dried. Membranes were analyzed with Quantity One software v. 4.6.6 (Bio-Rad Laboratories) to quantify the amount of natural antibody binding to antigens on each strip. Table 2 shows the band detection settings used for each experiment.

Table 2. Band Detection Settings. Membranes were analyzed using the lane background subtraction, sensitivity, and noise filter settings listed to the left, to quantify the range and intensity of natural antibody binding to antigens.

After all bands had been quantified, the amount of binding for each band on the control strip was subtracted from the corresponding bands on all the other strips.

Statistical analyses

Results from biome depleted, biome enriched, and wild rats were compared using a 1-way ANOVA, followed by Tukey and Fisher's LSD post-hoc tests. Results from non-immunized and immunized rats were compared using a t-test. The distribution of bands relative to band size was also analyzed for each group of rats, using a Java program I designed.
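A minimal sketch of these group comparisons is shown below, using SciPy's one-way ANOVA and t-test on hypothetical per-rat total-binding values; the numbers are placeholders rather than the study's data, and the Tukey and Fisher's LSD post-hoc steps are omitted for brevity.

```python
# Sketch of the group comparisons described above, on hypothetical placeholder data.
# Values are illustrative only; the study's real per-rat totals are not reproduced here.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Hypothetical total binding (intensity x area) per rat, one array per group
depleted = np.array([0.8, 1.1, 0.9, 1.2, 1.0, 0.9, 1.1, 1.0])
enriched = np.array([1.6, 1.9, 1.7, 1.5, 1.8, 1.6, 1.7, 1.9])
wild     = np.array([2.1, 2.4, 2.0, 2.3, 2.2, 2.5, 2.1, 2.2])

# Normalize group means to the mean biome depleted total binding, as in Tables 3-5
norm = depleted.mean()
for name, group in [("depleted", depleted), ("enriched", enriched), ("wild", wild)]:
    print(f"{name}: normalized mean binding = {group.mean() / norm:.2f}")

# 1-way ANOVA across the three biome groups (post-hoc comparisons omitted here)
f_stat, p_anova = f_oneway(depleted, enriched, wild)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

# Two-tailed, homoscedastic t-test (the test used for the immunization comparison),
# applied here to two of the placeholder groups purely for illustration
t_stat, p_t = ttest_ind(depleted, enriched, equal_var=True)
print(f"t-test: t = {t_stat:.1f}, p = {p_t:.2g}")
```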



4. Results and Discussion

4.1 Research Question I: Biome Diversity

The repertoire of natural IgM and IgG antibodies recognizing autologous antigens from the various organ extracts was evaluated by immunoblotting, and the results are shown in Tables 3 and 4. The data indicate that increasing biome diversity dramatically increases the robustness of the natural antibody repertoire, enhancing binding to antigens present in all organ extracts, though to varying degrees. On average, the total binding of IgM was increased by about 71% in biome enriched compared to biome depleted rats, and wild animals displayed about a 122% increase compared to biome depleted rats. Binding of IgM was also characterized by a moderate increase in the number of bands recognized (an average of 17% and 27% for biome enriched and wild rats, respectively, compared to biome depleted rats), but it is possible that, given the high density of bands, the presence of overlapping bands caused an underestimation of the number of bands recognized (see Figure 1A for an example). Natural IgG antibody binding showed similar increases. Nearly all of the differences for IgM binding, and all of the differences for IgG binding, were statistically significant. These results indicate that increasing biome diversity increases both the total concentration of natural antibodies present in rat sera and the range of specificity of the repertoire. Animals with depleted biomes have a limited natural antibody repertoire with diminished tumor surveillance capabilities, leading to potentially greatly impaired abilities to mount humoral responses against tumor-specific antigens and avert tumor formation. On the other hand, rats with greater species diversity within their internal environments are likely at a more balanced ecological and evolutionary equilibrium, with plastic host-symbiont relationships that modulate and regulate important immune functions, resulting in better-equipped immune systems that are less hypersensitive and more effective in protecting against disease and tumor growth. By enhancing the natural antibody repertoire, increasing biome richness may alleviate the problems associated with the evolutionary quandary of immune imbalance and improve the humoral immune system's ability to recognize and respond to particular antigens, thus preventing tumorigenesis and cancer development.


Table 3. Binding of natural IgM in the serum of biome depleted, biome enriched, and wild rats. Two parameters were evaluated using a 1-way ANOVA when comparing the three groups: 1) the total amount of binding between natural antibodies and organ antigens (intensity x surface area) and 2) the number of antigens recognized by the natural antibody repertoire, quantified by the number of unique bands. Total binding was normalized to the mean biome depleted total binding. Means are shown ± standard error. For ANOVA significance, the asterisks designate: * (p < 0.05), ** (p < 0.005), *** (p < 0.0005), and **** (p < 0.0001). For multiple comparisons, “a,” “b,” and “c” indicate significant differences between biome depleted and biome enriched rats, biome depleted and wild rats, and biome enriched and wild rats, respectively. Bolded letters indicate statistical significance with both Tukey and Fisher’s LSD post-hoc tests, while non-bolded letters indicate significance with only Fisher’s LSD post-hoc test.

Table 4. Binding of natural IgG in the serum of biome depleted, biome enriched, and wild rats. The same two parameters were analyzed using a 1-way ANOVA as in Table 3. Total binding was normalized to the mean biome depleted total binding. Means are shown ± standard error. ANOVA significance asterisks and multiple comparisons letters have the same designations as in Table 3.

Figure 1 illustrates natural IgM binding to liver-derived antigens as an example of the immunoblotting analysis. The distribution analysis shows heightened IgM binding to both low and high intensity bands when comparing biome enriched and wild rats to biome depleted rats, though there is a less noticeable difference between biome enriched and wild rats (1B). Other organs showed similar patterns, for both IgM and IgG binding.

Figure 1 (below). Example of immunoblotting and analysis: Liver IgM.

Figure 1A. Rat liver extract proteins were probed by immunoblotting as described in the Materials and Methods. W1 through W8 were incubated in rat sera from wild rats; E1 through E8 were in sera from biome enriched rats; and D1 through D8 were in sera from biome depleted rats. A control strip with no serum is labeled “C” and indicates reactivity of the anti-IgM conjugate with liver-derived antigens.


Figure 1B. The distribution of bands as a function of band size is shown. Noise was reduced using a rectangular smoothing algorithm that I developed.

Figure 1C. The total reactivity of natural IgM from each serum sample is shown. In this and all future graphs, bars indicate mean and standard error. (D) The number of antigens recognized by natural IgM in each rat sera is shown.

4.2 Research Question II: Immunization

Binding of natural IgM and IgG antibodies found in rat sera from non-immunized and immunized rats, examined by immunoblotting, is shown in Tables 5 and 6. Interestingly, the data suggest that immunization differentially impacts the natural IgM and IgG repertoire, extensively enhancing the natural IgM repertoire, but significantly affecting IgG binding to only brain and liver antigens. IgM binding, on average, is characterized by a 52% increase in total binding and a 28% increase in the number of bands recognized with immunization. Binding of IgG also tends to increase with immunization, for both parameters, but the high variation between individual rats results in insignificant comparisons. Perhaps immunization promotes an immune response that boosts the concentration and broadens the scope of the natural IgM repertoire, but has a very limited effect on the antigen recognition capabilities of natural IgG antibodies. Still, these findings suggest that the immune stimulation provided by immunization benefits the balance and vigor of the humoral immune system, increasing the efficacy of natural antibody recognition of particular antigens. Overall, broad-spectrum immunization, as a controlled form of artificial biome enrichment, may be an important proactive measure to increase the immune surveillance and tumor recognition capabilities of the natural antibody repertoire.

Table 5. Binding of natural IgM in the serum of non-immunized and immunized rats. The same two parameters were analyzed as those in Tables 3 and 4, except with a t-test (2-tailed, homoscedastic) instead of a 1-way ANOVA, though the asterisks indicate the same levels of significance. Total binding was normalized to the mean non-immunized total binding. Means are shown ± standard error.

Table 6. Binding of natural IgG in the serum of non-immunized and immunized rats. The same parameters were evaluated when analyzing natural IgG antibodies as with natural IgM antibodies (Table 5). Total binding was normalized to the mean non-immunized total binding. The asterisks indicate the same level of significance as in Tables 3 and 4. Means are shown ± standard error.

Figure 2 illustrates natural IgM binding to brain-derived antigens as an example of the immunoblotting analysis. The distribution analysis shows that compared to non-immunized rats, immunized rats have similar IgM binding to low intensity bands, but higher reactivity to medium and high intensity bands (2B). Other organs showed similar patterns for IgM binding, while the distribution of IgG binding was comparable between non-immunized and immunized rats.
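The rectangular smoothing used for the band-distribution plots (Figures 1B and 2B) is not specified beyond its name. Below is a minimal sketch of one plausible reading, a rectangular (boxcar, moving-average) smoother applied to a binned band-intensity distribution; the histogram values and window width are illustrative, not the values used in the paper.

```python
# A hedged sketch of a rectangular (boxcar) smoother; data and window are illustrative.
import numpy as np

def rectangular_smooth(values, window=5):
    """Replace each point with the unweighted mean of a centered window."""
    values = np.asarray(values, dtype=float)
    half = window // 2
    smoothed = np.empty_like(values)
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)  # window is truncated at the edges
        smoothed[i] = values[lo:hi].mean()
    return smoothed

# Example: a noisy band-intensity distribution binned by band size.
raw_distribution = np.array([2, 5, 3, 9, 7, 12, 10, 15, 11, 8, 6, 4])
print(rectangular_smooth(raw_distribution, window=3))
```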




Figure 2. Example of immunoblotting and analysis: Brain IgM. (A), (B), (C), and (D) show the same information as in Figure 1, except with brain extract instead of liver extract, and different groups of rat sera. In (A), I1 through I8 were incubated in sera from immunized lab rats, and N1 through N7 were in sera from non-immunized lab rats. C is the control strip.

5. Conclusions

The results confirm the proposal by Bilbo that biome enrichment, or reconstitution, may be a promising solution to the problems caused by biome depletion [5]. Biome reconstitution is the deliberate introduction of species back into the human biome, and may act as a preventative measure against disease by reestablishing the species balance in the human biome and thwarting the negative implications of an improperly regulated and hyperactive immune system [5]. Thus, biome


reconstitution is different from specific disease treatments, which attempt to alleviate the problems caused by certain diseases. Instead, biome reconstitution is designed to be an important part of a healthy lifestyle, functioning to boost the immune system, enhance tumor surveillance, and, on a population level, drastically diminish the pandemics of cancer as well as allergic, autoimmune, and inflammatory diseases in post-industrial society. When considering the clinical implementation of biome reconstitution, both microbiota and eukaryotic organisms must be taken into account [9]. In terms of microorganisms, Rook suggests replenishing the microbiome with microbial transplants, which have been shown to be highly effective in combating certain diseases [8,39]. To reconstitute eukaryotes into the human biome, a controlled approach to domesticate and colonize certain helminths, such as the rat tapeworm, Hymenolepis diminuta (which was used in this study), has been proposed [9]. Part I of this study provides impetus for these clinical applications by demonstrating the heightened efficacy of natural antibody binding to autologous antigens as a result of increasing biome diversity. This points strongly toward population-wide biome enrichment as a critical step in restoring missing factors from our internal environment, reestablishing our immune system’s evolutionary adaptedness, stimulating our humoral immune response against tumorigenesis and chronic inflammation, and dramatically improving public health. Similar to biome reconstitution, broad-spectrum immunization presents itself as a practical method of improving the robustness of the natural antibody repertoire. While current methods of immunization focus on vaccination against specific diseases [40], a broad-spectrum approach using a diverse array of immunogens, utilized population-wide, could be a potent solution to markedly reduce rates of cancer and other diseases related to chronic inflammation. Thus, this study provides evidence supporting a combination of biome reconstitution and immunization to train the immune system from two complementary angles – biome reconstitution as a natural approach and immunization as a controlled means of immune stimulation – that, when used simultaneously, may provide extensive enrichment of the natural antibody repertoire.

Future work

In the future, I would like to use enzyme-linked immunosorbent assays (ELISAs) to test binding of natural antibodies present in different rat sera to tumor-specific rat antigens, such as p53, NG2, MAGE-1, MAGE-3, PSMA, and MUC1 [41-45], which would need to be custom-synthesized since purified forms of these rat proteins are not commercially available. This would more directly relate biome diversity and immunization with specific tumor recognition and surveillance capabilities of natural



antibodies, compared to the broad and comprehensive approach of this study. ELISAs could also be used to confirm the results presented in this report. Further, once specific interactions between natural antibodies and tumor antigens are identified, their interactions can be studied experimentally using X-ray crystallography, NMR spectroscopy, and fluorescence resonance energy transfer (FRET), and modeled computationally using molecular docking software such as MolDock or FLIPDock. Another important future direction is moving towards clinical studies in humans, with the goal of optimizing the parameters of immune stimulation in order to achieve the highest benefit-to-risk ratio. Essential features to be considered include the species and dosage of helminths and microorganisms used in reconstituting the human biome, and the specific array and concentration of immunogens utilized in vaccinations. Other factors, such as the subject’s age, race, gender, and contraindications (such as immunocompromised patients, e.g. after an organ transplantation), must also be taken into account when artificially enhancing the biome.

6. Acknowledgements

I gratefully thank William Parker for allowing me to work in his laboratory, working with me to come up with my project idea, and mentoring me throughout my project. I also thank Zoie Holzknecht for training me in laboratory procedures and experimental methods. Thanks to Myra Halpin and Michael Bruno for providing me guidance and support throughout my project.

7. References

[1] Ma, X., et al. (2006) Global burden of cancer. Yale J Biol Med, 79, 85-94.
[2] Lappe, J.M., et al. (2007) Vitamin D and calcium supplementation reduces cancer risk: results of a randomized trial. Am J Clin Nutr, 85, 1586-91.
[3] Donaldson, M. (2004) Nutrition and cancer: A review of the evidence for an anti-cancer diet. Nutrition Journal, 3, 19.
[4] Reiche, E.M., et al. (2004) Stress, depression, the immune system, and cancer. Lancet Oncology, 5, 617-25.
[5] Bilbo, S.D., et al. (2011) Reconstitution of the human biome as the most reasonable solution for epidemics of allergic and autoimmune diseases. Med Hypotheses, 77, 494-504.
[6] Becker, K.G. (2007) Autism, asthma, inflammation, and the hygiene hypothesis. Med Hypotheses, 69, 731-40.
[7] Parker, W., et al. (2012) A prescription for clinical immunology: the pills are available and ready for testing. A review. Curr Med Res Opin, 28, 1193-202.
[8] Rook, G.A.W. (2007) The hygiene hypothesis and the increasing prevalence of chronic inflammatory disorders. Transactions of the Royal Society of Tropical Medicine and Hygiene, 101, 1072-1074.
[9] Parker, W., et al. (2013) Evolutionary biology and anthropology suggest biome reconstitution as a necessary approach toward dealing with immune disorders. Evolution, Medicine, and Public Health, 2013, 89-103.
[10] Pi, C., et al. (2014) Biome enrichment enhances the humoral responses against T-independent antigens. Manuscript submitted for publication.
[11] Lin, S.S., et al. (2012) Immune characterization of wild-caught Rattus norvegicus suggests diversity of immune activity in biome-normal environments. Journal of Evolutionary Medicine, 1, 1-16.
[12] Trama, A.M., et al. (2012) Lymphocyte phenotypes in wild-caught rats suggest potential mechanisms underlying increased immune sensitivity in post-industrial environments. Cellular & Molecular Immunology, 9, 163-74.
[13] Maizels, R.M., et al. (2003) Immune regulation by helminth parasites: cellular and molecular mechanisms. Nat Rev Immunol, 3, 733-44.
[14] Daniłowicz-Luebert, E., et al. (2011) Modulation of specific and allergy-related immune responses by helminths. J Biomed Biotechnol, 2011, 821578.
[15] Rook, G.A.W. (2009) Review series on helminths, immune modulation and the hygiene hypothesis: the broader implications of the hygiene hypothesis. Immunology, 126, 3-11.
[16] Rook, G.A., et al. (2005) Microbes, immunoregulation, and the gut. Gut, 54, 317-20.
[17] Guaraldi, F., et al. (2012) Effect of breast and formula feeding on gut microbiota shaping in newborns. Front. Cell. Inf. Microbio., 2, doi: 10.3389/fcimb.2012.00094.
[18] Wold, A.E., et al. (2000) Breast feeding and the intestinal microflora of the infant--implications for protection against infectious diseases. Advances in Experimental Medicine & Biology, 478, 77-93.
[19] Yoshioka, H., et al. (1983) Development and differences of intestinal flora in the neonatal period in breast-fed and bottle-fed infants. Pediatrics, 72, 317-21.
[20] Mackie, R.I., et al. (1999) Developmental microbial ecology of the neonatal gastrointestinal tract. American Journal of Clinical Nutrition, 69, 1035S-1045S.
[21] Bickler, S.W., et al. (2008) Western diseases: current concepts and implications for pediatric surgery research and practice. Pediatric Surgery International, 24, 251-5.
[22] Genuis, S.J. (2010) Sensitivity-related illness: the escalating pandemic of allergy, food intolerance and chemical sensitivity. Science of the Total Environment, 408, 6047-61.
[23] Correale, J., et al. (2008) Helminth infections associated with multiple sclerosis induce regulatory B cells. Annals of Neurology, 64, 187-99.
[24] Gale, E.A. (2002) A missing link in the hygiene hypothesis? Diabetologia, 45, 588-94.
[25] Saunders, K.A., et al. (2007) Inhibition of autoimmune type 1 diabetes by gastrointestinal helminth
infection. Infection & Immunity, 75, 397-407.
[26] Becker, K.G., et al. (2010) Similarities in features of autism and asthma and a possible link to acetaminophen use. Medical Hypotheses, 74, 7-11.
[27] Bilbo, S.D., et al. (2012) Is autism a member of a family of diseases resulting from genetic/cultural mismatches? Implications for treatment and prevention. Autism Res Treat, 2012, 910946.
[28] Shadman, M., et al. (2013) Associations between allergies and risk of hematologic malignancies: results from the VITamins and lifestyle cohort study. Am J Hematol., 88, 1050-4.
[29] Vollmers, H.P., et al. (2007) Natural antibodies and cancer. J Autoimmun., 29, 295-302.
[30] Schwartz-Albiez, R., et al. (2009) Natural antibodies, intravenous immunoglobulin and their role in autoimmunity, cancer and inflammation. Clin Exp Immunol., 158, 43-50.
[31] Swann, J.B., et al. (2007) Immune surveillance of tumors. The Journal of Clinical Investigation, 117, 1137-1146.
[32] Chow, D.A., et al. (1981) Murine natural antitumor antibodies. II. The contribution of natural antibodies to tumor surveillance. Int J Cancer, 27, 459-69.
[33] Kasai, M., et al. (1979) Direct evidence that natural killer cells in nonimmune spleen cell populations prevent tumor growth in vivo. J Exp Med, 149, 1260-4.
[34] Shirai, T., et al. (1972) Natural cytotoxic autoantibody against thymocytes in NZB mice. Clin Exp Immunol, 12, 133-52.
[35] Menard, S., et al. (1977) Natural anti-tumor serum reactivity in BALB/c mice. I. Characterization and interference with tumor growth. Int J Cancer, 19, 267-74.
[36] Tang, D.-c., et al. (1992) Genetic immunization is a simple method for eliciting an immune response. Nature, 356, 152-154.
[37] Katz, J., et al. (2009) Serum cross-reactive antibody response to a novel influenza A (H1N1) virus after vaccination with seasonal influenza vaccine. Morbidity and Mortality Weekly Report, 58, 521-524.
[38] Bio-Rad Protein Assay (1979) Instruction Manual. Bio-Rad Laboratories, Hercules, CA.
[39] Kelly, C.P. (2013) Fecal Microbiota Transplantation — An Old Therapy Comes of Age. New England Journal of Medicine, 368, 474-475.
[40] Centers for Disease Control and Prevention. (2011) Recommended adult immunization schedule-United States, 2011. MMWR. Morbidity and Mortality Weekly Report, 60, 1.
[41] Léger, O., et al. (1995) Primary structure of the variable regions encoding antibody to NG2, a tumour-specific antigen on the rat chondrosarcoma HSN. Correlation of idiotypic specificities with amino acid sequences. Molecular Immunology, 32, 697-709.
[42] Deshpande, G., et al. (1990) Rearrangement and overexpression of the gene coding for tumor antigen p53 in a rat histiocytoma AK-5. FEBS Letters, 271, 199-202.


[43] Teama, S.H., et al. (2013) Multiple molecular markers MAGE-1, MAGE-3 and AFP mRNAs expression nested PCR assay for sensitive and specific detection of circulating hepatoma cells: Enhanced detection of hepatocellular carcinoma. Egyptian Journal of Medical Human Genetics, 14, 21-28.
[44] Chang, S.S., et al. (2002) The clinical role of prostate-specific membrane antigen (PSMA). Urologic Oncology: Seminars and Original Investigations, 7, 7-12.
[45] Raina, D., et al. (2004) The MUC1 Oncoprotein Activates the Anti-apoptotic Phosphoinositide 3-Kinase/Akt and Bcl-xL Pathways in Rat 3Y1 Fibroblasts. Journal of Biological Chemistry, 279, 20607-20612.




Modeling How Intervention Can Limit the Spread of Ebola Virus

Treena Chaudhuri

ABSTRACT

In this study, the spread of Ebola in Sierra Leone is modeled to show the impact of different interventions. A computational model adapted from Legrand et al. shows how the virus spreads and how various methods of intervention can affect the epidemic. Although the data used apply to the current outbreak in Sierra Leone, the model can be adapted to fit other areas. The model stems from the simple SIR algorithm (susceptible, infected, recovered/removed), but encompasses more variables and possibilities, resulting in the SEIHFR model (susceptible, exposed, infectious, hospitalized, funeral, recovered/removed). The multiple methods of intervention were shown to greatly reduce exposure, infection, and death from the Ebola virus. The results show the importance of intervention in West Africa, where the Ebola epidemic is concentrated, to limit the spread of the disease.

1. Introduction

Very little is known about Ebola, making it difficult to combat the recent outbreaks. The disease, previously known as Ebola hemorrhagic fever, was first discovered in 1976. Since then there have been fifteen outbreaks, the largest of which started in 2014 with a total of 20,206 cases and 7,905 deaths worldwide as of December 28, 2014 [16]. This recent outbreak had over thirty times more cases than the second largest outbreak in 1976. Until recently, epidemics were primarily limited to rural, forested areas. Unlike previous outbreaks, a large share of cases in the current outbreak has occurred in densely populated cities, increasing the risk of infection. Early intervention has the potential to limit the spread of Ebola and solve the problem at its root.

The current outbreak in West Africa has been concentrated mostly in Sierra Leone, Liberia, and Guinea. The cases, however, are not spread equally throughout these countries. The regions of Bombali, Kailahun, Kenema, and Port Loko have had the most cases in Sierra Leone. In Liberia, most cases of Ebola have occurred in Lofa, Margibi, and Montserrado. Guinea has experienced the most cases in Gueckedou and Macenta, with 1651 deaths in the region of Macenta alone as of January 7, 2015 [7]. Although Ebola has been mostly contained in West Africa, there have been at least twenty-four cases in Europe and the United States. The infected primarily include health care workers.

Ebola Virus Disease (EVD), a rare disease with a high fatality rate, is caused by a virus with five known species, four of which affect humans; the other can only cause the disease in nonhuman primates. First discovered near the Ebola River in what is now the Democratic Republic of the Congo, Ebola outbreaks have been appearing around parts of Africa, most recently in West Africa. The natural reservoir host of Ebola is still unknown; however, researchers believe that bats are the most likely reservoir. The virus is transmitted through direct contact with bodily fluids or skin of those with Ebola or those who have died of the disease [6]. Symptoms can appear two to twenty-one days after exposure, although generally after about eight to ten. At first, symptoms are flu-like and mild, often including headaches, fevers, aches, diarrhea, and vomiting. As the disease progresses, symptoms become severe, causing people to hemorrhage. Patients may “vomit blood or pass it in urine, or bleed under the skin or from their eyes or mouths” [17]. Eventually, blood pressure plummets, causing the heart, kidneys, and liver to fail [17].

Figure 1: Deaths by Districts in Sierra Leone [5]

Only two out of ten cases in the United States ended in death; one was a visitor and the other, a doctor [17].

2. Computational Approach

2.1 Data

This model analyzes data from the first 9 months of the recent Ebola outbreak in Sierra Leone. The epidemic takes place throughout West Africa, but this model focuses on Sierra Leone, which has a population of 6,384,066 [19], because it has had the most cases. A total of 9,973 cases have been identified (confirmed, probable, and suspected) as of January 7, 2015 [18]. The model was fitted to the most up-to-date information available from the World Health Organization (WHO) and the Government of Sierra Leone’s Ministry of Health and Sanitation.

2.2 The Model

The model used was adapted from Legrand et al. [2], who modeled past Ebola outbreaks in the Democratic Republic of Congo in 1995 and Uganda in 2000. The structure of the stochastic compartmental model remained the same, but the variable values were updated to fit the most recent data in Sierra Leone.


The following assumptions were made to build the model:
- The whole population was assumed to be susceptible.
- The model was initialized with the data available for May 29, 2014 from the Government of Sierra Leone. [8]
- All observed cases were assumed to be caused by human-to-human transmission. Initial transmission may have occurred from non-human transmission. [3] [10]

Figure 3: VenSim Model (see Appendix for larger diagram)

Figure 2: Graphical Representation of the SEIHFR Model

In the model, the population was broken into classifications of: susceptible individuals (S) who can be infected by Ebola virus if exposed; exposed individuals (E) who have been exposed to the virus but are not yet symptomatic; infected people (I) who are symptomatic; infected people who are hospitalized (H); people who have died from the virus and may be infectious at their funerals (F); and individuals who have either recovered or been removed from their community after their funerals (R). The VenSim software [9] was used to perform simulations of the model. All simulations were run for 365 days using increments of 0.125 day and integration type “RK4 Auto.”

2.2.1 Mathematical Description of the Model

Below are the differential equations that describe the model:
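A minimal sketch of these rate equations, as they might be coded outside VenSim, is shown below. It follows the general structure of the Legrand et al. compartmental model [2]; the rate symbols, their grouping, and the Python implementation are assumptions rather than the exact formulation used in the original VenSim model.

```python
# Hedged sketch of SEIHFR dynamics in the style of Legrand et al. [2].
# All rate symbols and groupings here are assumptions, not the paper's exact equations.
def seihfr_rhs(t, y, p):
    """Right-hand side of the SEIHFR system; y = (S, E, I, H, F, R)."""
    S, E, I, H, F, R = y
    N = p["N"]

    # Force of infection from community, hospital, and funeral contacts.
    lam = (p["beta_I"] * I + p["beta_H"] * H + p["beta_F"] * F) / N

    exposure        = lam * S                                                 # S -> E
    onset           = p["alpha"] * E                                          # E -> I
    hospitalization = p["gamma_h"] * p["theta"] * I                           # I -> H
    death_community = p["gamma_d"] * (1 - p["theta"]) * p["delta"] * I        # I -> F
    recovery_commun = p["gamma_i"] * (1 - p["theta"]) * (1 - p["delta"]) * I  # I -> R
    death_hospital  = p["gamma_dh"] * p["delta"] * H                          # H -> F
    recovery_hosp   = p["gamma_ih"] * (1 - p["delta"]) * H                    # H -> R
    burial          = p["gamma_f"] * F                                        # F -> R

    dS = -exposure
    dE = exposure - onset
    dI = onset - hospitalization - death_community - recovery_commun
    dH = hospitalization - death_hospital - recovery_hosp
    dF = death_community + death_hospital - burial
    dR = recovery_commun + recovery_hosp + burial
    return [dS, dE, dI, dH, dF, dR]
```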

2.2.2 Data Used in the Model

The model used epidemic information and morbidity data from the current outbreak in Sierra Leone. The raw data were collected by the World Health Organization (WHO), the Ministry of Health and Sanitation of the Government of Sierra Leone [8], and the Office for the Coordination of Humanitarian Affairs in West and Central Africa [18]. Some of these data are curated by Caitlin Rivers, a graduate student in the Network Dynamics and Simulation Science Laboratory at Virginia Tech, and are available on GitHub [14].

Figure 4: Baseline Model Without Any Intervention. Panels: a) baseline model; b) after 365 days.

The model was fitted with the following variable values:

Population (N): 6,384,066. Population count was available from Wolfram Mathematica software. [19]
Community Contact Rate (CC): 0.128. The rate at which susceptible people are exposed to the virus in their communities. [1]
Hospital Contact Rate (CH): 0.080. The rate at which susceptible people are exposed to the virus at hospitals. [1]
Funeral Contact Rate (CF): 0.111. The rate at which susceptible people are exposed to the virus at funerals. [1]




Incubation Period (A): 11.4 days. The time it takes for a person to become symptomatic after being exposed to the virus; the time between being exposed (E) and infected (I). The mean incubation period of 11.4 days was used in the model. [15]
Time until Hospitalization (TH): 5 days. The time between becoming infected and being hospitalized. [15]
Time from Hospitalization to Death (TD): 4.2 days. The mean time to death after hospital admission was 4.2 days. [15]
Duration of Traditional Funeral (TF): 4.5 days. A person may remain infectious for some time after death.
Duration of Infection (TI): 17.3 days. How long a person is symptomatic and can infect others before death. [15]
Time from Infection to Death (TD): 9.2 days. The model used 9.2 days as the mean time from infection to death. [1]
Probability a Case is Hospitalized (P): 0.197. The likelihood of a case being hospitalized was 19.7%. [1]
Case Fatality Rate for Unhospitalized Population (FU): 0.69. According to the estimate by WHO, the case fatality rate in Sierra Leone was 69% for the first 9 months of the epidemic. For this model, the same rate was assumed for both the hospitalized and unhospitalized population. [15]
Case Fatality Rate for Hospitalized Population (FH): 0.69. See above. [15]
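A sketch of how these listed values could be turned into rates for the SEIHFR right-hand side above and integrated over 365 days is shown below. The duration-to-rate conversions (and the assumed recovery rate after hospitalization) are plausible assumptions, not the calibration actually performed in VenSim.

```python
# Hedged example: instantiating the earlier sketch with the paper's listed values.
# Duration-to-rate conversions below are assumptions; VenSim handles this internally.
from scipy.integrate import solve_ivp

params = {
    "N": 6_384_066,                # population of Sierra Leone [19]
    "beta_I": 0.128,               # community contact rate (CC) [1]
    "beta_H": 0.080,               # hospital contact rate (CH) [1]
    "beta_F": 0.111,               # funeral contact rate (CF) [1]
    "alpha": 1 / 11.4,             # 1 / incubation period [15]
    "gamma_h": 1 / 5.0,            # 1 / time until hospitalization [15]
    "gamma_dh": 1 / 4.2,           # 1 / time from hospitalization to death [15]
    "gamma_f": 1 / 4.5,            # 1 / duration of traditional funeral
    "gamma_i": 1 / 17.3,           # 1 / duration of infection [15]
    "gamma_d": 1 / 9.2,            # 1 / time from infection to death [1]
    "gamma_ih": 1 / (17.3 - 5.0),  # assumed recovery rate after hospitalization
    "theta": 0.197,                # probability a case is hospitalized [1]
    "delta": 0.69,                 # case fatality rate [15]
}

# Seed with one infectious case in an otherwise susceptible population (assumption).
y0 = [params["N"] - 1, 0, 1, 0, 0, 0]
solution = solve_ivp(seihfr_rhs, (0, 365), y0, args=(params,),
                     max_step=0.125, method="RK45")
print("final recovered/removed compartment:", solution.y[5, -1])
```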

This data is based on a combination of suspected, probable, and confirmed cases. Suspected: a person who has experienced high fever and had contact with a suspected, probable, or confirmed case of Ebola, a dead or sick animal, or any person with at least three Ebola symptoms. Probable: any suspected case that has been evaluated by a medical professional, or any person who died suspected of Ebola and had a link to a confirmed case but did not have laboratory confirmation. Confirmed: a probable or suspected case who tests positive for EVD.

3. Results and Discussion

3.1 Types of Intervention

The model was modified to demonstrate different types of intervention.

3.1.1 Reduction of Time until Hospitalization

In this simulation, the time until hospitalization was decreased by 25%, 50%, and 75%.

Figure 5: Impact of Reduction of Time to Hospitalization. Panels: a) reduced by 25%; b) reduced by 50%; c) reduced by 75%; d) after 365 days.

Reducing the time until hospitalization dramatically decreases the number of deaths. After a 75% reduction in time, a change from 5 days to 1.25 days, deaths dropped from 40,033 to only 407, about 1% of the original. The change is so significant because not only does the fatality decrease, but so does exposure. A lack of exposure to the virus leads to fewer infections and deaths.
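A sketch of how such an intervention scenario could be reproduced with the code sketched above (the mechanism, scaling the hospitalization rate gamma_h as the time to hospitalization shrinks, is an assumption):

```python
# Hedged sketch of the time-to-hospitalization sweep from Section 3.1.1,
# reusing seihfr_rhs, params, y0, and solve_ivp from the earlier (hypothetical) sketches.
for reduction in (0.25, 0.50, 0.75):
    scenario = dict(params)
    scenario["gamma_h"] = 1 / (5.0 * (1 - reduction))  # e.g. a 75% reduction -> 1.25 days
    sol = solve_ivp(seihfr_rhs, (0, 365), y0, args=(scenario,),
                    max_step=0.125, method="RK45")
    print(f"time to hospitalization reduced by {reduction:.0%}:",
          "final removed compartment =", round(sol.y[5, -1]))
```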



Time until hospitalization can be reduced if more hospitals are made available to those infected. Many people in West Africa cannot access medical care rapidly because of the lack of infrastructure. However, according to WHO, there are enough beds for the number infected, about two per case. The problem is that the hospitals are unevenly distributed throughout West Africa, so many people do not have access in their areas. Therefore, medical centers should be made more accessible through the development of transportation infrastructure and an even distribution of clinics. In the short term, investment is needed for better medical evacuation procedures, in which the health care workers travel to the patients. [11]

3.1.2 Reduction of Case Fatality Rate for Hospitalized Individuals

In this simulation, the case fatality rate for hospitalized individuals was decreased by 25%, 50%, and 75%.

Figure 6: Impact of Reduction of Case Fatality Rate for Hospitalized Individuals. Panels: a) reduced by 25%; b) reduced by 50%; c) reduced by 75%; d) after 365 days.

Although reducing the case fatality rate for hospitalized individuals has little to no effect on the exposed and infected individuals, it may significantly reduce the number of deaths. This pharmaceutical method is not as effective as the development of hospital infrastructure, but it is still needed to combat the proliferation of the Ebola virus. The development of effective therapeutic interventions and drugs such as ZMapp, brincidofovir, favipiravir, etc. should be given high importance. [4]

3.1.3 Increase of the Probability a Case is Hospitalized

In this simulation, the probability a case is hospitalized was increased to 40%, 60%, and 80%, instead of the original 19.7%.



Figure 7: Impact of Increasing the Likelihood that a Case is Hospitalized. Panels: a) 40%; b) 60%; c) 80%; d) after 365 days.

Increasing the likelihood that an Ebola case is hospitalized has a great impact on the number of exposed individuals. Not only does the number of people exposed to the virus decrease, but so do the numbers of infections and deaths. To increase the probability of hospitalization, the hospital infrastructure should be expanded and proper health education about the virus should be made available. Many cultures in West Africa fear or shame the hospitalization of disease, treating Ebola as a taboo, so many refuse treatment. [20] It is important to teach the value of proper treatment to maximize the number of patients enrolled and, consequently, limit the spread of the virus.

3.1.4 Reduction of Funeral Contact Rate

In this simulation, the contact rate at funerals of infected individuals was decreased by 50%, 75%, and 100%.

Figure 8: Impact of Decreasing the Funeral Contact Rate. Panels: a) reduced by 50%; b) reduced by 75%; c) reduced by 100%; d) after 365 days.

Reducing and almost eliminating the risk of exposure at funerals of individuals with Ebola considerably lessens the amount of exposure, infection, and death. It is necessary to reduce the possibility of post-mortem infection due to unsafe funeral practices. In Guinea, for example, over 300 cases were linked to a single funeral [12]. According to the guidelines published by WHO, it is possible to have proper and safe burials while still respecting the dignity of the dead and the traditions of the people. Safe burial practices should be enforced in West African countries, where the worst outbreak of Ebola has occurred [13].

4. Conclusion


The models of the various types of intervention demonstrate that it is important, necessary, and realistic to reduce the spread of the Ebola virus. Building new infrastructure for hospitals and medical centers, providing health education to teach West Africans proper safety procedures, improving pharmaceutical methods and drugs, and establishing safe funeral practices may significantly limit the spread of the Ebola virus, reducing infection and death in West Africa. Without human intervention, the virus may continue to proliferate and annihilate populations.


Appendix

5. Acknowledgements

I would like to thank Mr. Robert Gotwals of the North Carolina School of Science and Mathematics for introducing me to computational science. I would also like to acknowledge J. R. Legrand for his work in modeling past outbreaks of Ebola virus.




6. References

[1] Caitlin M. Rivers et al. “Modeling the Impact of Interventions on an Epidemic of Ebola in Sierra Leone and Liberia”. In: PLOS Currents Outbreaks November 6, 2014.2 (2014). doi: 10.1371/currents.outbreaks.4d41fe5d6c05e9df30ddce33c66d084c.
[2] Legrand, J. R. et al. “Understanding the Dynamics of Ebola Epidemics”. In: Epidemiology and Infection 135.4 (2007), pp. 610–621. doi: 10.1017/S0950268806007217.
[3] Marí Saéz et al. “Investigating the Zoonotic Origin of the West African Ebola Epidemic”. In: EMBO Mol Med 7.1 (2015), pp. 17–23. doi: 10.15252/emmm.201404792.
[4] BBC. Ebola: The race for drugs and vaccines. url: http://www.bbc.com/news/health-28663217.
[5] DataMarket. Ebola: 2014 West Africa Outbreak - Detailed Indicators. url: https://datamarket.com/data/set/4spg/ebola-2014-west-africa-outbreak-detailed-indicators#!ds=4spg!88ck=4:88cl=4.m.l.k&display=line.
[6] Centers for Disease Control and Prevention. Review of Human-to-Human Transmission of Ebola Virus. url: http://www.cdc.gov/vhf/ebola/transmission/human-transmission.html.
[7] Marcelo F. C. Gomes et al. “Assessing the International Spreading Risk Associated with the 2014 West African Ebola Outbreak”. In: PLOS Currents Outbreaks September 2014.1 (2014). doi: 10.1371/currents.outbreaks.cd818f63d40e24aef769dda7df9e0da5.
[8] Ministry of Health and Sanitation, Government of Sierra Leone. Sub-national time series data on Ebola cases and deaths in Guinea, Liberia, Sierra Leone, Nigeria, Senegal and Mali since March 2014. url: http://health.gov.sl/wp-content/uploads/2015/01/Ebola-Situation-Report_Vol-225.pdf.
[9] Ventana Systems Inc. VenSim PLE Software, Ver 6.3. url: http://vensim.com/vensim-personal-learning-edition/.
[10] Donald G. McNeil. Source of Ebola Outbreak in West Africa Might Be Bats, Study Says. url: http://www.nytimes.com/2014/12/31/health/source-of-ebola-outbreak-might-be-bats-study-says.html?_r=0.
[11] World Health Organization. Ebola Situation Report. url: http://www.who.int/csr/disease/ebola/situation-reports/en/.
[12] World Health Organization. Sierra Leone: a traditional healer and a funeral. url: http://www.who.int/csr/disease/ebola/ebola-6-months/sierra-leone/en/.
[13] World Health Organization. Use Safe Burial Practices. url: http://www.who.int/csr/resources/publications/ebola/whoemcesr982sec7-9.pdf.

[14] Caitlin M. Rivers. Data for the 2014 Global Ebola outbreak. url: https://github.com/cmrivers/ebola.
[15] WHO Ebola Response Team. “Ebola Virus Disease in West Africa — The First 9 Months of the Epidemic and Forward Projections”. In: New England Journal of Medicine 371.16.
[16] The Data Team, The Economist. Ebola in graphics: The Toll of a Tragedy. url: http://www.economist.com/blogs/graphicdetail/2015/01/ebola-graphics.
[17] The New York Times. How Many Ebola Patients Have Been Treated Outside of Africa? url: http://www.nytimes.com/interactive/2014/07/31/world/africa/ebola-virus-outbreak-qa.html.
[18] The Office for the Coordination of Humanitarian Affairs in West and Central Africa. Sub-national time series data on Ebola cases and deaths in Guinea, Liberia, Sierra Leone, Nigeria, Senegal and Mali since March 2014. url: https://datamarket.com/data/set/4spl/sub-national-time-series-data-on-ebola-cases-and-deaths-in-guinea-liberia-sierra-leone-nigeria-senegal-and-mali-since-march-2014#!ds=4spl!88d0=3:88d1=1.6.5.3&display=line.
[19] Wolfram. Mathematica, Version 10. url: http://www.wolfram.com/mathematica/.
[20] Reuters. As Ebola stalks West Africa, medics fight mistrust, hostility. url: http://uk.reuters.com/article/2014/07/13/us-health-ebola-westafrica-idUKKBN0FI0P520140713.



Using van der Waals Heterostructures to Create p-n Junctions in Thin Film Solar Cells

Saishreyas Kolavennu

ABSTRACT

Depletion of fossil fuels and non-renewable energy sources has called for clean, renewable alternatives. Thin film solar cells have the potential to become more efficient devices than current types. In this project, the p-n junction of a cell was made with van der Waals heterostructures, which are combinations of different monolayers of materials. Black phosphorus and molybdenum disulfide were used as the p-n junction because of their tunable electronic properties. The films were spray deposited onto an FTO substrate using an airbrush, and then gold was evaporated onto the device. SEM imaging determined that the film was deposited uniformly and that it had a high surface density. A solar simulator plotted I-V curves for the device across different voltages. The results showed that this combination of materials was able to act as a productive p-n diode. There were some inconsistencies with the data, most likely because of defects resulting from spray deposition. The open circuit voltage was around 0.2 volts, which was very low. However, the rest of the results show that this process holds promise for van der Waals heterostructures to make p-n junctions and eventually create more efficient photovoltaic devices.

1. Introduction

As industries grow and the global population rises, the world will increase the rate at which it consumes resources. Currently, non-renewable energy sources account for about 85% of the world’s energy supply [2]. These resources tend to have harmful consequences, such as global warming, but are still the primary source of energy. In order to create a sustainable future, the use of alternative energy sources is a must. Solar energy has the potential to be a major resource. In 2013, solar energy accounted for 29% of all new electricity generation capacity, only behind natural gas [2]. Though this share has been increasing over the years, we should still research methods to make solar cells more efficient. The most common type of solar cell is the polycrystalline cell, known for its high efficiency and relatively simple assembly method. Conventional polycrystalline solar cells have efficiencies of around 20% [3]. However, there are other variations that can also be applicable for converting solar energy into electrical power. We must look for alternatives with economic benefits such as the cost reduction associated with using solar energy. Thin film devices have the potential to be a beneficial substitute for conventional devices. These devices have the potential to overtake the popularity of polycrystalline cells and become the predominant type used in solar cell farms and mounted on houses and buildings. Construction of solar cells may be difficult, but a certain type of cell is more suitable for mass production.

1.1 Thin Film Solar Cells

Compared to current solar cell devices, thin films generally have a lower efficiency than polycrystalline silicon cells [4]. However, thin film cells are more versatile in the surfaces they can be built upon [3]. Almost any metal or plastic could be used as the base layer for these cells. These films are also flexible and work relatively well in indirect light [3]. Thin films are more efficient at high temperatures, so they can be used in a variety of environments [3]. This is particularly beneficial because polycrystalline silicon solar cells decrease in efficiency when exposed to such conditions. For conventional devices, efficiency can decrease by up to 10% or more in some unsuitable environments [2]. For these reasons, there must be more emphasis on experimenting with these devices.

Thin film solar cells are devices made by depositing one or more thin layers (~100 nm-10 μm) of light-absorbing materials onto a substrate (some glass, metal, or plastic) and creating an electrical contact around those layers [15]. Developers must create a suitable p-n junction that will allow for the exchange of electrons, the basis for creating an electrical current. The transfer of electrons from one layer to another will create an electric field and current [15]. The “n-type layer” is the layer of a cell where there is an excess of electrons, and the “p-type layer” is where there are holes instead of electrons [15]. Throughout the entire process, all materials remain electrically or charge neutral. Only between different materials at the interface, the p-n junction, might there be an excess negative or positive charge [15]. The p-type layer actually becomes negatively charged and the n-type layer becomes positively charged. The reason is that the n-type layer has more electrons than the p-type, and so those electrons “spill” into the p-layer in an attempt to balance the charge [11,15]. This creates the charge imbalance and sets up an electric field between the p- and n-type layers due to the excess charge in each layer [15]. The excess electrons go to the p-type semiconductor, and the holes (areas of positive charge) to the n-type [15]. An electric field is created when both layers are placed directly on top of each other. This allows the semiconductors to act as an electrical battery.

Figure 1 shows the basic configuration for a thin film solar cell. The first layer is a substrate. Next, an ohmic contact is built between the substrate and the p-type layer. The ohmic contact is used as an electrical junction between two conducting surfaces that has a linear I-V curve and respects Ohm’s Law. Above that, the p-type layer and the n-type layer are deposited. Lastly, a coating is applied that will bring in light and trap it without reflecting it back out of the cell. The p-n junction is a vital part of modern semiconductor devices. In the most conventional methods, the p- and n-type junctions are formed by chemically doping the regions to create a graded junction [16].
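As a point of reference (a standard textbook relation, not a quantity reported in this paper), the strength of the junction field in a doped, abrupt p-n junction is often summarized by the built-in potential

$$V_{bi} = \frac{k_B T}{q}\,\ln\!\left(\frac{N_A N_D}{n_i^2}\right),$$

where $N_A$ and $N_D$ are the acceptor and donor concentrations, $n_i$ is the intrinsic carrier concentration, and $k_B T / q$ is the thermal voltage (about 0.026 V at room temperature). These symbols are generic and are not measurements from the devices built here.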

Figure 1: General structure of thin film device [4]

Some current thin films use materials such as cadmium telluride (CdTe) and copper indium gallium selenide (CIGS) as the main material to construct devices [15]. Current research has shown that materials such as graphene have tried to become the “new silicon” because of their similar properties [5]. Graphene, the monolayer form of graphite, has led to the use of other two-dimensional materials, which are called van der Waals heterostructures [5].

1.2 van der Waals Heterostructures

Current research on graphene has led researchers to investigate other two-dimensional materials such as hexagonal boron nitride (hBN), molybdenum disulfide (MoS2), and black phosphorus [5]. Figure 2 demonstrates the basic concept behind constructing heterostructures from a variety of different materials. From this diagram, each of the layers is deposited on top of each other like LEGOs [5]. There are many different combinations that can be applicable, but the geometry of the material must be considered when attempting to make a productive heterostructure. The theoretical idea is to be able to use the different properties of each of the materials and combine them to create the most efficient device [5,17]. For example, one could take the properties of a material with good light-capturing ability and one with high mobility and combine them into one material. Theoretically, this would work magnificently. However, in reality, there are limitations such as the thickness of a layer, which causes the band gap of the material to decrease as the thickness increases [13], and orientation defects [5,14]. Stacking these 2D crystals on top of each other makes heterostructures that can be used as either p-type or n-type layers in photovoltaic devices [12]. The individual properties of each material must be considered when making a heterostructure and the resulting p-n junction.

Figure 2: Basic representation of van der Waals heterostructures [5]

Mobility, for example, looks at the rate at which the electrons that generate a current move [9]. In any chemical contacts, it is possible for lattice mismatches to affect the interactions between materials [17]. Additionally, the ultrathin nature of van der Waals heterostructures allows for electrical modulation of the band structure [5]. This is the process of varying the electrical properties and the energies of the electrons in the material. Being able to vary these conditions helps to create novel electronic and optoelectronic devices with two-dimensional materials [5,17]. Black phosphorus and molybdenum disulfide have been shown to have properties that make them viable in solar cell engineering.

1.3 Black Phosphorus

Phosphorus has many allotropes, such as red, white, and black phosphorus. Black phosphorus is a very sensitive material because it oxidizes relatively quickly when exposed to air, so it must be handled under nitrogen [6]. Monolayer black phosphorus can be used as the n-type layer when it is incorporated into cell construction. It is recommended to work with black phosphorus inside a glove box or in a setup that forces oxygen out. When black phosphorus is in its bulk form, it has a band gap of 0.59 eV and a relatively high mobility of 2,722 cm2/V·s [9]. Recent studies have shown samples of black phosphorus films that are ~50-250 nanometers thick [8]. The band gap of monolayer black phosphorus, 1.51 eV, is also higher than that of its bulk form [9]. This allows the material to perform at a larger range of voltages, which is very beneficial for photoelectric devices. Figure 3 depicts how the band gaps of monolayer black phosphorus and molybdenum disulfide compare to their bulk material counterparts.
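For context (a standard relation, not a result of this work), a band gap $E_g$ also sets the longest photon wavelength a layer can absorb, $\lambda_{max} \approx hc/E_g \approx 1240\ \text{nm·eV}/E_g$: the quoted gaps of 1.51 eV (monolayer) and 0.59 eV (bulk) correspond to absorption onsets of roughly 820 nm and 2100 nm, respectively.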



Figure 3: Band gap values of bulk and monolayer material counterparts

1.4 Molybdenum Disulfide (MoS2)

MoS2 is a layered transition metal dichalcogenide composed of layers of covalently bonded S-Mo-S held together by van der Waals forces [10]. Like graphene, it can be extracted into its two-dimensional structure relatively easily with Scotch tape [1,10]; this was the typical way of synthesizing monolayers. The band gaps of bulk MoS2 and single-layer MoS2 are 1.29 eV and 1.9 eV, respectively [7]. This means the single-layered MoS2 can support a higher electric field than its bulk form before breaking down, making it more feasible for use in power-intensive devices. The monolayer MoS2 also showed greater photoluminescence [7]. Recent experimentation with MoS2 showed that the material has a higher photosensitivity than that of graphene [10]. Though graphene has a zero band gap, it still has a fast electron-hole recombination rate [10]. The higher photosensitivity of MoS2 and its electron-hole recombination make it a viable material for a p-n junction. This project aims to determine if a p-n junction made from van der Waals heterostructures would be a suitable diode for thin film solar cells.

Figure 4: Idea of MoS2/black phosphorus p-n junction [13]

2. Methods

2.1 Thin Film Synthesis

Liquid exfoliation was used to make the materials’ monolayers/few layers [1]. Molybdenum disulfide and black phosphorus are prepared using the same process, but with their respective materials. Molybdenum disulfide is prepared with n-methyl-2-pyrrolidone (NMP) as the solvent. The concentration of MoS2 was 5 mg/mL. The solution was sonicated in a bath sonicator for 10 cycles of 99 minutes each. The solution was then centrifuged for 30 minutes at a speed of 10,000 revolutions per minute. After this process, a pellet of MoS2 and the thin sheets of material immersed in solution are formed. The supernatant is removed, leaving the solid particles at the bottom of the centrifugation tube. The supernatant will be used for the airbrush that deposits the thin sheets onto the substrate.

2.2 Spray Deposition

There are some conventional methods for depositing thin films onto a substrate. Doctor blading is a tedious yet accurate method for making thin films, but very time consuming. The solution is streaked across the substrate, allowing the solvent to evaporate and leaving the material deposited on the substrate. Spin coating puts the substrate on a spinning surface, and the solution is drop-cast onto it. Though this is viable, it will waste solution and will not ensure a uniform film. Spray deposition is a relatively new method for depositing thin films. It has been previously used to spray quantum dots for quantum dot solar cells and to deposit polymers [8]. These are all solution-based processes for depositing thin films. Methods like chemical vapor deposition are viable; however, they are more costly. In this experiment, an Iwata airbrush was used and nitrogen was the air supply for spray deposition. The general idea was to spray the material in the solvent onto the heated substrate. The solvent will evaporate, leaving the material. The substrate was FTO/Glass and was placed on a hot plate set at 235°C. This temperature was used because the boiling point of the solvent (NMP) is 205°C [18] and some leeway was added to ensure all of the solvent evaporated quickly. The airbrush was then placed vertically above (~5 centimeters) the heated FTO/Glass. The incoming pressure of nitrogen gas into the airbrush was set at seven psi. The airbrush was moved side-to-side to deposit the thin sheets into a film. Four milliliters of solution was deposited onto the substrate for each material. First, the monolayer MoS2 was deposited, and then the black phosphorus was deposited on top of the previous layer. Figure 5 below describes the setup of the device. Generally, the black phosphorus (n-type layer) would be the layer adjacent to the substrate. However, an inverted p-n junction was used, and the only modification that needs to be made is using a metal with a low work function; in this case, gold. If the conventional setup were used, the metal would have needed to be one with a high work function, like aluminum. The electron transfer layer (MoO3) was used to create additional support for the metal evaporation and would help with the mobility of the entire device.




Figure 5: Structure of constructed thin film solar cell

2.3 Efficiency Testing

After the materials were sprayed onto the substrate, a metal was evaporated onto the sample. A metal with a small work function had to be evaporated for this setup. Next, the device was tested in a solar cell simulator. The solar simulator had a light shining up from below, with a piece of glass that focused on the substrate. The material on the substrate was scraped off in order to create a better electrical contact. The voltage applied and the corresponding currents were plotted, and the resulting open voltage was used to help determine the efficiency of the device. The test was done in direct light and when the surrounding environment of the cell was completely dark. This was the setup that was used inside the simulator. A Keithley machine was also used to measure the voltage and accompanying current for the sample without the MoO3. MoO3 is used as an additional electron-transport layer. This was to see the main effect of the p-n junction of MoS2 and black phosphorus.

Figure 6: Device setup for the solar simulator

3. Results

3.1 SEM Imaging

Before the film was tested quantitatively, the morphology of the thin film was analyzed. SEM imaging was used to characterize the thin film that was deposited onto the FTO substrate. It was used to find locations where the material did not deposit, or any “pinholes” on the FTO. The substrate must be completely covered; otherwise any layers above, such as the evaporated metal, will hit the FTO and cause the cell to short-circuit. The substrate must be void of any inconsistencies, and the films also need to have viable thicknesses that allow the cell to function. Figure 7 shows the surface density of the MoS2 and black phosphorus film.

Figure 7: Surface density of MoS2 on FTO

The image depicts how this deposition method was able to completely cover the substrate. This shows that the density of the film is very high, meaning it will be a viable thin film that can be used for solar cell assembly. Uniformity is vital for construction of these devices; otherwise the efficiencies will be severely hindered. Cross-sectional SEM was able to determine the thickness of each of the layers. The next image (Figure 8) depicts the cross-sectional view of the thin film on FTO.

Figure 8: SEM image of cross section of MoS2 and black phosphorus film on FTO.

The FTO has a thickness of around 500 nanometers, and was measured to differentiate between the actual deposited film and the substrate. The MoS2 had a thickness between 220-250 nanometers. The same amount of black phosphorus was deposited, and there were similar thicknesses for the black phosphorus as well. The thickness is important in the construction of the device because it can impact the drain current modulation and the mobility of the electrons in the cell. If the layer is too thick, it is possible for the electrons to have too small a mobility to create a feasible current. The final image (Figure 9) displays both MoS2 and black phosphorus shown by the film’s cross-section. The material on the surface was



partially rigid, but the entire layer was able to completely cover the surface, which is vital for the assembly of a thin film photoelectric device. These results show that the film is uniform and dense across the surface. The surface density is qualitatively very high, and the cross-sectional images displayed how the layers of MoS2 and black phosphorus were able to stack on top of each other, which was one of the main goals of this project.

Figure 9: Surface image of film with black phosphorus and MoS2

3.2 X-ray Photoelectron Spectroscopy Data

X-ray photoelectron spectroscopy is used to measure the elemental composition of materials. Black phosphorus was deposited onto a 1-centimeter by 1-centimeter gold-plated silicon substrate. Both samples were made inside a nitrogen glove box, and one was used in the spray deposition. The other film was left inside the glove box. The two figures below show the XPS data of the spray-deposited sample and one that is in the pristine form. The peak around the 134 eV mark in the first graph shows that the black phosphorus was oxidized. Since this peak is not visible in the spectrum for the film in the nitrogen atmosphere, it means that black phosphorus will oxidize in air. The oxidation process could have also been stimulated when it was exposed to light while not in the nitrogen environment.

Figure 10: XPS Data with black phosphorus in nitrogen atmosphere and black phosphorus exposed to air.

3.3 Solar Simulator Data

The cell was prepared in a way to run eight trials on the same device. After the deposition was done, a layer of MoO3 was deposited as an electron-transfer layer. Next, gold was evaporated onto the samples. For each contact, the I-V curve was generated and plotted. The figure above depicts the setup of the solar cell device. The material was deposited across the entire surface, including the glass; however, only the part of the contact on the FTO was in contact with the simulator. For the Keithley measurement, the voltage was changed at specific intervals and measurements were taken 10 times at each value. The values were averaged and the following I-V plot (Figure 11) was produced.

Figure 11: I-V plot for measurements from a Keithley
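A sketch of how summary quantities could be read off such an I-V sweep is given below; the interpolation approach and the illustrative data points are assumptions, not the analysis actually performed on these devices.

```python
# Hedged sketch: extracting open-circuit voltage, short-circuit current, and
# fill factor from a measured I-V sweep. The voltage/current arrays are illustrative,
# using the convention that the illuminated photocurrent is negative.
import numpy as np

def iv_summary(voltage, current):
    """Return (Voc, Isc, fill factor) from arrays of applied voltage and measured current."""
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)
    v_oc = float(np.interp(0.0, current, voltage))   # voltage where the current crosses zero
    i_sc = float(np.interp(0.0, voltage, current))   # current at zero applied voltage
    mask = (voltage > 0) & (current < 0)             # quadrant where the cell delivers power
    p_max = float(np.max(-voltage[mask] * current[mask])) if mask.any() else 0.0
    fill_factor = p_max / abs(v_oc * i_sc) if v_oc and i_sc else float("nan")
    return v_oc, i_sc, fill_factor

# Illustrative sweep (not measured data): current rises steeply past ~0.2 V.
v = np.linspace(-0.1, 0.4, 11)
i = np.array([-1.1e-6, -1.0e-6, -0.9e-6, -0.8e-6, -0.6e-6,
              -0.3e-6, 0.2e-6, 1.0e-6, 3.0e-6, 8.0e-6, 2.0e-5])
print(iv_summary(v, i))
```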




Figure 12: Solar simulator I-V plots for both light and dark settings The results showed promise for the materials to act as a productive diode. They have the general shape of an I-V plot for a diode. The majority of the trials showed a spike in current when the voltage reached 0.2 volts. However, if that inconsistency is ignored, the I-V curve shows that this combination of materials can be used as a p-n diode. It was difficult to test the efficiency because of the discontinuity at that point. Between the images of the Keithley machine and the one done in light from the simulator, the one from the simulator showed that the open voltage is a higher. This shows that the layer of MoO3 does have an impact on the properties of the device. 4. Discussion The principal achievement of this project was the fabrication of a p-n junction with van der Waal heterostructures for applications in photoelectric devices. This resulted from synthesizing individual monolayers and layering them to create the basis for a solar cell. The spray deposition process was used instead of other deposition techniques because of its feasibility and low cost. Because current techniques like sputtering and chemical vapor deposition can take time and money, the project sought to look for a new method for solution- processable thin film deposition. This is particularly beneficial when creating thin films in mass quantities. A faster, cheaper method will be very useful when it is perfected and will be applicable for all types of films. A plethora of testing was performed in order to verify the qualitative and quantitative properties of the resulting p-n junction. The material must completely cover the substrate so that there will not be any contamination between the different layers and the metal that was evaporated. If there is, it can cause the device to short-circuit and severely affect efficiency. Though SEM imaging concluded that the material was spread uniformly with a high surface density, it is possible that the gold penetrated through the minute pinholes. The reason for the initial cells having such low current was that the layer of black phosphorus may have come in contact through minute pinholes that were not found through SEM imaging, causing the cell to perform 42 | 2014-2015 | Volume 4

weakly. During the heating and synthesis process, the phosphorus could have etched the substrate, damaging its conductive material. Further experimentation can determine whether it was the black phosphorus itself or the phosphorus oxide that caused the etching. The XPS spectra show that the black phosphorus oxidizes whenever it comes in contact with air. Oxidation changes the properties of the material and will affect the expected results; for this reason, it is important to maintain a nitrogen environment while handling black phosphorus or any mixtures containing it. This may have been a potential source of error. There are several possible reasons why the diode showed low efficiencies. It is possible that as the electrons move through the film they become trapped at the edges, making it difficult for them to transfer between layers and stopping the current. As mentioned in a previous paper [14], when graphene is synthesized it lacks long-range orientation correlation; because of this, the edges of the sample may not be oriented in a way that allows for electron charge transfer. Another possibility is that the conduction band and the valence band of the two materials interfere with each other: as one material's valence band approaches the other's conduction band, an electron may need more energy to reach the actual conduction band of the second material. The data from the dark trials were not significant. This test was done to see if the cell could act as a battery. There was almost no correlation among the trials, and most of the tests were not able to produce any results; five of the eight tests did not record a current. The films showed high resistance, since the data showed a linear relationship between current and voltage. The cell would not be able to work if the resistance were high, since the mobility of electrons would be drastically hindered. In order to demonstrate a productive diode, the I-V curve should have a significant open-circuit voltage and a shape similar to that of Figure 6.

5. Conclusion & Future Work

Thin film solar devices have shown evidence of certain advantages over conventional polycrystalline solar cells. Their cost efficiency and stability make them more productive in a wider variety of conditions. However, they must be improved in order to match the performance of current photoelectric devices. This project looked for a viable method to deposit thin sheets of materials as a p-n junction while enhancing the electrical properties of the entire device. Two-dimensional materials have many applications in creating photoelectric devices, and the combination of properties from different materials can potentially make a highly efficient solar cell. The larger band gaps and higher mobilities of monolayer


black phosphorus and MoS2 showed that a p-n junction developed with these materials behaves as a productive p-n diode. The I-V plot in light showed that the trials began to exhibit an exponential increase in current, demonstrating diode behavior. Though the efficiency was not significant, minor improvements can bring it up to standard. In the next step of this project, a few objectives will be considered: removing the pinholes from the substrate, attempting to create a bulk heterojunction of materials, and using different combinations of two-dimensional materials to make a p-n junction. The deposition technique can be refined to completely prevent the metal from reaching the substrate. In this work the p-n junction was made as a bilayer of materials, molybdenum disulfide and black phosphorus; however, it is also possible to experiment with a bulk heterojunction of materials. This would include a monolayer of one material, a solution of two materials as the junction, and the second layer deposited last. There are also other two-dimensional materials that could be used. For example, hexagonal boron nitride (hBN) and tungsten diselenide (WSe2) could play the same role as the materials in this experiment [4]. The addition of other heterostructures allows us to develop different combinations of van der Waals heterostructures to create suitable p-n junctions.

6. Acknowledgements

I would like to acknowledge my lab mentor, Dr. Scott Warren, for allowing me to use his lab and materials for my experimentation. Next, I would like to thank Mr. Tyler Farnsworth for assisting me in the lab and overseeing my project; this was almost a second project for him, and I appreciate the extra workload he took on. I would also like to thank Jun Hu and Dr. Myra Halpin for helping analyze the data, format the paper, and give general feedback.

7. References

[1] Coleman, J. N., Lotya, M., O’Neill, A., Bergin, S. D., King, P. J., Khan, U., … Nicolosi, V. (2011). Two-dimensional nanosheets produced by liquid exfoliation of layered materials. Science (New York, N.Y.), 331, 568–571. doi:10.1126/science.1194975
[2] “SEIA.” Solar Industry Data. N.p., n.d. Web. 19 Sept. 2014
[3] “Advantages Make Thin Film Solar Panels Shine.” SolarTown. SolarTown, LLC, 22 Oct. 2012. Web. 05 Sept. 2014
[4] Leung, Isaac. “Is This the End for Silicon Thin-film Solar Cells?” Electronics News. 7 May 2014. Web. 16 Sept. 2014
[5] Geim, A. K., & Grigorieva, I. V. (2013). Van der Waals heterostructures. Nature, 499, 419–425. doi:10.1038/nature12385


[6] Lange, S., Schmidt, P., & Nilges, T. (2007). Au3SnP7@Black Phosphorus: An easy access to black phosphorus. Inorganic Chemistry, 46, 4028–4035. doi:10.1021/ic062192q
[7] Mak, K., Lee, C., Hone, J., Shan, J., & Heinz, T. (2010). Atomically Thin MoS2: A New Direct-Gap Semiconductor. Physical Review Letters. doi:10.1103/PhysRevLett.105.136805
[8] Alaa Abdellah, Bernhard Fabel, Paolo Lugli, Giuseppe Scarpa, Spray deposition of organic semiconducting thin-films: Towards the fabrication of arbitrary shaped organic electronic devices, Organic Electronics, Volume 11, Issue 6, June 2010, Pages 1031–1038, ISSN 1566-1199
[9] Qiao, J., Kong, X., Hu, Z., Yang, F., & Ji, W. (2014). Few-layer black phosphorus: emerging direct band gap semiconductor with high carrier mobility. arXiv preprint arXiv:1401.5045. Retrieved from http://arxiv.org/abs/1401.5045
[10] Yin, Z., Li, H., Li, H., Jiang, L., Shi, Y., Sun, Y., … Zhang, H. (2012). Single-layer MoS2 phototransistors. ACS Nano, 6, 74–80. doi:10.1021/nn2024557
[11] “Different Types of Solar Panels.” Enviro(Shop). Enviro(Group), 2012. Web. 22 Sept. 2014
[12] Deng, Y., Luo, Z., Conrad, N. J., Liu, H., Gong, Y., Najmaei, S., … Ye, P. D. (2014). Black Phosphorus–Monolayer MoS2 van der Waals Heterojunction P-N Diode. ACS Nano, 37. doi:10.1021/nn5027388
[13] Li, L., Yu, Y., Ye, G. J., Ge, Q., Ou, X., Wu, H., … Zhang, Y. (2014). Black phosphorus field-effect transistors. Nature Nanotechnology, 9, 372–377. doi:10.1038/nnano.2014.35
[14] Lin Gan, Xuewu Ou, Qicheng Zhang, Ruizhe Wu, and Zhengtang Luo. (2014). Graphene Amplification by Continued Growth on Seed Edges. Chemistry of Materials, 26(14), 4137–4143.
[15] Harris, William. “How Thin-film Solar Cells Work.” 07 April 2008. HowStuffWorks.com. 20 September 2014.
[16] Lund, H. “Solar Cells.” Solar Cells. N.p., 2008. Web. 27 Sept. 2014.
[17] Fang, H., Battaglia, C., Carraro, C., Nemsak, S., Ozdol, B., Kang, J. S., … Javey, A. (2014). Strong interlayer coupling in van der Waals heterostructures built from single-layer chalcogenides. Proceedings of the National Academy of Sciences of the United States of America, 111, 6198–6202. doi:10.1073/pnas.1405435111
[18] “N-Methylpyrrolidone (NMP) Properties.” N-Methylpyrrolidone (NMP) Properties. BASF, 2007. Web. 28 Sept. 2014.




Novel Synthesis and Characterization of Porous Thin Film ZnCo2O4 for Advanced Photocathodic Applications

Danuh Kim

ABSTRACT

In this research, ZnCo2O4 was chosen as an ideal candidate for an alternative photocathode for the Dye-Sensitized Solar Cell (DSSC) and Dye-Sensitized Photoelectrosynthesis Cell (DSPEC). A novel p-type nanomaterial, ZnCo2O4 sheets, was synthesized at 110 °C under pressure for 16 hours. X-ray powder diffraction and scanning electron microscopy characterized the crystallinity and the porous, quasi-2-D morphology, which offers advantages in photocathodic applications. The ZnCo2O4 platelets were processed as a thin film and coated on fluorine-doped tin oxide glass. Energy dispersive X-ray spectroscopy verified the metal ratios of the synthesized samples, while X-ray photoelectron spectroscopy and UV photoelectron spectroscopy supported the atomic proportions and spinel structure of the platelet film. A DSSC device was constructed to test the material's performance. The photocurrent density versus voltage curve showed a high open-circuit voltage and a flat dark curve compared to a traditional NiO device, which is beneficial to device performance. The incident photon conversion efficiency graph showed that only the dye on the surface absorbed light, indicating correct solar cell performance. The open-circuit voltage decay graph showed the lifetime of the cell. Further study of this unprecedented material could enhance DSSC and DSPEC performance through alternative p-type semiconductors.

1. Introduction

1.1. Demand for New Energy Sources

The limited supply of energy based on nonrenewable fossil fuels and the environmental damage caused by the excessive use of combustible energy sources are some of the most important issues in the world today. In particular, coal and natural gas-fired power plants produce 25% and 6% of the total U.S. global-warming emissions, respectively [1, 2]. Unlike traditional powering systems, technologies that utilize renewable energy sources produce very few harmful emissions [3]. In addition, renewable energy can be produced indefinitely, whereas coal, oil, and natural gas production is in terminal decline because of limited resources [4]. In order to sustain industrial growth without excessive pollution and the threat of energy depletion, we must find clean and renewable energy sources that can replace traditional energy sources.

1.2. Solar Energy

Among various forms of renewable energy, solar energy represents a vast resource for the generation of clean and sustainable energy. While natural gas emits from 0.6 to 2 pounds of carbon dioxide equivalents per kilowatt-hour (CO2E/kWh) and coal produces 1.4 to 3.6 pounds of CO2E/kWh, solar energy emits only 0.02 to 0.04 pounds of CO2E/kWh [5]. Solar energy was the second most installed source of electricity in 2013, as 38 GW of photovoltaics were installed worldwide. Global solar energy production is rapidly growing and is expected to develop continuously [6].

1.3. Dye-Sensitized Photoelectrosynthesis Cells and Dye-Sensitized Solar Cells

The tandem Dye-Sensitized Photoelectrosynthesis Cell (DSPEC) is used to produce solar fuels, one of the prominent candidates for a future renewable energy source.

Figure 1. Schematic of the operation system of the (a) DSPEC and (b) p-DSSC [7]



According to Fig. 1a, at the photoanode of the DSPEC, sunlight absorbed by the chromophore-catalyst assembly excites electrons from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO) of the chromophore. The chromophore-catalyst assembly is oxidized when the excited electrons are injected into the conduction band of the TiO2 film, an n-type photoanode material. These electrons are finally collected at glass coated with fluorine-doped tin oxide (FTO), a transparent conducting oxide. The oxidized chromophore-catalyst assembly can then oxidize water molecules, splitting them into oxygen molecules and protons. At the photocathode, sunlight absorbed by a different chromophore-catalyst assembly excites electrons, leaving holes in the HOMO of the chromophore. The chromophore is then reduced by injecting holes into the p-type photocathode material, and the holes are collected at the FTO glass on the cathode side of the cell. The reduced chromophore-catalyst assembly reduces carbon dioxide: four protons from the photoanode react with two carbon dioxide molecules to produce two water molecules and two carbon monoxide molecules. Carbon monoxide and molecular oxygen are both used as solar fuel; their combustion produces energy and releases carbon dioxide and water as byproducts. Those byproducts are used again by the DSPEC to repeat the energy production cycle. This enables the DSPEC to produce clean and renewable energy from sunlight without producing any waste products.

The major issue with the tandem DSPEC is poor performance due to the low efficiency of the photocathode material [8]. Most efforts to develop p-type photocathodes have focused on metal oxides, because those materials generally have high band gaps and can be produced at low cost [8]. The current standard p-type material is NiO. So far, NiO has shown the highest performance among candidate materials, with a record photoconversion efficiency of 1.3% in the DSSC configuration [9], whereas the n-type TiO2 counterpart has exceeded an efficiency of 12% [10]. In spite of these advantages, NiO has proven non-optimal as a p-type material because of its low hole mobility, poor chromophore surface loading, low dielectric constant, low surface area, and low light harvesting efficiency caused by limitations on the film thickness [8]. These traits give NiO a high charge carrier recombination rate, reducing the efficiency of the device. Therefore, the search for novel p-type materials to improve the performance of the system is very important. Accordingly, the ultimate goal of this project is to synthesize a high-quality photocathode material for practical incorporation in tandem DSPECs, providing photovoltages large enough to enable water splitting and carbon dioxide reduction. As the tandem DSPEC has a complicated structure, we employed the simpler DSSC as a model to investigate the photocathode


material properties. According to Fig. 1b, the DSSC configuration is analogous to that of the DSPEC, but the chromophore-catalyst assembly is replaced with a chromophore (dye) [9]. In a p-type DSSC, the molecular chromophore is excited and quickly reduced by injecting holes into the p-type material. The reduced chromophore is then regenerated by a liquid electrolyte to complete charge separation [10]. Unlike the DSPEC design, which produces carbon monoxide as a solar fuel, the DSSC produces electrical power directly from the incident sunlight. Because it has a simpler configuration than the DSPEC and is a convenient way to examine the material properties of films, the DSSC is frequently used as a model to study DSPEC performance.

1.4. ZnCo2O4 and its Morphology

Among the p-type metal oxides with a high band gap, ZnCo2O4 was chosen as an ideal candidate for an alternative photocathode material. This material is known to exhibit a maximum hole mobility higher than 0.2 cm2/V·s and a conductivity higher than 1.8 S·cm-1 [11]. These values are much higher than the mobility of standard NiO, 6.3 x 10-5 cm2/V·s [12], and its conductivity of 2.2 x 10-3 S·cm-1 [8]. High carrier mobility and conductivity would allow a thicker film to be deposited while retaining good charge collection efficiency. With a thicker film and more dye adsorbed to the film, the light absorption from the dye will be higher, leading to better light harvesting efficiency. It is also important for ZnCo2O4 to avoid absorbing visible light, which promotes light absorption by the dye. NiO, on the other hand, absorbs visible light because an intervalence charge transfer (IVCT) absorption band arises from photon-induced hopping of an electron from Ni2+ to Ni3+ [13]. If NiO competes with the dye for visible light absorption, the resulting overall device performance will be poor. Judging from the above-mentioned pros and cons, ZnCo2O4 is a strong candidate for use as a new p-type material in DSPECs.

2. Materials and Methods

2.1. Synthesis of ZnCo2O4

Porous ZnCo2O4 was synthesized following Sun's method [14]. 5 mmol Zn(NO3)2·6H2O, 3 mmol Co(NO3)2·6H2O, and 54 mmol urea were dissolved in 20 ml distilled water and 40 ml ethylene glycol. The solution was transferred to a pressure bottle and heated at 110 °C for 16 h. When the reaction ended, the precipitate was washed with water and ethanol several times. The dried precipitate was ground into a powder and annealed at 450 °C for 5 hours.




2.2. Making the ZnCo2O4 Paste and Coating as a Thin Film

The ZnCo2O4 paste was made following the method used by Ito [15]. Two kinds of pure ethyl cellulose (EC) powders, EC 1 (5–15 mPa·s) and EC 2 (30–50 mPa·s), were dissolved in ethanol to yield 10 wt.% solutions prior to use. This 10 wt.% ethanolic mixture was added to a round-bottomed flask containing 6 g ZnCo2O4 (obtained from the previously prepared precipitate) and 24.3 g terpineol and diluted with 30 ml ethanol to obtain a final total volume of 105 ml. The mixture was then homogenized with a disperser, a ball mill, and an ultrasonic horn to yield a paste giving an approximately 1.5 μm thick film. The ZnCo2O4 paste was then spin coated onto the surface of FTO glass to form the thin film.

2.3. p-DSSC Device Fabrication

Spin-coated surfaces were annealed for 40 minutes at 400 °C and placed in P1 (dye) solution for 24 hours. The dye-loaded surface was then sandwiched with a counter electrode using a 25 μm thick Surlyn gasket. The counter electrode was made by dropping two drops of 0.5 mM H2PtCl6 in IPA onto a glass surface and annealing it at 380 °C for 30 minutes. The space between the ZnCo2O4 layer and the counter electrode was filled with I-/I3- electrolyte using a vacuum pump. The hole used to inject the electrolyte was sealed with a 25 μm thick Surlyn gasket and a cover glass.

3. Results

3.1. Powder Analysis

3.1.1. X-Ray Diffraction (XRD) Pattern

Figure 2. The X-ray diffraction (XRD) pattern of the ZnCo2O4.

The X-ray diffraction (XRD) pattern of the ZnCo2O4 in Fig. 2 showed a single strong peak at 19°, indicating that the material was highly crystalline. The profile also indicates that the material is highly textured, meaning that it has a single predominant crystal face due to its two-dimensional architecture.

3.1.2. Scanning Electron Microscope (SEM) Images

Figure 3. SEM images of ZnCo2O4 obtained at a) low and b) high magnification

Figure 3 shows the SEM images of the ZnCo2O4 sheets obtained at low and high magnification, respectively. Fig. 3A shows the overall morphology of the prepared material; it is difficult to assess the structure and properties of the material from Fig. 3A alone, so the sheet was examined at higher magnification. Fig. 3B shows that the sheet has a nanoporous structure, similar to a nanocellular foam. Although the pores are scattered considerably, the pore size is fairly homogeneous and loosely regular. It is noteworthy that the pores remained intact even after calcination at elevated temperature.

3.1.3. Energy Dispersive X-Ray Spectroscopy

Table 1. Atomic composition of ZnCo2O4 sheets by energy dispersive X-ray spectroscopy (EDS)

Energy dispersive X-ray spectroscopy (EDS) provides the elemental composition of materials. According to the EDS data, the zinc-to-cobalt ratio of the ZnCo2O4 sheets was approximately 1:3, indicating that the sheets contained excess cobalt. This result can be supported by the tendency of zinc to be less reactive than cobalt: the Zn2+ oxidation state is very stable, so some of the zinc ions escape from the reaction. Therefore, the prepared ZnCo2O4 contained excess cobalt.

3.2. ZnCo2O4 Film Analysis

3.2.1. X-Ray Photoelectron Spectroscopy (XPS)



Figure 4. X-ray photoelectron spectroscopy (XPS) spectra of the ZnCo2O4 sheet thin film. a) The zinc spectrum shows a peak at 1020 eV, which indicates the presence of Zn2+ in the material. b) The cobalt spectrum shows a peak at 780 eV, which indicates the presence of Co3+ in the material.

Figure 4 presents the XPS spectra of the ZnCo2O4 sheet thin film focused on zinc and cobalt. In Figure 4A, a Zn2+ peak is present at 1020 eV, and peaks associated with zinc in other oxidation states are absent. This is consistent with the expected zinc oxidation state in ZnCo2O4, because the material has a spinel structure in which the oxidation state must be Zn2+. According to Figure 4B, a Co3+ peak is present at 780 eV, along with a small satellite peak for Co2+ at 795 eV. The major peaks indicate the correct oxidation states of the ions for ZnCo2O4 to have the spinel structure (Zn2+, Co3+, O2-).

Table 2. X-ray photoelectron spectroscopy (XPS) data summarizing the binding energies and atomic concentrations of the ZnCo2O4 thin film surface.

As seen in Table 2, XPS detected zinc, cobalt, oxygen, carbon, and tin. Tin was observed due to the fluorine-doped tin oxide (FTO) upon which the ZnCo2O4 was coated. As expected, there was no fluorine


signal, because the intensities from fluorine are usually small. Carbon was observed since it was abundant on the film surface from burning off the organic materials included in the original ZnCo2O4 paste. The atomic concentrations of the film surface show that zinc and cobalt are present in a ratio of approximately 1 to 1.2, which deviates from the ratio in ZnCo2O4. This can be explained by phase separation of ZnCo2O4. When ZnCo2O4 is exposed to high temperature, it undergoes phase separation into ZnO and Zn0.5Co2.5O4, exhibiting a mixed Co2+ and Co3+ oxidation state [16]. The small Co2+ satellite peak observed in XPS supports the idea that part of the ZnCo2O4 underwent phase separation. The EDS data in Table 1 also support the phase separation of ZnCo2O4, as the zinc-to-cobalt ratio fell between 1:2 and 1:5. When the powder from the reaction was calcined to obtain the final ZnCo2O4 powder, some of the ZnCo2O4 particles were transformed into ZnO and Zn0.5Co2.5O4 particles. The atomic ratio of zinc to cobalt shown in Table 2 is close to 1 to 1.2 because more ZnO particles than ZnCo2O4 particles were located on the surface of the film. XPS can only analyze the surface of the film, so with the ZnCo2O4 particles hidden beneath the surface, the analysis showed less cobalt and more zinc than is present in the bulk material.

3.2.2. Ultraviolet Photoelectron Spectroscopy

Figure 5. The ultraviolet photoelectron spectroscopy (UPS) spectrum of the ZnCo2O4 thin film. The Fermi energy level (EF) is at 26.2 eV, where the intensity reaches 0. The work function, calculated from the relationship between the work function and the Fermi level, is 4.76 eV for ZnCo2O4, slightly higher than the lowest work function of NiO.

Ultraviolet photoelectron spectroscopy (UPS) showed that the synthesized material is semiconducting in nature. At the Fermi level, the UPS intensity is 0; for our UPS instrument, the Fermi level lies at 26.2 eV. The work function is the energy required to pull an electron out of the surface, from the Fermi level to the vacuum level. ZnCo2O4 had a work function of 4.76 eV, which indicates that its valence band position is slightly lower than that of NiO, whose lowest work function is 4.4 eV [8]. The lower work function increases the energy gap between the valence band and the Nernstian potential of the electrolyte, increasing Voc.
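A commonly used relation for extracting a work function from a UPS spectrum, stated here only as background (the photon energy hν and the secondary-electron cutoff energy are not reported above), is

\[ \phi \;=\; h\nu \;-\; \left(E_{\mathrm{cutoff}} - E_{F}\right) \]

where \(E_{\mathrm{cutoff}}\) is the secondary-electron cutoff and \(E_F\) is the Fermi edge of the spectrum.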



Although ZnCo2O4 showed a slightly higher work function than the lowest NiO work function, ZnCo2O4 with a lower work function is expected to be obtainable at an optimum calcination temperature. This expectation is based on the fact that NiO has varying work functions depending on the calcination temperature, with the lowest work function obtained at a calcination temperature of 450 °C [8]. The UPS data also showed that the synthesized material is a p-type semiconductor.

3.3. Dye-Sensitized Solar Cell (DSSC) Device Performance Analysis

Four dye-sensitized solar cell devices, D5-Z, D6-Z, D7-Z, and D8-Z, were constructed based on the synthesized ZnCo2O4 film.

3.3.1. Photovoltaic Metrics

Table 3. Photovoltaic metrics for the performance of DSSC devices constructed with ZnCo2O4 as the p-type photocathode. Jsc indicates short-circuit current, Voc indicates open-circuit voltage, FF indicates fill factor, and PCE indicates power conversion efficiency.

Short-circuit currents (Jsc) of the ZnCo2O4 devices ranged from 0.13 to 0.14 mA/cm2, significantly lower than that of the optimized NiO device, 1.18 ± 0.09 mA/cm2. This low Jsc is attributed to the extremely low dye absorption of the ZnCo2O4 film. However, if the film were modified to absorb more dye, Jsc would be expected to increase significantly, since the open-circuit voltage (Voc) of this device is significantly higher than that of the NiO device. The NiO device modified for maximum Voc reaches 150 ± 20 mV, but that device functions poorly because the modification degrades device performance; the maximum Voc of a properly working NiO device is 108 ± 4 mV. Without any modification, the ZnCo2O4 device has a higher Voc of 160 mV and functions properly. High Voc reduces recombination because hole diffusion becomes faster through the film [8]. The higher the fill factor (FF), the better the device performance; the FF of this device is remarkably low, but it is expected to increase as Jsc increases. Power conversion efficiency (PCE) also depends strongly on Jsc, so its value is likewise expected to increase as Jsc increases.

The ZnCo2O4 devices also have very low dark saturation currents. The dark saturation current is the current produced by the device when there is no light; the closer it is to 0 A/cm2, the better the device performance, because no photocurrent should be produced in the dark. Photocurrent should be produced only by the dye absorbing light. The dark saturation currents of the ZnCo2O4 devices ranged from 5.07E-06 A/cm2 down to 3.66E-07 A/cm2, which is extremely small. Compared to the lowest dark saturation current of the NiO device, 1.10E-05 A/cm2, the ZnCo2O4 devices have lower dark saturation currents.

3.3.2. Photocurrent Density vs. Voltage Curve

Figure 6. Photocurrent density vs. voltage curve of device D5-Z. Line A is the light curve, produced by the device in the presence of light, and line B is the dark curve, produced when the device operated without light.

Curve B shows an extremely flat dark curve close to a photocurrent density of 0; the dark saturation current is 3.20415E-07 A/cm2. Curve A shows the photocurrent density under illumination. Because Jsc is not very high for this device, the light curve does not show improved performance.

3.3.3. External Quantum Efficiency (EQE)

Figure 7. The external quantum efficiency (EQE) of the ZnCo2O4 device.

The external quantum efficiency (EQE) of the device was derived from the incident photon-to-current efficiency (IPCE). The photocurrent produced by the device was measured for wavelengths from 400 nm to 700 nm, and the EQE was plotted at each wavelength using EQE (%) = collected current / incident photons. According to the graph, the device responds most strongly to light at a wavelength of 520 nm; at that wavelength, 1.4% of the photons hitting the sample were collected as current. This result matches the absorbance spectrum of dye P1, which was used to make the ZnCo2O4 device.
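A minimal sketch of this calculation is shown below, with the measured photocurrent converted to collected electrons per second. The numerical values are hypothetical, since the measured photocurrents and illumination powers are not tabulated here.

```python
import numpy as np

Q_E = 1.602e-19   # elementary charge (C)
H = 6.626e-34     # Planck constant (J s)
C = 2.998e8       # speed of light (m/s)

def eqe_percent(wavelength_nm, photocurrent_a, optical_power_w):
    """EQE(%) = electrons collected per second / photons incident per second."""
    wavelength_m = wavelength_nm * 1e-9
    electrons_per_s = photocurrent_a / Q_E
    photons_per_s = optical_power_w * wavelength_m / (H * C)
    return 100.0 * electrons_per_s / photons_per_s

# Hypothetical example: 1 uA of photocurrent under 50 uW of 520 nm illumination
print(eqe_percent(520, 1e-6, 50e-6))
```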



3.3.4. Open Circuit Voltage Decay and Lifetime

Figure 8. Lifetime of the device measured from the open-circuit voltage decay graph.

The lifetime of the cell is the time elapsed from the point at which the open-circuit voltage begins to be measured. The lifetime of the device is approximately 101 seconds, similar to the average value of other general DSSC devices.

4. Discussion and Conclusions

4.1. Discussion

The produced p-type ZnCo2O4 had advantages that were inaccessible in previous p-type materials, most notably NiO. Although the devices employing ZnCo2O4 showed low Jsc and PCE, they showed significantly higher Voc and a flatter dark saturation curve without any modification. In particular, the dark saturation current of ZnCo2O4 was 102 times lower than that of modified NiO. These characteristics demonstrate the possibility of using ZnCo2O4 as a p-type material in next-generation DSSCs and DSPECs. The Voc and dark saturation current of the ZnCo2O4 device are expected to improve more dramatically than those of the NiO device, because it starts with a higher Voc and a lower dark saturation current than unmodified NiO. The Jsc and PCE of the device are expected to improve when the dye loading level on ZnCo2O4 is modified: if the amount of dye increases, Jsc and PCE will increase significantly, since the material inherently has a large Voc and a low dark saturation current. Modification of ZnCo2O4 can be an alternative route to high DSSC and DSPEC efficiency, as modification of NiO has not yet reached that level.


4.2. Conclusion

The goal of this project was to synthesize novel, nanoporous, p-type ZnCo2O4 sheets for advanced photocathodic applications. The synthesis was carried out at 110 °C for 16 hours, and the synthesized material was annealed at 450 °C for 5 hours. The powder was characterized using XRD, SEM, and EDS. A paste was made from the powder and coated on the surface of FTO glass as a thin film, and the film was analyzed by XPS and UPS in order to study its properties and suitability for use in advanced photocathodic applications. The nanoporous structure indicated by the SEM images suggests that the material has an extremely high surface area. Hence, when the new material is used in solar energy production devices such as DSSCs and DSPECs, the dye loading on the semiconductor film surface can be up to 300 times greater than the dye loading on a planar bulk film. XRD showed the crystallinity of the material, and EDS showed its atomic ratios, indicating the correct spinel structure with slight impurity caused by calcination at high temperatures. XPS showed the atomic ratios of the thin film surface, and UPS gave the work function of the material. The performance of the p-DSSC device constructed with the ZnCo2O4 thin film showed that ZnCo2O4 naturally has a high Voc and a low dark saturation current. EQE measurements showed that the dye was the only material creating current in the device, indicating correct device performance. The experiment showed that the novel nanoporous ZnCo2O4 sheets have the potential to become an alternative p-type material for DSSCs and DSPECs in order to improve their performance.

4.3. Future Work

Future work related to this project includes:
• Surface modification of ZnCo2O4 in order to increase its dye load
• Surface modification of ZnCo2O4 in order to increase its Voc
• Surface modification of ZnCo2O4 in order to further decrease its dark saturation current
• Performance testing of films on modified ZnCo2O4 p-DSSC devices and comparison of the results with non-modified ZnCo2O4 p-DSSC devices and modified NiO p-DSSC devices.

Acknowledgements

• Dr. James F. Cahoon – University of North Carolina at Chapel Hill
• Cory Flynn – University of North Carolina at Chapel Hill
• Dr. Myra Halpin – North Carolina School of Science



and Math
• Dr. Michael Bruno – North Carolina School of Science and Math
• Shannon McCullough – University of North Carolina at Chapel Hill
• Esther Oh – University of North Carolina at Chapel Hill

References

[1] Environmental Protection Agency, Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2010, 2012
[2] Energy Information Agency (EIA), How much of the U.S. carbon dioxide emissions are associated with electricity generation?, 2012
[3] O. Edenhofer et al., Intergovernmental Panel on Climate Change (IPCC), IPCC Special Report on Renewable Energy Sources and Climate Change Mitigation. Prepared by Working Group III of the Intergovernmental Panel on Climate, Cambridge University Press, 2011, pp. 1075
[4] A. Valero, Physical Geonomics: Combining the exergy and Hubbert peak analysis for predicting mineral resources depletion, Resources, Conservation and Recycling, Volume 54, 2010, pp. 1074-1083
[5] Benefits of Renewable Energy Use, Retrieved from http://www.ucsusa.org/clean_energy/our-energy-choices/renewable-energy/public-benefits-of-renewable.html#.VCRcAdy-UWY
[6] M. Grätzel et al., Global Market Outlook for Photovoltaics 2014-2018, European Photovoltaic Industry Association
[7] C. Flynn (UNC EFRC), personal communication, June 30, 2014
[8] C. Flynn et al., “Hierarchically-structured NiO nanoplatelets as mesoscale p-type photocathodes for dye-sensitized solar cells”, J. Phys. Chem. C, Vol. 118, pp. 14177–14184, 2014
[9] A. Hagfeldt, Brief Overview of Dye-Sensitized Solar Cells, AMBIO, Vol. 41, 2012
[10] A. Yella et al., “Porphyrin-sensitized solar cells with cobalt (II/III)-based redox electrolyte exceed 12 percent efficiency”, Science, Vol. 334, pp. 629-634, 2011
[11] H. J. Kim et al., “Structural and transport properties of cubic spinel ZnCo2O4 thin films grown by reactive magnetron sputtering”, Solid State Communications, Vol. 129, pp. 627–630, 2004
[12] F. Odobel et al., “Recent advances and future directions to optimize the performances of p-type dye-sensitized solar cells”, Coord. Chem. Rev., Vol. 256, pp. 2414-2423, 2012
[13] P. Monk et al., Electrochromism and Electrochromic Devices, Cambridge University Press, 2007
[14] B. Sun et al., “Hierarchical NiCo2O4 nanorods as an efficient cathode catalyst for rechargeable non-aqueous Li–O2 batteries”, Electrochemistry Communications, Vol. 31, pp. 88-89, 2013

[15] S. Ito et al., “Fabrication of thin film dye sensitized solar cells with solar to electric power conversion efficiency over 10%”, Thin Solid Films, Vol. 516, pp. 4613–4619, 2008
[16] S. Huber et al., “Synthesis and magnetic properties of Zn spinel ceramics”, Ceramics, Vol. 57, pp. 162-166, 2013



A Computational and Statistical Analysis Examining the Impact of Polymers, Orientations, and Structure on Organic Solar Cell Performance using a Semi-Empirical Monte Carlo Model

Pranav Kemburu

ABSTRACT

Organic solar cells show potential for producing cheaper energy than other available alternative energy sources. With an active layer created from a polymer:fullerene blend, organic photovoltaic (OPV) cells are more versatile than inorganic solar cells. Despite these advantages, OPVs demonstrate lower efficiencies than inorganic cells due to fundamental differences in the physics of device operation. One way to raise the efficiency is to better understand how polymers, molecular orientations, and structure impact device performance through the Dynamic Monte Carlo (DMC) simulation. The DMC simulation models the power conversion process within cells by examining the particles interacting within them, allowing the polymers PCDTBT, PSBTBT, PCPDTBT, PTB7, and P3HT to be tested. The DMC simulation is conducted using electrical parameters, such as carrier mobilities. Within our study we include new parameters, [100] and [010] stacking determined by GI-WAXS measurements, that have not been implemented before, and we examine the effects of replacing PEDOT-PSS with Graphene Oxide. We show that Graphene Oxide is a slightly less efficient hole-transporting layer. Our simulation showed an efficiency preference for the [010] orientation, with the occasional polymer preferring the [100]. This differs from previous works and shows the potential for interesting studies on how orientation can impact OPVs.

1. Introduction

Organic solar cells offer the opportunity to provide cheaper alternative energy than other green energy products, such as inorganic solar cells. Created out of polymers and small molecules, organic solar cells are easy to produce, and a large range of materials can be used in their creation. As flexible, lightweight devices, organic solar cells are more adaptable and more versatile than the widely used inorganic solar cells, and their versatility offers a range of applications and implementations that inorganic solar cells cannot attain. Although organic photovoltaic (OPV) cells have a great number of advantages, inorganic solar cells are used because of their higher efficiencies. OPV cells have reached a maximum efficiency of 18%, lagging behind their inorganic counterparts' maximum efficiency of 45% (1). There exist numerous challenges in raising the efficiencies of organic solar cells, but extensive research has been done on them. Research has been conducted to determine the specific properties behind organic solar cell efficiency values and ways to improve them; however, not enough information is available to make organic solar cells competitive. Many models have been created to simulate the processes of organic solar cells in order to create quicker and more effective methods of measuring their efficiencies, but they lack parameters that are necessary to make them as accurate as experimental designs.

By better understanding the parameters of organic solar cells, more accurate models can be created to simulate the interactions of particles within them, allowing researchers to examine the physics of organic solar cells without actually creating cells to test. If we are able to determine the properties that most impact efficiency values, we will have created an opportunity to replace inorganic solar cells with a cheaper, mass-producible alternative that can produce green energy.

Organic solar cells have a completely different makeup and work by a completely different set of physics than inorganic solar cells. An organic solar cell is a stacked structure comprised, from bottom to top, of: a glass substrate, an indium tin oxide (ITO) layer, a hole-transporting layer (commonly PEDOT-PSS), an active layer, and an electrode (aluminum). The structure can be seen in Figure 1. The hole-transporting layer smooths the ITO layer and increases the work function. Within the active region is a polymer donor layer, which donates electrons, and a small molecule acceptor layer, which accepts electrons. The active layer is where power is produced. For this study, the morphology of the active layer is a bulk heterojunction (BHJ): a mixing of the acceptor and donor layers. A BHJ morphology was chosen as recent studies have shown it to produce higher efficiencies (2). A solar cell produces power through the photovoltaic process. When the donor absorbs a photon from light, the donor generates an exciton. An exciton is a particle



comprised of an electron and hole pair. As the exciton travels through the active layer, it dissociates into its

separate parts. If the electron can reach the accepting small molecule without meeting any holes, then the cell produces power. On the other hand, if an electron meets again with a hole, it will recombine into an exciton and will not produce power. By better understanding the operation of organic solar cells, we can improve their efficiencies.

Figure 1: Structure of an organic solar cell (3).

One such model to simulate organic solar cells is the Dynamical Monte Carlo (DMC) model produced by Watkins et al. (4). The DMC model simulates charge transport and can be used to model different blends of polymers and small molecules. Watkins simulated a blend of PFB and F8BT, obtaining values that matched experimental results. Watkins created a simplified version of the model in order to better understand organic solar cells, not to discover actual efficiency values. Fan Yang and Stephen Forrest took this simplified model and adapted it to produce numerical values for efficiency (5). Yang, Forrest, and Watkins experimented with different morphologies of the active layer: a bilayer morphology, with the acceptor layer stacked on the donor layer, and a checkerboard morphology. By correcting the simplifications made by Watkins, they modeled a blend of CuPc and C60, obtaining simulated results that correlated with experimental results. Our model draws heavily from the model of Yang and Forrest, extending the focus to a BHJ solar cell comprised of the PCBM small molecule and various polymers. Studying the effects of pairing PCBM with different polymers in different orientations will offer more insight into how organic solar cells work. This study will also examine the effect of replacing the PEDOT-PSS layer with Graphene Oxide (GO), which has shown potential as a replacement hole-transporting layer (6). Through better thin-film deposition techniques such as RIR-MAPLE (7), we can obtain experimental results with higher efficiencies that correlate with our simulation results. The goal of our study is to better understand how organic solar cells function, and how different structures, polymers, and GI-WAXS orientations affect our model. By doing this, a cleaner, greener, and cheaper method of obtaining energy can be implemented on a large scale.

Biology and Chemistry Research 2. Materials and Methods 2.1 Model Description Our model was built with three parts: a bulk heterojunction morphology generator, a transfer matrix to calculate exciton generation rates and distribution, and absorption efficiencies, and a Dynamical Monte Carlo that simulated charge transport. Our simulation was based on a nanoscale model. 2.1.1 Bulk Heterojunction Morphology The first part of the model generated the morphology we used throughout our simulation. The methodology of generation is the same method developed and used by Watkins et al. (4). The morphology was generated by controlling the Ising Model and Kawasaki spin dynamics with a Metropolis Algorithm Monte Carlo. The program begins by assigning every lattice point, or exciton occupation site, a spin of positive or negative one. The sign of the spin determines whether the particular point will be a part of the donor or acceptor layer, thereby randomizing the morphology. The morphology generator was run using C++, and the morphology pictures were created using MATLAB.

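A minimal sketch of this kind of Ising/Kawasaki morphology generation is shown below. It is illustrative only (Python rather than the C++ used by the authors), and the lattice size, interaction energy, and temperature are assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_bhj(n=32, sweeps=200, J=1.0, kT=0.8):
    """Kawasaki (spin-exchange) Ising Monte Carlo with Metropolis acceptance.
    +1 sites are treated as donor and -1 sites as acceptor; exchanges conserve the blend ratio."""
    lattice = rng.choice([-1, 1], size=(n, n))  # random initial donor/acceptor assignment
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def nb_sum(x, y, exclude):
        s = 0
        for dx, dy in moves:
            p = ((x + dx) % n, (y + dy) % n)
            if p != exclude:
                s += lattice[p]
        return s

    for _ in range(sweeps * n * n):
        x1, y1 = rng.integers(0, n, size=2)
        dx, dy = moves[rng.integers(0, 4)]
        x2, y2 = (x1 + dx) % n, (y1 + dy) % n
        s1, s2 = lattice[x1, y1], lattice[x2, y2]
        if s1 == s2:
            continue  # swapping identical spins changes nothing
        n1 = nb_sum(x1, y1, exclude=(x2, y2))
        n2 = nb_sum(x2, y2, exclude=(x1, y1))
        dE = 2.0 * J * s1 * (n1 - n2)  # energy change for exchanging the two unlike spins
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            lattice[x1, y1], lattice[x2, y2] = s2, s1  # Metropolis acceptance
    return lattice

morphology = generate_bhj()  # donor/acceptor domains coarsen as the number of sweeps increases
```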
Figure 2: A bulk heterogeneous morphology. Red represents the donor layer and blue represents the acceptor layer.

2.1.2 Exciton Generation Rate

The transfer matrix used in our study is the same matrix as developed by Yang and Forrest (5). The optical field intensity, based on a wavelength range of 300-900 nm and the distance from the cathode, is obtained by treating the OPV as a microcavity. The interfaces are assumed to be optimal, optically flat, and isotropic with real and imaginary indices of refraction. The optical field is calculated using the transfer matrix method (8)(9). The tangential component of the electric field must always be continuous; this boundary condition allows us to create 2x2 matrices to define the system. By assuming that the structure of the solar cell is sandwiched between two semi-infinite layers, the glass from below and air


from above, we can calculate the electrical field of the cell. Our model differed from that of Yang and Forrest by including a 30 nm thick hole-transporting layer. The GO and PEDOT-PSS layers will be deposited on experimental devices in the future, and their inclusion allows our model to be more accurate in terms of correlation with experimental values. Yang and Forrest also did not state the thicknesses of the ITO and aluminum contact layers, so typical thicknesses of 100 nm were used. These assumptions allowed us to calculate the optical field intensity. Within our model, our focus was on the optical constants. The optical constant n, the real component of the index of refraction of the material, was used to convert the total electric field to optical field intensity. For each material used, we collected values of n to calculate the optical field intensity of the layer in which the material lies. The equation below is where n was used, with c representing the speed of light and ε0 representing the vacuum permittivity.
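A standard form consistent with the variables defined above, and with the transfer-matrix treatment of refs. (8)(9), would be

\[ I(x,\lambda) \;=\; \tfrac{1}{2}\, c\, \varepsilon_0\, n(\lambda)\, \lvert E(x,\lambda) \rvert^{2} \]

where \(E(x,\lambda)\) is the total electric field obtained from the transfer matrix.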

To calculate the absorption coefficient, we used the following equation, where λ is the wavelength in vacuum and k is the imaginary component of the index of refraction for the layer. The exciton generation rate can then be calculated by multiplying the optical field intensity by the absorption coefficient.
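The standard expression matching the variables named here, together with the product that gives the generation profile, would be

\[ \alpha(\lambda) \;=\; \frac{4\pi k(\lambda)}{\lambda}, \qquad G(x,\lambda) \;\propto\; \alpha(\lambda)\, I(x,\lambda) \]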

The optical constants for the different polymers and materials were obtained from the literature. The optical constants for the ITO and aluminum layers were taken from the Refractive Index Database (12). The constants for PEDOT-PSS were taken from Hoppe et al. (11). The constants for P3HT:PCBM were taken from Monestier et al. (10). The constants for GO were taken from Jung et al. (13). The constants for PTB7:PCBM were taken from Hedley et al. (14). The constants for PCDTBT:PCBM were taken from Nickel et al. (15). The constants for PSBTBT were taken from Mescher et al. (16). The constants for PCPDTBT were taken from Bernhauser (17). This portion of the model was run using MATLAB.

2.1.3 Charge Transport Model

The final part of the model is the Dynamical Monte Carlo portion used to monitor charge transport. Particles and their interactions are monitored throughout a 100x100x60 cubic lattice. The size of each lattice position is derived from the lattice constant, which is obtained in this study from the [010] and [100] GI-WAXS orientations. Excitons are constantly produced according to the exciton generation rate.


The model focuses on the collection of carriers and the recombination of particles. The First Reaction Method (FRM), a commonly used DMC algorithm, was used to speed up our simulation. In the FRM, all possible actions for a particle are generated at each step. The FRM lines up all of these events based on the time it takes to complete them, and then executes the event that occurs in the shortest amount of time. After this event is executed, all possible events are recalculated. The FRM reduces the computation necessary for the program significantly, and differs by less than 0.5% in accuracy (18). The exciton has three possible events that it can undertake: it can move, dissociate, or recombine within the simulation. Throughout its entire lifetime, an exciton can only travel its diffusion length, and it executes these events repeatedly until its lifetime is over. A charge carrier likewise has three events it can execute: move, recombine, or be collected. If a charge carrier is collected, then power is generated. The efficiency of the system, the external quantum efficiency (EQE), ηEQE, was calculated in this portion of the model. EQE is the number of carriers collected divided by the number of incident photons. It is calculated through four other efficiency values: the absorption efficiency (ηα), describing the number of excitons generated for each incident photon; the exciton diffusion efficiency (ηED), describing the number of excitons that reach the donor without recombining; the charge transfer efficiency (ηCT), describing the number of electrons and holes generated for each exciton that reaches the donor-acceptor interface; and the charge collection efficiency (ηCC), describing the number of charges collected for each electron and hole. ηCT is assumed to be one for OPVs, due to the speed of the process and the scale of the model (9). The DMC simulation was run in Java.
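The event scheduling described above can be sketched in a few lines. The following is illustrative only (Python rather than the Java used in the study), and the event types and rate constants are hypothetical placeholders, not the parameters used in the model.

```python
import heapq
import math
import random

random.seed(0)

def waiting_time(rate):
    """Sample an exponential waiting time (ns) for a process with the given rate (1/ns)."""
    return -math.log(1.0 - random.random()) / rate

def first_reaction_step(particles, rates):
    """Queue every possible event for every particle, then execute only the earliest one."""
    queue = []
    for pid, particle in enumerate(particles):
        for event in particle["allowed_events"]:           # e.g. "hop", "dissociate", "recombine"
            heapq.heappush(queue, (waiting_time(rates[event]), pid, event))
    t, pid, event = heapq.heappop(queue)                    # the event with the shortest time wins
    return t, pid, event                                    # caller applies it, then all events are recalculated

# Hypothetical example: one exciton and one free charge carrier
particles = [{"allowed_events": ["hop", "dissociate", "recombine"]},
             {"allowed_events": ["hop", "collect", "recombine"]}]
rates = {"hop": 10.0, "dissociate": 2.0, "recombine": 0.5, "collect": 1.0}  # illustrative rates, 1/ns
print(first_reaction_step(particles, rates))
```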



2.2 Parameters Investigated

The charge transport model used a set of parameters obtained from the literature. These values were used to simulate the effect of blending different polymers with PCBM in order to study the factors that make organic solar cells more efficient. These values determine the structure, orientation, and interaction of particles within the active layer of the cell. The orientation, [010] or [100], goes into determining the lattice constant, which is linked to the sphere of interaction of particles, specifically the electrons and the holes. Some values were not available in the current literature, so the values of P3HT were used in their stead; P3HT was picked as it is the most commonly used polymer for organic solar cells today. This happened for all polymers for the recombination rate and the energy width of the density of states (36, 37). Many values of PSBTBT were also assumed, aside from the orientation length; therefore, the efficiency values of PSBTBT will be higher than reported in the literature.

2.3 Procedures

We began by using previous knowledge from research and past Dynamical Monte Carlo models to develop a more accurate model that corrects for earlier assumptions. In our project, we focused on a new model composed of PCBM molecules embedded in a polymer matrix, and we used this model to simulate the photovoltaic process within these organic solar cells. After we created the model, we gathered the parameters of these molecules and polymers to apply them to the simulation. We collected the optical constants of all of the structural materials that made up the organic solar cell in order to find the exciton generation rate and absorption efficiency. We also collected the energy and structural values of the polymer and fullerene layers in order to simulate the charge transport within the organic solar cell and to obtain the efficiency values. After we gathered these values, we generated a randomized BHJ morphology, with a ratio of 1:1 and a feature size of 6.42, to use for our study. We then ran this morphology, thereby obtaining the total exciton generation rate to use in our DMC simulation. Through the DMC simulation, we analyzed the data using statistical analysis to find general trends among the different materials, and from there we drew conclusions on the impact of using different materials.

3. Results

3.1 Polymers

We begin by obtaining data for the different sets of polymers, structures, and orientations. Because some parameters were unavailable and the values for P3HT were substituted, we focus only on general trends within the polymer sets and not on exact comparisons of efficiencies. We collected data for the different particles: excitons, holes, charge carriers, and electrons. The information about these particles that is important within this study is whether they dissociate, recombine, or are collected. Plotting the collection of the electrons and holes, as well as the number of excitons created, against time, we get:


Figures 3, 4, and 5: Graphs of particle behavior against time (in ns) for [010] PEDOT-PSS PTB7.

By applying a linear fit, we are able to obtain the gradient, i.e., the average number of particles collected and created per ns. We found the average of the number of holes collected and the number of electrons collected, and then divided it by the number of excitons created. The resulting value is the charge collection efficiency, because it shows how efficiently the excitons are transferred and how often excitons do not recombine. The exciton diffusion efficiency can be calculated by dividing the total number of excitons dissociated by the total number of excitons created, thus allowing us to find out how many excitons diffused. The charge transfer efficiency is again assumed to be equal to one due to the ultrafast speed of the photovoltaic process within organic solar cells. The absorption efficiency can be calculated from the exciton generation profile used in the second part of the model. From these values, we can calculate the EQE for each of the different structures and orientations for each polymer we are studying.
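Written compactly, the quantities described above combine as

\[ \eta_{CC} \;=\; \frac{\tfrac{1}{2}\left(N_{e}^{\mathrm{coll}} + N_{h}^{\mathrm{coll}}\right)}{N_{exc}^{\mathrm{created}}}, \qquad \eta_{ED} \;=\; \frac{N_{exc}^{\mathrm{diss}}}{N_{exc}^{\mathrm{created}}}, \qquad \eta_{EQE} \;=\; \eta_{\alpha}\,\eta_{ED}\,\eta_{CT}\,\eta_{CC}, \quad \eta_{CT}\approx 1 \]

where the collection and creation rates are those obtained from the linear fits.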



3.2 Orientation

We can then compare the different orientations and structures of polymers, showing the general trends for improving the efficiency of the polymers, as well as the most efficient orientations and structures. Graphing the EQE values on a bar chart gives an easier view for comparing the different parameters.

Figure 6: EQE values of PTB7 for each orientation and hole-transporting layer

3.3 Structure

We can compare the efficiencies of the Graphene Oxide layer and the PEDOT-PSS layer within MATLAB. The two hole-transporting layers affect not only the exciton generation rate, which affects the diffusion and collection efficiencies, but also the absorption efficiency. The optical field, as a function of distance from the aluminum contact, is multiplied by the absorption coefficient, and the result is integrated over wavelengths from 300 to 900 nm. This result is the exciton generation rate. The distribution of excitons in the OPV corresponds to the shape of the curve, as shown in Figures 7 and 8.

Figures 7 and 8: Exciton generation rate versus wavelength for a blend of PTB7:PCBM. Graph B is the exciton generation rate with PEDOT:PSS as the hole-transporting layer, and graph A is the exciton generation rate with Graphene Oxide as the hole-transporting layer.
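A minimal numerical sketch of this wavelength integration is given below, using placeholder optical data; in the model, the field profile comes from the transfer matrix and the n, k values from the references cited above.

```python
import numpy as np

wavelengths = np.linspace(300e-9, 900e-9, 301)        # 300-900 nm range used in the model
depths = np.linspace(0, 100e-9, 101)                   # position measured from the aluminum contact

# Placeholder optical data (flat profiles), standing in for transfer-matrix output and n, k tables
intensity = np.ones((depths.size, wavelengths.size))   # optical field intensity at each depth and wavelength
k_extinction = 0.3 * np.ones_like(wavelengths)         # imaginary part of the refractive index
alpha = 4 * np.pi * k_extinction / wavelengths         # absorption coefficient per wavelength

# Exciton generation profile: multiply intensity by alpha, then integrate over wavelength
generation_rate = np.trapz(intensity * alpha[None, :], wavelengths, axis=1)
```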

4. Discussion

From our results, we are able to see general trends among the different polymers for their orientations and structures as presented by computational modeling. Because our simulations modeled different orientations and utilized a different hole-transporting layer than is conventionally used, we can examine the unique effects that each of these factors has on the efficiencies.

4.1 Orientation

Generally, there is a trend toward the [010] GI-WAXS orientation being preferred over the [100] GI-WAXS orientation. This is largely due to the sphere of influence exerted by the two different orientations: for the [100] orientation, the particles tend to have to travel a larger distance to the donor-acceptor interface than they do for the [010] orientation. This indicates that the smaller lattice constant, due to the smaller [010] value, is preferable to the larger [100] value. By lowering the lattice constant, the hopping conduction is improved and there is overall better charge collection among the particles (38). Some polymers and structures do not follow the trend of [010] being more effective than [100], such as the PEDOT-PSS structured P3HT and PCPDTBT versions. As the effect of orientation has not been deeply studied, not much is understood about the effects that orientation might have, and comparative experimental values are not available. The inclusion of these orientations shows that the DMC simulation is sensitive to structural and orientation changes, enhancing the DMC simulation's credibility.

4.2 Structure

From our data we see an efficiency preference for PEDOT-PSS over Graphene Oxide. This correlates with published experimental studies, but the efficiency values of Graphene Oxide are still comparable to those of PEDOT-PSS (6). Polymers such as PCDTBT and PCPDTBT show possible trends of Graphene Oxide being a more effective hole-transporting layer than PEDOT-PSS. This varies with the results shown by



PTB7, PSBTBT, and P3HT, so experimental results are needed against which to compare the simulation results. Most studies of Graphene Oxide as a hole-transporting layer have been done with a P3HT:PCBM blend, so our study points to the interesting possibility that a different combination of materials may behave differently.

4.3 Polymers

Our study shows P3HT as the least efficient of the polymers, which does not correlate with experimental data (39). Although the values of P3HT correspond with experimental values, the other polymer values do not. This is largely due to assuming similarities between the different polymers and P3HT, and using P3HT's parameters for these polymers. For each polymer, we can compare the values for different orientations and structures against each other to get the general trends listed above. Looking at Figure 6, we can see the general trends between the specific polymer, PTB7, and its preferences for certain orientations and structures. The other polymers tend to follow these trends with few exceptions.

5. Conclusions and Future Work

In this experiment we examined the effects of orientation, structure, and polymer makeup of different organic cells through a Dynamical Monte Carlo simulation. We showed that Graphene Oxide is a less suitable hole-transporting layer than PEDOT-PSS, which corresponds with experimental results (6). In general, organic solar cells tend to have higher efficiencies in the [010] orientation rather than the [100] orientation, due to the distance that particles have to travel before they can be collected. The different polymers cannot all be categorized under these general trends, as some polymers have unique properties and parameters, so polymers should be tested under all available structures and orientations. In this way, we can obtain a better understanding of the impacts of different polymer profiles.

6. Acknowledgements

I would like to thank Dr. Adrienne Stiff-Roberts and Ayomide Atewologun at Duke University for assisting me and mentoring me in my research project. I would also like to thank Dr. Sarah Shoemaker for her assistance throughout my research experience.

References

1. Wilson, G. M. (2013). “Building on 35 Years of Progress - The Next 10 Years of Photovoltaic Research at NREL.” 2014, from https://www.purdue.edu/discoverypark/energy/assets/pdfs/pdf/Pioneer%20in%20Energy%20Greg%20Wilson%20Presentation%207.17.13.pdf.
2. Xu, X. (2012). Monte Carlo Simulation of Charge

Transport in Organic Solar Cells. Department of Electrical and Computer Engineering. Duke University 27.
3. S. E. Shaheen, N. Kopidakis, D. S. Ginley, M. S. White and D. C. Olson, "Inverted bulk-heterojunction plastic solar cells," SPIE Newsroom, 24 May 2007. [Online]. Available: http://spie.org/x14269.xml. [Accessed 22 July 2014].
4. P. K. Watkins, A. B. Walker and G. L. Verschoor, "Dynamical Monte Carlo Modelling of Organic Solar Cells: The Dependence of Internal Quantum Efficiency on Morphology," Nano Letters, vol. 5, no. 9, pp. 1814-1818, 2005.
5. F. Yang and S. R. Forrest, "Photocurrent Generation in Nanostructured Organic Solar Cells," ACS Nano, vol. 2, no. 5, pp. 1022-1032, 2008.
6. S. Li, K. Tu, C. Lin, C. Chen, M. Chhowalla, "Solution-Processable Graphene Oxide as an Efficient Hole Transport Layer in Polymer Solar Cells," ACS Nano, vol. 4, no. 6, pp. 3169-3174, 2010.
7. A. Stiff-Roberts, R. Pate, R. McCormick, K. Lantz, "RIR-MAPLE deposition of conjugated polymers and hybrid nanocomposites for application to optoelectronic devices," AIP Conference Proceedings, vol. 1464, pp. 347-375, 2012.
8. P. Peumans, A. Yakimov and S. R. Forrest, "Small Molecular Weight Organic Thin-Film Photodetectors and Solar Cells," J. Appl. Phys., vol. 93, pp. 3693-3723, 2003.
9. P. Peumans, Y. Aharon and F. R. Stephen, "Erratum: "Small molecular weight organic thin-film photodetectors and solar cells" [J. Appl. Phys. 93, 3693 (2003)]," J. Appl. Phys., vol. 95, no. 5, pp. 2938, 2004.
10. F. Monestier, J. Simon, P. Torchio, L. Escoubas, F. Flory, S. Bailly, R. de Bettignies, S. Guillerez and C. Defranoux, "Modeling the short-circuit current density of polymer solar cells based on P3HT:PCBM blend," Sol. Energy Mater. Sol Cells, 2006.
11. H. Hoppe, N. S. Sariciftci and D. Meissner, "Optical Constants of Conjugate Polymer/Fullerene Based Bulk-Heterojunction Organic Solar Cells," Mol. Cryst. Liq. Cryst., vol. 385, pp. 233-239, 2002.
12. M. Polyanskiy, "Refractive Index Database," 2012. [Online]. Available: http://refractiveindex.info/. [Accessed 24 July 2014].
13. I. Jung, M. Vaupel, M. Pelton, R. Piner, D. Dikin, S. Stankovich, J. An, and R. Ruoff, "Characterization of Thermally Reduced Graphene Oxide by Imaging Ellipsometry," ACS Nano, vol. 112, no. 23, pp. 8499-8506, 2008.
14. Hedley, G. J. et al., "Determining the optimum morphology in high-performance polymer-fullerene organic photovoltaic cells," Nat. Commun. 4:2867 (2013).
15. F. Nickel, C. Sprau, M. Klein, P. Kapetana, N. Christ, X. Liu, S. Klinkhammer, U. Lemmer, A. Colsmann, "Spatial mapping of photocurrents in organic solar cells


comprising wedge-shaped absorber layers for an efficient material screening," Solar Energy Materials and Solar Cells, vol. 104, pp. 18-22, 2012.
16. J. Mescher, S. Kettlitz, N. Christ, M. Klein, A. Puetz, A. Mertenz, A. Colsmann, U. Lemmer, "Design rules for semi-transparent organic tandem solar cells for window integration," Organic Electronics, vol. 15, no. 7, pp. 1476-1480, 2014.
17. Bernhauser, L. (2014). Variable Angle Spectroscopic Ellipsometry (VASE) on Organic Semiconducting Thin Films. Technical Physics, Linz Institute for Organic Solar Cells (LIOS).
18. F. Yang and S. R. Forrest, "Photocurrent Generation in Nanostructured Organic Solar Cells," ACS Nano, vol. 2, no. 5, pp. 1022-1032, 2008.
19. D. J. D. Moet, M. Lenes, M. Morana, H. Azimi, C. J. Brabec, and P. W. M. Blom, Appl. Phys. Lett. 96, 213506 (2010).
20. Hedley, G. J., et al. (2013). "Determining the optimum morphology in high-performance polymer-fullerene organic photovoltaic cells." Nat Commun 4.
21. K. Yonezawa, H. Kamioka, T. Yasuda, L. Han, Y. Moritomo, "Fast Carrier Formation from Acceptor Exciton in Low-Gap Organic Photovoltaic," IOP Science, vol. 5, no. 4, pp. 1-3, 2012.
22. Z. Li, G. Lakhwani, N. Greenham, C. McNeill, "Voltage-dependent photocurrent transients of PTB7:PC70BM solar cells: Experiment and numerical simulation," J. Appl. Phys., vol. 114, pp. 1-8, 2009.
23. W. Chen, T. Xu, F. He, W. Wang, C. Wang, J. Strzalka, Y. Liu, J. Wen, D. Miller, J. Chen, K. Hong, L. Yu, S. Darling, "Hierarchical Nanomorphologies Promote Exciton Dissociation in Polymer/Fullerene Bulk Heterojunction Solar Cells," ACS Nano, vol. 11, no. 9, pp. 3703-3713, 2011.
24. A. Ward, A. Ruseckas, I. Samuel, "A Shift from Diffusion Assisted to Energy Transfer Controlled Fluorescence Quenching in Polymer-Fullerene Photovoltaic Blends," Organic Semiconductor Center.
25. F. Etzold, I. Howard, R. Mauer, M. Meister, T. Kim, K. Lee, N. Back, F. Laquai, "Ultrafast Exciton Dissociation Followed by Nongeminate Charge Recombination in PCDTBT:PCBM Photovoltaic Blends," Journal of the American Chemical Society, vol. 133, pp. 9469-9479, 2011.
26. S. Alem, T. Chu, S. Tse, S. Wakim, J. Lu, R. Movileanu, Y. Tao, F. Bélanger, D. Désilets, S. Beaupré, M. Leclerc, S. Rodman, D. Waller, R. Gaudina, "Effect of mixed solvents on PCDTBT:PC70BM based solar cells," Organic Electronics, vol. 12, no. 11, pp. 1788-1793, 2011.
27. Lu, X. et al., "Bilayer order in a polycarbazole-conjugated polymer," Nat. Commun. 3:795 doi: 10.1038/ncomms1790 (2012).
28. A. Guilbert, J. Frost, T. Agostinelli, E. Pires, S. Lilliu,


J. Macdonald, J. Nelson, "Influence of Bridging Atom and Side Chains on the Structure and Crystallinity of Cyclopentadithiophene−Benzothiadiazole Polymers," Chemistry of Materials, vol. 26, pp. 1226-1233, 2014.
29. B. Collins, Z. Li, C. McNeill, H. Ade, "Fullerene-Dependent Miscibility in the Silole-Containing Copolymer PSBTBT-08," Macromolecules, vol. 44, pp. 9747-9751, 2011.
30. G. Grancini, R. Kumar, M. Maiuri, J. Fang, W. Huck, M. Alcocer, G. Lanzani, G. Cerullo, A. Petrozza, H. Snaith, "Panchromatic "Dye-Doped" Polymer Solar Cells: From Femtosecond Energy Relays to Enhanced Photo-Response," The Journal of Physical Chemistry Letters, vol. 4, pp. 442-447, 2013.
31. Y. Nam, J. Huh, W. Jo, "A computational study on optimal design for organic tandem solar cells," Solar Energy Materials and Solar Cells, vol. 95, pp. 1095-1101, 2011.
32. Szmytkowski, J., "Modeling the electrical characteristics of P3HT:PCBM bulk heterojunction solar cells: Implementing the interface recombination," Semiconductor Science and Technology, 2010. 25(1).
33. Gevaerts, V.S., et al., "Discriminating between bilayer and bulk heterojunction polymer:fullerene solar cells using the external quantum efficiency," ACS Appl Mater Interfaces, 2011. 3(9): pp. 3252-5.
34. Monestier, F., et al., "Modeling the short-circuit current density of polymer solar cells based on P3HT:PCBM blend," Solar Energy Materials and Solar Cells, 2007. 91(5): pp. 405-410.
35. (2014). Example: P3HT orientation analysis. Creative Commons Attribution Share Alike, GISAXS.
36. Boix, P.P., et al., "Determination of gap defect states in organic bulk heterojunction solar cells from capacitance measurements," Applied Physics Letters, 2009. 95(23).
37. Nalwa, K.S., et al., "Dependence of recombination mechanisms and strength on processing conditions in polymer solar cells," Applied Physics Letters, 2011. 99(26).
38. Atewologun, A. (2014). A Semi-Empirical Monte Carlo Model of Organic Photovoltaic Device Performance in Resonant Infrared, Matrix-Assisted Pulsed Laser Evaporation (RIR-MAPLE) Films. Department of Electrical and Computer Engineering. Duke University, 95.
39. A. Guerrero, B. Dörling, T. Ripolles-Sanchis, M. Aghamohammadi, E. Barrena, M. Campoy-Quiles, and G. Garcia-Belmonte, "Interplay Between Fullerene Surface Coverage and Contact Selectivity of Cathode Interfaces in Organic Solar Cells," ACS Nano, vol. 7, no. 5, pp. 4637-4646, 2013.


A Mathematical Analysis of the Molecular Energy of Cyclopropane at Varying Geometries
Guy Blanc

ABSTRACT

Cyclopropane, C3H6, has an equilateral triangle shape for the C-C bonds in its most stable geometry. This study attempted to find a relatively simple yet accurate function F such that E = F(s1, s2, s3), where the si are the three C-C bond lengths and E is the energy of the molecule. A Python program created the molecular energy input files for 1540 different possible geometries of cyclopropane, and they were run with B3LYP/6-31G(d) using Gaussian 03 on the Zeus ECU server. Three main fits were attempted on this data set. The first incorporated only bond and angle stretching, as used in molecular mechanics; this fit had an R2 of 0.9215, with the advantages of being the simplest fit and providing easy-to-visualize information about how the molecule stretches. The second fit also incorporated non-bonding interactions, which greatly increased the complexity but gave an R2 value of 0.9952. The final fit did not use any molecular modeling formulas and aimed to achieve the closest fit with the simplest function; it achieved an R2 value of 0.9668 with a relatively simple function. This project served as a great educational tool and increased my appreciation for how complex computational chemistry calculations are.

1 Introduction
1.1 Cyclopropane Geometry
Cyclopropane, C3H6, can be very useful in organic synthesis.[1]

Figure 1: A picture of cyclopropane from the NCSSM server.

The three carbon bonds in cyclopropane form a triangle. In its most favorable geometry, each of the C-C-C angles in cyclopropane is 60 degrees, because the carbons form an equilateral triangle, whereas a standard tetrahedral geometry would give bond angles of 109.5 degrees. This leads to a high angular strain caused by the ring structure.[2] There is also a torsional strain on the carbon-carbon bonds: it would be preferential if the hydrogen atoms were staggered instead of eclipsed, but due to the planar structure of cyclopropane, this is not possible.[3]

Figure 2: Eclipsed and staggered configurations of hydrogen atoms in ethane. Cyclopropane is forced into the eclipsed configuration, which leads to strain.

1.2 Molecular Mechanics
Molecular mechanics is a fast computational chemistry methodology. It is the application of relatively simple formulas, mostly from classical mechanics, and can be used to compute the energy of a molecule. When computing the energy, the following sources of potential energy are summed:
• Bond stretching
• Angle stretching
• Dihedral angle stretching
• Non-bonding interactions
Bond and angle stretching are approximated as if the bonds and angles were springs. Thus, the following two formulas are used:
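In the standard harmonic (spring) approximation referenced here, the two stretching terms take the form below; a conventional factor of 1/2 is sometimes included and can be absorbed into the spring constants:

E_bond = ks (l − l0)^2
E_angle = kΘ (Θ − Θ0)^2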


where ks and kΘ are spring constants for the bond and angle respectively, l0 is the equilibrium bond length, and Θ0 is the equilibrium bond angle. For simplicity, dihedral angle stretching was not considered in this study. Because the carbon-hydrogen bonds were kept fixed and the carbon atoms are all planar, the energy from dihedral stretching should not deviate significantly from a constant; however, some is still present because of the eclipsed hydrogen atoms. Non-bonding interactions include van der Waals forces, polar interactions, and hydrogen bonds. Van der Waals forces can be approximated using the Lennard-Jones potential, V(r) = a/r^12 − b/r^6, for constants a and b [4]. The Lennard-Jones potential, however, does not take into account any polar interactions. For these, the Stockmayer potential, which augments the Lennard-Jones form with a dipole-dipole term of the form c/r^3 for a third constant c, is more accurate [5].

1.3 Scope of This Project
This project studies the energy of cyclopropane at different geometrical configurations. The dihedral angle between the plane of the carbon atoms and the hydrogen-carbon bonds was kept constant, as was the length of the hydrogen-carbon bonds. The shape of the C-C-C triangle was varied by varying the side lengths. The goal of this paper is to write as simple and accurate a function F as possible that can be used to find the energy given the three C-C bond lengths. Thus,


E = F(s1, s2, s3), where s1, s2, and s3 are the three C-C bond lengths.

2 Computational Approach
All calculations were performed using B3LYP/6-31G(d) with Gaussian 03 [11]. A geometric optimization was performed on cyclopropane using the NCSSM server. This gave the following optimal values:
• C-H bond length = 1.087 Å
• C-C-H angle = 118.175 degrees
• H-C-H angle = 113.921 degrees
• C-C bond length = 1.509 Å
The data set was composed of all combinations of C-C bond lengths with side lengths between 1.025 Å and 1.975 Å, incrementing by 0.05 Å, with all of the other geometrical values in the list above kept constant. These numbers were chosen to allow for 20 possible side lengths while ensuring that none of the geometries tested violated the triangle inequality. With 20 possible side lengths, the following numbers of geometries were tested:
• (20 choose 3) = 1140 scalene triangles
• 2 × (20 choose 2) = 380 isosceles triangles (doubled because a triangle with sides (1.025, 1.025, 1.075) is different from one with sides (1.075, 1.075, 1.025))
• 20 equilateral triangles
This is a total of 1140 + 380 + 20 = 1540 combinations. A Python program was used to enumerate all of the possible side-length combinations and create all 1540 molecular energy input files. The input files were then transferred to the Zeus server at ECU, where a .tcsh script submitted them to the server. After the jobs completed, the output files were transferred back, and a Python program extracted the molecular energy from each output file. The data were stored in a text file, which was then read by the Python programs used for data analysis.
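As a concrete illustration of this workflow, the sketch below is a hypothetical reconstruction, not the original script; the Gaussian route line and file layout are assumptions. It enumerates the 1540 side-length combinations and writes one input-file stub per geometry.

from itertools import combinations_with_replacement

# 20 allowed C-C side lengths: 1.025, 1.075, ..., 1.975 Angstroms
side_lengths = [round(1.025 + 0.05 * i, 3) for i in range(20)]

# Every multiset of three side lengths; no triangle-inequality filter is needed
# because 1.025 + 1.025 > 1.975.  len(geometries) == 1540.
geometries = list(combinations_with_replacement(side_lengths, 3))

for s1, s2, s3 in geometries:
    name = f"cyclopropane_{s1:.3f}_{s2:.3f}_{s3:.3f}.gjf"
    with open(name, "w") as f:
        f.write("# b3lyp/6-31g(d) sp\n\n")   # single-point energy; exact route line assumed
        f.write(f"C3H6 with C-C side lengths {s1}, {s2}, {s3} A\n\n0 1\n")
        # ... Cartesian coordinates for the three carbons and six hydrogens would be
        # generated here from (s1, s2, s3), with the C-H lengths and angles held fixed.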

3 Results and Discussion
3.1 Bond Stretching and Angle Stretching Models
Given the three sides of a triangle, s1, s2, and s3, it is possible to determine the three angles of the triangle, θ1, θ2, and θ3, using the law of cosines [6]. This results in the following (with θi the angle opposite side si, and {i, j, k} = {1, 2, 3}):

θi = arccos( (sj^2 + sk^2 − si^2) / (2 sj sk) )

The first attempt at fitting a function F took into account only bond stretching and angle stretching, and thus had the following form:
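A form consistent with this description and with the five coefficients defined below (bond and angle terms treated as springs, plus a constant offset) is presumably:

F(s1, s2, s3) = a [ (s1 − b)^2 + (s2 − b)^2 + (s3 − b)^2 ] + c [ (θ1 − d)^2 + (θ2 − d)^2 + (θ3 − d)^2 ] + e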

where a, b, c, d, and e are the coefficients that minimize the residual sum of squares, and θ1, θ2, and θ3 are found from the three side lengths using the law of cosines. The values of the coefficients were determined using the Levenberg-Marquardt algorithm, as implemented in the SciPy Python package [7]. They are as follows:
a = 0.7516163582240072
b = 1.61994148406949



c = 0.10303476049651884
d = 8.014103940301279
e = −132.95231551874403
R2 = 0.92149054352
Note that the angles were measured in radians. Thus, the value of d makes no physical sense: d should represent the equilibrium bond angle, but when converted to degrees it indicates an equilibrium bond angle of approximately 460 degrees. In the next attempt, the value of d was fixed to a more reasonable value. To determine which value to use, a geometric optimization using B3LYP/6-31G(d) was performed on propane on the NCSSM server. The C-C-C angle in propane, which should be a close measure of the relaxed equilibrium angle of cyclopropane, was 112.875 degrees, or 1.97004 radians. Thus, in the next fit, the same function was used but the value of d was fixed to this angle. In this fit, the following coefficients were optimal:
a = 0.7516163289853884
b = 1.619941487279439
c = 0.10303477113370649
e = −118.21232468233023
R2 = 0.921490543552
The two R2 values are nearly identical, but this fit has more reasonable coefficients.
3.2 Incorporating Non-Bonding Interactions
The next fit aimed to incorporate non-bonding interactions into the model, that is, the interactions between hydrogens on two different carbons. To simplify the model, only the interactions between hydrogens on the same side of the C-C-C triangle were considered (see Figure 3).
Figure 3: Cyclopropane with the atoms numbered.

For example, the interactions between hydrogens 4 and 8, between 7 and 8, and between 4 and 7 were considered, as well as the same interactions on the opposite side; the interaction between hydrogens 6 and 8 was not considered. Additionally, because the C-C-H angles and H-C bonds were fixed across all trials, the distance between two adjacent hydrogens was always 1.026 Å more than the distance between the carbons they are attached to. For example, if the bond between carbons 1 and 2 is 1.5 Å, the distance between hydrogens 7 and 8 is 1.5 Å + 1.026 Å = 2.526 Å. It is possible to simulate these non-bonding interactions using Lennard-Jones potentials, giving the following model:
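A plausible form for this Lennard-Jones-based model, written with illustrative pair-term labels f and g (these names and the sign convention are assumptions, with per-pair multiplicities absorbed into the fitted constants), is:

F(s1, s2, s3) = a Σi (si − b)^2 + c Σi (θi − d)^2 + Σi ( f / ri^12 − g / ri^6 ) + e,  with ri = si + 1.026 Å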

Here d was fixed as before. The Levenberg-Marquardt algorithm in SciPy was unable to find coefficients that minimized the residuals for this model. In the next and final model based on molecular modeling principles, Stockmayer potentials were used instead of Lennard-Jones potentials. The C-H bond has a slight polarity, so Stockmayer potentials should be slightly more accurate. The model was:
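Given the seven fitted coefficients listed below and the Stockmayer potential's additional r^−3 dipole term, the model presumably takes the form (signs and per-pair multiplicities absorbed into the constants):

F(s1, s2, s3) = a Σi (si − b)^2 + c Σi (θi − d)^2 + Σi ( f / ri^12 + g / ri^6 + h / ri^3 ) + e,  with ri = si + 1.026 Å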

Here d was once again fixed. The optimized coefficients were:
a = −0.152229739396
b = 1.45100704451
c = 0.0971631910835
e = −116.687413053
f = −87.8717256284
g = 124.007295139
h = −15.4989140544
R2 = 0.99515423151280069
This is a very good fit; however, the coefficients do not have a realistic physical interpretation, mainly because a is negative. If the bonds behaved like springs, a would need to be positive. This means that the high R2 value is not due to a model that physically describes the data set, but simply due to the increased number of coefficients and complexity of the model.
3.3 A Model Not Using Molecular Modeling Methods


Although it was possible to get a high R2 value by using molecular modeling methods, the function became very complex and the high R2 was only achieved after the coefficients lost their physical interpretations. The next goal was to find the simplest function with the best fit to the data, without using molecular modeling methods. Many methods were attempted, but only the most successful



is given here for brevity. Recall that the goal is to find a function F such that E = F(s1, s2, s3). One possible way to simplify this problem is, instead of finding a three-dimensional function F, to find a one-dimensional function G and sum the results. That is: E = F(s1, s2, s3) = G(s1) + G(s2) + G(s3). The limited domain of G (there are only 20 possible side lengths, and thus 20 elements in the domain) makes finding an optimum function simpler. It is possible to determine exactly which 20 values the elements of the domain should map to for a best fit. To do so, a function representing G was written in Python. This function took 20 varied coefficients, a0, a1, ..., a19, and simply matched each side length to a coefficient.


For example, G(1.525) = a10, and F(1.075, 1.375, 1.775) = G(1.075) + G(1.375) + G(1.775) = a1 + a7 + a15. Once again, the Levenberg-Marquardt algorithm implemented in SciPy was used to find the values of a0, ..., a19 such that F had the least error with respect to the data. The values of a0, ..., a19 determined by this fit thus represent the optimum values of G(x) at each side length, and are shown in Figure 4. If G goes exactly through the points on this graph, then the function F(s1, s2, s3) = G(s1) + G(s2) + G(s3) will have an R2 of 0.96683308125890355; thus, no matter what function is used for G, that value of R2 is the maximum achievable. Although this summation of the same function prevents a perfect fit, it has the potential for a very good fit while greatly increasing simplicity.
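A minimal sketch of this piecewise fit follows; it is hypothetical rather than the original code, and the variable names and initial guess are assumptions.

import numpy as np
from scipy.optimize import leastsq

side_lengths = [round(1.025 + 0.05 * i, 3) for i in range(20)]
index = {s: i for i, s in enumerate(side_lengths)}   # side length -> coefficient index

def residuals(coeffs, geometries, energies):
    # F(s1, s2, s3) = G(s1) + G(s2) + G(s3), with G represented by the 20 coefficients
    predicted = [coeffs[index[s1]] + coeffs[index[s2]] + coeffs[index[s3]]
                 for (s1, s2, s3) in geometries]
    return np.asarray(predicted) - np.asarray(energies)

# geometries: list of (s1, s2, s3) tuples; energies: the 1540 computed molecular energies
# coeffs0 = np.full(20, np.mean(energies) / 3.0)          # crude starting guess
# best_coeffs, _ = leastsq(residuals, coeffs0, args=(geometries, energies))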

Figure 4: The optimum function for G will go through the points on this graph.
Many attempts were made to find a relatively simple function that fits the points above. Ultimately, the best fit was found by observing that the optimum G has a shape similar to that of a Lennard-Jones potential. The Lennard-Jones formula itself did not fit well, but by allowing the exponents to take values other than 6 and 12, a very close fit was possible. The optimized coefficients are:
a = 160941.118628
b = 2.09447517537
c = −160941.302524
d = 2.09445657918
e = −38.7029631311
R2 = 0.99996894946005688
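A functional form consistent with this description (a Lennard-Jones-like expression in which the exponents b and d are themselves fitted; the exact form is not certain) is presumably:

G(x) = a x^(−b) + c x^(−d) + e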


Note that this R2 value reflects how closely this function fits the optimum values of G, not how well the resulting F fits the full data set. Using this function, F was set to the summation F(s1, s2, s3) = G(s1) + G(s2) + G(s3), with G as fitted above.

This F has an R2 value of 0.96680306056609455, which is extremely close to the optimum value for any choice of G, and is high considering the relative simplicity of the function.
4 Conclusions
The molecular energy of cyclopropane, assuming the H-C bond lengths and C-C-H bond angles are kept constant, can be written as a function of the three C-C bond lengths. This project aimed to approximate this function with little error and a simple expression. Three of the models in this work have some usefulness. The molecular modeling model that took into account only bond and angle stretching is fairly simple, requiring



only five coefficients, one of which was fixed to the C-C-C angle in propane. Its R2 of 0.9215 gives it a fair amount of accuracy, but its greatest strength is that the coefficients give some information about how the bonds and angles stretch, and the functions defined for these stretches are very simple. This makes it easy to visualize the resulting function. The molecular modeling model that took into account Stockmayer potentials was rather complex. Additionally, its coefficients have no reasonable physical interpretation, and the resulting function is extremely difficult to visualize; however, it has the advantage of being the most accurate model explored in this research, with an R2 of 0.9952. The last model explored was a summation of a function of each side length. Its advantage was that it was simple, requiring only five coefficients, but still accurate, with an R2 of 0.9668. Future work on developing a more accurate yet still relatively simple model is possible. One possibility is to incorporate a second function, H, that takes an input other than side length and is either multiplied by the summation of G or added to it. Two interesting possible inputs are the area of the triangle, which is a measure of how dense the electrons are, and the area divided by the perimeter squared, which is a measure of how close to equilateral the triangle is. This leads to four possible models to consider:

E = (G(s1) + G(s2) + G(s3)) · H(A)
E = G(s1) + G(s2) + G(s3) + H(A)
E = (G(s1) + G(s2) + G(s3)) · H(A/P^2)
E = G(s1) + G(s2) + G(s3) + H(A/P^2)

where A and P are the area and perimeter of the triangle, respectively; the area can be computed using Heron's formula (given below for reference). With correctly selected functions for G and H, a simple and very accurate model should be possible. This project, or a similar one, would be very useful as an educational tool. Although it was a rather long undertaking, it gave me invaluable appreciation for how complex computational chemistry calculations are, gave me a thorough understanding of how the geometry of cyclopropane affects its energy, and improved my data analysis skills. Others who take on a similar project could gain similar benefits. This project also opens the door for future research projects that aim to find simpler yet more accurate formulas for the single point energy of a variety of molecules. If enough of these projects are undertaken, it may be possible to apply such formulas to macromolecules, providing more accuracy than current methods.
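For reference, Heron's formula gives the area directly from the three side lengths:

A = sqrt( p (p − s1)(p − s2)(p − s3) ),  where p = (s1 + s2 + s3) / 2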

5 Acknowledgements
The author thanks Mr. Robert Gotwals for assistance with this work. Appreciation is also extended to the Burroughs Wellcome Fund and the North Carolina Science, Mathematics and Technology Center for their funding support of the North Carolina High School Computational Server. Special thanks is given to the Center for Applied Computational Studies at East Carolina University for the generous use of the SGI Origin workstation where the bulk of this work was conducted.
References
[1] Wong, Henry N. C., Moon Yuen Hon, Chun Wah Tse, Yu Chi Yip, James Tanko, and Tomas Hudlicky. Use of cyclopropanes and their derivatives in organic synthesis. Chemical Reviews 1989, 89 (1), 165-198. http://pubs.acs.org/doi/abs/10.1021/cr00091a005
[2] "Ring Strain in Cyclopropane." The Department of Chemistry and Biochemistry, The University of Texas at Austin, n.d. Web. Dec. 2015. http://research.cm.utexas.edu/nbauld/teach/cycloprop.html
[3] James. "Cycloalkanes Ring Strain In Cyclopropane And Cyclobutane." Master Organic Chemistry RSS. N.p., n.d. Web. Dec. 2015. <http://www.masterorganicchemistry.com/2014/04/03/cycloalkanes-ring-strain-in-cyclopropane-and-cyclobutane/>
[4] Gotwals, Robert R., Jr., and Shawn Sendlinger. A Beginner's Guide to Computational Chemistry. N.p.: n.p., Sept. 2013. Web. Dec. 2014.
[5] Mourits, F. M., Rummens, F. H. A. (1977). A critical evaluation of Lennard-Jones and Stockmayer potential parameters and of some correlation methods. Canadian Journal of Chemistry, 55(16), 3007-3020. doi:10.1139/v77-418
[6] Weisstein, Eric W. "Law of Cosines." From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/LawofCosines.html
[7] Jones E, Oliphant T, Peterson P, et al. SciPy: Open Source Scientific Tools for Python, 2001-, http://www.scipy.org/ [Online; accessed December 2014]
[8] Schmidt, J.R.; Polik, W.F. WebMO Pro, version 7.0; WebMO LLC: Holland, MI, USA, 2007; available from http://www.webmo.net (accessed December 2014).
[9] The North Carolina High School Computational Chemistry Server, http://chemistry.ncssm.edu (accessed December 2014).
[10] The Zeus Computational Chemistry Server at East Carolina University, zeus.ecu.edu (accessed December 2014).
[11] Gaussian 03, Revision C.02, M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman, J. A. Montgomery, Jr., T. Vreven, K. N. Kudin, J. C. Burant, J. M. Millam, S. S. Iyengar, J. Tomasi,



V. Barone, B. Mennucci, M. Cossi, G. Scalmani, N. Rega, G. A. Petersson, H. Nakatsuji, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, M. Klene, X. Li, J. E. Knox, H. P. Hratchian, J. B. Cross, V. Bakken, C. Adamo, J. Jaramillo, R. Gomperts, R. E. Stratmann, O. Yazyev, A. J. Austin, R. Cammi, C. Pomelli, J. W. Ochterski, P. Y. Ayala, K. Morokuma, G. A. Voth, P. Salvador, J. J. Dannenberg, V. G. Zakrzewski, S. Dapprich, A. D. Daniels, M. C. Strain, O. Farkas, D. K. Malick, A. D. Rabuck, K. Raghavachari, J. B. Foresman, J. V. Ortiz, Q. Cui, A. G. Baboul, S. Clifford, J. Cioslowski, B. B. Stefanov, G. Liu, A. Liashenko, P. Piskorz, I. Komaromi, R. L. Martin, D. J. Fox, T. Keith, M. A. Al-Laham, C. Y. Peng, A. Nanayakkara, M. Challacombe, P. M. W. Gill, B. Johnson, W. Chen, M. W. Wong, C. Gonzalez, and J. A. Pople, Gaussian, Inc., Wallingford CT, 2004.




Development of Novel Methods for Monitoring Aging of the ATLAS TRT Straws
Rohit Das

ABSTRACT

Straw wire aging damages the long-term performance of the Transition Radiation Tracker (TRT), a gaseous straw detector in CERN's Large Hadron Collider (LHC). Formation of silicon-hydrocarbon deposits on the wires causes an aging effect that results in a drop in gas gain. Such polymerizing impurities can permanently alter the detector's geometry and electric field conditions, limiting both its accuracy and lifespan. Before LHC Run II in 2015, during which the LHC will ramp up to 13 TeV, we seek to create and implement a tool that we can use to better understand the aging effect's consequences for detector performance. By measuring the reduction in gain of the TRT barrel and end-caps during Run I (2010-2012 at 7 TeV), we observe a clear and rising degradation effect present in all sectors of the TRT that may be a result of LHC run conditions. However, no obvious aging was observed in data with stable run conditions. Further studies are needed to isolate the effects caused solely by aging from observed degradation caused by these additional factors. Isolation and monitoring of aging will assist in more effectively understanding its effects on all gaseous straw detectors, commonly used for particle detection in several high energy physics experiments.

1. Introduction
1.1 Motivation
Several types of aging degrade the long-term performance of the Transition Radiation Tracker (TRT), a gaseous straw detector in the Large Hadron Collider at CERN. Aging of the straw wires contributes to a gas gain drop effect, a result of deposits on the wires created by polymerization of silicon and hydrocarbon composites [3]. These polymerizing impurities form larger molecular chains that not only insulate the wire, causing a gradually increasing signal loss, but may also irreversibly change the geometry and electric field conditions of the detector, significantly limiting its detection accuracy and desired 10-year lifespan [3]. With the large increase in luminosity that will follow the LHC's ramp-up to 13 TeV in 2015, the need to monitor the effects of aging on the TRT becomes increasingly important. Therefore, we have developed a tool with which we can better understand the implications of the aging effect for detector performance. By measuring the reduction in gain over time of the TRT barrel and end-caps, we observe a clear and gradually increasing degradation effect present in all sections of the detector. However, further studies must be conducted to isolate permanent aging effects from degradation caused by temporary additional factors such as machine run conditions. Isolation and monitoring of the aging will help us to better understand its effects not only on the long-term performance of the TRT, but also on that of gaseous straw detectors in general, such as the RICH detectors used in Fermilab's SELEX experiment [8].
1.2 CERN and the Large Hadron Collider
One of the oldest issues explored by physicists is the composition of matter at the most fundamental level. The

study of high-energy physics has given rise to the Standard Model, the most successful theory to date for describing the elementary constituents of matter and interactions between them. Although incomplete, this theoretical framework provides a strong basis for further research and discovery in the field, which may prove to be essential to our understanding of the universe as a whole [5]. High-energy physics research directed towards confirmation and extension of the Standard Model is conducted at CERN, the European Organization for Nuclear Research. Its Large Hadron Collider (LHC), the world’s largest particle accelerator, with a circumference of 27 km, is capable of producing proton-proton (p-p) collisions with a center-of-mass energy of 8 TeV [4]. Although currently not in operation, the LHC will be capable of producing collisions with a center-of-mass energy of up to 13 TeV when it resumes operation in 2015. Collisions at such high energies allow for recreation of conditions that were present fractions of a second after the Big Bang, through which we can discover new physics phenomena related to the origins of our universe and the fundamental makeup of its matter [5].

Figure 1. Shown above is the CERN accelerator complex. ATLAS and CMS, the two general, all-purpose detectors, are built around the two proton-proton collision points [4].



1.3 ATLAS Detector
CERN houses two general, all-purpose detectors aimed at studying proton-proton collisions, one of which is A Toroidal LHC ApparatuS (ATLAS) (see Fig. 1). Built around one collision point of the LHC, the ATLAS detector is 46 m long, 25 m high, and 25 m wide, making it the largest particle detector to date [7]. It runs along the beam line (the z-axis in the ATLAS coordinate system) and is composed of four concentric cylindrical subdetectors working in conjunction with end-cap detectors, designed to provide precise measurements of the energy and momenta of the collisions' resulting decay products [7]. Each cylindrical section is responsible for measuring different particle properties, allowing the ATLAS detector as a whole to differentiate between particle types as the particles pass through its sections sequentially (see Fig. 2) [4].
Figure 3: Shown above is a cross-section of the IT's barrel region. The beam line runs through R = 0 mm, and the method of detection changes with increasing radius (in order: Pixel Detector, Silicon Semiconductor Central Tracker (SCT), and Transition Radiation Tracker (TRT)) [4].

Figure 2: Shown above is a cross-section of the ATLAS detector's barrel region, with the proton beam line running through the center (z-axis). As can be seen in the diagram, each successive subdetector is designed to measure a different quantity or particle resulting from the collisions [4].
1.4 Inner Tracker
The innermost cylindrical subsection of the ATLAS detector, the Inner Tracker (IT), achieves precise tracking and momentum measurements through the use of its own three concentric subdetectors (see Fig. 3), immersed in a 2 Tesla external magnetic field [4]. Exposure to this magnetic field causes the paths of charged particles resulting from the collisions to curve, and information regarding a particle's charge and momentum can be determined from the direction and degree of curvature of its path [7]. Neutral particles, however, are unaffected by the magnetic field and do not ionize atoms along their paths, so they are not detected by the IT (all their energy is instead deposited in the Hadronic Calorimeter) [7]. Each subdetector of the IT is composed of a barrel and an end-cap region, and although independent from one another, their combined measurements of particle momenta and tracks allow for extremely in-depth reconstruction of events [7].

1.5 Transition Radiation Tracker
The outermost subdetector of the IT, the Transition Radiation Tracker (TRT), utilizes straws (122,880 on each end-cap disk and 52,544 in the barrel region) serving as drift chamber detectors to precisely reconstruct the paths of ionizing particles [4]. Each straw, 4 mm in diameter, has a 31 μm gold-coated tungsten wire running along its center, held at high voltage and serving as an anode. The interior of a straw's wall, coated with 0.2 μm of Al, is held at ground potential and therefore functions as a cathode. Between the cathode wall and the anode wire is a gas mixture composed of 70% Xe, 27% CO2, and 3% O2. When a charged particle traverses a TRT straw, primary electrons resulting from ionization of the straw's gas cause an avalanche of electrons to collect on the anode due to the potential difference inside the straw (see Fig. 4). This creates a detectable electrical signal on the wire, which is then sampled every 3.125 ns to determine the resulting collected charge, which is proportional to the energy losses of the electrons. In addition to the position measurements obtained from signals produced by the straws, the TRT is capable of determining a particle's distance from a wire (within ~120 microns) through accurate timing measurements, further contributing to the detector's accuracy [7]. The signal, a measure of energy loss calculated from the measured current, is represented by a 24-bit binary pattern corresponding to a 75 ns period. When the signal is above a specified low threshold, a 1 is recorded in the bit pattern; otherwise, a 0 is recorded (see Fig. 4). The leading edge (LE) and trailing edge (TE) times are recorded, roughly corresponding to the times at which the first and



last primary electrons produced by the traversing particle are detected. The time over threshold (TOT), the time elapsed while the signal is above the low-threshold, is also recorded and serves as the last parameter of interest in precise reconstruction of the traversing particle’s path [7].
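To make these quantities concrete, the following hypothetical sketch (an illustration of the quantities described in the text, not ATLAS code) shows how LE, TE, and TOT could be extracted from a straw's 24-bit low-threshold pattern.

SLICE_NS = 3.125   # width of one readout slice in nanoseconds

def edges_and_tot(bits):
    """bits: string of 24 '0'/'1' characters, earliest slice first."""
    above = [i for i, b in enumerate(bits) if b == "1"]
    if not above:
        return None
    leading_edge = above[0] * SLICE_NS           # time of first slice over threshold
    trailing_edge = (above[-1] + 1) * SLICE_NS   # time of last slice over threshold
    time_over_threshold = trailing_edge - leading_edge
    return leading_edge, trailing_edge, time_over_threshold

# edges_and_tot("000011111111100000000000")  ->  (12.5, 40.625, 28.125)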

Figure 4: Shown above is a signal caused by primary electrons, proportional to the energy loss of the particle that caused the signal. To track an ionizing particle in the TRT, the signal from each straw hit is read at 3.125 ns intervals, represented by the bit pattern above. The horizontal dotted line represents the low threshold (approximately 250-300 eV). Adapted from [7].
1.6 Detection of Transition Radiation
A second signal that is detected by the TRT is due to transition radiation (TR), emitted primarily by highly relativistic electrons as they traverse the radiator foam (composed of layered materials of varying indices of refraction) in which the straw matrix is embedded [7]. Relativistic charged particles produce TR when they cross the boundary between two media with different indices of refraction. Since a moving particle's electromagnetic field is different in each medium, the particle must "shake off" the difference upon crossing the interface. TR photons of at least 1.022 MeV can pair-produce electrons that cause cascades in the Xe gas mixture (see Fig. 5), but most TR photons are low-energy X-rays of approximately 5 keV. TR is one of the various mechanisms by which an electron may lose energy (others include ionization, Compton scattering from a TR photon, Bremsstrahlung, and Cherenkov radiation), described by the Bethe formula [4]. TR is the most easily identifiable, however, because of the relatively high energy it releases in the X-ray domain [2]. Additionally, the intensity of the TR produced is proportional to the particle's Lorentz factor (γ) [2]. Electrons, due to their very low rest mass, are the only particles capable of traveling fast enough through the detector (γ > 1000) to produce detectable TR; therefore, the occurrence of this signal allows us to discriminate for electrons when identifying ionizing particles [9].


Figure 5: Shown above are the processes that occur as an ionizing particle traverses the TRT. As the electron and positively charged pion traverse the straws, they both produce the low-threshold signal created by primary electrons. However, only the electron is capable of producing a TR photon [7].
To distinguish between the primary electron signal and the transition radiation signal, the detector uses a low threshold of approximately 250-300 eV for the primary electrons (see Fig. 5) and a high threshold of 6 keV for the TR [4]. In conclusion, the TRT provides precise tracking measurements for radii between 50 and 100 cm in the ATLAS detector and allows for identification of electrons [7].
1.7 Predicted Dependence of Aging Effect on Gas Flow Direction
There are two inputs for the gas in the TRT, Input A (located at z = +720.5 mm) and Input C (located at z = −720.5 mm) (see Fig. 6) [9]. The gas flows from +z to −z in Input A and from −z to +z in Input C, bringing in silicon deposits due to factory impurities from the electronics at the ends of the straws. Tests conducted prior to the LHC becoming operational, including irradiation tests designed to mimic ion implantation in the detector's silicon integrated circuits, show that these deposits tend to stay at the beginning of the wire because they are solid-state materials that cannot be broken down and removed by the gas flow [3]. We therefore predict that there will be an aging effect dependent on the direction of gas flow present in the TRT.

Figure 6: Shown above are cross sections of opposite sides of the TRT. The shaded regions represent the detector’s input ports for gas inputs A (out of the page) and C (into the page), running parallel to the z-axis. [6].



2. Materials and Methods
2.1 High Threshold Hit Efficiency
Because aging contributes to signal loss in the TRT, we propose a novel method for monitoring aging that analyzes straw hit data in the TRT barrel from Period A of 2012, during LHC Run I. The reduction in gain, defined as the drop in charge collected on the wires due to aging, affects both the number of low-threshold (LT) and high-threshold (HT) hits recorded. However, the HT hits are expected to be more sensitive to this reduction in signal size, because a much larger current is required to trigger the high threshold. There are in turn fewer HT hits recorded overall, so smaller changes will affect them more prominently. Therefore, in order to measure the effects of aging in the TRT straws, we look at the HT hit efficiency (see Equation 1), defined as

HT/All = (number of HT hits) / (total number of hits)    (1)

as a function of hit z-position in the detector.
2.2 Relative Change in High Threshold Hit Efficiency
Since we predict a dependence of the aging effect on the direction of gas flow as well, we measure HT/All in all three layers of the TRT barrel for both Input A and Input C. To quantify the dependence on gas flow direction, we take the difference between the HT hit efficiencies at the two inputs (see Equation 2). This is plotted as a function of z-position and normalized by dividing by twice the average efficiency (defined as the average of HT/All at all values of z), giving the relative change in efficiency as a function of hit z-position in the straw. We thus define


Δ(HT/All)(z) = [ (HT/All)_A(z) − (HT/All)_C(z) ] / ( 2 ⟨HT/All⟩ )    (2)

where ⟨HT/All⟩ is the average of HT/All over all n z bins, n being the total number of HT hit efficiencies measured (equal to the number of z bins). We then fit the data with a straight line, concerned primarily with the slope of this linear fit. Gas flow is directed from −z to +z, so a positive slope indicates that the degradation effect decreases with position along the direction of gas flow, a negative slope indicates that the degradation effect increases with position along the direction of gas flow, and a slope of zero indicates that there is no observed dependence of straw wire deterioration on gas flow direction. In addition, the magnitude of the slope is representative of the strength of the dependence of the degradation effect on gas flow direction.
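The following sketch is a hypothetical illustration of these quantities (not the collaboration's actual monitoring code; the binning, averaging choice, and variable names are assumptions): HT/All per z bin for each gas input, the normalized difference, and the straight-line fit whose slope is studied.

import numpy as np

def ht_efficiency(z_hits, is_ht, z_bins):
    """HT/All per z bin: high-threshold hits divided by all hits (Equation 1)."""
    all_counts, _ = np.histogram(z_hits, bins=z_bins)
    ht_counts, _ = np.histogram(z_hits[is_ht], bins=z_bins)
    return ht_counts / np.maximum(all_counts, 1)

def delta_ht_efficiency(eff_a, eff_c):
    """Normalized difference between Input A and Input C (Equation 2)."""
    avg = 0.5 * (eff_a.mean() + eff_c.mean())    # average HT/All over all z bins (averaging choice assumed)
    return (eff_a - eff_c) / (2.0 * avg)

# z_bins = np.linspace(-720.5, 720.5, 30)        # barrel straw length in mm (assumed binning)
# eff_a = ht_efficiency(z_a, ht_a, z_bins)       # z_a, ht_a: hit positions and HT flags for Input A
# eff_c = ht_efficiency(z_c, ht_c, z_bins)
# slope, intercept = np.polyfit(bin_centres, delta_ht_efficiency(eff_a, eff_c), 1)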


3. Results
3.1 Degradation Observed in HT/All Plots
We first look at the HT hit efficiency as a function of hit z-position for both gas inputs in all three straw layers of the TRT barrel detector (see Fig. 7). The first notable trend common to all three layers in both gas inputs is the significant drop in HT hit efficiency as |z| decreases. Lower |z| values correspond to closer proximity to the collision point, and therefore to the source of radiation, so the lowest HT hit efficiencies are recorded closest to the collision point. From this trend we infer possible signs of aging, as the most drastic loss in signal is apparent closest to the radiation source, one of the primary causes of aging in the straws. In the absence of any aging effect, we would expect very few (if any) fluctuations in the HT hit efficiencies with respect to z-position in the detector, and they certainly would not follow the clear drop observed in the data near z = 0. The degradation effect also decreases as we move from Layer 1 (the innermost straw layer) to Layer 3 (the outermost straw layer) in the TRT barrel. Again, we observe a signal loss most prominent closest to the radiation source (Layer 1) that becomes less significant as we move away from the collision point, this time radially rather than parallel to the beam line. In the absence of any aging effect, the HT hit efficiencies would stay similar across all straw layers of the TRT barrel. Through these two trends in the HT/All vs. z-position data, we observe that the degradation effect increases as distance to the beam collision point decreases, likely indicative of an aging effect caused by radiation produced by the beam.

Figure 7: Shown above are plots of HT/All versus hit z-position in different sections of the TRT barrel. The left column shows plots for gas Input A and the right column shows plots for gas Input C, while the rows are organized by layer in the barrel (radial distance increases with descending rows).



3.2 Degradation Observed in ΔHT/All Plots
As we predict that the degradation effect is dependent on the direction of gas flow, the effect observed in Input A should be the opposite of the effect observed in Input C. We look at the relative change in the HT hit efficiency defined in Equation 2, which quantifies the effect as a function of hit z-position (see Fig. 8). Because of the way the difference is defined, a positive slope in the trend line indicates that the degradation effect in the straws decreases along the direction of gas flow. In all three layers of the barrel, Figure 8 shows a clear dependence of the effect on the direction of gas flow. This dependence indicates that the degradation effect is not carried completely down the wire. It is also interesting to note that the slopes of the fitted lines decrease as we move from Layer 1 to Layer 3, demonstrating that the strength of the dependence decreases as distance to the collision point increases. The greatest positive slope, and therefore the clearest dependence, is visible in Layer 1, likely due to the most prominent degradation effect being observed there, caused by its closest proximity to the radiation source.

3.3 Interpretation of Observed Degradation
Our initial hypothesis that the Transition Radiation Tracker has been subject to an aging effect dependent on direction of gas flow in the barrel has in part been confirmed by our results. The significant signal loss observed at close proximity to the collision point in the HT/All vs. z-position plots in all three straw layers of the TRT barrel for both gas inputs, in conjunction with the fact that the observed effect decreases as we move away from the collision point (both along the beam line and radially), is a clear sign of a degradation effect apparently caused by radiation produced by the high-energy proton-proton collisions in the LHC. It is possible, however, that this observed degradation effect is at least in part due to ozone accumulation or different run conditions in addition to straw wire aging. Furthermore, our initial prediction that the effect is dependent on direction of gas flow in the detector is also clearly supported by the data. The silicon deposits brought into the straws by the gas flow do in fact remain at the beginning of the wire and serve as a cause of the degradation effect independent of radiation exposure, as depicted by the observed positive slopes (and implied dependence) in the ΔHT/All vs. z-position plots. Therefore, our prediction that an observed degradation effect is dependent on the direction of gas flow is confirmed by the data, but further studies must be conducted to determine whether straw wire aging is the sole cause of the observed effect.
4. Discussion

Figure 8: Shown above are plots of the relative change in HT/All versus hit z-position in all three layers of the TRT barrel, with the top plot showing Layer 1, the center plot showing Layer 2, and the bottom plot showing Layer 3.

4.1 Implications of Observed Degradation
The observed degradation effect and its dependence on direction of gas flow in the TRT barrel detector are evident in all other periods of data collection during 2012 as well, along with a similar effect and dependence observed in the end-caps of the detector [6]. In addition, a dependence of the observed effect on proximity to the collision point seen in the HT/All vs. z-position plots (see Fig. 7) tells us that radiation exposure is potentially an additional factor to be considered as a cause for aging in the straw wires. The slope values for the linear fits in the ΔHT/All plots (see Fig. 8) increase as time progresses during 2012, indicating that the observed effect is worsening with time [6]. Finally, the ramp-up to 13 TeV during the LHC's



Run II in 2015 will not only expose the detector to higher amounts of radiation, but will also drastically increase the amount of data that will need to be reliably measured by the TRT. For all these reasons, the current need for a tool to monitor signal loss in the TRT is critical. Therefore, we will implement our developed tool in the TRT during LHC Run II for further monitoring of the aging effect.
4.2 Isolation of Aging Effect from Ozone Accumulation
Further work is needed to isolate the aging effect from all other potential causes of the degradation observed in both the barrel and end-caps of the TRT throughout 2012. Besides straw wire aging, accumulation of ozone in the gas is the most probable cause of the degradation effect. This occurs when oxygen molecules in the gas absorb free electrons produced by ionization of the gas by a decay particle's track. As the ozone molecule is accelerated towards the wire by the potential difference, it can create more ozone molecules through an avalanche effect, almost identical to the manner by which secondary electrons are created by the primary electrons resulting from initial ionization of the gas by charged decay particles. This accumulation of ozone, similar to the silicon deposits that contribute to aging, can cause a decrease in the gas gain and thus a temporary drop in the HT hit efficiency, masking the signal loss caused by aging degradation, a more permanent consequence for detector performance [1]. Damage caused by such ozone accumulation is also consistent with our observations. Unlike aging, ozone accumulation is reset when a run starts or ends; this distinguishing difference may allow us to differentiate between the two. Liu et al. examined the dependence of the ΔHT/All fitted slope values on individual slices of integrated luminosity within a run to gain better insight into the role ozone accumulation plays. Integrated luminosity is a measurement of the collected data size (total number of collisions), resulting from interactions between bunches of protons [6]. Little to no effect was expected in the first bin of a run, but it was predicted that if an effect due to ozone accumulation were present, the slope would increase as ozone builds up during a run. Eventually, ozone would saturate, represented by a drop in the fitted slope values [6]. These predictions were confirmed by Liu et al.'s results (see Fig. 9).

Figure 9: Shown above is the measured dependence of the ΔHT/All fitted slope values on integrated luminosity, indicative of a potential ozone accumulation effect [6].


Another run condition considered by Liu et al. was the average interaction per bunch crossing number (<μ>), a measure of the average number of protons that actually interact during collisions of proton bunches during LHC events. Liu et al. showed that a strong dependence of the fitted slope values in the ΔHT/All data on <μ> would indicate the presence of a strong accumulation of ozone, due to the fact that <μ> is directly related to instantaneous luminosity [6]. Therefore, studying <μ> could provide further information regarding the dependence of the effect on luminosity. Liu et al. found no obvious dependence of the degradation effect on <μ>, and a slight dependence on both integrated and instantaneous luminosities, indicative of possible ozone accumulation (see Fig. 9) [6]. The slope values for each layer in the TRT barrel increase with instantaneous luminosity, as shown in the Period B data represented by the table below (see Table 1). Since machine conditions are constantly changing, it is important to understand this dependence, especially if we are to attempt to isolate the ozone accumulation effect from the straw wire aging effect in the future.

Table 1: Shown above are the slope values (×10^-5 fraction of HT hits per mm) for runs with low instantaneous luminosity and runs with high instantaneous luminosity in each layer of the TRT barrel during Period B [6].
5. Conclusions and Future Work
5.1 Conclusions
In conclusion, we found that a degradation effect in all sections of the Transition Radiation Tracker is visible and worsening over time. The loss in signal caused by this degradation effect is dependent on both proximity to the beam collision point and direction of gas flow in the detector. Two potential causes of the effect are prolonged exposure to high amounts of radiation and solid-state silicon deposits brought in by the gas mixture. Further work must be conducted in the near future in order to determine whether the observed effect is solely due to straw wire aging, as predicted, or in part due to various run conditions, such as accumulation of ozone within the straws. Future work will primarily consist of examination of the dependence of the fitted slope values in the ΔHT/All plots on properties such as integrated and instantaneous luminosities. By considering these factors, we hope to better understand the effects that the frequently changing machine run conditions have on the degradation observed in the TRT straws. In doing so, we can further refine our method for monitoring aging in the TRT to accommodate the large increase in both energy and luminosity that



the LHC will undergo during the upcoming Run II in 2015. Once we completely isolate and determine the contributions of aging in the TRT, we can look to perfect our method for monitoring aging effects on detector performance and devise novel techniques to minimize such effects.
5.2 Proposed Monitoring Tool for LHC Run II
We are currently writing a Python script that will work in conjunction with ROOT, CERN's C++ based data analysis framework, to serve as a tool during Run II. The script will look up luminosity information automatically, which is essential because the dependence of the degradation effect on run conditions may be relevant to a wide range of experiments. It will be especially useful for those pertaining to other sections of the ATLAS detector. When finished, this function will be directly implemented in the monitoring tool proposed for Run II, which will produce further information that should allow us to better understand the degradation and possibly isolate straw wire aging from the observed effects. A study of the first data from Run II could very well help to distinguish between ozone accumulation and aging; if the effect has lessened or is no longer present during the first few runs analyzed in 2015 with low instantaneous luminosity, the effect could be attributed to an accumulation of ozone. If the degradation grows worse, however, it could be indicative of an aging effect in the TRT straws. The currently proposed monitoring procedure for 2015 includes continuation of the method presented in this study using data collected at the beginning of Run II, along with further study of the dependence of the fitted slopes on ozone accumulation and other run conditions utilizing the aforementioned Python function. A primary future goal will be to better understand machine operation in order to isolate the straw wire aging effect in the TRT straws from ozone accumulation and other run conditions, and to further refine our tool developed to quantify the aging effect. This program will be run every 1 fb^-1 during the early stages of Run II, and will allow for careful monitoring of the degradation effect before it becomes an issue for long-term detector performance.
Acknowledgements
First and foremost, I'd like to thank Dr. Venkatesh Kaushik for sparking in me a true passion for high energy physics and showing me that knowledge is best when it is shared. I would also like to thank Dr. Jonathan Bennett of NCSSM for accepting me into and mentoring me during the entirety of the Research in Physics program. Finally, I would like to thank Dr. Mark Kruse and Ms. Miaoyuan Liu of Duke University for making my dream of traveling to CERN a reality.


References

[1] Akesson, T. et al. 2003. Aging studies for the ATLAS Transition Radiation Tracker (TRT). Nuclear Instruments and Methods in Physics Research A. 515: 166-179.
[2] Andronic, A. and Wessels, J.P. 2011. Transition Radiation Detectors. Nuclear Instruments and Methods in Physics Research A. 666: 130-147.
[3] Capeans, M. et al. 2004. Recent Aging Studies for the ATLAS Transition Radiation Tracker. IEEE Transactions on Nuclear Science. 51: 960-967.
[4] Cortese, Alejandro Javier. Technique for Long-Lived Anomalously Charged Particle Searches at ATLAS. Thesis, Duke University. Durham: 2012.
[5] Griffiths, David. Introduction to Elementary Particles. New York: Harper & Row Publishers, Inc., 1987. Print.
[6] Liu, Miaoyuan, Fredrick Luehring, and Benjamin Weinert. "Development of methods of study aging of the ATLAS TRT Straws." ATLAS NOTE 13:32. 2014.
[7] Minot, Ariana Sage. A Global Dilepton Analysis in the eμ Channel. Thesis, Duke University. Durham: 2010.
[8] Sauli, Fabio. Gaseous Radiation Detectors: Fundamentals and Applications. Cambridge, UK: Cambridge UP, 2014. Print.
[9] Wagner, Peter. 2008. "Commissioning and Performance of the ATLAS Transition Radiation Tracker with Cosmic Rays and First High Energy Collisions." University of Pennsylvania on behalf of the ATLAS Collaboration. Print.



Creating a Hybrid Agent/Grid Model of Contact-Induced Force

Uday Uppal

ABSTRACT

Most computational fluid dynamics (CFD) models follow either a Lagrangian or Eulerian approach to study the performance of objects in fluid flow. These wind tunnel models monitor a multitude of response variables, one of the most important being lift. However, it is difficult to find models that instead use the collisions between individual air particles and the object of choice to report such variables. The goal of this research project was to build a hybrid agent/grid model in NetLogo that would use a coupled Eulerian and Lagrangian approach to measure contact-induced lift. In this model, the individual air particles were represented by mobile agents, and the object of interest (in this case, a Clark Y wing) was represented by grid agents, as were the empty environment and the edges of the wind tunnel. The model included many control features, two of which were the magnitude of initial flow rate and the angle of attack of the wing. Experimental runs measured the lift force as a function of both of these control features, and the results indicate that this type of agent/grid based approach to CFD is in fact viable.

1. Introduction and Motivation

The goal of this project was to create an agent/grid based wind tunnel model that could be used to conduct a computational fluid dynamics (CFD) style study of objects in air flow. Is it possible to replicate lift using collisions between air particles and a gridded wing rather than using a traditional Bernoulli continuum approach? This research project is based around building such a model and testing it in order to answer this question.

The project was originally motivated by the growing interest in unmanned aerial vehicles (UAVs). Due to their versatility and large number of applications, both military and otherwise, UAVs are becoming more popular as subjects of research [2]. In its very early stages, this project was aimed at studying UAV designs and optimizing UAV flight in order to have designs for cost-efficient UAVs. However, the difficulty in finding a robust free CFD software package to carry out the project brought about the realization that building such a model from scratch may be an interesting research project in itself.

CFD packages are sets of numerical methods and algorithms that make it possible to study the interactions between an object of choice and the fluid that it is travelling through. Standard approaches to CFD consider the variation of physical quantities (such as density, position, velocity, and pressure) in a continuum, by either considering a control volume that moves with the media or one that stays fixed as the media passes through [7]. The former approach is known as Lagrangian, and the latter is known as Eulerian [7]. Although there currently exist many CFD software packages that use either of these two approaches to study air flow patterns around an object of interest, there are very few that use the interactions between

individual or groups of air particles and the object being studied. Models that use such an approach represent a unique field of modeling known as agent-based modeling. Computational agent-based models must break, or discretize, the continuum into pieces that can be individually calculated. These models work off two main components: discrete mobile agents that have a dynamic position and various individual properties, and fixed grid agents that have volume and occupy the modeling space. The rules for interactions between these different types of agents define the parameters of an agent-based model. Such an approach may be used in upper-level CFD models; however, access to such models seems rare and it was difficult to locate any that could be used freely. The model in this research project uses a coupled Eulerian/Lagrangian approach in that it studies what happens to air particles (mobile agents) as they interact with each other and with the object (grid agents), but also records the effect of these collisions on the object itself. The model studies how this contact between the mobile particles and the gridded object can help record a measure of lift. This hybrid approach makes the model unique and intriguing. Agent-based programming is becoming increasingly popular due to its many advantages in a multitude of situations [3]. Modeling with agents is especially useful when a problem consists of different objects and a certain environment that these objects exist in [9]. The different objects can be represented by different types of agents, each with its own personal attributes, and the types of interactions among the agents and between the agents and the environment can be used to define the parametric conditions of the model. CFD models are computationally robust in nature, and the challenge of this research is to determine the viability of agent-based modeling in CFD.



The air particles – or parcels of air particles based on scale – can be represented by agents, and the object of choice as well as the wind tunnel itself can be represented by the environment grid. The rules for interactions among the air particles and those between the particles and the object or the edges of the tunnel would define the model and also define how any results are output. NetLogo is one of the most robust and well-known agent-based programming environments [8]. It is freeware, which makes it much more accessible than many other agent-based programming tools. However, it is still sophisticated enough that it can be used to build powerful and detailed models [8]. Its easy-to-use visual modeling environment makes it work very well for models that depend on user inputs and modifications, and there are many tools in its interface that can be used to modify the environment and parameters of the model. Furthermore, NetLogo syntax is very easy to understand for even inexperienced users, and is well designed: the grid agents (patches) and the mobile agents (turtles) are linked based on position and can call on each other's attributes through simple commands. It is for these reasons that NetLogo was deemed a strong fit for modeling a wind-tunnel program that could help answer this project's research question. The goal of this project was not to compete with existing CFD packages; the goal was to explore what could be done with a hybrid agent/grid based model that used both Eulerian and Lagrangian ideas. This project was only intended to construct the model at a rudimentary level, with a chance for expansion of the model to include more variables, more output results, and even a component in the third dimension if so desired. The possible expansions to this model are endless, and one can always delve deeper into the model in order to give it more realistic boundary conditions and parameters. For now, however, the goal was only to see if such a model was even viable and, if so, how well it could replicate lift based on collisions between agent particles and a gridded wing.

2. Building the Model

2.1 Modeling the Fluid

The first step in building this model was designing its basic structure. It was decided that the air particles would be represented by the agents – turtles in NetLogo – and that the actual wing would be gridded into the environment – patches in NetLogo – as would the wind tunnel and its boundaries. The research mentor's gas pressure NetLogo model was first studied in detail. This model contained code for the movement and interactions between individual gas particles and their surroundings in a manner that conserved energy and momentum. It also contained code for a controllable fan on one side of the environment that would set the x-component of a particle's velocity to the fan value. The model was stripped of its extraneous code, placed into a new file, and used as the official wind tunnel model.

The method that created the particles and set their initial x- and y-velocity components was edited. Three sliders were implemented: fanlvl, base-pressure, and initial-density. At setup, the empty gridded patches were populated with a number of particles based on the initial-density value, and this value was maintained throughout each run by adding particles on either side of the tunnel based on how many were leaving the system at any given moment. This was a better solution than horizontal wrapping of the model because, with wrapping activated, the motion of the particles exiting was the same as the motion of particles entering, in a way making the model feed into itself. Additionally, particles within the volume of the tunnel were considered to have a directional velocity component (flow velocity represented by the fanlvl slider) and a deviational velocity component (pressure impact on velocity based on the base-pressure slider). Since each particle was assumed to have the same mass, fanlvl could represent the mean velocity of all the particles in the environment, and base-pressure could represent the deviation from that mean in any direction for each individual particle, with the total sum of the deviation being zero in order to maintain fanlvl as the average velocity. Each particle was given a random angle value between 0 and 360 degrees. The initial velocity was represented as a vector sum of the components: the fanlvl was in the x-direction for each particle and the base-pressure pointed in the direction of the random angle for each particle (see Figure 2.1.1 below).
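The initialization rule just described can be summarized in a short sketch. The following Python stand-in is only illustrative (the model itself is written in NetLogo, and the function name is hypothetical); it draws one particle's initial velocity as the vector sum of the directional fanlvl component and a deviational base-pressure component at a random angle:

import math
import random

def initial_velocity(fanlvl, base_pressure):
    # Directional component along +x (the flow) plus a deviational component
    # at a random heading, mirroring the vector sum shown in Figure 2.1.1.
    theta = math.radians(random.uniform(0.0, 360.0))
    vx = fanlvl + base_pressure * math.cos(theta)
    vy = base_pressure * math.sin(theta)
    return vx, vy

Averaged over many particles, the deviational contributions tend to cancel, so fanlvl remains the mean flow velocity, as described above.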

Figure 2.1.1: Initial velocity as the sum of fanlvl and base-pressure vectors

Each particle also had a scaled color, from very light tints of blue to dark shades of red, that would represent the speed at which the particle was moving. This value could also be used as a scale to represent the particle's temperature, since each particle was assumed to be a gas particle and average kinetic energy was calculated using mass and velocity, with all the masses assumed to be equal. The collision kernel was worked on to make sure it conserved both momentum and energy along the axis of collision. For each step, the collision algorithm counted the number of particles in each patch, and if there were more than one, the x- and y-velocity components of all the particles were randomly shuffled among each of the particles in that patch. This made sure that momentum and energy were conserved along the axis of collision since each particle is assumed to have the same mass and since the overall velocity components remained the same, even


though the individual motion of the particles was altered. The collisions of air particles with gridded areas of the wind tunnel were handled in a different manner. These surface collisions were detected through the presence of interpenetration. Due to the discrete nature of the model, particles could penetrate into a restricted patch before a collision was detected. In every time step, the position of each particle was checked to see if the particle was in a restricted patch (top/bottom boundary or wing patch). If this was the case, the velocity components of the particle were modified and the particle was redirected based on the surface normal of the patch that it was on (see Figure 2.1.2 below). The projection of a particle's motion vector onto the surface normal of the patch was used in modifying the x- and y-components of velocity and redirecting the particle as shown below. This ensured that particles bounced at the correct angle and that the energy of the system was conserved.
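Both collision rules lend themselves to a compact sketch. The Python stand-in below is again hypothetical (the model itself is written in NetLogo, and particles are represented here as simple dictionaries): the same-patch shuffle conserves total momentum and kinetic energy for equal-mass particles, and the surface bounce subtracts twice the projection of the velocity onto the patch's unit surface normal.

import random

def shuffle_patch_collisions(particles):
    # 'particles' are the particles currently in one patch; their x- and y-velocity
    # components are redistributed at random, so the component totals (and the
    # total kinetic energy, with equal masses) are unchanged.
    vxs = [p["vx"] for p in particles]
    vys = [p["vy"] for p in particles]
    random.shuffle(vxs)
    random.shuffle(vys)
    for p, vx, vy in zip(particles, vxs, vys):
        p["vx"], p["vy"] = vx, vy

def reflect_off_surface(vx, vy, nx, ny):
    # Specular bounce off a restricted patch with unit surface normal (nx, ny):
    # v' = v - 2 (v . n) n
    dot = vx * nx + vy * ny
    return vx - 2.0 * dot * nx, vy - 2.0 * dot * ny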


A wing had to be placed into the environment for this model to be tested. The decision was made to place a Clark Y wing into the model, since it is a standard wing design but also includes a curve to add some complexity to the model, making it a viable wing for testing and building purposes [13]. The wing was discretized into small lines that modeled the curve of the wing, and then small squares were chosen to represent the substance of the wing. Each of these small squares would be represented by a patch, and each patch would have an attribute that would give the direction of the surface normal n̂ based on the part of the discretizing line that represents that patch in the wing (see Figure 2.2.1 below).

Figure 2.2.1: Close-up of discretized Clark Y wing with normal vectors

Figure 2.1.2: Redirection of agent following interpenetration into wing.

Although this algorithm worked in most situations, some restricted patch areas were only one patch thick, and particles could penetrate into these areas from multiple directions. This problem was later resolved with a second set of surface normal values that were inserted for each such patch so that particles on both sides of the patch could bounce off correctly. An axis was calculated between the two surface normal values; if a particle approached from below that axis, it bounced off one normal, and if it approached from above, it bounced off the other, still using the vector addition shown above. Gravity was ignored in this model for two reasons. For one, the wing in a wind tunnel is held in a certain position and gravity is not allowed to affect the acceleration of the wing. Also, at this scale, gravity would have a negligible effect on the density gradient within the fluid.

2.2 System Conditions

The size of the system could be edited using the settings button inside the model, and this was an easy way to change the ratio between the size of the wing and the volume of the tunnel. In addition, the initial wing position could be specified using the wing-x-shift and wing-y-shift input fields. This gave the user more control over the model.

Another feature that was added to the model was the angle of attack variable. Airplane wings are generally angled, which allows them to either catch or lose lift, and being able to edit this angle of attack is an integral component of CFD models. Thus, the angle-of-attack slider was added to the model, representing how much the wing was angled above the horizontal. If the actual angle of the wing were changed, the model would run into discretization trouble and would also have to redefine the surface normal values; therefore, it was decided that the most efficient way to edit the angle of attack was to change the angle at which the flow rate and the deviational components were directed and added onto the particles when they were first created. In short, the x- and y-axes were redrawn at the angle of attack. Additionally, the entire wind tunnel was modified so that the top and the bottom of the tunnel were rotated to the attack angle as well. This would ensure that particles bounced correctly off the boundaries of the tunnel and did not skew any of the lift data by bouncing at the wrong angle.

2.3 Output and Display

Measuring the overall force being exerted on the wing by the particles in the system involved recording values at each collision. It was already known that the particles would only exert force along the surface of the wing, and the patches on the surface of the wing all had surface normal values that would report the direction in which the force was being applied. All collisions conserved momentum and energy, meaning that the change in velocity could be used to represent the force, since F = ma, where all the masses are equal and the acceleration is the change in velocity over one time step. Using this information, code was implemented that caused each patch of the wing to record the total amount of force applied by all particles



that came into contact with that patch during each tick of the program. These values were added to a growing list of collision force values for each patch, and after the size of the list reached a certain sampling number, the oldest value was deleted so that the maximum size of the list for each patch remained the same. The value used to represent the collision force on each patch of the wing was the mean of all the values in this list, and the sum of the x- and y-components of these force values resulted in a measure of the total force applied on the wing as a result of the collisions from the air particles on the surface of the wing. The lift is the component of this force perpendicular to the angle of attack. Since the x- and y-axes were being redefined based on the angle of attack, lift was simply the component of the overall force along the y-axis.

A similar method was used to calculate and represent the moment of the object in order to show how the object would rotate due to these different collision forces from the particles. First, assuming that each patch that was a part of the wing would have the same mass, the x- and y-coordinates of each wing patch were averaged to get a center of mass. The magnitude of the moment was calculated by recording the cross product between the vector from the center of mass to the patch of choice and the force vector for that patch. Since the resulting vector only had a z-component, the direction of the moment could be simply represented by a positive or negative value. The sum of all the moments was put into a list in every time step, and once the list became greater than the desired sampling length, the oldest value was dropped off. The mean of this list was recorded as the moment value at the center of the wing.

Now that the model was completed algorithmically, various display and output components had to be created in order to make it meaningful. The model needed to be capable of reporting and displaying basic information about the system as well as the monitored values. Reporters were placed near the model environment that reported basic information about the model, such as the volume of the tunnel, the density in terms of gas particles per grid unit, and the overall temperature and pressure. Additionally, the model also had reporters for the magnitude of the total lift and the x- and y-components of this lift, as well as a reporter for the moment being calculated. In order to visually display this information in the wind tunnel itself, small arrows were placed in the wing patches that allowed the user to see how much force was being applied to each surface patch and in which direction. Additionally, the sum of these forces resulted in another arrow that was placed at the center of mass of the wing, and this arrow showed the total force applied on the wing as well as the direction in which it was applied. Similarly, an arrow was created for the moment. It was placed a certain distance horizontally away from the center of mass of the wing, and the moment was divided by the length of this moment arm to show the moment at that patch of the wing. Additionally, axes were placed in the model that changed

with the angle of attack to give an accurate representation of how the horizontal and vertical components were oriented. This concluded the changes made to the model for visual clarity and the important display components and monitors (see Figure 2.3.1 below).

Figure 2.3.1: Close-up of wing with individual patch arrows, moment arrow, and force arrow displayed

Furthermore, the model was cleaned up and organized so that it was easier for the user to see exactly what was happening in the model and where (see Figures 2.3.2 and 2.3.3 below).

Figure 2.3.2: Model at setup

Figure 2.3.3: Model during run
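To make the output bookkeeping described above concrete, the following Python sketch is a simplified stand-in (the class and function names are hypothetical; the model itself keeps these lists per patch in NetLogo). It shows the per-patch running window of collision forces, the total force and moment about the wing's center of mass, and the lift taken as the force component along the rotated y-axis, i.e. perpendicular to the flow defined by the angle of attack:

from collections import deque
import math

SAMPLE_LENGTH = 100  # ticks kept in each patch's running window (the "sampling number")

class WingPatch:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.forces = deque(maxlen=SAMPLE_LENGTH)   # (fx, fy) recorded each tick

    def record(self, fx, fy):
        self.forces.append((fx, fy))

    def mean_force(self):
        if not self.forces:
            return (0.0, 0.0)
        n = len(self.forces)
        return (sum(f[0] for f in self.forces) / n,
                sum(f[1] for f in self.forces) / n)

def force_lift_moment(patches, angle_of_attack_deg):
    # Center of mass, assuming every wing patch has the same mass.
    cx = sum(p.x for p in patches) / len(patches)
    cy = sum(p.y for p in patches) / len(patches)
    fx_tot = fy_tot = moment = 0.0
    for p in patches:
        fx, fy = p.mean_force()
        fx_tot += fx
        fy_tot += fy
        # z-component of (r - r_cm) x F; its sign gives the sense of rotation.
        moment += (p.x - cx) * fy - (p.y - cy) * fx
    a = math.radians(angle_of_attack_deg)
    lift = -fx_tot * math.sin(a) + fy_tot * math.cos(a)  # component along the rotated y-axis
    return fx_tot, fy_tot, lift, moment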

3. Results

Simply building what was thought to be a CFD model was not enough, however: the reported values had to be measured and analyzed in order to see if they were even reasonable. To do this, the behavior space tool of NetLogo was put to use. This tool allows the user to run the model with various inputs for a certain number of time-steps and record whatever value is desired as the dependent variable. For this model, there were two independent variables: the magnitude of initial flow (represented by the fanlvl variable) and the angle of attack. The dependent variable measured as a result of these was the mean lift. Before running the model to retrieve data, graphs were



created in the modeling environment that tracked density and lift over time-steps. It was seen that, no matter what the fan level and attack angle, the density remained constant but the lift after the sample length varied around a certain value, rising and falling but maintaining a certain average. In addition, the rise and fall, or the amplitude, of the lift function decreased over time. This is displayed in Figure 3.1, which was pulled directly from the NetLogo modeling environment.

Figure 3.3: Contour of Collision-Induced Lift vs Flow Rate and Angle of Attack for Clark Y Wing

Figure 3.1: Density and Lift vs. Ticks

Due to this fact, a reporter was placed into the model that would measure the mean lift after the desired sample length had been reached. This is the reporter that was tracked in the behavior space experiment. The rest of the behavior space setup was then configured, with the constants being a density of 1 particle/patch, a base pressure of 3 patches/tick, and a sample length of 100 values. For the independent variables, the fan level was varied from 1 to 3 patches/tick in increments of 0.5 patches/tick, and the angle of attack was varied from 0 to 14 degrees in increments of 2 degrees. The length of each of these 40 runs was set at 1000 ticks, and each run was repeated 5 times. The mean of the lift recorded in the five repetitions for each run was then calculated in order to ensure stability and eliminate transient behavior, and a graph was created plotting these lift values against the flow rate and angle of attack (see Figures 3.2 and 3.3).
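The sweep just described amounts to 40 parameter combinations, each averaged over five repetitions. The following minimal Python sketch only summarizes that bookkeeping; run_model is a hypothetical stand-in for one behavior space run of 1000 ticks returning the mean lift:

import statistics

def sweep(run_model):
    # Fan level 1.0-3.0 patches/tick in 0.5 steps; angle of attack 0-14 degrees in 2-degree steps.
    fan_levels = [1.0 + 0.5 * i for i in range(5)]
    attack_angles = range(0, 16, 2)
    results = {}
    for fan in fan_levels:
        for angle in attack_angles:
            lifts = [run_model(fanlvl=fan, angle_of_attack=angle, ticks=1000) for _ in range(5)]
            results[(fan, angle)] = statistics.mean(lifts)   # average of the 5 repetitions
    return results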

As can be seen from these graphs, there is a definite trend: lift increases with both angle of attack and flow rate. This shows that the model does output realistic values for lift based on these variables. These values are known to be realistic because the same sort of trend is noted in traditional CFD packages [6]. Additionally, to confirm that the data from the model were not the result of coding or algorithmic errors, the tests were run again using a simple circle of patches rather than a wing cross-section. The results are displayed in Figures 3.4 and 3.5 below. As is expected for a circular object, neither the angle of attack nor the flow rate has any recognizable effect on average lift. This proves two things about the model. First, the algorithms within the model are not miscoded and do not skew the data, since the model performed as expected in this case. Second, the error due to discretization of the object is not very large, because even though the object tested was not a perfect circle, the output values still varied around a lift of 0, just as expected.

Figure 3.4: Surface of Collision-Induced Lift vs Angle of Attack and Flow Rate for Circle

Figure 3.2: 3D Surface of Collision-Induced Lift vs Flow Rate and Angle of Attack for Clark Y Wing

Figure 3.5: Contour of Collision-Induced Lift vs Angle of Attack and Flow Rate for Circle



4. Discussion

The results drawn from the model imply that its hybrid agent/grid approach to CFD performs well in measuring lift. Although the model is unique in that it has both Eulerian and Lagrangian components and that lift is actually calculated using the collisions between mobile agent particles and the grid agent wing, the results that are output still agree with standard CFD models. There is definitely a similar trend relating lift to angle of attack and flow rate as can be seen in such models, and with the addition of realistic boundary conditions, the agent/grid based model would perform even better and output more meaningful data. The fact that such a model could be built, one that is agent-based and depends on the interaction between agents and their environment to run, and that it also agrees with traditional CFD models, shows that agent-based modeling is viable when it comes to CFD. This model is also very easily accessible since it is built using freeware and can be downloaded and used by anyone for any sort of project or research.

As mentioned earlier, the goal of this project was not to compete with existing CFD packages. In fact, the reason for building this model was to see what kind of collision-induced lift could be recorded in a hybrid agent/grid wind tunnel model and whether these recorded values were realistic. This work is just a small start on what could end up being a very versatile and accurate tool, and there are endless ways in which the model can be refined and expanded to include features that are so far missing.

5. Conclusion and Future Work

The goal of this project was to build an agent/grid based computational fluid dynamics model that could help model wing designs and then study the effects of air particles on such wings. The entire project was carried out with this goal in mind, and the end result was a working model of this kind that displays lift as it should. The experiments carried out were focused on making sure that the lift values from the model made theoretical sense and varied correctly with flow rate and angle of attack. In general, the experimentation attempted to validate the model by analyzing the data it was outputting. Furthermore, the experimentation also attempted to eliminate the possibility that the model merely seemed to be working correctly while in reality it was not. The results from the behavior space experiments did end up demonstrating that the model works well, because they output realistic values of lift as a function of flow rate and angle of attack. As expected, increasing the angle of attack increased the lift, as did increasing the flow rate. Additionally, the experiments run with the circle resulted in small values for lift that all varied around 0 and had no trend with either angle of attack or flow rate. This proves that the resulting lift is not due to errors in the code of

the model; it is legitimately reported as it is expected to be reported. This model is an excellent start because, so far, all the output results agree with what is expected. However, there are many ways in which this model can be refined and expanded so that it truly becomes more versatile and accurate, with results that are truly meaningful to the user.

One way in which this model can be improved is by adding actual scale and units to all the values. This would mean a better model that is calibrated to reality, with values that actually make sense in terms of units rather than just being simple numbers [4]. This would give the user a chance to compare the results from this model to other models and to real-life conditions in general. Units are an important part of any measurement, and this model would be much stronger with realistic units. Another way to refine the model is to have a better discretized wing. This would mean expanding the overall size of the tunnel to allow for a larger and more detailed wing inside. As the wing gets bigger, it becomes easier to discretize, and the small errors that occur due to discretization would be minimized in comparison to the total size of the wing. Basically, this refinement seeks to put a larger amount of information into the model and to expand its size. Also, more agents and a higher density of particles in the model would better replicate what actually happens when a wing is slicing through the air. The number of particles in the model, although on the order of 10,000, is still very small compared to the number of particles that would occupy the same amount of space in reality. Although equipment is currently a limit, since the computer that the model is being run on slows down considerably whenever one attempts to raise the density too high, expanding the number of particles is still an important improvement to make. Another possible addition to the model is an editor that allows users to input any sort of wing into the program. For example, a user could insert x- and y-coordinates for a wing, and the model could automatically discretize such a wing and add surface normal values on its own, automatically scaling and inserting the wing into the model. This would truly allow users to test anything in the model and to see how such objects would perform in air particle flow. Furthermore, adding friction to the model would make it even more realistic and would also allow for the calculation of friction drag and other such values, making for an even more robust CFD tool.

All of these suggested additions and modifications would make the model a stronger program in general and would allow the user a larger amount of freedom in choosing what to measure and how. The beauty of this model is that it can be expanded to whatever degree is required, and one can go as far in depth into the model as they want. One can always refine the code and add a new feature that makes the model more


realistic and also outputs better and more meaningful results. This model could even be expanded to a 3-dimensional one – which is an actual feature in NetLogo – to give users a stronger tool for studying how objects would perform in air flow [8]. This would make for an even more robust model, but one that is still easily accessible and serviceable by the inexperienced or casual user.

6. Acknowledgements

The completion of this project would not have been possible without my mentor, Dr. Garrett Love. He provided his invaluable expertise on the NetLogo language and also on the research topic in general. Additionally, he provided his molecular kinetics NetLogo model for study and use. Also, my research teacher, Mr. Robert Gotwals, was very important to this project. He helped mentor the project and made sure that deadlines were met and resources were provided.


References

[1] "Airfoils and Airflow." AV8N. N.p., n.d. Web. 30 Sept. 2014. <http://www.av8n.com/how/htm/airfoils.html>.
[2] Austin, R. (2010). Unmanned Aircraft Systems: UAVs Design, Development and Deployment. Aerospace Series. Chichester: Wiley.
[3] Bankes, S. C. (2002). Agent-based modeling: a revolution? Proceedings of the National Academy of Sciences of the United States of America, 99 Suppl 3, 7199–7200. doi:10.1073/pnas.072081299
[4] Bushnell, D. M. (2006). SCALING: Wind Tunnel to Flight. Annual Review of Fluid Mechanics. doi:10.1146/annurev.fluid.38.050304.092208
[5] Damaceanu, R. C. (2008). An agent-based computational study of wealth distribution in function of resource growth interval using NetLogo. Applied Mathematics and Computation, 201, 371–377. doi:10.1016/j.amc.2007.12.042
[6] Lissaman, P. B. S. (1983). Low-Reynolds-Number Airfoils. Annual Review of Fluid Mechanics. doi:10.1146/annurev.fl.15.010183.001255
[7] Lomax, H., Pulliam, T., Zingg, D., & Kowalewski, T. (2002). Fundamentals of Computational Fluid Dynamics. Applied Mechanics Reviews. doi:10.1115/1.1483340
[8] Lytinen, S. L., & Railsback, S. F. (2010). The evolution of agent-based simulation platforms: a review of NetLogo 5.0 and ReLogo. In European Meetings on Cybernetics and Systems Research (pp. 1–11).
[9] Macal, C. M., & North, M. J. (2013). Introductory tutorial: Agent-based modeling and simulation. In Proceedings of the 2013 Winter Simulation Conference: Simulation: Making Decisions in a Complex World, WSC 2013 (pp. 362–376). doi:10.1109/WSC.2013.6721434
[10] Moonen, P., Blocken, B., Roels, S., & Carmeliet, J. (2006). Numerical modeling of the flow conditions in a closed-circuit low-speed wind tunnel. Journal of Wind Engineering and Industrial Aerodynamics, 94, 699–723. doi:10.1016/j.jweia.2006.02.001
[11] O'Neil, D. A., & Petty, M. D. (2013). Organizational Simulation for Model Based Systems Engineering. Procedia Computer Science, 16, 323–332. doi:10.1016/j.procs.2013.01.034
[12] Thiele, J., Kurth, W., & Grimm, V. (2011). Agent- and individual-based Modelling with NetLogo: Introduction and New NetLogo Extensions. Die Grüne Reihe, 68–101. ISSN 1860-4064
[13] "UIUC Airfoil Coordinates Database." UIUC Airfoil Data Site. UIUC Applied Aerodynamics Group, n.d. Web. 18 Aug. 2014. <http://m-selig.ae.illinois.edu/ads/coord_database.html#C>.
[14] Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002. Print.



Engineering, Programming and Testing The Efficacy of a Novel Single Cell Array

Aaron Sartin

ABSTRACT

The objective of this project is to create a system to isolate, sort, and control large arrays of single cells, allowing for the retrieval of specific cells for downstream analysis. In large groups of nearly homogenous cells, the averages mask the responses of small numbers of heterogeneous cells, which often provide crucial details necessary to understand their exact mechanisms. HIV latency is a prime example; although HIV latency was discovered twenty years ago, what must be done immunologically to clear infected reservoirs has yet to be resolved. To analyze what causes these reservoirs, we create a platform to sort magnetically labeled cells into separate compartments embedded upon a silicon wafer. By using a variety of wires and an external magnetic field, we are able to control the cells' direction and sort them efficiently. In order to combat HIV, and a larger class of diseases, we seek to create a device to isolate, store, and extract single cells.

1. Introduction

In the study of biological cells, the analysis of single cells and their pertinent interactions is limited at best. Currently, the most popular method for single cell analysis, fluorescence activated cell sorting, only provides a limited snapshot of a cell's life cycle [9]. Because of this lack of resolution, mapping genomes and studying mechanisms of rare cells often inhabiting largely homogenous cultures becomes difficult. By creating an array for the sorting and retrieval of such cells, many cells can be analyzed over long periods of study. This device would allow for the generation of accurate genomes for rare cells, and the study of cell-to-cell interactions that would otherwise be impossible to observe with precision.

One important class of cells to study are those infected by human immunodeficiency virus (HIV), which affects approximately 35 million people. It has been known for approximately twenty years that HIV enters a latent state [6]; however, the exact mechanism that causes dormancy and reactivation has yet to be discovered [20]. Recently, histone deacetylase (HDAC) inhibitors such as vorinostat have shown progress in disrupting latency, but the immunological process necessary to clear the reservoirs of latent HIV infected cells is still poorly understood [2] [8]. Creating an array to isolate, store, and analyze cells is essential to advances in understanding the intricacies of not only HIV, but other diseases as well [27].

An understanding of CD8+ T cell responses, specific to HIV-1, is essential to controlling and potentially curing the disease. These cells implement multiple methods to control viremia, the spread of a virus through the bloodstream, such as the direct delivery of cytolytic proteins to infected CD4+ cells and the secretion of multiple cytokines [1] [7] [5] [4]. Some patients, known as elite controllers, have strong immunological responses that are able to suppress

the replication of the virus and nearly halt the progression of HIV to AIDS. Unfortunately, the mechanisms utilized by the elite controllers' immune systems are unclear due to the wide variety of polyfunctionalities, proliferative capacities, and cytolytic capacities of CD8+ T cells [4] [16] [28] [10]. It is currently suspected that the direct degranulation of serine proteases (e.g. perforin and granzyme B) is an especially effective method of HIV suppression found in elite controllers [16]. Although there is uncertainty about the exact mechanisms, CD8+ T cells are critical in the eradication of latent reservoirs of HIV infected cells [8] [16] [21]. Because of this, analyzing the specific interactions between HIV-specific CD8+ T cells and infected cells is essential to the prevention, control, and cure of HIV. Although this study primarily focuses on HIV, the single cell array can be applied to a variety of problems such as cell lineage tracing [15] [13], next generation vaccine development [12], the tumorigenic potential of single tumor cells [18] [17], mechanisms of aging and the cell life cycle [19] [22], determining differentiation pathways of stem cells [25] [11], cell decision making and cooperative behavior [14] [3], and gene regulation and correlated fluctuations [23] [24].

2. Methodology

In order to move cells into and out of our array, we utilize magnetic forces. We begin by creating paths of magnetic permalloy, Ni81Fe19, on a silicon wafer and magnetically labeling the cells. The setup is situated in a horizontally rotating magnetic field, causing the magnetically labeled cells to move in distinct steps according to the cycles of the field. By varying the magnetic field at different locations in our array, we are able to cause cells to move in different directions. To ensure the health of the cells, a complex system for temperature management must


be developed in conjunction with a nanoscope in order to track cell locations. The combination of these parts into a silicon wafer is illustrated in Figure 1. My work on this project primarily focused on instrument development regarding the physical structure and the automation of the system created.

Figure 1: The array chamber (a) illustrates the cell compartments and the electronic switching elements for varying placement of cells. The chip schematic (b) shows the overall layout of the silicon wafer, including a deposition and extraction chamber. The entire chip is then contained within an incubator (c)[26].


2.1 Movement

Our method of cell movement takes inspiration from magnetic bubble technology; an uneven, magnetized path is used in combination with an external rotating magnetic field to create motion. As the external field rotates, the points of lowest potential energy along the path move in turn. This concept is illustrated in Figure 2.

Figure 2: A magnetically labelled cell, represented by the black dot, continually moves to a state of lowest potential energy, represented by the blue region. The red regions represent regions of highest potential energy, yellow medium levels, and green low to medium levels. This is controlled by the rotating magnetic field represented by the red arrow [26].

Because the pathing is magnetic, a cell remains attached to the permalloy pathing even if the external magnetic field is exerting a pulling force. This attachment allows us to simultaneously create cell movement at any angle.

2.2 Sorting and Storage

In order to sort and store the magnetically labeled cells, we utilize three types of intersections, places where the permalloy paths meet or loop upon themselves. The first is pictured in Figure 2; due to the geometry, the cell is able to jump the thin section of the vertical path when traveling to the right; however, when going in the opposite direction, the cell instead travels along the adjacent path. This type of gate serves two primary functions. First, it is used to connect rows of cells as they are exiting the system; this connection greatly simplifies the extraction process, since cells can be retrieved from a single location. Second, this type of intersection can be built upon to create a basic containment system, pictured in Figure 3. The cells are able to cross the vertical path located in approximately the center of the diagram, but, as explained earlier, they are unable to exit; this allows the cells to be trapped in a continuous cycle.

Figure 3: A sample design of a storage compartment. A cell trapped within this compartment is highlighted by the green circle. The cell is able to travel in a continuous loop until extracted [26].

In order to extract cells from these compartments, we build upon the first type of intersection, pictured in Figure 2, by applying an additional magnetic force through the use of wires. By applying current in the appropriate direction, we can create a magnetic force in line with the overall magnetic field, causing cells to "hop over" the walls they would normally be unable to climb. Because we are able to selectively apply a current to these wires at any given time, they effectively become switches. Figure 4 demonstrates the pathing effects they have.




Without any additional force, a cell would simply loop around the path, but with a current applied to the wire, a cell is able to hop the small gap. This hop occurs because of the symmetry of lowest potential energy along the two paths. When the external magnetic field is in line with the adjacent paths, the potential energy at either point is equal. By applying a small magnetic force that is in line with the external magnetic field, a cell is able to hop between the two points. When these three gates are combined, we can create a complex array that allows for the efficient sorting, storage, and extraction of magnetically labeled cells. An example is pictured below in Figure 6.

Figure 4: The cell, represented by the black dot, is moving in the counter-clockwise direction. The normal path expected is demonstrated in figures a through c. By applying a current to the wire above the pathing, a magnetic force is created, causing the expected path to change. This new path is illustrated in figures d through f. In figures a and d the potential energy is illustrated, in which blue represents the lowest values and red the highest [26].

In order to sort cells we utilize what is effectively a two-way switch. Similar to the switch pictured in Figure 4, a wire positioned above the setup has a current applied to it in order to generate an additional magnetic force. The setup is pictured in Figure 5.

Figure 5: Two adjacent paths are separated by a small gap. Because of the magnetization of the paths, there is a symmetry in the points of lowest energy along both sides, as illustrated in figure a. The expected pathing when no current is applied to the wire is pictured in figures a through c, and the expected pathing when a current is applied is pictured in figures d through f. Figures a and d show the potential energy, where dark blue is the lowest and red the highest [26].

Figure 6: An example design of an 8 by 8 array of cell storage compartments. Cells move along the white lines, representing the permalloy pathing; the yellow and orange lines represent the cell control wires. Cells initially enter from the top left, move through the system, and can exit in the lower right.

In order to prevent errors and minimize the number of wires required to generate the maximum number of compartments, each compartment contains what is essentially a two-step lock, illustrated in Figure 6. Every row and column has an associated wire. In order for a cell to enter or exit a compartment, both the wire associated with the row and the wire associated with the column must have a current applied to it.
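The addressing rule just described, in which a compartment responds only when both its row wire and its column wire are energized, can be sketched in a few lines of Python. The function and wire names here are hypothetical illustrations, not the actual control code:

def gate_open(active_rows, active_cols, row, col):
    # The "two-step lock": a compartment's gate opens only when both the wire
    # for its row and the wire for its column carry current.
    return row in active_rows and col in active_cols

# Energizing row "C" and column 3 addresses only compartment C3; compartments that
# share just the row or just the column see a single energized wire and stay closed.
print(gate_open({"C"}, {3}, "C", 3))   # True
print(gate_open({"C"}, {3}, "C", 5))   # False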

Figure 7: A close-up image of the compartments pictured in Figure 6. Each compartment requires both the wire associated with the row and the wire associated with the column to be activated in order to enter or exit.


This allows multiple cells to be sorted at once without other cells accidentally entering compartments on the same row or column as the intended destination.

2.3 Instrument Development

To create a reliable magnetic field and ensure the health of the cells under analysis, an external structure must be created. This consists of a plastic support structure created by a 3D printer, coils of copper wire, an incubation chamber, and a nanoscope for the analysis of cell locations. This setup is pictured in Figure 8.


Figure 8: Through the use of two sets of copper wires wrapped perpendicular to each other, represented by the orange lines, a magnetic field is created. This setup is then contained within a plastic support structure, represented by the gray and black, in which the slide and incubation chamber are contained. The silicon wafer is contained within the center divot of the structure. The implementation of this structure is pictured on the right, where a nanoscope is used to analyze cell locations.

The support structure is composed of four unique pieces. The magnetic field generating component, pictured in Figure 9, is at the core. With two sets of copper wire coiled at right angles to each other, the magnetic field's direction can be rotated as they receive varying currents.

Figure 9: Opposite copper wires are powered in series to create a controllable magnetic field. The orange strips represent the coils of copper wires.

In order to support the magnetic field generating structure and to hold the chip containing the cells, a set of elbows and a center were created in a 3D printer. The elbows, one of which is pictured in Figure 10, ensure the stability of both the magnetic field and the chip and allow for the organization of the copper wires used to generate the magnetic field.

Figure 10: One of four elbow pieces used to support the structure.

The center component, pictured in Figure 11, rests on top of the entire structure. The center is divoted, positioning the chip containing the cells in the center of the magnetic field. Divots for input cables and clips are incorporated.

Figure 11: The center of the structure. The indent within the center holds the silicon wafer, the divot to the left allows for convenient wiring, and the holes on the right allow for the placement of clips to stabilize the chip.

Finally, in order to keep the cells alive, an incubation chamber was used. By applying even electrical heating and measuring the chamber's temperature, human body temperature can be maintained. This setup encases the silicon wafer as pictured in Figure 1.
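A minimal sketch of the closed-loop temperature regulation described above follows. It is purely illustrative: the sensor and heater interfaces, the tolerance, and the polling interval are assumptions, not the actual incubator hardware or firmware.

import time

SET_POINT_C = 37.0   # human body temperature
TOLERANCE_C = 0.5

def regulate(read_temperature, set_heater, poll_seconds=1.0):
    # Simple on/off control: heat when the chamber drifts below the set point,
    # and switch the heater off once it drifts above it.
    while True:
        t = read_temperature()
        if t < SET_POINT_C - TOLERANCE_C:
            set_heater(True)
        elif t > SET_POINT_C + TOLERANCE_C:
            set_heater(False)
        time.sleep(poll_seconds)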

2.4 Automation

In order for the use of this system to be efficient, it is automated using a combination of LabVIEW to control a UEIDaq board and MATLAB to register cell locations



through differential imaging techniques. The interface of this system, in which the voltage of the wires, the frequency of rotation, and the sorting of cells are all customizable, is shown below in Figure 12.

3. Results

The efficacy of magnetic cell control is primarily being tested through the use of three and five micron magnetic beads. Additionally, we have also shown efficacy using living cells, as illustrated in Figure 14.

Figure 12: The user interface for sorting cells. The Resource and Device Information section contains data regarding the UEIDaq board used. The analog output settings allow for global customization of the voltage and the length of a cycle, or a rotation of the magnetic field, in milliseconds. The waveform chart and analog output display the voltages for each of the 32 pins. By selecting a cell control map and any one of the buttons A1 to H8, a cell can be sorted into the desired compartment.

By using MATLAB, the entire system can be automated. Cell locations are determined in each frame, and the differences in locations over a series of frames are used to track speed and ensure that any potential errors can be corrected.
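A simplified Python/NumPy sketch of the frame-differencing idea follows; the production code is written in MATLAB, and the threshold value and the use of SciPy's region labeling are illustrative assumptions rather than the actual implementation:

import numpy as np
from scipy import ndimage

def bead_locations(prev_frame, frame, threshold=25):
    # Threshold the absolute difference between consecutive frames and return the
    # centroid of each connected bright region, one candidate position per moving bead.
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, list(range(1, n + 1)))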

Figure 13: Using MATLAB software, 5 micron beads moving along a track are identified.

By identifying cell locations, movement can be appropriately timed and predicted through a LabVIEW interface. In order to maximize efficiency, cells will be sorted simultaneously. To accomplish this, a map of the relative timings of the gates needed to transfer a cell from the beginning of the system to a given compartment is created. For example, sorting a cell into compartment A1 requires the activation of three switches in sequential order. By generating a map containing each of the compartments, the timings for all switches necessary to transfer a cell to any given compartment can be efficiently retrieved. These timings can then be passed into the program, where an internal counter mechanism activates the switches when needed. Because we are able to vary the frequency of rotation of the magnetic field, and therefore the speed of the cells, timing is measured by the number of cycles instead of seconds.
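The timing-map idea can be sketched as follows. The entries and wire names are illustrative placeholders only, not the actual gate sequence, and scheduling is expressed in field cycles rather than seconds, as described above:

# Relative timings (in field cycles) of the switches to activate for each destination.
# These entries are placeholders for illustration only.
TIMING_MAP = {
    "A1": [(4, "row_A"), (4, "col_1"), (9, "exit_join")],
    "B3": [(4, "row_B"), (6, "col_3"), (11, "exit_join")],
}

def schedule(destination, start_cycle):
    # Convert a destination's relative timings into absolute cycle numbers.
    return [(start_cycle + offset, wire) for offset, wire in TIMING_MAP[destination]]

def on_cycle(counter, pending, energize):
    # Called once per rotation of the field: pulse any wire scheduled for this cycle.
    for cycle, wire in pending:
        if cycle == counter:
            energize(wire)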


Figure 14: A successful filming of a cell moving along the permalloy path. Each image is one-fourth of a cycle later than the previous.

These beads approximate both the size and the magnetic moment of the magnetically labelled HIV-infected cells. In the majority of the initial testing of cell and bead manipulation, we found accurate and reliable success, but occasionally some error emerged. This error manifested in two ways: clumping and unexpected cell velocity. In rare cases, a grouping of beads could agglomerate, causing a clump to become stuck together and potentially fail to progress forward through the intersections. Although we expect this will not be nearly as much of a problem in the case of the HIV-1 infected cells, due to natural repulsion between biological cells, we are working to counteract it by increasing the strength of the magnetic field while lowering the frequency when such cases occur. This minimizes velocity error and increases the force on the cells, allowing them to begin to separate. Additionally, by applying varying chemical coatings, such as soap, we are able to help prevent these agglomerations.

The majority of the clumping error is generated from unexpected velocities. As theory suggests, we expect that



the speed at which the cells move along the pathing varies linearly with the frequency of the external magnetic field. Experimentally, we find this pattern to hold true at low frequencies, but as we approach values of approximately two hertz, the velocity of the cells actually decreases, approaching zero as the frequency continues to increase. This phenomenon is illustrated in Figure 15. In order to ensure the continuous movement of all cells at the expected values, we utilize frequencies between 0.1 and 1 hertz.


Figure 15: The measured velocity of five micron beads is plotted against the frequency of the rotation of the magnetic field. The velocity varies linearly, as expected, up until values of approximately one hertz, when we begin to see a decrease.

The automation of the external magnetic field and the UEIDaq controlled chip has proven successful. We are able to precisely control the field's strength and frequency at all points, along with each of the pins used for sorting cells. Sample maps have functioned properly, but the final mapping of the array is currently under construction. The incubation chamber, similarly, has also been proven effective. We are able to clearly view each of the beads or cells and accurately import their images into MATLAB for analysis while maintaining a regular temperature of approximately 37˚ Celsius.

4. Future Work

Moving forward, there are two major steps in project progression. The first is completing the automation of cell sorting. By utilizing the differential imaging software provided in MATLAB, we are able to import images of cells from a nanoscope positioned above the system and analyze cell locations. These locations can then be passed into the sorting program, allowing for complete automation. Once this step is complete, the testing of the efficacy of the entire system may begin. Initially this will be done by conducting population tests with the three and five micron beads, but live cell tests will be used soon afterwards. A large part of this stage will be maximizing efficiency regarding the possibility of clumping and unexpected cell velocities.

Upon completion of the tool, tests on the HIV-1 infected cells can begin. A variety of tests will be implemented, but they will largely focus on the interactions between HIV-1-specific CD8+ cells and infected CD4+ cells. This will involve the efficacy of various methods of elimination and the mapping of their respective genomes. The development of this tool will allow for the better examination not only of HIV, but also of a multitude of other diseases and treatment options.

5. Acknowledgments

For their help with constructing and testing the device, I'd like to thank the researchers at Duke: Dr. Benjamin Yellen, Roozbeh Abedini, Shahrooz Amin, Cody Baker, Dr. David Murdoch, John Yi, and Zachary Healy. I'd like to thank Dr. Jonathan Bennett and my fellow Research in Physics students for their assistance in learning the necessary material. Finally, I'd like to thank the NCSSM Foundation for helping to fund my research.

References

[1] Almeida, Jorge R., et al. "Antigen sensitivity is a major determinant of CD8+ T-cell polyfunctionality and HIV-suppressive activity." Blood 113 (2009): 6351-6360.
[2] Archin, N.M., et al. "Administration of vorinostat disrupts HIV-1 latency in patients on antiretroviral therapy." Nature 487 (2012): 482-485.
[3] Balaban, Nathalie Q., et al. "Bacterial Persistence as a Phenotypic Switch." Science 305 (2004): 1622-1625.
[4] Betts, Michael R., et al. "HIV nonprogressors preferentially maintain highly functional HIV-specific CD8+ T cells." Blood 107 (2006): 4781-4789.
[5] Chen, Huabiao, et al. "Differential Neutralization of Human Immunodeficiency Virus (HIV) Replication in Autologous CD4 T Cells by HIV-Specific Cytotoxic T Lymphocytes." Journal of Virology 83 (2009): 3138-3149.







Number Game

Kevin Chen, Jay Iyer, and Sandeep Silwal

ABSTRACT
We consider cycle graphs where integers are placed on the vertices. Then an iterative step is introduced: on each vertex, the integer is replaced with a new integer, namely the absolute value of the difference between the integer on that vertex and the integer on the vertex to its right. We use tools from number theory to analyze the end behavior of this process. Our results were that if the number of vertices is a power of two, then eventually all of the vertices hold only zeroes. In the other case, where the number of vertices is not a power of two, the graph enters a periodic state. We then generalized our work by considering wheel graphs and a similar iterative process, which used the fact that wheel graphs are self-dual. Our result was that this process is also eventually periodic. We then considered the case where we replaced the integers on the vertices of cycle graphs with real numbers. We proved that if the number of vertices is three or a power of two, all of the numbers on the vertices converge to zero. We conjecture that this behavior holds for all numbers of vertices.

1. Introduction

Consider a cycle graph Cn where all the vertices have an integer associated with them. Construct another graph Cn formed by the midpoints of the edges. For the new vertices, associate with them the absolute value of the difference of the integers that are the immediate neighbors of the midpoints. An example of C3 is shown below.

We wish to study the end behavior as this process is continuously repeated. To do this, we will form a tuple of the values of the integers associated with the vertices and define an operation which corresponds to the building of a new cycle as described above. Initially, we will assume that the values of the vertices are non-negative integers. Later on, we will analyze the situation where the values take place in R. Define X = (x1, x2, . . . , xk) as a k-tuple of non-negative integers. We will let k equal the length of X and define

∆X = (|x1 - x2|, |x2 - x3|, . . . , |xk - x1|),

with ∆qX = ∆(∆^(q-1)X) and ∆0X = X. Let X (mod 2) = (x1 (mod 2), x2 (mod 2), . . . , xk (mod 2)). Let max{X} be the largest entry in X and min{X} be the minimum entry in X. Finally, we will treat ∆ here as an operator and say an iteration of X to mean applying ∆ once to X. We will provide some proofs for the behavior of ∆qX depending on the value of k.
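As an illustration, the following is a minimal Python sketch of the operator ∆ and a few of its iterations; the code and the starting tuple are only an example, not the authors' implementation.

```python
def delta(X):
    """One application of the operator: cyclic absolute differences."""
    k = len(X)
    return tuple(abs(X[i] - X[(i + 1) % k]) for i in range(k))

# Illustrative starting tuple on C3 (any non-negative integers work).
X = (1, 2, 3)
for _ in range(6):
    print(X)
    X = delta(X)
# Prints (1, 2, 3), (1, 1, 2), (0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 1, 1): since the
# length 3 is not a power of 2, the tuple becomes periodic rather than all zeroes.
```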

2. Periodicity of X

2.1 Length is a power of 2

Lemma 2.1. Suppose k = 2^j. If not all of the entries of X are even, at least one of {X, ∆X, . . . , ∆^(k-1)X} has all odd entries.

Proof. Consider the entries in X (mod 2). If

∆qX (mod 2) = (1, 1, . . . , 1)

for some q, there are two possibilities for ∆^(q-1)X (mod 2),

(1, 0, 1, 0, . . . , 0), (0, 1, 0, 1, . . . , 1),

each one uniquely determined by our choice of x1. Similarly, each of the choices for ∆^(q-1)X yields two unique possibilities for ∆^(q-2)X. Continue this backwards process to ∆^(q-(k-1))X. Now if some tuple of parities is possible for ∆^l X with q - (k - 1) ≤ l ≤ q, then it cannot be possible for any ∆^r X, r ≠ l, since by our construction a tuple possible for ∆^(q-n)X goes to ∆qX in exactly n iterations. Thus, in

{∆^(q-(k-1))X, ∆^(q-(k-2))X, . . . , ∆qX},

we have accounted for 1 + 2 + . . . + 2^(k-1) = 2^k - 1 of the different combinations of parities. Since we assumed not all of the entries were even initially, all possibilities of parities for the xi have been accounted for. Applying this process forward, we can see that applying ∆ at most k - 1 times to any starting choice of X, except one where all the entries are even, gives us an iteration of X with all odd entries, as desired.




Lemma 2.2. If X (mod 2) = (0, 0, . . . , 0), then ∆qX (mod 2) = (0, 0, . . . , 0) for all q ≥ 1.

Proof. Suppose Z = (z1, z2, . . . , zk) and zi ≡ 0 (mod 2) for all i. Then let

∆Z = Y = (y1, y2, . . . , yk).

Then for every i, yi = |za - zb| ≡ 0 - 0 ≡ 0 (mod 2) for some 1 ≤ a, b ≤ k. Taking Z = ∆^(q-1)X proves the lemma by induction.

Corollary 2.3. Let k = 2^j. One of

{X, ∆X, . . . , ∆^(k-1)X, ∆^k X}

has all even entries.

Proof. If X has all even entries, then we are done by Lemma 2.2. Otherwise, by Lemma 2.1 above, one of {X, ∆X, . . . , ∆^(k-1)X} has all odd entries, so that for some 1 ≤ q ≤ k - 1,

∆qX (mod 2) = (1, 1, . . . , 1).

Then if

Y = ∆^(q+1)X (mod 2) = (y1, y2, . . . , yk),

we have yi ≡ 1 - 1 ≡ 0 (mod 2) for all i. Then by Lemma 2.2, ∆rY (mod 2) = ∆^(q+1+r)X (mod 2) has all even entries for every r ≥ 0, and q + 1 ≤ k, as desired.

Lemma 2.4. Let X be the tuple (x1, x2, . . . , xk) and suppose d | xi for 1 ≤ i ≤ k. Then let Y be the tuple (x1/d, x2/d, . . . , xk/d) and denote Y = (1/d)X. For all q ≥ 0, if

∆qX = (a1, a2, . . . , ak)

then

∆qY = (b1, b2, . . . , bk)

where ai = dbi for all 1 ≤ i ≤ k.

Proof. We prove this by induction on q. Let xi = dmi. Then Y = (m1, m2, . . . , mk) and

∆X = (|dm1 - dm2|, |dm2 - dm3|, . . . , |dmk - dm1|) = (d|m1 - m2|, d|m2 - m3|, . . . , d|mk - m1|)

and

∆Y = (|m1 - m2|, |m2 - m3|, . . . , |mk - m1|)

as desired. Now suppose that this holds for some q = r ≥ 1. Then ∆rX = (dt1, dt2, . . . , dtk) and ∆rY = (t1, t2, . . . , tk). Then

∆^(r+1)X = (d|t1 - t2|, d|t2 - t3|, . . . , d|tk - t1|) and ∆^(r+1)Y = (|t1 - t2|, |t2 - t3|, . . . , |tk - t1|),

which completes the induction.

Theorem 2.5. Let k = 2^j. Then ∆qX = (0, 0, . . . , 0) for some finite q.

Proof. From Corollary 2.3, ∆rX has all even entries for some finite r. Then consider Y = (1/2)∆rX. From Lemma 2.4, the values of the tuples in the iterations of Y are exactly half of those in the corresponding iterations of ∆rX. However, max{Y} < max{∆rX} unless ∆rX = (0, 0, . . . , 0). Now applying the same argument to Y gives us a sequence of tuples with decreasing maximum terms. Since these maximum terms are all non-negative integers, they must eventually reach 0. Therefore, there has to be a q such that ∆qX = (0, 0, . . . , 0), as desired.
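Corollary 2.3 and Theorem 2.5 can be checked by brute force for small cases; the sketch below is only an illustration (the length 4 and the entry range {0, . . . , 9} are arbitrary choices), not a proof.

```python
from itertools import product

def delta(X):
    k = len(X)
    return tuple(abs(X[i] - X[(i + 1) % k]) for i in range(k))

def iterations_to_zero(X, limit=1000):
    """Return how many applications of delta are needed to reach the all-zero tuple."""
    for q in range(limit):
        if all(x == 0 for x in X):
            return q
        X = delta(X)
    return None  # not reached within the limit

# Length 4 is a power of two: every tuple with entries in {0,...,9} should reach zero.
counts = [iterations_to_zero(X) for X in product(range(10), repeat=4)]
print(all(c is not None for c in counts))            # True if every tuple reached zero
print(max(c for c in counts if c is not None))       # largest number of iterations observed
```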

2.2 Length is not a power of 2

Suppose X = (x1, x2, . . . , xk) and suppose xm = max{X}, 1 ≤ m ≤ k. Define the left cell of X as (xr, . . . , xm), where xi = 0 for r ≤ i < m. Define the right cell analogously. For example, consider X = (3, 0, 0, 27, 0, 5). Then the left cell of X is (0, 0, 27). Then ∆X = (3, 0, 27, 27, 5, 2), which has the left cell (0, 27).
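The left cell just defined can be computed directly; the sketch below (an illustration only, ignoring wrap-around of the cycle for simplicity) reproduces the worked example above.

```python
def delta(X):
    k = len(X)
    return tuple(abs(X[i] - X[(i + 1) % k]) for i in range(k))

def left_cell(X):
    """Run of zeroes immediately to the left of the (first) maximum entry, plus that maximum."""
    m = X.index(max(X))
    r = m
    while r > 0 and X[r - 1] == 0:
        r -= 1
    return list(X[r:m + 1])

X = (3, 0, 0, 27, 0, 5)
print(left_cell(X))         # [0, 0, 27]
print(left_cell(delta(X)))  # [0, 27]  -- one entry shorter after an iteration
```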

If the length of both the left and the right cell is 1, i.e. the maximum term is not surrounded by any 0's, then max{X} > max{∆X}. Otherwise, suppose that the length of the left cell is t > 1 and that the length of the left cell is greater than the length of the right cell.

Lemma 2.6. The length of the left cell in ∆qX is t - q, q ≥ 0.

Proof. We will proceed by induction on q. The (r - 1)th term of ∆X is |x_{r-1} - x_r| ≠ 0 by definition, since x_r = 0 and x_{r-1} ≠ 0. Then |x_{n-1} - x_n| = 0 for r < n ≤ m - 1, since x_{n-1} = x_n = 0, and |x_{m-1} - x_m| = x_m ≠ 0, so the length of the left cell in ∆X decreases by 1: if ∆X = Y = (y1, y2, . . . , yk), the left cell is (y_r, . . . , y_{m-1}). Now suppose that the lemma holds for some q ≥ 1, and write ∆qX = (l1, . . . , lk), so that its left cell is (l_r, . . . , l_{m-q}). Then the (r - 1)th term of ∆^(q+1)X is |l_{r-1} - l_r| ≠ 0 since l_r = 0 and l_{r-1} ≠ 0, and |l_{n-1} - l_n| = 0 for r < n ≤ m - q - 1 since l_{n-1} = l_n = 0. Then if ∆^(q+1)X = (z1, . . . , zk), the left cell of ∆^(q+1)X is (z_r, . . . , z_{m-(q+1)}), which completes the induction. We can perform a similar argument if the length of the right cell is greater than the length of the left cell.

Theorem 2.7. Let X = (x1, x2, . . . , xk) where k is not a power of 2. Then for some finite q, all the entries of ∆qX are zeroes or consist of zeroes and a positive integer c.

Proof. If all the entries of ∆rX are equal for some r, then ∆^(r+1)X consists of all zeroes. Otherwise, we can assume that the entries are never all equal. Then consider the left and right cell of X. WLOG, the length of the left cell is larger. Then by Lemma 2.6, the length of the left cell of some iteration of X is 1, which means that the maximum term is not surrounded by any 0's. Therefore, after some finite t, max{X} > max{∆tX}. We can repeat this argument until all but one of the entries of the iterations of X are zeroes, as desired. If the length of the right cell is larger, then we can apply a similar argument.

Theorem 2.8. For every X, there exist q and r such that ∆qX = ∆^(q+r)X.

Proof. If X has length a power of 2, then the theorem is clear since ∆qX = (0, 0, . . . , 0) for some finite q. Similarly, if X has a length that is not a power of two but still goes to (0, . . . , 0), the theorem is true. Therefore, it remains to prove the theorem for the cases where X goes to a tuple consisting of just zeroes and a positive integer c. Now if the length of X is k, there are only 2^k possibilities for the different arrangements of c and 0's in such a tuple, so some iteration must repeat: there exist s and q such that ∆sX = ∆^(s+q)X. Then, since a tuple Y completely determines ∆Y, we have ∆^(s+r)X = ∆^(s+q+r)X for all r ≥ 0. Since the terms in the iterations of ∆ applied to X repeat after every q iterations, we say X has period q.

2.3 Length of the Periods

Theorem 2.9. Let k = 2^j. Let M be the largest entry in X. Then it takes at most (⌊log2 M⌋ + 1)(k + 1) iterations of X to reach (0, 0, . . . , 0).

Proof. From Corollary 2.3, all terms in ∆^(k+1)X are even. Therefore, the maximum term in (1/2)∆^(k+1)X is at most M/2. Every k + 1 steps, we can divide all of the terms of our tuple by 2. Therefore, the maximum number of iterations taken for X to reach (0, 0, . . . , 0) depends on when the maximum term of (1/2^s)∆^(s(k+1))X is at most 1 for some s. If s = ⌊log2 M⌋, the maximum term in (1/2^s)∆^(s(k+1))X is at most 1, and therefore we need k + 1 more steps to reach (0, 0, . . . , 0).

Theorem 2.10. Let X be a tuple of length k not a power of two. Let M be the largest entry in X. Then it takes at most M(k - 2) + 2^k - 1 iterations of X to become periodic.

Proof. From Lemma 2.6, the number of steps it takes for the maximal term to decrease is the length of the larger of the left cell and the right cell, minus 1. Therefore, we can consider the maximal left cell, of the form (c, 0, 0, . . . , M), which takes k - 2 iterations to reduce the maximal term. Then we can again consider the worst-case scenario. Repeating this argument, we get that it takes at most (k - 2)M iterations of X to reach a state of 0's and a constant c. Then, since there are at most 2^k - 1 other possibilities for the placement of c's and 0's, we get that it takes at most M(k - 2) + 2^k - 1 iterations of X to reach a periodic state.
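The eventual periodicity of Theorem 2.8, and the flavor of the bounds in Theorems 2.9 and 2.10, can be observed numerically. The sketch below simply iterates until a tuple repeats; the starting tuples are arbitrary examples, not taken from the paper.

```python
def delta(X):
    k = len(X)
    return tuple(abs(X[i] - X[(i + 1) % k]) for i in range(k))

def transient_and_period(X):
    """Iterate until some tuple repeats; return (iterations before the cycle, cycle length)."""
    seen = {}
    q = 0
    while X not in seen:
        seen[X] = q
        X = delta(X)
        q += 1
    return seen[X], q - seen[X]

print(transient_and_period((5, 11, 2, 7)))    # k = 4: ends at (0, 0, 0, 0), a fixed point
print(transient_and_period((1, 5, 9, 2, 6)))  # k = 5: eventually periodic, as Theorem 2.8 states
```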

2.4 Wheel Graphs

We will now consider a slight variation of the process described in the introduction. Consider a wheel graph Wn where each vertex is associated with an integer. Let C represent the integer associated with the center vertex. Now we will construct a vertex for each triangular face of Wn and associate with it the integer that is the maximum of the absolute values of the differences of all vertices on the face. Furthermore, we will create one additional vertex which will be associated with the integer that is the maximum of the absolute values of all the differences of the integers not in the center vertex of Wn. Connecting all the new vertices, we will get another wheel graph Wn since Wn is self-dual. An example with W3 is shown below.

Similar to above, we will work with a tuple and create an operation which matches the process described above. Define a Dual tuple X as X = (C; x1, x2, . . . , xk) with the center entry C and outer entries xi. We will call xj the jth entry in X. Define

∆X = (C'; y1, y2, . . . , yk),

where

C' = max{|xj - xi|}

for 1 ≤ i, j ≤ k and



yi = max{|xi - C|, |x_{i+1} - C|, |xi - x_{i+1}|},

with indices on the outer entries taken cyclically. Due to the graphical interpretation of the process, for every ∆qX = (C; x1, x2, . . . , xk) we can equally well consider the rotation (C; xm, x_{m+1}, . . . , x_{m+k}). Now define

max{X} = max{C; x1, . . . , xk} and min{X} = min{C; x1, . . . , xk}.
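A small Python sketch of this Dual-tuple operation follows; it is illustrative only (the starting values are arbitrary, and indices on the outer entries are taken cyclically), not the authors' code.

```python
def delta_dual(X):
    """One iteration on a Dual tuple X = (C, x1, ..., xk) for the wheel-graph game."""
    C, outer = X[0], X[1:]
    k = len(outer)
    # New center entry: largest difference between two outer entries.
    new_C = max(abs(a - b) for a in outer for b in outer)
    # New i-th outer entry: largest difference on the face containing C, x_i, x_{i+1}.
    new_outer = tuple(
        max(abs(outer[i] - C),
            abs(outer[(i + 1) % k] - C),
            abs(outer[i] - outer[(i + 1) % k]))
        for i in range(k)
    )
    return (new_C,) + new_outer

X = (4, 1, 7, 2)  # center 4 with outer entries (1, 7, 2)
for _ in range(6):
    print(X)
    X = delta_dual(X)
# The maximum entry never increases, and the printed tuples soon start repeating.
```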

Lemma 2.11. If C = max{X}, then max{∆X} is an outer entry.

Proof. Let xj = min{X}. Then |C - xj| ≥ |C - xi| and |C - xj| ≥ |xi - xn| for 1 ≤ i, n ≤ k. Then the jth entry in ∆X is equal to |C - xj| = max{∆X}.

Lemma 2.12. max{X} > max{∆X} if all of the entries in X are nonzero.

Proof. Since no entries are equal to 0, every entry in ∆X is equal to |a - b| for some nonzero positive integers a and b, and is therefore strictly less than the larger of the two entries it is computed from. Therefore, max{X} > max{∆X} as desired.

Lemma 2.13. If max{∆qX} > max{∆^(q+1)X} for all q > 0, then ∆rX = (0; a, a, . . . , a), a > 0, for some r.

Proof. Using Lemma 2.11, every center entry is less than or equal to the maximum value in ∆qX for every q > 0. Then, since max{∆qX} forms a decreasing sequence of positive integers, there exists an r such that max{∆rX} = 0. If the center entry is 0, we are done. If not, the center entry of ∆^(r+1)X is 0. Therefore, it remains to consider the case when max{X} does not decrease in each turn. Consider X = (C; x1, x2, x3, . . . , xk). By Lemma 2.11 we can suppose that the center entry is not the maximal entry. Furthermore, we can suppose that x1 is the maximal entry in X by rotation. Then the maximum does not change if X = (C; x1, 0, x3, . . . , xk) where x1 = max{X}. After X has reached such an iteration, the next three iterations of X are

(x1; x1, 0, x'3, . . . , x'k),

(x1; x1, x1 - min(x'3, x'4), . . . , x1 - x'k),

and then

(x'j; 0, . . . , x'j, x'j, . . . , x''k)

for some 1 ≤ j ≤ k, where the x'i and x''i are some integers. Note that x'j = max{∆3X}.

Lemma 2.14. Let X = (aj; 0, a2, . . . , aj, aj, . . . , ak) with aj = max{X}. Then for every ∆qX, q ≥ 1, two consecutive entries are equal to aj and a 0 is an entry in ∆qX.

Proof. We proceed by induction. If X is of the above form, then

∆X = (aj; aj, a'2, . . . , 0, . . . , aj).

Now suppose that the statement holds for some positive integer r. Then

∆rX = (aj; 0, a'2, . . . , aj, aj, . . . , a'n)

(by an appropriate rotation if necessary), where aj = max{∆rX}. Then

∆^(r+1)X = (aj; aj, a''2, . . . , 0, a'j, . . . , aj),

which completes the induction. Therefore, the lemma is true for all positive integers q.

Theorem 2.15. Every Dual tuple X is eventually periodic.

Proof. Since the maximum term in ∆qX never increases, there are only finitely many possible tuples, so every Dual tuple X must be eventually periodic. Furthermore, we have determined a sufficient and necessary condition for the period to start by Lemmas 2.13 and 2.14.

2.5 Generalized Number Game

We will now prove a result where the entries in our tuple take values in R and the length of the tuple is a power of 2.

Theorem 2.16. If k is a power of two, there exists an integer j such that ∆jX = (0, 0, . . . , 0), where X = (x1, x2, . . . , xk) and xi ∈ R for 1 ≤ i ≤ k.

Proof. Let X = (f1, f2, . . . , fk) where all fi are polynomials in y1, y2, . . . , yk. The following diagram shows an example for the k = 4 case.



Therefore, ∆X in our example is the following.

Now pick k numbers a1, a2, . . . , ak in R. We will assign each yi, 1 ≤ i ≤ k, to the real number ai. In our example above, we will assign y1 to √2 and assign y2, y3, y4 to the other three chosen real numbers.

To compute ∆X, we will look at what happens to ∆X in R. In the example above, X = (x1, x2, x3, x4) where

We note that in this process, the degree of each entry at any point is equal to 0 or 1 in each variable since we are only adding or subtracting degree 1 polynomials. Now we will consider just the constant term of the polynomials. Doing this, the first iteration of ∆ is

In R, we will say X = (x1, x2, x3, x4) where

When in R, if X = (x1, x2, x3, x4), we will utilize previous definitions and let

∆X = (|x1 - x2|, |x2 - x3|, . . . , |xk - x1|). In our example above, we have ∆X = (y1, y2, y3, y4). Then to determine ∆X, we will reverse our assignments of y1, y2, y3, y4.

Call X' the tuple where we just consider the constant term of each entry of X. If we consider X' (mod 2), from Corollary 2.3, we know that there is some iteration q1 such that

∆^(q1)X' (mod 2) = (0, 0, . . . , 0).

Then we consider Y' = (1/2)∆^(q1)X'. Similarly, there is a q2 such that ∆^(q2)Y' = (0, 0, . . . , 0). Continuing this, we see that the entries of the iterations of X' are divisible by arbitrarily high powers of 2. Thus, they must all be equal to 0 at some finite iteration I of ∆. Therefore,

∆IX = (g1, g2, . . . , gk), where each gi is a polynomial in y1, y2, . . . , yk with integer coefficients and no constant term. Then let ∆IX = X1 and consider (1/y1)X1. By Lemma 2.4, this does not affect the convergence of the iterations of X. Considering X1 reduces the



number of indeterminate variables by one. Then, by a similar argument as above, we can arrive at a tuple Xk which only has entries in Z. Then ∆qXk = (0, 0, . . . , 0) for some finite q. Therefore, X must reach zero in some finite iteration, as desired.
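The argument above is symbolic, but its conclusion can be glimpsed numerically. The floating-point sketch below is only an illustration (random values stand in for "generic" real entries, and floating-point arithmetic is only an approximation of the exact reasoning in the proof).

```python
import random

def delta(X):
    k = len(X)
    return tuple(abs(X[i] - X[(i + 1) % k]) for i in range(k))

random.seed(0)
X = tuple(random.uniform(0.0, 1.0) for _ in range(4))  # a length-4 tuple of real entries
for q in range(0, 61, 10):
    print(q, max(X))   # in this run the maximum entry decays toward zero (up to rounding)
    for _ in range(10):
        X = delta(X)
```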

Lemma 2.17. If X = (x1, x2, . . . , xk) and Y = (x1 + d, x2 + d, . . . , xk + d), then ∆X = ∆Y.

Proof. By definition,

∆X = (|x1 - x2|, |x2 - x3|, . . . , |xk - x1|) = (|(x1 + d) - (x2 + d)|, |(x2 + d) - (x3 + d)|, . . . , |(xk + d) - (x1 + d)|) = ∆Y.

We will now prove an analogous result for tuples of length 3. Suppose X = (a, b, c), where a, b, c are linearly independent over the rationals, and suppose c = min{X}. Using Lemmas 2.4 and 2.17, we can consider instead the convergence of X - c = (a - c, b - c, 0) and of (1/(a - c))(X - c) = (1, q, 0) for some irrational q. Since a - c > b - c, we have q ∈ (0, 1). From now on, we will let X = (0, 1, q) for irrational q < 1. Then ∆X = (1, 1 - q, q).

Lemma 2.18. All entries of ∆rX, r ≥ 1, are of the form |m - nq| with m, n non-negative integers.

Proof. We proceed by contradiction. If not all of the entries of ∆rX are in the above form for some r, then some entry must be either -a - bq or a + bq for positive integers a and b. (We can let a, b be positive since we can always change the sign of a, b.) The first case cannot happen since -a - bq < 0, which is not possible because all entries are greater than or equal to zero by definition. Furthermore, the entries cannot be equal to zero since a + bq is irrational while 0 is rational. The second case cannot happen since a + bq > a ≥ 1, while all entries of ∆rX are at most 1 and, being irrational, a + bq cannot equal 1.

From now on, we will assume that the entries in ∆rX are of the form |m - nq| for non-negative integers m, n. We will call m the constant term and n the linear term. Now in ∆rX, define a special entry as the entry having the largest constant term and the largest linear term. We will prove below that such a term always exists and is unique. Furthermore, for every entry xi = |a - bq| in ∆rX, call xi a type 1 entry if |a - bq| = a - bq and a type 2 entry if |a - bq| = -a + bq.

Lemma 2.19. Every ∆rX, r ≥ 2, has only one special entry, and each special entry is surrounded by a type 1 neighbour on one side and a type 2 neighbour on the other side. Furthermore, ∆rX = (a - bq, |(a + c) - (b + d)q|, -c + dq), where |(a + c) - (b + d)q| is the special entry in ∆rX.

Proof. We proceed by induction. The base case is ∆2X = (q, |1 - 2q|, 1 - q) and the special entry is |1 - 2q|. Furthermore, |1 - 2q| = |(0 + 1) - (1 + 1)q|, so the base case is true.

Now suppose that the lemma holds for some t ≥ 2. Then we have two cases for ∆tX. If the special entry in ∆tX is type 1, then

∆tX = (a - bq, (a + c) - (b + d)q, -c + dq) and

∆^(t+1)X = (-c + dq, |(a + 2c) - (b + 2d)q|, (a + c) - (b + d)q).

If the special entry in ∆tX is type 2, then

∆tX = (a - bq, -(a + c) + (b + d)q, -c + dq) and

∆^(t+1)X = (|(2a + c) - (2b + d)q|, a - bq, -(a + c) + (b + d)q).

Furthermore, we see that the constant term of the special entry is the sum of the constant terms of the other entries and, similarly, the linear term of the special entry is the sum of the linear terms of the other entries. This completes the induction, so the lemma is true for all positive integers t.

Lemma 2.20. Suppose

∆rX = (a - bq, |(a + c) - (b + d)q|, -c + dq). Then q is in one of the two intervals (c/d, (a + c)/(b + d)) and ((a + c)/(b + d), a/b).

Proof. Since a - bq > 0, we have a/b > q, and since -c + dq > 0, we have q > c/d. Therefore, q ∈ (c/d, a/b). Now if the special entry in ∆rX is type 1, then

(a + c) - (b + d)q > 0, or q < (a + c)/(b + d).

If the special entry in ∆rX is type 2, then

(b + d)q - (a + c) > 0,

or q > (a + c)/(b + d). Since c/d < (a + c)/(b + d) < a/b, we have that q is in one of the two intervals.
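The interval-narrowing mechanism of Lemmas 2.18-2.20 can be traced numerically by carrying the integer pair behind each entry |m - nq| along with the iteration. The sketch below is illustrative only: q = √2 - 1 is an arbitrary irrational in (0, 1), and the (m, n) bookkeeping is this example's own convention, not the authors' code.

```python
import math
from fractions import Fraction

q = math.sqrt(2) - 1  # an arbitrary irrational in (0, 1)

def delta_pairs(X):
    """Each entry is a pair (m, n) standing for the non-negative real value m + n*q."""
    k = len(X)
    out = []
    for i in range(k):
        m = X[i][0] - X[(i + 1) % k][0]
        n = X[i][1] - X[(i + 1) % k][1]
        if m + n * q < 0:      # take the absolute value while keeping integer coefficients
            m, n = -m, -n
        out.append((m, n))
    return out

X = [(0, 0), (1, 0), (0, 1)]   # the tuple (0, 1, q)
for r in range(12):
    X = delta_pairs(X)

# A pair with n > 0 gives the bound q > -m/n; a pair with n < 0 gives q < m/(-n).
lower = max(Fraction(-m, n) for m, n in X if n > 0)
upper = min(Fraction(m, -n) for m, n in X if n < 0)
print(float(lower), q, float(upper))   # a small interval that brackets q
```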

We will call Ir the interval that q is determined to be in from ∆rX. With the lemmas above, we are ready to prove the most important result in this section. Lemma 2.21. Suppose that Ir = Then

, n > 1.

Proof. We proceed by induction. ∆X = (1, 1 - q, q) so and so the base case is done. Suppose that the lemma holds for some positive integer t. Then suppose . Then from lemma 2.15, or . In the first case,



since by the inductive hypothesis,

Similarly, in the second case,

which completes the inductive step. Thus the lemma is true for all positive integers r.

Lemma 2.22. Let

represent the special entry in ∆rX. Then

Proof. Consider the special entry in ∆rX. If the special entry goes to 0, then the other two entries in ∆rX must also go to 0 because of Lemma 2.19. Now from Lemma 2.18, we have that

so rearranging gives

and

Then since

we have

as desired.

Lemma 2.23.

for some constant c.

Proof. By the triangle inequality and using the fact that n_{r+1} > n_r,

Similarly,

Now since m_r/n_r converges to q, there exists a j such that

This gives us

Since n_{r+n} > n_r for n ≥ 1, we have

Taking j - r + 1 = c completes the proof.

Theorem 2.24. If X = (0, 1, q) for irrational q, then

Proof. The first part follows from Lemma 2.19 and Lemma 2.20. For the second part, consider Ir = or Ir = . Since q ∈ Ir,

Since

we have that as desired. Therefore,

3. Conclusion

Our results can be summarized as follows. The end behavior of playing the number game on a cycle graph Cn with integer entries is that, if n is a power of 2, the entries go to zero. Otherwise, the entries either become all zeroes or the graph enters a periodic stage where all the entries are zeroes and another integer constant. For a wheel graph, the entries also eventually enter a periodic stage after a certain point in the game. We then analyzed cycle graphs with real entries, given that the entries are linearly independent over the rationals. For Cn where n is a power of 2, the graph eventually reaches a stage where all the entries are zeroes. For C3, the graph also eventually reaches a stage where all the entries are zeroes. It may be possible to generalize our method for C3 to other cycle graphs, and we conjecture that every cycle graph with such real entries behaves similarly to C3.

4. Acknowledgements

We would like to thank Dr. Teague for providing us with this wonderful problem and mentoring us.



Feature Article: An Interview with Jud Bowman and Taylor Brockman

Left to Right: BSS Chief Editor Jenny Wang, 2014-2015 Essay Contest Winner Keilah Davis, Jud Bowman, BSS Chief Editor Justin Yang, BSS Faculty Sponsor Dr. Bennett, Taylor Brockman, Taylor Robless (Brockman Apprentice Foundation, Digital Marketing). Photo Credit: Brian Faircloth

Jud Bowman ('99) is the founder and former CEO of Appia. Taylor Brockman ('99) is the CTO of Causam Energy. Together, they established the Bowman-Brockman Endowment for Entrepreneurship and Advanced Research at NCSSM.

Davis: What really inspired you guys to decide to start your own company as high schoolers at NCSSM?

Bowman: From my perspective, part of it was because of the timing. We were here '98 to '99, when there was a lot of euphoria in the tech market. We thought, there's no reason we can't do this!

Brockman: There was something special happening with the open source movement in '95-'96. You were able to contribute at an expert level no matter who you were or where you were. So being a part of that made a lot of tools available to get this thing going without requiring a lot of heavy investment - the software was freely available.

Bowman: One last thing I would say too - there is the beautiful thing of being 17 years old, which is that you don't have a fear of failure.

Davis: In the beginning, were there any hiccups or bumps in the road that may have been discouraging, but you pushed through?

Bowman: The biggest thing to tackle - the first stumbling block - was that the dorms had no internet. And this dates us big time, but there was no campus-wide wifi. Even our rooms didn't have cable lines. The first thing we had to do was to get internet into the dorms. That was all Taylor.

Brockman: Network accessibility to get internet access was key. That was where we did a lot of research. We launched our initial products on the internet, and that was still kind of a new thing at that time. We found some of our friends, three of our friends at the school, who were good with computers to join us over the summer. We convinced our parents to liquidate some of our college funds - our first $50,000 - so that we could buy some servers, hire some friends for the summer, and get an apartment we could work out of. They believed in us, and that was really important to us. I think another thing that was almost a stumbling block was that the college offer acceptance deadline was almost 2 weeks before our angel investor meeting. Jud and I both rolled the dice and deferred for a year, believing that we could get funded.

Bowman: Yeah, that was a tough decision, or it seemed like one at the time.

Davis: So was there any academic or life lesson that you learned here at Science and Math that helped you along the way, or was it just being at the school that gave you the confidence to do anything?

Brockman: There was a very specific lesson I learned my first year here. I had come from a very western state school where the mathematics program just wasn't as far along, and so I felt like I was placed in the wrong class here. So I had to convince my professor that I should be bumped up. And it was right along the time where the first graphing calculators were coming out. And so it was sort of applying what I had learned from the open source community to the mathematics problems in front of me, and demonstrating that was what allowed me to get moved up, sort of bypassing the standard procedure to get into the right class.

Bowman: Coming to Science and Math instills the belief that you can do anything - and that sounds so cliche - but I guess we did not have a fear of failure. And another thing was that Science and Math brings kids together. I mean, I never would have met Taylor if it weren't for NCSSM putting us in the same Hunt Dorm. He's from Shelby, NC, and I'm from Greenville and went to Enloe High School in Raleigh. We would have never met probably if it hadn't been for NCSSM. And to say this school changed our lives - it is so directly correlated. I never went to college - well, only for a little bit - and dropped out to become an entrepreneur. I feel like I've never had a job in my life. We started a project senior year that became a company that took on a life of its own. I've now had the chance to start two companies. So yeah, there is no question that this school changed our lives pretty directly.

Davis: As you reflect back on your experience here, what was one of your favorite memories?

Brockman: It was so much fun for me to be involved in starting a new club. It was something completely ridiculous: the Grillmaster Club, a club about the art of grilling hamburgers. I worked with the faculty to get it chartered and raised money to buy supplies. We got grills installed, brought our guitars and amplifiers out in front of the Hunt lawn, and had a big party with music and hamburgers.

Bowman: One of the coolest things about Science & Math is that it allowed me to explore many career paths. I originally thought I wanted to be an architect. Through the Mentorship program, I interned one day a week with a local architect doing residential architecture, which was a lot of fun but proved to me that I wasn't any good at it. Another thing I wanted to do was something related to Wall Street - trading stocks. I founded the investment club here and I was always daytrading stocks here on campus fairly successfully. One of the bio teachers - Dr. Naiman - had a relative who worked on the floor of the NY Stock Exchange as a market maker. I thought, that's the coolest thing in the world. We had a term for special projects - SPW - which...

Bennett: ... evolved into Miniterm.

Bowman: So for my senior year SPW/Miniterm I got to go to New York City with my girlfriend for an internship, getting to follow this market maker around at the NYSE. That was a really cool experience.
Wang: Since a large percentage of startups do fail, how do you know an idea will succeed or when to stop pursuing an idea?

Brockman: We've done a lot of research in that: creating the minimum viable product, or failing fast. Building a product is not the issue. The idea is the most important - finding a market need for an idea to take off, getting an idea out there as early as possible, seeing who responds to it, and being able to change it cheaply. It's like working in modeling clay before taking it to the kiln to be fired.

Bowman: I agree with the failing fast strategy. I generally encourage entrepreneurs now to not even pursue funding - and this is more for tech startups - unless you have massive initial traction. At least a million in revenue or a million in users. And those numbers are getting to the point where a million may not even be enough. And in terms of deciding whether to pursue an idea, I have a rule. I have to have a crush on an idea to the point where it's all I can think about for at least 30 days, almost like an obsessive crush. And if I'm literally sleeping, thinking, dreaming about it all the time, then it may be an idea worth pursuing. But I still wouldn't look for funding unless the idea had significant traction.



Brockman: And a lot do fail. 99 out of 100 do.

Bowman: I think we started our company in our dorm when there was almost an unfair advantage - the funding environment was great. These funding environments are cyclical. Right now is actually a great time. This past year was the best in venture capital since 2001, so it's not that different from when we started. These things come in waves, and in a lot of ways the best thing is that we started so young, because we were not scared of those odds of failure. In my opinion, it would be crazy to consider being an entrepreneur as a career path - it's just far too risky.

Brockman: The costs, especially on the software side, have come way down now. The clouds, the mobile development kits - $99 now gets you a year of what you need. I like to play in areas where you can play cheap and fail fast.

Bowman: On Wall Street there is a term called hedging, and in so many respects Taylor and I were able to take enormous risk but we completely hedged out the downsides. If we failed, both of us could go back to college. All of the capital we later obtained was from venture capital funding, not from ourselves.

Yang: What advice would you give to young aspiring scientific researchers/entrepreneurs?

Brockman: Take it outside of the classroom. Follow your passion. If you find yourself waking up 30 minutes early every morning reading news about your topic area of interest, that's a good sign. If you are tinkering, sketching, and prototyping on a Friday night with an idea, that's a good sign. Balance your courseload, but also find something that is academically challenging to do outside of the classroom.

Bowman: Surround yourself with the smartest people you can find and go to the best college you can get into regardless of the cost.

Bennett: What are you guys working on now?

Bowman: So the company that we started grew from the two of us in the dorm here to, at its peak, about 500 people. It went public on NASDAQ and we were both there for 9 years. We left around the same time - the summer of '08. We both have gone to different startups after that. The startup that I founded that summer now has about 60-65 people and is in the process of getting acquired.

Brockman: The project that I joined was named Causam Energy, and we announced last fall that we were acquiring a firm called Power Analytics, so we've got offices in Raleigh, San Diego, and Charleston. I can't really say how many people we have, but we are a small, nimble company focused on next-generation power-grid services. Just like what smartphones did to telephone systems, we imagine that these new communications systems will revolutionize the way we generate renewable energy to help us consume it in intelligent ways.

Wang: Besides your own companies, what are some of your favorite companies to follow? Favorite innovators?

Bowman: Besides the big ones? Google, Facebook, Tesla, SolarCity, Apple, Uber - I read about pretty much all of them at TechMeme. Right now my favorite entrepreneur to follow is Elon Musk, simply because he has such diverse interests. He may not be the wealthiest right now, and he has not yet achieved as much success as Gates or Zuckerberg, but he's pretty much the only entrepreneur that has made multiple multibillion-dollar companies across so many disparate industries - space, solar, batteries, PayPal - that's four!

Brockman: Small-scale companies are really fun to be involved with: the local community accelerators. There are really brilliant people that are taking those risks. They have a hot new idea and they want to talk about it. They want some advice. And watching the Series A and Series B funding rounds, especially in the southeast, is a lot of fun. In terms of innovation, I try to challenge everyone in my company to go to at least one conference every year. Recently, I went to a conference where the keynote speaker was Neil deGrasse Tyson. He was talking about all the new sensors used for asteroid discovery. His new telescope - the sheer data processing volume flowing from the telescope through the computational pipeline is amazing. At conferences they talk about the innovation that is new today, not like the stuff you read in textbooks. So now one of the challenges for me is to dream bigger - what's next? I want to do something to get out of my comfort zone. I'm incredibly excited about bioinformatics, especially the Bowman-Brockman projects we've been reading about. Just seeing the cutting-edge research where technology and biology begin to blend is exciting. So there are greater challenges ahead.



Questions? Comments? Submissions? broadstreetscientific@gmail.com


