
The Manhattan Scientist

Series B

Volume 3

Fall 2016

A journal dedicated to the work of science students



The Manhattan Scientist Series B

Volume 3

Fall 2016

ISSN 2380-0372

Student Editors
Eric Bailey
Gabriela Bukanowska

Faculty Advisor and Editor in Chief

Constantine E. Theodosiou, Ph.D.

Manhattan College

4513 Manhattan College Parkway, Riverdale, NY 10471
manhattan.edu


Series B

The Manhattan Scientist

Volume 3 (2016)

A note from the dean

The present Volume includes thirty papers, covering all disciplinary subjects of the School of Science at Manhattan College. These activities continue to be aligned with the College’s mission, to “provide a contemporary, person-centered educational experience that prepares graduates for lives of personal development, professional success, civic engagement, and service to their fellow human beings.” The quality of the projects is indicative of our students and faculty and their commitment to research as part of the students’ educational experience. The participants ranged from high school summer interns, to primarily undergraduate students of all our majors, to graduate students in Mathematics.

I would like to particularly express our gratitude to the faculty who willingly provided critical mentoring to our students and future colleagues. Most of the faculty received minimal or no compensation for these efforts.

This work continued to receive critical financial support for our students from a variety of sources (in no particular order): the School of Science Research Scholars Program, the Jasper Scholars Program of the Office of the Provost, the Catherine and Robert Fenton endowed chair in Biology, the Linda and Dennis Fenton ’73 endowed biology research fund, the Michael J. ’58 and Aimee Rusinko Kakos endowed chair in Chemistry, Robert Ryan ’71, Jim Boyle ’61, the Camille and Henry Dreyfus Foundation Senior Scientist Mentor Program, and a National Science Foundation research grant.

I would like to express my deep appreciation to the students for their efforts and their persistence on the road of scientific discovery. We are all honored to showcase our students’ and colleagues’ continuing contributions to the body of scientific knowledge. Finally, I want to thank the two student editors for their thorough review of the submitted documents and their editorial recommendations.

It is with great pleasure that the editors present the publication of Series B, Volume 3, of The Manhattan Scientist.

Constantine Theodosiou
Dean of Science and Professor of Physics

ISSN 2380-0372



Table of Contents

Biochemistry
Mapping the yeast nucleosomal protein interactome
    Gabriela Bukanowska . . . . . . . . . . . . . . . . . . . . 1
A study in nutrigenomics: how do dietary variations influence epigenetic status?
    Tiffany Rodriguez . . . . . . . . . . . . . . . . . . . . 9
Exploring chromatin dynamics within the DNA damage response pathway in living cells
    Bright Shi . . . . . . . . . . . . . . . . . . . . 19

Biology
Identification of intestinal parasites in domestic dog (Canis familiaris) from Winston-Salem, NC
    Eric Bailey . . . . . . . . . . . . . . . . . . . . 29
Procedures to determine rates of bark formation on saguaro cacti
    Lauren Barton . . . . . . . . . . . . . . . . . . . . 37
Automated quantitative analysis of terminal tree branch similarity by 3D registration
    Joseph Brucculeri . . . . . . . . . . . . . . . . . . . . 49
Leaf venation patterns and the distribution of water within leaves
    Jorge Gonzalez . . . . . . . . . . . . . . . . . . . . 57
Development of a finite element model of tree branches with variable leaf characteristics
    Jesse Jehan . . . . . . . . . . . . . . . . . . . . 63
Characterizing abnormal protein expression in Sense mutant Danio rerio as a link to amyotrophic lateral sclerosis
    James LiMonta . . . . . . . . . . . . . . . . . . . . 73
Xylem conductivity from stems to leaves of grass plants
    Humberto Ortega . . . . . . . . . . . . . . . . . . . . 77
Growth dynamics of flowering branches of Artemisia tridentata
    Ishmael Peña . . . . . . . . . . . . . . . . . . . . 85
Consequences of chytridiomycosis and urbanization faced by red-backed salamanders in Lower New York State
    Paul Roditis . . . . . . . . . . . . . . . . . . . . 93
Temporal variation in the prevalence of human intestinal parasites in two bivalve species from Orchard Beach, NY
    Freda Tei . . . . . . . . . . . . . . . . . . . . 99

Chemistry
Aniline analogues as new ligands for chromate capture
    Ashley Abid . . . . . . . . . . . . . . . . . . . . 107
Remediation of water containing chromium(VI) using an insoluble ascorbic acid derivative
    Mary Cacace . . . . . . . . . . . . . . . . . . . . 113
Determining the structure of SUZ-9
    Eric A. Castro . . . . . . . . . . . . . . . . . . . . 121
Solving the structures of ZSM-18 and SUZ-9
    Gertrude Turinawe Hatanga . . . . . . . . . . . . . . . . . . . . 129
Organic molecules that aid in removing chromium(VI) from water
    Douglas Huntington . . . . . . . . . . . . . . . . . . . . 135
A molecular mechanics study of chromodulin and tyrosine kinase
    James Irizzary . . . . . . . . . . . . . . . . . . . . 141
Spectroscopic study of the interaction between dipicolinic acid and human serum albumin
    James Irizzary and Matthew Feliciano . . . . . . . . . . . . . . . . . . . . 151
Investigation of the dipicolinic acid interaction with human serum albumin using spectroscopic measurements
    Sophia Prentzas and Marisa Kroger . . . . . . . . . . . . . . . . . . . . 159
Redesigning and improving the multistep synthesis of zingerone
    Dominick Rendina . . . . . . . . . . . . . . . . . . . . 167
Eliminating aqueous chromium(VI) with renewable technology
    Analisse Rosario . . . . . . . . . . . . . . . . . . . . 175



Computer Science
Analysis of HDD swap vs remote memory swap for Virtual Machines and Linux Containers
    Steven Romero and Emmanuel Sanchez . . . . . . . . . . . . . . . . . . . . 183
Autonomous remote memory for Virtual Machines and Linux Containers
    Emmanuel Sanchez and Steven Romero . . . . . . . . . . . . . . . . . . . . 187
Monte Carlo studies of 5 junction comb polymers
    John Stone . . . . . . . . . . . . . . . . . . . . 193

Environmental Science
Modification of coffee oil feedstock and heterogeneous catalyst for biodiesel synthesis
    Thérèse Kelly . . . . . . . . . . . . . . . . . . . . 201

Physics
Developing automated systems for measuring interference fringes of a Michelson interferometer
    Sean Heffernan . . . . . . . . . . . . . . . . . . . . 213
Tree branches as fractals
    Cristina Hibner . . . . . . . . . . . . . . . . . . . . 221
Development of the New Small Wheel for the ATLAS experiment – Micromegas
    Alex Karlis . . . . . . . . . . . . . . . . . . . . 229

On the cover: Image of a leaf with tertiary veins marked with numerous quaternary areas. (From Gonzalez, page 58)


Mapping the yeast nucleosomal protein interactome

Gabriela Bukanowska∗
Department of Chemistry and Biochemistry, Manhattan College

Abstract. There is a wealth of information regarding chromatin-related proteins in chromatin biology, yet there is still limited knowledge of their mechanistic details in living cells. This work aims to gain a better understanding of the protein interactions and dynamic networks that act at the nucleosomal level inside the living nucleus. Using yeast molecular genetics, the main focus of this research was the expansion of a library of genomically tagged proteins that are known to have chromatin functionality. The goal was to define the mechanisms by which these proteins interact with nucleosomes and how they influence overall chromatin structure. Through a PCR-based homologous recombination technique we introduced myc-tags as fusions to different protein gene sequences. The proper tagging of genes was verified following protein expression, utilizing western blotting and antibody recognition of the fusion protein from cell lysates. The labeled proteins were then assayed for nucleosomal interaction in vivo, by crosslinking from histone proteins that contained the photo-crosslinking unnatural amino acid p-benzoylphenylalanine (pBpa). This unnatural amino acid was added to the histone proteins through an expanded genetic code in Saccharomyces cerevisiae. This approach was successful at tagging proteins, and current work is being done to identify histone-protein interactions.

Introduction

Histones represent a highly conserved family of proteins which assemble to form the fundamental element of eukaryotic DNA compaction, the nucleosome. These repeating elements arise from intimate interactions between histones and double helix DNA. Histones are small, positively charged proteins that form octamers made up of two histone H2A-H2B dimers and a histone H3-H4 tetramer. This octameric unit is the core of the nucleosome. The negatively charged double helix DNA wraps around the histone octamer about 1.67 times, allowing 146 base pairs of DNA to interact and form a stable structure. The nucleosomes are joined together by linker DNA, which varies in base pair length from species to species. Although they are proteins of relatively small size, histones regulate various chromosomal roles such as chromatin remodeling, gene silencing, transcriptional activity, and replication. While ground-breaking work has provided crystal structures for the core nucleosome, there is little experimental evidence for an accepted model of higher-level chromatin organization [1]. Chromatin structural hierarchy begins with a primary 10-nm-diameter “beads on a string” assembly of histone octamers and double helix DNA. The repeating mononucleosomes rely on non-specific interactions, in which the sugar-phosphate backbone of the DNA and the basic amino acid residues of the histones form ionic bonds. As chromatin continues to condense, a 30-nm chromatin fiber arises as a secondary structure, which is then further organized into tertiary levels of 300- through 800-nm arrangements that ultimately give rise to the familiar shape of the chromosome. Nevertheless, the molecular interactions that stabilize higher-ordered structures remain elusive, and we can only speculate how the non-histone proteins help assemble and stabilize

∗ Research mentored by Bryan J. Wilkins, Ph.D.



chromatin beyond the 30-nm chromatin fiber. The intimate association between histones and DNA implicates these small proteins in various chromosomal roles. Irregularities in chromatin structure and function are hallmarks for cancer and disease. As part of this research, we used yeast molecular genetics to modify the gene sequences of proteins of interest, going on to express proteins with fused peptide tags that can be easily identified via immunochemical methods [2]. Most chromosomal research so far has been performed in solution, in vitro, whereas our research utilizes techniques that allow us to study the chromatin structure of the fiber and its interactions inside the cell, in vivo. Polymerase chain reaction (PCR) is one of the most versatile techniques in modern genetics; it amplifies a DNA target of interest using unique nucleotide oligomers that prime the DNA region of interest. Our goal was to add the DNA coding sequence for a small peptide fusion tag (3x Myc, N-EQKLISEEDL-C) and the selectable marker (HIS3M6X) to the genome at the 3’-end of a target gene. Homologous recombination of the PCR products was used to install the sequences at the genomic level [2]. The PCR products were transformed into yeast cells and the cellular machinery utilized those DNA segments. Homologous recombination requires that the homologous regions of the PCR products anneal to their genomic sequences. During replication, the polymerase will read through the annealed segments and insert the amplified DNA in between the flanking homologous regions. Our primers were designed so that they contained homology to the 3’-end of a target gene as well as the 3’-genomic regions flanking the gene (Fig. 1, green and red homology regions). The primer design and plasmid were previously standardized [2]; only changes in the S3 and S2 primer regions were necessary for the selection of the genomic target. The S3 and S2 homology regions pair to their genomic targets and are inserted through homologous recombination (Fig. 1). The modified genes translate to proteins containing fusion tags and were assayed for successful recombination events using antibody detection of the small myc-fusion tag that resulted from the translated mRNA. For those genes that were successfully labeled, nucleosomal interaction with that protein was assayed, in vivo, by crosslinking from histone proteins that contain a photo-crosslinking unnatural amino acid [3, 4]. The synthetic amino acid of interest is p-benzoylphenylalanine (pBpa), which site-specifically replaces histone amino acid residues in response to a genetically introduced amber stop codon, TAG [5]. Chemical crosslinking can identify the specific interfacing amino acid residues responsible for molecular binding events at protein interfaces. These residues can then be scanned for loci along the histone protein surfaces, which will reveal a mechanistic foundation for these events. Using this approach in vivo rather than in vitro gives us the opportunity to map selective interactions throughout the cell cycle. Our approach aims to map chromatin binding, providing a structural model of nucleosomal association, and to reveal mechanistic details of protein function. These functional interactions will help us understand how misregulation may occur, as well as understand disease states more clearly.
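For readers who wish to reproduce the primer layout, the following Python sketch illustrates the general S3/S2 design logic described above: each primer is a gene-specific homology region fused to a constant region that anneals to the tagging cassette. The two cassette-annealing adapter sequences are taken from the primers listed in the Materials and Methods; the homology inputs in the example are placeholders, not validated genomic sequences.

```python
# A minimal sketch of S3/S2 primer assembly for PCR-based epitope tagging,
# following the scheme in Fig. 1. The cassette-annealing adapters are copied
# from the primers listed in Materials and Methods; the gene-specific homology
# regions passed in below are hypothetical placeholders for illustration.

S3_ADAPTER = "CGTACGCTGCAGGTCGAC"    # anneals 5' of the 3-myc tag on pYM5
S2_ADAPTER = "ATCGATGAATTCGAGCTCG"   # anneals 3' of the HIS3 marker on pYM5

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence (A<->T, C<->G, reversed)."""
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]

def make_s3_primer(orf_3prime_homology: str) -> str:
    """S3 primer: homology ending at the 3'-end of the target ORF,
    fused to the cassette adapter."""
    return orf_3prime_homology.upper() + S3_ADAPTER

def make_s2_primer(downstream_genomic: str) -> str:
    """S2 primer: reverse complement of the genomic region just downstream
    of the stop codon, fused to the cassette adapter."""
    return reverse_complement(downstream_genomic) + S2_ADAPTER

# Placeholder homology regions, for illustration only:
print(make_s3_primer("GGCTGATATTCCACCTTTAACATTAGCATTG"))
print(make_s2_primer("TTTAAAGAAAAAAGTTGAGATTAGATTTATTG"))
```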



Figure 1. General scheme for homologous recombination. The S2 and S3 primers were specially designed to contain homology regions specific to a gene of interest (YFG). The S3 primer was designed to anneal to the 3’-end of a target gene and the S2 primer was designed to anneal to the 3’-genomic region of the gene. Each of the primers also contained regions that annealed to a reporter cassette, allowing amplification of the 3-myc tag and HIS3M6X gene sequences.

Materials and Methods

PCR amplification of the 3-myc-HIS3 cassette
The plasmid pYM5 was used for the amplification of the 3-myc-HIS3 cassette [2]. The PCR reaction mixture was as follows: approximately 1 µg pYM5 template plasmid, 200 µM dNTP mixture, 0.5 µl Phusion polymerase (1 U, Thermo Scientific F530S), 10 µl HF Phusion buffer, 0.2 µM S2 primer, 0.2 µM S3 primer, and water to a final volume of 50 µl. The reaction was performed on a Bio-Rad thermal cycler under the following cycling conditions: 30 cycles of denaturing at 98 °C for 195 s, annealing at 62 °C for 30 s, and extension at 72 °C for 75 s. To check the success of the reaction, 2 µl of sample was mixed with 2 µl of DNA running dye, loaded into a 1% agarose gel (50 ml 1X TBE buffer, 0.5 g agarose, 5 µl SYBR® Safe), and subjected to electrophoresis for 1 hour at 125 V. Agarose gel images were captured using an Omega Lum™ G Imaging System on the SYBR setting. The expected PCR product for each PCR was ∼1700 base pairs.
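As a worked example of how the 50 µl reaction above is assembled, the sketch below computes pipetting volumes with the standard dilution relation C1·V1 = C2·V2. Only the final concentrations (200 µM dNTPs, 0.2 µM each primer, 10 µl of 5X buffer) come from the text; the stock concentrations are common laboratory assumptions introduced purely for illustration.

```python
# Worked volume arithmetic for the 50 ul PCR described above (C1*V1 = C2*V2).
# Stock concentrations (10 mM dNTPs, 10 uM primers, 5X buffer) are assumptions
# for illustration; only the final concentrations come from the text.

FINAL_VOLUME_UL = 50.0

def stock_volume(final_conc: float, stock_conc: float,
                 final_volume: float = FINAL_VOLUME_UL) -> float:
    """Volume of stock needed so that it dilutes to final_conc."""
    return final_conc * final_volume / stock_conc

components_ul = {
    "dNTPs (10 mM stock -> 200 uM final)": stock_volume(0.200, 10.0),  # mM
    "S3 primer (10 uM stock -> 0.2 uM)":   stock_volume(0.2, 10.0),    # uM
    "S2 primer (10 uM stock -> 0.2 uM)":   stock_volume(0.2, 10.0),    # uM
    "HF buffer (5X stock -> 1X)":          stock_volume(1.0, 5.0),     # X
    "Phusion polymerase":                  0.5,   # volume given in the text
}
fixed = sum(components_ul.values())
for name, vol in components_ul.items():
    print(f"{name}: {vol:.1f} ul")
print(f"template + water to {FINAL_VOLUME_UL:.0f} ul: {FINAL_VOLUME_UL - fixed:.1f} ul")
```

Note that the computed 10 µl of 5X buffer matches the "10 µl HF Phusion buffer" stated in the protocol, which is a useful sanity check on the assumed stock strength.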



Primers used in this work
The italicized bases are those that prime to the target plasmid; bases that are not italicized are regions homologous to the genome.

Rsc9:
S3: GGCTGATATTCCACCTTTAACATTAGCATTGTCTGAATACATGGAAAACACGTCGGGGTTACGTACGCTGCAGGTCGAC
S2: CAATAAATCTAATCTCAACTTTTTTCTTTAAAGTTAAGCCCAACCGATTTTTTTTCTCATTCAATCGATGAATTCGAGCTCG

Ino80:
S3: GTCACTCGTGAAGGTAGCAAAAGCATAAGTCAAGATGGAATTAAGGAAGCGGCAAGTGCATTGGCACGTACGCTGCAGGTCGAC
S2: GCAGATTAAAGATAGACATTAACTCCGCTTAATGTAAATAACACAATATGAATACCTTTTTCAATCGATGAATTCGAGCTCG

Ioc3:
S3: GATATTTATGATGACAACGACAATGATTCTTCTTTTGATGATGGTAGAGTTAAAAGGCAGCGCACTCGTACGCTGCAGGTCGAC
S2: CGAAATGCAGCCTGTAAGGAGTTTCACAATCTTCACGTTCGTTGAAAGCTAGTTGTTTAATCGATGAATTCGAGCTCG

Bdf1:
S3: GCGCTGCACACAACGGGTTTTCCTCATCTTCAGATGACGATGTTAGCAGCGAAAGTGAAGAAGAGCGTACGCTGCAGGTCGAC
S2: CAAAATATCAAAATGGTGCTCATTCTTCTCAGTCGTTGAAGATAATCAAATTCAAAATTCAGTCAATCGATGAATTCGAGCTCG

Ethanol precipitation of DNA
The DNA from three PCR reactions per gene of interest was pooled together, and the total DNA was precipitated and concentrated into 10 µl of water. The entirety of each pooled PCR reaction was ethanol precipitated by adding 1/10 volume of 3 M sodium acetate (pH 5.2) and 2-2.5 volumes of 100% ethanol, then placing the samples on ice for 20 minutes. After the incubation, the samples were centrifuged for 10 minutes at maximum speed and the supernatant was discarded. One ml of ethanol was added, the samples were centrifuged once again and decanted, and the pellets were left to air dry for about 30 minutes or until visibly dry.

Yeast transformation of homologous PCR products and selection
Yeast cells were transformed using a standard heat shock lithium acetate protocol. Wild type yeast cells, BY4741 (genotype: MATa his3∆1 leu2∆0 met15∆0 ura3∆0), were grown to an OD600 = 1.0, 50 ml total. The cells were collected by centrifugation and then washed in sterile water. Following another centrifugation round, the water was discarded and the cells were re-suspended in 1 ml of competent cell buffer (100 mM lithium acetate, 10 mM Tris-HCl, pH 7.5, and 1 mM EDTA in water). The transformation reactions were as follows: 10 µl concentrated DNA from PCR, 10 µl



single stranded (carrier) DNA, 100 µl competent cell solution, and 700 µl PEG solution (100 mM lithium acetate, 10 mM Tris-HCl, pH 7.5, and 1 mM EDTA in 40% PEG 3350). Transformations were incubated at 30 °C for 30 min and then 80 µl DMSO was added. Cells were heat shocked at 42 °C for 10 min and then washed with sterile water. Following the wash, the cells were re-suspended in 100 µl water and plated on SC agar plates minus histidine. The plates were incubated at 30 °C for 2-3 days. A negative control was also performed in which no PCR DNA was added to the reaction. Individual cell colonies that grew in the absence of histidine were streaked on new agar plates to screen for potential false positives. Colonies that survived the second round of screening were grown in full medium (YPDA, glucose 2%) to an OD600 > 1.0, 25 ml total. Twelve ODs were collected and cell lysates were prepared from whole cells via TCA precipitation.

TCA precipitation of whole cells
Precipitations were performed using 1 ml of lysis buffer (1.2% beta-mercaptoethanol, 300 mM NaOH, 1 mM PMSF in water) per sample. All samples were incubated on ice for 10 minutes, 160 µl of 50% (w/v) TCA was added, and the samples were once again incubated on ice for 20 minutes. Samples were centrifuged at full speed for 10 minutes, the supernatant was discarded, and the cells were washed with acetone. The centrifugation, decantation, and acetone wash were repeated once more. The samples were air dried for 10 minutes or until visibly dry. The pellets were re-suspended in 100 µl of 2X SDS buffer and placed in a heated sonication bath for 10 minutes. The samples were then heated at 92 °C with shaking at 1500 rpm for 10 minutes and centrifuged at maximum speed for 5 minutes.

Electrophoresis and western blotting
In general, protein samples were analyzed on NuPAGE 8% SDS-PAGE gels and run in MOPS buffer (50 mM MOPS, 50 mM Tris, pH 7.5, 3% SDS, 1 mM EDTA) for 2-3 hours at 125 V. Western blots were performed in transfer buffer containing 10% methanol, using PVDF membrane, and transferring at 100 V for 1 hour. Following the transfer, the membrane was washed and then incubated in a blocking solution (3% milk and 0.05% Tween-20 in TBS). Primary antibody incubation (1° anti-myc, mouse, 1:1000 dilution in blocking solution) was performed overnight at 4 °C, and the next day the membrane was washed twice in TBS. The membrane was then incubated in the secondary antibody solution (2° anti-mouse IgG, HRP-conjugated, produced in goat, 1:1000 dilution in 5% milk, 0.05% Tween-20 in TBS) for 45 minutes, then washed twice in TBS, and finally washed in freshly made TBS with 0.1% Tween-20 for 10 minutes. The wash solution was removed with ten quick rinses with deionized water. Imaging of the membrane was performed using the Omega Lum™ G Imaging System set to Chemi for 15-45 minutes per image (dependent on the intensity of the signal). Amersham ECL Select substrate was used to visualize the proteins (∼400 µl substrate per membrane). The solution was evenly distributed on top of the membrane prior to imaging.
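The antibody dilutions quoted above follow simple ratio arithmetic, which the small helper below makes explicit; the 10 ml working volume in the example is a hypothetical choice, not a value from the protocol.

```python
# Dilution arithmetic for the antibody solutions described above.
# A "1:1000 dilution" means 1 part antibody stock per 1000 parts final volume.
# The 10 ml working volume is a hypothetical choice for illustration.

def dilution_volumes(ratio: int, final_ml: float) -> tuple[float, float]:
    """Return (stock_ml, diluent_ml) for a 1:ratio dilution."""
    stock = final_ml / ratio
    return stock, final_ml - stock

stock_ml, diluent_ml = dilution_volumes(1000, 10.0)
print(f"anti-myc: {stock_ml * 1000:.0f} ul stock + {diluent_ml:.2f} ml blocking solution")
# -> 10 ul antibody stock brought to 10 ml in blocking solution
```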



Antibodies used in this work
Primary: c-Myc Sc-40, produced in mouse, Santa Cruz Biotechnology (9E10)
Secondary: Anti-mouse IgG peroxidase conjugated, produced in goat, Sigma (A4416)

Results and Discussion

The initial phase of this work was aimed at tagging the genomic DNA of genes that encode chromatin-related proteins. This process required the design of PCR primers that amplified the tag of interest and selectable marker from a target plasmid. The primers also included flanking regions that were homologous to the gene of interest. Using S3 and S2 primers, PCR reactions were performed as described in the experimental section. Each of the PCR products was amplified from the same template, and therefore the size of the PCR product, regardless of the homology targets, was expected to be the same (∼1700 bp). Fig. 2 shows an example of the expected PCR products after they were separated by electrophoresis. Each PCR reaction for each S3/S2 primer pair yielded a product of about 1700 bp, as expected.

Figure 2. The PCR products following electrophoretic separation for both genes RSC9 and BDF1. Each PCR reaction yielded the same sized product for each primer pair that was used. Their primer sets differed only in the 5’-homology region to the genome.

Ethanol precipitation was then performed in order to concentrate the DNA products from the PCR. Each precipitation reaction was from three pooled PCR reactions for each gene of interest. The total DNA was precipitated and concentrated as described in the experimental section. Following ethanol precipitation, we transformed yeast cells with the concentrated DNA and plated them on selectable media lacking histidine. Histidine was omitted from the media because the tag insertion also added a HIS3M6X gene to the genome, which allowed cells with proper recombination of the PCR DNA to survive in the absence of histidine. Individual yeast cell colonies were selected and streaked on new agar plates to exclude any contamination between



the samples. The colonies that survived the second round of minus-histidine screening were grown in full media. The cell lysates were prepared from whole cells via TCA precipitation as described in the experimental section. TCA precipitation is a step that is often omitted, but we found that it produced much cleaner blots than preparations in which TCA precipitation was not performed. To analyze the samples, we assayed whole-cell protein lysates to detect the presence of the myc-tag via western blotting and antibody detection. Using an antibody that recognized the myc-tag, we visualized the blots with chemiluminescence (Fig. 3).

Figure 3. Western blot analysis of several different homologous recombination attempts for different gene targets. The clone number is an arbitrary number assigned to the sample for identification post analysis. Wt indicates wild type cells with no recombination attempts. Bdf1, Ioc3, Ino80 indicate the genes that were targeted.

The differences in size of the proteins of interest resulted in many failed attempts at obtaining an image with a strong signal. Larger proteins are quite difficult to transfer out of an SDS-PAGE gel and onto the blotting membrane. This may have been an issue with the Ino80 protein visualization, because its molecular size is nearly 200 kilodaltons (kDa). However, we did successfully label and visualize Bdf1, which is in the range of 120 kDa. Rsc9 (∼65 kDa) and Ioc3 (∼100 kDa) are in a manageable, transferrable size range, suggesting that technical issues may underlie the failure of these western blot analyses. Due to the nature of gel electrophoresis, there could have been many factors for why the proteins did not appear in our images. We suggest another reason for inconclusive results was the small amount of protein present in our samples. Yeast protein expression levels are well documented, and we can determine the average amount of each protein in the cell. For example, Bdf1 is expected to be present at ∼8000 molecules/cell, with a half-life of nearly 12 hr. In contrast, Rsc9 is only expected to have ∼2600 molecules/cell, with a half-life of 8.5 hr. These significant drops in cellular protein con-



centration may contribute to the difficulty in obtaining enough protein transfer to visualize with western blotting. Similarly, the Ino80 (∼6000 molecules/cell, 6.8 hr half-life) and Ioc3 (∼1800 molecules/cell, no half-life reported) proteins are significantly less abundant than the Bdf1 protein we were successful at labeling. The lack of a signal from the western blotting procedures does not indicate a failed homologous recombination. Keep in mind, these cells grow on minus-histidine plates and therefore have properly integrated the HIS gene. It is possible that the recombination was successful but not at the proper loci. This would yield cells that survive in the absence of histidine but do not carry a tagged protein. We are currently using a PCR-based method to assay the proper integration of the tag in the genome. By designing primers that anneal just inside the 3’-end of the target gene and just outside the 3’-end in the genomic region, we can determine whether the marker and tag were inserted at the proper loci. This work is currently being addressed. We are also in the process of troubleshooting the western blot procedure, as well as the steps leading up to it, in order to further refine our techniques. One way to potentially capture the myc signal may be to enhance the overall protein concentration in the sample.
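One way to make the abundance argument above concrete is to normalize the quoted copy numbers to Bdf1, the protein that was detected successfully. The sketch below does exactly that; the copy numbers are those quoted in the discussion, while equal lysate loading and equal transfer efficiency are simplifying assumptions, not measured values.

```python
# Relative expected western signal, normalized to Bdf1 (the protein that was
# successfully detected). Copy numbers are those quoted in the discussion;
# equal loading and transfer efficiency are simplifying assumptions.

molecules_per_cell = {"Bdf1": 8000, "Ino80": 6000, "Rsc9": 2600, "Ioc3": 1800}

reference = molecules_per_cell["Bdf1"]
for protein, copies in sorted(molecules_per_cell.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {copies} molecules/cell "
          f"(~{copies / reference:.0%} of the Bdf1 signal)")
# Ioc3, for instance, would give roughly a quarter of the Bdf1 signal
# under these assumptions, consistent with its weaker detection.
```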

Acknowledgement The author would like to thank the School of Science Research Scholars Program for financial support to perform this work.

References
[1] Luger, K., Mäder, A. W., Richmond, R. K., Sargent, D. F. & Richmond, T. J. Crystal structure of the nucleosome core particle at 2.8 Å resolution. Nature 389, 251–260 (1997).
[2] Knop, M. et al. Epitope tagging of yeast genes using a PCR-based strategy: more tags and improved practical routines. Yeast 15, 963–972 (1999).
[3] Wilkins, B. J. et al. A cascade of histone modifications induces chromatin condensation in mitosis. Science 343, 77–80 (2014).
[4] Liu, C. C. & Schultz, P. G. Adding new chemistries to the genetic code. Annu. Rev. Biochem. 79, 413–444 (2010).
[5] Chin, J. W. et al. An expanded eukaryotic genetic code. Science 301, 964–967 (2003).


A study in nutrigenomics: how do dietary variations influence epigenetic status?

Tiffany Rodriguez∗
Department of Chemistry and Biochemistry, Manhattan College

Abstract. Nutrigenomics is the study of the nutritional control of gene expression. We aim to understand how nutrition regulates gene expression and to identify epigenetic markers of diet-related diseases. Epigenetics relates gene expression levels to chromatin-associated deviations that do not involve DNA mutations. Many of the “switches” that promote epigenetic changes are controlled by posttranslational modifications (PTMs) on the chromatin-specific histone family of proteins. We determine how changes in chromatin PTMs are influenced by dietary alterations. Specifically, we want to delineate how epigenetic marks change with diet and how they correlate with altered chromatin structure. We utilize a technique that allows for the site-directed encoding of unnatural amino acids (UAAs) into protein, in vivo, in yeast. We express histone proteins that contain the UAA p-benzoylphenylalanine, which carries a photoactivatable crosslinking R-group that can create protein-protein crosslinks. Using this probe we scan histone proteins in living cells for protein interaction partners. Using calorie restriction (CR), we mimic dietary changes and monitor variations in crosslinking patterns. Positions that yield crosslinking variants provide the first clue to diet-influenced epigenetic markers. Histone PTMs are well documented, and we can associate the positions of crosslinking deviations with known localized modification sites. We conclude that CR drastically changes the nucleosomal interactome. When cells are stressed with limited glucose intake, crosslinking patterns are altered, verifying that protein interactions are changed as compared to wild-type cells. These changes suggest that protein regulation is altered at the nucleosomal level, most likely in response to altered PTM patterns.

Introduction

Nutrigenomics is an emerging field that reveals the role of nutrition in gene expression, bringing together the sciences of bioinformatics, nutrition, molecular biology, genomics, epidemiology, and molecular medicine [1]. This field of research predominantly focuses on the effect of nutrients on the genome, proteome, and metabolome [2]. By studying these effects we are able to comprehend the relationship between specific nutrients and nutrient regimes on human health. The overall approach of nutrigenomics consists of demonstrating which genes are switched on/off at any given moment, understanding how gene/protein networks cooperate, and determining the influence of nutrients on altered protein expression levels. Specifically, this field aims to understand how nutrition alters the epigenetic control of gene expression [1, 2]. Nutrigenomics offers a revolutionary way of utilizing food as a pharmaceutical, with the potential to reverse disease and hinder the rigors of ageing [2]. This involves finding epigenetic markers of the early phases of diet-related diseases. Epigenetics relates altered gene expression levels to chromatin-associated deviations that do not include changes in the DNA sequence. Epigenetic changes are controlled by PTMs, either on DNA or on the chromatin-specific histone family of proteins. For instance, histone acetylation, methylation, and/or phosphorylation can adjust chromatin structure to make genes more, or less, accessible.

∗ Research mentored by Bryan Wilkins, Ph.D.



This work focused on histone protein modifications in an attempt to understand how chemical alterations to the proteins mediate chromatin rearrangement in response to dietary changes. For instance, it is known that calorie restriction (CR) extends the lifespan and health span of multiple model organisms [3]. In addition, CR has been demonstrated to delay or even reduce various age-related diseases such as cancer and diabetes [4]. However, the mechanism by which CR extends lifespan is still uncertain. It is of interest to understand how CR promotes longevity in mammals at the chromatin level. We will investigate how protein interactions, at the histone level, change in response to dietary restriction. By studying these factors, we will be able to infer structural and mechanistic details about how diet influences the control of chromatin structure and, ultimately, gene expression. This work highlighted the nucleosomal histone octamer, because it can be used to elucidate nutrigenomic details. Histones are the core of the nucleosome, and the nucleosome is the basic repeating unit of the chromatin fiber. Most eukaryotic species contain five different histones: H1, H2A, H2B, H3, and H4. These five histones are small basic proteins that contain numerous lysine and arginine residues [5]. The lysine and arginine residues carry positive charges, which allow the proteins to neutralize the negatively charged sugar-phosphate backbone of the DNA [5]. Histones H2A, H2B, H3, and H4 form an octameric structure around which approximately 147 base pairs of DNA tightly associate in a left-handed solenoidal twist. Chromatin dynamics are controlled at the nucleosomal level, and changes in gene expression can be directly tied to changes in chromatin structure. The DNA that is associated with the nucleosome is inherently inactive until it is moved, or slid, away from the histone octameric unit. These alterations in structure are regulated by histone PTMs and protein-histone interactions. The nucleosomal interactome is a dynamic system that is dependent upon cellular needs. By studying the protein interactions that occur at the chromatin level, we can determine which proteins play an essential role in chromatin regulation in response to changes in dietary intake. Our approach to studying nucleosomal interactions relies on a method that allows for the expansion of the genetic code in yeast. It essentially expands the ability of an organism to utilize more than the 20 naturally occurring amino acids as building blocks for the assembly of protein molecules [5]. The addition of new amino acids to the genetic code increases the range of functions available to proteins, and this provides new methods for probing protein structure and function, both inside and outside the cell [6]. This method utilizes synthetic biology, an emerging field of science that combines biology and engineering to create, or alter, biological functions that do not exist in nature [7]. Our research applies a technique that allows for the site-directed encoding of unnatural amino acids (UAAs) into protein, in vivo, in Saccharomyces cerevisiae (yeast) [8]. Utilizing this technique, we are able to express histone proteins that contain p-benzoylphenylalanine (pBpa). pBpa contains a benzophenone group that is a photo-active crosslinker. Therefore, when the histone



protein is irradiated with 365 nm ultraviolet light, the benzophenone group produces a free radical that readily recombines to form a covalent bond with a protein interacting partner within 4 angstroms (Fig. 1).

Figure 1. Mechanistic scheme for the UV-activation of p-benzoylphenylalanine (pBpa) and chemical crosslinking.

A protein-protein crosslink can be utilized to scan histone proteins in living cells. Encoding pBpa at different positions throughout the histone protein allows us to analyze the protein-protein interactions that occur on the surface of the nucleosome. In addition, by working under in vivo conditions, a better understanding of protein-protein interactions under true physiological conditions can be achieved. The genetic incorporation of the UAA relies on an evolved translational system that utilizes an amber (UAG) stop codon to introduce new chemistries into proteins [6]. Therefore, the expanded genetic code of Saccharomyces cerevisiae requires both an evolved suppressor tRNA and an evolved aminoacyl-tRNA synthetase (aaRS) that are engineered to recognize, and translate, the UAA of interest [6, 8]. The aaRS charges its cognate tRNA with the UAA. Since the evolved tRNA is an amber suppressor, an amber codon can be introduced into a gene of interest, where it directs the incorporation of the UAA. This results in full-length protein expressed with a site-specifically placed UAA. We install the crosslinking amino acid into histone proteins that contain short fusion tags and allow the modified histones to integrate into the native landscape of the chromatin in the living nucleus. Once the cells have established pBpa-histone containing nucleosomes, we expose the cells to UV light and capture histone-protein interactions. Following irradiation, the histone-protein interactions are covalently “trapped” and the contacts can be assessed via biochemical techniques, such as western blotting and antibody detection of the genetically installed fusion tag (Fig. 2).
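To make the suppression logic concrete, here is a toy sketch (not part of the published methods) in which translation reads the amber codon TAG as pBpa rather than as a stop signal, mirroring what the evolved tRNA/aaRS pair does in vivo; the codon table is truncated to the few codons the example needs.

```python
# Toy model of amber suppression: TAG is read as pBpa instead of "stop".
# The codon table is truncated for brevity; this illustrates the logic only
# and is not part of the published methods.

CODON_TABLE = {"ATG": "M", "GCT": "A", "TCT": "S", "TAC": "Y",
               "TAA": "*", "TGA": "*"}  # "*" = stop

def translate(dna: str, suppress_amber: bool) -> list:
    protein = []
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        if codon == "TAG":
            if suppress_amber:
                protein.append("pBpa")   # evolved tRNA/aaRS inserts the UAA
                continue
            break                        # ordinary cells terminate here
        aa = CODON_TABLE[codon]
        if aa == "*":
            break                        # normal stop codon ends translation
        protein.append(aa)
    return protein

gene = "ATGGCTTAGTCTTACTAA"              # amber codon at the third position
print(translate(gene, suppress_amber=False))  # ['M', 'A'] - truncated protein
print(translate(gene, suppress_amber=True))   # ['M', 'A', 'pBpa', 'S', 'Y']
```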



Figure 2. General scheme for the incorporation of pBpa into the chromatin landscape. We used a two-plasmid system where one plasmid encodes the gene for the histone protein of interest. The histone gene contains a genetically installed sequence that translates to a C-terminal protein fusion tag (HA). The TAG refers to the codon mutation that accepts the pBpa amino acid during translation. The second plasmid contains the genes for the suppressor tRNA and the aaRS specific for pBpa.

This study was performed in the model organism Saccharomyces cerevisiae for several reasons: 1) yeast genetics are easily and efficiently manipulated; 2) yeast is a eukaryotic system that allows for studies on histone proteins; 3) cellular networking and signaling pathways are highly conserved from yeast to humans; and 4) most importantly, studies have shown that yeast can be used as a model for nutritional studies and dietary restrictions [9].

Materials and Methods

Yeast strains
All experiments were performed in the yeast strain BY4741 (genotype: MATa his3∆1 leu2∆0 met15∆0 ura3∆0). The mutated histone gene plasmids and the plasmid for the pBpa tRNA/aaRS were previously reported [10]. The histone protein genes of interest were tagged with a DNA sequence that coded for a C-terminal HA-tag for antibody detection. Each pair of plasmids was doubly transformed into yeast via standard chemical lithium acetate techniques, so that cells contained one plasmid with the mutant histone gene and one plasmid with the suppression elements.

Yeast culturing and crosslinking
All yeast cultures were grown in the presence of glucose and 1 mM pBpa (Chem-Impex #05110). Wild type (Wt) cells were supplemented with 2% glucose (final concentration), and calorie restricted cells were grown in 1.5% and 1.0% glucose. Cells were allowed to grow to saturation, allowing pBpa to be translated into the histone protein of interest and, in turn, incorporated into the nucleosome. This process required that the cells double at least 8 times. Whole cells were collected (12 ODs total), resuspended in 100 µL of water, and irradiated with 365 nm light at approximately 10 cm distance for 30-45 min at 4 °C. Whole



cell lysates were then prepared by boiling the samples in 2x SDS-loading buffer. Proteins were separated by electrophoresis (SDS-PAGE) in standard Towbin buffer (25 mM Tris, pH 8.3, 192 mM glycine, 0.1% SDS) on 15% polyacrylamide gels. Proteins were separated to the desired resolution dependent upon the assay. Proteins were then transferred to a PVDF membrane using standard western blotting techniques. The transfer was performed in Towbin buffer containing 20% methanol (v/v) and run at a constant 100 V for 1 hr. Following the transfer, the membrane was blocked for one hour and then incubated with the primary antibody overnight at 4 °C. The membrane was then washed two times in TBS buffer (150 mM NaCl, 50 mM Tris-HCl, pH 7.5) and incubated in the secondary antibody for one hour. Following the addition of the secondary antibody, the membrane was washed twice in TBS and then once, for 10 min, in TBST (TBS plus 0.1% Tween-20 detergent v/v). Finally, the membrane was subjected to 10 quick washes with dH2O. To the membrane, 500 µL of peroxide/luminol solution ECL Select (Amersham) was added, covering the membrane. Images were obtained digitally using the “Chemi” setting on an Omega Lum™ G Imaging System.

Antibody solutions used
Blocking solution: 3% BSA in TBST.
Primary antibody solution: 3% BSA in TBST with anti-HA (Abcam ab9110), rabbit, at a 1:10000 dilution.
Secondary antibody solution: 5% milk in TBST with anti-rabbit peroxidase-conjugated (Sigma A0545) at a 1:10000 dilution.
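Collecting “12 ODs” of cells, as in the culturing protocol above, means harvesting the culture volume whose OD600 reading multiplied by volume equals 12 OD600 units. The sketch below does this arithmetic; the example OD600 reading is hypothetical.

```python
# Harvest-volume arithmetic: "12 ODs" = OD600 units = OD600 reading x volume (ml).
# The example OD600 reading of 1.5 is hypothetical.

def harvest_volume_ml(od_units_needed: float, od600: float) -> float:
    """Culture volume (ml) to pellet in order to collect the requested OD600 units."""
    return od_units_needed / od600

print(harvest_volume_ml(12.0, od600=1.5))  # -> 8.0 ml of an OD600 = 1.5 culture
```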

Results

We were interested in how the chromatin structure and the chromosomal dynamics in living cells change in response to dietary changes. Therefore, we decided to first test how protein-protein interactions on the nucleosome change in response to diet. Yeast cells grown in 2% glucose are considered Wt. We compared crosslinks in cells grown under Wt conditions (2% glucose) versus cells grown in 1.5% and 1% glucose. The reduction in glucose percentage was considered dieting; therefore, the cells were under CR. We first tested this on pBpa-containing histone H2A at position 61 (H2A A61pBpa). Assays of CR influence on histone-protein interactions from histone H2A position 61 (Fig. 3) indicated that there was a significant decrease in crosslinks as the percentage of glucose was changed from 2% to 1%. Nearly all the crosslinks vanished at 1% glucose. However, in the presence of 1.5% glucose, instead of the crosslinks completely disappearing as observed in 1% glucose, new crosslinks appeared at different positions as compared to the Wt (purple asterisks). The same general pattern was observed for changes of 2% to 1% glucose when other histone pBpa positions were assayed (Fig. 4). Crosslinking was probed from several altered sites including histone H2B and histone H3, as indicated in Fig. 4. Interestingly, only when crosslinking was



Figure 3. Western blot images of the changes of protein-protein interactions of histone H2A A61, in response to calorie restriction. Each blot was an independent biological replicate and imaged using anti-HA antibodies that recognized the small HA epitope that was genetically installed to the histone gene. Minus UV samples were controls to verify the crosslinked proteins were UV-dependent. The cells were all grown in identical conditions but with altered glucose intake, as indicated in each of the legends. pBpa was present in each sample. The red asterisk marks a crosslink previously identified and designated as a histone H2A-H4 histone crosslink [10]. The purple asterisks represent protein bands of interest that appear in response to lowered calorie intake for the 1.5% glucose samples.

assayed from the H2A A61pBpa position did we observe the obvious appearance of new protein interactions when conditions were shifted to 1.5% glucose (purple asterisks). When crosslinking was performed from histone H2A position Y58, there also appeared to be a slight increase in protein interactions in the same range as those observed for position 61; however, they were not as dominant. Additionally, during the H2A Y58 assays, the dominant crosslinks appeared to remain unchanged under altered dietary conditions. Crosslinks from histone H2A position A20 showed a decrease in interaction efficiency as the levels of glucose were minimized. This general trend was also observed when probing for interactions from histone H3 at position A29 (Fig. 4). Alternatively, crosslinking from position S17 on histone H2A resulted in the opposite trend: as the levels of glucose were reduced, the efficiency of crosslinks appeared to increase. In general, we observed that as we changed the nutrient intake, we induced changes in crosslinking efficiencies.

Discussion

Post-translational modifications (PTMs) of histones are a vital step in the epigenetic regulation of gene expression [11]. The N-terminal tails of histones are the most accessible regions of the histones, because they protrude from the nucleosome and possess no specific structure. Therefore, the N-terminal tails are subjected to various modifications such as acetylation, methylation, phosphorylation, and ubiquitination. These PTMs are added through enzymatic action, and the enzymes that install them are considered the “writers” of the histone modifications [11]. In general, PTMs are



Figure 4. Western blot images of the changes of protein-protein interactions in response to calorie restriction for the following pBpa positions: H2A Y58, H2B K30, H2A S17, H2A A20 and H3 A29. Each blot was imaged using anti-HA antibodies that recognized the small HA epitope that was genetically installed to the histone gene. Minus UV samples were controls to verify the crosslinked proteins were indeed UV dependent. The cells were all grown in identical conditions but with altered glucose intake, as indicated in each of the legends. pBpa was present in each sample.

believed to function in a combinatorial pattern known as the histone code. This process alters the expression states of associated loci in multiple ways, thus enabling gene regulation [11]. There are several intriguing examples of how nutritional state affects chromatin structure in Saccharomyces cerevisiae and other model organisms [12]. For instance, Robyr et al. demonstrated that altering the type and concentration of sugar in the medium led to changes in the histone deacetylation states of known carbon-catabolic enzymes and related metabolic functions [12]. The accessibility of chromatin for transcription is affected by the modification state of histones, specifically acetylation and methylation. Our goal was to relate the PTM-influenced changes that are caused by dietary intake and understand how those PTMs affect chromatin structure and overall gene expression levels. Utilizing calorie restriction, we mimicked dietary changes and monitored variations in crosslinking patterns. Histone positions that demonstrate crosslinking variants provide the first clue to diet-influenced epigenetic markers. Therefore, the positions of crosslinking variations can be associated with histone PTMs. CR was first tested on H2A A61pBpa because this residue sits in the acidic patch domain of



histone H2A. This domain is a well-known docking site for nucleosomal interacting proteins due to its increased charge density and ability to favor ionically stabilized protein-protein interactions. This domain is a surface-exposed region of the nucleosomal structure and has previously been documented to play an important role in promoting chromatin condensation during mitosis by binding the histone H4 tail of a neighboring nucleosome during nucleosomal compaction [10]. We wanted to determine how interactions at this position would change in response to dietary signals, giving a clue as to how the chromatin structure might be altered based on a nucleosomal interactome profile. We hypothesized that if crosslinks increased across a broad range of interactions, this might imply a more opened and accessible chromatin conformation, allowing an increased substrate surface for interacting proteins. Using a previously described H2A A61-H4 histone interaction, we were able to assess open versus closed chromatin structures [10]. This crosslink is represented by a red asterisk in Fig. 3 and is a marker of the closed chromatin state. When this crosslinked signal is increased due to cellular changes, it indicates that a more compacted chromatin structure has formed, because as chromosomes compact, the histone H4 tail binds to the histone H2A acidic patch. When it is less prevalent, it indicates a less compacted chromatin state (a more open and actively transcribing state). Using this marker we were able to assess how the chromatin structure may have changed during CR. We observed that during CR the H4-H2A interaction disappears, suggesting that CR causes the cellular response to shift to a more actively transcribing chromosome. This is highlighted by the fact that the release of the A61 position from the H4 tail frees the site to bind other proteins that could play essential roles during the metabolic response of CR. We find that several other proteins bind to the acidic patch during the response, suggesting they are essential proteins in the pathway. The H4-H2A interaction is regulated by acetylation events on the H4 tail [10]. We conclude that the CR response induces a swift acetylation event that quickly opens the chromosome for easy access to DNA by proteins for metabolic gene regulation. From the data obtained with H2A A61pBpa, we can conclude that the acetylation events are unaffected in each of the assays we performed, because the only difference between assays was the position of the pBpa on the histone. This change would have no effect on the acetylation signaling that occurs when cells undergo CR. Interestingly, assays from position H2A S17 resulted in crosslinks that increased at 1% glucose, rather than following the general trend of crosslinks decreasing when glucose decreases. This implies that acetylation events may play a role in the increased signals of the proteins present. It also indicates that each of the proteins that interact with histone H2A at position S17 is essential for the chromosomal dynamics of the Wt, as well as for the metabolic shift that occurs during CR. In each of the other assays (H2A A20pBpa, H2B K30pBpa, H2A Y58pBpa and H3 A29pBpa, Fig. 4) we observed the same general trend. As the glucose levels were restricted, the crosslinking patterns shifted so that the efficiency of the crosslinks was reduced significantly at 1% glucose.
We conclude that at these positions the Wt stabilizing proteins are not part of the CR pathway and that a loss of these proteins causes the altered structure needed for the activated chromosomal state. It



is likely that these are regulated by acetylation events across the histone landscape. Acetylation can be activated by the inhibition of type I and II histone deacetylases (HDACs), which in turn produces an active, open chromatin state, making genes more accessible for expression. It would be interesting to determine whether we obtain the same shift in crosslinking patterns in cells in which HDACs are inhibited. While those experiments would yield data for a global change, they would not pinpoint the specific histone acetylation responsible for most of the changes. We are currently working on assessing acetylation levels through antibody detection to better understand whether our hypothesis is correct. We will also attempt to use mutational analysis at known acetylation sites to determine whether the loss of the acetylation changes our pattern shifts. We will create genomic mutations at the sites of known potential PTMs to determine whether the mutants create the same variation in crosslinking. In this way, we will correlate specific PTMs with nutritional changes and infer their roles in genetic control. This will inform us of the epigenetic histone marks essential for these pathways. Future research will consist of testing a library of histone positions under CR to monitor variation in crosslinking patterns. In addition, we will utilize antibody detection methods to measure methylation and phosphorylation levels at histone positions that demonstrated crosslink changes in response to CR. Correlating not only acetylation, but all the PTMs, is an essential key to fully understanding how these signals relay to each other along the pathway. Amino acid restriction is also linked to CR and has been shown to aid in the increased lifespan of organisms. Therefore, we will utilize a drug that induces amino acid restriction to observe how it affects crosslink patterns. In those experiments we will compare and contrast the epigenetic patterns that occur during alternative metabolic pathways.

Acknowledgments The author wishes to take this opportunity to express her appreciation to the School of Science Research Scholars Program. This work was supported by the Jasper Summer Research Grant from Manhattan College under the gracious direction of Dr. Rani Roy. The author also thanks the Department of Chemistry and Biochemistry at Manhattan College for providing the materials to conduct this research. Finally, this project would not have been possible without the guidance and dedication of her mentor, Dr. Bryan Wilkins.

References
[1] Neeha, V. S.; Kinth, P. Journal of Food Science and Technology 2012, 50 (3), 415–428.
[2] Choi, S.-W.; Friso, S. Advances in Nutrition: An International Review Journal 2010, 1 (1), 8–16.
[3] Lin, S.-J. Calorie restriction extends yeast life span by lowering the level of NADH. Genes & Development 18, 12–16 (2004).



[4] Mei, S.-C., and Brenner, C. Calorie Restriction-Mediated Replicative Lifespan Extension in Yeast Is Non-Cell Autonomous. PLOS Biology PLoS Biol 13, (2015). [5] Moran, L. A. Principles of biochemistry; Pearson: Boston, 2012. [6] Young, T. S., and Schultz, P. G. Beyond the Canonical 20 Amino Acids: Expanding the Genetic Lexicon. Journal of Biological Chemistry 285, 11039–11044 (2010). [7] Serrano, L. Mol Syst Biol Molecular Systems Biology 2007, 3 (158). [8] Chin, J. W. et al. An expanded eukaryotic genetic code. Science 301, 964–967 (2003). [9] Santos, J., Leit˜ao-Correia, F., Sousa, M. J. & Le˜ao, C. Dietary Restriction and Nutrient Balance in Aging. Oxidative Medicine and Cellular Longevity 2016, 1–10 (2016). [10] Wilkins, B. J. et al. A cascade of histone modifications induces chromatin condensation in mitosis. Science 343, 77–80 (2014). [11] Histome: The Histone Infobase http://www.actrec.gov.in/histome/ptm main.php (accessed Aug 20, 2016). [12] Garfinkel, M. D.; Ruden, D. M. Nutrition 2004, 20 (1), 56–62.


Exploring chromatin dynamics within the DNA damage response pathway in living cells

Bright Shi∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Genetic information is stored in the form of chromatin, consisting of DNA, histones and other essential proteins. Histone proteins mediate all aspects of chromatin function and are regulated by sets of posttranslational modifications (PTMs). Modification patterns dictate differential pathways dependent upon cellular cues. This dynamic behavior is at the heart of all chromatin related processes, such as replication, transcription and repair. Unfortunately, DNA is inherently susceptible to damage. There are numerous forms of damaging factors, and several DNA damage pathways collectively protect the genome from life-threatening mutations that have direct links to both cancer and aging. Therefore, it is crucial that methods are developed that allow us to study chromatin processes to better understand DNA damage pathways. We are using a synthetic biology approach that can trap histone-protein interactions in living cells, using unnatural amino acids. Comparing histone-protein interactions that are altered due to DNA damage will help us resolve the mechanisms that reshape chromatin structure under damaging stress. Many factors recognize and repair different types of damage, but the orchestration of their function is still largely unknown. DNA damage signaling promotes broad changes in histone PTMs, and how the modifications control interactions at the nucleosomal interface during the response pathway is elusive. We can monitor histone PTMs across the cell cycle and correlate their influence on histone-protein interactions during damage pathways. We aim to expose nucleosomal repair protein-protein interactions and the mechanistic details of repair dynamics in yeast.

Introduction

Chromatin is a nucleoprotein macromolecule that stores genetic information in the form of DNA, histones and other essential proteins. Chromatin functions in both DNA packaging and DNA processing. Chromatin’s most basic repeating unit is called the nucleosome. It consists of DNA wrapped around a histone octameric complex made of two histone H2A-H2B dimers and one histone H3-H4 tetramer [1]. Each of the histones possesses an N-terminal tail that protrudes out of the nucleosome and interfaces with the solvent front. The tails can be highly modified by posttranslational modifications (PTMs) that alter the chemical properties of the protein. Through these modifications histone proteins mediate all chromatin functions. Different PTMs result in different outcomes. For example, when acetylation marks are upregulated the chromatin becomes more active and less compact. Histone interacting proteins can then associate with the chromatin fiber to help modulate the cell’s functioning. By enhancing our understanding of chromatin behavior in living cells, one may lay the foundation for new strategies to fight against cancers and other chromatin related diseases. Studying chromatin dynamics in vivo is very difficult and requires a technique that can reveal the interactions that occur between chromatin and chromatin-associated proteins. This work utilizes a synthetic biological approach that uses unnatural amino acids to capture histone-protein

∗ Research mentored by Bryan Wilkins, Ph.D.



interactions in living cells to identify and monitor how PTMs influence chromatin dynamics during DNA damage repair pathways. DNA is inherently susceptible to damage, ranging from internal cellular stress to external environmental factors. So far, many factors are known that recognize and repair DNA damage, but the precise regulation of each step is still unknown. Several DNA damage pathways collectively protect the genome to counteract potentially life-threatening mutations [2]. DNA damage signaling promotes changes in histone PTMs as well as the recruitment of nucleosomal remodeling complexes. How exactly these modifications control nucleosomal interface interactions is unknown to us. This work aims to expose nucleosomal repair protein-protein interactions and the mechanistic details of repair dynamics in the yeast Saccharomyces cerevisiae. This model organism is ideal for these studies because much of the repair pathway has been conserved from yeast to humans [3]. We aimed to identify changes in protein-histone interactions in response to DNA damage caused by hydrogen peroxide (H2O2) and methyl methanesulfonate (MMS). MMS is an alkylating drug that specifically methylates guanine and adenine DNA bases, causing mutations that break DNA double strands and lead to replication problems. H2O2 causes oxidative stress that can lead to mutations in DNA by creating an abasic site (loss of the base from the nucleotide). By comparing changes in protein-histone interactions, we can further reveal the mechanisms that shape chromatin structure. We can monitor how changes in interactions correlate with specific PTMs that occur during the change. To study the changes in protein-nucleosome interactions we used an amino acid suppression technique [4]. Synthetic amino acids that do not exist in nature are used to produce “unnatural” proteins that can be analyzed for protein function. One specific unnatural amino acid that suits our purpose is p-benzoylphenylalanine (pBpa) [5]. It possesses the ability to create covalent bonds, or chemical “bridges,” between two interacting proteins when activated with UV-light (365 nm). When pBpa is in its excited state it forms a diradical that can easily abstract a hydrogen from a neighboring protein that is in direct contact with the pBpa-containing protein (Fig. 1). This creates two radicals that readily recombine to form a covalent bond between the two proteins. This bridging allows for the identification of the crosslinked protein by mass spectral or immunodetection methods. The techniques being used have been previously developed for the study of chromatin [6]. We take advantage of this technique to insert pBpa into histone proteins and monitor crosslinks from the nucleosome in the living nucleus. In order to form the histone crosslinks within living yeast cells we use a dual plasmid system for the suppression of a genetically installed stop codon with pBpa. One plasmid contains the histone gene with a TAG (stop codon) inserted at an amino acid position of interest and the other contains genes for an amber suppressor tRNA and an evolved aminoacyl-tRNA synthetase (aaRS) specific for the unnatural amino acid, pBpa [7]. The tRNA/aaRS pair has been engineered to work with the endogenous translational machinery but is orthogonal to the host system [4]. This means



Figure 1. pBpa structure and benzophenone chemistry. Under UV-light (∼350 nm) pBpa is excited, but no energy is released at equilibrium to damage the cell. In its excited form, pBpa can easily form a covalent bond with another protein through radical recombination, following hydrogen abstraction.

that the tRNA/aaRS pair is specific for only the pBpa and not any of the naturally occurring amino acids. The TAG codon is a natural ribosomal stop signal; however, this system “tricks” the cellular translational machinery into reading through the stop as a sense codon (suppression of the stop codon). We site-specifically mutate codons in the histone gene to TAG in order to genetically place pBpa at desired positions in the fully translated protein. The mutant pBpa-histones are expressed in vivo and are allowed to incorporate into the native chromatin landscape. Once the pBpa is distributed through the chromatin fiber the cells are exposed to UV light and the protein is activated to crosslink to binding partners (Fig. 2). Following histone crosslinking, protein was extracted out of the cells through TCA precipitation. The crosslinked products were analyzed by gel electrophoresis and western blotting to identify pBpa

Figure 2. General scheme for pBpa incorporation into the landscape of chromatin. Using a two plasmid system the pBpa can be genetically inserted site-specifically into the desired histone at the installed stop codon.



positions in the histone proteins that crosslinked to other proteins. The histone protein of interest was identified via immunoblotting against a small fusion peptide tag that was introduced into the gene coding sequence on the histone plasmid (the human influenza hemagglutinin tag, HA-tag, was used). Finally, using this approach we induced DNA damage to understand how the crosslinking would be altered during the response. Alterations in crosslinking patterns indicate that, at the position being probed, there is a histone-protein interaction change in response to the DNA damage pathway. This leads to the ability to understand how the change might correlate with PTMs on the histone that could possibly regulate the protein binding.
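The codon-level step of this scheme is straightforward to express in code. The following is a minimal sketch, assuming a plain coding-sequence string and a hypothetical mini-gene; it is not the lab's software, only an illustration of swapping one codon for the TAG stop signal that the suppressor tRNA/aaRS pair then decodes as pBpa.

```python
# Minimal sketch of amber-codon mutagenesis in silico; the mini-gene and the
# helper name are hypothetical, used only to illustrate the TAG substitution.

def insert_amber_codon(cds: str, residue: int) -> str:
    """Replace the codon encoding the given 1-based residue with TAG."""
    if len(cds) % 3 != 0:
        raise ValueError("coding sequence length must be a multiple of 3")
    start = (residue - 1) * 3
    if start + 3 > len(cds):
        raise ValueError("residue index falls outside the coding sequence")
    return cds[:start] + "TAG" + cds[start + 3:]

# Mutate residue 4 of a toy 7-codon gene, as one would mutate A61 in full-length H2A.
mini_gene = "ATGGCTGGTGGTAAAGCTGGT"
mutant = insert_amber_codon(mini_gene, 4)
assert mutant == "ATGGCTGGTTAGAAAGCTGGT"  # only the fourth codon changed
```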

Methods

Yeast culture growth and DNA damage induction
The yeast cells used in this study were the BY4741 strain (genotype: MATa his3∆1 leu2∆0 met15∆0 ura3∆0). The plasmids containing the histone and the system synthetase and tRNA were previously reported, and were used to transform the cells [6]. The histone gene was genetically modified with a fusion tag coding for the HA peptide tag. The histone plasmid confers a uracil selectable marker and the synthetase plasmid confers a leucine selectable marker. In order to maintain all plasmids during the experiment, all cells were cultured in synthetic dropout media minus uracil and leucine, supplemented with a final concentration of 2% glucose (w/v) and 1 mM pBpa. DNA damaging agents were added as described below. A single colony was cultured in 25 mL of media in a 100 mL flask, and grown overnight at 30 ◦C. The next morning the optical density of the cells was measured by spectroscopy at an absorption of 600 nm (optical density 600, OD600). Cells were grown in sets so that there were flasks for the control as well as those that received the DNA damaging drug. The OD600 was measured for each culture and the cultures were normalized so that they had equal densities. This was used to set the number of cells in each assay equal to each other. The control received no drug. DNA damaging agent was added to produce 5 mM H2O2, 10 mM H2O2, 0.03% MMS (v/v), 0.05% MMS (v/v), or 0.1% MMS (v/v). When the drug was added, the cells were grown for an additional hour at 30 ◦C, with shaking. The cells were then split into equal portions equivalent to 12 ODs (1 OD is equivalent to a 1 mL culture at A600 = 1.0).
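As a worked illustration of this normalization arithmetic, the short sketch below computes how much of each culture corresponds to 12 ODs under the 1 OD = 1 mL at A600 = 1.0 convention. The OD600 readings are hypothetical values, not measurements from the study.

```python
# Hypothetical OD600 readings; since 1 OD is defined as 1 mL of culture at
# A600 = 1.0, a culture at A600 = x contains x ODs per mL.
cultures = {"control": 1.35, "5 mM H2O2": 1.10, "0.03% MMS": 0.95}

target_ods = 12.0
for name, od600 in cultures.items():
    volume_ml = target_ods / od600  # mL of culture that contains 12 ODs of cells
    print(f"{name}: harvest {volume_ml:.1f} mL for {target_ods:.0f} ODs")
```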
Crosslinking and protein isolation
12 ODs of cells were collected and then resuspended in 100 µL of water. The cells were then placed on an aluminum plate and exposed to 365 nm light for a total of 40 min in an ice-cold environment. The cells were mixed every 15 min. Following crosslinking the cells were collected and then subjected to cell lysis to isolate the cellular proteins. Whole cell lysates were prepared via TCA precipitation. To each sample 1 mL of lysis buffer (1.2% beta-mercaptoethanol, 300 mM NaOH, 1 mM PMSF in water) was added. The sample was incubated on ice for 10 min, and then 160 µL of 50% trichloroacetic acid (TCA) was added and incubated on ice for 20 more min. The lysates were centrifuged at 4 ◦C at 15000 r.p.m. for 10 min, the supernatant was discarded and the cells were washed with acetone. The wash was repeated one more time and the pellets were then allowed to air dry for 30 min. Each culture was resuspended in 75 µL of SDS-PAGE loading buffer and boiled at 95 ◦C for 20 min. The samples were stored at -20 ◦C until further analysis.

Protein electrophoresis and western blotting
Proteins were separated via electrophoresis on homemade 15% SDS-PAGE gels in standard tris-glycine buffer (25 mM Tris, 192 mM glycine, 0.1% SDS). Wells were loaded with 4 µL of marker solution (pre-stained Rec protein ladder) or 10 µL of yeast protein sample for immunodetection with anti-HA antibodies. For samples that were analyzed with antibodies for H4K16 acetylation (ac) and H3S10 phosphorylation (ph), 2 µL and 5 µL of sample were loaded, respectively. Electrophoresis was performed at 125 V until the proteins had resolved the appropriate distance. Proteins in the gels were transferred to a PVDF membrane via western blotting in Towbin buffer with 20% ethanol. The transfer was performed at 100 V for 75 min. The membrane was washed with water 3 times and then blocked in 3% BSA-TBS solution (for the HA and H3S10ph antibodies) for 20-30 min on a rocker platform. For H4K16ac, a 3% milk-TBS blocking solution was used. The blocking solution was removed and then the primary antibody was added and incubated on the membrane overnight at 4 ◦C. The primary solutions were as follows: anti-HA at a 1:10000 dilution in 3% BSA-TBS; anti-H4K16ac at a 1:3000 dilution in 3% milk-TBS; and anti-H3S10ph at a 1:9000 dilution in 3% BSA-TBS. All primary antibodies were raised in rabbit. The next day, the membrane was washed with TBS 3 times, each time for 2-3 min on a rocker platform. The secondary antibody was then added for 45 min. The secondary antibodies were conjugated to HRP and the solutions were as follows: anti-rabbit at 1:10000 in 5% milk for each of the primary antibodies used. The membrane was then washed with TBS 3 times, each time for 2-3 min on a rocker platform. Then the membrane was washed with 20 mL of 0.1% TBST (0.1% tween in TBS) for 10 min. Finally the membrane was given 10 quick rinses of water. The following antibodies were used: anti-HA (Abcam, ab9110), rabbit; anti-H4K16ac (Active Motif, 39167), rabbit; anti-H3S10ph (Cell Signaling, 9701), rabbit; anti-rabbit peroxidase conjugate (Sigma, A0545), goat.

Western blot imaging
To the membrane, 300 µL of peroxide/luminol solution ECL Select (Amersham) was added, covering the surface of the membrane. Images were obtained digitally using the “Chemi” setting on
an Omega Lum™ G Imaging System for ∼30 min for the HA antibody and 5-15 min for the H4K16ac and H3S10ph antibodies.

Results

In this report, two sites have been assayed thus far, histone H2A A61 and histone H2A Y58. These histones had the indicated amino acid replaced by pBpa using the described suppression system and were assayed for their crosslinking efficiencies and patterns during the DNA damage pathways. These were visualized by western blotting and immuno-detection (Figs. 3 and 4). We first analyzed crosslinks from histone H2A that contained pBpa at position 61 (H2A A61pBpa). Crosslinked samples were assayed for histone-protein interactions by detecting bulk histone via antibody detection against the HA-fusion tag on the protein. Row D of Fig. 3 represents the concentrated bulk of the histone molecules that did not have crosslinking partners. Rows C through A represent signals from crosslinks that have occurred to the histone protein of interest. When the histone is covalently bound to a crosslinking partner, it migrates slower during electrophoresis due to the

Figure 3. Western blot for yeast cells with H2A A61pBpa. The top blot was visualized with primary HA antibody. The first two lanes detail the signal of wild type (Wt) yeast cells with and without UV light (negative and positive controls). The same samples were detected for H4K16ac and H3S10ph. Each antibody detection and damaging agent indicated is as labeled. D is the bulk histone and C through A are crosslinked products.

increased size of the complexed interaction. A denser band refers to a stronger signal from the antibody, meaning that more histone or the histone-protein complex is present. The negative UV lane represents cells that were not exposed to crosslinking activation. Crosslinked proteins appear only in the lanes that contain proteins from cells that have been exposed to UV-light. As the DNA damaging agent is increased in concentration, there appears to be a decrease in crosslinking efficiencies, particularly when exposed to MMS. This is most apparent in row A, where the strongest signal dominates in the wild type (Wt) cultures versus the damaged cells.
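Band-intensity comparisons of this kind are usually made quantitative by densitometry. The sketch below is a hypothetical illustration of that normalization, with invented intensity values: each crosslink band (row A) is expressed relative to the bulk histone band (row D) in its own lane, so lanes loaded slightly unevenly can still be compared.

```python
# Invented densitometry values for illustration; rows follow the Fig. 3 labels
# (row A = slowest-migrating crosslink, row D = bulk un-crosslinked histone).
lanes = {
    "Wt +UV":    {"row_A": 820.0, "row_D": 5100.0},
    "5 mM H2O2": {"row_A": 640.0, "row_D": 5050.0},
    "0.03% MMS": {"row_A": 310.0, "row_D": 4980.0},
    "0.1% MMS":  {"row_A": 95.0,  "row_D": 5120.0},
}

wt_ratio = lanes["Wt +UV"]["row_A"] / lanes["Wt +UV"]["row_D"]
for lane, bands in lanes.items():
    ratio = bands["row_A"] / bands["row_D"]  # crosslink signal per unit histone
    print(f"{lane}: crosslink/bulk = {ratio:.3f} ({100 * ratio / wt_ratio:.0f}% of Wt)")
```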



The same H2A A61pBpa samples were then analyzed for the presence of histone H4K16 acetylation by immuno-detection using an antibody specific for the histone PTM. When the cells were exposed to either H2O2 or MMS at the lowest concentrations, the H4K16ac signal increased relative to the control cells. As the concentrations of the damaging agents were increased, the signal for H4K16ac decreased but still maintained levels above Wt. Finally, the samples were analyzed for the presence of H3S10 phosphorylation by immunodetection. H3S10ph signals were elevated in the presence of the damaging agents as compared to Wt levels. In contrast to the H4K16ac levels that dropped as the damaging agent was increased in concentration, the phosphorylation PTM appeared to maintain near constant levels, independent of the damaging agent’s concentration. The results do suggest that phosphorylation levels may be higher during the MMS response compared to the cellular response due to H2O2. The second position of interest was that of tyrosine 58 on histone H2A. Fig. 4 depicts a different pattern for histone crosslinking from this position as compared to those detected from position 61. Although the UV-dependent banding pattern is different, the trend of crosslinking efficiencies

Figure 4. Western blot for yeast cells with H2A Y58pBpa. The top blot was visualized with primary HA antibody. The first two lanes show the signal of wild type (Wt) yeast cells with and without UV light. The same samples were detected for H4K16ac. Each antibody detection and damaging agent is indicated as labeled. B is the bulk histone and A represents crosslinked products.

remains the same. Row B represents the bulk histone where no crosslinks are detected in the absence of UV light. As the damaging agent was increased in concentration, the efficiency of the crosslinks decreased (row A). Additionally, we observe the same H4K16ac trend as that of the assays from the A61 position. When the damaging agent is present at its lowest concentration the signal for acetylation is elevated as compared to Wt levels. As the damaging agent is increased in concentration, the signal for the acetylation decreases. The H3S10ph detection for these samples failed and needs to be re-analyzed.



Discussion

The two positions chosen for our initial crosslinking experiments are of significance because they both reside in a surface-exposed region of the nucleosome. This region is referred to as the H2A acidic patch and is essential as a docking site for nucleosomal proteins. This domain has been shown to be a key point of regulation at the nucleosomal surface. Our results suggest that under increased levels of DNA damaging agent (stressed conditions) the ability of the histone to interact with nuclear proteins is reduced. There is a decrease in contacts between the histone and the protein it crosslinks to, at least for some of the contacts we detect. When comparing rows B and A (Fig. 3), MMS is more effective at preventing histone-protein crosslinking across row A (larger proteins), because in row B only the 0.1% MMS-treated cells differ from the other cells. This observation implies that the histone-protein interaction in row A is influenced much more strongly during a DNA damage response, and that when the response is initiated the protein is in contact with the chromatin fiber at a lower concentration. The protein appears to be an essential chromatin related protein under wild-type conditions but is swiftly disengaged under cellular stress. Also, for the histone-protein complex in row C (Fig. 3), H2O2 has little or no effect on the crosslinking, while it is clear that increasing amounts of MMS reduce the signal strength. Thus, the change in the H2A A61pBpa crosslink in row A is directly related to DNA damage done by MMS or H2O2: either the DNA repair process interferes with this crosslink, or the protein crosslinking with the histone is itself damaged. While it is important to state that our assays cannot identify the protein contact, we plan to use mass spectrometry to determine the identity of the protein in the crosslink with the histone at H2A A61. Interestingly, the crosslinks that appear in rows B and C do not appear to be affected by the changes in DNA damaging conditions. These results indicate that the protein interaction that is occurring is stable, or necessary, across both normal Wt and damaging conditions. However, when the cells are stressed with the most concentrated levels of MMS all the crosslinks (rows A through C) decrease and are nearly abolished.

H4K16 acetylation blotting was utilized to assess the open versus closed structure of chromatin. Increased acetylation means increased accessibility of chromatin for binding proteins. DNA damage pathways are thought to quickly alter chromatin structure so that the DNA becomes easily accessible for repair enzymes to act on the damaged loci. The H4K16ac western blot for H2A A61pBpa indicates that minute amounts of DNA damaging agent significantly increase acetylation at position lysine 16 on histone H4 (Fig. 3). This suggests that the chromatin quickly reorganizes into a more open structure, allowing proteins access to repair DNA damage at a significant rate. For both H2O2 and MMS, the lower concentrations of the damaging agent significantly raise the acetylation levels, but as the amount of damaging agent was increased the acetylation levels decreased. This suggests that the yeast cells’ self-repair process may not be sufficient to repair the damage done, because the stress levels become too high to maintain the response. As the concentrations become too high, cells may become so damaged that they enter cell death and therefore can no longer regulate the H4K16ac properly. This is further supported by the fact that at the highest levels of MMS crosslinking becomes limited, and we conclude that the lower levels of damaging agents allow for a sufficient signal response, whereas the higher levels create too much stress on the cells. All future assays will be performed at the lower levels indicated in this study because conditions that stimulate a response but do not overstress the cells are required.

H3S10ph is an interesting PTM because it is implicated in signaling for both the opening and closing of the chromatin fiber. When cells enter mitosis, chromatin compaction is propagated and the H3S10ph levels anticorrelate with the H4K16ac levels. In other words, their cellular levels oppose each other: in mitosis H4K16ac levels are absent and H3S10ph levels are increased. We observe an opposite trend during the DNA damage response. The H3S10ph signal appears to correlate directly with the acetylation signal. When acetylation levels rise, so do the phosphorylation levels. These results suggest that during the DNA damage response pathway both an increase in acetylation and an increase in phosphorylation are required to regulate damage signaling and protein binding during the response. How and why they correlate will require further experimentation to deduce.

We observed similar results for crosslinking from histone H2A Y58pBpa (Fig. 4). Row A shows decreasing signal strength for the crosslinks as the damaging agents are increased. Although we observe only one crosslink in this assay, it follows the same trend. The H4K16 acetylation blots for both H2A A61 and H2A Y58 should be exactly the same because the only difference between these two types of yeast cells is the position of the pBpa in the histone. This difference should not influence the opening and closing of chromatin because the unnatural amino acid pBpa has been used for similar experiments and evidence suggests that it does not influence the normal functioning of yeast cells. Although there is a detection level difference for the acetylation in the two assays, the general pattern for both western blots is the same. Minute amounts of DNA damaging agent significantly increase the acetylation signaling, suggesting greater accessibility of the histone and DNA by repair enzymes.

To proceed toward the initial goal of this experiment, to reveal the mechanisms that shape chromatin structure under DNA damage, the experiment must be repeated at other significant positions surrounding the histone octamer. These positions are typically the sites that are most likely to come in contact with other proteins that bind the histone. As mentioned, mass spectrometry is necessary to reveal the actual proteins that crosslink with the histone on the western blots. Only when the structures and functions of these proteins are known can the mechanisms be understood.

Acknowledgement I would like to thank my advisor, Dr. Bryan Wilkins for his guidance and support through this project. I would also like to thank Dr. Rani Roy and Dr. John Regan for their support and mentorship. This work was funded by the School of Science Summer Research Scholars Program.



References
[1] Luger, K., Mäder, A. W., Richmond, R. K., Sargent, D. F. and Richmond, T. J. Crystal structure of the nucleosome core particle at 2.8 Å resolution. Nature 389, 251-260 (1997).
[2] Tsabar, M. and Haber, J. E. Chromatin modifications and chromatin remodeling during DNA repair in budding yeast. Curr. Opin. Genetics Dev. 23, 166-173 (2013).
[3] Fontana, L., Partridge, L. and Longo, V. D. Extending healthy life span—From yeast to humans. Science 328, 321–326 (2010).
[4] Liu, C. C. and Schultz, P. G. Adding new chemistries to the genetic code. Annu. Rev. Biochem. 79, 413-444 (2010).
[5] Dorman, G. and Prestwich, G. D. Benzophenone photophores in biochemistry. Biochemistry 33, 5661-5673 (1994).
[6] Wilkins, B. J., Rall, N. A., Ostwal, Y., Kruitwagen, T., Hiragami-Hamada, K., Winkler, M., Barral, Y., Fischle, W. and Neumann, H. A cascade of histone modifications induces chromatin condensation in mitosis. Science 343, 77-80 (2014).
[7] Chin, J. W., Cropp, T. A., Anderson, J. C., Mukherji, M., Zhang, Z. and Schultz, P. G. An expanded eukaryotic genetic code. Science 301, 964-967 (2003).


Identification of zoonotic intestinal parasites in domestic dogs (Canis familiaris) from Winston-Salem, NC Eric A. Bailey∗ Department of Biology, Manhattan College Abstract. Pathogenic intestinal parasites can be detrimental to the health of both humans and animals. There are many diseases that can affect dogs and humans. Protozoans such as Giardia lamblia, Toxoplasma gondii, Neospora caninum, and Cryptosporidium parvum have the capability of being transmitted from dogs to humans. Helminths, such as Ancylostoma caninum, Necator americanus, and Echinococcus granulosus, also have the ability to be transmitted through fecal matter from dogs to humans. The goal of this study was to identify the prevalence of protozoans and helminths in canine fecal samples that were collected in Winston-Salem, NC, in March 2015. DNA was extracted from the fecal samples and analyzed by the polymerase chain reaction (PCR) using primers specific for each parasite. The human protozoan parasites, G. lamblia, T. gondii and Cryptosporidium parvum, were not detected in these samples. However, we found that the fecal samples were positive for N. caninum, A. caninum, N. americanus, and E. granulosus at a prevalence of 3%, 13%, 6% and 3%, respectively. Two PCR-positive samples for A. caninum were further analyzed by sequencing. Sequence analysis revealed they were two separate isolates, one initially identified in dog feces, and the other in human feces. These data indicate the potential of zoonotic transmission of protozoan and helminth parasites from dogs to humans.

Introduction

Intestinal parasites cause health problems in individuals throughout the world. Some parasites can be transmitted from animals to humans; they are known as zoonotic parasites. Animals that are constantly in close contact with humans, such as domestic dogs (Canis familiaris), may have an increased chance of spreading some of these diseases to their owners. There is a long list of parasites that have the ability to infect both dogs and humans. Many of these diseases, such as cryptosporidiosis (Cryptosporidium parvum), echinococcosis (Echinococcus granulosus), rabies (rabies virus), methicillin-resistant Staphylococcus aureus infection, plague (Yersinia pestis), and salmonellosis (Salmonella typhimurium), can be life-threatening (CDC.gov). Zoonotic protozoans can cause debilitating infections in both humans and dogs. Humans can contract G. lamblia and T. gondii infections when they come into contact with soil, food, or water that has been contaminated with dog feces (Baneth et al., 2016). These infections can cause intestinal illnesses, such as diarrhea and malabsorption, potentially leading to dehydration. Little is known about the potential for zoonotic transmission of C. parvum and N. caninum. Humans can contract C. parvum infection when they come into contact with water or food that was contaminated with dog fecal matter containing C. parvum oocysts (CDC.gov). Cryptosporidium canis causes infection in dogs, while C. parvum causes cryptosporidiosis in humans. There is evidence that C. parvum can infect dogs; however, the prevalence of C. canis in humans is very low (Lucio-Forster, 2010).

∗ Research mentored by Ghislaine Mayer, Ph.D.



N. caninum is generally transmitted between dogs and cattle. Cattle become infected with this protozoan when they come into contact with soil or food that has been contaminated with feces. Although there is evidence of N. caninum infection in humans, there is no data to support that it is a zoonotic parasite (Oshiro et al., 2015). N. caninum may infect humans the same way as T. gondii (Oshiro et al., 2015). A human infection with N. caninum is concerning because this parasite causes abortions and stillbirths in other animals. The tapeworm, E. granulosus, can be spread from dogs to humans through ingestion of eggs from the feces of sheep or dogs. When E. granulosus enters the human body it travels first to the small intestine, where it enters the bloodstream. E. granulosus then forms cysts that can insert themselves into various parts of the human body, such as the brain, the bone marrow, the heart and the lungs, which can eventually rupture. This disease can be fatal (CDC.gov). Ancylostoma caninum and Necator americanus can infect both humans and dogs. These helminths are hookworms. Hookworms can enter humans either through ingestion or by penetrating the skin (CDC.gov). Once inside the body, the hookworms migrate to the lungs, where they eventually begin to make their way up to the pharynx (CDC.gov). The hookworms are then swallowed and travel to the small intestine (CDC.gov). Generally, Ancylostoma duodenale is the species that has the ability to infect humans. However, there is evidence that A. caninum can infect humans and cause health problems. Infection with this hookworm can cause eosinophilic gastroenteritis, which is associated with abdominal pain and other abdominal problems (Landmann and Prociv, 2003). When these hookworms are in the small intestine, they may go undiagnosed for a significant period of time, until they cause serious diseases such as anemia. The goal of this study was to identify the prevalence of Giardia lamblia, Toxoplasma gondii, Cryptosporidium parvum, Neospora caninum, Echinococcus granulosus, Necator americanus, and Ancylostoma caninum in domestic dog fecal samples collected in Winston-Salem, NC, in 2015.

Materials and Methods

Sample Collection and DNA Extraction
Thirty-two dog fecal samples were collected in neighborhoods and shelters in Winston-Salem, NC (Dr. Johanna Porter-Kelley and her students, Winston-Salem State University, NC) on March 16, 2015. DNA was extracted from each sample using the Qiagen DNA stool kit (Qiagen, Valencia, CA).

Polymerase Chain Reaction
G. lamblia required the use of a nested polymerase chain reaction. In the first reaction, the forward primer 5’-AAGCCCGACGACCTCACCCGCAGTGC-3’ (Gia7) and the reverse primer 5’-GAGGCCGCCCTGGATCTTCGAGACGAC-3’ (Gia759) were used (Hong et al., 2014). In the second reaction, the forward primer 5’-GAACGAACGAGATCGAGGTCCG-3’ and the reverse primer 5’-CTCGACGAGCTTCGTGTT-3’ were used (Hong et al., 2014). The conditions for the PCR were as follows: thermal cycle reactions were set to an initial denaturing step (95 ◦C for 5 min), 35 cycles of a denaturing step (95 ◦C for 30 s), an annealing step (55 ◦C for 30 s), an extension step (72 ◦C for 60 s), and finally an extension step (72 ◦C for 7 min) (Hong et al., 2014). C. parvum was detected using the forward primer LAX469F 5’-CCGAGTTTGATCCAAAAAGTTACGAA-3’ (Rochelle et al., 1997). The reverse primer that was used was LAX869R 5’-TAGCTCCTCATATGCCTTATTGAGTA-3’ (Rochelle et al., 1997). The conditions for the PCR were as follows: thermal cycle reactions were set to an initial denaturing step (94 ◦C for 3 min), 35 cycles of a denaturing step (94 ◦C for 45 s), an annealing step (55 ◦C for 45 s), an extension step (72 ◦C for 1 min), and a final extension step (72 ◦C for 7 min) (Rochelle et al., 1997). For the T. gondii PCR, the forward primer 5’-GTAGCGTGCTTGTTGGCGAC-3’ was used (Fazaeli et al., 2000). The reverse primer 5’-ACAAGACATAGAGTGCCCC-3’ was used (Fazaeli et al., 2000). The PCR conditions were as follows: thermal cycle reactions were set to an initial denaturing step (95 ◦C for 5 min), 35 cycles of a denaturing step (94 ◦C for 30 s), an annealing step (60 ◦C for 1 min), an extension step (72 ◦C for 2 min), and a final extension step (72 ◦C for 7 min) (Fazaeli et al., 2000). The PCR primers used to detect N. caninum were the forward primer NP6 5’-CTCGCCAGTCAACCTACGTCTTCT-3’ and the reverse primer NP21 5’-CCCAGTGCGTCCAATCCTGTAAC-3’ (Razmi, 2009). The conditions for the thermocycler were: initial denaturation (95 ◦C for 5 min), 40 cycles with a denaturation step (94 ◦C for 60 s), an annealing step (63 ◦C for 60 s) and an extension step (74 ◦C for 3.5 min), followed by a final extension step (74 ◦C for 10 min) (Razmi, 2009). The primers used for A. caninum and N. americanus were the same. The PCR was a two-step reaction. The first reaction used the forward primer 5’-GTTGGGAGTATCGCCAACCG-3’ and the reverse primer 5’-AACAACCCTGAACCAGACGT-3’ (Yamage et al., 1996). In the second round reaction the forward primer was the same as the forward primer for the first reaction and the reverse primer was 5’-ATGCGTTCAAAATTTCACCA-3’ (Yamage et al., 1996). The PCR conditions were: initial denaturation (95 ◦C for 2 min), 34 cycles of denaturation (95 ◦C for 30 s), an annealing step (55 ◦C for 30 s), an extension step (72 ◦C for 30 s), which was followed by a final extension (72 ◦C for 10 min) (Yamage et al., 1996). The primers used for E. granulosus were the forward primer 5’-CATTAATGTATTTTGTAAAGTTG-3’ and the reverse primer 5’-CACATCATCTTACAATAACACC-3’ (Štefanić et al., 2004). The conditions for the reaction were as follows: initial denaturation (95 ◦C for 5 min), 35 cycles of denaturation (95 ◦C for 1 min), an annealing step (55 ◦C for 1 min) and an extension step (72 ◦C for 1 min), followed by a final extension (72 ◦C for 10 min) (Štefanić et al., 2004). The PCR products were detected using a 1.5% agarose gel. The positive PCR samples were sequenced by the Sanger method (Genewiz, South Plainfield, NJ).
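Thermocycler programs like these are easy to encode as data for record-keeping or run-time estimates. The sketch below is an illustrative construct, not software used in the study: it stores the N. caninum program quoted above (Razmi, 2009) and estimates the total block time.

```python
# N. caninum thermocycler program from the text (Razmi, 2009), encoded as data.
# The dictionary layout and helper are hypothetical, for illustration only.
ncaninum_program = {
    "initial_denaturation": (95, 5 * 60),          # (temperature °C, seconds)
    "cycles": 40,
    "per_cycle": [(94, 60), (63, 60), (74, 210)],  # denature, anneal, extend
    "final_extension": (74, 10 * 60),
}

def runtime_minutes(program: dict) -> float:
    """Total programmed hold time, ignoring ramp rates between temperatures."""
    fixed = program["initial_denaturation"][1] + program["final_extension"][1]
    cycled = program["cycles"] * sum(sec for _, sec in program["per_cycle"])
    return (fixed + cycled) / 60

print(f"Estimated block time: {runtime_minutes(ncaninum_program):.0f} min")  # 235 min
```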

Results Detection of Protozoan Parasite DNA To detect the protozoan parasites from the dog fecal samples collected in North Carolina, a PCR-based technique was used utilizing species-specific primers. T. gondii, C. parvum and G.
lamblia were not detected in any of the samples tested (Figs. 1 A-C). In contrast, we detected the dog protozoan parasite, N. caninum with a prevalence of 3% (1/32) (Fig. 1D).

Figure 1. Agarose gel electrophoresis of PCR products for protozoan parasite detection. A) C. parvum. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-20 and 22-34: C. parvum DNA samples. Lane 37: negative control. B) G. lamblia. Lane 1: 100 bp marker. Lane 2: positive control. Lanes 3-18: G. lamblia DNA samples. Lane 20: negative control. C) T. gondii. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-19 and 22-36: T. gondii DNA samples. Lane 38: negative control. D) N. caninum. Lanes 1 and 21: 1 Kb marker. Lane 2: positive control. Lanes 3-20 and 22-35: N. caninum DNA samples. Lane 37: negative control.

Detection of Helminth Intestinal Parasites
To detect helminth intestinal parasites, PCR was performed using primers that were specific for A. caninum, N. americanus, and E. granulosus. Echinococcus granulosus was observed at a prevalence of 3% (1/32) (Figs. 2A, 3), while N. americanus was found at a prevalence of 6% (2/32) (Figs. 2B, 3). In contrast, A. caninum was found at a 13% prevalence (4/32) (Figs. 2B, 3).
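The prevalence figures are simple proportions over the 32 samples; the short sketch below reproduces that arithmetic using the counts reported above.

```python
# Positive counts reported above, out of 32 fecal samples.
positives = {"N. caninum": 1, "E. granulosus": 1, "N. americanus": 2, "A. caninum": 4}
total_samples = 32

for parasite, count in positives.items():
    prevalence = 100 * count / total_samples
    print(f"{parasite}: {count}/{total_samples} = {prevalence:.1f}%")
# Prints 3.1%, 3.1%, 6.2%, and 12.5%, which the paper rounds to 3%, 3%, 6%, and 13%.
```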



Figure 2. Agarose gel electrophoresis of PCR products for helminth parasite detection. A) E. granulosus. Lanes 1 and 21: 1 Kb marker. Lanes 2-19 and 22-35 contain E. granulosus DNA samples. Lane 37: negative control. Lane 11: positive sample for E. granulosus. B) A. caninum and N. americanus. Lanes 1 and 21: 1 Kb marker. Lanes 2-19 and 22-34: A. caninum and N. americanus DNA samples. Lane 36: negative control.

Figure 3. Prevalence data of intestinal parasites in the dog fecal samples from Winston-Salem, NC

Sequence Analysis Positive PCR products were further characterized by sequencing. Two of the samples (2/8) that were sent for sequencing gave readable sequences. The sequence data confirmed the PCR products to be isolates of A. caninum. One PCR product had a 99% match with an isolate first described in dog stools in the Mekong River basin in Southeast Asia (Fig. 4A) (Sato et al., 2016).



The second PCR product had a 100% sequence identity to an isolate that was found in a human stool sample in Tamil Nadu, India (Fig. 4B) (George et al., 2016).
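Percent identity, the quantity BLAST reports for these matches, is the fraction of aligned positions at which two sequences agree. The following is a toy sketch with made-up sequences, not a reimplementation of BLAST:

```python
# Toy percent-identity calculation over an ungapped alignment of equal length;
# real BLAST matches also account for gaps and local alignment boundaries.
def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b) or not a:
        raise ValueError("sequences must be aligned and non-empty")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100 * matches / len(a)

print(percent_identity("ATGCGTTCAA", "ATGCGTTCAA"))  # 100.0, like the Fig. 4B match
print(percent_identity("ATGCGTTCAA", "ATGCGTTGAA"))  # 90.0
```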

Figure 4. BLAST analysis of positive A. caninum PCR products.

Discussion

A total of 32 dog fecal samples were analyzed in this study. Of the protozoan parasites tested, only N. caninum was detected. None of the samples showed co-infections. Sequence analysis confirmed the presence of A. caninum in two PCR products. Each sample was matched with a different isolate of A. caninum. Interestingly, one of the samples showed a 100% match with an isolate that was originally identified in a sample of human stool (George et al., 2016). This shows that A. caninum does have the ability to cross the species barrier, suggesting zoonotic transmission. A. caninum does cause intestinal problems in humans. It can result in severe abdominal pain or can remain asymptomatic for extended periods of time (Landmann and Prociv, 2003). In a study performed in Australia to test the capability of A. caninum to infect humans, it was shown through experimentally infecting human volunteers that this parasite does cause infections in humans as well as dogs
(Landmann and Prociv, 2003). Symptoms such as eosinophilic gastroenteritis developed from this experimental infection (Landmann and Prociv, 2003). Another significant finding from this study was that an isolate of A. caninum found in humans and dogs is not necessarily constrained to any geographic border. We identified an isolate that was initially reported to be from a human stool sample from Tamil Nadu, India (George et al., 2016). In our study, that isolate was found in dog stool samples collected in North Carolina. Although A. caninum is not geographically bound, its prevalence may be restricted by climate. Future studies will be conducted to compare the prevalence of intestinal parasites in dogs from warm and temperate climates.

Acknowledgments I would like to thank Dr. Ghislaine Mayer for being my advisor for this research. I would also like to thank Dr. Johanna Porter-Kelley and her students for providing us with the fecal samples that were collected in Winston-Salem, NC. I also thank the Dean of the School of Science, the Biology Department, and the Jasper Summer Scholars Program for funding this research.

References
Baneth, G., Thamsborg, S., Otranto, D., Guillot, J., Blaga, R., Deplazes, P., & Solano-Gallego, L. (2016). Major parasitic zoonoses associated with dogs and cats in Europe. Journal of Comparative Pathology, 155. doi:10.1016/j.jcpa.2015.10.179
Division of Parasitic Diseases and Malaria. (2013). Hookworm. Retrieved October 05, 2016, from http://www.cdc.gov/dpdx/hookworm/index.html
Division of Parasitic Diseases and Malaria. (2013). Echinococcosis. Retrieved October 05, 2016, from http://www.cdc.gov/dpdx/echinococcosis/index.html
Dubey, J. P., Schares, G., & Ortega-Mora, L. M. (2007). Epidemiology and control of neosporosis and Neospora caninum. Clinical Microbiology Reviews, 20:323-367. doi:10.1128/cmr.00031-06
Fazaeli, A., Cartera, P. E., Dardeb, M. L., Pennington, T. H. (2000). Molecular typing of Toxoplasma gondii strains by GRA6 gene sequence analysis. Int. J. Parasitol. 30:637-642.
George, S., Levecke, B., Kattula, D., Velusamy, V., Roy, S., Geldhof, P., Kang, G. (2016). Molecular identification of hookworm isolates in humans, dogs and soil in a Tribal Area in Tamil Nadu, India. PLOS Neglected Tropical Diseases, 10. doi:10.1371/journal.pntd.0004891
Hong, S.-H., D. Anu, Y.-I. Jeong, D. Abmed, S.-H. Cho, W.-J. Lee, and S.-E. Lee. (2013). Molecular characterization of Giardia duodenalis and Cryptosporidium parvum in fecal samples of individuals in Mongolia. American Journal of Tropical Medicine and Hygiene 90:43-47.
Landmann, J. K., & Prociv, P. (2003). Experimental human infection with the dog hookworm, Ancylostoma caninum. The Medical Journal of Australia, 178:69-71.



Lucio-Forster, A., Griffiths, J. K., Cama, V. A., Xiao, L., & Bowman, D. D. (2010). Minimal zoonotic risk of cryptosporidiosis from pet dogs and cats. Trends in Parasitology, 26:174-179. doi:10.1016/j.pt.2010.01.004
Oshiro, L. M., Motta-Castro, A. R., Freitas, S. Z., Cunha, R. C., Dittrich, R. L., Meirelles, A. C., & Andreotti, R. (2015). Neospora caninum and Toxoplasma gondii serodiagnosis in human immunodeficiency virus carriers. Revista da Sociedade Brasileira de Medicina Tropical, 48:568-572. doi:10.1590/0037-8682-0151-2015
Razmi, G. (2009). Fecal and molecular survey of Neospora caninum in farm and household dogs in Mashhad Area, Khorasan Province, Iran. The Korean Journal of Parasitology, 47:417. doi:10.3347/kjp.47.417
Rochelle, P. A., De Leon, R., Stewart, M. H., Wolfe, R. L. (1997). Comparison of primers and optimization of PCR conditions for detection of Cryptosporidium parvum and Giardia lamblia in water. American Society for Microbiology 60:106-114.
Sato, M., Sato, M. O. and Yoonuan, T. (2016). Ancylostoma caninum genes for 18S rRNA, ITS1, 5.8S rRNA, ITS2, 28S rRNA, partial and complete sequence, isolate: 201-1. Retrieved September 28, 2016.
Scallan, E., Hoekstra, R. M., Angulo, F. J., Tauxe, R. V., Widdowson, M. A., Roy, S. L., Jones, J. L., Griffin, P. M. (2011). Foodborne illness acquired in the United States: major pathogens. Emerg Infect Dis. 17:7-15.
Štefanić, S., Shaikenov, B. S., Deplazes, P., Dinkel, A., Torgerson, P. R., Mathis, A. (2004). Polymerase chain reaction for detection of patent infections of Echinococcus granulosus (“sheep strain”) in naturally infected dogs. Parasitology Research, 92:347-351.


Procedures to determine rates of bark formation on saguaro cacti

Lauren Barton∗

Department of Biology, Manhattan College

Abstract. Bark formation, or epidermal browning, has been shown to occur on saguaro cacti and more than twenty other cactus species in the Americas. Bark formation occurs on older saguaro cactus plants, beginning on equatorial-facing surfaces. For this study, saguaro cactus plants (Carnegiea gigantea) found in Southern Arizona were analyzed to further understand bark formation (Fig. 1). For the saguaro cactus, the south-facing surfaces should show bark formation first, followed by the east, the west, and then the north, respectively. Previous research determined average rates of bark formation on surfaces for a population of saguaro cactus plants. These rates were deduced for surfaces using logistic curves. These logistic curves were generated using the method of least squares, which minimized the fitting error. As such, this study generated average logistic curves for each of the twelve surfaces, which were used to determine the time difference, in years, between the formation of bark on the different surfaces. The current research expands upon this idea by seeking to better understand both the average bark formation rate for cactus plants and the bark formation rates on individual cactus plants. This study utilized logistic curves of individual cactus plants to compare with the ‘average’ cactus. One approach resembled the logistic curves generated for an ‘average’ cactus. An additional statistical method, nonlinear mixed effects, was implemented to generate logistic curves for individual cacti. Additionally, the current study analyzed bark formation with additional data. One database considered a group of cacti that showed extensive bark formation, while a second database considered only a group of cacti with little to no bark in previous surveys. With these two new methods to follow individual cacti, further analyses will focus on the characteristics of individual cacti.

Introduction

Epidermal browning is a phenomenon occurring on the stem surfaces of more than 24 species of columnar cacti in the Americas (Evans and DeBonis, 2015), including Carnegiea gigantea (Engelm.) in Arizona and Sonora, Mexico. Bark formation eventually occurs on all surfaces of C. gigantea (saguaro cacti) and results in premature morbidity and mortality of plants (Evans and DeBonis, 2015). Bark formation is caused by an aggregation of epicuticular waxes on stem surfaces (Evans et al., 1994a; 1994b). As these epicuticular waxes accumulate, the stomata on the surface of the cactus are blocked, resulting in reduced gas exchange (Evans et al., 1994a; 1994b). Reduced gas exchange leads to a discoloration of the stem surface from green to brown, with the eventual formation of a thick bark layer caused by proliferation of epidermal cells (Evans et al., 1994b). Additionally, studies show that UV-B light exposure produces an identical accumulation of epicuticular wax on the stem surface, resulting in a similar discoloration and eventual formation of bark (Evans et al., 2001). Fig. 2 provides an example of this bark formation beginning to occur on a saguaro surface. As stated above, bark formation leads to premature death of saguaros at a rate of 2.3% per year (Evans et al., 2005; 2013), a very high rate considering that these cacti should live for hundreds of years (Steenbergh and Lowe, 1977).

∗ Research mentored by Lance Evans, Ph.D.



Figure 1. Image of a desert scene from the Sonoran Desert with several saguaro cacti (Carnegiea gigantea). The cactus in the foreground has numerous branches while some in the background are shorter. The cactus in the foreground is about 120 to 140 years old.


Figure 2. Image of the surface of a saguaro cactus (Carnegiea gigantea). The scale is next to a crest with extensive bark formation, while there is less bark on the trough to the right of the crest with the scale.

Bark formation first begins to appear on equatorial-facing surfaces on all columnar cactus species. For saguaro cacti in southern Arizona, bark formation begins on south-facing surfaces (Evans et al., 1992; 1994a). Over time, bark formation occurs on all surfaces prior to death. A previous study (DeBonis et al., 2015) determined rates of bark formation on various stem surfaces on saguaro cacti over a 16-year period. The above study analyzed percentages of bark of crests and troughs for south, east, west, and north-facing surfaces (DeBonis et al., 2015). The above study used many general logistic curves to track rates of bark formation on twelve surfaces of 897 cacti. Like an abiotic disease, bark formation progresses in a pattern that can be categorized into stages on a logistic curve: these stages include an ‘initial’ stage, a ‘manifest’ stage, and a ‘morbid’ stage prior to death (Truett et al., 1967; Boyd et al., 1987; Le Gall et al., 1993; Marshall et al., 1995; Biondo et al., 2000; Kologlu et al., 2001). From these logistic curves, average rates of bark formation were generated for each of the twelve cactus surfaces. The purpose of the current study is to understand this process of bark formation on individual cactus plants to compare with average rates on surfaces (DeBonis et al., 2015). Instead of generating one average curve for the entire cactus population, logistic curves were generated for individual
cactus plants. These logistic curves of rates of bark formation for individual cactus plants will provide for an analysis of characteristics of the environments (aspects, sun versus shade, etc.) that each cactus is exposed to in the desert location.

Materials and Methods

Field Conditions
Saguaro cacti (Carnegiea gigantea (Engelm.) Britt and Rose) were observed and analyzed initially over a 16-year period. The field plots were established in Tucson Mountain Park in 1993-4. In Tucson Mountain Park, 50 permanent plots were established in 1993-4, and within these 50 plots the locations of 1149 cacti were established and analyzed (Evans et al. 1995). These plots were chosen because they expressed specific characteristics, which are described in further detail in another study (Evans et al. 2005). Cacti were again analyzed in 2002 (Evans et al. 2005) and 2010 (Evans et al. 2013). Currently, cacti followed over this 16-year period in the previous study were analyzed in further detail dependent upon their external characteristics. Cactus plants that exhibited a greater covering of bark on their surfaces were analyzed in 2014. Conversely, cactus plants that exhibited very little bark coverage on their surfaces were analyzed in 2015.

Data sets used
All data for each data set included percent bark areas. Percent bark areas included crests, right troughs, and left troughs for each cardinal direction: south-, east-, west-, and north-facing surfaces (a total of twelve surfaces) for all cacti evaluated; Fig. 3 indicates crest and trough surfaces. The previous study analyzed data collected in 1993-4, 2002, and 2010 for 897 cacti and has been presented in additional studies (Evans et al., 1995; 2005; 2013; 2015). Fifty-seven cactus plants with a high coverage of bark on their surfaces were analyzed in 1993-4, 2002, 2010, and 2014. Sixty-eight cactus plants with a lesser coverage of bark on their surfaces were analyzed in 1993-4, 2002, 2010, and 2015. Three Excel spreadsheets (Microsoft Inc.) were used to document the percentages of barked area on each of the twelve surfaces for each individual data set. An example of the generated spreadsheets is shown in Table 1.

Theory of logistic curves
Logistic curves are helpful in successfully determining rates of disease progression in humans (Truett et al., 1967; Boyd et al., 1987; Le Gall et al., 1993; Marshall et al., 1995; Biondo et al., 2000; Kologlu et al., 2001). As noted in a previous study, and because of its success in modeling disease progression, the logistic curve was chosen as the best possible method to show the progression of bark formation on the twelve cactus surfaces (DeBonis et al., 2015). The purpose of the previous study and the current study is to determine rates of bark formation on cactus surfaces relative to one another. Each study used crest data, right trough data, and left trough data for south-, east-, west-, and north-facing surfaces (twelve surfaces total were evaluated). The previous study analyzed the twelve surfaces of 897 cacti over a 16-year period (1993-4, 2002, and 2010). Fifty-seven cactus



Figure 3. Image of the surface of a saguaro cactus (Carnegiea gigantea) showing crests (convex protrusions) and troughs (concave indentations).

plants that displayed a greater coverage of bark on their surfaces, but were not completely covered, were analyzed further over a 20-year period (1993-4, 2002, 2010, and 2014). Sixty-eight cactus plants that displayed a lesser coverage of bark, or no coverage at all, were also analyzed further over a 21-year period (1993-4, 2002, 2010, and 2015). Despite the differences in analysis of data, all three data sets were found to show fundamental stages of a logistic curve in terms of bark formation over time. The first two data sets showed three fundamental stages of a logistic curve. The first stage, referred to as the ‘initial’ stage, included cacti that showed less than 25% bark on their surfaces in 1993-4; additionally, bark coverage remained less than 25% in 2010 or 2014. The second stage, termed ‘manifest’, was characterized by a linear progression of bark formation; the surface typically started with more than 25% bark and increased rapidly, but was still less than 75% covered from 1993-4 to 2010 or 2014. The third stage, the ‘morbid’ stage, was categorized by surfaces with more than 75% bark and made up the plateau of the logistic curve. The 68 cactus plants with little to no bark that were analyzed in 2015 displayed unique logistic curves; each surface was categorized in the ‘initial’ stage of the curve. No curve generated for the 2015 data included a manifest stage or a morbid stage.

General logistic curves - generating an average response
Two methods were used to analyze the logistic curves generated in the current studies. The first method, used in the previous study of 897 cacti as well as for the current study of the 2014 and 2015 data sets, involved the creation of an average logistic curve. As previously stated, a logistic model seemed to be the best fit for these data. As stated in the previous study (DeBonis et al., 2015), a general logistic curve described by Draper and Smith (1998) was not entirely sufficient for modeling the progression of bark formation. This is because a general curve has inherent symmetry; our data



Table 1. Percentages of cactus surface with bark formation for eleven cactus plants in 1994, 2002, 2010, and 2015.

Plot Number  Cactus Number  1994  2002  2010  2015
242          29             1     1     4     5
46           19             0     0     5     5
208          4              3     3     5     10
154          34             2     2     10    10
242          23             1     1     5     10
273          9              5     5     6     15
41           27             10    10    10    18
83           2              1     5     5     20
208          13             3     10    12    25
36           8              25    25    80    85
83           12             25    25    85    85

As such, a generalized curve was selected:

y = 100/{1 + exp[−b(t − c)]}^a        (1)

This allowed for an asymmetric curve in which the bottom part of the S-curve could look different from the upper portion and the inflection point could be anywhere on the curve. Additionally, this provided a curve with three parameters, a, b, and c, which were determined with the method of least squares (Gottschalk and Dunn, 2005). Further, the data included cactus surfaces in different stages of bark formation, so the curves needed to be synchronized with respect to time. The method made use of least squares analysis as well as the square of the errors in the times of the data points (DeBonis et al., 2015). This method yielded an average logistic curve for each of the twelve surfaces that fit the data; Fig. 4 provides an example of this average logistic curve.

Theory of synchronizing logistic curves among cactus surfaces
Once the average curves were generated for each of the twelve surfaces for each of the three data sets, the curves were synchronized to generate time sequences. This synchronization shows the time difference between surfaces, for instance between south crests and east crests, and generates average time delays between surfaces from the logistic curves. This could be determined for all twelve surfaces in each data set.

Nonlinear mixed effects
The second method used in the current study was applied only to the data sets evaluated in 2014 and 2015. Nonlinear mixed effects, like the method above, generates logistic curves; these curves, however, are generated for each individual cactus, as opposed to creating an average logistic curve. As such, each curve is generated relative to an average of all the cactus plants, but this average differs from the one generated in the previous method.



Figure 4. Average logistic curve of percent of bark formation on stem surfaces versus relative time on east-facing right troughs for injury groups of initial (open circles), manifest (grey-filled circles), and morbid (closed circles) for a population of 900 saguaro cacti (Carnegiea gigantea).

These logistic curves are all shifted and synchronized in time as stated in the previous method.

Method for fitting the data
As stated previously, two methods were implemented to obtain the results, and three separate data sets were analyzed. The first data set, used in the previous study, comprised bark percentages on surfaces evaluated in 1993-4, 2002, and 2010. For the current study, two additional data sets were included to expand upon the previous one: the first was characterized by significant bark coverage on all surfaces and was evaluated in 1993-4, 2002, 2010, and 2014; the second, characterized by very little bark formation on its surfaces, was evaluated in 1993-4, 2002, 2010, and 2015. For the previous study, the data acquired at the three timed data points were designated b1, b2, and b3, respectively, so that they could be processed as an ordered triple (b1, b2, b3). For the current research, an additional data point was included (either for 2014 or 2015); as such, these curves contained four data points, designated b1, b2, b3, and b4, respectively. Each procedure for each data set was carried out in Matlab (www.mathworks.com), using programs written by Dr. DeBonis to acquire the results.

For both methods, a preprocessing step was necessary to generate accurate logistic curve fits. Using these ordered triples and quadruples of data, anything that was not essential to the generation of an accurate fit was removed. This included (1) triples or quadruples that were missing data, (2) triples or quadruples with all equal values, and (3) any triples or quadruples that decreased in bark percentage over time, as this is illogical. Next, each triple and quadruple was divided into three separate classes: initial, manifest, or morbid. The initial class included triples or quadruples with bark percentages less than 25%; the morbid class included triples and quadruples with bark percentages greater than 75%; all other triples or quadruples were classified as manifest. With this, an iterative method was used to produce curves for each surface of a cactus for all data sets.
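A minimal sketch of this preprocessing step is given below (an illustration only, not Dr. DeBonis' program; the data matrix is hypothetical). Rows are ordered quadruples (b1, b2, b3, b4) of percent-bark values for individual surfaces.

% Hypothetical quadruples of percent bark, one row per cactus surface
B = [ 1   1   4   5;
     NaN  0   5   5;     % missing data
     10  10  10  10;     % all equal values
     25  25  80  85;
     30  20  15  10];    % decreases over time (illogical)

keep = all(~isnan(B), 2) ...            % (1) drop rows with missing data
     & any(diff(B, 1, 2) ~= 0, 2) ...   % (2) drop rows with all equal values
     & all(diff(B, 1, 2) >= 0, 2);      % (3) drop rows that decrease over time
B = B(keep, :);

% One reading of the three classes described above
isInitial  = all(B < 25, 2);            % under 25% bark at every time point
isMorbid   = all(B > 75, 2);            % over 75% bark at every time point
isManifest = ~isInitial & ~isMorbid;    % everything else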



The iterative method was repeated with different initial conditions until a curve with the "best" fit was obtained. In general, the iterative method was used to find the best asymmetric logistic fit to the data for all three data sets: the data were shifted horizontally to obtain the best least squares fit, and from this the best asymmetric fit was determined once the iterations stabilized (Fig. 4). Previous research describes in greater detail the statistical method for obtaining the "best" fit curve for the average logistic curve (DeBonis et al., 2015). This method, in addition to its use in obtaining an average logistic curve, was also implemented by the nonlinear mixed effects method; instead of one average logistic curve generated for all data, nonlinear mixed effects used the same procedure to make individual curves for each individual cactus at each of the twelve surfaces.

Method of synchronizing logistic curves
Once average logistic curves were generated for each surface, they needed to be synchronized to produce time delays. A previous study describes in great detail how the logistic curves were synchronized and how time delays were determined, providing several methods to obtain the end result (DeBonis et al., 2015). The methods each highlight how different pair-wise comparisons between curves can be used to find the average difference, in years, between the rates of bark formation on surfaces. This method was essential for both the previous study and the current study, as it allowed average time delays to be generated from the logistic curves produced. For the first portion of this study, average time delays were determined from the average logistic curves; this was done for all three data sets. In comparison, average time delays were also generated for each individual cactus. These individual logistic curves produce time delays for a specific cactus at each surface relative to the south crest. Table 2 shows the time delays calculated from the average logistic curves and the individual logistic curves.
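To make the least-squares fitting of Eq. (1) concrete, the following is a minimal sketch (not the programs written by Dr. DeBonis; the observation times and percentages are placeholders for one surface).

t = [0 8 16 21];               % years after 1994 (1994, 2002, 2010, 2015)
y = [5 12 40 55];              % percent bark on one surface (hypothetical)

% Asymmetric logistic of Eq. (1); p = [a, b, c]
logistic = @(p, t) 100 ./ (1 + exp(-p(2) .* (t - p(3)))).^p(1);
sse = @(p) sum((y - logistic(p, t)).^2);   % sum of squared errors

p0 = [1, 0.2, 10];                         % initial guess for [a, b, c]
pBest = fminsearch(sse, p0);               % Nelder-Mead least squares

tFine = linspace(min(t), max(t), 200);     % plot data and fitted curve
plot(t, y, 'o', tFine, logistic(pBest, tFine), '-');
xlabel('Relative time (years)'); ylabel('Percent bark');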

Results
For each data set, entire logistic curves of bark formation were generated for each of the twelve surfaces. The data for these logistic curves were divided into three categories (initial, manifest, and morbid), all relative to their percentage of bark formation at a specific period in time. For the previous study, 897 cacti were evaluated at three timed data points (1994, 2002, and 2010) for each of the twelve surfaces; in total, 2500 data points were used on one logistic curve. These timed data points were used to construct logistic curves (Fig. 4). Open circles on this logistic curve indicate less than 25% bark on the surfaces; grey circles show the data for the manifest portion, characterized by greater than 25% and less than 75% bark; dark circles indicate the morbid stage, characterized by greater than 75% bark formation, with the data plateauing at 100%.



Table 2. Time shifts for individual cactus surfaces relative to an average cactus for the dates shown. Results of two methods are shown: average logistic curves (left block) and individual logistic curves (right block). Time shifts are in years.

Plot     Cactus      Average logistic curves        Individual logistic curves
Number   Number    1994    2002    2010    2015     1994    2002    2010    2015
242      29       -24.8   -16.8    -8.8    -3.8    -15.8    -7.8    -4.5    -0.4
46       19       -24.5   -16.5    -8.5    -3.5     -2.8     5.2    -5.1    -0.1
208      4        -21.2   -13.2    -5.2    -0.2    -15.9    -7.9    -1.8     0.3
154      34       -20.3   -12.3    -4.3     0.7    -13.6    -5.6    -3.9     1.1
242      23       -21.4   -13.4    -5.4    -0.4    -12.4    -4.4    -1.9     0.1
273      9        -19.0   -11.0    -3.0     2.0    -15.6    -7.6    -0.3     0.5
41       27       -17.4    -9.4    -1.4     3.6    -17.0    -9.0    -1.0     1.0
83       2        -17.7    -9.7    -1.7     3.3     -8.7    -6.3     1.7     0.1
208      13       -15.7    -7.7     0.3     5.3    -10.4    -7.2    -0.1     0.7
36       8         -2.0     6.0    14.0    19.0     -6.5     1.5    -5.8    -3.3
83       12        -1.5     6.5    14.5    19.5     -6.0     2.0    -7.8    -2.8

*Time shift values are in years before (negative values) and after (positive values) relative to the average.

The method of least squares fit the portions of the curve together and provided a single synchronized curve. The next step involved comparing logistic curves, looking specifically at the differences between the crest surfaces and the trough surfaces in the cardinal directions. From this, the average time differences of all surfaces from the south crest were determined. From the south crest, the east crest had a three- to five-year time shift, while the west crest had approximately a six-year time shift. North-facing crests experienced the largest delay among crests, with a twenty-one-year time delay. The surfaces with the greatest time delay from the south crest were deduced to be the north right troughs, which experienced a delay of about 35 years.

Similar logistic curves were generated for the current data sets. The first data set, categorized by a great amount of bark formation on each of its surfaces, produced an average logistic curve similar to that of the previous study. The curve looked very similar to the previous curve; however, it was generated for a smaller subset of data with an additional timed data point (2014). This curve had approximately 220 data points, and as such the curves varied slightly in shape (Fig. 5). With this curve, we can begin to compare the average time delays between all twelve surfaces in this specific subset of data, as well as compare the time delays of the previous study to those of the current study: how they are similar, how they vary, and the possible reasons.

The second data set also generated average logistic curves similar to the previous study. This data set was categorized by cactus surfaces that had very little bark formation at all timed data points (1994, 2002, 2010, and 2015). The curve, shown in Fig. 6, displayed only the initial portion of the logistic curve. As noted, these 270 data points fall within the initial stage, with a few beginning to move into the manifest stage. From this, time delays can be calculated between all surfaces in this subset of data, as well as compared to the other subsets of data previously discussed.




Figure 5. Average logistic curve of percent of bark formation on the North right trough versus relative time in years for a population of 57 saguaro cacti (Carnegiea gigantea).

Figure 6. Average logistic curve of percent of bark formation on the North right trough versus relative time in years for a population of 68 saguaro cacti (Carnegiea gigantea).

Figure 7. Multiple logistic curves of percent of bark formation on the North right trough versus relative time in years, generated for individual cacti found in a population of 68 saguaro cactus plants (Carnegiea gigantea).

In addition to the average logistic curves mentioned above, the current study also generated individual logistic curves for each cactus in each subset of data using nonlinear mixed effects. Nonlinear mixed effects allowed for the production of unique individual logistic curves relative to an average, and these will be used to determine individual rates of bark formation. These individual rates of bark formation will be compared with one another as well as with the average rates of bark formation generated by the previous method. Fig. 7 shows an example of the individual logistic curves generated for 68 cacti using the nonlinear mixed effects method.
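As an illustration of this second method, the sketch below fits per-cactus logistic curves with nlmefit from MATLAB's Statistics Toolbox. This is an assumed implementation, not the programs used in the study, and all data are synthetic placeholders.

rng(1);
nCacti = 11; tObs = [0 8 16 21]';            % four timed data points
t = repmat(tObs, nCacti, 1);
group = kron((1:nCacti)', ones(numel(tObs), 1));

% Synthetic data: a population curve plus a random per-cactus time shift
shift = 5 * randn(nCacti, 1);
y = 100 ./ (1 + exp(-0.2 .* (t - (10 + shift(group))))) + randn(size(t));

% Asymmetric logistic of Eq. (1); phi = [a, b, c]
model = @(phi, t) 100 ./ (1 + exp(-phi(2) .* (t - phi(3)))).^phi(1);
phi0 = [1, 0.2, 10];
[beta, PSI, stats, b] = nlmefit(t, y, group, [], model, phi0);
% beta holds the average (fixed-effect) parameters; the columns of b are
% per-cactus deviations, from which individual curves and time shifts
% such as those in Table 2 can be derived.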



Discussion
The purpose of the current studies is to compare the bark formation rates in two different subsets of data to the bark formation rates acquired in a previous study (DeBonis et al., 2015). The previous study used three timed data points, evaluated in 1994, 2002, and 2010, on twelve different surfaces of the same cacti to generate twelve different average logistic curves. Since bark formation does not follow the typical pattern of a symmetrical logistic curve with an inflection point at 50% (Draper and Smith, 1998), a more general logistic curve was selected,

y = 100/{1 + exp[−b(t − c)]}^a        (2)

in which the parameters were found by a method of least squares analysis (Gottschalk and Dunn, 2005). This method was implemented because it allowed for asymmetric behavior of the curve and for different points of inflection; the process is described in further detail in the previous study (DeBonis et al., 2015). From this previous method, the current studies could be developed.

The current studies used two subsets of data to analyze differences among cacti with certain characteristics. The first subset (57 cacti) was chosen because the cacti showed greater amounts of bark formation on their surfaces in 1994, 2002, and 2010; these cactus plants were evaluated again in 2014 and used for further analysis. Additionally, a second subset (68 cacti) was used to further evaluate the progression of bark formation on cacti; this subset was characterized by the small amount of bark on its surfaces, which will be essential for future studies.

As stated previously, bark formation is increasing premature morbidity and mortality in typically long-lived saguaro cacti. This bark formation is caused by increased exposure to UV-B sunlight and a build-up of epicuticular waxes; once bark starts to form, it prevents the stomata from opening, and the cactus is unable to carry out the necessary gas exchange with the outside environment (Evans et al., 1994a, 1994b). Bark formation typically occurs in a pattern around the cactus. It has been noted in previous literature that bark formation begins on south-facing surfaces of saguaros before any other surfaces. As such, for the previous study and the current studies, average and individual rates of bark formation were expressed as time after bark formation began on the south-facing crest. Following the south crest, bark formation occurred next on the east-facing crests, about three years later (DeBonis et al., 2015). West crests developed bark after the east crests, at about eight years from the south, and north crests showed bark formation occurring fifteen years later than the south crest surface. The south, east, west, and north troughs were delayed by 4, 5, 10, and 15 years, respectively. This information from the previous study, coupled with the current data sets, will allow for an expanded study of rates of bark formation.

The first subset of data, involving 57 cacti, was chosen to represent a population of cactus plants with a significant amount of bark formation already.



They were evaluated in 1994, 2002, 2010, and additionally in 2014 because a majority of the surfaces were already in either the manifest or the morbid stage. These cacti will be useful for comparison with the large population of cactus plants (897), to determine what is similar and what differs between the two groups and how their average bark formation rates vary. Additionally, the use of nonlinear mixed effects and individual logistic curves will allow for the study of particular cactus plants that differ from the average. Using this method, there will be 57 individual curves per surface, totaling 684 different logistic curves to be interpreted. This allows a thorough investigation of what varies between specific cacti and can help provide insight into why this bark formation is occurring.

Similarly, the second subset of 68 cacti can be used for the same basic purpose, as a comparison to the previous data. Additionally, it can be used to interpret and better understand the tipping point of this bark formation. By using the average logistic curves for each surface as well as the individual logistic curves, we can better understand what is causing this bark to form and decipher at what point it begins to progress rapidly. With 816 logistic curves generated for the 68 cacti (12 surfaces each), individual outliers can be chosen to further understand what truly differs between individuals experiencing bark formation and why some experience it more severely and rapidly than others.

With the previous data and the current studies, it will hopefully be possible to find a correlation between rates of bark formation and some factor or influence noted in individual cacti. If this is possible, preventative measures could be taken to allow for the survival of these saguaro cacti. It has been noted that saguaro cacti have the ability to live to be over 200 years of age (Steenbergh and Lowe, 1977); however, recently, O'Brien et al. (2011) showed that out of 20,372 saguaros the oldest cactus was found to be only 110 years of age. This indicates that the formation of bark on saguaro cacti surfaces is increasing the rate of mortality.

Acknowledgments The author is indebted to the Catherine and Robert Fenton Endowed Chair to Dr. Lance Evans for financial support for this research. She is also grateful to Dr. M. DeBonis and Dr. L. S. Evans for extensive help with this study.

References
Biondo, S., E. Ramos, and M. Deiros. 2000. Prognostic factors for mortality in left colonic peritonitis: a new scoring system. J. Amer. Coll. Surg. 191: 635-642.
Boyd, C. R., M. A. Tolson, and W. S. Copes. 1987. Evaluating trauma care: The TRISS method. Trauma Score and the Injury Severity Score. J. Trauma 27: 370-378.
DeBonis et al. 2015. Unpublished data - personal communication.
Draper, N. R., and H. Smith. 1998. Applied Regression Analysis. 3rd ed. John Wiley & Sons, New York.



Evans, L. S. 2005. Stem surface injuries to Neobuxbaumia tetetzo and Neobuxbaumia mezcalaensis of the Tehuacan Valley of Central Mexico. J. Torrey Bot. Soc. 132: 33-37.
Evans, L. S., and M. DeBonis. 2015. Predicting morbidity and mortality of saguaro cacti (Carnegiea gigantea). J. Torrey Bot. Soc. 142: 231-239.
Evans, L. S., K. A. Howard, and E. Stolze. 1992. Epidermal browning of saguaro cacti (Carnegiea gigantea): Is it new or related to direction? Environ. Exp. Bot. 32: 357-363.
Evans, L. S., V. A. Cantarella, K. W. Stolte, and K. H. Thompson. 1994a. Phenological changes associated with epidermal browning of saguaro cacti at Saguaro National Monument. Environ. Exp. Bot. 34: 9-17.
Evans, L. S., V. A. Cantarella, L. Kaszczak, S. M. Krempasky, and K. H. Thompson. 1994b. Epidermal browning of saguaro cacti (Carnegiea gigantea): Physiological effects, rates of browning and relation to sun/shade conditions. Environ. Exp. Bot. 34: 107-115.
Evans, L. S., V. Sahi, and S. Ghersini. 1995. Epidermal browning of saguaro cacti (Carnegiea gigantea): Relative health and rates of surficial injuries of a population. Environ. Exp. Bot. 35: 557-562.
Evans, L. S., J. Sullivan, and M. Lim. 2001. Initial effects of UV-B radiation on stem surfaces of Stenocereus thurberi (organ pipe cacti). Environ. Exp. Bot. 46: 181-187.
Evans, L. S., A. J. Young, and Sr. J. Harnett. 2005. Changes in scale and bark stem surface injuries and mortality rates of a saguaro (Carnegiea gigantea) cacti population in Tucson Mountain Park. Can. J. Bot. 83: 311-319.
Evans, L. S., P. Boothe, and A. Baez. 2013. Predicting morbidity and mortality for a saguaro cactus (Carnegiea gigantea) population. J. Torrey Bot. Soc. 140: 247-255.
Gottschalk, P. G., and J. R. Dunn. 2005. The five-parameter logistic: a characterization and comparison with the four-parameter logistic. Anal. Biochem. 343: 54-65.
Kologlu, M., D. Elker, H. Altun, and I. Sayek. 2001. Validation of MPI and OIA II in two different groups of patients with secondary peritonitis. Hepato-Gastroent. 48: 147-151.
Le Gall, J.-R., S. Lemeshow, and F. Saulnier. 1993. A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. J. Amer. Med. Assoc. 270: 2957-2963.
Marshall, J. C., D. J. Cook, and N. V. Christou. 1995. Multiple organ dysfunction score: A reliable descriptor of a complex clinical outcome. Crit. Care Med. 23: 1638-1652.
O'Brien, K., D. Swann, and A. Springer. 2011. Results of the 2010 Saguaro census at Saguaro National Park. National Park Service, U.S. Department of Interior, Tucson, AZ. 49 p.
Steenbergh, W. F., and C. H. Lowe. 1977. Ecology of the Saguaro: II. Reproduction, germination, establishment, growth, and survival of the young plant. National Park Service Monograph Series Eight.
Truett, J., J. Cornfield, and W. Kannel. 1967. A multivariate analysis of the risk of coronary heart disease in Framingham. J. Chronic Dis. 20: 511-524.


Automated quantitative analysis of terminal tree branch similarity by 3D registration

Joseph Brucculeri∗

Department of Mechanical Engineering, Manhattan College

Abstract. A popular but unsubstantiated view is that tree branch morphologies are similar and are of a reiterative nature. To date there are no studies that document self-similarity among tree branches. The purpose of this research is to develop a tool to quantitatively compare the terminal ends of various tree branches. A program, 3D Branch Similarity Quantifier, was written and verified in MATLAB to automate the process of comparing any branch-like structure. Eighty-five terminal branch comparisons from five different tree species were analyzed. The program considered two tree branches at a time, scaled them for relative size, and then used 3D coordinate matrix manipulation to register the branches onto one another for best-fit analysis. Similarities were quantified by finding the Root-Mean-Square-Error (RMSE) between the two matrices of points. Two terminal branch shapes were considered. Simple Y Branches (a terminal main branch with a single side branch) all had RMSE values of less than 1.5, which indicated that the branches were similar. When Y+1 Branches (a Y Branch with an additional side branch) were compared, the RMSE doubled for most species, indicating a decrease in similarity. Overall, these programs were accurate and rapid for an analysis of branch similarities.

Introduction
A popular but unsubstantiated view is that tree branch morphologies are similar and are of a reiterative nature. Leonardo Da Vinci first observed these patterns and concluded that trees "conserve total cross-sectional area across every branching point" [1]. Other research has explored ratios between tree branch lengths and cumulative leaf areas [2]. To date there are no studies that document self-similarity among tree branches. The purpose of this research is to develop a tool that compares terminal tree branches and determines how geometrically similar terminals are.

A simple method of comparing the shapes of tree branches would be to physically hold the branches next to each other and observe any differences. This method, however, is not quantitative; accurate geometric data are needed to determine similarities between two branches. This study therefore automated the comparison process in MATLAB. The program developed, 3D Branch Similarity Quantifier, quickly and accurately determined branch similarities quantitatively.

Materials and Methods
A previous study provided geometric branch data of over forty tree species [3]. Branches from five species were analyzed (Table 1). Terminal ends of branch pairs were compared to determine branch similarities. For comparisons, branches had to be of overall similar shapes. Two basic shapes were compared in this study (Fig. 1). The first were 'Y Branches.' A Y geometry is a terminal stem with one secondary branch, while a Y+1 geometry is a terminal stem with two secondary branches.

∗ Research mentored by Lance Evans, Ph.D., and Zahra Shahbazi, Ph.D.



Table 1. Number of branches and comparisons made within each of the five species analyzed

Species            Y Branches   Y+1 Branches   Number of Comparisons
Cornus florida     14           6              29
Zelkova serrata    7            5              18
Tilia americana    8            6              20
Acer palmatum      9            0              9
Morus rubra        6            0              9

Figure 1. Example Y and Y+1 Point Branches

A user-friendly automated program, 3D Branch Similarity Quantifier, was written, tested, and modified in MATLAB. MATLAB was used for its ability to manipulate matrices and plot data, and 3D image registration techniques are applicable with the use of matrices in MATLAB. "Image registration is the process of transforming the different sets of data into one coordinate system" [4]. The automated program is outlined in Fig. 2. The program was developed so that no MATLAB programming experience is needed to operate it; anyone with the required geometric branch data can run the program and compare terminal branches.

Figure 2. 3D Branch Similarity Quantifier flow chart: User Inputs Information → Prepare Branches for Comparison → Write Point Information Matrices → Branch Quantitative Comparison.

The first part of the program prompts the user to enter the branch data for the two terminal ends being compared. The information is contained in a specific format, shown in Table 2. The user simply runs the program and a dialogue window opens, asking for input. The user must enter the file name, the two branches to be compared using column H, and whether or not to scale.

Table 2. Excel spreadsheet column format

Column(s)   Contents
A           Point Number
B, C, D     XYZ Point Coordinates
F           Point Connections
G           Terminal Branch Number
H           Points forming branch
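One plausible form of this input dialogue is sketched below (assumed; the published interface is not reproduced in the paper, and the file name and branch numbers are hypothetical).

% Collect the Excel file name, the two branch numbers (column H),
% and the scaling choice from the user in one dialogue window.
answer = inputdlg( ...
    {'Excel file name:', ...
     'First terminal branch number (column H):', ...
     'Second terminal branch number (column H):', ...
     'Scale branches? (1 = yes, 0 = no):'}, ...
    '3D Branch Similarity Quantifier', 1, {'branches.xlsx', '1', '2', '1'});

fileName = answer{1};
branch1  = str2double(answer{2});
branch2  = str2double(answer{3});
scaleOpt = str2double(answer{4}) == 1;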



After that, the first automated function of the program, Branch Matrices Maker, begins. This function creates two matrices for each branch. The first, the Point Information Matrix, is an N×6 matrix, where N equals the number of rows (points) and six is the number of columns. Each row contains information on one of the N points: the first column contains the point number, the next three columns contain XYZ coordinates, and the last two contain tags that are vital to the registration process. Two tags are given to each point. Both tags are whole numbers if the point is at the start or the end of the branch (connects to only one other point) or represents a split point (connects to at least three points); these points are called Whole Number Points. All other points, Zero Points (points that connect to exactly two other points), get a '0' as both tags. The first tag, in the fifth column, is the Level Tag, which numbers each point along all possible paths to every end point. The second tag, in the sixth column, is the Order Tag, which simply renumbers the points starting with 1. An example branch with tagged points can be seen in Fig. 3. The second matrix made, the Connection Matrix, is an N×4 matrix that orders the points in the order in which they are connected along all paths forming the branch; the format of its columns is identical to the first four columns of the Point Information Matrix. The exact process of these functions is best explained through the pseudo code displayed in Table 3.
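As an illustration of this structure (a hypothetical five-point Y branch, not data from the study), the Point Information Matrix might look as follows; the tag values follow one reading of the rules just described.

% Columns: point number, X, Y, Z, Level Tag, Order Tag.
% Whole Number Points (branch ends and split points) get whole-number
% tags; Zero Points (exactly two connections) get 0 for both tags.
PointInfo = [ 1   0.0  0.0  0.0   1  1;    % start of branch
              2   0.0  1.0  0.0   0  0;    % zero point along the stem
              3   0.0  2.0  0.0   2  2;    % split point (three connections)
              4  -0.5  2.8  0.0   3  3;    % end of the side branch
              5   0.3  3.0  0.0   3  4 ];  % end of the main branch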

Figure 3. Example of how points are tagged on a branch by 'Level' (circle) and 'Order' (square)

Figure 4. Example showing white artificial points added with respect to the black zero points

The next function, Branch Registration Prep, is vital in order to have the point branches quantitatively compared. The function adds artificial points on the second branch for every Zero Point where necessary; the process is illustrated in Fig. 4. This is done for all Zero Points since both branches need an equal number of points. These new points are inserted into the Point Information Matrices in proper order, so that each is in the same row as its corresponding Zero Point in the other matrix. The pseudo code in Table 4 shows how this was accomplished. The last function of the 3D Branch Similarity Quantifier, Branch Comparison, finally performs registration (a geometric comparison) of the two branches and quantifies dissimilarities. First it scales the branches to the same relative size if the user prompted it to do so in the beginning.



Table 3. Pseudo code for Branch Matrices Maker
Input: Excel File Name, Point Numbers for each branch
Output: Point Information and Connection Matrices
1. for all Point Numbers in Branch 1
2.   Find the Point Number and its XYZ coordinate data in columns A:D of the Excel File
3.   Add the point number and XYZ coordinate data into a matrix called PointInfo1
4. end
5. Add 2 empty columns to PointInfo1 so that it is an Nx6 matrix
6. Repeat lines 1-5 for all points in Branch 2 and store the information in PointInfo2
7. Extract all data from column F of the Excel File and define it as Connections
8. for all connections
9.   if two points in a connection are both in Branch 1
10.    Add the Point Number and XYZ data of both points to a matrix called Connection1
11.  elseif two points in a connection are both in Branch 2
12.    Add the Point Number and XYZ data of both points to a matrix called Connection2
13.  end
14. end
15. Tag the first point of branch 1 in the fifth and sixth columns of PointInfo1
16. for the rest of the points in PointInfo1
17.   if the point repeats only once or more than twice
18.     Find the first point it is connected to in Connection1 that is tagged with a whole number
19.     Tag the point as the last tag plus 1 in the fifth column of PointInfo1
20.   else
21.     Tag the point as 0 in the fifth column of PointInfo1
22.   end
23.   if the point has a whole number tag in the fifth column of PointInfo1
24.     Renumber those points starting with 1 in the sixth column of PointInfo1
25.   else
26.     Tag the point again as 0 in the sixth column of PointInfo1
27.   end
28. end
29. Repeat lines 15-28 for PointInfo2

The method for scaling involves finding the optimal ratio of branch lengths, called the Scaler; one branch's point matrix is multiplied by the Scaler. The program takes the two extended Point Information Matrices and registers them by first using an existing MATLAB function, rot3dfit [4, 5]. The function outputs the transformed point matrix, the transformation matrix, and a calculated error; the Root-Mean-Square-Error (RMSE) was used to quantify each branch pair comparison overall. After the comparison was quantified, the two registered point branches were plotted for observation. The pseudo code in Table 5 explains the process of this function.
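For concreteness, a minimal registration sketch is given below (not the published program; rot3dfit is a MATLAB File Exchange function [5] whose internals are not reproduced in the paper). It performs a rigid least-squares fit of one point set onto another via the singular value decomposition and then computes the RMSE used to score a comparison; the point arrays and the path-length Scaler are hypothetical.

% Hypothetical extended point arrays (N-by-3), rows already paired
P1 = [0 0 0; 1 0 0; 2 1 0; 2 -1 0];
P2 = [0 0 0; 1 0.1 0; 2 1.1 0; 2 -0.9 0.1];

% Scaler: ratio of total path lengths (one reading of "ratio of branch lengths")
len = @(P) sum(sqrt(sum(diff(P).^2, 2)));
scaler = len(P2) / len(P1);
P1s = P1 * scaler;

% Rigid least-squares fit (Kabsch algorithm), as rot3dfit does internally
c1 = mean(P1s, 1); c2 = mean(P2, 1);
[U, ~, V] = svd((P1s - c1)' * (P2 - c2));
if det(V * U') < 0, V(:, 3) = -V(:, 3); end   % guard against reflection
R = V * U';
P1tform = (P1s - c1) * R' + c2;               % registered branch 1

% Error measures: RSS as a root-sum-of-squares, RMSE as in Table 5
d    = sqrt(sum((P1tform - P2).^2, 2));       % point-to-point distances
RSS  = sqrt(sum(d.^2));
RMSE = RSS / sqrt(size(P1, 1));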

Results
3D Branch Similarity Quantifier was first tested and verified before actual tree branches were compared. Two point arrays of artificial Y branches of congruent geometry were written.



Table 4. Pseudo code for Branch Registration Prep
Input: Point Information Matrices
Output: Extended Point Information Matrices
1. for all zero tagged points in PointInfo1
2.   Find the next whole number tagged point it is connected to and define it as Point1
3.   Define its tags as OrderTagA and LevelTagA
4.   Define Point2 as the nearest point in connection with level tag one less than LevelTagA
5.   Define its order tag as OrderTagB
6.   Define DistanceA as the distance between the point and Point1
7.   Define DistanceB as the distance between the point and Point2
8.   Define PercentDistance as DistanceB / (DistanceA + DistanceB)
9.   Define PointA and PointB as the points in PointInfo2 with OrderTagA and OrderTagB
10.  Create a vector PathRows of all the zero tagged points connecting PointA2 to PointB2
11.  Define a vector DistanceVector as the distance between the first point in PathRows and PointA2
12.  for all points in PathRows
13.    Define distance as the distance between the point and the next point in PathRows
14.    Define AddDistance as distance + the last number in DistanceVector
15.    Add AddDistance to DistanceVector
16.    Add OrderTagB to PathRows
17.    Redefine PathRows as its transpose
18.  end
19.  Define PointLength as the last number in DistanceVector * PercentDistance
20.  Define SDistance as the smallest value in DistanceVector that is greater than PointLength
21.  Define PathRowPointA2 as the row in DistanceVector where SDistance occurs
22.  Define PointA2 as the point in the PathRowPointA2 row of PathRows
23.  Define PointB2 as the next point in PathRows
24.  Define AddLength as SDistance - PointLength
25.  Define Vector as PointB2 - PointA2
26.  Define UnitVector as the unit vector of Vector
27.  Define NewPoint as PointA2 + AddLength * UnitVector
28.  Add NewPoint's XYZ coordinates to a new matrix, AddtoInfo2, in the first 3 columns
29.  Add PointA2's point number in the fourth column
30. end
31. Repeat lines 1-30 for PointInfo2 to create matrix AddtoInfo1
32. for all points in AddtoInfo1
33.   Insert a row in PointInfo1, right before the point from the fourth column, containing the point's XYZ data with a point number and tags of 0
34. end
35. Repeat lines 32-34 for AddtoInfo2 and add the data in new rows in PointInfo2

They were compared through the program, and an RMSE value of 0 was returned. Because the two branches are identical, an RMSE value of 0 was expected, and this verified the accuracy of the tool's ability to quantitatively compare branch geometries. The overall purpose of this research was to study the reiterative nature of paired tree branch terminals. Eighty-five samples were analyzed with the 3D Branch Similarity Quantifier, with and without scaling. All RMSE values from the branch pairs were documented.



Table 5. Pseudo code for 3D Branch Similarity Quantifier
Input: Extended Point Information Matrices, Connection1, and ScaleOpt
Output: Point arrays of both branches after registration, Connection1tform, MAE, and RMSE
1. Define PointArray1 and PointArray2 as the XYZ coordinate data from PointInfo1 and PointInfo2
2. if scaling then
3.   Scaler = BranchScaler(PointInfo1, PointInfo2)
4. else
5.   Define Scaler as 1
6. end
7. Define PointArray1Scaled as PointArray1 * Scaler
8. Use the rot3dfit function with PointArray1Scaled and PointArray2 and obtain the rotation matrix (Rot), translation vector (Trans), transformed points of branch 1 (PointArray1tform), and RSS
9. Define Connection1Array as the points of Connection1
10. Transform Connection1Array into Connection1tform with Rot, Trans, and Scaler
11. Display the Scaler value
12. Define RMSE as RSS / square root of the number of points in PointArray1
13. Display the RMSE value
14. Create a scatter plot of the points in PointArray1tform in magenta
15. Create a scatter plot of the points in PointArray2 in cyan
16. Plot lines between connections using Connection1tform in magenta
17. Plot lines between connections using Connection2 in cyan
18. Equate the axes

An overall RMSE was given to each of the species for both unscaled and scaled Y Branches and Y+1 Branches. Values of RMSE then required interpretation to determine the degree of dissimilarity among branch pairs. By relating the plotted registered branches to their outputted RMSE values, a meaningful scale of RMSE intervals was developed. A description of the set RMSE intervals can be seen in Table 6. Fig. 5 shows plotted registered point branches and their corresponding calculated RMSE values. Table 7 shows overall similarities for branch comparisons of the five species.

Figure 5. Plots of a scaled registered Y Branch (left; RMSE = 0.46) and Y+1 Branch (right; RMSE = 1.01) with calculated RMSE values.



Table 6. Description for RMSE intervals

0.0 – 0.5 (Identical): Branches show alignment throughout their entirety. Branches contain nearly identical features.
0.5 – 1.0 (Very Similar): The majority of the branch's side branches are aligned, or the entirety of the branch is nearly aligned. Branches share some similar features.
1.0 – 1.5 (Similar): Some side branches are aligned. The rest of the branch contains some similar features but is not aligned, due to variation in side branch angle or position.
1.5 – 2.0 (Fairly Similar): Few parts of the branch are aligned. Branches may contain some similar features but are not aligned, due to variation in side branch angle and placement but mostly due to length difference. Similarity is still observable.
2.0 – 3.0 (Barely Similar): Branches may or may not show alignment. Branches either do not share similar features or have opposing features (i.e., bends in opposite directions). Branches might also be distinctly different sizes. Very few similarities are observable.
> 3.0 (Distinct): No signs of similarity.

Table 7. Overall branch similarity (RMSE). Japanese Maple and Red Mulberry samples did not contain Y+1 branches.

Species              Unscaled Y Branch   Scaled Y Branch   Unscaled Y+1 Branch   Scaled Y+1 Branch
Dogwood              1.81                1.39              3.38                  2.86
Zelkova              1.39                1.41              1.97                  1.86
American Basswood    2.08                1.101             2.58                  1.73
Japanese Maple       4.17                1.07              -                     -
Red Mulberry         4.17                1.18              -                     -

Discussion
The purpose of scaling the branches to a similar size is so that their geometric features can be compared more accurately. As the data show, the majority of the error in similarity came from the more distinct geometric features, such as side branch starting angles, bending angles, and length ratios. Scaling decreased RMSE for a majority of samples, but not all; in some cases, branch pairs had features so distinct that scaling did not improve the RMSE. Overall, the results show that scaled Y Branches had RMSE values of less than 1.5, which indicates they are 'similar.' However, Y Branches are at the lowest level of geometric complexity of terminal branches; branches are composed of numerous side branches. When only one additional side branch was considered, RMSE values doubled, indicating the branches were 'fairly to barely similar.' The increase of RMSE from Y Branches to Y+1 Branches was consistent among all species and is expected. If additional side branches were considered and branch complexity increased, RMSE values would increase further, indicating that complex branches would be very dissimilar.

The use and accuracy of RMSE was also studied. RSS was first used since it was a programmed output from the rot3dfit function.



RSS was considered to be inaccurate since it does not average the errors among all points; RSS is simply a sum of errors among individual points, so as more points were considered on a branch, the calculated error increased. Mean-Absolute-Error (MAE) was also calculated. MAE is a better fit when one does not want the error skewed by outliers. However, no points are outliers in this study; all points are accountable. Therefore, RMSE was the best fit for our analysis because RMSE gives more weight to outliers by squaring the difference between points [6].

Making mathematical measurements by hand to compare differences between two branches about 26 cm long each can take up to 3 hours. With 3D Branch Similarity Quantifier, our method of computation compared and plotted 26-cm-long branches of 19 points in 22 seconds. Terminal branch sizes ranged from 7 cm to 40 cm in this study. The branches can be compared as-is, or scaled to the same relative size. Where one branch is three times the size of the other, quantitatively comparing the branches by hand requires much more time and a new level of difficulty; 3D Branch Similarity Quantifier can compare such branches with the use of scaling at negligible extra time, or, if scaling the branches down, even less time.

In summary, our program and operations showed that simple Y Branches had RMSE values of less than 1.5, which indicated that such branches were similar. However, when an additional side branch was considered there was a decrease in similarity. Overall, these programs were accurate and rapid for an analysis of branch similarities.
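For reference, the sketch below (synthetic distances, not data from the study) illustrates the point made in this section: a summed error such as RSS grows with the number of points, while RMSE and MAE stay stable.

rng(0);
for N = [10 20]                          % double the number of points
    d = 0.5 + 0.1 * randn(N, 1);         % point-to-point distances
    sumSq = sum(d.^2);                   % summed error: roughly doubles
    RMSE  = sqrt(mean(d.^2));            % stays stable; weights outliers
    MAE   = mean(d);                     % stays stable; robust to outliers
    fprintf('N = %2d: sum sq = %5.2f, RMSE = %4.2f, MAE = %4.2f\n', ...
            N, sumSq, RMSE, MAE);
end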

Acknowledgements The author is grateful to the Linda and Dennis Fenton ’73 endowed biology research fund and the Catherine and Robert Fenton endowed Chair for financial support of this research.

References
[1] Aratsu, R., "Leonardo Was Wise – Trees Conserve Cross-Sectional Area Despite Vessel Structure." J. Young Investigators 1:1-5 (1998)
[2] Honda, H., Fisher, J., "Ratio of tree branch lengths: The equitable distribution of leaf clusters on branches." Proc. Nat. Acad. Sci. 76:3875-3879 (1979)
[3] Kaminski, A., Mysliwiec, S., Shahbazi, Z., Evans, L., "Stress Analysis Along Tree Branches." Amer. J. Mech. Eng. 3:32-40 (2015)
[4] Holia, M. and Thakar, V., "Image registration for recovering affine transformation using Nelder Mead Simplex method for optimization." Int. J. Image Processing 3:218-228 (2009)
[5] "Rigid Transformation," Matlab Newsgroup, URL: https://www.mathworks.com/matlabcentral/newsreader/view_thread/97093 [Accessed 13 June 2016]
[6] Chai, T. and Draxler, R. R., "Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in literature." Geosci. Model Dev. 7:1247-1250 (2014)


Leaf venation patterns and the distribution of water within leaves

Jorge Gonzalez∗

Department of Biology, Manhattan College

Abstract. The purpose of this study was to compare total leaf area to secondary, tertiary, and quaternary areas and determine the relationships between them. The leaves were obtained from the Manhattan College campus and identified using several sources. Each leaf species was sampled twice and photographed in order to take measurements using a computer program; a LI-COR LI-3100 Leaf Area Meter was used to check the measurements of each leaf. Using Microsoft Paint (Microsoft Inc.), a line was drawn to trace the outline of the entire leaf, as well as each secondary, tertiary, and quaternary area. For tertiary and quaternary areas, the leaves were placed in alcohol for several days and then in a 10% NaOH solution to remove pigments. The leaves were then placed in safranin to dye all the veins for visual purposes. The computer program Image J was used to take all measurements for each leaf. Results showed that secondary areas are scaled to total leaf area: as the total leaf area increases, secondary area increases. Tertiary and quaternary areas are not scaled to total leaf area, but they are similar among species. In the future, xylem cells will be studied in order to determine their relationship with the areas that they serve.

Introduction
Plants are very diverse (Krupnick, 2001), yet at a casual glance they may appear similar, and so may their leaves. Upon close inspection, however, leaf venation differs markedly among species (Fig. 1).

Figure 1. Leaf images of four tree species to show variation in venation among leaves. A= Tilia americana; B=Viburnum lentago; C= Populus deltoides; D= Hydrangea macrophylla.

The purpose of leaf veins is to transport water and nutrients to leaf cells. For example, water from stems goes from the petiole to the main vein and then to smaller and smaller veins, so that all cells of the leaf obtain the water needed for normal function.

∗ Research mentored by Lance Evans, Ph.D.



At a casual glance, leaf venation is not similar among primary and secondary veins (Fig. 2), but venation should be similar among the smallest vein orders, since water must be distributed to the cells closest to these smaller veins. In this study, areas associated with secondary, tertiary, and quaternary veins (Figs. 2 - 5) were investigated.

Figure 2. Diagram of veins and areas of leaves that were used to bisect leaves into separate areas. Note that each secondary area is served by one secondary vein. Note that each tertiary area is served by one tertiary vein.

Figure 4. Image of a leaf with a variety of veins and two tertiary areas.

Figure 3. Image of a leaf with a variety of veins and a secondary area.

Figure 5. Image of a leaf with tertiary veins marked with numerous quaternary areas.

The following hypotheses were investigated:
1. Secondary areas (areas served by secondary veins) are scaled to entire (total) leaf areas.
2. Tertiary areas (areas served by tertiary veins) are similar among all species.
3. Quaternary areas (areas served by quaternary veins) are similar among all species.



Materials and methods

Sampling
For each plant species chosen, images of whole plants and individual leaves were saved for analysis. At least two leaves from each species were sampled, usually one small and one large leaf. For each leaf, the petiole and half of the leaf were cut, and the leaf was placed on a white background with a ruler for measurement purposes. Photographic images were taken of each leaf to identify secondary areas and tertiary areas (Figs. 3, 4).

Sample Processing
In order to view the veins, leaves were decolorized with 95% ethanol for three days. If additional decolorizing was necessary, leaves were placed in 10% NaOH for two hours (Jensen, 1962). The leaves were then washed with water, and bleach was added to remove any remaining pigments. Each leaf was placed into the dye safranin (Jensen, 1962), which stained all veins; most importantly, tertiary and quaternary veins were stained, which were not visible with chlorophyll present. The leaves were chosen from plants on the Manhattan College campus during the summer of 2016. Six plant species were chosen for analysis (Table 1). Each leaf species was identified using two sources (Kershner et al., 2008; www.tropicos.org).

Table 1. Species tested

Species                                Total Leaf    Tertiary Areas   Number Tertiary   Quaternary Area   Number Quaternary
                                       Area (cm2)    (mean) (cm2)     Areas (total)     (mean) (cm2)      Areas (total)
Hydrangea macrophylla (Thunb.) Ser.    92.2          0.39             236               0.004             23,000
Fraxinus pennsylvanica Marshall        94.8          0.43             220               0.002             47,400
Quercus alba Marshall                  101           0.12             845               0.002             50,700
Viburnum lentago L.                    125           0.29             430               0.015             8,320
Tilia americana L.                     128           0.36             356               0.002             64,000
Populus deltoides Rydb.                141           0.29             487               0.004             38,100

Measurements
First, the leaves were measured using the LI-COR LI-3100 Leaf Area Meter. For each leaf, once each image was placed in Microsoft Paint (Microsoft Inc.), a line was drawn to trace the outline of the entire leaf (Fig. 2) and of each secondary, tertiary, and quaternary area (Fig. 5). Thereafter, the areas associated with secondary, tertiary, and quaternary veins were traced (Fig. 2). The computer program Image J (National Institutes of Health, https://imagej.nih.gov/ij/) was used to take all measurements: total leaf area, secondary areas, tertiary areas, and quaternary areas. Total leaf area was measured from the base of the primary vein around the entire leaf. Each vein serves the area extending halfway toward the neighboring vein, and all secondary veins could be identified because they attach to the primary vein (Figs. 2, 3).



For tertiary and quaternary veins, the leaf was placed under a dissecting microscope and viewed at 16× magnification to get a closer and more accurate view of the veins (Fig. 4). To see quaternary areas, the magnification was set to its maximum of 34×, at which quaternary areas were clearly seen (Fig. 5). Images of the leaves were taken under the microscope for tertiary and quaternary areas.

Results

As expected, secondary areas were scaled to whole leaf areas (Fig. 6): as total leaf area increased, mean secondary area increased. Furthermore, as total leaf area increased, the number of secondary areas decreased (Fig. 7).

Figure 6. Relationship between secondary areas and total leaf areas for leaves of six plant species. The equation of the line is y = 0.109x − 8.78 with an r2 value of 0.81.

Figure 7. Relationship between number of secondary areas and total leaf areas for leaves of six plant species. The equation of the line is y = −0.77x + 125 with an r2 value of 0.87.

Tertiary areas were not scaled to total leaf area; rather, as expected, mean tertiary areas were quite similar across all six species (Table 1). The mean tertiary areas ranged from 0.12 cm2 to 0.43 cm2, while the number of tertiary areas varied among the six species, from 220 to 845, depending on the total leaf area. A similar relationship was found for quaternary areas: mean quaternary areas were very similar among species, ranging from 0.002 cm2 to 0.015 cm2, but the number of quaternary areas was distinct in each species, ranging from about 8,000 to 64,000 depending upon the total leaf area (Table 1). Histology of xylem cells showed that a primary vein has more xylem cells than secondary and tertiary veins (Fig. 8).



Figure 8. Primary vein (left), secondary vein (middle), and tertiary vein (right). The numbers of xylem cells were 163 for the primary vein, 49 for the secondary, and 15 for the tertiary.

Discussion
Studying these areas helps us understand how vein-served areas are distributed among all six species. As shown, secondary areas are scaled to total leaf area, and the number of secondary areas is inversely and linearly related to total leaf area (Figs. 6, 7). This shows that as a leaf grows larger, each secondary area increases but the number of secondary areas decreases. Measuring tertiary areas showed that these areas range from 0.12 to 0.43 cm2 and were similar in all six species. This tells us that across distinct species the tertiary areas are similar, but the number of tertiary areas within a leaf is very diverse. Quaternary areas show the same relationship: the areas range from 0.002 cm2 to 0.015 cm2, and the number of quaternary areas is large, ranging from about 8,000 to 64,000 areas. These counts were obtained by dividing the total leaf area by the average tertiary or quaternary area, which indicates how the water supply is partitioned among the areas of the leaf.

Studying these areas and the xylem cells in the future can help us understand the distribution of water within a leaf for distinct species. In Fig. 8, the numbers of xylem cells in primary, secondary, and tertiary veins reveal that a primary vein contains the most xylem cells; thus, the primary vein has to deliver water to a larger area. When water is transported into the primary vein, it is then distributed to all secondary, tertiary, and quaternary veins. For future research, comparing the number of xylem cells to the area that a vein serves can tell us how water is distributed among leaves.
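As a check of this division, using the Table 1 values for Hydrangea macrophylla:

number of tertiary areas = (total leaf area) / (mean tertiary area) = 92.2 cm2 / 0.39 cm2 ≈ 236,

which matches the count of tertiary areas reported in Table 1.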

Acknowledgements The author is indebted to the Catherine and Robert Fenton Endowed Chair to Dr. Lance Evans for financial support for this research.



References
Kershner, B., Matthews, D., Nelson, G. Field Guide to Trees of North America. Sterling Publ. Co., New York. (2008)
Krupnick, G. "Centres of Plant Diversity: Introduction." Department of Botany, National Museum of Natural History. http://botany.si.edu/projects/cpd/introduction.htm (2001)
Jensen, W. A. Botanical Histochemistry: Principles and Practice. W. H. Freeman, San Francisco, CA. (1962)


Development of a finite element model of tree branches with variable leaf characteristics

Jesse Jehan∗

Department of Mechanical Engineering, Manhattan College

Abstract. Plants show a great variety in geometry and material properties. Therefore, if trees are to be computer modeled for stress analyses, a unique model is needed for each tree. A program, Immediate-TREE, has previously created Finite Element (FE) models of tree branches. However, Immediate-TREE only modeled branches with leaves represented as point loads. Two important aspects were missing from that model: first, the torsional stresses added by a leaf were ignored; second, the bending moments induced on the branch by the leaves were inaccurately represented. The current research improved Immediate-TREE by placing leaf and petiole models onto real branches. After program development, results show that (1) actual leaves produce 10 percent higher effective stresses in branches than point loads, (2) the model was sensitive, since a doubling of leaf mass doubled the effective stresses in the branch, and (3) future applications would be to adjust the program to allow various leaf spacings along the branches, a variety of leaf shapes and masses, and the effects of external loads such as snow loads.

Introduction
It is easy to observe that trees show a wide variety of physical and mechanical features throughout the thousands of known species. Though there are obvious similarities between species, there is no obvious reason to model them as the same. In fact, it is beneficial to model each case uniquely, as making generalizations will introduce unwanted error. The purpose of this research was to modify Immediate-TREE (Keane et al., 2016), a program previously created to model various tree branches (varying geometries and mechanical properties). Immediate-TREE processes branch data to produce a model of branch geometries without leaves present. The current research adds leaves to Immediate-TREE to create more complete branch models.

Immediate-TREE is a computer program created in MATLAB that takes an Excel file containing mechanical information and processes that information, creating a Python file which can be interpreted by Abaqus, the Finite Element Analysis (FEA) program. Previously, Immediate-TREE did not take into account a three-dimensional leaf model; instead of a leaf model, a simple point load was created in the place of a leaf. This simplification leads to errors: leaves create torsional forces that are misrepresented by point loads alone. A comparison was done between the previous method, in which point loads represented leaves, and the current method, in which leaf blades and petioles are modeled (Figs. 1 and 2). In general, the presence of petioles and leaves gave approximately 10% higher effective stress values for equivalent loading than when only point loads were used. From this we can conclude that modeling petioles and leaves should give more accurate effective stress values than point loads alone.

∗ Research mentored by Lance Evans, Ph.D., and Zahra Shahbazi, Ph.D.



Figure 1. Image of a branch of Betula nigra used for analysis. The terminal portion acropetal to the black line with 26 leaves (terminal 0.28 m) was analyzed.

Figure 2. Relationship between effective stress versus distance along a branch of Betula nigra. Line A (no leaves) represents data along the branch with no leaves considered. Line B (point load) represents data along the branch in which the masses of the leaves were considered as point loads on the branch. Line C (petiole + point load) represents data along the branch in which masses were placed as point loads on petioles. Line D (3D leaves) represents data along the branch in which leaves were modeled as 3D objects and placed on petioles. Distance is calculated as distance from the base of the branch towards the tip.

Fig. 3 shows a flow chart of the overall process of using geometric tree branch data to produce a model of the branch with actual leaves attached. Specifically, branch data were used by Immediate-TREE to produce a Python file named twig.



Automating this process has many benefits. First, human error is eliminated from the most technical part of the process, the FE modeling. Second, calculations and models can easily be modified. This research explores the mechanical differences between the modeling methods described and shows how the developed three-dimensional model predicts stresses and stress patterns with greater accuracy.

Figure 3. Overall process of creating a 3D terminal branch model ("3D Branch") with leaves attached. Both versions of Immediate-TREE use data from the same Excel file (Table 1). Immediate-TREE version 1 was taken from Keane et al. (2016) and remains unchanged. Immediate-TREE version 2 is the process and code to create "Petiole Creator" and "Leaf Code". "Excel file+" is the original Excel file with petiole coordinates added (Table 1). Python code "Twig" is created by Immediate-TREE version 1 for each branch. Python code "Leaf" is created by Immediate-TREE version 2 for each branch. Both Python codes are combined to make "Twig+Leaf", which is the file that Abaqus uses as input to create the final branch model "3D Branch".

Methods and Materials

Overall approach

The overall purpose of this study was to create a model to determine tree branch stresses with leaves present. The first step was to determine the best method to model leaves onto stems in Abaqus. Modeling leaves onto branches presented two challenges: first, to accurately represent leaves in Abaqus, and second, to attach the leaf petioles to stems. The next step was to evaluate the process of leaf modeling and to determine whether the created preliminary leaf models would produce accurate and expected stress values on branches. The preliminary stress values



were similar to those of Keane et al. (2016), indicating we were on the right path. The next step was to add the preliminary leaves to the Keane et al. (2016) model using Abaqus; the results were similar. Abaqus data from Keane et al. (2016) were used to determine an approximate relationship between effective stress and distance along a branch of Betula nigra (Figs. 1 and 2).

In order to determine more accurate estimates of the effective stresses caused by actual petioles and leaves, Immediate-TREE was modified to use petiole and leaf models instead of point loads. Petioles were added based on images of the original branch. Petiole angle data were never recorded and therefore were not available for this project. Similarly, leaf blade positioning (angles) was not recorded and was therefore based on pictures. These models were created by hand without any automation, and leaves were added individually into Abaqus. This preliminary process indicated that it was possible to integrate leaf modeling into Immediate-TREE. Coding in MATLAB then began to integrate leaf modeling into Immediate-TREE; the first step in this process was to create dimensions for the petioles (lengths and positioning).

To reiterate, the overall purpose of this project is to create branch models with leaves present to determine branch stresses. The flow chart for this process is shown in Fig. 3. Previously, Immediate-TREE was developed to output a Python file which is used by Abaqus to calculate branch stresses, considering leaves only as point loads along the branch. Immediate-TREE (Keane et al., 2016) used data of X, Y, Z coordinates, diameters, masses, and other mechanical properties. Immediate-TREE is written in MATLAB and produces a Python file, herein called twig (Fig. 3).

Creating Petioles (Petiole Creator)

The first program of this study was Petiole Creator, which used only X, Y, Z branch data (Table 1 in Appendix). LeafCode uses data from Petiole Creator, which is added to twig to produce the twig+leaf Python file. This Python file is input into Abaqus to create a FE model of the branch with leaves, called leafbranch.

To model a tree branch manually, the researcher must first collect the tree branch of interest, then take detailed measurements and write these measurements into an Excel file. The researcher can then operate a FE program such as Abaqus and use the measurements to create a working model of the collected tree branch. Finally, once all the modeling is done, the stress analyses can be performed by the FE program. Immediate-TREE automates this modeling process: MATLAB is utilized as the connection between data collection and stress analyses. The MATLAB programs take data from the Excel file, change those data into Python code, then output one Python file which Abaqus takes as input.

One of the main purposes of this research is to create additions to the pre-existing Immediate-TREE MATLAB code, increasing its functionality to include leaf models rather than point loads. To achieve this, the code was amended with two additional MATLAB files. The first reads the Excel file containing three-dimensional coordinates for points along the branch; it then adds coordinates to this Excel file which serve as the start and end points for the leaves' petioles.



This MATLAB program is designed to create these points so that petioles will be in plane with the branch and at 45-degree angles outward from the branch. The second MATLAB program is designed to create the Python code that Abaqus will read. This program has a multi-step process for creating the required leaves and adding them to the Abaqus model. To demonstrate how the programs function, pseudocode is provided for both programs. This pseudocode is designed to walk a user through the steps that each program takes to serve its purpose without going into specific details.

Operation of "Petiole Creator"

The purpose of "Petiole Creator" was to create petioles and place them on the branches, initially at 45-degree angles. The following code has a loop configuration that is used to complete that process. The code loop uses item numbers 2 through 13, and several iterations may occur before the process is complete (Table 1 in Appendix):

1. There is a counter called 'J'. For counter 'J', if 'A' is the last point of the branch terminal, the program creates a "Pvector" (a vector) from point A and point B. If 'A' is at the terminal, the instructions on line 1 are initiated; if 'A' is not at the terminal, the code on line 1 is not initiated and the program goes to line 2.
2. Line 2 creates "Pvectors" from point B and point C initially; thereafter, "Pvectors" from point C and point D are established. This process iterates for all branch points.
3. All remaining steps in this program are used to manipulate "Pvector" to create petioles. Each loop of the program creates petioles for points A through D. Line 3 finds the distance between the two points in the vector.
4. Unit vectors are created one at a time. To make the unit vector of each "Pvector", "Pvector" is divided by the magnitude found in line 3. This preserves the direction of "Pvector".
5. The aim of line 5 is to create default X, Y, and Z values that are later modified to become the end points of each petiole.
6. The aim of line 6 is to calculate Z values for the end points of each petiole.
7. The aim of line 7 is to create a "Qvector", which holds the data for each petiole based upon the X, Y, and Z values.
8. The aim of line 8 is to find the distance between the tip of each petiole and its junction on the stem using "Qvector". This is done for all petioles.
9. The aim of line 9 is to create a unit vector from "Qvector": "Qvector" is divided by the magnitude found in line 8, preserving its direction. This process is similar to the use of "Pvector" in line 4.
10. Line 10 completes the creation of the petiole from "Qvector".
11. The "Qvectors" are placed in a matrix and printed to an Excel file, where they can then be used by Abaqus to create petioles on branches. Each loop of the program places one petiole in the Excel file.

A minimal sketch of the geometric step behind this loop is given below.
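The following Python sketch shows one way to compute a petiole end point at a 45-degree angle from the local branch direction. It is a minimal reconstruction, not the program's actual MATLAB code: the array name branch_points, the petiole_length parameter, and the choice of a horizontal perpendicular as the "outward" direction are all our assumptions.

import numpy as np

def petiole_endpoints(branch_points, petiole_length=0.1):
    """Return one petiole end point per branch segment, angled
    45 degrees away from the local branch direction (a sketch of
    the Pvector/Qvector steps in Table 1 of the Appendix)."""
    endpoints = []
    for j in range(1, len(branch_points)):
        a, b = branch_points[j - 1], branch_points[j]
        p = b - a                               # "Pvector" between points
        p_unit = p / np.linalg.norm(p)          # unit vector (lines 3-4)
        # Pick a direction perpendicular to the branch (assumption:
        # horizontal); mixing it equally with the branch direction
        # gives a vector at 45 degrees to both.
        side = np.cross(p_unit, np.array([0.0, 0.0, 1.0]))
        if np.linalg.norm(side) < 1e-9:         # branch segment is vertical
            side = np.array([1.0, 0.0, 0.0])
        side = side / np.linalg.norm(side)
        q_unit = (p_unit + side) / np.linalg.norm(p_unit + side)  # "Qvector"
        endpoints.append(b + petiole_length * q_unit)
    return np.asarray(endpoints)

Called with an (n, 3) array of branch coordinates read from the Excel file, this returns the petiole end points that "Petiole Creator" appends to produce "Excel file+".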



Creating Branches with Leaves (LeafCode)

"Petiole Creator" edits the original Excel file, and the new Excel file is the input to "LeafCode". The purpose of "LeafCode" is to use the Petiole Creator data to output Python code (Table 2 in Appendix). This Python code is used by Abaqus to make a branch model that consists of a single branch with its leaves. The following code has a loop configuration that is used to complete that process.

Operation of "LeafCode"

1. The purpose of line 1 is to read the branch + petiole Excel file (after Petiole Creator edits the original Excel file).
2. Line 2 creates a matrix that catalogues the coordinates (X, Y, and Z) of the branch.
3. Line 3 creates a matrix called "petguide" that stipulates whether a point is on the branch or on one of the petioles. Values of 0 indicate the ends of the branch (locations where no petiole will be placed), values of 1 indicate points on the branch, and values of 2 indicate points on one of the petioles.
4. Line 4 is a loop (Loop 1) that looks through "petguide", selects all values of 1, and places them in a new matrix, "LeafPoints". The purpose of "LeafPoints" is to hold only the points where petioles will later be placed.
5. Line 5 prints the Python code ("LeafPoints") to import "Leafmodel".
6. Line 6 writes the instruction created so that Abaqus can read "LeafPoints". For Abaqus to operate, all instructions need to be in a language it understands; line 5 created an instruction so it can be read into Abaqus.
7. For Abaqus to create a model it must first create individual parts. This line tells Abaqus to turn "Leafmodel" into "LeafPart" (Table 3 in Appendix).
8. Line 8 writes the instruction created so that Abaqus can establish "LeafPart".
9. Line 9 writes the instructions for Abaqus to give "LeafPart" mechanical properties for later stress analysis.
10. Line 10 writes the instruction created so that Abaqus can assign material properties to "LeafPart".
11. Line 11 creates variables DPT1-DPT3. These are arbitrary numbers which the program later uses to create reference planes; the purpose of the reference planes is to align each leaf to the branch in X, Y, and Z coordinates. The next step is to place individual leaves on the branch ("Loop 2") (Table 4 in Appendix).
12. Line 12 starts Loop 2. Leaves are placed on the branch once per loop, starting at the base of the branch and working towards the tip; when the loop finishes, all leaves have been placed on the branch and the petioles are correctly aligned.

A sketch of the filtering and code-generation pattern appears below.
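The original LeafCode is MATLAB that writes Python commands line by line. The short Python rendition below illustrates only the pattern of Loop 1 and the code emission, with placeholder output strings standing in for the actual Abaqus commands; the function and file names are illustrative assumptions, not the program's identifiers.

def write_leaf_python(points, petguide, out_path="leaf.py"):
    """Sketch of the LeafCode pattern: filter branch points flagged
    for petioles (Loop 1), then emit one block of generated code
    per leaf."""
    # petguide flags: 0 = branch end, 1 = branch point that receives
    # a petiole, 2 = point on a petiole; keep only the points flagged 1.
    leaf_points = [pt for pt, flag in zip(points, petguide) if flag == 1]

    with open(out_path, "w") as py:
        # The real program writes Abaqus/Python API calls here; the
        # strings below are placeholders showing the structure only.
        py.write("# import Leafmodel, create LeafPart, assign section\n")
        for i, (x, y, z) in enumerate(leaf_points, start=1):
            py.write(f"# instance leaf {i}; align and fix at "
                     f"({x}, {y}, {z})\n")
    return leaf_points

The output file plays the role of "Leaf" in Fig. 3 and is concatenated with "twig" to form "Twig+Leaf".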



Sensitivity Analysis After all programs were created, analysis of the sensitivity of the programs was performed. For this analysis, seven leaves of Betula nigra (Fig. 4) were used. For the first test, petiole lengths were 100 mm and mean leaf masses were 0.5 g. For the second test, leaf petioles were shortened to 50 mm in length and leaf masses remained 0.5 g. For the third test, petiole lengths were 100 mm and mean leaf masses were 1.0 g. All three scenarios were processed through the programs.

Figure 4. Image of test branch of Betula nigra with seven leaves. For this model, leaf petioles were 100 mm in length and the mean leaf mass was 0.5 g. For this branch, effective stress values for A, B, C, D, and E were 0.17, 0.38, 1.22, 3.56, and 8.92 × 10^6 Pa. When leaf petioles were 50 mm in length and the mean leaf mass was 0.5 g, effective stress values for A, B, C, D, and E were 0.17, 0.38, 1.19, 3.47, and 8.72 × 10^6 Pa. When leaf masses were doubled with petiole length remaining 100 mm, effective stress values for A, B, C, D, and E were 0.34, 0.76, 2.19, 6.57, and 15.8 × 10^6 Pa.

Results

Process to create MATLAB code

Immediate-TREE (Immediate-TREE version 1) is a MATLAB program. Prior to modifying Immediate-TREE, all of the initial modeling for this study was done in Abaqus. This initial modeling then had to be recreated in MATLAB, so that the resulting program, Immediate-TREE version 2, could accomplish the Abaqus modeling automatically. This process was iterative: when each step of the Abaqus process had been recreated in MATLAB, the MATLAB code had to be tested to ensure it replicated the Abaqus process correctly. The next part of the MATLAB code could then be written and subsequently tested. This process continued until the program accomplished all the tasks of creating a branch model with leaves and petioles.



Sensitivity Analysis

In order to determine the sensitivity of the program to leaf and petiole characteristics, the following three tests were performed. For the first test, petiole lengths were 100 mm and mean leaf masses were 0.5 g. For this branch, effective stress values for A (leaf blades), B (the junction of leaf blades to petioles), C (smaller stems), D (intermediate stems), and E (basal stems) were 0.17, 0.38, 1.22, 3.56, and 8.92 × 10^6 Pa, respectively. When leaf petioles were 50 mm in length and the mean leaf mass was 0.5 g, effective stress values for A, B, C, D, and E were 0.17, 0.38, 1.19, 3.47, and 8.72 × 10^6 Pa, respectively. When leaf masses were 1.0 g with petiole length remaining 100 mm, effective stress values for A, B, C, D, and E were 0.34, 0.76, 2.19, 6.57, and 15.8 × 10^6 Pa, respectively. As expected, halving petiole lengths reduced effective stress in stems by only about one percent. In contrast, when leaf masses were increased by a factor of two, effective stresses at the five locations increased by almost a factor of two. This result was also expected. From this it is concluded that the programs produced models that returned predictable, accurate results.
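The near-doubling can be checked directly from the reported values; the small shortfall below a factor of two is presumably because the branch's own weight, which the model also carries, does not change when leaf mass is doubled. A quick check in plain Python, with values copied from the text:

# Ratios of effective stress at locations A-E when leaf mass is
# doubled (units of 1e6 Pa, values as reported above).
base    = [0.17, 0.38, 1.22, 3.56, 8.92]   # 100 mm petioles, 0.5 g leaves
doubled = [0.34, 0.76, 2.19, 6.57, 15.8]   # 100 mm petioles, 1.0 g leaves
print([round(d / b, 2) for d, b in zip(doubled, base)])
# -> [2.0, 2.0, 1.8, 1.85, 1.77]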

Discussion

In this study, the models produced by the program were judged accurate since the sensitivity analysis gave predictable results. Moreover, the models of the leaves produced predictably higher effective stress values in the branch compared with point loads. The programs produced by this study are easy to use and require only a few steps to enter Excel data into MATLAB so that Abaqus can do the stress analysis; these few steps require little training. If Immediate-TREE (Keane et al., 2016) is not used, the calculations would be extensive and less precise. The use of Immediate-TREE decreases processing times by a factor of 240: for example, the processing of a branch with seven leaves required only two minutes of computer processing. Another important consideration for such studies is that using programs in Abaqus minimizes the possibility of human error in calculations and modeling.

The current program can be modified to accept data of branch geometry, leaf placement, and leaf characteristics from any tree species. In addition, a modified program should be capable of considering larger tree branches. If data were collected for a very large branch with many side branches, the program would require a modification allowing custom leaf placement. External loads can also be accommodated: though the program only considers the mass of the branch, leaves, and petioles, a snow load or other external loads could be added in Abaqus very easily once Immediate-TREE finishes processing.

Future applications would necessitate further testing and refinements of Immediate-TREE version 2. Immediate-TREE version 2 currently uses a Graphical User Interface (GUI) for half of the processes; an additional GUI would facilitate data processing. Currently, the model is constrained to leaves of uniform thickness and density. This is a



generalization that simplifies the leaf creation process. Thus, the model would need to be modified if leaves of complex geometries were considered. Moreover, the current model sets petiole angles at 45 degrees; petiole angles may vary in nature, and thus the program could be modified to accept petioles with a variety of angles.

Acknowledgments The author is grateful to the Catherine and Robert Fenton Endowed Chair and to Dr. L. S. Evans for financial support and guidance. The author is grateful for detailed excel files from A. Kaminski, A. S. Mysliwiec, Z. Shahbazi, and L.S. Evans.

References

Kaminski, A., Mysliwiec, S., Shahbazi, Z., and Evans, L. 2014. "Stress Analysis Along Tree Branches," ASME IMECE, 5, Nov. 14-20.

Keane, D., D. Avanzi, L. Evans, and Z. Shahbazi. 2016. Automated Finite Element Analysis on Tree Branches. ASME 2016 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference. Charlotte, NC, August 21-24.

Dassault Systèmes. 2016. Abaqus 6.12 Scripting User's Manual. http://www.maths.cam.ac.uk/computing/software/abaqus_docs/docs/v6.12/pdf_books/SCRIPT_USER.pdf

Appendix

Table 1. Pseudo code for "Petiole Creator"

1. For loop: from 2 to the number of branch points
2. If A is the last point, create PVector based on point A and point B
3. Thereafter PVectors are created to establish points on the branch
4. Find the magnitude of PVector
5. Divide PVector by magnitude, creating unit vector
6. Create default x, y and z variables for vector
7. Find the z value of the perpendicular vector
8. Create QVector from x, y and z values
9. Find the magnitude of QVector
10. Find the QVector unit vector
11. Find the end point for the petiole
12. Place end point in matrix
13. End

Once Petiole Creator has finished running, the Excel file is called "Branch+Leaf". This enables Immediate-TREE to create petioles on branches. LeafCode (see below) creates leaf blades and places them on their petioles.



Table 2. Pseudo code for "LeafCode"

1. Read excel file
2. Create matrix "Points" from branch coordinates
3. Create matrix petguide – this is used to tell if coordinates belong to branch or petiole
4-8. Loop for size of petguide that consolidates leafpoints
9. ImportLeafString = Python code to import the pre-made leaf model
10. PrintLeafString prints the LeafString code (line 9) into the Python file
11. LeafPartString = Python code to create a part for the leaf model in Abaqus
12. PrintPartString prints the PartString code (line 11) into the Python file
13. LeafSecAssign = Python code to assign a section and material to leaf models
14. PrintSecAssign prints the SecAssign code (line 13) into the Python file
15. Create DPT1-DPT3; these values are manipulated later to create datum planes
16. For i = 1 to size of LeafPoints (this repeats the following for every leaf)
17. InstanceLeaf = Python code to instance a leaf
18. PrintInstanceLeaf prints the InstanceLeaf code (line 17) into the Python file
19. Create datumcounter – this number varies as leaves are created
20. Create parallelcounter – serves same purpose as datumcounter but varies differently
21. Fixleaf = Python code to attach created leaves to correct places on branches
22. PrintFixLeaf prints the FixLeaf code (line 21) into the Python file
23. End

Table 3. Creation of "Leafmodel," part of Table 2

Process of making "Leafmodel":
• Measurements of actual leaves (leaf area, mass, shape, thickness)
• These measurements were used to create a 3D leaf model
• This leaf model was saved as a template so that it could later be imported into Abaqus as the leaves for the branch.

Table 4. "Loop 2" of "LeafCode," part of Table 2

"Loop 2" provides the details to construct individual leaves for "LeafCode". Knowledge of Abaqus is required for a clear understanding of "Loop 2". The following steps are used to construct individual leaves.

1. Create the Python code in Abaqus.
2. Print "InstanceLeaf" to the Python file. The Python file is read to instance a leaf in the assembly module of Abaqus ("InstanceLeaf").
3. Create "Datumcounter". "Datumcounter" is a variable that the program uses to keep track of specific numbers that are identified exclusively with individual leaves (e.g., leaf 1 will use counter values of 3 and 6, and leaf 2 will use counter values of 4 and 7, etc.).
4. Create "Parlelcounter". "Parlelcounter" is a counter that keeps track of numbers used in the process of aligning leaves to petioles.
5. "Fixleaf", another line of Python code, places leaves onto petioles.
6. Print "FixLeaf" to the final Python file.



Characterizing abnormal protein expression in Sense mutant Danio rerio as a link to amyotrophic lateral sclerosis

James M. LiMonta∗

Department of Biology, Manhattan College

Abstract. A mutation in zebrafish called Sense occurs in the tardbp gene and causes an amyotrophic lateral sclerosis (ALS)-like phenotype that leads to early embryonic death. This mutation occurs in the 3'-untranslated region (3'-UTR) of the tardbp gene, which codes for the Tar DNA-binding protein of 43 kDa, TDP-43. Because mutations that occur in the 3'-UTR do not cause a direct change in the protein's amino acid sequence, the goal of this study was to show a comparative difference in the mRNA levels of wild-type and Sense mutant zebrafish, as well as to take a closer look at the mutation that occurs in the 3'-UTR. Tardbp mRNA was extracted from 3 days post-fertilization (dpf) and 4 dpf zebrafish embryos of wild-type (WT) and Sense mutant genotypes. The 3'-UTR of the tardbp gene in Sense mutant and wild-type fish was sequenced, and of the eight mutations found, four created two new microRNA binding sites. The mRNA samples were then analyzed by reverse transcriptase real-time quantitative polymerase chain reaction (RT-qPCR) using primers specific to the tardbp gene. The results of the RT-qPCR showed a significant relative decrease in tardbp mRNA expression in Sense mutants when compared to the WT tardbp mRNA expression. The mean delta Cq expression of the WT at 4 dpf was 1.5845 × 10^-2 with a standard deviation of 2.2238 × 10^-3, and the 4 dpf Sense mutant's mean delta Cq expression was 1.1479 × 10^-3 with a standard deviation of 4.4828 × 10^-5. These data suggest a decrease in TDP-43 protein levels because mRNA is a protein precursor.

Introduction

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease that degrades motor neurons throughout the central nervous system. An estimated 420,000 people globally are diagnosed with ALS, and the average life span post-diagnosis is 2 to 5 years (ALSA, 2016). Of those cases, an estimated 5-10% have genetically acquired ALS, called familial ALS (FALS), which can be caused by various gene mutations (ALSA, 2016). One such gene is tardbp, which codes for the Tar DNA-binding protein of 43 kDa (TDP-43) and, when mutated, has been shown to cause the development of ALS in zebrafish as well as in humans. A study by researchers from the University of Montreal examined the effects of altering TDP-43 expression by knocking down the tardbp gene with antisense morpholino oligonucleotides (AMO) and by overexpressing TDP-43 in cell cultures using a tardbp plasmid expression vector (Kabashi et al., 2009). Both the overexpression and underexpression of TDP-43 were shown to cause an ALS-like phenotype in the zebrafish.

The Sense mutation is homozygous recessive, meaning it is phenotypically expressed only when the organism has both mutated tm4 alleles at the tardbp locus. Marked morphological changes can be seen in the Sense mutant embryo as early as 44 hours post-fertilization (hpf). The major morphological changes are muscle degeneration, impaired spinal motor neuron axon ∗

Research mentored by Quentin Machingo, Ph.D.



outgrowth, reduced blood circulation, and mis-patterning of vessels (Schmid et al., 2012). Mutants lose all response to stimuli by 4 dpf, and the mutation causes death around 7 dpf. Previous research by Manhattan College students suggested that the Sense mutation is located in the 3'-untranslated region (3'-UTR) of the tardbp gene and causes the ALS-like phenotype observed in the affected zebrafish.

The 3'-UTR is responsible for post-transcriptional gene regulation through microRNA (miRNA)-mediated regulation. MiRNAs bind to a complementary RNA sequence on the 3'-UTR of a target mRNA, resulting in the inhibition of translation either by directly degrading the mRNA or by severely decreasing the efficiency of translation. Both alterations prevent the production of TDP-43, but when miRNAs bind to the 3'-UTR and decrease the efficiency of translation, the mRNA remains in the cytoplasm of the cell in an idle state. This is problematic because if the miRNA does not directly degrade the tardbp mRNA, it becomes difficult to confirm that there is an inhibition of translation. Since the Sense mutation is in the 3'-UTR, direct observation of a change in the mRNA's coding sequence, and ultimately a change in the amino acid sequence of TDP-43, is impossible.

The main purpose of this study was to show that the mutation in the 3'-UTR of the tardbp gene causes a change in TDP-43 production, most likely through miRNA regulation of the tardbp mRNA that is translated into TDP-43. I also sought to determine how the miRNA accomplishes this, whether by directly degrading the mRNA or by inhibiting its translation. I hypothesized that mRNA levels in the Sense mutant embryos would decrease because the mutation creates new microRNA binding sites on the 3'-UTR of the mRNA, increasing translational inhibition. I also hypothesized that there would be a relative decrease in overall TDP-43 levels in the mutant embryos compared to the wild-type embryos, due to the mutation in the 3'-untranslated region of the tardbp gene.

Materials and Methods

Sample Collection and mRNA Extraction

Embryos were collected from zebrafish and incubated to four different developmental stages: 24 hpf, 36 hpf, 3 dpf, and 4 dpf. At the appropriate developmental stage, embryos were lysed with TriPure, and the aqueous layer, containing the mRNA, was extracted after centrifugation. Transfer RNA (tRNA) was then added to help stabilize the mRNA, and the mRNA was washed with 70% ethanol and suspended in water.

Reverse Transcriptase Real-Time Quantitative PCR

RT-qPCR was run in triplicate for the mRNA of each time stage. A Bio-Rad iTaq SYBR Green One-Step Kit was used with the forward primer 5'-TTGTGAGGTTTGGAGACTGG (QM197) and the reverse primer 5'-CACAAACACTTTACGGCTCC (QM198), specific to a 144 bp region in the 2nd exon of the tardbp gene. The thermocycler conditions were as follows: reverse transcription of the tardbp mRNA at 50°C for 10 minutes, an initial denaturing step (95°C for 1 minute), 44 cycles of a denaturing step (95°C for 10 sec), an annealing step (60°C for 30 sec), and an extension step (65°C for 5 sec), followed by a final extension step (95°C for 30 min).
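The reported "delta Cq expression" values are consistent with a 2^-ΔCq summary of the qPCR output. The sketch below shows that calculation in Python; it is illustrative only, since the paper does not state its normalization details, and the reference Cq values here are assumptions.

from statistics import mean, stdev

def relative_expression(target_cqs, reference_cqs):
    """Per-replicate relative expression, 2^-(Cq_target - Cq_ref)."""
    return [2 ** -(t - r) for t, r in zip(target_cqs, reference_cqs)]

# Made-up triplicate Cq values for illustration:
wt_4dpf = relative_expression([22.1, 22.3, 22.0], [16.2, 16.1, 16.3])
print(mean(wt_4dpf), stdev(wt_4dpf))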



Results

Tardbp 3'-UTR DNA Sequences

The 3'-UTRs of two Sense mutant samples (152aRC, 152bRC) and one WT sample (wtamp) were sequenced. Eight mutations occurred, at the following base pairs: 1,958 bp, 1,960 bp, 1,963 bp, 2,005 bp, 2,047 bp, 2,056 bp, 2,065 bp, and 2,097 bp. Of the eight mutations, the four at 2,005 bp, 2,047 bp, 2,056 bp, and 2,065 bp produced two new microRNA binding sites (Fig. 1). The RT-qPCR results are shown in Fig. 2.

Figure 1. DNA Sequence of tardbp mRNA from two Sense mutant (152aRC, 152bRC) samples and one WT sample (wtamp).

Figure 2. Relative mRNA expression of tardbp in WT and Sense mutants (Sen -/-).

Discussion

The RT-qPCR results, as seen in Fig. 2, show no significant difference in mRNA expression between wild-type and Sense -/- mutant embryos at the 3 dpf stage. However, there is a significant



difference between wild-type and Sense -/- mutant embryos at the 4 dpf stage. The mean delta Cq expression of the WT at 4 dpf was 1.5845 × 10^-2 with a standard error of 2.2238 × 10^-3, and the 4 dpf Sense mutant's mean delta Cq expression was 1.1479 × 10^-3 with a standard error of 4.4828 × 10^-5. The 3'-UTR DNA sequences seen in Fig. 1 help explain why this decrease occurs: they showed that the Sense mutation actually created two new miRNA binding sites. These new binding sites create more opportunities for miRNA binding and are therefore expected to increase mRNA degradation. The decrease in tardbp mRNA level reinforces my hypothesis of a relative decrease in overall TDP-43 protein levels in the mutant embryos compared to the wild-type embryos, because mRNA is a protein precursor and is necessary for the production of protein.

Moving forward, the next logical step for this research is to characterize protein expression using Western blotting techniques to conclusively show that TDP-43 protein levels do in fact decrease in Sense mutant zebrafish. I have already used commercial antibodies specific to TDP-43 in mice to successfully detect TDP-43 in zebrafish.

Acknowledgments The author thanks Dr. Quentin Machingo for his guidance this past year. He also thanks the School of Science Summer Research Scholar’s Program for the financial support.

References

ALSA. 2016. Facts You Should Know. http://www.alsa.org/about-als/facts-you-should-know.html

Kabashi, E., Tradewell, M. L., Dion, P. A., and Bercier, V. 2009. Gain and loss of function of ALS-related mutations of TARDBP (TDP-43) cause motor deficits in vivo. http://www.ncbi.nlm.nih.gov/pubmed/19959528

Schmid, B., Hruscha, A., Banzhaf-Strathmann, J., and Strecker, K. 2012. Loss of ALS-associated TDP-43 in zebrafish causes muscle degeneration, vascular dysfunction, and reduced motor neuron axon outgrowth. http://www.ncbi.nlm.nih.gov/pubmed/23457265


Xylem conductivity from stems to leaves of grass plants

Humberto Ortega∗

Department of Biology, Manhattan College

Abstract. Grass plants provide food for many human populations worldwide. The productivity of grass plants is dependent upon having an adequate water supply. The purpose of this study was to determine the relationship between water conductivity in stems and water conductivity in leaves. Eight species of tropical plants and eight species of temperate plants were evaluated. Overall, 23% to 39% of all bundles in stems were contributed to each leaf. In addition, between 8% and 15% of the xylem conductivity in stems was contributed to leaves. Xylem conductivity per bundle was similar in all samples of this study. Data from additional species will help us to understand whether there are differences between tropical and temperate grasses with respect to water distribution to leaves.

Introduction

Grass plants provide food for many human populations worldwide (Figs. 1 and 2). Data in Table 1 show grain yields from grasses worldwide; the total grain yield from these six crops was 2.3 billion metric tons in 2013. These yields show that human populations worldwide are extremely reliant on grass crops, especially staple foods such as wheat (Fig. 1), rice (Fig. 2), and corn. An understanding of water use efficiency is important to the overall production of food. Many times, grain crops that require larger amounts of water are grown in areas that receive insufficient precipitation in most years. For example, corn, which requires high moisture, is grown in areas where wheat, with a lower water requirement, is normally grown. Farmers want to grow corn rather than wheat since corn provides them more profit.

Table 1. Worldwide production of food grains from grass plants for the 2013 calendar year.

Grain      Grain Production (metric tons)
Corn       817,000,000
Wheat      681,000,000
Rice       678,000,000
Barley     123,000,000
Sorghum     61,000,000
Oats        20,000,000

Sources: www.nueokstate.edu, www.geohive.com, www.agrostats.com, www.spectrumcommodities.com, www.nationmaster.com

Water moves from root to stem to sheath to leaf (some goes to seeds and flower if applicable). Morphologically, grass plants have leaf blades produced by stems. Unique to grasses, leaves have sheaths that wrap around stems. Thus, each leaf has a leaf sheath and leaf blade (Fig. 3). The image in Fig. 4 shows two nodes with the leaf sheaths and leaf blades. The image in Fig. 5 shows ∗

Research mentored by Lance Evans, Ph.D.



a detail of the node, leaf sheath and leaf blade arrangement. Water in grass plants is carried in vascular bundles. The distribution of vascular bundles from a stem to a leaf is shown in Fig. 6.

Figure 1. Image of mature wheat plants in a field. www.conejobread.com

Figure 2. Image of mature rice plants in a terraced landscape in Vietnam. www.intrepidtravel.com



Figure 3. Image of a grass stem with a leaf blade to the left and a leaf sheath of a second leaf with its blade and a stem above the second leaf. http://www.extension.umn.edu/agriculture.

Figure 4. Diagram of sampling of stems, leaf sheaths, and leaf blades with nodes labeled.

Figure 5. Diagram of water flow throughout the tissues up to transpiration of water vapor from the leaf.

Figure 6. Diagram of vessel distribution in the grass stem and leaf blade.

In grasses, water is conducted in xylem cells in individual vascular bundles (Fig. 7). Thus, vascular bundles and the xylem cells in vascular bundles form a continuum from roots to leaves, flowers, or fruits. The overall purpose of this study was to determine if water transport characteristics were similar in tropical and temperate grass species. Since transpiration should be much greater for tropical plants living in warm climates and lower in temperate grasses living in colder climates,



Figure 7. Image of a vascular bundle (labeled by the black circle) and its xylem cells (marked with black X’s).

the overall hypothesis of this study was that xylem conductivity would not be similar between the two groups of grasses. For this study, we hypothesized that:

1. The numbers of vascular bundles in stems and leaves will not be the same for tropical and temperate grass species.
2. The percent contributions of bundles and of xylem conductivity from stems to leaves will not be the same for tropical and temperate grass species.
3. The xylem conductivity per bundle in stems and leaves will not be the same for tropical and temperate grass species.

Materials and Methods

Plant sampling

The tropical grass species were sampled from several islands of the Hawaiian archipelago in 2014 and from Hiva Oa Island in 2016 (Table 2), while temperate grasses were sampled from the Manhattan College campus between 2014 and 2016. Plants taken as samples had no abnormalities. All plants had flowers or seeds that were used for species identification. Wagner et al. (1999) and Hitchcock (1950) were used for species identification of tropical and temperate grass species, respectively. All species names were verified with www.tropicos.org.



Table 2. Grass species used in this study (grass species: location of samples).

Tropical species:
Phragmites australis: Van Cortlandt Park, +40.87, -73.91
Zea mays: South of Lihue, +21.60, -159.20
Polypogon interruptus: Haleakala National Park, +20.40, -156.10
Saccharum officinarum: South of Lihue, +21.60, -156.10
Digitaria insularis: Maui, +20.52, -156.11
Cenchrus agrimoniodes: Maui, +20.47, -156.27
Axonopus fissifolius: Hiva Oa Island, Puamou, Marquises Island, French Polynesia, -9.46, -138.53

Temperate species:
Pennisetum alopecuroides: Manhattan College, +40.87, -73.91
Miscanthus sinensis: Manhattan College, +40.87, -73.91
Sphenopholis intermedia: Manhattan College, +40.87, -73.91
Calamagrostis acutiflora: Manhattan College, +40.87, -73.91
Hordeum vulgare: Manhattan College, +40.87, -73.91
Dactylis glomerata: Manhattan College, +40.87, -73.91
Poa pratensis: Manhattan College, +40.87, -73.91

Tissue sampling

Sampled plants were subdivided into stem segments below nodes, leaf sheaths, and leaf blades. Each tissue sample was 0.5 cm to 3.5 cm in length. In some cases, the leaf blade had two samples, one near the leaf sheath and one near the center of the leaf.

Histological procedures

Tissue samples were fixed in FAA (Jensen, 1962) for 24 hours. After fixation, samples were put through a tertiary butanol series (Fisher Scientific, Pittsburgh, PA). Tissues were placed in liquid Paraplast X-tra wax (McCormick Scientific, Richmond, IL) in an oven at 56°C. After a second change of wax, tissue samples were embedded in Paraplast. Tissues were sectioned with a microtome at 35 µm. Sections were stained in safranin (Jensen, 1962) and later made permanent with Canada balsam (CAS 8007-47-4, Acros, Fisher Scientific, Pittsburgh, PA).

Microscopic analysis

The first step was to determine the number of vascular bundles in each tissue (Fig. 7). The numbers of bundles were counted at 100× and 400× magnification. The second step was to determine the number of vessel conduits in vascular bundles: for each tissue sample, at least 10% of all vascular bundles were examined, and for each selected vascular bundle the number of conduits was counted. In addition, the diameter of each conduit (Fig. 7) was determined using ImageJ (National Institutes of Health, http://rsb.info.nih.gov/ij); two diameter measurements at right angles were taken for each conduit. Xylem conductivities were determined using the Hagen-Poiseuille equation:

K = (π × N × r^4) / (8 × η)

where N is the number of conduits, r is the average conduit radius (cm), and η is the viscosity of water



(McCulloh et al., 2003). The units of xylem conductivity are g cm MPa^-1 s^-1.

Statistical analysis

The conductivity values of the individual sampled bundles in a tissue were averaged to obtain the mean bundle conductivity; this mean was then multiplied by the number of bundles present in the tissue sample to give the average conductivity of the tissue. Standard deviations of the bundle conductivity values were calculated, as was the standard deviation divided by the mean. These data were then collected in a spreadsheet cataloguing the information obtained for the different species; the plants considered the best representatives of each species, with the most complete information, were used. In addition to the percentage of water moved into the leaf blade, this spreadsheet recorded the average conductivity value of each tissue and the average of the percentages of water moving into the leaf blade. The percentages of xylem bundles passing from the stems into the leaves were also calculated (Fig. 6). These values were then averaged across all plant species to contrast the average tissue conductivity of tropical versus temperate grass species, and the average percentage of water going into the leaf blade for the two groups.
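As an illustration, the calculation can be written out as below. This is a sketch under our own assumptions: the variable names are ours, the viscosity of water is taken as roughly 1.0 × 10^-9 MPa s (about 20°C), and the density of water (1 g cm^-3), which is numerically implicit in the stated mass-flow units, is folded into the constant.

import math

ETA_WATER = 1.002e-9  # viscosity of water, MPa s (approx. 20 C; assumption)

def bundle_conductivity(conduit_diameters_cm):
    """Hagen-Poiseuille conductivity of one vascular bundle:
    pi * N * (mean radius)^4 / (8 * viscosity), as in the text."""
    radii = [d / 2.0 for d in conduit_diameters_cm]
    mean_r = sum(radii) / len(radii)
    return math.pi * len(radii) * mean_r ** 4 / (8.0 * ETA_WATER)

def tissue_conductivity(sampled_bundles, n_bundles_in_tissue):
    """Mean sampled-bundle conductivity scaled by the bundle count,
    following the Statistical analysis section."""
    mean_k = sum(bundle_conductivity(b) for b in sampled_bundles) / len(sampled_bundles)
    return mean_k * n_bundles_in_tissue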

Results

It was hypothesized that the numbers of vascular bundles in stems and leaves would not be the same for tropical and temperate grass species. Tropical grasses had 136 bundles per stem while temperate grasses had 50.9 bundles per stem (Table 3). Although these mean values are not similar, the standard deviations were relatively high, so there was no statistical difference (p = 0.18) between the two groups. On average, temperate grass leaves had 16.3 bundles per leaf whereas tropical grass leaves had 22.9 bundles per leaf (Table 3). Similarly, the values for the two groups were not significantly different (p = 0.37) since the standard deviations were large. Consistent with these values, there was no statistically significant difference in the percentage of bundles passing from stems to leaves for the two plant groups.

Xylem conductivity values were similar for stems and leaves of tropical and temperate grass species (Table 3). In contrast, the contributions of xylem conductivity from stems to leaves differed significantly between the two grass groups: xylem conductivity from stems to leaves for temperate grass species was 15.4%, higher than the value of 8.18% for tropical species (p = 0.044).
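The paper does not name the statistical test behind these p-values; for illustration, a two-sample t-test on the percent-conductivity-to-leaf values from Table 3 (Welch's version, a plausible assumption) gives approximately the reported probability.

from scipy import stats

# Percent of stem xylem conductivity reaching the leaf (Table 3)
tropical  = [7.67, 6.36, 7.82, 18.5, 9.85, 2.76, 4.33]
temperate = [22.1, 7.61, 21.9, 10.0, 10.9, 27.4, 14.2, 9.41]

t, p = stats.ttest_ind(temperate, tropical, equal_var=False)
print(round(p, 3))  # about 0.04-0.05, near the reported p = 0.0440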

Discussion

Xylem conductivities per bundle in stems and leaves were much higher for tropical than for temperate grasses. Again, the high variation within each group resulted in a lack of statistical significance between the two groups. We have many additional plant samples from both the tropics and the temperate zones, and we plan to conduct analyses on these samples. The addition of data from more species should provide a better view of any differences between the two groups.

The results obtained from these plant tissues show that there is a difference in the way tropical and temperate grass species conduct water, especially in what percentage of water leaves the stem



Table 3. Number of bundles and xylem conductivity characteristics of tropical and temperate grass species. Xylem conductivity and mean xylem conductivity per bundle are in g cm MPa^-1 s^-1.

                               Xylem Conductivity           Number of bundles          Mean Conductivity per Bundle
Grass Species                  Stem     Leaf     % to leaf  Stems   Leaves  Percentage  Stems      Leaves

Tropical species
Phragmites australis           4.86     0.373      7.67      57.0    31.0     54.4      0.0853     0.0120
Zea mays                      92.4      5.88       6.36     444      45.0     10.1      0.208      0.131
Polypogon interruptus          0.193    0.0151     7.82      39.0     9.00    23.1      0.00494    0.00168
Saccharum officinarum         12.5      2.31      18.5      110.0    27.0     24.5      0.113      0.0854
Digitaria insularis            3.53     0.347      9.85     189.0    30.0     15.9      0.0187     0.0116
Cenchrus agrimoniodes          0.332    0.00914    2.76      74.0    13.0     17.6      0.00448    0.000703
Axonopus fissifolius           0.0804   0.00349    4.33      36.0     5.00    13.9      0.00223    0.000697
Mean                          16.3      1.28       8.18     136      22.9     22.8      0.0624     0.0347
Standard deviation            33.9      2.19       5.12     146      14.3     14.8      0.0781     0.0520
Standard deviation/Mean        2.08     1.72       0.625      1.08    0.627    0.650    1.25       1.50

Temperate species
Pennisetum alopecuroides       0.0858   0.0190    22.1       23.0     9.00    39.1      0.00373    0.00211
Miscanthus sinensis            6.20     0.472      7.61     126      21.0     16.7      0.0492     0.0225
Miscanthus sinensis            1.61     0.352     21.9      103      47.0     45.6      0.0156     0.00748
Sphenopholis intermedia        0.130    0.0129    10.0       28.0     9.00    32.1      0.00464    0.00144
Calamagrostis acutiflora       0.126    0.0138    10.9       22.0     9.00    40.9      0.00575    0.00153
Hordeum vulgare                0.0204   0.00558   27.4       13.0     9.00    69.2      0.00157    0.000620
Dactylis glomerata             0.460    0.0653    14.2       72.0    17.0     23.6      0.00639    0.00384
Poa pratensis                  0.0271   0.00255    9.41      20.0     9.00    45.0      0.00136    0.000283
Mean                           1.08     0.118     15.4       50.9    16.3     39.0      0.0110     0.00497
Standard deviation             2.14     0.185      7.37      43.6    13.3     15.9      0.0161     0.00744
Standard deviation/Mean        1.97     1.57       0.477      0.858   0.816    0.41     1.46       1.50
Test probability               0.281    0.212      0.0440     0.183   0.374    0.0616   0.134      0.183

to travel into the leaf. Tropical species do not transport as much water from their stems into their leaves as temperate grass species do. Eventually, we expect to combine these xylem conductivity data with the overall plant architecture and the sizes and numbers of leaves to determine the water distribution patterns in tropical and temperate grass species, especially in relation to seed production. Thus, the eventual path of our research project is well defined.

Acknowledgment

The author is indebted to the Catherine and Robert Fenton Endowed Chair and to Dr. Lance Evans for financial support for this research.



References Hitchcock, A. S. 1950. Manual of the Grasses of the United States. United States Government Printing Office, Washington D.C. Jensen, W. A. 1962. Botanical Histochemistry. W.H. Freeman, San Francisco, CA. 408 p. McCulloh, K. A., J. S. Sperry, and F. R. Adler. 2003. Water transport in plants obeys Murray’s law. Nature 421: 939-942. National Institutes of Health. Image J. https://imagej.nih.gov/ij/ Tropicos.org. Missouri Botanical Garden. http://www.tropicos.org Wagner, W. L., Herbst D. R., and Sohmer S. H. 1999. Manual of the Flowering Plants of Hawai’i. Bishop Museum Press, Honolulu, HI.


Growth dynamics of flowering branches of Artemisia tridentata

Ismael Peña∗

Department of Biology, Manhattan College

Abstract. Artemisia tridentata, Big Sagebrush, is an evergreen shrub found predominantly in the western United States. Big Sagebrush can range in size from 2 to 13 feet, with a spreading array of branches from the main stem. Stems of Artemisia tridentata show a high level of eccentric growth. This eccentric growth is seen as an uneven growth of xylem rings, often causing a growth ring to be cut off at a given location. The purpose of this study was to examine the growth characteristics of both vegetative and flowering branches in the samples of A. tridentata processed. Randomly selected terminal stem samples were collected weekly in Thistle, Utah, and sent to Manhattan College; samples were then randomly chosen and processed for secondary branches. Each branch was removed from the main stem and marked as either vegetative or flowering, and various measurements and observations were recorded for each stem sample. Results are mostly shown in Table 1, which lists each branch and its length, as well as other measurements. The total length of all secondary branches increased from month to month, with the exception of June to July. Between the September and October samples there was a large increase, of almost four times, in the total length of all secondary branches. This is of interest because between these two samples is when the change from all vegetative branches to all flowering branches occurred. Other trends, such as the increase in the total length of current-year growth, are also shown. Future research will focus on processing more samples as well as examining the histology of the stem.

Introduction

Artemisia tridentata, Big Sagebrush, is an evergreen shrub found predominantly in the western United States (Figs. 1 and 2). Big Sagebrush can range in size from 2 to 13 feet, with a spreading array of branches from the main stem (USDA plant guide). Stems of A. tridentata show a high level of eccentric growth. This eccentric, or irregular, growth can be described as an uneven growth pattern of xylem rings (Diettert, 1938); it is due to death of the vascular cambium, which causes growth rings to be uneven and non-circular compared to the rest of the ring (Evans et al., 2012). Stems of A. tridentata produce two types of branches, flowering and vegetative. In this experiment we focused on determining the growth characteristics of individual branches of terminal stems. Since eccentric growth occurs only on main stems at the base of flowering branches, and not on main stems at the base of vegetative branches, our research was aimed at understanding when vegetative branches become flowering branches during the growing season. In addition, data were collected to understand the rates of growth of vegetative and flowering branches during the growing season. The growth of flowering branches is important since each branch can produce hundreds of seeds. Overall, the purpose of this research was to determine the time and growth pattern when vegetative branches became flowering branches. ∗

Research mentored by Lance Evans, Ph.D.



Figure 1. Several sagebrush plants (Artemisia tridentata) in the desert showing several flowering branches. The plants are relatively evenly spaced in the view.

Figure 2. Flowering branches at stem terminals of a single sagebrush plant (Artemisia tridentata). Note that this plant probably has 20 to 40 terminal branch stems.

Materials and Methods

Stem samples of Artemisia tridentata were collected from plants growing in Thistle, Utah. Six randomly selected terminal stem samples (approximately 10 cm in length) with intact secondary branches were sampled once per week from June 2015 through November 2015. All branches were stored in their shipping boxes and mailed to Manhattan College for analysis. Once received, boxes were organized by date and stored until chosen for sampling.



To analyze samples, individual stems randomly selected from among the six branches for each sampling date were laid out on paper (Fig. 3). Secondary branches were removed from main stems starting at stem bases. As each branch was removed, the letter 'V' for vegetative or 'F' for flowering, with a corresponding branch number, was marked on the paper at the position from which the branch was removed (Fig. 4). Photographs were taken before and after branches were removed. A millimeter ruler was present in all photographs so that distances could be determined, and all images were archived in computer files. Numerous measurements were taken from the photographic images with ImageJ (National Institutes of Health). These measurements and images were used to make our summary page (Fig. 5).
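Measuring from photographs in this way reduces to a scale conversion against the imaged ruler. The small sketch below shows the arithmetic; the function name, the example numbers, and the 10 mm ruler span are illustrative assumptions (in practice the scale is set directly in ImageJ).

def to_mm(length_px, ruler_px, ruler_mm=10.0):
    """Convert a pixel measurement to millimeters using a ruler
    visible in the same photograph."""
    pixels_per_mm = ruler_px / ruler_mm
    return length_px / pixels_per_mm

# e.g., a branch spanning 742 px when 10 mm of ruler spans 180 px:
print(round(to_mm(742, 180), 1), "mm")   # -> 41.2 mm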

Figure 3. A terminal stem of Artemisia tridentata from September 10, 2015. The stem is natural with all secondary branches attached. The pencil point marks the location of the beginning of growth for 2015.

Figure 4. A terminal stem of Artemisia tridentata from September 10, 2015. This is the same sample as in Fig. 3. For this image all of the secondary branches have been removed. The green arrow marks the location of the beginning of growth for 2015.



Results

In our study we processed five samples of Artemisia tridentata collected from June to November 2015. The use of our summary sheet (Fig. 5) for each sample allowed us to make various measurements using ImageJ. Various characteristics of our stem samples are shown in Table 1, which lists each branch and its length for each sample, along with the position of each branch along the main stem. The main stem scale used for branch positions (in mm) does not correspond exactly to the length of each sample; it is simply used to locate branches.

Figure 5. A terminal stem of Artemisia tridentata from September 10, 2015. This is the same sample as in Fig. 3. For this image all of the secondary branches have been removed. The pencil point marks the location of the beginning of growth for 2015. Branches are marked as vegetative (V) or flowering (F).

In this table the data for number of branches is shown, as well as others. The data ranges from 14-26 branches. We see that 26 branches arise in the July sample which is not expected so early on. This is odd because the sample before it has 14 branches while the one after had 17. Though these branches are different, somewhat of a trend was expected. This increase in number of branches is not as drastic as the increase seen in our total length of secondary branches (Table 1). For these measurements we see a huge increase in length from our September to our October sample. The increase is from 366mm to 1260mm which is very large while the range is from 296mm to 1260mm. Results of Total number of branches and total length of secondary branches are shown graphically in Fig. 6. As we move on to total length of current year stem growth we can see that our data increases quite linearly. The data ranges from 126 mm to 290mm which is s good increase for several



Figure 6. Relationship between number of branches and cumulative length of all secondary branches (mm) for stems of Artemisia tridentata from June 4, July 2, September 10, October 1, and November 23, 2015. Green symbols indicate vegetative branches while brown symbols indicate flowering branches. Note that branch lengths were short for vegetative branches, but as vegetative branches became flowering branches, branch lengths increased markedly.

The data for the total length of the main stem include both the current-year (2015) growth and the previous-year (2014) growth. Here the data range from 260 mm to 311 mm. It is expected that these numbers would vary because each sample is different, and there really is no trend among samples.

Discussion

In our experiment we focused on determining the growth characteristics of both flowering and vegetative branches in samples of Artemisia tridentata. The data taken from each sample are mostly shown in Table 1, which allows us to view trends in the different measurements taken. First, this table shows the data for each branch length as well as the placement of each branch on the main stem. From June to November, the last branch of each sample increases in position along the stem, meaning that as the months progress, the stem gets longer and branches begin to grow in that region. Looking at the number-of-branches data, from our first sample (June 4) to our last (November 23) we see a steady increase in the number of branches present in the new growth, with the exception of the July 2 sample. Though these samples are different, in the sense that they are not the same stem, somewhat of a trend was expected.

The total length of all secondary branches shows a large change. Specifically, from our September sample to our October sample there is a large increase, from 366 mm to 1260 mm. An increase in total branch length was expected, but such a high increase, of more than three times that of September, was not. This was surprising but also interesting, because this is the period during which the change from vegetative branches to flowering branches can be observed. Fig. 6 shows the data for the total number of branches and total length of



secondary branches graphically. This graph shows how the number of branches changed as described previously, and allows the trends observed to be viewed more easily. The total length of current-year stem growth ranges from 126 mm to 290 mm. Once again, this is the growth in 2015; these data are quite linear, and the linear increase is interesting because it shows a consistent trend across these samples. As expected, there is an increase in new stem growth throughout the months sampled. The total length of the main stem was also measured, to compare how much of the stem was new growth and how much was previous-year growth: knowing the length of the whole stem, the length of old growth can be determined as the difference between the total stem length and the total new-growth length. The only noticeable trend is that from September 10 to November 23 the samples show an increase in main stem length as the months progress.

Acknowledgements

The author is indebted to the Catherine and Robert Fenton Endowed Chair and to Dr. Lance Evans for financial support for this research.

References

Diettert, R. R. 1938. The morphology of Artemisia tridentata Nutt. Lloydia 1:3-74.

Evans, L. S., A. Citta, and S. C. Sanderson. 2012. Flowering branches cause injuries to second-year main stems of Artemisia tridentata Nutt. subspecies tridentata. Western North American Naturalist 72:447-456.

National Institutes of Health. ImageJ. https://imagej.nih.gov/ij/

United States Department of Agriculture, Natural Resources Conservation Service, Plant Guide. http://plants.usda.gov/plantguide/pdf/pg_artr2.pdf

Welch, B. 2005. Big Sagebrush: a sea fragmented into lakes, ponds, and puddles. General Technical Report RMRS-GTR-144, Fort Collins, CO.


PeËœna

The Manhattan Scientist, Series B, Volume 3 (2016)

91

Appendix

Table 1. Lengths of secondary branches on stem samples of Artemisia tridentata during 2015 from Thistle, Utah. Rows give the position of each branch along the main stem (mm): 290, 280, 270, 260, 250, 240, 235, 230, 220, 215, 212, 210, 200, 195, 190, 185, 180, 175, 170, 165, 160, 155, 150, 145, 140, 135, 130, 125, 120, 115, 110, 105, 100, 95, 90, 80, 75, 70, 65, 60, 55, 50, 45, 40. Columns give branch lengths (mm) for the sampling dates 06/04, 07/02, 09/10, 10/01, and 11/23. The recorded branch lengths (mm) follow:

5.3

5.13 6.11 4.89 5.83

15.2 6.60 7.54 8.28 9.88 13.9 13.6 30.1 14.9 15.7 7.21

5.73 6.73 7.08 7.99 9.80 10.8 7.48 7.44 18.6 6.48 8.14 21.8 19.7 22.8 14.9 27.7 7.91 11.9 14.4 5.73 21.9 4.63

10.9 7.2 9.1 16.9

24.1 16.5 39.0 45.9 46.1

13.6 19.6 28.8 35.1 39.3 10.8 42.1 53.8 65.7 62.3 52.9

39.0

82.6

55.0 52.6 30.5

75.2

64.1

54.2

17.8

60.2

77.7

24.4

70.5

31.6

68.3

86.7 84.0 53.2 84.1

29.5

61.7 34.7 73.0

44.8

64.2

72.7 80.5

14.4 56.6

66.7 48.4

17.4 16.9

68.1 29.8 22.0

65.2

85.4



Table 1. (continued) Main stem positions (mm): 30, 20, 15, 10, 5, 0 (New Growth); columns as above (sampling dates 06/04, 07/02, 09/10, 10/01, 11/23; branch lengths in mm):

4.32 81.2 29.9 69.2

11.7 167

Total Number Of Branches

14

26

17

22

22

Total Length of All Secondary Branches (mm)

308

296

366

1260

1260

Total Length of Current Year Stem Growth (mm)

126

202

210

227

290

Total Length of Main Stem(mm)

302

305

260

283

311


Consequences of chytridiomycosis and urbanization faced by red-backed salamanders in Lower New York State

Paul Roditis∗

Department of Biology, Manhattan College

Abstract. Batrachochytrium dendrobatidis (Bd) is a chytrid fungus that is affecting salamander and other amphibian populations on a global scale; however, few occurrences of Bd have been documented in lower New York State. In addition, urbanization has undoubtedly had its effects on a variety of both plant and animal species. The primary goal of this study was to compare the ecology of urban salamander habitats with that of habitats found in more suburban and rural areas of New York State (NYS), to determine whether urban salamander populations are forced to occupy less suitable microhabitats. Our second goal was to investigate the prevalence of B. dendrobatidis in lower NYS. Throughout the summer of 2016, 17 amphibians were examined and swabbed for Bd in Van Cortlandt Park, Saxon Woods Park, and Teatown Park, and ecological data were also collected. DNA from the swabs was extracted, amplified by the polymerase chain reaction (PCR), and compared to a known sample of the fungus. None of the amphibians sampled showed evidence of exposure to Bd. However, there were significant ecological differences between the salamander habitats and sites chosen at random for comparison. More surprisingly, there were no significant differences between salamander habitats in urban locations and those farther north, indicating that even salamanders in NYC may be able to locate ideal microhabitats.

Introduction

Amphibians across the world are feeling the effects of habitat loss, pollution, disease, and climate change. The International Union for Conservation of Nature lists about 30% of all amphibian species in the world as threatened (Hof et al., 2011). Amphibian conservation is clearly necessary because of their importance to the ecosystem as a whole. Amphibians affect ecosystem structure through soil burrowing and aquatic bioturbation (Babbit & Hocking, 2014). They also contribute to energy flow through direct consumption and nutrient cycling. Finally, amphibians are invaluable as indicator species because of their sensitivity to environmental conditions, with the health of a given population reflecting the health of its habitat (Babbit & Hocking, 2014). Chytridiomycosis is one of the major diseases affecting amphibians globally. It is a fungal disease caused by Batrachochytrium dendrobatidis (Bd), a chytrid fungus. Bd infects the keratinized cells of the amphibian's skin, which thickens the epidermis, preventing proper cutaneous respiration and the absorption of water through their once semipermeable skin (Global Invasive Species Database, 2006). The red-backed salamander (Plethodon cinereus) is a widespread species that is abundant in New York State forests and is just one of the numerous amphibians that may be threatened by disease (Breisch & Ducey, 2003). Because of its abundance in the northeastern region of North America, P. cinereus can be found in some rather urban areas of the United States. However,

Research mentored by Gerardo Carfagno, Ph.D.



research has shown that these salamanders can host symbiotic bacteria that produce compounds that fight infections such as Bd (Harris et al., 2006). However, urbanization may also have adverse effects on salamander populations by altering important environmental factors such as moisture and temperature. This is clearly an ongoing issue, locally and globally, that needs to be further studied and ultimately addressed. The first goal of this study was to quantify the microhabitat of local salamander populations in an urban location (Van Cortlandt Park) and make comparisons with salamanders in more rural or suburban locations (Saxon Woods and Teatown Park). The second goal was to determine whether the chytrid fungus B. dendrobatidis is present in these populations.

Materials and Methods

Field Data. Each salamander, upon capture, was handled with care using sterile gloves. The study subjects were first rinsed in a tube of sterile dechlorinated water to remove excess dirt, debris, and any transient bacteria on the skin. After a second rinse with dechlorinated water, each subject was swabbed ten times on its lateral, dorsal, and ventral sides using sterile cotton swabs. The tips of the swabs were then cut and placed in 1.5 mL microcentrifuge tubes containing 70% ethanol. Using a new sterile cotton swab, the subject was swabbed for cutaneous microflora, and the swab was streaked on a Difco R2A agar plate. Following the swabbing, the subject's morphological characteristics were recorded: weight, total length, and snout-to-vent length. Ecological data for the site where the subject was found were collected as well. The cover object under which the salamander was found was measured, along with the distance to the nearest tree; that tree's diameter was also recorded. The soil and air temperatures were then recorded. Finally, percentage canopy and ground coverage at the site were documented using standard techniques. The same ecological data were recorded for a random site five to twenty meters away from the salamander location.

DNA Extraction. DNA was extracted from the swabs collected in the field using the Qiagen Blood and Tissue DNA kit (Qiagen, Valencia, CA). The swabs were first transferred to 2 mL microcentrifuge tubes using sterile tweezers, and 400 µL of PBS were added. 20 µL of Qiagen proteinase K and 400 µL of Buffer AL were then added, immediately followed by 15 seconds of vortexing. The samples were then incubated in a water bath at 56◦C for ten minutes. 400 µL of 100% ethanol were then added, immediately followed by brief vortexing to remove droplets from the lid. 700 µL of the mixture were transferred to a DNeasy mini spin column with a collection tube and centrifuged at 6000 × g for one minute. The 2 mL collection tube containing the filtrate was then discarded and replaced with a new tube. The remaining mixture from the swab tube was added to the spin column and centrifuged again at



6000 × g for one minute. The 2 mL collection tube was again discarded and replaced. 500 µL of buffer AW1 were then added to the spin column, followed by centrifugation at 6000 × g for one minute. The 2 mL collection tube was replaced again, 500 µL of buffer AW2 were added, and the column was centrifuged at full speed for three minutes. The spin column was then placed in a sterile 1.5 mL centrifuge tube and 100 µL of Buffer AE were added to the spin column. This was followed by incubation at room temperature for one minute. The column was then centrifuged at 6000 × g for one more minute, leaving the DNA suspended in AE buffer.

PCR and Gel Electrophoresis. A polymerase chain reaction was used to detect B. dendrobatidis in the DNA extracted from the salamander samples. A positive control, purified Bd, was provided by Pisces Molecular Lab in Boulder, Colorado. 25 µL of cocktail mix was used as a negative control. Each PCR tube contained 10 µL of deionized water, 12.5 µL of Invitrogen master mix, 1.25 µL of forward primer ITS1-3 Chytr, and 1.25 µL of reverse primer 5.8S Chytr, for a total volume of 25 µL per tube (Windstam & Olori, 2014). The PCR protocol started with an initial denaturation at 95◦C for 4 min, followed by 50 cycles of denaturing at 95◦C for 30 s, annealing at 55◦C for 30 s, and extension at 72◦C for 45 s (Windstam & Olori, 2014). Of the total volume, 2.5 µL from each PCR tube was mixed with 0.6 µL of bromophenol blue loading dye and loaded into a 2% agarose gel in 1X TAE buffer. After gel electrophoresis was complete, the gel was viewed under UV light.

Data Analysis. With the ecological data collected from the field sampling over the last two summers, t-tests were used to determine whether there were significant differences between most of the measurements from the 40 salamander sites and the 40 random sites. T-tests were also used to determine whether there were significant differences between the urban site (Van Cortlandt Park) and the more rural and suburban parks. Chi-square tests were similarly used to test for significant differences in the percentages of canopy and ground coverage.
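As a minimal sketch of this analysis (our illustration; the arrays below are hypothetical placeholders, not the field data), the same tests can be run in Python with SciPy:

import numpy as np
from scipy import stats

# Hypothetical cover-object lengths (cm) at salamander vs. random sites.
salamander_sites = np.array([110.0, 98.5, 140.2, 125.4, 101.3])
random_sites = np.array([180.2, 150.7, 210.0, 175.5, 190.1])

# Two-sample t-test, as used for the continuous habitat variables.
t_stat, p_val = stats.ttest_ind(salamander_sites, random_sites)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test on coverage-category counts (hypothetical 2x2 table:
# rows = site type, columns = high vs. low canopy coverage).
table = np.array([[28, 12],
                  [22, 18]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")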

Results

DNA Analysis. Fig. 1 shows the results of the PCR and the gel electrophoresis that followed. Lane 1 contains the 100 bp marker, lane 2 the positive control, and lane 3 the negative control, followed by the 17 DNA samples in lanes 4-20. Table 1 displays the forward and reverse primers used for the detection of the Bd fungus. The positive control tested positive for Bd, demonstrating the efficacy of our primers. However, none of the amphibians sampled tested positive for the chytrid fungus.



Figure 1. PCR gel results of 17 cutaneous swabs for B. dendrobatidis detection. Lane 1: 100 bp marker; lane 2: (+) control; lane 3: (−) control; lanes 4-20: samples.

Table 1. Primers used for Bd detection

Forward primer (ITS1-3 Chytr):  5'-CCTTGATATAATACAGTGTGCCATATGTC-3'
Reverse primer (5.8S Chytr):    5'-AGCCAAGAGATCCGTTGTCAAA-3'

Ecological Analysis. The significant results from the t-tests are displayed in Figs. 2-4. Of the 14 t-tests run, two showed significant results: maximum length of cover object between salamander and random sites (Fig. 2), and distance to the nearest tree between salamander and random sites (Fig. 3). The cover objects at salamander sites were significantly shorter than those at random sites (t = 2.05, df = 35, p = 0.0479), and the average distance to the nearest tree was significantly shorter at salamander sites than at random sites (t = −2.45, df = 35, p = 0.0194). The difference in average ground temperature between salamander and random sites (Fig. 4) was marginally significant (p = 0.074). All other variables did not differ significantly between salamander sites and random locations (all p > 0.1).

Figure 2. Maximum Length of Cover Object Analysis

Figure 3. Distance to Nearest Tree (cm) Analysis

Figure 4. Ground Temperature Analysis



Surprisingly, there were no significant differences between the salamander sites in Van Cortlandt Park and those in Teatown and Saxon Woods. The average length of the cover objects (p = 0.292, t = 1.07, df = 34), average width of the cover objects (p = 0.192, t = 1.33, df = 34), average distance to the nearest tree (p = 0.960, t = 0.05, df = 34), average diameter of the nearest tree (p = 0.656, t = 0.45, df = 34), average ground temperature (p = 0.729, t = 0.35, df = 34), and average air temperature (p = 0.804, t = 0.25, df = 34) did not differ significantly between the urban and suburban locations. Similarly, canopy coverage (χ² = 1.82, p > 0.1) and ground coverage (χ² = 3.09, p > 0.1) did not differ significantly between Van Cortlandt and Westchester.

Discussion

Of the 17 cutaneous swabs collected, none tested positive for B. dendrobatidis. The positive control showed the expected band at around 146 bp. This is in addition to the samples from last year, which also all tested negative. However, this does not mean that B. dendrobatidis is absent from the population. Many factors could have played a role in this result. For one, the individuals we found and swabbed do not represent the total population at these locations, and additional sampling is necessary to support the claim that the population is free of Bd. The efficacy of the DNA extraction kit is also a factor to consider. The ecological portion of this study did yield substantial insight into the local salamander population. The results show that there are significant differences between the salamanders' microhabitats and the rest of the woods. The average maximum length of the cover object under which salamanders were found was significantly smaller than that of random cover objects available in the woods (Fig. 2). On average, salamanders were found under logs with a maximum length of 125.4 cm, whereas the average length of available objects was 180.2 cm. The choice to live under shorter logs may reflect the fact that smaller logs are usually more decomposed, and decomposed logs may allow for more optimal microclimates (better moisture and temperature, for example). This result contradicts other research stating that longer logs offer a better environment for salamanders because they "provide more area than smaller logs and may increase the survival rate of salamanders by reducing movement, water loss, and exposure to predators" (Hunter & Strojny, 2009). That may hold true, but it was not apparent in our results. The salamanders found over the last two years of this study also tended to live closer to mature trees than would be expected if they were selecting locations randomly (Fig. 3). This characteristic coincides with the known habitat preferences of this population and may simply show that salamanders tend to be located in dense, mature tree stands. Salamanders also prefer damp, dark, concealed locations, and sites near a mature tree could provide a cooler microenvironment with more shade; however, our results indicate that canopy coverage is not significantly different between salamander sites and random sites. It would be interesting to see whether canopy cover remains non-significant after further sampling and analysis, because other researchers have noted that canopy coverage



plays a role in salamander microhabitats. For example, the loss of canopy coverage may reduce the suitability of an environment because of the resulting rapid loss of moisture (Droege & Welsh, 2001). Finally, Fig. 4 shows that ground temperature at salamander sites tended to be cooler than at surrounding locations, again indicating that these amphibians select cooler microhabitats, likely to minimize water loss. Overall, our data demonstrate that salamanders are selecting specific types of microhabitats: close to mature trees and under relatively short cover objects in cooler locations. It was also established that there is no significant evidence of ecological differences between Van Cortlandt Park, an urban park, and Saxon Woods and Teatown, which are more suburban parks, since the ecological variables in all three locales were statistically similar. This being the case, we can conclude that salamanders in Van Cortlandt Park are able to find sites with appropriate microhabitat, and thus that urban salamanders are not forced to occupy less suitable microhabitats because of the effects of urbanization. With this information, future researchers can characterize a suitable and ideal habitat for this species and be confident in the ability to conserve populations even in urban environments.

Acknowledgements

My mentor, Dr. Carfagno, and I would like to thank Linda and Dennis Fenton ’73, and the School of Science Summer Research Scholars Program for financial support to pursue this biological research. A special thanks also goes to Andrew Paramo, and Mary Portes for their prior contribution to this study and for their assistance in data collection.

References

Babbit and Hocking (2014). "Amphibian contributions to ecosystem services." Herpetological Conservation and Biology 9(1): 1-17.
Breisch, A., and P. K. Ducey (2003). "Woodland and vernal pool salamanders of New York State." New York State Conservationist, June 2003. New York State Department of Environmental Conservation. Accessed 26 Feb. 2015.
Droege, S., and H. H. Welsh (2001). "A case for using plethodontid salamanders for monitoring biodiversity and ecosystem integrity of North American forests." Conservation Biology 15(3): 558-569.
Global Invasive Species Database (2006). http://www.issg.org/database/species/contacts.asp?si=123&fr=1&sts=&lang=EN
Hof, C., M. B. Araújo, W. Jetz, and C. Rahbek (2011). "Additive threats from pathogens, climate and land-use change for global amphibian diversity." Nature: 516-519.
Hunter, M. L., Jr., and C. Strojny (2009). "Log diameter influences detection of eastern red-backed salamanders (Plethodon cinereus) in harvest gaps, but not in closed-canopy forest conditions." Herpetological Conservation and Biology 5(1): 80-85.


Temporal variation in the prevalence of human intestinal parasites in two bivalve species from Orchard Beach, NY

Freda Fafah Ami Tei∗

Department of Biology, Manhattan College

Abstract. Bivalves have been shown to be infected with human intestinal parasites such as Cryptosporidium parvum, Toxoplasma gondii, and Giardia lamblia. Cryptosporidium parvum causes cryptosporidiosis in humans and other vertebrates, Toxoplasma gondii causes toxoplasmosis in humans, and Giardia lamblia causes giardiasis in humans and dogs. Because of their association with these parasites, bivalves could be used as bio-sentinels for human parasites in aquatic environments. Four bivalve species were collected at low tide from Orchard Beach, New York, in September 2014 and October 2015: the soft-shell clam (Mya arenaria), the ribbed mussel (Geukensia demissa), the blue mussel (Mytilus edulis), and the Atlantic oyster (Crassostrea virginica). We have previously reported on the prevalence of C. parvum and G. lamblia in these four bivalve species from that site. This study focuses on two of the four, Mytilus edulis and Mya arenaria. The goal was to determine the temporal variation in C. parvum, T. gondii, and G. lamblia in bivalves collected from Orchard Beach using the polymerase chain reaction (PCR). We found that the prevalence of C. parvum in the 2014 samples was 1% in Mytilus edulis, 16% in Geukensia demissa, and 50% in Mya arenaria. In 2015, however, T. gondii, G. lamblia, and C. parvum were not detected in Mya arenaria. In contrast, G. lamblia had a prevalence of 28.75% in Mytilus edulis in the 2015 samples, compared to 20.6% in the 2014 samples. These results indicate that bivalves can be used to assess water quality.

Introduction

Bio-sentinels are indicator species used to detect pollutants in the environment. Human intestinal parasites such as Cryptosporidium parvum, Toxoplasma gondii, and Giardia lamblia are detrimental to human health, so it is important to monitor these protozoan parasites. Conventional monitoring of water quality measures the levels of coliforms that build up in the aquatic environment (Ferguson et al., 1996). Bivalves could serve as bio-sentinels for detecting parasites because they are filter feeders: this feeding mechanism traps parasites in their tissues, and the presence of these parasites in the bivalves can then be tested. Last year, we reported the occurrence of Cryptosporidium parvum in three bivalve species collected from Orchard Beach, NY. The prevalence of C. parvum in these bivalves was 50% in Mya arenaria, 16% in Geukensia demissa, and 1% in Mytilus edulis (Tei et al., 2016). Cryptosporidium parvum is a human intestinal parasite that causes cryptosporidiosis. The parasite is introduced into the environment through human fecal matter, and its oocysts are transmitted through food or water; the CDC reports Cryptosporidium as a leading cause of human intestinal disease in America. Likewise, Toxoplasma gondii is a protozoan parasite that causes toxoplasmosis and is particularly dangerous to immunocompromised individuals and pregnant women. The mode of transmission for Toxoplasma

Research mentored by Ghislaine Mayer, Ph.D.



gondii, according to the CDC, is through improperly cooked food contaminated with the oocysts (CDC.gov). The oocysts can also be contracted from infected cat feces, passed from mother to child, and, in rare cases, transmitted through blood transfusions from an infected donor. Another protozoan parasite that infects humans is Giardia lamblia, which causes giardiasis in humans and dogs; according to the CDC, Giardia is less prevalent (CDC.gov). Giardiasis is contracted by drinking water or eating food contaminated with Giardia cysts. The goal of this research was to determine the temporal prevalence of Cryptosporidium parvum, Giardia lamblia, and Toxoplasma gondii in Mytilus edulis and Mya arenaria collected from Orchard Beach, NY, in 2014 and 2015.

Materials and Methods

Sample collection and DNA isolation. Four species of bivalves were collected from Orchard Beach, NY, during low tide: Mya arenaria (8 in 2014, 17 in 2015), Crassostrea virginica (10 in 2014, 9 in 2015), Geukensia demissa (44 in 2014, 64 in 2015), and Mytilus edulis (97 in 2014, 80 in 2015). The digestive tract, the gills, the foot, and the mantle were dissected from each individual. DNA was isolated from each tissue using the Qiagen DNA tissue kit (Qiagen, Valencia, CA). Briefly, 0.25 g of each individual tissue was cut into small pieces to ensure more efficient lysis and placed into 180 µL of lysis buffer. DNA was isolated per the manufacturer's protocol, and its quantity and purity were determined using a UV spectrophotometer.

Polymerase chain reaction and agarose gel electrophoresis. The prevalence of Cryptosporidium spp. was determined by PCR using primer sets that target the 18S ribosomal RNA subunit gene of Cryptosporidium parvum (forward primer: 5'-CCGAGTTTGATCCAAAAAGTTACGAA-3'; reverse primer: 5'-TAGCTCCTCATATGCCTTATTGAGTA-3') with the following conditions: 35 cycles at 94◦C for 1 min, 52◦C for 2 min, and 72◦C for 3 min, with a final extension at 72◦C for 5 min (Rochelle et al., 1996). Purified C. parvum DNA was used as a positive control in the PCR reaction, while water was used as a negative control. The PCR products were detected on a 1.5% agarose gel stained with ethidium bromide and visualized under UV light. T. gondii DNA was detected using primer sets targeting the GRA6 gene (forward primer: 5'-GTAGCGTGCTTGTTGGCGAC-3'; reverse primer: 5'-TACAAGACATAGAGTGCCCC-3') with the following PCR conditions: 95◦C for 5 minutes for one cycle; 35 cycles of 94◦C for 30 seconds, 60◦C for 1 minute, and 72◦C for 2 minutes; and one final extension cycle at 72◦C for 7 minutes (Fazaeli et al., 2000). Purified T. gondii DNA was used as a positive control, while water was used as a negative control. PCR products were detected on a 1.6% agarose gel stained with ethidium bromide under UV light.
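As a quick sanity check on instrument time, the cycling parameters above imply run lengths of a few hours; the minimal Python sketch below (our addition, which ignores ramp times) totals the C. parvum protocol:

# C. parvum protocol (Rochelle et al., 1996): 35 cycles of
# 94 C / 1 min, 52 C / 2 min, 72 C / 3 min, plus a 5 min final extension.
cycles = 35
per_cycle_min = 1 + 2 + 3          # denature + anneal + extend, in minutes
final_extension_min = 5

total_min = cycles * per_cycle_min + final_extension_min
print(total_min, "min (~%.1f h)" % (total_min / 60))   # 215 min, about 3.6 h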



G. lamblia DNA was detected using a previously described nested-PCR protocol with primer sets targeting the β-giardin gene (Lalle et al., 2005). Purified G. lamblia DNA was used as a positive control, while water was used as a negative control. PCR products were detected on a 1.5% agarose gel stained with ethidium bromide and visualized under UV light. The genotype of the positive samples was determined by sending them to a gene sequencing facility (GENEWIZ, South Plainfield, NJ). Nucleotide sequences were obtained and analyzed using the National Institutes of Health (NIH) Basic Local Alignment Search Tool (BLAST) to search for similarities with previously determined nucleotide sequences in the database.
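A BLAST search of this kind can also be scripted; the following minimal sketch (our illustration, not the authors' workflow) uses Biopython's NCBI web BLAST interface, with a placeholder string standing in for a real β-giardin amplicon sequence:

from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder query; a real analysis would read the sequencing-derived sequence.
query_seq = "CCTTGATATAATACAGTGTGCC"  # hypothetical fragment

# Submit a nucleotide-nucleotide (blastn) search against the NCBI nt database.
result_handle = NCBIWWW.qblast("blastn", "nt", query_seq)
record = NCBIXML.read(result_handle)

# Report the top-scoring alignments, whose annotations would identify the assemblage.
for alignment in record.alignments[:3]:
    best_hsp = alignment.hsps[0]
    print(alignment.title, f"E-value: {best_hsp.expect:.2e}")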

Results

Detection of protozoan intestinal parasites in Mya arenaria. Seventeen Mya arenaria specimens collected in 2015 were tested using PCR to detect Cryptosporidium parvum, Toxoplasma gondii, and Giardia lamblia DNA. None of these parasites were detected in Mya arenaria (Table 1).

Table 1. Infection status of Mya arenaria

Species name    Specimen number    C. parvum DNA    T. gondii DNA    G. lamblia DNA
Mya arenaria    1-17               Absent           Absent           Absent

Detection of protozoan intestinal parasites in Mytilus edulis. Eighty Mytilus edulis specimens collected in 2015 were tested using PCR to detect Cryptosporidium parvum, Toxoplasma gondii, and Giardia lamblia DNA. T. gondii and C. parvum DNA were not detected in any of the eighty specimens (Table 2). G. lamblia DNA was detected in 22 specimens of Mytilus edulis, a prevalence of 28.75% (22/80) (Fig. 1 and Table 2). All tissue types of Mytilus edulis were positive for G. lamblia (Fig. 2).

Table 2. Infection status of Mytilus edulis

Species name      Specimen number    C. parvum DNA    T. gondii DNA    G. lamblia DNA
Mytilus edulis    1-37               Absent           Absent           Absent
Mytilus edulis    38-40              Absent           Absent           Present
Mytilus edulis    41-42              Absent           Absent           Absent
Mytilus edulis    43                 Absent           Absent           Present
Mytilus edulis    44-50              Absent           Absent           Absent
Mytilus edulis    51-57              Absent           Absent           Present
Mytilus edulis    58-64              Absent           Absent           Absent
Mytilus edulis    65                 Absent           Absent           Present
Mytilus edulis    66-69              Absent           Absent           Absent
Mytilus edulis    70-80              Absent           Absent           Present



Figure 1. Mytilus edulis infection with G. lamblia. Top: lane 1, 100 bp marker; lane 2, positive control; lanes 3-20, Mytilus edulis tissue DNA. Bottom: lane 1, 100 bp marker; lanes 2-19, Mytilus edulis tissue DNA; lane 20, negative control.

Figure 2. Tissue distribution of Giardia lamblia in the bivalves collected from Orchard Beach, New York, in 2015.

DNA sequence analysis of G. lamblia DNA from bivalves collected in 2014 and 2015. The Mytilus edulis tissues that were positive for Giardia in 2015 were analyzed using the Basic Local Alignment Search Tool (BLAST). We found that 31.8% (7/22) of the positive G. lamblia samples were of the sub-assemblage AI genotype, while 68.1% (15/22) were of the sub-assemblage AII genotype (Table 3 and Fig. 3).

Table 3. DNA sequence analysis of detected G. lamblia DNA in Mytilus edulis collected in 2015

Mollusks          Percentage       Assemblage    Sub-assemblage
Mytilus edulis    (15/22) 68.1%    A             AII
Mytilus edulis    (7/22) 31.8%     A             AI



Figure 3. Comparison of the Prevalence of the Genotype from 2014 to 2015.

Temporal variation in intestinal protozoan parasites in bivalves collected from Orchard Beach, NY, in 2014 and 2015. Seventeen Mya arenaria and 80 Mytilus edulis specimens were collected from Orchard Beach, NY, in 2015. PCR and gel electrophoresis were conducted to determine the presence of C. parvum, G. lamblia, and T. gondii. None of the tested parasites were detected in Mya arenaria (Table 1). There was a marked decrease in the prevalence of C. parvum in Mya arenaria, Geukensia demissa, and Mytilus edulis from 2014 to 2015: C. parvum was not detected in any of the bivalves tested (Table 1 and Fig. 4). There was no change in the prevalence of G. lamblia from 2014 to 2015 in Mya arenaria and Geukensia demissa. In contrast, there was a 1.4-fold increase in the prevalence of G. lamblia in Mytilus edulis from 2014 to 2015 (Fig. 5).
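The prevalence and fold-change arithmetic behind these comparisons is straightforward; the short Python sketch below (our illustration, not the authors' script) reproduces it from the values reported above:

def prevalence(positives, total):
    # Percent of specimens in which parasite DNA was detected.
    return 100.0 * positives / total

# Illustrative: 4 positives among the 8 Mya arenaria collected in 2014
# would give the reported 50% prevalence.
print(prevalence(4, 8))                    # 50.0

# Fold change in G. lamblia prevalence in Mytilus edulis, 2014 -> 2015,
# using the reported prevalences.
prev_2014, prev_2015 = 20.6, 28.75
print(round(prev_2015 / prev_2014, 1))     # 1.4, the reported 1.4-fold increase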

Discussion

We observed a change in the prevalence of human intestinal parasites in bivalves collected from Orchard Beach in two consecutive years (2014-2015). The diversity and prevalence of human intestinal parasites were higher in 2014. We noticed a drop in the prevalence of C. parvum in the samples collected in 2015 (Fig. 4), and a decrease in the prevalence of G. lamblia in Mya arenaria and Geukensia demissa. This could be because the beach was cleaned prior to our collection: in the summer of 2015, News 12 The Bronx, a local TV station, reported that the beach was being closed because of contamination with fecal matter (News 12, 2015). It is of the utmost importance that Cryptosporidium parvum be closely monitored, because the CDC lists it as an agent for bioterrorism (Lopez et al., 2013). Bivalves in aquatic environments could serve as good bio-sentinels for closely monitoring Cryptosporidium parvum and Giardia lamblia. This could help prevent an outbreak, which would be a threat to individuals who are immunocompromised (Cozon et al., 1994).



Figure 4. Change in the prevalence of Cryptosporidium parvum between 2014 and 2015 in three bivalve species collected from Orchard Beach, NY.

Figure 5. Comparison of prevalence of Giardia lamblia between 2014 and 2015.

For Toxoplasma gondii, there was no change in prevalence from 2014 to 2015. It is possible that Mytilus edulis, Geukensia demissa, and Mya arenaria are not good bio-sentinels for T. gondii. The distribution of G. lamblia in the different tissues showed that the foot was the tissue in which DNA was detected most often, followed by the adductor muscle, the mantle, the digestive gland, and the gills. We previously reported that the tissue distribution of C. parvum in Mya arenaria was largely in the foot and the digestive tract (Tei et al., 2016). The high infection rate in the foot could be because the foot is the most exposed tissue or because G. lamblia targets that organ more than any other. G. lamblia is characterized by seven different genotypes, which are set apart by differences in their metabolism, drug sensitivity, and host preference (Andrews et al., 1989). Assemblages A and B infect only humans. Assemblage A is usually associated with humans and is subdivided into sub-assemblage AI and sub-assemblage AII. Assemblages C and D infect dogs, while hoofed animals are infected by assemblage E. Assemblage F infects cats, while assemblage G infects only rodents (Lalle et al., 2005). Last year, we



reported the occurrence of assemblage A with sub-assemblage AII in all the tested Mya arenaria (3/3). Sub-assemblage AII is found only in humans. We reported that in 2014, 100% (3/3) of the G. lamblia found in Mya arenaria were of assemblage A, sub-assemblage AII. Fifty percent (1/2) of the Geukensia demissa isolates were of assemblage C and 50% (1/2) of assemblage D. Moreover, 57.1% (8/14) of the positive G. lamblia samples from Mytilus edulis were of assemblage A, sub-assemblage AII, and 57.1% (8/14) were of assemblage C (Tei et al., 2016). In 2015, sequencing analysis of the positive G. lamblia tissues showed that 15/22 of the positive tissues were of assemblage A with sub-assemblage AII and 7/22 were of sub-assemblage AI (Table 3). Sub-assemblage AI typically infects cats and can be transferred to humans (Sprong et al., 2009). There was a decrease in the diversity of the genotypes detected in tissues from bivalves collected in 2015. Future studies will focus on comparing the prevalence and genotype of human intestinal parasites from year to year, as well as from other sites in the New York area.

Acknowledgements I would like to thank the Linda and Dennis Fenton ’73 Endowed Biology Research fund for supporting this research. I would also like to thank my research advisor Dr. Mayer for her guidance and mentorship and my co-authors Steven Kowalyk, Matthew Presta, Jhenelle Reid, Christopher Annabi, Joey Annabi and Mohammed Fazeem. I would also like to thank Dr. Judge for his help in the collection of bivalves.

References

Andrews, R. H., M. Adams, P. F. Boreham, G. Mayrhofer, and B. P. Meloni (1989). Giardia intestinalis: electrophoretic evidence for a species complex. Int. J. Parasitol. 19: 183-190.
Cozon, G., F. Biron, M. Jeannin, D. Cannella, and J. P. Revillard (1994). Secretory IgA antibodies to Cryptosporidium parvum in AIDS patients with chronic cryptosporidiosis. The Journal of Infectious Diseases 3: 696-699.
Fazaeli, A., P. E. Carter, M. L. Darde, and T. H. Pennington (2000). Molecular typing of Toxoplasma gondii strains by GRA6 gene sequence analysis. International Journal for Parasitology 30: 637-642.
Ferguson, C., B. Coote, N. Ashbolt, and I. Stevenson (1996). Relationships between indicators, pathogens and water quality in an estuarine system. Water Research 30: 2045-2054.
Centers for Disease Control and Prevention (CDC.gov): http://www.cdc.gov/parasites/crypto/; http://www.cdc.gov/parasites/toxoplasmosis/; https://www.cdc.gov/parasites/giardia/
News 12 The Bronx (2015): http://bronx.news12.com/news/officials-orchard-beach-closed-to-swimmers-due-to-contamination-1.10781543



Lalle, M., E. Pozio, G. Capelli, F. Bruschi, D. Crotti, and S. M. Caccio (2005). Genetic heterogeneity at the β-giardin locus among human and animal isolates of Giardia duodenalis and identification of potentially zoonotic subgenotypes. International Journal for Parasitology 35: 207-213.
Lopez, J. (2013). Carl A. Burtis, Edward R. Ashwood and David E. Bruns (eds): Tietz Textbook of Clinical Chemistry and Molecular Diagnosis (5th edition). Indian Journal of Clinical Biochemistry 28(1): 189.
Rochelle, P. A. (1996). Comparison of primers and optimization of PCR conditions for detection of Cryptosporidium parvum and Giardia lamblia in water. Applied and Environmental Microbiology 63: 106-114.
Sprong, H., S. M. Cacciò, and J. W. B. van der Giessen (2009). Identification of zoonotic genotypes of Giardia duodenalis. PLoS Negl. Trop. Dis. 3.
Tei, F. F., S. Kowalyk, J. A. Reid, M. A. Presta, R. Yesudas, and D. C. G. Mayer (2016). Assessment and molecular characterization of human intestinal parasites in bivalves from Orchard Beach, NY, USA. International Journal of Environmental Research and Public Health 13(4): 381.


Aniline analogues as new ligands for chromate capture

Ashley Abid∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Chromium(VI), or hexavalent chromium, is the toxic form of the element chromium. Owing to widespread pollution from its use in manufacturing industries, the World Health Organization (WHO) reports that trace amounts of chromium(VI) have been found in groundwater and drinking water as well. We investigated different organic compounds that could bind to chromium(VI), improve the absorbent capacity of GAC (granular activated carbon), and thereby remove the greatest amount of chromium(VI).

Introduction

The element chromium is a transition metal known for its steely gray luster, resistance to tarnishing, and high melting point. The name of the element comes from the Greek word "chroma," since many chromium compounds are colored. Chromium occurs in several oxidation states, the most common being chromium(III) and chromium(VI). Generally, the chromium(VI) oxidation state exists as either chromate or dichromate ions. Because of its versatile properties, the element is widely used in manufacturing industries such as tanning and dyeing. Chromium(VI), or hexavalent chromium, is the most toxic form of the element. The Environmental Protection Agency (EPA) classifies chromium(VI) as a Group I carcinogen. According to the EPA, chromium(VI) is especially dangerous to the human respiratory system: both short-term and long-term exposure can cause shortness of breath, coughing, and wheezing, with chronic exposure more strongly linked to septum damage, bronchitis, decreased pulmonary function, and pneumonia. Animal studies also show chromium(VI) exposure to cause lung tumors. Because of industrial pollution from the common use of chromium in manufacturing, trace amounts of chromium(VI) have been found not only in groundwater but in drinking water as well. The prevalence of the pollutant and its potential harm to our health as a carcinogen has led to a rise in research on removing chromium(VI) from water. Currently, the World Health Organization (WHO) reports that the maximum acceptable amount of chromium(VI) in water is 0.05 ppm. The human body can convert a certain amount of chromium(VI) to chromium(III), which is actually beneficial to the body. Chromium(III), or trivalent chromium, acts as an essential mineral for us, since it is known to enhance the action of insulin, which is necessary for carbohydrate breakdown. Chromium(III) also appears to play a role in the metabolism of proteins, carbohydrates, and fats. Since chromium(III) has been shown to help people with insulin resistance, many people take trivalent chromium supplements as well.

Research mentored by John Regan, Ph.D.



Table 1. Current remediation methods for chromium(VI) in water

Adsorption                Chemical reduction
Biosorption               Electrochemical treatment
Membrane filtration       Ion exchange

There are several methods for removing chromium(VI) from water; Table 1 lists some of those currently in use [1]. Chemical reduction methods involve the use of reducing agents to convert chromium(VI) to its trivalent form, chromium(III). Fig. 1 shows an example of this chemical reduction, in which the reducing agent is ascorbic acid, also known as vitamin C, reducing chromium(VI) to its benevolent form, chromium(III). Other common reducing agents used to reduce chromium(VI) to chromium(III) are hydrogen sulfide and iron oxide. The main drawback of this method is that the reducing agents are not recyclable, leaving the water with residual organic material.

Figure 1. Reduction of Chromium(VI) to Chromium (III) With Ascorbic Acid

Many of these methods are primarily physical means of removing chromium(VI) from water. Adsorption is the process by which molecules concentrate as a film on the surface of a sorbent; it depends on the ratio of the concentration of the molecules to their solubility. Though cost-effective, research on adsorption shows that it is not very effective at removing all of the chromate from groundwater. One of the substances commonly used to treat chromium-contaminated water is GAC (granular activated carbon). Since GAC is a form of activated carbon, it works better with nonpolar, hydrophobic substances such as benzene than with polar, charged species like chromium(VI) [2]. Our approach is based on converting chromate into neutral organic molecules that can be effectively absorbed by GAC. Fig. 2 shows an example of the cyclic and noncyclic esters formed between chromate and catechol or phenol, respectively. Such esters can form during physical removal of chromium(VI), as with GAC.



Figure 2. Cyclic and Noncyclic Chromate Complex Formed with Chromate and Catechol

Activated carbon is often used to absorb organic materials and residual disinfectants in water supplies [3]. GAC is the granular form of this carbon and has commonly been employed in the petroleum industry to purify gasoline by removing organic impurities [4]. Because it is primarily used to remove organic materials, it does not work well with polar, charged species like chromate; this has been one of the main obstacles to using GAC to remove chromium(VI) [5].

Materials and Methods

Preparing the Stock Solution. 60 mg of solid potassium chromate (K2CrO4) was mixed with 700 mL of water to produce a 2.00 × 10−2 M stock solution of potassium chromate.

Using the Chromate Capture Compounds. 10 mL of the 0.02 M potassium chromate stock solution was stirred for an hour with four molar equivalents of the chromate capture compound. The chromate capture compounds investigated were catechols, phenols, and anilines. After the hour of stirring, 600 mg of GAC was added and the mixture was stirred for 30 minutes. An aliquot of the solution was removed and diluted to a 2 × 10−4 M (0.0002 M) concentration. The amount of the chromate complex that the GAC was able to remove was then measured by UV/Vis spectroscopy. The effectiveness of the GAC in absorbing the chromate complex was determined by comparing the height of the absorbance peak at 372 nm (which indicates the presence of chromate) to that of the prepared stock solution, from which the percent chromate removal was calculated.
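Both calculations in this procedure are simple to sketch: the mass of capture compound to weigh out for a given number of molar equivalents, and the percent removal from the 372 nm peak heights. The Python fragment below is our illustration, taking the standard molar mass of 3,4-dimethylaniline (about 121.18 g/mol); the absorbance pair at the end is hypothetical.

# Mass of capture compound needed for N molar equivalents relative to chromate.
def capture_mass_g(equivalents, chromate_M=0.02, volume_L=0.010, mw=121.18):
    moles_chromate = chromate_M * volume_L      # moles of chromate in the aliquot
    return equivalents * moles_chromate * mw

print(round(capture_mass_g(10), 3))             # 0.242 g, as cited for 10 equivalents

# Percent chromate removal from the 372 nm absorbance peaks, comparing the
# treated, diluted aliquot to the equally diluted untreated stock.
def percent_removal(abs_sample, abs_stock):
    return 100.0 * (1.0 - abs_sample / abs_stock)

print(round(percent_removal(0.22, 1.00), 1))    # 78.0 (hypothetical absorbances)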



Results

Of the molecules we tested as chromate capture compounds, the anilines did the best, removing the most chromate. In particular, 3,4-dimethylaniline and p-phenylenediamine removed the highest percentages of chromate compared to the other chromate capture compounds (Table 2).

Table 2. Percentage of Cr(VI) removed with 4 and 10 molar equivalents of chromate capture compounds

Name                   Percent Cr(VI) Removal,      Percent Cr(VI) Removal,
                       4 Molar Equivalents          10 Molar Equivalents
3,4-dimethylaniline    78%                          99%
p-phenylenediamine     70%                          75%
phenol                 36%                          39%
4-nitrocatechol        38%                          42%

Discussion

GAC is not an effective absorbent of Cr(VI) at neutral pH because of the high polarity of chromate ions such as CrO4 2−; neutral compounds, such as H2CrO4, are better absorbed. Our results, shown in Table 2, indicate that the combination of organic compounds and GAC is a superior method of removing Cr(VI) compared to GAC alone. Our hypothesis relies on the formation of chromate esters from chromate and, for example, catechol; the uncharged chromate ester is better suited to absorption onto GAC. Of the compounds tested, 3,4-dimethylaniline with GAC was the best at removing Cr(VI): a 10 molar excess of 3,4-dimethylaniline (0.242 g) and GAC (0.6 g) removed virtually all of the Cr(VI) from 10 mL of 0.02 M solution. Our research shows that GAC is more likely to absorb compounds with greater electron density than those bearing electron-withdrawing groups, such as the nitrocatechol. GAC prefers nonpolar compounds and so does not absorb polar, charged molecules like chromate well on its own; when GAC alone was tested, it removed only about 30% of the chromium(VI). This is why forming the chromate complex was helpful: it reduces the polar nature of chromate by attaching it to an electron-rich organic compound like 3,4-dimethylaniline. The more polar nitrocatechol was less effective at removing chromium(VI), most likely because of the polarity of the nitro group. Fig. 3 shows the noncyclic ester, or chromate complex, formed between 3,4-dimethylaniline and the chromate ion. The aromaticity of the benzene ring and the increased



electron density from the methyl groups make the molecule more nonpolar and thus more favorable both for reacting with chromate and for absorption by GAC. When four molar equivalents of the compound were used, the molecule removed about 78% of the chromate from the stock solution; when the amount was increased to ten molar equivalents, the 3,4-dimethylaniline removed almost all of the chromate present. Compared to 3,4-dimethylaniline, the non-aniline analogues were less effective at removing the chromium(VI) even at increased concentrations, with phenol and nitrocatechol removing about 39% and 42% of the chromium(VI), respectively.

Figure 3. Formation of Chromate Complex Between Chromate and 3,4-dimethylaniline and Sequestration of Complex on GAC

For future investigations, looking into more alkylated forms of aniline may lead to a compound with not only more electron density but also an insoluble compound. In addition to removing chromium(VI), another main goal of this research project was to find a compound that was also insoluble, to remove the concern of leaving residual organic material, once the chromium(VI) has been removed. The 3,4-dimethylaniline acts as a partially soluble compound. Although it was able to remove most of the chromate, there was still a small amount of residual organic material that the GAC could not absorb. Interestingly, the 3,4-dimethylaniline left a consistent amount of



residual organic material even at the different concentrations of four molar equivalents and ten molar equivalents. This may suggest that the partially soluble nature of the molecule makes it less likely to leave residual material, and so, moving forward with a more insoluble compound would help greatly in the investigation of removing chromium (VI)

Acknowledgements

I would like to thank the Jasper Research Scholars Program for providing the funding to perform this work. I would also like to thank the Department of Chemistry and Biochemistry at Manhattan College for providing the space and resources for my research, as well as Dr. Regan for his guidance and mentorship.

References

[1] Ashri, Wan, et al., "Removal of hexavalent chromium-contaminated water and wastewater: a review," Water, Air, and Soil Pollution 200: 59-77 (2009).
[2] Weber, Walter, "Sorption from solutions with porous carbon," University of Michigan, pp. 90-93 (1967).
[3] DeSilva, Frank, "Activated carbon filtration," Water Quality Products Magazine (January 2, 2000).
[4] Noorman, David, et al., "Groundwater Remediation and Petroleum: A Guide for Underground Storage Tanks," CRC Press, pp. 52-57 (1990).
[5] Nakajima, Akira, and Baba, Yoshinari, "Mechanism of hexavalent chromium adsorption by persimmon tannin gel," Water Research (August 2004).


Remediation of water containing chromium(VI) using an insoluble ascorbic acid derivative

Mary Cacace∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. At least 74 million Americans in forty-two states consume a Group 1 carcinogen, as classified by the International Agency for Research on Cancer (IARC). Cr(VI), one of the United States' top pollutants, is produced by industries such as stainless steel manufacturing and textile dyeing, among many others. Because of its mobility in soil, this Cr(VI) byproduct has migrated from groundwater into drinking water. Ascorbic acid, commonly known as vitamin C, is known to reduce Cr(VI) to chromium(III), a more beneficial form of Cr found in daily multivitamins. Our goal is to produce an insoluble ascorbic acid derivative that can both perform the chemical reduction and subsequently be recycled for additional use.

Introduction

Chromium is a metallic element characterized by luster, hardness, heat stability, resistance to corrosion, and brittleness. It is a transition metal and therefore has many oxidation states, from Cr(II) to Cr(VI), with the trivalent and hexavalent states being the most dominant forms [1]. Because of its varied chemical properties and oxidation states, Cr has many industrial uses, including stainless steel, textile dyes, chrome plating, fungicides, wood preservatives, ceramics, and leather tanning. In employing Cr, these industries produce Cr(VI) as a harmful byproduct. Because it is mobile in soil, Cr(VI) can contaminate the groundwater near these industrial sites [2]. In such aqueous environments, Cr(VI) exists as both chromate (CrO4 2−) and dichromate (Cr2O7 2−) ions [1]. According to the WHO, the safest level of exposure to Cr ions is less than 0.05 mg/L. The adverse effects of Cr(VI) exposure, both long and short term, are extensive: short-term exposure can lead to skin and stomach irritation in the form of ulcers, while long-term exposure can cause respiratory damage, contact dermatitis, and additional damage to the liver, kidneys, and gastrointestinal tract [3]. The IARC has classified Cr(VI) as Group 1 (carcinogenic to humans), and both the WHO and the EPA have also declared it a carcinogen. The second dominant form of Cr, trivalent Cr, does not present the same harmful effects after exposure, nor is it as mobile in soil. In fact, it has no toxicity and is an essential nutrient for certain metabolic processes [4]. The difference in toxicity between hexavalent and trivalent Cr is directly related to the ease with which Cr(VI) passes through cell membranes as well as its greater oxidizing potential [1]; trivalent Cr is not absorbed as easily. Current remediation methods to remove Cr(VI) from groundwater are physical, biological, and chemical. While the physical methods simply remove Cr(VI) from the water supply and

Research mentored by John Regan, Ph.D.



accumulate Cr(VI) waste, the biological and chemical methods reduce the Cr(VI) to the more beneficial Cr(III) form. The physical methods also tend to be more expensive, because the Cr waste must be disposed of after the separation takes place; because the chemical techniques reduce the Cr, disposal is unnecessary, making them more cost effective. Physical separation techniques rely on ion exchange chromatography, adsorption, and filtration. In ion exchange chromatography, the stationary phase of the column is composed of ion exchange resin. The mobile phase (the groundwater) is passed through the column at low pH, and the Cr(VI), having a high affinity for the resin, replaces the ion of lower affinity (typically chloride or hydroxide) previously bound to the resin [5]. To remove the Cr(VI) bound to the column resin, it is first converted to sodium chromate; the column is then washed with a strong base, such as sodium hydroxide (NaOH), to regenerate it. While this method successfully removes Cr(VI) from water, disposing of the Cr(VI) waste is both time consuming and expensive. Adsorption is a less expensive physical separation method with the same effectiveness as ion exchange chromatography at removing Cr(VI) from water. Many sorbents (substances that collect or absorb molecules of another substance) have been used to keep costs low, including biosorbents such as bacteria, yeast, fungi, and seaweed, as well as nutshells and coffee [5]. Adsorption is an equilibrium process: the mass of Cr(VI) that adsorbs to the surface of the sorbent depends on the concentration of aqueous Cr(VI) and the affinity of the sorbent for Cr(VI) [5]. Because a variety of sorbents can be used, another benefit of this method is availability, in addition to low cost. However, as with ion exchange chromatography, the Cr(VI) waste adsorbed to the sorbent must be disposed of, which raises the cost of this remediation method. Membrane filtration is also known to be effective in removing Cr from polluted groundwater. Three types of membrane filtration are effective at removing Cr(VI): inorganic, polymeric, and liquid membranes [4]. While all have proven successful, the inorganic membrane is the most effective; in previous inorganic membrane filtration work, a carbon membrane base was modified in different ways to test for maximum rejection of Cr(VI). As with the physical separation methods described above, the issue arises in disposing of the Cr(VI) waste after separation. Biological remediation methods also exist to remove Cr(VI) from soil and groundwater. One method in particular, phytoremediation, refers to the process in which plants take up Cr from the soil and reduce Cr(VI) to Cr(III) [5]. This reduction can also be carried out by microorganisms such as bacteria, algae, yeast, and fungi. While these methods are not without limitations, the ability of plants and microorganisms to reduce Cr(VI) to Cr(III) minimizes Cr toxicity. Chemical remediation methods for removing Cr(VI) from groundwater are beneficial in several ways. Most importantly, they rely on the ability to reduce Cr(VI) to Cr(III), which



both lowers toxicity and eliminates the need for Cr(VI) waste disposal, decreasing the cost. Many common reagents are known to perform this reduction, including Fe(0), H2S, and SO2 [5]. When these reagents are used to remediate a solution with a high concentration of Cr, a higher pH is required for Cr(III) to be produced. The problem with these reducing agents is that they cannot be recycled once the reduction takes place. Ascorbic acid, commonly known as vitamin C, is known to reduce Cr(VI) to Cr(III), following the reaction in Fig. 1 below:

Figure 1. Reduction of Cr(VI) to Cr(III) with ascorbic acid as the reducing agent. Ascorbic acid is oxidized to dehydroascorbic acid. Both forms of ascorbic acid are soluble in water.
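For reference, the overall stoichiometry implied by Fig. 1 can be balanced explicitly; the equation below is our addition (assuming HCrO4− as the dominant Cr(VI) species in acidic water), pairing the two-electron oxidation of ascorbic acid (C6H8O6) to dehydroascorbic acid (C6H6O6) with the three-electron reduction of Cr(VI):

\[
2\,\mathrm{HCrO_4^{-}} + 3\,\mathrm{C_6H_8O_6} + 8\,\mathrm{H^{+}} \longrightarrow 2\,\mathrm{Cr^{3+}} + 3\,\mathrm{C_6H_6O_6} + 8\,\mathrm{H_2O}
\]

Three ascorbic acid molecules thus supply the six electrons needed to reduce two Cr(VI) centers, which is why an excess of the reductant is typically used.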

According to Xu et al. [2], Cr(VI) was dispelled from a solution of potassium dichromate in the presence of ascorbic acid. Because both ascorbic acid and dehydroascorbic acid are soluble in water, a problem is encountered in recycling the compound once the reduction takes place. The recycling step is both environmentally friendly and necessary: it ensures that there are no unwanted environmental effects from excess ascorbic acid in the water. Therefore, to compromise the water solubility of ascorbic acid, organic syntheses were performed to attach enough hydrophobic groups to the ascorbic acid structure that it could be recovered from the water after the reduction of Cr(VI). These modifications were performed at C5 and C6 of ascorbic acid, after protecting the hydroxyls at the carbons involved in the redox chemistry (C2 and C3) as benzyl ethers. Once the optimal hydrophobic groups were successfully attached, the benzyl ethers would be cleaved to expose the C2 and C3 hydroxyls. This method would reduce the toxicity of Cr(VI) by converting it to Cr(III), avoid the expensive additional step of disposing of Cr(VI) waste, and eliminate any harmful effects a high concentration of ascorbic acid might have on groundwater.

Materials and Methods H NMR spectra were recorded on Anasazi Instruments Inc., Eft-60 NMR Spectrometer. Purification was accomplished by a CombiFlash RetrieveTM HPLC using ethyl acetate in hexanes as the eluent. 1

Synthesis of ketal-protected ascorbic acid (Fig. 5). 5.0 g of L-(+)-Ascorbic Acid and 10% TFA stirred at reflux (âˆź60â—Ś C) in acetone overnight. Volatiles were removed in vacuo. 1 H NMR



(DMSO-d6/TMS: δ 1.5 ppm, s, 6H).

Addition of benzyl groups to C2 and C3 of ketal-protected ascorbic acid (Fig. 6). Following the experimental procedure of Kulkarni et al. [6], 1.9 g of ketal-protected ascorbic acid was stirred with 2.5 molar equivalents of potassium carbonate in acetone. When reflux was reached (∼60◦C), 2.3 equivalents of benzyl bromide were added and the reagents were stirred at reflux overnight. After the reaction, the volatiles were removed in vacuo. The residue was diluted with saturated sodium chloride and ethyl acetate, and the aqueous layer was extracted three times with ethyl acetate. The combined organic layers were treated with drying agent, and the ethyl acetate was removed in vacuo. The product was a white solid. 1H NMR (CDCl3): δ 7.4 ppm, m, 10H; 5.2 ppm, m, 4H.

Cleaving the ketal to form the diol (Fig. 7). 4.9 g of ascorbic acid bearing the ketal and benzyls at C2 and C3 was stirred in 10% TFA in methanol overnight at 50◦C. Toluene was added and the volatiles were removed in vacuo. The product was a brown oil.

Addition of a tosyl group to C6 of the diol (Fig. 8). 1.1 g of the ascorbic acid diol was stirred with 25 molar equivalents of pyridine at 0◦C. Five molar equivalents of p-toluenesulfonyl chloride were added and the reaction was stirred, warming to RT, overnight. The reaction was brought to 0◦C, water was added, and the mixture stirred for twenty minutes. Ethyl acetate was then added, and the organic layer was washed three times each with 10% sodium bicarbonate, 5% hydrochloric acid, and saturated sodium chloride. The collected organic layer was dried with drying agent, and volatiles were removed in vacuo. The tosyl product was a brown oil. 1H NMR (CDCl3): δ 2.3 ppm, s, 3H.

Ring closing of the hydroxytosylate to form the epoxide (Fig. 9). 700 mg of the ascorbic acid tosylate was stirred in THF with 1.2 molar equivalents of sodium hydride at 0◦C and warmed to RT over the course of an hour. The reaction was diluted with MTBE and passed through a fine filter containing silica gel to remove the sodium p-toluenesulfonate byproduct. Volatiles were removed in vacuo. The product was a yellow oil.

Opening the epoxide to form the naphthyl ether (Fig. 10). 1.2 equivalents of 2-naphthol and 1.2 equivalents of sodium hydride were stirred in THF at 0◦C to form sodium naphthoxide. The epoxide (40 mg) was added and the reaction was stirred while warming to RT for three hours. The solvent was removed, and the mixture was diluted with water and ethyl acetate. The organic layer was washed with water and aqueous acid, dried with drying agent, and concentrated in vacuo. The product was a yellow oil.

Results and Discussion
A previous strategy by Markaj [7] relied on replacing the hydroxyls at the fifth and sixth carbons of ascorbic acid with a hydrophobic ketal. The ketal provided a water-insoluble crystalline solid while not affecting the site of the redox chemistry, the hydroxyls at the second and third carbons of the molecule. While this initial prototype was completely successful in removing Cr(VI) from potassium dichromate solution, it could not be recycled (Fig. 2). The ketones on the ring of the dehydroascorbic acid compound (Fig. 2) hydrated, and the prototype therefore became water soluble. If the dehydroascorbic acid compound had been isolated, we would have been able to treat it with a reducing agent to recycle our initial prototype.

Figure 2. Synthetic strategy. The epoxide intermediate provides versatility with respect to adding hydrophobic groups.

Due to the initial prototype’s susceptibility to hydration, it was necessary to increase the lipophilicity of our new target molecule (Fig. 3).

Figure 3. The increased lipophilicity of our new target molecule (right) in relation to the initial prototype.

A disadvantage of the initial prototype was the presence of six oxygen atoms, which increased its water solubility. One of our goals was to limit the new target molecule to five oxygen atoms. We also wanted to synthesize more stable C-O bonds that, unlike the hydrophobic ketal, were not susceptible to hydration. In addition, the ketal offered limited diversity with respect to additional hydrophobic substituents. Therefore, we wanted to synthesize an intermediate that would provide versatility with respect to the hydrophobic groups we could add to the ascorbic acid structure. A number of syntheses were performed on L-(+)-ascorbic acid in order to form an epoxide intermediate, which would provide versatility in increasing the lipophilicity of our molecule. For example, either a naphthyl ether or a biphenyl could be installed at the hydroxyl of the sixth carbon of ascorbic acid (Fig. 4). The hydroxyl at the fifth carbon in both target molecules would be capped or dehydrated in order to maintain our five-oxygen limit and increase lipophilicity. The biphenyl addition would be performed via a Grignard reaction, while the naphthyl ether would be formed by first obtaining a naphthoxide to which the epoxide would be added. Our target molecule was the naphthyl ether.
The first step of the synthesis was the formation of a ketal protecting group at the hydroxyls of the fifth and sixth carbons of ascorbic acid. This step was necessary to ensure that the following synthetic step occurred at the hydroxyls that participate in the redox chemistry, located at the second and third carbons of the molecule.



Figure 4. Synthetic strategy. The epoxide intermediate provides versatility with respect to adding hydrophobic groups.

The ketal-protected ascorbic acid, shown below (Fig. 5), was successfully synthesized from L-(+)-ascorbic acid in 95% yield.

Figure 5. Ketal formation at hydroxyls of C5 and C6 of ascorbic acid.

The next step of the synthesis (Fig. 6) was the formation of benzyl ethers at the hydroxyls of the second and third carbons of ascorbic acid, following Kulkarni et al. [6].

Figure 6. Formation of benzyl ethers at C2 and C3 of ketal-protected ascorbic acid.

The reaction was successful; however, TLC analysis showed the presence of benzyl bromide in the product, so the product needed purification. We employed two methods of purification. The first was HPLC, which afforded a 48% yield of the expected white solid [6]. The second was trituration, in which the product was stirred in hexanes; the solid was collected by filtration and the hexanes layer was kept in the freezer. After a week, additional solid product was collected by filtration.



After purification of the benzyl ascorbic acid variant, the ketal at the fifth and sixth carbons was cleaved to form the diol shown in Fig. 7.

Figure 7. Cleaving the ketal at C5 and C6 to form the diol.

This reaction was successful, with a crude yield of 96%. Formation of the diol allowed the addition of more hydrophobic groups to make the ascorbic acid water insoluble. A tosyl group was added successfully to C6 of the diol in 88% yield, following the reaction in Fig. 8.

Figure 8. Tosylation at C6 of ascorbic acid.

The yield was obtained after purification of the product via HPLC. In order to form the epoxide intermediate, we needed to close the ring of the hydroxytosylate. This reaction, shown in Fig. 9, was extremely water sensitive due to the presence of sodium hydride. In addition, the epoxide product was unstable at RT and needed to be stored in the freezer. Because of the water sensitivity of this reaction and the instability of the product, low yields (<20%) were obtained.

Figure 9. Ring-closing of the hydroxytosylate to form the epoxide.

The formation of the epoxide allowed for many possibilities for adding hydrophobic groups. At first, we tried to perform a Grignard reaction using 4-biphenylmagnesium bromide, but this reaction was not successful. Therefore, we tried another approach using 2-naphthol and sodium hydride, shown in Fig. 10.



Figure 10. Formation of naphthyl ether at C6 of ascorbic acid.

After allowing the 2-naphthol and sodium hydride to react and the sodium naphthoxide to form, the epoxide was added in the hopes of cleaving it open to obtain a naphthyl ether at C6. This reaction is in progress.

Conclusion
The syntheses up to, and including, the formation of the hydroxytosylate have been successful. We are currently focused on optimizing the ring-closing of the hydroxytosylate to obtain the epoxide, as well as the formation of the naphthyl ether.

Acknowledgements
I would like to thank the School of Science Summer Research Scholars Program as well as the Department of Chemistry and Biochemistry for their financial support. I would also like to thank my mentor, Dr. Regan, for his continued guidance and support.

References
[1] Jacobs, James and Testa, Stephen M. "Overview of Chromium (VI) in the Environment: Background and History." Chromium (VI) Handbook, 3-22 (2004).
[2] Xu, Xiang-Rang, Li, Hua-Bin, Li, Xiao-Yan, and Gu, Ji-Dong. "Reduction of Hexavalent Chromium by Ascorbic Acid in Aqueous Solutions." Chemosphere, 609-613 (2004).
[3] World Health Organization. Guidelines for Drinking-Water Quality, 4th Edition, 340 (2011).
[4] Owlad, Mojdeh, Aroua, Mohamed Kheireddine, Wan Daud, Wan Ashri, and Baroutian, Saeid. "Removal of Hexavalent Chromium-Contaminated Water and Wastewater: A Review." Water Air Soil Pollution 200: 59 (2009).
[5] Hawley, Elisabeth L., Deeb, Rula A., Kavanaugh, Michael C., and Jacobs, James R.G. "Treatment Technologies for Chromium (VI)." Chromium (VI) Handbook, 274-292 (2004).
[6] Kulkarni, Mukund G. and Thopate, Shankar R. "Chemoselective Alkylation of L-Ascorbic Acid." Tetrahedron, Vol. 52, No. 4, pp. 1293-1302 (1996).
[7] Markaj, Paul. "Strategies for the remediation of toxic groundwater containing hexavalent chromium: insoluble 5,6-O-(substituted methylidene)-L-ascorbic acid." The Manhattan Scientist, Ser. B, Vol. 2, 127-134 (2015).


Determining the structure of SUZ-9

Eric A. Castro∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Zeolites are versatile minerals with many applications in the petroleum and petrochemical industries. The focus of this research is to solve the structure of SUZ-9, a synthetic microporous aluminosilicate zeolite, by utilizing physical model building techniques and sophisticated computer programs such as GSAS-II, ATOMS, and SuperFlip. The most recent update of GSAS-II, a program suite containing an arsenal of integrated routines including Charge Flipping and Monte Carlo/Simulated Annealing, was installed. At present, physical model building combined with hints from powerful theoretical structure-solving programs has given only hints about the structure of SUZ-9.

Introduction to Zeolites
Zeolites are hydrated aluminosilicate minerals made from inter-linked tetrahedra of alumina (AlO4) and silica (SiO4) [1]. There are about 40 natural zeolites, and more than 150 zeolites have been synthesized. The most commonly mined forms include chabazite, clinoptilolite, and mordenite [2]. Zeolites have the following characteristics:
1. They are solid materials containing pores and cavities. The pores allow zeolites to store water molecules, which makes them hydrated.
2. Their porous structure allows them to be used as molecular sieves for industrial separation and adsorption processes [3].
3. They possess catalytic properties that are applied in petroleum processing, petrochemicals, and pollution control [4].
Researchers at BP and ExxonMobil independently synthesized the zeolite SUZ-9, but they did not determine its structure. It is highly probable that SUZ-9 is the largest member of a family of 12-ring zeolites (Table 1). All members of the family have hexagonal symmetry and similar chemical adsorption properties consistent with 12-ring pores. All members have a c cell dimension of 7.5 Å. There are only five basic building units that combine to assemble the five known zeolites in this family (Fig. 1); each of these building units has a height of 7.5 Å.

Experimental Methods
Using X-ray powder diffraction data
X-ray diffraction is a powerful technique for the study of crystal structures and atomic spacing. Its primary application is the identification of unknown crystalline materials. X-ray diffraction is based on the constructive interference of monochromatic X-rays scattered by a crystalline sample. These X-rays are generated by a cathode ray tube, filtered to produce monochromatic radiation, collimated to concentrate the beam, and directed toward the sample. X-ray diffractometers consist of three basic elements: an X-ray tube, a sample holder, and an X-ray detector (Fig. 2).

∗ Research mentored by Richard Kirchner, Ph.D.



Table 1. Family of Zeolites

Framework Type    Material Name    a (Å) (a = b)    c (Å)    Building Units       Space Group
OFF               Offretite        13.291           7.5      d6r can gme          P-6m2
MAZ               Mazzite          18.102           7.5      gme                  P63/mmc
LTL               LZ-212           18.13            7.5      can d6r ltl          P6/mmm
LTF               LZ-135           31.3830          7.5      gme                  P63/mmc
MOZ               ZSM-10           31.575           7.5      d6r can pau ltl      P6/mmm
?                 SUZ-9            36.14            7.5      ?                    P63/mmc or P6/mmc

Figure 1. The 5 building units that make up all known members of the 12-ring family. Stacking a d6r on top of a can unit gives the same 7.5 Å height as in ltl, gme, and pau. Bonds are drawn between the tetrahedral atoms in the units. Ring openings are described by the number of tetrahedral atoms in the ring; thus d6r has 6-rings and 4-rings.


Figure 2. The X-ray detector moves in a circle around the crystalline sample, which is placed on the sample holder. The detector position is recorded as the angle 2θ, and the number of X-rays observed is recorded at each angle.
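The geometry in Fig. 2 is governed by Bragg's law, nλ = 2d sin θ. As an illustration (not part of the original analysis), the short Python sketch below converts a measured 2θ angle into a d-spacing; the Cu Kα wavelength used as the default is an assumed example value, since the experimental wavelength is not given here.

    import math

    def d_spacing(two_theta_deg, wavelength_angstrom=1.5406):
        """First-order Bragg's law (n = 1): d = lambda / (2 sin(theta)).
        The default wavelength (Cu K-alpha) is an illustrative assumption."""
        theta = math.radians(two_theta_deg / 2.0)
        return wavelength_angstrom / (2.0 * math.sin(theta))

    print(d_spacing(20.0))  # ~4.44 Angstroms for a peak at 2-theta = 20 degrees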



X-ray diffraction can be implemented as a two-dimensional powder or a three-dimensional single-crystal experiment. The difference between powder and single-crystal diffraction is illustrated below (Figs. 3 and 4):

Figure 3. Powder diffraction diagram. Characteristics: (1) Minimal sample preparation is required; (2) Can provide unambiguous mineral determination; (3) Provides two dimensional information; (4) Indexing and extraction of information is more difficult

Figure 4. Single crystal diffraction diagram. Characteristics: (1) Is best for identification of an unknown material; (2) Is a non-destructive analytical technique; (3) Provides detailed crystal structure, including unit cell dimensions; (4) Data collection process is considerably longer

The two-dimensional data from powder diffraction often have significant overlapping of reflections, making structure solution more difficult. Industrially synthesized zeolites are usually micro-crystalline (i.e., powders) because powders have a large surface area relative to material mass and are more reactive and economical in industrial processes. Synchrotron X-ray powder diffraction data for SUZ-9 (Fig. 5) were provided by Dr. J. M. Bennett. This small data set also made it difficult to obtain results from powerful crystallographic programs.



Analyzing powder diffraction data using crystallographic programs can give the unit cell constants and possible symmetry (space group). Decomposing the powder pattern into integrated intensities (reflections) often allows a trial structure to be determined by modern direct-methods (ab initio) programs [5]. Incomplete or partial trial structures can provide important clues that allow complete structures to be determined by classical physical model building.

Figure 5. SUZ-9 synchrotron powder X-ray diffraction pattern. The green line and black hash marks under the plot of experimental data (red crosses) indicate the extracted reflections. The purple plot (lower curve) is the difference between the experimental and extracted data. This SUZ-9 sample contains an Offretite impurity.

Using state-of-the-art crystallographic computer programs
The GSAS-II program suite
We used GSAS-II [6], an up-to-date integrated collection of the most powerful crystallographic programs for powder X-ray diffraction data. It handles all the steps in diffraction analysis, such as data reduction, peak analysis, indexing, Pawley fits, small-angle scattering fits, and structure solution, in addition to structure refinement. It can be used with large collections of related datasets for repeated refinements and for parametric fitting to these results. We were eager to use GSAS-II because of its dual ability to use Charge Flipping [6] and Monte Carlo/Simulated Annealing [6] techniques for solving crystal structures.

Introduction to simulated annealing
Simulated annealing is an algorithm that finds a good-enough solution in a reasonable amount of time. Because it is not realistic to expect to find the optimal solution within a sensible length of time, we settle for something that is close enough within a short timeframe. The simulated annealing algorithm was originally inspired by the process of annealing in metalwork.



Annealing involves heating and cooling a material to alter its physical properties through changes in its internal structure. In our work, we hope to anneal the building units shown in Fig. 1 to produce the framework topology of SUZ-9 (a generic sketch of the algorithm is given after the description of ATOMS below).

The ATOMS program
ATOMS is a program for drawing all types of atomic structures, including crystals, polymers, and molecules [7]. It can make fully "three-dimensional" color drawings using the latest system software, simple schematic black-and-white drawings for small-scale reproduction in publications, or virtually anything between these extremes. It is especially useful for converting the atom positions in a physical model into the atomic fractional coordinates used in crystallographic computing.
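For readers unfamiliar with the algorithm, the following minimal Python sketch shows the generic simulated-annealing loop: uphill moves are accepted with probability exp(-Δ/T) while the "temperature" T is slowly lowered. The toy cost function and parameters are illustrative assumptions; this is not the GSAS-II or Zefsa implementation.

    import math, random

    def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=20000):
        """Generic simulated-annealing loop (illustration only)."""
        x, fx, t = x0, cost(x0), t0
        best, fbest = x, fx
        for _ in range(steps):
            y = neighbor(x)
            fy = cost(y)
            # Always accept downhill moves; accept uphill moves with
            # probability exp(-(fy - fx)/t), which shrinks as t cools.
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling  # cooling schedule
        return best, fbest

    # Toy usage: find the minimum of a bumpy one-dimensional function.
    cost = lambda x: (x - 2.0) ** 2 + math.sin(5 * x)
    step = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(cost, step, x0=10.0))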

Physical model building techniques
The framework topology, the geometrical array in three-dimensional space [8], of a new crystalline material can be imagined from suggestions or hints obtained from sophisticated crystallographic programs. A physical model is built by connecting tetrahedral jacks with plastic tubing (i.e., bonds) cut to scale (1 Å ⇔ 1 cm). I first constructed the pau building unit, an eight-ring unit described in the Zeolite Database, and showed how it is related to other known building units that might comprise the SUZ-9 topology.
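As an illustration of this model-to-coordinates step (a sketch, not code from this work), the Python snippet below converts Cartesian atom positions, such as those measured from a physical model, into fractional coordinates for a hexagonal cell; the default cell constants are the SUZ-9 values from Table 1.

    import numpy as np

    def frac_from_cartesian(xyz, a=36.14, c=7.5):
        """Cartesian (Angstrom) -> fractional coordinates in a hexagonal
        cell (a = b, gamma = 120 deg). Defaults: SUZ-9 cell from Table 1."""
        lattice = np.array([
            [a, 0.0, 0.0],
            [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
            [0.0, 0.0, c],
        ])
        # xyz = f @ lattice, so f = xyz @ inv(lattice)
        return np.asarray(xyz) @ np.linalg.inv(lattice)

    print(frac_from_cartesian([18.07, 0.0, 3.75]))  # -> [0.5, 0.0, 0.5]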

Figure 6. A typical result from a stand-alone SuperFlip calculation. Clearly visible are 12-rings centered on the vertices, and two central 12-rings along the diagonal of the unit cell.



Discussion
Model building based upon partial theoretical calculation results
A typical result from previous calculations using the Charge Flipping stand-alone program [9] is shown above in Fig. 6. Twelve-rings are centered on each vertex, and two additional 12-rings are positioned on the long diagonal. It should be possible to use the building units shown in Fig. 1, combining them (including with themselves) to build a physical model that resembles the Charge Flipping result. Christine Schmidt combined the ltl and pau units to build the model in Fig. 7. This model turned out to be just a variation of the known structure MOZ, not a correct model for SUZ-9.

Figure 7. CS3 model with ltl (white) and pau (black) building units

Another model, shown in Fig. 8, satisfied all of the requirements of a possible correct topology for SUZ-9. It also resembles the theoretical result shown in Fig. 6. This model is composed solely of connected ltl cages.

Figure 8. Physical model made only by connecting ltl building units. It satisfies all requirements for a SUZ-9 unit cell and agrees with Charge Flipping results. However, its calculated powder pattern did not match the experimental X-ray powder diffraction pattern, indicating this is not the SUZ-9 topology.



But when the calculated powder pattern from this proposed topology was compared to the experimental X-ray powder diffraction pattern, the two did not match, indicating that this model is not the correct topology of SUZ-9. Model building continues to date.

Conclusion
At present, physical model building combined with hints from powerful theoretical structure-solving programs has given only hints about the structure of SUZ-9. Through working with framework topology, research students gain a geometric understanding of the spatial relations of a structure and how its constituent parts are interrelated and arranged. It is imperative to understand the experimental methods behind any research project. In the case of determining unknown structures such as SUZ-9, it is not certain whether a given method will work. Backtracking and looking into new approaches can allow new results to emerge, as was the case with applying Monte Carlo/Simulated Annealing.

Acknowledgements
Financial support from The Camille and Henry Dreyfus Foundation Senior Scientist Mentor Program and participation in the Summer Research Scholars Program at Manhattan College are gratefully acknowledged. I thank my mentor, Dr. Richard Kirchner, and my colleagues Christine Schmidt and Gertrude Turinawe Hatanga; it was a pleasure to work with such great individuals.

References
[1] Ibrahim, Siti Aida Binti. "Synthesis and characterization of zeolites from sodium aluminosilicate solution." Universiti Sains Malaysia Institutional Repository, no. 1 (2007): 2-8.
[2] Ober, Joyce A. "Mineral Commodity Summaries 2016." U.S. Geological Survey, 202 p. http://dx.doi.org/10.3133/70140094
[3] Roberts, C. W. "Molecular Sieves for Industrial Separation and Adsorption Applications." In The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, 18th-20th April 1979. Ed. R. P. Townsend. London: Chemical Society, 1980.
[4] Vaughan, D. E. W. "Industrial Uses of Zeolite Catalysts." In The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, April 1979. Ed. R. P. Townsend. London: Chemical Society, 1980.
[5] David, W. I. F. Structure Determination from Powder Diffraction Data. Oxford: Oxford University Press, 2002.
[6] Toby, Brian and Von Dreele, Robert. "GSAS-II." J. Appl. Cryst. 46, 544 (2013).
[7] Dowty, E. ATOMS, Shape Software, 2016.



[8] Breck, D. W. "Potential Uses of Natural and Synthetic Zeolites in Industry." In The Property and Applications of Zeolites: The Proceedings of a Conference organized jointly by the Inorganic Chemicals Group of the Chemical Society and the Society of Chemical Industry, The City University, London, April 1979. Ed. R. P. Townsend. London: Chemical Society, 1980.
[9] Palatinus, L. and Chapuis, G. "SuperFlip - a computer program for the solution of crystal structures by charge flipping in arbitrary dimensions." J. Appl. Cryst. 40, 786-790 (2007).


Solving the structures of ZSM-18 and SUZ-9

Gertrude Turinawe Hatanga∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. ZSM-18 and SUZ-9 are two industrially synthesized aluminosilicate zeolites. Zeolites are especially useful in the petroleum industry as catalysts to break down hydrocarbons into products such as diesel and kerosene. The focus of this research is to confirm the published topology of ZSM-18 and to solve the structure of SUZ-9 using physical model building techniques and sophisticated ab initio crystallographic computer programs such as GSAS, ATOMS, SuperFlip, and Zefsa-II. The updated GSAS-II suite of integrated programs now includes Charge Flipping and Monte Carlo/Simulated Annealing routines, state-of-the-art methods for solving structures from X-ray data. However, these routines in GSAS-II did not work as well as their original stand-alone versions. ZSM-18 refinements in GSAS-II did not converge, indicating that there may be problems in the published topology or space group. For SUZ-9, Simulated Annealing calculations using the Zefsa-II program provided hints about its structure that may help determine it by model building.

Introduction
Two industrially synthesized zeolites, ZSM-18 and SUZ-9, are the focus of our study. A zeolite is a crystalline, porous aluminosilicate mineral [1]. The metal atoms (commonly silicon and aluminum) in zeolites are surrounded by four oxygen anions to form an approximately tetrahedral geometry, consisting of a metal cation (T-atom) at the center and oxygen anions at the four apexes (Fig. 1). The size and shape of the pores have a very significant effect on the properties of any zeolite, especially for adsorption processes.

Figure 1. Basic structure of a zeolite.

Unique properties, such as their porous character with uniform pore dimensions, allowing certain molecules to enter the crystals while rejecting others based upon their molecular size, make zeolites special [2]. These properties lead to their use in separation processes, hence the name "molecular sieve."

∗ Research mentored by Richard Kirchner, Ph.D.



There are both naturally occurring zeolites and dozens more that are industrially synthesized. The main areas in which zeolites are applied are: in separation processes as adsorbents, in industrial applications as catalysts, and in detergents as ion-exchange water softeners [2].

Background on ZSM-18
Research on ZSM-18 has been ongoing for many years. Julius Ciric at Mobil Oil Corporation first patented ZSM-18 in 1975 [3]. In 1990, Lawton reported ZSM-18 as the first example of framework type MEI [4]. The MEI framework type has a three-dimensional channel system, with a 12-ring channel (6.9 × 6.9 Å) intersected by 7-ring channels (3.2 × 3.5 Å), as illustrated in Fig. 2. The topology of ZSM-18 was determined based on hypothetical model building. This topology was, however, very controversial, as ZSM-18 was the first known aluminosilicate to possess 3-rings [5] and a number of other odd-numbered rings [4].

Figure 2. ZSM-18 structures with MEI framework type. On the left, four 12-ring pores are shown, viewed from above [6]. On the right is the view from the side. ZSM-18 has odd-numbered rings. These consist of 3, 5 and 7 tetrahedral metal atoms linked together, as shown in Figs. 1 and 3 [7]. It is quite difficult to see the 5- and 7-rings but the 3-rings are visible, appearing at the top of the structures on the right.

Methods and Discussion
ZSM-18
Because the ZSM-18 topology was determined only by model building, variations of this model have been suggested over time; the possible ways to combine T-atoms to form a topology are virtually limitless [1]. The main goal of this research is to determine the ZSM-18 structure using crystallographic X-ray diffraction data. We use sophisticated crystallographic programs such as GSAS-II [9], ATOMS [10], and SuperFlip [11] to verify the published topology.



Figure 3. The images illustrate different types of rings. Shown from left to right are a 3-ring, 4-ring, 6-ring, and 8-ring [8]. The corners of these rings are tetrahedral metal atoms (T-atoms), commonly silicon and aluminum. Stacking rings in the framework topology produces pores.

The refinement of the published topology in space group P63/m does not converge, indicating there may be problems with the published topology or its space group. Running the SuperFlip program did not solve the structure but did suggest P6/m and P11m as possible space groups.

SUZ-9
Unlike ZSM-18, the topology of SUZ-9 is unknown. We are using model building and sophisticated ab initio crystallographic computer programs to try to solve this structure. Previous research and adsorption experiments have determined that SUZ-9 is likely a member of the 12-Ring Family of Porous Materials (see Table 1), with its cell dimensions a and b being twice those of LTL and MAZ.

Table 1. Family of Zeolites

Framework Type    Material Name    a (Å) (a = b)    c (Å)    Building Units       Space Group
OFF               Offretite        13.291           7.5      d6r can gme          P-6m2
MAZ               Mazzite          18.102           7.5      gme                  P63/mmc
LTL               LZ-212           18.13            7.5      can d6r ltl          P6/mmm
LTF               LZ-135           31.3830          7.5      gme                  P63/mmc
MOZ               ZSM-10           31.575           7.5      d6r can pau ltl      P6/mmm
?                 SUZ-9            36.14            7.5      ?                    P63/mmc or P6/mmc

All framework types in the 12-Ring Family of Porous Materials have the same c cell dimension of 7.5 Å. The topologies of all members of this family are made by assembling only five building blocks, shown in Fig. 4. These building blocks are presently being assembled in various ways in attempts to construct a model with the cell constants of SUZ-9.




Figure 4. Building blocks [7] within the 12-ring Family of Porous Materials. All the building units in the 12-ring family shown in the top row have a height c = 7.5 Å. By combining the double six-ring (d6r) and cancrinite (can) units, as shown to the right in the top images, a 7.5 Å height is also obtained.

In addition to model building, we installed and explored GSAS-II [9], an updated version of GSAS-I with Charge Flipping and Simulated Annealing routines that were not available in GSAS-I. The Simulated Annealing program is used in X-ray crystallography to solve structures. Our first attempt was to export a simple 12-ring from ATOMS in XYZ Cartesian coordinates, which was input into GSAS-II as a rigid body; the results of this simulated annealing calculation were, however, not helpful. Next we used a complete ltl building block (a 185-atom rigid body). This calculation nearly overwhelmed our computer. The result, shown in Fig. 5, shows what appears to be an ltl cage placed at each of the four vertices of the unit cell. We believe that either the simulated annealing program or the graphics display routine in GSAS-II did not work properly; we had hoped the program would fill in the empty gaps between the ltl cages.

We found Zefsa II [12], a more powerful stand-alone Simulated Annealing program that includes Parallel Tempering, and installed it on a more capable computer (O.S. Linux Mint 17.3). Preliminary results from Zefsa are shown in Fig. 6. Although a complete solution was not obtained, additional hints about the topology were produced that are being explored with model building.
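For illustration, the hypothetical helper below (not code used in this work) generates an ideal 12-ring of Si T-atoms with the 3.1 Å Si-Si spacing noted in the Fig. 6 caption and writes it in the standard XYZ format, the format exported from ATOMS and read into GSAS-II as a rigid body.

    import math

    def write_12ring_xyz(path="ring12.xyz", spacing=3.1):
        """Write an ideal planar 12-ring of Si atoms in XYZ format.
        A chord of 3.1 A between neighbors fixes the ring radius."""
        n = 12
        radius = spacing / (2.0 * math.sin(math.pi / n))
        with open(path, "w") as f:
            f.write(f"{n}\nideal 12-ring of Si T-atoms\n")
            for i in range(n):
                ang = 2.0 * math.pi * i / n
                x, y = radius * math.cos(ang), radius * math.sin(ang)
                f.write(f"Si {x:10.4f} {y:10.4f} {0.0:10.4f}\n")

    write_12ring_xyz()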

Summary
Refinements of the literature topology of ZSM-18 did not converge; further work is needed to confirm the structure of ZSM-18.

The Simulated Annealing program in GSAS-II did not work well for SUZ-9 and nearly overwhelmed the capability of our fastest computer (a 2015 MacBook Pro). However, the original stand-alone Simulated Annealing program, Zefsa II [12], was installed on a faster computer (O.S. Linux Mint 17.3).



Figure 5. Screenshot of Monte Carlo / Simulated Annealing result in GSAS-II.

Figure 6. Typical Zefsa-II Simulated Annealing/Parallel Tempering result. White dots show Si atom positions. The green lines indicate an ideal bond distance (3.1 Å) between two Si atoms. Top view (down c) shown on the left; side view of the cell shown on the right. Within the hexagonal unit cell a 12-ring is clearly shown centered about each vertex. The six dots in the cell interior suggest other rings (pores) along the long diagonal of the cell.

To date, the Monte Carlo/Simulated Annealing routines have been supplemented by a Parallel Tempering technique, which also has not produced a complete solution but has given additional hints about the topology of SUZ-9. These hints are currently being explored by physical model building techniques.

Acknowledgements
This work was funded by the Camille and Henry Dreyfus Foundation Senior Scientist Mentor Program and the Summer Research Scholars Program at Manhattan College. I thank my mentor, Dr. Richard Kirchner; Dr. Joseph Capitani, for computer assistance; and my student research colleagues Eric A. Castro and Christine Schmidt, for their help during this research.



References
[1] Price, G. L. "What is a Zeolite?" http://www.personal.utulsa.edu/∼geoffrey-price/zeolite/zeo narr.htm
[2] Moscou, L., in: H. van Bekkum, E. M. Flanigen, J. C. Jansen (Eds.), Introduction to Zeolite Science and Practice; Studies in Surface Science and Catalysis, 58, Elsevier Science Publishers, Amsterdam, pp. 2-5 (1991).
[3] Ciric, J., Mobil Oil Corporation, assignee. "Synthetic zeolite ZSM-18." US Patent No. 3,950,496 (filed April 13, 1976).
[4] Lawton, Stephen L. and Rohrbaugh, Wayne J. "The Framework Topology of ZSM-18, a Novel Zeolite Containing Rings of Three (Si,Al)-O Species." Science, Vol. 247, Issue 4948, pp. 1319-1322 (1990). DOI: 10.1126/science.247.4948.1319
[5] Burton, A. and Vorman, H. "Zeolite ZSM-18, its synthesis and its use." ExxonMobil Chemical Patents Inc., US Patent No. 2013169419 (filed April 5, 2013). https://www.google.com/patents/WO2013169419A1?cl=en
[6] Baur, W. H. and Fischer, R. X. "Zeolite-Type Crystal Structures and their Chemistry. Framework Type Codes LTA to RHO." In Landolt-Börnstein - Group IV Physical Chemistry, Vol. 14D, pp. 1-6 (2006). http://link.springer.com/chapter/10.1007/978-3-540-45870-8_8#page-1
[7] Baerlocher, Ch., McCusker, L. B., and Olson, D. H. "Atlas of Zeolite Framework Types," Sixth Revised Edition. Elsevier, Amsterdam. MEI Framework Type, pp. 204-205 (2007).
[8] Gillett, S. "Toward a Silicate-Based Molecular Nanotechnology II. Modeling, Synthesis Review, and Assembly Approaches." https://www.foresight.org/Conference/MNT05/Papers/Gillett2/index.html
[9] Toby, B. H. and Von Dreele, R. B. "GSAS-II: The Genesis of a Modern Open-Source All-Purpose Crystallography Software Package." Journal of Applied Crystallography 46(2), 544-549 (2013).
[10] Dowty, E. ATOMS, Shape Software, 2016.
[11] Baerlocher, Ch., McCusker, L. B., and Palatinus, L. "Charge flipping combined with histogram matching to solve complex crystal structures from powder diffraction data." Z. Kristallogr. 222(2), 47-53 (2007).
[12] Falcioni, M. and Deem, M. W. J. Chem. Phys. 110, 1754-1766 (1999).


Organic molecules that aid in removing chromium(VI) from water

Douglas Huntington∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Chromium(VI) is a designated priority pollutant of the US Environmental Protection Agency (Owlad et al., 2008). Human exposure to Cr(VI) can lead to conditions such as dermatitis, irritation of the respiratory system, and, if exposure persists, lung cancer (Dayan and Paine, 2001). Cr(VI), in the form of chromate, can be partially removed from water by adsorption onto a granulated activated carbon (GAC) filter (DeSilva, 2000). An alternative method of chromate removal was investigated by reacting different organic compounds with chromate to form a chromate ester complex that could be readily adsorbed onto GAC (Nakajima and Baba, 2004). We demonstrated that compounds with aromatic rings, such as catechol and 1,2-diaminobenzene, remove chromate from solution more efficiently than GAC alone. Non-aromatic organic compounds, such as 1,3-propanediol and trans-1,2-diaminocyclohexane, were less effective.

Introduction
Various commercial endeavors, such as wood treatment, leather tanning, stainless steel manufacturing, and pigment/dye production, generate wastewater containing chromium(VI) (DeSilva, 2000). The chromium(VI) in this wastewater generally exists as chromate (CrO4 2-) or dichromate (Cr2O7 2-) ions. The World Health Organization (WHO) recommends that safe levels be less than 0.05 mg/L of chromium ions in drinking water (Owlad et al., 2008). A serious public health concern arises when Cr(VI) enters a public drinking water source, either at a municipal water treatment plant or in residential water wells. Another form of chromium is chromium(III), Cr(III), found in many common vitamin supplements. The Cr(III) in vitamin supplements is known as chromium picolinate and is considered essential for a healthy lifestyle in humans. Cr(III) is nearly insoluble at neutral pH and is poisonous only at high concentrations (Owlad et al., 2008).

The most widely used method for Cr(VI) removal is activated carbon. Activated carbon species include granulated activated carbon (GAC), powder activated carbon (PAC), activated carbon fibers (ACF), and activated carbon cloth (ACC) (Owlad et al., 2008). Sharma and Forster (1996) demonstrated that GAC adsorbs Cr(VI) optimally when the pH of the solution is below 3, suggesting that GAC is effective at removing only neutral molecules, not ions. Adsorption of Cr(VI) using GAC offers advantages over other methods, such as ease of operation and availability. However, activated carbon removal methods work well only at acidic pH, do not effectively adsorb charged species such as chromate, and are relatively expensive for removing only a few milligrams of Cr(VI) per gram. A relatively new approach to Cr(VI) removal is the use of biosorbents, materials derived from low-cost agricultural wastes such as sawdust or hazelnut shells (Owlad et al., 2008). Advantages of this method are that the biosorbents are relatively inexpensive and effective in reducing the concentration of chromate ions to low levels (Owlad et al., 2008).

∗ Research mentored by John Regan, Ph.D.



Disadvantages of this process are that it is relatively new and that not enough research has been conducted to ascertain how effective biosorbents are. Membrane filtration is another removal technique for Cr(VI); these membranes can treat inorganic effluent containing Cr(VI) under a wide range of operational conditions, unlike GAC (Owlad et al., 2008). However, the operational costs for these membranes are extremely high, making them infeasible except at water treatment facilities. Ion exchange is another way to remove Cr(VI) from solution industrially; however, ion exchange has limitations with wastewater containing Cr(VI), and the operational cost of Cr(VI) ion-exchange resins is extremely high (Owlad et al., 2008). Our objective was to demonstrate that we could use different organic compounds of varying structure to form non-polar organic-chromate molecules that, unlike the chromate ion itself, could be adsorbed onto GAC. Practically, we developed a process with very low operational cost that is efficient compared to some of the other methods for Cr(VI) removal.

Materials and Methods
The molecules tested for percentage of Cr(VI) removal, each as a 0.5 M stock solution, were ethylene glycol, 1,3-propanediol, pinacol, cis-1,2-dihydroxycyclohexane, trans-1,2-dihydroxycyclohexane, trans-1,2-diaminocyclohexane, 1,2-diaminobenzene, and catechol. A potassium chromate stock solution was also prepared with a concentration of 2.0×10^-2 M, where every 3 mL of chromate solution contained 11 mg of chromate ion. The organic compound stock solution and the chromate stock solution were pipetted, in a 3:1 or 6:1 molar ratio, into a 20 mL beaker and stirred for fifteen minutes. This reaction was performed at pH 7 and at room temperature. After the fifteen minutes, 0.6 g of GAC was added to the solution and stirred for an additional fifteen minutes. The solution was then diluted to a concentration of 2.0×10^-4 M.

An Agilent 8453 UV/visible photodiode array spectrophotometer was used for UV/Vis measurements, with glass cuvettes throughout. Every solution tested was compared to a standard chromate solution of 2.0×10^-4 M. The characteristic absorption peak for the chromate ion is at 372 nm. The percent Cr(VI) removal was calculated from the difference in absorbance at 372 nm between the standard and the test sample:

percent Cr(VI) removal = [A372(standard chromate + GAC) - A372(compound)] / A372(standard chromate + GAC) × 100%

Each compound was tested twice to ensure accuracy; the average percentage of chromium(VI) removal was calculated by averaging the two trials, and standard deviations were calculated for each compound.
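As an illustration of this calculation (the absorbance values below are hypothetical, not data from this study), a minimal Python sketch:

    from statistics import mean, stdev

    def percent_removal(a_standard, a_sample):
        """Percent Cr(VI) removed, from absorbances measured at 372 nm."""
        return 100.0 * (a_standard - a_sample) / a_standard

    a_standard = 0.95       # A372 of the chromate standard + GAC (illustrative)
    trials = [0.14, 0.12]   # A372 of a treated sample, two trials (illustrative)
    removals = [percent_removal(a_standard, a) for a in trials]
    print(f"mean = {mean(removals):.0f}%, s.d. = {stdev(removals):.1f}")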

Results
GAC alone was tested and shown to be less effective than the compounds in removing chromate from solution (data not shown). As seen in Table 1, the most effective compounds at removing Cr(VI) from solution at 3:1 molar equivalents of compound to chromate were those containing aromatic rings, including catechol and 1,2-diaminobenzene (entries 6 and 8).



Table 1. Molar equivalence of 3:1 for Cr(VI) removal

Entry    Organic Compound                    Percent Cr(VI) Removal (average)
1        ethylene glycol                     69%
2        1,3-propanediol                     70%
3        pinacol                             66%
4        cis-1,2-dihydroxycyclohexane        68%
5        trans-1,2-dihydroxycyclohexane      69%
6        catechol                            86%
7        trans-1,2-diaminocyclohexane        66%
8        1,2-diaminobenzene                  83%

Entry 6 (catechol) had an average Cr(VI) removal of 86% and entry 8 (1,2-diaminobenzene) had an average Cr(VI) removal of 83% (Table 1). The diols, entries 1, 2, and 3, had average Cr(VI) removal lower than the aromatic compounds, with values in the 66-70% range. The cyclic diols tested, entries 4 and 5, were similar to the non-cyclic diols, with Cr(VI) removal values of 68% and 69%. The Cr(VI) removal for entry 7 was 66%, on par with the non-cyclic and cyclic diols.

To determine if additional Cr(VI) could be removed from solution, we changed the molar equivalence ratio from 3:1 to 6:1. The results, using our three best compounds (catechol, 1,2-diaminobenzene, and 1,3-propanediol), are shown in Table 2. As the molar equivalence of compound to chromate increased, the percent removal of Cr(VI) also increased. Entry 2 still had the highest percentage of Cr(VI) removal. A general conclusion that can be drawn is that compounds containing both an aromatic ring and hydroxyl groups are much more efficient in removing Cr(VI) than their non-aromatic diol or amino counterparts.



Table 2. Molar equivalence of 6:1 for Cr(VI) removal

Entry    Organic Compound      Percent Cr(VI) Removal (average)
1        1,3-propanediol       86%
2        catechol              95%
3        1,2-diaminobenzene    91%


Discussion
Chromate ion cannot be adsorbed efficiently onto GAC due to its high polarity. However, the chromate ion and an organic compound can react to form a neutral chromate ester, which is readily adsorbed onto GAC. The reaction between 1,2-diaminobenzene and chromate is displayed in Fig. 1.

Figure 1. Reaction between 1,2-diaminobenzene and chromate to form a chromate ester

An equilibrium is believed to be established between the chromate ion, 1,2-diaminobenzene, and the cyclic chromate ester. Support for the equilibrium comes from the fact that an increase in mixing time between 1,2-diaminobenzene and chromate does not improve the percent of chromate ester removed. In addition, increasing the amount of 1,2-diaminobenzene from 3 to 6 molar equivalents improved overall Cr(VI) removal.



With the selection of compounds we tested, we were able to compare various structural elements to determine whether they influenced Cr(VI) removal. Stereochemistry was investigated by comparing cis- and trans-1,2-dihydroxycyclohexane (entries 4 and 5). We initially believed that the trans compound might be slightly more stable, but Cr(VI) removal was similar for both the cis and trans compounds, at 68% and 69%. As a result, stereochemistry was determined to be irrelevant to chromate ester formation in our study. Nucleophilicity was tested using amino groups, which are more nucleophilic than hydroxyl groups. However, nucleophilicity was determined to have a negligible effect, as shown with 1,2-diaminocyclohexane (entry 7), whose average Cr(VI) removal of 66% is comparable to the other diols. Another factor examined was the ring size of the chromate ester that formed. A chromate ester can form either a 5- or a 6-membered ring with ethylene glycol or 1,3-propanediol (entries 1 and 2). We believed that the 5-membered ring, from ethylene glycol, would be the kinetically favored product compared to the 6-membered ring from 1,3-propanediol; however, the 6-membered ring may be more stable due to more favorable bond angles. The results indicated that 5- and 6-membered rings were equally effective at removing Cr(VI).

The one factor determined to have an effect on Cr(VI) removal was the presence of an aromatic ring in the compound. Both catechol and 1,2-diaminobenzene (entries 6 and 8) gave larger percentages of Cr(VI) removal than their counterparts. One possible reason is stabilization of the chromate ester by the pi electrons of the aromatic ring. It is believed these chromate esters form more rapidly at higher concentration.

One additional factor we tested was the duration of stirring, to see whether it had any effect on Cr(VI) removal. Initially, we believed that a longer initial stir time would increase the amount of chromate ester that formed and adsorbed onto GAC. However, a longer initial stir time had no influence on the percentage of Cr(VI) removed from solution; stirring for fifteen minutes was just as efficient as stirring for one hour in forming the organic-chromate ester. The next logical step was to determine whether a longer contact time with GAC in solution had any influence on the percentage of Cr(VI) removal. This too had no effect, as contact times with GAC of fifteen minutes and one hour gave the same results. The organic-chromate ester must be readily adsorbed onto GAC.

Conclusion
We successfully demonstrated that organic compounds with aromatic rings gave a higher percentage removal of Cr(VI) from solution at both 3:1 and 6:1 molar equivalence. Non-aromatic diols and amines gave a lower percentage removal of Cr(VI) from solution than their aromatic counterparts. Compounds tested at 6:1 molar equivalence gave a larger percentage of Cr(VI) removal than compounds tested at 3:1 molar equivalence.



Acknowledgments
I would like to thank the Manhattan College School of Science and the Department of Chemistry and Biochemistry for this opportunity, and especially Dr. Regan for his advisement. The Summer Research Scholars Program provided the financial support for my research.

References
Dayan, A. D. and Paine, A. J. "Mechanisms of Chromium Toxicity, Carcinogenicity and Allergenicity: Review of the Literature from 1985 to 2000." (2001).
DeSilva, Frank. "Activated Carbon Filtration." Water Quality Products Magazine (January 2000).
Nakajima, Akira and Baba, Yoshinari. "Mechanism of Hexavalent Chromium Adsorption by Persimmon Tannin Gel." Water Research 38.12 (2004): 2859-2864.
Owlad, Mojdeh, Aroua, Mohamed, Wan Daud, and Baroutian, Saeid. "Removal of Hexavalent Chromium-Contaminated Water and Wastewater: A Review." Springer Science (2008).
Sharma, D. C. and Forster, C. F. "Removal of Hexavalent Chromium from Aqueous Solutions by Granular Activated Carbon." University of Birmingham (1996).


A molecular mechanics study of chromodulin and tyrosine kinase

James Irizarry∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Chromodulin, also known as low-molecular-weight chromium-binding substance (LMWCr), is a chromium cofactor that plays an important role in glucose metabolism, but its structure has never been characterized. We performed molecular mechanics calculations with the Merck Molecular Force Field in aqueous solvent (MMFFaq) in an attempt to characterize the structure and determine its effect on tyrosine kinase. These calculations were performed in order to better understand the enzymatic mechanism and to compare the thermodynamic properties of candidate chromodulin structures with experimental values, which would hopefully allow us to characterize the structure.

Introduction
Chromium has been used as a nutritional supplement since the 1950s. Later studies showed that chromium has the ability to alleviate the symptoms of Type II diabetes, hinting at a biologically active role for chromium. In the 1980s, a chromium-binding oligopeptide known as chromodulin was isolated. Chromodulin combats insulin resistance by increasing the activity of insulin receptor tyrosine kinase; however, the exact mechanism is still not known.

If blood glucose levels get too high, the hormone insulin is secreted in order to lower blood glucose levels and initiate the conversion of glucose to glycogen. Once insulin is bound to the insulin receptor, the receptor is activated and the two β-subunits phosphorylate specific tyrosine residues. These β-subunits are also known as tyrosine kinases. The enzyme then phosphorylates second-messenger proteins, and the resulting signal transduction initiates gene expression that lowers blood glucose levels. As seen in Fig. 1, after the insulin receptor has bound insulin, chromium is transported into the cells by a protein called transferrin (Tf) [1]. Chromodulin is then loaded with chromium; the large binding constant (Kf = 1.10 × 10^21 M-4) [2] essentially causes all chromium in the cell to be bound by chromodulin. Chromodulin binds chromium by the following reaction:

apo-LMWCr6- (aq) + 4 Cr3+ (aq) → holo-LMWCr6+ (aq)    ∆G°rxn(311.75 K) ∼ -125.58 kJ/mol

∆G°f of Cr3+ (aq) at 311.75 K ∼ -216 kJ/mol [3].

After chromodulin is loaded with chromium, it binds to the insulin receptor and increases its activity. Once insulin levels drop, chromodulin dissociates from the receptor and is excreted from the body (Fig. 2). Chromodulin has very high specificity for chromium, as evidenced by the fact that no other metal is able to increase insulin receptor activity to the same extent that chromium does.

∗ Research mentored by Joseph Capitani, Ph.D.



Figure 1. Mechanism of tyrosine kinase activity [1]


Figure 2. Proposed mechanism of chromodulin [4]

Despite being found in a wide variety of animals (e.g., rabbits, cows, humans), the exact structure of chromodulin has never been determined. Recent efforts have been undertaken to elucidate the structure of the metal cluster and the amino acid sequence. The chromium cluster is comprised of a tetrahedral arrangement of four Cr3+ ions. Three Cr3+ ions are antiferromagnetically coupled together and interact electrostatically with a fourth Cr3+ ion (Fig. 3). Cr2 forms carboxylate ligand bridges to interact with Cr3 and Cr4, while Cr3 and Cr4 are believed to be bridged by a stronger oxo-ligand (e.g., hydroxide, water, or an oxygen anion). Unlike most metalloproteins, the chromium is believed not to be coordinated by sulfur atoms in cysteine residues, but rather by oxygen atoms from carboxyl groups in acidic amino acid residues (Glu and Asp). Based on experimental data, model b (Fig. 3) appears to be the correct orientation of the chromium atoms in the trinuclear cluster; this resembles a scalene triangle.

Figure 3. Model of interactions between chromium atoms in chromodulin [5]



The exact amino acid sequence of chromodulin differs between species; this research project focused on the chromodulin found in bovine liver. The oligopeptide has been theorized to contain Asp/Asn, Glu/Gln, Cys, and Gly amino acid residues in a 2:4:2:2 ratio, respectively. Recent efforts to characterize the structure of chromodulin have revealed that the Glx and Asx residues are responsible for binding the Cr3+ ions in the chromodulin cluster. Furthermore, Chen and Vincent have shown that the amino acid sequence pEEEEGDD, where pE is pyroglutamate, is able to bind chromium nearly as well as chromodulin (Kf = 1.92 × 10^20 M-4) [2]; 2 Cys and 1 Gly residues were lost during the isolation procedure.

pEEEEGDD6- (aq) + 4 Cr3+ (aq) → pEEEEGDD-Cr4 6+ (aq)    ∆G°rxn(311.75 K) ∼ -121.05 kJ/mol

Because the N-terminal residue is glutamate, it is difficult to sequence chromodulin using standard methods, as they cause the conversion of glutamate to pyroglutamate. Thus, chromodulin should have EEEEGDD as its starting sequence; this stretch also acts as the binding region. Assuming standard amino acid linkage, the sequence of chromodulin may be one of the following: EEEEGDDGCC, EEEEGDDCGC, or EEEEGDDCCG. Spectroscopic analysis using UV/Vis spectroscopy shows several distinct peaks (Fig. 4).

Figure 4. UV/vis spectra of chromodulin [6]

In Fig. 4, the two peaks around 400 nm and 600 nm are due to d-orbital transitions, but the peak at 260 nm is more ambiguous; it may be due either to a disulfide bridge between two cysteine residues or to the chromium cluster forming a ligand-to-metal charge transfer (LMCT) complex with one of the ligands in the coordination complex [6].

Methods
Molecular mechanics calculations were performed on theoretical structures of chromodulin and the tyrosine kinase enzyme using Spartan '14 [7] with the MMFFaq force field [8]. The chromodulin structure studied was that found in bovine liver, while the tyrosine kinase enzyme was from the human insulin receptor.



The structure of the insulin receptor kinase was obtained from the Protein Data Bank, while the structure of chromodulin was inferred from experimental data. Every calculation included the thermodynamic properties at 311.75 K; this temperature corresponds to bovine liver temperature. The ∆G°f value of Cr3+ was approximated from the standard reduction potential (E°) of Cr3+ [4] using the equation ∆G° = -nFE°. The structure of chromodulin was studied by altering the structure of the chromium cluster in the three candidate peptides and the binding region of chromodulin. Tyrosine kinase was modelled at different states in the enzymatic mechanism, each containing different cofactors (e.g., ethylmercury and chromodulin). The structures of both the peptides and the proteins were modelled so that the calculations reflected the structures of these molecules at physiological pH (pH ∼ 7).
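As an illustration of the thermodynamic relations used here (a sketch, not the Spartan workflow), the Python snippet below encodes ∆G° = -nFE° and the binding constant implied by a reaction free energy at 311.75 K; it reproduces the apo-chromodulin Kf of about 1.10 × 10^21 M-4 quoted in the Introduction.

    import math

    R = 8.314      # J/(mol K)
    T = 311.75     # K, bovine liver temperature used in this work
    F = 96485.0    # C/mol, Faraday constant

    def dG_from_potential(n, E0_volts):
        """Delta-G (J/mol) from a standard reduction potential: -n F E0."""
        return -n * F * E0_volts

    def K_from_dG(dG_joules_per_mol):
        """Equilibrium (binding) constant implied by Delta-G at T."""
        return math.exp(-dG_joules_per_mol / (R * T))

    print(f"Kf = {K_from_dG(-125.58e3):.2e}")  # ~1.1e+21, cf. Table 5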

Discussion
The results obtained from this project provide good data for elucidating the structure of chromodulin. Although the shape of the chromium cluster was determined by previous experiments, the exact orientation of the cluster and the residues involved in binding are still unknown. To narrow down the correct structure, six different orientations of the chromium cluster were developed, with each atom in the trinuclear assembly bound to different residues in the binding region EEEEGDD (Fig. 5); the fourth chromium atom interacts with all of the carboxyl groups of the acidic residues.

Figure 5. Possible orientations of the chromium cluster in chromodulin

The models were created in such a way that the binding of Cr3+ does not cause additional bond strain. By comparing the ∆G°f values in the tables below, it is apparent that the sequence EEEEGDDGCC with chromium cluster model 4 produces the lowest ∆G°f values (Tables 2 and 3); this implies that these structures form very stable chromodulin complexes, and their ∆G°f values produce ∆G°rxn values close to the accepted value determined from numerous experiments.



Tables 1-3 assume that Cr3 and Cr4 are bridged by carboxylate groups; since it is theorized that stronger bridging ligands are needed, different bridging ligands were also tested to see whether the ∆G°f values move closer to the theoretical value for this amino acid sequence.

Table 1. ∆G°f (kJ/mol) of candidate amino acid sequences

Amino Acid Sequence    ∆G°f, no disulfide bridge (kJ/mol)    ∆G°f, disulfide bridge (kJ/mol)
EEEEGDDGCC             -334.47                               -321.22
EEEEGDDCCG             -385.02                               -299.26
EEEEGDDCGC             -390.53                               -261.64
pEEEEGDD               -391.89                               N/A

Table 2. ∆G°f (kJ/mol) of candidate chromodulin structures (without disulfide bridge)

Orientation Model    EEEEGDDGCC-Cr4    EEEEGDDCGC-Cr4    EEEEGDDCCG-Cr4
1                    -866.05           -634.24           -716.67
2                    101.81            -49.72            120.01
3                    -436.43           -450.53           -440.46
4                    -1348.39          642.43            -224.51
5                    141.59            -168.23           -178.73
6                    -935.98           -1096.53          -143.8

Table 3. ∆G°f (kJ/mol) of candidate chromodulin structures (with disulfide bridge)

Orientation Model    EEEEGDDGCC-Cr4    EEEEGDDCGC-Cr4    EEEEGDDCCG-Cr4
1                    -897.52           -909.42           -884.16
2                    104.61            -5.97             147.38
3                    -449.68           -400.14           -439.93
4                    -1397.48          -1083.54          -221.93
5                    -210.74           -190.8            -161.09
6                    -907.68           -1082.18          -248.49

Despite the fact that the N-terminal glutamate has been converted to pyroglutamate, it is still possible for the carbonyl oxygen on pyroglutamate to bind chromium rather effectively under aerobic conditions (Table 4). The relatively small deviation from the experimental data shows that these theoretical structures have a good likelihood of being consistent with the actual structures of chromodulin and pEEEEGDD. In particular, the small percent error for the sequence EEEEGDDGCC shows a great likelihood that it possesses the correct structure of chromodulin. However, because the percent error of EEEEGDDGCC with a disulfide bridge present is also relatively small (Table 5), it cannot be ruled out as a possible structure of chromodulin. The bridging ligand that produced ∆G°f values closer to the theoretical value was O2 (Table 5). This also agrees with experimental data showing that only the Glu and Asp residues are responsible for chromium binding; however, based on Fig. 6, the C-terminus may be responsible for coordination as well.



Table 4. ∆G°rxn and Kf for the binding of O2 and Cr3+ by pEEEEGDD

Name of Molecule           ∆G°rxn (kJ/mol)    % error    Kf (M-4)
pEEEEGDD                   -135.65            12.06%     6.71 × 10^22
pEEEEGDD (bovine liver)    -121.05            N/A        1.92 × 10^20

Table 5. ∆G°rxn and Kf for the binding of O2 and Cr3+ by apo-chromodulin and the candidate structures

Name of Molecule                  ∆G°rxn (kJ/mol)    % error    Kf (M-4)
EEEEGDDGCC                        -128.64            2.44%      3.59 × 10^21
EEEEGDDGCC-disulfide              -147.42            17.39%     5.03 × 10^24
apo-chromodulin (bovine liver)    -125.58            N/A        1.10 × 10^21

Furthermore, because the results show consistency, with a relatively small percent error for orientation model 4, this may be a good approximation of the chromium cluster in chromodulin.

Figure 6. Space-filling (left) and ball and stick (right) models of theoretical chromodulin structure (EEEEGDDGCCCr4 ) [3]

show consistency by having a relatively small percent error in orientation model 4, this may be a good approximation of the chromium cluster in chromodulin. The oxygen molecule shown in Fig. 6 is most likely an anion, as it is orienting itself to be nearly perpendicular to the planar trinuclear assembly; it may be possible that oxygen molecule is orienting itself to maximize electrostatic interactions from both chromium ions. The oxygen anion may be either a superoxide (O− 2 ) or a −2 peroxide (O2 ) anion. Furthermore, given this geometry, the oxygen atom may be coordinating



to both Cr3+ ions in the cluster. Given this information, the binding reaction can be written as follows:

O2(g) + apo-LMWCr^6-(aq) + 4 Cr3+(aq) → holo-LMWCr^6+(aq).
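The standard thermodynamic relation Kf = e^(-∆G°/RT) links the free energies and formation constants tabulated above. The minimal Python sketch below is our own illustration; it assumes T = 298 K (the text does not state the temperature used) and reproduces the tabulated Kf for EEEEGDDGCC only to within roughly an order of magnitude:

```python
import math

R = 8.314    # J/(mol K)
T = 298.15   # K; assumed here, the paper does not state the temperature used

def formation_constant(dG_kJ_per_mol):
    """Standard relation Kf = exp(-dG/RT) for an overall binding reaction."""
    return math.exp(-dG_kJ_per_mol * 1000.0 / (R * T))

# dG_rxn of EEEEGDDGCC from Table 5 (kJ/mol); prints ~3e22, within an
# order of magnitude of the tabulated 3.59e21
print(f"Kf ~ {formation_constant(-128.64):.2e}")
```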

Since O2 refers to an element in its standard state, it is able to stabilize the structure of holo-chromodulin without severely affecting the ∆G°rxn for binding of Cr3+. Also, if the O2 molecule is negatively charged, the anion is most likely formed over the course of the reaction, so that the charge balance of the reaction remains valid.

The exact mechanism by which tyrosine kinase functions is not well understood. Studying the thermodynamics of the enzyme sheds light on its mechanism. Enzymes lower the activation energy of unfavorable reactions by passing through alternative, more favorable reaction pathways. To best understand the mechanism, the enzymes were studied with and without chromodulin and a possible ethylmercury cofactor from the PDB file. The structure of tyrosine kinase containing ethylmercury (TyrKHg) appears to provide the correct structure of the enzyme, as its ability to bind ATP is greatly aided by chromodulin, as opposed to TyrK (Fig. 7). Although the binding of ATP by TyrKHg is nonspontaneous, it may work through energetic coupling (one reaction driving the other), possibly from insulin binding, the formation of the tyrosine kinase dimer, or the reaction being pulled by the energy released from the phosphorylation reaction.

Figure 7. Tyrosine kinase mechanism of action in the presence of chromodulin

However, even though the binding can be favorable through energetic coupling, the initial interaction between tyrosine kinase and ATP is nonspontaneous; therefore, enzyme activity should be relatively low in the absence of chromodulin. Furthermore, because the tyrosine residues in the active site are so close to each other, each subsequent autophosphorylation step is more endergonic; this limits the degree of autophosphorylation. Also, since chromium is transported into the cell after insulin has already bound to the receptor, it is fair to postulate that the enzyme should be able to



function fairly well without chromodulin, although it will eventually become insulin resistant if there is a chromium deficiency, due to insufficient levels of chromodulin. Based on the results, the binding of LMWCr to TyrKHg is an endergonic process (Table 6); therefore, it may bind to the enzyme via a different pathway. Since tyrosine kinase carries out signal transduction after insulin binding, it can be inferred that TyrKHgPi3 is the predominant form of the enzyme at this point in the reaction. Because of this, it is possible that substrate phosphorylation and autophosphorylation occur simultaneously. Afterwards, TyrKHgPi3 can bind chromodulin to increase enzyme activity; it is worth noting that, due to the close proximity of the tyrosine residues, ATP binding becomes progressively more endergonic over the course of the reaction. However, even though ATP binding is endergonic, the phosphorylation reaction makes the overall reaction exergonic (Table 7). The presence of chromodulin lowers the ∆G for ATP binding at this point and allows autophosphorylation to occur more favorably (Table 8). However, this creates the disadvantage of making substrate phosphorylation less favorable, as reflected in the difference between TyrKHgLMWCr and TyrKHgPi3LMWCr.

Table 6. ∆G°f of tyrosine kinase enzyme structures

Form of enzyme | ∆G°f (kJ/mol)
TyrK | 65161
TyrKHg | 33521
TyrKLMWCr | 57584
TyrKHgLMWCr | 43922

Table 7. ∆G°rxn for autophosphorylation of tyrosine kinase enzymes

Form of enzyme | ∆G°rxn (kJ/mol)
TyrK | -38128.74
TyrKHg | -16506.74
TyrKLMWCr | -30756.74
TyrKHgLMWCr | -39409.74

Table 8. ∆G°rxn for the binding of ATP by tyrosine kinase enzymes

Form of enzyme | ∆G°rxn (kJ/mol)
TyrK | -20922.27
TyrKHg | 1358.73
TyrKLMWCr | -19166.27
TyrKHgLMWCr | -18955.96

Assuming the mechanisms illustrated in Fig. 2 and Fig. 7 are accurate, once insulin levels drop, chromodulin will spontaneously dissociate from the enzyme after performing one last substrate phosphorylation.



The effect chromodulin has on tyrosine kinase activity may not be solely thermodynamic in nature; it may also be due to the enzyme's topology. In Fig. 8, TyrKHg and TyrKHgPi3 have their transmembrane groups coiled up so that the N-terminus is attracted to the C-terminus. This is not apparent in TyrKHgLMWCr and TyrKHgPi3LMWCr, because the positively charged chromodulin repels the positively charged N-terminus. Under cellular conditions, this would most likely cause the cytosolic region to move further into the cytosol and be more available for substrate binding.

Figure 8. Structures of TyrKHg (upper left), TyrKHgPi3 (lower left), TyrKHgLMWCr (upper right), and TyrKHgPi3LMWCr (lower right). Red circle: transmembrane region; yellow circle: cytosolic region.

Conclusion

Based on our calculations, the structures of chromodulin and pEEEEGDD created during this project may well match the structures studied experimentally. Comparing the ∆G°rxn and Kf values of the sequence EEEEGDDGCC and of pEEEEGDD with the accepted values shows that they may have the approximate orientation of the chromium cluster found in chromodulin. Also, since only the sequence EEEEGDDGCC produced comparable results out of the three sequences, there is a strong likelihood that it is the sequence of chromodulin. By modelling the effect chromodulin has on the energy of the tyrosine kinase enzyme, its effects on the enzyme mechanism can be inferred. Since chromodulin appears to make ATP binding more favorable, this is one way the cofactor can increase enzyme activity. By making tyrosine autophosphorylation more favorable, the reaction should proceed more rapidly and signal transduction should occur more efficiently. This may also be accomplished through the effect chromodulin has on the molecular structure of tyrosine kinase and, consequently, on its ability to perform substrate binding.



Future Work

Further calculations will be performed using semi-empirical methods and density functional theory in order to provide better approximations of the molecular geometry. Density functional theory will also be employed to compute the UV/Vis spectrum of the structure shown in Fig. 6, which will then be compared to the experimental spectra seen in Fig. 4. In terms of experimental work, the magnetic properties of the chromium cluster will be altered by raising the temperature in order to examine the effect on chromium binding. This may provide better methods for isolating apo-chromodulin without decomposition of the peptide sequence.

Acknowledgements

The author gives his thanks to the Michael '58 and Aimee Rusinko Kakos Summer Research Fellowship for its continued financial support. He also thanks Dr. Joseph F. Capitani for providing guidance and advice throughout the entirety of this research project.

References

[1] Ahern, Kevin, "Insulin Receptor." Integration and Cellular Signalling. Oregon State University, n.d. Web. 11 Aug. 2016
[2] Chen, Y., Watson, H., Gao, J., Sinha, S. H., Cassady, J., and Vincent, J., "Characterization of the Organic Component of Low-Molecular-Weight Chromium-Binding Substance and Its Binding of Chromium." J. Nutr., 141(7), 1225-1232 (2011)
[3] Harris, Daniel, and Lucy, C. A., Quantitative Chemical Analysis, 9th ed., W. H. Freeman & Company (2016)
[4] Vincent, John, "The Biochemistry of Chromium." J. Nutr., 130(4), 715-718 (2000)
[5] Jacquamet, L., Sun, Y., Hatfield, J., Gu, W., Cramer, S., Crowder, M., and Latour, J., "Characterization of Chromodulin by X-ray Absorption and Electron Paramagnetic Resonance Spectroscopies and Magnetic Susceptibility Measurements." J. Am. Chem. Soc., 125, 774-780 (2003)
[6] Peterson, R. L., Banker, K. J., Garcia, T. Y., and Works, C. F., "Isolation of a novel chromium(III) binding protein from bovine liver tissue after chromium(VI) exposure." J. Inorg. Biochem., 102(4), 833-841 (2008)
[7] Spartan'14, Wavefunction, Inc., Irvine, CA
[8] Halgren, Thomas A., "Merck molecular force field. I. Basis, form, scope, parameterization, and performance of MMFF94." J. Comput. Chem., 17(5-6), 490-519 (1996)


Spectroscopic study of the interaction between dipicolinic acid and human serum albumin

James Irizarry∗ and Matthew Feliciano∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Human serum albumin (HSA) is a protein found in plasma that helps transport drugs throughout the circulatory system. The binding of dipicolinic acid (DPA), a chemical component of bacterial spores, to HSA was investigated using fluorescence and UV/Vis spectroscopy. The experiments were performed to determine the type of forces involved in the binding of DPA and the quenching mechanism involved in the binding reaction; these were established by calculating the thermodynamic parameters, the quenching constant, and the binding constant.

∗Research mentored by Jianwei Fan, Ph.D.

Introduction

Human serum albumin (HSA) (Fig. 1) is a transport protein found in human blood. HSA is a globular protein composed of a single amino acid chain containing three structurally homologous domains (I, II, and III); each domain contains two subdomains [1]. HSA binds most drugs and transports them throughout the circulatory system. When drugs are bound to HSA they become inactive and unable to carry out their intended biological function [1]; if a drug binds strongly to HSA, the concentration of the active form of the drug will decrease.

Figure 1. Three-dimensional structure of HSA (http://pubs.rsc.org/en/content/articlelanding/2009/ob/b911605b#!divAbstract)

The two types of fluorophores present in HSA are Trp-214 in subdomain IIA (Fig. 2), which dominates the fluorescence of HSA, and several tyrosine residues found in different subdomains. The intensity of HSA's fluorescence is very sensitive to the local chemical environment, such as the solvent and the types of ligands in solution. These factors can induce a change in the protein's conformation and reduce the emission through static and dynamic quenching. Consequently, ligand-albumin binding information can be acquired via fluorescence quenching measurements; this is why the interactions between HSA and dipicolinic acid (DPA) were investigated.

Figure 2. Molecular structure of tryptophan (http://www.sigmaaldrich.com/catalog/product/sial/t0254)

DPA (Fig. 3) is one of many compounds that make up bacterial spores (endospores) [2]. Under appropriate conditions, endospores germinate into active cells and release DPA; for this reason, detection of DPA is used to chemically detect the presence of endospores. By determining the binding constant and thermodynamic parameters of DPA's binding to HSA, the toxicological profile and pharmacodynamics of DPA can be characterized.

Figure 3. Molecular structure of DPA (http://www.sigmaaldrich.com/catalog/product/aldrich/p63808)

Materials and Methods

HSA and DPA were purchased from Sigma Aldrich. A 0.05 M tris-HCl buffer (pH = 7.2) was prepared using analytical-reagent-grade chemicals, and all solutions were prepared with ultrapure deionized water. Fluorescence was measured with a Photon Technology International (PTI) spectrofluorometer equipped with a thermostatted cell holder; the slit widths were set to 4 mm. The excitation wavelength was set at 280 nm, and the emission spectrum was read over the 290-500 nm range. The UV/Vis spectrum was recorded with an Agilent 8453 UV/visible photodiode array spectrophotometer.

200 µL of stock HSA (1.00 × 10^-5 M) and 1800 µL of tris buffer (pH = 7.2) were mixed in a cuvette, which was then placed in a constant-temperature cell holder for 10 minutes to reach the desired temperature. The HSA solution was titrated with 1.00 × 10^-4 M DPA in 2 µL increments until the protein was saturated at 22 µL. After each addition of DPA, the fluorescence intensity of the solution was measured with the spectrofluorometer. Each set of titrations was performed at three temperatures (295, 303, and 308 K). The UV-visible spectrum of each solution was also recorded at 295 K. The binding constant and thermodynamic parameters of DPA's binding to HSA were determined via static quenching.
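The DPA concentration in the cuvette after each addition follows from simple dilution. A minimal Python sketch (the variable names are ours, not from the paper); it shows that at the 22 µL saturation point [DPA] ≈ 1.1 µM, close to the ≈1 µM HSA in the cell:

```python
# DPA concentration in the cuvette after each 2 uL addition of
# 1.00e-4 M stock to an initial 2000 uL of HSA solution (simple dilution).
stock_dpa = 1.00e-4   # M
v_initial = 2000.0    # uL (200 uL HSA stock + 1800 uL buffer)

for v_added in range(2, 24, 2):   # cumulative uL of DPA stock added
    conc = stock_dpa * v_added / (v_initial + v_added)
    print(f"{v_added:2d} uL added -> [DPA] = {conc * 1e6:.2f} uM")
```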

Results and Discussion

UV/Visible spectra

Fig. 4 shows an overlap of the λmax of the HSA and DPA solutions; this overlap produces an inner filter effect, in which HSA and DPA compete for excitation photons because of their similar λmax values. When these molecules are promoted to the excited state at this wavelength during fluorescence spectroscopy, a decrease in the fluorescence of HSA is apparent, because fewer excited HSA molecules are present. Also, the difference between the theoretical and experimental absorbance values of the 50/50 mixture of HSA and DPA can be accounted for by the formation of the HSA-DPA complex: if the complex forms and does not absorb strongly at that wavelength, the experimental absorbance will be smaller than the theoretical value. This difference in absorbance also confirms that there is association between HSA and DPA.

Figure 4. Overlay of the UV-visible spectra. (a) 1.0×10−5 M HSA, (b) 50/50 mixture of HSA and DPA, (c) Calculated 50/50 mixture of HSA and DPA and (d) 1.0 × 10−4 M DPA.

Fluorescence quenching

The decrease in HSA's fluorescence in Fig. 5 shows that DPA is able to quench HSA. Furthermore, as quenching increases, λem shifts to a shorter wavelength, because DPA changes the micro-environment of Trp-214, making it more hydrophobic. The UV/Vis spectra (Fig. 4) explain the process by which DPA quenches HSA.



Figure 5. Fluorescence spectra of HSA in tris buffer in the absence and presence of DPA (1 × 10−4 M). λex = 280 nm. The µL of DPA added (A – J): 0, 2, 4, 6, 8, 10, 12, 14, 18 and 24.

The measured fluorescence of HSA was corrected for the inner filter effect using [3]:

Fcor = Fobs × e^(A/2).    (1)

Fobs and Fcor refer to the measured and corrected fluorescence intensities, respectively, and A is the absorbance of DPA at λex = 280 nm, at the same concentration as in the mixture. Eq. (1) is valid here only because the absorbance is less than 0.3 [4]. Fig. 6 illustrates that HSA reaches saturation with DPA when the ratio [HSA]/[DPA] = 1; this implies that HSA and DPA bind in a 1:1 ratio.

Figure 6. Ratio of the emission intensities (F0/F) of HSA (1 µM) vs. the concentration of DPA. F0 and F are the emission intensities of HSA in the absence and presence of DPA, respectively.
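A minimal sketch of the Eq. (1) correction, with placeholder inputs rather than measured values:

```python
import math

def correct_inner_filter(F_obs, A):
    """Eq. (1): F_cor = F_obs * exp(A/2); applicable only when A < 0.3 [4]."""
    if A >= 0.3:
        raise ValueError("Eq. (1) requires absorbance A < 0.3")
    return F_obs * math.exp(A / 2.0)

# Illustrative call: an absorbance of 0.05 raises F by ~2.5%
print(correct_inner_filter(F_obs=1.0e5, A=0.05))
```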

Stern-Volmer constant

Because the quenching mechanism is static, the corrected intensity values can be entered into the Stern-Volmer equation [3]:

F0/Fcor = 1 + KSV[Q].    (2)



KSV is the Stern-Volmer quenching constant and [Q] is the concentration of the quencher, in this case DPA.

The Stern-Volmer constant can be determined from the slope of the lines in Fig. 7; from its value, the quenching mechanism (static or dynamic) was determined.

Figure 7. Stern-Volmer plots at 295, 303, and 308 K

In dynamic quenching, the Stern-Volmer constant is given by

KSV = kq τ    (3)

where kq refers to the quenching rate constant and τ is the lifetime of the excited state of HSA (τ = 1 × 10^-8 s). For dynamic quenching, kq ≤ kd, where kd is the diffusion rate constant in aqueous solution (kd = 10^10 M^-1 s^-1). The value of kq calculated for this project (kq ∼ 10^13 M^-1 s^-1) [3] is much greater than kd. Furthermore, in dynamic quenching KSV increases with increasing temperature, whereas in static quenching KSV decreases with increasing temperature, as shown in Table 1; this is also seen in Fig. 7, where the slope increases as temperature decreases. These two facts show that the quenching is static rather than dynamic.

Table 1. Stern-Volmer constants at 295, 303, and 308 K

Temperature | KSV (M^-1)
295 K | (4.75 ± 0.69) × 10^5
303 K | (3.31 ± 0.28) × 10^5
308 K | (2.84 ± 0.28) × 10^5
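The KSV values in Table 1 are the slopes of the Fig. 7 lines, per Eq. (2). A minimal least-squares sketch; the F0/Fcor points below are synthetic, generated around the 295 K constant, since the raw titration data are not tabulated:

```python
import numpy as np

# Stern-Volmer fit, Eq. (2): F0/Fcor = 1 + KSV*[Q]
Q = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-6   # [DPA] in M (synthetic)
ratio = 1.0 + 4.75e5 * Q                               # ideal points for KSV = 4.75e5

KSV, intercept = np.polyfit(Q, ratio, 1)               # slope = KSV, intercept ~ 1
print(f"KSV = {KSV:.3g} M^-1, intercept = {intercept:.3f}")
```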

Association constant

The process by which HSA binds DPA can be illustrated by the following reaction [5]:

P + nD ⇌ DnP    (4)



where P is the free protein (HSA), D is the drug (DPA), n is the number of binding sites, and DnP is the binding complex. The binding constant Ka can be calculated using the following equation:

Ka = [DnP]/([P][D]^n).    (5)

However, since HSA is the only fluorescent species in the reaction,

F/F0 = [P]/[P]0    (6)

[D] = [Q]    (7)

log[(F0 − Fcor)/Fcor] = log Ka + n log[Q].    (8)

The value of log Ka can be determined from the y-intercept of each line in Fig. 8. Table 2 shows that as temperature increases both Ka and KSV decrease, and that Ka and KSV are almost equal to each other; this provides further evidence of a static quenching mechanism.

Figure 8. log[(F0 − Fcor)/Fcor] vs. log[Q] at 295, 303, and 308 K

Table 2. Values of KSV, Ka, and n at 295, 303, and 308 K

Temperature | Ka (M^-1) | KSV (M^-1) | n
295 K | (4.62 ± 0.77) × 10^5 | (4.75 ± 0.69) × 10^5 | 0.982 ± 0.158
303 K | (3.34 ± 0.27) × 10^5 | (3.31 ± 0.28) × 10^5 | 1.07 ± 0.109
308 K | (2.81 ± 0.12) × 10^5 | (2.84 ± 0.28) × 10^5 | 1.09 ± 0.204
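Ka and n in Table 2 follow from Eq. (8): the y-intercept of each double-log line gives log Ka and the slope gives n. A minimal sketch with synthetic points consistent with the 295 K entries:

```python
import numpy as np

# Double-log fit, Eq. (8): log[(F0 - Fcor)/Fcor] = log Ka + n log[Q]
Q = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-6        # [DPA] in M (synthetic)
y = np.log10(4.62e5) + 1.0 * np.log10(Q)              # ideal line: Ka = 4.62e5, n = 1

n, logKa = np.polyfit(np.log10(Q), y, 1)
print(f"n = {n:.2f}, Ka = {10**logKa:.3g} M^-1")
```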



Thermodynamic Parameters

The thermodynamic parameters of the binding process are obtained using the van't Hoff equation [6]:

log(Ka) = (−∆H°/2.303R)(1/T) + ∆S°/2.303R.    (9)

The van't Hoff plot (Fig. 9) shows that DPA binds more strongly to HSA as 1/T increases, i.e., as T decreases. These thermodynamic parameters provide information about the forces responsible for ligand binding (Table 3).

Figure 9. log(Ka) vs. 1/T at 295, 303, and 308 K

Table 3. Ligand binding forces based on thermodynamic parameters [7]

Thermodynamic parameters | Type of interaction
∆H > 0, ∆S > 0 | Hydrophobic interactions
∆H < 0, ∆S > 0 | Electrostatic interactions
∆H < 0, ∆S < 0 | van der Waals forces and hydrogen bonding
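∆H and ∆S follow from a linear fit of log Ka against 1/T, Eq. (9). The minimal sketch below feeds in the Ka values from Table 2 and recovers ∆H ≈ -29 kJ/mol and ∆S ≈ +10 J/(mol·K), in reasonable agreement with Table 4:

```python
import numpy as np

R = 8.314                                   # J/(mol K)
T = np.array([295.0, 303.0, 308.0])         # K
Ka = np.array([4.62e5, 3.34e5, 2.81e5])     # M^-1, from Table 2

# Eq. (9): log Ka = -dH/(2.303 R) * (1/T) + dS/(2.303 R)
slope, intercept = np.polyfit(1.0 / T, np.log10(Ka), 1)
dH = -slope * 2.303 * R / 1000.0            # kJ/mol
dS = intercept * 2.303 * R                  # J/(mol K)
print(f"dH = {dH:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```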

Since ∆H < 0 and ∆G < 0, the binding of DPA by HSA is an exothermic and spontaneous reaction (Table 4). Furthermore, ∆H < 0 and ∆S > 0, consistent with the binding of DPA being driven by electrostatic forces between the negatively charged DPA and the positive center of HSA. Given the small ∆S values, the binding interaction may also involve van der Waals forces and hydrogen bonding.

Table 4. Thermodynamic parameters at 295, 303, and 308 K

Temperature | ∆H (kJ/mol) | ∆S (J/(mol·K)) | ∆G (kJ/mol)
295 K | -29.1 | +9.83 | -31.99
303 K | -29.1 | +9.83 | -32.07
308 K | -29.1 | +9.83 | -32.12

Conclusion

When DPA binds to HSA, the fluorescence intensity of HSA decreases. The quenching mechanism is believed to be static, arising from the binding of DPA by HSA. This has been further substantiated by the data showing that Ka and KSV are almost equal to each other and that both decreased as temperature increased. The values of the thermodynamic parameters indicate that the binding interaction is mainly due to electrostatic forces, without excluding van der Waals forces or hydrogen bonding. The value of n shows that HSA has only one binding site, located in domain II, which is consistent with the literature value.

Acknowledgements

The authors would like to thank Dr. Enju Wang from St. John's University for the collaborative work. They also thank Dr. Jianwei Fan for her guidance throughout the entirety of the project.

References

[1] Mauro Fasano, Stephen Curry, Enzo Terreno, Monica Galliano, Gabriella Fanali, Pasquale Narciso, Stefania Notari, and Paolo Ascenzi, "The extraordinary ligand binding properties of human serum albumin," IUBMB Life, 57(12), 787-796 (2005)
[2] G. W. Gould and A. Hurst (Eds.), The Bacterial Spore, Academic Press, New York (1969)
[3] J. R. Lakowicz, Principles of Fluorescence Spectroscopy, 3rd ed., Springer, New York, NY (2006)
[4] Mullah Muhaiminul Islam, Vikash K. Sonu, Pynsakhiat Miki Gashna, N. Shaemninway Moyon, and Sivaprasad Mitra, "Caffeine and sulfadiazine interact differently with human serum albumin: A combined fluorescence and molecular docking study," Spectrochim. Acta A Mol. Biomol. Spectrosc., 152, 22-33 (2016)
[5] T. S. Singh and S. Mitra, "Interaction of cinnamic acid derivatives with serum albumins: a fluorescence spectroscopy study," Spectrochim. Acta A Mol. Biomol. Spectrosc., 78(3), 942-948 (2011)
[6] P. B. Kandagal, S. Ashoka, J. Seetharanappa, S. M. T. Shaikh, Y. Jadegoud, and O. B. Ijare, "Study of the interaction of an anticancer drug with human and bovine serum albumin: Spectroscopic approach," J. Pharm. Biomed. Anal., 41, 393-399 (2006)
[7] Philip D. Ross and S. Subramanian, "Thermodynamics of protein association reactions: forces contributing to stability," Biochemistry, 20(11), 3096-3102 (1981)


Investigation of the dipicolinic acid interaction with human serum albumin using spectroscopic measurements

Sophia Prentzas∗ and Marisa Kroger∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Human serum albumin (HSA) is the most abundant human plasma protein; it reversibly binds drugs and transports them to their target sites. In this work, the binding of HSA to dipicolinic acid (DPA), a chemical component of endospores, was investigated using UV/Visible absorption and fluorescence spectroscopy. Our experiments determined the quenching constant, the binding constant, the thermodynamic parameters, the number of binding sites on HSA, and the type of quenching mechanism occurring between DPA and HSA. In addition, the nature of the binding forces is discussed.

∗Research mentored by Jianwei Fan, Ph.D.

Introduction

Human serum albumin (HSA) is a single polypeptide chain of 585 amino acids and is the most abundant protein in human blood plasma. It contains three homologous domains (I, II, III), each containing two subdomains (Fig. 1).

Figure 1. Three-dimensional diagram of HSA [1]

The fluorescence of HSA arises largely from the amino acid tryptophan (Fig. 2), located in its subdomains. The fluorescence intensity of HSA can easily be affected by its microenvironment. Ligands can bind to HSA, producing a conformational change and reducing HSA's emission intensity through quenching mechanisms. Using the fluorescent properties of HSA, the binding between HSA and a ligand can be investigated.

Figure 2. Structure of Tryptophan

Figure 3. Dipicolinic Acid (DPA)

HSA is well known as an important regulator of pharmacokinetic behavior. It can bind reversibly to most drugs and transport them to various target sites. When a drug binds to HSA, it loses its ability to carry out its biological function; therefore, the stronger the binding between the drug and HSA, the lower the concentration of the active form of the drug [1, 2]. Dipicolinic acid (DPA) is a chemical compound that makes up a portion of endospores (Fig. 3). It is mainly known as the compound responsible for the endospore's heat resistance. When endospores germinate, DPA is released and can be bound by HSA [3].

Experimental

Materials. HSA and DPA were purchased from Sigma Aldrich. A 0.05 M phosphate buffer (H2PO4^-/HPO4^2-) was prepared in distilled water at pH 7.2 to mimic the human blood pH of 7.35.

Instrumentation. The emission spectrum was measured using a Photon Technology International fluorometer with a 1-cm cuvette connected to a thermostat bath. The excitation wavelength was set at 280 nm, and the emission spectra were recorded from 290-500 nm. The excitation and emission bandwidths were both set to 5 nm. The absorption spectrum was measured using an Agilent 8453 UV/Visible photodiode array spectrophotometer.

Fluorescence titration. A 2 mL solution containing 200 µL HSA stock (1.00 × 10^-5 M) and 1800 µL phosphate buffer was incubated overnight and then added to the cuvette attached to the thermostat bath. The solution was titrated with successive additions of DPA (1 × 10^-4 M) until 24 µL had been added, or until the emission quenching was saturated. The fluorescence titration was carried out at three different temperatures: 295 K, 303 K, and 308 K.

Results and Discussion

UV/Visible absorption spectra

The UV/Visible absorption spectra (Fig. 4) show absorption maxima for pure DPA and pure HSA at 273 nm and 279 nm, respectively. The experimental spectrum of the 50/50 DPA/HSA mixture had a lower peak than the calculated theoretical 50/50 spectrum, indicating some interaction between DPA and HSA in the ground state.

Figure 4. The absorption spectra. (a) 1 × 10−4 M DPA solution (λmax = 273 nm), (b) 50/50 mixture of HSA and DPA, (c) Theoretical 50/50 mixture of HSA and DPA, (d) 1 × 10−5 M HSA solution (λmax = 279 nm).

The absorption maxima of HSA and DPA are relatively close, resulting in an inner filter effect between DPA and HSA: DPA competes with HSA for excitation photons, creating an apparent decrease in the fluorescence intensity of HSA. To correct the measured fluorescence for the inner filter effect, the following equation was used [2]:

Fcor = F × e^(A/2)    (1)

where F is the emission intensity in the presence of the quencher, Fcor is the corrected emission intensity of HSA, and A is the absorbance of DPA at 280 nm at the same concentration as in the mixture of HSA and DPA.

Fluorescence quenching spectra

Fig. 5 shows that DPA quenched HSA and that the HSA emission maximum, λem, shifted to a shorter wavelength, indicating that DPA affected HSA's microenvironment, making it more hydrophobic.


162

The Manhattan Scientist, Series B, Volume 3 (2016)

Prentzas + Kroger

Figure 5. Emission spectra of HSA in the absence and presence of DPA (1 × 10^-4 M). λex = 280 nm. The µL of DPA added (A-J): 0, 2, 4, 6, 8, 10, 12, 14, 18, and 24.

Fig. 6 shows that successive additions of DPA lead to saturation of HSA with DPA. Saturation was reached when the concentration of DPA added equaled the concentration of HSA present in the mixture; therefore, HSA and DPA bind in a 1:1 ratio.

Figure 6. Ratio of the emission intensities (F0/F) of HSA vs. the concentration of DPA. F0 and F are the emission intensities of HSA in the absence and presence of DPA, respectively.

Stern-Volmer quenching constant

The Stern-Volmer equation

F0/Fcor = 1 + KSV[Q]    (2)

was used to determine the Stern-Volmer quenching constant KSV. F0 is the fluorescence intensity without the DPA quencher, Fcor is the corrected intensity obtained from Eq. (1), and [Q] is the concentration of the quencher, DPA. Fig. 7 is the plot of F0/Fcor of HSA vs. the DPA concentration at three different temperatures. Based on Fig. 7, the KSV values decreased with increasing temperature.


Figure 7. Stern-Volmer plot of HSA at three different temperatures.

Quenching mechanism

Static quenching is the association between the protein and the ligand in the ground state, producing a non-emissive product. It is therefore expected that as the temperature increases there is less association between HSA and DPA and more free (fluorescent) HSA, so the KSV values decrease. Dynamic quenching results from collisions between the protein and the ligand in the excited state; energy transfer from HSA to DPA competes with the fluorescence of HSA, so the KSV values increase with increasing temperature. The experimental data are consistent with static quenching by definition (Table 1).

Table 1. KSV, Ka, and n at three different temperatures

Temperature | KSV (M^-1) | Ka (M^-1) | n
295 K | (1.70 ± 0.77) × 10^5 | (1.96 ± 0.47) × 10^5 | 0.94
303 K | (1.10 ± 0.00) × 10^5 | (1.11 ± 0.03) × 10^5 | 0.95
308 K | (7.51 ± 0.44) × 10^4 | (7.91 ± 0.10) × 10^4 | 0.75

For dynamic quenching, the Stern-Volmer quenching constant is KSV = kq τ, where kq is the bimolecular quenching rate constant and τ is the lifetime of the excited state of HSA without quencher, 1 × 10^-8 s. The maximum value of kq is kd, the diffusion rate constant in aqueous solution (1 × 10^10 M^-1 s^-1). The kq value calculated from our experimental data was kq = 1 × 10^13 M^-1 s^-1, which greatly exceeds kd; therefore, the mechanism cannot be dynamic and must be static quenching.
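The kq comparison above is a one-line computation; a minimal sketch using the 295 K constant from Table 1:

```python
tau = 1e-8      # s, excited-state lifetime of HSA without quencher
kd = 1e10       # M^-1 s^-1, diffusion rate constant in aqueous solution
KSV = 1.70e5    # M^-1, Table 1, 295 K

kq = KSV / tau  # = 1.7e13 M^-1 s^-1
print(f"kq = {kq:.2g} M^-1 s^-1; exceeds kd, ruling out dynamic quenching: {kq > kd}")
```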



Determination of binding constant and number of binding sites

Further evidence of static quenching is that KSV ≈ Ka, where Ka is the equilibrium constant of the association reaction between HSA and DPA. The binding constant is derived [4] from

P + nD ⇌ DnP

where P is the free protein, D is the drug, n is the number of binding sites on the protein, and DnP is the binding complex. Ka is obtained using

Ka = [DnP]/([P][D]^n).    (3)

The double-log equation

log[(F0 − Fcor)/Fcor] = log Ka + n log[DPA]    (4)

where F0 is the emission intensity in the absence of quencher, Fcor is the corrected emission intensity in the presence of quencher, Ka is the binding constant, and n is the number of binding sites on HSA, allowed Ka and n to be determined from the y-intercept and slope, respectively, of the log[(F0 − Fcor)/Fcor] vs. log[DPA] plot in Fig. 8.

Figure 8. log[(F0 − Fcor)/Fcor] vs. log[DPA] at three temperatures.

Thermodynamic parameters

The van't Hoff equation

log(Ka) = ∆S/(2.303R) − ∆H/(2.303RT)    (5)



was used to obtain the thermodynamic parameters. Here Ka is the binding constant, and ∆S and ∆H are the changes in entropy and enthalpy, respectively. The fitted ∆S and ∆H were then used to determine the Gibbs free energy, ∆G, from

∆G = ∆H − T∆S.    (6)

Fig. 9 is the plot derived from Eq. (5); ∆H and ∆S were determined from its slope and y-intercept, respectively.

Figure 9. log(Ka) vs. 1/T at three different temperatures: 295 K, 303 K, 308 K.

Table 2 lists all of the calculated thermodynamic parameters. These parameters show that the reaction was exothermic and spontaneous, and that entropy decreased.

Table 2. Thermodynamic parameters at three different temperatures

Temperature | ∆H (kJ/mol) | ∆S (J/(mol·K)) | ∆G (kJ/mol)
295 K | -52.89 | -77.96 | -29.90
303 K | -52.89 | -77.96 | -29.27
308 K | -52.89 | -77.96 | -28.88
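The ∆G column of Table 2 follows directly from Eq. (6) with the fitted ∆H and ∆S; a minimal check (note that ∆S must be taken in J/(mol·K)):

```python
dH = -52.89    # kJ/mol, from the van't Hoff slope
dS = -77.96    # J/(mol K), from the van't Hoff intercept

for T in (295, 303, 308):              # K
    dG = dH - T * dS / 1000.0          # Eq. (6), converted to kJ/mol
    print(f"T = {T} K: dG = {dG:.2f} kJ/mol")   # -29.89, -29.27, -28.88
```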

Binding forces

There were four possible binding forces between HSA and DPA: hydrophobic interactions between the aromatic rings, electrostatic interactions, van der Waals forces, and hydrogen bonding. The signs of the calculated thermodynamic parameters allow the type of binding force to be determined: if ∆H > 0 and ∆S > 0, hydrophobic interactions are present; if ∆H < 0 and ∆S > 0, electrostatic interactions occur; and if ∆H < 0 and ∆S < 0, there are van der Waals forces and hydrogen bonding. The calculated ∆H and ∆S values were both negative, indicating that van der Waals forces and hydrogen bonding are dominant [5].

Conclusion

This study showed that DPA quenches the emission of HSA and affects the microenvironment of the tryptophan in domain II of HSA. The quenching mechanism of this reaction is static quenching. As the temperature increased, both the KSV and Ka values decreased. The thermodynamic parameters showed that the intermolecular forces between HSA and DPA are predominantly van der Waals forces and hydrogen bonding. There is only one binding site on HSA, located in domain II.

Acknowledgment

The authors would like to thank St. John's University and Dr. Enju Wang for the research collaboration, and the Manhattan College School of Science for the opportunity.

References

[1] Mullah Muhaiminul Islam, Vikash K. Sonu, Pynsakhiat Miki Gashna, N. Shaemninway Moyon, and Sivaprasad Mitra, Spectrochim. Acta A Mol. Biomol. Spectrosc., 152, 22-33 (2016)
[2] M. Maciazek-Jurczyk, M. Maliszewska, J. Pozycka, J. Pownicka-Zubik, A. Gora, and A. Sulkowska, J. Mol. Struct., 1044, 194-200 (2013)
[3] Hui Xu, Quanwen Lie, and Yanqing Wen, Spectrochim. Acta A Mol. Biomol. Spectrosc., 71, 984-988 (2008)
[4] Jianniao Tian, Jiaqin Liu, Xuan Tian, Zhide Hu, and Xingguo Chen, J. Mol. Struct., 691, 197-202 (2004)
[5] Philip D. Ross and S. Subramanian, Biochemistry, 20(11), 3096-3102 (1981)


Redesigning and improving the multi-step synthesis of zingerone

Dominick Rendina∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. The multi-step synthesis of zingerone is an experiment used at Manhattan College in the second-semester Organic Chemistry laboratory, beginning with the aldol condensation of vanillin and acetone to form dehydrozingerone, followed by transfer hydrogenation with sodium formate to give zingerone. The experiment has had varying results with students, and in the spring semester of 2016 not a single transfer hydrogenation was successful. The focus of this research project was to find out what went wrong and to see whether there was any error within the procedure. Multiple reaction parameters were varied to find what may have caused the failures, as well as ways to improve the procedure and give better synthesis results.

∗Research mentored by James V. McCullagh, Ph.D.

Introduction

A number of plants possess medicinal properties; one of these is the ginger plant, which has historically been used for medical purposes. It has generally been used for treating motion sickness and nausea and for relieving pain. Ginger also possesses anti-cancer properties, and two compounds with these properties, dehydrozingerone and zingerone (Fig. 1), are formed in this synthesis.

Figure 1. Dehydrozingerone (left) and Zingerone (right)

Dehydrozingerone has been tested as a possible chemotherapeutic for colon cancer [1], and zingerone has been used as an antioxidant against excess peroxynitrite in human tissue cells, which can lead to strokes [2]. The synthesis of these compounds is important because of the properties they possess. This two-stage synthesis has been used as a laboratory class experiment for Organic Chemistry II at Manhattan College, and the results have been sporadic across multiple years; some years the reaction works for most of the students, whereas other years it fails for more than half of them. The spring 2016 semester was the first time the transfer hydrogenation reaction failed for every single student in the class; thus, a redesign of this experiment is in order.



Methods and Analysis

The overall synthesis is a two-step process, as seen in Fig. 2. It begins with the aldol condensation of vanillin and acetone under basic conditions with a one-hour reflux; after work-up, the product is dehydrozingerone, a yellow crystalline substance. The synthesis concludes with the transfer hydrogenation of this product using a hydrogen source and a catalyst to remove the carbon-carbon double bond in the middle of the molecule; after work-up, the final product is zingerone, an oily substance with a slight yellow color.

Figure 2. General Mechanism for the two step synthesis

The synthesis of dehydrozingerone has a long history: Hiroshi Nomura isolated the compound in 1918, also using vanillin and acetone; however, his procedure for the aldol reaction calls for a nine-hour reflux, which is far too long [3]. The transfer hydrogenation of dehydrozingerone and closely related compounds was carried out by Leverett R. Smith in 1996, but the catalyst used was a rhodium-on-alumina variant, which is too expensive to run with multiple students [4]. The goal of this research is not only to find out why the reaction failed but also to find cheaper alternatives for the synthesis steps. We analyzed both reaction stages to examine what causes this reaction to work most efficiently, or to fail. For each step of the reaction, the following conditions were altered.

Stage 1: Aldol condensation

Aldol condensations require a molecule that has a carbonyl with at least one hydrogen in the alpha position and another carbonyl on which nucleophilic attack can take place. Basic conditions are required for the reaction to occur, so the pH was not adjusted. The reaction itself cannot be altered much, but we made the following changes, as seen in Fig. 3.



1. Reflux time: The original procedure calls for a one-hour reflux; we shortened this reflux to see how the purity and yield of the compound were affected. The purpose of altering this was to put ourselves in the position of a student who decided to shortcut the reaction: would that product affect the later stage of the synthesis?

2. Adding extraction and silica gel filtration steps: Dehydrozingerone has a yellow color, and it was speculated that this color was due to an impurity caused by a pyrylium salt. It was unclear whether this caused the reaction to fail; nevertheless, removal of the yellow color was attempted as a means of confirming the impurity's presence.

Figure 3. Aldol condensation stage with specific reagents and altered variables (bolded)

Stage 2: Transfer hydrogenation

As opposed to normal hydrogenation, which uses hydrogen gas, transfer hydrogenation uses a compound that forms a more stable one once it donates its hydrogen; in the case of sodium formate, the molecule becomes carbon dioxide when it donates its hydrogen. Hydrogen gas is an explosion hazard if it is not handled properly. The catalyst used for transfer hydrogenation is a palladium-on-alumina variant. This is the stage where the students did not succeed; as a result, we tested multiple variables, as shown in Fig. 4:

Figure 4. Transfer hydrogenation with specific reagents and altered variables (bolded)

1. pH conditions: The original procedure used 0.25 M sulfuric acid in methanol as the solvent. We wanted to test how different pH values affect the reaction, since it was uncertain whether the acidic conditions were causing the sporadic results. The four solvents tested were a 0.25 M KOH solution in methanol, a 0.25 M H2SO4 solution in methanol, a saturated solution of K2CO3 in methanol, and pure methanol.



The purpose was to see what speeds up the reaction the most, what slows it down, and perhaps to identify any solvent that stops the reaction.

2. Order of addition: The order of addition of the reagents was considered, to find how the speed would be affected if the catalyst was added after the reagents were added to the flask. Likewise, we examined what would happen if we added the catalyst before anything else. The original procedure asks that dehydrozingerone and sodium formate be added to the flask with the solvent and then the catalyst be added; this is considered regular addition. Reverse addition adds the catalyst to the flask before the dehydrozingerone and sodium formate.

3. Quality of the reagents: Assuming that the student sample synthesized in the first stage of the reaction is poor, we tested the effect of the impure product under our best set of conditions to find how severely the reaction would be affected. Assuming variables outside of the student's control, such as a contaminated sodium formate sample or an older catalyst, we studied how these factors affected the final product.

4. Reflux time: To find out just how fast our reaction was actually proceeding, we reduced the reflux time to at least half of the original one hour. This serves two purposes: to show the speed of our reaction and to give an estimate of how pure the compound is. More details on the determination of purity are given in the discussion of NMR spectroscopy below.

There are two purposes to testing these variables, and both go hand in hand: finding what causes the reaction to fail, and determining what set of conditions pushes the reaction forward. Knowing which reagents cause the reaction to fail to any degree, we can narrow down the amount of work needed to redesign this experiment. These changes are discussed in the Results and Discussion sections.

Determining purity – NMR spectroscopy

NMR spectroscopy was used to quantify the hydrogens within the samples. The spectrum of dehydrozingerone was taken before hydrogenation, and the spectrum of zingerone was taken after hydrogenation; sample spectra for both are shown in Fig. 5. When the reaction is stopped before completion, the spectrum shows a mixture of the peaks of the two compounds. When dehydrozingerone was hydrogenated, the two peaks labeled C and B shifted significantly upfield, while the hydrogen peaks A of both compounds are very similar in location. Using these peaks, we can find the percent conversion by integrating the areas under them: dividing the area of the zingerone peaks by the total area gives the percent conversion. A mixture of the two compounds in one NMR spectrum is shown in Fig. 6.
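The percent-conversion calculation described above is a simple ratio of integrated peak areas; a minimal sketch with placeholder integrals, not measured values:

```python
# Percent conversion from NMR integrals: zingerone peak area divided by
# the total (zingerone + dehydrozingerone) area. Values are placeholders.
area_zingerone = 9.1
area_dehydrozingerone = 0.9

conversion = area_zingerone / (area_zingerone + area_dehydrozingerone)
print(f"Percent conversion: {conversion:.1%}")   # 91.0% with these inputs
```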



Figure 5. Sample spectra for dehydrozingerone (top) and zingerone (bottom)

Figure 6. Mixture spectra of dehydrozingerone and zingerone


Results

Stage one of the synthesis showed a relationship between reflux time and percent yield: the longer the reaction was left to reflux, the higher the percent yield. We also found that the percent yield decreased when extraction or silica gel filtration steps were added to remove an impurity, and neither removed the yellow color of dehydrozingerone (Table 1).

Table 1. Data for the first stage of the synthesis

Time refluxed | Recrystallization method | Percent yield | Yellow color present?
1 hour | Ice bath | 71.58% | Yes
30 minutes | Ice bath | 50.87% | Yes
30 minutes | Extraction, then ice bath | 35.75% | Yes
15 minutes | Silica gel filtration, then ice bath | 15.56% | Yes

Stage two of the synthesis worked under multiple conditions; however, we found that strongly acidic and weakly basic conditions were the most effective for converting dehydrozingerone to zingerone. The pre-reduced palladium-on-alumina catalyst was slightly more effective than the non-reduced catalyst, but the results were not far apart (Table 2).

Table 2. Data for stage two: pH conditions and catalyst effects

Solvent used | Conditions | Catalyst variant | Percent conversion
0.25 M H2SO4 in methanol | Acidic | Regular | 99.99%
0.25 M H2SO4 in methanol | Acidic | Pre-reduced | 77.78%
0.25 M KOH in methanol | Basic | Regular | 78.26%
0.25 M KOH in methanol | Basic | Pre-reduced | 81.81%
Sat. K2CO3 in methanol | Weakly basic | Regular | 90.90%
Sat. K2CO3 in methanol | Weakly basic | Pre-reduced | 97.56%
Pure methanol | Neutral | Regular | 69.56%
Pure methanol | Neutral | Pre-reduced | 80.00%

Using the strongly acidic conditions described above, we found that the order of addition is significant as well: reverse addition lowers the percent conversion by a very noticeable margin (Table 3).

Table 3. Data for stage two: original vs. reverse addition

Order of addition | Catalyst used | Percent conversion
Original | Regular | 99.99%
Original | Pre-reduced | 77.78%
Reversed | Regular | 52.38%
Reversed | Pre-reduced | 69.23%



Using weakly basic conditions with the regular palladium-on-alumina catalyst and regular addition, the reflux time was tested. We also used several poor-quality reagents to see their effects on the purity (Table 4).

Table 4. Data for stage two: effects of reflux time and of low-quality reagents

Time refluxed | Percent conversion
30 minutes | 90.90%
15 minutes | 78.95%
5 minutes | 59.09%

Altered reagent | Percent conversion
Poor dehydrozingerone | 78.94%
Older regular catalyst | 85.00%
Dirty sodium formate | 72.00%

Discussion

Throughout this research it was important to keep in mind the common mistakes that undergraduate students might make while performing a chemical reaction in an organic chemistry lab; this was the driving force behind all of our testing methods and alterations. Two major points can be concluded from stage one. The first is that the time of reflux matters: the original procedure calls for a one-hour reflux, and if a student shortcuts the reaction, the yield will suffer; the more the time is cut, the more drastic the yield loss. If the yield is too low, the student may not be able to proceed to the second stage. Interestingly, all four dehydrozingerone samples prepared had very high purity; only the yield was affected. The second major point concerns the extraction and silica gel filtration steps added to remove the impurities believed to cause the yellow color: both were unsuccessful in removing the color, but as experimentation proceeded to the second stage, we found that the reaction works even with the yellow color present. Therefore, silica gel filtration and extraction are unnecessary, and removing them streamlines the procedure.

The major part of this research was devoted to stage two. Of all the pH conditions tested, the most successful were the acidic and weakly basic ones; the least effective was the neutral condition. However, the results for the acidic conditions may require more testing: Table 3 shows that the pre-reduced catalyst was more effective than the non-reduced catalyst in all cases except the acidic condition, and when the order of addition was reversed for the same conditions, the pre-reduced catalyst was again more effective than the regular catalyst. Weakly basic conditions worked well, and we chose them as our best set of conditions because of the consistency of the results. Although the pre-reduced catalyst was more efficient than the non-reduced catalyst, the margin was nearly negligible. All of these tests were performed with a thirty-minute reflux, and all gave over fifty percent conversion at only half of the original reflux time; the regular catalyst should work as well as the pre-reduced catalyst if the full reflux takes place. Given the negligible difference between the two catalysts, and the fact that the pre-reduced catalyst



is significantly more expensive, we can avoid the pre-reduced variant and still obtain good results. Reverse addition decreased the yield significantly, but the yield was still acceptable: some product was obtained rather than none at all. Likewise, the use of poor reagents was unable to stop the reaction, even with an impure sample of dehydrozingerone, a contaminated hydrogen source, or an old catalyst; the percent conversion remained very good for only half the reflux time. Using our best set of conditions, the reflux time was reduced to less than half the original, and the yield was still very good given the actual time spent refluxing. The five-minute reflux under our best set of conditions gave a significantly higher percent conversion than any student achieved in the spring 2016 semester with the full one-hour reflux.

Conclusion

As for why this synthesis failed, and more specifically its second stage, we found no definite cause within the reaction conditions. However, much more is now known about the synthesis and about the conditions under which it proceeds fastest. In addition, steps that were deemed unnecessary were removed from the procedure, making it more time efficient and more reliable, since less error accumulates over the combined steps.

Acknowledgements

This work was funded by the Manhattan College School of Science Summer Research Scholars Program. The research benefited from the assistance of Dr. James McCullagh, who mentored the work, and Dr. Joseph Capitani, who performed calculations on the pyrylium salt.

References

[1] Shingo Yogosawa, Yasumasa Yamada, Shusuke Yasuda, Qi Sun, Kaori Takizawa, and Toshiyuki Sakai, "Dehydrozingerone, a Structural Analogue of Curcumin, Induces Cell-Cycle Arrest at the G2/M Phase and Accumulates Intracellular ROS in HT-29 Human Colon Cancer Cells," J. Nat. Prod., 75(12), 2088-2093 (2012)
[2] Sang-Guk Shin, Ji Young Kim, Hae Young Chung, and Ji-Cheon Jeong, "Zingerone as an Antioxidant against Peroxynitrite," J. Agric. Food Chem., 53(19), 7617-7622 (2005)
[3] Hiroshi Nomura, Method of preparing "zingerone" (methyl 3-methoxy-4-hydroxyphenyl ethyl ketone), United States Patent Office, Apr. 23, 1918
[4] Leverett R. Smith, "Rheosmin ("Raspberry Ketone") and Zingerone, and Their Preparation by Crossed Aldol-Catalytic Hydrogenation Sequences," The Chemical Educator, Contra Costa College, San Pablo, Aug. 5, 1996


Eliminating aqueous chromium(VI) with renewable technology

Analisse Rosario∗

Department of Chemistry and Biochemistry, Manhattan College

Abstract. Chromium(VI) is a carcinogen found in groundwater as a result of various chemical processes and industries. In contrast, its reduced form, Cr(III), is found in most vitamin supplements. Ascorbic acid, or vitamin C, is an efficient reagent for reducing Cr(VI) to Cr(III), but it becomes unrecoverable in water because of its solubility. This paper describes the use of granulated activated carbon to render ascorbic acid insoluble in water as it reduces Cr(VI). The resulting complex can then be recycled using glutathione, becoming even more effective at reducing Cr(VI).

∗Research mentored by John Regan, Ph.D.

Introduction

Hexavalent chromium is a carcinogen found in groundwater, produced by factories and industries (Dayan and Paine, 2001; Owlad et al., 2008). Chromates occur naturally in groundwater but are also by-products of processes including welding, electroplating, and the making of stainless steel, cement, leather dyes, paints, and wood preservatives. Cr(VI) causes severe health effects such as dermatitis, skin ulcers, sinonasal cancer, DNA damage, respiratory damage, and similar defects (Dayan and Paine, 2001). According to the Environmental Protection Agency (EPA), even trace concentrations of Cr(VI) are hazardous; 0.1 ppm (0.1 mg/L, or 0.862 µM) is the recommended concentration limit for safe drinking water. In contrast, the reduced form of Cr(VI), Cr(III), is essential for the maintenance of metabolic processes in humans and is commonly added to vitamin supplements sold over the counter in pharmacies (Vincent, 2000). Water treatment industries have been continuously searching for the most effective methods of removing Cr(VI) from groundwater (Owlad et al., 2008; Ponder et al., 2000; Hawley et al., 2004). The three most common methods are sorption, precipitation, and chemical oxidation-reduction. Sorption is the process in which Cr, in both of its forms, is absorbed onto an absorbent material; commonly used absorbents include granulated activated carbons and silica gels, but the contaminants then make the absorbent hazardous as well. Precipitation is the process of adding oxides to Cr-containing water; insoluble Cr oxides form and are then extracted. The oxidation-reduction process is a chemical reaction between Cr(VI) and a reducing agent that produces Cr(III) and the oxidized form of the reducing agent. Common reducing agents include lead metal and iron metal; however, the oxidized forms of these metals are hazardous if ingested (Gheju, 2011; Ponder et al., 2000).

Background Another known reagent that can be used to reduce Cr(VI) is ascorbic acid. The following Scheme 1 is the reduction of Cr(VI) using ascorbic acid in water (Kazmi and Rahman,1997; Bor∗

Research mentored by John Regan, Ph.D.



Scheme 1

The benefit of using ascorbic acid as the reducing agent, as opposed to metals, is that ascorbic acid, otherwise known as vitamin C, has many positive health effects on humans and is needed to maintain good health. The oxidized form of ascorbic acid, dehydroascorbic acid, and its decomposition by-products have unknown effects on the environment. Moreover, because ascorbic acid is very polar and very soluble, adding it alone to groundwater to reduce Cr(VI) would simply lose it to the water. To prevent losing ascorbic acid in the Cr(VI)-containing water, this project's goal is to combine the methods of sorption and oxidation-reduction by using the adsorbent GAC, or granulated activated carbon, as a solid support that holds the ascorbic acid while it reduces Cr(VI) (Weber, 1967; Park et al., 2016). Scheme 2 shows the first step of loading ascorbic acid onto GAC, forming an insoluble form of ascorbic acid.

Scheme 2

As shown in Scheme 3, the second step involved using this complex to reduce Cr(VI). After the reduction of Cr(VI), dehydroascorbic acid remains on the GAC. This project also includes the recycling of this complex, which entails the reduction of dehydroascorbic acid back to ascorbic acid so that it can be used again (Scheme 4) (Borsook, 1937). This is done by adding a reducing agent that can reduce dehydroascorbic acid and is also not dangerous to human health, such as glutathione. Glutathione is a naturally occurring antioxidant found in plants and animals.



Scheme 3


Scheme 4

Experimental Procedure

Loading GAC with ascorbic acid

Five grams of GAC was weighed and placed in a beaker. In a separate beaker, five grams of ascorbic acid was dissolved in 20 mL of tap water. The GAC was then added to the beaker containing the aqueous ascorbic acid, and the beaker was covered with parafilm and left sitting overnight. The mixture was filtered using a Hirsch funnel and washed with approximately 400 mL of tap water. The loaded material (GAC-AAC) was air dried and stored in a beaker. To determine the amount of ascorbic acid loaded onto the GAC, the GAC-AAC was weighed in a preweighed, dry beaker, heated at 80 °C overnight, and reweighed.

Reducing potassium chromate

0.5 g of GAC-AAC was added to a 125 mL Erlenmeyer flask along with 10 mL of 200 µM potassium chromate. The flask was sealed with parafilm, placed on a Tek-Tator variable rotator, and rotated for 5 minutes at 110 rpm. Using a pipette, a sample of the water was collected and sealed into a test tube with parafilm. This was repeated until 40 mL of potassium chromate had been added, i.e., four samples had been taken. To determine the concentration of Cr(VI), a calibration curve was constructed using Beer's law: the absorbances of known concentrations of potassium chromate were measured and fit to a straight line, and the equation of the line was used to convert between absorbance and concentration. Absorbances were obtained with an Agilent 8453 spectrophotometer.

Recycling GAC-AAC

The dehydroascorbic acid-GAC complex used for the reduction of chromate was quickly rinsed with about 400 mL of tap water using the Hirsch funnel. The material was then placed in a round bottom flask and 1 mL of 0.1 M HCl was added. The GAC-AAC was left soaking for 30 minutes, and about 0.174 g of glutathione was then added. After another 30 minutes, the recycled GAC-AAC was filtered, rinsed with 400 mL of tap water, and air dried. Reducing chromate with the recycled material follows the same method as above.
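As an illustration of the calibration step, the sketch below (not the authors' code; the standard concentrations and absorbance values are hypothetical placeholders) fits the Beer's-law line and inverts it to convert a measured absorbance into a Cr(VI) concentration:

import numpy as np

# Hypothetical calibration standards: K2CrO4 concentration (µM) and measured absorbance.
conc = np.array([25.0, 50.0, 100.0, 150.0, 200.0])
absorbance = np.array([0.045, 0.090, 0.182, 0.270, 0.363])

# Least-squares fit of A = m*c + b; Beer's law predicts an intercept b near zero.
m, b = np.polyfit(conc, absorbance, 1)

def to_concentration(a):
    # Convert a measured absorbance back to a Cr(VI) concentration in µM.
    return (a - b) / m

print(f"slope = {m:.5f} per µM, intercept = {b:.5f}")
print(f"sample with A = 0.016 -> {to_concentration(0.016):.2f} µM")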

Results and Discussion

When the ascorbic acid was loaded onto GAC and the complex dried, approximately 37 mg of ascorbic acid was found to be loaded per gram of GAC. Maximum loading occurred at 24 hours; soaking the GAC for longer than 24 hours showed no difference in performance. It is important to note the possibility of ascorbic acid binding onto the GAC in an unproductive orientation. Carbons 2 and 3 are the ones used for the reduction of chromate. Ideally, carbons 5 and 6 bind onto the GAC, exposing carbons 2 and 3 to the chromate solution; it is, however, possible that carbons 2 and 3 bind onto the GAC instead, exposing the unreactive carbons 5 and 6.

To reduce chromate with GAC-AAC, 10 mL of 200 µM potassium chromate was added to the flask containing the GAC-AAC and stirred on the rotator for 5 minutes. This process was continued until the concentration of chromate exceeded 8.6 µM, ten times the EPA's recommended limit. This performance was then compared with that of GAC alone: GAC-AAC reduces Cr(VI) about 40 times more effectively than GAC alone. The results are shown in Table 1.

Stirring the GAC-AAC for longer periods of time increases the amount of chromate that can be reduced (Fig. 1). To reach a Cr(VI) concentration of 8.6 µM, 40 mL of chromate was required with 5-minute intervals between additions, whereas a 30-minute exposure time per aliquot required 90 mL of chromate. This suggests that the longer the chromate is left stirring with the GAC-AAC, the more time it has to travel deeper into the GAC and interact with the ascorbic acid there. These data suggest that the reduction does not occur solely on the surface of the GAC-AAC but depends on how deeply the chromate penetrates the GAC pores. Even without stirring, over 24 hours (Fig. 2) GAC-AAC is able to reduce chromate to very low concentrations. It can therefore be concluded that diffusion into the pores plays a central role.



Table 1: Concentrations of Cr(VI) using GAC and GAC-AAC.

Material    Volume of 200 µM Cr(VI)    Concentration of Cr(VI)
GAC         5 mL                       27.5 µM
GAC         10 mL                      29.8 µM
GAC         20 mL                      51.02 µM
GAC-AAC     10 mL                      5.04 µM
GAC-AAC     20 mL                      6.14 µM
GAC-AAC     30 mL                      8.08 µM
GAC-AAC     40 mL                      10.6 µM

Figure 1: Changing GAC-AAC rate. [Volume of 200 µM chromate reduced (mL) vs. time allowed per 10 mL chromate addition (mins); the plot carries the annotation "GAC-AAC is 40 times more effective than GAC alone."]

Experiments testing the effectiveness of GAC-AAC at different pH values have also been done and showed that GAC-AAC performs more efficiently under acidic conditions. Using 200 µM potassium chromate at pH 3, 6, and 8, the volume of chromate solution that could be reduced to 8.6 µM was 60 mL, 35 mL, and 25 mL, respectively.

To recycle the GAC-AAC, the first step was to make sure the environment inside the GAC-AAC was acidic; thus hydrochloric acid was added and the material was soaked. Dehydroascorbic acid decomposes at neutral pH, and an acidic environment slows and prevents this decomposition. After 30 minutes the GAC-AAC was acidic, and glutathione was then added. Glutathione is an appropriate reducing agent because it is a common antioxidant and is efficient at reducing dehydroascorbic acid, and it poses no health risk to humans; it was nevertheless thoroughly washed off. The reaction between glutathione and dehydroascorbic acid is essentially instantaneous, but soaking the used GAC-AAC with glutathione ensures that it travels deep into the material to reach all of the dehydroascorbic acid. As shown in Table 2, recycled GAC-AAC reduced chromate more effectively than fresh GAC-AAC. This suggests that glutathione itself takes part in the reduction of chromate, although it is not known to reduce chromate. The chemistry is unknown and is being investigated.


Figure 2: GAC-AAC time curve. [Chromate concentration (µM) vs. time (hours).]

Table 2: Recycling GAC-AAC.

Volume of 200 µM Cr(VI)    Concentration of Cr(VI)
20 mL                      3.90 µM
30 mL                      2.17 µM
40 mL                      6.07 µM
50 mL                      9.88 µM


Another ongoing experiment involves recycling the used GAC-AAC before the effluent reaches 8.6 µM. Recycling before this concentration appears to work more effectively than recycling after it, because there is less dehydroascorbic acid to reduce and glutathione seems to play a more prominent role in the reduction of chromate. The GAC-AAC appears to be continuously recyclable, but further investigation is needed (Fig. 3).

Figure 3: Recycling GAC-AAC before breakthrough. [Cr(VI) concentration (µM) vs. number of recyclings: 6.61, 1.88, and 0.73 µM after 0, 1, and 2 recyclings.]



Conclusion

Ascorbic acid is an effective reagent for the reduction of Cr(VI) to Cr(III). To overcome its high water solubility, ascorbic acid can be adsorbed onto granulated activated carbon. Every half gram of this complex can reduce approximately 35 mL of 200 µM potassium chromate before the Cr(VI) concentration exceeds 8.6 µM, and roughly the same volume after each recycling. GAC alone, by comparison, could only reduce Cr(VI) to a minimum of 27.5 µM, showing that GAC-AAC is far more effective. The complex can then be recycled using an antioxidant reagent, glutathione. In the future, our goals are to reduce Cr(VI) concentrations below the EPA's limit of 0.862 µM, to determine why glutathione enhances the reduction of Cr(VI), and to recycle the material continuously. Another important future project is to optimize the loading of ascorbic acid onto GAC. One approach would be to use a coconut-based GAC, which adsorbs polar molecules. However, as mentioned earlier, the reactive carbons 2 and 3 of ascorbic acid may bind inside the GAC pores and thus be unavailable to react with Cr(VI). A way around this would be to use a modified ascorbic acid derivative, 5,6-O-isopropylidene-ascorbic acid (Scheme 5). This derivative has carbons 5 and 6 capped by a much less polar group; since GAC prefers to bind non-polar moieties, the reactive carbons 2 and 3 would remain free to react.

Scheme 5

Acknowledgements

I would like to thank the Manhattan College School of Science for the opportunity to carry out this research project. I would also like to thank alumnus Robert Ryan '71 for financial support. The Department of Chemistry and Biochemistry supplied the equipment for this research.

References

Borsook, H.; Davenport, H. W.; Jeffreys, C. E. P.; Warner, R. C. The Oxidation of Ascorbic Acid and its Reduction in Vitro and in Vivo. J. Biol. Chem. 1937, 117, 237-279.



Dayan, A. D.; Paine, A. J. Mechanisms of Cr Toxicity, Carcinogenicity and Allergenicity: Review of the Literature from 1985 to 2000. Human and Experimental Toxicology 2001, 20, 439-451.
Gheju, M. Hexavalent Cr Reduction with Zero-Valent Iron (ZVI) in Aquatic Systems: A Review. Water Air Soil Pollut. 2011, 222, 103-148.
Hawley, E. L.; Deeb, R. A.; Kavanaugh, M. C.; Jacobs, J. A. Treatment Technologies for Chromium(VI). In Guertin, J.; Jacobs, J. A.; Avakian, C. P., Eds. Chromium(VI) Handbook. Boca Raton: CRC Press, 2004; pp. 273-308.
Kazmi, S. A.; Rahman, M. U. Kinetics and Mechanism of Conversion of Carcinogen Hexavalent Cr(VI) to Cr(III) by Reduction with Ascorbate. J. Chem. Soc. Pak. 1997, 19, No. 3.
Owlad, M.; Aroua, M. K.; Daud, W. A. W.; Baroutian, S. Removal of Hexavalent Cr-Contaminated Water and Wastewater: A Review. Springer Science + Business Media B.V. 2008.
Park, J.; Ok, Y. S.; Kim, S.; Cho, J.; Heo, J.; Delaune, R. D.; Seo, D. Competitive Adsorption of Heavy Metals onto Sesame Straw Biochar in Aqueous Solutions. Chemosphere 2016, 142, 77-83.
Ponder, S. M.; Darab, J. G.; Mallouk, T. E. Remediation of Cr(VI) and Pb(II) Aqueous Solutions Using Supported, Nanoscale Zero-Valent Iron. Environmental Science and Technology 2000, 34, 2564-2569.
Vincent, J. B. The Biochemistry of Chromium. Journal of Nutrition 2000, 130, 715-718.
Weber, W. J., Jr. Sorption from Solution by Porous Carbon. In Principles and Applications of Water Chemistry; John Wiley and Sons: New York, NY, 1967; pp. 89-126.


Analysis of HDD swap vs remote memory swap for Virtual Machines and Linux Containers

Steven Romero∗ and Emmanuel Sanchez∗

Department of Computer Science, Manhattan College

Abstract. Virtual Machines (VMs) and Linux Containers (LCs) are often used in cloud computing datacenters. These software computers frequently demand more memory than the host Physical Machine (PM) can provide, causing the host PM to use Hard Disk Drive (HDD) swap in order to increase memory capacity. However, using the HDD as swap causes significant performance degradation, so remote memory swap is often implemented to address this issue. This paper presents updated experiments on the latest Linux kernel version to test the efficacy of remote memory for VMs. Additionally, experimental evaluations are performed for LCs with remote memory, which have not been previously examined.

Introduction

Virtual Machines (VMs) and Linux Containers (LCs) are essentially computers made entirely of software. They are processes that run on a host physical machine (PM) and rely on the host's hardware for memory and processing power. Both VMs and LCs are heavily used in cloud computing datacenters [1, 2]. Datacenter owners host and lease multiple VMs and containers on different PMs, and all VMs and containers in a datacenter share physical resources such as memory, processing, networking, and storage. In these datacenters, the VMs and LCs often need sudden, large amounts of memory to run user-requested applications such as scientific calculations and data mining.

Since VMs and containers are processes, they can be instantiated with more memory (i.e., virtual memory) than is physically available in the host PM's RAM. This is called overcommitting, and it is what allows economy of scale to keep costs low for datacenters [1, 2]. Overcommitted memory can result in memory requests that exceed the host PM's memory limit. When more memory than available is requested, the machine by default supplements the shortfall by using some space on the HDD as working memory. The problem with this supplementary memory is that the performance of all processes worsens significantly, which implies that a better solution is needed for when a host PM runs out of memory.

Similar research publications have considered this issue previously [3, 4]. It was observed that a number of PMs in cloud computing datacenters are usually underloaded with regard to memory usage; as a result, enough memory was available in the cluster to supplement the requirements of a machine at its memory limits due to VMs. A prototype remote memory framework was proposed, with which PMs that needed swap space could instead use the RAM of another machine.

Research mentored by Kashifuddin Qazi, Ph.D.



This resulted in increased overall performance of the machines. However, most of the experimental analyses performed to date are outdated and no longer fully applicable, because the Linux kernel has been updated since those tests were done. Since the kernel controls all of the operating system's communication with hardware, it is important to see whether these changes alter the results of the previous tests in any significant way. More importantly, no experimental analysis exists for remote swap space with LCs. To enable remote memory with LCs, it is critical to measure and quantify their performance within a remote memory framework.

Experimental Setup and Evaluation

A remote memory framework was created for use with VMs and LCs. The framework consisted of a ramdisk [5] created on one PM, to be used by another PM as swap space. The ramdisk was shared across the cluster using a Network Block Device (NBD) server [6]. On the receiving PM, the NBD client was used to mount the ramdisk, and the visible ramdisk was then set as the swap device. All tests were performed without modifying the Linux kernel or the code for VMs or LCs.

Two PMs were used for all tests, both with the same specifications: Intel i5 processor, 8 GB RAM, running Ubuntu 16.04, kernel version 4.4.0-34-generic. The machines were connected to each other through a 1 Gbps switch. The KVM hypervisor was used to create and manage the VMs; Docker was used to create the LCs. Both the VMs and the LCs were created with the same specifications: 8 GB RAM, running Ubuntu 16.04, kernel version 4.4.0-34-generic.

To stress the memory used by the VMs and LCs and replicate real-world scenarios, two memory benchmarks were run inside the VMs/LCs: the Memory BandWidth tool (MBW) and Sysbench. Three iterations were performed with each benchmark, on both VMs and LCs. The benchmarks were systematically adjusted so that the first iteration used 7 GB of memory, the second 5 GB, and the third 2 GB. For each iteration, cgroup was used to gradually restrict the amount of memory available in local RAM while increasing the amount of memory served by the remote swap device, and performance was recorded when 25%, 50%, and 75% of the required memory was swapped remotely. The entire experiment was then repeated using HDD swap instead of remote swap.
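The following is a minimal sketch, not the authors' scripts, of how such a ramdisk-over-NBD swap device can be assembled; the hostname, port, device sizes, and the old-style positional nbd-server invocation are assumptions (newer nbd-server releases read an exports configuration file instead):

import subprocess

def sh(cmd):
    # Run a shell command, raising an exception if it fails.
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def supplier_side():
    # Create a 4 GB ramdisk (/dev/ram0; rd_size is in KiB) and export it over NBD.
    sh("modprobe brd rd_nr=1 rd_size=4194304")
    sh("nbd-server 9999 /dev/ram0")

def client_side(supplier_ip="192.0.2.10"):
    # Attach the exported ramdisk and turn it into the active swap device.
    sh("modprobe nbd")
    sh(f"nbd-client {supplier_ip} 9999 /dev/nbd0")
    sh("mkswap /dev/nbd0")
    sh("swapon /dev/nbd0")

def restrict_local_ram(limit_bytes, group="bench"):
    # cgroup v1 memory controller: cap the local RAM available to the benchmark's
    # process group, forcing the excess onto the (remote) swap device.
    sh(f"mkdir -p /sys/fs/cgroup/memory/{group}")
    sh(f"echo {limit_bytes} > /sys/fs/cgroup/memory/{group}/memory.limit_in_bytes")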



Figs. 1 and 2 show the results of running the MBW benchmark in VMs and LCs, respectively. The performance of MBW is quantified in MB per second, the speed of execution, so higher values indicate better performance. The best performance improvements for remote swap are seen for memory requirements around 7 GB: for VMs, when 25% of the required 7 GB is remotely swapped, the speed is around 40 MBps, compared to 30 MBps using HDD swap. Around 2 GB memory requirements, HDD swap actually outperforms remote swap. However, as demonstrated by Fig. 2, for LCs remote memory performance is consistently higher than with HDD swap.

Figure 1. VM performance with MBW (HDD vs Remote Swap)

Figure 2. LC performance with MBW (HDD vs Remote Swap)

Figure 3. VM performance with Sysbench (HDD vs Remote Swap)

Figure 4. LC performance with Sysbench (HDD vs Remote Swap)

Figs. 3 and 4 provide the results for the Sysbench benchmark for VMs and LCs respectively.



The performance of Sysbench is quantified in seconds, the time taken for a fixed set of benchmark operations to complete, so lower values indicate better performance. The pattern observed in these results is similar to the earlier results: for memory usage around 7 GB, remote swap continues to outperform HDD swap. For this benchmark, remote swap performed better overall on VMs than on LCs.

Conclusions and Future Work

The experimental results demonstrate that remote memory swap gave an overall average performance increase of 34% over HDD swap. The majority of these improvements were found when dealing with higher memory loads of ∼7 GB; for lower memory requirements, the performance improvement was insignificant. Remote swap on LCs showed improvements of up to 104%, and an average performance increase of 42% over HDD swap. Overall, it can be concluded that under the observed setup, remote swap performs better on LCs than on VMs.

In summary, the remote memory mechanisms used in these experiments improve VM/LC performance for heavy memory requirements in most scenarios. However, certain disparities exist, possibly due to the kernel's handling of NBD and the creation of network packets. Further investigations are being performed to isolate the bottlenecks in the kernel and, if required, develop a modified protocol to enhance remote memory.

Acknowledgement

The authors would like to acknowledge the School of Science Summer Research Scholar Program and Robert Ryan '71 for financial support. The authors would also like to acknowledge Dr. Kashifuddin Qazi for his continued guidance and mentoring.

References

[1] B. Rimal, E. Choi and I. Lumb, "A taxonomy and survey of cloud computing systems," in INC, IMS and IDC, Seoul (2009)
[2] K. Qazi, Y. Li and A. Sohn, "PoWER: prediction of workload for energy efficient relocation of virtual machines," in 4th Annual Symposium on Cloud Computing, Santa Clara (2013)
[3] D. Williams, H. Jamjoom, Y. Liu and H. Weatherspoon, "Overdriver: Handling memory overload in an oversubscribed cloud," ACM SIGPLAN Notices, vol. 46, no. 7, pp. 205-216 (2011)
[4] M. R. Hines and K. Gopalan, "MemX: supporting large memory workloads in Xen virtual machines," in Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing, New York (2007)
[5] "Linux: Create RAM Disk Filesystem," [Online]. http://www.digitalinternals.com/unix/linux-create-ram-disk-filesystem/438/ [Accessed May 2016]
[6] "The Network Block Device," [Online]. http://www.linuxjournal.com/article/3778 [Accessed May 2016]


Autonomous remote memory for Virtual Machines and Linux Containers

Emmanuel Sanchez∗ and Steven Romero∗

Department of Computer Science, Manhattan College

Abstract. In memory-overcommitted datacenters, heavy memory requirements of Virtual Machines (VMs) and Linux Containers (LCs) can be supplemented by other Physical Machines (PMs) in the cluster. Such remote memory from supplier PMs has been shown to benefit performance compared to other existing solutions. However, it is critical to intelligently select the PMs that partake in the remote memory system. Setting up remote memory only after a VM or container sees a surge in memory use results in extended periods of performance degradation. Additionally, choosing a remote supplier PM simply on the basis of its current memory usage could lead to catastrophic failures if that PM later develops heavy memory requirements itself. In this research, a framework is proposed that uses prediction mechanisms to mitigate these two problems. Experimental results on a simulation demonstrate the efficacy of the framework in maintaining the overall health of the cluster and reducing performance degradation of the VMs and containers.

Introduction

Virtual Machines (VMs) and Linux Containers are the core technologies that enable cloud computing. They are software systems that emulate actual computers and are hosted on Physical Machines (PMs); a PM that hosts VMs or containers sees them as processes in memory. The benefits include the ability to run multiple different operating systems on a single PM (for VMs; containers can only host Linux operating systems), easy access, and simple recovery if failure ever occurs. For these reasons, many datacenters employ VMs and containers: datacenter owners host and lease multiple VMs and containers on different PMs, and all VMs and containers in a datacenter share physical resources such as memory, processing, networking, and storage.

Since VMs and containers are processes, they can be instantiated with more memory (virtual memory) than is physically available in the host PM's RAM. This is called overcommitting, and it is what allows economy of scale to keep costs low for datacenters. Overcommitting also makes VMs and containers useful for running memory-heavy applications, such as scientific calculations. However, sudden and excessive bursts in memory use by VMs and containers could result in memory overloads on the host PM [1]; if not handled, this could cause the VM or container to crash for lack of memory. There are generally three accepted methods to avoid such failure. The first is to migrate the memory-intensive VM or container to another PM with enough memory to accommodate it. The second is to supplement the host PM's RAM with hard disk drive (HDD) swap space. The third is to use memory from another PM in the cluster to supplement main memory (remote memory) [2, 3].

Research mentored by Kashifuddin Qazi, Ph.D.



The framework proposed in this paper aims to mitigate some of the existing issues with remote memory and make it attractive for practical use.

Background

The use of remote memory for VMs and containers has been shown to be useful in certain scenarios [3]. Remote memory is usually more efficient than HDD swap space, since modern networks are much faster than traditional HDDs. To achieve remote memory, a remote supplier PM is needed that pledges to supplement the overloaded PM's memory requirements. It is critical that a suitable remote supplier PM be selected; choosing a supplier wrongly could cause catastrophic failures in the PM cluster. For example, an overloading PM could select a supplier PM that does not have enough memory to share; in such a scenario, both the requesting PM and the supplier PM could stay overloaded. Depending on how the cluster is set up, this could trigger a chain of events that slows down operations in the cluster, ultimately rendering it inoperable.

Other research projects on remote memory have selected the supplier PM using a reactive approach: as soon as a PM overloads, a remote connection is instantly set up to another PM that has enough memory at the current moment [3]. There are two major issues with this approach. First, once a VM or container sees a burst in memory usage and causes an overload, it is already too late to address the issue. Second, a supplier PM may have the necessary RAM at the current moment, but there is no guarantee that it will continue to have enough memory throughout the period of overload.

The framework proposed in this paper utilizes a prediction mechanism based on the Fast Fourier Transform (FFT), similar to references [1] and [4], to address the aforementioned issues with remote memory. FFT has previously been used successfully to predict VM/container loads for other, unrelated problems [1]. Specifically, the framework ensures that remote memory is set up proactively. First, based on prediction results, VMs with a potential to overload on memory in the near future are identified. Second, again using prediction results, candidate remote supplier PMs are chosen that should have sufficient memory available throughout the entire duration of the overload. These mechanisms reduce both the performance degradation due to a VM overloading and the possibility of cluster-wide failure due to supplier PMs overloading.

Methodology

The framework was created in Python, along with some modules programmed in shell script. As shown in Fig. 1, the framework consists of three phases running on each PM in the cluster. The first phase periodically predicts the memory loads of the VMs and containers ten minutes into the future using FFT. If a potential memory overload is detected, the second phase initiates: the potentially overloading PM broadcasts a request for memory to the other PMs in the cluster. When a PM receives a request for memory, it checks whether it has enough memory to completely accommodate the memory-hungry VM or container throughout the memory-intensive period, or just enough memory to supplement the overloading PM.



Figure 1. Framework Overview

In the first case, the PM offers to migrate the VM or container to itself, so that the broadcasting PM is no longer stressed for memory. In the second case, the PM responds to the broadcasting PM with the amount of memory it can supplement. On receiving such responses from the various PMs in the cluster, the broadcasting PM enters the third phase. During this phase, preference is given to a PM that is ready to completely accommodate the memory-hungry VM or container, and migration is set up to the new host PM. If no such PM exists, then one of the other PMs offering to supplement memory is chosen as supplier, and remote memory is established between the overloading PM and the supplier PM. Further, if no PMs respond, the framework falls back to HDD swap to supplement the memory.
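The first phase's predictor can be illustrated with the following minimal sketch (not the authors' framework; the sampling interval, window length, and number of retained harmonics are assumptions). It fits the dominant periodic components of a memory-usage history with an FFT and extrapolates them, plus a linear trend, a few steps ahead:

import numpy as np

def fft_forecast(history, steps_ahead, n_freqs=8):
    # Extrapolate a uniformly sampled series by keeping its n_freqs strongest harmonics.
    y = np.asarray(history, dtype=float)
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)        # remove linear drift before the FFT
    spec = np.fft.rfft(y - (slope * t + intercept))
    freqs = np.fft.rfftfreq(n)                    # frequencies in cycles per sample
    keep = np.argsort(np.abs(spec))[-n_freqs:]    # indices of the strongest harmonics
    future = np.arange(n, n + steps_ahead)
    pred = slope * future + intercept             # restore the trend
    for k in keep:
        amp, phase = np.abs(spec[k]) / n, np.angle(spec[k])
        scale = 1.0 if k == 0 or (n % 2 == 0 and k == n // 2) else 2.0
        pred = pred + scale * amp * np.cos(2 * np.pi * freqs[k] * future + phase)
    return pred

# With one sample per minute, the ten-minute look-ahead used by the framework would be:
# next_ten = fft_forecast(memory_usage_history, steps_ahead=10)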

Experimental Evaluations

In order to test the efficacy of the framework, a simulation of a PM cluster was set up. The cluster consisted of four PMs with 8 GB of RAM each, and every PM hosted two VMs with 8 GB of RAM per VM. To test the framework in a real-world scenario, NASA data traces were used to generate memory loads on the VMs [5]. Fig. 2 shows a sample of these traces, which record the number of clicks on the NASA website over a period of 28 days.

For comparison and analysis, the simulation was run separately using four mechanisms. The first is the previously mentioned reactive approach, which does not use a prediction method and simply chooses supplier PMs as and when needed, based on current memory usage. The second uses naïve prediction, averaging a few immediately preceding usage values. The third uses the proposed framework and prediction by FFT. Finally, the fourth uses a look-ahead method that makes optimal predictions; this last mechanism is impractical and included only for comparative study.



Figure 2. NASA Website Data Traces Sample

For each of the four mechanisms, seven parameters were recorded over the run of the simulation. Table 1 lists these parameters and their results for each of the four mechanisms; in all cases, lower numbers indicate better performance. It can be observed that the proposed FFT-based framework outperforms the reactive and average-based prediction methods for most of the parameters. As an example, with the framework there were 11 instances of overloads lasting longer than 10 minutes, compared to 40 and 74 instances for reactive and averaging, respectively, during the run of the experiment. Similarly, there were 3803 instances of two PMs simultaneously overloading with the framework, compared to 5535 and 7216 instances for reactive and averaging, respectively. The only parameter for which the framework did not clearly outperform was the number of instances when four PMs simultaneously overloaded; even there, the framework is comparable to averaging and superior to the reactive method.

Table 1. Comparison of Framework (FFT) vs. Averaging vs. Reactive vs. Optimal

Parameter                                        FFT     Averaging   Reactive   Optimal
Total Overloaded PMs                             9867    11765       14716      1472
Total Time Overloaded (minutes)                  4764    5564        7287       665
Overloads Lasting >10 mins                       11      40          74         0
Overloads Lasting <10 mins                       2351    3334        3456       51
Instances of 2 PMs Simultaneously Overloading    3803    5535        7216       65
Instances of 4 PMs Simultaneously Overloading    33      29          71         8



Conclusions and Future Work

This paper proposed an intelligent approach to selecting and setting up remote memory in clusters for VMs and containers. A framework utilizing FFT-based predictions was described and evaluated, reducing the chances of failures in the cluster when using remote memory. Based on the experimental results, prediction through averaging alone resulted in a 25% reduction in total PM overloads compared to the reactive method. This was further improved by using FFT, which achieved a 33% reduction in PM overloads compared to the reactive method. For overloads lasting longer than 10 minutes, using FFT resulted in an 85% reduction compared to the reactive method. The experimental results demonstrate that intelligent prediction substantially outperforms reactive methods when dealing with remote memory. Using the proposed framework, datacenters could avoid catastrophic failures in their clusters while allowing the VMs and containers to run important memory-heavy applications.

Further research will compare the FFT-based prediction used in the framework against other similar prediction mechanisms; various trade-offs, including time to compute and prediction accuracy, will be analyzed. Finally, the framework will be tested on a physical cluster.

As part of a larger ongoing project, remote memory can also be used to benefit live migration of VMs. Since the memory of a VM is distributed across multiple PMs, the VM could be live migrated from one PM to another PM that already hosts some of the VM's memory. Live migration then becomes a matter of switching the current host PM to be the remote supplier, and the current remote supplier to be the host PM. By reducing the amount of memory transferred over the network during the actual migration, this approach could potentially reduce the total time required for live migration.

Acknowledgement

The authors would like to acknowledge Robert Ryan '71 for financial support. Additionally, they are grateful to the Manhattan College School of Science Summer Research Scholars Program and Dr. Constantine Theodosiou for constant support. Finally, the authors would like to acknowledge Dr. Kashifuddin Qazi for continued guidance and mentoring.

References

[1] Z. Gong, X. Gu and J. Wilkes, "PRESS: Predictive elastic resource scaling for clouds," in International Conference on Network and Service Management, Canada (2010)
[2] "Linux: Create RAM Disk Filesystem," [Online]. http://www.digitalinternals.com/unix/linux-create-ram-disk-filesystem/438/ [Accessed May 2016]
[3] D. Williams, H. Jamjoom, Y. Liu and H. Weatherspoon, "Overdriver: Handling memory overload in an oversubscribed cloud," ACM SIGPLAN Notices, vol. 46, no. 7, pp. 205-216 (2011)
[4] K. Qazi, Y. Li and A. Sohn, "PoWER: prediction of workload for energy efficient relocation of virtual machines," in 4th Annual Symposium on Cloud Computing, Santa Clara (2013)



[5] M. Arlitt and C. Williamson, "Web Server Workload Characterization: The Search for Invariants," in SIGMETRICS Conference on the Measurement and Modeling of Computer Systems, Philadelphia (1996)


The shapes of ideal five junction comb polymers in two and three dimensions

John Stone∗

Department of Computer Science, Manhattan College

Abstract. This work investigated the shapes of eleven and fourteen branch five junction comb polymers in the ideal regime in two and three dimensions. A Monte Carlo growth technique was employed. The extrapolated property values obtained by computer simulation are in excellent agreement with the available theory. Polymers with a complete set of interior branches have a more symmetrical shape.

Introduction

In a set of previous papers [1-7] the shapes of comb polymers containing two, three or four junctions have been studied. In this paper these investigations were extended to examine ideal five junction comb polymers with either eleven or fourteen branches (see Fig. 1). In these structures four of the branches are internal (joining junctions) and either seven or ten are external. If m is the number of monomers in a branch and b is the number of branches, there are a total of N = bm + 1 units in these uniform combs.

Figure 1. Sketches of the 11 (left) and 14 (right) branched comb polymers discussed in this paper.

An overall polymer size can be measured by the mean-square radius of gyration, ⟨S²⟩, where ⟨ ⟩ denotes an average over the polymer configurations. It is well known [8] that for large polymers ⟨S²⟩ follows the scaling law

$$\langle S^2 \rangle = C\,(N-1)^{2\nu}. \qquad (1)$$

The coefficient C is a model-dependent amplitude, but the exponent 2ν is universal and equal to 1 for all ideal polymers of any topology. The compactness of a given polymeric structure is measured by the g-ratio, involving the respective radii of gyration of a branched structure, ⟨S²⟩_b, and a linear structure, ⟨S²⟩_ℓ, containing the identical number of units. Casassa and Berry [9] obtained a general equation for the g-ratio

Research mentored by Marvin Bishop, Ph.D.



of uniform, ideal comb polymers with n three-functional junctions regularly spaced along the backbone:

$$g = \frac{\langle S^2 \rangle_b}{\langle S^2 \rangle_\ell} = r - \frac{r^2(1-r)}{n+1} + \frac{2r(1-r)^2}{n} + \frac{(3n-2)(1-r)^3}{n^2} \qquad (2)$$

Here, r is the ratio of the number of units in the comb backbone to the total number of units in the polymer. In the case of the 11 branch combs, r = 6/11 and n = 5, so g = 821/1331 (= 0.61683). The g-ratio of the ideal 14 branch polymers was determined from a calculation of the form factor presented later in this article; its value is 163/343 (= 0.47522).

Details about the shapes of polymers can be determined from the radius of gyration tensor. Its eigenvalues, ordered by magnitude, are λ₁ ≤ λ₂ in two dimensions and λ₁ ≤ λ₂ ≤ λ₃ in three. These are the principal moments of gyration along the principal orthogonal axes [10]. The average trace of this tensor, λ₁ + λ₂ or λ₁ + λ₂ + λ₃, is equal to ⟨S²⟩. Rudnick and Gaspari [11, 12] have defined the average asphericity, ⟨A⟩, of polymers in d dimensions as

$$\langle A \rangle = \left\langle \frac{\sum_{i>j}^{d} (\lambda_i - \lambda_j)^2}{(d-1)\left(\sum_{i=1}^{d} \lambda_i\right)^2} \right\rangle \qquad (3)$$

hP i = ¯ is Here, λ

*

¯ 2 − λ)(λ ¯ 3 − λ) ¯ 27(λ1 − λ)(λ P3 ( i=1 λi )3 ¯ = λ1 + λ2 + λ3 . λ 3

+

.

(5)

(6)

Note that hAi and hP i involve an average of a ratio whereas hδ1 i involves a ratio of averages.

Another important structural property of polymers is the form factor [13], S(k), which provides information about the spatial monomer distribution N 1 X S(k) = 2 < e ik • (Rn −Rm ) > N m,n

(7)



where k is the momentum transfer of the scattering experiment and R_m and R_n are the positions of the m-th and n-th monomers. In an ideal polymer the configurations have a Gaussian distribution. Casassa and Berry [9, 14] used this fact to obtain the form factor for uniform ideal combs with single branches attached to the backbone:

$$S(k) = \frac{2(A+B+C)}{x^2} \qquad (8)$$

where

$$A = x - 1 + e^{-xr}, \qquad (9)$$

$$B = \left(1 - e^{-x(1-r)/n}\right)\left(n + 2\,\frac{1 - e^{-xrn/(n+1)}}{1 - e^{xr/(n+1)}}\right), \qquad (10)$$

and

$$C = \left(1 - e^{-x(1-r)/n}\right)^2 \frac{(n-1)\left(e^{xr/(n+1)} - 1\right) - \left(1 - e^{-xr(n-1)/(n+1)}\right)}{\left(1 - e^{xr/(n+1)}\right)^2}. \qquad (11)$$

In these equations x^{1/2} = k⟨S²⟩_ℓ^{1/2}, where ⟨S²⟩_ℓ is the radius of gyration of an ideal linear polymer with N units; ⟨S²⟩_ℓ is related to ⟨S²⟩_b by the g-ratio. The form factor of the eleven branch comb polymer is easily obtained from these equations.

In this article we compute ⟨S²⟩, the g-ratio, ⟨A⟩, ⟨δ₁⟩, ⟨P⟩, and S(k) from both theoretical equations and accurate Monte Carlo computer simulations for ideal multi-branch five junction comb molecules in two and three dimensions.

Methods

Numerical evaluation applying Benhamou's method

The form factor for the fourteen branch five junction ideal comb polymer has been computed by following the method of Benhamou [3, 15], in which Debye scattering for tree-like networks built from identical ideal linear polymer chains is investigated. For these trees the form factor can be decomposed into intra-chain and inter-chain contributions, each involving two distinct chains within the network. The intra-chain contribution is proportional to the Debye scattering function of a single ideal chain

$$s_D(k) = \frac{2}{x^2}\left(x - 1 + e^{-x}\right). \qquad (12)$$

The inter-chain contributions essentially depend on the distance of the two chains within the network, i.e., the length of the unique path between, and including, the chains involved in each contribution. The details are contained in reference [3]. In the special case of the 14 branch combs we obtain

$$S(k) = \frac{2}{x^2}\left(10 + x - 4e^{-x/14} - 15e^{-x/7} + 4e^{-3x/7} + 4e^{-5x/14} + e^{-2x/7}\right). \qquad (13)$$



This has a Taylor expansion of

$$S(k) = 1 - \frac{163}{1029}\,x + \ldots \qquad (14)$$

The g-ratio is then computed by dividing the 14 branch S(k) by that of a linear chain,

$$S(k) = 1 - \frac{1}{3}\,x + \ldots \qquad (15)$$

to get the g-ratio value of 163/343 (= 0.47522).
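The small-x coefficient of Eq. (13), and hence the quoted g-ratio, can be checked numerically; a brief sketch (ours, not part of the paper):

import numpy as np

def S14(x):
    # Eq. (13): form factor of the ideal 14 branch five junction comb.
    return (2 / x**2) * (10 + x - 4*np.exp(-x/14) - 15*np.exp(-x/7)
                         + 4*np.exp(-3*x/7) + 4*np.exp(-5*x/14) + np.exp(-2*x/7))

x = 1e-3
slope = (1 - S14(x)) / x        # should approach 163/1029 as x -> 0
print(slope, 163/1029)
print(3 * slope, 163/343)       # dividing by the linear-chain slope 1/3 gives the g-ratio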

Monte Carlo

In our MC method, chain growth on a simple square or cubic lattice has been employed to generate five junction ideal combs with eleven and fourteen branches. Details of this approach are contained in Zajac and Bishop [6].

If $X_j^{(\alpha)}$ denotes the α component of the position vector of the j-th bead, then the center of mass coordinates, $X_{CM}^{(\alpha)}$, of a given configuration are given by

$$X_{CM}^{(\alpha)} = \frac{1}{N}\sum_{j=1}^{N} X_j^{(\alpha)}, \qquad \alpha = 1, 2, 3 \text{ or } 1, 2, \qquad (16)$$

and the matrix components of the gyration tensor, Q, may be written in the form

$$Q_{\alpha\beta} = \frac{1}{N}\sum_{j=1}^{N}\left(X_j^{(\alpha)} - X_{CM}^{(\alpha)}\right)\left(X_j^{(\beta)} - X_{CM}^{(\beta)}\right). \qquad (17)$$

The square radius of gyration of this configuration is then calculated as

$$S^2 = Q_{11} + Q_{22} + Q_{33} = \lambda_1 + \lambda_2 + \lambda_3 \qquad (18)$$

in three dimensions and as

$$S^2 = Q_{11} + Q_{22} = \lambda_1 + \lambda_2 \qquad (19)$$

in two dimensions.

The set of property values was then further averaged over the total number of generated samples (160,000) to determine the values of the mean and the standard deviation from the mean, employing the usual equations. In the MC simulations for S(k), N ranges from 127 to 701, whereas our simulations of the shape properties of these five junction combs employed N values ranging from 2201 to 7001. The computational complexity of the shape work is O(N), whereas the form factor calculations require O(N²). Moreover, each S(k) calculation involves two simulation runs: one to determine ⟨S²⟩ and a second to calculate S(k). We will demonstrate later that the current smaller systems are still large enough to probe the asymptotic regime.
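A minimal sketch (not the author's lattice program) of the per-configuration analysis defined by Eqs. (16)-(19), together with the asphericity of Eq. (3), using a simple off-lattice random walk as a stand-in configuration:

import numpy as np

def shape_properties(X):
    # X: (N, d) array of bead coordinates. Returns S^2, ordered eigenvalues, asphericity.
    d = X.shape[1]
    delta = X - X.mean(axis=0)               # subtract the center of mass, Eq. (16)
    Q = delta.T @ delta / X.shape[0]         # gyration tensor, Eq. (17)
    lam = np.sort(np.linalg.eigvalsh(Q))     # principal moments of gyration
    S2 = lam.sum()                           # Eqs. (18)/(19)
    num = sum((lam[i] - lam[j])**2 for i in range(d) for j in range(i))
    A = num / ((d - 1) * S2**2)              # single-configuration asphericity, Eq. (3)
    return S2, lam, A

# Illustrative use on an ideal (random-walk) chain of 1000 unit-variance steps in 3D;
# <A> is then estimated by averaging A over many generated configurations.
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(1000, 3)), axis=0)
print(shape_properties(X))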



To compute S(k) using the generated MC configurations, we first average Eq. (7) over the angle between k and (R_n − R_m) to find that

$$S(k) = \frac{1}{N^2}\sum_{m,n}^{N}\left\langle J_0\!\left[k(R_n - R_m)\right]\right\rangle \qquad (20)$$

in two dimensions and

$$S(k) = \frac{1}{N^2}\sum_{m,n}^{N}\left\langle \frac{\sin\left[k(R_n - R_m)\right]}{k(R_n - R_m)}\right\rangle \qquad (21)$$

in three dimensions. Here, J₀ is the zeroth-order Bessel function.

Then this set of configurations was further averaged over the total number of generated samples (10,000) to obtain the S(k) results presented here.
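For reference, a direct O(N²) evaluation of Eq. (21) for a single 3D configuration can be sketched as follows (ours, not the paper's code); the reported S(k) is this quantity averaged over many configurations:

import numpy as np

def form_factor_3d(X, k):
    # X: (N, 3) bead coordinates; k: scalar momentum transfer.
    diff = X[:, None, :] - X[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    kr = k * r
    # sin(kr)/(kr), with the m = n (r = 0) terms contributing 1.
    integrand = np.where(kr > 0, np.sin(kr) / np.where(kr > 0, kr, 1.0), 1.0)
    return integrand.mean()    # equals the (1/N^2) double sum of Eq. (21)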

Results

The ⟨S²⟩ data were fit by a weighted nonlinear least-squares program [16] to determine the exponent in the scaling law. In all the results reported in the tables, the number in parentheses denotes one standard deviation in the last displayed digits. It was found that 2ν had the value 1.00 ± 0.02 for both the 11 and 14 branch combs in two dimensions and 1.00 ± 0.01 for both combs in three dimensions. These results are completely consistent with the theoretical value of 1.00.

The MC g-ratios have been calculated from the radius of gyration data, and the errors in these quantities have been computed from the standard equation relating the error in a ratio to the errors in the numerator and the denominator. However, these computer results are for finite N, whereas the theories are for infinite N. Infinite-N g-ratio values have been obtained by fitting a scaling law, as explained in Zweier and Bishop [1]. These extrapolated g-ratios for the comb systems are compared to other findings in Tables 1 and 2. Both Wei's method [17, 18] and the MC simulations are in excellent agreement with each other and with the theoretical predictions. The g-ratios of the fourteen branch comb, which has a complete set of interior branches, have a relatively lower value than those found for the eleven branch combs.

Table 1. Results for 11 branch comb polymers in two and three dimensions.

Property   2D Extrapolated   2D Wei        2D Exact   3D Extrapolated   3D Wei          3D Exact
g ratio    0.616(2)          0.61683(0)    0.61683    0.617(1)          0.61683(0)      0.61683
⟨A⟩        0.307(2)          0.30616(0)               0.299(1)          0.29919(9)
⟨δ₁⟩       0.788(2)          0.78750(8)
⟨P⟩                                                   0.308(2)          0.15401(5)ᵃ

ᵃ Note that the Wei method defines ⟨P⟩ with an additional factor of 1/2.



Table 2. Results for 14 branch comb polymers in two and three dimensions.

Property   2D Extrapolated   2D Wei        2D Exact   3D Extrapolated   3D Wei          3D Exact
g ratio    0.473(2)          0.47521(9)    0.47522    0.475(2)          0.47521(9)      0.47522
⟨A⟩        0.264(2)          0.26441(0)               0.255(2)          0.25566(6)
⟨δ₁⟩       0.764(2)          0.76393(6)
⟨P⟩                                                   0.239(2)          0.11976(5)ᵃ

ᵃ Note that the Wei method defines ⟨P⟩ with an additional factor of 1/2.

The error in the ⟨A⟩ calculation, which involves the division of separately averaged quantities, was determined in the same way as for the g-ratio, and the results are presented in Tables 1 and 2. The values found for ⟨A⟩ of the ideal 11 and 14 branch comb polymers are in excellent agreement with the theoretical predictions of the Wei method. As expected, the results indicate that the polymers become more symmetric in shape as the structure changes to higher branching and a complete set of interior branches.

The three dimensional S(k) data are presented in Fig. 2. The reciprocal of the form factor is plotted to emphasize differences at higher k values. The dashed, solid, and dotted curves are the exact results for the 14 branch combs, the 11 branch combs, and linear chains, respectively.

Figure 2. Comparison of the three dimensional MC simulation for the reciprocal of the form factor to the exact results. The dashed curve is the exact result for the 14 branch combs, the solid curve is the exact result for the 11 branch combs, and the dotted curve is the exact result for linear chains (Debye equation). The circles are the MC values for the 14 branch combs when N = 687, the squares are the MC values for the 11 branched combs when N = 683, and the triangles are the MC values for the linear chains when N = 680.



The circles, squares, and triangles are the MC values for the 14 branch combs (N = 687), the 11 branch combs (N = 683), and the linear chains (N = 680), respectively. There is fine agreement between the exact predictions and the MC simulations except at the largest values of x. Large values of x probe small distances, and at that scale the detailed structure of the polymer has a significant effect on the form factor. Increased crowding is clearly seen as the polymer is changed from a linear chain to an 11 branch and then to a 14 branch comb.

Conclusions

A Monte Carlo growth algorithm has been used to investigate branched five junction comb polymers in the ideal regime. The g-ratio, the asphericities with their respective error bars, and the form factor have been determined for a wide range of N. The values obtained are in fine agreement with the available theory.

Acknowledgements

I thank the Manhattan College Computer Center and the Kakos Center for Scientific Computing for generous grants of computer time. I also thank Dr. Marvin Bishop for his guidance in this work, and Dr. Rani Roy and Mrs. Elen Mons for organizing the Manhattan College Jasper Summer Research Scholars Program, which provided the financial support.

References

[1] S. Zweier and M. Bishop, J. Chem. Phys., 131, 116101 (2009)
[2] S. Zweier and M. Bishop, Comp. Educ. J., XVIIII(4), 99 (2009)
[3] C. von Ferber, M. Bishop, T. Forzaglia and C. Reid, Macromolecules, 46(6), 2468 (2013)
[4] M. Perrelli and M. Bishop, Comp. Educ. J., 5(3), 25 (2014)
[5] C. von Ferber, M. Bishop, T. Forzaglia, C. Reed and G. Zajac, J. Chem. Phys., 142, 024901 (2015)
[6] G. Zajac and M. Bishop, Comp. Educ. J., 6(1), 44 (2015)
[7] R. de Regt, C. von Ferber, M. Bishop, A. J. Barillas and T. Borgeson, Physica A, 458, 391 (2016)
[8] P. G. de Gennes, Scaling Concepts in Polymer Physics (Cornell University Press, Ithaca, 1979)
[9] E. F. Casassa and G. C. Berry, J. Poly. Sci. A-2, 4, 881 (1966); note that references [5, 7] have a misprint in the g-ratio equation even though the reported numerical calculations are correct.
[10] K. Solc and H. Stockmayer, J. Chem. Phys., 54, 2756 (1971)
[11] J. Rudnick and G. Gaspari, Science, 237, 384 (1987), and references therein
[12] J. Rudnick and G. Gaspari, J. Phys. A, 19, L191 (1986)
[13] P. J. Flory, Statistical Mechanics of Chain Molecules (Hanser, Munich, 1989)
[14] Note that the form of Eq. (8) given in reference [9] contains a misprint, which has been corrected here in the denominator of Eq. (10)



[15] M. Benhamou, Condensed Matter Physics, 7, 179 (2004)
[16] P. R. Bevington, Data Reduction and Error Analysis for the Physical Sciences (McGraw-Hill, New York, 1969)
[17] G. Wei, Physica A, 222, 152 (1995)
[18] G. Wei, Physica A, 222, 155 (1995)


Modification of coffee oil feedstock and heterogeneous catalyst for biodiesel synthesis

Thérèse Kelly∗

Department of Chemistry and Biochemistry & Department of Biology, Manhattan College

Abstract. The high cost of the raw materials and of product refinement associated with current methods of biodiesel synthesis limits the adoption of this renewable alternative energy on a large scale. Coffee oil is an example of a potentially low cost feedstock for biodiesel synthesis, because it can be extracted from waste coffee grounds. Homogeneous catalysts are the most commonly used catalysts in biodiesel production, but they are harmful to the environment and contribute heavily to the processing costs associated with production of the fuel. Heterogeneous catalysts are recyclable and a more energetically and economically efficient choice of catalyst. The heterogeneous catalyst calcium oxide (CaO) is a sustainable alternative to homogeneous catalysts such as potassium hydroxide (KOH), although it displays lower in situ reactivity and stability than its active phase, Ca-glyceroxide. In the present work, purchased coffee oil first underwent a Fischer esterification pretreatment to lower its free fatty acid (FFA) content before its use as feedstock in the transesterification reaction. Ca-glyceroxide was synthesized in the laboratory and successfully catalyzed the conversion of coffee oil to fatty acid methyl esters (FAMEs), or biodiesel. X-ray powder diffraction (XRD) and scanning electron microscopy (SEM) confirmed the formation of the active-phase catalyst, and ¹H NMR and FTIR were used to analyze the conversion of the feedstock to biodiesel.

Introduction

The pursuit of alternative sources of renewable energy is necessary at a time of growing reliance on nonrenewable and environmentally hazardous fossil fuels like petroleum (Deligiannis et al., 2011). Biodiesel is a renewable and environmentally friendly fuel and can be produced from a variety of triglyceride-rich oil feedstocks (Patil et al., 2009a). Although the fuel can be used in standard diesel engines with little or no modification (Patil et al., 2009b), it is in many ways superior to its petroleum counterpart in terms of performance and contribution to environmental pollution (Deligiannis et al., 2011). Biodiesel has a high flash point of 150 °C, which lowers its volatility and makes it safer to use (Deligiannis et al., 2011). Its higher cetane number allows for a more complete combustion of the fuel, which leads to lower emissions of carbon monoxide, unburned hydrocarbons, and particulate matter in the exhaust gas (Patil et al., 2009b).

Biodiesel is produced via a catalyzed transesterification of the triglyceride component of oils, which yields fatty acid methyl esters (FAMEs) and a glycerol byproduct. Fig. 1 shows the reaction equation for the transesterification reaction. Transesterification is typically carried out with non-renewable homogeneous catalysts such as potassium hydroxide (KOH), but their environmental impact and the post-production processing they require make them a less sustainable choice (Reyero et al., 2014). Homogeneous alkaline catalysts react with free fatty acids (FFAs) present in oils, forming soaps that make it difficult

Research mentored by Yelda Hangun-Balkir, Ph.D.



Figure 1. Transesterification reaction equation.

to separate the glycerol and biodiesel layers, requiring removal by a water-washing step (Di Serio et al., 2007; Reyero et al., 2014). The resulting wastewater effluents require proper disposal management, and the acid neutralization of any remnant catalyst further lowers product yields (Reyero et al., 2014; Di Serio et al., 2007). Heterogeneous catalysts are renewable alternatives to their homogeneous counterparts, because they can easily be recovered by filtration of the transesterification reaction product and therefore do not have the drawbacks associated with homogeneous catalysts.

Two major factors affecting the feasibility of biodiesel production on a large scale are the availability of a low cost, good quality feedstock and the costs of post-production processing (Di Serio et al., 2008). At a time of worldwide food crises, it is important to utilize oil feedstocks that will not affect the availability of edible oils (Patil et al., 2009b). Coffee oil is a promising feedstock for biodiesel synthesis because it can be extracted from waste coffee grounds, which contain an average of 15% oil (Deligiannis et al., 2011). The application of heterogeneous catalysts in biodiesel synthesis could significantly reduce the cost of production by eliminating the energetically and environmentally costly refinement steps associated with the use of homogeneous catalysts (Di Serio et al., 2007).

One of the most well-studied heterogeneous transesterification catalysts for biodiesel production is calcium oxide (CaO) (Reyero et al., 2014). CaO is inexpensive and displays good catalytic performance at mild reaction temperatures, though its limitations can include low specific activity and stability in the transesterification reaction (Reyero et al., 2014). Ca-glyceroxide is the active phase of CaO in the transesterification reaction and is formed as the reaction progresses and glycerol forms (León-Reina et al., 2013). This transformation can be explained by the greater solubility of CaO in glycerol than in methanol (León-Reina et al., 2013). The study conducted by León-Reina et al. (2013) showed that Ca-glyceroxide is more selective than CaO towards the formation of FAMEs, and its high reaction stability allowed it to



be reused in five subsequent transesterification reactions while maintaining high catalytic activity. In the present work, Ca-glyceroxide was synthesized in the laboratory and used as a heterogeneous catalyst in the transesterification of purchased coffee oil. To determine its relative efficacy as a catalyst, reactions were also carried out with CaO and with the homogeneous catalyst KOH. In addition to exploring the feasibility of a heterogeneous-catalyzed transesterification of coffee oil, alternative sources of the heterogeneous catalyst and of the oil feedstock were also considered in this project.

Experimental

Materials

Coffee oil was purchased from Eden Botanicals and served as the feedstock in all transesterification reactions. Calcium oxide 20-mesh powder (Alfa Aesar, Ward Hill, MA) was used as the standard heterogeneous metal oxide catalyst, and KOH (Fisher Science Education, Nazareth, PA) was used as the homogeneous catalyst. Anhydrous methanol (Fisher Scientific, Waltham, MA) was the alcohol used in the transesterification reactions. A round bottom flask set in either a sand or water bath served as the apparatus for the transesterification reactions, and a hot plate and magnetic stir bar were used to heat and stir the reaction mixtures.

Preparation of catalysts and synthesis of Ca-glyceroxide

The homogeneous catalyst KOH was used as received in the transesterification reaction. The CaO used in all experiments was first calcined at 900 °C, with a heating rate of 10 °C per minute, for approximately 4 hours. Ca-glyceroxide was synthesized according to a method adapted from Reyero et al. (2014), by combining calcined CaO with glycerol and methanol; the synthesis reaction equation for Ca-glyceroxide is shown in Fig. 2 (León-Reina et al., 2013). 2.5 g of calcined CaO, 8.4 g of glycerol, and 37 g of methanol were heated and stirred for three hours in a round bottom flask set in a 60 °C water bath, and the resultant solid product was recovered via vacuum filtration. The solid was washed with approximately 15 mL of tetrahydrofuran (THF) and dried under vacuum filtration for 15 minutes. The synthesized Ca-glyceroxide catalyst was transferred to a screw-cap vial and stored in a desiccator to avoid contamination by air.
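For orientation (our arithmetic, not a ratio stated by the authors), the amounts above correspond to roughly a two-fold molar excess of glycerol over CaO:

$$n_{\text{CaO}} = \frac{2.5\ \text{g}}{56.08\ \text{g/mol}} \approx 0.045\ \text{mol}, \qquad n_{\text{glycerol}} = \frac{8.4\ \text{g}}{92.09\ \text{g/mol}} \approx 0.091\ \text{mol}.$$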

Figure 2. Ca-glyceroxide synthesis reaction equation.

Characterization of CaO and active phase Ca-glyceroxide

X-ray powder diffraction (XRD) was used to characterize the crystalline structure of the as-received calcium oxide and the synthesized Ca-glyceroxide. The resultant diffraction peaks for each catalyst were compared with those recorded in Reyero et al. (2014). XRD patterns were
acquired using a D2 PHASER X-ray diffractometer (Bruker, Karlsruhe, DE), with a scan step size of 0.01◦ and a scan rate of 2 seconds per step over a 2θ range of 5◦-60◦ and 5◦-100◦ for the Ca-glyceroxide and purchased CaO catalysts, respectively. Scanning electron microscopy (SEM) was used to obtain high-resolution images of CaO and Ca-glyceroxide, which allowed for the observation of the distinct surface morphologies of the two heterogeneous catalysts. Scanning electron micrographs were obtained with a LEO 1550 SFEG-SEM (Zeiss, Germany). For the characterization of Ca-glyceroxide, the electron beam was delivered to the sample at 20 kV acceleration voltage and 9 mm working distance. For the characterization of calcined CaO, the electron beam was delivered to the sample at 2.5 kV acceleration voltage and 8 mm working distance. In order to make the catalyst samples conductive, an Edwards 150B Sputter Coater was used to coat each sample twice with 3 nm of gold (Au) at 45◦. A Robinson backscatter detector (RBSD) was used to collect images of Ca-glyceroxide. A secondary electron (SE2) detector was used to collect images of CaO.

Fischer esterification pretreatment of coffee oil

In order for the conversion to FAMEs to occur, feedstocks that are used in transesterification reactions catalyzed by homogeneous alkaline catalysts should contain no more than 0.5 to 1% w/w FFA (Di Serio et al., 2007; Ghadge et al., 2005). A high FFA content can also result in saponification due to the acid-base interaction between feedstock and catalyst (Di Serio et al., 2007; Ghadge et al., 2005). The incomplete conversion of the coffee oil to FAMEs using calcined CaO, as well as the failure of Ca-glyceroxide to catalyze the conversion in the first trial, suggested that a low FFA % may also be necessary for a heterogeneous-catalyzed transesterification reaction. Thus, the coffee oil feedstock used in the second trial of the transesterification reaction catalyzed by Ca-glyceroxide first underwent a two-step Fischer esterification pretreatment. Fischer esterification refers to the conversion of carboxylic acids to esters, which occurs by heating a carboxylic acid in the presence of an alcohol and a small amount of a strong acid catalyst (McMurry, 2005). In the present work, a two-step Fischer esterification pretreatment was applied in order to lower the FFA content of coffee oil. The method was adapted from Ghadge et al. (2005), whose two-step Fischer esterification pretreatment procedure successfully reduced the 19% FFA content of Madhuca indica oil to below 1%. The reaction equation shown in Fig. 3 shows the esterification of a fatty acid with methanol using a sulfuric acid catalyst.

Figure 3. Fischer esterification reaction equation.
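The FFA limit quoted above is usually checked with an acid-value titration. The following minimal Python sketch (all numbers illustrative, not measurements from this work) converts a titration result into an acid value and percent FFA expressed as oleic acid:

MW_KOH = 56.1       # g/mol, molar mass of KOH
MW_OLEIC = 282.5    # g/mol, FFA conventionally reported as oleic acid

def acid_value(v_koh_ml, n_koh, sample_g):
    # Acid value (mg KOH per g oil) from titrant volume (mL), normality, and sample mass (g)
    return v_koh_ml * n_koh * MW_KOH / sample_g

def ffa_percent(av):
    # Percent FFA (as oleic acid) corresponding to an acid value
    return av * MW_OLEIC / (10.0 * MW_KOH)

av = acid_value(v_koh_ml=7.1, n_koh=0.1, sample_g=2.0)   # illustrative titration numbers
print("acid value = %.1f mg KOH/g, FFA = %.1f %%" % (av, ffa_percent(av)))

With these made-up inputs the acid value is about 19.9 mg KOH/g, corresponding to roughly 10% FFA, well above the 1% limit and in the range where a pretreatment is needed.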



As prescribed by Tan et al. (2015), 50 mL of the coffee oil was heated and stirred at 110 ◦C for one hour in a sand bath in order to remove any water content and lower the viscosity of the feedstock. The oil was left to cool and stir overnight. To begin the Fischer esterification, the coffee oil was heated at 60 ◦C for one hour in a round bottom flask placed in a sand bath. A methanol to oil ratio of 0.60 v/v and 1% v/v H2SO4(aq) was used in both steps. 30 mL of methanol and 0.5 mL of H2SO4(aq) were briefly stirred in a beaker placed in an ice bath, and added to the round bottom flask to begin the reaction. The reaction mixture was heated and stirred at 60 ◦C for one hour, and upon cooling was added to a separatory funnel. The resulting top layer was removed and set aside. The bottom layer underwent the second pretreatment step under the same reaction conditions. At the completion of the pretreatment, the two top layers were combined and the final bottom layer was heated to 110 ◦C in order to remove any excess methanol or water. Fourier Transform Infrared (FTIR) spectroscopy was used to characterize the products in the top and bottom layers following the Fischer esterification pretreatment.

Transesterification reactions

Transesterification reactions were first carried out using as-received coffee oil and each of the three catalysts. A second trial of the transesterification reaction catalyzed by Ca-glyceroxide was carried out with the coffee oil that underwent the Fischer esterification pretreatment. An oil to methanol molar ratio of 1:12 was used for the reactions catalyzed by calcined CaO and Ca-glyceroxide. The second trial of the transesterification reaction catalyzed by Ca-glyceroxide was conducted in a sand bath at 60 ◦C. 30 mL of pretreated coffee oil was heated and stirred in a 250 mL round bottom flask until it reached reaction temperature. 35 mL of methanol was heated at approximately 55 ◦C. 0.308 g of Ca-glyceroxide was quickly added to the coffee oil in order to prevent contamination of the catalyst by ambient air, and the subsequent addition of hot methanol allowed for the onset of the reaction. The reaction mixture was stirred for three hours at 60 ◦C, cooled to room temperature, and stirred overnight. The solid catalyst was recovered via vacuum filtration. The reaction mixture was poured into a 125 mL separatory funnel to facilitate the separation of any biodiesel or glycerol product layers. Anhydrous sodium sulfate (Fisher Scientific, Waltham, MA) was added to the product following its removal from the separatory funnel, in order to remove any excess methanol or water.

Analysis of transesterification reaction products

1H Nuclear Magnetic Resonance (1H NMR) and Fourier Transform Infrared (FTIR) spectroscopies were used to analyze the structural and functional group composition of the product samples. The product samples from all reactions were analyzed by 1H NMR. The product samples of the transesterification catalyzed by KOH and the second trial of the transesterification catalyzed by Ca-glyceroxide were analyzed by FTIR. The product samples were collected from the top and bottom regions of the contents in each separatory funnel. An EFT-60 NMR spectrometer (Anasazi Instruments, Indianapolis, IN) and a Thermo Nicolet NEXUS 470 FTIR spectrometer (Thermo
Scientific, Waltham, MA) were used to carry out the spectroscopic analyses. The FTIR spectra of the samples were observed in the 500-4000 cm−1 range after 32 scans were recorded for each sample.
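Since the alcohol loading above is specified as a molar ratio rather than a volume, it can be helpful to see how a methanol volume follows from it. The Python sketch below assumes literature-typical values for the mean triglyceride molar mass and for the densities; these are assumptions of the sketch, not measured properties of this coffee oil:

MW_OIL = 880.0     # g/mol, assumed mean triglyceride molar mass (literature-typical)
MW_MEOH = 32.04    # g/mol, methanol
RHO_OIL = 0.91     # g/mL, assumed oil density
RHO_MEOH = 0.792   # g/mL, methanol density

def methanol_volume(oil_ml, molar_ratio=12.0):
    # Methanol volume (mL) giving the requested methanol-to-oil molar ratio
    mol_oil = oil_ml * RHO_OIL / MW_OIL
    return mol_oil * molar_ratio * MW_MEOH / RHO_MEOH

print("%.1f mL methanol per 30 mL oil at a 1:12 oil:methanol ratio" % methanol_volume(30.0))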

Results and Discussion

Catalyst characterization

The X-ray Diffraction (XRD) patterns of CaO and Ca-glyceroxide are shown in Fig. 4. The diffraction peaks at 32.26◦, 37.36◦, 54.01◦, 64.31◦, 67.43◦, and 79.71◦ in the CaO spectrum align with the diffraction pattern of the compound described by Reyero et al. (2014). The peaks at 18.01◦, 34.24◦, and 50.88◦ suggest the presence of CaCO3 in the CaO sample (Reyero et al., 2014), which is due to the contamination of the sample by CO2. In addition to thermal activation of the compound prior to its use in experiments, calcination of CaO allows for the removal of such contaminants (Reyero et al., 2014).

Figure 4. XRD patterns of as-received CaO and Ca-glyceroxide. Symbols refer to the major peaks characteristic of CaO (∆) and Ca-glyceroxide ( ).

The active phase of CaO in the transesterification reaction, Ca-glyceroxide, was synthesized with CaO, glycerol, and methanol. The diffraction peaks at 8.3◦, 10.1◦, 21.3◦, 24.4◦, 26.8◦, 34.5◦, and 36.3◦ are characteristic of Ca-glyceroxide, and correspond to the peak values listed for the compound in Reyero et al. (2014). According to Reyero et al. (2014), the XRD pattern of a sample of calcined CaO recovered following the transesterification of sunflower oil matched that of the Ca-glyceroxide synthesized ex situ in the study. This confirms that CaO transforms into Ca-glyceroxide during the transesterification reaction.
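The phase assignment described above amounts to matching observed 2θ peak positions against reference lists. A small Python sketch of that comparison, using the peak values quoted in this section (the ±0.2◦ tolerance is an assumption of the sketch, not a value from the instrument):

# Reference 2-theta positions quoted in this section, after Reyero et al. (2014)
REFERENCE = {
    "CaO":            [32.26, 37.36, 54.01, 64.31, 67.43, 79.71],
    "CaCO3":          [18.01, 34.24, 50.88],
    "Ca-glyceroxide": [8.3, 10.1, 21.3, 24.4, 26.8, 34.5, 36.3],
}

def assign_peaks(observed, tol=0.2):
    # Label each observed 2-theta peak with any phase within +/- tol degrees
    labels = {}
    for peak in observed:
        labels[peak] = [phase for phase, refs in REFERENCE.items()
                        if any(abs(peak - r) <= tol for r in refs)]
    return labels

# e.g. a few peaks picked from the as-received CaO pattern in Fig. 4
for peak, phases in assign_peaks([18.01, 32.26, 34.24, 37.36, 54.01]).items():
    print(peak, "->", phases or ["unassigned"])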



The SEM micrograph of CaO in Fig. 5 shows a spherical morphology for the catalyst.

Figure 5. SEM micrograph of CaO at 10.00 KX magnification.

The SEM micrograph of Ca-glyceroxide in Fig. 6 is distinctly different from that of CaO.

Figure 6. SEM micrograph of Ca-glyceroxide at 10.00 KX magnification.

Analysis of products of Fischer esterification pretreatment

A line of separation allowed for the distinction of the two layers that resulted following the addition of the pretreated coffee oil to the separatory funnel, which indicated that the bottom layer contained the coffee oil and the top layer contained a less dense aqueous methanol-water mixture (Ghadge et al., 2005). The bottom layer separated in the final pretreatment step was a dark green
color, but regained the brown color characteristic of coffee oil following its final heating at 110 ◦C (Fig. 7).

Figure 7. (L to R) Before and after final separation and heating of pretreated coffee oil.

The FTIR spectrum analysis of sunflower oil described by Shimamoto et al. (2015) provided a description and explanation of the absorption patterns characteristic of plant-based oils. Comparison of the FTIR spectrum of the bottom layer of the pretreatment product with the FTIR spectrum of standard coffee oil confirmed that the bottom layer was coffee oil. The presence of a broad peak at 3371 cm−1 in the pretreated coffee oil shown in Fig. 10 indicates the presence of an alcohol (–OH), and is likely due to excess methanol remaining in the coffee oil after the pretreatment step. The persistence of the alcohol peak even after the brief heating of the product at 110 ◦C and treatment with anhydrous sodium sulfate suggests that a significant amount of excess methanol was present at the end of the pretreatment. In order to reduce the amount of excess methanol, a pretreatment method could be developed according to the exact fatty acid content of coffee oil. This could be done by determining the acid value (mg KOH/g) of coffee oil through a titration method prescribed by the American Society for Testing and Materials (ASTM). Glycerolysis is an alternative to the Fischer esterification pretreatment, and lowers FFA content by re-esterifying fatty acids to glycerol to form mono-, di-, and tri-glycerides (Kombe et al., 2013). The process does not require methanol, and can be carried out using a metallic catalyst such as zinc chloride; thus, glycerolysis would eliminate the risk of feedstock contamination by excess methanol (Kombe et al., 2013). The FTIR spectrum of the combined top layers of the pretreatment product indicated no presence of oil.

Analysis of products of transesterification reactions

Fig. 8 shows the 1H NMR spectra of standard coffee oil and the product of the Ca-glyceroxide-catalyzed transesterification of pretreated coffee oil. A sample was taken from the entire product of the transesterification reaction for 1H NMR analysis. The peak at 3.66 ppm present in the latter spectrum indicates the presence of a methyl ester group, and is not present in the spectrum of the
coffee oil. This signifies that Ca-glyceroxide successfully catalyzed the conversion of pretreated coffee oil to FAMEs. Fig. 9 shows the 1H NMR spectra of standard coffee oil and the product of the KOH-catalyzed transesterification of untreated coffee oil. The presence of a similar peak at approximately 3.65 ppm in the latter spectrum indicates that KOH catalyzed the conversion of standard coffee oil to FAMEs.
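Although only peak presence was assessed here, the 3.66 ppm methyl ester signal is also the basis of a standard quantitative estimate: the Knothe-type 1H NMR conversion formula, which compares the methoxy integral with the alpha-methylene integral near 2.3 ppm. A hedged Python sketch follows; the integrals in the example call are placeholders, not values from these spectra:

def fame_conversion(i_och3, i_alpha_ch2):
    # Percent conversion to methyl esters: C = 100 * 2*I(OCH3) / (3 * I(alpha-CH2)),
    # using the methyl-ester singlet (~3.66 ppm) and the alpha-CH2 signal (~2.3 ppm)
    return 100.0 * (2.0 * i_och3) / (3.0 * i_alpha_ch2)

print("conversion ~ %.0f %%" % fame_conversion(i_och3=3.0, i_alpha_ch2=2.1))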

Figure 8. 1H NMR spectra of coffee oil and biodiesel synthesized with Ca-glyceroxide.

Figure 9. 1H NMR spectra of coffee oil and biodiesel synthesized with KOH.

FTIR spectra of the pretreated coffee oil and the product of its transesterification catalyzed by Ca-glyceroxide are shown in Fig. 10. FTIR analysis showed that the transesterification product was the same in the top and bottom regions of the separatory funnel. FTIR analyses of rapeseed oil and biodiesel derived from rapeseed oil, provided by Shimadzu Corp. (2016), provided a comparison and explanation for the FTIR pattern of the transesterification product. The absorption bands at 3008.66 cm−1, 2938.08 cm−1, and 2850.57 cm−1 in the coffee oil spectrum are preserved in the spectrum of the transesterification product, and indicate the presence of olefinic (=CH-) and aliphatic (-CH2-) hydrocarbons. The signal at 1738.46 cm−1 in the coffee oil spectrum is also preserved, and indicates the presence of an ester (C=O). The new signal at 1438.57 cm−1 in the spectrum of the transesterification product indicates the formation of the methyl ester (RCOOCH3), and confirms that Ca-glyceroxide catalyzed the conversion of pretreated coffee oil feedstock to FAMEs. The signals at 1171.27 cm−1 and 1199.35 cm−1 in the spectrum of the transesterification product replace the 1167.08 cm−1 signal in the coffee oil spectrum, which is due to the loss of the triple ester group present in triglycerides (Shimadzu Corp., 2016). The spectra of the standard coffee oil and the product of its transesterification catalyzed by KOH are shown in Fig. 11. Similar to Fig. 10, the peaks indicative of the olefinic (=CH-), aliphatic (-CH2-), and ester (C=O) groups are preserved in both spectra, and the new signal at 1438.57 cm−1 in the spectrum of the transesterification product indicates the production of the methyl ester (RCOOCH3).
Figure 10. FTIR spectra of pretreated coffee oil and biodiesel synthesized with Ca-glyceroxide.

The signals at 1176.01 cm−1 and 1199.35 cm−1 in the spectrum of the transesterification product replace the signal at 1163.05 cm−1 in the coffee oil spectrum.

Figure 11. FTIR spectra of coffee oil and biodiesel synthesized with KOH.

The similarity between the FTIR and 1H NMR spectra shown in Figs. 8-11 confirms that both catalysts allowed for the conversion of coffee oil to FAMEs. Spectroscopic analyses of the two reaction products showed no indication that glycerol formed in either reaction; however, minute amounts of glycerol may have been present in the products. Fig. 12 shows the product of the transesterification reaction catalyzed by Ca-glyceroxide. The efficacy of Ca-glyceroxide in this conversion indicates that a heterogeneous catalyst can be used to catalyze the transesterification of coffee oil. In order to quantitatively determine the degree of conversion, it is necessary to determine the percent yields of FAMEs following the transesterification of coffee oil. The application of High-Performance Liquid Chromatography (HPLC) and Gas Chromatography-Mass Spectrometry (GC-MS) would be useful in the calculation of percent yields, because these methods would allow for the identification, separation, and quantification of FAMEs produced following transesterification.
Figure 12. Crude coffee biodiesel.

In order to compare the catalytic performances of calcined CaO and its active phase Ca-glyceroxide in the transesterification of pretreated coffee oil, the reactions would have to be carried out using either catalyst and the percent yields of FAMEs would have to be determined for both transesterification products.

Conclusion

The active phase of CaO in the transesterification reaction, Ca-glyceroxide, was synthesized in the laboratory and successfully catalyzed the transesterification of pretreated coffee oil into FAMEs. Ca-glyceroxide was produced in order to introduce the active phase of the catalyst early on in the transesterification reaction, due to its properties of high basicity and high catalytic activity. The ability of Ca-glyceroxide to catalyze the synthesis of FAMEs indicates potential for the replication of this project using waste materials as sources of catalyst and feedstock. Waste seashells are reservoirs of calcium carbonate (CaCO3), which decomposes to CaO upon calcination at 900 ◦C. With the CaO sourced from waste seashells, it may be possible to synthesize Ca-glyceroxide using methanol and the glycerol byproduct formed in the transesterification reaction. The repurposing of waste seashells and recycling of glycerol would allow for a more sustainable synthesis of the active phase of CaO. Coffee oil could be extracted from waste coffee grounds and serve as the feedstock for the transesterification reaction. Collectively, the utilization of these materials could address the limitations of costly biodiesel production and issues of waste management.

Acknowledgment

I extend my gratitude to Dr. Hangun-Balkir and all members of the Manhattan College faculty in the Department of Chemistry and Biochemistry. I also thank the Manhattan College Summer Fellows Program for making this summer research experience possible.

References

Deligiannis, A.; Papazafeiropoulou, G.; Anastopoulos, G.; Zannikos, F. Waste Coffee Grounds as an Energy Feedstock. Presented at the 3rd International CEMEPE & SECOTOX Conference, Skiathos, GR, June 19-24, 2011, 617 (2011)
Di Serio, M.; Cozzolino, M.; Giordano, M.; Tesser, R.; Patrono, P.; Santacesaria, E. From Homogeneous to Heterogeneous Catalysts in Biodiesel Production. Ind. Eng. Chem. Res. 46, 6379. American Chemical Society Web Editions. (2007)
Di Serio, M.; Tesser, R.; Pengmei, L.; Santacesaria, E. Heterogeneous Catalysts for Biodiesel Production. Energy Fuels 22, 207-217. American Chemical Society Web Editions. (2008)
Ghadge, S.V.; Raheman, H. Biodiesel production from mahua (Madhuca indica) oil having high free fatty acids. Biomass Bioenergy 28, 601-605. ScienceDirect. (2005)
Kombe, G.G.; Temu, A.K.; Rajabu, H.M.; Mrema, G.D.; Kansedo, J.; Lee, K.T. Pre-Treatment of High Free Fatty Acids Oils by Chemical Re-Esterification for Biodiesel Production - A Review. Adv. Chem. Eng. Sci. 3, 242-247. Scientific Research Publishing. (2013)
León-Reina, L.; Cabeza, A.; Rius, J.; Maireles-Torres, P.; Alba-Rubio, A.C.; Granados, M.L. Structural and surface study of calcium glyceroxide, an active phase for biodiesel production under heterogeneous catalysis. J. Catal. 300, 30-36. ScienceDirect. (2013)
McMurry, J. Carboxylic Acid Derivatives and Nucleophilic Acyl Substitution Reactions. In Organic Chemistry, 5th ed.; Huber, J., Chelton, D., Masson, C., Eds.; Brooks/Cole: Pacific Grove, CA; 855-857. (2000)
Patil, P.D.; Deng, S. Transesterification of Camelina Sativa Oil Using Heterogeneous Metal Oxide Catalysts. Energy Fuels 23, 4619-4624. American Chemical Society Web Editions. (2009a)
Patil, P.D.; Gude, V.G.; Deng, S. Biodiesel Production from Jatropha Curcas, Waste Cooking, and Camelina Sativa Oils. Ind. Eng. Chem. Res. 48, 10850-10856. American Chemical Society Web Editions. (2009b)
Reyero, I.; Arzamendi, G.; Gandía, L.M. Heterogenization of the biodiesel synthesis catalysis: CaO and novel calcium compounds as transesterification catalysts. Chem. Eng. Res. Des. 92, 1519-1530. ScienceDirect. (2014)
Shimadzu Corp. “Infrared Spectroscopy differences between biodiesel prepared from rapeseed and the edible rapeseed oil.” https://www.shimadzu.hr/sites/default/files/Infrared%20Spectroscopy%20differences%20between%20biodiesel%20prepared%20from%20rapeseed%20and%20the%20edible%20rapaseed%20oil.pdf (accessed July 2016)
Shimamoto, G.; Favaro, M.; Tubino, M. Simple Methods via Mid-IR or 1H NMR Spectroscopy for the Determination of the Iodine Value of Vegetable Oils. J. Braz. Chem. Soc. 26, 1443. SciELO. (2015)
Tan, Y.H.; Abdullah, M.O.; Nolasco-Hipolito, C.; Taufiq-Yap, Y.H. Waste ostrich- and chicken-eggshells as heterogeneous base catalyst for biodiesel production from used cooking oil: Catalyst characterization and biodiesel yield performance. Appl. Energy 160, 61. ScienceDirect. (2015)


Developing Automated Systems for Measuring Interference Fringes of a Michelson Interferometer
Sean Heffernan∗
Department of Physics, Manhattan College

Abstract. A device for measuring high velocity interference fringes in a Michelson Interferometer was developed, both in code and in hardware, so that no individual fringes were skipped in data acquisition. The device requires some refinement, but otherwise works as intended.

Introduction

A Michelson Interferometer is a device that measures optical path length changes in a laser by measuring the position shift of the interference fringes. These interference fringes are created by shining a laser into a beam splitter and reflecting the light back with a pair of mirrors toward a single point. As one of these mirrors is moved, the fringes move proportionately to the change in path length and the frequency of the laser [1]. One limitation of using a Michelson Interferometer to measure the optical path change is that the fringes can travel at speeds greater than the human eye can track. Over the course of our investigations, we found that these fringes alternate between high intensity and low intensity at frequencies that made counting the fringes utilizing a 60 frame per second camera difficult, and a concern was raised that we might be missing many fringes in between the camera frames. The goal of our investigation is to find a more reliable and easier method by which we can count these fringes without losing any. To do this, we needed to measure exactly how fast these fringes were passing by in terms of frequency, and by extension how fast our equipment could read the information. Measuring data at high frequencies can be difficult without expensive equipment. Fortunately, microcontrollers and microprocessors are currently in the middle of a sort of Renaissance, with many companies trying to produce microcontrollers as cheaply and reliably as possible. This means that these sorts of devices are currently fairly inexpensive, yet extremely powerful and easy to use. Any sort of electronic sensor can be wired to one of these microcontrollers, and it will measure the data output by the sensor at very high frequencies, whether it be light intensity, temperature, velocity, acceleration, gyroscopic rotation, or pressure; the act of programming these devices is also very easy. Due to this ease of use, we decided to use this method to count the number of fringes by using a light sensor.

∗ Research mentored by Veronique Lankar, Ph.D.



Interferometer Design

In the interferometer we used, the mobile mirror was attached to a copper pipe (Fig. 1), which in turn had a heat rope wrapped around it. As the heat rope had a current sent through it, the copper pipe would warm up and expand, moving the mirror and changing the optical path length. For the purposes of our experiment, we would need to find both a viable light sensor and a temperature sensor.

Figure 1. Michelson Interferometer

Light Sensor Testing

To measure the light intensity as a function of time, we utilized an Arduino microcontroller wired to a light-sensitive sensor, which would output a voltage when exposed to light. To find an adequate sensor, we set some simple prerequisites for the sensors which we tested before usage. First and foremost, the sensor must react properly to changes in light intensity in a timely fashion – some sensors encounter saturation when they are exposed to high frequency blinking. For measuring high speed fringes, this can lead to the sensor improperly detecting light intensity. Secondly, the sensor needs to have a reasonable saturation value. When the sensors are exposed to a certain amount of light, the voltage output shifts from a linear relationship to a logarithmic one. Because of this, we need to find the sensor that works most adequately in the interferometer. Finally, although least important overall, is ease of use – if a sensor requires extremely limited conditions to operate, it is less favorable than a more generally usable sensor. For example, if one light sensor only works at a specific temperature, but another light sensor does not, we will select the less restricted light sensor. To this end, we investigated five sensors: the TEPT5600 phototransistor (Fig. 2, left), the TEMT6000 phototransistor (Fig. 2, center), the NSL 5152 photocell (Fig. 2, right), the TSL 2561 lux sensor, and the NSL 4960 photocell (both Fig. 3).



Figure 2. (from left to right) Vishay TEPT5600; Sparkfun TEMT6000; Luna Optoelectronics NSL5152

Figure 3. Luna Optoelectronics NSL4960 (left) and Adafruit TSL2561 (right)

For each sensor, we measured the maximum lux that could be detected without causing the sensor to saturate. This would help in determining if the light from the laser would need to be attenuated with polarizers for all further tests. To do this, we measured the voltage output of the sensors over distance for a known source of light intensity, and compared the voltage output to the light intensity at that distance. For each sensor, we would find the location where the voltage per lux switched from linear to logarithmic in growth (Fig. 4).

Figure 4. Finding the maximum Lux
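The linear-to-logarithmic transition sketched in Fig. 4 can also be located programmatically, by fitting the low-lux points and flagging the first measurement that departs from the fit. The following Python sketch uses made-up data and an assumed tolerance; it illustrates the procedure rather than reproducing our measurements:

import numpy as np

def saturation_onset(lux, volts, n_linear=5, tol=0.05):
    # Fit a line to the first n_linear (low-lux) points, then return the first
    # lux value whose voltage falls outside a tolerance band around that line
    lux, volts = np.asarray(lux, float), np.asarray(volts, float)
    slope, intercept = np.polyfit(lux[:n_linear], volts[:n_linear], 1)
    residual = np.abs(volts - (slope * lux + intercept))
    above = np.nonzero(residual > tol * np.abs(volts).max())[0]
    return lux[above[0]] if above.size else None

lux = [5, 10, 15, 20, 25, 30, 40, 60, 80]                # illustrative readings
v = [0.2, 0.4, 0.6, 0.8, 1.0, 1.18, 1.45, 1.8, 2.0]      # response rolls off at high lux
print("saturation onset near", saturation_onset(lux, v), "lux")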



Afterwards, each sensor was wired to a Hantek 6022BE oscilloscope to measure the output voltage and see whether the output voltage frequency matched a control frequency. The control frequency was created by placing a fan with a variable rotation frequency in front of a laser to ‘chop’ the beam, in order to create a setting similar to the moving interference fringes (Fig. 5).

Figure 5. Measuring the maximum frequency

The Hantek 6022BE is a 2-channel DC oscilloscope that can read changes in voltage on time scales between 4 nanoseconds and 5 kiloseconds. This makes it very helpful in that it is known to be able to accurately measure frequencies significantly higher than our sensors would normally be used for, having a maximum measurable frequency of 0.25 gigahertz.

Results

As shown in Table 1, we found that of the sensors we tested, the TEPT5600 and the TEMT6000 matched the first parameter and had a very high lux limit relative to the other sensors. The third parameter was a simple question of whether the greater difficulty of aiming the light at the sensor was worth a wider band of lux linearity. We chose to go with the TEMT6000, and simply use a light attenuator to reduce the lux of the fringes. It also gives us the benefit of having a higher maximum frequency without having the sensor saturate and output a constant voltage rather than a rising and falling voltage with time.

Temperature Sensor Testing

Next, since we were also measuring the temperature of the copper pipe over time to track thermal expansion, we needed to find a temperature probe. The two probes we tested were the Adafruit MAX31855 thermocouple and the Sparkfun TMP36 temperature sensor. However, our testing of these sensors was cut short when we discovered that the TMP36 was extremely sensitive to background ‘noise,’ including static electricity. This made calibrating the device significantly more difficult than its counterpart, as it could pick up even background information from electrical wires moving too close to it.



Table 1. Characteristics of sensors tested.

Sensor                           Voltage range    Maximum frequency    Lux saturation limit
Vishay TEPT5600                  0 V – 6 V        120 Hz               <75 lux
Sparkfun TEMT6000                0 V – 6 V        >200 Hz              <30 lux
Luna Optoelectronics NSL 5152    0 V – 100 V      40 Hz                <15 lux
Adafruit TSL 2561                2.7 V – 3.6 V    33 Hz                >200 lux (did not saturate with intensity)
Luna Optoelectronics NSL 4960    0 V – 150 V      40 Hz                <20 lux
Arduino Analog Input/Output      0 V – 5 V        –                    –

Microcontroller Testing

To measure the number of fringes using this sensor, we would need to find a way to record the voltage output by the sensor over time. We elected to utilize an Arduino Genuino Uno microcontroller to collect this data because of its ease of use. However, we had concerns that the microcontroller could run into similar issues as the camera – where individual fringes would be missed because the frequency of data acquisition was not high enough. The Arduino Genuino Uno is an open source microcontroller that operates using an ATmega328 chip, an 8-bit, 16 MHz CPU with 2 KB of RAM and 32 KB of flash memory for program storage [3]. It has 20 digital I/O pins, and several of these pins can be used as analog inputs to measure voltage, based on which sensor is connected to those pins. Ordinarily, it can operate at a clock speed of 16 MHz; however, from our investigation, the Arduino is slowed down in capturing data by communicating with our data storage devices, both the onboard microSD card and the USB based serial connection. This means that although the Arduino is operating at high frequencies, it is ultimately limited by the rate at which it can record data. To measure the maximum frequency the Arduino could record, we used the laser and fan setup from earlier and simply tested the data acquisition as compared to the internal clock. Specifically, for every data point the Arduino read, it would also output the exact amount of time, in milliseconds, that had expired since the Arduino was turned on. We also tested the Arduino in a variety of situations in an effort to find the optimal collection frequency. What we found was that the Arduino’s ability to read data was significantly slowed by the USB connection for the serial monitor functionality of the source code, with the Arduino’s processor
able to process data in individual microseconds but unable to store the data until it could be printed. However, our attempts to code the Arduino to simply print the data to an onboard SD card proved to be even less effective at data collection, as it slowed the processor down to a greater degree than the USB did when simply printing to the serial monitor. As such, we decided to go with using the serial monitor to print the data, rather than having the Arduino create a text file containing the data we collected. We then tested various baud rates for the Arduino, in an effort to find an optimal speed at which it could communicate through the USB without running out of memory for the data acquisition, and found that the optimal setting was a baud rate of 115200. In this configuration, we then tested the microcontroller with the laser and frequency-controlled fan setup we used to test the sensors, and found that the maximum frequency the Arduino could accurately process was between 100 and 110 Hz.

Testing the Arduino Device

After setting up the Michelson Interferometer, we attached one of the mirrors to a copper pipe and wrapped a heat rope around it uniformly. Utilizing the Arduino package, we affixed the light sensor at a point where the interference fringes from the Michelson Interferometer would travel past. We also attached the thermocouple probe to the copper pipe to measure the temperature over time, to track the thermal expansion. We then turned on the Arduino sensors and the heat rope and collected the data. The laser used is a 633 nm red helium-neon laser. The copper pipe was heated from 23.75 ◦C to 67.5 ◦C – a change in temperature of 43.75 ◦C. By using linear thermal expansion:
∆L = L0 A(∆T) (1)
where A is the expansion coefficient of the material, for a copper pipe of length 27.5 cm we find that the change in path length should be 0.1997 mm. In the data collected by the Arduino (Fig. 6), we found that there were 1798 individual local maxima. The change in path length, and by extension the change in distance of the mirror, is given by
d = mλ/2 (2)
and if we assume that each local maximum was a fringe, this would correspond to a 0.56 mm change in path length. The error was >180%.
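The comparison above can be reproduced with a few lines of Python. The copper expansion coefficient is an assumed handbook value (about 16.6e-6 per ◦C); everything else comes from the numbers quoted in this section:

ALPHA_CU = 16.6e-6        # 1/degC, assumed handbook value for copper
L0 = 0.275                # m, copper pipe length
DT = 67.5 - 23.75         # degC, temperature rise
WAVELENGTH = 633e-9       # m, He-Ne laser
MAXIMA = 1798             # local maxima counted in Fig. 6

dL_expected = L0 * ALPHA_CU * DT          # Eq. (1)
dL_from_count = MAXIMA * WAVELENGTH / 2   # Eq. (2), if every maximum is a fringe

print("expected   : %.4f mm" % (dL_expected * 1e3))
print("from count : %.4f mm" % (dL_from_count * 1e3))
print("error      : %.0f %%" % (100 * (dL_from_count - dL_expected) / dL_expected))

Running this reproduces the values above: roughly 0.1997 mm expected, about 0.57 mm from the raw maxima count, and an error above 180%.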

Conclusions

One potential source of error is our definition of a ‘fringe’ in the data. About 400 of the local maxima lie below 0.45, which would correspond to a ‘half fringe’ (a point where the signal is transitioning from an in-phase fringe to an out-of-phase fringe), meaning a large number of these data points are not ‘true’ fringes. This means that the Arduino fortunately did not miss any fringes, but instead simply has background noise to sift through, meaning the device works overall.



Figure 6. Test data run with Arduino

Further testing is required to find a good threshold value to limit the number of non-fringes, and more advanced techniques of analysis will be investigated.
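One simple form the proposed threshold analysis could take is to count only local maxima above a cutoff. The Python sketch below uses a synthetic signal; with real data, signal would be the voltage column logged by the Arduino, and the 0.45 cutoff is only a starting guess taken from the observation above:

import numpy as np

def count_fringes(signal, threshold=0.45):
    # Count local maxima strictly above the threshold in a 1-D voltage trace
    s = np.asarray(signal, float)
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > threshold)
    return int(is_peak.sum())

t = np.linspace(0, 1, 2000)
demo = 0.5 + 0.4 * np.sin(2 * np.pi * 40 * t)   # synthetic trace with 40 fringes
print(count_fringes(demo), "fringes above threshold")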

Future Plans

Despite the rocky start to the experiments, the technology has shown that it is viable for future experiments of other types, particularly for a classroom setting. We have already tested the Arduino further utilizing a gyroscope and an accelerometer, and found that both are viable methods for measuring the frequency and acceleration over time for pendulum and spring oscillations. Because the Arduino is open source in both hardware and software, it is inexpensive to use, easily developed for, reusable for different kinds of experiments, and a great tool in a teaching environment. Since coding for Arduinos is C based, it is relatively easy to pick up, and much of the supporting code has already been developed. This also has the added benefit of making the system very modular in design: whenever you need to measure something different with a single Arduino, you can simply attach the appropriate sensors and upload the corresponding code, and the Arduino will do the rest. Because of its high power relative to low cost, students and professors could utilize and experiment with the Arduinos in creative ways without fear of damaging very expensive equipment. Although our lab does have the equipment to create a high-speed data acquisition system through other means, we found the Arduino system significantly more user friendly, and the documentation for the system is more readily available should errors arise. In addition, the alternative equipment would be significantly more expensive to purchase should a part become damaged, for no noticeable return in ease of operation, ease of development, or ease of troubleshooting [2].



Figure 7. A physical Pendulum with Arduino Sensor

More specifically, there are labs currently being designed for the undergraduate setting utilizing the Arduino [4]. One such lab is to measure simple harmonic motion of a physical pendulum (Fig. 7). From the tests done, we have shown that the physical laws concerning harmonic motion can be demonstrated utilizing an accelerometer and a gyroscope – that we can measure acceleration over time and it matches the expected acceleration, and that the period is within 10% of the expected value.

Acknowledgments

This author wishes to thank the School of Science Summer Research Scholars Program for funding this project. He would also like to thank Alexander Karlis for his wealth of knowledge of interferometers and Drs. Bruce Liby and Veronique Lankar for their endless feedback.

References

[1] Shepherd, G.G. 2015. Optics, Atmospheric/Optical Remote Sensing Instruments. Reference Module in Earth Systems and Environmental Sciences. Pages 338-345
[2] Galeriu, C., Letson, C., Esper, G. May 2015. An Arduino Investigation of the RC Circuit. The Physics Teacher, Volume 53
[3] Sources of Arduino code: Thermocouple: Adafruit, c2005-2016, NYC (NY): Adafruit (2016). https://learn.adafruit.com/max31855-thermocouple-python-library; Light Sensor: Sparkfun, r2010-2016, Niwot (CO): Sparkfun [2016]. http://bildr.org/2011/06/temt6000_arduino
[4] Lankar, V. August 2016. Arduino Based Physics Labs. Leanpub.com. ISBN-13: 978-0-9896278-5-6


Tree branches as fractals
Christina Hibner∗
Department of Physics, Manhattan College

Abstract. A fractal is a simple infinitely repeating pattern resulting in very complex shapes. Samples of terminal branches from eight tree species were studied to determine if their geometries resemble fractals. If tree branches have fractal properties, then branches are self-similar. Tree branch samples were taken from eight tree species on the Manhattan College campus. Three accepted procedures were used to determine if branch terminals had fractal qualities: box counting, used with the node addition method and the “Y” method (or self-similarity method [1]), and the Newman method. A minimum of six analytical comparisons were conducted on each branch for this study, depending on the unique structure of the branch. With box counting techniques, fractal dimension values ranging from 1.15 to 1.42 were obtained for whole branches. Similar fractal dimension values were obtained from simple bifurcation-like terminals and samples when additional side branches were added. Values that hover around 1.2 for the “Y” method (self-similarity method) procedure show that the branches have fractal properties. Newman analysis showed very poor results. We conclude that terminal branches of trees show fractal qualities.

Introduction

A fractal is a simple infinitely repeating pattern resulting in very complex shapes [2]. The word ‘fractal’ comes from the Latin word fractus meaning ‘broken’ and was first used by Benoit Mandelbrot in 1975 in a paper entitled “The Fractal Geometry of Nature”. Mandelbrot defined a fractal as a rough geometric shape that can be split into repeating parts that are identical, just smaller. The words “self-similarity” also describe this phenomenon [1]. These shapes are commonly found in nature and are as infinite as reality will allow. For example, a binary tree branch starts at one branch then splits to two, then four, eight, sixteen, thirty-two, and so on; however, environmental factors and the limits of biology prevent trees from being true fractals. These structures have a unique geometrical property in that they are self-similar [3]. Many people refer to Da Vinci’s rule of the trees when referring to fractals. In a true fractal, there is a bifurcation in which a stem branches into two branches of equal dimensions. Of course, this simple bifurcation does not occur in any plant except lower plants of the genus Psilotum. For higher plants, the main stem exhibits apical dominance, in which the main stem gets more nutrients and grows more than branches. Branches arise from axillary buds that are initially dormant and only grow after the main stem continues to grow. So, the bifurcation described by Da Vinci does not occur in any plant species except Psilotum (Fig. 1). Species of Psilotum are from an old group of plants, most of which are extinct [4].

Methods and Materials

The tree species sampled were from the Manhattan College campus, Bronx, New York, during the summer of 2015. The species were Acer palmatum, Cornus florida, Cornus sericea, Lagerstroemia indica, Crataegus monogyna, Morus rubra, Platanus occidentalis, and Zelkova serrata (tropicos.org).

∗ Research mentored by Lance Evans, Ph.D.



Figure 1. Image of a plant of genus Psilotum that shows fractal-like bifurcation.

We worked from an archive that contained photographic images; x, y, z coordinates that describe three-dimensional structures; and lengths of all branch segments. For each photographic image, a weighted line drawing was made using an open source program called paint.NET [5]. Exact lengths and thicknesses of each branch segment were traced using vector programming. All background colors were removed, so the only pixels in the document were those of the branch drawing. Line drawings were analyzed using three methods (Table 1).

Table 1. The three methods of analysis and a brief description of the process involved with each method.

(1) Node addition: Uses ImageJ FracLac [6] to analyze branch samples that were segmented at their nodes down the main branch, then cumulatively adds branches until the entire branch is analyzed. Each node added yields a new data point. These results were then graphed according to their node number and their fractal dimension.

(2) Self-similarity: Uses ImageJ FracLac [6] to test for self-similarity by isolating the parts of the branch drawings that are a simple fork resembling the letter “Y.” These images were then run through FracLac. Next, a single branch was added to the “Y” branch, renamed “Y+1,” and run through FracLac. Another branch was added, resulting in “Y+2,” and so on. These results were then graphed according to their Y+n number and their fractal dimension.

(3) Newman: Calculates the fractal dimension according to predetermined equations. Branches were grouped and labeled according to their order, grouping together every branch that was first order, second order, and third order. The average length of each order branch was recorded as well as the number of branches in each order.

To analyze these patterns, the fractal dimension of the whole branch was found and compared to the fractal dimension of parts of the branch. The fractal dimension [7] is an index of the complexity of a fractal pattern, obtained by comparing how the detail of the pattern changes with the scale at which it is measured using box counting. In box counting, a grid of boxes of increasingly small side lengths is superimposed on the pattern, and the number of boxes that overlap the pattern is counted. It is a widely accepted method for fractal analysis [8]. Box counting is a way
to measure the fractal at various scales and see how these measurements behave when measured at increasingly fine scales (Fig. 2). Think of it like pixels; large pixels will not show the full detail of the pattern, but small pixels will show more detail. As the box size approaches zero, the infinite nature of the fractal pattern will be measured. Using this method, we can determine how much space the pattern takes up between one and two dimensions. Box counting was used in the node addition method as well as the “Y” method (self-similarity method [1]), but not the Newman method.

Figure 2. Visual representation of box-counting performed on a Koch fractal [1].
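For readers who want to see the mechanics that FracLac automates, here is a compact Python sketch of the box-counting estimate: overlay grids of shrinking box size on a binary branch image, count occupied boxes, and take the slope of log(count) against log(1/size). It is a generic illustration, not the FracLac implementation:

import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    # Estimate the fractal dimension of a 2-D boolean image by box counting
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())   # occupied boxes at this scale
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a filled square: its dimension should come out near 2
square = np.ones((256, 256), dtype=bool)
print("D(square) = %.2f" % box_count_dimension(square))

For a branch drawing, img would be the thresholded line-drawing pixels, and the slope lands between 1 and 2, as in Tables 3 and 4.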

Methods 1 and 2 used ImageJ FracLac [6] to obtain fractal dimensions. ImageJ FracLac uses box counting to analyze fractal patterns. FracLac measures how much space a pattern takes up by using increasingly smaller box side lengths, counting the number of boxes that overlap the pattern for each box size, and then taking the ratio. The fractal dimension is the slope of the logarithm of the box count against the logarithm of the box size. For node addition analysis, samples were segmented at their nodes down the main branch and nodes were cumulatively added for each image analyzed. Starting with just the terminal branch up to the first node, the image was run through FracLac, which calculates a fractal dimension number. Then we added everything between the first node and the second node and analyzed that image in FracLac, which calculated a new fractal dimension number. We continued in this fashion, cumulatively adding branches until the entire branch was analyzed (Fig. 3). Each node added yields a new data point. These results were then graphed according to their node number and their fractal dimension. Method 2, named the “Y” method, involves tests of self-similarity using ImageJ FracLac as well. Beginning at individual stem terminals, the first test included only terminals and one secondary branch (resembling the letter “Y”). All of the outer branches resembling the letter “Y” were run through ImageJ FracLac to obtain fractal dimension numbers. The next step included a second secondary branch (designated “Y+1”). One branch down the stem was added and that new image was run through the program to obtain a new fractal dimension number.
Figure 3. Diagram of a sycamore branch that shows eleven individual portions that were used to determine fractal dimension parameters. The portions were added cumulatively; portion 2 was added to portion 1, and so on, ending with the entire branch analyzed.

Another single branch was added to the previous “Y+1” image and was titled “Y+2,” and so on in this fashion (Fig. 4). These results were then graphed according to their “Y+n” number and their fractal dimension number.

Figure 4. Diagram of a Lagerstroemia indica branch with isolated Y branches (yellow), with one branch added (Y+1: yellow, orange), with two branches added (Y+2: yellow, orange, pink), and with three branches added (Y+3: yellow, orange, pink, purple).

Figure 5. When two first-order branches converge, they turn into a second-order branch. When two second-order branches converge, they turn into a third-order branch and so on. However, when a lower-order branch runs into a higher-order branch, the following branch takes the higher-order number.



Method 3 used the method developed by Newman [9], in which the fractal dimension is calculated from the outside in, using ratios of the number of exterior branches (first-order branches) and their lengths versus the number of second-order branches and their lengths. Table 2 contains the equations illustrating this. When two first-order branches converge, they turn into a second-order branch. When two second-order branches converge, they turn into a third-order branch, and so on. However, when a lower-order branch runs into a higher-order branch, the following branch takes the higher-order number (Fig. 5). First, branches were grouped and labeled according to their order: all the first-order branches were grouped together, all the second-order branches were grouped, and all the third-order branches were grouped. The average length of each order branch was then recorded, as well as the number of branches in each category. The equations call for two ratios, the bifurcation ratio (the number of first-order branches over the number of second-order branches) and the length-order ratio (the average length of second-order branches over the average length of first-order branches). The fractal dimension is the ratio of the log of the bifurcation ratio over the log of the length-order ratio.

Table 2. Newman analysis equations, where N is the number of branches of the indicated order and r is the mean length of branches of the indicated order. D1-2 is the fractal dimension calculated from the first-order and second-order branches; D2-3 is the fractal dimension calculated from the second-order and third-order branches:

D1-2 = log(N1/N2) / log(r2/r1)        D2-3 = log(N2/N3) / log(r3/r2)
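For concreteness, the Table 2 calculation can be written out as a short Python function; the branch counts and mean lengths in the example call are made-up numbers, not data from this study:

import math

def newman_dimension(n_lower, n_higher, len_lower, len_higher):
    # D from counts (N) and mean lengths (r) of two adjacent branch orders
    bifurcation_ratio = n_lower / n_higher        # e.g. N1 / N2
    length_order_ratio = len_higher / len_lower   # e.g. r2 / r1
    return math.log(bifurcation_ratio) / math.log(length_order_ratio)

# e.g. 40 first-order branches of mean length 5 cm feeding 12 second-order
# branches of mean length 12 cm (illustrative values only)
print("D(1-2) = %.2f" % newman_dimension(40, 12, 5.0, 12.0))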

Results As a general trend for the box counting method, as more of the branch is added to the analyzed image the fractal dimension number calculated increased linearly (Table 3). This is to be expected because as the branch becomes more and more complete, it approaches the total fractal dimension number for that species. It was observed that each species had a unique fractal dimension number. The fractal dimension increases linearly as the number of side-branches increased.


226

The Manhattan Scientist, Series B, Volume 3 (2016)

Hibner

Table 3. Node addition method. Fractal dimensions of each species as sections down the main branch are cumulatively added (columns: node number). The trend generally increases steadily. A dash indicates that the branch had no further nodes.

Species                        1     1-2   1-3   1-4   1-5   1-6   1-7   1-8   1-9   1-10
Acer palmatum (JAP)            1.26  1.26  1.31  1.32  1.30  1.32  -     -     -     -
Cornus florida (DOG)           1.27  1.24  1.24  1.33  1.29  1.33  1.42  -     -     -
Cornus sericea (OSI)           1.17  1.22  1.20  1.23  1.24  1.26  1.29  1.29  -     -
Lagerstroemia indica (CRA)     1.30  1.32  1.26  1.29  1.30  1.31  1.32  1.33  1.33  1.35
Crataegus monogyna (HAW)       1.16  1.16  1.18  1.17  1.19  1.17  1.15  1.15  -     -
Morus rubra (RED)              1.16  1.17  1.18  1.20  1.19  1.20  1.22  1.22  1.23  1.25
Platanus occidentalis (SYC)    1.27  1.27  1.29  1.30  1.31  1.29  1.26  1.27  1.28  1.29
Zelkova serrata (ZEL)          1.17  1.18  1.25  1.25  1.28  1.32  -     -     -     -

Considering more side-branches makes the branch more complex, and thus the linear addition of branches eventually approaches the fractal dimension for that species. As expected, species with more side-branches had higher fractal dimension values than species with fewer side-branches. As nodes are added to the previous branch structure, the fractal dimension increases linearly. It was noted that branches with more sprouts and more higher-order branches, such as the dogwood, had a higher fractal dimension, while less complex species with simple structures, such as the hawthorn, had a lower fractal dimension (Fig. 6).

Figure 6. A plot of the best fit lines of each species. As more of the branch is added to the analyzed image, the fractal dimension increases linearly as the number of side-branches increases.



The self-similarity approach (Y method [1]) showed how structures can be the same, and therefore have the same fractal dimension number, regardless of species. Considering additional side-branches eventually gives a relatively constant fractal value for each species. This tells us that there is a stable fractal dimension that comes with a consistent structure, even across species. For the self-similarity method, all the Y+n branches across species have a similar structure, and therefore their fractal dimension numbers should all be the same no matter which numbered branch was added (Table 4). The results obtained support this theory. All fractal dimension numbers are around 1.2, which is the fractal dimension of that pattern. This relatively constant fractal dimension trend confirms the idea of self-similarity; however, instead of holding only within a species, this type of pattern relates branches across species.

Table 4. Self-similarity method. Fractal dimensions of whole branches and Y-shaped branch segments of eight plant species. The trend is relatively constant.

Species                        Entire  Y     Y+1   Y+2   Y+3   Y+4
Acer palmatum (JAP)            1.32    1.26  1.22  1.22  1.26  1.23
Cornus florida (DOG)           1.42    1.26  1.27  1.23  1.24  1.26
Cornus sericea (OSI)           1.29    1.22  1.25  1.22  1.23  1.22
Lagerstroemia indica (CRA)     1.35    1.27  1.25  1.21  1.21  1.20
Crataegus monogyna (HAW)       1.15    1.17  1.17  1.17  1.17  1.18
Morus rubra (RED)              1.26    1.25  1.22  1.20  1.22  1.19
Platanus occidentalis (SYC)    1.30    1.29  1.24  1.27  1.28  1.29
Zelkova serrata (ZEL)          1.32    1.26  1.17  1.17  1.18  1.18

The results obtained from the Newman method mathematics were not as expected, nor did they compare well with the results from the Newman paper (Table 2). The Newman method may work for more binary, computer-generated trees, but it is inconsistent with the other data obtained and therefore does not work well with natural branches.

Discussion

No articles found have reported quantitative results on comparing the self-similarity of tree branches in two dimensions. Most papers found discuss the mathematics of the construction of computer-generated fractal trees and tree branch equation modeling in three dimensions, which is similar to this research but is difficult to compare to a quantitative test of biological self-similarity. Nature will never be perfect, so this data should be taken with a grain of salt. Possible errors may be due to the branches being broken or damaged by environmental factors such as storms, animals, insufficient sunlight, or pruning. This may be why some fractal dimension numbers had a higher error or standard deviation than others. Another error might have been in the way the previous research party measured the x, y, z coordinates: they used the projection of the shadow of the branch to get the coordinates, which were then used to calculate the lengths.
A main error to be aware of is that we are taking a three-dimensional object and compressing it into two dimensions when we take a picture. These were the pictures that were used to trace the weighted line drawings, and they therefore directly affect the results. This research may be used in the design of solar panels. Tree branches are meant to hold leaves in a way that collects the most sunlight. Patterns of trees with a high fractal dimension, such as the zelkova or the dogwood, may be a model for solar panel structure design.

Acknowledgements

The author would like to thank Dr. Lance Evans, Dr. Rani Roy, Dr. Bruce Liby, Joe Brucculeri and Jesse Jehan, Research Scholars, and Manhattan College. The author is grateful to the Linda and Dennis Fenton ’73 Endowed Biology Research Fund for financial support of this research.

References

[1] Fractal Dimension - Koch Snowflake. Math.ubc.ca. http://www.math.ubc.ca/∼cass/courses/m308-03b/projects-03b/skinner/ex-dimension-koch snowflake.htm. 2016
[2] Falconer, Kenneth. Fractals: A Very Short Introduction. New York: Oxford University Press. 2013
[3] Hibner, Christina. “Fractal patterns of diffraction and interference patterns.” Manhattan Scientist, Ser. B, Vol. 2, p. 181. 2015
[4] Schulte, Paul J., Arthur C. Gibson, and Park S. Nobel. “Xylem Anatomy and Hydraulic Conductance of Psilotum nudum.” American Journal of Botany, Vol. 74, No. 9 (1987), pp. 1438-1445. Botanical Society of America, Inc.
[5] “Paint.NET - Free Software for Digital Photo Editing.” http://www.getpaint.net/index.html. Accessed 21 July 2016
[6] “FracLac for ImageJ.” https://imagej.nih.gov/ij/plugins/fraclac/FLHelp/Introduction.htm. Accessed 11 July 2016
[7] “Fractal Dimension.” Wikipedia. Wikimedia Foundation. https://en.wikipedia.org/wiki/Fractal_dimension. 2016
[8] “Fractal Dimension.” Fractal Foundation Online Course. http://fractalfoundation.org/OFC/OFC-10-3.html. Accessed 13 July 2016
[9] Newman, W. I. “Fractal trees with side branching.” Fractals, Vol. 5, No. 4 (1997), 603-614. World Scientific Publishing Co. https://www.math.purdue.edu/∼agabriel/tree.pdf


Development of the New Small Wheel for the ATLAS experiment – Micromegas
Alexander Karlis∗
Department of Physics, Manhattan College

Abstract. At the European Organization for Nuclear Research (CERN), in order to benefit from the expected high luminosity performance that will be provided by the Phase-I upgraded LHC, the first station of the ATLAS muon end-cap system (Small Wheel, SW) will need to be replaced. The New Small Wheel (NSW) will have to operate in a high background radiation region (up to 15 kHz/cm²) while reconstructing muon tracks with high precision as well as furnishing information for the Level-1 trigger. These performance criteria are demanding. In particular, the precision reconstruction of tracks for offline analysis requires a spatial resolution of about 100 µm, and the Level-1 trigger track segments have to be reconstructed online with an angular resolution of approximately 1 mrad. The NSW will have two chamber technologies, one primarily devoted to the Level-1 trigger function (small-strip Thin Gap Chambers, sTGC) and one dedicated to precision tracking (Micromegas detectors, MM). The sTGC are primarily deployed for triggering given their single bunch crossing identification capability. The MM detectors have exceptional precision tracking capabilities due to their small gap (5 mm) and strip pitch (approximately 0.5 mm). Such precision is crucial to maintain the current ATLAS muon momentum resolution in the high background environment of the upgraded LHC. The MM chambers can, at the same time, confirm the existence of track segments found by the muon end-cap middle station (Big Wheels) online. The sTGC also has the ability to measure offline muon tracks with good precision, so the sTGC-MM chamber technology combination forms a fully redundant detector system for triggering and tracking, both for online and offline functions. This detector combination has been designed to also provide excellent performance for the eventual High Luminosity LHC upgrade [1].

CERN Background

The Large Hadron Collider (LHC) at CERN is the world’s largest and most powerful particle accelerator. The collider is twenty-seven kilometers in circumference and reaches depths of up to one hundred seventy-five meters. Two beams of protons are sped up to 0.999999 times the speed of light and collide at roughly 13 TeV. Across its four research sites (LHCb, ALICE, ATLAS, and CMS), 600 million collisions are recorded for analysis. The LHC has initial and operational costs that exceed $10 billion, with a magnetic field rated at nine teslas, an ultra-high vacuum of 10⁻¹³ atm, and a large refrigeration system operating at temperatures as low as 1.9 Kelvin. Overall, the LHC has over 7,000 students and scientists, 500 universities, and 80 countries working in collaboration on cutting-edge particle physics research. During 2018, there will be a long shut-down in order to make significant upgrades to the LHC and the ATLAS experiment. In these upgrades, the New Small Wheel (NSW) with the Micromegas technology will be implemented (Table 1, Fig. 1).

∗Research mentored by Rostislav Konoplich, Ph.D.



Table 1. An Approximate Timeline of the Scheduled LHC & ATLAS upgrades

Figure 1. The location of the current Small Wheels and where the New Small Wheels will reside.

Motivations for Upgrades
One of the main concerns regarding the current Small Wheel in the end-caps is background radiation: the NSW will have to operate in a high background radiation region while reconstructing muon tracks with high precision and furnishing information for the Level-1 trigger. The other concern lies in the muon trigger system itself. Background particles, such as protons, produce fake triggers by activating the end-cap trigger chambers at angles nearly identical to those of the desired muons. Both issues impose serious limitations on ATLAS performance beyond the design luminosity. In order to solve the two problems together, ATLAS proposes to replace the present muon Small Wheels with the New Small Wheels. The NSW is a set of precision tracking and trigger detectors able to work at high rates with excellent real-time spatial and time resolution. These detectors can provide the muon Level-1 trigger system with online track segments of good angular resolution, confirming that muon tracks originate from the interaction point [1].

Micromegas
A. Detector Technology
The New Small Wheel will introduce two new types of detector technology: sTGC (small-strip Thin Gap Chambers) and Micromegas (micro-mesh gaseous structure), denoted by MM or µM. MM detectors consist of a planar (drift) electrode, a gas gap a few millimeters thick acting as conversion and drift region, and a thin metallic mesh typically 100-150 µm from the readout electrode. Charged particles traversing the drift space ionize the gas; the electrons liberated by the ionization process drift towards the mesh. With an electric field in the amplification region 50-100 times stronger than the drift field, the mesh is transparent to more than 95% of the electrons. The electron avalanche takes place in the thin amplification region, immediately above the readout electrode [1]. The weak point of the original MM design was its vulnerability to sparking. Sparks occur when the total number of electrons in the avalanche reaches around 10⁷, whereas high detection efficiency for minimum-ionizing muons calls for gas amplification factors of the order of 10⁴. Sparking is not desired during data acquisition: sparks may damage the detector and readout electronics and/or lead to large dead times as a result of a high-voltage breakdown. For the MM detectors to be installed on the New Small Wheel, a spark protection system has been developed. By adding a layer of resistive strips on top of a thin insulator directly above the readout electrode, the MMs become spark-insensitive (Fig. 2). The readout electrode is no longer directly exposed to the charge created in the amplification region; instead, the signals are capacitively coupled to it.
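To see why sparking is tied to these two numbers, note that the avalanche size is roughly the number of primary ionization electrons multiplied by the gas gain. The toy estimate below (a sketch with illustrative primary-ionization counts, not the collaboration's simulation) shows why a minimum-ionizing muon stays safely below the ~10⁷ spark threshold while a heavily ionizing background particle can exceed it:

    # Toy avalanche-size estimate: primary electrons times gas gain,
    # compared to the approximate spark (Raether-like) limit of ~1e7 electrons.
    RAETHER_LIMIT = 1e7   # avalanche size at which sparking sets in (approx.)
    GAS_GAIN = 1e4        # typical MM amplification factor quoted in the text

    def avalanche_size(n_primary, gain=GAS_GAIN):
        """Total electrons after amplification of n_primary ionization electrons."""
        return n_primary * gain

    # Illustrative (assumed) primary-ionization counts: a minimum-ionizing muon
    # liberates tens of electrons in a few mm of gas; a heavily ionizing
    # nuclear fragment can liberate thousands.
    for label, n0 in [("muon (MIP)", 50), ("highly ionizing particle", 5000)]:
        n = avalanche_size(n0)
        status = "SPARK RISK" if n >= RAETHER_LIMIT else "ok"
        print(f"{label:>26}: {n:.1e} electrons -> {status}")

This is the logic behind the resistive-strip protection: the gain needed for muon efficiency is harmless by itself, but rare, heavily ionizing backgrounds push the avalanche over the threshold, so the charge must be kept from reaching the readout electrode directly.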

Figure 2. (Left) Structure of the MM chamber, with the mesh, pillars, readout electrodes, and PCB. (Right) Interaction of a charged particle with the gas volume.



B. Micromegas Layout
As illustrated in Fig. 3, each multiplet contains four active layers, grouped into two pairs; in each pair the detectors are mounted back-to-back. Each multilayer comprises four sTGC and four Micromegas detector planes (shown in Fig. 4). The sTGC are primarily deployed for triggering, given their single-bunch-crossing identification capability. The detectors are arranged in such a way (sTGC – MM – MM – sTGC) as to maximize the distance between the sTGCs of the two multilayers. The MM detectors have exceptional precision tracking capabilities due to their small gap (5 mm) and strip pitch (0.5 mm).
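The roughly 100 µm spatial resolution quoted in the abstract is consistent with this strip pitch. A minimal sketch (my own arithmetic, assuming simple binary hit/no-hit strip readout, for which the resolution is pitch/√12):

    # Binary-readout resolution for a 0.5 mm strip pitch: pitch / sqrt(12).
    import math

    pitch_mm = 0.5
    binary_res_um = pitch_mm * 1000 / math.sqrt(12)
    print(f"binary-readout resolution ~ {binary_res_um:.0f} um")  # ~144 um

Charge sharing across neighboring strips allows centroid (cluster) fitting, which typically improves on this binary figure and is how a target of order 100 µm can be reached.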

Figure 3. Arrangement of the detectors in a sector

Figure 4. Multilayer design

Vacuum Table Development
In order to make the Micromegas detectors accurately and precisely, it was necessary to develop a tool that would provide a flat reference surface for combining the individual Micromegas components. The tool used to achieve this is called a vacuum table. The table is constructed from a wooden frame, a carbon fiber table top, and a honeycomb aluminum mesh inside the table. The main adhesive used was an industrial-strength glue.



The following list of steps and associated pictures (Fig. 5) describes the process of creating the vacuum tables.
1. We used a finely cut granite table whose top is flat to within a few microns. The granite table had to be cleaned three times with alcohol cleaner and finely scraped with a sharp blade to remove any imperfections.
2. After the table was cleaned, the next step was to prepare the vacuum table top's gel coat. A strip of aluminum was placed along both long sides of the perimeter of the desired rectangular shape in order to maintain a uniform height of the gel coat.
3. A straight-edged bar was used to distribute the gel coat evenly across the surface. The two associated pictures show the gel coat being spread (left) and the final smoothed product (right).
4. After the gel coat had dried, eight layers of carbon fiber weave were cut and laid down.
5. The carbon fiber top was created with a home-made vacuum bag fitted with sixteen valves to allow the epoxy to spread within the bag. Vacuum bags were easy to make and the most cost-effective way to form the carbon fiber in this case. Once the vacuum pump was turned on, the sealed bag compressed and simultaneously drew the epoxy through, spreading it across the surface of the carbon fiber.
6. The three pictures show the finished table top. In the image on the right, the wooden frame is already glued to the top; the edges are cut finely and the top surface is polished.
7. The key property of these tables is flatness, so we tested the flatness with an appropriate measuring device (image on the left); the image on the right shows the vacuum device connected to the side of the table. A sketch of such a flatness check is given after this list.
8. With such precise measurements, some issues arise. A main one occurs when the vacuum is turned on and divots appear in the table, which is counterproductive to the goal of flatness. To fix these, the table is flipped over and small holes are drilled on the opposite side; glue is injected into the holes to secure whatever defect arose inside the table.
The last pictures (9) of Fig. 5 demonstrate the process by which the PCBs are created for the Micromegas detector. As one can see in the picture on the left, the brown PCB is laid on one vacuum table, and a honeycomb mesh is then laid on top to separate one PCB from another. The picture on the right shows the two vacuum tables coming together to adhere the two PCBs, the assembly being almost perfectly flat.
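The flatness test in step 7 amounts to comparing height readings against a best-fit plane. The following sketch is a hypothetical version of such a check (the actual metrology hardware and software used at CERN were different): it fits a plane z = ax + by + c to gauge readings by least squares and reports the worst deviation in microns.

    # Hypothetical flatness check: least-squares plane fit to (x, y, z) readings.
    import numpy as np

    def flatness(points_mm):
        """Fit z = a*x + b*y + c to (x, y, z) readings (in mm) and return
        the maximum absolute deviation from the plane, in microns."""
        pts = np.asarray(points_mm, dtype=float)
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
        residuals = pts[:, 2] - A @ coeffs
        return np.max(np.abs(residuals)) * 1000.0  # mm -> um

    # Hypothetical dial-gauge readings on a 3x3 grid over the table top,
    # with ~1 um of measurement noise:
    readings = [(x, y, 0.001 * np.random.randn())
                for x in (0, 600, 1200) for y in (0, 400, 800)]
    print(f"max deviation from plane: {flatness(readings):.1f} um")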

Conclusion
During LS2, the current Small Wheel in the ATLAS detector will be replaced by the New Small Wheel. The NSW will introduce new detector technology, including a micro-mesh gaseous detector called the Micromegas. The Micromegas chambers will allow more precise tracking measurements in two coordinates while also providing additional trigger information. For my part, I helped create the vacuum tables, devices of extreme precision, that facilitate the adhesion process between two PCBs.


Figure 5. Process steps (1)–(9) through which the vacuum tables and PCBs are created

Acknowledgement
I would first like to thank Dr. Konoplich for bringing me along on his trip to Geneva and enabling me to learn from such an amazing experience. I would also like to thank Givi Sekhniaidze for being a great mentor and supervisor while I worked on this project, as well as Alexi Gongadze. Last, but not least, I would like to thank the CERN community for being so welcoming and helpful in my quest to become a physicist.

References
[1] T. Kawamoto et al., “New Small Wheel Technical Design Report,” CERN-LHCC-2013-006; ATLAS-TDR-020 (2013)




