Scientia - Fall 2014


Scientia

Autumn 2014 Issue IV


The image shows a simulated collision event producing dark matter of mass 1 GeV (vector coupling to up quarks) with high MR and high R2. A high MR value indicates high masses of the pair-produced particles and of the weakly interacting particles from their decays. A high R2 value indicates a large transverse momentum imbalance, characteristic of pair-produced particles decaying to weakly interacting particles. The event is shown in the transverse plane of the detector, as if looking along the beam. Solid red bars represent energy deposits in the electromagnetic calorimeter; solid blue bars, energy deposits in the hadronic calorimeter; green lines, charged particle tracks in the silicon tracker. The large red arrow represents the magnitude and direction of the Missing Transverse Energy, and the large yellow bars show the magnitudes and directions of jets.
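For orientation, the razor variables mentioned in the caption have standard definitions in the CMS razor literature; they are reproduced here as a sketch (using two "megajets" j1 and j2 into which the event is clustered), not quoted from the article in this issue:

\[
M_R = \sqrt{\left(|\vec{p}_{j_1}| + |\vec{p}_{j_2}|\right)^2 - \left(p_z^{j_1} + p_z^{j_2}\right)^2}
\]

\[
M_T^R = \sqrt{\frac{E_T^{\mathrm{miss}}\left(p_T^{j_1} + p_T^{j_2}\right) - \vec{E}_T^{\mathrm{miss}} \cdot \left(\vec{p}_T^{j_1} + \vec{p}_T^{j_2}\right)}{2}},
\qquad
R^2 = \left(\frac{M_T^R}{M_R}\right)^2
\]

Heavy pair-produced particles push \(M_R\) toward their mass scale, while genuine missing energy from weakly interacting decay products pushes \(R^2\) up; requiring both large suppresses ordinary QCD background.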

The cover of this issue of Scientia is based on a model of a proton-proton collision that is expected to produce particle dark matter. Dark matter is hypothesized to make up approximately 85% of the matter in our universe, yet its properties and interactions with normal matter are not yet understood. The detection of dark matter particles is particularly challenging, since they seem to interact only weakly with ordinary matter and could therefore pass through detectors without leaving a trace. Currently, scientists in the Compact Muon Solenoid (CMS) collaboration at CERN’s Large Hadron Collider are seeking to identify missing energy that could be accounted for by such particles. The cover image, created by Scientia Managing Editor Irene Zhang, is based on a model produced by CMS that shows the tracks of detected particles as well as the magnitude and direction of the missing transverse energy that is expected to indicate the presence of dark matter. Dark matter thus represents one of the fundamental mysteries of particle physics and one of the open frontiers of modern science. As you flip through the pages of this issue of Scientia, we hope that you will become intrigued by this and many other such frontiers. One of the articles in this issue, “Dark Matter Search at the CMS Detector using the Razor Kinematic Variables” by Natalie Harrison, deals with analyzing the data produced by collisions such as the one represented on the cover. Other research articles, abstracts, and interviews address such diverse and fascinating topics as HIV infection, the genetics of depression, novel catalysts, and ancient CO cycles. The Triple Helix at the University of Chicago, and the Scientia staff in particular, strive to create a thought-provoking outlet for a cross section of innovative undergraduate research. We hope that you enjoy it.

Sincerely,
Luizetta Navrazhnykh
Co-Editor-in-Chief, Scientia

Contents

Scientia Inquiries: Interview with Dr. Albert Colman, by Erin Fuller

Abstracts from the Chicago Area Undergraduate Research Symposium, featuring student researchers from the Illinois Institute of Technology, the University of Illinois at Chicago, the University of Chicago, Northwestern University, Loyola University Chicago, and DePaul University

Research articles by Jawad Arshad and Natalie Harrison

About Scientia

Dear Reader,

The Triple Helix, Inc. (TTH) at the University of Chicago is proud to present the newest issue of Scientia, our most recent venture as a chapter. Scientia, with its team of committed editors, showcases the highest quality original undergraduate research. Additionally, we provide a resource for students interested in pursuing research in the future.

This issue marks our first step in expanding our journal, which was made possible by a collaboration with CAURS, the Chicago Area Undergraduate Research Symposium. As you browse through the pages of this issue, particularly the Abstracts section, make note of the studies conducted by talented student-researchers at other Chicago schools. We are excited to present this issue to you, both as an example of excellent work and as a resource for our growing research community. We hope that with this expansion, a new and diverse world of research will be available to students across the city, and we welcome you to use our journal to explore this world freely.

Scientia is currently seeking writers and editors to contribute to future issues. We encourage all those with interest to get involved by contacting the members of our team.

Sincerely,
Khatcher O. Margossian
Co-Editor-in-Chief, Scientia
Chief Financial Officer, The Triple Helix, Inc.




Scientia Inquiries

About The Triple Helix

at The University of Chicago

The Triple Helix, Inc. (TTH) is the world’s largest completely student-run organization dedicated to evaluating the true impact of historical and modern advances in science. Of TTH’s more than 25 chapters worldwide, the University of Chicago chapter is one of the largest and most active. We, TTH at The University of Chicago, are extremely proud of our chapter’s accomplishments, which are perhaps best summed up by our title as the William J. Michel Registered Student Organization of the Year in 2012. We continue to work closely with an ever-increasing number of faculty members, and have notably acquired the generous support of the founding Pritzker Director of Chicago’s Institute for Molecular Engineering, Matthew Tirrell, and his department. We have expanded our local organization so that now, we can confidently say that there is a place here for each and every one of our fellow college students. We have consciously and dramatically increased the size of our production, marketing, and events teams, and have watched our group of talented writers and editors grow at unprecedented levels. In fact, we have further expanded the intellectual diversity of our chapter, with TTH members having declared more than 30 of the University’s different majors and minors. Finally, we are absolutely thrilled to present the newest issue of our journal of original undergraduate research, Scientia.

Over the years, TTH UChicago members have found themselves in research positions around campus, taking advantage of the hundreds of opportunities we are lucky to have here. We found ourselves wishing, however, that there was an outlet where we could reach out to our peers on campus, to share our projects and project ideas, and to hear or read about their work as well. So we decided to create that outlet ourselves. This was the impetus for our journal, Scientia, which is the culmination of many of our members’ hard work. We hope you enjoy reading the fruits of our labor!

Jawad Arshad and Melissa Cheng
Co-Presidents of The Triple Helix, Inc.


Interview with Dr. Albert Colman
Erin Fuller

Heat-powered, the Earth swirls with forces. Continents shift imperceptibly or with sudden violence, sloshing oceans in their basins and making mountains flinch, driven by the sluggish stirring of the mantle around the metal core. The do-si-do of weather patterns, ocean currents, geophysical events, and biological processes shape and are shaped by each other, and their inertia influences other forces, such as the famous chemical cycles. One field that attempts to understand these forces is biogeochemistry. It is not a topic that often surfaces in the media, though it draws upon knowledge honed in nearly every physical science field in order to try to understand, using various chemical and physical clues that persist in the geologic record, what the Earth was like in the past.

Images by Dr. Albert Colman, edited by Tucker S. Elliott. Karymsky Volcano is the most active volcano on the Kamchatka Peninsula, with intervals during which it erupts multiple times per day. The flight to Uzon provides a close-up view of Karymsky’s ash-covered flanks and restless summit.

Albert Colman was lured into the field by a fascination with interglacial climate cycles. From there, it was a short hop to the next iceberg. He was offered a job as a lab assistant in an atmospheric chemistry lab and began studying the sources of carbon dioxide and nitrogen gases within the forest ecosystem and the atmosphere to relate them to environmental parameters, such as when trees leafed in the springtime. Presently he is an assistant professor in the Department of Geophysical Sciences at the University of Chicago. “It helped to fill in my broader awareness of how to connect studies on modern processes that influence the chemistry environment in a natural sense,” he says, “[and] how might one try to tease out these relationships when looking at the past when you don’t have the capability of putting sensors into the forests fifty million years ago.”

Most of his time is spent attempting to answer the question, “Why were things the way they were?” He believes in framing scientific questions before planning the methods to attack them, such as through fieldwork, modelling, and isotope mapping. In many cases he tries to identify the feedback mechanisms responsible for certain chemical states of the atmosphere or the ocean in the past, and from there searches for analog environments existing in the present day in order to see how these processes function and “how they are mediated by microbial activity or other biological activity.” In a sense, he deals with geologic processes by studying microscopic ones. Currently he has been studying the hot springs in Kamchatka, Russia, looking for new microbes capable of isolating CO using water as an oxidant. “There’s something a little bit exciting about thinking of a compound as toxic as CO to you and me, and to many microbes as well, as an oxygen or energy source.” They discovered much more CO dissolved in the water than expected and realized that it might be coming from microbes. This finding intrigued him because this type of anaerobic microbial ecosystem would have been prevalent on the early Earth over two and a half billion years ago, when there was very little oxygen on the Earth. “Back then,” Colman said, “the overall chemical atmosphere of the Earth would have been very different.”

Gurgling mud pots small and large abound in Uzon Caldera. In many places, boiling conditions are reached within a meter of the surface. The resultant hot waters, when combined with acid fumes originating from deep within the volcano, rapidly degrade primary igneous minerals into clays. These features often dissolve away the rock from below, sometimes producing large muddy sinkholes like the one pictured here.

Firstly, there are many chemical reactions vital to certain metabolic processes that could not occur without oxygen, such as redox reactions. Secondly, there would have been no ozone layer to protect life, so the ultraviolet flux to the surface of the Earth would have been much more powerful, and almost certainly very harmful to life. Thirdly, there would have been different chemical reactions taking place at the time, such as photolytic reactions. As a result of these differences, if there hadn’t been some kind of stabilizing mechanism, the Earth would have frozen over completely. There were a few glacial periods, but there “has been a lot of effort expended trying to understand what kept the Earth clement.” This subject is intensely debated, but most scientists argue that large amounts of carbon dioxide in the atmosphere kept the planet warm back then. Colman told me of a paper that had resurfaced in 2005, which argued that if carbon dioxide at that level were photolyzed into CO, there might have been an effect known as a “CO runaway atmosphere,” characterized by as much as 10% CO in the atmosphere. To put that in perspective, CO in the modern atmosphere is measured in parts per billion. The modelers of the paper thought it unlikely that this effect would persist, since the Earth would shift toward removing the CO; after modeling the thermodynamics and chemical systems of the atmosphere, ocean, and primitive biosphere, they concluded that it would be energetically favorable to consume the carbon monoxide.

PhD student Bo He and Colman collect gases bubbling up in Arkashin spring in Uzon Caldera. The orange sediments lining the pool are a combination of orpiment and realgar, two different arsenic sulfide minerals precipitating from the spring water. Despite a temperature of 70 °C and high concentrations of dissolved arsenic and other metals, an active microbial community lives here, harnessing energy from redox reactions. The gas samples are analyzed in Chicago and help reveal the microbial physiologies that may be prevalent in the subsurface.

The oceans remove much of the CO in the atmosphere, which may explain why most organisms didn’t evolve to consume it. However, the microbial ecosystems in the hot springs are able to maintain high CO levels in the water, much like a microbial system that could have helped remove CO from the ancient atmosphere. While CO would not have been as high as 10%, perhaps only a tenth of a percent, it could have been enough to pressure metabolic evolution and change the biosphere until it was unrecognizable. “The world would have been very different then,” Dr. Colman said.



Histological and Immunohistochemical Response to Islet Encapsulation Materials
Veronica Ibarra

Illinois Institute of Technology

Islet transplantation has the potential to cure type I diabetes. However, immunosuppression and poor islet availability hinder broad application of this therapy. As a solution, encapsulation in polymeric biomaterials can be used, since it reduces the need for immunosuppressant drugs and enables the use of xenogeneic islets. Biomaterial success requires the ability to control the response following implantation. The goal of this research is to evaluate the inflammatory response to alginate microbeads following implantation. The shape and structure of alginate microbeads were analyzed using phase contrast microscopy. Failure rates of alginate microbeads implanted with and without islets were determined using histological analysis. Alginate microbeads both with and without islets exhibited a high failure percentage. Failure rates without and with islets were 27.56±28.51% and 20.89±1.26%, respectively. The tissue response around alginate microbeads was analyzed. Non-degraded beads showed collagen deposition with low cell density. Degraded beads showed a high density of inflammatory macrophages. Immunohistochemical methods were developed to identify phenotypes of macrophages and to determine the presence of inflammatory (M1) versus healing (M2) macrophages. Staining protocols were developed with the following markers: CD68 (a pan-macrophage marker) and CD163 (an M2 pro-healing macrophage marker). Current work is focused on analyzing the types and levels of inflammation present.

Open-Source Monitoring Devices: Assessment of a Low Cost Potentiostat
Sylwia Odrzywolska

Muon Ionization Cooling Experiment
Miles Winter

With today’s advanced technology and widespread information sharing, open-source devices are becoming more common, and it is increasingly important to assess their effectiveness and reliability. As part of a survey of open-source water quality monitoring tools, this project focused on an inexpensive, open-source, hand-held potentiostat. Potentiostats are used in electroanalytical experiments to determine the presence of electroactive compounds. A typical measurement involves adjusting the current on the counter electrode to keep the potential constant on the working electrode while the compounds present are either oxidized or reduced. Most potentiostats are very expensive, costing thousands of dollars, but the CheapStat (Rowe et al., 2011) is a low-cost alternative. The device supports multiple potential waveforms that perform cyclic, square wave, linear sweep, and anodic stripping voltammetry, making it versatile for different kinds of measurements. Thus far this research project has focused on assembling the device based on the instructions provided by Rowe et al. An ongoing series of tests in a controlled environment will be used to determine the precision, accuracy, and detection limits of the instrument, and to compare its performance to a much more expensive commercial potentiostat. The ultimate objective is a properly working potentiostat that can be used for water monitoring in underdeveloped areas or for electrochemical experiments in undergraduate laboratories.
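As a sketch of the simplest of those waveforms, the snippet below generates the triangular potential sweep used in cyclic voltammetry. Python is used purely for illustration; the parameter values are invented examples, and this is not CheapStat firmware or its interface:

```python
import numpy as np

def cv_waveform(e_start, e_vertex, scan_rate, dt):
    """Triangular cyclic-voltammetry sweep: e_start -> e_vertex -> e_start.

    e_start, e_vertex : potentials in volts
    scan_rate         : sweep rate in V/s
    dt                : sampling interval in s
    Returns (times, potentials) as numpy arrays.
    """
    step = scan_rate * dt                        # potential change per sample
    n = int(round(abs(e_vertex - e_start) / step))
    up = np.linspace(e_start, e_vertex, n + 1)   # forward sweep
    down = up[::-1][1:]                          # reverse sweep, skip repeated vertex
    e = np.concatenate([up, down])
    t = np.arange(e.size) * dt
    return t, e

# Example: sweep from -0.2 V to +0.6 V and back at 100 mV/s
t, e = cv_waveform(-0.2, 0.6, scan_rate=0.1, dt=0.1)
```

Repeating this cycle while recording current against the applied potential is what traces out the familiar cyclic voltammogram.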


The Muon Ionization Cooling Experiment (MICE) is an international collaboration aimed at demonstrating the ionization cooling of muons. Through the process of cooling, the phase-space volume of a beam of muons is decreased. This controlled ionization cooling of muons sets the stage for the next generation of particle physics experiments and demonstrates that the technology required to operate a neutrino factory and a muon accelerator is within reach. Since muons have a mean lifetime of only two microseconds, ionization cooling is the only practical way to cool muons on such a short timescale. Furthermore, to ensure that the desired particle beams primarily contain muons, MICE requires a good particle identification system. By using time-of-flight detectors, Cherenkov counters, and calorimeters, muons can be distinguished from the pions and electrons that may reduce the purity of the beam. Cherenkov detectors, in particular, are effective at identifying particles. When a charged particle exceeds the speed of light in a given medium, it emits Cherenkov radiation (light) in a predictable and characteristic way. In MICE, this Cherenkov radiation is collected by two different Cherenkov particle detectors, and by studying the collected data, the particles can be isolated and their responses to various experimental conditions can be tracked. We are currently working on calibrating the Cherenkov detectors in order to optimize their ability to respond to various particles. MICE is an ongoing project, and progress continues to be made.
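The threshold behavior that makes Cherenkov counters useful for particle identification can be sketched in a few lines: a particle radiates only when its speed satisfies beta > 1/n, which translates into a mass-dependent momentum threshold. The refractive index below is an assumed example value, not a measured MICE detector parameter:

```python
import math

# Particle masses in MeV/c^2 (PDG values, rounded)
MASSES_MEV = {"electron": 0.511, "muon": 105.66, "pion": 139.57}

def cherenkov_threshold(mass_mev, n):
    """Minimum momentum (MeV/c) at which a particle of the given mass
    emits Cherenkov light in a medium of refractive index n.

    Radiation requires beta > 1/n; with E^2 = p^2 + m^2 this gives
    p_threshold = m / sqrt(n^2 - 1).
    """
    return mass_mev / math.sqrt(n ** 2 - 1)

# Example: an aerogel radiator with n = 1.07 (assumed value)
thresholds = {p: cherenkov_threshold(m, 1.07) for p, m in MASSES_MEV.items()}
```

Because the thresholds scale with mass, a beam particle at a momentum between the muon and pion thresholds that fires the counter cannot be a pion; comparing signals from two radiators of different index sharpens the separation further.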




Prostate Cancer Cell Lines Tumorigenicity
Neal Shah

The slow-growing nature of prostate cancer (PCa) characterizes its malignancy in comparison to other types of cancer. One factor that differs in cancerous cells compared to healthy cells is their high level of activity in the glycolytic pathway, even with a large amount of oxygen at the cell’s disposal. A normal, healthy cell utilizes oxidative phosphorylation as its main energy source. Accumulating evidence indicates that HK-II plays a pivotal role in promoting cell survival in rapidly growing, glycolytic tumors. Normal prostate tissues express HK-I, but previously completed studies showed that HK-II is overexpressed in prostate cancer cells. The knockdown of this gene resulted in a significant reduction of tumorigenicity but also in an overexpression of HK-I. Thus, we determined HK-I’s contribution to the tumorigenicity of PCa cells to fully establish HK-II as a viable target for PCa therapy. In our study, we established a stable knockdown of HK-I in human PCa cells (PC3). The effect of HK-I knockdown on tumorigenicity was determined by examining the cells’ proliferation rate and ability to form colonies, as well as HK-I’s contribution to overall HK activity. Based on our results, we found that the activity of HK-I was negligible compared to HK-II in PC3 cells. Thus, HK-I does not contribute much to tumorigenicity, and HK-II remains a prime viable target for PCa therapy.

Wound Care Diagnosis: A Multidisciplinary Pilot Study using Fourier Transform Infrared Spectroscopic Imaging
Bennett Davidson

Understanding 2-Dimensional Phase Transitions of Surfactant at the Liquid-Liquid Interface
Thomas Bsaibes

Wound healing is a complex process that involves an integration of numerous clinical and biochemical pathways. It consists of four stages: hemostasis, inflammation, proliferation, and remodeling. During skin injury, platelets aggregate at the injury site to form a fibrin-based clot, which serves as a means of controlling further extensive bleeding. In order to treat a wound, the correct diagnosis must be determined at the earliest stage. This study focused upon understanding chronic wounds, with an emphasis upon venous ulcers. Venous ulcers are wounds that form due to improperly functioning venous valves, and they are the most common form of chronic wound. The purpose of this project was to monitor the wound progress of study patients to establish a potential diagnostic method for appropriate treatment plans for patients with non-healing chronic wounds. Our preliminary histological imaging of the tissue samples suggests that FT-IR imaging can be used to model wound tissue pathology within the framework of granulation tissue. Furthermore, our findings are supported by two computational tools, Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). These preliminary findings suggest that, with further clinical study, we can establish a diagnostic method to predict good versus poor healing.
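As a sketch of how those two tools work together, the snippet below compresses a set of spectra with PCA and then finds a Fisher (LDA) discriminant direction separating two classes. It is written in Python with synthetic stand-in data; none of the variable names, class labels, or numbers come from the study itself:

```python
import numpy as np

def pca_project(X, k):
    """Center X (rows = spectra) and project onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lda_direction(Z, y):
    """Fisher discriminant direction for two classes labeled 0 and 1."""
    Z0, Z1 = Z[y == 0], Z[y == 1]
    m0, m1 = Z0.mean(axis=0), Z1.mean(axis=0)
    # Within-class scatter, lightly regularized to keep it invertible
    Sw = (Z0 - m0).T @ (Z0 - m0) + (Z1 - m1).T @ (Z1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

# Synthetic "healing" vs "chronic" spectra: 40 samples, 200 wavenumber channels
rng = np.random.default_rng(0)
healing = rng.normal(0.0, 1.0, (20, 200))
chronic = rng.normal(0.6, 1.0, (20, 200))   # shifted absorbance, made up
X = np.vstack([healing, chronic])
y = np.array([0] * 20 + [1] * 20)

Z = pca_project(X, k=5)      # compress 200 channels to 5 scores
w = lda_direction(Z, y)      # direction that best separates the classes
scores = Z @ w               # 1-D scores a classifier could threshold
```

PCA first strips the spectra down to a few uncorrelated scores; LDA then asks, within that reduced space, which single direction best separates the diagnostic groups.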


Competing long-range and short-range interactions within a layer of surfactants can lead to the stabilization of complex interfacial morphologies, as previously demonstrated for insoluble surfactants that form monolayers on the surface of water. However, soluble surfactants are more commonly utilized in industrial processes and domestic products. Brewster angle microscopy (BAM) images of the interface between water and hexane solutions of tetrahydro-perfluorododecanol (FC12OH) reveal that FC12OH forms several different domain structures, which vary with temperature. Monitoring the interfacial tension with quasi-elastic light scattering (QELS) demonstrates that the domains form near the 2-dimensional gas-solid phase transition within the interfacial surfactant layer. The domains exhibit multiple length scales with cluster and stripe morphologies.




Subtilis
Klevin Lo

Illustrating Dissolved Oxygen Content in Water Through Pressure Sensitive Paint
Kimberly Huynh

Each year, the poultry industry produces over two billion pounds of feather waste, most of which is processed into nutrient-poor animal feed. Recent research has shown, however, that feather keratin can be used to produce biodegradable plastics, fertilizers, detergents, and pharmaceuticals. Current chemical methods for keratin degradation are energetically costly, and previous efforts at keratinase production in heterologous hosts have been stymied by poor protease expression. To address both of these problems, our team constructed BioBricks based on kerA, a serine keratinase gene native to B. licheniformis that is active on whole poultry feathers. We designed two BioBricks for expression in E. coli and constructed a high copy number BioBrick plasmid with an origin of replication compatible with B. subtilis. By designing a new keratinase expression system in B. subtilis, we hope to provide a faster, cheaper alternative to current methods of industrial keratinase production.

Pathogen Avoidance in Female Siberian Hamsters
Dora Lin

One disadvantage of sociality is the increased exposure to contagion by infected conspecifics. It may be important for social animals to recognize and avoid pathogenic threat in conspecifics (i.e., infected/sick individuals). The behavioral immune system participates in this process via the expression of disgust and behavioral avoidance. Olfactory mechanisms allow pathogen avoidance in rodents. We examined whether female Siberian hamsters could discriminate between odors of sick and healthy conspecific males and females. Pathogen avoidance behavior was measured during the active and inactive phases to examine circadian rhythms as well. Donor males and females were injected with lipopolysaccharide (LPS) or saline to generate sick and healthy odors, respectively; donor odors consisted of soiled bedding collected 6 h after treatment. Focal females investigated odors of sick and healthy males and females in four separate 10-minute testing sessions. Preliminary data indicate that females spent more time investigating odors from saline-injected than LPS-injected hamsters, and more time positioned as far as possible away from the LPS odors in the testing environment. Avoidance was greatest to odors of sick females. Circadian rhythms in these behaviors were not apparent. We conclude that female hamsters can recognize and avoid odors of sick conspecifics, and do so in a sex-biased manner. Pathogen avoidance is estrogen-dependent in females, and future work will examine whether seasonal changes in estradiol secretion modulate behavioral immunity.


Pressure sensitive paints (PSPs) were first designed for aerodynamic testing of aircraft in wind tunnels. Depending on how much oxygen reaches a surface, PSPs fluoresce at different intensities. PSP thus offers a novel way to visualize dissolved oxygen concentration in an aquatic system. In the experimental setup, a flat plate coated with the paint will be submerged in water at varying oxygen concentrations. The varying concentrations, shown through the paint’s fluorescence, can then be imaged with a scientific-grade camera and analyzed using MATLAB to create a calibration curve. Water of a higher oxygen concentration will then be impinged onto the surface of a PSP-coated plate submerged in water of a lower oxygen concentration. MATLAB analysis will then illustrate the spatial distribution of oxygen concentration on the plate. This experimental setup has larger real-world implications. When the paint is applied to columns of PVC spheres submerged within a flume, it can be used to model groundwater flow in the hyporheic zone, where benthic biofilms may create anoxic and oxic regions. The paint itself can be used to approximate local oxygen concentrations, not just the bulk concentration. This greater detail of understanding will assist in better modeling of complex behavior such as the transport and fate of contaminants in watersheds.
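The calibration-curve step can be sketched with the Stern-Volmer relation that PSP luminescence typically follows, I0/I = 1 + Ksv·[O2]. The sketch below is in Python with invented numbers; it stands in for, and is not, the project’s MATLAB pipeline:

```python
import numpy as np

def fit_stern_volmer(o2, intensity, i0):
    """Fit I0/I = 1 + Ksv*[O2] and return Ksv.

    Least-squares slope of (I0/I - 1) against [O2], forced through the origin.
    """
    x = np.asarray(o2, dtype=float)
    yv = i0 / np.asarray(intensity, dtype=float) - 1.0
    return float(x @ yv / (x @ x))

def o2_from_intensity(intensity, i0, ksv):
    """Invert the calibration: map a measured intensity to an O2 concentration."""
    return (i0 / intensity - 1.0) / ksv

# Invented calibration data: [O2] in mg/L, I0 = intensity at zero oxygen
i0 = 1000.0
ksv_true = 0.12                          # assumed quenching constant, per mg/L
o2 = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
intensity = i0 / (1.0 + ksv_true * o2)   # noiseless synthetic measurements

ksv = fit_stern_volmer(o2, intensity, i0)
```

With Ksv in hand, each camera pixel’s intensity can be converted to a local dissolved oxygen value, which is what makes spatial maps of the oxygen field possible.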

Developing Small Molecules and Determining Their Proteasome Activation as a Mechanism of Action to Treat Amyotrophic Lateral Sclerosis
Kevin Zhao

Amyotrophic Lateral Sclerosis (ALS) is a fatal neurodegenerative disease leading to progressive motor neuron loss and eventually paralysis and death. The only FDA-approved drug is riluzole, which merely extends lifetimes by an average of 2-3 months. A collaborative high throughput screen (HTS) was performed using PC12 cell lines exhibiting mutations of the enzyme Cu/Zn superoxide dismutase (SOD1), which are established causes of familial ALS cases. One scaffold identified was the class of arylsulfanyl pyrazolones (ASP). After further modification, these were developed into arylazanyl pyrazolones (AAP), which demonstrated better activity. To continue studying the structure-activity relationship (SAR), various analogues were synthesized and their EC50 values were determined through a neuroprotective assay. Furthermore, a separate assay utilizing both a reversible and an irreversible proteasome inhibitor shows that proteasome activation is a mechanism of action of these compounds. This assay utilized proteasome-specific substrates that fluoresce after reacting with the proteasome; proteasome activation is seen when the fluorescence patterns vary with concentration between the reversible and irreversible inhibitors.




Enriched Environment Attenuates Depressed Behavior in an Animal Model of Depression
Claire Morley

Characterization of Satellite III Histone Modifications
Neil Kuehnle

As Major Depressive Disorder (MDD) continues to claim lives across the world, treatments for this debilitating disease are still not optimally effective. The enriched environment model, albeit simple, was hypothesized by the authors to mimic aspects of therapy in humans, and therefore could attenuate depressive behavior even in a genetic rat model of depression. This model was previously developed by the Redei lab by bidirectional selective breeding of Wistar Kyoto rats. Based on their performance in the forced swim test (FST), a WMI (“more immobile”) strain showing exaggerated despair-like behavior and a WLI (“less immobile”) control strain were generated. Adult males of both strains were tested on the FST and the Open Field Test (OFT) of depression- and anxiety-like behaviors before and after a 30-day exposure to an enriched environment. The enriched environment significantly attenuated despair-like behavior of WMIs, suggesting that the molecular mechanisms by which it acts can interfere with the molecular mechanisms contributing to depression-like behavior. Further exploration of the molecular mechanisms is being carried out to prove or disprove this hypothesis.

The Human Genome Project did not complete the sequencing of the highly repetitive, low-expression heterochromatic regions. The Doering lab is constructing a detailed map of the short arm of chromosome 21 as a model for understanding the structure and function of heterochromatic regions in general. The heterochromatic portion of the genome is rich in normally unexpressed, tandemly repetitive satellite sequences. Recent work has revealed that satellite expression is highly elevated in cancer cells compared to normal tissue. Among these, Satellite III (SatIII) repeats showed the greatest increase in expression. Different families of satellite sequence have been shown to be differentially changed in expression in cancer cells compared to normal cells; however, it is not currently known whether all SatIII subfamilies show equal changes in expression. We hypothesize that SatIII subfamilies will display histone modifications consistent with expressional activation, and that different regions will display different levels of activation. If different levels of activation are found, this could lead to the development of biomarkers for cancer detection and prognosis.

Plasmodium reveals redox activity
Robin David

Phosducin-like proteins (PhLPs) belong to the thioredoxin superfamily of proteins and are highly conserved among eukaryotic organisms. Their roles have been implicated in G-protein signaling, cell cycle progression, and regulation of the folding of cytoskeletal proteins. However, the biochemical mechanism by which PhLPs perform their function is not clear. Here we describe the cloning and biochemical characterization of PhLP-1 of the malaria parasite Plasmodium (PbPhLP-1). The gene was cloned from Plasmodium cDNA and overexpressed in E. coli. Purified PbPhLP-1 showed activity in the insulin reduction assay and was also active in the thioredoxin-coupled reduction assay, with a KM of 10 μM and a rate constant of 2 × 10⁴ M⁻¹ min⁻¹. Furthermore, PbPhLP-1 effectively reduced the organic peroxide tBOOH, indicating antioxidant activity. Sequence alignment and homology modeling of PbPhLP-1 indicated a conserved, putative non-typical active site, TWRC, in place of the typical catalytic CXXC motif found in classical redox-active thioredoxins. Site-directed mutagenesis of the putative redox-active cysteine (C106) in PbPhLP-1 abolished the redox functions of the protein, confirming the role of C106 in the redox mechanism of the protein. We show for the first time that PhLPs are redox-active enzymes that can be efficiently reduced by the thioredoxin system. Our findings shed new light on the biochemical mechanism and the biological function of these highly conserved proteins.

Alex Gilman

Granular systems, such as a beach full of sand, a box of marbles, or a collection of rocks on the side of a hill, exhibit interesting fluid-like properties. We can approximate the behavior of a granular medium as similar to a fluid system, which is also made up of an aggregate of particles. We use a variant of the Navier-Stokes equations, which are used to model the behavior of fluid systems, to simulate a vertically shaken system of grains. Granular systems that are oscillated within a given range of frequencies form interesting visual patterns, similar to the phenomenon of Faraday waves in fluids. When layers of grains are oscillated at accelerational amplitudes greater than that of gravity, the layers leave the plate. Shocks are created in the system upon impact with the oscillator. We demonstrate relationships between properties associated with shocks and properties associated with the observed standing wave patterns.
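The takeoff condition stated in the granular dynamics abstract (the grain layer leaves the plate when the shaking acceleration exceeds that of gravity) reduces to a one-line dimensionless number. A quick sketch, with made-up shaking parameters that are not taken from the study:

```python
import math

def dimensionless_acceleration(amplitude_m, freq_hz, g=9.81):
    """Gamma = A * (2*pi*f)^2 / g for sinusoidal shaking of amplitude A (m)
    at frequency f (Hz); the grain layer leaves the plate when Gamma > 1."""
    return amplitude_m * (2.0 * math.pi * freq_hz) ** 2 / g

# Example: 1 mm shaking amplitude at 20 Hz versus 10 Hz (illustrative values)
gamma_fast = dimensionless_acceleration(1e-3, 20.0)   # about 1.6: layer lifts off
gamma_slow = dimensionless_acceleration(1e-3, 10.0)   # about 0.4: layer stays put
```

Sweeping the frequency at fixed amplitude moves the system across the Gamma = 1 boundary, which is where the free flights, impacts, and shocks described in the abstract begin.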



Scientia

Autumn 2014

Diphosphorus Maya Navarro

The long-term research goal of the Shelby Group is to design palladium compounds bonded to negatively charged diphosphorus compounds and to test them for catalysis. We have explored two ways to synthesize these compounds. In the more promising method, the neutrally charged diphosphorus compound is first bonded to palladium; then, while it is on palladium, the diphosphorus compound is converted to the negatively charged form. We have found that this procedure generally forms new palladium-diphosphine dimers in which symmetric diphosphines remain neutral. However, when an unsymmetric diphosphine compound was tested, the resulting palladium complex contained negatively charged diphosphorus compounds. To our knowledge, this compound is the only zero-valent palladium diphosphine dimer in which the ligands are unsymmetrical and not neutral. As observed in our crystallographic data for this unique compound, if it is visually cut in half at an angle of ~20° off the vertical, the complex contains two of the target units, in each of which a palladium atom is bonded to a negatively charged diphosphorus compound. Based on these results, we have narrowed our focus to unsymmetric diphosphine ligands that contain only aromatic substituents with different electronic and/or spatial character, to further examine conditions that favor conversion of the diphosphine to its negatively charged form. Currently we are working to improve the compound's purity so that we can obtain better NMR spectra and elemental analysis data to complete our manuscript.

Mechanisms of Na+ Transport Inhibition by Epithelial Cells from Cystic Fibrosis Donors
Patrick Morgan

Hypertonic saline inhalation therapy benefits Cystic Fibrosis (CF) patients. Surprisingly, these benefits are long-lasting and are diminished by the epithelial Na+ channel blocker amiloride. Our aim was to explain these effects. Human bronchial epithelial (HBE) cells from CF donors were grown in inserts and were used to measure amiloride-sensitive short circuit currents (INa) and transepithelial conductance (GT) and capacitance (CT). Hyperosmotic challenge (HC) solutions were prepared by adding additional NaCl or mannitol either to the isosmotic buffer or to a buffer containing a low (6 mM) Na+ concentration (6Na-HC). Exposure to apical or basolateral HC inhibited INa and GT, and exposure to apical HC also inhibited CT. The HC-induced inhibition of INa was protracted and required 60 minutes of re-exposure to the isosmotic solution to recover 75%. Pre-incubation with amiloride significantly accelerated the recovery of INa following exposure to HC-NaCl but not when 6Na-HC was used. Apical or basolateral membrane permeabilization using nystatin revealed that exposure to HC inhibited the apical epithelial Na+ channels (ENaC) and the basolateral Na/K ATPase. Imaging the HBE membranes using fluorescent labeling suggests that exposure to HC induces membrane endocytosis. Conclusions: i) Exposure to HC inhibits HBE INa, probably by inducing endocytosis of apical ENaC and basolateral Na/K ATPases; ii) amiloride diminishes this effect.

Mathematical Modeling of Aquatic Communities in DuPage County
Maria Ulloa

Mosquitoes, specifically the Asian tiger mosquito (Aedes albopictus) and the common house mosquito (Culex pipiens), are common vectors of West Nile Virus and feed on the blood of humans and animals. While obtaining blood, mosquitoes transfer the virus to their hosts. This is especially dangerous to infants, the elderly, and those with weakened immune systems. Mosquitoes are typically found near their breeding grounds, laying eggs in stagnant water that hatch into larvae. Mosquito larvae are known to tolerate only certain water conditions, from water temperature to the amount of oxygen available. The purpose of this project was to determine whether water sources near drainage pipes were optimal for mosquito growth. For that purpose, 500 mL water samples were collected from 2 lakes and 2 ponds at the DuPage County Forest Preserve, at drainage pipes and 100 meters from each pipe. Specifically, we identified all of the macroinvertebrates, small aquatic organisms that include mosquito larvae, in these locations and compared them using the Shannon-Wiener diversity index. Differences in the macroinvertebrate fauna were found between locations at drainage pipes and areas without drainage.





Transient Expression of FcIL7 in CHO-S cells to Augment the Level of T-cell Engraftment in a Mouse Model of Human HSC Transplantation, HIV infection, and RNA-based therapy Jawad Arshad

Dr. DiGiusto’s Lab is developing an NSG (NOD scid gamma, i.e., lacking a functional immune system) mouse model of human HSC transplantation, HIV infection, and therapy. Specifically, they are modifying T-cells to express small RNA molecules that interrupt the HIV viral life cycle [1]. I, along with my mentor Dr. Mark Sherman, was tasked with optimizing the expression of FcIL7, a fusion protein that stimulates human T-cell growth and proliferation in NSG mice. Previous attempts to express FcIL7 in stably transfected CHO (Chinese Hamster Ovary) cells have resulted in poor yields (~10 μg/mL of TCS). Initial attempts at large-scale transient expression have resulted in a fivefold increase in yields. In an attempt to further optimize expression, we have cloned two versions of the FcIL7 gene into a plasmid that is more suitable for transient expression. We then transfected CHO-S (suspension-adapted Chinese Hamster Ovary) cells using electroporation and assayed supernatants using ELISA and Western blotting. As a result, we identified the fourth clone in the FcIL7-2 line as the most viable.

Introduction Interleukin 7 (IL-7) is a hematopoietic growth factor (one acting on the blood-forming cells that give rise to all other blood cells), which is secreted by stromal cells in the bone marrow and thymus. IL-7 helps to stimulate the differentiation of pluripotent stem cell lines into lymphoid progenitor cells. It also stimulates proliferation of cells in the lymphoid lineage itself, including T-cells, which is the basis for this project. IL-7 is critical for T-cell development. As a cytokine, it is a cofactor for V(D)J rearrangement of the T-cell receptor beta chain during early T-cell development. Fusion proteins are derived from synthetic genes made by joining two or more genes that originally coded


for different proteins. The fusion protein we are working with in our project is FcIL7 [2], which designates that it is a combination of an antibody fragment (the Fc region) and the IL-7 protein. Fusing IL-7 to an Fc increases the cytokine’s stability and duration of action in NSG mice. Our project involved transfection of FcIL7-encoding plasmids into cell lines. Transfection can be defined as deliberately introducing nucleic acids into cells. Specifically, we used suspension adapted Chinese Hamster Ovary cells (CHO-S), which is one of the most commonly used cell lines for expressing mammalian proteins in cell culture. Our transfection involved opening transient pores in the cell membrane to allow for the

uptake of our plasmids. Our method of choice was electroporation, the application of an electric current across the cell membrane resulting in the temporary opening of a pore that allows intake of exogenous molecules from the medium. Our transfections focused on short-term targeting of the plasmid to the cell’s nucleus for mRNA synthesis (transient transfection) rather than long-term integration into the chromosomal DNA of the cell (stable transfection). Large-scale transient transfections have the potential to generate milligram quantities of an engineered protein in 2 weeks or less. The FcIL7 fusion protein will ultimately be used to allow these NOD scid gamma mice to have working T-cells carrying the anti-HIV RNA molecules. Therefore, large quantities of it are necessary for such experimentation to continue. From there, the next step will be to test the relative deterrence of HIV infection in these mice. Methods The goal of our project was to move the FcIL7 gene from a stable-transfection plasmid (pOptiVEC) to a transient-transfection plasmid (pcDNA3.3). The latter produces a shorter mRNA that is not coupled to the DHFR gene, which is required for gene amplification during stable cell line production. The process began by pouring LB Ampicillin plates to select for E. coli transformants carrying the plasmid of interest. Next we performed PCR to selectively amplify both the FcIL7-1 and FcIL7-2 versions of the gene (FcIL7-1 lacks an intron in the secretion signal sequence). We used agarose gel electrophoresis to verify that the PCR products were singular and of the correct size. We then proceeded to TOPO cloning (ligase-free cloning), a technique for cloning Taq polymerase-amplified DNA fragments into the vector of choice without the need for DNA ligases. However, the cloning is not directional.
Therefore, at this point we had the plasmid containing the desired insert, though possibly with mutations and in either orientation. After transforming E. coli cells and selecting 8 clones for each gene construct, we isolated plasmid DNA and used restriction enzymes to determine which clones had the gene inserted in the correct orientation. Seven of the 8 FcIL7-1 clones, and only four of the FcIL7-2 clones, were oriented correctly. Next, we used DNA sequencing to confirm that no mutations had been introduced by Taq polymerase. We found several FcIL7-1 clones with fully correct sequences, and selected clone 2B as our final

candidate. Unfortunately, none of the FcIL7-2 clones were mutation-free, so we selected 8 more clones and repeated the DNA isolation, restriction analysis, and sequencing before selecting clone 5D as our final candidate. Having verified the sequences of clones 2B and 5D, we performed large-scale endotoxin-free plasmid preps for subsequent transfection of CHO-S cells. These plasmid preps allowed for purification of the sample on a large scale. However, we also did small-scale endotoxin-free plasmid preps of the same E. coli cultures so that we could submit the plasmid DNA for sequencing to reconfirm the sequence of each plasmid. We then proceeded to transfection of CHO-S cells via electroporation. In addition to transfecting with clones 2B and 5D (FcIL7-1 and -2 in pcDNA3.3), we also transfected with the same two genes cloned into pOptiVEC. We let the transfected cells grow for 7 days, and when cell viability fell below 80% we harvested the supernatants and began looking for secreted FcIL7. Specifically, we used a Western blot to show the existence of the desired protein product and an ELISA assay to determine how much product was being secreted by each pool of transfected cells.
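The orientation screen rests on a simple fact: a given restriction enzyme cuts the plasmid at fixed positions, so a correctly oriented insert and an inverted one yield different fragment-size patterns on a gel. A minimal sketch of predicting those fragment sizes for a circular plasmid (the sequence and recognition site below are illustrative placeholders, not the actual vector or enzyme used in this project):

```python
def digest_fragments(plasmid, site):
    """Predict fragment sizes from digesting a circular plasmid with a
    single enzyme. `plasmid` is one strand of the full sequence and
    `site` is the enzyme's recognition sequence; both are hypothetical
    stand-ins for the real vector and enzyme.
    """
    # collect every cut position around the circle
    cuts, i = [], plasmid.find(site)
    while i != -1:
        cuts.append(i)
        i = plasmid.find(site, i + 1)
    if len(cuts) < 2:
        # zero or one site: the digest yields the full-length (linearized) plasmid
        return [len(plasmid)]
    # fragment lengths are the gaps between successive cuts, wrapping around
    n = len(plasmid)
    return sorted((cuts[(k + 1) % len(cuts)] - cuts[k]) % n
                  for k in range(len(cuts)))
```

Comparing the predicted patterns for the forward and reverse insert orientations against the observed gel bands identifies the correctly oriented clones.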

DNA Marker Lengths

Gel Verification of FcIL7 1/2 Products

Figure 1: Agarose gel results for both the FcIL7-1 and FcIL7-2 gene constructs. As expected, the FcIL7-1 amplicon was roughly 1,500 bp long and the FcIL7-2 amplicon was close to 2,000 bp in length.

Discussion As expected, the FcIL7-1 amplicon was roughly 1,500 bp long and the FcIL7-2 amplicon was close to 2,000 bp in length, based upon our initial agarose gel (Figure 1). This confirmation allowed us to move on in our procedure. In the second gel (Figure 2), we isolated plasmid DNA and used restriction enzymes to determine which clones had the gene inserted in the correct orientation. The correct orientation is shown as the leftmost result in





Figure 4: The lanes shown above, from left to right, are 2B, 5D, FcIL7-1 #4, FcIL7-2 #3, negative control, and positive control. Every clone, especially the first lane shown here (2B JA), demonstrated visible banding indicating FcIL7 production. The final lane, used as proof in the Western, was purified FcIL7, which showed strong bands in the same positions as the clones.

Figure 2: Agarose gel results of selecting 8 clones for each gene construct. We isolated plasmid DNA and used restriction enzymes to determine which clones had the gene inserted in the correct orientation. 7 of the 8 FcIL7-1 clones were correct, but only 4 of the FcIL7-2 clones were correct.

each of the two respective gels. Seven of the eight FcIL7-1 clones were correctly oriented, but only four of the FcIL7-2 clones were. From these clones, 2B from FcIL7-1 had a DNA sequence that was free of mutations, but for FcIL7-2 we had to screen 8 more clones (run on a separate gel in a similar fashion). DNA sequencing revealed that clone 5D had the correct orientation and sequence based on the

nanodrop results. Based upon the ELISA assay (Figure 3), FcIL7-1 and FcIL7-2 cloned into pcDNA3.3 produced greater concentrations of protein than their counterparts cloned into pOptiVEC, suggesting that having the DHFR gene in the mRNA reduces translation efficiency of the FcIL7 gene. Furthermore, the FcIL7-1 gene in both cases produced more protein than the FcIL7-2 gene, suggesting that a shorter mRNA, due to the lack of an intron in the secretory signal sequence, can affect the efficiency of protein production in the resulting transiently transfected cell. In the Western blot (Figure 4), every clone, especially the first lane shown here (2B JA), demonstrated visible banding indicating the presence of FcIL7

Figure 3: ELISA assay results for each of the FcIL7 plasmids.


in the tissue culture supernatant. The band in the last lane (purified FcIL7) comigrates with the bands seen in the supernatant lanes, which demonstrates that the transfected cells are secreting FcIL7. Based on preliminary ELISA assay and Western blot results, CHO-S cells transiently transfected with FcIL7-1 cloned into plasmid pcDNA3.3 express the highest amounts of secreted immunocytokine fusion protein. The next step will be to demonstrate that the differences we are seeing in FcIL7 expression levels are not simply due to differences in transfection efficiency. Techniques like qPCR of DNA extracted from transfected cells, or the spiking of transfections with eGFP plasmid, can be used to show that transfected cells are incorporating equal amounts of plasmid DNA. Once demonstrated, milligram quantities of the selected plasmid can be purified for large-scale electroporation and protein production on the scale of 250 mg or more, for the purposes of improving T-cell engraftment in Dr. DiGiusto’s mouse model of HIV infection and therapy. Acknowledgements Thank you to Dr. Mark Sherman for being a patient, understanding, and enthusiastic mentor who helped me to become a better scientist over the course of the Eugene and Ruth Roberts Summer Academy. Thank you to Dr. Andrew Raubitschek for accepting me into his lab and giving me an opportunity to research an inherently interesting topic. Thank you to the University of Chicago for funding me and allowing me to take part in such a worthwhile program.

References 1. DiGiusto DL and Chung J. “Combinatorial RNA-based Gene Therapy for the Treatment of HIV/AIDS.” Expert Opin Biol Ther (2013): 437-45. 2. Lauder S and Gillies SD. “IL-7 Fusion Proteins.” US Patent Application 2009/0010875 A1 (2009).





Dark Matter Search at the CMS Detector using the Razor Kinematic Variables Natalie Harrison A procedure has been created to test an effective field theory of dark matter and to limit the mass scale parameter, Λ, of this model as part of a search for a dark matter signature of missing energy and two or more jets using the Razor Kinematic Variables. This search has been performed using a data sample of pp collisions at a center-of-mass energy of 8 TeV. The data were collected by the CMS detector at the LHC and correspond to 19.5 fb-1 of integrated luminosity. The limit-setting procedure uses a frequentist approach and sets limits on the production cross section of dark matter. These limits on the production cross section are then transformed into limits on the mass scale parameter to compare with previous mono-jet searches, and into 90% CL spin-dependent and spin-independent dark matter-nucleon cross section limits to compare with results from direct detection experiments. These limits can also be combined with results from the mono-jet analysis, because the searches are complementary, to increase the overall sensitivity to the direct production of DM at the CMS experiment in an unexplored region of parameter space during the next run of the LHC. This framework may be adapted to study other theories of dark matter.

Motivation Dark matter (DM) was initially proposed to explain various astrophysical measurements, such as the rotational speeds of galaxies and gravitational lensing. It has previously been determined that ordinary matter makes up only 4.9% of the mass-energy of our universe and that 26.8% is made up of DM. Thus, it is very important that we search for DM, as it makes up 84.5% of all matter in the universe [1] [2].

Background There are currently three different modes of dark matter detection. The first is called direct detection, where dark matter particles interact and scatter off nuclei of the detector. The second is called indirect detection, which looks for products of WIMP (Weakly Interacting Massive Particle) annihilation, gamma rays or Standard Model particle-antiparticle pairs, in regions of high dark matter density. The final method is dark matter production at the LHC, which is the type of detection we use in this study. Understanding the results from these three different methods and comparing them is crucial to our understanding of dark matter. Current searches for DM at colliders focus on looking for excesses in the number of events involving a single jet (mono-jet) or photon (mono-photon) and missing ET [3].

The dark matter particles escape detection and leave a large amount of missing energy (MET) in the detector. This study is motivated by a paper [4] that suggests that the Razor Kinematic Variables (RKV), originally designed for SUSY searches, can be used to expand existing searches for DM production at the LHC. The razor variables are R2 and MR. They are defined as

M_R = \sqrt{(|\vec{p}_{j1}| + |\vec{p}_{j2}|)^2 - (p_z^{j1} + p_z^{j2})^2}

and

R^2 = \left(\frac{M_T^R}{M_R}\right)^2,

where

M_T^R = \sqrt{\frac{|\vec{M}|\,(p_T^{j1} + p_T^{j2}) - \vec{M} \cdot (\vec{p}_T^{j1} + \vec{p}_T^{j2})}{2}}.

Here \vec{p}_T^{j1} is the transverse momentum 3-vector of one of the visible jets produced in the decay of the heavy mediator to dark matter in association with two jets, \vec{p}_{j2} is the momentum 3-vector of one of the two jets, p_z^{j1} is the z component of the momentum of one of the jets [5], and \vec{M} is the MET 3-vector. R2 is a dimensionless variable that represents the amount of transverse momentum imbalance in a collision event. Transverse imbalance of momentum is indicative of decays created by pair-produced particles that decay to weakly interacting particles. MR contains information about the masses of pair-produced particles and other weakly interacting particles from their decays [5].

We look to measure the cross section, or size, of the dark matter production and from this calculate the cross section of the dark matter-nucleon interaction. Using the RKV, we can focus on events with multiple radiated jets that leave behind large missing transverse energy, as in Fig. 1, where χ is the dark matter particle and Z’ is the mediator for the dark matter particle. This provides us with a search complementary to the existing mono-photon and mono-jet searches at the LHC. Thus, we may be able to combine the limits we obtain with those from previous collider searches to increase our overall sensitivity to DM. This line of research determines upper limits on the dark matter-nucleon cross section in a hard-to-probe region of razor phase space at the CMS detector at the LHC. A limit setting procedure is defined based on a frequentist approach (known as LHC-style CLs) and the data are interpreted in the following ways: (i) as a limit on the dark matter production cross section, (ii) as a limit on Lambda (Λ), the mass scale parameter of the mediator, (iii) as a limit on the dark matter-nucleon interaction cross section, and (iv) in comparison to results from Direct Detection experiments and to those of the mono-jet search. Our Signal Model Our signal is generated using an effective field theory assumption. The production of dark matter occurs through the Feynman diagram in Fig. 1. Two initial state radiated jets are produced and we select these events to perform the razor analysis.
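These definitions translate directly into code. Below is a minimal sketch computing (MR, R2) from the two jet momenta and the MET vector; the interface and names are illustrative, not taken from the CMS analysis software:

```python
import math

def razor_variables(j1, j2, met):
    """Compute the razor variables (MR, R2) for a dijet event.

    j1, j2: 3-momenta of the two jets as (px, py, pz) in GeV.
    met:    missing transverse energy 2-vector (mex, mey) in GeV.
    """
    p1 = math.sqrt(sum(c * c for c in j1))
    p2 = math.sqrt(sum(c * c for c in j2))
    pt1 = math.hypot(j1[0], j1[1])
    pt2 = math.hypot(j2[0], j2[1])
    met_mag = math.hypot(met[0], met[1])

    # MR: longitudinally boost-invariant mass scale of the pair production
    mr = math.sqrt((p1 + p2) ** 2 - (j1[2] + j2[2]) ** 2)

    # MTR: transverse mass built from the jets and the MET vector
    mtr = math.sqrt(
        (met_mag * (pt1 + pt2)
         - (met[0] * (j1[0] + j2[0]) + met[1] * (j1[1] + j2[1]))) / 2.0
    )

    # R2: dimensionless measure of the transverse momentum imbalance
    r2 = (mtr / mr) ** 2
    return mr, r2
```

A balanced, well-measured QCD dijet event gives R2 near zero, while pair production with invisible decay products pushes both MR and R2 to large values, which is why the search region sits at high MR and high R2.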

Fig. 1

In effective field theory, it is assumed that the mediator particle Z’ is sufficiently heavy that it can be integrated out of the operators. Under this assumption, the Feynman diagram reduces to the contact interaction shown in Fig. 2.

Fig. 2

We have four different samples of couplings: axial vector coupling to the u quark and d quark (spin dependent) and vector coupling to the u quark and d quark (spin independent). The effective operator representing the axial vector couplings is

\mathcal{O}_{AV} = \frac{(\bar{\chi}\gamma^{\mu}\gamma_5\chi)(\bar{q}\gamma_{\mu}\gamma_5 q)}{\Lambda^2}.

The operator representing the vector coupling is

\mathcal{O}_{V} = \frac{(\bar{\chi}\gamma^{\mu}\chi)(\bar{q}\gamma_{\mu} q)}{\Lambda^2},

where q represents the quark being coupled to, χ is the dark matter field, and Λ is defined earlier [4]. Our signal is generated for six different masses of dark matter: MDM = 1 GeV, 10 GeV, 100 GeV, 400 GeV, 700 GeV, and 1000 GeV. For each mass and coupling, the signal was modeled in our detector, and we chose events with certain selection criteria. The selection criteria require the event to have 2 jets, each with transverse momentum greater than 80 GeV (pT > 80 GeV), the angle between the two jets less than 2.5 radians (Δφ(j1, j2) < 2.5), and 0 identified leptons. For events passing our selection, we calculate the razor variable distributions for these signal events. Fig. 3 shows an example of this type of two-dimensional razor variable distribution for a specific mass of dark matter and coupling to matter. The signal is scaled to 1 pb and cuts are placed on our signal (MR > 200 GeV and R2 > 0.5). The trigger efficiencies are used to calculate the number of signal events passing our cuts. The trigger efficiency curve utilized is shown in Fig. 4. The number of signal events passing our selection criteria (for 19.5fb-1), the cross section at which our samples were produced, and the efficiencies of event selection are shown in Table 1, Table 2, Table 3, and Table 4.
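The event selection can be sketched as a single predicate over precomputed per-event quantities. This is an illustration, not the analysis code (which runs in the CMS software framework), and it assumes the signal region is high MR and high R2 (MR > 200 GeV, R2 > 0.5), consistent with the razor binning starting at MR = 200 GeV:

```python
def passes_selection(jet_pts, n_leptons, dphi_j1j2, mr, r2):
    """Sketch of the selection quoted in the text.

    jet_pts:    list of jet transverse momenta in GeV
    n_leptons:  number of identified leptons
    dphi_j1j2:  angle between the two leading jets (radians)
    mr, r2:     razor variables for the event
    """
    pts = sorted(jet_pts, reverse=True)
    return (
        len(pts) >= 2
        and pts[0] > 80.0 and pts[1] > 80.0  # two jets with pT > 80 GeV
        and dphi_j1j2 < 2.5                  # jets not back-to-back
        and n_leptons == 0                   # lepton veto
        and mr > 200.0 and r2 > 0.5          # high-MR, high-R2 search region
    )
```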




Table 2: Signal Information for AVu coupling

Table 3: Signal Information for Vd coupling

Fig. 3: Distribution of R2 and MR for a dark matter particle of mass m = 1 GeV with Axial Vector coupling to the d quark.

Table 4: Signal Information for Vu coupling

Fig. 4: Trigger Efficiency distribution in Razor variables.

Table 1: Signal Information for AVd coupling

Table 1 contains information on signal samples. For each sample and mass, the total number of events generated is listed in the third column, the number of events passing selection criteria is in the second column (for 19.5fb-1 of data), the resulting efficiencies are calculated in the fourth column, the Λ used to generate the samples is in the fifth column, and the cross sections that the samples were produced at are in the sixth column.


Methods We employ a variety of statistical procedures to compute the expected and the observed upper limits on the production cross section. In this section, a general procedure to set limits is briefly explained, including a general understanding of test statistics, how nuisance parameters are treated, the generation of pseudo-data, and a modified frequentist statistic called CLs. These are all thoroughly explained in [8] and Appendix A of [10]. Their specific implementation in this analysis using the computing tools HistFactory and combine is addressed in General Overview of Statistical Procedures and Implementation in Our Analysis. General Overview of Statistical Procedures in Frequentist analysis In general, when searching for a new phenomenon (a signal process), one typically defines a null hypothesis for the data, which is tested against an alternative hypothesis to see which hypothesis agrees more closely with the data. Typically when setting limits, our hypotheses are a signal plus background hypothesis s + b and a background-only hypothesis b. In an analysis, the expected number

of signal events in one or multiple bins is usually denoted by s and the number of background events by b. We try to exclude the signal plus background hypothesis, and we express this as a limit on the signal strength modifier μ, which scales the signal cross section [10]. Predictions for the signal and background yields have several uncertainties that are incorporated into the statistical methods through nuisance parameters, θ. These include uncertainties on the luminosity, trigger efficiencies, MET, PDFs, MC modelling, and theoretical cross sections. The signal and background predictions therefore become functions of these nuisance parameters, s(θ) and b(θ). Usually, and in our analysis, all sources of uncertainty are assumed to be 100% correlated. Systematic error pdfs can be constructed to reflect our degree of belief in the true value of θ; these are denoted by p(θ̃ | θ), where θ̃ is the default value of the nuisance parameters [10]. To determine observed limits on μ, we construct the likelihood function

\mathcal{L}(\mathrm{data} \mid \mu, \theta) = \mathrm{Poisson}(\mathrm{data} \mid \mu\,s(\theta) + b(\theta)) \cdot p(\tilde{\theta} \mid \theta).

In this formula, data is taken to represent either the experimental observation of the number of events or the pseudo-data used to construct sampling distributions (often referred to as toys). Poisson(data | μs + b) is a product of Poisson probabilities to observe ni events in bin i [10]:

\mathrm{Poisson}(\mathrm{data} \mid \mu s + b) = \prod_i \frac{(\mu s_i + b_i)^{n_i}}{n_i!}\, e^{-(\mu s_i + b_i)}.

The level of agreement between the observed data and a given hypothesis is typically quantified using a test statistic. Often, and in our search, the measure of incompatibility is based on the corresponding likelihood ratio for signal and background [10],

\tilde{q}_\mu = -2 \ln \frac{\mathcal{L}(\mathrm{data} \mid \mu, \hat{\theta}_\mu)}{\mathcal{L}(\mathrm{data} \mid \hat{\mu}, \hat{\theta})},

with the constraint that 0 ≤ μ̂ ≤ μ. Then, the values of the nuisance parameters that best describe the observed data are found for the b and s + b hypotheses. We then generate toy Monte Carlo pseudo-data to construct the two pdfs f(q̃_μ | μ, θ̂_μ^obs) and f(q̃_μ | 0, θ̂_0^obs), assuming a signal with strength μ in the s + b hypothesis and μ = 0 for the b hypothesis [10]. To generate the pseudo-dataset, the nuisance parameters are fixed. Once these distributions are calculated, two p-values are constructed.

In general, a p-value gives us a figure for estimating the agreement of the observed data with a given hypothesis. These two values, pμ and pb, are associated with the actual observation under the s + b and b hypotheses and are calculated as [10]

p_\mu = P(\tilde{q}_\mu \ge \tilde{q}_\mu^{\mathrm{obs}} \mid \mu s + b)

and

1 - p_b = P(\tilde{q}_\mu \ge \tilde{q}_\mu^{\mathrm{obs}} \mid b).

From these, we calculate

\mathrm{CL}_s = \frac{p_\mu}{1 - p_b},

which gives us a limit on the value of the signal strength for which 95% of the pseudo-experiments (toys) performed give a result more signal-like than the observed one [10] [12].
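As a minimal, self-contained illustration of this toy-based CLs construction, the sketch below scans signal strengths for a single counting bin. It is not the analysis code: the real search uses binned razor distributions, nuisance parameters, and the RooStats/combine machinery, and the yields here are hypothetical. It also uses the simple LEP-style likelihood-ratio statistic rather than the profile likelihood:

```python
import math
import random

def poisson(rng, lam):
    """Poisson sampler (Knuth's method; fine for the small yields used here)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def cls_limit_scan(s, b, n_obs, mu_grid, n_toys=2000, seed=1):
    """Return the smallest mu in mu_grid excluded at 95% CL by CLs.

    s, b:  expected signal (at mu = 1) and background yields in one bin
    n_obs: observed event count
    """
    rng = random.Random(seed)

    # LEP-style statistic q = -2 ln [L(mu*s+b) / L(b)];
    # larger q means a more background-like outcome.
    def q(n, mu):
        return 2.0 * mu * s - 2.0 * n * math.log(1.0 + mu * s / b)

    for mu in mu_grid:
        q_obs = q(n_obs, mu)
        # pseudo-experiments under the s+b and b-only hypotheses
        q_sb = [q(poisson(rng, mu * s + b), mu) for _ in range(n_toys)]
        q_b = [q(poisson(rng, b), mu) for _ in range(n_toys)]
        p_mu = sum(x >= q_obs for x in q_sb) / n_toys  # CLs+b
        cl_b = sum(x >= q_obs for x in q_b) / n_toys   # 1 - p_b
        if cl_b > 0 and p_mu / cl_b < 0.05:            # CLs < 0.05
            return mu
    return None
```

The scan returns the first signal strength at which the s + b hypothesis is excluded; scaling that strength by the reference cross section gives the cross-section limit.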

We can use CLs to determine whether or not to exclude the s + b model. For example, if we found CLs for a given μ to be 0.05, we would say that the s + b hypothesis could be excluded with a confidence level of 95% for that signal strength, setting limits on μ. To obtain expected limits, a large background-only dataset is generated and CLs is calculated as if it were real data. A cumulative probability distribution (cdf) is constructed from the results, and the limit lies where the cdf crosses the 50% quantile. One can find where this cdf crosses 16% and 84% to get the 1σ band and where it crosses 2.5% and 97.5% to determine the 2σ band. Implementation in Our Analysis In our analysis, we look to exclude the signal plus background hypothesis and we set an upper limit on the production cross section. We used the HistFactory method for 5fb-1 and combine for 19.5fb-1. In our 5fb-1 analysis, we first utilized a method called HistFactory to determine our observed limits. The HistFactory method is documented thoroughly in [11]. In this analysis, we used a Monte Carlo prediction of our background. To set the limits, code was generated to input binned one-dimensional histograms of R2 for the data, background, and signal into the HistFactory object in RooStats [11]. Both the background and data are normalized to the same value. These histograms are then treated like pdfs for the signal and background. We generate what is called a RooWorkspace containing




information about the data, background, and signal. The HistFactory generates a pdf for the signal plus background from the distributions that were inputted. The HistFactory then performs an inverted hypothesis test in which it scans results from toy simulations for various values of the parameter of interest to compute the confidence interval or limit. The hypothesis tests were carried out by a class called HypoTestCalculator, which uses a model for the null hypothesis (b only), a model for the alternate hypothesis (s + b), the data set, parameters specifying the null hypothesis, and parameters specifying the alternate hypothesis to obtain hypothesis test results. In our case, the parameter of interest is the cross section. We scan this parameter space at 12 points from 0 to 6 pb, and 1000 Monte Carlo toys were used. We use a Frequentist calculator and use the LEP statistic (a simple likelihood statistic) for our test statistic. For this model of dark matter with a specific coupling and mass, we generate plots of the distribution of the logarithm of the likelihood ratio discussed earlier (Fig. 5) for various values of signal strength. The distributions in the figure are those of the test statistic under the s + b and b-only hypotheses. The black line represents the value of the test statistic calculated for the observed data. As the signal strength is increased, we see these two distributions begin to separate and we are better able to discriminate one hypothesis from the other. This plot allows us to determine p-values to decide whether or not to reject the null model in favor of an alternative model. Eventually, we can exclude the signal plus background hypothesis.

Fig. 5: Log likelihood ratio for various values of signal strength. For signal-like events, the distribution of log likelihood values lies to the left (in red); for more background-like events, the curve lies to the right (in blue). The black line indicates the data. Successive panes of the figure show the log likelihood distributions for varying values of signal strength, the amount of signal injected.

As a result of the inverted hypothesis test, a p-value is stored for the null hypothesis. We can get the confidence intervals called CLs, CLs+b, and CLb from the p-value plot (Fig. 6). We use CLs instead of CLs+b as a more conservative approach for obtaining the cross section. The x axis on the plot indicates the ratio of the cross sections. The histograms of the simulated signal were normalized to 1 pb, since we did not know their cross sections. When the CLs curve intersects the red line at a particular value x, indicating a p-value of 0.05, we reject the signal plus background model for a cross section greater than x pb.

Fig. 6: p-value plot. At 3.8 pb, the CLs curve crosses the value 0.05, so we can set an upper limit on the production cross section and reject the signal plus background hypothesis for a cross section above 3.8 pb.

A sample of the results yielded by this method is shown in Tables 5-8. For our 5fb-1 sample, we use the binnings R2 : {0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.5} and MR : {200, 300, 400, 500, 600, 900, 3500}. We later chose to utilize the combine program [12],


Table 5. HistFactory Results 5fb-1: AVd coupling

Table 6. HistFactory Results 5fb-1: AVu coupling

Table 7. HistFactory Results 5fb-1: Vd coupling

Table 8. HistFactory Results 5fb-1: Vu coupling

Fig. 7. HistFactory Results

designed for the Higgs Analysis at CMS, to compute our limits. The HybridNew algorithm, by default, computes hybrid Bayesian-frequentist limits. However, we specify a fully frequentist limit to be calculated. We use the CLs rule, which gives us a way to compute a limit from the test statistic. We specify the test statistic used by this method to be ‘Profile Likelihood modified for upper limits’ (option --testStat LHC), which was described earlier. This profile likelihood allows the likelihood to be maximized with respect to the signal strength [12]. We use this HybridNew algorithm to calculate the upper limits on the dark matter production cross section both for a 5fb-1 sample of data using a background prediction from Monte Carlo and for the 19.5fb-1 sample using a data-driven background prediction. This approach is justified by our backgrounds being decaying exponentials in the razor variables. For our 19.5fb-1 sample we use the binning R2 : {0.5, 0.65, 0.85, 1, 2.5} and MR : {200, 400, 600, 800, 3500}. We use the data-driven background prediction and these specific binnings for our final results because we perform a closure test, which validates our method and allows us to extract systematic error contributions. Our results for the closure test are best for this specific binning and background prediction. The HybridNew approach allows us to perform a counting experiment and shape analysis. We create “shapes” for the four sources of background (W+jets, Z+jets, ttbar, and Drell-Yan) and shape uncertainties. To run combine, we must put all of this information in a datacard. This datacard references a root file containing distributions of the razor variables for the background broken down into components, distributions of errors for these background components, the signal distribution, distributions of errors for the signal, and the observed data distribution. We assign a 10% error on the luminosity and shape uncertainties to all backgrounds.
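For illustration, a minimal shape datacard of the kind described here might look as follows (the file name, yields, and the single luminosity nuisance are hypothetical placeholders; the real card also carries the full set of shape uncertainties):

```
imax 1  number of channels
jmax 4  number of background processes
kmax *  number of nuisance parameters
----------------------------------------------------------------
shapes * razor razor_shapes.root $PROCESS $PROCESS_$SYSTEMATIC
----------------------------------------------------------------
bin          razor
observation  -1
----------------------------------------------------------------
bin      razor   razor   razor   razor   razor
process  signal  wjets   zjets   ttbar   dy
process  0       1       2       3       4
rate     -1      -1      -1      -1      -1
----------------------------------------------------------------
lumi  lnN  1.10  1.10  1.10  1.10  1.10
```

With shapes declared, `observation -1` and `rate -1` tell combine to take the observed counts and the process normalizations from the referenced histograms.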
We assume the errors on the backgrounds are fully correlated and sum them in quadrature. The exact content of the datacard can be found on git. The exact command used to obtain the observed limits is “combine -d datacard.txt -M HybridNew --testStat LHC”. The “-T” option specifies the number of toys we run; the “-H” option tells combine to start calculating limits around where the observed limit from a simple profile likelihood is found; the “-M” option specifies the method (HybridNew); the “ ” option specifies that we want fully frequentist limits; and the “--testStat LHC” option specifies that we are using the profile likelihood modified for upper limits. Tables detailing the limits are shown in the Results section.

To support the plausibility of the limits that we obtained, we compare them to limits obtained using the Asymptotic method in the combine program, described in Appendix A of [10]. The Asymptotic method differs from the HybridNew method in that it requires no pseudo-experiments and is accurate when the dataset is large. It is a likelihood-based statistical test used in searches for new phenomena, in which a representative dataset, called the Asimov dataset, provides a simple way to obtain the median experimental sensitivity of a search or measurement. A table comparing the 95% CL observed limits from the Asymptotic approach to the limits from the HybridNew approach appears in the Results section; they are consistent. We also verify that our limits make sense by scaling the signal to 0.1 pb, 0.5 pb, and 10 pb; the production cross sections obtained are consistent and within the same order of magnitude as those for a signal scaled to 1 pb.
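The CLs procedure described above (pseudo-experiments, a test statistic, and the ratio CLs = CLs+b/CLb) can be illustrated with a toy single-bin counting experiment. This is a minimal self-contained sketch of the idea, not the actual combine implementation, and all yields are hypothetical:

```python
import math
import random

def q(n, s, b):
    # Simple likelihood-ratio test statistic -2*ln[L(s+b)/L(b)] for a
    # single-bin Poisson count n; larger q is more background-like.
    return 2.0 * (s - n * math.log(1.0 + s / b))

def poisson(lam, rng):
    # Knuth's Poisson sampler (stdlib-only; fine for small means).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cls(s, b, n_obs, n_toys=5000, seed=7):
    rng = random.Random(seed)
    q_obs = q(n_obs, s, b)
    # Pseudo-experiments under signal+background and background-only.
    cl_sb = sum(q(poisson(s + b, rng), s, b) >= q_obs for _ in range(n_toys)) / n_toys
    cl_b = sum(q(poisson(b, rng), s, b) >= q_obs for _ in range(n_toys)) / n_toys
    return cl_sb / cl_b  # CLs; signal excluded at 95% CL if CLs < 0.05

print(cls(s=8.0, b=10.0, n_obs=6))
```

With these toy numbers the observed count lies below the background expectation, so CLs comes out well under 0.05 and this 8-event signal would be excluded at 95% CL.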

As a confirmation of the hybrid limits, the asymptotic upper limits on the production cross section and the lower limits on Λd are shown in Table 9.

Table 9. Sample Results from Combine for AVd coupling
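The Asimov-dataset idea behind the asymptotic method can be made concrete for a single-bin counting experiment: replacing the observed count by its expectation yields the median sensitivity directly, via the well-known asymptotic formula of Cowan, Cranmer, Gross, and Vitells. The sketch below (with made-up yields) shows the discovery-significance version of that formula; this analysis sets limits rather than testing for discovery, but the Asimov logic is the same:

```python
import math

def asimov_significance(s, b):
    # Median expected discovery significance from the Asimov dataset
    # (asymptotic formula for a single-bin counting experiment, known b).
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields: 10 signal events on a background of 100.
print(round(asimov_significance(10.0, 100.0), 3))  # slightly below the naive s/sqrt(b) = 1.0
```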

From the results for Λd and Λu for the axial vector and vector couplings, we can calculate a lower limit on Λ for these couplings using the formula

Fig. 9: Λ95%CL Axial Vector coupling

Upper Limits on the Dark Matter-Nucleon Cross Section

Lower limits on the mass scale parameter can be translated into upper limits on the Dark Matter-Nucleon cross section for both spin-independent and spin-dependent cases using
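For the spin-independent case with a vector coupling, a commonly used EFT benchmark relation is σSI = 9μ²/(πΛ⁴), where μ is the dark matter-nucleon reduced mass; a natural-units result in GeV⁻² converts to cm² with (ħc)² ≈ 3.894×10⁻²⁸ GeV²·cm². Both the formula choice and the example numbers below are assumptions for illustration, not the exact inputs of this analysis:

```python
import math

GEV2_TO_CM2 = 3.894e-28  # (hbar*c)^2 in GeV^2 * cm^2

def sigma_si_cm2(m_dm, lam, m_nucleon=0.939):
    # Spin-independent DM-nucleon cross section for a vector coupling,
    # sigma = 9*mu^2/(pi*Lambda^4), with masses and Lambda in GeV.
    mu = m_dm * m_nucleon / (m_dm + m_nucleon)  # reduced mass
    return 9.0 * mu**2 / (math.pi * lam**4) * GEV2_TO_CM2

# Hypothetical point: a 10 GeV dark matter candidate with a 700 GeV limit on Lambda.
print(sigma_si_cm2(10.0, 700.0))  # a few times 1e-39 cm^2
```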

Results for Λ for the axial vector and vector couplings are listed in the two tables in Fig. 8.

Table 10. Results for Axial Vector Coupling

Results

In this section, we consider the results obtained using the combine limits for 19.5 fb-1 of data and a signal scaled to 1 pb. We utilize the HybridNew frequentist approach.

Lower Limits on the Mass Scale Λ

Λ is the mass scale parameter for the mediator of dark matter production. For this analysis, it is assumed that the mediator mass is sufficiently high that the Effective Field Theory is valid. Given the value of the mass of Λ used to generate our dark matter signal samples (40 TeV); MΛ (550 GeV); the theoretical cross sections used to produce these samples, σMC; and the observed cross sections for the samples, σobs; we can determine a lower limit for the mass scale Λ. The mass scale is related to the cross section by σ ∝ 1/Λ⁴, so Λobs = ΛMC (σMC/σobs)^(1/4). From this relation we obtain the lower limits on Λ.
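The rescaling step that turns an observed cross-section limit into a lower limit on the mass scale can be sketched in a few lines; because the cross section falls as 1/Λ⁴ in the effective theory, the limit enters through a fourth root. The numbers below are hypothetical:

```python
def lambda_lower_limit(lam_mc, sigma_mc, sigma_obs):
    # sigma ~ 1/Lambda^4, so Lambda_obs = Lambda_MC * (sigma_MC/sigma_obs)^(1/4).
    return lam_mc * (sigma_mc / sigma_obs) ** 0.25

# e.g. a sample generated at Lambda = 1000 GeV with sigma_MC = 0.5 pb,
# and an observed cross-section upper limit sigma_obs = 2.0 pb:
print(round(lambda_lower_limit(1000.0, 0.5, 2.0), 1))  # 707.1 GeV
```

Note the weak (fourth-root) dependence: a factor-of-four worse cross-section limit degrades Λ by only about 30%.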

As an example, the lower limits on Λd for the axial vector coupling to the d quark are listed together with the original upper limits on the production cross sections. The results include observed and expected limits at 90% and 95% CL.


Fig. 12: Λ90%CL Vector coupling

Table 11. Results for Vector Coupling

Fig. 10: Λ90%CL Axial Vector coupling

Fig. 8. Observed results for 19.5 fb-1 of data using combine, Λ (GeV) at 95% CL and 90% CL, for both axial vector and vector couplings.

For the HybridNew results with a signal scaled to 1 pb, plots of the 90% and 95% CL observed lower limits on Λ were created as a function of the dark matter mass for both the axial vector and vector couplings. The plots are shown in Figs. 9-12. In blue is the limit obtained by the mono-jet analysis [15]; in red is our result.

Fig. 11: Λ95%CL Vector coupling

where μ is the reduced mass of the dark matter-nucleon system. Two tables (Fig. 13) show the observed spin-independent Dark Matter-Nucleon cross section limits for the vector couplings and the observed spin-dependent Dark Matter-Nucleon cross section limits for the axial vector coupling, at 90% and 95% CL. The 90% CL results for the spin-dependent and spin-independent Dark Matter-Nucleon cross sections for couplings to the u quark and d quark are plotted in Fig. 14 and Fig. 15, where they may be compared with the 90% CL limits set by the monojet search at CMS [15], by various direct detection experiments [16-22], and by the phenomenological minimal supersymmetric standard model (pMSSM).

Interpretation and comparison to other experiments

The mono-jet study at CMS finds stronger limits for spin-dependent interactions than for spin-independent interactions [14]; this difference is consistent with the results obtained in this study. We find that the axial vector coupling to the u quark produces the best cross section limits. The monojet/mono-photon analyses set stronger limits on the mass scale parameter Λ and on the Dark Matter-Nucleon cross section. However, if we combine the cross sections obtained by the razor



Acknowledgments

I would like to thank and acknowledge funding for this research from the Southern California Edison funds through the MURF program and the Caltech SFP Office. I would also like to thank my mentor, Dr. Maria Spiropulu, and my co-mentors, Javier Duarte and Cristian Pena. I also thank our collaborators, Maurizio Pierini and Emanuele Di Marco, for their insight and recommendations.

Fig. 14: Spin-dependent Cross Section Limits

Table 12. Spin-dependent Cross Section at 90% CL

Fig. 15: Spin-independent Cross Section Limits

A breakdown of the number of events is recorded in Table 14. The data are consistent with our background prediction.

Table 14. Number of Events

Table 13. Spin-independent Cross Section at 90% CL

Fig. 13. Expected and observed results for 19.5 fb-1 of data using combine, Λ (GeV) at 95% CL and 90% CL, for both axial vector and vector couplings.

di-jet analysis with those from the mono-jet analysis, we may produce the strongest cross section limits yet and improve CMS’s sensitivity to dark matter at the LHC.


Conclusions

A framework has been set up to calculate production cross section limits and to transform these results into limits on the mass scale parameter Λ and into spin-independent and spin-dependent Dark Matter-Nucleon cross section limits for comparison with direct detection experiments. This framework requires only small modifications to incorporate additional uncertainties on the background prediction and uncertainties on our signal. The analysis framework can also be used to study other models of dark matter: simplified SUSY models (T2cc), ADD models of extra dimensions, a model in which the Higgs decays into two invisible particles, and models in which the effective field theory may not be valid.

References
[1] V. Trimble, "Existence and Nature of Dark Matter in the Universe," Ann. Rev. Astron. Astrophys. 25 (1987) 425.
[2] WMAP Collaboration, "Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation," Astrophys. J. Suppl. 192 (2011) 18.
[3] CMS Collaboration, "Search for contact interactions using the inclusive jet pT spectrum in pp collisions at sqrt(s) = 7 TeV," arXiv:1301.5023 [hep-ex].
[4] P. Fox, R. Harnik, R. Primulando, and C. Yu, "Taking a Razor to Dark Matter Parameter Space at the LHC," arXiv:1203.1662.
[5] C. Rogan, "Kinematical variables towards new dynamics at the LHC," arXiv:1006.2727.
[6] CMS Collaboration, "Inclusive search for supersymmetry using the razor variables in pp collisions at sqrt(s) = 7 TeV," arXiv:1212.6961 [hep-ex].
[7] N. Kanaya, "Search for Dark Matter candidates at the LHC," Symposium on Cosmology and Particle Astrophysics, Melbourne, Australia, 18-20 June 2009.
[8] K. Nakamura et al. (Particle Data Group), "Review of Particle Physics," J. Phys. G 37, 075021 (2010).
[9] CMS and LHCb Collaborations, "Search for the rare decay B0s -> mu+mu- at the LHC with the CMS and LHCb experiments: combination of LHC results of the search for Bs -> mu+mu- decays," CMS-PAS-BPH-11-019, LHCb-CONF-2011-047.
[10] ATLAS and CMS Collaborations and the LHC Higgs Combination Group, "Procedure for the LHC Higgs boson search combination in Summer 2011," CMS-NOTE-2011-005, ATL-PHYS-PUB-2011-11.
[11] "Building HistFactory models using C++ and Python," http://ghl.web.cern.ch/ghl/html/HistFactoryDoc.html (June 28, 2013).
[12] "Documentation of the RooStats-based statistics tools for Higgs PAG," https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit
[13] S. Malik, "Search for the pair production of dark matter particles at CMS," International Conference on High Energy Physics, Melbourne, Australia, 2012.
[14] S. Worm, "Searches for Dark Matter in Monojets and Monophoton Events at CMS," New Paths to Particle Dark Matter, Oxford, England, 30 March 2012.
[15] CMS Collaboration, "Search for new physics in monojet events in pp collisions at sqrt(s) = 8 TeV," CMS-PAS-EXO-12-048.
[16] CDF Collaboration, "A Search for dark matter in events with one jet and missing transverse energy in ppbar collisions at sqrt(s) = 1.96 TeV," Phys. Rev. Lett. 108 (2012) 211804.
[17] XENON100 Collaboration, "Dark Matter Results from 100 Live Days of XENON100 Data," Phys. Rev. Lett. 107 (2011) 131302.
[18] CDMS Collaboration, "Results from a Low-Energy Analysis of the CDMS II Germanium Data," Phys. Rev. Lett. 106 (2011) 131302.
[19] COUPP Collaboration, "Improved Limits on Spin-Dependent WIMP-Proton Interactions from a Two-Liter CF3I Bubble Chamber," Phys. Rev. Lett. 106 (2011) 021303.
[20] SIMPLE Collaboration, "First Results of the Phase II SIMPLE Dark Matter Search," Phys. Rev. Lett. 105 (2010) 211301.
[21] IceCube Collaboration, "Multi-year search for dark matter annihilations in the Sun with the AMANDA-II and IceCube detectors," Phys. Rev. D 85 (2012) 042001.
[22] Super-Kamiokande Collaboration, "An Indirect Search for WIMPs in the Sun using 3109.6 Days of Upward-going Muons in Super-Kamiokande," Astrophys. J. 742 (2011) 78.




The Triple Helix at the University of Chicago would like to thank the following individuals for their generous and continued support: Dr. Matthew Tirrell

Founding Pritzker Director of the Institute for Molecular Engineering

Eleanor Daugherty

Assistant Vice President for Student Life and Associate Dean of the College

Arthur Lundberg

Assistant Director of the Student Activities Center

Brandon Kurzweg

Student Activities Advisor

We also thank the following departments and groups: The Institute for Molecular Engineering The Biological Sciences Division The Physical Sciences Division The Social Sciences Division University of Chicago Annual Allocations Student Government Finance Committee (SGFC) Chicago Area Undergraduate Research Symposium (CAURS)

Finally, we would like to acknowledge all our faculty members and the mentors of our abstract authors for their time and effort.

Research Submission Undergraduates who have completed substantial work on a topic are highly encouraged to submit their manuscripts. We welcome both full-length research articles and abstracts. Please email submissions to uchicago.print@thetriplehelix.org. Please include a short description of the motivation behind the work, relevance of the results, and where and when you completed your research. If you would like to learn more about Scientia and The Triple Helix, visit http://thetriplehelix.uchicago.edu or contact us at uchicago@thetriplehelix.org.

Print Division

Production

Scientia

Co-Directors
Carrie Chui
Charles Peña

Coordinators
Chau Pham
Tima Karginov

Editors in Chief
Khatcher Margossian
Luizetta Navrazhnykh

Managing Editors
Irene Zhang
Jake Russell
Michael Cervia

Writers

Scientia Inquiries
Erin Fuller

Scientia Abstracts
Alex Gilman
Bennett Davidson
Claire Morley
Dora Lin
Kevin Zhao
Kimberly Huynh
Klevin Lo
Maria Ulloa
Maya Navarro
Miles Winter
Neal Shah
Neil Kuehnle
Patrick Morgan
Robin David
Sylwia Odrzywolska
Thomas Bsaibes
Veronica Ibarra

Research
Jawad Arshad
Natalie Harrison

Faculty Mentors
Andrew Raubitschek, PhD
Mark Sherman, PhD
Maria Spiropulu, PhD

Scientia Inquiry Faculty
Albert Colman, PhD

Marketing Director
Adiba Martin

Events Director
Cecilia Jiang

Coordinators
Anya Krok
Stephen Yu

E-Publishing Director
Austin Yu

Managing Editors
Jake Mullen

SISR Editor in Chief
Abhi Gupta

Managing Editors
Jacob Ryall

Executive Co-Presidents
Jawad Arshad
Melissa Cheng


