Scientia Inventing the Future - Winter 2019


SCIENTIA A JOURNAL BY THE TRIPLE HELIX AT THE UNIVERSITY OF CHICAGO

WINTER 2019

INVENTING THE FUTURE:

Applications of Science and Technology



Produced by The Triple Helix at the University of Chicago
Layout and Design by Bonnie Hu, Production Director
Co-Editors-in-Chief: Zainab Aziz, Nikita Mehta
Scientia Board: Zainab Aziz, Nikita Mehta, Josh Everts, Maritha Wang, Rita Khouri

contents



Introduction
About The Triple Helix
About Scientia: Letter from the Editors

Tackling Computer Science's Achievement Gap: An Inquiry with Dr. Diana Franklin (Ayushi Hegde)
Neutron Star Collisions & Kilonova Modeling (Jessica Metzger)
From Numbers to Consciousness: An Inquiry with Dr. Rebecca Willett (Sophia Horowicz)
The Influence of Geographic Structure and Natural Selection on Genome-Wide Association Studies (Christian Porras)
Creating the Future of Touch-Sensitive Neuroprosthetics: An Inquiry with Dr. Sliman Bensmaia (Danny Kim)
Characterizing the Effects of Sphingosine Kinase Inhibitor PF-543 on Model Lung Surfactant Monolayers (Pascale Boonstra)
Neuroethology from an Evolutionary and Computational Perspective: An Inquiry with Dr. Daniel Margoliash (Jarvis Lam)
Towards a Constraint on Quaoar's Atmosphere by Stellar Occultations (Thomas Cortellesi)
Deep Convolutional Neural Networks: An Inquiry with Dr. Michael Maire (Gillian Shen)
Photons and Consciousness in Relation to the Double-Slit Interference Pattern Experiment (Noor Elmasry)
Breaking the Boundaries Between Humanities and STEM: An Inquiry with Dr. John Goldsmith (Tony Song)


about the triple helix at the university of chicago

Dear Reader,

The Triple Helix, Inc. (TTH) is the world's largest student-run organization dedicated to evaluating the true impact of historical and modern advances in science. Of TTH's more than 20 chapters worldwide, the University of Chicago chapter is one of the largest and most active. Our TTH chapter continues to proudly share some of the most distinct publications and events on campus, engaging the minds and bodies of our institution and the public as "a global forum for science in society." Our mission, to explore the interdisciplinary nature of the sciences and how they shape our world, remains the backbone of our organization and the work we do. In addition to Scientia, we publish The Science in Society Review (SISR) and an online blog (E-Pub), while also creating events to discuss the most current, pressing topics at the intersection of science and our society.

Our organization is driven by talented undergraduates—writers, editors, and executive board members who come from all backgrounds and interests. The intellectual diversity of TTH members allows us to bring you the vast array of knowledge, research, and perspectives that we present in our works. We consciously strive to help each member grow, not only as a writer or editor, but also as a leader who will continue to ask the very questions that lead us to innovation and advancement as a society.

Over the years, TTH UChicago has expanded from just one journal and an online blog to a holistic outlet for all undergraduates on our campus. Everyone—whether in "the sciences" or not—is affected by science, contributes to it, and has to interact with it on a daily basis. We wanted to continue to grow the platform of "the sciences," to make it accessible to everyone. We now have insightful opinion pieces through SISR, brief reports through E-Pub, workshops and discussions through events, and original research and interviews with leading professors through Scientia—a whole ecosystem of knowledge that we hope challenges you to think and to actively join the ever-growing dialogue on "the sciences."

Today, I invite you to join us, The Triple Helix team, in celebrating the newest release of Scientia, one of our two biannual print publications. Scientia—unique to the UChicago chapter—was inspired by and continues to embody the motto of our university: Crescat scientia; vita excolatur (Let knowledge grow from more to more; and so be human life enriched). As you read our latest issue, I hope you are reminded of its essence and let your knowledge grow.

Best wishes,
Nila Ray
President of The Triple Helix at the University of Chicago



about scientia

Dear Reader,

The binary star collision on the front cover of Scientia illustrates the intersection of two hands—one artificial, and one human. This cover is inspired by Professor Sliman Bensmaia's work in the field of neural prosthetics, and by the recent advances made in our understanding of the universe. These are both topics featured in our Winter 2019 Scientia journal, titled "Inventing the Future: Applications of Science and Technology." Developments in prosthetics and our ability to detect and predict gravitational wave events are both the result of technological innovation in an age where humans can manipulate and modify the world around us beyond our wildest imaginations.

With this issue's theme, we want to celebrate the innovation taking place on our campus by featuring the work and lives of professors who are leading experts in their fields. We present six Inquiry articles on professors across various departments at the University of Chicago, including Drs. Diana Franklin, Rebecca Willett, Daniel Margoliash, Sliman Bensmaia, Michael Maire, and John Goldsmith. Undoubtedly, the works of these professors represent only a snapshot of the cutting-edge research taking place at the University. In addition to Inquiries, this edition includes five Works in Progress, highlighting the original research being performed by undergraduate students at UChicago. Our featured students share work on topics as diverse as the influence of natural selection on genome-wide association studies and stellar occultations.

Scientia is always looking to broaden our scope and expand the reach of our publication. If you are completing a research project and want to see it in print, or if there is a professor performing eye-opening research that you would like to share, consider writing for us! We encourage all interested writers to contact a member of our team, listed in the back. In the meantime, please enjoy this edition of Scientia, presented by The Triple Helix.

Sincerely,
Nikita Mehta and Zainab Aziz
Co-Editors-in-Chief of Scientia


inquiry
Director of Computer Science Education
Research Associate Professor
Undergraduate CS Major Advisor
Department of Computer Science

Tackling Computer Science's Achievement Gap: An Inquiry with Dr. Diana Franklin

Stepping inside the John Crerar Library—from the outside, a concrete block built in the brutalist style of the Regenstein Library—is like stepping inside another world. Recently renovated, its interior has the look of a laboratory with the feel of a new-age classroom: floor-to-ceiling whiteboards packed with proofs, graduate students sipping coffee over lines of code, undergrads unpacking lab reports on beanbag chairs. The building brings together a dozen disciplines, making it easy to forget that it houses the University's Department of Computer Science. But the environment is fitting for Professor Diana Franklin, whose office, like her work, lies at the heart of these intersecting disciplines. A Research Associate Professor in Computer Science and the Director of Computer Science Education at UChicago STEM Education, Franklin works to make her field more accessible by designing

curricula with applications beyond computer science. Franklin grew up in California, where she studied computer science and engineering at UC Davis. After completing her Master’s degree at UIUC, she returned to Davis to pursue a PhD in computer architecture. The decision to pursue a PhD, she says, was for practical reasons—after experiencing poverty as a child, she was determined to secure a stable future for herself. Still, her larger goal


Franklin works with students from Chicago’s St. Margaret of Scotland, 2015.


had always been educational research, making her transition into software a necessary change. "Schools don't hire faculty to do educational work, so I earned tenure through architecture. After that, I switched jobs and ended up in [educational] research," she explains. She received funding from the National Science Foundation for the first major project of her new line of research: a summer camp combining Mesoamerican history with endangered species through the lens of computer science. Designed to demonstrate the role of ethnicity and gender in shaping childhood interests, the camp focused on disciplines expected to attract Spanish speakers, young girls and boys, respectively; a closing survey showed that gender had a significant influence on participants' takeaways, with girls initially showing more hesitancy to pursue computer science than boys did. Franklin describes the experience as a defining moment in her career, particularly because it resonated with her reason for pursuing computer science. "Feminism," she

says, “means everyone should have the same opportunities. But ‘feminine’ professions are paid less—more women should be taking advantage of [the financial security in STEM].” At the same time, she describes that barrier as harder to overcome in a position like hers: “It’s recognized that there’s more pushback the higher a woman gets professionally, but there’s only so much we can do about it.” As a result, she tries to focus on helping younger women enter the field, tuning her letters of recommendation to account for the skepticism she knows her female students will face—a gap she works to lessen through her research. Currently, she’s involved in five projects with topics ranging from elementary curricula to quantum computing, each with the ultimate aim of making computer science accessible to students of all genders and backgrounds. The first of these projects is Learning Trajectories for Everyday Computing, an effort to integrate computer science into mathematics instruction in third- and fourth-grade classrooms. “The question,” she says,

"is how we can make these activities so they allow students to interact with math in a different way." Students are encouraged to play with concepts, building their understanding of math while developing an intuitive grasp of computer science. The approach allows each subject to be taught fully and authentically. Franklin studies a similar interdisciplinary interaction in Comprehending Code, a project that leverages the rich existing knowledge about improving the mathematics and reading comprehension skills of students with disabilities to design strategies for students struggling with coding. One such strategy, TIPP&SEE, helps young students focus their learning in Scratch programming. Like Learning Trajectories for Everyday Computing, the project emphasizes mindful playing as a way of interacting with important concepts; one of Franklin's favorite examples is coding charades, where kids act out commands to understand the role of loops and conditionals in a program. That approach is entirely different from that of Franklin's third endeavor, Scratch



Encore, a partnership with Chicago Public Schools. Here, the focus is developing computer science curricula for middle schoolers while considering cultural relevance—the backgrounds and interests represented in a given classroom. Teachers are given multiple ways to express the same idea, resulting in an approach that fits the abilities of their audience. More locally, Franklin is working with the Lab School to teach computer science to students in pre-kindergarten through fifth grade. Called CreaTive, this fourth project aims to expose kids to concepts through picture books; an example is If You Give a Mouse a Cookie, whose circular plot introduces the idea of an infinite loop. Unlike Franklin’s other projects, the goal isn’t strictly to teach computer science. “We choose books that can be read to kids with a little discussion— nothing too heavy-handed, but we’re planting little seeds,” she explains. Her fifth project is the most technical, focusing on education in the domain of quantum computing. As the education and outreach arm of EPiQC, an NSF Expeditions in Computing, it uses short games and comics to teach specific concepts in a way that’s approachable. Whatever the application, Franklin’s

overall aim is the same: bridging the gap between an accessible idea and an abstract one. Moreover, her results are promising. Not even past its pilot year, the TIPP&SEE method is already showing potential, picking up in schools around the country—everywhere from Chicago to San Francisco. According to Franklin, the novelty is what makes computing education exciting. "You can make a difference more easily than you can in other fields. If you do a good job, it could have a huge impact," Franklin says. Still, her line of work can


be challenging. Projects like Scratch Encore constitute relatively uncharted territory. "There aren't any structured intermediate-level curricula for that age group, so this is the first one," says Franklin. That unfamiliarity makes it hard to quantify impact, especially when the aim is reproducibility; since much of Franklin's work is novel, judging what and how to measure can be particularly difficult. "It makes it easy to get sucked into creating materials instead of focusing on the research," she says. Often, accounting for experimental control means withholding a valuable tool from certain students over others. "And schools don't want half of their kids with something," Franklin explains; the result is a reluctance to participate in studies with groundbreaking implications. "It's hard," she adds. "But you have to think about the good you can do today versus tomorrow. Building takes time." Until then, Franklin works in the moment. A knock on her door reveals a graduate student with a research question. "We're working on applying quantum computing to the game Exploding Kittens," Franklin says. Several of Dr. Franklin's projects use gaming to teach computer science to younger students, but it's surprising

to see the same concept applied in such a different learning environment. According to Franklin, her goal is the same. “It’s a high-risk field, fewer females—it’s about creating resources that make it accessible. We try to look at students who are less confident.” It seems to be working; her question answered, Franklin’s graduate student promises to test another approach. The encounter perfectly captures Franklin’s advice for young women considering computer science: “Don’t be afraid to get help. It’s not about whether you’re the best—the classes are very fast, so the worst thing is procrastinating on getting help. And we want to help you.” Most important is having confidence, which, at the University of Chicago, she promises is a valid feeling. “Anyone is capable if we’re here.”

Ayushi Hegde is a first-year student at the University of Chicago majoring in biology.


work in progress

Neutron Star Collisions & Kilonova Modeling
Jessica Metzger¹, James Annis²
¹The University of Chicago, ²Fermi National Accelerator Laboratory

Abstract

Binary neutron star collisions (BNSs) have been theorized as the sources of astrophysical phenomena ranging from gravitational wave emission and gamma ray bursts to high-energy neutrinos, and as the origin of heavy-element nucleosynthesis. On August 17th, 2017, the first BNS was observed in gravitational waves, and soon afterward, the electromagnetic counterpart, the "kilonova," was located. Models of this electromagnetic emission are interesting and useful in deducing properties of the merger. The history of kilonova theory and a physically motivated kilonova model based on the concept of the "neutrinosphere" are presented. The data from GW170817 (Gravitational Wave 170817), and fits to the lightcurves, can be described using this model.

Introduction


Pre-GW170817

When neutron stars collide, they are expected to eject some sort of matter, the "ejecta," the properties of which are central to observations. Until 2017, an understanding of BNSs came from simulations. At the moment neutron stars collide, they eject massive tidal arms of neutron-rich matter (demonstrated in the first BNS simulation by Davies et al. 1994) [1]. After the collision, the remnant either collapses promptly to a black hole or goes through an intermediate "hypermassive neutron star" (HMNS) phase as a rapidly rotating object surviving for tens of milliseconds with a mass larger than the maximum neutron star mass [2]. If there is an HMNS remnant, it oscillates, sending shocks that eject some "shock-heated" matter a few milliseconds after the collision [3,4]. These first two ejecta components, tidal arms and shock-heated matter, comprise the "dynamical ejecta." Neutrino irradiation may also be an ejection mechanism, blowing matter away in a "neutrino-driven wind" during the first few (~10) milliseconds after the collision [5]. Lastly, friction within the accretion disk causes much of it to be blown away in powerful "disk winds" [6,7]. These last two components comprise the "wind ejecta." In the 1970s, theorists at the University of Chicago and elsewhere began pointing out that the ejecta from neutron star collisions may be one of the sources of the

universe's heavy elements, synthesized through the "r-process" (rapid neutron capture) whereby neutrons in decompressing neutron-rich matter accumulate quickly onto seed nuclei, forming heavy elements like the lanthanides [8,9]. Simulations have since predicted that the r-process will occur in at least the dynamical ejecta from a BNS [10]. These elements form as very unstable isotopes and quickly radioactively decay, powering a supernova-like electromagnetic transient. In 1998, Li and Paczyński devised the first model of this transient, which predicted the lightcurves (its appearance in our telescopes over time). Metzger et al. in 2010 correctly predicted the brightness of a kilonova using r-process heating rates, coining the term "kilonova" (1000 times brighter than a regular nova) [11,12]. In 2013, two groups found that the synthesized elements would have very high opacities (many complex atomic transitions) that would cause photons to take much longer than expected to diffuse through the cloud of ejecta, thus causing the lanthanide-rich part of the signal to be even redder, dimmer, and slower-evolving than predicted [13,14].

Post-GW170817

The observation of GW170817 was monumental, both as a pioneering feat of multi-messenger astronomy and for its revolutionary contribution to kilonova theory. On August 17th, 2017, the Laser



Fig. 1 Dark Energy Camera (DECam) observations of the kilonova counterpart of GW170817 next to its host galaxy [16].

Interferometer Gravitational-Wave Observatory (LIGO) and Virgo gravitational-wave detectors saw the gravitational wave signal of a BNS, and gamma rays were seen at almost the same time [15]. Many teams located the optical counterpart (the kilonova) in a nearby galaxy and observed it with telescopes at UV to IR wavelengths. The kilonova started out bright in the UV and optical bands, in which it rapidly decayed (the "fast blue" component). There was also a dimmer, slower-evolving signal in the infrared (the "slow red" component; e.g. [16,17]). Many groups developed models for kilonova lightcurves and spectra and applied them to the lightcurves of GW170817. Notably, Kasen et al. did radiative transfer simulations of a spherical cloud of kilonova ejecta parametrized by its mass, velocity, and lanthanide fraction. They compared these models to the observed data and found a best-fit model of ~0.025 solar masses of low-lanthanide fraction ejecta traveling at 0.3c (the "fast blue" component) and ~0.04 solar masses of high-lanthanide fraction ejecta traveling at 0.1c (the "slow red" component) [18]. Villar et al. fit a 2-component modified supernova model to the lightcurves, and found it was best fit by similar "slow red" and "fast blue" components [19,20]. Both groups' best-fit components could correspond to the neutron-rich, "red" tidal ejecta and neutron-poorer, "blue" shock-heated ejecta. However, no models yet have treated the complicated relationships among different parameters (especially in the composition-setting process) or the complex geometry of the anisotropic system.

Methods

A Neutrinosphere Model

With the aim of inferring the angle of the orbital plane from the line of sight, a physically motivated kilonova model was developed, based on the geometry of the inner surface of the collision where the matter is opaque to neutrinos ("the neutrinosphere").

Remarkably, it is the heavy element nucleosynthesis that determines the color of the material ejected, and it is the history of the decompressing neutron star matter that determines the heavy element nucleosynthesis rate. A simple model that predicts the observations, based on the physics of the simulations, was built. This model describes the ejecta from a BNS, incorporating the two dynamical ejecta components (tidal tails and shock-heated ejecta). Each ejecta component is assigned a mass (Mej) and coasting velocity (vej). The geometry is illustrated in Figure 2, where the two angles are also parameters. While a three- or four-component model would be more accurate, the photospheres of the inner winds would be obscured by the dynamical ejecta, which is expected to cover the whole sphere. Lightcurves are then determined from these starting parameters.

Fig. 2 Two-component model of the kilonova ejecta geometry.

First, a method to determine the ejecta composition “from scratch” is developed. The nucleosynthesis products of any r-process event depend on the ejecta’s electron fraction (Ye, ratio of protons to protons plus neutrons), entropy per baryon, and expansion timescale. The most important factor is the Ye, since the r-process requires, most of all, abundant neutrons (thus favoring a low Ye for heavy element production), so the most time is devoted to parametrizing this quantity [21]. After the merger, the hypermassive neutron star (HMNS) remnant and accretion disk emit neutrinos and antineutrinos from a streaming surface known as the neutrinosphere. Informed by simulations (e.g. [5,22]), the neutrinosphere is modeled as a cylindrical blackbody with a circle on the top to represent the HMNS remnant (and with corresponding components for the antineutrinospheres), scaling the temperatures by literature values [5]. Ejecta near the poles will see a higher projected area of the


Fig. 3 In the neutrinosphere model, neutrinos from this cylindrical blackbody alter the composition of both ejecta components as they fly away.

blackbody, and thus more intense neutrino irradiation. As the neutron-rich ejecta flies away, this neutrino irradiation converts neutrons to protons and thus raises the Ye, as illustrated in Fig. 3. This composition-setting process occurs within the first ~10 ms after ejection and is governed by the equation: [Eq.1]
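In the standard form used in the kilonova literature (e.g., Qian & Woosley [24]), and presumably the form Eq. 1 takes here, the electron fraction of an ejected fluid element evolves as

dYe/dt = λ_ν (1 − Ye) − λ_ν̄ Ye,

with neutrino captures on neutrons raising Ye and antineutrino captures on protons lowering it.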

where λ_ν and λ_ν̄ are the neutrino and antineutrino capture rates, respectively, which can be calculated from the luminosities of our neutrinospheres [23,24], and depend on the distance to the source (since the (anti-)neutrino luminosity decays proportionally to r⁻²) and the polar angle (due to the anisotropic neutrino luminosity in the model). Thus, the final Ye exhibits dependence on velocity, polar angle, and ejection time, as illustrated in Fig. 4. More neutron-rich (low-Ye) ejecta synthesize heavier r-process elements (e.g. lanthanides). Once the Ye of each component has been determined, each component's lanthanide fraction is determined by linearly interpolating Lippuner & Roberts's nucleosynthesis grid [25].

Fig. 4 Some sample values of the modeled electron fraction parametrized by velocity, ejecta component (solid = shock heated, dashed = tidal), and polar angle.

Producing Lightcurves

As the ejecta cloud expands, radioactive decay of the newly-formed unstable nuclei puts out photons which diffuse through the cloud until they escape from the photosphere. The lightcurves of the kilonova can be parametrized by the mass, velocity, and lanthanide fraction of each component. Heavier, slower, or higher lanthanide fraction ejecta will produce a dimmer, redder, and slower-evolving signal. Once a lanthanide fraction is obtained for each component, the spectral energy distribution (SED; spectrum over time) is determined by interpolating Kasen et al.'s parametrized grid of radiative transfer simulations [18]. The two SEDs are weighted by their fractional masses and projected areas to produce a total observed SED, which is used to calculate lightcurves by convolving it with the filters of the instruments that took the photometry. The U, G, R, I, Z, Y, J, H, and Ks bands are fitted, with the U through Y bands taken by the Dark Energy Camera and the J through Ks bands taken by the FourStar, RetroCam, and VIRCAM instruments [16,26,27].

Results

Markov Chain Monte Carlo (MCMC) modeling is used to fit this model to the photometric data. In this method of parameter fitting, "walkers" walk through the parameter space, tending toward regions of higher likelihood and filling a posterior distribution for each parameter. A likelihood function is defined based on the χ² of a given model. A scatter term is added to the error in the likelihood function, to create realistic parameter distributions (as in [19]). The model was run on the Fermilab DES cluster until it converged on a best-fit model (parameters in Table 1). The lanthanide fraction of the best-fit shock-heated component was around 10⁻⁵, and the tidal component's was around 0.1. Best-fit lightcurves and all final walker positions, with the photometry that was used to fit them, are displayed in Fig. 5. The best-fit SED (including the tidal and shock-heated components that were summed to produce it) is also displayed against X-Shooter spectra, although the spectra were not used in the fitting (note that the features of the simulated spectra are approximate) [28].
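To illustrate the fitting machinery described here, the sketch below sets up an emcee-style ensemble of walkers for a deliberately simple toy lightcurve model, with a fitted scatter term added to the measurement errors in the likelihood (as in [19]). It is a schematic stand-in rather than the actual neutrinosphere-model code: the linearly declining magnitude model, the parameter bounds, and the synthetic data are all invented for the example.

import numpy as np
import emcee

# Synthetic single-band "lightcurve": magnitudes fading linearly with time.
rng = np.random.default_rng(0)
t = np.linspace(0.5, 10.0, 40)                        # days since merger
mag = 17.5 + 1.086 * t / 3.0 + rng.normal(0.0, 0.1, t.size)
mag_err = np.full_like(t, 0.1)

def log_posterior(theta):
    m0, tau, log_s = theta                            # peak magnitude, decline timescale, log extra scatter
    if not (10.0 < m0 < 25.0 and 0.1 < tau < 20.0 and -10.0 < log_s < 1.0):
        return -np.inf                                # flat priors inside simple bounds
    model = m0 + 1.086 * t / tau
    var = mag_err ** 2 + np.exp(2.0 * log_s)          # measurement error plus fitted scatter term
    return -0.5 * np.sum((mag - model) ** 2 / var + np.log(2.0 * np.pi * var))

ndim, nwalkers = 3, 32
p0 = np.array([18.0, 2.0, -3.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=500, flat=True)   # pooled post-burn-in walker positions
print(np.percentile(samples, [16, 50, 84], axis=0))   # median and 1-sigma interval per parameter

Percentiles of the pooled walker positions give medians and 1-sigma errors of the kind reported in Table 1.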



Table 1. Best-fit parameters with 1-sigma errors.

Discussion

The neutrinosphere model appears to describe the data well in most bands. Other 2-component model fits agree, roughly, with the masses and lanthanide fractions; however, most predict the tidal component to be much slower [18,19]. This is probably due to the dynamical ejecta's composition-velocity relationship included through the neutrinosphere model (Figure 4), which others did not include—the red, high lanthanide fraction component could only be achieved by allowing the ejecta to quickly escape regions of high neutrino irradiation. Based on ejecta masses and the accepted BNS rate, BNSs can more than account for the universe's light and heavy r-process elements—others estimate (e.g. [19]) that kilonovae must produce around 5×10⁻³ solar masses of light r-process and 7×10⁻⁴ solar masses of heavy r-process ejecta in order to be the dominant sources of r-process elements. Both estimates in this study are well above these thresholds (most notably above the heavy r-process threshold), which confirms

the theory that kilonovae are important (possibly dominant) sources of r-process material. The model also sheds light on the uncertain neutron star equation of state (EOS; e.g. the density profile). In BNS merger simulations using a soft, low-radius EOS, there is at least twice as much shock-heated as tidal ejecta (see, for example, [29]). The model supports about three times as much tidal as shock-heated ejecta, implying a relatively stiff EOS. Hopefully, this model will be used to fit future kilonovae, a handful of which are expected to be detected during LIGO's next observing run [30], starting March 2019. In the future, the group hopes to find a way to account for a third wind ejecta component to get more accurate mass predictions, and to use the viewing angle estimate to decrease uncertainty in LIGO's distance measurement, which will decrease uncertainty in the measurement of the Hubble Constant (the expansion rate of the universe).

Fig. 5 Best-fit model displayed over the data. Left: best-fit lightcurves which were used in the fitting, along with all final walker positions [16,26,27]. Right: best-fit SEDs over the spectra, not used in the fitting [28].


References [1] Davies M., et al.. Merging Neutron Stars. I. Initial Results for Coalescence of Noncorotating Systems. ApJ. 10.1086/174525. (1994) [2] Hotokezaka K., et al.. Binary neutron star mergers: Dependence on the nuclear equation of state. PhRvD. 10.1103/ PhysRevD.83.124008. (2011) [3] Bauswein A., et al.. Systematics of Dynamical Mass Ejection, Nucleosynthesis, and Radioactively Powered Electromagnetic Signals from Neutron-star Mergers. ApJ. 10.1088/0004637X/773/1/78. (2013) [4] Hotokezaka K., et al.. Mass ejection from the merger of binary neutron stars. PhRvD. 10.1103/PhysRevD.87.024001. (2013) [5] Perego A., et al.. Neutrino-driven winds from neutron star merger remnants. MNRAS. 10.1093/mnras/stu1352. (2014) [6] Metzger B., et al.. Short-duration gamma-ray bursts with extended emission from protomagnetar spin-down. MNRAS. 10.1111/j.1365-2966.2008.12923.x. (2008) [7] Lippuner J., et al.. Signatures of hypermassive neutron star lifetimes on r-process nucleosynthesis in the disc ejecta from neutron star mergers. MNRAS. 10.1093/mnras/stx1987. (2017) [8] Lattimer J. M. and Schramm D. N.. Black-Hole Collisions. ApJ. 10.1086/181612. (1974) [9] Symbalisty E. and Schramm D. N.. Neutron Star Collisions and the r-Process. ApL.. (1982) [10] Bauswein A., et al.. Prompt Merger Collapse and the Maximum Mass of Neutron Stars. PhRvL. 10.1103/ PhysRevLett.111.131101. (2013) [11] Wanajo S., et al.. Production of All the r-process Nuclides in the Dynamical Ejecta of Neutron Star Mergers. ApJ. 10.1088/2041-8205/789/2/L39. (2014) [12] Li L.-X. and Paczyński B.. Transient Events from Neutron Star Mergers. ApJ. 10.1086/311680. (1998) [13] Metzger B., et al.. Electromagnetic counterparts of compact object mergers powered by the radioactive decay of r-process nuclei. MNRAS. 10.1111/j.1365-2966.2010.16864.x. (2010) [14] Barnes J. and Kasen D.. Effect of a High Opacity on the Light Curves of Radioactively Powered Transients from Compact Object Mergers. ApJ. 10.1088/0004-637X/775/1/18. (2013) [15] Tanaka M. and Hotokezaka K.. Radiative Transfer Simulations of Neutron Star Merger Ejecta. ApJ. 10.1088/0004637X/775/2/113. (2013) [16] Abbott B., et al.. Multi-messenger Observations of a Binary Neutron Star Merger. ApJ. 10.3847/2041-8213/aa91c9. (2017) [17] Soares-Santos M., et al.. The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. I. Discovery of the Optical Counterpart Using the Dark Energy Camera. ApJ. 10.3847/2041-8213/aa9059. (2017) [18] Chornock R., et al.. The Electromagnetic Counterpart of the Binary Neutron Star Merger LIGO/Virgo GW170817. IV. Detection of Near-infrared Signatures of r-process Nucleosynthesis with Gemini-South. ApJ. 10.3847/2041-8213/aa905c. (2017)


[19] Kasen D., et al.. Origin of the heavy elements in binary neutron-star mergers from a gravitational-wave event. Natur. 10.1038/nature24453. (2017) [20] Villar V., et al.. The Combined Ultraviolet, Optical, and Nearinfrared Light Curves of the Kilonova Associated with the Binary Neutron Star Merger GW170817: Unified Data Set, Analytic Models, and Physical Implications. ApJ. 10.3847/2041-8213/ aa9c84. (2017) [21] Arnett W. D.. Type I supernovae. I - Analytic solutions for the early part of the light curve. ApJ. 10.1086/159681. (1982) [22] Hoffman R., et al.. Model Independent r-Process Nucleosynthesis - Constraints on the Key Parameters. NuPhA. 10.1016/S0375-9474(97)00278-9. (1997) [23] Raffelt G. G.. Mu- and Tau-Neutrino Spectra Formation in Supernovae. ApJ. 10.1086/323379. (2001) [24] Qian Y.-Z. and Woosley S. E.. Nucleosynthesis in Neutrinodriven Winds. I. The Physical Conditions. ApJ. 10.1086/177973. (1996) [25] Pllumbi E., et al.. Impact of Neutrino Flavor Oscillations on the Neutrino- driven Wind Nucleosynthesis of an Electroncapture Supernova. ApJ. 10.1088/0004-637X/808/2/188. (2015) [26] Lippuner J. and Roberts L. F.. r-process Lanthanide Production and Heating Rates in Kilonovae. ApJ. 10.1088/0004637X/815/2/82. (2015) [27] Drout M., et al.. Light curves of the neutron star merger GW170817/SSS17a: Implications for r-process nucleosynthesis. Sci. 10.1126/science.aaq0049. (2017) [28] Tanvir N., et al.. The Emergence of a Lanthanide-rich Kilonova Following the Merger of Two Neutron Stars. ApJ. 10.3847/2041-8213/aa90b6. (2017) [29] Pian E., et al.. Spectroscopic identification of r-process nucleosynthesis in a double neutron-star merger. Natur. 10.1038/ nature24298. (2017) [30] Sekiguchi Y., et al.. Dynamical mass ejection from binary neutron star mergers: Radiation-hydrodynamics study in general relativity. PhRvD. 10.1103/PhysRevD.91.064059. (2015) [31] Abbott B., et al.. Prospects for observing and localizing gravitational-wave transients with Advanced LIGO, Advanced Virgo and KAGRA. LRR. 10.1007/s41114-018-0012-9. (2018)

Jessica Metzger is a 2nd year studying physics and mathematics. She spent the summer doing research in kilonova modeling and wants to go into astrophysics research. She isn’t related to the astrophysicist Brian Metzger who coined the term “kilonova.”



inquiry
Professor
Department of Statistics
Department of Computer Science

From Numbers to Consciousness: An Inquiry with Dr. Rebecca Willett

Dr. Rebecca Willett has always looked at the world through numbers. As an undergraduate at Duke University studying Electrical and Computer Engineering, Dr. Willett loved solving real-world problems and finding patterns in data. She soon turned to research, which combined the two. While working in a lab that sought to develop an alternative method for detecting breast cancer, Dr. Willett built devices for collecting information and then devised algorithms to process it. However, she soon found herself drawn more to the analysis of data than to its acquisition. She became interested in "signal processing," a technique for analyzing data that extracts useful information from incomplete or distorted measurements. Dr. Willett uses the example of "photon-limited imaging" to explain

signal processing: “You might have noticed that if you try to take a picture in a dark room, it looks grainy. The problem is that only a small number of photons are hitting your detector, and that creates a lot of noise or errors in your image. Photon-limited imaging is about taking images that have these small numbers of photons and all this graininess and trying to get more accurate estimates about


Dr. Willett’s work has implications over a wide variety of fields. Dr. Willett is pictured here with her colleague from the University of Wisconsin–Madison, Dr. Brian Luck, with whom she developed an app to help farmers harvest corn (University of Wisconsin–Madison).



what the underlying scene looks like." By using signal processing, a garbled mess of data can be transformed into a comprehensible pattern. This is exactly the type of problem that Dr. Willett pursued in graduate school at Rice University, where she used signal processing to improve medical imaging techniques such as PET scans. To her, handling data was not a chore after experiments; it was the point at which extraordinary and unexpected patterns blossomed out of raw information. Extracting those patterns using different algorithmic tools was like a brainteaser for her—fun and challenging. She continued to work in electrical and computer engineering while completing her Master's and PhD at Rice University before becoming a professor of electrical and computer engineering at Duke University in 2005. Dr. Willett moved

to the University of Wisconsin–Madison's Department of Electrical and Computer Engineering in 2013, before joining the departments of statistics and computer science at the University of Chicago this past year. Dr. Willett primarily focuses on machine learning, which spans statistics, computer science, and electrical engineering. Coming from a background of electrical engineering and signal processing, Professor Willett now works on "inverse problems," a field which retains her original interests. Inverse problems involve extracting information from a physical system using measurements. However, Professor Willett explains that she can only "make indirect measurements of [the system], and those measurements might be distorted by noise. Again, what I want to do is extract some kind of information about the underlying system from these indirect, distorted measurements."
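Both photon-limited imaging and the "indirect, distorted measurements" framing can be made concrete with a toy example. The snippet below is purely illustrative (a synthetic one-dimensional scene, Poisson photon noise, a crude moving-average estimator) and is not code from Dr. Willett's group.

import numpy as np

# A toy 1-D "scene" observed with photon-limited (Poisson) noise: with few
# photons per pixel the raw image is grainy, and estimation is needed to
# recover the underlying intensity.
rng = np.random.default_rng(0)
scene = np.clip(np.sin(np.linspace(0.0, 3.0 * np.pi, 256)) ** 2, 0.05, None)

for mean_photons in (2, 20, 200):
    counts = rng.poisson(mean_photons * scene)                    # grainy measurement
    naive = counts / mean_photons                                 # per-pixel estimate
    smoothed = np.convolve(naive, np.ones(5) / 5, mode="same")    # crude denoiser
    print(f"{mean_photons:4d} photons/pixel:  "
          f"MSE naive = {np.mean((naive - scene) ** 2):.4f},  "
          f"smoothed = {np.mean((smoothed - scene) ** 2):.4f}")

At two photons per pixel the naive estimate is dominated by noise; even simple smoothing helps, and designing better estimators for exactly this regime is what signal processing research supplies.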


SCIENTIA

One of the earliest results on an inverse problem was published by Hermann Weyl in 1911, addressing how much of a drum's shape can be recovered from the sound it makes: from indirect measurements of sound, crucial information about the drum's shape can be inferred. However, inverse problems are not limited to static information. For systems in which information is constantly changing, inverse problems seek to understand the network that brings events together. On Twitter, for instance, one tweet can be thought of as a discrete event. But by analyzing the network of responses and retweets, the influence of that single tweet can be extracted from indirect data. Another important setting for inverse problems is the field of computational neuroscience. When one neuron fires, it can either trigger or inhibit firing in neighboring neurons. A pressing question is whether or not the underlying network of these neurons can be inferred from such discrete firing events. This could lead to a comprehensive model of the neural mechanisms behind memory, learning, and other cognitive functions. The nature of inverse problems also has important consequences for machine learning applications, such as self-driving cars. Although the inputs and outputs of the algorithm behind the car are known, the process itself is not. This leads to unpredictable failures that can have dire consequences. The opacity of these networks is inherent to inverse problems; yet, it may signal a major flaw in using techniques such as deep neural networks for machines that are meant to interact with humans. Dr. Willett points out that understanding when algorithms work and when they break down is an overarching question for her field of research. However, she is quick to clarify that the answer is not to "throw larger computers at the problem." Instead, innovation comes from finding the optimal solution to a problem. Rather than

building new computers, argues Dr. Willett, we should be focusing our efforts on designing more effective algorithms that can output better predictions. An important task for machine learning researchers, then, is to characterize the accuracy of a solution as a function of the amount of data available. Then, finding a good estimate for a network becomes a matter of identifying the amount of data that needs to be collected.
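A toy version of that "accuracy as a function of data" question is to fit the same model on ever-larger subsets of a dataset and track the held-out error. The sketch below does this with ridge regression on synthetic data; it is only an illustration of the idea, not one of Dr. Willett's analyses.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic linear regression problem: how does held-out error shrink as the
# training set grows?
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))
y = X @ rng.normal(size=50) + rng.normal(scale=2.0, size=5000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000, random_state=0)

for n in (50, 100, 200, 500, 1000, 2000, 4000):
    model = Ridge(alpha=1.0).fit(X_train[:n], y_train[:n])
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"n = {n:5d}   held-out MSE = {mse:.2f}")

Curves like this (error versus sample size) are one way to answer a collaborator's "do I have enough data?" question before any new data are collected.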

Many different types of researchers who require algorithmic networks use techniques that Dr. Willett studies. Some neuroscientists, for instance, use neural networks to predict behavior in animals, leading to breakthroughs in understanding complex cognitive functions. Optimization algorithms, such as those Dr. Willett builds, can be used to perform a search for the most accurate neural network, saving neuroscientists time and money. These are the "big problems" of Dr. Willett's field, and she stresses that most of the theoretical questions she tries to answer come from the practical needs of collaborators. "There are a lot of machine learning software packages out there," she explains, "and someone in, say, neuroscience may plug one of those packages into their dataset. Then they might say, 'I don't know if I have enough data' or 'None of the tools in this toolbox seem to be a good fit for my data.' That's where we come in. We're not experts in neuroscience, but we are experts in algorithms and this theoretical work." In this way, researchers like Dr. Willett apply their theoretical knowledge to a variety of practical

fields, from climate modeling to chaotic systems. This applicability to physical models whose raw data is insufficient is what makes machine learning so interesting and so important, says Dr. Willett. Emphasis on collaboration is the true drive behind the future of machine learning. Dr. Willett embraces this, and she hopes her lab will improve collaboration with national laboratories such as Argonne National Laboratory and Fermilab, which have large, interesting datasets. She is also working on a local level to more closely integrate the Toyota Technological Institute at Chicago with the University of Chicago campus through machine learning programs and projects. As technological progress accelerates around the globe, researchers will be able to collect more data on systems that are not easily observable, such as consciousness or language processing. The theoretical backbone of machine learning research may be the key to unraveling these physical processes, and to finding something beautiful within the numbers.

Sophia Horowicz is a first-year at the University of Chicago majoring in neuroscience and minoring in molecular engineering and French. She aspires to contribute to biotechnological advancements and explore the science of consciousness.


work in progress

The Influence of Geographic Structure and Natural Selection on Genome-Wide Association Studies
Christian Porras¹, Daniel Rice¹, John Novembre¹
¹University of Chicago, Department of Human Genetics

Abstract

Genome-wide association studies (GWAS) have shown that much of the variation in disease risk is due to rare deleterious alleles. When considering the geographic distribution of a population, studies have shown these alleles are removed by selection before they can spread beyond their original location. However, the spatial distributions of these alleles have not been characterized in detail. This study aims to understand how natural selection, spatial structure, and geographic sampling bias interact to determine the inferred local and global genetic architecture of a trait. The goal of this study is to develop theoretical models for the geographic spread of rare and deleterious alleles. These models will be used to calculate the distribution of allele frequencies as a function of the geographic sampling scheme, the strength of natural selection on the allele, and the population geographic structure. These theoretical results highlight the dependence of the GWAS-like results on the sampling scheme and the evolutionary process, and have implications for the interpretation of geographically localized GWAS cohorts. This work looks to quantify the effects of geographic sampling bias on GWAS in order to inform the sampling processes of future association studies and improve the capability of disease GWAS to predict individual risk.

Introduction

Quantitative and population geneticists are interested in finding statistical associations between genetic variants and particular traits. Genome-wide association studies (GWAS) are one common method for drawing associative conclusions and are especially useful when attempting to predict an individual's risk for disease. Disease GWAS function by comparing a large set of genotypes to the known disease phenotypes of members in the group. If members with a particular allele, or variant of a gene, are shown to have a significantly higher probability of also being in the group with the disease, then the GWAS supports the hypothesis that the allele increases one's risk of developing the disease. Disease GWAS have shown that deleterious alleles are rare [1]. Previous studies have also shown that these alleles tend to be removed by natural selection before they can spread beyond their original location. Therefore, deleterious alleles are geographically localized [2], although their geographic distributions have not been rigorously studied. Conducting a GWAS requires a large set of genomic data which is gathered by a sampling process. As a result, association studies feature significant geographic sampling bias. For example,


the UK BioBank contains samples of people who have migrated to the United Kingdom from around the world, but still represents a small fraction of global diversity. Therefore, it is important to assess if GWAS results from geographically localized studies, such as those from the UK BioBank, can predict disease risk in other geographic regions. The goal of this paper is to create a framework for evaluating how geographic sampling bias may affect the detection of rare deleterious alleles.

Methods

The simplest form of the model is based on the classic Wright-Fisher [3][4] and Stepping-Stone [5] models, which describe evolution and migration, respectively. The model describes the allele frequency in a population of individuals grouped into discrete demes, or subpopulations, connected by migration (Figure 1). The model simplifies the system by representing a single genetic locus with two alleles. The allele frequency in each deme, f_r, evolves due to four forces:


SCIENTIA

Fig. 1 This schematic represents the geographic structure for the model in one dimension. Circles represent demes, which are indexed by the integer r. There are L demes, each connected to its nearest neighbors via a symmetric migration rate m.

(1) Mutation changes one allele to another at rate μ;
(2) Natural selection reduces the frequency of the deleterious allele at rate s;
(3) Genetic drift introduces random noise in a finite population at rate 1/N, where N is the population size;
(4) Migration reduces the allele frequency differences between neighboring demes at rate m.

The change in allele frequency due to these forces is described by the stochastic differential equation:

[Eq.1]

Using this equation, the equilibrium distribution of allele frequencies across all demes can be calculated. In this paper, the model describes the sampling process by choosing individuals from demes at random according to a sampling distribution. Given the population parameters and sampling probability distribution, the expected number of sampled individuals with the focal allele is calculated.
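As a concrete, simplified illustration of these forces, the sketch below simulates a discrete-generation analogue of the model: L demes of N individuals, one-way mutation toward the deleterious allele, selection against it, symmetric nearest-neighbor migration with periodic boundaries, and binomial drift. The parameter values and implementation details are assumptions made for the example, not the authors' diffusion model.

import numpy as np

def simulate_stepping_stone(L=25, N=1000, mu=1e-4, s=1e-2, m=1e-2,
                            generations=4000, rng=None):
    """Toy discrete-generation analogue: returns the deleterious-allele
    frequency in each of L demes after the given number of generations."""
    rng = np.random.default_rng(rng)
    f = np.zeros(L)
    for _ in range(generations):
        # migration: each deme exchanges a fraction m with each of its two neighbors
        f_mig = (1.0 - 2.0 * m) * f + m * (np.roll(f, 1) + np.roll(f, -1))
        # selection against the allele (relative fitness 1 - s), then one-way mutation at rate mu
        f_sel = f_mig * (1.0 - s) / (1.0 - s * f_mig)
        f_exp = f_sel + mu * (1.0 - f_sel)
        # genetic drift: binomial resampling of N individuals in each deme
        f = rng.binomial(N, np.clip(f_exp, 0.0, 1.0)) / N
    return f

freqs = simulate_stepping_stone()
print("mean frequency:", freqs.mean(), " (compare mu/s =", 1e-4 / 1e-2, ")")

Averaged over demes (and over replicate runs, since single runs fluctuate), the simulated frequency comes out on the order of μ/s, anticipating the mutation-selection balance result discussed below.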

Preliminary Results

This section shows statistics computed from samples of alleles drawn from simulations of the model at equilibrium. Before moving to large population samples, which will be described in a later paper, the following simpler quantities are computed: (1) The probability that an individual in a sample of size 1 has the allele. (2) The probability that individuals in a sample of size 2 have the same allele. The probability that a single individual in a sample of size 1 has the allele is the average frequency <f_r>. Solving the stochastic differential equation above at equilibrium yields a proportionality between <f_r> and the ratio of the mutation and selection rates, μ/s: [Eq.2]

This proportionality is verified with simulations (Figure 2). The proportionality shown in Equation 2 is consistent with the classic mutation-selection balance theory. This theory describes the expected number of deleterious alleles in a population when the mutation

Fig. 2 This plot shows the linear relationship between the ratio of the rates of mutation over selection (μ/s) and the expected probability of an individual having the allele in a sample of size 1, the average frequency <f_r> (simulation with 25 demes, 1000 individuals).

and selection rates balance. Selection is shown to decrease the average frequency. This confirms that deleterious alleles are rare. The probability that two individuals, sampled from demes separated by a distance d, have the same allele is equal to the two-point correlation <f_r f_{r+d}>. This result can also be derived analytically. At equilibrium, the two-point correlation function decays exponentially as a function of distance d: [Eq.3]

Here, the correlation distance characterizes the rate at which frequency correlations decay with distance. Equation 3 is also verified with simulations (Figure 3). Strong selection and low migration decrease allele frequency similarities in neighboring demes. This confirms that deleterious alleles are geographically localized.
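Continuing the toy simulation sketched above, the correlation between demes can be estimated directly by averaging f_r · f_{r+d} over replicate runs; the decay with distance d toward a floor near the squared mean frequency is the geographic localization described here. Again, this is only an illustration, not the authors' analytic calculation.

import numpy as np

# Requires simulate_stepping_stone() from the earlier sketch; a hundred short
# replicate runs take a little while.
F = np.array([simulate_stepping_stone(rng=seed) for seed in range(100)])    # (replicates, demes)
baseline = F.mean() ** 2                                 # large-distance floor, roughly <f>^2
for d in range(8):
    pairs = F[:, : F.shape[1] - d] * F[:, d:]            # f_r * f_{r+d}, pairs taken without wrapping
    print(f"d = {d}:  <f_r f_(r+d)> = {pairs.mean():.2e}   excess over <f>^2 = {pairs.mean() - baseline:.2e}")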

Discussion

The model in this work in progress is shown to be consistent with existing population genetics theory. The preliminary work has confirmed the following: (1) Deleterious alleles are rare. (2) Deleterious alleles are geographically localized. These results can be generalized to samples of arbitrary size drawn from the equilibrium distribution of allele frequencies.


Fig. 3 Frequency correlations across space decay exponentially with distance (<f_r f_{r+d}> versus deme distance d; simulation with 25 demes, 1000 individuals). The ratio of the rate of migration over selection determines the rate of decay.

To do so, a sampling probability distribution function that specifies the geographic distribution of samples is applied. From there, probabilities of observing the deleterious allele are computed as a function of the width of the sampling distribution. These probabilities serve as estimates for the statistical capacity, or power, of a GWAS to detect the allele in a sampled population. This work is ultimately interested in understanding how the process of sampling may affect the power of a GWAS to detect deleterious alleles, as this interaction has not been previously studied. This theoretical framework aims to inform the interpretation of GWAS results and quantify the effect of sampling. The project aims to relate the power of a GWAS to the evolutionary forces acting upon an allele and the constructed sampling scheme. It is predicted that certain sampling schemes may increase the power of a GWAS for certain alleles. The model proposed allows one to quantitatively determine the optimal sampling strategies for given deleterious alleles of varying rarity. Increasing the power of a GWAS to detect rare deleterious alleles would likely improve predictions of individual disease risk. In order to further develop the theoretical model, higher spatial dimensions will be examined and multiple mutations will be simulated at once under varying selection and mutation rates.
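The sampling step itself can also be sketched in the same toy setting: draw a geographically concentrated sample (here, Gaussian weights of a given width around a central deme) from one simulated equilibrium, and estimate how often the deleterious allele is observed at all. The sampling scheme, widths, and sample size below are assumptions made for illustration; they show how detection depends on where the sample is concentrated relative to where the allele happens to be, which is the effect this project aims to quantify.

import numpy as np

# Requires simulate_stepping_stone() from the earlier sketch.
def detection_probability(freqs, sample_size, width, n_trials=500, seed=0):
    """Fraction of simulated samples containing at least one carrier, when
    individuals are drawn from demes with Gaussian weights of the given width."""
    rng = np.random.default_rng(seed)
    demes = np.arange(len(freqs))
    weights = np.exp(-0.5 * ((demes - len(freqs) // 2) / width) ** 2)
    weights /= weights.sum()
    hits = 0
    for _ in range(n_trials):
        chosen = rng.choice(demes, size=sample_size, p=weights)     # deme of origin of each individual
        if (rng.random(sample_size) < freqs[chosen]).any():         # at least one carrier sampled?
            hits += 1
    return hits / n_trials

freqs = simulate_stepping_stone()
for width in (0.5, 2.0, 8.0):
    p = detection_probability(freqs, sample_size=200, width=width)
    print(f"sampling width {width:4.1f} demes  ->  detection probability ~ {p:.2f}")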


References [1] Marouli, E. et al. Rare and low-frequency coding variants alter human adult height. Nature 542, 186–190 (2017) [2] Marcus, J. H. and Novembre, J. Visualizing the geography of genetic variants. Bioinformatics, 33(4), 594-595 (2017) [3] Wright S. Evolution in Mendelian Populations. Genetics 16, 97–159 (1931) [4] Fisher, R. A. On the Mathematical Foundations of Theoretical Statistics. The Royal Society, 222, 309–368 (1922) [5] Kimura M. and George H. Weiss. The stepping stone model of population structure and the decrease of genetic correlation with distance. Genetics 49, 561–576 (1964)

Christian Porras is a third-year student at the University of Chicago majoring in biological sciences with a specialization in quantitative biology. He aims to pursue an academic career as a physician-scientist.



inquiry
Associate Professor
Department of Organismal Biology and Anatomy

Creating the Future of Touch-Sensitive Neuroprosthetics: An Inquiry with Dr. Sliman Bensmaia

Every day, people use their limbs to interact with their environment and the people around them. While these activities of daily living are performed with ease, the neural mechanisms that make them possible are very complex. Sliman Bensmaia, an Associate Professor in the Department of Organismal Biology and Anatomy at The University of Chicago, is trying to understand the somatosensory system, which relays information about the state of the body and about its interactions with objects. Without it, we would struggle to move or to interact with our environment. Professor Bensmaia uses his knowledge to tackle two significant challenges: in his basic science research, he seeks to understand how the somatosensory system encodes information about the world around us; in his applied science research, he seeks to restore somatosensation in

patients who have lost it by equipping them with sensorized bionic hands plugged directly into their nervous systems. Although Professor Bensmaia is currently a leading expert in somatosensory neuroscience and neuroprosthetics, he did not start out being interested in these topics. Indeed, Bensmaia entered the University of Virginia as an engineering major and switched his major to


A goal that Professor Bensmaia hopes to achieve: a prosthetic hand that can move as well as the actual hand.

A symbolic photo of the dream where the feeling of a prosthetic hand is the same as that of a real hand.

cognitive science at the eleventh hour (his eighth semester). He began studying the sense of touch with his PhD mentor at the University of North Carolina at Chapel Hill, who specifically recruited Bensmaia into the program because of Bensmaia's programming abilities. After completing his PhD in Cognitive Psychology with a specialization in Neurobiology at the University of North Carolina, he did his postdoctoral work with Ken Johnson at Johns Hopkins University, the leading expert in somatosensory neuroscience at the time. At Hopkins, Bensmaia began his career as a neurophysiologist and neuroscientist. A few years into his tenure at Hopkins, Bensmaia was approached by a scientist from the Applied Physics Laboratory to help their neuroprosthetics team sensorize a bionic hand through a brain interface. Professor Bensmaia described, "I was initially very skeptical but when they told me the resources they could bring to bear on this, I said, let's give it a shot!" As a result, Professor Bensmaia began the process of developing a prosthetic arm that was indistinguishable from a normal one. Since becoming a faculty member at the University of Chicago


in 2009, Bensmaia has led a dual-track laboratory: one track on basic research, the other on translational research. One of his basic research projects consists of studying how the brain tracks hand positions and movements in real time. To this end, his team has macaque monkeys, whose somatosensory and motor systems are very similar to those of humans, perform a grasping task while precisely tracking the animals' hand movements and recording the activity evoked in the brain. He uses state-of-the-art motion tracking while recording the activity in two parts of the monkey's brain. This allows him and his coworkers to understand how the two cortices interact with each other. By using probabilistic methods, his team is able to pinpoint what aspects of hand movements trigger activity in individual neurons in the brain. Through his translational research, Professor Bensmaia is making sure that the breakthroughs he has made in understanding the somatosensory system are being utilized to help people. In 2016, Professor Bensmaia developed a brain-computer interface that recreates the sense of touch for paralyzed or amputee patients. The neural interface is implanted into the patient, with the robotic

arm connected to it transmitting sensory feedback through electrodes implanted in areas of the brain responsible for hand movement and touch. It does this by simulating how an intact hand and nervous system encode information, then mimicking that encoding in the neuroprosthetic. Bensmaia believes the only way to create a dexterous hand for an amputee would be for it to both move and feel—he has gotten one step closer to his goal of creating an indistinguishable prosthetic limb.
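The single-neuron encoding analysis mentioned above, pinpointing which aspects of hand movement drive a given neuron, can be sketched with a toy regression. Everything below (the synthetic kinematic features, the generic Poisson regression) is an illustrative stand-in, not the Bensmaia lab's actual analysis.

import numpy as np
from sklearn.linear_model import PoissonRegressor

# Toy encoding model: regress a neuron's spike counts on hand-movement features
# and inspect which features carry weight.
rng = np.random.default_rng(1)
T = 2000                                             # time bins
kinematics = rng.normal(size=(T, 4))                 # e.g. [vx, vy, speed, grasp aperture] (synthetic)
true_weights = np.array([0.8, 0.0, 0.5, 0.0])        # this toy neuron cares about vx and speed
spikes = rng.poisson(np.exp(0.2 + kinematics @ true_weights))

model = PoissonRegressor(alpha=1e-3, max_iter=500).fit(kinematics, spikes)
print("fitted kinematic weights:", np.round(model.coef_, 2))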

Despite the amount of work that goes into managing a dual-track laboratory, Professor Bensmaia spends a considerable amount of time teaching students. He teaches one of the core classes for the Computational Neuroscience major, called Methods in Computational Neuroscience. He's



Professor Bensmaia in his lab.

also a guest lecturer for several other classes, such as Topics in Integrative Organismal Biology, Systems Neuroscience, and Integrative Organismal Biology. His enthusiasm for education is related both to the guidance he has had and his role models. When describing the path he took to become an associate professor at the University of Chicago, Bensmaia credited much of his success to his mother and wife. “My mother was the one who told me to get off my butt and apply to graduate school,” said the professor in a light-hearted tone. “My wife is the one who pushed me to apply for the assistant professor position at the University of Chicago after getting tired of me complaining about being at Johns Hopkins. I can comfortably say that without these two strong women in my life, I would not have been able to achieve as much as I have now.” Due to Professor Bensmaia’s

efforts, the field of somatosensory neuroscience and neuroprosthetics has grown tremendously over the past decade. That said, Bensmaia’s work is nowhere close to done. He is still trying to understand how the sense of touch operates over six orders of magnitude in element sizes, from the smallest discernible elements, measured in tens of nanometers, to the largest elements that span a fingertip, measured in tens of millimeters. While he can replicate the sensory feedback sent between the nervous system and the arm, he has yet to uncover what the information contains. He is working on optimizing the electrodes implanted in the brain, as they only work for a few years and cannot be replaced. Professor Bensmaia is continuing to work towards his dream of creating a natural-feeling prosthetic arm, and in that process, will inevitably continue to push the field forward.

Danny Kim is a second-year student at the University of Chicago majoring in neuroscience and minoring in computational neuroscience. He hopes to learn more about how the brain stores information and solves problems, and to use that knowledge to build the next generation of artificial intelligence.


work in progress

Characterizing the Effects of Sphingosine Kinase Inhibitor PF-543 on Model Lung Surfactant Monolayers

Pascale Boonstra1, Ka Yee C. Lee1, Benjamin R. Slaw1, Luke Hwang1, Peter Chung1, Daniel Kerr1, Alessandra Leong1, and Tiffany Suwatthee1
1 University of Chicago, Department of Chemistry

Abstract Lung surfactant (LS) is a complex mixture of lipids and proteins that minimizes surface tension in the lungs; this minimization both stabilizes alveoli and reduces the work associated with normal breathing. Compromised LS is implicated in a number of lung diseases, including Respiratory Distress Syndrome in neonates and Acute Respiratory Distress Syndrome in adults. In this project, changes in LS behavior upon the addition of PF-543, a sphingosine kinase inhibitor with 31.3 nM binding affinity and potential as a therapeutic agent for the treatment of hyperoxia-induced bronchopulmonary dysplasia, were characterized using a model LS system made of lipids and relevant proteins. To understand the influence of PF-543 on LS, model surfactant monolayers mixed with PF-543 were deposited at the air/fluid interface of a Langmuir trough, which expands and compresses an LS film to mimic the inhalation and exhalation cycle of the lungs, respectively. Preliminary results show that the inclusion of PF-543 in the model monolayer generates small changes in observed compression behavior. However, the phase behavior and collapse mechanism of the model LS films are quite comparable; PF-543’s impact on the monolayer behavior is minimal. Because PF-543 has shown promising therapeutic effects in mice, it may someday be used as a lung therapy for humans once its effects are better understood. The data obtained in the study take the first steps toward assessing PF-543’s interaction with model LS and provide insight into the future design of surfactant therapeutics for hyperoxia-induced lung disease.

Mammalian respiration is optimized through lung surfactant (LS), a complex mixture of lipids and proteins that minimizes surface tension on the lungs. LS reduces the work of the inhalation and exhalation cycle and attenuates the risk of lung collapse. Compromised LS is correlated with lung diseases, and as a result, studying the biophysical behaviors of LS is required for lung disease therapy. PF-543, a potential drug candidate for certain types of lung diseases, has been studied extensively using a murine (mouse) model. However, PF-543’s interactions with a LS monolayer have yet to be studied. The investigation of these interactions is absolutely critical to the success of PF-543 as a therapy, as normal LS is crucial to healthy lung function. The varying effects of PF-543 and physiologically relevant proteins on films of the canonical 7:3 DPPC:POPG monolayer have been examined. If PF-543 causes a LS monolayer to display uncharacteristic behavior or lose its quintessential ability to reduce surface tension, then doubt could be cast on the therapeutic benefits of the PF-543 molecule. LS is composed of both anionic and zwitterionic phospholipids as well as four native surfactant proteins,


(SP)-A, B, C, and D [1]. Due to the hydrophobic and hydrophilic properties of lipids, lung surfactant forms monomolecular films called monolayers at apolar/polar interfaces such as the air/water interface. The effects of expansion and compression on LS can be modeled by depositing LS film at the surface of a constructed Langmuir trough and examined with a fluorescent microscope. To alleviate compressive stress, a model LS film undergoes a series of two-dimensional phase transitions analogous to common transitions between gaseous, liquid, and solid states [2]. The molecules do not interact significantly in the gaseous phase. The more tightly-packed molecules lose translational freedom in the liquid-expanded phase, and finally, lose almost all translational freedom in the condensed phase. In this solid-like phase, the monolayer is compressed to an area equivalent to the cross-sectional area of the film at its most densely packed configuration; this area depends on the lipid and protein composition used. Additional compression induces collapse of model LS into reversible folds [2]. Within the LS film, Super Mini B (SM-B) and PF-543 are also deposited. SM-B is a synthetic,


truncated mimic of the physiologically relevant native surfactant protein (SP) B, which itself is theorized to be the most critical surfactant protein due to its ability to induce the aforementioned folding in an LS monolayer [3]. PF-543 is a highly potent inhibitor of sphingosine kinase 1 (SphKs1) [5]. As mentioned above, PF-543 has been studied using a murine model: SphKs1-deficient mice showed a significant reduction in hyperoxia-induced lung injury [6]. Therefore, PF-543 is a potential therapy to be administered in response to threats such as hyperoxia-induced bronchopulmonary dysplasia.



Materials/Methods To analyze LS films, a custom-built Langmuir trough was used to mimic the expansion and compression during the inhalation and exhalation cycle of the lungs, respectively. Lipids in chloroform were deposited onto a liquid subphase; as the chloroform evaporated, the lipids formed a monolayer at the air/subphase interface. As the barriers compressed the LS monolayer symmetrically towards the Wilhelmy balance (a force balance that monitors surface tension), the area per molecule decreased, which resulted in an observable change in surface pressure. Surface pressure (π) is defined as the reduction in the measured surface tension (γ) relative to that of pure water, i.e., π = γ_water − γ_measured. Langmuir troughs produce Langmuir isotherms, which show changes in monolayer behavior during compression. A schematic of the Langmuir trough is given in Figure 1. LS can be simply approximated by a 7:3 molar ratio of zwitterionic dipalmitoyl phosphatidylcholine (DPPC) to anionic palmitoyloleoyl phosphatidylglycerol (POPG). This combination was chosen due to its comparable compression behavior with native LS. In some experiments, 5 wt% of SM-B was added to the lipids in chloroform to increase the physiological relevance of the model LS. To examine the effects of PF-543 on the LS monolayer, 5 wt% of PF-543 was added to the lipids in chloroform; the molecule was co-spread with the model LS. Experiments were all conducted on a liquid subphase of pure water at 25 ± 0.5 °C.
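For readers unfamiliar with these quantities, here is a minimal sketch of how surface pressure and area per molecule would be computed from trough measurements. It assumes γ_water of roughly 72 mN/m near 25 °C; the function names are illustrative and are not part of the study's actual analysis code.

```python
import numpy as np

GAMMA_WATER = 72.0  # approximate surface tension of pure water near 25 C, in mN/m

def surface_pressure(gamma_measured):
    """pi = gamma_water - gamma_measured, following the definition in the text (mN/m)."""
    return GAMMA_WATER - np.asarray(gamma_measured)

def area_per_molecule(trough_area_cm2, n_molecules):
    """Convert total film area to area per molecule in square angstroms."""
    return np.asarray(trough_area_cm2) * 1e16 / n_molecules  # 1 cm^2 = 1e16 A^2

# An isotherm (as in Fig. 2) is surface_pressure(...) plotted against
# area_per_molecule(...) for each barrier position recorded during a compression run.
```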

Fig. 2 Langmuir isotherms (surface pressure, mN/m, versus area per molecule, Å^2) of 7:3 DPPC:POPG with and without 5 wt% PF-543.

The inclusion of PF-543 in the model monolayer generates small changes in observed compression behavior. A discernible rightward shift between the isotherms in Figure 2 is theoretically explained by the presence of an additional molecule in the monolayer; this is supported by the Fluorescence Microscopy (FM) images in Figure 3 a) and c), which show a wider spacing of the condensed phase upon the addition of PF-543. The condensed phase otherwise appears quite visually similar between systems. Both systems with and without PF-543 also display standard folding behavior with bright, thin stripes, showing that in this system, PF-543 does not appear to modify the reversible folding mechanism key to surface tension reduction. In 7:3 systems that include 5 wt% SM-B, comparable compression behavior is also observed. The systems again both show a folding mechanism as evidenced by the thin stripes in Figure 4 b) and d). Additionally, the condensed phase shows visual similarity in both systems, showing that PF-543 still

Fig. 1 Simple schematic of the Langmuir trough: two movable barriers at either end, the liquid subphase, and the Wilhelmy force balance (with filter paper) that monitors surface tension.



Fig. 3 a) FM image of condensed phase in 7:3 system. b) FM image of fold in 7:3 system. c) FM image of condensed phase in 7:3, PF-543 system. d) FM image of fold in 7:3, PF-543 system. The white scale bar is 150 µm.

seems to have only a small effect on the properties of the monolayer upon compression, even in a more physiologically relevant system. PF-543 may also suppress protein squeeze-out in films containing SM-B, as seen in the disappearance of a feature (circled in Figure 5) theorized to be squeeze-out of SM-B. However, the phase behavior and collapse mechanisms of all the model LS films are quite comparable; PF-543’s impact on the monolayer behavior is thus considered minimal.

Discussion To a first-order approximation, preliminary results show that addition of PF-543 does not significantly affect the phase behavior or folding ability of model LS. The phase behavior and folding abilities between systems appear visually similar in fluorescence microscopy images. Although slight differences are observed both in the Langmuir isotherms and fluorescence microscopy images, the PF-543 seems to have little effect on the compression behavior of the films as exemplified by similarities in the varying systems presented above. In continuing this project, there are several future directions to be considered. In order to produce data that provide significant and relevant results, the similarity of experimental conditions to physiological conditions will be increased. This relevance can be increased in two main ways: first, by changing the water subphase to a buffer that more closely matches


Fig. 4 a) FM image of condensed phase in 7:3 system with 5 wt% SM-B. b) FM image of fold in 7:3 system with 5 wt% SM-B. c) FM image of condensed phase in 7:3, 5 wt% SM-B, PF-543 system. d) FM image of fold in 7:3, 5 wt% SM-B, PF-543 system. The white scale bar is 150 µm.

the pH and salinity of the human body, and second, by the addition of SP-C in the LS film, which will work with the SM-B to facilitate the reversible folding mechanism [3]. Conditions more similar to the human body will help elucidate what significant changes, if any, PF-543 has on the function of LS. Another potential direction is performing SphKs1 titrations in the subphase. By including SphKs1 in the subphase, the PF-543 in the LS monolayer may exclude itself, thus encouraging the monolayer to return to its original behavior. By gradually increasing amounts of SphKs1, the LS film may gradually become closer to its standard behavior without PF-543. This is important because this may be how the PF-543 would behave in the body: it would hopefully interact with and inhibit the SphKs1 instead of including itself in the monolayer. Lastly, in some experiments performed with SM-B and PF-543, a variety of multi-focal behaviors was observed, indicating a significant presence of the model LS below the air/fluid interface. This may indicate that the membrane properties of this system encourage the monolayer to form fewer, larger folds as opposed to many small folds. However, the data supporting this hypothesis are currently limited. With an improved understanding of the large folds formed with a PF-543 and SM-B system, the effects of addition of PF-543 in the LS monolayer can be better characterized.



Fig. 5 Langmuir isotherms (surface pressure, mN/m, versus area per molecule, Å^2) of 7:3 DPPC:POPG with 5 wt% SM-B, with and without 5 wt% PF-543.

References

[1] Parra, E. & Pérez-Gil, J. Composition, structure and mechanical properties define performance of pulmonary surfactant membranes and films. Chem Phys Lipids 185, 153–175 (2015).
[2] Lee, K. Y. C. Collapse Mechanisms of Langmuir Monolayers. Annu Rev Phys Chem 59, 771–791 (2008).
[3] Ding, J. et al. Effects of Lung Surfactant Proteins, SP-B and SP-C, and Palmitic Acid on Monolayer Stability. Biophys J 80, 2262–2272 (2001).
[4] Schnute, M. E. et al. Modulation of cellular S1P levels with a novel, potent and specific inhibitor of sphingosine kinase-1. Biochem J 444, 79–88 (2012).
[5] Wang, J., Knapp, S., Pyne, N. J., Pyne, S. & Elkins, J. M. Crystal Structure of Sphingosine Kinase 1 with PF-543. ACS Med Chem Lett 5, 1329–1333 (2014).

[6] Harijith, A. et al. Sphingosine Kinase 1 Deficiency Confers Protection against Hyperoxia-Induced Bronchopulmonary Dysplasia in a Murine Model. Am J Pathol 183, 1169–1182 (2013).

Conclusion Currently, there is no clear indication that the addition of PF-543 significantly changes the behavior of a model LS system. This project is in its early stages and much research still needs to be conducted, but the above experiments begin the work needed to characterize and evaluate interactions between PF-543 and LS. PF-543’s viability as a therapy depends critically on whether it modifies the crucial surface-tension-relieving abilities of LS. If further biophysical and murine studies confirm PF-543’s potential medical benefits, it may someday be administered as a lung disease therapy.

Pascale Boonstra is a second-year student in the College majoring in chemistry with a minor in art history. Her career interests currently lie in the intersection of science and law, and she hopes to pursue patent law.


inquiry Professor Department of Psychology


Dr. Daniel Margoliash, a professor in the Department of Organismal Biology and Anatomy & Psychology, approaches the field of neuroethology, the neural basis of animal behavior, in a computational and evolutionary context. Within minutes of meeting, he captivated me by asking how my audio recorder worked and figuring out how it functions. He went on to give a fascinating story of his journey into neuroethology.

Neuroethology from an Evolutionary and Computational Perspective: An Inquiry with Dr. Daniel Margoliash

A proud US immigrant, Dr. Margoliash attended the California Institute of Technology for his undergraduate studies. He started out as a Physics major but eventually decided to take the Biology path. When asked to reflect on his major choice, Dr. Margoliash recalled: “I was actually an undergraduate at Caltech when the first computer terminals were being installed in my dorm, which was an eye-opening experience. I always

think of it as my first formal introduction to a computer.” The encounter sparked Dr. Margoliash’s curiosity about the link between neural processes in the brain and computer algorithms and fuelled his passion for understanding the human brain through a logical and computational framework. He comments: “I remember having this fresh but defining insight into computers. It was like a light bulb appeared in my head: Oh! The brain



is like a computer! This may be a naive observation, but it was a source of inspiration for my future work.” The combination of neurobiology and computational approaches inspired Dr. Margoliash to stay at Caltech for a PhD in Engineering. His work initially focused on mathematical descriptions of neuron sensory inputs but eventually branched out to include neuroethology. This is when he met Mark Konishi, a prominent neurobiologist whose work focused on song acquisition in birds. Under Konishi’s mentorship, Dr. Margoliash studied the song acquisition process of white-crowned sparrows, particularly by analyzing the underlying songspecific neurons in their brains. “I was immediately invested after my first talk with Mark [Konishi] about how birds learn to sing,” he shared. “They first commit the song to memory from surrounding members and then practice singing until their voice crystalizes. Studying birdsong is biologically compelling, one of the best and most experimentally tractable examples of complex learning in vertebrates. Also, a key part is that this process has useful

similarities to how humans acquire their speech; thus you can apply an idea from one area to explore the other.” This insight went on to inspire many of his future projects. Dr. Margoliash went on to do his postdoctoral research at Washington University in St. Louis, which was at the time the mecca of neuroscience. There he worked with Nobuo Suga, another well-known Japanese biologist, on the physiology of hearing and echolocation in bats. There was a lot of uncertainty in neurobiology then surrounding how properties of single neurons relate to populations of neurons in the forebrain of birds and mammals, so Suga’s research was prominent in that it offered the first compelling example of remarkable specialization of cell areas in the cortex. At the University of Chicago, Dr. Margoliash is involved in a variety of research projects involving both graduates and other professors. One of his recent projects was centered around the contribution of sleep to song acquisition in zebra finches and resulted in the discovery of a process termed by the press

as “dreaming in songbirds.” In the experiment, scientists played songs to zebra finches in their sleep and recorded the activity of neurons in the birds’ forebrain song centers. It was discovered that these neural activities greatly resemble those recorded while the finches were singing. Apparently, zebra finches “dream” of singing during their sleep in order to consolidate their learning process. The result became a well-known discovery and was later found in other animals. “These studies have allowed us to explore what had been one of the uncharted areas of behavior: how activity during sleep helps to regulate consolidation of memories of actions performed during the day,” Margoliash said. Inspired by Dr. Margoliash’s approach to “dreaming” in birds, a graduate student in the Margoliash Lab, Kim Fenn, wondered if the same concept could be applied to humans. She began to study changes in human behavior over periods of sleep in the laboratory of Margoliash’s collaborator Howard Nusbaum (Professor of Psychology) and made a similar discovery. During sleep,


human memory of speech is consolidated into long-term memory, which helps stabilize speech perception in the future. This finding is one of the many examples in which studies of song acquisition in birds can inspire those of speech acquisition in humans and vice versa. Recent studies into birds’ neural activities have continued to yield interesting results at the Margoliash Lab. Collaborating with professors from various universities, Dr. Margoliash and a postdoctoral fellow, Arij Daou, discovered the existence of what biologists refer to as “intrinsic plasticity” in single neurons. Intrinsic plasticity here refers to changes within a neuron that allow larger or smaller electric currents to flow in response to the same input over time, in this case in the bird forebrain system for song production. Dr. Margoliash explained: “We study the activities of a single neuron through brain slicing, a laboratory technique in electrophysiology that greatly facilitates the study of electrical activity inside individual neurons, in isolation from the rest of the brain or within neural circuits in the brain slice. This resulted in evidence of plasticity within single neurons.”


This finding is quite significant in that, in a vast number of studies, only synaptic plasticity (changes in the connections between neurons) is considered when asking how the brain changes during learning. Not only


did Drs. Daou and Margoliash show evidence of plasticity within each individual neuron, but they also linked the change to a learned behavior, bird singing. During the study, they found that for those neurons in birds’ song centers that project to the basal ganglia, a region involved in motor control, properties of the neurons’ electric currents are similar within each individual bird but vary from bird to bird. This variation is tightly linked to the songs that birds sing. “This discovery has great implications across the neurobiology field,” he comments. “We have to change our model of how learning in the brain works. It’s no longer only the connection between neurons that change, but also within each neuron, changes occur due to learned behaviors. If this applies to humans, we can say that each person’s neuron structure is unique, and this may help explain how our speech patterns are distinct from one another.” “It is fascinating,” Dr. Margoliash mused, “if individuality of behavior extends to the strength of ion currents individual cells express.” The next step, Dr. Margoliash believes, is to study neuron plasticity in more complicated bird species, which may sing two or more songs, sing less fixed songs, or continue to learn songs in their adulthood. In the long term, he hopes that his work will be applied to the field of machine learning and artificial intelligence, tying to his early interest in the link between natural neural activities and artificial ones. He is interested in stories of how scientists around the globe are developing silicon models of neurons and making them plastic, while using non-linear mathematical models to create biologically realistic neuron systems. “The idea of taking lessons from biology to apply to artificial neural networks have been around for a while, but recently they have enjoyed a surge in popularity. I hope that my research will someday be helpful in constructing biologically feasible neural networks.”
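The distinction between synaptic and intrinsic plasticity drawn here can be illustrated with a toy simulation: in a simple leaky integrate-and-fire neuron, changing an intrinsic conductance, rather than any synaptic weight, changes the cell's response to the identical input. This is only a schematic sketch with made-up parameter values, not the biophysical models used in the Margoliash Lab.

```python
def lif_spike_count(input_current, g_leak, dt=1e-4, t_max=0.5,
                    c_m=200e-12, v_rest=-70e-3, v_thresh=-50e-3):
    """Leaky integrate-and-fire neuron; g_leak stands in for an intrinsic ion
    conductance. Returns the number of spikes fired for a constant input current."""
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        dv = (-(v - v_rest) * g_leak + input_current) / c_m
        v += dv * dt
        if v >= v_thresh:      # threshold crossing: count a spike, then reset
            spikes += 1
            v = v_rest
    return spikes

# Identical input, two different intrinsic conductances ("intrinsic plasticity"):
for g in (10e-9, 20e-9):       # siemens
    print(g, lif_spike_count(input_current=300e-12, g_leak=g))
```

With the smaller conductance the same 300 pA input drives repeated spiking; with the larger one the cell never reaches threshold, a miniature analogue of how a learned change in ion currents can alter a neuron's output without touching its synapses.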

Reflecting on the importance of academia, Dr. Margoliash stated that the most fundamental mission of research is to train the next generation of students: “A researcher’s life-long work is not enough to understand all of nature’s wonder, and most important is to be a good mentor for the next generation that will carry on their own work.” He explains that one of the reasons why he came to the University is because it set a good example of this value: “Research at UChicago is about commitment to education and should continue to be so.”

Jarvis Lam is a second-year at the University of Chicago planning to major in Computational and Applied Math. He is interested in machine learning, computer vision, and natural language processing, and in how they intersect with the study of the human mind. He plans to eventually work on strong AI.



work in progress

Towards a Constraint on Quaoar’s Atmosphere by Stellar Occultations
Thomas Cortellesi1
1 University of Chicago, Yerkes Observatory

Abstract Stellar occultations can provide detailed analyses of planetary atmospheres from great distances. Presented here is a preliminary observation of a stellar occultation by 50000 Quaoar (a small classical Kuiper Belt object) that occupies a unique niche between small, irregular objects and larger bodies with detected atmospheres. The observation was taken at insufficient resolution to extract atmospheric data. However, these early results provide a springboard for future studies with greater precision.

Introduction There are two distinct subpopulations of trans-Neptunian objects, which are minor planets that orbit around the sun at greater distances than Neptune: those with appreciable atmospheres and those without. The presence of an atmosphere is determined by an object’s semimajor axis and mass. If an object is too far from the sun, any atmosphere will freeze out. If an object is too close for a given mass, any atmosphere will escape to space. 50000 Quaoar provides a compelling example of a limiting case that could be used to probe this relationship empirically. Quaoar has a mass one-tenth that of Pluto and an upper surface temperature of ~44 K [1]. While semimajor axis is variable over the lifetime of an object like Quaoar, mass can be treated as invariant. If Quaoar migrates outwards, its atmosphere deposits onto the surface and can be re-sublimated should Quaoar migrate inward. Material is lost permanently only through atmospheric escape, so only this factor needs to be considered in determining whether Quaoar is a suitable target for studying an appreciable atmosphere. A useful heuristic in determining the boundedness of an atmosphere is the Jeans escape parameter λ, given by the ratio of the gravitational potential energy of a molecule at the exobase (top of the atmosphere) to kT, the energy scale factor:

λ = G M M_m m_p / [k T (R + z)]   [Eq. 1]

where G is the gravitational constant, M is the mass of the object, m_p is the mass of a proton, M_m is the mean molar mass of the atmosphere, R is the radius of the object, and z is the altitude of the exobase. If λ < 3, the atmosphere will escape hydrodynamically and be lost to space on short timescales. Objects expected to have retained atmospheres over geologic timescales will be sufficiently massive such that atmospheric loss is dominated by thermal escape, where λ > 3 [2]. Observations made by the Gemini South telescope suggest that Quaoar is unlikely to have isothermal N2 and CO atmospheres, but that a CH4 atmosphere could exist at less than 140 nbar [1]. Assuming an exobase temperature comparable to Pluto’s (~100 K) sets λ equal to 3.57. Assuming these constraints, Quaoar could have an appreciable atmosphere. Testing this specific case also probes the usefulness of the Jeans escape parameter more broadly. In addition, the detection of methane spectral features on an otherwise water-ice-dominated surface implies that Quaoar is a ‘transition object’ between volatile-poor trans-Neptunian objects and the handful of larger, volatile-rich objects like Pluto and Eris [3]. In the absence of a New Horizons-style mission to Quaoar, the most effective way to characterize its atmosphere is by observing stellar occultations, where Quaoar passes in front of and completely obscures, or ‘occults,’ a background star. Performing rapid-cadence photometry of these events produces light curves that can indicate such features as topography and (where atmospheres are present) scale height, temperature, pressure, and density profiles [4].
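As a quick numerical illustration of Eq. 1, the sketch below evaluates λ for Quaoar. The mass, radius, temperature, and exobase altitude are illustrative assumptions drawn from published estimates rather than values derived in this study, so the result (roughly 3.3) only approximately reproduces the quoted λ = 3.57.

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23       # Boltzmann constant, J/K
M_PROTON = 1.673e-27  # proton mass, kg

def jeans_lambda(mass_kg, radius_m, exobase_alt_m, temp_k, molar_mass_amu):
    """Jeans escape parameter from Eq. 1: ratio of the gravitational binding
    energy of a molecule at the exobase to the thermal energy scale kT."""
    m_molecule = molar_mass_amu * M_PROTON
    return G * mass_kg * m_molecule / (K_B * temp_k * (radius_m + exobase_alt_m))

# Assumed values: Quaoar mass ~1.4e21 kg, radius ~555 km, CH4 (16 amu), T ~100 K,
# exobase taken at the surface for simplicity.
print(jeans_lambda(1.4e21, 555e3, 0.0, 100.0, 16.0))   # roughly 3.3
```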


Fig. 1 Predicted map of the July 8th occultation.

Materials and Methods Multiple stellar occultation shadow paths were predicted to pass over Hawaii and the contiguous United States during the summer of 2018 [5]. Arrangements were made to observe multiple occultations from Haleakala Observatory with Faulkes Telescope North through the Las Cumbres Observatory (LCO) Network, and at Yerkes Observatory in the contiguous United States. The LCO pulled out of the project due to technical problems, and all but one of the events over the contiguous United States were clouded out. Data were therefore taken only once: on July 8th, 2018, at Yerkes Observatory, using its 24-inch (0.61 m) Cassegrain telescope and a specialized AT-910HX RC CCD camera. The time of occultation was predicted to be 05:30:15 UTC with a maximum duration of 49.4 seconds and a predicted magnitude drop of 4.0. The 15.1-magnitude target star (18h 01m 58.7898s, −15° 20′ 32.389″; J2000) was observed for a 12-minute block beginning at 05:27 UTC. Two reference stars of similar brightness were selected within the nearby starfield. The data were reduced using Tangra, a specialized occultation aperture photometry tool. Dark and bias frames were used to calibrate the recorded file, per standard procedure.
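For context, the kind of differential aperture photometry a tool like Tangra performs can be sketched as follows. This is a generic illustration, not Tangra's actual algorithm; the array names and the pre-event baseline window are assumptions.

```python
import numpy as np

def relative_light_curve(target_counts, reference_counts, baseline_frames=100):
    """Differential photometry: divide background-subtracted target counts by the
    mean of the reference stars to remove transparency/seeing variations, then
    express the result as a magnitude change relative to the pre-event baseline."""
    refs = np.mean(np.vstack(reference_counts), axis=0)
    flux_ratio = np.asarray(target_counts) / refs
    baseline = np.median(flux_ratio[:baseline_frames])   # assumes the first frames are unocculted
    delta_mag = -2.5 * np.log10(flux_ratio / baseline)
    return flux_ratio, delta_mag
```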


Fig. 2 Target star + Quaoar (blue) and two reference stars (green & pink).

Results Observation of this event was complicated by environmental factors; only a few hours before measurement, an unexpected fireworks display took place upwind of the observatory, elevating atmospheric aerosol and particulate levels and degrading viewing conditions. To resolve the event, the shutter on the CCD was slowed, increasing both the granularity of the data and the signal-to-noise ratio. This made it nearly impossible to characterize any atmospheric effects on the data; however, a significant dip in starlight was measured at 05:32:42.53 with a duration of 26.372 seconds and a magnitude drop consistent with the prediction.

Discussion Given the poor conditions, it is difficult to confirm whether the occultation was observed; the shadow may have missed (likelihood < 1), and the signature could be an artefact of the poor viewing conditions. Alternatively, the occultation could have been on target and temporally outside the bounds of error predicted by the Laboratory of Space Studies and



Fig. 3 Graph of light intensity, measured in flux (K), as a function of time (light curve) for the July 8th occultation.

Instrumentation in Astrophysics (LESIA) in Paris. As no other observations were recorded for this event, its veracity cannot be independently verified. Yerkes Observatory is no longer associated with the University of Chicago; however, the Las Cumbres Observatory Network ensured this project’s continuation by offering time on the 1-meter guide scopes scattered across the globe. Data analysis of LCO guidescope collections from July, August, and September of 2018 is ongoing. LCO guidescope data collection will resume when stellar occultations by Quaoar are once again visible at night, in March of 2019. In the interim, Faulkes Telescope North and the Athabasca University Geophysical Observatory Robotic Telescope (AURT) are conducting passive photometry of Quaoar to better constrain its surface conditions. While this single observation from July 8th, 2018 failed to determine whether Quaoar has an appreciable atmosphere, it laid the groundwork for future observations and study. With luck, Quaoar’s surface conditions will be well understood before the end of the decade. The author thanks the UCISTEM Grant, Paris Observatory, Las Cumbres Observatory Network, UNC Skynet, and Yerkes Observatory for their assistance in this project.

References

[1] Fraser et al., “Limits on Quaoar’s Atmosphere.” The Astrophysical Journal, Vol. 774, No. 2 (2013).
[2] Stern & Trafton, “On the Atmospheres of Objects in the Kuiper Belt.” The Solar System Beyond Neptune, University of Arizona Press, pp. 365–380 (2008).
[3] Schaller & Brown, “Detection of Methane on Kuiper Belt Object (50000) Quaoar.” The Astrophysical Journal, Vol. 670, No. 1 (2007).
[4] Elliot & Olkin, “Probing Planetary Atmospheres with Stellar Occultations.” Annual Review of Earth and Planetary Sciences, Vol. 24, pp. 89–123 (1996).
[5] Desmars et al., “Orbit determination of trans-Neptunian objects and Centaurs for the prediction of stellar occultations”

Thomas Cortellesi is a second-year student at the University of Chicago, majoring in geophysics and astrophysics. He hopes to pursue a Ph.D. in planetary science.


inquiry Assistant Professor Department of Computer Science

Dr. Maire was a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC) before joining the University of Chicago’s Computer Science department as an Assistant Professor this past year. The University’s plans to expand its computer science department, particularly in machine learning and related fields, provided the perfect opportunity for Dr. Maire to


Deep Convolutional Neural Networks: An Inquiry with Dr. Michael Maire

take on a new role at the University. TTIC, a philanthropically endowed computer science graduate research institute, is just a 20-minute walk away from UChicago. It is specifically focused on machine learning and its multitudes of applications, including vision, robotics, natural language processing, speech, computational biology, and theory. His ongoing collaborations with a number of TTIC students on various projects continue

to be symbiotically beneficial to both the University and TTIC. Part of Dr. Maire’s research is motivated by structuring deep neural networks and changing their internal architecture or training procedure to improve their efficiency and performance. Neural networks are computational systems inspired by the workings of the human brain that facilitate deep learning—a particular set of techniques within machine



learning—in allowing computers to solve problems with efficiencies and fidelities never before seen. Machine learning as a field covers a much broader range of problems and techniques while deep learning has been the primary driver of success for much of computer vision for the past six years. It has also influenced a number of other applications in fields such as natural language processing, robotics, and reinforcement learning. He and his collaborators use computer vision as a test bed for ideas, as they are interested in neural networks for their own sake as well as for computer vision applications. Dr. Maire received his BS from the California Institute of Technology in 2003 and his PhD from UC Berkeley in 2009 before returning to Caltech as a postdoctoral researcher. The two summers he spent at the California Institute of Technology engaged in undergraduate research set the tone for his research in computer vision at the graduate level and to this day. He continues to publish primarily in computer vision journals today, though he is also trying to branch out into deep learning more generally, as these techniques continue to be applied to computer vision. Dr. Maire recalls tinkering with an early IBM PC as a teenager, and developed a particular interest in computer science and science as a whole in high school. He attended a magnet summer program at The Montgomery Blair High School focusing on math, computer science, physics, and biology—a special program in Maryland on the outskirts of Washington, D.C., which gave him a head start in his college level preparation for the sciences. His high school offered a class with the aim of preparing students for participating in senior research programs. Dr. Maire enrolled during his senior year and was further propelled along the path towards his career in science. This class helped him to submit a project to the Westinghouse Science Talent Search, which has

since become the Intel Science Talent Search, and now the Regeneron Science Talent Search. This competition played a central role in fuelling his interest in the sciences, while better preparing him for his college-level scientific pursuits.

“Dr. Maire’s research is motivated by structuring deep neural networks and changing their internal architectures or training procedures in order to improve their efficiencies and performance, or train them for tasks previously unattainable.”

Deep neural networks have often been used in computer vision as convolutional neural networks (CNNs), in which a set of input representations is mapped to a corresponding set of output representations through layers of convolutional filters followed by nonlinear functions applied to the results of the convolutions. Many of these layers are stacked together to form one long computational circuit. Moving away from viewing these as a flat circuit in which an independent set of parameters is learned per layer is a step towards making these learned structures look more like programs that have internal structure. An analogy would be between specifying a circuit that performs the computation versus writing a program that has reusable computational subroutines and loops so that the program can be in a more compact form. Dr. Maire’s team is looking at learning networks that have an internal loop-like structure. Designing an architecture that progresses in this feedforward manner but whose layers share the exact same parameters as the previous layers means that, implicitly, loops are created in the network structure

as a first step towards trying to learn useful programs with useful internal structures. His team was able to do this for some CNNs that performed image recognition and classification. These neural networks are a hybrid between convolutional neural networks and recurrent neural networks. Recurrent neural networks are widely used in natural language processing, but they are typically specified as a fixed sequence of steps and layers that repeat periodically. Dr. Maire’s Automatic Colorization Project involves taking images from databases, decolorizing them, and using these images to train computer systems to automatically recolor them again. This process is particularly effective, as the quality of a computer’s continual guessing and checking mechanisms can be determined by how closely the end product resembles the original image before decolorization. This iterative process continually trains the system to improve its predictive capabilities and gives rise to better understandings of depth, texture, and a variety of parameters determining the nature of objects within an image. One immediate application-level rationale for Dr. Maire’s automatic colorization project is its use as a tool for computer graphics. For a while, much of the progress in computer vision was driven by having large volumes of labeled data available. Dr. Maire’s team trains a system to take images and replicate predictions that match what humans would have manually labeled. The system needs to identify where ground truths are located in certain objects as well as the categories such objects can be housed under; if it is trying to perform a depth prediction, then a depth sensor would collect RGB color values. The system is driven by having ground truths upon which to tackle prediction problems. The real motivation for this research, beyond applications in graphics, is the effort to develop a proxy task which trains a system to


Using deep neural networks, computers are able to identify common “objects” in an image, such as a human, or a dog (iLenze).

label a huge amount of data. There is almost an unlimited amount of training data for this task because one can take any collection of color images from the internet, automatically turn them into greyscale images, and set the computer to the task of trying to predict the color version of the image from the greyscale version. If an artificial neural network were trained to perform this task, we could imagine taking this neural network with millions of internal parameters and adjusting the parameters slightly to allow it to perform the next task better. The next step of fine-tuning the network for a target task of interest is object detection, or image classification. The end goal is to be able to perform the same task to the same degree of accuracy with fewer examples in the fine-tuning stage than at the start. The reason why colorization works incredibly well for this proxy task is that the task colorization performance— deciding what colors to assign to objects in an image—hinges upon the program’s ability to understand the content of an image. Thus, the sky should typically be blue, grass would normally be green, and other objects should generally have a consistent distribution of colors. This proxy task primes the network to extract representations that allow it to capture object identity or object category to some extent. That learning aspect is the primary motivation for this project.
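A minimal sketch of this colorization-as-proxy-task idea is shown below. It is a simplified stand-in (plain regression to RGB with a tiny network), not Dr. Maire's actual architecture, training objective, or dataset; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Toy network that predicts 3-channel color from 1-channel grayscale input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, gray):
        return self.net(gray)

def pretrain_step(model, optimizer, color_batch):
    """Self-supervised step: the grayscale input is derived from the color target,
    so no human labels are required."""
    gray = color_batch.mean(dim=1, keepdim=True)   # crude RGB -> grayscale conversion
    loss = nn.functional.mse_loss(model(gray), color_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyColorizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = pretrain_step(model, optimizer, torch.rand(4, 3, 64, 64))  # stand-in for real images
```

After pretraining on such a proxy task, the learned weights would be fine-tuned on the downstream target task (for example, object detection) with far fewer labeled examples, which is the payoff described above.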


Deep learning is already beginning to be applied to a plethora of domains with very large collections of data— particularly scientific domains with large volumes of experimental results. “At some point, Computer Vision will have to progress to real world data—data acquired from agents in the environment,” says Dr. Maire. Although Dr. Maire does not yet have ongoing projects in these areas, he hopes that innovations he makes in deep neural network capabilities may be applicable in a wide variety of scientific fields involving building models of data with large-scale datasets. “Imagine progressing from a world in which our visual system knows about a predefined, [recognizable], finite set of objects or categories…to a world in which we have an agent that needs to acquire new information about its environment and be able to identify and remember new objects and people the first time it meets them.” Computer vision itself has seen recent progress driven by such large-scale curated datasets. The first example would be the image identification set which drove progress on image classification. In these data sets on object detection, there may be scenes that consist of multiple objects—tables, chairs, laptops, bottles, and cellphones in the same image or scene, all of which we hope to identify. In pursuit of this, Dr. Maire has been involved in a multiyear effort in developing these large-scale curated

datasets—the Common Objects in Context (COCO) datasets that are driving progress in object detection and object segmentation. Some groups are working on even larger datasets, but at some point, computer vision will need to shift towards more real-world, large-scale datasets involving data acquired from the environment. Interfacing with robotics may be one way to explore that frontier.

Gillian Shen is a third-year undergraduate at the University of Chicago majoring in chemistry and biochemistry. She is particularly interested in solar energy as a solution to our energy nexus and hopes to work in the renewable energy and sustainable technology spheres in the future.



work in progress

Photons and Consciousness in Relation to the Double-Slit Interference Pattern Experiment
Noor Elmasry1
1 University of Chicago

Abstract To date, physics has not produced rigorous, verifiable explanations for numerous physical phenomena—the double-slit and delayed-choice experiments, the origin of the Higgs field, and the measurement of vacuum energy, to name a few. It is alarming that most of mankind’s understanding of the universe and of how its largest and smallest components interact is more of a mystery than declarative knowledge. Mathematical proofs are still far from explaining the discrepancies between observed behavior and the behavior predicted by mechanical or classical physics. Consequently, physics has been drifting further from classic Newtonian physics and closer to “interpretive” physics. Scientists have discovered that interpretations are needed to make sense of experimental data and an increasing number of observed physical phenomena. This research is focused on interpretation as it applies to Quantum Field Theory (QFT). Through following a well-established procedure for meta-analysis, two novel interpretations of QFT data are obtained based on three recent publications discussing a biological basis for the role of the observer in changing quantum outcomes. When put through a meta-analysis, these texts support the interpretation that human consciousness affects physical realities and vice versa, as well as that biological matter interacts through photons. The following study is important in contributing a new perspective and continuing the conversation on interpretations of quantum mechanics.

Introduction It is through observation and experimentation that scientists have discovered the majority of known information about the behavior of particles in the energy field. However, not all of their work was backed by experimentation. As defined by the Standard Model of physics, photons are the carriers of the electromagnetic force and the smallest packets of light energy [1]. As a result, studying the behavior of photons is crucial. The classic Double-Slit Interference Pattern Experiment (DSIPE), first conducted with light by Thomas Young in 1801 and later repeated with individual photons, studied the behavior of photons [2]. Photons were shot one at a time at a projector screen through two slits while a high-speed camera stood by on the side [2]. During the experiment, two anomalies were discovered: (1) The photon behavior mimics the pattern of both waves and particles. When the scientists tested to see if the photon behaved like a particle, it did. When the scientists tested to see if the photon behaved like

a wave, it did. This is an example of a longstanding classical physics peculiarity [3]. (2) From the above, scientists deduced that the mere act of observation changes the result of the experiment. The implications of this scientifically confirmed fact are profound because it has led to speculation that our consciousness, through an observer, can create or modify the outcome of an experiment [3]. In mainstream physics, at least fourteen different interpretations have been proposed to explain these anomalies, but none of their inherent assumptions have been experimentally proven due to mechanical and experimental limitations. A selection of these interpretations is shown in Table 1. Seeing as the last major interpretation to be published and considered by the scientific community was developed in 1994, this new perspective aims to trigger the cultivation of a new interpretation using information from recent


Table 1. Interpretations of the DSIPE with classifications. (Edited from Sebastian Fortin et al. “What Is Quantum Information?” [4])

studies. The scope of the research is strictly limited to the relationship between quantum physics and consciousness. Three sources were selected in the development of the new interpretation:
(1) The “Vacuum Experiments” by C. Riek et al.: a. “Direct Sampling of Electric-Field Vacuum Fluctuations” (2015) [5]; b. “Subcycle quantum electrodynamics” (2017) [6]
(2) “The DNA Phantom Effect: Direct Measurement of a New Field in the Vacuum Substructure” by Dr. Vladimir Poponin, after a discovery of Dr. Peter P. Gariaev (1991, 2002) [7]
(3) The Global Consciousness Project by the Institute of Noetic Sciences (1998–present) [8,9]

Methods This work directs its focus to addressing the extent to which interpretations from the publications above concerning photons and consciousness help contribute to the understanding of the Double-Slit Interference Pattern Experiment (DSIPE). To approach this question, the meta-interpretation technique, a new method for the interpretative synthesis of qualitative research, was used [10].


This method utilizes the researcher’s interpretation of a few selected publications to create simple synthesized constructs to guide the analysis. Figure 1 shows the methodology for data collection.

Development of Theory/Findings The Vacuum Experiments of 2015 (and of 2017) were groundbreaking discoveries. Before these experiments, physicists would discuss ideal test results occurring in a perfect vacuum, which was thought to model the environment of space in that it was completely empty. This belief still holds true in science classrooms around the world due to its perceived simplicity and convenience of eliminating interference with measurements. Until the 2015 study, scientists had only mathematically been able to prove that vacuums are, in fact, not empty, but filled with photons that continuously blink in and out of existence [5]. This phenomenon is known as quantum fluctuation (QF), which constantly occurs. A good way to understand QF is to consider photons as “virtual particles” and not physical matter. According to an analysis of the 2017 Vacuum Experiments, “although [virtual particles are] invisible, like most things in the quantum world, they subtly influence the real world” [12]. When directly observed, these quantum



Fig. 1 Flowchart of the meta-interpretation data collection process, from identification of the research area and of initial contrasting illustrative studies, through iterative thematic and context analysis with repeated review and re-development of exclusion criteria until a saturation point is reached, to the development of theory/findings and the statement of applicability. Source: “Meta-Interpretation: A Potential Procedure,” Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [11].

fluctuations are affected and no longer reflect their natural state. This was seen in the DSIPE, where results changed when an observer was present. Since the 1920’s, scientists have identified this complication as the “measurement problem” which states that detecting or amplifying single light particles in an effort to study the effects of quantum fluctuations will remove the “quantum signature” on the photon. This means that scientists cannot measure QF without disturbing their quantum state since photons either behave as a wave or particle when they are being measured, as specified in the second anomaly of the DSIPE. Through observing the light pattern around a neutron star in space, C. Riek et al. observed fluctuations in the quantum field, physically proving that vacuums are not empty. Perhaps what is most shocking is that these researchers might have discovered a way to “observe, probe, and test the quantum realm without interfering with it,” meaning that photons may be examined in their natural state without the effect of a present observer [12]. Building off of the 2015 report on detected quantum fluctuations, the same team has been able to manipulate the vacuum by observing these fluctuations and not disturbing their natural pattern. They accomplished this by using a laser that shutters at a trillionth of a second, which captures the

behavior of the photons before they have a chance to change their behavior [6]. The second publication, the “DNA Phantom Effect”, links to the first by studying the interaction of biological matter with photons in a vacuum. What is unique about this experiment is “that the fields of the DNA phantom have the ability to be coupled to conventional electromagnetic fields of laser radiation and as a consequence, it can be reliably detected and positively identified using standard optical techniques” [7]. In translation, the “DNA Phantom Effect” verifies that if DNA is placed inside a glass box subject to a vacuum state, it will affect light in the form of photons. In fact, in the presence of DNA, the arrangement of photons inside the vacuum box has been verified to take the shape of that animal’s DNA (i.e., a double helix). Additionally, even when the light source is removed, the photons which were already inside the vacuum box maintain their double helix arrangement for some variable amount of time. These findings let scientists study the vacuum substructure on a strictly scientific and qualitative level. According to Dr. Vladimir Poponin, this could be the basis for a more general nonlinear quantum theory, which may explain many of the observed subtle energy phenomena and eventually could provide a “physical theory of consciousness” [7].


Fig. 2 Cumulative deviation of chi-square (Z^2 − 1) for the Global Consciousness Project network from 10 minutes before to 4 hours after the first WTC crash on Sept. 11, 2001, plotted against Eastern Daylight Time, with event markers (1st crash, 2nd crash, Pentagon, 1st collapse, 2nd collapse) and the p = 0.05 envelope. Source: “Terrorist Attacks, Sept. 11, 2001,” Global Consciousness Project [16].


Bearing this in mind, it is now logical to define what consciousness is on the individual level. A study conducted by a team of experts at Harvard Medical School has speculated that specific parts of the brain are responsible for consciousness. “For the first time, we have found a connection between the brainstem region involved in arousal and regions involved in awareness, two prerequisites for consciousness,” said lead researcher Michael Fox [13]. From this, the researcher suggests that there is a physical link between our biology and our awareness. Through extensive studies on comatose patients, scientists have located a small area of the brainstem known as the rostral dorsolateral pontine tegmentum (RDPT) that was significantly associated with their levels of awareness. In light of research that suggests that consciousness is a localized brain function, one of the researcher’s interpretations may be that some people have unlocked more potential from their RDPT. In the same way that some people have better memories and more pathways that fire to their hippocampus, others could have better access to their consciousness potential through their RDPT.

When discussing firing pathways and accessing a deep meditative state, it is imperative to consider the research done on brainwave activity. There are four distinct levels of brainwave activity, identified by beta waves, alpha waves, theta waves, and finally delta waves, in descending order of frequency [14]. As a reference, humans regularly function in the beta and alpha frequencies during daily activities, and the lower the brainwave frequency, the deeper the meditative state the subject enters. While impressive on the individual level, consciousness at a collective level needs to be evaluated as well. The Global Consciousness Project (GCP) has studied collective consciousness on the most comprehensive scale. The GCP is an international multidisciplinary collaboration of scientists and engineers working to collect data from a global network of physical random number generators. Their purpose “is to examine subtle correlations that may reflect the presence and activity of consciousness in the world” [15]. This is measured by analyzing the frequency of pattern presence in simultaneous sequences of synchronized 200-bit trials per second at 73 host sites around the world over the past 19 years. While their methods may be



Fig. 3 Cumulative deviation (Z^2 − 1) of Global Consciousness Project data during the three minutes of coordinated silent prayer on Sept. 14, 2001 (Europe 10:00–10:03 GMT; United States 12:00–12:03 EDT), with the p = 0.05 envelope. Source: “Silent Prayer, Sept. 14, 2001,” Global Consciousness Project.

considered non-traditional by some, these researchers have used the strictest form of data filtering to make sure the statistical significance is high enough to draw conclusions. The entire database has a 5-sigma significance when analyzing the cumulative effect of hundreds of elections, natural disasters, terrorist attacks, etc., which means that there is less than a one in 3.5 million chance that the cumulative results for all the events arose by chance alone [8]. “Large scale group consciousness has effects in the physical world” is a bold statement, and one that has already been given a mathematical formulation in the von Neumann interpretation (see Table 1) [4]. Similar to the Vacuum Experiments, the issue arises when there is little observational proof to back up mathematical constructs; therefore, interpretation is needed. One of the best examples of the GCP detecting the consciousness field is the data that came out of the random number generators on the day of 9/11/2001. Beginning roughly ten minutes before the first plane crashed and lasting until four hours after, recorded pattern frequencies deviated markedly from the mean [8]. The graph has a p-value of 0.028. Additionally, during the period of silent prayer following the catastrophe, the cumulative deviation went in the opposite direction.
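For concreteness, the statistic plotted in Figures 2 and 3 can be reconstructed generically as follows. This sketch is inferred from the axis labels and the GCP's published description of 200-bit trials, not taken from the project's actual analysis code; the simulated data simply show what the trace looks like under the null hypothesis of fair, independent bits.

```python
import numpy as np

def cumulative_deviation(trials):
    """trials: array of shape (seconds, devices), each entry the number of 1s in a
    200-bit trial. Returns the cumulative deviation of (Z^2 - 1) summed over devices,
    the quantity shown on the y-axes of Figures 2 and 3."""
    z = (trials - 100.0) / np.sqrt(50.0)      # 200 fair bits: mean 100, variance 50
    return np.cumsum(((z ** 2) - 1.0).sum(axis=1))

# Null-hypothesis simulation: one hour of data from 60 devices.
rng = np.random.default_rng(0)
null_trials = rng.binomial(n=200, p=0.5, size=(3600, 60))
trace = cumulative_deviation(null_trials)     # should wander around zero
```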

Similar patterns have been observed locally when natural catastrophes hit an area, but the contrast between the two 9/11-related graphs, which capture moments of panic and of peace within defined windows of time around the same situation, is helpful for understanding the conditions under which a group consciousness can be affected. During the silent prayer and meditation, the researcher suggests, an increase in delta brainwave activity drastically affected the deviations in the collective-consciousness data. Reacting to a crisis of any magnitude increases alertness and activity in areas of the brain related to primal instinct. Meditative activity, on the other hand, such as the silent prayer on September 14th, 2001, increases activity in the prefrontal cortex, which is related to higher-level thinking and awareness. The interpretation that large-scale meditative activity physically affected the deviations in the data is further supported by the graphs, in which panic and prayer produced clearly opposite movements of the mean cumulative deviation.

Statement of Applicability

With everything considered, the following are two separate interpretations, or statements, that could explain the DSIPE.


By no means are these interpretations definitive or quintessential solutions, nor should they be taken as fact. They merely represent the extent to which current knowledge can be used to explain the phenomena in question.

(1) Interpretation: Our collective consciousness affects the materialization of randomness around us. Explanation: As shown through the GCP, the effects of collective consciousness on the external world can be measured by their effect on random number generators. One alternative interpretation may suggest that collective consciousness affects not only random number generators but also other random processes in daily life that would otherwise have a 50/50 chance of occurring.

(2) Interpretation: Humans have minimal interaction through light energy. Explanation: As shown through the “DNA Phantom Effect,” DNA affects photons by allowing itself to be copied and retained. It could be interpreted that all of our DNA is constantly being copied by photons and that those copies are interacting with our surroundings, in addition to other copies of DNA, at the quantum level. Simply put, the presence of human DNA affects the energy field; however, this raises the question: where does one individual’s zone of influence on the photon field end and someone else’s begin? Collectively, we are creating consciousness where our photon fields interact, in the same way that DNA affects light.

Analysis

Based on the qualitative data presented earlier in this paper, the following interpretations may be derived as likely sources or causes of the anomalies observed during the DSIPE.

(1) In relation to the DSIPE, claiming that collective consciousness affects the intrinsic balance of even and odd, ones and zeros, lefts and rights, and other random functions is one way the peculiar behavior of the photons could be explained. Photons have been observed to mimic the behavior of both waves and particles under different conditions. Perhaps in its natural state, QF naturally shifts between wave and particle motion in a 50/50 ratio, but the presence of conscious beings tampers with this balance. In the same way that random number generators may pick up signals and patterns related to collective consciousness around the world, minuscule photons in the space around us may pick up traces of individual consciousness. In many of the replications of the DSIPE, researchers were aware that the observer effect of their measurements was altering the photons’ behavior.


The researcher proposes that, in the context of how consciousness relates to our surroundings, the state of mind of the subject or observer is crucial to the nature of the results being recorded. The researcher interprets this study further, proposing that the lower the frequency of the brainwaves, the more interference there will be with the photon field. In other words, human consciousness affects physical realities and vice versa.

(2) The second interpretation to explain the DSIPE anomalies is that everything is interacting through photons. More specifically, these interactions occur through photon absorption, encoding, and re-emission between photons and people. This leads to the conclusion that everything occurs as a product of our presence. The photons in the experiment were affected by the presence of multiple people in the room and were further interfered with by the scientific equipment present in the lab. It is possible that with an increased human presence, and thus an increased DNA presence, the photons were more likely to copy the particles surrounding them rather than behave like waves in a vacuum. This interpretation falls short when explaining the difference in results when an observer is present. The researcher attributes this unexpected observation to the “hidden variables” that Einstein and his colleagues discuss in the EPR paper of 1935 [18].

Conclusion

In conclusion, this research addresses a gap in physics knowledge that the scientific community has been debating for over 50 years. This gap, which includes the interpretation of QFT, is so profound that it impacts our understanding of reality. As established, energy in the form of quantum particles underpins all matter in our world, from plants to people to planets, and the fact that we do not understand how light energy works is highly alarming. The Double-Slit Interference Pattern Experiment is a representation of this paradox, and it was therefore chosen as the focus of this meta-interpretation of three experiments concerning photons and consciousness, and of how they can contribute to an understanding of the experiment. The anomalies observed in the experiment call into question the legitimacy of our knowledge of reality. Because it has been established that energy is at the basis of all matter and that photons are carriers of energy, it can be assumed that these light particles carry information that underpins our perception of matter and therefore of reality. In the context of the earlier interpretations in Table 1, the two interpretations above comment on the role of the observer. In a way, consciousness is a tool that both allows us to access the knowledge that photons carry and allows us to be influenced by it. Looking forward, there is a need to pinpoint the source of the anomalies observed during DSIPE experiments.



A number of recommendations may be proposed:

(A) Replicate the DSIPE using electrically charged atomic particles, either electrons or protons, alongside photons in the same experiment. Comparing the effect of electrical charge on the traditional experiment is likely to yield further observations and interpretations regarding the Standard Model of physics at the atomic level. Protons, electrons, and photons coexist on the atomic scale, and studying how they interact is a better representation of how our world functions than limiting the study to only one particle.

(B) Use extreme high-speed cameras, capable of taking 1 trillion snapshots per second rather than the 1 billion snapshots per second used in the Vacuum Experiments, to further observe the behavior of photons. These cameras should be placed to observe the photons from the time they pass the slits until the time they hit the screen, to determine whether they exhibit an interference pattern (wave behavior) or two parallel lines (particle behavior). Such a measurement is likely to rule out some older interpretations of QFT once new research surfaces.

In sum, the need for interpretation of physical phenomena in relation to QFT is a rising concern, as it only shows how little concrete knowledge we have about our world. Initiating conversation, analyzing past studies, and interpreting the fascinating operations of quantum fields are viable means of furthering humankind’s understanding of physics. This research aimed to develop new ideas and a unique perspective on the matter through in-depth analysis and interpretation of current research relating to one unsolved phenomenon: the DSIPE. The findings suggested that human consciousness may affect quantum outcomes, or at least that there is some connection between the two. These interpretations were created with the hope of increasing the flow of ideas between researchers and making unconventional yet scientific theories increasingly commonplace in the scientific community.

References
[1] The Standard Model. CERN Accelerating Science. (2017)
[2] Young Two-Slit Experiment. Lecture. University of Oregon. (2014)
[3] Pascasio, J. M. et al. Modeling Quantum Mechanical Double Slit Interference via Anomalous Diffusion: Independently Variable Slit Widths. Physica A: Statistical Mechanics and Its Applications 392, 12, 2718-2727 doi:10.1016/j.physa.2013.02.006 (2013)
[4] Fortin, S. et al. What Is Quantum Information? Cambridge University Press. (2017)
[5] Riek, C. et al. Direct Sampling of Electric-Field Vacuum Fluctuations. Science 350, 420-423 doi:10.1126/science.aac9788 (2015)
[6] Riek, C. et al. Subcycle Quantum Electrodynamics. Nature 541, 376-379 doi:10.1038/nature21024 (2017)
[7] Poponin, V. et al. DNA Phantom Effect: Direct Measurement of a New Field in the Vacuum Substructure. Biblioteca Pléyades. (1991)
[8] Institute of Noetic Sciences. Formal Analysis: September 11, 2001. The Global Consciousness Project. (2009)
[9] Nelson, R. et al. Exploring Global Consciousness. Explore: The Journal of Science and Healing. 1-24
[10] Weed, M. ‘Meta Interpretation’: A Method for the Interpretive Synthesis of Qualitative Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 6, 1-25 (2005)
[11] Meta-Interpretation: A Potential Procedure. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research. (2005)
[12] Macdonald, F. Physicists Say They’ve Manipulated ‘Pure Nothingness’ and Observed the Fallout. Science Alert. (2017)
[13] Macdonald, F. Harvard Scientists Think They’ve Pinpointed the Physical Source of Consciousness. Science Alert. (2016)
[14] Goés, L. G. Binaural Beats: Brain Wave Induction and the Use of Binaural Beats to Induce Brain Wave Patterns. Curr Res Integr Med 3, 15 (2018)
[15] Institute of Noetic Sciences. The Global Consciousness Project: Meaningful Correlations in Random Data. Noosphere Princeton. (2017)
[16] Terrorist Attacks, Sept. 11, 2001. Chart. Global Consciousness Project. (2001)
[17] Silent Prayer, Sept. 14, 2001. Chart. Global Consciousness Project. (2001)
[18] Einstein, A. et al. Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review 47, 777-780 (1935)

Noor Elmasry is a first-year student in the College studying physics and philosophy.


inquiry
Edward Carson Waller Distinguished Service Professor
Department of Linguistics
Department of Computer Science
Physical Sciences Collegiate Division
Humanities Collegiate Division

Dr. John Goldsmith is one of the few professors on campus whose research in the humanities heavily incorporates STEM techniques. This is reflected in his faculty appointments in both the linguistics and computer science departments at the University of Chicago, where he regularly teaches an introductory course on computational linguistics to juniors and seniors in the College.

Breaking the Boundaries Between Humanities and STEM: An Inquiry with Dr. John Goldsmith

A pioneer in his field, Dr. Goldsmith currently studies morphology, the formation of words and their relationships to other words, through topological and quantum mechanical lenses. Dr. Goldsmith received an AB with honors in Mathematics, Philosophy, and Economics from Swarthmore College in 1972, and went on to receive a PhD in Linguistics from the Massachusetts Institute of Technology in 1976.

He recalls that his interests in mathematics and computer science, both essential tools in his current research, developed during his high school years. At Columbia University’s Saturday program for high school students, he took courses in machine language, assembly, and the programming language Fortran, which eventually made him fall in love with coding. Dr. Goldsmith’s first programming experiences consisted of typing at big, clunky machines with four kilobytes of memory that punched out paper tape, or feeding computers thick stacks of Hollerith cards that took an hour to produce output.

At Swarthmore College, rather than focusing on research, he decided to pursue a variety of academic interests. It was not until graduate school that he started focusing on linguistics. Dr. Goldsmith was encouraged by his linguistics professor to attend graduate school, and was given advice which he still shares with his own students: “It’s more important to know who you want to study with than where, or what, you want to study.” He was most inspired by Noam Chomsky in linguistics, and by Saul Kripke and Jean-Paul Sartre in philosophy. Dr. Goldsmith remembers reading some of Chomsky’s books during college, and despite not being able to comprehend exactly what everything meant, he was deeply attracted by the “mathematical view and aesthetic” at the center of Chomsky’s work.

The most important milestones in his graduate program were two comprehensive papers: one on syntax, the part of linguistics that studies sentence structure, and one on phonology, which focuses on the function of sound within a language. Dr. Goldsmith remembers asking Dr. Morris Halle, a leading figure in the department of linguistics at MIT, what to write about for his phonology and syntax papers. Dr. Halle suggested “vowel harmony in a language spoken in Kenya” and encouraged him to read a dissertation by one of Halle’s former students. Although Dr. Goldsmith was extremely drawn to the theoretical problems the sources presented, his phonology paper failed to meet the standards of his advisors. “I felt miserable, and so I spent another year thinking about it and wrote the same paper next year but exploring it a great deal more in the context of a language spoken in Nigeria called Igbo,” reminisces Dr. Goldsmith.

Looking at Igbo phonology from a “philosophy of science” point of view, Dr. Goldsmith came up with a new theory while writing his phonology generals paper. By the time he finished his dissertation two years later, his theory of autosegmental phonology had won wide acceptance in the field. His graduate work helped to shape his philosophy of life: “Never is your first serious try a success. Failure always precedes success, and it has always been a necessary step in my career.”

After receiving his PhD, Dr. Goldsmith started his career in academia at Indiana University in 1976, where he worked for eight years until he joined the University of Chicago’s linguistics department in 1984. In 2003, he received a joint professorship in the computer science department. He has served as the chairman of both departments and remains the only faculty member in the history of the university to hold such a position in both a humanities and a STEM department. Scholars of his generation who end up in computer science typically have a PhD in either math or physics, not in linguistics. Given how unusual his situation was, to Dr. Goldsmith it has always been “a great honor, and continues to be a great honor,” to have led both departments. As someone who is intellectually interested in computer science but did not go through the professional graduate school training, Dr. Goldsmith values the opportunity to walk into a CS classroom and speak about his knowledge of mathematics, computer science, and linguistics in front of a group of talented students.

Dr. Goldsmith divides his research into two major areas. The first is the epistemological side, where he explores the ways in which knowledge forms differently in individuals, as well as how it forms and evolves across disciplines. Three areas that are very closely linked to linguistics are philosophy, psychology, and logic. For the past ten years, Dr. Goldsmith has been working on a project exploring questions of “rupture and continuity” in the fields that have involved linguistics and how these disciplines constantly “split apart” and “come together.” This project takes the form of a book, titled Battle in the Mind Fields, which will be published by the University of Chicago Press in February. According to Dr. Goldsmith, the book’s most important takeaway “is that it’s an exhortation to younger people, to re-appropriate their history.” Every generation creates its own stories, some of which have a significant enough impact to be passed down indefinitely from generation to generation. These are the foundational stories that have the potential to shape future generations’ understanding of the world through numerous perspectives, including mathematics, philosophy, and logic. The overt problem with this phenomenon, which Dr. Goldsmith illuminates in his book, is that many of the stories we take for granted as foundational principles are not necessarily historically consistent, nor are they conducive to the field’s success. “Many of [the stories] are self-serving in a totally non-positive sense of the world,” he says. In fact, the myths of our disciplines have a tremendous influence on our education, and it is essential that we figure out which “accepted truths” should be characterized as myths and how to make that characterization.


Fig. 1 A word manifold constructed from the English lexicon. Visible clusters include adjectives, nouns, past tense verbs, auxiliary verbs, bare stem verbs, prepositions, countries, years, numerals (.1, .2, ...), and personal names (richard, john, ...).

Dr. Goldsmith’s book is motivated by how “everything we now take as true and has been established in our discipline was at some point in history an answer to a very live question where there existed a bunch of different possible answers.” The book aims to inspire edifying conversation, promote the benefit of the doubt, and make people pause in whatever they are doing to rethink the most foundational premises of their work. Ultimately, Dr. Goldsmith explains that it is the essence of his book to provoke this process of thinking and realization, “so people can understand that there are answers that didn’t come from heaven or from God.”

The computational side of Dr. Goldsmith’s research uses machine-learning techniques to investigate the unsupervised learning of natural languages. More specifically, it focuses on how machine learning provides the tools to learn the structure of a language purely from the data fed into the system. Since 2003, Dr. Goldsmith has been working tirelessly with his students to develop and continuously update a set of algorithms, called the Linguistica project, for determining the morphology of a natural language without any prior knowledge of that language. The most recent version of the program, Linguistica 5, was fed an English corpus of 62,000 words provided by the digital encyclopedia Microsoft Encarta. Without any prior knowledge of the English language, the program organizes the data into a lexicon of distinct words, categorizes them into monomorphemic and polymorphemic words, and finds a suffixal system through signature-based analysis of the polymorphemic words (a morpheme is the most fundamental unit of a language that cannot be broken down further into meaningful segments). The program then identifies the number of suffixes and signatures within the lexicon and creates geometrical representations of the signatures, whose shapes are determined by the number of suffixes within each signature. For instance, one quite robust signature discovered in the analysis was the chain “NULL-al-ally-s,” and the corresponding stems that showed up in the corpus included “competition, economic, exception, and occupation.” The program can then sort by other morphological characteristics so the user can see how words connect to one another. Currently, Dr. Goldsmith is using Linguistica to study English and Swahili.
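As a rough illustration of what signature-based analysis means, the sketch below groups stems by the exact set of suffixes they take in a word list. It is not Linguistica’s actual algorithm: the suffix inventory, the minimum-stem-length heuristic, and the toy word list are assumptions made purely for illustration.

from collections import defaultdict

SUFFIXES = ["", "al", "ally", "s"]      # "" stands for the NULL suffix

def find_signatures(words):
    words = set(words)
    stem_suffixes = defaultdict(set)
    for w in words:
        for suf in SUFFIXES:
            if suf and w.endswith(suf) and len(w) > len(suf) + 2:
                stem_suffixes[w[: len(w) - len(suf)]].add(suf)   # strip the suffix
            elif not suf:
                stem_suffixes[w].add("NULL")                     # the word itself
    signatures = defaultdict(list)
    for stem, sufs in stem_suffixes.items():
        if stem in words and len(sufs) > 1:                      # stem must be attested
            signatures[tuple(sorted(sufs))].append(stem)
    return signatures

corpus = ["exception", "exceptional", "exceptionally", "exceptions",
          "occupation", "occupational", "occupationally", "occupations"]
for sig, stems in find_signatures(corpus).items():
    print("-".join(sig), "->", sorted(stems))

Run on this toy list, the script prints the signature NULL-al-ally-s with the stems “exception” and “occupation”, mirroring the example above; Linguistica performs a far more careful version of the same grouping over the full corpus.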

“Linguistics research as of today,” Dr. Goldsmith points out, “lies largely under the influence of Chomsky, and it is deeply committed to the commonality that all languages share.” For that reason, the most cutting-edge research examines differences within a language and across languages. One of Dr. Goldsmith’s most recent projects involves comparing languages through word manifolds, which are graphical representations of languages. He initially wanted to find a mathematical method that could be used to model an entire language. Traditionally, words were localized by embedding them into a space of dimensionality ten, and linguists could then talk about all the words within a certain radius of a given word. “There were all these words and the Pythagorean notion of distance,” says Dr. Goldsmith, and he remembers being frustrated that the embedding lived in a dimensionality higher than two or three, which renders visualization impossible.


He later came up with the idea to take some number k, around six, and figure out which k words are contextually closest to a selected word, in the Euclidean sense of closeness. He would put a thread between each of the k pairs of words and draw a graph along those edges, determine the ten most significant eigenvectors of the normalized Laplacian of the graph, and then calculate new coordinates for the words and edge weights based on the distances between words in the ten-dimensional space. This idea uses a method of data visualization called force-directed graph drawing, which can be compared to “starting off with a little cluster like before the big bang, and then the big bang occurs, and words repel each other.” Different words are held together by unbreakable but stretchable threads, either separated or clustered based on similar grammatical function. Fig. 1 shows that the English language explodes into clusters that are people’s names, nominative pronouns, prepositions, auxiliary verbs, determiners, and so on. After creating word manifolds for both English and French, Dr. Goldsmith was drawn to the questions that they raised.
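A compressed sketch of this construction, using toy context vectors in place of real distributional data, might look like the following; the neighbor count k = 6 and the ten-dimensional embedding follow the description above, while the force-directed layout used for the final pictures is left out.

import numpy as np
from scipy.spatial.distance import cdist

def word_manifold(context_vectors, k=6, dims=10):
    n = context_vectors.shape[0]
    dist = cdist(context_vectors, context_vectors)    # Euclidean distances between words
    np.fill_diagonal(dist, np.inf)                    # ignore self-distances

    A = np.zeros((n, n))                              # adjacency: a "thread" between each word
    for i in range(n):                                # and its k contextually closest words
        for j in np.argsort(dist[i])[:k]:
            A[i, j] = A[j, i] = 1.0

    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt       # normalized Laplacian of the graph

    vals, vecs = np.linalg.eigh(L)                    # eigenvalues in ascending order
    return vecs[:, 1:dims + 1]                        # ten coordinates per word (skip the trivial one)

# toy usage: 50 hypothetical words with random 30-dimensional context vectors
coords = word_manifold(np.random.default_rng(0).normal(size=(50, 30)))
print(coords.shape)    # (50, 10)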

Fig. 2 A word manifold constructed from the French lexicon

Fig. 2 shows the graphical representation of French, displaying more distinct chunks of words and longer tendrils between clusters. Such topological differences, Dr. Goldsmith hypothesizes, might reflect the fact that English contains many more multi-functional words. The word “May”, for example, could be a girl’s name, a month of the year, or a verb. Each node encapsulates all uses of a word, so “May” could sit close to words like “Mary”, “June”, or “must”, which clearly belong to disparate clusters. The presence of many such words is a possible explanation for the less pronounced clusters in the global visualization of English. Further questions about the topological shapes remain unanswered: what do they mean from a local point of view, and how can we understand them in terms of their dimensionality? Dr. Goldsmith is currently working to answer these questions. Dr. Goldsmith summarizes his work at the University of Chicago as “a process of trying to understand the structure of complex linguistic systems using mathematical tools.”

Computer programming, linear algebra, information theory, and harmonic analysis have all been at the center of his linguistics research. When asked about the future of computational linguistics, Dr. Goldsmith is very optimistic. “At the University of Chicago, the ease with which faculty members and students can jump from one division to another is just wonderful. It’s very easy for people like myself to move across divisions and interact with people in other departments. The university is doing a good job of encouraging that direction, so I really look forward to seeing what people can come up with.”

Yunxiang Song (Tony) is a first-year student at the University of Chicago planning to major in physics and mathematics. He hopes to conduct research in experimental particle physics in the future.


acknowledgements
The Triple Helix at the University of Chicago would like to thank all of the individuals and departments on campus that have continued to generously support our publications and make them possible. We’d like to give special thanks to our Student Involvement Advisor, Tempris Daniels, and to all of the faculty members who give their time and dedication to our student writers.

We also thank the following departments and groups:
The Center for Leadership and Involvement
University of Chicago Annual Allocations
Student Government Finance Committee (SGFC)
Chicago Area Undergraduate Research Symposium

research submissions
Undergraduates who have completed substantial work on a topic are highly encouraged to submit their manuscripts. We welcome both full-length research articles and abstracts. Please email submissions to eic.scientia@gmail.com. Please include a short description of the motivation behind the work, relevance of the results, and where and when you completed your research. If you would like to learn more about Scientia and The Triple Helix, visit thetriplehelix.uchicago.edu or contact us at uchicago.president@thetriplehelix.org.


meet the staff

scientia

Editors-in-Chief: Zainab Aziz, Nikita Mehta
Managing Editors: Josh Everts, Maritha Wang, Rita Khouri
Associate Editors: Ananth Panchamukhi, Jordan Cooper, Sweta Narayan, Jessica Xia, Karen Ji, Alana Koscove, Caroline Miller, Arundhati Pillai, Sydney Jenkins, Daksh Chauhan, Molly Sun, Brian Yu, Sofia Garrick
Writers: Noor Elmasry, Sophia Horowicz, Jarvis Lam, Gillian Shen, Danny Kim, Tony Song, Ayushi Hegde, Thomas Cortellesi, Jessica Metzger, Pascale Boonstra, Christian Porras

executive
President: Nila Ray
Vice President: Edward Zhou

production
Scientia Director: Bonnie Hu
SISR Directors: Ariel Goldszmidt, Ariel Pan

science in society review
Editors-in-Chief: Elizabeth Crowdus, Rachel Gleyzer
Managing Editors: Sydney Jenkins, Abby Weymouth, Sharon Zeng

events

Directors: Peter Ryffel, William Rosenthal

e-publishing
Editor-in-Chief: Julia Smith

Managing Editor: Yasemin Hasimoglu

contact us

uchicago.president@thetriplehelix.org thetriplehelix.uchicago.edu



the triple helix AT THE UNIVERSITY OF CHICAGO

