VOLUME 18 • ISSUE 1 • FALL 2019
THE EDITION REGARDING: ENERGY
PENNSCIENCE
PennScience is a peer-reviewed journal of undergraduate research and related content published by the Science and Technology Wing at The University of Pennsylvania and advised by a board of faculty members. PennScience presents relevant science features, interviews, and research articles from many disciplines, including the biological sciences, chemistry, physics, mathematics, geological science, and computer science. PennScience is funded by the Student Activities Council. For additional information about the journal, including submission guidelines, visit www.pennscience.org or email pennscience@gmail.com.
EDITORS IN CHIEF: Kenny Hoang, Roshni Kailar
Faculty Advisors: Dr. M. Krimo Bokreta, Dr. Jorge Santiago-Aviles
Editing Managers: Brian Song, Brian Zhong
Writing Managers: Neelu Paleti, Hiab Teshome
Design Managers: Felicity Qin, Brian Song
Business Managers: Helen Jiang, Angela Yang
Editing: Elly Choi, Kelly Liang, Daniel Rodriguez, Sumant Shrigngari, Kathy Wang
Writing: Tamsyn Brann, Shanna Edwards, Emily Lo, Andrew Lowrance, Michelle Paolicelli
Design: Alisha Agarwal, Tamsyn Brann, Farhaanah Mohideen, Amara Okafor, Ethan Seto
Business: Glen Kahan, Alex Massaro, Cal Rothkrug
TABLE OF CONTENTS
How Lunar Energy Resources Fuel the Next "Giant Leap" (Tamsyn Brann)
Advances in research developing artificial leaves (Emily Lo)
Nuclear Energy: Friend or Foe? (Shanna Edwards)
A Means of Increasing Energy Efficiency in the World (Andrew Lowrance)
The Difference Between Good and Great: A Pair of Shoes? (Michelle Paolicelli)
The Push for Advancements in Renewable Energy Technology (Celia Zhang)
Interview with Dr. Raymond J. Gorte (Michelle Paolicelli)
Developing a Novel Photographic Procedure for Extracting Concentrations Compared with Particle Count Microscopy (Bradley Wheeler et al.)
Dear Readers,

We are excited to present you with the fall issue of the eighteenth volume of PennScience Journal of Undergraduate Research. Based on the interests of our members as well as important current needs, we investigated new and exciting research related to Energy.

In this issue, Tamsyn Brann looks to lunar sources for energy to help reach Mars and other areas that are currently not accessible. Andrew Lowrance writes about the use of magnets as well as thermoelectric generators for energy. Emily Lo examines the use of photosynthesis in artificial leaves as a way to reduce carbon emissions in the atmosphere. Shanna Edwards considers the use of nuclear technology in solving climate issues as well as possible concerns for this technology. Michelle Paolicelli investigates the topic of unexpected increases in energy in marathon runners from a pair of shoes. Celia Zhang studies advancements in renewable energy sources, including solar and wind sources. Finally, Michelle Paolicelli also speaks with Dr. Raymond Gorte, professor of Chemical and Biomolecular Engineering at Penn, to discuss his research on energy emissions and energy efficiency. We are also excited to present research from the Stonedahl Lab at St. Ambrose University and the Gehler Lab at Augustana College that investigates a novel way to detect the concentration of fine particles.

Leading PennScience and working with all of the students involved in the creation of this issue has been a great honor. PennScience, a completely student-run journal, would not exist if not for excited and committed students, including writing, editing, design, and business members. Thus, we first want to thank and recognize all of these members. We also want to recognize Dr. Andrew Rappe, founding co-Director of the Penn Center for Energy Innovation, who spoke at our coffee chat. We would also like to thank the Science and Technology Wing of the King's Court College House and Student Activities Fund, which allows us to produce each issue of the journal. We would like to thank our faculty mentors, Krimo Bokreta and Jorge Santiago-Aviles, for their guidance and support.

Finally, a big thanks to you for reading this PennScience issue – we hope you enjoy!

Sincerely,
Roshni Kailar (C'20) and Kenny Hoang (ENG'21)
Co-Editors-in-Chief
Looking for a chance to publish your research? PennScience is accepting submissions for our spring 2020 issue! Submit your independent study projects, senior design projects, reviews, and other original research articles to share your work with fellow undergraduates at Penn and beyond. Email submissions and any questions to pennscience@gmail.com.
Research in any scientific field will be considered, including but not limited to: Biochemistry, Biological Sciences, Biotechnology, Chemistry, Computer Science, Engineering, Geology, Mathematics, Physics, and Physiology.
VISIT THE PENNSCIENCE WEBSITE WWW.PENNSCIENCE.ORG TO SEE PREVIOUS ISSUES AND FOR MORE INFORMATION
Written by Tamsyn Brann
Edited by Elly Choi
Designed by Ethan Seto
How Lunar Energy Resources Fuel the Next “Giant Leap” The final mission of NASA’s Apollo Space Program lifted off from the lunar surface in December 1972. Upon their departure, astronauts left behind not only footprints but also unfinished business. To return humans to the Moon, and to eventually use the Moon as a lift-off point for future deep-space exploration, astronauts need sources of water and fuel that are larger than is sensible to carry aboard a rocket. The Moon’s south pole, where rocks are rich in hydrogen and oxygen and even carry deposits of water ice, is an ideal place for astronauts to use these natural resources to further pioneering efforts to investigate the Moon and what lies ahead. By 2024, NASA seeks to send the first woman (and the next man) to the Moon’s south pole. In the lunar missions of the 1960s and ‘70s, all landing sites were scattered
near the lunar equator on its near side, which faces Earth.1 These were strategic regions chosen for ease of touchdown and scientific value, not for their livability. The Apollo Project’s official mission involved four major tenets: carrying out scientific experiments on the Moon, inventing the technology for future space operations, confirming U.S. “preeminence” in space, and ensuring that humans could work in the lunar environment.2 The modern Artemis Program, named for the ancient Greek Moon goddess and mythical twin sister of the Sun god Apollo, has somewhat different goals.3 The initiative still seeks to maintain American leadership in Moon exploration. This time, however, NASA emphasizes that humans are going back to the Moon to stay: long-term survival in space is necessary for eventual planned missions to Mars and beyond.4 Astronaut safety is a major priority of any mission into space. Closely tied to that is the ability to bring them to their destination and home again. These aims are directly affected by the amount of resources available to both keep humans alive and fuel their spacecraft so that they can either travel deeper into space or return home. Earth’s south pole is one of the most uninhabitable regions on the planet. Without proper protection, the lunar south pole is no doubt just as deadly. However, this particular location is home to invaluable deposits of materials to provide energy for both humans and their spacecraft. The Moon’s south pole is mostly unexplored in comparison to previous landing sites, making it a prime location for on-site scientific study. It also is a confirmed location of water ice and of oxygen
deposits in lunar rocks.5

The cost of transporting enough water, and the rocket fuel that can be derived from it, for long-term space missions would be inefficient and extremely expensive. Only if water were already at the planned destination could the mission be successful.

In 2018, data from NASA's Lunar Reconnaissance Orbiter (LRO) demonstrated that the Moon reflected an amount of sunlight near the poles consistent with that of water ice; however, this discovery was not ground-breaking until compared with mineralogical maps of the Moon, which confirmed the LRO findings.6

Should future science find enough water at the south pole, it could be mined for astronauts to drink and to water plants grown aboard the lunar shuttle. It could even be divided into its atomic constituents, two atoms of hydrogen and one of oxygen per water molecule, which can provide breathable air to astronauts and also the main ingredients for rocket fuel.

Independently, neither hydrogen nor oxygen serves as fuel for spacecraft. If water molecules are separated into their atomic components and then later combined and ignited, the atoms release the energy that was required to split the water molecule in the first place.7 The only waste product from this is, therefore, water. In addition, hydrogen has the highest specific impulse -- a measure of how efficiently an engine, such as the one in a rocket, uses its fuel.

The lunar regolith — loose surface material — isn't only a source of water. Previous analysis of lunar samples has shown that Moon rocks can contain up to 45% oxygen by weight.5 Although the energy and monetary costs of oxygen extraction on the Moon can be considerable, this is a far more realistic solution to the problem posed by the need for oxygen in a successful space mission as opposed to bringing the resource from home.

To explore is to be human: it is a compulsion, perhaps an obsession, but definitely an instinctive desire. Space was a frontier that had remained untouched before the latter half of the 20th century, and humankind's ability to unravel its mysteries has been and continues to be limited by how advanced our technology is.
REFERENCES
1. King, B. (2017, November 4). How to See All Six Apollo Moon Landing Sites. Retrieved November 16, 2019, from https://www.skyandtelescope.com/observing/how-to-see-all-six-apollo-moon-landing-sits/
2. Loff, S. (2015, March 16). The Apollo Missions. Retrieved November 16, 2019, from https://www.nasa.gov/mission_pages/apollo/missions/index.html
3. Dunbar, B. (2019, July 23). What is Artemis? Retrieved November 16, 2019, from https://www.nasa.gov/what-is-artemis
4. We Are Going. (n.d.). Retrieved November 16, 2019, from https://solarsystem.nasa.gov/resources/2446/we-are-going/
5. Hugo, A. (2019, April 25). Why the Lunar South Pole? Retrieved November 16, 2019, from https://www.thespaceresource.com/news/2019/4/why-the-lunar-south-pole
6. Li, S., Lucey, P. G., Milliken, R. E., Hayne, P. O., Fisher, E., Williams, J.-P., Hurley, D. M., & Elphic, R. C. (2018, September 4). Direct evidence of surface exposed water ice in the lunar polar regions. Retrieved November 16, 2019, from https://www.pnas.org/content/115/36/8907
7. Dunnill, C. W., & Phillips, R. (2019, August 28). Making space rocket fuel from water could drive a power revolution on Earth. Retrieved November 16, 2019, from https://theconversation.com/making-space-rocket-fuel-from-water-could-drive-a-power-revolution-on-earth-65854
ARTIFICIAL PHOTOSYNTHESIS: A CONCEPT TO LEAF BEHIND?
WRITTEN BY: EMILY LO EDITED BY: BRIAN SONG DESIGNED BY: ALISHA AGARWAL
Coming in different shapes and sizes, leaves appear in abundance and provide a staple source of color in the natural world — often leading us to take their presence for granted. Yet, according to the World Wildlife Fund, the Earth loses 18.7 million acres of forests every year, the equivalent of 27 soccer fields per minute.1 Moreover, the World Health Organization has reported that ongoing air pollution, partly perpetuated by the decrease in leaves that absorb contaminating particles in the air, kills an average of two million people worldwide every year. These alarming statistics highlight the influence of current environmental problems on public health and natural habitats, raising a need for ways to address damages being inflicted upon the planet. While preventative measures offer the strongest approach to mitigating deforestation and air pollution, researchers have begun to explore engineering solutions — investigating artificial leaves as synthetic devices that could potentially undo harm that humans have already caused the planet. While leaves may typically be associated with the natural world, the concept of an artificial leaf involves the synthesis of carbon emissions through battery technology. In 2011, American chemist Daniel G. Nocera and his colleagues developed artificial leaves that consist of a silicon layer separating metal catalysts.2 With sunlight, these metal catalysts are then able to split water into hydrogen and oxygen gas. The produced gases are funneled to a fuel cell where they are converted to electric energy.3 This technology has the potential to
play a promising role in our society. Artificial leaves could someday be used to curb carbon emissions and power entire households.4 In order to position this technology for real-world applications, researchers seek ways to modify and enhance the effectiveness of the artificial photosynthesis design in artificial leaf models. For instance, in addition to producing hydrogen gas fuel, the leaves could use carbon emissions in the atmosphere to produce hydrocarbons.5 As processing carbon often involves the usage of enzymes, which synthetic systems
lack, researchers have explored using bacteria to convert carbon dioxide to hydrocarbons. This concept has been examined in a study conducted in 2016 by Nocera and fellow researchers, as they used the hydrogen-oxidizing bacterium Ralstonia eutropha and a cobalt-phosphorus water-splitting catalyst to convert carbon dioxide, sunlight, and water into oxygen and fuel. The results indicate that the artificial model of photosynthesis "achieved roughly 10 percent efficiency in converting carbon dioxide to fuel, the equivalent of pulling 180 grams of carbon dioxide from the air per kilowatt-hour of electricity
generated,”6 exceeding the abilities of natural photosynthetic systems which use “just 1 percent of the energy it receives from the sun to make glucose.”5 These findings illustrate the potential for artificial photosynthesis to increase the rate at which carbon emissions are removed from the atmosphere and converted into renewable fuel. Researchers have also sought to increase the rates and efficiency of carbon dioxide conversion while reducing the costs of producing the artificial leaf devices. Meenesh Singh, an assistant professor at the University of Illinois at Chicago, has noted the tendency for artificial leaves to convert “only 15% of the carbon dioxide they take in into fuel and release 85% of it, along with oxygen gas, back to the atmosphere.”7 Singh and fellow colleagues attribute this phenomenon to the fact that atmospheric carbon dioxide will turn into negative bicarbonate anions that are attracted to the cell’s positive regions, where oxygen and hydrogen protons are split. The interactions between the bicarbonate anions and the acidic electrolyte will lead to further carbon dioxide release. To address this issue, Singh and other researchers published a study this past summer in which they designed and implemented a bipolar membrane in the electrochemical cell, thereby barring bicarbonate anions from reaching the “positive” side of the leaf while also neutralizing hydrogen protons to lower the acidity of the electrolyte.7 When testing this membrane design,
the researchers found that the artificial leaves could convert 60-70% of absorbed carbon dioxide into hydrocarbon fuel -- a dramatic increase in efficiency that highlights the promise of using the synthetic devices to address air pollution. While metallic catalysts have previously been made from costly metals such as silver, researchers at UIC have also been exploring transition metal compounds that can convert carbon dioxide to fuel at faster rates and lower costs. In a study published in Science, Amin Salehi-Khojin and fellow researchers examined the possibility of constructing catalysts out of transition metal dichalcogenides (TMDCs).8,9 This family of two-dimensional nanosheets has promising characteristics, including its unique electronic properties such as its variable band gap, which represents the minimum energy required to excite an electron up to a state where it can conduct electricity, and its potential to build atomically sharp semiconductors as a result of the weak van der Waals interactions that comprise its structure.10 Upon pairing four different TMDC catalysts with ionic liquid in a photoelectrochemical cell, the researchers identified tungsten diselenide nanoflakes to be the most efficient TMDC for breaking carbon's chemical bonds.9 According to postdoctoral researcher Mohammad Asadi, who worked on this project, "The new catalyst is 1,000 times faster than noble-metal catalysts — and about 20 times cheaper." Despite research breakthroughs that highlight the ways artificial leaves could enhance our lives
and benefit the environment, it is necessary to consider limitations regarding the applicability of these devices. For instance, Singh points out that despite their noted efficiency, artificial leaf models have primarily been tested with carbon from pressurized tanks rather than the atmosphere itself. This indicates that we have yet to fully gauge the applicability of the devices in real-life settings. In addition, the hydrogen fuel storage of artificial leaves carries safety implications that researchers should ideally seek to alleviate. For example, while current vehicles tend to operate on a standard 14V system, vehicles with hydrogen fuel systems have run on voltages that are over 350V and may cause electrical shocks.11 Moreover, as fuel cells can enable hydrogen and oxygen to directly combust, slowly escaping hydrogen could potentially form a flammable or explosive mixture that harms users.11 These concerns should be addressed before artificial leaves can feasibly be integrated in real world settings.

As scientists continue to improve upon the design of artificial leaves, the devices have the undeniable potential to benefit the environment and the quality of human lives, giving us a greater power and duty to cultivate a sustainable world. The further development of artificial leaves will heighten our ability to consume energy while preserving the planet, thereby helping us build towards a future where we can better support human activity while mitigating damages people have caused the planet.
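The capture rate quoted earlier for the bionic-leaf system, about 180 grams of carbon dioxide per kilowatt-hour, is easier to picture at household scale. The sketch below assumes a round 10,000 kWh of annual electricity use (an illustrative figure, not one from the article):

```python
# Scale check of the "180 g of CO2 per kWh" figure quoted earlier.
# The annual household consumption is an assumed round number, not a value from the article.

CAPTURE_G_PER_KWH = 180.0          # grams of CO2 pulled from the air per kWh generated
HOUSEHOLD_KWH_PER_YEAR = 10_000.0  # assumed annual electricity use for one household

captured_kg = CAPTURE_G_PER_KWH * HOUSEHOLD_KWH_PER_YEAR / 1000.0
print(f"CO2 captured per household-year: {captured_kg:,.0f} kg (about {captured_kg / 1000:.1f} tonnes)")
```

That works out to under two tonnes of CO2 per household per year, a useful sense of scale for the efficiency improvements discussed above, and a reminder that a typical household's overall carbon footprint is considerably larger.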
REFERENCES:
1. Deforestation: Facts, Causes & Effects. (n.d.). Retrieved from https://www.livescience.com/27692-deforestation.html
2. Mian, Z. (2015, December 10). Artificial leaf. Retrieved from https://www.britannica.com/technology/artificial-leaf
3. Stecker, T. (2011, March 29). "Artificial Leaf" Might Provide Easy, Mobile Energy. Retrieved from https://www.scientificamerican.com/article/artificial-leaf-might-provide-mobile-energy/
4. Niclas. (2019, July 29). Solar fuels: an introduction to artificial photosynthesis. Retrieved from https://sinovoltaics.com/technology/solar-fuels-an-introduction-to-artificial-photosynthesis/
5. Liu, C., Colón, B. C., Ziesack, M., Silver, P. A., & Nocera, D. G. (2016). Water splitting–biosynthetic system with CO2 reduction efficiencies exceeding photosynthesis. Science, 352(6290), 1210–1213. doi: 10.1126/science.aaf5039
6. Martinez, J. G. (2017, June 26). Artificial Leaf Turns Carbon Dioxide Into Liquid Fuel. Retrieved from https://www.scientificamerican.com/article/liquid-fuels-from-sunshine/
7. Clearing up the 'dark side' of artificial leaves. (2019, July 31). Retrieved from https://www.sciencedaily.com/releases/2019/07/190731145821.htm
8. Asadi, M., Kim, K., Liu, C., Addepalli, A. V., Abbasi, P., Yasaei, P., … Salehi-Khojin, A. (2016). Nanostructured transition metal dichalcogenide electrocatalysts for CO2 reduction in ionic liquid. Science, 353(6298), 467–470. doi: 10.1126/science.aaf4767
9. Breakthrough solar cell captures CO2 and sunlight, produces burnable fuel. (n.d.). Retrieved from https://today.uic.edu/breakthrough-solar-cell-captures-co2-and-sunlight-produces-burnable-fuel
10. Dong, R., & Kuljanishvili, I. (1970, January 1). Review Article: Progress in fabrication of transition metal dichalcogenides heterostructure systems. Retrieved from https://avs.scitation.org/doi/full/10.1116/1.4982736
11. Safety issues regarding fuel cell vehicles and hydrogen fueled vehicles. (n.d.). Retrieved from https://dps.mn.gov/divisions/sfm/programs-services/Documents/Responder Safety/Alternative Fuels/
NUCLEAR ENERGY: FRIEND OR FOE?
WRITTEN BY: SHANNA EDWARDS

The climate situation is escalating. The earth is the warmest it has been in thousands of years, and carbon emissions are on the rise. According to the IPCC, we need to keep the change in the earth's temperature below 1.5ºC, or the damage will be irreparable. Though no one knows the exact timeline, one sure thing is that the climate situation is dire. This crisis has led many to search for the most effective defense strategies. While many ideas are in the works, most people overlook nuclear technology as one such strategy.
In recent years, there has been an extended discussion within the scientific community about nuclear energy and its feasibility. Worldwide, fossil fuels are still the most popular source of energy, with 84% of our total energy coming from this source. This is ultimately a detriment to the environmental health of the earth, as it emits large amounts of greenhouse gases. On the other hand, nuclear energy is a clean, low-carbon source of energy, yet it is not widely used. One possible explanation is the negative connotation of nuclear energy as a result of its representation in the media. Nuclear accidents have not only been widely covered in the news, but also have been popularized and turned into television shows. This perpetuates the idea that nuclear energy is inherently unsafe and not to be trusted. Additionally, implementing nuclear energy from start to finish is expensive, and the payoff isn't immediate. According to the Energy Information Administration in 2016, the overnight capital cost of implementing nuclear energy in the US was $9 billion per plant, not including the costs for fuel and day-to-day upkeep of the plant. Despite this, there have been many new technological advances in the nuclear energy field in the past 10 years.

Nuclear fission is the most widespread method of generating nuclear energy as of 2018. Fission involves splitting uranium atoms into smaller fragments to produce heat that is then harnessed to generate electricity. While this reaction can be easily produced, there are many risks involved. The two biggest disadvantages, other than its high cost, are the harmful radioactive waste that is released as a byproduct, and the explosive nature of the reaction. Meltdowns are likely to occur if waste is not disposed of carefully and if efficient cooling strategies are not implemented.

Beyond standard fission, there are many new techniques that promise to be more efficient, while still minimizing risk. TerraPower, an energy company founded by Bill Gates, plans to use advanced fission techniques. Though this sounds eerily similar to standard fission, the differences are crucial. In their 'travelling wave reactor' model, depleted uranium is used as fuel. Because it is a byproduct of the uranium enrichment process, it is more cost-effective. Further, it is designed to last longer because the fuel can be converted without being removed from the core. This leads to a continuous production of electricity, since there is no need to reprocess or regenerate heat. By mid-2020, TerraPower hopes to have its first prototype of the travelling wave reactor in the works.

Fusion reactors have been in the spotlight as well. The mechanism of nuclear fusion is the opposite of fission. Two small atoms such as different isotopes of hydrogen or helium are combined to form a bigger one, and energy is generated in the process. Though a simple concept, the conditions needed for such a reaction to occur are extreme. Exceptionally high temperature, high pressure, and an abundance of hydrogen are required to facilitate this combination of atoms. Nuclear fusion quells most of the concerns about fission. It produces no nuclear or radioactive waste, and it will not cause a meltdown because the reaction will stop completely if the conditions are not perfect.
Once researchers figure out the most effective way to set up the experimental conditions, there will be no concern over the possibility of nuclear accidents. Furthermore, it has the potential to be less costly because the hydrogen needed in the reaction can be sourced from sea water. While the conditions for nuclear fusion are difficult to generate, companies are searching for different methods.
One company, General Fusion, has proposed to use magnetized target fusion to generate the necessary conditions. A pulse of magnetically confined plasma fuel is injected into a vortex containing molten lead-lithium in a sphere. Around this sphere, several pistons push the pressure towards the center of the sphere, where fusion can occur. Over time, heat is released and used to generate electricity via a steam turbine. General Fusion has also developed varying methods to make their technologies less costly, safer, and more practical. For instance, they use steam-powered pistons, which have a proven track record, to compress the plasma while others have tried large magnets or lasers, with less stable effects. Furthermore, since the target is made only of magnetized plasma, a hot ionized gas with magnetic properties, it is effectively cost free because this does not need to be manufactured.

On the other hand, Commonwealth Fusion Systems has been collaborating with MIT's Plasma Science and Fusion Center to build another notable prototype called SPARC, a first-of-its-kind, compact, high-field fusion device that generates more energy than it consumes. Current research focuses on how best to generate this "high-field" model. There are two conditions that fall under "high field", according to MIT. First, new high-temperature superconductor magnets could create and generate the huge magnetic field needed here. Second, they are exploring the possibility of utilizing short-pulse copper electromagnets as a method of confining the plasma and keeping it stable. Though still in the experimental phase, MIT has announced that the SPARC reactor could potentially be producing energy by 2025.

Given the climate situation, nuclear energy could be our saving grace. Nuclear fusion, in particular, is safe, carbon free, and effective, but it does not get that positive representation in the media. No matter which company or method you get behind, nuclear energy has the potential to solve the climate problem.

REFERENCES
1. The Intergovernmental Panel on Climate Change. (2019). Global Warming of 1.5 ºC. Retrieved from https://www.ipcc.ch/site/assets/uploads/sites/2/2019/06/SR15_Full_Report_Low_Res.pdf
2. How long do we really have to save the planet from global warming? (2019, January 25). Retrieved from https://www.scmp.com/magazines/post-magazine/long-reads/article/2182663/climate-change-how-long-do-we-really-have-save
3. U.S. Energy Information Administration. (2016). Capital Cost Estimates for Utility Scale Electricity Generating Plants. Retrieved from https://www.eia.gov/analysis/studies/powerplants/capitalcost/pdf/capcost_assumption.pdf
4. Frois, B. (2005). Advances in Nuclear Energy. Nuclear Physics A, 752, 611–622. doi: 10.1016/j.nuclphysa.2005.02.064
5. About Us. (n.d.). Retrieved from https://www.terrapower.com/about/
6. Devlin, H. (2018, March 9). Nuclear fusion on brink of being realised, say MIT scientists. Retrieved from https://www.theguardian.com/environment/2018/mar/09/nuclear-fusion-on-brink-of-being-realised-say-mit-scientists
7. SPARC. (n.d.). Retrieved from https://www.psfc.mit.edu/sparc
8. Chandler, D. L., & MIT News Office. (2019, January 24). MIT continues progress toward practical fusion energy. Retrieved from http://news.mit.edu/2019/progress-practical-fusion-energy-0124
HEAT ENERGY AND ENERGY EFFICIENCY IN A WARMING WORLD
WRITTEN BY: ANDREW LOWRANCE EDITED BY: BRIAN SONG DESIGNED BY: BRIAN SONG

According to the United States Energy Information Administration, 58% of energy produced is wasted, a troubling statistic considering that producing energy for our infrastructure and modern industry relies on nonrenewable resources that contribute to climate change. The conventional approaches involve the over-production of carbon dioxide, a molecule known for remaining in the air for prolonged periods of time and trapping heat radiated from the Earth's surface, creating what is known as the greenhouse effect. Ultimately, this calls into question the efficiency of modern technologies and their means of usage in the status quo. In particular, the impending effects of climate change and concerns for the impact of technological innovation on the Earth's future have grown to the point where modern technologies are no longer just concerned with innovation but also with environmentally conscious means of achieving such innovation. This includes increasing the overall efficiency of current energy production. Scientists have developed a means to utilize various forms of
expended heat, or heat released into the atmosphere from engines and other processes, as a source to produce energy, including electricity.1 This is accomplished by utilizing magnets and heating them to produce a current. This would serve as a step towards addressing the pressing issue of rising surface temperatures around the world. This process was primarily achieved through the discovery of particles known as paramagnons, bits that carry magnetic flux but are not themselves magnets. Magnetic flux can be thought of as a measurement of the total magnetic field that passes through an area. This is potentially beneficial given that when magnets are heated, they begin to lose their magnetic force and turn paramagnetic, meaning they have magnet-like weak attractions between their poles but are no longer magnetic. A flux of magnetism creates a type of energy, magnon-drag thermoelectricity, that up until today could not be utilized. Through a redesigning of thermoelectric semiconductors,2 materials utilized to convert heat into electricity, the issue of using magnon-
drag thermoelectricity as an energy source was resolved by a research team from Ohio State University, North Carolina State University and the Chinese Academy of Sciences. The issue that originally hindered the usage of magnets revolved around the common theory that heated magnets, paramagnets, could not store energy. It was eventually observed, however, that when magnets heat up, one side is heated while the other is cooled, becoming more magnetic and producing spin that would push electrons to create electricity. Thus, the electrons would move from one side of the magnet toward the other, creating a current that flows across the magnet. One can visualize this within a magnetic strip, where electrons flow from one side of the magnet toward the other. This would happen at large enough intervals to trap energy. Alongside the usage of paramagnets, thermoelectric generators (TEGs)3 have long been used by different types of equipment to generate electricity for power. A TEG is a device that
converts a temperature difference into a voltage and manages the flow of electrical current around a circuit, using expended heat as a means to create electricity. Examples of devices that currently utilize this technology include the Mars rover Curiosity and interstellar space probes. These devices, however, rely on high temperature differences and toxic, inorganic materials. As a result, many teams have begun collaborating to search for materials that could make these thermoelectric generators possible with materials that are ideally non-toxic, Earth-abundant, efficient, and easy to fabricate.4 These efforts also include the pursuit for thermoelectric generators that rely on low temperature gradients. One particular example of a breakthrough in thermoelectric technology comes from researchers at MIT who have begun exploring methods of utilizing topological materials to capture wasted heat in order to produce electricity.5 Topological materials possess unique electronic properties that can serve as thermoelectric systems that may be more practical than current semiconductors like silicon. The
reason a thermoelectric material may be useful is that, when heated, a temperature gradient is created where one end is cooler than the other. This leads electrons in the material to flow from the hot end to the cold end, generating an electric current. This process is the same as with the case of the paramagnons previously described. Moreover, topological materials could
be turned into efficient thermoelectric devices with nanostructuring, a technique that synthesizes a material by patterning its features at the nanometer scale. The nanomaterial is made from low-temperature solution-based production processes.
This allows for the material to be coated on flexible plastics, bypassing the expensive materials needed for some thermoelectric materials. These thermoelectric nanomaterials have the potential to give rise to practical, solution-processed thermoelectric generators. Despite these various mechanical innovations that have led to a better understanding of the capturing of energy, there is still room for the research and development of more efficient technologies that encourage a future of responsible consumption. It is evident through the decreased health of the natural environment that the reliance on fossil fuels has cost society a heavy price.6 Yet, despite these current issues, capturing expended heat has served as an exciting solution to counter the inefficiency of traditional techniques of creating energy.
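The core relationship a TEG exploits, turning a temperature difference into a voltage, can be sketched with a toy calculation. The Seebeck coefficient and internal resistance below are assumed round numbers for a small commercial module, not values taken from the article or the cited studies.

```python
# Toy model of a thermoelectric generator (TEG): open-circuit voltage from the
# Seebeck effect and the maximum power delivered to a matched electrical load.
# Module parameters are illustrative assumptions, not measured values.

SEEBECK_V_PER_K = 0.05         # effective Seebeck coefficient of the whole module (V/K), assumed
INTERNAL_RESISTANCE_OHM = 2.0  # internal resistance of the module (ohms), assumed

def teg_output(hot_c: float, cold_c: float) -> tuple[float, float]:
    """Return (open-circuit voltage in V, maximum power in W) for the given temperatures."""
    delta_t = hot_c - cold_c
    voltage = SEEBECK_V_PER_K * delta_t
    # Maximum power transfer occurs when the load resistance matches the internal resistance.
    power = voltage ** 2 / (4 * INTERNAL_RESISTANCE_OHM)
    return voltage, power

for hot, cold in [(60, 25), (150, 25), (300, 25)]:
    volts, watts = teg_output(hot, cold)
    print(f"dT = {hot - cold:3d} K -> {volts:5.2f} V open-circuit, about {watts:5.2f} W maximum")
```

Because the deliverable power grows roughly with the square of the temperature difference in this simple model, deep-space probes sitting next to very hot radioisotope sources do well with TEGs, while harvesting low-grade waste heat depends on the better materials the article describes.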
REFERENCES:
1. A new way to turn heat into useful energy. (2019, September 23). Retrieved from https://www.sciencedaily.com/releases/2019/09/190923111235.htm
2. Chen, L., Li, J., Sun, F., & Wu, C. (2005). Performance optimization of a two-stage semiconductor thermoelectric generator. Applied Energy, 82(4), 300-312.
3. Niu, X., Yu, J., & Wang, S. (2009). Experimental study on low-temperature waste heat thermoelectric generator. Journal of Power Sources, 188(2), 621-626.
4. Gallagher, B. L. (1981). Thermoelectric powers of amorphous transition metal alloys and electron-phonon enhancement. Journal of Physics F: Metal Physics, 11(8), L207.
5. Wang, Y., Li, S., Zhang, Y., Yang, X., Deng, Y., & Su, C. (2016). The influence of inner topology of exhaust heat exchanger and thermoelectric module distribution on the performance of automotive thermoelectric generator. Energy Conversion and Management, 126, 266-277.
6. Roberts, D. (2018, May 12). American energy use, in one diagram. Retrieved from https://www.vox.com/energy-and-environment/2017/4/13/15268604/american-energy-one-diagram.
The Difference Between Good and Great: A Pair of Shoes?
WRITTEN BY: MICHELLE PAOLICELLI EDITED BY: DAN RODRIGUEZ DESIGNED BY: FARHAANAH MOHIDEEN
In 1896, the first Olympic marathon was won in just under three hours. Recently, Kenyan Eliud Kipchoge broke running's "Last Great Barrier" by completing the classic 26.2 mile race in 1 hour, 59 minutes and 40 seconds — an average mile pace of 4 minutes and 34 seconds.1 What has changed between 1896 and now allowing the same distance to be run 30% faster? Certainly humans have not evolved to be faster in the last century. Rather, we have learned to integrate technology into our daily lives such that we seem superhuman.2
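The pace figure quoted above is easy to verify with a few lines of arithmetic, using the article's rounded 26.2-mile distance.

```python
# Quick check of the pace quoted above: 26.2 miles in 1 hour, 59 minutes and 40 seconds.

total_seconds = 1 * 3600 + 59 * 60 + 40          # 7,180 seconds
seconds_per_mile = total_seconds / 26.2
minutes, seconds = divmod(round(seconds_per_mile), 60)
print(f"Average pace: {minutes}:{seconds:02d} per mile")  # prints 4:34 per mile
```

Comparing the 1:59:40 finish with a winning time just under three hours also reproduces the roughly 30% improvement mentioned above.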
The campaign to break the two hour marathon barrier has been paramount in the running community for years, but is perhaps most widely known in the context of Nike’s 2017 Breaking2 event. This special marathon pitted Kipchoge against two other top marathoners at the Formula 1 track in Monza, Italy with the singular goal of breaking
the two-hour time barrier. All three runners wore Nike's newly released Vaporfly Elites. These shoes contained a carbon fiber plate in the midsole as well as a specially tailored foam base that purportedly reduced the energetic cost of running by returning some of a runner's expended energy back to them. Some have called this claim a marketing ploy by Nike, but since their release each edition of the shoe has appeared on the podium of runners at major marathons and competitions time and time again. This warrants a deeper look into Nike's claim that the shoes provide greater energy return to the wearer than traditional sneakers. Although a carbon fiber plate may seem unusual for a pair of shoes, it is in fact the so-called "ZoomX" foam soles that provide the greatest boost in efficiency. Polyether block amide, known commercially as Pebax, is a high-performance elastomer — a polymer with both high toughness, the ability of a material to absorb energy as an external stress is placed upon it, and high elasticity.3 This is because Pebax has weak intermolecular forces and a low Young's modulus, which indicates that a material has low stiffness and can change shape considerably when a force is placed upon it. Pebax's combination of rigid polyamide blocks and soft polyether blocks provides the shoes with the desirable characteristics of flexibility, optimal energy return,
and low density. Essentially, the use of Pebax allows the foam to absorb energy from the runner when they push off the ground and then return it to the runner as they continue to move -- all while remaining lightweight and comfortable. However, the carbon fiber plate embedded in the midsole of the shoes should not be overlooked. Initially believed to act like a spring for the runner's feet, further tests have shown that the plate's primary role is one of stabilization. More specifically, the plate improves the runner's biomechanics by reducing the stress placed on their calves and stabilizing the ankle joint.6 Weighing only 6.9 ounces, the lightweight nature of the shoe also helps to reduce the amount of energy expended while running by reducing the force necessary for a leg swing.4 Together, these qualities have contributed to the reputation of the shoe as superior to traditional running sneakers and even track spikes. Since their initial release, the shoes have been rendered in newer editions including the Vaporfly 4%, named because they are said to make the wearer run 4% more efficiently than other
sneakers. More recently, the Vaporfly Next% debuted, claiming to give runners an even greater efficiency boost.5 Many have called for the shoes to be classified as illegal due to the unnatural advantage they give to wearers, but the International Association of Athletics Federations (IAAF) has yet to find sufficient reasoning to do so, as few studies have explored the actual advantages provided by the shoe. A study comparing the relative energy expenditures of the Nike VaporFly 4% and traditional running sneakers found that the Vaporfly improved running economy by an
average of 4.2 ± 1.2% at a variety of running speeds.6 Running economy, defined as the rate of oxygen uptake at a given running velocity, is dependent on a specific athlete’s maximal oxygen uptake and aerobic capacity.7 However, elite runners with similar maximal oxygen uptake can have running economies that vary by up to 30%.6 Wearing sneakers that promise an improvement in running economy while lowering aerobic expense is of great importance to professionals. In addition to creating better shoes, this technology continues to have a positive impact within adaptive sports. Prosthetic limbs and orthotics have benefited greatly from the use of innovative materials and technologies, which have made them more akin to their natural counterparts than ever. However, this has not come without controversy. Former Paralympian Oscar Pistorius was at one point barred from competing alongside able-bodied athletes due to a fear that his carbon fiber running blades would give him an unfair advantage. The commonly held belief was that the flexible blades would improve his energy return in relation to other athletes.8 However, a study found that the limiting factor for top running speed was “ground force”, or how hard the foot,
artificial or real, hit the ground. Analyzing runners with one real leg and one artificial, they found that the artificial leg produced a "ground force" 9% lower than that of the real limb, suggesting that these limbs do not offer the suspected advantage of increased energy return.9 Further complicating the argument that prostheses give athletes an unfair advantage is that although prosthetic feet and ankles are often designed with optimal energy return in mind, their effectiveness is dependent on multiple factors such as the fit of the residual limb into the prosthesis socket. Additionally, many available prosthetic limbs are poor replacements for the native part due to their lack of integration with the wearer's neuromuscular system. As a result, only a small portion of energy absorbed can be returned to the wearer as they walk.4 Consequently, prosthetic limbs do not improve an athlete's running economy and should not be considered an unfair advantage. As long as competitive and elite athletics exist, records will continue to be broken, and the standards of human achievement are sure to evolve. However, athletics should continue to focus primarily on what the human body is capable of rather than what technology can make possible. Perhaps a new division of competitive sports will form with the specific goal of assessing how assistive technology can improve human performance. For now, one thing is clear: Eliud Kipchoge's sub-two-hour marathon should be remembered as a great achievement in the world of sports.
REFERENCES
1. Arkema. (n.d.). Pebax® Thermoplastic Elastomer Family - Energizing, Lightweight Resins. Retrieved from https://www.extremematerials-arkema.com/en/product-families/pebax-elastomer-family/.
2. Barnes, K. R., & Kilding, A. E. (2015). Running economy: measurement, norms, and determining factors. Sports Medicine - Open, 1(1). doi: 10.1186/s40798-015-0007-y
3. Barnes, K. R., & Kilding, A. E. (2018). A Randomized Crossover Study Investigating the Running Economy of Highly-Trained Male and Female Distance Runners in Marathon Racing Shoes versus Track Spikes. Sports Medicine, 49(2), 331–342. doi: 10.1007/s40279-018-1012-3
4. Beck, O. N., Taboga, P., & Grabowski, A. M. (2016). Characterizing the Mechanical Properties of Running-Specific Prostheses. Plos One, 11(12). doi: 10.1371/journal.pone.0168298
5. Childers, W. L., & Takahashi, K. Z. (2018). Increasing prosthetic foot energy return affects whole-body mechanics during walking on level ground and slopes. Scientific Reports, 8(1). doi: 10.1038/s41598-018-23705-8
6. Gonzalez, R. (2017, November 22). Data Shows Nike's Vaporfly 4% Marathon Shoe Increases Running Economy. Retrieved from https://www.wired.com/story/do-nikes-new-marathon-shoes-make-you-faster-a-nike-funded-study-says-yes/.
7. Jha, A. (2009, November 4). Prosthetics do not give sprinters unfair advantage, research suggests. Retrieved from https://www.theguardian.com/science/2009/nov/04/prosthetics-athletes-oscar-pistorius.
8. Keh, A. (2019, October 12). Eliud Kipchoge Breaks Two-Hour Marathon Barrier. Retrieved from https://www.nytimes.com/2019/10/12/sports/eliud-kipchoge-marathon-record.html.
9. Smith, K. J. P. (2016, August 5). Are We Reaching the End of World Records? Retrieved from https://www.scientificamerican.com/article/are-we-reaching-the-end-of-world-records/.
THE PUSH FOR ADVANCEMENTS IN RENEWABLE ENERGY
WRITTEN BY: CELIA ZHANG EDITED BY: ELLY CHOI AND BRIAN SONG DESIGNED BY: TAMSYN BRANN

The facts are right in front of our eyes. While cleaner energy sources hold promise for the future, it remains difficult for renewable energy to break into the market due to fierce competition with major coal and natural gas companies.1 With countries across the world expanding their initiatives to reduce their carbon footprint and slow the rise in global temperatures, the demand for renewable energy technologies has increased rapidly. In many regions, renewables are already contributing a substantial share of the energy supply. However, renewable energy firms such as solar and wind power companies still face the troubling obstacle of high production cost.2 The greatest contributors are the cost of installation and operation. Therefore, technological advancements that make renewable sources more cost and energy efficient show huge potential to advance the push for cleaner energy.

Solar

Scientists have been looking to use physics to create cost-efficient cells for solar panels. Solar cells work by capturing light and harvesting it for later electricity use. When light is captured, electrons hit with photons can absorb enough energy to jump from the low-energy valence band to the mostly empty high-energy conduction band. Once excited to the conduction band, the extra energy that the electron carries can be harvested as electricity. The amount of excitation energy needed to make that jump depends on the type of material used, which significantly impacts how efficiently the solar cells convert light into
electrical energy. While silicon has been the most common material in solar cells, only approximately 32% of light energy can be converted into usable electricity in a silicon solar cell.3 Researchers at Harvard University are looking to improve the efficiency of silicon solar cells by designing the silicon in microscopic pyramid shapes that increase the probability that a photon collides with an electron.4 This method makes use of the phenomenon that light travels further when absorbed into a pyramid, thereby making it more likely that each individual photon will excite an electron. Another innovation uses anti-reflective coatings on the front and back of solar cells to limit the number of photons that are reflected back into space while simultaneously forcing stray photons to be bounced back to the front of the cell to excite an electron.6 In another approach, researchers at Princeton University are looking to develop silicon/organic heterojunctions (SOH), a class of solar cells that lacks normal junctions. Traditionally, semiconductors use different types of materials known as N-type and P-type materials that are joined together to form a PN junction.5 Because foreign atoms were introduced into the normal crystal lattice of these materials in a process called doping, they display dramatic changes in their electrical properties. Due to the tough chemical vapor deposition process to deposit amorphous silicon in the PN junctions, conventional crystal
solar cells must be manufactured at temperatures of about 800 °C. This rapidly raises the production costs.
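The band-gap idea above can be made concrete with a quick photon-energy check: a photon can only promote an electron across the gap if its energy, hc divided by its wavelength, exceeds the band gap (about 1.1 eV for silicon). The numbers below are a generic illustration, not a calculation from the cited studies.

```python
# Which photons carry enough energy to excite an electron across silicon's band gap (~1.12 eV)?
# Photon energy E = h*c / wavelength; the constant below is hc in eV*nm.

HC_EV_NM = 1239.84
SILICON_BAND_GAP_EV = 1.12  # approximate band gap of crystalline silicon

def photon_energy_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

for name, wavelength in [("blue", 450), ("red", 700), ("near-infrared", 1000), ("infrared", 1300)]:
    energy = photon_energy_ev(wavelength)
    verdict = "can" if energy > SILICON_BAND_GAP_EV else "cannot"
    print(f"{name:13s} ({wavelength:4d} nm): {energy:4.2f} eV -> {verdict} excite an electron in silicon")
```

Photons far above the gap waste their excess energy as heat and photons below it pass through unused, which is a large part of why the roughly 32% ceiling mentioned above exists for a conventional silicon cell.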
Newer solar cells mix material systems to their advantage to provide more cost-efficient manufacturing and enhanced performance. Silicon/organic heterojunctions (SOH) solar cells still use silicon to absorb light energy, but no longer require PN junctions. Instead, the materials are separated by a heterojunction composed of a thin layer of spray-on organic polymer on the original silicon that mimics a heavily doped semiconductor.6
These heterojunctions are attractive because they can be constructed at temperatures below 100 °C.7 This is also energetically efficient because it reduces the overall displacement of the metal at its junction and increases the rate of energy harvest within a solar cell. These new developments are rapidly improving solar energy devices while simultaneously reducing the need for expensive materials.

Wind

Much like solar energy, there is plenty of room to improve the efficiency of wind power. Advancements in wind power today come from significantly improving the design of wind turbines through powerful and precise models. One difficulty wind turbines experience is the wake effect, which is the
“aggregated influence on the energy production of the wind farm, which results from the changes in wind speed caused by the impact of the turbines on each other.”8 Wakes end up decreasing the power output and lifespan of wind turbines and are further complicated by the fact that wind is invisible, which makes its impacts difficult to measure. The problem with current computing models is that they often fail to capture the accurate flow principles. To accurately model wake structure, one requires an understanding of the impact of all the different atmospheric turbulent conditions, something that is not computationally possible even with many of today’s existing capabilities.9 The power that supercomputers bring to the wind energy field arises from their ability to handle enormous data sets and computations within its processors. Previously, wind analysis was done by setting up radars or detectors that could capture wind speeds and information about the turbines. The data it produced was imprecise and could not provide sufficient information on how the design and layout of the wind farm affected its performance. Supercomputers can now model wind speeds and airflow across an entire wind farm that spans up to 5,000 acres. These supercomputers
use complex computational fluid dynamics simulations that break up wind farms into millions of individual components and run simulations on each of them. The simulations can test the wind direction, speed, orientation of turbines, and terrain with precision and efficiency. By building a model that constructs the different configurations through which air can impact wind turbines, the supercomputers drastically reduce wind wakes and lead to more creative designs for larger farms that minimize the additional cost of expansion.10

Moving forward

Amid an avalanche of questions on whether fossil fuels should still be considered the best choice for power generation, we are constantly seeking out ways to drive renewable energy solutions to the forefront. Today, major advances in technology are making necessary steps to increase efficiency and reduce the cost of renewable energy. These are the current steps being taken in improving renewable energy technologies, and the impact of extending them to thousands of places both on a national and global scale could be even more remarkable.
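The simulations described above resolve turbulence over millions of grid cells, but the basic idea of a wake model can be illustrated with the much simpler Jensen (Park) model, in which the wind-speed deficit behind a turbine decays as the wake spreads linearly with distance. All of the numbers below are assumed, textbook-style values rather than figures from the cited work.

```python
# Minimal Jensen (Park) wake model: wind speed on the centerline behind a single turbine.
# Free-stream speed, rotor radius, induction factor, and decay constant are assumed values.

U_INF = 10.0                 # free-stream wind speed (m/s), assumed
ROTOR_RADIUS_M = 60.0        # rotor radius (m), assumed
AXIAL_INDUCTION = 1.0 / 3.0  # operating near the Betz optimum
WAKE_DECAY = 0.075           # typical onshore wake decay constant

def wake_speed(x_downstream_m: float) -> float:
    """Wind speed (m/s) on the wake centerline a distance x behind the rotor."""
    deficit = 2 * AXIAL_INDUCTION / (1 + WAKE_DECAY * x_downstream_m / ROTOR_RADIUS_M) ** 2
    return U_INF * (1 - deficit)

for diameters in (3, 5, 7, 10):
    x = diameters * 2 * ROTOR_RADIUS_M
    u = wake_speed(x)
    # Available power scales roughly with the cube of wind speed, so small deficits matter a lot.
    print(f"{diameters:2d} rotor diameters downstream: {u:4.1f} m/s "
          f"(about {(u / U_INF) ** 3 * 100:3.0f}% of the upstream power)")
```

Even this crude model shows why layout matters so much: a turbine parked a few rotor diameters behind another may see only a fraction of the upstream power, since available power scales roughly with the cube of wind speed.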
REFERENCES:
1. Abolhosseini, S., Heshmati, A., & Altmann, J. (2014, April). A review of renewable energy supply and energy efficiency technologies. Retrieved from http://ftp.iza.org/dp8145.pdf
2. Conversation, C. D. T. (2019, October 23). Three ways environmentally conscious countries can conquer the fossil fuel industry. Retrieved from https://www.popsci.com/fight-fossil-fuel-companies-lobbies/
3. Winkelman, D., Jacklet, B., & Peterson, D. (2019, March 21). The Future of Solar is Bright. Retrieved from http://sitn.hms.harvard.edu/flash/2019/future-solar-bright/
4. Chandler, D. L., & MIT News Office. (2019, February 7). Unleashing perovskites' potential for solar cells. Retrieved from http://news.mit.edu/2019/perovskites-microstructure-solar-cells-0207
5. PN Junction Theory for Semiconductor Diodes. (2018, January 29). Retrieved from https://www.electronics-tutorials.ws/diode/diode_2.html
6. Next Generation Photovoltaics Round 2. (n.d.). Retrieved from https://www.energy.gov/eere/solar/next-generation-photovoltaics-round-2
7. Nagamatsu, K. A., Avasthi, S., Jhaveri, J., & Sturm, J. C. (2014). A 12% Efficient Silicon/PEDOT:PSS Heterojunction Solar Cell Fabricated at < 100 °C. IEEE Journal of Photovoltaics, 4(1), 260–264. doi: 10.1109/jphotov.2013.2287758
8. Peckham, O. (2019, October 11). Optimizing Offshore Wind Farms with Supercomputer Simulations. Retrieved from https://www.hpcwire.com/2019/10/09/optimizing-offshore-wind-farms-with-supercomputer-simulations/
9. Lundquist, J. K., Duvivier, K. K., Kaffine, D., & Tomaszewski, J. M. (2018). Costs and consequences of wind turbine wake effects arising from uncoordinated wind energy development. Nature Energy, 4(1), 26–34. doi: 10.1038/s41560-018-0281-2
10. Woolley, S. (2019, January 24). Catching A Second Wind: How Supercomputers Are Helping Neighboring Wind Farms Squeeze More Energy Out Of Their Turbines. Retrieved from https://www.ge.com/reports/to-boost-the-output-of-new-wind-farms-ge-is-using-supercomputers-to-understand-how-turbines-affect-their-neighbors/
DR. RAYMOND J. GORTE
RUSSEL PEARCE AND ELIZABETH CRIMIAN HEUER PROFESSOR OF CHEMICAL AND BIOMOLECULAR ENGINEERING (CBE) AND MATERIALS SCIENCE & ENGINEERING
INTERVIEW BY MICHELLE PAOLICELLI DESIGNED BY BRIAN SONG

Bio: Dr. Gorte is a professor of Chemical and Biomolecular Engineering at Penn. His current research focuses on creating thin films that allow heterogeneous catalysts to be more efficient in fuel cell applications. He has been here at Penn for thirty-nine years and his research as a graduate student revolved around emission control catalysis, a research area he still works in today. Emission control catalysis is important in the automotive industry and has been of great interest in recent years with the push to make cars more energy efficient. Dr. Gorte's research has many other important applications in a wide variety of fields, all with the potential to make the world more energy efficient.

Q: What is the problem your research aims to address?

"Automotive catalysts are expensive because they require lots of precious metals. I've been interested in how we can use less precious metal and get the same effectiveness of the catalyst. Research from a division of Toyota discovered certain perovskite materials, minerals that are abundant in the Earth's crust, would allow the small particles of the metal catalysts to go into their lattice. Small metal particles mean high surface area and less metal is needed. This was a very exciting discovery but ultimately didn't work because the exsolution, or separation of the catalyst, was occurring into the bulk material and moving away
from the surface, meaning you can't get a reaction."

Q: How does your research provide a potential solution to this problem?

"Our idea was to make a very thin perovskite coating on something more stable and then essentially create something where the metal particle can't exsolve in the bulk because there's really no bulk perovskite there. We prepare these thin perovskites using a method called atomic layer deposition, a common deposition technique in the semiconductor industry for making very thin films."

Q: Are there applications of this technology beyond emissions control?

"Yes, this area was picked up on by the solid oxide fuel cell community, because you can have metals exsolve from ceramic electronic conductors and maintain better structure of the overall material and still maintain high catalytic activity for anode materials. What was observed in addition to the fact that you could maintain the structure was that these materials were anti-coking. An issue in solid oxide fuel cells is that you want to run them on hydrocarbon fuels. The reason you can't is that nickel catalysts normally form graphitic
INTERVIEW
materials that coke-up. However, when you exsolve nickel from these perovskites that doesn’t happen. We’ve observed that you can also put nickel on our thin film materials and not coke up. This is an important property for catalysts that are used in other applications such as steam reforming. Hydrogen is formed from a process involving steam reforming of methane, and that uses a nickel catalyst. If you could operate that nickel catalyst without coking it, you could operate it more efficiently. These perovskite supported nickel catalysts are very interesting from that perspective.” Q: You mentioned that one of the initial reasons people began research in this area was because it was very expensive to use a lot of precious metals in catalysis. Is cost still a barrier for the newer methods like atomic layer deposition (ALD)? “It might be. I had a conversation with a colleague who recently retired from Ford and has a lot of experience in the area and he told me that he was always convinced that ALD would never be used for catalysts because it’s just too expensive a process. He also told me that the recent research has made him change his mind.”
Q: What has your general energy research experience been like?
"Energy research is an interesting topic. It tends to go through phases depending on the cost of energy and whether people are concerned about it. I think that this time it's probably not going to go away; the concern about climate change and everything else is going to keep the focus on it. Right now, catalysis is a critical component of any kind of chemical or fuel production. If you're going to make a fuel from biomass, shale, etc., you're going to involve heterogeneous catalysis."

Q: What do you think the future of energy will look like?
"I think the future will include the applications of this catalysis research. Even if we go to an all-electric world where everything is solar and wind, which I hope someday happens, I still think we are going to need fuels and chemicals, and they will still have to be produced in some way. Hopefully, we will be able to do most of it through recycling, but I think that's a long way off. Even if it's recycled, it will involve some form of heterogeneous catalysis. Energy research is exciting and fun, and I hope more students will become interested in getting involved!"
Developing a Novel Photographic Procedure for Extracting Concentrations Compared with Particle Count Microscopy
Bradley Wheeler¹, Juliana Pinheiro², Sheiny Tjia-Fleck¹, Ethan Zeller¹, Scott Gehler², Susan H. Stonedahl¹
¹St. Ambrose University, ²Augustana College
ABSTRACT
We developed a method of measuring the concentration of a specific fine particle (DayGlo® AX-11-5 Aurora Pink) suspended in solution using a camera. One accepted way to measure these concentrations uses a counting slide and a fluorescent microscope. We implemented this approach and found that it had many disadvantages and yielded barely adequate results. This paper presents the new method, evaluates it for ease of implementation and accuracy, and compares it to the microscope method. Overall, we found that the camera method produced more accurate data within a shorter time frame at a fraction of the equipment cost associated with the microscope method.
INTRODUCTION
Fine particles are small particles (i.e., <100 μm) that can be made of any material. Some notable examples are fine particulate organic matter, microplastics, and fine sediments. Transport of fine particulate organic matter affects biogeochemical processes within streams, which in turn affect the ecological health of the stream (Battin et al., 2008; Fisher, 1977; Cushing et al., 1993; Bilby and Likens, 1979). Microplastics are pollutants that accumulate in plants and animals, contaminate drinking water, and affect human health (Gallo et al., 2018; Eerkes-Medrano et al., 2015). Pollutants and nutrients can attach to fine sediments and be transported with them, affecting contamination and nutrient cycling (Fox et al., 2014). The presence of fine sediments in large quantities can also create irreversible changes to the functions of river systems (Owens et al., 2005). Due to all the possible consequences of the presence and movement of fine particles, it is important to improve our understanding of their transport mechanisms.
DayGlo® AX-11-5 Aurora Pink fluorescent pigments (Cleveland, OH) are thermoplastic pigments that can be used for paint or screen printing. They have been used as a representative fine particle in several transport studies (Drummond et al., 2017; Harvey et al., 2012; Drummond et al., 2014) because they are fluorescent, inexpensive, and similar in size to particulate organic carbon (POC). We hope that improving our understanding of the transport of these pink particles will help others improve models of POCs and contaminants, which could be used to inform pollutant policies or stream restoration projects and thus improve stream health. Our research group chose to use these particles in an ongoing research project and generated a large number of samples for which we wanted to know the relative concentration of particles. The fluorescent property of the particles enabled us to use a standard microscope and Neubauer counting chamber to count the particles. However, the microscope counting method is time consuming and requires an expensive fluorescent microscope. Our objective for this research was to design a quicker, relatively inexpensive, and ideally more accurate method to measure the relative concentrations of our samples.

Our research group observed that we could detect variations in the samples' colors with our naked eyes, so we thought that a camera, with its large number of sensors and controlled settings, would be able to detect even more changes and could serve as a tool to quantify the pink particle concentrations. This idea was supported when we found that image processing of RGB values collected with digital cameras has been used as a quantification tool for many purposes, including dye concentrations in sand columns (Persson, 2005; Aeby et al., 1997), surface concentrations of particles sprinkled on food (Shan et al., 1997), rice chlorophyll content (Wang et al., 2014), and microalgae biomass quantification (Sarrafzadeh et al., 2015). We believed a camera would be faster, and we hoped to develop a more accurate approach that would allow us to analyze a large number of samples. This led to the camera method presented in this paper.

Figure 1 shows the dilution series used as known relative concentrations to test the microscope and camera methods.
METHODS
Calibration Dilution Series
First, we created a series of dilutions (Figure 1), which we used as our known concentrations to evaluate both the microscope method and the new camera method. The particles are hydrophobic, so we used a dispersant, sodium hexametaphosphate, (NaPO₃)₆, to help suspend the particles in solution. To speed up dissolution of the (NaPO₃)₆, the solution was heated and stirred with a magnetic stirrer. We used 5 g/L of (NaPO₃)₆ to suspend 0.84 g/L of DayGlo AX-11-5 Aurora Pink fluorescent pigments while stirring constantly with the magnetic stirrer, consistent with preparation methods used by previous researchers (e.g., Drummond et al., 2017). We found that the solution temperature needed to be maintained below 80°C to avoid precipitation of the particles and that 40°C was optimal, as we could process the solution without waiting for it to cool.
Figure 2 shows a hemocytometer grid, brightfield image, fluorescent image, masked image, and composite image. (A) The Neubauer hemocytometer grid composed of four main quadrants. Each quadrant (1 mm × 1 mm × 0.1 mm) was photographed using the AxioVision image acquisition software. (B) Brightfield image of one of the quadrants, captured under halogen lighting. (C) Fluorescent image of one of the quadrants with 20 ms exposure, captured under fluorescent lighting. (D) Binary image of the total number of particles counted within the defined particle size range of 0.05–4.0 μm². (E) Overlay of the masked image on the raw fluorescent image to produce the composite image. This colocalization allowed us to assess the accuracy of the mask. In the ImageJ software, the masked particles were assigned the color red and the fluorescent particles the color blue. Proper alignment of the particles on both images resulted in the color purple.
After 15 minutes of vigorous mixing, some unsuspended pink particles inevitably floated to the top. We removed these using a separatory funnel and placed some of the remaining solution in a 20 mL glass scintillation vial, which we labeled pink particle solution 1. We measured 500 mL of solution 1 with a volumetric flask, transferred it into a 1000 mL volumetric flask, and diluted it to the mark with DI water. We inverted the solution many times to ensure proper mixing for the ½ concentration. We then repeated the process until the (½)¹² concentration was achieved. An aqueous solution of (NaPO₃)₆ was used as the 0 concentration.

Figure 4 shows an example picture taken with the camera setup. The average green in the sample (S_G) and in the background (B_G) comes from the pixels in the S and B rectangles, respectively.

Microscope Method
We inverted each of the previously prepared solution vials 10 times, placed them on a vortex for 10 seconds, and inverted them one last time to check for fine particles on the bottom. We then introduced a 10 μL sample from each solution into a Neubauer counting chamber for analysis. The microscope used was a Zeiss Axio Imager.A1 (Jena, Germany) and the camera was a Zeiss AxioCam MRm. The samples were examined under a rhodamine filter at 5× magnification (objective lens).
Figure 3 shows the setup constructed for the camera method from the side (A) and from above (B).
Under the halogen light, we centered each quadrant of the Neubauer counting chamber (Figure 2A) in the visual field of the computer screen. We captured the images using the AxioVision image acquisition software (Figure 2B). We then switched to fluorescent light and took three pictures (Figure 2C) at different exposures (20 ms, 80 ms, and 320 ms). This procedure took 7.5 min to give us four data points. We selected the 20 ms exposure for data analysis because it produced data with the least amount of variance. We saved the pictures as .TIF files.

Images were analyzed using the ImageJ software (LOCI, University of Wisconsin; Rasband, 1997). We opened each image file and set the scale for analysis ('Analyze' → 'Set Scale') as follows: distance in pixels, known distance 20.00, pixel aspect ratio 1.00, and unit of length in μm. To threshold each image ('Image' → 'Adjust' → 'Threshold'), we set the default 'Dark Background', chose 'Red' from the pull-down menu, and then adjusted the threshold to (3, 255). Binary images were made for each picture file. We defined the particle size range to be 0.05–4.0 μm². Particle analysis was performed by defining 'Show: Masks', selecting 'Display results', and then 'Summarize'. This allowed us to get a total particle count from each image (Figure 2D). Accuracy was determined by making an image overlay ('Image' → 'Color' → 'Merge Channels'). Channel 1 (Red) was designated to the fluorescent image and Channel 3 (Blue) to the masked image. The colocalization of the two images resulted in a purple color when the pictures were aligned properly (Figure 2E).
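For readers who wish to automate this counting step outside of ImageJ, the sketch below outlines an equivalent threshold-and-count analysis in Python with scikit-image. It is only an illustration: the file name, pixel-to-micrometer scale, and channel handling are placeholder assumptions, and the ImageJ workflow described above is the procedure we actually used.

import numpy as np
from skimage import io, measure

PIXELS_PER_UM = 3.1                     # hypothetical calibration: pixels per micrometer
MIN_AREA_UM2, MAX_AREA_UM2 = 0.05, 4.0  # particle size window from the text (um^2)

def count_particles(path, threshold=3):
    img = io.imread(path)                            # fluorescent image (.TIF)
    red = img[..., 0] if img.ndim == 3 else img      # use the red channel if RGB
    mask = red >= threshold                          # 'dark background' threshold (3, 255)
    labels = measure.label(mask)                     # connected-component labelling
    px_per_um2 = PIXELS_PER_UM ** 2
    count = 0
    for region in measure.regionprops(labels):
        area_um2 = region.area / px_per_um2          # convert pixel area to um^2
        if MIN_AREA_UM2 <= area_um2 <= MAX_AREA_UM2:
            count += 1
    return count

# Example (hypothetical file name): total count for one quadrant at 20 ms exposure.
# print(count_particles("quadrant1_20ms.tif"))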
Camera Method
We developed a procedure for taking pictures in a controlled and replicable manner. We built the setup shown in Figure 3 from wood, PVC, poster board, screws, bolts, 3D-printed plastic, and wingnuts. The setup was designed to be a rigid structure resistant to movement and to allow for simple modifications. During the initial stages of developing the setup, we observed that glare, shadows, and movement played a key role in the consistency of the data. For these reasons the placement and immobility of each component of the setup were critical. The only light in the room came from the 15 W bright white LED bulb in the lamp, which we positioned at an angle to minimize shadows behind the sample and reflections into the camera. The camera (Sony α57) with a 35 mm focal length lens (DT 1.8/35 SAM) was positioned so that the sample was at the center of the picture and was focused manually on the center of the sample. The ISO value was set to 100, the lowest the camera offers, because it allowed the most information to be gathered in the picture; higher ISO values create grainy images and are often used to photograph objects in motion (Busch, 2013). The aperture was set to f/11, a middle value whose depth of field keeps both the sample and the background in focus. The shutter speed was set to 1/100 s because, in conjunction with the ISO and aperture, it gave a good overall exposure for the photographs. A self-timer of two seconds allowed time for the photographer's hand to be removed, avoiding interference with the light and ensuring no camera movement during the picture. The white balance was set to incandescent because it was the best match for the light bulb we used: the color temperature of incandescent bulbs is about 3,200 Kelvin, and the light bulb we used was 3,000 Kelvin (Busch, 2013).
Figure 5 shows the concentration versus the median green value of each sample.
Once the camera was ready, we prepared each sample as follows. First, we cleaned the sample bottles with a microfiber cloth to remove any dust or fingerprints. Then we shook the sample 10 times, placed it on a vortex for 10 seconds, and shook it another 10 times before placing it on our custom 3D-printed sample stand, which ensured each sample was always in exactly the same position. We then waited 10 seconds before taking a picture to allow the bubbles induced by shaking to dissipate. After taking four pictures of the sample, we repeated the procedure for the next sample. The procedure takes about 1 minute per sample, and all of the materials, including the camera, cost approximately $800.
Figure 6 shows the average of the measured dilution concentrations collected from the microscope and camera methods plotted vs. the known dilution concentrations.
Figure 7 shows the average percent error on a log-scale of each measured dilution concentration from the microscope and camera methods.
The pictures are composed of pixels, each of which has red, green, and blue integers in the range 0–255 (RGB values) that represent the color information within the photograph. By comparing the values of the most concentrated pink solution to those of the blank solution, we found that the green component is affected most by the concentration of the pink particles. When analyzing each picture, we calculated the average green value within a 300 by 600 pixel rectangle (S_G) located in the center of the sample where no glare is present. We also calculated the average green value in a 400 by 600 pixel background rectangle (B_G), where no shadows exist and no pink is present (Figure 4). This background rectangle captures variations in the picture due to either camera inconsistency or light bulb variation. We used Equation 1 to calculate the adjusted green value (A_G) for each picture:

A_G = S_G - k·B_G    (Equation 1)

The constant k was calculated to be 1.282 for our setup. To do this calculation, we tried all potential k-values between 0.500 and 2.000 to three decimal places; for each k-value we calculated the variance of the 16 pictures of each of the 14 solutions and then selected the k-value that minimized the total variance. In our procedure we used four photographs of a sample to calculate a single measurement, taking the median of the four values in order to avoid possible outliers (A_G median). This process was done for four repetitions of the fourteen samples (16 × 14 pictures). We then calculated a trendline (Equation 2) for the relationship between the adjusted green medians and the dilution concentrations (Figure 5):

y = -0.0089x - 0.6708    (Equation 2)

In this equation the x-value corresponds to A_G median and the y-value corresponds to the dilution concentration, C, which leads to Equation 3:

C = -0.0089·(A_G median) - 0.6708    (Equation 3)

Figure 8 shows the average total error of each measured dilution concentration from the microscope and camera methods.
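To make this calculation concrete, the following Python sketch ties Equations 1–3 together. The rectangle coordinates, file handling, and function names are illustrative assumptions only; the constants (k = 1.282 and the Equation 3 coefficients) come from the calibration described above.

import numpy as np
from skimage import io

# Placeholder pixel regions: a sample window free of glare and a background
# window free of shadows and pink. Actual coordinates depend on the camera
# framing and are not specified here.
SAMPLE_BOX = (slice(600, 1200), slice(850, 1150))
BACKGROUND_BOX = (slice(100, 700), slice(1500, 1900))

def adjusted_green(path, k=1.282):
    """Equation 1: A_G = S_G - k*B_G for one photograph."""
    img = io.imread(path).astype(float)
    s_g = img[SAMPLE_BOX][..., 1].mean()       # average green inside the sample (S_G)
    b_g = img[BACKGROUND_BOX][..., 1].mean()   # average green in the background (B_G)
    return s_g - k * b_g

def choose_k(calibration_images):
    """Grid-search k in [0.500, 2.000] (step 0.001) that minimizes the summed
    variance of A_G across the calibration solutions. `calibration_images`
    maps each solution to the list of its picture paths (16 per solution)."""
    best_k, best_var = None, np.inf
    for k in np.round(np.arange(0.500, 2.001, 0.001), 3):
        total_var = sum(np.var([adjusted_green(p, k) for p in paths])
                        for paths in calibration_images.values())
        if total_var < best_var:
            best_k, best_var = k, total_var
    return best_k

def relative_concentration(picture_paths, k=1.282):
    """Median A_G over a sample's pictures, converted with Equation 3."""
    a_g_median = np.median([adjusted_green(p, k) for p in picture_paths])
    return -0.0089 * a_g_median - 0.6708

# Example (hypothetical file names): four pictures of one unknown sample.
# print(relative_concentration(["sample07_1.jpg", "sample07_2.jpg",
#                               "sample07_3.jpg", "sample07_4.jpg"]))

In practice the images would be loaded once and reused across k-values; the nested loop above favors clarity over speed.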
Equation 3 gives us a way to calculate the concentration of any unknown sample from its A_G median, the median value of its four pictures. We then evaluated the quality of this method on twenty additional independent repetitions of each of the fourteen dilutions.

RESULTS AND DISCUSSION
From both the camera and the microscope datasets we plotted the average measured concentrations versus the known dilution concentrations and generated a trendline with a slope, y-intercept, and R² value (Figure 6). For a perfect correlation the slope would be one, the y-intercept would be zero, and the R² value would be one. The microscope method gives a slope of 1.0916 and the camera method a slope of 1.0073; both are near the desired value, but the camera method is much closer. Both y-intercept values are extremely close to zero. The R² value shows the variation of the observed data from the perfect trendline: the microscope method results in an R² of 0.9656, which is good, but the camera method results in an R² of 0.9997, which is considerably better.

The average percent errors of the dilution concentrations from the microscope and camera methods are compared in Figure 7. The camera method resulted in a lower average percent error at all concentrations. Each concentration measured using the camera method had less than 9.5% error when the relative concentration was greater than or equal to 2⁻⁶ (≈ 0.016), whereas the same concentrations measured using the microscope method had less than 97.5% error. The average error of these first seven dilutions was 4.3% for the camera method, compared to 40.8% for the microscope method. As the dilutions became lower, the error for the camera method remained below 105% for concentrations greater than or equal to 2⁻⁹ (≈ 0.002), while the microscope method error reached 924% for these same concentrations. The average error of these first ten dilutions was 25.1% for the camera method and 192.1% for the microscope method.
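The summary statistics above can be reproduced along the following lines. This is a minimal sketch rather than our analysis script; the array names are illustrative, and the known and measured values would come from the dilution series and the repeated camera or microscope measurements.

import numpy as np

def trendline_stats(known, measured_means):
    """Least-squares slope, intercept, and R^2 of measured vs. known concentrations."""
    known = np.asarray(known, dtype=float)
    measured_means = np.asarray(measured_means, dtype=float)
    slope, intercept = np.polyfit(known, measured_means, 1)
    predicted = slope * known + intercept
    ss_res = np.sum((measured_means - predicted) ** 2)
    ss_tot = np.sum((measured_means - measured_means.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

def average_percent_error(known_value, repeats):
    """Mean of |measured - known| / known (in %) over repeated measurements of
    one nonzero dilution."""
    repeats = np.asarray(repeats, dtype=float)
    return np.mean(np.abs(repeats - known_value) / known_value) * 100.0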
We calculated the total error for each dilution because at low concentration values a small difference in values can create a large percent error (Figure 8). We found that the maximum total error was 0.0152 for the camera method and 0.141 for the microscope method. The average total error of all fourteen dilutions was 0.00414 for the camera method, compared to 0.0373 for the microscope method. The camera method resulted in lower total errors at all concentrations.

CONCLUSIONS
From this experiment we found that the camera method we developed outperformed the microscope method at all concentrations. The camera method met all of our objective requirements, as it can be performed almost twice as quickly, costs approximately fifty times less, and produced significantly less error than the microscope method. Although the camera method was found to be more accurate, it still should not be used if concentration values are lower than 0.2% of our maximum concentration. Another drawback of the camera method is that a different method would be needed to provide absolute rather than relative data.

ACKNOWLEDGEMENTS
We would like to thank Dr. Jen Drummond from the University of Birmingham for providing us with pink particle information and Dr. Forrest Stonedahl from Augustana College for technical camera and computer consultation, as well as providing an initial review of the paper. We would like to thank many people at St. Ambrose University for help with our project, in particular the chemistry department for the use of laboratories and equipment, especially Dr. Andrew Axup for his chemistry consultation. We thank Dr. Michael Opar for his assistance with construction and Dr. William Hixon for the use of a counting slide. We thank students Grace Richardson and Stephanie Quigley for preliminary data analysis, construction assistance, and taking pictures. We thank Miranda Noack, Rebecca Foster, Katelyn Schroeder, and Collin Berry for taking pictures. We thank Aiko Mendoza for modifying some of our figures. Lastly, we thank the Stoffel Fund for Excellence in Scientific Inquiry for funding this project.

REFERENCES
Aeby, P., J. Forrer, H. Flühler, and C. Steinmeier. "Image Analysis for Determination of Dye Tracer Concentrations in Sand Columns." Soil Science Society of America Journal 61, no. 1 (1997): 33–35. https://doi.org/10.2136/sssaj1997.03615995006100010006x.

Battin, Tom J., Louis A. Kaplan, Stuart Findlay, Charles S. Hopkinson, Eugenia Marti, Aaron I. Packman, J. Denis Newbold, and Francesc Sabater. "Biophysical Controls on Organic Carbon Fluxes in Fluvial Networks." Nature Geoscience 1, no. 2 (February 2008): 95–100. https://doi.org/10.1038/ngeo101.

Bilby, Robert E., and Gene E. Likens. "Effect of Hydrologic Fluctuations on the Transport of Fine Particulate Organic Carbon in a Small Stream: Fluctuations in Stream-Water FPOC." Limnology and Oceanography 24, no. 1 (January 1979): 69–75. https://doi.org/10.4319/lo.1979.24.1.0069.

Busch, David D. David Busch's Sony Alpha SLT-A57 Guide to Digital Photography. 1st edition. Boston, MA: Cengage Learning PTR, 2012.

Cushing, Colbert E., G. Wayne Minshall, and J. Denis Newbold. "Transport Dynamics of Fine Particulate Organic Matter in Two Idaho Streams." Limnology and Oceanography 38, no. 6 (1993): 1101–15. https://doi.org/10.4319/lo.1993.38.6.1101.

Drummond, Jennifer D., Robert J. Davies-Colley, Rebecca Stott, James P. S. Sukias, John W. Nagels, Alice Sharp, and Aaron I. Packman. "Retention and Remobilization Dynamics of Fine Particles and Microorganisms in Pastoral Streams." Water Research 66 (December 1, 2014): 459–72. https://doi.org/10.1016/j.watres.2014.08.025.
Drummond, Jennifer D., Laurel G. Larsen, Ricardo González‐Pinzón, Aaron I. Packman, and Judson W. Harvey. "Fine Particle Retention within Stream Storage Areas at Base Flow and in Response to a Storm Event." Water Resources Research, July 1, 2017. https://doi.org/10.1002/2016WR020202.

Eerkes-Medrano, Dafne, Richard C. Thompson, and David C. Aldridge. "Microplastics in Freshwater Systems: A Review of the Emerging Threats, Identification of Knowledge Gaps and Prioritisation of Research Needs." Water Research 75 (May 15, 2015): 63–82. https://doi.org/10.1016/j.watres.2015.02.012.

Findlay, Stuart, Michael Pace, and David Lints. "Variability and Transport of Suspended Sediment, Particulate and Dissolved Organic Carbon in the Tidal Freshwater Hudson River." Biogeochemistry 12, no. 3 (March 1, 1991): 149–69. https://doi.org/10.1007/BF00002605.

Fisher, Stuart G. "Organic Matter Processing by a Stream-Segment Ecosystem: Fort River, Massachusetts, U.S.A." Internationale Revue der gesamten Hydrobiologie und Hydrographie 62, no. 6 (1977): 701–27. https://doi.org/10.1002/iroh.1977.3510620601.

Fox, James, William Ford, Kyle Strom, Gabriele Villarini, and Michelle Meehan. "Benthic Control upon the Morphology of Transported Fine Sediments in a Low-Gradient Stream." Hydrological Processes 28, no. 11 (2014): 3776–88. https://doi.org/10.1002/hyp.9928.

Gallo, Frederic, Cristina Fossi, Roland Weber, David Santillo, Joao Sousa, Imogen Ingram, Angel Nadal, and Dolores Romano. "Marine Litter Plastics and Microplastics and Their Toxic Chemicals Components: The Need for Urgent Preventive Measures." Environmental Sciences Europe 30, no. 1 (December 2018): 13. https://doi.org/10.1186/s12302-018-0139-z.

Owens, Philip N., R. J. Batalla, A. J. Collins, B. Gomez, D. M. Hicks, A. J. Horowitz, G. M. Kondolf, et al. "Fine-Grained Sediment in River Systems: Environmental Significance and Management Issues." River Research and Applications 21, no. 7 (September 2005): 693–717. https://doi.org/10.1002/rra.878.

Persson, Magnus. "Accurate Dye Tracer Concentration Estimations Using Image Analysis." Soil Science Society of America Journal 69, no. 4 (July 1, 2005): 967–75. https://doi.org/10.2136/sssaj2004.0186.

Rasband, Wayne S. "ImageJ software." National Institutes of Health: Bethesda, MD, USA, 2012 (1997).

Sarrafzadeh, Mohammad H., Hyun-Joon La, Jae-Yon Lee, Dae-Hyun Cho, Sang-Yoon Shin, Woo-Jin Kim, and Hee-Mock Oh. "Microalgae Biomass Quantification by Digital Image Processing and RGB Color Analysis." Journal of Applied Phycology 27, no. 1 (February 1, 2015): 205–9. https://doi.org/10.1007/s10811-014-0285-7.

Shan, Yun, Mark D. Normand, and Micha Peleg. "Estimation of the Surface Concentration of Adhered Particles by Color Imaging." Powder Technology 92, no. 2 (July 15, 1997): 147–53. https://doi.org/10.1016/S0032-5910(97)03232-4.

Wang, Yuan, Dejian Wang, Peihua Shi, and Kenji Omasa. "Estimating Rice Chlorophyll Content and Leaf Nitrogen Concentration with a Digital Still Color Camera under Natural Light." Plant Methods 10, no. 1 (November 6, 2014): 36. https://doi.org/10.1186/1746-4811-10-36.
Interested in joining PennScience? Contact us at www.pennscience.org. PennScience is sponsored by the Science and Technology Wing at the University of Pennsylvania.