Volume 18 Issue 2 • Spring 2014 • Synthetics

FEATURES

Applications of Magnetoelectric Materials for Solid-State Devices  Karthik Gururangan  1
DNA: Building Blocks of Nanotechnology  Alexander Powers  6
Fabricating Nano-Scale Devices: Block Copolymers and their Applications  Aditya Limaye  10
Bright Ideas in Solar Energy  Jo Melville  13
Total Heart Transplant: A Modern Overview  Nithya Lingampalli  19
Manufactured Memories  Jessica Robbins  23
Lab on a Microchip and Microfluidic Technologies: Toxicology and Drug Development  Ann Heslin  28
RESEARCH & INTERVIEW

An Interview with Professor Jan Rabaey: Neural Prosthetics and Their Future Applications  Kuntal Chowdhary, Jingyan Wang, Saavan Patel, Shruti Koti  32
Copper Catalyzed Oceanic Methyl Halide Production  Jae Yun Robin Kim and Robert Rhew  41
Phylogenetic Diversity and Endemism: Metrics for Identifying Critical Regions of Conifer Conservation in Australia  Annasophie C. Lee and Brent Mishler  48
CONTACT

Mailing Address: Berkeley Scientific Journal, 5 Durant Hall #2940, Berkeley, CA 94720-2940
Phone: (510) 643-5374
Email: bsj.berkeley@gmail.com
Online: bsj.berkeley.edu
Dear Reader,

In this issue of Berkeley Scientific Journal, we explore how recent technological and biomedical developments have advanced our understanding of "synthetics" in science. We are constantly surrounded by synthetic science: manmade technological advancements that enable us to communicate faster, manipulate genes, and progress toward cleaner energy. UC Berkeley professors and other members of the academic community are currently researching cancer immunotherapy, genome editing, and solar energy, among other topics. These accomplishments, among many others, have garnered global attention. Now is the perfect time to dedicate the current BSJ issue to "synthetics."

This semester's issue is filled with high-quality research, feature articles, and an interview with an award-winning Cal professor. For an understanding of how synthetic science works on a microscopic level, read our feature articles about DNA as a building block of nanotechnology [6], magnetoelectric materials [1], and "lab on a chip" technologies [28]. Departing from micro-scale technology, explore interesting articles on "Manufactured Memories" [23], artificial heart devices [19], and "Bright Ideas in Solar Energy" [13]. Additionally, Berkeley Scientific had the opportunity to interview Jan Rabaey, a professor of Electrical Engineering and Computer Science, about his research on neural prosthetics and their future applications [32].

I invite you to read the second issue of Berkeley Scientific Journal's eighteenth volume, filled with fine articles and undergraduate research papers on the topic of synthetics. Go Bears!

Sincerely,
Prashant Bhat
Editor-in-Chief
STAFF

Editor-in-Chief: Prashant Bhat
Managing Editor: Malone Locke
Features Editors and Writers: Alvin Huang, Jessica Robbins, Nithya Lingampalli, Aditya Limaye, Alexander Scott Powers, Ann Heslin, Jo Melville, Karthik Gururangan
Interview Editors and Team: Kuntal Chowdhary, Ali Palla, Harshika Chowdhary, Eiman Kazi, Jingyan Wang, Manraj Gill, Rhea Misra, Saavan Patel, Shruti Koti
Publicity: Tanu Patel
Research Editors and Team: David Ding, Eric Huang
Design & Layout Editors and Team: Alex Yang, Grace Deng, Michael Looi, Lucy Zhang, Spring Chau, Alexis Bowen, Jingting Wu, Cheng (Kim) Li
Applications of Magnetoelectric Materials for Solid-State Devices
Karthik Gururangan
In 1947, three men, William Shockley, Walter Brattain, and John Bardeen, revolutionized the world of electronics by developing the first operational transistor. The transistor made the creation of the computer and other digital electronics possible, ushering in a new era of technology and earning the three men the 1956 Nobel Prize in Physics. Armed with the transistor, scientists and engineers (notably those at Intel Corporation) developed faster, smaller, and more efficient microprocessors and RAM storage devices throughout the 1980s and 1990s. The development of solid-state storage devices and hard drives is of utmost importance for our increasingly electronic world.

Within the last decade, computers have reached unprecedented operation speeds, but we are approaching a plateau. Recently, scientists have found that we cannot overcome this hurdle simply by fitting more and more transistors onto a chip; we must actually change the fabric of the device by creating new materials in a fashion that takes advantage of their inherent quantum electronic properties. Currently, hard drives operate on the principle of magnetoresistance: the phenomenon in which a material's magnetic polarization affects its electrical resistance.
Figure 1. Diagram of parallel and anti-parallel magnetic tunnel junction (MTJ) configurations (Tsymbal, 1999)
In Figure 1, we can see that the magnetic tunnel junction (MTJ) consists of a thin insulator sandwiched between two magnets. The insulator has a thickness on the order of a few nanometers; therefore, electrons
from one magnet can tunnel across the dielectric to the other magnet. Tunneling means that every particle has a non-zero probability of traversing a barrier that classical physics would forbid; it arises from the wave-particle duality of matter and is therefore a purely quantum phenomenon, appreciable only for very small particles such as electrons. When electrons traverse the boundary, they create a current, and we can measure a resistance across the MTJ device.

The number of electrons that will tunnel across the insulator is dictated by the relative orientations of the two magnets. Suppose that there are only two magnetic orientations: "left" and "right." If both magnets have the same orientation, we call that the parallel state; if they point in opposite directions, they are in the anti-parallel state. Tunneling is highly encouraged in the anti-parallel state and suppressed in the parallel configuration (Nishimura, 2002). Thus, we can conclude that for a given potential, the anti-parallel state has a very low resistance while the parallel state has a high resistance. In hard disk drives, a read head applies a set voltage over many of these MTJ bits, measuring the resulting resistance of each one. It interprets a high resistance as a "1" state and a low resistance as a "0" state; thus, magnetoresistance directly leads to a digital interpretation. In the case of RAM (random-access memory), these bits are constantly being changed and reinterpreted. The problem is that imparting these magnetic polarizations (a process called writing) requires an external magnetic field and is energetically inefficient given how many times the polarization changes.

Recently, there has been huge interest in a new class of materials called multiferroics. These materials exhibit two or more so-called "ferroic parameters": ferroelectricity, ferromagnetism, and ferroelasticity. Current research suggests that using specific multiferroic materials in place of the magnets in Figure 1 could revolutionize computer memory systems and usher in a new era of quantum electronic technology. Before we can understand what the different ferroic parameters are and how they occur, we need to understand crystalline materials and crystal structure. When one thinks of crystals, one often pictures
diamond or some other glistening gem. In reality, crystals are much more common than precious stones and the study of crystalline materials constitutes the backbone of solid-state physics and quantum electronics. Crystalline materials are simply those in which the atoms that make up the material are arranged
in a regular pattern. There are seven different crystal systems: cubic, tetragonal, trigonal, orthorhombic, monoclinic, triclinic, and hexagonal. In cubic crystal systems, the atoms lie on or within a cube; the other six systems can be thought of as a normal cube warped along one or more dimensions. Crystallographers further break down these seven systems by their spatial symmetry into 14 unique structures known as Bravais lattices. The three Bravais lattices that we will focus on are the simple lattice, the body-centered lattice, and the face-centered lattice. Figure 2 shows these three different Bravais lattices.
Figure 2. (a) simple lattice, (b) body-centered lattice, (c) face-centered lattice. The blue dots represent the constituent (not necessarily identical) atoms of the material (Kotowski, 2002).

From the picture, we can see that the simple structure is the easiest to understand, with one atom occupying each vertex position of the cube in each unit cell. A body-centered structure is the same as the simple structure except that there is one additional atom in the center of the cube. Face-centered lattices share the same framework as the simple lattice but also include an atom at the center of each of the six cubic faces. The geometry of a material's unit cell is called that material's lattice, while its unique atomic makeup is called the material's crystal motif. A good analogy is to think of the lattice as a blueprint for a building, while the motif is what actually fills the building. Many buildings may share the same blueprint (for example, homes built by the same architect), but each building has a different appearance and interior depending on what one chooses to fill it with. Similarly, many materials exhibit the same lattice, but each material has a unique motif that gives it the properties it possesses.

Armed with a basic understanding of crystal structure, we can tackle the electronic ferroic parameter: ferroelectricity. All materials can be grouped into one of three electronic categories: dielectric, paraelectric, and ferroelectric. These distinctions are made based on how a material reacts when an external electric field is applied to it. While a material can only be characterized into one electronic category at a time, it can switch between categories with a change in external parameters such as temperature. Figure 3 depicts the three different responses:

Figure 3. Graphs of dielectric (a), paraelectric (c), and ferroelectric (b) responses (TRS Technologies, 2014).
Each graph plots the material's electric displacement field (D) against the applied electric field (E). For the sake of simplicity, we can take the displacement field to be the internal electric polarization of the material. A dielectric material has a linear response to the applied field: as the external field increases in magnitude, so does the material's polarization. The drastic non-linear response in Figure 3(c) shows that a paraelectric material acquires a polarization that grows disproportionately with the applied electric field. The most interesting of these, the ferroelectric response, is characterized by a spontaneous, intrinsic electric polarization even in the absence of an externally applied field (there is a non-zero y-value at zero applied field in the ferroelectric graph). Furthermore, a ferroelectric material can have the direction of
its intrinsic polarization changed by changing the external electric field. Why do these responses occur? Dielectrics (often called insulators) are typically polymers, glasses, and other non-conducting materials, while paraelectrics and ferroelectrics are almost exclusively crystalline materials. Let us observe the unit cell of a generic paraelectric:
Figure 4. Generic paraelectric/ferroelectric FCC crystal unit cell. The black arrows represent each atom's electric polarization (Strobel, 2009).

A paraelectric has no intrinsic electric polarization because all of these small arrows cancel each other out, resulting in a net polarization of zero. However, when acted upon by an externally applied field, all of these arrows suddenly align in the direction of the applied field, and there is a huge spike in the electric polarization of the material as a whole. Now consider the same figure, but take it to be a ferroelectric material. Ferroelectrics come in two different flavors: displacive and order-disorder. In displacive ferroelectrics, each unit cell actually has a net polarization (the arrows do not cancel each other out), and over a certain range of temperature every unit cell in the material has a non-zero electric polarization that contributes to a predictable, non-zero intrinsic material polarization. At higher and higher temperatures, displacive ferroelectrics can have more random unit-cell polarizations that lead to an unpredictable and possibly non-existent spontaneous polarization. Order-disorder ferroelectrics are caused by slight ionic displacements that lead to an asymmetry between the nearby ions' electric fields and the elastic restoring force on the unit cell. In particular, the electric field grows faster than the mechanical force, causing a net electric force on the ion. This leads to a net ionic displacement, which leads to a net polarization. In the sample unit cell in Figure 4, the center ion is typically the one that is displaced from equilibrium and causes the net electric moment.

Scientists have observed that many order-disorder ferroelectrics exhibit the perovskite structure. Perovskite is the common name given to crystalline materials with the chemical formula ABO3, where O is oxygen and A and B are two other elements. For example, the classic example of a ferroelectric material is barium titanate (BaTiO3). Barium ions lie on the eight cube vertices, one titanium ion sits in the center, and six oxygen ions sit at the face centers, forming an octahedron around the titanium. The titanium ion is permanently displaced from its central equilibrium position and is responsible for causing a net electric moment. Another important feature of ferroelectric materials is that their electric polarization can be changed by applying an electric field in a different direction. Imagine we have a ferroelectric unit cell with its polarization pointing up and we apply an electric field pointing directly to the right. The polarization (black) will move toward the direction of the electric field (orange) over a period of time:
If we were to plot this motion as we change the polarization by 180 degrees, reverse the bias, and then plot the movement back to the original direction, we would trace out a loop similar to that in Figure 3(b). This is referred to as a hysteresis loop; the name derives from the fact that the new position of the polarization is guided by its previous position (or history). If we looked at a paraelectric material instead, the polarization would immediately snap into line with the electric field.

The second and most famous ferroic parameter is ferromagnetism. Ferromagnetism is completely analogous to ferroelectricity except that ferromagnets have a spontaneous magnetic moment; therefore, much of our discussion of ferroelectricity applies in a similar way to ferromagnetism. Everyday magnets are ferromagnets, including the most famous one, iron, for which the property is named. Similar to the electronic categorization, materials also belong to at least one of the following three classes: paramagnets, diamagnets, and ferromagnets. The graphs of the applied magnetic field versus the material's magnetization look the same as those shown in Figure 3. Paramagnetic atoms align their magnetic moments with the external field much as paraelectrics do, while diamagnets partially cancel out the external field and remain almost magnetically neutral. Ferromagnets exhibit a spontaneous magnetic moment, similar to how ferroelectrics have an intrinsic electric polarization. On top of being ferromagnetic, some
materials can also be classified as anti-ferromagnetic or ferrimagnetic. Anti-ferromagnets have alternating magnetic polarizations on neighboring atoms, making them magnetically neutral overall but still controllable by external magnetic forces, while ferrimagnetic materials exhibit unequal alternating moments on their constituent atoms, leaving a net magnetization weaker than that of a comparable ferromagnet.

We can now delve into the heart of multiferroic research: magnetoelectric multiferroic materials. Magnetoelectric multiferroics are those materials that are both ferroelectric and either ferromagnetic or anti-ferromagnetic. The most promising of these materials is bismuth ferrite (BiFeO3, or BFO), a perovskite ferroelectric/anti-ferromagnetic multiferroic. At room temperature, BFO has a rhombohedral crystal structure, but depending on the substrate it is grown on, it can adopt a monoclinic or tetragonal phase with different electronic and magnetic properties (Catalan, 2009). BFO is composed of equal parts Bi2O3 and Fe2O3. According to current research, when BFO is grown as a thin film (nanometer to micron thickness), it retains a nearly 15% higher remnant electric polarization compared to bulk crystalline BFO (Catalan, 2009). The reason for this, and for why thin-film technology is so promising for device development, is that electrons behave differently when they interact within a crystal whose thickness is approximately equal to the wavelength of the electron wavefunction. The electrons act more or less as if they were in a potential well rather than as free particles. This is to say that electrons in thin films are essentially trapped; thus, they have very predictable and well-defined behaviors. This makes thin-film nanostructures an exciting prospect for quantum electronics research.
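To get a feel for why confinement at these thicknesses matters, here is a rough back-of-the-envelope sketch (my illustration, not a calculation from the article): treat a conduction electron in a film of thickness L as a particle in a one-dimensional infinite potential well, whose allowed energies are E_n = n^2 h^2 / (8 m L^2). The film thicknesses below are arbitrary examples.

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electron-volt

def well_level(n, thickness_nm):
    """Energy (in eV) of the n-th level of an infinite square well of width thickness_nm."""
    L = thickness_nm * 1e-9
    return n**2 * H**2 / (8 * M_E * L**2) / EV

for t in (2.0, 10.0, 100.0):   # illustrative film thicknesses in nanometers
    gap = well_level(2, t) - well_level(1, t)
    print(f"{t:6.1f} nm film: E1 = {well_level(1, t):.4f} eV, E2 - E1 = {gap:.4f} eV")

For a film only a few nanometers thick, the spacing between levels approaches a tenth of an electron-volt or more, which is why the "trapped" electrons described above behave so differently from electrons in a bulk crystal.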
Earlier we proposed that whatever multiferroic replaced the magnets in Figure 1 could have a tremendous impact on the energetic efficiency of any magnetoresistive device. Since BFO is anti-ferromagnetic, it would seem counterintuitive that it would make a good magnetic component. This is why we would not use just BFO for those magnets; rather, we would replace each magnet with a heterostructure: a stacking of different thin-film materials. In particular, when the ferromagnet CoFe is placed on top of a thin film of BFO, we observe an exchange coupling interaction between the anti-ferromagnetic moment of BFO and the ferromagnetic moment of CoFe (Martin, 2008). Even though an anti-ferromagnet is magnetically neutral, it still has two "orientations." If the first atom has a "down" polarization, the next must have an "up," followed by a "down," and so on until the whole unit cell is magnetically neutral. If we flip the initial condition (i.e., start on an "up"), all of the other atoms also flip their polarizations. To put it simply, this magnetic exchange coupling interaction refers to the CoFe's ability to "read" the orientation of the anti-ferromagnetic moment and align its own ferromagnetic moment appropriately (Martin, 2008). Suppose the "down" initial condition on BFO corresponds to a "left" magnetic moment on CoFe; if the BFO were to flip for some reason and go into the "up"-starting configuration, the CoFe would read this switch and turn its magnetic moment appropriately to the "right" orientation. The only question that now remains is: how do we flip BFO's anti-ferromagnetic configuration?

Fortunately, scientists at UC Berkeley have observed another internal coupling interaction between BFO's ferroelectric polarization and its anti-ferromagnetic moments (Martin, 2008). As we established earlier, all ferroelectrics can have their electric polarization changed by an external voltage. Thus, if we apply an electric field to BFO and flip its electric polarization, we also flip its anti-ferromagnetic polarization, which is read by the CoFe on top of it, flipping the CoFe's magnetic moment and allowing us to re-write a magnetic moment without ever having to utilize an energetically expensive magnetic field. This discovery is a huge triumph and forms the basis
for new magnetoelectric random access memory (MERAM). MERAM could revolutionize solid-state technology, greatly increasing the speed, decreasing the cost, and improving the stability of modern computers as well as phones, microprocessors, and other solid-state devices.
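As a purely illustrative summary of that write chain (my sketch, with invented boolean states rather than any real device model), the point is the causal sequence: a voltage pulse flips the BFO ferroelectric polarization, the internal magnetoelectric coupling flips its antiferromagnetic ordering, and the exchange coupling makes the CoFe moment follow, all without an external magnetic field.

class MeramBit:
    """Toy model of one MERAM cell reduced to three coupled boolean order parameters."""
    def __init__(self):
        self.ferroelectric_up = True     # BFO electric polarization
        self.afm_starts_up = True        # BFO antiferromagnetic ordering
        self.cofe_points_right = True    # CoFe ferromagnetic moment

    def write(self):
        """One electric-field pulse: every coupled order parameter flips together."""
        self.ferroelectric_up = not self.ferroelectric_up
        self.afm_starts_up = self.ferroelectric_up      # internal magnetoelectric coupling
        self.cofe_points_right = self.afm_starts_up     # exchange coupling to CoFe

    def read(self):
        """A magnetoresistive read head senses the CoFe orientation as a bit."""
        return 1 if self.cofe_points_right else 0

bit = MeramBit()
print(bit.read())   # 1
bit.write()         # written with a voltage; no magnetic write field needed
print(bit.read())   # 0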
References

Bibes, M., & Barthelemy, A. (2008). Towards a magnetoelectric memory. Nature. Retrieved from http://www.matdl.org/matdlwiki/images/9/91/Magnetoelectric_memory.pdf

Catalan, G. (2009). Physics and applications of bismuth ferrite. Advanced Materials. Retrieved from http://onlinelibrary.wiley.com/doi/10.1002/adma.200802849/pdf

Chu, Y., & Martin, L. W. (2008). Electric-field control of local ferromagnetism using a magnetoelectric multiferroic. Nature Materials. Retrieved from http://sector7.xray.aps.anl.gov/~wen/nmat2184-1.pdf

Kotowski, R. (Designer). Bravais lattices [Web photo]. Retrieved from http://edu.pjwstk.edu.pl/wyklady/fiz/scb/Wyklad14/w14.xml

Nishimura, N., & Hirai, T. (2002). Magnetic tunnel junction device with perpendicular magnetization films for high-density magnetic random access memory. Journal of Applied Physics, 91(8). Retrieved from http://scitation.aip.org/docserver/fulltext/aip/journal/jap/91/8/1.1459605.pdf

Strobel, R. (Photographer). (2009, Nov 24). PZT perovskite [Web photo]. Retrieved from http://www.zeit.de/wissen/2009-11/erde-sdpiezokristalle

TRS Technologies. (2014). Displacement field response [Web photo]. Retrieved from http://www.trstechnologies.com/Products/Specialty_Capacitors/high_energy_capacitors.php

Tsymbal, E. Y. (Designer). (1999). Giant magnetoresistance [Web photo]. Retrieved from http://physics.unl.edu/tsymbal/tsymbal_files/Presentations/JMW-1999.pdf

Wang, J. (2003). Epitaxial BiFeO3 multiferroic thin film heterostructures. Science. Retrieved from http://www.sciencemag.org/content/299/5613/1719.full
Layout by: Cheng (Kim) Li
DNA: Building Blocks of Nanotechnology
Alexander Powers
"A friend of mine suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and 'looks' around... it finds out which valve is faulty and takes a little knife and slices it." Richard Feynman offered this prophetic vision at his famous 1959 Caltech lecture "There's Plenty of Room at the Bottom" -- a seminal event in the history of nanotechnology. In developing nanoscale machines, Feynman suggested scientists take a hint from biology. After all, proteins zip around cells on elaborate transport systems while DNA molecules encode vast quantities of information on a molecular scale. Feynman asked innovators to "consider the possibility that we too can make an object that maneuvers at that level" (Feynman, 1960). Little did he know that biology would be the key to making his vision a tangible reality nearly 50 years later.

The burgeoning field of DNA nanotechnology -- using nucleic acids as a building material in a non-biological context -- has recently yielded some incredible breakthroughs, ranging from programmable drug delivery capsules to enzyme "spiders" and chemical logic gates. DNA nanotechnology has the potential to finally realize the nano-surgeon.

Deoxyribonucleic acid (DNA) makes for an extremely effective building material at the nanoscale; after all, nucleic acids are life's information storage molecule of choice. Nearly everyone learns about
DNA by the time they graduate middle school, and with good reason. Just as computers derive vast amounts of information from a simple code of 1's and 0's stored electronically, DNA encodes the vast complexity of life in simple chemicals. Deoxyribonucleic acids are composed of long strands of repeating subunits known as nucleotides. DNA uses four different kinds of nucleotides with different chemical structures: adenine, guanine, thymine, and cytosine. The sequence of these nucleotides describes the information available for "building" an organism, similar to the way in which letters of the alphabet appear in a certain order to form words and sentences. Two strands of DNA pair up according to certain rules dictated by the molecular geometry of each nucleotide: thymine pairs with adenine, and cytosine pairs with guanine. This base-pairing specificity is the foundation of designing DNA nanostructures.

The key to building small is encoding the assembly information into the molecules themselves rather than using external forces to arrange them. Early successes often relied on outside tools like atomic force microscopy or scanning tunneling microscopy to build structures molecule by molecule -- approaches which cannot be easily scaled up to create large, complex structures (Shankland, 2009). The main advantages of DNA as a building material are that it can spontaneously self-assemble, the sequence of nucleotides can be precisely controlled, and the 3D structure is well understood (in contrast to the complexity of proteins).

DNA nanostructures fall into one of two categories: static and dynamic. Static structures are fixed arrangements of DNA in specific shapes; a variety of strategies exist to build them, one of the most successful of which is DNA origami. Dynamic structures are formed similarly but are designed to move using techniques like strand displacement -- this is essential for any sort of computational or mechanical functionality.
In a 2006 Nature article, Paul Rothemund coined the fanciful term "DNA origami" to describe his successful manipulation of DNA strands into a variety of shapes. He synthesized six different shapes, including squares, triangles, and five-pointed stars, consisting of flat lattices of DNA.
DNA origami can be used to fold long strands of DNA into a variety of shapes, as seen in these computer models and electron microscope images (Rothemund, 2006).

His revolutionary technique utilized a single long "scaffold" strand of genomic DNA from a virus (7,000 nucleotides long), which was coiled, twisted, and stacked by small custom "staple" strands. The long single strand won't bind to itself, so a computer algorithm designs short strands complementary to certain regions -- these are designed so as to maximize the connectivity and tightly hold everything together. Perhaps the most surprising part of the method is its simplicity: staple strands are mixed with the long strand and heated for 2 hours at 95 °C. The process is entirely spontaneous -- strands join together to maximize complementary binding, thus forming the correct structure. With the steadily falling cost of synthesizing custom DNA strands, this method is relatively inexpensive. Synthetic DNA strands have been available by mail order for the past 20 years; now most cost less than 10 cents per base (Carlson, 2009).
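As a toy illustration of the complementarity rule such an algorithm relies on (this sketch is mine, not Rothemund's design software, and it ignores crossovers, folding paths, and binding thermodynamics), each candidate staple is simply the reverse complement of the scaffold window it is meant to pin down:

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the strand that binds antiparallel to seq."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

def naive_staples(scaffold, staple_len=8):
    """Chop the scaffold into fixed windows and emit one complementary staple per window."""
    return [reverse_complement(scaffold[i:i + staple_len])
            for i in range(0, len(scaffold) - staple_len + 1, staple_len)]

scaffold = "ATGCGTACCTGAAGCTTACGGATCCATGGTTAAC"   # made-up sequence for illustration
for staple in naive_staples(scaffold):
    print(staple)

Real origami designs work with thousands of bases and many more constraints, but the same base-pairing logic is what lets the mixture fold itself.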
Staple strands bind to complementary sites on the long scaffold strand, winding it back and forth into an energetically favored configuration (Rothemund, 2006).

Rothemund's second achievement was developing a method to pattern the 2-D shapes. He designed staple strands that would stick up from the flat lattice, increasing the height of the nanostructure at desired locations. The staple strand, normally in plane with the flat DNA lattice, contains "hairpins" -- short regions that don't bind to the scaffold. The hairpin structures were clearly visible by atomic force microscopy. A world map as well as the word "DNA" were successfully created and visualized. These letters
are about 30 nm high -- 6,000 times smaller than the width of a human hair.

Rothemund's method has been extended to the construction of dynamic structures -- for example, a hollow cube with a controllable lid composed of six sheets (Andersen, 2009). This technology has its main application in targeted drug delivery, which concentrates therapeutics in some regions of the body relative to others and can do so in response to desired stimuli. The entire box is composed solely of a single long strand of DNA from the M13 bacteriophage and hundreds of staple strands. The lid is functionalized with a lock-and-key system consisting of two strands of DNA -- a mere 2.5 nm wide. The lid is initially closed by these two nearly complementary strands, one attached to the lid and the other to the cube's side wall. This system takes advantage of a method called toehold strand displacement to allow the box to open in response to the presence of a unique "key" oligonucleotide that displaces one of the lock strands. The key strand attaches to the toehold region first and, having a better match than one of the complementary strands, replaces the other lock strand; the lid is then free to open. The lab detected the opening of the box by incorporating two fluorescent dyes into the faces of the box; when the dyes are close together, fluorescence is increased through a process known as FRET (fluorescence resonance energy transfer). Therefore, when the box opens and the dyes move further apart, FRET decreases, and the change is detectable via spectroscopy. Further experiments utilized two locks, each with a distinct key; in order for the box to open, both keys had to be present. Further experiments demonstrated that the box could respond to complex combinations of strands and even cellular messenger RNAs. This opens the possibility for smart systems that could respond to specific molecular signals inside a cell.

DNA boxes can be unlocked by key strands and have potential for drug delivery (Andersen, 2009).

The above methodologies -- DNA origami as well as dynamic strand displacement strategies -- provide the foundation for more complex functional devices. Nanoscale machinery will require tiny
moving parts to interact with and manipulate their environment. Moving machinery synthesized at the nanoscale is a daunting challenge; relatively simple molecules must move along desired paths. Scientists again looked to biology for inspiration. While cells might appear to be stationary and static, they are in fact buzzing with tiny protein machines. Motors, like the enzyme ATP synthase or the proteins that power flagella, spin at 6,000 to 17,000 rpm (Rice, 1999). Other motor proteins include "walkers" like kinesin, which transport payloads along cellular highways of microtubule filaments. Kinesin travels in a controlled, specific direction because it can only attach in one orientation, dictated by the microtubule's structure (Rice, 1999). Motor proteins like kinesin have inspired scientists to artificially create walkers using DNA nanotechnology (Lund, 2010). One of the biggest obstacles to an artificial walker is its simple molecular structure, which prevents it from containing "programmed" instructions. Thus it must take its cues for movement from its environment.
DNA walkers composed of enzymes can move along 2D sheets of DNA origami along paths containing substrate strands sticking up orthogonally (Lund 2010).
In this case, the cues come from the patterning of short strands sticking up from a 2D DNA origami sheet. Published in Nature in 2010, the paper describes a nanoscale robot -- in this case termed a "DNA spider" -- which can walk across a flat sheet of DNA in a complex, pre-determined path. This is comparable to creating a robot which moves forward and turns based on preprogrammed instructions, except about 10^9 times smaller. The walker is actually not composed of DNA but rather of proteins and enzymes that act on DNA: streptavidin (a protein often used as a connector between different proteins of interest) forms the body of the spider, and three deoxyribozyme "legs" are each connected to the streptavidin. Using the DNA origami patterning techniques developed by Rothemund, surfaces are designed that lay out paths for the spiders to follow. These paths are made up of characteristic strands that stick up perpendicular to the 2D surface. The legs of the spider bind to the short strands, cleaving them in two upon contact with the enzymatic leg. Each leg moves independently from one site to an accessible substrate site. Thus, the body of a spider at the interface between cleaved strands and fresh substrate (uncleaved strands) will move toward the substrate region. This amounts to the spider moving directionally along a track as the substrates are cleaved. In comparison to protein walkers, these are more predictable, more programmable, and can interact with designed landscapes.

In conclusion, DNA has great potential beyond its biological role. Its capacity for information storage and self-assembly makes DNA a powerful tool for nanotechnology.
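A minimal toy simulation of that directional bias (mine, not the authors' model): represent the track as a row of substrate sites, let the walker always step to the nearest uncleaved site, and mark each visited site as cleaved. The track length and single-file geometry are arbitrary simplifications.

def walk(track_length=10):
    """Return the sequence of positions visited by a spider that cleaves as it goes."""
    track = ["substrate"] * track_length   # uncleaved sites laid out by the origami sheet
    position = 0
    path = [position]
    while True:
        track[position] = "cleaved"        # the enzymatic leg cuts the strand it binds
        fresh = [i for i in range(position + 1, track_length) if track[i] == "substrate"]
        if not fresh:
            break                          # end of the prescribed path
        position = fresh[0]                # biased step toward fresh substrate
        path.append(position)
    return path

print(walk(10))   # [0, 1, 2, ..., 9]: net directional motion along the track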
References
Andersen, E. S., Dong, M., & Nielsen, M. M. (2009). Self-assembly of a nanoscale DNA box with a controllable lid. Nature, 459(7243), 73-76. doi:10.1038/nature07971

Carlson, R. (2009). The changing economics of DNA synthesis. Nature Biotechnology, 27(12), 1091.

Feynman, R. P. (1960). There's plenty of room at the bottom. Engineering and Science, 23(5), 22-36.

Lund, K., Manzo, A. J., Anthony, J., Dabby, N., & Tan, H. (2010). Molecular robots guided by prescriptive landscapes. Nature, 465(7295), 206-210. doi:10.1038/nature09012

Rice, S., Lin, A. W., Safer, D., Hart, C. L., Naber, N., Carragher, B. O., ... & Vale, R. D. (1999). A structural change in the kinesin motor protein that drives motility. Nature, 402(6763), 778-784.

Rothemund, P. W. K. (2006). Folding DNA to create nanoscale shapes and patterns. Nature, 440(7082), 297-302. doi:10.1038/nature04486

Shankland, S. (2009). IBM's 35 xenon atoms and the rise of nano tech. CNET. Retrieved from http://news.cnet.com/8301-30685_3-10362747-264.html
Image Sources
http://www.nature.com/nature/journal/v440/n7082/images/nature04586-f2.2.jpg
https://www.dropbox.com/s/5yy1iqgqffzqn9k/DNA%20Nanotechnology%20Staple%20Strands.jpg
https://www.dropbox.com/s/ph5mn39381ko7jl/DNA%20box.jpg
https://www.dropbox.com/s/b0zqwyzy95tqkg5/DNA%20spider.jpg
Layout by: Lucy Zhang
Fabricating Nano-Scale Devices: Block Copolymers and their Applications
Aditya Limaye
In the earliest days of synthetic chemistry, scientists in the field attempted to control the ordering of atoms on a molecular scale in order to create useful compounds for various applications, such as synthetic medicines, dyes, and catalysts. Molecular-scale synthetic chemistry continues to thrive today, and has led to the discovery of synthesis schemes for millions of organic molecules, ranging from simple hydrocarbons such as methane to complex biological molecules such as Vitamin B12. However, as molecular-scale chemistry has flourished, new techniques in macromolecular chemistry, dealing with compounds of a thousand atoms or more, have emerged rapidly, enabling the control of atomic placement in macromolecules, which are found widely in commercial products and in biological systems. One of the major focuses of macromolecular chemistry today is the production and processing of polymers, which are macromolecules made from repeating sub-units known as monomers. Molecules such as deoxyribonucleic acid and cellulose, polymers of nucleotides and glucose, respectively, play crucial roles in biological
systems, highlighting their versatility and widespread use. In 1907, the scientist Leo Baekeland synthesized Bakelite, widely regarded as the first fully synthetic polymer, which found use in an immense number of commercial products such as brake pads and electrical insulation and was coveted for its high resistance to heat and chemical corrosion. The creation of Bakelite ushered in the age of modern plastics, which resulted in the creation of many useful polymeric materials such as Nylon, Kevlar, and Teflon. Today, while attempts to synthesize simple polymers continue, significant research effort is directed towards understanding the properties and applications of block copolymers. Unlike regular polymers, which are long chains of the same repeating monomer unit, block copolymers are assembled from "blocks" of different monomers, creating a large macromolecule with varying chemical composition at different points along the chain (Matsen and Bates, 1996). This added complexity leads to a variety of interesting phenomena, all of which can be controlled and fine-tuned using synthetic chemical approaches, enabling the creation of a new class of functional materials.

Many of the interesting properties of block copolymers arise from the chemical interactions between different blocks of the polymer. In a traditional, molecular-chemistry sense, unfavorable interactions between two molecules, such as water and oil, lead to repulsions and, if the forces are strong enough, phase separation. However, in a block copolymer, even though certain blocks may have unfavorable interactions with each other, they cannot simply separate like water and oil, because they are all part of the same macromolecule. Rather than driving phase separation processes such as those found in water-oil mixtures, the relative strengths of chemical interactions in block copolymers drive a process known as self-assembly, in which the blocks fold, twist, and re-organize into a more favorable three-dimensional structure that situates chemically similar blocks next to each other (Matsen and Bates, 1996).

Due to the complexity of self-assembly processes, they are best understood using a familiar analogy to highlight the atomic-scale driving forces that result in self-assembly. Each of the blocks in a block copolymer can be thought of as one person in a large gathering of ten thousand or more people. This gathering, however, has the odd caveat that every person must hold the hands of two random people, effectively linking everyone into one large chain. Due to the random selection of two adjacent people, it is completely possible, and in fact likely, that two people holding hands do not know each other at all. After some time, the large chain of people would re-
organize into a folded structure due to the movement of people towards those that they know. Much like the reorganization of people in this gathering, the blocks in a block copolymer re-organize (either spontaneously or with sufficient thermal activation) in order to ensure that favorable chemical interactions are maximized.
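For readers who want a number to hang on the analogy, a commonly quoted mean-field rule of thumb (my addition, not something stated in the article) says that a symmetric two-block chain microphase-separates roughly when the product of the Flory-Huggins interaction parameter chi (how strongly the blocks dislike each other) and the chain length N exceeds about 10.5. The chi values below are invented for illustration.

ODT_THRESHOLD = 10.5   # approximate mean-field order-disorder transition for symmetric diblocks

def microphase_separates(chi, n_monomers):
    """True if block incompatibility (chi * N) is strong enough for the melt to order."""
    return chi * n_monomers > ODT_THRESHOLD

for chi, n in [(0.04, 100), (0.04, 500), (0.12, 200)]:
    state = "ordered nanostructure" if microphase_separates(chi, n) else "disordered melt"
    print(f"chi = {chi:.2f}, N = {n:4d}: chi*N = {chi * n:5.1f} -> {state}")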
Some common three-dimensional structures found in many block copolymers include "striped" lamellae, packed cylinders, and gyroids (Matsen and Bates, 1996). Much like protein folding in biological systems, the folding and self-assembly processes of block copolymers can be harnessed to create ordered structures on the scale of hundreds of nanometers, which can enable the creation of functional synthetic nanomaterials. Furthermore, due to modern characterization techniques such as X-ray diffraction (XRD) and transmission electron microscopy (TEM), scientists can "see" in great detail the three-dimensional structures of macromolecules, enabling rational engineering of the self-assembly process to form useful materials with nano-scale ordering.

Various self-assembled block copolymer morphologies.

A packed cylinder block copolymer morphology, used as a template for producing nanowires.

One major application of block copolymers is in the fabrication of nanowires: wires that are micrometers long but only nanometers thick. These nanostructures are incredibly exciting due to their capability to link nano-scale devices much like regular wires link macro-scale electronic devices today. For example, quantum dot solar cells, which can harvest the solar spectrum at previously unattainable efficiencies (Nozik, 2002), need some method of transporting charge around the solar cell array. Of course, a regular metal wire would be far too large for such a purpose, so wires at the length scale of the quantum dots must be used (Zhang et al., 2012). As one may imagine, the placement of nanowires with such high precision at such small length scales is nearly impossible, even with advanced techniques such as molecular beam epitaxy (Jung et al., 2010). Block copolymers, on the other hand, with blocks arranged in cylindrical morphologies, present the perfect template for
growing nanowires, as they have near-perfect ordering on a length scale relevant for nanowire device applications. Recently, a research group at MIT led by Prof. Caroline Ross succeeded in the nano-scale patterning of nanowire arrays using block copolymers. The group used the block copolymer poly(styrene-block-dimethylsiloxane), which self-assembles (at a slightly elevated temperature) into a structure made of packed cylinders approximately fifteen nanometers wide. Using this macrostructure as a template for nanowire deposition, the researchers etched away the polystyrene matrix, leaving behind cylindrical troughs where the polystyrene blocks once were. After etching, a thin layer of metal was deposited into these cylindrical troughs using sputter coating, a processing technique common in semiconductor fabrication. Once the templated metal was deposited, the rest of the polymer was etched away, leaving behind nanowires with a predictable size, shape, and periodicity (Jung et al., 2010). Such a technique, achievable entirely due to block copolymer self-assembly, holds high potential for device fabrication applications (Cheng et al., 2006), which require creating and placing nanowires of predictable size. Another recent, exciting application of block copolymers has emerged in the fabrication of new quantum dot light-emitting diodes (LEDs), which can
efficiently miniaturize the LED technology now used nearly ubiquitously in electronic displays. Traditional LEDs work by sandwiching a specifically designed semiconductor material between two different electrical contacts. By tuning the electronic properties of the semiconductor material, scientists have been able to engineer LEDs that emit light at different colors of the visible spectrum, which can then be combined into pixels on a visual display. Quantum dots, which are nano-scale semiconductor
crystals, can not only miniaturize current LED technology but also substantially reduce its power requirements, making them optimal choices for new LED displays (Sun et al., 2007). Until recently, however, quantum dot display technologies were hampered by their inability to be deposited as thin films resistant to oxidation from air exposure and other environmental degradation. Block copolymers, however, seem to present the perfect solution for such issues. Recently, a research group from Johannes Gutenberg University in Germany successfully created quantum dot thin films by grafting molecules of poly(para-methyl triphenylamine-block-cysteamine acrylamide), a block copolymer, onto the quantum dots. Due to the unique self-assembly properties of the block copolymer, each quantum dot assembled in a patterned manner, forming a thin film with excellent environmental stability and luminescent properties (Zorn et al., 2009). Furthermore, these devices were three times more efficient than previously created quantum dot LEDs, while also emitting light at a higher intensity (Zorn et al., 2009). These quantum dot thin-film LEDs, while still in the early stages of development, show great promise for device applications due to the unique properties of block copolymers.

Example of an array of GaAs quantum dots patterned using block copolymer self-assembly.

Synthetic chemistry has certainly come a long way since the creation of synthesis techniques to control molecular-scale ordering, and can now exert a high degree of control over atomic and structural ordering at much larger length scales. Block copolymers, due to their self-assembly properties, present excellent opportunities for the creation of devices that operate at the nano-scale. Furthermore, since the chemistry of self-assembly can be well controlled, scientists can now bring synthetic chemistry techniques to bear on problems that operate at much larger length scales.

References

Cheng, J. Y., Ross, C. A., Smith, H. I., & Thomas, E. L. (2006). Templated self-assembly of block copolymers: Top-down helps bottom-up. Advanced Materials, 18(19), 2505-2521.

Jung, Y. S., Lee, J. H., Lee, J. Y., & Ross, C. A. (2010). Fabrication of diverse metallic nanowire arrays based on block copolymer self-assembly. Nano Letters, 10(9), 3722-3726.

Matsen, M. W., & Bates, F. S. (1996). Origins of complex self-assembly in block copolymers. Macromolecules, 29(23), 7641-7644.

Nozik, A. J. (2002). Quantum dot solar cells. Physica E: Low-dimensional Systems and Nanostructures, 14(1), 115-120.
Park, C., Yoon, J., & Thomas, E. L. (2003). Enabling nanotechnology with self assembled block copolymer patterns. Polymer, 44(22), 6725-6760.

Sun, Q., Wang, Y. A., Li, L. S., Wang, D., Zhu, T., Xu, J., ... & Li, Y. (2007). Bright, multicoloured light-emitting diodes based on quantum dots. Nature Photonics, 1(12), 717-722.

Zhang, Q., Yodyingyong, S., Xi, J., Myers, D., & Cao, G. (2012). Oxide nanowires for solar cell applications. Nanoscale, 4(5), 1436-1445.

Zorn, M., Bae, W. K., Kwak, J., Lee, H., Lee, C., Zentel, R., & Char, K. (2009). Quantum dot-block copolymer hybrids with improved properties and their application to quantum dot light-emitting devices. ACS Nano, 3(5), 1063-1068.
Image Sources http://dsec.pku.edu.cn/complex-fluids/research/phase%20separation/ABC.jpg http://www.rsc.org/ej/SM/2007/b609780d/b609780d-f1.gif http://www.princeton.edu/cbe/people/faculty/register/group/research/blockcopolymers-nano/nanofabrication-with-bloc/ http://en.clipdealer.com/preview/image/001/595/609/previews/1--1595609Complex%20Molecule%20Structure%203D%20render.jpg
Layout by Jingting Wu
Bright Ideas in Solar Energy
Jo Melville
It's no secret that solar power is one of the fastest-growing sectors of the renewable energy industry. In 2011, $150 billion was invested in solar energy -- more than wind, biofuels, and hydroelectric combined, according to Bloomberg New Energy Finance (BNEF, 2011). The reason that solar energy receives such a spotlight, even among other renewable energies, is that it is considered by many investors to have much untapped potential, a belief that backed the 50% growth in investment solar energy saw in 2011. Wind energy is capricious and, like hydroelectric energy, is hugely limited by location; few biofuels can claim to be truly as emission-neutral as other alternative energy sources. Solar energy, on the other hand, can be implemented effectively in many climates, requires little infrastructure, and can be used effectively on a personal or an industrial scale. About 4.2 quadrillion kilowatt-hours (kWh) of solar energy strike the earth every day, of which about 2.1 quadrillion kWh make it to the surface (Quiggin, 2012). As much energy strikes the Earth in 24 hours as is contained in all the petroleum reserves on Earth! The solar power industry is growing exponentially -- more megawatts of solar power have been installed in the last 18 months than in the 30 years before that! Why is solar power growing so rapidly, and is it destined to last?

To answer those questions -- and to understand what lies in the future for solar energy -- we have to understand the current state of the solar energy industry, the bottlenecks holding it back, and how advances in synthetic materials are helping to overcome those barriers. It is by these innovations that widespread solar power will live or die.

Most types of solar energy collection can be divided up into two broad fields: photovoltaic (PV) and Solar Thermal Energy (STE). Solar Thermal Energy is solar in its most fundamental form -- collection of heat energy from sunlight -- and contrasts with the stereotypical photovoltaic notion of turning energy from the sun into electricity directly. Though STE dates back to ancient times (such as the semi-mythological tale of Archimedes at Syracuse, in which the famous Greek mathematician allegedly defended his home city from a naval siege by deploying large parabolic bronze mirrors to remotely ignite the besieging ships), it is still widely used today -- many modern solar power plants, rather than using solar panels, use mirrors to concentrate sunlight to boil water to spin turbines. In fact, the largest solar power plant in the world, the 354-megawatt Solar Energy Generating Systems, works by this very method (for reference, an average coal power plant has an output of about 500 megawatts) (Quiggin, 2012).

Photovoltaic solar collection techniques, on the other hand, are far more contemporary, yet have had a much greater cultural and scientific impact than solar thermal energy. All modern photovoltaic research is based on the photovoltaic effect, discovered in 1839 by Henri Becquerel, which details how light can induce an electric current in certain materials by exciting electrons with rays of incident light. The first photovoltaic cell was created in 1954 at Bell Labs and was able to convert 4% of incoming solar energy into electrical power (this is known as the cell's conversion efficiency). In 1958, Vanguard I became the first solar-powered satellite, a field in which solar continues to dominate to this day (almost all satellites, including the International Space Station and the Hubble Telescope, extract power from their photovoltaic arrays). Since then, advances in technology, materials, and methods have each improved the viability and versatility of solar power.

Solar cells have been the dominant method for powering satellites and space stations since the 1960s.

Currently, the largest photovoltaic power station is the partially-completed Topaz Solar Farm in San Luis Obispo, with a power output of 300 megawatts, though it is expected to reach 550 megawatts once it is complete (SEIA, 2013). While traditional photovoltaic cells use crystalline structures (often of silicon, because it is easy to work with),
the development of revolutionary thin-film photovoltaic devices has drastically increased the potential flexibility (figuratively and literally) of solar photovoltaics. By using vapor-deposited silicon layers instead of bulk silicon wafers, solar cells can be created at much smaller scales
than before, though generally with lower conversion efficiencies (Mayer, 2012). Solar photovoltaic devices generally operate in three discrete steps: photons are absorbed by a semiconducting material, such as silicon, and used to knock electrons into higher energy levels (known as bands). This action produces a negatively charged electron in the higher energy band, while leaving a positively charged electron "hole" in the lower band; this electron-hole pair is referred to as an "exciton." Due to the structure of the photovoltaic cell, excitons can induce a flow of electrons, or current, that can then be harnessed (Fehr et al., 2014).

However, despite the amazing breakthroughs that solar photovoltaic energy has achieved in the half-century since its inception, there are still bottlenecks that inhibit its viability as a competitive energy source and must be circumvented for it to become a more significant means of energy production. The most notable is simply that existing solar power stations are not large or efficient enough to produce amounts of power comparable to those of fossil-fuel or hydroelectric plants. The largest STE power station today has a load capacity of 354 MW, and the largest PV power station is expected to have a capacity of 550 MW by the end of the year; by comparison, the largest nuclear power plant (Kashiwazaki-Kariwa NPP in Japan) has a yield of 8,200 MW, and the largest hydroelectric power plant (the Three Gorges Dam in China) produces a whopping 22,500 MW of power (Quiggin, 2012).

Why do solar power plants fail to measure up to other sources of power? A major factor is the Shockley-Queisser limit. First calculated in 1961, this limit sets a hard cap on the maximum efficiency of a solar cell in terms of several factors, including the distribution of frequencies in solar radiation, the separation in energy bands of the photovoltaic cell (also known as the band gap), and the precise construction of the cell itself. In its original formulation, Shockley and Queisser accounted for three major factors to determine the maximum efficiency of a solar cell: blackbody radiation, radiative recombination, and spectrum losses. The first, blackbody radiation, accounts for the fact that any substance will inevitably emit radiation depending on its temperature; at room temperature, about 7% of incoming solar energy will inevitably be emitted in such a fashion without being harnessed. The second, radiative recombination, factors in that just as a semiconductor can absorb a photon to produce an exciton, an exciton can relax to emit a photon (or "recombine radiatively"), ensuring that not all excitons that form can produce current. The final and most significant factor is spectral losses, specifically how they relate to band gap. Natural sunlight consists of a set variety of wavelengths of light, each with different energy. When photons of a specific energy strike a semiconductor, they attempt to excite electrons from a low electronic band to a higher one, creating an exciton that can be used to create an electrical current. However, if the energy of the photon is less than the band gap of the material, the electron will not have enough energy to jump the band gap, and thus no exciton will be formed. Additionally, if the energy of the photon is greater than the band gap, the excited electron will jump into the higher energy band, then immediately relax down to the bottom of the band -- essentially, any energy over and above the band gap is wasted. By themselves, spectral losses bring down the theoretical efficiency of an otherwise-perfect solar cell to a mere 48%.

Taking into account these three factors, it is possible to calculate the theoretical maximum efficiency of a solar cell as a function of the band gap of the solar cell material. With a perfectly optimal solar cell operating with a band gap of 1.34 eV, only a mere 33.7% of all incident solar energy can be converted. At the 1.1 eV band gap of the most common solar cell material, silicon, it is impossible to eke out more than a meager 29% efficiency. Considering modern commercial solar cells are already on average 22% efficient, the Shockley-Queisser limit tells us that improving the quality of our solar cells can only increase the efficiency so much. Considering how even large solar farms can barely compare to modestly-sized power plants of other sources, the Shockley-Queisser limit highlights the need for entirely new approaches to solar energy production (Krisch, 2014).

The Shockley-Queisser limit puts a cap on the maximum efficiency of a solar cell in terms of the band gap of the cell material. At low band gaps, many electrons are lost to recombination, as the small energy difference enables electron relaxation. As the band spacing increases, fewer incident photons have enough energy to clear the gap.

This curve describes the maximum possible efficiency of a solar cell as a function of the band gap of the cell material. Efficiency caps out at a mere 33.7% at a band gap of 1.34 eV.
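To make the spectral-loss argument concrete, here is a rough numerical sketch (mine, not a calculation from the article) of the spectral-loss piece of the Shockley-Queisser analysis: treat sunlight as blackbody radiation, assume every photon at or above the band gap delivers exactly the gap energy and every photon below it delivers nothing, and integrate. The solar temperature and energy grid are assumptions, and the full limit also folds in the blackbody and recombination terms discussed above, so the exact percentages differ somewhat from the article's figures.

import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
EV = 1.602176634e-19        # joules per electron-volt
T_SUN = 5778.0              # assumed blackbody temperature of sunlight, K

E = np.linspace(1e-3, 10.0, 20000) * EV          # photon energies, roughly 0 to 10 eV
flux = E**2 / np.expm1(E / (K_B * T_SUN))        # blackbody photon flux per unit energy (unnormalized)
dE = E[1] - E[0]

def spectral_loss_only_efficiency(gap_eV):
    """Fraction of incident power left if each absorbed photon yields exactly the gap energy."""
    E_gap = gap_eV * EV
    power_in = np.sum(E * flux) * dE             # total incident power
    absorbed = flux[E >= E_gap]                  # photons energetic enough to clear the gap
    useful = E_gap * np.sum(absorbed) * dE       # each one contributes E_gap, no more
    return useful / power_in

for gap in (0.5, 1.1, 1.34, 2.0):
    print(f"{gap:4.2f} eV gap -> {spectral_loss_only_efficiency(gap):.1%} of incident power usable")

Running it shows the trade-off the article describes: a small gap wastes most of each energetic photon, a large gap rejects most of the spectrum, and the best compromise sits a little above one electron-volt.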
The Shockley-Queisser limit puts a cap on the maximum efficiency of a solar cell in terms of the band gap of the cell material. At low band gaps, much of each photon's energy is wasted as excited electrons relax back down to the band edge; as the band spacing increases, fewer incident photons have enough energy to clear the gap.
Most of these "multijunction," or "tandem," solar cells have around three layers, with a maximum theoretical efficiency of 49%; unfortunately, the layering process is expensive, and tandem solar cells are unlikely to see widespread usage until they become more affordable. Using lenses or mirrors to concentrate solar radiation can also help increase efficiency past the Shockley-Queisser limit, and since concentration requires less cell area for a given amount of collected light, it increases the viability of tandem solar cells; tandem cells under concentrated sunlight can have theoretical efficiency caps as high as 86% -- almost four times current commercial levels (Graham-Rowe, 2008)! Concentrated solar radiation, however, has the side effect of heating the solar panels beyond their operational range, requiring special cooling techniques to keep the panels working. Researchers at IBM have proposed a solution: a liquid microlayer of conducting indium-gallium alloy (Bullis, 2008). Because of the liquid nature of the alloy, it can conform to the minute grooves in the cell to efficiently siphon away heat, and the metallic character of the substance gives it the thermal conductivity needed to do so, effectively creating a sort of ultra-efficient thermal paste. Put together, these strategies can push efficiency past the Shockley-Queisser limit through incremental engineering gains, and the cooling approach can even be installed on pre-existing solar panels to retroactively increase their efficiency.

Another approach to bypassing the Shockley-Queisser limit involves reevaluating the core assumptions made in its calculation. According to Shockley and Queisser's assumptions, a vast amount of energy loss stems from the fact that energy above the band gap is lost, as excitons with energies above the band gap energy relax downward. For decades, scientists believed that this loss was unavoidable, primarily because the only obvious way to refute this assumption -- having one photon produce multiple excitons -- seemed impossible.
This curve describes the maximum possible efficiency of a solar cell as a function of the bandgap of the cell material. Efficiency caps out at a mere 33.7% at a bandgap of 1.34 eV.
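For readers who want to see where numbers like these come from, the back-of-the-envelope sketch below estimates the spectral-loss-only efficiency cap for a single-junction cell. It is only an illustration: it treats the sun as an ideal 5,778 K blackbody rather than using the measured solar spectrum, and it ignores the blackbody-emission and recombination terms of the full Shockley-Queisser calculation, so its results land a few percentage points away from the figures quoted in this article. The shape of the trade-off, however, is the same.

    import numpy as np

    K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

    def spectral_loss_cap(e_gap_ev, t_sun=5778.0, n=20000):
        # Fraction of incident blackbody power a single-junction cell can keep
        # when every photon above the band gap delivers exactly e_gap_ev of
        # useful energy (the excess relaxes away) and every photon below the
        # gap is lost entirely.
        e = np.linspace(1e-3, 12.0, n)            # photon energies, eV
        de = e[1] - e[0]
        bose = 1.0 / (np.exp(e / (K_B_EV * t_sun)) - 1.0)
        power_in = np.sum(e**3 * bose) * de       # total incident power (arbitrary units)
        above = e >= e_gap_ev
        flux_above = np.sum(e[above]**2 * bose[above]) * de  # photon flux above the gap
        return e_gap_ev * flux_above / power_in

    for gap in (0.7, 1.1, 1.34, 1.8):
        print(f"band gap {gap:4.2f} eV -> spectral-loss cap ~ {spectral_loss_cap(gap):.1%}")

Running the loop tells the same qualitative story as the curve above: too small a gap wastes most of each photon's energy, while too large a gap throws away most of the photons.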
In the last ten years, however, Brian Korgel and his team at the University of Texas at Austin have revolutionized the field of multiexciton generation. Using copper indium selenide -- a common semiconductor with a high conversion efficiency -- Korgel and his group showed that, through a process known as "photonic curing," organic compounds that inhibit multiexciton generation can be vaporized off the film (Sagoff, 2014). The study shows that multiexciton production can be induced in mass-produced, real-world materials, though Korgel says the real work is yet to come: "The holy grail of our research is not necessarily to boost efficiencies as high as they can theoretically go, but rather to combine increases in efficiency to the kind of large-scale roll-to-roll printing or processing technologies that will help us drive down costs." (Korgel, 2014)
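To see why multiexciton generation is such an attractive loophole, the earlier blackbody sketch can be extended with one further idealization: a photon carrying at least m times the gap energy is allowed to produce m excitons instead of one. This is an illustrative upper bound on the idea, not a model of Korgel's actual devices or materials.

    import numpy as np

    K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

    def cap_with_meg(e_gap_ev, max_excitons=1, t_sun=5778.0, n=20000):
        # Blackbody-sun sketch in which a photon with energy >= m * e_gap_ev
        # may yield up to m excitons. max_excitons = 1 reproduces the ordinary
        # spectral-loss cap; larger values model ideal carrier multiplication.
        e = np.linspace(1e-3, 12.0, n)
        de = e[1] - e[0]
        bose = 1.0 / (np.exp(e / (K_B_EV * t_sun)) - 1.0)
        power_in = np.sum(e**3 * bose) * de
        excitons = np.clip(np.floor(e / e_gap_ev), 0, max_excitons)  # excitons per photon
        return e_gap_ev * np.sum(excitons * e**2 * bose) * de / power_in

    print(f"1.1 eV, one exciton per photon : {cap_with_meg(1.1, max_excitons=1):.1%}")
    print(f"1.1 eV, ideal multiexciton case: {cap_with_meg(1.1, max_excitons=6):.1%}")

Even in this crude form, allowing high-energy photons to generate more than one exciton raises the cap noticeably, which is exactly the loss channel the excess-energy assumption had written off as unavoidable.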
Alternatively, entirely new methods of solar energy collection can be implemented that alter every parameter of the process, sidestepping the Shockley-Queisser limit: a new field known as solar thermophotovoltaics (STPV) seeks to combine STE and PV processes to harness solar radiation in an entirely new way. STPV devices first use concentrated sunlight to heat a specific compound (an emitter) that, at a sufficiently high temperature, radiates energy in a narrow, tailored spectrum that a photovoltaic cell can then capture at high efficiency. Because almost all of the energy from the sun can be absorbed through the STE process, the theoretical efficiency cap for this approach is over 80%. Until recently, experimental prototypes for this technique demonstrated conversion efficiencies below 1% due to the difficulty of controlling the spectrum of the emitter; however, a recent breakthrough by Lenert et al. (2014) reports a drastic increase in efficiency to 3.2%, achieved by pairing a carbon nanotube absorber with a silicon-quartz emitter, with a goal of reaching a commercially viable 20% efficiency in the near future.

Unfortunately, low conversion efficiencies are not the only problem stymieing the solar industry. Another major hurdle that must be overcome is energy storage. Because solar power stations can only produce electricity in the daytime, areas powered predominantly by solar energy must have an efficient means of energy storage to ensure that electricity is available at night or on overcast days. However, current means of energy storage are very inefficient, and they pass on this inefficiency to the entire solar process. Batteries have limited capacities and can significantly increase the cost of solar systems, while "storing" the energy as hydrogen gas (a common approach, thanks to hydrogen's high energy density and the ease of recovering the stored energy) requires efficiency-limiting conversion steps -- electrolysis to turn electricity into hydrogen and fuel cells to turn it back (Krisch, 2014). In fact, simple energy density calculations show that without chemical energy storage media like fossil fuels or hydrogen, storage is impractical from a sheer weight standpoint: even the best batteries can only store about 0.3 kWh of energy per kilogram,
whereas conventional fuels like gasoline pack in over 13 kWh/kg. Due to solar's small market share in the energy industry (around 1%), this isn't yet a problem, because solar energy producers can simply sell their energy to the grid during daylight hours and consumers can revert to conventional energy sources at nightfall -- but as solar power contributes increasing amounts of energy to the grid, more and more energy storage is required, on an almost impractical scale.

A solution to the fuel storage problem has been proposed by Dr. Daniel Nocera of MIT, and it revolves
around the production of an "artificial leaf" that is able to harness sunlight to form secondary energy carriers such as hydrogen directly, much as an actual leaf uses sunlight to form carbohydrates from carbon dioxide and water. Dr. Nocera's group (Reece et al., 2011) details the creation of solar water-splitting cells that use a cobalt-borate catalyst. These cells are completely integrated and do not require internal wires, granting them great structural solidity and flexibility of placement.

This class of solar thermal energy collection, known as a parabolic trough, uses parabolic mirrors to focus sunlight onto an insulated tube containing a liquid that is piped off to generate electricity.

Though the cell created runs at an efficiency of only 5%, this number is steadily being improved by further research, and the direct production of energy-rich hydrogen offers a good solution to solar energy's storage problem. Hydrogen is extremely energy dense by mass thanks to its very low molecular weight, packing in roughly 39 kWh/kg (142 MJ/kg) -- about three times as much energy per kilogram as gasoline!
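The scale of the storage problem is easiest to appreciate with a little arithmetic. The sketch below uses the per-kilogram energy densities quoted above (batteries at roughly 0.3 kWh/kg, gasoline at about 13 kWh/kg, hydrogen at roughly 39 kWh/kg) to estimate how many tonnes of each medium would be needed to carry a hypothetical 550 MW plant through a 12-hour night. The plant size, the assumption of full overnight output, and the neglect of conversion losses are all simplifications for illustration.

    # Back-of-the-envelope storage masses for one night of output from a
    # hypothetical 550 MW plant, using the per-kilogram energy densities
    # quoted in the text. Conversion losses are ignored, so real-world
    # requirements would be even larger.
    PLANT_POWER_MW = 550.0          # illustrative plant size
    HOURS_OF_STORAGE = 12.0         # one night, assumed at full output

    energy_needed_kwh = PLANT_POWER_MW * 1000.0 * HOURS_OF_STORAGE

    energy_density_kwh_per_kg = {
        "batteries": 0.3,
        "gasoline": 13.0,
        "hydrogen": 39.0,
    }

    for medium, density in energy_density_kwh_per_kg.items():
        tonnes = energy_needed_kwh / density / 1000.0
        print(f"{medium:>9}: ~{tonnes:,.0f} tonnes to store {energy_needed_kwh:,.0f} kWh")

Even under these generous assumptions, the battery option runs to tens of thousands of tonnes, which is why chemical storage media such as hydrogen are so appealing.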
In addition, hydrogen is easy to transport and even easier to extract energy from, and the only byproduct of its combustion is environmentally friendly water vapor. If commercial water-splitting solar cells are able to reach competitive efficiency rates, solar power would become a much more viable means of energy production (Reece et al., 2011).

Energy storage, unlike the Shockley-Queisser limit, is a problem that extends beyond photovoltaics -- it is equally hampering to solar thermal methods of energy collection, which are comparably prevalent in the solar industry. Due partially to their overwhelming simplicity, STE methods can demonstrate efficiencies that far outstrip the leading PV technologies while retaining good scalability at relatively low cost. Some of the highest-efficiency STE techniques include the solar Stirling engine, which uses large parabolic dishes to focus sunlight and drive a heat engine; this deceptively simple process boasts an efficiency of over 30% (NREL, 2007).

The combination of parabolic reflectors and a Stirling heat engine can produce efficiencies upward of 30%, almost double that of competing photovoltaic technologies. These so-called "dish-Stirling" or "solar-Stirling" systems provide the highest known efficiencies of any solar collection method.

Though many of the largest solar power plants use solar thermal methods, the Topaz Solar Farm in San Luis Obispo, CA is poised to become the largest solar power plant in the world, delivering a planned 550 MW of power.

Though these processes suffer from the same storage problems that plague photovoltaics, an innovative solution has emerged: the use of vast quantities of molten salt (generally a mixture of sodium and potassium nitrates) to store heat until it is needed. Though the energy stored per unit weight of these molten salts leaves much to be desired (0.012 kWh/kg places it below even batteries), this is more than offset by the extremely low cost and long lifetime of the material. In early 1988, the Department of Energy (DoE) invested in the construction of a solar power tower to field-test the effectiveness of molten-salt energy storage, in a project known as Solar Two. In a report released in 1996, the DoE (through its branch at Sandia National Laboratories) reported:

"These solar plants [Solar Two] operate by using large, sun-tracking mirrors to concentrate sunlight on a receiver that sits atop a tower. The concentrated sunlight heats the molten salt as it flows through the receiver. The very hot salt is then piped away, stored, and used when needed to produce steam to drive a turbine/generator that produces electricity. The system is capable of operating smoothly through intermittent clouds and can continue generating electricity long into the night." (Miller, 1996)

Solar Two was a proof-of-concept solar-thermal energy plant that used molten salt to capture and store the sun's heat, allowing for energy production even at night.

Though the Solar Two plant was deconstructed in 1999, the continued growth of solar-thermal energy as a field, as demonstrated by the success of SEGS and the
investment in the larger Ivanpah Solar Power Facility, has led many plants to evaluate or adopt molten-salt storage as a means of leveling their power output.

As innovation continues to revolutionize solar power, we can expect it to become more ubiquitous in everyday life. In 1993, U.S. solar-photovoltaic capacity was a mere 50.3 MW; in 2003, it had only risen to 275.2 MW; yet by 2013, it had veritably exploded to 11,972 MW and continues to grow at a rate of about 70% per year (SEIA, 2013)! Though solar power composes only about 3% of all US renewable energy sources, that share is rising more rapidly than that of other sources. Solar is quickly reaching cost parity with fossil fuel sources, and has already surpassed nuclear energy in this regard (Quiggin, 2012). With the help of carbon taxes and additional government subsidies, it is likely that solar power will begin to claim an increasingly large portion of the renewable energy pie in the very near future. In the past ten years solar energy has gained a foothold in the energy industry that it is unlikely to relinquish; as more solar power plants (thermal and photovoltaic) continue to be built and solar photovoltaics integrate themselves further into the national infrastructure, solar power has a real chance of displacing oil or coal as a major energy source. Through innovations in synthetic science, solar power has grown exponentially, and continued innovation of this sort will help it overcome the obstacles it faces today. Investments in alternative, renewable energies like solar power help us prepare for the future, and for solar power, the future looks very bright indeed.

References
Bernardo, B., Cheyns, D., Verreet, B., Schaller, R.D., Rand, B.P., & Giebink, N.C. (2014). Delocalization and dielectric screening of charge transfer states in organic photovoltaic cells. Nature Communications, 5. DOI: 10.1038/ncomms4245
Brown, R. K. (2011, October). Molten Nitrate Salt for Solar Energy Storage. Retrieved from http://www.energystoragenews.com/Molten%20Nitrate%20Salt%20for%20Solar%20Energy%20Storage.html
Bullis, K. (2008). Sun + Water = Fuel. Retrieved from http://www.technologyreview.com/featuredstory/411023/sun-water-fuel/
Bullis, K. (2008). More-Efficient Solar Cells. Retrieved from http://www.technologyreview.com/news/410606/more-efficient-solar-cells/
Cheng, X., Zhang, D., Zhang, G., & Zhang, X. (2013). Design and machining of Fresnel solar concentrator surfaces. International Journal of Precision Technology, 3(4), 354. DOI: 10.1504/IJPTECH.2013.058257
Cordaro, J. G., Kruizenga, A. M., Altmaier, R., Sampson, M., & Nissen, A. (2007). Thermodynamic properties of molten nitrate salts. Retrieved from http://energy.sandia.gov/wp/wp-content/gallery/uploads/ThermodynamicPorperties-of-Molten-Nitrate-Salts-Cordaro.pdf
European Photovoltaic Industry Association (EPIA) (2014, March). Market Report 2013. Retrieved from http://www.epia.org/news/publications/
Fehr, M., Schnegg, A., Rech, B., Astakhov, O., Finger, F., Bittl, R., Teutloff, C., & Lips, K. (2014). Metastable defect formation at microvoids identified as a source of light-induced degradation in a-Si:H. Physical Review Letters. DOI: 10.1103/PhysRevLett.112.066403
Graham-Rowe, D. (2008). A Cool Trick for Solar Cells. Retrieved from http://www.technologyreview.com/news/410143/a-cool-trick-for-solar-cells/page/2/
International Energy Agency (IEA) (2010, January). Trends in Photovoltaic Applications: Survey report of selected IEA countries between 1992 and 2009. Retrieved from http://www.iea-pvps.org/index.php?id=92&eID=dam_frontend_push&docID=432
Kato, Y., Jung, M.-C., Lee, M. V., & Qi, Y. (2014). Electrical and optical properties of transparent flexible electrodes: Effects of UV ozone and oxygen plasma treatments. Organic Electronics, 15(3), 721. DOI: 10.1016/j.orgel.2014.01.002
Krisch, J. (2014). 3 Clever New Ways to Store Solar Energy. Retrieved from http://www.popularmechanics.com/science/energy/solar-wind/3-clever-new-ways-to-store-solar-energy-16407404
Lenert, A., Bierman, D. M., Nam, Y., Chan, W. R., Celanović, I., Soljačić, M., & Wang, E. N. (2014). A nanophotonic solar thermophotovoltaic device. Nature Nanotechnology. DOI: 10.1038/nnano.2013.286
Mayer, M. (2012, February 14). Why are solar cells made of silicon? [Berkeley Energy and Resources Collaborative]. Retrieved from http://berc.berkeley.edu/why-are-solar-cells-made-of-silicon_1/
Miller, C. (1996, June). Solar Two News Release. Retrieved from http://www.sandia.gov/media/solarll.htm
National Renewable Energy Laboratory (NREL) (2007, March). Molten Salt Systems, other Applications, link to Solar Power Plants. Retrieved from http://www.nrel.gov/csp/troughnet/pdfs/2007/koning_molten_salt_applications.pdf
Quiggin, J. (2012, January 3). The End of the Nuclear Renaissance. The National Interest. Retrieved from http://nationalinterest.org/commentary/the-end-the-nuclear-renaissance-6325?page=1
Reece, S. Y., Hamel, J. A., Sung, K., Jarvi, T. D., Esswein, A. J., Pijpers, J. J., & Nocera, D. G. (2011). Wireless solar water splitting using silicon-based semiconductors and earth-abundant catalysts. Science, 334(6056), 645-648. Retrieved from http://www.sciencemag.org/content/334/6056/645.short
Sagoff, J. (2014). New solar cell technology captures high-energy photons more efficiently. Retrieved from http://www.anl.gov/articles/new-solar-cell-technology-captures-high-energy-photons-more-efficiently
Solar Energy Industries Association (SEIA) (2013, January). Solar Market Insight Report 2012 Year in Review. Retrieved from http://www.seia.org/research-resources/solar-market-insight-report-2012-year-review
Solar Energy Industries Association (SEIA) (2014, January). Solar Market Insight Report 2013 Year in Review. Retrieved from http://www.seia.org/research-resources/solar-market-insight-report-2013-year-review
Image Sources
http://en.wikipedia.org/wiki/File:ShockleyQueisserBreakdown2.svg
http://en.wikipedia.org/wiki/File:ShockleyQueisserFullCurve.svg
http://en.wikipedia.org/wiki/File:ROSSA.jpg
http://en.wikipedia.org/wiki/File:Parabolic_trough.svg
http://en.wikipedia.org/wiki/File:IvanpahRunning.JPG
http://www.theguardian.com/environment/2008/jun/06/renewableenergy.alternativeenergy
http://en.wikipedia.org/wiki/File:SolarStirlingEngine.jpg
http://en.wikipedia.org/wiki/File:Solar_two.jpg
Layout by Jingting Wu
Total Heart Transplant: A Modern Overview
Nithya Lingampalli

More than 5 million people are living with congestive heart failure (CHF), and roughly 500,000 new cases are diagnosed every year. CHF is a multifaceted problem that is becoming increasingly prevalent in our society and is associated with end-stage heart disease (Norman, 2006). This final stage of heart failure is often precipitated by a myocardial infarction, or heart attack. Cardiac transplants are the only option for the long-term treatment of severe cases that lead to irreparable biventricular failure, a failure of both ventricles of the heart. This condition effectively cripples the heart's ability to pump blood through the body (Copeland, 2004). Since there are many complications associated with existing procedures, there is a constant push to improve the success of treatment for congestive heart failure.
Since a cardiac transplant is currently the only option available for patients in the final stages of heart failure, multiple devices have been developed to replace all the components of the heart (valves, chambers, and the pacemaker) before total failure. The three types of procedures currently available are a donor transplant, left ventricular assist devices (LVADs), and total artificial heart (TAH) implantation (Dunning, 1997). In the case of a donor transplant, the heart of a donor is isolated and placed into the patient's body after most of their original heart is removed. The process of obtaining a donor heart and having transplant surgery is very long and complicated due to the many criteria that must be met before the surgery can proceed. To be considered for a donor transplant, the patient must go through extensive tests administered by cardiologists, nurses, and health workers to ensure that they will be able to survive the procedure and abide by the restrictive post-operative requirements. Once a patient is determined to be a suitable candidate, they are placed on a waiting list with thousands of other similar patients. Then begins the waiting game, as the patient must wait until it is finally their turn to receive a donor heart.

Although donor heart transplants are the most effective treatment for heart failure, as they involve only natural human tissue, the process of getting onto a waitlist is tightly restricted, and placement on the waitlist still does not guarantee treatment: 25% of patients on a waitlist die before they are able to receive a donor heart (Dunning, 1997). This lag in receiving a donor heart is due to the process by which donors are selected. The donor heart is procured from an individual who has been declared brain dead. The organ can be used only if the donor has previously given consent, or their family has provided consent on their behalf, for the heart to be donated. Before the organ can be donated, two surgeons unaffiliated with the transplant must declare it suitable for transplantation. The heart is then kept
beating through dopamine support and mechanical ventilation. Information regarding the blood type and health of the heart is entered into the waiting-list database. The database then uses an algorithm that takes into account blood type, body size, and length of waiting period to select the most suitable candidate for the organ. Interestingly, race and gender are kept out of consideration, although gender may affect the size of the donor heart required. Once a suitable candidate has been identified, the surgery itself proceeds very quickly, because a donor heart remains viable for transplantation for only a few hours after harvesting (Cleveland Clinic).
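The allocation algorithm itself is managed by the organ procurement network and is far more involved than this article can cover, but a toy sketch helps illustrate the kind of screening and ranking it performs. Everything below -- the simplified blood-type compatibility table, the 25% body-size window, the waiting-time ordering, and the field names -- is invented for illustration and does not reflect the actual matching criteria.

    from dataclasses import dataclass

    # Simplified ABO compatibility: donor type -> recipient types that can
    # receive it (an illustrative rule of thumb, not clinical guidance).
    ABO_COMPATIBLE = {
        "O": {"O", "A", "B", "AB"},
        "A": {"A", "AB"},
        "B": {"B", "AB"},
        "AB": {"AB"},
    }

    @dataclass
    class Candidate:
        name: str
        blood_type: str
        body_mass_kg: float
        days_waiting: int

    def rank_candidates(donor_blood_type, donor_mass_kg, candidates):
        # Keep only blood-compatible candidates of roughly similar body size,
        # then order them by time already spent on the waiting list.
        eligible = [
            c for c in candidates
            if c.blood_type in ABO_COMPATIBLE[donor_blood_type]
            and abs(c.body_mass_kg - donor_mass_kg) / donor_mass_kg <= 0.25
        ]
        return sorted(eligible, key=lambda c: c.days_waiting, reverse=True)

    waitlist = [
        Candidate("patient 1", "A", 82.0, 310),
        Candidate("patient 2", "O", 79.0, 145),
        Candidate("patient 3", "AB", 55.0, 400),
    ]
    for c in rank_candidates("O", 80.0, waitlist):
        print(c.name, c.blood_type, c.days_waiting)

Note that, as the text describes, nothing about the patient beyond compatibility, size, and waiting time enters the ranking.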
Given that the donor heart carries the donor's own blood type and tissue characteristics, there is a chance of immune rejection by the patient's body. Although every effort is made to ensure that the donor and patient blood types are as similar as possible, a multitude of genes affect blood type and composition, and it is impossible to control for all blood factors. Therefore, even though a donor and patient are classified as having the same blood type, the composition of their blood may vary enough to cause immune rejection. Rejection is the process by which the body's immune system, the biological network in place to protect the body from disease, recognizes the heart as harmful foreign tissue and attempts to destroy it as it would another pathogen. To prevent this response, the patient is given immunosuppressant drugs post-operatively. However, by repressing the immune system, these drugs leave the patient more susceptible to infections and diseases. As a result, the patient's health has to be closely safeguarded after the procedure, since their body is too weak to fight external infections. If the immunosuppressant drugs are not effective, or cannot be given because they would aggravate other conditions that the patient has, then the body's immune system will succeed in destroying the newly implanted heart. Rejection is very dangerous and must be addressed immediately to prevent significant damage to the new heart; otherwise it will be rendered useless (Cleveland Clinic).

Given that organic donor hearts are difficult and time-consuming to obtain, the next best alternative is to fabricate an artificial human heart that can be transplanted into a patient and function as a normal heart. Hence, the focus has shifted to developing total artificial heart (TAH) devices, whose use is permitted when an individual needs to be kept alive until a donor heart transplant or when an individual is not eligible for a donor transplant but still has final-stage heart failure in both ventricles. Currently, only a select number of people are eligible for these devices due to the complexity of their implementation. This restricted eligibility for TAH devices is due to their artificially constructed parameters, such as size, weight, and pump capacity, that are suitable only for certain body types. However, researchers are conducting clinical research trials to make them more efficient and much more widely available (What is a total artificial heart?, 2012).

The first successful human TAH implantation was reported in 1984. The patient was a 61-year-old man who had congestive heart failure, and the implanted device extended his life for an additional 112 days after the operation. The TAH transplant failure was mainly preceded by
hemorrhagic complications of anticoagulation -- excessive internal bleeding due to a lack of coagulation (clotting) agents -- which caused TAH failure and, ultimately, death. Acute renal failure, the kidneys' failure to remove wastes and balance body fluids, was also observed, although its source was unidentified, and this also contributed to transplant failure. However, an extensive autopsy after death revealed that the artificial heart device itself had been unaffected by thrombosis, blood clots, or any infectious diseases, indicating that the device had successfully integrated into the body's local system but triggered external hemorrhagic responses (DeVries, 1984).

There are multiple TAH devices currently available on the market. The two most commonly used are the CardioWest Total Artificial Heart and the AbioCor Artificial Heart. The CardioWest Total Artificial Heart is a device that has been developed to replace the patient's ventricles as well as all four cardiac valves. It functions by running drive tubes from the implanted heart out through the patient's abdomen to an external power source. The mobility of this device is impaired by its reliance on that external power source: the battery's large, unwieldy nature requires that the patient remain in the hospital after the transplant, as they must be connected to the power source at all times. A portable power source is available only in Europe, where it allows the patient to leave the hospital rather than be confined to inpatient care (Copeland, 2004).

The AbioCor device does not have an external power source. Instead, a specially developed magnetic charger is able to charge the internal battery through intact skin, which in turn increases patient mobility (What is a total artificial heart?, 2012). The AbioCor device performed well in tests of circulatory support and life-expectancy extension. The device is totally implantable and requires no penetration of the skin, minimizing the chance of infection, as external pathogens have a decreased chance of penetrating the body and infecting the device surface. Its major drawback is thromboembolism -- blood clots that form on the device and break loose into the circulation, for example as deep vein blood clots -- a risk heightened by the lack of an established anticoagulation protocol for the device. Another factor that limits the device's use is its large size, meaning that it can only be used in large male patients, whose chest cavities and blood vessels are of the necessary size and strength to accommodate the device and support its large cardiac output. It cannot be used in smaller patients such as women and children, and thus has limited applicability across the broader population (Frazier et al., 2004).
The latest development in TAH devices comes from the work of Alain Carpentier, who claims to have made the first purely artificial, self-regulating heart, whose functionality looks promising after a clinical test transplantation in a 75-year-old patient. He claims that this TAH device differs from existing models in that it is self-regulating and completely mimics an actual heart: rather than maintaining a single, fixed heartbeat, it speeds up when the patient is excited or exerting themselves, just like a natural heart (Seamons, 2013). The surface area of the device that is exposed to human blood is partly made from cow tissue, as opposed to the synthetic materials used in other existing heart devices, reducing the formation of blood clots on its surface and minimizing the chance of rejection by the host body's immune system. This is a major improvement over currently existing devices, whose main problem was thromboembolism caused by clots forming on their synthetic, blood-contacting surfaces. The device offers greater flexibility and better compatibility with blood, addressing two of the most pressing problems that previous TAH devices have struggled with.

Perhaps the device's most impressive claim is that it can keep the patient alive for five years after the time of transplantation, an estimate that is leaps and bounds ahead of the other devices that currently exist (Seamons, 2013). The TAH device transplantation procedure is expected to cost around $190,000 when it is made available to the public, a price that is on par with the current market value of a TAH transplant procedure (Allen, 2013).

The direction of future research for the development of TAH devices has to address their clinical applicability, in terms of being able to account for variation in patient body types and to integrate seamlessly with the body's systems. One avenue of research focuses on positive displacement pumps, specifically their size and complications; researchers predict that the application of continuous-flow technology can help solve some of these issues. The culmination of these research avenues is currently being applied to develop a new generation of smaller, more effective TAH devices (Sale, 2012).

References

Allen, P. (2013, December 21). Man gets the world's first artificial heart: French surgeons perform ground-breaking operation. Retrieved from http://www.dailymail.co.uk/health/article-2527492/Pioneering-French-surgeons-perform-worlds-artificial-hearttransplant.html
Cleveland Clinic. (n.d.). Heart transplant. Retrieved from http://my.clevelandclinic.org/heart/disorders/heartfailure/transplant.aspx
Copeland, J. G., Smith, R. G., Arabia, F. A., Nolan, P. E., Sethi, G. K., Tsau, P. H., ... & Slepian, M. J. (2004). Cardiac replacement with a total artificial heart as a bridge to transplantation. New England Journal of Medicine, 351(9), 859-867. Retrieved from http://anthemequity.com/img/2004-0826-New-England-Article-on-the-TAH.pdf
DeVries, W. C., Anderson, J. L., Joyce, L. D., Anderson, F. L., Hammond, E. H., Jarvik, R. K., & Kolff, W. J. (1984). Clinical use of the total artificial heart. New England Journal of Medicine, 310(5), 273-278.
Dunning, J. (1997). Artificial heart transplants. British Medical Bulletin, 53(4), 706-718. Retrieved from http://bmb.oxfordjournals.org/content/53/4/706.full.pdf
Frazier, O. H., Dowling, R. D., Gray Jr., L. A., Shah, N. A., Pool, T., & Gregoric, I. (2004). Cardiology, 101, 117-121. DOI: 10.1159/000075992
Norman A. Gray Jr, & Craig H. Selzman. (2006). Current status of the total artificial heart. American Heart Journal, 152(1), 4-10. http://dx.doi.org/10.1016/j.ahj.2005.10.024
Sale, S. M., & Smedira, N. G. (2012). Total artificial heart. Best Practice & Research Clinical Anaesthesiology, 26(2), 147-165.
Seamons, K. (2013, December 23). Revolutionary artificial heart transplanted. USA Today. Retrieved from http://www.usatoday.com/story/news/world/2013/12/23/newser-artificial-heart/4174139/
What is a total artificial heart? (2012, July 06). Retrieved from http://www.nhlbi.nih.gov/health/health-topics/topics/tah/
Image Sources
http://blog.naver.com/PostView.nhn?blogId=lisa7348053&logNo=80192778352
http://www.santabanta.com/photos/love/9033134.htm
https://www.nhlbi.nih.gov/health/health-topics/topics/tah/
Layout by : Cheng (Kim) Li
Manufactured Memories
Jessica Robbins

The brain is continually awash in information. Most people would prefer to believe that -- even if they are unable to retain all the information that they encounter over the course of their lives -- the information and events that they are able to recall are an accurate reflection of their experiences. However, this is not always the case. The processes of classifying, conceptualizing, and consolidating the continuous streams of data that constitute our mental and sensory experiences are complex, and it seems inevitable that occasional errors should occur. These types of errors result in the creation of false memories -- recollections of events or details that never took place. The process of false memory creation is complex, involving several regions of the brain and a wide variety of mental activities. When these false memories are mistaken for true recollections, the consequences can be profound: false witness testimony can result in wrongful conviction; allegations of childhood sexual abuse made on the basis of supposedly recovered memories can throw lives into tumult. However, false memories seem to be an unavoidable consequence of the brain's attempts to strike a balance between information intake, management, and recovery.

Memories can be broadly differentiated into one of two types: declarative and non-declarative (Byrne, n.d.). Declarative memory consists of information that can be consciously recalled, such as facts, dates, and concrete knowledge. Declarative memory can in turn be subdivided into semantic memory -- context-independent memory of concepts, meaning, and facts about the world -- and episodic memory, the repository of autobiographical information (Straube, 2012; Tulving, 1972). The capacity to re-experience autobiographical events in their original context is a consequence of episodic memory (Straube, 2012). In contrast, non-declarative memory manages memories that are not formed or accessed consciously, such as acquired skills, habits, and simple forms of associative
learning (Byrne, n.d.). These two types of memories are formed, stored, and accessed by different neural structures (Payne et al., 2009). Declarative memory is studied more frequently in the context of memory distortion.
The formation of memories can be divided into three stages: encoding (memory formation), consolidation (the strengthening of memory traces), and retrieval (the active recollection of memories) (Roediger & McDermott, 1995), each of which involves several brain regions.
Location of key brain regions.
Different types of memories.
Semantic encoding is carried out by the medial temporal lobe, which consists of the hippocampus, amygdala, and related tissues (Kim & Cabeza, 2006). The hippocampus plays a pivotal role in the formation of accurate declarative memory (Johnson, Raye, Mitchell & Ankudowich, 2012); however, its ability to integrate and recombine information from numerous sources has implicated it in false memory formation (Straube, 2012). The hippocampus also helps stabilize memories to enable their long-term storage and is hypothesized to be engaged in memory retrieval (Carr, Jadhav & Frank, 2011). The frontal cortex is also involved in the processing of semantic memory and helps the brain determine the relative importance of information while creating and retrieving memories, a function that is important for assessing the plausibility of memories and comparing alternative recollections (Johnson et al., 2012; Kim & Cabeza, 2006). This ability to distinguish plausible memories complements the function of the perirhinal cortex, whose activation is associated with the subjective sense of "remembering," even in the absence of a readily accessible memory (Johnson et al., 2012). The amygdala -- the center of emotional processing in the brain -- can also contribute to this subjective sense of remembrance, and furnishes memories with an emotional depth that can contribute to their sense of plausibility (Byrne, n.d.; Johnson et al., 2012). The formation of episodic memory is more complex and involves activation of several brain regions not associated with semantic memory.
The involvement of several different neural regions is believed to make episodic memory more prone to error than semantic memory, as memories are compiled from a wider variety of sources and are composed of a more diverse set of data (Straube, 2012). Recent work at M.I.T. has shown that false memories can be created during memory retrieval, which may occur when a previously formed memory becomes closely associated with a particularly evocative external stimulus (Ramirez et al., 2013). False memories, however, are not merely the result of mechanical errors in the encoding and storage processes, but may also result from the brain's efforts to extract as much meaning from information as efficiently as possible (Gallo, 2010). This view of false memory formation was given credence by the word-list recall experiments conducted by Deese in the 1950s and Roediger and McDermott in the 1990s, which provide a powerful illustration of the malleable nature of recollection.

The structure of Deese, Roediger, and McDermott's experiments -- now referred to as the DRM task -- is simple. In the first stage, the subject is presented with a list of thematically related words (e.g. bed, night, and dream) and asked to study them. After the study period, the subject is asked to recall the words presented in the previous part of the experiment. Results from experiments which adhere to this general structure reveal that subjects often falsely remember having studied non-presented critical lures that are thematically consistent with the words from the list (e.g. sleep) (Gallo, 2010; Raymaekers, Peters, Smeets, Abidi & Merckelbach, 2011; Roediger & McDermott, 1995). While these types of memory tests have limitations -- autobiographical memories are significantly more complex than word lists -- studies of the brain processes involved in the DRM task have revealed much about how both true and false memories are formed, retrieved, and assessed.
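As a concrete illustration of how a DRM trial is typically scored, the toy sketch below checks a recalled word list against the studied list and the non-presented critical lure. The word lists, the recall data, and the scoring are simplified stand-ins for the real stimuli and procedures used in these studies.

    # Toy scoring of a single DRM trial: which recalled words were actually
    # studied, and did the non-presented critical lure intrude?
    studied = {"bed", "rest", "awake", "tired", "dream", "night", "blanket", "doze"}
    critical_lure = "sleep"

    recalled = ["bed", "dream", "sleep", "night", "pillow"]  # hypothetical response

    true_recall = [w for w in recalled if w in studied]
    lure_recalled = critical_lure in recalled
    other_intrusions = [w for w in recalled if w not in studied and w != critical_lure]

    print(f"correctly recalled    : {len(true_recall)}/{len(studied)} studied words")
    print(f"critical lure recalled: {lure_recalled}")
    print(f"other intrusions      : {other_intrusions}")

The striking empirical finding is how often the critical lure turns up in the recalled list with the same subjective confidence as genuinely studied words.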
The DRM task engages both components of declarative memory: the memory of events controlled by the episodic system, and the processing of underlying concepts and themes that is the hallmark of semantic memory (Payne et al., 2009). Work by Kim and Cabeza (2006) revealed that during completion of the DRM task, regions of the brain associated with semantic memory were activated during both true and false memory formation, suggesting that the creation of false memories is itself an active -- if unconscious -- process that occurs during encoding (Kim & Cabeza, 2006; Straube, 2012). Specifically, Kim and Cabeza found that activation of the left ventrolateral prefrontal cortex promotes not only the creation of true memories of presented words, but also that of false memories of closely associated but non-presented semantic associates (Kim & Cabeza, 2006; Straube, 2012). Kim and Cabeza's observations may be the result of semantic elaboration -- the integration of incoming information with pre-existing semantic knowledge -- a process that has the potential to both promote the formation of true memories and contribute to the creation of false ones (Kim & Cabeza, 2006).

There are several schematic theories regarding how this process may work. According to the fuzzy trace view, studying a list of closely associated words leads to the formation of two types of mental traces -- verbatim and gist. Verbatim traces contain item-specific information, whereas gist traces contain the essence of the presented information, such as the underlying theme of a list of associated words (Brainerd & Reyna, 2002; Kim & Cabeza, 2006; Straube, 2012). It is theorized that gist traces are more robust to the passage of time than verbatim traces, meaning that recovered memories are more likely to reflect the overall essence of an experience rather than its concrete details (Brainerd & Reyna, 2005). The spreading activation view posits that, within a semantic memory network, mental activation spreads from presented words to closely related words. This secondary activation is later attributed to memory (Kim & Cabeza, 2006; Roediger & McDermott, 1995; Straube, 2012).

While activation may promote the formation of false memories, monitoring -- the editing and decision process that helps determine the origin of activated information -- helps prevent their proliferation (Gallo, 2010). The source monitoring framework theorizes that mental experiences consist of several attributes, such as recollections of sensory details or a sense of remembrance, which can help qualify their origin and accuracy (Johnson et al., 2012). There are several different types of monitoring processes. Diagnostic monitoring is the process by which memories are dismissed as false because they fail to elicit strong feelings of recollection (e.g. "I didn't take a helicopter to work today because helicopter rides are distinctive and I would have remembered doing that"). In contrast, disqualifying monitoring relies on corroborative evidence to disentangle true and false recollections. When a questionable memory elicits recollections that are inconsistent with prior information, the memory is
dismissed as false (e.g. "I didn't drive to work today because I remember taking the bus instead") (Gallo, 2010). The monitoring process, however, is not infallible. Monitoring can be affected by the presence of prior knowledge and the social and cultural context in which experiences occur and judgments regarding adequate evidence of remembrance are made -- both of which can lead to memory misclassifications (Johnson et al., 2012).

While results from DRM-based experiments have shown that false memory creation occurs during the encoding process, memories can be distorted and fabricated during consolidation as well. Consolidation primarily takes place during sleep and involves an intensive reorganization and reintegration of information gathered and memories generated over the course of the day. New memory traces are strengthened, promoting the formation of strong, accurate memories. However, information from these new memories can also be integrated with pre-existing memories (Straube, 2012). Thus, sleep promotes the formation of both true and false memories. Sleep deprivation, in contrast, decisively favors the creation of false recollections. This effect is likely due to the reduction in monitoring that occurs as a result of sleep deprivation, making it more difficult to correctly identify the source of mental activation (Straube, 2012).

As the studies of memory consolidation during sleep illustrate, memory building is a dynamic process, and memories often retain the capacity to be updated by incoming information and integrated with recent experiences. This plasticity of memory can introduce ambiguity into legal proceedings, where accurate memories of oftentimes violent and traumatic events are necessary to determine culpability. Unfortunately, such unadulterated memories are difficult to obtain. High levels of the stress hormone cortisol have been shown to impair the retrieval of autobiographical memory, resulting in vague, disjointed eyewitness testimony that often strikes jurors as less credible (Lacy & Stark, 2013). In addition, the retention and retrieval of memories can be very vulnerable to the kind of suggestive questioning, presentation of misinformation, and positive feedback responses that play a prominent role in witness questioning (Lacy & Stark, 2013). Reliance on inaccurate recollections can have chilling consequences: incorrect eyewitness identification has been implicated in the wrongful conviction of 75 percent of individuals who were later exonerated on the basis of DNA evidence (Lacy & Stark, 2013). The effects of memory distortions reach beyond eyewitness testimony. The general public, including jurors and judges, retains several misconceptions about how memories are formed and maintained (Lacy & Stark, 2013).
The types of suggestive questioning used during eyewitness testimony can induce memory distortion in jurors as well: jurors often confound information presented by lawyers and witnesses (Lacy & Stark, 2013). Jurors also tend to place more faith in testimony when witnesses themselves believe that their memories are accurate, despite the evidence that outside of laboratory settings the connection between perceived and genuine memory accuracy is tenuous (Lacy & Stark, 2013).

A particularly contentious confluence of memory retention and the justice system is the use of suggestive therapy to retrieve supposedly repressed memories of childhood sexual abuse (CSA), which has in some cases led to accusations of malpractice. One such example is that of Beth Rutherford, who sought out therapy in 1992 at the age of 19 while suffering from work-related stress. Prior to beginning therapy, Beth believed that she had enjoyed a happy childhood with warm and loving parents. Beth was thus initially surprised to be informed that her symptoms were consistent with those of sexual abuse victims. Beth's therapy sessions focused heavily on themes of sexual abuse, and she was encouraged by her therapist to re-interpret her childhood memories and experiences in ways that were consistent with the theory that she had repressed memories of being sexually abused by her father. As the emphasis on sexual abuse and memory recovery in Beth's therapy sessions increased, she began to have dreams of being assaulted by her father, and was informed that these dreams were in fact heavily repressed memories. After two and a half years in therapy, Beth claimed to have recovered highly specific memories of childhood sexual abuse at the hands of her father which had resulted in two pregnancies and forced abortions. Beth's revelations reduced her to a state of physical and mental deterioration: her weight dropped to 90 pounds, she became estranged from her parents, and she began taking mood-controlling medications. However, there was reason to doubt the veracity of Beth's recollections: her father had undergone a vasectomy when Beth was four years old, rendering him incapable of causing the pregnancies that Beth so vividly remembered (Brainerd & Reyna, 2005).

Unfortunately, both the nature of the kind of suggestive therapy used with Beth and that of those who seek it out may contribute to false memory formation.
Studies have shown that those who claim to have recovered memories of CSA during suggestive therapy sessions are more likely to make misattributions during DRM exams (Association for Psychological Science, 2009; Brewin, 2012; Raymaekers et al., 2011), suggesting that these people may be more vulnerable to false memory formation in general. In addition, individuals undergoing therapy are encouraged to maintain an attitude of openness and discovery, possibly causing them to confirm the veracity of events that may never have occurred but are consistent with the overall essence of their experience or that appear to be plausible explanations for their behavior. This effect is exaggerated by the often significant passage of time between when the memory is claimed to have taken place and its recovery in therapy (Brainerd & Reyna, 2005).

Beth's story illustrates the power of suggestion on the process of memory formation and attribution, and the impact that memory -- or what is perceived as memory -- can have on one's sense of self. Given the profound effect that Beth's false memory recovery had on her physical and mental health, one must wonder how the mind can be so vulnerable to external influences. Beth's experiences, while traumatic, may be the result of an evolutionarily advantageous process: the ability to integrate suggestions from respected sources and vague emotional sensations into highly convincing recollections of seemingly plausible events that can explain confusing or troubling events and behaviors. Such is the power of false memories: when abused or misapplied they can have significant negative effects; but false memories can also be a learning tool, a way of consolidating information and integrating life experiences in a way that obscures details but maintains their underlying truth.
References

1. Association for Psychological Science. (2009). Differences in recovered memories of childhood sexual abuse. ScienceDaily. Retrieved February 18, 2014 from www.sciencedaily.com/releases/2009/02/090202175057.htm
2. Brainerd, C.J. & Reyna, V.F. (2002). Fuzzy-trace theory and false memory. Current Directions in Psychological Science, 11(5), 164-169.
3. Brainerd, C.J. & Reyna, V.F. (2005). False memory in psychotherapy. In The science of false memory (8). Retrieved from http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195154054.001.0001/acprof-9780195154054-chapter-8
4. Brewin, C.R. (2012). A theoretical framework for understanding recovered memory experiences. In Belli, R.F. (Ed.), True and false recovered memories (pp. 149-173). New York, NY: Springer New York.
5. Byrne, J.H. (n.d.). Neuroscience Online. Retrieved from http://neuroscience.uth.tmc.edu
6. Carr, M.F., Jadhav, S.P. & Frank, L.M. (2011). Hippocampal replay in the awake state: a potential substrate for memory consolidation and retrieval. Nature Neuroscience, 14, 147-153.
7. Gallo, D.A. (2010). False memories and fantastic beliefs: 15 years of the DRM illusion. Memory & Cognition, 38(7), 833-848.
8. Johnson, M.K., Raye, C.L., Mitchell, K.J. & Ankudowich, E. (2012). The cognitive neuroscience of true and false memories. In Belli, R.F. (Ed.), True and false recovered memories (pp. 15-52). New York, NY: Springer New York.
9. Kim, H. & Cabeza, R. (2006). Differential contributions of prefrontal, medial temporal, and sensory-perceptual regions to true and false memory formation. Cerebral Cortex, 17, 2143-2150.
10. Lacy, J.W. & Stark, C.E.L. (2013). The neuroscience of memory: implications for the courtroom. Nature Reviews Neuroscience, 14, 649-658.
11. Payne, J.D., Schacter, D.L., Propper, R.E., Huang, L., Wamsley, E.J., Tucker, M.A., Walker, M.P. & Stickgold, R. (2009). The role of sleep in false memory formation. Neurobiology of Learning and Memory, 92, 327-324.
12. Ramirez, S., Liu, X., Lin, P., Suh, J., Pignatelli, M., Redondo, R.L., Ryan, T.J. & Tonegawa, S. (2013). Creating a false memory in the hippocampus. Science, 341(6144), 387-391.
13. Raymaekers, L., Peters, M.J., Smeets, T., Abidi, L. & Merckelbach, H. (2011). Underestimation of prior remembering and susceptibility to false remembering: two sides of the same coin? Consciousness and Cognition, 20(4), 1144-1153.
14. Roediger, H.L. & McDermott, K.B. (1995). Creating false memories: remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803-814.
15. Straube, B. (2012). An overview of the neuro-cognitive processes involved in the encoding, consolidation, and retrieval of true and false memories. Behavioral and Brain Functions, 8(35). doi: 10.1186/1744-9081-8-35
16. Tulving, E. (1972). Episodic and semantic memory. In Tulving, E. & Donaldson, W. (Eds.), Organization of Memory (pp. 381-403). New York, NY: Academic Press.
Image Sources

1. Camazine, Scott. http://quadriv.wordpress.com/2011/08/12/alexia-sine-agraphia/
Layout by: Spring Chau
LAB-ON-A-CHIP

Ann Heslin

Picture the length of two centimeters. Now, picture a synthetic chip that can mimic the functions of an entire human lung within this two-centimeter length. Pretty remarkable, right? What is even more remarkable is that this in vitro (existing outside a living organism) device is precise on the micro-scale (10^-6 m) and has been shown to be a reliable human lung model. This lung-on-a-chip device is one example of a broader category of synthetic devices known as "lab-on-a-chip" or "microfluidic" devices. Lab-on-a-chip is what it implies: miniaturized science. Though the theory of miniaturization is not hard to conceptualize, the process of turning theory into practice has proven to be much harder. Nevertheless, meticulous engineering has led to the rapid advance of these devices and has paved an auspicious path for their contribution to organ modeling.

One challenge the lung model faced was how to recreate the functional alveolar-capillary interface that is present in all human lungs. To understand this model, let us briefly revisit the physiology of the lung. During respiration, air passes down through the lungs into alveolar sacs, where gas exchange occurs (Fig 1). But before oxygen can enter the bloodstream, it first diffuses through alveolar epithelial cells and then passes through a fused basement membrane, where it encounters the capillary endothelial cells. In the lung model, a 10 μm porous membrane composed of poly(dimethylsiloxane) (PDMS) represented the fused basement membrane, with epithelial and endothelial cells growing as
Figure 1.
monolayers on opposing sides (Fig 2). Extracellular matrix (ECM), which can be thought of as a gluing substance, was coated on both sides of the PDMS membrane to allow the two cell lines to firmly adhere. Once attached to the material, both cell lines grew to 100 percent confluence (the whole PDMS membrane was covered with cells) and sustained themselves for a prolonged period of over two weeks. Next, this thin PDMS membrane was clamped between two thicker layers to form two smaller side chambers and one larger central cavity (Fig 3). An etching solution was pumped through the two side microchannels so that only the larger, central cavity had an upper and a lower compartment. This engineering allowed air to enter the upper compartment, mimicking the air-liquid interface within alveoli. A vascular medium (a blood-like substance) was also allowed to run simultaneously through the lower compartment to emulate capillary blood flow. The lung model also accounted for the physical movements -- expansion and contraction -- that occur during respiration. During inspiration, the diaphragm contracts (moves downwards), decreasing the pressure around the lungs so that the alveolar sacs expand in volume, which increases the efficiency of gas exchange (Fig 1). A vacuum was applied to the two side microchambers to create a pressure-driven stretching of the thin PDMS membrane (Fig 4). This vacuum-induced stretching distorted the cultured epithelial and endothelial cells, forcing them to survive a physical environment similar to one that occurs in vivo (inside a living organism).
Figure 2.
Not only did the cells withstand the physical strain, but the top epithelial cells also responded by producing surfactant, a detergent-like substance that reduces surface tension. This biological response was significant because it further validated the lung-on-a-chip as a reliable model that could imitate a real lung.

Figure 4.

Pulmonary inflammation was assessed in order to determine the model's ability to produce a whole-organ response. To start, TNF, a potent proinflammatory mediator, was applied to the top epithelial cells to induce an inflammatory response. Blood-borne immune cells (neutrophils) were simultaneously channeled through the lower (vascular) compartment of the middle cavity (Fig 5). The bottom endothelial cells that faced the vascular compartment responded to the pulmonary inflammation by recruiting neutrophils from the stream of blood. The fluorescently labeled neutrophils were visualized to physically adhere to the bottom endothelial cells, transmigrate across the thin PDMS membrane, and finally reemerge onto the surface of the top epithelium -- all within minutes! More elaborate and extensive inflammatory response tests were conducted, and all reconfirmed the reliability of this in vitro lung-on-a-chip.

Figure 5.

Despite the remarkable accomplishment of this device, lab-on-a-chip needs to do more than merely reconfirm what biologists already know. Microfluidics is a tool for scientists to gather novel insight into how, where, or why an organ might react in a given circumstance. The lung-on-a-chip also led to these types of novel findings. Specifically, when silica nanoparticles (a known pulmonary irritant) were introduced to the system, mechanical strain was shown to exacerbate pulmonary inflammation. Silica nanoparticles were first introduced in the absence of the vacuum-induced strain, which resulted in the top epithelial cells producing the expected inflammatory response. However, when vacuum-induced strain was applied, silica nanoparticle absorption into the bloodstream increased. This in vitro lung apparatus observed the toxic effects of silica nanoparticles, a mechanism of toxicity that had never previously been visualized. Despite the inherent differences between the in vitro
lung-on-a-chip and an in vivo human lung, this microfluidic device has been established as one of the most reliable human lung models. There is a need to extend models beyond just the lung. Because the heart and liver are the two organs most associated with idiosyncratic and adverse drug reactions, microfluidic developments in these areas are of particular priority. The reason for the slower development of reliable heart and liver models is that these models necessitate another component: a third dimension. Note that the lung model did not require cells to exist beyond monolayers because the alveolar-capillary interface is, in fact, composed of single cell layers. 3-D cell cultures have been shown to behave differently than conventional 2-D ones, so 3-D models become essential for the proper and accurate modeling of a heart and liver. 3-D heart models have been less successful because they fail to incorporate both the electrophysiological properties and the contractile organization of heart tissue. Even though heart tissues can physically contract in vitro, they lack either contractile alignment or 3-D structure. In vitro liver models must support hepatic polarity (the directional organization of liver cells) and cannot disturb cell-matrix interactions. Advanced 2-D models were shown to improve cellular polarity, function, and viability, but past 3-D models have failed to provide the required quantities and qualities of oxygen and nutrients. In summary, in vitro cardiac and hepatic models cannot yet accommodate the high-throughput drug screenings necessary for generating reliable data that can predict heart and liver toxicities. Nevertheless, efforts to develop these 3-D models have been accelerated with the integration of stem cell research. In 2012, Shinya Yamanaka and John Gurdon shared the Nobel Prize for the discovery that mature cells can be reprogrammed into pluripotent stem cells. Specifically, the Yamanaka lab performed a retroviral transduction of four factors (inserted specific reprogramming genes)
into human adult cells, which induced those mature cells to become pluripotent stem cells, cells capable of differentiating into any type of body cell. Induced pluripotent stem (iPS) cells are advantageous because almost any adult tissue can be harvested, reprogrammed, and then differentiated into any type of body cell. As a result, patient- and disease-specific cells could be cultured to provide scientists with personalized and regenerative 3-D models. Implementing
iPS cell technology would also bypass the use of embryonic stem (ES) cells, which sparked controversy over embryo interference. iPS cell technology has revitalized the potential for coupling stem cells with lab-on-a-chip to build better 3-D human models. The National Institutes of Health (NIH) has recognized the importance of developing more accurate 3-D in vitro models and has allocated various awards that support tissue chip technologies. In 2012 the NIH issued grants that supported the development of 3-D tissue and cell source models. One of these awards was presented to the Healy research group at UC Berkeley. Prof. Kevin Healy's lab, in collaboration with Prof. Luke Lee at Berkeley, Prof. Bruce Conklin at the Gladstone Institutes, and Prof. Holger Willenbring at UCSF, is working on in vitro human cardiac and liver tissue models that employ normal and patient-specific human iPS cells. Anurag Mathur and Peter Loskill, postdoctoral fellows in the Healy lab, are currently developing a microfluidic device that can support the function of cardiomyocytes (CMs), the muscle cells of the heart, derived from iPS cells. They insert CMs into a custom microfluidic device and then wait for the cells to attach and spread in vitro. These CMs form a 3-D tissue that exhibits essential electrophysiological and contractile properties of heart tissue. Despite the advancements Mathur and coworkers have made, there are challenges concerning accurate reproduction and replication. The regeneration of CMs requires "meticulous documentation of every small molecule and/or growth factor that is introduced," he says. In other words, a slight deviation
might cause lower yield or inconsistent data. Nevertheless, these problems will most likely be resolved in the near future, and continued development is worthwhile. The other NIH awards support the development of other organ models, which are all rooted in one principle: that the integration of microfluidics and iPS cells will culminate in the best 3-D in vitro models. The future of microfluidic and iPS cell integration is fascinating. Consider, for example, the layering of multiple in vitro organ models to generate an apparatus that can analyze multiple organ-organ responses. The heart and liver were previously mentioned as primary targets for organ toxicity. Imagine an apparatus that coupled both heart and liver organs—such a model would be indispensable to the field of predictive toxicology, which is currently inefficient, costly, and unreliable. This inadequate drug discovery and development process can be attributed to insufficient toxicity models and dependency on animal testing. Don Ingber, head of the human-on-a-chip project at the Wyss Institute at Harvard, labels this inefficiency as Eroom's Law, which states "that the number of medicines invented halves each year, while the prices seem to go up continually." Eroom's Law opposes the well-known Moore's Law (Eroom is Moore spelled backwards), which observes steadily increasing product efficiency at decreasing cost. The development of better predictive models that integrate iPS technology will make the drug discovery and development process more affordable, efficient, and reliable. Coupling iPS cell technology with microfluidics will culminate in 3-D human models that offer a range of advantages: there will be faster, higher-throughput analysis and screening of cellular responses to drugs, chemicals, particulates, toxins, and pathogens; assays will be reproducible and parallelized; miniaturization will decrease the amount of experimental material needed; biologists will have a cheaper tool to investigate mechanisms of toxicity and metabolic processes; patient-specific cell lines will be generated for personalized medicine; and portable
technologies will be used in surveillance and detection of pathogens in public health. The aforementioned examples do not encompass all the potential applications of lab-on-a-chip, and the established success within this field is a harbinger of the success that will continue to emerge. It is also important to understand that the work must remain interdisciplinary, drawing on biology, chemistry, engineering, and physics. The integration of multiple fields will enable microfluidic models to be more accurate—albeit complex—and will improve in vitro testing.
Abbreviations
PDMS: poly(dimethylsiloxane)
ECM: extracellular matrix
TNF: tumor necrosis factor alpha
iPS: induced pluripotent stem (cell)
CMs: cardiomyocytes
References 1. Huh, D., Matthews, B., Mammoto, A., Montoya-Zavala, M., Hsin, H., & Ingber, D. (2010). Reconstituting Organ-Level Lung Functions on a Chip. Science, 328, 1662-1668 2. Ibid. 3. Ibid. 4. Daniels C. & Orgeig, S. (2003). Pulmonary Surfactant: The Key to the Evolution of Air Breathing. News in Physiological Sciences 18 (4): 151–157. 5.
Walsh, L., Trinchieri, G., Waldorf, H., Whitaker, D., Murphy, G. (1991). Human dermal mast cells contain and release tumor necrosis factor alpha, which induces endothelial leukocyte adhesion molecule 1. Proceedings of the National Academy of Sciences, 88 (10): 4220–4
6. Huh, D., Matthews, B., Mammoto, A., Montoya-Zavala, M., Hsin, H., & Ingber, D. (2010). Reconstituting Organ-Level Lung Functions on a Chip. Science, 328, 1662-1668 7. Ding, M., Chen F., Shi X., Yucesoy, B., Mossman B., & Vallyathan, V. (2002). Diseases caused by silica: mechanisms of injury and disease development. International Immunopharmacology (2-3):173-82 Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11811922 8. Ibid. 9. Johnson, D. (2013). Fusion of nonclinical and clinical data to predict human drug safety. Expert Reviews, 6(2), 185-195. 10. Cukierman, E., Pankov, R., Stevens, D., & Yamada, K. (2001). Taking Cell-Matrix Adhesions to the Third Dimension. Science, 294 (5547): 1708-1712 Retrieved from http://www.sciencemag.org/ content/294/5547/1708.short 11. Eschenhagen, T. (1997). Three-dimensional reconstitution of embryonic cardiomyocytes in a collagen matrix: a new heart muscle model system. FASEB J, 11, 683-694. Retrieved from http://www. ncbi.nlm.nih.gov/pubmed/9240969 12. Song, H., Yoon, C., Kattman, S., Dengler, J., Masse, S., Thavaratnam, T., Gewarges, M., Nanthakumar, K., Rubart, M., Keller, G., Radisic, M., & Zandstra, P. (2010). Interrogating functional integration between injected pluripotent stem cell-derived cells and surrogate cardiac tissue. Proceedings of the National Academy of Sciences of the United States of America, 107, 3329-3334. Retrieved , from http://www.ncbi. nlm.nih.gov/pubmed/19846783 13. Toh, Y., Lim, T., Tai, D., Xiao, G., Noort, D., & Yu, H. (2009). A microfluidic 3D hepatocyte chip for drug toxicity testing. Lab on a Chip, 14, 2026-2035. 14. Chan, C., Berthiaume, F., Nath, B., Tilles, A., Toner, M., & Yarmush, M. (2004). Hepatic tissue engineering for adjunct and temporary liver support: critical technologies. Liver Transplantation, 10, 1331-1342. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/15497161 15. Mathur, A., Loskill, P., Hong, Lee, S., Marcus, S., Dumont, L., Conklin, B., Willenbring, H., Lee, L., & Healy, K. (2013). Human induced pluripotent stem cell-based microphysiological tissue models of myocardium and liver for drug development. Stem Cell Research & Therapy, 4 Retrieved from http://stemcellres.com/content/4/S1/ S14 16. (2012, 8). Nobelprize.org. The 2012 Nobel Prize in Physiology or Medicine - Press Release. Retrieved February 2, 2014, from http:// www.nobelprize.org/nobel_prizes/medicine/laureates/2012/press. html 17. Takahashi, K., Tanabe, K., Ohnuki, M., Narita, M., Ichisaka, T., Tomoda, K., & Yamanaka, S. (2007). Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Cell, 131, 861-872.
18. National Center for Advancing Translational Sciences (NCATS). Tissue Chip Awards: Model Systems. Retrieved February 2, 2014, from http://www.ncats.nih.gov/research/reengineering/tissuechip/projects/model/model-systems.html 19. Li, A. (2009). The Use of the Integrated Discrete Multiple Organ Co-culture (IdMOC®) System for the Evaluation of Multiple Organ Toxicity. Alternatives to Laboratory Animals, 37, 377-385. 20. Neuži, P., Giselbrecht, S., Länge, K., Huang, T., & Manz, A. (2012). Revisiting lab-on-a-chip technology for drug discovery.. Nature Reviews. Drug Discovery, 11(8), 620-32. Retrieved from http://www. ncbi.nlm.nih.gov/pubmed/22850786 21. Chin, C., Linder, V., & Sia, S. (2007). Lab-on-a-chip devices for global health: Past studies and future opportunities. Lab on a Chip, 7 (1): 41-57 Retrieved from http://pubs.rsc.org/en/content/ articlepdf/2007/lc/b611455e
22. Bluestein, A. (2013). Model Organs in Miniature. Proto Magazine. Retrieved March 24, 2014, from http://protomag.com/assets/ model-organs-in-miniature.
Layout by: Alexis Bowen
An Interview with Professor Jan Rabaey: Neural Prosthetics and Their Future Applications
By: Kuntal Chowdhary, Jingyan Wang, Saavan Patel, Shruti Koti BSJ interviewed Professor Jan Rabaey to gain insight on his research regarding brain-machine interfaces (BMI) and microscopic implantable devices. Professor Rabaey received his E.E. and Ph.D. degrees in Applied Sciences from the Katholieke Universiteit Leuven, Belgium, in 1978 and 1983, respectively. From 1983 to 1985, he was a Visiting Research Engineer at UC Berkeley. In 1987, he joined the faculty of the Electrical Engineering and Computer Science (EECS) department at UC Berkeley, where he now holds the Donald O. Pederson Distinguished Professorship. He was the Associate Chair (EE) of the EECS department at Berkeley from 1999 until 2002 and is currently the scientific co-director of the Berkeley Wireless Research Center (BWRC), as well as the director of the Multiscale Systems Research Center (MuSyC). Professor Rabaey has authored a wide range of papers in the areas of signal processing and design automation. He has received numerous scientific awards, including the 1985 IEEE Transactions on Computer-Aided Design Best Paper Award (Circuits and Systems Society), the 1989 Presidential Young Investigator Award, and the 1994 Signal Processing Society Senior Award. In 1995, he became an IEEE Fellow. He has also been awarded the 2002 ISSCC Jack Raper Award, the 2008 IEEE Circuits and Systems Mac Van Valkenburg Award, the 2009 EDAA Lifetime Achievement Award, and the 2010 Semiconductor Industry Association University Researcher Award. In 2011, he was elected to the Royal Flemish Academy of Arts and Sciences (Belgium). BSJ: How did you get involved in your line of research? Prof. Rabaey: In research, things always go in unexpected ways, and it's always unexpected things that move you in another direction. I've been working in the field of integrated circuits and wireless for quite a long time. I have been working on low-power mobile devices since the early 1990s. We had a project around 1992 to 1996, which was called InfoPad. It's almost like… It was the
idea that I should have a lightweight device that connects to a wireless network backbone and acts as the primary way to access data, which was 15 to 20 years before the iPad. So I was looking into low-power wireless devices. Focusing on the limits of what one can accomplish drove me to look at applications that require small devices and lower energy. So I was doing a lot of work in the late 1990s on sensor nets, little sensor nodes that could have a remote or internal energy source, as well as processing abilities. These could be used in all kinds of immersive applications, like environmental applications. And then it happened that we were looking at driving the devices to be smaller and smaller, and we said hey, if we keep pushing technology further down, we should be able to build innovative devices. Now they're getting as small as the size of a biological cell. Then you could have an electronic sensor that sits next to a cell, and they start talking to each other. Now can you build something with that, and is it really possible to do these things? Where do I get the energy? Biological cells have a way of getting energy, but the electronics need to get it some other way. These are the questions that we are asking. At that point in time, we hired
a new faculty member for the department. His name is Jose Carmena. He has an engineering background, and at the time he had partially moved out of engineering, into neuroscience. After going to some of his talks, I thought "Hey, that's kind of cool," and then I started thinking about brain-machine interfaces. Could we build devices that can talk to neurons? So we invited him here. I remember it very well - we had a group meeting and he gave a talk. At the end of the talk I said, "What can I do for you?" He suggested a head stage. But that's boring. Anyone can do that. Give us something harder. He came back to our lab later and said, "Could you build for us little free-floating electrodes, localized in the cortical regions, which can wirelessly send information in and out? Could you do that?" My first guess was "This is impossible. This is too hard." It turned out that since then, which was about 8 years ago, we have been gradually moving in the direction of building these devices. Sometimes you start working hard and looking at something from all angles, and suddenly the impossible becomes possible.
Microsensor integrated within organic polymer circuit board that can be implanted within the human body. Quarter is used as a reference for size.
Since this change of direction, we have had more and more faculty added, so we can have a bigger and bigger undertaking. It's really exciting.
BSJ: Yes, definitely. That's kind of like our interview team right here - two of us are biology majors and two are electrical engineering and computer science majors.
Prof. Rabaey: Exactly! And that's where you exchange information, when you learn from other spaces, and you see opportunities. Absolutely!
BSJ: We came across your research on neural dust, with shrinking components smaller and smaller. We understand that they're used in BMIs. How do you power such a small device?
Prof. Rabaey: That's the right question to ask. This is the powering problem. There are little passive elements that are free-floating, little cubes. When you power them, the way you get back the data is that you take the incoming waveform, and you modulate data back on the reflected waveform, just like how RFID works - you send an RF frequency at it. It's a sine wave. And the RFID tag just modulates the impedance and superimposes information back on the reflected waveform. And that's the way you can read it. We did the math, and wrote a little proposal. After two months we came to the conclusion that it's impossible. We looked at the physics and saw that even sending the largest amount of power allowed within regulations wasn't enough to power it. We cannot have a huge amount of power pounding on your head; that's not very healthy. We got basically 2 nanowatts for a single node of 50 micrometers. That's nothing - you can't do anything with it. That was the problem. We couldn't get enough sensitivity. So we thought, that's it.
We gave up, until about two years later, when we revisited the idea, and asked: What if I would use acoustic sound instead of electromagnetics? Basically, use an acoustic wave to power it. The advantage of acoustic waves is that they have a much smaller wavelength. And it’s all about that – it’s all about impedance matching. When you have a little node, and the wave is too big, you don’t have good coupling between the two. So in tissues, acoustic wave fronts propagate a lot more effectively and efficiently, not like magnetics. That was the solution. Suddenly we got three orders of
magnitude of more power for the same node size. And that’s a lot! You don’t get this easily. In science, sometimes you get 2 or 3 times more. Three orders of magnitude is a big win.
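To get a feel for the scale of that win, here is a reader's back-of-envelope sketch (not a calculation taken from the interview) comparing the roughly 2 nanowatts Prof. Rabaey quotes for electromagnetic powering with an upper bound on acoustic power capture by a 50-micron node. The ultrasound intensity ceiling and the idealized geometry are assumed, illustrative values.

```python
# Back-of-envelope comparison (reader's sketch, not from the interview) of acoustic
# vs. electromagnetic power delivery to a 50-micron neural dust node.
node_side_m = 50e-6                       # 50 um node size, as stated in the interview
node_area_cm2 = (node_side_m * 100) ** 2  # face area exposed to the wave, in cm^2

# Assumed ceiling: ~720 mW/cm^2, the commonly cited diagnostic-ultrasound intensity
# limit. Treat it as an illustrative upper bound, not a design target.
ultrasound_intensity_mW_per_cm2 = 720.0
acoustic_upper_bound_W = ultrasound_intensity_mW_per_cm2 * 1e-3 * node_area_cm2

em_estimate_W = 2e-9  # ~2 nW, the figure quoted above for the electromagnetic approach

print(f"acoustic upper bound: {acoustic_upper_bound_W:.1e} W")   # ~1.8e-05 W
print(f"EM estimate:          {em_estimate_W:.1e} W")
print(f"ratio:                ~{acoustic_upper_bound_W / em_estimate_W:.0e}x")
```

Even with generous rounding, the gap spans several orders of magnitude, consistent with the improvement described above.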
Basically, it says that we should go for acoustics, and generating acoustics is not hard to do. You have piezoelectric materials that allow you to put an electric waveform in and get an acoustic waveform out, and then it hits the little node. Acoustics go really well through tissue; acoustics don't go well through bone. For example, I couldn't put an acoustic generator here (pointing at the head) and hope I can talk through the skull. That doesn't work very well, because of the scattering of the signal through the skull. Electromagnetics are much better, so for that purpose, we indeed use electromagnetics and an antenna as an intermediate stage. We send an electromagnetic wave through the skull to power an intermediate stage, which powers amplifiers, supplies energy to the converters, and generates an acoustic wave to send to the small neural dust nodes. The intermediate stage then takes the refracted acoustic wave coming back from the neural dust, demodulates it, and sends it back out of the skull on an electromagnetic wave. So you need to combine them all. That's engineering. Engineering is not just focused on one problem. It focuses on big systems with all the components. You have to put them together and it all has to fit in the end.
BSJ: What kind of circuits are present on each neural dust node? Prof. Rabaey: A neural dust node has one transistor, and it's mostly passive. You have a power waveform coming in, you turn it into a power signal, and that powers the single transistor. You need some amplification, which is provided by that single transistor. The transistor modulates information back onto the piezo element, making it move, which scatters the information backwards. However, I need something that generates a waveform and decodes the information coming back from the neural dust - something like a radar, which is not easy. A radar has several antennas, where one can set a beam with different phase shifts for the different antenna elements. When the data comes back, it comes from a whole bunch of antennas, the neural dust, and you have to take them apart. Signal processing is required on the intermediate stage, and that will require more power. But, fortunately, with the intermediate stage, you're not
very deep in the tissue, and we have area. Area matters. If I have more area, I can have more power. The amount of power and energy a small node can get really depends on the size of the node. On top of the cortex, the intermediate stage can have a little membrane spread out that has quite a large aperture in terms of antenna size, electronics, and processing. BSJ: You mentioned just now that the neural dust nodes are mostly passive. Could you explain a little bit about the passive and active states and what those mean? Prof. Rabaey: A passive device is a device similar to a resistor or capacitor - something that doesn't provide any gain, so there are no amplifiers. If I basically take a resistor and put a voltage across it, I get a response. It's a pure response to an applied waveform. Now let's use a very simple example. Suppose I have an RFID tag; I put a sine wave in, and I modulate the information back down. There is no need to do anything active with the device; there is no energy stored, per se, in the device. The energy is conserved, and I modulate the information back out on that energy beam. If I want to do computation, I'm going to need an energy source. An energy source means I have to take the wave coming in, rectify it in some way or another, make sure it becomes DC, and store it in some energy resource like a battery or capacitor. I use that energy to perform computation, which puts the data back onto the oncoming waveform. Active components require some extra components that do individual computation, while with passive components, you react to what's coming in, change it a little bit, and send it back. BSJ: Because the neural dust nodes are so small, I would imagine that it's very hard to control the orientation of the nodes relative to the incoming wave. Does it matter for the output signal? Prof. Rabaey: That is definitely a good point; directionality matters. You have a little cube that consists of piezoelectric material and two electrodes. The rotations will definitely change the way it's going to refract. That is why, ultimately, you don't look at it as a single transmitter-receiver-reflector type system. To use it, you really need an array of interrogating elements. So, basically, you put in one wave, and it scatters back in different directions. The directional information can be used
to identify individual nodes. So, the signal will definitely be impacted by how deep the nodes are, and how the nodes are oriented, but you can learn those things. Once you learn those things, the nodes don’t move much. They might move a couple of microns occasionally, but that’s it. For example, every time my heart beats, I pump blood into the brain, and the brain expands and contracts continuously. You might have some micro motion but the nodes can be considered to be generally static in location. That is kind of an assumption that we are making. If we say they are moving all over the place the problem becomes much harder.
BSJ: Building off of that question, how exactly do you deliver neural dust into the cortex of the brain? Prof. Rabaey: Ah, good question. What apparatus you choose to use - this is not trivial at all. Obviously, you have to have surgery first. Surgery, hopefully, can be done through a little burr hole. You try to avoid taking the whole skull off. You make a little hole in the skull, about a centimeter wide, that goes through the dura and into the cortical material. There are some apparati that allow you to push material in to do this. You can actually build surgical tools; people have been doing this for a variety of devices that help you to put the nodes into the system. What you are trying to do is minimize the amount of damage to the surrounding tissue when you push something in, as you have all of the arteries and capillaries around there.
That’s a good question; we haven’t really gotten into massive deployment of these things yet. We have a lot to learn. These are really tiny, tiny little devices, manipulating these type of things is not trivial. But, we do have some experience with this. We are not doing this just with neural dust, we have many other devices that we are working with, which are ECoG based devices. These are flexible membranes with electrodes that you put on top of the cortex. It’s like EEG, but ECoG electrodes are placed below the skull, because that is much more efficient in terms of information gathering. Your skull is a low pass filter and an attenuator. With EEG you don’t get much information -- everything above 50 Hz is gone. But if you go below the skull, you can go up to 300 Hz and get a lot more information. There are, however, some packaging issues. For example, how do you make it flexible, how do you make it compliant, and all these types of things.
Another method we use is to push little needles, almost in the shape of an octopus, into the cortex. These needles are flexible and connect to a central platform, where a radio and power generator sit. You use a special apparatus to push this device into the cortex. There are a variety of tools that people have built. If you can make small things, you can also make very small apparati.
BSJ: I’m guessing these devices are meant to be chronic. This seems like a very difficult task. What are some tradeoffs and challenges that you had to address to make these devices chronic? Prof. Rabaey: If I want to put this in a human, for whatever purpose, it could be used to address motor dysfunction or any other type of neural disease. Once you do an implant you want to make sure it stays there for a long time. People typically talk about ten years minimum for these devices. It’s hard. No one has really done it in the neural space because there are a whole bunch of problems that emerge over time. However, not everything has to be chronic. There are certain implants that could be used for short term implant and explant. A very good example is neural implants for stroke patients. If somebody has stroke, the stroke basically destroys certain regions of the brain. It turns out that many stroke patients afterwards are capable of remapping some functionality, so if they have motor issues, they can remap some of those motor functions to other regions in the neighborhood that are not damaged. Same thing works with speech. Stroke patients initially have a hard time speaking, but then they can recover
Prof. Rabaey demonstrates implantation of microsensor within the brain.
some speech. We hope to use BMI to help them rehabilitate. If someone has a stroke and cannot move their hand, what they do now is that they have an exoskeleton that moves the hand for them. What you hope is that by moving the hand, things start linking up and they start rebuilding some neural connectivity. It would be even better if I had that exoskeleton with an implant. You have electrodes that drive that exoskeleton, and you have a linkage between a region in the brain and what’s happening. You do this for a couple of months, and when you are done you explant the device. So, not everything has to be chronic.
If it is chronic, there are a bunch of failure mechanisms you have to address, for example materials and scar tissue. It turns out that with a lot of the implants people do today in humans, monkeys, rats, and other animals, when you put electrodes in the brain, after a certain period of time you see the sensitivity of the electrodes go down. The signals get weaker and weaker, and suddenly they disappear. No signal anymore. The main reason for this is scar tissue. You have created damage. If you put something big in there, ranging from 100 microns to 2 mm long, the body reacts to it. The other thing that happens is that when you move your head, there is micro motion. The electrode is fixed, but your brain is moving, so you create more damage. You see glial tissue growing around it and you lose connection to the neuron in that neighborhood.
That’s why we thought of neural dust. Neural dust is free floating, so it moves with the brain. The reason we go after a 50 micron size is because it has been shown, by a number of groups, that if you make an object smaller than 50 microns and you put it in the body, the body basically ignores it. It considers this object to be normal. It’s only if it’s bigger it reacts to it. That’s one of the reasons we want to make neural dust very tiny.
The other issue involves the materials that you are using. The brain is a vicious environment; there are several types of fluids there. For example, if you have two materials that fit perfectly well together - say you have a polymer and a titanium wire on top of that - you have a perfect connection. However, in the presence of liquids and certain acids, they might delaminate over time. Water gets in there, and suddenly a wire might come loose. So the right choice of material is very important.
The other thing which becomes very important is the possibility of infections. Today, most of the implants are done via burr hole surgery. Every time you have a hole through tissue, at some point in time, you will surely get an infection. That is why we are so insistent on wireless connectivity. With our system, you can close the skull back up, with the implant below, and you have reduced the sources of infection. BSJ: So if you have problems with the neural dust, is there any way to remove them, or get at them and change how they work at all? Or is it that once they are in, you are unable to modify them at all? Prof. Rabaey: You have hit on a very important issue. You can implant them, but explanting them is almost impossible. You are not going to start fishing after nodes that are 50 microns in size. You can locate them with some imaging strategy, but that's it. The idea is that they are there for life. But they don't matter, as they can get absorbed by the tissue. In general, the idea is - and "dust" says it all - that you sprinkle many of them, more than you need. If I really want to do prosthetic control, I can talk to a certain set of neurons, say 50 or 100 neurons, and that should have enough connectivity. To make it robust over time, as some of those nodes might not work anymore or the neuron might not be operational, we put plenty of them in. The idea is that we sprinkle more than a hundred - we sprinkle hundreds to thousands of nodes - so that you have redundancy. Now you have a very wide bandwidth. If some disappear, no problem, you have another one. That's kind of the mindset. You can imagine getting these things through regulation is not going to be a trivial thing. So initially we are looking at this for rats, monkeys, and other animals. It's going to take a lot of water going to the sea before you can really put this in a human. For humans, we use more standard technologies, which you can do step by step. BSJ: What kind of packaging do you have to put the neural dust into? You mentioned that you have to choose your materials wisely to make sure that it doesn't interact with the brain at all, so what do you have to do to guarantee this? Prof. Rabaey: It all depends on what you're talking about, and how complex the nodes are. Neural dust is fairly simple, and the only thing you really need to have is something that can measure voltage. You need two electrodes that are exposed metal;
the rest of the node is a tiny piezoelectric layer and a transistor that can be built on the same platform. You use the two exposed electrodes to measure voltage. That’s the risk factor. The rest of the dust, the piezoelectric layer and transistor, can just be put in a blob of silicon dioxide, a relatively resistant and inert material. In general the silicon dioxide doesn’t react with any chemical processes. However, a problem arises when something gets past the silicon dioxide covering of the node. Over time, this could destroy the node. However, neural dust encapsulation is easy. For instance, cochlear implants are a little more complicated – they use a titanium box in which everything is encapsulated. Again, you have to be careful where the electrode comes out, which is where you have the weak spots.
We just started a large project with UCSF on these next generation implant devices. We work with Livermore Labs, which do the biocompatibility encapsulation – they have a lot of experience in that space. It is something that you have to build. You have to know what works and what doesn’t work.
BSJ: Is there any reason why the neural dust is expected to stay in its place? Does it have any attractive interactions with cells? Prof. Rabaey: That's a good question. Obviously the key thing we're doing is sending acoustics through the system. In itself, you might say that the piezo material is going to start moving. Now if you compute how much motion there is, you'll find that it's extremely small. However, one thing we have been worried about is that if I transmit a large acoustic wave, could I create electric waves that, in one way or another, start interfering with the operation itself? Would that basically create stimulation? Now stimulation, in itself, is a useful thing to have. What neural dust does right now is to read out of the brain – it looks at a neuron and information comes out. But for a number of applications, you would like to write into the brain. You want to, basically, add some electric current, and you stimulate a neuron to fire. That's exactly the way cochlear implants work: they stimulate the nerves. Deep brain stimulation, for Parkinson's disease, involves long electrodes with which they inject small electric currents. Amazingly, people who have extreme Parkinson's, where they cannot control their limbs, can start writing. Often, when you are doing reading, you don't want to
have unintentional writing in the body. We’ve been looking at that, and we’re convinced that the amount of acoustic power we put in is much smaller than the amount that would lead to stimulation. BSJ: When you are reading these signals from the brain and transmitting them, how do you target the signals you want without interference from other signals, and how do you interpret the data? Prof. Rabaey: The beauty of the brain is that it’s an extremely plastic environment – it’s a platform that can be configured, and reconfigured. It’s not a fixed computational system. If you look at a lot of the BMI systems, you want to control prosthetic limbs. To do that, first, you have to map the brain. Every human is a little different, so where exactly the function lies depends on the size of the brain, and other factors. Using imaging techniques, you can figure out where the auditory controls, motor controls, and other functions are. You’re not going to randomly choose. However, you don’t have to be too precise. You basically have a specific neuron, and you get signals, which you feed into a controller. The device takes in the inputs, and conducts computation and filtering. That translates into signals that go into the prosthetic device. If that was the whole story, though, this would never work. The first time you tried it, the arm would go left, and right, and all over the place, because that neuron has no clue about what’s happening. Fortunately, you have eyes – feedback. Feedback comes into the game. I have tactile feedback, visual feedback, which gets put back into the system. It then finds its way to that particular neuron. The brain is really densely connected. So you start reprogramming that neuron, and the pathways between various neurons. After a number of trials, it gets better. Then in the end, after hundreds of trials, they get 95% agreement in the experiment. If the brain wasn’t plastic and flexible, you wouldn’t be able to do this. That’s why you don’t have to know exactly what you’re shooting at. BSJ: So how do you convert these signals that you’re getting? Inherently, they’re just electric impulses, so how do you convert them into something meaningful? Prof. Rabaey: This is a question about neural codes. How is the information encoded into those signals? So you look in any single neuron, and you put an electrode to measure voltage in the neighborhood of the neuron. Firstly, you have to
measure the neuron itself. The neuron basically accumulates many impulses that build up, reach a threshold, and then it fires. That propagates down the axon and connects to all the neurons that are involved in the connection. That's the electric field you're measuring, from which you'll get a voltage signal. Most of the time you may measure two or three neurons, not just one. However, they all look different - some of them are further away, or closer, and the shape of the pulse is different. There's also other information there, because there are thousands of neurons in this neighborhood that are all firing back and forth. So you can imagine that somewhere, you'll have a combination of those signals. Now, you would expect to get a whole bunch of noise. But in reality, neurons are connected. One neuron fires, then the next one fires, then the next one. You'll get phase coherence between different neurons, from which you get a waveform. This is where all the EEG signals come from. All the alpha, beta, and gamma waves are the result of many neurons acting in synchronization. That's the other information you get, which you can measure.
After this, it depends what you want to do. You can filter it, or you can process it. Most of the BMI folks that work on prosthetics so far have been using spikes. When you find the spikes, you can figure out how often the neuron fires, on average. That's the key information that most BMIs use. If I use EEG, you don't have spikes. You look at the low-frequency waveforms, which I described, and you look at them in the frequency domain. You may have delta waves, which are very low frequency, along with alpha, beta, lower gamma, and upper gamma waves. If you divide these into frequency bands and do spectral analysis, you'll observe not the energy itself, but how it changes. If I am active, I am going to see the energy shifted to gamma waves. If I am not active, the energy shift will be seen at lower frequencies. So this sort of spectral information can be used for BMI as well. People have been using this for doing speech synthesis, among other things. But in the end, what you get is some metric, be it energy change or spike rate. And that's the input to my model. I try to build a model, an adaptive model or a stochastic model, that learns and adjusts its parameters in a given situation to give the best possible response. So that's a lot of signal processing, machine learning - all these things come into the game.
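As a rough illustration of the band-power idea described here (a reader's sketch, not the actual decoding pipeline used in any of the labs mentioned), the snippet below splits a field-potential-like signal into conventional frequency bands and compares the power in each band between a "rest" window and an "active" window. The sampling rate, band edges, and toy signal are assumed values.

```python
# Minimal band-power feature extraction, mirroring the spectral-analysis idea above.
import numpy as np
from scipy.signal import welch

fs = 1000.0  # assumed sampling rate in Hz
bands = {"delta": (1, 4), "alpha": (8, 13), "beta": (13, 30),
         "low gamma": (30, 70), "high gamma": (70, 200)}  # conventional band edges

def band_powers(segment, fs=fs):
    """Average spectral power in each band for one window of signal."""
    f, pxx = welch(segment, fs=fs, nperseg=256)
    return {name: pxx[(f >= lo) & (f < hi)].mean() for name, (lo, hi) in bands.items()}

# Toy signal: noise everywhere, plus a gamma-band burst in the second half ("active").
t = np.arange(0, 2.0, 1 / fs)
x = np.random.randn(t.size)
x[t >= 1.0] += 2 * np.sin(2 * np.pi * 60 * t[t >= 1.0])

rest, active = band_powers(x[t < 1.0]), band_powers(x[t >= 1.0])
for name in bands:
    print(f"{name:>10}: rest {rest[name]:.3f} -> active {active[name]:.3f}")
```

In this toy case the change shows up in the gamma band, which is the kind of shift a BMI would treat as a feature rather than using the raw energy itself.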
BSJ: What are some near-future applications for neural dust? Prof. Rabaey: We just got a large grant from UCSF that's looking at neuro-psychiatric diseases. There are many soldiers coming back from Afghanistan, and a lot of them have neural conditions such as depression, stress, and post-traumatic stress disorder. It's a big fraction of our society, so can I learn about why these things happen, and what causes depression? Can I do something about it? Right now it's drugs - you basically have overdoses of some chemicals that are being created, and you put in a drug which suppresses them. But maybe I can learn. If it turns out these are neurons that start firing in an open loop, I can stimulate that region. For instance, I have a discussion going on right now with a cardiologist in Poland. He came here with a group of people, so we heard his presentation. There is a beautiful application in this field, too. People who have coronary disease basically have clogging of the arteries. They put in a stent, which is a flexible device that keeps the artery open. The blood has to come in and out. And what typically happens with someone with clogging is that the whole tissue, the whole cellular membrane, becomes stiff. If it doesn't move anymore, the arteries will get clogged up. Then, when you put in the stent, you hope that the artery will start recovering and become flexible again. So, the new idea we had is that the acoustic powering and the neural dust integration would be really beautiful if the node could actually be a pressure sensor instead. The pressure sensor is basically something that's flexible, that can measure strain and stress. Then, since it is piezoelectric anyhow, I can interrogate it again with my acoustic wave from the outside and basically track whether things are getting better, how fast they evolve, and things like that. So the idea of monitoring devices is possible. The peripheral nervous system is something we haven't talked about - there are a lot of signals there which you can tap into. I might be able to drive prosthetic devices as well. So there is a set of applications that seems to be coming. It's all very interesting, but obviously when you have a small group, you have to focus your research; you cannot just say, well, I am going to take everything on. If you want to get some results, you have to focus on specific things and topics and say this is what I'm going to do first, and I'll see what's happening later.
BSJ: Where do you see this research taking you in 5-10 years? There are a lot of neuro-inspired applications that BMI has. What are a few things we can look forward to in the news in upcoming years? Prof. Rabaey: There are many interesting directions we could go in, right? The thing I am interested in, on one side, is part of this whole process of learning about how the brain works. The Obama initiative is about mapping the brain, understanding brain functionality, and so on and so forth. It's cool, because the brain is a decent machine. It has 20 Watts of power - that's not a lot, it's about as much as a little light bulb - and that's the total power it takes on average, and it does a lot of good computation. We're darn good as humans at doing things like multitasking, taking time, and doing pattern recognition. It's really amazing that we do all of this in such a small brain. Now we say, could we learn from the brain to build better computers? That's one thinking process. So Moore's Law is slowing down a little bit. The question is, "What's coming up after that, if silicon-based computing basically plateaus out?" Could you not learn from the brain to build computers that are a little bit different, that are good at certain tasks like pattern recognition, synthesis, ordering, decision making? These are processes that computers are not very good at right now. So you start looking at the brain with this perspective. Why does the brain work the way it does? 1. It's not a digital machine. The brain doesn't work on 1s and 0s (it uses analog coding). 2. It has plenty of concurrency and parallelism. It's a giant parallel machine. Only certain fractions work at any point in time. If everything was firing all the time, your head would explode. We would explode. From an energetic and thermal perspective, you would be unable to maintain it. 3. It's very redundant. If I kill one neuron, take one neuron away, functionality wouldn't be impacted in a major way, even though that neuron would be trained for a very specific task. For example, you have these grandmother neurons or Mona Lisa neurons, named after the concept of seeing the Mona Lisa and having this one neuron fire like crazy. Now if I take that Mona Lisa neuron away, I would still recognize the Mona Lisa, but there might be a bit more inference involved in the process of recognizing the painting. So, there are several good properties of the brain that can be used to build better computers - computers that are energy efficient, that can be built into your cell phone, and that can help you have more precise and exact functionality.
The other way I am looking at this is that neural diseases are huge in humans. I already mentioned spinal cord injury, stroke, epilepsy, stress, depression, and a whole range of neuropsychiatric diseases. The impact of this slew of diseases attacking humanity is huge. Looking at technology and how you can address some of these illnesses or resolve them, or at least aid them, is a very noble goal. At the same time - now, wait a second - once I indeed have a connection into the brain, could I do a lot more with this? It's not purely trying to address neural disease; could I also use it for the brain-machine interface's controlling function? Basically, I could have a closer link to the cyber world - a more closely linked, high-bandwidth channel between the two. This is an interesting question, though obviously we're far away from this. I can put an EEG helmet on and say go left, go right. You can do about 3-4 things. It's good for one day and then it's really boring, so that's not really efficient. You can imagine that as technology evolves, the purpose will change. This is one part of what you can call "human augmentation". Now, this is a very dangerous topic that people don't want to speculate about. Augmentation from an ethical standpoint has a ton of questions. But you can imagine that there are many other ways of augmenting our body. So we turn to wearable devices: you have a watch, you have bangles, dongles, and all kinds of electronics that people can wear. That could be interesting. If I start putting those things into a network, I can start building what I call the "Human Intranet" - a network that parallels the network that's inside your body. Inside your body you have your nervous system, which is a data and information network, and you have your arterial network, which is basically energy provision and nutrients. So if something gets attacked, could I not replace it, but complement it with a network that is sitting on my skin, outside my body? You can imagine that I have a set of
“If I start putting those things into a network, I can start building what I call the “Human Intranet” - a network that parallels the network that’s inside your body.”
39 • Berkeley Scientific Journal • Synthetics • Spring 2014 • Volume 18 • Issue 2
B S J
sensors on my brain and inside my brain that act as the control function, and I could use this, for instance, to drive an exoskeleton. Once I have an exoskeleton, I want to run faster, or for example I want to drive my bicycle or car. So, obviously those networks have to have sensors and energy. That's why the placement of the arterial networks is so important as well. How do you get the energy to power these sensors? Well, you need a network of energy distribution. It could be wireless, it could be acoustic, or it could be infrared. So this whole mindset involves thinking about how to evolve this whole wearable world. If you start thinking about having that link to the brain as well, it's really intriguing. Now, is this 5-10 years away? Probably not. But it's good to have the thought that somewhere this might be possible. Then you can start questioning: is this something I want, is this something that is acceptable, is this something that's safe? A lot of discussion these days is also about the fact that if it's wireless, people can hack into it. Suddenly you have security issues. You have privacy issues. If someone can start reading your brain activity, can he start snooping on you? Can you imagine the NSA in your head? That's pretty scary, right? So we have to start thinking about this, and start thinking, maybe I should put security in this wireless device or I should add some encryption. There are many different angles and we don't know where it's going to go. That's the nice thing about research - you speculate. And you go forward, and at the same time you see all these possibilities which you can explore. BSJ: BSJ would like to thank you for your time. Layout by: Lucy Zhang
Copper Catalyzed Oceanic Methyl Halide Production Jae Yun Robin Kim, Robert Rhew.
Department of Geology, University of California, Berkeley Keywords: San Francisco Bay, ocean-atmosphere exchange, photochemical reaction, oxidation reaction, metals
Abstract
Methyl halides are found in all of Earth’s biomes, produced naturally or through manmade means. Their presence in the atmosphere is problematic, as they catalyze depletion of stratospheric ozone. To understand the full environmental impact of these compounds, it is important to identify their chemical cycling processes. Iron increases methyl halide production in soils and oceans, yet copper’s influence remains unknown despite its similar chemical oxidation properties to iron. I experimentally tested the effect of copper sulfate and sunlight on methyl halide fluxes in San Francisco Bay seawater. Samples exposed to copper sulfate and sunlight averaged higher positive flux rates than other treatments. Copper sulfate also increased carbon dioxide production and acidification of the water. The interaction of copper sulfate and sunlight in seawater suggests a new mechanism for methyl halide production, most likely via a photochemical reaction or through suppression of normal uptake processes causing overabundant concentrations.
Introduction
Concerns over human and environmental health have increased over the past few decades as depleting ozone layers allow greater amounts of ultraviolet radiation to penetrate the Earth's atmosphere. Increased ultraviolet radiation is linked to growing rates of skin cancer and an increased frequency of ocular disease in humans, disruption of natural processes in terrestrial and oceanic environments, and involvement in global warming events (1). Methyl halides, comprising methyl bromide, methyl chloride, and methyl iodide, act as catalysts for stratospheric ozone depletion and contribute to these concerns (2). Measurements of overall global production of methyl halides differ from quantified global uptake of these compounds (2). For example, methyl bromide's known sinks outweigh its known sources by approximately 35 Gg yr-1, and more methyl chloride uptake than production is reported (2). Without a full understanding of the various environmental pathways involved in the methyl halide lifecycle, these inconsistencies in overall quantification create difficulties for methyl halide regulation, as scientists and legislators are unable to specify target reduction areas. Methyl halides occur naturally in the environment, but anthropogenic production has contributed to growing levels of methyl halides in the atmosphere (2). Methyl halides exist in all earth systems, including terrestrial and oceanic environments, and can be produced in both abiotic and biotic settings (3). Methyl halide fluxes have been measured worldwide in various ecosystems such as grasslands, oceans, salt marshes, and
the Arctic tundra (4, 5, 6, 7). Additionally, anthropogenic production and use of methyl halides in industry and agriculture have contributed to their global budget and environmental impact (8). Until the Montreal Protocol banned methyl bromide for its detrimental impact on the ozone layer, it served as an important agricultural fumigant used to grow strawberries, tomatoes, and peppers (9, 10). Continued identification of both known and unknown anthropogenic sources of methyl halides would create opportunities for additional mitigation. A newly identified terrestrial abiotic mechanism for methyl halide production involving iron and organic matter in soils suggests that new production processes remain to be quantified (11). This particular abiotic mechanism has only been tested in the terrestrial environment, but there is potential for its application in other systems. Aside from the many ecosystems in the terrestrial environment where methyl halides can be found, the marine environment plays a major role in the methyl halide lifecycle, acting as both a methyl bromide sink and a methyl chloride source (2). Iron's interaction within the marine environment has already been accounted for (12, 13). Copper, however, which has similar oxidation properties to iron and can participate in these reactions, remains to be examined. Many anthropogenic pathways for copper discharge to the oceans exist, with copper leaching from boat antifouling paint and aquaculture nets, and copper residue from automobile brake pads in urban runoff, being the primary routes (14, 15, 16). Studying the interaction of copper and oceanic organic matter could reveal an unaccounted-for, primarily anthropogenically driven methyl halide source.
The objective of this experiment is to determine if there is a quantifiable change in oceanic methyl halide production when copper is added to seawater samples. By incubating samples of seawater with copper sulfate, I will determine if exposure to copper sulfate changes the flux of methyl halide species, indicating either a production or a loss of these compounds from the water.
Methods
Study area and sample collection
To assess the reaction between copper and seawater, I collected half-liter seawater samples using half-gallon Ball brand glass mason jars from the Cal Sailing Club dock at the Berkeley Marina. I collected a total of twenty samples of San Francisco Bay water. Samples were stored in the lab refrigerator immediately after collection and after sample testing. Prior to sample collection, I washed all mason jars with deionized water, ethanol, and acetone to ensure the purity of the jars. I designated ten jars of water as controls for the experiment and left them as pure seawater samples without any copper sulfate. An additional ten jars were used as experimental samples, to which I added close to 0.3 grams of copper(II) sulfate reagent (Sigma-Aldrich, St. Louis, MO). The amount of copper sulfate added to the samples is greater than reported measurements of copper leaching into the oceans; however, the increased concentration was necessary to test the influence of copper addition on methyl halide production in the laboratory.
Sample testing
To observe the reaction, I exposed each sample to its specified treatment parameters and incubated samples at approximately thirty-minute intervals. The twenty samples were separated into four different testing regimes that examined the influence of copper sulfate and sunlight on methyl halide production. Samples in treatments 1 and 2 did not have copper sulfate added, but treatment 1 samples were exposed to sunlight while treatment 2 samples were not. Treatment 3 and 4 samples had copper sulfate, but only treatment 3 samples were exposed to sunlight. Each sample was first aired out on the balcony of McCone Hall, either in shade or direct sunlight, for at least 1.5 hours. After this time period the sample was capped, sealed off from external air, and attached to the inlet system of the GC/MS (Agilent Technologies 6890N Network GC, Agilent 5973 inert mass selective detector). I injected 70 torr, or about 30 milliliters, of the headspace gas from each sample into the GC/MS over three time periods to obtain concentration measurements over time for each sample. The sample was constantly stirred with a magnetic Teflon-coated stir bar during testing to ensure adequate gas exchange between the water and
air. After each sample injection, I added 10 milliliters of ambient air to the headspace of the sample jar to prevent a vacuum from forming. I tested the samples at lab room temperature, approximately 20-23 degrees Celsius, and at atmospheric pressure. Because methyl halides exist in very scarce (partper-trillion) levels in the atmosphere, the gas samples required a pre-concentration process before injection into the GC/MS. I pre-concentrated gases by capturing target compounds onto two cooled “U” shaped stainless steel traps prior to injection. The first trap was cooled with an ethanol-liquid nitrogen mixture at about -70 degrees Celsius and the second trap was cooled with pure liquid nitrogen. To ensure clean concentration readouts, the GC/MS inlet line had an Ascarite trap to capture carbon dioxide and a magnesium perchlorate trap to prevent water vapor from entering the machine. To account for climate variables on methyl halide concentrations, I noted the salinity and pH of all samples immediately after collection. For samples with copper sulfate added, these variables were measured again after addition of copper sulfate. Measuring the pH and salinity is important because copper availability in water is known to be highly dependent upon environmental conditions (17), and these variables could influence the outcome of the reaction I hope to measure. Using Oakton Instruments Waterproof SaltTestr probe and Waterproof pH Testr 3+ double junction probe (Oakton Instruments, Vernon Hill, IL, USA), I measured the pH and salinity of samples at room temperature, about 20 to 23 degrees Celsius. Additionally, upon suspecting that large concentrations of carbon dioxide were emitted after copper sulfate addition, I measured the carbon dioxide concentration of samples using a LiCor water vapor and carbon dioxide analyzer (LI-840A CO2 /H2O Analyzer).
Data analysis To analyze the GC/MS readouts, I compared the data from all seawater samples against a calibration curve created from a lab ambient air standard with known amounts of methyl halide concentration. The calibration curve served as the baseline for methyl halide concentration comparison. I performed data analysis on Microsoft’s Excel program and a coding system developed by Robert Rhew. During this process, I eliminated bad runs and made labeling adjustments to the remaining data. I calculated the concentration in part-per-trillion values for methyl chloride and methyl bromide from the area under the curve from the GC/MS readouts. I separately compared each sample’s methyl chloride and methyl bromide concentrations with those of the calibration curve standard to determine if there was a major difference in concentration for either compound. Additionally, I calculated concentrations for methyl chloride and methyl
methyl bromide using their two major isotopologues: methyl chloride 50 and 52, and methyl bromide 94 and 96. Comparing the isotopologue concentrations allowed me to cross-reference my values and ensure that the correct concentration of each methyl halide was obtained. Finally, to determine whether net production of methyl halides occurred in the water, I calculated methyl chloride and methyl bromide fluxes (nmol m⁻² s⁻¹) for all samples from the concentration-over-time values. A positive flux indicates a flow of methyl halides from the water to the air, representing production in the water; a negative flux describes a flow of methyl halides from the air into the water.
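To illustrate the flux calculation described above, the following is a minimal Python sketch, not the study's actual code (which was a system developed by Robert Rhew). It fits the headspace mixing ratio against time, converts the slope to a molar rate with the ideal gas law, and normalizes by the water surface area; the jar headspace volume, water surface area, and example time series are assumed values for illustration only.

```python
import numpy as np

R = 8.314              # J mol-1 K-1, ideal gas constant
T = 295.0              # K, roughly 22 degrees C lab temperature
P = 101325.0           # Pa, atmospheric pressure
V_HEADSPACE = 0.9e-3   # m3, assumed headspace volume of the jar
A_WATER = 0.010        # m2, assumed water surface area inside the jar

def headspace_flux(times_s, mixing_ratio_ppt):
    """Return flux in nmol m-2 s-1 from a jar headspace time series.

    times_s          -- injection times in seconds
    mixing_ratio_ppt -- CH3Cl or CH3Br mixing ratios in parts per trillion
    """
    # slope of mixing ratio vs time (ppt per second) from a linear fit
    slope_ppt_per_s, _ = np.polyfit(times_s, mixing_ratio_ppt, 1)
    # total moles of air in the headspace (ideal gas law, n = PV/RT)
    n_air = P * V_HEADSPACE / (R * T)
    # ppt -> mole fraction is 1e-12; convert mol to nmol with 1e9
    rate_nmol_per_s = slope_ppt_per_s * 1e-12 * n_air * 1e9
    # positive flux = net movement of methyl halide from water to air
    return rate_nmol_per_s / A_WATER

# Example: three injections over roughly one hour with a rising CH3Cl signal
print(headspace_flux([0, 1800, 3600], [620.0, 655.0, 693.0]))
```

With these assumed jar dimensions, a rise of a few tens of ppt over an hour corresponds to a flux on the order of 10⁻⁴ nmol m⁻² s⁻¹, the same order as the fluxes reported below.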
Results
Salinity measurements for pure seawater and for seawater with copper sulfate had similar values; however, samples containing copper sulfate were at least 100 times more acidic. Salinity measurements ranged from 7.87 to 8.00 parts-per-thousand for all samples, regardless of the addition of copper sulfate (Fig. 1). Average sample pH measurements showed a wider spread: the average pH for pure seawater samples ranged from 7.98 to 8.11, while the average for copper sulfate seawater samples ranged from 5.11 to 5.96 (Fig. 2).
Figure 2. Average pH measurements comparing samples with and without copper sulfate addition. Samples without copper sulfate show average oceanic pH levels of 8, while samples with copper sulfate added have acidic pH values of around 6. Error bars indicate standard error. N=10 per treatment.
Figure 1. Average salinity measurements comparing samples with and without copper sulfate addition. Samples do not show a marked difference in salinity, measured in parts per thousand. Error bars indicate standard error. N=10 per treatment.
Figure 3. Carbon dioxide concentration comparison between samples with and without copper sulfate. The concentration for samples without copper sulfate remains steady over time, whereas samples with copper sulfate show production of carbon dioxide over time.
In addition, I measured higher emission levels of carbon dioxide from seawater samples with copper sulfate than from those without. Carbon dioxide concentrations measured with the LiCor analyzer held steady between 400 and 500 parts-per-million over the course of an hour for pure seawater samples. However, the addition of copper sulfate caused the carbon dioxide concentration to increase immediately and continue rising over time (Fig. 3). Variable average flux measurements for methyl chloride and methyl bromide among the four treatments signified a difference in methyl halide production contingent upon treatment parameters. Treatment 3 samples, which were exposed to copper sulfate and sunlight, showed the greatest positive average flux for both methyl chloride and methyl bromide (Fig. 4). Flux rates for the two isotopologues of each compound were found to agree, which supports the internal consistency of the data. Average flux measurements for methyl chloride and methyl bromide were greater for samples with copper sulfate added than for pure seawater samples. Samples exposed to sunlight showed greater standard deviations in methyl halide flux than samples aired in the shade.
Discussion
Methyl halide production is contingent upon the presence of both copper and sunlight; copper addition alone does not foster greater than normal levels of methyl bromide and methyl chloride production in samples. Although the exact mechanism explaining this reaction remains unknown, possible options include photochemistry, a biological reaction, or prevention of oceanic consumption and uptake. In addition to increased levels of methyl chloride and methyl bromide, carbon dioxide was produced as a byproduct of copper sulfate addition, lowering pH levels to acidic values. Salinity, however, remained constant among samples with and without copper sulfate. Future research studying the effects of organic matter in seawater, or of levels of sunlight exposure prior to sample measurement, could aid in identifying the mechanism driving the production of methyl halides in copper-amended seawater samples.
Figure 4. Average methyl chloride and methyl bromide isotope flux measurements for the testing treatments. Average flux measurements for (a) MeCl (50), (b) MeCl (52), (c) MeBr (94), and (d) MeBr (96) are reported for the various treatments. Treatments 3 (N=4) and 4 (N=4) had copper sulfate added, while treatments 1 (N=2) and 2 (N=5) did not. Treatments 1 and 3 were exposed to sunlight; treatments 2 and 4 were incubated in the shade. Error bars indicate one standard deviation.
Influence of copper on methyl halide production
The average flux measurements for methyl chloride and methyl bromide were highest for samples exposed to copper sulfate and sunlight, while samples undergoing other treatments displayed smaller fluxes. Comparing measurements for all samples with copper sulfate added showed that, without exposure to sunlight, the flux measurements of methyl bromide and methyl chloride were lower than measurements for samples containing copper sulfate and exposed to sunlight. This implies that copper sulfate addition alone does not enhance the production of methyl halides in the water, and that exposure to sunlight acts as a major catalyst for the reaction to move forward. The standard deviations for all treatment flux measurements coincide with those reported in the literature. For treatments 1, 2, and 4, the standard deviation for flux measurements was on the order of 10⁻⁴ nmol m⁻² s⁻¹ (4, 18). Although the standard deviation for treatment 3 samples was quite large, this amount of variation is common for samples exposed to sunlight (19), and the results indicate a definite positive flux of methyl halides from the water to the air. Possible drivers of this reaction are based on previously identified mechanisms for iron-catalyzed methyl halide production.
Photochemical reaction
A likely driver of the reaction is a photochemical pathway involving sunlight and halides in seawater. Concentrations of methyl chloride and methyl bromide increased with longer exposure time to sunlight, supporting a possible photochemical production pathway. Richter and Wallace (2004) identified a purely photochemical production pathway for methyl iodide in the Atlantic Ocean. Moore (2008) also reported that dissolved organic matter reacting with chlorine and sunlight generated methyl chloride in seawater. However, Moore (2008) reports that pH levels above 7.7 were more conducive to methyl chloride production than lower pH levels. Copper addition was shown to decrease the pH of samples from about 8 to between 5 and 6, but contrary to Moore's conclusions, the decreased pH did not seem to deter methyl halide production. Rather, the copper worked in conjunction with sunlight to increase the rate of methyl halide flux from the water. Considering the greater methyl chloride and methyl bromide flux measurements in samples with copper sulfate and sunlight, the implication of sunlight as a driver for the reaction is highly plausible.
Biological reaction
A biologically driven reaction could also enhance methyl halide production in the oceans. Richter and Wallace (2004) compared unfiltered seawater samples to filtered seawater samples and found that unfiltered samples showed greater amounts of methyl iodide production. Unfiltered seawater differs from filtered seawater by retaining biological organisms during the testing process. I did not filter my samples, which leaves open the possibility of a biological pathway. However, copper is toxic to many organisms at high levels of exposure (22, 23), and adding copper sulfate should have removed biological organisms from the samples. As I only saw greater methyl halide fluxes in those samples with copper added, a biological reaction seems unlikely for this experiment.
Uptake suppression
Rather than producing new methyl halides, copper and sunlight could have acted to prevent the uptake of methyl halides in the water. Reducing uptake would create a build-up of concentration in the water, leading to removal of excess methyl halides into the air. Wingenter et al. (2004) attributed increased oceanic methyl bromide in iron-fertilized patches to this suppression mechanism. In the ocean, methyl halides are produced by bacteria (24) and broken down by microorganisms (25). Adding copper sulfate could have removed these organisms and stopped this chemical cycle, causing a build-up of methyl halides in the water.
Byproducts of copper and seawater interaction
Salinity and pH are two important climate factors for studying reactions in the marine environment. Changes in ocean pH levels can affect the atmospheric-oceanic flux of compounds (26) and can detrimentally alter the toxicity of metals (21). Increasing salinity levels can alter the reactivity of chemicals in seawater and affect the hydrophobicity of trace metals such as copper (27, 28). Salinity measurements from the samples showed no significant difference between pure seawater samples and those with copper sulfate added. Turner and Mawji (2005) found that increased salinity can decrease the hydrophobicity of copper in estuary waters. Although shifts in salinity may affect how copper reacts in the water, I found that adding copper sulfate to seawater does not considerably change the salinity. The pH measurements between sample treatments, however, changed drastically. Immediately after copper sulfate was added to seawater, carbon dioxide concentrations increased in those samples, indicating that a reaction between the copper sulfate and seawater must be occurring. This conclusion is supported by the acidic pH values measured from samples containing copper sulfate, suggesting that slight acidification of the seawater occurs as carbon dioxide is produced (21, 29).
Limitations
Alternate methods for sample testing could be implemented to confirm the reliability of the data. Working within the limitations of the equipment in the laboratory, I relied on the assumption that constant disturbance of the water by a stir bar would create a state of equilibrium between the gases in the air and water, allowing methyl halide concentrations in the water to be inferred from the headspace. A more precise method for analyzing compounds in water is to strip gases from the water sample itself (30, 31). Direct measurements of methyl halides using the stripping method may provide more precise measurements than the present method, but the equilibrium assumption was sufficient to test the general hypotheses of the study. Despite these limitations, the results still show significant findings that give rise to questions that may be studied in future experiments. The relatively small sample size prevents a direct application of these findings to the general study system. A sample size of at least thirty samples would lead to greater confidence in the results and lend itself to statistical analysis. However, the standard deviations of the flux measurements for treatments 1, 2, and 4 were on the order of 10⁻⁴ nmol m⁻² s⁻¹, which is within the limits reported in the literature (4, 18). The standard deviation for treatment 3 samples was quite large; however, this amount of variation is common for samples exposed to sunlight (19). Although the sample size is limiting, the small variation in flux measurements for both methyl chloride and methyl bromide speaks to the quality of the study results as a first indication of a possible production pathway.
Future directions
The influence of sunlight on methyl halide production was not recognized until the latter half of the experimentation period. Future projects could clarify the role of sunlight in the mechanism by varying the intensity of radiation and the length of exposure to sunlight. Comparing radiation exposure levels to measured methyl halide fluxes could determine a possible correlation between the two variables; if flux measurements increase with increasing radiation levels, the influence of sunlight on the mechanism would be demonstrated.
Conclusions
Addition of copper sulfate to seawater samples induces a positive methyl halide flux from the water to the air, indicating possible production of methyl halides from the reaction with copper sulfate. However, copper sulfate alone does not drive this reaction forward, and environmental factors such as sunlight play a major role. Future experimentation could determine whether this positive flux is indeed due to new production, similar to iron's role in methyl halide production, or whether it is caused by a disruption of normal methyl halide uptake processes. More importantly, these results show that not all methyl halide chemical cycling in the environment is accounted for, and future studies should not only test new study systems but also reexamine known production pathways with different study parameters to ensure that all possibilities are explored.
References
1. United Nations Environment Programme, Environmental Effects Assessment Panel, Environmental effects of ozone depletion and its interactions with climate change: progress report, 2011, Photochemical & Photobiological Sciences, 11, 13-27, 2011. 2. Montzka, S. A., and Reimann, S., Ozone‐Depleting Substances (ODSs) and Related Chemicals, Pages 1-108 in A. R. Ravishankara et al., editors, Scientific Assessment of Ozone Depletion 2010, WMO, Geneva, 2011. 3. Redeker, K. R., and Kalin, R. M., Methyl chloride isotopic signatures from Irish forest soils and a comparison between abiotic and biogenic methyl halide soil fluxes, Global Change Biology, 18, 1453-1467, 2012.
4. Rhew, R. C., Sources and sinks of methyl bromide and methyl chloride in the tallgrass prairie: applying a stable isotope tracer technique over highly variable gross fluxes, Journal of Geophysical Research Biogeosciences, 116, G03026, 2011. 5. Li, H. J., Yokouchi, Y., and Akimoto, H., Measurement of methyl halides in the marine atmosphere, Atmospheric Environment, 33, 1881-1887, 1999. 6. Rhew, R. C., Miller, B. R., and Weiss, R. F., Natural methyl bromide and methyl chloride emissions from coastal salt marshes, Nature, 403, 292-295, 2000. 7. Teh, Y.A., Mazéas, O., Atwood, A., Abel, T., and Rhew, R.C., Hydrologic regulation of methyl chloride and methyl bromide fluxes in Alaskan Arctic tundra, Global Change Biology, 15, 330-345, 2009. 8. United Nations Environment Programme, Report of the Fourth Meeting of the Parties to the Montreal Protocol on Substances that Deplete the Ozone Layer, Copenhagen, 1992. 9. Schneider, S. M., Rosskopf, E. N., Leesch, J. G., Chellemi, D. O., Bull, C. T., and Mazzola, M., United States Department of Agriculture – Agricultural Research Service research on alternatives to methyl bromide: pre-plant and post-harvest, Pest Management Science, 59, 814-826, 2003. 10. Mayfield, E. N., and Norman, C. S., Moving away from methyl bromide: political economy of pesticide transition for California strawberries since 2004, Journal of Environmental Management, 106, 93-101, 2012. 11. Keppler, F., Eiden, R., Niedan,V., Pracht, J., and Schoeler, H. F., Halocarbons produced by natural oxidation processes during degradation of organic matter, Nature, 403, 298-301, 2000. 12. Moore, R. M., and Wang, L., The influence of iron fertilization on the fluxes of methyl halides and isoprene from ocean to atmosphere in the SERIES experiment, Deep-Sea Research II, 53, 2398-2409, 2006. 13. Wingenter, O. W., Haase, K. B., Strutton, P., Friederich, G., Meinardi, S., Blake, D. R., and Rowland, F. S., Changing concentrations of CO, CH4, C5H8, CH3Br, CH3I, and dimethyl sulfide during the Southern Ocean Iron Enrichment Experiments, PNAS, 101, 8537-8541, 2004. 14. Davis, A. P., Shokouhian, M., and Ni, S., Loading estimates of lead, copper, cadmium, and zinc in urban runoff from specific sources, Chemosphere, 44, 997-1009, 2001. 15. Schiff, K., Diehl, D., and Valkirs, A., Copper emissions from antifouling paint on recreational vessels, Marine Pollution Bulletin, 48, 371-377, 2004. 16. Tsukrov, I., Drach, A., DeCew, J., Swift, M., and Celikkol, B., Characterization of geometry and normal drag coefficients of copper nets, Ocean Engineering, 38, 1979-1988, 2011. 17. Boyd, T.J., Wolgast, D.M., Rivera-Duarte, I., Holm-Hansen, O., Hewes, C.D., Zirino, A., and Chadwick, D.B., Effects of dissolved and complexed copper on heterotrophic bacterial production in San Diego Bay, Microbial Ecology, 49, 353–366, 2005. 18. Rhew, R. C., Aydin, M., and Saltzman, E. S., Measuring terrestrial fluxes of methyl chloride and methyl bromide using a stable isotope tracer technique, Geophysical Research Letters, 30, 2103, 2003. 19. Richter, U., and Wallace, D. W. R., Production of methyl iodide in the tropical Atlantic Ocean, Geophysical Research Letters, 31, L23S03, 2004. 20. Richter, U., and Wallace, D. W. R., Production of methyl iodide in the tropical Atlantic Ocean, Geophysical Research Letters, 31, L23S03, 2004. 21. Moore, R. M., A photochemical source of methyl chloride in saline waters, Environmental Science Technology, 42, 1933-1937, 2008. 22. Pascal, P. Y., Fleeger, J. W., Galvez, F., and Carman, K. 
R., The toxicological interaction between ocean acidity and metals in coastal meiobenthic copepods, Marine Pollution Bulletin, 60, 2201-2208, 2010. 23. Santore, R. C., Di Toro, D. M., Paquin, P. R., Allen, H. E. and Meyer, J. S., Biotic
ligand model of the acute toxicity of metals. 2. Application to acute copper toxicity in freshwater fish and Daphnia, Environmental Toxicology and Chemistry, 20, 2397-2402, 2001. 24. Brownell, D. K., Moore, R. M., and Cullen, J. J., Production of methyl halides by Prochlorococcus and Synechococcus, Global Biogeochemical Cycles, 24, GB2002, 2010. 25. Cox, M. J., Schäfer, H., Nightingale, P. D., McDonald, I. R., and Murrell, J. C., Diversity of methyl halide-degrading microorganisms in oceanic coastal waters, FEMS Microbiology Letters, 334, 111-118, 2012. 26. Mari, X., Does ocean acidification induce an upward flux of marine aggregates?, Biogeosciences, 5, 1023-1031, 2008. 27. Turner, A., Diagnosis of chemical reactivity and pollution sources from particulate trace metal distributions in estuaries, Estuarine, Coastal and Shelf Science, 48, 177-191, 1998.
28. Turner, A., and Mawji, E., Hydrophobicity and reactivity of trace metals in the low-salinity zone of a turbid estuary, Limnology and Oceanography, 50, 1011-1019, 2005. 29. Bates, N.R., Best, M. H. P., Neely, K., Garley, R., Dickson, A. G., and Johnson, R. J., Detecting anthropogenic carbon dioxide uptake and ocean acidification in the North Atlantic Ocean, Biogeosciences, 9, 2509-2522, 2012. 30. Moore, R. M., W. Groszko, and S. J. Niven, Ocean-atmosphere exchange of methyl chloride: Results from NW Atlantic and Pacific Ocean studies, Journal of Geophysical Research, 101, 28,529-28,538, 1996. 31. Lu, X. L., Yang, G. P., Song, G. S., and L. Zhang, Distributions and fluxes of methyl chloride and methyl bromide in the East China Sea and the Southern Yellow Sea in autumn, Marine Chemistry, 118, 75–84, 2010.
Acknowledgements
I thank my thesis mentor, Robert Rhew, for his invaluable guidance and support. Thanks to Mary Whelan, Patina Mendez, Gabriella Vozza, Jeffrey Wong, Michelle Chang, Emily Gilson, Toby Walpert, and Jenny Tang.
Phylogenetic diversity and endemism:
metrics for identifying critical regions of conifer conservation in Australia
Annasophie C. Lee and Brent Mishler
Department of Integrative Biology, University of California, Berkeley
Keywords: Australian endemics, biodiversity conservation, phylogeny, Biodiverse, ArcMap
Abstract
Accurately and sufficiently quantifying biodiversity is integral for conservation. Traditional metrics for measuring biodiversity, species richness (SR) and weighted endemism (WE), do not take into account the evolutionary history of organisms. Phylogenetic diversity (PD) addresses the shortcomings of SR by quantifying the evolutionary connections among the species present in an area. Phylogenetic endemism (PE) addresses the shortcomings of WE by incorporating the ranges of the branches of the evolutionary tree connecting the species in an area. Australia, with its advanced digitization of spatial reference data, is at present the best model system for quantitative studies of biodiversity. I created a phylogeny for the 39 indigenous Australian conifer species using matK and rbcL sequences from GenBank and by sequencing species for which no data existed, and I used spatial data from Australia's Virtual Herbarium. More precise estimates of biodiversity such as these can be used by conservation policy-makers.
Introduction
Conserving global biodiversity, the variability among organisms, species, and ecosystems, is a central goal of conservation efforts (1, 2). However, prioritizing critical species or regions for biodiversity conservation is a major challenge for conservation policy-makers from a number of perspectives. Historically, conservation efforts have often focused on conserving either key species or key regions (3). To identify key regions and species for conservation, measures of endemism, which quantify how restricted a species is to a given region, have played a central role. The degree to which species are restricted or widely dispersed is a strong predictor of extinction risk (4). Identifying species at risk of extinction can be based on evolutionary history, geographic location, or a combination of the two. Geographically rare species are at greater risk of extinction (4), and phylogenetically rare species (5) contain disparate genetic information and contribute heavily to biodiversity; it is therefore critical to examine the intersection of these subjects. The quantification of biodiversity has historically been difficult, and current metrics are limited because they do not include an evolutionary perspective. For example, enumeration of species is hindered by the lack of a universally
agreed-upon species concept across researchers, reflecting an arbitrarily decided level of genetic and morphological variation, which leads to inconsistency in taxonomic ranking and hierarchy (6, 7). Additionally, inconsistencies in the identification and discovery of species lead to false classifications that both under- and overestimate biodiversity. More importantly, these issues with naming and identifying species are compounded when traditional biodiversity metrics are calculated without considering the evolutionary relatedness between species and their dispersal from their geographic origins. Species richness (SR), the absolute number of species in a region, was developed to quantify the number of species in a region, and weighted endemism (WE) to quantify their level of endemism (8, 9). However, SR and WE as measures of biodiversity consider only the terminal taxa of a phylogenetic tree, without considering the evolutionary relationships among them (10). Species vary in their evolutionary isolation and genetic diversity, and these differences give insight into how species may have evolved and which are most important for biodiversity conservation (11). SR and WE do not include information about how closely related species are, excluding the relationships between sister groups given by the phylogeny. Consequently, these metrics are limited in their ability to describe biodiversity patterns, as they provide a more surface-level analysis of biodiversity than one that incorporates an evolutionary perspective (10, 11, 12). Diversity measures based on phylogeny, the evolutionary relationships between species, have since been developed to address the shortcomings of descriptors such as species richness and species endemism. Phylogenies are derived from shared homologous characters, or characteristics shared by all the descendants of a common ancestor, which are an indication of recently shared ancestry. Phylogenetic diversity (PD) and phylogenetic endemism (PE) are metrics that provide a more comprehensive view of diversity within and between species (10, 12, 13, 14). PD quantifies the shared evolutionary history of specified taxa (10, 14) and is largely resistant to taxonomic uncertainty, or discrepancies in the identification of species, because it relies on robust hypotheses of evolution derived from the shared homologous characters between species (15). PD has been used to understand global patterns of biodiversity, and it is especially useful when the taxonomy of a clade is poorly understood (13). PE is a measure of the amount of shared evolutionary history among a set of branches on a phylogenetic tree in relation to how widespread those branches are geographically (10). WE is the sum, over the species found in a fixed area, of the inverse of each species' range (9). PE, unlike WE, incorporates the ranges of all the branches of the tree connecting the species, not just the terminal branches (10). This weighted phylogenetic endemism provides a more comprehensive measure of the distribution of rarity than weighted endemism of species alone. PD and PE are more robust to changes in taxonomic classification than SR and WE, and PE analyzes endemism across a consistent spatial scale, regardless of previously defined geographic boundaries (10). These metrics provide the evolutionary and genetic information necessary for making informed conservation policy. Calculating PD and PE requires high-resolution spatial distribution information along with a highly resolved phylogeny. Australia is at present the best model system for this type of study, due to the advanced state of digitization of herbarium voucher specimens and spatial reference data (16, 17). Australia's Virtual Herbarium (AVH) contains millions of records of spatially referenced flora collections from Australia's major herbaria. Additionally, Australia is important for global biodiversity conservation as it is rich with endemic species, a result of its geographic isolation (9, 18). Conifers are also largely confined to either the Northern or Southern hemisphere; in particular, extant species of Araucariaceae, Podocarpaceae, and the Callitroideae (the sister group to Cupressoideae) are southern, and fossil records indicate that these trends have persisted through time (19). Although metrics such as species richness and species endemism have been calculated for many conifer species in Australia (1), calculation of diversity metrics from an evolutionary perspective using PD and PE remains to be accomplished. The main objective of this study was to calculate and visually display diversity metrics that couple phylogenetic and spatial information. I calculated PD and PE to identify the regions of Australia most densely populated with phylogenetically rare conifers and compared these results with Australian natural reserves to identify regions of phylogenetic rarity that are not currently being protected. I hypothesized that evolutionarily distant species would be more geographically distant and that closely related species would be spatially clustered because of shared habitat requirements (20). Additionally, I evaluated the relationships between PD and PE and the traditional metrics SR and WE. I expected PE to be correlated with WE; however, I expected WE to fail to consistently predict areas of high PE (10). These results will prove valuable for informing conservation policy-makers about critical regions of conifer conservation.
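To make the verbal definition of WE above concrete, it can be written as a sum over the species recorded in a grid cell; this is the standard form implied by the description in the text, with the grid-cell notation added here for clarity:

\[
  \mathrm{WE}_g \;=\; \sum_{s \in S_g} \frac{1}{R_s}
\]

where \(S_g\) is the set of species recorded in grid cell \(g\) and \(R_s\) is the range of species \(s\) (the number of grid cells it occupies). PE, defined formally in the Methods (Eq. 2), replaces this per-species sum with a sum over the branches of the phylogeny connecting the species in the cell.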
Methods
Spatial data acquisition
I studied the 39 indigenous species of conifers in Australia (Appendix A, List 1). To obtain specimen locations, I used data from the AVH (http://avh.ala.org.au/), a digital database containing 75% of the 6 million specimens of plants, algae, and fungi that have been collected by herbaria in Australia. I downloaded a total of 12,300 Australian endemic conifer species data points and then used Google Refine version 2.5 (http://code.google.com/p/google-refine/) to clean the dataset, removing non-conifer records, foreign collections (as well as those from Norfolk and Macquarie Islands), and any specimens that were naturalized, grown in a botanic garden, or otherwise not wild-collected. I reconciled the taxonomy against a classification for extant conifers using the Australian Plant Census (APC; http://www.anbg.gov.au/chah/apc/index.html) and corrected any misspellings.
Figure 1. Spatial locations of individual conifer specimens, based on records from the AVH database.
I trimmed records without geographic coordinates from the dataset. I then transformed the latitude and longitude values of the remaining records into x-y meter coordinates using the Albers projection, which corrects for inconsistencies in the grid size of latitude and longitude near the earth's poles. This cleaned dataset contained 7,300 spatial records (Fig. 1).
Molecular data acquisition: Phylogeny
I used two genes, matK and rbcL, to create a phylogeny, using both existing and new sequence data. RbcL is commonly referred to as the "universal barcode" for plants; however, using two genes, matK and rbcL together, is more informative and created a more complete and accurate phylogeny (Quinn et al. 2002). I searched the online database GenBank (http://www.ncbi.nlm.nih.gov/genbank/) using the scientific names of each of the 40 species in my study (39 indigenous species and one outgroup, Ginkgo biloba). I noted which sequences were unavailable in GenBank and saved the accession numbers of the available sequences (Appendix B, Table 1). Once I identified which species were missing, I collected plant tissue for Callitris baileyi, Callitris monticola, Callitris oblonga, Callitris columellaris, Actinostrobus acuminatus, and Microstrobos niphophilus at the Royal Botanic Garden, Sydney. I located the species I needed in the botanic garden and cut a piece of fresh leaf from which to extract DNA. After I cataloged the plant tissue in the Royal Botanic Garden Herbarium's collection, I prepared my tissue samples for DNA extraction by sealing them in a silica-gel-filled box to desiccate them. I then performed DNA extractions using a Qiagen DNeasy kit (Qiagen, Germany, www.qiagen.com) with minor modifications: using one zirconia bead and 5 mg of sand instead of 50 µL of small zirconia beads, using no liquid nitrogen, running the lyser (called the "bead-beater" in the kit instructions) for 25 seconds, incubating at 65°C for 40 minutes, and incubating the buffer AE and DNA mixture for 10 minutes. Once I had extracted DNA from the leaf tissue, I amplified the matK and rbcL regions using PCR. I performed the standard procedure using the primers Forward TX2 and Reverse TX4 to amplify matK, and the primers Forward rbcL_1 and Reverse rbcL_635 to amplify rbcL. I ran a program called Immolase 50°C on the thermocycler (Corbett Life Science, PalmCycler) for 2.5 hours. I then loaded the product into wells on gels and ran electrophoresis with indicator and GelRed at 300 W for approximately 10 minutes, periodically checking the movement of the bands. I then transferred the plates to a UV hood and visualized them. After noting which trials were successful, I collected the PCR products for sequencing and sent them to the Genetic Sequencing Lab on the UC Berkeley campus; the sequenced products were returned to me as data files.
Phylogeny construction
To create the Australian conifer phylogeny, I took the DNA sequences acquired from the processes outlined above and used the default settings of the MUSCLE alignment in Geneious (http://www.geneious.com/) to align the sequences for each gene region, matK and rbcL. Once I had aligned the sequences, I deleted any unreliable end pieces that were unlikely to represent the rbcL or matK gene regions. I chose one matK and one rbcL sequence to represent each species using the following criteria,
known as taxon priming: the longest sequence, a sequence that withstands a cluster analysis, and Australian origin. I then created a concatenated matrix including rbcL and matK, a total of 2,783 base pairs, and used the default parameters in GARLI (Genetic Algorithm for Rapid Likelihood Inference) version 0.951 (https://code.google.com/p/garli/) to create a maximum likelihood phylogeny (Fig. 2). I then compared the relationships in the phylogeny I created with previously published conifer phylogenies (e.g., 19).
Biodiverse: Spatial Location and Phylogeny
Biodiverse v0.17 (http://code.google.com/p/biodiverse/) is a program that uses a phylogeny and specimen-level spatial data to create a map of the occurrence of species across a region and to calculate SR, WE, and the phylogenetic metrics PD and PE. SR and WE require only spatial data, whereas PD and PE require both spatial and phylogenetic data. I loaded into Biodiverse the cleaned spatial data I acquired from AVH, which displayed a map of each species' occurrence (Fig. 1), and the phylogeny I created from the gene regions matK and rbcL. First, I calculated species richness, defined as the number of species in an area (here represented by 50,000 m² grid cells). Second, I calculated PD (Eq. 1; 21), which is computed by summing the branch lengths of the phylogenetic subtree connecting the species in a particular grid cell. Third, I calculated PE, defined as PD weighted by the inverse of the branch lengths' ranges. PE incorporates the spatial range of the phylogenetic branch lengths down to the root of the phylogeny (10). For example, if a widely distributed taxon is sister to a narrowly distributed (highly endemic) species, the highly endemic species will be down-weighted by its sister and the PE score of the pair will be lowered.
\[
  \mathrm{PD} \;=\; \sum_{c \in C} L_c \qquad \text{(Eq. 1)}
\]
where \(L_c\) is the length of branch \(c\) and \(C\) is the set of branches in the minimum spanning path connecting the species (Rosauer et al. 2009).
\[
  \mathrm{PE} \;=\; \sum_{c \in C} \frac{L_c}{R_c} \qquad \text{(Eq. 2)}
\]
where the variables are defined as above and \(R_c\) is the clade range, the combined ranges of the descendant taxa of branch \(c\), so that overlapping areas are considered only once (Rosauer et al. 2009).
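As a concrete illustration of Eq. 1 and Eq. 2, the sketch below computes PD and PE for toy grid cells using a hand-coded tree. The three-taxon tree, branch lengths, and occurrence table are invented for illustration only; in the study itself these calculations are performed by Biodiverse on the full conifer phylogeny and the AVH records.

```python
# Minimal sketch of the PD (Eq. 1) and PE (Eq. 2) calculations, using an
# invented three-taxon tree and occurrence table (not the study's data).
# Each branch is stored as child: (parent, branch length); ranges are
# counted in grid cells, following the definitions in the text.

BRANCHES = {
    "sp_A": ("n1", 1.0),
    "sp_B": ("n1", 1.5),
    "sp_C": ("root", 3.0),
    "n1":   ("root", 2.0),
}

# grid cell -> set of species recorded there (invented occurrences)
CELLS = {
    "cell_1": {"sp_A", "sp_B"},
    "cell_2": {"sp_A", "sp_C"},
    "cell_3": {"sp_C"},
}

def branches_to_root(taxon):
    """All branches on the path from a terminal taxon up to the root."""
    path, node = [], taxon
    while node in BRANCHES:
        path.append(node)
        node = BRANCHES[node][0]
    return path

def branch_range(branch):
    """Number of grid cells occupied by any descendant of this branch (R_c)."""
    return sum(
        1 for taxa in CELLS.values()
        if any(branch in branches_to_root(t) for t in taxa)
    )

def pd_and_pe(cell):
    taxa = CELLS[cell]
    # C: the set of branches connecting the cell's taxa down to the root
    C = {b for t in taxa for b in branches_to_root(t)}
    pd = sum(BRANCHES[b][1] for b in C)                      # Eq. 1
    pe = sum(BRANCHES[b][1] / branch_range(b) for b in C)    # Eq. 2
    return pd, pe

for cell in CELLS:
    print(cell, pd_and_pe(cell))
```

In this toy example, the widespread branches contribute less to PE than to PD because each branch length is divided by the number of cells its descendants occupy, which is exactly the down-weighting of widespread clades described above.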
Figure 2. Maximum likelihood phylogeny of endemic Australian conifers. Derived from matK and rbcL gene regions and calculated using GARLI v. 0.951.
To discern any correlation between SR and PD, I created a scatterplot of SR, expressed as a percent of the total number of species, against PD. I did the same for WE versus PE to graphically display any correlation between the two metrics. I also calculated the correlation coefficient (r²) for each relationship.
Spatial analysis
To determine whether areas of significant PD and PE coincided with protected regions in Australia, I overlaid map layers of Natural Parks and Reserves in Australia using ArcMap v10.1 (ESRI). I gathered the data layers from the Atlas of Living Australia (http://spatial.ala.org.au/), loaded them into ArcMap, and projected them into GDA94 / Australian Albers if they were not already in that projection. I then projected the Biodiverse-exported ASCII grid files into the same projection so that they would display properly. I clipped any layer that extended beyond the continent of Australia. I then symbolized the data to display the Australian protected regions shapefile (CAPAD 2010) and overlaid an outline of Australia in the GDA94 / Australian Albers projection to display the continent's bounds. I used this visualization process to discern any correlations or patterns in the data layers over the randomization maps of RPD, RPE, and super-endemism.
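The reprojection step described above (in both the data-cleaning and ArcMap workflows) can be sketched in Python rather than ArcMap, assuming the GDA94 / Australian Albers projection corresponds to EPSG:3577 and the AVH coordinates are geographic GDA94 (EPSG:4283); the sample record below is invented.

```python
# Sketch of converting an AVH latitude/longitude record into metre
# coordinates in GDA94 / Australian Albers, mirroring the projection step
# described in the text. Not the workflow actually used in the paper.
from pyproj import Transformer

to_albers = Transformer.from_crs("EPSG:4283", "EPSG:3577", always_xy=True)

# (longitude, latitude) for a hypothetical specimen record in Tasmania
lon, lat = 147.32, -42.88
x, y = to_albers.transform(lon, lat)
print(f"Albers easting/northing (m): {x:.0f}, {y:.0f}")
```

An equal-area projection such as Australian Albers keeps the 50,000 m² grid cells comparable in area across the continent, which is why the records are projected before gridding.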
Results
Study organisms and study site
The phylogeny I created is fully resolved and provides a robust hypothesis of the evolutionary relationships among Australian endemic clades. However, it probably includes one incorrect relationship: Microstrobos niphophilus most likely belongs in the same clade as Microstrobos fitzgeraldii (19). For the purposes of these calculations it makes little difference, because both PD and PE take branch lengths into account and the erroneous branch is very short. Fig. 2 shows the maximum likelihood phylogenetic tree for the 39 conifer species, rooted on the outgroup, Ginkgo biloba.
Biodiverse: Geographic Location and Phylogeny
I found that species richness was highest in Tasmania and on the Northeast coast of Australia (Fig. 3a). PD was more scattered than SR, but was also clumped in Tasmania and on the East Coast (Fig. 3b). SR was fairly strongly correlated with PD (r² = 0.75), where r² represents the residual value, as the data were not normally distributed.
Figure 3. (A) Species richness of endemic conifer species in Australia; red regions represent high species diversity. (B) Phylogenetic diversity (PD) of conifer species in Australia; the dark red regions, primarily on the East Coast and Tasmania, represent high levels of PD. The genus Callitris was widely distributed, especially on the West coast of Australia. (C) Scatterplot of standardized species richness (%) against phylogenetic diversity weighted by branch lengths, with a best fit line (r² = 0.75). Note that the data were not normally distributed, so r² is a residual value; thus no significance value is included.
Figure 4. (A) Weighted endemism (WE) of endemic Australian conifers. (B) Phylogenetic endemism (PE) of Australian conifers; dark regions represent high PE (PE > 0.035). The grid cell labeled A (A = Callitris endlicheri, Callitris muelleri, Microstrobos fitzgeraldii, Podocarpus spinulosus) contains the genera Callitris, Microstrobos, and Podocarpus. PE is also relatively high on the Northeast coast. (C) Scatterplot of phylogenetic endemism against weighted endemism, with a best fit line (r² = 0.87). PE is overall strongly correlated with WE, but this correlation does not hold for some values of high PE or high WE. Note that the data were not normally distributed, so r² is a residual value; thus no significance value is included.
Tasmania has an especially high PD score and contains distantly related genera: Athrotaxis, Diselma, Lagarostrobos, Microstrobos, Phyllocladus, and Podocarpus. WE was concentrated primarily in Tasmania and along the Northeast coast (Fig. 4a). PE was not as high in Tasmania, but was also concentrated along the Northeast coast (Fig. 4b). WE was highly correlated with PE but underestimated some regions of high PE. For example, the grid cell that contained the highest PE value, 0.0361, was underestimated by WE (Fig. 4c). This grid cell contained Callitris, Microstrobos, and Podocarpus, which are not sister terminal taxa on the phylogeny.
ArcGIS Analysis
After calculating PD and PE with randomizations and visualizing them with the CAPAD 2010 protected regions, I found that the majority of regions with significantly high PD (p > 0.95) were protected (Fig. 5). The cells that contained significantly high PD values are concentrated in areas that are heavily protected, such as the far Northeast and Tasmania.
Figure 5. Randomization of PD as a proportion of branch length, overlaid with Australian protected regions (CAPAD 2010). Significantly low PD (red) is scattered throughout the south and on the Northeast coast. There are fewer regions of significantly high PD; dark blue cells indicate regions significantly high in PD compared to a random distribution, and they are concentrated primarily on the Northeast coast.
Discussion
Accurately and sufficiently quantifying biodiversity is essential for conservation efforts. In this study, I explored biodiversity metrics that quantify the spatial distribution of the evolutionary history of Australian endemic conifer species, in comparison to traditional metrics that do not take evolutionary history into account. SR and PD were largely correlated, with some exceptions where SR did not predict PD values accurately. WE and PE were also largely correlated, but that correlation broke down for some high values of WE or PE. The spatial and phylogenetic analysis showed that most regions, whether high or low in PD and PE, are currently protected as reserves under Australian law.
Phylogenetic Metric Performance
Regional trends in species richness and endemism vs. PD and PE
As a whole, the continent of Australia had relatively low PD values compared to a random distribution, which could be due to biogeographic barriers to dispersal and diversification (12). SR and PD were
largely correlated, which one would expect (20), given that the more terminal taxa are sampled from a specific grid cell, the more of the phylogenetic tree is sampled. However, some regions had more or less PD than predicted by their SR (Fig. 3c), and the correlation was weak for intermediate levels of species richness and PD (Fig. 3c). In most cases, SR underpredicted PD, meaning that there were more distantly related taxa in a grid cell than expected given the SR count. Regions high in PD, which are characterized by many distantly related taxa, were concentrated on the Southeast coast of Australia and throughout Tasmania (Fig. 3b). These regions have been found to have a high diversity of conifers in previous studies (22). Fossils of 33 species of conifers have been found in northwestern Tasmania, which indicates high conifer diversity relative to the size of the region (22). Tasmania and southeastern Australia experienced a decline in conifer diversity after the early Oligocene (23), and other evidence suggests that most of the endemic genera, Athrotaxis, Lagarostrobos and Microcachrys, represent the only surviving members of lineages extending back to at least the earliest Cretaceous (24). These genera were also more geographically widespread in the past (23). Athrotaxis, Lagarostrobos, Microcachrys, Diselma and Phyllocladus are now largely restricted geographically to Tasmania. These findings suggest that these clades' ranges may be restricted by an ecological factor that has changed through time. Regions low in PD, which are characterized by many closely related species, were more prevalent and were concentrated
inland of the coast and consisted primarily of the genus Callitris, which is widespread throughout Australia (Fig. 3b). Regions low in PD relative to their species richness estimate may be regions of isolated, large radiations (25). WE and PE were strongly correlated (r² = 0.87); however, WE underestimated the highest values of PE (Fig. 4c). WE both overpredicted and underpredicted high PE scores (Fig. 4c), because closely related taxa may affect the resulting PE if they contribute to the range of a clade with taxa in the study area (10). There were few regions high in PE, and they were concentrated in Tasmania and on the Northeast coast, potentially due to the aforementioned endemic history of Tasmania.
Limitations and Future Directions
A key limitation of my study is the spatial scale at which I performed the analyses. Ecological and evolutionary patterns may differ at different spatial scales, so it is important to re-analyze the data at different spatial scales, for instance 100,000 m² or 25,000 m² grid cells, to check for consistency among scales. For this study, we chose 50,000 m² grid cells because they have been shown to display subtleties of the data and to roughly approximate community sizes (16, 17). Another spatial limitation stems from my use of the CAPAD 2010 shapefile in its entirety. This shapefile included all parklands, not only major reserves or conifer-specific reserves, and the number of vectors in this data layer made it difficult to interpret how effectively regions of high PD and PE are being conserved. Additionally, I was unable to answer one of my original research questions, which was to identify and map biogeographic regions that could provide potential environmental explanations of the PD/PE trends; I plan to continue this analysis and overlay these factors in the future. Phylogenetically, my study is limited in its robustness because I focused on a subset of the species inhabiting the continent; this group is monophyletic relative to the outgroup Ginkgo biloba, but polyphyletic groups may be nested within these lineages. The phylogeny used for this study probably contains an error, a matK sequence for Microstrobos niphophilus that needs to be re-sequenced. Due to time constraints, I was unable to re-sequence it in time for this paper. It does not, however, affect the calculation of PD and PE, as all of the branch lengths joining sister taxa that share a spatial grid cell are incorporated (10).
Broader Implications and Conclusions
Examining the intersection of the evolutionary history and spatial distribution of conifer species is a key method for properly informing conservation policy. Historically, approaches to biodiversity conservation have applied different concepts: some have been more concerned with conserving rare species, while others have focused on key habitats. PD and PE are metrics that provide a way to account for both geography and evolutionary rarity. They are not in disagreement with SR and WE; instead, they incorporate these metrics and provide more insight into the evolutionary and ecological processes that have occurred through time.
Acknowledgements
This project would not have been possible without the support of Patina Mendez and her dedication, thorough editing, and constant positivity. Dr. Nathalie Nagalingum, Dr. Brent Mishler, Sonia Nosratinia and Nunzio Knerr were all integral to this project. They deserve so much more than a "thank you."
References
1. Pimm, S. L., G. J. Russell, J. L. Gittleman, and T. M. Brooks. 1995. The future of biodiversity. Science 269: 347-350. 2. Reaka-Kudla, M. L. 1997. Biodiversity II: understanding and protecting our biological resources. National Academy of Sciences, USA. 3. Lombard, A. T., R. M. Cowling, R. L. Pressey, and A. G. Rebelo. 2003. Effectiveness of land classes as surrogates for species in conservation planning for the Cape Floristic Region. Biological Conservation 112: 45-62. 4. Gaston, K. J., and R. A. Fuller. 2009. The size of species' geographic ranges. Journal of Applied Ecology 46: 1-9. 5. Crozier, R. H. 1997. Preserving the Information Content of Species: Genetic Diversity, Phylogeny, and Conservation Worth. Annual Review of Ecology and Systematics 28: 243-268.
21. Rosauer, D., S. W. Laffan, M. D. Crisp, S. C. Donnellan, and L. G. Cook. 2009. Phylogenetic endemism: a new approach for identifying geographical concentrations of evolutionary history. Molecular Ecology 18: 4061-4072. 22. Jordan, G. J., and R. S. Hill. 2002. Cenozoic plant macrofossil sites of Tasmania. Papers and Proceedings of the Royal Society of Tasmania 136: 127-139. 23. Hill, R. S., and T. J. Brodribb. 1999. Turner Review No. 2. Southern Conifers in Time and Space. Australian Journal of Botany 47: 639-696. 24. Quinn, C. J., R. A. Price, and P. A. Gadek. 2002. Familial concepts and relationships in the conifers based on rbcL and matK sequence comparisons. Kew Bulletin 57: 513-531. 25. Fritz, S. A., and C. Rahbek. 2012. Global patterns of amphibian phylogenetic diversity. Journal of Biogeography 39: 1373-1382.
6. Nixon, K. C., and Q. D. Wheeler. 1990. An amplification of the phylogenetic species concept. Cladistics 6: 211-223. 7. Mishler, B. D. 2009. Three Centuries of Paradigm Changes in Biological Classification: Is the End in Sight? Taxon 58: 61-67. 8. Chao, A. 2005. Species richness estimation. Encyclopedia of Statistical Sciences. 7909-7916. 9. Crisp, M. D., S. Laffan, H. P. Linder, and A. Monro. 2001. Endemism in the Australian flora. Journal of Biogeography 29: 183-198. 10. Rosauer, D. F., and S. W. Laffan. 2008. Linking phylogenetic trees, taxonomy and geography to map phylogeography using Biodiverse. Taxonomic Data Working Group 2008. Perth, Australia. 11. Mooers, A. O. 2007. The diversity of biodiversity. Nature 445: 717-718. 12. Faith, D. P., and A. M. Baker. 2006. Phylogenetic diversity (PD) and biodiversity conservation: some bioinformatics challenges. Evolutionary Bioinformatics 2: 70-77. 13. Meiri, S., and G. Mace. 2009. New taxonomy and the origin of species. Public Library of Science Biology 5: 1385-1387. 14. Davies, J. T., and L. B. Buckley. 2011. Phylogenetic diversity as a window into the evolutionary and biogeographic histories of present-day richness gradients for mammals. The Royal Society 366: 2414-2425. 15. Mace, G. M., J. L. Gittleman, and A. Purvis. 2003. Preserving the tree of life. Science 300: 1707-1709. 16. Nagalingum, N. 2013. In prep. 17. Mishler, B. D., N. Knerr, C. E. González-Orozco, A. H. Thornhill, S. W. Laffan, and J. T. Miller. (in review) Phylogenetic approaches to biodiversity, endemism, and conservation. Nature Communications. 18. Ingelby, S. 2009. Endemism in Australian mammals. Australian Museum. http://australianmuseum.net.au/Endemism-in-Australian-mammals (Version 5/7/2012). 19. Leslie, A. B., J. M. Beaulieu, H. S. Rai, P. R. Crane, M. J. Donoghue, and S. Mathews. 2012. Hemisphere-scale differences in conifer evolutionary dynamics. Proceedings of the National Academy of Sciences of the United States of America 109: 16217-16221. 20. Forest, F., R. Grenyer, M. Rouget, J. Davies, R. M. Cowling, D. P. Faith, A. Balmford, J. C. Manning, S. Proches, M. van der Bank, G. Reeves, T. A. J. Hedderson, and V. Savolainen. 2007. Preserving the evolutionary potential of floras in biodiversity hotspots. Nature 445: 757-759.
APPENDIX A: LIST OF INDIGENOUS CONIFER SPECIES
List 1: Indigenous conifer species list, including the outgroup used for this study, Ginkgo biloba: Actinostrobus arenarius, Callitris baileyi, Callitris columellaris, Callitris monticola, Callitris oblonga, Callitris roei, Microstrobos niphophilus, Agathis atropurpurea, Agathis microstachya, Agathis robusta, Araucaria bidwillii, Araucaria cunninghamii, Microcachrys tetragona, Actinostrobus acuminatus, Actinostrobus pyramidalis, Athrotaxis cupressoides, Athrotaxis selaginoides, Callitris canescens, Callitris drummondii, Callitris endlicheri, Callitris macleayana, Callitris muelleri, Callitris preissii, Callitris rhomboidea, Callitris verrucosa, Diselma archeri, Lagarostrobos franklinii, Microstrobos fitzgeraldii, Phyllocladus aspleniifolius, Podocarpus dispermus, Podocarpus drouynianus, Podocarpus elatus, Podocarpus grayae, Podocarpus lawrencei, Podocarpus smithii, Podocarpus spinulosus, Prumnopitys ladei, Sundacarpus amarus, Wollemia nobilis, and Ginkgo biloba.
APPENDIX B: GENBANK ACCESSION NUMBERS
Table 1: GenBank accession numbers for the rbcL and matK gene regions.

Species                        rbcL        matK
Actinostrobus arenarius        JF725937    JF725837
Actinostrobus pyramidalis      EU161450    JF725831
Agathis atropurpurea           AF502087    EU025977
Agathis microstachya           AF508920    EU025978
Agathis robusta                EF490509    AF456371
Araucaria bidwillii            AM920227    EU025974
Araucaria cunninghamii         EF490510    EU025975
Athrotaxis cupressoides        JF725921    JF725821
Athrotaxis selaginoides        JF725938    JF725838
Callitris canescens            JF725945    JF725845
Callitris drummondii           JF725939    JF725839
Callitris endlicheri           JF725932    AY988331
Callitris macleayana           JF725933    JF725833
Callitris muelleri             JF725924    JF725824
Callitris preissii             JF725940    JF725840
Callitris rhomboidea           L12537      JF725825
Callitris verrucosa            JF725942    JF725842
Diselma archeri                JF725926    JF725826
Lagarostrobos franklinii       HM593609    EU161486
Microcachrys tetragona         HM593611    EU161483
Microstrobos fitzgeraldii      AF249646    EU161484
Microstrobos niphophilus       AF249647    not listed
Phyllocladus aspleniifolius    AF249651    AY442147
Podocarpus dispermus           JF969685    HM593741
Podocarpus drouynianus         HM593639    HM593742
Podocarpus elatus              HM593641    HM593745
Podocarpus grayae              AF249608    HM593750
Podocarpus lawrencei           HM593651    HM593755
Podocarpus smithii             HM593675    HM593779
Podocarpus spinulosus          AF249630    HM593780
Prumnopitys ladei              HM593620    HM593723
Sundacarpus amarus             AF249663    HM593788
Wollemia nobilis               EF490508    AF456377